World Summit AI 2022: Highlights from our Fireside Chat

Last week, we were pleased to tell you about the results of the ALA-Cogment Challenge. The Challenge recognized new uses of our Cogment platform in papers at the ALA (Adaptive and Learning Agents) Workshop at AAMAS 2022. We’re excited to continue these research collaborations.

More good news: we have had a chance to speak at some major AI and deep tech events recently. On May 5, Dorian Kieken, AIR’s President, and Dr. Matthew E. Taylor of the University of Alberta and the Alberta Machine Intelligence Institute (Amii) spoke in a fireside chat at World Summit AI Americas in Montreal.

Kieken and Taylor discussed how they are tackling the challenge of human-AI collaboration from their respective vantage points in industry and academia. Here are some highlights from their conversation (transcript edited in places for clarity).

Dr. Matthew E. Taylor (left) and Dorian Kieken (right) discuss human-AI teaming onstage at World Summit AI Americas in Montreal on May 5.

How Kieken and Taylor became interested in human-AI collaboration

Kieken: “One of the big challenges [when we founded AIR in 2017] was that solutions for AI training were pretty decorrelated from the human. To take two examples: supervised learning is essentially about learning from large patterns of data without human involvement, except maybe the labeling process, which is quite limited and indirect. Then you have reinforcement learning, a fantastic method for learning from trial and error, and that process is done at superhuman speed. If you’re training an AI to play chess with RL, you’ll be doing 6,000 years of human chess in just a few weeks. Unless you personally know Superman or Flash, it’s quite difficult for a human to train alongside an AI. So, there are inherent flaws in how we’re approaching AI training if we want to build solutions that augment humans. That was the genesis of our company.”

Taylor: “With reinforcement learning, we’ve been seeing all these great successes in StarCraft and Dota and Go, and they use millions of years of compute. What if we wanted to use less compute? What if we didn’t want to use a simulator and instead wanted to work in the real world? How could we bootstrap that learning? So I originally got into this thinking: ‘I’ve got this human who has all this information. How can I use that to help our reinforcement learning agent learn much, much faster and not have that poor initial performance?’ So it sounds like we are coming at this from slightly different angles, but converging in the same area.”

Moving from bootstrapping to true human-AI collaboration

Taylor: “What are some cases where we want the agent and the human to collaborate together, and not just the human to bootstrap the agent?”

Kieken: “Let’s take the example of the 911 responder… It’s a very interesting space, as it has some highly combinatorial elements that make it a very good space to train AI agents. [But] there are so many edge cases requiring contextual reasoning that ML is not capable of. That’s why emergency dispatch is a great paradigm of human-machine teaming… The AI can help out the human. As the human gets more overwhelmed, the AI can take more and more initiative, while at the same time it knows that there are certain things that are beyond its boundaries.”

The cutting edge of RL: explainability and a range of human involvement

Kieken: “What are the things that are exciting you on the lab side when it comes to your work on RL?”

Taylor: “One of the other things we’ve been talking a lot about is explainability. So thinking about how, once the agent learns, it can explain what it’s doing to the human, so that the human can trust it more and so that it can possibly even teach the human more. Another really important thing we’ve been thinking about is: in these human-agent teams, what are the characteristics of the human? Are they a machine learning nerd? Are they a subject matter expert? Are they a layperson? Because you probably want the agent to interact with [a human] in different ways depending on [the human’s] background. So this teaming really has to be fluid. Thinking about the context of what you’re doing (and who you’re doing it with) is a critical, underexplored area.”

Kieken: “What are the barriers to entry in doing human-machine collaboration?” 

Taylor: “The first barrier, I think, is making sure that people know that this is a technology that’s ready to be deployed. It’s not sci-fi. It’s not just in the lab basement. It’s actually useful. From a research standpoint, one of the things we’re working on, and that we need to continue working on, is how to get laypeople or subject matter experts to better interact with and teach our agents… If the complex reinforcement learning agent can explain to a subject matter expert, ‘Here’s why I did this,’ or ‘Here’s where I’m confused,’ if you can open up that black box just a little bit, you can have people who aren’t reinforcement learning experts, aren’t machine learning experts, really make a big impact on helping these agents perform tasks on their own, or on helping the agent and the human perform together.”

Take-home messages

Taylor: “The one take-home message I would like to leave is that I think human-AI teaming is going to be critical going forward, both for getting pure AI deployed and because the human-AI team can accomplish more than just humans or just AI alone.”

Kieken: “In the very short term, most machine learning systems have a problem because they just can’t do context reasoning. We’re still far from there. As long as [AI is] a big correlation machine, it’s dangerous if you don’t have a way to provide context to it. I think human-machine teaming is needed more than ever.”

Thanks to the organizers and audience!

We would like to thank Inspired Minds, the organizers of World Summit AI, for their hospitality. Thanks also to everyone who attended the talk.

Interested in chatting with us? Please feel free to connect with us via our social media channels (LinkedIn, Twitter, and Facebook), send us an email, or collaborate with us on our Discord server.
