Human-AI Collaboration: Cogment’s Blueprint for Efficient & Steerable AI

Smart cities. Autonomous vehicles. Digital assistance with vital tasks like air traffic control. All around us are exciting technologies that signal the growing power of human-machine interactions. Indeed, in some respects, the advent of AGI (artificial general intelligence) seems closer than ever. In a hypothetical world with AGI, artificial and human neural networks would be similarly powerful and, in some scenarios, merge with one another.

Many AI practitioners caution that AGI is still far off, if it is even possible. A 2020 report by Federico Berruti, Pieter Nel, and Rob Whiteman of McKinsey & Company notes that many leading researchers “argue that we are decades away from realizing AGI, and some even predict that we won’t see AGI this century.” The same report, however, advises decision-makers to start preparing for the world-altering effects of more powerful machine intelligence, no matter how distant (or how likely) full AGI might be.

In order to prepare, we need to take stock of where we are. At present, even the most powerful human-machine interactions to date involve machines serving as assistants or apprentices to humans, rather than behaving like full partners. In fact, we would argue that our current relationship with AI is best described as cooperation, rather than collaboration. Cooperation hinges on the division of labor, with the AI agent taking on automated tasks. Collaboration, in contrast, would involve humans and AI agents formulating and pursuing shared goals. We’re not there yet.

Our existing cooperation with AI assistants can, of course, be enormously helpful. The process of achieving such cooperation, however, has introduced two significant challenges:

First, there are the huge datasets and computational resources often required to train AI agents. In situations where data is scarce, it is vital that we have access to more efficient approaches, including ones that put human contributions at the center of the process. For example, humans can supplement the training of AI agents through imitation learning, a technique that has already shown promising results in robotics; a minimal sketch of the idea follows these two challenges. We need to continue to explore a range of methods to ensure that AI training is as efficient and human-friendly as possible.

Second, there are the risks of trusting AI assistants to operate independently in high-stakes settings like air traffic control, satellite operations, and emergency dispatch, even though many baseline tasks in such environments can be fully automated. A recent policy brief from Georgetown University’s Center for Security and Emerging Technology by Helen Toner and Zachary Arnold investigates why AI accidents are so dangerous in these scenarios. Toner and Arnold write that “[e]ven with simpler systems, it has proven difficult to design user interfaces that allow humans to effectively monitor and intervene” in the workings of AI (p. 13). In order for AI to be safer, in other words, we need (among other things) to make it steerable and transparent.
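To make the first challenge concrete, here is a minimal behavioral-cloning sketch of the imitation-learning idea mentioned above. It is framework-agnostic and purely illustrative: the demonstration data is synthetic, and none of the names below come from Cogment or any other library.

```python
# Minimal behavioral-cloning sketch: a policy is fit directly to
# (observation, action) pairs recorded from a human demonstrator.
# All names and data here are illustrative, not Cogment-specific code.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical human demonstrations: 4-dimensional observations and the
# discrete action (0 or 1) the human chose in each state.
demo_obs = rng.normal(size=(500, 4))
demo_act = (demo_obs[:, 0] + demo_obs[:, 1] > 0).astype(int)

# Fit a simple logistic-regression policy to imitate the demonstrator,
# using plain gradient descent.
w = np.zeros(4)
learning_rate = 0.1
for _ in range(200):
    probs = 1.0 / (1.0 + np.exp(-(demo_obs @ w)))
    grad = demo_obs.T @ (probs - demo_act) / len(demo_act)
    w -= learning_rate * grad

def imitation_policy(obs: np.ndarray) -> int:
    """Return the action the cloned policy would take for an observation."""
    return int(obs @ w > 0)

# The cloned policy can now act on new observations, or serve as a warm
# start for further reinforcement learning.
print(imitation_policy(np.array([0.5, 0.5, -0.2, 0.1])))  # expected: 1
```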

We at AI Redefined (AIR) are driven by a desire to move human-AI interactions toward a safer and more efficient collaboration model. We think that humans and AI agents should be able to explore context, identify a shared worldview, and work together to bring that worldview into being. We took an important step in this direction with the June 2021 launch of the 1.0 version of Cogment, an open-source framework that enables humans to learn and explore contextually alongside artificial agents of various kinds. We describe Cogment’s key features in our white paper as well as in its executive summary.

Here are five ways that Cogment can make human-AI interactions safer, more efficient, and more collaborative:

Accelerating AI training through human inputs. We are already seeing promising results in which humans speed up the training of AI agents by contributing their expertise or by providing context the AI does not have. Interestingly, we are also seeing evidence that AI agents can dynamically adapt to human inputs in real time, or even compensate for human learning difficulties.
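As a hedged illustration of this idea, the sketch below blends a placeholder human feedback signal into a standard tabular Q-learning loop. The toy environment, the feedback heuristic, and every name here are invented for the example; they are not part of Cogment’s API.

```python
# Human-in-the-loop reward shaping sketch: a human evaluator's rating is
# blended with the environment's own reward during Q-learning.
import random

def environment_step(state, action):
    """Toy stand-in for a real environment transition."""
    next_state = state + (1 if action == 1 else -1)
    reward = 1.0 if next_state == 5 else 0.0
    done = abs(next_state) >= 5
    return next_state, reward, done

def human_feedback(state, action):
    """Placeholder for a human rating the agent's last action.
    In practice this signal would come from an operator's interface."""
    return 0.5 if action == 1 else -0.5

q = {}  # tabular Q-values keyed by (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(200):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
        next_state, env_reward, done = environment_step(state, action)
        # Human feedback is blended with the environment reward, steering
        # the agent toward preferred behaviour and speeding up learning
        # when environment rewards are sparse.
        shaped = env_reward + human_feedback(state, action)
        best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (shaped + gamma * best_next - old)
        state = next_state
```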

Swapping implementations. In Cogment, humans, AI agents, non-AI agents, and even aggregates of AI agents can all be considered “actors” who can step into different “roles” at any point during training or live operation. We use the term “implementation” to describe the technical process of carrying out a role. Cogment allows humans and other agents to swap implementations in a transparent, easy manner, and in real time. In other words, a task started by an AI agent can be finished by a human, and vice versa. This flexibility allows decision-makers to experiment with different problem-solving strategies to find the fastest and safest options.
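The sketch below is a deliberately simplified, hypothetical rendering of the actor/role/implementation idea: two interchangeable implementations fill the same role, and control can pass between them at any step. The class and function names are ours, not the Cogment SDK’s.

```python
# Schematic sketch of implementation swapping: several interchangeable
# implementations can fill the same role, and the active one can be
# changed mid-task. These names are simplified illustrations only.
from typing import Callable, Dict

Observation = Dict[str, float]
Action = str

def trained_agent_impl(obs: Observation) -> Action:
    """An AI implementation of the 'pilot' role."""
    return "hold" if obs.get("risk", 0.0) > 0.5 else "proceed"

def human_impl(obs: Observation) -> Action:
    """A human implementation of the same role (console prompt here)."""
    return input(f"Observation {obs} - enter action: ").strip()

class Role:
    """A role whose implementation can be swapped at any step."""
    def __init__(self, impl: Callable[[Observation], Action]):
        self._impl = impl

    def swap(self, impl: Callable[[Observation], Action]) -> None:
        self._impl = impl

    def act(self, obs: Observation) -> Action:
        return self._impl(obs)

# A task started by an AI agent can be handed over to a human, and back.
pilot = Role(trained_agent_impl)
print(pilot.act({"risk": 0.2}))   # the AI decides: "proceed"
pilot.swap(human_impl)            # hand control to a human operator
# pilot.act({"risk": 0.9})        # now a human would supply the action
```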

Offering a high degree of flexibility, which can lead to more efficient (as well as faster) AI training. In fact, we consider Cogment’s emphasis on flexibility to be one of its key competitive advantages. The wealth of options in Cogment’s training scenarios can lead to more thoughtful use of both human and machine effort, addressing an ongoing conversation in the AI community about how to make the best use of scarce human attention. For instance, human operators sometimes have to work alongside an untrained machine agent for too long, which is a poor use of their time. Cogment’s implementation swapping (described above) can solve this problem by handing the human “role” at any phase of training to an AI agent that mimics part of the human’s behavior. This frees up the human to work on other things and to step back into the training process when it makes the most sense to do so. Cogment’s flexibility also supports benchmarking multiple training strategies (human-in-the-loop learning, pure reinforcement learning, and so on) against one another, which can help practitioners select the best method for their needs. Cogment even allows different training techniques to be mixed.
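As a rough sketch of what such benchmarking might look like, the harness below runs several stand-in training strategies over the same seeds and compares their average cost. The strategy functions and their numbers are entirely made up for illustration; in practice they would wrap real training runs.

```python
# Illustrative benchmarking harness: compare how many episodes each
# training strategy needs to reach a target score. The strategies here
# are placeholders, not real training code.
import random
from typing import Callable, Dict, Iterable

def pure_rl(seed: int) -> int:
    """Placeholder: episodes a pure-RL strategy needs to reach the target."""
    random.seed(seed)
    return random.randint(800, 1200)

def human_in_the_loop(seed: int) -> int:
    """Placeholder: assumes human feedback roughly halves training time."""
    random.seed(seed)
    return random.randint(350, 650)

def benchmark(strategies: Dict[str, Callable[[int], int]],
              seeds: Iterable[int] = range(5)) -> Dict[str, float]:
    """Run each strategy over the same seeds and report the mean cost."""
    results = {}
    for name, strategy in strategies.items():
        runs = [strategy(seed) for seed in seeds]
        results[name] = sum(runs) / len(runs)
    return results

print(benchmark({"pure RL": pure_rl, "human-in-the-loop RL": human_in_the_loop}))
```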

Providing an open-source framework as an invitation to researchers around the world to innovate with us. Key decision-makers are aware of the vital role that open-source software (OSS) has to play in future AI developments, as outlined in Alex Engler’s recent analysis at the Brookings Institution. Moreover, the 2021 edition of Nathan Benaich and Ian Hogarth’s State of AI Report identifies open-source AI as an element of emerging governance paradigms. Cogment is well-positioned to join other open-source AI projects at the table.

Addressing the regulatory and compliance contexts in which AI operates. Ongoing conversations about AI regulation in many jurisdictions make it important for AI applications to take into account complex stakeholder considerations. Cogment helps meet this need with its capacity to orchestrate the deployment of rule-based agents alongside humans and other digital agents, which can be helpful in situations like the monitoring of no-fly zones. Cogment also allows humans to set the rules of AI engagement, specifying which tasks require AI agents to request prior approval and which ones can be undertaken by AI independently.
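A hedged sketch of what such rules of engagement could look like in code: an approval gate that lets an AI actor execute routine actions autonomously while routing sensitive ones to a human for prior approval. The action names, rule sets, and functions below are invented for illustration and do not reflect Cogment’s actual API.

```python
# Minimal sketch of "rules of engagement" for an AI actor: some actions may
# be taken autonomously, others require prior human approval. The rule sets
# and function names are illustrative only.
AUTONOMOUS_ACTIONS = {"log_position", "minor_course_correction"}
APPROVAL_REQUIRED = {"enter_restricted_zone", "override_flight_plan"}

def request_human_approval(action: str, context: dict) -> bool:
    """Placeholder for an approval request routed to a human operator."""
    print(f"Approval requested for '{action}' with context {context}")
    return False  # deny by default until a human responds

def execute(action: str, context: dict) -> str:
    if action in AUTONOMOUS_ACTIONS:
        return f"executed '{action}' autonomously"
    if action in APPROVAL_REQUIRED:
        if request_human_approval(action, context):
            return f"executed '{action}' with human approval"
        return f"'{action}' blocked pending human approval"
    return f"'{action}' is not permitted by the rules of engagement"

print(execute("minor_course_correction", {"zone": "open airspace"}))
print(execute("enter_restricted_zone", {"zone": "no-fly"}))
```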

We’re not the only ones who believe in collaboration as the next step in AI’s evolution. AI100, Stanford University’s One Hundred Year Study on Artificial Intelligence, puts it this way in its September 2021 report on the current state of AI:

“AI researchers should also recognize that complete autonomy is not the eventual goal for AI systems. Our strength as a species comes from our ability to work together and accomplish more than any of us could alone. AI needs to be incorporated into that community-wide system, with clear lines of communication between human and automated decision-makers. At the end of the day, the success of the field will be measured by how it has empowered all people, not by how efficiently machines devalue the very people we are trying to help.”

Source: “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. http://ai100.stanford.edu/2021-report. Accessed: October 22, 2021.

We designed Cogment to help nurture this kind of successful human-AI collaboration. We’re happy for our work to be a part of these important conversations.
