Trustworthy Multiagency

Friday, March 6, 2020 - 10:15am to 11:15am
Storey Innovation Center (Room 2277)


Intelligent decision making is at the heart of Artificial Intelligence (AI). Many real-world domains, such as autonomous vehicles, delivery robots, and cyber security, involve multiple AI decision makers, or agents, that cooperate in a distributed manner: each agent makes decisions based on its local information, often with limited communication with others. This distributed nature makes it challenging to design efficient and reliable multiagency, as issues like coordination failure, unsafe interactions, and resource misallocation can easily arise.

A promising approach to tackling these challenges is to explicitly build dependencies among cooperating agents, where one agent can be trusted to facilitate the execution of another. In this talk, I will present a framework that achieves trustworthy dependency by borrowing the notion of social commitments. Intuitively, a commitment regularizes an agent's behavior so that it can be well anticipated and exploited by another. This talk will formalize this intuition and discuss how multiagency commitments can be efficiently identified and faithfully fulfilled. Finally, I will conclude with my future agenda, covering the verification of trustworthy multiagency, the discovery of safety-related dependencies, and the interpretability-performance tradeoff in multiagency.


Qi Zhang is a final-year Ph.D. student at the University of Michigan, advised by Edmund Durfee and Satinder Singh. His research interest is in artificial intelligence, with a focus on planning under uncertainty, reinforcement learning, and multiagent coordination. His long-term goal is to build safe, reliable, and trustworthy AI systems that retain the power and flexibility to handle complex, diverse contexts.