Helen N. and Emmett H. Jones Professor in Engineering; Professor, Computer Science & Industrial and Systems Engineering Departments, University of Southern California
Abstract: With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. I will focus on the problems of public safety and security, wildlife conservation, and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. Results from our deployments around the world show concrete improvements over the state of the art. In pushing this research agenda, we believe AI can indeed play an important role in fighting social injustice and improving society.
Bio: Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering and Founding Co-Director of the Center for AI in Society at the University of Southern California. He is a fellow of AAAI and ACM, and a recipient of the IJCAI John McCarthy Award, the AAAI Robert S. Engelmore Memorial Lecture Award, the ACM/SIGAI Autonomous Agents Research Award, the INFORMS Wagner Prize, the Rist Prize of the Military Operations Research Society, the Christopher Columbus Fellowship Foundation Homeland Security Award, the International Foundation for Autonomous Agents and Multiagent Systems Influential Paper Award, and Meritorious Commendations from the US Coast Guard, the LA Airport Police, and the US Federal Air Marshals Service. Prof. Tambe has also co-founded a company based on his research, Avata Intelligence, where he serves as director of research. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University.
Professor, Department of Computer Science & Engineering, Fulton School of Engineering, Arizona State University, Tempe, Arizona
Abstract: Research in AI suffers from a longstanding ambivalence toward humans, swinging as it does between their replacement and their augmentation. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. To do this effectively, AI systems must pay more attention to the aspects of intelligence that helped humans work with each other, including emotional and social intelligence.
I will discuss the research challenges in designing such human-aware AI systems, including modeling the mental states of humans in the loop, recognizing their desires and intentions, providing proactive support, exhibiting explicable behavior, giving cogent explanations on demand, and engendering trust. I will survey the progress made so far on these challenges and highlight some promising directions. I will also touch on the additional ethical quandaries that such systems pose.
I will end by arguing that the quest for human-aware AI systems broadens the scope of the AI enterprise, necessitates and facilitates true interdisciplinary collaborations, and can go a long way toward increasing public acceptance of AI technologies.
Bio: Subbarao Kambhampati (Rao) is a professor of Computer Science at Arizona State University. He received his B.Tech. in Electrical Engineering (Electronics) from the Indian Institute of Technology, Madras (1983), and his M.S. (1985) and Ph.D. (1989) in Computer Science from the University of Maryland, College Park. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of AAAI and AAAS, and was an NSF Young Investigator. He has received multiple teaching awards, including a university last-lecture recognition. Kambhampati served as the President of AAAI and as a trustee of IJCAI. He was the program chair for IJCAI 2016, ICAPS 2013, AAAI 2005, and AIPS 2000. He serves on the board of directors of the Partnership on AI. Kambhampati's research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. URL: rakaposhi.eas.asu.edu; Twitter: @rao2z
Professor of Computer Science, The University of Texas at Austin; Director of the UT AI Laboratory
Abstract: Artificial Intelligence systems' ability to explain their conclusions is crucial to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems, such as visual question answering (VQA), the task of answering natural-language questions about images. However, most such networks are opaque black boxes with limited explanatory capability. The goal of Explainable AI (XAI) is to increase the transparency of complex AI systems such as deep networks. We have developed a novel approach to XAI and used it to build a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Crowd-sourced human evaluation of these explanations demonstrates the advantages of our approach.
Bio: Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign.
He is an author of over 170 published research papers, primarily in the areas of machine learning and natural language processing. He was President of the International Machine Learning Society from 2008 to 2011, program co-chair for AAAI 2006, general chair for HLT-EMNLP 2005, and co-chair for ICML 1990. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the Association for Computational Linguistics, and the recipient of best paper awards from AAAI-96, KDD-04, ICML-05, and ACL-07.
Faculty and Head of the Networked Systems Research Group, Max Planck Institute for Software Systems (MPI-SWS)
Abstract: As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting the risk of criminal recidivism, concerns have been raised about the fairness of algorithmic (data-driven and learning-based) decision making. A number of recent works have proposed methods to measure and eliminate unfairness in algorithmic decisions. In this talk, I will argue that the notions of fairness considered in these early works are limited along several dimensions: (i) they focus on distributive fairness (i.e., fairness of the outcomes or ends of decision making) at the expense of procedural fairness (i.e., fairness of the process or means of decision making); (ii) they normatively prescribe how fair decisions ought to be made rather than descriptively study how people perceive and reason about the fairness of decisions; and (iii) they ignore the influence of the status quo (i.e., how decisions are made by existing decision systems) on people's perceptions of the fairness of decisions. I will present a few measures and mechanisms to quantify and mitigate algorithmic unfairness along these previously overlooked dimensions, and discuss the challenging tradeoffs that arise when we attempt to account for all the different fairness considerations simultaneously.
Bio: To be updated.