AI Systems for Decision-Making

Artificial Intelligence (AI) is experiencing a period of rapid growth, with the potential for profound impacts on the economy and society at large. The current AI revolution is fueled, in large part, by machine learning. Notably, deep learning has made enormous recent progress on perception tasks from speech recognition to image understanding to biometric security. However, even if machine learning worked as well as it possibly could, AI would not be “solved”. Given a reliable predictive model of a system, one inevitably wants to use this model as a basis for decision-making: that is, for taking action.

AI systems for decision-making can be understood as lying along a spectrum according to their levels of autonomy. In some cases, human experts use AI techniques to support them in reasoning about a single, high-stakes decision. In other settings, it makes sense for an AI system to make autonomous decisions. And in between there are ‘mixed-initiative’ systems that share varying degrees of responsibility for action between humans and an AI system. We span this spectrum of decision-making problems and also leverage work on various enabling technologies (from discrete optimization to statistical inference to natural language processing) that are useful across the board. In what follows we illustrate the different scenarios via concrete examples from research by CAIDA members, emphasizing conceptual challenges that arise in practical applications.


[Figure: the spectrum of autonomy in AI decision-making. Single high-stakes decisions (e.g., reallocation of radio spectrum; environmental policy); mixed-initiative systems (e.g., medical diagnosis with clinician override; intelligent tutoring); autonomous systems (e.g., advanced autopilots; closed-loop control systems).]


Single high-stakes decisions

In 2012 the US Federal Communications Commission was directed by Congress to repurpose radio spectrum allocated to broadcast television. The decision of how to carry out this directive (from broad strokes to algorithmic implementation details) was made with extensive support from AI techniques. These included game-theoretic analysis of proposed auction mechanisms, large computer simulations of bidder behavior, and the automated design of algorithms to determine in real time how densely the remaining broadcasters could be packed into a reduced band of radio spectrum. The problem was challenging because no such reallocation had ever taken place before (and thus little applicable data was available); broadcasters’ valuations were closely held secrets; and the underlying task of repacking stations into channels is provably hard, with no reliably efficient algorithm known to exist. In the end, the auction was a great success, raising $20 billion from wireless internet companies, paying over $10 billion to broadcasters, reducing the national debt by over $7 billion, and putting radio spectrum across North America to more efficient use.
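At its core, the repacking question is a graph-coloring problem: stations are nodes, interference constraints are edges, and channels are colors. The feasibility checkers actually deployed were highly engineered; the following is only a toy backtracking sketch, with invented station names and constraints, to make the structure of the problem concrete.

    from typing import Dict, List, Optional, Set, Tuple

    def repack(stations: List[str],
               channels: List[int],
               conflicts: Set[Tuple[str, str]]) -> Optional[Dict[str, int]]:
        """Assign each station a channel so that no two conflicting stations
        share one (graph coloring); return an assignment, or None if infeasible."""
        def clashes(s: str, ch: int, partial: Dict[str, int]) -> bool:
            return any(c == ch and ((s, t) in conflicts or (t, s) in conflicts)
                       for t, c in partial.items())

        def search(i: int, partial: Dict[str, int]) -> Optional[Dict[str, int]]:
            if i == len(stations):                 # every station placed
                return dict(partial)
            s = stations[i]
            for ch in channels:
                if not clashes(s, ch, partial):
                    partial[s] = ch
                    result = search(i + 1, partial)
                    if result is not None:
                        return result
                    del partial[s]                 # undo and try the next channel
            return None                            # dead end: backtrack

        return search(0, {})

    # Toy instance: three mutually interfering stations but only two channels.
    print(repack(["KAAA", "KBBB", "KCCC"], [14, 15],
                 {("KAAA", "KBBB"), ("KBBB", "KCCC"), ("KAAA", "KCCC")}))  # None

Search of this kind takes exponential time in the worst case, which is exactly why real-time feasibility checking at national scale demanded automatically designed, highly engineered solvers rather than a textbook method.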

Single, high-stakes decisions also arise in environmental applications. In geology, consider the question of where to perform test drilling, given a predictive model of mineral deposits (as published, for instance, in a geology journal); the interests of various stakeholders; forecasts of future commodity prices; and the costs of acquiring rights to, operating in, and extracting minerals from different areas. In environmental policy, in a pilot study to help decide how to handle stormwater runoff for the Orchard Commons building at UBC, the experts and stakeholders involved used an AI system to express and refine their preferences, focusing discussion on the points of fundamental disagreement. This application of computational sustainability is an example of AI deployed under the UBC as a Living Laboratory theme.
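The kind of preference modeling such a system might perform can be sketched minimally, as below. All stakeholder names, criteria, weights, and option scores are invented for illustration; the idea is simply that scoring options under each stakeholder’s multi-attribute weights, and measuring where the weights diverge, surfaces the disagreements worth discussing.

    import statistics

    # Invented stakeholder weights over decision criteria (higher = cares more).
    weights = {
        "campus_planner":     {"cost": 0.5, "flood_risk": 0.3, "habitat": 0.2},
        "landscape_expert":   {"cost": 0.2, "flood_risk": 0.2, "habitat": 0.6},
        "facilities_manager": {"cost": 0.6, "flood_risk": 0.3, "habitat": 0.1},
    }

    # Invented scores (0-1, higher is better) for two runoff-handling options.
    options = {
        "detention_tank": {"cost": 0.5, "flood_risk": 0.9, "habitat": 0.3},
        "rain_garden":    {"cost": 0.4, "flood_risk": 0.6, "habitat": 0.9},
    }

    def utility(option, w):
        """Weighted multi-attribute utility of an option for one stakeholder."""
        return sum(w[c] * options[option][c] for c in w)

    # Each stakeholder's preferred option under their own weights.
    for name, w in weights.items():
        best = max(options, key=lambda o: utility(o, w))
        print(f"{name} prefers: {best}")

    # The criterion whose weights vary most across stakeholders marks the
    # fundamental disagreement, and hence where discussion should focus.
    spread = {c: statistics.pstdev(w[c] for w in weights.values())
              for c in ["cost", "flood_risk", "habitat"]}
    print("most contested criterion:", max(spread, key=spread.get))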

Mixed-Initiative Systems

Unlike in the decision-support settings described above, some settings require rapid, repeated decision-making while still keeping humans in the loop. These are collaborative systems, with humans and artificial agents working together. Many such examples arise in healthcare, where AI systems show promise for performing diagnosis, recommending treatment or screening, and otherwise tailoring patient care. The goal of ‘precision health’ can be pursued by tailoring patients’ treatment to their genetic, phenotypic, behavioural and preference profiles. A common feature across this domain is that the recommendations of AI systems can be overridden by medical professionals or by the patients themselves. To help them decide when to do so, it is important that AI systems be able to justify their recommendations in terms that the humans in the loop can understand (i.e., they must be auditable and transparent). It is also important that these systems be certifiably fair (e.g., that they do not inadvertently favor one population of patients over another) and that they be guaranteed not to make harmful mistakes.

Additional challenges in the healthcare sector include missing or incomplete data, data stored across disparate systems, and a lack of interoperability between those systems. AI techniques developed to overcome these issues while preserving patient privacy are likely to translate to other sectors. In addition, several international efforts are focusing on shared decision-making between patients and healthcare providers. AI recommender systems that take into consideration patients’ perceptions of the utility of their choices (ideally in real time) have the potential to dramatically advance the field.
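One simple way to operationalize the override mechanism described above is for the system to act on its recommendation only when its confidence is high, deferring to a clinician otherwise, and to log a human-readable rationale in either case. The sketch below is a hypothetical illustration of that policy, not any deployed system; the threshold, field names, and risk factors are all invented.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Recommendation:
        patient_id: str
        action: str           # e.g. "refer_for_screening" or "routine_follow_up"
        confidence: float     # model's estimated probability of elevated risk
        rationale: List[str]  # human-readable factors, so the call is auditable
        decided_by: str = "model"

    def recommend(patient_id, risk_score, risk_factors, defer_below=0.7):
        """Act on confident predictions; defer uncertain ones to a clinician."""
        action = "refer_for_screening" if risk_score >= 0.5 else "routine_follow_up"
        rec = Recommendation(patient_id, action, risk_score, risk_factors)
        if max(risk_score, 1 - risk_score) < defer_below:
            rec.decided_by = "clinician"      # human in the loop takes over
        return rec

    audit_log = []                            # every decision kept for audit
    rec = recommend("p-001", risk_score=0.62,
                    risk_factors=["family history", "age over 50"])
    audit_log.append(rec)
    print(rec.decided_by, rec.action, rec.rationale)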

Education is a second domain in which mixed-initiative decision-making is inescapable. Consider systems that act as personal tutors, assessing both students’ abilities and their affective states in order to help them learn. This support is often best delivered as suggestions that can be discussed and refined based on a student’s feedback, both to maintain the student’s sense of control and to overcome inevitable inaccuracies in the assessment process.
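A classic ingredient of such tutors is a probabilistic model of what the student currently knows. The sketch below implements one standard approach, Bayesian knowledge tracing, which updates the probability of skill mastery after each observed answer; the slip, guess, and learning-rate parameters shown are illustrative, not values from any particular system.

    def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        """One step of Bayesian knowledge tracing: update the probability
        that a student has mastered a skill after observing one answer."""
        if correct:
            evidence = p_known * (1 - p_slip)           # knew it, didn't slip
            posterior = evidence / (evidence + (1 - p_known) * p_guess)
        else:
            evidence = p_known * p_slip                 # knew it but slipped
            posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
        # The student may also acquire the skill from this practice step.
        return posterior + (1 - posterior) * p_learn

    p = 0.3                   # prior probability that the skill is mastered
    for answer in [True, False, True, True]:
        p = bkt_update(p, answer)
        word = "correct" if answer else "wrong"
        print(f"after {word} answer: P(mastery) = {p:.2f}")

Because such estimates are inevitably noisy, the tutor’s resulting suggestions are best treated as a starting point for dialogue with the student rather than as verdicts, as described above.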

Autonomous Decision-Making and Action

In some settings, it is desirable for AI systems to act autonomously rather than simply recommending action. This is the case, for example, in advanced autopilots for flying airplanes or drones: the whole point is to act without human intervention. However, such systems must still be auditable and have provable safety properties. Autonomous decision-making is also used to make realistic animations, to support advanced manufacturing, and to match online advertisers with internet users. In some areas of healthcare, closed-loop control systems can safely deliver medications or other treatments in an autopilot mode; again, such systems require provable safety properties.
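The flavor of such a safety property can be illustrated with a toy closed-loop controller that wraps a hard safety envelope around whatever dose the control law proposes. Everything below (the proportional control law, the gain, the limits, the glucose-like readings) is invented for illustration; a real system would pair a verified envelope of this kind with a far more sophisticated controller.

    def safe_dose(measured, target, prev_dose, gain=0.5,
                  dose_min=0.0, dose_max=5.0, max_step=1.0):
        """Toy closed-loop controller: a proportional control law wrapped in
        a hard safety envelope that bounds both the dose itself and how fast
        it may change, no matter what the control law proposes."""
        proposed = gain * (measured - target)          # proportional control law
        # Safety envelope, enforced outside the control law:
        proposed = max(prev_dose - max_step, min(prev_dose + max_step, proposed))
        return max(dose_min, min(dose_max, proposed))

    dose = 0.0
    for reading in [9.0, 8.2, 7.1, 6.4, 6.0]:          # readings drifting to target
        dose = safe_dose(reading, target=6.0, prev_dose=dose)
        print(f"reading={reading:.1f} -> dose={dose:.2f}")

The key design choice is that the clamps are enforced outside the control law, so the safety argument does not depend on the controller (which might be learned, and hence hard to verify) behaving as intended.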

Reinforcement learning (RL) is one mechanism by which AI systems make autonomous decisions, and hence falls within the scope of CAIDA. While RL is a powerful and important technique, we note that it learns how to act by evaluating the “goodness” of actions via random exploration, which is not viable in many of the decision problems that are our focus; in the autopilot example above, for instance, it would involve repeatedly crashing drones until the system learned how to fly. Such methods also tend to produce “black box” solutions that are difficult to verify for correctness or safety.
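The point about exploration can be made concrete with a toy example. In the sketch below, a standard epsilon-greedy Q-learning agent learns to hold a target altitude, and the tally of “crashes” it accumulates during training illustrates why learning by trial and error is unacceptable when each error destroys a drone. The dynamics, rewards, and parameters are all invented.

    import random

    ACTIONS = [-1, 0, +1]        # descend, hold, climb
    Q = {}                       # Q-values: (altitude, action) -> estimate
    crashes = 0
    epsilon, alpha, gamma = 0.2, 0.5, 0.9

    for episode in range(500):
        alt = 5                                    # start at a safe altitude
        for _ in range(20):
            if random.random() < epsilon:          # random exploration step
                a = random.choice(ACTIONS)
            else:                                  # greedy step
                a = max(ACTIONS, key=lambda x: Q.get((alt, x), 0.0))
            nxt = alt + a
            if nxt <= 0:                           # flew into the ground
                crashes += 1
                reward, done = -100.0, True
            else:
                reward = -abs(nxt - 5)             # stay near altitude 5
                done = False
            best_next = 0.0 if done else max(Q.get((nxt, x), 0.0) for x in ACTIONS)
            Q[(alt, a)] = Q.get((alt, a), 0.0) + alpha * (
                reward + gamma * best_next - Q.get((alt, a), 0.0))
            if done:
                break
            alt = nxt

    print(f"crashes during training: {crashes}")

Each crash here costs nothing; in the physical world each one would cost a drone. Moreover, the learned Q-table offers no human-interpretable account of why an action is chosen, which is the “black box” verification problem noted above.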