Designing decision algorithms in an uncertain world

Anyone setting out to design an intelligent system for making decisions in the face of an uncertain and ever-changing world might want to start by reading Algorithms for Decision Making, a new book by Mykel Kochenderfer, associate professor of aeronautics and astronautics at Stanford University and director of the Stanford Intelligent Systems Laboratory (SISL), and his colleagues, Tim A. Wheeler and Kyle H. Wray.

Decision-making algorithms ingest relevant information about the problem from the environment and produce an action. Think of an AI algorithm that takes into account the patient’s vital signs and generates a diagnosis, or a stock trading system that summarizes daily market prices and suggests stock buys.

But building an agent in a highly uncertain environment is a challenge for any developer. In this new book, Kochenderfer and his co-authors recommend various approaches for designers to solve different kinds of problems. For example, if a designer knows that there is a particular type of uncertainty in the environment, or that an input source, such as a sensor, is imperfect, they can turn to a particular part of the book, which describes a number of different algorithms they could use.

Here, Kochenderfer discusses the value of computerized algorithmic decision-making and key themes from the book.

When do computer algorithms make better decisions than humans?

Humans aren’t very good at reasoning about low-probability events or complex scenarios where many things are happening at the same time. This is where a computational approach can add tremendous value to a decision-making process; it reduces the burden on the human designer to anticipate all possible scenarios.

For example, a human designing a self-driving car cannot anticipate every possible driving scenario or the types of sensor failures and errors that might occur when things go wrong.

A related example: we studied aircraft collision avoidance systems that make decisions using a process called dynamic programming, which can reason about very low-probability events, such as unexpected maneuvers, and optimize the best possible course of action given the various sources of uncertainty. Rigorous analyses have shown that these collision avoidance systems are both safer and more effective than anything a team of humans could have produced on their own.
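The dynamic-programming idea can be sketched on a toy problem. To be clear, this is not the actual collision avoidance logic; the states, actions, probabilities, and costs below are all invented for illustration. Value iteration, a core dynamic-programming algorithm, computes the best action in every state while accounting for the small chance of an unexpected intruder maneuver:

```python
# Toy dynamic-programming sketch in the spirit of collision avoidance.
# States: relative altitude gap to an intruder, in {-2, -1, 0, 1, 2}
# (0 = same altitude). Actions: own-ship descend (-1), stay (0), climb (+1).
# All numbers are made up for illustration.

STATES = [-2, -1, 0, 1, 2]
ACTIONS = [-1, 0, 1]
GAMMA = 0.95          # discount factor
P_MANEUVER = 0.05     # low-probability unexpected intruder maneuver

def clamp(s):
    return max(-2, min(2, s))

def transitions(s, a):
    """Distribution over next states: our action shifts the altitude gap,
    and with small probability the intruder maneuvers as well."""
    base = clamp(s + a)
    return [(clamp(base + 1), P_MANEUVER / 2),
            (clamp(base - 1), P_MANEUVER / 2),
            (base, 1 - P_MANEUVER)]

def reward(s, a):
    # Large penalty at co-altitude (collision risk), small cost to maneuver.
    return (-100.0 if s == 0 else 0.0) - abs(a)

def q_value(s, a, V):
    return reward(s, a) + GAMMA * sum(p * V[s2] for s2, p in transitions(s, a))

def value_iteration(tol=1e-6):
    """Repeatedly back up expected values until they converge."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {s: max(q_value(s, a, V) for a in ACTIONS) for s in STATES}
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new
        V = V_new

V = value_iteration()
policy = {s: max(ACTIONS, key=lambda a: q_value(s, a, V)) for s in STATES}
```

The resulting policy maneuvers away from co-altitude (where the collision penalty lives) and otherwise holds altitude, because staying put costs nothing while the penalty for a rare drift toward the intruder is already priced into the value function.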

Why is decision-making under uncertainty a particular focus of the book?

Most of the decisions we make in our lives are based on imperfect information. When we make a medical decision, for example, we know that diagnostic tests can be flawed: there can be false positives or false negatives. When we build robots, the sensor systems can fail in various ways. Or a self-driving car may encounter occlusions in the environment, such as a van blocking our ability to see a pedestrian.

We’re just inherently uncertain about the state of the world and sometimes that uncertainty is a big factor. So we want to tackle the problems of uncertainty head on, and that’s a key aspect of decision making that this book attempts to address. We want to help people create decision-making algorithms that can take in imperfect information and make decisions that achieve a goal or set of goals.

And when we talk about uncertainty, it includes uncertainty about the effects of our own actions, uncertainty about the state of the environment, uncertainty about how others might react to our actions, and the uncertainty in our conception or “model” of how the world works.

And the book breaks these forms of uncertainty down to their essence – to very simple calculations. And it turns out that computers can do these calculations quite easily using multiplication and addition of potentially small numbers.
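The kind of arithmetic being described can be seen in a minimal belief update with Bayes’ rule, using an imperfect diagnostic test as the example (the probabilities below are invented for illustration, not taken from the book):

```python
# Updating a belief with Bayes' rule: multiply a prior by a likelihood,
# then add and divide to normalize. The numbers are illustrative only.

prior_disease = 0.01        # P(disease) before any test
sensitivity = 0.95          # P(positive test | disease)
false_positive_rate = 0.10  # P(positive test | no disease)

# Multiply prior by likelihood for each hypothesis...
joint_pos_disease = prior_disease * sensitivity          # 0.0095
joint_pos_healthy = (1 - prior_disease) * false_positive_rate  # 0.099

# ...then normalize by their sum to get the updated belief.
posterior = joint_pos_disease / (joint_pos_disease + joint_pos_healthy)
# Even after a positive test, the disease remains unlikely (~8.8%),
# because the prior probability was so small.
```

Note how the intermediate products are themselves small numbers; chaining many such updates is exactly the multiplication and addition the passage refers to.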

How does time play a role in algorithmic decision making?

Time is critical. We generally need to reason about the effects of our actions over an extended window of time, including keeping track of the recent past as well as making predictions about the future. Most real-world problems can’t be solved with a single, one-off decision.

In the book, we begin by introducing probability theory and utility theory in single-decision contexts so that the reader gains a solid understanding in that simpler setting. But then we move on to sequential problems, because decision makers usually want to achieve an objective that is going to require a series of actions. For example, in a medical context, doctors don’t make a single decision and that’s it. They hopefully have a lasting relationship with the patient, so we don’t want an algorithm to greedily make what seems like the best decision at the time. We need it to reason about the future.

How have different disciplines contributed to the field of algorithmic decision making?

Many different communities inspired the content of this book. It’s not just AI, which is traditionally a subfield of computer science, but also operations research, control theory, psychology, neuroscience, and economics. All of these areas have made major contributions to the concepts of the book.

In fact, economics is the first field that comes to mind. In the 1940s, John von Neumann and Oskar Morgenstern published a book called Theory of Games and Economic Behavior, which states a set of axioms about rational preferences. These are properties that we just accept, like if I prefer apples to bananas and bananas to cookies, then I had better prefer apples to cookies. And their work supported the idea of utility theory, which says that as long as you have these rational preferences, you can assign utilities – numerical values – to different outcomes. And this allows you to define the problem of decision making under uncertainty: just choose the action that maximizes your expected utility. This is the principle of maximum expected utility, and this principle, which comes from economics, underlies the whole book.
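The principle can be stated in a few lines of code. The actions, probabilities, and utility values below are invented purely to illustrate the computation: weight each outcome’s utility by its probability, sum, and pick the action with the largest total.

```python
# Maximum expected utility on a made-up choice between two actions,
# each a list of (probability, utility) outcomes.

actions = {
    "safe_bond":   [(1.0, 101.0)],                 # a certain payoff
    "risky_stock": [(0.5, 150.0), (0.5, 60.0)],    # an uncertain one
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
# risky_stock: 0.5 * 150 + 0.5 * 60 = 105 > 101, so it is chosen here.
```

With a different utility function, say one that is concave to encode risk aversion, the same principle could prefer the certain payoff instead; the numbers encode the preferences, the maximization rule stays fixed.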

How do you see algorithmic decision making benefiting or harming society?

The book highlights several examples of beneficial deployment of algorithms. For example, thanks to aircraft collision avoidance systems, we will have safer and more efficient air transport. In the financial sector, algorithmic decision-making can help people invest their resources so that they can have a sustainable level of consumption throughout their lives. And medical decision support systems can help promote safer, higher quality medical care.

There are also ambitious types of research that have yet to be deployed, such as figuring out how to fight forest fires. Firefighting resources are limited and there is a lot of uncertainty as to exactly how a fire will develop depending on wind, vegetation, terrain, etc. Algorithms that account for these uncertainties could help us fight fires more efficiently and safely.

On the other hand, there are potential pitfalls. If these systems are deployed without proper validation, there may be a risk to life, and there could be unfairness and bias. Thus, one of the main objectives of our research is not only to build systems worthy of our trust, but also to propose methodologies to validate that they will behave as expected or as desired when deployed in the real world. We want to proactively ensure that these systems are safe and have the desired societal impact. And because we want to understand potential issues with our systems long before they’re deployed, this book includes a chapter on validation – a topic that’s actually worthy of an entire book, which we’re currently writing, titled Algorithms for Validation.

Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition.

Sharon D. Cole