Business leaders can benefit from the transparency of machine learning algorithms

Newswise — In today’s business world, machine learning algorithms are increasingly applied to decisions that affect jobs, education and access to credit. But companies generally keep these algorithms secret, citing concerns that users could game them and undermine their predictive power. Amid growing calls to require companies to make their algorithms transparent, a new study has developed an analytical model to compare company profits with and without such transparency. The study concluded that algorithmic transparency carries benefits but also risks.

Led by researchers from Carnegie Mellon University (CMU) and the University of Michigan, the study appears in Management Science.

“As managers are called upon to enhance transparency, our findings can help them make decisions that benefit their businesses,” says Param Vir Singh, professor of business technologies and marketing at CMU’s Tepper School of Business and a co-author of the study.

Researchers investigated how algorithmic transparency affects companies and job applicants (also called agents) by developing and analyzing a game-theoretic model that captures how both parties act under opaque and transparent algorithms. In doing so, the authors sought to answer four questions: 1) From the point of view of the business (the decision-maker), are there advantages to making an algorithm transparent even when agents could game it? 2) How would agents be affected if companies made their algorithms transparent? 3) How would the results be affected by the predictive power of the features most likely to be gamed by agents? 4) How would the composition of the market (in terms of desirable and undesirable agents) affect these results?

The study concluded that algorithmic transparency can have positive effects for decision-makers and companies and negative effects for agents. Under a wide range of conditions, transparency benefits businesses by motivating agents to invest in improving features that are valuable to the business and, in some situations, by increasing the predictive power of the algorithm. This challenges the conventional wisdom that making algorithms transparent always hurts businesses economically.

But the study also concluded that agents aren’t always better off under algorithmic transparency. Companies use algorithms to separate high-type (more desirable) agents from low-type (less desirable) agents. These algorithms rely on desirable features (that is, causal features that directly affect a company’s performance, such as relevant education or training) and on correlational features (i.e., features that are correlated with agent type but do not affect business performance; for example, high-type agents may be more likely to wear glasses), which agents can often game.

High-type agents can get away with underinvesting in costly features desired by the business when the correlational features used by the firm’s opaque algorithm give them a classification advantage. When a company makes its algorithm transparent, all agents game the correlational features, and the predictive power of those features disappears. As a result, high-type agents must invest in the costly, desirable features to distinguish themselves from low-type agents.
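To make this mechanism concrete, here is a minimal toy simulation. It is not the authors’ model: the payoff numbers (a wage of 1, a training cost of 0.4, an 80/20 natural split in the “glasses” signal) and the firm’s screening rule are hypothetical assumptions chosen only to illustrate how gaming erodes the correlational signal under transparency and pushes high-type agents toward the costly causal investment.

import random

random.seed(0)

N = 10_000          # number of simulated job applicants
P_HIGH = 0.5        # share of high-type agents in the market (assumption)
WAGE = 1.0          # payoff to any agent the firm classifies as high type (assumption)
COST_CAUSAL = 0.4   # cost of acquiring the causal feature, e.g. training (assumption)

def simulate(transparent: bool):
    """Return hires of each type and the expected payoff of a high-type agent."""
    hired_high = hired_low = 0
    for _ in range(N):
        high = random.random() < P_HIGH
        if transparent:
            # Algorithm is public: every agent games the cheap correlational
            # feature ("glasses"), so it stops separating types and the firm
            # screens on the causal feature alone. Under our assumed payoffs,
            # only high types find the costly investment worthwhile.
            hired = high
        else:
            # Algorithm is opaque: nobody invests in the costly causal feature,
            # and the firm screens on the natural correlational signal, which
            # high types simply happen to carry more often.
            glasses = random.random() < (0.8 if high else 0.2)
            hired = glasses
        hired_high += hired and high
        hired_low += hired and not high
    # High types bear the training cost under transparency but are hired for
    # sure; under opacity they pay nothing but are hired only 80% of the time.
    high_payoff = WAGE - COST_CAUSAL if transparent else 0.8 * WAGE
    return hired_high, hired_low, high_payoff

for transparent in (False, True):
    hh, hl, payoff = simulate(transparent)
    print(f"{'transparent' if transparent else 'opaque':>11}: "
          f"hired high={hh}, hired low={hl}, "
          f"precision={hh / (hh + hl):.2f}, high-type payoff≈{payoff:.2f}")

Run as written, the opaque case hires some low-type agents and rewards high types without any investment, while the transparent case screens perfectly but leaves high types with a lower net payoff, matching the article’s point that transparency can favor the firm while leaving agents worse off.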

“Our analysis suggests that companies should not always worry about the potential loss of predictive power from transparency when confronting strategic individuals,” says Qiaochu Wang, a Ph.D. student in business technologies at CMU’s Tepper School of Business and a co-author of the study. “Instead, they can use algorithmic transparency as leverage to motivate agents to invest in more desirable features.”

One of the study’s limitations, the authors note, is that while it shows the economic benefits that transparent algorithms can bring to companies, there may be other reasons why companies don’t want to make their algorithms transparent. These reasons, including privacy and competition, were not addressed in the study.

“The results of our model, which focused on a hiring scenario, are generalizable to other scenarios in which a company tries to gain a better understanding or knowledge of individuals’ private information,” notes Yan Huang, associate professor of business technologies at CMU’s Tepper School of Business and a co-author of the study.

###

Summary of the Management Science article “Algorithmic Transparency With Strategic Users” by Wang, Q. (Carnegie Mellon University), Huang, Y. (Carnegie Mellon University), Jasin, S. (University of Michigan), and Singh, P.V. (Carnegie Mellon University). Copyright 2022. All rights reserved.

Sharon D. Cole