Could algorithms create a win-win for society and crowdfunding platforms?

Like many advances of the digital age, algorithms hold great promise, but also great peril.

The Brookings Institution defines algorithms as “a set of step-by-step instructions that computers follow to perform a task” and notes that they have “become sophisticated and ubiquitous tools for automated decision-making.” When you type a query into the Google search box, Google’s proprietary algorithm decides which results you see and, in particular, the order in which they appear. Google’s convenience aside, a major problem with algorithms is that they are often opaque and notorious for perpetuating unfair biases and stereotypes.

In a new working paper, Tuck associate professor Prasad Vana examines the influence of algorithms on consumer choice and asks how we can use them to promote socially beneficial outcomes while protecting ourselves from their potential harms. He and coauthor Anja Lambrecht of London Business School find that researchers need visibility into the inner workings of algorithms to accurately understand their influence, and that algorithms can be designed to help those who need them most without undermining the profit imperatives that fuel them.

The context of their study is the crowdfunding platform DonorsChoose, which lets teachers raise money for classroom improvements and supplies. At any given time, tens of thousands of projects are seeking funding on the platform, and the site’s algorithm determines the order in which projects appear when a user searches for them. DonorsChoose gave the researchers full access to the algorithm’s code, so they could study how changing it affects search rankings and funding outcomes.

Professor Vana teaches the Analytics I and II core courses and the Quantitative Digital Marketing elective course in Tuck’s MBA program.

The first question the researchers ask is how well the DonorsChoose algorithm achieves its twin goals of benefiting disadvantaged groups and keeping the project completion rate high. DonorsChoose wants to help schools in high-poverty areas, so its algorithm ranks higher the projects from schools in the highest poverty category and from schools where a large share of students receive free or reduced-price lunch. At the same time, the platform earns money only when projects are fully funded, so it favors projects with a good chance of succeeding. To see whether these two goals conflict, Vana and Lambrecht modified the algorithm’s code and ran simulations. First, they disabled the preferences tied to poverty and free or reduced-price lunch. Removing these parameters reduced contributions to these schools by 12.98 percentage points. Then they increased the weight of the poverty and lunch preferences, to see whether that would translate into funding for more of these projects. Surprisingly, they found the effect to be minimal. “If you remove the poverty components, the schools suffer a lot,” Vana explains, “but if you double the components, it doesn’t add much more. There’s an upper limit to how much the algorithm can help these schools, and the platform is already very close to that limit.”
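The flavor of this exercise can be captured in a short sketch. The Python below is purely illustrative: the article does not reproduce DonorsChoose’s actual code, so the field names, weights, and scoring formula are hypothetical stand-ins, and the simulated shares are toy numbers rather than the paper’s estimates.

```python
import random

random.seed(7)

# Hypothetical project pool. poverty_level 2 marks the highest-poverty
# tier; lunch_rate is the share of students on free or reduced-price
# lunch. All fields and values are invented for illustration.
projects = [
    {"poverty_level": random.choice([0, 1, 2]), "lunch_rate": random.random()}
    for _ in range(500)
]

def score(project, poverty_weight, lunch_weight):
    """Toy ranking score: a random stand-in for query relevance, plus
    boosts for high-poverty schools and high lunch rates. The weights
    play the role of the components the researchers switched off or doubled."""
    relevance = random.random()
    return (relevance
            + poverty_weight * (project["poverty_level"] == 2)
            + lunch_weight * project["lunch_rate"])

def share_to_high_poverty(poverty_weight, lunch_weight, n_donors=2000):
    """Assume each simulated donor gives $1 to the top-ranked project;
    return the share of money reaching the highest-poverty schools."""
    hits = 0
    for _ in range(n_donors):
        top = max(projects, key=lambda p: score(p, poverty_weight, lunch_weight))
        hits += top["poverty_level"] == 2
    return hits / n_donors

for pw, lw, label in [(0, 0, "boosts off"), (1, 1, "baseline"), (2, 2, "boosts doubled")]:
    print(f"{label:>14}: {share_to_high_poverty(pw, lw):.1%} to highest-poverty schools")
```

In this toy model, switching the boosts off sharply cuts the share of money reaching the highest-poverty schools, while doubling them barely moves it, the same ceiling effect Vana describes.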

We show that the goal of helping people in need doesn’t take away from the overall goal of the platform, which is to stay in business.

Second, Vana and Lambrecht tweaked the algorithm’s code again, this time to isolate the parameters that favor projects most likely to be completed. For example, the algorithm ranks higher projects that have already raised a large share of their requested amount, or that have small funding targets. The researchers find that turning these preferences on or off does not significantly change the proportion of money going to schools with high levels of poverty. This suggests that a platform’s primary goal of maximizing the number of successfully funded projects does not have to come at the expense of disadvantaged groups.
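A similar sketch can illustrate this second experiment. Again, every field and formula here is an assumed stand-in; the point is only the shape of the test: toggle the completion-oriented boosts and compare where the money goes.

```python
import random

random.seed(7)

# Hypothetical project pool; all fields are invented for illustration.
# high_poverty flags schools in the top poverty tier; target is the
# requested amount in dollars; raised is the amount funded so far.
projects = []
for _ in range(500):
    target = random.uniform(100, 2000)
    projects.append({
        "high_poverty": random.random() < 0.3,
        "target": target,
        "raised": random.uniform(0, target),
    })

def score(project, completion_weight):
    """Toy score: random query relevance plus an optional boost for
    projects likely to complete, i.e., those that are mostly funded
    already or are asking for a small amount."""
    relevance = random.random()
    likely_to_complete = (project["raised"] / project["target"]
                          + 100.0 / project["target"])
    return relevance + completion_weight * likely_to_complete

def share_to_high_poverty(completion_weight, n_donors=2000):
    """Each simulated donor gives $1 to the top-ranked project; return
    the share of that money reaching high-poverty schools."""
    hits = 0
    for _ in range(n_donors):
        top = max(projects, key=lambda p: score(p, completion_weight))
        hits += top["high_poverty"]
    return hits / n_donors

print(f"completion boost off: {share_to_high_poverty(0.0):.1%}")
print(f"completion boost on:  {share_to_high_poverty(1.0):.1%}")
```

Because completion likelihood and poverty status are independent in this invented data, the two shares come out roughly equal, mirroring the finding that favoring likely-to-complete projects need not divert money from high-poverty schools.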

As society has learned of the potential of algorithms to perpetuate bias, there have been calls for “algorithmic transparency,” so that regulators and researchers can properly study how algorithms make decisions. One conclusion of Vana and Lambrecht’s paper is that such transparency is empirically critical if we are to accurately understand consumer preferences. Another takeaway, Vana says, is that the platform’s fundraising and profit goals are “orthogonal,” meaning they don’t conflict with each other. “We show that the goal of helping people in need doesn’t take away from the overall goal of the platform, which is to stay in business.”

Sharon D. Cole