Who’s running the show – us or our algorithms?

Algorithms colonize almost every corner of our lives, from search engines, financial commerce, facial recognition, robotics and social media lures to the music industry. They are so ubiquitous and so taken for granted that, at our peril, we rarely subject them to critical inquiry. Sculpting our everyday cosmology, their complex, quantified mathematical coding projects an aura of objectivity, reliability and certainty.

In their most schematic form, algorithms are sets of instructions for solving specific problems. Today, having evolved through sophisticated advances in machine learning, especially artificial intelligence, their power and impact are enormous. As Massimo Mazzotti, a professor of the history of science at the University of California, Berkeley, observes, they now point to "a program running on a physical machine, as well as its effects on other systems. Algorithms have thus become agents. …Algorithms now do things." Doing things? When my iPhone app tells me that my total steps for the day are below my goal, I aim to walk more tomorrow. My phone's algorithms affect not only what I do but, more subtly, how, and even what, I think.
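The step-goal example above can be sketched as a toy algorithm in this schematic sense, a fixed set of instructions for one specific problem. This is a hypothetical illustration only, not any vendor's actual code:

```python
# A toy "algorithm" in the schematic sense: a fixed set of
# instructions that solves one narrow problem -- deciding whether
# a daily step count met its goal, and nudging the user if not.

def step_nudge(steps_today: int, daily_goal: int) -> str:
    """Return a nudge message based on today's step count."""
    if steps_today >= daily_goal:
        return "Goal met -- nice work!"
    shortfall = daily_goal - steps_today
    return f"{shortfall} steps short of your goal. Walk more tomorrow!"

print(step_nudge(8500, 10000))
print(step_nudge(10200, 10000))
```

Even this trivial sketch embodies a designer's choices: what counts as a "goal," and what tone the nudge takes, are decided by a human before the machine ever runs.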

But why do we assume their objectivity and impartiality? As always, we create our own idols. When we give metaphysical cachet to numbers and coding, we reduce quality to a matter of taste, computing power and good code. Such blind trust in the algorithm amounts to a data fetish that further erodes our human-to-human interaction.

This is not to question their effectiveness. Surely they work. When you type "Hell hath no fury," Google autocomplete not only completes the quote but informs us of its origin: not, as many believe, Shakespeare, but William Congreve's 1697 play "The Mourning Bride."

Yet do we trust them more than they warrant? For example, with self-tracking devices, Amazon recommendations and the tyranny of social media transparency, are algorithmic nudges manipulating us more than we realize? Do they not only influence our behavior but determine it?

Think about that. Algorithms are the product of the involved contributions of designers and scientists, who themselves never design without a vision of, and values concerning, what they design. Design begins with a vision.

Consider care robots, or "carebots," the subject of my latest book. The designers of these robots have an idea of what caregiving entails. This raises a thorny philosophical and moral question: what is the nature of care? "Caring for" and "caring about" are not the same. Since caregiving algorithms are designed with a certain notion of care in mind, they are not detached mirrors of the essence of care. As I pointed out in my last article, on firearms, maintaining a laissez-faire attitude that a technology is "just a tool" is ill-conceived. There is always an agenda, often with far-reaching consequences, like an assault rifle's built-in capability for rapid carnage.

Algorithms, with their mathematical coding, result from a multi-layered process of tacitly negotiated meanings drawn from experiential interactions and gestalts. They are shaped within an environmental context of human interpretation and purpose, presupposing in themselves, for example, what it means to "care." Although we may think of mathematics as disembodied objectivity, its structure and origins are deeply metaphorical, as cognitive scientists George Lakoff and Rafael Núñez note in their groundbreaking study "Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being."

As algorithms increasingly shape our world, to consider them as harbingers of a high-tech utopia or dystopia is naïve. It all comes down to how we view them, our ingrained assumptions about how we relate to our devices, and the authority we give them. How far will we allow the automation of human choice? Algorithms, at least for now, are designed by humans, humans with their own goals, values and biases. To ensure that they remain our tools and that we do not become theirs, our first step is to start from the simple but profound principle that they are neither objective nor impartial.

Michael Brannigan is a philosopher, author, and lecturer whose latest book is “Caregiving, Carebots, and Contagion.” His email and website: [email protected]; www.michaelcbrannigan.com.
