Get ahead of AI bias with inclusive algorithms

The societal impact of machine learning algorithms and artificial intelligence systems is multifaceted. The use of big data and algorithms in a variety of fields, including insurance, advertising and education, can lead to decisions that harm the poor, reinforce racism and amplify inequality. Models that rely on false proxies and flawed datasets scale easily, amplifying any inherent bias across ever-larger populations. At the same time, these systems can also provide game-changing solutions and produce socially positive efficiencies.

BMW’s AI challenge wants to overcome internal biases with data
BMW Group is launching the “Joyful Diversity with AI” challenge, inviting participants to propose new ideas for how data-driven AI solutions can help the automaker support diversity, equity and inclusion in its work environment and communications. The submission deadline is October 3, 2022, and winners will be announced in December. BMW

EU to regulate impact of AI on life-changing decisions
To tackle machine-based discrimination, the EU plans to introduce a comprehensive regulatory framework, intended as a global template, for the kinds of AI models used to support “high-risk” decisions, such as filtering applications for employment, education or social assistance, or helping banks assess the creditworthiness of potential borrowers. These are all potentially life-altering decisions that affect a person’s ability to afford a home, take out a student loan, or even get a job. The Guardian

FairPlay is the first “fairness as a service” solution to algorithmic bias
Designed primarily for financial institutions, FairPlay’s solution aims to keep the biases of the past from being coded into the algorithms that decide the future, using next-generation tools to assess automated decision models and increase both fairness and profits. FairPlay
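
FairPlay does not publish its internal methods, but a common first step in this kind of fairness audit is the adverse impact ratio: each group’s approval rate divided by that of the most-favored group, with ratios below 0.8 often flagged under the “four-fifths rule” used in US fair-lending analysis. The Python sketch below is purely illustrative; the group names and outcomes are hypothetical and the function is not part of any FairPlay API.

```python
def adverse_impact_ratio(decisions):
    """Compute each group's approval rate relative to the most-favored group.

    decisions: mapping of group name -> list of 0/1 approval outcomes.
    """
    rates = {g: sum(d) / len(d) for g, d in decisions.items() if d}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical approval outcomes from an automated credit model.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
}

for group, ratio in adverse_impact_ratio(outcomes).items():
    # The four-fifths rule flags ratios below 0.8 for closer review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A real audit would go further, for example controlling for legitimate credit factors before attributing a gap to bias, but the ratio above is a standard screening metric.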

Even artificial intelligence has a Heinz bias when asked for ketchup
The popular DALL-E 2 AI image generator shows a clear brand preference when given generic prompts to create images of ketchup, demonstrating how even ostensibly objective models reinforce pre-existing biases. Heinz

Sharon D. Cole