Federated Learning of Cohorts, Deep Reinforcement Learning, and Algorithms of Oppression
Machine learning trains computers to recognize patterns in historical data and use those patterns to make predictions about new data. This section examines three topics: Federated Learning of Cohorts, deep reinforcement learning, and the arguments of the book Algorithms of Oppression.
Federated Learning of Cohorts
Federated Learning of Cohorts (FLoC) is a way of tracking people on the web. It places users into “cohorts” based on their browsing history so they can be shown ads that are more likely to interest them. FLoC was created as part of Google’s Privacy Sandbox project, which includes several other ad technologies with bird-themed names. Despite its name, FLoC does not actually use federated learning.
The algorithm examines what a user does in the browser and computes a “cohort ID” using the SimHash algorithm, grouping the user with other users who view similar content. Each cohort contains several thousand users, which makes it harder to single out any specific user, and cohorts are recalculated weekly. Websites can then retrieve the cohort ID through an API and use it to decide which ads to display. Google does not label cohorts with interests; it only groups users together and assigns each group an ID, so advertisers must determine for themselves what kinds of users each cohort contains.
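The key property of SimHash is that similar inputs produce fingerprints that differ in few bits, so users with overlapping browsing histories land near each other. FLoC’s exact hashing is defined by Chromium’s implementation; the sketch below is only a minimal illustration of the SimHash idea (the feature names and the MD5-based per-feature hash are assumptions, not FLoC’s actual scheme):

```python
import hashlib

def simhash(features, bits=64):
    """Compute a SimHash fingerprint for a set of feature strings
    (e.g. visited domains). Mostly-shared feature sets yield
    fingerprints that differ in few bits."""
    votes = [0] * bits
    for feature in features:
        # Hash each feature to a stable 64-bit integer (illustrative choice).
        h = int.from_bytes(hashlib.md5(feature.encode()).digest()[:8], "big")
        for i in range(bits):
            # Each feature votes +1 or -1 on every bit position.
            votes[i] += 1 if (h >> i) & 1 else -1
    # The sign of each accumulated vote becomes one bit of the fingerprint.
    fingerprint = 0
    for i in range(bits):
        if votes[i] > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming_distance(a, b):
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

# Two users with largely overlapping (hypothetical) browsing histories,
# and one with a disjoint history.
user_a = simhash(["news.example", "cooking.example", "sports.example"])
user_b = simhash(["news.example", "cooking.example", "travel.example"])
user_c = simhash(["crypto.example", "gaming.example", "cars.example"])
```

Because the fingerprint is built from per-feature votes, it is order-independent: two users who visited the same sites in a different order receive the same hash, which is what lets nearby hash values be bucketed into cohorts.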
Deep Reinforcement Learning
Deep Reinforcement Learning (deep RL) is a branch of machine learning that combines reinforcement learning (RL) and deep learning. RL studies the problem of a computer learning to make decisions by trying things out and observing what works and what doesn’t. Deep RL uses deep learning so that agents can make decisions from unstructured input data without the state space having to be designed by hand. Deep RL algorithms can take large amounts of data into account, such as every pixel on a video game screen, and determine what actions to take to maximize a goal (for example, the game score). Deep reinforcement learning has been applied in many domains, including video games, robotics, and board games such as Go.
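In deep RL the value estimates below would be produced by a neural network rather than a table, but the trial-and-error loop is the same. Here is a minimal tabular Q-learning sketch; the toy chain environment and all hyperparameters are invented for illustration:

```python
import random

random.seed(0)

# Toy environment: states 0..4 in a line; action 0 moves left, action 1
# moves right. Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated return for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: move the estimate toward
        # reward + discounted best future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# Greedy policy extracted from the learned values.
policy = [0 if q[0] > q[1] else 1 for q in Q]
```

After training, the greedy policy moves right toward the goal. Replacing `Q` with a network that maps raw observations (e.g. screen pixels) to action values is, in essence, the step from this tabular method to deep RL.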
In many real-world decision-making problems, the states of the underlying Markov decision process (MDP) are high-dimensional and far too numerous to enumerate. Deep reinforcement learning algorithms instead use deep learning to approximate solutions to these MDPs: the policy or other learned functions are represented as neural networks, and specialized algorithms that perform well in this setting are developed.
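When the state space is small enough to enumerate, an MDP can be solved exactly without any function approximation; deep RL is needed precisely when it cannot. The following value-iteration sketch over an explicit three-state MDP (the transition table, rewards, and discount factor are invented for illustration) shows what such an exact solution looks like:

```python
# Explicit deterministic MDP: transitions[state][action] = (next_state, reward).
# A three-state chain where action 1 moves toward the rewarding terminal state 2.
transitions = {
    0: {0: (0, 0.0), 1: (1, 0.0)},
    1: {0: (0, 0.0), 1: (2, 1.0)},
    2: {0: (2, 0.0), 1: (2, 0.0)},  # terminal: no further reward
}
GAMMA = 0.9

# Value iteration: repeatedly apply the Bellman optimality update
#   V(s) = max_a [ r(s, a) + GAMMA * V(s') ]
# until the values stop changing.
V = {s: 0.0 for s in transitions}
for _ in range(100):
    new_V = {s: max(r + GAMMA * V[s2] for (s2, r) in acts.values())
             for s, acts in transitions.items()}
    if max(abs(new_V[s] - V[s]) for s in V) < 1e-9:
        V = new_V
        break
    V = new_V

# Greedy policy extracted from the converged values.
policy = {s: max(acts, key=lambda a: acts[a][1] + GAMMA * V[acts[a][0]])
          for s, acts in transitions.items()}
```

A deep RL method replaces the explicit `transitions` table and the `V` dictionary with a neural network learned from sampled experience, since neither can be written down for, say, every possible configuration of pixels on a screen.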
Algorithms of Oppression
Algorithms of Oppression is a book by Safiya Umoja Noble based on more than six years of academic research into Google’s search algorithms, examining search results from 2009 to 2015. The book discusses how search engines can produce discriminatory bias. Noble argues that search algorithms are racist and amplify society’s problems because they reflect the negative biases of society and of the people who build them. She challenges the idea that search engines are neutral by showing how their algorithms privilege whiteness, returning more positive results for searches containing the word “white” than for “Asian”, “Hispanic”, or “black”. Her central example is the difference between the search results for “black girls” and “white girls” and the bias those results reveal. These algorithms can also be biased against women of color and other marginalized groups, and can harm internet users by enabling “racial and gender profiling, misrepresentation and even economic redlining”. The book argues that the algorithms keep people in vulnerable positions and are unfair to people of color, especially women of color.
Noble’s argument also explains how racism is built into Google’s algorithm, as it is into many other computational systems, such as facial recognition and medical care programs. Noble takes issue with the idea that new technological methods are inherently progressive and fair; she argues that many technologies, like Google’s algorithm, “reflect and replicate existing inequalities.”