Elon Musk wants to open up Twitter's algorithms. Regulators could get there first.


Happy Monday! We hope that whether you observe Easter or not, there will be chocolate in the near future. Send topical tips to: [email protected]

Below: U.S. authorities are investigating child exploitation on TikTok, and how Ukraine is using facial recognition technology. First up:

Elon Musk wants to open up Twitter's algorithms. Regulators could get there first.

Elon Musk's attempt to take over Twitter, allegedly in the name of "free speech," has reignited partisan debates over whether major social media platforms are over- or under-policing misinformation, hate speech and incitement to violence.

But one of the less polarizing ideas Musk has floated for changing the platform, opening up its recommendation algorithms to the public, shares a goal with a bipartisan push that's already gaining momentum around the world.

Musk recently called on Twitter to make the algorithms it uses to decide which posts are displayed or amplified "open source," meaning the code could be viewed and used by anyone.

The thinking behind it, Musk said during an interview on Thursday, is that users "would know if something was done to promote, demote, or otherwise affect a tweet."

It's a tall order. As my colleague Reed Albergotti reported, researchers say Musk's remarks are likely an oversimplification of what it would take to make this data public, in large part because the companies' algorithms are so sprawling and derived from proprietary and private data.

While policymakers aren’t pushing to make the platforms’ algorithms fully open to the public, some are calling on social networks to release treasure troves of new data to outside researchers and regulators.

The European Union aims to finalize a landmark proposal this month, the Digital Services Act (DSA), which could force major digital platforms to hand over algorithmic data and assess whether their recommendation systems create risks for users by amplifying illegal content.

U.K. lawmakers are advancing a separate measure, known as the Online Safety Bill, that would give the country's regulator the power to demand information from companies about how their algorithms decide what content gets amplified. The regulator would even be "able to enter company premises to access data and equipment," U.K. officials said last month.

If passed as expected, these proposals would likely shed much more light on the algorithms that power social media across the industry than if a single company voluntarily opened its books, as Musk hopes to do with Twitter.

While the campaign to create new disclosure requirements for social media platforms is not as far along in the United States, the push has accelerated following whistleblower Frances Haugen's disclosures about the risks posed by Facebook.

More recently, Sens. Chris Coons (D-Del.) and Rob Portman (R-Ohio) unveiled a bill that would require platforms to provide data about their algorithms to outside researchers and give regulators the power to require companies to proactively disclose other data.

It’s not immediately clear how directly some of these proposals would impact Twitter, a small rival to Meta’s Facebook, Google’s YouTube and TikTok.

While the EU proposal would impose new obligations on a wide range of digital platforms, many of its algorithmic transparency requirements apply only to what the bloc calls "very large online platforms."

Lawmakers have proposed defining those as platforms used by 10 percent of EU citizens, currently 45 million users, a threshold Twitter may not meet. (The company hasn't said how many users it has in Europe.)

And while U.K. lawmakers have stipulated that their legislation will include tougher rules for the largest companies, the regulator, Ofcom, is tasked with setting the exact thresholds.

Coons and Portman’s bipartisan bill, however, sets a lower threshold of 25 million unique monthly users, which would apply to Twitter and its larger peers.

Much like with Musk's call for Twitter to proactively reveal its algorithms, policymakers are sure to face technical hurdles in enacting their plans, especially with data that may contain private information about users or potential trade secrets.

The success of either effort in shedding light on companies' practices, whether Musk's long-shot ownership bid or policymakers' sweeping disclosure proposals, will inevitably hinge on their ability to overcome those hurdles.

Musk's vision for Twitter is outdated, technologists say

Critics say Musk's ambition for what the platform should be, a largely unmoderated space free of censorship, is naive, would make the site unsafe and would ultimately hurt the company's growth prospects, Elizabeth Dwoskin reports.

“What Musk apparently fails to recognize is that to really have free speech today, you need moderation,” said Katie Harbath, a former director of public policy for Facebook who is managing director of the consulting firm Anchor Change. “Otherwise, only those who bully and harass will stay because they will drive others away.”

Musk’s attempt to buy the company also has experts worried that putting the company in one person’s hands will hurt democracy, my colleagues Joseph Menn, Cat Zakrzewski and Craig Timberg report. If Musk were to take control of the company, it could increase pressure on U.S. policymakers to regulate social media companies, former officials said.

U.S. authorities investigate how TikTok protects children

The Department of Homeland Security is investigating child sexual abuse material on TikTok, while the Department of Justice is investigating how predators are exploiting one of the app’s privacy features, the Financial Times’s Cristina Criddle reports.

Erin Burke, who heads DHS's Child Exploitation Investigations Unit, took aim at TikTok, calling it a "platform of choice" for predators, Criddle reports. "It's a perfect place for predators to meet, groom and engage children," Burke said, adding that international companies like TikTok aren't doing enough to proactively ensure children aren't being exploited on their apps.

TikTok defended itself, telling the FT it "has zero tolerance for child sexual abuse material" and "when we see an attempt to post, obtain or distribute [child sexual abuse material], we remove content, ban accounts and devices, immediately report to [the National Center for Missing and Exploited Children] and engage with law enforcement as needed."

Ukraine scans faces of dead Russians, then contacts mothers

Ukrainian officials have run more than 8,600 facial recognition searches on dead or captured Russian soldiers, Drew Harwell reports. Ukraine's IT Army, a volunteer force of activists and hackers, says it has used those identifications to tell the families of 582 Russians that their relatives were dead. Some family members were even sent photos of the abandoned corpses.

Some Ukrainians say facial recognition software is an effective way to reach Russians and tell them about the costs of the war. But the technology raises questions about the effectiveness of such a strategy and even the future of warfare. "If it was Russian soldiers doing this with Ukrainian mothers, you would say, 'Oh my God, this is barbaric,'" said surveillance researcher Stephanie Hare. "And does it really work? Or does it just make them say, 'Look at those cruel, lawless Ukrainians doing this to our boys?'"

Many people watched Musk's interview at TED, including our colleagues Rachel Lerman and Faiz Siddiqui.

Workers at Apple’s Grand Central store head for unionization (Reed Albergotti)

Quiet Industry Lobbyists Water Down State Privacy Laws (Protocol)

Activision says it is cooperating with federal insider trading investigations (Reuters)

The Russian propaganda machine takes another hit (Politico Europe)

Intel thinks its AI knows what students are thinking and feeling in the classroom (Protocol)

Mid-term politics jeopardize Biden’s tech agenda (Politico)

Congress Seeks Compromise to Boost Computer Chip Industry (AP)

Read the Facebook papers for yourself (Gizmodo)

Times New Roman and Arial fonts blocked in Russia – Vedomosti (The Moscow Times)

  • NTIA head Alan Davidson discusses broadband at a Brookings Institution event Thursday at 10 a.m.

That's all for today. Thank you so much for joining us! Be sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.

Sharon D. Cole