For example, Melanie Dawes, chief executive of Ofcom, the UK communications regulator, has said that social media platforms will have to explain how their code works. The European Union's Digital Services Act, agreed on April 23, will likewise require platforms to be more transparent about their algorithms. In the United States, Democratic senators introduced the Algorithmic Accountability Act in February 2022, with the goal of bringing new transparency and oversight to the algorithms that govern our timelines and news feeds, and much else besides.
Making Twitter's algorithm visible would allow others, including competitors, to inspect and tweak it, which in theory means someone could copy Twitter's source code and release a rebranded version of their own. Large portions of the internet already run on open source software. The most famous example is OpenSSL, a suite of security tools used by much of the web, which suffered a major security breach in 2014.
There have even been examples of open source social networks. Mastodon, a microblogging platform founded in response to concerns about Twitter's dominant position, is open source, allowing users to inspect the code, which is posted on the GitHub software repository.
But seeing the code behind an algorithm doesn't necessarily tell you how it works, and it certainly doesn't give the average person much insight into the business structures and processes that went into creating it.
"It's like trying to understand ancient organisms with genetic material alone," said Jonathan Gray, senior lecturer in critical infrastructure studies at King's College London. "It tells us more than nothing, but it would be a stretch to say we know how they actually live."
Nor is there a single algorithm that controls Twitter. "Some of them will determine what people see on their timelines in terms of trends, or content, or suggested follows," said Catherine Flick, who studies computing and social responsibility at De Montfort University, Leicester. The algorithms people will be most interested in are those that control what is shown in a user's timeline, but even those wouldn't be very useful without the training data.
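To see why the code alone tells you so little, consider a deliberately simplified, hypothetical sketch (not Twitter's actual ranking code): a timeline ranker that scores posts by a weighted sum of engagement features. The scoring code is identical in both runs below; only the weights, which would normally be learned from training data that is not part of the source code, differ.

```python
def rank_timeline(tweets, weights):
    """Order tweets by a weighted sum of their engagement features."""
    def score(tweet):
        return sum(weights[feature] * tweet[feature] for feature in weights)
    return sorted(tweets, key=score, reverse=True)

tweets = [
    {"id": "a", "likes": 120, "replies": 4, "recency": 0.2},
    {"id": "b", "likes": 10, "replies": 30, "recency": 0.9},
]

# Two weight sets, standing in for models trained on different data.
engagement_model = {"likes": 1.0, "replies": 0.5, "recency": 10.0}
conversation_model = {"likes": 0.01, "replies": 2.0, "recency": 10.0}

print([t["id"] for t in rank_timeline(tweets, engagement_model)])    # ['a', 'b']
print([t["id"] for t in rank_timeline(tweets, conversation_model)])  # ['b', 'a']
```

The same published source produces opposite orderings depending on weights a reader of the code never sees, which is the point critics make about training data.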
"Most of the time when people talk about algorithmic accountability these days, we recognize that the algorithms themselves aren't necessarily what we want to see. What we really want is information about how they were developed," said Jennifer Cobbe, a postdoctoral research associate at the University of Cambridge. That is largely because of concerns that AI algorithms can perpetuate the human biases present in the data used to train them. Who develops an algorithm, and what data they use, can make a meaningful difference to its results.
For Cobbe, the risks outweigh the potential benefits. The computer code gives us no insight into how an algorithm was trained or tested, what factors or considerations went into it, or what was prioritized in the process.
So open-sourcing its algorithms may not make a meaningful difference to transparency at Twitter, and it could pose some significant security risks.
Companies often conduct data protection impact assessments, which probe and test systems to highlight weaknesses and flaws. When flaws are discovered, they are fixed, but the details are often redacted to prevent security risks. Open-sourcing Twitter's algorithms would let anyone inspect the site's codebase, potentially allowing bad actors to pore over the software and find vulnerabilities to exploit.
"I don't believe Elon Musk is envisaging open sourcing all of Twitter's infrastructure and security aspects," said Eerke Boiten, professor of cybersecurity at De Montfort University, Leicester.