TRUST OUR TWITTER ALGORITHM, NOT YOUR LYING EYES

As a genuflection to the brave new world of woke, Twitter has announced an “algorithmic bias bounty challenge”, promising rewards for anyone who can find bias in its code.
So, will Donald Trump get some cold, hard cash for pointing out that hundreds of his tweets were restricted or banned during the 2020 election, while not one of Joe Biden’s tweets received similar treatment?
Not quite. The initiative appears to be focused on microaggressions far too subtle for the average non-woke American to discern. It was partly born out of a relatively obscure incident in September 2020, when Twitter’s photo-preview feature was found to potentially have a “racial bias.”
Twitter has earned scorn for becoming a hopelessly biased censor. During the 2020 election cycle, countless ordinary conservatives, as well as news outlets, pundits and even prominent politicians, were purged or suppressed.
But the bans and suppression have also covered for Big Pharma, Chinese COVID narratives, and wildly contradictory government directives and policies whose disastrous consequences are still playing out.
So it’s unsurprising that many see Twitter’s latest move as little more than a PR stunt that will do nothing to address its plain and corrosive manipulations of public discourse.
“Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public. We want to change that,” Twitter executives Rumman Chowdhury and Jutta Williams wrote in a blog post.
Tech company bounty programs that reward hackers for finding bugs and vulnerabilities in code have been around for a long time. Twitter is ostensibly applying that model to its algorithms – but not really.
It’s not providing the bulk of its sophisticated, proprietary AI-driven code to any neutral group for examination.
Instead, it’s offering up only a relatively small portion of its code, the part that handles image cropping, for review:
“We’re inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public. We want to cultivate a similar community, focused on ML ethics, to help us identify a broader range of issues than we would be able to on our own. With this challenge, we aim to set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms.”
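For context, the code in question drives Twitter’s photo previews: a saliency model scores each pixel for how eye-catching it is, and the preview is cropped around the highest-scoring point. The short Python sketch below is a generic, hypothetical illustration of that idea (the brightness-based stand-in for saliency and the function name are illustrative assumptions, not Twitter’s trained neural network); it shows where bias can creep in, since whatever the model deems “salient” decides who stays in the preview.

import numpy as np

def saliency_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    # Hypothetical saliency score: deviation from the image's mean brightness.
    # Twitter's real cropper uses a trained neural saliency model instead.
    gray = image.mean(axis=2)
    saliency = np.abs(gray - gray.mean())

    # Centre the crop on the most "salient" pixel, clamped to the image edges.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

# Example: crop a random 400x600 RGB image down to a 200x200 preview.
preview = saliency_crop(np.random.rand(400, 600, 3), 200, 200)
print(preview.shape)  # (200, 200, 3)

Any systematic skew in that saliency score, whether in a toy heuristic like this or in a trained model, skews which faces end up in the preview, which is exactly the kind of narrow defect the bounty invites people to hunt for.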
If any “bias” is found in Twitter’s PR play, winners will receive prizes of up to $3,500.
As some reports have generously put it, the bias bounty program “is a great move by Twitter in showcasing the company in a positive light and it is one that may very well pay off.”
