On Wednesday, Twitter announced that it will study whether its algorithms cause unintended harms, a move that comes as social media firms face intense scrutiny for their role in spreading harmful conspiracy theories and enabling harassment of users. In the coming months, Twitter's Responsible Machine Learning Initiative will release reports investigating possible gender and racial bias in its image-cropping algorithm, a fairness evaluation of Home timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across seven countries.
After Twitter’s image-cropping algorithm was criticized last year for favoring white faces over darker ones, the firm maintained that its tests had not revealed any gender or racial bias; nevertheless, it announced features giving users more control over how their images appear, because the company recognizes that automatic cropping carries a potential for harm. Twitter also said the reports’ findings might inform changes to the platform, shape new strategies for how it designs certain products, and raise awareness around ethical machine learning.
Twitter’s initiative comes as social media firms face allegations that their algorithms fuel growing polarization, misinformation, echo chambers, and online radicalization. Analysts have also criticized Twitter for not doing enough to tackle harassment. Charlie Warzel, the former New York Times columnist, argued in his Galaxy Brain newsletter that Twitter’s Trending section draws the attention of the entire internet onto individual users, leading to disproportionate harassment and the rise of a pervasive cancel culture.
Facebook Doesn’t Have a Commercial Interest in Intensifying Extreme Content
Jack Dorsey, the CEO of Twitter, has previously said that he wants to see a future in which users choose which algorithm to apply from an App Store-like interface rather than relying on a single Twitter algorithm. The social media giant’s tacit acknowledgment that its algorithms might harm users and society at large contrasts with Facebook’s defenses of its own recommendation algorithms.
Nick Clegg, Facebook’s Vice President of Global Affairs, argued in a Medium post that Facebook does not have a commercial interest in amplifying extreme content and that the service is not solely responsible for growing political polarization, since users’ own choices also shape what they see in their News Feeds.