Saving Representative Democracy from Online Trolls

The Strategist


Details

Date Published
8 Jan 2024
Priority Score
2
Australian
Yes
Created
8 Mar 2025, 02:41 pm

Authors (1)

Description

More than 70 national elections are scheduled for 2024, including in eight of the 10 most populous countries. But one group is likely to be significantly under-represented: women. A major reason is the disproportionate amount ...

Summary

The article highlights the challenges faced by women politicians globally due to online abuse, exacerbated by AI-generated deepfakes and reduced content moderation by major platforms like Meta, X and YouTube. It emphasizes the role of AI in creating toxic environments that inhibit women's political participation, thus weakening democracy. The piece calls for technology companies to enhance their content moderation policies and integrate 'safety by design' into new products. It also discusses global policy efforts to combat gendered online harassment, relevant for reducing democratic risks and fostering inclusive governance.

Body

More than 70 national elections are scheduled for 2024, including in eight of the 10 most populous countries. But one group is likely to be significantly under-represented: women. A major reason is the disproportionate amount of abuse female politicians and candidates receive online, including threats of rape and violence. The rise of artificial intelligence, which can be used to create sexually explicit deepfakes, is only compounding the problem.

And yet, over the past year, platforms such as Meta, X and YouTube have de-emphasised content moderation and rolled back policies that kept hate, harassment and lies in check. According to a new report, this has fuelled a ‘toxic online environment that is vulnerable to exploitation from anti-democracy forces, white supremacists and other bad actors.’

Online attacks against women in politics are already on the rise. Four out of five female parliamentarians have been subjected to psychological violence such as bullying, intimidation, verbal abuse or harassment, while more than 40% have been threatened with assault, sexual violence or death.

The 2020 US election was particularly revealing. A recent analysis of congressional candidates found that female Democrats received 10 times more abusive comments on Facebook than their male counterparts. And immediately after presidential candidate Joe Biden named Kamala Harris as his running mate, false claims about her were being shared at least 3,000 times per hour on Twitter.

Similar trends have been documented in India, the United Kingdom, Ukraine and Zimbabwe. Minority women face the worst abuse, together with those who are highly visible in the media or speak out on feminist issues. In India, one in every seven tweets about female politicians is problematic or abusive. Muslim women and women belonging to marginalised castes bear the brunt of the vitriol.

The disproportionate targeting of women discourages them from running for office, drives them out of politics or leads them to disengage from online discourse in ways that harm their political effectiveness, all of which weaken democracy. In Italy, ‘threats of rape are used to intimidate women politicians and push them out of the public sphere,’ says Laura Boldrini, an Italian politician who served as president of the country’s Chamber of Deputies, adding that political leaders themselves often issue these menacing remarks. This creates a vicious cycle: a dearth of women in government has been shown to result in policies that are less effective in reducing violence against women.

Technology companies should take four steps to counter this trend. For starters, they should publish guidelines on what constitutes hate speech and threatening and intimidating harassment on their platforms. Some tech giants have included, and even provided examples of, gendered hate speech in their policies. Google’s YouTube policy is one example.

Second, platforms need to reinvest in effective content moderation for all countries, not just the US and Europe. That means using a combination of human capital and improved automated systems (during the Covid-19 pandemic, when tech companies relied more heavily on algorithms, campaigners in France noticed that hate speech on Twitter increased by more than 40%). Equally important are training human moderators to identify online violence against women in politics and investing more equitably in effective content moderation. Until now, the unpleasant job of finding and deleting offensive content has typically been outsourced to regions where labour is least expensive.

Third, ‘safety by design’ principles should be embedded in new products and tools. That could mean building mechanisms that ‘increase friction’ for users and make it harder for gendered hate speech and disinformation to spread in the first place. Companies should improve their risk-assessment practices prior to launching products and tools or introducing them in a new market. Investing in innovation, such as the ParityBOT, which serves as a monitoring and counterbalancing tool by detecting problematic tweets about female candidates and responding with positive messages, will also be important.

Lastly, independent monitoring by researchers or citizen groups would help societies keep track of the problem and how well tech platforms are handling it. Such monitoring would require companies to provide access to their data on the number and nature of complaints received, disaggregated by gender, country and responses.

In the context of social-media companies’ rollback of content policies and lower investment in moderation, it’s important to note that the percentage of women in tech leadership roles is currently 28% and falling. If, as in politics, female tech leaders are more likely to address violence against women, this trend could create a similar vicious cycle.

Crucially, governments must also take steps to prevent gendered online abuse from undermining democracy. Tunisia and Bolivia have outlawed political violence and harassment against women, while Mexico recently enacted a law that punishes, with up to nine years in prison, those who create or disseminate intimate images or videos of women or attack women on social networks. In the UK, legal guidelines issued in 2016 and 2018 enable the prosecution of internet trolls who create derogatory hashtags, engage in virtual mobbing (inciting people to harass others) or circulate doctored images. In 2017, Germany introduced a law that requires platforms to remove hate speech or illegal content within 24 hours or risk millions of dollars in fines (a similar measure was struck down in France for fear of censorship).

But even when laws exist, female politicians speak of ‘virtually constant’ abuse and report that law-enforcement officials don’t take online threats and abuse seriously. In the UK, for example, less than 1% of cases reported to Scotland Yard’s online hate crime unit have resulted in charges. Police officers and judges need better training to understand how existing laws can be applied to online violence against female politicians; too many think that it’s simply ‘part of the job’.

Tech companies and governments must act now to ensure that both men and women can participate equally in this year’s elections. Unless they do, representative democracies will become less representative and less democratic.