What we can learn from the first weeks of Musk’s Twitter takeover


Twitter has been one of my favorite social media platforms for the past few years. Through it, I’ve connected with academics, policymakers, and activists. It’s provided a great platform for my work. It’s also where I first met my co-founder for Glitterpill LLC, Samantha Kutner. It’s been a key component of my social and professional life, as it has been for many.

Watching Elon Musk’s Twitter takeover has been professionally concerning. On a personal level, it also feels like the end of an era.

Glitterpill’s informal slogan is “You moved fast and broke things. We’re here to help you fix it.” Our team believes Musk’s actions over the past couple of weeks are worth examining as a lens on social media companies’ values where they intersect with stakeholders’ concerns: where the tears in the social fabric have occurred, and what we can do to repair them.

In the spirit of disclosure: Glitterpill LLC consults with major social media companies. Twitter is not currently among our clients. However, we frequently work with members of the Twitter team through our engagement with the Global Internet Forum to Counter Terrorism (GIFCT) and other cross-sector engagements.

Context

Advertisers care deeply about their brands’ reputations, and few things make them more skittish than the risk of damaging those reputations. The damage can come from a brand appearing in the wrong context, or from a lack of clarity about whom to trust when navigating politically charged, complex geopolitical issues with no roadmap.

These are unprecedented times, and advertisers are right to be cautious about Twitter’s latest developments. Their concern is reflected in companies such as General Mills, United Airlines, Pfizer, Mondelez International, and Volkswagen pulling their advertising.

According to the New York Times, advertisers are responding to the fear that hate speech and misinformation would proliferate freely under new leadership, leadership that does not appear to consider the human rights implications of a US-oriented flavor of free speech absolutism.

Our team has seen how rhetoric surrounding free speech has been used to launder dehumanizing ideology over the years. This year alone, we have seen unchecked false grooming claims maligning the LGBTQ community escalate to intimidation at library events and bomb threats against hospitals that provide gender-affirming care.

Financial Context

Twitter is no longer publicly traded, so its current market value can no longer be read off a traded share price.

Looking at Musk’s other publicly traded company: Tesla’s stock has dropped by 31.66% in the past month, and by 52.17% over the course of 2022. In an initial response to the deal to take over Twitter, Tesla’s market value dropped by more than $125 billion, and with that, Musk lost $30 billion of his net worth. Taking the markets as a gauge of public trust, this does not look good for Musk, nor for Twitter.

While Musk’s companies are struggling, the companies that publicly announced they had pulled ads from Twitter have been rewarded for acting socially consciously. In the past month (writing on the 9th of November), General Mills’ stock value has increased by 4.12%, United by 23.33%, Pfizer by 13.41%, and Volkswagen by 9.13%. These are unprecedented times, but these numbers offer insight, and perhaps permission, to other companies: doing the right thing for society is a very smart business decision.
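For readers who want to reproduce comparisons like these, the figures above are ordinary percent changes in share price. Below is a minimal Python sketch of that arithmetic; the prices in it are hypothetical placeholders, not real market data.

```python
# Minimal sketch of the percent-change arithmetic quoted above.
# The prices below are hypothetical placeholders, not real market data.

def percent_change(old_price: float, new_price: float) -> float:
    """Return the percentage change from old_price to new_price."""
    return (new_price - old_price) / old_price * 100

# e.g., a stock moving from $60.00 to $62.47 over the month:
print(f"{percent_change(60.00, 62.47):+.2f}%")  # prints +4.12%
```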

Our Guiding Philosophy

As we monitor risks and develop strategies to restore brand reputation, our key objective is to help companies make smart and socially conscious business decisions. We frequently see brands appear in conspiratorial networks or in other contexts connected to ban evasion by known dangerous individuals. As the leadership shown by former Twitter advertisers demonstrates, it pays to be responsive and adaptive to emerging extremist threats.

Taking a clear stance against hate, misinformation, and its consequences demonstrates responsible leadership and has direct payoffs for companies — it builds public trust. This, in turn, strengthens brand reputation and increases the likelihood of user retention when new developments arise.

Glitterpill is always happy to work with companies committed to regaining public trust through concrete, data-driven action. There is still time to deeply understand what went wrong and envision the potential paths forward.

fREeZe pEaCh

In this analysis, some fundamental causes and effects are worth highlighting.

Elon Musk offered to buy Twitter in April, after buying a significant portion of Twitter’s stock in breach of SEC disclosure rules, and following a series of tweets in which he questioned Twitter’s commitment to freedom of speech. Musk is now dealing with the fallout of free speech absolutism. His effort to monetize the platform resulted in a proliferation of fake Twitter accounts, and key leaders on these topics within the company are resigning in response to the unsustainable climate his moves have created.

Freedom of speech is an essential human right. It has empowered journalists to uncover corruption and enabled human rights activists to topple dictatorships. It is why authoritarians fear it so intensely. As the building of privately owned, transnational “public spheres” for expression has frequently demonstrated, implementing this fundamental right consistently is perilous. If the process is mismanaged or subverted, it creates a chilling effect on already marginalized groups, who depend on their ability to speak truth to power in the struggle against authoritarian rule and injustice. A reductionist understanding of freedom of expression may thus lead to repression. There is no easy fix for this. Automated moderation, when over-relied upon, is frequently circumvented and outright abused, and it disproportionately affects marginalized voices. Enhancing and protecting freedom of expression for marginalized voices where it is most needed therefore requires deep qualitative insight and understanding of context, something we frequently assist our clients and partners with.

Towards A Global Understanding

Absolutist democracy as majority mob rule was never the intention of the founders of liberal, constitutional democracy. Mob rule, often under the guise of “democracy,” has, as history shows, frequently led to devastation, including genocide. That is why functional democracies have built-in protections, often codified in constitutions, including freedom of expression, separations and limitations of powers, and parliamentary representation. All these protections help ensure that minority groups and parties are heard and that there are established paths both to speak truth to power and to seek justice against violations, even by the government itself.

At this point, Twitter has a single owner and decision-maker who sympathizes with Russia despite its unprovoked war in Ukraine. This is a centralized governance model with no checks and balances in place, and it opens Twitter up to abuse by malevolent state and non-state actors.

Companies and governments that have previously engaged in information operations and spread pro-Kremlin propaganda already see social media as a cheap, effective tool for influencing election outcomes and contributing to an environment ripe for sectarian violence. Global bad-faith actors, mercenaries, extremist organizations, and state actors should not be allowed to buy their way in and actively distort public perception.

If special interest groups can mass-purchase verified accounts, advance their agendas, and shape discourse around a very narrow worldview, the balance of power is lost, corruption fills the void, and we drift farther and farther from a shared understanding of reality.


Former Twitter CEO Jack Dorsey’s response, “Accurate to whom?”, reflects public concern.

Workers’ Rights

Musk let go of the board, key decision-makers within the corporate leadership structure, and about 3,700 employees. He is currently being sued for it. His corporate takeover does not reflect a desire to protect rights, or even respect for the rule of law, as is shown in several international cases alleging breaches of employment law.

As demonstrated by advertiser behavior and stock prices, Musk’s moves do not instill public trust. From a freedom of expression & human rights perspective, they are deeply concerning.

Musk is attempting to implement a reductionist approach to freedom of expression. He may thus drastically limit content moderation, which often protects marginalized voices. These voices are often pushed out of the public sphere by extremist group behavior, hate speech, and targeted direct and indirect threats of violence on and off platforms. From a human rights perspective, it is apparent that targeted intimidation and harassment of human rights defenders is a form of censorship, yet one rarely accounted for in techno-authoritarian free speech models.

When Musk claims that parody accounts must have “parody” in their title, he also fails to understand, from an intelligence perspective, how parody is used to veil violent calls to action. Parody and gaming references that name targets have been recognized as a threat by DHS.

Extremist Exploitation of Vulnerabilities

Extremist groups like the Proud Boys have gamed social media platforms and comment threads to launder their ideology and recruit. One of their many tactics is to claim they are victims of censorship.

When Proud Boys accounts were banned from Twitter, members frequently doubled down on their claims of being censored. In one instance, a Proud Boy on Twitter claimed he was being discriminated against for being a “MAGA-loving Patriot.” A default yet dangerous response to this behavior has frequently been to take such claims at face value. This disregards the harm caused by such extremists gaming the platforms to spread and amplify hate speech, overloading their opposition and effectively silencing them.

In our view, platforms provided by private corporations are privately owned spaces. They are thus governed by the standards set by the companies, in line with the existing rule of law. Platforms are not responsible for upholding, for instance, constitutional rights to free expression. The right to freedom of expression is, in other words, neither a right to a platform nor a right to amplification on that platform. Or, as Samantha responded to the Proud Boy claiming discrimination: if you go to a bar and you piss on the carpet, the owners have every right to kick you out.

There isn’t a finer case study of a platform going toxic due to unmoderated expression than Monster Island. Monster Island was an experimental project: a private Facebook community that ran for four years and attempted to serve as a safe haven for free expression. Originally led by one left-leaning individual, one right-leaning individual, and one moderate, it aspired to be a space for civil discourse. The group was quickly hijacked by a relatively small far-right contingent who weaponized claims of repression in order to dominate the space, effectively silencing their opponents.

The result was several years of the purest banality of evil. We ended up needing to add rules against doxxing, blocking admins, explicit threats of physical violence, and taking photos from people’s personal profiles and photoshopping them into sex acts with military dictators. Meanwhile, the quality of discourse deteriorated from semi-functional, where some folks could have actual arguments or at least do a dance that looked vaguely like presenting evidence, to endless spam of the most disturbing memes you’ve thankfully never seen.

https://www.skeptic.org.uk/2020/10/monster-island-free-speech-experiment/

A key member of the Monster Island group was Ryan Balch. Balch ended up joining Kyle Rittenhouse and other members of his “squad” as a “tactical advisor” in their attack on the Black Lives Matter protest around the Civic Center Park area in Kenosha. This ultimately led to Rittenhouse, under Balch’s supervision, shooting and killing two protestors.

Beyond highlighting the realities of unmoderated spaces, “Monster Island” also highlights the thin line between online proponents of hate speech under the guise of free speech and offline vigilante violence. That thin line is further evidenced by the threats expressed on Gab, Telegram, and Truth Social, and the violence that ensued in August following the FBI’s search of former President Trump’s Mar-a-Lago property, where agents found classified documents.

In our work with social media companies and consortiums that aim to make social media companies better at dealing with violent extremism, terrorism, hate speech, and misinformation, we often see groups like Monster Island, and more overtly extremist communities, issue direct and coded calls to violence.

At recent conferences like the Eradicate Hate Summit, we listened to Integrity First for America describe how neo-Nazis repeatedly used racial slurs in the courtroom. The function of their verbal behavior was not just shock value; it was an attempt to desensitize jurors to dehumanizing language. The scene described by Integrity First for America in Sines v. Kessler provided insight into how individuals marinate in a conspiratorial soup of racism and misogyny and escalate their commitment to their group.

When free speech has devolved to the point that dehumanizing language, imagery, and content on mainstream social media platforms, or in courtrooms, is indistinguishable from 4chan and other fringe and encrypted platforms, the violence that ensues gets normalized too. It’s why public outrage was notably diminished when neo-Nazis dropped a banner last month claiming Kanye West was right about the Jews. We are tired and desensitized, but the threats we collectively face are no less severe.

In more public corners of the platforms, we see cryptofascism: speech that covertly, in context and by virtue of its target audience, constitutes extremist narratives, hate speech, and incitement to violence. Detecting this requires in-depth expertise, which we, as subject matter experts, provide. We assist investigative teams so they can fully understand what’s happening on their platforms and take data-driven action to protect people from physical harm. We do this while protecting the freedom of speech of those targeted by hatred and threats of violence on the platforms.

In the hours following the purchase of Twitter, NCRI observed a 500% increase in racial slurs on the platform. The Washington Post noted increased pro-Nazi, anti-LGBTQ+, and misogynistic speech, and some far-right accounts saw a surge in new followers. Meanwhile, Musk and Twitter underlined that the hate-speech policy had not changed. What had changed was the public perception of the company: from a space of civil discourse to a space welcoming reductionist freedom-of-expression absolutists who had previously proclaimed censorship when facing the consequences of hate speech and encouragement of violence.

In this context, three more significant things happened that will leave a mark both on the reality of life on Twitter and on the world of human rights and the protection of freedom of expression where it’s most needed. Musk reduced Twitter’s workforce by half, de facto leaving the platform even more vulnerable on content moderation, Trust & Safety, and human rights. He announced that the blue-tick “verified” badge, a tool frequently used to discern real accounts from fakes, could be bought. And as of today, November 10th, the FTC is scrutinizing the removal of consumer privacy safeguards stemming from Musk’s attempt to recoup the money he has invested in the platform. This compromises the safety of all Twitter users.

Verification

Verification of profiles has long been a lifeline for human rights activists, journalists, and other public voices in the human rights sphere, who are frequently impersonated by fake accounts in attempts to discredit and undermine them. By putting “verification” up for sale and decoupling it from actual verification of identity, Musk made impersonation possible for anyone at the low price of $8. Meanwhile, marginalized groups, activists, and others who depend on verification for protection would be beholden to pay the wealthiest man in the world, a man many have deep ethical concerns about, $8 per month. To many North Americans that may sound like a low price, yet in countries like Sri Lanka and Ghana it amounts to approximately 5.33% of an average monthly salary of $150: a high price to pay for activists who are often overworked and underpaid. In response, Twitter may be reinstating account verification in a different form than before. Still, we have yet to see how that works and whether it will be available to non-paying users.
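The affordability arithmetic above is simple but worth making explicit. Here is a minimal Python sketch; only the $8 fee and the $150 average monthly salary come from the text, and the higher-income salary is a hypothetical placeholder added for contrast.

```python
# Sketch of the verification-fee affordability arithmetic.
# The $8 fee and $150 salary come from the article; the
# $4,000 figure is a hypothetical placeholder for contrast.

VERIFICATION_FEE_USD = 8.00

average_monthly_salary_usd = {
    "Sri Lanka / Ghana (per the article)": 150.00,
    "Hypothetical higher-income country": 4_000.00,
}

for label, salary in average_monthly_salary_usd.items():
    share = VERIFICATION_FEE_USD / salary * 100
    print(f"{label}: {share:.2f}% of monthly income")

# Expected output:
# Sri Lanka / Ghana (per the article): 5.33% of monthly income
# Hypothetical higher-income country: 0.20% of monthly income
```

The same flat fee is thus roughly 25 times heavier a burden in the lower-income case, which is the point the paragraph above is making.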

As platforms such as MeWe demonstrate, there is nothing inherently wrong with moving from a pure advertiser-based business model to a subscriber-based or mixed model. Advertiser-based models come with pitfalls of their own, such as the incentives they create for how sorting algorithms work.

We would, however, in the case of Twitter, have appreciated a more cautious approach, one mindful of potential real-world harm to those fighting to improve their communities, who risk their lives and livelihoods to speak truth to power.

We would also welcome a deeper conversation about the true meaning and practical implications of protecting free expression and empowering marginalized voices to freely participate in public discourse on the platform without facing silencing hatred, death threats, and the risk of violence.

Despite Musk’s prior criticisms of Twitter bots, the company under his leadership has not earnestly attempted to address the challenges posed by inauthentic behavior. Nor has it held accountable those seeking to disrupt democratic processes through adversarial disinformation networks. Recent developments, as of November 10, have highlighted major information vulnerabilities that Musk’s carelessness has left open for hostile nations to exploit.

Balancing rights is never easy, nor is it something we take lightly. These are complex, nuanced conversations we have with our clients and partners on a daily basis in order to support their efforts to make communities safer. As expressed by Albert Camus, it’s the job of thinking people not to be on the side of the executioners.

The market’s response to the acquisition, as well as stakeholder community concerns, reflects a demand for a more cautious, globally conscious, and well-thought-out approach to the future of the company before significant harms occur.

Twitter moved fast, and they certainly broke some things in the process. They’re suffering financial and reputational damage from that, as well as FTC scrutiny. If they wish to, we’re here to help them fix it. In the meantime, we’re excited to put our collective skills to use and continue our work with brands and companies who see the value of deeply understanding the risks posed by extremism, hatred, and disinformation on social media, and of finding creative ways to counter them.

An ounce of prevention is worth a pound of cure.
