
Violence in Crowdsourcing | Special report


Crowdsourcing violence, a term I often use to describe the invocation of digital mobs through organized, premeditated means, can be understood through deeply personal stories. “Love jihad”, for example, shows how misinformation, through various complicated means, is turned into acts of aggression. In almost all such cases, pre-existing societal fault lines such as religious or ethnic hatred and discrimination are exploited to invoke anger and rage, both of which are catalysts for social media virality and often have devastating consequences.

Imagine a young couple in love caught in the crosshairs of this digital mob. Their once private and sacred relationship becomes a battlefield as a single rumor, amplified through social media, marks them as part of a fabricated conspiracy. In Indian cities, such couples are not merely shamed; they are hunted. The mob, fueled by the false narrative that Hindu women are lured into converting to Islam through marriage, takes it upon itself to “defend” its community. The digital mob organizes, driven by anger and misinformation, to impose its version of justice, often violently disrupting lives in the process. For the couple, what was once a pure connection turns into a life-threatening ordeal in which love is overshadowed by fear.

This is not an isolated event. These digital mobs, animated by anger and violence, draw power from the ease with which rumors spread online. Disinformation, often sown by political or religious actors, finds its way into the hearts and minds of ordinary people, turning them into unwitting soldiers of a false cause. In the case of “love jihad”, the ensuing violence is not organized in traditional ways; there is no central leader giving orders. Instead, ordinary citizens, united by the outrage they feel through their screens, become the culprits. With each shared post and viral video, they grow more convinced of their righteousness, and the violence that follows becomes a collective act of vigilante justice: uncoordinated, decentralized, but devastating in its impact.

The concept of crowdsourced violence seen in the ‘love jihad’ narrative finds a parallel in the recent Lahore college case. Just as baseless allegations about interfaith relationships fueled mob violence in India, the Lahore case was inflamed by misinformation. In both cases, the digital space became a battleground where false rumors were strategically spread to incite public outrage. What started as unverified reports of an attack on a student at a college in Lahore quickly spread on social media, leading to protests, fear and unrest in the community.

In most such cases, the real damage is done by the digital mob, a leaderless, self-organized force fueled by anger and misinformation. Ordinary citizens, without any personal connection to the situation, become instruments of violence and chaos. They act out of a sense of self-righteousness, convinced by the stories they consume online but oblivious to the truth. This crowdsourcing of violence, whether through the lens of religion in India or the distorted narrative of a college attack in Lahore, shows the power of digital platforms to ignite real-world harm. Disinformation no longer remains within the confines of a screen; it spills into our streets and cities, leaving broken lives and institutions in its wake.

The elephant in the room is the social media platforms that profit significantly from the crowdsourced violence phenomenon. Their algorithms are designed to amplify content that drives engagement, whether that content is positive or harmful. Misinformation, especially that which incites anger or fear, spreads quickly because it provokes strong emotional reactions, leading to more shares, comments and likes. In cases like the “love jihad” or the Lahore college incident, platforms benefit from the virality of unverified information, as heated debates, outrage and mob mentality generate high traffic and user activity, which in turn boosts ad revenue. These platforms take advantage of the attention economy, where increased engagement translates directly into financial gain through advertising.

The damage to individuals and communities in this process is profound. Algorithms that favor sensationalism over accuracy create a feedback loop where divisive, hateful content dominates, leading to violence and unrest in the real world. As digital mobs are mobilized, social media platforms remain complicit in amplifying the disinformation that feeds them. Their design choices prioritize profit over social responsibility, allowing harmful narratives to proliferate, often at the cost of human lives and societal cohesion.

To address the harmful spread of misinformation and its role in crowdsourcing violence, we must seek solutions that preserve the benefits of social media platforms while addressing the root causes of digital mob behavior. Banning platforms, as in the suspension of X, or criminalizing speech is not the answer. Besides inviting political abuse, such actions would likely do more harm than good, pushing toxic speech into darker, unregulated spaces where it is even harder to combat. Instead, we need targeted, actionable solutions that focus on regulation, platform accountability and user empowerment.

First, social media platforms must be held accountable for the content their algorithms amplify. An effective solution is algorithmic transparency: platforms should disclose how their algorithms prioritize content, allowing both users and regulators to understand the mechanisms behind the spread of misinformation. Users should also have the option to customize their feeds to favor credible, fact-checked information over sensationalist posts. This could be strengthened by fostering partnerships with independent media outlets, ensuring that false or unverified information is flagged and slowed down before it gains dangerous momentum.

Second, regulation must shift its focus from content censorship to structural accountability. Instead of simply banning harmful speech, which risks violating free speech and other fundamental rights, regulators can require platforms to build more robust content moderation systems, respond faster to harmful viral trends, and impose stricter penalties for the intentional dissemination of disinformation. Improved AI tools could be deployed, for example, to detect and disrupt the spread of violent narratives before they lead to real-world harm.

Ultimately, the most sustainable solution lies in improving digital literacy. Empowering users to critically evaluate the information they encounter can break the cycle of crowdsourced violence at the source. This can be achieved through comprehensive media literacy programs integrated into schools and public campaigns that teach people to identify misinformation and avoid contributing to digital mobs.

Social media platforms provide an invaluable service by offering spaces for connection, activism and innovation. Instead of scapegoating them, we need to work together to ensure they contribute to the public good by stopping the spread of dangerous misinformation, encouraging critical thinking and holding them accountable for the content they amplify. The challenge is great, but solutions are within reach if approached through cooperation, innovation and responsible regulation.


The writer is the director and founder of Media Matters for Democracy. He writes about media and digital freedoms, media sustainability and combating disinformation.