March 8, 2025
Gendered Slurs in Urdu: Social Media’s Moderation Failure and Its Consequences
By Hamna Iqbal Baig
As technology advances, it provides an opportunity for people, particularly women, to take up space on social media—whether to run businesses, build communities, sell products, share ideas, express opinions, or showcase their work. However, there is a downside. Social media platforms are increasingly becoming breeding grounds for digital misogyny, hate speech, and abuse against women. This is worsened by algorithmic biases, targeted harassment, and weak moderation policies, leaving women at the mercy of systems and platforms that fail to protect them.
Gendered Slurs
One of the main challenges is the use of derogatory terms and gendered slurs in Urdu and other regional languages. A quick search on Facebook for the word گشتی (Gashti)—a derogatory term implying promiscuity, often used to insult or shame women—revealed a Facebook account named "مریم گشتی بلاول کنجری" (roughly, "promiscuous Maryam, immoral Bilawal"), followed by 1,683 people. The account appears to use these derogatory terms to target politicians Maryam Nawaz Sharif and Bilawal Bhutto Zardari.
Other common words that can still be found on the platform include کنجری (Kanjari), a highly offensive gendered slur in Urdu and Punjabi meaning a person of low moral character, used to degrade and insult women, and رںڈی (Randi), a slur implying sex work that is frequently used to degrade and humiliate women. بہن چود (Behenchod), a vulgar Hindi/Urdu slur that translates to "sister-f***er", is likewise considered highly offensive and misogynistic across South Asian languages.
Moderation policies
Meta’s ‘Hateful Conduct’ policy states that it removes dehumanising speech, serious insults, slurs, and harmful stereotypes that have historically been used to attack or exclude specific groups, particularly those linked to offline violence. It also prohibits expressions of contempt, disgust, and calls for exclusion or segregation based on protected characteristics. However, exceptions are made for content that uses slurs in a self-referential or empowering way, or when shared to condemn or report harmful speech—provided the speaker’s intent is clear.
Meta defines slurs as words that foster an atmosphere of exclusion and intimidation due to their ties to historical discrimination, oppression, and violence. Yet, the presence of profiles containing such terms, along with the frequent appearance of slurs in comments and captions, underscores how platforms like Facebook have failed—or been unwilling—to effectively moderate them. This gap in enforcement enables misogynistic abuse to thrive unchecked, leaving women vulnerable to online harassment with little recourse.
Recently, on 7 January 2025, Meta announced changes to its content moderation policies, expanding permissible speech under its hateful conduct policy. Users can now post content that was previously banned, including referring to women as household objects or property. The Center for Countering Digital Hate (CCDH) has criticised these changes and warned that weakened moderation could halt enforcement in 97% of key areas, leading to 277 million additional harmful posts annually.
Other platforms like TikTok and YouTube also have content moderation policies in place to protect women from online abuse; however, questions remain about how effectively they are enforced.
TikTok's content moderation policies prohibit harassment and bullying and ban hate speech and hateful behavior, including content that explicitly or implicitly attacks protected groups, which can also lead to exclusion from the For You Feed (FYF). The platform provides tools like comment restrictions, duet/stitch limitations, and messaging controls to help users manage harmful interactions.
Despite these policies, research indicates that abusive hashtags and gender-based abuse targeting women, particularly female politicians, have been present on the platform. The Institute for Strategic Dialogue found that female politicians were targeted with abuse on TikTok and Instagram ahead of the 2022 US midterm elections, suggesting TikTok's content moderation policies are not always enforced properly.
Linguistic Gap
Experts point to a significant linguistic gap: platforms' content moderation systems are trained primarily on dominant languages like English and fail to effectively detect harmful content in less widely spoken languages. This gap extends beyond simple translation issues, as social media platforms struggle to account for linguistic diversity, regional dialects, and cultural nuances. As a result, slurs, hate speech, and gendered abuse in Urdu and other regional languages often go undetected, and harmful content persists on social media.
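To illustrate the gap in concrete terms, the short sketch below is a minimal, hypothetical example, not any platform's actual system. It uses badchalan (بدچلن), a term discussed later in this piece, written in its standard spelling, with the final noon (ن) swapped for a visually similar noon ghunna (ں), and with an inserted space; a blocklist that stores only the standard spelling flags just the first form, even though an Urdu reader sees the same word in all three.

```python
# Hypothetical sketch: why exact-match keyword filtering misses Urdu
# spelling variants. The word list is illustrative only and does not
# reflect any platform's real blocklist or moderation pipeline.

blocklist = {"بدچلن"}          # only the standard spelling is listed

posted_variants = [
    "بدچلن",    # standard spelling        -> caught
    "بدچلں",    # noon ghunna substitution -> missed
    "بد چلن",   # inserted space           -> missed
]

def naive_filter(text: str) -> bool:
    """Flag text only if a blocklisted token appears verbatim in it."""
    return any(word in text for word in blocklist)

for variant in posted_variants:
    print(f"{variant!r}: flagged={naive_filter(variant)}")

# Only the first variant is flagged; the other two slip through, even
# though a human reader of Urdu recognises the same word in all three.
```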
Ramna Saeed, a journalist who has reported for various international and regional outlets, experienced this firsthand when covering a minority rights march on 11 August 2024 for the Urdu service of a Turkish digital news platform. The march, organised to mark National Minorities Day, was met with opposition on the ground from right-wing parties and Islamist groups. After her report detailing what happened during the march was posted on the outlet's Facebook page, she faced a barrage of Punjabi and Urdu slurs, including گشتی and رنڈی, from supporters of a right-wing party. Other comments targeted her personal appearance, and some accused her of spreading "propaganda".
Despite the severity of the abuse, she chose not to report the comments, fearing that the language barrier would render her complaints ineffective. Instead, she requested her organisation to remove some of the abusive comments. "South Asian languages have never been a priority for these platforms," she laments, pointing to the platforms' failure to recognise and track abuse in these languages as one of the primary challenges. "Their focus is on English, despite the fact that there is a massive user base in this region."
After facing online abuse, Saeed was forced to change how she navigates digital spaces. “As a journalist, my profile should be public—that was my choice. But after these incidents, I self-censored myself, made my accounts private, and became more cautious about whom I keep or remove,” she said.
The experience also affected her journalism, making her hesitant to cover sensitive topics like minority rights, blasphemy, and sexual and reproductive health. “I have always specialised in video content, and the report that went viral was a video report. But now, I am reluctant to do video stories and prefer writing instead,” she explained, adding that video exposes a journalist’s face, making them more vulnerable to targeting compared to a byline on a written piece.
A representative from Digital Rights Foundation (DRF)—a Pakistan-based non-profit organisation that works to protect digital freedoms, advocate for online privacy, and combat cyber harassment, particularly focusing on the rights of women, journalists, and marginalised communities—told us, “Social media platforms struggle to moderate such content effectively. While they continuously collect keywords, languages, and hashtags, they often fail to grasp cultural nuances. This allows bad actors to bypass content moderation systems successfully.”
“They also fail to moderate gendered slurs effectively, not just in Urdu but even more so in regional languages. However, this issue exists in English as well. The problem is more pronounced in regional languages because platforms do not invest enough resources—whether human moderators or automated tools—to understand, flag, and remove such content. Even when they detect slurs, challenges arise when they are embedded in audio, video, or images,” the representative added.
Sadaf Khan, co-founder of Media Matters for Democracy (MMfD) and a former journalist and policy advocate focusing on journalist safety, media ethics, and digital rights, is of the view that while social media companies do have content moderators with expertise in local languages, significant gaps remain. “The limited number of moderators prevents a comprehensive review of content, leaving many issues unaddressed. Most importantly, the majority of content moderation is automated, relying on various AI tools. While these tools can detect some problematic content, they struggle with different formats, scripts, and spelling variations,” she said.
Language is highly contextual in social media content moderation, as words and phrases can have different meanings based on culture, region, and intent. “For example, the term gustakh (someone who is perceived as disrespecting religious figures, sacred texts, or Islamic beliefs) is problematic only in a Pakistan-specific context and does not necessarily need to be included in global content filtering. However, current moderation frameworks do not account for such nuances, resulting in a system that fails to effectively address local challenges and needs,” Khan said.
Similarly, in the context of gender, the word tawaif—once used to describe courtesans skilled in classical music and dance—has evolved into a derogatory term often weaponised against women on social media. While not inherently offensive in its historical or artistic context, its modern usage reflects a broader trend in Urdu, where neutral or historical words have transformed into gendered slurs. Other examples include badchalan, once meaning "misbehaved" but now used to shame women and imply promiscuity, and badkaar, historically meaning "wrongdoer" but now a common insult targeting women's morality. These linguistic shifts highlight how social media platforms struggle to moderate harmful language, particularly in non-English and culturally specific contexts.
AI’s implications
Apart from gendered slurs, another major shortcoming in content moderation is the challenge posed by rapidly advancing Artificial Intelligence (AI) technology, with which platforms are struggling to keep pace. With the advent of AI, women—particularly journalists, politicians, activists and influencers—are increasingly concerned about being targeted with AI-generated, manipulated images and deepfakes. These technologies are being weaponised to harass, discredit, and silence women, often leading to reputational damage, emotional distress, and even threats to their safety.
For many women, these online attacks are not just virtual threats but have real-world consequences, forcing them to withdraw from public life or abandon their careers. Shukria Ismail, a female journalist from District Kurram, is one of them. She worked actively in the media industry for two years before becoming the target of severe online harassment. “Fake accounts were created against me on Facebook, Instagram, and TikTok,” she said. “The individual responsible manipulated my pictures, merging them with inappropriate images. Messages were sent in my name to my family, relatives, and even to individuals who were known adversaries of my parents. False accusations were made against my character, and in some cases, money was even demanded using my identity.”
For over a month, her family endured immense distress. Despite repeatedly approaching the Federal Investigation Agency (FIA), she said, “they did not take the matter seriously.” Meta, too, failed to respond. “We consistently reported these fake accounts through FIA, sent multiple emails to the concerned social media platforms, yet we received no response,” she explained.
Ultimately, the consequences of this ordeal fell entirely on her. Unable to bear the situation, her parents decided to confine her to their home, effectively ending her career in media. “I am only able to write from home now,” she said. The incident also led to her emergency and compromise-based marriage, further restricting her independence.
Experts believe that complaints regarding AI-generated content are not handled effectively under the platforms' current content moderation policies. Platforms face several limitations in detecting and removing AI-generated deepfakes and manipulated images targeting women.
“First, they have pretty limited detection capabilities—they rely on automated detection tools that are quickly becoming less effective as generative AI models grow stronger and more advanced. These tools are essential for large-scale monitoring and moderation, but they lack the nuance needed, especially in non-Western contexts,” Khan said.
She also said that gender is a complex subject, and AI is unlikely to engage effectively with gender and harm, particularly when these concepts vary drastically from one context to another. “What counts as online abuse, what qualifies as dangerous or inciting speech, can differ vastly from one country to another, making AI models for monitoring largely ineffective.”
Interestingly, the Oversight Board, an independent body that reviews Meta's content moderation decisions, has intervened in two cases concerning AI-generated explicit images of female public figures on Instagram and Facebook, where the platform failed to uphold its own policies.
The first case involves an AI-generated nude image resembling an Indian public figure, which remained on Instagram despite user reports. Meta later admitted its mistake and removed it for violating its Bullying and Harassment policy. The second case concerns an AI-generated image of an American public figure being groped, posted in a Facebook group. Meta had earlier removed a duplicate of the image and added it to its automated enforcement system. However, when the image was posted again and reported, the report was automatically closed.
The Board emphasised that non-consensual deepfake intimate images disproportionately harm women and should be strictly prohibited under a clearer "Non-Consensual Sexual Content" policy. It urged Meta to improve reporting mechanisms, avoid relying on media reports to detect violations, and ensure appeals for image-based sexual abuse are not auto-closed.
One major aspect of taking down AI-generated content is how platforms define what is considered harmful. Regarding this, DRF’s representative said: “It all depends on how these companies define what is harmful and what should be taken down—whether it's AI-generated content, Non-Consensual Sexual Content (NCSC) or Non-Consensual Intimate Images (NCII). Their policies determine what qualifies as sensitive or intimate imagery and how it should be removed.”
They said that platforms often need to be briefed on what constitutes harmful content in a Pakistani context. "Even partially nude images can be harmful in our context. Culturally, women here [in Pakistan] are fully clothed—we don’t typically wear bikinis or expose our legs. These nuances don’t always fit within their existing policies, so we have to make them understand the cultural context and why such images, whether real or AI-generated, can be damaging to women," they said.
Role of trusted partners
When platforms struggle to moderate content—particularly gendered slurs, AI manipulation, and abuse against women—trusted partners like the Digital Rights Foundation step in. “Platforms constantly turn to us for guidance, and we actively flag bad actors, behaviors, keywords, and emerging trends to help them improve their moderation efforts,” the representative said.
DRF operates a Cyber Harassment Helpline, which plays a crucial role in mediating between social media platforms and survivors of online abuse. “We receive numerous cases—not just involving women in general, but also women journalists, women activists, and members of the transgender community facing gendered slurs,” a DRF representative explained. “We constantly flag these slurs to the platforms, inform them about emerging trends, and work to make them understand the context. Often, they either don’t grasp the nuances or fail to recognise the real-world consequences of such language.”
Saeed shared that after the gendered online abuse incident, an impersonator began actively posting content under her name, implying that she supported a right-wing group and its ideology. “I came across it on a Sunday, and I panicked—I was crying," she recalled. However, she remembered DRF from a training session she had attended and reached out for help.
DRF intervened and assisted her, but the process took four to five days, as the organisation had to send a request to Meta, which then reviewed the case. Eventually, the account was taken down. "They told me, ‘Since you are a woman journalist, your case was prioritised,’" Saeed said. She pointed out how dangerous such delays could be, given how easily blasphemy allegations can be weaponised in Pakistan.
DRF has gone as far as facilitating a meeting between social media companies and transgender activists during a surge in gendered disinformation and hate speech targeting the trans community. “We wanted them to hear directly from those affected—so they could understand the real-life consequences of these slurs,” they said. “Sometimes, platforms respond by taking down content immediately. Other times, they have to consult their policy teams before acting.”
While DRF’s efforts have led to some policy updates and country-specific improvements in content moderation, the burden should not fall solely on civil society. “We can only do so much—this is something platforms should be addressing proactively. They need to take responsibility for identifying gaps, understanding nuances, and tracking the techniques bad actors use to circumvent moderation.”
Although Saeed is thankful for organisations like DRF, she believes that addressing digital rights issues shouldn't fall solely on NGOs and civil society organisations—it is ultimately the government's responsibility. She pointed to Pakistan's structural and governance challenges, noting that policymakers often lack awareness of the consequences of the laws they make.
Given the grim situation, there is a need for a stronger advocacy network, with more organisations representing Pakistan globally and engaging with platforms like Meta, X and others, so that journalists like Saeed and Ismail don't have to self-censor or abandon their careers.
Published by: Digital Rights Foundation in Digital 50.50, Feminist e-magazine