November 5, 2024

Online Gendered Violence Against the Trans Community in Pakistan: The Dolphin Ayan Khan Case

DRF investigates the dissemination of harmful content targeting transgender community member Dolphin Khan on social media platforms and identifies gaps in the enforcement of the platforms’ content moderation rules

Trigger Warnings: Discussions of Non-consensual Nudity, Threats of Bodily Harm, Technology-Facilitated Gendered Violence, Blurred/Obscured screenshots from a non-consensual video (for information purposes).

Context:

Pakistan’s transgender community has persistently experienced violence, ostracization at the societal level, and sexual exploitation over the years. In the province of Khyber Pakhtunkhwa alone, 267 cases of violence against the transgender community were reported during 2019-2023, yet these resulted in only one conviction. To make matters worse, the community has long been a victim of online hate on social media platforms. In October 2024, DRF released “Gendered Disinformation in South Asia Case Study - Pakistan”, which focused on the discrimination and online hate speech directed at Pakistan’s trans community. The report found that at least 22% of harmful social media posts (including technology-facilitated gender-based violence (TFGBV), gendered disinformation, and gendered hate speech) were aimed at the transgender community. However, as the report noted, meetings and escalations with social media platforms concerning trans-specific hate speech were unsatisfactory, owing to the platforms’ inadequate responses - with X, in particular, remaining largely unresponsive since its change in ownership.

Dolphin Khan Case:

On 29 October, a non-consensual video of a Pakistani trans woman was leaked online and shared by users across multiple social media platforms. The video, which the victim Dolphin Ayan Khan - also known as Dolphin Ayaan - has described as being “forcibly recorded”, shows her, after being abducted at gunpoint, being forced to strip entirely and dance by someone who can be heard in the background but is never seen.

In the wake of the incident, transgender rights activist Dr. Mehrub Awan drew attention to the video on X on 30 October. Condemning the persistent harassment of the trans community, Dr. Awan stressed:

 “...We have literally written papers, done podcasts, book chapters, and spoken to media and officials about “Beela violence” and how organised it is. We, ourselves, have presented data and identified hotspots - Mardan and Peshawar - and profiled the criminals involved. We have done everything that we, as a broken and battered community, could do. Ayyan (sic) was on the roads just a month ago organizing protests, and a year ago injured with bullets. When does this end? What else is expected from a community literally on the receiving end of genocidal murders in Pakhtunkhwa to do?”

Following the incident, Ms. Khan issued a video statement naming the alleged perpetrator behind the video and confirming that police authorities had been informed. Seeking justice on the matter, she vowed to hold a press conference on the issue in November. According to news reports, a case against the perpetrators of the video - which was filmed in 2023 - was filed in Khyber Pakhtunkhwa on 1 November under the Prevention of Electronic Crimes Act 2016 (PECA).

Harmful Content on Social Media Platforms: DRF’s Findings

DRF conducted preliminary investigations into the matter to determine whether these videos were available on social media platforms, particularly X, Facebook, Instagram, YouTube and TikTok. The platforms were reviewed on 31 October between 9:00 am and 3:00 pm, using the search terms “Dolphin Khan”, “Dolphin Ayan”, and “Dolphin Ayan Khan”. Owing to capacity and time constraints, DRF was unable to look at other platforms such as WhatsApp, Snack Video or Snapchat.

  • Non-consensual nude content:
    Initial investigations found that at least two accounts on X had posted clips of the actual video. None of these video clips were found on Facebook, TikTok or Instagram. Furthermore, DRF came across at least two concrete instances where a link shared by an account on X led to an uncensored copy of the video hosted on at least two different pornographic websites (screenshots provided, but with URLs removed and images blurred). As of 31 October, these X profiles were still up, with active links.
    Platform rules on non-consensual video:
    The availability of Dolphin’s non-consensual videos on X is in violation of X’s non-consensual nudity policy, which states that users cannot “post or share intimate photos or videos of someone that were produced or distributed without their consent.” Accounts that violate this policy can either be suspended or temporarily locked.
    While the accounts posting Dolphin’s videos had not been suspended at the time of data gathering, it was unclear whether they had been locked. Irrespective of any such restrictions, merely locking an account without removing the harmful post seemed an ineffective strategy in this case. DRF’s Cyber Harassment Helpline has previously recorded similar incidents in which transgender activists’ pictures and videos were shared on the platform in violation of this policy, yet were not removed.

Accounts on X actively sharing clips from the leaked non-consensual video


Example of users on X sharing an external link to a pornographic website that is hosting the video in question

  • Posts containing malicious links:
    Multiple accounts on Facebook and X were collectively luring users towards suspected malicious web links that purported to offer full access to the video of Ms. Khan - a second violation of these platforms’ policies. In the case of X, DRF found at least eight unique accounts that purported to offer full access to the video, and at least one account that made three posts with different images, all linking out to the same spam or suspicious link, as indicated in the screenshots in this report. Similarly, on Facebook, DRF came across at least ten unique accounts that claimed to offer full access to the video, each sharing the same screenshots. One Facebook account made at least two posts offering the same spam or suspicious link, and another account with a slightly different name shared the same link. On YouTube, DRF found at least one example of a user purporting to offer the video in full in a comment section (with a censored screenshot), only for the link to lead to a spam website (this appears to be that particular YouTube account’s modus operandi across different sorts of videos, as an attempt to garner views and likes). No posts containing suspected malicious links were found on Instagram or TikTok within the time period under investigation. A minimal sketch of how such repeated-link clusters can be surfaced follows the screenshot below.


Example of users on X sharing suspected malicious web links
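The recurring pattern above - many accounts pushing the same link with different images - can be surfaced by simple grouping once post records have been collected. The sketch below is purely illustrative of that workflow and is not DRF’s actual tooling; all platform, account and URL values in it are placeholders.

```python
# Minimal sketch: group hand-collected post records by the link they share,
# to surface clusters of accounts pushing the same spam URL.
# All platform names, account handles, and URLs are illustrative placeholders.
from collections import defaultdict

posts = [
    ("X", "@account_a", "http://spam.example/full-video"),
    ("X", "@account_b", "http://spam.example/full-video"),
    ("Facebook", "page_c", "http://spam.example/full-video"),
    ("Facebook", "page_d", "http://other.example/clip"),
]

# Map each link to the set of (platform, account) pairs that shared it.
by_link = defaultdict(set)
for platform, account, link in posts:
    by_link[link].add((platform, account))

# Any link shared by more than one account is a candidate coordinated cluster.
for link, accounts in by_link.items():
    if len(accounts) > 1:
        print(f"{link} shared by {len(accounts)} accounts: {sorted(accounts)}")
```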

It is unclear whether all of the observed links lead to active malware, and this requires further investigation. Initial checks on a few links found on X and Facebook showed that clicking a link within these posts redirected users to external websites that installed, or attempted to install, unexpected software, adult material or other programmes. This is a common malware distribution tactic that tricks people into downloading harmful software disguised as legitimate (if unethical and hateful) material. VirusTotal likewise flagged these links as suspicious or malicious.
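For readers who wish to screen such links themselves, the sketch below shows one way to query the VirusTotal v3 public API for a verdict on a URL. It is a minimal illustration rather than DRF’s methodology: the API key and the URL being checked are placeholders, and the free API’s rate limits apply.

```python
# Minimal sketch: submit a suspected URL to the VirusTotal v3 API and
# poll for the scan verdict. VT_API_KEY and the scanned URL are placeholders.
import time
import requests

VT_API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # assumption: reader supplies their own key
HEADERS = {"x-apikey": VT_API_KEY}

def scan_url(suspect_url: str) -> dict:
    """Submit a URL for analysis and return the engine vote counts."""
    # Step 1: submit the URL; the response carries an analysis ID.
    resp = requests.post(
        "https://www.virustotal.com/api/v3/urls",
        headers=HEADERS,
        data={"url": suspect_url},
    )
    resp.raise_for_status()
    analysis_id = resp.json()["data"]["id"]

    # Step 2: poll the analysis endpoint until scanning completes.
    while True:
        report = requests.get(
            f"https://www.virustotal.com/api/v3/analyses/{analysis_id}",
            headers=HEADERS,
        ).json()
        if report["data"]["attributes"]["status"] == "completed":
            # Counts of engines rating the URL malicious/suspicious/harmless/undetected.
            return report["data"]["attributes"]["stats"]
        time.sleep(15)  # the free API is rate-limited; poll slowly

stats = scan_url("http://example.com/suspected-link")  # placeholder URL
if stats["malicious"] or stats["suspicious"]:
    print("Flagged by at least one engine:", stats)
```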


Platform rules on malicious links:

Social media platforms have slightly different rules when it comes to regulating accounts that post suspicious links. Meta’s policy on cybersecurity prohibits “Attempts to share, develop, host, or distribute malicious or harmful code…”; offending accounts can be suspended with or without a warning. Similarly, YouTube accounts posting suspected malicious links are in violation of YouTube’s policies concerning “external links”. Such violations - whether the links are malicious or merely suspicious - are subject to a three-strike system: on the first strike, the account is suspended for one week; on the second strike (if within 90 days of the first), it is suspended for two weeks; a third strike within the same 90-day period leads to termination. X’s policies, on the other hand, are less restrictive, stating only that the platform “may take action to limit the spread” of “malicious links that could steal personal information or harm electronic devices” or spam “links that disrupt their experience”.

Thus, accounts posting malicious links on Meta and YouTube are liable to suspension (with or without a warning), whereas those on X cannot be suspended and can at most have the links’ reach limited. In theory, the accounts on YouTube and Meta should have been suspended at least for posting suspected malicious links, and those on X should have had their reach limited. In practice, the content moderation measures in place proved insufficient to protect users from potentially harmful software.

In order to raise timely awareness about the presence of suspected malicious links on platforms, DRF’s Executive Director Nighat Dad posted warnings on Facebook and Instagram about a profile sharing these links, especially on X. However, Meta took down her Facebook post within hours, claiming that it violated the Community Standards on cybersecurity. Interestingly, the same post was not removed from Instagram. Reflecting on the experience, Nighat noted:

“While my story only aimed at warning users against harmful content that itself violated the platform’s rules, it looks like the automated checking system highlighted my post on Facebook as problematic and removed it but not on Instagram.”

Conclusion:

Transgender women in Pakistan are extremely susceptible to violence, as already noted in DRF’s case study on gendered disinformation in South Asia. The transgender community in Pakistan has been subjected to offline violence, accusations of blasphemy and economic harm, perpetuated through orchestrated campaigns like this one that use trans individuals’ non-consensual images to call for further harm and violence. Trans individuals’ acceptance within society is under constant threat, and the rise in targeted attacks against them has already called into question their rights as citizens of Pakistan. The Federal Shariat Court judgement striking down important sections of the Transgender Persons (Protection of Rights) Act 2018 pertaining to self-identification, and the National Database & Registration Authority (NADRA) later temporarily halting the issuance of identity cards for the community, go to show the systematic and institutional violence that the community faces as a result of these online disinformation campaigns.

In Pakistan, non-consensual intimate and nude images are weaponized against women and gender minorities. DRF’s Cyber Harassment Helpline has over time highlighted to platforms that these visuals cause imminent harm to transgender individuals and in many instances can lead to offline violence. Although platforms claim to prioritise content that causes imminent physical harm, their reliance on automated content moderation leaves this harmful content online.

A recent trend in cases of gender-based violence is that victims are filmed and photographed during the violence itself, as an act of domination and intimidation. The presence of harmful content in Dolphin Khan’s case is a reminder of these challenges and of the growing difficulty of removing harmful content from social media platforms. Irrespective of the posts’ wider reach, anyone actively looking for Dolphin’s videos could have found them on X. They would also have been vulnerable to malicious links not only on X but also on Facebook and YouTube. Thus, merely locking or warning accounts for violating community standards may not be enough to proactively protect users from harmful content.

As technology-facilitated gender-based violence (TFGBV) escalates and graphic harms become more frequent and dangerous, platforms need to identify these patterns and approach non-consensual visuals in the Global South through an intersectional lens, in particular by ensuring rapid response mechanisms to deal with this problem. DRF will continue to work with platforms to highlight these challenges and ensure that online spaces are safe for users across all demographics.

 

Published by: Digital Rights Foundation