
June 17, 2025

May 2025 Newsletter: Research on Misinformation, Geo-Blocking & TFGBV Post Pahalgam Attack and During Indo-Pak Escalations

In the wake of the Pahalgam attack, DRF analysed 5 major misinformation campaigns and 4 viral hate speech trends to show how social media plays a role in spreading inflammatory content. DRF also collected data during the Indo-Pak escalations, revealing a troubling trend of geo-blocking and censorship on the other side of the border, raising human rights concerns and exposing inconsistent platform moderation policies. These findings were released in the form of a short investigative report with an online explainer.

Finally, DRF analysed 295 unique posts pertaining to the escalations across 5 platforms, and discovered that 25% of these were directly connected to gendered disinformation, TFGBV, and gendered hate speech, identifying 5 key content categories of platform-enabled misogyny on both sides of the border. These findings were released as a report with an online explainer.

Regional Engagements & Initiatives

Nighat Dad at Online Panel Discussion by CSOH & The London Story

DRF ED Nighat Dad spoke on a panel discussion titled “The Digital Warfare Between India and Pakistan” on 22 May. The panel highlighted the rise of online hate, misinformation, and disinformation after the Pahalgam attack and the outbreak of hostilities between India and Pakistan.

Our Latest Research & Advocacy

Second Issue of Digital 50.50 Launched

This issue spotlights the alarming nexus between tech platforms and bad actors to suppress free speech online, threatening journalist safety and human rights across digital spaces. Given the recent Indo-Pak escalations with cyber warfare and disinformation campaigns, this issue strives to highlight how media, civil society, and digital rights advocates must reclaim and protect online spaces from such threats.

 

DRF Releases Annual Report 2024

In 2024, DRF continued its mission to advance digital rights and online freedoms across South Asia and the Global Majority. From addressing pressing digital rights challenges to amplifying critical conversations on AI governance and data privacy, our work has remained at the forefront of change. Our 2024 annual report captures a year of progress, resilience, and impact in our ongoing mission to create a safer, more inclusive digital landscape.

 

 

World Press Freedom Day

For World Press Freedom Day, DRF drew attention to Pakistan’s latest ranking in the World Press Freedom Index 2025: 158 out of 180, a drop of six places from its 2024 ranking. Read our analysis on the shrinking space for press freedom here.

 

 

Resources and Toolkits During Indo-Pak Escalations

Given the rampant misinformation, digital security threats and psychological overwhelm citizens experienced during the Indo-Pak escalations, DRF issued several resource lists and toolkits to support citizens (as well as journalists) during this precarious period.

The State of Free Speech in Pakistan

According to this year’s Future of Free Speech Index, a report by The Future of Free Speech measuring support for free speech across 33 countries, Pakistan ranked among the lowest in terms of support for free speech in 2024. The findings do, however, reveal higher support for certain types of speech compared to 2021.

Digital Rights Tracker Updates

DRF continued to provide weekly updates from its Digital Rights Tracker. However, given the rapid pace of escalations between India and Pakistan in May, a limited-edition Indo-Pak Escalations Tracker was launched to provide frequent updates on cybersecurity breaches, arbitrary online restrictions, and misinformation/disinformation.

Press Coverage

DRF Statement on Verifying Information 

On 7 May, at the onset of escalations between India and Pakistan and amidst significant panic, misinformation proliferated online. DRF quickly issued a statement cautioning citizens against spreading information without verification, and urged everyone to share responsibly.

DRF Advice on Spotting Fake News for Geo News 

DRF shared expert advice and tips to educate citizens on how to identify fake news and misinformation during the Indo-Pak escalations for Geo News. Read more here.

 

 

Nighat Dad Highlights Misinformation War to ABC News Australia

 

 



Nighat Dad shared insights with ABC News Australia about AI-generated war disinformation during the Indo-Pak escalations. DRF Research Associate Sara Imran also weighed in, highlighting an example of a viral video that falsely claimed to show a couple’s final moments before the Pahalgam attack. Read the article here.

Seerat Khan on The DigiPod

DRF Senior Researcher Seerat Khan appeared on The DigiPod with Farieha Aziz to discuss the increase in misinformation and disinformation post-Pahalgam and during the Indo-Pak escalations. She also discussed DRF’s latest research in this regard. Listen to the episode here.

Anam Baloch on Rights Watch

DRF Programs Lead Anam Baloch appeared on VoicePK’s Rights Watch to discuss DRF’s findings on misinformation and disinformation post-Pahalgam and during the Indo-Pak escalations, shedding light on the necessity of local crisis monitoring mechanisms, CSO fact-checking, and cross-border verification initiatives. Watch the segment here.

Nighat Dad Examines Strategic Bitcoin Reserve on Geo

Nighat Dad weighed in on the Strategic Bitcoin Reserve on Geo Pakistan, emphasizing transparent and timely regulation, as well as enhanced digital literacy, as key to enabling new forms of employment and a new digital financial frontier. Watch the segment here, and read its coverage here and here.

DRF was also featured in the following press coverage:

Media Outlet | Date | Title
The News | 1 May 2025 | HRCP’s 2024 human rights report paints bleak picture
Dawn | 4 May 2025 | Safety for women
The News | 5 May 2025 | Inside the fight against online harassment
Ananke Mag | 5 May 2025 | Op-Ed: Can Gender And Development Address Tech Facilitated Violence Against Women in Pakistan?
APP | 9 May 2025 | Digital expert warns against misinformation, urges citizens to rely on official sources
Dawn | 9 May 2025 | Digital rights expert Nighat Dad urges public to be cautious of misinformation online
The Nation | 10 May 2025 | Digital expert warns against misinformation, urges citizens to rely on official sources
Digital Information World | 10 May 2025 | How to Detect Misinformation During India-Pakistan Tension Without Falling for Manipulated or Misleading Sources
Yahoo News (AFP) | 14 May 2025 | Pakistan military gets social media boost after India flare-up
365 News | 14 May 2025 | Decoding Deepfakes with Nighat Dad: Understanding The Risk Of AI-Generated Misinformation
Cape Times | 15 May 2025 | Social media boost for Pakistan military after India flare-up
Global Voices | 15 May 2025 | Guns, gags and trolls: Disinformation and censorship are shaping the India–Pakistan conflict
Radio New Zealand | 26 May 2025 | Misinformation war rages online amid India-Pakistan tensions
The News | 28 May 2025 | Experts call for stronger online protection as massive data breach unveiled

 

Events

Workshop for Marginalized Groups on Online Safety 

DRF was invited by the Christian Study Centre to speak with 15 participants from marginalized communities on social cohesion in online spaces and digital safety. The Youth Digital Media Training covered mis/disinformation, fact-checking, ethical use of AI, gendered harm, the importance of consent, keeping accounts protected, and browsing the web safely.

 

Tech Trends

Pakistan’s Crypto Confusion

While the State Bank of Pakistan and Ministry of Finance insist that crypto remains illegal, the government simultaneously unveiled its first Strategic Bitcoin Reserve at a high-profile event in Las Vegas. The contradiction of a national ban alongside pro-crypto rhetoric and mining plans has left investors puzzled. As the State Bank refers crypto cases to law enforcement, Pakistan positions itself as a future digital finance hub, but without a clear legal framework, this trend walks a regulatory tightrope.

 

Pakistan taps Starlink to boost digital connectivity

Pakistan is in advanced talks with SpaceX to bring Starlink’s satellite internet to underserved regions. During a high-level visit to SpaceX HQ, officials explored collaboration on expanding broadband via low Earth orbit satellites. With licensing expected to wrap up soon, Starlink services are projected to launch in Pakistan by November 2025. The move could revolutionise digital access in remote areas, marking a major step in bridging Pakistan’s digital divide and supporting its growing freelancing and tech ecosystem.

 

Tip of the Month: Digital Spring Cleaning: Refresh Your Online Presence

Out with the old: Delete unused accounts and outdated profiles to reduce your digital footprint.

Unsubscribe and breathe: Clear your inbox by unsubscribing from newsletters you no longer read. (But not this one! 🙂)

Tidy up your apps: Remove apps you haven't used in months to free up space and enhance security.

Update for safety: Keep your devices and software updated to protect against vulnerabilities.

Privacy check: Review and adjust your social media privacy settings to control what you share.

Clear browsing data: Delete cookies and cache to improve browser performance and privacy.

Stay vigilant: Regular digital cleanups help maintain your online security and peace of mind.

DRF Resources

Digital Security Helpline

The Digital Security Helpline received 247 complaints in May 2025, of which 208 were related to cyber harassment.

The Helpline issued a resource list of safety tools to navigate travel with digital risks like device searches or data extraction. A public advisory was also issued when the National Cyber Emergency Response Team (CERT) announced a major data breach exposing over 184 million passwords.

 

If you’re encountering a problem online, you can reach out to our helpline at 0800-39393, email us at helpdesk@digitalrightsfoundation.pk or reach out to us on our social media accounts. We’re available for assistance from 9 AM to 5 PM, Monday to Sunday.

IWF Portal

www.report.iwf.org.uk/pk 

 

StopNCII.org

https://stopncii.org/

 

May 2, 2025

Rampant misinformation and hate speech surrounding Pahalgam attack on Indian and Pakistani media

By Sara Imran and Maria Nazar, Research Associates, Digital Rights Foundation

Note: This is a developing story; updates will be provided as the situation develops.

In the deadliest attack in Kashmir since 2000, suspected rebels killed 26 tourists in Pahalgam, a tourist resort in Indian-administered Kashmir, on 22 April 2025. Chaos erupted on social media in the aftermath of the attack, with Indian media outlets and users pointing fingers at Pakistan as the instigator, whereas the Pakistani government denied its involvement, blamed “home-grown” forces within the Indian-administered territory, and additionally termed it a “false flag operation”. Indian Prime Minister Narendra Modi vowed to punish “terrorists and their backers” and pursue them to “the ends of the earth”.

A statement issued in the name of The Resistance Front (TRF), an armed group that emerged in Kashmir in 2019, allegedly claimed responsibility for the attack. On 25 April, they then allegedly denied any involvement in the attack, citing in their statement a “brief and unauthorized message” posted on one of their digital platforms, which after an internal audit they have “reason to believe…was the result of a coordinated cyber intrusion – a familiar tactic in the Indian state’s digital warfare arsenal.” The TRF statement further read: “This is not the first time India has manufactured chaos for political gain.”

In the chain reaction of escalations triggered by the Pahalgam attack, India closed its main border with Pakistan, expelled diplomats, cancelled SAARC visas, and suspended the Indus Waters Treaty (IWT). Pakistan, reading the suspension of the IWT as an “act of war”, reacted with a series of countermeasures, including the closure of its airspace and the Wagah border, and the possible suspension of the Simla Agreement.

In tandem with the breakdown in diplomatic ties between the two countries, rampant misinformation was observed across social media platforms and news outlets in India and Pakistan. At a recent briefing with diplomats, Pakistani Foreign Secretary Amna Baloch rejected what she termed the “Indian misinformation campaign against Pakistan”.

In this tense atmosphere, aiming to capture and report on the dangerous misinformation surrounding the Pahalgam attack in a timely manner, DRF analysed 72 unique posts by 52 unique users and media outlets across five social media platforms: X (formerly Twitter), Facebook, Instagram, Reddit, and YouTube. It discovered instances of not only mass misinformation, but also hate speech, threats, and even genocidal intent, from a mix of Indian, Pakistani, and other users and media outlets.

The data on misinformation and hate speech has been categorised by DRF into five major misinformation claims, i.e. the claims that went most viral or were most under discussion, and three major hate speech-laden threats, respectively.

Major misinformation claims

1. The attack was a Pakistan military operation

In the finger-pointing that directly followed the attack, a major point of contention online was the alleged involvement of the Pakistani military.

The allegation of Pakistani military involvement was propagated by Indian news channel Times Now, which has been spreading much misinformation and fake news around the Pahalgam attack, including the viral couple video, which will be discussed further on.

Indian news channels like Republic TV continue to run headlines with inflammatory hashtags like #WeWantRevenge flashing across their tickers, with an on-site reporter claiming “Terrorists from Pakistan have attacked here” on a YouTube live broadcast.

Interestingly, the claim of Pakistani military involvement in the attack was not leveled solely by Indian media, but also by at least one Pakistani, Adil Raja, a war veteran and investigative journalist with a large Pakistani following.

However, these claims entirely lack evidence to support them. The Pakistani government has outright denied any involvement in the attack, with Deputy Prime Minister and Foreign Minister Ishaq Dar rejecting Indian allegations of cross-border terrorism as “baseless blame games”.

2. ‘Final video’ of couple allegedly killed

As the identities of the tourists came to light, one of the victims was identified as Indian Navy officer Lieutenant Vinay Narwal. A video of Lt. Narwal and his wife dancing in Kashmir on their honeymoon began to go viral, with captions such as “The last video shared by Lt. Vinay Narwal before the #PahalgamTerrorAttack”. These were shared by major Indian news outlets, such as Times Now.

These videos were widely posted and reshared, stirring up a great deal of public sentiment around the tragic death of these individuals. In one Reddit post, the video was shared with the caption “...Pak will pay for this..”.

 

However, in a surprising turn of events, the next day the couple from the dancing video posted a video on their Instagram account clarifying that they were not, in fact, victims of the Pahalgam attack, and were alive and well. The video had been circulated as fake news, falsely linked to the attack.

The couple expressed distress at their video being shared by all major Indian outlets without being fact checked for veracity, and urged their audience to report all such videos which, in their view, had been posted only for views, “making it challenging to trust news sources”.

It was noted that the majority of news outlets that had posted the viral video took it down after the couple debunked it. Meta has also added fact-check disclaimers to the posts still up on Facebook. However, the misinformation had already spread widely before these measures were taken.

3. Kashmiri locals sheltering terrorists

Another major misinformation claim that gained immense popularity among Indian social media users and news outlets was that Kashmiri locals, the majority of whom are Muslim, had been giving ‘shelter to terrorists’ and were involved in the attack. This claim led to a considerable volume of hate speech being perpetuated against Kashmiris.

These claims, in addition to being baseless, also seemed particularly unbelievable and contradictory once reports started emerging of a local Kashmiri pony handler, Adil Hussain Shah, who heroically lost his life trying to protect the tourists from the attackers. The residents of Srinagar, Kashmir, also held a candlelight vigil protesting against the killings, further casting a shadow on the veracity of these claims.

4. Pakistan Army resignations

On 27 April, a new wave of misinformation spread across social media. An Indian X user posted a letter with the Pakistan Army emblem, alleging that Pakistani military officers and soldiers had resigned en masse “amid rising tensions”.

The post was viewed 1M+ times, and reposted widely.

Following this, on 28 April, another post was shared by the Executive Editor of the Indian Telugu channel TV9 Network, with another letter alleging mass resignations in the Pakistan Army.

This post has over 1.3M views, 10K likes, and 3.1K reposts, and is still up at the time of writing. 

An independent Pakistani fact-checker, Pak Observer, has debunked the viral letters, noting that they are rife with errors, such as a misspelling of “Pakistan Zindabad” and the naming of the wrong person as the current DG Inter-Services Public Relations.

5. Removal of Indian Northern Commander from his post

This piece of misinformation originated and spread in Pakistan between 29 and 30 April. Several Pakistani accounts on X, including Khara Sach anchorperson Mubasher Lucman’s, spread misinformation regarding the detainment and/or removal of the Indian Northern Commander Lt Gen Suchindra Kumar from his post following the Pahalgam attack.

India’s Press Information Bureau categorically fact-checked and debunked these claims on Wednesday 30 April, in a post on X, stating that “Lt. Gen MV Suchindra Kumar is attaining superannuation on April 30.”

AI-generated misinformation

Besides the major misinformation claims surrounding the Pahalgam attack, DRF also observed instances of AI-generated images and videos which helped to spread misinformation and fake news.

These included an image generated with the help of the Meta AI tool, as verified by an independent Indian fact-checker. This image was often used to accompany posts about the attack.

The image is now accompanied by Meta’s third-party fact-check label; however, it is still up on Facebook.

An AI-altered video of Zakir Naik also went viral, depicting the Islamic scholar claiming that the Quran instructs Muslims to kill Hindus.

An interesting point to note is how, conversely, there were also instances of AI being used by social media users to fact-check dubious claims. For instance, there were several instances of users replying to X posts, tagging the xAI assistant Grok to help them fact-check claims.

Hate speech and threats

Our findings revealed that misinformation was not the only form of dangerous content circulating online. Hate speech was rampant, with Indian accounts and media outlets targeting Pakistanis and Kashmiris, and vice versa. In many cases, misinformation gave way to hate speech, as with the false claim that Kashmiri locals were aiding and abetting the terrorists, which Indian users used to justify death threats targeting Kashmiri Muslims.

Three major hate speech categories were observed: Indian calls to invade/bomb Pakistani cities, calls to starve Pakistanis using (an uninformed understanding of) the suspension of the Indus Waters Treaty, and genocidal rhetoric targeting Muslims in Indian-administered Kashmir.

1. Threats to invade Pakistani cities

2. Threats to starve Pakistanis

These threats referenced the suspension of the Indus Waters Treaty, despite the severity of the short-term consequences being debunked by environmental experts.

3. Genocidal rhetoric targeting Kashmiris

Arnab Goswami, Editor-in-Chief of Indian news channel Republic TV, which has 6.73M YouTube subscribers, appeared on air to call for an “Israel-like” “final solution” in Kashmir.

On the other hand, Pakistani X accounts made misogynistic comments sexualising Indian celebrities, referring to them as “maal-e-ghaneemat”, or “spoils of war”.

Pahalgam: a complete failure of platform accountability

While DRF aims to provide timely analyses on dangerous online trends and the spread of misinformation and fake news, it has time and again come to light that during volatile events like the Pahalgam attack, there is a total and utter lack of platform accountability and governance across platforms.

With massively inflammatory content leading to communal incitement, and even outright genocidal posts gaining views and reshares in the thousands and millions, it is clear that platforms have not only failed to take down these posts, but that their algorithms are actively amplifying them. Designed to prioritise engagement, these algorithms often push the most provocative and emotionally charged content to the forefront, regardless of its accuracy or potential harm. The more outrage a post generates, the more likely it is to be promoted on user feeds, creating a feedback loop that rewards violence, hate, and sensationalism with visibility. This is no accident: it is a direct result of a business model built on attention and sensationalism. Tech oligarchies, emboldened by profit and shielded by vague notions of free speech, continue to dodge real accountability. Many of these platforms have grown increasingly hesitant to moderate hate, choosing to monetise it instead.

In a region on the brink of war, such unchecked misinformation is not merely irresponsible; it is incendiary. The cost of this negligence at best, and complicity at worst, is sky high. When narratives are left to fester unchallenged by facts, they don't just distort reality; they can help shape deadly outcomes.

The gap between platform policies and their implementation

Except for the few examples cited above, for most of the data analysed, these platforms failed even to accompany harmful content with community notes, fact-check disclaimers, content warnings, and other moderation tools. This is despite each of these platforms having community guidelines regarding misinformation.

Since Musk’s takeover of X, the platform has removed its policy on Crisis Misinformation, and as such no longer has a dedicated corporate policy addressing “false or misleading information that could bring harm to crisis-affected populations (…) such as in situations of armed conflict, public health emergencies, and large-scale natural disasters”, which the Pahalgam attack and subsequent tensions fall under. However, in a blog post titled “Maintaining the safety of X in times of conflict”, X claims to have a comprehensive set of policies that “promote and protect the public conversation”, citing posts violating its Terms of Service during the “Israel-Hamas conflict” as an example of implementation. Enforcement of these policies includes:

  • Restricting the reach of a post
  • Removing the post
  • Account suspension
  • Ineligibility of such posts for monetisation
  • Community notes

However, of all the posts analysed by DRF in this particular instance, not a single one faced even one of the above-mentioned repercussions.

Meta’s Misinformation Policy seeks to “remove misinformation where it is likely to directly contribute to the risk of imminent physical harm.” In its Transparency Center, it also addresses third-party fact-checking:

Once a fact-checker rates a piece of content as False, Altered or Partly False, or we detect it as near identical, it may receive reduced distribution on Facebook, Instagram and Threads. We dramatically reduce the distribution of False and Altered posts, and reduce the distribution of Partly false to a lesser extent. For Missing context, we focus on surfacing more information from fact-checkers. Meta does not suggest content to people once it has been rated by a fact-checker, which significantly reduces the number of people who see it.

To Meta’s credit, many of the most viral misinformation videos about the dancing couple were either removed or taken down by the media outlets that published them, and there was at least one case of a label being added to an AI-generated post, as mentioned above. However, these posts had already gone viral before such measures were employed, and several popular posts resharing the same misinformation are still up on Facebook and Instagram.

According to YouTube’s misinformation policies, it does not allow “certain types of misinformation that can cause real-world harm”. Enforcement of these policies includes:

  • Removal. Content is taken down if it violates policy
  • Warning. Issued for first-time violations, usually without penalty
  • Training. Option to complete policy training for warning expiry
  • Strike. Given if the same policy is violated within 90 days
  • Termination. Triggered by 3 strikes or severe/recurring violations

Once again, in YouTube’s case, it was noticed that channels with subscribers in the millions posted misinformation and incendiary content with viewership in the thousands, as documented above. There was no visible implementation of the strike system, and the harmful content remains on the platform to date.

Conclusion

After the chaos that has been observed in online spaces in the fallout of the Pahalgam attack, it is clear that digital spaces can be just as volatile and dangerous as physical ones. If misinformation and hate speech continue to surge unchecked, inflammatory and potentially genocidal rhetoric will cost lives. There is an urgent need for ethical responsibility and accountability from platforms, policymakers, and users alike.

April 16, 2025

March 2025 Newsletter: International Women’s Day and Digital 50.50 Released!

This year’s first issue of Digital 50.50, 'Empowered Voices, Accountable Platforms: Redefining Digital Equality', was launched on 8 March to mark International Women’s Day. The issue features 10 stories approaching the theme from unique angles, such as social media moderation of gendered slurs in Urdu, how the overuse of AI in content moderation is affecting user experience, and how women’s digital livelihoods are being affected by the controversial PECA amendments. The issue also showcases cover art and beautiful illustrations by Emil Hasnain. Read the issue here.

 

In honour of International Women’s Day, the DRF team also presented their viewpoints explaining why digital rights are so important in today's world, and how digital security helplines like ours help to make online spaces safer for young girls and women in South Asia and beyond. 

 

Regional Engagements & Initiatives:

Nighat Dad at NO MORE Tech Summit Panel

Nighat Dad moderated a panel titled “Bias in the Bot: How AI can Either Perpetuate or Prevent Violence Against Women” at NO MORE's Tech Summit on 4 March. The panel explored the intersection of AI, ethics, and gender-based violence, focusing on both the risks and the potential of technology to address these challenges.

 

 

DRF part of amicus brief submitted to US Supreme Court

DRF was part of an amicus brief submitted to the US Supreme Court by the Samuelson-Glushko Technology Law and Policy Clinic (TLPC) at the University of Colorado Law School concerning privacy rights. The brief, to which DRF was amicus curiae (“friend of the court”), urged the Court to rule against an act that threatens data privacy rights worldwide.

 

 

Five-Point Plan Submitted to WSIS+20

DRF is proud to be a part of this effort, alongside 101 civil society groups and 46 experts, recommending a Five-Point Plan to WSIS+20 on meaningfully operationalising global digital governance and development goals, with transparency, inclusivity, and stakeholder engagement at the forefront.

Our Latest Research & Advocacy:

Identifying AI deepfakes: #TrumpZelenskyMeeting

A video from US President Trump’s meeting with Ukrainian President Zelensky recently went viral, appearing to show the leaders exchanging blows and fooling the public at large; even news outlets reported on it. DRF released a video sharing tips and tricks on how to detect AI deepfakes online.

 

#BeCyberSavvy Campaign

DRF collaborated with students from Punjab University on their campaign against cyberbullying and harassment, calling on citizens to #BeCyberSavvy and protect their digital spaces. The campaign featured insights and actionable tips from DRF’s helpline team.

 

Digital Rights Tracker Updates

DRF started a new weekly series for its Digital Rights Tracker. While still in its beta phase, the Tracker contains the latest updates on digital governance and rights issues in Pakistan. Stay informed, and check out our tracker here.

 

Press Coverage

Nighat Dad highlights need to build AI governance capacity 

Nighat Dad talked to Bol News about the urgent need to build national and regional capacity for AI governance given rapid geopolitical advancements and emerging technology developments in today’s world. Watch the entire segment here.

Nighat Dad’s Op-Ed on Workplace Harassment

Following the recent Supreme Court decision by Justice Mansoor Ali Shah, Nighat Dad explored the legal precedent set on workplace harassment, highlighting the structural challenges women in Pakistan routinely face in professional environments. She states, "Women’s safety in professional spaces should not hinge on sporadic judicial interventions. It must be a fundamental, non-negotiable standard." Read more here.

DRF Comments on Cyber Blackmail

Nukta Pakistan released a comprehensive piece on cyber blackmail trends in Pakistan in which DRF’s helpline was featured. DRF team member Anmol Sajjad also commented on the trend, saying “in these situations, you must remember that it’s not your fault.”

DRF was also featured in the following press coverage:

News Outlet | Date | Title
Echoes Media | 1 March 2025 | Digital Rights Foundation Report: How PECA Silences Journalists in Pakistan
Tech Policy Press | 3 March 2025 | What Happens When Democracy Falters? Lessons from The Global Majority
Hallmark News | 4 March 2025 | Global South Alliance unveils digital library, opens membership applications
Tech Juice | 8 March 2025 | From AI to Startups: Top 10 Women Transforming Pakistan’s Tech Industry
The Nation | 14 March 2025 | Cyberbullying in Pakistan: A Silent Crisis in the Digital Age
Daily Times | 22 March 2025 | A Roadmap to Women’s Empowerment
Daily Times | 26 March 2025 | Weaponizing the Web: How Cyber Harassment Silences Marginalized Voices in Pakistan
IFEX | 31 March 2025 | Transition, media reforms, and CSOs join forces for digital accountability in South Asia

 

Behind the Scenes with DRF:

AI Workshop for DRF Team

We held an in-depth workshop for our team on generative AI. Led by tech innovation educationalist and entrepreneur Jazib Zahir, this workshop explored how AI uses data, AI's limitations, and cases where AI tools can be used responsibly. AI is evolving fast, but policies and regulations are struggling to keep up—leaving room for misuse and abuse. That’s why DRF will continue to study AI and its ramifications on data and ethics, not just to understand the latest emerging technologies, but to also build trust and shape a future where AI works for all.

Tech trends:

AI ‘Studio Ghibli’ images: cute trend or ethically murky use-case?

In the last week of March, OpenAI released its “most advanced image generator yet” within ChatGPT’s GPT-4o model, capable of producing outputs so photorealistic that OpenAI CEO Sam Altman said he had a hard time believing “they were really made by AI”. One use-case of this image generator that has since taken the internet by storm is the “Studio Ghibli-fication” of real photos, i.e., the rendering of personal photos in the ‘cutesy’ animation style of the popular Japanese Studio Ghibli, co-founded by Hayao Miyazaki.

While the appeal of transforming photos of families, politicians, and even Israeli armed forces into adorable animations spread like wildfire, the internet was divided over the trend's ethical ramifications. Critics raised concerns over the data used to train GPT-4o to produce these outputs, with Studio Ghibli art itself likely used without permission. A four-second Studio Ghibli scene that took one year and three months to animate started making the rounds, demonstrating the painstaking labour of animators that is being stolen and cheapened into AI outputs. Excerpts from a 2016 interview with Miyazaki about AI art also resurfaced, with him notably calling the incorporation of AI technologies into art and animation “an insult to life itself”.

While the jury remains out over the Studio Ghibli trend, there is no denying the impact of its proliferation, with ChatGPT reporting a record 150 million weekly active users for the first time on the back of the viral trend.

xAI acquires social media platform X in merger

Elon Musk’s AI company, xAI, has acquired his social media platform, X, in a merger that valued X at $33bn ($45bn less $12bn of debt). Commenting on the merger, D.A. Davidson analyst Gil Luria said the $45bn price tag for X, debt included, was not a coincidence: since Musk bought Twitter for $44bn in 2022, excluding debt, representing X’s value in this way creates a positive financial narrative and reassures investors. The specifics of the deal remain unclear, with concerns over leadership integration at X, regulatory scrutiny, and xAI’s increased access to X user data to train its AI chatbot, Grok.

Tip of the month:

Practice Good Device Hygiene

Your devices hold your digital life—treat them with care. You can start by: 

  • Setting up strong passwords or biometric locks (fingerprint or face ID). 
  • Enabling auto-lock and adjusting settings so your screen times out quickly when not in use. 
  • Encrypting your phone and computer to protect sensitive data if they’re ever lost or stolen. 
  • Avoiding leaving devices unattended in public places.
  • Backing up your data regularly. 
  • Using privacy protectors on screens.

Keeping your devices secure isn’t just smart—it’s an act of self-care in the digital age. A little attention now can save you from major headaches later.

DRF Resources:

Cyber Harassment Helpline: 

The Cyber Harassment Helpline received 150 complaints in March 2025, of which 131 were related to cyber harassment.

The Helpline also issued a scam alert online, warning citizens to beware of scammers pretending to be PTA representatives in order to gain access to sensitive information via OTP (one-time password) codes.

If you’re encountering a problem online, you can reach out to our helpline at 0800-39393, email us at helpdesk@digitalrightsfoundation.pk or reach out to us on our social media accounts. We’re available for assistance from 9 am to 5 pm, Monday to Sunday.

 

IWF Portal

www.report.iwf.org.uk/pk 


StopNCII.org

https://stopncii.org/

February 4, 2025 - Comments Off on Asia Internet Coalition trade body expresses concern over PECA Amendments

Asia Internet Coalition trade body expresses concern over PECA Amendments

The Asia Internet Coalition (AIC) has expressed “deep concerns” over the recently passed “Prevention of Electronic Crimes (Amendment) Act, 2025”. A tech industry body made up of companies such as Google, Apple, Facebook, LinkedIn, Amazon, and Cloudflare, among others, the AIC claimed that the amendments “would have a significant impact on people’s digital rights and freedoms, as well as far-reaching implications for Pakistan’s digital economy”, and called upon the Government of Pakistan to:

“...pause the legislative process and initiate a genuine, transparent, inclusive, and comprehensive public consultation process with stakeholders, including industry, civil society, and the public, to ensure the amendments are in line with established human rights norms on privacy and freedom of expression and does not stifle economic growth and innovation.”

The statement by the AIC can be found here, in PDF format: https://aicasia.org/download/1187/

February 4, 2025 - Comments Off on Starlink expected to operate in Pakistan by Summer 2025: Government

Starlink expected to operate in Pakistan by Summer 2025: Government

Starlink, the satellite internet network owned by Elon Musk, is expected to begin operations in Pakistan by June 2025, according to the government, with 90% of the registration process complete. To operate in Pakistan, low earth orbit (LEO) satellite operators must register with the Pakistan Space Regulatory Board, after which the Pakistan Telecommunication Authority issues licences to successful applicants. According to news reports, Starlink is expected to meet the June 2025 operational target.

January 10, 2025 - Comments Off on NWJDR condemns the ongoing online harassment and gendered disinformation campaign against female journalist Asma Shirazi

NWJDR condemns the ongoing online harassment and gendered disinformation campaign against female journalist Asma Shirazi

10 January 2025, Pakistan: The Network of Women Journalists for Digital Rights (NWJDR) strongly condemns the relentless, ongoing harassment and gendered disinformation campaign against senior female journalist Asma Shirazi by supporters of a prominent political party, as well as political commentators and vloggers.

This is not the first time Asma Shirazi has been targeted; it is only the most recent instance in a disturbing trend of online harassment and tech-facilitated gender-based violence against female journalists that is becoming increasingly normalised. In 2020, 150 journalists issued a statement against the trolling of female journalists. The National Commission for Human Rights (NCHR) took notice of this statement in 2022, demanding an update from the government, which had not taken any action in two years. Shirazi, who has repeatedly been a target of gendered character assassination, won a two-year-long case in the Islamabad High Court against ARY News and PEMRA in 2023, which involved a fabricated news story undermining her journalistic integrity. The court ruled in her favour, finding the online and on-air character assassination against her baseless. Now, in January 2025, the situation is just as dire, and Shirazi is once again on the receiving end of an endless slew of abuse, hatred, accusations, and trolling by politically motivated and backed actors.

The continuation of such targeted campaigns not only places individual journalists' lives at risk, but also shrinks the space for freedom of expression and press freedom as a whole. According to a recent report by the Digital Rights Foundation, at least 47 of 225 posts analysed across platforms during the 2024 Pakistan general elections targeted journalists covering the elections. These journalists “became vulnerable to online threats of physical assaults, organized trolling campaigns and gendered insults”. Platforms like X and Facebook have also failed to provide adequate recourse: a study by the International Centre for Journalists found that women journalists rated Facebook and X as the two least safe platforms, with 39% and 26% of respondents, respectively, expressing concerns. The research further revealed that nearly 73% of women journalists experience online violence.

The harassment and vile comments against Asma Shirazi are baseless and hinge upon character assassination by online trolls and political commentators with huge followings. NWJDR urges relevant authorities to take notice of Shirazi’s targeted harassment, as well as the growing trend of online harassment against female journalists. We urge political parties to take disciplinary action against those involved in the targeting of female journalists, and to formally dissociate from the actions of these trolls. The Ministry of Human Rights and the NCHR must also take action and develop a strategy for addressing such gendered attacks and campaigns against women journalists and women public figures.

These targeted disinformation and harassment campaigns cannot become the norm. Every time female journalists face gendered harassment, NWJDR will continue to raise its voice and assist survivors in finding avenues to justice.

November 5, 2024 - Comments Off on Online Gendered Violence against trans community in Pakistan: Dolphin Ayan Khan Case

Online Gendered Violence against trans community in Pakistan: Dolphin Ayan Khan Case

DRF investigates the dissemination of harmful content on social media platforms against transgender community member Dolphin Khan and identifies gaps in the implementation of platforms’ content moderation rules

Trigger Warnings: Discussions of Non-consensual Nudity, Threats of Bodily Harm, Technology-Facilitated Gendered Violence, Blurred/Obscured screenshots from a non-consensual video (for information purposes).

Context:

Pakistan’s transgender community has persistently experienced violence, societal ostracization, and sexual exploitation over the years. In the province of Khyber Pakhtunkhwa alone, 267 cases of violence against the transgender community were reported during 2019-2023, yet these resulted in only one conviction. To make matters worse, the community has long been a victim of online hate on social media platforms. In October 2024, DRF released “Gendered Disinformation in South Asia Case Study - Pakistan”, which focused on the discrimination and online hate speech directed at Pakistan’s trans community. The report found that at least 22% of harmful social media posts (including TFGBV, gendered disinformation, and gendered hate speech) were aimed at the transgender community. However, as the report noted, meetings and escalations with social media platforms concerning trans-specific hate speech were unsatisfactory, owing to the platforms’ inadequate responses - or, in the case of X, its near-total unresponsiveness since the change in ownership.

Dolphin Khan Case:

On 29 October, a non-consensual video of a Pakistani trans woman was leaked online and shared by users across multiple social media platforms. The video, which the victim Dolphin Ayan Khan - also known as Dolphin Ayaan - has described as being “forcibly recorded”, shows her, after being abducted at gunpoint, being forced to strip and dance by someone in the background who can be heard but not seen.

On the back of this incident, transgender rights activist Dr. Mehrub Awan brought the video to everyone’s attention on X on 30 October. Expressing disdain towards the persistent harassment of the trans community, Dr. Awan stressed:

 “...We have literally written papers, done podcasts, book chapters, and spoken to media and officials about “Beela violence” and how organised it is. We, ourselves, have presented data and identified hotspots - Mardan and Peshawar - and profiled the criminals involved. We have done everything that we, as a broken and battered community, could do. Ayyan (sic) was on the roads just a month ago organizing protests, and a year ago injured with bullets. When does this end? What else is expected from a community literally on the receiving end of genocidal murders in Pakhtunkhwa to do?”

Following the incident, Ms. Khan issued a video statement naming the alleged perpetrator behind the video and confirming that police authorities had been informed. Seeking justice on the matter, she vowed to hold a press conference on the issue in November. According to news reports, on 01 November a case against the perpetrators of the video - which was filmed in 2023 - was filed in Khyber-Pakhtunkhwa under the Prevention of Electronic Crimes Act 2016 (PECA).

Harmful Content on Social Media Platforms: DRF’s Findings

DRF conducted a preliminary investigation to establish whether these videos were available on social media platforms, namely X, Facebook, Instagram, YouTube and TikTok. The platforms were reviewed on 31 October between 9:00 am and 3:00 pm, using the search terms “Dolphin Khan”, “Dolphin Ayan”, and “Dolphin Ayan Khan”. Owing to capacity and time constraints, DRF was unable to look at other platforms such as WhatsApp, Snack Video or Snapchat.

  • Non-consensual nude content:
    The initial investigation found that at least two accounts on X had posted clips of the actual video. None of these clips were posted or available on Facebook, TikTok or Instagram. Furthermore, DRF came across at least two concrete instances where a link shared by an account on X led to the uncensored video being hosted on at least two different pornographic websites (screenshots provided, but with URLs removed and images blurred). As of 31 October, these X profiles were still up, with active links.
    Platform rules on non-consensual video:
    The availability of Dolphin’s non-consensual videos on X is a violation of X’s non-consensual nudity policy, which states that users may not “post or share intimate photos or videos of someone that were produced or distributed without their consent.” Accounts that violate this policy can be suspended or temporarily locked.
    While the accounts posting Dolphin’s videos had not been suspended at the time of data gathering, it was unclear whether they had been locked. Irrespective of the restrictions, merely locking an account without removing the harmful post seemed an ineffective strategy in this case. DRF’s Cyber Harassment Helpline has in the past recorded similar incidents, where transgender activists had their pictures and videos shared on the platform in violation of this policy, yet the content was not removed.

Accounts on X actively sharing clips from the leaked non-consensual video

 

Example of users on X sharing an external link to a pornographic website that is hosting the video in question

  • Posts containing malicious links:
    Multiple accounts on Facebook and X were luring users towards suspected malicious web links that purported to offer full access to the video of Ms. Khan, constituting a second category of policy violations on these platforms. On X, DRF found at least eight unique accounts claiming to offer full access to the video, including at least one account that made three posts with different images but linking out to the same spam or suspicious link, as indicated in the screenshots in this report. Similarly, on Facebook, DRF came across at least ten unique accounts claiming to offer full access to the video, each sharing the same screenshots. One Facebook account made at least two posts offering the same spam or suspicious link, and another account with a slightly different name shared the same link. On YouTube, DRF found at least one example of a commenter purporting to offer the video in full (with a censored screenshot), only for the comment to link out to a spam website; this appears to be that particular YouTube account’s modus operandi across different sorts of videos, as an attempt to garner views and likes. No posts containing suspected malicious links were found on Instagram or TikTok within the time period under investigation.


Example of users on X sharing suspected malicious web links

It is unclear whether all of the links observed lead to active malware, which requires further investigation. Initial checks of a few links found on X and Facebook showed that clicking a link within said posts would redirect users to external websites that would install, or attempt to install, software, adult material or other unexpected programmes. This is a common malware distribution tactic that tricks people into downloading harmful software posing as legitimate (if unethical and hateful) material. Furthermore, VirusTotal also flagged these links as suspicious or malicious.
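For readers who want to replicate this kind of link triage, VirusTotal's public v3 API identifies a URL by the unpadded URL-safe base64 encoding of the URL string itself. A minimal sketch follows; the API key is an assumption you must supply yourself, and treating any "malicious" or "suspicious" engine verdict as a red flag is our illustrative choice, not VirusTotal's official guidance:

```python
import base64
import json
import urllib.request

VT_API = "https://www.virustotal.com/api/v3/urls"

def vt_url_id(url: str) -> str:
    """VirusTotal v3 identifies a URL by its unpadded URL-safe base64 encoding."""
    return base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")

def vt_lookup(url: str, api_key: str) -> dict:
    """Fetch the existing analysis report for a URL (performs a network call)."""
    req = urllib.request.Request(
        f"{VT_API}/{vt_url_id(url)}",
        headers={"x-apikey": api_key},  # your VirusTotal API key
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def looks_suspicious(report: dict) -> bool:
    """Illustrative triage rule: flag if any engine marked the URL malicious or suspicious."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0) > 0 or stats.get("suspicious", 0) > 0
```

`vt_lookup` requires a valid key and network access; `vt_url_id` alone is enough to verify the identifier scheme against VirusTotal's documentation before making any request.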


Platform rules on malicious links:

Social media platforms have slightly different rules when it comes to accounts posting suspicious links. Meta’s policy on cybersecurity prohibits “Attempts to share, develop, host, or distribute malicious or harmful code…”; violating accounts may be suspended with or without a warning. Similarly, YouTube accounts posting suspected malicious links are in violation of YouTube’s policies concerning “external links”. YouTube violations - whether they pertain to malicious or suspicious links - are subject to a three-strike system: on the first strike, an account is suspended for one week; on the second strike (if within 90 days of the first), the account is suspended for two weeks; a third strike within that 90-day period leads to termination. X’s policies, on the other hand, are less restrictive, stating only that the platform “may take action to limit the spread” of “malicious links that could steal personal information or harm electronic devices” or spam “links that disrupt their experience”.

Thus, accounts posting malicious links on Meta and YouTube are liable to be suspended (with or without a warning), while those on X will not be suspended so long as the links’ outreach is limited by X. In theory, the offending accounts on YouTube and Meta should have been suspended at least for posting suspected malicious links, and those on X should have had their reach limited. In practice, however, the content moderation measures in place proved insufficient to protect users from potentially harmful software.
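The three-strike escalation described above can be sketched as simple logic. This is a toy model for illustration only, not YouTube's actual enforcement code; the function name and the rule that only strikes within the 90-day window count are our assumptions:

```python
from datetime import date, timedelta

STRIKE_WINDOW = timedelta(days=90)  # strikes older than 90 days no longer count

def enforcement_action(strike_dates: list[date]) -> str:
    """Return the action triggered by the most recent strike, counting only
    strikes that fall within the 90-day window before it (illustrative model)."""
    if not strike_dates:
        return "no action"
    latest = max(strike_dates)
    active = [d for d in strike_dates if latest - d <= STRIKE_WINDOW]
    if len(active) == 1:
        return "1-week upload suspension"
    if len(active) == 2:
        return "2-week upload suspension"
    return "channel termination"
```

The model makes the policy's key asymmetry visible: a channel can accumulate strikes indefinitely without termination, provided each new strike arrives after the previous ones have expired.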

To raise timely awareness about the presence of suspected malicious links on platforms, DRF’s Executive Director Nighat Dad posted on Facebook and Instagram, warning users about a profile sharing these links, particularly on X. However, her Facebook post was taken down by Meta within hours, on the grounds that it violated Community Standards regarding cybersecurity. Interestingly, the same post was not removed from Instagram. Reflecting on the experience, Nighat noted:

“While my story only aimed at warning users against harmful content that itself violated the platform’s rules, it looks like the automated checking system highlighted my post on Facebook as problematic and removed it but not on Instagram.”

Conclusion:

Transgender women in Pakistan are extremely susceptible to violence, as already noted in DRF’s case study on gendered disinformation in South Asia. The transgender community in Pakistan has been subjected to offline violence, accusations of blasphemy, and economic harm, perpetuated by orchestrated campaigns like this one that use trans individuals’ non-consensual images to call for further harm and violence. The acceptance of trans individuals within society is under constant threat, and the rise in targeted attacks against them has put their rights as citizens of Pakistan into question. The Federal Shariat Court judgement striking down important sections of the Transgender Persons (Protection of Rights) Act 2018 pertaining to self-identity, and the National Database & Registration Authority (NADRA) later temporarily halting the issuance of identity cards for the community, go to show the systematic and institutional violence that the community faces, fuelled by these online disinformation campaigns.

In Pakistan, non-consensual intimate and nude images are weaponized against women and gender minorities. DRF’s Cyber Harassment Helpline has repeatedly highlighted to platforms that these visuals cause imminent harm to transgender individuals and in many instances can lead to offline violence. Although platforms claim to prioritise content that causes imminent physical harm, their reliance on automated content moderation leaves this harmful content online.

A recent trend in cases of gender-based violence is that victims are filmed and photographed during the abuse itself, as an act of authority and intimidation. The presence of harmful content on social media platforms in Dolphin Khan’s case is a reminder of these challenges and of how difficult it remains to get harmful content removed from social media platforms. Irrespective of the posts’ wider reach, anyone curious enough to look for Dolphin’s videos could have found them on X, and would also have been vulnerable to malicious links not only on X but also on Facebook and YouTube. Merely locking or warning accounts for violating community standards is therefore not enough to proactively protect users from harmful content.

As technology-facilitated gender-based violence (TFGBV) escalates and graphic harms become more frequent and dangerous, platforms need to identify these patterns and approach non-consensual visuals in the Global South through an intersectional lens, ensuring rapid response mechanisms to deal with the problem. DRF will continue to work with platforms to highlight these challenges and ensure that online spaces are safe for users across all demographics.

 

September 11, 2024 - Comments Off on Digital Rights Foundation’s Comment on Posts that include “From The River To The Sea”

Digital Rights Foundation’s Comment on Posts that include “From The River To The Sea”

Digital Rights Foundation Research and Policy Department 

21-5-2024

In November 2023, following the events of October 7th, there was a surge in posts online containing the phrase “From the River to the Sea”, used by people across the world to show their support for Palestine. The complete slogan, “From the River to the Sea, Palestine will be Free”, refers to the land of the historical state of Palestine, stretching from the Jordan River to the Mediterranean Sea. The slogan has been used since the 1960s by Palestinian nationalist and resistance groups such as the Palestine Liberation Organization and Hamas. Over time the phrase has become increasingly popular among Palestinians and the Palestinian diaspora around the world, as it speaks to their personal ties to the land. Many identify strongly with the village or town they or their ancestors come from, stretching across the land from Jericho and Safed near the Jordan River to Jaffa and Haifa on the shores of the Mediterranean Sea.

As the phrase is used globally by different actors, its context and intent vary depending on who is using it. Even so, the chant is mostly used to support and empower the struggle of all Palestinians, regardless of religion, striving for a free and sovereign homeland. However, variations of the phrase have also been used to support the movement for a Greater Israel. For example, the founding charter of Benjamin Netanyahu’s Likud party states: “Between the sea and the Jordan River there will only be Israeli sovereignty”. In 1977, the party’s platform called for Israeli sovereignty over the land between the Jordan and the Mediterranean Sea, openly demanding complete annexation of the West Bank.

This latter usage can be equated to the widely supported ideology of a ‘Greater Israel’ - an Israeli Jewish state that extends from the Jordan River to the Mediterranean Sea. If Palestinian usage of the chant for liberation is to be read as a call for the expulsion of Jews from the region, then in all fairness the same reading should apply to calls for a Greater Israel. It is no secret that the current Israeli government, and those that came before it, have supported the complete annihilation and expulsion of Palestinians from the land. Supporters of the Zionist ideology perceive the chant as a violent call because it threatens their vision of a solely Jewish state. The liberation of Palestine would require Israel to treat Palestinian Arabs and Israelis as equal citizens, adding millions of Palestinian Arabs to its citizenship rolls - a prospect that runs against the aim of establishing a Greater Israel by diminishing the “Jewishness” of the state.

Claims have been made in the past that the slogan is antisemitic; in truth, however, the slogan and its treatment reflect a long history of attempts to silence Palestinian voices and those speaking in solidarity with them. Palestinian-American writer Yousef Munayyer argues that those who perceive “From the River to the Sea” as having genocidal connotations or expressing a desire for the destruction of Israel are simply reflecting their own Islamophobia. He argues that the phrase is instead used to express people's desire for a state where “Palestinians can live in their homeland as free and equal citizens, neither dominated by others nor dominating others.” Some Palestinians say that the slogan refers to a single state where Palestinians and Israelis can live together, not a call to remove anyone from the region. According to Rama Al Malah, an organizer with the Palestinian Youth Movement, the chant in no way calls for the killing of Jewish people; it is a way of saying that they want liberation from 75 years of occupation, and of advocating for the return of refugees who have been forced out of their land from 1948 until now.

Having established the intended use of the phrase across online and offline platforms, it is important to highlight how Meta’s policies and content moderation practices have heavily censored content relating to Palestine since October 7th, 2023. Users across the globe have reported that the pro-Palestine content they share is being ‘shadow-banned’, limiting their reach and engagement on the platforms. Users have also reported the removal of pro-Palestine content after it was flagged for ‘violating community guidelines’. When content regarding conflict areas is removed by Meta from its platforms, the risk increases that crucial evidence for prosecuting perpetrators in international criminal courts will be erased. In addition to silencing voices that advocate for Palestinian rights, the deletion of the phrase “From the River to the Sea”, among other pro-Palestine content, creates gaps in potential digital evidence of human rights violations. As per the Leiden Guidelines, digitally derived evidence, including photographs, social media content and videos, is increasingly used as documented evidence in international criminal prosecutions. The UN Fact-Finding Mission’s use of Facebook posts as evidence in the case of brutalities against Myanmar’s Rohingya population is one example that signifies the crucial role played by social media platforms in the preservation of records. By the same token, Meta’s removal of content related to the Palestine-Israel conflict, in any capacity, creates a dent in a repository that could serve as crucial evidence for legal decision-making on violations within conflict zones.

According to a report by Human Rights Watch covering October to November 2023, there were 1,050 takedowns on Instagram and Facebook relating to pro-Palestinian content. Of these 1,050 takedowns, written primarily in English and originating from over 60 countries, 1,049 cases involved peaceful content in solidarity with Palestinians. Since the October 7 conflict, there has also been a surge in hateful content against Palestinians on social media platforms. 7amleh’s AI-powered language model has been monitoring the spread of hate speech in Hebrew against Palestinians and pro-Palestine users on these platforms; since October the model has classified 6,026,492 hateful and violent cases. According to the tool, the distribution of violence has been highest on X (79.7%), followed by Meta platforms (19.1%). Additionally, it is difficult to overlook Meta’s biased approach towards pro-Palestinian content when, in October 2023, Meta began inserting the word ‘terrorist’ into the profile bios of Palestinian users on Instagram, later issuing an apology stating that the platform was experiencing a bug in Instagram’s auto-translation. Meta’s track record in the May 2021 crisis between Israel and Palestine showed a similar pattern of Palestinian voices being censored and shadow-banned on the platform, as was later confirmed by a report from Business for Social Responsibility (BSR). The continuous removal of pro-Palestine content indicates that Meta had repeatedly censored the voices of users on its platforms even before the events of October 7; since then, the censorship has been further aggravated by big tech platforms.

Meta’s handling of Palestinian content, particularly the removal of pages such as Eye of Palestine and the suspension of Palestinian journalist Motaz Azaiza’s account, raises serious concerns about the platform’s commitment to human rights and freedom of speech. Despite Meta’s newsworthiness policy, which protects journalistic content, these accounts have faced undue restrictions and reach limitations. This enforcement is in stark contrast with Meta’s approach during the Russia-Ukraine conflict, where the platform displayed clear bias by promoting content favouring and showing solidarity with Ukraine. The discrepancy underscores an inconsistency in Meta’s content moderation practices, undermining principles of freedom of expression, freedom of association, and equality and non-discrimination. Although Meta has since issued an apology for its unfair treatment of Palestinian solidarity voices, the platform persists in limiting content that supports Palestine, further perpetuating digital apartheid through social media algorithms that disproportionately impact marginalized voices. This ongoing issue highlights a significant gap between Meta’s stated policies and its actions, calling into question its commitment to upholding its human rights responsibilities.

In its recent policy changes, Meta has introduced new default limits on political content, weakening free expression online by disproportionately affecting political content from marginalized groups. The timing and context of this particular policy raise questions about a potentially biased approach by the platform to controlling narratives. This not only undermines the democratic values of free speech and association but also exacerbates existing inequalities, particularly for voices supporting the Palestinian cause. The biased application of Meta’s policies reflects a broader trend of digital discrimination, in which algorithmic decisions and content moderation policies reinforce existing power imbalances and suppress dissenting voices. Meta’s inconsistent and biased handling of Palestinian content, coupled with its preferential treatment of other geopolitical issues, not only raises grave concerns about adherence to global human rights principles but also systematically undermines freedom of expression, freedom of association, and non-discrimination. Tech platforms need to create more transparent and equitable content moderation policies that are sensitive to contextual nuances.

Meta’s response to the phrase “From the river to the sea” on its platforms turns on several key human rights principles. Facebook, as a platform with 3.03 billion monthly active users, has a responsibility to protect the fundamental human rights of its user base. This includes allowing individuals to express political opinions, advocate for political change, express solidarity with a cause, and be treated with equality and without discrimination. The cases before the Oversight Board highlight contexts in which the aim of the phrase “From the river to the sea” is to advocate peacefully for Palestinian civil rights, without promoting violence or hatred towards people on the basis of protected characteristics. Upon reviewing content mentioning the phrase on Meta's platforms, it was found that the large majority of it merely mentions and sympathizes with Palestinians, with no antisemitic or anti-Israel discussion. The question that arises is: when the world has seen the extent of the atrocities Palestinians have been subjected to, is expressing a personal opinion about the current crisis to be considered promoting terrorism on these platforms? Many Palestinian activists have stated that the complete phrase “From the river to the sea, Palestine will be free” does not insult or violate the sovereignty of the state of Israel, the Jewish community, or Meta’s content moderation policies. Even on a more subjective reading, where the phrase is used critically against state institutions, Meta does not categorize its use as hate speech, particularly when it is directed at state institutions rather than specific recognized individuals. In all three cases, the phrase has been given additional context through accompanying text, for example “#DefundIsrael”, “Zionist State of Israel”, and “Zionist Israeli occupiers”, marking the posts as association with a political cause rather than support for any dangerous organizations (as categorized by Meta and/or the United States Government).
Although the cause is controversial in the current global political landscape, the phrase and its use in these cases do not violate Meta’s community guidelines on hate speech. The first case, where a user claimed the phrase “violates Meta’s policies prohibiting content that promotes violence or supports terrorism”, refers to Meta’s rules on “Violence and Incitement” and “Dangerous Organizations and Individuals”. The phrase “From the river to the sea” is used to show solidarity with Palestinians in general rather than affiliation with any political or resistance group. None of the cases presented to the Oversight Board insinuates affiliation with, alliance with, or promotion of dangerous organizations. Moreover, Meta’s categorization of dangerous organizations needs further transparency and context. The contextual categorization of keywords and associations has been a long-standing point of debate around Meta’s content moderation policies. For a platform that deems its policies global and standardized for every country, specifically relying on “United States designated terrorist organizations” contradicts that global agenda. These policies need more robust parameters that are genuinely inclusive across regions. Moreover, the categorization of “dangerous organizations” should be transparently communicated to Meta’s trusted partners so they know what kind of content should be escalated to Meta.

These cases underscore the need for contextual application of Meta’s community standards. Ideally, there should be no room for the targeting of any religious group, so antisemitic content should be taken down right away; however, where content is associated with a peaceful socio-political movement, it should be left up, as it does not violate Meta’s content moderation guidelines. Hence, the three posts should not be removed from the platform, as they were posted in solidarity with a political cause and fall under freedom of speech and freedom of association.

The use of the phrase has also been widely scrutinized at the level of states and educational institutions. In January 2024, the US House of Representatives labeled the phrase antisemitic in a resolution passed by a vote of 377 to 44. US Representative Rashida Tlaib was censured by the House through a resolution as a consequence of using the phrase on social media. Several House Republicans and Democrats came together to condemn the pro-Palestine statements of the only Palestinian-origin representative, arguing that the phrase’s genocidal nature encourages the eradication of the state of Israel. It is important to note that the resolution passed with majority support despite Tlaib clarifying on the House floor that her criticism was targeted at the Israeli government, not its people. In the UK, Prime Minister Rishi Sunak condemned the slogan and called those who use it either gravely misinformed or supportive of the threat the slogan signifies towards Israel’s existence. Last year, pro-Palestine rallies across the UK were condemned by former Home Secretary Suella Braverman, who characterized them as “hate marches” against Jewish people and the state of Israel and encouraged the police to respond with force and zero tolerance. Braverman has repeatedly voiced her objection to the rallies and the phrase, asking why it has been justified under claims of religious struggle. She has also proposed amending the Terrorism Act 2000, since in its current state evidence of incitement or encouragement of terrorism is required to charge protestors, calling for laws to tackle “mass extremism” on UK streets.
It is deeply concerning when individuals holding office encourage the police to take strong action against protestors without distinguishing between peaceful and non-peaceful elements, as it shapes a collective narrative that eventually filters into the general public. Such encouragement of force against protestors is itself a threat to people’s right to protest and freedom of expression, just as Rashida Tlaib’s clarification that her stance was against the Israeli government, not its people, was ignored when her pro-Palestine position was condemned.

The pro-Palestine student protests taking place across university campuses have been labeled antisemitic, resulting in several students being arrested by the police. Asked about the use of the phrase, the Columbia University President pointed out that although she feels the phrase is antisemitic, there are people who do not hold the same opinion. Since April 18, arrests have taken place at 40 different US campuses, with more than 2,100 students arrested. The arrests, and administrations’ sympathetic stance towards counter-protestors, have widely challenged freedom of speech and expression: students are being penalized for voicing their opinions and publicly protesting against a genocide. Such practices are discriminatory and deepen divides within the community.

Censoring public opinion on platforms is not only an undemocratic practice but also sets a questionable global precedent where silencing the masses becomes an acceptable norm. Although drawing a clear line between free speech and hate speech is important, institutions and government bodies need to draw it with careful consideration. As mentioned earlier, the phrase under scrutiny is used during peaceful pro-Palestine protests to showcase solidarity with Palestinians and their struggles; it is an expression of sympathy rather than an endorsement of acts of terror. As that line is drawn, it is important to remember that calling out states participating in genocide cannot and should not be categorized as hate speech, let alone grounds for penalizing students. Several universities, including New York University and Columbia University, have barred graduating students from attending their graduation ceremonies as a consequence of their participation in the protests. This has led the protesting students to create their own events under the name “The People’s Graduation” to support the barred students by celebrating their achievements together. Faculty members have also come forward in support of the protesting students; the same cannot, however, be said of university administrations.

Beyond the right to protest, students and other migrants relocate to countries like the US and the UK to improve their quality of life which includes their right to stand up for and against different causes that resonate with their identities as an ethnic, religious, or social community. When influential countries take a draconian position that advocates for the suppression of free speech, in addition to alienating the victims, they invalidate the individual right to democratic expression and legitimize all forms of oppression citizens and marginalized groups face in authoritarian states. 

While the intended use of the phrase at large is to advocate for the freedom of Palestinians, some perceive it as a threat to a state. By censoring pro-Palestine content, big tech platforms play a role in the erasure of digital evidence of human rights atrocities, in addition to curbing free speech online. At the level of states and educational institutions, opposition to the phrase underlines the increasing suppression of marginalized communities and their voices. To ensure equitable justice and access to information on online platforms, content regulation must not assess certain cases disproportionately. To maintain their global status, platforms need to ensure that the criteria for flagging specific content are gauged not in line with regulations in specific countries, such as the US or the UK discussed above, but with global-majority countries in mind.

January 16, 2024 - Comments Off on Digital Rights Foundation’s Conference on Countering Digital Threats and Building Resilience of Communities

Digital Rights Foundation’s Conference on Countering Digital Threats and Building Resilience of Communities

December 15, 2023

ISLAMABAD: Digital Rights Foundation (DRF) held a conference titled, ‘Countering Digital Threats and Building Resilience of Communities’ on Friday, 15th December 2023 in Islamabad. DRF’s conference addressed the lack of discourse relating to online freedoms in the country particularly with the rise of hate speech and disinformation against vulnerable and at-risk communities in Pakistan. The conference brought together experts from across the country with two panels that highlighted DRF’s engagements and redressal mechanisms available in the country for at-risk communities in Pakistan.

The event started off with welcome remarks by Seerat Khan, Programs Lead at DRF, in which she highlighted the particular vulnerabilities that religious minorities face in the country, especially with respect to rising hate speech and disinformation. Nighat Dad, Executive Director at Digital Rights Foundation, also noted: “With the upcoming elections we see how harmful content pertaining to religious minorities in the country is increasing, particularly (the elements of) disinformation and hate speech. The rise in hate speech and disinformation will be even more rapid with the use of AI and generative AI, which is quite concerning. The Election Commission and government institutions need to address this and include hate speech in the code of conduct for political parties that the Commission is developing. Social media platforms also need to do more to address how hate speech and disinformation spread and the impact they have on at-risk communities in countries like Pakistan.”

In 2021, DRF conducted research titled "Religious Minorities in Online Spaces (2021)," addressing these communities' vulnerability to attacks, disinformation campaigns, harassment, and hate speech. The research mapped the experiences of religious minorities in online spaces; through surveys and interviews, we found that a majority of respondents had experienced online negativity, including backlash or threats, on the basis of religious affiliation and/or a combination of factors.

The first panel of the conference, ‘Navigating Digital Boundaries: Combating Online Hate Speech and Disinformation’, was a conversation about the challenges posed by online hate speech and disinformation targeting at-risk communities. The panel was moderated by Senior Program Manager Zainab Durrani and included NCHR Secretary Mr. Kamran Rajar; Dr. Shoaib Suddle, One-Man Commission for Minorities; academic Dr. Ayra Patras; journalist Sajjad Azhar; and Director of Bolo Bhi Usama Khilji. The panelists shed light on how online hate speech and disinformation manifest and how communities can combat them together.

Dr. Ayra Patras said, “When religious minority communities are ostracized in real life then you see the replication of this behavior online as well. We see more hate speech and there are no recompense mechanisms in place that actually work.” She added, “The social discrimination faced by these communities germinates into social exclusion, and the consequences are far-reaching and become entrenched in real life.”

The second panel of the event, ‘Bridging the Digital Divide: Ensuring Equal Access for All’, was moderated by Programs Lead Seerat Khan. The panel was joined by NCHR Member Minorities Manzoor Masih, Former Senator Farhatullah Babar, Community Leader and Activist Sunil Gulzar Khan, and Cyber Harassment Helpline Manager Hyra Basit. The panel addressed mechanisms needed to ensure safe spaces for at-risk communities, particularly in light of the upcoming elections and the need for community building and resilience.

Senator Farhatullah Babar said, “The discussion around the digital divide is very timely in light of the upcoming elections. In Pakistan, media has played a great role in elections and online disinformation is a very real issue.” He added, “It is very important to consider all actors complicit in the online disinformation campaign, and more than most, it is the state that is complicit.” He advocated for the Election Commission of Pakistan to develop a code of conduct for media houses focused on combating disinformation on social media.

Digital Rights Foundation is a registered research-based NGO in Pakistan. Founded in 2012, DRF focuses on ICTs to support human rights, inclusiveness, democratic processes, and digital governance. DRF works on issues of online free speech, privacy, data protection and online violence against women.
For more information log on: www.digitalrightsfoundation.pk

#DigitalResilienceConference
Contact

Nighat Dad 
nighat@digitalrightsfoundation.pk

Seerat Khan
seerat@digitalrightsfoundation.pk

Anam Baloch
anam@digitalrightsfoundation.pk

September 29, 2021 - Comments Off on Evaluating Applications Developed by the Pakistani Government

Evaluating Applications Developed by the Pakistani Government

Faizan Ul Haq is currently a senior at LUMS majoring in History. His interests include tech, philosophy, and social justice.

A non-exhaustive database of mobile phone applications developed by the Pakistani government has been compiled by Faizan and can be accessed here.

It has been widely noted that Pakistan’s potential for IT development has grown vastly in the last decade or so. According to the Pakistan Telecommunication Authority’s Annual Report for 2019-2020, mobile phone data usage in Pakistan increased from 614 petabytes in 2016 to 4,498 petabytes in 2020 – an increase of over 600% in just half a decade. In the same period, the distribution of broadband services doubled. While numerous reasons can be suggested for this change (from the availability of cheaper smartphones from Chinese providers like Q-Mobile and Huawei, to the increasing importance of IT in business development, and the proliferation of mobile internet), it is clear that the digital world in Pakistan now presents a new avenue that can be harnessed for better governance and service delivery.
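The growth figure can be sanity-checked with simple arithmetic, using the petabyte figures cited from the PTA report:

```python
# Mobile data usage figures from PTA's 2019-20 Annual Report (petabytes).
usage_2016_pb = 614
usage_2020_pb = 4498

# Total growth factor over the period, and the percentage increase.
growth_factor = usage_2020_pb / usage_2016_pb
percent_increase = (usage_2020_pb - usage_2016_pb) / usage_2016_pb * 100

print(f"{growth_factor:.1f}x growth, {percent_increase:.0f}% increase")
# A roughly 7.3x rise, i.e. an increase of a little over 630%.
```

Note the distinction between the growth multiple (about 7.3x) and the percentage increase (about 633%), which are easy to conflate when quoting such statistics.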

It makes sense, then, that in late 2019, Prime Minister Imran Khan inaugurated the “Digital Pakistan” initiative. In its policy objectives, what stands out is the emphasis on using digital applications (henceforth referred to as apps) for “e-governance” and in “key socio-economic sectors”. While a few apps had been released previously to help with the above, the current government seems intent on maximizing this newfound potential.

Over 100 different apps (as of summer 2021) have been released on the Google Playstore for Android phones and the App Store for iOS devices by government bodies at the provincial, federal, and, at times, district level. Primarily developed by the provincial IT boards, they cover a wide range of functions including education, the regulation of pre-existing government bodies, agriculture, and online ticketing and booking. Some apps are meant only for citizens of a particular locale (such as the City Islamabad app), while others are targeted at people of a specific profession (the Lahore and Sindh High Court apps are aimed at the legal community). A few apps have also been released to help deal with health and safety emergencies, such as the Baytee app meant to increase women’s safety and a number of apps aimed at helping track and register COVID cases in Pakistan.

However, just publishing apps does not immediately mean that those apps have helped fix the underlying issues, or that they have been effective in their stated objectives. Quite a few of these apps have dubious efficacy, and some appear to not work at all. There are a few clear trends as to which apps have worked and which have not.

A number of apps profess a wide range of features. The “City Islamabad” app promises a lot. With the goal of “bridg(ing) the gap between citizens and government” by removing the need to visit government offices to access public services and departments, the app is supposed to provide quick access to numerous forms and payment services that would otherwise have been available only in person. In practice, the Playstore review page is full of complaints that not all of the forms actually work. People have pointed out that tokens generated are not always registered by the relevant financial departments. Certain forms load indefinitely – either they have not been programmed in properly, or the forms just are not available on the app. At the same time, certain key features still work effectively: the part of the app that provides information on Islamabad’s major landmarks and public facilities loads instantly and provides accurate information, while a portion of the userbase reports successful payment of tax-related tokens and responses to submitted complaints. It appears that while a wide number of features have been programmed in, not all of them are fully usable.

A similar issue exists with what is arguably the government’s flagship application, the Pakistan Citizen Portal. Most of the reviews posted in August and September 2021 are entirely negative and allude largely to the same issue: a large number of the complaints registered on the app do not actually appear to lead to anything concrete, and are instead marked “resolved” without any appropriate action being taken. While this is likely not representative of all users of the app, it does imply a degree of miscoordination between the app’s complaint registration mechanism and the departments meant to cater to it. If complaints marked as resolved do not actually reflect action taken, the widely quoted statistics on the application’s website need to be taken with a grain of salt: it is unlikely that each of the 3.1 million complaints reported there was genuinely resolved. It also speaks to the limitations inherent in e-governance and service delivery through apps – the issues already present in government bodies are likely to be reproduced through the functioning of the app. For example, if government bodies continue to treat cases of harassment lightly because of misogynistic attitudes, then the solution lies in structural reform of said bodies rather than in opening more digital portals to file complaints through.

By contrast, apps targeted at a specific group of people appear to have had more success. There are two broad types: some created solely for the use of people in certain government departments, and others for everyone who works in a particular profession. Apps in the former category include the “Price Magistrate” app – a complaint management app meant specifically for district magistrates. This app has seen less use compared to others on this list, and its review section is full of users confused by the lack of a registration option. Of the few reviews that do appear to be from its intended user base, it seems the app functions well.

An app’s functionality, however, is not defined only by how well certain features work. Over time, as more bugs are reported, new devices are released, and operating systems go through several iterations, the publisher needs to provide constant support through updates to maintain functionality. This is especially important in Pakistan, where Android users are likely to be using a very diverse set of devices given the numerous smartphone companies that exist. Additionally, smartphones in different price ranges have specific limitations – differences in screen resolution, RAM, processing power, and networking features mean that developers need to ensure their apps can work despite these constraints. If this diversity is not catered for, sections of the Pakistani population that can only afford cheap smartphones with weaker specifications are likely to be left out. This means that the demographic least likely to be digitally literate will also face bugs and compatibility issues that make it harder for them to use these applications. Updates are also important for addressing security issues: most application updates are issued to fix security bugs and unanticipated backdoors discovered later on.

The most prolific publisher of government apps thus far has been the Punjab IT Board (the other regional boards and publishers barely have half as many apps between them). On their Android publisher page alone, they have over 70 apps published. Yet their support for these apps has been sporadic. More than half have not been updated even once in 2021. While at best this might leave most of these apps functioning, albeit with bugs, quite a few have been rendered completely unusable as a result. A large number of users report that several of these apps no longer have a working login system owing to an issue in generating and processing an OTP key. The Agri-Smart app, for instance, has been completely unusable for certain Android users because their devices’ IMEI codes cannot be accessed. These issues have remained unaddressed for months on end.

It is unclear what the status of these apps is – if such glaring issues exist, has support for them been dropped completely? This seems to be the case: for other apps, the publisher has released frequent updates and engaged with reviews that point out issues. The fact that the abandoned apps remain available for download despite their unusability and lack of developer support is troubling, and speaks to a pattern where apps are launched without the necessary infrastructure for follow-up. This has caused a fair amount of confusion on app stores, as people continue to download said apps and leave negative reviews because of the clear lack of functionality.

If this is demonstrative of a communication gap between app developers and the intended user base, it is not the end of it. Certain apps clearly seem designed for a large user base, but evidently have not been used as such. The Click ECP app, meant to facilitate voters during each election cycle, and the Covid-19 Tracker app for Lahore both sit at just over 1,000 downloads on the Playstore, when it is intuitive that their usage numbers should be far higher. The “Equal Access App”, meant to help disabled individuals, also remains largely unused, its target user base unengaged. At best, this results in certain apps going unused by their target demographic. At worst, it can open the door to privacy violations.

Upon first use, a lot of apps require permission to access certain information and features of a phone. While this varies from app to app, the general rule of thumb is that apps should only ask for the permissions that are core to their functionality. Instagram, for example, will only ask for permission to use your camera when you open the in-app camera for the first time. Even this can run awry – the Facebook app has long been under suspicion of secretly recording conversations for advertisement purposes. A number of apps supported by the Pakistani government, however, ask for a lot of permissions right at first launch. The Pehchaan app (unavailable on the Playstore as of September 2021) immediately requests permission to access a user’s location on launch. The “Forest Management Information System” (FMIS) app requests not only access to location services, but also permission to use the phone’s camera, to “modify and delete contents” of media files saved on the device or USB storage, and to view Wi-Fi connection information. Why the app requires any of this is puzzling, especially since there is no use for any of these features immediately after the app has been launched. This runs afoul of the principle of data minimization – the idea that data collectors should only request and use the data needed for a specific purpose. Ideally, that purpose should be communicated clearly, and a privacy policy attached, in any scenario where private data is needed. Given that there is little communication from the developers about why these permissions are needed in the first place, it is extremely troubling that many people in Pakistan could agree to them just to launch an app, without realizing the extent to which their privacy is invaded. While the Google Play store does require that each app have a privacy policy attached, the Punjab IT Board’s privacy policy seems inadequate.
Because it is a generic policy, it does not cater to the way each individual app may request, use, and store user data. By contrast, the City Islamabad App’s privacy policy and the Pakistan Citizen Portal’s privacy policy at least both specify the kind of data that may be collected. The Punjab IT Board’s own commitment to collecting only “the minimum amount of information” required by an app may already be violated by FMIS’s broad permission requests. It is clear that the Punjab IT Board’s privacy policy – under which most of the apps released so far fall – could be made more comprehensive and applied more rigorously.
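The data-minimization concern can be made concrete with a small audit sketch. This is purely illustrative, not DRF's or anyone's actual audit methodology: the permission lists below are hypothetical stand-ins for what a reviewer might extract from an app's manifest, and the "justified" set is an assumed judgment about the app's stated purpose.

```python
# Hypothetical audit: compare the permissions an app actually requests
# against the minimal set its stated purpose would justify.

def excess_permissions(requested, justified):
    """Return permissions requested beyond what the app's purpose needs."""
    return sorted(set(requested) - set(justified))

# Illustrative values only -- not taken from any real app manifest.
fmis_requested = [
    "ACCESS_FINE_LOCATION",
    "CAMERA",
    "WRITE_EXTERNAL_STORAGE",
    "ACCESS_WIFI_STATE",
]
# A forest-information app arguably needs location at most (an assumption).
fmis_justified = ["ACCESS_FINE_LOCATION"]

print(excess_permissions(fmis_requested, fmis_justified))
# Any non-empty result flags a potential data-minimization problem
# worth explaining in the app's privacy policy.
```

A review of this kind turns the abstract principle into a checkable question: for each permission beyond the justified set, is there a documented purpose?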

Ultimately, the legitimacy of the Digital Pakistan initiative is worth questioning. Despite the massive growth in Pakistan’s access to these digital technologies and the potential therein, the system put in place to actualize it deserves further scrutiny. The reception of apps published by the government needs to move beyond a tokenistic celebration of each app’s release, to an evaluation of their actual benefit and long-term functioning.