April 9, 2025
DRF condemns violation of privacy rights in police raid
February 21, 2025
Marking its launch, the Digital Accountability Collective South Asia calls for stronger platform governance and user protection
South Asia, despite being home to one-fourth of the world’s population and serving as a dynamic hub for tech innovation and burgeoning digital economies, has been consistently overlooked by global funders and tech corporations. Our region is often left grappling with the dual challenge of uneven platform policies and emerging authoritarian state regulations that increasingly undermine fundamental freedoms. The lack of context-specific protections from tech platforms adds to the unequal and unsafe online experiences of millions in our region, particularly among vulnerable communities. Furthermore, intermediary intervention in South Asia is marked by a significant lack of meaningful engagement with the communities most affected by rapid technological proliferation and uneven governance.
In January 2025, Digitally Right (Bangladesh), Digital Rights Foundation (Pakistan), and Hashtag Generation (Sri Lanka) convened in Colombo, where the Digital Accountability Collective South Asia (DACSA) emerged from a shared commitment to address pressing concerns regarding platform governance, accountability, and the broader impact of existing and emerging technologies in South Asia.
Our collective mission centers on ensuring that tech platforms operate in a manner that is transparent and equitable and that safeguards the rights of all users, particularly those from marginalized communities. We aim to present a unified voice from South Asia with a nuanced understanding of the impact of platform policies and stringent state regulations on our communities. The coalition also aims to learn, understand, contribute, and influence change at regional and local levels, and to amplify the voices of civil society working on digital rights and tech justice across South Asia. DACSA will encourage collaboration among groups working on digital rights in South Asia in order to build a regional movement to influence platform and state policies that are inconsistent with the international human rights framework. Together, we seek to create mechanisms that hold platforms and states accountable while empowering communities with the skills, tools, and frameworks to navigate the digital world safely and equitably.
As three organizations that have worked with stakeholders to promote digital equity and safety in our respective countries for several years, we also represent, and seek to highlight, the combined weight that Bangladesh, Sri Lanka, and Pakistan carry in the South Asian tech space. The coalition is intended as the first step towards a wider South Asian collective that brings together the concerns and wealth of experience of three organizations that have been working on the ground to foster equality and safety in online spaces and hold tech platforms accountable.
DACSA expresses grave concern over the growing trend among social media and tech corporations to enact drastic policy changes, reportedly influenced by commitments to align with the current US administration’s priorities. These shifts, which include delegitimising fact-checking and dismantling safeguards for marginalized communities, risk exacerbating misinformation, political instability, communal violence, and democratic backsliding in regions like South Asia.
The erosion of accountability mechanisms, including protections for gender and marginalized identities, blatantly disregards the severe offline consequences of online hate speech and discrimination. By outsourcing enforcement to flawed user-reporting systems and abandoning proactive safeguards, tech companies disproportionately burden vulnerable communities already grappling with systemic harassment and violence. Such actions reveal a troubling prioritization of corporate and political interests over regional safety and equity. We urge all social media and tech companies to halt this dangerous trajectory and engage meaningfully with civil society to develop policies that prioritize user well-being. In South Asia, where digital platforms increasingly dictate political discourse and public safety for millions, the stakes of these profit-driven experiments are intolerably high.
As a collective, we at DACSA remain committed to closely monitoring the evolving digital landscape in South Asia and advocating for stronger, community-driven approaches to tech justice. We will continue to work collaboratively to ensure that the voices and experiences of those most affected are centered in shaping the region's digital future.
February 6, 2025
Statement by the Network of Women Journalists for Digital Rights
5 February, 2025: The Network of Women Journalists for Digital Rights unequivocally condemns the vile and orchestrated campaign of threats and targeted disinformation against senior journalist and anchor Munizae Jahangir by extremist elements. Jahangir, a journalist of exceptional integrity, is facing a brazen assault from individuals who flagrantly violate the law with impunity. This escalating pattern of intimidation is not just an attack on her; it is a direct assault on press freedom and the fundamental right of journalists to carry out their professional duties without fear.
It is outrageous that despite these threats being widely circulated on social media and well known to the relevant authorities, those responsible for upholding the law have remained silent. The failure of law enforcement agencies to act not only emboldens these extremist actors, but also reinforces a dangerous precedent where journalists can be targeted without consequence.
We demand immediate and decisive action. The perpetrators must be identified, legal action must be taken, and Munizae Jahangir must be provided with urgent and robust protection. This culture of impunity must not be allowed to persist.
The journalistic community stands in unwavering solidarity with Jahangir, making it clear that neither intimidation nor disinformation campaigns against journalists will succeed in silencing those dedicated to truth and justice. Any failure to act will be seen as complicity in this attack on press freedom.
January 23, 2025
NWJDR condemns domestic violence incident against journalist Naheed Jehangir and her sisters
23 January 2025, Pakistan: The Network of Women Journalists for Digital Rights (NWJDR) condemns the violent attack on journalist Naheed Jehangir and her sisters at the hands of their male relatives in Peshawar on 18 January 2025.
In what appears to be an incident of domestic violence, Ms Jehangir and her sisters were returning home from a wedding when their vehicle was allegedly stopped by their uncle and other male relatives, who attacked them violently. Ms Jehangir and her sisters managed to escape and report the incident to the police, but so far only one of the five nominated accused has been arrested. Moreover, the police have reportedly been lax in their efforts to provide adequate support to the survivors.
This situation exemplifies the pervasiveness of violence against women, specifically domestic violence, in Pakistan. According to a policy brief from the National Commission for Human Rights, 90% of women experience domestic violence in their lifetime. While legal infrastructure is in place to combat domestic violence, such as the Khyber Pakhtunkhwa Domestic Violence against Women (Prevention and Protection) Act, the implementation of such laws remains deeply flawed. Women continue to face obstacles and dismissive attitudes at every turn, especially from law enforcement authorities.
The NWJDR expresses solidarity and support with Naheed Jehangir and her sisters in their struggle to attain justice, and urges the local police to arrest the remaining accused and ensure that the perpetrators are held accountable. The NWJDR also notes that we must collectively work to ensure continuous support for survivors. Domestic violence has no place in a progressive society.
August 6, 2024
DIGITAL RIGHTS FOUNDATION PUBLIC COMMENT ON OVERSIGHT BOARD CASES 2024-007-IG-UA, 2024-008-FB-UA (EXPLICIT AI IMAGES OF FEMALE PUBLIC FIGURES)
Submission: Research Department - Digital Rights Foundation
Aleena Afzaal - Sr. Research Associate
Abdullah B. Tariq - Research Associate
Submission Date: April 30, 2024
Legal Context:
Given the borderless nature of digital content, Meta should consider international legal developments as a framework for its policies. The European Union’s Digital Services Act and specific statutes from the U.S. state of California, such as AB 602, provide precedents for regulating digital content and protecting individuals against non-consensual use of their images.
Inconsistent responses in two different cases (how such cases affect people in different regions):
It is important to note that the two cases relating to deepfake images of women public figures were approached and dealt with differently, potentially due to differences in ethnicity and identity: one figure is from the Global North and the other belongs to the global majority. The American public figure’s case received a relatively immediate response, whereas the case of resemblance to a public figure in India was not highlighted or amplified as quickly. Despite the technical discrepancies, it cannot be ignored that in the latter case, an Instagram account with several similar images remained unflagged for a long time. Additionally, a question that arises continuously from this string of cases is why tech platforms have not adopted technological mechanisms that can flag sensitive content, particularly deepfakes circulating across different platforms. The harms arising from emerging technologies, particularly generative AI content, need to be viewed through a more intersectional lens. Women and marginalized groups in the global majority, particularly in South Asia, are more vulnerable to attacks online, with a more significant impact on their online and offline safety than individuals from the Global North. While female security and inclusion are crucial, the potential otherization of the community is concerning and needs to be revisited.
Moreover, taking cultural context into account, the level of scrutiny and criticism a South Asian woman is subjected to in such events is higher compared to that faced by an American woman. In India, a woman is viewed as good only if she is able to maintain the respect and honor of her family. Female bodies are sexualized, and any attack on them is considered an attack on men and the community’s honor. Several cases have come forward in the past where women and young girls in India have taken their own lives as a result of leaked photos. In the wider Indian subcontinent, cases have arisen where women have been subjected to honor killing as a consequence of being romantically involved with a man, having their explicit photos leaked, and more. Such cases showcase an underlying problem in which women and honor are treated as interchangeable terms, and this needs to be taken into consideration when handling issues of a similar nature. Public figures or not, women are more prone to being targeted by AI-generated content and deepfakes. Recently, incidents have come forward where deepfakes of two female public figures in Pakistan were made widely available across different social media platforms. As far as Meta’s platforms are concerned, these deepfakes were uploaded with nudity covered by stickers and emojis; however, in the comments section, users offered and/or asked to share links to view the originally created content. It is crucial that platforms like Meta have mechanisms in place whereby content and comments amplifying technology-facilitated gender-based violence are also flagged.
Considering this higher probability combined with the societal consequences, it is essential for Meta to give greater consideration to cases involving deepfakes and AI-generated content that exhibit characteristics of technology-facilitated gender-based violence, particularly in countries from the global majority where the risk of potential harm is higher than elsewhere. Human reviewers should also be made aware of the language and cultural context of the cases under consideration. Trusted partners of Meta should be entrusted with the task of escalating such cases, with the response time of prioritized cases expedited so that they are addressed at the earliest.
Clarification and Expansion of Community Guidelines:
Meta’s current community standards need to be more explicit in defining violations involving AI-generated content. There is an urgent need for a specific section in the platform’s public-facing community guidelines to address deepfakes. Detailing examples and outlining repercussions would clarify the company's stance for users and content moderators alike. Public figures are at a higher risk of becoming victims of deepfake content due to their vast exposure (reference imagery) in online spaces. Thus, the policy rationale and the consequent actions need to be the same for public figures and private individuals, considering the sensitivity of such content regardless of an individual’s public exposure. It is equally important that Meta revises its policy regarding sensitive content in which the person being imitated is not tagged; the policy needs to be inclusive of such content, as the potential harms remain. Regular updates to these guidelines are crucial as AI technology evolves.
Technical Mechanisms for Enhanced Detection and Response:
- Implementing cutting-edge machine learning techniques to detect deepfake content (image, video, and audio) can significantly reduce the spread of harmful content. These algorithms should focus on detecting common deepfake anomalies and be regularly updated to keep pace with technological advancements. A two-pronged approach can be utilized for detecting and flagging harmful content on Meta’s platforms. Larger investments should be placed in automated detection systems that can efficiently categorize and identify generative AI content and adapt to future advancements.
- Detected generative AI content should be marked on Meta platforms to avoid confusion or the spread of misinformation. Meta also needs to reassess its appeals pipeline and allow for extended review times, especially for content that contains any human likeness.
- Collaborating with AI developers to embed watermarks in AI-generated content can help automatically identify and segregate unauthorized content. This would bolster Meta's ability to preemptively block the dissemination of harmful material.
- Expanding Meta’s Media Matching Service database to include international cases and allowing for real-time updates can enhance its effectiveness in identifying and removing known violating content swiftly.
- Meta should build on and enhance the capacity of its trusted partners, particularly in terms of escalating content to the platform, and maintain a robust and quick escalation channel for emergencies or content that is life-threatening. Meta needs to have emergency response mechanisms in place and policy teams who are sensitized to deal with matters of utmost urgency, particularly when they relate to marginalized groups and vulnerable communities.
The current challenges faced by Meta in managing AI-generated content are largely due to the lack of specificity in its policies to encapsulate generative AI content. The community standards in their current state fail to address the complexities of AI-generated content and the adverse impacts it can have on people and communities. Meta’s clear differentiation in its policy application rationale for the two cases raises concerns over irregular and inefficient content moderation policies. While we acknowledge that the content in both cases is no longer on the platform, the urgency displayed in taking down content in the second case compared to the delay in removal in the first case highlights the dire need for a stringent and equitable response by social media platforms to generative AI content. Moreover, in the second case the deepfake image of an American woman public figure was removed under the Bullying and Harassment policy, specifically for “derogatory sexualised photoshop or drawings”. Greater discourse is required over what classifies as “derogatory” in this context. In the absence of a derogatory element, will an AI-generated image that involves sexualisation and nudity remain available to view on the platform? If so, how is Meta weighing the consent, privacy, and dignity of public figures on its platforms? These are the questions that need to be addressed and outlined in Meta’s content moderation policies, especially in terms of technology-facilitated gender-based violence.
Meta’s Media Matching Service Banks are restricted by the database of known images, which renders them highly ineffective against newly generated deepfake content. With tools to create generative AI content becoming increasingly accessible, the technology to flag and address such content needs to catch up as soon as possible. It is essential for Meta to expand its database to encompass a wider array of AI-generated content types and implement real-time updates.
In conclusion, Meta’s automated detection systems struggle to keep pace with the rapidly advancing and sophisticated technologies used in deepfake content. For Meta to ensure safety on its platforms for marginalized groups and communities, it is essential that it revisit its content moderation policies pertaining to generative AI content while investing in and enhancing the capacity of its trusted civil society partners to escalate content to the platform.
July 15, 2024
Technology-facilitated Gendered Surveillance on the Rise in Women’s Private Spaces
7th June 2024
Pakistan: Digital Rights Foundation (DRF) is extremely alarmed and concerned about the ongoing surveillance of women and girls in private spaces through unregulated CCTV cameras in women's shelters, hostels, universities, and salons, violating their right to privacy and dignity. Women are already disproportionately subjected to gender-based violence, harassment, and social surveillance, which in turn pushes them to seek refuge in gender-segregated private spaces such as these.
According to the 2023 Global Gender Gap Report, Pakistan ranks 142nd out of 146 countries in terms of gender parity, which covers economic participation and opportunity, educational attainment, health and survival, and political empowerment. With women’s participation severely limited and restricted in the country, women are significantly more financially dependent, prompting them to look towards spaces like Dar-ul-Amans (designated shelters for women in distress) for shelter and protection. Women residing in Dar-ul-Amans are largely vulnerable, particularly when they face little to no familial support and are seeking refuge.
Dar-ul-Amans in the country have been purpose-built to provide state-sanctioned support at an institutional level. In light of this, the use of unregulated CCTV cameras flagrantly threatens and targets women’s dignity and privacy. This is an active and gross violation of their constitutional rights as granted under Article 14. Additionally, S. 9(5) of the Guidelines for Dar-ul-Aman in Punjab also recognizes these rights and states that ‘violation of a resident’s privacy shall be considered as misconduct and the Social Welfare Department shall be justified in taking appropriate action in this regard.’
Women living in these shelters have also complained of gross mistreatment and abuse at the hands of those in charge at these centers. Days before the Rawalpindi Dar-ul-Aman incident, a similar incident took place in one of Lahore’s women’s hostels, where hidden cameras were found on the premises. These repeated instances of CCTV cameras being installed in private spaces under the guise of safety, and of the footage being misused, serve as a direct invasion of privacy and a threat to women’s physical safety, and create a hostile environment of mistrust and insecurity amongst women at large.
There have even been reports of CCTV cameras being installed to surveil women in salons, where the footage and data have later been used as blackmail material. In 2019, students from the University of Balochistan (UoB) protested in the wake of CCTV camera footage being used by security personnel to sexually harass and blackmail students, particularly young women on campus. In the past, the Senate Standing Committee on Human Rights has taken notice of these issues, and we urge it to exercise its position to do the same now and investigate these heinous violations of women’s privacy at Dar-ul-Amans and other private spaces.
Since its inception, DRF's Cyber Harassment Helpline has received 16,849 complaints from across Pakistan, with 58.5% of the complaints coming from women. Over the years we have received a number of complaints in which women have repeatedly reported being targeted through surveillance and spyware technologies injected into their devices by individuals close to them in order to control and monitor their movements and activities. We have also witnessed a rising trend of women being captured on camera without their consent, in addition to the misuse of their intimate images through blackmail and intimidation. In some instances these images are further manipulated and doctored through the use of generative AI tools to create deepfake visuals and imagery.
We strongly urge transparent and urgent investigations into these incidents of unregulated CCTV cameras being used to violate women’s privacy, which are contributing to increased gendered surveillance in the country. DRF has long been advocating for a human-rights-centric personal data protection law for this very reason, one that centers the privacy and data of vulnerable communities, including women, gender minorities, and marginalized groups. We urge the current Ministry of Human Rights (MoHR) and Ministry of Information Technology & Telecommunication (MoITT) to involve women’s rights and digital rights groups in consultations around the proposed data protection bill in order to address the existing gaps. Moreover, we urge the National Commission for Human Rights (NCHR) and the National Commission of Women Rights (NCWR) to look into the matter posthaste and ensure that women are not subjected to gender-based violence at the hands of technology, particularly in the form of surveillance in private and public spaces.
Digital Rights Foundation is a registered research-based NGO in Pakistan. Founded in 2012, DRF focuses on ICTs to support human rights, inclusiveness, democratic processes, and digital governance. DRF works on issues of online free speech, privacy, data protection and online violence against women.
For more information, visit: www.digitalrightsfoundation.pk
Contact
Nighat Dad
Seerat Khan
Anam Baloch
May 13, 2024
DIGITAL RIGHTS FOUNDATION PUBLIC COMMENT ON OVERSIGHT BOARD CASE 2023-038-FB-MR (PAKISTANI PARLIAMENT SPEECH)
Submission Author: Maryam Ali Khan
Submission Date: 23rd January, 2024
In May 2023, a news channel posted a video on Facebook in which a Pakistani politician, addressing the parliament, suggested that some public officials, including military personnel, needed to be hanged in order for the country to ‘heal itself.’ He did so by drawing parallels between the contemporary Pakistani political landscape and an ancient Egyptian ritual in which individuals were sacrificed in the River Nile as a means of controlling flooding, framing the executions as a perceived necessity. Considering this, Meta should have taken down the video from Facebook in accordance with its Violence and Incitement policy. The policy stipulates that content targeting individuals (other than private individuals and high-risk persons) with statements advocating or calling for violence, as well as statements containing aspirational or conditional calls to violence, will be removed.
The statements made by the politician in the video were clearly inflammatory and violent, especially when viewed within the context of the country’s political history, where hangings of influential figures, such as Zulfikar Ali Bhutto, have been manipulated by those in power to advance their agendas and propagate specific narratives - narratives that have had long-term consequences for the political fabric of the country. When looking at posts and content such as this, it is also important to consider the role that state institutions such as the military and judiciary have played, and continue to play, in Pakistani politics. Pakistan has a history of the military managing or meddling with civilian state institutions. Free press and journalism have been relentlessly monitored and restricted during these periods of military rule, with censorship and intimidation being a regular occurrence. Journalists and citizens encounter a variety of problems, including threats of assault and harassment. Pakistan currently ranks 150th out of 180 nations in the 2023 World Press Freedom Index, indicating a striking deterioration in freedom of the press. As the country progresses, it is critical that authorities take relevant steps to ensure freedom of the press, especially because no democracy can function efficiently without it.
As Pakistan approaches its upcoming general elections scheduled for February, familiar patterns seem to be repeating themselves. Censorship of the press and journalists has been ongoing since before Imran Khan was ousted as Prime Minister through a no-confidence motion in April 2022. There have been multiple riots by supporters of his party since then, and the Pakistan Electronic Media Regulatory Authority (PEMRA) has increasingly censored media outlets and journalists who are critical of the state. Additionally, there have been multiple state-imposed internet shutdowns in an attempt to silence dissent and disrupt online political campaigning by certain political parties. Since May 2023, former prime minister Imran Khan has been in jail, and his party members are being severely restricted by authorities from freely contesting elections.
In Pakistan, social media and digital platforms have played a huge role in amplifying journalistic freedom. These platforms allow journalists and media organizations to quickly reach a much larger audience than was conventionally possible. However, this has also meant that many journalists and news agencies have been unable to maintain their standards of what content is “newsworthy”, and often share content that is purely sensational and intended to bring in views - which is good for business. Journalists are pressured by media houses to sensationalize news, reflecting the severe absence of ethical journalism standards in the industry. We have also seen a shift in the industry where news reporting is no longer a monopoly maintained by journalists: Pakistan has seen an increasing trend of ‘YouTubers,’ ‘political commentators,’ and ‘influencers’ spreading disinformation under the guise of news in the country.
The speech provided no useful information and contributed to further instability in the lead-up to the events of May 9th, when riots broke out in cities across the country and military buildings were attacked following Imran Khan’s arrest. What is more alarming is that this particular speech incited public opinion towards attacking public officials - whether military officials, politicians, or any other officials believed to be working for a political opponent of the party that made the statement.
When determining what kind of content should be allowed to stay up on social media platforms, content moderators should keep these regional and political contexts in mind, especially how certain content could escalate offline and online violence. Freedom of speech and freedom of the press can be achieved without resorting to blatantly violent speech. In content removal and guideline development, Meta should encourage governments to adhere to appropriate protocols for submitting content removal requests. The Pakistani government has the capability to monitor and censor online material and in February 2020 issued draconian rules for social media platforms that allow authorities to have “unlawful” information erased within 24 hours. These restrictions have been criticized for limiting freedom of expression and stifling the dissent of users online. The Prevention of Electronic Crimes Act (PECA) 2016 also gives authorities powers to monitor and prohibit internet content. Therefore, it is essential that the government establishes a well-defined set of guidelines and protocols that prioritize human rights principles and the freedom of the press. The justification for content removal should not be based on its opposition to the state or its advocacy for causes such as women's and trans rights - which are often misunderstood as ‘un-Islamic,’ ‘immoral,’ ‘vulgar,’ and ‘immodest.’
These steps will help in the democratization and de-escalation of political tensions both in online and offline platforms, while simultaneously improving the quality of journalism in the country.
*To read the Oversight Board’s full decision on this case:
https://www.oversightboard.com/decision/fb-57spp63y/
**To see all submitted Public Comments: https://www.oversightboard.com/news/
May 6, 2024
DIGITAL RIGHTS FOUNDATION PUBLIC COMMENT ON OVERSIGHT BOARD CASE 2023-032-IG-UA (IRANIAN WOMAN CONFRONTED IN STREET)
Submission: Maryam Ali Khan, Digital Rights Foundation
Submission Date: 30th November, 2023
Meta’s classifiers failed to assess the relevant context and made too abrupt a decision in removing the post shared on Instagram. The shared video showed a man confronting a woman in public because she was not wearing a hijab. The woman, whose face was visible in the video, was arrested following the incident. The accompanying Persian caption used descriptive language to express the user’s support for the woman in the video and for all Iranian women standing up to the regime. As per Meta's assessment, the caption was construed as expressing an "intent to commit high severity violence," thereby violating its Violence and Incitement policy. The post was later restored to Instagram under the Coordinating Harm and Promoting Crime policy after the user appealed to Meta and it was decided that the post did not violate community standards. This policy allows users to advocate for and debate the legality of content that aims to draw attention to harmful or criminal activity, as long as they do not advocate for or coordinate harm. It also outlines that any content that puts unveiled women at risk requires additional information and context.
Notably, this context was present in the post itself and would have been captured had the classifiers been designed to assess the content in its totality rather than processing the caption and media individually. The attack did not take place in a vacuum; it was a byproduct of strict moral policing by the Iranian state. This was exacerbated by the political unrest that unfolded after September 2022, when Mahsa Amini was taken into custody by the morality police under accusations of observing ‘improper hijab’ and died in custody under suspicious circumstances, officially attributed to a heart attack. Her death sparked nationwide protests united by the chant ‘Zan, Zendegi, Azadi’ (Woman, Life, Freedom).
For the ‘Woman, Life, Freedom’ movement, social media and online platforms were paramount in mobilizing protests and broadcasting vital information. Videos and pictures from protests in schools, universities, and streets circulated, showing more and more women exercising their right to freedom of expression by appearing in public without head coverings. Social media gave the protesters a platform to carry their message to the world; the prominent Iranian actress Taraneh Alidoosti, for example, posted multiple pictures of herself without a headscarf on Instagram with the caption ‘Woman. Life. Freedom’. Women willfully unveiling in public spaces quickly became a symbol of defiance against the morality police and the regime. Unsurprisingly, such acts of defiance in a political and religious climate like Iran’s come with serious risks: women and girls who have stepped out in public without a head covering have been arrested, beaten, and had items like yogurt dumped on their heads, and men have also been arrested and beaten for showing support for the cause.
Additionally, the Iranian authorities resorted to unprecedented levels of internet shutdowns in an attempt to silence dissent and isolate the Iranian people from the world. According to Filter.watch, an Iran-focused internet monitor, Iran experienced internet blackouts, either nationwide or at a provincial level, for over four months after Mahsa Amini's death. The government also enacted legislation allowing it to monitor and identify individuals based on their online activity.
These measures are part of the government's effort to curtail freedom of expression and access to the global internet. A majority of Iranian users either experience constant removal of their content or know at least one person being censored in the Persian language. The most commonly removed or shadowbanned content includes hashtags of human rights campaigns, comedians' political satire, and activist organizations' use of chants like “death to Khamenei”. Persian-language news organizations have also had their content removed simply for discussing political organizations. Most of the content posted on Meta's platforms is in languages other than English, with more than a hundred languages in use on Facebook. This needs to be taken into account when assembling contextual embeddings: Meta needs to improve its natural language processing (NLP) and scale it across more languages, and the systems that detect and remove policy-violating content should then be trained accordingly.
Access to safe and well-regulated social media platforms is essential for socio-political movements, making it essential for Meta to review its content moderation across multiple regional, cultural, and linguistic contexts. Before removing content based solely on the judgment of automated classifiers, Meta should prioritize training human moderators to understand the complications tied to online content. This would allow moderators to assess media and their accompanying captions together, in accordance with contextual cues, enabling a more nuanced and accurate decision-making process than evaluating them separately. Social media users ought to have the freedom to share content expressing support for a cause or condemning harmful regimes and beliefs, without Meta's classifiers flagging it as a violation even where the language used may be deemed 'offensive.' Offensive language can be used in non-offensive contexts, and hate speech does not always contain offensive language.
CNN, Leading Iranian actor posts picture without hijab in support of anti-government protests (CNN, 2022) https://www.cnn.com/2022/11/10/middleeast/iran-taraneh-alidoosti-actor-hijab-intl
AWID, Iran's Year of Defiance and Repression: How One Woman's Death Sparked a Nationwide Uprising (AWID, 2023)
BBC NEWS, Iranian Women arrested for not covering hair after man attacks them with yogurt (BBC, 2023) https://www.bbc.com/news/world-middle-east-65150135
March 20, 2024 - Comments Off on Digital Rights Foundation public comment on Oversight Board case: politician’s comments on demographic changes
Digital Rights Foundation public comment on Oversight Board case: politician’s comments on demographic changes
Submission Author: Abdullah b. Tariq Submission Date(s): 12 December 2023
This case concerns commentary on French demographic changes by the French politician Eric Zemmour. The post, shared on Zemmour’s Facebook page by his administrator, contained an interview in which Zemmour remarked on demographic changes and a shift in the balance of power in Europe, going on to say that this change in demography amounts to Africa colonizing Europe. Zemmour has crossed paths with the French justice system before, facing legal action for “inciting discrimination and religious hatred.” On a careful analysis of the current political discourse in Europe and the case's contents, we conclude that the post violates Meta’s hate speech policy under the Tier 3 categorization. The comment is not merely about immigration policies but makes a broader generalization about Africans in Europe. The post echoes “The Great Replacement” (Le Grand Remplacement) theory. This idea, propagated by French author Renaud Camus, promotes violence and hatred by framing the presence of non-white populations, particularly from Muslim-majority countries, as a threat to the ethnic French and white European populations. While Camus publicly condemns white nationalist violence, scholars argue that “implicit calls to violence” are present in his depiction of non-white migrants “as an existential threat”. The theory has been linked to several far-right terrorist acts, including the Christchurch mosque shootings and the El Paso shooting, and it has grown popular among anti-migrant and white nationalist movements in Europe, with its broader appeal attributed to simple catch-all slogans. More than a commentary on immigration policy, the post deepens an existing civil division. It would therefore be fair to categorize the post's contents under Tier 3 of the hate speech policy. Moreover, the post includes traces of misinformation and misleading content, which also falls under Meta’s content moderation policy on misinformation.
When provided with contextual information, the statement in question fits the broader conspiracy discourse in France around the Great Replacement, which Zemmour has vigorously defended. The concept, echoed by far-right groups across Europe, claims that the white population of Europe is being demographically replaced. The sentence “...there are four Africans for one European and Africa colonizes Europe…” seeks to sow segregation and dissent against the wider African diaspora within Europe. This ideology has previously been used by white supremacists to justify mass shootings in the US and New Zealand, underscoring the global relevance and repercussions of such a narrative. The argument used to support the claim is equally misleading: inferring the causation of “colonization” from a demographic correlation is a fallacious leap that fuels conspiracy among the general populace. Additionally, the term “colonization” implies a power hierarchy between demographic segments that does not exist in the context in which the politician frames it.
Although Zemmour’s comment ostensibly offers a demographic comparison of two continents at two points in time, the addition of “...Africa colonizes Europe…” creates a false causal link between demography and colonization. In that context, Zemmour is using false information to target a race and nationality, which goes directly against Meta’s policies on misinformation and hate speech. Such misinformation poses a danger to European democracies, as intimidation and manipulative narratives jeopardize the broader political discourse on immigration policies and democratic elections in Europe.
Such conspiracies not only otherize an entire population segment but also induce hate and fear among the white European population. The statement “...Africa colonizes Europe…” insinuates that African immigrants living in Europe are colonizers. Distinguishing European citizens from European citizens of African descent in this way is highly exclusionary and discriminatory on the basis of race and nationality. Moreover, such extreme claims about reverse colonization driven by demographic change divert attention from arguments of legitimate concern for much of Europe today. Commentary on and criticism of immigration policies are healthy topics of discussion that should not be restricted in our digital spaces. However, when this discourse enters the realm of conspiracies and misinformation, well-informed policymaking becomes a casualty of manipulated truth. It is therefore equally essential to keep the wider population, especially protected groups, safe in offline and online spaces. Meta needs to ensure, especially through election periods, that bogus and conspiratorial claims are identified and marked on its platforms. Until the platform finds a way to efficiently and effectively include detailed contextual embeddings within its algorithms, there needs to be increased human review of such reports. Few laws govern the involvement of AI in online political discourse; therefore, as a company serving billions of users, the responsibility falls on Meta to minimize the impact of such automated models on the development of human discourse.
Zemmour’s comment on demographic changes cannot be viewed in isolation, given his influence on political discourse in France. The claim of a shift in power and the explicit mention of “Africans” targets and alienates the non-white population of Europe. The contextual underpinnings of broader anti-migrant discourse in Europe and the lack of non-white voices point to the larger issue of discrimination against groups with protected characteristics. In such an environment, Meta must ensure its platform does not feed into discriminatory practices. Politicians worldwide have massive followings in online spaces and use these platforms to address a wider voting class. However, their followers are primarily those already aligned with their political ideologies, as the response to Eric Zemmour’s post makes evident. This creates an echo chamber in which ideologies propagate and expand with little resistance, and a lack of accountability in such situations can give rise to hostile and harmful narratives. It is therefore paramount that Meta monitor far more carefully what is propagated in these echo chambers. While identifying and removing hateful content online is essential, it is equally, if not more, important to evaluate the impact of such content. Content moderation policies should apply heightened sensitivity when evaluating content with greater influence on the general public.
The case’s contextual review shows how the post discriminates against a protected group through misleading, fear-mongering narratives and exclusion. The alienation of a non-white demographic segment through Zemmour’s comments exacerbates the ongoing discourse around migration laws. In such situations, Meta needs to be able to identify and differentiate between political commentary and the targeting of specific segments of society (“Africans”) through misinformation and hate speech. Meta’s hate speech policy allows for “commentary and criticism of immigration policies”; however, that exception does not apply to this case. Conspiracy theories and discriminatory speech fall under the categorization of hate speech; a spade should be called a spade and dealt with as such. Providing safe spaces for conspiracies and hateful narratives to grow under the guise of political commentary could have a detrimental impact on the democratic values of European people and further divide the civilian population. Reviewers of such claims should therefore develop a more rigorous understanding of the context within different echo chambers and political spheres. On that basis, Tier 3 of Meta’s hate speech policy should take into account the repercussions of specific comments on immigration policies and how they promote the segregation and exclusion of protected groups.
February 19, 2024 - Comments Off on NWJDR condemns the use of technology-facilitated gender-based violence (TFGBV) and Generative AI to attack and silence women journalists
NWJDR condemns the use of technology-facilitated gender-based violence (TFGBV) and Generative AI to attack and silence women journalists
PAKISTAN: The Network of Women Journalists for Digital Rights (NWJDR) is angered and deeply concerned about the ongoing attacks against prominent woman journalist Meher Bokhari and others in online spaces by PML(N) party supporters. On examining multiple platforms, NWJDR has found non-consensual use of images (NCUI), non-consensual use of intimate images (NCII), and doctored images of Meher Bokhari, created through generative artificial intelligence (AI) and other AI tools, being shared online with sexist, misogynistic, and sexualized gendered attacks.
This is not the first time women journalists have been targeted online by political party supporters. Women journalists in Pakistan face pervasive and persistent online harassment and sexualized and otherwise gendered disinformation, with many threatened with physical assault and offline violence. We have witnessed multiple incidents of women journalists' private information being leaked online in what appear to be well-planned and directed efforts to silence them, which have resulted in stalking and offline harassment. In Meher’s case, the attempt to malign, scare, and threaten her with images morphed onto objectionable content through generative AI tools points to a remarkably alarming new form of technology-facilitated gender-based violence (TFGBV) against journalists.
Before the elections, NWJDR released a 6-point agenda on media freedom and journalist safety for political parties' electoral manifestos, signed by more than 100 journalists and civil society members. It is alarming and disappointing that despite these efforts to raise our concerns with political parties around journalist safety, we witnessed, within a matter of days, attacks on journalists such as Meher Bokhari, Maria Memon, Hamid Mir, Saadia Mazhar, and Benazir Shah, to name a few, along with journalists' family members, simply for reporting during Pakistan’s 2024 general elections.
These actions anger NWJDR, and we reiterate that online violence and abuse constitute an offense and that the relevant authorities should take complaint-based action.
Signed by:
- Absa Komal - Dawn TV
- Hafsa Javed Khawja - The reporters
- Mehr F Husain - Editor- The Friday Times/ Publisher, ZUKA Books
- Amber Rahim Shamsi - Director, Centre for Excellence in Journalism
- Saadia Mazhar- Freelance Investigative journalist
- Tehreem Azeem - Freelance Journalist and Researcher
- Laiba Zainab- The Current
- Rabbiya.A. - Turkman Multimedia Journalist & Documentary Maker
- Shehzad Yousafzai
- Nighat Dad - Executive Director, Digital Rights Foundation
- Amer Malik - Senior Journalist, The News International
- Kaif Afridi - News Producer, Tribal News Network
- Feroza Fayyaz - Web-Editor, Samaa TV
- Muhammad Ammad - Copy Editor
- Nasreen Jabeen - AbbTakk News
- Seerat Khan - Programs Lead/Co-editor Digital 50.50 , Digital Rights Foundation
- Afra Fatima - Digital journalist
- Fauzia Kalsoom Rana- Producer Power show with Asma Chaudhary, Founder and Convenor Women journalists Association of Pakistan WJAP
- Muhammad Bilal Baseer Abbasi - Multimedia Journalist, Deutsche Welle (DW) News Asia / Urdu Service
- Mahwish Fakhar - Producer Dawn TV
- Sadia Rafique Radio Broadcaster - FM 101
- Sarah B. Haider - Freelance Journalist
- Ramsha Jahangir - Journalist
- Zoya Anwer - Independent Journalist
- Afifa Nasar Ullah - Multimedia journalist/foreign correspondent at Deutsche Welle.
- Sheema Siddiqui- Geo TV Karachi
- Mudassir Zeb - Crimes Reporter Daily Aksriyat Peshawar.
- Nasreen Jabeen - Daily Jang Peshawar
- Aftab Mohammad - Dialogue Pakistan
- Anees Takar - Frontier Post, Radio Aman Network
- Unbreen Fatima- Deutsche Welle
- Khalida Niaz- Tribal News Network
- Naheed Jahangir- Assistant Media Manager
- Fozia Ghani- Freelance Journalist
- Ayesha Saghir- Express News
- Aamir Akhtar - Freelance Investigate Journalist Swabi, GTV News/Such News
- Rani Wahidi - Correspondent, Deutsche Welle (DW) Urdu
- Fahmidah Yousfi- Rava Documentary
- Kamran Ali- Reporter- Aaj News
- Jamaima Afridi- Freelance Journalist
- Fatima Razzaq- Lok Sujag
- Umaima Ahmed- Global Voices
- Najia Asher President GNMI
- Sanam Junejo - Associated Press of Pakistan
- Asma Kundi- Wenews.pk
- Maryam Nawaz- Geo News
- Lubna Jarrar - Freelance Journalist
- Sumaira Ashraf- Video journalist DW
- Laiba hussan - Aaj news
- Uroosa Jadoon- Geo News
- Tanzeela Mazhar GTV
- Ayesha Rehman - Geo News
- Najia Mir - Anchor/Producer KTN News
- Afia Salam- Freelance Journalist
- Farieha Aziz - Co-founder, Bolo Bhi
- Mehmal Sarfraz- Journalist
- Benazir Shah- Editor, Geo Fact Check
- Mahjabeen Abid- PTV National Multan
- Zainab Durrani - Senior Program Manager, Digital Rights Foundation
- Nadia Malik - Senior Executive Producer Geo News
- Annam Lodhi - Freelance Journalist
- Fatima Sheikh - Freelance Journalist/ Communications Executive at CEJ-IBA
- Maryam Saeed - Editor Digital 50.50
- Rabia Mushtaq - Senior Sub-editor, Geo.tv
- Nadia Naqi, Dawn News