Blog Archives

All Posts in Statement

August 6, 2024

DIGITAL RIGHTS FOUNDATION PUBLIC COMMENT ON OVERSIGHT BOARD CASES 2024-007-IG-UA, 2024-008-FB-UA (EXPLICIT AI IMAGES OF FEMALE PUBLIC FIGURES)

Submission: Research Department - Digital Rights Foundation

Aleena Afzaal - Sr. Research Associate 

Abdullah B. Tariq - Research Associate

 

Submission Date: April 30, 2024 

 

Legal Context:

Given the borderless nature of digital content, Meta should consider international legal developments as a framework for its policies. The European Union’s Digital Services Act and specific statutes from the U.S. state of California, such as AB 602, provide precedents for regulating digital content and protecting individuals against non-consensual use of their images. 

 

Inconsistent responses in two different cases (how such cases affect people in different regions):

It is important to note that the two cases relating to deepfake videos of female public figures were approached and dealt with differently, potentially due to differences in ethnicity and identity: one subject being from the Global North and the other from the global majority. The case involving the American public figure received a relatively immediate response, whereas the case involving a resemblance to a public figure in India was not flagged or amplified as quickly. Whatever the technical explanation, it cannot be ignored that in the latter case an Instagram account hosting several similar images remained unflagged for a long time. A question that arises repeatedly from this string of cases is why tech platforms have not adopted technological mechanisms that can flag sensitive content, particularly deepfakes circulating across different platforms. The harms arising from emerging technologies, particularly generative AI content, need to be viewed through a more intersectional lens. Women and marginalized groups in the global majority, particularly in South Asia, are more vulnerable to online attacks, with a greater impact on their online and offline safety than individuals from the Global North. While women's security and inclusion are crucial, the potential otherization of these communities is concerning and needs to be revisited.

 

Moreover, taking cultural context into account, a South Asian woman is subjected to a higher level of scrutiny and criticism in such events than an American woman. In India, a woman is often viewed as good only if she maintains the respect and honor of her family. Female bodies are sexualized, and any attack on them is considered an attack on the honor of men and the community. Several cases have come forward in the past where women and young girls in India have taken their own lives as a result of leaked photos. In the wider Indian subcontinent, women have been subjected to honor killings as a consequence of being romantically involved with a man, having their explicit photos leaked, and more. Such cases showcase an underlying problem in the region, where women and honor are treated as interchangeable terms; this must be taken into consideration when handling issues of a similar nature. Public figures or not, women are more prone to being targeted by AI-generated content and deepfakes. Recently, deepfakes of two female public figures in Pakistan were made widely available across different social media platforms. On Meta's platforms, these deepfakes were uploaded with the nudity covered by stickers and emojis; however, in the comments section, users offered and/or asked to share links to view the original content. It is crucial that platforms like Meta have mechanisms in place to flag content and comments amplifying technology-facilitated gender-based violence.
Considering this higher probability combined with the societal consequences, it is essential for Meta to give greater consideration to cases involving deepfakes and AI-generated content that show characteristics of technology-facilitated gender-based violence, particularly in countries of the global majority where the risk of harm is higher. Human reviewers should also be made aware of the language and cultural context of the cases under consideration. Trusted partners of Meta should be entrusted with escalating such cases, with the response time for prioritized cases expedited so they are addressed at the earliest.

 

Clarification and Expansion of Community Guidelines:

Meta’s current community standards need to be more explicit in defining violations involving AI-generated content. There is an urgent need for a specific section in the platform's public-facing community guidelines addressing deepfakes. Detailing examples and outlining repercussions would clarify the company's stance for users and content moderators alike. Public figures are at higher risk of becoming victims of deepfake content due to their vast exposure (reference imagery) in online spaces. The policy rationale and consequent actions should therefore be the same for public figures and private individuals, considering the sensitivity of such content regardless of an individual's public exposure. It is equally important that Meta revises its policy regarding sensitive content in which the person being imitated is not tagged; the policy needs to cover such content, as the potential harms remain. Regular updates to these guidelines are crucial as AI technology evolves.

 

Technical Mechanisms for Enhanced Detection and Response: 

  • Implementing cutting-edge machine learning techniques to detect deepfake content (image, video and audio) can significantly reduce the spread of harmful content. These algorithms should focus on detecting common deepfake anomalies and be regularly updated to keep pace with technological advancements. A two-pronged approach, combining automated detection with user and trusted-partner reporting, can be used to detect and flag harmful content. Larger investments should be made in automated detection systems that can efficiently categorize and identify generative AI content and adapt to future advancements.
  • Detected generative AI content should be marked on Meta platforms to avoid confusion or the spread of misinformation. Meta also needs to reassess its appeals pipeline and allow for extended review times, especially for content that contains any human likeness.
  • Collaborating with AI developers to embed watermarks in AI-generated content can help automatically identify and segregate unauthorized content. This would bolster Meta's ability to preemptively block the dissemination of harmful material. 
  • Expanding this database to include international cases and allowing for real-time updates can enhance its effectiveness in identifying and removing known violating content swiftly.
  • Meta should build on and enhance the capacity of its trusted partners particularly in terms of escalating content to the platform and having a robust and quick escalation channel in case of emergencies or content that is life-threatening. Meta needs to have emergency response mechanisms in place and have policy teams who are sensitized to deal with matters of utmost urgency particularly when it relates to marginalized groups and vulnerable communities.
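The triage logic implied by the recommendations above can be sketched in code. This is purely an illustrative model, not Meta's actual pipeline: the class, function names, weights, and thresholds are all hypothetical assumptions, chosen only to show how detector output, report volume, human likeness, and regional risk could be combined into an escalation decision.

```python
# Hypothetical sketch of the "two-pronged" flagging idea: an automated
# detector score combined with user/trusted-partner reports, with
# escalation priority raised for high-risk regions and likeness content.
# All names, thresholds, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ContentSignal:
    detector_score: float      # 0..1 output of a (hypothetical) deepfake classifier
    report_count: int          # user/trusted-partner reports received
    contains_likeness: bool    # depicts an identifiable person
    high_risk_region: bool     # region flagged as higher-risk for offline harm

def triage(signal: ContentSignal) -> str:
    """Return a queue label: 'auto_flag', 'priority_review', 'review', or 'pass'."""
    if signal.detector_score >= 0.9 and signal.contains_likeness:
        return "auto_flag"                       # high-confidence synthetic likeness
    # reports nudge borderline detector scores over the review threshold
    score = signal.detector_score + 0.1 * min(signal.report_count, 5)
    if score >= 0.7:
        # expedite review when offline harm is more likely
        return "priority_review" if signal.high_risk_region else "review"
    return "pass"

print(triage(ContentSignal(0.95, 0, True, False)))   # auto_flag
print(triage(ContentSignal(0.5, 3, True, True)))     # priority_review
```

The key design point the sketch illustrates is that the two prongs reinforce each other: content the detector is unsure about can still be escalated quickly once trusted partners report it.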

 

The current challenges faced by Meta in managing AI-generated content are largely due to the lack of specificity in its policies around generative AI content. The community standards in their current state fail to address the complexities of AI-generated content and the adverse impacts it can have on people and communities. Meta's clear differentiation in its policy rationale for the two cases raises concerns over irregular and inefficient content moderation. While we acknowledge that the content in both cases is no longer on the platform, the urgency displayed in taking down content in the second case, compared with the delay in removal in the first, highlights the dire need for a stringent and equitable response by social media platforms to generative AI content. Moreover, in the second case the deepfake video of an American female public figure was removed under the Bullying and Harassment policy, specifically for "derogatory sexualised photoshop or drawings". Greater discourse is required over what classifies as "derogatory" in this context. In the absence of a derogatory element, will an AI-generated image that involves sexualisation and nudity remain viewable on the platform? If so, how is Meta protecting the privacy, dignity, and consent of public figures on its platforms? These questions need to be addressed and outlined in Meta's content moderation policies, especially in terms of technology-facilitated gender-based violence.

 

Meta’s Media Matching Service Banks are restricted by the database of known images, which renders them highly ineffective against newly generated deepfake content. With tools to create generative AI content becoming increasingly accessible, the technology to flag and address such content needs to catch up as soon as possible. It is essential for Meta to expand its database to encompass a wider array of AI-generated content types and implement real-time updates. 
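The limitation described above can be illustrated with a toy near-duplicate matcher. This is not Meta's actual Media Matching Service; the 4x4 grayscale grids, the simple average-hash, and the distance threshold are all illustrative assumptions. The point is structural: a bank of hashes of known images catches re-uploads and light edits of those images, but a freshly generated deepfake has an unrelated hash and passes through untouched.

```python
# Toy illustration (NOT Meta's actual Media Matching Service) of why a
# hash bank catches re-uploads of *known* content but misses *new* content.
# Images are modeled as 4x4 grayscale grids; the hash is 1 bit per pixel.

def ahash(pixels):
    """1 bit per pixel: brighter or darker than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def matches_bank(pixels, bank, max_distance=2):
    """True if the image is a near-duplicate of any banked image."""
    h = ahash(pixels)
    return any(hamming(h, banked) <= max_distance for banked in bank)

# A known violating image is hashed into the bank.
known = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]
bank = {ahash(known)}

# A lightly altered re-upload keeps the same bright/dark pattern and matches...
reupload = [[12, 198, 11, 199],
            [202, 9, 197, 13],
            [8, 201, 12, 203],
            [199, 11, 204, 7]]

# ...but a newly generated image has an unrelated hash and slips through.
novel = [[200, 200, 200, 200],
         [200, 200, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 200, 200]]

print(matches_bank(reupload, bank))  # True
print(matches_bank(novel, bank))     # False
```

This is why the statement above argues that bank expansion and real-time updates, while valuable, must be paired with detection methods that do not depend on having seen the content before.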

 

In conclusion, Meta’s automated detection systems struggle to keep pace with rapidly advancing sophisticated technologies used in deepfake content. For Meta to ensure safety on its platforms for marginalized groups and communities, it is essential for them to revisit their content moderation policies pertaining to generative AI content while enhancing and investing in its trusted civil society partners to escalate content towards the platform.

 

 

July 15, 2024

Technology-facilitated Gendered Surveillance on the Rise in Women’s Private Spaces 

7th June 2024

Pakistan: Digital Rights Foundation (DRF) is extremely alarmed and concerned about the ongoing surveillance of women and girls in private spaces through unregulated CCTV cameras in women's shelters, hostels, universities and salons, which invades their right to privacy and dignity. Women are already disproportionately subjected to gender-based violence, harassment and social surveillance by society, which in turn pushes them to seek refuge in gender-segregated private spaces such as these.

 

According to the 2023 Global Gender Gap Report, Pakistan ranks 142 out of 146 countries in terms of gender parity, covering economic participation and opportunity, educational attainment, health and survival, and political empowerment. With women's participation severely limited and restricted in the country, women are significantly more financially dependent, prompting them to look to spaces like Dar-ul-Amans (designated shelters for women in distress) for shelter and protection. Women residing in Dar-ul-Amans are largely vulnerable, particularly when they have little to no familial support and are seeking refuge.

 

Dar-ul-Amans in the country have been purpose-built to provide state-sanctioned support at an institutional level. In light of this, the use of unregulated CCTV cameras flagrantly threatens women's dignity and privacy. This is an active and gross violation of their constitutional rights as granted under Article 14. Additionally, S. 9(5) of the Guidelines for Dar-ul-Aman in Punjab also recognizes these rights and states that 'violation of a resident's privacy shall be considered as misconduct and the Social Welfare Department shall be justified in taking appropriate action in this regard.'

 

Women living in these shelters have also complained of gross mistreatment and abuse at the hands of those in charge of these centers. Days before the Rawalpindi Dar-ul-Aman incident, a similar incident took place in one of Lahore's women's hostels, where hidden cameras were found on the premises. These repeated instances of CCTV cameras being installed in private spaces under the guise of safety, and of the footage being misused, are a direct invasion of privacy and a threat to women's physical safety, and they create a hostile environment of mistrust and insecurity amongst women at large.

 

There have even been reports of CCTV cameras being installed to surveil women in salons, with the footage and data later employed as blackmail material. In 2019, students from the University of Balochistan (UoB) protested after CCTV camera footage was used by security personnel to sexually harass and blackmail students, particularly young women on campus. In the past, the Senate Standing Committee on Human Rights has taken notice of such issues, and we urge it to exercise its position to do the same now and investigate these heinous violations of women's privacy at Dar-ul-Amans and other private spaces.

 

Since its inception, DRF's Cyber Harassment Helpline has received 16,849 complaints from across Pakistan, with 58.5% of the complaints coming from women. Over the years we have received numerous complaints from women who were targeted through surveillance and spyware technologies injected into their devices by individuals close to them in order to control and monitor their movements and activities. We have also witnessed a rising trend of women being captured on camera without their consent, in addition to the misuse of their intimate images for blackmail and intimidation. In some instances these images are further manipulated and doctored using generative AI tools to create deepfake visuals and imagery.

 

We strongly urge transparent and urgent investigations into these violative incidents, in which unregulated CCTV cameras are used to violate women's privacy and contribute to increased gendered surveillance in the country. For this very reason, DRF has long been advocating for a human-rights-centric personal data protection law, one that centers the privacy and data of vulnerable communities including women, gender minorities and marginalized groups. We urge the current Ministry of Human Rights (MoHR) and Ministry of Information Technology & Telecommunication (MoITT) to involve women's rights and digital rights groups in consultations around the proposed data protection bill in order to address the existing gaps. Moreover, we urge the National Commission on Human Rights (NCHR) and the National Commission of Women Rights (NCWR) to look into the matter posthaste and ensure that women are not subjected to gender-based violence at the hands of technology, particularly in the form of surveillance in private and public spaces.

 

Digital Rights Foundation is a registered research-based NGO in Pakistan. Founded in 2012, DRF focuses on ICTs to support human rights, inclusiveness, democratic processes, and digital governance. DRF works on issues of online free speech, privacy, data protection and online violence against women.

 

For more information visit: www.digitalrightsfoundation.pk


Contact

Nighat Dad 

[email protected] 

Seerat Khan

[email protected]

Anam Baloch

[email protected]

May 13, 2024

DIGITAL RIGHTS FOUNDATION PUBLIC COMMENT ON OVERSIGHT BOARD CASE 2023-038-FB-MR (PAKISTANI PARLIAMENT SPEECH)

Submission Author: Maryam Ali Khan
Submission Date: 23rd January, 2024

In May 2023, a news channel posted a video on Facebook in which a Pakistani politician, addressing parliament, suggested that some public officials, including military personnel, needed to be hanged in order for the country to 'heal itself.' He drew parallels between the contemporary Pakistani political landscape and an ancient Egyptian ritual in which individuals were sacrificed in the River Nile as a means of controlling flooding, framing these executions as similarly necessary. Considering this, Meta should have taken down the video from Facebook in accordance with its Violence and Incitement policy. The policy clearly stipulates that content targeting individuals, other than private and high-risk persons, with statements that advocate or call for violence, as well as statements containing aspirational or conditional calls to violence, will be removed.

The statements made by the politician in the video were clearly inflammatory and violent, especially when viewed within the context of the country's political history, in which the hangings of influential figures, such as Zulfikar Ali Bhutto, have been manipulated by those in power to advance their agendas and propagate specific narratives that have had long-term consequences for the political fabric of the country. When looking at content such as this, it is also important to consider the role that state institutions such as the military and judiciary have played, and continue to play, in Pakistani politics. Pakistan has a history of the military managing or meddling in civilian state institutions. Free press and journalism have been relentlessly monitored and restricted during periods of military rule, with censorship and intimidation being regular occurrences. Journalists and citizens encounter a variety of problems, including threats of assault and harassment. Pakistan currently ranks 150th out of 180 nations in the 2023 World Press Freedom Index, indicating a striking deterioration in freedom of the press. As the country progresses, it is critical that authorities take relevant steps to ensure freedom of the press, as no democracy can function efficiently without it.

As Pakistan approaches its upcoming general elections scheduled for February, familiar patterns seem to be repeating themselves. Censorship of the press and journalists has been ongoing since before Imran Khan was ousted as Prime Minister in a no-confidence motion in April 2022. There have been multiple riots by supporters of his party since then, and the Pakistan Electronic Media Regulatory Authority (PEMRA) has increasingly censored media outlets and journalists critical of the state. Additionally, there have been multiple state-imposed internet shutdowns in an attempt to silence dissent and disrupt online political campaigning by certain political parties. Since May 2023, former prime minister Imran Khan has been in jail, and his party members are being severely restricted by authorities from freely contesting elections.

In Pakistan, social media and digital platforms have played a huge role in amplifying journalistic freedom. These platforms allow journalists and media organizations to quickly reach a much larger audience than was conventionally possible. However, this has also meant that many journalists and news agencies have been unable to maintain standards of what content is "newsworthy", and often share content that is purely sensational and intended to bring in views, which is good for business. Journalists are pressured by media houses to sensationalize news, reflecting the severe absence of ethical journalism standards in the industry. We have also seen a shift in the industry where news reporting is no longer a monopoly maintained by journalists: Pakistan has seen an increasing trend of 'YouTubers,' 'political commentators,' and 'influencers' spreading disinformation under the guise of news in the country.

The speech provided no useful information and contributed to instability in the lead-up to the events of May 9th, when riots broke out in cities and military buildings were attacked following Imran Khan's arrest. What is more alarming is that this particular speech incited public opinion towards attacking public officials, whether military officials, politicians, or any other official believed to be working for a political opponent of the speaker's party.

When determining what kind of content should be allowed to stay up on social media platforms, content moderators should keep these regional and political contexts in mind, and should especially consider how certain content could escalate offline and online violence. Freedom of speech and freedom of the press can be achieved without resorting to blatantly violent speech. In content removal and guideline development, Meta should encourage governments to adhere to appropriate protocols for submitting content removal requests. The Pakistani government has the capability to monitor and censor online material, and in February 2020 issued draconian rules for social media platforms that allow authorities to have "unlawful" information removed within 24 hours. These restrictions have been criticized for limiting freedom of expression and stifling dissent online. The Prevention of Electronic Crimes Act (PECA) 2016 also gives authorities powers to monitor and block online content. Therefore, it is essential that the government establishes a well-defined set of guidelines and protocols that prioritize human rights principles and the freedom of the press. The justification for content removal should not be based on its opposition to the state or its advocacy for causes such as women's and trans rights, which are often mischaracterized as 'un-Islamic,' 'immoral,' 'vulgar' and 'immodest.'

These steps will help in the democratization and de-escalation of political tensions both in online and offline platforms, while simultaneously improving the quality of journalism in the country.

*To read the Oversight Board's full decision on this case:

https://www.oversightboard.com/decision/fb-57spp63y/

**To see all submitted Public Comments: https://www.oversightboard.com/news/

May 6, 2024

DIGITAL RIGHTS FOUNDATION PUBLIC COMMENT ON OVERSIGHT BOARD CASE 2023-032-IG-UA (IRANIAN WOMAN CONFRONTED IN STREET)

Submission: Maryam Ali Khan, Digital Rights Foundation
Submission Date: 30th November, 2023

Meta's classifiers failed to assess the relevant context and took too abrupt a decision in removing the post shared on Instagram. The video showed a man confronting a woman in public because she was not wearing a hijab. The woman, whose face was visible in the video, was arrested following the incident. The accompanying Persian caption used descriptive language to express the user's support for the woman in the video and for all Iranian women standing up to the regime. In Meta's assessment, the caption was construed as expressing an "intent to commit high severity violence," thereby violating its Violence and Incitement policy. The post was later restored to Instagram under the Coordinating Harm and Promoting Crime policy after the user appealed to Meta and it was decided that the post did not violate community standards. That policy allows users to advocate for and debate the legality of content that aims to draw attention to harmful or criminal activity, as long as they do not advocate for or coordinate harm. It also notes that content that could put unveiled women at risk requires additional information and context.

It is important to note that this context was present in the post and would have been captured had the classifiers been designed to assess the content in its totality instead of processing the caption and media individually. The attack did not take place in a vacuum; it was a byproduct of strict moral policing by the Iranian state. This was exacerbated by the political unrest that unfolded in September 2022, when Mahsa Amini was taken into custody by the morality police under accusations of observing 'improper hijab' and died in custody, officially of a heart attack, under suspicious circumstances. Her death sparked nationwide protests, united by the chant 'Zan, Zendegi, Azadi' (Woman, Life, Freedom).

For the 'Woman, Life, Freedom' movement, social media and online platforms were paramount in the mobilization of protests and the broadcasting of vital information. Videos and pictures from protests in schools, universities, and streets circulated, showing more and more women exercising their right to freedom of expression by appearing in public without head coverings. Social media gave the protesters a platform to get their message out to the world. A prominent Iranian actress, Taraneh Alidoosti, posted multiple pictures of herself without a headscarf on Instagram with the caption 'Woman. Life. Freedom'. Women willfully unveiling in public spaces quickly became a symbol of defiance against the morality police and the regime. As expected, acts of defiance such as unveiling in public in a political and religious climate like Iran's come with their own risks. Women and girls who have stepped out in public without a head covering have been arrested, beaten, and had items like yogurt dumped on their heads. Men have also been arrested and beaten for showing support for the cause.

Additionally, the Iranian authorities resorted to unprecedented levels of internet shutdowns in an attempt to silence dissent and isolate the Iranian people from the world. According to Filter.watch, an Iran-focused internet monitor, Iran experienced internet blackouts, either nationwide or at a provincial level, for over four months after Mahsa Amini's death. Moreover, the government enacted legislation allowing it to monitor and identify individuals based on their online activity.

These measures are part of the government's effort to curtail freedom of expression and access to the global internet. A majority of Iranian users either experience constant removal of their content or know at least one person being censored in the Persian language. The most common types of content removed or shadowbanned are hashtags of human rights campaigns, comedians posting political satire, and activist organizations using chants like "death to Khamenei". Persian-language news organizations have also had their content removed simply for discussing political organizations. Most of the content posted on Meta's platforms is in languages other than English, with more than a hundred languages being used on Facebook. This needs to be taken into account when assembling contextual embeddings. Meta needs to improve its Natural Language Processing (NLP), scaling it across more languages, and the systems that detect and remove policy-violating content should be trained accordingly.

Access to safe and well-regulated social media platforms is essential for socio-political movements, making it essential for Meta to review its content moderation across multiple regional, cultural, and linguistic contexts. Before content is removed solely on the judgment of automated classifiers, Meta should prioritize training human moderators to understand the complications tied to online content. This approach would allow moderators to assess media content together with its accompanying caption and contextual cues, enabling more nuanced and accurate decision-making than evaluating them separately. Social media users ought to have the freedom to share content expressing support for a cause or condemning harmful regimes and beliefs without Meta's classifiers flagging it as a violation, even where the language used may be deemed 'offensive.' It is important to note that offensive language can be used in non-offensive contexts, and hate speech does not always contain offensive language.
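The failure mode described in this case, a caption scored in isolation tripping a violence policy despite supportive intent, can be sketched as a toy comparison. Everything here is a deliberately simplified assumption, not Meta's actual classifier: the keyword lists, the `depicts_victim` media signal, and the verdict labels are hypothetical, chosen only to contrast caption-only scoring with joint caption-plus-media assessment.

```python
# Toy contrast (NOT Meta's real classifiers) between scoring a caption in
# isolation and scoring it jointly with media context. Keyword lists,
# signals, and labels are illustrative assumptions only.

VIOLENT_TERMS = {"burn", "death to", "regime"}
SUPPORT_CUES = {"solidarity", "stand with", "brave", "support"}

def caption_only_verdict(caption: str) -> str:
    """Context-free pass: any keyword hit triggers removal."""
    text = caption.lower()
    return "remove" if any(t in text for t in VIOLENT_TERMS) else "keep"

def joint_verdict(caption: str, media_context: dict) -> str:
    """Joint pass: a keyword hit is re-weighed against supportive framing
    and what the media actually depicts."""
    text = caption.lower()
    hit = any(t in text for t in VIOLENT_TERMS)
    supportive = any(c in text for c in SUPPORT_CUES)
    # documenting abuse with supportive framing -> awareness-raising, keep
    if hit and supportive and media_context.get("depicts_victim"):
        return "keep"
    return "remove" if hit else "keep"

caption = "We stand with this brave woman against the regime."
print(caption_only_verdict(caption))                        # remove
print(joint_verdict(caption, {"depicts_victim": True}))     # keep
```

The same caption flips from "remove" to "keep" once the classifier is allowed to weigh the media and framing together, which is exactly the totality-of-content assessment argued for above.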

CNN, "Leading Iranian actor posts picture without hijab in support of anti-government protests" (2022): https://www.cnn.com/2022/11/10/middleeast/iran-taraneh-alidoosti-actor-hijab-intl

AWID, "Iran's Year of Defiance and Repression: How One Woman's Death Sparked a Nationwide Uprising" (2023): https://www.awid.org/news-and-analysis/irans-year-defiance-and-repression-how-one-womans-death-sparked-nationwide

BBC News, "Iranian women arrested for not covering hair after man attacks them with yogurt" (2023): https://www.bbc.com/news/world-middle-east-65150135

March 20, 2024

Digital rights foundation public comment on oversight board case: politician’s comments on demographic changes

Submission Author: Abdullah B. Tariq
Submission Date: 12 December 2023

The case concerns a French politician, Eric Zemmour, commenting on French demographic changes. The post was shared on Zemmour's Facebook page by his administrator; in the posted interview clip, Zemmour remarks on demographic changes and a shift in the balance of power in Europe, going on to say that this change in demography means Africa is colonizing Europe. Zemmour has crossed paths with the European justice system in the past, having been sanctioned in France for "inciting discrimination and religious hatred". On a careful analysis of the current political discourse in Europe and the case's contents, we conclude that the content violates Meta's Hate Speech policy under the Tier 3 categorization. The comment does not merely address immigration policies; it makes a broad generalization about Africans in Europe. The post echoes "The Great Replacement" (Le Grand Remplacement) theory. This idea, propagated by French author Renaud Camus, promotes violence and hatred by framing the presence of non-white populations, particularly from Muslim-majority countries, as a threat to the ethnic French and white European populations. While Camus publicly condemns white nationalist violence, scholars argue that "implicit calls to violence" are present in his depiction of non-white migrants "as an existential threat". The theory has been linked to several far-right terrorist acts, including the Christchurch mosque shootings and the El Paso shooting, and has grown popular among anti-migrant and white nationalist movements in Europe, with its broader appeal attributed to simple catch-all slogans. More than a commentary on immigration policies, the post deepens an existing civil division. Thus, it is fair to categorize the post's contents under Tier 3 of the Hate Speech policy. Moreover, the post also includes traces of misinformation and misleading content, which falls under Meta's content moderation policy on misinformation.

When provided with contextual information, the statement in question fits the broader conspiracy discourse in France regarding the Great Replacement, which Zemmour has vigorously defended. The concept, echoed by far-right groups across Europe, claims that the white population of Europe is being demographically replaced. The sentence "...there are four Africans for one European and Africa colonizes Europe..." seeks to sow segregation and resentment against the wider African diaspora within Europe. Moreover, this ideology has previously been used as justification by white supremacists to carry out mass shootings in the US and New Zealand, underscoring the global relevance and repercussions of such a narrative. The argument used to support the claim is equally misleading: using a demographic correlation to infer a causation of "colonization" is a fallacious inference that fuels conspiracy among the general populace. Additionally, the term "colonization" implies a power hierarchy among demographic segments that does not exist in the context in which the politician frames it.

Although Zemmour’s comment ostensibly compares the demographics of two continents at two separate points in time, the addition of “...Africa colonizes Europe…” creates a false causal link between demography and colonization. In that context, Zemmour is using false information to target a race and nationality, which goes directly against Meta’s policies on misinformation and hate speech. Such misinformation endangers European democracies, as intimidation and manipulative narratives jeopardize the broader political discourse on immigration policies and democratic elections in Europe.

Such conspiracies not only otherize a whole population segment but also induce hate and fear among the white European population. The statement “...Africa colonizes Europe…” serves as an identifier by which Zemmour insinuates that African immigrants living in Europe are the colonizers. Distinguishing European citizens from European citizens of African descent is highly exclusionary and discriminatory on the basis of race and nationality. Moreover, such extreme claims about reverse colonization driven by demographic change divert attention from arguments that are of legitimate concern to much of Europe today. Commentary on and criticism of immigration policies are healthy topics of discussion that should not be restricted in our digital spaces. However, when this discourse enters the realm of conspiracies and misinformation, well-informed policymaking becomes a casualty of manipulated truth. It is therefore equally essential to ensure that the wider population, especially protected groups, is kept safe in offline and online spaces. Meta needs to ensure, especially through election periods, that bogus and conspiratorial claims are identified and marked on its platforms. Until the platform finds a way to efficiently and effectively include detailed contextual embeddings within its algorithms, there needs to be increased human review of such reports. Few laws govern the involvement of AI in online political discourse; therefore, as a company serving billions of users, the responsibility falls on Meta to do its part in minimizing the impact of such automated models on the development of human discourse.

Zemmour’s comment on demographic changes cannot be viewed in isolation, considering his influence on political discourse in France. The claim of a shift in power and the explicit mention of “Africans” targets and alienates the non-white population of Europe. The contextual underpinnings of general anti-migrant discourse in Europe, and a lack of non-white voices, point to the larger issue of discrimination against groups falling within protected characteristics. In such an environment, Meta must ensure its platform does not feed into discriminatory practices. Politicians worldwide have massive followings in online spaces and use these platforms to address a wider voting class. However, their followers are primarily members of society already aligned with the politicians’ ideologies, as the response to Eric Zemmour’s post makes evident. This creates an echo chamber within the platform where ideologies propagate and expand without much resistance. A lack of accountability in such situations can breed hostile and harmful narratives. It is therefore paramount that Meta monitor far more carefully what is propagated in these echo chambers. Although identifying and removing hateful content online is essential, it is equally, if not more, important to evaluate the impact of such content. Content moderation policies should apply greater sensitivity when evaluating content with a higher influence on the general public.

The case’s contextual review shows how the post discriminates against a protected group through misleading, fear-mongering narratives and exclusion. The alienation of a non-white demographic segment through Zemmour’s comments exacerbates the ongoing discourse around migration laws. In such situations, Meta needs to be able to identify and differentiate between political commentary and the targeting of specific segments of society (“Africans”) through misinformation and hate speech. Meta’s hate speech policy allows for “commentary and criticism of immigration policies”; however, that exception does not apply to this case. Conspiracy theories and discriminatory speech fall under the categorization of hate speech; a spade should be called a spade and dealt with as such. Providing safe spaces for conspiracies and hateful narratives to grow under the guise of political commentary could damage the democratic values of the European people, as well as discriminate against and further divide the civilian population. Reviewers of such claims should therefore develop a more rigorous understanding of the context within different echo chambers and political spheres. On that basis, Tier 3 of Meta’s hate speech policy should take into account the repercussions of specific comments on immigration policies and how they promote the segregation and exclusion of protected groups.

February 19, 2024 - Comments Off on NWJDR condemns the use of technology-facilitated gender-based violence (TFGBV) and Generative AI to attack and silence women journalists

NWJDR condemns the use of technology-facilitated gender-based violence (TFGBV) and Generative AI to attack and silence women journalists

PAKISTAN: The Network of Women Journalists for Digital Rights (NWJDR) is angered and deeply concerned about the ongoing attacks against prominent woman journalist Meher Bokhari and others in online spaces by PML(N) party supporters. On examining multiple platforms, NWJDR has found non-consensual use of images (NCUI), non-consensual use of intimate images (NCII) and doctored images of Meher Bokhari, created through generative artificial intelligence (AI) and other AI tools, being shared online with sexist, misogynistic and sexualized gendered attacks. 

 

This is not the first time women journalists have been targeted online by political party supporters. Women journalists in Pakistan face pervasive and persistent online harassment and sexualized and otherwise gendered disinformation, with many threatened with physical assault and offline violence. We have witnessed multiple incidents of female journalists’ private information being leaked online in what appear to be well-planned and directed efforts to silence them, efforts that have resulted in stalking and offline harassment. In Meher’s case, the attempt to malign, scare and threaten her with images morphed onto objectionable content through generative AI tools points to an alarming new form of technology-facilitated gender-based violence (TFGBV) against journalists. 

 

Before the elections, NWJDR released a 6-point agenda on media freedom and journalist safety for political parties’ electoral manifestos, signed by more than 100 journalists and civil society members. It is alarming and disappointing that, despite these efforts to raise our concerns with political parties around journalist safety, we witnessed within a matter of days attacks on journalists such as Meher Bokhari, Maria Memon, Hamid Mir, Saadia Mazhar and Benazir Shah, to name a few, along with journalists’ family members, for simply reporting during Pakistan’s 2024 general elections. 

 

NWJDR is angered by these actions and reiterates that online violence and abuse constitute an offense, and that relevant authorities should take complaint-based action. 

 

Signed by: 

    1. Absa Komal - Dawn TV
    2. Hafsa Javed Khawja - The Reporters
    3. Mehr F Husain - Editor, The Friday Times / Publisher, ZUKA Books
    4. Amber Rahim Shamsi - Director, Centre for Excellence in Journalism
    5. Saadia Mazhar - Freelance Investigative Journalist
    6. Tehreem Azeem - Freelance Journalist and Researcher
    7. Laiba Zainab - The Current
    8. Rabbiya A. - Turkman Multimedia Journalist & Documentary Maker
    9. Shehzad Yousafzai
    10. Nighat Dad - Executive Director, Digital Rights Foundation
    11. Amer Malik - Senior Journalist, The News International
    12. Kaif Afridi - News Producer, Tribal News Network
    13. Feroza Fayyaz - Web Editor, Samaa TV
    14. Muhammad Ammad - Copy Editor
    15. Nasreen Jabeen - AbbTakk News
    16. Seerat Khan - Programs Lead / Co-editor Digital 50.50, Digital Rights Foundation
    17. Afra Fatima - Digital Journalist
    18. Fauzia Kalsoom Rana - Producer, Power Show with Asma Chaudhary; Founder and Convenor, Women Journalists Association of Pakistan (WJAP)
    19. Muhammad Bilal Baseer Abbasi - Multimedia Journalist, Deutsche Welle (DW) News Asia / Urdu Service
    20. Mahwish Fakhar - Producer, Dawn TV
    21. Sadia Rafique - Radio Broadcaster, FM 101
    22. Sarah B. Haider - Freelance Journalist
    23. Ramsha Jahangir - Journalist
    24. Zoya Anwer - Independent Journalist
    25. Afifa Nasar Ullah - Multimedia Journalist / Foreign Correspondent at Deutsche Welle
    26. Sheema Siddiqui - Geo TV Karachi
    27. Mudassir Zeb - Crimes Reporter, Daily Aksriyat Peshawar
    28. Nasreen Jabeen - Daily Jang Peshawar
    29. Aftab Mohammad - Dialogue Pakistan
    30. Anees Takar - Frontier Post, Radio Aman Network
    31. Unbreen Fatima - Deutsche Welle
    32. Khalida Niaz - Tribal News Network
    33. Naheed Jahangir - Assistant Media Manager
    34. Fozia Ghani - Freelance Journalist
    35. Ayesha Saghir - Express News
    36. Aamir Akhtar - Freelance Investigative Journalist, Swabi, GTV News/Such News
    37. Rani Wahidi - Correspondent, Deutsche Welle (DW) Urdu
    38. Fahmidah Yousfi - Rava Documentary
    39. Kamran Ali - Reporter, Aaj News
    40. Jamaima Afridi - Freelance Journalist
    41. Fatima Razzaq - Lok Sujag
    42. Umaima Ahmed - Global Voices
    43. Najia Asher - President, GNMI
    44. Sanam Junejo - Associated Press of Pakistan
    45. Asma Kundi - Wenews.pk
    46. Maryam Nawaz - Geo News
    47. Lubna Jarrar - Freelance Journalist
    48. Sumaira Ashraf - Video Journalist, DW
    49. Laiba Hussan - Aaj News
    50. Uroosa Jadoon - Geo News
    51. Tanzeela Mazhar - GTV
    52. Ayesha Rehman - Geo News
    53. Najia Mir - Anchor/Producer, KTN News
    54. Afia Salam - Freelance Journalist
    55. Farieha Aziz - Co-founder, Bolo Bhi
    56. Mehmal Sarfraz - Journalist
    57. Benazir Shah - Editor, Geo Fact Check
    58. Mahjabeen Abid - PTV National Multan
    59. Zainab Durrani - Senior Program Manager, Digital Rights Foundation
    60. Nadia Malik - Senior Executive Producer, Geo News
    61. Annam Lodhi - Freelance Journalist
    62. Fatima Sheikh - Freelance Journalist / Communications Executive at CEJ-IBA
    63. Maryam Saeed - Editor, Digital 50.50
    64. Rabia Mushtaq - Senior Sub-editor, Geo.tv
    65. Nadia Naqi - Dawn News

February 19, 2024 - Comments Off on Demanding Accountability, Transparency, and Support for Domestic Violence survivors/victims in the Media Industry

Demanding Accountability, Transparency, and Support for Domestic Violence survivors/victims in the Media Industry

 

PAKISTAN: The Network of Women Journalists for Digital Rights (NWJDR) is not only deeply concerned but fiercely angered at yet another instance of brutal domestic violence by male members of the journalist community. This time, the alleged perpetrator is an influential media person, ARY anchor Ashfaque Ishaq Satti. 

 

Enough is enough; we are appalled at criminal justice systems and institutions permeated with patriarchy that condone violence against women, dismissing it as a mere “personal or private dispute.” Domestic violence is criminalised in Pakistan under the Domestic Violence (Protection and Prevention) Act 2020. Additionally, the dignity of the individual is a fundamental right protected in the Constitution of Pakistan, and dismissing its violation as “a personal matter” is incongruous when the law specifically safeguards it as a human right. Domestic violence is the result of deep-rooted political structures and power dynamics that are intrinsically oppressive and patriarchal. We strongly believe that the personal is political: the challenges women face in their personal lives, including the double shift due to the inequitable distribution of care and domestic work, violence within the home, harassment at work and in public places, and online abuse directed at them, impact and often endanger their lives. 

 

We have not forgotten the murder of Shaheena Shaheen at the hands of her husband, another perpetrator from the media fraternity. NWJDR has also received individual testimonies from its members, women journalists facing severe domestic abuse and violence from partners who are themselves part of the media industry and hold influential positions. Through the countless testimonies we have received, we see a new and dangerous trend of male journalists perpetrating violence in their relationships, painting a bleak picture that demands urgent action. 

 

As reported by the Human Rights Commission of Pakistan (HRCP) in 2020, over 90% of Pakistani women have faced domestic violence in their lifetime. According to a 2022 media report in the Express Tribune, “data released under the Punjab Transparency and Right to Information Act 2013, cases of domestic violence saw an increase in the last five years. Same is the situation in courts; an estimated forty percent of cases in courts are family cases, and the remaining sixty percent involve crimes like murder, kidnapping and theft. For example, if a court has 150 total cases for hearing, at least eighty-five cases are family cases, mainly of violence”. How many more women will have to become victims of domestic abuse for it to be taken seriously? 

 

Domestic violence is often underreported due to the stigma surrounding it and the lack of accessible complaint mechanisms for victims. Women are not believed; even when they provide evidence of abuse, there is a hue and cry about how there are “two sides” to the story. It is not uncommon for men to use their influence in cases of domestic violence to evade accountability and secure impunity. Law enforcement agencies, including the police, also push survivors of abuse to reach a settlement outside the court, again reiterating that a “private” matter should remain within the confines of the house and associating ‘family honor’ with a woman’s dignity and identity. 

 

The concerns that women raise should be taken seriously and acted upon, and in cases where male journalists are involved, media outlets as well as the government must respond immediately to ensure that influence is not exercised to evade legal action. State inaction sends women the message that they are on their own and, in the long term, discourages them from speaking up against abuse and filing complaints. The challenges women face before they speak up are already enormous: from living in close proximity to perpetrators who constantly surveil their movements and devices, to returning to the same house after registering a complaint, the act of coming forward can cost women their lives. 

 

Additionally, NWJDR acknowledges and appreciates the step taken by ARY management to immediately suspend Ashfaque Satti until the law takes its course and decides the matter. It is a promising example of how our society should exercise zero tolerance for violence against women. However, much more needs to be done to prevent the menace of domestic violence in our country.

 

We, the undersigned, demand the following:

 

  1. We call upon media outlets to thoroughly investigate allegations against their personnel and take prompt and decisive action against individuals found guilty of domestic violence. Media organizations must not shield perpetrators and should instead prioritize the safety and well-being of women who are subjected to human rights abuses.
  2. We urge law enforcement agencies, especially the police, to handle cases of domestic violence with the seriousness they deserve. Survivor complaints should be thoroughly and impartially investigated, and survivors should not be coerced into settling outside the court. The police should actively pursue legal action against perpetrators, ensuring that the law is applied without influence or bias.
  3. We call upon the National Commission of Human Rights (NCHR) to proactively intervene in cases of domestic violence, particularly those involving perpetrators from the media industry. This includes conducting independent investigations into reported incidents, ensuring the protection of survivors, and holding perpetrators accountable under the law.
  4. We urge press clubs and journalist unions to actively condemn such grave incidents and avoid using their influence to silence or pressure women survivors of domestic violence to save male members of the media fraternity. These organizations must prioritize the safety and well-being of their members over protecting individuals accused of such heinous acts.
  5. Domestic abuse leaves a lasting impact on the mental health and well-being of the victim. The government must provide free psychological assistance to victims.
  6. Healthcare in Pakistan is neither easy nor free to obtain, especially when a case has been registered with the police. The government must set up sections in hospitals where victims are provided with free medical treatment.

 

Signatures:

  Name Organization Location
1 Bushra Iqbal Multimedia Journalist Islamabad
2 Laiba Zainab The Current Lahore
3 Madiha Abid Ali Anchor Person  PTV News Islamabad
4 Khalida Niaz sub editor TNN Peshawar
5 Lubna Jerar Naqvi Freelance Journalist Karachi
6 Zeenat Bibi Freelance  Journalist Peshawar
7 Asma Sherazi Senior Anchor, journalist Islamabad
8 Zainab Durrani Senior Program Manager DRF  
9 Xari Jalil Editor/Co-founder, Voicepk Lahore
10 Afshan Mansab Native media Lahore
11 Saddia Mazhar DW Urdu  
12 Umaima Ahmed Global Voices Lahore
13 Mahwish Fakhar Dawn TV Islamabad
14 Saeeda Salarzai Freelance  
15 Unbreen Fatima Freelance Journalist Karachi
16 Fozia Ghani Freelance Lahore
17 Ayesha Saghir Express News Lahore
18 Fahmidah Yousfi Rava Documentary Karachi
19 Salma jehangir TNN Peshawar
20 Sheeama Siddiqui Geo Karachi
21 Pernia Khan Freelance Lahore
22 Mahjabeen abid Pakistan Television Multan
23 Tanzeela Mazhar GTV Islamabad
24 Tayyaba Nisar Khan PTV World Islamabad
25 Ayesha Khalid Comms Manager, Media Matters for Democracy  
26 Rabia Mushtaq Geo.tv Karachi
27 Afia Salam  Environmental journalist Karachi
28 Fatima Razzaq, Journalist, CEO Lok Sujag  
29 Samina Chaudhary APP Islamabad
30 Tehreem Azeem Freelance Lahore
31 Bilal Azmat SM Executive   Dawn    Islamabad
32 Bushra Pasha DW Karachi
33 Moazzam Bhatti Freelance journalist Islamabad
34 Zoya Anwer, Freelance Journalist  
35 Miranda Husain, Editor and Journalist Lahore
36 Fauzia Kalsoom Rana Founder and Convenor Women journalists Association of Pakistan WJAP Islamabad
37 Islam Gul Afridi Correspondent, Special Broadcasting Services Peshawar
38 Dr. Rabia Noor ARY News Lahore
39 Sanam junejo APP Islamabad
40 Nabila Feroz Bhatti Freelance Journalist Lahore
41 Asma Kundi, wenews.pk  
42 Maryam Nawaz Geo news Islamabad
43 Xari Jalil Voice.pk Lahore
44 Sabahat Khan Freelance Islamabad
45 Sarah B. Haider Freelance journalist Islamabad
46 Mahnoor shakeel Freelance journalist Mardan
47 Ambreen Sikander GTV News Karachi
48 Shazia Mehboob ThePenPk.com/Express Tribune Islamabad
49 Aneela Ashraf Freelance Journalist & Founder Journalists Save Movement Multan
50 Sumeira Ashraf Head of assignment and planning at 24 news HD Islamabad
51 Shawaiz Tahir Samaa Tv Islamabad
52 Beena Sarwar Founder Editor, Sapan News Network Boston
53 Ali Jabir Malik Reporter, APP Islamabad
54 Haya Fatima Iqbal Co Founder, Documentary Association of Pakistan  
55 Syeda Mehr Mustafa Freelancer  
56 Zeenat Shehzadi Investigative Journalist  
57 Zunaira Rafi We News Urdu  
58 Sophia Siddiqui Chief Editor Glory Magazine Islamabad
59 Maryam Saeed Editor Digital 50.50 Digital Rights Foundation  
60 Mehr F Husain Editor, The Friday Times/ Publisher, ZUKA Lahore/Dubai
61 Kainat Malik Chief Editor Jamal e jahan Rajanpur
62 Wajeeha aslam Samaa news manager special project Lahore
63 Rabia Anum Tv Host  
64 Marian Sharaf Joseph Freelance Journalist Lahore
65 Ismat Jabeen DW Correspondent Islamabad
66 Shinza nawaz PTV Islamabad
67 Ali Tanoli Geo News Islamabad
68 Nadir Guramani Anchor, journalist, Dawn news Islamabad
69 Qurrat ul Ain Shirazi Roving Correspondent, The Independent Urdu Islamabad
70 Fatima Ali Correspondent Independent Urdu Lahore
71 Fauzia Yazdani    
72 Beenish Javed Freelance journalist Islamabad /Berlin
73 Jamaima Afridi Freelance Journalist  
74 Asad Ali Toor freelance journalist  
75 Arifa Noor Dawn tv Islamabad
76 Sabah Bano Malik Freelance journalist, Radio Host CityFM89 Karachi
77 Haroon Rasheed Indy Urdu  
78 Mehmal Sarfraz Journalist Lahore
79 Naheed jehangir Assistant media manager lady Reading Hospital Peshawar
80 Maryam Zia Anchor, PTV World  
81 Muhammad Faheem Mashriq Peshawar
82 Anam Baloch Comms Manager, Digital Rights Foundation Lahore
83 Sualeha Qureshi Director, Soch Media Karachi

 

January 4, 2024 - Comments Off on PAKISTAN: 6-POINT AGENDA ON DIGITAL RIGHTS PROTECTIONS FOR POLITICAL PARTIES’ ELECTORAL MANIFESTOS

PAKISTAN: 6-POINT AGENDA ON DIGITAL RIGHTS PROTECTIONS FOR POLITICAL PARTIES’ ELECTORAL MANIFESTOS

With the upcoming general elections in Pakistan, the Digital Rights Foundation urges political parties to include six key digital rights issues in their manifestos. This is crucial for a robust democracy, enabling citizens to scrutinize the new government effectively. The issues range from funding AI research initiatives and establishing a robust data protection regime, including enacting a data protection law, to PECA amendments and law enforcement capacity building. They also include parliamentary oversight of the FIA’s Cyber Crime Wing, monitoring the actions of the Pakistan Telecommunication Authority, conducting human rights impact assessments of tech tools and cyber policies, bridging the digital divide across Pakistan, and revising existing tech policies that are detrimental to fundamental rights in the digital age.

1. Institute Parliamentary Oversight, Impact Assessment and Human Rights Audits:
    • Ensure effective and robust parliamentary oversight of the FIA under Section 53 of the Pakistan Electronic Crimes Act (PECA) 2016, while ensuring alignment with human rights principles.
    • Convene a multi-stakeholder committee, inclusive of legal experts, human rights advocates, and technology professionals, to amend the problematic and vague sections of PECA.
      • Defamation should be removed as a criminal offense by repealing Section 20 of the ‘Prevention of Electronic Crimes Act 2016’ and Section 499/500 of the Pakistan Penal Code in compliance with General Comment No. 34, Human Rights Committee.
      • Section 37 of the ‘Prevention of Electronic Crimes Act 2016’ should be repealed and ‘Removal and Blocking of Unlawful Online Content (Procedure, Oversight and Safeguards), Rules 2021’ should be denotified, and all laws concerning freedom of expression should be amended to remove vague/overbroad criteria for online content moderation.
    • Initiate a comprehensive impact assessment of the Federal Investigation Agency's Cyber Crime Wing, with a focus on evaluating its effectiveness and adherence to human rights standards. Ensure the findings of impact assessments lead to amendments and improvements to the existing structure, capacity, objectives and rules and protocols that aim to safeguard vulnerable groups rather than harm them.
    • Establish and implement effective transparency and accountability measures through mandatory human rights audits, such as through the National Commission on Human Rights, of state agencies and bodies regarding the acquisition of technologies used to regulate digital content, communications and data.
2. Ensure Digital Accessibility and Inclusion:
      • Make a firm commitment to prohibit and prevent arbitrary internet shutdowns that hamper citizens’ access to the internet, a fundamental right and one essential for the exercise of other human rights. Pay particular attention to the internet shutdown in the ex-FATA area, in place for more than seven years since its issuance in June 2016. Despite some progress in 2021 on restoring the internet in some parts, overall access in these regions remains precarious, as services are frequently re-suspended on vague security grounds.
      • Ensure the incorporation of digital accessibility standards into national policies, such as the Web Content Accessibility Guidelines (WCAG), so that website content, online services and platforms are accessible to individuals with disabilities and language constraints.
      • Invest in the expansion of reliable and affordable internet infrastructure, prioritizing rural and underserved areas to bridge the digital divide for this demographic and women and girls. Additionally, collaborate with technology providers to ensure the availability of budget-friendly and user-friendly devices, catering specifically to the needs of women and girls.

3. Protect Online Freedoms: Right to Privacy, Assembly and Association & Freedom of Expression

      • Amend the Personal Data Protection Bill, 2023 to align it with international human rights standards. Initiate transparent and inclusive consultations with relevant stakeholders to make the bill human rights-centric, as the current draft falls short of protecting people, their data and their rights. Ensure that the law establishes an independent oversight body with substantive powers to hold private and public bodies accountable for breaches of citizens’ privacy and data security, and finally enact the data protection law in order to protect the personal data of the citizens of Pakistan.
      • Ensure Constitutional guarantees including the right to online freedom of assembly and association under Article 16. Make explicit guarantees to stop blocking of digital communications to prevent public gatherings and mobilisation under section 54(3) of the ‘Pakistan Telecommunications Act 1996’. 
      • Implement safeguards to prevent the misuse of cybercrime laws on the freedom of expression of citizens, particularly individuals charged by authorities for online content deemed critical of public figures and institutions.
4. Ensure Ethical use of Artificial Intelligence (AI):
        • Establish a dedicated AI Ethics Committee with inclusive representatives from civil society, academics, businesses, and technical experts to:
          • develop and adhere to clear Ethical Guidelines for the Use of AI by the State, particularly including the use of facial recognition systems and social media surveillance to ensure they are grounded in the human rights principles of legality, necessity, and proportionality.
          • continuously assess and update the guidelines to ensure that the development, design and deployment of AI technology is human rights centric and doesn’t exclude the experiences of marginalized communities.
5. Protect Rights of Businesses and IT Industry
    • Establish a task force comprising industry experts, businesses and policymakers to regularly review and update policies that impact the industry, fostering innovation and growth.
    • Implement inclusive policies and strategies that cater to the needs of Small and Medium Enterprises (SMEs) in the IT sector, such as providing access to finance, mentorship programs, and regulatory relief. 

    • Align national laws and regulations with international standards and treaties, such as the UN Guiding Principles on Business and Human Rights, ensuring that businesses operate responsibly and respect human rights throughout their activities.

6. Elevate Digital Literacy and Research on Tech and AI
      • Implement targeted digital literacy programs designed for women, girls and the transgender community, aimed at fostering proficiency in fundamental computer literacy, internet navigation and online safety practices, with a particular focus on rural and fringe populations.
      • Incorporate comprehensive digital citizenship programs into education curricula, emphasizing responsible online behavior, ethical use of technology and AI, and digital rights awareness for children.
      • Enhance research infrastructure in Pakistan and foster collaboration with foreign research think tanks to expedite research on technology, digital rights, and AI, facilitating informed policies and strategies.
General Comment No. 34, Article 19: Freedoms of opinion and expression, UN Human Rights Committee (102nd session, 2011, Geneva). Accessed at: https://digitallibrary.un.org/record/715606

July 24, 2023 - Comments Off on DRF strongly condemns the recent incident of sexual exploitation and harassment at the Islamia University Bahawalpur

DRF strongly condemns the recent incident of sexual exploitation and harassment at the Islamia University Bahawalpur

Trigger Warning

25 July 2023

Digital Rights Foundation strongly condemns the recent incidents of sexual exploitation and harassment at the Islamia University Bahawalpur (IUB), where the university’s Chief Security Officer was arrested by local police after explicit pictures and videos of women around campus, staff members and students alike, were retrieved from his cellular devices.

This distressing turn of events marks the third high-profile case of its kind in recent years. In 2019, a similar incident took place at the University of Balochistan, Quetta, and later at King Edward Medical University, Lahore. Such incidents point to an alarming emerging pattern of misconduct, in which at least two of the known cases implicate the chief of security as the primary accused.

DRF once again calls upon the Higher Education Commission (HEC), the Federal Investigation Agency (FIA) and the Senate and Parliamentary human rights committees to take notice. The National Commission for Human Rights (NCHR) did take suo motu notice of the incident; we are encouraged by the NCHR taking up this matter and hope for effective follow-through. 

The ongoing situation has brought to light systemic issues at educational institutions that DRF has highlighted previously. It recalls past complaints regarding geographical constraints that make reporting difficult, especially for female victims; the prospect of traveling long distances to register cases of harassment often deters them entirely. Even in this case, if students from IUB were to register a case under the Prevention of Electronic Crimes Act (PECA) with the FIA, they would have to travel to Multan, which would not only impose a significant financial cost but also burden victims already in distress. Efforts to ensure that no survivor is kept from seeking justice must be prioritized by making the reporting process more accessible and efficient. Addressing the concerns and hesitations of survivors is important; only by doing so can we achieve a supportive environment that empowers victims to come forward with their stories.

Investigative authorities must be required by law to provide sensitized and timely relief to victims. Where such laws are already in place, the lack of effective implementation and monitoring becomes the problem. The Senate human rights committee’s intervention in 2019 is a prime example: it took up the harassment cases recorded on CCTV cameras on campus, yet safe spaces for female students have still not been created.

Simultaneously, the privacy and confidentiality of the victims must be safeguarded at all costs. Instead, those in a position to protect the women and investigate these incidents have chosen, in their statements, to question the victims themselves, displaying an inability to take responsibility and a complete disregard for the clear imbalance of power. 

This discovery at IUB is an alarming reminder that harassment continues to prevail in professional and academic arenas. Relevant personnel have failed to strictly enforce the rules set out by the Protection against Harassment of Women at the Workplace (Amendment) Act 2022 and are therefore responsible for the current menace of sexual exploitation in our educational institutions and society at large. It bears mentioning that the accused at both IUB and UoB were Chiefs of Security, and that the CCTV installed for ‘safety’ instead produced greater insecurity and violations of women’s privacy on campus. If stringent action is not taken in cases like these, parents will grow reluctant to send their daughters for higher education in Pakistan, where female literacy is already an issue.

If you or someone you know needs help reporting cyber harassment, please get in touch with us at the Cyber Harassment Helpline, operating Monday to Saturday, 9 am to 5 pm, on 0800-39393. You can also email us at [email protected] or contact us on our socials. 

Source: 
https://www.dawn.com/news/1765829/islamia-university-bahawalpur-chief-held-with-drugs-videos-of-students-officials

https://www.dawn.com/news/1578777

https://thecurrent.pk/employee-at-king-edward-medical-university-caught-making-video-of-female-student-in-washroom/

https://twitter.com/nchrofficial/status/1683393720014733312?s=46&t=9DwfaN-p2fv3bkDoJMFkZA