Ofcom’s draft guidance – tackling the abuse women and girls receive online


Department:
Media Law

The Online Safety Act 2023 makes platforms – including social media platforms, gaming services, dating apps, discussion forums and search services – legally responsible for protecting people in the UK from illegal content and from content that is harmful to children. Ofcom’s research shows that women and girls are more likely to be negatively affected by harmful content. They are also more likely to be the target of intimate image abuse or cyberflashing.

Ofcom is responsible for publishing final Codes and risk assessment guidance covering the following:

  1. Tackling illegal content
  2. Action against harmful content and content that disproportionately impacts women and girls
  3. Protection of children

Where service providers implement the recommended measures and standards, they will be treated as complying with their legal obligations. Ofcom is the body responsible for measuring and monitoring compliance.

Ofcom recently introduced draft guidelines setting out what it describes as “ambitious but achievable” measures for platforms to follow to better protect women and girls online. Ofcom is inviting feedback on the draft guidelines by 23 May 2025, with the intention that the practical guidelines will be finalised later this year.

The guidance places an obligation on platforms to proactively “police” abusive content, rather than placing the onus on women and girls to self-report. It is a “safety-by-design” approach, meaning that platform providers are to embed, in the design and operation of their platforms, tools and technology that can tackle offensive or abusive content.

Ofcom has focused on four main areas of concern:

  1. Online misogyny – content that actively encourages or reinforces misogynistic ideas or behaviours.
  2. “Pile-ons” and online harassment – this may happen where a woman or group of women is targeted with threats of violence. Ofcom reports that nearly three-quarters of women journalists have experienced online threats and abuse.
  3. Online domestic abuse – using technology for coercive or controlling behaviours.
  4. Intimate image abuse – including cyberflashing and “deepfake” intimate images.

The guidelines seek to tackle these issues by setting clear expectations for platforms that go beyond the new legal duties. These fall under the following headings:

1. Taking responsibility

a. Ensuring that governance and accountability processes address online gender-based harms. This may include setting policies, with input from industry experts, and training staff in decision-making on online gender-based harms.

b. Conducting risk assessments that focus on the harms to women and girls. This may include conducting user surveys or engaging with those impacted to better understand their experiences.

c. Being transparent about women and girls’ online safety, which may include sharing information about the prevalence of different harms and effectiveness of measures introduced.

2. Preventing harm

a. Conducting abusability evaluations and product testing.

b. Setting safer defaults.

c. Reducing the circulation of content depicting, promoting or encouraging online gender-based harms.

3. Supporting women and girls

a. Giving users better control over their experiences.

b. Enabling users who experience online gender-based harms to make reports.

c. Taking appropriate action when online gender-based harms occur.

Alongside these guidelines, practical examples to help keep women and vulnerable adults safe online include:

  1. Removing geolocation by default. This type of information can facilitate stalking, enabling an individual to build up a picture of another’s movements and local area.
  2. User prompts asking a poster to reconsider whether any content might be harmful or offensive before posting. This would require platforms to build tools that can detect, for example, misogyny, nudity or content depicting illegal gendered abuse and violence.
  3. Providing a quick “exit” button throughout the reporting process.

Increasingly, we are having conversations, driven by concerns, about what material can be accessed online, what appears in our “feeds” by default, and how our online interactions are pervading our everyday lives. Online harms can encompass a range of targeted behaviours, and these may extend to behaviour offline. As well as providing a standard by which online platforms can be measured and held to account, the guidelines encourage conversations about what we are prepared to accept as appropriate content, and may change our expectations of, and relationships with, online platforms.
