Fighting Abuse Online: Recommendations for the Biden-Harris Administration

Sentropy Technologies
Published in Sentropy
Mar 3, 2021

By John Redgrave & Laura Gentile

The world is a very different place than it was just 20 years ago. The internet now touches nearly every part of our lives, and an ever-growing share of human interaction takes place online. These interactions influence our cultural identities, political views, relationships, and our emotional and physical wellbeing. With this shift, online abuse has emerged as a critical and widespread problem that threatens our online and offline communities alike. It’s this reality that inspired us to build Sentropy.

Online abuse takes many shapes, but some of the most common forms include cyberbullying, misinformation and disinformation, image-based abuse, child sexual abuse material (CSAM), sex and drug trafficking, financial and identity scams, and terrorism. The events at the Capitol on January 6th were a direct result of the normalization of abuse online and the laissez-faire approach that too many platforms have been allowed to take.

This post provides a set of recommendations on how to address the growing issue of online abuse to better protect individuals both online and offline.

Recommendations

Separate the issue of online abuse from cybersecurity.
The task of protecting individuals online often falls under the category of cybersecurity, but the two are separate and distinct issues. Where cybersecurity refers to the practice of protecting systems, networks, and programs from digital attacks, fighting online abuse is about protecting the end-users of these services and society at large. The same tools, policies, and recommendations do not apply to both issues, and thus, they should be considered distinct. Online abuse against users should be given its own category, definition, and terminology. For reference, in the UK the term ‘safety tech’ refers to tools that fight against online abuse, and in Australia, the term ‘eSafety’ refers to protecting individuals from online abuse.

Create a permanent task force to fight online abuse.
Protecting users online is a constant and ever-evolving problem. It isn’t something that can be solved with a few key pieces of legislation. A dedicated group should be created to track, research, and take action on this issue. This group should be responsible for creating policies, tools, and resources for individuals and companies dealing with online abuse. This group should be advised by key stakeholders including policymakers, online platforms, and the organizations that create tools and/or services to protect users online.

Create a global summit on digital safety.
At a minimum, the United States should be collaborating with allied nations to create a global framework to address these problems. Both Australia and the UK have spent substantial energy creating frameworks and dedicated government functions to begin addressing these issues (see the end of this post). This is not an issue to be combated at the country level, as the internet and the platforms that thrive on it are global in nature. The seriousness of the issue likely calls for an international accord similar to the Paris Agreement. Many steps will be required, including creating transnational definitions of common online abuses so that we can unify around how to detect and remediate them.

Create standardized definitions for common forms of online abuse.
The lack of standardized or common definitions of online abuse is a major source of confusion within this sector. Because there isn’t a shared language around forms of abuse, companies find it hard to comply with policies or collaborate with others to tackle the issue, users are often left confused about what they can or can’t do, and regulators are unable to set or enforce policies and best practices. Widespread adoption of standard definitions is unlikely to occur until the government agrees to conform to a single set of definitions. Sentropy recently published this paper and an associated GitHub repository outlining our abuse definitions.

Reduce data usability restrictions and increase transparency.
Most online platforms enforce restrictions that limit the ability to collect, process, share, and use data from public user interactions on their sites, often in an effort to avoid public scrutiny. This makes it difficult to assess the scope of abuse online and create tools to combat it. De-identified and/or aggregate data around user interactions online should be made accessible and unrestricted for researchers, investigators, and those developing tools or services to protect online users. Further, creating centralized and open-source repositories of unfiltered data (i.e. data not yet filtered through a content moderation process) from online platforms would speed up the development and improve the accuracy of artificial intelligence (AI) tools aimed at fighting online abuse.

Provide infrastructure to enable industry collaboration.
Abuse affects nearly all online platforms and yet, to date, very little collaboration has occurred to combat it. We’ve seen collaboration work well in areas such as CSAM and terrorism, where centralized repositories help keep stakeholders informed or help stop the spread of harmful material. Putting similar processes in place around other forms of online abuse would help strengthen our collective defenses.

This list of recommendations is not intended to be comprehensive but rather is the starting point for a much larger conversation about how we can redesign our current digital world. With more collaboration and alignment we can create an internet that is safer and more welcoming to everyone.

Examples from Australia and the UK

Australia’s eSafety Commissioner

eSafety is an independent statutory office supported by the Australian Communications and Media Authority (ACMA). It’s tasked with creating policy, tools, and resources to promote safer, more positive experiences online.

Key Initiatives:

Safety by Design: Working directly with online platforms and investors to build products that consider user safety in the design process.

UK’s Safety Tech Initiative

The Safety Tech Initiative is run by the Department for Digital, Culture, Media and Sport (DCMS). It’s tasked with facilitating collaboration and creating resources to help platforms comply with online abuse best practices.

Key initiatives:

Safety Tech Innovation Network: A collaborative network of organisations from the public, commercial, and non-profit sectors that provides joint resources to deal with online abuse, including the Safety Tech Provider List and the Safety Tech Expo.

Online Harms Data Transformation Project: This project is aimed at improving data quality, sharing, and transparency in an effort to facilitate the creation of AI tools to fight online abuse.


We all deserve a better internet. Sentropy helps platforms of every size protect their users and their brands from abuse and malicious content.