For the Sentropy team, today is a big day. For the Sentropy mission, today is a monumental day. Three years after starting this company with Michele, Ethan, and Taylor, I’m thrilled to announce that we’re joining Discord to continue fighting against hate and abuse on the internet.

We are deeply grateful to our customers, investors, advisors, and, of course, our team for the key roles you have played in our journey to make the internet safer. Today would not be possible without the support and trust you’ve given us along the way.

Why we’re joining Discord

When you start a company, you do so…

Anyone who is managing a platform with user-generated content knows the risks. Insults, spam, hate speech, profanity, and other types of abuse seem to appear as soon as you acquire users.

You were hoping this wouldn’t happen so quickly, but you have valid concerns about user churn — and your platform’s reputation. The last thing you need is to become known for having degrading or disgusting content.

How can you prevent toxic content in the first place? And how can you do it without it costing an arm and a leg?

This guide will show you how to build a content…

Meet Cindy Wang, a machine learning engineer at Sentropy. Cindy talks about her passion for using machine learning to detect hate speech, her conviction that this technology will help solve the problem of abuse online, and the process of building it from scratch.

Disclaimer: An image in this post includes obscene language pertaining to hate speech.

What did you do before Sentropy?

I did my undergraduate and my master's at Stanford. Both were in computer science. So I doubled up and spent a lot of time in Palo Alto! During my undergraduate degree, I focused on theory, and during my master's I focused on artificial intelligence…

With nearly 20 years of experience in product management, Dev joined Sentropy after leading product teams at Microsoft, Google, and Facebook. In this conversation, Dev and Sentropy CEO John Redgrave discuss raising kids in this world of technology, how to build products ethically, why fighting abuse online is so difficult — and how Sentropy’s products are designed to do just that.

John Redgrave: I’m John Redgrave. I’m the Co-Founder and CEO of Sentropy. And today we’re here with our very own Dev Bala, the Chief Product Officer at Sentropy. Hey Dev.

Dev Bala: Hey…

Why automating abuse detection is hard, and how we bring together experts and AI to tackle it.

By Alex Wang & Cindy Wang


Over the past two years at Sentropy, we’ve thought critically about how to build an effective, safe, and robust ML system for content moderation. At face value, abusive content detection might seem like a straightforward classification task. In reality, building a solution for content moderation requires thinking about a diverse set of stakeholders — including but not limited to end-users, platforms, moderators, and legislative bodies. From building data sets to developing models, in this post we’ll dive into some of the machine learning and data challenges we’ve faced and how we addressed them.
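One concrete way the "straightforward classification task" framing breaks down is that a single model score still forces a policy choice on behalf of those stakeholders: where to set the decision threshold. The sketch below is a hypothetical illustration in plain Python — the scores, labels, and thresholds are made up for demonstration and are not Sentropy's models or data:

```python
# Illustrative only: (model score, true label) pairs, where 1 = abusive.
# These values are invented to show the precision/recall trade-off.
examples = [
    (0.95, 1), (0.80, 1), (0.65, 0), (0.55, 1), (0.40, 0), (0.10, 0),
]

def precision_recall(threshold: float):
    """Compute precision and recall when flagging scores >= threshold."""
    tp = sum(1 for s, y in examples if s >= threshold and y == 1)
    fp = sum(1 for s, y in examples if s >= threshold and y == 0)
    fn = sum(1 for s, y in examples if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

# A strict threshold favors precision (few wrongful removals, which
# moderators and end-users care about); a lenient one favors recall
# (less abuse slips through, which platforms care about).
print(precision_recall(0.9))  # high precision, low recall
print(precision_recall(0.5))  # lower precision, full recall
```

The point of the sketch is that neither threshold is "correct" in isolation — the choice encodes a judgment about which stakeholder's cost matters more, which is exactly why moderation is more than a classification problem.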

Defining abuse


By John Redgrave & Laura Gentile

The world is a very different place than it was just 20 years ago. The internet now touches nearly every part of our lives, and an enormous share of human interaction now takes place online. These interactions influence our cultural identities, political views, relationships, and our emotional and physical wellbeing. With this shift, online abuse has emerged as a critical and widespread problem that threatens our online and offline communities alike. It’s this reality that inspired us to build Sentropy. Online abuse takes many shapes, but some of the most common forms include cyberbullying…

If you’re looking to detect and defend against abuse on your platform, consider the ins and outs of developing internally vs. purchasing.

In tech, it’s a rite of passage: the moment when you first have to decide whether to build a solution yourself or buy it from someone else. In the scrappy startup days of a business, it’s often an easy question to answer. Without the engineering resources of a larger company, buying is born of necessity.

But as you grow, the decision becomes a little knottier. When you have the ability to choose between buying and building, a whole…

It has become impossible to deny, after the events of 2020 and the Capitol riots, that something is intrinsically flawed in how we interact with each other online. A dark undercurrent of hate and vitriol runs through many of our online interactions, and it continually leaks into what many call the “real world.”

Let us be clear: the Internet is the real world. The joys that spring from catching up with an old friend are real-world joys, the pain of an angry word or a blunt rejection is a real-world pain, the vitriol that people…

Disclaimer: The text below references obscene language pertaining to hate speech.

It’s clear that abusive content is a problem. A full one-third of adults and nearly half of teens have been the target of severe online harassment. And that abuse has some chilling real-world consequences. Multiple studies have shown that children, teens, and young adults who were victims of harassment online were more than twice as likely as non-victims to self-harm, exhibit suicidal behaviors, and consider or attempt suicide.

But abusive content’s presence continues, and even grows. It’s become a real-life Hydra: cut off one head, and more sprout up in…

Disclaimer: this post references obscene language pertaining to hate speech.

By: Emma Peng


Several studies (Kennedy et al., Wiegand et al., Dixon et al., among others) have pointed out the issue of word-level bias in hate speech datasets. For instance, Kennedy et al. suggest that group identifiers such as “gay” or “black” are overrepresented in the positive class of two hate speech datasets. They found that classifiers trained on such imbalanced datasets struggle on negative examples that contain the overrepresented group identifiers. Such biases manifest as undesirable false positives when these identifiers are present.
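The failure mode those studies describe can be made concrete with a deliberately naive sketch. Everything below is hypothetical: `predict` is a toy keyword model standing in for a classifier trained on identifier-imbalanced data, and the example texts are invented benign (negative-class) sentences, not drawn from any real dataset:

```python
# Toy stand-in for a biased classifier: it has effectively learned that
# the mere presence of a group identifier signals hate speech.
IDENTIFIERS = {"gay", "black"}

def predict(text: str) -> int:
    """Return 1 (hateful) if the text contains a group identifier."""
    tokens = set(text.lower().split())
    return int(bool(tokens & IDENTIFIERS))

# Benign examples; none are hateful, but some mention an identifier.
negatives = [
    "i am proud to be gay",
    "black history month starts today",
    "the weather is lovely",
    "see you at the game tonight",
]

def false_positive_rate(examples, with_identifier: bool) -> float:
    """FPR on the benign subset that does/doesn't mention an identifier."""
    subset = [t for t in examples
              if bool(set(t.lower().split()) & IDENTIFIERS) == with_identifier]
    return sum(predict(t) for t in subset) / len(subset)

print(false_positive_rate(negatives, with_identifier=True))   # 1.0
print(false_positive_rate(negatives, with_identifier=False))  # 0.0
```

Comparing false positive rates on identifier-containing versus identifier-free negatives is one simple way to surface this bias; a real evaluation would use held-out labeled data and a trained model rather than a keyword rule.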

In this blog post…

Sentropy Technologies

We all deserve a better internet. Sentropy helps platforms of every size protect their users and their brands from abuse and malicious content.
