Meet Dev Bala, Sentropy’s Chief Product Officer

Sentropy Technologies · Published in Sentropy · Mar 17, 2021 · 18 min read


With nearly 20 years of experience in product management, Dev joined Sentropy after leading product teams at Microsoft, Google, and Facebook. In this conversation, Dev and Sentropy CEO John Redgrave discuss raising kids in this world of technology, how to build products ethically, why fighting abuse online is so difficult — and how Sentropy’s products are designed to do just that.


Introduction

John Redgrave: I’m John Redgrave. I’m the Co-Founder and CEO of Sentropy. And today we’re here with our very own Dev Bala, the Chief Product Officer at Sentropy. Hey Dev.

Dev Bala: Hey, thanks for having me.

John Redgrave: So, Dev, this is a very human-centric problem, and I'd love to just get to know you as a human. You know, I got into technology after recognizing the power it could have to connect people. I built a web solutions company when I was in high school, building websites for small and medium-sized businesses.

Where’d you grow up and how did you get into tech in the first place?

Dev Bala: It’s a funny story, actually. So, thank you for asking. I’m from St. Louis, Missouri originally. My parents moved to St. Louis from India in the early 1970s. And so my brother and I grew up there and we just so happened to live across the street from the grade school that I went to.

They didn't know it at the time, but when I was six years old, that school got a federal grant to put a computer in front of every student in every classroom from kindergarten through fifth grade. This was the 1980s, so it was certainly not a ubiquitous thing for everybody to have access to this technology. But through complete luck and fate, I was in front of a computer every day from a really early age. And I just fell in love with it.

I used to stick around after school and teach myself how to use different programs. I would go to the middle school library upstairs and I would find these books on programming and BASIC. And so I wrote my first line of code in BASIC when I was six, and computer science has just been a passion of mine ever since.

I carried it through to college and grad school and just always knew that a career in the technology industry was the right place for me. So that is what led me to study computer science. It led me to a variety of different jobs and internships, and it ultimately led to me joining Microsoft right out of college.

John Redgrave: I love it. We’re both fathers and I’m curious about your perspective on how tech will shape our kids and even the access that they have to it. You talked about having a computer because of a federal grant. Our kids have little computers running around our houses at this point.

I’m curious how you think technology is going to shape our children.

Dev Bala: As you can imagine, I think about that all the time, and I know you probably do too. If I can just be candid, I often wish my own kids' experience was more like mine. The only way I got to use that computer was in a classroom under a teacher's supervision, and I was using software that was designed for a six-year-old.

And there was really nothing else outside of that. And so it was very pure and genuine and very academic. As you said, that’s not the world we live in today. There are screens and computers on the walls of our houses now. And people like us have helped build those things.

So now, how do we raise our kids in a responsible way, with a healthy relationship with technology? I think that's one of the great unsolved problems that we're only going to get better at with time. Smartphones are still a pretty recent thing, so we won't really know the long-term effects of raising a child with the internet in the palm of their hand for another 20, 30, or 40 years, when those kids become middle-aged adults. And so there is an opportunity for people to build new technology and new companies that aim to address the problem right here, right now.

And you see a lot of that innovation coming out during the time of COVID as well, in distance learning and in ways to form healthy relationships with technology, because a lot of working parents like us often have to resort to putting a tablet in front of a kid so we can join a meeting or something like that.

So there's a great opportunity to do more there, but it's a concern and something that I think is on every parent's mind.

John Redgrave: Yeah, the iPad babysitter is a real thing at this point. I’m curious, you’ve built technology products — some that are exactly these devices that sit inside of our houses.

As a builder of these technologies, and as a parent, what are the rules you’re going to apply to your family when it comes to your kids’ use of technology?

Dev Bala: Yeah. Boy, that’s another topic I could spend an hour on. A few examples of that: I worked at Nest building cameras and thermostats. We were also working with the Google Assistant, which is a very common device in tech-savvy homes. I also worked on the Facebook Portal, which is a really popular product now, given the stay-at-home mandates that are all over the place and the inability to travel and see loved ones. So, yeah, I think people all around the world use these products and of course they exist in every room of my house.

To me, the key is all about forming a healthy relationship with these things and helping young minds to understand the value and purpose of these tools.

So for example, every morning when I'm having breakfast with my kids, they ask the Google Assistant to tell them the news or the weather. They get a kick out of it, but they're also learning this habit of "Hey, that's a helpful thing you can ask." Or in the afternoon when they want to go out for a bike ride, they want to know the weather.

And so little things like that are fun for them. I think those are good examples. Using the Facebook Portal to call their grandma is another great example. And I think we, as adults, have to model the right behavior around interacting with these things. So not having your head in your phone when you're right in front of your child is the kind of discipline parents need to have. Similarly, constantly interacting with gadgets instead of focusing on the physical presence of people is another pattern we have to try to avoid.

And so I would argue the people who create these things might be more sensitive to the patterns and anti-patterns than a lot of users. And I'd go a step further and say I think there's a responsibility, even a point of ethical principle, that the creators of these technologies need to communicate to the people who use them.

Because you do read stories about how a lot of parents here in Silicon Valley don't let their kids use the very same technology that they build Monday through Friday. And that's a problem, right? So I think we need to be more honest about that.

John Redgrave: Yeah, this ethical concern is a really interesting one, because there's been a lot of news coverage around ethical considerations in product building. Whether that's physical products, the IoT devices that you've highlighted, or even the social media sites that allow for the amplification of hate.

You're a very seasoned product leader. When you consider building products, is there a set of rules that you apply to think about the ethical concerns, or are there ways that companies could apply a playbook to thinking through them?

Dev Bala: I think there are some famous examples of that out there that have been guiding lights for very large companies. An obvious one is Google's motto, "Don't be evil." You hear common phrases like that, and not just in technology. "Do no harm" is another one that people use across a variety of professions. I think there is a place, now more than ever, for having principles and guiding concepts at the core of building a product.

One of my favorite things to think about is the "Jobs to be Done" that a customer or individual might have. What is the thing they're trying to do? How can you help them do that thing? And in doing that thing, you want to serve them and you want to try and serve your own organization's needs. Can you do that in a virtuous way? Can you do it in a way that isn't nefarious, that doesn't collect any more data than it should, that doesn't ask for any more permissions than it needs, and that respects the fact that the customer has hired you to do something? So do that thing and do it well, and leave it at that.

John Redgrave: I love it. You know, it's good to get to know you a little bit better as a human. I'd love to switch to Dev Bala the professional.

Tell us a little bit about your background and how you got to Sentropy.

Dev Bala: Sure. I lead product here at Sentropy, and I've been in product management for almost 20 years now. I got my start at Microsoft working on Microsoft Office, and Outlook specifically. From there I went to Google and stayed in the enterprise software space. I was one of the first PMs on Google Docs, now G Suite.

It was about 10 years ago that I really started focusing on platforms specifically, still within enterprise software. And then for the past five years I was working on IoT at Nest and at Facebook — still through the lens of software platforms. So that's really been my area of interest and my focus.

And that was right in line with the opportunity here at Sentropy: to build some interesting platforms and technologies to combat the toxicity and hate speech that all of us are unfortunately familiar with online.

John Redgrave: So Dev, you and I got connected because we're working to fight a major problem facing all the major social media companies today — online hate and harassment. Platforms are impacted differently because they each have different jobs to be done, as you like to say, and different surfaces that need protecting, whether that's direct communication between users, customer reviews, or comments on news articles.

And we've seen the adverse effects of toxic content, including things like user churn and advertising revenue impacts, especially around the Stop Hate for Profit initiative that was kicked off by the ADL and the NAACP. It's really had an impact on brand trust.

After all these years, why is identifying and stopping hate and harassment online so difficult for these companies to solve?

Dev Bala: Yeah. That is the question, right? That’s the question that a lot of these platform owners are dealing with, and that’s the question that we’re trying to help them answer with what we’re building here. It boils down to a couple of distinct patterns.

First and foremost, the landscape is rapidly evolving. When you think about the language people use, when you think about the topics that are in the news and the daily discourse — that is a constantly evolving set of words and language and patterns and terminology. And, coming from some of my former employers, I can tell you that those companies spend millions and millions of dollars trying to stay ahead of this problem.

And they also hire thousands upon thousands of people to moderate content on their platforms and try to stay ahead. Even with 10,000 moderators and a nine-figure moderation budget, it's still hard, and you still read about it every day. And that's at the scale of a Facebook or a Google. Not everyone has those resources, so they often have to look at other tactics to deal with this, including building scrappy in-house tools or crowdsourcing the task of moderation to their own users, who are not necessarily trained for that task. Those approaches also have mixed results. I like to think of it as similar to the antivirus and malware challenges of maybe 10-plus years ago. That problem certainly still exists today, but it was a much bigger one then. You've got this notion of white hat and black hat, and you're always trying to stay one step in front of the bad guys. And, I would argue, this is a modern version of a somewhat similar pattern.

John Redgrave: Yes, the adaptive-adversary issue in this space is really difficult. As you said, the Facebooks and Googles have the capacity to hire incredible machine learning teams to detect this. But one of the quandaries in this space seems to be not just detection — which clearly a number of companies struggle with — but also the question of what to do once we detect it and where we're going to take a stand.

Just backing up for a second, I'm curious: you've had an illustrious career in big tech companies. Why come to a startup solving this particular issue at this point in your career?

Dev Bala: Well, that was certainly a big question I had to answer for myself, but it boils down to a couple of things. First off, like I said, it's a platform problem. That's where I've got a lot of my expertise and where I can add value. That's a little bit more tactical, but going back to the top and thinking a little more aspirationally: this is a big problem that large firms and small firms alike are stuck solving on their own. They're missing an opportunity to solve it at scale and in a more novel way. I believe that's the opportunity we have at Sentropy.

Furthermore, I believe it's an existential problem. The number of these platforms continues to grow, and niche platforms serving every kind of interest have become easier and easier to spin up.

I only see this problem space evolving, and current events show that the cost of not dealing with it can be very serious. While it might appear to be confined to an online forum, it can have real-world consequences.

So you put all those things together, and while it is a difficult topic to discuss, it's also a really exciting opportunity to stand for something principled and make a difference in people's lives. The chance to change the course of how we think about and use some of these tools is just too great to ignore.

John Redgrave: I love that. And I totally agree.

This is one of the core principles around why we built this company — this problem is too big to ignore, and more technology needs to be applied to it. So, you are our Chief Product Officer, and you've hinted at what we're doing. Can we talk about where we actually fit into this narrative of solving the problem of hate and harassment online?

Dev Bala: Yeah, absolutely. So, at Sentropy we have two core products, called Detect and Defend. Together they create a full-stack moderation solution that puts a lot of this great and novel technology (machine learning, adaptive learning, constantly updating language models) in a box. With this solution, we give firms of all shapes and sizes the same cutting-edge capabilities that were previously only available to some of the larger companies we've been discussing.

I mentioned Detect — that’s our core machine learning technology. That’s where a lot of the hard data processing and analysis takes place. And that platform uses a series of APIs to connect and act on a variety of different endpoints. We have our own endpoint called Defend. Defend is a moderation dashboard that trust and safety agents and moderators can log into and use to review the determinations that come from our Detect API, and then decide what action to take.

And those actions are going to vary based on the platform involved, based on the severity of the message, and based on that company’s internal policies. So, looking at those two products together — Detect is the abuse detection technology, and Defend is the front end.

We can put this great technology in a box and deploy it to our customers' environments so they can get started on addressing these problems very quickly, with little to no customization or integration required.

But we also understand that different organizations have varying levels of sophistication and complexity. And so for certain types of customers, we also just allow them to use our Detect API so that they can plug it into their existing infrastructure and supercharge their existing moderation workflow.
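To make the integration point concrete, here is a minimal sketch of what plugging a Detect-style classification call into an existing moderation workflow might look like. The endpoint URL, request fields, response shape, and label names are hypothetical assumptions for illustration, not Sentropy's published API.

```python
# Hypothetical sketch of wiring a Detect-style classification call into an
# existing moderation pipeline. The endpoint, request fields, and response
# shape below are illustrative assumptions, not Sentropy's actual API.
import requests

API_URL = "https://api.example.com/v1/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential


def classify_message(text: str) -> dict:
    """Send one piece of user-generated content for classification."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=5,
    )
    response.raise_for_status()
    # Assumed response shape: {"labels": {"hate_speech": 0.97, "insult": 0.12}}
    return response.json()


def moderate(text: str, threshold: float = 0.9) -> str:
    """Apply a simple platform policy to the classifier's scores."""
    scores = classify_message(text).get("labels", {})
    if any(score >= threshold for score in scores.values()):
        return "escalate"  # e.g. surface for review in a dashboard like Defend
    return "allow"
```

The point is just the shape of the integration: content goes in, per-category scores come back, and the platform's own policy decides what action to take.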

John Redgrave: Got it. And the Detect product is so fascinating in that it’s an API that’s serving really technical, hard-to-build machine learning models.

As you think about the future of Sentropy, starting in content moderation and content moderation workflows, what does the road ahead look like in terms of the larger vision for what’s possible with Sentropy’s technology?

Dev Bala: I think the potential we have is perhaps just as vast as the problem space we're trying to operate in. As we think about all the different places where this problem exists, where people interact with toxic content or hateful individuals, we have a potential role to play.

While our current products focus on a solution that platform owners can use to moderate user-generated content on their own platforms, we think there are different opportunities to scale the core technology in different directions.

One area we're looking at is serving individuals directly. As a regular person on social media, I encounter toxic content or hateful messages on Reddit, Twitter, and other such platforms. What kind of solutions can Sentropy offer me as an individual to help protect me in those experiences?

Another lens, on the opposite side, is looking at other business use cases: companies of all shapes and sizes that have an online presence. They have comment sections or discussion forums on their sites. They engage with people on a broad scale on social media, and oftentimes those companies, without necessarily realizing it yet, have an unfortunate adjacent, if not direct, linkage to toxic content and really incendiary discussions. How can we help an organization like that create safe communities for their customers and their fans? How can we help them live up to their own values? How can our tools help them moderate the discussion happening around their company and their brand?

That's another example of where our tools can add value. It's not just about an organization's own platforms. It can be about you as individuals, and it can also be about brands and companies that just have really strong beliefs and values around cleaning up the conversation.

John Redgrave: Yes, I think the summary here for me is that Sentropy has the ability to protect not only platforms but also brands and individuals across all digital interactions. And that is a really powerful place for the company to be as a whole. And we happen to have a unique advantage in terms of the types of people who have chosen to join us — you included, Dev — to go on this mission, to serve these different constituencies over time, and hopefully to build a really great cyber safety company for the future.

Dev, now that you’ve joined a startup, I’m really curious about what the biggest surprises have been since you’ve joined.

Dev Bala: Sure. I'll bucket it into a few categories. One: we're all working in a different environment given COVID and offices being shut down, so I'm navigating that transition on top of this one. It's certainly made things interesting, but I'll start with the people, because I think that's what matters most. As you said, we've got a great team here and a very collaborative spirit. I found that at the other firms I've worked at as well, but the fact that we've got it here in spades has made the work-from-home transition really seamless. You know, culture is what gets you through changes like this, and I think we've certainly got that. So that's been a really refreshing surprise, frankly. The camaraderie that's often stereotyped in startups, the rolling up your sleeves and working together in a garage, has largely translated to Zoom and online collaboration.

I'd say as far as the product and the technologies we're building, a few things have stood out to me. A lesson I learned many years ago when I was just starting out at Microsoft is that enterprise software is hard: it has a long sales cycle and a long discovery process. There's a long tail of unique requirements that various organizations have when they're evaluating a tool like ours, and it's not easy. It's not as easy as, say, just downloading an app on your phone. And that has proven to be true in our space as well.

I think that's our opportunity again: to simplify the conversation and make it straightforward for a platform owner or an enterprise customer to see solutions like these as simple and turnkey. "Let me just sign up, get access, and try it all in the span of a few minutes." That's our opportunity as enterprise software builders: to bring the simplicity of your consumer life into your professional life. And so that pattern continues to exist.

And then I'd say the third thing that has surprised me a bit, but once again is our opportunity, is the challenge certain organizations have with trusting someone else to do this job for them. There are a lot of potential organizations that I believe would be great customers or great advocates of our tools, and frankly, they and their users would benefit enormously from deploying our product. Oftentimes they have invested, rightfully so, in people and tools to build out their own solutions. What we can do is help them scale up those investments and get more out of what they're putting into this problem space. There's an opportunity for us to deliver really sophisticated solutions to the problem of toxicity and hate speech, but in a simple, elegant package.

I do believe, though, and this is something I certainly saw at Facebook, that in the heat of battle teams are just going to look around and throw whatever resources they have in front of them at the problem. You're going to grab somebody here or there and say, "Hey, I need you to look at this. I need you to fix this. Whip me up a quick solution or a quick script that lets me moderate or delete this type of toxic information." And that happens for a day, and a week, and a month, and a year. And before you know it, you've got a team doing this and a codebase of some hacky solution that continues to evolve and evolve.

In some cases that can turn into a 10,000-person organization at Facebook. In other cases, it just becomes a sunk cost that's hard to move past. That's a unique challenge I've observed in our space that I didn't quite see coming. People don't necessarily build their own chat tools; they go off and buy them from known enterprise software companies. And I think that's a similar transition for us to help customers make. You don't need to try and build this on your own. We're here for you. We've thought a lot about this. We've got some of the best minds in the industry working on it. Try our technology out. Let us help you solve this problem, so you're not stuck throwing time and money at inefficient approaches.

John Redgrave: Yes. And I think customization for a specific online platform is so important. You constantly hear in this industry, "We already use a dictionary of bad words or a set of regular expressions." And those have been customized to that community; they've chosen those sets of things and they keep them updated, which seems very painful.

Why don’t approaches such as dictionaries of bad words or sets of regular expressions work in this day and age?

Dev Bala: It's a really interesting discussion. I would say a couple of things. First, as we were speaking about earlier, it is a rapidly evolving problem space. The words people are using today are different from the ones used last month or last year. So as that evolves, your ability to moderate your community with just a list of keywords becomes harder and harder, and it just doesn't scale well. It's not evolving; it's not a living, breathing corpus of information.

It's more of a static snapshot. And so I think that's one of the key things we do:

Our models are always learning, and so they are always evolving. And this is one of the benefits you get from working with a firm like ours that has that kind of evolution baked into its product. Our classifiers are not static. They’re constantly learning and adapting and growing. A nice, simple example I like to offer to people — if you aren’t a machine learning expert — is to think about the spam folder. Maybe you use Gmail. That spam folder works in two big ways. One: you can mark things as spam. And so now the spam folder is learning about what you consider to be inappropriate. At the same time, it has a global model where it’s learning what everyone thinks is spam. And so you’re benefiting from all the other people out there who are training that model to understand what’s appropriate and what’s not appropriate. And we kind of follow a similar concept here on a much bigger and more complicated scale, but really the concept is pretty simple.
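If it helps to see the spam-folder analogy in code, here is a toy sketch of that two-signal idea: a global model that everyone trains, blended with one user's own feedback. The function names, weighting, and stand-in model are assumptions made for illustration; this is not how Sentropy's classifiers work internally.

```python
# Toy illustration of the spam-folder idea: a global model trained by
# everyone's reports, blended with one user's own "mark as spam" feedback.
# A simplified sketch only; not Sentropy's actual classifier design.

def toy_global_model(text: str) -> float:
    """Stand-in for a model trained on everyone's reports."""
    return 1.0 if "free money" in text.lower() else 0.1


def blended_score(text: str, user_reports: dict[str, float],
                  weight: float = 0.7) -> float:
    """Combine the global score with this user's own feedback history."""
    global_score = toy_global_model(text)             # what everyone taught it
    user_score = user_reports.get(text.lower(), 0.0)  # what this user flagged
    return weight * global_score + (1 - weight) * user_score


# This user previously marked one message as spam.
reports = {"meet singles now": 1.0}

print(blended_score("FREE MONEY inside!", reports))  # high: global signal fires
print(blended_score("Meet singles now", reports))    # raised by user feedback
```

The design point is the blend: neither signal alone is enough, because the global model captures what the whole community considers abusive while the per-user signal captures what this particular person wants filtered.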

John Redgrave: That's great. I love the spam folder analogy; I think it's very apt for the speed at which these things evolve. There are great YouTube videos about what happens when you get a spam message: the implications of opening it, but also the evolution it goes through.

I really appreciate you spending this time talking with us, Dev: walking through and educating people on what Sentropy is, what the products look like, why this problem is hard, why context matters so much, and why it's important to be able to customize. I think this is a really important evolution in the space, and hopefully our product continues to get into more and more people's hands to make the internet safer. So we really appreciate you spending time with us, and thanks to our listeners for taking the time today as well.
