Episode 74: How to Avoid Bias in Your Machine Learning Models with Clare Corthell

The Georgian Impact Podcast | AI, ML & More

November 25, 2019 | 00:28:16


Hosted by Jon Prial

Show Notes

Bias exists everywhere. It factors into everything we do and into virtually every decision we make. An interesting but problematic side effect is that bias can also easily slip into our machine learning models. In this episode, Jon Prial talks with Clare Corthell, a well-known and respected data scientist and engineer and the founder of Luminant Data, about the issues that can arise when bias enters your models and how to avoid it in the first place. Plus, check out our show notes to find out how to access our first episode of Extra Impact, where Jon and Clare go deeper into this topic, discussing bias in AI-powered services like Airbnb and the controversial stop-and-frisk policing program.

You'll hear about:

- The nature of bias in machine learning models and what causes it
- Cynthia Dwork's work on transparency
- How companies should be thinking about data
- Implementing fairness into AI and machine learning models
- Developing an AI code of ethics

Access the show notes here: http://bit.ly/2GLyjsU

Other Episodes

Episode 1: Building Data Science Teams with Chris Matys
November 25, 2019 | 00:27:11

We live in a world that’s increasingly fueled by data and analytics. But for companies to unlock the value all of that information represents,...

Episode 83: Understanding Differential Privacy with Chang Liu
November 25, 2019 | 00:23:55

Differential privacy is a technology that's quickly moving from academia into business. And it’s not just the big companies that are using it. With...

Episode 4: Testing LLMs for trust and safety
March 15, 2024 | 00:21:07

We all get a few chuckles when autocorrect gets something wrong, but there's a lot of time-saving and face-saving value with autocorrect. But do...