Episode 93: Facial Recognition, Demographic Analysis & More with Timnit Gebru

The Georgian Impact Podcast | AI, ML & More

November 25, 2019 | 00:23:15

Hosted By

Jon Prial

Show Notes

Whether overt or unintentional, whether human- or technology-oriented, bias is something that every company must be vigilant about. And while it used to be something you might have to worry about with your employees, today it can be equally pervasive — and problematic — in the algorithms those employees create and the data they use.

Although the examples of bias in AI are numerous, one of the more prominent areas where we’ve seen it happen in recent years is image processing. In this episode of the Impact Podcast, Jon Prial talks with Timnit Gebru, a research scientist on the Ethical AI team at Google AI and a co-founder of the group Black in AI, about some of the challenges of using facial recognition technology.

Who is Timnit Gebru?

Timnit Gebru is a research scientist on the Ethical AI team at Google AI. Prior to that, she was a postdoctoral researcher in the Fairness, Accountability, Transparency and Ethics (FATE) group at Microsoft Research, New York. She earned her PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and working on computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. She is currently studying the ethical considerations underlying any data mining project, and methods of auditing and mitigating bias in sociotechnical systems. The New York Times, MIT Tech Review, and others have recently covered her work. As a co-founder of the group Black in AI, she works to both increase diversity in the field and reduce the negative impacts of racial bias in training data used for human-centric machine learning models.

Other Episodes

Episode 2

February 16, 2024 00:18:30

The nitty-gritty of fine-tuning a GenAI model

We’ve all heard about how generative AI is changing almost every aspect of a business. If you crack open the door and peer in...


Episode 107

November 25, 2019 00:27:46

Episode 107: Information Privacy for an Information Age

Privacy is much more than a compliance issue. It’s a way of thinking about your relationship with your users and a product design choice. ...


Episode 18

November 11, 2020 00:21:27

AI Adoption Starts with Product Management

You might think that getting your customer base to adopt your AI product is a sales and marketing challenge. But it starts much earlier...
