Episode 93: Facial Recognition, Demographic Analysis & More with Timnit Gebru

The Georgian Impact Podcast | AI, ML & More

November 25, 2019 | 00:23:15

Hosted By

Jon Prial

Show Notes

Whether overt or unintentional, whether human- or technology-oriented, bias is something that every company must be vigilant about. And while it used to be something you might have to worry about with your employees, today it can be equally pervasive — and problematic — in the algorithms those employees create and the data they use.

Although the examples of bias in AI are numerous, one of the more prominent areas where we’ve seen it happen in recent years is image processing. In this episode of the Impact Podcast, Jon Prial talks with Timnit Gebru, a research scientist on the Ethical AI team at Google AI and a co-founder of the group Black in AI, about some of the challenges of using facial recognition technology.

Who is Timnit Gebru?

Timnit Gebru is a research scientist on the Ethical AI team at Google AI. Prior to that, she was a postdoctoral researcher in the Fairness Accountability Transparency and Ethics (FATE) group at Microsoft Research, New York. She earned her PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and working on computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. She is currently studying the ethical considerations underlying any data mining project, and methods of auditing and mitigating bias in sociotechnical systems. The New York Times, MIT Tech Review, and others have recently covered her work. As a co-founder of the group Black in AI, she works to both increase diversity in the field and reduce the negative impacts of racial bias in training data used for human-centric machine learning models.

Other Episodes

Episode 118: The Business Case for Deep Fakes with Descript's Kundan Kumar

April 23, 2020 | 00:20:40

Deep Fakes are incredibly realistic impersonations that blur the line between truth and fiction. So what happens when the tech to make them is...

Understanding Emotion with AI, with Rana el Kaliouby

November 27, 2020 | 00:28:04

Imagine a future where our technology interacts with us the same way we do with one another through conversation, perception and empathy. Dr. Rana...

Episode 78: Getting the Bias Out with Cathy O'Neil

November 25, 2019 | 00:19:20

We all have our own personal biases. The question is how do you keep them out of your data so that you can create...