Episode 93: Facial Recognition, Demographic Analysis & More with Timnit Gebru

The Georgian Impact Podcast | AI, ML & More

November 25, 2019 | 00:23:15

Hosted By

Jon Prial

Show Notes

Whether overt or unintentional, whether human- or technology-oriented, bias is something that every company must be vigilant about. And while it used to be something you might have to worry about with your employees, today it can be equally pervasive — and problematic — in the algorithms those employees create and the data they use.

Although the examples of bias in AI are numerous, one of the more prominent areas where we’ve seen it appear in recent years is image processing. In this episode of the Impact Podcast, Jon Prial talks with Timnit Gebru, a research scientist on the Ethical AI team at Google AI and a co-founder of the group Black in AI, about some of the challenges of using facial recognition technology.

Who is Timnit Gebru?

Timnit Gebru is a research scientist on the Ethical AI team at Google AI. Prior to that, she was a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research, New York. She earned her PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and working on the computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. She is currently studying the ethical considerations underlying any data mining project, and methods of auditing and mitigating bias in sociotechnical systems. The New York Times, MIT Tech Review, and others have recently covered her work. As a co-founder of the group Black in AI, she works both to increase diversity in the field and to reduce the negative impacts of racial bias in training data used for human-centric machine learning models.

Other Episodes

Episode 21

December 11, 2020 00:37:31

Tackling Digital Disinformation with Kathryn Harrison

It used to be you could trust what you saw. With the prevalence of deep fakes and other synthetic media, today it isn’t always...


Episode 62

November 25, 2019 00:21:39

Episode 62: Is Machine Learning the Secret to Uber's Success?

If you think Uber is a ride-hailing business, you're wrong. It's actually a machine learning business. Machine learning is what makes Uber's service possible...


Episode 19

November 13, 2020 00:20:27

The Challenges of Scaling Your Business With Slack’s Allan Leinwand

Is there such a thing as a problem people wish they had? When it comes to challenges in scaling your business, the answer is...
