Episode 93: Facial Recognition, Demographic Analysis & More with Timnit Gebru

The Georgian Impact Podcast | AI, ML & More

November 25, 2019 | 00:23:15

Hosted By

Jon Prial

Show Notes

Whether overt or unintentional, whether human- or technology-oriented, bias is something that every company must be vigilant about. And while it used to be something you might have to worry about with your employees, today it can be equally pervasive — and problematic — in the algorithms those employees create and the data they use.

Although the examples of bias in AI are numerous, one of the more prominent areas where we’ve seen it happen in recent years is in image processing. In this episode of the Impact Podcast, Jon Prial talks with Timnit Gebru, a research scientist on the Ethical AI team at Google AI and a co-founder of the group Black in AI, about some of the challenges with using facial recognition technology.

Who is Timnit Gebru?

Timnit Gebru is a research scientist on the Ethical AI team at Google AI. Prior to that, she was a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research, New York. She earned her PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and working on computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. She is currently studying the ethical considerations underlying any data mining project, and methods of auditing and mitigating bias in sociotechnical systems. The New York Times, MIT Tech Review, and others have recently covered her work. As a co-founder of the group Black in AI, she works to both increase diversity in the field and reduce the negative impacts of racial bias in training data used for human-centric machine learning models.

Other Episodes

Episode 109

November 25, 2019 00:25:30
Episode 109: Who’s Who in your Data with Jeff Jonas

A lot of effort goes into identifying who we are almost from the very moment we’re born. Birth certificates, passports, fingerprints, now facial recognition....

Episode 16

September 30, 2022 00:25:12
How Explainable AI enables trust with Fiddler.AI’s Krishna Gade

In this episode of the Georgian Impact podcast, we’ll be talking about one pillar of responsible AI: explainable AI. Explainability provides insight into what's...

Episode 10

September 07, 2021 00:21:40
The Future of Digital Advertising with Beam.city DNA

You might think that selling consumer products has never been easier. Not so fast. Sure, e-commerce is everywhere and is becoming more user-friendly by...
