Celebrating 10 Years of Trusted News Discovery
One News Page

Why Developing Ethical, Unbiased AI Is Complicated

Video Credit: NowThis - Duration: 07:11 - Published
What if our phones turned out to be racist, misogynistic, homophobic pricks?

Artificial intelligence assistants such as Siri and Alexa offer convenience and a number of other benefits.

But they also have plenty of flaws, and discrimination is one of them.

Google, Microsoft, and Facebook have all admitted this.

And they’re trying to perfect this tech so we can have more ethical, humane AI in our lives.

But if the data we’re feeding this system comes from a society with its own biases, how can we develop an unbiased AI?

Artificial intelligence has been around for over 60 years now.

The term was first coined in 1956 by computer scientist John McCarthy.

Research centers across the U.S. then started exploring the creation of systems that could efficiently solve problems and that could learn by themselves.

Ever since, machine learning algorithms have been developed to analyze data, learn from it, then make predictions and decisions based on those insights.
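That loop — analyze labeled data, learn from it, then predict on new inputs — can be sketched with a toy 1-nearest-neighbor classifier. This is a hypothetical, minimal illustration written for this piece; it is not any of the production systems mentioned here, and the data points are invented.

```python
def nearest_neighbor_predict(training_data, point):
    """Return the label of the training example closest to `point`.

    `training_data` is a list of (features, label) pairs; "learning"
    in this toy model is simply storing those observed pairs.
    """
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    closest = min(training_data, key=lambda example: distance(example[0], point))
    return closest[1]

# Invented example data: two clusters of labeled observations.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((8.0, 8.0), "dog"),
    ((7.5, 8.2), "dog"),
]

# A new, unseen point is classified by whatever it most resembles
# in the training data — the model's "decision" is entirely a
# reflection of what it was fed.
print(nearest_neighbor_predict(training_data, (1.1, 1.0)))  # cat
print(nearest_neighbor_predict(training_data, (8.1, 7.9)))  # dog
```

The point of the sketch: the model has no knowledge beyond its training examples, which is exactly why the quality and balance of that data matters so much.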

AI started in programs used to solve general common-sense scenarios, mathematical problems, and geometrical theorems. But today we see it in search engines, face recognition, social media, smartphones, cars, and the virtual assistants we put inside our homes.

And while it can help determine anything from stock prices to medical diagnoses, it can also struggle to tell a knee from a face.

This is because artificial intelligence is just like a child: it learns from our behavior.

And the more data you feed into a machine learning algorithm, the smarter it gets.

Unfortunately, most of this data comes from Silicon Valley, which, as we know, has a lot of issues with diversity.

Companies such as Facebook, Google, IBM, and Microsoft have realized that their algorithms are filled with biases.

So the fault here is with the people who input the data, not the AI algorithms themselves.

Think about it this way: if you’re using AI to hire someone, you feed it data on the successful candidates you’ve hired in the past.

The AI will then recommend new candidates based on that data set.

There’s just one problem: if all your past hires are mostly white men from Ivy League schools, then the AI will only recommend candidates who fit that profile.
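The hiring scenario above can be sketched in a few lines. This is a deliberately naive, hypothetical model — the feature names and candidate profiles are invented — but it shows how a system that scores candidates purely by similarity to past hires reproduces whatever skew those past hires had.

```python
from collections import Counter

def feature_frequencies(past_hires):
    """Count how often each (attribute, value) pair appears among past hires."""
    counts = Counter()
    for hire in past_hires:
        counts.update(hire.items())  # dict items are hashable (attr, value) pairs
    return counts

def similarity_score(candidate, frequencies, total_hires):
    """Score a candidate by how closely they match the historical hiring profile."""
    return sum(frequencies[item] for item in candidate.items()) / total_hires

# Invented training data: every past "successful" hire shares one profile.
past_hires = [{"school": "ivy_league", "gender": "male"}] * 10
freqs = feature_frequencies(past_hires)

match = {"school": "ivy_league", "gender": "male"}
other = {"school": "state_school", "gender": "female"}

# The candidate who mirrors the past gets the maximum score; the one
# who doesn't gets zero — not because of qualifications, but because
# the data never contained anyone like them.
print(similarity_score(match, freqs, len(past_hires)))  # 2.0
print(similarity_score(other, freqs, len(past_hires)))  # 0.0
```

Nothing in the code is malicious; the skew lives entirely in the training data, which is the article’s point.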

And this isn’t something I just made up.

We’ve seen this behavior play out time and time again.

Siri, for starters, has provided homophobic answers to questions relating to LGBTQ+ topics.

It’s also offered offensive replies to the question, “What is an Indian?” And it’s been under fire for how it responds to sexual harassment remarks.

Facial recognition, one of the best-known applications of AI, has been shown to disproportionately single out people of color because they’re overrepresented in mugshot databases.

Erasing bias from databases is complicated.

Especially when there’s no regulation or standard governing the data used to train machine learning algorithms. We, as a society, need to take responsibility for what we teach our robots.

Just like we do with our kids.

So choose love, not hate.

© 2019 One News Page Ltd. All Rights Reserved.