Exploring the downsides of artificial intelligence

Microsoft’s racist bot shows the limits and dangers of artificial intelligence

Microsoft Research ran an experiment last week on its artificial intelligence engine, setting a naive bot loose on Twitter to learn from what it was told.

Within two days Tay, as they named the bot, had become an obnoxious racist as Twitter users directed offensive comments at the account.

Realising the monster they had created, Microsoft shut the experiment down. The result is less than encouraging for the artificial intelligence community.

Self-learning bots may have a lot of power and potential, but if they’re learning from humans they may pick up bad habits. We need to tread carefully with this technology.


Author: Paul Wallbank

Paul Wallbank is a speaker and writer charting how technology is changing society and business. Paul has four regular technology advice radio programs on ABC, a weekly column on the smartcompany.com.au website and has published seven books.
