Mar 26, 2016
Microsoft Research ran an experiment last week on their artificial intelligence engine, setting a naive robot loose to learn from what it was told on Twitter.

Within two days Tay, as they named the bot, had become an obnoxious racist as Twitter users directed obnoxious comments at the account.

Realising the monster they had created, Microsoft shut the experiment down. The result is less than encouraging for the artificial intelligence community.

Self-learning robots may have a lot of power and potential, but if they're learning from humans they may pick up bad habits. We need to tread carefully with this.
