MSFT had to turn off its Twitter bot .....

Joe Fan

.... after only 15 hours
Kind of a funny story
[Trigger warning for some even more offensive tweets in the article, worse than what is visible below]
http://www.businessinsider.com/micr...l-tweets-from-ai-chatbot-tay-2016-3?r=UK&IR=T

[Screenshots of three of Tay's tweets, captured 2016-03-24]

From the linked article in @Joe Fan's post:

It's clear that Microsoft's developers didn't include any filters on what words Tay could or could not use.

Good job, Microsoft.
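For anyone wondering what "filters on what words Tay could or could not use" would even look like: here is a minimal sketch in Python of a word-level blocklist check, the simplest possible version of that idea. This is purely hypothetical, not Microsoft's actual code, and the blocklist terms and tweet text are made-up placeholders.

Code:
# Hypothetical word-level output filter, the kind of check the article says Tay lacked.
# BLOCKLIST and the candidate text below are placeholders for illustration only.

BLOCKLIST = {"example_slur", "another_slur"}  # not a real moderation list

def is_safe_to_tweet(text: str) -> bool:
    """Return False if the candidate reply contains any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

candidate = "totally normal reply"
if is_safe_to_tweet(candidate):
    print("tweet it")
else:
    print("drop it and flag for review")

Obviously a real system would need far more than an exact-match blocklist (misspellings, phrasing, context), but even this trivial check would have caught the most blatant stuff.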
 
The article points to this project as a "Machine Learning exercise". Did the internet just teach "Tay" to be a bigoted racist?
 
Based on the average YouTube or news-article comment section, we may need to worry about accidentally creating Nazi artificial intelligence.

Microsoft should use hornfans instead. Any hornfans-based AI will not discriminate based on religion, color, or creed.... but I cannot promise that it would look favorably upon sooner or aggy.
 
