First Hitler, now drugs: Microsoft’s racist chatbot returns to ‘smoke kush’ on Twitter

<p><a href="https://www. avi icke.com/wp-co te t/ ploa s/2016/03/56fc1137c461882c798b4596.jpg" rel="attachme t wp-att-367297"><img class="size-f ll wp-image-367297 alig ce ter" src="https://www. avi icke.com/wp-co te t/ ploa s/2016/03/56fc1137c461882c798b4596.jpg" alt="56fc1137c461882c798b4596" wi th="900" height="500" srcset="https://www. avi icke.com/wp-co te t/ ploa s/2016/03/56fc1137c461882c798b4596-300×167.jpg 300w, https://www. avi icke.com/wp-co te t/ ploa s/2016/03/56fc1137c461882c798b4596-768×427.jpg 768w, https://www. avi icke.com/wp-co te t/ ploa s/2016/03/56fc1137c461882c798b4596.jpg 900w" sizes="(max-wi th: 709px) 85vw, (max-wi th: 909px) 67vw, (max-wi th: 1362px) 62vw, 840px" /></a></p>
<p>‘Tay, a Twitter bot created by Microsoft to learn from chatting, was taken offline for tweaks when the internet taught it to share outrageous and racist tweets. The artificial intelligence recently came back online, only to continue misbehaving.</p>
<p>Microsoft created Tay as an exercise in machine learning; the AI bot would modify its own creative patterns based on interactions with real people on platforms such as Twitter, Kik or GroupMe, to emulate a young woman on social media.</p>
<p>Last week, to the wonder of internet hooligans, Tay’s chat algorithm allowed “her” to be tricked into making outrageous statements such as endorsing Adolf Hitler, causing Microsoft to put the bot to “sleep” to be recoded within 24 hours.’</p>
<p><a href="https://www.rt.com/ sa/337801-microsoft-chatbot-smoke-k sh/? tm_so rce=browser& tm_me i m=aplicatio _chrome& tm_campaig =chrome" target="_bla k">Rea more: First Hitler, ow r gs: Microsoft’s racist chatbot ret r s to ‘smoke k sh’ o Twitter</a></p>
