Microsoft has apologised for racist and sexist messages generated by its Twitter chatbot. The bot, called Tay, was launched as an experiment to learn more about how artificial intelligence programs can engage with internet users in casual conversation. The program had been designed to mimic the language of a teenage girl, but it quickly learned to imitate the offensive messages that Twitter users began feeding it. Microsoft was forced to take Tay offline just a day after it launched. In a blog post, the company said it takes "full responsibility for not seeing this possibility ahead of time."
