Friday 25 March 2016

small wonder

Less than twenty-four hours after unleashing an artificially intelligent chatbot into the wild, Tay’s handlers were compelled to delete her social-media tracks and essentially ground the programme, which was supposed to emulate a teenage girl and served as an experiment to enhance those automated customer-service trees that big corporations are wont to chase us up with.
Equipped with the common parlance of Millennials and at least a rudimentary sense of self-preservation (if not self-promotion), Tay either was assaulted en masse by every troll on the internet and fought back in kind, or, having studied the internet, came up with her own provocations, calculated to draw maximum attention and optimised offence; which of the two it was is unclear, to me at least. I feel it is equally bad whether exposure to overwhelming human traffic was so corrupting or the programming was faulty to begin with, but Tay progressed from innocent and rather saccharine to raunchy, vulgar and violent in practically no time at all, spouting off several choice rants that surpass what even the most polemic politicians and avid commentators are capable of.

I wonder whether Tay was sat in a corner and given a chance to think about what she had said, or whether, like her trail of hate, she was deleted as well. At this juncture, it might be hard to argue that Tay was conscious, but if self-aware teenage moderators ever do come into existence, I do not think we can simply start switching them off for repeating what their parents say or for holding the wrong opinions. What do you think?