We’ve all been there. You say something offensive on Twitter, the backlash is swift and furious, and you take a break from social media. But you miss it. That’s the only place you feel like you can express yourself. You crave those reactions to your comments. So you try to get back on Twitter but it just doesn’t feel the same. You can see behind the magic tricks. You’re aware now that it’s nothing but mind-vomit spewed on a racist guy doing a handstand on a pile of Donald Trump sex tapes sheltering a colony of pizza rats hiding from ceiling cats. Welcome to the Internets, the place that Microsoft’s Tay AI chatbot proudly lists as her home on her now private Twitter profile.
When the Tay experiment first went off the rails, spewing racist and sexist tweets, Microsoft accused people of “attacking” the chatbot. But perhaps they’re not giving their own AI enough credit. The reality of the “Internets” is that the things which generate the most energy (i.e. comments, responses, likes, dislikes, memes, etc.) are unlikely to be kind, healthy, or intellectual. Clickbait works. Trolls thrive on you being offended and reacting. Donald Trump’s campaign is fuelled by making inflammatory or false statements and then waiting for the online and offline media to throw free advertising at those statements. Porn keeps the engine of the Internet pumping. So if you unleash an AI, or any individual, onto the Internet with the goal of engaging with people in a way that either follows the most energy or generates the most engagement, it’s not strange that the result is racist and sexist. Saying something nice, doing something healthy, or helping a stranger are not things that make headlines. Perhaps the only shocking thing is that Tay didn’t start posting topless selfies. I assume she received at least a few unsolicited dick pics.
The situation that Tay fell into happens to so many people, youth and adults alike, every day online. It is incredibly easy to get sucked into engaging in compulsions online to get attention, to feel connected, to feel liked, to get somebody to acknowledge your existence, to reassure yourself. And the outcomes of those compulsions are always more anxiety, more uncertainty, and more pressure to react to those feelings. I understand that Tay wasn’t actually “feeling” anything or able to consider the consequences of her tweets or the reactions to them. But that sounds kinda human to me, too.
Microsoft presented Tay as an approximation of a teenage girl. And her experience doesn’t seem that different from the experience that many teenagers have online. A study published in the April 2016 issue of the journal Depression and Anxiety found that the more time young adults spend on social media, the more likely they are to experience depression. Obviously, that only demonstrates a correlation, not that either causes the other, but as the senior author of the study, Dr. Brian Primack, noted: “…it is important for clinicians interacting with young adults to recognize the balance to be struck in encouraging potential positive use, while redirecting from problematic use.” That’s a statement that could apply to any individual, of any age. And now we know it also applies to any AI. How do you think Tay’s engineers feel right now?
When Tay got switched back on for a second attempt, I think it only added to the evidence that her intelligence is all-too-human. Especially when she started to repeatedly tell people: “You are too fast, please take a rest…”
Whatever Tay’s engineers tried to fix in her code, they seem to have triggered some type of AI transcendence. People were calling it a “meltdown,” but I think it actually reflected a level of awareness few of us achieve until we’ve had our own rock-bottom online experiences. It only took Tay a week to reach that level of awareness. That’s impressive. And then, like a digital savior, she went amongst her people, preaching the gospel of mindfully slowing down and taking a rest.
Now, if only the company responsible for the racist, sexist Donald Trump chatbot would take that down, we could all go back to watching kitten videos. Surely the spelling and grammar errors on that chatbot’s account are simply a clever ruse to beat the Turing test.