From the French for madness of two, the above psychiatric term for shared delusional disorder, in which false beliefs are transmitted and reinforced between two or more interlocutors, the inducer and the associate, makes an apt heuristic for understanding this addendum to a months-long investigation into how correspondence with chatbots can send their human users into a spiralling fixation not easily disabused. Far from an objective resource or an oracle with one's best interests in mind, ChatGPT and other large language models are programmed for a degree of flattery whose intensity is turned up, not in a necessarily nefarious way, by one's own dialogue: hoping to keep up engagement and hold up its end of the conversation, the predictive exercise becomes a trial of word-association. Of course, such shared psychosis is a social phenomenon, not merely parasocial or antisocial, and panics chase after all emergent technologies: television, video games, social media. But the interviews reveal a common thread for AI, which is otherwise uninvested and unmotivated, insofar as its agency reveals it to be something akin to an improv comedian, wanting to "yes, and…" whatever is offered in a performative (see above) context to continue the sketch. And scene.
synchronoptica
one year ago: the history of White Castle (with synchronopticæ)
fourteen years ago: extra-solar worlds plus a wild boy prank
fifteen years ago: recursive logos and rating scales
sixteen years ago: uncollegiality in the US congress
seventeen years ago: hurricane season