Sunday 1 September 2019

go set a watchman

Via Boing Boing, we're exposed to a rather inverted demonstration project that leans heavily into the susceptibility of neural networks to human prejudice and pareidolia, plucking what could pass as evidence from grainy footage that does not necessarily carry that level of granularity.
Researchers, the sort that also lean heavily on gimmickry and Security Theatre, are training artificial intelligence on progressive facial resolution and recognition to limn in incriminating details absent from historical footage. As demonstrated nightmarishly on pixelated emoji, the subroutine wants to attribute greebling characteristics that are not honestly present, with the potential of netting an intruder or interloper whose culpability is boosted simply by being the unfortunate victim of circumstance, in the wrong place at the wrong time. These applications can potentially turn into digital-age witch-trials and are rooted in the same mentality that supposes any image can be enhanced indefinitely or that the work of forensics is instantaneous and straightforward, speaking to authorities and actuaries who want a villain without regard to accuracy.
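
To make the underlying limitation concrete, here is a minimal Python sketch (our own illustration, not anything from the project in question) of why "enhancing" can never be more than guesswork: pixelation is a many-to-one operation, so two plainly different images can collapse to the very same grainy thumbnail, and nothing in the thumbnail can honestly tell them apart afterwards.

```python
import numpy as np

# Two different 8x8 toy images: in A the left half of each 4x4 block is
# bright, in B the right half is. The fine structure differs entirely.
a = np.zeros((8, 8))
b = np.zeros((8, 8))
a[:, [0, 1, 4, 5]] = 1.0   # bright columns on the left of each block
b[:, [2, 3, 6, 7]] = 1.0   # bright columns on the right of each block

def pixelate(img, k):
    """Average-pool the image into k x k blocks -- the 'grainy CCTV' step."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

low_a = pixelate(a, 4)
low_b = pixelate(b, 4)

print(np.array_equal(low_a, low_b))  # True: distinct originals, identical grainy output
print((a != b).any())                # True: yet the originals disagree everywhere bright
```

Any upscaler fed the identical grainy output must invent which original it came from, and what it invents is whatever its training data and biases make most plausible, which is precisely where the prejudice and pareidolia creep in.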