Tuesday, 13 November 2018


Via Kottke, we are confronted with a rather though-provoking, collaborative list of ways that artificially intelligent systems have managed to “cheat” and by-pass their programmers’ intentions in the name of efficiency and seeking the path of least resistance.
To paraphrase the words of a cognitive scientist—because I think the statement has far broader applications (e.g. confirmation bias and the reproducibility crisis in the social sciences) than our current range of experiments and fetishizing: no one will bother with an assigned task that’s more of a challenge than exploiting its reward-function, transforming a bug or loophole into a feature. The findings include: creatures instructed to evolve for speed through re-enforcement learning instead of working on their limbs or other, novel means of propulsion they simply selected to grow very tall and reached high speeds as they fell over, others became indolent cannibals or flatly refused to play at all to avoid losing. Like with all our exposure to pedantic wish-granters in fiction, I hope seeing these sort hacks take place in the sandbox prepares humanity for when it’s time to entreat the genie in earnest.