Tuesday, 16 April 2019

architecture of choice

A team of researchers training a neural network to play arcade games would often ask human teachers (role models) to explain the rationale behind the moves they made, such as veering left or jumping forward to avoid an obstacle, to help the machine learn.
In an effort to make algorithmic decision-making more transparent, presenting AI as less of an inscrutable black box and more of an auditable system, the research team also trained the machine to generate a running account of each move it made. While this sounds well-intentioned, the more likely outcome is an AI that produces plausible, civil explanations ex post facto, much as people do in certain situations: measuring the programme's responses against past human reactions and gradually building trust and rapport based on what the AI has learned we want to hear. That's my story and I'm sticking to it.