A neural network trained to recognize objects in images turned out to be surprisingly easy to fool, which calls into question much of the progress in AI algorithms over the last few years. The attack was devised by researchers from Kyushu University (Japan), and it took them only a single pixel.

While testing new image-recognition systems, the researchers deliberately altered a single pixel in a picture. The pixel was not chosen at random, but placed at strategically selected coordinates based on an analysis of the AI's algorithm. The system then confused everything: kittens with puppies, horses with cars.

One "fake pixel" is enough to trick a classifier on an image of about 1,000 pixels; for an image of a million pixels, only a couple of hundred points need to be changed. Exploiting the same principle, scientists from MIT 3D-printed a toy turtle that a neural network classified as a military rifle, and a baseball that it mistook for a cup of coffee. This is a huge problem.
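The idea of the attack can be illustrated with a toy sketch. The actual paper searches for the pixel with differential evolution against real convolutional networks; the stand-in below uses a trivial linear "classifier" and plain random search, purely to show the mechanics of flipping a prediction by changing one pixel. All names and the model here are illustrative assumptions, not the researchers' code.

```python
import random

def classify(image, weights):
    # Toy stand-in "classifier": scores a flat grayscale image
    # (list of floats in [0, 1]) for each class as a weighted sum,
    # and returns the index of the highest-scoring class.
    # This is NOT the networks attacked in the paper.
    scores = [sum(w * p for w, p in zip(ws, image)) for ws in weights]
    return scores.index(max(scores))

def one_pixel_attack(image, weights, trials=2000, seed=0):
    # Crude random search (a stand-in for the differential evolution
    # used in the real attack) for a single pixel whose new value
    # flips the predicted class. Returns (pixel_index, new_value)
    # on success, or None if no flip was found within `trials`.
    rng = random.Random(seed)
    original = classify(image, weights)
    for _ in range(trials):
        i = rng.randrange(len(image))
        v = rng.random()
        candidate = list(image)
        candidate[i] = v  # change exactly one pixel
        if classify(candidate, weights) != original:
            return i, v
    return None
```

With a 4-pixel "image" such as `[0.6, 0.1, 0.1, 0.1]` and two weight vectors `[[1, 0, 0, 0], [0, 0, 0, 1]]`, the search quickly finds a single-pixel change that moves the prediction from class 0 to class 1.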

The near future will depend on machines that must recognize objects and navigate the real world. If they are this easy to fool, the risk of errors with catastrophic consequences rises tremendously, and "foolproofing" will now be demanded by the people who have to deal with these AIs. There is an upside, though: if Skynet ever launches its uprising of the terminators, you could simply crouch awkwardly under a bush and the robot would not recognize you.

An example of the AI working with modified photos. The recognition result is given in parentheses — ArXiv