MIT Students Fool Google Image-Recognition Tech
MIT researchers managed to fool Google's image-recognition tech into thinking a picture of two skiers was a dog. That may sound trivial, but their work highlights that existing machine-learning algorithms, whether they're used in self-driving cars or on social media, can be tricked and possibly abused.
"Information technology'southward really important that these system are made secure, and can't exist exploited," said Anish Athalye, an MIT PhD candidate who worked on the research.
The hack targeted Google's Cloud Vision API, which is open to developers. The technology scans digital pictures and recognizes the objects depicted. But the API isn't perfect. Image recognition can be tricked into misclassifying images when pixels are altered or shapes and colors are changed.
Athalye and his colleagues designed a computer program that does just that by making subtle tweaks to a picture. In another test, they tricked Google's API into thinking a set of rifles was actually a helicopter.
It's a notable hack because it works on an actual Google product under what security experts call "black box" conditions, in which the researchers had no access to the inner workings of the target technology. Other attempts to fool image-recognition tech have largely focused on "white box" systems, where the underlying computing mechanisms were known.
To exploit the Google image-recognition system, the MIT researchers used an algorithm known as Natural Evolution Strategies, or NES. It essentially helped them estimate how the image recognition might go about classifying a picture.
Their program feeds Google a set of modified pictures, observes how they are classified, and makes changes accordingly before submitting another batch. In their demo, it took about one million image submissions until their program finally crafted one that fooled Google's system.
That's certainly a lot of queries. But the MIT researchers found their method can be up to 1,000 times faster than previous approaches that targeted systems under black box conditions. It can do this because their program tweaks large groups of pixels across an image, as opposed to a few pixels at a time, said Andrew Ilyas, an MIT master's student who also worked on the research.
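For readers curious what such a query-only attack loop looks like in practice, here is a minimal Python sketch of NES-style gradient estimation against a black-box classifier. It is an illustration of the general technique, not the researchers' code: the `query_target_prob` function is a hypothetical stand-in for a real API call, and the sample counts and step sizes are arbitrary choices.

```python
# Minimal sketch of a NES-style black-box attack loop (illustrative only).
import numpy as np

def query_target_prob(image: np.ndarray) -> float:
    """Hypothetical black-box query: returns P(target class | image).
    A real attack would send the image to the remote API here."""
    return float(np.clip(image.mean(), 0.0, 1.0))  # placeholder so the sketch runs

def nes_gradient(image, n_samples=50, sigma=0.1):
    """Estimate the gradient of the target-class probability using queries only."""
    grad = np.zeros_like(image)
    for _ in range(n_samples):
        noise = np.random.randn(*image.shape)
        # Antithetic sampling: query symmetric perturbations to reduce variance.
        p_plus = query_target_prob(image + sigma * noise)
        p_minus = query_target_prob(image - sigma * noise)
        grad += (p_plus - p_minus) * noise
    return grad / (2 * n_samples * sigma)

def attack(image, steps=100, step_size=0.01, epsilon=0.05):
    """Nudge the whole image toward the target class while keeping changes subtle."""
    original = image.copy()
    adv = image.copy()
    for _ in range(steps):
        grad = nes_gradient(adv)
        adv = adv + step_size * np.sign(grad)                        # move toward the target class
        adv = np.clip(adv, original - epsilon, original + epsilon)   # stay close to the original picture
        adv = np.clip(adv, 0.0, 1.0)                                 # keep valid pixel values
    return adv

if __name__ == "__main__":
    skier_photo = np.random.rand(64, 64, 3)   # stand-in for the real two-skiers picture
    adversarial = attack(skier_photo)
    print("target-class probability:", query_target_prob(adversarial))
```

Because each gradient estimate perturbs every pixel at once, the loop makes progress with far fewer queries than methods that probe a few pixels at a time, which is the speedup Ilyas describes above.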
Google declined to comment on the research, but the company's AI programmers have likewise been studying ways to protect machine learning-based systems. The search giant has also joined with others, including Microsoft and Facebook, to promote best practices in AI technology.
Athalye said the MIT research shows attacking AI systems isn't simply theoretical; it's an issue that needs to be addressed before the technology enters actual products like self-driving cars. "We want to make sure the good guys can fix these things, before the bad guys end up exploiting them," he said.
Source: https://sea.pcmag.com/news/18748/mit-students-fool-google-image-recognition-tech
