You don’t always need to build fancy algorithms to tamper with image recognition systems – adding objects in random places will do the trick. In most cases, adversarial models are used to change a few pixels here and there to distort images so objects are incorrectly recognized. A few examples have included stickers that turn images of bananas into toasters, or silly glasses that fool facial recognition systems into believing you’re someone else. Let’s not forget the classic case of a turtle being mistaken for a rifle, which really drove home how easy it is to outwit AI. Now researchers from York University and the University of Toronto in Canada have shown that it’s possible to mislead neural networks simply by copying and pasting pictures of objects into images, too. No clever trickery is needed here.
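To get a feel for the copy-paste idea, here is a minimal sketch (not the researchers’ actual experiment) that pastes an out-of-context object crop into a scene and compares what a pretrained object detector reports before and after. The file names, paste location, and choice of detector (torchvision’s Faster R-CNN) are illustrative assumptions only.

```python
# Sketch: naive copy-paste of an unrelated object into a scene, then compare
# detections before and after. Image paths and paste coordinates are placeholders.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]
preprocess = weights.transforms()

def detect(img):
    """Return (label, score) pairs the detector reports above 0.5 confidence."""
    with torch.no_grad():
        out = model([preprocess(img)])[0]
    return [(categories[l], round(s.item(), 2))
            for l, s in zip(out["labels"], out["scores"]) if s > 0.5]

scene = read_image("living_room.jpg")     # hypothetical scene image
patch = read_image("object_crop.png")     # hypothetical out-of-context object

print("before:", detect(scene))

# Copy-paste attack in its crudest form: overwrite a region of the scene
# with the object crop at an arbitrary location.
y, x = 50, 50
h, w = patch.shape[1], patch.shape[2]
pasted = scene.clone()
pasted[:, y:y + h, x:x + w] = patch[:3]   # keep RGB, drop any alpha channel

print("after:", detect(pasted))
```

Comparing the two printouts shows whether the transplanted object changes the labels or confidence scores assigned to the rest of the scene, which is the effect the article describes.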
ORIGINAL SOURCE: The Register