Researchers from Adversa have developed an attack technique named Adversarial Octopus that can perform targeted attacks on facial recognition systems. It affects several current AI-driven facial recognition tools, exposing them to severe attacks.
About the new attack
Researchers developed this new attack on AI-driven facial recognition systems; it alters photos in such a way that an AI system recognizes them as a different person, or as any person of the attacker's choosing.
The main characteristic of this attack is that it can target various AI implementations, such as physical devices and online APIs, and can adapt to the targeted environment.
This type of attack may be used both in evasion scenarios, such as creating stealthy deepfakes, and in poisoning scenarios that fool computer vision algorithms, and it may lead to serious consequences.
The attack is able to bypass facial recognition services, applications, and APIs. Moreover, it was shown to work against PimEyes, one of the most advanced online facial recognition search engines.
Attack on PimEyes search engine
This Adversarial Octopus attack on PimEyes was built using the following techniques from the attack framework:
To provide better Transferability, the attack was trained on various facial recognition models, with random blur and noise applied during training.
For better Accuracy, the system was designed to calculate adversarial changes at every layer of a neural network, and it uses a random face detection frame.
For better Imperceptibility, the perturbation was optimized to make only minor changes to every pixel, and special smoothing functions are applied to the adversarial noise.
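The three properties above can be illustrated with a toy sketch. This is not Adversa's actual implementation; the linear "embedding models", the smoothing kernel, and all parameters below are hypothetical stand-ins. It shows a targeted gradient attack averaged over an ensemble of models (transferability), with a bounded, smoothed per-pixel perturbation (imperceptibility):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins for facial recognition models: each maps a
# flattened image to an embedding via a random linear projection.
D, E = 64, 8  # image size, embedding size (illustrative values)
models = [rng.normal(size=(E, D)) / np.sqrt(D) for _ in range(3)]  # ensemble -> transferability

def embed(W, x):
    return W @ x

def smooth(delta, k=3):
    # Crude moving average standing in for the "special functions to
    # smooth adversarial noise" described in the article.
    return np.convolve(delta, np.ones(k) / k, mode="same")

source = rng.uniform(0, 1, D)  # attacker's image, pixels in [0, 1]
victim = rng.uniform(0, 1, D)  # image of the person to impersonate
targets = [embed(W, victim) for W in models]

eps, lr = 0.08, 0.05  # max per-pixel change, step size (assumptions)
x = source.copy()
for _ in range(200):
    # Analytic gradient of ||W x - t||^2, averaged across the ensemble.
    grad = sum(2 * W.T @ (embed(W, x) - t)
               for W, t in zip(models, targets)) / len(models)
    delta = smooth(x - lr * grad - source)  # smooth accumulated perturbation
    delta = np.clip(delta, -eps, eps)       # keep changes minor per pixel
    x = np.clip(source + delta, 0, 1)       # stay a valid image

def dist(img):
    # Mean embedding distance to the victim across the ensemble.
    return np.mean([np.linalg.norm(embed(W, img) - t)
                    for W, t in zip(models, targets)])

print(dist(source), dist(x))  # the attacked image moves closer to the victim
```

After the loop, the perturbed image differs from the original by at most `eps` per pixel, yet its embedding is closer to the victim's for every model in the ensemble, which is the essence of a targeted impersonation attack.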
Conclusion
This attack shows that AI systems require much more attention on the security front, and new attack methods like this will help raise awareness. It will help enterprises address the adversarial machine learning vulnerabilities present in their current systems. Moreover, the researchers are coordinating with enterprises to protect their AI applications from this attack.