Usage of Adversarial Examples as a Defensive Mechanism in Cybersecurity


The focus of this research article is on defenses against a theoretical malware threat model. The malware, which we term a GUI-Attack, searches the victim computer's desktop and uses image recognition to locate the icons of highly used web browsers such as Google Chrome, Internet Explorer, Mozilla Firefox, and Opera, and through them gains access to secure data and information. We propose that adversarial examples can be used as a defensive mechanism to protect secure information from these GUI-Attacks. We aim to show that adversarial examples can prevent malicious AI from recognizing the icons of popular web browsers, making them an effective defensive mechanism against AI-powered GUI-Attacks.
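To make the threat model concrete, the icon-locating step of such a GUI-Attack can be sketched as a template match over a screenshot. This is a minimal illustration, not the attack from the article: the `find_icon` function, the toy 8×8 "screen", and the 3×3 "icon" are all hypothetical, standing in for a real screenshot and a real classifier.

```python
import numpy as np

def find_icon(screen, template):
    """Locate `template` in `screen` by exhaustive sum-of-squared-differences
    search -- a toy stand-in for the image-recognition step of a GUI-Attack."""
    H, W = screen.shape
    h, w = template.shape
    best, best_pos = float("inf"), None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            d = np.sum((screen[i:i + h, j:j + w] - template) ** 2)
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos

# Hypothetical "desktop" with one icon pasted at row 4, column 2
screen = np.zeros((8, 8))
icon = np.arange(9, dtype=float).reshape(3, 3)
screen[4:7, 2:5] = icon
print(find_icon(screen, icon))  # -> (4, 2)
```

A real attack would use a learned classifier rather than exact matching, which is precisely what makes the adversarial-example defense below applicable: small pixel perturbations that leave the icon visually intact can still derail a learned recognizer.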


With every evolution in computing and technology come new ways for cyber-criminals and bad actors to access, steal, or scam away private information and assets. Thus, with each evolution of our technology, the security and defenses we employ must also evolve to combat the threat. Given the rapidly rising power and accessibility of artificial intelligence, criminals will be able to deploy much smarter, more powerful, and more dangerous viruses, malware, and spyware, and to use AI to better evade detection by antivirus software and protocols. These AI-powered attacks are highly evasive and precisely targeted, attacking only a very specific target, unlike much of today's common malware, which employs more of a "spray and pray" approach. To combat these AI-powered attacks, security experts can use AI to fight back. By obfuscating the specific features an AI-powered virus is looking for, we create an adversarial example. Although adversarial examples are best known for exposing major flaws and vulnerabilities in even the most state-of-the-art neural networks, they can also be turned against cyber-attacks: by exploiting these weaknesses, we can make the virus unable to find or distinguish what it is looking for, and thus unable to execute its function on the victim's system, turning adversarial examples into a defense mechanism for cybersecurity.

Additional information


Carl Jake Thurnau
