Leveraging AI to protect our users and the web

Posted by Elie Bursztein, Anti-Abuse Research Lead, and Ian Goodfellow, Adversarial Machine Learning Research Lead

Recent advances in AI are transforming how we combat fraud and abuse and implement new security protections. These advances are critical to meeting our users’ expectations and keeping increasingly sophisticated attackers at bay, but they come with brand new challenges as well.

This week at RSA, we explored the intersection between AI, anti-abuse, and security in two talks.

Our first talk provided a concise overview of how we apply AI to fraud and abuse problems. It started by detailing the fundamental reasons why AI is key to building defenses that keep up with user expectations and combat increasingly sophisticated attacks. It then delved into the top 10 challenges specific to applying AI to abuse fighting, and how to overcome them. Check out the infographic at the end of the post for a quick overview of the challenges covered in the talk.

Our second talk looked at attacks on ML models themselves and the ongoing effort to develop new defenses.

It covered attackers’ attempts to recover private training data, to poison a model’s training set so that it learns incorrect behaviors, to craft adversarial inputs that cause a model to misclassify at inference time, and more.
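To make the last of these attacks concrete, here is a minimal sketch of an evasion attack in the spirit of the fast gradient sign method (FGSM), applied to a toy logistic regression classifier. The weights, input, and epsilon value are made up purely for illustration; the talk itself concerned attacks on far larger models, but the underlying idea of perturbing an input along the loss gradient is the same.

```python
import numpy as np

# Toy illustration of an FGSM-style evasion attack on logistic regression.
# All values below are hypothetical, chosen only to show how a targeted
# perturbation can flip a model's prediction.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, epsilon):
    """Shift x by epsilon in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical model weights and a correctly classified input.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.8])
y = 1.0  # true label

print("clean prediction:", predict(w, b, x))            # ~0.85, class 1
x_adv = fgsm_perturb(w, b, x, y, epsilon=0.5)
print("adversarial prediction:", predict(w, b, x_adv))  # ~0.43, flipped
```

Running this shows the clean input classified confidently as class 1, while the perturbed input is pushed across the decision boundary, even though each feature moved by at most epsilon.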

Our talk also looked at various defense solutions, including differential privacy, which provides a rigorous theoretical framework for preventing attackers from recovering private training data.
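As an illustration of the core idea behind differential privacy, here is a minimal sketch of the Laplace mechanism, one of its simplest building blocks. The dataset, predicate, and epsilon value are hypothetical; systems that protect training data, such as DP-SGD, apply the same principle of calibrated noise to model gradients rather than to simple counts.

```python
import numpy as np

# Toy illustration of the Laplace mechanism. A counting query has
# sensitivity 1 (adding or removing one record changes the count by at
# most 1), so adding Laplace noise with scale 1/epsilon yields an
# epsilon-differentially-private answer.

def dp_count(records, predicate, epsilon):
    """Return a differentially private count of records matching predicate."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0  # one record changes the count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical dataset: ages of users in some training set.
ages = [23, 35, 41, 29, 52, 61, 19, 33]
print(dp_count(ages, lambda a: a >= 30, epsilon=0.5))
```

A smaller epsilon adds more noise and gives stronger privacy at the cost of accuracy; the formal guarantee is that the answer's distribution changes very little whether or not any single record is present.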

Hopefully you were able to join us at RSA! But if not, here are the re-recording and slides of our first talk on applying AI to abuse prevention, along with the slides from our second talk about protecting ML models.
