Adversarial Machine Learning

Posted 1 year ago by Brent Lagesse

School(s) : STEM
Primary PI Name : Brent Lagesse
Email : lagesse@uw.edu
Phone : 425-352-5313
Project/Faculty Website : http://faculty.washington.edu/lagesse/
Research Location : UW Bothell
Project Goals : Machine learning algorithms have traditionally been trained on data that might contain errors, but not on data that has been strategically manipulated by an adversary. As more applications train machine learning models on open data sources, more systems become vulnerable to adversarial machine learning attacks. We are devising game-theoretic defense mechanisms to protect systems against these attackers. (A minimal sketch of such an attack appears at the end of this listing.)
Student Qualifications : Self-motivation and programming knowledge (preferably in Python) are required; knowledge of machine learning or distributed computing is a plus.
Student Outcomes : Develop security mechanisms to protect machine learning systems from active adversaries.
Student Responsibilities : Develop machine learning attacks and defenses, apply defenses to existing applications, and develop a user-friendly system that enables other researchers to compare their work.
Number of Student Positions Available : 2

CSS
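
To make the "strategically manipulated" training data described in the project goals concrete, below is a minimal sketch of a label-flipping data-poisoning attack against an off-the-shelf classifier. The dataset, model, and poisoning rates are illustrative assumptions, not part of the project description.

```python
# Minimal sketch of a training-time (data-poisoning) attack: an adversary
# flips the labels of a fraction of the training set, and the victim model's
# accuracy on clean test data degrades. Dataset, model, and poisoning rates
# are hypothetical choices for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task (stand-in for an open data source).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def label_flip(y_clean, fraction, rng):
    """Adversary strategically flips the labels of `fraction` of the training points."""
    y_poisoned = y_clean.copy()
    n_flip = int(fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Train on increasingly poisoned data and evaluate on clean test data.
for frac in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, label_flip(y_train, frac, rng))
    print(f"poisoned fraction={frac:.1f}  clean test accuracy={clf.score(X_test, y_test):.3f}")
```

Even a modest fraction of flipped labels typically lowers clean-test accuracy; this kind of training-time manipulation is what the project's game-theoretic defenses aim to detect and counter.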