Voice Processing Systems (VPSes) — such as your smartphone's voice assistant — are ubiquitous. Yet these systems have been shown to be susceptible to a wide variety of attacks, including speaker impersonation, synthetic speech, and hidden voice commands. This project seeks to study the security of VPSes and characterize their attack surface, with a specific focus on the use of adversarial machine learning techniques.
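To make the adversarial machine learning angle concrete, the sketch below shows the core idea behind one classic technique, the Fast Gradient Sign Method (FGSM), applied to a toy stand-in for a voice command classifier. Everything here is hypothetical and illustrative: the "classifier" is a hand-rolled logistic regression over raw samples, not any real VPS model, and the attack merely demonstrates how a small, bounded perturbation can flip a model's decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a VPS command classifier: logistic
# regression over raw audio samples. Real VPSes are far more complex;
# this only illustrates the gradient-based attack principle.
n = 256                      # number of audio samples in the clip
w = rng.normal(size=n)       # fixed, made-up classifier weights
b = 0.0

def predict(x):
    """Probability the classifier assigns to the 'command' class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign audio clip the classifier rejects (score < 0.5).
x = -0.01 * w + 0.001 * rng.normal(size=n)

def fgsm(x, epsilon):
    """FGSM: nudge each sample in the direction that increases the
    'command' score, bounded by epsilon so the change stays small."""
    p = predict(x)
    grad = w * p * (1.0 - p)          # gradient of the score w.r.t. x
    return x + epsilon * np.sign(grad)

x_adv = fgsm(x, epsilon=0.05)
print(predict(x), predict(x_adv))     # adversarial score is higher
```

The perturbation is bounded in amplitude by `epsilon`, which is why such attacks can remain imperceptible to a human listener while still changing the model's output.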