Abstract
In this study, we conduct an empirical analysis of the interpretation errors made by Amazon Alexa, the speech-recognition engine that powers the Amazon Echo family of devices. We show how misinterpretations made by Alexa can be used to build a new attack, which we call skill squatting, and discuss its security implications.