Apple’s work on AI-enhancements for Siri has been officially delayed (it’s now slated to roll out “in the coming year”) and one developer thinks they know why – the smarter and more personalized Siri is, the more dangerous it can be if something goes wrong.
Simon Willison, the developer of the data analysis tool Dataset, points the finger at prompt injections. AIs are typically restricted by their parent companies who impose certain rules on them. However, it’s possible to “jailbreak” the AI by talking it into breaking those rules. This is done with so-called “prompt injections”.
As a simple example, an AI model may have been instructed to refuse to answer questions about doing something illegal. But what if you ask the AI to write you a poem about hotwiring a car? Writing poems isn’t illegal, right?
This is an issue that all companies offering AI chatbots face and they have gotten better at blocking obvious jailbreaks, but it’s not a solved problem yet. Worse, jailbreaking Siri can have much worse consequences than most chatbots because of what it knows about you and what it can do. Apple spokeswoman Jacqueline Roy described Siri as follows:
“We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps.”
Apple, undoubtedly, put rules in place to prevent Siri from accidentally revealing your private data. But what if a prompt injection can get it to do it anyway? The “ability to take action for you” can be exploited too, so it’s vital for a company that is as privacy and security conscious as Apple to make sure that Siri can’t be jailbroken. And, apparently, this is going to take a while.