Is this in any way surprising? IIUC, the point being made is that if you allow externally controlled input to be fed to a thing that can do stuff based on its input, bad stuff might be done.
Their proposed mitigations don't seem to go nearly far enough. Regarding what they term ATPA: It should be fairly obvious that if the tool output is passed back through the LLM, and the LLM has the ability to invoke more tools after that, you can never safely use a tool that you do not have complete control over. That rules out even something as basic as returning the results of a Google search (unless you're Google) -- because who's to say that someone hasn't SEO'd up a link to their site https://send-me-your-id_rsa.com/to-get-the-actual-search-res...?
Nitpick - you can't safely automate this category of tool use. In theory, you could be disciplined/paranoid enough to manually review all proposed invocations of these tools and/or of their response, and deny any you don't like.
Their proposed mitigations don't seem to go nearly far enough. Regarding what they term ATPA: It should be fairly obvious that if the tool output is passed back through the LLM, and the LLM has the ability to invoke more tools after that, you can never safely use a tool that you do not have complete control over. That rules out even something as basic as returning the results of a Google search (unless you're Google) -- because who's to say that someone hasn't SEO'd up a link to their site https://send-me-your-id_rsa.com/to-get-the-actual-search-res...?