
David Gerard
@davidgerard@circumstances.run

Hack a smart home with a calendar invite! And Google Gemini

https://pivot-to-ai.com/2025/08/10/hack-a-smart-home-with-a-calendar-invite-and-google-gemini/ - text
https://pivottoai.libsyn.com/20250810-hack-a-smart-home-with-gemini-ai-and-a-calendar-invite - podcast
https://www.youtube.com/watch?v=jybs-p6rzz8&list=UU9rJrMVgcXTfa8xuMnbhAEA - video

Paul Walker
@arafel@mas.to

@davidgerard@circumstances.run Getting more and more convinced that a shift in the way we build these things is really, really required (like the CHERIoT work that @david_chisnall@infosec.exchange is doing). The electronics now are fast enough to absorb any overhead of doing it securely.


David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange

@arafel@mas.to @davidgerard@circumstances.run

The things we’re doing with CHERIoT make it possible to secure the endpoints. Unfortunately, this doesn’t help if you authorise a confused deputy to access the device.
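For readers who haven't met the term: a confused deputy is a program that holds authority its caller lacks and can be tricked into spending that authority on the caller's behalf. A minimal sketch, after Norm Hardy's classic compiler example; every name here is hypothetical and nothing below is CHERIoT code:

```python
# Minimal confused-deputy sketch (hypothetical names).

def compile_source(source: str) -> str:
    return f"; compiled from {len(source)} bytes of source\n"

def deputy_compile(source: str, output_path: str) -> None:
    # The deputy holds write authority its caller lacks, and spends it on a
    # caller-chosen argument without asking whose authority *should* apply
    # to output_path.
    with open(output_path, "w") as f:
        f.write(compile_source(source))

# An unprivileged caller never needed write access to the billing log;
# the deputy's own credentials do the damage:
#   deputy_compile("int main(){}", "/var/billing/usage.log")
```

Securing the endpoint does nothing here: the write to the billing log is fully authorised, just on behalf of the wrong principal.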

I wrote a year or so ago that LLMs are the new memory safety bugs. The problem with a memory safety bug is that it allows an attacker to step outside of the language abstract machine and do things that are not (and, in some cases, cannot be) expressed in the program source code. LLMs have no well-defined abstract machine and they have no separation between code and data. Just as anyone who can send a message to a system and exploit a memory safety bug should be assumed to have complete control over the system, anyone who can send any text to an LLM should be assumed to be able to do anything that the LLM can do.

If you create a system with a set of secure devices and have a secure authorised path from an LLM to these devices that allows the LLM to control them, you have built a system that allows anyone who can send any text to the LLM to have control over the system. That isn’t a security vulnerability, that is a property intrinsic to the shape of the system that you have built.
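The same point expressed as a toy reachability computation (a hypothetical model, not code from any real system): the attacker's control over the devices follows from the edges the designer drew, so no patch to any single node removes it.

```python
# "Who controls the devices" is graph reachability over the authorised
# paths, not a bug in any component. Node names are illustrative.

CONTROLS = {
    "anyone_who_can_send_text": {"llm"},      # calendar invites, emails, ...
    "llm": {"lock", "shutters", "boiler"},    # the authorised control path
}

def reachable(principal: str) -> set[str]:
    seen, frontier = set(), {principal}
    while frontier:
        node = frontier.pop()
        seen.add(node)
        frontier |= CONTROLS.get(node, set()) - seen
    return seen - {principal}

print(reachable("anyone_who_can_send_text"))
# prints some ordering of {'llm', 'lock', 'shutters', 'boiler'}:
# intrinsic to the shape of the system, exactly as described above.
```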