Brutkey

Christoffer S.
@nopatience@swecyb.com

All these prompt-injection attacks against "LLM"-enabled development environments are for sure, with no doubt in my mind, going to be exploited by North Korea.

Given their historical aptitude for poisoning the well (NPM, PyPI et al.) inserting "innocent" little prompts here and there across issues, even pull requests...

... I'm sure it happens now already. Suggest PR, open Issue, have copilots and other such "intelligent" agents redefine their prompts, exfiltrate some tokens, add some backdoors etc.

https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/

#Cybersecurity #LLM