Brutkey

Christoffer S.
@nopatience@swecyb.com
Christoffer S.
@nopatience@swecyb.com

@DamonCrowley@mindly.social PS: Would it be possible to buy that to print?

Christoffer S.
@nopatience@swecyb.com

@kagihq@mastodon.social ... and back it would appear.

Christoffer S.
@nopatience@swecyb.com

Ouch... appears as if there is a significant outage at @kagihq@mastodon.social

Christoffer S.
@nopatience@swecyb.com

I have now more actively started an internal push towards buying @frameworkcomputer@fosstodon.org laptops because from an environmental perspective pretty much anything else is mute. And from a performance / upgrade perspective, mute. Perhaps initial price. But compared to much else it's not that much more expensive, especially when considering what you get.

Let's see if I can make a difference. I know I'm due an upgrade, will try and get myself another Framework 13 with the new Ryzen chipset. Would probably pee a little of excitement if that happened.

#Laptop #FrameWork

Christoffer S.
@nopatience@swecyb.com

A thought on prompt injections. Could this defensive countermeasure work?

Before sending off a prompt, hash sign it using an ... MCP-prompt sign endpoint.

Then within the prompt ask the "agent" once completed with it's job, always use the MCP-prompt sign endpoint to sign what it believes is it's current prompt.

Once the LLM has completed processing, signed it's "current prompt", the original requestor can compare the two signed hashes.

I know I'm missing stuff here, but might this be worth exploring?

#LLM #AI #PromptInjection

Christoffer S.
@nopatience@swecyb.com

All these prompt-injection attacks against "LLM"-enabled development environments are for sure, with no doubt in my mind, going to be exploited by North Korea.

Given their historical aptitude for poisoning the well (NPM, PyPI et al.) inserting "innocent" little prompts here and there across issues, even pull requests...

... I'm sure it happens now already. Suggest PR, open Issue, have copilots and other such "intelligent" agents redefine their prompts, exfiltrate some tokens, add some backdoors etc.

https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/

#Cybersecurity #LLM

Christoffer S.
@nopatience@swecyb.com

Good read by Karsten Hahn (perhaps this Hahn @struppigel@infosec.exchange ) over at G Data Cyberdefense.

LLM's used to add backdoor in websites.

https://www.gdatasoftware.com/blog/2025/08/38247-justaskjacky-ai-trojan-horse-comeback

#Cybersecurity #Infosec #ThreatIntel
@threatintel@a.gup.pe @cybersecurity@a.gup.pe

Christoffer S.
@nopatience@swecyb.com

@cybersecurity@a.gup.pe No... I'm dumb. One way communication. Jesus... my brain, please get back to work... slacker...

or... what if the RSS-endpoint actually accepted PUT/POST... then it could work. It doesn't have to be a "real" RSS, just appear to be one...

... perhaps I should stop trying to think, it's not working.

Christoffer S.
@nopatience@swecyb.com

Hmm... wouldn't it be kind of fun to use RSS + RPCJSON as a C2-channel?

Given how often RSS-feeds contain descriptions of C2, why not use it as a C2?

#ThreatIntel #Cybersecurity #Infosec
@cybersecurity@a.gup.pe @threatintel@a.gup.pe

Christoffer S.
@nopatience@swecyb.com

#Fortinet #SSL #GreyNoise #Vulnerability #NotYet