Taggart
@mttaggart@infosec.exchange

Why is every slide presentation tool awful in some way?

Taggart
@mttaggart@infosec.exchange

Well this talk/workshop has quickly become the hackiest thing I've ever put together. It's working, but it is Rube Goldbergian in the extreme.

Taggart
@mttaggart@infosec.exchange

Sorry to gripe on about this, but this #Matrix upgrade has been unbelievably ham-fisted. And ironically, I was forced to leave the Synapse (reference Matrix server) admin room because of constant CSAM attacks.

So, I dunno, maybe we're done here.

Taggart
@mttaggart@infosec.exchange

His forecasting is predicated on genAI replacing knowledge workers in large numbers. Remember that in order for that to happen, it isn't required that the output is actually good, only that the managers/CEOs think it's good enough. In the fullness of time, they may be proven wrong, but that doesn't change the immediate harms of the decisions.

Taggart
@mttaggart@infosec.exchange

I don't agree with Miessler on everything, but when he's concerned, I pay attention.

https://danielmiessler.com/blog/im-worried-it-might-get-bad

Taggart
@mttaggart@infosec.exchange

So it was an elevation of privilege from someone who was already highly privileged?

Without further details, the cure seems worse than the disease.

https://mastodon.social/@therecord_media/115021361263571206

Taggart
@mttaggart@infosec.exchange

From Bsky: I guess Cursor is blindly installing VSCode extensions? Lmao.

https://bsky.app/profile/johntuckner.me/post/3lw7se2iua22b

cc
@Sempf@infosec.exchange @cR0w@infosec.exchange

Taggart
@mttaggart@infosec.exchange

Did my #Matrix update and room upgrade.

Unfortunately, it looks like my preferred client, Cinny, has not updated to account for new rooms being joined to the same space as the old room, effectively replacing them. That's frustrating and poor UX.

Taggart
@mttaggart@infosec.exchange

Brutal:

The findings across task, length, and format generalization experiments converge on a conclusion: [Chain-of-Thought reasoning] is not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching, fundamentally bounded by the data distribution seen during training. When pushed even slightly beyond this distribution its performance degrades significantly, exposing the superficial nature of the "reasoning" it produces.

Taggart
@mttaggart@infosec.exchange

LLMs can't reason, part 3348249:

https://arstechnica.com/ai/2025/08/researchers-find-llms-are-bad-at-logical-inference-good-at-fluent-nonsense/

Source paper:
https://arxiv.org/pdf/2508.01191