
David Gerard
@davidgerard@circumstances.run

lol. Musk is adding ads to Grok on Twitter https://www.ft.com/content/3bc3a76a-8639-4dbe-8754-3053270e4605

David Gerard
@davidgerard@circumstances.run

there's going to be a discourse siiingularityyyyy https://comicbook.com/anime/news/homestuck-animated-series-hazbin-hotel-creators/

David Gerard
@davidgerard@circumstances.run

Alexa+ rolls out at last! … It’s not so great

https://pivot-to-ai.com/2025/08/09/alexa-rolls-out-at-last-its-not-so-great/ - text
https://pivottoai.libsyn.com/20250809-alexa-rolls-out-at-last-its-not-so-great - podcast
https://www.youtube.com/watch?v=oaVR9jm8IWI&list=UU9rJrMVgcXTfa8xuMnbhAEA - video

David Gerard
@davidgerard@circumstances.run

I've been noticing how earnings calls lately always seem to reassure analysts that new products already have a path to enshittification (in the strict Doctorow usage of the term) firmly on the product roadmap.

David Gerard
@davidgerard@circumstances.run

analysing tim cook's dumb speech to employees

https://www.bloomberg.com/news/articles/2025-08-01/apple-ceo-tells-staff-ai-is-ours-to-grab-in-hourlong-pep-talk

all the things he credits to "apple" are actually steve jobs

including crediting "apple" (jobs) with the good tablet, when the shitty tablet was invented by "apple" (sculley)

has tim had a single jobsian category-maker? have i missed one?

David Gerard
@davidgerard@circumstances.run

Joy Division, Paradiso, Amsterdam, 11 Jan 1980

this is one of the best JD bootlegs. I got a shorter version on a shittily pressed LP in the 1980s.

i believe there's a best-possible version on an official CD now.

https://www.youtube.com/watch?v=hnchmyoAm2E

David Gerard
@davidgerard@circumstances.run

The thing about gpt-oss is, the market for LLM-at-home is nerd experimenters - the home llm runners on localllama. there isn't a market as such.

there's cloud providers offering it as a model you could use

the purpose of gpt-oss seems to be marketing - pure catchup cos llama and deepseek and qwen are open weights. look, we're not left behind.

and apparently it's pretty good as a home model? if that's your bag
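
(if you want to poke it yourself: here's a minimal sketch using ollama's python client - assuming you've installed ollama, pulled the model, and done pip install ollama. ollama does carry gpt-oss, though treat the details here as a sketch, not gospel)

# minimal sketch: chat with gpt-oss on your own box via the
# ollama python client. assumes `ollama pull gpt-oss:20b` has
# already run and the ollama daemon is up.
import ollama

response = ollama.chat(
    model="gpt-oss:20b",  # the smaller open-weights variant
    messages=[{"role": "user", "content": "say hi in one sentence"}],
)
print(response["message"]["content"])  # the model's reply text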

OpenAI censored the shit out of it because that risks bad press less than not censoring it - and breaking the censorship appears not that hard

this thread on how gpt-oss seems to have been trained is hilarious
https://xcancel.com/jxmnop/status/1953899426075816164

this thing is clearly trained via RL to think and solve tasks for specific reasoning benchmarks. nothing else.
and it truly is a tortured model. here the model hallucinates a programming problem about dominos and attempts to solve it, spending over 30,000 tokens in the process
completely unprompted, the model generated and tried to solve this domino problem over 5,000 separate times
i could easily write 250 words on this thing but i'm not sure they'd be ones i'd care to read either

David Gerard
@davidgerard@circumstances.run

currently wondering what, if anything, to write about OpenAI's open-weights LLM GPT-OSS

lots of "here's how it performs" "boo censored" sure

but

is it significant? i'm not at all convinced it is. doesn't seem to change the game.

David Gerard
@davidgerard@circumstances.run

i thought this would be a good article answering a question i was interested in

it turned into an inadvertently hilarious one

they wanted a local LLM! so they uh tried to vibe code it

https://instavm.io/blog/building-my-offline-ai-workspace

David Gerard
@davidgerard@circumstances.run

this one says 8% of americans would pay extra for AI

https://www.zdnet.com/article/only-8-of-americans-would-pay-extra-for-ai-according-to-zdnet-aberdeen-research/

any others welcomed