@davidgerard@circumstances.run
lol. Musk is adding ads to Grok on Twitter https://www.ft.com/content/3bc3a76a-8639-4dbe-8754-3053270e4605
there's going to be a discourse siiingularityyyyy https://comicbook.com/anime/news/homestuck-animated-series-hazbin-hotel-creators/
Alexa+ rolls out at last! … It's not so great
https://pivot-to-ai.com/2025/08/09/alexa-rolls-out-at-last-its-not-so-great/ - text
https://pivottoai.libsyn.com/20250809-alexa-rolls-out-at-last-its-not-so-great - podcast
https://www.youtube.com/watch?v=oaVR9jm8IWI&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
I've been noticing lately how earnings calls always seem to reassure analysts that new products already have a path to enshittification (in the strict Doctorow sense) firmly on the product roadmap.
analysing tim cook's dumb speech to employees
https://www.bloomberg.com/news/articles/2025-08-01/apple-ceo-tells-staff-ai-is-ours-to-grab-in-hourlong-pep-talk
all the things he claims credit for as "apple" were steve jobs
including crediting "apple" (jobs) with the good tablet, when the shitty tablet was invented by "apple" (sculley)
has tim had a single jobsian category-maker? have i missed one?
Joy Division, Paradiso, Amsterdam, 11 Jan 1980
this is one of the best JD bootlegs. I got a shorter version on a shittily pressed LP in the 1980s.
i believe there's a best-possible version on an official CD now.
https://www.youtube.com/watch?v=hnchmyoAm2E
currently wondering what if anything to write about OpenAI's open-weights LLM GPT-OSS
lots of "here's how it performs" "boo censored" sure
but
is it significant? i'm not at all convinced it is. doesn't seem to change the game.
The thing about gpt-oss is, the market for LLM-at-home is nerd experimenters - the home LLM runners on LocalLLaMA. there isn't a market as such.
there's cloud providers offering it as a model you could use
the purpose of gpt-oss seems to be marketing - pure catchup, cos llama and deepseek and qwen are open weights. look, not left behind.
and apparently it's pretty good as a home model? if that's your bag
OpenAI censored the shit out of it because that risks bad press less than not censoring it - and breaking the censorship appears not that hard
this thread on how gpt-oss seems to have been trained is hilarious https://xcancel.com/jxmnop/status/1953899426075816164
this thing is clearly trained via RL to think and solve tasks for specific reasoning benchmarks. nothing else.
i could easily write 250 words on this thing, but i'm not sure they'd be ones i'd care to read either
and it truly is a tortured model. here the model hallucinates a programming problem about dominos and attempts to solve it, spending over 30,000 tokens in the process
completely unprompted, the model generated and tried to solve this domino problem over 5,000 separate times
i thought this would be a good article answering a question i was interested in
it turned into an inadvertently hilarious one
they wanted a local LLM! so they uh tried to vibe code it
https://instavm.io/blog/building-my-offline-ai-workspace
so I posted previously about a bad number for paying users of AI services vs. free users - it was 3%, but it was shaky enough that I headlined it as "3%*".
is there a robust number anyone knows of? I vaguely recall seeing 5-7% but can't remember where
this one says 8%
https://www.zdnet.com/article/only-8-of-americans-would-pay-extra-for-ai-according-to-zdnet-aberdeen-research/
any others welcomed