@nazokiyoubinbou@urusai.social
@kawazoe@mstdn.ca @Madagascar_Sky@mastodon.social @nixCraft@mastodon.social I didn't use the word "stupid." I used the word artificial.
As in, real-life systems do multiple tasks at once and must adapt in different ways, rather than producing a single pre-determined result over and over.
And most of what they're using for benchmarks aren't even those things. They're things like how fast office software loads and performs a few basic operations.
@kawazoe@mstdn.ca
@nazokiyoubinbou@urusai.social @Madagascar_Sky@mastodon.social @nixCraft@mastodon.social I can't believe it has come to this, but here's a screenshot of your post with every instance of the word stupid highlighted in yellow, and every instance of the word artificial in red... Not a whole lot of red...
To your "multiple things at once" argument: I can't recall the last time I ran an AI training job and thought it'd be smart to play Cyberpunk on the same computer to kill some time. Sure, I might watch a video while I code and browse the web, but from those things my CPU barely sees any usage. I see way more impact from badly optimized software like Teams, Outlook, and Windows than from what I'm actually trying to do. In that world, it doesn't matter what chip you buy; they'll all perform badly. You'll get a much stronger effect from changing the software you use.

Does that mean single-task benchmarks are useless? No! When I hit compile, it doesn't matter whether I have Teams open. I want it done quickly. If a chip does it faster, it'll do it faster with Teams open too.
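The single-task timing argument above can be sketched in code. This is a minimal illustration, not any real benchmark suite: the workload function is a hypothetical stand-in for a task like a compile, and taking the best of several runs is one common way to filter out noise from background software.

```python
import time

def workload():
    # Hypothetical stand-in for a single task (e.g., hitting "compile"):
    # a fixed, deterministic chunk of CPU work.
    return sum(i * i for i in range(1_000_000))

def benchmark(fn, runs=5):
    """Time fn over several runs and return the best (lowest) wall-clock time.

    Taking the minimum discards runs where background software
    (the Teams/Outlook effect) happened to steal CPU time, so the
    result approximates the chip's speed on the task itself.
    """
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

print(f"best of 5 runs: {benchmark(workload):.4f}s")
```

A faster chip lowers that best-of-N number regardless of what else is open, which is the point: the background load adds noise to individual runs, but the underlying single-task speed still ranks the hardware.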