@androcat@toot.cat
@FandaSin@social.linux.pizza
The real scoop here is: Google is admitting that LLMs are a false start, a worthless nothingburger.
LLMs are trained on text.
As if text could be meaningful without the things referred to in the text.
The word "apple" cannot fully make sense to a person who has not seen and tasted and smelled an apple.
So, here Google is admitting that obvious fact.
Of course, it won't help them, because the "brains" they are able to produce are so tiny. They are dealing in neural networks smaller than those of any living insect (and there are some seriously small-brained insects out there).
And a virtual world has a massive drawback compared to text: it's slow to explore. You can force the brain of a microscopic insect to "read" all the text of the internet, no problem. But if you give it a universe to explore, it will have no speed advantage.
@snaptophobic@mastodon.me.uk @kibcol1049@mstdn.social
@peteriskrisjanis@toot.lv
@androcat@toot.cat @FandaSin@social.linux.pizza @snaptophobic@mastodon.me.uk @kibcol1049@mstdn.social AGI is a long-term effort, and I don't see any American corporation being able to pull it off. I would argue this is most likely very bright minds in R&D making first steps. In reality, AGI is still 10-15 years away. No shareholders are interested in that.