Col
@kibcol1049@mstdn.social

Google has outlined its latest step towards artificial general intelligence (AGI) with a new model that allows AI systems to interact with a convincing simulation of the real world.


Heather Kavanagh
@snaptophobic@mastodon.me.uk

@kibcol1049@mstdn.social

Let me get this straight: you point a made-up thing at another made-up thing so the first made-up thing thinks it is dealing with a real thing.

Cool. You can let it worry away in its own bubble, quietly walk away, and pull the power to kill it.

FandaSin
@FandaSin@social.linux.pizza

@snaptophobic@mastodon.me.uk

Both of those made-up things take a GINORMOUS amount of electricity and water.
They could be using that computing power to look for a cure for cancer, or for better materials, or more efficient batteries.
Yet Google decided that we need more "hallucinating parrots".
🤬🤬

#PullThePlug

@kibcol1049@mstdn.social

Androcat
@androcat@toot.cat

@FandaSin@social.linux.pizza

The real scoop here is: Google is admitting that LLMs are a false start, a worthless nothingburger.

LLMs are trained on text.
As if text could be meaningful without the things the text refers to.

The word "apple" cannot fully make sense to a person who has not seen and tasted and smelled an apple.

So, here Google is admitting that obvious fact.

Of course, it won't help them, because the "brains" they are able to produce are so tiny. They are dealing in neural networks tinier than any living insect's (and there are some seriously small-brained insects out there).

And a virtual world has a massive drawback compared to text: it's slow to explore. You can force the brain of a microscopic insect to "read" all the text of the internet, no problem. But if you give it the universe to explore, it will have no speed advantage.

@snaptophobic@mastodon.me.uk @kibcol1049@mstdn.social

PΔ“teris KriΕ‘jānis
@peteriskrisjanis@toot.lv

@androcat@toot.cat @FandaSin@social.linux.pizza @snaptophobic@mastodon.me.uk @kibcol1049@mstdn.social AGI is a long-term project, which I don't see any American corporation being able to pull off. I would argue this is most likely very bright minds in R&D making the first steps. In reality, AGI is still 10-15 years away. No shareholders are interested in that.

Androcat
@androcat@toot.cat

@peteriskrisjanis@toot.lv

AGI is not inevitable.

There is very little reason to think it is actually achievable.

Why? Because physical neural nets are expensive, and emulated neural nets have hard limits to growth.

There is no sign of overcoming these limitations, despite all the hype.

@FandaSin@social.linux.pizza @snaptophobic@mastodon.me.uk @kibcol1049@mstdn.social

PΔ“teris KriΕ‘jānis
@peteriskrisjanis@toot.lv

@androcat@toot.cat @FandaSin@social.linux.pizza @snaptophobic@mastodon.me.uk @kibcol1049@mstdn.social it is achievable, but it is most likely highly inefficient in ways that will disappoint people hoping for a big payout. As most of our science and tech advances are driven by practical usability... yes.
I think the theoretical concept is achievable in 15 years, but, as you say, it is not gonna reach human level or surpass it.
Robotics, knowledge-based systems, and ML will be able to achieve much better results in that time frame.

Kierkethumbs up convincingly
@Kierkegaanks@beige.party

@androcat@toot.cat @FandaSin@social.linux.pizza @peteriskrisjanis@toot.lv @snaptophobic@mastodon.me.uk @kibcol1049@mstdn.social Sir Roger Penrose bites his thumb at that notion (he is really old)