
Martin Escardo
@MartinEscardo@mathstodon.xyz

@johncarlosbaez@mathstodon.xyz @j_bertolotti@mathstodon.xyz @julesh@mathstodon.xyz

If you'll allow me, let me say more about this (which I would not regard as original thinking).

The problem with genAI is that the general public is given no specification of what it actually does, not even an informal one that could be verified empirically or mathematically.

What happens now is that you ask chatSomething a question and it gives you an answer. Is the answer correct? Sometimes. Often it is not, and the frequency of (in)correctness depends on the subject (programming, cooking, molecular biology, mathematics, counselling, law, whatever).
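
To see the contrast: for ordinary software, say a sorting routine, there is a specification ("the output is ordered and has the same elements as the input") that anyone can check mechanically on as many inputs as they like. A rough sketch in Python, purely illustrative:

from collections import Counter
import random

def satisfies_sorting_spec(sort_fn, xs):
    # The usual specification of sorting: the output is ordered
    # and is a rearrangement of the input.
    ys = sort_fn(xs)
    ordered = all(a <= b for a, b in zip(ys, ys[1:]))
    same_elements = Counter(ys) == Counter(xs)
    return ordered and same_elements

# Empirical check on random inputs.
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert satisfies_sorting_spec(sorted, xs)

Nothing of this kind, not even an informal statement that could be tested in this way, has been offered for the answers of chatSomething.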

Can this work in the future? Maybe. People are very creative, and may make this work. After all, many of us believe that intelligence, like everything else in nature, is ultimately mechanical.

But right now we get an answer with no promise of correctness, not even in a very vague sense.

We get answers with no promise that they actually answer the question, and worse, they often get it wrong even for questions whose answer we already know.

Never mind the questions for which nobody knows the answer (or for which merely the person asking doesn't know it).

Vassil Nikolov | Васил Николов
@vnikolov@ieji.de

@MartinEscardo@mathstodon.xyz wrote:

The problem with genAI is that there is no specification ...

Right.
One reason for that is that this kind of artificial intelligence is an emergent phenomenon.
Are people still talking about emergent phenomena?

Whether it is at all a good idea to base a commercial product on an insufficiently understood emergent phenomenon is a separate, extremely important matter.

#AI
#ArtificialIntelligence

@johncarlosbaez@mathstodon.xyz @j_bertolotti@mathstodon.xyz @julesh@mathstodon.xyz