
julesh
@julesh@mathstodon.xyz

My thoughts about this just clicked. I have seen a bunch of cases of professed non-programmers building pretty big things entirely by vibe-coding, and they work fine. This allows people to participate in creation without years of training, and in the end that is not a bad thing.

There are a bunch of other systems like this that allow non-experts to simulate being able to do the real thing. Playing a shooter with an aimbot. Playing guitar with 3 chords. Driving an automatic.

These are all perfectly good things that allow amateurs to match the performance of experts. But if you don't take the hit and switch over to the real thing sooner or later, you'll be a novice forever.

John Carlos Baez
@johncarlosbaez@mathstodon.xyz

@julesh@mathstodon.xyz - I don't think people need to learn to drive with a stick shift these days. If you're a race-car driver or some other sort of specialist, okay, yeah.

(I do know how to drive with a stick shift, but that's because I'm old!)


j_bertolotti
@j_bertolotti@mathstodon.xyz

@johncarlosbaez@mathstodon.xyz That is because for most people being a novice at driving a car is good enough, which I think was part of @julesh@mathstodon.xyz's point.

(In Europe, unlike in the US, the vast majority of cars have a manual gearbox.)

John Carlos Baez
@johncarlosbaez@mathstodon.xyz

@j_bertolotti@mathstodon.xyz @julesh@mathstodon.xyz - a "novice" means an inexpert newbie, and there are plenty of people in the US who have driven cars successfully and well for decades using an automatic gearshift. They're not novices.

"In 1957 over 80% of new cars in the United States had automatic transmissions...."

and by now it must be higher.

https://en.wikipedia.org/wiki/Automatic_transmission

Martin Escardo
@MartinEscardo@mathstodon.xyz

@johncarlosbaez@mathstodon.xyz @j_bertolotti@mathstodon.xyz @julesh@mathstodon.xyz

The analogy with automatic transmission is not very good: an automatic transmission generally works safely, whereas "vibe coding" with genAI does not, and often fails badly. The press has reported many examples, some of them now in court, of apps that released private information because no security measures were implemented at all.
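(A hypothetical illustration, not one of the reported cases: a vibe-coded endpoint that ships with no security measure at all might look like the Python sketch below, serving anyone's private record to whoever asks for it.)

    # Hypothetical sketch of the failure mode described above: an endpoint
    # generated without any authentication or authorization check.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Stand-in data store; a real app would query a database.
    USERS = {
        1: {"name": "Alice", "email": "alice@example.com"},
        2: {"name": "Bob", "email": "bob@example.com"},
    }

    @app.route("/users/<int:user_id>")
    def get_user(user_id):
        # No check of who is asking: any visitor who guesses an id can read
        # another person's private record.
        return jsonify(USERS.get(user_id, {}))

One missing check like this is enough to leak every record in the table, and nothing in the generated code signals that the check was ever needed.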

Martin Escardo
@MartinEscardo@mathstodon.xyz

@johncarlosbaez@mathstodon.xyz @j_bertolotti@mathstodon.xyz @julesh@mathstodon.xyz

If you'll allow me, let me say a bit more about this (none of which I would regard as original thinking).

The problem with genAI is that the general public is given no specification of what it actually does, not even an informal one that could be verified empirically or mathematically.

What happens now is that you ask chatSomething a question and it gives an answer. Is the answer correct? Sometimes. Often not, and the frequency of (in)correctness depends on the subject (programming, cooking, molecular biology, mathematics, counselling, law, whatever).

Can this work in the future? Maybe. People are very creative, and may well make it work. After all, many of us believe that intelligence, like everything else in nature, is ultimately mechanical.

But right now we get an answer with no promise of correctness, not even in a very vague sense.

We get answers with no promise that they actually answer the question, and worse, they are often wrong even for questions whose answer we already know.

Never mind the questions for which nobody knows the answer (or which just the person asking doesn't know).

Vassil Nikolov | Васил Николов
@vnikolov@ieji.de

@MartinEscardo@mathstodon.xyz wrote:

"The analogy with [automobile] automatic transmission is not very good, as..."

Indeed.

I can't improvise a better analogy at once, but thinking of pilotless passenger planes seems a little more interesting.

@johncarlosbaez@mathstodon.xyz @j_bertolotti@mathstodon.xyz @julesh@mathstodon.xyz