Brutkey

Aron
@Aron@nerdculture.de

@SonstHarmlos@sueden.social @denschub@mastodon.schub.social

Disclaimer: I'm not judging you for testing this out in private (at least not more than I judge anyone using "AI" tech).

Two questions though:
1. Would you do this in your day job too?
2. Would you consult unreliable¹ teachers too if you want to learn/explore something?

As you are a Software Engineer too, I'm flabbergasted by how easily people in this line of work give up their learning experiences and their control for "quick" results, when the most fundamentally important thing in their job is to learn new stuff.

¹ in the sense that they would rather tell you self-invented alternative facts than ever say "I don't know"

SonstHarmlos
@SonstHarmlos@sueden.social

@Aron@nerdculture.de @denschub@mastodon.schub.social
1.) No, I wouldn't. It's a different story, and the AI also failed miserably there with some things (like answering questions about the code base, which is >100,000 LoC and has grown over more than 20 years)
2.) Are you aware that we're already using many things in our everyday lives that were considered "AI" decades ago, and which also do "probabilistic" things like pattern recognition, speech recognition etc.? My Garmin miscounted my swim lanes today, but I wouldn't consider this "unsafe"


Aron
@Aron@nerdculture.de

@SonstHarmlos@sueden.social @denschub@mastodon.schub.social

2) Yes, I am aware. But, to my mind, none of these existing "AI" implementations lead to people actively learning false things (in larger amounts), in addition to unlearning skills they might have had (e.g. grammar or writing skills when using speech recognition).

It's anecdotal evidence, but I've personally seen examples of people losing the skill to actually process information and draw conclusions from it - namely reading and interpreting programming errors and finding a solution, or even an approach, for them - after heavily using LLMs in their daily job. Similarly, people lost the skill to write a simple request to the team of a connected system.

I'd argue that your Garmin doesn't actually count your swim lanes with anything related to AI. Most probably GPS or motion tracking is used, together with recognizing your changes of direction or speed.

So personally, I'd prefer not to use unreliable teachers (human or machine) to do/learn stuff.

SonstHarmlos
@SonstHarmlos@sueden.social

@Aron@nerdculture.de @denschub@mastodon.schub.social "It's anecdotal evidence, but I've personally seen examples of people losing the skill to actually process information and draw conclusions from it - namely reading and interpreting programming errors and finding a solution, or even an approach, for them"

Before AI, wouldn't you just have pasted the error message into a search engine, which gave you wrong answers just as often? Or sifted through StackOverflow half-knowledge?

Aron
@Aron@nerdculture.de

@SonstHarmlos@sueden.social @denschub@mastodon.schub.social

First and foremost, I would have read the error instead of blindly feeding it into any kind of search engine.

The errors from my anecdote could, to my mind, be solved by an average programmer without an online search (but even if my estimation is wrong here, their first reflex was to just feed the error into another prompt instead of reading and understanding it).

Results from a search engine might be wrong too, absolutely correct. When they are wrong, it's mostly because the proposed solutions are outdated or the situation in which they occurred differs from mine.

Yet search results contain one thing that is integral for building problem-solving skills in computer science: context. I can see the age of the offered information, and I can read the context given by the person who had a similar (but still different enough) error. Additionally, they don't contain results built by throwing dice at a dictionary (simplified).

SonstHarmlos
@SonstHarmlos@sueden.social

@Aron@nerdculture.de @denschub@mastodon.schub.social When word processing software started becoming usable on personal computers, some people argued that this would make writing worse, because you could change the written text at any time - unlike a typewriter, which forced you to think before putting words on paper.

This is what some parts of the anti-AI sentiment remind me of.

Aron
@Aron@nerdculture.de

@SonstHarmlos@sueden.social @denschub@mastodon.schub.social

Who creates the output when using:
- a typewriter?
- a word processor?
- an LLM with a prompt?
It's a very bad comparison, to my mind.

I agree that LLMs as used in ChatGPT and similar products won't just disappear, but they most probably won't surpass a certain level, as that limit is inherent in how they work internally. One can try to mitigate the drawbacks, but that won't solve the underlying problems. I will happily be proven wrong on this one.

Claiming to be open and curious while at the same time calling it "anti-AI sentiment" is more than just a bit confusing, as it suggests there is no critical assessment of new technologies. It's good to acknowledge the advantages, but turning a blind eye to the disadvantages and pointing fingers at existing and proven technologies by saying "those aren't good at XY either" suggests that you might be falling for hype.

I think I've laid out all my points in the previous comments, so I won't answer if it would mean repeating myself.

Dennis Schubert
@denschub@mastodon.schub.social

@SonstHarmlos@sueden.social @Aron@nerdculture.de have you, goofball, ever considered why people are against GenAI in its current form? you know, someone like me, who's famously lazy and would absolutely love a tool that makes my coding life easier? maybe spend some time reflecting on that before posting your knee-jerk reactions onto the internet.

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de You can insult people all you want - this technology is here to stay (even if the current "bubble" deflates - we do a lot of online shopping today despite the early-2000s dot-com crash)

I prefer to be open and curious about the future. To get hands-on experience. To find out what works well and what doesn't.

Dennis Schubert
@denschub@mastodon.schub.social

@SonstHarmlos@sueden.social @Aron@nerdculture.de Yes - and that's exactly what I did. Multiple times now. The first time in 2023, when I blogged about it - but also again just four weeks ago, when I got myself a one-month license for GitHub Copilot and gave all the models a try in a project that isn't made of all-standard code. Unsurprisingly, it generated as much trash as ChatGPT 5 did in this thread, and I ended up just writing all the code myself, because that was more productive and got me results faster. That also isn't just my sentiment; it's a very common occurrence, even documented by science.

It's cool if it works for you. Nobody questions that LLMs can probably help generate a lot of the boilerplate code you have in some applications. I just don't work in an environment where I write a lot of boilerplate code, and even if I did, I'd optimize that away. I also happen to work in programming languages that make it much easier to spot even slightly-wrong code, because the compiler will yell at you. Oh, and I also work in an environment where I have the responsibility to keep my stuff working in a couple of months, and where I have the responsibility to make sure my code is debuggable and maintainable.

I specifically asked people like you not to reply, and to only comment if you have factual arguments against the points I'm making - and yet, here we are. The only person in this thread who is actually not "open and curious" is you, because you keep responding to anyone disagreeing with you with "nuh-uhhhh, my experience says something else", without acknowledging that other people's experiences are real, and also casually conflating ML and GenAI. If you're not actually interested in listening to people and having arguments with people who hold different views, why are you even here?

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de "[...] and also casually conflating ML and GenAI."

The past decades have shown that applications where AI is well-established tend to no longer be called "AI" - things like playing chess, speech recognition, or driver assistance systems in cars.

The same will happen to some (not all) usages of LLMs. Who would have predicted the current state of LLMs 5 or 10 years ago?

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de I had a closer look at that study last weekend, and the authors themselves warn against overgeneralizing. It covered a large, well-established codebase, with developers very familiar with it (I'd compare it very roughly to the situation I have at work).

https://sueden.social/@SonstHarmlos/114961327106663505

Dennis Schubert
@denschub@mastodon.schub.social

@SonstHarmlos@sueden.social @Aron@nerdculture.de it was you who brought pattern recognition, speech recognition, and other things into this thread, while acting like GenAI is just one of those. it's not. you should know that. so you're either willfully ignorant, or actively malicious. what's your goal here? show the world that you'll keep looping over the same unfalsifiable statements and anecdotal stories endlessly? is this somehow satisfying to you?

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de I think we should end this discussion now, but I also think we are not that far apart from each other.

I am very cautious about increasing the amount of AI assistance at work (not only because my employer has a very strong safety culture; it took my team more than a year to get the IntelliJ AI Assistant approved).

With my side projects, I am in a phase of AI excitement right now, but no one can tell how long that will last. I can't rule out that I'll be thinking like you in a year.
