@SonstHarmlos@sueden.social @denschub@mastodon.schub.social
First and foremost, I would have read the error instead of blindly feeding it into any kind of search engine.
The errors from my anecdote could, to my mind, be solved by an average programmer without resorting to an online search. (But even if I'm wrong in my estimation here, their first reflex was to just feed the error into another prompt instead of reading and understanding it.)
Results from a search engine might be wrong too, absolutely correct. In most cases they are wrong because the proposed solutions are outdated, or because the situation in which they occurred differs from mine.
Yet search results offer one thing that is integral to building problem-solving skills in computer science: context. I can see the age of the offered information, and I can read the context given by the person who had a similar (but still different enough) error. Additionally, the results aren't built by throwing dice at a dictionary (simplified).
@Aron@nerdculture.de @denschub@mastodon.schub.social When word processing software started becoming usable on personal computers, some people argued that this would make the writing worse, because you can change the written text at any time. Unlike a typewriter, which forced you to think before putting words on paper.
This is what some parts of the anti-AI sentiment remind me of.
@SonstHarmlos@sueden.social @denschub@mastodon.schub.social
Who creates the output when using:
- a typewriter?
- a word processor?
- a LLM with a prompt?
It's a very bad comparison, to my mind.
I agree that LLMs as used in ChatGPT and similar tools won't just disappear, but they most probably won't surpass a certain level, as that limit is inherent in how they work internally. One can try to mitigate the drawbacks, but that won't solve the underlying problems. I will happily be proven wrong on this one.
Claiming to be open and curious while at the same time calling it "anti-AI sentiment" is more than a bit confusing, as it suggests there is no critical assessment of new technologies. It's good to acknowledge the advantages, but turning a blind eye to the disadvantages and pointing fingers at existing, proven technologies by saying "XY isn't good with those either" suggests you might be riding a hype.
I think I've laid out all my points in the previous comments, so I won't answer again if I'd only be repeating myself.
@SonstHarmlos@sueden.social @Aron@nerdculture.de have you, goofball, ever considered why people are against GenAI in its current form? you know, someone like me, who's famously lazy and would absolutely love a tool that makes my coding life easier? maybe spend some time reflecting on that before posting your knee-jerk reactions onto the internet.
@denschub@mastodon.schub.social @Aron@nerdculture.de You can insult people all you want - this technology is here to stay (even if the current "bubble" deflates - we do a lot of online shopping today despite the early-2000s dot-com crash).
I prefer to be open and curious about the future. To get hands-on experience. To find out what works well and what isn't working.
@SonstHarmlos@sueden.social @Aron@nerdculture.de Yes - and that's exactly what I did. Multiple times now. First in 2023, when I blogged about it - but also again just four weeks ago, when I got myself a one-month license for GitHub Copilot and gave all the models a try in a project that isn't made of all-standard code. Unsurprisingly, it generated as much trash as ChatGPT 5 did in this thread, and I ended up just writing all the code myself, because that was more productive and got me results faster. That also isn't just my sentiment; it's a very common occurrence, even documented by science.
It's cool if it works for you. Nobody questions that LLMs can probably help generate a lot of the boilerplate code you have in some applications. I just don't work in an environment where I write a lot of boilerplate code, and even if I did, I'd optimize that away. I also happen to work in programming languages that make it much easier to spot even slightly-wrong code, because the compiler will yell at you. Oh, and I also work in an environment where I'm responsible for my stuff still working in a couple of months, and for making sure my code is debuggable and maintainable.
I specifically asked people like you not to reply, and to only comment if you have factual arguments against the points I'm making - and yet, here we are. The only person in this thread who is actually not "open and curious" is you, because you keep responding to anyone disagreeing with you with "nuh-uhhhh, my experience says something else", without acknowledging that other people's experiences are real, and casually conflating ML and GenAI along the way. If you're not actually interested in listening to people and having arguments with people who hold different views, why are you even here?
@denschub@mastodon.schub.social @Aron@nerdculture.de "[...] and also casually conflating ML and GenAI."
The past decades have shown that applications where AI is well-established tend not to be called "AI" any more - things like playing chess, speech recognition, or driver assistance systems in cars.
The same will happen to some (not all) uses of LLMs. Who would have predicted the current state of LLMs 5 or 10 years ago?
@denschub@mastodon.schub.social @Aron@nerdculture.de I had a closer look at this study last weekend, and the authors themselves warn against overgeneralizing. It covered a large, well-established codebase, with developers who were very familiar with it (I'd compare it very roughly to my situation at work).
https://sueden.social/@SonstHarmlos/114961327106663505
@SonstHarmlos@sueden.social @Aron@nerdculture.de it was you who brought in pattern recognition, speech recognition, and other things into this thread while directly acting like GenAI is just one of those. it's not. you should know that. so you're either willfully ignorant, or actively malicious. what's your goal here? show the world that you'll keep looping over the same unfalsifiable statements and anecdotal stories endlessly? is this somehow satisfying to you?
@denschub@mastodon.schub.social @Aron@nerdculture.de I think we should end this discussion now, but I also think we are not that far apart from each other.
I am very cautious about increasing the amount of AI assistance at work (not only because my employer has a very strong safety culture; it took my team more than a year to get the IntelliJ AI Assistant approved).
With my side projects, I am in a phase of AI excitement right now, but no one can tell how long that will last. I can't rule out that I'll think like you in a year.