Brutkey

Dennis Schubert
@denschub@mastodon.schub.social

@SonstHarmlos@sueden.social @Aron@nerdculture.de Yes - and that's exactly what I did. Multiple times now. First time in 2023, when I blogged about it - but also again just four weeks ago, when I got myself a one-month license for GitHub Copilot and gave all the models a try in a project that isn't made out of all-standard code. Unsurprisingly, it generated as much trash as ChatGPT 5 did in this thread, and I ended up just writing all the code myself, because that was more productive and got me results faster. That also isn't just my sentiment; it's a very common occurrence, even documented by science.

It's cool if it works for you. Nobody questions that LLMs can probably help generate a lot of the boilerplate code you have in some applications. I just don't work in an environment where I write a lot of boilerplate code, and even if I did, I'd optimize that away. I also happen to work in programming languages that make it much easier to spot even slightly-wrong code, because the compiler will yell at you. Oh, and I also work in an environment where I have the responsibility to keep my stuff working a couple of months from now, and to make sure my code is debuggable and maintainable.

I specifically asked people like you not to reply, and to only comment if you have factual arguments against the points I'm making - and yet, here we are. The only person in this thread who is actually not "open and curious" is you, because you keep responding to anyone disagreeing with you with "nuh-uhhhh, my experience says something else", without acknowledging that other people's experiences are real, and also casually conflating ML and GenAI. If you're not actually interested in listening to people and having arguments with people who hold different views, why are you even here?

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de "[...] and also casually conflating ML and GenAI."

The past decades have shown that applications where AI is well-established tend to no longer be called "AI" - things like playing chess, speech recognition, or driver assistance systems in cars.

The same will happen to some (not all) usages of LLMs. Who would have predicted the current state of LLMs 5 or 10 years ago?

Dennis Schubert
@denschub@mastodon.schub.social

@SonstHarmlos@sueden.social @Aron@nerdculture.de It was you who brought pattern recognition, speech recognition, and other things into this thread, while acting like GenAI is just one of those. It's not, and you should know that. So you're either willfully ignorant or actively malicious. What's your goal here? Showing the world that you'll keep looping over the same unfalsifiable statements and anecdotal stories endlessly? Is this somehow satisfying to you?

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de I think we should end this discussion now, but I also think we are not that far apart from each other.

I am very cautious about increasing the amount of AI assistance at work (not only because my employer has a very strong safety culture - it took my team more than a year just to get the IntelliJ AI Assistant approved).

With my side projects, I am in a phase of AI excitement right now, but no one can tell how long that will last. I can't rule out that I'll think like you in a year.