SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de You can insult people all you want - this technology is here to stay, even if the current "bubble" deflates (we do a lot of online shopping today despite the early-2000s dot-com crash).

I prefer to be open and curious about the future. To get hands-on experience. To find out what works well and what doesn't.


Dennis Schubert
@denschub@mastodon.schub.social

@SonstHarmlos@sueden.social @Aron@nerdculture.de Yes - and that's exactly what I did, multiple times now. The first time was in 2023, when I blogged about it - and again just four weeks ago, when I got myself a one-month GitHub Copilot license and gave all the models a try in a project that isn't made of all-standard code. Unsurprisingly, it generated as much trash as ChatGPT 5 did in this thread, and I ended up writing all the code myself, because that was more productive and got me results faster. That isn't just my sentiment, either; it's a very common occurrence, even documented by science.

It's cool if it works for you. Nobody questions that LLMs can probably help generate a lot of the boilerplate code you have in some applications. I just don't work in an environment where I write a lot of boilerplate code, and even if I did, I'd optimize that away. I also happen to work in programming languages that make it much easier to spot even slightly-wrong code, because the compiler will yell at you. Oh, and I also work in an environment where I'm responsible for my stuff still working in a couple of months, and where I'm responsible for making sure my code is debuggable and maintainable.
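A minimal sketch of what "the compiler will yell at you" means here - the thread doesn't name a language, so Rust is purely an assumed stand-in, and the function is a made-up example:

```rust
// Sums a slice of prices. This version compiles and runs fine.
fn total(prices: &[f64]) -> f64 {
    let mut sum = 0.0;
    for p in prices {
        sum += p; // f64 += &f64 is allowed
    }
    sum
}

fn main() {
    let prices = [9.99, 4.50, 12.00];
    println!("total: {:.2}", total(&prices));

    // A "slightly wrong" edit - say, `let mut sum = 0;` above, so the
    // accumulator is inferred as an integer - would be rejected at compile
    // time (error[E0277]: cannot add `&f64` to an integer) instead of
    // compiling and silently producing a wrong result.
}
```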

I specifically asked people like you not to reply, and to only comment if you have factual arguments against the points I'm making - and yet, here we are. The only person in this thread who is actually not "open and curious" is you, because you keep responding to anyone disagreeing with you with "nuh-uhhhh, my experience says something else", without acknowledging that other people's experiences are real, and you casually conflate ML and GenAI on top of that. If you're not actually interested in listening to people and engaging with different views, why are you even here?

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de "[...] and also casually conflating ML and GenAI."

The past decades have shown that applications where AI is well-established tend to no longer be called "AI" - things like playing chess, speech recognition, or driver assistance systems in cars.

The same will happen to some (not all) uses of LLMs. Who would have predicted the current state of LLMs 5 or 10 years ago?

SonstHarmlos
@SonstHarmlos@sueden.social

@denschub@mastodon.schub.social @Aron@nerdculture.de I had a closer look at this study last weekend, and the authors themselves warn against overgeneralizing. It covered a large, well-established codebase, with developers who were very familiar with it (I'd compare it, very roughly, to the situation I have at work).

https://sueden.social/@SonstHarmlos/114961327106663505