@david_chisnall@infosec.exchange
@alex@feed.yopp.me @whitequark@mastodon.social @janl@narrativ.es
I read that a lot, but to me 'a lot of boilerplate' is a code smell. A tool that makes it easy to write a lot of boilerplate is a bad thing because it removes the incentive to eliminate the boilerplate.
If you need to duplicate a lot of code across different applications, you've introduced some fragility. It's hard to change the underlying APIs because everyone has copied and pasted the same thing (with or without an LLM). That's a problem for the long-term evolution of a set of APIs. An LLM here just makes it easy to ship things that make everyone downstream accumulate technical debt faster.
@alex@feed.yopp.me
@david_chisnall@infosec.exchange @whitequark@mastodon.social @janl@narrativ.es Depends on your definition of boilerplate, I guess.
Say I have an event-driven system and I want to isolate each event type in its own “container” (class, module, whatever). Consequently there will be a lot of structural similarity between the “containers”, because they all expose the same interface.
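A minimal TypeScript sketch of what I mean (the event types and handlers are made up, just to show the structural repetition):

```ts
// Every event type gets its own "container" exposing the same interface.
interface EventHandler<E> {
  handle(event: E): void;
}

type UserCreated = { kind: "user.created"; userId: string };
type UserDeleted = { kind: "user.deleted"; userId: string };

// The bodies differ, but the shape of every container is identical.
class UserCreatedHandler implements EventHandler<UserCreated> {
  handle(event: UserCreated): void {
    console.log(`provisioning account for ${event.userId}`);
  }
}

class UserDeletedHandler implements EventHandler<UserDeleted> {
  handle(event: UserDeleted): void {
    console.log(`archiving data for ${event.userId}`);
  }
}
```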
Some languages are verbose by design, like HTML. If you are building a form you’ll have to repeat the same things over and over again, and it’s… okay? UI/graphics code in general is very verbose because you have to set up a lot of things. And when you take the component approach you end up with the case above.
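Sticking with TypeScript, here is a sketch of that form case against the plain browser DOM (the field names are invented): every field needs the same create/assign/append dance, and that repetition is not a defect.

```ts
// Building a form by hand: the same setup repeats for every field.
const form = document.createElement("form");

const nameLabel = document.createElement("label");
nameLabel.htmlFor = "name";
nameLabel.textContent = "Name";
const nameInput = document.createElement("input");
nameInput.id = "name";
nameInput.name = "name";
nameInput.type = "text";
form.append(nameLabel, nameInput);

const emailLabel = document.createElement("label");
emailLabel.htmlFor = "email";
emailLabel.textContent = "Email";
const emailInput = document.createElement("input");
emailInput.id = "email";
emailInput.name = "email";
emailInput.type = "email";
form.append(emailLabel, emailInput);

document.body.append(form);
```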
LLM autocomplete predictions are just more context-aware than plain autocomplete (IntelliSense or whatever) and can save you some typing, because they can spit out a pre-filled method call, with all the arguments filled in with values from the current context.
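A concrete (hypothetical) example of the kind of completion I mean: with these values in scope, an LLM autocomplete will often propose the fully pre-filled call, where classic IntelliSense only knows the signature.

```ts
function sendInvoice(to: string, amountCents: number, due: Date): void {
  console.log(`invoice: ${amountCents} cents to ${to}, due ${due.toDateString()}`);
}

const customerEmail = "jane@example.com";
const totalCents = 4250;
const dueDate = new Date("2025-01-31");

// Typing "sendInv" here, an LLM completion can plausibly offer the whole
// line below, with the arguments picked from the surrounding context.
sendInvoice(customerEmail, totalCents, dueDate);
```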
No wonder a thingie made to auto-complete based on context does a decent job as context-aware auto-complete!