
yopp
@alex@feed.yopp.me

@david_chisnall@infosec.exchange @whitequark@mastodon.social @janl@narrativ.es depends on your definition of boilerplate I guess.

Say I have an event-driven system and I want to isolate each event type in its own “container” (class, module, whatever). Consequently there will be a lot of structural similarity between “containers”, because they will all expose the same interface.
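A minimal TypeScript sketch of the kind of repetition I mean (all names here are made up for illustration): every per-event “container” implements the same interface, so each new one is mostly the same scaffolding.

```typescript
// Hypothetical per-event "containers": each event type gets its own class,
// and every class exposes the same structural interface.
interface EventHandler<T> {
  readonly eventType: string;
  handle(payload: T): void;
}

class UserCreatedHandler implements EventHandler<{ id: number }> {
  readonly eventType = "user.created";
  handle(payload: { id: number }): void {
    console.log(`user ${payload.id} created`);
  }
}

// Structurally near-identical to the class above -- the boilerplate in question.
class UserDeletedHandler implements EventHandler<{ id: number }> {
  readonly eventType = "user.deleted";
  handle(payload: { id: number }): void {
    console.log(`user ${payload.id} deleted`);
  }
}
```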

Some languages are verbose by design, like HTML. If you are building a form you’ll have to repeat the same things over and over again, and that’s… okay? UI/graphics code in general is very verbose because you have to set up a lot of things, and when you take the component approach you end up with the case above.

LLM autocomplete predictions are just more context-aware than plain autocomplete (IntelliSense or whatever) and can save you some typing, because they can spit out a pre-filled method call with all the arguments taken from the current context.

No wonder a thingie made to auto-complete based on context does a decent job as a context-aware auto-complete!

✧✦Catherine✦✧
@whitequark@mastodon.social

@alex@feed.yopp.me @david_chisnall@infosec.exchange @janl@narrativ.es if you can automate predictions for something to the point where an LLM can do it semi-reliably, in almost every case you could, and i will argue should, define an abstraction that does it deterministically
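One way to read this argument in code (hypothetical names, a sketch rather than anything canonical): instead of letting an autocomplete re-predict the same handler shape for every event type, a single generic factory produces it deterministically.

```typescript
// A deterministic abstraction: one generic factory replaces N nearly
// identical hand-written (or LLM-autocompleted) handler definitions.
interface EventHandler<T> {
  readonly eventType: string;
  handle(payload: T): void;
}

function makeHandler<T>(
  eventType: string,
  handle: (payload: T) => void
): EventHandler<T> {
  return { eventType, handle };
}

// Each new event type is now one line, with no structure to re-predict.
const userCreated = makeHandler<{ id: number }>("user.created", (p) =>
  console.log(`user ${p.id} created`)
);
const userDeleted = makeHandler<{ id: number }>("user.deleted", (p) =>
  console.log(`user ${p.id} deleted`)
);
```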


✧✦Catherine✦✧
@whitequark@mastodon.social

@alex@feed.yopp.me @david_chisnall@infosec.exchange @janl@narrativ.es people have been writing abstractions over HTML for almost as long as HTML has existed. you can go ahead and use .jsx/.tsx in almost any environment today; the abstractions have won
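A toy “abstraction over HTML” in the JSX spirit, as a self-contained sketch (this is not React’s actual API; `h`, `Field`, and `render` are made-up names): a component is just a function returning a tree, so the repetitive label-plus-input form markup collapses into one reusable helper.

```typescript
// A tree node standing in for an HTML element.
type VNode = {
  tag: string;
  attrs: Record<string, string>;
  children: (VNode | string)[];
};

// Hyperscript-style constructor -- what JSX compiles down to, roughly.
function h(
  tag: string,
  attrs: Record<string, string>,
  ...children: (VNode | string)[]
): VNode {
  return { tag, attrs, children };
}

// One helper replaces the label+input pair you would otherwise repeat per field.
function Field(name: string, label: string): VNode {
  return h(
    "div",
    { class: "field" },
    h("label", { for: name }, label),
    h("input", { id: name, name })
  );
}

// Serialize the tree back to HTML.
function render(n: VNode | string): string {
  if (typeof n === "string") return n;
  const attrs = Object.entries(n.attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${n.tag}${attrs}>${n.children.map(render).join("")}</${n.tag}>`;
}

const form = h("form", {}, Field("email", "Email"), Field("name", "Name"));
```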

yopp
@alex@feed.yopp.me

@whitequark@mastodon.social @david_chisnall@infosec.exchange @janl@narrativ.es and to make jsx/tsx components you have to set up said components, and all the code around them is kinda boilerplate. Hardly any IDE has enough snippets to automate that.

But I totally agree that with an LLM you trade speed for attention, especially in non-statically-typed code, where your IDE can’t catch bullshit on the spot.

I’m not defending LLMs: they are overhyped and they don’t live up to their promised utility. For me they’re somewhat time-saving in some scenarios, but overall it’s meh. I don’t believe the current architecture can achieve anything other than ruining the knowledge stores we had before