
Scott Francis
@darkuncle@infosec.exchange

To ask an LLM "why?" is to fundamentally misunderstand what LLMs are: contrary to the nomenclature, they are not intelligent, have no self-awareness and no ability to learn, adapt, or change, and they generate responses based on statistical likelihoods in their training data -- not on awareness, understanding, or cognition.
https://mastodon.social/@arstechnica/115017545924003756
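To make "statistical likelihoods in training data" concrete, here is a toy sketch: a bigram model that picks each next word purely by how often it followed the previous word in a small made-up corpus. Real LLMs use neural networks over enormous corpora rather than raw counts, but the generation loop is the same in spirit -- sample the next token from a probability distribution, with no understanding involved. The corpus and all names here are illustrative, not from any real system.

```python
import random
from collections import Counter, defaultdict

# Toy training text (hypothetical). An LLM's corpus is vastly larger,
# but both reduce to "which token tends to come next?"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Generate text one token at a time by sampling in proportion
# to observed frequency. No reasoning, just weighted dice rolls.
random.seed(0)
word = "the"
out = [word]
for _ in range(5):
    counts = follows[word]
    if not counts:  # dead end: no observed continuation
        break
    word = random.choices(list(counts), weights=counts.values())[0]
    out.append(word)

print(" ".join(out))
```

Asking this loop "why did you say that?" has no meaningful answer beyond "those counts made that word likely" -- which is the point of the post above.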


Rob O
@nerdpr0f@infosec.exchange

@darkuncle@infosec.exchange I love to highlight this by asking the question "Can an equation written on paper be intelligent?"

It suddenly makes a lot more sense to people. A big part of the problem here is that many people are really ready to suspend disbelief about what might be possible using computers, probably because the internals are a black box.