@mcdanlj@social.makerforums.info
RE: https://mastodon.social/@joeycastillo/115679311941728952
"The burden of communication is on the speaker/writer" is something that I absorbed from a friend and have taught for years. I have felt the asymmetry in communications with LLMs as an intermediary but hadn't thought explicitly about how it undermines this social contract.
Earlier this year, I wrote something and asked for feedback. A co-worker fed it to ChatGPT and handed me what it spat out.
He invested no effort of his own; he didn't take the time to tell me anything he actually disliked about what I wrote.
Given that we have an enterprise license to ChatGPT at work, that was a passive-aggressive diss, equivalent to sending someone an LMGTFY link.
ChatGPT added subtle lies that took work on my part to identify and articulate.
So the burden of communicating about my communication fell on the person giving feedback; they completely failed that test, broke the social contract, and were then surprised that I was unhappy. How could I possibly object to having a confabulation machine rewrite what I wrote?
(I'm not an AI hater in general. But I can be dismayed when people are treated badly, including when it's done with an LLM.)
@joeycastillo@mastodon.social
“LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it. If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas.” https://rfd.shared.oxide.computer/rfd/0576