I had an interesting experience over the past 48 hours with LLMs.
I have a little side project I'm working on, and it's a monorepo. I wanted to be able to put a .env file at the root and then have each service reference it.
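For context, the setup I wanted looks something like this sketch: each service walks up from its own directory to the repo root and loads the single shared .env. The layout and function names here are my own illustration, not the actual project code, and the parser is deliberately minimal (plain KEY=VALUE lines only).

```python
# Sketch only: a service finds and loads the ONE .env at the repo root,
# instead of keeping a per-service copy. Minimal parser, stdlib only.
import os
from pathlib import Path

def find_root_env(start):
    """Walk upward from `start` and return the first .env found, or None."""
    for directory in [start, *start.parents]:
        candidate = directory / ".env"
        if candidate.is_file():
            return candidate
    return None

def load_root_env(start):
    """Parse KEY=VALUE lines from the root .env into os.environ.

    Returns the parsed mapping. Existing environment variables win
    (setdefault), so real env config still overrides the file.
    """
    loaded = {}
    env_file = find_root_env(start)
    if env_file is None:
        return loaded
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip()
        os.environ.setdefault(key.strip(), value.strip())
    return loaded
```

In a real project you'd likely just use python-dotenv or your runtime's equivalent; the point is that there is exactly one file, at the root.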
Claude code repeatedly choked on this constraint.
How it choked is the interesting part.
1/x
Claude kept trying to make a separate .env file for each service. No matter what I did.
So what did I do?
I'm using claude-mem, beads, and several other skills to attempt to improve alignment and long term memory.
I had it make a plan. I had it store that plan. I had it make tasks in beads about that plan.
I even eventually started writing ADRs to describe the decisions I made.
Didn't matter what I did. It kept trying to make a separate .env file for each service.
2/x
Finally, I had to reach down and just tell it: copy the file from root to each service. Is it elegant? No. Did it stop the errors that kept creeping in from the other random things it tried? Yes.
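The blunt fix can be sketched in a few lines. This assumes services live under a `services/` directory at the repo root, which is my assumption for illustration, not the actual project layout:

```python
# Sketch of the blunt fix: copy the canonical root .env into every
# service directory, so each service sees identical config.
import shutil
from pathlib import Path

def sync_env(repo_root):
    """Copy repo_root/.env into each directory under repo_root/services.

    Returns the list of copies written. Assumes a services/ layout;
    adjust the glob for a different monorepo structure.
    """
    source = repo_root / ".env"
    copies = []
    for service_dir in sorted((repo_root / "services").iterdir()):
        if service_dir.is_dir():
            target = service_dir / ".env"
            shutil.copy2(source, target)  # copy2 preserves mtime/permissions
            copies.append(target)
    return copies
```

Inelegant, sure: the copies can drift from the root file if you forget to re-run it. But it's one mechanical step with no clever loading logic for the model to "improve."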
It proposed all kinds of elaborate solutions: lazy loading, multiple .env file references, etc. I accepted them, and then it would drift back to one .env file per service.
3/x
To be clear, I didn't spend 48 hours in a row trying to fix this, nor even 1 hour.
It's a behavior it kept drifting back to as a default over time.
It viscerally highlighted things I already knew:
1. LLMs really can't remember
2. LLMs cannot "think"
3. LLMs cannot "know"
4. LLMs cannot learn because of 2 & 3
At best, they simulate the appearance of all of these.
It still remains on you.
If you don't know where you're going, you will get there.