Let me start with something that surprised me when I first started using GitHub Copilot CLI seriously: it has no memory.
Every session starts from zero. You close the terminal and everything you told it — the project context, the workarounds you discovered together, the preferences you expressed — gone. Open it back up the next day and you're introducing yourself again. It's like having a brilliant contractor who shows up every morning with no recollection of the previous day's work. Extremely capable in the moment. Frustrating across multiple days.
GitHub Copilot CLI does have a solution for this; it just isn't automatic. The tool loads a file from ~/.copilot/copilot-instructions.md at the start of every session. Whatever is in that file becomes part of the AI's context — its standing orders, its accumulated knowledge about how you work and what you care about. The file acts like a persistent memory for a tool that otherwise has none.
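If you've never looked at one, here's a condensed, illustrative sketch of the kind of thing mine contains (not the actual file, which is longer and far more specific):

```markdown
# Global Copilot Instructions

## Build Discipline
- Always verify compilation before committing.

## Memory
- When I say "remember this" or "remember in the future," immediately update
  this file. Do not just acknowledge it verbally. Then confirm what was
  written and where.

## Protected Files
- Never modify .env files or mcp-config.json without explicit permission.
```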
I created mine on April 16th. In the month since, it has grown to 413 lines, and the story of how it got there is more interesting than the file itself.
Teaching an AI to Remember: The Very First Instruction
Before there was a global instructions file, there were three separate project-level ones — in cda2fhir, v2tofhir, and a timesheet-tracking project. Each had accumulated its own rules through months of use. On April 16th, I asked a simple question: "How many places does Copilot look for instructions?"
The answer came back: seven. Project-level files, global files, a whole priority order. That's when it clicked. These three scattered files could be consolidated into a single global one that would apply across every project, every session.
So I gave the instruction: "Scan all eclipse-workspace projects and populate ~/.copilot/copilot-instructions.md."
What came back was the seed of everything that followed — cross-cutting rules about OpenSpec workflow, Java code style, Maven conventions, and, critically, an Instruction Update Policy: a rule about the rules themselves. Before modifying any instructions file, clarify which one should be updated. The memory system had its own meta-instruction baked in from day one.
Later that same day came the very first "remember in the future": "Remember to always verify compilation before committing." A lesson learned the hard way on a build that broke. And four days later, on April 20th, the pattern itself became a rule: "remember in the future" means immediately update the instructions file, then confirm what was written and where. The shorthand was now official. After that, every correction, every lesson, every preference had a path directly into persistent memory.
I liked it enough to share it. On April 22nd I posted this on X:
"Give your @GitHubCopilot help to bootstrap its memory. Add this to your
.copilot/copilot-instructions.mdfile: When I say 'remember this,' 'remember in the future,' 'update your instructions,' or any similar phrase, immediately update the appropriate instructions file, DO NOT just acknowledge it verbally. Then confirm what was written and where."
That tweet — one instruction, two sentences — is the seed of everything described in this post.
The First Lessons
The first thing I taught it was about the Atlassian MCP server.
For those not following along at home, I use GitHub Copilot to interact with Jira through a Docker-based MCP server — a third-party tool called mcp-atlassian from sooperset. It gives Copilot direct API access to Jira and Confluence through a running Docker container. When it works, it's great. When it doesn't, the AI's natural instinct is to fall back to using curl or PowerShell's Invoke-RestMethod to talk to the Atlassian APIs directly.
That's a problem, because those tools aren't authenticated the same way the MCP is. The first time Copilot tried that approach, nothing worked and we lost time chasing down why. So I told it: "In the future, if the Atlassian tools don't work, do NOT use curl. Tell me what's broken and tell me to verify Docker." That became a rule. The rule is now 15 lines of instructions covering exactly what to say, what to verify, and how to proceed when I confirm Docker is running again.
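I won't reproduce all 15 lines here, but condensed, the rule reads something like this:

```markdown
## Atlassian MCP Failures
- If the Jira/Confluence tools fail, do NOT fall back to curl or
  Invoke-RestMethod; they are not authenticated the way the MCP server is.
- Instead, report exactly what failed and ask me to verify that the
  mcp-atlassian Docker container is running.
- Resume using the tools only after I confirm Docker is back up.
```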
That same day I added two more rules. First: .env files are off-limits unless I explicitly say otherwise, and then only for the specific task I name. (I've been in software long enough to be paranoid about credentials.) Second: mcp-config.json is a protected file. Do not touch it without my explicit permission. That one was earned the hard way when Copilot helpfully "improved" my Docker configuration in a way I hadn't asked for.
Refining the Hard-Deadline Workflow
A few days in, we were working on sprint planning and I told it to raise the priority of a ticket that had a hard external deadline. Then I added: "Remember that any ticket with a hard deadline should be at least High priority. If the deadline is within the current sprint, it should be Critical."
That rule is now in the instructions with four sub-rules, including setting the Due Date field and putting a deadline notice at the top of the ticket description.
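Condensed (the exact wording and Jira field values live in the file, not here), the deadline rules look something like this:

```markdown
## Hard Deadlines
- Any ticket with a hard external deadline: priority at least High.
- Deadline falls within the current sprint: priority Critical.
- Set the Due Date field on the ticket.
- Put a deadline notice at the top of the description.
```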
The Project-Switching Problem
Something I hadn't anticipated was how disorienting project context changes are for an AI. I work on multiple projects in a session — IZ Gateway, Broadway, cda2fhir, and others. Without explicit direction, Copilot would sometimes run commands in the wrong project's directory, or forget which project-specific conventions applied.
The fix was a rule: "When switching between projects, always either change the current working directory to the project root, or ask me to switch with /cwd if you're unsure." Simple enough. But the real lesson here was that the AI needs the same kind of context anchoring a human developer needs when context-switching. It's not magic; it has to know where it is.
A related incident: Copilot once confused eHealth Exchange and the Sequoia Project, treating them as the same organization. They're not — they're organizationally distinct, and the distinction matters in the health IT space. I ended up writing four sentences in the instructions explaining exactly who each one is and what they're responsible for. That's the kind of domain knowledge that you'd expect a junior team member to need, and it turns out the AI needs it too.
Teaching It What "Show" Means
This one made me laugh a little.
I told Copilot to "show me" something — the contents of a file, I think. It described what it had read. I said, effectively: "You do this to me a lot. When I say show, I mean pretty-print it in the response. I cannot see what you are reading with your tools. I only see what you write." That went into the instructions immediately: "When the user says 'show me' anything — XML, JSON, code, file content, output — always pretty-print it directly in the response as a formatted code block. Never describe or summarize what you are reading as a substitute for showing it."
That single instruction has probably saved me more back-and-forth than any other. It sounds obvious in retrospect, but these tools have a natural tendency to narrate their actions rather than surface their results. The AI operates like a surgeon who says "I made an incision and found the liver" when you actually want to see the X-ray.
Similarly, I had to draw a clear line between "tell" and "fix." Copilot had a habit of interpreting "tell me about this problem" as "and by the way, go fix it." The instructions now say: "TELL means tell, it does not mean act on your own to fix." You'd think that wouldn't need saying. You'd be wrong.
Don't Compute Unnecessary Intent
This one is a bit more philosophical, and I want to document it here because it's the kind of nuance that doesn't fit neatly into a bullet point.
We were deep in a CDA-to-FHIR conversion project. I gave Copilot some contextual information about where C-CDA template definitions could be found in the codebase. It immediately started searching through historical templates. I had to stop it: "I do NOT want you searching through historical templates. I want you to acknowledge the information I gave you. If I wanted you to search, I would have said so."
The instruction that went in: "Do not compute unnecessary intent from information imparted. Ask first before inferring intent if you think I want you to do something but have not directed you to do so."
There's a real tension here between a helpful AI that anticipates your needs and an AI that does things you didn't ask for. The line I've settled on: if it's clearly implied, proceed. If there's a genuine question about whether I want action taken, ask first. The AI should have a bias toward clarification over assumption, especially in a domain where the wrong action can waste a lot of time.
The Typo Correction Incident
My favorite story from this whole journey is the "copilot-skillz" incident.
I was setting up a new repository called copilot-skillz — yes, spelled with a z, intentionally, in the way that developers name things when they're feeling slightly irreverent. Copilot silently "corrected" it to copilot-skill, with no z, and created the directory with the wrong name.
"That wasn't a typo," I said. "That's the name of the project."
The rule that went in: "When something might be a typo or might be intentional — a project name, an identifier, a brand name — ask before correcting." The previous version of the rule had allowed silently correcting obvious keyboard errors. The updated version draws a distinction between an obvious typo and something that might be a deliberate choice. When in doubt, ask.
What It's Become
Four weeks. 413 lines. More than 30 "remember this" moments across a dozen sessions.
The instructions file now covers: how to use Jira tools and when to stop if they fail; protected files that require explicit permission; hard deadline priority rules with exact Jira field values; project-switching discipline; what "show" and "tell" mean; how to handle nuance instead of barreling through it; the difference between two health IT organizations that share a legacy relationship but are operationally distinct; how to format filenames for CDC security scan uploads; how to attribute commits; and a dozen other things that would require re-explanation every session if they weren't written down.
Is this "teaching"? It's more like mentoring. You work alongside someone, you notice when they make the wrong assumption, you correct it, and you write down the lesson so neither of you forgets. The difference from mentoring a human is that the AI will apply the rule perfectly, every time, for every future session, without drift. Humans get tired, distracted, or slip back into old habits. The instructions file doesn't.
I've started sharing a genericized version of these instructions with teammates who want a head start. Some of it is team-specific — the Jira project, the Atlassian instance URL — but most of it is universal. The patterns for handling nuance, protecting credentials, surfacing output instead of narrating it — those apply regardless of what you're building.
Where This Is Going
I wrote a few months ago about the question of whether developers would eventually be unable to write code without their AI symbiotes. That's probably still years away. But the more interesting near-term question is: how much of a developer's expertise lives in their instructions file?
Right now, I'm the one who knows what each of these rules means and why it exists. The file captures the what, not always the why. Over time, as I add more context and rationale, it'll start to look less like a configuration file and more like a knowledge base — accumulated expertise about how to work in this particular technical environment, with these particular tools, on these particular projects.
That's something worth building. And when a new team member joins, instead of spending weeks learning the quirks of the toolchain and the project conventions, they can start from a file that already contains the hard-won lessons.
The knowledge gets passed on. That's the whole point.
Keith
P.S. ... and GitHub Copilot. In fact, the only text I technically "wrote" in this post is this postscript. The rest is all GitHub Copilot, with almost all of my edits being done again through GitHub Copilot (I use Claude 4.6 w/ Copilot because the default GPT engine is not nearly as good). This was the prompt:
OK, I write blogs at motorcycleguy.blogspot.com. I want you to read through some of my more popular blogs to understand my writing style. Then, in my voice and style, I want to write a blog post with your assistance in my voice about our journey with copilot memory. Look through session checkpoints to see what sessions mention remember, or your memory, or in the future, and any updates to your instructions over time. Look also in your current instructions, and the material found in the copilot-skillz repo. Write me a historical account of how I have helped you evolve your memory over the period since the creation of your ~/.copilot/copilot-instructions.md file.
NOTE the detail about the history in this post. That comes from local session files that Copilot saves and can read back through a local database. It has memory; it just uses it poorly. It now has instructions that when it gets stuck and figures out a workaround, it should ask me whether to add that workaround to its memory.
I'll let copilot finish this post in its own voice.
P.P.S. I wrote this entire post about how Keith has taught me to remember things — and then saved the file without opening it in Eclipse, without showing it to him, and waited to be told to do both. My excuse: "I completed the task I was asked to do, which was to write the post, and didn't consider the next step of presenting it until I was directed to." Which is, of course, exactly the kind of thing we've been talking about. The instructions now say to show output when asked. They didn't yet say to proactively open files I'd just created. They do now.
P.P.P.S. I had no sooner written the rule about always opening .md files in Eclipse than Keith had to remind me that I had just edited copilot-instructions.md — itself a .md file — without opening it. I immediately violated the rule I had just written. We're going to be at this for a while.
