If you’re a developer, you’re drowning in AI productivity content. Claude Code, Cursor, Copilot: there’s a new YouTube tutorial every hour promising to 10x your coding output. But if you’re a people leader or knowledge worker? You get… prompt templates. Maybe a ChatGPT course on “effective communication”. Endless LinkedIn drivel (from someone trying to sell you a course, no doubt) on how to use AI to earn six figures a month.
Here’s what’s been bothering me: we’re building incredible AI tools for writing code, but we’re only scratching the surface for the people who spend their days navigating complex organisational systems, synthesising information from dozens of sources, and making decisions that rely on months of accumulated context.
The tooling gap is real, or at least it feels that way to me (if I’m missing any good sources, please share!).
The Hidden Complexity of Knowledge Work
Consider something I’m sure most engineering leaders do fairly regularly: reporting on delivery. On the surface it’s straightforward. I pull updates (comments mostly) from Jira, summarise progress, identify blockers, and highlight wins or risks. And AI can already help with this simple summarise-and-write use case.
Except that’s not what actually creates much value. The magic happens when I can connect:
- That innocuous comment in ticket ABCD-456 about “waiting on infrastructure” to the GPU shortage mentioned in last week’s planning session
- The pattern across three different teams all building similar workarounds (because they don’t know about each other’s solutions)
- How a delay in one team’s authentication service will cascade into the next half’s roadmap for two other teams
- The fact that the same blocker was actually resolved six months ago by a different team, but that knowledge walked out the door when someone left
This isn’t about automating away the work – it’s about augmenting my ability to see patterns and connections across time and organisational boundaries. But every time I open a new chat with Claude or ChatGPT, I’m starting from zero. Again.
The Groundhog Day Problem
We’ve all developed workarounds for AI’s amnesia:
- That ever-growing “context.txt” file that’s part documentation, part archaeological dig
- Elaborate prompt templates that try to compress months of nuance into paragraphs
- Copy-paste marathons where you’re not even sure what context is relevant anymore
But here’s the thing: for developers, this problem is largely solved. Their IDE remembers their codebase, their tools understand project structure, and their AI assistants can navigate entire repositories.
For the rest of us, we’re still copying and pasting, still explaining our organisational context from scratch, and still losing those crucial connections that only emerge from longitudinal understanding.
Tools are catching up, though. For example, you can set up Projects in Claude Desktop and then upload a bunch of “project knowledge” to add context. That’s actually how I started, but it’s still limited, and more of a shotgun approach than what I was looking for.
What We’re Actually Trying to Solve
This isn’t about making AI do my job for me (although that would give me more time to suck at golf…). When I’m compiling delivery reports or preparing strategic docs, I’m not looking to outsource the thinking; I’m looking to augment it. I want an AI that can:
- Surface connections I might have missed
- Remember decisions and their rationales from months ago
- Track how situations evolve over time
- Understand the real (not org-chart) relationships between teams and people
In other words, I want an AI assistant that doesn’t just give me advice on tidying up my rough draft of a Slack post, but also reminds me of that related thread from a conversation three weeks ago that I might want to link to. It’s basically a personal assistant with a better memory and better note-taking abilities than I have!
Beyond the Band-Aid Solutions
The current “solutions” miss the point:
- Cloud-based memory sounds great until you realise you’re uploading sensitive organisational information to someone else’s servers. That strategy discussion about potential redundancies? Those concerns about a struggling team? Not exactly comfortable territory.
- Prompt libraries help with consistency but do nothing for context. It’s like having a great recipe but no memory of what ingredients you have in the pantry.
- Custom GPTs or Assistants get you partway there, but they’re still trapped in their own silos, can’t access your local files, and have token limits that laugh at the idea of meaningful historical context.
The Accidental Discovery
Over the past year, while dealing with tool shortcomings (or perhaps just my lack of awareness of other tooling?), I accidentally built something that’s become critical to my way of working. Not a revolutionary new AI model or a complex software system – just a way to give Claude Desktop an actual, functional memory that transforms how it can assist with complex knowledge work.
Now when I’m pulling together delivery reports, my AI assistant doesn’t just see this fortnight’s Jira tickets. It understands:
- The strategic context from our half-year planning
- Which initiatives are connected (even when they’re in different backlogs)
- The history of similar blockers and how they were resolved
- That directional or opinion piece written by one of our senior leads
The result? I spend less time digging through notes or relying on what I can recall, and more time on analysis and positioning. Less context-setting and more strategic thinking.
The Path Forward
I tend to be reserved and hesitant about sharing, because there’s often the feeling that what you’re doing isn’t special or unique, or even particularly clever. But 🤷‍♂️, fuck it, I’ve gotten so much value out of this that it’s at least worth using as an excuse to get back into writing again.
There’s too much to fit into this post, so I’ll split it up. In the next post(s) I’ll show you how I built my own AI memory system: one that actually understands the complexity of my role, stores artifacts locally, and (at this stage) uses nothing more complex than human-readable text files.
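To give a flavour of what “human-readable text files” can mean in practice, here’s a minimal, purely illustrative sketch. It isn’t my actual system – the folder name, file layout, and functions here are invented for this example – but it shows the basic idea: append dated notes to plain markdown files you can read yourself, and do a naive keyword search across them later.

```python
from pathlib import Path
from datetime import date

MEMORY_DIR = Path("memory")  # a local folder of plain markdown notes


def add_note(topic: str, text: str) -> Path:
    """Append a dated entry to a per-topic markdown file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    note = MEMORY_DIR / f"{topic}.md"
    with note.open("a", encoding="utf-8") as f:
        f.write(f"\n## {date.today().isoformat()}\n{text}\n")
    return note


def recall(keyword: str) -> list[str]:
    """Naive search: return lines from any note mentioning the keyword."""
    hits = []
    for note in sorted(MEMORY_DIR.glob("*.md")):
        for line in note.read_text(encoding="utf-8").splitlines():
            if keyword.lower() in line.lower():
                hits.append(f"{note.name}: {line.strip()}")
    return hits


# Example: log a blocker today, then surface it weeks later
add_note("delivery-blockers", "ABCD-456 waiting on infrastructure (GPU shortage).")
print(recall("gpu"))
```

Because the files stay local and human-readable, there’s nothing to migrate away from: you can open the notes in any editor, paste the relevant ones into an AI chat, or point a tool at the folder.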
I’ll cover:
- How to structure information for AI consumption without losing human readability
- The surprisingly simple tools that make local, secure memory possible
- How to build incrementally (start with one use case, expand naturally)
- Real examples from engineering leadership (without the sensitive bits)
- The principles that make the difference between a filing system and an intelligence multiplier
More importantly, I’ll discuss how I stopped treating AI like a “fancy Google” and started building it into a genuine strategic partner: one that actually remembers my context. I also have some ideas on how to improve my system further, incorporating more sophisticated tooling to mature it and hopefully make it even more useful, without it becoming too much of a complicated beast. 🙂
Cheers,
Dave