Tag: tools

  • I Built Another Thing! (To extract Slack conversations for my “AI memory”)

    I Built Another Thing! (To extract Slack conversations for my “AI memory”)

    I’ve written about my AI memory system before, which gives Claude a “deeper” memory across conversations. I rely on this every day, but noticed that it still misses some context because so many day-to-day conversations happen in Slack.

    Stuff like design discussions, technical opinions, those crucial “oh by the way” messages that reshape entire roadmaps, etc.

    But Slack is surprisingly lacking when it comes to smart data export or analysis. For example, we have an internal helpdesk-type channel I wanted a data dump from to analyse things like request count trends, documentation gaps, underserved teams, etc – but no luck, even when I requested the data through proper internal channels (i.e. IT).

    Anyway, I needed something that could grab specific conversations, preserve the context, and output clean markdown that my AI assistant could digest. So I built “SlackSnap”. 🎉

    Starting Simple-ish

    The good old “copy/paste” doesn’t really work here (no threads, messes up formatting, etc), so I didn’t start that simple.

    First I tried a JavaScript snippet that grabbed textContent from message elements. It kinda worked, but:

    • Slack’s DOM is a maze of virtual scrolling and dynamic loading, and the last time I pretended to be a “web developer” was in the 90s 🧓
    • Only captured visible messages (maybe 20-30 out of hundreds)
    • Lost all formatting (code blocks became walls of text)
    • No thread support
    • Usernames were just IDs like “U01234ABCD”

    So I rebuilt it as a proper Chrome extension. This gave me:

    • Background service workers for file downloads
    • Content scripts with full DOM access
    • Storage API for configuration
    • Proper permissions for Slack domains

    But the real breakthrough came when I discovered Slack loads its API token into localStorage. Instead of scraping the DOM, I could use Slack’s own API (well… *I* didn’t discover shit, the nice AI in Cursor informed me that this might be a better option 😄)

    Next: Dual Extraction Methods

    SlackSnap uses a two-pronged approach:

    Method 1: API-Based Extraction (Primary)

    // Get auth token from Slack's localStorage
    const config = JSON.parse(localStorage.getItem('localConfig_v2'));
    const token = config.teams[teamId].token;
    
    // Fetch messages via conversations.history
    const response = await fetch('/api/conversations.history', {
      method: 'POST',
      body: new URLSearchParams({
        token: token,
        channel: channelId,
        limit: '100',
        oldest: oldestUnix
      })
    });
    const { messages } = await response.json();

    The API approach is nice and simple (and understandable!) because it:

    • Gets ALL messages in the specified time window, not just visible ones
    • Includes thread replies with conversations.replies
    • Provides consistent data structure
    • Works with Slack’s pagination
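
    Pagination and thread replies follow the same pattern: `conversations.history` returns a `next_cursor` to page through, and any message that anchors a thread gets a follow-up `conversations.replies` call. Here's a simplified sketch of the loop – the endpoint names and cursor field are from Slack's public Web API, but the `callSlack` wrapper is my own stand-in for the authenticated fetch above, not SlackSnap's exact code:

```javascript
// Page through conversations.history until the whole time window is collected,
// then fetch replies for any message that starts a thread.
// `callSlack(method, params)` stands in for an authenticated fetch to /api/<method>.
async function fetchAllMessages(callSlack, channelId, oldestUnix) {
  const all = [];
  let cursor;
  do {
    const resp = await callSlack('conversations.history', {
      channel: channelId,
      limit: '100',
      oldest: oldestUnix,
      ...(cursor ? { cursor } : {})
    });
    all.push(...resp.messages);
    cursor = resp.response_metadata && resp.response_metadata.next_cursor;
  } while (cursor);

  // A thread parent's thread_ts equals its own ts
  for (const msg of all.filter(m => m.thread_ts === m.ts)) {
    const resp = await callSlack('conversations.replies', {
      channel: channelId,
      ts: msg.thread_ts
    });
    msg.replies = resp.messages.slice(1); // first entry repeats the parent
  }
  return all;
}
```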

    But the user IDs problem remained. Slack returns messages like:

    {
      "user": "U123ABC",
      "text": "Should we refactor the auth service?",
      "ts": "1753160757.123400"
    }

    Smart User Resolution

    Instead of fetching ALL workspace users – which the AI did initially, and which *I* actually corrected (chalk one up for the humans!) – SlackSnap:

    1. Extracts unique user IDs from messages
    2. Includes @mentions from message text
    3. Fetches ONLY those specific users
    4. Builds a lookup map for the export

    // Extract user IDs from messages and mentions
    const userIds = new Set();
    for (const msg of apiMessages) {
      if (msg.user) userIds.add(msg.user);
      // Extract @mentions like <@U123ABC>
      const mentions = msg.text.match(/<@([A-Z0-9]+)>/g) || [];
      for (const m of mentions) userIds.add(m.slice(2, -1));
    }
    
    // Fetch only needed users (e.g., 15 instead of 5000)
    const userMap = await fetchSpecificUsers(Array.from(userIds), token);
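
    With the lookup map in hand, rewriting Slack's raw mention syntax into readable names is a one-liner. A sketch – the `userMap` shape (ID → display name) is my assumption of what SlackSnap builds:

```javascript
// Replace <@U123ABC> mention tokens with human-readable @names,
// falling back to the raw ID when a user wasn't resolved.
function resolveMentions(text, userMap) {
  return text.replace(/<@([A-Z0-9]+)>/g, (match, id) => '@' + (userMap[id] || id));
}
```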

    Method 2: DOM Fallback

    If API access fails (permissions, network issues), SlackSnap falls back to enhanced DOM scraping:

    // Scroll to load message history
    const scrollContainer = document.querySelector('.c-scrollbar__hider');
    let lastCount = 0;
    let stalls = 0;
    
    for (let i = 0; i < 20; i++) {
      scrollContainer.scrollTop = 0;
      await new Promise(resolve => setTimeout(resolve, 1500));
    
      // Check if new messages loaded
      const currentCount = document.querySelectorAll('[role="message"]').length;
    
      // Break if no new messages after 3 attempts
      stalls = currentCount === lastCount ? stalls + 1 : 0;
      if (stalls >= 3) break;
      lastCount = currentCount;
    }

    This bit never worked as well (it still had issues resolving user names, the scrolling was inconsistent, etc), so I may just remove it entirely, since the API method has proven more reliable.

    The Output: Clean, Contextual Markdown

    SlackSnap produces markdown that preserves the conversation flow:

    # SlackSnap Export: #deathstar-review
    
    *Exported: November 28, 2024 10:45 AM*
    
    ---
    
    **D.Vader** (Today 9:15 AM):
    Team, what's this I hear about an "exhaust port" vulnerability?
    
    **Galen Erso** (Today 9:18 AM):
    Nothing to worry about; low sev vulnerability we can patch out later as a fast-follower :thumbsup: :agile:
    
    **Thread Replies:**
      - **Grand Moff T**: It's only 2 meters wide right? nobody's getting close enough to even see it! :approved:
      - **Emperor P**: Yeah, okay... its just a vent I guess, probably doesn't lead anywhere important in any case. Thanks team
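
    The generation step itself is mostly string assembly. A minimal sketch of the idea – the field names here (`userName`, `time`, `replies`) are my own simplified shape, not SlackSnap's exact internals:

```javascript
// Render resolved messages as markdown, with thread replies nested under their parent.
function toMarkdown(channel, messages) {
  const lines = [`# SlackSnap Export: #${channel}`, ''];
  for (const msg of messages) {
    lines.push(`**${msg.userName}** (${msg.time}):`, msg.text, '');
    if (msg.replies && msg.replies.length) {
      lines.push('**Thread Replies:**');
      for (const reply of msg.replies) {
        lines.push(`  - **${reply.userName}**: ${reply.text}`);
      }
      lines.push('');
    }
  }
  return lines.join('\n');
}
```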

    Configuration

    The options page lets you control:

    • Download directory: Organizes exports (e.g., downloads/slack-exports/)
    • Filename template: YYYYMMDD-HHmm-{channel}.md for chronological sorting
    • History window: How many days back to export (default: 7) – things get a bit funky if you download too much from a very busy channel
    • Thread inclusion: Critical for capturing full context, or to disable if you just want a high level overview
    • Timestamps: Full date/time vs. just time
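
    The filename template is plain token substitution. A sketch of how the YYYYMMDD-HHmm-{channel} tokens might expand (my own illustration, not SlackSnap's exact code):

```javascript
// Expand the date tokens and {channel} placeholder in a filename template.
function expandTemplate(template, channel, date) {
  const pad = n => String(n).padStart(2, '0');
  return template
    .replace('YYYYMMDD', `${date.getFullYear()}${pad(date.getMonth() + 1)}${pad(date.getDate())}`)
    .replace('HHmm', `${pad(date.getHours())}${pad(date.getMinutes())}`)
    .replace('{channel}', channel);
}
```

    For example, an export of #deathstar-review at 10:45 AM on 28 Nov 2024 would land at 20241128-1045-deathstar-review.md, which sorts chronologically in the downloads folder.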

    How I use the output

    The structured markdown output feeds directly into my AI context system. The goal isn’t to capture every single little message; instead I export (weekly) from a few key channels likely to contain important context, pass all of those exports into Claude at once, and ask it to write a single summary file to memory for that week, focusing on team dynamics, key decisions, technical direction, etc.

    Then the memory system can take that Slack summary into account when I do my regular “memory updates”. So now when I start a chat in Claude Desktop, it can make use of context from meeting transcripts and documents I’ve provided, plus Slack conversations!

    For the week or so I’ve been using it I’ve noticed that it feels a little more accurate, or “connected to reality”, than it did before. YMMV.

    The Technical Stack

    Everything runs locally in your browser:

    • Manifest V3: Modern Chrome extension architecture
    • Slack’s Internal API: Already authenticated, just reuse their token
    • Chrome Downloads API: Handles subdirectories properly
    • Markdown Generation: Preserves code blocks, links, formatting

    Installation and Usage

    1. Clone from GitHub: https://github.com/dcurlewis/slacksnap
    2. Load as unpacked extension in Chrome
    3. Click extension icon on any Slack conversation
    4. Messages export to your Downloads folder

    The export captures the entire history (up to your configured limit), not just what’s on screen.

    Only tested on Chrome so far (but it should work on Chromium-based browsers, or others using the same extension architecture).

    Future Enhancements?

    • Selective date ranges: Export specific time periods
    • Multi-channel export: Batch export related channels
    • Search integration: Export search results
    • Attachment handling: Download shared files/images
    • Export formats: JSON for data analysis, PDF for sharing

    But honestly? The current version solves my immediate need so I probably won’t bother adding too many bells and whistles.

    Some Observations

    Building this revealed some interesting patterns in how we communicate:

    1. Critical decisions often happen in threads – Main messages lack context
    2. Code snippets in Slack are surprisingly common – And poorly preserved
    3. Timestamps matter more than you think – “Yesterday” is ambiguous a day later
    4. User attribution is crucial – “Someone suggested” vs. “Darth Vader suggested”

    Other observations from me, less related to this tool and more to the process of developing it: “vibe coding” can still be infuriating, but it works a lot better IMO if you provide a decent project plan at the outset.

    I’ve seen arguments that planning time is better spent “iterating” (vibing?), but I literally spent 2 or 3 minutes prompting another AI to produce my “plan” based on my quickly thrown together requirements and limitations.

    This probably saved hours of the AI running me around in circles with “vibey” scope creep, mocked functions it thought might be handy for some amazing feature I might implement one day (that I definitely didn’t ask for), etc.

    Get It Running

    The tool is here: https://github.com/dcurlewis/slacksnap

    It’s intentionally simple – no external dependencies, no build process, just vanilla JavaScript that manipulates Slack’s own data. If you’re feeding an AI assistant with your work context, this might be the missing piece.

    Let me know if you find something like this useful, or if you have any feedback/ideas to share. 

    Cheers,
    Dave


    P.S. – Yes, I realize I’m slowly building a suite of tools to feed my AI assistant. Not sure what might be up next, yet…

  • I Built a Thing! (To Transcribe my Meetings)

    I Built a Thing! (To Transcribe my Meetings)

    The third-party transcription app I was using for my AI memory system got flagged as non-compliant at work. Zoom’s built-in transcription only really works when you’re the host. For vendor calls, external meetings, and anything where I wasn’t running the show, I needed an alternative that was free, local, and (more) compliant.

    So I built one.

    The Starting Point: Manual but Functional

    I already had OBS Studio installed (it’s on our approved software list) and knew it could record audio. My initial workflow was basic:

    1. Manually start OBS recording before a meeting
    2. Stop recording after
    3. Run OpenAI’s Whisper locally on the audio file
    4. Paste the transcript into Claude Desktop for summarization

    It worked, but had obvious problems:

    • Everything was manual
    • No speaker separation (just one wall of text)
    • Back-to-back meetings meant falling behind on processing
    • Easy to forget to start/stop recording

    The Evolution: From Manual to Automated

    First, I automated the OBS control using its websocket API. No more clicking around in the UI, just command-line control.

    Then I realised I could use OBS’s multi-track recording to solve the speaker problem to some extent:

    • Track 1: My microphone
    • Track 2: Desktop audio (everyone else)

    This works perfectly for 1-on-1 meetings, since there are only two of you; for group meetings, though, you only know for sure which words were yours and which came from everyone else.

    I haven’t figured out a way to solve this yet, but to be honest AI summarization does a pretty good job of inferring who said what in most cases. It may only cause problems in meetings where exact attribution matters (e.g. assigning tasks or follow-up actions).

    FFmpeg could extract these tracks as separate files, Whisper could transcribe them independently, and a simple Python script could merge them with timestamps:

    [00:01:23] Me: What's the status on the inference service?
    [00:01:31] Others: Still blocked on GPU allocation...
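
    The merge itself is just an interleave-by-timestamp over the two labelled tracks. A sketch of the logic – the real tool does this in Python over SRT files, but the idea translates directly (shown in JavaScript here to match the earlier snippets; the entry shape is my own simplification):

```javascript
// Merge two speaker-labelled transcripts into one chronological list of lines.
// Each entry: { seconds, text }.
function mergeTracks(mine, others) {
  const label = (entries, speaker) => entries.map(e => ({ ...e, speaker }));
  return [...label(mine, 'Me'), ...label(others, 'Others')]
    .sort((a, b) => a.seconds - b.seconds)
    .map(e => `[${hms(e.seconds)}] ${e.speaker}: ${e.text}`);
}

// Format seconds as HH:MM:SS
function hms(s) {
  const pad = n => String(n).padStart(2, '0');
  return `${pad(Math.floor(s / 3600))}:${pad(Math.floor(s / 60) % 60)}:${pad(Math.floor(s) % 60)}`;
}
```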

    Finally, I opened Cursor (using Gemini 2.5), gave it all my requirements, and asked it to build a proper CLI tool. The result was a bash script that orchestrated everything: OBS control, audio extraction, transcription, and transcript merging.

    The Final Tool: Simple Commands, Complete Workflow

    # Start recording
    ./run.sh start "Sprint Planning"
    
    # Process all recordings when you have time
    ./run.sh process
    
    # Check what's queued
    ./run.sh status
    
    # Discard a recording
    ./run.sh discard

    Key features I added during refinement:

    • Queue system: Records meeting metadata to processing_queue.csv for batch processing (for when I have several back-to-back meetings and can only process them later)
    • Automatic stop: Starting a new recording auto-stops the previous one (because I have a crap memory 😉)
    • Idempotent processing: Won’t re-process already completed steps if interrupted (e.g. if you start recording a meeting but nobody shows, or you don’t want to process it for some other reason)
    • Configurable Whisper models: Trade speed for accuracy based on your needs (I haven’t played with this much, so have only tried the base and small models, which worked well, but there is a turbo model too which looks interesting)

    The Technical Stack

    Everything runs locally:

    • OBS Studio: Multi-track audio recording (already on our approved software list)
    • FFmpeg: Extract multiple audio tracks from MKV files
    • Whisper: Local transcription (base model by default, configurable)
    • Python: Controls OBS via websocket, merges SRT files
    • Bash: Orchestrates the workflow

    Why This Works for Me

    1. Fully local: No data leaves your machine
    2. Uses approved tools: in my case at least
    3. Handles real workflows: Queue system for back-to-back meetings
    4. Good enough quality: Whisper’s base model is sufficient for most meetings
    5. Searchable output: Timestamped, speaker-separated transcripts

    The transcripts feed directly into my AI assistant for summarization, action item extraction, and long-term context building. No manual notes, no missed decisions, no compliance issues.

    Get It Running

    The tool is here: https://github.com/dcurlewis/obs-transcriber along with more detailed instructions.

    Setup is straightforward:

    1. Install OBS Studio and enable websocket server
    2. Install FFmpeg and Python dependencies
    3. Configure OBS for multi-track recording
    4. Run the commands

    It’s not as polished as commercial services, but it solves my specific problem of local meeting transcription. Improvements I’m already thinking about:

    • Auto-deleting the recordings once they’re processed – this cuts down disk space bloat but, more importantly, reduces the amount of potentially sensitive data lying around
    • Auto-generating summaries and then deleting the transcripts themselves, for a full end-to-end solution

    Drop me a comment here (or on GitHub) if you have any other ideas for improvements.

    If you’re facing similar constraints – need transcription, can’t use cloud services, don’t control the meetings – this might help. Or inspire you to build your own version.

    Disclaimer: I’ve only “tested” this lightly today on a few meetings, so your mileage may vary, and do your own due diligence as always.

    Cheers,
    Dave

  • My AI Memory System: The Complete Implementation

    My AI Memory System: The Complete Implementation

    Right… enough context, let’s cut the shit and get to the good stuff. Here’s my complete AI memory system – the one I use every day to manage engineering teams.

    In the previous posts (1, 2), I showed you the background and high-level setup. Now let’s look at what I’ve built on top of that foundation.

    The Architecture (It’s Still Simple)

    Quick recap:

    • Claude Desktop with Project Knowledge
    • MCP Desktop Commander for local file R/W access
    • Markdown files organised in folders

    What’s evolved is the system layer on top – the memory structure, the commands, and the workflows that make this actually useful.

    The Instructions

    First, we need to set some ground rules: basically, explain to our old mate Claude how to interpret what we say, what actions to take, and what role to play given the context of the documents or meeting transcripts provided.

    Below is a sample “context management system” instruction doc (just what I called mine, naming isn’t important) to demonstrate the level of detail needed to get started.

    ai-context-management-system.md
    # AI Context Management System Guide
    
    ## Overview
    This document describes how to manage long-term context using a hybrid approach combining file-based storage and Project Knowledge artifacts.
    
    ## Directory Structure
    /Users/foo/bar/AI-Context/
    ├── Raw-Materials/          # Incoming stuff
    │   ├── Meeting-Transcripts/
    │   └── _Archive/          # Processed materials
    ├── Curated-Context/       # The actual memory
    │   ├── Meeting-Insights/
    │   ├── Team-Knowledge/
    │   ├── Project-Insights/
    │   ├── Decision-History/
    │   └── Strategic-Documents/
    ├── Tasks/                 # Strategic task tracking
    │   └── strategic-tasks.md
    ├── Prompts/              # System files & memory summaries
    │   ├── memory-organization.md
    │   ├── memory-strategy.md
    │   ├── memory-projects.md
    │   ├── memory-decisions.md
    │   ├── memory-team-dynamics.md
    │   └── memory-relationships.md
    └── Templates/            # Reusable formats
    
    ## Memory Command Actions - CRITICAL BEHAVIOR
    
    When I use phrases like "commit to memory", "add to memory", or "save this information", this is NOT a request for analysis in chat. These are EXPLICIT instructions to CREATE A NEW FILE in the appropriate subdirectory.
    
    Example: "commit to memory a summary of this meeting"
    Expected behavior:
    1. Create a new markdown file with proper naming [YYYYMMDD]-Meeting-Summary.md
    2. Write the summary content to this file
    3. Confirm the file path where content was saved
    
    Never just provide the summary in chat without creating a file.
    
    ## Content Curation Workflow
    
    1. **Processing New Information**
       - Analyze incoming content for key insights
       - Extract actionable items and patterns
       - Create structured summaries focusing on decisions and outcomes
       - Place in appropriate subdirectory
    
    2. **Meeting Transcripts**
       - Create one summary per transcript (never combine)
       - Focus on decisions, action items, and strategic insights
       - Note participants and context
       - Archive original after processing
    
    3. **Quality Standards**
       - Use clear headers and bullet points
       - Include metadata (date, participants, context)
       - Emphasize actionable over exhaustive
       - Maintain ~2-3 page maximum per document
    
    ## Strategic Task Tracking
    
    When processing any content, actively identify tasks that:
    - Have strategic importance or long-term impact
    - Span multiple weeks/months
    - Have dependencies or blockers
    - Might not be captured in regular task systems
    
    Automatically suggest: "Should I add this to strategic tasks?"
    
    ## Memory File Management
    
    When updating memory files:
    1. Focus on current state, not historical narrative
    2. Use status markers: [ACTIVE], [BLOCKED], [RESOLVED]
    3. Archive resolved items with reference
    4. Maintain target line counts (100-150 lines)
    5. Consolidate related items
    
    ## Pattern Recognition
    
    Actively look for:
    - Repeated themes across different sources
    - Contradictions or conflicts in information
    - Emerging risks or opportunities
    - Connections between seemingly unrelated items
    
    Surface these patterns proactively.

    This isn’t my actual version because (as you’ll discover yourself) you’ll continue tweaking and adding to it to make it your own over time, and mine is now a bit of a cobbled-together mess in need of a rewrite.

    The Workflow

    The flow is continuous:

    1. Raw materials accumulate throughout the day or week
    2. I process them (using Claude) into structured insights
    3. Commands consolidate insights into memory files
    4. Memory files get loaded as Project Knowledge
    5. Every new chat starts with full context

    The Memory Files

    After processing, information lives in six core memory files that get loaded into Claude as Project Knowledge:

    memory-organization.md

    • Current team structure and reporting lines
    • Personnel changes and succession planning
    • Role transitions and organisational impacts

    memory-projects.md

    • Active initiatives with status markers
    • Dependencies and blockers
    • Key milestones and delivery dates

    memory-team-dynamics.md

    • Team health indicators
    • Collaboration patterns
    • Morale and engagement signals

    memory-strategy.md

    • Technical vision and quarterly priorities
    • Architecture decisions and tradeoffs
    • Strategic initiatives and their rationale

    memory-decisions.md

    • Pending decisions requiring resolution
    • Implementation status of past decisions
    • Decision ownership and timelines

    memory-relationships.md

    • Stakeholder mapping and status
    • Cross-functional dependencies
    • Vendor and partner relationships

    Each file uses status markers: [ACTIVE], [BLOCKED], [CRITICAL], [RESOLVED], [PENDING]

    Why Commands Exist

    I was already using Claude to update memory files from fairly early on – but I was typing out the full instructions every time:

    "Claude, please scan the Curated-Context folder ('/path/to/my_folder') for new files since last Monday (7 April 2025). For each new file, extract relevant information and update the appropriate memory files. For the team dynamics file, focus on current issues and use status markers like [ACTIVE] or [RESOLVED]. Keep each file under 150 lines. Archive any resolved items. Make sure to..."

    You get the idea. These instructions would run to multiple paragraphs. And despite my best efforts, I’d phrase things slightly differently each time:

    • Sometimes I’d say “focus on current state”
    • Other times “emphasise active items”
    • Sometimes I’d forget to mention the line limit
    • Other times I’d use different status markers

    So I started saving the prompts to text files, and then I thought to myself “why not just get Claude to read those files?” and my “pseudo-Claude-CLI” file was born.

    The Command System

    Now here’s what I might actually type as a prompt:

    # After each meeting: process the transcript
    claude-meeting (brief description of the meeting if you like, but usually not necessary)
    
    # Update memory files with new information (usually each Monday)
    claude-memory-scan
    claude-memory-update-organization
    claude-memory-update-strategy
    claude-memory-update-projects
    claude-memory-consolidate
    
    # Monday morning: generate my weekly update
    claude-monday-update
    
    # Check strategic tasks
    claude-tasks
    
    # Prep for upcoming meetings
    claude-meeting-prep "stakeholder name"
    
    # Fortnightly: delivery reporting
    claude-delivery-scan # uses a Jira MCP to read Jira tickets and report on each goal's progress
    claude-delivery-report # Summarizes the above "initial report" into a less detailed, more business user-friendly version for sharing with leadership
    

    Real Workflow: Monday Morning

    Here’s my actual Monday routine:

    1. Update memory files first (10-15 minutes) – This ensures all context from last week is integrated.
      • claude-memory-scan
        • Scans for new files added since the last memory update
      • claude-memory-update-organization, claude-memory-update-strategy, claude-memory-update-projects, etc
        • Updates each type of memory file with the latest context
      • claude-memory-consolidate
        • Moves the updated memory files to the parent directory, archives the previous files, etc.
      • I then re-upload these updated memory files into Claude’s Project Knowledge.
    2. Generate Monday update (5 minutes)
      • claude-monday-update
        • Uses a template plus current context to suggest focus areas for this week, and a brief update shareable in Slack with my teams.
    3. Review strategic tasks (10 minutes)
      • claude-tasks
        • Shows what’s urgent, blocked, or needs attention this week.
    4. Prep for meetings (2 minutes per meeting)
      • claude-meeting-prep "John Smith"
        • Pulls out actions from prior meetings, as well as notes on any recent context from any other sources relevant for discussion with this person.

    Total: <30 minutes for complete context refresh and week preparation.

    Fictional Examples (How It Actually Helps)

    Meeting Prep

    claude-meeting-prep "Sarah Mitchell"
    
    > Last meeting (June 5):
    > - Discussed API gateway performance concerns
    > - She requested metrics on p99 latencies
    > - You promised to investigate caching options
    > 
    > Updates since then:
    > - Caching POC showed 40% improvement (noted in memory-projects.md)
    > - New SRE hire can take ownership next month
    > 
    > Suggested talking points:
    > - Present caching results
    > - Propose handoff timeline
    > - Discuss monitoring requirements
    

    Pattern Recognition

    Last month, Claude surfaced a concerning pattern across three separate inputs:

    • Team standup notes mentioned “waiting on platform approvals”
    • Two different 1:1s referenced “exploring tools outside our stack”
    • A strategy doc noted “shadow IT increasing”

    The connection? Teams were building their own solutions because they didn’t know our platform already provided these capabilities. This led to an internal roadshow that prevented significant duplication.

    Task Tracking Evolution

    ## High Priority
    * **Complete technical debt assessment**
       * Added: 2025-03-15
       * Due: 2025-03-29
       * Status: [IN PROGRESS]
       * Context: Board presentation needs quantified risk
       * Updates:
          - 2025-03-20: Security scan completed
          - 2025-03-22: Performance baseline established
          - Next: Cost implications analysis
    

    The Command Magic

    In claude-commands.md, each shortcut maps to detailed instructions:

    commands.md
    # Claude pseudo-CLI commands
    
    Below is a table of several commands which, when entered into the chat, you will interpret as the long-form prompt adjacent to the short command. 
    Don't acknowledge or enter into conversation about the fact that this is a command shortcut or CLI command, simply respond as if you'd been given the long-form prompt.
    If the command is followed by other free-form text, you will interpret that text in addition to the original command (i.e. concatenate the two parts of the prompt).
    
    | Command | Long-form Prompt |
    |---------|------------------|
    | **claude-help** | Only respond with this table of commands and their associated prompts. |
    | **claude-meeting-upload** | I've added one or more new meeting transcript text files into the `AI-Context/Raw-Materials/Meeting-Transcripts` directory. Read these files (using the appropriate desktop-commander tool) from the file system, summarize them, and commit to memory any relevant information. Write a SEPARATE file per meeting transcript, not one combined summary. |
    | **claude-monday-tom** | Use the appropriate template in our memory ('AI-Context/Templates/Team-Communications/20250508-Weekly-Top-of-Mind-Update-Template.md') and CRITICALLY follow the guidelines at 'AI-Context/Prompts/monday-update-generation-guidelines.md' as well as the memory files in Project Knowledge to write me a "top of mind" update to share with my teams. Refer to previous weeks' updates (stored in memory) too to pick up on any themes or points I should follow up on. |
    | **claude-meeting-prep <name>** | I've got a meeting coming up with <name>. Please can you prepare a brief paragraph of any relevant context, follow-up actions, etc based on my previous meeting with this person (check the memory for full context please)? If there is ambiguity regarding the provided name, please clarify with me first. |
    | **claude-memory-scan** | Scan the AI-Context/Curated-Context directory for new files since the last memory update. Create a manifest of files to process and an update plan. Store the results in a new dated directory `/AI-Context/Prompts/YYYYMMDD-memory-update/` with files: `scan-manifest.md` (list of new files found, template located at `/AI-Context/Prompts/scan-manifest-TEMPLATE.md`) and `update-state.json` (tracking progress, template located at `/AI-Context/Prompts/update-state-TEMPLATE.json`). Do not update any memory files yet. |
    | **claude-memory-update-organization** | Read the scan manifest from today's memory update directory, then update ONLY the organization & role context. Read the current `/AI-Context/Prompts/memory-organization.md` and create an updated version at `/AI-Context/Prompts/YYYYMMDD-memory-update/memory-organization.md`. IMPORTANT: While updating, consolidate content by: 1) Focusing on current state rather than historical narratives, 2) Using status markers like [IMPLEMENTED], [IN PROGRESS], [PENDING], 3) Moving verbose historical content to archive references, 4) Removing duplication with other memory files, 5) Keeping file concise (target ~100 lines). Update the `update-state.json` to mark this topic as completed. |
    | **claude-memory-update-strategy** | Read the scan manifest from today's memory update directory, then update ONLY the strategic direction content. Read the current `/AI-Context/Prompts/memory-strategy.md` and create an updated version at `/AI-Context/Prompts/YYYYMMDD-memory-update/memory-strategy.md`. IMPORTANT: While updating, consolidate content by: 1) Focusing on current state rather than historical narratives, 2) Using status markers like [ACTIVE], [PLANNED], [DEPRECATED], 3) Moving verbose historical content to archive references, 4) Removing duplication with other memory files, 5) Keeping file concise (target ~130 lines). Update the `update-state.json` to mark this topic as completed. |
    | **claude-memory-update-projects** | Read the scan manifest from today's memory update directory, then update ONLY the projects & initiatives content. Read the current `/AI-Context/Prompts/memory-projects.md` and create an updated version at `/AI-Context/Prompts/YYYYMMDD-memory-update/memory-projects.md`. IMPORTANT: While updating, consolidate content by: 1) Focusing on active projects with clear status markers, 2) Using markers like [IN PROGRESS], [COMPLETED], [BLOCKED], [PLANNED], 3) Moving completed project details to archive references, 4) Grouping by status rather than chronology, 5) Keeping file concise (target ~120 lines). Update the `update-state.json` to mark this topic as completed. |
    | **claude-memory-update-decisions** | Read the scan manifest from today's memory update directory, then update ONLY the decisions & issues content. Read the current `/AI-Context/Prompts/memory-decisions.md` and create an updated version at `/AI-Context/Prompts/YYYYMMDD-memory-update/memory-decisions.md`. IMPORTANT: While updating, consolidate content by: 1) Focusing on active/unresolved decisions only, 2) Using status markers like [IMPLEMENTED], [PENDING], [UNRESOLVED], [STATUS UNKNOWN], 3) Moving implemented decisions to brief summary lines, 4) Grouping by category (Infrastructure, Organizational, Strategic, etc.), 5) Keeping file concise (target ~120 lines). Update the `update-state.json` to mark this topic as completed. |
    | **claude-memory-update-team** | Read the scan manifest from today's memory update directory, then update ONLY the team dynamics content. Read the current `/AI-Context/Prompts/memory-team-dynamics.md` and create an updated version at `/AI-Context/Prompts/YYYYMMDD-memory-update/memory-team-dynamics.md`. IMPORTANT: While updating, consolidate content by: 1) Focusing on current dynamics and active issues, 2) Using status markers like [CRITICAL], [MONITORING], [RESOLVED], 3) Removing resolved issues or outdated team dynamics, 4) Consolidating repetitive communication patterns, 5) Keeping file concise (target ~100 lines). Update the `update-state.json` to mark this topic as completed. |
    | **claude-memory-update-relationships** | Read the scan manifest from today's memory update directory, then update ONLY the cross-functional relationships content. Read the current `/AI-Context/Prompts/memory-relationships.md` and create an updated version at `/AI-Context/Prompts/YYYYMMDD-memory-update/memory-relationships.md`. IMPORTANT: While updating, consolidate content by: 1) Focusing on active relationships and current status, 2) Using markers like [ACTIVE], [DEPARTING], [BLOCKED], 3) Removing people who have left or outdated vendor relationships, 4) Grouping by relationship type, 5) Keeping file concise (target ~130 lines). Update the `update-state.json` to mark this topic as completed. |
    | **claude-memory-consolidate** | Complete the memory update process: 1) Review all updated memory files to ensure consolidation principles were followed (concise, status markers, current state focus), 2) Update memory-index.md with the new update date and summary of changes, 3) Generate a final report showing line count changes and key updates made. Mark the update-state.json as completed. |
    | **claude-memory-promote** | After reviewing the updates, promote the new memory files from the dated update directory to the main `/AI-Context/Prompts/` directory, archiving the previous versions to `/Users/dbdave/work/AI-Context/Prompts/_Archive/YYYYMMDD-memory-archived/`. Only run this after you've reviewed the updates. |
    | **claude-delivery-scan** | Scan all tracked Jira tickets in `/AI-Context/Delivery-Reports/delivery-tracking.md` for new comments since the last report date. For each ticket, fetch comments and create a summary of updates, highlighting progress, blockers, and key metrics. Store the scan results in a new file `/AI-Context/Delivery-Reports/YYYYMMDD-delivery-scan.md`. |
    | **claude-delivery-update <team>** | Update delivery reports for a specific team. Read the tracking file, fetch recent comments for that team's tickets, summarise the updates, and update the 'Last Report' dates in the tracking file. If no team is specified, update all teams. |
    | **claude-delivery-report** | Generate a comprehensive delivery report suitable for sharing with leadership. Read the most recent scan results and create a formatted report in `/AI-Context/Delivery-Reports/YYYYMMDD-delivery-report.md` that includes: executive summary, team-by-team progress, key achievements, blockers, upcoming milestones, and metrics. Use the template structure for consistent formatting. |
    | **claude-tasks** | Review the strategic task list at `/AI-Context/Tasks/strategic-tasks.md`. Display current tasks organised by priority and status. Check if any tasks need status updates based on recent context. Suggest any new tasks identified from recent meetings or documents. |
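    These per-topic commands coordinate through the `update-state.json` file each one marks as completed. I haven't shown the actual template here, but a minimal version of what such a state file could look like (field names are illustrative guesses, not the real schema) might be:

```json
{
  "update_date": "20250315",
  "topics": {
    "strategy": "completed",
    "projects": "completed",
    "decisions": "in_progress",
    "team": "pending",
    "relationships": "pending"
  },
  "consolidated": false
}
```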

    This lets me maintain consistency without typing reams of instructions.

    Important Implementation Details

    1. Status Markers Are Useful

    Without them, files become append-only history logs. With them, you get:

    ### Platform Adoption [ACTIVE]
    - Shadow solutions emerging in 3 teams
    - Internal awareness campaign planned Q2
    - Previous evangelism efforts [ARCHIVED - see _Archive/2024-platform-push.md]

    2. Line Limits Force Priority

    Each memory file has a target length. This creates natural pressure to:

    • Archive resolved items
    • Consolidate related points
    • Focus on what matters now
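    If you want to keep yourself (or Claude) honest about those targets, a few lines of Python will do it. This is just a sketch — the filenames and targets mirror the command table above, but the checker itself isn't part of my setup:

```python
from pathlib import Path

# Target line counts per memory file (taken from the command table above)
TARGETS = {
    "memory-strategy.md": 130,
    "memory-projects.md": 120,
    "memory-decisions.md": 120,
    "memory-team-dynamics.md": 100,
    "memory-relationships.md": 130,
}

def over_target(prompts_dir: str) -> list[str]:
    """Return a warning line for each memory file exceeding its target length."""
    warnings = []
    for name, target in TARGETS.items():
        path = Path(prompts_dir) / name
        if not path.exists():
            continue
        lines = len(path.read_text().splitlines())
        if lines > target:
            warnings.append(f"{name}: {lines} lines (target ~{target})")
    return warnings
```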

    3. Separation Prevents Mess

    Raw-Materials/ → unprocessed inputs
    Curated-Context/ → processed insights
    _Archive/ → historical reference

    Don’t mix these.

    4. Dating Enables Everything

    20250315-Engineering-Sync.md tells Claude:

    • When this happened
    • Processing order
    • Relevance decay over time

    Without dates, you have a pile. With dates, you have a timeline.
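    That "relevance decay" can even be made explicit if you ever want to weight files by age. As a sketch (the half-life value is arbitrary, not something my system actually uses), the `YYYYMMDD` prefix parses straight into a recency weight:

```python
from datetime import date

def recency_weight(filename: str, today: date, half_life_days: float = 90.0) -> float:
    """Parse the YYYYMMDD filename prefix and return a weight that halves
    every `half_life_days` days."""
    stamp = filename[:8]  # e.g. "20250315" from "20250315-Engineering-Sync.md"
    d = date(int(stamp[:4]), int(stamp[4:6]), int(stamp[6:8]))
    age_days = (today - d).days
    return 0.5 ** (age_days / half_life_days)
```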

    What This Enables (Real Impact)

    Weekly time saved: ~3-5 hours

    • No more searching through Slack/email/docs for context
    • Meeting prep takes minutes, not half-hours
    • Strategic patterns visible immediately

    Quality improvements:

    • Never miss follow-ups from previous meetings
    • Connect decisions across time and teams
    • Spot risks before they materialise

    Cognitive load reduction:

    • Stop holding everything in your head
    • Trust the system to surface what matters
    • Focus on analysis, not archaeology

    The Evolution Path

    I didn’t build this overnight. Here’s the rough timeline:

    • Week 1-2: Basic file dumping and summaries
    • Week 3-4: Realised I needed folder structure and archives
    • Month 2: Added status markers when files got unwieldy
    • Month 3: Created memory consolidation approach
    • Month 4: Built command shortcuts (game changer)
    • Month 5: Integrated strategic task tracking

    Each addition solved a specific pain point. No grand design, just iterating on friction.

    Start Where You Are

    1. Pick your biggest knowledge pain point
    2. Create ONE memory file for it
    3. Feed it information for a week
    4. Add status markers when it gets unwieldy
    5. Create shortcuts when you’re tired of typing

    The beauty? It’s just text files. You can’t break anything. The worst case is you reorganise some folders.

    Next post, I’ll explore what’s missing and where this could go – semantic search, team scaling, integration possibilities. But honestly? Even this simple version has transformed how I work.

    What information scattered across your tools would be most valuable if it lived in one place?

    Cheers,
    Dave


    Building Your AI’s Memory: The Surprisingly Simple Foundation

    Last time I talked about giving your AI assistant an actual memory. Today I’ll show you how to set it up, and here’s the best bit: you barely have to write anything yourself. I didn’t want a new note-taking tool, I wanted someone else to take notes for me.

    The whole system runs on markdown files and folders. That’s it. But the magic is that Claude (or ChatGPT, etc) does the heavy lifting for you.

    Pointing out the obvious:

    This whole setup assumes you have access to an LLM/AI client, and it may rely on features that are paid for. I haven’t tried this on any of the free plans yet – so please let me know if you do!

    First Things First: The Setup

    Before we dive into the fun stuff, you need a way for your AI to actually read and write files on your computer. This is where most people get scared off, but it’s super simple.

    I use Claude Desktop with an MCP (Model Context Protocol) tool called Desktop Commander. I won’t go into what MCP is, or how to install one (there are good instructions for Desktop Commander on their website). Suffice it to say that MCPs extend the functionality of your LLM client.

    Once the tool is installed, Claude can now:

    • Read files from your computer
    • Create new files & directories
    • Search folders and files for specific patterns
    • Update existing documents

    The setup takes just a minute or two, and then you’re cooking with gas.

    How This Started

    I didn’t sit down one day and decide to build an entire memory system. It evolved from a simple problem.

    I had meeting transcripts or summaries from Zoom. I had a growing list of technical and strategic documents to read.

    So I started doing this:

    1. Export meeting transcripts from Zoom/Teams/whatever
    2. Drop them into Claude
    3. Prompt: “Please read these meeting transcripts and create individual summaries, then write them to my AI-Context folder following the pattern ‘YYYYMMDD-A-brief-description-based-on-meeting-context.md’”

    That was the beginning. And yes, I’m one of those people who says please and thank you to the machines. 😇

    The Simple Structure That Emerged

    After a few days of having Claude process meeting transcripts and docs, I noticed patterns. Some summaries were about team dynamics. Others were project updates. Some captured strategic decisions.

    So I asked Claude to recommend a folder structure based on the files created to date, then move everything into the appropriate places:

    AI-Context/
    ├── Meeting-Insights/
    ├── Team-Knowledge/
    ├── Project-Insights/
    ├── Decision-History/
    └── Strategic-Documents/

    Nothing fancy. Just buckets that matched what my “memory” already contained.

    It was around this point that I realised I wanted to start enforcing some structure, so I modified the project instructions (which act like a system prompt for Claude Desktop) with things like the expected filename format:

    20250704-Team-Planning-Session.md

    Starting with the date means everything sorts chronologically. More importantly, it creates temporal context. When I ask about “our infrastructure decisions”, Claude doesn’t just know what we decided – it knows when, what led up to it, and what happened next.
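    The reason this works with zero extra tooling: a zero-padded `YYYYMMDD` prefix makes plain lexicographic sorting identical to chronological sorting. A quick illustration with hypothetical filenames:

```python
files = [
    "20250704-Team-Planning-Session.md",
    "20241212-Infra-Decision-Record.md",
    "20250103-Q1-Kickoff-Notes.md",
]

# A plain string sort — no date parsing needed, because YYYYMMDD is zero-padded
assert sorted(files) == [
    "20241212-Infra-Decision-Record.md",
    "20250103-Q1-Kickoff-Notes.md",
    "20250704-Team-Planning-Session.md",
]
```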

    Your First Memory (The Lazy Way)

    Want to try this? Here’s the laziest possible start:

    1. Set up Claude Desktop with MCP (or your preferred AI client with file access)
    2. Create an “AI-Memory” folder somewhere sensible (I limit the MCP’s access to only this parent folder to ensure it can’t do anything unexpected)
    3. Find a recent meeting transcript or important document
    4. Give it to Claude with this prompt:

    “Please read this document and create a summary focusing on key decisions, action items, and important context. Save it as a dated markdown file in my AI-Memory folder.”

    That’s literally it. Claude does the work.

    What Actually Goes in These Files?

    When I started, I’d give Claude specific instructions about what to capture. Now it’s mostly been rolled up into context or instruction files, but here’s what works:

    For meeting summaries:

    • Who attended and their roles
    • Key decisions made
    • Action items and owners
    • Unresolved questions
    • Important context or background mentioned

    For strategy documents:

    • Core objectives
    • Key stakeholders
    • Success metrics
    • Risks and dependencies
    • Timeline markers

    The beauty is you can iterate. Start simple, see what’s useful, adjust your prompts.

    Making Connections

    So far we’ve got some files on our filesystem, but this isn’t actually a “memory” system since you’d have to get your AI to ingest all files before each conversation.

    Instead, I started getting Claude to read all the files every week or so and create “memory summary” files, which I then loaded as Project Knowledge within the Claude Project (essentially persistent context that Claude always remembers between conversations).

    This type of system can be slowly improved (and the best bit is you can ask the AI to both recommend improvements, as well as implement them!). Below is an example of an instruction contained in a much larger instruction file, just to give some flavour:

    Scan the AI-Memory/Curated directory for new files since the last memory update. Create a manifest of files to process and an update plan. Store the results in a new dated directory `/AI-Memory/Working/YYYYMMDD-memory-update/` with files: `scan-manifest.md` (list of new files found, template located at `/AI-Memory/Working/scan-manifest-TEMPLATE.md`) and `update-state.json` (tracking progress, template located at `/AI-Memory/Working/update-state-TEMPLATE.json`).
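    To give a concrete feel for what that instruction asks Claude to do, here is roughly the same scan expressed as Python. This is a sketch only — the paths and manifest format are illustrative, and in practice Claude writes these files itself from the templates:

```python
import json
from datetime import date
from pathlib import Path

def scan_new_files(curated_dir: str, working_dir: str, last_update: str) -> Path:
    """Find curated files dated after the last update, then write a
    scan manifest and an update-state file into a dated working directory."""
    new_files = sorted(
        p.name for p in Path(curated_dir).glob("*.md")
        if p.name[:8] > last_update  # relies on the YYYYMMDD filename prefix
    )
    out = Path(working_dir) / f"{date.today():%Y%m%d}-memory-update"
    out.mkdir(parents=True, exist_ok=True)
    (out / "scan-manifest.md").write_text(
        "# Scan Manifest\n" + "\n".join(f"- {n}" for n in new_files) + "\n"
    )
    (out / "update-state.json").write_text(
        json.dumps({"files": new_files, "topics_completed": []}, indent=2)
    )
    return out
```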

    The Compound Effect

    Here’s what I didn’t expect: after about a month, the system became genuinely indispensable.

    Not because of any single document, but because of the non-obvious connections it would start highlighting unexpectedly.

    These insights come from having context that spans time – not from any individual note, but from the patterns it could identify across weeks of accumulated material.

    Building the Habit (Without the Hassle)

    The reason most knowledge management systems fail? They require too much discipline. This doesn’t.

    After every important meeting, I spend 30 seconds dropping the transcript into Claude. After reading a strategy doc, same thing. Contract review? Technical design? Team feedback? Into Claude it goes.

    I still keep some notes (but mostly because old habits die hard). More and more though I find myself just feeding the machine and letting it build out a “memory”.

    What’s Next

    Once you have a few dozen files spanning a few weeks, the system comes alive (not literally, not yet 🤖). You start having conversations with Claude that assume context. Instead of explaining everything from scratch, you jump straight to: “Given what you know about our platform strategy, what are the risks in this new proposal?”

    Next post, I’ll show you how the system evolved – how I added automation, created reusable templates, and built commands that made it even easier to use and a little more consistent.

    But for now, just start. Create that folder. Process a few documents. Give your AI something to remember.

    The magic isn’t in the technology. It’s in figuring out how to use the existing tools in more interesting ways.

    Cheers,
    Dave


    Why Your AI Assistant Forgets Everything (And Why Mine Doesn’t)

    If you’re a developer, you’re drowning in AI productivity content. Claude Code, Cursor, Copilot: there’s a new YouTube tutorial every hour promising to 10x your coding output. But if you’re a people leader or knowledge worker? You get… prompt templates. Maybe a ChatGPT course on “effective communication”. Endless LinkedIn drivel (from someone trying to sell you a course, no doubt) on how to use AI to earn six figures a month.

    Here’s what’s been bothering me: we’re building incredible AI tools for writing code, but we’re only scratching the surface for the people who spend their days navigating complex organisational systems, synthesising information from dozens of sources, and making decisions that rely on months of accumulated context.

    The tooling gap is real, or it at least feels that way to me (if I’m missing any good sources though, please share!).

    The Hidden Complexity of Knowledge Work

    Consider something I’m sure most engineering leaders do fairly regularly: reporting on delivery. On the surface it’s straightforward. I pull updates (comments mostly) from Jira, summarise progress, identify blockers, and highlight wins or risks. And AI can already help with this simple summarise-and-write use-case.

    Except that’s not what actually creates much value. The magic happens when I can connect:

    • That innocuous comment in ticket ABCD-456 about “waiting on infrastructure” to the GPU shortage mentioned in last week’s planning session
    • The pattern across three different teams all building similar workarounds (because they don’t know about each other’s solutions)
    • How a delay in one team’s authentication service will cascade into the next half’s roadmap for two other teams
    • The fact that the same blocker was actually resolved six months ago by a different team, but that knowledge walked out the door when someone left

    This isn’t about automating away the work – it’s about augmenting my ability to see patterns and connections across time and organisational boundaries. But every time I open a new chat with Claude or ChatGPT, I’m starting from zero. Again.

    The Groundhog Day Problem

    We’ve all developed workarounds for AI’s amnesia:

    • That ever-growing “context.txt” file that’s part documentation, part archaeological dig
    • Elaborate prompt templates that try to compress months of nuance into paragraphs
    • Copy-paste marathons where you’re not even sure what context is relevant anymore

    But here’s the thing: for developers, this problem is largely solved. Their IDE remembers their codebase, their tools understand project structure, and their AI assistants can navigate entire repositories.

    For the rest of us, we’re still copying and pasting, still explaining our organisational context from scratch, still losing those crucial connections that only emerge from longitudinal understanding.

    Tools are catching up though. For example, you can set up Projects in Claude Desktop and then upload a bunch of “project knowledge” to add context. This is actually how I started, but it’s still limited, and more of a shotgun approach than what I was looking for.

    What We’re Actually Trying to Solve

    This isn’t about making AI do my job for me (although that would give me more time to suck at golf…). When I’m compiling delivery reports or preparing strategic docs, I’m not looking to outsource the thinking, rather I’m looking to augment it. I want an AI that can:

    • Surface connections I might have missed
    • Remember decisions and their rationales from months ago
    • Track how situations evolve over time
    • Understand the real (not org-chart) relationships between teams and people

    In other words, I want an AI assistant that not only gives me advice on how to tidy up my rough draft of a Slack post, but then reminds me of that related thread from a conversation 3 weeks ago which I might want to link to in this post. It’s basically a personal assistant with a better memory and better note-taking abilities than I have!

    Beyond the Band-Aid Solutions

    The current “solutions” miss the point:

    • Cloud-based memory sounds great until you realise you’re uploading sensitive organisational information to someone else’s servers. That strategy discussion about potential redundancies? Those concerns about a struggling team? Not exactly comfortable territory.
    • Prompt libraries help with consistency but do nothing for context. It’s like having a great recipe but no memory of what ingredients you have in the pantry.
    • Custom GPTs or Assistants get you partway there, but they’re still trapped in their own silos, can’t access your local files, and have token limits that laugh at the idea of meaningful historical context.

    The Accidental Discovery

    Over the past year, while dealing with tool shortcomings (or perhaps just my lack of awareness of other tooling?), I accidentally built something that’s become critical to my way of working. Not a revolutionary new AI model or a complex software system – just a way to give Claude Desktop an actual, functional memory that transforms how it can assist with complex knowledge work.

    Now when I’m pulling together delivery reports, my AI assistant doesn’t just see this fortnight’s Jira tickets. It understands:

    • The strategic context from our half-year planning
    • Which initiatives are connected (even when they’re in different backlogs)
    • The history of similar blockers and how they were resolved
    • That directional or opinion piece written by one of our senior leads

    The result? I spend less time digging through notes or relying on what I can recall, and more time on analysis and positioning. Less context-setting and more strategic thinking.

    The Path Forward

    I tend to be reserved and hesitant in sharing because there’s often the feeling that what you’re doing isn’t special or unique, or even particularly clever – but 🤷‍♂️, fuck it, I’ve gotten so much value out of this that it’s at least worth using as an excuse to get back into doing some writing again.

    There’s too much to fit into this post, so I’ll split it up. I’ll use the next post(s) to show you how I built my own AI memory system (one that actually understands the complexity of my role), stores artifacts locally, and doesn’t (at this stage) use anything more complex than human-readable text files.

    I’ll cover:

    1. How to structure information for AI consumption without losing human readability
    2. The surprisingly simple tools that make local, secure memory possible
    3. How to build incrementally (start with one use case, expand naturally)
    4. Real examples from engineering leadership (without the sensitive bits)
    5. The principles that make the difference between a filing system and an intelligence multiplier

    More importantly, I’ll discuss how I stopped treating AI like a “fancy Google” and started building it into a genuine strategic partner, and one that actually remembers your context. I also have some ideas on how to further improve my system, incorporating more sophisticated tooling to mature it and hopefully make it even more useful, without becoming too much of a complicated beast. 🙂

    Cheers,
    Dave


    SQL Server version information from build numbers

    Just a quick post in case anyone else finds this useful: for a report I’ve created, I wanted to have SQL Server versions displayed in a nice, readable form, rather than the usual build number format (e.g. 11.0.3128.0).

    Server dashboard report

    So I found the very useful SQL Server Builds page (thanks to whoever maintains that), and proceeded to write a function which accepts the build number, and spits out a friendlier “version string”, down to the service pack and cumulative update level of resolution. (I didn’t need anything lower than that, but you could easily add it in yourself).
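    The actual function is linked below, but the approach boils down to matching the build against known version and service-pack boundaries. Here’s a cut-down sketch of that idea in Python (only SQL Server 2012’s service-pack boundaries are included, and they’re from memory — consult the SQL Server Builds page for authoritative numbers):

```python
# Maps (major, minor) build components to the marketing version name
MAJOR_VERSIONS = {
    (8, 0): "SQL Server 2000", (9, 0): "SQL Server 2005",
    (10, 0): "SQL Server 2008", (10, 50): "SQL Server 2008 R2",
    (11, 0): "SQL Server 2012", (12, 0): "SQL Server 2014",
    (13, 0): "SQL Server 2016", (14, 0): "SQL Server 2017",
}

# Service-pack build boundaries, highest first (SQL Server 2012 only, as an example)
SP_BOUNDARIES = {
    (11, 0): [(6020, "SP3"), (5058, "SP2"), (3000, "SP1"), (2100, "RTM")],
}

def version_string(build: str) -> str:
    """Turn a build number like '11.0.3128.0' into a friendlier version string."""
    parts = [int(p) for p in build.split(".")]
    major, minor, build_num = parts[0], parts[1], parts[2]
    name = MAJOR_VERSIONS.get((major, minor), f"Unknown ({build})")
    for boundary, sp in SP_BOUNDARIES.get((major, minor), []):
        if build_num >= boundary:
            return f"{name} {sp}"
    return name
```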

    Here’s the file, feel free to download, use, hack to pieces, etc. 😉

    function

    Cheers,
    Dave