Conversation: agentic state management
This transcript is generated by parley.nvim.
Question
Let’s think about state management in agentic coding
So in agentic coding, the state of the issues to fix often lives outside the source control repo. There's tension when a ticket is too big: then there need to be incremental building blocks, which typically manifest as some todo.md file. In my workflow, I keep those todo.md files outside source control as they are ephemeral.
The issue then is that I need to finish that big ticket in one go; I can't really make a checkpoint and come back later.
It seems generally speaking, my intuition is to keep all state in a single place, the issues and the steps we think we need to fix those issues.
This led me to think I should update my AI workflow to be totally repo based, e.g. all tickets should be in some folder, and maybe proposed changes in subfolders. e.g.
issues/
00001/issue.md
00001/todo.md
00001/tombed -> to indicate this issue is "done"
then I guess you can have
00001/dependencies which just contain issue IDs that we need to work on first before this.
It also seems that, instead of running different tasks in parallel and dealing with all the synchronization and conflicts, we will just do a single-threaded thing, one at a time, based on topological order.
I still need to figure out how to bring in parallelism in a structured way; you can always do service separation in different repos, but that goes against my intuition.
thoughts?
🤖:[Claude-Code-Opus]
🧠: The user is thinking deeply about state management in agentic coding workflows. They’re describing a real problem: when using AI agents to work on code, the “meta-state” (what needs to be done, progress tracking, decomposition of tasks) often lives outside the repo (in Jira, Linear, todo.md files excluded from git, or just in conversation context). This creates problems with checkpointing, resumability, and coordination. They’re proposing a repo-based issue tracking system specifically designed for agentic workflows. Let me think about the key tensions and offer substantive thoughts. Key points to address: 1) The fundamental insight about collocating state with code, 2) The specific folder structure they propose, 3) The single-threaded vs parallel execution problem, 4) What’s missing or could be improved in their model. I should engage critically - not just validate but push back where appropriate and add insights they might not have considered. They seem quite experienced with agentic coding workflows, so I should go deep.
This is a really interesting design space, and I think you’re converging on something important. Let me engage with the specific tensions.
The core insight is right: colocate state with code
The fundamental problem you’ve identified is that context is the scarcest resource in agentic coding, and scattering it across systems (Linear, Jira, local ephemeral files, conversation history) means every agent session starts with a costly reconstruction phase. Making the repo the single source of truth for both code and work state is sound — it gets you versioning, diffing, branchability, and resumability for free.
On your proposed structure
The structure is reasonable, but I’d push on a few things:
The tombed marker feels like reinventing git. If an issue is done, that’s really a property of the commit history — the issue folder existed, work was done, and the resolution is the diff. A simpler model might be: issues that exist in issues/ are open; closing an issue means deleting the folder (the history lives in git). Or if you want to keep them around for reference, a simple status file or frontmatter in issue.md works. The tombstone file approach creates a state you have to check that’s redundant with what git already tracks.
todo.md is doing a lot of work. This is really the plan — the decomposition of the issue into steps. I’d be more explicit about its semantics. Is it a checklist the agent checks off? Is it append-only? Can the agent revise it mid-execution? The interesting design question is: should the agent be allowed to modify the plan, or only execute it? In practice, plans change as you learn things during implementation. You probably want the agent to be able to re-plan, but you want that to be an explicit, visible action (a diff you can review), not something that silently happens.
Dependencies as issue IDs is good but incomplete. You’ll likely need to distinguish between:
- Hard dependencies (must be merged first, the code won’t work otherwise)
- Soft dependencies (would be easier if done first, but can be worked around)
- Conflicts (touches the same files, can’t be parallelized safely)
The parallelism problem is the real hard problem
Your instinct toward single-threaded execution is pragmatic, but I think it’s worth being more precise about why parallelism is hard here. It’s not just merge conflicts — it’s semantic conflicts. Two agents can modify different files but break each other’s assumptions.
The structured way to bring parallelism back might not be repo separation but rather scope separation within a repo:
- By module/package boundary: if your codebase has well-defined module boundaries, two issues touching different modules can safely parallelize. The agent needs to know the dependency graph of modules, not just issues.
- By layer: an API contract change can be split into “update the interface” (serial, blocking) and “update all callers” (parallelizable per caller).
- Speculative execution: run N agents in parallel on independent issues, but against the same base commit. Then merge sequentially, and if a later merge has conflicts, re-run that agent against the new base. This is essentially optimistic concurrency control.
The speculative execution model is probably the most practical — you accept that some work will be thrown away, but the wall-clock speedup is worth it for issues that usually don’t conflict.
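The fan-out/join mechanics of this model can be sketched with git worktrees. This is an illustrative sketch only: the issue IDs, branch names, and the throwaway repo created here are all made up, and the agent invocations are elided.

```shell
# Speculative execution sketch: fan out N agents from one base commit
# in isolated worktrees, then join by merging sequentially.
set -e
git init -q demo && cd demo
git config user.email agent@example.com
git config user.name agent
echo base > README && git add README && git commit -qm "base"
base=$(git rev-parse HEAD)

# Fan out: one isolated worktree per independent issue, same base commit.
for id in 0003 0007; do
  git worktree add "../wt-$id" -b "issue/$id" "$base" >/dev/null 2>&1
  # an agent would now run inside ../wt-$id (not shown)
done

# Join: merge sequentially; a conflict means re-running that agent
# against the new base rather than hand-resolving the merge.
for id in 0003 0007; do
  git merge -q "issue/$id" || echo "conflict: re-run agent for $id on new base"
done
```

The join loop is where the optimistic-concurrency trade shows up: conflicting work is discarded and redone, not patched up.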
What might be missing
One thing I don’t see in your model: the agent’s reasoning trace. When you come back to a half-finished issue, the todo.md tells you what was planned, but not why certain decisions were made. You might want something like 00001/log.md that captures key decisions and discoveries during execution. This is essentially the agentic equivalent of commit messages, but at a higher level of abstraction.
Also worth considering: who creates the issue decomposition? If a human writes issue.md and the agent generates todo.md, that’s a clear contract. If the agent does both, you need a review step before execution begins.
📝: You asked about state management in agentic coding workflows and a repo-based issue tracking structure, I answered with validation of the colocate-state-with-code insight, critiques of tombstone files and todo.md semantics, pushed on dependency types, suggested parallelism via scope separation and speculative/optimistic concurrency, and noted missing elements like reasoning traces and plan authorship contracts.
Question
The following is a response from ChatGPT. Thoughts?
Yes. You are noticing a real systems problem, not just an AI-tooling quirk.
The core tension is:
1. the work state exists outside the repo,
2. but the code evolution is inside the repo,
3. so resumability, auditability, and decomposition drift apart.
That is why large tickets feel “must finish now.” The missing piece is not coding power. It is durable, local, versioned task state.
Your repo-based model is directionally right. I would push it a bit further.
The main idea
Treat issue execution as a state machine stored beside the code.
Not just:
- issue statement
- todo scratchpad
- done marker

But:
- problem definition
- current understanding
- decomposition
- dependency edges
- execution log
- acceptance criteria
- resulting commits / PRs / artifacts
That gives the agent somewhere to “stand” when it comes back later.
Why your current pattern breaks
Keeping todo.md outside source control is fine for purely personal scratch work, but for agentic coding it causes three failures:
- No checkpointing
You lose the intermediate reasoning and plan state that made the next step obvious.
- No causal trace
Later you cannot tell whether a sub-step was finished, invalidated, or superseded.
- No shared substrate
If multiple agents or even the same agent across time work on it, they have no canonical source of task truth.
So the result is exactly what you described: large tickets become “finish in one sitting or pay large re-entry cost.”
Your proposed structure is mostly sound
Something like:
issues/
  00001/
    issue.md
    todo.md
    dependencies
    tombstone
is already much better than external ephemeral files.
But I would change a few things.
Minimal fixes
- Replace tombed with explicit machine-readable state
Do not use file presence as the main status signal. It is too lossy.
Prefer:
issues/00001/status.json
For example:
{
  "state": "done",
  "updated_at": "2026-03-28T10:00:00Z",
  "blocked_by": [],
  "depends_on": ["00003", "00007"],
  "supersedes": [],
  "superseded_by": null
}
Or if you want human-friendly first:
state: done
updated_at: 2026-03-28T10:00:00Z
depends_on:
  - 00003
  - 00007
blocked_by: []
Reason: agents and scripts can consume it deterministically.
- Split todo.md into plan vs log
One file tends to become a mess of stale intentions plus historical junk.
Use:
plan.md
log.md
• plan.md: current intended next steps only
• log.md: chronological execution notes, discoveries, failed attempts, decisions
This matters a lot. Otherwise stale todos poison future runs.
- Put acceptance criteria in their own file
Large-ticket drift often comes from the plan replacing the goal.
Use:
acceptance.md
That lets an agent ask: “what counts as done?” instead of hallucinating completion.
- Dependencies should be structured, not freeform text
A file named dependencies is okay, but make it machine-readable:
depends_on.txt
or YAML/JSON.
Example:
00003
00007
00011
Simple is fine.
A better repo shape
I would probably use:
issues/
  00001/
    issue.md        # problem statement
    acceptance.md   # definition of done
    plan.md         # current decomposition / next steps
    log.md          # execution history, discoveries, decisions
    status.yaml     # state, priority, owner, timestamps
    depends_on.txt  # issue IDs
    artifacts/      # generated diffs, notes, screenshots, benchmarks
Optional:
patches/
research/
if the task requires exploration before code changes.
State model
You want explicit lifecycle states, maybe:
- new
- ready
- in_progress
- blocked
- review
- done
- abandoned
That is enough.
For example:
state: in_progress
priority: medium
created_at: 2026-03-28T09:10:00Z
updated_at: 2026-03-28T10:22:00Z
depends_on:
  - 00003
blocked_by: []
This gives you resumability without inventing a huge PM system.
Single-threaded topological execution
This is the part where your instinct is strong.
For agentic work, single-threaded by default is correct.
Not because parallelism is bad in principle, but because most repo-local tasks share hidden state:
- same codebase assumptions
- same branch
- overlapping files
- overlapping test surfaces
- evolving plan context
Parallelism here often creates fake speed and real merge debt.
Topological execution over explicit dependencies is much saner. It gives:
- deterministic work order
- fewer conflicts
- clearer blocked states
- easier restart after interruption
So yes, “one at a time in dependency order” is a good default operating model.
Where parallelism actually belongs
Not “multiple agents randomly touching the same repo.”
Parallelism works when units are genuinely separable along one of these dimensions:
- File/ownership disjointness
Two tasks provably touch different bounded areas.
- Interface-first decomposition
One task defines contract/spec/tests, another implements against that fixed interface.
- Research vs execution split
One agent investigates and writes structured findings, another applies code changes.
- Branch-isolated speculative work
Parallel branches only for alternatives, not concurrent mutation of the same plan.
So I would not bring parallelism in as “do many tickets at once.” I would bring it in as controlled fork-join under explicit boundaries.
The deeper issue: issue systems are not enough
You also correctly noticed that “issues” and “steps to solve issues” want to live together.
Classic ticket systems separate:
- ticket = external tracker
- code = repo
- work log = chat / notes / memory
- decomposition = hidden scratchpad
That architecture is bad for AI workflows.
AI agents benefit from co-locating operational state with the code. In other words, the repo becomes not just source code, but the local working memory substrate.
That is a meaningful shift.
But there is one objection
Knowledgeable people will object that this pollutes the repo with ephemeral planning noise.
That objection is valid.
There are really three possible answers:
Option A: Commit issues/ into the repo
Pros:
- full auditability
- perfect resumability
- easy collaboration

Cons:
- lots of operational noise
- history clutter
- people may dislike mixing planning with product code
Option B: Keep issues/ in repo but gitignored
Pros:
- local durable state
- no repo pollution
- agents still get a canonical on-disk structure

Cons:
- not shared across machines unless separately synced
- weaker team-level coordination
- state can silently diverge
Option C: Separate “worktree state repo”
A second repo colocated with the main repo, e.g.:
myproject/
myproject.ops/
Pros:
- structured durable state
- versioned independently
- no pollution of main source tree
- sharable if desired

Cons:
- another moving part
- slightly weaker coupling to code unless tooling is good
I suspect Option C is probably your best long-term architecture.
It preserves your intuition that state should be repo-based, but avoids contaminating the product repo with ephemeral execution debris.
This is the model I would recommend
Use a separate operational repo or sibling directory:
repo/
repo.work/
  issues/
    00001/
      issue.md
      acceptance.md
      plan.md
      log.md
      status.yaml
      depends_on.txt
      links.yaml
Where links.yaml might contain:
code_repo: ../repo
branch: issue/00001-fix-auth-timeout
commits:
  - abc1234
pr: null
That gives you durable work state, resumability, and clean separation.
On decomposition
You are also converging on an important point: a “big ticket” should usually become a DAG of smaller issues, not a single issue with a long todo list.
So instead of:
- 00001 with a giant todo.md

prefer:
- 00001 parent issue
- 00002 extract parser
- 00003 add failing tests
- 00004 refactor interface
- 00005 migrate callers
with explicit dependencies.
Then the parent issue becomes mostly a coordination node.
That is much more robust than giant freeform todo documents.
A practical heuristic
Use a new issue node when a step has any of these properties:
- can be committed independently
- has its own acceptance criteria
- may block or unblock other work
- could be delegated
- could be abandoned without invalidating the entire ticket
If not, keep it as a checklist item inside plan.md.
That gives a sensible boundary between “sub-issue” and “todo item.”
How to handle resuming later
When an agent starts work, it should only need to read:
- issue.md
- acceptance.md
- status.yaml
- depends_on.txt
- plan.md
- tail of log.md
That is the minimal context pack.
If it needs more, it can inspect artifacts and code. This keeps restart cheap.
My blunt take
Your current instinct is right, but I would not say “all tickets should be in the source repo” as a universal rule.
The stronger rule is:
all execution state should be on disk, structured, durable, and near the codebase.
Whether that lives inside the main repo, in a gitignored subtree, or in a sibling ops repo is a secondary design choice.
For most serious agentic coding workflows, I would choose:
- code repo: product source
- work repo / ops repo: issue DAG, plans, logs, artifacts
- default execution: single-threaded topological order
- parallelism: only through explicit fork-join boundaries
That is coherent and scales much better than ad hoc todo.md scratchpads.
If you want, I can sketch a concrete file schema plus a tiny CLI design for creating issues, marking state transitions, and selecting the next runnable task.
🤖:[Claude-Code-Opus]
🧠: The user is showing me a response from another LLM and asking for my thoughts. This is a meta-discussion about comparing perspectives. I should be honest about where I agree, where I disagree, and where I think the other LLM’s response has strengths or weaknesses compared to my own earlier response. Let me think critically.
Key things to evaluate:
- The state machine / lifecycle model (new, ready, in_progress, blocked, review, done, abandoned)
- The plan.md vs log.md split
- The Option A/B/C analysis (in-repo, gitignored, separate repo)
- The recommendation of Option C (separate ops repo)
- The DAG decomposition of big tickets into sub-issues
- The structured files (status.yaml, acceptance.md, etc.)
- The “minimal context pack” for resuming
My honest assessment: this response is thorough but I think it over-engineers the problem significantly. It’s the classic LLM pattern of taking a good idea and adding layers of structure until it becomes a project management system. Let me be specific about what I think is good and what’s over-designed.
The separate ops repo (Option C) recommendation is interesting but I actually think it works against the user’s original insight — colocating everything. The whole point was reducing context fragmentation.
The status.yaml, acceptance.md, links.yaml etc. — this is creeping toward reinventing Jira on disk. The user’s original simple structure was better.
The DAG decomposition point is genuinely good.
The plan.md vs log.md split is reasonable but I should note that in practice, for agentic coding, the log is often the conversation history itself.
I should be direct and critical, since the user is asking for my honest assessment.
This is a thoughtful response, but I have some real disagreements with it. Let me be direct.
Where it’s strong
The plan vs. log split is genuinely good. I made a similar point about separating the reasoning trace from the plan, but this response names it more cleanly. Stale todos poisoning future runs is a real failure mode — I’ve seen agents re-attempt steps that were already tried and abandoned because the todo.md didn’t distinguish “planned” from “tried and failed.”
The DAG decomposition heuristic is the best part. The rule of thumb — “create a sub-issue when a step can be committed independently, has its own acceptance criteria, could be delegated, or could be abandoned without invalidating the parent” — is practical and correct. That’s the kind of guidance that actually changes how you work.
The “minimal context pack” concept is useful. Naming the exact set of files an agent reads on resume is a good design discipline.
Where I think it goes wrong
It over-engineers the structure. status.yaml, acceptance.md, links.yaml, artifacts/, patches/, research/ — this is drifting toward building a project management system on the filesystem. The user’s original structure was 3-4 files. This response inflated it to 7+ files per issue with YAML schemas. In practice, that structure becomes a maintenance burden itself, and the agent spends tokens reading and updating metadata instead of writing code.
The pragmatic question is: who maintains all this metadata? If the agent does, you’re spending agent cycles on bookkeeping. If the human does, you’ve just recreated Jira with worse UX.
Option C (separate ops repo) contradicts the core insight. The user’s whole point was that fragmentation of state across locations is the problem. A sibling repo.work/ directory is just a more organized version of the same fragmentation. Now you need to keep two repos in sync, worry about which commit in the ops repo corresponds to which commit in the code repo, and the links.yaml with branch names and commit hashes is exactly the kind of cross-referencing that rots immediately.
I’d argue Option A (committed to the repo) is actually correct for agentic workflows, with one nuance: use a branch convention. The issues/ folder on main contains open issues. Completed issues get cleaned up as part of the closing PR. The full history lives in git. This is simpler and more robust than any of the three options as presented.
The lifecycle states are overkill. new → ready → in_progress → blocked → review → done → abandoned is a workflow engine. For single-threaded topological execution — which both the user and this response agree is the right default — you really only need three states: open, blocked, done. The scheduler picks the first open issue whose dependencies are all done. That’s it. Adding ready vs in_progress vs review implies a multi-actor workflow that doesn’t exist yet.
The deeper thing it misses
This response treats the issue structure as a data format problem and solves it with more files and schemas. But the harder problem is what the agent does when the plan is wrong.
An agent starts executing plan.md, discovers the approach won’t work at step 3 of 7. What happens? The response says the agent should be able to re-plan, but doesn’t address the mechanics. I think this is where the real design work is:
- Should the agent stop and create new sub-issues in the DAG?
- Should it modify plan.md in place and note the change in log.md?
- Should it escalate to the human?
The protocol for plan revision matters more than the file layout. A simple structure with a clear revision protocol will outperform an elaborate structure with no protocol.
My blunt take on their blunt take
The response says “all execution state should be on disk, structured, durable, and near the codebase” — that’s right. But then it recommends putting it in a different repo, which is “near” only in the filesystem sense. The coupling between work state and code state is semantic, not spatial. Git branches already give you a mechanism to associate work-in-progress state with code changes without polluting main. Use that.
Keep the structure minimal. Start with what the user originally proposed. Add files only when you hit a concrete failure mode, not preemptively.
📝: You asked for my critical assessment of another LLM’s response on repo-based issue management for agentic coding. I answered that it correctly identifies plan/log split and DAG decomposition as valuable, but over-engineers the file structure toward a filesystem PM system, the separate ops repo recommendation contradicts the core insight of colocation, the lifecycle states are overkill for single-threaded execution, and the harder unsolved problem is the agent’s plan-revision protocol rather than the data format.
Question
Alright, based on what you understand of my intention, propose a simple but effective structure.
🤖:[Claude-Code-Opus]
🧠: The user wants a practical, minimal structure for repo-based agentic coding state management. Key constraints from the conversation: 1) Everything in the repo (not separate ops repo), 2) Simple - they started with issue.md, todo.md, tombstone, 3) Single-threaded topological execution as default, 4) Must support resumability (checkpointing and coming back later), 5) Must support decomposition of big tickets into sub-issues (DAG), 6) They use agentic coding workflows where an AI agent picks up and executes work. I should propose something close to their original intuition but incorporating the good ideas: plan vs log split, DAG decomposition, machine-readable dependencies. I should NOT over-engineer with status.yaml, acceptance.md, links.yaml etc. I should be opinionated and concrete. Let me think about what’s truly minimal: - issue.md: problem statement + what done looks like (no separate acceptance.md, that’s over-engineering) - plan.md: current steps, agent can revise but the diff is visible in git - log.md: discoveries, failed attempts, key decisions - deps: just issue IDs, one per line - done marker: I argued against tombstone files earlier. But actually, for simple scripting/agent consumption, a status indicator is useful. Simplest: the agent reads a single line from a status file. Or… just use presence in a convention. Hmm. Let me think about what the scheduler needs: it needs to know which issues are open and which are done, and dependency order. Simplest: a status file with one word in it: open, blocked, done. That’s 3 states as I recommended. Actually even simpler: issues/ folder contains open issues. A done issue gets its folder moved or deleted. But that loses history… no, git keeps it. But the agent needs to check deps - if a dep issue folder doesn’t exist, it’s done. That’s actually elegant but fragile. OK, I think a simple status file with one word is the right balance. 
Let me also think about the plan revision protocol since I criticized the other response for not addressing it. And the DAG decomposition - when should a step become a sub-issue. Let me draft the structure and conventions.
Here’s what I’d propose — biased toward minimalism, with clear conventions over configuration.
Structure
issues/
0001/
issue.md
plan.md
log.md
deps
status
That’s it. Five files max per issue.
File semantics
issue.md — The problem statement and what done looks like. One file. Keep them together because the definition of done is meaningless without the problem context. Format:
# Auth tokens not refreshed on 401
When the API returns 401, the client crashes instead of refreshing.
## Done when
- Client intercepts 401, refreshes token, retries original request
- Test covers the refresh-and-retry path
- No other callers affected
plan.md — The current execution plan. A checklist. The agent owns this file and can revise it. Revisions are visible in git diff, which is your audit trail. No separate versioning needed.
- [x] Identify all call sites that hit authenticated endpoints
- [x] Extract token refresh logic from login flow
- [ ] Add retry interceptor to HTTP client
- [ ] Write test for 401 → refresh → retry
- [ ] Verify no regression in existing auth tests
log.md — Append-only. The agent writes here when something unexpected happens: a failed approach, a discovery, a decision that deviates from the plan. This is what makes resumability work — the next session reads this to understand why the plan looks the way it does.
## 2025-06-28 14:30
Tried wrapping fetch() directly but the interceptor pattern
doesn't work because we use a custom transport layer.
Need to hook in at the transport level instead.
Revising plan step 3.
deps — One issue ID per line. Empty file or absent means no dependencies.
0003
0007
status — A single word on a single line. Three possible values:
open
- open — ready to work on (or being worked on; in single-threaded mode, this distinction doesn't matter)
- blocked — can't proceed, and the reason isn't just deps (agent got stuck, needs human input, external blocker)
- done — finished
That’s the entire lifecycle. The scheduler logic is trivial:
pick the first open issue whose deps are all done
DAG decomposition
When a step in plan.md gets big enough that it could be committed, tested, and reviewed independently — pull it out into its own issue. The parent issue’s plan then references the child:
- [x] Extract token refresh logic → issue 0008
- [ ] Add retry interceptor → issue 0009
- [ ] Integration test
The parent’s deps doesn’t list its children (that would be circular). Instead, the parent stays open and its plan tracks children by reference. The parent is done when its plan is fully checked off.
Children can depend on each other via their own deps files. That’s where your topological ordering comes from.
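Since each deps file is just prerequisite IDs one per line, the full topological order falls out of standard tools. A sketch using coreutils tsort, with an illustrative issues/ tree fabricated inline:

```shell
# Build a tiny issues/ tree (made-up IDs): 0001 depends on 0002,
# 0002 depends on 0003, 0003 has no dependencies.
mkdir -p issues/0001 issues/0002 issues/0003
printf '0002\n' > issues/0001/deps
printf '0003\n' > issues/0002/deps
: > issues/0003/deps

# Emit "prerequisite issue" pairs; a pair of identical IDs just
# registers an issue with no deps. tsort prints prerequisites first.
for dir in issues/*/; do
  id=$(basename "$dir")
  echo "$id $id"
  while IFS= read -r dep; do
    [ -n "$dep" ] && echo "$dep $id"
  done < "$dir/deps"
done | tsort
# prints: 0003, 0002, 0001 (one per line)
```

The scheduler never needs the full order, but this is handy for visualizing the DAG or detecting cycles (tsort reports them on stderr).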
Plan revision protocol
This is the part that matters more than file layout. Three rules:
1. The agent can revise plan.md freely, but must write an entry in log.md explaining why before making the change. This makes plan drift visible and reasoned.
2. If the agent discovers the issue itself is wrong (wrong problem statement, wrong scope), it sets status to blocked and writes in the log what needs human attention. It does not silently redefine the issue.
3. If a plan step turns out to need more than ~2-3 commits, the agent decomposes it into a sub-issue rather than inflating the current plan. This keeps each issue's plan short enough to hold in context.
Committing strategy
Commit issues/ to the repo, on whatever branch you’re working on. When an issue is done and merged, you can either:
- Delete the folder in the merge commit (clean main, history in git)
- Leave it (simple, searchable, slightly noisy)
I’d start with deleting on merge. If you miss having the history around, switch to leaving them.
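The delete-on-merge option is a couple of git commands. A throwaway demonstration (the repo and issue ID here are invented) showing that the history survives the deletion:

```shell
# "Delete the folder in the merge commit": the folder leaves the working
# tree, but `git log -- issues/0001` still shows its full history.
set -e
git init -q closedemo && cd closedemo
git config user.email dev@example.com
git config user.name dev
mkdir -p issues/0001
echo open > issues/0001/status
echo "# Fix the thing" > issues/0001/issue.md
git add issues && git commit -qm "open issue 0001"

# ... agent works, issue completes ...
git rm -qr issues/0001
git commit -qm "close issue 0001"
```

After this, `git log --oneline -- issues/0001` lists both the opening and closing commits even though the folder is gone.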
A minimal scheduler
This is almost trivially implementable:
#!/bin/bash
# next-issue: prints the next issue to work on
for dir in issues/*/; do
id=$(basename "$dir")
status=$(cat "$dir/status" 2>/dev/null || echo "missing")
if [ "$status" != "open" ]; then
continue
fi
# Check all deps are done
all_done=true
if [ -f "$dir/deps" ]; then
while IFS= read -r dep; do
dep=$(echo "$dep" | tr -d '[:space:]')
[ -z "$dep" ] && continue
dep_status=$(cat "issues/$dep/status" 2>/dev/null || echo "missing")
if [ "$dep_status" != "done" ]; then
all_done=false
break
fi
done < "$dir/deps"
fi
if [ "$all_done" = true ]; then
echo "$id"
exit 0
fi
done
echo "No runnable issues found." >&2
exit 1
# Usage
chmod +x next-issue
./next-issue
# Output: 0001
What I deliberately left out
- No YAML/JSON metadata. Plain text files are easier for both humans and agents to read and write. You don’t need structured data until you’re building dashboards.
- No separate acceptance criteria file. It's three lines in issue.md. Another file is another thing to keep in sync.
- No artifacts folder. If the agent produces research or patches, they're either commits or they go in the log.
- No priority field. In single-threaded topological execution, priority is determined by dependency order and the sequence you create issues in. If you need explicit priority later, add a single number to the first line of status (e.g., open 1).
Start here. Add structure only when you hit a specific failure mode that forces it.
📝: You asked me to propose a concrete minimal structure for repo-based agentic coding state management, I answered with a 5-file-per-issue structure (issue.md, plan.md, log.md, deps, status), three lifecycle states (open/blocked/done), a plan revision protocol (log before revise, block if issue is wrong, decompose if step is too big), DAG decomposition convention, and a trivial bash scheduler — deliberately excluding YAML metadata, separate acceptance files, artifacts folders, and complex lifecycle states.
Question
If I move towards that (e.g. repo based issue management), then I’ll need some base level tool to help manage them.
I already have a tool called parley.nvim. I guess I'll just extend that; a subsystem there would be an issue management system based on a folder of .md and .yaml files. Given how much you know about parley, read the specs folder for parley and propose an issue management subsystem, following the same "Keep It Simple" philosophy: the issue management system is just a folder of markdown files.
🤖:[Claude-Code-Opus]
🧠: The user wants me to read the specs folder of their parley.nvim plugin to understand its architecture and philosophy, then propose an issue management subsystem that fits naturally within it. Let me fetch the repo and specs first. Let me read the specs index and a few key spec files to understand the architecture and philosophy.
🧠: I can see the parley.nvim structure now. Key patterns I observe: 1) Everything is markdown/plain files (chats are markdown with emoji prefixes like `## Question` and 🤖:), 2) There's already a notes system with year/month/week organization, 3) Specs are organized in a clean hierarchy, 4) The philosophy is “plain markdown files you can edit anytime” and “keyboard-first”, 5) There's already a chat_dir and notes_dir concept, 6) There's a picker system (telescope-based) for finding/navigating, 7) The plugin already has concepts of lifecycle, templates, and finder. The user wants me to propose an issue management subsystem that fits naturally within parley's existing patterns. I should propose something that mirrors how chats and notes already work — markdown files, simple directory structure, telescope pickers, minimal metadata. The key insight is that parley already treats chats as “a folder of markdown files with a header” — issues should follow the exact same pattern. I should leverage existing parley infrastructure: pickers, file references (@@), the template system, and the notes-like directory structure. Let me propose something concrete.
Based on what I can see from the specs index and the README, here’s what I’d propose. The key design principle: issues should work exactly like chats and notes already work in parley — markdown files with frontmatter, navigable via pickers, editable as normal buffers.
Design: Issues as Markdown with YAML Frontmatter
Parley already has the pattern: chats are markdown files with a header block. Notes have templates and a directory structure. Issues should be the same thing — no new abstractions, just a new issues_dir.
Directory layout
{issues_dir}/
0001-auth-token-refresh.md
0002-extract-parser.md
0003-add-retry-tests.md
That’s it. One file per issue. No subdirectories per issue. The plan, log, status, and deps all live inside the single markdown file as sections. This mirrors how a parley chat is one file with all state in it — the header plus the transcript.
Issue file format
Following parley’s convention of frontmatter + structured content:
---
status: open
deps: [0002]
created: 2026-03-28
updated: 2026-03-28
---
# Auth tokens not refreshed on 401
When the API returns 401, the client crashes instead of refreshing.
## Done when
- Client intercepts 401, refreshes token, retries original request
- Test covers the refresh-and-retry path
## Plan
- [x] Identify all call sites that hit authenticated endpoints
- [x] Extract token refresh logic from login flow → issue 0002
- [ ] Add retry interceptor to HTTP client
- [ ] Write test for 401 → refresh → retry
## Log
### 2026-03-28 14:30
Tried wrapping fetch() directly but interceptor pattern doesn't
work with custom transport layer. Revising plan step 3.
### 2026-03-28 15:10
Transport-level hook works. Proceeding with retry logic.
Why single file, not a folder-per-issue
Three reasons that align with parley’s philosophy:
- Parley chats are single files. A chat’s entire state — header, system prompt reference, all turns — is one `.md`. Issues should follow the same pattern. You already have muscle memory for navigating a structured markdown file in nvim.
- Telescope picker works on files, not directories. The existing picker infrastructure can list issues the same way it lists chats or notes — `rg` or `fd` over a directory, show filename + maybe grep the status from frontmatter. No new picker abstractions needed.
- `@@` file references already work. In a parley chat, you can reference `@@issues/0001-auth-token-refresh.md` to pull an issue into context when discussing it with an LLM. One file = one reference. If issues were folders, you’d need to reference multiple files or invent a new aggregation syntax.
Minimal subsystem spec
Following the pattern of existing parley specs, here’s what the implementation would need:
specs/issues/structure.md -- format, frontmatter schema, sections
specs/issues/lifecycle.md -- status transitions, scheduler logic
specs/issues/templates.md -- template for new issues
Config additions
require("parley").setup({
-- ...existing config...
issues_dir = vim.fn.getcwd() .. "/issues", -- repo-local by default
issue_template = "default",
})
Note: unlike chat_dir and notes_dir which are global user directories, issues_dir defaults to cwd-relative because issues belong to the repo, not the user.
Commands and keybindings
Following the same chord-prefix pattern as parley’s `<C-g>` bindings, with `<C-i>` as the issues prefix:
| Command | Binding | Description |
|---|---|---|
| `:ParleyIssueNew` | `<C-i>c` | Create issue from template, auto-increment ID |
| `:ParleyIssueFinder` | `<C-i>f` | Telescope picker over issues, preview, open |
| `:ParleyIssueNext` | `<C-i>x` | Open the next runnable issue (topo sort) |
| `:ParleyIssueStatus` | `<C-i>s` | Cycle status in frontmatter (open → blocked → done) |
| `:ParleyIssueDecompose` | `<C-i>d` | Create child issue, insert dep link in current plan |
The IssueFinder should show status in the picker line, e.g.:
[open] 0001-auth-token-refresh
[open] 0003-add-retry-tests
[blocked] 0005-migrate-callers
[done] 0002-extract-parser
Sort order should default to topological order by dependency, falling back to oldest-first (ID order).
With filtering — default to showing open + blocked, toggle to show all.
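One way to get that ordering — topological where deps exist, ID order otherwise — is Kahn’s algorithm with a lowest-ID tiebreak. A Python sketch of the picker-line logic (the issue dicts are hypothetical stand-ins for parsed frontmatter, not parley’s actual data structures):

```python
def picker_lines(issues, show_all=False):
    """Sort issues topologically (deps first), break ties by ID,
    then format one '[status] id-slug' line per issue."""
    by_id = {i["id"]: i for i in issues}
    # Track unmet deps, ignoring deps that point at missing files
    remaining = {i["id"]: {d for d in i["deps"] if d in by_id} for i in issues}
    order = []
    while remaining:
        ready = sorted(i for i, deps in remaining.items() if not deps)
        if not ready:              # dependency cycle: fall back to ID order
            ready = sorted(remaining)
        nxt = ready[0]
        order.append(nxt)
        del remaining[nxt]
        for deps in remaining.values():
            deps.discard(nxt)
    return [
        f"[{by_id[i]['status']}] {i}-{by_id[i]['slug']}"
        for i in order
        if show_all or by_id[i]["status"] in ("open", "blocked")
    ]
```

The cycle fallback matters: a picker should still list everything even if someone hand-edits frontmatter into a dependency loop.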
Scheduler logic (the IssueNext command)
-- Next runnable issue: lowest-ID open issue whose deps are all done.
-- parse_all_issues and sorted_by_id are assumed helpers.
function M.next_issue(issues_dir)
  local issues = parse_all_issues(issues_dir)
  local done_ids = {}
  for _, issue in ipairs(issues) do
    if issue.status == "done" then
      done_ids[issue.id] = true
    end
  end
  for _, issue in ipairs(sorted_by_id(issues)) do
    if issue.status == "open" then
      local all_deps_done = true
      for _, dep in ipairs(issue.deps) do
        if not done_ids[dep] then
          all_deps_done = false
          break
        end
      end
      if all_deps_done then
        return issue
      end
    end
  end
  return nil
end
Parsing frontmatter is trivial — read the file, extract the YAML block between --- markers, pull status and deps. Parley already parses chat headers, so this pattern exists in the codebase.
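A minimal sketch of that extraction — no full YAML parser, just the `---` block plus the two fields the scheduler needs. Python here for illustration only (parley would do this in Lua); field names follow the format above, error handling omitted:

```python
def parse_frontmatter(text):
    """Extract status and deps from a '---'-delimited frontmatter block."""
    meta = {"status": "open", "deps": []}      # defaults if no frontmatter
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return meta
    for line in lines[1:]:
        if line.strip() == "---":              # closing marker: stop
            break
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "status":
            meta["status"] = value
        elif key == "deps":
            # deps is an inline flow list like [0002, 0004]
            meta["deps"] = [d.strip() for d in value.strip("[]").split(",") if d.strip()]
    return meta
```

This deliberately handles only inline `deps: [...]` lists; if you ever need block-style YAML lists, swap in a real YAML parser.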
The IssueDecompose command
This is the one non-obvious command. When your cursor is on a plan line like:
- [ ] Extract token refresh logic from login flow
Running :ParleyIssueDecompose should:
- Create a new issue file `0002-extract-token-refresh.md` (prompts for slug)
- Copy the line as the issue title
- Add the new ID to the parent’s `deps` in frontmatter
- Replace the plan line with: `- [ ] Extract token refresh logic → 0002`
- Open the new issue for editing
This makes DAG decomposition a single keystroke from within the plan.
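The buffer surgery behind those steps is plain text manipulation. A Python sketch of the first four steps as a pure function (file I/O and the slug prompt left out; the function name and signature are hypothetical):

```python
def decompose(plan_line, new_id, slug, parent_deps):
    """Given a '- [ ] ...' plan line, produce the child issue's
    filename and body, the replacement plan line, and the
    parent's updated deps list."""
    title = plan_line.strip().removeprefix("- [ ]").strip()
    filename = f"{new_id}-{slug}.md"
    body = f"---\nstatus: open\ndeps: []\n---\n# {title}\n"
    replaced = f"- [ ] {title} → {new_id}"
    return filename, body, replaced, parent_deps + [new_id]
```

Keeping this a pure function makes the command trivial to test: the editor side just reads the cursor line, calls it, writes the new file, and patches the two buffers.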
Template
---
status: open
deps: []
created:
updated:
---
#
## Done when
-
## Plan
-
## Log
Follows the same template variable pattern as note templates.
What I deliberately kept aligned with parley’s existing patterns
- Frontmatter for metadata — chats already use this
- Emoji-free — unlike chat turns (💬/🤖), issues don’t need visual turn markers; markdown headings suffice
- Picker-first navigation — same as chat finder and note finder
- `@@`-compatible — issues are just `.md` files, so existing file reference syntax works in chats
- One file = one thing — same as one chat file = one conversation
What this does NOT include (deliberately)
- No priority field (ID order + deps is enough)
- No assignee/owner (single-player workflow)
- No tags/labels (use grep)
- No separate `status` file, `plan.md`, or `log.md` per issue (one file is simpler)
- No JSON/YAML status files (frontmatter is enough)
The previous proposal I gave you with 5 files per issue was designed for a generic CLI tool. For parley specifically, single-file-per-issue is better because it matches the existing “one markdown file is one unit of work” pattern that pervades the plugin.
📝: You asked about proposing an issue management subsystem for parley.nvim that fits its philosophy, I answered with a single-file-per-issue design using YAML frontmatter + markdown sections (status/plan/log in one file), telescope picker integration, 5 commands (New/Finder/Next/Status/Decompose), repo-local issues_dir, leveraging existing parley patterns like chat files, @@ references, templates, and pickers.