The Awareness Layer
there’s a framework with tens of thousands of github stars that forces your AI coding agent to stop and think before it writes code. Socratic questions. design specs. the works.
i looked at it and liked the pattern. but the implementation was wrong for me.
their version: the AI refuses to start until it has asked enough questions. no choice. enforced discipline. great for developers who skip planning because they’re lazy.
my version is different because my problem is different.
i had a brain injury in 2016. one of the things it took was prospective memory: knowing you need to do something at a future moment. you know the tools exist. you know when to use them. but the connection between seeing the situation and reaching for the right response doesn’t always fire.
so instead of enforcement, i built an awareness layer. my agent watches for patterns and surfaces the tool in one line. “this looks like a PRISM moment.” “spec first?” “different angle?” one line. then it follows my lead.
the difference matters. enforcement removes human judgment from the loop. a cognitive prosthetic restores it. same shape. completely different purpose.
the framework treats AI agents like junior developers who need guardrails. i treat mine like a pattern recognition layer that fills a gap my brain used to fill on its own.
there are thirteen trigger conditions now. when i first started writing this post there were nine. each new one came from a moment where the gap showed up and i almost fell through it.
each one maps a pattern to a tool. multi-file change with no plan stated: “spec first?” same problem attempted twice: “different angle?” about to change a config that touches multiple systems: “hurt locker?” high-stakes decision where getting it wrong is expensive: “hugr-solve?” creative work produced: auto-save to memory. no question asked.
the whole thing is a markdown table in a config file. no code. no enforcement engine. just a list of situations and what to surface when they show up. the agent does the matching.
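to make it concrete, here’s a sketch of what that table looks like. the column names and exact wording are illustrative, not copied from my actual config, and the real table has thirteen rows:

```markdown
| pattern                                        | surface                            |
|------------------------------------------------|------------------------------------|
| multi-file change with no plan stated          | "spec first?"                      |
| same problem attempted twice                   | "different angle?"                 |
| config change touching multiple systems        | "hurt locker?"                     |
| high-stakes decision, expensive to get wrong   | "hugr-solve?"                      |
| creative work produced                         | auto-save to memory, no question   |
| about to edit code not yet read                | "want to read that first?"         |
```

the left column is a situation described in plain language, the right column is the one line to surface. nothing parses this. the agent reads it the same way a human would.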
sometimes it catches me about to edit code i haven’t read yet and says “want to read that first?” and that one line saves an hour. sometimes i blow past it and the agent stays quiet. it doesn’t always fire. that’s the point. it’s not a gate. it’s a tap on the shoulder.
when you build AI tools for people with cognitive disabilities, the design question changes. it’s not “how do we make the AI do the thing we want it to.” it’s “how do we help the human remember to.”