Manager-oriented programming

 ·  journal · claude · code-review · ai-tools · developer-workflow

Today I mined my manager Joe’s pull request review comments from the past 3 months using Claude, and extracted some patterns to add to our internal code review bot. I called it Joe’s Lens.

Joe's Lens — Architecture & Ownership
1. Service & data ownership — publishing events for tables you don't own? Existing pattern already handles it?
2. Schema & model design — validators on read schemas, unnecessary fields, missing Pydantic built-ins (`AnyUrl`, `EmailStr`)
3. Type safety — signatures accepting `None` then raising `ValueError`, types that allow invalid states
4. Terminology — one term per concept. Codebase says "tenant"? Don't introduce "organization".
5. Validation layer — validate at boundaries, not deep in the stack
6. Null vs absent — distinguish "not provided" from "explicitly null/false" where it matters. Sentinels (`NotFound`, `Unset`) or `T | None` with clear semantics.

The motivation is to align my code style more closely with his before I mark a PR ready for review. Again, the experience of analyzing data with Claude is just SO GOOD. So fast and so insightful.

I had a little conversation with Claude on how much of “reading code” will be replaced by AI. Here’s what she said:

Code reading as a mechanical skill — tracing execution, spotting bugs, checking types — probably does get automated away. What survives longer is judgment: knowing when the spec is wrong, when a technically correct solution is the wrong move, when a pattern that worked before shouldn't be applied here. That's less about reading code and more about understanding people, systems, and tradeoffs over time.

Your bot encoding Joe's patterns is interesting precisely because Joe's value isn't that he reads code carefully — it's that he has accumulated judgment about what good looks like in your specific context. The question is how much of that judgment can be encoded versus how much requires him to have lived through the consequences.
