January 17, 2025

The Architecture That Actually Works: Human-Controlled AI

Part 3 of the AI Knowledge Series

When most people imagine AI in legal practice, they picture something like a very smart research assistant: you give it a question, it goes away and thinks, then comes back with an answer.

That model is exactly backwards.

The most effective AI architectures don't position the AI as an independent agent that produces work for human review. They position the AI as an extension of human capability -- powerful but contained, knowledgeable but directed.

This isn't just a philosophical preference. It's a practical requirement for producing reliable legal work.

The Autonomy Problem

AI systems are remarkably good at generating plausible-sounding text. They're less good at knowing when they're wrong.

Give an AI too much autonomy -- "research this question and write a memo" -- and you get output that looks professional but may contain subtle errors, hallucinated citations, or misapplied precedents. The AI doesn't know what it doesn't know.

The more autonomy you give, the more you need to verify. At some point, verification becomes more work than doing the task yourself.

The Control Architecture

The solution isn't to abandon AI assistance. It's to design systems that keep humans in control while leveraging AI capabilities.

In my practice, this means:

Knowledge is curated, not generated. The AI doesn't decide what authorities matter -- I do. My knowledge base contains cases I've vetted, propositions I've validated, connections I've verified. The AI can search and synthesize this knowledge, but it can't add to it without my review.

Tasks are scoped, not open-ended. Instead of "research Texas law on habendum clauses," I might ask "find cases in my knowledge base discussing the interaction between habendum and Pugh clauses." The AI works within boundaries I've defined.

Output is drafted, not final. Everything the AI produces is a starting point for my work, not a replacement for it. The AI writes a first draft; I revise, verify, and improve.

Process is explicit, not implicit. When the AI follows a workflow -- say, drafting a motion to compel -- every step is documented. I can see what it did, why it did it, and where it might have gone wrong.
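
To make this concrete, here is a minimal sketch in Python of how those four principles might be wired together. Every name in it (Authority, CuratedKnowledgeBase, ScopedTask, and so on) is a hypothetical illustration of the design, not a description of any particular tool.

    from dataclasses import dataclass, field
    from datetime import datetime


    @dataclass
    class Authority:
        # A single vetted entry: a case or proposition the attorney has verified.
        citation: str
        proposition: str
        verified_by: str
        verified_on: datetime


    class CuratedKnowledgeBase:
        # Knowledge is curated, not generated: only a human can add entries.
        def __init__(self) -> None:
            self._entries: list[Authority] = []

        def add(self, entry: Authority, reviewed_by_human: bool) -> None:
            # The AI can propose entries, but nothing lands without attorney review.
            if not reviewed_by_human:
                raise PermissionError("Entries require attorney review before inclusion.")
            self._entries.append(entry)

        def search(self, query: str) -> list[Authority]:
            # The AI may search and synthesize, but only within vetted material.
            q = query.lower()
            return [e for e in self._entries if q in e.proposition.lower()]


    @dataclass
    class ScopedTask:
        # Tasks are scoped, not open-ended; process is explicit, not implicit.
        instruction: str                      # a narrow request, not "research X"
        sources: CuratedKnowledgeBase         # the only material the AI may draw on
        steps_taken: list[str] = field(default_factory=list)

        def log_step(self, description: str) -> None:
            # Every step the AI takes is documented for later review.
            self.steps_taken.append(f"{datetime.now().isoformat()}  {description}")

The point of the sketch is the asymmetry: the AI gets a search method and a log; the human keeps the only write path into the knowledge base.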

The Skill Architecture

Beyond control, effective AI assistance requires encoding specific skills -- not just knowledge, but procedures.

Consider what it takes to draft a good motion to compel in Texas state court. You need to know:

  • The relevant rules (TRCP 215)
  • The specific requirements (certification of conference, etc.)
  • The standard of review
  • How courts in your jurisdiction typically analyze these motions
  • What arguments tend to work (and which don't)
  • Proper formatting and citation practices

A generic AI might know the rules but not the practice. It won't know that Judge Smith prefers detailed factual recitations or that opposing counsel's firm always makes the same procedural arguments.

Encoding this knowledge as structured "skills" -- step-by-step procedures the AI follows -- transforms generic capability into specific utility.
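
What that encoding looks like will vary, but a minimal sketch might be nothing more than a structured checklist the AI walks through in order. The field names and step wording below are illustrative assumptions, not a fixed schema.

    motion_to_compel_skill = {
        "name": "Draft motion to compel (Texas state court)",
        "governing_rules": ["TRCP 215"],
        "steps": [
            "Confirm the certificate of conference requirement is satisfied",
            "Identify the discovery requests and deficient responses at issue",
            "Pull the standard of review from the curated knowledge base",
            "Draft the factual recitation using only documents within the task scope",
            "Draft the argument section from propositions already vetted in the knowledge base",
            "Apply jurisdiction-specific formatting and citation conventions",
            "Flag every citation and factual assertion for attorney verification",
        ],
        "autonomy": "medium",  # AI drafts, attorney reviews everything
    }

Because the procedure is explicit, a correction becomes a one-line change to the skill rather than a hope that the AI behaves differently next time.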

The Trust Gradient

Not all tasks require the same level of control. I've developed what I think of as a "trust gradient":

High autonomy (AI works independently, I spot-check):

  • Initial document review and organization
  • Citation formatting
  • Generating first drafts of routine correspondence

Medium autonomy (AI drafts, I review everything):

  • Research summaries
  • First drafts of motion sections
  • Contract review checklists

Low autonomy (AI assists, I direct each step):

  • Novel legal arguments
  • Strategic recommendations
  • Anything going to court without further revision

The key is matching autonomy to risk. Routine tasks with low consequences can run with less oversight. High-stakes work requires tight control.
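
One way to make that matching explicit is to treat the gradient itself as configuration. The category names and tiers below are a hypothetical sketch that mirrors the lists above.

    from enum import Enum


    class Autonomy(Enum):
        HIGH = "AI works independently; human spot-checks"
        MEDIUM = "AI drafts; human reviews everything"
        LOW = "AI assists; human directs each step"


    TRUST_GRADIENT = {
        "document_review_and_organization": Autonomy.HIGH,
        "citation_formatting": Autonomy.HIGH,
        "routine_correspondence_draft": Autonomy.HIGH,
        "research_summary": Autonomy.MEDIUM,
        "motion_section_draft": Autonomy.MEDIUM,
        "contract_review_checklist": Autonomy.MEDIUM,
        "novel_legal_argument": Autonomy.LOW,
        "strategic_recommendation": Autonomy.LOW,
        "court_filing": Autonomy.LOW,
    }


    def required_oversight(task_category: str) -> Autonomy:
        # Unclassified task types default to the tightest control.
        return TRUST_GRADIENT.get(task_category, Autonomy.LOW)

Defaulting unclassified work to the lowest autonomy is the conservative choice: a task has to earn its way up the gradient.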

Why This Architecture Works

Three reasons this human-controlled approach outperforms the "autonomous agent" model:

Reliability. When the AI works within verified knowledge boundaries, errors stay small. A missed case is easier to catch than a fabricated one.

Learning. Every correction improves the system. When I catch an error, I can update the knowledge base, refine a skill procedure, or adjust the trust gradient.

Scalability. The architecture grows with the practice. New knowledge slots into existing structures. New skills build on established procedures. The system becomes more valuable over time.

The Implementation Reality

I won't pretend this is simple. Building a human-controlled AI architecture requires:

  • Systematic capture of institutional knowledge
  • Careful design of task boundaries
  • Ongoing refinement of procedures
  • Honest assessment of what works and what doesn't

But the alternatives -- avoiding AI entirely or using it without appropriate controls -- mean either missing real benefits or accepting unacceptable risks.

The architecture I've described threads that needle. It's not the only approach, but it's one that works in actual practice.


Next: Skills, Rules, and the Art of Encoding Procedures