Agentic Empathy

I was listening to Peter Steinberger talk about OpenClaw on the Lex Fridman podcast when he mentioned something that clicked: agentic empathy. The idea that we need to develop empathy for how agents think. Not emotional empathy, but an understanding of how they process instructions and respond, so we can direct them better.

I’ve been trying to figure out how to work effectively with AI agents. The outputs vary wildly depending on how I frame instructions. Sometimes brilliant. Sometimes completely off. I realized the difference isn’t the agent. It’s me.

Learning How Agents Think

Agentic empathy means understanding the patterns in how agents interpret instructions and produce outputs. Better interactions lead to better outcomes.

The mental model that’s helped me: I’m now functioning as a company of one plus agents. As in the conductor analogy I wrote about, I’m directing a team of hyper-competent engineers who have zero context about my specific needs.

My role is to:

  • Clarify intention and vision: Not just the task, but why it matters and what problem it solves
  • Let go of specifics: Stop caring about implementation details like code style
  • Guard what truly matters: Focus on data integrity, security, and core requirements that have real impact

This shift changes everything. I’m no longer micromanaging the “how.” I’m directing the “what” and “why.”

One Right Question Away

The right result is always one right question away.

I’ve started thinking about each interaction as an iteration. How I frame the question determines the quality of the output. Be specific about outcomes, not methods. Provide context upfront. Distinguish between preferences (let it go) and requirements (guard it).
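
A rough sketch of the difference, with the project details invented purely for illustration:

    Weak: “Add caching to the API.”

    Better: “Our dashboard hits /reports on every page load and it’s slow.
    I want page loads under one second. Cache the report data however you
    see fit. Hard requirements: cached results must never outlive a user’s
    session, and nothing user-specific can leak between accounts. I don’t
    care which library you use or how the code is styled.”

The second version gives the agent the problem, the outcome, and the guardrails, then explicitly lets go of everything else.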

Each interaction teaches me something about how agents process information. I’m learning their patterns, their strengths, their constraints.

Practice Makes Better

This is a skill that develops over time. I’m getting better at:

  • Front-loading context before asking for output (see the sketch after this list)
  • Being explicit about constraints and non-negotiables
  • Iterating quickly when the first output isn’t right
  • Recognizing which types of tasks need which types of instructions
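
For that first one, I’ve settled into keeping a short standing preamble that I reuse at the start of a session. Something like this, with the details made up for illustration (it isn’t a format any tool requires, though some agent tools will pick up a similar file automatically):

    Context: solo developer, Rails app, ~5k users, Postgres in production.
    Non-negotiables: no destructive migrations without a backup step;
    anything that touches billing needs a test.
    Preferences (override freely): I lean toward boring, readable code.

Pasting that once beats re-explaining the same constraints in every exchange.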

The more I understand how agents think, the better they perform.

The Skill That Matters

The bottleneck isn’t typing speed or implementation knowledge anymore. It’s clarity of thought and quality of direction.

Agentic empathy determines how well we work with AI. Those who develop it will direct agents to produce exceptional work. Those who don’t will struggle to get consistent results.

Agents will continue to evolve. And we’ll need to evolve with them. But what remains constant is this: clearer direction and vision yield better results.

Better interactions. Better outcomes. Better conductors.