A Sustainable Agriculture Séance

How We Know When a Model Works, and Why We Need to Learn in Public

At the Sustainable Agriculture Summit this year, I found myself imagining a sustainable agriculture séance, a playful way to summon lessons from past programs, pilots, and ideas.
We rush forward so quickly that we rarely pause to ask: What worked? What didn’t? Why?

This kind of public honesty is rare, because it requires vulnerability, clarity, and time. It may reveal that something still “alive” is actually brittle. Or that a program we championed was built on questionable logic, or simply inherited from the decisions of a former colleague. Or that an effort failed not because of the practice, but because of the underlying model.

Before we call any ghosts into the room, we need to name the simple, structural truth behind every living or dead program:
each one rests on a deterministic model.
A causal story, often implicit, about how change is supposed to happen: "if X, then Y, because Z." Naming that model is the first step toward understanding what lives, what dies, and what we need to learn next.

For sustainable ag, we can write it as:

If [actor] does [intervention] on [system] under [conditions],
then [mechanism/pathway] changes [state],
which leads to [impact metric] over [time].

🌾 A simple example: cover crops

If a farmer plants a winter cover crop (X)
then erosion decreases and soil carbon increases (Y)
because ground cover protects soil and root biomass feeds the system (Z).

This shifts key state variables: soil structure, water-holding capacity, N cycling, yield stability. Those shifts then feed into an impact metric like carbon intensity (CI) per bushel or net margin per acre.
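To make that last step concrete, here is a minimal sketch of how a soil carbon shift might roll up into a CI-per-bushel figure. The function and every number in it are hypothetical placeholders chosen for illustration, not measured results or an accepted carbon-accounting methodology.

```python
# A minimal sketch (not a validated methodology): how a state-variable shift
# rolls up into an impact metric like carbon intensity (CI) per bushel.
# Every number below is a hypothetical placeholder for illustration only.

def carbon_intensity_per_bushel(field_emissions_kg_co2e: float,
                                soil_carbon_gain_kg_co2e: float,
                                yield_bushels: float) -> float:
    """Net CI = (emissions - credited soil carbon gain) / yield."""
    return (field_emissions_kg_co2e - soil_carbon_gain_kg_co2e) / yield_bushels

# Hypothetical comparison: same acres, with and without a winter cover crop.
baseline = carbon_intensity_per_bushel(60_000, 0, 10_000)             # 6.0 kg CO2e/bu
with_cover_crop = carbon_intensity_per_bushel(61_000, 8_000, 10_000)  # 5.3 kg CO2e/bu
print(baseline, with_cover_crop)
```

The same shape applies to net margin per acre: state change in, impact metric out.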

But, and this is where most models break, this causal chain is context-dependent:

  • Works under certain rainfall patterns
  • Works if the farmer has a drill, cashflow, agronomic support, social validation, and confidence in the practice's efficacy
  • Works if the timing fits the rotation
  • Works if incentives match the lag time of benefits

Change the conditions, and the model can break quietly… or loudly.

💀 Ghosts of Programs Past

Take Nori. We were a bold, creative marketplace that attracted enormous attention. But our causal model never fully closed, nor was it ever documented in a systematic way: the economic engine depended on demand that never solidified, verification assumptions that couldn't hold, and incentives that flipped the minute token economics shifted (which we never even proved out).

If Nori were to visit our séance, it might whisper: “Know the economic engine before building the platform.”

Or consider the “nod-nod, wink-wink” regenerative certifications that add market sizzle but rest on vague causal pathways, or tools that claim scientific precision while skipping the question of boundary conditions. These programs are still “alive,” technically, but brittle. They persist because they are branded, not because their causal model is durable. When push comes to shove, they don’t support continuous improvement or reflect actual improvement.

Their ghosts would probably say: "Name the causal model. Test it. Don't scale before you understand it. A tick-the-box certification program that lends no buyer trust or market value is not durable."

🧩 Why deterministic models matter (and save us from ghosts)

When we articulate the causal chain, we can finally ask:

  • Under what conditions is this true?
  • Where does it break?
  • Who bears risk?
  • What assumptions are hidden?
  • What data do we need to test it?
  • How do we know the impact metric is real?

Without this clarity, programs succeed only while the funding firehose stays open. When subsidies end, adoption collapses — not because the practice is bad, but because the underlying model was never designed to survive without external scaffolding.

This is the pattern behind many short-lived “successful” pilots.

A Simple Model Card for Any Program

Below is a model card, a bare-bones tool for articulating and testing a program's deterministic model. When we use it with our partners and clients, it quickly grows more detailed as contextualized questions come up. Feel free to copy, adapt, and use it; a rough sketch of the card as structured data follows the eight sections below. I'd love to know if it's useful to you.

1. CONTEXT & CONDITIONS

  • Where does this model work?
  • What must be true for it to function?

A model only works in the conditions it was built for. Soil, climate, markets, infrastructure, culture, and timing determine whether the causal chain has any chance of holding. Naming context upfront prevents us from pretending a local truth is a universal one.

2. ACTORS & ACTIONS

  • Who does what? How do they interact?
  • Who pays? Who bears risk?
  • What are actor motivations and constraints?

Clarity on who does what, who decides what, and who pays for what reveals the power dynamics and friction points that often determine program survival more than the practice itself.

3. ASSUMPTIONS

  • Explicit: __________
  • Implicit: __________
  • Unknowns: __________

Making assumptions explicit surfaces where we might be building on hope instead of evidence. Most program failures aren't operational; they're assumption failures that were never named in the first place.

4. CAUSAL PATHWAY 

  • X → Y → Z

  (Intervention → Mechanism → State Change). X → Y → Z forces us to articulate the mechanism, not just the outcome. If we can’t explain why something works, we can’t know when it won’t. Causal clarity is what separates insight from storytelling.

5. DATA & METRICS

  • What do we measure?
  • Which actors have which data?
  • How do we calculate the metric?
  • What is the data model?

What we measure determines what we think is real. Who holds the data determines who gets to participate. How we model it determines what we consider impact. This step forces us to confront measurement choices as design choices.

6. EXPECTED IMPACT

  • Ecological:
  • Economic:
  • Operational:

Ecological, economic, operational: naming these explicitly prevents the classic trap of confusing activity with outcomes. If success is vague, failure is easy to hide.

7. BREAK AND INFLUENCE POINTS

  • Where does this model fail?
  • Where are the risks?
  • Where are the points of influence?
  • Under what conditions does it not hold?

These tell us where incentives need adjusting, where practices need tailoring, where assumptions need rewriting, and where there is unrealized value. Mapping breakpoints prevents overselling and allows programs to evolve instead of collapse. Mapping influence points allows for more scale and value capture.

8. LEARNING LOOP

A deterministic model is only as good as its ability to evolve. Build → Measure → Learn, as coined by Eric Ries in The Lean Startup, creates the conditions for programs to become smarter over time instead of more brittle. The absence of a learning loop is the single best predictor of future ghosts.
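As promised above, here is one possible way to encode the card as structured data so it can live alongside a program's other artifacts. This is a minimal sketch using a Python dataclass; the ModelCard class, its field names, and the partially filled cover crop example are illustrative choices, not a standard schema.

```python
# A minimal sketch of the model card as structured data, mirroring the eight
# sections above. The class and field names are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    context_and_conditions: list[str] = field(default_factory=list)      # 1. where it works, what must be true
    actors_and_actions: dict[str, str] = field(default_factory=dict)     # 2. who does, decides, pays for what
    assumptions: dict[str, list[str]] = field(default_factory=dict)      # 3. explicit / implicit / unknowns
    causal_pathway: str = ""                                             # 4. intervention -> mechanism -> state change
    data_and_metrics: dict[str, str] = field(default_factory=dict)       # 5. what is measured, by whom, how calculated
    expected_impact: dict[str, str] = field(default_factory=dict)        # 6. ecological / economic / operational
    break_and_influence_points: list[str] = field(default_factory=list)  # 7. where it fails or can be leveraged
    learning_loop: str = ""                                              # 8. how the model gets tested and revised

# A partially filled-in card for the cover crop example from earlier.
cover_crop_card = ModelCard(
    causal_pathway="plant winter cover crop -> ground cover + root biomass -> less erosion, more soil carbon",
    assumptions={"explicit": ["farmer has a drill and cashflow"],
                 "implicit": ["timing fits the rotation"],
                 "unknowns": ["incentives match the lag time of benefits"]},
    expected_impact={"ecological": "soil structure, water-holding capacity",
                     "economic": "net margin per acre",
                     "operational": "yield stability"},
    learning_loop="build -> measure -> learn; revisit the card each season",
)
```

The encoding itself matters less than the effect: each section becomes a named, inspectable field that someone can challenge, rather than an implicit belief.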

 Learning isn’t optional, it is the model

A deterministic model is never a fact. It's a hypothesis. And hypotheses only live when tested, adapted, and revised. Testing it helps answer:

  • Where are the points of uncertainty?
  • Who is this for and why do they want it?
  • Should this product/platform/program be built?

The only thing that makes it come alive is a learning loop:

Programs don’t fail because they’re wrong.

They fail because they don’t learn fast enough.

The programs that endure — that move from pilot → practice → system — treat learning as infrastructure, not as an afterthought.

A Hopeful Invitation to the Field

If we want the next generation of sustainable ag efforts to truly work, we don’t need more buzzwords, more platforms, or more fragmented incentives.

We need a practice of public learning.

Once a year, what if every organization (startups, NGOs, buyers, lenders, researchers) published a simple, honest post-mortem of one program it built?

Just:

Here was our deterministic model.
Here were our assumptions.
Here is where the causal chain held.
Here is where it broke.
Here is what we learned.
Here is how we would change it.

Then both the living and the dead could learn from each other. Imagine the collective intelligence we would unlock. And because our industry loves introducing jargony words, I'll offer one more: a Sustainable Agriculture Séance, SAS.

A way of asking whether we have thought through everything that didn't work as we consider our programmatic data model: Did we SAS this yet?

A way of making learning public, systematic, intentional and expected.
