January 18, 2026

Trust is an Engineering Metric: Designing for Progressive Autonomy

How we measure confidence to decide when an AI agent should ask for permission and when it should just act.

There is a paradox in the current wave of AI agents. We want them to be autonomous—to handle tasks without our involvement—but we are terrified of what they might do unsupervised.

The horror stories are easy to imagine: an agent hallucinating a meeting, deleting an important email, or replying to your boss with a tone-deaf joke.

Because of this fear, most "agents" today are crippled. They are reduced to glorified chatbots that propose drafts but never click send. They are stuck in the "Human-in-the-Loop" paradigm, where the human is the bottleneck.

At Elani, we believe the goal isn't just "Human-in-the-Loop." It's Human-on-the-Loop. The user should be the supervisor, not the operator. But to get there, we have to treat Trust not as a feeling, but as an engineering metric.

The Trust Battery

Every interaction with an agent either charges or drains its "Trust Battery" (a concept popularized by Tobi Lütke).

  • If Elani correctly identifies a spam email and archives it: +1 Charge.
  • If Elani mislabels a client contract as "newsletter": -10 Drain.

To build a truly autonomous agent, we need to protect this battery at all costs. We do this through a system we call Progressive Autonomy.
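
As a rough sketch of how such a battery might be tracked (the type names and weights below are ours for illustration, not Elani's internals), the key property is asymmetry: a single visible mistake drains far more than a correct action charges.

// Illustrative sketch of a trust battery; names and weights are hypothetical
type Outcome = 'CORRECT' | 'CORRECTED_BY_USER';

interface TrustBattery {
  charge: number; // 0-100
}

function recordOutcome(battery: TrustBattery, outcome: Outcome): TrustBattery {
  // A visible mistake drains far more than a correct action charges
  const delta = outcome === 'CORRECT' ? 1 : -10;
  return { charge: Math.max(0, Math.min(100, battery.charge + delta)) };
}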

Levels of Autonomy

We classify every action Elani can take into three levels of risk/autonomy.

Level 1: Surface (Safe)

  • Action: Researching a person, summarizing a thread, tagging an email.
  • Risk: Low. If the summary is slightly off, no permanent damage is done.
  • Behavior: ACT. Elani does this silently in the background.

Level 2: Propose (Moderate)

  • Action: Drafting a reply, proposing a meeting time, reorganizing a project folder.
  • Risk: Medium. Sending a bad email is embarrassing; moving files is annoying to undo.
  • Behavior: ASK. Elani prepares the action but waits for your approval.

    "I've drafted a reply to Sarah. Click to review and send."

Level 3: Execute (High)

  • Action: Sending an email, declining an invite, archiving a thread.
  • Risk: High. These actions have external consequences.
  • Behavior: GATE. These require explicit permission unless confidence is near-perfect.
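
In code, this classification might look like a static policy table. The action names and mapping below are illustrative, not Elani's actual taxonomy:

// Hypothetical mapping from action type to risk level and default behavior
type RiskLevel = 'LOW' | 'MEDIUM' | 'HIGH';
type Behavior = 'ACT' | 'ASK' | 'GATE';

const AUTONOMY_POLICY: Record<string, { risk: RiskLevel; behavior: Behavior }> = {
  summarize_thread: { risk: 'LOW', behavior: 'ACT' },        // Level 1: Surface
  tag_email: { risk: 'LOW', behavior: 'ACT' },
  draft_reply: { risk: 'MEDIUM', behavior: 'ASK' },          // Level 2: Propose
  propose_meeting_time: { risk: 'MEDIUM', behavior: 'ASK' },
  send_email: { risk: 'HIGH', behavior: 'GATE' },            // Level 3: Execute
  archive_thread: { risk: 'HIGH', behavior: 'GATE' },
};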

The Confidence Metric

How does Elani decide whether a Level 3 action should be executed autonomously or fall back to a Level 2 proposal? This is where the engineering comes in.

We don't just ask the LLM "What should I do?" We use specialized classifiers like TopicNotificationClassifier and IngestionExtractionClassifier that return not just a decision, but a Confidence Score.

// Simplified logic for an autonomous action
if (action.risk_level === 'HIGH') {
  if (classifier.confidence > 0.98 && user.preferences.auto_archive) {
    return execute(action); // Autonomous
  } else {
    return propose(action); // Human approval required
  }
}

This threshold isn't static. It adapts based on the User Graph.

  • If you consistently approve Elani's drafts to "Recruiters," the threshold lowers. Elani learns she understands how you talk to recruiters.
  • If you constantly edit drafts to your "Co-Founder," the threshold rises. Elani learns this relationship is nuanced.
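
One way to sketch this per-relationship adaptation (the field names and adjustment rates here are assumptions, not Elani's actual model) is to nudge a stored threshold down on clean approvals and up on edits:

// Hypothetical per-contact threshold adaptation; field names and rates are assumptions
interface ContactStats {
  threshold: number; // confidence required before Elani may act without asking
  approvals: number; // drafts sent without edits
  edits: number;     // drafts the user rewrote before sending
}

const FLOOR = 0.90;  // never drop below this, even for well-trusted contacts
const STEP = 0.005;

function updateThreshold(stats: ContactStats, approvedUnedited: boolean): ContactStats {
  // Clean approvals lower the bar slightly; edits raise it
  const delta = approvedUnedited ? -STEP : STEP;
  return {
    threshold: Math.min(1.0, Math.max(FLOOR, stats.threshold + delta)),
    approvals: stats.approvals + (approvedUnedited ? 1 : 0),
    edits: stats.edits + (approvedUnedited ? 0 : 1),
  };
}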

Transparency: The "Why" Button

Trust also requires explainability. An agent that acts like a black box is terrifying. An agent that explains its reasoning is a colleague.

When Elani performs an action—whether it's moving a topic or flagging an email—we expose the Chain of Thought.

"I moved this to 'Urgent' because it contains the phrase 'deadline tomorrow' and is from a VIP sender."

This allows the user to audit the logic. It turns a "mistake" into a "calibration opportunity." You can correct the logic, not just the outcome.
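
Concretely, every action can carry its own rationale as structured data. The record shape below is a sketch of what that might look like, not Elani's actual schema:

// Hypothetical shape of the audit record behind the "why" of an action
interface ActionRecord {
  action: string;      // e.g. 'move_to_urgent'
  confidence: number;  // classifier score at decision time
  signals: string[];   // the evidence behind the decision
  explanation: string; // the human-readable "why"
}

const example: ActionRecord = {
  action: 'move_to_urgent',
  confidence: 0.97,
  signals: ['phrase: "deadline tomorrow"', 'sender: VIP'],
  explanation: 'Moved to "Urgent" because it contains "deadline tomorrow" and is from a VIP sender.',
};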

Conclusion

We are moving towards a world where software acts on our behalf. This transition won't happen overnight. It will happen progressively.

By quantifying trust and strictly enforcing confidence thresholds, we can build agents that earn their autonomy one correct decision at a time. The future isn't about blind trust; it's about verifiable reliability.

