How I Think

Accuracy isn’t the end goal

A model can be technically accurate and still be useless.

If people can’t understand what it’s doing, don’t trust its outputs, or don’t know how to act on them, the system breaks down fast. I’ve seen projects with strong metrics struggle in practice because users weren’t confident enough to rely on them, or didn’t understand why something was flagged in the first place.

That’s when accuracy stops mattering.

When interpretability matters more than performance

In high-stakes or regulated settings, a slightly better metric isn’t always worth trading away interpretability.

If a system can’t clearly explain itself, the cost of a wrong or unexplained decision can be much higher than a missed edge case. When I build systems, I try to think through the questions real users will actually ask:

  • Why was this flagged?
  • What evidence supports this?
  • What am I supposed to do next?

If a system can’t answer those clearly, it doesn’t matter how well it performs on paper.
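
To make that concrete, here’s a minimal sketch of one shape this can take. It isn’t from a real system, and every name and threshold in it is hypothetical; the point is that each flag carries its own reason, evidence, and next step instead of a bare score.

```python
# A minimal, hypothetical sketch: every flag the system emits answers the
# three questions above on its own, rather than shipping a bare score.
from dataclasses import dataclass, field


@dataclass
class Flag:
    reason: str                 # Why was this flagged?
    evidence: list[str] = field(default_factory=list)  # What supports it?
    next_step: str = ""         # What is the reviewer supposed to do next?


def flag_transaction(amount: float, daily_avg: float) -> Flag | None:
    """Flag a transaction far above the account's daily average.

    The 10x rule is an arbitrary placeholder, not a recommendation.
    """
    if daily_avg > 0 and amount > 10 * daily_avg:
        return Flag(
            reason=f"Amount ${amount:,.2f} is more than 10x the daily average.",
            evidence=[f"amount=${amount:,.2f}", f"daily_avg=${daily_avg:,.2f}"],
            next_step="Hold for manual review and notify the account holder.",
        )
    return None  # Nothing unusual: no flag, and no explanation owed.
```

The rule itself is deliberately trivial. What matters is the shape of the output: whoever reviews the flag gets the why, the evidence, and the next action together, which is exactly what a bare probability can’t give them.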

What real systems taught me

Classes teach you tools. Building real systems teaches judgment.

Working on production pipelines and decision-support tools changed how I think about modeling. The question stopped being “Can we build this?” and became “Should we, and how will someone actually use it?”

I’m most interested in work that sits close to real users, real constraints, and real consequences, where the goal isn’t just to build something impressive but something people can actually rely on.
