Showing posts from December, 2025

Know Enough to Start Moving

We wanted to simplify financial record keeping. Shipping each new feature used to take six months. Planning took months. Implementation took months. Launch took months. It was too slow and too expensive. So an engineer and I teamed up to make it simpler. He was a deep diver. He wanted to understand every detail end to end before taking a single step. I didn’t realize that at first. We would agree on something, and then progress would freeze. He wasn’t sure about one detail. And because that detail was unclear, he couldn’t move. I had to keep removing the ambiguity. Many times I was wrong. But the direction never changed. The big picture stayed the same. At one point I explained it this way: if you are going from Marina Beach in Chennai to India Gate in Delhi, you don’t need every turn mapped out on day one. You just need to know you’re going north. You tune the route as you travel. You correct as you learn. The map is allowed to change. And in fast-moving systems, it usually does. Some d...

When “All Good” Smells Wrong

We read the retro and it sounded perfect. The migration was declared a success with a few hiccups. But it smelled wrong to me. We had live escalations and delays. The story on paper did not match what people had lived through. When the noise and the report disagree, the report is lying by omission. Good leaders do not pretend their team smells like perfume. They speak up. They say what went wrong even when it is embarrassing. They point out gaps in the measures themselves. I had to tell the group that our success metric was broken. That was the red flag. Saying it out loud felt awkward, but it forced a deeper look. When something seems off, do not accept the story. Ask which assumptions are hiding in the numbers. Call them out. Force the metrics to explain themselves. Tenet #4 — Make Assumptions Explicit. Truth beats comfort every time.

When Data Lies by Telling the Truth

Confirmation bias shows up quietly. It shows up when numbers look clean, dashboards look green, and yet people keep saying something feels wrong. Our test infrastructure looked perfect on paper. The metrics showed less than 10% failure. With retries, it looked even better. But the test framework was failing constantly. Pass rates were under 20%. The framework team blamed infra for almost 40% of the failures. The infra team pointed to healthy dashboards. Developers kept complaining. The data and the anecdotes were telling different stories. The easy mistake is to trust the data and dismiss the anecdotes. But when they disagree, the anecdotes are usually right. Not because the data is wrong, but because we measured the wrong thing. Our infra metrics measured availability, not stability. They measured uptime, not connection paths. They measured retries, not retry cost. It took weeks of pushing for a proper RCA. When the teams finally traced the problem end-to-end, we found multiple layers of rou...