Technical due diligence from both sides of the table

I've been diligenced and I've diligenced others. What the process actually surfaces.

I've now been on both sides of technical due diligence more times than I can count. As a founder raising, I was the one being evaluated. More recently, advising on acquisitions and partnerships, I've been the one doing the evaluating. The two experiences are different in ways I didn't expect.

What diligence is supposed to do

Surface technical risk. Specifically: is the technical architecture sound, is the team capable of executing, and are there hidden liabilities that change the investment or acquisition thesis?

In practice, it surfaces something narrower: whether the team can present their technical work coherently to someone who doesn't fully understand their domain. That's correlated with the underlying quality, but it's not the same thing.

What I've learned as the evaluated party

Documentation is disproportionately impactful. Teams that have written down their architecture decisions, their scaling assumptions, and their known weaknesses get through diligence faster and with less friction than teams whose documentation is sparse. The documentation doesn't have to be polished. It has to exist.

Honesty about weaknesses is more credible than the absence of weaknesses. Every technical system has weak points. A founder who can articulate them clearly is demonstrating the kind of technical judgment that investors and acquirers want to see. A founder who claims to have no technical debt and no known risks is not credible.

The war stories matter. The specific incidents where things broke and you fixed them. The architectural decision you made under pressure that you'd make differently now. These stories demonstrate operational maturity in a way that clean architecture diagrams don't.

What I've learned as the evaluating party

The documentation tells you the aspirational state. The codebase tells you the actual state. Always read the code. Not all of it, just the parts where the team's claims are hardest to verify. If they claim 99.6% accuracy, look at the evaluation pipeline. If they claim the architecture scales horizontally, look at the stateful components.

Ask about the last three production incidents. How were they detected, how long to resolution, what changed afterward? This tells you more about the team's operational maturity than any architecture discussion. Teams that report no incidents in the past year either have very low traffic or don't have the monitoring to notice.

Ask about the thing they're most worried about. The good founders have a specific answer. It's usually something the due diligence process wasn't going to find on its own. The answer tells you they understand their system's real risk profile, not the one that looks good in a presentation.

The asymmetry

The evaluating party has limited time and is working from documentation and prepared presentations. The evaluated party has deep context and has been thinking about these questions for years. The process favors teams that are good at distillation and communication, which is a useful proxy for organizational quality but not a perfect one.

The best due diligence processes I've been part of, on both sides, are the ones that get past the prepared materials quickly and into genuine technical conversation. Once the conversation is real, the surface game doesn't matter. What you actually built and how you actually think about it become visible.

With gusto, Fatih.