We're founding Countercheck. Anti-counterfeiting for logistics and brands, built on computer vision. Before any code was written, we ran 50 customer conversations. Here's what that process actually looks like and what it produced.
Why 50
The number is somewhat arbitrary but not meaningless. At 10 conversations you're hearing individual opinions. At 20 you're starting to see patterns. At 30 the patterns are stable enough to act on. At 50 you've stress-tested your assumptions enough times that the surprises are small rather than large.
The risk of fewer conversations is mistaking a coherent individual perspective for a market. The risk of more is analysis paralysis. Fifty is enough to know, and not so many that you're wasting time: by conversation 40 you should no longer be hearing anything new.
How we structured them
Problem-first. We did not describe the product in the first half of the call. We described the problem space and asked about their experience with it. The moment you describe your solution, you start getting feedback on the solution rather than insight into the problem. Keep the solution out of the conversation until you've heard the problem described in their language.
The people we talked to: brand protection managers at apparel and luxury companies, logistics operations managers, customs officials in two countries, IP lawyers who represent brands in counterfeiting cases, and a handful of people who had built anti-counterfeiting tooling previously.
The categories were deliberate. Brand protection managers feel the problem and own the budget. Logistics operations managers feel the constraint of any solution that slows throughput. Customs officials understand the regulatory landscape. IP lawyers understand the evidentiary requirements. Former tooling builders understand why previous solutions failed.
What we learned
The problem is real and the existing solutions are bad. This is the most important thing to confirm before building. Every category confirmed it independently.
The specific failure of existing solutions: they're expensive, they require controlled conditions that logistics environments can't provide, and they have false positive rates high enough to create operational disruption. A solution that catches counterfeits but flags 5% of authentic products as fake is worse than no solution in a high-volume logistics environment because the disruption cost exceeds the counterfeiting cost.
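The arithmetic behind that claim is worth making explicit. A minimal sketch, using purely illustrative numbers (parcel volume, counterfeit rate, and per-unit costs are assumptions, not Countercheck data):

```python
# Hypothetical daily cost comparison for one high-volume hub.
# Every constant below is an illustrative assumption.
PARCELS_PER_DAY = 100_000
COUNTERFEIT_RATE = 0.001        # 1 in 1,000 parcels is fake
COST_PER_MISSED_FAKE = 50       # brand/liability cost of a fake passing through
COST_PER_FALSE_HOLD = 20        # handling cost of holding an authentic parcel

RECALL = 0.95                   # detector catches 95% of fakes...
FALSE_POSITIVE_RATE = 0.05      # ...but flags 5% of authentic parcels

fakes = PARCELS_PER_DAY * COUNTERFEIT_RATE      # ~100 fakes/day
authentic = PARCELS_PER_DAY - fakes             # ~99,900 genuine parcels/day

# Baseline: no detector, every fake gets through.
cost_no_solution = fakes * COST_PER_MISSED_FAKE

# With the detector: a few missed fakes, plus thousands of false holds.
missed = fakes * (1 - RECALL)
false_holds = authentic * FALSE_POSITIVE_RATE
cost_with_solution = (missed * COST_PER_MISSED_FAKE
                      + false_holds * COST_PER_FALSE_HOLD)

print(f"no solution:   ${cost_no_solution:,.0f}/day")
print(f"with detector: ${cost_with_solution:,.0f}/day")
```

Under these assumptions the detector is roughly twenty times more expensive than doing nothing, because the false-hold cost scales with the 99,900 authentic parcels, not the 100 fakes.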
This shaped the product requirement before we wrote anything: the false positive rate is the constraint to optimize for, not raw detection accuracy. That's a different product design than we would have built from first principles.
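In practice, "optimize for the false positive rate" means calibrating the decision threshold to a false-positive budget and accepting whatever recall falls out, rather than picking the threshold that maximizes overall accuracy. A sketch with synthetic validation scores (the distributions, sample sizes, and 0.5% budget are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic validation scores: higher = more likely counterfeit.
authentic_scores = rng.normal(0.0, 1.0, size=10_000)
fake_scores = rng.normal(3.0, 1.0, size=200)

# Set the threshold from the authentic-score distribution so that at most
# ~0.5% of genuine parcels get flagged, then measure the recall that buys.
TARGET_FPR = 0.005
threshold = np.quantile(authentic_scores, 1.0 - TARGET_FPR)

fpr = float(np.mean(authentic_scores > threshold))
recall = float(np.mean(fake_scores > threshold))

print(f"threshold={threshold:.2f}  fpr={fpr:.4%}  recall={recall:.1%}")
```

The design choice is that the operational constraint (disruption to authentic parcels) is fixed first, and detection performance is whatever the model achieves within that budget.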
The letters of intent
Eight of the fifty conversations produced letters of intent. Not contracts. Not commitments. Signed statements that the organization was interested in piloting the solution if it performed to specification. The specs came from the conversations.
LOIs are not revenue. They are the highest-quality signal available before revenue exists. Eight LOIs from fifty conversations across multiple organization types and geographies is enough to proceed.
What we didn't learn
Pricing. People are remarkably resistant to discussing what they'd pay for a hypothetical solution. We asked directly and got ranges that were too wide to be useful. Pricing will come from pilots, not from conversations.
Integration requirements in detail. Every organization runs different warehouse management systems, different track-and-trace setups, different approval processes for new tooling. The integration complexity is real and varies so much across organizations that you can't map it from conversations. You map it from pilots.
Where we are now
Team is assembling. Patent strategy is mapped. The technical architecture is defined based on the problem requirements we validated. The first pilot is being scoped with one of the LOI companies.
We're building toward a specification we validated, not hoping the specification is right. That's the difference.
With gusto, Fatih.