When we started building trainedB.ai, the first thing we did wasn't build anything. It was research. We wanted to understand what already existed, what players were already spending time and money on, and where the gaps were.
I expected to find a fragmented landscape. Different approaches with different strengths and weaknesses. Some good at one thing, some good at another. I figured we'd find our niche in whatever corner nobody had covered yet.
That's not what we found.
What we found was that everything looks the same. And the thing they all have in common is the thing nobody talks about: none of them measure whether you actually improve.
Everything is the same product
Go look at what's available to a pickleball player who wants to get better. Really look.
There are drill libraries. Dozens of them. Apps, YouTube channels, paid programs, PDFs. They give you a list of drills organized by skill category. Here are your dinking drills. Here are your drop drills. Here are your serve drills. Pick some. Do them.
There are "road to 4.0" programs. They take those same drills and put them in a sequence. Week 1, kitchen fundamentals. Week 2, third-shot options. Week 3, transition zone. Everyone does the same sequence. It's a drill library with a calendar stapled to it.
There are clinics and academies. A coach stands in front of a group, demonstrates a skill, runs some drills, gives feedback during the session. You leave. Maybe you come back next week and do a different skill. Maybe the same one. The coach doesn't know what you worked on in between and doesn't track whether last week's lesson changed anything.
There are video analysis tools. Record yourself, watch it back, or send it to someone for feedback. You get notes on what to fix. You go try to fix it. Nobody checks whether you did.
There are rating systems. They tell you a number. The number goes up or down. They don't tell you why.
Strip away the branding, the pricing, the marketing. What do you have? The same thing, everywhere. Content libraries. Generic progressions. Episodic instruction. Outcome numbers with no diagnosis underneath.
I'm not saying none of this has value. Drill libraries are useful references. Good clinics teach real skills. Ratings structure competitive play. These are all real contributions to the sport.
But they all share one thing: they don't know if you got better.
Nobody is accountable for results
This is what surprised me. Not that the options were imperfect. I expected that. What I didn't expect was that accountability for improvement is completely absent from the entire ecosystem.
Think about what that means. A player signs up for a twelve-week "road to 4.0" program. They complete it. Did they get to 4.0? Did they get measurably better at anything? The program doesn't know. It has no mechanism to know. It delivered content on a schedule. Whether that content produced results is not its problem.
A player takes a clinic every Saturday for six months. Are they better than when they started? The coach might have a general impression. There's no data. No baseline. No progress tracking. No measurement of any kind. The clinic sold a session. What happened after is outside the frame.
A player watches a hundred hours of YouTube tutorials. Better? Worse? Same? Nobody's counting. The creator got the views. The algorithm served the next video. The loop between consuming the content and improving at the sport doesn't exist.
Every other part of the transaction is measured. Players pay. They show up. They complete the program. They watch the video. They do the drill. All tracked. The only thing that isn't tracked is the thing the player actually came for.
Activity as a substitute for progress
Once I started seeing this, I couldn't stop. The whole ecosystem is built around activity, not outcomes.
Did you do your drills this week? Check. Did you attend the clinic? Check. Did you finish the program? Check. Did you log your matches? Check.
Players accumulate activity and assume it means they're improving. The products encourage this because activity is what they can deliver and measure. Open rates, completion rates, session attendance, drill streaks. These are the metrics the products optimize for, because these are the metrics they control.
But activity is not improvement. A player who does a hundred dinking drills hasn't necessarily gotten better at dinking. They've done a hundred dinking drills. Those are different things, and the difference matters. One is an input. The other is an outcome. The entire market is selling inputs and hoping the outcomes take care of themselves.
Why this doesn't change on its own
You'd think the market would self-correct. Players who don't improve would stop paying. Products that don't work would lose customers.
But plateaus are slow and ambiguous. A player doesn't hit a wall and immediately know the program failed. They just keep playing, keep practicing, and keep not quite getting to the next level. By the time they realize nothing's changed, it's been months. They blame themselves. Not enough effort, not enough consistency, not enough talent. They rarely blame the tool, because the tool was never promising measurable results in the first place.
And the tools don't know they're failing, because they never measured success. You can't fail at something you never committed to. If your product is "here are some drills" and the player did the drills, you delivered. That the player didn't improve is outside your scope.
Nobody's on the hook. The player assumes the content works if they put in the reps. The content assumes improvement is the player's responsibility. And the gap between the two is where millions of players sit, stuck, wondering what they're doing wrong.
What finding this did to us
I'm a pickleball player. Before we started building trainedB.ai, I was doing all of this myself. Watching YouTube videos at night, showing up to clinics on weekends, grinding drills with a buddy who was at the same level and had the same blind spots. I thought I was putting in the work. I was putting in the work. I just had no idea if any of it was actually making me better.
And it hit me that I'd been a customer of this ecosystem for years and never once asked the obvious question: how do I know this is working? I just kept showing up, kept paying, kept doing the drills, and assumed that effort would eventually turn into results. When it didn't, I blamed myself.
That's when I realized the problem wasn't me. The business model doesn't need me to improve. It needs me to keep showing up. Retention is the game. Sell a clinic, sell a program, sell a rating to a coach. Whether the player on the other end actually moves from 3.5 to 4.0 is irrelevant as long as they re-enroll next month.
Nobody in this ecosystem wakes up in the morning accountable for whether you got better. That's the gap. That's what we're trying to close.
Next week, I want to dig into one specific piece of this: how rating systems, despite being the only real "measurement" most players have, actually make the problem worse by measuring the wrong thing entirely.