Don't Drop The Ball – Part II
Three systems I use to help my portfolio companies win by minimizing mistakes
If you haven’t read Part I of this series yet, I suggest starting there.
In Part I, I wrote about how most business problems aren’t strategic. They’re executional.
A lead goes cold. A ticket for an important customer sits unaddressed. An important renewal slips by or gets mishandled. These aren’t issues of market selection, effects of macro-economic headwinds, or evidence of a deeper lack of product-market fit. They’re dropped balls. They’re small, preventable mistakes. Mistakes that kill your momentum, chip away at customer trust, and make it harder for you and your team to win.
I also touched on how these mistakes don’t happen on purpose. Nobody on your team is intentionally dropping the ball. They know what they should be doing. The problem isn’t lack of knowledge. The problem is that personal vigilance is a finite resource. We all get distracted. We all get busy. Yesterday’s important thing gets buried under today’s newer, fresher, more urgent thing.
So, yeah. Balls get dropped. And that’s understandable, as long as it doesn’t happen too frequently.
What’s less understandable (and less forgivable) is when these silly little misses keep happening. What’s even less understandable is when people refuse to acknowledge or investigate the misses.
Why does this happen? Usually, it’s because we’re too embarrassed. It feels a little insulting to stand in front of a room of professionals and ask, “Did we prepare for that customer meeting?” or “How long did it take us to respond to that lead from yesterday?” Of course, the answer should always be some version of “We did what we’re supposed to.” So even though we know, deep down, that balls are getting dropped, we go ahead and choose to assume things are basically fine. Because assuming otherwise would mean admitting we’re not running things as well as we think we are.
It’s that assumption (and the dangerous combination of pride and ignorance that powers it) that’s killing you and your team.
High-performing teams don’t let that kind of pride in the door. They rip it out and replace it with a kind of workmanlike paranoia. They look at their business the way Toyota looks at a production line. Instead of assuming that the work is being done the right way, they obsessively hunt for the waste that they know will always be hiding somewhere. They assume the natural state of any system is not that it’s humming along, but that it’s quietly drifting off-course. They assume that if you aren’t actively applying energy to keep things tight, they will loosen. Slowly, but oh-so-predictably.
And because of their belief in and respect for this unavoidable drift, they adopt a more honest, powerful, and somehow-still-optimistic way to think about their business.
They assume that, even when things are going well, small opportunities to improve are hiding literally everywhere.
There’s a big benefit to adopting that mindset. When a leadership team believes that things will loosen unless they’re actively kept tight, they stop waiting for mistakes to reveal there’s a problem. They go looking for weak spots before those weak spots turn into dropped balls. That kind of vigilance creates a working environment where pointing out an executional gap isn’t seen as an accusation, but as an act of service. It’s the kind of environment where someone can say, “I think we’re dropping the ball here,” and the response isn’t defensiveness or finger-pointing, but an immediate, open, and thoughtful discussion about where the weak spot in the system might be and how to fix it.
That’s the kind of environment I want to work in.
And it’s the environment I try to help create in my work with my portfolio companies.
Three Kinds of Dropped Balls
Here are the three kinds of dropped balls (messy, slow, and unprepared) I see most often in my work, the invisible damage they cause, and the systems I build with my companies to catch, fix, and prevent them.
1. MESSY: Your Pipeline Is Full of Junk
A while back, one of my portfolio companies found itself stuck in a familiar pattern. We would start each quarter with strong pipeline coverage, often five times (or more) our bookings plan for the quarter. On paper, it looked like plenty of cushion. The forecast felt reassuring. Look at all those deals!
But then, by the end of the quarter, we would inevitably end up on the same white-knuckle ride. We would either just make the number or just miss it.
Despite the seemingly strong coverage, we were just squeaking by.
I remember one particular quarter where, in the first few weeks, the headline coverage number looked great. There were lots of late-stage deals and plenty of volume. But when we slowed down and analyzed the pipeline opportunity-by-opportunity, a different picture emerged. A meaningful number of those late-stage opportunities hadn’t seen any activity in weeks. Many of them had no next step captured. And too many of the newer deals had what I would call “placeholder vibes.” In other words, they didn’t yet have the information attached to them that would indicate a believable close date or a customer with clear urgency to buy.
Put simply, the headline pipeline coverage number was a complete head-fake.
When I asked the sales leader about it, they said something to the effect of, “Yeah, we probably need to clean that up.”
I actually think that’s a very human response. No one was trying to mislead anyone here. We weren’t stuffing the pipeline with fake deals. The system was simply allowing comforting but misleading data to accumulate faster than we could reality-check it.
And we needed to fix that.
The Impact
Here’s a dirty little secret of B2B sales: No one’s pipeline is ever perfect. There’s always going to be a lag between what’s really going on with the deals sales is working and the data in the CRM. But when that lag grows too large, you get several effects that can really hurt your business.
The Leadership Head-fake: See above. An inflated pipeline gives the management team a dangerous false sense of security. You think you have coverage for the quarter, so you don’t make necessary pivots or inspect your key deals’ probability to close until it’s too late.
Cognitive Overload: Reps feel more stretched than they actually are because they’re subconsciously managing 50 zombie deals instead of the 10 that actually have a chance to close this quarter.
Hiding the Gap: When the pipeline looks full but isn’t, it hides the urgent need to prospect or think of new ways to build pipeline. It lets reps off the hook from generating new business because “look how busy I already am with all these opportunities” (even though half of their pipeline should have already been closed out).
The System: The “Clean Your Room” Report
To fix the problem, we started running a “clean your room” CRM report every few weeks that surfaced open deals with three key markers (a rough sketch of such a report follows the list):
Old: Deals open for 150%+ of our average sales cycle (e.g., if you typically close a deal in 4 months, anything older than 6 months gets flagged).
Stale: Opportunities stuck in the same stage for ~X days with no movement. (X is often ~20% of the sales cycle, so with a 6-month sales cycle, a deal is considered stale once it sits in a single stage for ~40 days.)
Sloppy: Deals missing critical, basic details like next steps, last activity, or a primary contact.
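To make the Old / Stale / Sloppy markers concrete, here’s a minimal sketch of how a report like this could be generated from a CRM export. The thresholds and field names are illustrative assumptions, not the exact report we ran; most CRMs can express the same logic as a saved report or view.

```python
from datetime import datetime

# Illustrative thresholds and field names -- adjust to your own CRM schema.
AVG_SALES_CYCLE_DAYS = 120                       # e.g., a ~4-month average sales cycle
OLD_THRESHOLD_DAYS = AVG_SALES_CYCLE_DAYS * 1.5  # "Old": open 150%+ of the average cycle
STALE_THRESHOLD_DAYS = AVG_SALES_CYCLE_DAYS * 0.2  # "Stale": ~20% of the cycle in one stage
REQUIRED_FIELDS = ("next_step", "last_activity_date", "primary_contact")

def flag_deal(deal, today=None):
    """Return the list of 'clean your room' markers that apply to an open deal.

    `deal` is assumed to be a dict exported from the CRM with `created_date`,
    `stage_entered_date`, and the REQUIRED_FIELDS above.
    """
    today = today or datetime.now()
    flags = []
    if (today - deal["created_date"]).days > OLD_THRESHOLD_DAYS:
        flags.append("old")
    if (today - deal["stage_entered_date"]).days > STALE_THRESHOLD_DAYS:
        flags.append("stale")
    if any(not deal.get(field) for field in REQUIRED_FIELDS):
        flags.append("sloppy")
    return flags

def clean_your_room_report(open_deals):
    """Return only the deals that need attention, along with the reasons why."""
    report = []
    for deal in open_deals:
        flags = flag_deal(deal)
        if flags:
            report.append({"deal": deal["name"], "flags": flags})
    return report
```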
This report provided several helpful nudges for our teams. First, it acted as a forcing function for reps to update the details of their pipeline ahead of the report being published. Since “always scrubbing the pipeline means never scrubbing the pipeline,” it created a public deadline that encouraged hygiene without making CRM updates a pedantic, always-on expectation.
Second, it forced conversations at the ELT level about the key deals in the pipe and how solid our pipeline coverage actually was. While it can be painful at first to realize there’s a lot of junk in your CRM, it’s a necessary wake-up call for teams that want to get better at forecasting. It’s also the catalyst companies typically need to start acknowledging anemic lead-flow and focusing on building their top of the funnel more aggressively. As one of my former CROs puts it:
“Hunger is the best sauce.”
Once we put this report in place, our headline pipeline number went down. We started the quarter with less coverage and a little less comfort. At first, this felt worse. But we drew confidence from the fact that the pipeline we did have was real. And that started a clearer, more honest conversation about what we needed to do to win the deals that were actually in play.
With a pipeline we could actually trust, we started hitting our bookings numbers quarter-after-quarter. The forecast became more reliable. The end-of-quarter scramble eased. And we learned what it felt like to operate with less, but better, pipeline. (Spoiler: It feels good.)
2. SLOW: Your Leads Are Getting Cold
Another one of my portfolio companies had been making real progress on inbound. Every month, somewhere between 30 and 40 prospects were filling out a demo request form to learn more about what they did. That number was growing quarter over quarter. On the surface, this was exactly the kind of marketing momentum you want to see.
And yet, their conversion rates told a different story. Only 20–30% of those hand-raisers were turning into pipeline — far lower than you’d expect for a high-intent “give me a demo” form fill. Something wasn’t lining up.
So we dug in.
After a bit of analysis, the root cause became hard to ignore. On average, the team was getting back to these inbound leads in three to four days. Not hours. Days. Long enough for a motivated buyer to move on, get distracted, or (worst of all) take a meeting with a competitor.
By the time we followed up, many prospects had either lost urgency or forgotten they had raised their hand at all. The low conversion rate wasn’t a mystery anymore.
It was the predictable outcome of a systemically slow response.
The Impact
The most obvious cost here is wasted marketing spend. You pay to generate demand and then fail to capture it while it’s still warm.
But the invisible cost is trust. When a customer raises their hand and you wait three or four days to respond, they don’t experience that as you being busy. They experience it as indifference. Before you’ve even spoken to them, you’ve already signaled that they aren’t a priority to you.
You also kill momentum. You force the prospect to restart their buying journey from zero instead of meeting them at the peak of their intent — the exact moment they decided to signal that they’re ready to talk.
Everyone already knows that speed-to-lead is important. The data is unambiguous: contact a lead within a few minutes and you’re dramatically more likely to connect with them than if you wait even 30 minutes. Wait until the next day and your odds fall off a cliff.
The problem isn’t knowledge. The problem is that there’s no system to measure and expose your slow response times.
The System: Speed-to-Lead
To improve our conversion rates, we decided to implement a speed-to-lead initiative. We set up our CRM to timestamp when leads were created and when they were first contacted. Then we started reporting the difference between those timestamps as a shared metric across sales and marketing.
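As an illustration, the metric itself is just the difference between those two timestamps. Here’s a minimal sketch, assuming a simple lead export with created_at and first_contacted_at fields (the names are hypothetical; use whatever your CRM calls them):

```python
from statistics import median

def speed_to_lead_hours(lead):
    """Hours between lead creation and first outreach; None if still untouched."""
    if lead.get("first_contacted_at") is None:
        return None
    delta = lead["first_contacted_at"] - lead["created_at"]  # datetime objects
    return delta.total_seconds() / 3600

def speed_to_lead_summary(leads):
    """Median response time across contacted leads, plus how many are still waiting."""
    times = [t for t in (speed_to_lead_hours(l) for l in leads) if t is not None]
    return {
        "median_hours": median(times) if times else None,
        "contacted": len(times),
        "still_waiting": len(leads) - len(times),
    }
```

Reporting the median rather than the average keeps one forgotten lead from masking how the typical lead is handled.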
Just making the number visible completely changed the conversation. What had previously lived in the background as “something we should probably be faster at” suddenly became a concrete operating KPI that everyone could see.
From there, the fixes were straightforward:
We started routing leads to the right reps
We alerted sales leadership when a lead sat longer than expected (see the sketch after this list)
We started reporting speed-to-lead at the board level on its own slide, alongside metrics like pipeline coverage and win rates
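For the second fix, the underlying check is simple: find leads that have passed an agreed response window with no first touch and escalate them. The two-hour window and the notification function below are assumptions for illustration; most CRMs can do the same thing with built-in workflow rules.

```python
from datetime import datetime, timedelta

ALERT_AFTER = timedelta(hours=2)  # illustrative response-time target; tune to your own

def overdue_leads(leads, now=None):
    """Return leads with no first contact once the response window has passed."""
    now = now or datetime.now()
    return [
        lead for lead in leads
        if lead.get("first_contacted_at") is None
        and now - lead["created_at"] > ALERT_AFTER
    ]

def notify_sales_leadership(leads):
    """Stand-in for whatever channel you actually use (email, Slack, a CRM task)."""
    for lead in overdue_leads(leads):
        waited_hours = (datetime.now() - lead["created_at"]).total_seconds() / 3600
        print(f"Lead '{lead['name']}' has been waiting {waited_hours:.1f} hours")
```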
With only a few small changes, we sent an unignorable signal that response times mattered, and we created a compelling, visual scoreboard that showed the team, in real time, whether we were actually getting better at handling what should be our most important source of leads.
Today, our speed-to-lead at that company is consistently under an hour. And by driving down that one simple metric, our conversion rates (and the amount of marketing-sourced pipeline we have to work with) have gone way up.
What fixed this? It wasn’t making the argument that speed-to-lead matters. Everyone already knew that. What fixed it was measuring the behavior, creating a scoreboard for it, and making it a thing.
3. UNPREPARED: Your Demos Are Losing You Deals
At another of my portfolio companies, we spent months refining our discovery process. We trained the sales team on better pain-finding questions. We simplified our pitch deck. We got really good at uncovering the specific pain points prospects were dealing with in their workflows.
And then, after a really good discovery call, we’d get to the demo and we’d lose the deal.
It wasn’t obvious at first why this kept happening. Discovery was going well. We had gotten genuinely better at uncovering pain, capturing genuine interest, and nailing down solid next steps. But quarter after quarter, our opportunity-to-close rate sat below 20%. We were still converting less than one-in-five demos into closed business.
When we dug into what was actually happening in those demos, the problem became clear. Despite learning exactly what was challenging about our prospects’ workflows in discovery, our demos hadn’t changed much at all. They were still generic product tours. Instead of focusing on what the prospect said they cared about, our sales engineers would walk the prospect through every feature in the product from beginning to end. They covered everything, but never clearly showed how we could solve the specific problem the prospect had told us they cared most about. The very problem that, had we proven we could solve it, probably would have convinced them to buy.
The Impact
Our problem here wasn’t our product. The product worked fine. The problem was that we were treating demos as training sessions instead of proof sessions. We were teaching prospects how to use the tool before we’d convinced them it could solve their problem.
Prospects don’t want to learn your product. They want to know if you can make their pain go away. When you make them sit through a comprehensive product tour before they’ve decided you can solve their problem, you force them to do the work to translate what your product can do and guess at whether it will work for them. Most won’t do that work. They’ll just space out and move on.
Worse, by treating your demos as a product tour, you validate your prospect’s biggest fear: that they’re about to buy software that won’t get used. A demo that spends lots of time on features unrelated to what they told you they cared about doesn’t build confidence. It manufactures doubt.
The result is polite interest during the demo, followed by silence, a complete loss of commercial momentum, and an anemic win-rate.
The win-rate you can see in the numbers. But the loss of momentum you can feel. (It feels bad.)
The System: The 15-Minute Demo Prep Meeting
The solution here wasn’t improving our demo presentation. It was improving our demo preparation.
We started requiring a 15-minute demo prep meeting before every demo with a prospect, using a one-page worksheet. Sales and the sales engineer were required to fill it out together.
The worksheet has four sections:
Executive Summary: Sales summarizes the key points from discovery—who they are, what they’re dealing with, and what they shared on the first call.
Demo Agenda: The salesperson and the sales engineer fill out a table with three columns:
The problem the prospect described (from discovery)
What we’ll show in the demo (that solves their specific problem)
Key points/questions to focus on during that chapter of the demo
Questions We Still Need Answers To: Sales summarizes “what we still don’t know” that will help us move the deal forward (e.g., buying process, competitors involved, budget details, key integrations, etc.)
Desired Next Steps: Sales + sales engineering agree on what we want to ask for at the end of the demo (which both emphasizes next steps and reminds the team to save time at the end of the call to ask for them)
At first, this felt a little bureaucratic. Another meeting, and another form to fill out. But after we piloted the prep meeting during certification and asked a few “guinea pig” early adopters to use it on real customer calls, the overwhelmingly positive feedback made it easy to roll it out to the rest of the team.
The Outcome
The immediate benefit from our pre-demo prep meetings? It gave our teams permission to rethink how our demos help us win deals.
We had given them a powerful sort of permission. Permission to not show everything in the product. They now had license to skip features that didn’t map to a prospect’s stated pain. Basically, they had permission to be compelling with their demos instead of comprehensive. (More on this framework in a future article.)
And here’s what our leadership team learned with this new approach: while our prospects’ workflows varied widely, most of them had the same three to five pain points. Pain points like finding their files quickly, automating manual work that was sucking up their time, or integrating with other tools in their stack.
Once the team had permission to focus the demo on just that shortlist of things the prospect actually cared about, everything changed. Demos got shorter. More focused. More conversational. Prospects stopped putting us on our back foot with questions like “But can it do X?” and started validating us, saying things like “This is exactly what we’ve been struggling to figure out how to fix.”
And our sales engineers felt more confident because the knowledge transfer was complete. They knew what the prospect cared about before the demo began. They had a plan. They weren’t being forced to wing it anymore.
Close rates started to improve not because our product or pitch changed, but because we learned how to make our demos relevant. All of that came from a simple 15-minute alignment worksheet. A worksheet that gave our team a framework for what it meant to be prepared.
Final Thoughts
The gap between knowing what needs to happen and making sure it actually happens every single time? That’s where most companies live and lose.
You already know you should respond to leads fast. You know your pipeline needs scrubbing. You know demos should be personalized. But knowing doesn’t matter if there’s no system forcing it to happen when your team is distracted, underwater, or moving on to the next fire.
High-performing teams don’t rely on people remembering to do the right thing. They build simple, repeatable forcing functions that make the right thing the easy thing. A recurring report that surfaces stale deals. A timestamp that exposes slow follow-up. A 15-minute prep meeting that prevents generic demos.
I’ll leave you with an encouraging fact. Your competitors are dropping balls right now. They’re letting leads go cold, accumulating garbage in their pipeline, and running demos that don’t land.
What does that mean? It means you don’t need to be perfect.
You just need systems that help you drop fewer balls than they do.
So… where are you going to start?

