Authored by Fiona Cummings, DispatchHealth.
(Includes obligatory business buzzwords.)
DispatchHealth is a platform that allows patients to request on-demand medical care sent directly to their homes. Think urgent care on wheels.
In a marketplace operating model, efficiently matching the supply of clinical provider hours with patient demand is crucial to the unit economics of the business. Equally important is demonstrating a value-added user experience to patients, providers, and strategic partners that clearly outperforms traditional healthcare settings.
With many potential paths forward, hundreds of good product ideas, and finite hours in the day, effectively prioritizing high-impact product features is our product team's most critical challenge.
At the beginning of 2022, it was clear we were not set up to make these important trade-offs effectively. We needed an overhaul of our product development process, including enhanced analytics capabilities. To make this happen, we needed to remedy seven years of tech debt with a brand-new engineering team (an Everest on its own), but the more fundamental change was shifting the organizational mindset toward product strategy and the effective use of data.
Thus began my quest to establish experimentation as standard practice in our product development process and our organizational psyche.
The experiment “happy path” of identifying a high-value feature, quantifying positive business impact, and launching a wildly successful product is naturally the more desirable (and often heavily promised) outcome.
However, I’d argue the “utter failure” experiments—that expose gaps in process, move metrics in the wrong direction, and have unintended downstream consequences—are equally (if not more) important for two critical reasons:
1. The volume and significance of learning is much greater when things do not go according to plan. Teams are forced to work together and identify root causes that would otherwise have been left unaddressed or unknown. These insights give valuable strategic guidance on resource allocation and increase the likelihood of unlocking higher-impact projects down the road.
2. Learning from failures and building a culture of thoughtful analysis is crucial to creating a high-performing product team. Openly highlighting what is not working instead of only focusing on wins is how teams solve really hard problems, such as improving the US healthcare system. ⁽ᵈᵃᵘⁿᵗᶦⁿᵍ⁾
To tie the points above to reality, I will focus on a “high-potential” conversion project we used to pilot experimentation at DispatchHealth. But first, a brief background on the motivation and the environment we were operating in, to provide the proper context:
Right now, there is no digital-only way for a patient to schedule care with DispatchHealth. When a potential patient visits our website and submits a request, they are then instructed to wait for a phone call from DispatchHealth to finish the onboarding process.
This is, understandably, one of the highest drop-off points in our funnel. Many patients are either unavailable when we call them back, or they did not anticipate speaking with a human and opt out entirely.
The feature we chose for our maiden A/B test voyage was called ‘Click to Call.’ This feature aimed to reduce the number of drop-offs attributed to the clunky user experience of needing to answer a phone call after submitting a request online.
In the test experience, we added the ‘Click to Call’ button as the last step of the web request process and prompted the patient to call us directly instead of waiting for a call. The hypothesis was that a more streamlined user experience would reduce the percentage of patients lost in the funnel due to a missed call and drive conversion up.
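For a rough sense of how a variant like this can be gated and measured, here is a minimal sketch using the statsig-js client SDK. The SDK key, experiment name, parameter name, and event name below are hypothetical placeholders, not our actual configuration.

```typescript
// A minimal sketch of gating the 'Click to Call' variant behind a Statsig
// experiment with the statsig-js client SDK. The SDK key, experiment name,
// parameter name, and event name are hypothetical placeholders.
import statsig from 'statsig-js';

async function renderFinalRequestStep(userID: string): Promise<void> {
  // A stable user identifier keeps the same visitor in the same group.
  await statsig.initialize('client-sdk-key', { userID });

  // Statsig handles randomization; the client only reads the parameter
  // to decide which experience to render.
  const showClickToCall = statsig
    .getExperiment('click_to_call')
    .get('show_click_to_call', false);

  if (showClickToCall) {
    // Test group: prompt the patient to call us directly.
    console.log('Render the Click to Call button');
  } else {
    // Control group: existing flow, wait for our outbound call.
    console.log('Ask the patient to wait for a call from us');
  }

  // Log the funnel event the conversion analysis will be keyed on.
  statsig.logEvent('care_request_submitted');
}
```

In this setup the SDK records an exposure when the experiment is read, so the assignment data needed to compare conversion across groups is captured automatically.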
The introduction of experimentation represented a fundamental shift in how DispatchHealth approached product development. Before the ‘Click to Call’ functionality was even built, there was strong momentum behind launching it on the hypothesis alone. The feature promised to improve the patient experience and boost revenue growth by increasing conversion—a PM’s dream.
It was seen as a no-brainer that should be launched as soon as possible to maximize revenue before year-end. Given these expectations, lobbying to add time for the experiment setup, data collection, and analysis was met with about as much enthusiasm as a dental check-up.
All parties agreed that experimentation was a healthy addition to the routine in general, but whether the extra time was worth it for every new feature going forward was still an untested value proposition.
If you haven’t caught on to the heavy foreshadowing yet, this experiment did not go according to plan.
Instead of decreasing the number of patients we could not reach, the test group showed an increased missed-patient rate and a significant, disheartening drop in conversion. While this was not the success scenario I had hoped for to drive momentum on the new experimentation process, the test did clearly show that our hypothesis was not as airtight as originally thought.
Additionally, the data collected from the experiment gave us higher-quality leads to diagnose where things were going wrong and how to fix them.
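For context on what “significant” means here: a common way to judge whether a conversion gap between control and test is real or just noise is a two-proportion z-test. The sketch below uses made-up numbers purely for illustration; they are not our actual results.

```typescript
// A back-of-the-envelope two-proportion z-test, a standard way to check
// whether a conversion-rate gap between control and test is statistically
// significant. The counts below are illustrative only, not real results.
function twoProportionZTest(
  convA: number, totalA: number, // control: conversions / requests
  convB: number, totalB: number, // test: conversions / requests
): number {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pPooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se; // |z| > 1.96 is roughly significant at the 95% level
}

// Hypothetical numbers: control converts 30% of 5,000 requests,
// test converts 27% of 5,000 requests.
const z = twoProportionZTest(1500, 5000, 1350, 5000);
console.log(z.toFixed(2)); // ≈ -3.32, so a drop this size is unlikely to be noise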
While this was not the result I thought I needed to prove the value proposition that I had been peddling for months, it ultimately underscored the importance of experimentation in a more powerful way than the success scenario could have.
The unexpected results forced us to confront the core operational and technical gaps lingering under the hood and collaborate across teams to find the best solutions. Had we operated under the status quo, the team would have launched the feature directly to production and held out hope for a positive conversion trend to materialize.
The aftermath would have been a murky combination of seasonality impacts, technical issues, and operational deficiencies—creating a vulnerable scenario for politics and inertia to drive decisions instead of data.
Adding experimentation to the product development process is not free. More engineering and analytics resources are required to set up the technical components, product managers take on more stakeholder management, and timelines for product launches need to be extended.
However, the value realized does not come only from quantifying the impact of successful launches; experimentation also prevents teams from chasing a losing strategy, cuts down on debates over unclear results, and gives insight into what should be prioritized in the future.
With the initial implementation hurdle behind us, we can now focus on getting experimentation processes to a steady state for all product teams, and Statsig has been a reliable partner every step of the way to get us here.
The support team has gone above and beyond, working through issues directly with our engineers and tailoring solutions to our use cases. Investing in experimentation tooling with top-notch support has been a critical component of up-leveling our product strategy and scaling our analytics coverage across teams. Thank you, Statsig team!!
Power on nerds 🤘⚡