Many clinical trials don’t fail because the science is hard; they fail because the system is inefficient. Curavit is engineering a better one.
Most people who end up running clinical trials start in medicine or research. Joel Morse started in mechanical engineering, which may explain why he’s less patient with broken systems—and more interested in understanding where, exactly, they begin to fail.
After years building and scaling global, tech‑enabled services for pharma, Joel noticed something unsettling: promising trials weren’t failing because the science was wrong. They were failing because enrollment was slow, screening was inefficient, and the entire operational model was clogged with handoffs, workarounds, and guesswork, especially at the points where better information could have changed the outcome.
So he stopped trying to optimize around the problem—and decided to redesign the solution from first principles.
As founder of Curavit, an innovative clinical research organization (CRO), Joel is rethinking how trials are run, whom they’re built for, and why enrollment is still the industry’s most predictable failure point. His partnership with Milliman Contxt reflects a shared belief that better data—used earlier—can surface risks and opportunities before they harden into delays.
We talked with Joel about why the industry keeps tripping over the same problems, and what it takes to fix them.
Let’s start here: why do so many clinical trials still struggle to enroll participants?
Because the system was never designed to work efficiently at scale.
Traditional trials depend on geography, overextended sites, and manual processes that don’t talk to each other. Eligibility data lives in too many places—or nowhere useful—and a lot of time is spent chasing participants who were never likely to qualify.
The industry has known this for years, but knowing there’s a problem isn’t the same as understanding where it actually originates.
When 80% of trials miss enrollment timelines, that’s not bad luck. That’s a structural problem.
So why hasn’t it been fixed yet?
We keep trying to patch the system instead of redesigning it. People add tools, vendors, and process layers, but the underlying assumptions stay the same. Trials are still site‑centric. Screening still happens too late. And enrollment hassles are treated as inevitable instead of addressable.
At some point, you have to admit the model itself is the issue.
And that’s what led you to Curavit?
Exactly. We asked a simple question: What if the site wasn’t the center of the universe?
Curavit’s Remote Clinical Site rethinks recruitment, screening, consent, and monitoring so trials can actually operate in the real world.
Decentralization removes geographic barriers and makes participation easier. But it’s not enough on its own. You still need to know who you’re recruiting and whether they’re likely to qualify before you invest heavily. That’s where things usually break down.
Enter Milliman Contxt. What problem does it solve for you?
Early qualification is one of the most expensive failure points in clinical trials. But it’s also one of the most fixable. Too often, screening relies on self‑reported information and late‑stage chart review, which means critical disqualifiers stay invisible until time and money are already spent. That leads to high screen‑fail rates, wasted effort, and blown timelines.
Contxt brings claims‑based medical history into the process much earlier. That means fewer false starts, fewer surprises, and far more confidence that we’re focusing on the right participants from the beginning.
What difference does that actually make?
Well, a big one. In one recent study, we were responsible for enrolling half the total population—about 300 participants—using our decentralized model. We finished in roughly four months. The remaining half went to nine traditional physical sites. They were still enrolling and expected to take roughly four times as long.
That gap isn’t about motivation or competence. It’s about system design.
There’s a lot of talk about improving representation in trials. Does decentralization actually help?
It enables it, but it doesn’t guarantee it. Removing travel and time barriers opens the door, but outcomes still depend on intentional design. You won’t get representation if you don’t plan for it.
In one study, we enrolled African American women at about 1.5 times the national average. That didn’t happen by accident. We engineered it. Recruitment strategies matter just as much as trial design.
Your background is in mechanical engineering. How much does that shape how you think about trials?
It’s who I am, so it shapes my viewpoint in a huge way. Engineers are trained to look at systems, identify failure points, and redesign for reliability. Clinical trials are complex, but that doesn’t mean they have to be fragile.
I’ve come to see that trials are too often treated as a series of disconnected tasks. At Curavit, we treat them like a machine. If it’s breaking down, you don’t blame the operator; you study the failure points.
If you could wave a magic wand and redesign trials from scratch, what would change?
Oversight would stay just as rigorous. The difference would be in how things actually run. Data would flow straight from source systems instead of being manually chased and re-entered, making meaningful insights available while decisions still matter. Monitoring would be continuous and risk-based. And communication would flex around participants, instead of pushing everyone through the same rigid process.
In other words, trials would finally work the way people actually live.
Last question: what makes you optimistic that change is actually possible now?
The industry is finally uncomfortable enough to rethink its assumptions and to ask better questions about what’s actually slowing trials down. Sponsors are under pressure to move faster and control costs. Regulators have clarified expectations around decentralized and hybrid models. And the technology is finally ready to support better execution.
We don’t need magic to fix clinical trials. We need better systems—and the willingness to build them.