Laurel: The pandemic really showed us how important, and how hard, the race to deliver new treatments and vaccines to patients is. Can you explain what evidence generation is and how it fits into drug development?
Arnaub: Sure. As a concept, evidence generation in drug development is nothing new. It's the art of bringing together data and analytics to successfully demonstrate the safety, the efficacy, and the value of your product to a variety of different stakeholders: regulators, payers, providers, and last, and most importantly, the patient. And I'd say evidence generation includes not only the trial itself, but also the many different types of studies that pharmaceutical or medical device companies conduct. These can be things like literature reviews, or observational data studies that demonstrate the burden of illness or even treatment patterns. If you look at how most companies are designed, clinical development teams focus on designing the protocol and executing the trial, and they're responsible for a successful readout of that trial. Most of that work happens within clinical development. But as a drug gets closer to launch, it's the health economics and outcomes research teams and the epidemiology teams who are helping to figure out what the value of the product is and how we better understand the disease.
So I think we're at a pretty interesting inflection point in the industry right now. Evidence generation is an activity that spans many years, both during the trial and, in many cases, long after the trial. We've found this to be especially true for vaccine trials, and for oncology and other treatment areas as well. The covid vaccine companies assembled their evidence packages in record time, and it was an incredible effort. And now I think the FDA is navigating a difficult balance: they want to drive the innovation we're talking about, the advancement of new therapies for patients, and they've built mechanisms such as accelerated approval to deliver those therapies faster, but they also need confirmatory trials or long-term follow-up to really understand the safety and effectiveness of these drugs. That's why the concept we're talking about today is so important: how do we do this faster?
Laurel: That's certainly important when you're talking about life-saving innovations, but as you mentioned, with the rapid pace of technological innovation combined with the way data is generated and reviewed, we're at a special inflection point here. So how have data and evidence generation evolved over the past few years, and how might a vaccine's evidence package look different now than it would have five or ten years ago?
Arnaub: It's important to establish the distinction here between clinical trial data and what's called real-world data. The randomized controlled trial is, and remains, the gold standard for evidence generation and submission. In a clinical trial we have a very controlled set of parameters and a focus on a narrow group of patients, and there's a lot of specificity and detail in what's being captured. There's a regular assessment period. But we also know the trial environment isn't necessarily representative of how patients actually fare in the real world. And that term, "real world," is a wild west of a bunch of different things. It's claims data or billing records from insurance companies. It's the electronic medical records that come out of providers, hospital systems, and labs, and increasingly it's newer forms of data from devices, or even data reported directly by patients. Real-world data, or RWD, is a large and diverse collection of different sources that can capture patient outcomes as patients enter and exit different healthcare systems and environments.
Ten years ago, when I first started working in this space, the term "real-world data" didn't even exist. It was almost a dirty word; it's really something that's been formalized in recent years by the pharmaceutical industry and by regulators. The other important dimension is that regulators, through landmark pieces of legislation like the 21st Century Cures Act, have advanced how real-world data can be used and incorporated to augment our understanding of treatments and diseases. So there's a lot of momentum here. Real-world data is used in some form in 85% to 90% of new drug applications approved by the FDA. So this is a world we have to navigate.
How do we preserve the rigor of the clinical trial and still tell the whole story, and how do we bring in real-world data to complete that picture? It's a problem we've been focused on for the last couple of years, and during covid we actually built a solution for it called Medidata Link, which connects patient-level data in the clinical trial with all the non-trial data that exists in the world for that same patient. As you can imagine, the reason this was so important during covid, and we actually started this with a covid vaccine manufacturer, is so we could study long-term outcomes: so we could combine the trial data with what we're seeing after the trial. Do the vaccines hold up in the long run? Are they safe? Are they effective? And that, I think, is something that will keep emerging, and it's been a big part of the evolution over the last few years in terms of how we collect data.
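To make the patient-level linkage idea concrete, here is a minimal, hypothetical sketch of joining trial records with real-world records (such as claims) on a shared patient token. All field names and data are invented for illustration; this is not a description of how Medidata Link is actually implemented.

```python
# Hypothetical patient-level record linkage: joining clinical trial data
# with real-world data (e.g., claims) on a shared, de-identified token.

trial_records = [
    {"patient_token": "P001", "arm": "vaccine", "trial_outcome": "seroconverted"},
    {"patient_token": "P002", "arm": "placebo", "trial_outcome": "no_response"},
]

claims_records = [
    {"patient_token": "P001", "post_trial_event": "none", "months_followed": 18},
    {"patient_token": "P002", "post_trial_event": "infection", "months_followed": 12},
]

def link_by_patient(trial, claims):
    """Merge trial and real-world records that share a patient token."""
    claims_by_token = {r["patient_token"]: r for r in claims}
    linked = []
    for t in trial:
        rwd = claims_by_token.get(t["patient_token"])
        if rwd is not None:
            # Combined record gives a longitudinal, in-trial plus
            # post-trial view of the same patient.
            linked.append({**t, **rwd})
    return linked

linked = link_by_patient(trial_records, claims_records)
```

In practice this kind of linkage relies on privacy-preserving tokenization and patient consent rather than plain identifiers, but the core operation is the same join on a common key.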
Laurel: That data collection story is certainly part of the challenge of generating this high-quality evidence. What are some of the other gaps you've seen in the industry?
Arnaub: I think the elephant in the room in pharmaceutical development is that despite all the data and all the advances in analytics, the probability of technical success, or regulatory success as it's called, for a drug is still really low. The overall likelihood of approval from phase one sits consistently below 10% across a number of different therapeutic areas: it's below 5% in cardiovascular, just over 5% in oncology and neurology, and so on. And I think what underlies these failures is a lack of data to support efficacy. Many companies submit or include what regulators call flawed study designs, inappropriate statistical endpoints, or, in many cases, underpowered trials, meaning the sample size was too small to reject the null hypothesis. So you're wrestling with some important decisions if you look only at the trial itself, and there are gaps where data could be brought to bear more heavily and inform decision making more effectively.
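The point about underpowered trials can be made concrete with a standard power calculation. The sketch below approximates the power of a two-sided, two-proportion z-test using the normal approximation; the response rates and sample sizes are hypothetical, and a real trial design would use validated statistical software rather than this illustration.

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_proportion_power(p1, p2, n_per_arm, z_alpha=1.959963984540054):
    """Approximate power of a two-sided two-proportion z-test at alpha=0.05.

    Normal approximation; ignores the negligible opposite-tail term.
    Illustrative only, not a trial-design tool.
    """
    p_bar = (p1 + p2) / 2.0
    se_null = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)  # SE under H0
    se_alt = math.sqrt(p1 * (1.0 - p1) / n_per_arm + p2 * (1.0 - p2) / n_per_arm)
    z = (abs(p1 - p2) - z_alpha * se_null) / se_alt
    return normal_cdf(z)

# Hypothetical example: 30% control response vs. 40% treatment response.
power_small = two_proportion_power(0.30, 0.40, n_per_arm=50)   # ~0.18: badly underpowered
power_large = two_proportion_power(0.30, 0.40, n_per_arm=500)  # ~0.91: adequately powered
```

With 50 patients per arm, a real 10-point difference in response rate would be detected less than one time in five, which is exactly the "sample size too small to reject the null hypothesis" failure mode described above.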
So when you design a trial, you're asking, "What are my primary and secondary endpoints? What inclusion or exclusion criteria do I select? What's my comparator? What's my use of biomarkers? And then how do I interpret the readout? How do I understand the effect?" It's a multitude of different choices and permutations of decisions that have to be made in parallel, and all of this data and information can come from the real world; we talked about how valuable the electronic health record can be. But the gap, the problem, is: how was that data collected? How do you verify where it came from? Can it be trusted?
So despite the sheer volume of data, these gaps persist, and they can introduce significant bias in a number of different areas: selection bias, meaning differences in the types of patients selected for treatment; performance bias; detection bias; and issues with the data itself. So what we're trying to navigate is how to combine these data sets in a robust way that addresses some of the drivers of drug failure I mentioned earlier. Our own approach has been to take a curated historical clinical trial data set sitting on our platform and use it to contextualize what we're seeing in the real world, to better understand how patients are responding to therapy. And that, in theory, and in the work we've done, helps clinical development teams use data in a new way to design a trial protocol or to improve some of the statistical analyses they run.
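As an illustration of contextualizing a readout against historical data, here is a hypothetical sketch comparing a single-arm trial result with a curated external control cohort. The cohorts and response rates are invented, and this is not a description of any vendor's actual methodology, which would also need to adjust for the biases discussed above, for example via matching or weighting.

```python
# Hypothetical single-arm trial contextualized against a curated
# historical (external) control cohort. 1 = responder, 0 = non-responder.

trial_arm = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]           # new therapy, n=10
historical_control = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0,
                      1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # curated external cohort, n=20

def response_rate(outcomes):
    """Fraction of patients who responded."""
    return sum(outcomes) / len(outcomes)

trial_rr = response_rate(trial_arm)             # 0.70
control_rr = response_rate(historical_control)  # 0.30
observed_benefit = trial_rr - control_rr        # ~0.40
```

The historical benchmark gives the single-arm readout a reference point, but the comparison is only as credible as the curation and bias adjustment behind the external cohort.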