Archive for the ‘Pharmaceutical’ Category

Why clinical CROs hate eCRF systems – and why you should love them

Everything from banking to government services, from shopping to gambling has moved on-line in the past decade, yielding huge efficiency gains for suppliers and (for the most part) an improved experience for the customer.  Suppliers that have failed to adjust their business model are being slowly (or not so slowly) ejected from the marketplace.

Against this background, then, it is surprising that such a high percentage of clinical trials are performed using simple pen and paper to record the raw data.  The classical paper Case Report Form (or CRF) has changed little in decades – and seems surprisingly entrenched against the assault of the digital age.

At first glance that seems understandable enough – after all, if you just want a flexible tool to record free-form information then pen and paper is still hard to beat.  The key word, some clinical researchers argue, is flexibility.  You never know what might happen, so it’s hard to predict in advance the kind of information you will need to capture.  Whatever the eventuality, the paper CRF can accommodate it.  And anyway, it can never fail you – what happens to a digital system if the power fails or the internet connection goes down?

The flexibility is undeniable – we have all experienced on-line forms (even from large companies and government departments with huge IT budgets who should really know better) that simply will not allow you to enter the information you need to give them.  Quite simply the designer hadn’t put themselves in your particular situation when they designed the form.

As a result, digital forms work best for simple tasks (like booking a flight or buying a book) and much less well for complex tasks (such as completing your tax return).  There seems little doubt in which camp a clinical trial falls.

But managed correctly, this lack of flexibility is also the greatest strength of an electronic Case Report Form (or eCRF).  Flexibility in the hands of a genius is an unmitigated good – but flexibility gives people the opportunity to make mistakes.  Quite simply, the same digital system that frustrates and infuriates because it won’t let you enter the right kind of information is performing a useful gatekeeper function when it prevents you entering errors.  An electronic form won’t allow a body mass index of 235 or an age of 216 – errors that can be quickly and easily corrected if they are spotted in real time while the patient is still present, but much harder to correct when identified later.
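
A minimal sketch of the kind of range check an eCRF can apply at the point of entry (the field names and limits below are hypothetical illustrations, not Total Scientific’s actual validation rules):

```python
# Hypothetical eCRF-style plausibility checks, applied while the patient is still present.
FIELD_LIMITS = {
    "age_years": (0, 120),
    "body_mass_index": (10, 80),
}

def validate_entry(field, value):
    """Return None if the value is plausible, otherwise a message for the site coordinator."""
    low, high = FIELD_LIMITS[field]
    if not (low <= value <= high):
        return f"{field} = {value} is outside the plausible range {low}-{high}; please re-check"
    return None

print(validate_entry("body_mass_index", 235))  # caught immediately
print(validate_entry("age_years", 216))        # caught immediately
```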

Smart data entry doesn’t just catch errors.  It can also improve the quality of data by forcing free-form information into categories.  Categorical data can be subjected to statistical analysis more easily than unstructured text – and the originator of the data is much better placed to choose a category from a list than a data analyst attempting to throw a quadrat over the free-form data much later on.  There is no reason not to include a free text ‘notes’ field alongside the categories so that the full richness of the data that would have been captured on a paper form is also included in the eCRF.

Going digital can improve the quality of clinical data in other ways too.  Patient-reported outcomes are important end-points in many trials, but they are notoriously unreliable – they are subject to biases depending on how the questions are administered, as well as substantial variation from one day to the next.  The eCRF can help on both scores: using a computer, or even an iPad, to administer the questionnaire removes the variability in presentation that inevitably occurs with a human operator.  Equally importantly, the ease and reliability with which the reporting tool can be self-administered allows data to be collected much more frequently – and time-averaged data is considerably more powerful than spot measures for highly variable end-points such as patient-reported outcome scales.

There is no reason in principle why the eCRF cannot be a truly interactive tool – providing information to the clinical researcher at the same time as the clinical researcher records information in the eCRF.  The eCRF becomes a dynamic manifestation of the protocol itself – reminding the researcher of the sequence of tests to be administered, or the individual steps of the protocol for more complex or lengthy procedures.  It can, of course, integrate information from the patient with the protocol to provide patient-specific instructions.  For example, in a recent clinical trial using the cutting-edge eCRF platform from Total Scientific, one of the end-points involved processing sputum samples.  The volume of reagents added to the sputum depended on the weight of the sputum plug.  Using a paper CRF would have required the clinical researcher to perform relatively complex calculations in real time while preparing the sputum sample; with the customised eCRF from Total Scientific, the weight of the sputum plug was entered and the eCRF responded with a customised protocol for processing the sample, with all the reagent volumes personalised for that particular sample.
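
A sketch of the kind of weight-driven calculation the eCRF performed is shown below; the reagent ratio, buffer and processing steps are invented placeholders standing in for the trial’s real protocol:

```python
# Illustrative only: the 4 µL/mg ratio and the steps below are placeholders,
# not the actual sputum-processing protocol used in the trial.
def sputum_processing_instructions(plug_weight_mg):
    dtt_volume_ul = 4.0 * plug_weight_mg      # volume of dithiothreitol solution
    pbs_volume_ul = dtt_volume_ul             # equal volume of buffer added afterwards
    return (f"Add {dtt_volume_ul:.0f} µL DTT solution, vortex and incubate, "
            f"then add {pbs_volume_ul:.0f} µL PBS and filter.")

# The researcher enters the plug weight; the eCRF returns personalised volumes.
print(sputum_processing_instructions(plug_weight_mg=132))
```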

A cleverly designed eCRF, then, is like having your own scientists permanently present at the clinical sites.  The eCRF is looking over the shoulder of every clinical research assistant and providing advice (in the form of the interactive protocol) and preventing errors.  This “real-time electronic monitoring” severely restricts the flexibility of the clinical researchers to do anything other than exactly what you intended them to do.  And this is why many clinical CROs do not like eCRFs. Loss of flexibility makes their job harder – but makes your clinical data better!

Of course, not all eCRFs are born equal.  Some deliver the restriction and lack of flexibility over data entry in return for only very limited data-checking.  Unless you really harness the power of using an eCRF rather than pen and paper, there is a danger it can cost more and deliver less.  But a well-designed eCRF, whose functionality has been matched to the needs of your particular protocol, brings huge benefits in data quality – which translate directly into increased statistical power.  Total Scientific’s bespoke eCRF platform, for example, uses individually-designed layouts grafted onto a powerful relational database engine to provide features that are difficult or impossible to realize using conventional eCRF products that are rigid and poorly-optimized for each new user (being little more than digital versions of the paper CRF they replace).

As a result, we provide features such as colour-coded dashboards for each patient visit that show, at a glance, which tasks have been completed and which remain outstanding.  We also offer user-defined options to display the blinded data in real time, so that outliers and trends in the data can be visualized and identified with an ease unimaginable in the days of paper-only data capture.

And the eCRF is still evolving.  At Total Scientific we are working on modules that implement statistical process control right into the eCRF itself.  Statistical process control is a well-established framework for monitoring complex systems, such as silicon chip fabrication plants.  By looking at all the data emerging from the process (whether chip manufacture or recruitment of patients) it spots when a significant deviation over time has taken place.  In the manufacturing setting, that allows the operators to halt production before millions of chips are made that will fail quality control.  In a clinical trial, statistical process control would identify any unexpected changes in baseline values that cannot be explained by random variation alone and flag them up – while the trial is still running.  While such artefacts can be identified in a conventional locked clinical database during data analysis, it is then too late to do anything about it (other than repeat the trial), and these common artefacts then substantially lower trial power.  Incorporating statistical process control into Total Scientific’s eCRF platform promises, for the very first time, to take clinical data quality to a new level.
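
As a flavour of what such a module might do, here is a minimal Shewhart-style sketch that establishes control limits from the first patients recruited and flags later baseline values that breach simple rules (toy rules and toy data; the module under development uses a richer rule set):

```python
import random
import statistics

def control_flags(baselines, reference_n=20):
    """Flag baseline values that breach simple Shewhart/Western Electric-style rules."""
    ref = baselines[:reference_n]
    centre, sd = statistics.mean(ref), statistics.stdev(ref)
    flags = []
    for i, x in enumerate(baselines[reference_n:], start=reference_n):
        if abs(x - centre) > 3 * sd:
            flags.append((i, x, "outside 3-sigma control limits"))
        window = baselines[i - 7:i + 1]
        if i - 7 >= reference_n and all((y > centre) == (window[0] > centre) for y in window):
            flags.append((i, x, "run of 8 consecutive values on one side of the centre line"))
    return flags

# Toy example: a drift in the baseline value appears after the 30th patient.
random.seed(1)
values = [random.gauss(5.0, 0.5) for _ in range(30)] + [random.gauss(6.0, 0.5) for _ in range(15)]
for i, x, reason in control_flags(values):
    print(f"patient {i + 1}: baseline {x:.2f} flagged ({reason})")
```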

If you are planning a trial and your clinical CRO is trying to convince you that the paper CRF system they have always used is better – more flexible and cheaper because they don’t have to learn a new system – then it’s time to list the benefits of a cutting-edge eCRF system.  They may not like the idea of “big brother” watching their every move – but that’s precisely why you should insist on it!

David Grainger
CBO, Total Scientific Ltd.

The HDL myth: how misuse of biomarker data cost Roche and its investors $5 billion

On May 7th 2012, Roche terminated the entire dal-HEART phase III programme looking at the effects of their CETP inhibitor dalcetrapib in patients with acute coronary syndrome.  The immediate cause was the report from the data management committee of the dal-OUTCOMES trial in 15,000 patients that there was now no chance of reporting a 15% benefit with the drug.

The market reacted in surprise and disappointment and immediately trimmed $5 billion off the market capitalization of Roche.  After all, here was a class of drugs that had been trumpeted by the pharma industry as the next “super-blockbusters” to follow the now-generic statins. The data from dal-OUTCOMES has dealt that dream a fatal blow.

The important lesson, however, is that such a painful and expensive failure was entirely preventable, because the dream itself was built on a fundamentally flawed understanding of biomarkers.   And that’s not speaking with the benefit of hindsight: we predicted this failure back in January 2012 in the DrugBaron blog.

CETP inhibitors boost HDL (the so-called “good cholesterol”) by inhibiting the Cholesterol Ester Transfer Protein (CETP), a key enzyme in lipoprotein metabolism. And they work! HDL cholesterol concentrations are doubled soon after beginning treatment, more than reversing the depressed HDL levels that are robustly associated with coronary heart disease (and indeed risk of death from a heart attack).

That was quite a firm enough foundation for developers to believe that CETP inhibitors had a golden future. After all, HDL is the “best” biomarker for heart disease. By that I mean that, of all the lipid measures, HDL gives the strongest association with heart disease in cross-sectional studies and is the strongest predictor of future events in prospective studies. Since we know lipids are important in heart disease (from years of clinical experience with statins), therefore elevating HDL with CETP inhibitors just HAS to work. Right?

Wrong.

Strength of an association is just one factor in the decision as to whether a biomarker and an outcome are linked.  Unfortunately, Sir Austin Bradford Hill listed it first among his seminal criteria, published in 1965 and still widely used today.  And he didn’t provide a strong enough warning, it seems, that it is only one factor out of the nine that he listed.  Total Scientific updated those criteria for assessing modern biomarker data in 2011, and stressed how the strength of an association could be misleading, but obviously that was too late for Roche, who were already committed to a vast Phase 3 programme.

Here’s the problem with HDL. HDL cholesterol concentrations are temporally very stable – they do not change a great deal from one day to the next, or even for that matter from one month to the next. A single (so-called ‘spot’) measure of HDL cholesterol concentration, therefore, represents an excellent estimate of the average concentration for that individual over a substantial period.

Other lipid parameters do not share this characteristic. Triglyceride concentration, for example, changes not just day by day but hour by hour. Immediately following a meal, triglyceride levels rise dramatically, with the kinetics and extent of the change dependent on the dietary composition of the food and the current physiological status of the individual.

These temporal variation patterns bias how useful a spot measure of a biomarker is for a particular application. If you want to predict hunger or mood (or anything else that varies on an hour-by-hour timescale) triglycerides will have the advantage – after all, if HDL doesn’t change for weeks it can hardly predict something like hunger. By contrast, if you want to predict something like heart disease that is a very slowly progressing phenotype, the same bias favours a spot measure of HDL over a spot measure of triglycerides.
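
A small simulation makes the point, using made-up variance parameters rather than real lipid data: a spot measure of a temporally stable marker is a far better proxy for an individual’s long-run average (the quantity that plausibly relates to a slowly progressing disease) than a spot measure of a volatile one.

```python
import random
import statistics

def spot_vs_average_correlation(within_sd, n_subjects=2000, n_days=90, between_sd=0.3):
    """Correlation between a single 'spot' measure and the subject's 90-day average."""
    spots, averages = [], []
    for _ in range(n_subjects):
        usual_level = random.gauss(1.0, between_sd)                      # between-subject variation
        daily = [usual_level + random.gauss(0, within_sd) for _ in range(n_days)]
        spots.append(daily[0])                                           # what a single clinic visit captures
        averages.append(statistics.mean(daily))                          # what slowly evolving outcomes 'see'
    return statistics.correlation(spots, averages)                       # statistics.correlation needs Python 3.10+

random.seed(0)
print("stable, HDL-like marker  (day-to-day SD 0.05): r =", round(spot_vs_average_correlation(0.05), 2))
print("volatile, TG-like marker (day-to-day SD 0.60): r =", round(spot_vs_average_correlation(0.60), 2))
```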

HDL cholesterol concentration, then, as a biomarker has an in-built advantage as a predictor of heart disease IRRESPECTIVE of how tightly associated the two really are, and most critically IRRESPECTIVE of whether there is a real causative relationship between low HDL and cardiovascular disease.

All this matters a great deal because all the lipid parameters we measure are closely inter-related: low HDL is strongly associated with elevated (on average) triglycerides and LDL. For diagnosing patients at risk of heart disease you simply pick the strongest associate (HDL), but for therapeutic strategies you need to understand which components of lipid metabolism are actually causing the heart disease (while the others are just associated as a consequence of the internal links within the lipid metabolism network).

Picking HDL as a causative factor primarily on the basis of the strength of the association was, therefore, a dangerous bet – and, as it turns out, one that led to some very expensive mistakes.

Okay, so the structural bias towards HDL should have sounded the alarm bells, but surely it doesn’t mean that HDL isn’t an important causative factor in heart disease? Absolutely correct.

But this isn’t the first “death” for the CETP Inhibitor class. As DrugBaron pointed out, the class seemed moribund in 2006 when the leading development candidate, Pfizer’s torcetrapib, failed to show any signs of efficacy in Phase 3.

As so often happens, when observers attempted to rationalize what had happened, they found a ‘reason’ for the failure: they focused on the small but significant hypertensive effect of torcetrapib – a molecule-specific liability. An argument was constructed that an increase in cardiovascular events due to this small increase in blood pressure must have cancelled out the benefit due to elevated HDL.

That never seemed all that plausible – unless you were already so immersed in ‘the HDL myth’ that you simply couldn’t believe it wasn’t important. To those of us who understood the structural bias in favour of HDL as a biomarker, the torcetrapib data was a strong premonition of what was to come.

So strong was ‘the HDL myth’ that voices pointing out the issues were drowned out by the bulls who were focused on the ‘super-blockbuster’ potential of the CETP inhibitor class. Roche were not the only ones who continued to believe: Merck have a similar programme still running with their CETP Inhibitor, anacetrapib. Even the early data from that programme isn’t encouraging – there is still no hint of efficacy, although they rightly point out that there have not yet been enough events analysed to have a definitive answer.

But the signs are not at all hopeful. More than likely in 2012 we will have the painful spectacle of two of the largest Phase 3 programmes in the industry failing. Failures on this scale are the biggest single factor dragging down R&D productivity in big pharmaceutical companies.

Surely the worst aspect is that these outcomes were predictable. What was missing was a proper understanding of biomarkers and what they tell us (or, perhaps in this case, what they CANNOT tell us). Biomarkers are incredibly powerful, and their use is proliferating across the whole drug development pathway from the bench to the marketplace. But like any powerful tool, they can be dangerous if they are misused, as Roche (and their investors) have found to their substantial cost. Total Scientific exist to provide expert biomarker services to the pharmaceutical industry – let’s hope that not bringing in the experts to run your biomarker programme doesn’t cost you as much as it did Roche.

Dr. David Grainger
CBO, Total Scientific

Combinatorial animal study designs

It is sometimes assumed that government regulations governing the use of animal models in drug development hamper good science, either by accident or design. But the reality is rather different: a focus on the 3Rs of replacement, reduction and refinement can deliver more reliable results, faster and at lower cost, with improved animal welfare and reduced animal use as well.

There are a number of strategies that can reduce the number of animals used during the development of a new drug. The most obvious is to combine several types of study, investigating efficacy, safety and drug disposition simultaneously. As well as reducing the number of animals required, it has scientific benefits too: instead of relying on measuring drug levels to assess exposure, you can observe the safety of the drug in exactly the same animals where efficacy is investigated. For drugs with simple distribution characteristics, measuring exposure in the blood is useful for comparing different studies, but as soon as the distribution becomes complex (for example, with drugs that accumulate in some tissues, or are excluded from others) comparing different end-points in different studies becomes challenging and fraught with risk of misinterpretation.

Quite simply, then, it is better to look at safety and efficacy in the same animals in the same study. The results are easier to interpret, particularly early in drug development when knowledge of distribution characteristics may be imperfect. Not only is it scientifically better, but it reduces the use of animals, and it reduces the overall cost of obtaining the data. A combination study may be as much as 30% cheaper than running two separate studies.

For these reasons, Total Scientific plan to launch in 2012 a comprehensive range of combination study packages, combining our industry-standard models of chronic inflammatory diseases with conventional assessment of toxicity, including clinical chemistry, haematology, urinalysis, organ weights and histopathology. For anyone involved in early stage drug development in immunology and inflammation, these study designs will offer more reliable de-risking of an early stage programme at a lower cost than conventional development routes.

If the data is better and the costs are lower, why haven’t such combination designs become the norm before now? Perhaps it’s because of a misunderstanding of what kind of safety information is needed during the early stages of developing a first-in-class compound. Conventional toxicology (such as that required for regulatory filings) requires driving dosing levels very high to ensure that adverse effects are identified. Clearly, for a drug to be successful, the adverse events must be occurring at much higher doses than the beneficial effects – which is at odds with a combination study design.

That’s fine once you have selected your clinical candidate (and conventional toxicology studies of this kind will still be needed prior to regulatory submission even if you ran a combination study). But for earlier stage development, the combination design makes perfect sense: before you ask how big the therapeutic index might be, first you simply want to know whether it is safe at the doses required for efficacy.

A previous blog by DrugBaron has already commented on the over-focus on efficacy in early drug development as a contributor to costly attrition later in the pipeline. Why would you be interested in a compound that offered benefit but only at doses that cause unacceptable side-effects (whether mechanism-related or molecule-specific it matters not)? Continuing to invest either time or money in such a compound ignorant of the safety issues until later down the path is a recipe for failure.

Looking at early stage opportunities being touted for venture capital investment paints a similar picture: almost all have, as their centerpiece, a compelling package of efficacy data in one (or often several) animal models. Far fewer have any assessment of safety beyond the obvious (that the animals in the efficacy studies survived the treatment period). Since almost any first-in-class compound, by definition hitting a target unvalidated in the clinic, is associated with “expected” side-effects, this lack of any information to mitigate that risk is the most common reason for failing to attract commercial backing for those early stage projects. Total Scientific’s combination study designs rectify these defects, reducing risk earlier, and at lower cost.

Why stop there? Relatively simple changes to the study design also allow investigation of pharmacokinetics, metabolism and distribution – all in the same animals where efficacy and safety are already being investigated. Such “super-studies” that try and address simultaneously many different aspects of the drug development cascade may be unusual, and may not provide definitive (that is “regulator-friendly”) results for any of the individual study objectives. However, in early stage preclinical development they will provide an extremely cost-effective method of identifying potential problems early, while reducing use of animals still further.

Combining different objectives into one study is only one way Total Scientific refines animal model designs in order to reduce animal requirements. Being biomarker specialists, we can improve the phenotyping of our animal models in several different ways. Firstly, by using multiple end-points (and an appropriate multi-objective statistical framework) we can detect efficacy with fewer animals per group than when relying on a single primary end-point. There can be no doubt that a single primary end-point design, used for regulatory clinical studies for example, is the gold standard – and is entirely appropriate for deciding whether to approve a drug. But once again it’s not the most appropriate design for early preclinical investigations. It’s much better to trade a degree of certainty for the extra information that comes from multiple end-points. In any case, the consistency of the whole dataset provides that certainty in a different way.
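
One possible multi-end-point analysis, offered purely as an illustration rather than as the specific framework we use, is a two-sample Hotelling T² test, which asks whether treated and control groups differ across several end-points jointly:

```python
import numpy as np
from scipy import stats

def hotelling_t2(group_a, group_b):
    """Two-sample Hotelling T-squared test; each group is an (animals x end-points) array."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n1, n2, p = len(a), len(b), a.shape[1]
    diff = a.mean(axis=0) - b.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(a, rowvar=False) + (n2 - 1) * np.cov(b, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    return t2, stats.f.sf(f_stat, p, n1 + n2 - p - 1)

# Toy data: 8 animals per group, 3 end-points, a modest treatment shift in each.
rng = np.random.default_rng(0)
control = rng.normal([10.0, 5.0, 2.0], [2.0, 1.0, 0.5], size=(8, 3))
treated = rng.normal([8.0, 4.0, 1.6], [2.0, 1.0, 0.5], size=(8, 3))
print("joint p-value:", round(hotelling_t2(treated, control)[1], 4))
```

The attraction of a joint test of this kind is that modest but consistent shifts across several end-points can reach significance together even when no single end-point would do so on its own.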

Learning how a new compound affects multiple pathways that compose the disease phenotype provides a lot of additional value. In respiratory disease, for example, understanding whether the effect is similar on neutrophils and eosinophils, or heavily biased towards one or the other provides an early indication as to whether the compound may be more effective in allergic asthma or in severe steroid-resistant asthma. Compounds that hit multiple end-points in an animal model are much more likely to translate to efficacy in the clinic.

Equally importantly, we focus on end-points that have lower inter-animal variability – and hence greater statistical power. There is a tendency for end-points to become established in the literature simply on the basis of being used in the first studies to be published. Through an understandable desire to compare new studies with those that have been published, those initial choices of end-points tend to become locked in and used almost without thinking. But often there are better choices, with related measures providing similar information but with markedly better statistical power. This is particularly true of semi-quantitative scoring systems that have evolved to combine several measures into one number. Frequently, most of the relevant information is in one component of the composite variable, while the others contribute most of the noise – destroying statistical power and requiring larger studies.
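
The standard two-group sample-size formula shows why this matters; with the illustrative numbers below, halving the end-point standard deviation cuts the required group size roughly fourfold:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, sd, alpha=0.05, power=0.80):
    """n per group = 2 * (sd/effect)^2 * (z_{1-alpha/2} + z_{power})^2, rounded up."""
    z = NormalDist()
    return ceil(2 * (sd / effect_size) ** 2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2)

# Hypothetical numbers: the same treatment effect measured two different ways.
print("noisy composite score (SD 2.0):", n_per_group(effect_size=1.5, sd=2.0), "animals per group")
print("informative component (SD 1.0):", n_per_group(effect_size=1.5, sd=1.0), "animals per group")
```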

What all these refinements have in common is that they improve the quality of the data (driving better decisions) while reducing the number of animals required (with ethical and cost benefits). It’s not often you get a win:win situation like this – better decisions typically cost more rather than less. But the forthcoming introduction of Total Scientific’s new range of preclinical model study designs promises benefits all round.

Dr. David Grainger
CBO, Total Scientific

The interleukin lottery: playing the odds on numbers 9 and 16

The interleukins are an odd family.  One name encompasses dozens of secreted proteins that are linked by function rather than by structure.  And even that common function is very broadly defined: cytokines that communicate between cells of the immune system.

Defined in such a way, it’s perhaps not surprising that the interleukins have yielded some of the best biomarkers of inflammatory disease conditions, and even more importantly are the target for a growing range of antibody therapeutics.  Interfering with interleukins is to biologicals what GPCRs are to small molecule drugs.

As with GPCRs, though, despite the success of interleukins as biomarkers and drug targets, some members of the superfamily are extensively studied and well understood, while others lie on the periphery largely ignored.  Type interleukin-1 into PubMed and it returns a staggering 54,690 papers.  Repeat the exercise for the rest of the interleukins and you make an interesting discovery: although there is a slight downward trend across the family (probably reflecting the decreasing time since each was first described), there are a couple of striking outliers (Figure 1) – family members that are much less well studied than the rest.  IL-9 has only 451 citations, IL-16 has 414 and IL-20 just 98.

Figure 1 : PubMed Citations for the Interleukin Family in December 2011. Note the log scale.
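
For anyone wanting to reproduce Figure 1, counts of this kind can be pulled from PubMed’s E-utilities with a few lines of code (today’s counts will, of course, differ from the December 2011 figures quoted above, and these simple search terms are deliberately unrefined):

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term):
    """Number of PubMed records matching a search term."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "rettype": "count"}
    reply = requests.get(ESEARCH, params=params, timeout=30).json()
    return int(reply["esearchresult"]["count"])

for cytokine in ("interleukin-1", "interleukin-9", "interleukin-16", "interleukin-20"):
    print(cytokine, pubmed_count(cytokine))
```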

Are they really less interesting?  Or does this just reflect the positive re-enforcement of previous publications?  Once one paper links a particular interleukin with a disease or physiological process, a crop of papers exploring that link quickly appear, casting in concrete the random process of discovery.  If that’s correct, these unloved interleukins might make excellent targets for research and drug discovery.

Take IL-9 for example: what little is known about this cytokine certainly doesn’t paint a picture of a backwater function undeserving of attention.  IL-9 is a product of CD4+ T cells (probably one of the Th2 group of cytokines that includes the much-studied IL-4 and IL-5) that promotes proliferation and survival of a range of haemopoietic cell types.  It signals through the janus kinases (jaks) to modulate the stat transcription factors (both of which are validated drug targets in inflammatory diseases).  Polymorphisms in IL-9 have been linked to asthma, and in knockout animal studies the gene has been shown to be a determining factor in the development of bronchial hyper-reactivity.

IL-16 looks no less interesting.  It is a little known ligand for the CD4 protein itself (CD4 is one of the most extensively studied proteins in all of biology, playing a key role on helper T cells, as well as acting as the primary receptor for HIV entry).  On T cells, which express the T Cell Receptor (TCR) complex, CD4 provides an important co-stimulatory pathway, recruiting the lck tyrosine kinase (a member of the src family, and itself an interesting drug target being pursued by, among others, the likes of Merck).  But CD4 is also expressed on macrophages, in the absence of the TCR, and here it is ligand-mediated signaling in response to IL-16 that is likely to be the dominant function.

Another interesting feature of IL-16 is the processing it requires for activity.  Like several other cytokines, such as TGF-beta, IL-16 needs to be cleaved to have biological activity.  For IL-16 the convertase is the protease caspase-3, which is the lynchpin of the apoptosis induction cascade, tying together cell death and cell debris clearance.

Like IL-9, polymorphisms in the human IL-16 gene have also been associated with chronic inflammatory diseases, including coronary artery disease and asthma.  But perhaps the most interesting observations relating to IL-16 come from biomarker studies.  Our own studies at Total Scientific in our extensive range of preclinical models of chronic inflammatory diseases have repeatedly found IL-16 to be the best marker of disease activity.   In human studies, too, IL-16 levels in both serum and sputum have been associated with inflammatory status, particularly in asthma and COPD but also in arthritis and IBD.

After years in the backwater, perhaps it’s time for the ‘ugly ducklings’ of the interleukin family to elbow their way into the limelight.  After all, the rationale for adopting either IL-9 or IL-16 as a diagnostic biomarker, or even as a target for therapeutic intervention, is as good as the case for the better known interleukins.  But the competition is likely to be less intense.

Many years ago, the Nobel laureate Arthur Kornberg, discoverer of DNA polymerase, said “If, one night, you lose your car keys, look under the lamppost – they may not be there, but it’s the only place you have a chance to find them”.  Sound advice – unless, of course, there are twenty others already searching in the pool of light under the lamppost.  Then the twinkle of metal in the moonlight may be your chance to steal a march on the crowd.

Dr. David Grainger
CBO, Total Scientific

Personalized Medicine Demands Investment in Innovative Diagnostics: Will the Returns be High Enough?

Several very senior pharma executives were recently overheard by a journalist discussing what each of them viewed as the most important changes in the way healthcare will be delivered over the coming decade.  Each of them listed several such factors, including increased payor pressure on prices, the mounting regulatory burden and the shift toward orphan indications, but there was unanimity on just one factor: the importance of personalized medicine.

Personalized medicine is the great white hope for the pharmaceutical industry: by only treating the fraction of the population who can benefit from a particular medicine, efficacy and value-for-money are substantially increased.  But the prices set by Pfizer and Abbott for lung cancer drug Xalkori™ (a dual c-met and ALK kinase inhibitor) and its companion diagnostic (a FISH assay for translocations affecting the ALK genes) following its US approval last week, while on the face of it being unremarkable, nevertheless raise questions about the personalized medicine business model.

Xalkori™ (crizotinib) will cost $9,600 per month, yielding $50k to $75k per patient for the full treatment regimen – expensive, but pretty much in line with other newly approved medicines for small patient groups (only about 5% of non-small cell lung carcinomas – those with translocations affecting the ALK gene cluster – are amenable to treatment with this drug).

The Vysis ALK Break Apart™ FISH probe test, from Abbott, which identifies the patient subset sensitive to treatment with Xalkori™, by contrast, will cost less than $250 per patient.  Again, this is entirely consistent with pricing structure of DNA-based diagnostics used in the clinic.

So if there is nothing surprising about these prices, what’s the problem?  The distribution of income between the drug developer and the diagnostic developer is heavily biased towards the drug.  It’s not as extreme as the unit prices for the products suggest, because the diagnostic should be applied to a wider population to identify the target population.  So with 100 non-small cell lung carcinoma patients tested with the diagnostic (raising $25,000 revenue for Abbott), 5 will be identified who are suitable for treatment with Xalkori™ (raising $375,000 revenue for Pfizer), assuming full penetration of the market in both cases.  The diagnostic product, therefore, garners about 6% of the total spend on the test and drug combined.
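
The split works out as follows, using the list prices quoted above, a one-in-twenty eligibility rate and the upper estimate for a full course of treatment:

```python
patients_tested = 100
test_price = 250            # Vysis ALK Break Apart FISH test, per patient
eligible_patients = 5       # ~5% of NSCLC patients are ALK-positive
course_price = 75_000       # upper estimate for a full course of Xalkori

diagnostic_revenue = patients_tested * test_price        # $25,000 to Abbott
drug_revenue = eligible_patients * course_price          # $375,000 to Pfizer
share = diagnostic_revenue / (diagnostic_revenue + drug_revenue)
print(f"diagnostic share of the combined spend: {share:.0%}")   # about 6%
```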

There are lots of obvious reasons why this is the case: the cost of developing the drug product was more than 10-times higher than the development costs for a typical diagnostic.  Drugs take longer to develop, and have a much higher risk of failure.  The regulatory hurdles are much higher for drugs than diagnostics.  And in any case, the need for the diagnostic only became clear because of the success of the drug.  In short, 6% of the overall returns for the diagnostic partner in such a situation sounds generous.

However, the situation in oncology, where the vast majority of companion diagnostic products currently on the market are located, hides a bigger issue: the difficulty in earning rewards for genuine innovation in the field of diagnostics.  In oncology, not a great deal of innovation is required on the companion diagnostic side, since the test is tightly tied to the mechanism of action of the associated therapeutic.  In such situations, there is virtually no technical risk associated with the development of the diagnostic product.  The only risk is regulatory risk (which is relatively easy to mitigate, at least for the big players who well understand the process) as well as risk that the associated therapeutic fails to win regulatory or market acceptance – in which case sales of the diagnostic product will also be non-existent.

But in other indications, finding companion diagnostics will require much more innovation.  For example, in chronic inflammatory diseases picking people who might show the best responses to anti-TNFs requires something more innovative than tests for genetic variation in the TNF-α gene or its receptors.  Because the biology of inflammation is complex, predicting the responses to drugs (even those with well defined molecular mechanisms) is a substantial challenge – a challenge that, for the most part, remains unmet.

Indeed, in some cases innovations in biomarker discovery might actually drive new therapeutic approaches:  the management team of Total Scientific, in collaboration with Imperial College, London, discovered that low circulating levels of the amino acid proline are a powerful new biomarker for osteoporosis, predicting fracture risk as well as low bone mineral density.  This finding not only suggests that a diagnostic assay for serum proline may be clinically useful, but that therapeutic strategies directed to modulating proline metabolism may also be effective.  Our innovation in biomarker discovery may ultimately open up a whole new field of bone biology, spawning multiple high value therapeutic products.

In these situations where innovation is required in both the diagnostic and therapeutic domains (which will probably prove to be the majority of personalized medicine product combinations), a business model that splits the revenues 94% to the drug developer and 6% to the diagnostic developer seems skewed.  If the driving innovative step came from the biomarker end (as in the example with proline), the team with the original insight may hope to reap at least half the reward.

There are two major reasons why this is unlikely to happen: firstly, there is a glass ceiling on price for a diagnostic product.  Paying more than $200 or so for a molecular diagnostic, no matter how innovative or complex, is contrary to almost every healthcare reimbursement system worldwide.  Secondly, the barriers to prevent competition against the therapeutic component of the product combination are very high indeed (both from regulatory and intellectual property perspectives).  But in marked contrast, the barriers to prevent another competing product being launched against the diagnostic assay component of the combination are very much lower.

These two factors will likely combine to restrict the return to innovators in the diagnostics space relative to those in the therapeutic space, irrespective of the apparent value of their innovation.

This state of affairs is bad for everyone.  It limits the incentive for real investment in biomarker discovery independent of therapeutic development, so the chances of finding innovative new companion diagnostics outside of oncology are materially reduced.  As a result, even though (for example) a new test to determine which RA patients might respond best to anti-TNFs would be beneficial to patients (avoiding exposing patients who will not benefit to the drug, and immediately giving them the opportunity to try something else without waiting six months to see if they responded), and also beneficial to payors (by reducing the number of patients treated with an expensive drug), there is little incentive to develop it.  Indeed, the economics of such a test might sustain a price for the product that was well above $200.

Yet the second problem would then intervene to drop the price: competition.  Since it is (usually) impossible to protect the concept of measuring a particular analyte (and only possible to protect a particular methodological approach to its measurement), others would most likely be free to develop different assays for the same analytes.  As the regulatory hurdles for developing competing tests are low – particularly once the first test has been launched, since fast-followers need only demonstrate equivalence – it would not be long before the first product to successfully predict responses to anti-TNFs among RA patients would be subjected to competition, driving prices back down again.

Subtle though they seem, the differences in the IP and regulatory landscape for diagnostic tests compared with therapeutics threaten the viability of the personalized medicine business model.  Delivering on the promise of personalized medicine for both patients and the healthcare industry requires allocation of capital to drive innovation in both biomarker discovery and identification of novel therapeutic targets.

At first sight, developing diagnostic products, as opposed to therapeutics, is relatively attractive.  The limited demand on capital, short time-line to product launch, low technical and regulatory risk and the substantial medical need all favour developing diagnostic products.  But not if the discovery component becomes lengthy and expensive.  In other words, developing “me-better” diagnostics makes a lot of commercial sense, but investing in genuine innovation in biomarkers still looks unattractive.  And it is precisely these highly innovative new diagnostic products that will underpin the delivery of personalized medicine.

What can be done?  Not a great deal in the short term, perhaps.  But in the longer term, much needed reforms of the regulation of diagnostic products might raise the barrier to competition against first-in-class assay products.  The current regulatory framework for therapeutics is draconian, demanding very high levels of safety from every aspect of the drug product, from manufacturing to long-term side-effects.  By contrast, despite some tinkering in recent years, the diagnostic regulatory framework remains relatively lax.  Home-brew tests are introduced with little regulation of manufacturing standards, and the focus of the regulators is on the accuracy of the measurement rather than on the clinical utility of the result.  This leaves open a weak-spot in the overall protection of the patient, since an inaccurate diagnosis (leading to incorrect treatment) can be as harmful for the patient as treatment with an inherently unsafe medicine.  Just because molecular diagnostics are non-invasive, it doesn’t mean their potential to harm the patient is zero.

There are moves to close this loophole, and the unintended consequence of such regulatory tightening will be an increased barrier to competition.  Perhaps a period of data-exclusivity, much as applies in the therapeutics world, could also be introduced to further protect truly innovative diagnostic products from early competition.

Such moves are essential to make innovation in biomarkers as commercially attractive as innovation in therapeutics.  It will be difficult to achieve in practice, however, as pressure on healthcare costs ratchets up still further over the coming decade.  Competition, lowering prices, is on the surface attractive to everyone.  But it is the differing protection from competition between therapeutics and diagnostics that leads to skewed incentives to invest in innovation in one area rather than the other.  Let’s hope that once combinations of therapeutics and companion diagnostics start to appear outside of oncology, the relative pricing of the associated products properly reflects the innovation in each of them.  If it doesn’t, our arrival in the world of truly personalized medicine may be delayed indefinitely.

Dr. David Grainger
CBO, Total Scientific

Biomarkers: A Band-Aid for Bioscience

In this kick-off article to the new Total Scientific biomarker blog, we discuss the potential for biomarkers to improve the R&D productivity of the pharmaceutical industry.
Please read the full article here.
