Archive for the ‘Business’ Category

Why clinical CROs hate eCRF systems – and why you should love them

Everything from banking to government services, from shopping to gambling has moved on-line in the past decade, yielding huge efficiency gains for suppliers and (for the most part) an improved experience for the customer. Suppliers that have failed to adjust their business model are being slowly (or not so slowly) ejected from the marketplace.

Against this background, then, it is surprising that such a high percentage of clinical trials are performed using simple pen and paper to record the raw data.  The classical paper Case Report Form (or CRF) has changed little in decades – and seems surprisingly entrenched against the assault of the digital age.

At first glance that seems understandable enough – after all, if you just want a flexible tool to record free-form information then pen and paper is still hard to beat. The key word, some clinical researchers argue, is flexibility. You never know what might happen, so it's hard to predict in advance the kind of information you will need to capture. Whatever the eventuality, the paper CRF can accommodate it. And anyway, it can never fail you – what happens to a digital system if the power fails or the internet connection goes down?

The flexibility is undeniable – we have all experienced on-line forms (even from large companies and government departments with huge IT budgets who should really know better) that simply will not allow you to enter the information you need to give them.  Quite simply the designer hadn’t put themselves in your particular situation when they designed the form.

As a result, digital forms work best for simple tasks (like booking a flight or buying a book) and much less well for complex tasks (such as completing your tax return).  There seems little doubt in which camp a clinical trial falls.

But managed correctly, this lack of flexibility is also the greatest strength of an electronic Case Report Form (or eCRF). Flexibility in the hands of a genius is an unmitigated good – but flexibility gives people the opportunity to make mistakes. Quite simply, the same digital system that frustrates and infuriates because it won't let you enter the right kind of information is performing a useful gatekeeper function when it prevents you entering errors. An electronic form won't allow a body mass index of 235 or an age of 216 – errors that can be quickly and easily corrected if they are spotted in real time while the patient is still present, but much harder to correct when identified later.
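
To make the point concrete, here is a minimal sketch (in Python) of the kind of range check that catches such errors at the moment of data entry. The field names and limits are invented for illustration – they are not those of any particular eCRF product.

```python
# Minimal sketch of eCRF-style range validation at the point of data entry.
# Field names and plausible ranges are illustrative only.

RANGE_CHECKS = {
    "age_years": (0, 120),
    "bmi_kg_m2": (10, 80),
    "systolic_bp_mmhg": (60, 260),
}

def validate_field(field, value):
    """Return an error message if the value is outside its plausible range, else None."""
    low, high = RANGE_CHECKS[field]
    if not (low <= value <= high):
        return f"{field}={value} is outside the plausible range {low}-{high}; please re-check"
    return None

print(validate_field("age_years", 216))   # flagged while the patient is still present
print(validate_field("bmi_kg_m2", 23.5))  # passes: prints None
```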

Smart data entry doesn’t just catch errors.  It can also improve the quality of data by forcing free-form information into categories.  Categorical data can be subjected to statistical analysis more easily than unstructured text – and the originator of the data is much better placed to choose a category from a list than a data analyst attempting to throw a quadrat over the free-form data much later on.  There is no reason not to include a free text ‘notes’ field alongside the categories so that the full richness of the data that would have been captured on a paper form is also included in the eCRF.
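
A hypothetical illustration of the same idea: a field that forces a choice from a controlled category list while keeping an optional free-text notes field alongside it. The category list here is invented for the example.

```python
# Sketch: a categorical field with a controlled vocabulary plus an optional free-text note.
# The categories are invented for illustration.

SMOKING_STATUS = ("never", "former", "current")

def record_smoking_status(status, notes=""):
    """Force a choice from the controlled list; free text goes in 'notes', not in the category."""
    if status not in SMOKING_STATUS:
        raise ValueError(f"{status!r} is not one of {SMOKING_STATUS}")
    return {"smoking_status": status, "notes": notes}

print(record_smoking_status("former", notes="Quit two years ago; occasional cigar at weddings"))
```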

Going digital can improve the quality of clinical data in other ways too. Patient-reported outcomes are important end-points in many trials, but they are notoriously unreliable – they are subject to biases depending on how the questions are administered, as well as substantial variation from one day to the next. The eCRF can help on both scores: using a computer, or even an iPad, to administer the questionnaire removes the variability in presentation that inevitably occurs with a human operator. Equally importantly, the ease and reliability with which the reporting tool can be self-administered allows data to be collected much more frequently – and time-averaged data is considerably more powerful than spot measures for highly variable end-points such as patient-reported outcome scales.
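
A toy simulation (all numbers invented) makes the statistical point: averaging several self-administered scores shrinks the measurement noise roughly in proportion to the square root of the number of measurements.

```python
# Sketch: why time-averaged patient-reported outcomes beat spot measures for noisy end-points.
# All numbers are invented for illustration.
import random, statistics

random.seed(1)
TRUE_SCORE = 50.0       # the patient's underlying "true" symptom score
DAY_TO_DAY_SD = 10.0    # large day-to-day variability, typical of self-reported scales

def estimate(n_days):
    """Estimate the true score from n self-administered daily measurements."""
    return statistics.mean(random.gauss(TRUE_SCORE, DAY_TO_DAY_SD) for _ in range(n_days))

spot_error = statistics.mean(abs(estimate(1) - TRUE_SCORE) for _ in range(2000))
weekly_error = statistics.mean(abs(estimate(7) - TRUE_SCORE) for _ in range(2000))

print(f"typical error of a single spot measure: {spot_error:.1f} points")
print(f"typical error of a 7-day average:       {weekly_error:.1f} points")
# The averaged estimate is roughly sqrt(7) (about 2.6x) closer to the true value.
```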

There is no reason in principle why the eCRF cannot be a truly interactive tool – providing information to the clinical researcher at the same time as the clinical researcher records information in the eCRF. The eCRF becomes a dynamic manifestation of the protocol itself – reminding the researcher of the sequence of tests to be administered, or of the individual steps of more complex or lengthy procedures. It can, of course, integrate information from the patient with the protocol to provide patient-specific instructions. For example, in a recent clinical trial using the cutting-edge eCRF platform from Total Scientific, one of the end-points involved processing sputum samples, and the volume of reagents added to the sputum depended on the weight of the sputum plug. Using a paper CRF would have required the clinical researcher to perform relatively complex calculations in real time while preparing the sputum sample; with the customised eCRF from Total Scientific, the weight of the sputum plug was entered and the eCRF responded with a customised protocol for processing the sample, with all the reagent volumes personalised for that particular sample.
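
The sketch below illustrates the kind of weight-dependent calculation involved. The reagent names and ratios are hypothetical placeholders, not the actual trial protocol.

```python
# Sketch: the kind of weight-dependent calculation an interactive eCRF can do for the researcher.
# Reagent names and ratios are hypothetical; the real protocol's values were trial-specific.

REAGENT_UL_PER_MG = {               # microlitres of reagent per mg of sputum plug (hypothetical)
    "dithiothreitol_0.1pct": 4.0,
    "phosphate_buffered_saline": 4.0,
}

def processing_instructions(plug_weight_mg):
    """Return personalised reagent volumes (in microlitres) for this sample."""
    if not 10 <= plug_weight_mg <= 2000:        # plausibility check on the entered weight
        raise ValueError(f"plug weight {plug_weight_mg} mg is outside the plausible range")
    return {reagent: round(plug_weight_mg * ratio)
            for reagent, ratio in REAGENT_UL_PER_MG.items()}

# The researcher enters the plug weight; the eCRF displays the volumes to add.
print(processing_instructions(350))
```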

A cleverly designed eCRF, then, is like having your own scientists permanently present at the clinical sites.  The eCRF is looking over the shoulder of every clinical research assistant and providing advice (in the form of the interactive protocol) and preventing errors.  This “real-time electronic monitoring” severely restricts the flexibility of the clinical researchers to do anything other than exactly what you intended them to do.  And this is why many clinical CROs do not like eCRFs. Loss of flexibility makes their job harder – but makes your clinical data better!

Of course, not all eCRFs are born equal. Some deliver the restriction and lack of flexibility over data entry in return for only very limited data-checking. Unless you really harness the power of using an eCRF rather than pen and paper, there is a danger it can cost more and deliver less. But a well-designed eCRF, whose functionality has been matched to the needs of your particular protocol, brings huge benefits in data quality – which translate directly into increased statistical power. Total Scientific's bespoke eCRF platform, for example, uses individually-designed layouts grafted onto a powerful relational database engine to provide features that are difficult or impossible to realize using conventional eCRF products, which are rigid and poorly optimized for each new user (being little more than digital versions of the paper CRF they replace).

As a result, we provide features such as colour-coded dashboards for each patient visit that show, at a glance, which tasks have been completed and which remain outstanding. We also provide user-defined options to display the blinded data in real time, so that outliers and trends in the data can be visualized and identified with an ease unimaginable in the days of paper-only data capture.

And the eCRF is still evolving. At Total Scientific we are working on modules that build statistical process control right into the eCRF itself. Statistical process control is a well-established framework for monitoring complex systems, such as silicon chip fabrication plants. By looking at all the data emerging from the process (whether chip manufacture or recruitment of patients) it spots when a significant deviation over time has taken place. In the manufacturing setting, that allows the operators to halt production before millions of chips are made that will fail quality control. In a clinical trial, statistical process control would identify any unexpected changes in baseline values that cannot be explained by random variation alone and flag them up – while the trial is still running. While such artefacts can be identified in a conventional locked clinical database during data analysis, it is then too late to do anything about them (other than repeat the trial), and these common artefacts then substantially lower trial power. Incorporating statistical process control into Total Scientific's eCRF platform promises, for the very first time, to take clinical data quality to a new level.
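
As an illustration of the underlying idea (not of the actual module), the sketch below applies simple Shewhart-style control limits to a rolling window of baseline values and flags a shift that appears part-way through recruitment.

```python
# Sketch of the statistical-process-control idea: flag baseline values whose rolling mean
# drifts outside control limits while the trial is still recruiting. Illustrative only.
import statistics

def control_chart_flags(baseline_values, window=20, sigma_limit=3.0):
    """Flag rolling-window means that drift outside limits set from the first window."""
    reference = baseline_values[:window]
    centre = statistics.mean(reference)
    # Limits for the mean of `window` observations: centre +/- sigma_limit * SD / sqrt(window)
    half_width = sigma_limit * statistics.stdev(reference) / window ** 0.5
    flags = []
    for start in range(window, len(baseline_values) - window + 1):
        window_mean = statistics.mean(baseline_values[start:start + window])
        if abs(window_mean - centre) > half_width:
            flags.append((start, round(window_mean, 1)))
    return flags

# A step change in baseline values part-way through recruitment is flagged as it happens.
stable  = [100 + (i % 7) - 3 for i in range(40)]
shifted = [108 + (i % 7) - 3 for i in range(40)]
flags = control_chart_flags(stable + shifted)
print(flags[0] if flags else "no shift detected")
```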

If you are planning a trial and your clinical CRO is trying to convince you that the paper CRF system they have always used is better – more flexible, and cheaper because they don't have to learn a new system – then it's time to list the benefits of a cutting-edge eCRF system. They may not like the idea of "big brother" watching their every move – but that's precisely why you should insist on it!

David Grainger
CBO, Total Scientific Ltd.

The HDL myth: how misuse of biomarker data cost Roche and its investors $5 billion

On May 7th 2012, Roche terminated the entire dal-HEART phase III programme looking at the effects of their CETP inhibitor dalcetrapib in patients with acute coronary syndrome.  The immediate cause was the report from the data management committee of the dal-OUTCOMES trial in 15,000 patients that there was now no chance of reporting a 15% benefit with the drug.

The market reacted in surprise and disappointment and immediately trimmed $5 billion off the market capitalization of Roche.  After all, here was a class of drugs that had been trumpeted by the pharma industry as the next “super-blockbusters” to follow the now-generic statins. The data from dal-OUTCOMES has dealt that dream a fatal blow.

The important lesson, however, is that such a painful and expensive failure was entirely preventable, because the dream itself was built on a fundamentally flawed understanding of biomarkers.   And that’s not speaking with the benefit of hindsight: we predicted this failure back in January 2012 in the DrugBaron blog.

CETP inhibitors boost HDL (the so-called “good cholesterol”) by inhibiting the Cholesterol Ester Transfer Protein (CETP), a key enzyme in lipoprotein metabolism. And they work! HDL cholesterol concentrations are doubled soon after beginning treatment, more than reversing the depressed HDL levels that are robustly associated with coronary heart disease (and indeed risk of death from a heart attack).

That was quite a firm enough foundation for developers to believe that CETP inhibitors had a golden future. After all, HDL is the “best” biomarker for heart disease. By that I mean that, of all the lipid measures, HDL gives the strongest association with heart disease in cross-sectional studies and is the strongest predictor of future events in prospective studies. Since we know lipids are important in heart disease (from years of clinical experience with statins), therefore elevating HDL with CETP inhibitors just HAS to work. Right?

Wrong.

Strength of an association is just one factor in the decision as to whether a biomarker and an outcome are causally linked. Unfortunately, Sir Austin Bradford Hill put it first in his seminal list of criteria, published in 1965 and still widely used today. And he didn't provide a strong enough warning, it seems, that it is only one factor out of the nine he listed. Total Scientific updated those criteria for assessing modern biomarker data in 2011, and stressed how the strength of an association could be misleading – but obviously that was too late for Roche, who were already committed to a vast Phase 3 programme.

Here’s the problem with HDL. HDL cholesterol concentrations are temporally very stable – they do not change a great deal from one day to the next, or even for that matter from one month to the next. A single (so-called ‘spot’) measure of HDL cholesterol concentration, therefore, represents an excellent estimate of the average concentration for that individual over a substantial period.

Other lipid parameters do not share this characteristic. Triglyceride concentration, for example, changes not just day by day but hour by hour. Immediately following a meal, triglyceride levels rise dramatically, with the kinetics and extent of the change dependent on the dietary composition of the food and the current physiological status of the individual.

These temporal variation patterns bias how useful a spot measure of a biomarker is for a particular application. If you want to predict hunger or mood (or anything else that varies on an hour-by-hour timescale) triglycerides will have the advantage – after all, if HDL doesn’t change for weeks it can hardly predict something like hunger. By contrast, if you want to predict something like heart disease that is a very slowly progressing phenotype, the same bias favours a spot measure of HDL over a spot measure of triglycerides.

HDL cholesterol concentration, then, has an in-built advantage as a biomarker predicting heart disease IRRESPECTIVE of how tightly associated the two really are, and most critically IRRESPECTIVE of whether there is a real causative relationship between low HDL and cardiovascular disease.

All this matters a great deal because all the lipid parameters we measure are closely inter-related: low HDL is strongly associated, on average, with elevated triglycerides and LDL. For diagnosing patients at risk of heart disease you simply pick the strongest associate (HDL), but for therapeutic strategies you need to understand which components of lipid metabolism are actually causing the heart disease (while the others are merely associated as a consequence of the internal links within the lipid metabolism network).

Picking HDL as a causative factor primarily on the basis of the strength of the association was, therefore, a dangerous bet – and, as it turns out, led to some very expensive mistakes.

Okay, so the structural bias towards HDL should have sounded the alarm bells, but surely it doesn’t mean that HDL isn’t an important causative factor in heart disease? Absolutely correct.

But this isn’t the first “death” for the CETP Inhibitor class. As DrugBaron pointed out, the class seemed moribund in 2006 when the leading development candidate, Pfizer’s torcetrapib, failed to show any signs of efficacy in Phase 3.

As so often happens, when observers attempted to rationalize what had happened, they found a ‘reason’ for the failure: they focused on the small but significant hypertensive effect of torcetrapib – a molecule-specific liability. An argument was constructed that an increase in cardiovascular events due to this small increase in blood pressure must have cancelled out the benefit due to elevated HDL.

That never seemed all that plausible – unless you were already so immersed in ‘the HDL myth’ that you simply couldn’t believe it wasn’t important. To those of us who understood the structural bias in favour of HDL as a biomarker, the torcetrapib data was a strong premonition of what was to come.

So strong was ‘the HDL myth’ that voices pointing out the issues were drowned out by the bulls who were focused on the ‘super-blockbuster’ potential of the CETP inhibitor class. Roche were not the only ones who continued to believe: Merck have a similar programme still running with their CETP Inhibitor, anacetrapib. Even the early data from that programme isn’t encouraging – there is still no hint of efficacy, although they rightly point out that there have not yet been enough events analysed to have a definitive answer.

But the signs are not at all hopeful. More than likely in 2012 we will have the painful spectacle of two of the largest Phase 3 programmes in the industry failing. Failures on this scale are the biggest single factor dragging down R&D productivity in big pharmaceutical companies.

Surely the worst aspect is that these outcomes were predictable. What was missing was a proper understanding of biomarkers and what they tell us (or, perhaps in this case, what they CANNOT tell us). Biomarkers are incredibly powerful, and their use is proliferating across the whole drug development pathway from the bench to the marketplace. But like any powerful tool, they can be dangerous if they are misused, as Roche (and their investors) have found to their substantial cost. Total Scientific exist to provide expert biomarker services to the pharmaceutical industry – let’s hope that not bringing in the experts to run your biomarker programme doesn’t cost you as much as it did Roche.

Dr. David Grainger
CBO, Total Scientific

Personalized Medicine Demands Investment in Innovative Diagnostics: Will the Returns be High Enough?

Several very senior pharma executives were recently overheard by a journalist discussing what each of them viewed as the most important changes in the way healthcare will be delivered over the coming decade. Each of them listed several such factors, including increased payor pressure on prices, the mounting regulatory burden and the shift toward orphan indications, but there was unanimity on just one factor: the importance of personalized medicine.

Personalized medicine is the great white hope for the pharmaceutical industry: by treating only the fraction of the population who can benefit from a particular medicine, efficacy and value-for-money are substantially increased. But the prices set by Pfizer and Abbott for lung cancer drug Xalkori™ (a dual c-Met and ALK kinase inhibitor) and its companion diagnostic (a FISH assay for translocations affecting the ALK gene) following its US approval last week, while unremarkable on the face of it, nevertheless raise questions about the personalized medicine business model.

Xalkori™ (crizotinib) will cost $9,600 per month, yielding $50k to $75k per patient for the full treatment regimen – expensive, but pretty much in line with other newly approved medicines for small patient groups (only about 5% of non-small cell lung carcinomas – those with translocations affecting the ALK gene cluster – are amenable to treatment with this drug).

The Vysis ALK Break Apart™ FISH probe test from Abbott, which identifies the patient subset sensitive to treatment with Xalkori™, will by contrast cost less than $250 per patient. Again, this is entirely consistent with the pricing structure of DNA-based diagnostics used in the clinic.

So if there is nothing surprising about these prices, what's the problem? The distribution of income between the drug developer and the diagnostic developer is heavily biased towards the drug. It's not as extreme as the unit prices for the products suggest, because the diagnostic should be applied to a wider population to identify the target population. So with 100 non-small cell lung carcinoma patients tested with the diagnostic (raising $25,000 revenue for Abbott), 5 will be identified who are suitable for treatment with Xalkori™ (raising $375,000 revenue for Pfizer), assuming full penetration of the market in both cases. The diagnostic product, therefore, garners about 6% of the total spend on the test and drug combined.
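
For completeness, the split can be checked directly from the figures quoted above.

```python
# Quick check of the revenue split quoted above: 100 patients tested, 5 treated.
patients_tested, test_price = 100, 250                 # diagnostic priced at (up to) $250 per patient
patients_treated, revenue_per_treated = 5, 75_000      # upper end of the $50k-$75k drug revenue range

diagnostic_revenue = patients_tested * test_price      # $25,000
drug_revenue = patients_treated * revenue_per_treated  # $375,000
share = diagnostic_revenue / (diagnostic_revenue + drug_revenue)
print(f"diagnostic share of combined spend: {share:.1%}")  # about 6%
```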

There are lots of obvious reasons why this is the case: the cost of developing the drug product was more than 10-times higher than the development costs for a typical diagnostic.  Drugs take longer to develop, and have a much higher risk of failure.  The regulatory hurdles are much higher for drugs than diagnostics.  And in any case, the need for the diagnostic only became clear because of the success of the drug.  In short, 6% of the overall returns for the diagnostic partner in such a situation sounds generous.

However, the situation in oncology, where the vast majority of companion diagnostic products currently on the market are located, hides a bigger issue: the difficulty of earning rewards for genuine innovation in the field of diagnostics. In oncology, not a great deal of innovation is required on the companion diagnostic side, since the test is tightly tied to the mechanism of action of the associated therapeutic. In such situations, there is virtually no technical risk associated with the development of the diagnostic product. The only risks are regulatory risk (which is relatively easy to mitigate, at least for the big players who well understand the process) and the risk that the associated therapeutic fails to win regulatory or market acceptance – in which case sales of the diagnostic product will also be non-existent.

But in other indications, finding companion diagnostics will require much more innovation. For example, in chronic inflammatory diseases, picking the people who might show the best responses to anti-TNFs requires something more innovative than tests for genetic variation in the TNF-α gene or its receptors. Because the biology of inflammation is complex, predicting the responses to drugs (even those with well-defined molecular mechanisms) is a substantial challenge – a challenge that, for the most part, remains unmet.

Indeed, in some cases innovations in biomarker discovery might actually drive new therapeutic approaches: the management team of Total Scientific, in collaboration with Imperial College, London, discovered that low circulating levels of the amino acid proline are a powerful new biomarker for osteoporosis, predicting fracture risk as well as low bone mineral density. This finding not only suggests that a diagnostic assay for serum proline may be clinically useful, but also that therapeutic strategies directed to modulating proline metabolism may be effective. Our innovation in biomarker discovery may ultimately open up a whole new field of bone biology, spawning multiple high value therapeutic products.

In these situations where innovation is required in both the diagnostic and therapeutic domains (which will probably prove to be the majority of personalized medicine product combinations), a business model that splits the revenues 94% to the drug developer and 6% to the diagnostic developer seems skewed.  If the driving innovative step came from the biomarker end (as in the example with proline), the team with the original insight may hope to reap at least half the reward.

There are two major reasons why this is unlikely to happen: firstly, there is a glass ceiling on price for a diagnostic product.  Paying more than $200 or so for a molecular diagnostic, no matter how innovative or complex, is contrary to almost every healthcare reimbursement system worldwide.  Secondly, the barriers to prevent competition against the therapeutic component of the product combination are very high indeed (both from regulatory and intellectual property perspectives).  But in marked contrast, the barriers to prevent another competing product being launched against the diagnostic assay component of the combination are very much lower.

These two factors will likely combine to restrict the return to innovators in the diagnostics space relative to those in the therapeutic space, irrespective of the apparent value of their innovation.

This state of affairs is bad for everyone. It limits the incentive for real investment in biomarker discovery independent of therapeutic development, so the chances of finding innovative new companion diagnostics outside of oncology are materially reduced. As a result, a new test to determine which RA patients might respond best to anti-TNFs, for example, is unlikely to be developed – even though it would be beneficial to patients (avoiding exposure to a drug from which they will not benefit, and giving them the opportunity to try something else immediately rather than waiting 6 months to see if they responded) and beneficial to payors (by reducing the number of patients treated with an expensive drug). Indeed, the economics of such a test might sustain a price for the product that was well above $200.

Yet the second problem would then intervene to drop the price: competition. Since it is (usually) impossible to protect the concept of measuring a particular analyte (it is only possible to protect a particular methodological approach to its measurement), others would most likely be free to develop different assays for the same analytes. As the regulatory hurdles for developing competing tests are low – particularly once the first test has been launched, since fast-followers need only demonstrate equivalence – it would not be long before the first product to successfully predict responses to anti-TNFs among RA patients was subjected to competition, driving prices back down again.

Subtle though they seem, the differences in the IP and regulatory landscape for diagnostic tests compared with therapeutics threaten the viability of the personalized medicine business model. Delivering on the promise of personalized medicine for both patients and the healthcare industry requires allocation of capital to drive innovation in both biomarker discovery and identification of novel therapeutic targets.

At first sight, developing diagnostic products, as opposed to therapeutics, is relatively attractive. The limited demand on capital, short time-line to product launch, low technical and regulatory risk and the substantial medical need all favour developing diagnostic products. But not if the discovery component becomes lengthy and expensive. In other words, developing “me-better” diagnostics makes a lot of commercial sense, but investing in genuine innovation in biomarkers still looks unattractive. And it is precisely these highly innovative new diagnostic products that will underpin the delivery of personalized medicine.

What can be done?  Not a great deal in the short term, perhaps.  But in the longer term, much needed reforms of the regulation of diagnostic products might raise the barrier to competition against first-in-class assay products.  The current regulatory framework for therapeutics is draconian, demanding very high levels of safety from every aspect of the drug product, from manufacturing to long-term side-effects.  By contrast, despite some tinkering in recent years, the diagnostic regulatory framework remains relatively lax.  Home-brew tests are introduced with little regulation of manufacturing standards, and the focus of the regulators is on the accuracy of the measurement rather than on the clinical utility of the result.  This leaves open a weak-spot in the overall protection of the patient, since an inaccurate diagnosis (leading to incorrect treatment) can be as harmful for the patient as treatment with an inherently unsafe medicine.  Just because molecular diagnostics are non-invasive, it doesn’t mean their potential to harm the patient is zero.

There are moves to close this loophole, and the unintended consequence of such regulatory tightening will be an increased barrier to competition. Perhaps a period of data exclusivity, much as applies in the therapeutics world, could also be introduced to further protect truly innovative diagnostic products from early competition.

Such moves are essential to make innovation in biomarkers as commercially attractive as innovation in therapeutics. It will be difficult to achieve in practice, however, as pressure on healthcare costs ratchets up still further over the coming decade. Competition, lowering prices, is on the surface attractive to everyone. But it is the differing protection from competition between therapeutics and diagnostics that leads to skewed incentives to invest in innovation in one area rather than the other. Let's hope that once combinations of therapeutics and companion diagnostics start to appear outside of oncology, the relative pricing of the associated products properly reflects the innovation in each of them. If it doesn't, our arrival in the world of truly personalized medicine may be delayed indefinitely.

Dr. David Grainger
CBO, Total Scientific

Ultra-sensitive NMR-based diagnosis for infectious diseases: the tortoise races the hare again

Obtaining rapid and reliable diagnosis of infectious diseases is usually limited by the sensitivity of the detection technology. Even in severe sepsis, accompanied by organ failure and admission to an intensive care unit, the causative organism is often present at a level of less than one bacterium per milliliter of blood. Similarly, in candidiasis the yeast cells are present at vanishingly low levels in body fluids, while in chlamydia infections the pathogen is located intracellularly and is entirely absent from the blood.

All these (and many other) pathogens have evolved to escape detection by the immune system, and its antibody sensors.  This, coupled with the low levels of organisms in samples from infected individuals, means that antibody-based diagnostic tests rarely have enough sensitivity to be useful.

Then came PCR.  The big selling point of the polymerase chain reaction is its exquisite sensitivity, while retaining useful specificity.  Under optimal conditions you can detect a single DNA molecule with this technique.   Surely PCR was going to revolutionize infectious disease diagnosis?

Not really. There are several problems. Firstly, the very low levels of infectious organisms in the samples mean that there is a very large amount of other DNA (from the host cells) in the sample; unless some kind of enrichment is performed, the PCR reaction cannot achieve the necessary sensitivity in the presence of so much competing DNA template. Secondly, DNA from dead organisms is detected just as efficiently as DNA from live ones, and worse still, DNA released from dead organisms can persist in the blood for weeks and months. Together, these issues lead to high rates of both false positive and false negative findings, and for many infectious diseases such simple PCR tests perform too poorly in the clinic to be of value.

A common solution that deals with both these problems is to culture the sample prior to running the test.  The rapid growth of the infectious organism enriches the sample with the target DNA template, and at the same time differentiates viable organisms from dead ones.  PCR on cultured samples usually achieves the necessary sensitivity and specificity to be clinically useful – but for severe disease, such as sepsis, the time taken to culture the sample (which may be several days) is critical when the correct treatment needs to be started immediately.

As a result, there is still a massive product opportunity for new infectious disease diagnostics.

One approach is to try to confer on PCR tests specificity for live organisms, and at the same time improve the ability to distinguish the organism's template from the high levels of host DNA. A particularly promising solution from Momentum Biosciences is to employ the DNA ligase enzyme from live bacteria to ligate added DNA template, creating an artificial gene that is then amplified by conventional PCR. The product is still in development, but it offers real hope of a sepsis test that can identify live organisms in less than 2 hours.

But another potential solution comes from a much more surprising approach: using nuclear magnetic resonance (NMR) spectroscopy. NMR offers exquisite specificity to distinguish molecules in a sample based on their chemical structure, a property that underpins the use of the technique in metabolic profiling. However, as anyone who has ever tried to exploit this elegant specificity will tell you, the problem with NMR is its lack of sensitivity. Even with cutting-edge equipment costing millions, the sensitivity limit is usually above 10µM (which equates to several thousand million million molecules per milliliter of sample). Not much use, one might think, for detecting a single cell in a milliliter of blood.
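
The arithmetic behind that comparison is straightforward.

```python
# Molecules per millilitre at a 10 micromolar NMR detection limit, versus
# roughly one bacterium per millilitre of blood in severe sepsis.
AVOGADRO = 6.022e23                       # molecules per mole

def molecules_per_ml(concentration_mol_per_litre):
    return concentration_mol_per_litre * AVOGADRO / 1000.0   # per litre -> per millilitre

print(f"{molecules_per_ml(10e-6):.1e} molecules/mL at a 10 uM detection limit")  # ~6e15
```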

But T2 Biosystems, based in Lexington, MA, have found a neat solution to the sensitivity problem of both antibodies and NMR.  By coating highly paramagnetic beads with antibodies specific for the infectious organism, they can readily detect the clumping of these beads in the presence of very low levels of antigen.  Again, the test is in development, but the company announced last week the closing of a $23M series D investment to bring the system to market.

There is an attractive irony in using a technique famed for its ultra-low sensitivity to solve a problem where sensitivity of detection was the limiting factor. In the race to find clinically useful diagnostic tests for many infectious diseases, just as in the fabled race between the hare and the tortoise, the super-sensitive PCR took a massive early lead and for a long time looked like the only winner in an arena where the major barrier to success was sensitivity of detection. But the wily old tortoise is not out of it yet: an ingenious twist added to low-sensitivity NMR might still win the race to clinical and commercial success in the infectious disease diagnostic arena.

Dr. David Grainger
CBO, Total Scientific Ltd.

FDA guidance on the use of biomarkers as drug development tools

Back in September the US Food and Drug Administration announced that it was going to delay its publication of draft guidance on the qualification of drug development tools, originally promised for the summer.  However, this draft guidance was finally published at the end of October.  While still in draft form, the Guidance substantially expands on the outline of the pilot qualification process given in an article written by two members of the Center for Drug Evaluation and Research published in 2007.

The new guidance principally provides information on the proposed administrative process that will be followed by the FDA in order to qualify new drug development tools (DDTs). Qualification is defined as “a conclusion that within the stated context of use, the results of assessment with a DDT can be relied upon to have a specific interpretation and application in drug development and regulatory review.” The document discusses two forms of DDT – biomarkers and patient-reported outcome scales. There are a couple of points that bear discussion in relation to biomarkers.

Firstly, the new qualification procedure is aimed at enhancing the utility of a qualified biomarker across the industry.  Hence, while previously use of a biomarker may have been part of an NDA, IND or BLA, this new programme is designed to make public those biomarkers that satisfy the qualification process, so that future drug development programmes can take advantage of already knowing that the biomarker has been qualified for a particular purpose.  Of course, wherever a new biomarker is proprietary, it can be retained as such by not using the new qualification process, but by remaining part of the NDA, IND or BLA.  This new programme, it seems therefore, is not particularly aimed at individual companies, but more towards collaborative groups that together can share the burden of the development of the drug development tools and submission to the FDA.  Indeed, less than a month after the draft guidance was published, several major pharmaceutical companies and leading academic institutions announced such a collaborative biomarker consortium for COPD.

Secondly, while there is detailed information on the administrative process, there is no information on the level of evidence required by the FDA to take a biomarker through from submission to qualification. There are a number of discrete stages that have to be undertaken, but nowhere are the criteria on which a new biomarker will be assessed described. The means by which such an assessment may be made are described: the first stage includes a consultation process between the submitter and the FDA, and formal assessment of the biomarker follows, including discussion at internal FDA meetings, discipline reviews and potentially even public discussions. However, the level of evidence required for success at each stage is not discussed.

It is tempting to suggest that this gap in the document is due primarily to the difficulty of formalising the criteria required for qualification of a biomarker. The wide range of uses to which biomarkers may be put – whether to preselect individuals for study or treatment, inform about disease progression, predict drug efficacy or toxicity, or dynamically follow treatment in an individual – makes it difficult to put together a priori criteria that will apply in all cases. If this supposition is true, and each biomarker will be assessed on its own merits with no reference to pre-determined criteria, the new qualification procedures do at least give the scientific community the ability to comment and provide feedback on decisions made, since all new qualifications will be published. Indeed the guidance states: “Once a DDT is qualified for specific use, the context of use may become modified or expanded over time as additional data are collected, submitted, and analyzed. Alternatively, if the growing body of scientific evidence no longer supports the context of use, the DDT qualification may be withdrawn.” Of concern, such scrutiny will not be applied to proprietary biomarkers submitted as part of INDs, NDAs or BLAs, but some scrutiny and sharing of validation study data is at least a move in the right direction. The FDA's qualification process seems likely to stimulate a further increase in the utility of biomarkers as drug development tools.

It should be noted that the Guidance Document is still in draft form. It was published on October 25, and the FDA is asking for comments and suggestions to be submitted by January 24, 2011, for consideration when preparing the final document.