
The final frontier – post-genomic biomarkers

Some biomarkers are easier to find than others.  Once a class of molecules has been noticed, and the assay methodology to measure their levels has been optimized, data rapidly accumulates.  Related molecules frequently pop up (often as a result of artifacts appearing in the assays under certain conditions, or when particular samples are analysed).  It’s rather like unearthing an ancient pyramid – if the first dig identifies the tip of the pyramid, the rest follows quite quickly.

But imagine what it would be like trying to rebuild the pyramid if the blocks had been scattered over a wide area.  Finding one block wouldn’t necessarily help you find the next one.  That seems to be the case with the ever-growing superfamily of peptide modifications.  A trickle of discoveries of naturally occurring modifications of peptides is turning into a flood.  And the molecules that are being discovered seem to be associated with fascinating biology, and offer great promise as biomarkers now and in the future.

Modifications such as phosphorylation, sulphation, glycosylation and more recently glycation have been so extensively studied that they are taken for granted as part of the molecular landscape.  But the molecular diversity they generate is still under-appreciated.  Total Scientific have comprehensively analysed the unexpected array of natural antibodies against the oligosaccharides that decorate many extracellular proteins and peptides – and extended initial observations by others that changes in these anti-carbohydrate antibodies are useful biomarkers for the early stages of cancer development in man.  But even these studies, using multiplexed assays to profile the portfolio of anti-carbohydrate antibodies, hardly scratch the surface of the molecular diversity that exists in this domain.

Over the last decade the range of covalent tags on peptides and proteins has expanded much further.  The ubiquitin family of small peptide tags now numbers at least 46, and these can be added to proteins in a staggering variety of chains, ranging from a single ubiquitin tag to branched chains of different ubiquitin family members.  These modifications play central roles in diverse biological pathways, from cell division and organelle biogenesis to protein turnover and antigen presentation.  Our understanding of the importance of ubiquitination is progressing rapidly, but in the absence of good methodology to differentiate the vast diversity of tag structures, the possibility that proteins and peptides modified in this way may be valuable biomarkers remains all but unexplored.

Covalent tags, such as phosphorylation, ubiquitination or nitrosylation, are not the only natural modifications of peptides now known.  More surprisingly, mechanisms exist to modify the amino acids composing the peptide chain itself.  Some seem highly specific for a single metabolic pathway (such as the formation of S-adenosylmethionine in the folate cycle controlling methyl group transfer); others at least seem limited to a single class of protein targets (such as lysine acetylation in histones to regulate the strength of DNA binding); but more recently it has become clear that enzymes exist to modify peptidyl amino acid side chains in a wide range of different substrates.  The best-studied example is the enzyme peptidyl arginine deiminase (PAD), which converts arginine in peptides and proteins into citrulline.  This unusual reaction only came to light because of the dysregulation of PAD that occurs in almost all cases of rheumatoid arthritis (RA).  Dysregulated PAD activity in the extracellular space results in the generation of hundreds of different citrulline-containing proteins and peptides, many of which are immunogenic.  This, in turn, results in the formation of antibodies against citrulline-containing protein antigens (known as ACPAs or anti-CCPs).  Diagnostic kits measuring anti-CCP levels have revolutionized the clinical diagnosis of RA, almost completely supplanting the use of rheumatoid factor, which has poorer sensitivity and specificity.  Today, the presence of anti-CCP antibodies is almost pathognomonic for classical RA, and sales of the proprietary kits for measuring this biomarker are generating millions annually for their discoverers.

Conversion to citrulline is not the only fate for arginine residues in peptides and proteins.  In bacteria, conversion of arginine to ornithine is a key step in the generation of self-cleaving peptides called inteins.  Intriguingly, one of Total Scientific’s clients has recently discovered an analogous pathway in eukaryotes (including humans) that generates naturally occurring lactam-containing peptides, and we are helping them develop new assay methodology for this exciting new class of potential biomarkers.

Even simpler than covalent tagging and metabolic transformation of the amino acid side chains is straightforward cleavage of the peptide or protein.  Removal of a handful of amino acids from the N-terminus (by dipeptidyl peptidases) or the C-terminus (by carboxypeptidases) of peptides can already generate hundreds of different sequences from a single substrate peptide.  Endoproteolytic cleavage at specific internal sites generates further diversity.  The problem here is that both the product and the substrate contain the same sequence, making antibodies specific for a particular cleavage product very difficult to generate.  Total Scientific are developing generally-applicable proprietary methods for successfully raising antibodies specific for particular cleavage products, and these tools should greatly accelerate the growing field of biomarkers that are specific cleavage products (such as the use of N-terminal pro-B-type natriuretic peptide, or NT-proBNP, for the diagnosis of heart failure).

If the detection of different, closely related, cleavage products from a single substrate is a challenging analytical conundrum, then the specific detection of particular non-covalent aggregates of a single peptide or protein is surely the ultimate badge of honour for any assay developer.  Recent data suggests that some peptide hormones, such as adiponectin, may signal differently when aggregated in higher molecular weight complexes compared to when present in lower molecular weight forms.

Frustratingly, none of this wealth of diversity in the potential biomarker landscape is captured in the genome.  The glittering insights into the vast space beyond this post-genomic biomarker frontier have mostly come from fortuitously stumbling across particular examples.  But the sheer frequency with which such discoveries are now being made suggests there is a substantial hoard of buried treasure out there, waiting for us to develop the appropriate analytical tools to find it.  Total Scientific have built up an impressive toolkit, capable of shining a flashlight into the darkest corners of the post-genomic biomarker space, and we relish any opportunity to turn this expertise into exciting new biomarker discoveries for our clients.

Dr. David Grainger
CBO, Total Scientific Ltd.

Finding exogenous biomarkers of heart disease: humans are ecosystems too!

It is ten years this week since the Total Scientific team, together with our collaborators at Imperial College London, submitted the first large-scale clinical metabolomics study for publication in Nature Medicine.  We applied proton NMR spectroscopy to serum samples collected from patients with coronary heart disease (defined by angiography), as well as control subjects with normal coronary arteries.  The results were dramatic: we could completely separate the two groups of subjects based on their coronary artery status using a non-invasive blood test.

Despite such encouraging findings, the implications of that ground-breaking study have yet to impact clinical medicine.  There are a number of reasons for that: in 2006, a replication study was published, again in Nature Medicine, with some misleading conclusions.  Although the authors saw broadly the same patterns that we had observed five years previously, they interpreted their reduced diagnostic power as a negative outcome – though in reality its source was most likely the inappropriate concatenation of samples from different studies, collected with different protocols.

But another limitation of our study has its origin in the techniques we applied.  NMR spectroscopy is an amazingly reproducible analytical technique, but it has poor sensitivity (so misses many low abundance biomarkers) and, perhaps more crucially, it can be difficult to determine the exact molecular species responsible for the differences between groups of subjects.

In our study, the majority of the diagnostic power arose from a peak with a chemical shift around 3.22 ppm, which we attributed to the trimethylamine group in choline.  Individuals with angiographically-defined heart disease had much lower levels of this signal than healthy subjects.  Although we speculated that the signal might arise from phosphatidylcholine residues in HDL, the lack of certainty about the molecular identity of this powerful diagnostic marker (which was clearly replicated in the 2006 study) hampered further investigation.

Then, last month, Wang and colleagues published a fascinating follow-up study in Nature.  Using LC-MS-based metabolomics, they identified three metabolites of phosphatidylcholine as predictors of heart disease (choline, TMAO and betaine).  At a stroke, they replicated our earlier findings and provided additional clarity as to the molecular nature of the biomarkers.  It has taken a decade to move from the realisation that there was a powerful metabolic signature associated with heart disease to an unambiguous identification of the molecules that are responsible.

Are measurements of these metabolites useful in the clinical management of heart disease?  That remains an open question, but with the molecular identity of the biomarkers in hand it is a question that can be readily investigated without the need for complex and expensive analytical techniques such as NMR and LC-MS.

But Wang and his colleagues went one step further: they showed that these biomarkers were generated by the gut flora metabolizing dietary phosphatidylcholine.  So the signature we originally published in 2002 may not represent differences in host metabolism at all, but may instead reflect key differences in the intestinal flora of subjects with heart disease.  All of which serves as a useful reminder that we humans are complex ecosystems, and our biochemistry reflects much more than just our own endogenous metabolic pathways.

Metabolomics is an incredibly powerful platform for the discovery of new biomarkers, as this decade-long quest has demonstrated.  And the pathways it reveals can lead in the most surprising of directions.

Dr. David Grainger
CBO, Total Scientific Ltd.

Biomarkers: lessons from history

The widespread use of the term “biomarker” is a recent development.  Looking back at the literature over the last fifty years, there was an explosive increase in its use in the 1980s and 1990s, and that growth continues today.  However, biomarker research as we now know it has a much deeper history.

Here we are going to focus on just one paper, published in 1965, twelve years before the term “biomarker” appeared in either the title or abstract of any paper in the PubMed database[i].  This is a paper by Sir Austin Bradford Hill, entitled “The Environment and Disease:  Association or Causation?”, which appeared in the Proceedings of the Royal Society of Medicine.

Sir Austin neatly and eloquently describes nine factors that he feels should be taken into account when assessing the relationship between an environmental factor and disease.  These are:

  1. Strength
  2. Consistency
  3. Specificity
  4. Temporality
  5. Biological gradient
  6. Plausibility
  7. Coherence
  8. Experiment
  9. Analogy

In this blog we discuss the applicability of each of these factors to biomarker research today.  Before we do, however, it is important to note that the aims of biomarker research today are much broader than the primary aim of Sir Austin’s paper – which was to discuss the ways in which an observed association between the environment and some disease may be assessed for the degree of causality involved.  Only a very few biomarkers lie directly on this causal path (some biomarkers change in response to the disease itself, others are only indirectly associated with the disease and its causes), but crucially their utility does not depend upon a causal association.  Nevertheless, particularly when biomarkers are used to aid the identification of disease, there are clear parallels between Sir Austin Bradford Hill’s assessment of causality and our current need to assess utility.

1.  Strength. Sir Austin’s primary factor to consider in the interpretation of causality was the strength of the association.  He argues that the stronger the association between two factors, the more likely it is that they are causally related.  However, he cautions against the obverse interpretation – that a weak association implies a lack of causality.  In fact, the strength of an association depends on the proportion of the variance in one factor that is explained by the other over the relevant sampling timescale.  In other words, there may be a completely causal relationship between X and Y, but X may be only one factor (possibly a small factor) controlling Y.  The remaining variance in Y may even be random fluctuation (so that X is the only factor causally associated with Y), yet the strength of the observed association will be weak, unless time-averaged measurements are taken for both variables.
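To make this point concrete, here is a minimal simulation sketch (our own illustration, with all numbers invented): X is the only causal driver of Y, yet the single-measurement association looks weak because random fluctuation dominates, and it strengthens once time-averaged measurements are used.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_repeats = 500, 20

    x = rng.normal(size=n_subjects)                      # the causal factor X
    noise = rng.normal(scale=3.0, size=(n_subjects, n_repeats))
    y = x[:, None] + noise                               # Y = X + large random fluctuation

    r_single = np.corrcoef(x, y[:, 0])[0, 1]             # one snapshot measurement per subject
    r_averaged = np.corrcoef(x, y.mean(axis=1))[0, 1]    # time-averaged measurement per subject

    print(f"single measurement:   r = {r_single:.2f}")   # weak, despite X being fully causal
    print(f"averaged measurement: r = {r_averaged:.2f}") # much stronger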

The strength of the association is probably an even more important factor for assessing the utility of biomarkers than it was for assessing causality.  It is clear to all that the stronger the association between a putative biomarker and the disease under examination, the more likely it is to have clinical application.  However, as with the arguments for causality, there are important caveats.  The clinical utility of a putative biomarker often depends upon the shape of the receiver-operating characteristic (ROC) curve, not just the area underneath it.  For example, a test whose specificity remains at 100%, even at lower sensitivity, may have far more clinical utility than a test where both sensitivity and specificity are 90% – depending on the application – even if the overall strength of the association is weaker.
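As a hedged illustration of why the shape of the ROC curve matters as much as the area under it, the sketch below (with invented score distributions labelled “Test A” and “Test B”) shows how a test with a slightly lower AUC can still deliver useful sensitivity while specificity remains at 100%.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)
    n = 1000
    labels = np.r_[np.zeros(n), np.ones(n)]              # 0 = control, 1 = disease

    # "Test A": a long right tail in the disease group gives a usable 100%-specificity region
    test_a = np.r_[rng.normal(0, 1, n), rng.normal(0.8, 1, n) + rng.exponential(2.0, n)]
    # "Test B": a simple symmetric shift, roughly 90%/90% sensitivity/specificity territory
    test_b = np.r_[rng.normal(0, 1, n), rng.normal(2.1, 1, n)]

    for name, score in (("Test A", test_a), ("Test B", test_b)):
        fpr, tpr, _ = roc_curve(labels, score)
        sens_at_full_spec = tpr[fpr == 0].max()          # sensitivity while specificity is still 100%
        print(f"{name}: AUC = {roc_auc_score(labels, score):.2f}, "
              f"sensitivity at 100% specificity = {sens_at_full_spec:.2f}")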

It’s also possible to improve the strength of a crude association, for example by subsetting the patient population.  A given biomarker may perform much better in, say, males than females, or in younger people rather than older people.  The applicability of the biomarker may be restricted, but the strength, and hence the clinical utility, of the association may be improved dramatically.  But despite these caveats, the strength of the association is a good “first pass” screening criterion for assessing the utility of biomarkers – much as for Sir Austin it yielded a good “first guess” as to whether an association was likely to be causal.

2.  Consistency.  Sir Austin Bradford Hill puts this essential feature of any biomarker programme second on his list of causality factors.  He states “Has [it] been repeatedly observed by different persons, in different places, circumstances and times?”.  This is an absolutely crucial issue, and one on which many a biomarker programme has failed.  One only has to look at the primary literature to realise that there have been dozens of potential biomarkers published, of which most have not been validated, as indicated by the lack of positive follow-on studies.  Much of this attrition can be put down to study design, something that was discussed in an earlier blog.

3.  Specificity. The discussion of specificity by Sir Austin Bradford Hill is also highly relevant to today’s biomarker research.  We live in an ’omics world, with the ability to measure levels of dozens, hundreds or even thousands of potential biomarkers with an ease that must have seemed like science fiction in 1965.  As a result, it is often trivial (in both the technical logical sense of the word and the everyday one) to identify a biomarker apparently associated with a disease.  Consider, however, how a marker of inflammation might behave: it will likely be strongly associated with any selected inflammatory disease, but it is unlikely to have any specificity over other inflammatory conditions.  For example, serum levels of C-reactive protein correlate well with rheumatoid arthritis, but because C-reactive protein is also associated with dozens of other inflammatory conditions it has little clinical utility for the diagnosis of RA (although, of course, it may be useful for monitoring disease activity once a robust differential diagnosis has been secured by other means).  Again, this raises the issue of study design: preliminary studies are often set up with the aim of identifying differences in levels of biomarkers between subjects with disease and healthy controls.  Such studies may provide a list of candidates, but ultimately most of these will not show adequate specificity – an issue that only becomes apparent when a more suitable control population is used.

4.  Temporality. This is perhaps the most obvious of Bradford Hill’s concepts: for a causal relationship between X and Y, changes in X must precede changes in Y.  Similarly, a biomarker is more useful in disease diagnosis when it changes before the disease is manifestly obvious.  On the face of it, the earlier the change can be detected before the disease exhibits clinically-relevant symptoms, the more useful that advance warning becomes.  In the limit, however, differences that are exhibited long before the disease (perhaps even for the whole life of the individual, such as genetic markers) become markers of risk rather than markers of the disease process itself.

5.  Biological gradient.  This feature of biomarker studies is just as important as it was when Sir Austin discussed it in relation to the causality of associations.  Our assessment of the utility of a biomarker increases if there is a dose-response association between levels of the biomarker and the presence or severity of disease.  So, taking colorectal cancer as an example, one might give greater weight to a biomarker whose levels are somewhat elevated in patients who have large polyps and strongly elevated in patients who have overt cancer.  A gradient of elevation across patients with different stages of cancer would also add to the plausibility of the putative biomarker (see below).
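One simple, illustrative way to quantify such a gradient (the stages and values below are entirely hypothetical) is a rank correlation between ordinal disease stage and biomarker level.

    from scipy.stats import spearmanr

    # hypothetical measurements: stage 0 = healthy, 1 = large polyps, 2 = overt cancer
    stage = [0, 0, 0, 0, 0,  1, 1, 1, 1, 1,  2, 2, 2, 2, 2]
    biomarker = [1.0, 1.2, 0.9, 1.1, 1.0,
                 1.6, 1.4, 1.8, 1.5, 1.7,
                 2.9, 3.4, 2.6, 3.1, 3.0]

    rho, p = spearmanr(stage, biomarker)                 # monotonic trend across stages
    print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")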

6.  Plausibility. Of all the criteria put forward in the paper by Sir Austin Bradford Hill back in 1965, we find this the most interesting.  Prior to the ’omics era, the majority of experimental designs were already based on a hypothesis of some sort – that is, plausibility was inherently built in to every experiment, simply because measuring most analytes or potential biomarkers was expensive in both time and money.  To Sir Austin, it must have been the norm rather than the exception that observed associations had at least a degree of plausibility.

In the modern era this is no longer the case.  Thousands of genes, metabolites or proteins may now be examined in a very short period of time and (for the amount of data obtained) at a very reasonable cost.  And because recruiting additional subjects into a clinical study is typically far more expensive than measuring an additional analyte or ten, one often finds that the resulting dataset for a modern study is “short and fat” – that is, many more analytes (variables) have been measured than there were patients (observations) in the first place.  Moreover, there is often no particular reason why many of the analytes have been measured – other than the fact that they composed part of a multi-analyte panel or some pre-selected group of biomarkers.  Post-hoc justification becomes the norm.  It is almost impossible to avoid.  We find a few “statistically significant” differences[ii], and then rush to explain them either from our own background knowledge or by some hurried literature searches.  The sum of biological knowledge (or at least of published data) is orders of magnitude greater than it was in Hill’s day, and nowadays it is entirely straightforward to construct a plausibility argument for any association one might find in such a trawl.
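A toy simulation (entirely synthetic data, our own sketch rather than anything from Sir Austin’s paper) makes the scale of the problem clear: with forty subjects and two thousand analytes of pure noise, a comfortable handful of nominally “significant” differences appears every time.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    n_per_group, n_analytes = 20, 2000                    # 40 subjects, 2000 analytes

    cases = rng.normal(size=(n_per_group, n_analytes))    # pure noise: no real difference anywhere
    controls = rng.normal(size=(n_per_group, n_analytes))

    p_values = ttest_ind(cases, controls, axis=0).pvalue  # one t-test per analyte
    print(f"analytes with p < 0.05 on pure noise: {(p_values < 0.05).sum()}")        # expect ~100
    print(f"surviving Bonferroni (p < 0.05/{n_analytes}): {(p_values < 0.05 / n_analytes).sum()}")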

We caution strongly against this approach, however.  Tempting though it is to take this route, the likelihood that any biomarkers identified in such experiments have real validity is almost nil, and enthusiastic but unwitting over-interpretation is often the outcome.  This does not mean that such datasets cannot be mined successfully, but doing so is a job for a professional, wary of the pitfalls.  And no such biomarker should be considered useful until it has been validated in some well-accepted manner.

Interestingly, from the perspective of 1965, Sir Austin Bradford Hill came to the conclusion that it would be “helpful if the causation we suspect is biologically plausible”, but today we do not share that perspective.  Armed with so much published data, an argument for plausibility can be built for almost any association – this lack of specificity means that plausibility has little predictive value as a criterion for assessing utility.  He did, however, state that, from the perspective of the biological knowledge of the day, an association we observe may be one new to science, and it must not be dismissed “light-heartedly as just too odd.”  This holds true as much today as it did then.  When faced with two associations, one plausible and one off-the-wall, plausibility is not necessarily the primary criterion that we apply to determine utility.

7.  Coherence.  Similar to plausibility, this criterion highlights that while there may be no grounds to interpret something positively based on currently available biological knowledge, there may nevertheless be reason to doubt data based on existing scientific evidence.  The arguments against using coherence to assess utility of candidate biomarkers are the same as for plausibility.

8.  Experiment.  This is another crucial factor that is just as relevant in today’s world of biomarkers as it was in 1965.  Sometimes the fields of diagnostic medicine and experimental biology are not as well integrated as they should be.  Interpretation of biomarker identification or biomarker validation experiments is often limited by the availability of samples or data.  However, there is much to be said for taking the information learnt in the examination of biomarkers in patients back to the bench.  Here much tighter control may be applied to your experimental system, and hypotheses generated in vivo may be tested in vitro.  This may seem back-to-front, but it is an essential feature of any well-designed biomarker programme that it be tested experimentally.  This may be possible in patients, but it may often be carried out more cheaply and quickly at the bench or in animal models of disease.

9.  Analogy. Analogy falls into the same category as plausibility and coherence.  The huge volume of published data, much of it from studies that were poorly conducted and/or never followed through, means that testing the validity of a finding by analogy to existing biological knowledge is becoming ever more difficult.  It is not analogy that is needed, but consistency – and that means more well-designed experiments.

Perhaps it’s time to bring Bradford Hill’s criteria bang up to date for the 21st Century?  Much of his pioneering work on assessing causality between environmental factors and disease is just as valuable in assessing modern biomarkers for clinical utility.  For the initial assessment of biomarkers, as data begin to emerge from the first discovery studies, it is consistency and specificity that carry the greatest weight, with temporality, strength of the association and biological gradient only a short distance behind.  The key is to design efficient studies that allow each of these critical parameters to be assessed at the earliest stages of the biomarker discovery programme – too often biomarkers are trumpeted as ready for use before this checklist has been completed, and quite often before any experiment has even been conceived that might properly test each of them.

Experiment is a crucial component of the eventual validation of any biomarker, but the effort involved means that preliminary prioritization of candidate biomarkers will likely have to be undertaken without it.  Our Total Scientific Criteria (with appropriate deference to Sir Austin Bradford Hill) for assessing the utility of biomarkers might look something like this:

  1. Consistency
  2. Specificity
  3. Temporality
  4. Strength
  5. Biological gradient

There may be inflation in almost everything in the modern world, but at least when it comes to criteria for judging the utility of biomarkers we have gone from nine criteria to just five.  The pleasures of living in a simpler world!

Dr. David Mosedale and Dr. David Grainger
CEO and CBO, Total Scientific Ltd.


References

[i] Source:  PubMed search carried out in March 2011.

[ii] We are deliberately avoiding discussion of what might be statistically significant in such a short and fat dataset.  Interestingly, Sir Austin’s paper finishes with a discussion on statistical tests, and their potential overuse back in 1965.  This is well worth a read!

FDA guidance on the use of biomarkers as drug development tools

Back in September the US Food and Drug Administration announced that it was going to delay its publication of draft guidance on the qualification of drug development tools, originally promised for the summer.  However, this draft guidance was finally published at the end of October.  While still in draft form, the Guidance substantially expands on the outline of the pilot qualification process given in an article written by two members of the Center for Drug Evaluation and Research published in 2007.

The new guidance principally provides information on the proposed administrative process that will be followed by the FDA in order to qualify new drug development tools (DDTs).  Qualification is defined as “a conclusion that within the stated context of use, the results of assessment with a DDT can be relied upon to have a specific interpretation and application in drug development and regulatory review.”  The document discusses two forms of DDT – biomarkers and patient-reported outcome scales.  There are a couple of points that bear discussion in relation to biomarkers.

Firstly, the new qualification procedure is aimed at enhancing the utility of a qualified biomarker across the industry.  Hence, while previously the use of a biomarker may have been part of an NDA, IND or BLA, this new programme is designed to make public those biomarkers that satisfy the qualification process, so that future drug development programmes can take advantage of already knowing that the biomarker has been qualified for a particular purpose.  Of course, wherever a new biomarker is proprietary, it can be retained as such by not using the new qualification process, and instead remaining part of the NDA, IND or BLA.  It seems, therefore, that this new programme is not particularly aimed at individual companies, but rather at collaborative groups that can together share the burden of developing drug development tools and submitting them to the FDA.  Indeed, less than a month after the draft guidance was published, several major pharmaceutical companies and leading academic institutions announced such a collaborative biomarker consortium for COPD.

Secondly, while there is detailed information on the administrative process, there is no information on the level of evidence required by the FDA to take a biomarker through from submission to qualification.  There are a number of discrete stages that have to be undertaken, but nowhere are the criteria on which a new biomarker will be assessed described.  The means by which such an assessment may be made are described: the first stage includes a consultation process between the submitter and the FDA, and formal assessment of the biomarker follows, including discussion at internal FDA meetings, discipline reviews and potentially even public discussions.  However, the level of evidence required for success at each stage is not discussed.

It is tempting to suggest that this gap in the document is due primarily to the difficulty of formalising the criteria required for qualification of a biomarker.  The wide range of uses to which biomarkers may be put – whether to preselect individuals for study or treatment, inform about disease progression, predict drug efficacy or toxicity, or follow treatment dynamically in an individual – makes it difficult to put together a priori criteria that will apply in all cases.  If this supposition is true, and each biomarker will be assessed on its own merits with no reference to pre-determined criteria, the new qualification procedures do at least give the scientific community the ability to comment and provide feedback on decisions made, since all new qualifications will be published.  Indeed the guidance states “Once a DDT is qualified for specific use, the context of use may become modified or expanded over time as additional data are collected, submitted, and analyzed.  Alternatively, if the growing body of scientific evidence no longer supports the context of use, the DDT qualification may be withdrawn.”  Of concern, such scrutiny will not be applied to proprietary biomarkers submitted as part of INDs, NDAs or BLAs, but some scrutiny and sharing of validation study data is at least a move in the right direction.  The FDA’s qualification process seems likely to stimulate a further increase in the utility of biomarkers as drug development tools.

It should be noted that the Guidance Document is still in draft form.  It was published on October 25, and the FDA is asking for comments and suggestions to be submitted by January 24, 2011 for consideration when preparing the final document.

Biomarkers: standing the test of time means good initial study design

It feels like every other day that another putative biomarker is identified that will predict the presence or extent of some disease or other, usually with an absurdly low p value.  So, if these biomarkers are so common, why is the subsequent commercialisation and clinical use of these potential diagnostics so difficult to achieve?

I believe that the biggest of the problems is in the design of the studies carried out, particularly in the early stages of biomarker research.

The first stage in the development of a biomarker is the study in which it is identified.  This initial study is usually designed to maximise the difference in phenotype between your control subjects and your patients with disease.  This is usually thought of as the best way of identifying a biomarker for the disease in question.  However, it should always be remembered that the result of your experiment will always be dictated by its design.  Assuming that the scientific aspects of the study are carried out rigorously, the best outcome of a biomarker identification study can only be a biomarker that best distinguishes your two study groups.  However, the groups of subjects studied in the first identification of a biomarker are rarely those that a clinician will want to discriminate between.  A clinician is seldom faced with the need to determine whether a sample comes from a patient with disease or a healthy individual.  More typically their problem is in distinguishing different underlying pathologies with similar symptoms or, in the context of screening, in determining which of two apparently healthy subjects has underlying asymptomatic disease.

Once this initial mistake in study design has been made, it leads to something we all see often – gradually diminishing diagnostic power the more you work with the diagnostic.  After the first study has been carried out, you try to repeat your preliminary work, usually with greater numbers of patients.  Another clinician is brought on board, or an additional clinical site.  The study is repeated, and the ability to distinguish subjects with disease from those without is markedly weaker than in your first study.  This should be entirely expected.  Your biomarker identification study looked for the difference between healthy subjects and those with your target disease.  Often your follow-up studies are not actually testing the same thing.  Now you are looking at distinguishing subjects with disease from subjects with similar symptoms, but who may have completely different underlying pathologies.  It should come as no surprise that your sensitivity and specificity have dropped.
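A minimal simulation sketch (with invented distributions, assuming the biomarker tracks a feature shared by several pathologies) illustrates how large this drop can be when the comparison group changes from healthy controls to symptomatic patients with a different underlying disease.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    n = 500

    healthy = rng.normal(0.0, 1.0, n)                    # low marker levels
    other_pathology = rng.normal(1.6, 1.0, n)            # similar symptoms, different disease, raised marker
    target_disease = rng.normal(2.0, 1.0, n)             # the disease of interest, raised marker

    labels = np.r_[np.zeros(n), np.ones(n)]
    auc_vs_healthy = roc_auc_score(labels, np.r_[healthy, target_disease])
    auc_vs_symptomatic = roc_auc_score(labels, np.r_[other_pathology, target_disease])

    print(f"disease vs healthy controls:     AUC = {auc_vs_healthy:.2f}")       # looks excellent
    print(f"disease vs symptomatic controls: AUC = {auc_vs_symptomatic:.2f}")   # much weaker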

At this stage, consternation sets in for you and your investors.  You feel that there must be some improvement that can be made in the measurement of your biomarker.  Protocols are tightened up, samples re-assayed, statisticians called in.  But it is all in vain.  Your initial choice of biomarker was flawed, and it is all too late.

So what is the answer to this chain of events?  It is actually quite simple.  When you are designing your early biomarker studies, make absolutely sure that you have done all of your homework.  Understand exactly what problems clinicians have when trying to identify particular pathologies, and target your FIRST clinical studies accordingly.  You might be less likely to find that wonderful biomarker in the very first study, but you will quickly find yourself in one of two situations:  either you will fail early (and cheaply, which your investors will thank you for) or you will find a biomarker that is much more likely to stand the test of time.

David Mosedale

Biomarkers: A Band-Aid for Bioscience

In this kick-off article to the new Total Scientific biomarker blog, we discuss the potential for biomarkers to improve the R&D productivity of the pharmaceutical industry.
Please read the full article here.