Channel: PulmCrit: Pulmonary Intensivist's Blog

Investigation Bias: The freakonomics of when industry choses to sponsor a clinical trial




Background:  Publication bias

Over the last several years, publication bias has received considerable attention.  In its most blatant form, a drug company sponsors several trials but publishes only those which yield positive results.  Growing awareness of this problem has led to the development of trial registries, with increasing pressure on industry to publish all trial results (positive or negative).  Investigation bias is a similar problem.

Investigation bias

Investigation bias refers to the fact that companies will only sponsor studies which they believe are likely to improve sales of their product.  That seems reasonable enough.  However, it can lead to an unfortunate situation wherein therapies are only partially evaluated.  

When an intervention is first developed, it must undergo at least some testing to be accepted and FDA approved.  Once the intervention is approved and enters clinical use, things get more complicated.  As the intervention becomes more popular, the company has more to lose from additional clinical trials (if the trials fail to show benefit, or reveal an unexpected complication).  Additionally, as the intervention grows more popular, the company has less to gain from a positive trial (the intervention is already being used, so a positive trial may not improve sales substantially). 

Game theory: When is it beneficial to run an additional clinical trial?

 

To illustrate how this may work, let's imagine a very simple situation.  Suppose that a drug currently holds x% of the market share (out of the maximal number of patients for whom it could be prescribed).  If a clinical trial is run, the probability that it will yield a positive result is p(success).  For the sake of simplicity, let us imagine that this will be a definitive trial which will either drive the market share up to 100% (if the trial is successful) or down to zero (if the trial is negative).  If this scenario were played out a thousand times, the average gain and average loss in market share would be:

average gain = p(success) × (100% − x)
average loss = (1 − p(success)) × x
Suppose first that the likelihood of having a positive study, p(success), is 50%.


In this scenario, it only makes sense to run a clinical trial if the current market share is <50%.  If the drug is already occupying >50% of the market, the risk of running a trial outweighs the benefit.  

Now let's suppose that we have a very promising drug, which we truly believe is going to work well.  We estimate the likelihood of obtaining a positive trial with this wonder drug, p(success), to be 90%.  In this case, the breakeven point rises accordingly: even if the drug is already quite popular (occupying up to 90% of the market share), it would still make sense to run another clinical trial.


Finally, let's suppose instead that we have a rather dodgy drug which we don't really expect to work well.  We estimate the likelihood of obtaining a positive trial, p(success), with this drug to be 25%.  For this drug, it is advantageous to run another clinical trial only if the market share is very low (<25%).


As these examples illustrate, it is unwise to run a clinical trial if the current market share is greater than the likelihood of obtaining a positive trial.  This is an oversimplification, but it illustrates a basic point:  when the drug's popularity exceeds its probable effectiveness, further investigation of the drug is a poor investment for the company.
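The simplified model above can be sketched in a few lines of Python (my own illustration, not from the original post; the function names and example numbers are assumptions):

```python
# A definitive trial drives market share to 100% with probability
# p_success, or to 0% otherwise (the simplified all-or-nothing model).

def expected_gain(p_success, share):
    """Average market share gained (percentage points) across many replays."""
    return p_success * (100 - share)

def expected_loss(p_success, share):
    """Average market share lost (percentage points) across many replays."""
    return (1 - p_success) * share

def trial_is_worthwhile(p_success, share):
    """On average a trial pays off only while share < 100 * p_success."""
    return expected_gain(p_success, share) > expected_loss(p_success, share)

# The three scenarios from the text (illustrative market shares):
print(trial_is_worthwhile(0.50, 40))   # 50% drug, 40% share -> True
print(trial_is_worthwhile(0.90, 85))   # promising drug, 85% share -> True
print(trial_is_worthwhile(0.25, 30))   # dodgy drug, 30% share -> False
```

Note that the breakeven condition reduces to share < 100 × p(success), which is exactly the rule stated above: run the trial only while popularity is below probable effectiveness.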

What does this imply about investigation bias?

After an intervention is approved, its market share will grow over time.  The company will continue to study it until a stopping point is reached, when the company believes that the risk of a negative trial outweighs the possible benefit of a positive trial.

For a drug which the company believes is going to be extremely successful (e.g., p(success) guessed to be 90%), investigation bias is not a big problem.  The company will continue running clinical trials until the market share is very high.  By investigating the drug thoroughly, the company has much to gain and little to lose.  This is good business and good medicine.

The problem arises for a drug which the company doesn't think works very well.  Or, perhaps, the drug is initially promising but further investigation suggests that it may not work well.  For example, suppose that the company estimates the likelihood of a successful clinical trial at 15%.  As soon as the drug becomes somewhat popular, the company will stop further testing of the drug.  This is good business but poor medicine.

Herein lies the real danger of investigation bias:  a drug which becomes prematurely popular, before rigorous research has proven its efficacy.  If the company believes that the drug’s popularity has exceeded its effectiveness, it will immediately halt further investigation into the drug.  This may cause the drug to be used for years or decades, without any firm evidence basis.

Example #1: Activated protein C (APC), a.k.a. drotrecogin alfa (Xigris)

In 2001, the PROWESS trial found that APC caused a stunning 6% absolute mortality reduction in septic shock.  However, there were some concerns about this study, including premature termination and a change in the recruitment protocol during the study (Finfer 2008).  Additionally, subgroup analysis suggested that the drug was effective only in the sickest patients.

When the FDA considered approval of APC, the initial advisory vote was tied.  Ultimately the FDA approved APC in 2001 with a request for further evaluation in patients with less severe septic shock.  This led Lilly to sponsor the ADDRESS trial in 2005, which confirmed that APC was indeed ineffective in such patients.

Unlike the FDA, the European Medicines Agency (EMA) approved APC in 2002 with a requirement for annual review.  In 2007, this annual review found that the evidence supporting APC was weak and called for further studies.  To satisfy the EMA, Lilly performed the PROWESS-SHOCK trial, an attempt to replicate the initial PROWESS trial.  This was a negative study, which led to the immediate withdrawal of APC from the market.



This illustrates how investigation bias may discourage replication of a positive study.  Ideally, a dangerous and expensive drug shouldn’t be prescribed to thousands of patients over eleven years on the basis of a single positive study.  Ideally, the PROWESS trial should have been replicated earlier.  However, from Lilly’s standpoint, a smarter strategy was instead to study APC in populations where the drug wasn’t already being used (e.g. septic children in the RESOLVE trial).   If this trial had been positive, it would have expanded the use of APC.  Alternatively, if it were negative (as was the case), it wasn’t a major loss to the company.  Thus, studying the drug in a new patient population is a low-risk, high-benefit strategy.  

In contrast, replicating the PROWESS trial was a high-risk undertaking, because it threatened the entire APC market.  Although the PROWESS trial was very positive, subsequent data didn't look so encouraging for APC.  Lilly may have realized that a replication of PROWESS probably wouldn't be as positive as the initial trial.  Consequently, Lilly waited until it was pressured by the EMA to undertake a replication study.  The rest is history.

Example #2:  Inferior Vena Cava Filters

The current use of IVC filters is based on a single RCT from 1998 (PREPIC-1).  This was not an overwhelmingly positive study:  IVC filters reduced the risk of PE, increased the risk of DVT, and yielded no mortality benefit.  The results may have been biased because patients randomized to no IVC filter knew that they hadn't received one, which may have increased anxiety about recurrent PE, leading to an increased intensity of subsequent scanning.

Nearly two decades have gone by, with IVC filters being permanently implanted in thousands of patients.  Meanwhile, there has been no replication of PREPIC-1.  Why not?  IVC filters have been broadly accepted by the medical community, with a consensus to use them in patients with pulmonary emboli who can’t receive anticoagulation.  Their popularity probably exceeds their efficacy.  Industry has little to gain and much to lose from attempting to replicate PREPIC-1.  

The PREPIC-2 study was performed to investigate whether temporary IVC filters improve outcomes in patients who could receive anticoagulation, a controversial indication for which IVC filters were only occasionally used.  The study was sponsored by the French Department of Health but supported indirectly with a free supply of IVC filters from industry.  This study is similar to the RESOLVE trial of APC in septic children:  it tested IVC filters in a situation where they were rarely used.  The study was negative, but only caused limited damage to the IVC filter market (more discussion of these studies here).

Detecting investigation bias: Hearing the dog that doesn’t bark

Investigation bias can be very subtle, to the point of being nearly invisible.  It is impossible to be critical of a study which doesn’t exist.  Especially in critical care, we are used to encountering topics about which little evidence exists.  Thus, a lack of evidence supporting various interventions often goes unnoticed.


A famous Sherlock Holmes story turns on Sherlock's noticing the absence of a dog barking as evidence that the perpetrator was the dog's owner.  In the same sense, it may sometimes be possible to sense the presence of investigation bias from the absence of studies which really ought to be done.  When a drug is intensively investigated for a few years and then, amid ongoing controversy, investigations abruptly stop… what happened?  Why were no further trials done?  Perhaps the company lost faith in the drug and felt that ongoing studies wouldn't pay off.  It is impossible for us to know.

Conclusions

The scientific method is based upon ongoing study until a topic is understood, driven by a fundamental quest to understand the topic.  Unfortunately, industry-funded research is a perversion of this process, driven instead by the endpoint of improving sales.  Investigation bias is created because industry will only pursue studies to the extent that they represent a good financial investment for the company.  In some cases, this may halt further research long before a topic is well understood, leading to the ongoing use of a harmful or ineffective therapy.  Unfortunately, this form of bias is difficult to detect or contend with.

Image credits:
https://en.wikipedia.org/wiki/Sherlock_Holmes#/media/File:Strand_paget.jpg
