Rational Use of Plasma Protein and Tissue Binding Data in Drug Design


Protein binding has been a hot topic in recent years, with a number of authors challenging the long-held belief that simple in vitro measures of plasma protein binding or tissue binding can be used to accurately predict active drug concentrations. A recent article in J. Med. Chem. continues this debate. The authors present both theoretical and empirical studies showing that low in vitro plasma protein binding is not necessarily predictive of a high unbound blood concentration in vivo, nor does low brain tissue binding accurately predict a high free CNS drug concentration.
The authors start by studying a list of 169 recently approved drugs (2003-2013), all efficacious compounds, of which 45% are >95% plasma protein bound in vitro (Figure 1).

[Figure 1: in vitro plasma protein binding of drugs approved 2003-2013]

The authors go on to describe experiments demonstrating the effects of protein binding in in vitro assays, on pharmacokinetic parameters and, finally, on free brain concentration. The summary of the extensive discussion is that protein binding measurements are of value in drug discovery programmes; however, the picture is complex and the data should never be used in isolation. Finally, the authors make four recommendations for when and how to use plasma protein binding (PPB) data:

  1. PPB should be determined to investigate whether an observed potency shift (for example, between assays run with and without serum) is due to protein binding.
  2. PPB should be determined in pharmacokinetic and pharmacodynamic (PK/PD) studies so that the relationship between in vivo free concentrations and the in vitro free IC50 for the pharmacological or toxicological targets can be assessed.
  3. PPB and brain tissue binding should be determined to calculate the unbound brain to unbound plasma concentration ratio (Kp,uu) in brain distribution assessment (see the sketch after this list).
  4. In human clearance prediction studies, binding in microsomes and hepatocytes should be considered.
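As a concrete illustration of recommendations 2 and 3, here is a minimal sketch in Python of the underlying arithmetic; all concentrations and unbound fractions are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of the arithmetic behind recommendations 2 and 3.
# All concentrations and unbound fractions are illustrative placeholders.

def unbound(total_conc, fu):
    """Unbound concentration from total concentration and fraction unbound."""
    return total_conc * fu

# Recommendation 2: compare the in vivo free plasma concentration
# with the in vitro free IC50 for the target.
total_plasma = 2.0    # uM, hypothetical steady-state total plasma level
fu_plasma = 0.05      # fraction unbound in plasma (i.e. 95% bound)
free_plasma = unbound(total_plasma, fu_plasma)   # 0.1 uM

# Recommendation 3: unbound brain-to-plasma ratio (Kp,uu).
total_brain = 1.2     # uM, hypothetical total brain concentration
fu_brain = 0.02       # fraction unbound in brain homogenate
kp_uu = unbound(total_brain, fu_brain) / free_plasma

print(f"free plasma: {free_plasma:.3f} uM, Kp,uu: {kp_uu:.2f}")
```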

 

Might disordered proteins cause serious drug side effects?


Insomnia is a sleep disorder characterized by the inability to fall asleep; in 2011, 12-20% of the general adult population suffered from insomnia, with a higher prevalence in the USA. The abnormal sleep cycle results in other symptoms such as lack of motivation, mood disturbances, memory loss, tiredness, lack of energy, headaches and some gastrointestinal disorders.
The most common treatment for insomnia is medication with benzodiazepine and non-benzodiazepine drugs. The benzodiazepine drugs enhance the effect of GABA (γ-aminobutyric acid) at the GABAA receptor, resulting in sedative, hypnotic, muscle-relaxant, sleep-inducing and anxiolytic effects. Non-benzodiazepine drugs include zolpidem tartrate (Ambien®), sodium butabarbital (Butisol®) and eszopiclone (Lunesta®). The major disadvantage of these drugs is their multiple side effects: tolerance, dependence and memory impairment.
In their article (http://www.sciencedirect.com/science/article/pii/S1359644613003875), Weng and Calvin propose a possible drug design strategy that might reduce these side effects by considering the disordered proteins related to insomnia.
Sleep-related proteins and receptors can be classified into the clock complex and the hypnotic-related receptors (HRRs). The clock complex controls circadian rhythm activity; genetic mutations in this complex have been shown to cause dysfunction in the perception of sunset and sunrise, leading to sleep disorders. The complex is formed by several proteins, including BMAL-1, CLOCK, CRY1/2, CKI/II and PER1/2/3.
The HRRs include the dopamine receptors (D2/D3), GABAA, the histamine receptors (H1/H2), the melatonin receptor, the muscarinic receptor M1 and the orexin receptors 1/2.
In the study, they analyzed the structural properties of the clock complex and the HRRs using computational techniques and found that the majority of the clock complex proteins share a highly disordered region located in the middle of the sequence, implying flexibility of the whole complex. Over 30% of CLOCK, PER1/2/3, BMAL-1, M1, the melatonin receptor and CKI is disordered; overall, the percentage of structural disorder in the HRRs is much lower than in the clock complex.
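As a flavour of how such a percentage is obtained, here is a minimal sketch that summarises a per-residue disorder profile; the scores are invented placeholders standing in for the output of a disorder predictor such as IUPred or PONDR, with 0.5 as a commonly used cutoff.

```python
# Minimal sketch: turn a per-residue disorder profile into a
# percent-disorder figure, as in the analysis described above.
# The scores are placeholders for real predictor output.

def disorder_fraction(scores, cutoff=0.5):
    """Fraction of residues whose predicted disorder score >= cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

# Hypothetical scores for a short stretch of sequence (one per residue).
scores = [0.12, 0.08, 0.55, 0.71, 0.80, 0.66, 0.43, 0.22, 0.60, 0.35]
print(f"{disorder_fraction(scores):.0%} of residues predicted disordered")
```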


The clock complex shows a higher rate of disorder in the middle of the sequence, implying flexibility of the whole complex. The authors suggest that the HRRs might be better drug targets for new insomnia therapies, given the smaller chance of affecting the flexibility and stability of the complex. They note that other investigators have provided strong evidence that targeting ordered structures can lead to side effects, because there is a limited repertoire of binding sites in nature.
They advise against screening or basing studies on a rigid structure of a disordered protein, because these proteins do not exist in a fixed structure, and even a small disordered region can change the conformation, flexibility and stability of the whole protein. Many chemists screen databases using a rigid protein structure, without considering flexible movement or cryptic allosteric sites (CAS). CAS do not exist in the native state but may be present in transition states, making them possible positions for drug targeting.
A coarse-grained model might also be a good method for mapping out the transition pathway of a disordered protein's transformation, making it possible to predict cryptic allosteric sites.
They mention that cognitive behavioural therapy is another useful treatment for insomnia, although it does not work for all patients. Finally, traditional Chinese medicine (TCM) seems a potentially useful treatment for insomnia; however, because of the many ingredients extracted from herbs there is not yet enough evidence of efficacy or safety, and they suggest that more clinical studies are required. Further research is also needed to identify novel, highly folded proteins related to the clock complex or the sleep-related receptors.

Schizophrenia drug targeting negative symptoms remains as elusive as ever with the demise of Roche’s bitopertin


Schizophrenia is a major disorder of brain function, prevalent in 1% of the population worldwide and estimated by the World Health Organisation to be the fifth leading cause of disease worldwide. Symptoms of schizophrenia can be categorized as positive and negative: positive symptoms include hallucinations, delusions and disordered thoughts and speech, while negative symptoms include apathy, poverty of speech, anhedonia (the inability to experience pleasure) and social withdrawal. In general, positive symptoms are treated fairly well by antipsychotics, but there is an unmet need for drugs able to treat the negative symptoms, and these largely untreated symptoms remain a huge barrier to the resumption of a fully functional, "normal" life for affected individuals. The result is a huge emotional and societal burden, from the costs of managing the disease to the impact on caregivers and resources, with an estimated annual cost in the UK alone of around £12 billion. Additionally, an inability to function and contribute to society often leads to withdrawal and depression, and to a suicide rate of 4-5% in the schizophrenic population.
In January of this year, Roche announced that two Phase III clinical trials of bitopertin (RG1678, see here), a glycine transporter 1 (GlyT1) inhibitor, had failed to achieve their primary endpoint in the treatment of the negative symptoms of schizophrenia – severely reducing hopes that this might be the first drug to target currently untreatable negative symptoms (see here). At that time, four additional Phase III studies were continuing. However, in their quarterly report last month, the company stated that an additional Phase III study looking at sub-optimally controlled symptoms, such as hallucinations and delusions, had also failed its primary endpoint (see here). In light of these results, the company terminated two of the remaining three studies, leaving just one study in sub-optimally controlled symptoms active.

And so once more a much-touted potential breakthrough CNS therapeutic has essentially failed, prompting the obvious question: why? Well, the Phase III studies were triggered by an 8-week Phase II study of bitopertin in 2010, which demonstrated a significant improvement in negative symptoms compared to placebo, as measured by the PANSS negative symptom factor score (see Figure), although in retrospect the narrow therapeutic window, with efficacy at the 10 and 30 but not the 60 mg dose, might have been a cause for concern. However, this is not the first time (and unfortunately it will probably not be the last) that Phase II efficacy has failed to translate into Phase III efficacy for a novel CNS therapeutic (Merck's NK1 antagonist being perhaps the most spectacular recent example). And at least Roche had positive Phase II data, rather than the "intriguing preliminary data" in a negative Phase II study that was the basis for Lilly to commence Phase III studies with their mGlu2/3 agonist pomaglumetad methionil (see here).
The once-positive outlook for bitopertin can only have intensified the disappointment, not only for Roche but for the neuroscience community in general. From a mechanistic point of view, the Roche data also call into question the efficacy of therapeutically modulating the glutamatergic pathomechanisms thought to underpin schizophrenia and to complement the dopaminergic hypothesis that underlies the positive symptoms. Roche's rationale was to target the glycine transporter, GlyT1, and inhibit it with bitopertin in an effort to raise synaptic levels of glycine and thereby increase the activity of NMDA receptors, for which glycine is a co-agonist along with glutamate. This is based on increasing evidence implicating dysfunction in glutamatergic signaling pathways, specifically a hypofunction in NMDA receptor signaling, in the pathogenesis of schizophrenia and the appearance of negative symptoms, and it is also the basis of several other therapeutic approaches, including modulation of the mGluR2 and mGluR5 receptors. While it would be unwise to attempt to activate NMDA receptors directly, indirectly modulating NMDA receptor activity by blocking the reuptake transporter of glycine has been an attractive and less risky alternative. An alternative approach to treating the negative symptoms of schizophrenia is being pursued by AbbVie and EnVivo, who are both targeting nicotinic acetylcholine receptors, and it is to be hoped that one or both of these drugs makes it to market to reenergize the beleaguered neuroscience therapeutic area.
Figure taken from Roche’s December 2010 pipeline update (see here).

DESs: New ionic fluids


You will surely have heard of ILs, or ionic liquids, which were discovered more than a century ago, but have you heard of the new variety of ionic fluids called deep eutectic solvents, or DESs?
I had certainly not come across DESs until this recent joint publication by Spanish and Scottish research groups (http://onlinelibrary.wiley.com/doi/10.1002/anie.201400889/abstract).
A deep eutectic solvent is a type of ionic solvent with special properties. It is composed of two or three cheap and safe components that together form a eutectic mixture, with a melting point much lower than that of any of the individual components. They were first described in 2003, when Abbott and co-workers reported a low-melting mixture of choline chloride (2-hydroxyethyl-trimethylammonium chloride [ChCl]) and urea in a 1:2 mole ratio. Since then many different mixtures have been described. In most cases a DES is obtained by mixing a quaternary ammonium salt with metal salts or with a hydrogen bond donor (HBD) able to form a complex with the halide anion of the quaternary ammonium salt (Scheme 1). The components of such solvents are thus low in cost, biodegradable and of low toxicity; the synthesis of DESs is 100% atom-economical, the mixtures are easy to handle, require no purification and are easily recycled. Moreover, in comparison with common organic solvents they are less volatile and non-flammable, making them ideal green alternative media.
[Scheme 1: formation of a deep eutectic solvent from a quaternary ammonium salt and a hydrogen bond donor]
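As a small worked example of the classic 1:2 ChCl/urea recipe above, here is a sketch of the weighing arithmetic; the molar masses are standard values and the batch size is an arbitrary assumption.

```python
# Quick sketch of the weighing arithmetic for the 1:2 ChCl/urea eutectic.
# Molar masses are standard values; the 0.10 mol batch is an example.

MW_CHCL = 139.62  # g/mol, choline chloride
MW_UREA = 60.06   # g/mol, urea

n_chcl = 0.10                      # mol of choline chloride
mass_chcl = n_chcl * MW_CHCL       # ~13.96 g
mass_urea = 2 * n_chcl * MW_UREA   # 1:2 mole ratio -> ~12.01 g

print(f"ChCl: {mass_chcl:.2f} g, urea: {mass_urea:.2f} g")
```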

Since their first preparation, DESs have found applications in a variety of synthetic procedures such as brominations, polymerizations, dehydrations, cycloadditions, hydrogenations, condensations, NaBH4 reductions and Heck and Stille coupling reactions. However, it is this latest publication that first reports the successful coexistence of Grignard or organolithium reagents and green solvents within the same solution, using deep eutectic solvents.

We are all familiar with Grignard reactions, and you don't need to be a chemist to have heard of them. As a synthetic chemist you will certainly have carried out one of these reactions, if not more, and you will know that the addition of Grignard reagents (or organolithiums) to carbonyl groups requires aprotic, dry solvents under an inert atmosphere at temperatures ranging from 0 ˚C to –78 ˚C; glycerol (Gly) or even water would therefore definitely not be your solvent of choice. Hevia and co-workers, however, report the chemoselective addition of Grignard reagents (Table 1) and organolithiums (Table 2) to ketones using the eutectic mixtures 1ChCl/2Gly, 1ChCl/2EG and 1ChCl/2H2O at room temperature and in air.

[Table 1: chemoselective additions of Grignard reagents to ketones in DESs]

[Table 2: chemoselective additions of organolithium reagents to ketones in DESs]

It is worth pointing out that the addition reaction of the Grignard or organolithium reagents is orders of magnitude faster than their protonation by the water, ethylene glycol (EG) or glycerol (Gly) present in the DES, and that the reactions are complete almost immediately (2-3 s), suggesting a kinetic activation of the alkylating reagents. The authors speculate that the ammonium salt ChCl present in the DESs employed may play a further role beyond being a component of the DES mixture. Although not isolated in their work, they believe that an anionic magnesiate species (from the Grignard reagents) and a dianionic halolithiate species (from the organolithiums) are formed which have enhanced nucleophilic power, favouring the addition reaction in the DES over the competing protonation process.
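To make the "orders of magnitude faster" argument concrete, here is a minimal sketch of a simple two-pathway rate competition; the relative rate constants are invented placeholders, not measured values.

```python
# Minimal sketch of the rate competition described above: when addition
# to the ketone is much faster than protonation by the DES components,
# almost all of the organometallic ends up as addition product.
# The relative rate constants are invented placeholders.

k_add = 1.0e3   # relative rate of addition to the ketone
k_prot = 1.0    # relative rate of protonation by H2O/EG/Gly

fraction_addition = k_add / (k_add + k_prot)
print(f"expected addition product: {fraction_addition:.1%}")  # ~99.9%
```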
Although choline chloride is not a chemical we currently stock in the lab, I am tempted to order it and have it ready for my next Grignard reaction so that I can test this simple methodology, which avoids the use of Schlenk techniques and low temperatures. Are you not tempted to try it too?

Collaborative practices for medicinal chemistry research across the big pharma and not-for-profit interface


Drug Discovery Today 2014, article in press
There have been many publications in recent years on the topic of drug discovery collaborations between industry and academic groups. The majority discuss plans and strategies, and nothing is ever heard again from the authors with respect to the success or otherwise of the project. This article, written by scientists from AstraZeneca, Medical Research Council Technology and Cancer Research Technology, is refreshingly different. It describes a model used by the teams in successful collaborations, highlights the factors which contributed to that success and describes potential improvements which could be considered for future work.

In establishing the working model the authors looked to address key issues which had hampered previous collaborative models. They highlight two key issues: (a) the need for academics to publish work early (largely driven by funding bodies), which has the potential to compromise intellectual property; and (b) unbalanced contributions from the parties involved, with industrial groups often acting as coordinators/directors and the academics performing the practical work.

With these issues in mind, the groups established collaborative working models from the outset which were actively designed to address them. These included the following initiatives: (a) the initial planning and drafting of the research agreement actively involved scientists from each organisation, addressing conflicting demands at the outset, and the agreements contained clear definitions of milestones and of the ownership due each contributing organisation; (b) a single project team was created with members from all the partner organisations, communicating regularly and using a common language, which overcame differences in terminology between partner organisations and led to efficient working; (c) an electronic data-sharing tool was designed and used to allow data to be shared in real time across the entire project. The tool contained all of the information necessary to allow team members to rapidly assimilate data and make decisions based on it.

The authors end by suggesting some additional initiatives which could be incorporated into future projects to improve efficiency further still. These include (a) further improving face-to-face communication by considering staff exchanges or secondments; and (b) since IT packages were a crucial factor in the success of the projects, ensuring that these are set up, and all staff trained in their use, at the very start of the project – something which, in the authors' experience to date, could have been handled more efficiently and which led to some early problems.
This is an excellent article and any group considering embarking on a collaborative drug discovery exercise is encouraged to read it.

Toward in silico structure-based ADMET prediction in Drug Discovery


G. Moroy, V.Y. Martiny, P. Vayer, B.O. Villoutreix and M.A. Miteva, Drug Discov. Today 2012, 17 (1-2), 44-55

For a drug to be successful in treating a condition, it must not only modulate the condition’s underlying mechanism, but also have a suitable absorption, distribution, metabolism, excretion and toxicology (ADMET) profile. Accurate prediction of this profile is a key way of reducing costs, animal studies and other resources for molecules that are destined to fall at ADMET hurdles. Accurate prediction is also, unfortunately, somewhat elusive.

Traditionally, computational ADMET prediction relied on 2D and 3D QSAR/QSPR (Quantitative Structure-Activity / Structure-Property Relationships) or knowledge-based/expert systems as its preferred methods of model development. QSAR comes with the problem of requiring high-quality (and usually large) data sets of compounds that have been tested biologically. The authors of this paper note recent changes to this historical approach and detail the current movement away from using QSAR/QSPR on its own, towards systems that consider the 3D nature of the interacting proteins rather than solely a set of ligands.
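For readers unfamiliar with the traditional approach, here is a minimal, hypothetical QSAR-style sketch using scikit-learn; the descriptors and endpoint values are invented placeholders, and a real model would require the large, high-quality data sets mentioned above.

```python
# A minimal QSAR-style sketch: fit a model mapping 2D descriptors to a
# measured ADMET endpoint. All values are invented placeholders.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Rows = compounds; columns = hypothetical 2D descriptors
# (e.g. molecular weight, logP, TPSA, H-bond donor count).
X = np.array([
    [350.0, 2.1, 75.0, 1],
    [420.0, 3.8, 60.0, 0],
    [280.0, 1.2, 90.0, 2],
    [510.0, 4.5, 40.0, 1],
    [310.0, 2.9, 85.0, 3],
    [390.0, 3.1, 55.0, 0],
])
y = np.array([0.12, 0.45, 0.05, 0.80, 0.10, 0.38])  # e.g. fraction metabolized

model = RandomForestRegressor(n_estimators=100, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=3).mean())
```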

There has been a change of pace in the isolation and production of high-quality crystal structures, and the authors note that for a large number of ADMET-involved proteins there are crystal structures in the Protein Data Bank (PDB), which have allowed them to investigate the concept of using flexible docking to find potential pitfalls in compound development. A list of common human CYP450s and human sulfotransferases (SULTs), along with their PDB codes, is given, with insights into the state of the art over a range of areas associated with ADMET (for example, plasma-binding proteins, hERG, ABC transporters and so on). They then go on to demonstrate that for one particular family of proteins, the SULTs, it was possible to develop a flexible docking model to support ADMET prediction.
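As a hedged illustration of what a flexible-docking run might look like in practice, here is a sketch scripting AutoDock Vina (one widely used engine; the paper does not prescribe a specific tool). All file names, box coordinates and the choice of flexible residues are hypothetical.

```python
# Hedged sketch of a flexible-side-chain docking run with AutoDock Vina.
# File names, box coordinates and flexible-residue choices are placeholders.

import subprocess

subprocess.run([
    "vina",
    "--receptor", "sult_rigid.pdbqt",  # rigid part of the receptor
    "--flex", "sult_flex.pdbqt",       # selected flexible side chains
    "--ligand", "compound.pdbqt",      # candidate compound
    "--center_x", "10.0", "--center_y", "12.5", "--center_z", "-3.2",
    "--size_x", "20", "--size_y", "20", "--size_z", "20",
    "--out", "docked_poses.pdbqt",
], check=True)  # raises CalledProcessError if the run fails
```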

The authors conclude that whilst flexible docking is amenable to some ADMET-involved protein families, it is not so straightforward: these proteins are inherently promiscuous (some with multiple binding sites), the picture with regard to water molecules may be incomplete, and they require flexible modelling. They propose that a series of other emerging techniques, such as MD/MM and proteochemometrics, may be useful in addition to ligand-based methods.

Commentary:

There is no doubt that in silico ADMET prediction is a big challenge in computational chemistry – the aim here is not to simply replace in vivo and in vitro tests, but to improve the knowledge-base and reduce those materials that are very likely to be liabilities as early as possible – preferably before synthesis: Fail early, fail fast, fail cheap.

It may be a mistake to attempt to tackle the problem using only one tool in the toolbox (e.g. topological analysis systems): a mixture of QSAR, toxicophore/pharmacophore, flexible docking and expert systems together could open up an avenue to much more accurate desktop prediction. This paper goes a distance in explaining not only where the successes lie but, more importantly, which issues still challenge computational chemists working toward in silico structure-based ADMET prediction in drug discovery.

Fluorescent membrane potential. A good surrogate for E-phys?


The key for every screen is to effectively predict efficacy in disease. In practice, higher-throughput screens tend to sacrifice disease relevance for throughput; these then filter compounds through to lower-throughput, more disease-relevant assays. With ion channels it is often too expensive, and/or the throughput too low, to run large screens by electrophysiology, so fluorescence assays are often used as a higher-throughput, cheaper surrogate. The assumption is that there is a good correlation between the two, but how good this correlation really is generally only becomes known after many compounds have been screened, and the doubt is always there as to how many active compounds are discarded as false negatives.
In a recent paper, Ghisdal et al. (http://jbx.sagepub.com/content/19/3/462.long) took a set of ~41 compounds identified as GABAA binders by 3H-flunitrazepam binding to rat brain membranes and compared their functional effects in a validated fluorescence membrane potential (FMP) assay versus automated patch clamp. The correlation between 3H-flunitrazepam binding and the FMP assay was good. However, when the relative efficacy of these compounds in the FMP assay was compared with automated patch clamp, there was no correlation, and some compounds that produced significant potentiation in patch clamp were inactive in the FMP assay. This suggests that FMP may not be a good surrogate for measuring compound efficacy, and raises the question of whether FMP should be used in high-throughput screening to identify novel chemistry.
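The kind of comparison the authors made can be sketched in a few lines; the efficacy values below are invented for illustration, not taken from the paper.

```python
# Minimal sketch of correlating compound efficacy between an FMP assay
# and automated patch clamp. The potentiation values are invented.

from scipy.stats import pearsonr, spearmanr

fmp_potentiation = [105, 30, 88, 12, 60, 45, 95, 8]      # % effect, FMP
patch_potentiation = [40, 110, 25, 90, 55, 140, 30, 75]  # % effect, patch

r, p = pearsonr(fmp_potentiation, patch_potentiation)
rho, p_s = spearmanr(fmp_potentiation, patch_potentiation)
print(f"Pearson r = {r:.2f} (p = {p:.2f}); Spearman rho = {rho:.2f}")
```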
A selection of fluorescent dyes is now available for measuring Na+ (Sodium Green, SBFI) and K+ (PBFI, FluxOR), along with yellow fluorescent protein as a halide sensor. It would be interesting to hear others' experience with these dyes and whether they are more suitable surrogates than a fluorescent membrane potential dye.
Key words
Fluorescent membrane potential
GABA

The Reproducible Irreproducibility of the Scientific Process: NIH Plans for Increased Rigour


In a previous blog the issue of data replication was discussed, as exemplified by bexarotene as a potential treatment for Alzheimer's disease (see here). This issue is gaining increased attention, as evidenced by recent articles in The Economist entitled "Trouble at the Lab" and "How Science Goes Wrong" (here), and now Francis Collins and Lawrence Tabak, who are respectively Director and Principal Deputy Director of the National Institutes of Health, offer their perspective on the issue (see here). In their article they emphasize that irreproducibility is only rarely due to deliberate fabrication or fiddling of the data. Rather, they criticise poor experimental design and publications that limit the space given to technical details (and therefore make the reproduction of experiments more difficult), coupled with the pressure to publish, particularly in high-profile journals, which does not encourage scientists to try to replicate or disprove their own data. In addition, methodological descriptions are sometimes kept deliberately vague or omit a key step or "secret sauce" to maintain a competitive advantage within the originating lab.
As regards drug discovery, it is remarkable that critical, decision-making preclinical efficacy studies often pay scant regard to such basic principles of experimental design as blinding, randomization and power calculations; elements that are essential components of the more regulated – and therefore more rigorous – arena of clinical trial design. This issue was highlighted by researchers at Bayer (see here), who reported being able to reproduce only 20-25% of the published data associated with 67 projects, mainly in the oncology area, whereas researchers at Amgen were able to reproduce the data in only 6 out of 55 (11%) key publications relating to hematology and cancer targets (see here). Moreover, although much is made of the so-called "pharma bias" in publication, relating to the perceived potential conflicts of interest of academic scientists or the preferential publication of positive clinical trials, there is also a considerable academic bias towards publishing positive but not negative findings, which "creates a huge conflict of interest for academics, and a strong bias to write papers that support the hypotheses included in grant applications and prior publications" (see here). Furthermore, as C. Glenn Begley (who led the Amgen study and is now the Chief Scientific Officer of TetraLogic) states in an interview with Reuters: "The real problem is that scientists are reluctant to speak up about studies that won't replicate because there is so much to lose. If I criticize you, and you review my next grant application, you might [take revenge]. That's why people are afraid to say the reason they couldn't replicate a study is that it was just plain wrong" (see here).
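As an aside, the power calculations the authors find so often missing are not onerous; here is a minimal sketch using statsmodels, with an illustrative effect size.

```python
# Minimal sketch of a preclinical power calculation: how many animals
# per group are needed to detect a given effect? Effect size is illustrative.

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,  # expected standardized effect size (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.8,        # desired probability of detecting a true effect
)
print(f"~{n_per_group:.0f} animals per group needed")  # ~26
```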
Figure: Consequences of data irreproducibility (from the Global Biological Sciences Institute report on The Case for Standards in Life Science Research – Seizing Opportunities at a Time of Critical Need).
Clearly, the lack of reproducibility not only has a negative impact on scientific, institutional and journal reputations but also causes a huge waste of time, effort and resources and damages the public perception of scientific research in the life sciences (see Figure). So, what's to be done about it? Well, the NIH recognize that part of the problem may well be due to a lack of training of scientists, resulting in poor experimental design, and accordingly they are instigating formal training for intramural scientists that could serve as a template for wider dissemination. In addition, the NIH will encourage more rigorous examination of grant applications and may require additional preclinical studies to support clinical trials that are based on only limited preclinical efficacy data. Efforts are also being made to encourage journals to devote more space to methodological details as well as to publish negative findings. However, perhaps the toughest nut to crack is the academic incentive system, which prizes publications in high-profile journals; a system that may well encourage rapid submission before systematic replication is carried out. Moreover, the number of publications in high-impact-factor journals is a convenient metric for university promotion committees but not necessarily the best means of judging a scientist's contribution. Still, the fact that there is an open discussion of these multiple, interrelated factors, and that steps are being taken to repair the self-correcting mechanisms that underpin scientific progress, are in themselves major steps forward.

2013 FDA drug approvals


This analysis by Asher Mullard, published in Nature Reviews Drug Discovery (2014, 13, 85-89), reports the new drugs approved by the FDA in 2013. From a total of thirty-six applications, twenty-five new small molecules and two new biologics were approved. The overall trend of previous years was maintained, with the exception of 2012 (Figure 1).
[Figure 1: FDA new drug approvals by year]
A notable achievement was the high proportion (33%) of new molecular entities approved for the treatment of orphan diseases. In addition, 33% of the new approvals had a unique mode of action and were identified as first-in-class agents. The anticancer therapeutic area obtained the majority of approvals (eight, six of which are for orphan indications), followed by metabolic and endocrinology, antiviral and medical imaging (three approvals each). Cardiology, neurology, respiratory and women's health had two agents approved each, and psychiatry and dermatology one each.
Ten drugs received priority review status, which is given to drugs that potentially offer significant advances in safety or effectiveness for the treatment, diagnosis or prevention of serious conditions. These included ado-trastuzumab emtansine, radium Ra 223 dichloride, afatinib, obinutuzumab and ibrutinib (anticancer); dolutegravir, simeprevir and sofosbuvir (antiviral); gadoterate meglumine (diagnostic); and riociguat (cardiology). Among these, the humanized CD20-specific monoclonal antibody obinutuzumab (chronic lymphocytic leukaemia), the Bruton's tyrosine kinase inhibitor ibrutinib (mantle cell lymphoma) and the HCV nucleotide analogue NS5B polymerase inhibitor sofosbuvir (chronic HCV infection, as part of an antiviral treatment regimen) also received breakthrough therapy designation. This status is given to drugs intended to treat a serious or life-threatening disease or condition that show a substantial improvement over existing therapies on early clinical data.

Thirteen new potential blockbusters have been forecast by 2018-2019 (Thomson Reuters Cortellis) (Figure 2), including:
– Sofosbuvir (Gilead), a first-in-class HCV nucleotide analogue polymerase inhibitor, with an annual potential of US$6.8 billion;
– Dimethyl fumarate (Biogen Idec), approved for the treatment of multiple sclerosis, with an unknown mechanism of action and sales forecasts of US$6 billion;
– Ibrutinib (Pharmacyclics), a first-in-class Bruton's tyrosine kinase inhibitor approved for the treatment of mantle cell lymphoma, with an annual potential of US$4.5 billion;
– Ado-trastuzumab emtansine (Genentech), an antibody-drug conjugate approved for the treatment of HER2-positive metastatic breast cancer, with a potential income of US$4.1 billion;
– Umeclidinium/vilanterol and fluticasone/vilanterol (GSK), both approved for the treatment of chronic obstructive pulmonary disease, with sales forecasts of US$3.1 billion and US$2.8 billion respectively.
Figure 2: new potential blockbusters.

In terms of companies, GSK had the highest number of approvals (five, including the HIV integrase inhibitor dolutegravir, developed with ViiV), spanning three different therapeutic areas and including three potential blockbusters.

Despite a 31% drop in approvals compared with the previous year, 2013 saw the approval of innovative new products that will have a significant impact on medical care. Analysts forecast thirteen new potential blockbusters (almost half of all approvals), six of which may have multibillion-dollar potential.

Rushing to abandon tQT


Drug discovery always involves the continuous reassessment of the benefit-risk balance: from the first target identification all the way to the choice of patient population most suited to a new drug.
This paper (BioCentury (2013), Vol. 21, No. 30, pages A1-A4) considers the call by the FDA and key pharma stakeholders for fine-tuning of the benefit-risk evaluation during the cardiovascular safety de-risking of molecules, in particular with regard to QT interval prolongation, proarrhythmia and Torsades de Pointes (TdP) risk.
Following the withdrawal of eight drugs from the US market in the 1990s, a link was made between inhibition of the hERG potassium channel, prolongation of the QT interval and an elevated risk of TdP. Today, establishing pharmacological selectivity over hERG channel inhibition is standard in drug discovery programs. Further preclinical evaluation in animals and, ultimately, costly thorough QT (tQT) trials are employed to further clarify the risk/safety margins of development compounds.
It is now considered that information on QT prolongation alone is an insufficient predictor of TdP. Currently, a 5-15 millisecond prolongation of the QT interval is considered a clinical risk, with >15 millisecond prolongation considered a serious clinical concern. The journal suggests that the focus on QT prolongation under the current FDA guidelines may have 'killed' development compounds that do not in fact cause fatal arrhythmia.
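For illustration, those thresholds can be encoded in a few lines; the handling of the exact boundaries is an assumption on my part.

```python
# Minimal sketch encoding the thresholds quoted above: 5-15 ms QT
# prolongation is a clinical risk, >15 ms a serious clinical concern.
# Exact boundary handling is an assumption.

def qt_risk(delta_qt_ms):
    """Classify a mean QT prolongation (ms) using the quoted thresholds."""
    if delta_qt_ms > 15:
        return "serious clinical concern"
    if delta_qt_ms >= 5:
        return "clinical risk"
    return "below the threshold of concern"

for delta in (3, 8, 20):
    print(f"{delta} ms -> {qt_risk(delta)}")
```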
The FDA, in consultation with pharma companies, clinicians and academics, wants to abandon tQT studies by 2015, replacing them with a preclinical assay suite better able to detect proarrhythmic side effects than existing assays. The proposed suite would comprise:
• Functional voltage-clamp studies on several cardiac ion channels, including hERG, NaV1.5, CaV1.2, KvLQT1 and Kir2.1 (a simple block-fraction sketch follows this list).
• Human stem cell-derived ventricular cardiomyocytes, to examine the repolarization effects of a compound.
• Computational modelling of cardiomyocytes, based on input from the studies above, to predict early afterdepolarisations and action potential duration, both now considered predictors of the risk of a compound triggering arrhythmias.
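To make the "blend of pharmacology" question below concrete, here is a minimal sketch of fractional channel block at a hypothetical free exposure, using a simple one-site model; all IC50 values are invented placeholders.

```python
# Minimal sketch of a multichannel assessment: fractional block of each
# cardiac channel at a given free drug concentration, using a one-site
# model (block = C / (C + IC50)). All values are invented placeholders.

ic50_uM = {          # hypothetical voltage-clamp IC50 values
    "hERG": 1.0,
    "NaV1.5": 30.0,
    "CaV1.2": 10.0,
    "KvLQT1": 50.0,
    "Kir2.1": 100.0,
}

def fraction_blocked(conc_uM, ic50):
    """One-site (Hill slope 1) fractional block at a free concentration."""
    return conc_uM / (conc_uM + ic50)

free_cmax = 0.3  # uM, hypothetical free therapeutic exposure
for channel, ic50 in ic50_uM.items():
    print(f"{channel}: {fraction_blocked(free_cmax, ic50):.0%} blocked")
```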
The deadline is laudable; however, achieving industry-wide consensus by 2015 on the assay suite's format and on the interpretation of its results may prove challenging. Replacing tQT would no doubt save money and resource in late-stage preclinical and early clinical development. It will be important that the latest scientific understanding of repolarisation gives rise to clear guidelines for the discovery scientist. If research groups opt to prosecute molecules with a known hERG and QT prolongation liability, what 'blend' of pharmacology at the other ion channels is acceptable? Voltage-clamp studies may still prove costly and resource-consuming in an early drug discovery setting. For those hoping that the new assay suite and guidelines will significantly expedite and broaden drug discovery, sailing close to the guidelines and navigating through them may take longer than anticipated.