Computationally assessing druggability – are hot spots the answer?


Assessing the druggability of protein targets computationally is becoming an increasingly useful technique, especially as the hunt for novel drug targets becomes harder and harder. Historically, two approaches have been taken. One analyses empirical evidence of known drug/ligand interactions and uses it to predict the druggability of other proteins based on structural and physical similarities. The other looks for ligand-binding potential based solely on the structural data of the protein.

A recent publication by Kozakov, Whitty and Vajda in J. Med. Chem. takes a slightly different angle on computational druggability assessment, inspired by pioneering work by Hajduk et al. analysing empirical fragment screening data.

Firstly, a method was developed for determining binding hot spots on a protein surface by docking a selection of 16 different ‘fragment’ molecules. The fragments were docked as rigid conformers, and hot spots were defined as areas where multiple fragments docked in the same place, forming ‘clusters’. The FTMap algorithm that performs this analysis has been made available to use online.

They then attempted to determine whether it was possible to reliably predict the druggability of a protein based on the size and position of the hot spot clusters determined by FTMap. They defined a pocket as requiring a primary cluster, plus at least one secondary cluster nearby that could also be involved in the ligand binding interaction. From this, they devised three metrics by which to measure their pockets: the strength of the primary interaction (S), the distance between primary and secondary clusters (CD), and the overall size of the resulting ‘pocket’ (MD) (see the paper for more in-depth definitions).

By comparing the FTMap-generated cluster locations of known drug targets, they determined threshold values of S, CD and MD that predict a protein to be druggable. By requiring that S > 16, CD < 8 Å, and MD > 10 Å, they correctly classified a surprisingly high proportion of the proteins tested that were known to be druggable, with only a handful of proteins with known inhibitors failing to satisfy the criteria at the drug binding site.
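
For those who like to see the rules written out explicitly, below is a minimal sketch (in Python, and not taken from the authors' code) of how the three thresholds could be applied once S, CD and MD values have been extracted from an FTMap-style analysis; the class and function names are purely illustrative.

```python
# A minimal sketch (not the authors' code) of how the reported thresholds
# could be applied once S, CD and MD have been extracted from an FTMap run.
# The class and function names are illustrative only.

from dataclasses import dataclass


@dataclass
class HotSpotSite:
    name: str
    strength: float          # S: strength of the primary hot spot cluster
    cluster_distance: float  # CD: primary-to-secondary cluster distance (Å)
    max_dimension: float     # MD: overall size of the resulting 'pocket' (Å)


def is_druggable(site: HotSpotSite,
                 s_min: float = 16.0,
                 cd_max: float = 8.0,
                 md_min: float = 10.0) -> bool:
    """Apply the three criteria quoted above: S > 16, CD < 8 Å, MD > 10 Å."""
    return (site.strength > s_min
            and site.cluster_distance < cd_max
            and site.max_dimension > md_min)


if __name__ == "__main__":
    example = HotSpotSite("site_1", strength=21, cluster_distance=6.5, max_dimension=12.3)
    print(example.name, "predicted druggable:", is_druggable(example))
```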

This technique seemed to particularly shine when it came to testing the druggability of more non-conventional drug targets. The authors defined a set of slightly relaxed rules which allowed the prediction of protein druggability using non-canonical drug types, such as macrocycles or polypeptides. This allowed the technique to be used to predict the druggability of known macrocycle binding proteins and protein-protein interactions, with a fair degree of accuracy. Other computational techniques may have less success in this area due to the large size and shallow nature of the pockets in these classes of protein target.

Overall, it seems that by using computationally determined hot spots as a measure, the authors have developed an effective and novel approach for assessing the druggability of proteins. The breadth of applicability of this technique compared with others should hopefully be of particular help with the increasing trend in drug discovery of looking towards more unconventional drug targets such as protein-protein interactions.

Blog written by Tom Moore

A comparison of in vitro cellular viability assays


In the hunt for new drugs it is particularly important to understand whether your compounds could have a negative impact on cellular health, even if they do interact with your target of interest. This is assessed through a variety of different measures and steps within the drug discovery process. One early and quick method is the use of in vitro cellular viability assays.

A group from the University of Otago compared some of the different in vitro cellular viability assays available in the following recent publication (Single, A. et al J Biomol Screen 2015, in press).

The authors wanted to investigate any differences between the more conventional viability endpoint measurements, such as resazurin-based assays (a fluorescence-based measurement) and CellTiter-Glo® (a luminescence-based assay), and nuclear counting techniques (a fluorescence-based imaging method). Further comparisons were made with the more recent developments from xCELLigence (an electrode impedance measurement) and IncuCyte (live cell imaging) assays. Both of these latter methods can measure at kinetic intervals, which may offer a different insight by measuring cell growth rates in response to treatment with different compounds.

All the assay methods were tested using the MCF10A cell line and, additionally, the CDH1-negative isogenic line for which the compound vorinostat would be synthetically lethal. Taxol was used as a non-specific toxic compound control.


First, the endpoint assays were compared: the nuclear counting method showed greater detection of reduced viability than the resazurin and CellTiter-Glo® methods. Another point of interest was that CellTiter-Glo® showed the lowest sensitivity of the three measures. This was quite surprising given my own experience with this assay format and its widespread use in the published literature. It would be interesting to see whether the same trend occurs in other cell lines or whether it was specific to this cell type. A wider panel of compounds would also be a useful adjunct to this work.
The kinetic platforms (the IncuCyte and xCELLigence) were used to determine proliferation rate during the logarithmic growth phase of the cell lines and also at full confluence. During the log phase, both systems reported reduced growth rates that tracked the different compound doses. The xCELLigence system reported slower rates, which the authors suggested could be due to the lower adhesion of this cell type, thereby reducing the number of cells detected. Once the cells had reached full confluence, the difference between compound-treated and untreated cells became very small, so it appears that these kinetic platforms require measurements during the log phase to generate the most reliable data.
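
As a rough illustration of the kind of log-phase analysis involved (and emphatically not the authors' actual pipeline), the sketch below estimates a proliferation rate from kinetic readings such as IncuCyte confluence or xCELLigence cell index by fitting log(signal) against time over an assumed log-phase window.

```python
# A rough sketch (not the authors' analysis pipeline) of estimating a
# proliferation rate from kinetic readings such as IncuCyte confluence or
# xCELLigence cell index, by fitting log(signal) against time over an
# assumed log-phase window. Window limits and units are illustrative.

import numpy as np


def growth_rate(times_h, signal, log_phase=(0.0, 48.0)):
    """Fit log(signal) = k*t + c over the log-phase window; return the
    rate constant k (per hour) and the corresponding doubling time (hours)."""
    t = np.asarray(times_h, dtype=float)
    s = np.asarray(signal, dtype=float)
    mask = (t >= log_phase[0]) & (t <= log_phase[1])
    k, _ = np.polyfit(t[mask], np.log(s[mask]), 1)
    return k, np.log(2) / k


if __name__ == "__main__":
    t = np.arange(0, 72, 4)              # readings every 4 h over 3 days
    confluence = 5.0 * np.exp(0.03 * t)  # simulated untreated % confluence
    k, t_double = growth_rate(t, confluence)
    print(f"rate = {k:.3f} per h, doubling time = {t_double:.1f} h")
```
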
As the group wanted to develop a method to identify synthetically lethal compounds against this cell line, they further investigated a multiplexed assay format to overcome the limitations of the different methods they had encountered. They combined both the resazurin and nuclei counting methods with the IncuCyte measurement and determined viability ratios from the results.
Using the multiplexed format, the authors could detect the synthetically lethal effect of vorinostat at both the log phase and full confluence of the cells. It was also noted that certain Taxol concentrations reduced viability during the growth phase, but by the full-confluence endpoint the cells had recovered to give readings similar to the DMSO controls. This again highlights the advantage of using a kinetic measurement to discern these very subtle differences.
In summary, the authors suggested the multiplexed assay format as the most sensitive way to identify synthetically lethal compounds, but if that is not possible, they suggest the use of nuclear counting as it appeared to generate the most sensitive endpoint result. Overall, this was an interesting publication, which has challenged some of my preconceptions about the optimal assay to use for this type of experiment. It is also interesting to see these more recently developed methods in real-world action.

Blog written by Gareth Williams

The pain of drug discovery


A recent paper in J Med Chem again highlights the perils and pitfalls of early stage drug discovery activities when we are searching for that elusive early chemical matter which has the true potential to be polished up into drug candidate gems. The group describes the output of an HTS campaign looking for inhibitors of the DNA cytosine deaminases APOBEC3A (A3A) and APOBEC3G (A3G), which are interesting targets under investigation in a range of labs following data linking them to mutation and evolution in cancer and HIV-1.


The group at the University of Minnesota screened just over 20k compounds from a range of commercial library sources in a fluorescence-based assay and identified a hit that they found particularly encouraging, with micromolar target potency.


This hit contained a quinolone-furan substructure that is present in a range of members of the various libraries, which, coupled with emergent SAR in the series and all the usual checks of confirming hits from freshly repurified solid stocks, gave them confidence in the starting point.

However, having launched into a hit-to-lead chemistry programme, life started to turn sour, with the finger of suspicion quickly pointing at the inherent stability of the hit structure itself in solution. They essentially discovered that fresh solid preparations were inactive in their assay system but that activity was restored over time as the solution stocks aged. The story is all too familiar across the early hit landscape, and the rest of the paper is taken up with a description of the various instability pathways and decomposition routes (some highlighted in the paper's graphical abstract), principally via aerial oxidation of the furan followed by Baeyer-Villiger rearrangement and on to a variety of nasty-looking species.

The main point of the publication is to make furans, and specifically this quinolone-furan substructure, sit on the naughty step along with the other PAINS and associated promiscuous/unstable/frequent-hitting structures. Whilst a number of us, I think, would already have had suspicions about the stability of furans (despite their being components of marketed drugs such as lapatinib), the open disclosure of unsuccessful hit-finding campaigns and ‘lessons learned’ in these sorts of papers is to be applauded. Whilst the number of papers highlighting these issues is increasing at a very rapid rate, there are still many inappropriate chemical tools in circulation and widespread use, and I fear that we may currently be preaching to the (mainly) converted. Hopefully, initiatives towards wider public sharing of these pitfalls will provide some further integration of this knowledge into the wider community.
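
For anyone wanting to flag such motifs in their own hit lists, the sketch below shows a simple RDKit substructure filter; note that the SMARTS used is a generic aromatic furan rather than the specific quinolone-furan motif from the paper, which would need a more precise pattern of its own.

```python
# An illustrative sketch of flagging furan-containing hits with RDKit, in the
# spirit of the substructure alerts discussed above. The SMARTS is a generic
# aromatic furan; the specific quinolone-furan motif from the paper would
# need a more precise pattern.

from rdkit import Chem

FURAN = Chem.MolFromSmarts("c1ccoc1")


def flag_furans(smiles_list):
    """Return the SMILES whose molecules contain a furan ring."""
    flagged = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is not None and mol.HasSubstructMatch(FURAN):
            flagged.append(smi)
    return flagged


if __name__ == "__main__":
    hits = [
        "c1ccc2ncccc2c1",   # quinoline: no furan, not flagged
        "NC(=O)c1ccco1",    # furan-2-carboxamide: flagged
        "CCO",              # ethanol: not flagged
    ]
    print(flag_furans(hits))
```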

Blog written by Simon Ward

Fluorine in Medicinal Chemistry.


As a synthetic organic chemist I didn’t need to introduce a single fluorine atom into any of the molecules I synthesised over the years. That all changed when I moved into medicinal chemistry. In the project in which I am currently involved we have introduced up to four fluorine atoms into molecules with molecular weights between 350 and 380.

Fluorine is a hot topic in drug discovery, and this is reflected not only in the recently published book “Fluorinated Pharmaceuticals: Advances in Medicinal Chemistry”, to which a member of staff of the SDDC contributed, but also in a recent article reviewing the effects that the strategic introduction of fluorine can have on potential drug candidates, influencing conformation, pKa, intrinsic potency, membrane permeability, metabolic pathways and pharmacokinetic properties.

Thanks to advances in fluorine chemistry we are nowadays presented with a variety of reagents for the selective introduction of fluorine or fluoroalkyl groups into specific locations. A recent new synthetic methodology gives access to pyrazoles bearing diverse fluoroalkyl groups, targets that were, until now, difficult to prepare.

The difficulty of their preparation, sketched below, was mainly due to the trouble of handling and preparing the polyfluoroalkyl β-diketone precursors.


The new procedure developed by Leroux and co-workers uses the unprecedented reaction between activated fluoroalkyl amino reagents, easily prepared from fluoroolefins, and azines.


Treatment of a fluoroalkyl amino reagent with a Lewis acid generates the corresponding iminium salt, which reacts with fluorinated azines to form vinamidinium intermediates. The latter can then easily cyclize to 3,5-bis(fluoroalkyl)-1H-pyrazoles upon treatment with concentrated HCl.

In this article the authors prepare 14 different pyrazoles in moderate to excellent yields (see the table in the paper), in one pot, without isolation of any of the intermediates, giving quick access to pyrazoles bearing two fluoroalkyl functionalities.


Blog written by Carol Villalonga

Solanezumab, the story continues… but is it good news, bad news or just news?


On the 23rd of July, the British newspapers’ interpretations of Eli Lilly’s most recent data on Solanezumab, presented at the Alzheimer’s Association International Conference in Washington DC, ranged from a circumspect “Dementia Drug Shows Promise” (the i), through “Jab Will Halt Alzheimer’s” (Daily Express), to “Alzheimer’s Miracle Drug Has Saved My Life” (Daily Mirror). The BBC was at the less excitable end of the spectrum, claiming that there were “Early signs that drug ‘may delay Alzheimer’s decline’”. Within the scientific literature, opinion was likewise divided as to whether Solanezumab does not help in Alzheimer’s disease (McCarthy, M. BMJ 2015), has “questionable potential” (Reardon, S. Nat Rev Drug Discov, 2015), or actually does demonstrate disease-modifying effects (Karran, E. BMJ, 2015). So, what’s going on here? Well, the recent debate relates to data from EXPEDITION-EXT, a two-year extension of the two earlier 18-month Phase 3 studies of Solanezumab (EXPEDITION 1 and EXPEDITION 2) that we discussed previously (Amyloid in Alzheimer’s Disease – The End of the Beginning or the Beginning of the End?). Questions can be raised about the clinical trial design, in which patients and caregivers knew that they were on the drug in the extension trial, as well as about the effect size (Reardon, S. Nat Rev Drug Discov, 2015). Nevertheless, the key observation, which is consistent with (but not proof of) a disease-modifying rather than symptomatic effect, is that the change in the ADAS-Cog score for subjects who were previously on placebo but then given Solanezumab in the extension trial ran parallel to, rather than converged with, that of the group that received drug during the original 18-month Phase 3 studies (see figure below, taken from Reardon, S. Nat Rev Drug Discov, 2015).

[Figure: change in ADAS-Cog score for the delayed-start (placebo then Solanezumab) group versus the early-start group; from Reardon, S. Nat Rev Drug Discov, 2015]

Better placed than most to comment on the matter is Dr Eric Karran, currently the director of research at Alzheimer’s Research UK but formerly head of neuroscience research at Eli Lilly when Solanezumab first progressed into development. Dr Karran, who recently authored an excellent article reviewing the preclinical and clinical data on a variety of Phase 3 Alzheimer’s disease drugs (Karran, E. Annals of Neurology, 2014), told the BBC: “If this gets replicated, then I think this is a real breakthrough in Alzheimer’s research. Then, for the first time, the medical community can say we can slow Alzheimer’s, which is an incredible step forward. These data need replicating, this is not proof, but what you can say is it is entirely consistent with a disease-modifying effect”. The Solanezumab replication study, EXPEDITION 3, involves 2,100 patients with mild Alzheimer’s disease, with results due in October 2016, and as Dr Karran comments, “if it doesn’t work, we will all be very disheartened”, although the next drug stepping up to the crease/plate (depending on whether you prefer your sporting analogies with a British or American flavour) will be Merck’s β-secretase inhibitor, MK-8931, which he described as “a superlative molecule” (Karran, E. Nat Rev Drug Discov, 2015). And so, following the flurry of excitement and hyperbole, we settle back down to awaiting the data from rigorously conducted, placebo-controlled Phase 3 studies with predefined end-points, and can best describe the EXPEDITION-EXT data as very interesting without making any extrapolation to the patient benefit and ultimate regulatory approval (or otherwise) of Solanezumab. However, from the patient point of view, it is perhaps best to keep our collective fingers crossed.

Blog written by John Atack

How to reduce attrition in drug discovery!


The discovery and development of new drugs is a long process that faces many different challenges, with the final goal of identifying a molecule that meets multiple criteria. In the last few decades, guidelines focussing on structural and physicochemical properties have improved ADME and toxicology profiles, leading to a reduction of attrition in the drug development process. However, the overall failure rate remains quite high, with only 4% of candidates reaching the market and several suboptimal compounds entering costly clinical trials, some of which should not have been made in the first place!

The following paper published in Nature Reviews Drug Discovery (Nat Rev Drug Discov 2015, 14, 475-486) reports a new analysis of the drug development attrition faced by four major pharmaceutical companies (AstraZeneca, Eli Lilly, GlaxoSmithKline and Pfizer). Would this analysis finally reveal to medicinal chemists how to reduce attrition in drug discovery programmes?

This study evaluated 812 small-molecule drug candidates developed in the decade 2000-2010, analysing physicochemical properties, target class, broad disease area and reasons for failure. Unfortunately, certain specific characteristics, such as chemical structures and pharmacological targets, were not provided due to confidentiality issues.

Two graphs in the paper depict the highest phase reached by the compound set and its distribution by target class.

The paper also tabulates the primary causes of failure for the compounds analysed, reporting the difference between compounds developed in 2000-2005 and those developed in 2006-2010. Non-clinical toxicology was found to be the major cause of compound termination (40%), followed by “the less noble” rationalization of company portfolio (21%), then clinical safety (11%), efficacy (9%), etc. Interestingly, comparing the first five years of the decade with the last five, a decrease in failure was observed for the majority of categories, with the exception of rationalization of company portfolio, which showed a whopping 20% increase!

The major cause of failure at each phase was: non-clinical toxicology for the preclinical phase (59%), clinical safety for phase I (25%), and lack of efficacy and clinical safety for phase II (35% and 25% respectively). Failure due to rationalization of company portfolio was constant across the three phases (20%).

An analysis of the physicochemical properties of the compounds showed that 75% fall within the desirable range, and that the distributions overlap with the standard deviation ranges of the marketed drugs considered, with only 7.6% of compounds breaking two of Lipinski’s rules of five (molecular weight > 500 and CLogP > 5). However, looking at these properties in more detail, significant differences were noticed. The mean molecular weight was 10% higher for the drug candidates than for the approved drugs, and both calculated logP and logD were 0.5 log units higher for the drug candidates than for the approved drugs. In addition, differences were also observed in the fraction of sp3 atoms and the mean aromatic ring count. This indicates a tendency to design and select more lipophilic and planar clinical candidates; a lesson not yet learnt?
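
As an illustration (and not the descriptor set used in the paper itself), the sketch below computes these properties with RDKit and counts the two Ro5 violations highlighted in the analysis.

```python
# A hedged sketch (not the metric set used in the paper) of computing the
# properties discussed here with RDKit, together with the two Ro5 violations
# highlighted in the analysis (molecular weight > 500 and cLogP > 5).

from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski


def profile(smiles: str) -> dict:
    """Return MW, cLogP, fraction of sp3 carbons, aromatic ring count and
    the number of the two Ro5 violations (MW > 500, cLogP > 5)."""
    mol = Chem.MolFromSmiles(smiles)
    mw = Descriptors.MolWt(mol)
    clogp = Crippen.MolLogP(mol)
    return {
        "MW": round(mw, 1),
        "cLogP": round(clogp, 2),
        "Fsp3": round(Lipinski.FractionCSP3(mol), 2),
        "aromatic_rings": Lipinski.NumAromaticRings(mol),
        "Ro5_violations": int(mw > 500) + int(clogp > 5),
    }


if __name__ == "__main__":
    # celecoxib, used purely as a marketed-drug example
    print(profile("Cc1ccc(-c2cc(C(F)(F)F)nn2-c2ccc(S(N)(=O)=O)cc2)cc1"))
```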

Contrasting findings were observed when comparing physicochemical properties with non-clinical toxicology and clinical safety failures. While no correlation between non-clinical toxicology failure and physicochemical properties was identified, in the case of clinical safety a statistically significant association was observed with the lipophilicity of the compounds. Compounds failing due to clinical safety issues were more lipophilic than the ones progressing (CLogP = 3.8 ± 1.6 versus 3.1 ± 2.1, respectively). Although the values are close and within the desired range, the difference was statistically significant. A similar trend was observed for calculated logD, but it was not statistically significant.

Despite the progress made, human pharmacokinetics remains the third cause of failure (16%) in phase I. In this analysis no correlation between physicochemical properties and failure or progression to the next phase was observed. However, it was noticed that almost twice as many acidic compounds failed compared with basic/neutral compounds, which may be attributed to poor predictivity of the preclinical data.

This analysis points out, as also described in one of our recent blogs, the need to treat the calculated physicochemical properties of compounds with caution, as the results obtained may be misleading. In general, it reinforces the current guidelines used in drug design and, in particular, the need to move away from the extremes of drug-like property space and to apply these guidelines with more care. It supports the notion that more lipophilic and planar compounds have an increased chance of failing during development.

Blog written by Marco Derudas

Selective chemical probe + disease relevant biology = target validation ??


Target validation is a critical part of the drug discovery process – after all, the best drug against the wrong target will still lead to failure in the clinic. Although well used, the phrase ‘target validation’ doesn’t necessarily mean the same thing to everyone, and the acceptable level of validation or data package to support a target varies considerably from group to group and company to company. A clear disease linkage to human genetics is considered by many to be the strongest validation for a clinically unproven target. However, this is still complex to achieve, particularly for diseases likely to be polygenic in their aetiology. Even monogenic diseases where we know the identity of the gene and have clarity on how that affects protein and cellular function have been challenging – it’s taken over 20 years to get from the identification of the cystic fibrosis gene (CFTR) to the introduction of the first drug to ‘correct’ the genetic defect (Kalydeco in 2013).

The use of chemical probes to enable us to functionalise the genome and identify new drug targets is an approach that continues to gain credibility, particularly when combined with human cell-based test systems (the merits or otherwise of animal models are perhaps best left for a future blog). However, if the data generated using a chemical probe are to be correctly interpreted, then the probe needs to have a well-defined and selective pharmacological mechanism of action, i.e. the quality of the conclusion will be directly related to the quality of the probe. In a recent article in Nature Chemical Biology (11, 536-541), Cheryl Arrowsmith, along with other academic and industry colleagues, discusses the appropriate (…and inappropriate) use of chemical probes – they also provide some thoughtful advice and guidelines in addition to advocating the open sharing of best practices.

From Arrowsmith et al (2015) The promise and peril of chemical probes. Nature Chemical Biology 11, 536-541

A key part of the Sussex Drug Discovery Centre’s role is to develop novel chemical probes with the aim of facilitating target validation. If you have a target that you think a chemical probe would ‘enable’ then we’d really like to hear from you!

Blog written by Martin Gosling