Kinase inhibitors for CNS penetration

The need to develop new brain cancer treatments using targeted molecular therapy was recognised over a decade ago. Glioblastoma, the most common form of malignant primary brain tumour, is the leading cause of cancer death in children and also accounts for a high proportion of cancer deaths in adults. Currently, there are only two FDA-approved chemotherapeutics for the treatment of glioblastoma multiforme: the alkylating agents temozolomide and the carmustine-based Gliadel wafer. The success of kinase inhibitors in treating various malignancies makes it highly desirable to identify a kinase inhibitor capable of effectively crossing the blood-brain barrier (BBB). This need is reinforced by the risk that, if an inhibitor does not freely penetrate the central nervous system (CNS), metastasis of the tumour to the CNS can occur as a mechanism of emergent resistance.

While the importance of free BBB penetration for drugs targeting brain cancer is well understood, it is also essential to correctly assess the extent of that penetration (as opposed to just achieving a target free concentration in the brain), which requires comparing free brain concentrations to free plasma concentrations (Kp,uu). Values of <0.1 are considered low (limited CNS penetration), whereas values of >0.3 demonstrate a significant degree of free BBB penetration. The principal requirements for any small molecule to achieve adequate Kp,uu values are that it is not a substrate of the efflux transporters, such as P-gp or Bcrp, which are highly expressed at the BBB interface, and that it possesses the physicochemical properties required for BBB permeability. These properties for CNS drug design have been well reviewed and summarised by Zoran Rankovic, the most critical being the topological polar surface area (TPSA) of the molecule and the number of hydrogen bond donors (HBDs).
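In computational terms, the Kp,uu assessment above is just a ratio of free concentrations and two thresholds. A minimal sketch (function names are illustrative, not from any particular package):

```python
def kp_uu(free_brain_conc, free_plasma_conc):
    """Unbound brain-to-plasma partition coefficient (Kp,uu)."""
    return free_brain_conc / free_plasma_conc

def classify_cns_penetration(kpuu):
    # Thresholds as discussed in the text:
    # <0.1 -> limited CNS penetration; >0.3 -> significant free BBB penetration
    if kpuu < 0.1:
        return "limited"
    if kpuu > 0.3:
        return "significant"
    return "intermediate"

# A compound with a free brain concentration 5% of its free plasma
# concentration would be flagged as having limited CNS penetration.
classify_cns_penetration(kp_uu(0.05, 1.0))  # -> "limited"
```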

In a recent review on kinase inhibitors for the treatment of brain cancer, Tim Heffron analysed known small molecule kinase inhibitors with reported CNS penetration data and compared their physicochemical properties with those of approved CNS drugs. Typically, kinase inhibitors rely on multiple hydrogen bond interactions to achieve effective binding to the catalytic site of a kinase. As a result, the median TPSA value for approved kinase inhibitors is roughly double that of approved CNS drugs (Table 1). Interestingly, for the two categories of kinase inhibitors, one with limited brain penetration and the other with evident CNS penetration, there is remarkable similarity in the median values of cLogP, cLogD7.4, TPSA, HBD, and MW. The only notable difference was in the calculated pKa median values: CNS-penetrating kinase inhibitors have a lower median pKa than either kinase inhibitors that do not cross the BBB or CNS drugs.


Table 1. Comparison of median values of physicochemical properties for kinase inhibitors that are reported or predicted (based on efflux transport data) to have limited CNS penetration or reported to have significant free CNS penetration and/or no significant P-gp or Bcrp efflux.

It is worth noting that the quality of the data set for this comparison and, therefore, additional differentiation in the properties between the groups might be affected by a lack of data on free-brain-to-free-plasma drug concentration ratios (Kp,uu) for most molecules. In addition, there are limitations to the use of calculated physical properties that might conceal actual differences between molecules, and a potential for species differences to affect the interpretation of reported data for P-gp efflux.

In addition, common medicinal chemistry strategies to improve CNS penetration and reduce efflux transport, such as the use of intramolecular hydrogen bonds to mask HBDs and a reduction in the number of rotatable bonds, would not be captured by the calculated properties of those molecules.

The research into CNS penetrant kinase inhibitors is a fairly new direction, and to date only a few kinase inhibitors have been reported that are designed to be BBB permeable. This demonstrates that success in this area can be achieved, even if the physicochemical properties of kinase inhibitors and those of CNS drugs at first appear at odds. Of course, many additional variables impact evaluation of CNS penetrant kinase inhibitors clinically (e.g., PK, selectivity profile, safety, extent of free brain penetration, etc.). However, the significant unmet medical need for such inhibitors and the appreciation for what constitutes meaningful (free) brain penetration are driving the current R&D efforts in the discovery of kinase inhibitors for the treatment of brain cancer.

Blog written by Irina Chuckowree

Period pain drug with a potentially beneficial effect on Alzheimer’s disease

Alzheimer’s disease

Alzheimer’s disease is a neurodegenerative disorder and the most common cause of dementia. The degradation of brain matter leads to symptoms including memory loss and difficulties with thinking, problem-solving or language. As Alzheimer’s disease is a progressive disorder, the symptoms gradually worsen over time.1 In 2015 it was estimated that 46.8 million people were living with dementia.2 Because there is no effective treatment, drug development to reduce or stop the progression of Alzheimer’s disease is heavily invested in.

It has been shown that an overactive immune system can play an important role in Alzheimer’s disease. Within the immune system, the inflammasome in particular has been implicated. The inflammasome is a large complex of proteins responsible for producing proinflammatory cytokines in cells. These cytokines promote inflammation in the brain, which can worsen Alzheimer’s disease.


The Strategy

The plan: to inhibit the inflammasome complex to improve outcomes for people living with Alzheimer’s disease. Designing a new drug can take 20 years to get into the clinic and cost 1.6 billion dollars in the process. Dr David Brough, who led the study, employed the strategy of ‘repurposing’ to discover drugs for Alzheimer’s disease, i.e. using a drug already approved on the market to treat another disease. This involved testing drugs from the large class of non-steroidal anti-inflammatory drugs (NSAIDs) to determine if they could inhibit the inflammasome and therefore have potential therapeutic benefit in Alzheimer’s disease.

In Vitro

From screening many NSAIDs for their activity on immune cells, mefenamic acid was identified as having the ability to inhibit the inflammasome and prevent the release of inflammatory cytokines (figure a). This is a drug mainly prescribed for the treatment of period pain. The next step was to identify how the drug was working. It is understood that ion channels on the cell surface may be involved in inflammasome activation and the proposed mode of action of mefenamic acid was that it inhibits particular ion channels. During this study the target was identified as a chloride ion channel called the volume-regulated anion channel (VRAC).


In Vivo

In order to make the link between these exciting initial findings and the effectiveness of mefenamic acid at treating Alzheimer’s disease, animal models were utilised. First, a rat model of amyloid-beta-induced memory deficits was used. Amyloid beta plaques are a major hallmark of Alzheimer’s disease, and injection of the protein into rat brains results in permanent memory deficits. Using the novel object recognition test, it was found that mefenamic acid protected the rats from these memory deficits.

The next model studied was a transgenic mouse model expressing genes found in humans with the genetic form of Alzheimer’s disease. In this model the symptoms are progressive, starting from when the mice are 14 months old (simulating the human disorder). Using a water-maze-based task, it was determined that mefenamic acid was able to reverse the memory deficits observed in placebo-treated mice. Additionally, there was marked inflammation in the brains of placebo-treated mice but none in mefenamic acid-treated mice.


In conclusion, this study shows that Alzheimer’s disease is a debilitating progressive disorder and a huge global concern. There is strong evidence that overactivity of the immune system, particularly the inflammasome protein complex, is involved. Testing NSAIDs led to the identification of mefenamic acid, which inhibits the inflammasome by blocking the VRAC ion channel and was effective at eliminating memory deficits associated with Alzheimer’s disease in mouse and rat models. The group is planning to progress to clinical trials, and since the drug is already approved and its safety profile is established, clinical development should be considerably streamlined.


Daniels, M.J.D., Rivers-Auty, J., Schilling, T., Spencer, N.G., Watremez, W., Fasolino, V., Booth, S.J., White, C.S., Baldwin, A.G., Freeman, S., Wong, R., Latta, C., Yu, S., Jackson, J., Fischer, N., Koziel, V., Pillot, T., Bagnall, J., Allan, S.M., Paszek, P., Galea, J., Harte, M.K., Eder, C., Lawrence, C.B. and Brough, D. 2016. Fenamate NSAIDs inhibit the NLRP3 inflammasome and protect against Alzheimer’s disease in rodent models. Nature Communications. 7, p12504.

1 Alzheimer’s Society Website

2 Alzheimer’s Disease International Website


Blog written by Rachael Besser

C4-Selective C-H Arylation of Thiazoles Enabled by Boronic Acids

Direct functionalization of carbon-hydrogen bonds (C-H activation) has recently emerged as a powerful method for C-C bond formation. Over the past 5 years, the Itami group have developed unique catalytic systems that can preferentially activate and arylate less reactive C-H bonds on heteroarenes. In particular, the group has looked at selective arylation of thiazoles. Thiazoles are often seen in drug discovery and late stage functionalization of this structure at all positions would be useful.

The three C-H bonds on the thiazole are chemically nonequivalent. Pd-catalysed arylation of thiazole C-H bonds occurs preferentially at the positions α to the sulfur atom (C2 and/or C5). Under basic conditions the C2 position is preferentially deprotonated, so C-H activation with a strong base favours C2 arylation. Electrophiles, by contrast, react preferentially at the C5 position, so under C-H activation conditions where the nucleophilicity of the thiazole dominates, C5 arylation occurs preferentially. There is therefore a need for C4-selective arylation.


Figure 1. General reactivity of thiazole in C-H activation. Source: (Tani, Uehara, Yamaguchi, & Itami, 2014)

Itami et al. found that the key to functionalization at the C4 position of thiazoles was the use of boronic acids (Kirchberg et al., 2011). The reaction conditions were optimized; various nitrogen-based bidentate ligands were effective, and TEMPO proved the best oxidant. The system worked very well on thiophenes and thiazoles, where yields were generally over 75% and selectivity for thiazoles was over 85%.


Figure 2. Conditions to produce C4 selective products. Source: (Kirchberg et al., 2011)

The authors propose that C4-selective arylation occurs through a Heck-like concerted arylpalladation across the thiazole C=C bond (Kirchberg et al., 2011). This neat synthetic route allows multiple functionalisation of thiazoles in a selective manner through a catalyst control approach. A paper published by the same group shows how this technique was utilised in the programmed synthesis of arylthiazoles (Tani et al., 2014). This route was also used to allow late-stage functionalisation of potential HDAC inhibitors (Sekizawa et al., 2014) and potential antibiotics (Lohrey et al., 2014) demonstrating the use of this method on a variety of substrates. The methodology has also been used to functionalise furans to assess late stage SAR of KL001 derivatives which are modulators of the mammalian circadian clock (Oshima et al., 2015).

C-H functionalisation has generated significant attention from the synthetic chemistry community because it is a powerful tool to form C-C bonds. The selective arylation of thiazoles at all positions by C-H activation is now possible thus allowing a very streamlined and efficient drug discovery process when using the thiazole core. Yamaguchi et al have recently produced an overview of C-H activation strategies for the rapid synthesis of biologically active compounds  (Yamaguchi, Yamaguchi, & Itami, 2012).

Blog written by Yusuf Ali

New treatments for cystic fibrosis lung disease: what’s beyond Orkambi?

A new treatment for cystic fibrosis (CF), Orkambi, has recently gained media attention as a result of the National Institute for Health and Care Excellence (NICE) committee’s decision to not fund the drug through the NHS (Cystic Fibrosis drug Orkambi decision ‘a death sentence’, BBC Online). At an annual cost of >£100,000 per patient, the positive effects of the drug on disease exacerbations and lung function were not considered by NICE to be cost effective. So what does the near future hold in terms of new therapies in CF and will therapies for patients with rare mutations benefit from the current wave of pharma interest?

The last decade has seen significant advances in the hunt for new therapies to treat cystic fibrosis (CF). Since the cloning of CFTR, the gene which is mutated in CF, the mechanisms that lead to such a devastating lung disease have come into greater focus, and the first drug treatments designed to repair the basic defect have shown efficacy in the clinic. Vertex’s Kalydeco (ivacaftor), a drug which potentiates the chloride channel activity of CFTR, has shown significant benefit in CF patients with so-called ‘gating mutations’1 in the channel, which account for approximately 10% of the 120,000 patients globally. As a monotherapy, however, ivacaftor does not show any benefit in CF patients carrying the most common mutation, F508del. The F508del mutation prevents the channel being trafficked to the cell membrane, so for a CFTR potentiator such as ivacaftor to work, a second drug, a CFTR corrector, is required to rescue membrane insertion. To this end, Vertex have developed lumacaftor, a drug that facilitates F508del-CFTR trafficking to the plasma membrane (a CFTR corrector)2. The combination of lumacaftor with ivacaftor (Orkambi), as described above, has shown some clinical improvement in CF patients homozygous for F508del-CFTR (approximately 50% of the CF population), although the magnitude of benefit has fallen somewhat short of the impressive ivacaftor results; in recognition of this, second-generation ‘correctors’ are in development. Vertex have advanced two new correctors (VX152, VX440) and Galapagos are pushing GLPG2222, GLPG2737 and GLPG2857.

Central to the efficacy of these CFTR repair therapies is their ability to restore anion secretion into the airway lumen which serves to draw in water and hydrate the thick, sticky CF mucus thereby improving its ability to clear bugs out of the lungs. However, a number of other CFTR-independent mechanisms are at play in the airway epithelium which can also improve the hydration status of the mucus. Clearly there is not yet the same weight of clinical validation that a CFTR-independent based therapy will show efficacy in patients, however, these types of therapies will offer the potential to treat all CF patients, irrespective of their particular CFTR mutation. Furthermore, these therapies are anticipated to be used in combination with CFTR repair approaches, where available, with anticipated additive or perhaps synergistic activity.

One of these CFTR-independent targets is the epithelial sodium channel, ENaC. ENaC is responsible for absorbing fluid out of the airways and blockers of this channel have been shown to improve mucociliary clearance in the clinic3. Vertex/Parion have VX-371, an inhaled ENaC blocker in a Phase 2 study in CF patients in combination with Orkambi. When used as a monotherapy in a short Phase 2 CF study, VX-371 failed to show any signs of efficacy. This may have been due to the short duration of the study (14 days) but it was also suggested that the efficacy of an ENaC blocker may be limited if there is inadequate anion secretion. Putting a plug into an empty bath tub will not help it fill up unless the taps are switched on! Data reported at the NACFC 2016 seemed to support this notion illustrating that the combination of VX-371 with Orkambi enhanced mucosal hydration and ciliary beat frequency in primary cultures of CF-derived bronchial epithelial cells4.

As well as CFTR, there is an additional anion secretory pathway in the human airway epithelium, often called the ‘alternative’ chloride conductance. This chloride conductance is regulated by calcium and termed the Calcium Activated Chloride Conductance, CaCC. The human airway CaCC was identified as TMEM16A in 2008. This CFTR-independent pathway for anion and thereby fluid secretion into the airway appears to have been somewhat overlooked in the ‘pharma clamour’ for CFTR regulators but its importance should not be underestimated. CF patients are born with dysfunctional CFTR but despite this they maintain some degree of mucociliary and cough clearance, that is the lungs manage to provide anion and fluid secretion from somewhere. Mother Nature has installed a back-up mechanism, the alternative chloride conductance, that is capable of sustaining mucus clearance even in the absence of CFTR. However, this mechanism is clearly not sufficient to maintain long-term health and eventually the lungs succumb. So from a therapeutic perspective, can we utilise this alternative chloride conductance? Actually, we are already doing it, albeit subconsciously. We now understand that CF patients gain significant benefits from regular exercise5 and one of the mechanisms believed to drive this is the alternative chloride conductance. During exercise, our rate and depth of breathing increases which induces an increase in calcium levels in the epithelium through a purinergic, P2Y-receptor mediated mechanism6. This elevation of calcium increases CaCC activity and thereby enhances anion and fluid secretion into the airway boosting mucus clearance; an important mechanism to ensure our lungs clear the extra burden of microorganisms that come with the extra volumes of inspired air on exertion. So could we harness this mechanism and use it as a basis for a completely novel drug therapy for CF? 
With the identity of the CaCC known to be TMEM16A it is now possible to find compounds that will potentiate the activity of the channel. In essence, a TMEM16A ‘potentiator’ will sensitise the channel to intra-cellular calcium levels thereby maintaining it in an activated state for longer, enabling an enhanced fluid secretory response. As with an ENaC blocker, this approach will be agnostic to the CF patient’s mutation and will therefore be suitable for all patients. Furthermore, additive or synergistic effects with CFTR repair therapies as well as ENaC blockers would be anticipated.

Looking ahead to the near term, advances in pharmacotherapy for CF patients are going to continue to be largely based around CFTR repair, focusing on the 50% of the population who are homozygous for the F508del mutation. We will eagerly await data to understand whether Orkambi does influence the natural history of the disease whilst testing the next generation of ‘corrector’ molecules, hoping for a breakthrough in clinical efficacy. In parallel, CFTR-independent therapies hold significant promise, both in combination with existing therapies and, importantly, as stand-alone treatments in their own right.

  1. Ramsey BW, Davies J, McElvaney NG, Tullis E, Bell SC, Dřevínek P, Griese M, McKone EF, Wainwright CE, Konstan MW, Moss R, Ratjen F, Sermet-Gaudelus I, Rowe SM, Dong Q, Rodriguez S, Yen K, Ordoñez C, Elborn JS; VX08-770-102 Study Group. A CFTR potentiator in patients with cystic fibrosis and the G551D mutation. N Engl J Med. 2011;365(18):1663-72
  2. Wainwright CE, Elborn JS, Ramsey BW, Marigowda G, Huang X, Cipolli M, Colombo C, Davies JC, De Boeck K, Flume PA, Konstan MW, McColley SA, McCoy K, McKone EF, Munck A, Ratjen F, Rowe SM, Waltz D, Boyle MP; TRAFFIC Study Group.; TRANSPORT Study Group. Lumacaftor-Ivacaftor in Patients with Cystic Fibrosis Homozygous for Phe508del CFTR. N Engl J Med. 2015;373(3):220-31
  3. Hirsh AJ. Altering airway surface liquid volume: inhalation therapy with amiloride and hyperosmotic agents. Adv Drug Deliv Rev. 2002;54(11):1445-62.
  4. Haberman, R, Ling M, Thelin W, van Goor, F, Higgins M, Jain M. Preclinical evidence for adding ENaC inhibition to corrector/potentiator therapy (lumacaftor/ivacaftor combination therapy) in cystic fibrosis. Ped Pulm. 2016;S45:216 (abstract)
  5. Hebestreit H. Exercise in cystic fibrosis. In: Cystic Fibrosis ed. Mall M, Elborn JS. European Respiratory Society Monograph 2014
  6. Button B, Okada SF, Frederick CB, Thelin WR, Boucher RC. Mechanosensitive ATP release maintains proper mucus hydration of airways. Sci Signal. 2013;6(279):ra46.

Blog written by Henry Danahay

Multiple Multiparameter Optimisations, and the Success of Confirmation Bias

This blog article refers to the article in press by A.K. Ghose et al. on “Technically Extended MultiParameter Optimization”,1 and the somewhat pivotal works of T.T. Wager et al. on “Defining Desirable Central Nervous System Drug Space through the Alignment of Molecular Properties, in Vitro ADME, and Safety Attributes” and “Central Nervous System Multiparameter Optimization Desirability: Application in Drug Discovery”,2 and attempts to explain what an MPO is and to discuss the design of the two systems.

A Multiple Parameter Optimisation (MPO) tool in any application domain is one where the user selects several important parameters that collectively predict an outcome for a particular endpoint (e.g. oral bioavailability). The user then creates a scoring system which balances a scoring matrix across all of the selected parameters, so as to reduce the data to (usually) a single number or a small collection of numbers (“scores”). This is simply a data reduction, which was briefly discussed in a previous article about man-made metrics for drug discovery (Reducing Data: Ligand Efficiency and Other Fallacies).3 Unlike QSAR/QSPR, where rigorous mathematical and statistical methods determine which factors are important and weight them accordingly, an MPO in drug design often uses criteria picked by senior scientists with many years’ experience observing the endpoint in question. It has the added benefit of usually being a bit more human-readable than typical QSAR/QSPR.

An MPO differs from a hard-logic filter (e.g. Lipinski’s Rule of 5) in that it considers optimal and suboptimal values with graduated scores, whereas a hard-logic filter has only two states: pass or fail. Typically anything failing a hard filter is thrown away, whereas a moderate-scoring molecule under an MPO might be tweaked to improve it as part of lead optimisation. You simply show the MPO your structure (or SMILES string), and it will calculate the values for the selected criteria and give you a score – in the case of the Wager MPO, a score between 0 and 6, with 4 or higher representing something that is “probably CNS penetrant”. If you have an abundance of chemical matter, you might throw the lower scorers away, but if you are limited in hit matter, you might redesign your molecule to improve it (it is an MP optimiser after all).
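The pass/fail versus graduated-score distinction can be sketched in a few lines of Python. The Lipinski cut-offs are the standard ones; the graduated scoring function is purely illustrative of the idea, not the published Wager scheme:

```python
def lipinski_pass(mw, clogp, hbd, hba):
    """Hard-logic filter: only two states, pass or fail."""
    return mw <= 500 and clogp <= 5 and hbd <= 5 and hba <= 10

def graduated_score(value, full_credit, no_credit):
    """Graduated desirability: 1 below the 'good' cut-off, 0 above the
    'bad' cut-off, linearly interpolated in between."""
    if value <= full_credit:
        return 1.0
    if value >= no_credit:
        return 0.0
    return (no_credit - value) / (no_credit - full_credit)

# An MPO sums graduated scores rather than rejecting outright, so a
# borderline molecule scores moderately instead of simply failing.
partial_mpo = graduated_score(480, 360, 500) + graduated_score(4.2, 3, 5)
```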

An MPO for determining the likelihood of central nervous system (CNS) penetration was outlined back in 2010, when Travis Wager and colleagues at Pfizer, using internal data, determined a six-criterion system.2 It is fair to say that this work changed the way many medicinal chemists designed their output for neuroscience targets across multiple organisations (including ours). The same authors revisited their work earlier this year with some further pseudo-post-hoc validation data (more on this later). Arup Ghose is a name well known to chemoinformaticians, and I recall reading his works at the turn of the millennium alongside those of Lipinski and Oprea on filters for oral bioavailability and the like (he is also credited with the invention of the AlogP method of LogP prediction). Ghose and colleagues recently published their own ideas on a suitable CNS MPO using a humanised QSPR-type approach.

The 2010 Wager paper has been cited over 200 times, with the Pfizer MPO being used by various organisations and groups as their primary MPO for neuroscience projects. As a result, Ghose’s suggestions are an interesting variation, especially as Ghose’s system is statistically more rigorous in its design.


For a model to be useful, it needs to be validated. This is normally done by taking a data set and randomly splitting it into a design (training) set and a validation (test) set. You build the model on the training data and see how well it holds up against the test set. This is how the Ghose model was designed and built, and hence it can statistically demonstrate its validity within the data set. In the case of the Wager MPO, the authors fell into the usual non-statistical pitfall of creating a Texas Sharpshooter Fallacy (like Lipinski and many others, below), in that they used the whole data set to build the model and then had no data external to the training set with which to validate it. In the case of Wager’s 2016 paper, they effectively demonstrated confirmation bias in recent development.

A Texas Sharpshooter Fallacy is a fault of reasoning where a person shoots a wall with a gun, then draws the target around all of the bullet holes and claims they were all within the target. Without an extra set of bullets to then shoot at the target, you cannot validate how good a shot they are. This is equivalent to using all of the available data to create a model, leaving none aside to test it with, and then calling the model successful because all of the data matches (even though it was the same data used to make the model).
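The remedy described above, holding out bullets (data) before drawing the target, is a few lines of code in any modelling stack. A generic sketch in plain Python (the dataset here is a placeholder; real workflows would use their modelling library's split utility):

```python
import random

def train_test_split(data, test_fraction=0.25, seed=42):
    """Randomly hold out a subset for validation; the model never sees it."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

data = list(range(100))          # placeholder for real compound records
train, test = train_test_split(data)
# The model is fitted on `train` only; reporting performance on `test`
# is what stops you drawing the target around the bullet holes.
```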


Figure 1: Data on numbers of candidate and drugs with their MPO ranges from Wager et al. (loc. cit.)

As can be seen in Figure 1, these are trends and not steadfast rules; however, MPOs are very useful in reducing the perceived risk of endpoint failure. There may be a problem in assessing the quality of this MPO: many organisations use the method in early development, and as a result we have a confirmation bias issue, discussed at the end of this article.


Table 1 shows the different criteria used in the two optimisation methods. Though it contains more components, Ghose’s system requires less computing to calculate, having only AlogP as a required machine-calculable element (the rest could be done by eye); realistically, though, you would have your software do the maths for them all. Ghose and colleagues used a computational data reduction method to cut the criteria down to eight in a way that deliberately minimises the elements that vary by method (e.g. different software will determine pKa, LogP and LogD differently; in fact the same software will give different values depending on the version – recently we saw our LogP and LogD values change overnight as ChemAxon changed the way they natively calculate those criteria). By avoiding these and sticking to human-measurable, functionally discrete criteria, the system becomes less method-dependent. The use of AlogP rather than a complex ClogP, KlogP or ACDlogP also attempts to minimise errors (or rather make errors more consistent) where there are novel chemotypes unlikely to be in the model set of the LogP calculator (see the previous article on CLogP and other short stories).4

Table 1: Comparison of the criteria in both Wager MPO and Ghose TEMPO


* No method suggested

The way the criteria are scored varies between authors. In the case of Wager et al. the criteria were scored according to Figure 2 (mostly monotonic, with a hump function for tPSA), whereas Ghose used a hump function across all of the criteria (Figure 3 and Table 2).


Figure 2: Criteria plots, each detailing a parameter (“desirability function”) in the Pfizer (Wager) MPO. The six criteria are scored on their value for each compound, with a result being a score of between 0 and 6.


Figure 3: A hump function (albeit upside down), where P is preferred, Q is qualifying, U is upper and L is lower. The penalty is then applied on a scale for materials outside the preferred range.
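Figure 3's hump function can be written directly as a trapezoid: full score inside the preferred range, zero outside the qualifying range, and linear ramps in between. The parameter names below are mine for illustration, not Ghose's notation:

```python
def hump_score(x, lower, pref_low, pref_high, upper):
    """Trapezoidal ('hump') desirability: 1.0 inside the preferred range,
    0.0 outside the qualifying range, linear ramps in between."""
    if x <= lower or x >= upper:
        return 0.0
    if pref_low <= x <= pref_high:
        return 1.0
    if x < pref_low:
        return (x - lower) / (pref_low - lower)
    return (upper - x) / (upper - pref_high)

# e.g. a TPSA-like criterion with a preferred range of 40-90 and a
# qualifying range of 20-120 (ranges chosen purely for illustration)
hump_score(60, 20, 40, 90, 120)   # -> 1.0 (in the preferred range)
hump_score(130, 20, 40, 90, 120)  # -> 0.0 (outside the qualifying range)
```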

Table 2: The scoring range for Ghose et al.’s TEMPO



A good scoring system should be a mathematical construct based on each criterion and its relevance, as described in Figure 4.

ScoreABC… = (criterionA * coeffA) + (criterionB * coeffB) + (criterionC * CoeffC)…

Fig. 4: A simple scoring formula, where a criterion (e.g. LogP), is multiplied by its weighting coefficient. All the component products are then summed.

In a system like that of Figure 4, each criterion is multiplied by its weighting, which is derived from its determined importance in contributing to the endpoint. In classical QSAR this is determined by PCA or another regression technique, but in the case of human data reduction it is often whimsical. In the case of Wager et al., each component of the MPO was given the same weight; that is, each coefficient was 1. In the case of Ghose et al., each criterion was given a weight derived from the data reduction analysis in the model design (Table 2, column 6, “coeff (C)”). You can see in Ghose’s system that the number of basic amines is, for example, three times more important than the number of rotatable bonds, whereas in the Wager MPO all features are equally valuable.
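The weighted sum of Figure 4 is then a one-liner. The criteria names, scores and weights below are illustrative placeholders, not Ghose's published values:

```python
def mpo_score(criterion_scores, weights):
    """Weighted sum of desirability scores, per Figure 4. Setting every
    weight to 1 recovers an equal-weight (Wager-style) scheme."""
    return sum(criterion_scores[name] * weights[name] for name in weights)

# Hypothetical per-criterion desirability scores for one molecule
scores = {"logp": 0.8, "tpsa": 1.0, "hbd": 0.5, "rotatable_bonds": 1.0}

# Unequal weights emphasise the criteria the data reduction found most
# important (here "hbd" is weighted three times "rotatable_bonds")
weights = {"logp": 1.0, "tpsa": 1.0, "hbd": 3.0, "rotatable_bonds": 1.0}

total = mpo_score(scores, weights)  # 0.8 + 1.0 + 1.5 + 1.0 = 4.3
```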

 Comparison and confirmation bias

Typically we could compare the models here to see which better predicts or correlates with CNS penetration by taking a dataset from our pipeline and seeing how each predicts it; however, we have a problem with confirmation bias. Like the atomic bomb dispersing isotopes and rendering certain archaeological dating techniques impossible in modern times, the Wager CNS MPO system may have dispersed into product pipelines since (and possibly before, if the system was used internally) their paper. Compounds in our deck that are CNS penetrant and also score highly on the CNS MPO may do so simply because materials that did not score highly were never developed in the first place.

As a result, we would need a data set that was evidently not based on or used in the generation of this CNS MPO system, or indeed any CNS or development guide, as these will have influenced the materials in the comparison set.

Conclusions and comments from the blogger (whose opinions are his own).

Without a doubt, MPOs are fantastic tools to simplify and somewhat humanise the abundance of data, in order to give chemists information about materials to make or avoid. It is also beyond question that the original works by the Old Guard of filters (the Lipinskis, Opreas and Ghoses) have shaped how we design and prioritise materials, and likewise Travis Wager and the Pfizer team really influenced multiple groups around the world by shining a light on how to optimise materials for CNS penetration at the design stage. I believe that the Ghose et al. TEMPO, despite probably being named that way purely for the cool acronym, is a statistically and logically more rigorous piece of data reduction. Wager’s 2010 paper seems more contextual, and thoroughly details the thoughts and trends behind the MPO in a less purely statistical way.

The problem with confirmation bias is actually testament to how widely the Wager CNS MPO system and others have been adopted. It does, however, now make it quite difficult to compare systems (all we can do is try to re-zero across the MPOs to see what each score translates to). It is likely we will keep an eye on both systems in our data generation and see how they track side by side.

Confirmation bias is apparent in a number of other areas in drug discovery; a prime example is ligand efficiency metrics, which have permeated the design principles of multiple organisations.

In the few organisations I have worked in, I have seen and used multiple CNS and other MPO tools, which means that the compounds that emerge are tainted by design and are useless for comparing methods. I wonder how true this is across the wider chemical community.

So my challenge to the reader: next time you look at your enumerations and libraries and line up your synthesis priorities next to your nicely colour-coded MPO score columns, whichever tool you use and whatever the endpoint, ask yourself what information you are really getting from them, and whether you are perpetuating a cycle of confirmation bias (and limiting chemical space in doing so).

1. Arup K. Ghose, Gregory R. Ott and Robert L. Hudkins, ACS Chem. Neurosci., article in press, DOI: 10.1021/acschemneuro.6b00273.

2. (a) Travis T. Wager, Xinjin Hou, Patrick R. Verhoest and Anabella Villalobos, ACS Chem. Neurosci. (2010), 1, 435–449, DOI: 10.1021/cn100008c.

(b) Travis T. Wager, Xinjin Hou, Patrick R. Verhoest and Anabella Villalobos, ACS Chem. Neurosci. (2016), 7, 767–775, DOI: 10.1021/acschemneuro.6b00029.

Blog written by Ben Wahab

Small but mighty: Nanoparticle polymers aid drug delivery

Nanotechnology, the branch of technology that is conducted at the atomic, molecular, or supramolecular scale (the 'nanoscale'), has captured the public imagination since the concept was first introduced over fifty years ago. Even before the relevant technology caught up and allowed nanotechnology to transition from the abstract to the practical realm, it was a subject that inspired scientists and technologists, as well as doomsday enthusiasts and science-fiction writers. With good reason: it is easy to imagine how harnessing the ability to manipulate matter at absurdly small dimensions could open the door to a litany of potential applications in medicine, electronics, energy production and consumer products. And, as just one recent paper has shown, nanotechnology even has a role to play in enhancing drug delivery.

In this paper, the authors set out to combat the issue of poor drug solubility, which leads to low bioavailability and therapeutic efficacy and can be a great hindrance to the development of otherwise promising compounds. This has particular relevance for medications that target tumour cells, which often contain large hydrocarbon frameworks that are inherently hydrophobic and so require higher dosages to reach effective concentrations in aqueous environments. While there have been relatively successful attempts to improve the solubility of pharmaceuticals by 'nanosizing' drug formulations to the 10–1000 nm range, it is difficult to prevent the particles aggregating and consequently losing some potency.

To solve this problem, the authors sought to stabilise drug compounds by using branched copolymer nanoparticles (BCN) made from biocompatible polymers, in this case polyethylene glycol and poly(N-isopropylacrylamide) (PEG-PNIPAM). They synthesised these polymers as branched spheres with varying degrees of cross-linking between the polymer chains, resulting in a structure with both hydrophobic and hydrophilic elements. These characteristics make it possible to create an emulsion between the drug compound and PEG-PNIPAM when they are mixed together, and subsequent freeze-drying of the emulsion spurred formation of the organic nanoparticles directly within the pores of the PEG-PNIPAM polymer. Unlike earlier nanosized drugs, these nanoparticles are protected from aggregation within the unique, highly interconnected scaffold structure, which was exquisitely visualised using scanning electron microscopy and well characterised using dynamic light scattering.

The emulsion-freeze-drying process was then applied to the poorly water-soluble drug indomethacin (IMC) to highlight the versatility of this technique. IMC was dissolved in o-xylene and emulsified with PEG-PNIPAM prior to freeze-drying, which fragmented the emulsion into nanoparticles. The researchers then showed that the IMC nanoparticles could be readily dissolved in water to form an aqueous dispersion, even after eight months of storage. Not only this, but when the procedure was repeated with two more drugs, ketoprofen and ibuprofen, the resulting nanoparticles were obtained in an impressive 100% yield. Such a simple and elegant approach could realistically be applied to a wide range of pharmaceuticals to address drug solubility issues and carve the way for new nanotechnology roles in medical treatment.

Blog written by Chloe Koulouris

Mass spectrometry as a primary screen

Alternative methods to identify hit material in the drug discovery process always catch my eye, so when a very recent publication using a mass spectrometry (MS) detection system as a primary assay was released, I took the time to read it through. The publication is cited below.

In this publication the authors were looking to identify inhibitors of the obesity target monoacylglycerol acyltransferase, which is responsible for acylation of monoacylglycerol (MAG) to diacylglycerol (DAG) in certain tissues. Diacylglycerol is then further metabolised by diacylglycerol acyltransferases to triacylglycerol (TAG), which is stored in tissues as an energy source. Interrupting this metabolic pathway could therefore assist in diseases such as type 2 diabetes, which are influenced by excess storage of triacylglycerol.

Historic assay formats for this target included a scintillation proximity assay and thin-layer chromatography, both of which have specific drawbacks. The authors took a different path for this target and developed a mass spectrometry readout utilising the RapidFire system.

In this assay format, crude human intestinal microsomes were allowed to react with substrate in the presence of test compounds in 384-well plates. The reaction was then quenched and transferred to the RapidFire system (a solid-phase extraction system), and the samples were then measured on a triple quadrupole mass spectrometer. The specific products of the reaction were identified and a % inhibition determined for each test compound.
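The paper's exact normalisation is not reproduced here, but a typical way to turn a product signal (such as an MS peak area) into a % inhibition uses the means of the uninhibited (high) and fully inhibited or no-enzyme (low) control wells on each plate; a minimal sketch, with invented control values:

```python
def percent_inhibition(sample, high_ctrl_mean, low_ctrl_mean):
    """Convert a product signal into % inhibition.

    high_ctrl_mean: mean signal of the uninhibited reaction wells
    low_ctrl_mean:  mean signal of the fully inhibited / background wells
    A sample at the high control reads 0%; at the low control, 100%.
    """
    window = high_ctrl_mean - low_ctrl_mean
    return 100.0 * (high_ctrl_mean - sample) / window
```

A compound well reading exactly halfway between the controls would therefore report 50% inhibition.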


Figure: enzyme activity monitored on a mass spectrometry system. Adachi, R., Ishii, T., Matsumoto, S., Satou, T., Sakamoto, J., and Kawamoto, T. (2016). Discovery of human intestinal MGAT inhibitors using high-throughput mass spectrometry. Journal of Biomolecular Screening (in press).

One of the advantages of the mass spec-based detection method was that, as crude human intestinal microsomes were being used, the production of both DAG and TAG could be monitored in one reaction sample. This gives the ability to identify inhibitors of the different enzymes from a single screen.

Remarkably, the screen was carried out on 500,000 compounds at a screening concentration of 1 μM. Given the cycle time of 10 seconds per sample, this suggests about two months' work, assuming 100% uptime for the mass spec system. This is probably longer than a standard plate-based biochemical assay would take to screen that number of compounds; however, I would assume the cycle time could be improved with further technical development. The results showed the screen had an average Z′ of 0.7 and 0.83 for the different enzymes measured. Hit compounds were further characterised with concentration-response curves against a variety of monoacylglycerol acyltransferase subtypes (MGAT2, MGAT3), and the authors were able to release the structure of a selective compound in the paper and highlighted a number of other compounds that were identified.
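Both the throughput estimate and the quoted Z′ values are easy to sanity-check; a quick sketch using the standard Z′ formula (the control readings here are invented purely to exercise the calculation, not taken from the paper):

```python
import statistics

def z_prime(pos, neg):
    """Z' factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.

    pos: replicate signals from positive (uninhibited) control wells
    neg: replicate signals from negative (background) control wells
    Values above ~0.5 are conventionally taken as a robust screening assay.
    """
    separation = abs(statistics.mean(pos) - statistics.mean(neg))
    return 1.0 - 3.0 * (statistics.stdev(pos) + statistics.stdev(neg)) / separation

# Throughput check: 500,000 samples at 10 s each, assuming 100% instrument uptime
days = 500_000 * 10 / 86_400   # ~58 days of continuous running, i.e. roughly two months
```

With tight invented controls (e.g. positives around 100 and negatives around 10 signal units) this returns a Z′ above 0.9, so the reported 0.7 and 0.83 sit comfortably in the "excellent assay" range.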

The use of mass spectrometry-based screening has been highlighted in other publications, although mostly at the hit-confirmation stage of a screening cascade. This is the first time I personally have seen it used as a primary screen with compound numbers of this size. The technique may open up primary screening of large compound collections against targets that have not been fully explored owing to the failure to develop a robust assay format.

I do envision more targets using mass spectrometry as a primary screen and I think this publication is a step forward in that direction.

Blog written by Gareth Williams