Metrics to increase efficiency in Lead Optimisation (LO) processes


A recent article in J. Med. Chem. by A. Maynard at GSK describes a statistical framework for quantifying and visualising the progression of lead optimisation (LO) projects through process-centric analysis. Maynard and his team propose a framework that visualises the whole LO process using compounds’ activities, physical properties, DMPK and pharmacology risk alongside ‘design entropy’, describing the result as the ‘LO telemetry’ of the project. This is a shift away from viewing an LO programme in purely static, compound-centric terms such as Lipinski’s Rule of 5. In an LO project, each compound synthesised is one point in a succession of many, and analysing the dynamics of project progression allows better decisions to be made about a project’s actual efficiency and potential.

Aiding managerial decisions on LO projects

An article in Nature Reviews Drug Discovery by R. Peck in 2015 argued that behavioural, cultural and organizational issues in industrial research are key obstacles to terminating projects earlier. Peck’s explanation for why failing projects are hard to terminate is that even scientists are subject to subjective biases: optimism, for example, causes overestimation of both the probability and the timeline of success.

Maynard’s motivation for developing a quantitative view of LO progression is to support more objective decisions when evaluating LO projects. There is certainly a need for more process-centred evaluation tools that enhance efficiency and productivity in drug discovery. This is not an argument for more management, where effort goes only into ‘gaming’ the metrics system to hit targets. Instead, by using previous projects to understand progression, failure and success, metric tools can be developed to aid managerial decisions in future projects.

Quantifying, visualising and monitoring LO projects

LO aims for convergence to its endpoints. In Maynard’s paper, LO convergence was quantified through the statistical minimisation of risk. For a given optimisation variable (e.g. hERG pEC50), there is a distribution of SAR, some of which lies closer to the endpoint. As the optimisation process moves on, the SAR moves closer to the endpoint, eventually achieving convergence. The ‘risk’ of a given optimisation variable is its resistance to convergence, quantified by how the mean of the SAR distribution moves towards the endpoint. Ultimately, the aim is for projects to converge to a score of 0; failure to converge on a particular endpoint means a residual risk is carried. Essentially, all that is needed are variables and their associated convergence endpoints. These consist of in vitro and in vivo DMPK variables generic to most projects, plus more project-specific target variables such as potency and off-target selectivity. Physical properties (e.g. solubility) can also be included.
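Maynard’s full statistical treatment is in the paper; as a rough, hypothetical sketch of the idea, a residual-risk score for a single variable can be taken as the gap between the rolling mean of the most recent SAR values and the convergence endpoint (the function, window size and toy hERG series below are illustrative assumptions, not taken from the paper):

```python
import statistics

def residual_risk(values, endpoint, window=10, higher_is_better=True):
    """Toy residual-risk score for one optimisation variable:
    the gap between the rolling mean of the most recent SAR
    values and the convergence endpoint (0.0 = converged)."""
    recent = values[-window:]
    mean = statistics.mean(recent)
    gap = (endpoint - mean) if higher_is_better else (mean - endpoint)
    return max(gap, 0.0)

# Hypothetical hERG series (pEC50, lower is better, endpoint <= 5.0):
# the rolling mean is still above 5.0, so residual risk remains.
herg = [6.8, 6.5, 6.1, 5.9, 5.6, 5.4, 5.2, 5.1, 5.0, 4.9]
print(residual_risk(herg, endpoint=5.0, higher_is_better=False))
```

A score of 0.0 would indicate convergence for that variable; summing such scores across all optimisation variables gives a crude picture of the residual risk a project still carries.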

The paper describes four LO programs targeting hepatitis C virus (HCV) replication inhibitors. The protein targets were NS4B, NS5A, NS5B and PI4Ka. NS5A and NS5B were successful, while NS4B and PI4Ka were halted due to preclinical safety. An additional oncology project, whose target was not named, was also followed; this project failed due to LO tractability. The paper gives further information on each of these projects and follows their LO telemetry. For example, Figure 1 shows the evolution of the NS4B project towards convergence. Each step in the staircase indicates the next lead compound, followed by more SAR with close analogues until the next lead is found.


Figure 1

 

Figure 2 illustrates the same picture through the multiple risks involved and shows the interplay between them.


Figure 2

Another metric used, ‘design entropy’, is based on the principle that the diversity of chemical space explored relates to convergence. Projects start off exploring SAR with higher chemical diversity in the structures made. Eventually, a local minimum is reached and exploration becomes very conservative as a project zeroes in on convergence to the endpoint. But if a team becomes stuck on a given variable, the diversity of chemical matter increases again to escape this issue, until an alternative local minimum is found where more conservative changes are again explored. Here, chemical diversity was calculated as Shannon’s entropy, computed from 1024-bit chemical functional group fingerprints.
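As a hypothetical illustration of the calculation (Maynard used 1024-bit functional group fingerprints; the tiny 4-bit vectors and the helper name below are toy assumptions), design entropy can be sketched by summing the Shannon entropy of each fingerprint bit across the compound set:

```python
import math

def design_entropy(fingerprints):
    """Shannon entropy (in bits) of a compound set, summed over
    fingerprint bit positions; higher values indicate more diverse
    chemical matter being explored."""
    n = len(fingerprints)
    nbits = len(fingerprints[0])
    total = 0.0
    for i in range(nbits):
        p = sum(fp[i] for fp in fingerprints) / n  # fraction of compounds with bit i set
        if 0 < p < 1:
            total -= p * math.log2(p) + (1 - p) * math.log2(1 - p)
    return total

# A diverse set (bits vary across compounds) scores higher than a
# conservative set of near-identical analogues.
diverse = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
conservative = [[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 1]]
print(design_entropy(diverse), design_entropy(conservative))
```

A diverse set of structures scores high, while a run of near-identical close analogues scores near zero, matching the conservative exploration expected as a project zeroes in on convergence.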

In the NS4B project, the design entropy reached a minimum near compound 4, but at this point toxicity was discovered, and design entropy increased as attempts were made to escape this series.


Figure 3

Going forward

Currently, progression is measured by milestones based on ‘static’ compound endpoints. The idea here is for the progression of a project to be readily visualised and tracked, which can support portfolio management, provide support to individual projects and help co-ordinate LO programs. More data on successful and failing projects would allow more sophisticated analytical tools to be developed, reducing the amount of time spent on projects that will fail.

These sorts of managerial tools must, of course, be used with caution and in context, and not as the main determinant in assessing the performance of chemists. Experience has taught us that this can drive a dopamine-driven mentality of chasing targets, much like the bankers’ behaviour in the 1990s and early 2000s which led to the financial crash of 2008.

These sorts of metrics, once further developed, could potentially be adapted to suit academic drug discovery, where projects tend to involve much less chemical matter due to limited resources. This could improve productivity both through the metrics themselves and through a bigger focus on management, which is often lacking in academia.

Any opinions noted are those of the blog author only.

Blog written by Yusuf Ali

 

 


3rd generation EGFR TKIs


Here is a topical paper (Robert Heald et al., J. Med. Chem. 2015, 58, 8877-8895) by the group at Argenta/Genentech on the discovery of third generation EGFR tyrosine kinase inhibitors. First generation EGFR TKIs like Gefitinib or Erlotinib show convincing early responses in lung cancer, but resistance quickly develops and has limited their overall effectiveness. A recent study published in the British Journal of Cancer (2014, 110, 55-62) that looked at 106 patients with EGFR sensitising mutations showed a 70% response rate to Gefitinib and progression-free survival of 9.7 months. Approximately 60% of the acquired resistance is caused by a T790M mutation of the gatekeeper residue that substantially reduces the affinity of first generation inhibitors for the ATP binding site. Several pharma companies are in competition to market a new generation of EGFR TKIs that also inhibit the mutated kinase. The FDA recently announced the approval of AstraZeneca’s Tagrisso™ ahead of the Clovis drug, Rociletinib, for treating patients with EGFR T790M mutation-positive metastatic non-small cell lung cancer. Together with the recently approved companion diagnostic, Cobas™, Tagrisso™ could see rapid uptake in the clinic with estimated sales potential of $3 billion. (http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm472525.htm)

The T790M mutation reduces the effectiveness of first generation inhibitors both by occluding a portion of the ATP binding site and by increasing the enzyme’s affinity for ATP. To overcome the low Km[ATP], both Clovis and AstraZeneca have targeted the mutant tyrosine kinase with covalent modifiers that juxtapose the poorly conserved Cys797 with an acrylamide acting as a Michael acceptor. Unlike the second generation inhibitors, both compounds show good selectivity against wild-type EGFR. Unfortunately, EGFR is a moving target, with the potential for further mutations always one step away.

The lead optimisation case study published by the Argenta/Genentech group comments on recent reports of a C797S mutation in samples from patients who have developed resistance to third generation therapies. Loss of the cysteine nucleophile would render the third generation covalent inhibitors ineffective, but how quickly this becomes a problem will have to await further clinical study.

To address this concern the Argenta/Genentech medicinal chemists have designed a non-covalent inhibitor. Among the challenges is the low Km[ATP] of the mutant EGFR kinase, which requires inhibitors to have high affinity for the ATP binding site to overcome the large intracellular ATP concentrations. The paper builds on their initial publication (J. Med. Chem. 2014, 57, 10176-10191) that identified potent and selective inhibitors based on a 4-imidazopyridine-substituted diaminopyrimidine scaffold. Their second paper focuses primarily on the DMPK challenges of designing compounds with predicted low to moderate in vivo human clearance. However, in doing so it covers many areas of medicinal chemistry, with an amalgamation of DMPK, structure-based design and lead optimisation metrics. In the authors’ words: “…this work highlights a number of aspects of medicinal chemistry doctrine: the structural similarity of leads and optimised compounds, the utility of fluorine in the optimisation of small molecule drugs, and the judicious application of compound quality metrics as an aid to interpretation of SAR”. I would also add to the list the in vivo mouse xenograft PK/PD, which clearly shows biomarker modulation consistent with the free drug hypothesis, and some ambitious chemistry, with the final compound containing no fewer than 4 chiral centres!



Compound 42 was administered orally (po) to H1975 tumor-bearing mice once a day for 3 days at 100 or 300 mg/kg. Tumors were collected at the indicated times (after the last dose), and phosphorylation of EGFR, ERK and AKT was determined using commercially available MSD assays. Lines on the free concentration plot show the in vitro H1975 p-EGFR IC50 and IC90. Error bars represent the mean ± SD. Plasma concentrations of 42 were measured using a liquid chromatography–tandem mass spectrometry assay, and the free concentration was calculated using an Fu of 0.207.

The authors suggest follow-up studies on highlighted compounds to demonstrate an improved therapeutic index over current third generation treatments and to evaluate potency against C797S mutant cell lines.

I can’t find any mention of them on the current Genentech pipeline. Does anybody know about their progress?

Blog written by Darren Le Grand

 

#RETHINK HIV


In the aftermath of World AIDS Day on 1st December 2015, when the National AIDS Trust (NAT) encouraged the public to rethink outdated stereotypes and challenge myths through its ‘Be positive: Rethink HIV’ campaign, it seems opportune to ‘rethink’, or reflect on, the current landscape of medical research in this area – is it really something to ‘be positive’ about?

A devastating pandemic that has killed in excess of 35 million people since its identification in 1984 [1], the human immunodeficiency virus (HIV) still affects more than 100,000 people living in the UK, and 34 million people worldwide [2]. Each year 6000 people are newly diagnosed in the UK alone (Fig 1 [3]). But daunting figures and statistics aside, scientific advances have been made in HIV treatment, understanding of the disease has vastly improved, and laws have been put in place to protect people living with HIV. In fact, highly active antiretroviral therapy (HAART; Fig 3) has substantially transformed HIV infection from an inevitably fatal condition into a chronic disease with a longer life expectancy [3]. But with prolonged treatment come other problems, such as tolerability, side effects, adherence to medication and drug resistance. Accordingly, strategies to optimise the therapeutic response, prevent adverse drug reactions, and find drugs with a novel mechanism of action must now come to the forefront of this enduring war between HIV and medicine.


Figure 1 New HIV diagnoses, AIDS and deaths over time; 1999-2014. If diagnosed early, people living with HIV can now expect a near-normal life span. People diagnosed with HIV late have a ten-fold increased risk of death in the year following diagnosis compared to those diagnosed promptly. In 2014, 346 people were diagnosed with AIDS for the first time and 613 people with HIV infection were reported to have died, most of whom were diagnosed late. [3]

HIV infection is a chronic viral infection that leads to the selective depletion of CD4+ T cells. This class of T cell plays active roles in the immune system, particularly in the acquired immune response. They recognise and bind to antigens on the surface of antigen presenting cells (APCs) via a T cell receptor-CD3 complex, triggering downstream signalling cascades that ultimately aim to rid the body of these antigens. However, a gradual loss of these T cells compromises immune competence and progresses to acquired immune deficiency syndrome (AIDS). As the virus is unable to reproduce alone, it exploits these CD4+ T cells as vectors for amplification. It makes sense then that understanding the life cycle of the virus is key to treatment: effective combined antiretroviral drug therapy can decrease viral load below detectable levels by targeting mechanisms employed by the virus during its life cycle (Fig 2).


Figure 2 Thin sections of the entry of the HIV-1 virus into a CD4+ cell by fusion. The life cycle begins with the HIV virus recognising CD4 on the surface of the CD4+ T cells. (a) The virus is initially attached to the cell membrane. Several virus-cell interconnections (double-headed arrow) are seen between the virus and the cell membrane. It is thought that binding via a CCR5 or CXCR4 coreceptor [4] allows for gp41-mediated fusion of the virus with the host cell. (b) The contact surface increases. (c) The envelope of the virus fuses to the cell membrane (double-headed arrow). Some virions (arrow) are tagged with ferritin (arrowheads). Fusion enables the reverse transcription of the viral RNA into DNA. For the viral DNA to integrate into the host genome, hydroxyl groups are added during 3’ processing, which is catalysed by the enzyme integrase. This complex of protein and viral DNA can then enter the nucleus, where the strand transfer reaction integrates the viral DNA into the host genome [5]. Virus replication occurs when the DNA is transcribed into both genomic RNA and mRNA. The mRNA is translated into multiple Gag-Pol polyproteins, the precursors to the structural proteins and enzymes of the virus, which cause migration to the plasma membrane. This allows for budding of the immature virions from the CD4+ T cell before they undergo maturation to form infectious viral particles. Scale bars indicate 100 nm [6]

So, why does HIV remain a worldwide health challenge? Well, in part, this can be attributed to the implementation of widespread treatment in developing countries being impeded by economic, political and cultural factors. The costly nature of HAART, combined with the need for regular administration of the drugs, only fosters the HIV epidemic. Despite an increase in the number of HIV-infected patients receiving treatment and a fall in the number of new cases, there is still an imbalance between those new cases and the new patients gaining access to treatment [7]. In addition to obstacles accessing treatment, another contributing factor is the continued evolution of drug resistance. There are many different strains of HIV, grouped into two main types, HIV-1 and HIV-2, and an infected person can carry multiple subtypes of the virus at any one time. The World Health Organisation [8] estimated that as many as 17% of new HIV infections in developed countries are due to resistance of virus strains to one or more of the cocktail of antiretroviral drugs used in standard HAART (Fig 3). This therapy usually includes two different NRTIs (nucleoside analog reverse transcriptase inhibitors) and an NNRTI (non-nucleoside reverse transcriptase inhibitor) or a protease inhibitor. NRTIs, for example, permit the rapid development of resistance when administered alone. The use of allosteric inhibitors, clinically proven in HIV-1 sufferers, is an attractive alternative to solely using active site inhibitors: used in combination with an active site inhibitor such as the NRTI azidothymidine, they help to reduce the evolution of resistance [9].


Figure 3 The HIV life cycle and antiretroviral drug intervention. Entry inhibitors interfere with viral entry into the host cell by inhibiting several key proteins that mediate the processes of virion attachment, co-receptor binding and fusion [10]. Reverse transcriptase inhibitors include NRTIs, which are analogs of endogenous deoxyribonucleotides with high affinity for the viral reverse transcriptase. They are therefore incorporated into the viral DNA strand during synthesis and cause chain termination, as they lack the 3’-OH group necessary for phosphodiester bond formation in DNA strand elongation [11]. NNRTIs are compounds that bind to the allosteric site of the HIV-1 reverse transcriptase and interfere with its activity, causing the selective block of HIV-1 reverse transcription [9]. Integrase inhibitors bind cofactors needed for the interaction of integrase and the host DNA, blocking the insertion of viral DNA into the host genome [12]. Protease inhibitors bind the active site of the viral protease with high affinity and as a result inhibit the cleavage of polypeptides necessary for viral maturation after budding from the host cell [13]. Maturation inhibitors, similarly to protease inhibitors, stop the processing of the HIV-1 polypeptides, but do so by binding to the polypeptides themselves [14].

Finally, one of the main challenges of completely eradicating HIV infection is HIV latency [15]. The term ‘latent virus’ usually describes one that is not able to give rise to new viral particles. Contrary to this, HIV is thought to be able to hide in a persistent viral reservoir harboured within the host. This ‘reservoir’ contains integrated and replication-competent HIV DNA in the host genome, which is unaffected by HAART and unable to be cleared by the host’s own immune defence [15]. As a consequence, there is always the possibility of new rounds of infection even after achieving undetectable levels of viral load. Despite the concept of a ‘viral reservoir’ being accepted, understanding of it at a cellular and tissue level, and therefore how to quantify it in a patient, still requires some elucidation. The reservoir exists in the CD4+ T cell compartment [16], and therefore could be expected to be present at higher levels in those tissues with high levels of CD4+ cells. However, its distribution between tissues is not known, and factors such as viral subtype, the age and gender of the patient, and medications being taken could affect this. Furthermore, the extent to which the viral DNA, when present, is able to replicate has not been determined, but it is predicted to be as low as 2% [16]. Regardless, it only takes one cell containing replication-competent viral DNA to trigger a new round of infection after cessation of HAART. In order to reach a stage where a drug-free remission of HIV infection is possible, clearance of this persistent virus must be attained [17].

Blog written by Victoria Miller

References

[1] J. Marx, “Strong new candidate for AIDS agent,” Science, vol. 224, no. 4648, pp. 475-477, 1984.
[2] NAT, “National AIDS Trust,” December 2015. [Online]. Available: http://www.nat.org.uk/HIV-in-the-UK/HIV-Statistics/Latest-UK-statistics.aspx.
[3] A. Skingsley, P. Kirwan, Z. Yin, A. Nardone, G. Hughes, J. Tosswill, G. Murphy, P. Tookey, N. Gill, J. Anderson and V. Delpech, “HIV New Diagnoses, Treatment and Care in the UK,” Public Health England, 2015.
[4] A. Haqqani and J. Tilton, “Entry inhibitors and their use in the treatment of HIV-1 infection,” Antivir. Res., vol. 98, pp. 158-170, 2013.
[5] L. Krishnan and A. Engelman, “Retroviral integrase proteins and HIV-1 DNA integration,” J. Biol. Chem., vol. 287, pp. 40858-40866, 2012.
[6] T. Goto, S. Harada, N. Yamamoto and M. Nakai, “Entry of human immunodeficiency virus (HIV) into MT-2, human T-cell leukemia virus carrier cell line,” Arch. Virol., vol. 102, pp. 29-38, 1988.
[7] M. Sidibé, Interviewee, New HIV infections drop, but treatment demands rise. [Interview]. 2010.
[8] World Health Organisation, “HIV Drug Resistance Report,” 2012.
[9] E. De Clercq, “Perspectives of non-nucleoside reverse transcriptase inhibitors (NNRTIs) in the therapy of HIV-1 infection,” Farmaco, vol. 54, pp. 26-45, 1999.
[10] J. Tilton and R. Doms, “Entry inhibitors in the treatment of HIV-1 infection,” Antivir. Res., vol. 85, pp. 91-100, 2010.
[11] T. Cihlar and A. Ray, “Nucleoside and nucleotide HIV reverse transcriptase inhibitors: 25 years after zidovudine,” Antivir. Res., vol. 85, pp. 39-58, 2010.
[12] J. Schafer and K. Squires, “Integrase inhibitors: a novel class of antiretroviral agents,” Ann. Pharmacother., vol. 44, pp. 145-156, 2010.
[13] C. Adamson, “Protease-mediated maturation of HIV: inhibitors of protease and the maturation process,” Mol. Biol. Int., 2012.
[14] J. Richards and S. McCallister, “Maturation inhibitors as new antiretroviral agents,” J. HIV Ther., vol. 13, pp. 79-82, 2008.
[15] T.-W. Chun, D. Finzi, J. Margolick et al., “Fate of HIV-1-infected T cells in vivo: rates of transition to stable latency,” Nat. Med., vol. 1, pp. 1284-1290, 1995.
[16] T.-W. Chun, L. Carruth, D. Finzi et al., “Quantitation of latent tissue reservoirs and total body load in HIV-1 infection,” Nature, vol. 387, pp. 183-188, 1997.
[17] A. Crooks et al., “Precise quantitation of the latent HIV-1 reservoir: implications for eradication strategies,” J. Infect. Dis., vol. 212, pp. 1361-1365, 2015.

DNA damage and repair: Let’s use our brains!


The integrity of our DNA is under constant attack from numerous endogenous and exogenous agents. The consequences of defective DNA and DNA damage responses (DDRs) have been extensively studied in fast proliferating cells, especially in connection to cancer, yet their precise roles in the nervous system are relatively poorly understood.

Two fundamental questions are still open:

  • What is the integrity of the genome in the adult and aging brain?
  • What is the role of DNA damage in aging and neurodegenerative disorders such as Alzheimer’s disease (AD) or Parkinson’s disease (PD)?

How damaged is the genome in the adult brain?

The neurons of our nervous system are post-mitotic, meaning that once matured, they cannot rely on cell division to replace a lost or disabled neighbour. This fact has two important consequences:

  • In a long-lived species such as Homo sapiens, a ‘lucky’ CNS neuron may survive for 80 years or more, potentially accumulating a lot of DNA damage.
  • Neurons are devoid of homologous recombination – the most effective way to repair DNA double-stranded breaks – which takes place mainly during cell division.

The genome integrity of the adult brain is still the object of intense scrutiny, but it is generally accepted that the adult brain tolerates an unexpected degree of DNA damage and that its DDR mechanisms might differ significantly from those of other somatic cells.

What is the role of DNA damage in aging and neurodegenerative disorders?

As we have already observed, neurons are particularly prone to accumulating DNA defects with age. The key question here is whether these defects contribute to developing and/or sustaining neurodegeneration.

Several pieces of evidence seem to point in this direction. For example, age is the most common risk factor for most adult-onset neurodegenerative diseases, with even the most aggressive familial forms of dementia rarely striking before the age of 40 years.

What is still unclear is how DNA damage contributes to the development of pathologies such as AD and PD that are regional by nature, i.e. involve only specific areas of the brain.

In this regard, several models and theories have been suggested, but none has yet been fully validated with sufficient data (Fig. 1).

Fig. 1 Models to explain the relationship between age, DNA damage and neurodegeneration. (From: Chow, H-m; Herrup, K. ‘Genomic integrity and the ageing brain’, Nature Reviews Neuroscience, 2015; 16, p. 672)


DNA damage and the onset of specific neurodegenerative diseases. a | As we age, all of our neurons experience increasing amounts of irreparable DNA damage. The accumulating damage is induced by products of cell metabolism and other destructive activities (black arrows) coupled with a reduced capacity for DNA repair (grey arrows). Disease initiation then arises as a result of an additional insult, specific to the particular degenerative condition, which, coupled with the damage already present, precipitates the emergence of disease. Without that insult, a slow but benign descent into ageing would continue without serious clinical consequences (as indicated by the dashed line). Once the activity of DNA repair can no longer keep pace with the rate at which DNA damage is generated, damage accumulates at an increased pace and a point of no return is reached, eventually leading to neuronal death. b | An alternative, but not mutually exclusive, conceptualization involves a network-based model of DNA damage. If the relative activity levels of different circuits of neurons lead to the accumulation of specific unrepaired DNA lesions in the participating cells, the predicted consequence would be regional variability in the rates of DNA damage, leading to different rates of neuronal ageing and hence to specific selections of neurodegenerative events. For instance, during the development of Alzheimer disease (AD), aberrant activities of neurons in the hippocampal network might result in the lethal accumulation of DNA damage in certain cells. Within the same brain, Purkinje cells in the cerebellum, engaged in a different pattern of physiological activity, would show minimal accumulation of such damage and be spared. After many years, the loss of genomic integrity in the most affected hippocampal neurons would lead to a pattern of cell dysfunction and death that would be more pronounced than that in the cerebellum. A similar branching network model with different initiation points could be envisioned for other diseases, including Parkinson disease (PD), Lewy body disease (LBD) and epilepsy.

 

Clearly, answering some of these questions could open exciting new avenues in the field of neurodegeneration, an area that has unfortunately been increasingly neglected by big pharma after the clinical failures of the last decade.

It is definitely time for DDR research to focus on the brain!

Blog written by Alessandro Mazzacani

Further reading:

‘Genomic integrity and the ageing brain’, Hei-man Chow and Karl Herrup, Nature Reviews Neuroscience, 16, November 2015, p. 672

‘DNA Damage and Its Links to Neurodegeneration’, Ram Madabhushi, Ling Pan and Li-Huei Tsai, Neuron, 83, July 2014, p. 266

Don’t hold your breath. Understanding asthma could take a while…


Let’s face it, asthma has been around for more than ‘a bit’. It’s often considered as a 21st century epidemic, but as early as c.3000 BC, E. sinica (ephedra) was used to treat asthma by Shen Nong, the ‘Father of Chinese Herbal Medicine’. Not long after (relatively speaking…), the Ebers Papyrus (c.1550 BC), a text containing the worldly knowledge of Thoth, the Egyptian god of learning, described a “disorder of the metu”, ducts that were thought to distribute air and water to the lungs, amongst other organs. Interestingly, to alleviate wheezing and panting, many ancient civilisations inhaled the smoke of burning ephedra, which contains ephedrine, a beta-agonist. Brings a whole new meaning to a relaxing smoke, doesn’t it? Hippocrates (c.460-377 BC) was one of the earliest physicians to make the link between respiratory disease and the environment, whilst Pliny the Elder (AD 23-79) described pollen as an irritant of asthma, prescribing ephedra in red wine (hurrah!), or drinking the blood of wild horses, fox liver in red wine or millipedes soaked in honey (no thanks, I just ate). Why, then, given that the condition has existed for more than 5000 years, are we still unable to accurately define asthma and treat it effectively?

One explanation is the increasing recognition that asthma (particularly severe asthma) is a heterogeneous disease. As early as the 1950s, it was noted that not all asthmatics expressed the eosinophilia phenotype1 and, correspondingly, that inhaled corticosteroids (ICSs) were ineffective in disease control. Yet it wasn’t until the late 1990s that asthma phenotyping really signalled a break away from the ‘one size fits all’ approach of ICS therapy2, when Wenzel et al. proposed dividing asthma into two inflammatory subtypes (those with or without eosinophilia). The advent of further biomarker discoveries led to the designation of ‘Type 2-associated asthma’ (Type 2high), characterised by the increased expression of the cytokines IL-4, IL-5 and IL-13. Recent evidence, however, suggests that directing therapies according to the Type 2high signature might have limited efficacy, implying that further, more complex patho-biological features have yet to be elucidated4,5, in addition to the biomarkers already used to assign Type 2high status (Table 1).

It follows that assigning sub-phenotypes to patients may go some way to addressing some of the diagnostic and therapeutic challenges faced in the clinic today. Different phenotypes will have different therapeutic consequences, whilst other pulmonary diseases which mimic asthma can be differentiated, sometimes resulting in a reversal of diagnosis, or re-assessment of therapy. How this should be approached is a matter of some debate. The least invasive form of phenotype identification is cluster analysis, such as that performed by Serrano-Pariente et al.6, in which three clusters of near-fatal asthma phenotypes were identified. Cluster 1, the largest, encompassed older patients with the classical clinical and therapeutic criteria of severe asthma; cluster 2 was marked by a higher proportion of respiratory arrest, impaired consciousness and mechanical ventilation (the latter being a whopping 98%); the final cluster tended to include younger patients, sensitive to certain allergens. The reasoning behind such clustering, based on variables including (but not limited to) demographics, clinical and functional characteristics, and spirometric and immunological studies, is to improve the design of therapeutic strategies for each phenotype.

A somewhat more controversial approach is to biopsy the patient by video-assisted thoracoscopic surgery (VATS) (Figure 1)7. Doing so gives detailed insight into all areas of the lung (including the distal airways, an area where knowledge is relatively limited), providing information on individual phenotype which can then guide therapy and improve patient outcome. It also allows differentiation from asthma mimics, but it carries risks, not least those of general anaesthesia and acute exacerbation of the underlying disease.


Figure 1. Examples of distal lung disease in “atypical” severe asthma and differential diagnoses from video-assisted thoracoscopic surgery procured tissue. a) Small airway with massive lymphocytic bronchiolitis (arrowheads) often found in severe asthma with an autoimmune background (100× magnification). b) Small interstitial poorly defined granulomas (arrows) with epitheloid histiocytes/giant cells (arrowheads) in asthmatic granulomatosis, note the absence of vasculitis (100× magnification). c) Granulomatous inflammation in eosinophilic granulomatosis with polyangiitis, note the presence of prominent tissue eosinophilia, vasculitis (arrow) and necrotising granulomatous inflammation (arrowheads) (200× magnification). d) Aspiration granuloma, note a well-formed, non-necrotising granuloma entirely replaces an airway lumen (arrow), with foreign-body giant cell reaction to vegetable material (arrowhead) (100× magnification).7

A less invasive procedure is sputum examination, from which an evaluation of treatment options can be made, for example the inclusion of immunosuppressants (methotrexate and sulfasalazine), antifungals (oral itraconazole) or biologics (omalizumab, the anti-IgE monoclonal antibody). Sputum also provides a useful source of asthma biomarkers, used both in phenotyping and as indicators of therapeutic response. It is by linking the clinical phenotypes of asthma with disease-mechanism data obtained through the integration of genetic, transcriptomic and proteomic technologies that the diagnosis and treatment of asthma may be improved. It is hoped that this will lead towards tailored therapeutic strategies for asthma, with any luck before the passing of another five millennia!

Blog written by Diane Lee

1. Brown HM, et al. Treatment of chronic asthma with prednisolone; significance of eosinophils in the sputum. Lancet 1958; 2: 1245–1247.

2. Wenzel SE, et al. Evidence that severe asthma can be divided pathologically into two inflammatory subtypes with distinct physiologic and clinical characteristics. Am J Respir Crit Care Med 1999; 160: 1001–1008.

3. Woodruff PG, et al. T-helper type 2-driven inflammation defines major subphenotypes of asthma. Am J Respir Crit Care Med 2009; 180: 388–395.

4. Bel EH, et al. Oral glucocorticoid-sparing effect of mepolizumab in eosinophilic asthma. N Engl J Med 2014; 371: 1189–1197.

5. De Boever EH, et al. Efficacy and safety of an anti-IL-13 mAb in patients with severe asthma: a randomized trial. J Allergy Clin Immunol 2014; 133: 989–996.

6. Serrano-Pariente J, et al. Identification and characterization of near-fatal asthma phenotypes by cluster analysis. Eur J Allergy Clin Immunol 2015; 70: 1139–1147.

7. Doberer D, et al. Should lung biopsies be performed in patients with severe asthma? Eur Respir Rev 2015; 24: 525–539.

Telomerase might have an important role in the mechanism of action of various psychiatric medications


Telomeres are the regions at the end of each DNA strand that protect our chromosomes from deterioration; their role is to provide chromosomal stability and genomic integrity. Telomere shortening has been associated with senescence, cell death and disease; on the other hand, telomere length (TL) might aid our understanding of general health, life expectancy and individual ageing.

Some recent preclinical studies suggest a link between telomerase activity and psychiatric medication. Telomerase is a reverse transcriptase enzyme that adds copies of the repeat sequence “TTAGGG” to the telomeric regions of chromosomes. Telomerase comprises two main components:

1) The telomerase RNA component (TERC), which serves as an RNA template for telomeric DNA synthesis

2) Telomerase reverse transcriptase (TERT), which adds the telomeric repeats.
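The division of labour between the two components can be caricatured in a few lines of code: shortening trims bases from the chromosome end, and telomerase restores them by appending copies of the TTAGGG repeat. This is a purely schematic toy model, not a simulation of real telomere biology (the repeat counts and base numbers are arbitrary).

```python
# Toy illustration of telomere maintenance: TERT appends copies of the
# repeat encoded by the TERC template. Schematic only - arbitrary numbers.

REPEAT = "TTAGGG"

def extend_telomere(telomere, n_repeats):
    """TERT's job (schematically): append n_repeats telomeric repeats."""
    return telomere + REPEAT * n_repeats

def shorten(telomere, bases):
    """Model replication-associated loss of terminal bases."""
    return telomere[:-bases] if bases < len(telomere) else ""

t = REPEAT * 3             # a short starting telomere (18 bases)
t = shorten(t, 6)          # one round of end shortening
t = extend_telomere(t, 2)  # telomerase restores repeats
print(len(t), t.endswith(REPEAT))
```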


In a recent paper, Bersani and co-workers review clinical and preclinical data suggesting a link between telomerase activity and psychopharmacological interventions. For example, a preclinical rodent study (Zhou et al., 2011) found that hippocampal (HC) telomerase may be implicated in the regulation of depression-like behaviour. Mice subjected to chronic mild stress (CMS) showed reduced HC telomerase activity compared with controls, while inhibition of telomerase activity resulted in a depression-like phenotype; most importantly, overexpression of TERT produced antidepressant-like effects.

Telomerase activity may, however, be regulated differently in humans. A pilot study in humans (Wolkowitz et al., 2012) showed a possible link between telomerase activity and antidepressant medication: patients with major depressive disorder (MDD) showed increased peripheral blood mononuclear cell (PBMC) telomerase activity during sertraline therapy, suggesting a link between increased telomerase activity and antidepressant treatment. Another human study (Martinsson et al., 2013) found that patients with bipolar disorder on long-term lithium treatment (over 30 months) had longer leukocyte telomeres than patients treated for shorter periods, suggesting that lithium may protect against telomere shortening by inducing telomerase activity. There are different hypotheses as to why telomerase activity might be linked to psychiatric medications; the diagram below shows some of the proposed mechanisms of interaction, which include regulatory pathways involved in cell proliferation, differentiation and oxidative stress.

[Figure 2: proposed mechanisms linking telomerase activity and psychiatric medications]

The biology of telomeres and telomerase activity in the body is complex, and more data and further research are necessary to understand their interaction with psychopharmacological interventions. Nevertheless, this may prove a promising new area of study for developing novel drugs targeting telomerase activity in mental disorders.

Blog written by Thalia Carreno

References

Bersani, F.S. et al. (2015) Telomerase activation as a possible mechanism of action for psychopharmacological interventions. Drug Discov Today. 20, 1305-1309

Martinsson, L. et al. (2013) Long-term lithium treatment in bipolar disorder is associated with longer leukocyte telomeres. Transl. Psychiatry 3, e261

Wolkowitz, O.M. et al. (2012) Resting leukocyte telomerase activity is elevated in major depression and predicts treatment response. Mol. Psychiatry 17, 164–172

Zhou, Q.G. et al. (2011) Hippocampal telomerase is involved in the modulation of depressive behaviors. J. Neurosci. 31, 12258–12269