CASE placement in Emeryville


I have recently returned from my PhD CASE placement at the Novartis Institute for Tropical Diseases (NITD), which recently relocated to Emeryville, California. NITD's drug discovery is focused on three areas: malaria, cryptosporidiosis and kinetoplastid diseases.

My PhD project has been to identify drug-like inhibitors of an enzyme that has been identified as a target in Trypanosoma brucei, the kinetoplastid pathogen that causes African sleeping sickness. While on my placement at NITD I was fortunate to work alongside scientists who have been involved in identifying drug candidates against the same pathogen, and who have parasite assays in place to examine the effects of drug molecules. I was able to culture the live parasite, and it was exciting to see that the compounds I had designed and synthesised during my PhD inhibited its growth.
During my time at NITD I was also able to work in the research chemistry laboratories, synthesising further analogues against my enzyme target. It was a great opportunity to work in a cutting-edge research environment with a variety of instruments that facilitated day-to-day chemistry experiments. Over the course of my placement I identified new chemical analogues that showed improved potency against the parasite, which I am now following up back at the Sussex Drug Discovery Centre.

[Image: Yosemite National Park]

At the weekends I was able to do a bit of travelling around California; some of the highlights were visiting Yosemite National Park (above), cycling over the Golden Gate Bridge and visiting Alcatraz.

Blog written by Ryan West


Beware Greeks bearing gifts


The publication that I want to discuss was brought to my attention while reading Derek Lowe's superb blog "In the Pipeline". I would wholeheartedly recommend readers of this website to read the original discussion of the publication there. The In the Pipeline article is here: http://blogs.sciencemag.org/pipeline/archives/2017/10/05/beware-of-zinc-and-of-other-stuff and the original article is here: https://pubs.acs.org/doi/abs/10.1021/acs.jmedchem.7b01071

In the publication, a group at the University of Dundee were exploring a fragment-based approach against Ube2T, an E2 ubiquitin-conjugating enzyme. The team had been successful in identifying a fragment hit in an earlier publication, in which they had utilised differential scanning fluorimetry (DSF, otherwise known as a thermal shift assay) and followed up with biolayer interferometry as an orthogonal technique to confirm hits from the DSF primary screen. To further build confidence, ligand-observed NMR was also carried out.

From this work a specific fragment was chosen as the top hit and was then examined by 15N-labelled protein NMR and ITC measurements, both of which at this stage showed positive binding. At this point analogues were ordered around this fragment, and this is where the project hit some rough water: none of the analogues seemed to show any significant SAR, and the limited number of analogues that did bind were much weaker in potency than the original hit.

In parallel, co-crystallization of the fragment with Ube2T was attempted. While this was unsuccessful in showing specific binding of the fragment, it did highlight a rearrangement of the protein and the presence of a metal ion close to the catalytic residue; this metal ion was then identified as Zn2+.

As no zinc was present in the co-crystallization buffer system, the team investigated the original fragment hit compound, running a zincon colorimetric assay on the sample, which gave a positive result for the presence of zinc. The team then re-ran the original ITC experiment with the fragment hit in the presence of EDTA (which should chelate any zinc present), and all binding was lost. In summary, the fragment hit sample must have contained Zn2+ ions, which were the cause of the apparent activity.


Figure A: ITC binding experiments of Ube2T with the original fragment compound, with and without EDTA present. ZnCl2 against Ube2T is also shown as a comparison.

Taken from: F. E. Morreale et al., Mind the Metal: A Fragment Library-Derived Zinc Impurity Binds the E2 Ubiquitin-Conjugating Enzyme Ube2T and Induces Structural Rearrangements. Journal of Medicinal Chemistry 60, 8183–8191 (2017).

The authors did point out that this hit came from a commercial library, and one action they took, once the related analogues of the fragment showed no activity, was to ask the supplier for its QC data. This data evidently did not highlight the presence of the Zn2+.

Reading this publication, I did wonder whether a different screening cascade might have identified this type of false positive before reaching the very labour-intensive co-crystallization step. For example, a cell-based assay might be more resilient to the presence of metal ions. However, with a fragment-based project this may not be possible, owing to the poor affinity of fragments and the difficulty of reaching an effective concentration within the reduced DMSO tolerance of a cell-based format compared to a biochemical assay. The only recommendation for this specific case would be to carry out further purification of samples and then re-assay. It would also appear that the data from commercial suppliers is lacking, and maybe that can be changed.

The authors should be commended for this work and for putting it in the public domain. Drug discovery is a hard, long and quite often unsuccessful process, and anything we can do to reduce the time spent chasing red herrings, the better.

Blog written by Gareth Williams


Circular dichroism for protein characterisation


During my PhD project, I was working on a bacterial protein involved in iron transport. Secondary and tertiary structural information about this protein was limited, and its function was not well understood at that time. Looking for techniques to shed more light on this, I came across circular dichroism [CD], a widely used technique for analysing the secondary structure content of a protein of interest in solution. Circular dichroism measures the difference in the absorption of left and right circularly polarised light by chiral molecules. It is also a great technique for studying protein-ligand interactions.
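For readers new to the technique, the quantity actually measured, and the usual way spectra are reported, can be written down in two lines. This is a sketch following standard conventions (as described in Kelly et al. [2]), not anything specific to the protein or instrument discussed here:

```latex
% CD signal: differential absorbance of left- vs right-circularly polarised light
\Delta A(\lambda) = A_L(\lambda) - A_R(\lambda)

% Spectra are typically reported as mean residue ellipticity (deg cm^2 dmol^-1),
% normalising the observed ellipticity for concentration c, path length l and
% chain length via the mean residue weight (MRW):
[\theta]_{\mathrm{MRE}} = \frac{\theta_{\mathrm{obs}}\,(\mathrm{mdeg}) \times \mathrm{MRW}}{10 \times c\,(\mathrm{mg\,mL^{-1}}) \times l\,(\mathrm{cm})}
```

Normalising in this way makes spectra comparable between proteins of different sizes and concentrations, which is what allows far-UV band shapes to be deconvoluted into fractional secondary structure content.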

The secondary structure composition of a protein gives rise to characteristic spectra in the far ultraviolet [UV] CD region (180-260 nm), where absorption is due to the peptide bond, whereas the spectrum in the near UV region (260-320 nm), where absorption is due to aromatic residues, reflects tertiary structure features [2]. In this category, synchrotron radiation circular dichroism spectroscopy [SRCD] [3] has also become popular, providing more structural information through the option of measuring spectra down to lower wavelengths, with improved signal-to-noise ratios.

I came across an interesting article in Scientific Reports describing a further advance in this area: high-throughput SRCD [HT-SRCD] using multi-well plates [1]. The paper describes the features provided by beamline B23 of the Diamond Light Source (Oxfordshire, UK) for setting up HT-SRCD using multi-well plates, together with examples of high-throughput measurements carried out in multi-well systems. In short, HT-SRCD could be a valuable resource for studying protein folding, conformational changes in protein structure induced by ligands, buffers and other components, as well as secondary structure determination, on a high-throughput scale.

Blog written by Mohan Rajasekaran

  1. Hussain, R., Javorfi, T., Rudd, T. R., and Siligardi, G. (2016) High-throughput SRCD using multi-well plates and its applications. Scientific Reports 6, 38028
  2. Kelly, S. M., Jess, T. J., and Price, N. C. (2005) How to study proteins by circular dichroism. Biochimica et Biophysica Acta (BBA) – Proteins and Proteomics 1751, 119-139
  3. Wallace, B. A., and Janes, R. W. (2010) Synchrotron radiation circular dichroism (SRCD) spectroscopy: an enhanced method for examining protein conformations and protein interactions. Biochemical Society Transactions 38, 861-873


Citizen scientists


Many moons ago, while I was a postgrad in Dundee and a computer screen was the size of a medium-to-large beach ball, i.e. the late '90s, I came across a curious project on the then relatively nascent internet called SETI@home.

The project was conceived by the Berkeley SETI Research Center as an initiative to engage the general public in addressing a specific (and possibly the ultimate) question: "Is there anybody out there?". The premise was that if there are other (extra-terrestrial) intelligent life forms out there in the universe, they may be broadcasting (knowingly or otherwise) their presence and thus should be detectable by simply listening in.

A vast amount of data was collected, consisting of recorded radio signals across a chunk of the spectrum, which then needed to be analysed. The scientific community could not fund or carry out the analysis itself and needed an "out of the box" solution. Enter SETI@home, the idea being to make use of the processing power of the vast numbers of PCs that were starting to take up residence in the public domain. The strategy was not particularly interactive and relied solely on the ability of PCs to crunch through the data when not in use by their owners. All you had to do was register, and packets of data were sent out to your PC (the University of Dundee's in my case, ahem!) and your PC did the rest. There was something hypnotic, ethereal and satisfying in watching the screen present how much data had been processed and how much more there was to go. Even as someone with only a passing interest in space-related matters, wondering whether there were any "curious" signals in the data, how long it would take to get a positive answer, and whether enough bandwidth had been covered to account for a negative outcome was exciting and alluring. Though it was not particularly interactive, the project captured the imagination of the wider public and made use of their individual resources to try to answer an important scientific question.

A decade or so later I came across another project (planethunters.org) that pricked my curiosity, and again it was space related. The premise was as simple as it was unimaginable – spot planets as they traverse the face of their host star(s) in the line of sight from Earth, from up to ~1,500 light years away.

This, however, was made possible by NASA's recently launched Kepler space telescope, a space-based observatory with the sole purpose of continuously peering at a patch of the sky and recording (every 30 minutes) the light emitted by the stars within its field. Every 3 months or so the data would be downloaded by NASA and distributed to its collaborators within the scientific community for analysis. This involved dealing with an incredible volume of data, for which data processing and analysis pipelines had to be set up.

Though these pipelines were in themselves incredibly successful, it was recognised fairly early on that engaging with the public in a meaningful way would be advantageous, not only to prevent public apathy (which seems to haunt many of these large projects) but also in recognition that any analysis pipeline would ultimately be limited.

Analysis of any large data set requires assumptions, which are usually applicable across the majority of instances but break down where there are deviations from the "expected scenario". The ability of the human eye to perceive subtle differences and patterns was seen as an advantage here, one which, with the right approach, could potentially be tapped. Enter planethunters.org (hosted by zooniverse.org) and citizen scientists.

At its simplest level (below), the platform asks the public whether there is a periodic signal within the light curve of a particular star. However, the level of analysis could be more complex if desired, e.g. identifying the star type, accessing the unprocessed/raw data, and following links to information about a star's age, its metallicity, a deep visual look into its neighbourhood and so on. Forums were set up to discuss the data in general, recurring glitches in the data, individual stars, analysis pipelines for larger bespoke batch analyses and much more.

[Image: Planet Hunters light-curve classification interface]

So did these citizen scientists find anything? Yes is the clear answer. To date there have been 10 peer-reviewed publications, and more will no doubt follow

(https://www.zooniverse.org/about/publications).

These types of endeavours, where the public can be engaged in a meaningful way to answer specific and scientifically inspiring questions, are important on a number of levels:

  1. Access to a “free” and potentially vast resource.
  2. There are important (and sometimes unexpected) discoveries to be made.
  3. It prevents public apathy.
  4. Exposure.
  5. Funding (e.g. from the greater exposure of the project).

The question arises: in this age of big data, particularly with the explosion in cell biology and disease-related big data projects, why do these fields not have similar endeavours? For such well-funded scientific areas, for there to be only one such endeavour on zooniverse.org (Etch a Cell) is, if nothing else, sad. Etch a Cell is an initiative led by the Francis Crick Institute which aims to engage the public in helping to build 3D models of the nuclear envelope from electron micrographs. This is of interest to me and (possibly some) other people in my field of research, but it hardly captures the zeitgeist as Planet Hunters and SETI@home did and continue to do.

Blog written by Tesh Patel

From lab to launch….


Drug discovery is a time-consuming and expensive undertaking. It currently takes between 10 and 15 years to get from the start of the discovery process through to launch (if you make it!) and the cost can range from $2 billion to $5 billion, depending upon whose statistics you reference. The preclinical discovery phase tends to be short and relatively cost effective; its output then runs the gauntlet of a battery of toxicity and safety tests before being allowed to enter testing in humans. Once a relatively short efficacy study in a small number of patients suggests that the drug may work, a series of larger and longer clinical studies ensures that the drug is both safe and effective. If all goes well in these studies the regulatory authorities, usually the FDA or the EMA, will review the extensive data package and, if opinion is positive, give approval for the drug to be launched into routine clinical practice.

We all want this process to be faster and more effective, but without any compromise on safety and determination of efficacy – and by 'we' I mean drug discoverers, patients, the pharmaceutical industry and regulators. We also want the process ideally to be more efficient and predictive, with less chance of failure, particularly in late-stage, expensive studies (one of the major reasons the costs above are in the billions – if every drug that entered clinical studies worked, the cost of discovering a new drug would be ~$350 million). A focus on orphan diseases, where patients are more homogeneous and we have a strong understanding of the genetic basis of their disease, is delivering higher success rates. Perhaps the poster child for this approach has been cystic fibrosis and the introduction of therapies that effectively address the genetic defect by repairing the defective protein – in CF this is the cystic fibrosis transmembrane conductance regulator (CFTR), a chloride ion channel. Vertex Pharmaceuticals introduced the first of its expanding portfolio of CFTR repair therapies in 2012 – the CFTR potentiator ivacaftor (trade name Kalydeco). Ivacaftor demonstrated impressive clinical effects in patients with a specific CFTR mutation (G551D) and has demonstrated efficacy in a number of follow-on trials in CF patients with mutations that are biophysically similar to G551D (a CFTR protein that makes it to the cell membrane but is loath to open). G551D is the third most common CF disease-causing mutation, accounting for somewhere between 2 and 5% of the CF population – still relatively rare, as there are estimated to be only ~70,000 CF patients worldwide.

So what happens if you have a medicine which you believe will deliver benefit to additional patients, but they are few and far between, with not enough to undertake a robust phase 3 trial? This was the conundrum facing Vertex when looking to expand the labelling for ivacaftor. In the first decision of its kind, the FDA granted expanded approval to Vertex for ivacaftor based upon in vitro data only. This could be a landmark step, and the FDA has acknowledged that the approach could have implications for other drugs that have a well-understood safety profile and address well-characterised diseases. With ivacaftor, Vertex have a drug with a robust safety package and a strong understanding of its mechanism of action, and they have put considerable effort into assessing the correlation between preclinical cellular assays, clinical biomarkers and registerable endpoints. To support the request for expanded labelling, Vertex expressed ~50 mutations in Fischer rat thyroid cells, a cell system widely used in the CF field as it has low expression of background chloride channels and can be used in a variety of assays (including Ussing chamber ion transport). Mutations that delivered a 10% increase in chloride transport when treated with ivacaftor were considered responsive. This was not an arbitrarily selected figure but one borne out by Vertex's clinical experience with ivacaftor and other compounds from their developing CFTR repair portfolio. Of those tested, 23 mutations have been added to ivacaftor's labelling (26 failed to meet the criteria).
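To make the decision rule concrete, here is a toy sketch of the responsiveness criterion described above: a mutant is classed as responsive if ivacaftor increases chloride transport by at least 10% in the cell assay. The mutation names and assay values below are invented for illustration; they are not Vertex's data.

```python
# Toy illustration of the label-expansion criterion described above.
# Mutation names and values are hypothetical, NOT the actual FRT-cell results.
RESPONSIVE_THRESHOLD = 0.10  # at least a 10% increase in chloride transport

# Hypothetical Ussing-chamber readouts: mutation -> fractional increase in
# chloride transport on ivacaftor treatment
assay_results = {
    "mutation_A": 0.48,
    "mutation_B": 0.22,
    "mutation_C": 0.04,
}

responsive = sorted(m for m, delta in assay_results.items()
                    if delta >= RESPONSIVE_THRESHOLD)
print(responsive)  # -> ['mutation_A', 'mutation_B']: candidates for the label
```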

In real terms this means that ~900 CF patients in the US alone will now have the opportunity to access this breakthrough medicine – my congratulations to Vertex for pioneering the approach and to the FDA for entertaining it… let's hope it can be pursued for many other diseases.


Image source: http://valueofinnovation.org/the-long-road-to-a-new-medicine.html

Blog written by Martin Gosling

References:

Durmowicz, A.G. et al. (2018) The FDA's experience with ivacaftor in cystic fibrosis: establishing efficacy using in vitro data in lieu of a clinical trial. Ann Am Thorac Soc 15(1): 1-2. doi: 10.1513/AnnalsATS.201708-668PS

Kingwell, K. (2017) FDA OKs first in vitro route to expanded approval. Nature Reviews Drug Discovery. doi: 10.1038/nrd.2017.140

AI – a cure for the ROI?



The face of the pharmaceutical industry has changed beyond recognition over the past 20 years, with many of the major players passing through multiple rounds of M&A, carving off large swathes of their portfolios, synergising, repurposing drugs… all ultimately to improve Return On Investment (ROI). It is no secret that the sector has had a rough ride, with blockbuster drugs becoming increasingly rare, only ≈10% of drug candidates in phase I reaching approval in the years 2006-2015 (1), increased payer pressure to cut prices, company revenues taking a hit as patent cliffs pass by, and a dearth of innovative medicines being brought to market.

How long will companies be able to sustain the significant cost of R&D whilst still turning enough of a profit to satisfy shareholders? The morality of industrial drug discovery (DD), long questioned in any case by those outside the industry, will be under serious scrutiny – not hard to see why, with companies like Gilead charging upwards of $80,000 for a full course of the hepatitis C drug Sovaldi (2). Take Ebola, for instance. Prior to the 2014 outbreak in West Africa (3), Ebola may have been viewed as an African problem and perhaps not an attractive area for investment. After the outbreak, and the associated hysteria in certain corners of Western society, all of a sudden this had the potential to be a little more than just an African problem. Ebola was discovered in 1976, yet a study by the University of Edinburgh in 2015 estimated that around half of all funding for Ebola research occurred in the 2014-2015 period after the outbreak.

Even the philanthropy of the Wellcome Trust is driven by ROI which, unlike VC funding, is perhaps based more on intellectual than financial value, but here too we see a move away from traditional DD, as demonstrated by the demise of the Seeding Drug Discovery initiative. Granted, this change in focus may be designed to generate new targets or technologies, but the sentiment is clear: as traditional DD becomes more difficult, with target patient populations potentially dwindling as a result of increasingly personalised/specialised therapies or peripheral areas of unmet need, where is the motivation for investment?

We are in desperate need of new antibacterials; as the population grows older, the prevalence of dementia is on the rise; and alongside these sits the ever-present spectre of cancer. All represent substantial investment if we are to have a meaningful impact on the development of effective treatments for these indications. It is clear, given the patient heterogeneity and variable aetiology of these conditions, that the model of drug discovery used to date needs a significant change in prosecution. In an effort to speed up the DD process we have seen a recent spate of AI-large pharma collaborations (4, 5, 6), but are these tools merely ways to speed up old methods, or will they genuinely result in the generation of novel targets which might otherwise remain undiscovered by conventional means of investigation?

Regression analysis such as QSAR has long been used within DD to correlate physicochemical and functional parameters to guide chemical synthesis programmes. Correlations between various 'attributes' (derived from principal component analysis) are used to generate a model which can then predict the behaviour of new compounds. Of course, the model is only as good as the number of data points and parameters used to define it and, once defined, remains unchanged. The essential difference is that AI (specifically deep learning) generates self-adapting models using a multi-layered network approach that wasn't really possible before the development of GPUs, which allowed the parallel processing of vast amounts of data. The data is assessed according to one of its 'attributes' in the 'top layer' before being passed to the 'second' layer and processed according to another attribute, and so on; as this is an unguided approach, the system needs a sufficient volume of data and many iterations to generate a reliable model of correlation. After each iteration the algorithms used to generate the correlations can be altered as the network 'learns' from the previous iteration. Like any other model-generating system, though, it is liable to the 'garbage in, garbage out' (GIGO) concept.
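To make the contrast concrete, here is a minimal sketch (mine, not from any of the cited collaborations) comparing a fixed linear model of the kind used in classical QSAR with a small multi-layer network. The 'descriptors' and the underlying structure-activity relationship are entirely synthetic, invented for illustration:

```python
# Contrast a classical QSAR-style regression (fixed once fitted) with a small
# multi-layer network that adjusts its internal weights over many training
# iterations. Descriptors and the target relationship are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-ins for descriptors such as logP, MW, PSA, HBD
# A mildly non-linear "structure-activity relationship" plus experimental noise:
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

qsar = LinearRegression().fit(X_tr, y_tr)        # one-shot fit; model then fixed
net = MLPRegressor(hidden_layer_sizes=(32, 32),  # layered model, weights refined
                   max_iter=2000,                # over repeated passes on the data
                   random_state=0).fit(X_tr, y_tr)

print(f"linear (QSAR-like) R^2: {qsar.score(X_te, y_te):.2f}")
print(f"two-layer network  R^2: {net.score(X_te, y_te):.2f}")
```

On data with any non-linear structure the network will typically score higher, but only because it has enough examples to learn from – which is exactly the data-volume and data-quality dependence discussed below.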

It is easy to see how AI can be, and is, effective in, for instance, developing candidate compound libraries from well-characterised protein and protein/ligand crystal structures and suggesting routes of synthesis (7), but the question of how AI can truly revolutionise an ailing industry is a long way off being answered. The regression analysis used in AI is the same as that used in QSAR for years; essentially it is just the volume of data and the learning aspects that differ. The hope, however, is that AI will generate unique correlations that have thus far eluded us or revealed themselves only serendipitously. Pfizer's hope to quickly analyse and test hypotheses from "massive volumes of disparate data sources" (including more than 30 million sources of laboratory and data reports as well as medical literature) (8) seems, to the untrained eye (mine!), to be fraught with danger regarding curation of the input data. Even in the simplest instance of a standard compound IC50, how would un-curated inter-institution variations affect a blind, self-determining analysis? Perhaps, conversely to the GIGO scenario, given the volume and disparate nature of the data used (literature, experimental, predicted) and correct application of principal component analysis in a given enquiry, AI may actually prove resilient to these small variations.

With regard to mental health, we only have to look at our efforts to provide an objective definition of a subjective experience – the reconceptualization, re-categorisation, inclusion and elimination of mental disorders in successive editions of the 'Diagnostic and Statistical Manual of Mental Disorders' (9) – to know that our understanding of these disorders is in a constant state of refinement. AI assessment of a potentially novel pathway or target, based on the prevailing definitions of a given condition superimposed on inherently variable subjective clinical data, would, it seems, yield different answers from one year to the next.

AI has been hampered not so much by the development of algorithms as by the availability of sufficiently broad, curated training data sets and the development of both GPUs and adequate storage (10). With the advent of '-omics' technologies able to acquire vast amounts of data, only relatively recently have the means been developed by which we can effectively interrogate this huge repository of information. It would seem, then, that standardised curation of this data is of primary importance if the industry is going to rely heavily on AI to effectively generate new medicines… notwithstanding the importance of generating clinically verified biomarkers in parallel… but that's for another blog!

We only have to look at GenBank as an example. From its inception in 1982, it took until 2003 for the first release of the curated RefSeq collection. I remember trying to identify novel splice variants in the late '90s, only to be frustrated by poorly annotated and simply incorrect sequences. Contemporary parallels can be drawn with the Protein Data Bank (PDB), especially in relation to a) the structural genomics programme, where un-curated, non-peer-reviewed, homology-based structures are being submitted to the database (11), and b) inaccurate protein-ligand co-crystal structures (12).

It is clear that AI can, will be, and indeed already is a benefit during every step of drug discovery, and that algorithm refinement is an ongoing, iterative process; what is not currently clear is whether AI will deliver where, for instance, HTS failed to dramatically impact the inefficiencies of the DD process (13). I have no doubt that very soon AI will become a fundamental part of all aspects of healthcare and drug discovery, but I wonder whether this will actually precede a demise in the scale of small molecule drug design and highlight the need to pursue other avenues (e.g. gene therapy/biologics) more vigorously. In any case, as the complexity of both the diseases/unmet need and the required solutions increases, it will be interesting to see how ROI will be maintained and how much more Big Pharma consolidation we will see over the coming years.

Blog written by Marcus Hanley

References

  1. https://www.bio.org/sites/default/files/Clinical%20Development%20Success%20Rates%202006-2015%20-%20BIO,%20Biomedtracker,%20Amplion%202016.pdf
  2. http://www.latimes.com/business/hiltzik/la-fi-mh-that-hepatitis-treatment-20160111-column.html
  3. Fitchett et al. (2016) Ebola research funding: a systematic analysis 1997–2015, Journal of Global Health. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5112007/
  4. https://www.forbes.com/sites/brucejapsen/2016/12/01/pfizer-partners-with-ibm-watson-to-advance-cancer-drug-discovery/#299a161b1ef8
  5. https://www.businesswire.com/news/home/20160425005113/en/Recursion-Pharmaceuticals-Announces-Research-Agreement-Sanofi-Genzyme
  6. https://www.theregister.co.uk/2017/07/03/gsk_signs_ai_deal_british_firm/
  7. https://quertle.com/pharmaceutical-industry-teams-with-artificial-intelligence-ai-to-speed-drug-discovery/
  8. https://www.forbes.com/sites/brucejapsen/2016/12/01/pfizer-partners-with-ibm-watson-to-advance-cancer-drug-discovery/#4b37de281b1e
  9. The DSM-5: Classification and criteria changes. World Psychiatry. 2013 Jun; 12(2): 92–98. Published online 2013 Jun 4.
  10. http://www.spacemachine.net/views/2016/3/datasets-over-algorithms?rq=breakthroughs+in+ai
  11. Domagalski et al. (2014) The quality and validation of structures from structural genomics. Methods Mol Biol 1091: 297–314
  12. Reynolds, C. H. (2014) Protein-ligand cocrystal structures: we can do better. ACS Medicinal Chemistry Letters 5(7): 727-729
  13. https://www.omicsonline.org/highthroughput-screening-what-are-we-missing-2153-0777.1000e120.pdf


Using Biochemical Light-switches to Illuminate Ion Channel Activity


There's no doubting that optogenetics is an important recent development within the field of neuroscience. Channelrhodopsins, non-mammalian proteins that conduct ions in response to specific wavelengths of light, have now been inserted into various neuronal pathways to demonstrate the use of light to control modalities as diverse as vision, hearing, pain and motor control. In November 2017, The Scientist reported on progress in human clinical trials using channelrhodopsins in combination with viral vectors to restore a degree of function in damaged sensory neurons in response to light1. In a study conducted by Allergan, patients suffering from retinitis pigmentosa were injected with virus carrying the genetic instructions to express channelrhodopsins specifically in retinal ganglion cells, bypassing the damaged light-detecting cells of the retina and enabling a rudimentary sense of light-detection in patients who were previously totally blind. Although primarily a safety study, it has shown promising progress in a field which may soon see developments towards treating hearing loss and chronic pain using a similar approach.

However, alongside the use of non-mammalian light-activated proteins to control neuronal activity, an alternative light-based approach has been developing which has direct and immediate usefulness as a tool in the field of drug discovery. The use of light to reversibly deliver ligand to a native protein receptor or ion channel, or 'optopharmacology', is the subject of an interesting recent mini-review by Bregestovski, Maleeva and Gorostiza2. For drug discovery, the use of these chemical photo-switches enables the rapid and, most importantly, reversible activation of ion channel function in response to light. Ligand-gated channels are the subject of this particular review, but much work with voltage-gated channels and G-protein-coupled receptors has now also been published. This approach is a huge step forward from the previous use of caged compounds in flash photolysis – often used, for example, to study synaptic transmission – which would leave the preparation awash with ligand that was slow to clear by re-uptake/diffusion, making for difficult and slow 'one-hit' experimentation.

At the Ion Channel Modulation Symposium in Cambridge last year (2016), Dirk Trauner spoke about the development of these tools and demonstrated their use in research conducted both within his group and with collaborators around the world, showing examples (see Figures 1 & 2 below) of the use of photochromic ligands in both their soluble (PCL) and tethered (PTL) forms.

These two forms are defined by the nature of their interaction with target proteins – PCLs are designed to mimic the ligand of a specific receptor but are freely diffusible and may not exhibit subtype specificity within a tissue. PTLs, as their name suggests, become covalently tethered to their target, usually via naturally occurring or genetically introduced cysteine residues, conferring a much higher degree of selectivity. On exposure to specific wavelengths of light, the molecules photoisomerize between cis- and trans- states, enabling a ligand-receptor interaction that triggers either activation or deactivation of the target protein.

Of the two types, PCLs are the simplest to use. Figure 1 shows the impressive degree of temporal control gained over the function of the capsaicin receptor TRPV1 in the presence of 1 µM of a 'photolipid' PCL (here, an azobenzene combining the vanilloid head-group of capsaicin with a photoswitch-containing fatty-acid chain; an AZCA derivative), simply by varying light stimulation between 350 and 450 nm.

Figure 1: reproduced from Frank et al (2015)3


Having proved the principle in TRPV1-expressing HEK cells, optical switching in the presence of this PCL was applied to isolated mouse DRG neurones and to c-fibre nociceptive neurones in saphenous nerve preparations, both of which contain native TRPV1 receptors. All showed rapidly reversible, non-selective cation conductance in response to the shorter wavelengths of light, translating into nerve depolarisation and action potential firing in native neurones – responses which were absent in TRPV1-/- knockout mice.

Trauner also presented work using PTLs, and showcased developments made in collaboration with US colleagues to extend the chemical tether, reinforcing its chemical stability and reducing the chance of off-target attachment. Their new 'PORTL' (photoswitchable orthogonal remotely tethered ligand), shown in Figure 2a, was used to demonstrate dual optical control of mGluR2 and GluK2 expressed in the same cell, each responding differentially to specified wavelengths of light (Figure 2b).

Figure 2: reproduced from Broichhagen et al (2015)4

This elegant chemistry is a powerful tool for studying ion channel-mediated physiology. Its use, for example, to selectively activate or silence particular neurones, or sub-populations of heteromeric channels containing a common tagged subunit, with a degree of spatial and temporal control unachievable with perfusion, enables a more qualitative assessment of the interaction between possible new therapeutic compounds and their target proteins. And, like channelrhodopsins, the use of these photochromic ligands as therapies in their own right is also possible and currently being investigated.

Blog by: Sarah Lilley

References:

1 The Scientist Nov 16 2017 article by Shawna Williams

https://www.the-scientist.com/?articles.view/articleNo/50980/title/Optogenetic-Therapies-Move-Closer-to-Clinical-Use/

2Bregestovski, P., Maleeva, G., and Gorostiza, P. (2017) Light-induced regulation of ligand-gated channel activity. British J Pharm, doi: 10.1111/bph.14022.

3Frank, JA., Moroni, M., Moshourab, R., Sumser, M., Lewin, GR., and Trauner, D. (2015) Photoswitchable fatty acids enable optical control of TRPV1. Nature Comms 6, doi:10.1038/ncomms8118

 4Broichhagen, J., Damijonaitis, A., Levitz, J., Sokol, KR., Leippe, P., Konrad, D., Isacoff, EY., and Trauner, D. (2015) Orthogonal optical control of a G protein-coupled receptor with a SNAP-tethered photochromic ligand. ACS Cent Sci 1 383-393, doi: 10.1021/acscentsci.5b00260

The Silicon switch in Drug Discovery


Hit-to-lead optimisation is a crucial step in drug discovery. It involves the judicious modification of hit molecules guided by specific pharmacological and pharmacokinetic parameters. Many strategies can be employed to tackle this challenge; one of them is bioisosterism. Bioisosteres are moieties or atoms that show the same physicochemical properties and biological activity. Medicinal chemists can thus rely on a large chemical toolbox – for example, changing an amide bond to an oxazole or shielding a carboxylic acid with a tetrazole. It all depends on how we want to drive the series (in terms of physicochemical properties) through the bottleneck of drug development. Bioisosterism is also widely used to expand the IP space of chemical libraries.

In this light, I would like to discuss the "big brother" of carbon: silicon. Sitting in the third row of the periodic table, directly below carbon, silicon shares the same valency of 4 and commonly forms tetrahedral molecules, the most common silicon linkages being Si-C and Si-O. The replacement of carbon by silicon within bioactive compounds could therefore yield new compounds with different properties for lead optimisation.1 Small chemical differences exist between silicon and carbon. Indeed, the C-Si bond is about 20% longer than the C-C bond – an observation with consequences for the shape and conformation of the molecule, which in turn leads to different interactions with the biological system. Silicon compounds are also more lipophilic than their carbon congeners; switching from carbon to silicon could therefore improve cell penetration, which is very important for compounds targeting the central nervous system, for example. Nevertheless, it also creates solubility and metabolic clearance issues that must be mitigated, depending on where we want to put the cursor in terms of DMPK. A "hidden" feature of silicon is that, unlike carbon, it can form hexacoordinate compounds: this has great significance in medicinal chemistry, since many potent transition-state mimics containing silanediols have been developed. Finally, silicon is more electropositive than carbon, which leads to a difference in bond polarity and ultimately to a different biological outcome; one good example is the ammonium/silicon exchange found in zifrosilone (an acetylcholinesterase inhibitor).

A lot of work has been published recently on the pharmacological evaluation of new silicon-containing molecules (Figure 1); however, none of these advances has yet yielded a marketed drug. As the blogger Derek Lowe said recently,2 silicon stays in the shadows, despite the huge potential this element offers for balancing physicochemical properties with DMPK and lowering compound attrition during the lead optimisation phase.

I believe that a new era for silicon in Drug Discovery will come soon; we cannot neglect this element any longer.


Figure 1. Examples of some bioactive silicon-containing molecules with enhanced pharmacology and DMPK

Blog written by Mohamed Benchekroun

References

(1)          Ramesh, R.; Reddy, D. S. Quest for Novel Chemical Entities through Incorporation of Silicon in Drug Scaffolds. J. Med. Chem. 2017.

(2)          Lowe, D. Silicon Stays in the Shadows http://blogs.sciencemag.org/pipeline/archives/2017/11/07/silicon-stays-in-the-shadows (accessed Dec 7, 2017).

Mutational Signatures in Cancer


A cancer carries thousands of somatic mutations, most of which provide no selective advantage to the clones that acquire them. Much research is focussed on the few driver mutations that do confer such an advantage: how such mutations enable cancer development and whether they can be targeted by cancer therapies. However, the recent reduction in the cost of large-scale sequencing (either whole exome or whole genome) has allowed mutational information to be used in a different way. The study of mutational signatures can provide information about the mutagens to which a patient has been exposed over a lifetime, any DNA repair mechanisms malfunctioning within the tumour, and potential therapeutic agents to which the tumour may be sensitive (Helleday, Eshtad et al. 2014).

Following large-scale sequencing, signatures are generated by categorising base pair substitutions into 6 classes (C.G→A.T, C.G→G.C, C.G→T.A, T.A→A.T, T.A→C.G, T.A→G.C) and also taking into account the bases on each side of the substituted base. This gives 96 possibilities, and each of these can be scored, giving an image such as that below. This particular signature is common in colorectal cancers and is a product of defects in the mismatch repair (MMR) pathway.
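For readers who want to see the bookkeeping behind those 96 categories, here is a minimal sketch (mine, not from the cited papers) of the binning step: each substitution is normalised so that the mutated base is reported as a pyrimidine, then keyed by its flanking bases. The input tuples are hypothetical variant calls:

```python
# Minimal sketch of binning single-base substitutions into the 96 trinucleotide
# classes described above: 6 substitution types x 16 flanking-base combinations.
from collections import Counter

COMP = {"A": "T", "C": "G", "G": "C", "T": "A"}

def signature_class(flank5, ref, alt, flank3):
    """Normalise so the mutated base is reported as a pyrimidine (C or T),
    then return a key such as 'A[C>T]G'."""
    if ref in ("A", "G"):  # purine reference: use the reverse-complement strand
        flank5, flank3 = COMP[flank3], COMP[flank5]
        ref, alt = COMP[ref], COMP[alt]
    return f"{flank5}[{ref}>{alt}]{flank3}"

# Hypothetical variant calls: (5' flank, reference base, mutant base, 3' flank)
variants = [("A", "C", "T", "G"), ("C", "G", "A", "T"), ("T", "T", "G", "A")]
counts = Counter(signature_class(*v) for v in variants)
print(counts)  # e.g. Counter({'A[C>T]G': 2, 'T[T>G]A': 1})
```

Tallying these keys across all the somatic mutations in a tumour produces the 96-bar profile shown in the figure below.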


Figure 1. An example of a mutational signature from a MMR defective tumour. Taken from http://cancer.sanger.ac.uk/cosmic/signatures

Sequencing can also provide information about other types of mutation. Indels (insertions or deletions) can be quantified, together with their average size. This provides further information about the damage/repair environment within the tumour. For example, indels of between 4 and approximately 50 bases, surrounded by microhomology, can be indicative of tumours relying on non-homologous end joining due to a defect in homologous recombination (HR). Information about gross chromosomal rearrangements, such as tandem duplications, translocations and karyotypic variations, can be integrated with base substitution information and viewed in a circos plot. Below are examples of these plots from a tumour defective in homologous recombination (A), with large numbers of rearrangements, and another defective in MMR (B), with large numbers of substitutions.


Figure 2. Circos plots of tumours with different repair defects.  A is HR defective and B is MMR defective.  Adapted from (Davies, Morganella et al. 2017).

So what is the value of studying these mutational signatures? Since they give a historical perspective on the damage accrued by the genome, they can be used to monitor past exposure to various agents. For example, a particular mutational signature associated with exposure to aristolochic acids (present in some traditional medicines) has been identified in a large number of hepatocellular carcinomas in Asia (Ng, Poon et al. 2017). This in turn can inform policy on the availability of such agents.

Additionally, the study of mutational signatures allows improved targeting of therapies. HR-defective tumours present a distinct mutational signature, and such tumours are generally sensitive to PARP inhibitors. MMR-defective tumours carry a high mutational load and respond well to immune checkpoint inhibitors such as anti-PD-1 antibodies (Le, Durham et al. 2017). Although colorectal tumours are routinely examined for MMR status to allow specific treatment, MMR-defective tumours in other cancer types are likely missed. A recent study showed that a small percentage of breast cancers carry MMR defects without germline mutation in MMR genes (Davies, Morganella et al. 2017). These tumours may respond well to immune checkpoint blockade. Therefore, the classification of tumours by mutational signatures could provide a broader but more highly targeted use for several existing therapies. Although currently such signatures are only identified in a research setting, the falling cost of sequencing may allow this characterisation to become part of the routine treatment of cancer patients.

Blog written by Jess Hudson

References:

Davies, H., S. Morganella, C. A. Purdie, S. J. Jang, E. Borgen, H. Russnes, D. Glodzik, X. Zou, A. Viari, A. L. Richardson, A. L. Borresen-Dale, A. Thompson, J. E. Eyfjord, G. Kong, M. R. Stratton and S. Nik-Zainal (2017). “Whole-Genome Sequencing Reveals Breast Cancers with Mismatch Repair Deficiency.” Cancer Res 77(18): 4755-4762.

Helleday, T., S. Eshtad and S. Nik-Zainal (2014). “Mechanisms underlying mutational signatures in human cancers.” Nat Rev Genet 15(9): 585-598.

Le, D. T., J. N. Durham, K. N. Smith, H. Wang, B. R. Bartlett, L. K. Aulakh, S. Lu, H. Kemberling, C. Wilt, B. S. Luber, F. Wong, N. S. Azad, A. A. Rucki, D. Laheru, R. Donehower, A. Zaheer, G. A. Fisher, T. S. Crocenzi, J. J. Lee, T. F. Greten, A. G. Duffy, K. K. Ciombor, A. D. Eyring, B. H. Lam, A. Joe, S. P. Kang, M. Holdhoff, L. Danilova, L. Cope, C. Meyer, S. Zhou, R. M. Goldberg, D. K. Armstrong, K. M. Bever, A. N. Fader, J. Taube, F. Housseau, D. Spetzler, N. Xiao, D. M. Pardoll, N. Papadopoulos, K. W. Kinzler, J. R. Eshleman, B. Vogelstein, R. A. Anders and L. A. Diaz, Jr. (2017). “Mismatch repair deficiency predicts response of solid tumors to PD-1 blockade.” Science 357(6349): 409-413.

Ng, A. W. T., S. L. Poon, M. N. Huang, J. Q. Lim, A. Boot, W. Yu, Y. Suzuki, S. Thangaraju, C. C. Y. Ng, P. Tan, S. T. Pang, H. Y. Huang, M. C. Yu, P. H. Lee, S. Y. Hsieh, A. Y. Chang, B. T. Teh and S. G. Rozen (2017). “Aristolochic acids and their derivatives are widely implicated in liver cancers in Taiwan and throughout Asia.” Sci Transl Med 9(412).


TMEM16A: Closing the circle


Raimund Dutzler and his lab are the first group to solve the structure of the mammalian (mouse) TMEM16A ion channel, using cryo-electron microscopy (Paulino et al., 2017). This study is the first essential step towards resolving the controversies which have raged in the ion channel community since the discovery, back in 2008 and by 3 independent groups, of this protein as the calcium-activated chloride channel (Caputo et al., 2008; Schroeder et al., 2008; Yang et al., 2008). More recently, Brunner et al. (2014) were the first to solve the crystal structure of a related fungal protein, Nectria haematococca TMEM16 (nhTMEM16), a lipid scramblase. Their hypothesis was that the TMEM16 protein family, consisting of ten genes (TMEM16A-K, missing out I), split from a common ancestral lipid scramblase and evolved in mammals to produce the only 2 known chloride ion channels in the family, TMEM16A and TMEM16B. Up until this point, functional studies had shown Ca2+ activation and voltage dependence, and mutagenesis had shown which amino acids have an impact on chloride conductance in TMEM16A. Brunner et al's X-ray structure, at 2.6 Å resolution, gave rise to a number of profound questions about the chloride ion channels TMEM16A & B.

Their structure of the TMEM16 homodimer did not conform to ion channel dogma. Based on a homology model built from the nhTMEM16 lipid scramblase X-ray structure, Whitlock and Hartzell (2016) proposed that the calcium-activated chloride channel TMEM16A shared structural similarities with the lipid scramblases: it was also a homodimer, and each subunit had a cavity or pore, but the chloride permeation pathway was made up of half lipid, half protein, and the Ca2+ binding site was located within the transmembrane domain of the subunit cavity of the protein. In June 2017 (TMEM16A: 2 pores, or not 2 pores) I discussed 2 studies (Lim et al. 2016; Jeng et al. 2016) using covalently linked mouse TMEM16A subunits, with one of the two subunits carrying mutations, which confirmed that the TMEM16A homodimer has 2 independent chloride pores.

Raimund Dutzler's lab report the cryo-electron microscopy structure of mouse TMEM16A at a resolution of 6.6 Å (Paulino et al., 2017). They propose that the putative pore region of mTMEM16A is an enclosed aqueous proteinaceous pore with a large intracellular vestibule that narrows to a chloride-conducting pathway surrounded by alpha-helices (Figure 1). They show this is brought about by a realignment of helices when compared to the nhTMEM16 X-ray structure: helices 4 and 6 move in from the edges of the subunit cavity to enclose the aqueous proteinaceous pore, closing it off to membrane lipids (Figure 1). However, a drawback of this study is the resolution of the cryo-electron microscopy technique, as crucially it does not reveal the locations of amino acid side chains or the pitch of helices within the protein. We await the increased resolution of an X-ray protein structure to confirm these important initial findings. Also, the protein was purified in the presence of high levels of Ca2+ and could therefore represent a non-conducting form of mTMEM16A.


Figure 1. Mechanistic relationships within TMEM16 family.

(A) Depiction of the mTMEM16A pore. The molecular surface of the pore region is shown as grey mesh. The boundaries of hydrophobic (black) and polar regions (grey) of the membrane are indicated by rectangular planes. The positions of positively charged residues affecting ion conduction are depicted as blue and bound Ca2+ ions as green spheres. Hypothetical Cl ions (radius 1.8 Å) placed along the pore are displayed as red spheres. (B) Schematic depiction of features distinguishing lipid scramblases (left) from ion channels (right) in the TMEM16 family. The view is from within the membrane (top panels) and from the outside (bottom panels). The helices constituting the membrane accessible polar cavity in scramblases have changed their location in channels to form a protein-enclosed conduit. A and B, Permeating ions and lipid headgroups are indicated in red. (Paulino et al. (2017) eLife;6: e26232).

They tested their hypothesis with functional studies, mutating basic amino acids to neutral alanines in both the vestibule and the pore region of mTMEM16A to see what impact this had on chloride conductance. They found, as predicted, that altering the charge in the vestibule had little impact on chloride conductance, whereas altering the charge in the pore had a more pronounced effect.

The Brunner et al. (2014) X-ray structure, and the homology models that arose from it, gave rise to the controversy in the literature, as researchers tried to reconcile ion channel dogma with the available TMEM16A functional data and the nhTMEM16 X-ray structure. With this essential first mTMEM16A structure – albeit obtained by cryo-electron microscopy at 6.6 Å resolution, and needing confirmation at the increased resolution of an X-ray structure – Paulino et al. (2017) have resolved our understanding of how chloride ions pass through TMEM16A (in mTMEM16A at least; we still await elucidation of the human structure). This has come with the realignment of helices 4 and 6, closing the circle of the subunit furrow to form an aqueous chloride-conducting pore.

Blog written by Roy Fox

References:

  1. Paulino et al. (2017) eLife; 6:e26232
  2. Caputo et al. (2008) Science 322:590–594.
  3. Schroeder (2008) Cell 134:1019–1029.
  4. Yang et al. (2008) Nature 455:1210–1215.
  5. Brunner et al. (2014) Nature 516:207–212.
  6. Whitlock JM and Hartzell HC (2016) Eur. J. Physiol. 468: 455-473.
  7.  Lim et al. (2016) J. Gen. Physiol. 148:5, 375-392.
  8.  Jeng et al. (2016) J. Gen. Physiol. 148:5, 393-404.