Commentary Volume 2 Issue 3
1Department of Biological Sciences, King Abdulaziz University, Saudi Arabia
2Department of Biotechnology, Eternal University, India
Correspondence: Kulvinder S Saini, Department of Biological Sciences, Embryonic & Cancer Stem Cell Research Group, King Abdulaziz University, Jeddah-21589, Saudi Arabia
Received: March 30, 2015 | Published: May 20, 2015
Citation: Saini KS, Hajrah NH. Toxicogenomics–getting us a step closer towards personalized medicine. J Investig Genomics. 2015;2(3):51-52. DOI: 10.15406/jig.2015.02.00024
TGx, toxicogenomics; NDDR, new drug discovery research; CYP450, cytochrome P450; ADME, absorption, distribution, metabolism and excretion; hiPSCs, human induced pluripotent stem cells; MOA, mechanism of action; SAGE, serial analysis of gene expression; NGS, next-generation sequencing; GPCRs, G-protein coupled receptors; MPS, microphysiological system; QSTR, quantitative structure–toxicity relationship; USFDA, US Food and Drug Administration; EMEA, European Medicines Agency
Traditionally, academic labs identified and validated drug targets and optimized cell-based assays for hit identification. Pharmaceutical companies then developed high-throughput assays to screen thousands of molecules every week against specific targets and therapeutic areas. From a new drug discovery research (NDDR) perspective, one of the unprecedented challenges facing the pharmaceutical industry is the continual failure of its promising molecules during clinical development. Almost half of these lead drug candidates fail because of toxicity-related issues. Hepatic, cardiac, muscle and renal tissues are the most vulnerable to drug-induced toxicity and collectively account for over 80% of the new molecules withdrawn from clinical development, sometimes even after launch. This necessitates identifying compounds with potential side effects early in the discovery process and replacing them with innovative and better backup molecules, thereby saving resources and precious time for the industry. High-throughput human genome sequencing in the last decade has elucidated a number of pathways and individual genes that provide attractive NDDR targets. Gene microarrays, serial analysis of gene expression (SAGE), differential display technologies and now next-generation sequencing (NGS) and systems biology tools have paved the way for effective approaches to identify and target specific kinases, G-protein coupled receptors (GPCRs), phosphatases, mRNAs, miRNAs, etc. involved in disease etiology and progression. NGS certainly has the potential to bring us closer to the goal of “Personalized Medicine”. These techniques, coupled with better bioinformatic tools to capture, store and analyze complex datasets, have paved the way to understanding the relationship between genetic variability and an individual’s response to known pharmaceutical agent(s).
In addition, this knowledge can be leveraged early in NDDR to obtain safer, more potent molecules with a lower chance of toxicity during clinical development. Application of these combined datasets to predictive toxicology, particularly in pre-clinical and clinical development, has led to the emergence of the field of toxicogenomics (TGx). In the last decade, predictive toxicology has taken center stage: it informs the discovery and development of new molecules, and it also aims to provide molecular clues about disease progression and to track valuable biomarkers “in action”. These TGx datasets further help us track toxicity and side-effect issues even after regulatory approval, during systematic post-marketing follow-up of patients (Phase 4).
TGx has wide application across all phases of NDDR, from hit identification to lead optimization and through pre-clinical and clinical development. A number of cell- and tissue-specific protocols and tools are gaining popularity with researchers, such as organ-on-a-chip, individual gene chips highlighting polymorphisms of a specific cytochrome P450 (CYP), 3D printing and artificial lab-grown human tissues. Two reviews from our laboratory have captured the role of polymorphism among key CYP genes and how this knowledge may be leveraged to take safe, potent and efficacious molecules forward into clinical development.1,2 In another study, using microarrays, we compared two known compounds (a hepatotoxin and its safe analogue) with our in-house compound to predict potential hepatotoxicity-related issues3 and were able to make an informed decision not to take one of our molecules into clinical development.
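The comparative microarray approach just described can be sketched in miniature. The following is a toy illustration, not the actual pipeline from the cited study: hypothetical log2 fold-change signatures for a handful of stress-response genes are compared, via Pearson correlation, against a known hepatotoxin and its safe analogue; all gene values and compound profiles are invented for illustration.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length fold-change vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log2 fold-changes for a small panel of stress-response genes
hepatotoxin   = [2.1, 1.8, -0.9, 1.5, 0.2]   # known toxic reference
safe_analogue = [0.1, -0.2, 0.3, 0.0, 0.1]   # known safe reference
candidate     = [1.9, 1.5, -0.7, 1.2, 0.3]   # in-house compound

r_tox  = pearson(candidate, hepatotoxin)
r_safe = pearson(candidate, safe_analogue)
print(f"similarity to hepatotoxin:   {r_tox:.2f}")
print(f"similarity to safe analogue: {r_safe:.2f}")
```

A candidate whose signature tracks the toxic reference far more closely than the safe one would be flagged for closer pre-clinical scrutiny; real analyses of course use genome-wide signatures and proper statistics rather than a five-gene toy panel.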
One of the most promising recent developments is the use of human induced pluripotent stem cells (hiPSCs) to grow liver, kidney, pancreatic and heart tissues for TGx studies. hiPSC-based organs have certainly added a new dimension (3D) to TGx in drug screening, ADME and CYP studies for disease modelling, since, as is often observed, even well-chosen animal models fail to recapitulate human physiology and the response to drug-induced injury. Furthermore, hiPSC technology uses somatic cells, which are easily available and ethically uncontroversial, as no human embryo is used to produce them. In this regard, a cardiac microphysiological system (MPS) created from hiPSCs seems most promising, as the organ generated in a 3D format was able to maintain its cellular viability over weeks.4 A major advantage of this MPS system is that the pharmacological data generated for IC50 and EC50 values reflect “real life” tissue-scale kinetics rather than individual cell-type-specific responses. In addition, the MPS was able to accurately predict the cardiotoxicity of a number of known compounds. Tissues generated using hiPSC technology, such as the MPS system, may be leveraged to produce patient-centric human in vitro systems that will not only be better predictors of toxicology and ADME, but also have excellent potential for prognosticating drug safety, IC50 and EC50 values, and efficacy. However, overall human physiology involves complex interactions between various organs, with intricate control of drug metabolism and clearance rates by the nervous, endocrine and immune systems, so hiPSC-generated tissues may fail to completely mimic the in vivo situation.
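The tissue-scale IC50 values mentioned above are obtained by fitting dose-response data. As a minimal sketch of that step, assuming a one-site Hill model with unit slope and wholly invented viability readings (not data from the MPS study), an IC50 can be estimated by least squares over a candidate grid:

```python
def hill(dose, ic50, hill_slope=1.0):
    """Fraction of maximal response remaining at a given dose (inhibition model)."""
    return 1.0 / (1.0 + (dose / ic50) ** hill_slope)

def fit_ic50(doses, responses, candidates):
    """Brute-force least-squares fit of IC50 over a grid of candidate values."""
    def sse(ic50):
        return sum((hill(d, ic50) - r) ** 2 for d, r in zip(doses, responses))
    return min(candidates, key=sse)

# Hypothetical normalized viability data from a tissue-scale assay (doses in µM)
doses     = [0.01, 0.1, 1.0, 10.0, 100.0]
responses = [0.99, 0.91, 0.52, 0.09, 0.01]

grid = [x / 100 for x in range(10, 1000)]   # candidate IC50 values, 0.10–9.99 µM
ic50 = fit_ic50(doses, responses, grid)
print(f"estimated IC50 = {ic50:.2f} µM")
```

In practice one would fit both IC50 and the Hill slope with a proper nonlinear optimizer; the grid search here only illustrates the idea that the parameter is chosen to best explain the whole dose-response curve.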
In another recent example, hiPSC technology was employed in the successful generation of pancreatic β cells, raising aspirations and “hope” that diabetes may one day be “cured”.5 These lab-generated β cells not only have therapeutic benefits when transplanted into a diabetic individual, but also hold huge potential for personalized drug discovery and development efforts. The clinical development of this technology received a boost in March 2015, when AstraZeneca and Melton’s lab at the Harvard Stem Cell Institute, where the protocol was developed, signed a five-year collaborative agreement.6 This alliance will further explore the engineering of pancreatic islets and the pharmacology underlying the etiology of type-1 and type-2 diabetes, as well as how the decline of β-cell function leads to diabetes. Taken together, hiPSC-generated tissues hold great potential for NDDR; however, at present they may not be able to completely replace non-rodent animal studies. We need to collect, store and analyze multiple TGx datasets for each such tissue and correlate them with what is already known about the MOA of key safe and efficacious drug molecules. In addition, stringent negative and positive controls will be required before extrapolating these ex vivo findings to overall human physiology and deciphering the various parameters of DMPK.
A number of predictive-toxicology programs and software packages are available for modeling drug metabolism. Some are freely available on the web or from academic labs, while others are available commercially. TOPKAT is extensively employed for quantitative structure–toxicity relationship (QSTR) models and for assessing various measures of toxicity (www.accelrys.com). Individual pharmaceutical and biotech companies have also developed their own patented programs and integrated them into their therapeutically focused discovery and development protocols. Data generated from microarray, NGS and RNA-seq studies should be submitted to the regulatory bodies once the molecule moves into clinical development, and should be kept in an open-access environment. Toxicogenomics, coupled with traditional pathology as well as in silico tools, needs to be leveraged early in the drug discovery process to develop and take forward safe and efficacious compounds into the clinic.7‒9 Appropriate disease-specific or transgenic animal models are mandatory for determining the MOA of new drug candidates and for evaluating safety, dose-response, ADME properties, half-life, LD50 and critical endpoints of toxicity. Can hiPSC-generated human tissues and systems-biology software replace some of these animal studies, mimicking drug metabolism and disease profiling? hiPSC technology has its own pros and cons,10 particularly regarding mutagenesis and the risk of oncogene activation. It appears that, for the time being and for obvious reasons, we will still be required to use non-human primates in drug development and safety assessment studies.
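The principle behind QSTR tools such as TOPKAT, mapping numeric molecular descriptors to a toxicity call via a statistical model, can be illustrated with a deliberately tiny sketch. The descriptors, labels and nearest-neighbour classifier below are all invented for illustration; real QSTR systems use far richer descriptor sets and rigorously validated models:

```python
from math import dist  # Euclidean distance between points (Python 3.8+)

# Hypothetical training set: descriptor tuples (logP, MW/100, H-bond donors)
# paired with a binary toxicity label from historical assay data.
train = [
    ((3.2, 4.1, 1), "toxic"),
    ((2.9, 3.8, 0), "toxic"),
    ((0.8, 2.5, 3), "non-toxic"),
    ((1.1, 3.0, 2), "non-toxic"),
]

def predict_toxicity(descriptors, k=3):
    """Majority vote among the k nearest neighbours in descriptor space."""
    neighbours = sorted(train, key=lambda item: dist(item[0], descriptors))
    labels = [label for _, label in neighbours[:k]]
    return max(set(labels), key=labels.count)

print(predict_toxicity((3.0, 4.0, 1)))  # profile lying near the toxic cluster
```

The point of the sketch is only that structurally similar compounds are assumed to share toxicity behaviour, which is the core premise any QSTR model, commercial or in-house, builds upon.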
Apart from the pharma and biotech companies developing new molecules, regulatory bodies such as the US Food and Drug Administration (USFDA) and the European Medicines Agency (EMEA) are also interested in capturing and storing these datasets for future risk assessment within a population, and possibly for spotting warning signs associated with certain chemotypes and formulations. Continuous effort from industry, academia and regulatory bodies will lead to the desired updates and better, stricter enforcement, which ultimately benefits all stakeholders and provides the highest level of patient safety. Comparing TGx data across different microarray platforms (e.g. Affymetrix) and RNA-seq data (Illumina vs. others) will also require a team effort. With NGS becoming affordable and the price coming down to $1000 per genome, we will see these technologies linking TGx with pharmacogenomics. A systems-biology-level understanding of genome-wide data on new molecules during NDDR and clinical development will require us to be more vigilant, and should yield better medicines and relevant biomarkers. Clearly, the USFDA is working with various stakeholders to make it compulsory to submit “omics” datasets along with new drug applications,8 and efforts are already underway to analyze omics datasets for predictive toxicology.11 Finally, academic labs, the pharmaceutical industry and regulatory bodies need to create systems and processes whereby these datasets are freely available to researchers worldwide (data banks, web links, etc.), along the lines of NCBI. This will ensure that future drug discovery and development efforts are conducted not in isolation but in collaboration, where the ultimate beneficiary will be the patient.
Acknowledgments: None.
Conflicts of interest: The authors declare that there is no conflict of interest.
©2015 Saini et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.