Mini Review Volume 3 Issue 6
1School of Geography, Earth & Environmental Sciences, University of Birmingham, UK
2School of Chemical Engineering, National Technical University of Athens, Greece
3Ethical Aspects in Research and Technology for Human Network for Ethics and Research Integrity, National Technical University of Athens, Greece
Correspondence: Eugenia Valsami-Jones, School of Geography, Earth & Environmental Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
Received: March 04, 2016 | Published: June 30, 2016
Citation: Valsami-Jones E, Lynch I, Charitidis CA (2016) Nanomaterial Ontologies for Nanosafety: A Rose by any Other Name…. J Nanomed Res 3(5): 00070. DOI: 10.15406/jnmr.2016.03.00070
The future of nanotechnology, along with all the economic and societal implications this entails, might depend on getting one thing right: an agreed naming framework. To be successful, this framework will have to be transparent, consistent and easy to adopt by the nanoscience community and all its satellite fields (from biology and medicine to engineering and agriculture); it must harmonise the discoveries made to date and lay the foundation for the discoveries of tomorrow. The naming revolution is spearheaded by the “young” field of nanosafety, where harmonisation of terminology is crucial for the development of a database of nanosafety data - and it is needed fast. Such a database will facilitate re-use of existing pockets of data for modelling, prediction and risk assessment, thereby supporting the route to market for nano-enabled products.
Keywords: Nanomaterials, Ontology, Nanosafety, Nanotechnology, Toxicology, Ecotoxicology, Environmental chemistry, Data management, Nanoinformatics
Nanosafety, the research into the safety and safe use of nanomaterials, has come a long way in the last 30 years. Also known as “nanotoxicology”, a name that has lost popularity in recent years due to its negative connotations, nanosafety began as a niche field in the eighties and nineties, with a handful of publications annually, and has grown into a prominent field of endeavour with a few thousand publications per year.1 Nanosafety may be seen as the deal maker (or breaker) for nanotechnology, the so-called enabling technological revolution that has given us self-cleaning windows and transparent sunscreens, but more importantly promises medical breakthroughs and a revolution in all aspects of modern science and technology. It is easy to be inspired by the potential of nanotechnology: by miniaturising materials and their applications we can save precious resources, minimise wastage and improve efficiency; more than that, by capitalising on the uniqueness of nanoscale properties we can achieve unfathomable technological advances where we need them most: healthcare, electronic devices, a clean environment and energy generation.
However, we ignore concerns about the safety of nanomaterials and devices at our peril; the aftermath of asbestos cleanup, public perceptions of GM (Genetically Modified) foods and nuclear accidents are examples of technologies gone wrong and scenarios to be avoided. The progress in nanosafety research, particularly in recent years, means that we can be reasonably confident that nanomaterials do not trigger acute toxicity at realistic doses, but equally that they often possess a distinct “nano” effect that is additive to the toxicity observed for their bulk counterparts.2 Where we have been less successful to date is in developing a mechanistic framework of exposure and toxicity that applies across the board to all nanomaterials in all application scenarios and that enables read-across; in other words, prediction of exposure and toxicity from one type of nanomaterial to another based purely on the availability of their physicochemical description. Integrated and cost-effective strategies to achieve this are being developed and established.3
The problem the research community faces at the present time can be broken into several key components.
Most of these issues, of course, are not unique to nanosafety; very recently it was shown that 60% of publications in the field of Psychology cannot be reproduced.5 Also, in 2016, two publications revealed disturbingly serious deficits in publications in the field of Biomedicine.6,7
The value of databases
So what can be done to break the current impasse? The answer, surprisingly, is quite a lot. Nanosafety, arguably, leads the way in efforts to harmonise and unify datasets and to create a single, shared, openly accessible database.
To succeed, a prerequisite for the creation of data is coherence in the experimental methods and materials used, achieved through unambiguous Standard Operating Procedures (SOPs) and harmonised protocols – in itself a hugely difficult consensus to reach.
A second important condition is agreement on the ontology underpinning the data(base): the naming of things. In science, it is not enough to simply call a rose by its name; you have to describe it fully, giving its colour, size, age, scent… and any other parameter that might facilitate comparison of one rose with another. The level of description is also crucial: too little and it is open to misinterpretation and poor replication; too much and it becomes unmanageable and unimplementable. It is, however, crucially important that an agreed ontology exists, in order for the results of different studies to be comparable.
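To make this concrete, the sketch below (in Python, with hypothetical field names) shows the kind of structured, minimally complete description that such an ontology would standardise; the fields are illustrative only, not a ratified minimal-information checklist.

```python
from dataclasses import dataclass

@dataclass
class NanomaterialRecord:
    """Illustrative, ontology-backed description of a nanomaterial.

    Field names here are hypothetical; a real record would reference
    terms from an agreed ontology rather than free-text strings.
    """
    label: str                     # human-readable name, e.g. "TiO2 NM-105"
    chemical_composition: str      # core composition
    coating: str                   # surface functionalisation, if any
    primary_size_nm: float         # mean primary particle size
    size_distribution_nm: tuple    # e.g. (d10, d50, d90) from TEM or DLS
    shape: str                     # e.g. "spherical", "rod"
    zeta_potential_mv: float       # surface charge in the stated medium
    medium: str                    # dispersion medium the values refer to
    sop_reference: str             # the SOP used to obtain the values

# Two "roses" are only comparable if both are described to the same depth:
nm_a = NanomaterialRecord(
    label="TiO2 NM-105", chemical_composition="TiO2", coating="none",
    primary_size_nm=21.0, size_distribution_nm=(15.0, 21.0, 32.0),
    shape="spherical", zeta_potential_mv=-18.5,
    medium="ultrapure water, pH 7", sop_reference="SOP-DLS-001")
```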
A shared database must also be openly accessible; this will enhance the pace of research, since what is not useful for one researcher could be vital for someone else, and will facilitate the aggregation of data and the assessment of emerging patterns in the data, e.g. across material classes or between assay types, through the process of meta-analysis. A database concept is beginning to emerge within the nanosafety community, providing a systematic registry of nanomaterials characterisation, environmental and health hazard assessments, and high-throughput and high-content impact data in a database infrastructure with search capabilities (through the EU FP7 project eNanoMapper).8 This is a start, although further work is needed, for example to add concepts related to nanomaterial release, exposure, and environmental fate and transformation.
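As an illustration of what such search capabilities mean in practice, the following is a minimal sketch of a programmatic query against a database of this kind; the base URL, the /substance endpoint, its parameters and the response layout are assumptions modelled loosely on the public eNanoMapper prototype, and should be checked against the current API documentation.

```python
# Minimal sketch of a programmatic search against a nanosafety database.
# The base URL, endpoint, parameters and JSON layout are assumptions
# modelled on the public eNanoMapper prototype; verify against the
# live API documentation before relying on them.
import requests

BASE_URL = "https://data.enanomapper.net"  # assumed deployment

def search_substances(query: str, page_size: int = 10) -> list:
    """Return substance records matching `query` (assumed API shape)."""
    response = requests.get(
        f"{BASE_URL}/substance",
        params={"search": query, "pagesize": page_size},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    # A top-level "substance" list is also an assumption.
    return response.json().get("substance", [])

if __name__ == "__main__":
    for record in search_substances("titanium dioxide"):
        print(record.get("name"))
```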
An open approach termed the Nanoinformatics Knowledge Commons has been initiated by CEINT in the US,9,10 and will be co-developed with EU scientists. Even when a comprehensive and widely accepted framework becomes available, more work will be needed to transfer (where possible) existing data into the database. Beyond that, tailored and user-friendly interfaces should be designed and implemented for different needs and usages. This includes explanations of data-related terminology for experimentalists and intuitive descriptions of data flows from creation to curation and storage, written with the help of a technical writer to avoid excessive technical jargon.
User-friendly tools should also be available for data preparation and upload, supporting many different import formats, custom spreadsheet templates and raw data files (such as microscopy images and high-throughput screening data). Where possible, digital lab notebooks should be integrated so that data management is directly linked to the data-generation steps, rather than being an afterthought or a separate task. Integration and communication of the database with modelling and analysis tools allows exploitation of the data in the most efficient way, extraction of useful information and development of predictive mathematical models.
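As a sketch of what such a data-preparation tool might do, the snippet below converts rows of a hypothetical dose-response spreadsheet template into structured, upload-ready records; the column names and output layout are invented for illustration.

```python
# Sketch of a data-preparation step: convert a hypothetical CSV template
# of dose-response measurements into structured, upload-ready records.
# Column names and the output layout are invented for illustration.
import csv
import json

def template_to_records(csv_path: str) -> list:
    """Read a simple dose-response template and emit structured records."""
    records = []
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            records.append({
                "material": row["material_label"],
                "assay": row["assay_name"],
                "dose": {"value": float(row["dose"]),
                         "unit": row["dose_unit"]},
                "response": {"value": float(row["response"]),
                             "unit": row["response_unit"]},
                "sop": row["sop_reference"],  # ties data back to its protocol
            })
    return records

if __name__ == "__main__":
    records = template_to_records("dose_response_template.csv")
    print(json.dumps(records, indent=2))  # ready for a database upload API
```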
Clearly, for a database to be of value, it needs to contain a large volume of diverse data. This is where the European Commission’s Open Research Data Pilot (ORDP) initiative is an important step. Partly motivated by the substantial volume of research data funded by European taxpayers’ money through research grants, the ORDP’s basic principle is to make research data open and visible, in order to facilitate validation of the results presented in scientific publications by any interested stakeholder and to accelerate the pace of research by enhancing the re-usability of data. As such, it is an important step towards the ability of the scientific community to validate results that appear in scientific publications and, as a result, a way to minimise sloppy science and inhibit research misconduct.11 The ORDP, currently adopted on a voluntary basis but intended to become compulsory, is not free of technical challenges.
More generally, and to ensure data quality, experimental data audits should be part of the modus operandi of all research laboratories, and institutional quality procedures and outcomes should accompany large datasets. Research facilities should undergo independent audits of scientific data annually by certified public scientists, in much the same way as businesses and not-for-profit organisations are independently audited by certified financial accountants.12 Quality Assurance (QA) audits must aim at eliminating disorganised sample storage, inadequate data logging, variable experiments, unsecured data analysis, and missed maintenance checks and calibration.13 Research Integrity Offices, with authority from the state, could be in charge of such data audits, staffed by scientists of acknowledged record, experience and integrity. As first steps, data audits could be made a mandatory part of ORDP plans, and evidence of a laboratory Quality System could be made a prerequisite for the completion of EU-funded projects. Training in data quality, integrity and management should be provided to researchers at project initiation. Experimental instrument calibration (metrological services) should be performed regularly and supervised by an independent authority (e.g. a National Metrology Institute). Interlaboratory proficiency-testing comparisons can be organised according to international standards (ISO, CEN) to check the competence of laboratories in performing measurements and their proficiency in delivering accurate test results; this also requires laboratories to know the historical variability of each assay in their facility.
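To illustrate the proficiency-testing step, the sketch below evaluates laboratory results using z-scores, the conventional acceptance criterion in ISO 13528-style schemes; the assigned value, the standard deviation for proficiency assessment and the laboratory results are invented for illustration.

```python
# Sketch of an interlaboratory proficiency-testing evaluation using
# z-scores (the conventional criterion in ISO 13528-style schemes).
# All numbers below are invented; in a real scheme the assigned value
# and sigma_pt are fixed by the scheme organiser.

def z_score(result: float, assigned_value: float, sigma_pt: float) -> float:
    """z = (x - x_assigned) / sigma_pt."""
    return (result - assigned_value) / sigma_pt

def performance(z: float) -> str:
    """Conventional interpretation of |z| in proficiency testing."""
    if abs(z) <= 2.0:
        return "satisfactory"
    if abs(z) < 3.0:
        return "questionable"
    return "unsatisfactory"

lab_results = {"Lab A": 48.2, "Lab B": 55.9, "Lab C": 39.1}  # e.g. size in nm
ASSIGNED, SIGMA_PT = 50.0, 2.5  # set by the scheme organiser

for lab, x in lab_results.items():
    z = z_score(x, ASSIGNED, SIGMA_PT)
    print(f"{lab}: z = {z:+.1f} ({performance(z)})")
```

Tracking such scores over successive rounds is also one way for a laboratory to build up the record of historical assay variability mentioned above.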
Finally, there should be preventive measures against the side effects of open access (i.e. against misconduct). For example, caution should be exercised in the way that post-publication peer review processes are handled.14 Also, the Code of Responsible Research in nanoscience15 should be strengthened (it currently includes “meaning, sustainability, precaution, inclusiveness, excellence, innovation and accountability”) to also include QA and Open Data aspects. The importance of this becomes apparent when considering recent debates (blogs,1 Twitter etc.) regarding “data vultures” and “data parasites” following a poorly worded editorial on Data Sharing,16 which tried to address some of the issues by suggesting collaborative re-use of data but caused controversy and a significant backlash.17
The above discussion makes it clear that even though the importance of a consensus database and common ontology cannot be overstated, the route to achieving them is riddled with difficulties. To succeed, the critical mass, enthusiasm and support for these concepts have to be wide and unconditional, something that might be difficult to secure from scientists, who normally take pride in doing things individually and differently. To counter that, let’s try and imagine a rose by any other name… It simply wouldn’t work.
1http://www.thecogitoblog.com/blog/im-not-a-research-parasite-youre-a-data-vulture/
EVJ and IL would like to acknowledge financial support for their nanosafety activities from the EU project NanoMILE (NMP4-LA-2013-310451) and the UK NERC-funded FENAC (FENAC010001). The authors would also like to acknowledge the ongoing discussions on nanosafety data management underway within the EU NanoSafety Cluster. This editorial is based on a Position Paper just published through the EU NanoSafety Cluster; for further details, see: www.nanosafetycluster.eu/publications-and-outputs/outputs.html.
Conflicts of interest: None.
©2016 Valsami-Jones, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.