4.2: A Systematic Approach - Biology

Learning Objectives

  • Describe how microorganisms are classified and distinguished as unique species
  • Compare historical and current systems of taxonomy used to classify microorganisms

Once microbes became visible to humans with the help of microscopes, scientists began to realize their enormous diversity. Microorganisms vary in all sorts of ways, including their size, their appearance, and their rates of reproduction. To study this incredibly diverse new array of organisms, researchers needed a way to systematically organize them.

The Science of Taxonomy

Taxonomy is the classification, description, identification, and naming of living organisms. Classification is the practice of organizing organisms into different groups based on their shared characteristics. The most famous early taxonomist was a Swedish botanist, zoologist, and physician named Carolus Linnaeus (1707–1778). In 1735, Linnaeus published Systema Naturae, an 11-page booklet in which he proposed the Linnaean taxonomy, a system of categorizing and naming organisms using a standard format so scientists could discuss organisms using consistent terminology. He continued to revise and add to the book, which grew into multiple volumes (Figure (PageIndex{1})).

In his taxonomy, Linnaeus divided the natural world into three kingdoms: animal, plant, and mineral (the mineral kingdom was later abandoned). Within the animal and plant kingdoms, he grouped organisms using a hierarchy of increasingly specific levels and sublevels based on their similarities. The names of the levels in Linnaeus’s original taxonomy were kingdom, class, order, family, genus (plural: genera), and species. Species was, and continues to be, the most specific and basic taxonomic unit.

Evolving Trees of Life (Phylogenies)

With advances in technology, other scientists gradually made refinements to the Linnaean system and eventually created new systems for classifying organisms. In the 1800s, there was a growing interest in developing taxonomies that took into account the evolutionary relationships, or phylogenies, of all different species of organisms on earth. One way to depict these relationships is via a diagram called a phylogenetic tree (or tree of life). In these diagrams, groups of organisms are arranged by how closely related they are thought to be. In early phylogenetic trees, the relatedness of organisms was inferred by their visible similarities, such as the presence or absence of hair or the number of limbs. Now, the analysis is more complicated. Today, phylogenetic analyses include genetic, biochemical, and embryological comparisons, as will be discussed later in this chapter.

Linnaeus’s tree of life contained just two main branches for all living things: the animal and plant kingdoms. In 1866, Ernst Haeckel, a German biologist, philosopher, and physician, proposed another kingdom, Protista, for unicellular organisms (Figure (PageIndex{2})). He later proposed a fourth kingdom, Monera, for unicellular organisms whose cells lack nuclei, like bacteria.

Nearly 100 years later, in 1969, American ecologist Robert Whittaker (1920–1980) proposed adding another kingdom—Fungi—to his tree of life. Whittaker’s tree also contained a level of categorization above the kingdom level—the empire or superkingdom level—to distinguish between organisms that have membrane-bound nuclei in their cells (eukaryotes) and those that do not (prokaryotes). The Empire Prokaryota contained just the Kingdom Monera; the Empire Eukaryota contained the other four kingdoms: Fungi, Protista, Plantae, and Animalia. Whittaker’s five-kingdom tree was considered the standard phylogeny for many years.

Figure (PageIndex{3}) shows how the tree of life has changed over time. Note that viruses are not found in any of these trees. That is because they are not made up of cells and thus it is difficult to determine where they would fit into a tree of life.

Exercise (PageIndex{1})

Briefly summarize how our evolving understanding of microorganisms has contributed to changes in the way that organisms are classified.

Molecular Genetics and the Three Domains

Haeckel’s and Whittaker’s trees presented hypotheses about the phylogeny of different organisms based on readily observable characteristics. But the advent of molecular genetics in the late 20th century revealed other ways to organize phylogenetic trees. Genetic methods allow for a standardized way to compare all living organisms without relying on observable characteristics that can often be subjective. Modern taxonomy relies heavily on comparing the nucleic acids (deoxyribonucleic acid [DNA] or ribonucleic acid [RNA]) or proteins from different organisms. The more similar the nucleic acids and proteins are between two organisms, the more closely related they are considered to be.

In the 1970s, American microbiologist Carl Woese discovered what appeared to be a “living record” of the evolution of organisms. He and his collaborator George Fox created a genetics-based tree of life based on similarities and differences they observed in the small subunit ribosomal RNA (rRNA) of different organisms (16S rRNA in prokaryotes and 18S rRNA in eukaryotes). In the process, they discovered that a certain type of prokaryote, the archaea, were significantly different from other bacteria and eukaryotes in terms of the sequence of small subunit rRNA. To accommodate this difference, they created a tree with three Domains above the level of Kingdom: Archaea, Bacteria, and Eukarya (Figure (PageIndex{4})). Genetic analysis of the small subunit rRNA suggests archaea, bacteria, and eukaryotes all evolved from a common ancestral cell type. The tree is skewed to show a closer evolutionary relationship between Archaea and Eukarya than they have to Bacteria.
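
To make sequence-based comparison concrete, here is a minimal sketch in Python of the underlying idea: score pre-aligned rRNA fragments by percent identity, with higher identity suggesting a closer evolutionary relationship. The sequences below are invented placeholders, not real 16S/18S rRNA data, and real phylogenetic analyses use far longer sequences and more sophisticated distance and tree-building methods.

```python
# Minimal sketch: infer relatedness from aligned rRNA fragments.
# The sequences are invented placeholders, not real rRNA data.

from itertools import combinations

sequences = {
    "Bacterium": "GATTACAGGTCCAGTT",
    "Archaeon":  "GATTGCAGGACCTGTT",
    "Eukaryote": "GACTGCAGGACCTGTA",
}

def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions in two equal-length, aligned sequences."""
    assert len(a) == len(b), "sequences must be pre-aligned"
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

# Compare every pair: higher identity suggests a closer evolutionary relationship.
for (name1, seq1), (name2, seq2) in combinations(sequences.items(), 2):
    print(f"{name1} vs {name2}: {percent_identity(seq1, seq2):.0%} identical")
```

On these toy fragments, the archaeon and eukaryote score highest, mirroring the closer Archaea-Eukarya relationship described above.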

Exercise (PageIndex{3})

  1. In modern taxonomy, how do scientists determine how closely two organisms are related?
  2. Explain why the branches on the “tree of life” all originate from a single “trunk.”

Vertical and Horizontal Genetic Transfer

The use of DNA sequence analysis relies on the assumption that the DNA in an organism has been passed down, generation after generation, from its ancestors. This is called vertical genetic transfer. The genes for ribosomal RNA are almost exclusively transferred vertically, adding to the list of reasons why they are such a good choice for determining evolutionary relationships. In some cases, however, organisms (especially but not exclusively prokaryotes) can acquire DNA from a source other than their ancestors. DNA can be obtained from the environment, acquired through infection with a virus or bacteriophage, or passed directly from one organism to another. This is called horizontal gene transfer (covered more extensively in Chapter 10: Bacterial Genetics). The existence of horizontal gene transfer complicates evolutionary relationships, making them more of a web of life than a tree of life.

Naming Microbes

In developing his taxonomy, Linnaeus used a system of binomial nomenclature, a two-word naming system for identifying organisms by genus and species. For example, modern humans are in the genus Homo and have the species name sapiens, so their scientific name in binomial nomenclature is Homo sapiens. In binomial nomenclature, the genus part of the name is always capitalized; it is followed by the species name, which is not capitalized. Both names are italicized.

Taxonomic names in the 18th through 20th centuries were typically derived from Latin, since that was the common language used by scientists when taxonomic systems were first created. Today, newly discovered organisms can be given names derived from Latin, Greek, or English. Sometimes these names reflect some distinctive trait of the organism; in other cases, microorganisms are named after the scientists who discovered them. The archaeon Haloquadratum walsbyi is an example of both of these naming schemes. The genus, Haloquadratum, describes the microorganism’s saltwater habitat (halo is derived from the Greek word for “salt”) as well as the arrangement of its square cells, which are arranged in square clusters of four cells (quadratum is Latin for “foursquare”). The species, walsbyi, is named after Anthony Edward Walsby, the microbiologist who discovered Haloquadratum walsbyi in 1980. While it might seem easier to give an organism a common descriptive name—like a red-headed woodpecker—we can imagine how that could become problematic. What happens when another species of woodpecker with red head coloring is discovered? The systematic nomenclature scientists use eliminates this potential problem by assigning each organism a single, unique two-word name that is recognized by scientists all over the world.

In this text, we will typically abbreviate an organism’s genus and species after its first mention. The abbreviated form is simply the first initial of the genus, followed by a period and the full name of the species. For example, the bacterium Escherichia coli is shortened to E. coli in its abbreviated form. You will encounter this same convention in other scientific texts as well.
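
As a small illustration of this abbreviation convention, the hypothetical helper below applies the rule mechanically; it is a sketch, not part of any nomenclature standard or library.

```python
def abbreviate(binomial: str) -> str:
    """Abbreviate a binomial name after first mention: 'Escherichia coli' -> 'E. coli'."""
    genus, species = binomial.split()
    return f"{genus[0]}. {species}"

print(abbreviate("Escherichia coli"))       # E. coli
print(abbreviate("Haloquadratum walsbyi"))  # H. walsbyi
```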

Bergey’s Manuals - Phenetic (Phenotypic) Identification

Whether in a tree or a web, microbes can be difficult to identify and classify. Without easily observable macroscopic features like feathers, feet, or fur, scientists must devise ways to differentiate and classify microbes. Prior to the ability to easily sequence DNA, a combination of cell structure, physiological characteristics, and metabolic abilities was used to classify and identify bacteria. Despite the challenges, a group of microbiologists created and updated a set of manuals for identifying and classifying microorganisms. First published in 1923 and updated many times since, Bergey’s Manual of Determinative Bacteriology and Bergey’s Manual of Systematic Bacteriology have been the standard references for identifying and classifying different prokaryotes. Because so many bacteria look identical, methods based on nonvisual characteristics must be used to identify them. For example, biochemical tests can be used to identify chemicals unique to certain species. Likewise, serological tests can be used to identify specific antibodies that will react against the proteins found in certain species. There are still situations today, such as the diagnosis of some infections, where the use of these observable characteristics (the phenotype) is preferable due to ease and speed. Phenotypic identification is best used when there are just a few well-characterized bacteria that could be present. Ultimately, however, DNA and rRNA sequencing can be used both for identifying a particular bacterial species and for classifying newly discovered species.
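
The logic of phenotypic (phenetic) identification can be pictured as matching observed test results against a table of known profiles. A minimal sketch follows; the species names and biochemical results are hypothetical placeholders, not entries from Bergey’s manuals.

```python
# Hypothetical biochemical profiles (placeholders, not data from Bergey's manuals).
profiles = {
    "Species A": {"gram_stain": "negative", "ferments_lactose": True,  "catalase": True},
    "Species B": {"gram_stain": "negative", "ferments_lactose": False, "catalase": True},
    "Species C": {"gram_stain": "positive", "ferments_lactose": False, "catalase": False},
}

def identify(observed: dict) -> list:
    """Return every species whose known profile matches the observed test results."""
    return [name for name, profile in profiles.items() if profile == observed]

unknown = {"gram_stain": "negative", "ferments_lactose": True, "catalase": True}
print(identify(unknown))  # ['Species A']
```

In practice, many more tests are needed, and ambiguous matches are resolved by further tests or by sequencing, as the text notes.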

  • What is binomial nomenclature and why is it a useful tool for naming organisms?
  • Explain why a resource like one of Bergey’s manuals would be helpful in identifying a microorganism in a sample.

SAME NAME, DIFFERENT STRAIN

Within one species of microorganism, there can be several subtypes called strains. While different strains may be nearly identical genetically, they can have very different attributes. The bacterium Escherichia coli is infamous for causing food poisoning and traveler’s diarrhea. However, there are actually many different strains of E. coli, and they vary in their ability to cause disease.

One pathogenic (disease-causing) E. coli strain that you may have heard of is E. coli O157:H7. In humans, infection from E. coli O157:H7 can cause abdominal cramps and diarrhea. Infection usually originates from contaminated water or food, particularly raw vegetables and undercooked meat. In the 1990s, there were several large outbreaks of E. coli O157:H7 thought to have originated in undercooked hamburgers.

While E. coli O157:H7 and some other strains have given E. coli a bad name, most E. coli strains do not cause disease. In fact, some can be helpful. Different strains of E. coli found naturally in our gut help us digest our food, provide us with some needed chemicals, and fight against pathogenic microbes.

Summary

  • Carolus Linnaeus developed a taxonomic system for categorizing organisms into related groups.
  • Binomial nomenclature assigns organisms Latinized scientific names with a genus and species designation.
  • A phylogenetic tree is a way of showing how different organisms are thought to be related to one another from an evolutionary standpoint.
  • The first phylogenetic tree contained kingdoms for plants and animals; Ernst Haeckel proposed adding a kingdom for protists.
  • Robert Whittaker’s tree contained five kingdoms: Animalia, Plantae, Protista, Fungi, and Monera.
  • Carl Woese used small subunit ribosomal RNA to create a phylogenetic tree that groups organisms into three domains based on their genetic similarity.
  • Bergey’s manuals of determinative and systematic bacteriology are the standard references for identifying and classifying bacteria, respectively.
  • Bacteria can be identified through biochemical tests, DNA/RNA analysis, and serological testing methods.

Glossary

binomial nomenclature
a universal convention for the scientific naming of organisms using Latinized names for genus and species
eukaryote
an organism made up of one or more cells that contain a membrane-bound nucleus and organelles
phylogeny
the evolutionary history of a group of organisms
prokaryote
an organism whose cell structure does not include a membrane-bound nucleus
taxonomy
the classification, description, identification, and naming of living organisms

Contributor

  • Nina Parker (Shenandoah University), Mark Schneegurt (Wichita State University), Anh-Hue Thi Tu (Georgia Southwestern State University), Philip Lister (Central New Mexico Community College), and Brian M. Forster (Saint Joseph’s University) with many contributing authors. Original content via OpenStax (CC BY 4.0; Access for free at https://openstax.org/books/microbiology/pages/1-introduction)


Systematics: Meaning, Branches and Its Application

The term systematics is derived from the Latinised Greek word ‘systema’, meaning ‘together’. Systematics partly overlaps with taxonomy and was originally used to describe the system of classification prescribed by early biologists. Linnaeus applied the word to the system of classification in his famous book ‘Systema Naturae’, published in 1735.

Blackwelder and Boyden (1952) gave a definition that “systematics is the entire field dealing with the kinds of animals, their distinction, classification and evolution”. G. G. Simpson (1961) considers that “Systematics is the scientific study of the kinds and diversity of organisms and of any and all relationships among them”.

The simpler definition by Ernst Mayr (1969), and Mayr and Ashlock (1991), is “Systematics is the science of the diversity of organisms”. Christoffersen (1995) has defined systematics as “the theory, principles and practice of identifying (discovering) systems, i.e., of ordering the diversity of organisms (parts) into more general systems of taxa according to the most general causal processes”.

Systematics includes both taxonomy and evolution. Taxonomy includes classification and nomenclature but leans heavily on systematics for its concepts. The study of systematics therefore covers a much broader scope, including not only morphology and anatomy but also genetics, molecular biology, behavioural aspects and evolutionary biology.

The recent approach to the science of biology has added a new dimension to the science of classification, and the new systematics has emerged as a synthesis of progress in all the major disciplines of biology.

Branches of Systematics:

The new systematics may be divided into the following branches:

1. Numerical systematics:

This type of systematics applies biostatistical methods to the identification and classification of animals. This branch is also called biometry (see the sketch after this list).

2. Biochemical systematics:

This branch of systematics deals with the classification of animals on the basis of biochemical analysis of protoplasm.

3. Experimental systematics:

This branch of systematics deals with the identification of various evolutionary units within a species and their role in the process of evolution. Here, mutation is considered the evolutionary unit.
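
To illustrate the biometric idea behind numerical systematics (item 1 above), the sketch below computes pairwise Euclidean distances between specimens described by a few invented measurements; real numerical taxonomy scores many more characters and applies formal clustering methods.

```python
import math

# Hypothetical morphometric data: (body length, wing length, leg length) in mm.
specimens = {
    "specimen_1": (12.0, 8.5, 4.2),
    "specimen_2": (11.8, 8.7, 4.0),
    "specimen_3": (20.5, 14.1, 7.9),
}

def distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

names = list(specimens)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        d = distance(specimens[first], specimens[second])
        print(f"{first} vs {second}: {d:.2f}")
# Small distances (specimen_1 vs specimen_2) suggest membership in the same group.
```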

Application of Systematics in Biology:

1. Systematics is the study of the diversity of organisms, past and present, and of the relationships among living things. Relationships are established by making cladograms, phylogenetic trees and phylogenies. A phylogeny is the evolutionary history of an animal or plant group, i.e., of a taxonomic group.

Phylogenies include two parts—the first part shows the group relationships and the second part indicates the amount of evolution. Phylogenetic trees of species and higher taxa are established by morphological, physiological and molecular characteristics, and the distributions of animals and their ancestors are related to geography. In this way systematics is used to understand the evolutionary history of organisms.

2. The field of systematics provides the scientific names of organisms, descriptions of the species, the ordering of organisms into higher taxa, the classification of organisms and their evolutionary histories.

3. Systematics is also important in implementing conservation measures because it attempts to explain the biodiversity represented by the different kinds of species, knowledge that can be used in preserving and protecting endangered animals and plants.

The loss of biodiversity poses an extreme harm to the existence of mankind. An unchecked human population destroys different kinds of plants and animals for food and other purposes.

4. The destruction or suppression of harmful pests or animals by the introduction and increase of their natural enemies is called biological control.

The natural enemies of pests are often introduced for biological control for the benefit of agriculture and forestry. These natural enemies include insectivorous spiders, centipedes, some insects, frogs and birds, which are much more economical than chemical control because they have no injurious side effects.

Predaceous insects play a vital role in the natural control of injurious insects. The adult and larval stages of predatory ladybird beetles (Coccinellidae) are economically very important and are responsible for the destruction of colonies of plant lice, scale insects, mealybugs and whiteflies, which are serious pests in various parts of the world.

Some chrysopids are also predatory enemies of mealybugs and plant lice. An egg parasite, Trichogramma sp., is utilized in India for the control of sugarcane borers and bollworms of cotton.

In all cases, the proper identification of parasites and their hosts is necessary for the control of the pests. Systematists are involved in implementing biological control programmes for pests and diseases most effectively.

5. Many insects act as vectors of human diseases. For example, some species of Anopheles are vectors of malaria, Aedes aegypti spreads the dengue fever virus, and Phlebotomus argentipes spreads the pathogen of kala-azar.

So taxonomists play a vital role in the identification of vector species, and control strategies should be planned in such a way that the target species is attacked.




1 INTRODUCTION

Conserving and maintaining biodiversity is fundamental to a healthy ecosystem and is a common tenet of many knowledge systems, such as those of the circumpolar Arctic Indigenous Peoples (Gadgil, Berkes, & Folke, 1993). Although the various knowledge systems include different terms to describe the elements of biodiversity (Posey, 1999), the essential and shared components of the biodiversity concept include and require ecosystem function to be resilient if ecosystem services are to be maintained (CAFF, 2002, 2013; Reid et al., 2005). The concept that all species are connected through food webs and, in turn, are influenced by environmental forces is well understood by harvest-based Indigenous communities around the globe (Gadgil et al., 1993; Merculieff et al., 2017; Mustonen & Ford, 2013). Despite the importance of freshwater biodiversity and ecosystem services to Arctic Indigenous communities, there have been limited attempts to summarise Indigenous Knowledge (IK) regarding Arctic freshwater systems and to understand how monitoring and conservation can benefit from such knowledge.

The importance of freshwater ecosystem services in the circumpolar Arctic is reflected in Indigenous Peoples having relied for millennia on local ecosystems to provide wild fauna and flora for subsistence purposes (Huntington et al., 2013; Mazzocchi, 2006; Merculieff et al., 2017; Michelutti et al., 2013; Mustonen & Ford, 2013). Freshwater ecosystems have provided sustenance for circumpolar Indigenous Peoples since time immemorial (Huntington et al., 2013; Merculieff et al., 2017). Lakes and rivers provide drinking water, and a variety of freshwater species comprise important subsistence harvests, including anadromous and non-anadromous fish, waterfowl, mammals, and shoreline plants (CAFF, 2010; Huntington et al., 2013; Merculieff et al., 2017). In the past, peat from the edges of ponds was collected to build houses (Merculieff et al., 2017). Athabascan People have used the Yukon River and its tributaries for hunting and fishing, travel, driftwood collection (for firewood), spiritual purposes, drinking, sanitation, recreation, and other domestic uses (Wilson, 2014). Many freshwater fish species, especially salmonids, were used to feed dog sled teams, an important form of transportation in areas of the Arctic, especially from the mid-19th to 20th centuries (Andersen, 1992; Nuttall et al., 2005). Air drying freshwater and marine fish, a technique that allows for the preservation of fish in dry Arctic climates, has remained an important preservation technique for subsistence harvests by Indigenous groups across the circumpolar Arctic (Mustonen & Mustonen, 2016; Spearman, Nageak, & Elders of Anaktuvuk Pass, 2005; Wishart, 2014). Inuit communities harvesting wildlife in freshwater catchments have also contributed nutrient inputs into these systems, thus affecting the freshwater ecosystems where they have lived (Michelutti et al., 2013).

The reliance of Indigenous communities on freshwater ecosystem services promotes a strong connection to the land and a unique, in-depth understanding of organisms and ecosystem processes (ACIA, 2005; Mazzocchi, 2006; Merculieff et al., 2017; Mustonen & Ford, 2013). For example, knowledge of animal behaviour and phenology supports harvesting and hunting activities (ACIA, 2005; Merculieff et al., 2017). To recognise this unique and rich knowledge base, the Convention on Biological Diversity adopted guidelines that recognise “the close and traditional dependence of many Indigenous Peoples and local communities on biological resources [and] the contribution that Traditional Knowledge can make to both the conservation and the sustainable use of biological diversity” (Convention on Biological Diversity, 2017). This connection of Indigenous Peoples with the land allows them to observe and develop first-hand knowledge of biodiversity and changes to local ecosystems. Examples of such observations and knowledge include shifts in the timing of ice on/ice off on lakes, changes to snowpack duration, shifts in plant and animal phenology, and changes to wildlife population size, diversity, and animal health (ACIA, 2005; Alexander et al., 2011; Ford & Pearce, 2010; Merculieff et al., 2017). Clearly, Arctic Indigenous Peoples have an intimate experience with, and extensive knowledge of, freshwater ecosystems that could be applied to improve assessments of biodiversity in Arctic freshwaters and the effects of environmental change on these systems.

In addition, the inclusion of IK in Arctic freshwater biodiversity and climate assessments is important to provide a more holistic understanding of observed or studied phenomena and to include Indigenous voices and local expert knowledge (Alexander et al., 2011; CAFF, 2013; Furgal, Dickson, & Fletcher, 2006; Merculieff et al., 2017; Mistry & Berardi, 2016). Climate change is directly and indirectly affecting Arctic freshwater ecosystem biodiversity through changes to physical and chemical properties and alterations to species composition and the geographic distribution of species (Culp, Goedkoop, et al., 2012; Ford & Pearce, 2010; Lento et al., 2019; Meltofte, 2013). Moreover, increased rates of development and resource extraction in Arctic regions, including hydropower dams, mining, sport and commercial fisheries, and sport hunting, all threaten water quality, habitat condition, and the ecosystem services provided by Arctic freshwaters (Culp, Lento, et al., 2012; Huntington et al., 2013; Mustonen & Mustonen, 2016). Learning from Arctic Indigenous Peoples’ understanding and observations of climate change and development impacts is critical to providing a better understanding of the state of Arctic freshwaters.

Arctic freshwater biodiversity is not well understood and critical gaps exist in our knowledge (Meltofte, 2013), particularly in remote areas of the Arctic that are difficult to access (Lento et al., 2019). Understanding both the historical and current state of freshwater biodiversity in the Arctic will benefit from the use of all possible sources of knowledge in the monitoring and management of these ecosystems (CAFF, 2013). The Freshwater Group of the Circumpolar Biodiversity Monitoring Program (CBMP), part of the Conservation of Arctic Flora and Fauna (CAFF) working group of the Arctic Council, has conducted the first circumpolar assessment of the state of Arctic freshwater biodiversity to support monitoring and assessment of changes resulting from climate change and development (Lento et al., 2019). The assessment focused on biodiversity data compiled from western science (WS), a term used here and throughout this paper following the definitions of Cajete (2000) and Mazzocchi (2006), including government, industry, and academic monitoring data from all Arctic countries. A high priority for this assessment was to ensure the inclusion of IK, recognising its valuable contribution to characterising and monitoring freshwater biodiversity in the Arctic. However, the CBMP also recognises that it is critical to approach this in a way that is respectful to the knowledge holders and does not seek to ignore their right to ownership of their knowledge. The first step in including IK in this assessment was to understand the scope and breadth of documented IK that might contribute to widespread assessment of biodiversity.

Although previous efforts have focused on summarising IK to describe general impacts of climate change on Arctic ecosystems (e.g. see ACIA, 2005; Ford & Pearce, 2010; Merculieff et al., 2017), this manuscript presents the results of a systematic literature review using thematic coding to summarise previously documented observations from circumpolar IK on Arctic freshwater biodiversity (defined here as the variety of organisms found in Arctic freshwater ecosystems) in Canada, Greenland, Fennoscandia (Norway, Sweden, and Finland), Russia, and the U.S.A. (Alaska). This is the first time this approach has been applied to this specific topic. The specific goals of this systematic literature review were to: (1) improve understanding of documented IK resources on the topic of Arctic freshwater biodiversity; (2) determine if observations from previously documented IK could contribute to mapping freshwater biodiversity across the circumpolar Arctic; (3) determine if observations from previously documented IK could support the identification of emerging trends in Arctic freshwater biodiversity and habitats; and (4) identify synergies or discrepancies between IK and WS knowledge bases, or new information or trends in Arctic freshwater biodiversity not documented through WS methods. In this review, we did not attempt to analyse or interpret the previously documented IK itself, or to contrast regional differences in how IK was collected or documented. Instead, the aim was to show what observations on Arctic freshwaters have been recorded in the literature from IK, and to understand how previously documented IK can be included in biodiversity assessments.
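
As a rough sketch of what thematic coding involves computationally, the snippet below tallies theme labels assigned to reviewed documents; the regions, themes, and counts are invented for illustration and are not results of the review described here.

```python
from collections import Counter

# Invented (region, theme) codes assigned to passages during document review.
codings = [
    ("Canada", "fish abundance"), ("Canada", "habitat change"),
    ("Alaska", "fish abundance"), ("Alaska", "bird diversity"),
    ("Fennoscandia", "habitat change"), ("Canada", "fish abundance"),
]

# Emergent themes are those that recur across documents and regions.
theme_counts = Counter(theme for _, theme in codings)
region_counts = Counter(region for region, _ in codings)
print(theme_counts.most_common())
print(region_counts.most_common())
```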

This manuscript presents the emergent themes in Arctic freshwater biodiversity, as identified through IK in published literature. Although not all of the retrieved documents focused specifically on IK of Arctic freshwater biodiversity, they contained considerable information about Arctic freshwater biodiversity and habitats, allowing a preliminary understanding of these systems. The emergent themes primarily provided knowledge on the diversity and abundance of freshwater organisms, including fish, birds, mammals, and plants, as well as the state of, or changes to, freshwater habitats. The results present an initial inventory and state of recorded Indigenous Knowledge of Arctic freshwater taxa and ecosystems for each country. Ultimately, suggestions are made for improving the incorporation of IK into future Arctic freshwater biodiversity assessments.


Rationale:

Most attempts to evaluate the impact of mental illness-associated genes or variants on brain function have been limited in scale to one or a few genes tested against a relatively narrow range of biological endpoints. Systematic efforts are hindered by our limited ability to fully capture the spectrum of potential disease-relevant biological phenotypes across a sufficiently large number of genes or variants in a cost-efficient and comprehensive way. New scalable technologies are emerging that address these limitations, offering the opportunity to probe the role of genetic variation in neurodevelopmental and psychiatric disorders with systematic and coordinated assays that more thoroughly capture the genetic and phenotypic space at a scale and breadth not currently covered by existing NIMH efforts.




3S--Systematic, Systemic, and Systems Biology and Toxicology.

Systematic, systemic, and systems sound very much alike, but they represent three different approaches in the life sciences. We will argue here that synergy between them is necessary to achieve meaningful understanding in biomedicine. Biology stands for the unperturbed "physiological" behavior of our model systems. Toxicology is certainly one of the more applied sciences studying the perturbation of model systems (pathophysiology); ultimately, all experimental medical research links to this. Here, we focus primarily upon examples from toxicology, which is the authors' primary area of expertise, and the restriction to this area in the title appears prudent.

Systematic is a term most commonly used in the context of systematic reviews, i.e., evidence-based approaches that aim for a comprehensive, objective and transparent use of information. Born in the clinical and health care sciences, these approaches have gained significant traction in toxicology (1) but have not had major impacts on other pre-clinical and biological areas. We will argue that this represents an omission and an opportunity, as the respective tools for evidence evaluation (quality scoring, risk-of-bias analysis, etc.) and integration (meta-analysis, combination of information streams, etc.) are widely applicable across scientific disciplines. The resulting condensation of information and mapping of knowledge deficits as well as the cross-talk with quality assurance, Good Practices, and reporting standards, yield valuable lessons on how the systematic evaluation of available scientific knowledge can accelerate the organization of vast, and rapidly expanding, knowledge generation.

Systemic views are primarily organism-level views on problems (the big-picture view), the opposite of studying smaller and smaller elements of the machinery. However, it is also thinking in terms of functionalities. Cell culture is starting to embrace this with the advent of complex co-cultures with multiplexed endpoints (Kleinstreuer et al., 2014) and organotypic cultures reproducing organ architecture and functionalities (microphysiological systems, MPS) (Marx et al., 2016), now even moving to multi-organ models of a human-on-chip / body-on-chip (Skardal et al., 2016; Watson et al., 2017). The concomitant emerging availability of human stem cells that can be used to produce high-quality organoids further adds to this revolutionary change (Suter-Dick et al., 2015), as shown recently for the BrainSphere organoid model (Pamies et al., 2017a, 2018a). Functional thinking can also be applied to cellular biology when considering toxic impact, for example, repair, recovery and resilience (Smirnova et al., 2015). We are returning to seeing the forest, not just individual trees.

Systems biology and, more recently, systems toxicology (Hartung et al., 2012, 2017a) aim to study systems behavior: "Systems biology begins to recognize the limitations of the reductionist approach to biology" (Joyner and Pedersen, 2011). In its detailed definition (Ferrario et al., 2014), it is based on a comprehensive study of our knowledge on these systems, which is translated into computer models, allowing virtual experiments/simulations that can be compared to experimental results. Systems approaches require sufficient biological and physiological detail about the relevant molecular pathways, associated cellular behaviors, and complex tissue-level interactions, as well as computational models that adequately represent biological complexity while offsetting mathematical complexity. Bernhard O. Palsson wrote in his book Systems Biology: Constraint-Based Reconstruction and Analysis, "The chemical interactions between many of these molecules are known, giving rise to genome-scale reconstructed biochemical reaction networks underlying cellular functions". So, to some extent, the systems toxicology approach is systematic and systemic in view, but it brings in the additional aspects of knowledge organization using dynamic models of physiology.

Figure 1 shows how these different components come together. This paper suggests that the traditional 3Rs approach, which has served us well to replace a substantial part of acute and topical toxicities, might need approaches along the 3S for systemic toxicity testing replacement. It suggests that systematic organization of existing knowledge be combined with experimental and computational systems approaches to model the complexity of (patho-)physiology.

2 Systematic biology and toxicology

Perhaps a better term would be "systematic review" of biology and toxicology. Similar to the term evidence, the concept of being systematic sounds like it must be a given for any scientific endeavor. Unfortunately, it is not. Most of us are drowning in a flood of information. The seemingly straightforward request of evidence-based medicine (EBM) to assess all available evidence quickly reaches limits of practicality. A systematic evaluation of the literature often returns (tens of) thousands of articles. Only very important questions, which must at the same time be very precise and very focused, warrant comprehensive efforts to analyze them. It is still worth the effort--as typically the result is strong evidence that is difficult to refute.

Earlier work in this area (Hoffmann and Hartung, 2006) led to the creation of the Evidence-based Toxicology Collaboration (EBTC) in 2011. Developments have been documented (Griesinger et al., 2008; Stephens et al., 2013) and have gained acceptance (Aiassa et al., 2015; Stephens et al., 2016; Agerstrand and Beronius, 2016). The field is developing very rapidly (Morgan et al., 2016; Mandrioli and Silbergeld, 2016). The fundamentals and advantages of evidence-based approaches were previously detailed in a dedicated article (Hartung, 2009a) that appeared earlier in this series (Hartung, 2017a), and are not repeated here. Notably, the call for a systematic review of animal testing methods is getting louder and louder (Basketter et al., 2012; Leist et al., 2014; Pound and Brakken, 2014); the work of SYRCLE, the SYstematic Review Centre for Laboratory Animal Experimentation, and of CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) is especially noteworthy. Here, we focus on two main points, one concerning opportunities for application in biology and other non-toxicology biomedical sciences, and the other framing the utility of systematic review in the context of systemic toxicities and systems toxicology.

Note that evidence-based approaches also have a lot to offer across diverse areas of biomedicine. Mapping what we know and what we do not know helps a field to focus research and resources, not only in areas where safety is at stake. In the clinical arena, EBM was the catalyst for many quality initiatives. Nobody wants to do research that is excluded from deriving authoritative conclusions by peers for quality (of reporting) reasons. A significant portion of irreproducible science could be avoided by using evidence-based approaches (Hartung, 2013; Freedman et al., 2015).

It should also be noted that, in the context of systemic toxicities, we first of all need a systematic review of the traditional test methods, the information that they provide, and the decision contexts in which they are used. This was the unanimous recommendation of the roadmap exercise on how to overcome animal-based testing for systemic toxicology (Basketter et al., 2012; Leist et al., 2014). Systematic review was also suggested as a necessary element for a mechanistic validation (Hartung et al., 2013); this represents a key opportunity for the validation of both adverse outcome pathways (AOP) and mechanistic experimental models such as MPS. Lastly, systems toxicology should be based on a comprehensive analysis of biological systems characteristics, again calling for systematic literature analysis.

3 Systemic biology and toxicology

Systemic biology is not a common term--probably "physiology" covers it best, though much of physiology is studied in isolated organs. With the flourishing of molecular and cellular biology and biochemistry, systemic biology has become less prominent over the last few decades. However, the need to understand molecular and mechanistic findings in the context of an intact organism is obvious. This is one of the arguments for whole-animal experimentation that is more difficult to refute. In fact, it is the use of genetically modified animals in academic research that is driving the steady increase in animal use statistics after three decades of decline (Daneshian et al., 2015).

Regulatory assessment of the complex endpoints of repeated-dose toxicity (RDT), carcinogenicity, and developmental and reproductive toxicity (DART), which are often grouped under systemic toxicities, still relies heavily on animal testing. Arguably, there is hardly any toxicity in nature that is not systemic, as even topical effects such as skin sensitization include inflammatory components involving leukocyte infiltration, and other acute effects, e.g., lethality, involve many parts of the organism. But the aforementioned areas of toxicology represent the best examples of systemic toxicology, in which new approaches are needed but implementation is not straightforward.

The following section first addresses the limitations of current systemic toxicity testing and then reviews the alternative approaches that were developed in these areas of systemic toxicity in the last decades to waive testing or reduce the number of animals used.

The shortcomings of the current paradigm have been discussed earlier (Hartung, 2008a, 2013; Basketter et al., 2012; Paparella et al., 2013, 2017); some studies that cast doubt on their performance are summarized in Table 1, using the more factual references, though the balance between opinion and evidence is difficult to strike in the absence of systematic reviews (Hartung, 2017b). However, these studies stress the need for the strategic development of a new approach (Busquet and Hartung, 2017), especially for the systemic toxicities.

Alternative approaches range from the individual test methods (e.g., the cell transformation assay for carcinogenicity and the zebrafish and embryonic stem cell embryotoxicity tests for reproductive toxicity) to animal reduction approaches such as the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) strategy for carcinogenicity of pharmaceuticals and the extended-one-generation reproductive toxicity study. Currently, these areas are being revitalized owing to the broad recognition of the shortcomings of current in vivo testing requirements and the current regulatory environment (e.g., the European REACH and Cosmetic Regulation, the US amendment to the Toxic Substance Control Act (TSCA), i.e., the Lautenberg Chemical Safety for the 21st Century Act). More recent developments aimed at a more human-relevant chemical assessment, which rely on the integration of different sources of information, are also described.

The assessment of repeated-dose systemic toxicity, carcinogenicity, and developmental and reproductive toxicity represents an essential component of the safety assessment of all types of substances, as these are among the endpoints of highest concern. As such, their assessment still relies mainly on animal tests. Progress toward replacing this paradigm is summarized in the following sections.

3.1 Repeated-dose systemic toxicity

Repeated chemical treatment, usually on a daily basis, from several days to life-long exposure, is key to the hazard assessment of substances as it covers toxicokinetic aspects, i.e., absorption, distribution, metabolism and excretion (ADME), as well as toxicodynamics, with the potential for all organ effects and interactions. The present testing schemes are based on rodent or non-rodent studies performed for 28 days (subacute toxicity), 90 days (subchronic toxicity), or 26-102 weeks (chronic toxicity). These tests typically form the basis for identifying hazards and their characterization, especially no-effect levels (NOEL). This approach rests upon the key assumption that the animal models are representative of human ADME and effects. In fact, the enormous differences in ADME represented a key reason for drug attrition two decades ago (attrition has dropped from 40-60% to around 10% nowadays (Kennedy, 1997; Kubinyi, 2003; Kola and Landis, 2004)), as the development of a portfolio of in vitro and in silico tools has drastically improved the situation, as reviewed earlier in this series (Tsaioun et al., 2016). The toolbox is neither perfect nor complete but, as discussed in the context of developing a roadmap for improvement (Basketter et al., 2012; Leist et al., 2014), there was general consensus among the experts involved that the missing elements are feasible and within reach. For example, epithelial barrier models (Gordon et al., 2015) as input into physiology-based (pharmaco-/toxico-) kinetic (PBPK) modelling were identified as a key opportunity for modelling RDT and were recently the subject of the Contemporary Concepts in Toxicology 2018 meeting "Building a Better Epithelium".
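
To give a feel for why repeated dosing is kinetically distinct from single dosing, here is a deliberately simple one-compartment sketch with once-daily dosing and first-order elimination; the parameters are invented, and the model ignores the absorption, distribution, and metabolism detail that PBPK models are built to capture.

```python
import math

# Invented parameters for a one-compartment, once-daily dosing model.
dose = 10.0          # amount administered per day (arbitrary units)
half_life_h = 24.0   # elimination half-life in hours
k = math.log(2) / half_life_h   # first-order elimination rate constant

amount = 0.0
for day in range(1, 29):  # a 28-day "subacute" schedule
    amount = amount * math.exp(-k * 24) + dose  # decay over 24 h, then next dose
    if day in (1, 7, 28):
        print(f"day {day:2d}: body burden just after dosing = {amount:.1f}")
# The body burden climbs toward a plateau where daily elimination balances the dose.
```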

The Adler et al. (2011) report already compiled the many partial solutions to RDT. The problem is how to integrate these elements into a testing strategy that provides predictivity of human toxicity equivalent to or greater than that of an animal study. This is a difficult question to answer, as in most cases we do not actually know how predictive the animal studies are, owing to the absence of human data. A notable exception is the area of topical toxicities such as skin sensitization, where the predictive performance of the animal studies against human data has been shown to be essentially equivalent to the reproducibility of the animal data (Kleinstreuer et al., 2018).

We can start by asking how reproducible they are and how well different laboratory animal species predict each other. An important analysis conducted by Wang and Gray (2015) gives us an idea: very little. They compared earlier RDT findings with the non-cancer pathologies observed in cancer bioassays in rats and mice of both sexes run by the National Toxicology Program for 37 substances. They concluded: "Overall, there is considerable uncertainty in predicting the site of toxic lesions in different species exposed to the same chemical and from short-term to long-term tests of the same chemical." Although this study covered only 37 chemicals, it gives us a hint that there is no reason to assume that the predictivity of rodent data for humans will be any better. For a larger-scale comparison, the key obstacle is the lack of harmonized ontologies and reporting formats for RDT (Hardy et al., 2012a,b; Sanz et al., 2017). Very often it is unclear whether effects on certain organs or systemic effects were not reported because (a) they were assessed but not found and were not reported as negative data; (b) there were already other organ toxicities at lower doses and, thus, the data on remaining organs were omitted or not assessed; or (c) only one organ was the focus of the study and the remaining and/or systemic effects were out of scope for the given study. Therefore, the standardized curation of databases with detailed organ effects is a resource-intensive problem, and there are currently none that facilitate widespread reproducibility assessments. Independent of the specific site of toxic manifestations, however, it is relatively easy to compare NOELs across studies. Using our machine-readable database from the REACH registration process (Luechtefeld et al., 2016a), such comparisons between 28- and 90-day studies showed strong discrepancies (Luechtefeld et al., 2016b). A systematic evaluation of RDT studies will enable further analysis of the current testing paradigm.
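
The kind of cross-study comparison described above can be sketched as computing, per substance, the ratio of NOELs reported in 28-day versus 90-day studies. The substances and values below are invented placeholders, not data from the REACH database mentioned in the text.

```python
# Hypothetical NOELs (mg/kg bw/day) for the same substances in two study durations.
noels = {
    "substance_1": {"28_day": 100.0, "90_day": 30.0},
    "substance_2": {"28_day": 50.0,  "90_day": 45.0},
    "substance_3": {"28_day": 10.0,  "90_day": 1.0},
}

for name, study in noels.items():
    ratio = study["28_day"] / study["90_day"]
    flag = "  <-- large discrepancy" if ratio >= 3 else ""
    print(f"{name}: 28-day/90-day NOEL ratio = {ratio:.1f}{flag}")
```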

Given these problems, it will be very difficult to model such findings with a test strategy (Chen et al., 2014). Our t4 workshop on Adversity in vitro (report in preparation), in the context of the Human Toxome Project (Bouhifd et al., 2015), took a different approach: based on the observation that the majority of chemicals are quite promiscuous, i.e., start perturbing the same cellular pathways in a relatively narrow concentration range, it appears feasible to define in vitro benchmark doses at which adversity starts using a set of complementary cell-based assays (Judson et al., 2016). Quantitative in vitro to in vivo extrapolation based on exposure data (plus some safety factors) should allow definition of the exposures necessary to reach such tissue concentrations. Without necessitating a prediction of which organs would be affected, a safe use range would be established. In fact, the current risk assessment paradigm also makes little use of which organ exhibits toxic effects first but relies upon the most sensitive endpoint to define a benchmark / no-effect dose. Obviously, this does not work for substances whose molecular initiating events (MIE) are not reflected in the cell test battery used to derive benchmark doses or NOELs. This means that over time this battery should be complemented with specific assays for those substances whose effects may be missed with this approach. Read-across strategies could add safety measures to such an approach, i.e., besides defining the safe dose, read-across and other in silico tools could provide alerts for where to add additional safety factors. In cases where human exposure is not sufficiently below doses that can reach critical tissue concentrations, it will be necessary to follow a more investigative toxicological approach, i.e., a mechanistic evaluation addressing the human relevance of the findings.
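
A minimal numerical sketch of the strategy just outlined, with invented numbers: take an in vitro benchmark concentration at which pathway perturbation begins, convert it to an external dose with a single simplified kinetic factor (standing in for a full PBPK-based reverse dosimetry), and apply a safety factor.

```python
# Invented illustration of quantitative in vitro-to-in vivo extrapolation (QIVIVE).
benchmark_conc_uM = 5.0   # in vitro concentration where pathway perturbation begins
dose_per_uM = 0.8         # simplified stand-in for PBPK reverse dosimetry:
                          # external dose (mg/kg/day) needed to sustain 1 uM in tissue
safety_factor = 100       # conventional interspecies/interindividual factor

equivalent_dose = benchmark_conc_uM * dose_per_uM   # mg/kg/day at the benchmark
safe_exposure = equivalent_dose / safety_factor
print(f"Estimated safe exposure: {safe_exposure:.3f} mg/kg/day")  # 0.040
```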

Biological models for different organs, e.g., liver, kidney, lung or brain, have been established, and new culture techniques, especially in the form of 3D organoids and MPS, are expected to solve present in vitro testing issues concerning long-term culturing, the absence of relevant immune cells (Hengstler et al., 2012) and the availability of fully mature cell phenotypes. Stem cells, especially induced pluripotent stem cells (iPSC), are a major source of tissue and cell models not available otherwise. Therefore, research on the generation of 2D cultures and 3D tissues from stem cells is of high importance. The formalization of our mechanistic knowledge via adverse outcome pathways (AOP) (Leist et al., 2017) further helps to assess whether these models are relevant. New prospects come from systems approaches, where human complexity is either modelled experimentally or virtually, as discussed below. The European Commission-funded Horizon 2020 consortium EU-ToxRisk was in fact set up to integrate advances in cell biology, omics technologies, systems biology and computational modelling to define the complex chains of events that link chemical exposure to toxic outcome in the areas of RDT and developmental and reproductive toxicity (Daneshian et al., 2016). The vision of EU-ToxRisk, which builds on the activities started by the SEURAT-1 EU framework project, is to progress towards an animal-free toxicological assessment based on human cell responses and a comprehensive mechanistic understanding of the cause-consequence relationships of chemical adverse effects.

Substances are defined as carcinogenic if after inhalation, ingestion, dermal application or injection they induce (malignant) tumors, increase their incidence or malignancy, or shorten the time to tumor occurrence. It is generally accepted that carcinogenesis is a multi-hit/multi-step process from the transition of normal cells into cancer cells via a sequence of stages and complex biological interactions, strongly influenced by factors such as genetics, age, diet, environment and hormonal balance (Adler et al., 2011). Although attributing observed cancer rates to individual specific causes remains a challenge, the fraction of all cancers currently attributed to exposure to carcinogenic pollutants is estimated to range from less than 1%, to 10-15%, to as high as 19% (Kessler, 2014; Colditz and Wei, 2012; Anand et al., 2008; President's Cancer Panel, 2010; GBD 2013 Risk Factors Collaborators, 2013).

For nearly half a century, the 2-year rodent cancer bioassay was widely considered the "gold standard" for determining the carcinogenic potential of a chemical, and OECD Test Guidelines (TG) have existed since 1981 (Madia et al., 2016). Its adequacy to predict cancer risk in humans, however, is the subject of considerable debate (Gottmann et al., 2001; Alden et al., 2011; Knight et al., 2006a,b; Paules et al., 2011) (Tab. 1). Recently, Paparella and colleagues (2017) conducted a systematic analysis of challenges and uncertainties associated with the cancer bioassay. Notably, extrapolating from rodents to humans and quantitative risk estimation have limited accuracy (Knight et al., 2006b; Paparella et al., 2017; Paules et al., 2011). Moreover, the rodent bioassay, as originally designed, does not take into account windows of susceptibility over the lifetime, and so may not have adequate sensitivity to detect agents, such as endocrine active chemicals, that alter susceptibility to tumors (Birnbaum and Fenton, 2003). Furthermore, these studies are very time- and resource-consuming, taking up to three years to complete, and the high animal burden has raised ethical concerns.

From a regulatory perspective, the gradual recognition of nongenotoxic mechanisms of carcinogenesis (that do not involve direct alterations in DNA) has complicated the established relationship between genotoxicity and carcinogenicity and has challenged the conventional interpretation of rodent carcinogenicity results in terms of relevance to human cancer (Hengstler et al., 1999; Waters, 2016). Because of the default assumption in regulatory decision-making regarding the presumed linearity of the dose-response curve for genotoxic carcinogens, the classification of carcinogens as genotoxic or non-genotoxic became an essential but highly controversial component of cancer risk assessment.

The area of carcinogenicity was very quiet for decades, but in recent years it has been revitalized due to broad recognition of the shortcomings of current regulatory in vivo testing requirements, and the awareness of information gaps created by legislation that limits or bans the use of animals (e.g., European REACH Regulation (EC) No. 1907/2006 and Cosmetic Regulation (EC) No 1223/2009).

Table 2 shows some steps on the road to replacing the traditional paradigm, some of which are detailed in the following paragraphs.

Beginning in the late 1960s, short-term genotoxicity assays were developed as highly predictive screens for carcinogens. This led to a variety of well-established in vitro assays and, since the 1980s, to their respective OECD TGs, which have been used successfully to predict genotoxicity, label chemical substances and inform cancer risk assessment. However, these tests are not at present considered to fully replace the animal tests currently used to evaluate the safety of substances (Adler et al., 2011). In the last decade, several activities have been carried out worldwide with the aim of optimizing strategies for genotoxicity testing, both with respect to the basic in vitro testing battery and to in vivo follow-up tests. This was motivated by scientific progress and the significant experience of 40 years of regulatory toxicology testing in this area.

One of the major gaps identified was the need to ensure that in vitro tests do not generate a high number of false positive results, which trigger unnecessary in vivo follow-up studies, hence generating undesirable implications for animal welfare (Kirkland et al., 2005). The recommendations from a workshop organized by ECVAM (Kirkland et al., 2007) and from an EURL ECVAM strategy paper (EURL ECVAM, 2013a) on how to reduce genotoxicity testing in animals have contributed to several international initiatives aiming to improve the existing in vitro genotoxicity tests and strategy and to identify and evaluate new test systems with improved specificity, while maintaining appropriate sensitivity. The outcome of this work led to the revision of OECD TGs for genotoxicity.

Meanwhile, the in vitro micronucleus test, which was the first test to be evaluated by ECVAM through retrospective validation (Corvi et al., 2008), is acquiring an increasingly prominent role in the genotoxicity strategy. It has in fact been proposed as the assay to be used in a two-test battery together with the Ames test (Kirkland et al., 2011; Corvi and Madia, 2017). Further in vitro methods are being developed and validated, especially aiming at a full replacement, as in the case of genotoxicity assays in 3D human reconstructed skin models, and for a better understanding of modes of action (MoA) using toxicogenomics-based tests and biomarker assays (Corvi and Madia, 2017).

Transgenic mouse model tests are possible alternatives to the classical two-year cancer bioassay owing to their enhanced sensitivity as predictors of carcinogenic risk to humans (Tennant et al., 1999). In fact, these models have a reduced tumor latency period (6-9 months) to chemically-induced tumors and may result in a significant reduction in the use of experimental animals (20-25 animals/sex/treatment group) (Marx, 2003). A study coordinated by ILSI/HESI (ILSI HESI ACT, 2001; MacDonald et al., 2004) led to the initial acceptance by pharmaceutical regulatory agencies of three primary models, p53+/-, Tg.AC and rasH2, to be used in lieu of a second-species full carcinogenicity bioassay (ICH, 2009).

Cell transformation assays

In vitro cell transformation assays (CTA) for the detection of potential carcinogens have been in use for about four decades. Transformation involves several events in the cascade potentially leading to carcinogenesis and is defined as the induction of phenotypic alterations in cultured cells that are characteristic of tumorigenic cells (LeBoeuf et al., 1999; Combes et al., 1999). Despite the long experience in the use of CTAs, the intense and prolonged activities at the OECD from 1997 to 2016, and the performance of ECVAM and JaCVAM validation studies (EURL ECVAM, 2012, 2013b), the assays were adopted as OECD Guidance Documents (OECD, 2015, 2016) but have so far not been adopted as OECD TGs in their own right. Among the obstacles to the development of an OECD TG for the Syrian Hamster Embryo (SHE) Cell Transformation Assay (SHE CTA) was the lack of a coordinated full validation. The combination of a detailed review paper (DRP) and a prospective limited validation study triggered the need for additional analyses by the OECD expert group (OECD, 2007; Corvi et al., 2012). This also demonstrates that a DRP cannot be considered equivalent to a retrospective validation. Moreover, with no Integrated Approach to Testing and Assessment (IATA) or alternative testing strategy available for carcinogenicity, and since there was common agreement that the assay should not be used as a stand-alone, no strategy was available on how to apply it for regulatory decision-making. This dilemma, "What comes first: the chicken or the egg, the test method or the testing strategy (or IATA)?", raises the question of whether in the future the OECD should accept only new methods associated with a well-defined testing strategy or an IATA. A better characterization of the performance of the CTA to detect non-genotoxic carcinogens was considered important because the data collected in the DRP were biased towards genotoxic carcinogens, which reflects the data available in the public domain. Another recurring concern was that the mechanistic basis of cell transformation and the link to tumorigenesis are not yet completely elucidated, which hampers interpretation of the findings from such an assay.

During the course of the OECD CTA activities, the regulatory framework in Europe changed considerably with the ban on animal testing for cosmetics (Hartung, 2008c) coming into force and the REACH evaluation of industrial chemicals commencing (Hartung, 2010b). This has put a huge burden on industry, which is limited in the use of in vivo tests to confirm results from in vitro tests, and on regulators, who have to assess carcinogenicity potential without in vivo data, leading to a more cautious uptake of in vitro tests to support assessment of such a critical endpoint. Many of these considerations apply also to the CTA based on Bhas 42 cells.

IATAs for non-genotoxic carcinogens

Non-genotoxic carcinogens contribute to an increased cancer risk by a variety of mechanisms that are not yet directly assessed by international regulatory approaches. In April 2014, the OECD WNT recognized that the CTA alone was insufficient to address non-genotoxic carcinogenicity and that a more comprehensive battery of tests addressing different non-genotoxic mechanisms of carcinogenicity would be needed in the future. This discussion led to the identification of the need for an IATA to properly address the issue of non-genotoxic carcinogenicity, into which the CTA, together with other relevant assays, could fit. Under the auspices of the OECD, an expert working group was thus set up to examine the current international regulatory requirements and their limitations with respect to non-genotoxic carcinogenicity, and how an IATA could be developed to assist regulators in their assessment of non-genotoxic carcinogenicity (Jacobs et al., 2016). Moreover, the working group is tasked to review, describe and assess relevant in vitro assays with the aim of tentatively organizing them into levels of testing, following the adverse outcome pathway format, such that possible structure(s) of the future IATA(s) can be created. Different in vitro methods are in fact already available as research tools to study a number of potential non-genotoxic mechanisms, such as oxidative stress or inhibition of gap junction intercellular communication (GJIC) (Basketter et al., 2012; Jacobs et al., 2016). Recent work has focused on mapping in vitro high-throughput screening (HTS) assays, e.g., from the ToxCast research program, to the hallmarks of cancer (Kleinstreuer et al., 2013a) and the characteristics of carcinogens (Chiu et al., 2018). However, these methods cannot currently be used to reliably predict carcinogenic potential; rather, they are useful for better understanding the mechanistic basis of effects elicited by a compound, as demonstrated by their use in International Agency for Research on Cancer (IARC) monographs within a weight-of-evidence strategy (i.e., IATA).
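
In the spirit of the assay-to-hallmark mapping cited above (Kleinstreuer et al., 2013a), a toy sketch of the bookkeeping involved might look as follows; the assay names, hallmark labels and hit calls are invented placeholders.

```python
# Hedged sketch: aggregate positive HTS assays into a hallmark profile.
from collections import Counter

assay_to_hallmark = {                     # invented mapping for illustration
    "oxidative_stress_assay": "genome instability",
    "gjic_inhibition_assay": "evading growth suppressors",
    "nfkb_reporter_assay": "tumor-promoting inflammation",
}
hits = ["oxidative_stress_assay", "nfkb_reporter_assay"]  # positives for one compound

profile = Counter(assay_to_hallmark[a] for a in hits)
for hallmark, n in profile.items():
    print(f"{hallmark}: {n} positive assay(s)")
```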

Toxicogenomics-based test methods for carcinogenicity

Toxicogenomics for the study of carcinogenicity has been applied to several in vitro and short-term in vivo test systems (Vaccari et al., 2015; Schaap et al., 2015; Worth et al., 2014). For example, the EU-Framework Project carcinoGENOMICS, which aimed at developing in vitro toxicogenomics tests to detect potential genotoxicants and carcinogens in liver, lung and kidney cells, also assessed the preliminary reproducibility of the assays using different bioinformatics approaches (Doktorova et al., 2014; Herwig et al., 2016). Potential applications of toxicogenomics-based assays are clarification of mode of action (MoA), hazard classification, derivation of points of departure (PoD) and prioritization (Paules et al., 2011; Waters, 2016). Among these, the targeted use of transcriptomics tests for MoA determination seems to be the preferred application. However, there is still limited implementation of transcriptomics in regulatory decision-making, as discussed at a recent workshop featuring multi-sector and international perspectives on current and potential applications of genomics in cancer risk assessment, organized by the Health and Environmental Sciences Institute (HESI), Health Canada and McGill University in Montreal in May 2017. Even though companies make use of transcriptomics-based tests to guide internal decisions, uncertainty about how these data would be interpreted by regulators is among the main roadblocks identified for submission of data. In addition, the lack of validation and regulatory guidance were considered roadblocks (Corvi et al., 2016).

Systematic approaches to carcinogenicity assessment

Identification and incorporation of important, novel scientific findings providing insights into cancer mechanisms is an increasingly essential aspect of carcinogen hazard identification and risk assessment. In recent years, the IARC realized that its process for classifying human carcinogens was complicated by the absence of a broadly accepted, systematic method to evaluate mechanistic data to support conclusions regarding human hazard from exposure to carcinogens. First, no broadly accepted systematic approach was in place for identifying, organizing, and summarizing mechanistic data for the purpose of decision-making in cancer hazard identification. Second, the agents documented and listed as human carcinogens showed a number of characteristics that are shared among many carcinogenic agents. Hence, ten key properties that human carcinogens commonly exhibit and that can encompass many types of mechanistic endpoints were identified. These characteristics were used to conduct a systematic literature search focused on relevant endpoints that provides the basis for an objective approach to identifying and organizing results from pertinent mechanistic studies (Smith et al., 2016).

An example of a comprehensive systematic literature review was recently compiled by Rodgers et al. (2018). Here, epidemiologic studies published since 2007 relating to chemicals previously identified as mammary gland toxicants were reviewed. The aim was to assess whether study designs captured relevant exposures and disease features, including windows of susceptibility, suggested by toxicological and biological evidence of genotoxicity, endocrine disruption, tumor promotion, or disruption of mammary gland development. Overall, the study added to evidence of links between environmental chemicals and breast cancer.

Besides systematic reviews, IATA can be considered approaches that integrate and weight all relevant existing evidence in a systematic manner to guide the targeted generation of new data, where required, and to inform regulatory decision-making regarding potential hazard and/or risk (e.g., the IATA for non-genotoxic carcinogens as described above).

Alternative approaches to rodent long-term carcinogenicity studies for pharmaceuticals

Mainly due to deficiencies of animal carcinogenicity studies and based on some extensive data reviews, representatives of the pharmaceutical industry have leveraged decades of experience to make a proposal for refining the criteria for when carcinogenicity testing may or may not be warranted for pharmaceuticals. In August 2013, an ICH Regulatory Notice Document (RND) was posted by the Drug Regulatory Authorities (DRAs) announcing the evaluation of an alternative approach to the two-year rat carcinogenicity test (ICH Status Report, 2016). This approach is based on the hypothesis that the integration of knowledge of pharmacological targets and pathways together with toxicological and other data can provide sufficient information to anticipate the outcome of a two-year rat carcinogenicity study and its potential value in predicting the risk of human carcinogenicity of a given pharmaceutical. The rationale behind this proposal was supported by a retrospective evaluation of several datasets from industry and drug regulatory agencies, which suggests that up to 40-50% of rat cancer studies could be avoided (ICH Status Report, 2016; Sistare et al., 2011; van der Laan et al., 2016).

A prospective evaluation study to confirm the above hypothesis is ongoing. The industry sponsors are encouraged to submit a carcinogenicity assessment document (CAD) to address the carcinogenic potential of an investigational pharmaceutical and predict the outcome and value of the planned two-year rat carcinogenicity study prior to knowing its outcome (ICH Status Report, 2016). Predictions in the submitted CADs will then be checked against the actual outcome of the two-year rat studies as they are completed. The results of this study are expected in 2019. Currently, the EPAA (European Partnership for Alternative Approaches to Animal Testing) is carrying out a project to evaluate whether a similar approach is also applicable to the carcinogenicity assessment of agrochemicals.

3.3 Reproductive and developmental toxicity

Reproductive toxicity is defined as "effects such as reduced fertility, effects on gonads and disturbance of spermatogenesis; this also covers developmental toxicity" (Ferrario et al., 2014), while developmental toxicity covers effects such as "growth and developmental retardation, malformations, and functional deficits in the offspring". Often referred to in combination as DART (developmental and reproductive toxicity), the assessment of these endpoints aims to identify possible hazards to the reproductive cycle, with an emphasis on embryotoxicity. Only 2-5% of birth defects are associated with chemical and physical stress (Mattison et al., 2010), including mainly the abuse of alcohol and other drugs, with a far greater percentage attributable to known genetic factors. Overall, approximately 50% of birth defects have unknown causes (Christianson et al., 2006). The available database is even more limited for the assessment of the prevalence of effects on mammalian fertility.

Similarly, DART was not in the foreground of updates to safety assessments for many years after the shock of the thalidomide disaster (Kim and Scialli, 2011) had died down. More recently, the European REACH legislation, which is extremely demanding in this field (Hartung and Rovida, 2009), has stirred discussion, notably because tests like the two-generation study are among the costliest and require up to 3,200 animals (traditional two-generation study) per substance. In the drug development area, the discussion has focused mainly around a possible replacement of the second species by human mechanistic assays and the value of using non-human primates for biologicals.

Another driving force is the European ban on animal testing for cosmetics ingredients (Hartung, 2008b). A series of activities by ECVAM and CAAT, including several workshops, have tackled this challenge. The Integrated Project ReProTect (Hareng et al., 2005) was one of their offspring, pioneering several alternative approaches, followed by projects like ChemScreen and most recently the flagship program EU-ToxRisk4 (Daneshian et al., 2016).

Developmental disruptions are especially difficult to assess (Knudsen et al., 2011), as the timing of processes creates windows of vulnerability, the process of development is especially sensitive to genetic errors and environmental disruptions, simple lesions can lead to complex phenotypes (and vice versa), and maternal effects can have an impact at all stages.

The treatment of one or more generations of rats or rabbits with a test chemical is the most common approach to identifying DART, detailed in seven OECD TGs. For specifically evaluating developmental toxicity, TGs were designed to detect malformations in the developing offspring, together with parameters such as growth alterations and prenatal mortality (Collins, 2006). For REACH, developmental toxicity tests are considered mainly as screening tests (Rovida et al., 2011). Variants include the shorter and less complex "screening" tests, which combine reproductive, developmental, and (optionally) repeated-dose toxicity endpoints into a single study design.

The fundamental relevance of the current testing paradigm has only recently been addressed in a more comprehensive way (Carney et al., 2011; Basketter et al., 2012). There is considerable concern about inter-species differences (only about 60% concordance), reproducibility (in part due to a lack of standardization of protocols but also high background levels of developmental variants), and the value of the second generation in testing versus the costs, duration and animal use. An analysis of 254 chemicals (Martin et al., 2009b) suggests that 99.8% of chemicals show no-effect levels for DART within a ten-fold range of maternal toxicity and thus might be covered simply by a safety assessment factor of 10.

An analysis by Bremer and Hartung (2004) of 74 industrial chemicals, which had been tested in developmental toxicity screening tests and reported in the EU New Chemicals Database, showed that 34 chemicals had demonstrated effects on the offspring, but only two chemicals were actually classified as developmentally toxic according to the standards applied by the national competent authorities.

This demonstrates the lack of confidence in the specificity of this "definitive" test. The same analysis showed that 55% of these chemical effects on the offspring could not be detected in multi-generation studies.

The status of alternative methods for DART has been summarized by Adler et al. (2011), endorsed by Hartung et al. (2011), and in the context of developing a roadmap to move forward by Basketter et al. (2012) and Leist et al. (2014). Some key developments are summarized in Table 3 and in the following text.

Extended one-generation reproductive toxicity study

Increasing doubt as to the usefulness of the second generation for testing of substances led to retrospective analyses by Janer et al. (2007), who concluded that the second generation provided no relevant contribution to regulatory decision-making. The US EPA obtained similar data (Martin et al., 2009b), supporting the development of an extended one-generation study (OECD TG 443; OECD, 2011), originally proposed by the ILSI-HESI Agricultural Chemicals Safety Assessment (ACSA) initiative (Doe et al., 2006). The history of the new assay is summarized by Moore et al. (2009). This shows that elements of study protocols can indeed be useless and warrant critical assessment. The reduction brings the number of animals down from 3,200 to about 1,400 per substance tested. Ongoing discussions concern the new animal test's modules for neurodevelopmental and developmental immunotoxicity, which may be triggered as a result of the extended one-generation study and which would undo much of the reduction in terms of work and animal use.

Zebrafish embryotoxicity test

In the field of alternatives to mammalian testing, the most complete reflection of embryonic development apparently can be achieved with zebrafish embryos (Selderslaghs et al., 2012; Sukardi et al., 2011; Weigt et al., 2010), for example using dynamic cell imaging, or frog eggs (FETAX assay) (Hoke and Ankley, 2005), the latter having been evaluated quite critically by ICCVAM.

Currently, the Evidence-based Toxicology Collaboration (EBTC) is evaluating available protocols and data in a systematic review. This retrospective analysis is also exploring whether such systematic reviews (Stephens et al., 2016; Hoffmann et al., 2017) can substitute for traditional validation approaches (Hartung, 2010a). The US National Toxicology Program (NTP) is currently leading the Systematic Evaluation of Application of Zebrafish in Toxicity Testing (SEAZIT) project to assess the impact of varying protocol elements, harmonize ontologies, and develop recommendations around best practices.

By 2002, three well-established alternative embryotoxicity tests had already been validated, i.e., the mouse embryonic stem cell test, the whole rat embryo culture and the limb bud assay (Genschow et al., 2004; Piersma et al., 2004; Spielmann et al., 2004). This decade-long validation process represented a radical departure from other validation studies ongoing at that time. The tests covered only a small, though critical, part of the reproductive cycle and embryonic development. For this reason, none of them has received regulatory acceptance in the 15 years since. Although the embryonic stem cell test was validated, the exact regulatory use was still to be defined (Spielmann et al., 2006). The validation study was criticized because the validity statements had raised significant expectations, but such partial replacements could only be used in a testing strategy (Hartung et al., 2013; Rovida et al., 2015), as later attempted within ReProTect and other projects cited above. This prompted a restructuring of the validation process with earlier involvement of regulators and their needs (Bottini et al., 2008), leading among other outcomes to today's PARERE network at EURL ECVAM. This is only one example of what is, in general, a common problem of tests that have undergone the classical validation process. It was also addressed and reflected on in the recently published ICCVAM strategic roadmap, which includes the clear statement: "The successful implementation of new approach methodologies (NAMs) will depend on research and development efforts developed cooperatively by industry partners and federal agencies. Currently, technologies too often emerge in search of a problem to solve. To increase the likelihood of NAMs being successfully developed and implemented, regulatory agencies and the regulated industries who will ultimately be using new technologies should engage early with test-method developers and stay engaged throughout the development of the technologies" (ICCVAM, 2018).

Other critical views as to the validation of alternative embryotoxicity tests concerned the low number of substances evaluated due to the costs of these assays, and the somewhat arbitrary distinction between weak and strong embryotoxicants, where a weak toxicant was defined as being reprotoxic in one species and a strong toxicant as being reprotoxic in two or more.

Among the embryotoxicity tests, the murine embryonic stem cell test (EST) has attracted the most interest, partly because it represents the only truly animal-free method of the three. Originally based on counting beating mouse embryonic stem cell-derived cardiomyocytes, this test has been adapted to other endpoints and to human cells (Leist et al., 2008). It is also used in the pharmaceutical industry with revised prediction models. A dedicated workshop on the problems of the EST (Marx-Stoelting et al., 2009) pointed out that its prediction model is overly driven by the cytotoxicity of compounds. Importantly, a variant of the EST using either human embryonic stem cells or human induced pluripotent stem cells and measurements of metabolites identified by metabolomics was introduced by Stemina Biomarker Discovery. This CRO offers contract testing in-house. The assay was evaluated with very promising results for more than 100 substances and is undergoing evaluation by the US EPA and the NTP. There is ongoing discussion with the FDA whether such assays might replace the second species in DART evaluations.

Endocrine disruptor screening assays

Endocrine disruption is one key element of DART but may also constitute a pathway of carcinogenesis. The important assay developments in the context of chemical endocrine disruptor screening go beyond the scope of this short overview. However, they could form critical building blocks in an integrated testing strategy for DART as suggested first by Bremer et al. (2007) and attempted in ReProTect, and for carcinogenicity (Schwarzman et al., 2015).

Computational methods and the threshold of toxicological concern (TTC)

Development of (Q)SAR models for reproductive toxicity has been relatively meagre, owing both to the complexity of the endpoint and to the limited available public data (Hartung and Hoffmann, 2009). The more recent availability of larger toxicity datasets might change this (Hartung, 2016).

An alternative approach is to improve the TTC for DART (van Ravenzwaay et al., 2017), expanding earlier attempts by BASF (Bernauer et al., 2008; van Ravenzwaay et al., 2011, 2012; Laufersweiler et al., 2012). The approach avoids testing by defining doses that are very unlikely to produce a hazard across a large number of chemicals, based on the actual use scenario for a given substance of interest (Hartung, 2017c). This work resulted in remarkably high TTC values (compared to other endpoints) of 100 µg/kg bw/day for rats and 95 µg/kg bw/day for rabbits for reproductive toxicity. If found acceptable, this could contribute to considerable test waiving.
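
The underlying derivation of a TTC value is conceptually simple: take a conservative percentile of the distribution of NOELs across many chemicals and divide by an assessment factor. A minimal sketch, with invented NOEL values and a conventional factor of 100, follows.

```python
# Hedged TTC sketch: conservative percentile of a NOEL distribution,
# divided by an assessment factor. NOEL values are invented placeholders.
import numpy as np

noels = np.array([12.0, 45.0, 3.2, 150.0, 8.7, 60.0, 22.0])  # mg/kg bw/day
p5 = np.percentile(noels, 5)          # 5th percentile of the distribution
ttc = p5 / 100.0                      # assessment factor of 100
print(f"TTC ~ {ttc * 1000:.0f} µg/kg bw/day")
```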

4 Systems biology and toxicology

"You think that because you understand 'one ' that you must therefore understand 'two 'because one and one make two. But you forget that you must also understand 'and'." This quote by Donella H. Meadows in Thinking in Systems: A Primer hits the nail on the head: It is not about knowing the components but about their interrelationships. That is what systems approaches are about. The term has been used mainly for the computational approach of modelling these interrelationships. A key point made here is that there are two systems biology / toxicology approaches--one that is computational and one that is experimental--and they complement each other in addressing the complexity of the organism. Donella H. Meadows, quoted above, phrased it "The behavior of a system cannot be known just by knowing the elements of which the system is made". We will ultimately propose to fuse these approaches, as we can sharpen our modeling tools with data generation in (quality-) controlled MPS. Mathematical modeling has a long history in physiology, but the new added value comes from the generation of big data via the respective measurement technologies (omics, high-content and sensor technologies), and the computational power to make sense of them.

4.1 Experimental systems biology and toxicology

We have recently comprehensively summarized the emergence of microphysiological systems (MPS) (Alepee et al., 2014; Hartung, 2014; Marx et al., 2016), which will not be repeated here. Instead, the focus of this review is on understanding how MPS can help to address systemic toxicities, and on aspects of their quality assurance. MPS bring a certain face-validity to the portfolio of tools as they introduce organ architecture, representative complexity and functionality to the in vitro approaches and increasingly even incorporate (patho-)physiological organ interactions. A critical element is the proper reflection of ADME, but microfluidics offers many opportunities to approach this goal (Slikker, 2014). The promise of MPS in biomedicine and drug development depends critically on their quality control. In particular, regulatory decisions based on them will require a high degree of confidence, which only strict quality control can create.

The quality assurance of MPS again requires an adaptation of the validation paradigm. Concepts of validation were originally shaped around relatively simple cell systems used for regulatory decision-making as an alternative to animal testing. Three decades of experience have laid the foundation to broaden this concept to MPS in the context of their use in the life sciences and especially in drug development (Abaci and Shuler, 2015; Ewart et al., 2017; Skardal et al., 2016, 2017).

FDA recognizes that alternative test platforms like organs-on-chip can give regulators new tools that are more predictive. However, for these new alternative methods to be acceptable for regulatory use, confidence is needed that they can answer the relevant questions as well as traditional testing does. Fostering collaborations between government researchers and regulators and between regulators, industry, stakeholders and academia can ensure that the most promising technologies are identified, developed, validated and integrated into regulatory risk assessment. The FDA-DARPA-NIH Microphysiological Systems Program started in 2011 to support the development of human microsystems, or organ "chips", to screen swiftly and efficiently for safe and effective drugs (before human testing). It represents a collaboration through coordination of independent programs:

a) Defense Advanced Research Projects Agency (DARPA): Engineering platforms and biological proof-of-concept (DARPA-BAA-11-73: Microphysiological Systems)

b) National Institutes of Health (NIH), National Center for Advancing Translational Sciences (NCATS): Underlying biology/pathology and mechanistic understanding (RFA-RM-12-001 and RFA RM-11-022)

c) Food and Drug Administration (FDA): Advice on regulatory requirements, validation and qualification.

This was a unique partnership because it involved regulatory scientists at the very beginning and thus was able to address identified gaps in knowledge needed to regulate FDA products (Fig. 2).

As an outcome of the program, in April 2017, the FDA signed a Cooperative Research and Development Agreement (CRADA) with Emulate, Inc. to use organs-on-chips technology as a toxicology testing platform to understand how products affect human health and safety. It aims to advance and qualify their "Human Emulation System" to meet regulatory evaluation criteria for product testing (5,6). The FDA will evaluate the company's "organs-on-chips" technology in laboratories at the agency's Center for Food Safety and Applied Nutrition (CFSAN). Their miniature liver-on-chip will be evaluated for its effectiveness in helping to better understand the effects of medicines, disease-causing bacteria in foods, chemicals, and other potentially harmful materials on the human body. FDA will beta-test the Emulate system and look at the concordance of chip data with in vivo, in silico and other in vitro (2-D) data on the same compounds; furthermore, FDA will begin to develop performance standards for organs-on-chips to create a resource for FDA regulators and researchers.

The work is part of the FDA Predictive Toxicology Roadmap announced December 6, 2017 (7). An FDA senior level toxicology working group was formed to foster enhanced communication among FDA product centers and researchers and leverage FDA resources to advance the integration of emerging predictive toxicology methods and new technologies into regulatory safety and risk assessments. This will include training of FDA regulators and researchers, with ongoing education in the new predictive toxicology methods that are essential for FDA regulators. As part of this, FDA established an agency-wide education calendar of events and a Toxicology Seminar Series to introduce concepts of new toxicology methodologies and updates on toxicology-related topics. In order to promote continued communication, FDA reaffirmed its commitment to incorporate data from newly qualified toxicology methods into regulatory missions, is encouraging discussions with stakeholders as part of the regulatory submission process, and encourages sponsors to submit scientifically valid approaches for using a new method early in the regulatory process. FDA fosters collaborations with stakeholders across sectors and disciplines nationally and internationally. This is pivotal to identify the needs, maintain momentum, and establish a community to support delivery of new predictive toxicology methods. With this goal, FDA's research programs will identify data gaps and support intramural and extramural research to ensure that the most promising technologies are identified, developed, validated, and integrated into the product pipeline. Under the oversight of the Office of the Commissioner, the progress of these recommendations will be tracked, including an annual report to the Chief Scientist. This shall ensure transparency, foster opportunities to share ideas and knowledge, showcase technologies, and highlight collaborations on developing and testing new methods.

In conclusion, the FDA roadmap identifies the critical priority activities for energizing new or enhanced FDA engagement in transforming the development, qualification, and integration of new toxicology methodologies and technologies into regulatory application. Implementation of the roadmap and engagement with diverse stakeholders should enable FDA to fulfill its regulatory mission today while preparing for the challenges of tomorrow.

Quality assurance and ultimately validation of the tools in the life sciences is a key contribution to overcoming the stagnant drug development pipeline, with its high attrition rates, and the reproducibility crisis in biomedicine. As noted above, MPS bring face-validity by introducing organ architecture and functionality and increasingly even (patho-)physiological organ system interactions, but their promise in biomedicine and drug development depends critically on quality control: regulatory decisions will require a confidence that only strict quality control can create. With more MPS developing, the major challenge for their use as translational drug development tools is to create micropathophysiological systems (MPPS).

Typically, the new test would be compared to a traditional method, usually an animal experiment, and the relative reproducibility of reference results would be used as the primary measure of success. Concurrent testing of new substances with the reference test represents another opportunity to gain comparative information without the information bias of the scientific literature (e.g., overrepresentation of toxic substances with specific mechanisms). In an ECVAM workshop (Hoffmann et al., 2008), it was suggested that instead of a specified reference (animal) test, a reference standard could be formed by expert consensus: by integrating all available knowledge, a list of substances could be produced with the results a hypothetical ideal test would provide. This allows, for example, the use of human data in combination with animal data, or the combination of results from various test systems.
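
Scoring a new method against such a consensus reference standard then reduces to ordinary contingency arithmetic, as in this minimal sketch; the substances and calls are invented placeholders.

```python
# Hedged sketch: sensitivity/specificity of a new test against an
# expert-consensus reference standard (1 = toxic, 0 = non-toxic).
reference = {"A": 1, "B": 0, "C": 1, "D": 0, "E": 1}   # consensus calls
new_test  = {"A": 1, "B": 0, "C": 0, "D": 0, "E": 1}   # new method's calls

tp = sum(reference[s] == 1 and new_test[s] == 1 for s in reference)
tn = sum(reference[s] == 0 and new_test[s] == 0 for s in reference)
fp = sum(reference[s] == 0 and new_test[s] == 1 for s in reference)
fn = sum(reference[s] == 1 and new_test[s] == 0 for s in reference)

print(f"sensitivity: {tp / (tp + fn):.2f}")   # here 0.67
print(f"specificity: {tn / (tn + fp):.2f}")   # here 1.00
```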

These concepts of correlative validation are only partially applicable to MPS, which often have many purposes other than replacing an animal test, and for which in many cases a respective animal test does not even exist. For drug development, typically a pathophysiological state first needs to be introduced and then treatment effects are analyzed. This greatly complicates the validation process, as both the induction of pathophysiology and its correction need to be quality assured.

MPS usually derive their relevance from the mechanisms of physiology and pathophysiology they reflect. For this reason, mechanistic validation (Hartung et al., 2013) lends itself to the evaluation of MPS: This is first of all a comparison to mechanisms from the scientific literature, ideally by systematic review. Alternatively, high-content characterizations of a reference test and the new test can show that similar patterns of perturbation of physiology occur, in the simplest case that the same biomarkers of effect are observed. This experimental approach can be applied where the definition of mechanism is incomplete or the existing literature insufficient. Lastly, computational modelling of physiology and the prediction of test outcomes in comparison to real test data can show how well the test and the computational model align. The envisaged validation process for MPS has to start with the information need, which defines the purpose of the test.
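
For the high-content comparison mentioned above, one simple similarity measure is the cosine similarity of log fold-change profiles across shared biomarkers, as in this hedged sketch; the biomarker names and fold-changes are invented placeholders.

```python
# Hedged sketch: compare perturbation patterns of a reference test and an MPS.
import numpy as np

biomarkers = ["MBP", "GFAP", "NEFL", "S100B"]          # shared readouts
reference_response = np.array([0.2, 1.8, 0.9, 1.1])    # fold-changes, reference test
mps_response       = np.array([0.3, 1.6, 1.0, 1.2])    # fold-changes, MPS

a, b = np.log2(reference_response), np.log2(mps_response)
similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"perturbation-pattern similarity: {similarity:.2f}")   # 1.0 = identical pattern
```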

Although validation is often perceived as rigid and inflexible (which it has to be once a study is initiated), it is actually a highly flexible process, which needs to be adapted case by case and should be performed with the end use in mind (ICCVAM, 2018). Concepts of pre-validation, applicability domain, retrospective validation, catch-up-validation, minimal performance standards, prediction models, etc. are examples of the continuing adaptation to meet the needs of stakeholders (Hartung, 2007; Leist et al., 2012). Here, especially the concepts of "fit-for-purpose", meeting defined "performance standards", and "mechanistic validation" will have to be elaborated upon, specific to MPS (Fig. 3).

--"Fit-for-purpose": The purpose of a test is its place and function in a testing strategy to meet an overall information need and decision context, e.g., the information need is developmental neurotoxicity with the focus on one of the key events of neural development such as myelination of axons. The question to be addressed can be the following: Do certain substances perturb myelination of neuronal axons? Then, a first test could assess toxicity to oligodendrocytes. A second test could quantify the level of myelin basic protein (MBP) in MPS. A third test might assess electrophysiology within the organoid as a functional outcome of perturbed myelination and as a consequence of the perturbation of neurodevelopment and neural differentiation. The testing strategy would need to combine these test results (evidence integration) with existing information.

--"Performance standards": The concept of a performance standard for alternative methods was introduced in the Modular Approach in 2004 (Hartung et al., 2004) and incorporated into OECD validation guidance from 2005 (OECD, 2005). The basic idea is that if a successfully validated method is available, it should be defined what a similar method should demonstrate to be considered equivalent to the validated one. This has proven to be crucial for any modification of tests as well as to avoid extensive and expensive retesting for similar tests. For this reason, they were originally termed "minimum performance standards". Over time, the concept has evolved, now also using the performance standards among others to show the proficiency of a laboratory to carry out a test. Most radically, the current work on developing a performance standard-based OECD TG for a skin sensitization defined approach (DA) aims at defining how any test or combination of tests should perform to be acceptable under the guidance without prescribing a specific method. By extension, a performance standard could be defined for an MPS: This means setting engineering goals (performance standards) and the quality assurance (validation) process would confirm that these standards are met. To some extent this is similar to the reasoning of an earlier ECVAM workshop on points of reference, where it was recommended to define a point of reference by expert consensus for a given validation, not by comparing to a dataset from a traditional animal test (Hoffmann et al., 2008). This was first applied in the retrospective validation of the micronucleus in vitro test validation (Corvi et al., 2008) and later in the more recent validation studies of micronucleus and comet assays in 3D skin models, and it will be applied in the future for the validation of thyroid endocrine disruptor tests.

--"Mechanistic validation" (Hartung et al., 2013) is another radical departure from current practice. Though validation has always included the aspect of mechanistic relevance when addressing test definition, this is usually only minimally covered. The traditional (animal) test and the new method are typically taken as black boxes and the correlation of their results is the measure of validity. MPS bring (patho)physiology, i.e., mechanism, to the foreground. Thus, it makes sense to use a mechanistic basis for comparison. Mechanistic validation dictates first an agreement on the relevant mechanisms for a given information need, followed by evaluation based on coverage of the mechanism by the new method. This type of an approach increasingly takes place with the definition of adverse outcome pathways (AOP) and was the goal of the parallel Human Toxome Project (Bouhifd et al., 2015). One of the basic underpinnings of mechanistic validation is that a systematic review of the literature can be used to ascertain mechanism.

Even before attempting formal validation of MPS, their quality assurance will be of utmost importance. The Good Cell Culture Practice (GCCP) movement initiated by one of the authors in 1996 led to the first guidance of its kind (Coecke et al., 2005) under the auspices of ECVAM. The international community recognized a need to expand this to MPS and stem cell-based models ten years later, and under the lead of CAAT, with participation of FDA, NIH NCATS, NICEATM, ECVAM, UK Stem Cell Bank and others, GCCP 2.0 was initiated. In two dedicated workshops and several publications (Pamies et al., 2017b, 2018b; Pamies and Hartung, 2017; Eskes et al., 2017), the needs were defined, and a steering group plus scientific advisory group is currently formulating GCCP 2.0. The proof-of-principle of validation attempts by NIH NCATS in establishing Tissue Chip Testing Centers (TCTC) will cross-fertilize with these developments. The GCCP discussion has already been the topic of workshops and conferences such as European Society of Toxicology In Vitro 2016, EuroTox 2017, Society of Toxicology 2018 and a joint conference with FDA and the IQ consortium in 2015. A 2017 workshop (Bal-Price et al., 2018) developed test readiness criteria for toxicology using the example of developmental neurotoxicity, which will be a further starting point for the performance standard development attempted here.

Recognizing the need for a stakeholder dialogue on the quality assurance of MPS, CAAT this year initiated a Public Private Partnership for Performance Standards for Microphysiological Systems (P4M), which aims to establish a stakeholder consensus process toward performance standards. P4M will discuss the core aspects, i.e., when is an MPS good enough and can we express this as a performance standard? Expressions of interest have already been received from various companies, academics, ECVAM, and US and Japanese agencies.

4.2 Computational systems biology and toxicology

J. B. S. Haldane (1892-1964), a biologist and mathematician, predicted: "If physics and biology one day meet, and one of the two is swallowed up, that one will be biology". Systems biology is biology swallowed by physics. Joyner and Pederson (2011) give an interesting reflection on this discipline. Systems toxicology (Kiani et al., 2016), its more applied sibling, was the topic of an earlier article in this series (Hartung et al., 2012), some dedicated conferences and symposia (Andersen et al., 2014; Sturla et al., 2014; Sauer et al., 2015; Hartung et al., 2017a) and a special issue of Chemical Research in Toxicology (Hartung et al., 2017b). As experimental systems biology has been fueled by bioengineering and stem cell technologies, computational systems biology / toxicology has been driven by big data and machine learning technologies (Hartung, 2016; Luechtefeld and Hartung, 2017). The ultimate vision is to use computational models of human metabolism, possibly as avatars or virtual patients, to try out pharmacological interventions or toxic insults; on the way, tissue and organ models are emerging (Hartung, 2017d).

Systems biology addresses biological function and its perturbation by various biochemically active compounds, complementing the traditional reductionist approaches. The emphasis of systems biology approaches is on the interactions between components rather than just the components themselves. This approach is therefore frequently focused on the dynamics of biological interactions and the emergent properties of biological cells, tissues and organisms stemming from the complexity of the underlying regulatory networks.

The systems biology analysis allows one to examine the disruption of network components by pharmacological and other interventions through the lens of their effects, not only on the designated target but on the network of molecular components, with frequently paradoxical, unexpected and counter-intuitive results. These results can be products of complex feedback interactions involving a specific target and the multiple phenotypes controlled by it, rather than just off-target biochemical effects. The network level effects can span multiple scales, from biochemical to cellular and tissue levels, which involve cell-cell communication through various signaling mechanisms, producing networks of networks. This complexity is captured through high-throughput experimentation and computational analysis and modelling, with a particular focus on the unanticipated, emergent properties. Below we provide some examples of the recent systems biology approaches to complex problems related to the mechanisms of drug action and possible toxic effects.

Several recent examples illustrate the philosophy and power of the systems approach. Particular attention so far has been paid to the complex mechanisms of action of cocktails of various pharmaceutical compounds. For instance, a recent systems analysis demonstrated that the order and timing of application of anti-cancer compounds can determine the efficacy of combinatorial treatments (Lee et al., 2012). This effect has been ascribed to re-wiring of the signaling network by the first compound, which might result in a more potent effect of the second compound if applied at the appropriate time. The same dynamic network view can be applied to combinatorial applications of radio- and chemotherapeutic treatments, as elucidated through mathematical modeling and experimental validation in another high-profile systems biology application (Chen et al., 2016). These types of network perspectives and associated modeling support will likely also inform the analysis of the effects of putative combinatorial treatments on other tissues and the associated toxicology outcomes.
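
The order-and-timing effect described above can be illustrated with a deliberately toy ODE model, assuming drug A slowly sensitizes cells so that drug B kills more effectively once sensitization has built up; the model form and all parameters are invented for illustration and do not reproduce the analysis of Lee et al. (2012).

```python
# Toy model: n = viable cells, s = sensitization state driven by drug A
# (always present); drug B is given as an 8 h pulse starting at t_b.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, t_b):
    n, s = y
    b_on = 1.0 if t_b <= t < t_b + 8.0 else 0.0       # 8 h pulse of drug B
    dn = 0.1 * n - 0.5 * b_on * (1.0 + 4.0 * s) * n   # growth minus B-induced kill
    ds = 0.3 * (1.0 - s)                              # A drives sensitization toward 1
    return [dn, ds]

for t_b in (0.0, 8.0):                                # simultaneous vs. staggered dosing
    sol = solve_ivp(model, (0.0, 24.0), [1.0, 0.0], args=(t_b,), max_step=0.1)
    print(f"drug B pulse at t = {t_b:>4} h: surviving fraction {sol.y[0, -1]:.2e}")
# Staggering drug B behind drug A yields lower survival in this toy model,
# mirroring the order-dependence reported for real combination treatments.
```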

Another example of a study benefiting from a systems approach is the paradoxical increase rather than decrease of the total kinase activity by ATP-competitive inhibitors of BRAF/CRAF kinases (Hatzivassiliou et al., 2010). As these kinases are a key target in various cancers, the paradoxical effect has received much attention. However, it is virtually impossible to rationalize it without a modeling approach within a framework of systems biology. Two key insights were needed to formulate hypotheses of how this might occur: the feedback regulation inherent in RAF/MAPK kinase signaling and the potential allosteric action of the drugs on the enzyme (Kholodenko et al., 2015).

A number of recent efforts to build and apply computational systems models have focused specifically on mechanisms of developmental toxicity. The US EPA's Virtual Tissues research project uses cellular agent-based models to recapitulate developing embryonic systems and creates in silico testing platforms by parameterizing such models using the ToxCast/Tox21 HTS data to mimic chemical exposure and simulate effects on a tissue level. An AOP of embryonic vascular disruption was published based on a systematic literature review (Knudsen and Kleinstreuer, 2012) and was used to inform the construction of a computational model predicting disruption of blood vessel development (Kleinstreuer et al., 2013b). Putative vascular disruptor compounds and associated systems model predictions have been tested and confirmed in a number of functional assays such as transgenic zebrafish, human cell-based tubulogenesis assays, and whole embryo culture (Tal et al., 2016; McCollum et al., 2016; Ellis-Hutchings et al., 2017). Other work has focused on modelling key developmental toxicity mechanisms driving cleft palate formation (Hutson et al., 2017) and taking a systems toxicology approach to understanding disruption of male reproductive development and endocrine pathways (Leung et al., 2016; Kleinstreuer et al., 2016).

4.3 A fusion of experimental and computational systems biology / toxicology?

Even though MPS are complex, they are considerably simpler than human organisms, and they are much more open to measurements and interventions. Thus, the opportunity to first model our experimental systems has enormous advantages; however, it represents an interdisciplinary challenge. Bioengineers and modelers have to be brought together. At the same time, funding bodies have to be convinced of the value of this interim step. As an example, in a recent organ-on-chip study (Kilic et al., 2016), mediator gradients were modeled computationally and then verified experimentally. By parameterization of the experimental systems, we can also start to scale our systems virtually as a quantitative in vitro to in vivo extrapolation (QIVIVE) (Tsaioun et al., 2016; Hartung, 2018).

Overall, further discussion is needed as to the relevance of current carcinogenicity, RDT and DART assessments. The societal need to ensure the safety of drugs, chemicals and consumer products may make it difficult to abandon current testing, but the shortcomings discussed here should lower the barrier for implementing alternative approaches that may improve the status quo. Data sharing and the harmonization of ontologies and data formats will be critical.

Repeated-dose toxicity, carcinogenicity and reproductive toxicity are three examples of systemic toxicities. Since they are very complex endpoints, the uptake of alternative in vitro test methods is still very limited. Rather, some approaches are being investigated or are already in place for waiving testing and reducing the number of animals, such as the ICH strategy for pharmaceuticals and the extended one-generation assay.

Promise for all systemic toxicities comes from the nascent development of integrated testing strategies driven by mechanistic relevance: By mapping the human reproductive cycle and its disturbance, or the array of pathways of carcinogenesis, with a number of assays, the hope is to design more human-relevant test strategies. These and other approaches form part of the emerging roadmap for replacement (Basketter et al., 2012; Leist et al., 2014; Corvi et al., 2017; ICCVAM, 2018) and will contribute to the momentum for implementing alternative approaches, which is also aided by the increasing recognition of the shortcomings of current testing methods.

Given the importance of these hazards and the backlog of testing for most substances of daily use, more efforts in the development of tests, the design of testing strategies and their validation are needed. Quality assurance and ultimately validation of the tools in the life sciences are necessary to unplug the drug development pipeline, blocked as it is by high attrition rates, and to address the reproducibility crisis in biomedicine. The systematic condensation of our existing knowledge (including the mapping of gaps and shortcomings of existing evidence) can herald a more predictive systems approach to addressing systemic toxicity.

However, ultimately, we need a "new deal" for systemic toxicities. Albert Einstein once said, "We can't solve the problems by using the same kind of thinking we used when we created them". The increasing awareness of the shortcomings of current tests with respect to reproducibility (Baker, 2016; Jilka, 2016; Voelkl et al., 2018), inter-species differences and thus lack of human relevance, ambiguity of results, and steep costs should make all who are in this field uneasy (Miller, 2014). It requires the "art of toxicology" to make good decisions on the basis of such compromised information sources. How can we sleep well when we know that our daily decisions are subject to these limitations? Rasheed Ogunlaru wrote, "All the tools, techniques and technology in the world are nothing without the head, heart and hands to use them wisely, kindly and mindfully". This holds for the current art of toxicology and will likely be no different for any new approach. Especially as the new approaches come in the guise of objective "evidence-based" and high-tech approaches, they are still models created with a purpose. Frank Herbert (in God Emperor of Dune) warns, "Dangers lurk in all systems. Systems incorporate the unexamined beliefs of their creators. Adopt a system, accept its beliefs, and you help strengthen the resistance to change".

The 3Rs have served us to some extent to replace animal tests for acute and topical toxicities but have done so by modelling and reproducing the animal test results despite their shortcomings. A testing strategy modelling the outcomes of traditional tests will not serve us as well for systemic toxicities. The hazards are typically more severe and less directly attributable to exposure because they can manifest anywhere in the body and after any time of exposure. The 3S approach suggested here is such a "new deal" for safety assessments. It goes far beyond the 3Rs as it does not aim to reproduce the results of a black box (animal) test, which may bear little resemblance to the human scenario. The combination of systematic evaluation of our knowledge and experimental as well as computational modelling of biological systems complexity promises a different approach to systemic toxicity prediction, even though it still has to prove its feasibility and utility.

Abaci, H. E. and Shuler, M. L. (2015). Human-on-a-chip design strategies and principles for physiologically based pharmacokinetics/pharmacodynamics modeling. Integr Biol 7, 383-391. doi:10.1039/C4IB00292J

Adler, S., Basketter, D., Creton, S. et al. (2011). Alternative (nonanimal) methods for cosmetics testing: Current status and future prospects. Arch Toxicol 85, 367-485. doi:10.1007/s00204-011-0693-2

Agerstrand, M. and Beronius, A. (2016). Weight of evidence evaluation and systematic review in EU chemical risk assessment: Foundation is laid but guidance is needed. Environ Int 92-93, 590-596. doi:10.1016/j.envint.2015.10.008

Aiassa, E., Higgins, J. P. T., Frampton, G. K. et al. (2015). Applicability and feasibility of systematic review for performing evidence-based risk assessment in food and feed safety. Crit Rev Food Sci Nutr 55, 1026-1034. doi:10.1080/10408398.2013.769933

Alden, C. L., Lynn, A., Bourdeau, A. et al. (2011). A critical review of the effectiveness of rodent pharmaceutical carcinogenesis testing in predicting human risk. Vet Pathol 48, 772-784. doi:10.1177/0300985811400445

Alepee, N., Bahinski, A., Daneshian, M. et al. (2014). State-of-the-art of 3D cultures (organs-on-a-chip) in safety testing and pathophysiology--A t4 report. ALTEX 31, 441-477. doi:10.14573/altex.1406111

Anand, P., Kunnumakara, B. A., Sundaram, C. et al. (2008). Cancer is a preventable disease that requires major lifestyle changes. Pharm Res 25, 2097-2116. doi:10.1007/s11095-008-9661-9

Andersen, M. E., Betts, K., Dragan, Y. et al. (2014). Developing microphysiological systems for use as regulatory tools--Challenges and opportunities. ALTEX 31, 364-367. doi:10.14573/altex.1405151

Anisimov, V. N., Ukraintseva, S. V. and Yashin, A. I. (2005). Cancer in rodents: Does it tell us about cancer in humans? Nat Rev Cancer 5, 807-819. doi:10.1038/nrc1715

Ashby, J. and Tennant, R. W. (1991). Definitive relationships among chemical structure, carcinogenicity and mutagenicity for 301 chemicals tested by the U.S. NTP. Mutat Res 257, 229-306. doi:10.1016/0165-1110(91)90003-E

Bal-Price, A., Hogberg, H. T., Crofton, K. M. et al. (2018). Recommendation on test readiness criteria for new approach methods in toxicology: Exemplified for developmental neurotoxicity. ALTEX, Epub ahead of print. doi:10.14573/altex.1712081

Bailey, J., Knight, A. and Balcombe, J. (2005). The future of teratology research is in vitro. Biogenic Amines 19, 97-145. doi:10.1163/1569391053722755

Baker, M. (2016). Is there a reproducibility crisis? Nature 533, 452-454. doi:10.1038/533452a

Basketter, D. A., Clewell, H., Kimber, I. et al. (2012). A roadmap for the development of alternative (non-animal) methods for systemic toxicity testing. ALTEX 29, 3-91. doi:10.14573/altex.2012.1.003

Batke, M., Aldenberg, T., Escher, S. and Mangelsdorf, I. (2013). Relevance of non-guideline studies for risk assessment: The coverage model based on most frequent targets in repeated dose toxicity studies. Toxicol Lett 218, 293-298. doi:10.1016/j.toxlet.2012.09.002

Bernauer, U., Heinemeyer, G., Heinrich-Hirsch, B. et al. (2008). Exposure-triggered reproductive toxicity testing under the REACH legislation: A proposal to define significant/relevant exposure. Toxicol Lett 176, 68-76. doi:10.1016/j.toxlet.2007.10.008

Bernstein, L., Gold, L. S., Ames, B. N. et al. (1985). Some tautologous aspects of the comparison of carcinogenic potency in rats and mice. Fundam Appl Toxicol 5, 79-86. doi:10.1016/0272-0590(85)90051-X

Birnbaum, L. S. and Fenton, S. (2003). Cancer and developmental exposure to endocrine disruptors. Environ Health Perspect 111, 389-394. doi:10.1289/ehp.5686

Bokkers, B. G. H. and Slob, W. (2007). Deriving a data-based interspecies assessment factor using the NOAEL and the benchmark dose approach. Crit Rev Toxicol 37, 355-373. doi:10.1080/10408440701249224

Bottini, A. A., Alepee, N., De Silva, O. et al. (2008). Optimization of the post-validation process. The report and recommendations of ECVAM workshop 67. Altern Lab Anim 36, 353-366.

Bouhifd, M., Andersen, M. E., Baghdikian, C. et al. (2015). The human toxome project. ALTEX 32, 112-124. doi:10.14573/altex.1502091

Bremer, S. and Hartung, T. (2004). The use of embryonic stem cells for regulatory developmental toxicity testing in vitro--The current status of test development. Curr Pharm Des 10, 2733-2747. doi:10.2174/1381612043383700

Bremer, S., Pellizzer, C., Hoffmann, S. et al. (2007). The development of new concepts for assessing reproductive toxicity applicable to large scale toxicological programs. Curr Pharm Des 13, 3047-3058. doi:10.2174/138161207782110462

Brown, N. A. and Fabro, S. (1983). The value of animal teratogenicity testing for predicting human risk. Clin Obstet Gynecol 26, 467-477. doi:10.1097/00003081-198306000-00028

Busquet, F. and Hartung, T. (2017). The need for strategic development of safety sciences. ALTEX 34, 3-21. doi:10.14573/altex.1701031

Carney, E. W., Ellis, A. L., Tyl, R. W. et al. (2011). Critical evaluation of current developmental toxicity testing strategies: A case of babies and their bathwater. Birth Defects Res B Dev Reprod Toxicol 92, 395-403. doi:10.1002/bdrb.20318

Chen, M., Bisgin, H., Tong, L. et al. (2014). Toward predictive models for drug-induced liver injury in humans: Are we there yet? Biomark Med 8, 201-213. doi:10.2217/bmm.13.146

Chen, S.-H., Forrester, W. and Lahav, G. (2016). Schedule-dependent interaction between anticancer treatments. Science 351, 1204-1208. doi:10.1126/science.aac5610

Chiu, W. A., Guyton, K. Z., Martin, M. T. et al. (2018). Use of high-throughput in vitro toxicity screening data in cancer hazard evaluations by IARC Monograph Working Groups. ALTEX 35, 51-64. doi:10.14573/altex.1703231

Christianson, A., Howson, C. P. and Modell, B. (2006). March of Dimes Global Report of Birth Defects: The hidden toll of dying and disabled children. New York, USA: March of Dimes Birth Defects Foundation. https://bit.ly/2GFyjXi

Coecke, S., Balls, M., Bowe, G. et al. (2005). Guidance on good cell culture practice. Altern Lab Anim 33, 261-287.

Colditz, G. A. and Wei, E. K. (2012). Preventability of cancer: The relative contributions of biologic and social and physical environmental determinants of cancer mortality. Annual Rev Publ Health 33, 137-156. doi:10.1146/annurev-publhealth-031811-124627

Collins, T. F. (2006). History and evolution of reproductive and developmental toxicology guidelines. Curr Pharm Des 12, 1449-1465. doi:10.2174/138161206776389813

Combes, R., Balls, M., Curren, R. et al. (1999). Cell transformation assays as predictors of human carcinogenicity. Altern Lab Anim 27, 745-767.

Corvi, R., Albertini, S., Hartung, T. et al. (2008). ECVAM retrospective validation of in vitro micronucleus test (MNT). Mutagenesis 23, 271-283. doi:10.1093/mutage/gen010

Corvi, R., Aardema, M. J., Gribaldo, L. et al. (2012). ECVAM prevalidation study on in vitro cell transformation assays: General outline and conclusions of the study. Mutat Res 744, 12-19. doi:10.1016/j.mrgentox.2011.11.009

Corvi, R., Vilardell, M., Aubrecht, J. and Piersma, A. (2016). Validation of transcriptomics-based in vitro methods. In C. Eskes and M. Whelan (eds.), Validation of Alternative Methods for Toxicity Testing, Advances in Experimental Medicine and Biology. Vol. 856, 243-257. Switzerland: Springer. doi:10.1007/978-3-319-33826-2_10

Corvi, R. and Madia, F. (2017). In vitro genotoxicity testing: Can the performance be enhanced? Food Chem Toxicol 106, 600-608. doi:10.1016/j.fct.2016.08.024

Corvi, R., Madia, F., Guyton, K. Z. et al. (2017). Moving forward in carcinogenicity assessment: Report of an EURL ECVAM-ESTIV workshop. Toxicol In Vitro 45, 278-286. doi:10.1016/j.tiv.2017.09.010

Daneshian, M., Busquet, F., Hartung, T. and Leist, M. (2015). Animal use for science in Europe. ALTEX 32, 261-274. doi:10.14573/altex.1509081

Daneshian, M., Kamp, H., Hengstler, J. et al. (2016). Highlight report: Launch of a large integrated European in vitro toxicology project: EU-ToxRisk. Arch Toxicol 90, 1021-1024. doi:10.1007/s00204-016-1698-7

Daston, G. P. and Seed, J. (2007). Skeletal malformations and variations in developmental toxicity studies: Interpretation issues for human risk assessment. Birth Defects Res B Dev Reprod Toxicol 80, 421-424. doi:10.1002/bdrb.20135

Doe, J. E., Boobis, A. R., Blacker, A. et al. (2006). A tiered approach to systemic toxicity testing for agricultural chemical safety assessment. Crit Rev Toxicol 36, 37-68. doi:10.1080/10408440500534370

Doktorova, T. Y., Yildirimman, R., Celeen, L. et al. (2014). Testing chemical carcinogenicity by using a transcriptomics HepaRG-based model? EXCLI J 13, 623-637.

Ellis-Hutchings, R., Settivari, R., McCoy, A. et al. (2017). Embryonic vascular disruption adverse outcomes: Linking HTS signatures with functional consequences. Reprod Toxicol 70, 82-96. doi:10.1016/j.reprotox.2017.04.003

Ennever, F. K., Noonan, T. J. and Rosenkranz, H. S. (1987). The predictivity of animal bioassays and short-term genotoxicity tests for carcinogenicity and non-carcinogenicity to humans. Mutagenesis 2, 73-78. doi:10.1093/mutage/2.2.73

Ennever, F. K. and Lave, L. B. (2003). Implications of the lack of accuracy of the lifetime rodent bioassay for predicting human carcinogenicity. Regul Toxicol Pharmacol 38, 52-57. doi:10.1016/S0273-2300(03)00068-0

Eskes, C., Bostrom, A.-C., Bowe, G. et al. (2017). Good cell culture practices & in vitro toxicology. Toxicol In Vitro 45, 272-277. doi:10.1016/j.tiv.2017.04.022

EURL ECVAM (2012). EURL ECVAM Recommendation on three cell transformation assays using Syrian hamster embryo cells (SHE) and the BALB/c 3T3 mouse fibroblast cell line for in vitro carcinogenicity testing. https://bit.ly/2qbJ50J

EURL ECVAM (2013a). EURL ECVAM strategy to avoid and reduce animal use in genotoxicity testing. JRC Report EUR 26375. https://bit.ly/19V6dzF

EURL ECVAM (2013b). EURL ECVAM Recommendation on cell transformation assay based on the Bhas 42 cell line. https://bit.ly/2qbJ50J

Ewart, L., Fabre, K., Chakilam, A. et al. (2017). Navigating tissue chips from development to dissemination: A pharmaceutical industry perspective. Exp Biol Med 242, 1579-1585. doi:10.1177/1535370217715441

Ferrario, D., Brustio, R. and Hartung, T. (2014). Glossary of reference terms for alternative test methods and their validation. ALTEX 31, 319-335. doi:10.14573/altex.140331

Freedman, D. A. and Zeisel, H. (1988). From mouse to man: The quantitative assessment of cancer risks. Statist Sci 3, 3-56. doi:10.1214/ss/1177012993

Freedman, L. P., Cockburn, I. M. and Simcoe, T. S. (2015). The economics of reproducibility in preclinical research. PLoS Biol 13, e1002165. doi:10.1371/journal.pbio.1002165

Gaylor, D. W. (2005). Are tumor incidence rates from chronic bioassays telling us what we need to know about carcinogens? Regul Toxicol Pharmacol 41, 128-133. doi:10.1016/j.yrtph.2004.11.001

GBD 2013 Risk Factors Collaborators (2013). Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks in 188 countries, 1990-2013: A systematic analysis for the global burden of disease study 2013. Lancet 386, 2287-2323. doi:10.1016/S0140-6736(15)00128-2

Genschow, E., Spielmann, H., Scholz, G. et al. (2004). Validation of the embryonic stem cell test in the international ECVAM validation study on three in vitro embryotoxicity tests. Altern Lab Anim 32, 209-244.

Gold, L. S., Slone, T. H., Manley, N. B. et al. (1991). Target organs in chronic bioassays of 533 chemical carcinogens. Environ Health Perspect 93, 233-246. doi:10.1289/ehp.9193233

Gold, L. S., Slone, T. H. and Ames, B. N. (1998). What do animal cancer tests tell us about human cancer risk? Overview of analyses of the carcinogenic potency database. Drug Metab Rev 30, 359-404. doi:10.3109/03602539808996318

Gordon, S., Daneshian, M., Bouwstra, J. et al. (2015). Nonanimal models of epithelial barriers (skin, intestine and lung) in research, industrial applications and regulatory toxicology. ALTEX 32, 327-378. doi:10.14573/altex.1510051

Gottmann, E., Kramer, S., Pfahringer, B. and Helma, C. (2001). Data quality in predictive toxicology: Reproducibility of rodent carcinogenicity experiments. Environ Health Perspect 109, 509-514. doi:10.1289/ehp.01109509

Gray, G. M., Li, P., Shlyakhter, I. and Wilson, R. (1995). An empirical examination of factors influencing prediction of carcinogenic hazard across species. Regul Toxicol Pharmacol 22, 283-291. doi:10.1006/rtph.1995.0011

Griesinger, C., Hoffmann, S., Kinsner-Ovaskainen, A. et al. (2008). Proceedings of the First International Forum Towards Evidence-Based Toxicology. Conference Centre Spazio Villa Erba, Como, Italy. 15-18 October 2007. Human Exp Toxicol 28, Spec Issue: Evidence-Based Toxicology (EBT) 2009, 71-163.

Hardy, B., Apic, G., Carthew, P. et al. (2012a). Food for thought ... A toxicology ontology roadmap. ALTEX 29, 129-137. doi:10.14573/altex.2012.2.129

Hardy, B., Apic, G., Carthew, P. et al. (2012b). Toxicology ontology perspectives. ALTEX 29, 139-156. doi:10.14573/altex.2012.2.139

Hareng, L., Pellizzer, C., Bremer, S. et al. (2005). The integrated project ReProTect: A novel approach in reproductive toxicity hazard assessment. Reprod Toxicol 20, 441-452. doi:10.1016/j.reprotox.2005.04.003

Hartung, T., Bremer, S., Casati, S. et al. (2004). A modular approach to the ECVAM principles on test validity. Altern Lab Anim 32, 467-472.

Hartung, T. (2007). Food for thought ... on validation. ALTEX 24, 67-72. doi:10.14573/altex.2007.2.67

Hartung, T. (2008a). Food for thought ... on animal tests. ALTEX 25, 3-9. doi:10.14573/altex.2008.1.3

Hartung, T. (2008b). Toward a new toxicology--Evolution or revolution? Altern Lab Anim 36, 635-639.

Hartung, T. (2008c). Food for thought ... on alternative methods for cosmetics safety testing. ALTEX 25, 147-162. doi:10.14573/altex.2008.3.147

Hartung, T. (2009a). Food for thought ... on evidence-based toxicology. ALTEX 26, 75-82. doi:10.14573/altex.2009.2.75

Hartung, T. (2009b). Toxicology for the twenty-first century. Nature 460, 208-212. doi:10.1038/460208a

Hartung, T. and Rovida, C. (2009). Chemical regulators have overreached. Nature 460, 1080-1081. doi:10.1038/4601080a

Hartung, T. and Hoffmann, S. (2009). Food for thought ... on in silico methods in toxicology. ALTEX 26, 155-166. doi:10.14573/altex.2009.3.155

Hartung, T. (2010a). Evidence-based toxicology--The toolbox of validation for the 21st century? ALTEX 27, 253-263. doi:10.14573/altex.2010.4.253

Hartung, T. (2010b). Food for thought ... on alternative methods for chemical safety testing. ALTEX 27, 3-14. doi:10.14573/altex.2010.1.3

Hartung, T., Blaauboer, B. J., Bosgra, S. et al. (2011). An expert consortium review of the EC-commissioned report "Alternative (non-animal) methods for cosmetics testing: current status and future prospects--2010". ALTEX 28, 183-209. doi:10.14573/altex.2011.3.183

Hartung, T., van Vliet, E., Jaworska, J. et al. (2012). Systems toxicology. ALTEX 29, 119-128. doi:10.14573/altex.2012.2.119

Hartung, T. (2013). Look back in anger--What clinical studies tell us about preclinical work. ALTEX 30, 275-291. doi:10.14573/altex.2013.3.275

Hartung, T., Luechtefeld, T., Maertens, A. and Kleensang, A. (2013). Integrated testing strategies for safety assessments. ALTEX 30, 3-18. doi:10.14573/altex.2013.1.003

Hartung, T. (2014). 3D--A new dimension of in vitro research. Adv Drug Deliv Rev 69-70, vi. Preface Special Issue "Innovative tissue models for in vitro drug development".

Hartung, T. (2016). Making big sense from big data in toxicology by read-across. ALTEX 33, 83-93. doi:10.14573/altex.1603091

Hartung, T. (2017a). Food for thought ... the first ten years. ALTEX 34, 187-192. doi:10.14573/altex.1703311

Hartung, T. (2017b). Opinion versus evidence for the need to move away from animal testing. ALTEX 34, 193-200. doi:10.14573/altex.1703291

Hartung, T. (2017c). Thresholds of toxicological concern--Setting a threshold for testing where there is little concern. ALTEX 34, 331-351. doi:10.14573/altex.1707011

Hartung, T. (2017d). Utility of the adverse outcome pathway concept in drug development. Expert Opin Drug Metab Toxicol 13, 1-3. doi:10.1080/17425255.2017.1246535

Hartung, T., FitzGerald, R., Jennings, P. et al. (2017a). Systems toxicology--Real world applications and opportunities. Chem Res Toxicol 30, 870-882. doi:10.1021/acs.chemrestox.7b00003

Hartung, T., Kavlock, R. and Sturla, S. (2017b). Systems toxicology II: A special issue. Chem Res Toxicol 30, 869-869. doi:10.1021/acs.chemrestox.7b00038

Hartung, T. (2018). Perspectives on in vitro to in vivo extrapolations. J Appl In Vitro Toxicol, in press. doi:10.1089/aivt.2016.0026

Haseman, J. K., Boorman, G. A. and Huff, J. (1997). Value of historical control data and other issues related to the evaluation of long-term rodent carcinogenicity studies. Toxicol Pathol 25, 524-527. doi:10.1177/019262339702500518

Hatzivassiliou, G., Song, K., Yen, I. et al. (2010). RAF inhibitors prime wild-type RAF to activate the MAPK pathway and enhance growth. Nature 464, 431-435. doi:10.1038/nature08833

Hengstler, J. G., van der Burg, B., Steinberg, P. and Oesch, F. (1999). Interspecies differences in cancer susceptibility and toxicity. Drug Metab Rev 31, 917-970. doi:10.1081/DMR-100101946

Hengstler, J. G., Marchan, R. and Leist, M. (2012). Highlight report: Towards the replacement of in vivo repeated dose systemic toxicity testing. Arch Toxicol 86, 13-15. doi:10.1007/s00204-011-0798-7

Herwig, R., Gmuender, H., Corvi, R. et al. (2016). Inter-laboratory study of human in vitro toxicogenomics-based tests as alternative methods for evaluating chemical carcinogenicity: A bioinformatics perspective. Arch Toxicol 90, 2215-2229. doi:10.1007/s00204-015-1617-3

Hoffmann, S. and Hartung, T. (2006). Towards an evidence-based toxicology. Hum Exp Toxicol 25, 497-513. doi:10.1191/0960327106het648oa

Hoffmann, S., Edler, L., Gardner, I. et al. (2008). Points of reference in validation--The report and recommendations of ECVAM Workshop. Altern Lab Anim 36, 343-352.

Hoffmann, S., de Vries, R. B. M., Stephens, M. L. et al. (2017). A primer on systematic reviews in toxicology. Arch Toxicol 91, 2551-2575. doi:10.1007/s00204-017-1980-3

Hoke, R. A. and Ankley, G. T. (2005). Application of frog embryo teratogenesis assay-Xenopus to ecological risk assessment. Environ Toxicol Chem 24, 2677-2690. doi:10.1897/04-506R.1

Hotchkiss, A. K., Rider, C. V., Blystone, C. R. et al. (2008). Fifteen years after "Wingspread"--Environmental endocrine disrupters and human and wildlife health: Where we are today and where we need to go. Toxicol Sci 105, 235-259. doi:10.1093/toxsci/kfn030

Hurtt, M. E., Cappon, G. D. and Browning, A. (2003). Proposal for a tiered approach to developmental toxicity testing for veterinary pharmaceutical products for food-producing animals. Food Chem Toxicol 41, 611-619. doi:10.1016/S0278-6915(02)00326-5

Hutson, M. S., Leung, M., Baker, N. C. et al. (2017). Computational model of secondary palate fusion and disruption. Chem Res Toxicol 30, 965-979. doi:10.1021/acs.chemrestox.6b00350

ICCVAM (2018). A Strategic Roadmap for Establishing New Approaches to Evaluate the Safety of Chemicals and Medical Products in the United States. January 2018. https://ntp.niehs.nih.gov/go/natl-strategy (accessed 22.03.2018).

ICH (2009). S1B Guideline on Carcinogenicity Testing of Pharmaceuticals. EMA Document CPMP/ICH/299/95. https://bit.ly/2H3PTrw

ICH Status Report (2016). The ICH S1 Regulatory Testing Paradigm of Carcinogenicity in Rats--Status Report. Safety Guidelines, 2 March, 1-5. https://bit.ly/2IzczwZ

ILSI HESI ACT (2001). ILSI HESI Alternatives to carcinogenicity testing project. Toxicol Pathol 29, Suppl, 1-351.

Jacobs, M. N., Colacci, A., Louekari, K. et al. (2016). International regulatory needs for development of an IATA for nongenotoxic carcinogenic chemical substances. ALTEX 33, 359-392. doi:10.14573/altex.1601201

Janer, G., Hakkert, B. C., Slob, W. et al. (2007). A retrospective analysis of the two-generation study: What is the added value of the second generation? Reprod Toxicol 24, 97-102. doi:10.1016/j.reprotox.2007.04.068

Jemal, A., Siegel, R., Ward, E. et al. (2009). Cancer statistics, 2009. CA Cancer J Clin 59, 225-249. doi:10.3322/caac.20006

Jilka, R. L. (2016). The road to reproducibility in animal research. J Bone Mineral Res 31, 1317-1319. doi:10.1002/jbmr.2881

Johnson, F. M. (2001). Response to Tennant et al.: Attempts to replace the NTP rodent bioassay with transgenic alternatives are unlikely to succeed. Environ Mol Mutagen 37, 89-92. doi:10.1002/1098-2280(2001)37:1<89::AID-EM1011>3.0.CO;2-4

Joyner, M. J. and Pedersen, B. K. (2011). Ten questions about systems biology. J Physiol 589, 1017-1030. doi:10.1113/jphysiol.2010.201509

Judson, R., Houck, K., Martin, M. et al. (2016). Analysis of the effects of cell stress and cytotoxicity on in vitro assay activity across a diverse chemical and assay space. Toxicol Sci 152, 323-339. doi:10.1093/toxsci/kfw092

Kennedy, T. (1997). Managing the drug discovery/development interface. Drug Discov Today 2, 436-444. doi:10.1016/S1359-6446(97)01099-4

Kessler, R. (2014). Air of danger. Nature 509, S62. doi:10.1038/509S62a

Kholodenko, B. N. (2015). Drug resistance resulting from kinase dimerization is rationalized by thermodynamic factors describing allosteric inhibitor effects. Cell Reports 12, 1939-1949. doi:10.1016/j.celrep.2015.08.014

Kiani, N. A., Shang, M.-M. and Tegner, J. (2016). Systems toxicology: Systematic approach to predict toxicity. Curr Pharm Des 22, 6911-6917. doi:10.2174/1381612822666161003115629

Kilic, O., Pamies, D., Lavell, E. et al. (2016). Microphysiological brain model enables analysis of neuronal differentiation and chemotaxis. Lab Chip 16, 4152-4162. doi:10.1039/C6LC00946H

Kim, J. H. and Scialli, A. R. (2011). Thalidomide: The tragedy of birth defects and the effective treatment of disease. Toxicol Sci 122, 1-6. doi:10.1093/toxsci/kfr088

Kirkland, D., Aardema, M., Henderson, L. and Muller, L. (2005). Evaluation of the ability of a battery of three in vitro genotoxicity tests to discriminate rodent carcinogens and non-carcinogens I. Sensitivity, specificity and relative predictivity. Mutat Res 584, 1-256. doi:10.1016/j.mrgentox.2005.02.004

Kirkland, D., Pfuhler, S., Tweats, D. et al. (2007). How to reduce false positive results when undertaking in vitro genotoxicity testing and thus avoid unnecessary follow up animal tests: Report of an ECVAM Workshop. Mutat Res 628, 31-55. doi:10.1016/j.mrgentox.2006.11.008

Kirkland, D., Reeve, L., Gatehouse, D. and Vanparys, P. (2011). A core in vitro genotoxicity battery comprising the Ames test plus the in vitro micronucleus test is sufficient to detect rodent carcinogens and in vivo genotoxins. Mutat Res 721, 27-73. doi:10.1016/j.mrgentox.2010.12.015

Kleinstreuer, N., Dix, D., Houck, K. et al. (2013a). In vitro perturbations of targets in cancer hallmark processes predict rodent chemical carcinogenesis. Toxicol Sci 131, 40-55. doi:10.1093/toxsci/kfs285

Kleinstreuer, N., Dix, D., Rountree, M. et al. (2013b). A computational model predicting disruption of blood vessel development. PLoS Comput Biol 9, e1002996. doi:10.1371/journal.pcbi.1002996

Kleinstreuer, N., Houck, K., Yang, J. et al. (2014). Phenotypic screening of the ToxCast chemical library to classify toxic and therapeutic mechanisms. Nat Biotechnol 32, 583-591. doi:10.1038/nbt.2914

Kleinstreuer, N. C., Ceger, P., Watt, E. D. et al. (2016). Development and validation of a computational model for androgen receptor activity. Chem Res Toxicol 30, 946-964. doi:10.1021/acs.chemrestox.6b00347

Kleinstreuer, N., Hoffmann, S., Alepee, N. et al. (2018). Non-animal methods to predict skin sensitization (II): An assessment of defined approaches. Crit Rev Toxicol 23, 1-16. doi:10.1080/10408444.2018.1429386

Knight, A., Bailey, J. and Balcombe, J. (2006a). Animal carcinogenicity studies: 2. Obstacles to extrapolation of data to humans. Altern Lab Anim 34, 29-38.

Knight, A., Bailey, J. and Balcombe, J. (2006b). Animal carcinogenicity studies: 1. Poor human predictivity. Altern Lab Anim 34, 19-27.

Knudsen, T. B., Kavlock, R. J., Daston, G. P. et al. (2011). Developmental toxicity testing for safety assessment: New approaches and technologies. Birth Defects Res B Dev Reprod Toxicol 92, 413-420. doi:10.1002/bdrb.20315

Knudsen, T. B. and Kleinstreuer, N. C. (2012). Disruption of embryonic vascular development in predictive toxicology. Birth Defects Res C Embryo Today 93, 312-323. doi:10.1002/bdrc.20223

Kola, I. and Landis, J. (2004). Can the pharmaceutical industry reduce attrition rates? Nat Rev Drug Discov 3, 711-715. doi:10.1038/nrd1470

Kubinyi, H. (2003). Drug research: Myths, hype and reality. Nat Rev Drug Discov 2, 665-668. doi:10.1038/nrd1156

Laufersweiler, M. C., Gadagbui, B., Baskerville-Abraham, I. M. et al. (2012). Correlation of chemical structure with reproductive and developmental toxicity as it relates to the use of the threshold of toxicological concern. Regul Toxicol Pharmacol 62, 160-182. doi:10.1016/j.yrtph.2011.09.004

Lave, L. B., Ennever, F. K., Rosenkranz, H. S. and Omenn, G. S. (1988). Information value of the rodent bioassay. Nature 336, 631-633. doi:10.1038/336631a0

LeBoeuf, R. A., Kerckaert, K. A., Aardema, M. J. and Isfort, R. J. (1999). Use of Syrian hamster embryo and BALB/c 3T3 cell transformation for assessing the carcinogenic potential of chemicals. IARC Sci Publ 146, 409-425.

Lee, M. J., Ye, A. S., Gardino, A. K. et al. (2012). Sequential application of anticancer drugs enhances cell death by rewiring apoptotic signaling networks. Cell 149, 780-794. doi:10.1016/j.cell.2012.03.031

Leist, M., Bremer, S., Brundin, P. et al. (2008). The biological and ethical basis of the use of human embryonic stem cells for in vitro test systems or cell therapy. ALTEX 25, 163-190. doi:10.14573/altex.2008.3.163

Leist, M., Hasiwa, M., Daneshian, M. and Hartung, T. (2012). Validation and quality control of replacement alternatives--Current status and future challenges. Toxicol Res 1, 8. doi:10.1039/c2tx20011b

Leist, M., Hasiwa, N., Rovida, C. et al. (2014). Consensus report on the future of animal-free systemic toxicity testing. ALTEX 31, 341-356. doi:10.14573/altex.1406091

Leist, M., Ghallab, A., Graepel, R. et al. (2017). Adverse outcome pathways: Opportunities, limitations and open questions. Arch Toxicol 91, 3477-3505. doi:10.1007/s00204-017-2045-3

Leung, M. C., Phuong, J., Baker, N. C. et al. (2016). Systems toxicology of male reproductive development: Profiling 774 chemicals for molecular targets and adverse outcomes. Environ Health Perspect 124, 1050-1061. doi:10.1289/ehp.1510385

Luechtefeld, T., Maertens, A., Russo, D. P. et al. (2016a). Global analysis of publicly available safety data for 9,801 substances registered under REACH from 2008-2014. ALTEX 33, 95-109. doi:10.14573/altex.1510052

Luechtefeld, T., Maertens, A., Russo, D. P. et al. (2016b). Analysis of public oral toxicity data from REACH registrations 2008-2014. ALTEX 33, 111-122. doi:10.14573/altex.1510054

Luechtefeld, T. and Hartung, T. (2017). Computational approaches to chemical hazard assessment. ALTEX 34, 459-478. doi:10.14573/altex.1710141

MacDonald, J., French, J. E., Gerson, R. J. et al. (2004). The utility of genetically modified mouse assays for identifying human carcinogens: A basic understanding and path forward. Toxicol Sci 77, 188-194. doi:10.1093/toxsci/kfh037

Madia, F., Worth, A. and Corvi, R. (2016). Analysis of carcinogenicity testing for regulatory purposes in the European Union (92 pp). JRC Report EUR 27765.

Mandrioli, D. and Silbergeld, E. K. (2016). Evidence from toxicology: The most essential science for prevention. Environ Health Perspect 124, 6-11. doi:10.1289/ehp.1509880

Martin, M. T., Judson, R. S., Reif, D. M. et al. (2009a). Profiling chemicals based on chronic toxicity results from the U.S. EPA ToxRef database. Environ Health Perspect 117, 392-399. doi:10.1289/ehp.0800074

Martin, M. T., Mendez, E., Corum, D. G. et al. (2009b). Profiling the reproductive toxicity of chemicals from multigeneration studies in the toxicity reference database. Toxicol Sci 110, 181-190. doi:10.1093/toxsci/kfp080

Marx, J. (2003). Building better mouse models for studying cancer. Science 299, 1972-1975. doi:10.1126/science.299.5615.1972

Marx, U., Andersson, T. B., Bahinski, A. et al. (2016). Biology-inspired microphysiological system approaches to solve the prediction dilemma of substance testing using animals. ALTEX 33, 272-321. doi:10.14573/altex.1603161

Marx-Stoelting, P., Adriaens, E., Ahr, H. J. et al. (2009). A review of the implementation of the embryonic stem cell test (EST). The report and recommendations of an ECVAM/ReProTect Workshop. Altern Lab Anim 37, 313-328.

Mattison, D. R. (2010). Environmental exposures and development. Curr Opin Pediatr 22, 208-218. doi:10.1097/MOP.0b013e32833779bf

McCollum, C. W., de Vancells, J. C., Hans, C. et al. (2016). Identification of vascular disruptor compounds by analysis in zebrafish embryos and mouse embryonic endothelial cells. Reprod Toxicol 70, 60-69. doi:10.1016/j.reprotox.2016.11.005

Miller, G. W. (2014). Improving reproducibility in toxicology. Toxicol Sci 139, 1-3. doi:10.1093/toxsci/kfu050

Moore, N., Bremer, S., Carmichael, N. et al. (2009). A modular approach to the extended one-generation reproduction toxicity study: Outcome of an ECETOC task force and International ECETOC/ECVAM workshop. Altern Lab Anim 37, 219-225.

Morgan, R. L., Thayer, K. A., Bero, L. et al. (2016). GRADE: Assessing the quality of evidence in environmental and occupational health. Environ Int 92-93, 611-616. doi:10.1016/j.envint.2016.01.004

OECD (2005). Guidance Document on the Validation and International Acceptance of New or Updated Test Methods for Hazard Assessment. Series on Testing and Assessment No. 34, ENV/JM/MONO(2005)14. Paris: OECD Publishing. http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?doclanguage=en&cote=env/jm/mono(2005)14

OECD (2007). Detailed Review Paper on Cell Transformation Assays for Detection of Chemical Carcinogens. Series on Testing and Assessment No. 31. Paris: OECD Publishing. http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=ENV/JM/MONO%282007%2918&docLanguage=En

OECD (2009). Carcinogenicity Studies. OECD Guidelines for Chemical Testing, TG No. 451. http://www.oecd.org/dataoecd/30/46/41753121.pdf

OECD (2011). Extended One-Generation Reproductive Toxicity Study. OECD Guidelines for the Testing of Chemicals, TG No. 443. Paris: OECD Publishing. doi:10.1787/9789264122550-en

OECD (2015). Guidance Document on the in vitro Syrian Hamster Embryo (SHE) Cell Transformation Assay. Series on Testing and Assessment No. 214. Paris: OECD Publishing. http://www.oecd.org/env/ehs/testing/Guidance-Document-on-the-in-vitro-Syrian-Hamster-Embryo-Cell-Transformation-Assay.pdf

OECD (2016). Guidance Document on the in vitro Bhas 42 Cell Transformation Assay. Series on Testing and Assessment No. 231. Paris: OECD Publishing. https://www.oecd.org/env/ehs/testing/ENV_JM_MONO(2016)1.pdf

Pamies, D. and Hartung, T. (2017). 21st century cell culture for 21st century toxicology. Chem Res Toxicol 30, 43-52. doi:10.1021/acs.chemrestox.6b00269

Pamies, D., Barreras, P., Block, K. et al. (2017a). A human brain microphysiological system derived from iPSC to study central nervous system toxicity and disease. ALTEX 34, 362-376. doi:10.14573/altex.1609122

Pamies, D., Bal-Price, A., Simeonov, A. et al. (2017b). Good cell culture practice for stem cells and stem-cell-derived models. ALTEX 34, 95-132. doi:10.14573/altex.1607121

Pamies, D., Block, K., Lau, P. et al. (2018a). Rotenone exerts developmental neurotoxicity in a human brain spheroid model. Toxicol Appl Pharmacol, Epub ahead of print. doi:10.1016/j.taap.2018.02.003

Pamies, D., Bal-Price, A., Chesne, C. et al. (2018b). Advanced good cell culture practice for human primary, stem cell-derived and organoid models as well as microphysiological systems. ALTEX, Epub ahead of print. doi:10.14573/altex.1710081

Paparella, M., Daneshian, M., Hornek-Gausterer, R. et al. (2013). Uncertainty of testing methods--What do we (want to) know? ALTEX 30, 131-144. doi:10.14573/altex.2013.2.131

Paparella, M., Colacci, A. and Jacobs, M. N. (2016). Uncertainties of testing methods: What do we (want to) know about carcinogenicity? ALTEX 34, 235-252. doi:10.14573/altex.1608281

Paules, R. S., Aubrecht, J., Corvi, R. et al. (2011). Moving forward in human cancer risk assessment. Environ Health Perspect 119, 739-743. doi:10.1289/ehp.1002735

Piersma, A. H., Genschow, E., Verhoef, A. et al. (2004). Validation of the postimplantation rat whole-embryo culture test in the international ECVAM validation study on three in vitro embryotoxicity tests. Altern Lab Anim 32, 275-307.

President's Cancer Panel (2010). Reducing environmental cancer risk. NIH. https://deainfo.nci.nih.gov/advisory/pcp/annualreports/pcp08-09rpt/pcp_report_08-09_508.pdf

Pound, P. and Bracken, M. B. (2014). Is animal research sufficiently evidence based to be a cornerstone of biomedical research? BMJ 348, g3387. doi:10.1136/bmj.g3387

Pritchard, J. B., French, J. E., Davis, B. J. and Haseman, J. K. (2003). The role of transgenic mouse models in carcinogen identification. Environ Health Perspect 111, 444-454. doi:10.1289/ehp.5778

Rall, D. P. (2000). Laboratory animal tests and human cancer. Drug Metab Rev 32, 119-128. doi:10.1081/DMR-100100565

Rodgers, K. M., Udesky, J. O., Rudel, R. A. and Brody, J. G. (2018). Environmental chemicals and breast cancer: An updated review of epidemiological literature informed by biological mechanisms. Environ Res 160, 152-182. doi:10.1016/j.envres.2017.08.045

Rovida, C., Longo, F. and Rabbit, R. R. (2011). How are reproductive toxicity and developmental toxicity addressed in REACH dossiers? ALTEX 28, 273-294. doi:10.14573/altex.2011.4.273

Rovida, C., Alepee, N., Api, A. M. et al. (2015). Integrated testing strategies (ITS) for safety assessment. ALTEX 32, 171-181. doi:10.14573/altex.1411011

Sanz, F., Pognan, F., Steger-Hartmann, T. et al. (2017). Legacy data sharing to improve drug safety assessment: The eTOX project. Nat Rev Drug Discov 16, 811-812. doi:10.1038/nrd.2017.177

Sauer, J. M., Hartung, T., Leist, M. et al. (2015). Systems toxicology: The future of risk assessment. Int J Toxicol 34, 346-348. doi:10.1177/1091581815576551

Schaap, M. M., Wackers, P. F., Zwart, E. P. et al. (2015). A novel toxicogenomics-based approach to categorize (non-)genotoxic carcinogens. Arch Toxicol 89, 2413-2427. doi:10.1007/s00204-014-1368-6

Schmidt, C. W. (2002). Assessing assays. Environ Health Perspect 110, A248-A251. doi:10.1289/ehp.110-a248

Schwarzman, M. R., Ackerman, J. R., Dairkee, S. H. et al. (2015). Screening for chemical contributions to breast cancer risk: A case study for chemical safety evaluation. Environ Health Perspect 123, 1255-1264. doi:10.1289/ehp.1408337

Seidle, T. (2006). Chemicals and cancer: What the regulators won't tell you about carcinogenicity testing. PETA Europe Ltd. https://www.peta.de/mediadb/EUreport300.pdf

Selderslaghs, I. W. T., Blust, R. and Witters, H. E. (2012). Feasibility study of the zebrafish assay as an alternative method to screen for developmental toxicity and embryotoxicity using a training set of 27 compounds. Reprod Toxicol 33, 142-154. doi:10.1016/j.reprotox.2011.08.003

Silbergeld, E. K. (2004). Commentary: The role of toxicology in prevention and precaution. Int J Occup Med Environ Health 17, 91-102.

Sistare, F. D., Morton, D., Alden, C. et al. (2011). An analysis of pharmaceutical experience with decades of rat carcinogenicity testing: Support for a proposal to modify current regulatory guidelines. Toxicol Pathol 39, 716-744. doi:10.1177/0192623311406935

Skardal, A., Shupe, T. and Atala, A. (2016). Organoid-on-a-chip and body-on-a-chip systems for drug screening and disease modeling. Drug Discov Today 21, 1399-1411. doi:10.1016/j.drudis.2016.07.003

Skardal, A., Murphy, S., Devarasetty, M. et al. (2017). Multi-tissue interactions in an integrated three-tissue organ-on-a-chip platform. Sci Rep 7, 8837. doi:10.1038/s41598-017-08879-x

Slikker, W. (2014). Of human-on-a-chip and humans: Considerations for creating and using microphysiological systems. Exp Biol Med 239, 1078-1079. doi:10.1177/1535370214537754

Smirnova, L., Harris, G., Leist, M. and Hartung, T. (2015). Cellular resilience. ALTEX 32, 247-260. doi:10.14573/altex.1509271

Smith, M. T., Guyton, K. Z., Gibbons, C. F. et al. (2016). Key characteristics of carcinogens as a basis for organizing data on mechanisms of carcinogenesis. Environ Health Perspect 124, 713-721. doi:10.1289/ehp.1509912

Spielmann, H., Genschow, E., Brown, N. A. et al. (2004). Validation of the rat limb bud micromass test in the international ECVAM validation study on three in vitro embryotoxicity tests. Altern Lab Anim 32, 245-274.

Spielmann, H., Seiler, A., Bremer, S. et al. (2006). The practical application of three validated in vitro embryotoxicity tests. The report and recommendations of an ECVAM/ZEBET workshop (ECVAM workshop 57). Altern Lab Anim 34, 527-538.

Stephens, M. L., Andersen, M., Becker, R. A. et al. (2013). Evidence-based toxicology for the 21st century: Opportunities and challenges. ALTEX 30, 74-103. doi:10.14573/altex.2013.1.074

Stephens, M. L., Betts, K., Beck, N. B. et al. (2016). The emergence of systematic review in toxicology. Toxicol Sci 152, 10-16. doi:10.1093/toxsci/kfw059

Sturla, S. J., Boobis, A. R., FitzGerald, R. E. et al. (2014). Systems toxicology: From basic research to risk assessment. Chem Res Toxicol 27, 314-329. doi:10.1021/tx400410s

Sukardi, H., Chang, H. T., Chan, E. C. et al. (2011). Zebrafish for drug toxicity screening: Bridging the in vitro cell-based models and in vivo mammalian models. Expert Opin Drug Metab Toxicol 7, 579-589. doi:10.1517/17425255.2011.562197

Suter-Dick, L., Alves, P. M., Blaauboer, B. J. et al. (2015). Stem cell-derived systems in toxicology assessment. Stem Cells Dev 24, 1284-1296. doi:10.1089/scd.2014.0540

Takayama, S., Thorgeirsson, U. P. and Adamson, R. H. (2008). Chemical carcinogenesis studies in nonhuman primates. Proc Jpn Acad Ser B Phys Biol Sci 84, 176-188. doi:10.2183/pjab.84.176

Tal, T., Kilty, C., Smith, A. et al. (2016). Screening for angiogenic inhibitors in zebrafish to evaluate a predictive model for developmental vascular toxicity. Reprod Toxicol 70, 70-81. doi:10.1016/j.reprotox.2016.12.004

Tennant, R. W., Stasiewicz, S., Mennear, J. et al. (1999). Genetically altered mouse models for identifying carcinogens. IARC Sci Publ 146, 123-150.

Thilly, W. G. (2003). Have environmental mutagens caused oncomutations in people? Nat Genet 34, 255-259. doi:10.1038/ng1205

Tsaioun, K., Blaauboer, B. J. and Hartung, T. (2016). Evidence-based absorption, distribution, metabolism, excretion and toxicity (ADMET) and the role of alternative methods. ALTEX 33, 343-358. doi:10.14573/altex.1610101

Vaccari, M., Mascolo, M. G., Rotondo, F. et al. (2015). Identification of pathway-based toxicity in the BALB/c 3T3 cell model. Toxicol In Vitro 29, 1240-1253. doi:10.1016/j.tiv.2014.10.002

van der Laan, J. W., Kasper, P., Silva Lima, B. et al. (2016). Critical analysis of carcinogenicity study outcomes. Relationship with pharmacological properties. Crit Rev Toxicol 46, 587-614. doi:10.3109/10408444.2016.1163664

Van Oosterhout, J. P., Van der Laan, J. W., De Waal, E. J. et al. (1997). The utility of two rodent species in carcinogenic risk assessment of pharmaceuticals in Europe. Regul Toxicol Pharmacol 25, 6-17. doi:10.1006/rtph.1996.1077

van Ravenzwaay, B. (2010). Initiatives to decrease redundancy in animal testing of pesticides. ALTEX 27, 159-161.

van Ravenzwaay, B., Dammann, M., Buesen, R. et al. (2011). The threshold of toxicological concern for prenatal developmental toxicity. Regul Toxicol Pharmacol 59, 81-90. doi:10.1016/j.yrtph.2010.09.009

van Ravenzwaay, B., Dammann, M., Buesen, R. et al. (2012). The threshold of toxicological concern for prenatal developmental toxicity in rabbits and a comparison to TTC values in rats. Regul Toxicol Pharmacol 64, 1-8. doi:10.1016/j.yrtph.2012.06.004

van Ravenzwaay, B., Jiang, X., Luechtefeld, T. and Hartung, T. (2017). The threshold of toxicological concern for prenatal developmental toxicity in rats and rabbits. Regul Toxicol Pharmacol 88, 157-172. doi:10.1016/j.yrtph.2017.06.008

Voelkl, B., Vogt, L., Sena, E. S. and Wurbel, H. (2018). Reproducibility of preclinical animal research improves with heterogeneity of study samples. PLoS Biol 16, e2003693. doi:10.1371/journal.pbio.2003693

Wang, B. and Gray, G. (2015). Concordance of Noncarcinogenic Endpoints in Rodent Chemical Bioassays. Risk Analysis 35, 1154-1166. doi:10.1111/risa.12314

Waters, M. D. (2016). Introduction to predictive carcinogenicity. In M. D. Waters and R. S. Thomas (eds.), Toxicogenomics in Predictive Carcinogenicity. Issues in Toxicology No. 28. Cambridge, UK: Royal Society of Chemistry. doi:10.1039/9781782624059-00001

Watson, D. E., Hunziker, R. and Wikswo, J. P. (2017). Fitting tissue chips and microphysiological systems into the grand scheme of medicine, biology, pharmacology, and toxicology. Exp Biol Med 242, 1559-1572. doi:10.1177/1535370217732765

Weigt, S., Huebler, N., Braunbeck, T. et al. (2010). Zebrafish teratogenicity test with metabolic activation (mDarT): Effects of phase I activation of acetaminophen on zebrafish Danio rerio embryos. Toxicology 275, 36-49. doi:10.1016/j.tox.2010.05.012

Worth, A., Barroso, J. F., Bremer, S. et al. (2014). Alternative Methods for Regulatory Toxicology--A State-of-the-art Review. 470 pp. JRC Report EUR 26797. https://bit.ly/2q91DiM

Disclaimer: The views presented are those of the individual authors and do not necessarily reflect those of all authors or those of their institutions or official federal government policy. This article does not necessarily reflect the policy of the National Toxicology Program, National Institutes of Health, or the Food and Drug Administration.

Abbreviations: NIH, National Institutes of Health, USA; FDA, Food and Drug Administration, USA; DARPA, Defense Advanced Research Projects Agency, USA

The authors declare the following competing financial interest(s): T.H. is the founder of Organome LLC, Baltimore, and consults for AstraZeneca, Cambridge, UK, in the field of organotypic cultures/MPS. A.L. is a co-founder of Sidera Medicine. The opinions expressed in this article are not informed by these affiliations.

This work was supported by the EU-ToxRisk project (An Integrated European "Flagship" Program Driving Mechanism-Based Toxicity Testing and Risk Assessment for the 21st Century) funded by the European Commission under the Horizon 2020 program (Grant Agreement No. 681002). The work on human BrainSpheres mentioned was supported by NIH NCATS (grant U18TR000547 "A 3D Model of Human Brain Development for Studying Gene/Environment Interactions", PI Hartung) and Alternatives Research & Development Foundation ("A 3D in vitro 'mini-brain' model to study Parkinson's disease", PI Hartung). Andre Levchenko is the PI of a U54 NCI Cancer Systems Biology grant CA209992.

Center for Alternatives to Animal Testing

Johns Hopkins Bloomberg School of Public Health

W7032, Baltimore, MD 21205, USA

Lena Smirnova [1], Nicole Kleinstreuer [2], Raffaella Corvi [3], Andre Levchenko [4], Suzanne C. Fitzpatrick [5] and Thomas Hartung [1,6]

[1] Johns Hopkins University, Bloomberg School of Public Health, Center for Alternatives to Animal Testing (CAAT), Baltimore, MD, USA; [2] NIH/NIEHS/DNTP/NICEATM, RTP, NC, USA; [3] European Commission, Joint Research Centre (JRC), EU Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM), Ispra (VA), Italy; [4] Yale Systems Biology Institute and Biomedical Engineering Department, Yale University, New Haven, CT, USA; [5] Food and Drug Administration (FDA), Center for Food Safety and Applied Nutrition, College Park, MD, USA; [6] CAAT-Europe, University of Konstanz, Konstanz, Germany

Received April 5, 2018. doi:10.14573/altex.1804051

This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is appropriately cited.

Caption: Fig. 1: The 3S approach to study systemic phenomena

Caption: Fig. 2: The FDA-DARPA-NIH Microphysiological Systems Program

Caption: Fig. 3: The concept of performance standard-based validation. The different elements for anchoring a validation in a correlative or mechanistic manner will be combined by expert consensus to define a performance standard meeting a test purpose.


Now, what does systemic mean?

Systemic is an adjective that means “of or relating to a system.” It is especially used to describe some phenomenon—an illness, a social problem—that affects every part of an entire system. Some near synonyms of systemic are structural, comprehensive, inherent, pervasive, ingrained, and extensive. Besides a plan or method, a system, as we saw in our first section, can also mean a group of parts or things that come together to form a whole; systemic draws on this sense of system.

It helps to know a little history and anatomy when it comes to the word systemic. In biology, a system can refer to the cells and organs that work together to accomplish the same goal (such as the immune system) or to the entire organism (e.g., chocolate is toxic to a dog’s system).

Some of the earliest uses of systemic referred to diseases that affected more than one organ system (such as the circulatory system or digestive system). Systemic especially refers to diseases that affect the whole body. In medicine, systemic is still widely used this way today. Many types of cancer are systemic diseases, for instance, spreading throughout and harming the entire body.

System has also developed to refer to various nonliving systems that are large and complex—writing systems, belief systems, or, on the level of social institutions, systems of government, banking, and healthcare, to name a few. These systems involve various smaller parts, organizations, and levels that form an intricate whole. Think about the complexity of the English alphabet, with letters representing sounds and combining to represent words. Then think about the complexity of the federal government.

Especially when talking about social institutions like government or healthcare, systemic is used when discussing something that affects the whole—systemic problems, systemic change.

For example, some banks are so central to the global economy that they are considered systemic banks by certain oversight or regulatory agencies. A systemic bank, such as the Bank of America or Bank of China, is so important to the overall financial system that, if it were to fail, it could cause systemic damage—its failure could set off a collapse of the global economy.

Compared to systematic, systemic is the newer word, dating to 1795–1805. Systemic was formed within English as a combination of the word system and -ic, an adjective suffix commonly and originally appearing in Greek and Latin loanwords (e.g., public, metallic, poetic).


2. &ldquoPop Sociobiology&rdquo

The second way that "sociobiology" has come to be understood is as a particular approach to understanding specifically human behavior, which Philip Kitcher (1985) calls "Pop Sociobiology" (as opposed to his description of "narrow sociobiology", which is roughly equivalent to "behavioral ecology" above). Pop Sociobiology is so called because it is a view about how to study human behavior described in a variety of literature written by Wilson and others [4] for a general, rather than an academic, audience. In this literature, Wilson and the other "Pop sociobiologists" present some speculative and preliminary [5] sketches of how an evolutionary science of human behavior might proceed: Wilson's main focus in On Human Nature (Wilson, 1978), and to a lesser extent the last chapter of Sociobiology (Wilson, 1975), is to show that such a science is possible, to describe some of the techniques that might be used in pursuing it, and to sketch some possible evolutionary analyses for certain particular human behaviors. Because of its presentation in the popular press, "Pop Sociobiology" was probably important in shaping popular perceptions of the nature of sociobiology (for example, the Time article "Why you do what you do", 1977, 110 (Aug 1), 54) and consequently drew the ire of the critics. Unfortunately, the intensity of this debate may have led to a certain amount of mischaracterization of the sociobiologists' views. This section will address the main concerns that the critics raised about Wilson's early "Pop" sociobiology and discuss whether and how far these are fair descriptions of his views.

Genetic determinism. In a variety of articles, major critics of sociobiology such as Stephen J. Gould (1977, 251-259; 1978) and the so-called "Sociobiology Study Group" (hereafter SSG) (Allen et al., 1975; Sociobiology Study Group of Science for the People, 1976) claim that sociobiologists are strong genetic determinists. For example, according to the SSG, Wilson believes that there are particular genes "for" behavioral traits, including indoctrinability, territoriality, warfare and reciprocal altruism, and that these genes are subject to natural selection in a relatively straightforward way. Indeed, the SSG (1976) argue that claiming that traits have a selective origin requires that there are genes "for" them; Wilson's apparent acceptance that traits may often have a strong cultural component is said to be an error, since if this is true then evolutionary theory tells us nothing about the origin of such traits. Gould (1977) similarly claims that sociobiologists do not realize that genes only produce traits with a contribution from the environment.

Both of these claims are believed, even by other critics, to be unfair analyses of the views of the sociobiologists, and especially Wilson--for example, Kitcher, one of the strongest critics of sociobiology, takes Gould and the SSG to task on this point (Kitcher, 1985, 22-23). In On Human Nature, Wilson describes genes as, essentially, difference makers--he explicitly claims that differences in genes, even for heritable traits, only explain the variance in traits across a population; they are by no means independent causes of any trait in individuals, and variation in the environment also accounts for part of the variation in any trait (Wilson, 1978, 19). In at least one paper responding to the SSG, Wilson says that, on the question of the relative contributions to the variation in human behavior from variation in genes vs. variation in the environment, his "own views lie closer to the environmentalist than the genetic pole" (Wilson, 1976, 183). Wilson also does seem to be trying to support his claim that there are some human behaviors which are probably highly heritable: he describes a variety of different sorts of evidence that might identify them. This evidence includes cross-cultural appearance (e.g., Wilson, 1975, 550; Wilson, 1978, 20, 129); plausible homology with other closely related species (especially chimpanzees) (e.g., 1978, 20, 27, 151); early development of the trait in question (e.g., 1975, 551; 1978, 129); differences between individuals that arise without differences in their developmental environment (e.g., 1978, 128-130); genetic syndromes that cause behavioral differences (e.g., 1978, 43-45); and twin studies (e.g., 1978, 145). Finally, Wilson claims that trying to change human behavior from its heritable form usually fails or causes misery (Wilson, 1978, 20) [6]; he describes the failures of certain attempts to change the features of normal human behavior by massively changing the social environment, such as the persistence of family ties under slavery (Wilson, 1978, 136) and in the Israeli kibbutzim (1978, 134). Of course, whether or not all of the above is good evidence for his claims is very much up for debate (Kitcher, 1985; Sociobiology Study Group of Science for the People, 1976). It is worth bearing in mind that while Wilson thinks the evidence that some human behaviors are heritable is overwhelming (Wilson, 1978, 19), he does see many of his specific proposed evolutionary explanations as preliminary and speculative rather than fully formed (for example, Wilson is explicit that his discussion of homosexuality is preliminary: 1978, 146). For more discussion of the problems relating to heritability when studying the evolution of behavior, see section 4.2 below and the entry on heritability.
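The variance-partitioning point Wilson is making can be stated compactly with the standard quantitative-genetics identity (a textbook formula added here for clarity, not one drawn from the works under discussion), which assumes no gene-environment interaction or covariance:

    V_P = V_G + V_E,    H^2 = V_G / V_P

Here V_P, V_G and V_E are the phenotypic, genetic and environmental variances in a population, and broad-sense heritability H^2 is the fraction of phenotypic variance attributable to genetic variance. On this reading, heritability is a population-level statistic about differences; it says nothing about the causes of a trait in any given individual, which is exactly the "difference maker" reading of genes described above.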

Ignoring learning and culture. As a concomitant of the objection that Pop Sociobiology was committed to genetic determinism, its central players are also often accused of being insensitive to the problem of learning and culture, i.e., to the problem that many traits in which they are interested are simply not subject to natural selection at all, and that this state of affairs may indeed be common in humans (Kitcher, 1985; Sociobiology Study Group of Science for the People, 1976). However, Wilson, for example, clearly recognized the important role of culture in many behavioral traits (Wilson, 1976); indeed, he thought that even minor genetic differences that made a difference in behavior could be exaggerated by acquired culture--this is the so-called "multiplier effect" (although it is seriously in question whether the multiplier effect works--Maynard Smith and Warren, 1982). Furthermore, partly in response to these concerns on the part of his critics, Wilson eventually went on to publish Genes, Mind and Culture with Charles Lumsden (Lumsden and Wilson, 1981), which was an attempt to consider the effects of cultural transmission on the nature and spread of behavioral traits, and of the interaction between genes and culture. The book, however, was subject to heavy criticism (see, for example, Kitcher, 1985; Lewontin, 1981; Maynard Smith and Warren, 1982). The main concern raised was that there was little substance in the models the book provided--the most interesting features of these simply followed from the assumptions built into them, in particular, assumptions about the degree to which genes kept culture "on a leash" (Lumsden and Wilson, 1981, 13; Wilson, 1978, 167).

Strong adaptationism. The third problematic feature ascribed to Pop Sociobiology was its reliance on an overly strong form of adaptationism. In both the papers by the SSG (1976) and Gould and Lewontin's famous "Spandrels of San Marco" paper (Gould and Lewontin, 1979), the critics of sociobiology argue that sociobiologists are committed to a "Panglossian" adaptationism. While the "Spandrels" paper is directed at "adaptationists" generally, sociobiologists were some of its clear targets (for example, David Barash's (1976) work on jealousy in male bluebirds).

The central accusations of the "Spandrels" paper were as follows: that adaptationists treat all traits as adaptations; that when "atomizing" individuals into traits to study, they take no care to establish that the traits so atomized could actually independently evolve by natural selection; that they ignore developmental constraints on evolution; that they fail to identify traits that are prevalent due to causes other than natural selection; that they fail to distinguish between current adaptiveness and a past history of natural selection; and that they generate adaptationist hypotheses, fail to test them properly, and replace one such hypothesis with another, ignoring other interesting kinds of evolutionary and non-evolutionary explanation. Instead, according to Gould and Lewontin, adaptationists tell purely speculative, untestable "just so" stories and present them as science fact.

Again, insofar as Wilson and the other sociobiologists are being purely speculative, this criticism may be warranted: quite a lot of the evolutionary explanations of particular human behaviors that Wilson describes in the first and last chapters of Sociobiology and in On Human Nature are speculation on his part (although not entirely speculation). Perhaps the speculative adaptationist stories are appropriately described as "just so stories"; the question is whether such stories, treated as preliminary hypotheses, are problematic in themselves. Furthermore, while Wilson made no attempt to test any of his speculative hypotheses, behavioral ecologists do try to test adaptationist hypotheses about humans and other animals. Again, the proper question is whether these tests are appropriate or sufficient to establish the truth of the hypotheses in question. Gould and Lewontin do, however, make some more sophisticated objections to adaptationist methods; some of these will be discussed in Section 4.


Methods/design

Research question

What is the evidence on the forms of crime facilitated by biotechnology?

Objectives

To reveal evidence on (i) what forms of biotechnology have been shown to be prone to criminal exploitation, (ii) what crime types have been discussed as having already materialised, (iii) what types of crime are expected in the future and (iv) what the necessary conditions are for crime events to occur, with a view to informing their prevention.

Study overview

An overview of the study protocol is illustrated in a flow chart in Fig. 1.

