Toxicology is the study of poisons, or, more comprehensively, the identification and quantification of adverse outcomes associated with exposures to physical agents, chemical substances and other conditions. As such, toxicology draws upon most of the basic biological sciences, medical disciplines, epidemiology and some areas of chemistry and physics for information, research designs and methods. Toxicology ranges from basic research investigations on the mechanism of action of toxic agents through the development and interpretation of standard tests characterizing the toxic properties of agents. Toxicology provides important information for both medicine and epidemiology in understanding aetiology and in providing information as to the plausibility of observed associations between exposures, including occupations, and disease. Toxicology can be divided into standard disciplines, such as clinical, forensic, investigative and regulatory toxicology; toxicology can be considered by target organ system or process, such as immunotoxicology or genetic toxicology; toxicology can be presented in functional terms, such as research, testing and risk assessment.
It is a challenge to propose a comprehensive presentation of toxicology in this Encyclopaedia. This chapter does not present a compendium of information on toxicology or adverse effects of specific agents. This latter information is better obtained from databases that are continually updated, as described in the last section of this chapter. Moreover, the chapter does not attempt to set toxicology within specific subdisciplines, such as forensic toxicology. It is the premise of the chapter that the information provided is relevant to all types of toxicological endeavours and to the use of toxicology in various medical specialities and fields. In this chapter, topics are based primarily upon a practical orientation and integration with the intent and purpose of the Encyclopaedia as a whole. Topics are also selected for ease of cross-reference within the Encyclopaedia.
In modern society, toxicology has become an important element in environmental and occupational health. This is because many organizations, governmental and non-governmental, utilize information from toxicology to evaluate and regulate hazards in the workplace and nonoccupational environment. As part of prevention strategies, toxicology is invaluable, since it is the source of information on potential hazards in the absence of widespread human exposures. Toxicological methods are also widely used by industry in product development, to provide information useful in the design of specific molecules or product formulations.
The chapter begins with five articles on general principles of toxicology, which are important to the consideration of most topics in the field. The first general principles relate to understanding relationships between external exposure and internal dose. In modern terminology, “exposure” refers to the concentration or amount of a substance presented to individuals or populations: amounts found in specific volumes of air or water, or in masses of soil. “Dose” refers to the concentration or amount of a substance inside an exposed person or organism. In occupational health, standards and guidelines are often set in terms of exposure, or allowable limits on concentrations in specific situations, such as in air in the workplace. These exposure limits are predicated upon assumptions or information on the relationships between exposure and dose; however, often information on internal dose is unavailable. Thus, in many studies of occupational health, associations can be drawn only between exposure and response or effect. In a few instances, standards have been set based on dose (e.g., permissible levels of lead in blood or mercury in urine). While these measures are more directly correlated with toxicity, it is still necessary to back-calculate exposure levels associated with these levels for purposes of controlling risks.
The next article concerns the factors and events that determine the relationships between exposure, dose and response. The first factors relate to uptake, absorption and distribution: the processes that determine the actual transport of substances into the body from the external environment across portals of entry such as skin, lung and gut. These processes are at the interface between humans and their environments. The second factors, of metabolism, relate to understanding how the body handles absorbed substances. Some substances are transformed by cellular processes of metabolism, which can either increase or decrease their biological activity.
The concepts of target organ and critical effect have been developed to aid in the interpretation of toxicological data. Depending upon dose, duration and route of exposure, as well as host factors such as age, many toxic agents can induce a number of effects within organs and organisms. An important role of toxicology is to identify the important effect or sets of effects in order to prevent irreversible or debilitating disease. One important part of this task is the identification of the organ first or most affected by a toxic agent; this organ is defined as the “target organ”. Within the target organ, it is important to identify the important event or events that signal intoxication, or damage, in order to ascertain that the organ has been affected beyond the range of normal variation. This is known as the “critical effect”; it may represent the first event in a progression of pathophysiological stages (such as the excretion of small-molecular-weight proteins as a critical effect in nephrotoxicity), or it may represent the first and potentially irreversible effect in a disease process (such as formation of a DNA adduct in carcinogenesis). These concepts are important in occupational health because they define the types of toxicity and clinical disease associated with specific exposures, and in most cases reduction of exposure has as its goal the prevention of critical effects in target organs, rather than the prevention of every effect in every organ.
The next two articles concern important host factors that affect many types of responses to many types of toxic agents. These are: genetic determinants, or inherited susceptibility/resistance factors; and age, sex and other factors such as diet or co-existence of infectious disease. These factors can also affect exposure and dose, through modifying uptake, absorption, distribution and metabolism. Because working populations around the world vary with respect to many of these factors, it is critical for occupational health specialists and policy-makers to understand the way in which these factors may contribute to variabilities in response among populations and individuals within populations. In societies with heterogeneous populations, these considerations are particularly important. The variability of human populations must be considered in evaluating the risks of occupational exposures and in reaching rational conclusions from the study of nonhuman organisms in toxicological research or testing.
The section then provides two general overviews on toxicology at the mechanistic level. Mechanistically, modern toxicologists consider that all toxic effects manifest their first actions at the cellular level; thus, cellular responses represent the earliest indications of the body’s encounters with a toxic agent. It is further assumed that these responses represent a spectrum of events, from injury through death. Cell injury refers to specific processes utilized by cells, the smallest unit of biological organization within organs, to respond to challenge. These responses involve changes in the function of processes within the cell, including the membrane and its ability to take up, release or exclude substances; the directed synthesis of proteins from amino acids; and the turnover of cell components. These responses may be common to all injured cells, or they may be specific to certain types of cells within certain organ systems. Cell death is the destruction of cells within an organ system, as a consequence of irreversible or uncompensated cell injury. Toxic agents may cause cell death acutely because of certain actions such as poisoning oxygen transfer, or cell death may be the consequence of chronic intoxication. Cell death can be followed by replacement in some but not all organ systems, but in some conditions cell proliferation induced by cell death may be considered a toxic response. Even in the absence of cell death, repeated cell injury may induce stress within organs that compromises their function and affects their progeny.
The chapter is then divided into more specific topics, which are grouped into the following categories: mechanism, test methods, regulation and risk assessment. The mechanism articles mostly focus on target systems rather than organs. This reflects the practice of modern toxicology and medicine, which studies organ systems rather than isolated organs. Thus, for example, the discussion of genetic toxicology is not focused upon the toxic effects of agents within a specific organ but rather on genetic material as a target for toxic action. Likewise, the article on immunotoxicology discusses the various organs and cells of the immune system as targets for toxic agents. The methods articles are designed to be highly operational; they describe current methods in use in many countries for hazard identification, that is, the development of information related to biological properties of agents.
The chapter continues with five articles on the application of toxicology in regulation and policy-making, from hazard identification to risk assessment. The current practice in several countries, as well as IARC, is presented. These articles should enable the reader to understand how information derived from toxicology tests is integrated with basic and mechanistic inferences to derive quantitative information used in setting exposure levels and other approaches to controlling hazards in the workplace and general environment.
A summary of available toxicology databases, to which the readers of this encyclopaedia can refer for detailed information on specific toxic agents and exposures, can be found in Volume III (see “Toxicology databases” in the chapter Safe handling of chemicals, which provides information on many of these databases, their information sources, methods of evaluation and interpretation, and means of access). These databases, together with the Encyclopaedia, provide the occupational health specialist, the worker and the employer with the ability to obtain and use up-to-date information on toxicology and the evaluation of toxic agents by national and international bodies.
This chapter focuses upon those aspects of toxicology relevant to occupational safety and health. For that reason, clinical toxicology and forensic toxicology are not specifically addressed as subdisciplines of the field. Many of the same principles and approaches described here are used in these subdisciplines as well as in environmental health. They are also applicable to evaluating the impacts of toxic agents on nonhuman populations, a major concern of environmental policies in many countries. A concerted attempt has been made to enlist the perspectives and experiences of experts and practitioners from all sectors and from many countries; however, the reader may note a certain bias towards academic scientists in the developed world. Although the editor and contributors believe that the principles and practice of toxicology are international, the problems of cultural bias and narrowness of experience may well be evident in this chapter. The chapter editor hopes that readers of this Encyclopaedia will assist in ensuring the broadest perspective possible as this important reference continues to be updated and expanded.
Toxicity is the intrinsic capacity of a chemical agent to affect an organism adversely.
Xenobiotics is a term for “foreign substances”, that is, foreign to the organism. Its opposite is endogenous compounds. Xenobiotics include drugs, industrial chemicals, naturally occurring poisons and environmental pollutants.
Hazard is the potential for the toxicity to be realized in a specific setting or situation.
Risk is the probability that a specific adverse effect will occur. It is often expressed as the percentage of cases in a given population and during a specific time period. A risk estimate can be based upon actual cases or a projection of future cases, based upon extrapolations.
Toxicity rating and toxicity classification can be used for regulatory purposes. Toxicity rating is an arbitrary grading of doses or exposure levels causing toxic effects. The grading can be “supertoxic,” “highly toxic,” “moderately toxic” and so on. The most common ratings concern acute toxicity. Toxicity classification concerns the grouping of chemicals into general categories according to their most important toxic effect. Such categories can include allergenic, neurotoxic, carcinogenic and so on. This classification can be of administrative value as a warning and as information.
The dose-effect relationship is the relationship between dose and effect on the individual level. An increase in dose may increase the intensity of an effect, or a more severe effect may result. A dose-effect curve may be obtained at the level of the whole organism, the cell or the target molecule. Some toxic effects, such as death or cancer, are not graded but are “all or none” effects.
The dose-response relationship is the relationship between dose and the percentage of individuals showing a specific effect. With increasing dose a greater number of individuals in the exposed population will usually be affected.
It is essential to toxicology to establish dose-effect and dose-response relationships. In medical (epidemiological) studies a criterion often used for accepting a causal relationship between an agent and a disease is that effect or response is proportional to dose.
Several dose-response curves can be drawn for a chemical: one for each type of effect. The dose-response curve for most toxic effects (when studied in large populations) has a sigmoid shape. There is usually a low-dose range where no response is detected; as dose increases, the response follows an ascending curve that will usually reach a plateau at 100% response. The dose-response curve reflects the variations among individuals in a population. The slope of the curve varies from chemical to chemical and between different types of effects. For some chemicals with specific effects (carcinogens, initiators, mutagens) the dose-response curve might be linear from dose zero within a certain dose range. This means that no threshold exists and that even small doses represent a risk. Above that dose range, the risk may increase at greater than a linear rate.
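The sigmoid shape described above can be sketched with a Hill-type function; the ED50 and slope values below are illustrative assumptions, not data for any real chemical.

```python
# Illustrative sketch of a sigmoid (Hill-type) dose-response curve.
# All parameter values are assumptions chosen for demonstration.

def response_fraction(dose, ed50, hill=2.0):
    """Fraction of a population responding at a given dose.

    ed50 : dose producing a 50% response (assumed)
    hill : slope parameter; a steeper curve implies less
           variability among individuals in the population
    """
    if dose <= 0:
        return 0.0
    return dose**hill / (ed50**hill + dose**hill)

# Low doses give a near-zero response; the curve rises through
# 50% at the ED50 and plateaus towards 100%.
for d in (1, 10, 50, 100, 500):
    print(f"dose {d:>4}: {response_fraction(d, ed50=50):.1%} respond")
```

Note that this smooth curve describes graded population response; it cannot by itself settle whether a true threshold exists at low doses.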
Variation in exposure during the day and the total length of exposure during one’s lifetime may be as important for the outcome (response) as mean or average or even integrated dose level. High peak exposures may be more harmful than a more even exposure level. This is the case for some organic solvents. On the other hand, for some carcinogens, it has been experimentally shown that the fractionation of a single dose into several exposures with the same total dose may be more effective in producing tumours.
A dose is often expressed as the amount of a xenobiotic entering an organism (in units such as mg/kg body weight). The dose may be expressed in different (more or less informative) ways: exposure dose, which is the air concentration of pollutant inhaled during a certain time period (in work hygiene usually eight hours), or the retained or absorbed dose (in industrial hygiene also called the body burden), which is the amount present in the body at a certain time during or after exposure. The tissue dose is the amount of substance in a specific tissue and the target dose is the amount of substance (usually a metabolite) bound to the critical molecule. The target dose can be expressed as mg chemical bound per mg of a specific macromolecule in the tissue. To apply this concept, information on the mechanism of toxic action on the molecular level is needed. The target dose is more exactly associated with the toxic effect. The exposure dose or body burden may be more easily available, but these are less precisely related to the effect.
In the dose concept a time aspect is often included, even if it is not always expressed. The theoretical dose according to Haber’s law is D = ct, where D is dose, c is concentration of the xenobiotic in the air and t the duration of exposure to the chemical. If this concept is used at the target organ or molecular level, the amount per mg tissue or molecule over a certain time may be used. The time aspect is usually more important for understanding repeated exposures and chronic effects than for single exposures and acute effects.
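Haber’s law as stated above can be expressed directly; the concentrations and durations below are invented for illustration, and the comparison shows why a nominal dose alone can hide the peak-versus-even-exposure distinction discussed earlier.

```python
# Haber's law, D = c * t: the same nominal dose can arise from very
# different concentration/time combinations. Values are illustrative.

def haber_dose(concentration_mg_m3, hours):
    """Theoretical exposure dose D = c * t, in mg.h/m3."""
    return concentration_mg_m3 * hours

# A short high peak and a full shift at a lower level give the
# same D, although their biological effects may differ.
peak = haber_dose(40.0, 1)   # 40 mg/m3 for 1 hour -> D = 40.0
shift = haber_dose(5.0, 8)   # 5 mg/m3 over an 8-hour shift -> D = 40.0
```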
Additive effects occur as a result of exposure to a combination of chemicals, where the individual toxicities are simply added to each other (1+1=2). When chemicals act via the same mechanism, additivity of their effects is often assumed, although this is not always the case in reality. Interaction between chemicals may result in an inhibition (antagonism), with a smaller effect than that expected from addition of the effects of the individual chemicals (1+1<2). Alternatively, a combination of chemicals may produce a more pronounced effect than would be expected by addition (an increased response among individuals, or an increase in the frequency of response in a population); this is called synergism (1+1>2).
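The three interaction types can be summarized in a small bookkeeping sketch. The effect scale and the inhibition/potentiation factors are arbitrary assumptions; in practice, the interaction type must be determined experimentally for each chemical pair.

```python
# Illustrative bookkeeping for combined exposures. Effects are on an
# arbitrary scale; the 0.5 and 2.0 factors are invented assumptions.

def combined_effect(effect_a, effect_b, interaction="additive"):
    """Expected joint effect under a named interaction assumption."""
    expected = effect_a + effect_b
    if interaction == "additive":    # 1 + 1 = 2
        return expected
    if interaction == "antagonism":  # 1 + 1 < 2 (assumed inhibition)
        return expected * 0.5
    if interaction == "synergism":   # 1 + 1 > 2 (assumed potentiation)
        return expected * 2.0
    raise ValueError(f"unknown interaction: {interaction}")
```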
Latency time is the time between first exposure and the appearance of a detectable effect or response. The term is often used for carcinogenic effects, where tumours may appear a long time after the start of exposure and sometimes long after the cessation of exposure.
A dose threshold is a dose level below which no observable effect occurs. Thresholds are thought to exist for certain effects, like acute toxic effects; but not for others, like carcinogenic effects (by DNA-adduct-forming initiators). The mere absence of a response in a given population should not, however, be taken as evidence for the existence of a threshold. Absence of response could be due to simple statistical phenomena: an adverse effect occurring at low frequency may not be detectable in a small population.
LD50 (lethal dose 50) is the dose causing 50% lethality in an animal population. The LD50 is often given in older literature as a measure of the acute toxicity of chemicals. The higher the LD50, the lower the acute toxicity. A highly toxic chemical (with a low LD50) is said to be potent. There is no necessary correlation between acute and chronic toxicity. ED50 (effective dose) is the dose causing a specific effect other than lethality in 50% of the animals.
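A crude LD50 estimate can be read off mortality data by interpolating on a log-dose scale, since dose-response curves are approximately linear in log dose around the midpoint. The mortality figures below are invented for illustration; formal methods (probit analysis and its successors) are used in practice.

```python
import math

# Hypothetical sketch: estimating an LD50 by linear interpolation of
# mortality against log10(dose). The data below are invented.

doses = [10, 30, 100, 300]            # mg/kg body weight
mortality = [0.05, 0.30, 0.70, 0.95]  # fraction of animals dying

def estimate_ld50(doses, mortality):
    """Interpolate the log-dose at 50% mortality between the two
    data points that bracket it."""
    points = list(zip(doses, mortality))
    for (d0, m0), (d1, m1) in zip(points, points[1:]):
        if m0 <= 0.5 <= m1:
            frac = (0.5 - m0) / (m1 - m0)
            log_ld50 = (math.log10(d0)
                        + frac * (math.log10(d1) - math.log10(d0)))
            return 10 ** log_ld50
    raise ValueError("50% mortality not bracketed by the data")

ld50 = estimate_ld50(doses, mortality)  # falls between 30 and 100 mg/kg
```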
NOEL (NOAEL) means the no observed (adverse) effect level, or the highest dose that does not cause a toxic effect. To establish a NOEL requires multiple doses, a large population and additional information to make sure that the absence of a response is not merely a statistical phenomenon. LOEL is the lowest observed effect level on a dose-response curve, or the lowest dose that causes an effect.
A safety factor is a formal, arbitrary number by which one divides the NOEL or LOEL derived from animal experiments to obtain a tentative permissible dose for humans. This is often used in the area of food toxicology, but may be used also in occupational toxicology. A safety factor may also be used for extrapolation of data from small populations to larger populations. Safety factors range from 10⁰ to 10³ (1 to 1,000). A safety factor of two may typically be sufficient to protect from a less serious effect (such as irritation), and a factor as large as 1,000 may be used for very serious effects (such as cancer). The term safety factor might better be replaced by the term protection factor or, even, uncertainty factor. The use of the latter term reflects scientific uncertainties, such as whether exact dose-response data can be translated from animals to humans for the particular chemical, toxic effect or exposure situation.
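The arithmetic of the safety-factor approach is simple division; the NOEL and factor values below are illustrative assumptions, though factors of 10 for interspecies and 10 for interindividual variability are commonly cited conventions.

```python
# Sketch of the safety/uncertainty-factor calculation described above.
# The NOEL and the choice of factors are illustrative assumptions.

def permissible_dose(noel_mg_kg_day, interspecies=10, intraspecies=10):
    """Divide an animal NOEL by uncertainty factors to obtain a
    tentative permissible human dose (mg/kg/day)."""
    return noel_mg_kg_day / (interspecies * intraspecies)

# An assumed animal NOEL of 50 mg/kg/day with the common 10 x 10 factors:
tentative = permissible_dose(50.0)  # 0.5 mg/kg/day
```

Larger combined factors (up to 1,000) would be substituted for more serious effects, as the text notes.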
Extrapolations are theoretical qualitative or quantitative estimates of toxicity (risk extrapolations) derived from translation of data from one species to another or from one set of dose-response data (typically in the high dose range) to regions of dose-response where no data exist. Extrapolations usually must be made to predict toxic responses outside the observation range. Mathematical modelling is used for extrapolations based upon an understanding of the behaviour of the chemical in the organism (toxicokinetic modelling) or based upon the understanding of statistical probabilities that specific biological events will occur (biologically or mechanistically based models). Some national agencies have developed sophisticated extrapolation models as a formalized method to predict risks for regulatory purposes. (See discussion of risk assessment later in the chapter.)
Systemic effects are toxic effects in tissues distant from the route of absorption.
Target organ is the primary or most sensitive organ affected after exposure. Depending on route of exposure, dose, dose rate, sex and species, the same chemical entering the body may affect different target organs. Interaction between chemicals, or between chemicals and other factors, may affect different target organs as well.
Acute effects occur after limited exposure and shortly (hours, days) after exposure and may be reversible or irreversible.
Chronic effects occur after prolonged exposure (months, years, decades) and/or persist after exposure has ceased.
Acute exposure is an exposure of short duration, while chronic exposure is long-term (sometimes life-long) exposure.
Tolerance to a chemical may occur when repeated exposures result in a lower response than would be expected without pretreatment.
Diffusion. In order to enter the organism and reach a site where damage is produced, a foreign substance has to pass several barriers, including cells and their membranes. Most toxic substances pass through membranes passively by diffusion. This may occur for small water-soluble molecules by passage through aqueous channels or, for fat-soluble ones, by dissolution into and diffusion through the lipid part of the membrane. Ethanol, a small molecule that is both water and fat soluble, diffuses rapidly through cell membranes.
Diffusion of weak acids and bases. Weak acids and bases may readily pass membranes in their non-ionized, fat-soluble form while ionized forms are too polar to pass. The degree of ionization of these substances depends on pH. If a pH gradient exists across a membrane they will therefore accumulate on one side. The urinary excretion of weak acids and bases is highly dependent on urinary pH. Foetal or embryonic pH is somewhat higher than maternal pH, causing a slight accumulation of weak acids in the foetus or embryo.
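The pH dependence of ionization follows the Henderson-Hasselbalch relation, which makes the ion-trapping effect described above easy to quantify. The pKa value below is an assumption for a generic weak acid.

```python
# Ionization of a weak acid via the Henderson-Hasselbalch relation,
# illustrating pH-dependent membrane passage. The pKa is an assumed
# value for a generic weak acid, not a specific chemical.

def nonionized_fraction_weak_acid(pH, pKa):
    """Fraction of a weak acid in its non-ionized (membrane-permeable)
    form: 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

pKa = 4.0  # assumed
in_acidic_urine = nonionized_fraction_weak_acid(5.0, pKa)    # ~9%
in_alkaline_urine = nonionized_fraction_weak_acid(8.0, pKa)  # ~0.01%
# In alkaline urine almost all of the acid is ionized and cannot
# diffuse back across the tubular membrane, so excretion increases.
```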
Facilitated diffusion. The passage of a substance may be facilitated by carriers in the membrane. Facilitated diffusion is similar to enzyme processes in that it is protein mediated, highly selective, and saturable. Other substances may inhibit the facilitated transport of xenobiotics.
Active transport. Some substances are actively transported across cell membranes. This transport is mediated by carrier proteins in a process analogous to that of enzymes. Active transport is similar to facilitated diffusion, but it may occur against a concentration gradient. It requires energy input and a metabolic inhibitor can block the process. Most environmental pollutants are not transported actively. One exception is the active tubular secretion and reabsorption of acid metabolites in the kidneys.
Phagocytosis is a process where specialized cells such as macrophages engulf particles for subsequent digestion. This transport process is important, for example, for the removal of particles in the alveoli.
Bulk flow. Substances are also transported in the body along with the movement of air in the respiratory system during breathing, and the movements of blood, lymph or urine.
Filtration. Due to hydrostatic or osmotic pressure water flows in bulk through pores in the endothelium. Any solute that is small enough will be filtered together with the water. Filtration occurs to some extent in the capillary bed in all tissues but is particularly important in the formation of primary urine in the kidney glomeruli.
Absorption is the uptake of a substance from the environment into the organism. The term usually includes not only the entrance into the barrier tissue but also the further transport into circulating blood.
Pulmonary absorption. The lungs are the primary route of deposition and absorption of small airborne particles, gases, vapours and aerosols. For highly water-soluble gases and vapours a significant part of the uptake occurs in the nose and the respiratory tree, but for less soluble substances it primarily takes place in the lung alveoli. The alveoli have a very large surface area (about 100 m2 in humans). In addition, the diffusion barrier is extremely small, with only two thin cell layers and a distance in the order of micrometres from alveolar air to systemic blood circulation. This makes the lungs very efficient not only in the exchange of oxygen and carbon dioxide but also of other gases and vapours. In general, the diffusion across the alveolar wall is so rapid that it does not limit the uptake. The absorption rate is instead dependent on flow (pulmonary ventilation, cardiac output) and solubility (blood:air partition coefficient). Another important factor is metabolic elimination. The relative importance of these factors for pulmonary absorption varies greatly for different substances. Physical activity results in increased pulmonary ventilation and cardiac output, and decreased liver blood flow (and, hence, biotransformation rate). For many inhaled substances this leads to a marked increase in pulmonary absorption.
Percutaneous absorption. The skin is a very efficient barrier. Apart from its thermoregulatory role, it is designed to protect the organism from micro-organisms, ultraviolet radiation and other deleterious agents, and also against excessive water loss. The diffusion distance in the dermis is on the order of tenths of millimetres. In addition, the keratin layer has a very high resistance to diffusion for most substances. Nevertheless, significant dermal absorption resulting in toxicity may occur for some substances: highly toxic, fat-soluble substances such as organophosphorous insecticides and organic solvents, for example. Significant absorption is likely to occur after exposure to liquid substances. Percutaneous absorption of vapour may be important for solvents with very low vapour pressure and high affinity to water and skin.
Gastrointestinal absorption occurs after accidental or intentional ingestion. Larger particles originally inhaled and deposited in the respiratory tract may be swallowed after mucociliary transport to the pharynx. Practically all soluble substances are efficiently absorbed in the gastrointestinal tract. The low pH of the gut may facilitate absorption, for instance, of metals.
Other routes. In toxicity testing and other experiments, special routes of administration are often used for convenience, although these are rare and usually not relevant in the occupational setting. These routes include intravenous (IV), subcutaneous (sc), intraperitoneal (ip) and intramuscular (im) injections. In general, substances are absorbed at a higher rate and more completely by these routes, especially after IV injection. This leads to short-lasting but high concentration peaks that may increase the toxicity of a dose.
The distribution of a substance within the organism is a dynamic process which depends on uptake and elimination rates, as well as the blood flow to the different tissues and their affinities for the substance. Water-soluble, small, uncharged molecules, univalent cations, and most anions diffuse easily and will eventually reach a relatively even distribution in the body.
Volume of distribution is the amount of a substance in the body at a given time, divided by the concentration in blood, plasma or serum at that time. The value has no meaning as a physical volume, as many substances are not uniformly distributed in the organism. A volume of distribution of less than one l/kg body weight indicates preferential distribution in the blood (or serum or plasma), whereas a value above one indicates a preference for peripheral tissues such as adipose tissue for fat soluble substances.
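The definition above translates directly into a calculation; the amounts and concentrations below are invented to contrast a fat-soluble substance with one largely confined to plasma.

```python
# Volume of distribution as defined above: amount in the body divided
# by the blood (plasma) concentration. All numbers are illustrative.

def volume_of_distribution(amount_mg, plasma_conc_mg_per_l, body_weight_kg):
    """Apparent volume of distribution, in l/kg body weight."""
    return (amount_mg / plasma_conc_mg_per_l) / body_weight_kg

# A fat-soluble substance sequestered in adipose tissue: the plasma
# concentration is low, so the apparent volume is far above 1 l/kg.
vd_lipophilic = volume_of_distribution(700.0, 0.1, 70.0)      # 100 l/kg
# A substance largely bound in plasma: apparent volume below 1 l/kg.
vd_plasma_bound = volume_of_distribution(700.0, 200.0, 70.0)  # 0.05 l/kg
```

Values far above the physical volume of any body compartment are common, which is why the text stresses that the figure has no meaning as a physical volume.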
Accumulation is the build-up of a substance in a tissue or organ to higher levels than in blood or plasma. It may also refer to a gradual build-up over time in the organism. Many xenobiotics are highly fat soluble and tend to accumulate in adipose tissue, while others have a special affinity for bone. For example, calcium in bone may be exchanged for cations of lead, strontium, barium and radium, and hydroxyl groups in bone may be exchanged for fluoride.
Barriers. The blood vessels in the brain, testes and placenta have special anatomical features that inhibit passage of large molecules like proteins. These features, often referred to as blood-brain, blood-testes, and blood-placenta barriers, may give the false impression that they prevent passage of any substance. These barriers are of little or no importance for xenobiotics that can diffuse through cell membranes.
Blood binding. Substances may be bound to red blood cells or plasma components, or occur unbound in blood. Carbon monoxide, arsenic, organic mercury and hexavalent chromium have a high affinity for red blood cells, while inorganic mercury and trivalent chromium show a preference for plasma proteins. A number of other substances also bind to plasma proteins. Only the unbound fraction is available for filtration or diffusion into eliminating organs. Blood binding may therefore increase the residence time in the organism but decrease uptake by target organs.
Elimination is the disappearance of a substance in the body. Elimination may involve excretion from the body or transformation to other substances not captured by a specific method of measurement. The rate of disappearance may be expressed by the elimination rate constant, biological half-time or clearance.
Concentration-time curve. The curve of concentration in blood (or plasma) versus time is a convenient way of describing uptake and disposition of a xenobiotic.
Area under the curve (AUC) is the integral of concentration in blood (plasma) over time. When metabolic saturation and other non-linear processes are absent, AUC is proportional to the absorbed amount of substance.
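In practice the AUC is estimated from discrete blood samples, most simply by the trapezoidal rule. The sampling times and concentrations below are invented for illustration.

```python
# Trapezoidal estimate of the area under a concentration-time curve.
# The sampling schedule and concentrations are invented.

times = [0, 1, 2, 4, 8]            # hours after exposure
concs = [0.0, 4.0, 3.0, 1.5, 0.5]  # mg/l in plasma

def auc_trapezoid(times, concs):
    """Sum of trapezoid areas between successive samples (mg.h/l)."""
    return sum((t1 - t0) * (c0 + c1) / 2
               for t0, t1, c0, c1 in
               zip(times, times[1:], concs, concs[1:]))

auc = auc_trapezoid(times, concs)  # 14.0 mg.h/l for these data
```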
Biological half-time (or half-life) is the time needed after the end of exposure to reduce the amount in the organism to one-half. As it is often difficult to assess the total amount of a substance, measurements such as the concentration in blood (plasma) are used. The half-time should be used with caution, as it may change, for example, with dose and length of exposure. In addition, many substances have complex decay curves with several half-times.
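For a single first-order elimination process the half-time follows from the rate constant as t½ = ln 2 / k; the caution in the text about complex decay curves can be illustrated with a sum of two exponentials. All rate constants below are assumptions.

```python
import math

# Biological half-time for first-order elimination, plus a sketch of a
# two-phase decay curve. The rate constants are illustrative assumptions.

def half_time(k_elim_per_h):
    """t1/2 = ln(2) / k for a single first-order process."""
    return math.log(2) / k_elim_per_h

def concentration(t_h, a=8.0, alpha=0.7, b=2.0, beta=0.05):
    """Two-phase decay: C(t) = A*exp(-alpha*t) + B*exp(-beta*t).
    The fast phase dominates early and the slow phase late, so no
    single half-time describes the whole curve."""
    return a * math.exp(-alpha * t_h) + b * math.exp(-beta * t_h)

t_half_fast = half_time(0.7)   # ~1 h
t_half_slow = half_time(0.05)  # ~13.9 h
```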
Bioavailability is the fraction of an administered dose entering the systemic circulation. In the absence of presystemic clearance, or first-pass metabolism, the fraction is one. In oral exposure presystemic clearance may be due to metabolism within the gastrointestinal content, gut wall or liver. First-pass metabolism will reduce the systemic absorption of the substance and instead increase the absorption of metabolites. This may lead to a different toxicity pattern.
Clearance is the volume of blood (plasma) per unit time completely cleared of a substance. To distinguish from renal clearance, for example, the prefix total, metabolic or blood (plasma) is often added.
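Clearance and bioavailability are linked through the AUC: after an intravenous dose (bioavailability of one), total clearance is the dose divided by the AUC, and comparing oral and IV AUCs yields the bioavailable fraction. The dose and AUC values below are invented for illustration.

```python
# Total clearance and bioavailability from AUC values, as a sketch.
# All dose and AUC figures are illustrative assumptions.

def total_clearance(iv_dose_mg, auc_mg_h_per_l):
    """CL = dose / AUC, in l/h, for an IV dose (bioavailability = 1)."""
    return iv_dose_mg / auc_mg_h_per_l

def bioavailability(auc_oral, auc_iv, dose_oral, dose_iv):
    """F = (AUC_oral / AUC_iv) * (dose_iv / dose_oral)."""
    return (auc_oral / auc_iv) * (dose_iv / dose_oral)

cl = total_clearance(100.0, 25.0)              # 4 l/h
f = bioavailability(10.0, 25.0, 100.0, 100.0)  # 0.4 after first-pass loss
```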
Intrinsic clearance is the capacity of endogenous enzymes to transform a substance, and is also expressed in volume per unit time. If the intrinsic clearance in an organ is much lower than the blood flow, the metabolism is said to be capacity limited. Conversely, if the intrinsic clearance is much higher than the blood flow, the metabolism is flow limited.
Excretion is the exit of a substance and its biotransformation products from the organism.
Excretion in urine and bile. The kidneys are the most important excretory organs. Some substances, especially acids with high molecular weights, are excreted with bile. A fraction of biliary excreted substances may be reabsorbed in the intestines. This process, enterohepatic circulation, is common for conjugated substances following intestinal hydrolysis of the conjugate.
Other routes of excretion. Some substances, such as organic solvents and breakdown products such as acetone, are volatile enough so that a considerable fraction may be excreted by exhalation after inhalation. Small water-soluble molecules as well as fat-soluble ones are readily secreted to the foetus via the placenta, and into milk in mammals. For the mother, lactation can be a quantitatively important excretory pathway for persistent fat-soluble chemicals. The offspring may be secondarily exposed via the mother during pregnancy as well as during lactation. Water-soluble compounds may to some extent be excreted in sweat and saliva. These routes are generally of minor importance. However, as a large volume of saliva is produced and swallowed, saliva excretion may contribute to reabsorption of the compound. Some metals such as mercury are excreted by binding permanently to the sulphydryl groups of the keratin in the hair.
Mathematical models are important tools for understanding and describing the uptake and disposition of foreign substances. Most models are compartmental; that is, the organism is represented by one or more compartments. A compartment is a theoretical chemical and physical volume in which the substance is assumed to distribute homogeneously and instantaneously. Simple models may be expressed as a sum of exponential terms, while more complicated ones require numerical procedures on a computer for their solution. Models may be subdivided into two categories: descriptive and physiological.
In descriptive models, fitting to measured data is performed by changing the numerical values of the model parameters or even the model structure itself. The model structure normally has little to do with the structure of the organism. Advantages of the descriptive approach are that few assumptions are made and that there is no need for additional data. A disadvantage of descriptive models is their limited usefulness for extrapolations.
Physiological models are constructed from physiological, anatomical and other independent data. The model is then refined and validated by comparison with experimental data. An advantage of physiological models is that they can be used for extrapolation purposes. For example, the influence of physical activity on the uptake and disposition of inhaled substances may be predicted from known physiological adjustments in ventilation and cardiac output. A disadvantage of physiological models is that they require a large amount of independent data.
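A minimal compartmental sketch illustrates the idea: one well-stirred compartment with zero-order uptake during exposure and first-order elimination, integrated numerically by a simple Euler scheme. The parameters are hypothetical, not physiological data.

```python
def one_compartment(k_in, k_elim, t_exposure, t_total, dt=0.01):
    """One well-stirred compartment: zero-order uptake while exposed,
    first-order elimination; dA/dt = k_in - k_elim * A (Euler integration)."""
    amount, t, series = 0.0, 0.0, []
    while t < t_total:
        uptake = k_in if t < t_exposure else 0.0
        amount += (uptake - k_elim * amount) * dt
        t += dt
        series.append(amount)
    return series

# Hypothetical parameters: 8 h of exposure followed by 16 h of elimination
curve = one_compartment(k_in=1.0, k_elim=0.2, t_exposure=8, t_total=24)
print(round(max(curve), 2))   # body burden peaks near the end of exposure
print(round(curve[-1], 2))    # most of the burden has been eliminated by 24 h
```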
Biotransformation is a process which leads to a metabolic conversion of foreign compounds (xenobiotics) in the body. The process is often referred to as metabolism of xenobiotics. As a general rule, metabolism converts lipid-soluble xenobiotics to large, water-soluble metabolites that can be effectively excreted.
The liver is the main site of biotransformation. All xenobiotics taken up from the intestine are transported to the liver by a single blood vessel (vena porta). If taken up in small quantities a foreign substance may be completely metabolized in the liver before reaching the general circulation and other organs (first pass effect). Inhaled xenobiotics are distributed via the general circulation to the liver. In that case only a fraction of the dose is metabolized in the liver before reaching other organs.
Liver cells contain several enzymes that oxidize xenobiotics. This oxidation generally activates the compound: it becomes more reactive than the parent molecule. In most cases the oxidized metabolite is further metabolized by other enzymes in a second phase. These enzymes conjugate the metabolite with an endogenous substrate, so that the molecule becomes larger and more polar. This facilitates excretion.
Enzymes that metabolize xenobiotics are also present in other organs such as the lungs and kidneys. In these organs they may play specific and qualitatively important roles in the metabolism of certain xenobiotics. Metabolites formed in one organ may be further metabolized in a second organ. Bacteria in the intestine may also participate in biotransformation.
Metabolites of xenobiotics can be excreted by the kidneys or via the bile. They can also be exhaled via the lungs, or bound to endogenous molecules in the body.
The relationship between biotransformation and toxicity is complex. Biotransformation can be seen as a necessary process for survival. It protects the organism against toxicity by preventing accumulation of harmful substances in the body. However, reactive intermediary metabolites may be formed in biotransformation, and these are potentially harmful. This is called metabolic activation. Thus, biotransformation may also induce toxicity. Oxidized, intermediary metabolites that are not conjugated can bind to and damage cellular structures. If, for example, a xenobiotic metabolite binds to DNA, a mutation can be induced (see “Genetic toxicology”). If the biotransformation system is overloaded, a massive destruction of essential proteins or lipid membranes may occur. This can result in cell death (see “Cellular injury and cellular death”).
Metabolism is a word often used interchangeably with biotransformation. It denotes chemical breakdown or synthesis reactions catalyzed by enzymes in the body. Nutrients from food, endogenous compounds, and xenobiotics are all metabolized in the body.
Metabolic activation means that a less reactive compound is converted to a more reactive molecule. This usually occurs during Phase 1 reactions.
Metabolic inactivation means that an active or toxic molecule is converted to a less active metabolite. This usually occurs during Phase 2 reactions. In certain cases an inactivated metabolite might be reactivated, for example by enzymatic cleavage.
Phase 1 reaction refers to the first step in xenobiotic metabolism. It usually means that the compound is oxidized. Oxidation usually makes the compound more water soluble and facilitates further reactions.
Cytochrome P450 enzymes are a group of enzymes that preferentially oxidize xenobiotics in Phase 1 reactions. The different enzymes are specialized for handling specific groups of xenobiotics with certain characteristics. Endogenous molecules are also substrates. Cytochrome P450 enzymes are induced by xenobiotics in a specific fashion. Obtaining induction data on cytochrome P450 can be informative about the nature of previous exposures (see “Genetic determinants of toxic response”).
Phase 2 reaction refers to the second step in xenobiotic metabolism. It usually means that the oxidized compound is conjugated with (coupled to) an endogenous molecule. This reaction increases the water solubility further. Many conjugated metabolites are actively excreted via the kidneys.
Transferases are a group of enzymes that catalyze Phase 2 reactions. They conjugate xenobiotics with endogenous compounds such as glutathione, amino acids, glucuronic acid or sulphate.
Glutathione is an endogenous molecule, a tripeptide, that is conjugated with xenobiotics in Phase 2 reactions. It is present in all cells (and in liver cells in high concentrations), and usually protects from activated xenobiotics. When glutathione is depleted, toxic reactions between activated xenobiotic metabolites and proteins, lipids or DNA may occur.
Induction means that enzymes involved in biotransformation are increased (in activity or amount) as a response to xenobiotic exposure. In some cases enzyme activity can increase severalfold within a few days. Induction is often balanced, so that both Phase 1 and Phase 2 reactions are increased simultaneously. This may lead to a more rapid biotransformation and can explain tolerance. In contrast, unbalanced induction may increase toxicity.
Inhibition of biotransformation can occur if two xenobiotics are metabolized by the same enzyme. The two substrates have to compete, and usually one of the substrates is preferred. In that case the second substrate is not metabolized, or only slowly metabolized. As with induction, inhibition may increase as well as decrease toxicity.
Oxygen activation can be triggered by metabolites of certain xenobiotics. They may auto-oxidize under the production of activated oxygen species. These oxygen-derived species, which include superoxide, hydrogen peroxide and the hydroxyl radical, may damage DNA, lipids and proteins in cells. Oxygen activation is also involved in inflammatory processes.
Genetic variability between individuals is seen in many genes coding for Phase 1 and Phase 2 enzymes. Genetic variability may explain why certain individuals are more susceptible to toxic effects of xenobiotics than others.
The human organism represents a complex biological system on various levels of organization, from the molecular-cellular level to the tissues and organs. The organism is an open system, exchanging matter and energy with the environment through numerous biochemical reactions in a dynamic equilibrium. The environment can be polluted, or contaminated with various toxicants.
Penetration of molecules or ions of toxicants from the work or living environment into such a strongly coordinated biological system can reversibly or irreversibly disturb normal cellular biochemical processes, or even injure and destroy the cell (see “Cellular injury and cellular death”).
Penetration of a toxicant from the environment to the sites of its toxic effect inside the organism can be divided into three phases:
1. The exposure phase encompasses all processes occurring between various toxicants and/or the influence on them of environmental factors (light, temperature, humidity, etc.). Chemical transformations, degradation, biodegradation (by micro-organisms) as well as disintegration of toxicants can occur.
2. The toxicokinetic phase encompasses absorption of toxicants into the organism and all processes which follow: transport by body fluids, distribution and accumulation in tissues and organs, biotransformation to metabolites and elimination (excretion) of toxicants and/or metabolites from the organism.
3. The toxicodynamic phase refers to the interaction of toxicants (molecules, ions, colloids) with specific sites of action on or inside the cells (receptors), ultimately producing a toxic effect.
Here we will focus our attention exclusively on the toxicokinetic processes inside the human organism following exposure to toxicants in the environment.
The molecules or ions of toxicants present in the environment will penetrate into the organism through the skin and mucosa, or the epithelial cells of the respiratory and gastrointestinal tracts, depending on the point of entry. That means molecules and ions of toxicants must penetrate through cellular membranes of these biological systems, as well as through an intricate system of endomembranes inside the cell.
All toxicokinetic and toxicodynamic processes occur on the molecular-cellular level. Numerous factors influence these processes and these can be divided into two basic groups:
· chemical constitution and physicochemical properties of toxicants
· structure of the cell, especially the properties and function of the membranes around the cell and its interior organelles.
In 1854 the Russian toxicologist E.V. Pelikan started studies on the relation between the chemical structure of a substance and its biological activity: the structure-activity relationship (SAR). Chemical structure directly determines physico-chemical properties, some of which are responsible for biological activity.
To define the chemical structure numerous parameters can be selected as descriptors, which can be divided into various groups:
· general: melting point, boiling point, vapour pressure, dissociation constant (pKa), Nernst partition coefficient (P), activation energy, heat of reaction, reduction potential, etc.
· electric: ionization potential, dielectric constant, dipole moment, mass-to-charge ratio, etc.
· quantum chemical: atomic charge, bond energy, resonance energy, electron density, molecular reactivity, etc.
· steric: molecular volume, shape and surface area, substructure shape, molecular reactivity, etc.
· structural: number of bonds, number of rings (in polycyclic compounds), extent of branching, etc.
For each toxicant it is necessary to select a set of descriptors related to a particular mechanism of activity. However, from the toxicokinetic point of view two parameters are of general importance for all toxicants:
· The Nernst partition coefficient (P) establishes the solubility of toxicant molecules in the two-phase octanol (oil)-water system, correlating to their lipo- or hydrosolubility. This parameter will greatly influence the distribution and accumulation of toxicant molecules in the organism.
· The dissociation constant (pKa) defines the degree of ionization (electrolytic dissociation) of molecules of a toxicant into charged cations and anions at a particular pH. This constant represents the pH at which 50% ionization is achieved. Molecules can be lipophilic or hydrophilic, but ions are soluble exclusively in the water of body fluids and tissues. Knowing the pKa, it is possible to calculate the degree of ionization of a substance at each pH using the Henderson-Hasselbalch equation.
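As an illustration, the Henderson-Hasselbalch relation for a weak acid, pH = pKa + log10([A-]/[HA]), gives the ionized fraction at any pH. The sketch below uses a hypothetical acid with pKa 4.0; at gastric pH such a compound is almost entirely non-ionized.

```python
def ionized_fraction_weak_acid(pka: float, ph: float) -> float:
    """Ionized (anionic) fraction of a weak acid at a given pH, from
    pH = pKa + log10([A-]/[HA])."""
    ratio = 10 ** (ph - pka)        # [A-]/[HA]
    return ratio / (1 + ratio)

# Hypothetical weak acid with pKa 4.0
print(round(ionized_fraction_weak_acid(4.0, 4.0), 2))  # at pH = pKa: 0.5
print(round(ionized_fraction_weak_acid(4.0, 2.0), 3))  # gastric pH: mostly non-ionized
```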
For inhaled dusts and aerosols, the particle size, shape, surface area and density also influence their toxicokinetics and toxicodynamics.
The eukaryotic cell of human and animal organisms is encircled by a cytoplasmic membrane regulating the transport of substances and maintaining cell homeostasis. The cell organelles (nucleus, mitochondria) possess membranes too. The cell cytoplasm is compartmentalized by intricate membranous structures, the endoplasmic reticulum and Golgi complex (endomembranes). All these membranes are structurally alike, but vary in the content of lipids and proteins.
The structural framework of membranes is a bilayer of lipid molecules (phospholipids, sphingolipids, cholesterol). The backbone of a phospholipid molecule is glycerol, with two of its -OH groups esterified by aliphatic fatty acids with 16 to 18 carbon atoms, and the third group esterified by a phosphate group and a nitrogenous compound (choline, ethanolamine, serine). In sphingolipids, sphingosine is the base.
The lipid molecule is amphipathic because it consists of a polar hydrophilic “head” (amino alcohol, phosphate, glycerol) and a non-polar twin “tail” (fatty acids). The lipid bilayer is arranged so that the hydrophilic heads constitute the outer and inner surfaces of the membrane, while the lipophilic tails are stretched toward the membrane interior, which contains water, various ions and molecules.
Proteins and glycoproteins are inserted into the lipid bilayer (intrinsic proteins) or attached to the membrane surface (extrinsic proteins). These proteins contribute to the structural integrity of the membrane, but they may also perform as enzymes, carriers, pore walls or receptors.
The membrane represents a dynamic structure which can be disintegrated and rebuilt with a different proportion of lipids and proteins, according to functional needs.
Regulation of transport of substances into and out of the cell represents one of the basic functions of outer and inner membranes.
Some lipophilic molecules pass directly through the lipid bilayer. Hydrophilic molecules and ions are transported via pores. Membranes respond to changing conditions by opening or sealing certain pores of various sizes.
The following processes and mechanisms are involved in the transport of substances, including toxicants, through membranes:
· diffusion through lipid bilayer
· diffusion through pores
· transport by a carrier (facilitated diffusion)
· active transport by a carrier
· endocytosis (pinocytosis).
Diffusion is the movement of molecules and ions through the lipid bilayer or pores from a region of high concentration, or high electric potential, to a region of low concentration or potential (“downhill”). The difference in concentration or electric charge is the driving force determining the intensity of the flux in both directions. In the equilibrium state, influx equals efflux. The rate of diffusion follows Fick’s law: it is directly proportional to the available membrane surface, the concentration (charge) gradient and the characteristic diffusion coefficient, and inversely proportional to the membrane thickness.
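Fick's first law can be written as rate = D · A · ΔC / d, where D is the diffusion coefficient, A the membrane area, ΔC the concentration difference and d the membrane thickness. The sketch below simply evaluates this proportionality for hypothetical values; the numbers carry no physiological significance.

```python
def fick_rate(D, area, delta_c, thickness):
    """Steady-state diffusion rate across a membrane (Fick's first law):
    rate = D * A * (C1 - C2) / d.  Units must be mutually consistent."""
    return D * area * delta_c / thickness

# Hypothetical values: D in cm^2/s, area in cm^2, gradient in mol/cm^3, d in cm
rate = fick_rate(D=1e-6, area=2.0, delta_c=5e-6, thickness=1e-4)
print(rate)  # mol/s; doubles with area, halves if the membrane is twice as thick
```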
Small lipophilic molecules pass easily through the lipid layer of membrane, according to the Nernst partition coefficient.
Large lipophilic molecules, water-soluble molecules and ions will use aqueous pore channels for their passage. Size and stereoconfiguration will influence the passage of molecules. For ions, besides size, the type of charge will be decisive. The protein molecules of the pore walls can carry positive or negative charge. Narrow pores tend to be selective: negatively charged ligands allow passage only for cations, and positively charged ligands only for anions. With increasing pore diameter, hydrodynamic flow becomes dominant, allowing free passage of ions and molecules according to Poiseuille’s law. This filtration is a consequence of the osmotic gradient. In some cases ions can penetrate through specific complex molecules (ionophores), which can be produced by micro-organisms with antibiotic effects (nonactin, valinomycin, gramicidin, etc.).
Facilitated diffusion requires the presence of a carrier in the membrane, usually a protein molecule (permease). The carrier selectively binds substances, resembling a substrate-enzyme complex. Similar molecules (including toxicants) can compete for the specific carrier until its saturation point is reached. Toxicants can compete for the carrier, and when they are irreversibly bound to it the transport is blocked. The rate of transport is characteristic for each type of carrier. If transport occurs in both directions, it is called exchange diffusion.
Active transport of some substances vital for the cell uses a special type of carrier, which transports against the concentration gradient or electric potential (“uphill”). The carrier is very stereospecific and can be saturated.
For uphill transport, energy is required. The necessary energy is obtained by catalytic cleavage of ATP molecules to ADP by the enzyme adenosine triphosphatase (ATP-ase).
Toxicants can interfere with this transport by competitive or non-competitive inhibition of the carrier or by inhibition of ATP-ase activity.
Endocytosis is defined as a transport mechanism in which the cell membrane encircles material by enfolding to form a vesicle transporting it through the cell. When the material is liquid, the process is termed pinocytosis. In some cases the material is bound to a receptor and this complex is transported by a membrane vesicle. This type of transport is especially used by epithelial cells of the gastrointestinal tract, and cells of the liver and kidneys.
People are exposed to numerous toxicants present in the work and living environment, which can penetrate into the human organism by three main portals of entry:
· via the respiratory tract by inhalation of polluted air
· via the gastrointestinal tract by ingestion of contaminated food, water and drinks
· through the skin by dermal, cutaneous penetration.
In the case of exposure in industry, inhalation represents the dominant route of entry of toxicants, followed by dermal penetration. In agriculture, pesticide exposure via dermal absorption is almost equal to combined inhalation and dermal exposure. The general population is mostly exposed by ingestion of contaminated food, water and beverages, then by inhalation and less often by dermal penetration.
Absorption in the lungs represents the main route of uptake for numerous airborne toxicants (gases, vapours, fumes, mists, smokes, dusts, aerosols, etc.).
The respiratory tract (RT) represents an ideal gas-exchange system possessing a membrane with a surface of 30 m2 (expiration) to 100 m2 (deep inspiration), behind which a network of about 2,000 km of capillaries is located. The system, developed through evolution, is accommodated into a relatively small space (chest cavity) protected by ribs.
Anatomically and physiologically the RT can be divided into three compartments:
· the upper part of the RT, or nasopharyngeal (NP) region, starting at the nostrils (nares) and extending to the pharynx and larynx; this part serves as an air-conditioning system
· the tracheo-bronchial tree (TB), encompassing numerous tubes of various sizes, which bring air to the lungs
· the pulmonary compartment (P), which consists of millions of alveoli (air-sacs) arranged in grapelike clusters.
Hydrophilic toxicants are easily absorbed by the epithelium of the nasopharyngeal region. The whole epithelium of the NP and TB regions is covered by a film of water. Lipophilic toxicants are partially absorbed in the NP and TB regions, but mostly in the alveoli by diffusion through alveolo-capillary membranes. The absorption rate depends on lung ventilation, cardiac output (blood flow through the lungs), solubility of the toxicant in blood and its metabolic rate.
In the alveoli, gas exchange is carried out. The alveolar wall is made up of an epithelium, an interstitial framework of basement membrane, connective tissue and the capillary endothelium. The diffusion of toxicants is very rapid through these layers, which have a thickness of about 0.8 µm. In alveoli, toxicant is transferred from the air phase into the liquid phase (blood). The rate of absorption (air to blood distribution) of a toxicant depends on its concentration in alveolar air and the Nernst partition coefficient for blood (solubility coefficient).
In the blood the toxicant can be dissolved in the liquid phase by simple physical processes or bound to the blood cells and/or plasma constituents according to chemical affinity or by adsorption. The water content of blood is 75% and, therefore, hydrophilic gases and vapours show a high solubility in plasma (e.g., alcohols). Lipophilic toxicants (e.g., benzene) are usually bound to cells or macro-molecules such as albumen.
From the very beginning of exposure in the lungs, two opposite processes are occurring: absorption and desorption. The equilibrium between these processes depends on the concentration of toxicant in alveolar air and blood. At the onset of exposure the toxicant concentration in the blood is 0 and retention in blood is almost 100%. With continuation of exposure, an equilibrium between absorption and desorption is attained. Hydrophilic toxicants will rapidly attain equilibrium, and the rate of absorption depends on pulmonary ventilation rather than on blood flow. Lipophilic toxicants need a longer time to achieve equilibrium, and here the flow of unsaturated blood governs the rate of absorption.
Deposition of particles and aerosols in the RT depends on physical and physiological factors, as well as particle size. In short, the smaller the particle the deeper it will penetrate into the RT.
The relatively constant low retention of dust particles in the lungs of persons who are highly exposed (e.g., miners) suggests the existence of a very efficient system for the clearance of particles. In the upper (tracheo-bronchial) part of the RT, a mucociliary blanket performs the clearance. In the pulmonary part, three different mechanisms are at work: (1) the mucociliary blanket, (2) phagocytosis and (3) direct penetration of particles through the alveolar wall.
The first 17 of the 23 branchings of the tracheo-bronchial tree possess ciliated epithelial cells. By their strokes these cilia constantly move a mucous blanket toward the mouth. Particles deposited on this mucociliary blanket will be swallowed in the mouth (ingestion). A mucous blanket also covers the surface of the alveolar epithelium, moving toward the mucociliary blanket. Additionally, specialized motile cells (phagocytes) engulf particles and micro-organisms in the alveoli and migrate in two possible directions:
· toward the mucociliary blanket, which transports them to the mouth
· through the intercellular spaces of the alveolar wall to the lymphatic system of the lungs; also particles can directly penetrate by this route.
Toxicants can be ingested in the case of accidental swallowing, intake of contaminated food and drinks, or swallowing of particles cleared from the RT.
The entire alimentary channel, from oesophagus to anus, is basically built in the same way. A mucosal layer (epithelium) is supported by connective tissue and then by a network of capillaries and smooth muscle. The surface epithelium of the stomach is strongly folded to increase the absorption/secretion surface area. The intestinal area contains numerous small projections (villi), which are able to absorb material by “pumping in”. The active area for absorption in the intestines is about 100 m2.
In the gastrointestinal tract (GIT) all absorption processes are very active:
· transcellular transport by diffusion through the lipid layer and/or pores of cell membranes, as well as pore filtration
· paracellular diffusion through junctions between cells
· facilitated diffusion and active transport
· endocytosis and the pumping mechanism of the villi.
Some toxic metal ions use specialized transport systems for essential elements: thallium, cobalt and manganese use the iron system, while lead appears to use the calcium system.
Many factors influence the rate of absorption of toxicants in various parts of the GIT:
· physico-chemical properties of toxicants, especially the Nernst partition coefficient and the dissociation constant; for particles, particle size is important: the smaller the size, the higher the solubility
· quantity of food present in the GIT (diluting effect)
· residence time in each part of the GIT (from a few minutes in the mouth to one hour in the stomach to many hours in the intestines)
· the absorption area and absorption capacity of the epithelium
· local pH, which governs absorption of dissociated toxicants; in the acid pH of the stomach, non-dissociated acidic compounds will be more quickly absorbed
· peristalsis (movement of intestines by muscles) and local blood flow
· gastric and intestinal secretions, which transform toxicants into more or less soluble products; bile is an emulsifying agent producing more soluble complexes (hydrotropy)
· combined exposure to other toxicants, which can produce synergistic or antagonistic effects in absorption processes
· presence of complexing/chelating agents
· the action of the microflora of the GIT (about 1.5 kg), comprising about 60 different bacterial species which can perform biotransformation of toxicants.
It is also necessary to mention the enterohepatic circulation. Polar toxicants and/or metabolites (glucuronides and other conjugates) are excreted with the bile into the duodenum. Here the enzymes of the microflora perform hydrolysis and liberated products can be reabsorbed and transported by the portal vein into the liver. This mechanism is very dangerous in the case of hepatotoxic substances, enabling their temporary accumulation in the liver.
In the case of toxicants biotransformed in the liver to less toxic or non-toxic metabolites, ingestion may represent a less dangerous portal of entry. After absorption in the GIT these toxicants will be transported by the portal vein to the liver, and there they can be partially detoxified by biotransformation.
The skin (1.8 m2 of surface in a human adult) together with the mucous membranes of the body orifices, covers the surface of the body. It represents a barrier against physical, chemical and biological agents, maintaining the body integrity and homeostasis and performing many other physiological tasks.
Basically the skin consists of three layers: the epidermis, true skin (dermis) and subcutaneous tissue (hypodermis). From the toxicological point of view the epidermis is of most interest here. It is built of many layers of cells. A horny surface of flattened, dead cells (stratum corneum) is the top layer, under which a continuous layer of living cells (stratum corneum compactum) is located, followed by a typical lipid membrane, and then by the stratum lucidum, stratum granulosum and stratum mucosum. The lipid membrane represents a protective barrier, but in hairy parts of the skin both hair follicles and sweat gland channels penetrate through it. Therefore, dermal absorption can occur by the following mechanisms:
· transepidermal absorption by diffusion through the lipid membrane (barrier), mostly by lipophilic substances (organic solvents, pesticides, etc.) and to a small extent by some hydrophilic substances through pores
· transfollicular absorption around the hair stalk into the hair follicle, bypassing the membrane barrier; this absorption occurs only in hairy areas of skin
· absorption via the ducts of sweat glands, which have a cross-sectional area of about 0.1 to 1% of the total skin area (relative absorption is in this proportion)
· absorption through skin when injured mechanically, thermally, chemically or by skin diseases; here the skin layers, including lipid barrier, are disrupted and the way is open for toxicants and harmful agents to enter.
The rate of absorption through the skin will depend on many factors:
· concentration of toxicant, type of vehicle (medium), presence of other substances
· water content of skin, pH, temperature, local blood flow, perspiration, surface area of contaminated skin, thickness of skin
· anatomical and physiological characteristics of the skin due to sex, age, individual variations, differences occurring in various ethnic groups and races, etc.
After absorption by any of these portals of entry, toxicants will reach the blood, lymph or other body fluids. The blood represents the major vehicle for transport of toxicants and their metabolites.
Blood is a fluid circulating organ, transporting necessary oxygen and vital substances to the cells and removing waste products of metabolism. Blood also contains cellular components, hormones, and other molecules involved in many physiological functions. Blood flows inside a relatively well closed, high-pressure circulatory system of blood vessels, pushed by the activity of the heart. Due to high pressure, leakage of fluid occurs. The lymphatic system represents the drainage system, in the form of a fine mesh of small, thin-walled lymph capillaries branching through the soft tissues and organs.
Blood is a mixture of a liquid phase (plasma, 55%) and solid blood cells (45%). Plasma contains proteins (albumins, globulins, fibrinogen), organic acids (lactic, glutamic, citric) and many other substances (lipids, lipoproteins, glycoproteins, enzymes, salts, xenobiotics, etc.). Blood cell elements include erythrocytes (Er), leukocytes, reticulocytes, monocytes, and platelets.
Toxicants are absorbed as molecules and ions. Some toxicants at blood pH form colloid particles as a third form in this liquid. Molecules, ions and colloids of toxicants have various possibilities for transport in blood:
· to be physically or chemically bound to the blood elements, mostly Er
· to be physically dissolved in plasma in a free state
· to be bound to one or more types of plasma proteins, complexed with the organic acids or attached to other fractions of plasma.
Most of the toxicants in blood exist partially in a free state in plasma and partially bound to erythrocytes and plasma constituents. The distribution depends on the affinity of toxicants to these constituents. All fractions are in a dynamic equilibrium.
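The free/bound partitioning described here can be sketched with a simple one-site binding equilibrium. The association constant and albumin concentration below are illustrative assumptions, not values from the text:

```python
def fraction_free(k_assoc, protein_conc):
    """Fraction of toxicant remaining unbound in plasma for a simple,
    non-saturated one-site equilibrium: free + protein <-> bound.
    k_assoc: association constant (1/mM, hypothetical);
    protein_conc: binding-protein concentration (mM).
    At equilibrium, bound/free = k_assoc * protein_conc."""
    ratio_bound_to_free = k_assoc * protein_conc
    return 1.0 / (1.0 + ratio_bound_to_free)

# Albumin is roughly 0.6 mM in plasma; with an assumed modest affinity
# of 10 per mM, only about 14% of the toxicant circulates free and
# diffusible, the rest riding on plasma protein.
fu = fraction_free(10.0, 0.6)
```

Because the free and bound pools stay in dynamic equilibrium, any clearance of the free fraction is continuously replenished from the bound pool.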
Some toxicants are transported by the blood elements, mostly by erythrocytes, very rarely by leukocytes. Toxicants can be adsorbed on the surface of Er, or can bind to the ligands of stroma. If they penetrate into Er they can bind to the haem (e.g., carbon monoxide and selenium) or to the globin (Sb111, Po210). Some toxicants transported by Er are arsenic, cesium, thorium, radon, lead and sodium. Hexavalent chromium is exclusively bound to the Er and trivalent chromium to the proteins of plasma. For zinc, competition between Er and plasma occurs. About 96% of lead is transported by Er. Organic mercury is mostly bound to Er and inorganic mercury is carried mostly by plasma albumin. Small fractions of beryllium, copper, tellurium and uranium are carried by Er.
The majority of toxicants are transported by plasma or plasma proteins. Many electrolytes are present as ions in an equilibrium with non-dissociated molecules free or bound to the plasma fractions. This ionic fraction of toxicants is very diffusible, penetrating through the walls of capillaries into tissues and organs. Gases and vapours can be dissolved in the plasma.
Plasma proteins possess a total surface area of about 600 to 800 km2 offered for absorption of toxicants. Albumin molecules possess about 109 cationic and 120 anionic ligands at the disposal of ions. Many ions are partially carried by albumin (e.g., copper, zinc and cadmium), as are such compounds as dinitro- and ortho-cresols, nitro- and halogenated derivatives of aromatic hydrocarbons, and phenols.
Globulin molecules (alpha and beta) transport small molecules of toxicants as well as some metallic ions (copper, zinc and iron) and colloid particles. Fibrinogen shows affinity for certain small molecules. Many types of bonds can be involved in binding of toxicants to plasma proteins: Van der Waals forces, attraction of charges, association between polar and non-polar groups, hydrogen bridges, covalent bonds.
Plasma lipoproteins transport lipophilic toxicants such as PCBs. The other plasma fractions serve as a transport vehicle too. The affinity of toxicants for plasma proteins suggests their affinity for proteins in tissues and organs during distribution.
Organic acids (lactic, glutamic, citric) form complexes with some toxicants. Alkaline earths and rare earths, as well as some heavy elements in the form of cations, are complexed also with organic oxy- and amino acids. All these complexes are usually diffusible and easily distributed in tissues and organs.
Physiological chelating agents in plasma, such as transferrin and metallothionein, compete with organic acids and amino acids for cations to form stable chelates.
Diffusible free ions, some complexes and some free molecules are easily cleared from the blood into tissues and organs. The free fraction of ions and molecules is in a dynamic equilibrium with the bound fraction. The concentration of a toxicant in blood will govern the rate of its distribution into tissues and organs, or its mobilization from them into the blood.
The human organism can be divided into the following compartments: (1) internal organs, (2) skin and muscles, (3) adipose tissues, (4) connective tissue and bones. This classification is based mostly on the degree of vascular (blood) perfusion, in decreasing order. For example, internal organs (including the brain), which represent only 12% of the total body weight, receive about 75% of the total blood volume. On the other hand, connective tissues and bones (15% of the total body weight) receive only 1% of the total blood volume.
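The perfusion figures quoted above can be turned into a per-unit-mass comparison, which makes the gap between compartments explicit. A minimal sketch using only the two percentages given in the text:

```python
# Blood supply per unit mass for two of the compartments described above
# (percentages as quoted in the text; the other compartments are omitted).
compartments = {
    "internal organs":             {"pct_body_weight": 12, "pct_blood_flow": 75},
    "connective tissue and bones": {"pct_body_weight": 15, "pct_blood_flow": 1},
}

# Relative perfusion: share of blood flow divided by share of body mass.
relative_perfusion = {
    name: c["pct_blood_flow"] / c["pct_body_weight"]
    for name, c in compartments.items()
}

# Internal organs receive about 6.25 flow-units per mass-unit, connective
# tissue and bones about 0.07: roughly a ninety-fold difference, which is
# why well-perfused organs equilibrate with blood so much faster.
```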
The well-perfused internal organs generally achieve the highest concentration of toxicants in the shortest time, as well as an equilibrium between blood and this compartment. The uptake of toxicants by less perfused tissues is much slower, but retention is higher and duration of stay much longer (accumulation) due to low perfusion.
Three components are of major importance for the intracellular distribution of toxicants: content of water, lipids and proteins in the cells of various tissues and organs. The above-mentioned order of compartments also follows closely a decreasing water content in their cells. Hydrophilic toxicants will be more rapidly distributed to the body fluids and cells with high water content, and lipophilic toxicants to cells with higher lipid content (fatty tissue).
The organism possesses some barriers which impair penetration of some groups of toxicants, mostly hydrophilic, to certain organs and tissues, such as:
· the blood-brain barrier (cerebrospinal barrier), which restricts penetration of large molecules and hydrophilic toxicants to the brain and CNS; this barrier consists of a closely joined layer of endothelial cells; thus, lipophilic toxicants can penetrate through it
· the placental barrier, which has a similar effect on penetration of toxicants into the foetus from the blood of the mother
· the histo-haematologic barrier in the walls of capillaries, which is permeable for small- and intermediate-sized molecules, and for some larger molecules, as well as ions.
As previously noted, only the free forms of toxicants in plasma (molecules, ions, colloids) are available for penetration through the capillary walls and thus participate in distribution. This free fraction is in a dynamic equilibrium with the bound fraction. The concentration of toxicants in blood is in a dynamic equilibrium with their concentration in organs and tissues, governing retention (accumulation) in them or mobilization from them.
The condition of the organism, functional state of organs (especially neuro-humoral regulation), hormonal balance and other factors play a role in distribution.
Retention of toxicant in a particular compartment is generally temporary and redistribution into other tissues can occur. Retention and accumulation is based on the difference between the rates of absorption and elimination. The duration of retention in a compartment is expressed by the biological half-life. This is the time interval in which 50% of the toxicant is cleared from the tissue or organ and redistributed, translocated or eliminated from the organism.
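The biological half-life defines first-order clearance from a compartment; the fraction retained after any time interval follows directly from it. A small sketch (the 15-day half-life in the example call is purely illustrative):

```python
def fraction_remaining(t, half_life):
    """First-order clearance: fraction of the initial burden still
    retained in a compartment after time t (same units as half_life).
    After one half-life 50% remains, after two 25%, and so on."""
    return 0.5 ** (t / half_life)

# Example with an assumed 15-day biological half-life:
# after 30 days a quarter of the burden remains, after 60 days ~6%.
```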
Biotransformation processes occur during distribution and retention in various organs and tissues. Biotransformation produces more polar, more hydrophilic metabolites, which are more easily eliminated. A low rate of biotransformation of a lipophilic toxicant will generally cause its accumulation in a compartment.
The toxicants can be divided into four main groups according to their affinity, predominant retention and accumulation in a particular compartment:
1. Toxicants soluble in the body fluids are uniformly distributed according to the water content of compartments. Many monovalent cations (e.g., lithium, sodium, potassium, rubidium) and some anions (e.g., chlorine, bromine), are distributed according to this pattern.
2. Lipophilic toxicants show a high affinity for lipid-rich organs (CNS) and tissues (fatty, adipose).
3. Toxicants forming colloid particles are then trapped by specialized cells of the reticuloendothelial system (RES) of organs and tissues. Tri- and quadrivalent cations (lanthanum, cesium, hafnium) are distributed in the RES of tissues and organs.
4. Toxicants showing a high affinity for bones and connective tissue (osteotropic elements, bone seekers) include divalent cations (e.g., calcium, barium, strontium, radium, beryllium, aluminium, cadmium, lead).
The “standard man” of 70 kg body weight contains about 15% of body weight in the form of adipose tissue, increasing with obesity to 50%. However, this lipid fraction is not uniformly distributed. The brain (CNS) is a lipid-rich organ, and peripheral nerves are wrapped with a lipid-rich myelin sheath and Schwann cells. All these tissues offer possibilities for accumulation of lipophilic toxicants.
Numerous non-electrolytes and non-polar toxicants with a suitable Nernst partition coefficient will be distributed to this compartment, as well as numerous organic solvents (alcohols, aldehydes, ketones, etc.), chlorinated hydrocarbons (including organochlorine insecticides such as DDT), some inert gases (radon), etc.
Adipose tissue will accumulate toxicants due to its low vascularization and lower rate of biotransformation. Here accumulation of toxicants may represent a kind of temporary “neutralization” because of lack of targets for toxic effect. However, potential danger for the organism is always present due to the possibility of mobilization of toxicants from this compartment back to the circulation.
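The pull of adipose tissue on a lipophilic toxicant can be illustrated with a crude two-pool (fat and blood) partitioning sketch. The 70 kg body weight and 15% fat fraction come from the text; the partition coefficient, blood mass and blood concentration are assumptions for illustration only:

```python
def burden_in_fat(p_fat_blood, body_kg=70.0, fat_fraction=0.15,
                  blood_kg=5.5, c_blood=1.0):
    """Rough steady-state split of a lipophilic toxicant between adipose
    tissue and blood (a hypothetical two-pool sketch that ignores all
    other tissues).  p_fat_blood: fat:blood partition coefficient, i.e.
    C_fat = p_fat_blood * C_blood at equilibrium."""
    fat_kg = body_kg * fat_fraction          # 10.5 kg for the standard man
    amount_fat = p_fat_blood * c_blood * fat_kg
    amount_blood = c_blood * blood_kg
    return amount_fat / (amount_fat + amount_blood)

# With an assumed fat:blood partition coefficient of 100, more than 99%
# of this two-pool burden sits in adipose tissue, a reservoir that can
# later be mobilized back into the circulation.
share = burden_in_fat(100.0)
```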
Deposition of toxicants in the brain (CNS) or lipid-rich tissue of the myelin sheath of the peripheral nervous system is very dangerous. The neurotoxicants are deposited here directly next to their targets. Toxicants retained in lipid-rich tissue of the endocrine glands can produce hormonal disturbances. Despite the blood-brain barrier, numerous neurotoxicants of a lipophilic nature reach the brain (CNS): anaesthetics, organic solvents, pesticides, tetraethyl lead, organomercurials, etc.
In each tissue and organ a certain percentage of cells is specialized for phagocytic activity, engulfing micro-organisms, particles, colloid particles, and so on. This system is called the reticuloendothelial system (RES), comprising fixed cells as well as moving cells (phagocytes). These cells are present in non-active form. An increase of the above-mentioned microbes and particles will activate the cells up to a saturation point.
Toxicants in the form of colloids will be captured by the RES of organs and tissues. Distribution depends on the colloid particle size. For larger particles, retention in the liver will be favoured. With smaller colloid particles, more or less uniform distribution will occur between the spleen, bone marrow and liver. Clearance of colloids from the RES is very slow, although small particles are cleared relatively more quickly.
About 60 elements can be identified as osteotropic elements, or bone seekers.
Osteotropic elements can be divided into three groups:
1. Elements representing or replacing physiological constituents of the bone. Twenty such elements are present in higher quantities. The others appear in trace quantities. Under conditions of chronic exposure, toxic metals such as lead, aluminium and mercury can also enter the mineral matrix of bone cells.
2. Alkaline earths and other elements forming cations with an ionic diameter similar to that of calcium are exchangeable with it in bone mineral. Also, some anions are exchangeable with anions (phosphate, hydroxyl) of bone mineral.
3. Elements forming microcolloids (rare earths) may be adsorbed on the surface of bone mineral.
The skeleton of a standard man accounts for 10 to 15% of the total body weight, representing a large potential storage depot for osteotropic toxicants. Bone is a highly specialized tissue consisting by volume of 54% minerals and 38% organic matrix. The mineral matrix of bone is hydroxyapatite, Ca10(PO4)6(OH)2, in which the ratio of Ca to P is about 1.5 to one. The surface area of mineral available for adsorption is about 100 m2 per g of bone.
Metabolic activity of the bones of the skeleton can be divided in two categories:
· active, metabolic bone, in which processes of resorption and new bone formation, or remodelling of existing bone, are very extensive
· stable bone with a low rate of remodelling or growth.
In the fetus, infant and young child metabolic bone (see “available skeleton”) represents almost 100% of the skeleton. With age this percentage of metabolic bone decreases. Incorporation of toxicants during exposure appears in the metabolic bone and in more slowly turning-over compartments.
Incorporation of toxicants into bone occurs in two ways:
1. For ions, an ion exchange occurs with physiologically present calcium cations, or anions (phosphate, hydroxyl).
2. For toxicants forming colloid particles, adsorption on the mineral surface occurs.
The bone mineral, hydroxyapatite, represents a complex ion-exchange system. Calcium cations can be exchanged for various other cations. The anions present in bone can also be exchanged: phosphate for citrates and carbonates, hydroxyl for fluorine. Ions which are not exchangeable can be adsorbed on the mineral surface. When toxicant ions are incorporated in the mineral, a new layer of mineral can cover the mineral surface, burying the toxicant in the bone structure. Ion exchange is a reversible process, depending on the concentration of ions, pH and fluid volume. Thus, for example, an increase of dietary calcium may decrease the deposition of toxicant ions in the lattice of minerals. As mentioned, with age the percentage of metabolic bone decreases, although ion exchange continues. With ageing, bone mineral resorption occurs and bone density actually decreases. At this point, toxicants in bone may be released (e.g., lead).
About 30% of the ions incorporated into bone minerals are loosely bound and can be exchanged, captured by natural chelating agents and excreted, with a biological half-life of 15 days. The other 70% is more firmly bound. Mobilization and excretion of this fraction shows a biological half-life of 2.5 years and more depending on bone type (remodelling processes).
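The two bound fractions and half-lives quoted above define a simple biexponential retention model, a sketch of which follows (the 2.5-year figure is taken as the half-life of the firmly bound pool):

```python
def bone_retention(t_days, loose_frac=0.30, t_half_loose=15.0,
                   t_half_firm=2.5 * 365.0):
    """Two-pool retention in bone as described above: ~30% loosely bound
    ions (biological half-life 15 days) plus ~70% firmly bound ions
    (half-life of about 2.5 years).  Returns the fraction of the
    originally incorporated ions still present after t_days."""
    firm_frac = 1.0 - loose_frac
    return (loose_frac * 0.5 ** (t_days / t_half_loose)
            + firm_frac * 0.5 ** (t_days / t_half_firm))

# After 60 days the loose pool is nearly gone (four half-lives), while
# the firm pool is almost intact, leaving roughly two-thirds of the
# incorporated ions in the skeleton.
```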
Chelating agents (Ca-EDTA, penicillamine, BAL, etc.) can mobilize considerable quantities of some heavy metals, greatly increasing their excretion in urine.
Colloid particles are adsorbed as a film on the mineral surface (100 m2 per g) by Van der Waals forces or chemisorption. This layer of colloids on the mineral surfaces is covered with the next layer of formed minerals, and the toxicants are more buried into the bone structure. The rate of mobilization and elimination depends on remodelling processes.
The hair and nails contain keratin, with sulphydryl groups able to chelate metallic cations such as mercury and lead.
Recently the distribution of toxicants, especially some heavy metals, within cells of tissues and organs has become of importance. With ultracentrifugation techniques, various fractions of the cell can be separated to determine their content of metal ions and other toxicants.
Animal studies have revealed that after penetration into the cell, some metal ions are bound to a specific protein, metallothionein. This low molecular weight protein is present in the cells of liver, kidney and other organs and tissues. Its sulphydryl groups can bind six ions per molecule. Increased presence of metal ions induces the biosynthesis of this protein. Ions of cadmium are the most potent inducer. Metallothionein serves also to maintain homeostasis of vital copper and zinc ions. Metallothionein can bind zinc, copper, cadmium, mercury, bismuth, gold, cobalt and other cations.
During retention in cells of various tissues and organs, toxicants are exposed to enzymes which can biotransform (metabolize) them, producing metabolites. There are many pathways for the elimination of toxicants and/or metabolites: by exhaled air via the lungs, by urine via the kidneys, by bile via the GIT, by sweat via the skin, by saliva via the mouth mucosa, by milk via the mammary glands, and by hair and nails via normal growth and cell turnover.
The elimination of an absorbed toxicant depends on the portal of entry. In the lungs the absorption/desorption process starts immediately and toxicants are partially eliminated by exhaled air. Elimination of toxicants absorbed by other paths of entry is prolonged and starts after transport by blood, eventually being completed after distribution and biotransformation. During absorption an equilibrium exists between the concentrations of a toxicant in the blood and in tissues and organs. Excretion decreases toxicant blood concentration and may induce mobilization of a toxicant from tissues into blood.
Many factors can influence the elimination rate of toxicants and their metabolites from the body:
· physico-chemical properties of toxicants, especially the Nernst partition coefficient (P), dissociation constant (pKa), polarity, molecular structure, shape and weight
· level of exposure and time of post-exposure elimination
· portal of entry
· distribution in the body compartments, which differ in exchange rate with the blood and blood perfusion
· rate of biotransformation of lipophilic toxicants to more hydrophilic metabolites
· overall health condition of organism and, especially, of excretory organs (lungs, kidneys, GIT, skin, etc.)
· presence of other toxicants which can interfere with elimination.
Here we distinguish two groups of compartments: (1) the rapid-exchange system, in which the tissue concentration of a toxicant is similar to that of the blood; and (2) the slow-exchange system, where the tissue concentration of a toxicant is higher than in blood due to binding and accumulation; adipose tissue, the skeleton and the kidneys can temporarily retain some toxicants, e.g., arsenic and zinc.
A toxicant can be excreted simultaneously by two or more excretion routes. However, usually one route is dominant.
Scientists are developing mathematical models describing the excretion of a particular toxicant. These models are based on the movement of the toxicant between one or both compartments (exchange systems), its biotransformation and so on.
Elimination via the lungs (desorption) is typical for toxicants with high volatility (e.g., organic solvents). Gases and vapours with low solubility in blood will be quickly eliminated this way, whereas toxicants with high blood solubility will be eliminated by other routes.
Organic solvents absorbed by the GIT or skin are excreted partially by exhaled air in each passage of blood through the lungs, if they have a sufficient vapour pressure. The Breathalyser test used for suspected drunk drivers is based on this fact. The concentration of CO in exhaled air is in equilibrium with the CO-Hb blood content. The radioactive gas radon appears in exhaled air due to the decay of radium accumulated in the skeleton.
Elimination of a toxicant by exhaled air in relation to the post-exposure period of time usually is expressed by a three-phase curve. The first phase represents elimination of toxicant from the blood, showing a short half-life. The second, slower phase represents elimination due to exchange of blood with tissues and organs (quick-exchange system). The third, very slow phase is due to exchange of blood with fatty tissue and skeleton. If a toxicant is not accumulated in such compartments, the curve will be two-phase. In some cases a four-phase curve is also possible.
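The three-phase curve can be modelled as a sum of exponentials, one term per phase. The initial fractions and half-lives below are hypothetical values chosen only to show the shape:

```python
def exhaled_concentration(t_hours,
                          phases=((0.70, 0.5),    # blood: short half-life
                                  (0.25, 6.0),    # quick-exchange tissues
                                  (0.05, 100.0))):# fat and skeleton: very slow
    """Post-exposure concentration in exhaled air (relative to t = 0)
    modelled as a sum of exponentials, one per phase of the curve.
    phases: (initial fraction, half-life in hours) -- illustrative values;
    dropping the slow term would give the two-phase curve mentioned above."""
    return sum(frac * 0.5 ** (t_hours / t_half) for frac, t_half in phases)

# Early on the fast blood term dominates; after a couple of days only the
# slow fat/skeleton term is left, producing the long tail of the curve.
```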
Determination of gases and vapours in exhaled air in the post-exposure period is sometimes used for evaluation of exposures in workers.
The kidney is an organ specialized in the excretion of numerous water-soluble toxicants and metabolites, maintaining homeostasis of the organism. Each kidney possesses about one million nephrons able to perform excretion. Renal excretion represents a very complex event encompassing three different mechanisms:
· glomerular filtration by Bowman’s capsule
· active transport in the proximal tubule
· passive transport in the distal tubule.
Excretion of a toxicant via the kidneys to urine depends on the Nernst partition coefficient, dissociation constant and pH of urine, molecular size and shape, rate of metabolism to more hydrophilic metabolites, as well as health status of the kidneys.
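The dependence on the dissociation constant and urine pH follows the Henderson-Hasselbalch relationship: the ionized species is poorly reabsorbed across the tubule wall and is therefore trapped in urine. A sketch with illustrative pKa values:

```python
def fraction_ionized(pka, ph, acid=True):
    """Henderson-Hasselbalch: fraction of a weak acid (or, with
    acid=False, a weak base) present in ionized form at a given pH.
    The ionized species is trapped in urine because the tubule wall is
    far less permeable to it than to the neutral form."""
    if acid:
        return 1.0 / (1.0 + 10.0 ** (pka - ph))
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# For a hypothetical weak acid with pKa 4: alkalinizing urine from pH 5
# to pH 8 raises the ionized (trapped) fraction from about 91% to over
# 99.9%, which is the rationale for urine alkalinization in some poisonings.
```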
The kinetics of renal excretion of a toxicant or its metabolite can be expressed by a two-, three- or four-phase excretion curve, depending on the distribution of the particular toxicant in various body compartments differing in the rate of exchange with the blood.
Some drugs and metallic ions can be excreted through the mucosa of the mouth by saliva: for example, lead (“lead line”), mercury, arsenic, copper, as well as bromides, iodides, ethyl alcohol, alkaloids, and so on. The toxicants are then swallowed, reaching the GIT, where they can be reabsorbed or eliminated by faeces.
Many non-electrolytes can be partially eliminated via skin by sweat: ethyl alcohol, acetone, phenols, carbon disulphide and chlorinated hydrocarbons.
Many metals, organic solvents and some organochlorine pesticides (DDT) are secreted via the mammary gland in mother’s milk. This pathway can represent a danger for nursing infants.
Analysis of hair can be used as an indicator of homeostasis of some physiological substances. Also exposure to some toxicants, especially heavy metals, can be evaluated by this kind of bioassay.
Elimination of toxicants from the body can be increased by:
· mechanical translocation via gastric lavage, blood transfusion or dialysis
· creating physiological conditions which mobilize toxicants by diet, change of hormonal balance, improving renal function by application of diuretics
· administration of complexing agents (citrates, oxalates, salicylates, phosphates), or chelating agents (Ca-EDTA, BAL, ATA, DMSA, penicillamine); this method is indicated only in persons under strict medical control. Application of chelating agents is often used for elimination of heavy metals from the body of exposed workers in the course of their medical treatment. This method is also used for evaluation of total body burden and level of past exposure.
Determination of toxicants and metabolites in blood, exhaled air, urine, sweat, faeces and hair is increasingly used for evaluation of human exposure (exposure tests) and/or evaluation of the degree of intoxication. Therefore biological exposure limits (Biological MAC Values, Biological Exposure Indices (BEI)) have recently been established. These bioassays show the “internal exposure” of the organism, that is, total exposure of the body in both the work and living environments by all portals of entry (see “Toxicology test methods: Biomarkers”).
People in the work and/or living environment are usually exposed simultaneously or consecutively to various physical and chemical agents. It is also necessary to take into consideration that some persons use medications, smoke, and consume alcohol and food containing additives, and so on. This means that multiple exposure usually occurs. Physical and chemical agents can interact in each step of toxicokinetic and/or toxicodynamic processes, producing three possible effects:
1. Independent. Each agent produces a different effect due to a different mechanism of action.
2. Synergistic. The combined effect is greater than that of each single agent. Here we differentiate two types: (a) additive, where the combined effect is equal to the sum of the effects produced by each agent separately and (b) potentiating, where the combined effect is greater than additive.
3. Antagonistic. The combined effect is lower than additive.
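Under the simplifying assumption that all effects can be scored on one common scale, the three interaction categories reduce to a comparison against the sum of the single-agent effects. A bookkeeping sketch, not a pharmacological model:

```python
def classify_combined_effect(effect_a, effect_b, combined, tol=1e-9):
    """Label a combined effect relative to the sum of single-agent
    effects on a common (hypothetical) effect scale: additive when it
    equals the sum, potentiating when it exceeds it, antagonistic when
    it falls below it.  Independent action cannot be captured on a
    single shared scale and is not modelled here."""
    expected = effect_a + effect_b
    if combined > expected + tol:
        return "synergistic (potentiating)"
    if combined < expected - tol:
        return "antagonistic"
    return "synergistic (additive)"
```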
However, studies on combined effects are rare. This kind of study is very complex due to the combination of various factors and agents.
We can conclude that when the human organism is exposed to two or more toxicants simultaneously or consecutively, it is necessary to consider the possibility of some combined effects, which can increase or decrease the rate of toxicokinetic processes.
The priority objective of occupational and environmental toxicology is to improve the prevention or substantial limitation of health effects of exposure to hazardous agents in the general and occupational environments. To this end systems have been developed for quantitative risk assessment related to a given exposure (see the section “Regulatory toxicology”).
The effects of a chemical on particular systems and organs are related to the magnitude of exposure and whether exposure is acute or chronic. In view of the diversity of toxic effects even within one system or organ, a uniform philosophy concerning the critical organ and critical effect has been proposed for the purpose of risk assessment and development of health-based recommended concentration limits of toxic substances in different environmental media.
From the point of view of preventive medicine, it is of particular importance to identify early adverse effects, based on the general assumption that preventing or limiting early effects may prevent more severe health effects from developing.
Such an approach has been applied to heavy metals. Although heavy metals, such as lead, cadmium and mercury, belong to a specific group of toxic substances whose chronic toxic effect depends on their accumulation in the organs, the definitions presented below were published by the Task Group on Metal Toxicity (Nordberg 1976).
The definition of the critical organ as proposed by the Task Group on Metal Toxicity has been adopted with a slight modification: the word metal has been replaced with the expression potentially toxic substance (Duffus 1993).
Whether a given organ or system is regarded as critical depends not only on the toxicokinetics of the hazardous agent but also on the route of absorption and the exposed population.
· Critical concentration for a cell: the concentration at which adverse functional changes, reversible or irreversible, occur in the cell.
· Critical organ concentration: the mean concentration in the organ at the time at which the most sensitive type of cells in the organ reach critical concentration.
· Critical organ: that particular organ which first attains the critical concentration of metal under specified circumstances of exposure and for a given population.
· Critical effect: a defined point in the relationship between dose and effect in the individual, namely the point at which an adverse effect occurs in cellular function of the critical organ. At an exposure level lower than that giving a critical concentration of metal in the critical organ, some effects may occur that do not impair cellular function per se, yet are detectable by means of biochemical and other tests. Such effects are defined as subcritical effects.
The biological meaning of a subcritical effect is sometimes not known; it may represent a biomarker of exposure, an index of adaptation or a precursor of the critical effect (see “Toxicology test methods: Biomarkers”). The latter possibility can be particularly significant in view of prophylactic activities.
Table 33.1 displays examples of critical organs and effects for different chemicals. In chronic environmental exposure to cadmium, where the route of absorption is of minor importance (cadmium air concentrations range from 10 to 20 µg/m3 in the urban and 1 to 2 µg/m3 in the rural areas), the critical organ is the kidney. In the occupational setting where the TLV reaches 50 µg/m3 and inhalation constitutes the main route of exposure, two organs, lung and kidney, are regarded as critical.
Table 33.1 Examples of critical organs and critical effects in chronic exposure:
· Lung: nonthreshold: lung cancer (unit risk 4.6 x 10-3)
· Kidney: threshold: increased excretion of low molecular weight proteins (β2-M, RBP) in urine
· Lung: emphysema, slight function changes
· Haemopoietic system: increased delta-aminolevulinic acid excretion in urine (ALA-U); increased concentration of free erythrocyte protoporphyrin (FEP) in erythrocytes
· Peripheral nervous system: slowing of the conduction velocities of the slower nerve fibres
· Central nervous system: decrease in IQ and other subtle effects; mercurial tremor (fingers, lips, eyelids)
· Central nervous system: impairment of psychomotor functions
· Central nervous system: impairment of psychomotor functions
· Liver: cancer (angiosarcoma, unit risk 1 x 10-6)
For lead, the critical organs in adults are the haemopoietic and peripheral nervous systems, where the critical effects (e.g., elevated free erythrocyte protoporphyrin concentration (FEP), increased excretion of delta-aminolevulinic acid in urine, or impaired peripheral nerve conduction) manifest when the blood lead level (an index of lead absorption in the system) approaches 200 to 300 µg/l. In small children the critical organ is the central nervous system (CNS), and the symptoms of dysfunction detected with the use of a psychological test battery have been found to appear in the examined populations even at concentrations in the range of about 100 µg/l Pb in blood.
A number of other definitions have been formulated which may better reflect the meaning of the notion. According to WHO (1989), the critical effect has been defined as “the first adverse effect which appears when the threshold (critical) concentration or dose is reached in the critical organ. Adverse effects, such as cancer, with no defined threshold concentration are often regarded as critical. Decision on whether an effect is critical is a matter of expert judgement.” In the International Programme on Chemical Safety (IPCS) guidelines for developing Environmental Health Criteria Documents, the critical effect is described as “the adverse effect judged to be most appropriate for determining the tolerable intake”. The latter definition has been formulated directly for the purpose of evaluating the health-based exposure limits in the general environment. In this context the most essential seems to be determining which effect can be regarded as an adverse effect. Following current terminology, the adverse effect is the “change in morphology, physiology, growth, development or lifespan of an organism which results in impairment of the capacity to compensate for additional stress or increase in susceptibility to the harmful effects of other environmental influences. Decision on whether or not any effect is adverse requires expert judgement.”
Figure 33.1 displays hypothetical dose-response curves for different effects. In the case of exposure to lead, A can represent a subcritical effect (inhibition of erythrocyte ALA-dehydratase), B the critical effect (an increase in erythrocyte zinc protoporphyrin or an increase in the excretion of delta-aminolevulinic acid), C the clinical effect (anaemia) and D the fatal effect (death). For lead exposure there is abundant evidence illustrating how particular effects of exposure depend on lead concentration in blood (the practical counterpart of the dose), either in the form of the dose-response relationship or in relation to different variables (sex, age, etc.). Determining the critical effects and the dose-response relationship for such effects in humans makes it possible to predict the frequency of a given effect for a given dose, or its counterpart (concentration in biological material), in a certain population.
The critical effects can be of two types: those considered to have a threshold and those for which there may be some risk at any exposure level (non-threshold, genotoxic carcinogens and germ mutagens). Whenever possible, appropriate human data should be used as a basis for the risk assessment. In order to determine the threshold effects for the general population, assumptions concerning the exposure level (tolerable intake, biomarkers of exposure) have to be made such that the frequency of the critical effect in the population exposed to a given hazardous agent corresponds to the frequency of that effect in the general population. In lead exposure, the maximum recommended blood lead concentration for the general population (200 µg/l, median below 100 µg/l) (WHO 1987) is practically below the threshold value for the assumed critical effect, the elevated free erythrocyte protoporphyrin level, although it is not below the level associated with effects on the CNS in children or blood pressure in adults. In general, if data from well-conducted human population studies defining a no observed adverse effect level are the basis for safety evaluation, then an uncertainty factor of ten has been considered appropriate. In the case of occupational exposure the critical effects may refer to a certain part of the population (e.g., 10%). Accordingly, in occupational lead exposure the recommended health-based concentration of blood lead has been adopted to be 400 µg/l in men, where a 10% response level for ALA-U of 5 mg/l occurred at PbB concentrations of about 300 to 400 µg/l. For the occupational exposure to cadmium (assuming the increased urinary excretion of low-weight proteins to be the critical effect), the level of 200 ppm cadmium in renal cortex has been regarded as the admissible value, for this effect has been observed in 10% of the exposed population. Both these values are under consideration for lowering, in many countries, at the present time (i.e., 1996).
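The NOAEL-based derivation with an uncertainty factor of ten mentioned above amounts to a single division; units follow whatever the NOAEL is expressed in. A sketch with a hypothetical NOAEL:

```python
def tolerable_intake(noael, uncertainty_factor=10.0):
    """Tolerable intake derived from a no-observed-adverse-effect level
    (NOAEL) by dividing by an uncertainty factor -- 10 when the NOAEL
    comes from well-conducted human population studies, as noted above;
    larger composite factors are used for weaker data sets.
    Units follow the NOAEL (e.g., mg per kg body weight per day)."""
    return noael / uncertainty_factor

# Hypothetical NOAEL of 0.5 mg/kg/day from a human study:
# tolerable intake = 0.05 mg/kg/day.
ti = tolerable_intake(0.5)
```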
There is no clear consensus on appropriate methodology for the risk assessment of chemicals for which the critical effect may not have a threshold, such as genotoxic carcinogens. A number of approaches based largely on characterization of the dose-response relationship have been adopted for the assessment of such effects. Owing to the lack of socio-political acceptance of health risk caused by carcinogens, in such documents as the Air Quality Guidelines for Europe (WHO 1987) only values such as the unit lifetime risk (i.e., the risk associated with lifetime exposure to 1 µg/m3 of the hazardous agent) are presented for non-threshold effects (see "Regulatory toxicology").
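Under the linear non-threshold assumption, a unit lifetime risk value is applied by simple proportional scaling. The figures below are assumed for illustration only and are not taken from any guideline:

```python
# Sketch of how a unit lifetime risk value is used. The unit risk is the
# excess lifetime risk associated with lifetime exposure to 1 µg/m3 of
# the agent; under a linear non-threshold assumption the risk scales
# proportionally with concentration. Values are hypothetical.

def excess_lifetime_risk(unit_risk_per_ug_m3, concentration_ug_m3):
    """Linear extrapolation from the unit lifetime risk."""
    return unit_risk_per_ug_m3 * concentration_ug_m3

# Illustrative unit risk of 9e-5 per µg/m3 at an ambient lifetime
# average concentration of 0.5 µg/m3:
risk = excess_lifetime_risk(9e-5, 0.5)
print(risk)  # 4.5e-05, i.e., about 4.5 excess cases per 100,000 exposed
```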
Presently, the basic step in undertaking activities for risk assessment is determining the critical organ and critical effects. The definitions of both the critical and the adverse effect reflect the responsibility of deciding which of the effects within a given organ or system should be regarded as critical, and this is directly related to the subsequent determination of recommended values for a given chemical in the general environment, for example, the Air Quality Guidelines for Europe (WHO 1987), or of health-based limits in occupational exposure (WHO 1980). Determining the critical effect from within the range of subcritical effects may lead to a situation where the recommended limits on toxic chemical concentrations in the general or occupational environment are in practice impossible to maintain. Regarding as critical an effect that may overlap the early clinical effects may bring about the adoption of values at which adverse effects may develop in some part of the population. The decision whether or not a given effect should be considered critical remains the responsibility of expert groups who specialize in toxicity and risk assessment.
There are often large differences among humans in the intensity of response to toxic chemicals, and variations in the susceptibility of an individual over a lifetime. These can be attributed to a variety of factors capable of influencing the absorption rate, distribution in the body, biotransformation and/or excretion rate of a particular chemical. Apart from the known hereditary factors which have been clearly demonstrated to be linked with increased susceptibility to chemical toxicity in humans (see “Genetic determinants of toxic response”), other factors include: constitutional characteristics related to age and sex; pre-existing disease states or a reduction in organ function (non-hereditary, i.e., acquired); dietary habits, smoking, alcohol consumption and use of medications; concomitant exposure to biotoxins (various microorganisms) and physical factors (radiation, humidity, extremely low or high temperatures, or barometric pressures particularly relevant to the partial pressure of a gas), as well as concomitant physical exercise or psychological stress; previous occupational and/or environmental exposure to a particular chemical, and in particular concomitant exposure to other chemicals, not necessarily toxic (e.g., essential metals). The possible contributions of the aforementioned factors in either increasing or decreasing susceptibility to adverse health effects, as well as the mechanisms of their action, are specific for a particular chemical. Therefore only the most common factors, basic mechanisms and a few characteristic examples will be presented here, whereas specific information concerning each particular chemical can be found elsewhere in this Encyclopaedia.
According to the stage at which these factors act (absorption, distribution, biotransformation or excretion of a particular chemical), the mechanisms can be roughly categorized according to two basic consequences of interaction: (1) a change in the quantity of the chemical in a target organ, that is, at the site(s) of its effect in the organism (toxicokinetic interactions), or (2) a change in the intensity of a specific response to the quantity of the chemical in a target organ (toxicodynamic interactions). The most common mechanisms of either type of interaction are related to competition with other chemical(s) for binding to the same compounds involved in their transport in the organism (e.g., specific serum proteins) and/or for the same biotransformation pathway (e.g., specific enzymes), resulting in a change in the speed or sequence of events between the initial reaction and the final adverse health effect. Both toxicokinetic and toxicodynamic interactions may influence individual susceptibility to a particular chemical. The influence of several concomitant factors can result in either: (a) additive effects, where the intensity of the combined effect is equal to the sum of the effects produced by each factor separately; (b) synergistic effects, where the intensity of the combined effect is greater than the sum of the effects produced by each factor separately; or (c) antagonistic effects, where the intensity of the combined effect is smaller than the sum of the effects produced by each factor separately.
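The three interaction outcomes (a)–(c) amount to comparing a measured combined effect against the sum of the separate effects. A minimal sketch, with effect intensities as arbitrary illustrative numbers on a common scale:

```python
# Classify an observed combined effect as additive, synergistic or
# antagonistic by comparing it with the sum of the effects produced by
# each factor separately (as defined in the text). Numbers are arbitrary.

def classify_interaction(combined, effect_a, effect_b, tolerance=1e-9):
    """Compare the combined effect with the sum of the separate effects."""
    expected = effect_a + effect_b
    if abs(combined - expected) <= tolerance:
        return "additive"
    return "synergistic" if combined > expected else "antagonistic"

print(classify_interaction(5.0, 2.0, 3.0))  # additive
print(classify_interaction(9.0, 2.0, 3.0))  # synergistic
print(classify_interaction(3.5, 2.0, 3.0))  # antagonistic
```

In practice the comparison is statistical rather than exact, which is why a tolerance parameter is included in this sketch.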
The quantity of a particular toxic chemical or characteristic metabolite at the site(s) of its effect in the human body can be more or less assessed by biological monitoring, that is, by choosing the correct biological specimen and optimal timing of specimen sampling, taking into account biological half-lives for a particular chemical in both the critical organ and in the measured biological compartment. However, reliable information concerning other possible factors that might influence individual susceptibility in humans is generally lacking, and consequently the majority of knowledge regarding the influence of various factors is based on experimental animal data.
It should be stressed that in some cases relatively large differences exist between humans and other mammals in the intensity of response to an equivalent level and/or duration of exposure to many toxic chemicals; for example, humans appear to be considerably more sensitive to the adverse health effects of several toxic metals than are rats (commonly used in experimental animal studies). Some of these differences can be attributed to the fact that the transportation, distribution and biotransformation pathways of various chemicals are greatly dependent on subtle changes in the tissue pH and the redox equilibrium in the organism (as are the activities of various enzymes), and that the redox system of the human differs considerably from that of the rat.
This is obviously the case regarding important antioxidants such as vitamin C and glutathione (GSH), which are essential for maintaining redox equilibrium and which have a protective role against the adverse effects of the oxygen- or xenobiotic-derived free radicals which are involved in a variety of pathological conditions (Kehrer 1993). Humans cannot synthesize vitamin C, unlike the rat, and both the levels and the turnover rate of erythrocyte GSH in humans are considerably lower than those in the rat. Humans also lack some of the protective antioxidant enzymes, compared with the rat or other mammals (e.g., GSH-peroxidase is considered to be poorly active in human sperm). These examples illustrate the potentially greater vulnerability to oxidative stress in humans (particularly in sensitive cells, e.g., the apparently greater vulnerability of human sperm to toxic influences than that of the rat), which can result in a different response or greater susceptibility to the influence of various factors in humans compared to other mammals (Telišman 1995).
Compared to adults, very young children are often more susceptible to chemical toxicity because of their relatively greater inhalation volumes and gastrointestinal absorption rate due to greater permeability of the intestinal epithelium, and because of immature detoxification enzyme systems and a relatively smaller excretion rate of toxic chemicals. The central nervous system appears to be particularly susceptible at the early stage of development with regard to the neurotoxicity of various chemicals, for example, lead and methylmercury. On the other hand, the elderly may be susceptible because of chemical exposure history and increased body stores of some xenobiotics, or pre-existing compromised function of target organs and/or relevant enzymes resulting in a lowered detoxification and excretion rate. Each of these factors can contribute to a weakening of the body’s defences (a decrease in reserve capacity), causing increased susceptibility to subsequent exposure to other hazards. For example, the cytochrome P450 enzymes (involved in the biotransformation pathways of almost all toxic chemicals) can be either induced or have lowered activity because of the influence of various factors over a lifetime (including dietary habits, smoking, alcohol, use of medications and exposure to environmental xenobiotics).
Gender-related differences in susceptibility have been described for a large number of toxic chemicals (approximately 200), and such differences are found in many mammalian species. It appears that males are generally more susceptible to renal toxins and females to liver toxins. The causes of the different response between males and females have been related to differences in a variety of physiological processes (e.g., females are capable of additional excretion of some toxic chemicals through menstrual blood loss, breast milk and/or transfer to the foetus, but they experience additional stress during pregnancy, delivery and lactation), enzyme activities, genetic repair mechanisms, hormonal factors, or the presence of relatively larger fat depots in females, resulting in greater accumulation of some lipophilic toxic chemicals, such as organic solvents and some medications.
Dietary habits have an important influence on susceptibility to chemical toxicity, mostly because adequate nutrition is essential for the functioning of the body’s chemical defence system in maintaining good health. Adequate intake of essential metals (including metalloids) and proteins, especially the sulphur-containing amino acids, is necessary for the biosynthesis of various detoxifying enzymes and the provision of glycine and glutathione for conjugation reactions with endogenous and exogenous compounds. Lipids, especially phospholipids, and lipotropes (methyl group donors) are necessary for the synthesis of biological membranes. Carbohydrates provide the energy required for various detoxification processes and provide glucuronic acid for conjugation of toxic chemicals and their metabolites. Selenium (an essential metalloid), glutathione, and vitamins such as vitamin C (water soluble), vitamin E and vitamin A (lipid soluble), have an important role as antioxidants (e.g., in controlling lipid peroxidation and maintaining the integrity of cellular membranes) and free-radical scavengers for protection against toxic chemicals.
In addition, various dietary constituents (protein and fibre content, minerals, phosphates, citric acid, etc.) as well as the amount of food consumed can greatly influence the gastrointestinal absorption rate of many toxic chemicals (e.g., the average absorption rate of soluble lead salts taken with meals is approximately 8%, as opposed to approximately 60% in fasting subjects). However, diet itself can be an additional source of individual exposure to various toxic chemicals (e.g., considerably increased daily intakes and accumulation of arsenic, mercury, cadmium and/or lead in subjects who consume contaminated seafood).
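The practical weight of the absorption figures quoted above can be seen with a short worked example. The ingested amount is hypothetical; only the two absorption fractions come from the text:

```python
# Worked example of the absorption rates quoted in the text: roughly 8%
# of a soluble lead salt is absorbed when taken with meals, versus
# roughly 60% in fasting subjects. The intake amount is illustrative.

def absorbed_amount(intake_ug, absorption_fraction):
    """Amount of the ingested dose that reaches the circulation."""
    return intake_ug * absorption_fraction

intake = 100.0  # µg of ingested soluble lead salt (hypothetical)
print(absorbed_amount(intake, 0.08))  # 8.0 µg absorbed with meals
print(absorbed_amount(intake, 0.60))  # 60.0 µg absorbed when fasting
```

The same ingested dose thus yields roughly a seven- to eight-fold difference in the internally absorbed dose depending on fasting state.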
The habit of smoking can influence individual susceptibility to many toxic chemicals because of the variety of possible interactions involving the great number of compounds present in cigarette smoke (especially polycyclic aromatic hydrocarbons, carbon monoxide, benzene, nicotine, acrolein, some pesticides, cadmium, and, to a lesser extent, lead and other toxic metals, etc.), some of which are capable of accumulating in the human body over a lifetime, including pre-natal life (e.g., lead and cadmium). The interactions occur mainly because various toxic chemicals compete for the same binding site(s) for transport and distribution in the organism and/or for the same biotransformation pathway involving particular enzymes. For example, several cigarette smoke constituents can induce cytochrome P450 enzymes, whereas others can depress their activity, and thus influence the common biotransformation pathways of many other toxic chemicals, such as organic solvents and some medications. Heavy cigarette smoking over a long period can considerably reduce the body’s defence mechanisms by decreasing reserve capacity to cope with the adverse influence of other life-style factors.
Consumption of alcohol (ethanol) can influence susceptibility to many toxic chemicals in several ways. It can influence the absorption rate and distribution of certain chemicals in the body: for example, it can increase the gastrointestinal absorption rate of lead, or decrease the pulmonary absorption rate of mercury vapour by inhibiting the oxidation which is necessary for retention of inhaled mercury vapour. Ethanol can also influence susceptibility to various chemicals through short-term changes in tissue pH and an increase in the redox potential resulting from ethanol metabolism, as both the oxidation of ethanol to acetaldehyde and the oxidation of acetaldehyde to acetate produce an equivalent of reduced nicotinamide adenine dinucleotide (NADH) and hydrogen (H+). Because the affinity of both essential and toxic metals and metalloids for binding to various compounds and tissues is influenced by pH and changes in the redox potential (Telišman 1995), even a moderate intake of ethanol may result in a series of consequences such as: (1) redistribution of long-term accumulated lead in the human organism in favour of a biologically active lead fraction; (2) replacement of essential zinc by lead in zinc-containing enzyme(s), thus affecting enzyme activity, or influence of mobilized lead on the distribution of other essential metals and metalloids in the organism such as calcium, iron, copper and selenium; and (3) increased urinary excretion of zinc, and so on. The effect of these possible events can be augmented by the fact that alcoholic beverages can contain an appreciable amount of lead from vessels or processing (Prpic-Majic et al. 1984; Telišman et al. 1984; 1993).
Another common reason for ethanol-related changes in susceptibility is that many toxic chemicals, for example, various organic solvents, share the same biotransformation pathway involving the cytochrome P450 enzymes. Depending on the intensity of exposure to organic solvents as well as the quantity and frequency of ethanol ingestion (i.e., acute or chronic alcohol consumption), ethanol can either decrease or increase biotransformation rates of various organic solvents and thus influence their toxicity (Sato 1991).
The common use of various medications can influence susceptibility to toxic chemicals mainly because many drugs bind to serum proteins and thus influence the transport, distribution or excretion rate of various toxic chemicals, or because many drugs are capable of inducing relevant detoxifying enzymes or depressing their activity (e.g., the cytochrome P450 enzymes), thus affecting the toxicity of chemicals sharing the same biotransformation pathway. Characteristic examples of either mechanism are the increased urinary excretion of trichloroacetic acid (the metabolite of several chlorinated hydrocarbons) when using salicylate, sulphonamide or phenylbutazone, and the increased hepato-nephrotoxicity of carbon tetrachloride when using phenobarbital. In addition, some medications contain a considerable amount of a potentially toxic chemical, for example, the aluminium-containing antacids or preparations used for the therapeutic management of the hyperphosphataemia arising in chronic renal failure.
The changes in susceptibility to adverse health effects due to interaction of various chemicals (i.e., possible additive, synergistic or antagonistic effects) have been studied almost exclusively in experimental animals, mostly in the rat. Relevant epidemiological and clinical studies are lacking. This is of concern particularly considering the relatively greater intensity of response or the variety of adverse health effects of several toxic chemicals in humans compared to the rat and other mammals. Apart from published data in the field of pharmacology, most data are related only to combinations of two different chemicals within specific groups, such as various pesticides, organic solvents, or essential and/or toxic metals and metalloids.
Combined exposure to various organic solvents can result in various additive, synergistic or antagonistic effects (depending on the combination of certain organic solvents, their intensity and duration of exposure), mainly due to the capability of influencing each other’s biotransformation (Sato 1991).
Another characteristic example is the interaction of essential and/or toxic metals and metalloids, as these are involved in the possible influence of age (e.g., lifetime body accumulation of environmental lead and cadmium), sex (e.g., common iron deficiency in women), dietary habits (e.g., increased dietary intake of toxic metals and metalloids and/or deficient dietary intake of essential metals and metalloids), smoking habit and alcohol consumption (e.g., additional exposure to cadmium, lead and other toxic metals), and use of medications (e.g., a single dose of antacid can result in a 50-fold increase in the average daily intake of aluminium through food). The possibility of various additive, synergistic or antagonistic effects of exposure to various metals and metalloids in humans can be illustrated by basic examples related to the main toxic elements (see table 33.2), apart from which further interactions may occur because essential elements can also influence one another (e.g., the well-known antagonistic effect of copper on the gastrointestinal absorption rate as well as the metabolism of zinc, and vice versa). The main cause of all these interactions is the competition of various metals and metalloids for the same binding site (especially the sulphydryl group, -SH) in various enzymes, metalloproteins (especially metallothionein) and tissues (e.g., cell membranes and organ barriers). These interactions may have a relevant role in the development of several chronic diseases which are mediated through the action of free radicals and oxidative stress (Telišman 1995).
Table 33.2. Toxic metal or metalloid and basic effects of the interaction with other metals or metalloids

Aluminium (Al)
Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Al.
Impairs phosphate metabolism.
Data on interactions with Fe, Zn and Cu are equivocal (i.e., the possible role of another metal as a mediator).

Arsenic (As)
Affects the distribution of Cu (an increase of Cu in the kidney, and a decrease of Cu in the liver, serum and urine).
Impairs the metabolism of Fe (an increase of Fe in the liver with a concomitant decrease in haematocrit).
Zn decreases the absorption rate of inorganic As and decreases the toxicity of As.
Se decreases the toxicity of As and vice versa.

Cadmium (Cd)
Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Cd.
Impairs phosphate metabolism, i.e., increases urinary excretion of phosphates.
Impairs the metabolism of Fe; deficient dietary Fe increases the absorption rate of Cd.
Affects the distribution of Zn; Zn decreases the toxicity of Cd, whereas its influence on the absorption rate of Cd is equivocal.
Se decreases the toxicity of Cd.
Mn decreases the toxicity of Cd at low-level exposure to Cd.
Data on the interaction with Cu are equivocal (i.e., the possible role of Zn, or another metal, as a mediator).
High dietary levels of Pb, Ni, Sr, Mg or Cr(III) can decrease the absorption rate of Cd.

Mercury (Hg)
Affects the distribution of Cu (an increase of Cu in the liver).
Zn decreases the absorption rate of inorganic Hg and decreases the toxicity of Hg.
Se decreases the toxicity of Hg.
Cd increases the concentration of Hg in the kidney, but at the same time decreases the toxicity of Hg in the kidney (the influence of the Cd-induced metallothionein synthesis).

Lead (Pb)
Impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of inorganic Pb and increases the toxicity of Pb.
Impairs the metabolism of Fe; deficient dietary Fe increases the toxicity of Pb, whereas its influence on the absorption rate of Pb is equivocal.
Impairs the metabolism of Zn and increases urinary excretion of Zn; deficient dietary Zn increases the absorption rate of inorganic Pb and increases the toxicity of Pb.
Se decreases the toxicity of Pb.
Data on interactions with Cu and Mg are equivocal (i.e., the possible role of Zn, or another metal, as a mediator).
Note: Data are mostly related to experimental studies in the rat, whereas relevant clinical and epidemiological data (particularly regarding quantitative dose-response relationships) are generally lacking (Elsenhans et al. 1991; Fergusson 1990; Telišman et al. 1993).
It has long been recognized that each person’s response to environmental chemicals is different. The recent explosion in molecular biology and genetics has brought a clearer understanding about the molecular basis of such variability. Major determinants of individual response to chemicals include important differences among more than a dozen superfamilies of enzymes, collectively termed xenobiotic- (foreign to the body) or drug-metabolizing enzymes. Although the role of these enzymes has classically been regarded as detoxification, these same enzymes also convert a number of inert compounds to highly toxic intermediates. Recently, many subtle as well as gross differences in the genes encoding these enzymes have been identified, which have been shown to result in marked variations in enzyme activity. It is now clear that each individual possesses a distinct complement of xenobiotic-metabolizing enzyme activities; this diversity might be thought of as a “metabolic fingerprint”. It is the complex interplay of these many different enzyme superfamilies which ultimately determines not only the fate and the potential for toxicity of a chemical in any given individual, but also assessment of exposure. In this article we have chosen to use the cytochrome P450 enzyme superfamily to illustrate the remarkable progress made in understanding individual response to chemicals. The development of relatively simple DNA-based tests designed to identify specific gene alterations in these enzymes, is now providing more accurate predictions of individual response to chemical exposure. We hope the result will be preventive toxicology. In other words, each individual might learn about those chemicals to which he or she is particularly sensitive, thereby avoiding previously unpredictable toxicity or cancer.
Although it is not generally appreciated, human beings are exposed daily to a barrage of innumerable diverse chemicals. Many of these chemicals are highly toxic, and they are derived from a wide variety of environmental and dietary sources. The relationship between such exposures and human health has been, and continues to be, a major focus of biomedical research efforts worldwide.
What are some examples of this chemical bombardment? More than 400 chemicals from red wine have been isolated and characterized. At least 1,000 chemicals are estimated to be produced by a lighted cigarette. There are countless chemicals in cosmetics and perfumed soaps. Another major source of chemical exposure is agriculture: in the United States alone, farmlands receive more than 75,000 chemicals each year in the form of pesticides, herbicides and fertilizing agents; after uptake by plants and grazing animals, as well as fish in nearby waterways, humans (at the end of the food chain) ingest these chemicals. Two other sources of large concentrations of chemicals taken into the body include (a) drugs taken chronically and (b) exposure to hazardous substances in the workplace over a lifetime of employment.
It is now well established that chemical exposure may adversely affect many aspects of human health, causing chronic diseases and the development of many cancers. In the last decade or so, the molecular basis of many of these relationships has begun to be unravelled. In addition, the realization has emerged that humans differ markedly in their susceptibility to the harmful effects of chemical exposure.
Current efforts to predict human response to chemical exposure combine two fundamental approaches (figure 33.2): monitoring the extent of human exposure through biological markers (biomarkers), and predicting the likely response of an individual to a given level of exposure. Although both of these approaches are extremely important, it should be emphasized that the two are distinctly different from one another. This article will focus on the genetic factors underlying individual susceptibility to any particular chemical exposure. This field of research is broadly termed ecogenetics, or pharmacogenetics (see Kalow 1962 and 1992). Many of the recent advances in determining individual susceptibility to chemical toxicity have evolved from a greater appreciation of the processes by which humans and other mammals detoxify chemicals, and the remarkable complexity of the enzyme systems involved.
Toxicologists and pharmacologists commonly speak about the average lethal dose for 50% of the population (LD50), the average maximal tolerated dose for 50% of the population (MTD50), and the average effective dose of a particular drug for 50% of the population (ED50). However, how do these doses affect each of us on an individual basis? In other words, a highly sensitive individual may be 500 times more affected or 500 times more likely to be affected than the most resistant individual in a population; for these people, the LD50 (and MTD50 and ED50) values would have little meaning. LD50, MTD50 and ED50 values are only relevant when referring to the population as a whole.
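The point that an LD50 (or MTD50, ED50) describes a population rather than any individual can be illustrated with a simple population dose-response curve. The logistic (Hill-type) form, the slope and the dose numbers below are all assumptions chosen for illustration:

```python
# Illustration of why an LD50 characterizes a population, not an
# individual. The logistic dose-response form, slope and dose units
# below are hypothetical choices for the sketch.

def fraction_responding(dose, ld50, slope=2.0):
    """Fraction of the population responding at a given dose
    (logistic, Hill-type curve; assumed functional form)."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

ld50 = 100.0  # arbitrary dose units
print(fraction_responding(100.0, ld50))        # 0.5: half respond at the LD50
print(round(fraction_responding(10.0, ld50), 4))  # 0.0099: about 1% respond
# A "sensitive outlier" may nevertheless respond at 10 dose units,
# even though that dose affects only about 1% of the population,
# which is why the LD50 says little about that individual.
```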
Figure 33.3 illustrates a hypothetical dose-response relationship for a toxic response by individuals in any given population. This generic diagram might represent bronchogenic carcinoma in response to the number of cigarettes smoked, chloracne as a function of dioxin levels in the workplace, asthma as a function of air concentrations of ozone or aldehyde, sunburn in response to ultraviolet light, decreased clotting time as a function of aspirin intake, or gastrointestinal distress in response to the number of jalapeño peppers consumed. Generally, in each of these instances, the greater the exposure, the greater the toxic response. Most of the population will exhibit a toxic response close to the mean, within the standard deviation, as a function of dose. The “resistant outlier” (lower right in figure 33.3) is an individual having less of a response at higher doses or exposures. A “sensitive outlier” (upper left) is an individual having an exaggerated response to a relatively small dose or exposure. These outliers, with extreme differences in response compared to the majority of individuals in the population, may represent important genetic variants that can help scientists in attempting to understand the underlying molecular mechanisms of a toxic response.
Using these outliers in family studies, scientists in a number of laboratories have begun to appreciate the importance of Mendelian inheritance for a given toxic response. Subsequently, one can then turn to molecular biology and genetic studies to pinpoint the underlying mechanism at the gene level (genotype) responsible for the environmentally caused disease (phenotype).
How does the body respond to the myriad of exogenous chemicals to which we are exposed? Humans and other mammals have evolved highly complex metabolic enzyme systems comprising more than a dozen distinct superfamilies of enzymes. Almost every chemical to which humans are exposed will be modified by these enzymes, in order to facilitate removal of the foreign substance from the body. Collectively, these enzymes are frequently referred to as drug-metabolizing enzymes or xenobiotic-metabolizing enzymes. Actually, both terms are misnomers. First, many of these enzymes metabolize not only drugs but hundreds of thousands of environmental and dietary chemicals. Second, all of these enzymes also have normal body compounds as substrates; none of these enzymes metabolizes only foreign chemicals.
For more than four decades, the metabolic processes mediated by these enzymes have commonly been classified as either Phase I or Phase II reactions (figure 33.4).
Phase I (“functionalization”) reactions generally involve relatively minor structural modifications of the parent chemical via oxidation, reduction or hydrolysis in order to produce a more water-soluble metabolite. Frequently, Phase I reactions provide a “handle” for further modification of a compound by subsequent Phase II reactions. Phase I reactions are primarily mediated by a superfamily of highly versatile enzymes, collectively termed cytochromes P450, although other enzyme superfamilies can also be involved (figure 33.5).
Phase II reactions involve the coupling of a water-soluble endogenous molecule to a chemical (parent chemical or Phase I metabolite) in order to facilitate excretion. Phase II reactions are frequently termed “conjugation” or “derivatization” reactions. The enzyme superfamilies catalyzing Phase II reactions are generally named according to the endogenous conjugating moiety involved: for example, acetylation by the N-acetyltransferases, sulphation by the sulphotransferases, glutathione conjugation by the glutathione transferases, and glucuronidation by the UDP glucuronosyltransferases (figure 33.5). Although the major organ of drug metabolism is the liver, the levels of some drug-metabolizing enzymes are quite high in the gastrointestinal tract, gonads, lung, brain and kidney, and such enzymes are undoubtedly present to some extent in every living cell.
As we learn more about the biological and chemical processes leading to human health aberrations, it has become increasingly evident that drug-metabolizing enzymes function in an ambivalent manner (figure 33.4). In the majority of cases, lipid-soluble chemicals are converted to more readily excreted water-soluble metabolites. However, it is clear that on many occasions the same enzymes are capable of transforming otherwise inert chemicals into highly reactive molecules. These intermediates can then interact with cellular macromolecules such as proteins and DNA. Thus, for each chemical to which humans are exposed, there exists the potential for the competing pathways of metabolic activation and detoxification.
In human genetics, each gene (locus) is located on one of the 23 pairs of chromosomes. The two alleles (one present on each chromosome of the pair) can be the same, or they can be different from one another. For example, the B and b alleles, in which B (brown eyes) is dominant over b (blue eyes): individuals of the brown-eyed phenotype can have either the BB or Bb genotypes, whereas individuals of the blue-eyed phenotype can only have the bb genotype.
A polymorphism is defined as two or more stably inherited phenotypes (traits), derived from the same gene(s), that are maintained in the population, often for reasons not necessarily obvious. For a gene to be polymorphic, the gene product must not be essential for development, reproductive vigour or other critical life processes. In fact, a “balanced polymorphism”, wherein the heterozygote has a distinct survival advantage over either homozygote (e.g., resistance to malaria, and the sickle-cell haemoglobin allele), is a common explanation for maintaining an allele in the population at otherwise unexplained high frequencies (see Gonzalez and Nebert 1990).
Genetic differences in the metabolism of various drugs and environmental chemicals have been known for more than four decades (Kalow 1962 and 1992). These differences are frequently referred to as pharmacogenetic or, more broadly, ecogenetic polymorphisms. These polymorphisms represent variant alleles that occur at a relatively high frequency in the population and are generally associated with aberrations in enzyme expression or function. Historically, polymorphisms were usually identified following unexpected responses to therapeutic agents. More recently, recombinant DNA technology has enabled scientists to identify the precise alterations in genes that are responsible for some of these polymorphisms. Polymorphisms have now been characterized in many drug-metabolizing enzymes, including both Phase I and Phase II enzymes. As more and more polymorphisms are identified, it is becoming increasingly apparent that each individual may possess a distinct complement of drug-metabolizing enzymes. This diversity might be described as a “metabolic fingerprint”. It is the complex interplay of the various drug-metabolizing enzyme superfamilies within any individual that will ultimately determine his or her particular response to a given chemical (Kalow 1962 and 1992; Nebert 1988; Gonzalez and Nebert 1990; Nebert and Weber 1990).
How might we develop better predictors of human toxic responses to chemicals? Advances in defining the multiplicity of drug-metabolizing enzymes must be accompanied by precise knowledge as to which enzymes determine the metabolic fate of individual chemicals. Data gleaned from laboratory rodent studies have certainly provided useful information. However, significant interspecies differences in xenobiotic-metabolizing enzymes necessitate caution in extrapolating data to human populations. To overcome this difficulty, many laboratories have developed systems in which various cell lines in culture can be engineered to produce functional human enzymes that are stable and in high concentrations (Gonzalez, Crespi and Gelboin 1991). Successful production of human enzymes has been achieved in a variety of diverse cell lines from sources including bacteria, yeast, insects and mammals.
In order to define the metabolism of chemicals even more accurately, multiple enzymes have also been successfully produced in a single cell line (Gonzalez, Crespi and Gelboin 1991). Such cell lines provide valuable insights into the precise enzymes involved in the metabolic processing of any given compound and likely toxic metabolites. If this information can then be combined with knowledge regarding the presence and level of an enzyme in human tissues, these data should provide valuable predictors of response.
The cytochrome P450 superfamily is one of the most intensively studied drug-metabolizing enzyme superfamilies, and one that displays a great deal of individual variability in response to chemicals. Cytochrome P450 is a convenient generic term used to describe a large superfamily of enzymes pivotal in the metabolism of innumerable endogenous and exogenous substrates. The term cytochrome P450 was first coined in 1962 to describe an unknown pigment in cells which, when reduced and bound with carbon monoxide, produced a characteristic absorption peak at 450 nm. Since the early 1980s, cDNA cloning technology has resulted in remarkable insights into the multiplicity of cytochrome P450 enzymes. To date, more than 400 distinct cytochrome P450 genes have been identified in animals, plants, bacteria and yeast. It has been estimated that any one mammalian species, such as humans, may possess 60 or more distinct P450 genes (Nebert and Nelson 1991). The multiplicity of P450 genes has necessitated the development of a standardized nomenclature system (Nebert et al. 1987; Nelson et al. 1993).
First proposed in 1987 and updated on a biannual basis, the nomenclature system is based on divergent evolution of amino acid sequence comparisons between P450 proteins. The P450 genes are divided into families and subfamilies: enzymes within a family display greater than 40% amino acid similarity, and those within the same subfamily display greater than 55% similarity. P450 genes are named with the root symbol CYP followed by an arabic numeral designating the P450 family, a letter denoting the subfamily, and a further arabic numeral designating the individual gene (Nelson et al. 1993; Nebert et al. 1991). Thus, CYP1A1 represents P450 gene 1 in family 1 and subfamily A.
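The naming convention and similarity thresholds just described can be expressed as a short sketch; the helper names below are hypothetical, and real nomenclature assignments involve full phylogenetic comparison, not a single percentage:

```python
import re

def parse_cyp_symbol(symbol):
    """Split a standard P450 gene symbol into its parts: the root CYP,
    an Arabic family number, a subfamily letter and a gene number."""
    m = re.fullmatch(r"CYP(\d+)([A-Z])(\d+)", symbol)
    if m is None:
        raise ValueError("not a standard CYP symbol: " + symbol)
    family, subfamily, gene = m.groups()
    return {"family": int(family), "subfamily": subfamily, "gene": int(gene)}

def classify_similarity(percent_identity):
    """Apply the nomenclature thresholds: greater than 55% amino acid
    similarity places two P450s in the same subfamily, greater than
    40% in the same family."""
    if percent_identity > 55:
        return "same subfamily"
    if percent_identity > 40:
        return "same family"
    return "different families"

print(parse_cyp_symbol("CYP1A1"))  # gene 1 in family 1, subfamily A
```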
As of February 1995, there are 403 CYP genes in the database, composed of 59 families and 105 subfamilies. These include eight lower eukaryotic families, 15 plant families, and 19 bacterial families. The 15 human P450 gene families comprise 26 subfamilies, 22 of which have been mapped to chromosomal locations throughout most of the genome. Some sequences are clearly orthologous across many species; for example, only one CYP17 (steroid 17α-hydroxylase) gene has been found in all vertebrates examined to date. Other sequences within a subfamily are highly duplicated, making the identification of orthologous pairs impossible (e.g., the CYP2C subfamily). Interestingly, human and yeast share an orthologous gene in the CYP51 family. Numerous comprehensive reviews are available for readers seeking further information on the P450 superfamily (Nelson et al. 1993; Nebert et al. 1991; Nebert and McKinnon 1994; Guengerich 1993; Gonzalez 1992).
The success of the P450 nomenclature system has resulted in similar terminology systems being developed for the UDP glucuronosyltransferases (Burchell et al. 1991) and flavin-containing mono-oxygenases (Lawton et al. 1994). Similar nomenclature systems based on divergent evolution are also under development for several other drug-metabolizing enzyme superfamilies (e.g., sulphotransferases, epoxide hydrolases and aldehyde dehydrogenases).
Recently, we divided the mammalian P450 gene superfamily into three groups (Nebert and McKinnon 1994): those involved principally with foreign chemical metabolism, those involved in the synthesis of various steroid hormones, and those participating in other important endogenous functions. It is the xenobiotic-metabolizing P450 enzymes that assume the most significance for prediction of toxicity.
P450 enzymes involved in the metabolism of foreign compounds and drugs are almost always found within families CYP1, CYP2, CYP3 and CYP4. These P450 enzymes catalyze a wide variety of metabolic reactions, with a single P450 often capable of metabolizing many different compounds. In addition, multiple P450 enzymes may metabolize a single compound at different sites. Also, a compound may be metabolized at the same, single site by several P450s, although at varying rates.
A most important property of the drug-metabolizing P450 enzymes is that many of these genes are inducible by the very substances which serve as their substrates. On the other hand, other P450 genes are induced by nonsubstrates. This phenomenon of enzyme induction underlies many drug-drug interactions of therapeutic importance.
Although present in many tissues, these particular P450 enzymes are found in relatively high levels in the liver, the primary site of drug metabolism. Some of the xenobiotic-metabolizing P450 enzymes exhibit activity toward certain endogenous substrates (e.g., arachidonic acid). However, it is generally believed that most of these xenobiotic-metabolizing P450 enzymes do not play important physiological roles, although this has not been established experimentally as yet. The selective homozygous disruption, or “knock-out,” of individual xenobiotic-metabolizing P450 genes by means of gene targeting methodologies in mice is likely to provide unequivocal information soon with regard to physiological roles of the xenobiotic-metabolizing P450s (for a review of gene targeting, see Capecchi 1994).
In contrast to P450 families encoding enzymes involved primarily in physiological processes, families encoding xenobiotic-metabolizing P450 enzymes display marked species specificity and frequently contain many active genes per subfamily (Nelson et al. 1993; Nebert et al. 1991).
Given the apparent lack of physiological substrates, it is possible that P450 enzymes in families CYP1, CYP2, CYP3 and CYP4, which have appeared in the past several hundred million years, have evolved as a means of detoxifying foreign chemicals encountered in the environment and diet. Clearly, evolution of the xenobiotic-metabolizing P450s would have occurred over a time period which far precedes the synthesis of most of the synthetic chemicals to which humans are now exposed. The genes in these four gene families may have evolved and diverged in animals due to their exposure to plant metabolites during the last 1.2 billion years, a process descriptively termed “animal-plant warfare” (Gonzalez and Nebert 1990). Animal-plant warfare is the phenomenon in which plants developed new chemicals (phytoalexins) as a defence mechanism in order to prevent ingestion by animals, and animals, in turn, responded by developing new P450 genes to accommodate the diversifying substrates. Providing further impetus to this proposal are the recently described examples of plant-insect and plant-fungus chemical warfare involving P450 detoxification of toxic substrates (Nebert 1994).
The following is a brief introduction to several of the human xenobiotic-metabolizing P450 enzyme polymorphisms in which genetic determinants of toxic response are believed to be of high significance. Until recently, P450 polymorphisms were generally suggested by unexpected variance in patient response to administered therapeutic agents. Several P450 polymorphisms are indeed named according to the drug with which the polymorphism was first identified. More recently, research efforts have focused on identification of the precise P450 enzymes involved in the metabolism of chemicals for which variance is observed and the precise characterization of the P450 genes involved. As described earlier, the measurable activity of a P450 enzyme towards a model chemical can be called the phenotype. The allelic constitution of a P450 gene in a given individual is termed the P450 genotype. As more and more scrutiny is applied to the analysis of P450 genes, the precise molecular basis of previously documented phenotypic variance is becoming clearer.
The CYP1A subfamily comprises two enzymes in humans and all other mammals: these are designated CYP1A1 and CYP1A2 under standard P450 nomenclature. These enzymes are of considerable interest, because they are involved in the metabolic activation of many procarcinogens and are also induced by several compounds of toxicological concern, including dioxin. For example, CYP1A1 metabolically activates many compounds found in cigarette smoke. CYP1A2 metabolically activates many arylamines (associated with urinary bladder cancer) found in the chemical dye industry. CYP1A2 also metabolically activates 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), a tobacco-derived nitrosamine. CYP1A1 and CYP1A2 are also found at higher levels in the lungs of cigarette smokers, due to induction by polycyclic hydrocarbons present in the smoke. The levels of CYP1A1 and CYP1A2 activity are therefore considered to be important determinants of individual response to many potentially toxic chemicals.
Toxicological interest in the CYP1A subfamily was greatly intensified by a 1973 report correlating the level of CYP1A1 inducibility in cigarette smokers with individual susceptibility to lung cancer (Kellermann, Shaw and Luyten-Kellermann 1973). The molecular basis of CYP1A1 and CYP1A2 induction has been a major focus of numerous laboratories. The induction process is mediated by a protein termed the Ah receptor to which dioxins and structurally related chemicals bind. The name Ah is derived from the aryl hydrocarbon nature of many CYP1A inducers. Interestingly, differences in the gene encoding the Ah receptor between strains of mice result in marked differences in chemical response and toxicity. A polymorphism in the Ah receptor gene also appears to occur in humans: approximately one-tenth of the population displays high induction of CYP1A1 and may be at greater risk than the other nine-tenths of the population for development of certain chemically induced cancers. The role of the Ah receptor in the control of enzymes in the CYP1A subfamily, and its role as a determinant of human response to chemical exposure, has been the subject of several recent reviews (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Are there other polymorphisms that might control the level of CYP1A proteins in a cell? A polymorphism in the CYP1A1 gene has also been identified, and this appears to influence lung cancer risk amongst Japanese cigarette smokers, although this same polymorphism does not appear to influence risk in other ethnic groups (Nebert and McKinnon 1994).
Variations in the rate at which individuals metabolize the anticonvulsant drug (S)-mephenytoin have been well documented for many years (Guengerich 1989). Between 2% and 5% of Caucasians and as many as 25% of Asians are deficient in this activity and may be at greater risk of toxicity from the drug. This enzyme defect has long been known to involve a member of the human CYP2C subfamily, but the precise molecular basis of this deficiency has been the subject of considerable controversy. The major reason for this difficulty was the presence of six or more genes in the human CYP2C subfamily. It was recently demonstrated, however, that a single-base mutation in the CYP2C19 gene is the primary cause of this deficiency (Goldstein and de Morais 1994). A simple DNA test, based on the polymerase chain reaction (PCR), has also been developed to identify this mutation rapidly in human populations (Goldstein and de Morais 1994).
Perhaps the most extensively characterized variation in a P450 gene is that involving the CYP2D6 gene. More than a dozen examples of mutations, rearrangements and deletions affecting this gene have been described (Meyer 1994). This polymorphism was first suggested 20 years ago by clinical variability in patients’ response to the antihypertensive agent debrisoquine. Alterations in the CYP2D6 gene giving rise to altered enzyme activity are therefore collectively termed the debrisoquine polymorphism.
Prior to the advent of DNA-based studies, individuals had been classified as poor or extensive metabolizers (PMs, EMs) of debrisoquine based on metabolite concentrations in urine samples. It is now clear that alterations in the CYP2D6 gene may result in individuals displaying not only poor or extensive debrisoquine metabolism, but also ultrarapid metabolism. Most alterations in the CYP2D6 gene are associated with partial or total deficiency of enzyme function; however, individuals in two families have recently been described who possess multiple functional copies of the CYP2D6 gene, giving rise to ultrarapid metabolism of CYP2D6 substrates (Meyer 1994). This remarkable observation provides new insights into the wide spectrum of CYP2D6 activity previously observed in population studies. Alterations in CYP2D6 function are of particular significance, given the more than 30 commonly prescribed drugs metabolized by this enzyme. An individual’s CYP2D6 function is therefore a major determinant of both therapeutic and toxic response to administered therapy. Indeed, it has recently been argued that consideration of a patient’s CYP2D6 status is necessary for the safe use of both psychiatric and cardiovascular drugs.
The role of the CYP2D6 polymorphism as a determinant of individual susceptibility to human diseases such as lung cancer and Parkinson’s disease has also been the subject of intense study (Nebert and McKinnon 1994; Meyer 1994). While conclusions are difficult to define given the diverse nature of the study protocols utilized, the majority of studies appear to indicate an association between extensive metabolizers of debrisoquine (EM phenotype) and lung cancer. The reasons for such an association are presently unclear. However, the CYP2D6 enzyme has been shown to metabolize NNK, a tobacco-derived nitrosamine.
As DNA-based assays improve, enabling even more accurate assessment of CYP2D6 status, it is anticipated that the precise relationship of CYP2D6 to disease risk will be clarified. Whereas the extensive metabolizer may be linked with susceptibility to lung cancer, the poor metabolizer (PM phenotype) appears to be associated with Parkinson’s disease of unknown cause. Although these studies are also difficult to compare, it appears that PM individuals having a diminished capacity to metabolize CYP2D6 substrates (e.g., debrisoquine) have a 2- to 2.5-fold increase in risk of developing Parkinson’s disease.
The CYP2E1 gene encodes an enzyme that metabolizes many chemicals, including drugs and many low-molecular-weight carcinogens. This enzyme is also of interest because it is highly inducible by alcohol and may play a role in liver injury induced by chemicals such as chloroform, vinyl chloride and carbon tetrachloride. The enzyme is primarily found in the liver, and the level of enzyme varies markedly between individuals. Close scrutiny of the CYP2E1 gene has resulted in the identification of several polymorphisms (Nebert and McKinnon 1994). A relationship has been reported between the presence of certain structural variations in the CYP2E1 gene and apparent lowered lung cancer risk in some studies; however, there are clear interethnic differences that require clarification before this possible relationship can be established.
In humans, four enzymes have been identified as members of the CYP3A subfamily due to their similarity in amino acid sequence. The CYP3A enzymes metabolize many commonly prescribed drugs such as erythromycin and cyclosporin. The carcinogenic food contaminant aflatoxin B1 is also a CYP3A substrate. One member of the human CYP3A subfamily, designated CYP3A4, is the principal P450 in human liver as well as being present in the gastrointestinal tract. As is true for many other P450 enzymes, the level of CYP3A4 is highly variable between individuals. A second enzyme, designated CYP3A5, is found in only approximately 25% of livers; the genetic basis of this finding has not been elucidated. The importance of CYP3A4 or CYP3A5 variability as a factor in genetic determinants of toxic response has not yet been established (Nebert and McKinnon 1994).
Numerous polymorphisms also exist within other xenobiotic-metabolizing enzyme superfamilies (e.g., glutathione transferases, UDP glucuronosyltransferases, para-oxonases, dehydrogenases, N-acetyltransferases and flavin-containing mono-oxygenases). Because the ultimate toxicity of any P450-generated intermediate is dependent on the efficiency of subsequent Phase II detoxification reactions, the combined role of multiple enzyme polymorphisms is important in determining susceptibility to chemically induced diseases. The metabolic balance between Phase I and Phase II reactions (figure 33.4) is therefore likely to be a major factor in chemically induced human diseases and genetic determinants of toxic response.
A well-studied example of a polymorphism in a Phase II enzyme is that involving a member of the glutathione S-transferase enzyme superfamily, designated GST mu or GSTM1. This particular enzyme is of considerable toxicological interest because it appears to be involved in the subsequent detoxification of toxic metabolites produced from chemicals in cigarette smoke by the CYP1A1 enzyme. The identified polymorphism in this glutathione transferase gene involves a total absence of functional enzyme in as many as half of all Caucasians studied. This lack of a Phase II enzyme appears to be associated with increased susceptibility to lung cancer. By grouping individuals on the basis of both variant CYP1A1 genes and the deletion or presence of a functional GSTM1 gene, it has been demonstrated that the risk of developing smoking-induced lung cancer varies significantly (Kawajiri, Watanabe and Hayashi 1994). In particular, individuals displaying one rare CYP1A1 gene alteration, in combination with an absence of the GSTM1 gene, were at higher risk (as much as ninefold) of developing lung cancer when exposed to a relatively low level of cigarette smoke. Interestingly, there appear to be interethnic differences in the significance of variant genes which necessitate further study in order to elucidate the precise role of such alterations in susceptibility to disease (Kalow 1962; Nebert and McKinnon 1994; Kawajiri, Watanabe and Hayashi 1994).
A toxic response to an environmental agent may be greatly exaggerated by the combination of two pharmacogenetic defects in the same individual, for example, the combined effects of the N-acetyltransferase (NAT2) polymorphism and the glucose-6-phosphate dehydrogenase (G6PD) polymorphism.
Occupational exposure to arylamines constitutes a grave risk of urinary bladder cancer. Since the elegant studies of Cartwright in 1954, it has become clear that the N-acetylator status is a determinant of azo-dye-induced bladder cancer. There is a highly significant correlation between the slow-acetylator phenotype and the occurrence of bladder cancer, as well as the degree of invasiveness of this cancer in the bladder wall. By contrast, there is a significant association between the rapid-acetylator phenotype and the incidence of colorectal carcinoma. The N-acetyltransferase (NAT1, NAT2) genes have been cloned and sequenced, and DNA-based assays are now able to detect more than a dozen allelic variants that account for the slow-acetylator phenotype. The NAT2 gene is polymorphic and responsible for most of the variability in toxic response to environmental chemicals (Weber 1987; Grant 1993).
Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme critical in the generation and maintenance of NADPH. Low or absent G6PD activity can lead to severe drug- or xenobiotic-induced haemolysis, due to the absence of normal levels of reduced glutathione (GSH) in the red blood cell. G6PD deficiency affects at least 300 million people worldwide. More than 10% of African-American males exhibit the less severe phenotype, while certain Sardinian communities exhibit the more severe “Mediterranean type” at frequencies as high as one in every three persons. The G6PD gene has been cloned and localized to the X chromosome, and numerous diverse point mutations account for the large degree of phenotypic heterogeneity seen in G6PD-deficient individuals (Beutler 1992).
Thiozalsulphone, an arylamine sulpha drug, was found to cause a bimodal distribution of haemolytic anaemia in the treated population. When treated with certain drugs, individuals with the combination of G6PD deficiency plus the slow-acetylator phenotype are more affected than those with the G6PD deficiency alone or the slow-acetylator phenotype alone. G6PD-deficient slow acetylators are at least 40 times more susceptible than normal-G6PD rapid acetylators to thiozalsulphone-induced haemolysis.
Exposure assessment and biomonitoring (figure 33.2) also requires information on the genetic make-up of each individual. Given identical exposure to a hazardous chemical, the level of haemoglobin adducts (or other biomarkers) might vary by two or three orders of magnitude among individuals, depending upon each person’s metabolic fingerprint.
The same combined pharmacogenetics has been studied in chemical factory workers in Germany (table 33.3). Haemoglobin adducts among workers exposed to aniline and acetanilide are by far the highest in G6PD-deficient slow acetylators, as compared with the other possible combined pharmacogenetic phenotypes. This study has important implications for exposure assessment. These data demonstrate that, although two individuals might be exposed to the same ambient level of hazardous chemical in the work place, the amount of exposure (via biomarkers such as haemoglobin adducts) might be estimated to be two or more orders of magnitude less, due to the underlying genetic predisposition of the individual. Likewise, the resulting risk of an adverse health effect may vary by two or more orders of magnitude.
Source: Adapted from Lewalter and Korallus 1985.
It should be emphasized that the same case made here for metabolism can also be made for binding. Heritable differences in the binding of environmental agents will greatly affect the toxic response. For example, differences in the mouse cdm gene can profoundly affect individual sensitivity to cadmium-induced testicular necrosis (Taylor, Heiniger and Meier 1973). Differences in the binding affinity of the Ah receptor are likely to affect dioxin-induced toxicity and cancer (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Figure 33.6 summarizes the role of metabolism and binding in toxicity and cancer. Toxic agents, as they exist in the environment or following metabolism or binding, elicit their effects by either a genotoxic pathway (in which damage to DNA occurs) or a non-genotoxic pathway (in which DNA damage and mutagenesis need not occur). Interestingly, it has recently become clear that “classical” DNA-damaging agents can operate via a reduced glutathione (GSH)-dependent nongenotoxic signal transduction pathway, which is initiated on or near the cell surface in the absence of DNA and outside the cell nucleus (Devary et al. 1993). Genetic differences in metabolism and binding remain, however, as the major determinants in controlling different individual toxic responses.
Genetically based variation in drug-metabolizing enzyme function is of major importance in determining individual response to chemicals. These enzymes are pivotal in determining the fate and time course of a foreign chemical following exposure.
As illustrated in figure 33.6, the importance of drug-metabolizing enzymes in individual susceptibility to chemical exposure may in fact present a far more complex issue than is evident from this simple discussion of xenobiotic metabolism. In other words, during the past two decades, genotoxic mechanisms (measurements of DNA adducts and protein adducts) have been greatly emphasized. However, what if nongenotoxic mechanisms are at least as important as genotoxic mechanisms in causing toxic responses?
As mentioned earlier, the physiological roles of many drug-metabolizing enzymes involved in xenobiotic metabolism have not been accurately defined. Nebert (1994) has proposed that, because of their presence on this planet for more than 3.5 billion years, drug-metabolizing enzymes were originally (and are now still primarily) responsible for regulating the cellular levels of many nonpeptide ligands important in the transcriptional activation of genes affecting growth, differentiation, apoptosis, homeostasis and neuroendocrine functions. Furthermore, the toxicity of most, if not all, environmental agents occurs by means of agonist or antagonist action on these signal transduction pathways (Nebert 1994). Based on this hypothesis, genetic variability in drug-metabolizing enzymes may have quite dramatic effects on many critical biochemical processes within the cell, thereby leading to important differences in toxic response. It is indeed possible that such a scenario may also underlie many idiosyncratic adverse reactions encountered in patients using commonly prescribed drugs.
The past decade has seen remarkable progress in our understanding of the genetic basis of differential response to chemicals in drugs, foods and environmental pollutants. Drug-metabolizing enzymes have a profound influence on the way humans respond to chemicals. As our awareness of drug-metabolizing enzyme multiplicity continues to evolve, we are increasingly able to make improved assessments of toxic risk for many drugs and environmental chemicals. This is perhaps most clearly illustrated in the case of the CYP2D6 cytochrome P450 enzyme. Using relatively simple DNA-based tests, it is possible to predict the likely response to any drug predominantly metabolized by this enzyme; this prediction will ensure the safer use of valuable, yet potentially toxic, medication.
The future will no doubt see an explosion in the identification of further polymorphisms (phenotypes) involving drug-metabolizing enzymes. This information will be accompanied by improved, minimally invasive DNA-based tests to identify genotypes in human populations.
Such studies should be particularly informative in evaluating the role of chemicals in the many environmental diseases of presently unknown origin. The consideration of multiple drug-metabolizing enzyme polymorphisms, in combination (e.g., table 33.3), is also likely to represent a particularly fertile research area. Such studies will clarify the role of chemicals in the causation of cancers. Collectively, this information should enable the formulation of increasingly individualized advice on avoidance of chemicals likely to be of individual concern. This is the field of preventive toxicology. Such advice will no doubt greatly assist all individuals in coping with the ever increasing chemical burden to which we are exposed.
Mechanistic toxicology is the study of how chemical or physical agents interact with living organisms to cause toxicity. Knowledge of the mechanism of toxicity of a substance enhances the ability to prevent toxicity and design more desirable chemicals; it constitutes the basis for therapy upon overexposure, and frequently enables a further understanding of fundamental biological processes. For purposes of this Encyclopaedia, the emphasis will be placed on the use of animal studies to predict human toxicity. Different areas of toxicology include mechanistic, descriptive, regulatory, forensic and environmental toxicology (Klaassen, Amdur and Doull 1991). All of these benefit from understanding the fundamental mechanisms of toxicity.
Understanding the mechanism by which a substance causes toxicity enhances different areas of toxicology in different ways. Mechanistic understanding helps the governmental regulator to establish legally binding safe limits for human exposure. It helps toxicologists in recommending courses of action regarding clean-up or remediation of contaminated sites and, along with physical and chemical properties of the substance or mixture, can be used to select the degree of protective equipment required. Mechanistic knowledge is also useful in forming the basis for therapy and the design of new drugs for treatment of human disease. For the forensic toxicologist the mechanism of toxicity often provides insight as to how a chemical or physical agent can cause death or incapacitation.
If the mechanism of toxicity is understood, descriptive toxicology becomes useful in predicting the toxic effects of related chemicals. It is important to understand, however, that a lack of mechanistic information does not deter health professionals from protecting human health. Prudent decisions based on animal studies and human experience are used to establish safe exposure levels. Traditionally, a margin of safety was established by using the “no adverse effect level” or a “lowest adverse effect level” from animal studies (using repeated-exposure designs) and dividing that level by a factor of 100 for occupational exposure or 1,000 for other human environmental exposure. The success of this process is evident from the few incidents of adverse health effects attributed to chemical exposure in workers where appropriate exposure limits had been set and adhered to in the past. In addition, the human lifespan continues to increase, as does the quality of life. Overall the use of toxicity data has led to effective regulatory and voluntary control. Detailed knowledge of toxic mechanisms will enhance the predictability of newer risk models currently being developed and will result in continuous improvement.
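The traditional safety-factor arithmetic described above can be written out as a short calculation; the function name and the no-adverse-effect level used below are hypothetical illustrations, not data from the source:

```python
def safe_exposure_level(no_adverse_effect_level, occupational=True):
    """Divide the animal-study no-adverse-effect level by the
    traditional uncertainty factor: 100 for occupational exposure,
    1,000 for other human environmental exposure."""
    factor = 100 if occupational else 1000
    return no_adverse_effect_level / factor

# Hypothetical no-adverse-effect level of 50 mg/kg/day from a
# repeated-exposure animal study:
print(safe_exposure_level(50.0))                      # occupational limit
print(safe_exposure_level(50.0, occupational=False))  # environmental limit
```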
Understanding environmental mechanisms is complex and presumes a knowledge of ecosystem disruption and homeostasis (balance). While not discussed in this article, an enhanced understanding of toxic mechanisms and their ultimate consequences in an ecosystem would help scientists to make prudent decisions regarding the handling of municipal and industrial waste material. Waste management is a growing area of research and will continue to be very important in the future.
The majority of mechanistic studies start with a descriptive toxicological study in animals or clinical observations in humans. Ideally, animal studies include careful behavioural and clinical observations, careful biochemical examination of elements of the blood and urine for signs of adverse function of major biological systems in the body, and a post-mortem evaluation of all organ systems by microscopic examination to check for injury (see OECD test guidelines; EC directives on chemical evaluation; US EPA test rules; Japan chemicals regulations). This is analogous to a thorough human physical examination that would take place in a hospital over a two- to three-day time period except for the post-mortem examination.
Understanding mechanisms of toxicity is the art and science of observation, creativity in the selection of techniques to test various hypotheses, and innovative integration of signs and symptoms into a causal relationship. Mechanistic studies start with exposure, follow the time-related distribution and fate in the body (pharmacokinetics), and measure the resulting toxic effect at some level of the system and at some dose level. Different substances can act at different levels of the biological system in causing toxicity.
The route of exposure in mechanistic studies is usually the same as for human exposure. Route is important because there can be effects that occur locally at the site of exposure in addition to systemic effects after the chemical has been absorbed into the blood and distributed throughout the body. A simple yet cogent example of a local effect would be irritation and eventual corrosion of the skin following application of strong acid or alkaline solutions designed for cleaning hard surfaces. Similarly, irritation and cellular death can occur in cells lining the nose and/or lungs following exposure to irritant vapours or gases such as oxides of nitrogen or ozone. (Both are constituents of air pollution, or smog.) Following absorption of a chemical into blood through the skin, lungs or gastrointestinal tract, the concentration in any organ or tissue is controlled by many factors which determine the pharmacokinetics of the chemical in the body. The body has the ability to activate as well as detoxify various chemicals as noted below.
Pharmacokinetics describes the time relationships for chemical absorption, distribution, metabolism (biochemical alterations in the body) and elimination or excretion from the body. Relative to mechanisms of toxicity, these pharmacokinetic variables can be very important and in some instances determine whether toxicity will or will not occur. For instance, if a material is not absorbed in a sufficient amount, systemic toxicity (inside the body) will not occur. Conversely, a highly reactive chemical that is detoxified quickly (in seconds or minutes) by digestive or liver enzymes may not have time to cause toxicity. Some polycyclic halogenated substances and mixtures, as well as certain metals like lead, would cause little toxicity if excretion were rapid; because their excretion is slow (sometimes measured in years), it is accumulation to sufficiently high levels that determines their toxicity. Fortunately, most chemicals do not have such long retention in the body, and accumulation of an innocuous material still would not induce toxicity. The rate of elimination from the body and detoxification is frequently expressed as the half-life of the chemical, which is the time for 50% of the chemical to be excreted or altered to a non-toxic form.
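The half-life concept translates directly into arithmetic: under simple first-order elimination (an assumption that not all chemicals satisfy), the fraction of a chemical remaining in the body halves with every half-life that passes. A minimal sketch:

```python
def fraction_remaining(time_elapsed, half_life):
    """First-order elimination: remaining fraction = 0.5 ** (t / t_half).
    Both arguments must be in the same time unit (hours, days, years...)."""
    return 0.5 ** (time_elapsed / half_life)

# A solvent with a 6-hour half-life is largely gone within a day...
fraction_remaining(24, 6)    # 0.0625 (about 6% left)
# ...whereas a substance with a 5-year half-life barely declines in a year,
# which is why slowly excreted substances like lead can accumulate.
fraction_remaining(1, 5)     # about 0.87
```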
However, if a chemical accumulates in a particular cell or organ, that may signal a reason to further examine its potential toxicity in that organ. More recently, mathematical models have been developed to extrapolate pharmacokinetic variables from animals to humans. These pharmacokinetic models are extremely useful in generating hypotheses and testing whether the experimental animal may be a good representation for humans. Numerous chapters and texts have been written on this subject (Gehring et al. 1976; Reitz et al. 1987; Nolan et al. 1995). A simplified example of a physiological model is depicted in figure 33.7 .
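Physiologically based pharmacokinetic models like the one in figure 33.7 can be far more elaborate, but their core idea can be sketched as a one-compartment simulation: the chemical moves from the absorption site into blood at one rate and is eliminated at another. All rate constants below are invented for illustration only.

```python
def simulate_peak_blood_level(dose, k_absorb, k_eliminate, dt=0.01, hours=24.0):
    """Crude Euler integration of a one-compartment model:
    gut --k_absorb--> blood --k_eliminate--> excreted.
    Returns the peak amount reached in blood (same units as dose)."""
    gut, blood, peak = dose, 0.0, 0.0
    for _ in range(int(hours / dt)):
        absorbed = k_absorb * gut * dt
        eliminated = k_eliminate * blood * dt
        gut -= absorbed
        blood += absorbed - eliminated
        peak = max(peak, blood)
    return peak

# Rapid elimination keeps the internal dose low; slow elimination lets
# the chemical build up toward the full absorbed dose.
fast = simulate_peak_blood_level(100.0, k_absorb=1.0, k_eliminate=5.0)
slow = simulate_peak_blood_level(100.0, k_absorb=1.0, k_eliminate=0.05)
```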
Toxicity can be described at different biological levels. Injury can be evaluated in the whole person (or animal), the organ system, the cell or the molecule. Organ systems include the immune, respiratory, cardiovascular, renal, endocrine, digestive, musculoskeletal, blood, reproductive and central nervous systems. Some key organs include the liver, kidney, lung, brain, skin, eyes, heart, testes or ovaries, and other major organs. At the cellular/biochemical level, adverse effects include interference with normal protein function, endocrine receptor function, metabolic energy inhibition, or xenobiotic (foreign substance) enzyme inhibition or induction. Adverse effects at the molecular level include alteration of the normal function of DNA-RNA transcription, of specific cytoplasmic and nuclear receptor binding, and of genes or gene products.
Ultimately, dysfunction in a major organ system is likely caused by a molecular alteration in a particular target cell within that organ. However, it is not always possible to trace a mechanism back to a molecular origin of causation, nor is it necessary. Intervention and therapy can be designed without a complete understanding of the molecular target. However, knowledge about the specific mechanism of toxicity increases the predictive value and accuracy of extrapolation to other chemicals. Figure 33.8 is a diagrammatic representation of the various levels where interference of normal physiological processes can be detected. The arrows indicate that the consequences to an individual can be determined from top down (exposure, pharmacokinetics to system/organ toxicity) or from bottom up (molecular change, cellular/biochemical effect to system/organ toxicity).
Mechanisms of toxicity can be straightforward or very complex. Frequently, the type of toxicity, the mechanism of toxicity and the level of effect differ according to whether the adverse effects are due to a single, acute high dose (as in an accidental poisoning) or to lower-dose repeated exposure (from occupational or environmental exposure). Classically, for testing purposes, an acute, single high dose is given by direct intubation into the stomach of a rodent, or by exposure to an atmosphere of a gas or vapour for two to four hours, whichever best resembles the human exposure. The animals are observed over a two-week period following exposure and then the major external and internal organs are examined for injury. Repeated-dose testing ranges from months to years. For rodent species, two years is considered a chronic (lifetime) study sufficient to evaluate toxicity and carcinogenicity, whereas for non-human primates, two years would be considered a subchronic (less than lifetime) study to evaluate repeated-dose toxicity. Following exposure, a complete examination of all tissues, organs and fluids is conducted to determine any adverse effects.
The following examples are specific to high-dose, acute effects which can lead to death or severe incapacitation. However, in some cases, intervention will result in transient and fully reversible effects. The dose or severity of exposure will determine the result.
Simple asphyxiants. The mechanism of toxicity for inert gases and some other non-reactive substances is lack of oxygen (anoxia). These chemicals, which cause deprivation of oxygen to the central nervous system (CNS), are termed simple asphyxiants. If a person enters a closed space that contains nitrogen without sufficient oxygen, immediate oxygen depletion occurs in the brain and leads to unconsciousness and eventual death if the person is not rapidly removed. In extreme cases (near zero oxygen) unconsciousness can occur in a few seconds. Rescue depends on rapid removal to an oxygenated environment. Survival with irreversible brain damage can occur from delayed rescue, due to the death of neurons, which cannot regenerate.
Chemical asphyxiants. Carbon monoxide (CO) competes with oxygen for binding to haemoglobin (in red blood cells) and therefore deprives tissues of oxygen for energy metabolism; cellular death can result. Intervention includes removal from the source of CO and treatment with oxygen. The direct use of oxygen is based on the toxic action of CO. Another potent chemical asphyxiant is cyanide. The cyanide ion interferes with cellular metabolism and utilization of oxygen for energy. Treatment with sodium nitrite causes a change in haemoglobin in red blood cells to methaemoglobin. Methaemoglobin has a greater binding affinity to the cyanide ion than does the cellular target of cyanide. Consequently, the methaemoglobin binds the cyanide and keeps the cyanide away from the target cells. This forms the basis for antidotal therapy.
Central nervous system (CNS) depressants. Acute toxicity is characterized by sedation or unconsciousness for a number of materials like solvents which are not reactive or which are transformed to reactive intermediates. It is hypothesized that sedation/anaesthesia is due to an interaction of the solvent with the membranes of cells in the CNS, which impairs their ability to transmit electrical and chemical signals. While sedation may seem a mild form of toxicity and was the basis for development of the early anaesthetics, “the dose still makes the poison”. If sufficient dose is administered by ingestion or inhalation the animal can die due to respiratory arrest. If anaesthetic death does not occur, this type of toxicity is usually readily reversible when the subject is removed from the environment or the chemical is redistributed or eliminated from the body.
Skin effects. Adverse effects to the skin can range from irritation to corrosion, depending on the substance encountered. Strong acids and alkaline solutions are incompatible with living tissue and are corrosive, causing chemical burns and possible scarring. Scarring is due to death of the dermal, deep skin cells responsible for regeneration. Lower concentrations may just cause irritation of the first layer of skin.
Another specific toxic mechanism of skin is that of chemical sensitization. As an example, sensitization occurs when 2,4-dinitrochlorobenzene binds with natural proteins in the skin and the immune system recognizes the altered protein-bound complex as a foreign material. In responding to this foreign material, the immune system activates special cells to eliminate the foreign substance by release of mediators (cytokines) which cause a rash or dermatitis (see “Immunotoxicology”). This is the same reaction of the immune system when exposure to poison ivy occurs. Immune sensitization is very specific to the particular chemical and takes at least two exposures before a response is elicited. The first exposure sensitizes (sets up the cells to recognize the chemical), and subsequent exposures trigger the immune system response. Removal from contact and symptomatic therapy with steroid-containing anti-inflammatory creams are usually effective in treating sensitized individuals. In serious or refractory cases a systemically acting immunosuppressant like prednisone is used in conjunction with topical treatment.
Lung sensitization. An immune sensitization response is elicited by toluene diisocyanate (TDI), but the target site is the lungs. TDI over-exposure in susceptible individuals causes lung oedema (fluid build-up), bronchial constriction and impaired breathing. This is a serious condition and requires removing the individual from potential subsequent exposures. Treatment is primarily symptomatic. Skin and lung sensitization follow a dose-response relationship: exceeding the level set for occupational exposure can cause adverse effects.
Eye effects. Injury to the eye ranges from reddening of the outer layer (swimming-pool redness) to opacity of the cornea and cataract formation in the lens to damage to the iris (the coloured part of the eye). Eye irritation tests are conducted only when it is believed serious injury will not occur. Many of the mechanisms causing skin corrosion can also cause injury to the eyes. Materials corrosive to the skin, like strong acids (pH less than 2) and alkalis (pH greater than 11.5), are not tested in the eyes of animals because most will cause corrosion and blindness through a mechanism similar to that which causes skin corrosion. In addition, surface-active agents like detergents and surfactants can cause eye injury ranging from irritation to corrosion. A group of materials that requires caution is the positively charged (cationic) surfactants, which can cause burns, permanent opacity of the cornea and vascularization (formation of blood vessels). Another chemical, dinitrophenol, has a specific effect of cataract formation. This appears to be related to concentration of the chemical in the eye, an example of pharmacokinetic distributional specificity.
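The pH cut-offs mentioned above lend themselves to a simple screening rule. The function below is a hypothetical sketch of that logic only; real testing waivers also weigh buffering capacity and other existing evidence.

```python
def presumed_corrosive(ph):
    """Strong acids (pH < 2) and strong alkalis (pH > 11.5) are presumed
    corrosive, so eye testing in animals is not performed for them."""
    return ph < 2.0 or ph > 11.5

presumed_corrosive(1.0)    # True  - strong acid
presumed_corrosive(7.0)    # False - neutral
presumed_corrosive(12.5)   # True  - strong alkali
```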
While the listing above is far from exhaustive, it is designed to give the reader an appreciation for various acute toxicity mechanisms.
When given as a single high dose, some chemicals do not have the same mechanism of toxicity as when given repeatedly as a lower but still toxic dose. When a single high dose is given, there is always the possibility of exceeding the person’s ability to detoxify or excrete the chemical, and this can lead to a different toxic response than when lower repetitive doses are given. Alcohol is a good example. High doses of alcohol lead to primary central nervous system effects, while lower repetitive doses result in liver injury.
Acetylcholinesterase inhibition. Most organophosphate pesticides, for example, have little mammalian toxicity until they are metabolically activated, primarily in the liver. The primary mechanism of action of organophosphates is the inhibition of acetylcholinesterase (AChE) in the brain and peripheral nervous system. AChE is the normal enzyme that terminates the stimulation of the neurotransmitter acetylcholine. Slight inhibition of AChE over an extended period has not been associated with adverse effects. At high levels of exposure, inability to terminate this neuronal stimulation results in overstimulation of the cholinergic nervous system. Cholinergic overstimulation ultimately results in a host of symptoms, including respiratory arrest, followed by death if not treated. The primary treatment is the administration of atropine, which blocks the effects of acetylcholine, and the administration of pralidoxime chloride, which reactivates the inhibited AChE. Therefore, both the cause and the treatment of organophosphate toxicity are addressed by understanding the biochemical basis of toxicity.
Metabolic activation. Many chemicals, including carbon tetrachloride, chloroform, acetylaminofluorene, nitrosamines, and paraquat are metabolically activated to free radicals or other reactive intermediates which inhibit and interfere with normal cellular function. At high levels of exposure this results in cell death (see “Cellular injury and cellular death”). While the specific interactions and cellular targets remain unknown, the organ systems which have the capability to activate these chemicals, like the liver, kidney and lung, are all potential targets for injury. Specifically, particular cells within an organ have a greater or lesser capacity to activate or detoxify these intermediates, and this capacity determines the intracellular susceptibility within an organ. Metabolism is one reason why an understanding of pharmacokinetics, which describes these types of transformations and the distribution and elimination of these intermediates, is important in recognizing the mechanism of action of these chemicals.
Cancer mechanisms. Cancer is a multiplicity of diseases, and while the understanding of certain types of cancer is increasing rapidly due to the many molecular biological techniques that have been developed since 1980, there is still much to learn. However, it is clear that cancer development is a multi-stage process, and critical genes are key to different types of cancer. Alterations in DNA (somatic mutations) in a number of these critical genes can cause increased susceptibility or cancerous lesions (see “Genetic toxicology”). Exposures to natural chemicals (in cooked foods like beef and fish), to synthetic chemicals (like benzidine, used as a dye) and to physical agents (ultraviolet light from the sun, radon from soil, gamma radiation from medical procedures or industrial activity) all contribute to somatic gene mutations. However, there are natural and synthetic substances (such as anti-oxidants) and DNA repair processes which are protective and maintain homeostasis. It is clear that genetics is an important factor in cancer, since genetic disease syndromes such as xeroderma pigmentosum, where there is a lack of normal DNA repair, dramatically increase susceptibility to skin cancer from exposure to ultraviolet light from the sun.
Reproductive mechanisms. As with cancer, many mechanisms of reproductive and/or developmental toxicity are known, but much remains to be learned. It is known that certain viruses (such as rubella), bacterial infections and drugs (such as thalidomide and vitamin A) will adversely affect development. Recently, work by Khera (1991), reviewed by Carney (1994), has shown good evidence that the abnormal developmental effects seen in animal tests with ethylene glycol are attributable to acidic metabolites produced by maternal metabolism. This occurs when ethylene glycol is metabolized to acid metabolites, including glycolic and oxalic acid. The subsequent effects on the placenta and foetus appear to be due to this metabolic toxication process.
The intent of this article is to give a perspective on several known mechanisms of toxicity and the need for future study. It is important to understand that mechanistic knowledge is not absolutely necessary to protect human or environmental health. This knowledge will enhance the professional’s ability to better predict and manage toxicity. The actual techniques used in elucidating any particular mechanism depend upon the collective knowledge of the scientists and the thinking of those who make decisions regarding human health.
Virtually all of medicine is devoted to either preventing cell death, in diseases such as myocardial infarction, stroke, trauma and shock, or causing it, as in the case of infectious diseases and cancer. It is, therefore, essential to understand the nature and mechanisms involved. Cell death has been classified as “accidental”, that is, caused by toxic agents, ischaemia and so on, or “programmed”, as occurs during embryological development, including formation of digits, and resorption of the tadpole tail.
Cell injury and cell death are, therefore, important both in physiology and in pathophysiology. Physiological cell death is extremely important during embryogenesis and embryonic development. The study of cell death during development has led to important and new information on the molecular genetics involved, especially through the study of development in invertebrate animals. In these animals, the precise location and the significance of cells that are destined to undergo cell death have been carefully studied and, with the use of classic mutagenesis techniques, several involved genes have now been identified. In adult organs, the balance between cell death and cell proliferation controls organ size. In some organs, such as the skin and the intestine, there is a continual turnover of cells. In the skin, for example, cells differentiate as they reach the surface, and finally undergo terminal differentiation and cell death as keratinization proceeds with the formation of crosslinked envelopes.
Many classes of toxic chemicals are capable of inducing acute cell injury followed by death. These include anoxia and ischaemia and their chemical analogues such as potassium cyanide; chemical carcinogens, which form electrophiles that covalently bind to proteins and nucleic acids; oxidant chemicals, resulting in free radical formation and oxidant injury; activation of complement; and a variety of calcium ionophores. Cell death is also an important component of chemical carcinogenesis; many complete chemical carcinogens, at carcinogenic doses, produce acute necrosis and inflammation followed by regeneration and preneoplasia.
Cell injury is defined as an event or stimulus, such as a toxic chemical, that perturbs the normal homeostasis of the cell, thus causing a number of events to occur (figure 33.9). The principal targets of lethal injury illustrated are inhibition of ATP synthesis, disruption of plasma membrane integrity or withdrawal of essential growth factors.
Lethal injuries result in the death of a cell after a variable period of time, depending on temperature, cell type and the stimulus; or they can be sublethal or chronic; that is, the injury results in an altered homeostatic state which, though abnormal, does not result in cell death (Trump and Arstila 1971; Trump and Berezesky 1992; Trump and Berezesky 1995; Trump, Berezesky and Osornio-Vargas 1981). In the case of a lethal injury, there is a phase prior to the time of cell death called the “prelethal phase”. If the injurious stimulus, such as anoxia, can be removed during this time, the cell will recover; however, after a particular point in time (the “point of no return” or point of cell death), the removal of the injury does not result in recovery but instead the cell undergoes degradation and hydrolysis, ultimately reaching physical-chemical equilibrium with the environment. This is the phase known as necrosis. During the prelethal phase, several principal types of change occur, depending on the cell and the type of injury. These are known as apoptosis and oncosis.
Apoptosis is derived from the Greek words apo, meaning away from, and ptosis, meaning to fall. The term, meaning a “falling away from”, derives from the fact that, during this type of prelethal change, the cells shrink and undergo marked blebbing at the periphery. The blebs then detach and float away. Apoptosis occurs in a variety of cell types following various types of toxic injury (Wyllie, Kerr and Currie 1980). It is especially prominent in lymphocytes, where it is the predominant mechanism for turnover of lymphocyte clones. The fragments give rise to the basophilic bodies seen within macrophages in lymph nodes. In other organs, apoptosis typically occurs in single cells which are rapidly cleared away before and following death by phagocytosis of the fragments by adjacent parenchymal cells or by macrophages. Apoptosis occurring in single cells with subsequent phagocytosis typically does not result in inflammation. Prior to death, apoptotic cells show a very dense cytosol with normal or condensed mitochondria. The endoplasmic reticulum (ER) is normal or only slightly dilated. The nuclear chromatin is markedly clumped along the nuclear envelope and around the nucleolus. The nuclear contour is also irregular and nuclear fragmentation occurs. The chromatin condensation is associated with DNA fragmentation which, in many instances, occurs between nucleosomes, giving a characteristic ladder appearance on electrophoresis.
In apoptosis, increased [Ca2+]i may stimulate K+ efflux resulting in cell shrinkage, which probably requires ATP; injuries that totally inhibit ATP synthesis are therefore more likely to result in oncosis. A sustained increase of [Ca2+]i has a number of deleterious effects including activation of proteases, endonucleases and phospholipases. Endonuclease activation results in single and double DNA strand breaks which, in turn, stimulate increased levels of p53 and poly-ADP ribosylation of nuclear proteins that are essential in DNA repair. Activation of proteases modifies a number of substrates including actin and related proteins, leading to bleb formation. Another important substrate is poly(ADP-ribose) polymerase (PARP), whose cleavage inhibits DNA repair. Increased [Ca2+]i is also associated with activation of a number of protein kinases, such as MAP kinase, calmodulin kinase and others. Such kinases are involved in activation of transcription factors which initiate transcription of immediate-early genes, for example, c-fos, c-jun and c-myc, and in activation of phospholipase A2, which results in permeabilization of the plasma membrane and of intracellular membranes such as the inner membrane of mitochondria.
Oncosis, derived from the Greek word onkos, meaning swelling, is so named because in this type of prelethal change the cell begins to swell almost immediately following the injury (Majno and Joris 1995). The reason for the swelling is an increase in cations in the water within the cell. The principal cation responsible is sodium, which is normally regulated to maintain cell volume. In the absence of ATP, however, or if the Na-ATPase of the plasmalemma is inhibited, volume control is lost; because of the osmotic pressure exerted by intracellular protein, sodium and water continue to enter the cell. Among the early events in oncosis are, therefore, increased [Na+]i, which leads to cellular swelling, and increased [Ca2+]i, resulting either from influx from the extracellular space or release from intracellular stores. This results in swelling of the cytosol, swelling of the endoplasmic reticulum and Golgi apparatus, and the formation of watery blebs around the cell surface. The mitochondria initially undergo condensation, but later they too show high-amplitude swelling because of damage to the inner mitochondrial membrane. In this type of prelethal change, the chromatin undergoes condensation and ultimately degradation; however, the characteristic ladder pattern of apoptosis is not seen.
Necrosis refers to the series of changes that occur following cell death when the cell is converted to debris which is typically removed by the inflammatory response. Two types can be distinguished: oncotic necrosis and apoptotic necrosis. Oncotic necrosis typically occurs in large zones, for example, in a myocardial infarct or regionally in an organ after chemical toxicity, such as the renal proximal tubule following administration of HgCl2. Broad zones of an organ are involved and the necrotic cells rapidly incite an inflammatory reaction, first acute and then chronic. In the event that the organism survives, in many organs necrosis is followed by clearing away of the dead cells and regeneration, for example, in the liver or kidney following chemical toxicity. In contrast, apoptotic necrosis typically occurs on a single-cell basis, and the necrotic debris is degraded within phagocytes, either macrophages or adjacent parenchymal cells. The earliest characteristics of necrotic cells include interruptions in plasma membrane continuity and the appearance of flocculent densities, representing denatured proteins, within the mitochondrial matrix. In some forms of injury that do not initially interfere with mitochondrial calcium accumulation, calcium phosphate deposits can be seen within the mitochondria. Other membrane systems fragment similarly, such as the ER, the lysosomes and the Golgi apparatus. Ultimately, the nuclear chromatin undergoes lysis, resulting from attack by lysosomal hydrolases. Following cell death, lysosomal hydrolases such as cathepsins, nucleases and lipases play an important part in clearing away debris, since these have an acid pH optimum and can survive the low pH of necrotic cells while other cellular enzymes are denatured and inactivated.
In the case of lethal injuries, the most common initial interactions leading to cell death are interference with energy metabolism, whether by anoxia, ischaemia, inhibitors of respiration such as potassium cyanide or carbon monoxide, or inhibitors of glycolysis such as iodoacetate. As mentioned above, high doses of compounds that inhibit energy metabolism typically result in oncosis. The other common type of initial injury resulting in acute cell death is modification of the function of the plasma membrane (Trump and Arstila 1971; Trump, Berezesky and Osornio-Vargas 1981). This can be direct damage and permeabilization, as in the case of trauma or activation of the C5b-C9 complex of complement; mechanical damage to the cell membrane; or inhibition of the sodium-potassium (Na+-K+) pump with glycosides such as ouabain. Calcium ionophores such as ionomycin or A23187, which rapidly carry [Ca2+] down the gradient into the cell, also cause acute lethal injury. In some cases, the pattern of the prelethal change is apoptosis; in others, it is oncosis.
With many types of injury, mitochondrial respiration and oxidative phosphorylation are rapidly affected. In some cells, this stimulates anaerobic glycolysis, which is capable of maintaining ATP, but with many injuries this is inhibited. The lack of ATP results in failure to energize a number of important homeostatic processes, in particular, control of intracellular ion homeostasis (Trump and Berezesky 1992; Trump, Berezesky and Osornio-Vargas 1981). This results in rapid increases of [Ca2+]i, while increases of [Na+] and [Cl-] result in cell swelling. Increases in [Ca2+]i result in the activation of a number of other signalling mechanisms discussed below, including a series of kinases, which can result in increased immediate-early gene transcription. Increased [Ca2+]i also modifies cytoskeletal function, in part resulting in bleb formation and in the activation of endonucleases, proteases and phospholipases. These seem to trigger many of the important effects discussed above, such as membrane damage through protease and lipase activation, direct degradation of DNA from endonuclease activation, and activation of kinases such as MAP kinase and calmodulin kinase, which in turn activate transcription factors.
Through extensive work on development in the invertebrates C. elegans and Drosophila, as well as human and animal cells, a series of pro-death genes have been identified. Some of these invertebrate genes have been found to have mammalian counterparts. For example, the ced-3 gene, which is essential for programmed cell death in C. elegans, has protease activity and a strong homology with the mammalian interleukin-1β converting enzyme (ICE). A closely related gene called apopain or prICE has recently been identified with even closer homology (Nicholson et al. 1995). In Drosophila, the reaper gene seems to be involved in a signal that leads to programmed cell death. Other pro-death genes include the Fas membrane protein and the important tumour-suppressor gene, p53, which is widely conserved. p53 is induced at the protein level following DNA damage and when phosphorylated acts as a transcription factor for other genes such as gadd45 and waf-1, which are involved in cell death signalling. Other immediate-early genes such as c-fos, c-jun, and c-myc also seem to be involved in some systems.
At the same time, there are anti-death genes which appear to counteract the pro-death genes. The first of these to be identified was ced-9 from C. elegans, which is homologous to bcl-2 in humans. These genes act in an as yet unknown way to prevent cell killing by either genetic or chemical toxins. Some recent evidence indicates that bcl-2 may act as an antioxidant. Currently, there is much effort underway to develop an understanding of the genes involved and to develop ways to activate or inhibit these genes, depending on the situation.
Genetic toxicology, by definition, is the study of how chemical or physical agents affect the intricate process of heredity. Genotoxic chemicals are defined as compounds that are capable of modifying the hereditary material of living cells. The probability that a particular chemical will cause genetic damage inevitably depends on several variables, including the organism’s level of exposure to the chemical, the distribution and retention of the chemical once it enters the body, the efficiency of metabolic activation and/or detoxification systems in target tissues, and the reactivity of the chemical or its metabolites with critical macromolecules within cells. The probability that genetic damage will cause disease ultimately depends on the nature of the damage, the cell’s ability to repair or amplify genetic damage, the opportunity for expressing whatever alteration has been induced, and the ability of the body to recognize and suppress the multiplication of aberrant cells.
In higher organisms, hereditary information is organized in chromosomes. Chromosomes consist of tightly condensed strands of protein-associated DNA. Within a single chromosome, each DNA molecule exists as a pair of long, unbranched chains of nucleotide subunits linked together by phosphodiester bonds that join the 5′ carbon of one deoxyribose moiety to the 3′ carbon of the next (figure 33.10). In addition, one of four different nucleotide bases (adenine, cytosine, guanine or thymine) is attached to each deoxyribose subunit like beads on a string. Three-dimensionally, each pair of DNA strands forms a double helix with all of the bases oriented toward the inside of the spiral. Within the helix, each base is associated with its complementary base on the opposite DNA strand; hydrogen bonding dictates strong, noncovalent pairing of adenine with thymine and guanine with cytosine (figure 33.10). Since the sequence of nucleotide bases is complementary throughout the entire length of the duplex DNA molecule, both strands carry essentially the same genetic information. In fact, during DNA replication each strand serves as a template for the production of a new partner strand.
Using RNA and an array of different proteins, the cell ultimately deciphers the information encoded by the linear sequence of bases within specific regions of DNA (genes) and produces proteins that are essential for basic cell survival as well as normal growth and differentiation. In essence, the nucleotides function like a biological alphabet which is used to code for amino acids, the building blocks of proteins.
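The base-pairing and triplet-reading rules described above can be sketched in a few lines of Python. The nine-nucleotide sequence and the three codon entries below are arbitrary illustrations (a tiny fragment of the standard genetic code), not data from the text:

```python
# Watson-Crick complementarity: A pairs with T, G pairs with C.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def template_strand(strand: str) -> str:
    """Return the complementary strand (read 5'->3') that replication
    would synthesize using `strand` as a template."""
    return "".join(PAIRING[base] for base in reversed(strand))

# A tiny, illustrative fragment of the genetic code: codons -> amino acids.
CODONS = {"ATG": "Met", "TGG": "Trp", "TAA": "STOP"}

def translate(seq: str) -> list:
    """Read non-overlapping nucleotide triplets and look each one up."""
    return [CODONS.get(seq[i:i + 3], "?") for i in range(0, len(seq) - 2, 3)]

strand = "ATGTGGTAA"
print(template_strand(strand))  # TTACCACAT
print(translate(strand))        # ['Met', 'Trp', 'STOP']
```

Because complementarity is symmetric, applying `template_strand` twice returns the original sequence, which is why each strand can serve as a template for an exact copy of its partner.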
When incorrect nucleotides are inserted, nucleotides are lost, or unnecessary nucleotides are added during DNA synthesis, the mistake is called a mutation. It has been estimated that fewer than one mutation occurs for every 10⁹ nucleotides incorporated during the normal replication of cells. Although mutations are not necessarily harmful, alterations causing inactivation or overexpression of important genes can result in a variety of disorders, including cancer, hereditary disease, developmental abnormalities, infertility and embryonic or perinatal death. Very rarely, a mutation can lead to enhanced survival; such occurrences are the basis of natural selection.
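As a rough illustration of what this fidelity means per cell division, assume a diploid human genome of about 6 × 10⁹ nucleotides; the figure is rounded and used here only for the arithmetic:

```python
# Replication fidelity: fewer than one error per 1e9 nucleotides incorporated.
error_rate = 1e-9        # upper bound per nucleotide, from the cited estimate
diploid_genome = 6e9     # nucleotides copied per division (rounded assumption)

expected_errors = error_rate * diploid_genome
print(expected_errors)   # i.e., at most on the order of a few new mutations
```

Even this small number per division accumulates over many divisions, which is why repair fidelity and the timing of repair relative to division (discussed below) matter.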
Although some chemicals react directly with DNA, most require metabolic activation. In the latter case, electrophilic intermediates such as epoxides or carbonium ions are ultimately responsible for inducing lesions at a variety of nucleophilic sites within the genetic material (figure 33.11). In other instances, genotoxicity is mediated by by-products of compound interaction with intracellular lipids, proteins, or oxygen.
Because of their relative abundance in cells, proteins are the most frequent target of toxicant interaction. However, modification of DNA is of greater concern due to the central role of this molecule in regulating growth and differentiation through multiple generations of cells.
At the molecular level, electrophilic compounds tend to attack oxygen and nitrogen in DNA. The sites that are most prone to modification are illustrated in figure 33.12. Although oxygens within phosphate groups in the DNA backbone are also targets for chemical modification, damage to bases is thought to be biologically more relevant since these groups are considered to be the primary informational elements in the DNA molecule.
Compounds that contain one electrophilic moiety typically exert genotoxicity by producing mono-adducts in DNA. Similarly, compounds that contain two or more reactive moieties can react with two different nucleophilic centres and thereby produce intra- or inter-molecular crosslinks in genetic material (figure 33.13). Interstrand DNA-DNA and DNA-protein crosslinks can be particularly cytotoxic since they can form complete blocks to DNA replication. For obvious reasons, the death of a cell eliminates the possibility that it will be mutated or neoplastically transformed. Genotoxic agents can also act by inducing breaks in the phosphodiester backbone, or between bases and sugars (producing abasic sites) in DNA. Such breaks may be a direct result of chemical reactivity at the damage site, or may occur during the repair of one of the aforementioned types of DNA lesion.
Over the past thirty to forty years, a variety of techniques have been developed to monitor the type of genetic damage induced by various chemicals. Such assays are described in detail elsewhere in this chapter and Encyclopaedia.
Misreplication of “microlesions” such as mono-adducts, abasic sites or single-strand breaks may ultimately result in nucleotide base-pair substitutions, or the insertion or deletion of short polynucleotide fragments in chromosomal DNA. In contrast, “macrolesions,” such as bulky adducts, crosslinks, or double-strand breaks may trigger the gain, loss or rearrangement of relatively large pieces of chromosomes. In any case, the consequences can be devastating to the organism since any one of these events can lead to cell death, loss of function or malignant transformation of cells. Exactly how DNA damage causes cancer is largely unknown. It is currently believed the process may involve inappropriate activation of proto-oncogenes such as myc and ras, and/or inactivation of recently identified tumour suppressor genes such as p53. Abnormal expression of either type of gene abrogates normal cellular mechanisms for controlling cell proliferation and/or differentiation.
The preponderance of experimental evidence indicates that the development of cancer following exposure to electrophilic compounds is a relatively rare event. This can be explained, in part, by the cell’s intrinsic ability to recognize and repair damaged DNA or the failure of cells with damaged DNA to survive. During repair, the damaged base, nucleotide or short stretch of nucleotides surrounding the damage site is removed and (using the opposite strand as a template) a new piece of DNA is synthesized and spliced into place. To be effective, DNA repair must occur with great accuracy before cell division, before the damage can be propagated as a mutation.
Clinical studies have shown that people with inherited defects in the ability to repair damaged DNA frequently develop cancer and/or developmental abnormalities at an early age (table 33.4). Such examples provide strong evidence linking accumulation of DNA damage to human disease. Similarly, agents that promote cell proliferation (such as tetradecanoylphorbol acetate) often enhance carcinogenesis. For these compounds, the increased likelihood of neoplastic transformation may be a direct consequence of a decrease in the time available for the cell to carry out adequate DNA repair.
Table 33.4. Hereditary disorders that appear to involve defects in DNA repair

Ataxia telangiectasia. Symptoms: high incidence of lymphoma; hypersensitivity to ionizing radiation and certain alkylating agents. Repair defect: dysregulated replication of damaged DNA (may indicate shortened time for DNA repair).

Bloom’s syndrome. Symptoms: lesions on exposed skin; high incidence of tumours of the immune system and gastrointestinal tract; high frequency of chromosomal aberrations. Repair defect: defective ligation of breaks associated with DNA repair.

Fanconi’s anaemia. Symptoms: high incidence of leukaemia; hypersensitivity to crosslinking agents; high frequency of chromosomal aberrations. Repair defect: defective repair of crosslinks in DNA.

Hereditary nonpolyposis colon cancer. Symptoms: high incidence of colon cancer. Repair defect: defect in DNA mismatch repair (when insertion of a wrong nucleotide occurs during replication).

Xeroderma pigmentosum. Symptoms: high incidence of epithelioma on exposed areas of skin; neurological impairment (in many cases); hypersensitivity to UV light and many chemical carcinogens. Repair defect: defects in excision repair and/or replication of damaged DNA.
The earliest theories on how chemicals interact with DNA can be traced back to studies conducted during the development of mustard gas for use in warfare. Further understanding grew out of efforts to identify anticancer agents that would selectively arrest the replication of rapidly dividing tumour cells. Increased public concern over hazards in our environment has prompted additional research into the mechanisms and consequences of chemical interaction with the genetic material. Examples of various types of chemicals which exert genotoxicity are presented in table 33.5.
Table 33.5. Examples of genotoxic chemicals: class of chemical (source of exposure) and probable genotoxic lesion

Aflatoxins (contaminated foods): bulky DNA adducts
Aromatic amines (industrial sources): bulky DNA adducts
Aziridine quinones (chemotherapy): mono-adducts, interstrand crosslinks and single-strand breaks in DNA
Chlorinated hydrocarbons (industrial and environmental sources): mono-adducts in DNA
Metals and metal compounds (environmental sources; platinum-based chemotherapy): both intra- and inter-strand crosslinks in DNA; mono-adducts and single-strand breaks in DNA
Nitrogen mustards (chemotherapy): mono-adducts and interstrand crosslinks in DNA
Nitrosamines (tobacco smoke and contaminated foods): mono-adducts in DNA
Polycyclic aromatic hydrocarbons (combustion products): bulky DNA adducts
The functions of the immune system are to protect the body from invading infectious agents and to provide immune surveillance against arising tumour cells. It has a first line of defence that is non-specific and that can initiate effector reactions itself, and an acquired specific branch, in which lymphocytes and antibodies carry the specificity of recognition and subsequent reactivity towards the antigen.
Immunotoxicology has been defined as “the discipline concerned with the study of the events that can lead to undesired effects as a result of interaction of xenobiotics with the immune system. These undesired events may result as a consequence of (1) a direct and/or indirect effect of the xenobiotic (and/or its biotransformation product) on the immune system, or (2) an immunologically based host response to the compound and/or its metabolite(s), or host antigens modified by the compound or its metabolites” (Berlin et al. 1987).
When the immune system acts as a passive target of chemical insults, the result can be decreased resistance to infection and certain forms of neoplasia, or immune dysregulation/stimulation that can exacerbate allergy or autoimmunity. When the immune system responds to the antigenic specificity of the xenobiotic or of a host antigen modified by the compound, toxicity can become manifest as allergies or autoimmune diseases.
Animal models to investigate chemical-induced immune suppression have been developed, and a number of these methods are validated (Burleson, Munson, and Dean 1995; IPCS 1996). For testing purposes, a tiered approach is followed to make an adequate selection from the overwhelming number of assays available. Generally, the objective of the first tier is to identify potential immunotoxicants. If potential immunotoxicity is identified, a second tier of testing is performed to confirm and characterize further the changes observed. Third-tier investigations include special studies on the mechanism of action of the compound. Several xenobiotics have been identified as immunotoxicants causing immunosuppression in such studies with laboratory animals.
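The tiered selection of assays can be pictured as a simple decision flow. The function below is a schematic sketch of that logic only; the argument names and return messages are invented for illustration and do not represent a standardized protocol:

```python
def immunotoxicity_workup(tier1_flags_effect: bool,
                          tier2_confirms_effect: bool) -> str:
    """Schematic tiered testing flow: screen for potential immunotoxicity,
    confirm and characterize it, and probe mechanism only if the
    earlier tiers are positive."""
    if not tier1_flags_effect:
        return "no further immunotoxicity testing indicated"
    if not tier2_confirms_effect:
        return "tier 1 finding not confirmed; weigh evidence, possibly retest"
    return "proceed to tier 3: mechanistic studies"

print(immunotoxicity_workup(True, True))
```

The point of the tiering is economy: the full battery of available assays is run only for compounds that keep testing positive at each stage.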
The database on immune function disturbances in humans by environmental chemicals is limited (Descotes 1986; NRC Subcommittee on Immunotoxicology 1992). The use of markers of immunotoxicity has received little attention in clinical and epidemiological studies to investigate the effect of these chemicals on human health. Such studies have not been performed frequently, and their interpretation often does not permit unequivocal conclusions to be drawn, due for instance to the uncontrolled nature of exposure. Therefore, at present, immunotoxicity assessment in rodents, with subsequent extrapolation to man, forms the basis of decisions regarding hazard and risk.
Hypersensitivity reactions, notably allergic asthma and contact dermatitis, are important occupational health problems in industrialized countries (Vos, Younes and Smith 1995). The phenomenon of contact sensitization was investigated first in the guinea pig (Andersen and Maibach 1985). Until recently this has been the species of choice for predictive testing. Many guinea pig test methods are available, the most frequently employed being the guinea pig maximization test and the occluded patch test of Buehler. Guinea pig tests and newer approaches developed in mice, such as ear swelling tests and the local lymph node assay, provide the toxicologist with the tools to assess skin sensitization hazard. The situation with respect to sensitization of the respiratory tract is very different. There are, as yet, no well-validated or widely accepted methods available for the identification of chemical respiratory allergens although progress in the development of animal models for the investigation of chemical respiratory allergy has been achieved in the guinea pig and mouse.
Human data show that chemical agents, in particular drugs, can cause autoimmune diseases (Kammüller, Bloksma and Seinen 1989). There are a number of experimental animal models of human autoimmune diseases. These comprise both spontaneous pathology (for example systemic lupus erythematosus in New Zealand Black mice) and autoimmune phenomena induced by experimental immunization with a cross-reactive autoantigen (for example H37Ra adjuvant-induced arthritis in Lewis strain rats). These models are applied in the preclinical evaluation of immunosuppressive drugs. Very few studies have addressed the potential of these models for assessing whether a xenobiotic exacerbates induced or congenital autoimmunity. Animal models that are suitable to investigate the ability of chemicals to induce autoimmune diseases are virtually lacking. One model that is used to a limited extent is the popliteal lymph node assay in mice. As in humans, genetic factors play a crucial role in the development of autoimmune disease (AD) in laboratory animals, which will limit the predictive value of such tests.
The major function of the immune system is defence against bacteria, viruses, parasites, fungi and neoplastic cells. This is achieved by the actions of various cell types and their soluble mediators in a finely tuned concert. The host defence can be roughly divided into non-specific or innate resistance and specific or acquired immunity mediated by lymphocytes (Roitt, Brostoff and Male 1989).
Components of the immune system are present throughout the body (Jones et al. 1990). The lymphocyte compartment is found within lymphoid organs (figure 33.14). The bone marrow and thymus are classified as primary or central lymphoid organs; the secondary or peripheral lymphoid organs include lymph nodes, spleen and lymphoid tissue along secretory surfaces such as the gastrointestinal and respiratory tracts, the so-called mucosa-associated lymphoid tissue (MALT). About half of the body’s lymphocytes are located at any one time in MALT. In addition the skin is an important organ for the induction of immune responses to antigens present on the skin. Important in this process are epidermal Langerhans cells that have an antigen-presenting function.
Phagocytic cells of the monocyte/macrophage lineage, called the mononuclear phagocyte system (MPS), occur in lymphoid organs and also at extranodal sites; the extranodal phagocytes include Kupffer cells in the liver, alveolar macrophages in the lung, mesangial macrophages in the kidney and glial cells in the brain. Polymorphonuclear leukocytes (PMNs) are present mainly in blood and bone marrow, but accumulate at sites of inflammation.
A first line of defence to micro-organisms is executed by a physical and chemical barrier, such as at the skin, the respiratory tract and the alimentary tract. This barrier is helped by non-specific protective mechanisms including phagocytic cells, such as macrophages and polymorphonuclear leukocytes, which are able to kill pathogens, and natural killer cells, which can lyse tumour cells and virus-infected cells. The complement system and certain microbial inhibitors (e.g., lysozyme) also take part in the non-specific response.
After initial contact of the host with the pathogen, specific immune responses are induced. The hallmark of this second line of defence is specific recognition of determinants, so-called antigens or epitopes, of the pathogens by receptors on the cell surface of B- and T-lymphocytes. Following interaction with the specific antigen, the receptor-bearing cell is stimulated to undergo proliferation and differentiation, producing a clone of progeny cells that are specific for the eliciting antigen. The specific immune responses help the non-specific defence presented to the pathogens by stimulating the efficacy of the non-specific responses. A fundamental characteristic of specific immunity is that memory develops. Secondary contact with the same antigen provokes a faster and more vigorous but well-regulated response.
The genome does not have the capacity to carry the codes of an array of antigen receptors sufficient to recognize the number of antigens that can be encountered. The repertoire of specificity develops by a process of gene rearrangements. This is a random process, during which various specificities are brought about. This includes specificities for self components, which are undesirable. A selection process that takes place in the thymus (T cells), or bone marrow (B cells) operates to delete these undesirable specificities.
Normal immune effector function and homeostatic regulation of the immune response is dependent upon a variety of soluble products, known collectively as cytokines, which are synthesized and secreted by lymphocytes and by other cell types. Cytokines have pleiotropic effects on immune and inflammatory responses. Cooperation between different cell populations is required for the immune response: the regulation of antibody responses, the accumulation of immune cells and molecules at inflammatory sites, the initiation of acute phase responses, the control of macrophage cytotoxic function and many other processes central to host resistance. These are influenced by, and in many cases are dependent upon, cytokines acting individually or in concert.
Two arms of specific immunity are recognized: humoral immunity and cell-mediated or cellular immunity:
Humoral immunity. In the humoral arm B-lymphocytes are stimulated following recognition of antigen by cell-surface receptors. Antigen receptors on B-lymphocytes are immunoglobulins (Ig). Mature B cells (plasma cells) start the production of antigen-specific immunoglobulins that act as antibodies in serum or along mucosal surfaces. There are five major classes of immunoglobulins: (1) IgM, pentameric Ig with optimal agglutinating capacity, which is first produced after antigenic stimulation; (2) IgG, the main Ig in circulation, which can pass the placenta; (3) IgA, secretory Ig for the protection of mucosal surfaces; (4) IgE, Ig fixing to mast cells or basophilic granulocytes involved in immediate hypersensitivity reactions; and (5) IgD, whose major function is as a receptor on B-lymphocytes.
Cell-mediated immunity. The cellular arm of the specific immune system is mediated by T-lymphocytes. These cells also have antigen receptors on their membranes. They recognize antigen if presented by antigen presenting cells in the context of histocompatibility antigens. Hence, these cells have a restriction in addition to the antigen specificity. T cells function as helper cells for various (including humoral) immune responses, mediate recruitment of inflammatory cells, and can, as cytotoxic T cells, kill target cells after antigen-specific recognition.
Effective host resistance is dependent upon the functional integrity of the immune system, which in turn requires that the component cells and molecules which orchestrate immune responses are available in sufficient numbers and in an operational form. Congenital immunodeficiencies in humans are often characterized by defects in certain stem cell lines, resulting in impaired or absent production of immune cells. By analogy with congenital and acquired human immunodeficiency diseases, chemical-induced immunosuppression may result simply from a reduced number of functional cells (IPCS 1996). The absence, or reduced numbers, of lymphocytes may have more or less profound effects on immune status. Some immunodeficiency states and severe immunosuppression, as can occur in transplantation or cytostatic therapy, have been associated in particular with increased incidences of opportunistic infections and of certain neoplastic diseases. The infections can be bacterial, viral, fungal or protozoan, and the predominant type of infection depends on the associated immunodeficiency. Exposure to immunosuppressive environmental chemicals may be expected to result in more subtle forms of immunosuppression, which may be difficult to detect. These may lead, for example, to an increased incidence of infections such as influenza or the common cold.
In view of the complexity of the immune system, with the wide variety of cells, mediators and functions that form a complicated and interactive network, immunotoxic compounds have numerous opportunities to exert an effect. Although the nature of the initial lesions induced by many immunotoxic chemicals has not yet been elucidated, there is increasing information available, mostly derived from studies in laboratory animals, regarding the immunobiological changes which result in depression of immune function (Dean et al. 1994). Toxic effects might occur at the following critical functions (and some examples are given of immunotoxic compounds affecting these functions):
· development and expansion of different stem cell populations (benzene exerts immunotoxic effects at the stem cell level, causing lymphocytopenia)
· proliferation of various lymphoid and myeloid cells as well as supportive tissues in which these cells mature and function (immunotoxic organotin compounds suppress the proliferative activity of lymphocytes in the thymic cortex through direct cytotoxicity; the thymotoxic action of 2,3,7,8-tetrachloro-dibenzo-p-dioxin (TCDD) and related compounds is likely due to an impaired function of thymic epithelial cells, rather than to direct toxicity for thymocytes)
· antigen uptake, processing and presentation by macrophages and other antigen-presenting cells (one of the targets of 7,12-dimethylbenz(a)anthracene (DMBA) and of lead is antigen presentation by macrophages; a target of ultraviolet radiation is the antigen-presenting Langerhans cell)
· regulatory function of T-helper and T-suppressor cells (T-helper cell function is impaired by organotins, aldicarb, polychlorinated biphenyls (PCBs), TCDD and DMBA; T-suppressor cell function is reduced by low-dose cyclophosphamide treatment)
· production of various cytokines or interleukins (benzo(a)pyrene (BP) suppresses interleukin-1 production; ultraviolet radiation alters production of cytokines by keratinocytes)
· synthesis of various classes of immunoglobulins (IgM and IgG synthesis is suppressed following PCB and tributyltin oxide (TBT) treatment, and increased after hexachlorobenzene (HCB) exposure)
· complement regulation and activation (affected by TCDD)
· cytotoxic T cell function (3-methylcholanthrene (3-MC), DMBA, and TCDD suppress cytotoxic T cell activity)
· natural killer (NK) cell function (pulmonary NK activity is suppressed by ozone; splenic NK activity is impaired by nickel)
· macrophage and polymorphonuclear leukocyte chemotaxis and cytotoxic functions (ozone and nitrogen dioxide impair the phagocytic activity of alveolar macrophages).
Allergy may be defined as the adverse health effects which result from the induction and elicitation of specific immune responses. When hypersensitivity reactions occur without involvement of the immune system the term pseudo-allergy is used. In the context of immunotoxicology, allergy results from a specific immune response to chemicals and drugs of interest. The ability of a chemical to sensitize individuals is generally related to its ability to bind covalently to body proteins. Allergic reactions may take a variety of forms, and these differ with respect to both the underlying immunological mechanisms and the speed of the reaction. Four major types of allergic reaction have been recognized. Type I hypersensitivity reactions are mediated by IgE antibody, and symptoms are manifest within minutes of exposure of the sensitized individual. Type II hypersensitivity reactions result from the damage or destruction of host cells by antibody; in this case symptoms become apparent within hours. Type III hypersensitivity, or Arthus, reactions are also antibody mediated, but directed against soluble antigen, and result from the local or systemic action of immune complexes. Type IV, or delayed-type hypersensitivity, reactions are effected by T-lymphocytes, and symptoms normally develop 24 to 48 hours following exposure of the sensitized individual.
The two types of chemical allergy of greatest relevance to occupational health are contact sensitivity or skin allergy and allergy of the respiratory tract.
Contact hypersensitivity. A large number of chemicals are able to cause skin sensitization. Following topical exposure of a susceptible individual to a chemical allergen, a T-lymphocyte response is induced in the draining lymph nodes. In the skin the allergen interacts directly or indirectly with epidermal Langerhans cells, which transport the chemical to the lymph nodes and present it in an immunogenic form to responsive T-lymphocytes. Allergen-activated T-lymphocytes proliferate, resulting in clonal expansion. The individual is now sensitized and will respond to a second dermal exposure to the same chemical with a more aggressive immune response, resulting in allergic contact dermatitis. The cutaneous inflammatory reaction which characterizes allergic contact dermatitis is secondary to the recognition of the allergen in the skin by specific T-lymphocytes. These lymphocytes become activated, release cytokines and cause the local accumulation of other mononuclear leukocytes. Symptoms develop some 24 to 48 hours following exposure of the sensitized individual, and allergic contact dermatitis therefore represents a form of delayed-type hypersensitivity. Common causes of allergic contact dermatitis include organic chemicals (such as 2,4-dinitrochlorobenzene), metals (such as nickel and chromium) and plant products (such as urushiol from poison ivy).
Respiratory hypersensitivity. Respiratory hypersensitivity is usually considered to be a Type I hypersensitivity reaction. However, late phase reactions and the more chronic symptoms associated with asthma may involve cell-mediated (Type IV) immune processes. The acute symptoms associated with respiratory allergy are effected by IgE antibody, the production of which is provoked following exposure of the susceptible individual to the inducing chemical allergen. The IgE antibody distributes systemically and binds, via membrane receptors, to mast cells which are found in vascularized tissues, including the respiratory tract. Following inhalation of the same chemical a respiratory hypersensitivity reaction will be elicited. Allergen associates with protein and binds to, and cross-links, IgE antibody bound to mast cells. This in turn causes the degranulation of mast cells and the release of inflammatory mediators such as histamine and leukotrienes. Such mediators cause bronchoconstriction and vasodilation, resulting in the symptoms of respiratory allergy: asthma and/or rhinitis. Chemicals known to cause respiratory hypersensitivity in man include acid anhydrides (such as trimellitic anhydride), some diisocyanates (such as toluene diisocyanate), platinum salts and some reactive dyes. Also, chronic exposure to beryllium is known to cause hypersensitivity lung disease.
Autoimmunity can be defined as the stimulation of specific immune responses directed against endogenous “self” antigens. Induced autoimmunity can result either from alterations in the balance of regulatory T-lymphocytes or from the association of a xenobiotic with normal tissue components such as to render them immunogenic (“altered self”). Drugs and chemicals known to incidentally induce or exacerbate effects like those of autoimmune disease (AD) in susceptible individuals are low molecular weight compounds (molecular weight 100 to 500) that are generally not considered immunogenic in themselves. The mechanism by which chemical exposure produces AD is largely unknown. Disease can be produced directly by means of circulating antibody, indirectly through the formation of immune complexes, or as a consequence of cell-mediated immunity, but likely occurs through a combination of mechanisms. The pathogenesis is best known in immune haemolytic disorders induced by drugs:
· The drug can attach to the red-cell membrane and interact with a drug-specific antibody.
· The drug can alter the red-cell membrane so that the immune system regards the cell as foreign.
· The drug and its specific antibody form immune complexes that adhere to the red-cell membrane to produce injury.
· Red-cell sensitization occurs due to the production of red-cell autoantibody.
A variety of chemicals and drugs, in particular the latter, have been found to induce autoimmune-like responses (Kammüller, Bloksma and Seinen 1989). Occupational exposure to chemicals may incidentally lead to AD-like syndromes. Exposure to monomeric vinyl chloride, trichloroethylene, perchloroethylene, epoxy resins and silica dust may induce scleroderma-like syndromes. A syndrome similar to systemic lupus erythematosus (SLE) has been described after exposure to hydrazine. Exposure to toluene diisocyanate has been associated with the induction of thrombocytopenic purpura. Heavy metals such as mercury have been implicated in some cases of immune complex glomerulonephritis.
The assessment of human immune status is performed mainly using peripheral blood for analysis of humoral substances like immunoglobulins and complement, and of blood leukocytes for subset composition and functionality of subpopulations. These methods are usually the same as those used to investigate humoral and cell-mediated immunity as well as nonspecific resistance of patients with suspected congenital immunodeficiency disease. For epidemiological studies (e.g., of occupationally exposed populations) parameters should be selected on the basis of their predictive value in human populations, validated animal models, and the underlying biology of the markers (see table 33.6). The strategy in screening for immunotoxic effects after (accidental) exposure to environmental pollutants or other toxicants is much dependent on circumstances, such as type of immunodeficiency to be expected, time between exposure and immune status assessment, degree of exposure and number of exposed individuals. The process of assessing the immunotoxic risk of a particular xenobiotic in humans is extremely difficult and often impossible, due largely to the presence of various confounding factors of endogenous or exogenous origin that influence the response of individuals to toxic damage. This is particularly true for studies which investigate the role of chemical exposure in autoimmune diseases, where genetic factors play a crucial role.
Table 33.6. Tests of immune status, grouped by recommended use

Tests to be included with general panels; indicators of general health and organ system status:
· blood urea nitrogen, blood glucose, etc.

Tests to be included with general panels; general indicators of immune status; relatively low cost; assay methods standardized among laboratories; results outside reference ranges clinically interpretable:
· complete blood counts
· serum IgG, IgA, IgM levels
· surface marker phenotypes for major lymphocyte subsets

Tests to be included when indicated by clinical findings, suspected exposures or prior test results; indicators of specific immune functions/events; assay methods standardized among laboratories; results outside reference ranges clinically interpretable:
· antibodies to infectious agents
· total serum IgE
· skin tests for hypersensitivity
· granulocyte oxidative burst
· histopathology (tissue biopsy)

Tests to be included only with control populations and careful study design; indicators of general or specific immune functions/events; cost varies and is often high; assay methods usually not standardized among laboratories; results outside reference ranges often not clinically interpretable:
· in vitro stimulation assays
· cell activation surface markers
· cytokine serum concentrations
· clonality assays (antibody, cellular, genetic)
As adequate human data are seldom available, the assessment of risk for chemical-induced immunosuppression in humans is in the majority of cases based upon animal studies. The identification of potential immunotoxic xenobiotics is undertaken primarily in controlled studies in rodents. In vivo exposure studies present, in this regard, the optimal approach to estimating the immunotoxic potential of a compound, owing to the multifactorial and complex nature of the immune system and of immune responses. In vitro studies are of increasing value in the elucidation of mechanisms of immunotoxicity. In addition, by investigating the effects of the compound using cells of animal and human origin, data can be generated for species comparison, which can be used in the “parallelogram” approach to improve the risk assessment process. If data are available for three cornerstones of the parallelogram (in vivo animal, and in vitro animal and human), it may be easier to predict the outcome at the remaining cornerstone, that is, the risk in humans.
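The parallelogram reasoning can be made concrete with a small numerical sketch. It assumes, purely for illustration, that the human-to-animal sensitivity ratio observed in vitro also holds in vivo; the function name and the example values are hypothetical, not data from the text.

```python
def parallelogram_estimate(animal_in_vivo, animal_in_vitro, human_in_vitro):
    """Estimate the missing cornerstone (the human in vivo effect level),
    assuming the in vitro human/animal sensitivity ratio carries over
    to the in vivo situation."""
    if animal_in_vitro <= 0:
        raise ValueError("animal in vitro value must be positive")
    return animal_in_vivo * (human_in_vitro / animal_in_vitro)

# Illustrative values only: a rodent in vivo immunosuppressive dose of
# 50 mg/kg, with in vitro effect concentrations of 10 uM (rodent cells)
# and 5 uM (human cells), i.e. human cells appear twice as sensitive.
print(parallelogram_estimate(50.0, 10.0, 5.0))  # → 25.0
```

If human cells are more sensitive in vitro, the estimated human in vivo effect level falls proportionally below the animal value.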
When assessment of risk for chemical-induced immunosuppression has to rely solely upon data from animal studies, an approach can be followed in the extrapolation to man by the application of uncertainty factors to the no observed adverse effect level (NOAEL). This level can be based on parameters determined in relevant models, such as host resistance assays and in vivo assessment of hypersensitivity reactions and antibody production. Ideally, the relevance of this approach to risk assessment requires confirmation by studies in humans. Such studies should combine the identification and measurement of the toxicant, epidemiological data and immune status assessments.
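The arithmetic of this extrapolation is simple: the NOAEL is divided by the product of the applied uncertainty factors. The default factors of 10 for interspecies differences and 10 for interindividual variability used below are a conventional illustration, not values prescribed by the text.

```python
def tolerable_exposure(noael, uncertainty_factors):
    """Divide a NOAEL by the product of all applied uncertainty factors."""
    product = 1.0
    for factor in uncertainty_factors:
        product *= factor
    return noael / product

# Illustrative: NOAEL of 10 mg/kg/day from a rodent host-resistance assay,
# with conventional 10x interspecies and 10x interindividual factors.
print(tolerable_exposure(10.0, [10, 10]))  # → 0.1 mg/kg/day
```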
To predict contact hypersensitivity, guinea pig models are available and have been used in risk assessment since the 1970s. Although sensitive and reproducible, these tests have limitations because they depend on subjective evaluation; this can be overcome by newer, more quantitative methods developed in the mouse. Regarding hypersensitivity induced by inhalation or ingestion of allergens, tests should be developed and evaluated in terms of their predictive value in man. When setting safe occupational exposure levels for potential allergens, consideration has to be given to the biphasic nature of allergy: the sensitization phase and the elicitation phase. The concentration required to elicit an allergic reaction in a previously sensitized individual is considerably lower than the concentration necessary to induce sensitization in the immunologically naïve but susceptible individual.
As animal models to predict chemical-induced autoimmunity are virtually lacking, emphasis should be given to their development. This in turn requires advancing our knowledge of chemical-induced autoimmunity in humans, including the study of genetic and immune-system markers to identify susceptible individuals. Humans exposed to drugs that induce autoimmunity offer such an opportunity.
The study and characterization of chemicals and other agents for toxic properties is often undertaken on the basis of specific organs and organ systems. In this chapter, two targets have been selected for in-depth discussion: the immune system and the gene. These examples were chosen to represent a complex target organ system and a molecular target within cells. For more comprehensive discussion of the toxicology of target organs, the reader is referred to standard toxicology texts such as Casarett and Doull, and Hayes. The International Programme on Chemical Safety (IPCS) has also published several criteria documents on target organ toxicology, by organ system.
Target organ toxicology studies are usually undertaken on the basis of information indicating the potential for specific toxic effects of a substance, either from epidemiological data or from general acute or chronic toxicity studies, or on the basis of special concerns to protect certain organ functions, such as reproduction or foetal development. In some cases, specific target organ toxicity tests are expressly mandated by statutory authorities, such as neurotoxicity testing under the US pesticides law (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents,”) and mutagenicity testing under the Japanese Chemical Substance Control Law (see “Principles of hazard identification: The Japanese approach”).
As discussed in “Target organ and critical effects,” the identification of a critical organ is based upon detecting the organ or organ system that responds adversely first, or at the lowest doses or exposures. This information is then used to design specific toxicology investigations, or more defined toxicity tests designed to elicit more sensitive indications of intoxication in the target organ. Target organ toxicology studies may also be used to determine mechanisms of action, which are of use in risk assessment (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”).
Target organs may be studied by exposure of intact organisms and detailed analysis of function and histopathology in the target organ, or by in vitro exposure of cells, tissue slices, or whole organs maintained for short or long term periods in culture (see “Mechanisms of toxicology: Introduction and concepts”). In some cases, tissues from human subjects may also be available for target organ toxicity studies, and these may provide opportunities to validate assumptions of cross-species extrapolation. However, it must be kept in mind that such studies do not provide information on relative toxicokinetics.
In general, target organ toxicity studies share the following common characteristics: detailed histopathological examination of the target organ, including post mortem examination, tissue weight, and examination of fixed tissues; biochemical studies of critical pathways in the target organ, such as important enzyme systems; functional studies of the ability of the organ and cellular constituents to perform expected metabolic and other functions; and analysis of biomarkers of exposure and early effects in target organ cells.
Detailed knowledge of target organ physiology, biochemistry and molecular biology may be incorporated in target organ studies. For instance, because the synthesis and secretion of small-molecular-weight proteins is an important aspect of renal function, nephrotoxicity studies often include special attention to these parameters (IPCS 1991). Because cell-to-cell communication is a fundamental process of nervous system function, target organ studies in neurotoxicity may include detailed neurochemical and biophysical measurements of neurotransmitter synthesis, uptake, storage, release and receptor binding, as well as electrophysiological measurement of changes in membrane potential associated with these events.
A high degree of emphasis is being placed upon the development of in vitro methods for target organ toxicity, to replace or reduce the use of whole animals. Substantial advances in these methods have been achieved for reproductive toxicants (Heindel and Chapin 1993).
In summary, target organ toxicity studies are generally undertaken as a higher order test for determining toxicity. The selection of specific target organs for further evaluation depends upon the results of screening level tests, such as the acute or subchronic tests used by OECD and the European Union; some target organs and organ systems may be a priori candidates for special investigation because of concerns to prevent certain types of adverse health effects.
The word biomarker is short for biological marker, a term that refers to a measurable event occurring in a biological system, such as the human body. This event is then interpreted as a reflection, or marker, of a more general state of the organism or of life expectancy. In occupational health, a biomarker is generally used as an indicator of health status or disease risk.
Biomarkers are used for in vitro as well as in vivo studies that may include humans. Usually, three specific types of biological markers are identified. Although a few biomarkers may be difficult to classify, usually they are separated into biomarkers of exposure, biomarkers of effect or biomarkers of susceptibility (see table 33.7).
(Table 33.7, fragmentary: carbon monoxide exposure; red blood cells; white blood cells)
Given an acceptable degree of validity, biomarkers may be employed for several purposes. On an individual basis, a biomarker may be used to support or refute a diagnosis of a particular type of poisoning or other chemically-induced adverse effect. In a healthy subject, a biomarker may also reflect individual hypersusceptibility to specific chemical exposures and may therefore serve as a basis for risk prediction and counselling. In groups of exposed workers, some exposure biomarkers can be applied to assess the extent of compliance with pollution abatement regulations or the effectiveness of preventive efforts in general.
An exposure biomarker may be an exogenous compound (or a metabolite) within the body, an interactive product between the compound (or metabolite) and an endogenous component, or another event related to the exposure. Most commonly, biomarkers of exposures to stable compounds, such as metals, comprise measurements of the metal concentrations in appropriate samples, such as blood, serum or urine. With volatile chemicals, their concentration in exhaled breath (after inhalation of contamination-free air) may be assessed. If the compound is metabolized in the body, one or more metabolites may be chosen as a biomarker of the exposure; metabolites are often determined in urine samples.
Modern methods of analysis may allow separation of isomers or congeners of organic compounds, and determination of the speciation of metal compounds or isotopic ratios of certain elements. Sophisticated analyses allow determination of changes in the structure of DNA or other macromolecules caused by binding with reactive chemicals. Such advanced techniques will no doubt gain considerably in importance for applications in biomarker studies, and lower detection limits and better analytical validity are likely to make these biomarkers even more useful.
Particularly promising developments have occurred with biomarkers of exposure to mutagenic chemicals. These compounds are reactive and may form adducts with macromolecules, such as proteins or DNA. DNA adducts may be detected in white blood cells or tissue biopsies, and specific DNA fragments may be excreted in the urine. For example, exposure to ethylene oxide results in reactions with DNA bases, and, after excision of the damaged base, N-7-(2-hydroxyethyl)guanine will be eliminated in the urine. Some adducts may not refer directly to a particular exposure. For example, 8-hydroxy-2´-deoxyguanosine reflects oxidative damage to DNA, and this reaction may be triggered by several chemical compounds, most of which also induce lipid peroxidation.
Other macromolecules may also be changed by adduct formation or oxidation. Of special interest, such reactive compounds may generate haemoglobin adducts that can be determined as biomarkers of exposure to the compounds. The advantage is that ample amounts of haemoglobin can be obtained from a blood sample, and, given the four-month lifetime of red blood cells, the adducts formed with the amino acids of the protein will indicate the total exposure during this period.
Adducts may be determined by sensitive techniques such as high-performance liquid chromatography, and some immunological methods are also available. In general, the analytical methods are new, expensive and in need of further development and validation. Better sensitivity can be obtained with the 32P post-labelling assay, which provides a nonspecific indication that DNA damage has taken place. All of these techniques are potentially useful for biological monitoring and have been applied in a growing number of studies. However, simpler and more sensitive analytical methods are needed. Given the limited specificity of some methods at low-level exposures, tobacco smoking or other factors may impact significantly on the measurement results, thus causing difficulties in interpretation.
Exposure to mutagenic compounds, or to compounds which are metabolized into mutagens, may also be determined by assessing the mutagenicity of the urine from an exposed individual. The urine sample is incubated with a strain of bacteria in which a specific point mutation is expressed in a way that can be easily measured. If mutagenic chemicals are present in the urine sample, then an increased rate of mutations will occur in the bacteria.
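One common way to score such a bacterial reversion assay is to compare revertant colony counts on plates incubated with the sample against the spontaneous background; a doubling of the background rate is a widely used, though not universal, criterion for a positive result. A sketch under that assumption, with illustrative colony counts:

```python
def mutagenicity_ratio(treated_revertants, control_revertants):
    """Ratio of revertant colonies on plates incubated with the urine
    sample to spontaneous revertants on control plates."""
    if control_revertants == 0:
        raise ValueError("control plate count must be nonzero")
    return treated_revertants / control_revertants

def is_positive(treated, control, threshold=2.0):
    # The two-fold-over-background criterion is an illustrative
    # convention, not a rule stated in the text.
    return mutagenicity_ratio(treated, control) >= threshold

print(is_positive(75, 25))  # → True  (ratio 3.0)
print(is_positive(30, 25))  # → False (ratio 1.2)
```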
Exposure biomarkers must be evaluated with regard to temporal variation in exposure and the relation to different compartments. Thus, the time frame(s) represented by the biomarker, that is, the extent to which the biomarker measurement reflects past exposure(s) and/or accumulated body burden, must be determined from toxicokinetic data in order to interpret the result. In particular, the degree to which the biomarker indicates retention in specific target organs should be considered. Although blood samples are often used for biomarker studies, peripheral blood is generally not regarded as a compartment as such, although it acts as a transport medium between compartments. The degree to which the concentration in the blood reflects levels in different organs varies widely between different chemicals, and usually also depends upon the length of the exposure as well as time since exposure.
Sometimes this type of evidence is used to classify a biomarker as an indicator of (total) absorbed dose or an indicator of effective dose (i.e., the amount that has reached the target tissue). For example, exposure to a particular solvent may be evaluated from data on the actual concentration of the solvent in the blood at a particular time following the exposure. This measurement will reflect the amount of the solvent that has been absorbed into the body. Some of the absorbed amount will be exhaled due to the vapour pressure of the solvent. While circulating in the blood, the solvent will interact with various components of the body, and it will eventually become subject to breakdown by enzymes. The outcome of the metabolic processes can be assessed by determining specific mercapturic acids produced by conjugation with glutathione. The cumulative excretion of mercapturic acids may better reflect the effective dose than will the blood concentration.
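The distinction can be illustrated with a one-compartment, first-order model: the blood concentration declines exponentially after absorption, while cumulative metabolite excretion approaches the effective dose. The half-life and doses below are illustrative assumptions, not values from the text.

```python
import math

def blood_concentration(c0, half_life_h, t_h):
    """First-order decline of the solvent concentration in blood."""
    k = math.log(2) / half_life_h
    return c0 * math.exp(-k * t_h)

def cumulative_metabolite(effective_dose, half_life_h, t_h):
    """Cumulative mercapturic-acid excretion, assuming (for illustration)
    that a fixed fraction of the absorbed solvent is conjugated and
    eliminated with the same first-order kinetics."""
    k = math.log(2) / half_life_h
    return effective_dose * (1 - math.exp(-k * t_h))

# With a 4 h half-life, blood sampled 8 h after exposure retains only
# about a quarter of the initial concentration, whereas a 24 h urine
# collection captures nearly the whole conjugated dose.
print(blood_concentration(1.0, 4.0, 8.0))     # ≈ 0.25
print(cumulative_metabolite(1.0, 4.0, 24.0))  # ≈ 0.98
```

This is why a cumulative urinary measurement can reflect the effective dose better than a single blood concentration taken at an imprecisely known time after exposure.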
Life events, such as reproduction and senescence, may affect the distribution of a chemical. The distribution of chemicals within the body is significantly affected by pregnancy, and many chemicals may pass the placental barrier, thus causing exposure of the foetus. Lactation may result in excretion of lipid-soluble chemicals, thus leading to a decreased retention in the mother along with an increased uptake by the infant. During weight loss or development of osteoporosis, stored chemicals may be released, which can then result in a renewed and protracted “endogenous” exposure of target organs. Other factors may affect individual absorption, metabolism, retention and distribution of chemical compounds, and some biomarkers of susceptibility are available (see below).
A marker of effect may be an endogenous component, or a measure of the functional capacity, or some other indicator of the state or balance of the body or organ system, as affected by the exposure. Such effect markers are generally preclinical indicators of abnormalities.
These biomarkers may be specific or non-specific. The specific biomarkers are useful because they indicate a biological effect of a particular exposure, thus providing evidence that can potentially be used for preventive purposes. The non-specific biomarkers do not point to an individual cause of the effect, but they may reflect the total, integrated effect due to a mixed exposure. Both types of biomarkers may therefore be of considerable use in occupational health.
There is not a clear distinction between exposure biomarkers and effect biomarkers. For example, adduct formation could be said to reflect an effect rather than the exposure. However, effect biomarkers usually indicate changes in the functions of cells, tissues or the total body. Some researchers include gross changes, such as an increase in liver weight of exposed laboratory animals or decreased growth in children, as biomarkers of effect. For the purpose of occupational health, effect biomarkers should be restricted to those that indicate subclinical or reversible biochemical changes, such as inhibition of enzymes. The most frequently used effect biomarker is probably inhibition of cholinesterase caused by certain insecticides, that is, organophosphates and carbamates. In most cases, this effect is entirely reversible, and the enzyme inhibition reflects the total exposure to this particular group of insecticides.
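In monitoring programmes this effect is commonly expressed as percent depression relative to the worker's own pre-exposure baseline. The 20% action level used below is an illustrative assumption; actual programmes set their own thresholds.

```python
def cholinesterase_inhibition(baseline_activity, current_activity):
    """Percent depression of cholinesterase activity relative to the
    individual's pre-exposure baseline."""
    if baseline_activity <= 0:
        raise ValueError("baseline must be positive")
    return 100.0 * (baseline_activity - current_activity) / baseline_activity

# Illustrative worker: baseline 8.0 U/ml, measured 6.0 U/ml during the
# spraying season; a 20% depression is used here as a hypothetical
# action level prompting removal from further exposure.
depression = cholinesterase_inhibition(8.0, 6.0)
print(depression)          # → 25.0
print(depression >= 20.0)  # → True
```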
Some exposures do not result in enzyme inhibition but rather in increased activity of an enzyme. This is the case with several enzymes that belong to the P450 family (see “Genetic determinants of toxic response”). They may be induced by exposures to certain solvents and polyaromatic hydrocarbons (PAHs). Since these enzymes are mainly expressed in tissues from which a biopsy may be difficult to obtain, the enzyme activity is determined indirectly in vivo by administering a compound that is metabolized by that particular enzyme, and then the breakdown product is measured in urine or plasma.
Other exposures may induce the synthesis of a protective protein in the body. The best example is probably metallothionein, which binds cadmium and promotes the excretion of this metal; cadmium exposure is one of the factors that result in increased expression of the metallothionein gene. Similar protective proteins may exist but have not yet been explored sufficiently to become accepted as biomarkers. Among the candidates for possible use as biomarkers are the so-called stress proteins, originally referred to as heat shock proteins. These proteins are generated by a range of different organisms in response to a variety of adverse exposures.
Oxidative damage may be assessed by determining the concentration of malondialdehyde in serum or the exhalation of ethane. Similarly, the urinary excretion of proteins with a small molecular weight, such as albumin, may be used as a biomarker of early kidney damage. Several parameters routinely used in clinical practice (for example, serum hormone or enzyme levels) may also be useful as biomarkers. However, many of these parameters may not be sufficiently sensitive to detect early impairment.
Another group of effect parameters relates to genotoxic effects (changes in the structure of chromosomes). Such effects may be detected by microscopy of white blood cells that undergo cell division. Serious damage to the chromosomes, such as chromosomal aberrations or the formation of micronuclei, can be seen in a microscope. Damage may also be revealed by adding a dye to the cells during cell division. Exposure to a genotoxic agent can then be visualized as an increased exchange of the dye between the two chromatids of each chromosome (sister chromatid exchange). Chromosomal aberrations are related to an increased risk of developing cancer, but the significance of an increased rate of sister chromatid exchange is less clear.
More sophisticated assessment of genotoxicity is based on particular point mutations in somatic cells, that is, white blood cells or epithelial cells obtained from the oral mucosa. A mutation at a specific locus may make the cells capable of growing in a culture that contains a chemical that is otherwise toxic (such as 6-thioguanine). Alternatively, a specific gene product can be assessed (e.g., serum or tissue concentrations of oncoproteins encoded by particular oncogenes). Obviously, these mutations reflect the total genotoxic damage incurred and do not necessarily indicate anything about the causative exposure. These methods are not yet ready for practical use in occupational health, but rapid progress in this line of research would suggest that such methods will become available within a few years.
A marker of susceptibility, whether inherited or induced, is an indicator that the individual is particularly sensitive to the effect of a xenobiotic or to the effects of a group of such compounds. Most attention has been focused on genetic susceptibility, although other factors may be at least as important. Hypersusceptibility may be due to an inherited trait, the constitution of the individual, or environmental factors.
The ability to metabolize certain chemicals is variable and is genetically determined (see “Genetic determinants of toxic response”). Several relevant enzymes appear to be controlled by a single gene. For example, oxidation of foreign chemicals is mainly carried out by enzymes of the P450 family. Other enzymes make the metabolites more water soluble by conjugation (e.g., N-acetyltransferase and glutathione S-transferase). The activity of these enzymes is genetically controlled and varies considerably. As mentioned above, the activity can be determined by administering a small dose of a drug and then determining the amount of the metabolite in the urine. Some of the genes have now been characterized, and techniques are available to determine the genotype. Important studies suggest that the risk of developing certain forms of cancer is related to the capability of metabolizing foreign compounds. Many questions still remain unanswered, thus at this time limiting the use of these potential susceptibility biomarkers in occupational health.
Other inherited traits, such as alpha1-antitrypsin deficiency or glucose-6-phosphate dehydrogenase deficiency, also result in deficient defence mechanisms in the body, thereby causing hypersusceptibility to certain exposures.
Most research related to susceptibility has dealt with genetic predisposition. Other factors play a role as well and have been partly neglected. For example, individuals with a chronic disease may be more sensitive to an occupational exposure. Also, if a disease process or previous exposure to toxic chemicals has caused some subclinical organ damage, then the capacity to withstand a new toxic exposure is likely to be less. Biochemical indicators of organ function may in this case be used as susceptibility biomarkers. Perhaps the best example regarding hypersusceptibility relates to allergic responses. If an individual has become sensitized to a particular exposure, then specific antibodies can be detected in serum. Even if the individual has not become sensitized, other current or past exposures may add to the risk of developing an adverse effect related to an occupational exposure.
A major problem is to determine the joint effect of mixed exposures at work. In addition, personal habits and drug use may result in an increased susceptibility. For example, tobacco smoke usually contains a considerable amount of cadmium. Thus, with occupational exposure to cadmium, a heavy smoker who has accumulated substantial amounts of this metal in the body will be at increased risk of developing cadmium-related kidney disease.
Biomarkers are extremely useful in toxicological research, and many may be applicable in biological monitoring. Nonetheless, the limitations must also be recognized. Many biomarkers have so far been studied only in laboratory animals. Toxicokinetic patterns in other species may not necessarily reflect the situation in human beings, and extrapolation may require confirmatory studies in human volunteers. Also, account must be taken of individual variations due to genetic or constitutional factors.
In some cases, exposure biomarkers may not at all be feasible (e.g., for chemicals which are short-lived in vivo). Other chemicals may be stored in, or may affect, organs which cannot be accessed by routine procedures, such as the nervous system. The route of exposure may also affect the distribution pattern and therefore also the biomarker measurement and its interpretation. For example, direct exposure of the brain via the olfactory nerve is likely to escape detection by measurement of exposure biomarkers. As to effect biomarkers, many of them are not at all specific, and the change can be due to a variety of causes, including lifestyle factors. Perhaps in particular with the susceptibility biomarkers, interpretation must be very cautious at the moment, as many uncertainties remain about the overall health significance of individual genotypes.
In occupational health, the ideal biomarker should satisfy several requirements. First of all, sample collection and analysis must be simple and reliable. For optimal analytical quality, standardization is needed, but the specific requirements vary considerably. Major areas of concern include: preparation of the individual, sampling procedure and sample handling, and measurement procedure; the latter encompasses technical factors, such as calibration and quality assurance procedures, and individual-related factors, such as education and training of operators.
For documentation of analytical validity and traceability, reference materials should be based on relevant matrices and contain appropriate concentrations of the toxic substances or relevant metabolites. For biomarkers to be used for biological monitoring or for diagnostic purposes, the responsible laboratories must have well-documented analytical procedures with defined performance characteristics, and accessible records to allow verification of the results. At the same time, nonetheless, the economics of characterizing and using reference materials to supplement quality assurance procedures in general must be considered. Thus, the achievable quality of results, and the uses to which they are put, have to be balanced against the added costs of quality assurance, including reference materials, manpower and instrumentation.
Another requirement is that the biomarker should be specific, at least under the circumstances of the study, for a particular type of exposure, with a clear-cut relationship to the degree of exposure. Otherwise, the result of the biomarker measurement may be too difficult to interpret. For proper interpretation of the measurement result of an exposure biomarker, the diagnostic validity must be known (i.e., the translation of the biomarker value into the magnitude of possible health risks). In this area, metals serve as a paradigm for biomarker research. Recent research has demonstrated the complexity and subtlety of dose-response relationships, with considerable difficulty in identifying no-effect levels and therefore also in defining tolerable exposures. However, this kind of research has also illustrated the types of investigation and the refinement that are necessary to uncover the relevant information. For most organic compounds, quantitative associations between exposures and the corresponding adverse health effects are not yet available; in many cases, even the primary target organs are not known for sure. In addition, evaluation of toxicity data and biomarker concentrations is often complicated by exposure to mixtures of substances, rather than exposure to a single compound at a time.
Before a biomarker is applied for occupational health purposes, some additional considerations are necessary. First, the biomarker must reflect a subclinical and reversible change only. Second, if the biomarker results can be interpreted with regard to health risks, preventive efforts should be available and considered realistic in case the biomarker data suggest a need to reduce the exposure. Third, the practical use of the biomarker must be generally regarded as ethically acceptable.
Industrial hygiene measurements may be compared with applicable exposure limits. Likewise, results on exposure biomarkers or effect biomarkers may be compared to biological action limits, sometimes referred to as biological exposure indices. Such limits should be based on the best advice of clinicians and scientists from appropriate disciplines, and responsible administrators as “risk managers” should then take into account relevant ethical, social, cultural and economic factors. The scientific basis should, if possible, include dose-response relationships supplemented by information on variations in susceptibility within the population at risk. In some countries, workers and members of the general public are involved in the standard-setting process and provide important input, particularly when scientific uncertainty is considerable. One of the major uncertainties is how to define an adverse health effect that should be prevented; for example, whether adduct formation as an exposure biomarker by itself represents an adverse effect (i.e., effect biomarker) that should be prevented. Difficult questions are likely to arise when deciding whether it is ethically defensible, for the same compound, to have different limits for adventitious exposure, on the one hand, and occupational exposure, on the other.
The information generated by the use of biomarkers should generally be conveyed to the individuals examined within the physician-patient relationship. Ethical concerns must in particular be considered in connection with highly experimental biomarker analyses that cannot currently be interpreted in detail in terms of actual health risks. For the general population, for example, limited guidance exists at present with regard to interpretation of exposure biomarkers other than the blood-lead concentration. Also of importance is the confidence in the data generated (i.e., whether appropriate sampling has been done, and whether sound quality assurance procedures have been utilized in the laboratory involved). An additional area of special worry relates to individual hypersusceptibility. These issues must be taken into account when providing the feedback from the study.
All sectors of society affected by, or concerned with carrying out, a biomarker study need to be involved in the decision-making process on how to handle the information generated by the study. Specific procedures to prevent or overcome inevitable ethical conflicts should be developed within the legal and social frameworks of the region or country. However, each situation represents a different set of questions and pitfalls, and no single procedure for public involvement can be developed to cover all applications of exposure biomarkers.
Genetic toxicity assessment is the evaluation of agents for their ability to induce any of three general types of changes (mutations) in the genetic material (DNA): gene, chromosomal and genomic. In organisms such as humans, the genes are composed of DNA, which consists of individual units called nucleotide bases. The genes are arranged in discrete physical structures called chromosomes. Genotoxicity can result in significant and irreversible effects upon human health. Genotoxic damage is a critical step in the induction of cancer and it can also be involved in the induction of birth defects and foetal death. The three classes of mutations mentioned above can occur within either of the two types of tissues possessed by organisms such as humans: sperm or eggs (germ cells) and the remaining tissue (somatic cells).
Assays that measure gene mutation are those that detect the substitution, addition or deletion of nucleotides within a gene. Assays that measure chromosomal mutation are those that detect breaks or chromosomal rearrangements involving one or more chromosomes. Assays that measure genomic mutation are those that detect changes in the number of chromosomes, a condition called aneuploidy. Genetic toxicity assessment has changed considerably since the development by Hermann Muller in 1927 of the first assay to detect genotoxic (mutagenic) agents. Since then, more than 200 assays have been developed that measure mutations in DNA; however, fewer than ten assays are used commonly today for genetic toxicity assessment. This article reviews these assays, describes what they measure, and explores the role of these assays in toxicity assessment.
Genetic toxicology has become an integral part of the overall risk assessment process and has gained in stature in recent times as a reliable predictor of carcinogenic activity. However, prior to the development of genetic toxicology (before 1970), other methods were, and still are, used to identify potential cancer hazards to humans. There are six major categories of methods currently used for identifying human cancer risks: epidemiological studies, long-term in vivo bioassays, mid-term in vivo bioassays, short-term in vivo and in vitro bioassays, artificial intelligence (structure-activity), and mechanism-based inference.
Table 33.8 gives advantages and disadvantages for these methods.

Table 33.8. Advantages and disadvantages of methods for identifying human cancer risks

Epidemiological studies
Advantages: (1) humans are the ultimate indicators of disease; (2) evaluate sensitive or susceptible populations; (3) occupational exposure cohorts; (4) environmental sentinel alerts.
Disadvantages: (1) generally retrospective (death certificates, recall biases, etc.); (2) insensitive, costly, lengthy; (3) reliable exposure data sometimes unavailable or difficult to obtain; (4) combined, multiple and complex exposures; lack of appropriate control cohorts; (5) experiments on humans not done; (6) cancer detection, not prevention.

Long-term in vivo bioassays
Advantages: (1) prospective and retrospective (validation) evaluations; (2) excellent correlation with identified human carcinogens; (3) exposure levels and conditions known; (4) identifies chemical toxicity and carcinogenicity effects; (5) results obtained relatively quickly; (6) qualitative comparisons among chemical classes; (7) integrative and interactive biological systems related closely to humans.
Disadvantages: (1) rarely replicated, resource intensive; (2) limited facilities suitable for such experiments; (3) species extrapolation debate; (4) exposures used are often at levels far in excess of those experienced by humans; (5) single-chemical exposure does not mimic human exposures, which are generally to multiple chemicals simultaneously.

Mid- and short-term in vivo and in vitro bioassays
Advantages: (1) more rapid and less expensive than other assays; (2) large samples that are easily replicated; (3) biologically meaningful endpoints are measured (mutation, etc.); (4) can be used as screening assays to select chemicals for long-term bioassays.
Disadvantages: (1) in vitro not fully predictive of in vivo; (2) usually organism or organ specific; (3) potencies not comparable to whole animals or humans.

Chemical structure–biological activity associations
Advantages: (1) relatively easy, rapid and inexpensive; (2) reliable for certain chemical classes (e.g., nitrosamines and benzidine dyes); (3) developed from biological data but not dependent on additional biological experimentation.
Disadvantages: (1) not “biological”; (2) many exceptions to formulated rules; (3) retrospective and rarely (but becoming) prospective.

Mechanism-based inference
Advantages: (1) reasonably accurate for certain classes of chemicals; (2) permits refinements of hypotheses; (3) can orient risk assessments to sensitive populations.
Disadvantages: (1) mechanisms of chemical carcinogenesis undefined, multiple, and likely chemical or class specific; (2) may fail to highlight exceptions to general mechanisms.
Although the exact types and numbers of assays used for genetic toxicity assessment are constantly evolving and vary from country to country, the most common ones include assays for (1) gene mutation in bacteria and/or cultured mammalian cells and (2) chromosomal mutation in cultured mammalian cells and/or bone marrow within living mice. Some of the assays within this second category can also detect aneuploidy. Although these assays do not detect mutations in germ cells, they are used primarily because germ-cell assays are more costly and complex to perform. Nonetheless, germ-cell assays in mice are used when information about germ-cell effects is desired.
Systematic studies over a 25-year period (1970-1995), especially at the US National Toxicology Program in North Carolina, have resulted in the use of a discrete number of assays for detecting the mutagenic activity of agents. The rationale for evaluating the usefulness of the assays was based on their ability to detect agents that cause cancer in rodents and that are suspected of causing cancer in humans (i.e., carcinogens). This is because studies during the past several decades have indicated that cancer cells contain mutations in certain genes and that many carcinogens are also mutagens. Thus, cancer cells are viewed as containing somatic-cell mutations, and carcinogenesis is viewed as a type of somatic-cell mutagenesis.
The genetic toxicity assays used most commonly today have been selected not only because of their large database, relatively low cost, and ease of performance, but also because they have been shown to detect many rodent and, presumptively, human carcinogens. Consequently, genetic toxicity assays are used to predict the potential carcinogenicity of agents.
An important conceptual and practical development in the field of genetic toxicology was the recognition that many carcinogens are modified by enzymes within the body, creating altered forms (metabolites) that are frequently the ultimate carcinogenic and mutagenic form of the parent chemical. To duplicate this metabolism in a petri dish, Heinrich Malling showed that a preparation from rodent liver contains many of the enzymes necessary to perform this metabolic conversion or activation. Thus, many genetic toxicity assays performed in dishes or tubes (in vitro) employ the addition of similar enzyme preparations. Simple preparations are called S9 mix, and purified preparations are called microsomes. Some bacterial and mammalian cells have now been genetically engineered to contain some of the genes from rodents or humans that produce these enzymes, reducing the need to add S9 mix or microsomes.
The primary bacterial systems used for genetic toxicity screening are the Salmonella (Ames) mutagenicity assay and, to a much lesser extent, strain WP2 of Escherichia coli. Studies in the mid-1980s indicated that the use of only two strains of the Salmonella system (TA98 and TA100) was sufficient to detect approximately 90% of the known Salmonella mutagens. Thus, these two strains are used for most screening purposes; however, various other strains are available for more extensive testing.
These assays are performed in a variety of ways, but two general procedures are the plate-incorporation and liquid-incubation assays. In the plate-incorporation assay, the cells, the test chemical and (when desired) the S9 are added together into a liquefied agar and poured onto the surface of an agar petri plate. The top agar hardens within a few minutes, and the plates are incubated for two to three days, after which time mutant cells have grown to form visually detectable clusters of cells called colonies, which are then counted. The agar medium contains selective agents or is composed of ingredients such that only the newly mutated cells will grow. The liquid-incubation assay is similar, except the cells, test agent, and S9 are incubated together in liquid that does not contain liquefied agar, and then the cells are washed free of the test agent and S9 and seeded onto the agar.
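The colony counts from such plates are evaluated numerically. A minimal sketch of one widely cited evaluation criterion, the "two-fold rule", under which a result is commonly considered positive when the mean revertant count at any dose reaches at least twice the solvent-control mean; the function name, data layout and plate counts below are hypothetical:

```python
def ames_fold_increase(control_counts, treated_counts_by_dose):
    """Compute the fold increase in revertant colonies over the solvent
    control for each dose, and flag the result by the common two-fold rule.

    control_counts: colony counts from replicate solvent-control plates
    treated_counts_by_dose: dict mapping dose -> list of colony counts
    """
    control_mean = sum(control_counts) / len(control_counts)
    folds = {}
    for dose, counts in treated_counts_by_dose.items():
        folds[dose] = (sum(counts) / len(counts)) / control_mean
    # One common evaluation criterion: positive if any dose gives >= 2-fold
    positive = any(f >= 2.0 for f in folds.values())
    return folds, positive

# Hypothetical revertant colony counts per plate
folds, positive = ames_fold_increase(
    control_counts=[120, 110, 115],
    treated_counts_by_dose={1.0: [130, 125], 10.0: [260, 240], 100.0: [510, 495]},
)
```

In practice, laboratories also require a concentration-related increase before calling a result positive; the fold-increase calculation above is only the first step of that evaluation.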
Mutations in cultured mammalian cells are detected primarily in one of two genes: hprt and tk. Similar to the bacterial assays, mammalian cell lines (developed from rodent or human cells) are exposed to the test agent in plastic culture dishes or tubes and then are seeded into culture dishes that contain medium with a selective agent that permits only mutant cells to grow. The assays used for this purpose include the CHO/HPRT, the TK6, and the mouse lymphoma L5178Y/TK+/- assays. Other cell lines containing various DNA repair mutations as well as containing some human genes involved in metabolism are also used. These systems permit the recovery of mutations within the gene (gene mutation) as well as mutations involving regions of the chromosome flanking the gene (chromosomal mutation). However, this latter type of mutation is recovered to a much greater extent by the tk gene systems than by the hprt gene systems due to the location of the tk gene.
Similar to the liquid-incubation assay for bacterial mutagenicity, mammalian cell mutagenicity assays generally involve the exposure of the cells in culture dishes or tubes in the presence of the test agent and S9 for several hours. The cells are then washed, cultured for several more days to allow the normal (wild-type) gene products to be degraded and the newly mutant gene products to be expressed and accumulate, and then they are seeded into medium containing a selective agent that permits only the mutant cells to grow. Like the bacterial assays, the mutant cells grow into visually detectable colonies that are then counted.
Chromosomal mutation is identified primarily by cytogenetic assays, which involve exposing rodents and/or rodent or human cells in culture dishes to a test chemical, allowing one or more cell divisions to occur, staining the chromosomes, and then visually examining the chromosomes through a microscope to detect alterations in the structure or number of chromosomes. Although a variety of endpoints can be examined, the two that are currently accepted by regulatory agencies as being the most meaningful are chromosomal aberrations and a subcategory called micronuclei.
Considerable training and expertise are required to score cells for the presence of chromosomal aberrations, making this a costly procedure in terms of time and money. In contrast, scoring micronuclei requires little training, and their detection can be automated. Micronuclei appear as small dots within the cell that are distinct from the nucleus, which contains the chromosomes. Micronuclei result from either chromosome breakage or from aneuploidy. Because of the ease of scoring micronuclei compared to chromosomal aberrations, and because recent studies indicate that agents that induce chromosomal aberrations in the bone marrow of living mice generally induce micronuclei in this tissue, micronuclei are now commonly measured as an indication of the ability of an agent to induce chromosomal mutation.
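Because micronucleus data take the form of counts of affected cells out of cells scored, comparing treated and control animals is a statistical exercise. A minimal sketch, assuming a simple two-proportion z-test (real studies use more sophisticated designs and statistics, and the counts below are hypothetical):

```python
import math

def micronucleus_z_test(mn_control, n_control, mn_treated, n_treated):
    """Two-proportion z-test comparing micronucleated-cell frequencies.

    mn_control, mn_treated: micronucleated cells observed
    n_control, n_treated: total cells scored in each group
    Returns the z statistic and a one-sided p-value for an increase
    in frequency in the treated group.
    """
    p1 = mn_control / n_control
    p2 = mn_treated / n_treated
    pooled = (mn_control + mn_treated) / (n_control + n_treated)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_treated))
    z = (p2 - p1) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Hypothetical scoring: 5/2000 cells in controls vs. 25/2000 after treatment
z, p_value = micronucleus_z_test(5, 2000, 25, 2000)
```

With these hypothetical counts the five-fold increase in frequency is highly significant; with equal frequencies in the two groups the same function returns z = 0 and a one-sided p-value of 0.5.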
Although germ-cell assays are used far less frequently than the other assays described above, they are indispensable in determining whether an agent poses a risk to the germ cells, mutations in which can lead to health effects in succeeding generations. The most commonly used germ-cell assays are in mice, and involve systems that detect (1) heritable translocations (exchanges) among chromosomes (heritable translocation assay), (2) gene or chromosomal mutations involving specific genes (visible or biochemical specific-locus assays), and (3) mutations that affect viability (dominant lethal assay). As with the somatic-cell assays, the working assumption with the germ-cell assays is that agents positive in these assays are presumed to be potential human germ-cell mutagens.
Recent studies have indicated that only three pieces of information were necessary to detect approximately 90% of a set of 41 rodent carcinogens (i.e., presumptive human carcinogens and somatic-cell mutagens). These included (1) knowledge of the chemical structure of the agent, especially if it contains electrophilic moieties (see section on structure-activity relationships); (2) Salmonella mutagenicity data; and (3) data from a 90-day chronic toxicity assay in rodents (mice and rats). Indeed, essentially all of the IARC-declared human carcinogens are detectable as mutagens using just the Salmonella assay and the mouse bone-marrow micronucleus assay. The use of these mutagenicity assays for detecting potential human carcinogens is supported further by the finding that most human carcinogens are carcinogenic in both rats and mice (trans-species carcinogens) and that most trans-species carcinogens are mutagenic in Salmonella and/or induce micronuclei in mouse bone marrow.
With advances in DNA technology, the human genome project, and an improved understanding of the role of mutation in cancer, new genotoxicity assays are being developed that will likely be incorporated into standard screening procedures. Among these are the use of transgenic cells and rodents. Transgenic systems are those in which a gene from another species has been introduced into a cell or organism. For example, transgenic mice are now in experimental use that permit the detection of mutation in any organ or tissue of the animal, based on the introduction of a bacterial gene into the mouse. Bacterial cells, such as Salmonella, and mammalian cells (including human cell lines) are now available that contain genes involved in the metabolism of carcinogenic/mutagenic agents, such as the P450 genes. Molecular analysis of the actual mutations induced in the transgene within transgenic rodents, within native genes such as hprt, or within the target genes of Salmonella can now be performed. The exact nature of the mutations induced by a chemical can thus be determined, providing insights into its mechanism of action and allowing comparisons with mutations in humans presumptively exposed to the agent.
Molecular advances in cytogenetics now permit more detailed evaluation of chromosomal mutations. These include the use of probes (small pieces of DNA) that attach (hybridize) to specific genes. Rearrangements of genes on the chromosome can then be revealed by the altered location of the probes, which are fluorescent and easily visualized as coloured sectors on the chromosomes. The single-cell gel electrophoresis assay for DNA breakage (commonly called the “comet” assay) permits the detection of DNA breaks within single cells and may become an extremely useful tool in combination with cytogenetic techniques for detecting chromosomal damage.
After many years of use and the generation of a large and systematically developed database, genetic toxicity assessment can now be done with just a few assays for relatively small cost in a short period of time (a few weeks). The data produced can be used to predict the ability of an agent to be a rodent and, presumptively, human carcinogen/somatic-cell mutagen. Such an ability makes it possible to limit the introduction into the environment of mutagenic and carcinogenic agents and to develop alternative, nonmutagenic agents. Future studies should lead to even better methods with greater predictivity than the current assays.
The emergence of sophisticated technologies in molecular and cellular biology has spurred a relatively rapid evolution in the life sciences, including toxicology. In effect, the focus of toxicology is shifting from whole animals and populations of whole animals to the cells and molecules of individual animals and humans. Since the mid-1980s, toxicologists have begun to employ these new methodologies in assessing the effects of chemicals on living systems. As a logical progression, such methods are being adapted for the purposes of toxicity testing. These scientific advances have worked together with social and economic factors to effect change in the evaluation of product safety and potential risk.
Economic factors are specifically related to the volume of materials that must be tested. A plethora of new cosmetics, pharmaceuticals, pesticides, chemicals and household products is introduced into the market every year. All of these products must be evaluated for their potential toxicity. In addition, there is a backlog of chemicals already in use that have not been adequately tested. The enormous task of obtaining detailed safety information on all of these chemicals using traditional whole animal testing methods would be costly in terms of both money and time, if it could even be accomplished.
There are also societal issues that relate to public health and safety, as well as increasing public concern about the use of animals for product safety testing. With regard to human safety, public interest and environmental advocacy groups have placed significant pressure on government agencies to apply more stringent regulations on chemicals. A recent example of this has been a movement by some environmental groups to ban chlorine and chlorine-containing compounds in the United States. One of the motivations for such an extreme action lies in the fact that most of these compounds have never been adequately tested. From a toxicological perspective, the concept of banning a whole class of diverse chemicals based simply on the presence of chlorine is both scientifically unsound and irresponsible. Yet, it is understandable that from the public’s perspective, there must be some assurance that chemicals released into the environment do not pose a significant health risk. Such a situation underscores the need for more efficient and rapid methods to assess toxicity.
The other societal concern that has impacted the area of toxicity testing is animal welfare. The growing number of animal protection groups throughout the world have voiced considerable opposition to the use of whole animals for product safety testing. Active campaigns have been waged against manufacturers of cosmetics, household and personal care products and pharmaceuticals in attempts to stop animal testing. Such efforts in Europe have resulted in the passage of the Sixth Amendment to Directive 76/768/EEC (the Cosmetics Directive). The consequence of this Directive is that cosmetic products or cosmetic ingredients that have been tested in animals after January 1, 1998 cannot be marketed in the European Union, except where sufficiently validated alternative methods are not yet available. While this Directive has no jurisdiction over the sale of such products in the United States or other countries, it will significantly affect those companies that have international markets that include Europe.
The concept of alternatives, which forms the basis for the development of tests other than those on whole animals, is defined by the three Rs: reduction in the numbers of animals used; refinement of protocols so that animals experience less stress or discomfort; and replacement of current animal tests with in vitro tests (i.e., tests done outside of the living animal), computer models or tests on lower vertebrate or invertebrate species. The three Rs were introduced in a book published in 1959 by two British scientists, W.M.S. Russell and Rex Burch, The Principles of Humane Experimental Technique. Russell and Burch maintained that the only way in which valid scientific results could be obtained is through the humane treatment of animals, and believed that methods should be developed to reduce animal use and ultimately replace it. Interestingly, the principles outlined by Russell and Burch received little attention until the resurgence of the animal welfare movement in the mid-1970s. Today the concept of the three Rs is very much in the forefront with regard to research, testing and education.
In summary, the development of in vitro test methodologies has been influenced by a variety of factors that have converged over the last ten to twenty years. It is difficult to ascertain whether any of these factors alone would have had such a profound effect on toxicity testing strategies.
This section will focus solely on in vitro methods for evaluating toxicity, as one of the alternatives to whole-animal testing. Additional non-animal alternatives such as computer modelling and quantitative structure-activity relationships are discussed in other articles of this chapter.
In vitro studies are generally conducted in animal or human cells or tissues outside of the body. In vitro literally means “in glass”, and refers to procedures carried out on living material or components of living material cultured in petri dishes or in test tubes under defined conditions. These may be contrasted with in vivo studies, or those carried out “in the living animal”. While it is difficult, if not impossible, to project the effects of a chemical on a complex organism when the observations are confined to a single cell type in a dish, in vitro studies do provide a significant amount of information about intrinsic toxicity as well as cellular and molecular mechanisms of toxicity. In addition, they offer many advantages over in vivo studies in that they are generally less expensive and they may be conducted under more controlled conditions. Furthermore, despite the fact that small numbers of animals are still needed to obtain cells for in vitro cultures, these methods may be considered reduction alternatives (since many fewer animals are used compared to in vivo studies) and refinement alternatives (because they eliminate the need to subject the animals to the adverse toxic consequences imposed by in vivo experiments).
In order to interpret the results of in vitro toxicity tests, determine their potential usefulness in assessing toxicity and relate them to the overall toxicological process in vivo, it is necessary to understand which part of the toxicological process is being examined. The entire toxicological process consists of events that begin with the organism’s exposure to a physical or chemical agent, progress through cellular and molecular interactions and ultimately manifest themselves in the response of the whole organism. In vitro tests are generally limited to the part of the toxicological process that takes place at the cellular and molecular level. The types of information that may be obtained from in vitro studies include pathways of metabolism, interaction of active metabolites with cellular and molecular targets and potentially measurable toxic endpoints that can serve as molecular biomarkers for exposure. In an ideal situation, the mechanism of toxicity of each chemical from exposure to organismal manifestation would be known, such that the information obtained from in vitro tests could be fully interpreted and related to the response of the whole organism. However, this is virtually impossible, since relatively few complete toxicological mechanisms have been elucidated. Thus, toxicologists are faced with a situation in which the results of an in vitro test cannot be used as an entirely accurate prediction of in vivo toxicity because the mechanism is unknown. However, frequently during the process of developing an in vitro test, components of the cellular and molecular mechanism(s) of toxicity are elucidated.
One of the key unresolved issues surrounding the development and implementation of in vitro tests is related to the following consideration: should they be mechanistically based or is it sufficient for them to be descriptive? It is inarguably better from a scientific perspective to utilize only mechanistically based tests as replacements for in vivo tests. However, in the absence of complete mechanistic knowledge, the prospect of developing in vitro tests to completely replace whole animal tests in the near future is almost nil. This does not, however, rule out the use of more descriptive types of assays as early screening tools, which is the case presently. These screens have resulted in a significant reduction in animal use. Therefore, until such time as more mechanistic information is generated, it may be necessary to employ, to a more limited extent, tests whose results simply correlate well with those obtained in vivo.
In this section, several in vitro tests that have been developed to assess a chemical’s cytotoxic potential will be described. For the most part, these tests are easy to perform and analysis can be automated. One commonly used in vitro test for cytotoxicity is the neutral red assay. This assay is done on cells in culture, and for most applications, the cells can be maintained in culture dishes that contain 96 small wells, each 6.4 mm in diameter. Since each well can be used for a single determination, this arrangement can accommodate multiple concentrations of the test chemical as well as positive and negative controls with a sufficient number of replicates for each. Following treatment of the cells with various concentrations of the test chemical ranging over at least two orders of magnitude (e.g., from 0.01 mM to 1 mM), as well as positive and negative control chemicals, the cells are rinsed and treated with neutral red, a dye that can be taken up and retained only by live cells. The dye may be added upon removal of the test chemical to determine immediate effects, or it may be added at various times after the test chemical is removed to determine cumulative or delayed effects. The intensity of the colour in each well corresponds to the number of live cells in that well. The colour intensity is measured by a spectrophotometer which may be equipped with a plate reader. The plate reader is programmed to provide individual measurements for each of the 96 wells of the culture dish. This automated methodology permits the investigator to rapidly perform a concentration-response experiment and to obtain statistically useful data.
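The per-well readings produced by the plate reader are typically converted to percent-of-control values before the concentration-response analysis. A minimal sketch, assuming a dictionary of absorbance readings keyed by well ID; the plate layout, well names and readings below are hypothetical:

```python
def percent_viability(plate, control_wells):
    """Normalize neutral red absorbance readings from a 96-well plate.

    plate: dict mapping well ID (e.g., 'A1') -> absorbance reading
    control_wells: well IDs of the untreated (negative-control) wells
    Returns a dict of well -> viability as a percentage of the mean
    negative-control absorbance.
    """
    control_mean = sum(plate[w] for w in control_wells) / len(control_wells)
    return {w: 100.0 * a / control_mean for w, a in plate.items()}

# Hypothetical readings: wells A1-A3 are untreated controls,
# wells B1-B3 received one concentration of the test chemical
readings = {"A1": 0.80, "A2": 0.78, "A3": 0.82,
            "B1": 0.40, "B2": 0.42, "B3": 0.38}
viab = percent_viability(readings, ["A1", "A2", "A3"])
```

A full analysis would apply this normalization to replicate wells at each of the test concentrations and then plot or fit the resulting concentration-response curve.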
Another relatively simple assay for cytotoxicity is the MTT test. MTT (3[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) is a tetrazolium dye that is reduced by mitochondrial enzymes to a blue colour. Only cells with viable mitochondria will retain the ability to carry out this reaction; therefore the colour intensity is directly related to the degree of mitochondrial integrity. This is a useful test to detect general cytotoxic compounds as well as those agents that specifically target mitochondria.
The measurement of lactate dehydrogenase (LDH) activity is also used as a broad-based assay for cytotoxicity. This enzyme is normally present in the cytoplasm of living cells and is released into the cell culture medium through leaky cell membranes of dead or dying cells that have been adversely affected by a toxic agent. Small amounts of culture medium may be removed at various times after chemical treatment of the cells to measure the amount of LDH released and determine a time course of toxicity. While the LDH release assay is a very general assessment of cytotoxicity, it is useful because it is easy to perform and it may be done in real time.
There are many new methods being developed to detect cellular damage. More sophisticated methods employ fluorescent probes to measure a variety of intracellular parameters, such as calcium release and changes in pH and membrane potential. In general, these probes are very sensitive and may detect more subtle cellular changes, thus reducing the need to use cell death as an endpoint. In addition, many of these fluorescent assays may be automated by the use of 96-well plates and fluorescent plate readers.
Once data have been collected on a series of chemicals using one of these tests, the relative toxicities may be determined. The relative toxicity of a chemical, as determined in an in vitro test, may be expressed as the concentration that produces a 50% change in the endpoint response relative to untreated cells. This determination is referred to as the EC50 (Effective Concentration for 50% of the cells) and may be used to compare toxicities of different chemicals in vitro. (A similar term used in evaluating relative toxicity is IC50, indicating the concentration of a chemical that causes a 50% inhibition of a cellular process, e.g., the ability to take up neutral red.) It is not easy to assess whether the relative in vitro toxicity of the chemicals is comparable to their relative in vivo toxicities, since there are so many confounding factors in the in vivo system, such as toxicokinetics, metabolism, repair and defence mechanisms. In addition, since most of these assays measure general cytotoxicity endpoints, they are not mechanistically based. Therefore, agreement between in vitro and in vivo relative toxicities is simply correlative. Despite the numerous complexities and difficulties in extrapolating from in vitro to in vivo, these in vitro tests are proving to be very valuable because they are simple and inexpensive to perform and may be used as screens to flag highly toxic drugs or chemicals at early stages of development.
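The EC50 itself is estimated from the concentration-response data. A minimal sketch, assuming log-linear interpolation between the two adjacent test concentrations whose responses bracket 50% of control (real analyses usually fit a sigmoidal model to all of the data; the concentrations and responses below are hypothetical):

```python
import math

def ec50(concentrations, responses):
    """Estimate the EC50 by log-linear interpolation.

    concentrations: test concentrations in increasing order
    responses: endpoint values as a percentage of the untreated control
    """
    for i in range(len(concentrations) - 1):
        r_lo, r_hi = responses[i], responses[i + 1]
        if (r_lo - 50.0) * (r_hi - 50.0) <= 0:  # 50% lies in this interval
            frac = (50.0 - r_lo) / (r_hi - r_lo)
            c_lo, c_hi = concentrations[i], concentrations[i + 1]
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10.0 ** log_c
    raise ValueError("response never crosses 50% of control")

# Hypothetical neutral red data: % viability at each concentration (mM)
conc = [0.01, 0.1, 1.0, 10.0]
viability = [98.0, 85.0, 40.0, 5.0]
estimate = ec50(conc, viability)
```

Interpolation is done on the logarithm of concentration because concentration-response curves are conventionally plotted and analysed on a log scale.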
In vitro tests can also be used to assess specific target organ toxicity. There are a number of difficulties associated with designing such tests, the most notable being the inability of in vitro systems to maintain many of the features of the organ in vivo. Frequently, when cells are taken from animals and placed into culture, they tend either to degenerate quickly or to dedifferentiate, that is, lose their organ-like functions and become more generic. This presents a problem in that within a short period of time, usually a few days, the cultures are no longer useful for assessing organ-specific effects of a toxin.
Many of these problems are being overcome because of recent advances in molecular and cellular biology. Information that is obtained about the cellular environment in vivo may be utilized in modulating culture conditions in vitro. Since the mid-1980s, new growth factors and cytokines have been discovered, and many of these are now available commercially. Addition of these factors to cells in culture helps to preserve their integrity and may also help to retain more differentiated functions for longer periods of time. Other basic studies have increased the knowledge of the nutritional and hormonal requirements of cells in culture, so that new media may be formulated. Recent advances have also been made in identifying both naturally occurring and artificial extracellular matrices on which cells may be cultured. Culture of cells on these different matrices can have profound effects on both their structure and function. A major advantage derived from this knowledge is the ability to intricately control the environment of cells in culture and individually examine the effects of these factors on basic cell processes and on their responses to different chemical agents. In short, these systems can provide great insight into organ-specific mechanisms of toxicity.
Many target organ toxicity studies are conducted in primary cells, which by definition are freshly isolated from an organ, and usually exhibit a finite lifetime in culture. There are many advantages to having primary cultures of a single cell type from an organ for toxicity assessment. From a mechanistic perspective, such cultures are useful for studying specific cellular targets of a chemical. In some instances, two or more cell types from an organ may be cultured together, and this provides an added advantage of being able to look at cell-cell interactions in response to a toxin. Some co-culture systems for skin have been engineered so that they form a three-dimensional structure resembling skin in vivo. It is also possible to co-culture cells from different organs, for example, liver and kidney. This type of culture would be useful in assessing the kidney-specific effects of a chemical that must be bioactivated in the liver.
Molecular biological tools have also played an important role in the development of continuous cell lines that can be useful for target organ toxicity testing. These cell lines are generated by transfecting DNA into primary cells. In the transfection procedure, the cells and the DNA are treated such that the DNA can be taken up by the cells. The DNA is usually from a virus and contains a gene or genes that, when expressed, allow the cells to become immortalized (i.e., able to live and grow for extended periods of time in culture). The DNA can also be engineered so that the immortalizing gene is controlled by an inducible promoter. The advantage of this type of construct is that the cells will divide only when they receive the appropriate chemical stimulus to allow expression of the immortalizing gene. An example of such a construct is the large T antigen gene from Simian Virus 40 (SV40) (the immortalizing gene), preceded by the promoter region of the metallothionein gene, which is induced by the presence of a metal in the culture medium. Thus, after the gene is transfected into the cells, the cells may be treated with low concentrations of zinc to stimulate the MT promoter and turn on the expression of the T antigen gene. Under these conditions, the cells proliferate. When zinc is removed from the medium, the cells stop dividing and under ideal conditions return to a state where they express their tissue-specific functions.
The ability to generate immortalized cells, combined with advances in cell culture technology, has greatly contributed to the creation of cell lines from many different organs, including brain, kidney and liver. However, before these cell lines may be used as a surrogate for the bona fide cell types, they must be carefully characterized to determine how “normal” they really are.
Other in vitro systems for studying target organ toxicity involve increasing complexity. As in vitro systems progress in complexity from single cell to whole organ culture, they become more comparable to the in vivo milieu, but at the same time they become much more difficult to control given the increased number of variables. Therefore, what may be gained in moving to a higher level of organization can be lost in the inability of the researcher to control the experimental environment. Table 33.9 compares some of the characteristics of various in vitro systems that have been used to study hepatotoxicity.
Table 33.9 Comparison of in vitro systems used to study hepatotoxicity (in part)

System | Complexity (level of interaction) | Ability to retain liver-specific functions | Potential duration of culture | Ability to control environment
Immortalized cell lines | some cell to cell (varies with cell line) | poor to good (varies with cell line) | |
Primary hepatocyte cultures | cell to cell | fair to excellent (varies with culture conditions) | days to weeks |
Liver cell co-cultures | cell to cell (between the same and different cell types) | good to excellent | |
Liver slices | cell to cell (among all cell types) | good to excellent | hours to days |
Isolated, perfused liver | cell to cell (among all cell types), and intra-organ | | |
Precision-cut tissue slices are being used more extensively for toxicological studies. There are new instruments available that enable the researcher to cut uniform tissue slices in a sterile environment. Tissue slices offer some advantage over cell culture systems in that all of the cell types of the organ are present and they maintain their in vivo architecture and intercellular communication. Thus, in vitro studies may be conducted to determine the target cell type within an organ as well as to investigate specific target organ toxicity. A disadvantage of the slices is that they degenerate rapidly after the first 24 hours of culture, mainly due to poor diffusion of oxygen to the cells on the interior of the slices. However, recent studies have indicated that more efficient aeration may be achieved by gentle rotation. This, together with the use of a more complex medium, allows the slices to survive for up to 96 hours.
Tissue explants are similar in concept to tissue slices and may also be used to determine the toxicity of chemicals in specific target organs. Tissue explants are established by removing a small piece of tissue (for teratogenicity studies, an intact embryo) and placing it into culture for further study. Explant cultures have been useful for short-term toxicity studies including irritation and corrosivity in skin, asbestos studies in trachea and neurotoxicity studies in brain tissue.
Isolated perfused organs may also be used to assess target organ toxicity. These systems offer an advantage similar to that of tissue slices and explants in that all cell types are present, but without the stress to the tissue introduced by the manipulations involved in preparing slices. In addition, they allow for the maintenance of intra-organ interactions. A major disadvantage is their short-term viability, which limits their use for in vitro toxicity testing. In terms of serving as an alternative, these cultures may be considered a refinement since the animals do not experience the adverse consequences of in vivo treatment with toxicants. However, their use does not significantly decrease the numbers of animals required.
In summary, there are several types of in vitro systems available for assessing target organ toxicity. It is possible to acquire much information about mechanisms of toxicity using one or more of these techniques. The difficulty remains in knowing how to extrapolate from an in vitro system, which represents a relatively small part of the toxicological process, to the whole process occurring in vivo.
Perhaps the most contentious whole-animal toxicity test from an animal welfare perspective is the Draize test for eye irritation, which is conducted in rabbits. In this test, a small fixed dose of a chemical is placed in one of the rabbit’s eyes while the other eye is used as a control. The degree of irritation and inflammation is scored at various times after exposure. A major effort is being made to develop methodologies to replace this test, which has been criticized not only for humane reasons, but also because of the subjectivity of the observations and variability of the results. It is interesting to note that despite the harsh criticism the Draize test has received, it has proven to be remarkably successful in predicting human eye irritants, particularly for slightly to moderately irritating substances, which are difficult to identify by other methods. Thus, the demands on in vitro alternatives are great.
The quest for alternatives to the Draize test is a complicated one, albeit one that is predicted to be successful. Numerous in vitro and other alternatives have been developed and in some cases implemented. Refinement alternatives to the Draize test, which, by definition, are less painful or distressing to the animals, include the Low Volume Eye Test, in which smaller amounts of test materials are placed in the rabbits’ eyes, both for humane reasons and to more closely mimic the amounts to which people may actually be accidentally exposed. Another refinement is that substances with a pH less than 2 or greater than 11.5 are no longer tested in animals, since they are known to be severely irritating to the eye.
Between 1980 and 1989, there was an estimated 87% decline in the number of rabbits used for eye irritation testing of cosmetics. In vitro tests have been incorporated as part of a tier-testing approach to bring about this vast reduction in whole-animal tests. This approach is a multi-step process that begins with a thorough examination of the historical eye irritation data and physical and chemical analysis of the chemical to be evaluated. If these two processes do not yield enough information, then a battery of in vitro tests is performed. The additional data obtained from the in vitro tests might then be sufficient to assess the safety of the substance. If not, the final step is to perform limited in vivo tests. It is easy to see how this approach can eliminate, or at least drastically reduce, the number of animals needed to predict the safety of a test substance.
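The stepwise logic of such a tier-testing programme can be sketched as a simple decision function. This is an illustrative sketch only: the function name, the boolean inputs and the returned labels are hypothetical stand-ins for what is in practice an expert weighing of evidence at each tier.

```python
# Illustrative sketch of a tier-testing decision flow for eye irritation.
# The inputs are hypothetical flags standing in for expert judgements
# about whether each tier yields enough information for a safety decision.

def assess_eye_irritation(prior_data_sufficient: bool,
                          in_vitro_battery_sufficient: bool) -> str:
    """Return the tier at which a safety decision can be reached."""
    # Tier 1: historical eye irritation data plus physical and
    # chemical analysis of the substance (e.g., extreme pH values
    # are classed as severe irritants without animal testing).
    if prior_data_sufficient:
        return "decision from historical and physicochemical data"
    # Tier 2: battery of in vitro tests.
    if in_vitro_battery_sufficient:
        return "decision from in vitro battery"
    # Tier 3: limited in vivo testing only as a last resort.
    return "limited in vivo test required"

print(assess_eye_irritation(False, True))  # decision from in vitro battery
```

The point of the structure is that each tier is consulted only when the preceding tiers fail to resolve the question, which is what drives the reduction in animal use.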
The battery of in vitro tests that is used as part of this tier-testing strategy depends upon the needs of the particular industry. Eye irritation testing is done by a wide variety of industries from cosmetics to pharmaceuticals to industrial chemicals. The type of information required by each industry varies and therefore it is not possible to define a single battery of in vitro tests. A test battery is generally designed to assess five parameters: cytotoxicity, changes in tissue physiology and biochemistry, quantitative structure-activity relationships, inflammation mediators, and recovery and repair. An example of a test for cytotoxicity, which is one possible cause for irritation, is the neutral red assay using cultured cells (see above). Changes in cellular physiology and biochemistry resulting from exposure to a chemical may be assayed in cultures of human corneal epithelial cells. Alternatively, investigators have also used intact or dissected bovine or chicken eyeballs obtained from slaughterhouses. Many of the endpoints measured in these whole organ cultures are the same as those measured in vivo, such as corneal opacity and corneal swelling.
Inflammation is frequently a component of chemical-induced eye injury, and there are a number of assays available to examine this parameter. Various biochemical assays detect the presence of mediators released during the inflammatory process such as arachidonic acid and cytokines. The chorioallantoic membrane (CAM) of the hen’s egg may also be used as an indicator of inflammation. In the CAM assay, a small piece of the shell of a ten-to-14-day chick embryo is removed to expose the CAM. The chemical is then applied to the CAM and signs of inflammation, such as vascular hemorrhaging, are scored at various times thereafter.
One of the most difficult in vivo processes to assess in vitro is recovery and repair of ocular injury. A newly developed instrument, the silicon microphysiometer, measures small changes in extracellular pH and can be used to monitor cultured cells in real time. This analysis has been shown to correlate fairly well with in vivo recovery and has been used as an in vitro test for this process. This has been a brief overview of the types of tests being employed as alternatives to the Draize test for ocular irritation. It is likely that within the next several years a complete series of in vitro test batteries will be defined and each will be validated for its specific purpose.
The key to regulatory acceptance and implementation of in vitro test methodologies is validation, the process by which the credibility of a candidate test is established for a specific purpose. Efforts to define and coordinate the validation process have been made both in the United States and in Europe. The European Union established the European Centre for the Validation of Alternative Methods (ECVAM) in 1993 to coordinate efforts there and to interact with American organizations such as the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), an academic centre in the United States, and the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM), composed of representatives from the National Institutes of Health, the US Environmental Protection Agency, the US Food and Drug Administration and the Consumer Products Safety Commission.
Validation of in vitro tests requires substantial organization and planning. There must be consensus among government regulators and industrial and academic scientists on acceptable procedures, and sufficient oversight by a scientific advisory board to ensure that the protocols meet set standards. The validation studies should be performed in a series of reference laboratories using calibrated sets of chemicals from a chemical bank and cells or tissues from a single source. Both intralaboratory repeatability and interlaboratory reproducibility of a candidate test must be demonstrated and the results subjected to appropriate statistical analysis. Once the results from the different components of the validation studies have been compiled, the scientific advisory board can make recommendations on the validity of the candidate test(s) for a specific purpose. In addition, results of the studies should be published in peer-reviewed journals and placed in a database.
The definition of the validation process is currently a work in progress. Each new validation study will provide information useful to the design of the next study. International communication and cooperation are essential for the expeditious development of a widely acceptable series of protocols, particularly given the increased urgency imposed by the passage of the EC Cosmetics Directive. This legislation may indeed provide the needed impetus for a serious validation effort to be undertaken. It is only through completion of this process that the acceptance of in vitro methods by the various regulatory communities can commence.
This article has provided a broad overview of the current status of in vitro toxicity testing. The science of in vitro toxicology is relatively young, but it is growing exponentially. The challenge for the years ahead is to incorporate the mechanistic knowledge generated by cellular and molecular studies into the vast inventory of in vivo data to provide a more complete description of toxicological mechanisms as well as to establish a paradigm by which in vitro data may be used to predict toxicity in vivo. It will only be through the concerted efforts of toxicologists and government representatives that the inherent value of these in vitro methods can be realized.
Structure-activity relationship (SAR) analysis is the use of information on the molecular structure of chemicals to predict important characteristics related to persistence, distribution, uptake and absorption, and toxicity. SAR is an alternative method of identifying potentially hazardous chemicals, which holds promise of assisting industries and governments in prioritizing substances for further evaluation or for early-stage decision making on new chemicals. Toxicology is an increasingly expensive and resource-intensive undertaking. Increased concerns over the potential for chemicals to cause adverse effects in exposed human populations have prompted regulatory and health agencies to expand the range and sensitivity of tests to detect toxicological hazards. At the same time, the real and perceived burdens of regulation upon industry have provoked concerns for the practicality of toxicity testing methods and data analysis. At present, the determination of chemical carcinogenicity depends upon lifetime testing of at least two species, both sexes, at several doses, with careful histopathological analysis of multiple organs, as well as detection of preneoplastic changes in cells and target organs. In the United States, the cancer bioassay is estimated to cost in excess of $3 million (1995 dollars).
Even with unlimited financial resources, the burden of testing the approximately 70,000 existing chemicals produced in the world today would exceed the available resources of trained toxicologists. Centuries would be required to complete even a first tier evaluation of these chemicals (NRC 1984). In many countries ethical concerns over the use of animals in toxicity testing have increased, bringing additional pressures upon the uses of standard methods of toxicity testing. SAR has been widely used in the pharmaceutical industry to identify molecules with potential for beneficial use in treatment (Hansch and Zhang 1993). In environmental and occupational health policy, SAR is used to predict the dispersion of compounds in the physical-chemical environment and to screen new chemicals for further evaluation of potential toxicity. Under the US Toxic Substances Control Act (TSCA), the EPA has, since 1979, used an SAR approach as a “first screen” of new chemicals in the premanufacture notification (PMN) process; Australia uses a similar approach as part of its new chemicals notification (NICNAS) procedure. In the United States, SAR analysis is an important basis for determining that there is a “reasonable basis to conclude that manufacture, processing, distribution, use or disposal of the substance will present an unreasonable risk of injury to human health or the environment”, as required by Section 5(f) of TSCA. On the basis of this finding, EPA can then require actual tests of the substance under Section 6 of TSCA.
The scientific rationale for SAR is based upon the assumption that the molecular structure of a chemical will predict important aspects of its behaviour in physical-chemical and biological systems (Hansch and Leo 1979).
The SAR review process includes identification of the chemical structure, including empirical formulations as well as the pure compound; identification of structurally analogous substances; searching databases and literature for information on structural analogs; and analysis of toxicity and other data on structural analogs. In some rare cases, information on the structure of the compound alone can be sufficient to support some SAR analysis, based upon well-understood mechanisms of toxicity. Several databases on SAR have been compiled, as well as computer-based methods for molecular structure prediction.
With this information, the following endpoints can be estimated with SAR:
· physical-chemical parameters: boiling point, vapour pressure, water solubility, octanol/water partition coefficient
· biological/environmental fate parameters: biodegradation, soil sorption, photodegradation, pharmacokinetics
· toxicity parameters: aquatic organism toxicity, absorption, acute mammalian toxicity (limit test or LD50), dermal, lung and eye irritation, sensitization, subchronic toxicity, mutagenicity.
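To illustrate the flavour of a structure-based screen, the sketch below flags SMILES strings containing substring patterns loosely associated with mutagenicity alerts. The alert list and the plain substring matching are drastic simplifications introduced for illustration; real SAR systems use graph-based substructure searches (e.g., SMARTS queries), not string containment.

```python
# Toy structural-alert screen over SMILES strings. The alert patterns
# below are illustrative simplifications; production systems match
# substructures on the molecular graph, not on the text of the SMILES.

ALERTS = {
    "aromatic nitro": "[N+](=O)[O-]",  # nitro group as commonly written in SMILES
    "aromatic amine": "Nc1",           # amine attached to an aromatic ring (crude pattern)
    "epoxide": "C1OC1",                # three-membered ether ring
}

def screen(smiles: str) -> list:
    """Return the names of any structural alerts found in the SMILES string."""
    return [name for name, pattern in ALERTS.items() if pattern in smiles]

# Nitrobenzene carries a classic mutagenicity alert; ethanol carries none.
print(screen("c1ccccc1[N+](=O)[O-]"))  # ['aromatic nitro']
print(screen("CCO"))                    # []
```

A screen of this kind can only prioritize: a flagged chemical warrants testing, while an unflagged one is not thereby shown to be safe.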
It should be noted that SAR methods do not exist for such important health endpoints as carcinogenicity, developmental toxicity, reproductive toxicity, neurotoxicity, immunotoxicity or other target organ effects. This is due to three factors: the lack of a large database upon which to test SAR hypotheses, lack of knowledge of structural determinants of toxic action, and the multiplicity of target cells and mechanisms that are involved in these endpoints (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”). Some limited attempts have been made to utilize SAR for predicting pharmacokinetics, using information on partition coefficients and solubility (Johanson and Naslund 1988). More extensive quantitative SAR has been done to predict P450-dependent metabolism of a range of compounds and binding of dioxin- and PCB-like molecules to the cytosolic “dioxin” receptor (Hansch and Zhang 1993).
SAR has been shown to have varying predictability for some of the endpoints listed above, as shown in table 33.10. This table presents data from two comparisons of predicted activity with actual results obtained by empirical measurement or toxicity testing. SAR as conducted by US EPA experts performed more poorly for predicting physical-chemical properties than for predicting biological activity, including biodegradation. For toxicity endpoints, SAR performed best for predicting mutagenicity. Ashby and Tennant (1991) in a more extended study also found good predictability of short-term genotoxicity in their analysis of NTP chemicals. These findings are not surprising, given current understanding of molecular mechanisms of genotoxicity (see “Genetic toxicology”) and the role of electrophilicity in DNA binding. In contrast, SAR tended to underpredict systemic and subchronic toxicity in mammals and to overpredict acute toxicity to aquatic organisms.
Table 33.10 (in part)
Acute mammalian toxicity (LD50)1
Carcinogenicity3: two-year bioassay
Source: Data from OECD, personal communication, C. Auer, US EPA. Only those endpoints for which comparable SAR predictions and actual test data were available were used in this analysis. NTP data are from Ashby and Tennant 1991.
1 Of concern was the failure by SAR to predict acute toxicity in 12% of the chemicals tested.
2 OECD data, based on Ames test concordance with SAR.
3 NTP data, based on genetox assays compared to SAR predictions for several classes of “structurally alerting chemicals”.
4 Concordance varies with class; highest concordance was with aromatic amino/nitro compounds; lowest with “miscellaneous” structures.
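Concordance, as used in comparisons of this kind, is simply the fraction of chemicals on which the SAR prediction and the empirical test result agree; it can be computed from a two-by-two agreement table. The counts in the sketch below are hypothetical and are not the OECD or NTP figures.

```python
# Sketch: summarizing agreement between SAR predictions and bioassay
# results as a 2x2 table. The counts used here are hypothetical.

def concordance_stats(tp, fp, fn, tn):
    """Return overall concordance, sensitivity and specificity."""
    total = tp + fp + fn + tn
    return {
        "concordance": (tp + tn) / total,  # fraction of predictions that agree
        "sensitivity": tp / (tp + fn),     # true toxicants correctly flagged
        "specificity": tn / (tn + fp),     # non-toxicants correctly cleared
    }

stats = concordance_stats(tp=40, fp=10, fn=5, tn=45)
print(stats)  # concordance 0.85 for these hypothetical counts
```

Reporting sensitivity and specificity alongside concordance matters, because a screen that underpredicts toxicity (low sensitivity) can still show respectable overall agreement.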
For other toxic endpoints, as noted above, SAR has less demonstrable utility. Mammalian toxicity predictions are complicated by the lack of SAR for toxicokinetics of complex molecules. Nevertheless, some attempts have been made to propose SAR principles for complex mammalian toxicity endpoints (for instance, see Bernstein (1984) for an SAR analysis of potential male reproductive toxicants). In most cases, the database is too small to permit rigorous testing of structure-based predictions.
At this point it may be concluded that SAR may be useful mainly for prioritizing the investment of toxicity testing resources or for raising early concerns about potential hazard. Only in the case of mutagenicity is it likely that SAR analysis by itself can be utilized with reliability to inform other decisions. For no endpoint is it likely that SAR can provide the type of quantitative information required for risk assessment purposes as discussed elsewhere in this chapter and Encyclopaedia.
Toxicology plays a major role in the development of regulations and other occupational health policies. In order to prevent occupational injury and illness, decisions are increasingly based upon information obtainable prior to, or in the absence of, the types of human exposures that would yield definitive information on risk, such as epidemiology studies. In addition, toxicological studies, as described in this chapter, can provide precise information on dose and response under the controlled conditions of laboratory research; this information is often difficult to obtain in the uncontrolled setting of occupational exposures. However, this information must be carefully evaluated in order to estimate the likelihood of adverse effects in humans, the nature of these adverse effects, and the quantitative relationship between exposures and effects.
Considerable attention has been given in many countries, since the 1980s, to developing objective methods for utilizing toxicological information in regulatory decision-making. Formal methods, frequently referred to as risk assessment, have been proposed and utilized in these countries by both governmental and non-governmental entities. Risk assessment has been varyingly defined; fundamentally it is an evaluative process that incorporates toxicology, epidemiology and exposure information to identify and estimate the probability of adverse effects associated with exposures to hazardous substances or conditions. Risk assessment may be qualitative in nature, indicating the nature of an adverse effect and a general estimate of likelihood, or it may be quantitative, with estimates of numbers of affected persons at specific levels of exposure. In many regulatory systems, risk assessment is undertaken in four stages: hazard identification, the description of the nature of the toxic effect; dose-response evaluation, a semi-quantitative or quantitative analysis of the relationship between exposure (or dose) and severity or likelihood of toxic effect; exposure assessment, the evaluation of information on the range of exposures likely to occur for populations in general or for subgroups within populations; risk characterization, the compilation of all the above information into an expression of the magnitude of risk expected to occur under specified exposure conditions (see NRC 1983 for a statement of these principles).
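Where a quantitative risk assessment is performed for a carcinogen, the exposure assessment and dose-response stages often reduce to a pair of simple calculations under a linear no-threshold assumption. The sketch below is illustrative only: the intake defaults and the slope factor are hypothetical placeholders, not regulatory values.

```python
# Sketch of the quantitative steps in a four-stage risk assessment for
# an inhaled carcinogen under a linear no-threshold model.
# All parameter values are hypothetical placeholders.

def chronic_daily_intake(conc_mg_m3, inhalation_m3_day=20.0,
                         body_weight_kg=70.0, exposed_fraction=1.0):
    """Exposure assessment: average daily dose in mg/kg-day."""
    return conc_mg_m3 * inhalation_m3_day * exposed_fraction / body_weight_kg

def excess_lifetime_risk(dose_mg_kg_day, slope_factor_per_mg_kg_day):
    """Dose-response evaluation plus risk characterization:
    excess lifetime cancer risk = slope factor x dose."""
    return slope_factor_per_mg_kg_day * dose_mg_kg_day

dose = chronic_daily_intake(conc_mg_m3=0.001)  # about 2.9e-4 mg/kg-day
risk = excess_lifetime_risk(dose, 0.05)        # unitless probability
print(f"excess lifetime risk: {risk:.1e}")
```

The hazard identification stage decides whether such a linear model is appropriate at all; for non-cancer endpoints, a threshold-based approach (e.g., dividing a no-observed-effect level by uncertainty factors) is used instead.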
In this section, three approaches to risk assessment are presented as illustrative. It is impossible to provide a comprehensive compendium of risk assessment methods used throughout the world, and these selections should not be taken as prescriptive. It should be noted that there are trends towards harmonization of risk assessment methods, partly in response to provisions in the recent GATT accords. Two processes of international harmonization of risk assessment methods are currently underway, through the International Programme on Chemical Safety (IPCS) and the Organization for Economic Cooperation and Development (OECD). These organizations also maintain current information on national approaches to risk assessment.
As in many other countries, risk due to exposure to chemicals is regulated in Japan according to the category of chemicals concerned, as listed in table 33.11. The governmental ministry or agency in charge varies. In the case of industrial chemicals in general, the major law that applies is the Law Concerning Examination and Regulation of Manufacture, Etc. of Chemical Substances, or Chemical Substances Control Law (CSCL) for short. The agencies in charge are the Ministry of International Trade and Industry and the Ministry of Health and Welfare. In addition, the Labour Safety and Hygiene Law (by the Ministry of Labour) provides that industrial chemicals should be examined for possible mutagenicity and, if the chemical of concern is found to be mutagenic, the exposure of workers to the chemical should be minimized by enclosure of production facilities, installation of local exhaust systems, use of protective equipment, and so on.
Table 33.11 Laws regulating chemicals in Japan (in part)

Food and food additives: Foodstuff Hygiene Law
Narcotics Control Law
Agricultural Chemicals Control Law
Chemical Substances Control Law (MHW & MITI)
All chemicals except for radioactive substances
Law concerning Regulation of House-Hold Products Containing Hazardous Substances
Poisonous and Deleterious Substances Control Law
Labour Safety and Hygiene Law
Law concerning Radioactive Substances

Abbreviations: MHW = Ministry of Health and Welfare; MAFF = Ministry of Agriculture, Forestry and Fishery; MITI = Ministry of International Trade and Industry; MOL = Ministry of Labour; STA = Science and Technology Agency.
Because hazardous industrial chemicals will be identified primarily by the CSCL, the framework of tests for hazard identification under CSCL will be described in this section.
The original CSCL was passed by the Diet (the parliament of Japan) in 1973 and took effect on 16 April 1974. The basic motivation for the Law was the prevention of environmental pollution and resulting human health effects by PCBs and PCB-like substances. PCBs are characterized by (1) persistence in the environment (poorly biodegradable), (2) increasing concentration as one goes up the food chain (or food web) (bioaccumulation) and (3) chronic toxicity in humans. Accordingly, the Law mandated that each industrial chemical be examined for such characteristics prior to marketing in Japan. In parallel with the passage of the Law, the Diet decided that the Environment Agency should monitor the general environment for possible chemical pollution. The Law was then amended by the Diet in 1986 (the amendment taking effect in 1987) in order to harmonize with actions of the OECD regarding health and the environment, the lowering of non-tariff barriers in international trade and especially the setting of a minimum premarketing set of data (MPD) and related test guidelines. The amendment also reflected the observation at the time, through monitoring of the environment, that chemicals such as trichloroethylene and tetrachloroethylene, which are poorly biodegradable and chronically toxic although not highly bioaccumulating, can pollute the environment; these chemical substances were detected in groundwater nationwide.
The Law classifies industrial chemicals into two categories: existing chemicals and new chemicals. The existing chemicals are those listed in the “Existing Chemicals Inventory” (established with the passage of the original Law) and number about 20,000, the number depending on the way some chemicals are named in the inventory. Chemicals not in the inventory are called new chemicals. The government is responsible for hazard identification of the existing chemicals, whereas the company or other entity that wishes to introduce a new chemical into the market in Japan is responsible for hazard identification of the new chemical. Two governmental ministries, the Ministry of Health and Welfare (MHW) and the Ministry of International Trade and Industry (MITI), are in charge of the Law, and the Environment Agency can express its opinion when necessary. Radioactive substances, specified poisons, stimulants and narcotics are excluded because they are regulated by other laws.
The flow scheme of examination is depicted in figure 33.15; it is a stepwise system in principle. All chemicals (for exceptions, see below) should be examined for biodegradability in vitro. If the chemical is readily biodegradable, it is considered "safe". Otherwise, the chemical is examined for bioaccumulation. If it is found to be "highly accumulating", full toxicity data are requested, on the basis of which the chemical will be classified as a "Class 1 specified chemical substance" when toxicity is confirmed, or "safe" otherwise. A chemical with no or low accumulation is subject to toxicity screening tests, which consist of mutagenicity tests and 28-day repeated dosing in experimental animals (for details, see table 33.12). After comprehensive evaluation of the toxicity data, the chemical will be classified as a "Designated chemical substance" if the data indicate toxicity; otherwise, it is considered "safe". When other data suggest that there is a great possibility of environmental pollution by the chemical of concern, full toxicity data are requested, on the basis of which the designated chemical will be reclassified as a "Class 2 specified chemical substance" when positive; otherwise, it is considered "safe". Toxicological and ecotoxicological characteristics of "Class 1 specified chemical substances", "Class 2 specified chemical substances" and "Designated chemical substances" are listed in table 33.13 together with outlines of regulatory actions.
Table 33.12 Test items under the CSCL (in part)
Biodegradability: for 2 weeks in principle, in vitro, with activated sludge
Bioaccumulation: for 8 weeks in principle, with carp
Mutagenicity (bacteria): Ames test and test with E. coli, ±S9 mix
Mutagenicity (chromosomal aberration): CHL cells, etc., ±S9 mix
28-day repeated dosing: rats, 3 dose levels plus control for NOEL; 2-week recovery test at the highest dose level in addition

Table 33.13 Outline of regulatory actions (in part)
Class 1 specified chemical substances: authorization to manufacture or import necessary1
Class 2 specified chemical substances: notification on scheduled manufacturing or import quantity
Designated chemical substances: report on manufacturing or import quantity
1 No authorization in practice.
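The stepwise examination scheme described in the text (biodegradability, then bioaccumulation, then toxicity screening) can be sketched as a decision function. The boolean inputs below are illustrative stand-ins for the actual test outcomes and comprehensive evaluations; this is a sketch of the logic, not the statutory procedure.

```python
# Simplified sketch of the CSCL stepwise examination flow described
# in the text. Classification labels follow the text; the boolean
# inputs stand in for actual test outcomes and are illustrative only.

def classify(readily_biodegradable,
             highly_accumulating=False,
             full_toxicity_positive=False,
             screening_toxicity_positive=False,
             pollution_likely=False):
    if readily_biodegradable:
        return "safe"
    if highly_accumulating:
        # Full toxicity data requested.
        if full_toxicity_positive:
            return "Class 1 specified chemical substance"
        return "safe"
    # No or low accumulation: mutagenicity tests + 28-day repeated dosing.
    if not screening_toxicity_positive:
        return "safe"
    if pollution_likely:
        # Full toxicity data requested for the designated chemical.
        if full_toxicity_positive:
            return "Class 2 specified chemical substance"
        return "safe"
    return "Designated chemical substance"

print(classify(readily_biodegradable=True))  # safe
```

Writing the flow out this way makes clear that full toxicity data are requested on only two branches: high bioaccumulation, or a designated chemical with evidence of likely environmental pollution.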
Testing is not required for a new chemical with a limited use amount (i.e., less than 1,000 kg/company/year and less than 1,000 kg/year for all of Japan). Polymers are examined following the high molecular-weight compound flow scheme, which was developed on the assumption that absorption into the body is unlikely when the chemical has a molecular weight greater than 1,000 and is stable in the environment.
From the time the CSCL went into effect in 1973 to the end of 1996, 1,087 existing chemical items were examined under the original and amended CSCL. Among the 1,087, nine items (some identified by generic names) were classified as "Class 1 specified chemical substances". Among those remaining, 36 were classified as "designated", of which 23 were reclassified as "Class 2 specified chemical substances" and the other 13 remained "designated". The names of the Class 1 and 2 specified chemical substances are listed in figure 33.16. It is clear from this list that most of the Class 1 chemicals are organochlorine pesticides, in addition to PCB and its substitute, except for one seaweed killer. A majority of the Class 2 chemicals are seaweed killers, with the exceptions of three once widely used chlorinated hydrocarbon solvents.
In the same period, 2,335 new chemicals were submitted for approval, of which 221 (about 9.5%) were identified as "designated", but none as Class 1 or 2 chemicals. The other chemicals were considered "safe" and approved for manufacture or import.
Neurotoxicity and reproductive toxicity are important areas for risk assessment, since the nervous and reproductive systems are highly sensitive to xenobiotic effects. Many agents have been identified as toxic to these systems in humans (Barlow and Sullivan 1982; OTA 1990). Many pesticides are deliberately designed to disrupt reproduction and neurological function in target organisms, such as insects, through interference with hormonal biochemistry and neurotransmission.
It is difficult to identify substances potentially toxic to these systems for three interrelated reasons: first, these are among the most complex biological systems in humans, and animal models of reproductive and neurological function are generally acknowledged to be inadequate for representing such critical events as cognition or early embryofoetal development; second, there are no simple tests for identifying potential reproductive or neurological toxicants; and third, these systems contain multiple cell types and organs, such that no single set of mechanisms of toxicity can be used to infer dose-response relationships or predict structure-activity relationships (SAR). Moreover, it is known that the sensitivity of both the nervous and reproductive systems varies with age, and that exposures at critical periods may have much more severe effects than at other times.
Neurotoxicity is an important public health problem. As shown in table 33.14, there have been several episodes of human neurotoxicity involving thousands of workers and other populations exposed through industrial releases, contaminated food, water and other vectors. Occupational exposures to neurotoxins such as lead, mercury, organophosphate insecticides and chlorinated solvents are widespread throughout the world (OTA 1990; Johnson 1978).
Hippocrates recognizes lead toxicity in the mining industry.
United States (Southeast): Compound often added to lubricating oils contaminates “Ginger Jake,” an alcoholic beverage; more than 5,000 paralyzed, 20,000 to 100,000 affected.
Apiol (with TOCP): Abortion-inducing drug containing TOCP causes 60 cases of neuropathy.
United States (California): Barley laced with thallium sulphate, used as rodenticide, is stolen and used to make tortillas; 13 family members hospitalized with neurological symptoms; 6 deaths.
60 South Africans develop paralysis after using contaminated cooking oil.
More than 25 individuals suffer neurological effects after cleaning gasoline tanks.
Hundreds ingest fish and shellfish contaminated with mercury from chemical plant; 121 poisoned, 46 deaths, many infants with serious nervous system damage.
Contamination of Stalinon with triethyltin results in more than 100 deaths.
150 ore miners suffer chronic manganese intoxication involving severe neurobehavioural problems.
Component of fragrances found to be neurotoxic; withdrawn from market in 1978; human health effects unknown.
49 persons become ill after eating bakery foods prepared from flour contaminated with the insecticide endrin; convulsions result in some instances.
Hexachlorobenzene, a seed grain fungicide, leads to poisoning of 3,000 to 4,000; 10 per cent mortality rate.
Drug used to treat travellers’ diarrhoea found to cause neuropathy; as many as 10,000 affected over two decades.
Cooking oil contaminated with lubricating oil affects some 10,000 individuals.
Mercury used as fungicide to treat seed grain used in bread; more than 1,000 people affected.
Methylmercury affects 646 people.
Polychlorinated biphenyls leaked into rice oil; 1,665 people affected.
93 cases of neuropathy occur following exposure to n-hexane, used to make vinyl sandals.
After years of bathing infants in 3 per cent hexachlorophene, the disinfectant is found to be toxic to the nervous system and other systems.
Mercury used as fungicide to treat seed grain is used in bread; more than 5,000 severe poisonings, 450 hospital deaths, effects on many infants exposed prenatally not documented.
United States (Ohio): Fabric production plant employees exposed to solvent; more than 80 workers suffer neuropathy, 180 have less severe effects.
United States (Hopewell, VA): Chemical plant employees exposed to insecticide; more than 20 suffer severe neurological problems, more than 40 have less severe problems.
United States (Texas): At least 9 employees suffer severe neurological problems following exposure to insecticide during manufacturing process.
United States (California): 24 individuals hospitalized after exposure to the pesticide dichloropropene (Telone II) following traffic accident.
United States (Lancaster, TX): Seven employees at plastic bathtub manufacturing plant experience serious neurological problems following exposure to BHMH.
Impurity in synthesis of illicit drug found to cause symptoms identical to those of Parkinson’s disease.
Contaminated toxic oil: 20,000 persons poisoned by toxic substance in oil, resulting in more than 500 deaths; many suffer severe neuropathy.
United States and Canada: More than 1,000 individuals in California and other Western States and British Columbia experience neuromuscular and cardiac problems following ingestion of melons contaminated with the pesticide aldicarb.
Ingestion of mussels contaminated with domoic acid causes 129 illnesses and 2 deaths; symptoms include memory loss, disorientation and seizures.
Source: OTA 1990.
Chemicals may affect the nervous system through actions at any of several cellular targets or biochemical processes within the central or peripheral nervous system. Toxic effects on other organs may also affect the nervous system, as in the example of hepatic encephalopathy. The manifestations of neurotoxicity include effects on learning (including memory, cognition and intellectual performance), somatosensory processes (including sensation and proprioception), motor function (including balance, gait and fine movement control), affect (including personality status and emotionality) and autonomic function (nervous control of endocrine function and internal organ systems). The toxic effects of chemicals upon the nervous system often vary in sensitivity and expression with age: during development, the central nervous system may be especially susceptible to toxic insult because of the extended process of cellular differentiation, migration and cell-to-cell contact that takes place in humans (OTA 1990). Moreover, cytotoxic damage to the nervous system may be irreversible because neurons are not replaced after embryogenesis. While the central nervous system (CNS) is somewhat protected from contact with absorbed compounds by a system of tightly joined cells (the blood-brain barrier, composed of capillary endothelial cells that line the vasculature of the brain), toxic chemicals can gain access to the CNS by three mechanisms: solvents and lipophilic compounds can pass through cell membranes; some compounds can attach to endogenous transporter proteins that serve to supply nutrients and biomolecules to the CNS; and small proteins, if inhaled, can be taken up directly by the olfactory nerve and transported to the brain.
Statutory authority for regulating substances for neurotoxicity is assigned to four agencies in the United States: the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), the Occupational Safety and Health Administration (OSHA), and the Consumer Product Safety Commission (CPSC). While OSHA generally regulates occupational exposures to neurotoxic (and other) chemicals, the EPA has authority to regulate occupational and nonoccupational exposures to pesticides under the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA). EPA also regulates new chemicals prior to manufacture and marketing, which obligates the agency to consider both occupational and nonoccupational risks.
Agents that adversely affect the physiology, biochemistry, or structural integrity of the nervous system or nervous system function expressed behaviourally are defined as neurotoxic hazards (EPA 1993). The determination of inherent neurotoxicity is a difficult process, owing to the complexity of the nervous system and the multiple expressions of neurotoxicity. Some effects may be delayed in appearance, such as the delayed neurotoxicity of certain organophosphate insecticides. Caution and judgement are required in determining neurotoxic hazard, including consideration of the conditions of exposure, dose, duration and timing.
Hazard identification is usually based upon toxicological studies of intact organisms, in which behavioural, cognitive, motor and somatosensory function is assessed with a range of investigative tools including biochemistry, electrophysiology and morphology (Tilson and Cabe 1978; Spencer and Schaumberg 1980). The importance of careful observation of whole organism behaviour cannot be overemphasized. Hazard identification also requires evaluation of toxicity at different developmental stages, including early life (intrauterine and early neonatal) and senescence. In humans, the identification of neurotoxicity involves clinical evaluation using methods of neurological assessment of motor function, speech fluency, reflexes, sensory function, electrophysiology, neuropsychological testing, and in some cases advanced techniques of brain imaging and quantitative electroencephalography. WHO has developed and validated a neurobehavioural core test battery (NCTB), which contains probes of motor function, hand-eye coordination, reaction time, immediate memory, attention and mood. This battery has been validated internationally by a coordinated process (Johnson 1978).
Hazard identification using animals also depends upon careful observational methods. The US EPA has developed a functional observational battery as a first-tier test designed to detect and quantify major overt neurotoxic effects (Moser 1990). This approach is also incorporated in the OECD subchronic and chronic toxicity testing methods. A typical battery includes the following measures: posture; gait; mobility; general arousal and reactivity; presence or absence of tremor, convulsions, lacrimation, piloerection, salivation, excess urination or defecation, stereotypy, circling, or other bizarre behaviours. Elicited behaviours include response to handling, tail pinch, or clicks; balance, righting reflex, and hind limb grip strength. Some representative tests and agents identified with these tests are shown in table 33.15.
Grip strength; swimming endurance; suspension from rod; discriminative motor function; hind limb splay
n-Hexane, Methylbutylketone, Carbaryl
Rotorod, gait measurements
Rating scale, spectral analysis
Chlordecone, Type I Pyrethroids, DDT
Rating scale, spectral analysis
DDT, Type II Pyrethroids
Discriminant conditioning, reflex modification
Discriminant conditioning (titration); functional observational battery
Nictitating membrane, conditioned flavour aversion, passive avoidance, olfactory conditioning
Aluminium, Carbaryl, Trimethyltin, IDPN, Trimethyltin (neonatal)
Operant or instrumental conditioning
One-way avoidance, Two-way avoidance, Y-maze avoidance, Biel water maze, Morris water maze, Radial arm maze, Delayed matching to sample, Repeated acquisition, Visual discrimination learning
Chlordecone, Lead (neonatal), Hypervitaminosis A, Styrene, DFP, Trimethyltin, DFP, Carbaryl, Lead
Source: EPA 1993.
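The first-tier functional observational battery described above is essentially a structured checklist of observed and elicited measures. As a minimal sketch (the measure names follow the text, but the scoring scheme and recorded values are invented for illustration), one animal's observations might be recorded and screened like this:

```python
# Hypothetical record of a first-tier functional observational battery (FOB)
# for a single animal. Scoring categories are invented for illustration.
observations = {
    "posture": "normal",
    "gait": "normal",
    "mobility": "normal",
    "general_arousal": "reduced",     # hypothetical abnormal finding
    "tremor": "absent",
    "convulsions": "absent",
    "lacrimation": "absent",
    "salivation": "absent",
    "righting_reflex": "present",
}

# Flag any measure whose value is outside the unremarkable categories,
# marking it for second-tier (mechanistic) follow-up.
unremarkable = ("normal", "present", "absent")
flagged = [measure for measure, value in observations.items()
           if value not in unremarkable]
print(flagged)  # ['general_arousal']
```

The point of the sketch is only that the battery yields a profile of categorical findings per animal, from which overt neurotoxic effects are flagged for more detailed testing.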
These tests may be followed by more complex assessments usually reserved for mechanistic studies rather than hazard identification. In vitro methods for neurotoxicity hazard identification are limited since they do not provide indications of effects on complex function, such as learning, but they may be very useful in defining target sites of toxicity and improving the precision of target site dose-response studies (see WHO 1986 and EPA 1993 for comprehensive discussions of principles and methods for identifying potential neurotoxicants).
The relationship between toxicity and dose may be based upon human data when available or upon animal tests, as described above. In the United States, an uncertainty or safety factor approach is generally used for neurotoxicants. This process involves determining a “no observed adverse effect level” (NOAEL) or “lowest observed adverse effect level” (LOAEL) and then dividing this number by uncertainty or safety factors (usually multiples of 10) to allow for such considerations as incompleteness of data, potentially higher sensitivity of humans and variability of human response due to age or other host factors. The resultant number is termed the reference dose (RfD) or reference concentration (RfC). The effect occurring at the lowest dose in the most sensitive animal species and gender is generally used to determine the LOAEL or NOAEL. Conversion of animal dose to human exposure is done by standard methods of cross-species dosimetry, taking into account differences in lifespan and exposure duration.
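The uncertainty-factor arithmetic described above can be sketched in a few lines. All numbers here are hypothetical and chosen only to illustrate the division of a NOAEL by multiplied factors of 10; they are not taken from any actual assessment:

```python
# Sketch of the uncertainty (safety) factor approach for deriving a
# reference dose (RfD). All values are hypothetical, for illustration only.

def reference_dose(noael_mg_kg_day, uncertainty_factors):
    """Divide the NOAEL (or LOAEL) by the product of uncertainty factors."""
    product = 1
    for factor in uncertainty_factors:
        product *= factor
    return noael_mg_kg_day / product

# Hypothetical NOAEL of 5 mg/kg/day from an animal study, with 10-fold
# factors for animal-to-human extrapolation and for human variability.
rfd = reference_dose(5.0, [10, 10])
print(rfd)  # 0.05 (mg/kg/day)
```

Additional factors of 10 (for example, for use of a LOAEL rather than a NOAEL, or for an incomplete database) simply multiply into the denominator in the same way.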
The use of the uncertainty factor approach assumes that there is a threshold, or dose below which no adverse effect is induced. Thresholds for specific neurotoxicants may be difficult to determine experimentally; they are based upon assumptions as to mechanism of action which may or may not hold for all neurotoxicants (Silbergeld 1990).
At this stage, information is evaluated on sources, routes, doses and durations of exposure to the neurotoxicant for human populations, subpopulations or even individuals. This information may be derived from monitoring of environmental media or human sampling, or from estimates based upon standard scenarios (such as workplace conditions and job descriptions) or models of environmental fate and dispersion (see EPA 1992 for general guidelines on exposure assessment methods). In some limited cases, biological markers may be used to validate exposure inferences and estimates; however, there are relatively few usable biomarkers of neurotoxicants.
The combination of hazard identification, dose-response and exposure assessment is used to develop the risk characterization. This process involves assumptions as to the extrapolation of high to low doses, extrapolation from animals to humans, and the appropriateness of threshold assumptions and use of uncertainty factors.
Reproductive hazards may affect multiple functional endpoints and cellular targets within humans, with consequences for the health of the affected individual and future generations. Reproductive hazards may affect the development of the reproductive system in males or females, reproductive behaviours, hormonal function, the hypothalamus and pituitary, gonads and germ cells, fertility, pregnancy and the duration of reproductive function (OTA 1985). In addition, mutagenic chemicals may also affect reproductive function by damaging the integrity of germ cells (Dixon 1985).
The nature and extent of adverse effects of chemical exposures upon reproductive function in human populations is largely unknown. Relatively little surveillance information is available on such endpoints as fertility of men or women, age of menopause in women, or sperm counts in men. However, both men and women are employed in industries where exposures to reproductive hazards may occur (OTA 1985).
This section does not recapitulate those elements common to both neurotoxicant and reproductive toxicant risk assessment, but focuses upon issues specific to reproductive toxicant risk assessment. As with neurotoxicants, authority to regulate chemicals for reproductive toxicity is placed by statute in the EPA, OSHA, the FDA and the CPSC. Of these agencies, only the EPA has a stated set of guidelines for reproductive toxicity risk assessment. In addition, the state of California has developed methods for reproductive toxicity risk assessment in response to a state law, Proposition 65 (Pease et al. 1991).
Reproductive toxicants, like neurotoxicants, may act by affecting any of a number of target organs or molecular sites of action. Their assessment has additional complexity because of the need to evaluate three distinct organisms, separately and together: the male, the female and the offspring (Mattison and Thomford 1989). While an important endpoint of reproductive function is the generation of a healthy child, reproductive biology also plays a role in the health of developing and mature organisms regardless of their involvement in procreation. For instance, loss of ovulatory function through natural depletion or surgical removal of oocytes has substantial effects upon the health of women, involving changes in blood pressure, lipid metabolism and bone physiology. Changes in hormone biochemistry may affect susceptibility to cancer.
The identification of a reproductive hazard may be made on the basis of human or animal data. In general, data from humans are relatively sparse, owing to the need for careful surveillance to detect alterations in reproductive function, such as sperm count or quality, ovulatory frequency and cycle length, or age at puberty. Detecting reproductive hazards through collection of information on fertility rates or data on pregnancy outcome may be confounded by the intentional suppression of fertility exercised by many couples through family-planning measures. Careful monitoring of selected populations indicates that rates of reproductive failure (miscarriage) may be very high, when biomarkers of early pregnancy are assessed (Sweeney et al. 1988).
Testing protocols using experimental animals are widely used to identify reproductive toxicants. In most of these designs, as developed in the United States by the FDA and the EPA and internationally by the OECD test guidelines programme, the effects of suspect agents are detected in terms of fertility after male and/or female exposure; observation of sexual behaviours related to mating; and histopathological examination of gonads and accessory sex glands, such as mammary glands (EPA 1994). Often reproductive toxicity studies involve continuous dosing of animals for one or more generations in order to detect effects on the integrated reproductive process as well as to study effects on specific organs of reproduction. Multigenerational studies are recommended because they permit detection of effects that may be induced by exposure during the development of the reproductive system in utero. A special test protocol, the Reproductive Assessment by Continuous Breeding (RACB), has been developed in the United States by the National Toxicology Program. This test provides data on changes in the temporal spacing of pregnancies (reflecting ovulatory function), as well as number and size of litters over the entire test period. When extended to the lifetime of the female, it can yield information on early reproductive failure. Sperm measures can be added to the RACB to detect changes in male reproductive function. A special test to detect pre- or postimplantation loss is the dominant lethal test, designed to detect mutagenic effects in male spermatogenesis.
In vitro tests have also been developed as screens for reproductive (and developmental) toxicity (Heindel and Chapin 1993). These tests are generally used to supplement in vivo test results by providing more information on target site and mechanism of observed effects.
Table 33.16 shows the three types of endpoints in reproductive toxicity assessment: couple-mediated, female-specific and male-specific. Couple-mediated endpoints include those detectable in multigenerational and single-organism studies. They generally include assessment of offspring as well. It should be noted that fertility measurement in rodents is generally insensitive, as compared to such measurement in humans, and that adverse effects on reproductive function may well occur at lower doses than those that significantly affect fertility (EPA 1994). Male-specific endpoints can include dominant lethality tests as well as histopathological evaluation of organs and sperm, measurement of hormones, and markers of sexual development. Sperm function can also be assessed by in vitro fertilization methods to detect germ cell properties of penetration and capacitation; these tests are valuable because they are directly comparable to in vitro assessments conducted in human fertility clinics, but they do not by themselves provide dose-response information. Female-specific endpoints include, in addition to organ histopathology and hormone measurements, assessment of the sequelae of reproduction, including lactation and offspring growth.
Couple-mediated endpoints:
Mating rate, time to mating (time to pregnancy1)
Litter size (total and live)
Number of live and dead offspring (foetal death rate1)
External malformations and variations1
Internal malformations and variations1
Postnatal structural and functional development1
Other reproductive endpoints

Male-specific endpoints:
Organ weights: testes, epididymides, seminal vesicles, prostate, pituitary
Visual examination and histopathology: testes, epididymides, seminal vesicles, prostate, pituitary
Sperm evaluation: sperm number (count) and quality (morphology, motility)
Hormone levels: luteinizing hormone, follicle stimulating hormone, testosterone, oestrogen, prolactin
Developmental endpoints: testis descent1, preputial separation, sperm production1, ano-genital distance, normality of external genitalia1

Female-specific endpoints:
Organ weights: ovary, uterus, vagina, pituitary
Visual examination and histopathology: ovary, uterus, vagina, pituitary, oviduct, mammary gland
Oestrous (menstrual1) cycle normality: vaginal smear cytology
Hormone levels: LH, FSH, oestrogen, progesterone, prolactin
Developmental endpoints: normality of external genitalia1, vaginal opening, vaginal smear cytology, onset of oestrus behaviour (menstruation1)
Senescence: vaginal smear cytology, ovarian histology
1 Endpoints that can be obtained relatively noninvasively with humans.
Source: EPA 1994.
In the United States, the hazard identification concludes with a qualitative evaluation of toxicity data by which chemicals are judged to have either sufficient or insufficient evidence of hazard (EPA 1994). “Sufficient” evidence includes epidemiological data providing convincing evidence of a causal relationship (or lack thereof), based upon case-control or cohort studies, or well-supported case series. Sufficient animal data may be coupled with limited human data to support a finding of a reproductive hazard: to be sufficient, the experimental studies are generally required to utilize EPA’s two-generation test guidelines, and must include a minimum of data demonstrating an adverse reproductive effect in an appropriate, well-conducted study in one test species. Limited human data may or may not be available; it is not necessary for the purposes of hazard identification. To rule out a potential reproductive hazard, the animal data must include an adequate array of endpoints from more than one study showing no adverse reproductive effect at doses minimally toxic to the animal (EPA 1994).
As with the evaluation of neurotoxicants, the demonstration of dose-related effects is an important part of risk assessment for reproductive toxicants. Two particular difficulties in dose-response analyses arise from the complicated toxicokinetics of pregnancy and from the importance of distinguishing specific reproductive toxicity from general toxicity to the organism. Debilitated animals, or animals with substantial nonspecific toxicity (such as weight loss), may fail to ovulate or mate. Maternal toxicity can affect the viability of pregnancy or support for lactation. These effects, while evidence of toxicity, are not specific to reproduction (Kimmel et al. 1986). Assessing dose response for a specific endpoint, such as fertility, must be done in the context of an overall assessment of reproduction and development. Dose-response relationships for different effects may differ significantly, and this can interfere with detection. For instance, agents that reduce litter size may result in no effects upon litter weight because of reduced competition for intrauterine nutrition.
An important component of exposure assessment for reproductive risk assessment relates to information on the timing and duration of exposures. Cumulative exposure measures may be insufficiently precise, depending upon the biological process that is affected. It is known that exposures at different developmental stages in males and females can result in different outcomes in both humans and experimental animals (Gray et al. 1988). The temporal nature of spermatogenesis and ovulation also affects outcome. Effects on spermatogenesis may be reversible if exposures cease; however, oocyte toxicity is not reversible since females have a fixed set of germ cells to draw upon for ovulation (Mattison and Thomford 1989).
As with neurotoxicants, the existence of a threshold is usually assumed for reproductive toxicants. However, the actions of mutagenic compounds on germ cells may be considered an exception to this general assumption. For other endpoints, an RfD or RfC is calculated as with neurotoxicants by determination of the NOAEL or LOAEL and application of appropriate uncertainty factors. The effect used for determining the NOAEL or LOAEL is the most sensitive adverse reproductive endpoint from the most appropriate or most sensitive mammalian species (EPA 1994). Uncertainty factors include consideration of interspecies and intraspecies variation, ability to define a true NOAEL, and sensitivity of the endpoint detected.
Risk characterizations should also be focused upon specific subpopulations at risk, possibly specifying males and females, pregnancy status, and age. Especially sensitive individuals, such as lactating women, women with reduced oocyte numbers or men with reduced sperm counts, and prepubertal adolescents may also be considered.
The identification of carcinogenic risks to humans has been the objective of the IARC Monographs on the Evaluation of Carcinogenic Risks to Humans since 1971. To date, 69 volumes of monographs have been published or are in press, with evaluations of carcinogenicity of 836 agents or exposure circumstances (see Appendix).
These qualitative evaluations of carcinogenic risk to humans are equivalent to the hazard identification phase in the now generally accepted scheme of risk assessment, which involves identification of hazard, dose-response assessment (including extrapolation outside the limits of observations), exposure assessment and risk characterization.
The aim of the IARC Monographs programme has been to publish critical qualitative evaluations on the carcinogenicity to humans of agents (chemicals, groups of chemicals, complex mixtures, physical or biological factors) or exposure circumstances (occupational exposures, cultural habits) through international cooperation in the form of expert working groups. The working groups prepare monographs on a series of individual agents or exposures and each volume is published and widely distributed. Each monograph consists of a brief description of the physical and chemical properties of the agent; methods for its analysis; a description of how it is produced, how much is produced, and how it is used; data on occurrence and human exposure; summaries of case reports and epidemiological studies of cancer in humans; summaries of experimental carcinogenicity tests; a brief description of other relevant biological data, such as toxicity and genetic effects, that may indicate its possible mechanism of action; and an evaluation of its carcinogenicity. The first part of this general scheme is adjusted appropriately when dealing with agents other than chemicals or chemical mixtures.
The guiding principles for evaluating carcinogens have been drawn up by various ad-hoc groups of experts and are laid down in the Preamble to the Monographs (IARC 1994a).
Associations are established by examining the available data from studies of exposed humans, the results of bioassays in experimental animals and studies of exposure, metabolism, toxicity and genetic effects in both humans and animals.
Three types of epidemiological studies contribute to an assessment of carcinogenicity: cohort studies, case-control studies and correlation (or ecological) studies. Case reports of cancer may also be reviewed.
Cohort and case-control studies relate individual exposures under study to the occurrence of cancer in individuals and provide an estimate of relative risk (ratio of the incidence in those exposed to the incidence in those not exposed) as the main measure of association.
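The relative risk described above is a simple ratio of incidences. As a hedged sketch with invented cohort counts (not data from any actual study):

```python
# Relative risk from hypothetical cohort counts: the ratio of disease
# incidence among the exposed to incidence among the unexposed.

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    incidence_exposed = cases_exposed / n_exposed
    incidence_unexposed = cases_unexposed / n_unexposed
    return incidence_exposed / incidence_unexposed

# Hypothetical cohort: 30 cases among 1,000 exposed workers versus
# 10 cases among 1,000 unexposed workers.
print(relative_risk(30, 1000, 10, 1000))  # about 3.0
```

A relative risk of 1 indicates no association; values above 1 indicate an excess of disease among the exposed (case-control studies estimate the same quantity indirectly, via the odds ratio).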
In correlation studies, the unit of investigation is usually whole populations (e.g., particular geographical areas) and cancer frequency is related to a summary measure of the exposure of the population to the agent. Because individual exposure is not documented, a causal relationship is less easy to infer from such studies than from cohort and case-control studies. Case reports generally arise from a suspicion, based on clinical experience, that the concurrence of two events, that is, a particular exposure and occurrence of a cancer, has happened rather more frequently than would be expected by chance. The uncertainties surrounding interpretation of case reports and correlation studies make them inadequate, except in rare cases, to form the sole basis for inferring a causal relationship.
In the interpretation of epidemiological studies, it is necessary to take into account the possible roles of bias and confounding. By bias is meant the operation of factors in study design or execution that lead erroneously to a stronger or weaker association than in fact exists between disease and an agent. By confounding is meant a situation in which the relationship with disease is made to appear stronger or weaker than it truly is as a result of an association between the apparent causal factor and another factor that is associated with either an increase or decrease in the incidence of the disease.
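Confounding can be illustrated numerically. In the following sketch all counts are invented: within each stratum of a confounding factor the exposure shows no association with disease (risk ratio of 1.0), yet pooling the strata produces an apparent association, because the confounder is linked to both exposure and disease rates:

```python
# Hypothetical counts illustrating confounding. Within each stratum of the
# confounder the risk ratio is 1.0, but the crude (pooled) analysis shows
# an apparent association. All numbers are invented for illustration.

def risk_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Stratum 1 (high background disease rate, mostly exposed individuals).
rr1 = risk_ratio(80, 800, 20, 200)    # 1.0: no association within stratum
# Stratum 2 (low background disease rate, mostly unexposed individuals).
rr2 = risk_ratio(2, 200, 8, 800)      # 1.0: no association within stratum
# Crude analysis pools the strata and ignores the confounder.
crude = risk_ratio(80 + 2, 800 + 200, 20 + 8, 200 + 800)
print(rr1, rr2, round(crude, 2))      # the pooled ratio is well above 1
```

Stratified analysis (or adjustment in a statistical model) is the standard way to detect and control this kind of distortion.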
In the assessment of the epidemiological studies, a strong association (i.e., a large relative risk) is more likely to indicate causality than a weak association, although it is recognized that relative risks of small magnitude do not imply lack of causality and may be important if the disease is common. Associations that are replicated in several studies of the same design or using different epidemiological approaches or under different circumstances of exposure are more likely to represent a causal relationship than isolated observations from single studies. An increase in risk of cancer with increasing amounts of exposure is considered to be a strong indication of causality, although the absence of a graded response is not necessarily evidence against a causal relationship. Demonstration of a decline in risk after cessation of or reduction in exposure in individuals or in whole populations also supports a causal interpretation of the findings.
When several epidemiological studies show little or no indication of an association between an exposure and cancer, the judgement may be made that, in the aggregate, they show evidence suggesting lack of carcinogenicity. The possibility that bias, confounding or misclassification of exposure or outcome could explain the observed results must be considered and excluded with reasonable certainty. Evidence suggesting lack of carcinogenicity obtained from several epidemiological studies can apply only to those type(s) of cancer, dose levels and intervals between first exposure and observation of disease that were studied. For some human cancers, the period between first exposure and the development of clinical disease is seldom less than 20 years; latent periods substantially shorter than 30 years cannot provide evidence suggesting lack of carcinogenicity.
The evidence relevant to carcinogenicity from studies in humans is classified into one of the following categories:
Sufficient evidence of carcinogenicity. A causal relationship has been established between exposure to the agent, mixture or exposure circumstance and human cancer. That is, a positive relationship has been observed between the exposure and cancer in studies in which chance, bias and confounding could be ruled out with reasonable confidence.
Limited evidence of carcinogenicity. A positive association has been observed between exposure to the agent, mixture or exposure circumstance and cancer for which a causal interpretation is considered to be credible, but chance, bias or confounding cannot be ruled out with reasonable confidence.
Inadequate evidence of carcinogenicity. The available studies are of insufficient quality, consistency or statistical power to permit a conclusion regarding the presence or absence of a causal association, or no data on cancer in humans are available.
Evidence suggesting lack of carcinogenicity. There are several adequate studies covering the full range of levels of exposure that human beings are known to encounter, which are mutually consistent in not showing a positive association between exposure to the agent and the studied cancer at any observed level of exposure. A conclusion of “evidence suggesting lack of carcinogenicity” is inevitably limited to the cancer sites, conditions and levels of exposure and length of observation covered by the available studies.
The applicability of an evaluation of the carcinogenicity of a mixture, process, occupation or industry on the basis of evidence from epidemiological studies depends on time and place. The specific exposure, process or activity considered most likely to be responsible for any excess risk should be sought and the evaluation focused as narrowly as possible. The long latent period of human cancer complicates the interpretation of epidemiological studies. A further complication is the fact that humans are exposed simultaneously to a variety of chemicals, which can interact either to increase or decrease the risk for neoplasia.
Studies in which experimental animals (usually mice and rats) are exposed to potential carcinogens and examined for evidence of cancer were introduced about 50 years ago with the aim of bringing a scientific approach to the study of chemical carcinogenesis and of avoiding some of the disadvantages of relying only on epidemiological data in humans. In the IARC Monographs all available, published studies of carcinogenicity in animals are summarized, and the degree of evidence of carcinogenicity is then classified into one of the following categories:
Sufficient evidence of carcinogenicity. A causal relationship has been established between the agent or mixture and an increased incidence of malignant neoplasms or of an appropriate combination of benign and malignant neoplasms in two or more species of animals or in two or more independent studies in one species carried out at different times or in different laboratories or under different protocols. Exceptionally, a single study in one species might be considered to provide sufficient evidence of carcinogenicity when malignant neoplasms occur to an unusual degree with regard to incidence, site, type of tumour or age at onset.
Limited evidence of carcinogenicity. The data suggest a carcinogenic effect but are limited for making a definitive evaluation because, for example, (a) the evidence of carcinogenicity is restricted to a single experiment; or (b) there are some unresolved questions regarding the adequacy of the design, conduct or interpretation of the study; or (c) the agent or mixture increases the incidence only of benign neoplasms or lesions of uncertain neoplastic potential, or of certain neoplasms which may occur spontaneously in high incidences in certain strains.
Inadequate evidence of carcinogenicity. The studies cannot be interpreted as showing either the presence or absence of a carcinogenic effect because of major qualitative or quantitative limitations, or no data on cancer in experimental animals are available.
Evidence suggesting lack of carcinogenicity. Adequate studies involving at least two species are available which show that, within the limits of the tests used, the agent or mixture is not carcinogenic. A conclusion of evidence suggesting lack of carcinogenicity is inevitably limited to the species, tumour sites and levels of exposure studied.
Data on biological effects in humans that are of particular relevance include toxicological, kinetic and metabolic considerations and evidence of DNA binding, persistence of DNA lesions or genetic damage in exposed humans. Toxicological information, such as that on cytotoxicity and regeneration, receptor binding and hormonal and immunological effects, and data on kinetics and metabolism in experimental animals are summarized when considered relevant to the possible mechanism of the carcinogenic action of the agent. The results of tests for genetic and related effects are summarized for whole mammals including man, cultured mammalian cells and nonmammalian systems. Structure-activity relationships are mentioned when relevant.
For the agent, mixture or exposure circumstance being evaluated, the available data on end-points or other phenomena relevant to mechanisms of carcinogenesis from studies in humans, experimental animals and tissue and cell test systems are summarized within one or more of the following descriptive dimensions:
· evidence of genotoxicity (i.e., structural changes at the level of the gene): for example, structure-activity considerations, adduct formation, mutagenicity (effect on specific genes), chromosomal mutation or aneuploidy
· evidence of effects on the expression of relevant genes (i.e., functional changes at the intracellular level): for example, alterations to the structure or quantity of the product of a proto-oncogene or tumour suppressor gene, alterations to metabolic activation, inactivation or DNA repair
· evidence of relevant effects on cell behaviour (i.e., morphological or behavioural changes at the cellular or tissue level): for example, induction of mitogenesis, compensatory cell proliferation, preneoplasia and hyperplasia, survival of premalignant or malignant cells (immortalization, immunosuppression), effects on metastatic potential
· evidence from dose and time relationships of carcinogenic effects and interactions between agents: for example, early versus late stage, as inferred from epidemiological studies; initiation, promotion, progression or malignant conversion, as defined in animal carcinogenicity experiments; toxicokinetics.
These dimensions are not mutually exclusive, and an agent may fall within more than one. Thus, for example, the action of an agent on the expression of relevant genes could be summarized under both the first and second dimension, even if it were known with reasonable certainty that those effects resulted from genotoxicity.
Finally, the body of evidence is considered as a whole in order to reach an overall evaluation of the carcinogenicity to humans of an agent, mixture or circumstance of exposure. An evaluation may be made for a group of chemicals when supporting data indicate that other, related compounds, for which there is no direct evidence of a capacity to induce cancer in humans or in animals, may also be carcinogenic; in such cases, a statement describing the rationale for this conclusion is added to the evaluation narrative.
The agent, mixture or exposure circumstance is described according to the wording of one of the following categories, and the designated group is given. The categorization of an agent, mixture or exposure circumstance is a matter of scientific judgement, reflecting the strength of the evidence derived from studies in humans and in experimental animals and from other relevant data.
Group 1. The agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans.
This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent (mixture) may be placed in this category when evidence in humans is less than sufficient but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent (mixture) acts through a relevant mechanism of carcinogenicity.
Group 2. This category includes agents, mixtures and exposure circumstances for which, at one extreme, the degree of evidence of carcinogenicity in humans is almost sufficient, as well as those for which, at the other extreme, there are no human data but for which there is evidence of carcinogenicity in experimental animals. Agents, mixtures and exposure circumstances are assigned to either group 2A (probably carcinogenic to humans) or group 2B (possibly carcinogenic to humans) on the basis of epidemiological and experimental evidence of carcinogenicity and other relevant data.
Group 2A. The agent (mixture) is probably carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans. This category is used when there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. In some cases, an agent (mixture) may be classified in this category when there is inadequate evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals and strong evidence that the carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent, mixture or exposure circumstance may be classified in this category solely on the basis of limited evidence of carcinogenicity in humans.
Group 2B. The agent (mixture) is possibly carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans. This category is used for agents, mixtures and exposure circumstances for which there is limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals. It may also be used when there is inadequate evidence of carcinogenicity in humans but there is sufficient evidence of carcinogenicity in experimental animals. In some instances, an agent, mixture or exposure circumstance for which there is inadequate evidence of carcinogenicity in humans but limited evidence of carcinogenicity in experimental animals together with supporting evidence from other relevant data may be placed in this group.
Group 3. The agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans. This category is used most commonly for agents, mixtures and exposure circumstances for which the evidence of carcinogenicity is inadequate in humans and inadequate or limited in experimental animals.
Exceptionally, agents (mixtures) for which the evidence of carcinogenicity is inadequate in humans but sufficient in experimental animals may be placed in this category when there is strong evidence that the mechanism of carcinogenicity in experimental animals does not operate in humans.
Group 4. The agent (mixture) is probably not carcinogenic to humans. This category is used for agents or mixtures for which there is evidence suggesting lack of carcinogenicity in humans and in experimental animals. In some instances, agents or mixtures for which there is inadequate evidence of carcinogenicity in humans but evidence suggesting lack of carcinogenicity in experimental animals, consistently and strongly supported by a broad range of other relevant data, may be classified in this group.
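Taken together, the default routes through these categories can be summarized schematically. The sketch below is illustrative only: the overall IARC evaluation is a matter of expert judgement, not an algorithm, and all function and constant names here are this sketch's own invention, not IARC's.

```python
# Illustrative sketch of the default IARC decision rules described above.
# The exceptional, judgement-based routes are represented by the optional
# "mechanism" argument; real evaluations are made by expert working groups.

SUFFICIENT, LIMITED, INADEQUATE, LACK = "sufficient", "limited", "inadequate", "lack"

def overall_group(human, animal, mechanism=None):
    """Default overall group from the degrees of evidence.

    human, animal: one of SUFFICIENT, LIMITED, INADEQUATE, LACK.
    mechanism: None, "operates_in_humans" or "not_in_humans"
      (strong mechanistic evidence, where available).
    """
    if human == SUFFICIENT:
        return "1"                          # sufficient evidence in humans
    if animal == SUFFICIENT:
        if mechanism == "operates_in_humans":
            # the ethylene-oxide-style upgrade described in the text
            return "1" if human == LIMITED else "2A"
        if mechanism == "not_in_humans":
            return "3"                      # exceptional downgrade from 2B
        return "2A" if human == LIMITED else "2B"
    if human == LIMITED:
        return "2B"     # limited human + less-than-sufficient animal evidence
    if human == LACK and animal == LACK:
        return "4"      # evidence suggesting lack of carcinogenicity in both
    return "3"          # inadequate human, inadequate or limited animal

print(overall_group(LIMITED, SUFFICIENT))                        # 2A
print(overall_group(LIMITED, SUFFICIENT, "operates_in_humans"))  # 1
print(overall_group(INADEQUATE, SUFFICIENT, "not_in_humans"))    # 3
```

The three calls at the end mirror the examples discussed later in the text: a limited-plus-sufficient default evaluation (Group 2A), the mechanism-based upgrade to Group 1, and the mechanism-based downgrade to Group 3.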
No human-made classification system is perfect enough to encompass all the complex entities of biology. Such systems are, however, useful as guiding principles and may be modified as knowledge of carcinogenesis becomes more firmly established. In the categorization of an agent, mixture or exposure circumstance, it is essential to rely on the scientific judgements formulated by the group of experts.
To date, 69 volumes of IARC Monographs have been published or are in press, in which evaluations of carcinogenicity to humans have been made for 836 agents or exposure circumstances. Seventy-four agents or exposures have been evaluated as carcinogenic to humans (Group 1), 56 as probably carcinogenic to humans (Group 2A), 225 as possibly carcinogenic to humans (Group 2B) and one as probably not carcinogenic to humans (Group 4). For 480 agents or exposures, the available epidemiological and experimental data did not allow an evaluation of their carcinogenicity to humans (Group 3).
The revised Preamble, which first appeared in volume 54 of the IARC Monographs, allows for the possibility that an agent for which epidemiological evidence of cancer is less than sufficient can be placed in Group 1 when there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity. Conversely, an agent for which there is inadequate evidence of carcinogenicity in humans together with sufficient evidence in experimental animals and strong evidence that the mechanism of carcinogenesis does not operate in humans may be placed in Group 3 instead of the normally assigned Group 2B (possibly carcinogenic to humans) category.
The use of such data on mechanisms has been discussed on three recent occasions:
While it is generally accepted that solar radiation is carcinogenic to humans (Group 1), epidemiological studies on cancer in humans for UVA and UVB radiation from sun lamps provide only limited evidence of carcinogenicity. Special tandem base substitutions (GC→TT) have been observed in the p53 tumour suppressor gene in squamous-cell tumours at sun-exposed sites in humans. Although UVR can induce similar transitions in some experimental systems and UVB, UVA and UVC are carcinogenic in experimental animals, the available mechanistic data were not considered strong enough to allow the working group to classify UVB, UVA and UVC higher than Group 2A (IARC 1992). In a study published after the meeting (Kress et al. 1992), CC→TT transitions in p53 were demonstrated in UVB-induced skin tumours in mice, which might suggest that UVB should also be classified as carcinogenic to humans (Group 1).
The second case in which the possibility of placing an agent in Group 1 in the absence of sufficient epidemiological evidence was considered was 4,4´-methylene-bis(2-chloroaniline) (MOCA). MOCA is carcinogenic in dogs and rodents and is comprehensively genotoxic. It binds to DNA through reaction with N-hydroxy MOCA and the same adducts that are formed in target tissues for carcinogenicity in animals have been found in urothelial cells from a small number of exposed humans. After lengthy discussions on the possibility of an upgrading, the working group finally made an overall evaluation of Group 2A, probably carcinogenic to humans (IARC 1993).
During a recent evaluation of ethylene oxide (IARC 1994b), the available epidemiological studies provided limited evidence of carcinogenicity in humans, and studies in experimental animals provided sufficient evidence of carcinogenicity. Taking into account the other relevant data that (1) ethylene oxide induces a sensitive, persistent, dose-related increase in the frequency of chromosomal aberrations and sister chromatid exchanges in peripheral lymphocytes and micronuclei in bone-marrow cells from exposed workers; (2) it has been associated with malignancies of the lymphatic and haematopoietic system in both humans and experimental animals; (3) it induces a dose-related increase in the frequency of haemoglobin adducts in exposed humans and dose-related increases in the numbers of adducts in both DNA and haemoglobin in exposed rodents; (4) it induces gene mutations and heritable translocations in germ cells of exposed rodents; and (5) it is a powerful mutagen and clastogen at all phylogenetic levels; ethylene oxide was classified as carcinogenic to humans (Group 1).
The converse possibility, by which the Preamble allows an agent with sufficient evidence of carcinogenicity in animals to be placed in Group 3 (instead of Group 2B, in which it would normally be categorized) when there is strong evidence that the mechanism of carcinogenicity in animals does not operate in humans, has not yet been used by any working group. Such a possibility could have been envisaged in the case of d-limonene had there been sufficient evidence of its carcinogenicity in animals, since there are data suggesting that α2u-globulin production in male rat kidney is linked to the renal tumours observed.
Among the many chemicals nominated as priorities by an ad hoc working group in December 1993, some shared a common postulated mechanism of action, or belonged to classes of agents defined by their biological properties. The working group recommended that, before agents such as peroxisome proliferators, fibres, dusts and thyrostatic agents are evaluated within the Monographs programme, special ad hoc groups be convened to discuss the latest state of the art on their particular mechanisms of action.
Group 1 (carcinogenic to humans): agents
Bis(2-chloroethyl)-2-naphthylamine (Chlornaphazine) (
1,4-Butanediol dimethanesulphonate (Myleran) (
1-(2-Chloroethyl)-3-(4-methylcyclohexyl)-1-nitrosourea (Methyl-CCNU; Semustine) (
Chromium(VI) compounds (1990)3
Ethylene oxide4 (
Helicobacter pylori (infection with) (1994)
Hepatitis B virus (chronic infection with) (1993)
Hepatitis C virus (chronic infection with) (1993)
Human papillomavirus type 16 (1995)
Human papillomavirus type 18 (1995)
Human T-cell lymphotropic virus type I (1996)
8-Methoxypsoralen (Methoxsalen) (
MOPP and other combined chemotherapy including alkylating agents
Mustard gas (Sulphur mustard) (
Nickel compounds (1990)3
Oestrogen replacement therapy
Opisthorchis viverrini (infection with) (1994)
Oral contraceptives, combined5
Oral contraceptives, sequential
Schistosoma haematobium (infection with) (1994)
Solar radiation (1992)
Talc containing asbestiform fibres
Vinyl chloride (
Group 1: mixtures
Alcoholic beverages (1988)
Analgesic mixtures containing phenacetin
Betel quid with tobacco
Coal-tar pitches (
Mineral oils, untreated and mildly treated
Salted fish (Chinese-style) (1993)
Shale oils (
Tobacco products, smokeless
Group 1: exposure circumstances
Auramine, manufacture of
Boot and shoe manufacture and repair
Furniture and cabinet making
Haematite mining (underground) with exposure to radon
Iron and steel founding
Isopropanol manufacture (strong-acid process)
Magenta, manufacture of (1993)
Painter (occupational exposure as a) (1989)
Strong-inorganic-acid mists containing sulphuric acid (occupational exposure to) (1992)
Group 2A (probably carcinogenic to humans): agents
Androgenic (anabolic) steroids
Bischloroethyl nitrosourea (BCNU) (
1-(2-Chloroethyl)-3-cyclohexyl-1-nitrosourea8 (CCNU) (
Clonorchis sinensis (infection with)8 (1994)
Diethyl sulphate (
Dimethylcarbamoyl chloride8 (
Dimethyl sulphate8 (
Ethylene dibromide8 (
IQ8 (2-Amino-3-methylimidazo(4,5-f)quinoline) (
4,4´-Methylene bis(2-chloroaniline) (MOCA)8 (
N-Methyl-N´-nitro-N-nitrosoguanidine8 (MNNG) (
Nitrogen mustard (
Procarbazine hydrochloride8 (
Ultraviolet radiation A8 (1992)
Ultraviolet radiation B8 (1992)
Ultraviolet radiation C8 (1992)
Vinyl bromide6 (
Vinyl fluoride (
Group 2A: mixtures
Diesel engine exhaust (1989)
Hot maté (1991)
Non-arsenical insecticides (occupational exposures in spraying and application of) (1991)
Polychlorinated biphenyls (
Group 2A: exposure circumstances
Art glass, glass containers and pressed ware (manufacture of) (1993)
Hairdresser or barber (occupational exposure as a) (1993)
Petroleum refining (occupational exposures in) (1989)
Sunlamps and sunbeds (use of) (1992)
Group 2B (possibly carcinogenic to humans): agents
A-α-C (2-Amino-9H-pyrido(2,3-b)indole) (
AF-2 (2-(2-Furyl)-3-(5-nitro-2-furyl)acrylamide) (
Aflatoxin M1 (
Antimony trioxide (
Benzyl violet 4B (
Butylated hydroxyanisole (BHA) (
Caffeic acid (
Carbon tetrachloride (
Chlordecone (Kepone) (
Chlorendic acid (
α-Chlorinated toluenes (benzyl chloride, benzal chloride, benzotrichloride)
CI Acid Red 114 (
CI Basic Red 9 (
CI Direct Blue 15 (
Citrus Red No. 2 (
Dantron (Chrysazin; 1,8-Dihydroxyanthraquinone) (
4,4´-Diaminodiphenyl ether (
3,3´-Dichloro-4,4´-diaminodiphenyl ether (
Dichloromethane (methylene chloride) (
Diglycidyl resorcinol ether (
Diisopropyl sulphate (
3,3´-Dimethoxybenzidine (o-Dianisidine) (
trans-2-((Dimethylamino)methylimino)-5-(2-(5-nitro-2-furyl)vinyl)-1,3,4-oxadiazole (
2,6-Dimethylaniline (2,6-xylidine) (
3,3´-Dimethylbenzidine (o-tolidine) (
Disperse Blue 1 (
Ethyl acrylate (
Ethylene thiourea (
Ethyl methanesulphonate (
Glass wool (1988)
Glu-P-1 (2-amino-6-methyldipyrido(1,2-a:3´,2´-d)imidazole) (
Glu-P-2 (2-aminodipyrido(1,2-a:3´,2´-d)imidazole) (
HC Blue No. 1 (
Human immunodeficiency virus type 2 (infection with) (1996)
Human papillomaviruses: some types other than 16, 18, 31 and 33 (1995)
Iron-dextran complex (
MeA-α-C (2-Amino-3-methyl-9H-pyrido(2,3-b)indole) (
Medroxyprogesterone acetate (
MeIQ (2-Amino-3,4-dimethylimidazo(4,5-f)quinoline) (
MeIQx (2-Amino-3,8-dimethylimidazo(4,5-f)quinoxaline) (
2-Methylaziridine (propyleneimine) (
Methylazoxymethanol acetate (
4,4´-Methylene bis(2-methylaniline) (
Methylmercury compounds (1993)3
Methyl methanesulphonate (
Mitomycin C (
5-(Morpholinomethyl)-3-((5-nitrofurfurylidene)amino)-2-oxazolidinone (
Nickel, metallic (
Nitrilotriacetic acid (
Nitrogen mustard N-oxide (
4-(N-Nitrosomethylamino)-1-(3-pyridyl)-1-butanone (NNK) (
Ochratoxin A (
Oil Orange SS (
Palygorskite (attapulgite) (
Panfuran S (containing dihydroxymethylfuratrizine (
Phenazopyridine hydrochloride (
Phenoxybenzamine hydrochloride (
Phenyl glycidyl ether (
PhIP (2-Amino-1-methyl-6-phenylimidazo(4,5-b)pyridine) (
Ponceau MX (
Ponceau 3R (
Potassium bromate (
1,3-Propane sultone (
Propylene oxide (
Schistosoma japonicum (infection with) (1994)
Sodium o-phenylphenate (
Toluene diisocyanates (
Trichlormethine (Trimustine hydrochloride) (
Trp-P-1 (3-Amino-1,4-dimethyl-5H-pyrido(4,3-b)indole) (
Trp-P-2 (3-Amino-1-methyl-5H-pyrido(4,3-b)indole) (
Trypan blue (
Uracil mustard (
Vinyl acetate (
4-Vinylcyclohexene diepoxide (
Group 2B: mixtures
Chlorinated paraffins of average carbon chain length C12 and average degree of chlorination approximately 60% (1990)
Coffee (urinary bladder)9 (1991)
Diesel fuel, marine (1989)
Engine exhaust, gasoline (1989)
Fuel oils, residual (heavy) (1989)
Pickled vegetables (traditional in Asia) (1993)
Polybrominated biphenyls (Firemaster BP-6,
Toxaphene (Polychlorinated camphenes) (
Toxins derived from Fusarium moniliforme (1993)
Welding fumes (1990)
Group 2B: exposure circumstances
Carpentry and joinery
Dry cleaning (occupational exposures in) (1995)
Printing processes (occupational exposures in) (1996)
Textile manufacturing industry (work in) (1990)
Group 3 (not classifiable as to carcinogenicity to humans): agents
Acridine orange (
Acriflavinium chloride (
Acrylic acid (
Actinomycin D (
Allyl chloride (
Allyl isothiocyanate (
Allyl isovalerate (
p-Aminobenzoic acid (
11-Aminoundecanoic acid (
Anthranilic acid (
Antimony trisulphide (
p-Aramid fibrils (
Aziridyl benzoquinone (
p-Benzoquinone dioxime (
Benzoyl chloride (
Benzoyl peroxide (
Benzyl acetate (
Bis(1-aziridinyl)morpholinophosphine sulphide (
Bisphenol A diglycidyl ether (
Blue VRS (
Brilliant Blue FCF, disodium salt (
n-Butyl acrylate (
Butylated hydroxytoluene (BHT) (
Butyl benzyl phthalate (
Chloral hydrate (
Chlorinated dibenzodioxins (other than TCDD)
Chlorinated drinking-water (1991)
Chromium(III) compounds (1990)
CI Acid Orange 3 (
Cinnamyl anthranilate (
CI Pigment Red 3 (
Clomiphene citrate (
Coal dust (1997)
Copper 8-hydroxyquinoline (
Cyclamates (sodium cyclamate,
D & C Red No. 9 (
Decabromodiphenyl oxide (
Dichloroacetic acid (
p-Dimethylaminoazobenzenediazo sodium sulphonate (
Dimethyl hydrogen phosphite (
Disperse Yellow 3 (
3,4-Epoxy-6-methylcyclohexylmethyl-3,4-epoxy-6-methylcyclohexane carboxylate (
cis-9,10-Epoxystearic acid (
Ethylene sulphide (
2-Ethylhexyl acrylate (
Ethyl selenac (
Ethyl tellurac (
Evans blue (
Fast Green FCF (
Ferric oxide (
Fluorescent lighting (1992)
Fluorides (inorganic, used in drinking-water)
Furosemide (Frusemide) (
Glass filaments (1988)
Glycidyl oleate (
Glycidyl stearate (
Guinea Green B (
HC Blue No. 2 (
HC Red No. 3 (
HC Yellow No. 4 (
Hepatitis D virus (1993)
Human T-cell lymphotropic virus type II (1996)
Hycanthone mesylate (
Hydrochloric acid (
Hydrogen peroxide (
Hypochlorite salts (1991)
Iron-dextrin complex (
Iron sorbitol-citric acid complex (
Isonicotinic acid hydrazide (Isoniazid) (
Lauroyl peroxide (
Lead, organo (
Light Green SF (
Maleic hydrazide (
Mannomustine dihydrochloride (
Methyl acrylate (
Methyl bromide (
Methyl carbamate (
Methyl chloride (
4,4´-Methylenediphenyl diisocyanate (
Methyl iodide (
Methyl methacrylate (
Methyl parathion (
Methyl red (
Methyl selenac (
Musk ambrette (
Musk xylene (
1,5-Naphthalene diisocyanate (
1-Naphthylthiourea (ANTU) (
Nitrofural (Nitrofurazone) (
N-Nitrosofolic acid (
4-(N-Nitrosomethylamino)-4-(3-pyridyl)-1-butanal (NNA) (
Nylon 6 (
Oestradiol mustard (
Oestrogen-progestin replacement therapy
Opisthorchis felineus (infection with) (1994)
Orange I (
Orange G (
Palygorskite (attapulgite) (
Paracetamol (Acetaminophen) (
Parasorbic acid (
Penicillic acid (
Phenelzine sulphate (
Piperonyl butoxide (
Polyacrylic acid (
Polychlorinated dibenzo-p-dioxins (other than 2,3,7,8-tetrachlorodibenzo-p-dioxin) (1997)
Polychlorinated dibenzofurans (1997)
Polymethylene polyphenyl isocyanate (
Polymethyl methacrylate (
Polyurethane foams (
Polyvinyl acetate (
Polyvinyl alcohol (
Polyvinyl chloride (
Polyvinyl pyrrolidone (
Ponceau SX (
Potassium bis(2-hydroxyethyl)dithiocarbamate (
Pronetalol hydrochloride (
n-Propyl carbamate (
Quintozene (Pentachloronitrobenzene) (
Rhodamine B (
Rhodamine 6G (
Saccharated iron oxide (
Scarlet Red (
Schistosoma mansoni (infection with) (1994)
Semicarbazide hydrochloride (
Shikimic acid (
Sodium chlorite (
Sodium diethyldithiocarbamate (
Styrene-acrylonitrile copolymers (
Styrene-butadiene copolymers (
Succinic anhydride (
Sudan I (
Sudan II (
Sudan III (
Sudan Brown RR (
Sudan Red 7B (
Sulphafurazole (Sulphisoxazole) (
Sulphur dioxide (
Sunset Yellow FCF (
Tannic acid (
Tetrakis(hydroxymethyl)phosphonium salts (1990)
Titanium dioxide (
Toxins derived from Fusarium graminearum, F. culmorum and F. crookwellense (1993)
Toxins derived from Fusarium sporotrichioides (1993)
Trichloroacetic acid (
Triethylene glycol diglydicyl ether (
Tris(aziridinyl)-p-benzoquinone (Triaziquone) (
Tris(1-aziridinyl)phosphine oxide (
Tris(2-methyl-1-aziridinyl)phosphine oxide (
Vat Yellow 4 (
Vinblastine sulphate (
Vincristine sulphate (
Vinyl acetate (
Vinyl chloride-vinyl acetate copolymers (
Vinylidene chloride (
Vinylidene chloride-vinyl chloride copolymers (
Vinylidene fluoride (
Vinyl toluene (
Yellow AB (
Yellow OB (
Group 3: mixtures
Betel quid, without tobacco
Crude oil (
Diesel fuels, distillate (light) (1989)
Fuel oils, distillate (light) (1989)
Jet fuel (1989)
Mineral oils, highly refined
Petroleum solvents (1989)
Printing inks (1996)
Terpene polychlorinates (StrobaneR) (
Group 3: exposure circumstances
Flat-glass and specialty glass (manufacture of) (1993)
Hair colouring products (personal use of) (1993)
Leather goods manufacture
Leather tanning and processing
Lumber and sawmill industries (including logging)
Paint manufacture (occupational exposure in) (1989)
Pulp and paper manufacture
1 The Chemical Abstracts number and year are given in parentheses; the year is that in which the evaluation was published, for agents, mixtures or exposure circumstances considered subsequent to the Supplement 7 Working Group, in Volumes 43 to 61 of the Monographs.
2 This evaluation applies to the group of chemicals as a whole and not necessarily to all individual chemicals within the group.
3 Evaluated as a group.
4 Overall evaluation upgraded from 2A to 1 with supporting evidence from other data relevant to the evaluation of carcinogenicity and its mechanisms.
5 There is also conclusive evidence that these agents have a protective effect against cancers of the ovary and the endometrium.
6 There is also conclusive evidence that this agent (tamoxifen) reduces the risk of contralateral breast cancer.
7 Overall evaluation upgraded from 2A to 1 with supporting evidence from other data relevant to the evaluation of carcinogenicity and its mechanisms.
8 Overall evaluation upgraded from 2B to 2A with supporting evidence from other relevant data.
9 Overall evaluation upgraded from 3 to 2B with supporting evidence from other relevant data.
10 There is some evidence of an inverse relationship between coffee drinking and cancer of the large bowel; coffee drinking could not be classified as to its carcinogenicity to other organs.
Whereas the principles and methods of risk assessment for non-carcinogenic chemicals are similar in different parts of the world, it is striking that approaches to the risk assessment of carcinogenic chemicals vary greatly. There are not only marked differences between countries; even within a country, different approaches are applied or advocated by the various regulatory agencies, committees and scientists in the field of risk assessment. Risk assessment for non-carcinogens is comparatively consistent and well established, partly because of the long history and better understanding of the nature of toxic effects in comparison with carcinogens, and partly because of a high degree of consensus and confidence, among both scientists and the general public, in the methods used and their outcome.
For non-carcinogenic chemicals, safety factors were introduced to compensate for uncertainties in the toxicology data (which are derived mostly from animal experiments) and in their applicability to large, heterogeneous human populations. In this approach (the safety or uncertainty factor approach), recommended or required limits on safe human exposure are set at a fraction of the exposure level in animals that could be clearly documented as the no-observed-adverse-effect level (NOAEL) or the lowest-observed-adverse-effect level (LOAEL). It was then assumed that as long as human exposure did not exceed the recommended limits, the hazardous properties of chemical substances would not be manifest. For many types of chemicals, this practice, in somewhat refined form, continues to this day in toxicological risk assessment.
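The safety-factor arithmetic just described is simple division. A minimal sketch follows, assuming the conventional default factors of 10 for interspecies extrapolation and 10 for interindividual variability, with an optional extra factor when only a LOAEL is available; the function name and the example values are invented for illustration.

```python
# Sketch of the safety (uncertainty) factor approach: an acceptable intake is
# the NOAEL (or LOAEL) divided by the product of the uncertainty factors.
# Default factors of 10 x 10 are conventional; all names here are illustrative.

def acceptable_intake(point_of_departure_mg_kg_day, interspecies=10.0,
                      intraspecies=10.0, loael_to_noael=1.0):
    """Acceptable daily intake (mg/kg bw/day) from a NOAEL or LOAEL."""
    total_uncertainty_factor = interspecies * intraspecies * loael_to_noael
    return point_of_departure_mg_kg_day / total_uncertainty_factor

# A hypothetical NOAEL of 5 mg/kg bw/day with the two default factors of 10:
adi = acceptable_intake(5.0)   # 5 / (10 * 10) = 0.05 mg/kg bw/day
print(adi)

# If only a LOAEL were available, an extra factor of 10 would be applied:
print(acceptable_intake(5.0, loael_to_noael=10.0))   # 0.005 mg/kg bw/day
```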
During the late 1960s and early 1970s, regulatory bodies, starting in the United States, were confronted with an increasingly important problem for which many scientists considered the safety factor approach to be inappropriate, and even dangerous: chemicals that under certain conditions had been shown to increase the risk of cancers in humans or experimental animals. These substances were operationally referred to as carcinogens. There is still debate and controversy over the definition of a carcinogen, and a wide range of opinion about techniques for identifying and classifying carcinogens, as well as about the process of cancer induction by chemicals.
The initial discussion started much earlier, when scientists in the 1940s discovered that chemical carcinogens caused damage by a biological mechanism that was of a totally different kind from those that produced other forms of toxicity. These scientists, using principles from the biology of radiation-induced cancers, put forth what is referred to as the “non-threshold” hypothesis, which was considered applicable to both radiation and carcinogenic chemicals. It was hypothesized that any exposure to a carcinogen that reaches its critical biological target, especially the genetic material, and interacts with it, can increase the probability (the risk) of cancer development.
Parallel to the ongoing scientific discussion on thresholds, there was growing public concern about the adverse role of chemical carcinogens and the urgent need to protect people from the set of diseases collectively called cancer. Cancer, with its insidious character and long latency period, together with data showing that cancer incidence in the general population was increasing, was regarded by the general public and politicians as a matter of concern warranting optimal protection. Regulators were faced with situations in which large numbers of people, sometimes nearly the entire population, were or could be exposed to relatively low levels of chemical substances (in consumer products and medicines, at the workplace, as well as in air, water, food and soils) that had been identified as carcinogenic in humans or experimental animals under conditions of relatively intense exposure.
Those regulatory officials were confronted with two fundamental questions which, in most cases, could not be fully answered using available scientific methods:
1. What risk to human health exists in the range of exposure to chemicals below the relatively intense and narrow exposure range under which a cancer risk could be directly measured?
2. What could be said about risks to human health when experimental animals were the only subjects in which risks for the development of cancer had been established?
Regulators recognized the need for assumptions, sometimes scientifically based but often also unsupported by experimental evidence. In order to achieve consistency, definitions and specific sets of assumptions were adopted that would be generically applied to all carcinogens.
Several lines of evidence support the conclusion that chemical carcinogenesis is a multistage process driven by genetic damage and epigenetic changes, and this theory is widely accepted in the scientific community all over the world (Barrett 1993). Although the process of chemical carcinogenesis is often separated into three stages (initiation, promotion and progression), the number of relevant genetic changes is not known.
Initiation involves the induction of an irreversibly altered cell and is, for genotoxic carcinogens, always equated with a mutational event. Mutagenesis as a mechanism of carcinogenesis was hypothesized by Theodor Boveri as early as 1914, and many of his assumptions and predictions have subsequently been proven true. Because irreversible and self-replicating mutagenic effects can be caused by the smallest amount of a DNA-modifying carcinogen, no threshold is assumed. Promotion is the process by which the initiated cell expands (clonally) by a series of divisions and forms (pre)neoplastic lesions. There is considerable debate as to whether initiated cells undergo additional genetic changes during this promotion phase.
Finally, in the progression stage, “immortality” is attained and fully malignant tumours can develop by influencing angiogenesis and escaping the host’s control systems. Progression is characterized by invasive growth and frequently by metastatic spread of the tumour, and is accompanied by additional genetic changes due to the instability of proliferating cells and to selection.
Therefore, there are three general mechanisms by which a substance can influence the multistep carcinogenic process. A chemical can induce a relevant genetic alteration, promote or facilitate clonal expansion of an initiated cell or stimulate progression to malignancy by somatic and/or genetic changes.
Risk can be defined as the predicted or actual frequency of occurrence of an adverse effect on humans or the environment from a given exposure to a hazard. Risk assessment is a method of systematically organizing the scientific information, and its attached uncertainties, for the description and quantification of the health risks associated with hazardous substances, processes, actions or events. It requires evaluation of relevant information and selection of the models to be used in drawing inferences from that information. Further, it requires explicit recognition of uncertainties and appropriate acknowledgement that alternative interpretations of the available data may be scientifically plausible. The current terminology used in risk assessment was proposed in 1984 by the US National Academy of Sciences: qualitative risk assessment became hazard identification/characterization, and quantitative risk assessment was divided into the components dose-response assessment, exposure assessment and risk characterization.
In the following sections these components are briefly discussed in view of our current knowledge of the process of (chemical) carcinogenesis. It will become clear that the dominant uncertainty in the risk assessment of carcinogens is the dose-response pattern at the low dose levels characteristic of environmental exposure.
Hazard identification determines which compounds have the potential to cause cancer in humans; in other words, it identifies their intrinsic genotoxic properties. Combining information from various sources and on different properties serves as a basis for the classification of carcinogenic compounds. In general the following information is used:
· epidemiological data (e.g., vinyl chloride, arsenic, asbestos)
· animal carcinogenicity data
· genotoxic activity/DNA adduct formation
· mechanisms of action
· pharmacokinetic activity
· structure-activity relationships.
Classification of chemicals into groups based on the assessment of the adequacy of the evidence of carcinogenesis in animals or in man, if epidemiological data are available, is a key process in hazard identification. The best known schemes for categorizing carcinogenic chemicals are those of IARC (1987), the EU (1991) and the EPA (1986). An overview of their criteria for classification (e.g., low-dose extrapolation methods) is given in table 33.17.
The low-dose extrapolation procedures listed in table 33.17 for genotoxic carcinogens include:
· linearized multistage procedure using the most appropriate low-dose model (current US EPA)
· maximum likelihood estimates (MLE) from one- and two-hit models, plus judgement of the best outcome
· linear model using the TD50 (Peto method), or the “Simple Dutch Method” if no TD50 is available
· biologically based model of Thorslund, or multistage or Mantel-Bryan model, based on tumour origin and dose-response
· no model: scientific expertise and judgement from all available data
· no procedure specified.
For non-genotoxic carcinogens, the listed procedures use a NOAEL or NOEL together with safety factors, in some cases to set an acceptable daily intake (ADI).
One important issue in classifying carcinogens, with sometimes far-reaching consequences for their regulation, is the distinction between genotoxic and non-genotoxic mechanisms of action. The US Environmental Protection Agency (EPA) default assumption for all substances showing carcinogenic activity in animal experiments is that no threshold exists (or at least none can be demonstrated), so there is some risk with any exposure. This is commonly referred to as the non-threshold assumption for genotoxic (DNA-damaging) compounds. The EU and many of its members, such as the United Kingdom, the Netherlands and Denmark, make a distinction between carcinogens that are genotoxic and those believed to produce tumours by non-genotoxic mechanisms. For genotoxic carcinogens quantitative dose-response estimation procedures are followed that assume no threshold, although the procedures might differ from those used by the EPA. For non-genotoxic substances it is assumed that a threshold exists, and dose-response procedures are used that assume a threshold. In the latter case, the risk assessment is generally based on a safety factor approach, similar to the approach for non-carcinogens.
It is important to keep in mind that these different schemes were developed to deal with risk assessments in different contexts and settings. The IARC scheme was not produced for regulatory purposes, although it has been used as a basis for developing regulatory guidelines. The EPA scheme was designed to serve as a decision point for entering quantitative risk assessment, whereas the EU scheme is currently used to assign a hazard (classification) symbol and risk phrases to the chemical’s label. A more extended discussion on this subject is presented in a recent review (Moolenaar 1994) covering procedures used by eight governmental agencies and two often-cited independent organizations, the International Agency for Research on Cancer (IARC) and the American Conference of Governmental Industrial Hygienists (ACGIH).
The classification schemes generally do not take into account the extensive negative evidence that may be available. Also, in recent years a greater understanding of the mechanisms of action of carcinogens has emerged. Evidence has accumulated that some mechanisms of carcinogenicity are species-specific and are not relevant for man. The following examples illustrate this important phenomenon. First, studies on the carcinogenicity of diesel particles have demonstrated that rats respond with lung tumours to heavy loading of the lung with particles; however, lung cancer is not seen in coal miners with very heavy lung burdens of particles. Second, there is the assertion of the non-relevance of renal tumours in the male rat, on the basis that the key element in the tumourigenic response is the accumulation in the kidney of α2u-globulin, a protein that does not exist in humans (Borghoff, Short and Swenberg 1990). Disturbances of rodent thyroid function and peroxisome proliferation or mitogenesis in the mouse liver must also be mentioned in this respect.
This knowledge allows a more sophisticated interpretation of the results of a carcinogenicity bioassay. Research towards a better understanding of the mechanisms of action of carcinogenicity is encouraged because it may lead to an altered classification and to the addition of a category in which chemicals are classified as not carcinogenic to humans.
Exposure assessment is often thought to be the component of risk assessment with the least inherent uncertainty because of the ability to monitor exposures in some cases and the availability of relatively well-validated exposure models. This is only partially true, however, because most exposure assessments are not conducted in ways that take full advantage of the range of available information. For that reason there is a great deal of room for improving exposure distribution estimates. This holds for both external as well as for internal exposure assessments. Especially for carcinogens, the use of target tissue doses rather than external exposure levels in modelling dose-response relationships would lead to more relevant predictions of risk, although many assumptions on default values are involved. Physiologically based pharmacokinetic (PBPK) models to determine the amount of reactive metabolites that reaches the target tissue are potentially of great value to estimate these tissue doses.
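As an illustration of how a pharmacokinetic calculation can replace external dose with a target-tissue dose, the following minimal sketch integrates saturable metabolism in a single liver-like compartment. It is not a full PBPK model (which would track many organs, blood flows and partition coefficients), and every parameter value is hypothetical, chosen only for illustration.

```python
# Minimal one-compartment sketch of a PBPK-style tissue-dose calculation.
# A parent compound is cleared partly by saturable (Michaelis-Menten)
# metabolism to a reactive metabolite; the cumulative metabolite formed
# serves as a crude surrogate for target-tissue dose. All parameter
# values are hypothetical.

def liver_metabolite_dose(external_dose_mg, vmax=2.0, km=0.5,
                          volume_l=1.5, k_elim=0.3, dt=0.01, t_end=48.0):
    """Euler integration of parent-compound disposition; returns the
    cumulative amount of reactive metabolite formed (mg)."""
    amount = external_dose_mg            # parent compound in compartment (mg)
    metabolite_total = 0.0
    t = 0.0
    while t < t_end:
        conc = amount / volume_l                  # concentration, mg/L
        v_met = vmax * conc / (km + conc)         # saturable pathway, mg/h
        v_other = k_elim * amount                 # first-order loss, mg/h
        metabolite_total += v_met * dt
        amount = max(amount - (v_met + v_other) * dt, 0.0)
        t += dt
    return metabolite_total

# Because metabolism saturates, tissue dose is not proportional to
# external dose: a tenfold higher exposure yields less than a tenfold
# higher metabolite burden.
low = liver_metabolite_dose(1.0)
high = liver_metabolite_dose(10.0)
print(low, high, high / low)
```

The nonlinearity shown here is exactly why external exposure levels can be a misleading dose metric for carcinogens metabolized to reactive species.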
A key consideration in risk characterization is the relation between the dose or exposure level that causes an effect in an animal study and the dose likely to cause a similar effect in humans. This involves both dose-response assessment from high to low dose and interspecies extrapolation. The extrapolation presents a logical problem, namely that data are being extrapolated many orders of magnitude below the experimental exposure levels by empirical models that do not reflect the underlying mechanisms of carcinogenicity. This violates a basic principle in the fitting of empirical models, namely not to extrapolate outside the range of the observable data. Therefore, this empirical extrapolation results in large uncertainties, from both a statistical and a biological point of view. At present no single mathematical procedure is recognized as the most appropriate for low-dose extrapolation in carcinogenesis. The mathematical models that have been used to describe the relation between the administered external dose, time and tumour incidence are based on tolerance-distribution or mechanistic assumptions, and sometimes on both. A summary of the most frequently cited models (Kramer et al. 1995) is given in table 33.18.
The models listed in table 33.18 fall into three groups:
· tolerance distribution models (e.g., the Mantel-Bryan model)
· mechanism-based models (e.g., one-hit and multistage models)
· biologically based models, such as the Moolgavkar-Venzon-Knudson (MVK) model and that of Cohen and Ellwein; several of these are time-to-tumour models.
These dose-response models are usually applied to tumour-incidence data corresponding to only a limited number of experimental doses. This is due to the standard design of such bioassays. Instead of determining the complete dose-response curve, a carcinogenicity study is generally limited to three (or two) relatively high doses, with the maximum tolerated dose (MTD) as the highest. These high doses are used to overcome the inherently low statistical sensitivity (10 to 15% over background) of such bioassays, which follows from the relatively small number of animals used (for practical and other reasons). Because data for the low-dose region are not available (i.e., cannot be determined experimentally), extrapolation outside the range of observation is required. For almost all data sets, most of the above-listed models fit equally well in the observed dose range, due to the limited number of doses and animals. However, in the low-dose region these models diverge by several orders of magnitude, thereby introducing large uncertainties into the risk estimated for these low exposure levels.
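The divergence described above can be shown with a small calculation. The sketch below calibrates two simple model forms (a one-hit model, linear at low dose, and a quadratic two-hit-like variant) to the same single bioassay observation; the dose and incidence figures are invented for illustration only.

```python
import math

# Two dose-response models, each calibrated to the same hypothetical
# bioassay observation: 10% excess tumour incidence at 1.0 mg/kg/day.
# In the observable range they agree exactly; at environmental dose
# levels they diverge by orders of magnitude.

observed_dose, observed_risk = 1.0, 0.10

# One-hit model (approximately linear at low dose): P(d) = 1 - exp(-b*d)
b = -math.log(1 - observed_risk) / observed_dose

# Two-hit-like model (quadratic at low dose): P(d) = 1 - exp(-c*d**2)
c = -math.log(1 - observed_risk) / observed_dose**2

def one_hit(d):
    return 1 - math.exp(-b * d)

def two_hit(d):
    return 1 - math.exp(-c * d * d)

low_dose = 1e-4   # mg/kg/day, far below the tested range
r1, r2 = one_hit(low_dose), two_hit(low_dose)
print(f"one-hit: {r1:.2e}  two-hit: {r2:.2e}  ratio: {r1 / r2:.0f}")
```

Both models reproduce the observed data point perfectly, yet their low-dose risk predictions differ by roughly four orders of magnitude, which is the essence of the extrapolation problem.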
Because the actual form of the dose-response curve in the low-dose range cannot be generated experimentally, mechanistic insight into the process of carcinogenicity is crucial to be able to discriminate on this aspect between the various models. Comprehensive reviews discussing the various aspects of the different mathematical extrapolation models are presented in Kramer et al. (1995) and Park and Hawkins (1993).
Besides the current practice of mathematical modelling several alternative approaches have been proposed recently.
Currently, biologically based models such as the Moolgavkar-Venzon-Knudson (MVK) model are very promising, but at present they are not sufficiently well advanced for routine use and require much more specific information than is currently obtained in bioassays. Large studies (4,000 rats) such as those carried out on N-nitrosoalkylamines indicate the size of study required for the collection of such data, although even then it is not possible to extrapolate to low doses. Until these models are further developed they can be used only on a case-by-case basis.
The use of mathematical models for extrapolation below the experimental dose range is in effect equivalent to a safety factor approach with a large and ill-defined uncertainty factor. The simplest alternative would be to apply an assessment factor to the apparent “no effect level”, or the “lowest level tested”. The level used for this assessment factor should be determined on a case-by-case basis considering the nature of the chemical and the population being exposed.
The basis of the benchmark dose (BMD) approach is a mathematical model fitted to the experimental data within the observable range, used to estimate or interpolate a dose corresponding to a defined level of effect, such as a one, five or ten per cent increase in tumour incidence (ED01, ED05, ED10). As a ten per cent increase is about the smallest change that can be determined statistically in a standard bioassay, the ED10 is appropriate for cancer data. Using a BMD that is within the observable range of the experiment avoids the problems associated with dose extrapolation. Estimates of the BMD or its lower confidence limit reflect the doses at which changes in tumour incidence occurred, but are quite insensitive to the mathematical model used. A benchmark dose can be used in risk assessment as a measure of tumour potency and combined with appropriate assessment factors to set acceptable levels for human exposure.
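A minimal numerical sketch of the benchmark dose idea follows. The incidence data, the one-hit model choice, the grid-search fit and the assessment factor of 100 are all hypothetical simplifications; dedicated BMD software uses maximum likelihood fitting and reports confidence limits.

```python
import math

# Hypothetical bioassay data: fit a one-hit model to excess risk within
# the observed dose range, solve for the ED10 (dose giving a 10% excess
# tumour incidence), and apply an assessment factor.

doses = [0.0, 5.0, 15.0, 50.0]          # mg/kg/day
incidence = [0.02, 0.08, 0.22, 0.55]    # fraction of tumour-bearing animals

background = incidence[0]

def excess(d, b):
    """Excess risk over background under a one-hit model."""
    return (1 - background) * (1 - math.exp(-b * d))

# Least-squares fit of the single parameter b by a simple grid search.
best_b = min(
    (sum((excess(d, b) - (p - background)) ** 2
         for d, p in zip(doses, incidence)), b)
    for b in [i * 1e-4 for i in range(1, 5000)]
)[1]

# ED10: dose at which modelled excess risk equals 0.10; note it lies
# inside the observed range, so no low-dose extrapolation is needed.
ed10 = -math.log(1 - 0.10 / (1 - background)) / best_b

acceptable_exposure = ed10 / 100   # hypothetical assessment factor of 100
print(f"ED10 = {ed10:.2f} mg/kg/day; acceptable = {acceptable_exposure:.4f}")
```

The key design point is that the model is only interpolating between measured doses; the uncertainty is pushed into the explicit, arguable assessment factor rather than hidden in a model-dependent extrapolation.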
Krewski et al. (1990) have reviewed the concept of a “threshold of regulation” for chemical carcinogens. Based on data obtained from the carcinogen potency database (CPDB) for 585 experiments, the dose corresponding to a 10⁻⁶ risk was roughly log-normally distributed around a median of 70 to 90 ng/kg/day. Exposure to dose levels greater than this range would be considered unacceptable. The dose was estimated by linear extrapolation from the TD50 (the dose inducing tumours in 50% of the animals tested) and was within a factor of five to ten of the figure obtained from the linearized multistage model. Unfortunately, TD50 values are related to the MTD, which again casts doubt on the validity of the measurement. However, the TD50 will often be within or very close to the experimental data range.
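The linear extrapolation from a TD50 described above amounts to a one-line calculation. In this sketch the TD50 value is hypothetical, chosen so that the result lands in the 70 to 90 ng/kg/day range cited for the CPDB median.

```python
# Linear extrapolation from a TD50 to the dose associated with a 1-in-a-
# million lifetime risk, as used in the "threshold of regulation" work.
# The TD50 below is a hypothetical illustration, not a measured value.

td50_mg_per_kg_day = 40.0    # dose inducing tumours in 50% of animals
target_risk = 1e-6

# Linearity through the origin implies risk/dose = 0.5 / TD50, so:
virtually_safe_dose = td50_mg_per_kg_day * target_risk / 0.5

print(f"{virtually_safe_dose * 1e6:.1f} ng/kg/day")   # mg -> ng conversion
```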
An approach such as a threshold of regulation would require much more consideration of biological, analytical and mathematical issues, and a much wider database, before it could be adopted. Further investigation into the potencies of various carcinogens may throw more light on this area.
Looking back at the original expectations for the regulation of (environmental) carcinogens, namely to achieve a major reduction in cancer, the results to date appear disappointing. Over the years it became apparent that the number of cancer cases estimated to be produced by regulatable carcinogens was disconcertingly small. Considering the high expectations that launched the regulatory efforts in the 1970s, a major anticipated reduction in the cancer death rate has not been achieved in terms of the estimated effects of environmental carcinogens, not even with ultraconservative quantitative assessment procedures. The main characteristic of the EPA procedures is that low-dose extrapolations are made in the same way for each chemical regardless of the mechanism of tumour formation in experimental studies. It should be noted, however, that this approach stands in sharp contrast to those taken by other governmental agencies. As indicated above, the EU and several European governments (Denmark, France, Germany, Italy, the Netherlands, Sweden, Switzerland and the United Kingdom) distinguish between genotoxic and non-genotoxic carcinogens and approach risk estimation differently for the two categories. In general, non-genotoxic carcinogens are treated as threshold toxicants: no-effect levels are determined, and uncertainty factors are used to provide an ample margin of safety. To determine whether or not a chemical should be regarded as non-genotoxic is a matter of scientific debate and requires clear expert judgement.
The fundamental issue is: what is the cause of cancer in humans, and what is the role of environmental carcinogens in that causation? The hereditary aspects of cancer in humans are much more important than previously anticipated. The key to significant advancement in the risk assessment of carcinogens is a better understanding of the causes and mechanisms of cancer. The field of cancer research is entering a very exciting era. Molecular research may radically alter the way we view the impact of environmental carcinogens and the approaches to controlling and preventing cancer, both for the general public and in the workplace. Risk assessment of carcinogens needs to be based on concepts of mechanisms of action that are, in fact, only just emerging. One important aspect is the mechanism of heritable cancer and the interaction of carcinogens with this process. This knowledge will have to be incorporated into the systematic and consistent methodology that already exists for the risk assessment of carcinogens.