In 1966, long before job stress and psychosocial factors became household expressions, a special report entitled “Protecting the Health of Eighty Million Workers - A National Goal for Occupational Health” was issued to the Surgeon General of the United States (US Department of Health and Human Services 1966). The report was prepared under the auspices of the National Advisory Environmental Health Committee to provide direction to Federal programmes in occupational health. Among its many observations, the report noted that psychological stress was increasingly apparent in the workplace, presenting “... new and subtle threats to mental health,” and possible risk of somatic disorders such as cardiovascular disease. Technological change and the increasing psychological demands of the workplace were listed as contributing factors. The report concluded with a list of two dozen “urgent problems” requiring priority attention, including occupational mental health and contributing workplace factors.
Thirty years later, this report has proven remarkably prophetic. Job stress has become a leading source of worker disability in North America and Europe. In 1990, 13% of all worker disability cases handled by Northwestern National Life, a major US underwriter of worker compensation claims, were due to disorders with a suspected link to job stress (Northwestern National Life 1991). A 1985 study by the National Council on Compensation Insurance found that one type of claim, involving psychological disability due to “gradual mental stress” at work, had grown to 11% of all occupational disease claims (National Council on Compensation Insurance 1985).*
* In the United States, occupational disease claims are distinct from injury claims, which tend to greatly outnumber disease claims.
These developments are understandable considering the demands of modern work. A 1991 survey of European Union members found that “The proportion of workers who complain from organizational constraints, which are in particular conducive to stress, is higher than the proportion of workers complaining from physical constraints” (European Foundation for the Improvement of Living and Working Conditions 1992). Similarly, a more recent study of the Dutch working population found that one-half of the sample reported a high work pace, three-fourths of the sample reported poor possibilities of promotion, and one-third reported a poor fit between their education and their jobs (Houtman and Kompier 1995). On the American side, data on the prevalence of job stress risk factors in the workplace are less available. However, in a recent survey of several thousand US workers, over 40% of the workers reported excessive workloads and said they were “used up” and “emotionally drained” at the end of the day (Galinsky, Bond and Friedman 1993).
The impact of this problem in terms of lost productivity, disease and reduced quality of life is undoubtedly formidable, although difficult to estimate reliably. However, recent analyses of data from over 28,000 workers by the Saint Paul Fire and Marine Insurance company are of interest and relevance. This study found that time pressure and other emotional and personal problems at work were more strongly associated with reported health problems than any other personal life stressor; more so than even financial or family problems, or death of a loved one (St. Paul Fire and Marine Insurance Company 1992).
Looking to the future, rapid changes in the fabric of work and the workforce pose unknown, and possibly increased, risks of job stress. For example, in many countries the workforce is rapidly ageing at a time when job security is decreasing. In the United States, corporate downsizing continues almost unabated into the last half of the decade at a rate of over 30,000 jobs lost per month (Roy 1995). In the above-cited study by Galinsky, Bond and Friedman (1993) nearly one-fifth of the workers thought it likely they would lose their jobs in the forthcoming year. At the same time the number of contingent workers, who are generally without health benefits and other safety nets, continues to grow and now comprises about 5% of the workforce (USBLS 1995).
The aim of this chapter is to provide an overview of current knowledge on conditions which lead to stress at work and associated health and safety problems. These conditions, which are commonly referred to as psychosocial factors, include aspects of the job and work environment such as organizational climate or culture, work roles, interpersonal relationships at work, and the design and content of tasks (e.g., variety, meaning, scope, repetitiveness, etc.). The concept of psychosocial factors extends also to the extra-organizational environment (e.g., domestic demands) and aspects of the individual (e.g., personality and attitudes) which may influence the development of stress at work. Frequently, the expressions work organization or organizational factors are used interchangeably with psychosocial factors in reference to working conditions which may lead to stress.
This section of the Encyclopaedia begins with descriptions of several models of job stress which are of current scientific interest, including the job demands-job control model, the person-environment (P-E) fit model, and other theoretical approaches to stress at work. Like all contemporary notions of job stress, these models have a common theme: job stress is conceptualized in terms of the relationship between the job and the person. According to this view, job stress and the potential for ill health develop when job demands are at variance with the needs, expectations or capacities of the worker. This core feature is implicit in figure 34.1, which shows the basic elements of a stress model favoured by researchers at the National Institute for Occupational Safety and Health (NIOSH). In this model, work-related psychosocial factors (termed stressors) result in psychological, behavioural and physical reactions which may ultimately influence health. However, as illustrated in figure 34.1, individual and contextual factors (termed stress moderators) intervene to influence the effects of job stressors on health and well-being. (See Hurrell and Murphy 1992 for a more elaborate description of the NIOSH stress model.)
But putting aside this conceptual similarity, there are also non-trivial theoretical differences among these models. For example, unlike the NIOSH and P-E fit models of job stress, which acknowledge a host of potential psychosocial risk factors in the workplace, the job demands-job control model focuses most intensely on a more limited range of psychosocial dimensions pertaining to psychological workload and opportunity for workers to exercise control (termed decision latitude) over aspects of their jobs. Further, both the demand-control and the NIOSH models can be distinguished from the P-E fit models in terms of the focus placed on the individual. In the P-E fit model, emphasis is placed on individuals’ perceptions of the balance between features of the job and individual attributes. This focus on perceptions provides a bridge between P-E fit theory and another variant of stress theory attributed to Lazarus (1966), in which individual differences in appraisal of psychosocial stressors and in coping strategies become critically important in determining stress outcomes. In contrast, while not denying the importance of individual differences, the NIOSH stress model gives primacy to environmental factors in determining stress outcomes, as suggested by the geometry of the model illustrated in figure 34.1. In essence, the model suggests that most stressors will be threatening to most of the people most of the time, regardless of circumstances. A similar emphasis can be seen in other models of stress and job stress (e.g., Cooper and Marshall 1976; Kagan and Levi 1971; Matteson and Ivancevich 1987).
These differences have important implications for both guiding job stress research and intervention strategies at the workplace. The NIOSH model, for example, argues for primary prevention of job stress via attention first to psychosocial stressors in the workplace and, in this regard, is consistent with a public health model of prevention. Although a public health approach recognizes the importance of host factors or resistance in the aetiology of disease, the first line of defence in this approach is to eradicate or reduce exposure to environmental pathogens.
The NIOSH stress model illustrated in figure 34.1 provides an organizing framework for the remainder of this section. Following the discussions of job stress models are short articles containing summaries of current knowledge on workplace psychosocial stressors and on stress moderators. These subsections address conditions which have received wide attention in the literature as stressors and stress moderators, as well as topics of emerging interest such as organizational climate and career stage. Prepared by leading authorities in the field, each summary provides a definition and brief overview of relevant literature on the topic. Further, to maximize the utility of these summaries, each contributor has been asked to include information on measurement or assessment methods and on prevention practices.
The final subsection of the chapter reviews current knowledge on a wide range of potential health risks of job stress and underlying mechanisms for these effects. Discussion ranges from traditional concerns, such as psychological and cardiovascular disorders, to emerging topics such as depressed immune function and musculoskeletal disease.
In summary, recent years have witnessed unprecedented changes in the design and demands of work, and the emergence of job stress as a major concern in occupational health. This section of the Encyclopaedia tries to promote understanding of psychosocial risks posed by the evolving work environment, and thus better protect the well-being of workers.
In the language of engineering, stress is “a force which deforms bodies”. In biology and medicine, the term usually refers to a process in the body, to the body’s general plan for adapting to all the influences, changes, demands and strains to which it is exposed. This plan swings into action, for example, when a person is assaulted on the street, but also when someone is exposed to toxic substances or to extreme heat or cold. It is not just physical exposures which activate this plan, however; mental and social ones do so as well. It is activated, for instance, if we are insulted by our supervisor, reminded of an unpleasant experience, expected to achieve something of which we do not believe we are capable, or if, with or without cause, we worry about our job or marriage.
There is something common to all these cases in the way the body attempts to adapt. This common denominator - a kind of “revving up” or “stepping on the gas” - is stress. Stress is, then, a stereotype in the body’s responses to influences, demands or strains. Some level of stress is always to be found in the body, just as, to draw a rough parallel, a country maintains a certain state of military preparedness, even in peacetime. Occasionally this preparedness is intensified, sometimes with good cause and at other times without.
In this way the stress level affects the rate at which processes of wear and tear on the body take place. The more “gas” given, the higher the rate at which the body’s engine is driven, and hence the more quickly the “fuel” is used up and the “engine” wears out. Another metaphor also applies: if you burn a candle with a high flame, at both ends, it will give off brighter light but will also burn down more quickly. A certain amount of fuel is necessary, however; otherwise the engine will stand still and the candle will go out - that is, the organism would be dead. Thus, the problem is not that the body has a stress response, but that the degree of stress - the rate of wear and tear - to which it is subject may be too great. This stress response varies from one minute to another even in one individual, the variation depending in part on the nature and state of the body and in part on the external influences and demands - the stressors - to which the body is exposed. (A stressor is thus something that produces stress.)
Sometimes it is difficult to determine whether stress in a particular situation is good or bad. Take, for instance, the exhausted athlete on the winner’s stand, or the newly appointed but stress-racked executive. Both have achieved their goals. In terms of pure accomplishment, one would have to say that their results were well worth the effort. In psychological terms, however, such a conclusion is more doubtful. A good deal of torment may have been necessary to get so far, involving long years of training or never-ending overtime, usually at the expense of family life. From the medical viewpoint such achievers may be considered to have burnt their candles at both ends. The result could be physiological; the athlete may rupture a muscle or two and the executive develop high blood pressure or have a heart attack.
An example may clarify how stress reactions can arise at work and what they might lead to in terms of health and quality of life. Let us imagine the following situation for a hypothetical male worker. Based on economic and technical considerations, management has decided to break up a production process into very simple and primitive elements which are to be performed on an assembly line. Through this decision, a social structure is created and a process set into motion which can constitute the starting point in a stress- and disease-producing sequence of events. The new situation becomes a psychosocial stimulus for the worker, when he first perceives it. These perceptions may be further influenced by the fact that the worker may have previously received extensive training, and thus was consequently expecting a work assignment which required higher qualifications, not reduced skill levels. In addition, past experience of work on an assembly line was strongly negative (that is, earlier environmental experiences will influence the reaction to the new situation). Furthermore, the worker’s hereditary factors make him more prone to react to stressors with an increase in blood pressure.
He becomes more irritable, and perhaps his wife criticizes him for accepting his new assignment and bringing his problems home. As a result of all these factors, the worker reacts to his feelings of distress, perhaps with an increase in alcohol consumption, or by experiencing undesirable physiological reactions such as an elevation in blood pressure. The troubles at work and in the family continue, and his reactions, originally of a transient type, become sustained. Eventually, he may enter a chronic anxiety state or develop alcoholism or chronic hypertensive disease. These problems, in turn, increase his difficulties at work and with his family, and may also increase his physiological vulnerability. A vicious cycle may set in which may end in a stroke, a workplace accident or even suicide. This example illustrates the environmental programming involved in the way a worker reacts behaviourally, physiologically and socially, leading to increased vulnerability, impaired health and even death.
According to an important International Labour Organization (ILO) (1975) resolution, work should not only respect workers’ lives and health and leave them free time for rest and leisure, but also allow them to serve society and achieve self-fulfilment by developing their personal capabilities. These principles were also set down as early as 1963, in a report from the London Tavistock Institute (Document No. T813) which provided the following general guidelines for job design:
1. The job should be reasonably demanding in terms other than sheer endurance and provide at least a minimum of variety.
2. The worker should be able to learn on the job and go on learning.
3. The job should comprise some area of decision-making that the individual can call his or her own.
4. There should be some degree of social support and recognition in the workplace.
5. The worker should be able to relate what he or she does or produces to social life.
6. The worker should feel that the job leads to some sort of desirable future.
The Organization for Economic Cooperation and Development (OECD), however, draws a less hopeful picture of the reality of working life, pointing out that:
· Work has been accepted as a duty and a necessity for most adults.
· Work and workplaces have been designed almost exclusively with reference to criteria of efficiency and cost.
· Technological and capital resources have been accepted as the imperative determinants of the optimum nature of jobs and work systems.
· Changes have been motivated largely by aspirations to unlimited economic growth.
· The judgement of the optimum designs of jobs and choice of work objectives has resided almost wholly with managers and technologists, with only a slight intrusion from collective bargaining and protective legislation.
· Other societal institutions have taken on forms that serve to sustain this type of work system.
In the short run, the developments on this OECD list have brought greater productivity at lower cost, as well as an increase in wealth. In the long run, however, they have often brought more worker dissatisfaction, alienation and possibly ill health which, considering society in general, may in turn affect the economic sphere, although the economic costs of these effects have only recently been taken into consideration (Cooper, Luikkonen and Cartwright 1996; Levi and Lunde-Jensen 1996).
We also tend to forget that, biologically, humankind has not changed much during the last 100,000 years, whereas the environment - and in particular the work environment - has changed dramatically, above all during the past century. This change has been partly for the better; however, some of these “improvements” have been accompanied by unexpected side effects. For example, data collected by the National Swedish Central Bureau of Statistics during the 1980s showed that:
· 11% of all Swedish employees are continuously exposed to deafening noise.
· 15% have work which makes them very dirty (oil, paint, etc.).
· 17% have inconvenient working hours, i.e., not only daytime work but also early or late night work, shift work or other irregular working hours.
· 9% have gross working hours exceeding 11 per day (this concept includes hours of work, breaks, travelling time, overtime, etc.; in other words, that part of the day which is set aside for work).
· 11% have work that is considered both “hectic” and “monotonous”.
· 34% consider their work “mentally exacting”.
· 40% consider themselves “without influence on the arrangement of time for breaks”.
· 45% consider themselves without “opportunities to learn new things” at their work.
· 26% have an instrumental attitude to their work. They consider “their work to yield nothing except the pay - i.e., no feeling of personal satisfaction”. Work is regarded purely as an instrument for acquiring an income.
In its major study of conditions of work in the 12 member States of the European Union at that time (1991/92), the European Foundation (Paoli 1992) found that 30% of the workforce regarded their work as a risk to their health; that 23 million workers did night work for more than 25% of their total hours worked; that one in three reported highly repetitive, monotonous work; that one in five men and one in six women worked under “continuous time pressure”; and that one in four carried heavy loads or worked in a twisted or painful position for more than 50% of his or her working time.
As already indicated, stress is caused by a bad “person- environment fit”, objectively, subjectively, or both, at work or elsewhere and in an interaction with genetic factors. It is like a badly fitting shoe: environmental demands are not matched to individual ability, or environmental opportunities do not measure up to individual needs and expectations. For example, the individual is able to perform a certain amount of work, but much more is required, or on the other hand no work at all is offered. Another example would be that the worker needs to be part of a social network, to experience a sense of belonging, a sense that life has meaning, but there may be no opportunity to meet these needs in the existing environment and the “fit” becomes bad.
Any fit will depend on the “shoe” as well as on the “foot”, on situational factors as well as on individual and group characteristics. The most important situational factors that give rise to “misfit” can be categorized as follows:
Quantitative overload. Too much to do, time pressure and repetitive work-flow. This is to a great extent the typical feature of mass production technology and routinized office work.
Qualitative underload. Too narrow and one-sided job content, lack of stimulus variation, no demands on creativity or problem-solving, or low opportunities for social interaction. These jobs seem to become more common with suboptimally designed automation and increased use of computers in both offices and manufacturing, even though there may be instances of the opposite.
Role conflicts. Everybody occupies several roles concurrently. We are the superiors of some people and the subordinates of others. We are children, parents, marital partners, friends and members of clubs or trade unions. Conflicts easily arise among our various roles and are often stress evoking, as when, for instance, demands at work clash with those from a sick parent or child or when a supervisor is divided between loyalty to superiors and to fellow workers and subordinates.
Lack of control over one’s own situation. When someone else decides what to do, when and how; for example, in relation to work pace and working methods, when the worker has no influence, no control, no say. Or when there is uncertainty or lack of any obvious structure in the work situation.
Lack of social support, at home and from one’s supervisor or fellow workers.
Physical stressors. Such factors can influence the worker both physically and chemically, for example, direct effects on the brain of organic solvents. Secondary psychosocial effects can also originate from the distress caused by, say, odours, glare, noise, extremes of air temperature or humidity and so on. These effects can also be due to the worker’s awareness, suspicion or fear that he is exposed to life-threatening chemical hazards or to accident risks.
Finally, real life conditions at work and outside work usually imply a combination of many exposures. These might become superimposed on each other in an additive or synergistic way. The straw which breaks the camel’s back may therefore be a rather trivial environmental factor, but one that comes on top of a very considerable, pre-existing environmental load.
Some of the specific stressors in industry merit special discussion, namely those characteristic of:
· mass production technology
· highly automated work processes
· shift work.
Mass production technology. Over the past century work has become fragmented in many workplaces, changing from a well defined job activity with a distinct and recognized end-product, into numerous narrow and highly specified subunits which bear little apparent relation to the end-product. The growing size of many factory units has tended to result in a long chain of command between management and the individual workers, accentuating remoteness between the two groups. The worker also becomes remote from the consumer, since the rapidly elaborated machinery of marketing, distribution and selling interposes many steps between the producer and the consumer.
Mass production, thus, normally involves not just a pronounced fragmentation of the work process but also a decrease in worker control of the process. This is partly because work organization, work content and work pace are determined by the machine system. All these factors usually result in monotony, social isolation, lack of freedom and time pressure, with possible long-term effects on health and well-being.
Mass production, moreover, favours the introduction of piece rates. In this regard, it can be assumed that the desire - or necessity - to earn more can, for a time, induce the individual to work harder than is good for the organism and to ignore mental and physical “warnings”, such as a feeling of tiredness, nervous problems and functional disturbances in various organs or organ systems. Another possible effect is that the employee, bent on raising output and earnings, infringes safety regulations, thereby increasing the risk of occupational disease and of accidents to themselves and others (e.g., lorry drivers on piece rates).
Highly automated work processes. In automated work the repetitive, manual elements are taken over by machines, and the workers are left with mainly supervisory, monitoring and controlling functions. This kind of work is generally rather skilled, not regulated in detail and the worker is free to move about. Accordingly, the introduction of automation eliminates many of the disadvantages of the mass-production technology. However, this holds true mainly for those stages of automation where the operator is indeed assisted by the computer and maintains some control over its services. If, however, operator skills and knowledge are gradually taken over by the computer - a likely development if decision making is left to economists and technologists - a new impoverishment of work may result, with a re-introduction of monotony, social isolation and lack of control.
Monitoring a process usually calls for sustained attention and readiness to act throughout a monotonous term of duty, a requirement that does not match the brain’s need for a reasonably varied flow of stimuli in order to maintain optimal alertness. It is well documented that the ability to detect critical signals declines rapidly even during the first half-hour in a monotonous environment. This may add to the strain inherent in the awareness that temporary inattention and even a slight error could have extensive economic and other disastrous consequences.
Other critical aspects of process control are associated with very special demands on mental skill. The operators are concerned with symbols, abstract signals on instrument arrays and are not in touch with the actual product of their work.
Shift work. In the case of shift work, rhythmical biological changes do not necessarily coincide with corresponding environmental demands. Here, the organism may “step on the gas” and activation occurs at a time when the worker needs to sleep (for example, during the day after a night shift), and deactivation correspondingly occurs at night, when the worker may need to work and be alert.
A further complication arises because workers usually live in a social environment which is not designed for the needs of shift workers. Last but not least, shift workers must often adapt to regular or irregular changes in environmental demands, as in the case of rotating shifts.
In summary, the psychosocial demands of the modern workplace are often at variance with the workers’ needs and capabilities, leading to stress and ill health. This discussion provides only a snapshot of psychosocial stressors at work, and how these unhealthy conditions can arise in today’s workplace. In the sections that follow, psychosocial stressors are analysed in greater detail with respect to their sources in modern work systems and technologies, and with respect to their assessment and control.
Most previous stress theories were developed to describe reactions to “inevitable” acute stress in situations threatening biological survival (Cannon 1935; Selye 1936). However, the Demand/Control model was developed for work environments where “stressors” are chronic, not initially life threatening, and are the product of sophisticated human organizational decision making. Here, the controllability of the stressor is very important, and becomes more important as we develop ever more complex and integrated social organizations, with ever more complex limitations on individual behaviour. The Demand/Control model (Karasek 1976; Karasek 1979; Karasek and Theorell 1990), which is discussed below, is based on psychosocial characteristics of work: the psychological demands of work and a combined measure of task control and skill use (decision latitude). The model predicts, first, stress-related illness risk, and, secondly, active/passive behavioural correlates of jobs. It has mainly been used in epidemiological studies of chronic disease, such as coronary heart disease.
Pedagogically, it is a simple model which can help to demonstrate clearly several important issues relevant for social policy discussions of occupational health and safety:
1. that the social organizational characteristics of work, and not just physical hazards, lead to illness and injury
2. that stress-related consequences are related to the social organization of work activity and not just its demands
3. that work’s social activity affects stress-related risks, not just person-based characteristics
4. that the possibility of both “positive stress” and “negative stress” can be explained in terms of combinations of demands and control
5. that the model - with its basic face validity - can provide a simple starting point for discussions of personal stress responses with shop-floor workers, clerical staff and other lay people for whom this is a sensitive topic.
Beyond the health consequences of work, the model also captures the perspectives of the work’s organizers, who are concerned with productivity results. The psychological demand dimension relates to “how hard workers work”; the decision latitude dimension reflects work organization issues of who makes decisions and who does what tasks. The model’s active learning hypothesis describes the motivation processes of high-performance work. The economic logic of extreme labour specialization - the past conventional wisdom about productive job design - is contradicted by the adverse health consequences in the Demand/Control model. The model implies alternative, health-promoting perspectives on work organization which emphasize broad skills and participation for workers, and which may also bring economic advantages in innovative manufacturing and in service industries because of the increased possibilities for learning and participation.
The first hypothesis is that the most adverse reactions of psychological strain (fatigue, anxiety, depression and physical illness) occur when the psychological demands of the job are high and the worker’s decision latitude in the task is low (figure 34.2, lower right cell). These undesirable stress-like reactions, which result when arousal is combined with restricted opportunities for action or coping with the stressor, are referred to as psychological strain (the term stress is not used at this point, as it is defined differently by many groups).
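The two-dimensional logic of the Demand/Control model lends itself to a simple sketch. The short snippet below is purely illustrative: the 0-1 rating scale, the midpoint threshold and the function name are assumptions for demonstration (real studies derive demand and latitude scores from survey instruments), but the four resulting job types - high strain, active, passive and low strain - follow the model as described here.

```python
# Illustrative sketch of the Demand/Control model's four job quadrants.
# The 0-1 scales and the 0.5 cut-off are hypothetical conveniences;
# epidemiological studies use survey-based scores and sample medians.

def classify_job(demands: float, decision_latitude: float,
                 midpoint: float = 0.5) -> str:
    """Return the Demand/Control quadrant for a job rated on 0-1 scales."""
    high_demands = demands >= midpoint
    high_control = decision_latitude >= midpoint
    if high_demands and not high_control:
        return "high strain"   # greatest predicted risk of strain and illness
    if high_demands and high_control:
        return "active"        # active learning, motivation hypothesis
    if not high_demands and not high_control:
        return "passive"       # low activity, little learning
    return "low strain"

# Example: an assembly-line job - heavy demands, almost no latitude.
print(classify_job(demands=0.9, decision_latitude=0.2))  # high strain
```

The point of the sketch is only that strain is predicted by the combination of the two dimensions, not by demands alone: the same high-demand job falls in the "active" rather than the "high strain" cell when decision latitude is high.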
For example, the assembly-line worker has almost every behaviour rigidly constrained. In a situation of increased demands (“speed-up”), more than just the constructive response of arousal, the often helpless, long-lasting and negatively experienced response of residual psychological strain occurs. When the lunch-time rush occurs (Whyte 1948), it is the restaurant worker who does not know how to “control” her customers’ behaviour (“get the jump on the customer”) who experiences the greatest strain on the job. Kerckhoff and Back (1968) describe garment workers under heavy deadline pressure and the subsequent threat of layoff. They conclude that when the actions normally needed to cope with job pressures cannot be taken, the most severe behavioural symptoms of strain occur (fainting, hysteria, social contagion). It is not only the freedom of action as to how to accomplish the formal work task that relieves strain; it may also be the freedom to engage in the informal “rituals” - the coffee break, smoke break or fidgeting - which serve as supplementary “tension release” mechanisms during the work day (Csikszentmihalyi 1975). These are often social activities with other workers - precisely those activities eliminated as “wasted motions” and “soldiering” by Frederick Taylor’s methods (1911 (1967)). This implies a needed expansion of the model to include social relations and social support.
In the model, decision latitude refers to the worker’s ability to control his or her own activities and skill usage, not to control others. Decision latitude scales have two components: task authority - a socially predetermined control over detailed aspects of task performance (also called autonomy); and skill discretion - control over the use of skills by the individual, also socially determined at work (and often called variety or “substantive complexity” (Hackman and Lawler 1971; Kohn and Schooler 1973)). In modern organizational hierarchies, the highest levels of knowledge legitimate the exercise of the highest levels of authority, and workers with limited-breadth, specialized tasks are coordinated by managers with higher authority levels. Skill discretion and authority over decisions are so closely related theoretically and empirically that they are often combined.
Examples of work’s psychological demands - “how hard you work” - include the presence of deadlines, the mental arousal or stimulation necessary to accomplish the task, and coordination burdens. The physical demands of work are not included (although psychological arousal comes with physical exertion). Other components of psychological job demands are stressors arising from personal conflicts. Fear of losing a job or of skill obsolescence may obviously be a contributor. Overall, Buck (1972) notes that “task requirements” (workload) are the central component of psychological job demands for most workers, in spite of the above diversity. While simple measures of working hours, in moderate ranges, do not seem to strongly predict illness, one such measure - shiftwork, especially rotating shiftwork - is associated with substantial social problems as well as increased illness.
While some level of “demands” is necessary to achieve new learning and effective performance on the job (i.e., interest), too high a level is obviously harmful. This has implied the inverted “U-shaped” curve of the “optimal” level of demands in the well-known General Adaptation Syndrome of Selye (1936) and the related, classic theories of Yerkes and Dodson (1908) and Wundt (1922) on stress and performance.* However, our findings show that most work situations have an overload, rather than an underload, problem.
* Although Selye’s “U-shaped” association between demands and stress purported to be unidimensional along a stressor axis, it probably also included a second dimension of constraint in his animal experiments - and thus was really a composite model of stress-related physiological deterioration - potentially similar to the high demand, low control situation, as other researchers have found (Weiss 1971).
When control on the job is high, and psychological demands are also high but not overwhelming (figure 34.2, upper right cell), learning and growth are the predicted behavioural outcomes (i.e., the active learning hypothesis). Such a job is termed the “active job”, since research in both the Swedish and American populations has shown this to be the most active group outside of work in leisure and political activity, in spite of heavy work demands (Karasek and Theorell 1990). Only average psychological strain is predicted for the “active job” because much of the energy aroused by the job’s many stressors (“challenges”) is translated into direct action - effective problem solving - with little residual strain to cause disturbance. This hypothesis parallels White’s “concept of competence” (1959): the psychological state of individuals in challenging circumstances is enhanced by increasing “demands” - an environment-based theory of motivation. The model also predicts that the growth and learning stimuli of these settings, when they occur in a job context, are conducive to high productivity.
In the Demand/Control model, learning occurs in situations which require both individual psychological energy expenditure (demands or challenges) and the exercise of decision-making capability. As the individual with decision-making latitude makes a “choice” as to how best to cope with a new stressor, that new behavioural response, if effective, will be incorporated into the individual’s repertoire of coping strategies (i.e., it will be “learned”). The potential activity level in the future will be raised because of the expanded range of solutions to environmental challenges, yielding an increase in motivation. Opportunities for constructive reinforcement of behaviour patterns are optimal when the challenges in the situation are matched by the individual’s control over alternatives or skill in dealing with those challenges (Csikszentmihalyi 1975). The situation will be neither unchallengingly simple (and thus unimportant) nor so demanding that appropriate actions cannot be taken because of a high anxiety level (the psychological “strain” situation).
The Demand/Control model predicts that situations of low demand and low control (figure 34.2, opposite end of diagonal B) create a very “unmotivating” job setting which leads to “negative learning” or gradual loss of previously acquired skills. Evidence shows that disengagement from leisure and political activity outside the job appears to increase over time in such jobs (Karasek and Theorell 1990). These “passive” jobs may be the result of “learned helplessness”, discussed by Seligman (1975), arising from a sequence of job situations which reject the worker’s initiatives.
The fact that environmental demands can thus be conceptualized in both positive and negative terms is congruent with the common understanding that there is both “good” and “bad” stress. Evidence that at least two separable mechanisms must be used to describe “psychological functioning” on the job is one of the primary validations of the multidimensional “Demand/Control” model structure. The “active”-“passive” diagonal B implies that learning mechanisms are independent of (i.e., orthogonal to) psychological strain mechanisms. This yields a parsimonious model with two broad dimensions of work activity and two major psychological mechanisms (the primary reason for calling it an “interaction” model (Southwood 1978)). (A multiplicative interaction of the axes is too restrictive a test for most sample sizes.)
The Demand/Control model has sometimes been assumed to be congruent with a model of “demands and resources”, allowing a simple fit with currently common “cost/benefit” thinking - where the positive “benefits” of resources are subtracted from the negative “costs” of demands. “Resources” allows inclusion of many factors outside the worker’s immediate task experience that are of obvious importance. However, the logic of the Demand/Control model hypotheses cannot be collapsed into a unidimensional form. The distinction between decision latitude and psychological stressors must be retained because the model predicts both learning and job strain - from two different combinations of demands and control which are not simply mathematically additive.
Lack of job “control” is not merely a negative stressor, and the “demands and challenges” associated with lack of control are not associated with increased learning. Having decision latitude over the work process will reduce a worker’s stress but increase his learning, while psychological demands would increase both learning and stress. This distinction between demands and control allows an understanding of the otherwise unclear predictions of the effects of: (a) “responsibility”, which actually combines high demands and high decision latitude; (b) “qualitative job demands”, which also measure the possibility of decision-making about what skills to employ; and (c) “piece work”, where the decision latitude to work faster almost directly brings with it increased demands.
The Demand/Control model has been usefully expanded by Johnson through the addition of social support as a third dimension (Johnson 1986; Kristensen 1995). The primary hypothesis - that jobs which are high in demands, low in control and also low in social support at work (high “iso-strain”) carry the highest risks of illness - has been empirically successful in a number of chronic disease studies. The addition clearly acknowledges the need for any theory of job stress to assess social relations at the workplace (Karasek and Theorell 1990; Johnson and Hall 1988). Social support “buffering” of psychological strain may depend on the degree of social and emotional integration and trust between co-workers, supervisors and others - “socio-emotional support” (Israel and Antonnuci 1987). The addition of social support also makes the Demand/Control perspective more useful in job redesign. Changes in social relations between workers (i.e., autonomous work groups) and changes in decision latitude are almost inseparable in job redesign processes, particularly “participatory” processes (House 1981).
However, a full theoretical treatment of the impact of social relations on both job stress and behaviour is a very complex problem which needs further work. The associations between measures of co-worker and supervisor interaction and chronic disease are less consistent than for decision latitude, and social relations can strongly increase, as well as decrease, the nervous system arousal that may be the risk-inducing link between social situation and illness. The dimensions of work experience that reduce job stress are not necessarily the same dimensions that are relevant for active behaviour in the Demand/Control model. Facilitating collective forms of active behaviour would likely focus on the distribution of competences and the ability to use them, communication structure and skills, coordination possibilities and “emotional intelligence skills” (Goleman 1995) - as well as the trust important for social support.
Job characteristics can be displayed in a four-quadrant diagram using the average job characteristics of occupations in the US Census occupation codes (Karasek and Theorell 1990). The “active” job quadrant, with high demands and high control, has high-prestige occupations: lawyers, judges, physicians, professors, engineers, nurses and managers of all kinds. The “passive” job quadrant, with low demands and low control, has clerical workers such as stock and billing clerks, transport operatives and low-status service personnel such as janitors. The “high strain” quadrant, with high demands and low control, has machine-paced operatives such as assemblers, cutting operatives, inspectors and freight handlers, as well as other low-status service operatives such as waiters or cooks. Female-dominated occupations are frequent here (garment stitchers, waitresses, telephone operators and other office automation workers). The “low strain” quadrant, with low demands and high control, has self-paced occupations such as repairmen, sales clerks, foresters, linemen and natural scientists, which often involve significant training and self-pacing.
Thus, executives and professionals have a moderate level of stress, and not the highest level of stress, as popular belief often holds. While “managerial stress” certainly exists because of the high psychological demands that come with these jobs, it appears that the frequent occasions for decision-making and for deciding how to do the job are a significant stress moderator. Of course, at the highest status levels, executive jobs consist of decision-making as the primary psychological demand, and there the Demand/Control model fails. However, the implication here is that executives could reduce their stress if they made fewer decisions, and lower status workers would be better off with more decision opportunities, so that all groups could be better off with a more equal share of decision power.
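The four-quadrant classification above can be sketched as a simple decision rule. The following is a minimal illustrative sketch only, assuming demand and control scores have already been standardized so that zero is the population median (a hypothetical median-split scoring, not the procedure of Karasek’s actual survey instruments):

```python
# Illustrative sketch of the Demand/Control quadrants.
# Scores are assumed standardized so that 0.0 is the population median;
# the thresholds and example scores are hypothetical.

def quadrant(demands: float, control: float) -> str:
    """Classify a job by psychological demands and decision latitude."""
    if demands > 0 and control > 0:
        return "active"        # learning and growth predicted
    if demands > 0 and control <= 0:
        return "high strain"   # highest predicted risk of psychological strain
    if demands <= 0 and control > 0:
        return "low strain"    # self-paced, low-risk situation
    return "passive"           # risk of gradual loss of skills

# Hypothetical median-split scores for example occupations
print(quadrant(0.8, 0.9))     # e.g., engineer: "active"
print(quadrant(0.7, -0.6))    # e.g., machine-paced assembler: "high strain"
print(quadrant(-0.5, -0.4))   # e.g., billing clerk: "passive"
```

The point of the sketch is only that the model is two-dimensional: the same demand score falls into different quadrants, with different predicted outcomes, depending on the control score.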
Men are more likely than women to have high control over their work process at the task level, with a difference as great as wage differentials (Karasek and Theorell 1990). Another major gender difference is the negative correlation between decision latitude and demands for women: women with low control also have higher job demands. This means that women are several times as likely to hold high strain jobs in the full working population. By contrast, men’s high demand jobs are generally accompanied by somewhat higher decision latitude (“authority commensurate with responsibility”).
The Demand/Control model arises out of a theoretical integration of several disparate scientific directions. Thus, it falls outside the boundaries of a number of established scientific traditions from which it has gained contributions or with which it is often contrasted: mental health epidemiology, sociology, stress physiology, cognitive psychology and personality psychology. Some of these previous stress theories have focused on a person-based causal explanation, while the Demand/Control model predicts a stress response to social and psychological environments. However, the Demand/Control model has attempted to provide a set of hypotheses interfacing with person-based perspectives. In addition, linkage to macro social organizational and political economic issues, such as social class, has also been proposed. These theoretical integrations and contrasts with other theories are discussed below at several levels. The linkages below provide the background for an extended set of scientific hypotheses.
One area of stress theory grows out of the currently popular field of cognitive psychology. The central tenet of the cognitive model of human psychological functioning is that the processes of perception and interpretation of the external world determine the development of psychological states in the individual. Mental workload is defined as the total information load that the worker is required to perceive and interpret while performing job tasks (Sanders and McCormick 1993; Wickens 1984). “Overload” and stress occur when this human information processing load is too large for the individual’s information processing capabilities. This model has enjoyed great currency because it describes human mental functions in roughly the same conceptual terms as modern computers, and thus fits an engineering conception of work design. It makes us aware of the importance of information overloads, communication difficulties and memory problems, and it does well in the design of some aspects of human/computer interfaces and of human monitoring of complex processes.
However, the cognitive psychological perspective tends to downplay the importance of “objective” workplace stressors and to emphasize instead the importance of the stressed individual’s interpretation of the situation. In the cognitive-based “coping approach”, Lazarus and Folkman (1986) advocate that the individual “cognitively reinterpret” the situation in a way that makes it appear less threatening, thus reducing experienced stress. However, this approach could be harmful to workers in situations where the environmental stressors are “objectively” real and must be modified. Another variant of the cognitive approach, more consistent with worker empowerment, is Bandura’s (1977) “self-efficacy/motivation” theory, which emphasizes the increases in self-esteem which occur when individuals: (a) define a goal for a change process; (b) receive feedback on the positive results from the environment; and (c) successfully achieve incremental progress.
Several omissions in the cognitive model are problematic for an occupational health perspective on stress and conflict with the Demand/Control model:
· There is no role for the social and mental “demands” of work that do not translate into information loads (i.e., no role for social organizational demands, conflicts or the many non-intellectual time deadlines).
· The cognitive model predicts that situations which require taking many decisions are stressful because they can overload the individual’s information-processing capacity. This directly contradicts the Demand/Control model, which predicts lower strain in demanding situations that allow freedom of decision-making. The majority of epidemiological evidence from field studies supports the Demand/Control model, although laboratory tests can also generate decision-based cognitive overload effects.
· The cognitive model also omits physiological drives and primitive emotions, which often dominate cognitive response in challenging situations. There is little discussion of how either negative emotions or learning-based behaviour (except for Bandura, above) arises in common adult social situations.
Although overlooked in the cognitive model, emotional response is central to the notion of “stress”, since the initial stress problem is often what leads to unpleasant emotional states such as anxiety, fear and depression. “Drives” and emotions are most centrally affected by the limbic regions of the brain - a different and more primitive brain region than the cerebral cortex addressed by most of the processes described by cognitive psychology. Possibly, the failure to develop an integrated perspective on psychological functioning reflects the difficulty of integrating different research specializations focusing on two different neurological systems in the brain. However, recently, evidence has begun to accumulate about the joint effects of emotion and cognition. The conclusion seems to be that emotion is an underlying determinant of strength of behaviour pattern memory and cognition (Damasio 1994; Goleman 1995).
The goal of the Demand/Control model has been to integrate understanding of the social situation with evidence of emotional response, psychosomatic illness symptoms and active behaviour development in major spheres of adult life activity, particularly in the highly socially structured work situation. However, when the model was being developed, one likely platform for this work, sociological research exploring illness in large population studies, often omitted the detailed level of social or personal response data of stress research, and thus much integrating work was needed to develop the model.
The first Demand/Control integrating idea - linking social situation and emotional response - involved stress symptoms, and joined two relatively unidimensional sociological and social psychological research traditions. First, the life stress/illness tradition (Holmes and Rahe 1967; Dohrenwend and Dohrenwend 1974) predicted that illness was based on social and psychological demands alone, without mention of control over stressors. Second, the importance of control at the workplace had been clearly recognized in the job satisfaction literature (Kornhauser 1965): task autonomy and skill variety were used to predict job satisfaction, absenteeism or productivity, with limited additions reflecting the workers’ social relationship to the job - but there was little mention of job workloads. Integrating studies helped bridge the gaps in the area of illness and mental strain. Sundbom (1971) observed symptoms of psychological strain in “mentally heavy work” - which was actually measured by questions relating to both heavy mental pressures and monotonous work (presumably also representing restricted control). The combined insight of these studies and research traditions was that a two-dimensional model was needed to predict illness: the level of psychological demands determined whether low control would lead to one of two significantly different types of problem - psychological strain, or passive withdrawal.
The second Demand/Control integration predicted behaviour patterns related to work experience. Behavioural outcomes of work activity also appeared to be affected by the same two broad job characteristics - but in a different combination. Kohn and Schooler (1973) had observed that active orientations to the job were the consequence of both high skill and autonomy levels, plus psychologically demanding work. Social class measures were important correlates here. Meissner (1971) had also found that leisure behaviour was positively associated with opportunities both to take decisions on the job and to perform mentally challenging work. The combined insight of these studies was that “challenge” or mental arousal was necessary, on the one hand, for effective learning and, on the other, could contribute to psychological strain. “Control” was the crucial moderating variable that determined whether environmental demands would lead to “positive” learning consequences, or “negative” strain consequences.
The combination of these two integrating hypotheses, predicting both health and behavioural outcomes, is the basis of the Demand/Control model. “Demand” levels are the contingent factor which determines whether low control leads to passivity or to psychological strain; and “control” levels are the contingent factor which determines whether demands lead to active learning or to psychological strain (Karasek 1976; 1979). The model was then tested on a representative national sample of Swedes (Karasek 1976) to predict both illness symptoms and the leisure and political behavioural correlates of psychosocial working conditions. The hypotheses were confirmed in both areas, although many confounding factors obviously share in these results. Shortly after these empirical confirmations, two other conceptual formulations, consistent with the Demand/Control model, appeared, which confirmed the robustness of the general hypotheses. Seligman (1976) observed depression and learned helplessness in conditions of intense demand with restricted control. Simultaneously, Csikszentmihalyi (1975) found that an “active experience” (“flow”) resulted from situations which involved both psychological challenges and high levels of competence. This integrated model was able to resolve some paradoxes in job satisfaction and mental strain research (Karasek 1979) - for example, that qualitative workloads were often negatively associated with strain (because they also reflected the individual’s control over his or her use of skills). The most extensive acceptance of the model by other researchers came after 1979, with the expansion of its empirical predictions to coronary heart disease, with the assistance of colleague Tores Theorell, a physician with a significant background in cardiovascular epidemiology.
Additional research has allowed a second level of integration, linking the Demand/Control model to physiological response.* The main developments in physiological research had identified two patterns of an organism’s adaptation to its environment. Cannon’s (1914) fight-flight response is most associated with stimulation of the adrenal medulla and adrenaline secretion. This pattern, occurring in conjunction with sympathetic arousal of the cardiovascular system, is clearly an active and energetic response mode in which the human body is able to use maximum metabolic energy to support both the mental and physical exertion necessary to escape major threats to its survival. The second physiological response pattern, the adrenocortical response, is a response to defeat or withdrawal in a situation with little possibility of victory. Selye’s research (1936) on stress dealt with the adrenocortical response of animals in a stressed but passive condition (i.e., his animal subjects were restrained while they were stressed - not a fight-flight situation). Henry and Stephens (1977) describe this behaviour as the defeat or loss of social attachments, which leads to withdrawal and submissiveness in social interactions.
* A major stimulus for the development of the strain hypothesis of the Demand/Control model in 1974 was Dement’s observation (1969) that vital relaxation related to REM dreaming was inhibited if sleep-deprived cats were “constrained” by a treadmill (perhaps like an assembly line) after periods of extreme psychological stressor exposure. The combined action of both environmental stressors and low environmental control was an essential element in producing these effects. The negative impacts, in terms of mental derangement, were catastrophic and led to an inability to coordinate the most basic physiological processes.
In the early 1980s, Frankenhaeuser’s (1986) research demonstrated the congruence of these two patterns of physiological response with the main hypotheses of the Demand/Control model - allowing linkages to be made between physiological response, social situation and emotional response patterns. In high-strain situations, secretion of both cortisol from the adrenal cortex and adrenaline from the adrenal medulla is elevated, whereas in a situation where the subject has a controllable and predictable stressor, adrenaline secretion alone is elevated (Frankenhaeuser, Lundberg and Forsman 1980). This demonstrated a significant differentiation of psychoendocrine response associated with different environmental situations. Frankenhaeuser used a two-dimensional model with the same structure as the Demand/Control model, but with dimensions labelling personal emotional response. “Effort” describes adrenal-medullary stimulating activity (demands in the Demand/Control model) and “distress” describes adrenocortical stimulating activity (lack of decision latitude in the Demand/Control model). Frankenhaeuser’s emotional response categories illuminate a clearer link between emotion and physiological response, but in this form the model fails to illuminate the association between work sociology and physiological response, which has been another strength of the Demand/Control model.
One of the challenges behind the development of the Demand/Control model has been to develop an alternative to the socially conservative explanation that the worker’s perceptions or response orientations are primarily responsible for stress - the claim of some person-based stress theories. For example, it is hard to accept the claims, extended by personality-based stress models, that the majority of stress reactions develop because common individual personality types habitually misinterpret real world stresses or are oversensitive to them, and that these types of personality can be identified on the basis of simple tests. Indeed, evidence for such personality effects has been mixed at best, even with the most common measures (although a stress-denial personality has been identified - alexithymia (Henry and Stephens 1977)). The Type A behaviour pattern, for example, was originally interpreted as the individual’s proclivity to select stressful activities, but research in this area has now shifted to the “anger-prone” personality (Williams 1987). Of course, anger response could have a significant environment-response component. A more generalized version of the personality approach is found in the “person-environment fit” model (Harrison 1978), which postulates that a good match between the person and the environment is what reduces stress. Here also it has been difficult to specify the particular personality characteristics to be measured. Nevertheless, personal response/personality-based approaches address the obvious facts that: (a) person-based perceptions are an important part of the process by which environments affect individuals; and (b) there are long-term differences in personal responses to environments. Thus, a time-dynamic, integrated environment and person-based version of the Demand/Control model was developed.
The dynamic version of the Demand/Control model (figure 34.3) integrates environmental effects with person-based phenomena such as self-esteem development and long-term exhaustion. It integrates person-based and environmental factors by building two combined hypotheses on the original strain and learning mechanisms: (a) that stress inhibits learning; and (b) that learning, in the long term, can inhibit stress. The first hypothesis is that high strain levels may inhibit the normal capacity to accept a challenge, and thus inhibit new learning. These high strain levels may be the result of long-lasting psychological strain accumulated over time - and reflected in person-based measures (figure 34.3, diagonal arrow B). The second hypothesis is that new learning may lead to feelings of mastery or confidence - a person-based measure. These feelings of mastery, in turn, can lead to reduced perceptions of events as stressful and to increased coping success (figure 34.3, diagonal arrow A). Thus, environmental factors, over the long term, partly determine personality, and later environmental effects are moderated by these previously developed personality orientations. This broad model could incorporate the following, more specific measures of personal response: feelings of mastery, denial, alexithymia, trait anxiety, trait anger, vital exhaustion, burnout, cumulative life-stressor implications and possibly Type A behaviour components.
The dynamic model yields the possibility of two long-term dynamic “spirals” of behaviour. The positive behavioural dynamic begins with the active job setting, the increased “feeling of mastery” and the increased ability to cope with inevitable job stressors. These, in turn, reduce accumulated anxiety and thus increase the capacity to accept still more learning challenges - yielding still further positive personality change and improved well-being. The undesirable behavioural dynamic begins with the high-strain job, the high accumulated residual strain and the restricted capacity to accept learning challenges. These, in turn, lead to diminishing self-esteem and increased stress perceptions - yielding still further negative personality change and diminished well-being. Evidence for the submechanisms is discussed in Karasek and Theorell (1990), although the complete model has not been tested. Two promising research directions which could easily integrate with Demand/Control research are “vital exhaustion” research on changing responses to life demands (Appels 1990) and Bandura’s (1977) “self-efficacy” methods, which integrate skill development and self-esteem development.
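The two spirals can be made concrete with a toy difference-equation sketch. All coefficients below are hypothetical illustrations chosen only to show how the two combined hypotheses (strain blocking learning, mastery reducing strain) produce diverging long-term trajectories; they are not empirical estimates from the Demand/Control literature:

```python
# Toy simulation of the positive and negative Demand/Control spirals.
# All coefficients and the [0, 1] scaling are hypothetical illustrations.

def simulate(demands: float, control: float, steps: int = 50):
    """Iterate accumulated strain and feelings of mastery for a fixed job."""
    strain, mastery = 0.5, 0.5
    for _ in range(steps):
        # Strain accumulates with demands unmet by control,
        # and is reduced by mastery (arrow A: learning inhibits stress).
        strain += 0.1 * (demands - control) - 0.05 * mastery
        # Learning requires both demands and control,
        # and is blocked by strain (arrow B: stress inhibits learning).
        mastery += 0.1 * demands * control - 0.05 * strain
        strain = min(max(strain, 0.0), 1.0)   # clamp to [0, 1]
        mastery = min(max(mastery, 0.0), 1.0)
    return strain, mastery

active_job = simulate(demands=0.9, control=0.9)      # positive spiral
high_strain_job = simulate(demands=0.9, control=0.1)  # negative spiral
```

Under these assumed coefficients, the active job converges toward low strain and high mastery while the high-strain job converges toward the opposite corner, which is the qualitative shape the two spirals describe.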
One necessary next step for Demand/Control research is a more comprehensive specification of the physiological pathways of illness causation. Physiological response is increasingly being understood as a complex system response. The physiology of the human stress response - to accomplish, for example, a fight-or-flight behaviour - is a highly integrated combination of changes in cardiovascular output, brain-stem regulation, respiratory interaction, limbic-system control of the endocrine response, general cortical activation and peripheral circulatory system changes. The concept of “stress” is very possibly most relevant for complex systems - which involve multiple, interacting subsystems and complex causality.* Accompanying this new perspective of systems dynamic principles in physiology are definitions of many diseases as disorders of system regulation (Henry and Stephens 1977; Weiner 1977), and investigation of the results of time-dependent, multifactorial adjustments to system equilibrium or, alternatively, their absence in “chaos”.
* Instead of a single and unambiguous cause-and-effect linkage, as in the “hard sciences” (or in hard-science mythology), causal associations in stress models are more complex: there may be many causes which “accumulate” to contribute to a single effect; a single cause (“stressor”) may have many effects; or effects may occur only after significant time delays.
Interpreting such observations from the perspective of a “generalized” Demand/Control model, we could say that stress refers to a disequilibrium of the system as a whole, even when individual parts of the system may be functioning normally. All organisms must have control mechanisms to integrate the actions of separate subsystems (i.e., the brain, the heart and the immune systems). Stress (or job strain) would be an overload condition experienced by the organism’s “control system” when it attempts to maintain integrated functioning in the face of too many environmental challenges (“high demands”), and when the system’s capability of integrated control of its submechanisms fails (“high strain”). To impose order on its chaotic environment, the individual’s internal physiological control systems must “do the work” of maintaining a coordinated physiological regularity (i.e., a constant heart rate) in the face of irregular environmental demands. When the organism’s control capacity is exhausted after too much “organizing” (a low entropy condition, by analogy from thermodynamics), further demands lead to excess fatigue or debilitating strain. Furthermore, all organisms must periodically return their control systems to the rest-state - sleep or relaxation periods (a state of relaxed disorder or high entropy) - to be capable of undertaking the next round of coordinating tasks. The system’s coordination processes or its relaxation attempts may be inhibited if it cannot follow its own optimal course of action, i.e., if it has no possibilities to control its situation or to find a satisfactory internal equilibrium state. In general, “lack of control” may represent restriction of the organism’s ability to use all of its adaptive mechanisms to maintain physiological equilibrium in the face of demands, leading to increased long-term burdens and disease risk. This is a direction for future Demand/Control physiological research.
One potentially consistent finding is that while the Demand/Control model predicts cardiovascular mortality, no single conventional risk factor or physiological indicator seems to be the primary pathway of this risk. Future research may show whether “systems dynamic failures” are the pathway.
Models which integrate over several spheres of research allow broader predictions about the health consequences of human social institutions. For example, Henry and Stephens (1977) observe that in the animal world “psychological demands” result from the thoroughly “social” responsibilities of finding family food and shelter, and of rearing and defending offspring; situations of enforced demands combined with social isolation would be hard to imagine. However, the human world of work is so organized that demands can occur without any social affiliation at all. Indeed, according to Frederick Taylor’s Principles of Scientific Management (1911 (1967)), increasing workers’ job demands should often be done in isolation - otherwise the workers would revolt against the process and return to time-wasting socializing! In addition to showing the utility of an integrated model, this example shows the need to expand even further the social understanding of the human stress response (for example, by adding a social support dimension to the Demand/Control model).
An integrated, socially anchored understanding of the human stress response is particularly needed to understand future economic and political development. Less comprehensive models could be misleading. For example, according to the cognitive model which has dominated public dialogue about future social and industrial development (i.e., the direction for workers’ skills, life in the information society, etc.), an individual has the freedom to interpret - i.e., reprogramme - his perception of real-world events as stressful or non-stressful. The social implication is that, literally, we can design for ourselves any social arrangement - and we should take responsibility for adapting to any stresses it may cause. However, many of the physiological consequences of stress relate to the “emotional brain” in the limbic system, which has a deterministic structure with clear limitations on overall demands. It is definitely not “infinitely” re-programmable, as studies of post-traumatic stress syndrome clearly indicate (Goleman 1995). Overlooking the limbic system’s limits - and the integration of emotional response and social integration - can lead to a very modern set of basic conflicts for human development. We may be developing social systems, on the basis of the extraordinary cognitive capabilities of our brain cortex, that place impossible demands on the more basic limbic brain functions: overloads, lost social bonds, lack of internal control possibilities and a restricted ability to see the “whole picture”. In short, we appear to be running the risk of developing work organizations for which we are sociobiologically misfit. These results are not just the consequence of incomplete scientific models; they also facilitate the wrong kinds of social process - processes in which the interests of some groups with social power are served at the cost, to others, of previously unknown levels of social and personal dysfunction.
In many cases, individual-level stressors can be modelled as the causal outcome of larger-scale social, dynamic and political-economic processes. Thus, theoretical linkages to concepts such as social class are also needed. Assessment of associations between social situation and illness raises the question of the relation between psychosocial Demand/Control factors and broad measures of social circumstance such as social class. The job decision latitude measure is indeed clearly correlated with education and other measures of social class. However, social class conventionally measures effects of income and education, which operate via different mechanisms than the psychosocial pathways of the Demand/Control model. Importantly, the job strain construct is almost orthogonal to most social class measures in national populations (although the active/passive dimension is highly correlated with social class among high-status workers only) (Karasek and Theorell 1990). The low decision latitude aspect of low-status jobs appears to be a more important contributor to psychological strain than the distinction between mental and physical workload, the conventional determinant of white-/blue-collar status. Indeed, the physical exertion common in many blue-collar jobs may be protective against psychological strain in some circumstances. While job strain is indeed more common in low-status jobs, the psychosocial job dimensions define a strain-risk picture which is significantly independent of conventional social class measures.
Although it has been suggested that the observed Demand/Control job/illness associations merely reflect social class differences (Ganster 1989; Spector 1986), a review of the evidence rejects this view (Karasek and Theorell 1990). Most Demand/Control research has simultaneously controlled for social class, and Demand/Control associations persist within social class groups. However, blue-collar associations with the model are more consistently confirmed, while the strength of white-collar associations varies across studies (see “Job strain and cardiovascular disease”, below), with white-collar single-occupation studies being somewhat less robust. (Of course, for the very highest-status managers and professionals, decision-making may become a significant demand in itself.)
The fact that conventional “social class” measures often find weaker associations with mental distress and illness outcomes than the Demand/Control model actually makes a case for new social class conceptions. Karasek and Theorell (1990) define a new set of psychosocially advantaged and disadvantaged workers, with job stress “losers” in routinized, commercialized and bureaucratized jobs, and “winners” in highly creative learning-focused intellectual work. Such a definition is consistent with a new, skill-based industrial output in the “information society”, and a new perspective on class politics.
Self-report questionnaires administered to workers have been the most common method of gathering data on the psychosocial characteristics of work, since they are simple to administer and can easily be designed to tap core concepts used in work redesign efforts as well. Well-known instruments include Hackman and Oldham’s Job Diagnostic Survey (JDS) (1975), the Job Content Questionnaire (Karasek 1985) and the Swedish Statshalsan questionnaire. While designed to measure the objective job, such questionnaire instruments inevitably measure job characteristics as perceived by the worker. Self-report bias can occur with self-reported dependent variables such as depression, exhaustion and dissatisfaction. One remedy is to aggregate self-report responses by work groups with similar work situations, diluting individual biases (Kristensen 1995). This is the basis of extensively used systems linking psychosocial job characteristics to occupations (Johnson et al. 1996).
There is also evidence supporting the “objective” validity of self-reported psychosocial scales: correlations between self-report and expert-observation data are typically 0.70 or higher for decision latitude, with lower correlations (0.35) for work demands (Frese and Zapf 1988). Also supporting objective validity is the high between-occupation variance (40 to 45%) of decision latitude scales, which compares favourably with 21% for income and 25% for physical exertion, both acknowledged to vary dramatically by occupation (Karasek and Theorell 1990). However, only 7% and 4% of the variance in the psychological demands and social support scales, respectively, lies between occupations, leaving the possibility of a large person-based component in self-reports of these measures.
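The between-occupation variance figures cited here describe how much of a scale’s total spread lies between occupational means rather than between individuals within an occupation. A minimal sketch of such a decomposition (an eta-squared-style ratio; the toy data, function name and cut-offs are illustrative assumptions, not taken from the studies cited):

```python
from collections import defaultdict

def between_occupation_share(scores, occupations):
    """Proportion of total variance lying between occupation means
    (sum of squares between / sum of squares total)."""
    n = len(scores)
    grand_mean = sum(scores) / n
    groups = defaultdict(list)
    for score, occ in zip(scores, occupations):
        groups[occ].append(score)
    ss_total = sum((s - grand_mean) ** 2 for s in scores)
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
        for g in groups.values()
    )
    return ss_between / ss_total

# Toy data: decision latitude that differs sharply by occupation
# yields a high between-occupation share, as the text describes.
latitude = [20, 22, 21, 70, 72, 71]
occupation = ["assembler"] * 3 + ["engineer"] * 3
share = between_occupation_share(latitude, occupation)
```

A scale with a large person-based component (such as the self-reported demand and support scales discussed above) would, by contrast, show a small between-occupation share on the same computation.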
More objective measurement strategies would be desirable. Some well-known objective assessment methods are congruent with the Demand/Control model (for decision latitude: VERA, Volpert et al. (1983)). However, expert observations have problems as well: observations are costly and time-consuming and, in the assessment of social interactions, do not obviously generate more accurate measures. There are also theoretical biases involved in the very concept of standard “expert” measures: it is much easier to “measure” the easily observed, repetitive quality of low-status assembly-line jobs than the diverse tasks of high-status managers or professionals. Thus, the objectivity of the psychosocial measures is inversely related to the decision latitude of the subject.
Job strain and heart disease associations represent the broadest base of empirical support for the model. Recent comprehensive reviews have been done by Schnall, Landsbergis and Baker (1994), Landsbergis et al. (1993) and Kristensen (1995). Summarizing Schnall, Landsbergis and Baker (1994) (updated by Landsbergis, personal communication, Fall 1995): 16 of 22 studies have confirmed a job strain association with cardiovascular mortality using a wide range of methodologies, including 7 of 11 cohort studies, 2 of 3 cross-sectional studies, 4 of 4 case-control studies and 3 of 3 studies utilizing disease symptom indicators. Most negative studies have been of older populations (mainly over age 55, some with much post-retirement time) and are mainly based upon aggregated occupation scores which, although they minimize self-report bias, are weak in statistical power. The job strain hypothesis appears to be somewhat more consistent when predicting blue-collar than white-collar CVD (Marmot and Theorell 1988). Conventional CVD risk factors such as serum cholesterol, smoking and even blood pressure, when measured in the conventional manner, have so far shown only inconsistent or weak job-strain effects. However, more sophisticated methods (e.g., ambulatory blood pressure measurement) show substantial positive results (Theorell and Karasek 1996).
Psychological disorder findings are reviewed in Karasek and Theorell (1990). The majority of the studies confirm a job strain association and are drawn from broadly representative or nationally representative populations in a number of countries. The common study limitations are cross-sectional design and the difficult-to-avoid reliance on self-report questionnaires for both job characteristics and psychological strain, although some studies also include objective observer assessment of work situations, and there are also supportive longitudinal studies. While some have claimed that a person-based tendency towards negative affect inflates work-mental strain associations (Brief et al. 1988), this could not be true for several strong findings on absenteeism (North et al. 1996; Vahtera, Uutela and Pentti 1996). Associations in some studies are very strong and, in a number of studies, are based on a linkage system which minimizes potential self-report bias (at the risk of loss of statistical power). These studies confirm associations for a broad range of psychological strain outcomes: moderately severe forms of depression, exhaustion, drug consumption, and life and job dissatisfaction, but findings also differ by outcome. There is also some differentiation of negative affect by Demand/Control model dimensions. Exhaustion, rushed tempo or simply reports of “feeling stressed” are more strongly related to psychological demands - and are higher for managers and professionals. More serious strain symptoms, such as depression, loss of self-esteem and physical illness, seem to be more strongly associated with low decision latitude - a larger problem for low-status workers.
Evidence of the utility of the Demand/Control model is accumulating in other areas (see Karasek and Theorell 1990). Prediction of occupational musculoskeletal illness is reviewed for 27 studies by Bongers et al. (1993) and other researchers (Leino and Hänninen 1995; Faucett and Rempel 1994). This work supports the predictive utility of the Demand/Control/support model, particularly for upper extremity disorders. Recent studies of pregnancy disorders (Fenster et al. 1995; Brandt and Nielsen 1992) also show job strain associations.
The Demand/Control/support model has stimulated much research during recent years. The model has helped to document more specifically the importance of social and psychological factors in the structure of current occupations as a risk factor for industrial society’s most burdensome diseases and social conditions. Empirically, the model has been successful: a clear relationship between adverse job conditions (particularly low decision latitude) and coronary heart disease has been established.
However, it is still difficult to be precise about which aspects of psychological demands, or decision latitude, are most important in the model, and for what categories of workers. Answers to these questions require more depth of explanation of the physiological and micro-behavioural effects of psychological demands, decision latitude and social support than the model’s original formulation provided, and require simultaneous testing of the dynamic version of the model, including the active/passive hypotheses. Future utility of Demand/Control research could be enhanced by an expanded set of well-structured hypotheses, developed through integration with other intellectual areas, as outlined above (also in Karasek and Theorell 1990). The active/passive hypotheses, in particular, have received too little attention in health outcome research.
Other areas of progress are also needed, particularly new methodological approaches in the psychological demand area. More longitudinal studies are needed, methodological advances are needed to address self-report bias, and new physiological monitoring technologies must be introduced. At the macro level, macro-social occupational factors - such as worker collective and organizational-level decision influence and support, communication limitations, and job and income insecurity - need to be more clearly integrated into the model. The linkages to social class concepts need to be further explored, and the strength of the model for women and the structure of work/family linkages need further investigation. Population groups in insecure employment arrangements, which have the highest stress levels, must be covered by new types of study designs - especially relevant as the global economy changes the nature of work relationships. As we are increasingly exposed to the strains of the global economy, new macro-level measures are needed to test the effects of lack of local control and increased intensity of work activity - changes which appear to make the general form of the Demand/Control model just as relevant in the future.
Various definitions of stress have been formulated since the concept was first named and described by Hans Selye (Selye 1960). Almost invariably these definitions have failed to capture what is perceived as the essence of the concept by a major proportion of stress researchers.
The failure to reach a common and generally acceptable definition may have several explanations; one of them may be that the concept has become so widespread and has been used in so many different situations and settings and by so many researchers, professionals and lay persons that to agree on a common definition is no longer possible. Another explanation is that there really is no empirical basis for a single common definition. The concept may be so diverse that one single process simply does not explain the whole phenomenon. One thing is clear - in order to examine the health effects of stress, the concept needs to include more than one component. Selye’s definition was concerned with the physiological fight or flight reaction in response to a threat or a challenge from the environment. Thus his definition involved only the individual physiological response. In the 1960s a strong interest arose in so-called life events, that is, major stressful experiences that occur in an individual’s life. The work by Holmes and Rahe (1967) nicely demonstrated that an accumulation of life events was harmful to health. These effects were found mostly in retrospective studies. To confirm the findings prospectively proved to be more difficult (Rahe 1988).
In the 1970s another concept was introduced into the theoretical framework, that of the vulnerability or resistance of the individual who was exposed to stressful stimuli. Cassel (1976) hypothesized that host resistance was a crucial factor in the outcome of stress or the impact of stress on health. The fact that host resistance had not been taken into account in many studies might explain why so many inconsistent and contradictory results had been obtained on the health effect of stress. According to Cassel, two factors were essential in determining the degree of a person’s host resistance: his or her capacity for coping and his or her social supports.
Today’s definition has come to include considerably more than the physiological “Selye stress” reactions. Both social environmental effects, as represented by (for instance) life events, and the resistance or vulnerability of the individual exposed to the life events are included.
In the stress-disease model proposed by Kagan and Levi (1971), several distinctions between different components are made (figure 34.4). These components are:
· stressful factors or stressors in the environment - social or psychological stimuli that evoke certain harmful reactions
· the individual psychobiological programme, predetermined both by genetic factors and early experiences and learning
· individual physiological stress reactions (“Selye Stress” reactions). A combination of these three factors may lead to
· precursors which may eventually provoke the final outcome, namely
· manifest physical illness.
It is important to note that, contrary to Selye’s beliefs, several different physiological pathways have been identified that mediate the effects of stressors on physical health outcomes. These include not only the originally described sympatho-adreno-medullary reaction but also the action of the sympatho-adreno-cortical axis, which may be of equal importance, and the counterbalance provided by parasympathetic gastrointestinal neurohormonal regulation, which has been observed to dampen and buffer the harmful effects of stress. For a stressor to evoke such reactions, a harmful influence of the psychobiological programme is required - in other words, an individual propensity to react to stressors has to be present. This individual propensity is both genetically determined and based on early childhood experiences and learning.
If the physiological stress reactions are severe and long-standing enough, they may eventually lead to chronic states, or become precursors of illness. An example of such a precursor is hypertension, which is often stress-related and may lead to manifest somatic disease, such as stroke or heart disease.
Another important feature of the model is that the interaction effects of intervening variables are anticipated at each step, further increasing the complexity of the model. This complexity is illustrated by feed-back loops from all stages and factors in the model to every other stage or factor. Thus the model is complex - but so is nature.
Our empirical knowledge of the accuracy of this model is still insufficient, but further insight will be gained by applying the interactive model to stress research. For example, our ability to predict disease may increase if the model is applied.
In our group of investigators at the Karolinska Institute in Stockholm, recent research has been focused on factors that promote host resistance. We have hypothesized that one such powerful factor is the health-promoting effects of well-functioning social networks and social support.
Our first endeavour to investigate the effects of social networks on health was focused on the entire Swedish population at a “macroscopic” level. In cooperation with the Central Swedish Bureau of Statistics, we were able to evaluate the effects of self-assessed social network interactions on a health outcome, in this case survival (Orth-Gomér and Johnson 1987).
Representing a random sample of the adult Swedish population, 17,433 men and women responded to a questionnaire about their social ties and social networks. The questionnaire was included in two of the annual Surveys of Living Conditions in Sweden, which were designed to assess and measure the welfare of the nation in material as well as in social and psychological terms. Based on the questionnaire, we created a comprehensive social network interaction index which included the number of members in the network and the frequency of contacts with each member. Seven sources of contacts were identified by means of factor analysis: parents, siblings, nuclear family (spouse and children), close relatives, co-workers, neighbours, distant relatives and friends. The contacts with each source were calculated and added up to a total index score, which ranged from zero to 106.
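The scoring procedure described here - summing contact scores across the identified sources into a single index, then dividing respondents into tertiles - can be sketched in outline. The source labels, frequency coding and cut-offs below are illustrative assumptions; the survey’s actual coding and weighting are not reproduced in this text:

```python
# Hypothetical contact sources and frequency codes (0 = no contact,
# higher = more frequent contact); illustrative only.
SOURCES = ["parents", "siblings", "nuclear_family", "close_relatives",
           "co_workers", "neighbours", "distant_relatives_and_friends"]

def network_index(responses):
    """Sum contact scores over all sources into one index score."""
    return sum(responses.get(source, 0) for source in SOURCES)

def tertiles(index_scores):
    """Label each respondent 'lower', 'middle' or 'upper' by rank."""
    ranked = sorted(range(len(index_scores)),
                    key=lambda i: index_scores[i])
    n = len(ranked)
    cut1, cut2 = n // 3, 2 * n // 3
    labels = [None] * n
    for rank, i in enumerate(ranked):
        labels[i] = ("lower" if rank < cut1
                     else "middle" if rank < cut2
                     else "upper")
    return labels
```

With such labels in hand, mortality in the lower tertile can be compared with the middle and upper tertiles, as in the analysis that follows.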
By linking the Surveys of Living Conditions with the national death register, we were able to investigate the impact of the social network interaction index on mortality. Dividing the study population into tertiles according to their index score, we found that those men and women who were in the lower tertile had an invariably higher mortality risk than those who were in the middle and upper tertiles of the index score.
The risk of dying for those in the lower tertile was four to five times higher than for those in the other tertiles, although many other factors might explain this association, such as the fact that increasing age is associated with a higher risk of dying. Also, as one ages, the number of social contacts decreases. If one is sick and disabled, mortality risk increases, and it is likely that the extent of the social network decreases. Morbidity and mortality are also higher in lower social classes, whose social networks are smaller and whose social contacts are less abundant. Thus, controlling for these and other mortality risk factors is necessary in any analysis. Even when these factors were taken into account, a statistically significant 40% increase in risk was found to be associated with a sparse social network among those in the lowest third of the population. It is interesting to note that there was no additional health-promoting effect of being in the highest as compared to the middle tertile. Possibly, a great number of contacts can represent a strain on the individual as well as protection against harmful health effects.
Thus, without even knowing anything further about the stressors in the lives of these men and women we were able to confirm a health-promoting effect of social networks.
Social networks alone cannot explain the health effects observed. It is probable that the way in which a social network functions and the basis of support the network members provide are more important than the actual number of people included in the network. In addition, an interactive effect of different stressors is possible. For example the effects of work-related stress have been found to worsen when there is also a lack of social support and social interaction at work (Karasek and Theorell 1990).
In order to explore these issues of interaction, research studies have been carried out using various measures to assess both qualitative and quantitative aspects of social support. Several interesting results illustrate the health effects associated with social support. For example, in one study of heart disease (myocardial infarction and sudden cardiac death) in a population of 776 fifty-year-old men born in Gothenburg, randomly selected from the general population and found healthy on initial examination, smoking and lack of social support were the strongest predictors of disease (Orth-Gomér, Rosengren and Wilhelmsen 1993). Other risk factors included elevated blood pressure, lipids, fibrinogen and a sedentary lifestyle.
In the same study it was shown that only in those men who lacked support, in particular emotional support from a spouse, close relatives or friends, were the effects of stressful life events harmful. Men who both lacked support and had experienced several serious life events had more than five times the mortality of men who enjoyed close and emotional support (Rosengren et al. 1993).
Another example of interactive effects was offered in a study of cardiac patients who were examined for psychosocial factors such as social integration and social isolation, as well as myocardial indicators of an unfavourable prognosis and then followed for a ten-year period. Personality and behaviour type, in particular the Type A behaviour pattern, was also assessed.
The behaviour type in itself had no impact on prognosis in these patients. Of Type A men, 24% died as compared to 22% of Type B men. But when considering the interactive effects with social isolation another picture emerged.
Using a diary of activities during a regular week, men participating in the study were asked to describe anything they would do in the evenings and at weekends of a normal week. Activities were then divided into those that involved physical exercise, those that mainly involved relaxation and were performed at home, and those that were performed for recreation together with others. Of these activity types, lack of social recreational activity was the strongest predictor of mortality. Men who never engaged in such activities - called socially isolated in the study - had about three times the mortality risk of those who were socially active. In addition, Type A men who were socially isolated had an even higher mortality risk than those in any of the other categories (Orth-Gomér, Undén and Edwards 1988).
These studies demonstrate the need to consider several aspects of the psychosocial environment as well as individual factors and, of course, the physiological stress mechanisms. They also demonstrate that social support is one important factor in stress-related health outcomes.
Person–environment fit (PE) theory offers a framework for assessing and predicting how characteristics of the employee and the work environment jointly determine worker well-being and, in the light of this knowledge, how a model for identifying points of preventive intervention may be elaborated. Several PE fit formulations have been proposed, the most widely known ones being those of Dawis and Lofquist (1984); French, Rodgers and Cobb (1974); Levi (1972); McGrath (1976); and Pervin (1967). The theory of French and colleagues, illustrated in figure 34.5, may be used to discuss the conceptual components of PE fit theory and their implications for research and application.
Poor PE fit can be viewed from the perspectives of the employee’s needs (needs–supplies fit) as well as the job–environment’s demands (demands–abilities fit). The term needs–supplies fit refers to the degree to which employee needs, such as the need to use skills and abilities, are met by the work environment’s supplies and opportunities to satisfy those needs. Demands–abilities fit refers to the degree to which the job’s demands are met by the employee’s skills and abilities. These two types of fit can overlap. For example, work overload may leave the employer’s demands unmet as well as threaten the employee’s need to satisfy others.
Characteristics of the person (P) include needs as well as abilities. Characteristics of the environment (E) include supplies and opportunities for meeting the employee’s needs as well as demands which are made on the employee’s abilities. In order to assess the degree to which P equals (or fits), exceeds, or is less than E, the theory requires that P and E be measured along commensurate dimensions. Ideally, P and E should be measured on equal-interval scales with true zero points. For example, one could assess PE fit on workload for a data-entry operator in terms of both the number of data-entry keystrokes per minute demanded by the job (E) and the employee’s keystroke speed (P). As a less ideal alternative, investigators often use Likert-type scales. For example, one could assess how much the employee wants to control the work pace (P) and how much control is provided by the job’s technology (E) by using a rating scale, where a value of 1 corresponds to no control, or almost no control, and a value of 5 corresponds to complete control.
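The keystroke and work-pace examples can be made concrete with simple signed difference scores on the commensurate dimensions. This is a minimal sketch: the function names, example values and the choice of difference direction are illustrative assumptions, not part of the theory’s formal apparatus:

```python
def demands_abilities_misfit(demand, ability):
    """Signed misfit on a commensurate scale: positive values mean
    the job demands more than the employee can supply (E > P)."""
    return demand - ability

def needs_supplies_misfit(need, supply):
    """Signed misfit: positive values mean the employee needs more
    than the environment supplies (P > E on the needs side)."""
    return need - supply

# Keystrokes per minute: a ratio scale with a true zero point (the
# ideal case described in the text).
kpm_misfit = demands_abilities_misfit(demand=220, ability=180)

# Control over work pace on a 1-5 Likert-type rating (the less
# ideal alternative).
control_misfit = needs_supplies_misfit(need=4, supply=2)
```

A misfit of zero on either score corresponds to P = E; the sign then distinguishes which side of the fit relation is deficient, which matters for the non-linear strain relations discussed below.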
Subjective fit (FS) refers to the employee’s perceptions of P and E, whereas objective fit (FO) refers to assessments that are, in theory, free of subjective bias and error. In practice, there is always measurement error, so that it is impossible to construct truly objective measures. Consequently, many researchers prefer to create a working distinction between subjective and objective fit, referring to measures of objective fit as ones which are relatively, rather than absolutely, immune to sources of bias and error. For example, one can assess objective PE fit on keystroke ability by examining the fit between a count of required keystrokes per minute in the actual workload assigned to the employee (EO) and the employee’s ability as assessed on an objective-type test of keystroke ability (PO). Subjective PE fit might be assessed by asking the employee to estimate per minute keystroke ability (PS) and the number of keystrokes per minute demanded by the job (ES).
Given the challenges of objective measurement, most tests of PE fit theory have used only subjective measures of P and E (for an exception, see Chatman 1991). These measures have tapped a variety of dimensions including fit on responsibility for the work and well-being of other persons, job complexity, quantitative workload and role ambiguity.
Figure 34.5 depicts objective fit influencing subjective fit which, in turn, has direct effects on well-being. Well-being is broken down into responses called strains, which serve as risk factors for subsequent illness. These strains can involve emotional (e.g., depression, anxiety), physiological (e.g., serum cholesterol, blood pressure), cognitive (e.g., low self-evaluation, attributions of blame to self or others), as well as behavioural responses (e.g., aggression, changes in lifestyle, drug and alcohol use).
According to the model, levels of and changes in objective fit, whether due to planned intervention or otherwise, are not always perceived accurately by the employee, so that discrepancies arise between objective and subjective fit. Thus, employees can perceive good fit as well as poor fit when, objectively, such is not the case.
Inaccurate employee perceptions can arise from two sources. One source is the organization, which, unintentionally or by design (Schlenker 1980), may provide the employee with inadequate information regarding the environment and the employee. The other source is the employee. The employee might fail to access available information or might defensively distort objective information about what the job requires or about his or her abilities and needs - Taylor (1991) cites such an example.
French, Rodgers and Cobb (1974) use the concept of defences to refer to employee processes for distorting the components of subjective fit, PS and ES, without changing the commensurate components of objective fit, PO and EO. By extension, the organization can also engage in defensive processes - for example, cover-ups, denial or exaggeration - aimed at modifying employee perceptions of subjective fit without concomitantly modifying objective fit.
The concept of coping is, by contrast, reserved for responses and processes that aim to alter and, in particular, improve objective fit. The employee can attempt to cope by improving objective skills (PO) or by changing objective job demands and resources (EO) such as through a change of jobs or assigned responsibilities. By extension, the organization can also apply coping strategies to improve objective PE fit. For example, organizations can make changes in selection and promotion strategies, in training and in job design to alter EO and PO.
The distinctions between coping and defence on the one hand and objective and subjective fit on the other can lead to an array of practical and scientific questions regarding the consequences of using coping and defence and the methods for distinguishing between effects of coping and effects of defence on PE fit. By derivation from the theory, sound answers to such questions require sound measures of objective as well as subjective PE fit.
PE fit can have non-linear relations with psychological strain. Figure 34.6 presents a U-shaped curve as an illustration. The lowest level of psychological strain on the curve occurs when employee and job characteristics fit each other (P = E). Strain increases as the employee’s abilities or needs respectively fall short of the job’s demands or resources (P<E) or exceed them (P>E). Caplan and colleagues (1980) report a U-shaped relation between PE fit on job complexity and symptoms of depression in a study of employees from 23 occupations.
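The U-shaped relation just described can be captured schematically in a few lines of code. This is purely an illustration of the shape of the relation, not a fitted model: the quadratic form, the baseline and the curvature constant are all assumptions chosen for clarity.

```python
# Illustrative only: a minimal quadratic model of the U-shaped relation
# between person-environment (PE) fit and psychological strain.
# The functional form and constants are hypothetical, not fitted data.

def strain(p: float, e: float, baseline: float = 1.0, k: float = 0.5) -> float:
    """Strain is lowest when person (P) and environment (E) match, and
    rises as abilities/needs fall short of (P < E) or exceed (P > E)
    the job's demands/resources."""
    misfit = p - e
    return baseline + k * misfit ** 2

# Strain is minimal at perfect fit and rises symmetrically on either side.
assert strain(5, 5) < strain(3, 5)   # P < E: demands exceed abilities
assert strain(5, 5) < strain(7, 5)   # P > E: abilities exceed demands
assert strain(3, 5) == strain(7, 5)  # symmetric U in this sketch
```

In practice the curve need not be symmetric (excess abilities may be less harmful than excess demands, or vice versa), which is why sound measures of both P and E on commensurate scales matter.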
A variety of approaches to the measurement of PE fit demonstrate the model’s potential for predicting well-being and performance. For example, careful statistical modelling found that PE fit explained about 6% more variance in job satisfaction than was explained by measures of P or E alone (Edwards and Harrison 1993). In a series of seven studies of accountants measuring PE fit with a card-sort method, high performers had higher correlations between P and E (average r = 0.47) than low performers (average r = 0.26; Caldwell and O’Reilly 1990). P was assessed as the employee’s knowledge, skills and abilities (KSAs), and E was assessed as the commensurate KSAs required by the job. Poor PE fit between the accountant’s values and the firm’s also predicted employee turnover (Chatman 1991).
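The profile-correlation logic behind such card-sort measures can be sketched as follows. The KSA ratings below are invented for illustration; the point is only that fit is operationalized as the Pearson correlation between an employee's profile (P) and the commensurate job profile (E).

```python
# Hypothetical sketch of the card-sort logic: P is an employee's
# self-rated profile on a set of KSA dimensions, E is the commensurate
# importance of each KSA to the job; fit is the Pearson correlation
# between the two profiles. All ratings here are invented.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Commensurate ratings (1 = low, 7 = high) on the same KSA dimensions.
job_e    = [7, 6, 3, 2, 5]   # what the job requires
well_fit = [6, 7, 3, 1, 5]   # employee profile tracking the job
poor_fit = [2, 3, 7, 6, 1]   # employee profile diverging from it

assert pearson_r(job_e, well_fit) > 0.9   # high P-E correlation: good fit
assert pearson_r(job_e, poor_fit) < 0     # negative correlation: poor fit
```

A within-person correlation of this kind is what distinguishes commensurate fit measures from simply comparing P and E scores across a sample.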
Knowledge about human needs, abilities and constraints provides guidelines for shaping psychosocial work conditions so as to reduce stress and improve occupational health (Frankenhaeuser 1989). Brain research and behavioural research have identified the conditions under which people perform well and the conditions under which performance deteriorates. When the total inflow of impressions from the outside world falls below a critical level and work demands are too low, people tend to become inattentive and bored and to lose their initiative. Under conditions of excessive stimulus flow and too high demands, people lose their ability to integrate messages, thought processes become fragmented and judgement is impaired. This inverted U-relationship between workload and brain function is a fundamental biological principle with wide applications in working life. Stated in terms of efficiency at different workloads, it means that the optimal level of mental functioning is located at the midpoint of a scale ranging from very low to very high work demands. Within this middle zone the degree of challenge is “just right”, and the human brain functions efficiently. The location of the optimal zone varies among different people, but the crucial point is that large groups spend their lives outside the optimal zone that would provide opportunities for them to develop their full potential. Their abilities are constantly either underutilized or overtaxed.
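The inverted-U relationship between workload and mental efficiency described above can also be rendered as a schematic function. The form and peak location below are illustrative assumptions; as the text notes, the optimal zone varies between people.

```python
# A schematic (not empirical) inverted-U of mental efficiency versus
# workload: performance peaks in a middle "optimal zone" and falls off
# under both understimulation and overload. The functional form and
# the location of the peak are illustrative assumptions.

def efficiency(demand: float, optimum: float = 5.0, width: float = 3.0) -> float:
    """Efficiency on a 0-1 scale; demand on an arbitrary 0-10 scale.
    `optimum` varies among individuals, as the text notes."""
    return max(0.0, 1.0 - ((demand - optimum) / width) ** 2)

assert efficiency(5) == 1.0            # mid-range demands: peak functioning
assert efficiency(1) < efficiency(5)   # underload: inattention, boredom
assert efficiency(9) < efficiency(5)   # overload: fragmented thought
```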
A distinction should be made between quantitative overload, which means too much work within a given time period, and qualitative underload, which means that tasks are too repetitive, lacking variety and challenge (Levi, Frankenhaeuser and Gardell 1986).
Research has identified criteria for “healthy work” (Frankenhaeuser and Johansson 1986; Karasek and Theorell 1990). These criteria emphasize that workers should be given the opportunity to: (a) influence and control their work; (b) understand their contribution in a wider context; (c) experience a sense of togetherness and belonging at their place of work; and (d) develop their own abilities and vocational skill by continuous learning.
People are challenged by different work demands whose nature and strength are appraised via the brain. The appraisal process involves a weighing, as it were, of the severity of the demands against one’s own coping abilities. Any situation which is perceived as a threat or challenge requiring compensatory effort is accompanied by the transmission of signals from the brain to the adrenal medulla, which responds with an output of the catecholamines epinephrine and norepinephrine. These stress hormones make us mentally alert and physically fit. In the event that the situation induces feelings of uncertainty and helplessness, the brain messages also travel to the adrenal cortex, which secretes cortisol, a hormone which plays an important part in the body’s immune defence (Frankenhaeuser 1986).
With the development of biochemical techniques that permit the determination of exceedingly small amounts of hormones in blood, urine and saliva, stress hormones have come to play an increasingly important role in research on working life. In the short term, a rise in stress hormones is often beneficial and seldom a threat to health. But in the longer term, the picture may include damaging effects (Henry and Stephens 1977; Steptoe 1981). Frequent or long-lasting elevations of stress-hormone levels in the course of daily life may result in structural changes in the blood vessels which, in turn, may lead to cardiovascular disease. In other words, consistently high levels of stress hormones should be regarded as warning signals, telling us that the person may be under excessive pressure.
Biomedical recording techniques permit the monitoring of bodily responses at the workplace without interfering with the worker’s activities. Using such ambulatory-monitoring techniques, one can find out what makes the blood pressure rise, the heart beat faster, the muscles tense up. These are important pieces of information which, together with stress-hormone assays, have helped in identifying both aversive and protective factors related to job content and work organization. Thus, when searching the work environment for harmful and protective factors, one can use the people themselves as “measuring rods”. This is one way in which the study of human stress and coping may contribute to intervention and prevention at the workplace (Frankenhaeuser et al. 1989; Frankenhaeuser 1991).
Data from both epidemiological and experimental studies support the notion that personal control and decision latitude are important “buffering” factors which help people to simultaneously work hard, enjoy their jobs and remain healthy (Karasek and Theorell 1990). The chance of exercising control may “buffer” stress in two ways: first, by increasing job satisfaction, thus reducing bodily stress responses, and secondly, by helping people develop an active, participatory work role. A job that allows the worker to use his or her skills to the full will increase self-esteem. Such jobs, while demanding and taxing, may help to develop competencies that aid in coping with heavy workloads.
The pattern of stress hormones varies with the interplay of positive and negative emotional responses evoked by the situation. When demands are experienced as a positive and manageable challenge, the adrenaline output is typically high, whereas the cortisol-producing system is put to rest. When negative feelings and uncertainty dominate, both cortisol and adrenaline increase. This implies that the total load on the body, the “cost of achievement”, will be lower during demanding, enjoyable work than during less demanding but tedious work. The tendency of cortisol to remain low in controllable situations may also account for the positive health effects of personal control. Such a neuroendocrine mechanism could explain the epidemiological data obtained from national surveys in different countries which show that high job demands and work overload have adverse health consequences mainly when combined with low control over job-related decisions (Frankenhaeuser 1991; Karasek and Theorell 1990; Levi, Frankenhaeuser and Gardell 1986).
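The two-by-two pattern described above can be summarized in a toy lookup. The boolean categories and "high"/"low" labels are deliberate simplifications of the text's account, not clinical categories.

```python
# A toy lookup capturing the hormone pattern described above (after
# Frankenhaeuser): a positive, manageable challenge raises adrenaline
# while the cortisol system stays at rest; negative feelings and
# uncertainty raise both. The categories are simplifications.

def hormone_pattern(challenging: bool, distressing: bool) -> dict:
    return {
        "adrenaline": "high" if (challenging or distressing) else "low",
        "cortisol": "high" if distressing else "low",
    }

# Demanding but enjoyable work: lower total "cost of achievement".
assert hormone_pattern(True, False) == {"adrenaline": "high", "cortisol": "low"}
# Work dominated by negative feelings and uncertainty: both systems active.
assert hormone_pattern(False, True) == {"adrenaline": "high", "cortisol": "high"}
```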
In order to assess the relative workloads associated with men’s and women’s different life situations, it is necessary to modify the concept of work so as to include the notion of total workload, that is, the combined load of demands related to paid and unpaid work. This includes all forms of productive activities defined as “all the things that people do that contribute to the goods and services that other people use and value” (Kahn 1991). Thus, a person’s total workload includes regular employment and overtime at work as well as housework, child care, care of elderly and sick relatives and work in voluntary organizations and unions. According to this definition, employed women have a higher workload than men at all ages and all occupational levels (Frankenhaeuser 1993a, 1993b and 1996; Kahn 1991).
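The total-workload definition above amounts to simple arithmetic: paid and unpaid productive hours are summed rather than counting employment alone. The weekly hour figures below are invented for illustration.

```python
# The total-workload concept as arithmetic: sum all paid and unpaid
# productive activity (Kahn 1991). Hour figures are hypothetical.

def total_workload(weekly_hours: dict) -> float:
    return sum(weekly_hours.values())

week = {
    "regular employment": 40,
    "overtime": 4,
    "housework": 15,
    "child care": 12,
    "care of relatives": 3,
    "voluntary organizations": 2,
}
assert total_workload(week) == 76  # well above the 44 paid hours alone
```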
The fact that the division of labour between spouses in the home has remained the same, while the employment situation of women has changed radically, has led to a heavy workload for women, with little opportunity for them to relax in the evenings (Frankenhaeuser et al. 1989). Until a better insight has been gained into the causal links between workload, stress and health, it will remain necessary to regard prolonged stress responses, displayed in particular by women at the managerial level, as warning signals of possible long-term health risks (Frankenhaeuser, Lundberg and Chesney 1991).
The patterning and duration of the hours a person works are a very important aspect of his or her experience of the work situation. Most workers feel that they are paid for their time rather than explicitly for their efforts, and thus the transaction between the worker and the employer is one of exchanging time for money. Thus, the quality of the time being exchanged is a very important part of the equation. Time that has high value because of its importance to the worker in terms of allowing sleep, interaction with family and friends and participation in community events may be more highly prized, and thus require extra financial compensation, as compared to normal “day work” time when many of the worker’s friends and family members are themselves at work or at school. The balance of the transaction can also be changed by making the time spent at work more congenial to the worker, for example, by improving working conditions. The commute to and from work is unavailable to the worker for recreation, so this time too must be considered as “grey time” (Knauth et al. 1983) and therefore a “cost” to the worker. Thus, measures such as compressed workweeks, which reduce the number of commuting trips taken per week, or flexitime, which reduces the commute time by allowing the worker to avoid the rush hour, are again likely to change the balance.
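The effect of a compressed workweek on commuting "grey time" is straightforward back-of-envelope arithmetic. The commute length below is a hypothetical figure.

```python
# Back-of-envelope arithmetic for the point above: a compressed workweek
# reduces weekly "grey time" by cutting the number of commuting trips.
# The 45-minute one-way commute is a hypothetical example.

def weekly_commute_hours(days_per_week: int, one_way_minutes: float) -> float:
    return days_per_week * 2 * one_way_minutes / 60.0

five_day = weekly_commute_hours(5, 45)   # conventional 5 x 8-hour schedule
four_day = weekly_commute_hours(4, 45)   # compressed 4 x 10-hour schedule

assert five_day == 7.5
assert four_day == 6.0   # 1.5 hours of "grey time" recovered per week
```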
As Kogi (1991) has remarked, there is a general trend in both manufacturing and service industries towards greater flexibility in the temporal programming of work. There are a number of reasons for this trend, including the high cost of capital equipment, consumer demand for around-the-clock service, legislative pressure to reduce the length of the workweek and (in some societies such as the United States and Australia) taxation pressure on the employer to have as few different employees as possible. For many employees, the conventional “9 to 5” or “8 to 4”, Monday through Friday workweek is a thing of the past, either because of new work systems or because of the excessive overtime required.
Kogi notes that while the benefits to the employer of such flexibility are quite clear in allowing extended business hours, accommodation of market demand and greater management flexibility, the benefits to the worker may be less certain. Unless the flexible schedule involves elements of choice for workers with respect to their particular hours of work, flexibility can often mean disruptions in their biological clocks and domestic situations. Extended work shifts may also lead to fatigue, compromising safety and productivity, as well as to increased exposure to chemical hazards.
Human biology is specifically oriented towards wakefulness during daylight and sleep at night. Any work schedule which requires late evening or all-night wakefulness as a result of compressed workweeks, mandatory overtime or shiftwork will therefore disrupt the biological clock (Monk and Folkard 1992). These disruptions can be assessed by measuring workers’ “circadian rhythms” - the regular fluctuations over the 24-hour period in vital signs, blood and urine composition, mood and performance efficiency (Aschoff 1981). The measure used most often in shiftwork studies has been body temperature, which, under normal conditions, shows a clear rhythm with a peak at about 2000 hours, a trough at about 0500 hours and a difference of about 0.7 °C between the two. After an abrupt change in routine, the amplitude (size) of the rhythm diminishes and the phase (timing) of the rhythm is slow to adjust to the new schedule. Until the adjustment process is complete, sleep is disrupted and daytime mood and performance efficiency are impaired. These symptoms can be regarded as the shiftwork equivalent of jet lag and can be extremely long lasting (Knauth and Rutenfranz 1976).
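A simple cosinor (cosine-fit) model is one common way to parameterize such a rhythm. The sketch below uses the figures cited above (peak near 2000 hours, roughly 0.7 °C peak-to-trough swing); the mesor of 36.8 °C is an assumed value, and a pure cosine places the trough 12 hours after the peak (0800), whereas the real, asymmetric waveform bottoms out nearer 0500, an asymmetry this sketch ignores.

```python
# A simple cosinor approximation of the body-temperature rhythm:
# mesor 36.8 C (assumed), amplitude 0.35 C (half the ~0.7 C
# peak-to-trough range cited in the text), acrophase at 2000 hours.
# A pure cosine is symmetric, so the modelled trough falls at 0800
# rather than the observed ~0500; real waveforms are asymmetric.
from math import cos, pi

def body_temp(hour: float, mesor: float = 36.8,
              amplitude: float = 0.35, acrophase: float = 20.0) -> float:
    return mesor + amplitude * cos(2 * pi * (hour - acrophase) / 24.0)

peak, trough = body_temp(20.0), body_temp(8.0)
assert round(peak - trough, 2) == 0.70   # ~0.7 C peak-to-trough swing
assert peak > body_temp(5.0) > trough    # early morning sits near the low
```

After a shift change, adjustment would appear in such a model as a shrinking amplitude and a gradually shifting acrophase.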
Abnormal work hours can also lead to poor health. Although it has proved difficult to quantify the exact size of the effect, it appears that, in addition to sleep disorders, gastrointestinal disorders (including peptic ulcers) and cardiovascular disease can be more frequently found in shift workers (and former shift workers) than in day workers (Scott and LaDou 1990). There is also some preliminary evidence for increased incidence of psychiatric symptoms (Cole, Loving and Kripke 1990).
Not only human biology but also human society opposes those who work abnormal hours. Unlike the nocturnal sleep of the majority, which is carefully protected by strict taboos against loud noise and telephone use at night, the late waking, day-sleeping and napping required by those working abnormal hours are only grudgingly tolerated by society. Evening and weekend community events can also be denied to these people, leading to feelings of alienation.
It is with the family, however, that the social disruptions of abnormal work hours may be the most devastating. For the worker, the family roles of parent, caregiver, social companion and sexual partner can all be severely compromised by abnormal work hours, leading to marital disharmony and problems with children (Colligan and Rosa 1990). Moreover, the worker’s attempts to rectify, or to avoid, such social problems may result in a decrease in sleep time, thus leading to poor alertness and compromised safety and productivity.
Just as the problems of abnormal work hours are multifaceted, so too must be the solutions to those problems. The primary areas to be addressed should include:
1. selection and education of the worker
2. selection of the most appropriate work schedule or roster
3. improvement of the work environment.
Selection and education of the worker should involve identification and counselling of those persons likely to experience difficulties with abnormal or extended work hours (e.g., older workers and those with high sleep needs, extensive domestic workloads or long commutes). Education in circadian and sleep hygiene principles and family counselling should also be made available (Monk and Folkard 1992). Education is an extremely powerful tool in helping those with abnormal work hours to cope, and in reassuring them about why they may be experiencing problems. Selection of the most appropriate schedule should begin with a decision as to whether abnormal work hours are actually needed at all. For example, night work may in many cases be done better at a different time of day (Knauth and Rutenfranz 1982). Consideration should also be given to the schedule best suited to the work situation, bearing in mind the nature of the work and the demographics of the workforce. Improvement of the work environment may involve raising illumination levels and providing adequate canteen facilities at night.
The particular pattern of work hours chosen for an employee can represent a significant challenge to his or her biology, domestic situation and role in the community. Informed decisions should be made, incorporating a study of the demands of the work situation and the demographics of the workforce. Any changes in hours of work should be preceded by detailed investigation and consultation with the employees and followed by evaluation studies.
In this article, the links between the physical features of the workplace and occupational health are examined. Workplace design is concerned with a variety of physical conditions within work environments that can be objectively observed or recorded and modified through architectural, interior design and site planning interventions. For the purposes of this discussion, occupational health is broadly construed to encompass multiple facets of workers’ physical, mental and social well-being (World Health Organization 1984). Thus, a broad array of health outcomes is examined, including employee satisfaction and morale, work-group cohesion, stress reduction, illness and injury prevention, as well as environmental supports for health promotion at the worksite.
Empirical evidence for the links between workplace design and occupational health is reviewed below. This review, highlighting the health effects of specific design features, must be qualified in certain respects. First, from an ecological perspective, worksites function as complex systems comprised of multiple social and physical environmental conditions, which jointly influence employee well-being (Levi 1992; Moos 1986; Stokols 1992). Thus, the health consequences of environmental conditions are often cumulative and sometimes involve complex mediated and moderated relationships among the sociophysical environment, personal resources and dispositions (Oldham and Fried 1987; Smith 1987; Stellman and Henifin 1983). Moreover, enduring qualities of people-environment transaction, such as the degree to which employees perceive their work situation to be controllable, socially supportive and compatible with their particular needs and abilities, may have a more pervasive influence on occupational health than any single facet of workplace design (Caplan 1983; Karasek and Theorell 1990; Parkes 1989; Repetti 1993; Sauter, Hurrell and Cooper 1989). The research findings reviewed should be interpreted in light of these caveats.
The relationships between worksite design and occupational health can be considered at several levels of analysis, including the:
1. physical arrangement of employees’ immediate work area
2. ambient environmental qualities of the work area
3. physical organization of buildings that comprise a particular workplace
4. exterior amenities and site planning of those facilities.
Previous research has focused primarily on the first and second levels, while giving less attention to the third and fourth levels of workplace design.
The immediate work area extends from the core of an employee’s desk or workstation to the physical enclosure or imaginary boundary surrounding his or her work space. Several features of the immediate work area have been found to influence employee well-being. The degree of physical enclosure surrounding one’s desk or workstation, for example, has been shown in several studies to be positively related to the employee’s perception of privacy, satisfaction with the work environment and overall job satisfaction (Brill, Margulis and Konar 1984; Hedge 1986; Marans and Yan 1989; Oldham 1988; Sundstrom 1986; Wineman 1986). Moreover, “open-plan” (low enclosure) work areas have been linked to more negative social climates in work groups (Moos 1986) and more frequent reports of headaches among employees (Hedge 1986).
It is important to note, however, that the potential health effects of workstation enclosure may depend on the type of work being performed (e.g., confidential versus non-confidential, team versus individualized tasks; see Brill, Margulis and Konar 1984), job status (Sundstrom 1986), levels of social density adjacent to one’s work area (Oldham and Fried 1987), and workers’ needs for privacy and stimulation screening (Oldham 1988).
A number of studies have shown that the presence of windows in the employees’ immediate work areas (especially windows that afford views of natural or landscaped settings), exposure to indoor natural elements (e.g., potted plants, pictures of wilderness settings), and opportunities to personalize the decor of one’s office or workstation are associated with higher levels of environmental and job satisfaction and lower levels of stress (Brill, Margulis and Konar 1984; Goodrich 1986; Kaplan and Kaplan 1989; Steele 1986; Sundstrom 1986). Providing employees with localized controls over acoustic, lighting and ventilation conditions within their work areas has been linked to higher levels of environmental satisfaction and lower levels of stress in some studies (Becker 1990; Hedge 1991; Vischer 1989). Finally, several research programmes have documented the health benefits associated with employees’ use of adjustable, ergonomically sound furniture and equipment; these benefits include reduced rates of eyestrain and of repetitive motion injuries and lower back pain (Dainoff and Dainoff 1986; Grandjean 1987; Smith 1987).
Ambient environmental conditions originate from outside the worker’s immediate work area. These pervasive qualities of the worksite influence the comfort and well-being of employees whose work spaces are located within a common region (e.g., a suite of offices located on one floor of a building). Examples of ambient environmental qualities include levels of noise, speech privacy, social density, illumination and air quality - conditions that are typically present within a particular portion of the worksite. Several studies have documented the adverse health impacts of chronic noise disturbance and low levels of speech privacy in the workplace, including elevated levels of physiological and psychological stress and reduced levels of job satisfaction (Brill, Margulis and Konar 1984; Canter 1983; Klitzman and Stellman 1989; Stellman and Henifin 1983; Sundstrom 1986; Sutton and Rafaeli 1987). High levels of social density in the immediate vicinity of one’s work area have also been linked with elevated stress levels and reduced job satisfaction (Oldham 1988; Oldham and Fried 1987; Oldham and Rotchford 1983).
Health consequences of office lighting and ventilation systems have been observed as well. In one study, lensed indirect fluorescent uplighting was associated with higher levels of employee satisfaction and reduced eyestrain, in comparison with traditional fluorescent downlighting (Hedge 1991). Positive effects of natural lighting on employees’ satisfaction with the workplace also have been reported (Brill, Margulis and Konar 1984; Goodrich 1986; Vischer and Mees 1991). In another study, office workers exposed to chilled-air ventilation systems evidenced higher rates of upper-respiratory problems and physical symptoms of “sick building syndrome” than those whose buildings were equipped with natural or mechanical (non-chilled, non-humidified) ventilation systems (Burge et al. 1987; Hedge 1991).
Features of the ambient environment that have been found to enhance the social climate and cohesiveness of work groups include the provision of team-oriented spaces adjacent to individualized offices and workstations (Becker 1990; Brill, Margulis and Konar 1984; Steele 1986; Stone and Luchetti 1985) and visible symbols of corporate and team identity displayed within lobbies, corridors, conference rooms, lounges and other collectively used areas of the worksite (Becker 1990; Danko, Eshelman and Hedge 1990; Ornstein 1990; Steele 1986).
This level of design encompasses the interior physical features of work facilities that extend throughout an entire building, many of which are not immediately experienced within an employee’s own work space or within those adjacent to it. For example, enhancing the structural integrity and fire-resistance of buildings, and designing stairwells, corridors and factories to prevent injuries, are essential strategies for promoting worksite safety and health (Archea and Connell 1986; Danko, Eshelman and Hedge 1990). Building layouts that are consistent with the adjacency needs of closely interacting units within an organization can improve coordination and cohesion among work groups (Becker 1990; Brill, Margulis and Konar 1984; Sundstrom and Altman 1989). The provision of physical fitness facilities at the worksite has been found to be an effective strategy for enhancing employees’ health practices and stress management (O’Donnell and Harris 1994). Finally, the presence of legible signs and wayfinding aids, attractive lounge and dining areas, and child-care facilities at the worksite have been identified as design strategies that enhance employees’ job satisfaction and stress management (Becker 1990; Brill, Margulis and Konar 1984; Danko, Eshelman and Hedge 1990; Steele 1986; Stellman and Henifin 1983; Vischer 1989).
Exterior environmental conditions adjacent to the worksite may also carry health consequences. One study reported an association between employees’ access to landscaped, outdoor recreational areas and reduced levels of job stress (Kaplan and Kaplan 1989). Other researchers have suggested that the geographic location and site planning of the worksite can influence the mental and physical well-being of workers to the extent that they afford greater access to parking and public transit, restaurants and retail services, good regional air quality and the avoidance of violent or otherwise unsafe areas in the surrounding neighbourhood (Danko, Eshelman and Hedge 1990; Michelson 1985; Vischer and Mees 1991). However, the health benefits of these design strategies have not yet been evaluated in empirical studies.
Prior studies of environmental design and occupational health reflect certain limitations and suggest several issues for future investigation. First, earlier research has emphasized the health effects of specific design features (e.g., workstation enclosure, furnishings, lighting systems), while neglecting the joint influence of physical, interpersonal and organizational factors on well-being. Yet the health benefits of improved environmental design may be moderated by the social climate and organizational qualities of the workplace - for example, a participative versus a non-participative structure (Becker 1990; Parkes 1989; Klitzman and Stellman 1989; Sommer 1983; Steele 1986). The interactive links between physical design features, employee characteristics, social conditions at work and occupational health therefore warrant greater attention in subsequent studies (Levi 1992; Moos 1986; Stokols 1992). At the same time, an important challenge for future research is to clarify the operational definitions of particular design features (e.g., the “open plan” office), which have varied widely in earlier studies (Brill, Margulis and Konar 1984; Marans and Yan 1989; Wineman 1986).
Secondly, employee characteristics such as job status, gender and dispositional styles have been found to mediate the health consequences of worksite design (Burge et al. 1987; Oldham 1988; Hedge 1986; Sundstrom 1986). Yet it is often difficult to disentangle the separate effects of environmental features and individual differences because of ecological correlations among these variables - for example, higher-status employees tend to occupy more enclosed workstations with more comfortable furnishings (Klitzman and Stellman 1989). Future studies should incorporate experimental techniques and sampling strategies that permit an assessment of the main and interactive effects of personal and environmental factors on occupational health. Moreover, specialized design and ergonomic criteria to enhance the health of diverse and vulnerable employee groups (e.g., disabled, elderly and single-parent female workers) remain to be developed in future research (Michelson 1985; Ornstein 1990; Steinfeld 1986).
Thirdly, prior research on the health outcomes of worksite design has relied heavily on survey methods to assess employees’ perceptions of both their work environments and their health status, placing certain constraints (for example, “common method variance”) on the interpretation of data (Klitzman and Stellman 1989; Oldham and Rotchford 1983). Furthermore, the majority of these studies have used cross-sectional rather than longitudinal research designs; the latter would permit comparative assessments of intervention and control groups. Future studies should emphasize both field-experimental research designs and multi-method strategies that combine survey techniques with more objective observations and recordings of environmental conditions, medical examinations and physiological measures.
Finally, the health consequences of building organization, exterior amenities and site-planning decisions have received considerably less attention in prior studies than those associated with the more immediate, ambient qualities of employees’ work areas. The health relevance of both proximal and remote aspects of workplace design should be examined more closely in future research.
Several environmental design resources and their potential health benefits are summarized in table 34.1, based on the preceding review of research findings. These resources are grouped according to the four levels of design noted above and emphasize physical features of work settings that have been empirically linked to improved mental, physical and social health outcomes (especially those found at levels 1 and 2), or have been identified as theoretically plausible leverage points for enhancing employee well-being (e.g., several of the features subsumed under levels 3 and 4).
Table 34.1. Environmental design features of the workplace and their emotional, social and physical health outcomes, grouped by level of environmental design

Immediate work area
- Physical enclosure of the work area: enhanced privacy and job satisfaction
- Adjustable furniture and equipment: reduced eyestrain, repetitive-strain and lower-back injuries
- Localized controls of acoustics, lighting and ventilation: enhanced comfort and stress reduction
- Natural elements and personalized decor: enhanced sense of identity and involvement at the workplace
- Presence of windows in the work area: job satisfaction and stress reduction

Ambient qualities of the work area
- Speech privacy and noise control: lower physiological and emotional stress
- Comfortable levels of social density: lower physiological and emotional stress
- Good mix of private and team spaces: improved social climate and cohesion
- Symbols of corporate and team identity: improved social climate and cohesion
- Natural, task and lensed indirect lighting: reduced eyestrain, enhanced satisfaction
- Natural ventilation vs. chilled-air systems: lower rates of respiratory problems

Physical organization of buildings
- Adjacencies among interacting units: enhanced coordination and cohesion
- Legible signage and wayfinding aids: reduced confusion and distress, lower rates of unintentional injuries
- Attractive lounge and food areas onsite: enhanced satisfaction with job and worksite
- Availability of worksite child care: employee convenience, stress reduction
- Physical fitness facilities onsite: improved health practices, lower stress

Exterior amenities and site planning
- Availability of outside recreation areas: enhanced cohesion, stress reduction
- Access to parking and public transit: employee convenience, stress reduction
- Proximity to restaurants and stores: employee convenience, stress reduction
- Good air quality in surrounding area: improved respiratory health
- Low levels of neighbourhood violence: reduced rates of intentional injuries
The incorporation of these resources into the design of work environments should, ideally, be combined with organizational and facilities management policies that maximize the health-promoting qualities of the workplace. These corporate policies include:
1. the designation of worksites as “smoke-free” (Fielding and Phenow 1988)
2. the specification and use of non-toxic, ergonomically sound furnishings and equipment (Danko, Eshelman and Hedge 1990)
3. managerial support for employees’ personalization of their workspace (Becker 1990; Brill, Margulis and Konar 1984; Sommer 1983; Steele 1986)
4. job designs that prevent health problems linked with computer-based work and repetitive tasks (Hackman and Oldham 1980; Sauter, Hurrell and Cooper 1989; Smith and Sainfort 1989)
5. the provision of employee training programmes in the areas of ergonomics and occupational safety and health (Levy and Wegman 1988)
6. incentive programmes to encourage employees’ use of physical fitness facilities and compliance with injury prevention protocols (O’Donnell and Harris 1994)
7. flexitime, telecommuting, job-sharing and ride-sharing programmes to enhance workers’ effectiveness in residential and corporate settings (Michelson 1985; Ornstein 1990; Parkes 1989; Stokols and Novaco 1981)
8. the involvement of employees in the planning of worksite relocations, renovations and related organizational developments (Becker 1990; Brill, Margulis and Konar 1984; Danko, Eshelman and Hedge 1990; Miller and Monge 1986; Sommer 1983; Steele 1986; Stokols et al. 1990).
Organizational efforts to enhance employee well-being are likely to be more effective to the extent that they combine complementary strategies of environmental design and facilities management, rather than relying exclusively on either one of these approaches.
The purpose of this article is to afford the reader an understanding of how ergonomic conditions can affect the psychosocial aspects of working, employee satisfaction with the work environment, and employee health and well-being. The major thesis is that, with respect to physical surroundings, job demands and technological factors, improper design of the work environment and job activities can cause adverse employee perceptions, psychological stress and health problems (Smith and Sainfort 1989; Cooper and Marshall 1976).
Industrial ergonomics is the science of fitting the work environment and job activities to the capabilities, dimensions and needs of people. Ergonomics deals with the physical work environment, tools and technology design, workstation design, job demands and physiological and biomechanical loading on the body. Its goal is to increase the degree of fit among the employees, the environments in which they work, their tools and their job demands. When the fit is poor, stress and health problems can occur. The many relationships between the demands of the job and psychological distress are discussed elsewhere in this chapter as well as in Smith and Sainfort (1989), in which a definition is given of the balance theory of job stress and job design. Balance is the use of different aspects of job design to counteract job stressors. The concept of job balance is important in the examination of ergonomic considerations and health. For instance, the discomforts and disorders produced by poor ergonomic conditions can make an individual more susceptible to job stress and psychological disorders, or can intensify the somatic effects of job stress.
As spelled out by Smith and Sainfort (1989), there are various sources of job stress, including
1. job demands such as high workload and work pace
2. poor job content factors that produce boredom and lack of meaningfulness
3. limited job control or decision latitude
4. organizational policies and procedures that alienate the workforce
5. supervisory style affecting participation and socialization
6. environmental contamination
7. technology factors
8. ergonomic conditions.
Smith (1987) and Cooper and Marshall (1976) discuss the characteristics of the workplace that can cause psychological stress. These include improper workload, heavy work pressure, hostile environment, role ambiguity, lack of challenging tasks, cognitive overload, poor supervisory relations, lack of task control or decision-making authority, poor relationship with other employees and lack of social support from supervisors, fellow employees and family.
Adverse ergonomic characteristics of work can cause visual, muscular and psychological disturbances such as visual fatigue, eye strain, sore eyes, headaches, fatigue, muscle soreness, cumulative trauma disorders, back disorders, psychological tension, anxiety and depression. Sometimes these effects are temporary and may disappear when the individual is removed from work or given an opportunity to rest at work, or when the design of the work environment is improved. When exposure to poor ergonomic conditions is chronic, then the effects can become permanent. Visual and muscular disturbances, and aches and pains can induce anxiety in employees.
The result may be psychological stress or an exacerbation of the stress effects of other adverse working conditions that cause stress. Visual and musculoskeletal disorders that lead to a loss of function and disability can lead to anxiety, depression, anger and melancholy. There is a synergistic relationship among the disorders caused by ergonomic misfit, so that a circular effect is created in which visual or muscular discomfort generates more psychological stress, which then leads to a greater sensitivity in pain perception in the eyes and muscles, which leads to more stress and so on.
Smith and Sainfort (1989) have defined five elements of the work system that are significant in the design of work as it relates to the causes and control of stress. These are: (1) the person; (2) the physical work environment; (3) tasks; (4) technology; and (5) work organization. All of these except the person are discussed below.
The physical work environment produces sensory demands which affect an employee’s ability to see, hear and touch properly, and includes such features as air quality, temperature and humidity. In addition, noise is one of the most prominent of the ergonomic conditions that produce stress (Cohen and Spacapan 1983). When physical working conditions produce a “poor fit” with employees’ needs and capabilities, generalized fatigue, sensory fatigue and performance frustration are the result. Such conditions can lead to psychological stress (Grandjean 1968).
Various aspects of technology have proved troublesome for employees, including incompatible controls and displays, poor response characteristics of controls, displays with poor sensory sensitivity, difficulty in operating characteristics of the technology, equipment that impairs employee performance and equipment breakdowns (Sanders and McCormick 1993; Smith et al. 1992a). Research has shown that employees with such problems report more physical and psychological stress (Smith and Sainfort 1989; Sauter, Dainoff and Smith 1990).
Two very critical ergonomic task factors that have been tied to job stress are heavy workloads and work pressure (Cooper and Smith 1985). Too much or too little work produces stress, as does unwanted overtime work. When employees must work under time pressure, for example, to meet deadlines or when the workload is unrelentingly high, then stress is also high. Other critical task factors that have been tied to stress are machine pacing of the work process, a lack of cognitive content of the job tasks and low task control. From an ergonomic perspective, workloads should be established using scientific methods of time and motion evaluation (ILO 1986), and not be set by other criteria such as economic need to recover capital investment or by the capacity of the technology.
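The time-and-motion approach to setting workloads mentioned above can be illustrated with a standard industrial-engineering calculation, in which an observed cycle time is adjusted by a performance rating and an allowance for rest and delays. The figures below are hypothetical; in practice, the rating and allowance come from a formal time study.

```python
def standard_time(observed_time_min, rating_pct, allowance_pct):
    """Standard time from a time study:
    normal time   = observed time x (performance rating / 100)
    standard time = normal time x (1 + allowance / 100)
    """
    normal_time = observed_time_min * (rating_pct / 100.0)
    return normal_time * (1.0 + allowance_pct / 100.0)

# Hypothetical example: a 2.0-minute observed cycle, an operator rated
# at 110% of normal pace, and a 15% allowance for rest and delays.
std = standard_time(2.0, 110, 15)   # 2.53 min per unit
units_per_hour = 60.0 / std         # about 23.7 units per hour
```

Setting the standard this way, rather than from the capacity of the machinery or from capital-recovery targets, builds recovery time into the expected pace.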
Three ergonomic aspects of the management of the work process have been identified as conditions that can lead to employee psychological stress. These are shift work, machine-paced work or assembly-line work, and unwanted overtime (Smith 1987). Shift work has been shown to disrupt biological rhythms and basic physiological functioning (Tepas and Monk 1987; Monk and Tepas 1985). Machine-paced work or assembly-line work that produces short-cycle tasks with little cognitive content and low employee control over the process leads to stress (Sauter, Hurrell and Cooper 1989). Unwanted overtime can lead to employee fatigue and to adverse psychological reactions such as anger and mood disturbances (Smith 1987). Machine-paced work, unwanted overtime and perceived lack of control over work activities have also been linked to mass psychogenic illness (Colligan 1985).
Autonomy and job control are concepts with a long history in the study of work and health. Autonomy - the extent to which workers can exercise discretion in how they perform their work - is most closely associated with theories that are concerned with the challenge of designing work so that it is intrinsically motivating, satisfying and conducive to physical and mental well-being. In virtually all such theories, the concept of autonomy plays a central role. The term control (defined below) is generally understood to have a broader meaning than autonomy. In fact, one could consider autonomy to be a specialized form of the more general concept of control. Because control is the more inclusive term, it will be used throughout the remainder of this article.
Throughout the 1980s, the concept of control formed the core of perhaps the most influential theory of occupational stress (see, for example, the review of the work stress literature by Ganster and Schaubroeck 1991b). This theory, usually known as the Job Decision Latitude Model (Karasek 1979) stimulated many large-scale epidemiological studies that investigated the joint effects of control in conjunction with a variety of demanding work conditions on worker health. Though there has been some controversy regarding the exact way that control might help determine health outcomes, epidemiologists and organizational psychologists have come to regard control as a critical variable that should be given serious consideration in any investigation of psychosocial work stress conditions. Concern for the possible detrimental effects of low worker control was so high, for example, that in 1987 the National Institute for Occupational Safety and Health (NIOSH) of the United States organized a special workshop of authorities from epidemiology, psychophysiology, and industrial and organizational psychology to critically review the evidence concerning the impact of control on worker health and well-being. This workshop eventually culminated in the comprehensive volume Job Control and Worker Health (Sauter, Hurrell and Cooper 1989) that provides a discussion of the global research efforts on control. Such widespread acknowledgement of the role of control in worker well-being also had an impact on governmental policy, with the Swedish Work Environment Act (Ministry of Labour 1987) stating that “the aim must be for work to be arranged in such a way so that the employee himself can influence his work situation”. In the remainder of this article I summarize the research evidence on work control with the goal of providing the occupational health and safety specialist with the following:
1. a discussion of aspects of worker control that might be important
2. guidelines about how to assess job control in the worksite
3. ideas on how to intervene so as to reduce the deleterious effects of low worker control.
First, what exactly is meant by the term control? In its broadest sense it refers to workers’ ability to actually influence what happens in their work environment. Moreover, this ability to influence the work setting should be considered in light of the worker’s goals. The term refers to the ability to influence matters that are relevant to one’s personal goals. This emphasis on being able to influence the work environment distinguishes control from the related concept of predictability. The latter refers to one’s being able to anticipate what demands will be made on oneself, for example, but does not imply any ability to alter those demands. Lack of predictability constitutes a source of stress in its own right, particularly when it produces a high level of ambiguity about what performance strategies one ought to adopt to perform effectively or if one even has a secure future with the employer. Another distinction that should be made is that between control and the more inclusive concept of job complexity. Early conceptualizations of control considered it together with such aspects of work as skill level and availability of social interaction. Our discussion here discriminates control from these other domains of job complexity.
One can consider mechanisms by which workers can exercise control and the domains over which that control can apply. One way that workers can exercise control is by making decisions as individuals. These decisions can be about what tasks to complete, the order of those tasks, and the standards and processes to follow in completing those tasks, to name but a few. The worker might also have some collective control either through representation or by social action with co-workers. In terms of domains, control might apply to such matters as the work pace, the amount and timing of interaction with others, the physical work environment (lighting, noise and privacy), scheduling of vacations or even matters of policy at the worksite. Finally, one can distinguish between objective and subjective control. One might, for example, have the ability to choose one’s work pace but not be aware of it. Similarly, one might believe that one can influence policies in the workplace even though this influence is essentially nil.
How can the occupational health and safety specialist assess the level of control in a work situation? As recorded in the literature, basically two approaches have been taken. One approach has been to make an occupational-level determination of control. In this case every worker in a given occupation would be considered to have the same level of control, as it is assumed to be determined by the nature of the occupation itself. The disadvantage to this approach, of course, is that one cannot obtain much insight as to how workers are faring in a particular worksite, where their control might have been determined as much by their employer’s policies and practices as by their occupational status. The more common approach is to survey workers about their subjective perceptions of control. A number of psychometrically sound measures have been developed for this purpose and are readily available. The NIOSH control scale (McLaney and Hurrell 1988), for example, consists of sixteen questions and provides assessments of control in the domains of task, decision, resources and physical environment. Such scales can easily be incorporated into an assessment of worker safety and health concerns.
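A minimal sketch of how such a multi-domain control survey might be scored follows. The four domains mirror those assessed by the NIOSH control scale, but the item wording and the 1-to-5 response format used here are hypothetical illustrations, not the published instrument.

```python
# Hypothetical items grouped by control domain; ratings run from
# 1 (very little control) to 5 (very much control).
DOMAINS = {
    "task": ["control over work pace", "control over task order"],
    "decision": ["say in decisions that affect you", "influence over policies"],
    "resources": ["access to needed tools", "access to needed information"],
    "environment": ["control over lighting", "control over noise"],
}

def score_control(responses):
    """responses maps each item to a 1-5 rating; returns the mean
    rating per domain (higher = more perceived control)."""
    return {
        domain: sum(responses[item] for item in items) / len(items)
        for domain, items in DOMAINS.items()
    }
```

Domain means, rather than a single total, preserve the distinction between, say, high task control and low decision control, which may call for different interventions.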
Is control a significant determinant of worker safety and health? This question has driven many large-scale research efforts since at least 1985. Since most of these studies have consisted of non-experimental field surveys in which control was not purposely manipulated, the evidence can only show a systematic correlation between control and health and safety outcome variables. The lack of experimental evidence prevents us from making direct causal assertions, but the correlational evidence is quite consistent in showing that workers with lower levels of control suffer more from mental and physical health complaints. The evidence is strongly suggestive, then, that increasing worker control constitutes a viable strategy for improving the health and welfare of workers. A more controversial question is whether control interacts with other sources of psychosocial stress to determine health outcomes. That is, will high control levels counteract the deleterious effects of other job demands? This is an intriguing question, for, if true, it suggests that the ill effects of high workloads, for example, can be negated by increasing worker control with no corresponding need to lower workload demands. The evidence is clearly mixed on this question, however. About as many investigators have reported such interaction effects as have not. Thus, control should not be considered a panacea that will cure the problems brought on by other psychosocial stressors.
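The demands-by-control interaction at issue is typically tested as a product term in a regression of strain on demands and control. The sketch below uses synthetic data with a built-in buffering effect; real studies rely on survey measures and many more covariates.

```python
import numpy as np

# Synthetic illustration: strain is generated with a true buffering
# effect (negative demands-x-control coefficient), then recovered by
# ordinary least squares.
rng = np.random.default_rng(0)
n = 200
demands = rng.normal(size=n)
control = rng.normal(size=n)
strain = (0.5 * demands - 0.4 * control
          - 0.3 * demands * control
          + rng.normal(scale=0.5, size=n))

# Columns: intercept, demands, control, demands x control.
X = np.column_stack([np.ones(n), demands, control, demands * control])
beta, *_ = np.linalg.lstsq(X, strain, rcond=None)
# A negative beta[3] indicates that high control buffers the effect
# of high demands on strain.
```

The mixed findings in the literature amount to some studies estimating a reliably negative interaction coefficient and others estimating one indistinguishable from zero.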
Work by organizational researchers suggests that increasing worker control can significantly improve health and well-being. Moreover, it is relatively easy to make a diagnosis of low worker control through the use of brief survey measures. How can the health and safety specialist intervene, then, to increase worker control levels? As there are many domains of control, there are many ways to increase workplace control. These range from providing opportunities for workers to participate in decisions that affect them to the fundamental redesign of jobs. What is clearly important is that control domains be targeted that are relevant to the primary goals of the workers and that fit the situational demands. These domains can probably best be determined by involving workers in joint diagnosis and problem-solving sessions. It should be noted, however, that the kinds of changes in the workplace that in many cases are necessary to achieve real gains in control involve fundamental changes in management systems and policies. Increasing control might be as simple as providing a switch that allows machine-paced workers to control their pace, but it is just as likely to involve important changes in the decision-making authority of workers. Thus, organizational decision makers must usually be full and active supporters of control-enhancing interventions.
In this article, the reasons machine-pacing is utilized in the workplace are reviewed. Furthermore, a classification of machine-paced work, information on the impact of machine-paced work on well-being and methodologies by which the effects can be alleviated or reduced, are set forth.
The effective utilization of machine-paced work has the following benefits for an organization:
· It increases customer satisfaction: for example, it provides speedier service in drive-in restaurants when a number of stations are assigned to serve the customers sequentially.
· It reduces overhead cost through economic use of high technology, reduction of stock set aside for processing, reduction in factory floor space and reduction in supervisory costs.
· It reduces direct costs through reduced training time, lower hourly wages and high production return per unit of wages.
· It contributes to national productivity through provision of employment for unskilled workers and reduction in the production costs of goods and services.
A classification of paced work is provided in figure 34.7.
Machine-paced research has been carried out in laboratory settings, in industry (by case studies and controlled experiments) and by epidemiological studies (Salvendy 1981).
An analysis was performed of 85 studies dealing with machine-paced and self-paced work, of which 48% were laboratory studies, 30% industrial, 14% review studies, 4% combined laboratory and industrial, and 4% conceptual studies (Burke and Salvendy 1981). Of the 103 variables used in these studies, 41% were physiological, 32% were performance variables and 27% psychological. From this analysis, the following practical implications were derived for the use of machine-paced versus self-paced work arrangements:
· Tasks with high cognitive or perceptual load should be administered under self-paced as opposed to machine-paced conditions.
· To reduce error and low productivity, jobs should be allocated according to the worker’s personality and capacities.
· Intelligent, shrewd, creative and self-sufficient operators prefer to work on self-paced rather than machine-paced tasks. (See table 34.2 for more complete psychological profiles.)
· Workers should be encouraged to select a workload capacity which is optimum for them in any given situation.
· To maintain a high activation level (or the required level for performing the task), the work sessions should be interrupted by rest periods or by other types of work. This type of break should be implemented before the onset of deactivation.
· Maximal work speeds are not economical and can result in workers’ becoming overstrained when they continue to work excessively fast for a long time. On the other hand, too low a speed may also be detrimental to workers’ performance.
In a year-long study of industrial workers under experimentally controlled conditions, in which over 50 million data points were collected, it was shown that 45% of the labour force prefers self-paced work, 45% prefers machine-paced work, and 10% likes neither type of work (Salvendy 1976).
Uncertainty is the most significant contributor to stress and can be effectively managed by performance feedback (see figure 34.8) (Salvendy and Knight 1983).
The computerization of work has made possible the development of a new approach to work monitoring called electronic performance monitoring (EPM). EPM has been defined as the “computerized collection, storage, analysis, and reporting of information about employees’ activities on a continuous basis” (USOTA 1987). Although banned in many European countries, electronic performance monitoring is increasing throughout the world on account of intense competitive pressures to improve productivity in a global economy.
EPM has changed the psychosocial work environment. This application of computer technology has significant implications for work supervision, workload demands, performance appraisal, performance feedback, rewards, fairness and privacy. As a result, occupational health researchers, worker representatives, government agencies and the public news media have expressed concern about the stress-health effects of electronic performance monitoring (USOTA 1987).
Traditional approaches to work monitoring include direct observation of work behaviours, examination of work samples, review of progress reports and analysis of performance measures (Larson and Callahan 1990). Historically, employers have always attempted to improve on these methods of monitoring worker performance. Considered as part of a continuing monitoring effort across the years, then, EPM is not a new development. What is new, however, is the use of EPM, particularly in office and service work, to capture employee performance on a second-by-second, keystroke-by-keystroke basis so that work management in the form of corrective action, performance feedback, delivery of incentive pay, or disciplinary measures can be taken at any time (Smith 1988). In effect, the human supervisor is being replaced by an electronic supervisor.
EPM is used in office work such as word processing and data entry to monitor keystroke production and error rates. Airline reservation clerks and directory assistance operators are monitored by computers to determine how long it takes to service customers and to measure the time interval between calls. EPM also is used in more traditional economic sectors. Freight haulers, for example, are using computers to monitor driver speed and fuel consumption, and tire manufacturers are electronically monitoring the productivity of rubber workers. In sum, EPM is used to establish performance standards, track employee performance, compare actual performance with predetermined standards and administer incentive pay programmes based on these standards (USOTA 1987).
Advocates of EPM assert that continuous electronic work monitoring is essential to high performance and productivity in the contemporary workplace. It is argued that EPM enables managers and supervisors to organize and control human, material and financial resources. Specifically, EPM provides for:
1. increased control over performance variability
2. increased objectivity and timeliness of performance evaluation and feedback
3. efficient management of large office and customer service operations through the electronic supervision of work, and
4. establishment and enforcement of performance standards (for example, number of forms processed per hour).
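As an illustration of point 4, comparing captured performance against a predetermined standard amounts to simple rate arithmetic. The data schema and threshold below are hypothetical sketches, not drawn from any particular EPM product.

```python
from dataclasses import dataclass

@dataclass
class WorkSample:
    """One monitoring interval for one employee (hypothetical schema)."""
    employee_id: str
    forms_processed: int
    hours: float

def compare_to_standard(samples, forms_per_hour_standard):
    """Compute each employee's processing rate and flag whether it
    meets a predetermined standard (illustrative logic only)."""
    report = {}
    for s in samples:
        rate = s.forms_processed / s.hours
        report[s.employee_id] = {
            "forms_per_hour": rate,
            "meets_standard": rate >= forms_per_hour_standard,
        }
    return report
```

That such a comparison can be run continuously, at any granularity, is precisely what distinguishes EPM from the periodic work sampling it replaced.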
Supporters of electronic monitoring also claim that, from the worker’s perspective, there are several benefits. Electronic monitoring, for example, can provide regular feedback of work performance, which enables workers to take corrective action when necessary. It also satisfies the worker’s need for self-evaluation and reduces performance uncertainty.
Despite the possible benefits of EPM, there is concern that certain monitoring practices are abusive and constitute an invasion of employee privacy (USOTA 1987). Privacy has become an issue particularly when workers do not know when or how often they are being monitored. Since work organizations often do not share performance data with workers, a related privacy issue is whether workers should have access to their own performance records or the right to question possible wrong information.
Workers also have raised objections to the manner in which monitoring systems have been implemented (Smith, Carayon and Miezio 1986; Westin 1986). In some workplaces, monitoring is perceived as an unfair labour practice when it is used to measure individual, as opposed to group, performance. In particular, workers have taken exception to the use of monitoring to enforce compliance with performance standards that impose excessive workload demands. Electronic monitoring also can make the work process more impersonal by replacing a human supervisor with an electronic supervisor. In addition, the overemphasis on increased production may encourage workers to compete instead of cooperate with one another.
Various theoretical paradigms have been postulated to account for the possible stress-health effects of EPM (Amick and Smith 1992; Schleifer and Shell 1992; Smith et al. 1992b). A fundamental assumption made by many of these models is that EPM indirectly influences stress-health outcomes by intensifying workload demands, diminishing job control and reducing social support. In effect, EPM mediates changes in the psychosocial work environment that result in an imbalance between the demands of the job and the worker’s resources to adapt.
The impact of EPM on the psychosocial work environment is felt at three levels of the work system: the organization-technology interface, the job-technology interface and the human-technology interface (Amick and Smith 1992). The extent of work system transformation and the subsequent implications for stress outcomes are contingent upon the inherent characteristics of the EPM process; that is, the type of information gathered, the method of gathering the information and the use of the information (Carayon 1993). These EPM characteristics can interact with various job design factors and increase stress-health risks.
An alternative theoretical perspective views EPM as a stressor that directly results in strain independent of other job-design stress factors (Smith et al. 1992b; Carayon 1994). EPM, for example, can generate fear and tension as a result of workers being constantly watched by “Big Brother”. EPM also may be perceived by workers as an invasion of privacy that is highly threatening.
With respect to the stress effects of EPM, empirical evidence obtained from controlled laboratory experiments indicates that EPM can produce mood disturbances (Aiello and Shao 1993; Schleifer, Galinsky and Pan 1995) and hyperventilatory stress reactions (Schleifer and Ley 1994). Field studies have also reported that EPM alters job-design stress factors (for example, workload), which, in turn, generate tension or anxiety together with depression (Smith, Carayon and Miezio 1986; Ditecco et al. 1992; Smith et al. 1992b; Carayon 1994). In addition, EPM is associated with symptoms of musculoskeletal discomfort among telecommunication workers and data-entry office workers (Smith et al. 1992b; Sauter et al. 1993; Schleifer, Galinsky and Pan 1995).
The use of EPM to enforce compliance with performance standards is perhaps one of the most stressful aspects of this approach to work monitoring (Schleifer and Shell 1992). Under these conditions, it may be useful to adjust performance standards with a stress allowance (Schleifer and Shell 1992): a stress allowance would be applied to the normal cycle time, as is the case with other more conventional work allowances such as rest breaks and machine delays. Particularly among workers who have difficulty meeting EPM performance standards, a stress allowance would optimize workload demands and promote well-being by balancing the productivity benefits of electronic performance monitoring against the stress effects of this approach to work monitoring.
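The stress allowance described above can be sketched as an addition to the normal cycle time, in the same way as conventional rest and machine-delay allowances. Treating the allowances as additive percentages is an illustrative assumption, not a prescription from Schleifer and Shell (1992).

```python
def adjusted_cycle_time(normal_cycle_s, rest_pct, delay_pct, stress_pct):
    """Apply rest, machine-delay and stress allowances to a normal
    cycle time. Additive percentage allowances are an illustrative
    assumption for this sketch."""
    total_allowance = (rest_pct + delay_pct + stress_pct) / 100.0
    return normal_cycle_s * (1.0 + total_allowance)

# Hypothetical example: a 30-second normal cycle with a 10% rest
# allowance, a 5% machine-delay allowance and a 5% stress allowance.
adjusted = adjusted_cycle_time(30.0, 10, 5, 5)   # 36.0 seconds
```

A longer enforced cycle lowers the required pace, trading some monitored throughput for reduced strain among workers who have difficulty meeting the unadjusted standard.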
Beyond the question of how to minimize or prevent the possible stress-health effects of EPM, a more fundamental issue is whether this “Tayloristic” approach to work monitoring has any utility in the modern workplace. Work organizations are increasingly utilizing sociotechnical work-design methods, “total quality management” practices, participative work groups, and organizational, as opposed to individual, measures of performance. As a result, electronic work monitoring of individual workers on a continuous basis may have no place in high-performance work systems. In this regard, it is interesting to note that those countries (for example, Sweden and Germany) that have banned EPM are the same countries which have most readily embraced the principles and practices associated with high-performance work systems.
Roles represent sets of behaviours that are expected of employees. To understand how organizational roles develop, it is particularly informative to see the process through the eyes of a new employee. Starting with the first day on the job, a new employee is presented with considerable information designed to communicate the organization’s role expectations. Some of this information is presented formally through a written job description and regular communications with one’s supervisor. Hackman (1992), however, states that workers also receive a variety of informal communications (termed discretionary stimuli) designed to shape their organizational roles. For example, a junior school faculty member who is too vocal during a departmental meeting may receive looks of disapproval from more senior colleagues. Such looks are subtle, but communicate much about what is expected of a junior colleague.
Ideally, the process of defining each employee’s role should proceed such that each employee is clear about his or her role. Unfortunately, this is often not the case and employees experience a lack of role clarity or, as it is commonly called, role ambiguity. According to Breaugh and Colihan (1994), employees are often unclear about how to do their jobs, when certain tasks should be performed and the criteria by which their performance will be judged. In some cases, it is simply difficult to provide an employee with a crystal-clear picture of his or her role. For example, when a job is relatively new, it is still “evolving” within the organization. Furthermore, in many jobs the individual employee has tremendous flexibility regarding how to get the job done. This is particularly true of highly complex jobs. In many other cases, however, role ambiguity is simply due to poor communication between either supervisors and subordinates or among members of work groups.
Another problem that can arise when role-related information is communicated to employees is role overload. That is, the role consists of too many responsibilities for an employee to handle in a reasonable amount of time. Role overload can occur for a number of reasons. In some occupations, role overload is the norm. For example, physicians in training experience tremendous role overload, largely as preparation for the demands of medical practice. In other cases, it is due to temporary circumstances. For example, if someone leaves an organization, the roles of other employees may need to be temporarily expanded to cover the departed employee’s responsibilities. In other instances, organizations may not anticipate the demands of the roles they create, or the nature of an employee’s role may change over time. Finally, it is also possible that an employee may voluntarily take on too many role responsibilities.
What are the consequences to workers of circumstances characterized by either role ambiguity or role overload? Years of research on role ambiguity have shown that it is a noxious state which is associated with negative psychological, physical and behavioural outcomes (Jackson and Schuler 1985). That is, workers who perceive role ambiguity in their jobs tend to be dissatisfied with their work, anxious and tense, report high numbers of somatic complaints, tend to be absent from work and may leave their jobs. The most common correlates of role overload tend to be physical and emotional exhaustion. In addition, epidemiological research has shown that overloaded individuals (as measured by work hours) may be at greater risk for coronary heart disease. In considering the effects of both role ambiguity and role overload, it must be kept in mind that most studies are cross-sectional (measuring role stressors and outcomes at one point in time) and have examined self-reported outcomes. Thus, inferences about causality must be somewhat tentative.
Given the negative effects of role ambiguity and role overload, it is important for organizations to minimize, if not eliminate, these stressors. Since role ambiguity, in many cases, is due to poor communication, it is necessary to take steps to communicate role requirements more effectively. French and Bell (1990), in a book entitled Organization Development, describe interventions such as responsibility charting, role analysis and role negotiation. (For a recent example of the application of responsibility charting, see Schaubroeck et al. 1993). Each of these is designed to make employees’ role requirements explicit and well defined. In addition, these interventions allow employees input into the process of defining their roles.
When role requirements are made explicit, it may also be revealed that role responsibilities are not equitably distributed among employees. Thus, the previously mentioned interventions may also prevent role overload. In addition, organizations should keep up to date regarding individuals’ role responsibilities by reviewing job descriptions and carrying out job analyses (Levine 1983). It may also help to encourage employees to be realistic about the number of role responsibilities they can handle. In some cases, employees who are under pressure to take on too much may need to be more assertive when negotiating role responsibilities.
As a final comment, it must be remembered that role ambiguity and role overload are subjective states. Thus, efforts to reduce these stressors must consider individual differences. Some workers may in fact enjoy the challenge of these stressors. Others, however, may find them aversive. If this is the case, organizations have a moral, legal and financial interest in keeping these stressors at manageable levels.
Historically, the sexual harassment of female workers has been ignored, denied, made to seem trivial, condoned and even implicitly supported, with women themselves being blamed for it (MacKinnon 1978). Its victims are almost entirely women, and it has been a problem since females first sold their labour outside the home.
Although sexual harassment also exists outside the workplace, here it will be taken to denote harassment in the workplace.
Sexual harassment is not an innocent flirtation nor the mutual expression of attraction between men and women. Rather, sexual harassment is a workplace stressor that poses a threat to a woman’s psychological and physical integrity and security, in a context in which she has little control because of the risk of retaliation and the fear of losing her livelihood. Like other workplace stressors, sexual harassment may have adverse health consequences for women that can be serious and, as such, qualifies as a workplace health and safety issue (Bernstein 1994).
In the United States, sexual harassment is viewed primarily as a discrete case of wrongful conduct to which one may appropriately respond with blame and recourse to legal measures for the individual. In the European Community it tends to be viewed rather as a collective health and safety issue (Bernstein 1994).
Because the manifestations of sexual harassment vary, people may not agree on its defining qualities, even where it has been set forth in law. Still, there are some common features of harassment that are generally accepted by those doing work in this area:
1. Sexual harassment may involve verbal or physical sexual behaviours directed at a specific woman (quid pro quo), or it may involve more general behaviours that create a “hostile environment” that is degrading, humiliating and intimidating towards women (MacKinnon 1978).
2. It is unwelcome and unwanted.
3. It can vary in severity.
When directed towards a specific woman it can involve sexual comments and seductive behaviours, “propositions” and pressure for dates, touching, sexual coercion through the use of threats or bribery and even physical assault and rape. In the case of a “hostile environment”, which is probably the more common state of affairs, it can involve jokes, taunts and other sexually charged comments that are threatening and demeaning to women; pornographic or sexually explicit posters; and crude sexual gestures, and so forth. One can add to these characteristics what is sometimes called “gender harassment”, which involves sexist remarks that demean the dignity of women.
Women themselves may not label unwanted sexual attention or sexual remarks as harassing because they accept it as “normal” on the part of males (Gutek 1985). In general, women (especially if they have been harassed) are more likely to identify a situation as sexual harassment than men, who tend rather to make light of the situation, to disbelieve the woman in question or to blame her for “causing” the harassment (Fitzgerald and Ormerod 1993). People also are more likely to label incidents involving supervisors as sexually harassing than similar behaviour by peers (Fitzgerald and Ormerod 1993). This tendency reveals the significance of the differential power relationship between the harasser and the female employee (MacKinnon 1978). As an example, a comment that a male supervisor may believe is complimentary may still be threatening to his female employee, who may fear that it will lead to pressure for sexual favours and that there will be retaliation for a negative response, including the potential loss of her job or negative evaluations.
Even when co-workers are involved, sexual harassment can be difficult for women to control and can be very stressful for them. This situation can occur where there are many more men than women in a work group, a hostile work environment is created and the supervisor is male (Gutek 1985; Fitzgerald and Ormerod 1993).
National data on sexual harassment are not collected, and it is difficult to obtain accurate numbers on its prevalence. In the United States, it has been estimated that 50% of all women will experience some form of sexual harassment during their working lives (Fitzgerald and Ormerod 1993). These numbers are consistent with surveys conducted in Europe (Bustelo 1992), although there is variation from country to country (Kauppinen-Toropainen and Gruber 1993). The extent of sexual harassment is also difficult to determine because women may not label it accurately and because of underreporting. Women may fear that they will be blamed, humiliated and not believed, that nothing will be done and that reporting problems will result in retaliation (Fitzgerald and Ormerod 1993). Instead, they may try to live with the situation or leave their jobs and risk serious financial hardship, a disruption of their work histories and problems with references (Koss et al. 1994).
Sexual harassment reduces job satisfaction and increases turnover, so that it has costs for the employer (Gutek 1985; Fitzgerald and Ormerod 1993; Kauppinen-Toropainen and Gruber 1993). Like other workplace stressors, it also can have negative effects on health that are sometimes quite serious. When the harassment is severe, as with rape or attempted rape, women are seriously traumatized. Even where sexual harassment is less severe, women can have psychological problems: they may become fearful, guilty and ashamed, depressed, nervous and less self-confident. They may have physical symptoms such as stomach-aches, headaches or nausea. They may have behavioural problems such as sleeplessness, over- or undereating, sexual problems and difficulties in their relations with others (Swanson et al. 1997).
Both the formal American and informal European approaches to combating harassment provide illustrative lessons (Bernstein 1994). In Europe, sexual harassment is sometimes dealt with by conflict resolution approaches that bring in third parties to help eliminate the harassment (e.g., England’s “challenge technique”). In the United States, sexual harassment is a legal wrong that provides victims with redress through the courts, although success is difficult to achieve. Victims of harassment also need to be supported through counselling, where needed, and helped to understand that they are not to blame for the harassment.
Prevention is the key to combating sexual harassment. Guidelines encouraging prevention have been promulgated through the European Commission Code of Practice (Rubenstein and DeVries 1993). They include the following: clear anti-harassment policies that are effectively communicated; special training and education for managers and supervisors; a designated ombudsperson to deal with complaints; formal grievance procedures and alternatives to them; and disciplinary treatment of those who violate the policies. Bernstein (1994) has suggested that mandated self-regulation may be a viable approach.
Finally, sexual harassment needs to be openly discussed as a workplace issue of legitimate concern to women and men. Trade unions have a critical role to play in helping place this issue on the public agenda. Ultimately, an end to sexual harassment requires that men and women reach social and economic equality and full integration in all occupations and workplaces.
The nature, prevalence, predictors and possible consequences of workplace violence have begun to attract the attention of labour and management practitioners, and researchers. The reason for this is the increasing occurrence of highly visible workplace murders. Once the focus is placed on workplace violence, it becomes clear that there are several issues, including the nature (or definition), prevalence, predictors, consequences and ultimately prevention of workplace violence.
The definition and prevalence of workplace violence are integrally related.
Consistent with the relative recency with which workplace violence has attracted attention, there is no uniform definition. This is an important issue for several reasons. First, until a uniform definition exists, any estimates of prevalence remain incomparable across studies and sites. Secondly, the nature of the violence is linked to strategies for prevention and interventions. For example, focusing on all instances of shootings within the workplace includes incidents that reflect the continuation of family conflicts, as well as those that reflect work-related stressors and conflicts. While employees would no doubt be affected in both situations, the control the organization has over the former is more limited, and hence the implications for interventions are different from those situations in which workplace shootings are a direct function of workplace stressors and conflicts.
Some statistics suggest that workplace murders are the fastest growing form of murder in the United States (for example, Anfuso 1994). In some jurisdictions (for example, New York State), murder is the modal cause of death in the workplace. Because of statistics such as these, workplace violence has attracted considerable attention recently. However, early indications suggest that those acts of workplace violence with the highest visibility (for example, murder, shootings) attract the greatest research scrutiny, but also occur with the least frequency. In contrast, verbal and psychological aggression against supervisors, subordinates and co-workers are far more common, but gather less attention. Supporting the notion of a close integration between definitional and prevalence issues, this would suggest that what is being studied in most cases is aggression rather than violence in the workplace.
Most of the literature on the predictors of workplace violence has focused on developing a “profile” of the potentially violent or “disgruntled” employee (for example, Mantell and Albrecht 1994; Slora, Joy and Terris 1991). Such profiles typically identify the following salient personal characteristics: white, male, aged 20 to 35, a “loner”, a probable alcohol problem and a fascination with guns. Aside from the number of false-positive identifications this would lead to, this strategy is also based on identifying individuals predisposed to the most extreme forms of violence, and it ignores the larger group involved in most of the aggressive and less violent workplace incidents.
Going beyond “demographic” characteristics, there are suggestions that some of the personal factors implicated in violence outside of the workplace would extend to the workplace itself. Thus, inappropriate use of alcohol, general history of aggression in one’s current life or family of origin, and low self-esteem have been implicated in workplace violence.
A more recent strategy has been to identify the workplace conditions under which workplace violence is most likely to occur: identifying the physical and psychosocial conditions in the workplace. While the research on psychosocial factors is still in its infancy, it would appear as though feelings of job insecurity, perceptions that organizational policies and their implementation are unjust, harsh management and supervision styles, and electronic monitoring are associated with workplace aggression and violence (United States House of Representatives 1992; Fox and Levin 1994).
Cox and Leather (1994) look to the predictors of aggression and violence in general in their attempt to understand the physical factors that predict workplace violence. In this respect, they suggest that workplace violence may be associated with perceived crowding, and extreme heat and noise. However, these suggestions about the causes of workplace violence await empirical scrutiny.
The research to date suggests that there are primary and secondary victims of workplace violence, both of which are worthy of research attention. Bank tellers or store clerks who are held up and employees who are assaulted at work by current or former co-workers are the obvious or direct victims of violence at work. However, consistent with the literature showing that much human behaviour is learned from observing others, witnesses to workplace violence are secondary victims. Both groups might be expected to suffer negative effects, and more research is needed to focus on the way in which both aggression and violence at work affect primary and secondary victims.
Most of the literature on the prevention of workplace violence focuses at this stage on pre-employment selection, i.e., the identification of potentially violent individuals for the purpose of excluding them from employment in the first instance (for example, Mantell and Albrecht 1994). Such strategies are of dubious utility for ethical and legal reasons. From a scientific perspective, it is equally doubtful whether we could identify potentially violent employees with sufficient precision (e.g., without an unacceptably high number of false-positive identifications). Clearly, we need to focus on workplace issues and job design for a preventive approach. Following Fox and Levin’s (1994) reasoning, ensuring that organizational policies and procedures are characterized by perceived justice will probably constitute an effective prevention technique.
Research on workplace violence is in its infancy, but gaining increasing attention. This bodes well for the further understanding, prediction and control of workplace aggression and violence.
Downsizing, layoffs, re-engineering, reshaping, reduction in force (RIF), mergers, early retirement and outplacement: these increasingly familiar changes have become commonplace jargon around the world in the past two decades. As companies have fallen on hard times, workers at all organizational levels have been let go and many remaining jobs have been altered. The job loss count in a single year (1992–93) includes Eastman Kodak, 2,000; Siemens, 13,000; Daimler-Benz, 27,000; Philips, 40,000; and IBM, 65,000 (The Economist 1993). Job cuts have occurred at companies earning healthy profits as well as at firms faced with the need to cut costs. The trend of cutting jobs and changing the way remaining jobs are performed is expected to continue even after worldwide economic growth returns.
Why has losing and changing jobs become so widespread? There is no simple answer that fits every organization or situation. However, one or more of a number of factors is usually implicated, including lost market share, increasing international and domestic competition, increasing labour costs, obsolete plant and technologies and poor managerial practices. These factors have resulted in managerial decisions to slim down, re-engineer jobs and alter the psychological contract between the employer and the worker.
A work situation in which an employee could count on job security or the opportunity to hold multiple positions via career-enhancing promotions in a single firm has changed drastically. Similarly, the binding power of the traditional employer-worker psychological contract has weakened as millions of managers and non-managers have been let go. Japan was once famous for providing “lifetime” employment to individuals. Today, even in Japan, a growing number of workers, especially in large firms, are not assured of lifetime employment. The Japanese, like their counterparts across the world, are facing what can be referred to as increased job insecurity and an ambiguous picture of what the future holds.
Maslow (1954), Herzberg, Mausner and Snyderman (1959) and Super (1957) have proposed that individuals have a need for safety or security. That is, individual workers sense security when holding a permanent job or when being able to control the tasks performed on the job. Unfortunately, there has been a limited number of empirical studies that have thoroughly examined the job security needs of workers (Kuhnert and Pulmer 1991; Kuhnert, Sims and Lahey 1989).
On the other hand, with the increased attention that is being paid to downsizing, layoffs and mergers, more researchers have begun to investigate the notion of job insecurity. The nature, causes and consequences of job insecurity have been considered by Greenhalgh and Rosenblatt (1984), who define job insecurity as “perceived powerlessness to maintain desired continuity in a threatened job situation”. In Greenhalgh and Rosenblatt’s framework, job insecurity is considered a part of a person’s environment. In the stress literature, job insecurity is considered to be a stressor that introduces a threat which is interpreted and responded to by an individual. An individual’s interpretation and response could include decreased effort to perform well, feeling ill or below par, seeking employment elsewhere, increased coping to deal with the threat, or seeking more interaction with colleagues to buffer the feelings of insecurity.
Lazarus’ theory of psychological stress (Lazarus 1966; Lazarus and Folkman 1984) is centred on the concept of cognitive appraisal. Regardless of the actual severity of the danger facing a person, the occurrence of psychological stress depends upon the individual’s own evaluation of the threatening situation (here, job insecurity).
Unfortunately, like the research on job security, there is a paucity of well-designed studies of job insecurity. Furthermore, the majority of job insecurity studies incorporate unitary measurement methods. Few researchers examining stressors in general or job insecurity specifically have adopted a multiple-level approach to assessment. This is understandable because of the limitations of resources. However, the problems created by unitary assessments of job insecurity have resulted in a limited understanding of the construct. There are available to researchers four basic methods of measuring job insecurity: self-report, performance, psychophysiological and biochemical. It is still debatable whether these four types of measure assess different aspects of the consequences of job insecurity (Baum, Grunberg and Singer 1982). Each type of measure has limitations that must be recognized.
In addition to measurement problems in job insecurity research, it must be noted that research has concentrated predominantly on imminent or actual job loss. As noted by researchers (Greenhalgh and Rosenblatt 1984; Roskies and Louis-Guerin 1990), more attention should be paid to “concern about a significant deterioration in terms and conditions of employment.” The deterioration of working conditions would logically seem to play a role in a person’s attitudes and behaviours.
Brenner (1987) has discussed the relationship between a job insecurity factor, unemployment, and mortality. He proposed that uncertainty, or the threat of instability, rather than unemployment itself causes higher mortality. The threat of being unemployed or losing control of one’s job activities can be powerful enough to contribute to psychiatric problems.
In a study of 1,291 managers, Roskies and Louis-Guerin (1990) examined the perceptions of workers facing layoffs, as well as those of managerial personnel working in stable, growth-oriented firms. A minority of managers were stressed about imminent job loss. However, a substantial number of managers were more stressed about a deterioration in working conditions and long-term job security.
Roskies, Louis-Guerin and Fournier (1993) proposed that job insecurity may be a major psychological stressor. In their study of personnel in the airline industry, the researchers determined that personality disposition (positive and negative) plays a role in the impact of job insecurity on the mental health of workers.
Organizations have numerous alternatives to downsizing, layoffs and reduction in force. Displaying compassion that clearly shows that management realizes the hardships that job loss and future job ambiguity pose is an important step. Alternatives such as reduced work weeks, across-the-board salary cuts, attractive early retirement packages, retraining existing employees and voluntary layoff programmes can be implemented (Wexley and Silverman 1993).
The global marketplace has increased job demands and job skill requirements. For some people, these increases will provide career opportunities; for others, they could exacerbate feelings of job insecurity. It is difficult to pinpoint exactly how individual workers will respond. However, managers must be aware that job insecurity can result in negative consequences, and they need to acknowledge and respond to it. Possessing a better understanding of the notion of job insecurity and its potential negative impact on the performance, behaviour and attitudes of workers is a step in the right direction for managers.
It will obviously require more rigorous research to better understand the full range of consequences of job insecurity among selected workers. As additional information becomes available, managers need to be open-minded about attempting to help workers cope with job insecurity. Redefining the way work is organized and executed should become a useful alternative to traditional job design methods. Managers have a responsibility:
1. to identify and attempt to alleviate sources of job insecurity among workers
2. to attempt to encourage feelings of being in control and of empowerment in the workforce, and
3. to show compassion when workers express feelings of job insecurity.
Since job insecurity is likely to remain a perceived threat for many, but not all, workers, managers need to develop and implement strategies to address this factor. The institutional costs of ignoring job insecurity are too great for any firm to accept. Whether managers can efficiently deal with workers who feel insecure about their jobs and working conditions is fast becoming a measure of managerial competency.
The term unemployment describes the situation of individuals who desire to work but are unable to trade their skills and labour for pay. It is used to indicate either an individual’s personal experience of failure to find gainful work, or the experience of an aggregate in a community, a geographic region or a country. The collective phenomenon of unemployment is often expressed as the unemployment rate, that is, the number of people who are seeking work divided by the total number of people in the labour force, which in turn consists of both the employed and the unemployed. Individuals who desire to work for pay but have given up their efforts to find work are termed discouraged workers. These persons are not listed in official reports as members of the group of unemployed workers, for they are no longer considered to be part of the labour force.
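The rate defined above amounts to a simple ratio, which the following sketch makes explicit. The figures used are purely illustrative, not official statistics, and the function name is our own:

```python
def unemployment_rate(unemployed, employed):
    """Unemployment rate as a percentage of the labour force.

    The labour force consists of the employed plus those seeking work;
    discouraged workers, who have given up looking for work, are counted
    in neither group and therefore do not affect the rate.
    """
    labour_force = unemployed + employed
    return 100.0 * unemployed / labour_force

# Illustrative figures in millions (not official statistics):
print(unemployment_rate(unemployed=8.0, employed=92.0))  # 8.0
```

Note one consequence of the definition: when unemployed workers become discouraged and leave the labour force, the reported rate falls even though no one has found work.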
The Organization for Economic Cooperation and Development (OECD) provides statistical information on the magnitude of unemployment in 25 countries around the world (OECD 1995). These consist mostly of the economically developed countries of Europe and North America, as well as Japan, New Zealand and Australia. According to the report for the year 1994, the total unemployment rate in these countries was 8.1% (or 34.3 million individuals). In the developed countries of central and western Europe, the unemployment rate was 9.9% (11 million), in the southern European countries 13.7% (9.2 million), and in the United States 6.1% (8 million). Of the 25 countries studied, only six (Austria, Iceland, Japan, Mexico, Luxembourg and Switzerland) had an unemployment rate below 5%. The report projected only a slight overall decrease (less than one-half of 1%) in unemployment for the years 1995 and 1996. These figures suggest that millions of individuals will continue to be vulnerable to the harmful effects of unemployment in the foreseeable future (Reich 1991).
A large number of people become unemployed at various periods during their lives. Depending on the structure of the economy and on its cycles of expansion and contraction, unemployment may strike students who drop out of school; those who have graduated from a high school, trade school or college but find it difficult to enter the labour market for the first time; women seeking to return to gainful employment after raising their children; veterans of the armed services; and older persons who want to supplement their income after retirement. However, at any given time, the largest segment of the unemployed population, usually between 50 and 65%, consists of displaced workers who have lost their jobs. The problems associated with unemployment are most visible in this segment of the unemployed partly because of its size. Unemployment is also a serious problem for minorities and younger persons. Their unemployment rates are often two to three times higher than that of the general population (USDOL 1995).
The fundamental causes of unemployment are rooted in demographic, economic and technological changes. The restructuring of local and national economies usually gives rise to at least temporary periods of high unemployment rates. The trend towards the globalization of markets, coupled with accelerated technological changes, results in greater economic competition and the transfer of industries and services to new places that supply more advantageous economic conditions in terms of taxation, a cheaper labour force and more accommodating labour and environmental laws. Inevitably, these changes exacerbate the problems of unemployment in areas that are economically depressed.
Most people depend on the income from a job to provide themselves and their families with the necessities of life and to sustain their accustomed standard of living. When they lose a job, they experience a substantial reduction in their income. Mean duration of unemployment, in the United States for example, varies between 16 and 20 weeks, with a median between eight and ten weeks (USDOL 1995). If the period of unemployment that follows the job loss persists so that unemployment benefits are exhausted, the displaced worker faces a financial crisis.
That crisis plays itself out as a cascading series of stressful events that may include loss of a car through repossession, foreclosure on a house, loss of medical care, and food shortages. Indeed, an abundance of research in Europe and the United States shows that economic hardship is the most consistent outcome of unemployment (Fryer and Payne 1986), and that economic hardship mediates the adverse impact of unemployment on various other outcomes, in particular, on mental health (Kessler, Turner and House 1988).
There is a great deal of evidence that job loss and unemployment produce significant deterioration in mental health (Fryer and Payne 1986). The most common outcomes of job loss and unemployment are increases in anxiety, somatic symptoms and depression symptomatology (Dooley, Catalano and Wilson 1994; Hamilton et al. 1990; Kessler, House and Turner 1987; Warr, Jackson and Banks 1988). Furthermore, there is some evidence that unemployment increases by over twofold the risk of onset of clinical depression (Dooley, Catalano and Wilson 1994). In addition to the well-documented adverse effects of unemployment on mental health, there is research that implicates unemployment as a contributing factor to other outcomes (see Catalano 1991 for a review). These outcomes include suicide (Brenner 1976), separation and divorce (Stack 1981; Liem and Liem 1988), child neglect and abuse (Steinberg, Catalano and Dooley 1981), alcohol abuse (Dooley, Catalano and Hough 1992; Catalano et al. 1993a), violence in the workplace (Catalano et al. 1993b), criminal behaviour (Allan and Steffensmeier 1989), and highway fatalities (Leigh and Waldon 1991). Finally, there is also some evidence, based primarily on self-report, that unemployment contributes to physical illness (Kessler, House and Turner 1987).
The adverse effects of unemployment on displaced workers are not limited to the period during which they have no jobs. In most instances, when workers become re-employed, their new jobs are significantly worse than the jobs they lost. Even after four years in their new positions, their earnings are substantially lower than those of similar workers who were not laid off (Ruhm 1991).
Because the fundamental causes of job loss and unemployment are rooted in societal and economic processes, remedies for their adverse social effects must be sought in comprehensive economic and social policies (Blinder 1987). At the same time, various community-based programmes can be undertaken to reduce the negative social and psychological impact of unemployment at the local level. There is overwhelming evidence that re-employment reduces distress and depression symptoms and restores psychosocial functioning to pre-unemployment levels (Kessler, Turner and House 1989; Vinokur, Caplan and Williams 1987). Therefore, programmes for displaced workers or others who wish to become employed should be aimed primarily at promoting and facilitating their re-employment or new entry into the labour force. A variety of such programmes have been tried successfully. Among these are special community-based intervention programmes for creating new ventures that in turn generate job opportunities (e.g., Last et al. 1995), and others that focus on retraining (e.g., Wolf et al. 1995).
Of the various programmes that attempt to promote re-employment, the most common are job search programmes organized as job clubs that attempt to intensify job search efforts (Azrin and Beasalel 1982), or workshops that focus more broadly on enhancing job search skills and facilitating transition into re-employment in high-quality jobs (e.g., Caplan et al. 1989). Cost/benefit analyses have demonstrated that these job search programmes are cost effective (Meyer 1995; Vinokur et al. 1991). Furthermore, there is also evidence that they could prevent deterioration in mental health and possibly the onset of clinical depression (Price, van Ryn and Vinokur 1992).
Similarly, in the case of organizational downsizing, industries can reduce the scope of unemployment by devising ways to involve workers in the decision-making process regarding the management of the downsizing programme (Kozlowski et al. 1993; London 1995; Price 1990). Workers may choose to pool their resources and buy out the industry, thus avoiding layoffs; to reduce working hours to spread and even out the reduction in force; to agree to a reduction in wages to minimize layoffs; to retrain and/or relocate to take new jobs; or to participate in outplacement programmes. Employers can facilitate the process by timely implementation of a strategic plan that offers the above-mentioned programmes and services to workers at risk of being laid off. As has been indicated already, unemployment leads to pernicious outcomes at both the personal and societal level. A combination of comprehensive government policies, flexible downsizing strategies by business and industry, and community-based programmes can help to mitigate the adverse consequences of a problem that will continue to affect the lives of millions of people for years to come.
One of the more remarkable social transformations of this century was the emergence of a powerful Japanese economy from the debris of the Second World War. Fundamental to this climb to global competitiveness were a commitment to quality and a determination to prove false the then-common belief that Japanese goods were shoddy and worthless. Guided by the innovative teachings of Deming (1993), Juran (1988) and others, Japanese managers and engineers adopted practices that have ultimately evolved into a comprehensive management system rooted in the basic concept of quality. Fundamentally, this system represents a shift in thinking. The traditional view was that quality had to be balanced against the cost of attaining it. The view that Deming and Juran urged was that higher quality led to lower total cost and that a systems approach to improving work processes would help in attaining both of these objectives. Japanese managers adopted this management philosophy, engineers learned and practised statistical quality control, workers were trained and involved in process improvement, and the outcome was dramatic (Ishikawa 1985; Imai 1986).
By 1980, alarmed at the erosion of their markets and seeking to broaden their reach in the global economy, European and American managers began to search for ways to regain a competitive position. In the ensuing 15 years, more and more companies came to understand the principles underlying quality management and to apply them, initially in industrial production and later in the service sector as well. While there are a variety of names for this management system, the most commonly used is total quality management or TQM; an exception is the health care sector, which more frequently uses the term continuous quality improvement, or CQI. Recently, the term business process reengineering (BPR) has also come into use, but this tends to mean an emphasis on specific techniques for process improvement rather than on the adoption of a comprehensive management system or philosophy.
TQM is available in many “flavours,” but it is important to understand it as a system that includes both a management philosophy and a powerful set of tools for improving the efficiency of work processes. Some of the common elements of TQM include the following (Feigenbaum 1991; Mann 1989; Senge 1991):
· primary emphasis on quality
· focus on meeting customer expectations (“customer satisfaction”)
· commitment to employee participation and involvement (“empowerment”)
· viewing the organization as a system (“optimization”)
· monitoring statistical outputs of processes (“management by fact”)
· leadership (“vision”)
· strong commitment to training (“becoming a learning organization”).
Typically, organizations successfully adopting TQM find they must make changes on three fronts.
One is transformation. This involves such actions as defining and communicating a vision of the organization’s future, changing the management culture from top-down oversight to one of employee involvement, fostering collaboration instead of competition and refocusing the purpose of all work on meeting customer requirements. Seeing the organization as a system of interrelated processes is at the core of TQM, and is an essential means of securing a totally integrated effort towards improving performance at all levels. All employees must know the vision and the aim of the organization (the system) and understand where their work fits in it, or no amount of training in applying TQM process improvement tools can do much good. However, lack of genuine change of organizational culture, particularly among lower echelons of managers, is frequently the downfall of many nascent TQM efforts; Heilpern (1989) observes, “We have come to the conclusion that the major barriers to quality superiority are not technical, they are behavioural.” Unlike earlier, flawed “quality circle” programmes, in which improvement was expected to “convect” upward, TQM demands top management leadership and the firm expectation that middle management will facilitate employee participation (Hill 1991).
A second basis for successful TQM is strategic planning. The achievement of an organization’s vision and goals is tied to the development and deployment of a strategic quality plan. One corporation defined this as “a customer-driven plan for the application of quality principles to key business objectives and the continuous improvement of work processes” (Yarborough 1994). It is senior management’s responsibility - indeed, its obligation to workers, stockholders and beneficiaries alike - to link its quality philosophy to sound and feasible goals that can reasonably be attained. Deming (1993) called this “constancy of purpose” and saw its absence as a source of insecurity for the workforce of the organization. The fundamental intent of strategic planning is to align the activities of all of the people throughout the company or organization so that it can achieve its core goals and can react with agility to a changing environment. It is evident that it both requires and reinforces the need for widespread participation of supervisors and workers at all levels in shaping the goal-directed work of the company (Shiba, Graham and Walden 1994).
Only when these two changes are adequately carried out can one hope for success in the third: the implementation of continuous quality improvement. Quality outcomes, and with them customer satisfaction and improved competitive position, ultimately rest on widespread deployment of process improvement skills. Often, TQM programmes accomplish this through increased investments in training and through assignment of workers (frequently volunteers) to teams charged with addressing a problem. A basic concept of TQM is that the person most likely to know how a job can be done better is the person who is doing it at a given moment. Empowering these workers to make useful changes in their work processes is a part of the cultural transformation underlying TQM; equipping them with knowledge, skills and tools to do so is part of continuous quality improvement.
The collection of statistical data is a typical and basic step taken by workers and teams to understand how to improve work processes. Deming and others adapted their techniques from the seminal work of Shewhart in the 1920s (Schmidt and Finnigan 1992). Among the most useful TQM tools are: (a) the Pareto Chart, a graphical device for identifying the more frequently occurring problems, and hence the ones to be addressed first; (b) the statistical control chart, an analytic tool for ascertaining the degree of variability in the unimproved process; and (c) flow charting, a means to document exactly how the process is carried out at present. Possibly the most ubiquitous and important tool is the Ishikawa Diagram (or “fishbone” diagram), whose invention is credited to Kaoru Ishikawa (1985). This instrument is a simple but effective way by which team members can collaborate on identifying the root causes of the process problem under study, and thus point the path to process improvement.
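As a simple illustration of the Pareto analysis described above, the ranking of problem categories by frequency can be sketched as follows. The defect categories and counts here are hypothetical, chosen only to show the "vital few" logic by which a team decides which problems to address first:

```python
from collections import Counter

def pareto_rank(defect_log):
    """Rank defect categories by frequency and report the cumulative
    share each category contributes, so that the most frequently
    occurring problems (the 'vital few') surface at the top."""
    counts = Counter(defect_log)
    total = sum(counts.values())
    ranked = []
    cumulative = 0
    for category, count in counts.most_common():
        cumulative += count
        ranked.append((category, count, round(100 * cumulative / total, 1)))
    return ranked

# Hypothetical defect log from a casting process
log = ["porosity"] * 40 + ["misrun"] * 25 + ["shrinkage"] * 20 + \
      ["cold shut"] * 10 + ["inclusion"] * 5
for category, count, cum_pct in pareto_rank(log):
    print(f"{category:10s} {count:3d}  {cum_pct:5.1f}%")
```

In this hypothetical log the first two categories account for 65% of all defects, which is exactly the kind of concentration a team would use to set improvement priorities.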
TQM, effectively implemented, may be important to workers and worker health in many ways. For example, the adoption of TQM can have an indirect influence. In a very basic sense, an organization that makes a quality transformation has arguably improved its chances of economic survival and success, and hence those of its employees. Moreover, it is likely to be one where respect for people is a basic tenet. Indeed, TQM experts often speak of “shared values”, those things that must be exemplified in the behaviour of both management and workers. These are often publicized throughout the organization as formal values statements or aspiration statements, and typically include such emotive language as “trust”, “respecting each other”, “open communications”, and “valuing our diversity” (Howard 1990).
Thus, it is tempting to suppose that quality workplaces will be “worker-friendly” - where worker-improved processes become less hazardous and where the climate is less stressful. The logic of quality is to build quality into a product or service, not to detect failures after the fact. It can be summed up in a word - prevention (Widfeldt and Widfeldt 1992). Such a logic is clearly compatible with the public health logic of emphasizing prevention in occupational health. As Williams (1993) points out in a hypothetical example, “If the quality and design of castings in the foundry industry were improved there would be reduced exposure ... to vibration as less finishing of castings would be needed.” Some anecdotal support for this supposition comes from satisfied employers who cite trend data on job health measures, climate surveys that show better employee satisfaction, and more numerous safety and health awards in facilities using TQM. Williams further presents two case studies in UK settings that exemplify such employer reports (Williams 1993).
Unfortunately, virtually no published studies offer firm evidence on the matter. What is lacking is a research base of controlled studies that document health outcomes, consider the possibility of detrimental as well as positive health influences, and link all of this causally to measurable factors of business philosophy and TQM practice. Given the significant prevalence of TQM enterprises in the global economy of the 1990s, this is a research agenda with genuine potential to define whether TQM is in fact a supportive tool in the prevention armamentarium of occupational safety and health.
We are on somewhat firmer ground to suggest that TQM can have a direct influence on worker health when it explicitly focuses quality improvement efforts on safety and health. Obviously, like all other work in an enterprise, occupational and environmental health activity is made up of interrelated processes, and the tools of process improvement are readily applied to them. One of the criteria against which candidates are examined for the Baldrige Award, the most important competitive honour granted to US organizations, is the competitor’s improvements in occupational health and safety. Yarborough has described how the occupational and environmental health (OEH) employees of a major corporation were instructed by senior management to adopt TQM with the rest of the company and how OEH was integrated into the company’s strategic quality plan (Yarborough 1994). The chief executive of a US utility that was the first non-Japanese company ever to win Japan’s coveted Deming Prize notes that safety was accorded a high priority in the TQM effort: “Of all the company’s major quality indicators, the only one that addresses the internal customer is employee safety.” By defining safety as a process, subjecting it to continuous improvement, and tracking lost-time injuries per 100 employees as a quality indicator, the utility reduced its injury rate by half, reaching the lowest point in the history of the company (Hudiberg 1991).
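Tracking lost-time injuries as a quality indicator can be illustrated with a statistical control chart of the kind mentioned earlier. The sketch below computes c-chart limits for monthly injury counts; the figures are hypothetical, and the Poisson-based c-chart is only one standard way such a safety process might be monitored, not the utility's actual method:

```python
import math

def c_chart_limits(monthly_injury_counts):
    """Compute the centre line and 3-sigma control limits for a c-chart,
    the standard control chart for counts of events (here, lost-time
    injuries per month)."""
    c_bar = sum(monthly_injury_counts) / len(monthly_injury_counts)
    sigma = math.sqrt(c_bar)           # Poisson assumption: variance = mean
    ucl = c_bar + 3 * sigma
    lcl = max(0.0, c_bar - 3 * sigma)  # a count cannot fall below zero
    return c_bar, lcl, ucl

# Hypothetical 12 months of lost-time injury counts
counts = [4, 6, 3, 5, 7, 4, 5, 3, 6, 4, 13, 5]
c_bar, lcl, ucl = c_chart_limits(counts)
out_of_control = [i for i, c in enumerate(counts) if c > ucl or c < lcl]
```

A month whose count falls above the upper control limit signals special-cause variation worth investigating; variation within the limits reflects the process as currently designed, which only a change to the process itself can improve.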
In summary, TQM is a comprehensive management system grounded in a management philosophy that emphasizes the human dimensions of work. It is supported by a powerful set of technologies that use data derived from work processes to document, analyse and continuously improve these processes.
Selye (1974) suggested that having to live with other people is one of the most stressful aspects of life. Good relations between members of a work group are considered a central factor in individual and organizational health (Cooper and Payne 1988), particularly in terms of the boss–subordinate relationship. Poor relationships at work are defined as having “low trust, low levels of supportiveness and low interest in problem solving within the organization” (Cooper and Payne 1988). Mistrust is positively correlated with high role ambiguity, which leads to inadequate interpersonal communications between individuals and psychological strain in the form of low job satisfaction, decreased well-being and a feeling of being threatened by one’s superior and colleagues (Kahn et al. 1964; French and Caplan 1973).
Supportive social relationships at work are less likely to create the interpersonal pressures associated with rivalry, office politics and unconstructive competition (Cooper and Payne 1991). McLean (1979) suggests that social support in the form of group cohesion, interpersonal trust and liking for a superior is associated with decreased levels of perceived job stress and better health. Inconsiderate behaviour on the part of a supervisor appears to contribute significantly to feelings of job pressure (McLean 1979). Close supervision and rigid performance monitoring also have stressful consequences; a great deal of research indicates that a managerial style characterized by a lack of effective consultation and communication, unjustified restrictions on employee behaviour and a lack of control over one’s job is associated with negative psychological moods and behavioural responses (for example, escapist drinking and heavy smoking) (Caplan et al. 1975), increased cardiovascular risk (Karasek 1979) and other stress-related manifestations. On the other hand, offering broader opportunities to employees to participate in decision making at work can result in improved performance, lower staff turnover and improved levels of mental and physical well-being. A participatory style of management should also extend to worker involvement in the improvement of safety in the workplace; this could help to overcome apathy among blue-collar workers, which is acknowledged as a significant factor in the causation of accidents (Robens 1972; Sutherland and Cooper 1986).
Early work on the relationship between managerial style and stress was carried out by Lewin and his colleagues (for example, in Lewin, Lippitt and White 1939), who documented the stressful and unproductive effects of authoritarian management styles. More recently, Karasek’s (1979) work highlights the importance of managers’ providing workers with greater control at work or a more participative management style. In a six-year prospective study he demonstrated that job control (i.e., the freedom to use one’s intellectual discretion) and work schedule freedom were significant predictors of risk of coronary heart disease. Restriction of opportunity for participation and autonomy results in increased depression, exhaustion, illness rates and pill consumption. Feelings of being unable to make changes concerning a job and lack of consultation are commonly reported stressors among blue-collar workers in the steel industry (Kelly and Cooper 1981), oil and gas workers on rigs and platforms in the North Sea (Sutherland and Cooper 1986) and many other blue-collar workers (Cooper and Smith 1985). On the other hand, as Gowler and Legge (1975) indicate, a participatory management style can create its own potentially stressful situations, for example, a mismatch of formal and actual power, resentment of the erosion of formal power, conflicting pressures both to be participative and to meet high production standards, and subordinates’ refusal to participate.
Although there has been a substantial research focus on the differences between authoritarian and participatory management styles in their effects on employee performance and health, there have also been other, idiosyncratic approaches to managerial style (Jennings, Cox and Cooper 1994). For example, Levinson (1978) has focused on the impact of the “abrasive” manager. Abrasive managers are usually achievement-oriented, hard-driving and intelligent (similar to the type A personality), but function less well at the emotional level. As Quick and Quick (1984) point out, the need for perfection, the preoccupation with self and the condescending, critical style of the abrasive manager induce feelings of inadequacy among their subordinates. As Levinson suggests, the abrasive personality is difficult and stressful to deal with as a peer, but as a superior the consequences are potentially very damaging to interpersonal relationships and highly stressful for subordinates in the organization.
In addition, there are theories and research which suggest that the effect on employee health and safety of managerial style and personality can only be understood in the context of the nature of the task and the power of the manager or leader. For example, Fiedler’s (1967) contingency theory suggests that there are eight main group situations based upon combinations of dichotomies: (a) the warmth of the relations between the leader and follower; (b) the level of structure imposed by the task; and (c) the power of the leader. The eight combinations could be arranged in a continuum with, at one end (octant one) a leader who has good relations with members, facing a highly structured task and possessing strong power; and, at the other end (octant eight), a leader who has poor relations with members, facing a loosely structured task and having low power. In terms of stress, it could be argued that the octants formed a continuum from low stress to high stress. Fiedler also examined two types of leader: the leader who would value negatively most of the characteristics of the member he liked least (the low LPC leader) and the leader who would see many positive qualities even in the members whom he disliked (the high LPC leader). Fiedler made specific predictions about the performance of the leader. He suggested that the low LPC leader (who had difficulty in seeing merits in subordinates he disliked) would be most effective in octants one and eight, where there would be very low and very high levels of stress, respectively. On the other hand, a high LPC leader (who is able to see merits even in those he disliked) would be more effective in the middle octants, where moderate stress levels could be expected. In general, subsequent research (for example, Strube and Garcia 1981) has supported Fiedler’s ideas.
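The combinatorial logic of Fiedler's eight octants can be sketched by enumerating the three dichotomies. Only the endpoints are fixed by the description above (octant one entirely favourable, octant eight entirely unfavourable); the ordering of the middle octants here is a simplifying assumption for illustration:

```python
from itertools import product

# The three dichotomies of Fiedler's contingency theory, each listed
# with its favourable pole first: leader-member relations, task
# structure and leader position power.
DIMENSIONS = [("good relations", "poor relations"),
              ("structured task", "unstructured task"),
              ("strong power", "weak power")]

# 2 x 2 x 2 = 8 combinations; the first tuple is the all-favourable
# situation (octant one), the last the all-unfavourable one (octant eight).
octants = list(product(*DIMENSIONS))

for number, octant in enumerate(octants, start=1):
    print(f"Octant {number}: " + ", ".join(octant))
```

Arranging the octants along this favourability continuum is what lets the theory predict that low LPC leaders perform best at the extremes and high LPC leaders in the moderate middle range.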
Additional leadership theories suggest that task-oriented managers or leaders create stress. Seltzer, Numerof and Bass (1989) found that intellectually stimulating leaders increased perceived stress and “burnout” among their subordinates. Misumi (1985) found that production-oriented leaders generated physiological symptoms of stress. Bass (1992) finds that in laboratory experiments, production-oriented leadership causes higher levels of anxiety and hostility. On the other hand, transformational and charismatic leadership theories (Burns 1978) focus upon the effect which such leaders have upon their subordinates, who tend to be more self-assured and to perceive more meaning in their work. It has been found that these types of leader or manager reduce the stress levels of their subordinates.
On balance, therefore, managers who tend to demonstrate “considerate” behaviour, to have a participative management style, to be less production- or task-oriented and to provide subordinates with control over their jobs are likely to reduce the incidence of ill health and accidents at work.
Most of the articles in this chapter deal with aspects of the work environment that are proximal to the individual employee. The focus of this article, however, is to examine the impact of more distal, macrolevel characteristics of organizations as a whole that may affect employees’ health and well-being. That is, are there ways in which organizations structure their internal environments that promote health among the employees of that organization or, conversely, place employees at greater risk of experiencing stress? Most theoretical models of occupational or job stress incorporate organizational structural variables such as organizational size, lack of participation in decision making, and formalization (Beehr and Newman 1978; Kahn and Byosiere 1992).
Organizational structure refers to the formal distribution of work roles and functions within an organization, coordinating the various functions or subsystems within the organization to attain the organization’s goals efficiently (Porras and Robertson 1992). As such, structure represents a coordinated set of subsystems designed to facilitate the accomplishment of the organization’s goals and mission, and it defines the division of labour, the authority relationships, formal lines of communication, the roles of each organizational subsystem and the interrelationships among these subsystems. Therefore, organizational structure can be viewed as a system of formal mechanisms to enhance the understandability of events, predictability of events and control over events within the organization, which Sutton and Kahn (1987) proposed as the three work-relevant antidotes against the stress-strain effect in organizational life.
One of the earliest organizational characteristics examined as a potential risk factor was organizational size. Contrary to the literature on risk of exposure to hazardous agents in the work environment, which suggests that larger organizations or plants are safer, being less hazardous and better equipped to handle potential hazards (Emmett 1991), larger organizations originally were hypothesized to put employees at greater risk of occupational stress. It was proposed that larger organizations tend to adopt a bureaucratic organizational structure to coordinate the increased complexity. This bureaucratic structure would be characterized by a division of labour based on functional specialization, a well-defined hierarchy of authority, a system of rules covering the rights and duties of job incumbents, impersonal treatment of workers and a system of procedures for dealing with work situations (Bennis 1969). On the surface, it would appear that many of these dimensions of bureaucracy would actually improve or maintain the predictability and understandability of events in the work environment and thus serve to reduce stress within the work environment. However, it also appears that these dimensions can reduce employees’ control over events in the work environment through a rigid hierarchy of authority.
Given these characteristics of bureaucratic structure, it is not surprising that organizational size, per se, has received no consistent support as a macro-organization risk factor (Kahn and Byosiere 1992). Payne and Pugh’s (1976) review, however, provides some evidence that organizational size indirectly increases the risk of stress. They report that larger organizations suffered a reduction in the amount of communication, an increase in the amount of job and task specifications and a decrease in coordination. These effects could lead to less understanding and predictability of events in the work environment as well as a decrease in control over work events, thus increasing experienced stress (Tetrick and LaRocco 1987).
These findings on organizational size have led to the supposition that the two aspects of organizational structure that seem to pose the most risk for employees are formalization and centralization. Formalization refers to the written procedures and rules governing employees’ activities, and centralization refers to the extent to which the decision-making power in the organization is narrowly distributed to higher levels in the organization. Pines (1982) pointed out that it is not formalization within a bureaucracy that results in experienced stress or burnout but the unnecessary red tape, paperwork and communication problems that can result from formalization. Rules and regulations can be vague, creating ambiguity or contradiction and resulting in conflict or a lack of understanding concerning the appropriate actions to be taken in specific situations. If the rules and regulations are too detailed, employees may feel frustrated in their ability to achieve their goals, especially in customer- or client-oriented organizations. Inadequate communication can result in employees feeling isolated and alienated, based on the lack of predictability and understanding of events in the work environment.
While these aspects of the work environment appear to be accepted as potential risk factors, the empirical literature on formalization and centralization is far from consistent. The lack of consistent evidence may stem from at least two sources. First, in many of the studies, there is an assumption of a single organizational structure having a consistent level of formalization and centralization throughout the entire organization. Hall (1969) concluded that organizations can be meaningfully studied as totalities; however, he demonstrated that the degree of formalization as well as decision-making authority can differ within organizational units. Therefore, if one is looking at an individual-level phenomenon such as occupational stress, it may be more meaningful to look at the structure of smaller organizational units than at that of the whole organization. Secondly, there is some evidence suggesting that there are individual differences in response to structural variables. For example, Marino and White (1985) found that formalization was positively related to job stress among individuals with an internal locus of control and negatively related to stress among individuals who generally believe that they have little control over their environments. Lack of participation, on the other hand, was not moderated by locus of control and resulted in increased levels of job stress. There also appear to be some cultural differences affecting individual responses to structural variables, which would be important for multinational organizations having to operate across national boundaries (Peterson et al. 1995). These cultural differences also may explain the difficulty in adopting organizational structures and procedures from other nations.
Despite the rather limited empirical evidence implicating structural variables as psychosocial risk factors, it has been recommended that organizations should change their structures to be flatter with fewer levels of hierarchy or number of communication channels, more decentralized with more decision-making authority at lower levels in the organization and more integrated with less job specialization (Newman and Beehr 1979). These recommendations are consistent with organizational theorists who have suggested that traditional bureaucratic structure may not be the most efficient or healthiest form of organizational structure (Bennis 1969). This may be especially true in light of technological advances in production and communication that characterize the postindustrial workplace (Hirschhorn 1991).
The past two decades have seen considerable interest in the redesign of organizations to deal with external environmental threats resulting from increased globalization and international competition in North America and Western Europe (Whitaker 1991). Staw, Sandelands and Dutton (1988) proposed that organizations react to environmental threats by restricting information and constricting control. This can be expected to reduce the predictability, understandability and control of work events, thereby increasing the stress experienced by the employees of the organization. Therefore, structural changes that prevent these threat-rigidity effects would appear to be beneficial to both the organization’s and employees’ health and well-being.
The use of a matrix organizational structure is one approach for organizations to structure their internal environments in response to greater environmental instability. Baber (1983) describes the ideal type of matrix organization as one in which there are two or more intersecting lines of authority, organizational goals are achieved through the use of task-oriented work groups which are cross-functional and temporary, and functional departments continue to exist as mechanisms for routine personnel functions and professional development. Therefore, the matrix organization provides the organization with the needed flexibility to be responsive to environmental instability if the personnel have sufficient flexibility gained from the diversification of their skills and an ability to learn quickly.
While empirical research has yet to establish the effects of this organizational structure, several authors have suggested that the matrix organization may increase the stress experienced by employees. For example, Quick and Quick (1984) point out that the multiple lines of authority (task and functional supervisors) found in matrix organizations increase the potential for role conflict. Also, Hirschhorn (1991) suggests that with postindustrial work organizations, workers frequently face new challenges requiring them to take a learning role. This results in employees having to acknowledge their own temporary incompetencies and loss of control which can lead to increased stress. Therefore, it appears that new organizational structures such as the matrix organization also have potential risk factors associated with them.
Attempts to change or redesign organizations, regardless of the particular structure that an organization chooses to adopt, can have stress-inducing properties by disrupting security and stability, generating uncertainty for people’s position, role and status, and exposing conflict which must be confronted and resolved (Golembiewski 1982). These stress-inducing properties can be offset, however, by the stress-reducing properties of organizational development efforts that incorporate greater empowerment and decision making across all levels in the organization, enhanced openness in communication, collaboration and training in team building and conflict resolution (Golembiewski 1982; Porras and Robertson 1992).
While the literature suggests that there are occupational risk factors associated with various organizational structures, the impact of these macrolevel aspects of organizations appears to be indirect. Organizational structure can provide a framework to enhance the predictability, understandability and control of events in the work environment; however, the effect of structure on employees’ health and well-being is mediated by more proximal work-environment characteristics such as role characteristics and interpersonal relations. Structuring organizations for healthy employees as well as healthy organizations requires organizational flexibility, worker flexibility and attention to the sociotechnical systems that coordinate the technological demands and the social structure within the organization.
The organizational context in which people work is characterized by numerous features (e.g., leadership, structure, rewards, communication) subsumed under the general concepts of organizational climate and culture. Climate refers to perceptions of organizational practices reported by people who work there (Rousseau 1988). Studies of climate include many of the most central concepts in organizational research. Common features of climate include communication (as describable, say, by openness), conflict (constructive or dysfunctional), leadership (as it involves support or focus) and reward emphasis (i.e., whether an organization is characterized by positive versus negative feedback, or reward- or punishment-orientation). When studied together, we observe that organizational features are highly interrelated (e.g., leadership and rewards). Climate characterizes practices at several levels in organizations (e.g., work unit climate and organizational climate). Studies of climate vary in the activities they focus upon, for example, climates for safety or climates for service. Climate is essentially a description of the work setting by those directly involved with it.
The relationship of climate to employee well-being (e.g., satisfaction, job stress and strain) has been widely studied. Since climate measures subsume the major organizational characteristics workers experience, virtually any study of employee perceptions of their work setting can be thought of as a climate study. Studies link climate features (particularly leadership, communication openness, participative management and conflict resolution) with employee satisfaction and (inversely) stress levels (Schneider 1985). Stressful organizational climates are characterized by limited participation in decisions, use of punishment and negative feedback (rather than rewards and positive feedback), conflict avoidance or confrontation (rather than problem solving), and nonsupportive group and leader relations. Socially supportive climates benefit employee mental health, with lower rates of anxiety and depression in supportive settings (Repetti 1987). When collective climates exist (where members who interact with each other share common perceptions of the organization) research observes that shared perceptions of undesirable organizational features are linked with low morale and instances of psychogenic illness (Colligan, Pennebaker and Murphy 1982). When climate research adopts a specific focus, as in the study of climate for safety in an organization, evidence is provided that lack of openness in communication regarding safety issues, few rewards for reporting occupational hazards, and other negative climate features increase the incidence of work-related accidents and injury (Zohar 1980).
Since climates exist at many levels in organizations and can encompass a variety of practices, assessment of employee risk factors needs to systematically span the relationships (whether in the work unit, the department or the entire organization) and activities (e.g., safety, communication or rewards) in which employees are involved. Climate-based risk factors can differ from one part of the organization to another.
Culture constitutes the values, norms and ways of behaving which organization members share. Researchers identify five basic elements of culture in organizations: fundamental assumptions (unconscious beliefs that shape members’ interpretations, e.g., views regarding time, environmental hostility or stability), values (preferences for certain outcomes over others, e.g., service or profit), behavioural norms (beliefs regarding appropriate and inappropriate behaviours, e.g., dress codes and teamwork), patterns of behaviours (observable recurrent practices, e.g., structured performance feedback and upward referral of decisions) and artefacts (symbols and objects used to express cultural messages, e.g., mission statements and logos). Cultural elements which are more subjective (i.e., assumptions, values and norms) reflect the way members think about and interpret their work setting. These subjective features shape the meaning that patterns of behaviours and artefacts take on within the organization. Culture, like climate, can exist at many levels, including:
1. a dominant organizational culture
2. subcultures associated with specific units, and
3. countercultures, found in work units that are poorly integrated with the larger organization.
Cultures can be strong (widely shared by members), weak (not widely shared), or in transition (characterized by gradual replacement of one culture by another).
In contrast with climate, culture is less frequently studied as a contributing factor to employee well-being or occupational risk. The absence of such research is due both to the relatively recent emergence of culture as a concept in organizational studies and to ideological debates regarding the nature of culture, its measurement (quantitative versus qualitative), and the appropriateness of the concept for cross-sectional study (Rousseau 1990). According to quantitative culture research focusing on behavioural norms and values, team-oriented norms are associated with higher member satisfaction and lower strain than are control- or bureaucratically oriented norms (Rousseau 1989). Furthermore, the extent to which the worker’s values are consistent with those of the organization affects stress and satisfaction (O’Reilly and Chatman 1991). Weak cultures and cultures fragmented by role conflict and member disagreement are found to provoke stress reactions and crises in professional identities (Meyerson 1990). The fragmentation or breakdown of organizational cultures due to economic or political upheavals affects the well-being of members psychologically and physically, particularly in the wake of downsizings, plant closings and other effects of concurrent organizational restructurings (Hirsch 1987). The appropriateness of particular cultural forms (e.g., hierarchic or militaristic) for modern society has been challenged by several culture studies (e.g., Hirschhorn 1984; Rousseau 1989) concerned with the stress and health-related outcomes of operators (e.g., nuclear power technicians and air traffic controllers) and subsequent risks for the general public.
Assessing risk factors in the light of information about organizational culture requires first attention to the extent to which organization members share or differ in basic beliefs, values and norms. Differences in function, location and education create subcultures within organizations and mean that culture-based risk factors can vary within the same organization. Since cultures tend to be stable and resistant to change, organizational history can aid assessment of risk factors both in terms of stable and ongoing cultural features as well as recent changes that can create stressors associated with turbulence (Hirsch 1987).
Climate and culture overlap to a certain extent, with perceptions of culture’s patterns of behaviour being a large part of what climate research addresses. However, organization members may describe organizational features (climate) in the same way but interpret them differently due to cultural and subcultural influences (Rosen, Greenhalgh and Anderson 1981). For example, structured leadership and limited participation in decision making may be viewed as negative and controlling from one perspective or as positive and legitimate from another. Social influence reflecting the organization’s culture shapes the interpretation members make of organizational features and activities. Thus, it would seem appropriate to assess both climate and culture simultaneously in investigating the impact of the organization on the well-being of members.
There are many forms of compensation used in business and government organizations throughout the world to pay workers for their physical and mental contribution. Compensation provides money for human effort and is necessary for individual and family existence in most societies. Trading work for money is a long-established practice.
The health-stressor aspect of compensation is most closely linked with compensation plans that offer incentives for extra or sustained human effort. Job stress can certainly exist in any work setting where compensation is not based on incentives. However, physical and mental performance levels that are well above normal and that could lead to physical injury or injurious mental stress are more likely to be found in environments with certain kinds of incentive compensation.
Performance measurements in one form or another are used by most organizations, and are essential for incentive programmes. Performance measures (standards) can be established for output, quality, throughput time, or any other productivity measure. Lord Kelvin in 1883 had this to say about measurements: “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.”
Performance measures should be carefully linked to the fundamental goals of the organization. Inappropriate performance measurements have often had little or no effect on goal attainment. Common criticisms of performance measures include unclear purpose, vagueness, lack of connection (or even opposition) to the business strategy, unfairness or inconsistency, and their liability to be used chiefly for “punishing” people. But measurements can serve as indispensable benchmarks: remember the saying, “If you don’t know where you are, you can’t get to where you want to be”. The bottom line is that workers at all levels in an organization demonstrate more of the behaviours for which they are measured and rewarded. What gets measured and rewarded gets done.
Performance measures must be fair and consistent to minimize stress among the workforce. There are several methods utilised to establish performance measures ranging from judgement estimation (guessing) to engineered work measurement techniques. Under the work measurement approach to setting performance measures, 100% performance is defined as a “fair day’s work pace”. This is the work effort and skill at which an average well-trained employee can work without undue fatigue while producing an acceptable quality of work over the course of a work shift. A 100% performance is not maximum performance; it is the normal or average effort and skill for a group of workers. By way of comparison, the 70% benchmark is generally regarded as the minimum tolerable level of performance, while the 120% benchmark is the incentive effort and skill that the average worker should be able to attain when provided with a bonus of at least 20% above the base rate of pay. While a number of incentive plans have been established using the 120% benchmark, this value varies among plans. The general design criteria recommended for wage incentive plans provide workers the opportunity to earn approximately 20 to 35% above base rate if they are normally skilled and execute high effort continuously.
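The relationship among these benchmarks can be sketched as a simple pay calculation. The function below is an illustrative one-for-one linear incentive plan, not a formula from the text; the parameter names and the bonus schedule are assumptions, chosen so that the 120% benchmark yields the 20% bonus described above.

```python
def incentive_pay(base_rate, performance, threshold=100.0):
    """Illustrative linear wage-incentive plan (a sketch, not a
    standard formula): pay rises one-for-one with each performance
    point above the 100% "fair day's work" benchmark, and the base
    rate is guaranteed at or below that benchmark."""
    if performance <= threshold:
        return base_rate  # guaranteed base rate
    return base_rate * (1.0 + (performance - threshold) / 100.0)

# A worker at the 120% incentive benchmark earns 20% above base rate:
print(incentive_pay(15.00, 120))  # 18.0
print(incentive_pay(15.00, 90))   # 15.0 (base rate guaranteed)
```

Under such a plan, a normally skilled worker sustaining high effort in the 120 to 135% range would earn roughly 20 to 35% above base rate, matching the design criteria mentioned above.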
Despite the inherent appeal of a “fair day’s work for a fair day’s pay”, some possible stress problems exist with a work measurement approach to setting performance measures. Performance measures are fixed in reference to the normal or average performance of a given work group (i.e., work standards based on group as opposed to individual performance). Thus, by definition, a large segment of those working at a task will fall below average (i.e., the 100% performance benchmark), generating a demand–resource imbalance that can exceed physical or mental stress limits. Workers who have difficulty meeting performance measures are likely to experience stress through work overload, negative supervisor feedback, and threat of job loss if they consistently perform below the 100% performance benchmark.
In one form or another, incentives have been used for many years. For example, in the New Testament (II Timothy 2:6) Saint Paul declares, “It is the hard-working farmer who ought to have the first share of the crops”. Today, most organizations are striving to improve productivity and quality in order to maintain or improve their position in the business world. Most often workers will not give extra or sustained effort without some form of incentive. Properly designed and implemented financial incentive programmes can help. Before any incentive programme is implemented, some measure of performance must be established. All incentive programmes can be categorized as follows: direct financial, indirect financial, and intangible (non-financial).
Direct financial programmes may be applied to individuals or groups of workers. For individuals, each employee’s incentive is governed by his or her performance relative to a standard for a given time period. Group plans are applicable to two or more individuals working as a team on tasks that are usually interdependent. Each employee’s group incentive is usually based on his or her base rate and the group performance during the incentive period.
The motivation to sustain higher output levels is usually greater for individual incentives because of the opportunity for the high-performing worker to earn a greater incentive. However, as organizations move toward participative management and empowered work groups and teams, group incentives usually provide the best overall results. The group effort makes overall improvements to the total system, as compared to optimizing individual outputs. Gainsharing (a group incentive system that uses teams for continuous improvement and provides a share, usually 50%, of all productivity gains above a benchmark standard) is one form of a direct group incentive programme that is well suited for the continuous improvement organization.
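A gainsharing pool of this kind can be illustrated with a short calculation. The 50% share follows the description above, while the output figures and unit value in the example are invented for illustration.

```python
def gainsharing_pool(benchmark_units, actual_units, value_per_unit, share=0.5):
    """Illustrative gainsharing calculation: the team receives a
    share (usually 50%) of the value of productivity gains above
    the benchmark standard; no pool accrues below the benchmark."""
    gain_units = max(0, actual_units - benchmark_units)
    return share * gain_units * value_per_unit

# A team producing 1,100 units against a 1,000-unit benchmark,
# at 8.00 per unit, shares half of the 800.00 gain:
print(gainsharing_pool(1000, 1100, 8.00))  # 400.0
print(gainsharing_pool(1000, 950, 8.00))   # 0.0 (below benchmark)
```

The pool would then typically be divided among team members, often in proportion to their base rates, though the division rule is outside the scope of this sketch.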
Indirect financial programmes are usually less effective than direct financial programmes because direct financial incentives are stronger motivators. The principal advantage of indirect plans is that they require less detailed and accurate performance measures. Organizational policies that favourably affect morale, result in increased productivity and provide some financial benefit to employees are considered to be indirect incentive programmes. It is important to note that for indirect financial programmes no exact relationship exists between employee output and financial incentives. Examples of indirect incentive programmes include relatively high base rates, generous fringe benefits, awards programmes, year-end bonuses and profit-sharing.
Intangible incentive programmes include rewards that do not have any (or very little) financial impact on employees. These programmes, however, when viewed as desirable by the employees, can improve productivity. Examples of intangible incentive programmes include job enrichment (adding challenge and intrinsic satisfaction to the specific task assignments), job enlargement (adding tasks to complete a “whole” piece or unit of work output), nonfinancial suggestion plans, employee involvement groups and time off without any reduction in pay.
Incentives in some form are an integral part of many compensation plans. In general, incentive plans should be carefully evaluated to make sure that workers are not exceeding safe ergonomic or mental stress limits. This is particularly important for individual direct financial plans. It is usually a lesser problem in group direct, indirect or intangible plans.
Incentives are desirable because they enhance productivity and provide workers an opportunity to earn extra income or other benefits. Gainsharing is today one of the best forms of incentive compensation for any work group or team organization that wishes to offer bonus earnings and to achieve improvement in the workplace without risking the imposition of negative health-stressors by the incentive plan itself.
The nations of the world vary dramatically in both their use and treatment of employees in their contingent workforce. Contingent workers include temporary workers hired through temporary help agencies, temporary workers hired directly, voluntary and “non-voluntary” part-timers (the non-voluntary would prefer full-time work) and the self-employed. International comparisons are difficult due to differences in the definitions of each of these categories of worker.
Overman (1993) stated that the temporary help industry in Western Europe is about 50% larger than it is in the United States, where about 1% of the workforce is made up of temporary workers. Temporary workers are almost non-existent in Italy and Spain.
While the subgroups of contingent workers vary considerably, the majority of part-time workers in all European countries are women at low salary levels. In the United States, contingent workers also tend to be young, female and members of minority groups. Countries vary considerably in the degree to which they protect contingent workers with laws and regulations covering their working conditions, health and other benefits. The United Kingdom, the United States, Korea, Hong Kong, Mexico and Chile are the least regulated, with France, Germany, Argentina and Japan having fairly rigid requirements (Overman 1993). A new emphasis on providing contingent workers with greater benefits through increased legal and regulatory requirements will help to alleviate occupational stress among those workers. However, those increased regulatory requirements may result in employers’ hiring fewer workers overall due to increased benefit costs.
An alternative to contingent work is “job sharing,” which can take three forms: two employees share the responsibilities for one full-time job; two employees share one full-time position and divide the responsibilities, usually by project or client group; or two employees perform completely separate and unrelated tasks but are matched for purposes of headcount (Mattis 1990). Research has indicated that most job sharing, like contingent work, is done by women. However, unlike contingent work, job sharing positions are often subject to the protection of wage and hour laws and may involve professional and even managerial responsibilities. Within the European Community, job sharing is best known in Britain, where it was first introduced in the public sector (Lewis, Izraeli and Hootsmans 1992). The United States Federal Government, in the early 1990s, implemented a nationwide job sharing programme for its employees; in contrast, many state governments have been establishing job sharing networks since 1983 (Lee 1983). Job sharing is viewed as one way to balance work and family responsibilities.
Many alternative terms are used to denote flexiplace and home work: telecommuting, the alternative worksite, the electronic cottage, location-independent work, the remote workplace and work-at-home. For our purposes, this category of work includes “work performed at one or more ‘predetermined locations’ such as the home or a satellite work space away from the conventional office where at least some of the communications maintained with the employer occur through the use of telecommunications equipment such as computers, telephones and fax machines” (Pitt-Catsouphes and Marchetta 1991).
LINK Resources, Inc., a private-sector firm monitoring worldwide telecommuting activity, has estimated that in 1993 there were 7.6 million telecommuters in the United States, out of more than 41.1 million work-at-home households. Of these telecommuters, 81% worked part-time for employers with fewer than 100 employees, in a wide array of industries and many geographical locations. Fifty-three per cent were male, in contrast to the female majority in contingent and job-sharing work. Research with 50 US companies also showed that the majority of telecommuters were male, and that successful flexible work arrangements included supervisory positions (both line and staff), client-centred work and jobs that involved travel (Mattis 1990). In 1992, 1.5 million Canadian households had at least one person who operated a business from home.
Lewis, Izraeli and Hootsmans (1992) reported that, despite earlier predictions, telecommuting has not taken over Europe. They added that it is best established in the United Kingdom and Germany for professional jobs including computer specialists, accountants and insurance agents.
In contrast, some home-based work in both the United States and Europe pays by the piece and involves short deadlines. Typically, while telecommuters tend to be male, homeworkers in low-paid, piece-work jobs with no benefits tend to be female (Hall 1990).
Recent research has concentrated on identifying: (a) the type of person best suited for home work; (b) the type of work best accomplished at home; (c) procedures to ensure successful home work experiences; and (d) reasons for organizational support (Hall 1990; Christensen 1992).
The general approach to social welfare issues and programmes varies throughout the world depending upon the culture and values of the nation studied. Some of the differences in welfare facilities in the United States, Canada and Western Europe are documented by Ferber, O’Farrell and Allen (1991).
Recent proposals for welfare reform in the United States suggest overhauling traditional public assistance in order to make recipients work for their benefits. Cost estimates for welfare reform range from US$15 billion to $20 billion over the next five years, with considerable cost savings projected for the long term. Welfare administration costs in the United States for such programmes as food stamps, Medicaid and Aid to Families with Dependent Children rose 19% from 1987 to 1991, the same percentage as the increase in the number of beneficiaries.
Canada has instituted a “work sharing” programme as an alternative to layoffs and welfare. The Canada Employment and Immigration Commission (CEIC) programme enables employers to face cutbacks by shortening the work week by one to three days and paying reduced wages accordingly. For the days not worked, the CEIC arranges for the workers to draw normal unemployment insurance benefits, an arrangement that helps to compensate them for the lower wages received from their employer and to relieve the hardships of being laid off. The duration of the programme is 26 weeks, with a 12-week extension. Workers can use work-sharing days for training and the federal Canadian government may reimburse the employer for a major portion of the direct training costs through the “Canadian Jobs Strategy”.
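The income arithmetic of such an arrangement can be sketched as follows. The 50% unemployment-insurance replacement rate and the five-day week used here are assumptions for illustration only, not figures from the text or the actual CEIC programme rules.

```python
def work_sharing_income(weekly_wage, days_not_worked,
                        ui_replacement=0.5, week_days=5):
    """Illustrative weekly income under a work-sharing arrangement:
    reduced wages for the days worked, plus unemployment insurance
    benefits for the one to three days not worked. The 50%
    replacement rate is an assumed value, not a programme figure."""
    daily_wage = weekly_wage / week_days
    wages = daily_wage * (week_days - days_not_worked)
    benefits = ui_replacement * daily_wage * days_not_worked
    return wages + benefits

# Cutting one day from a 500.00 weekly wage leaves 400.00 in wages
# plus 50.00 in assumed benefits, softening the 100.00 wage loss:
print(work_sharing_income(500.00, 1))  # 450.0
```

The point of the sketch is simply that benefits for days not worked partially offset the reduced wages, which is what relieves the hardship relative to an outright layoff.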
The degree of child-care support is dependent upon the sociological underpinnings of the nation’s culture (Scharlach, Lowe and Schneider 1991). Cultures that:
1. support the full participation of women in the workplace
2. view child care as a public responsibility rather than a concern of individual families
3. value child care as an extension of the educational system, and
4. view early childhood experiences as important and formative
will devote greater resources to supporting those programmes.
Thus, international comparisons are complicated by these four factors and “high quality care” may be dependent on the needs of children and families in specific cultures.
Within the European Community, France provides the most comprehensive child-care programme. The Netherlands and the United Kingdom were late in addressing this issue. Only 3% of British employers provided some form of child care in 1989. Lamb et al. (1992) present nonparental child-care case studies from Sweden, the Netherlands, Italy, the United Kingdom, the United States, Canada, Israel, Japan, the People’s Republic of China, Cameroon, East Africa and Brazil. In the United States, approximately 3,500 private companies of the 17 million firms nationwide offer some type of child-care assistance to their employees. Of those firms, approximately 1,100 offer flexible spending accounts, 1,000 offer information and referral services and fewer than 350 have onsite or near-site child-care centres (Bureau of National Affairs 1991).
In a research study in the United States, 44% of men and 76% of women with children under six missed work in the previous three months for a family-related reason. The researchers estimated that the organizations they studied paid over $4 million in salary and benefits to employees who were absent because of child-care problems (see study by Galinsky and Hughes in Fernandez 1990). A study by the United States General Accounting Office in 1981 showed that American companies lose over $700 million a year because of inadequate parental leave policies.
It will take only 30 years (from the time of this writing, 1994) for the proportion of elderly in Japan to climb from 7% to 14%, whereas in France it took over 115 years and in Sweden 90 years. Before the end of the century, one out of every four persons in many member States of the Commission of the European Communities will be over 60 years old. Yet, until recently in Japan, there were few institutions for the elderly, and the issue of eldercare has received scant attention in Britain and other European countries (Lewis, Izraeli and Hootsmans 1992). In the United States, approximately five million older Americans require assistance with day-to-day tasks in order to remain in the community, and 30 million are currently age 65 or older. Family members provide more than 80% of the assistance that these elderly people need (Scharlach, Lowe and Schneider 1991).
Research has shown that those employees who have elder-care responsibilities report significantly greater overall job stress than do other employees (Scharlach, Lowe and Schneider 1991). These caretakers often experience emotional stress and physical and financial strain. Fortunately, global corporations have begun to recognize that difficult family situations can result in absenteeism, decreased productivity and lower morale, and they are beginning to provide an array of “cafeteria benefits” to assist their employees. (The name “cafeteria” is intended to suggest that employees may select the benefits that would be most helpful to them from an array of benefits.) Benefits might include flexible work hours, paid “family illness” hours, referral services for family assistance, or a dependent-care salary-reduction account that allows employees to pay for elder care or day care with pre-tax dollars.
The author wishes to acknowledge the assistance of Charles Anderson of the Personnel Resources and Development Center of the United States Office of Personnel Management, Tony Kiers of the C.A.L.L. Canadian Work and Family Service, and Ellen Bankert and Bradley Googins of the Center on Work and Family of Boston University in acquiring and researching many of the references cited in this article.
The process by which outsiders become organizational insiders is known as organizational socialization. While early research on socialization focused on indicators of adjustment such as job satisfaction and performance, recent research has emphasized the links between organizational socialization and work stress.
Entering a new organization is an inherently stressful experience. Newcomers encounter a myriad of stressors, including role ambiguity, role conflict, work and home conflicts, politics, time pressure and work overload. These stressors can lead to distress symptoms. Studies in the 1980s, however, suggest that a properly managed socialization process has the potential for moderating the stressor-strain connection.
Two particular themes have emerged in the contemporary research on socialization:
1. the acquisition of information during socialization, and
2. supervisory support during socialization.
Information acquired by newcomers during socialization helps alleviate the considerable uncertainty in their efforts to master their new tasks, roles and interpersonal relationships. Often, this information is provided via formal orientation-cum-socialization programmes. In the absence of formal programmes, or (where they exist) in addition to them, socialization occurs informally. Recent studies have indicated that newcomers who proactively seek out information adjust more effectively (Morrison 1993). In addition, newcomers who underestimate the stressors in their new job report higher distress symptoms (Nelson and Sutton 1991).
Supervisory support during the socialization process is of special value. Newcomers who receive support from their supervisors report less stress from unmet expectations (Fisher 1985) and fewer psychological symptoms of distress (Nelson and Quick 1991). Supervisory support can help newcomers cope with stressors in at least three ways. First, supervisors may provide instrumental support (such as flexible work hours) that helps alleviate a particular stressor. Secondly, they may provide emotional support that leads a newcomer to feel more efficacy in coping with a stressor. Thirdly, supervisors play an important role in helping newcomers make sense of their new environment (Louis 1980). For example, they can frame situations for newcomers in a way that helps them appraise situations as threatening or nonthreatening.
In summary, socialization efforts that provide necessary information to newcomers and support from supervisors can prevent the stressful experience from becoming distressful.
The organizational socialization process is dynamic, interactive and communicative, and it unfolds over time. In this complexity lies the challenge of evaluating socialization efforts. Two broad approaches to measuring socialization have been proposed. One approach consists of the stage models of socialization (Feldman 1976; Nelson 1987). These models portray socialization as a multistage transition process with key variables at each of the stages. Another approach highlights the various socialization tactics that organizations use to help newcomers become insiders (Van Maanen and Schein 1979).
With both approaches, it is contended that there are certain outcomes that mark successful socialization. These outcomes include performance, job satisfaction, organizational commitment, job involvement and intent to remain with the organization. If socialization is a stress moderator, then distress symptoms (specifically, low levels of distress symptoms) should be included as an indicator of successful socialization.
Because the relationship between socialization and stress has only recently received attention, few studies have included health outcomes. The evidence indicates, however, that the socialization process is linked to distress symptoms. Newcomers who found interactions with their supervisors and other newcomers helpful reported lower levels of psychological distress symptoms such as depression and inability to concentrate (Nelson and Quick 1991). Further, newcomers with more accurate expectations of the stressors in their new jobs reported lower levels of both psychological symptoms (e.g., irritability) and physiological symptoms (e.g., nausea and headaches).
Because socialization is a stressful experience, health outcomes are appropriate variables to study. Studies are needed that focus on a broad range of health outcomes and that combine self-reports of distress symptoms with objective health measures.
The contemporary research on organizational socialization suggests that it is a stressful process that, if not managed well, can lead to distress symptoms and other health problems. Organizations can take at least three actions to ease the transition and ensure positive outcomes from socialization.
First, organizations should encourage realistic expectations among newcomers of the stressors inherent in the new job. One way of accomplishing this is to provide a realistic job preview that details the most commonly experienced stressors and effective ways of coping (Wanous 1992). Newcomers who have an accurate view of what they will encounter can preplan coping strategies and will experience less reality shock from those stressors about which they have been forewarned.
Secondly, organizations should make numerous sources of accurate information available to newcomers in the form of booklets, interactive information systems or hotlines (or all of these). The uncertainty of the transition into a new organization can be overwhelming, and multiple sources of informational support can aid newcomers in coping with the uncertainty of their new jobs. In addition, newcomers should be encouraged to seek out information during their socialization experiences.
Thirdly, emotional support should be explicitly planned for in designing socialization programmes. The supervisor is a key player in the provision of such support and may be most helpful by being emotionally and psychologically available to newcomers (Hirshhorn 1990). Other avenues for emotional support include mentoring, activities with more senior and experienced co-workers, and contact with other newcomers.
The career stage approach is one way to look at career development. The way in which a researcher approaches the issue of career stages is frequently based on Levinson’s life stage development model (Levinson 1986). According to this model, people grow through specific stages separated by transition periods. At each stage a new and crucial activity and psychological adjustment may be completed (Ornstein, Cron and Slocum 1989). In this way, defined career stages can be, and usually are, based on chronological age. The age ranges assigned for each stage have varied considerably between empirical studies, but usually the early career stage is considered to range from the ages of 20 to 34 years, the mid-career from 35 to 50 years and the late career from 50 to 65 years.
According to Super’s career development model (Super 1957; Ornstein, Cron and Slocum 1989) the four career stages are based on the qualitatively different psychological task of each stage. They can be based either on age or on organizational, positional or professional tenure. The same people can recycle several times through these stages in their work career. For example, according to the Career Concerns Inventory Adult Form, the actual career stage can be defined at an individual or group level. This instrument assesses an individual’s awareness of and concerns with various tasks of career development (Super, Zelkowitz and Thompson 1981). When tenure measures are used, the first two years are seen as a trial period. The establishment period from two to ten years means career advancement and growth. After ten years comes the maintenance period, which means holding on to the accomplishments achieved. The decline stage implies the development of one’s self-image independently of one’s career.
Because the theoretical bases of the definition of the career stages and the sorts of measure used in practice differ from one study to another, it is apparent that the results concerning the health- and job-relatedness of career development vary, too.
Most studies of career stage as a moderator between job characteristics and the health or well-being of employees deal with organizational commitment and its relation to job satisfaction or to behavioural outcomes such as performance, turnover and absenteeism (Cohen 1991). The relationship between job characteristics and strain has also been studied. The moderating effect of career stage means statistically that the average correlation between measures of job characteristics and well-being varies from one career stage to another.
Work commitment usually increases from early career stages to later stages, although among salaried male professionals, job involvement was found to be lowest in the middle stage. In the early career stage, employees had a stronger need to leave the organization and to be relocated (Morrow and McElroy 1987). Among hospital staff, nurses’ measures of well-being were most strongly associated with career and affective-organizational commitment (i.e., emotional attachment to the organization). Continuance commitment (this is a function of perceived number of alternatives and degree of sacrifice) and normative commitment (loyalty to organization) increased with career stage (Reilly and Orsak 1991).
A meta-analysis was carried out of 41 samples dealing with the relationship between organizational commitment and outcomes indicating well-being. The samples were divided into different career stage groups according to two measures of career stage: age and tenure. Age as a career stage indicator significantly affected turnover and turnover intentions, while organizational tenure was related to job performance and absenteeism. Low organizational commitment was related to high turnover, especially in the early career stage, whereas low organizational commitment was related to high absenteeism and low job performance in the late career stage (Cohen 1991).
The relationship between work attitudes (for instance, job satisfaction) and work behaviour has been found to be moderated to a considerable degree by career stage (e.g., Stumpf and Rabinowitz 1981). Among employees of public agencies, career stage measured with reference to organizational tenure was found to moderate the relationship between job satisfaction and job performance. Their relation was strongest in the first career stage. This was also supported in a study among sales personnel. Among academic teachers, the relationship between satisfaction and performance was found to be negative during the first two years of tenure.
Most studies of career stage have dealt with men. Even in many early studies from the 1970s in which the sex of the respondents was not reported, it is apparent that most of the subjects were men. Ornstein and Lynn (1990) tested how the career stage models of Levinson and Super described differences in the career attitudes and intentions of professional women. The results suggest that career stages based on age were related to organizational commitment, intention to leave the organization and desire for promotion. These findings were, in general, similar to those found among men (Ornstein, Cron and Slocum 1989). However, no support was found for the predictive value of career stages defined on a psychological basis.
Studies of stress have generally either ignored age, and consequently career stage, in their study designs or treated it as a confounding factor and controlled its effects. Hurrell, McLaney and Murphy (1990) contrasted the effects of stress in mid-career to its effects in early and late career, using age as a basis for their grouping of US postal workers. Perceived ill health was not related to job stressors in mid-career, but work pressure and underutilization of skills predicted it in early and late career. Work pressure was related also to somatic complaints in the early and late career groups. Underutilization of abilities was more strongly related to job satisfaction and somatic complaints among mid-career workers. Social support had more influence on mental health than on physical health, and this effect was more pronounced in mid-career than in the early or late career stages. Because the data were taken from a cross-sectional study, the authors note that a cohort explanation of the results is also possible (Hurrell, McLaney and Murphy 1990).
When adult male and female workers were grouped according to age, the older workers more frequently reported overload and responsibility as stressors at work, whereas the younger workers cited insufficiency (e.g., not challenging work), boundary-spanning roles and physical environment stressors (Osipow, Doty and Spokane 1985). The older workers reported fewer of all kinds of strain symptoms: one reason for this may be that older people used more rational-cognitive, self-care and recreational coping skills, evidently learned during their careers, but selection that is based on symptoms during one’s career may also explain these differences. Alternatively it might reflect some self-selection, when people leave jobs that stress them excessively over time.
Among Finnish and US male managers, the relationship between job demands and control on the one hand, and psychosomatic symptoms on the other, was found in the studies to vary according to career stage (defined on the basis of age) (Hurrell and Lindström 1992; Lindström and Hurrell 1992). Among US managers, job demands and control had a significant effect on symptom reporting in the middle career stage, but not in the early and late stages, while among Finnish managers, long weekly working hours and low job control increased stress symptoms in the early career stage, but not in the later stages. Differences between the two groups might be due to differences in the two samples studied. The Finnish managers, being in the construction trades, had high workloads already in their early career stage, whereas the US managers, who were public sector workers, had the highest workloads in their middle career stage.
To sum up the results of research on the moderating effects of career stage: the early career stage is marked by low organizational commitment related to turnover, as well as by job stressors related to perceived ill health and somatic complaints. In mid-career the results are conflicting: sometimes job satisfaction and performance are positively related, sometimes negatively. In mid-career, job demands and low control are related to frequent symptom reporting among some occupational groups. In late career, organizational commitment is correlated with low absenteeism and good performance. Findings on relations between job stressors and strain are inconsistent for the late career stage. There are some indications that more effective coping decreases work-related strain symptoms in late career.
Practical interventions to help people to cope better with the specific demands of each career stage would be beneficial. Vocational counselling at the entry stage of one’s work life would be especially useful. Interventions for minimizing the negative impact of career plateauing are suggested because this can be either a time of frustration or an opportunity to face new challenges or to reappraise one’s life goals (Weiner, Remer and Remer 1992). Results of age-based health examinations in occupational health services have shown that job-related problems lowering working ability gradually increase and qualitatively change with age. In early and mid-career they are related to coping with work overload, but in later middle and late career they are gradually accompanied by declining psychological condition and physical health, facts that indicate the importance of early institutional intervention at an individual level (Lindström, Kaihilahti and Torstila 1988). Both in research and in practical interventions, mobility and turnover pattern should be taken into account, as well as the role played by one’s occupation (and situation within that occupation) in one’s career development.
The Type A behaviour pattern is an observable set of behaviours or style of living characterized by extremes of hostility, competitiveness, hurry, impatience, restlessness, aggressiveness (sometimes stringently suppressed), explosiveness of speech, and a high state of alertness accompanied by muscular tension. People with strong Type A behaviour struggle against the pressure of time and the challenge of responsibility (Jenkins 1979). Type A is neither an external stressor nor a response of strain or discomfort. It is more like a style of coping. At the other end of this bipolar continuum, Type B persons are more relaxed, cooperative, steady in their pace of activity, and appear more satisfied with their daily lives and the people around them.
The Type A/B behavioural continuum was first conceptualized and labelled in 1959 by the cardiologists Dr. Meyer Friedman and Dr. Ray H. Rosenman. They identified Type A as being typical of their younger male patients with ischaemic heart disease (IHD).
The intensity and frequency of Type A behaviour increase as societies become more industrialized, competitive and hurried. Type A behaviour is more frequent in urban than in rural areas, in managerial and sales occupations than among technical workers, skilled craftsmen or artists, and among businesswomen than among housewives.
Type A behaviour has been studied as part of the fields of personality and social psychology, organizational and industrial psychology, psychophysiology, cardiovascular disease and occupational health.
Research relating to personality and social psychology has yielded considerable understanding of the Type A pattern as an important psychological construct. Persons scoring high on Type A measures behave in ways predicted by Type A theory. They are more impatient and aggressive in social situations and spend more time working and less in leisure. They react more strongly to frustration.
Research that incorporates the Type A concept into organizational and industrial psychology includes comparisons of different occupations as well as employees’ responses to job stress. Under conditions of equivalent external stress, Type A employees tend to report more physical and emotional strain than Type B employees. They also tend to move into high-demand jobs (Type A behavior 1990).
Pronounced increases in blood pressure, serum cholesterol and catecholamines in Type A persons were first reported by Rosenman et al. (1975) and have since been confirmed by many other investigators. The tenor of these findings is that Type A and Type B persons are usually quite similar in chronic or baseline levels of these physiological variables, but that environmental demands, challenges or frustrations create far larger reactions in Type A than in Type B persons. The literature has been somewhat inconsistent, partly because the same challenge may not physiologically activate men or women of different backgrounds. A preponderance of positive findings continues to be published (Contrada and Krantz 1988).
The history of Type A/B behaviour as a risk factor for ischaemic heart disease has followed a common historical trajectory: a trickle then a flow of positive findings, a trickle then a flow of negative findings, and now intense controversy (Review Panel on Coronary-Prone Behavior and Coronary Heart Disease 1981). Broad-scope literature searches now reveal a continuing mixture of positive associations and non-associations between Type A behaviour and IHD. The general trend of the findings is that Type A behaviour is more likely to be positively associated with a risk of IHD:
1. in cross-sectional and case-control studies rather than prospective studies
2. in studies of general populations and occupational groups rather than studies limited to persons with cardiovascular disease or who score high on other IHD risk factors
3. in younger study groups (under age 60) rather than older populations
4. in countries still in the process of industrialization or still at the peak of their economic development.
The Type A pattern is not “dead” as an IHD risk factor, but in the future must be studied with the expectation that it may convey greater IHD risk only in certain sub-populations and in selected social settings. Some studies suggest that hostility may be the most damaging component of Type A.
A newer development has been the study of Type A behaviour as a risk factor for injuries and mild and moderate illnesses in both occupational and student groups. It is rational to hypothesize that people who are hurried and aggressive will incur the most accidents at work, in sports and on the highway. This has been found to be empirically true (Elander, West and French 1993). It is less clear theoretically why mild acute illnesses in a full array of physiologic systems should occur more often in Type A than in Type B persons, but this has been found in a few studies (e.g., Suls and Sanders 1988). At least in some groups, Type A was found to be associated with a higher risk of future mild episodes of emotional distress. Future research needs to address both the validity of these associations and the physical and psychological reasons behind them.
The Type A/B behaviour pattern was first measured in research settings by the Structured Interview (SI). The SI is a carefully administered clinical interview in which about 25 questions are asked at different rates of speed and with different degrees of challenge or intrusiveness. Special training is necessary for an interviewer to be certified as competent both to administer and interpret the SI. Typically, interviews are tape-recorded to permit subsequent study by other judges to ensure reliability. In comparative studies among several measures of Type A behaviour, the SI seems to have greater validity for cardiovascular and psychophysiological studies than is found for self-report questionnaires, but little is known about its comparative validity in psychological and occupational studies because the SI is used much less frequently in these settings.
The most common self-report instrument is the Jenkins Activity Survey (JAS), a self-report, computer-scored, multiple-choice questionnaire. It has been validated against the SI and against the criteria of current and future IHD, and has accumulated construct validity. Form C, a 52-item version of the JAS published in 1979 by the Psychological Corporation, is the most widely used. It has been translated into most of the languages of Europe and Asia. The JAS contains four scales: a general Type A scale, and factor-analytically derived scales for speed and impatience, job involvement and hard-driving competitiveness. A short form of the Type A scale (13 items) has been used in epidemiological studies by the World Health Organization.
The Framingham Type A Scale (FTAS) is a ten-item questionnaire shown to be a valid predictor of future IHD for both men and women in the Framingham Heart Study (USA). It has also been used internationally both in cardiovascular and psychological research. Factor analysis divides the FTAS into two factors, one of which correlates with other measures of Type A behaviour while the second correlates with measures of neuroticism and irritability.
The Bortner Rating Scale (BRS) is composed of fourteen items, each in the form of an analogue scale. Subsequent studies have performed item-analysis on the BRS and have achieved greater internal consistency or greater predictability by shortening the scale to 7 or 12 items. The BRS has been widely used in international translations. Additional Type A scales have been developed internationally, but these have mostly been used only for specific nationalities in whose language they were written.
Systematic efforts have been under way for at least two decades to help persons with intense Type A behaviour patterns to change them to more of a Type B style. Perhaps the largest of these efforts was in the Recurrent Coronary Prevention Project conducted in the San Francisco Bay area in the 1980s. Repeated follow-up over several years documented that changes were achieved in many people and also that the rate of recurrent myocardial infarction was reduced in persons receiving the Type A behaviour reduction efforts as opposed to those receiving only cardiovascular counselling (Thoreson and Powell 1992).
Intervention in the Type A behaviour pattern is difficult to accomplish successfully because this behavioural style has so many rewarding features, particularly in terms of career advancement and material gain. The programme itself must be carefully crafted according to effective psychological principles, and a group process approach appears to be more effective than individual counselling.
The characteristic of hardiness is based in an existential theory of personality and is defined as a person’s basic stance towards his or her place in the world that simultaneously expresses commitment, control and readiness to respond to challenge (Kobasa 1979; Kobasa, Maddi and Kahn 1982). Commitment is the tendency to involve oneself in, rather than experience alienation from, whatever one is doing or encounters in life. Committed persons have a generalized sense of purpose that allows them to identify with and find meaningful the persons, events and things of their environment. Control is the tendency to think, feel and act as if one is influential, rather than helpless, in the face of the varied contingencies of life. Persons with control do not naïvely expect to determine all events and outcomes but rather perceive themselves as being able to make a difference in the world through their exercise of imagination, knowledge, skill and choice. Challenge is the tendency to believe that change rather than stability is normal in life and that changes are interesting incentives to growth rather than threats to security. Far from being reckless adventurers, persons with challenge are rather individuals with an openness to new experiences and a tolerance of ambiguity that enables them to be flexible in the face of change.
Conceived of as a reaction and corrective to a pessimistic bias in early stress research that emphasized persons’ vulnerability to stress, the basic hardiness hypothesis is that individuals characterized by high levels of the three interrelated orientations of commitment, control and challenge are more likely to remain healthy under stress than those individuals who are low in hardiness. The personality possessing hardiness is marked by a way of perceiving and responding to stressful life events that prevents or minimizes the strain that can follow stress and that, in turn, can lead to mental and physical illness.
The initial evidence for the hardiness construct was provided by retrospective and longitudinal studies of a large group of middle- and upper-level male executives employed by a Midwestern telephone company in the United States during the time of the divestiture of American Telephone and Telegraph (AT&T). Executives were monitored through yearly questionnaires over a five-year period for stressful life experiences at work and at home, physical health changes, personality characteristics, a variety of other work factors, social support and health habits. The primary finding was that under conditions of highly stressful life events, executives scoring high on hardiness were significantly less likely to become physically ill than were executives scoring low on hardiness, an outcome that was documented through self-reports of physical symptoms and illnesses and validated by medical records based on yearly physical examinations. The initial work also demonstrated: (a) the effectiveness of hardiness combined with social support and exercise to protect mental as well as physical health; and (b) the independence of hardiness with respect to the frequency and severity of stressful life events, age, education, marital status and job level. Finally, the body of hardiness research initially assembled as a result of the study led to further research that showed the generalizability of the hardiness effect across a number of occupational groups, including non-executive telephone personnel, lawyers and US Army officers (Kobasa 1982).
Since those basic studies, the hardiness construct has been employed by many investigators working in a variety of occupational and other contexts and with a variety of research strategies ranging from controlled experiments to more qualitative field investigations (for reviews, see Maddi 1990; Orr and Westman 1990; Ouellette 1993). The majority of these studies have basically supported and expanded the original hardiness formulation, but there have also been disconfirmations of the moderating effect of hardiness and criticisms of the strategies selected for the measurement of hardiness (Funk and Houston 1987; Hull, Van Treuren and Virnelli 1987).
Emphasizing individuals’ ability to do well in the face of serious stressors, researchers have confirmed the positive role of hardiness among many groups including, in samples studied in the United States, bus drivers, military air-disaster workers, nurses working in a variety of settings, teachers, candidates in training for a number of different occupations, persons with chronic illness and Asian immigrants. Elsewhere, studies have been carried out among businessmen in Japan and trainees in the Israeli defence forces. Across these groups, one finds an association between hardiness and lower levels of either physical or mental symptoms, and, less frequently, a significant interaction between stress levels and hardiness that provides support for the buffering role of personality. In addition, results establish the effects of hardiness on non-health outcomes such as work performance and job satisfaction as well as on burnout. Another large body of work, most of it conducted with college-student samples, confirms the hypothesized mechanisms through which hardiness has its health-protective effects. These studies demonstrated the influence of hardiness upon the subjects’ appraisal of stress (Wiebe and Williams 1992). Also relevant to construct validity, a smaller number of studies have provided some evidence for the psychophysiological arousal correlates of hardiness and the relationship between hardiness and various preventive health behaviours.
Essentially all of the empirical support for a link between hardiness and health has relied upon data obtained through self-report questionnaires. Appearing most often in publications is the composite questionnaire used in the original prospective test of hardiness and abridged derivatives of that measure. Fitting the broad-based definition of hardiness given in the opening words of this article, the composite questionnaire contains items from a number of established personality instruments that include Rotter’s Internal-External Locus of Control Scale (Rotter, Seeman and Liverant 1962), Hahn’s California Life Goals Evaluation Schedules (Hahn 1966), Maddi’s Alienation versus Commitment Test (Maddi, Kobasa and Hoover 1979) and Jackson’s Personality Research Form (Jackson 1974). More recent efforts at questionnaire development have led to the Personal Views Survey, or what Maddi (1990) calls the “Third Generation Hardiness Test”. This new questionnaire addresses many of the criticisms raised with respect to the original measure, such as the preponderance of negative items and the instability of hardiness factor structures. Furthermore, studies of working adults in both the United States and the United Kingdom have yielded promising reports as to the reliability and validity of the hardiness measure. Nonetheless, not all of the problems have been resolved. For example, some reports show low internal reliability for the challenge component of hardiness. Another pushes beyond the measurement issue to raise a conceptual concern about whether hardiness should always be seen as a unitary phenomenon rather than a multidimensional construct made up of separate components that may have relationships with health independently of each other in certain stressful situations. The challenge to future researchers on hardiness is to retain both the conceptual and human richness of the hardiness notion while increasing its empirical precision.
Although Maddi and Kobasa (1984) describe the childhood and family experiences that support the development of personality hardiness, they and many other hardiness researchers are committed to defining interventions to increase adults’ stress-resistance. From an existential perspective, personality is seen as something that one is constantly constructing, and a person’s social context, including his or her work environment, is seen as either supportive or debilitating as regards the maintenance of hardiness. Maddi (1987, 1990) has provided the most thorough depiction and rationale for hardiness intervention strategies. He outlines a combination of focusing, situational reconstruction, and compensatory self-improvement strategies that he has used successfully in small group sessions to enhance hardiness and decrease the negative physical and mental effects of stress in the workplace.
Low self-esteem (SE) has long been studied as a determinant of psychological and physiological disorders (Beck 1967; Rosenberg 1965; Scherwitz, Berton and Leventhal 1978). Beginning in the 1980s, organizational researchers have investigated self-esteem’s moderating role in relationships between work stressors and individual outcomes. This reflects researchers’ growing interest in dispositions that seem either to protect or make a person more vulnerable to stressors.
Self-esteem can be defined as “the favorability of individuals’ characteristic self-evaluations” (Brockner 1988). Brockner (1983, 1988) has advanced the hypothesis that persons with low SE (low SEs) are generally more susceptible to environmental events than are high SEs. Brockner (1988) reviewed extensive evidence that this “plasticity hypothesis” explains a number of organizational processes. The most prominent research into this hypothesis has tested self-esteem’s moderating role in the relationship between role stressors (role conflict and role ambiguity) and health and affect. Role conflict (disagreement among one’s received roles) and role ambiguity (lack of clarity concerning the content of one’s role) are generated largely by events that are external to the individual, and therefore, according to the plasticity hypothesis, high SEs would be less vulnerable to them.
In a study of 206 nurses in a large southwestern US hospital, Mossholder, Bedeian and Armenakis (1981) found that self-reports of role ambiguity were negatively related to job satisfaction for low SEs but not for high SEs. Pierce et al. (1993) used an organization-based measure of self-esteem to test the plasticity hypothesis on 186 workers in a US utility company. Role ambiguity and role conflict were negatively related to satisfaction only among low SEs. Similar interactions with organization-based self-esteem were found for role overload, environmental support and supervisory support.
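A minimal, purely illustrative way to test a moderating effect of this kind is to regress satisfaction on the stressor, self-esteem, and their product: a non-zero interaction coefficient means the stressor's slope differs between low and high SEs. The sketch below is not the analysis used in the studies cited; it uses invented, noise-free data and a hand-rolled least-squares solver simply to show the mechanics.

```python
# Illustrative sketch only: invented data, not the cited studies' analyses.
# Model: satisfaction = b0 + b1*stressor + b2*SE + b3*stressor*SE.
# A positive b3 means high self-esteem weakens a negative stressor effect,
# as the "plasticity hypothesis" predicts.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, y):
    """Ordinary least squares via the normal equations X'X b = X'y."""
    n, k = len(rows), len(rows[0])
    XtX = [[sum(rows[i][p] * rows[i][q] for i in range(n)) for q in range(k)]
           for p in range(k)]
    Xty = [sum(rows[i][p] * y[i] for i in range(n)) for p in range(k)]
    return solve(XtX, Xty)

# Invented data: role ambiguity 1..5; SE coded 0 = low, 1 = high.
# Built so the low-SE slope is -1.0 but the high-SE slope is only -0.1.
data, y = [], []
for se in (0, 1):
    for amb in range(1, 6):
        data.append([1.0, amb, se, amb * se])  # intercept, main effects, interaction
        y.append(8 - 1.0 * amb + 0.9 * amb * se)

b0, b1, b2, b3 = ols(data, y)
print(f"stressor slope for low SEs = {b1:.2f}, interaction = {b3:.2f}")
```

Because the toy data are noise-free, the fit recovers the built-in coefficients exactly: a steep negative slope for low SEs and a positive interaction term, the signature of the moderated relationships reported above.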
In the studies reviewed above, self-esteem was viewed as a proxy (or alternative measure) for self-appraisals of competence on the job. Ganster and Schaubroeck (1991a) speculated that the moderating role of self-esteem on role stressors’ effects was instead caused by low SEs’ lack of confidence in influencing their social environment, the result being weaker attempts at coping with these stressors. In a study of 157 US fire-fighters, they found that role conflict was positively related to somatic health complaints only among low SEs. There was no such interaction with role ambiguity.
In a separate analysis of the data on nurses reported in their earlier study (Mossholder, Bedeian and Armenakis 1981), these authors (1982) found that peer group interaction had a significantly more negative relationship to self-reported tension among low SEs than among high SEs. Likewise, low SEs reporting high peer-group interaction were less likely to wish to leave the organization than were high SEs reporting high peer-group interaction.
Several measures of self-esteem exist in the literature. Possibly the most often used of these is the ten-item instrument developed by Rosenberg (1965). This instrument was used in the Ganster and Schaubroeck (1991a) study. Mossholder and his colleagues (1981, 1982) used the self-confidence scale from Gough and Heilbrun’s (1965) Adjective Check List. The organization-based measure of self-esteem used by Pierce et al. (1993) was a ten-item instrument developed by Pierce et al. (1989).
The research findings suggest that health reports and satisfaction among low SEs can be improved either by reducing their role stressors or increasing their self-esteem. The organization development intervention of role clarification (dyadic supervisor-subordinate exchanges directed at clarifying the subordinate’s role and reconciling incompatible expectations), when combined with responsibility charting (clarifying and negotiating the roles of different departments), proved successful in a randomized field experiment at reducing role conflict and role ambiguity (Schaubroeck et al. 1993). It seems unlikely, however, that many organizations will be able and willing to undertake this rather extensive practice unless role stress is seen as particularly acute.
Brockner (1988) suggested a number of ways organizations can enhance employee self-esteem. Supervision practices are a major area in which organizations can improve. Performance appraisal feedback which focuses on behaviours rather than on traits, providing descriptive information with evaluative summations, and participatively developing plans for continuous improvement, is likely to have fewer adverse effects on employee self-esteem, and it may even enhance the self-esteem of some workers as they discover ways to improve their performance. Positive reinforcement of effective performance events is also critical. Training approaches such as mastery modelling (Wood and Bandura 1989) also ensure that positive efficacy perceptions are developed for each new task; these perceptions are the basis of organization-based self-esteem.
Locus of control (LOC) refers to a personality trait reflecting the generalized belief that either events in life are controlled by one’s own actions (an internal LOC) or by outside influences (an external LOC). Those with an internal LOC believe that they can exert control over life events and circumstances, including the associated reinforcements, that is, those outcomes which are perceived to reward one’s behaviours and attitudes. In contrast, those with an external LOC believe they have little control over life events and circumstances, and attribute reinforcements to powerful others or to luck.
The construct of locus of control emerged from Rotter’s (1954) social learning theory. To measure LOC, Rotter (1966) developed the Internal-External (I-E) scale, which has been the instrument of choice in most research studies. However, research has questioned the unidimensionality of the I-E scale, with some authors suggesting that LOC has two dimensions (e.g., personal control and social system control), and others suggesting that LOC has three dimensions (personal efficacy, control ideology and political control). More recently developed scales to measure LOC are multidimensional, or assess LOC for specific domains, such as health or work (Hurrell and Murphy 1992).
One of the most consistent and widespread findings in the general research literature is the association between an external LOC and poor physical and mental health (Ganster and Fusilier 1989). A number of studies in occupational settings report similar findings: workers with an external LOC tended to report more burnout, job dissatisfaction and stress, and lower self-esteem, than those with an internal LOC (Kasl 1989). Recent evidence suggests that LOC moderates the relationship between role stressors (role ambiguity and role conflict) and symptoms of distress (Cvetanovski and Jex 1994; Spector and O’Connell 1994).
However, research linking LOC beliefs and ill health is difficult to interpret for several reasons (Kasl 1989). First, there may be conceptual overlap between the measures of health and locus of control scales. Secondly, a dispositional factor, like negative affectivity, may be present which is responsible for the relationship. For example, in the study by Spector and O’Connell (1994), LOC beliefs correlated more strongly with negative affectivity than with perceived autonomy at work, and did not correlate with physical health symptoms. Thirdly, the direction of causality is ambiguous; it is possible that the work experience may alter LOC beliefs. Finally, other studies have not found moderating effects of LOC on job stressors or health outcomes (Hurrell and Murphy 1992).
The question of how LOC moderates job stressor-health relationships has not been well researched. One proposed mechanism involves the use of more effective, problem-focused coping behaviour by those with an internal LOC. Those with an external LOC might use fewer problem-solving coping strategies because they believe that events in their lives are outside their control. There is evidence that people with an internal LOC utilize more task-centred coping behaviours and fewer emotion-centred coping behaviours than those with an external LOC (Hurrell and Murphy 1992). Other evidence indicates that in situations viewed as changeable, those with an internal LOC reported high levels of problem-solving coping and low levels of emotional suppression, whereas those with an external LOC showed the reverse pattern. It is important to bear in mind that many workplace stressors are not under the direct control of the worker, and that attempts to change uncontrollable stressors might actually increase stress symptoms (Hurrell and Murphy 1992).
A second mechanism whereby LOC could influence stressor-health relationships is via social support, another moderating factor of stress and health relationships. Fusilier, Ganster and Mays (1987) found that locus of control and social support jointly determined how workers responded to job stressors and Cummins (1989) found that social support buffered the effects of job stress, but only for those with an internal LOC and only when the support was work-related.
Although the topic of LOC is intriguing and has stimulated a great deal of research, there are serious methodological problems attached to investigations in this area which need to be addressed. For example, the trait-like (unchanging) nature of LOC beliefs has been questioned by research which showed that people adopt a more external orientation with advancing age and after certain life experiences such as unemployment. Furthermore, LOC may be measuring worker perceptions of job control, instead of an enduring trait of the worker. Still other studies have suggested that LOC scales may not only measure beliefs about control, but also the tendency to use defensive manoeuvres, and to display anxiety or proneness to Type A behaviour (Hurrell and Murphy 1992).
Finally, there has been little research on the influence of LOC on vocational choice, and the reciprocal effects of LOC and job perceptions. Regarding the former, occupational differences in the proportion of “internals” and “externals” may be evidence that LOC influences vocational choice (Hurrell and Murphy 1992). On the other hand, such differences might reflect exposure to the job environment, just as the work environment is thought to be instrumental in the development of the Type A behaviour pattern. A final alternative is that occupational differences in LOC may be due to “drift”, that is, the movement of workers into or out of certain occupations as a result of job dissatisfaction, health concerns or desire for advancement.
In summary, the research literature does not present a clear picture of the influence of LOC beliefs on job stressor-health relationships. Even where research has produced more or less consistent findings, the meaning of the relationship is obscured by confounding influences (Kasl 1989). Additional research is needed to determine the stability of the LOC construct and to identify the mechanisms or pathways through which LOC influences worker perceptions and mental and physical health. Components of the path should reflect the interaction of LOC with other traits of the worker, and the interaction of LOC beliefs with work environment factors, including reciprocal effects of the work environment and LOC beliefs. Future research should produce less ambiguous results if it incorporates measures of related individual traits (e.g., Type A behaviour or anxiety) and utilizes domain-specific measures of locus of control (e.g., work).
Coping has been defined as “efforts to reduce the negative impacts of stress on individual well-being” (Edwards 1988). Coping, like the experience of work stress itself, is a complex, dynamic process. Coping efforts are triggered by the appraisal of situations as threatening, harmful or anxiety producing (i.e., by the experience of stress). Coping is an individual difference variable that moderates the stress-outcome relationship.
Coping styles encompass trait-like combinations of thoughts, beliefs and behaviours that result from the experience of stress and may be expressed independently of the type of stressor. A coping style is a dispositional variable. Coping styles are fairly stable over time and situations and are influenced by personality traits, but are different from them. The distinction between the two is one of generality or level of abstraction. Examples of such styles, expressed in broad terms, include: monitor-blunter (Miller 1979) and repressor-sensitizer (Houston and Hodges 1970). Individual differences in personality, age, experience, gender, intellectual ability and cognitive style affect the way an individual copes with stress. Coping styles are the result of both prior experience and previous learning.
Shanan (1967) offered an early perspective on what he termed an adaptive coping style. This “response set” was characterized by four ingredients: the availability of energy directly focused on potential sources of the difficulty; a clear distinction between events internal and external to the person; confronting rather than avoiding external difficulties; and balancing external demands with needs of the self. Antonovsky (1987) similarly suggests that, to be effective, the individual person must be motivated to cope, have clarified the nature and dimensions of the problem and the reality in which it exists, and then selected the most appropriate resources for the problem at hand.
The most common typology of coping style (Lazarus and Folkman 1984) includes problem-focused coping (which includes information seeking and problem solving) and emotion-focused coping (which involves expressing emotion and regulating emotions). These two factors are sometimes complemented by a third factor, appraisal-focused coping (whose components include denial, acceptance, social comparison, redefinition and logical analysis).
Moos and Billings (1982) distinguish among the following coping styles:
· Active-cognitive. The person tries to manage their appraisal of the stressful situation.
· Active-behavioural. This style involves behaviour dealing directly with the stressful situations.
· Avoidance. The person avoids confronting the problem.
Greenglass (1993) has recently proposed a coping style termed social coping, which integrates social and interpersonal factors with cognitive factors. Her research showed significant relationships between various kinds of social support and coping forms (e.g., problem-focused and emotion-focused). Women, generally possessing relatively greater interpersonal competence, were found to make greater use of social coping.
In addition, it may be possible to link another approach to coping, termed preventive coping, with a large body of previously separate writing dealing with healthy lifestyles (Roskies 1991). Wong and Reker (1984) suggest that a preventive coping style is aimed at promoting one’s well-being and reducing the likelihood of future problems. Preventive coping includes such activities as physical exercise and relaxation, as well as the development of appropriate sleeping and eating habits, and planning, time management and social support skills.
Another coping style, which has been described as a broad aspect of personality (Watson and Clark 1984), involves the concepts of negative affectivity (NA) and positive affectivity (PA). People with high NA accentuate the negative in evaluating themselves, other people and their environment in general and reflect higher levels of distress. Those with high PA focus on the positives in evaluating themselves, other people and their world in general. People with high PA report lower levels of distress.
These two dispositions can affect a person’s perceptions of the number and magnitude of potential stressors as well as his or her coping responses (i.e., one’s perceptions of the resources that one has available, as well as the actual coping strategies that are used). Thus, those with high NA will report fewer resources available and are more likely to use ineffective (defeatist) strategies (such as releasing emotions, avoidance and disengagement in coping) and less likely to use more effective strategies (such as direct action and cognitive reframing). Individuals with high PA would be more confident in their coping resources and use more productive coping strategies.
Antonovsky’s (1979; 1987) sense of coherence (SOC) concept overlaps considerably with PA. He defines SOC as a generalized view of the world as meaningful and comprehensible. This orientation allows the person to first focus on the specific situation and then to act on the problem and the emotions associated with the problem. High SOC individuals have the motivation and the cognitive resources to engage in these sorts of behaviours likely to resolve the problem. In addition, high SOC individuals are more likely to realize the importance of emotions, more likely to experience particular emotions and to regulate them, and more likely to take responsibility for their circumstances instead of blaming others or projecting their perceptions upon them. Considerable research has since supplied support for Antonovsky’s thesis.
Coping styles can be described with reference to dimensions of complexity and flexibility (Lazarus and Folkman 1984). People using a variety of strategies exhibit a complex style; those preferring a single strategy exhibit a simple style. Those who use the same strategy in all situations exhibit a rigid style; those who use different strategies in the same, or different, situations exhibit a flexible style. A flexible style has been shown to be more effective than a rigid style.
Coping styles are typically measured by using self-reported questionnaires or by asking individuals, in an open-ended way, how they coped with a particular stressor. The questionnaire developed by Lazarus and Folkman (1984), the “Ways of Coping Checklist”, is the most widely used measure of problem-focused and emotion-focused coping. Dewe (1989), on the other hand, has frequently used individuals’ descriptions of their own coping initiatives in his research on coping styles.
There are a variety of practical interventions that may be implemented with regard to coping styles. Most often, intervention consists of education and training in which individuals are presented with information, sometimes coupled with self-assessment exercises that enable them to examine their own preferred coping style as well as other varieties of coping styles and their potential usefulness. Such information is typically well received by the persons to whom the intervention is directed, but the demonstrated usefulness of such information in helping them cope with real life stressors is lacking. In fact, the few studies that considered individual coping (Shinn et al. 1984; Ganster et al. 1982) have reported limited practical value in such education, particularly when a follow-up has been undertaken (Murphy 1988).
Matteson and Ivancevich (1987) outline a study dealing with coping styles as part of a longer programme of stress management training. Improvements in three coping skills are addressed: cognitive, interpersonal and problem solving. Coping skills are classified as problem-focused or emotion-focused. Problem-focused skills include problem solving, time management, communication and social skills, assertiveness, lifestyle changes and direct actions to change environmental demands. Emotion-focused skills are designed to relieve distress and foster emotion regulation. These include denial, expressing feelings and relaxation.
The preparation of this article was supported in part by the Faculty of Administrative Studies, York University.
During the mid-1970s public health practitioners, and in particular, epidemiologists “discovered” the concept of social support in their studies of causal relationships between stress, mortality and morbidity (Cassel 1974; Cobb 1976). In the past decade there has been an explosion in the literature relating the concept of social support to work-related stressors. By contrast, in psychology, social support as a concept had already been well integrated into clinical practice. Rogers’ (1942) client-centred therapy of unconditional positive regard is fundamentally a social support approach. Lindeman’s (1944) pioneering work on grief management identified the critical role of support in moderating the crisis of loss through death. Caplan’s (1964) model of preventive community psychiatry elaborated on the importance of community and support groups.
Cassel (1976) adapted the concept of social support into public health theory as a way of explaining the differences in diseases that were thought to be stress-related. He was interested in understanding why some individuals appeared to be more resistant to stress than others. The idea of social support as a factor in disease causation was reasonable since, he noted, both people and animals who experienced stress in the company of “significant others” seemed to suffer fewer adverse consequences than those who were isolated. Cassel proposed that social support could act as a protective factor buffering an individual from the effects of stress.
Cobb (1976) expanded on the concept by noting that the mere presence of another person is not social support. He suggested that an exchange of “information” was needed. He established three categories for this exchange:
· information leading the person to the belief that one is loved or cared for (emotional support)
· information leading to the belief that one is esteemed and valued (esteem support)
· information leading to the belief that one belongs to a network of mutual obligations and communication.
Cobb reported that those experiencing severe events without such social support were ten times more likely to become depressed, and concluded that intimate relations, or social support, somehow protect against the effects of stress reactions. He also proposed that social support operates throughout one’s life span, encompassing various life events such as unemployment, severe illness and bereavement. Cobb pointed to the great diversity of studies, samples, methods and outcomes as convincing evidence that social support is a common factor in modifying stress, but is, in itself, not a panacea for avoiding its effects.
According to Cobb, social support increases coping ability (environmental manipulation) and facilitates adaptation (self-change to improve the person-environment fit). He cautioned, however, that most research was focused on acute stressors and did not permit generalizations of the protective nature of social support for coping with the effects of chronic stressors or traumatic stress.
Over the intervening years since the publication of these seminal works, investigators have moved away from considering social support as a unitary concept, and have attempted to understand the components of social stress and social support.
Hirsh (1980) describes five possible elements of social support:
· emotional support: care, comfort, love, affection, sympathy
· encouragement: praise, compliments; the extent to which one feels inspired by the supporter to feel courage, hope or to prevail
· advice: useful information to solve problems; the extent to which one feels informed
· companionship: time spent with supporter; the extent to which one does not feel alone
· tangible aid: practical resources, such as money or aid with chores; the extent to which one feels relieved of burdens.
House (1981) uses another framework to discuss social support in the context of work-related stress:
· emotional: empathy, caring, love, trust, esteem or demonstrations of concern
· appraisal: information relevant to self-evaluation, feedback from others useful in self-affirmation
· informational: suggestions, advice or information useful in problem-solving
· instrumental: direct aid in the form of money, time or labour.
House felt that emotional support was the most important form of social support. In the workplace, the supportiveness of the supervisor was the most important element, followed by co-worker support. The structure and organization of the enterprise, as well as the specific jobs within it, could either enhance or inhibit potential for support. House found that greater task specialization and fragmentation of work leads to more isolated work roles and to decreased opportunities for support.
Pines’ (1983) study of burnout, which is a phenomenon discussed separately in this chapter, found that the availability of social support at work is negatively correlated with burnout. She identifies six different relevant aspects of social support which modify the burnout response. These include listening, encouragement, giving advice, and providing companionship and tangible aid.
As the models described above indicate, the field has attempted to specify the concept of social support, but there is no clear consensus on its precise elements, although considerable overlap between the models is evident.
Although the literature on stress and social support is quite extensive, there is still considerable debate as to the mechanisms by which stress and social support interact. A long-standing question is whether social support has a direct or indirect effect on health.
Social support can have a direct or main effect by serving as a barrier to the effects of the stressor. A social support network may provide needed information or needed feedback in order to overcome the stressor. It may provide a person with the resources he or she needs to minimize the stress. An individual’s self-perception may also be influenced by group membership so as to provide self-confidence, a sense of mastery and skill, and hence a sense of control over the environment. This is relevant to Bandura’s (1986) theories of personal control as the mediator of stress effects. There appears to be a minimum threshold level of social contact required for good health, and increases in social support above the minimum are less important. If one considers social support as having a direct - or main - effect, then one can create an index by which to measure it (Cohen and Syme 1985; Gottlieb 1983).
Cohen and Syme (1985), however, also suggest that an alternative explanation to social support acting as a main effect is that it is the isolation, or lack of social support, which causes the ill health rather than the social support itself promoting better health. This is an unresolved issue. Gottlieb also raises the issue of what happens when the stress results in the loss of the social network itself, such as might occur during disasters, major accidents or loss of work. This effect has not yet been quantified.
The buffering hypothesis is that social support intervenes between the stressor and the stress response to reduce its effects. Buffering could change one’s perception of the stressor, thus diminishing its potency, or it could increase one’s coping skills. Social support from others may provide tangible aid in a crisis, or it may lead to suggestions that facilitate adaptive responses. Finally, social support may have a stress-modifying effect, calming the neuroendocrine system so that the person is less reactive to the stressor.
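In statistical terms, buffering is an interaction: the slope relating stressor exposure to symptoms is flatter when support is high. The following minimal sketch illustrates the idea with entirely synthetic data; the slopes (1.0 without support, 0.3 with support) are hypothetical effect sizes chosen for illustration, not empirical estimates.

```python
# Illustrative simulation of the buffering hypothesis: social support
# flattens the stressor -> symptom slope. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
stressor = rng.uniform(0, 10, n)      # stressor exposure
support = rng.integers(0, 2, n)       # 0 = low support, 1 = high support

# Hypothetical buffering: slope shrinks from 1.0 to 0.3 under high support.
slope = np.where(support == 1, 0.3, 1.0)
symptoms = slope * stressor + rng.normal(0, 1, n)

def fitted_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

low = fitted_slope(stressor[support == 0], symptoms[support == 0])
high = fitted_slope(stressor[support == 1], symptoms[support == 1])
print(f"slope without support: {low:.2f}, with support: {high:.2f}")
```

Under a pure main-effect model, by contrast, support would lower symptom levels equally at every stressor level, leaving the two slopes parallel.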
Pines (1983) notes that the relevant aspect of social support may be in the sharing of a social reality. Gottlieb proposes that social support could offset self-recrimination and dispel notions that the individual is him or herself responsible for the problems. Interaction with a social support system can encourage the venting of fears and can assist re-establishing a meaningful social identity.
Research thus far has tended to treat social support as a static, given factor. While the issue of its change over time has been raised, little data exist on the time course of social support (Gottlieb 1983; Cohen and Syme 1985). Social support is, of course, as fluid as the stressors it moderates. It varies as the individual passes through the stages of life. It can also change over the short-term experience of a particular stressful event (Wilcox 1981).
Such variability probably means that social support fulfils different functions during different developmental stages or during different phases of a crisis. For example, at the onset of a crisis, informational support may be more essential than tangible aid. The source of support, its density and the length of time it is operative will also be in flux. The reciprocal relationship between stress and social support must be recognized. Some stressors themselves have a direct impact on available support. Death of a spouse, for example, usually reduces the extent of the network and may have serious consequences for the survivor (Goldberg et al. 1985).
Social support is not a magic bullet that reduces the impact of stress. Under certain conditions it may exacerbate or be the cause of stress. Wilcox (1981) noted that those with a denser kin network had more difficulties adjusting to divorce because their families were less likely to accept divorce as a solution to marital problems. The literature on addiction and family violence also shows possible severe negative effects of social networks. Indeed, as Pines and Aronson (1981) point out, much professional mental health intervention is devoted to undoing destructive relationships, teaching interpersonal skills and assisting people to recover from social rejection.
There are a large number of studies employing a variety of measures of the functional content of social support. These measures have a wide range of reliability and construct validity. Another methodological problem is that these analyses depend largely on the self-reports of those being studied. The responses will therefore of necessity be subjective and will cause one to wonder whether it is the actual event or level of social support that is important or whether it is the individual’s perception of support and outcomes that is more critical. If it is the perception that is critical, then it may be that some other, third variable, such as personality type, is affecting both stress and social support (Turner 1983). For example, a third factor, such as age or socio-economic status, may influence change in both social support and outcome, according to Dooley (1985). Solomon (1986) provides some evidence for this idea with a study of women who have been forced by financial constraints into involuntary interdependence on friends and kin. She found that such women opt out of these relationships as quickly as they are financially able to do so.
Thoits (1982) raises concerns about reverse causation. It may be, she points out, that certain disorders chase away friends and lead to loss of support. Studies by Peters-Golden (1982) and Maher (1982) on cancer victims and social support appear to be consistent with this proposition.
Studies on the relationship between social support and work stress indicate that successful coping is related to the effective use of support systems (Cohen and Ahearn 1980). Successful coping activities have emphasized the use of both formal and informal social support in dealing with work stress. Laid-off workers, for example, are advised to actively seek support to provide informational, emotional and tangible support. There have been relatively few evaluations of the effectiveness of such interventions. It appears, however, that formal support is only effective in the short term and informal systems are necessary for longer-term coping. Attempts to provide institutional formal social support can create negative outcomes, since the anger and rage about layoff or bankruptcy, for example, may be displaced to those who provide the social support. Prolonged reliance on social support may create a sense of dependency and lowered self-esteem.
In some occupations, such as seafarers, fire-fighters or staff in remote locations such as on oil rigs, there is a consistent, long-term, highly defined social network which can be compared to a family or kin system. Given the necessity for small work groups and joint efforts, it is natural that a strong sense of social cohesion and support develops among workers. The sometimes hazardous nature of the work requires that workers develop mutual respect, trust and confidence. Strong bonds and interdependence are created when people are dependent on each other for their survival and well-being.
Further research on the nature of social support during routine periods, as well as downsizing or major organizational change, is necessary to further define this factor. For example, when an employee is promoted to a supervisory position, he or she normally must distance him or herself from the other members of the work group. Does this make a difference in the day-to-day levels of social support he or she receives or requires? Does the source of support shift to other supervisors or to the family or somewhere else? Do those in positions of responsibility or authority experience different work stressors? Do these individuals require different types, sources or functions of social support?
If the target of the group-based interventions is also changing the functions of social support or the nature of the network, does this provide a preventive effect in future stressful events?
What will be the effect of growing numbers of women in these occupations? Does their presence change the nature and functions of support for all or does each sex require different levels or types of support?
The workplace presents a unique opportunity to study the intricate web of social support. As a closed subculture, it provides a natural experimental setting for research into the role of social support, social networks and their interrelationships with acute, cumulative and traumatic stress.
Do job stressors affect men and women differently? This question has only recently been addressed in the job stress–illness literature. In fact, the word gender does not even appear in the index of the first edition of the Handbook of Stress (Goldberger and Breznitz 1982) nor does it appear in the indices of such major reference books as Job Stress and Blue Collar Work (Cooper and Smith 1985) and Job Control and Worker Health (Sauter, Hurrell and Cooper 1989). Moreover, in a 1992 review of moderator variables and interaction effects in the occupational stress literature, gender effects were not even mentioned (Holt 1992). One reason for this state of affairs lies in the history of occupational health and safety psychology, which in turn reflects the pervasive gender stereotyping in our culture. With the exception of reproductive health, when researchers have looked at physical health outcomes and physical injuries, they have generally studied men and variations in their work. When researchers have studied mental health outcomes, they have generally studied women and variations in their social roles.
As a result, the “available evidence” on the physical health impact of work has until recently been almost completely limited to men (Hall 1992). For example, attempts to identify correlates of coronary heart disease have been focused exclusively on men and on aspects of their work; researchers did not even inquire into their male subjects’ marital or parental roles (Rosenman et al. 1975). Indeed, few studies of the job stress–illness relationship in men include assessments of their marital and parental relationships (Caplan et al. 1975).
In contrast, concern about reproductive health, fertility and pregnancy focused primarily on women. Not surprisingly, “the research on reproductive effects of occupational exposures is far more extensive on females than on males” (Walsh and Kelleher 1987). With respect to psychological distress, attempts to specify the psychosocial correlates, in particular the stressors associated with balancing work and family demands, have centred heavily on women.
By reinforcing the notion of “separate spheres” for men and women, these conceptualizations and the research paradigms they generated prevented any examination of gender effects, thereby effectively controlling for the influence of gender. Extensive sex segregation in the workplace (Bergman 1986; Reskin and Hartman 1986) also acts as a control, precluding the study of gender as a moderator. If all men are employed in “men’s jobs” and all women are employed in “women’s jobs”, it would not be reasonable to ask about the moderating effect of gender on the job stress–illness relationship: job conditions and gender would be confounded. It is only when some women are employed in jobs that men occupy and when some men are employed in jobs that women occupy that the question is meaningful.
Controlling is one of three strategies for treating the effects of gender. The other two are ignoring these effects or analysing them (Hall 1991). Most investigations of health have either ignored or controlled for gender, thereby accounting for the dearth of references to gender as discussed above and for a body of research that reinforces stereotyped views about the role of gender in the job stress–illness relationship. These views portray women as essentially different from men in ways that render them less robust in the workplace, and portray men as comparatively unaffected by non-workplace experiences.
Despite this history, the situation is already changing. Witness the publication in 1987 of Gender and Stress (Barnett, Biener and Baruch 1987), the first edited volume focusing specifically on the impact of gender at all points in the stress reaction. And the second edition of the Handbook of Stress (Barnett 1992) includes a chapter on gender effects. Indeed, current studies increasingly reflect the third strategy: analysing gender effects. This strategy holds great promise, but also has pitfalls. Operationally, it involves analysing data relating to males and females and estimating both the main and the interaction effects of gender. A significant main effect tells us that after controlling for the other predictors in the model, men and women differ with respect to the level of the outcome variable. Interaction-effects analyses concern differential reactivity, that is, does the relationship between a given stressor and a health outcome differ for women and men?
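Operationally, such an analysis amounts to fitting a model with a gender main effect and a stressor × gender interaction term. The sketch below uses entirely hypothetical simulated data (all variable names and coefficient values are illustrative, not drawn from the studies cited) to show how the two effects are separated in an ordinary least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: a job stressor score and a gender indicator (0/1).
stressor = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)

# Simulate distress with a gender main effect but NO interaction,
# mirroring the typical finding that reactivity does not differ by gender.
distress = 1.0 + 0.5 * stressor + 0.3 * gender + rng.normal(0.0, 0.2, n)

# Design matrix: intercept, stressor, gender, stressor x gender.
X = np.column_stack([np.ones(n), stressor, gender, stressor * gender])
beta, *_ = np.linalg.lstsq(X, distress, rcond=None)

b0, b_stressor, b_gender, b_interaction = beta
```

A near-zero interaction coefficient corresponds to the pattern most studies report: men and women may differ in average distress (the main effect) without differing in reactivity to the stressor (the interaction).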
The main promise of this line of inquiry is to challenge stereotyped views of women and men. The main pitfall is that conclusions about gender difference can still be drawn erroneously. Because gender is confounded with many other variables in our society, these variables have to be taken into account before conclusions about gender can be inferred. For example, samples of employed men and women will undoubtedly differ with respect to a host of work and non-work variables that could reasonably affect health outcomes. Most important among these contextual variables are occupational prestige, salary, part-time versus full-time employment, marital status, education, employment status of spouse, overall work burdens and responsibility for care of younger and older dependants. In addition, evidence suggests the existence of gender differences in several personality, cognitive, behavioural and social system variables that are related to health outcomes. These include: sensation seeking; self-efficacy (feelings of competence); external locus of control; emotion-focused versus problem-focused coping strategies; use of social resources and social support; harmful acquired risks, such as smoking and alcohol abuse; protective behaviours, such as exercise, balanced diets and preventive health regimens; early medical intervention; and social power (Walsh, Sorensen and Leonard, in press). The better one can control these contextual variables, the closer one can get to understanding the effect of gender per se on the relationships of interest, and thereby to understanding whether it is gender or other, gender-related variables that are the effective moderators.
To illustrate, in one study (Karasek 1990) job changes among white-collar workers were less likely to be associated with negative health outcomes if the changes resulted in increased job control. This finding was true for men, not women. Further analyses indicated that job control and gender were confounded. For women, one of “the less aggressive (or powerful) groups in the labour market” (Karasek 1990), white-collar job changes often involved reduced control, whereas for men, such job changes often involved increased control. Thus, power, not gender, accounted for this interaction effect. Such analyses lead us to refine the question about moderator effects. Do men and women react differentially to workplace stressors because of their inherent (i.e., biological) nature or because of their different experiences?
Although only a few studies have examined gender interaction effects, most report that when appropriate controls are utilized, the relationship between job conditions and physical or mental health outcomes is not affected by gender. (Lowe and Northcott 1988 describe one such study). In other words, there is no evidence of an inherent difference in reactivity.
Findings from a random sample of full-time employed men and women in dual-earner couples illustrate this conclusion with respect to psychological distress. In a series of cross-sectional and longitudinal analyses, a matched pairs design was used that controlled for such individual-level variables as age, education, occupational prestige and marital-role quality, and for such couple-level variables as parental status, years married and household income (Barnett et al. 1993; Barnett et al. 1995; Barnett, Brennan and Marshall 1994). Positive experiences on the job were associated with low distress; insufficient skill discretion and overload were associated with high distress; experiences in the roles of partner and parent moderated the relationship between job experiences and distress; and change over time in skill discretion and overload were each associated with change over time in psychological distress. In no case was the effect of gender significant. In other words, the magnitude of these relationships was not affected by gender.
One important exception is tokenism (see, for example, Yoder 1991). Whereas “it is clear and undeniable that there is a considerable advantage in being a member of the male minority in any female profession” (Kadushin 1976), the opposite is not true. Women who are in the minority in a male work situation experience a considerable disadvantage. Such a difference is readily understandable in the context of men’s and women’s relative power and status in our culture.
Overall, studies of physical health outcomes also fail to reveal significant gender interaction effects. It appears, for example, that characteristics of work activity are stronger determinants of safety than are attributes of workers, and that women in traditionally male occupations suffer the same types of injury with approximately the same frequency as their male counterparts. Moreover, poorly designed protective equipment, not any inherent incapacity on the part of women in relation to the work, is often to blame when women in male-dominated jobs experience more injuries (Walsh, Sorensen and Leonard, 1995).
Two caveats are in order. First, no single study controls for all the gender-related covariates. Therefore, any conclusions about “gender” effects must be tentative. Secondly, because controls vary from study to study, comparisons between studies are difficult.
As increasing numbers of women enter the labour force and occupy jobs similar to those occupied by men, both the opportunity and the need for analysing the effect of gender on the job stress–illness relationship also increase. In addition, future research needs to refine the conceptualization and measurement of the stress construct to include job stressors important to women; extend interaction effects analyses to studies previously restricted to male or female samples, for example, studies of reproductive health and of stresses due to non-workplace variables; and examine the interaction effects of race and class as well as the joint interaction effects of gender x race and gender x class.
Major changes are taking place within the workforces of many of the world’s leading industrial nations, with members of ethnic minority groups making up increasingly larger proportions. However, little of the occupational stress research has focused on ethnic minority populations. The changing demographics of the world’s workforce give clear notice that these populations can no longer be ignored. This article briefly addresses some of the major issues of occupational stress in ethnic minority populations with a focus on the United States. However, much of the discussion should be generalizable to other nations of the world.
Much of the occupational stress research either excludes ethnic minorities, includes too few to allow meaningful comparisons or generalizations to be made, or does not report enough information about the sample to determine racial or ethnic participation. Many studies fail to make distinctions among ethnic minorities, treating them as one homogeneous group, thus minimizing the differences in demographic characteristics, culture, language and socio-economic status which have been documented both between and within ethnic minority groups (Olmedo and Parron 1981).
In addition to the failure to address issues of ethnicity, by far the greater part of research does not examine class or gender differences, or class by race and gender interactions. Moreover, little is known about the cross-cultural utility of many of the assessment procedures. Documentation used in such procedures is not adequately translated nor is there demonstrated equivalency between the standardized English and other language versions. Even when the reliabilities appear to indicate equivalence across ethnic or cultural groups, there is uncertainty about which symptoms in the scale are elicited in a reliable fashion, that is, whether the phenomenology of a disorder is similar across groups (Roberts, Vernon and Rhoades 1989).
Many assessment instruments inadequately assess conditions within ethnic minority populations; consequently results are often suspect. For example, many stress scales are based on models of stress as a function of undesirable change or readjustment. However, many minority individuals experience stress in large part as a function of ongoing undesirable situations such as poverty, economic marginality, inadequate housing, unemployment, crime and discrimination. These chronic stressors are not usually reflected in many of the stress scales. Models which conceptualize stress as resulting from the interplay between both chronic and acute stressors, and various internal and external mediating factors, are more appropriate for assessing stress in ethnic minority and poor populations (Watts-Jones 1990).
A major stressor affecting ethnic minorities is the prejudice and discrimination they encounter as a result of their minority status in a given society (Martin 1987; James 1994). It is a well-established fact that minority individuals experience more prejudice and discrimination as a result of their ethnic status than do members of the majority. They also perceive greater discrimination and fewer opportunities for advancement as compared with whites (Galinsky, Bond and Friedman 1993). Workers who feel discriminated against or who feel that there are fewer chances for advancement for people of their ethnic group are more likely to feel “burned out” in their jobs, care less about working hard and doing their jobs well, feel less loyal to their employers, are less satisfied with their jobs, take less initiative, feel less committed to helping their employers succeed and plan to leave their current employers sooner (Galinsky, Bond and Friedman 1993). Moreover, perceived prejudice and discrimination are positively correlated with self-reported health problems and higher blood pressure levels (James 1994).
An important focus of occupational stress research has been the relationship between social support and stress. However, there has been little attention paid to this variable with respect to ethnic minority populations. The available research tends to show conflicting results. For example, Hispanic workers who reported higher levels of social support had less job-related tension and fewer reported health problems (Gutierres, Saenz and Green 1994); ethnic minority workers with lower levels of emotional support were more likely to experience job burn-out, health symptoms, episodic job stress, chronic job stress and frustration; this relationship was strongest for women and for management as opposed to non-management personnel (Ford 1985). James (1994), however, did not find a significant relationship between social support and health outcomes in a sample of African-American workers.
Most models of job satisfaction have been derived and tested using samples of white workers. When ethnic minority groups have been included, they have tended to be African-Americans, and potential effects due to ethnicity were often masked (Tuch and Martin 1991). Research that is available on African-American employees tends to yield significantly lower scores on overall job satisfaction in comparison to whites (Weaver 1978, 1980; Staines and Quinn 1979; Tuch and Martin 1991). Examining this difference, Tuch and Martin (1991) noted that the factors determining job satisfaction were basically the same but that African-Americans were less likely to have the situations that led to job satisfaction. More specifically, extrinsic rewards increase African-Americans’ job satisfaction, but African-Americans are disadvantaged relative to whites on these variables. On the other hand, blue-collar incumbency and urban residence decrease job satisfaction for African-Americans, but African-Americans are overrepresented in these areas. Wright, King and Berg (1985) found that organizational variables (i.e., job authority, qualifications for the position and a sense that advancement within the organization is possible) were the best predictors of job satisfaction in their sample of black female managers, in keeping with previous research on primarily white samples.
Ethnic minority workers are more likely than their white counterparts to be in jobs with hazardous work conditions. Bullard and Wright (1986/1987) noted this propensity and indicated that the population differences in injuries are likely to be the result of racial and ethnic disparities in income, education, type of employment and other socio-economic factors correlated with exposure to hazards. One of the most likely reasons, they noted, was that occupational injuries are highly dependent on the job and industry category of the workers and ethnic minorities tend to work in more hazardous occupations.
Foreign workers who have entered the country illegally often experience special work stress and maltreatment. They often endure substandard and unsafe working conditions and accept less than minimum wages because of fear of being reported to the immigration authorities and they have few options for better employment. Most health and safety regulations, guidelines for use, and warnings are in English and many immigrants, illegal or otherwise, may not have a good understanding of written or spoken English (Sanchez 1990).
Some areas of research have almost totally ignored ethnic minority populations. For example, hundreds of studies have examined the relationship between Type A behaviour and occupational stress. White males constitute the most frequently studied groups, with ethnic minority men and women almost totally excluded. The available research - for example, a study by Adams et al. (1986) using a sample of college freshmen, and one by Gamble and Matteson (1992) investigating black workers - indicates the same positive relationship between Type A behaviour and self-reported stress as that found for white samples.
Similarly, little research on issues such as job control and work demands is available for ethnic minority workers, although these are central constructs in occupational stress theory. Available research tends to show that these are important constructs for ethnic minority workers as well. For example, African-American licensed practical nurses (LPNs) report significantly less decision authority and more dead-end jobs (and hazard exposures) than do white LPNs and this difference is not a function of educational differences (Marshall and Barnett 1991); the presence of low decision latitude in the face of high demands tends to be the pattern most characteristic of jobs with low socio-economic status, which are more likely to be held by ethnic minority workers (Waitzman and Smith 1994); and middle- and upper-level white men rate their jobs consistently higher than their ethnic minority (and female) peers on six work design factors (Fernandez 1981).
Thus, it appears that many research questions remain in the occupational stress and health arena as regards ethnic minority populations. These questions will not be answered until ethnic minority workers are included in study samples and in the development and validation of investigatory instruments.
The acute physiological adjustments recorded during the performance of problem-solving or psychomotor tasks in the laboratory include: raised heart rate and blood pressure; alterations in cardiac output and peripheral vascular resistance; increased muscle tension and electrodermal (sweat gland) activity; disturbances in breathing pattern; and modifications in gastrointestinal activity and immune function. The best studied neurohormonal responses are those of the catecholamines (adrenaline and noradrenaline) and cortisol. Noradrenaline is the primary transmitter released by the nerves of the sympathetic branch of the autonomic nervous system. Adrenaline is released from the adrenal medulla following stimulation of the sympathetic nervous system, while activation of the pituitary gland by higher centres in the brain results in the release of cortisol from the adrenal cortex. These hormones support autonomic activation during stress and are responsible for other acute changes, such as stimulation of the processes that govern blood clotting, and the release of stored energy supplies from adipose tissue. It is likely that these kinds of response will also be seen during job stress, but studies in which work conditions are simulated, or in which people are tested in their normal jobs, are required to demonstrate such effects.
A variety of methods is available to monitor these responses. Conventional psychophysiological techniques are used to assess autonomic responses to demanding tasks (Cacioppo and Tassinary 1990). Levels of stress hormones can be measured in the blood or urine, or in the case of cortisol, in the saliva. The sympathetic activity associated with challenge has also been documented by measures of noradrenaline spillover from nerve terminals, and by direct recording of sympathetic nervous activity with miniature electrodes. The parasympathetic or vagal branch of the autonomic nervous system typically responds to task performance with reduced activity, and this can, under certain circumstances, be indexed through recording heart rate variability or sinus arrhythmia. In recent years, power spectrum analysis of heart rate and blood pressure signals has revealed wave bands that are characteristically associated with sympathetic and parasympathetic activity. Measures of the power in these wavebands can be used to index autonomic balance, and have shown a shift towards the sympathetic branch at the expense of the parasympathetic branch during task performance.
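As a rough illustration of how spectral power bands are used to index autonomic balance, the following sketch computes a low-frequency to high-frequency (LF/HF) power ratio from a simulated, evenly resampled RR-interval series. The band limits follow common convention, but the signal itself is synthetic and purely illustrative:

```python
import numpy as np

fs = 4.0                       # resampling frequency of the RR tachogram (Hz)
t = np.arange(0, 300, 1 / fs)  # five minutes of evenly resampled samples

# Synthetic tachogram: mean RR of 0.8 s plus a larger LF (0.1 Hz) and a
# smaller HF (0.25 Hz) oscillation -- a sympathetically shifted balance.
rr = (0.8
      + 0.05 * np.sin(2 * np.pi * 0.10 * t)
      + 0.02 * np.sin(2 * np.pi * 0.25 * t))

# Periodogram of the mean-removed signal.
x = rr - rr.mean()
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / len(x)

# Sum power in the conventional bands and form the balance index.
lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()  # sympathetic + vagal
hf = power[(freqs >= 0.15) & (freqs < 0.40)].sum()  # mainly vagal
lf_hf_ratio = lf / hf
```

In real recordings the tachogram is irregularly sampled and must be interpolated before transformation, and windowed estimators such as Welch’s method are usually preferred over a raw periodogram.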
Few laboratory assessments of acute physiological responses have simulated work conditions directly. However, dimensions of task demand and performance that are relevant to work have been investigated. For example, as the demands of externally paced work increase (through faster pace or more complex problem solving), there is a rise in adrenaline level, heart rate and blood pressure, a reduction in heart rate variability and an increase in muscle tension. In comparison with self-paced tasks performed at the same rate, external pacing results in greater blood pressure and heart rate increases (Steptoe et al. 1993). In general, personal control over potentially stressful stimuli reduces autonomic and neuroendocrine activation in comparison with uncontrollable situations, although the effort of maintaining control over the situation itself has its own physiological costs.
Frankenhaeuser (1991) has suggested that adrenaline levels are raised when a person is mentally aroused or performing a demanding task, and that cortisol levels are raised when an individual is distressed or unhappy. Applying these ideas to job stress, Frankenhaeuser has proposed that job demand is likely to lead to increased effort and thus to raise levels of adrenaline, while lack of job control is one of the main causes of distress at work and is therefore likely to stimulate raised cortisol levels. Studies comparing levels of these hormones in people doing their normal work with levels in the same people at leisure have shown that adrenaline is normally raised when people are at work. Effects for noradrenaline are inconsistent and may depend on the amount of physical activity that people carry out during work and leisure time. It has also been shown that adrenaline levels at work correlate positively with levels of job demand. In contrast, cortisol levels have not been shown typically to be raised in people at work, and it is yet to be demonstrated that cortisol levels vary according to the degree of job control. In the “Air Traffic Controller Health Change Study”, only a small proportion of workers produced consistent increases in cortisol as the objective workload became greater (Rose and Fogg 1993).
Thus only adrenaline among the stress hormones has been shown conclusively to rise in people at work, and to do so according to the level of demand they experience. There is evidence that levels of prolactin increase in response to stress while levels of testosterone decrease. However, studies of these hormones in people at work are very limited. Acute changes in the concentration of cholesterol in the blood have also been observed with increased workload, but the results are not consistent (Niaura, Stoney and Herbst 1992).
As far as cardiovascular variables are concerned, it has repeatedly been found that blood pressure is higher in men and women during work than either after work or during equivalent times of day spent at leisure. These effects have been observed both with self-monitored blood pressure and with automated portable (or ambulatory) monitoring instruments. Blood pressure is especially high during periods of increased work demand (Rose and Fogg 1993). It has also been found that blood pressure rises with emotional demands, for example, in studies of paramedics attending the scenes of accidents. However, it is often difficult to determine whether blood pressure fluctuations at work are due to psychological demands or to associated physical activity and changes in posture. The raised blood pressure recorded at work is especially pronounced among people reporting high job strain according to the Demand-Control model (Schnall et al. 1990).
Heart rate has not been shown to be consistently raised during work. Acute elevations of heart rate may nevertheless be elicited by disruption of work, for example with breakdown of equipment. Emergency workers such as fire-fighters exhibit extremely fast heart rates in response to alarm signals at work. On the other hand, high levels of social support at work are associated with reduced heart rates. Abnormalities of cardiac rhythm may also be elicited by stressful working conditions, but the pathological significance of such responses has not been established.
Gastrointestinal problems are commonly reported in studies of job stress (see “Gastrointestinal problems” below). Unfortunately, it is difficult to assess the physiological systems underlying gastrointestinal symptoms in the work setting. Acute mental stress has variable effects on gastric acid secretion, stimulating large increases in some individuals and reduced output in others. Shift workers have a particularly high prevalence of gastrointestinal problems, and it has been suggested that these may arise when diurnal rhythms in the central nervous system’s control of gastric acid secretion are disrupted. Anomalies of small bowel motility have been recorded using radiotelemetry in patients diagnosed with irritable bowel syndrome while they go about their everyday lives. Health complaints, including gastrointestinal symptoms, have been shown to co-vary with perceived workload, but it is not clear whether this reflects objective changes in physiological function or patterns of symptom perception and reporting.
Researchers may disagree on the meaning of the term stress. However, there is a basic agreement that perceived work-related stress may be implicated in behavioural outcomes such as absenteeism, substance abuse, sleep disturbances, smoking and caffeine use (Kahn and Byosiere 1992). Recent evidence supporting these relationships is reviewed in this chapter. Emphasis is placed upon the aetiological role of work-related stress in each of these outcomes. There are qualitative differences, along several dimensions, among these outcomes. To illustrate, in contrast to the other behavioural outcomes, which are all considered problematic to the health of those engaging in them excessively, absenteeism, while detrimental to the organization, is not necessarily harmful to those employees who are absent from work. There are, however, common problems in the research on these outcomes, as discussed in this section.
The varying definitions of work-related stress have already been mentioned above. By way of illustration, consider the different conceptualizations of stress on the one hand as events and on the other as chronic demands at the workplace. These two approaches to stress measurement have seldom been combined in a single study designed to predict the sorts of behavioural outcome considered here. The same generalization is relevant to the combined use, in the same study, of family-related and work-related stress to predict any of these outcomes. Most of the studies referred to in this chapter were based on a cross-sectional design and employees’ self-reports on the behavioural outcome in question. In most of the research that concerned behavioural outcomes of work-related stress, the joint moderating or mediating roles of predisposing personality variables, like the Type A behaviour pattern or hardiness, and situational variables like social support and control, have hardly been investigated. Seldom have antecedent variables, like objectively measured job stress, been included in the research designs of the studies reviewed here. Finally, the research covered in this article employed divergent methodologies. Because of these limitations, a frequently encountered conclusion is that the evidence for work-related stress as a precursor of a behavioural outcome is inconclusive.
Beehr (1995) considered the question of why so few studies have systematically examined the associations between work-related stress and substance abuse. He argued that such neglect may be due in part to the failure of researchers to find these associations. To this failure, one should add the well-known bias of periodicals against publishing research that reports null results. To illustrate the inconclusiveness of the evidence linking stress and substance abuse, consider two large-scale national samples of employees in the United States. The first, by French, Caplan and Van Harrison (1982), failed to find significant correlations between types of work-related stress and either smoking, drug use or on-the-job caffeine ingestion. The second, an earlier research study by Mangione and Quinn (1975), did report such associations.
The study of the behavioural outcomes of stress is further complicated because they frequently appear in pairs or triads. Different combinations of outcomes are the rule rather than the exception. The very close association of stress, smoking and caffeine is alluded to below. Yet another example concerns the comorbidity of post-traumatic stress disorder (PTSD), alcoholism and drug abuse (Kofoed, Friedman and Peck 1993). This is a basic characteristic of several behavioural outcomes considered in this article. It has led to the construction of “dual diagnosis” and “triple diagnosis” schemes and to the development of comprehensive, multifaceted treatment approaches. An example of such an approach is that in which PTSD and substance abuse are treated simultaneously (Kofoed, Friedman and Peck 1993).
The pattern represented by the appearance of several outcomes in a single individual may vary, depending on background characteristics and genetic and environmental factors. The literature on stress outcomes is only beginning to address the complex questions involved in identifying the specific pathophysiological and neurobiological disease models leading to different combinations of outcome entities.
A large body of epidemiological, clinical and pathological studies relates cigarette smoking to the development of cardiovascular heart disease and other chronic diseases. Consequently, there is a growing interest in the pathway leading from stress, including stress at work, to smoking behaviour. Stress, and the emotional responses associated with it, anxiety and irritability, are known to be attenuated by smoking. However, these effects have been shown to be short-lived (Parrott 1995). Impairments of mood and affective states tend to occur in a repetitive cycle between each cigarette smoked. This cycle provides a clear pathway leading to the addictive use of cigarettes (Parrott 1995). Smokers, therefore, obtain only a short-lived relief from adverse states of anxiety and irritability that follow the experience of stress.
The aetiology of smoking is multifactorial (like that of most other behavioural outcomes considered here). To illustrate, consider a recent review of smoking among nurses. Nurses, the largest professional group in health care, smoke excessively compared with the adult population (Adriaanse et al. 1991). According to this review, that is true for both male and female nurses, and is explained by the work stress, lack of social support and unmet expectations that characterize nurses’ professional socialization. Nurses’ smoking is considered a special public health problem since nurses often act as role models for patients and their families.
Smokers who express high motivation to smoke have reported, in several studies, above-average stress that they had experienced before smoking, rather than below-average stress after smoking (Parrott 1995). Consequently, stress management and anxiety reduction programmes in the workplace do have the potential of influencing motivation for smoking. However, workplace-based smoking-cessation programmes do bring to the fore the conflict between health and performance. Among aviators, as an example, smoking is a health hazard in the cockpit. However, pilots who are required to abstain from smoking during and before flights may suffer cockpit performance decrements (Sommese and Patterson 1995).
A recurrent problem is that often researchers do not distinguish between drinking and problem-drinking behaviour (Sadava 1987). Problem-drinking is associated with adverse health or performance consequences. Its aetiology has been shown to be associated with several factors. Among them, the literature refers to prior incidents of depression, lack of a supportive family environment, impulsiveness, being female, other concurrent substance abuse and stress (Sadava 1987). The distinction between the simple act of drinking alcohol and problem drinking is important because of the current controversy on the reported beneficial effects of alcohol on high-density lipoprotein (HDL) cholesterol and on the incidence of heart disease. Several studies have shown a J-shaped or U-shaped relationship between alcohol ingestion and the incidence of cardiovascular heart disease (Pohorecky 1991).
The hypothesis that people ingest alcohol, even in an incipiently abusive pattern, to reduce stress and anxiety is no longer accepted as adequate. Contemporary approaches to alcohol abuse view it as determined by processes set forth in a multifactorial model or models (Gorman 1994). Among risk factors for alcohol abuse, recent reviews refer to the following: sociocultural factors (e.g., whether alcohol is readily available and its use tolerated, condoned or even promoted), socio-economic factors (e.g., the price of alcohol), environmental factors (e.g., alcohol advertising and licensing laws, which affect consumers’ motivation to drink), interpersonal influences (such as family drinking habits) and employment-related factors, including stress at work (Gorman 1994). It follows that stress is but one of several factors in a multidimensional model that explains alcohol abuse.
The practical consequence of the multifactorial model view of alcoholism is the decrease in the emphasis on the role of stress in the diagnosis, prevention and treatment of substance abuse in the workplace. As noted by a recent review of this literature (Peyser 1992), in specific job situations, such as those illustrated below, attention to work-related stress is important in formulating preventive policies directed at substance abuse.
Despite considerable research on stress and alcohol, the mechanisms that link them are not entirely understood. The most widely accepted hypothesis is that alcohol disrupts the subject’s initial appraisal of stressful information by constraining the spread of activation of associated information previously stored in long-term memory (Petraitis, Flay and Miller 1995).
Work organizations contribute to and may induce drinking behaviour, including problem drinking, by three basic processes documented in the research literature. First, drinking, abusive or not, may be affected by the development of organizational norms with respect to drinking on the job, including the local “official” definition of problem drinking and the mechanisms for its control established by management. Secondly, some stressful working conditions, such as sustained overload, machine-paced work or lack of control, may produce alcohol abuse as a coping strategy for alleviating the stress. Thirdly, work organizations may explicitly or implicitly encourage the development of occupationally based drinking subcultures, such as those that often emerge among professional drivers of heavy vehicles (James and Ames 1993).
In general, stress plays a different role in provoking drinking behaviour in different occupations, age groups, ethnic categories and other social groupings. Thus stress probably plays a predisposing role with respect to alcohol consumption among adolescents, but much less so among women, the elderly and college-age social drinkers (Pohorecky 1991).
The social stress model of substance abuse (Lindenberg, Reiskin and Gendrop 1994) suggests that the likelihood of employees’ drug abuse is influenced by the level of environmental stress, the social support relevant to the experienced stress and individual resources, particularly social competence. There are indications that drug abuse among certain minority groups (like Native American youth living on reservations: see Oetting, Edwards and Beauvais 1988) is influenced by the prevalence of acculturation stress among them. However, the same social groups are also exposed to adverse social conditions such as poverty, prejudice and limited economic, social and educational opportunities.
Caffeine is the most widely consumed pharmacologically active substance in the world. The evidence bearing upon its possible implications for human health, that is whether it has chronic physiological effects on habitual consumers, is as yet inconclusive (Benowitz 1990). It has long been suspected that repeated exposure to caffeine may produce tolerance to its physiological effects (James 1994). The consumption of caffeine is known to improve physical performance and endurance during prolonged activity at submaximal intensity (Nehlig and Debry 1994). Caffeine’s physiological effects are linked to the antagonism of adenosine receptors and to the increased production of plasma catecholamines (Nehlig and Debry 1994).
The study of the relationship of work-related stress to caffeine ingestion is complicated by the significant interdependence of coffee consumption and smoking (Conway et al. 1981). A meta-analysis of six epidemiological studies (Swanson, Lee and Hopp 1994) showed that about 86% of smokers consumed coffee, while only 77% of non-smokers did so. Three major mechanisms have been suggested to account for this close association: (1) a conditioning effect; (2) a reciprocal interaction, that is, caffeine intake increases arousal while nicotine intake decreases it; and (3) the joint effect of a third variable on both. Stress, and particularly work-related stress, is a possible third variable influencing both caffeine and nicotine intake (Swanson, Lee and Hopp 1994).
The modern era of sleep research began in the 1950s, with the discovery that sleep is a highly active state rather than a passive condition of nonresponsiveness. The most prevalent type of sleep disturbance, insomnia, may occur in a transient short-term form or in a chronic form. Stress is probably the most frequent cause of transient insomnia (Gillin and Byerley 1990). Chronic insomnia usually results from an underlying medical or psychiatric disorder. Between one-third and two-thirds of patients with chronic insomnia have a recognizable psychiatric illness (Gillin and Byerley 1990).
One of the mechanisms suggested is that the effect of stress on sleep disturbances is mediated via certain changes in the cerebral system at different levels, and changes in the biochemical body functions that disturb the 24-hour rhythms (Gillin and Byerley 1990). There is some evidence that the above linkages are moderated by personality characteristics, such as the Type A behaviour pattern (Koulack and Nesca 1992). Stress and sleep disturbances may reciprocally influence each other: stress may promote transient insomnia, which in turn causes stress and increases the risk of episodes of depression and anxiety (Partinen 1994).
Chronic stress associated with monotonous, machine-paced jobs coupled with the need for vigilance (jobs frequently found in continuous-processing manufacturing industries) may lead to sleep disturbances, subsequently causing decrements in performance (Krueger 1989). There is some evidence of synergetic effects among work-related stress, circadian rhythms and reduced performance (Krueger 1989). The adverse effects of sleep loss, interacting with overload and a high level of arousal, on certain important aspects of job performance have been documented in several studies of sleep deprivation among hospital doctors at the junior level (Spurgeon and Harrington 1989).
The study by Mattiasson et al. (1990) provides intriguing evidence linking chronic job stress, sleep disturbances and increases in plasma cholesterol. In this study, 715 male shipyard employees exposed to the stress of threatened unemployment were systematically compared with 261 controls before and after the threat became apparent. It was found that among the shipyard employees exposed to job insecurity, but not among the controls, sleep disturbances were positively correlated with increases in total cholesterol. This was a naturalistic field study in which the period of uncertainty preceding actual layoffs was allowed to elapse for about a year after some employees received notices concerning the impending layoffs. Thus the stress studied was real, severe and could be considered chronic.
Absence behaviour may be viewed as employee coping behaviour that reflects the interaction of perceived job demands and control, on the one hand, and self-assessed health and family conditions on the other. Absenteeism has several major dimensions, including duration, spells and reasons for being absent. It was shown in a European sample that about 60% of the hours lost to absenteeism were due to illness (Ilgen 1990). To the extent that work-related stress was implicated in these illnesses, there should be some relationship between stress on the job and that part of absenteeism classified as sick days. The literature on absenteeism covers primarily blue-collar employees, and few studies have included stress in a systematic way (McKee, Markham and Scott 1992). Jackson and Schuler’s meta-analysis (1985) of the consequences of role stress reported an average correlation of 0.09 between role ambiguity and absence and -0.01 between role conflict and absence. As several meta-analytic studies of the literature on absenteeism show, stress is but one of many variables accounting for these phenomena, so we should not expect work-related stress and absenteeism to be strongly correlated (Beehr 1995).
The literature on absenteeism suggests that the relationship between work-related stress and absenteeism may be mediated by employee-specific characteristics. For example, the literature refers to the propensity to use avoidance coping in response to stress at work, and to being emotionally exhausted or physically fatigued (Saxton, Phillips and Blakeney 1991). To illustrate, Kristensen’s (1991) study of several thousand Danish slaughterhouse employees over a one-year period has shown that those who reported high job stress had significantly higher absence rates and that perceived health was closely associated with absenteeism due to illness.
Several studies of the relationship between stress and absenteeism provide evidence supporting the conclusion that it may be occupationally determined (Baba and Harris 1989). To illustrate, work-related stress among managers tends to be associated with the incidence of absenteeism but not with days lost attributed to illness, whereas this is not so with shop-floor employees (Cooper and Bramwell 1992). The occupational specificity of the stresses predisposing employees to be absent has been regarded as a major explanation of the meagre amount of absence variance explained by work-related stress across many studies (Baba and Harris 1989). Several studies have found that among blue-collar employees in jobs considered stressful - that is, jobs combining the characteristics of assembly-line work (namely, a very short cycle of operations and a piece-rate wage system) - job stress is a strong predictor of unexcused absence (for a recent review of these studies, see McKee, Markham and Scott 1992; note that Baba and Harris 1989 do not support the conclusion that job stress is a strong predictor of unexcused absence).
The literature on stress and absenteeism provides a convincing example of a limitation noted in the introduction: most research on stress-behavioural outcome relations fails to cover systematically, in its design, both work and non-work stresses. In research on absenteeism, non-work stress contributed more than work-related stress to the prediction of absence, lending support to the view that absence may be more a non-work behaviour than a work-related behaviour (Baba and Harris 1989).
Jobs can have a substantial impact on the affective well-being of job holders. In turn, the quality of workers’ well-being on the job influences their behaviour, decision making and interactions with colleagues, and spills over into family and social life as well.
Research in many countries has pointed to the need to define well-being in terms of two separate dimensions that may be viewed as independent of each other (Watson, Clark and Tellegen 1988; Warr 1994). These dimensions may be referred to as “pleasure” and “arousal”. As illustrated in figure 34.9, a particular degree of pleasure or displeasure may be accompanied by high or low levels of mental arousal, and mental arousal may be either pleasurable or unpleasurable. This is indicated in terms of the three axes of well-being which are suggested for measurement: displeasure-to-pleasure, anxiety-to-comfort and depression-to-enthusiasm.
Job-related well-being has often been measured merely along the horizontal axis, extending from “feeling bad” to “feeling good”. The measurement is usually made with reference to a scale of job satisfaction, and data are obtained by workers’ indicating their agreement or disagreement with a series of statements describing their feelings about their jobs. However, job satisfaction scales do not take into account differences in mental arousal, and are to that extent relatively insensitive. Additional forms of measurement are also needed, in terms of the other two axes in the figure.
When low scores on the horizontal axis are accompanied by raised mental arousal (upper left quadrant), low well-being is typically evidenced in the form of anxiety and tension; low pleasure in association with low mental arousal (lower left), however, is observable as depression and associated feelings. Conversely, high job-related pleasure may be accompanied by positive feelings characterized either by enthusiasm and energy (3b) or by psychological relaxation and comfort (2b). The latter distinction is sometimes described in terms of motivated job satisfaction (3b) versus resigned, apathetic job satisfaction (2b).
In studying the impact of organizational and psychosocial factors on employee well-being, it is desirable to examine all three of the axes. Questionnaires are widely used for this purpose. Job satisfaction (1a to 1b) may be examined in two forms, sometimes referred to as “facet-free” and “facet-specific” job satisfaction. Facet-free, or overall, job satisfaction is an overarching set of feelings about one’s job as a whole, whereas facet-specific satisfactions are feelings about particular aspects of a job. Principal facets include pay, working conditions, one’s supervisor and the nature of the work undertaken.
These several forms of job satisfaction are positively intercorrelated, and it is sometimes appropriate merely to measure overall, facet-free satisfaction rather than to examine separate, facet-specific satisfactions. A widely used general question is “On the whole, how satisfied are you with the work you do?”. Commonly used responses are very dissatisfied, a little dissatisfied, moderately satisfied, very satisfied and extremely satisfied, designated by scores from 1 to 5 respectively. In national surveys it is usual to find that about 90% of employees report themselves as satisfied to some degree, so a more sensitive measuring instrument is often desirable to yield more differentiated scores.
A multi-item approach is usually adopted, perhaps covering a range of different facets. For instance, several job satisfaction questionnaires ask about a person’s satisfaction with facets of the following kinds: the physical work conditions; the freedom to choose your own method of working; your fellow workers; the recognition you get for good work; your immediate boss; the amount of responsibility you are given; your rate of pay; your opportunity to use your abilities; relations between managers and workers; your workload; your chance of promotion; the equipment you use; the way your firm is managed; your hours of work; the amount of variety in your job; and your job security. An average satisfaction score may be calculated across all the items, responses to each item being scored from 1 to 5, for instance (see the preceding paragraph). Alternatively, separate values can be computed for “intrinsic satisfaction” items (those dealing with the content of the work itself) and “extrinsic satisfaction” items (those referring to the context of the work, such as colleagues and working conditions).
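As an arithmetical illustration of the scoring just described, the following sketch computes an overall mean together with separate intrinsic and extrinsic subscale means. The item list is abbreviated, and the assignment of items to the two subscales is illustrative only, not a standard keying.

```python
# Illustrative scoring of a multi-item job satisfaction scale.
# Each item is answered on the 1 ("very dissatisfied") to
# 5 ("extremely satisfied") scale described above.
# The intrinsic/extrinsic keying below is hypothetical.
from statistics import mean

INTRINSIC = ["freedom of method", "responsibility", "use of abilities", "variety"]
EXTRINSIC = ["physical conditions", "fellow workers", "pay", "job security"]

def satisfaction_scores(responses):
    """responses: dict mapping item name -> score (1 to 5)."""
    overall = mean(responses.values())
    intrinsic = mean(responses[item] for item in INTRINSIC)
    extrinsic = mean(responses[item] for item in EXTRINSIC)
    return overall, intrinsic, extrinsic

# Hypothetical responses for one worker
answers = {
    "freedom of method": 4, "responsibility": 5,
    "use of abilities": 4, "variety": 3,
    "physical conditions": 2, "fellow workers": 4,
    "pay": 2, "job security": 3,
}
print(satisfaction_scores(answers))  # (3.375, 4.0, 2.75)
```

A worker of this kind would appear moderately satisfied overall while the subscale scores reveal high intrinsic but low extrinsic satisfaction, which is exactly the distinction that a single facet-free question would miss.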
Self-report scales which measure axes two and three have often covered only one end of the possible distribution. For example, some scales of job-related anxiety ask about a worker’s feelings of tension and worry when on the job (2a), but do not in addition test for more positive forms of affect on this axis (2b). Based on studies in several settings (Watson, Clark and Tellegen 1988; Warr 1990), a possible approach is as follows.
Axes 2 and 3 may be examined by putting this question to workers: “Thinking of the past few weeks, how much of the time has your job made you feel each of the following?”, with response options of never, occasionally, some of the time, much of the time, most of the time, and all the time (scored from 1 to 6 respectively). Anxiety-to-comfort ranges across these states: tense, anxious, worried, calm, comfortable and relaxed. Depression-to-enthusiasm covers these states: depressed, gloomy, miserable, motivated, enthusiastic and optimistic. In each case, the first three items should be reverse-scored, so that a high score always reflects high well-being, and the items should be mixed randomly in the questionnaire. A total or average score can be computed for each axis.
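The scoring rule just described - reverse-score the three negative items so that a high score always reflects high well-being, then total or average - can be sketched as follows (a minimal illustration; the response data are hypothetical):

```python
# Scoring the anxiety-to-comfort and depression-to-enthusiasm axes.
# Responses run from 1 ("never") to 6 ("all the time"); the three
# negative items of each axis are reverse-scored (x becomes 7 - x).
from statistics import mean

ANXIETY_COMFORT = ["tense", "anxious", "worried", "calm", "comfortable", "relaxed"]
DEPRESSION_ENTHUSIASM = ["depressed", "gloomy", "miserable",
                         "motivated", "enthusiastic", "optimistic"]
NEGATIVE = {"tense", "anxious", "worried", "depressed", "gloomy", "miserable"}

def axis_score(responses, items):
    """Average score on one axis; high values reflect high well-being."""
    return mean(7 - responses[i] if i in NEGATIVE else responses[i] for i in items)

# Hypothetical responses for one worker
ratings = {"tense": 5, "anxious": 4, "worried": 4,
           "calm": 2, "comfortable": 3, "relaxed": 2}
print(axis_score(ratings, ANXIETY_COMFORT))  # 2.5
```

A score of 2.5 on the six-point anxiety-to-comfort axis indicates well-being well below the scale midpoint: this worker frequently feels tense and worried and rarely calm or relaxed.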
More generally, it should be noted that affective well-being is not determined solely by a person’s current environment. Although job characteristics can have a substantial effect, well-being is also a function of some aspects of personality; people differ in their baseline well-being as well as in their reactions to particular job characteristics.
Relevant personality differences are usually described in terms of individuals’ continuing affective dispositions. The personality trait of positive affectivity (corresponding to the upper right-hand quadrant) is characterized by generally optimistic views of the future, emotions which tend to be positive and behaviours which are relatively extroverted. On the other hand, negative affectivity (corresponding to the upper left-hand quadrant) is a disposition to experience negative emotional states. Individuals with high negative affectivity tend in many situations to feel nervous, anxious or upset; this trait is sometimes measured by means of personality scales of neuroticism. Positive and negative affectivities are regarded as traits, that is, they are relatively constant from one situation to another, whereas a person’s well-being is viewed as an emotional state which varies in response to current activities and environmental influences.
Measures of well-being necessarily tap both the trait (the affective disposition) and the state (current affect). This fact should be borne in mind in examining people’s well-being scores on an individual basis, but it is not a substantial problem in studies of average findings for a group of employees. In longitudinal investigations of group scores, observed changes in well-being can be attributed directly to changes in the environment, since every person’s baseline well-being is held constant across the occasions of measurement; and in cross-sectional group studies an average affective disposition is recorded as a background influence in all cases.
Note also that affective well-being may be viewed at two levels. The more focused perspective relates to a specific domain, such as an occupational setting: this may be a question of “job-related” well-being (as discussed here) and is measured through scales which directly concern feelings when a person is at work. However, more wide-ranging, “context-free” or “general,” well-being is sometimes of interest, and measurement of that wider construct requires a less specific focus. The same three axes should be examined in both cases, and more general scales are available for life satisfaction or general distress (axis 1), context-free anxiety (axis 2) and context-free depression (axis 3).
When a human being or an animal is subjected to a psychological stress situation, there is a general response involving psychological as well as somatic (bodily) responses. This is a general alarm response, or general activation or wake-up call, which affects all physiological responses, including the musculoskeletal system, the vegetative system (the autonomic system), the hormones and also the immune system.
Since the 1960s, we have been learning how the brain, and through it psychological factors, regulate and influence all physiological processes, whether directly or indirectly. Previously it was held that large and essential parts of our physiology were regulated “unconsciously”, or not by brain processes at all. The nerves that regulate the gut, glands and the cardiovascular system were thought to be “autonomic”, or independent of the central nervous system (CNS); similarly, the hormones and the immune system were thought to be beyond central nervous control. However, the autonomic nervous system is regulated by the limbic structures of the brain, and may be brought under direct instrumental control through classical and instrumental learning procedures. The fact that the central nervous system controls endocrinological processes is also well established.
The last development to undercut the view that the CNS was isolated from many physiological processes was the evolution of psychoimmunology. It has now been demonstrated that the brain (and psychological processes) may influence immune processes, either via the endocrine system or by direct innervation of lymphoid tissue. The white blood cells themselves may also be influenced directly by signal molecules from nervous tissue. The demonstrations of depressed lymphocyte function following bereavement (Bartrop et al. 1977), of conditioning of the immunosuppressive response in animals (Cohen et al. 1979) and of effects of psychological processes on animal survival (Riley 1981) were milestones in the development of psychoimmunology.
It is now well established that psychological stress produces changes in the level of antibodies in the blood and in the levels of many of the white blood cells. A brief stress period of 30 minutes may produce significant increases in lymphocytes and natural killer (NK) cells. Following more long-lasting stress situations, changes are also found in the other components of the immune system. Changes have been reported in the counts of almost all types of white blood cell and in the levels of immunoglobulins and complement; the changes affect important elements of the total immune response and the “immune cascade” as well. These changes are complex and seem to be bidirectional: both increases and decreases have been reported. The changes seem to depend not only on the stress-inducing situation, but also on what type of coping and defence mechanisms the individual uses to handle the situation. This is particularly clear when the effects of real long-lasting stress situations are studied, for instance those associated with the job or with difficult life situations (“life stressors”). Highly specific relationships between coping and defence styles and several subsets of immune cells (numbers of lymphocytes, leukocytes and monocytes; total T cells and NK cells) have been described (Olff et al. 1993).
The search for immune parameters as markers of long-lasting, sustained stress has not been very successful. Since the relationships between immunoglobulins and stress factors have been demonstrated to be so complex, there is, understandably, no simple marker available. Such relationships as have been found are sometimes positive, sometimes negative. As far as psychological profiles are concerned, the correlation matrix obtained with one and the same psychological battery shows somewhat different patterns from one occupational group to another (Endresen et al. 1991). Within each group, the patterns seem stable over long periods of time, up to three years. It is not known whether there are genetic factors that influence the highly specific relationships between coping styles and immune responses; if so, the manifestations of these factors must be highly dependent on interaction with life stressors. Nor is it known whether it is possible to follow an individual’s stress level over a long period, given that the individual’s coping, defence and immune response style is known. This type of research is being pursued with highly selected personnel, for instance astronauts.
There may be a major flaw in the basic argument that immunoglobulins can be used as valid health risk markers. The original hypothesis was that low levels of circulating immunoglobulins might signal a low resistance and low immune competence. However, low values may not signal low resistance: they may only signal that this particular individual has not been challenged by infectious agents for a while - in fact, they may signal an extraordinary degree of health. The low values sometimes reported from returning astronauts and Antarctic personnel may not be a signal of stress, but only of the low levels of bacterial and viral challenge in the environment they have left.
There are many anecdotes in the clinical literature suggesting that psychological stress or critical life events can have an impact on the course of serious and non-serious illness. In the opinion of some, placebos and “alternative medicine” may exert their effects through psychoimmunological mechanisms. There are claims that reduced (and sometimes increased) immune competence should lead to increased susceptibility to infections in animals and in humans, and to inflammatory states like rheumatoid arthritis as well. It has been demonstrated convincingly that psychological stress affects the immune response to various types of inoculations. Students under examination stress report more symptoms of infectious illness in this period, which coincides with poorer cellular immune control (Glaser et al. 1992). There are also some claims that psychotherapy, in particular cognitive stress-management training, together with physical training, may affect the antibody response to viral infection.
There are also some positive findings with regard to cancer development, but only a few, and the controversy over the claimed relationship between personality and cancer susceptibility has not been resolved. Replications should be extended to include measures of immune response as well as other factors, such as lifestyle, which may be related to psychological make-up even though any effect on cancer may be a direct consequence of lifestyle.
There is ample evidence that acute stress alters immune functions in human subjects and that chronic stress may also affect these functions. But to what extent are these changes valid and useful indicators of job stress? To what extent are immune changes - if they occur - a real health risk factor? There is no consensus in the field as of the time of this writing (1995).
Sound clinical trials and sound epidemiological research are required to advance in this field. But this type of research requires more funds than are available to the researchers. This work also requires an understanding of the psychology of stress, which is not always available to immunologists, and a profound understanding of how the immune system operates, which is not always available to psychologists.
The scientific evidence suggesting that exposure to job stress increases the risk of cardiovascular disease grew substantially beginning in the mid-1980s (Gardell 1981; Karasek and Theorell 1990; Johnson and Johansson 1991). Cardiovascular disease (CVD) remains the number one cause of death in economically developed societies and contributes to rising medical care costs. Diseases of the cardiovascular system include coronary heart disease (CHD), hypertensive disease, cerebrovascular disease and other disorders of the heart and circulatory system.
Most manifestations of coronary heart disease are caused partly by narrowing of the coronary arteries due to atherosclerosis. Coronary atherosclerosis is known to be influenced by a number of individual factors including: family history, dietary intake of saturated fat, high blood pressure, cigarette smoking and physical exercise. Except for heredity, all these factors could be influenced by the work environment. A poor work environment may decrease the willingness to stop smoking and adopt a healthy lifestyle. Thus, an adverse work environment could influence coronary heart disease via its effects on the classical risk factors.
There are also direct effects of stressful work environments on neurohormonal elevations as well as on heart metabolism. A combination of physiological mechanisms, shown to be related to stressful work activities, may increase the risk of myocardial infarction. Energy-mobilizing hormones, which increase during periods of excessive stress, may make the heart more vulnerable to actual death of the muscle tissue. Conversely, energy-restoring and repairing hormones, which protect the heart muscle from the adverse effects of the energy-mobilizing hormones, decrease during periods of stress. During emotional (and physical) stress the heart beats faster and harder over an extended period of time, leading to excessive oxygen consumption in the heart muscle and an increased possibility of a heart attack. Stress may also disturb the cardiac rhythm. A disturbance associated with a fast heart rhythm is called a tachyarrhythmia; when the heart rate is so fast that the heartbeat becomes inefficient, life-threatening ventricular fibrillation may result.
Early epidemiological studies of psychosocial working conditions associated with CVD suggested that high levels of work demands increased CHD risk. For example, a prospective study of Belgian bank employees found that those in a privately owned bank had a significantly higher incidence of myocardial infarction than workers in public banks, even after adjustment was made for biomedical risk factors (Kornitzer et al. 1982). This study indicated a possible relationship between work demands (which were higher in the private banks) and risk of myocardial infarction. Early studies also indicated a higher incidence of myocardial infarction among lower-level employees in large companies (Pell and d’Alonzo 1963). This raised the possibility that psychosocial stress may not primarily be a problem for people with a high degree of responsibility, as had previously been assumed.
Since the early 1980s, many epidemiological studies have examined the specific hypothesis suggested by the Demand/Control model developed by Karasek and others (Karasek and Theorell 1990; Johnson and Johansson 1991). This model states that job strain results from work organizations that combine high performance demands with low levels of control over how the work is to be done. According to the model, work control can be understood as “job decision latitude”, or the task-related decision-making authority permitted by a given job or work organization. The model predicts that workers who are exposed to high demand and low control over an extended period of time will have a higher risk of neurohormonal arousal, which may result in adverse pathophysiological effects on the cardiovascular system - and could eventually lead to increased risk of atherosclerotic heart disease and myocardial infarction.
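In empirical work in this tradition, workers are commonly classified into four quadrants (high strain, active, low strain, passive) by splitting the demand and control scales, often at the sample medians. The following minimal sketch illustrates that common convention; the scale scores are hypothetical, and median splits are only one of several operationalizations used in the literature.

```python
# Classifying workers into Demand/Control quadrants by median split.
# Demand and control scores here are hypothetical questionnaire scale scores.
from statistics import median

def strain_quadrant(demand, control, demand_median, control_median):
    """Assign one worker to a quadrant of the Demand/Control model."""
    high_d = demand > demand_median
    high_c = control > control_median
    if high_d and not high_c:
        return "high strain"   # the hypothesized high-risk group
    if high_d and high_c:
        return "active"
    if not high_d and high_c:
        return "low strain"
    return "passive"

# Hypothetical scores for a small sample of workers
demands  = [14, 13, 12, 7, 6, 8]
controls = [10, 16, 6, 14, 7, 12]
d_med, c_med = median(demands), median(controls)
quadrants = [strain_quadrant(d, c, d_med, c_med)
             for d, c in zip(demands, controls)]
print(quadrants)
```

The first and third workers, with high demands but below-median control, fall into the “high strain” cell that the model predicts carries elevated cardiovascular risk.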
Between 1981 and 1993, the majority of the 36 studies that examined the effects of high demands and low control on cardiovascular disease found significant positive associations. These studies employed a variety of research designs and were performed in Sweden, Japan, the United States, Finland and Australia. A variety of outcomes was examined, including CHD morbidity and mortality as well as CHD risk factors such as blood pressure, cigarette smoking, left ventricular mass index and CHD symptoms. Several recent review papers summarize these studies (Kristensen 1989; Baker et al. 1992; Schnall, Landsbergis and Baker 1994; Theorell and Karasek 1996). These reviewers note that the epidemiological quality of the studies is high and, moreover, that the stronger study designs have generally found greater support for the Demand/Control model. In general, adjustment for standard cardiovascular risk factors neither eliminates nor significantly reduces the magnitude of the association between the high demand/low control combination and the risk of cardiovascular disease.
It is important to note, however, that the methodology in these studies varied considerably. The most important distinction is that some studies used respondents’ own descriptions of their work situations, whereas others used an “average score” method based on aggregating the responses of a nationally representative sample of workers within their respective job title groups. Studies utilizing self-reported work descriptions showed higher relative risks (2.0–4.0 versus 1.3–2.0). Psychological job demands were shown to be relatively more important in studies utilizing self-reported data than in studies utilizing aggregated data. The work control variables were more consistently associated with excess CVD risk regardless of which exposure method was used.
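The “average score” method described above imputes each worker’s exposure from the mean response within the same job title group rather than from that worker’s own report. The following toy sketch illustrates the aggregation step; the job titles and scores are invented for illustration, not drawn from the cited studies.

```python
from collections import defaultdict

# Each record: (job_title, self_reported_control_score)
reports = [
    ("assembler", 20), ("assembler", 30), ("assembler", 25),
    ("engineer", 70), ("engineer", 80),
]

# Group self-reports by job title and compute the occupation-level mean.
by_title = defaultdict(list)
for title, score in reports:
    by_title[title].append(score)
mean_control = {t: sum(s) / len(s) for t, s in by_title.items()}

# Under the average-score method, every assembler is assigned 25.0
# as their control exposure, regardless of their own report.
print(mean_control["assembler"])  # 25.0
```

Because every worker in a title group receives the same imputed score, within-occupation variation is discarded, which is one plausible reason the aggregated studies show attenuated relative risks compared to self-report studies.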
Recently, work-related social support has been added to the Demand/Control formulation, and workers with high demands, low control and low support have been shown to have over a twofold risk of CVD morbidity and mortality compared to those with low demands, high control and high support (Johnson and Hall 1994). Currently, efforts are being made to examine sustained exposure to demands, control and support over the course of the “psychosocial work career”. Descriptions of all the occupations held during the whole work career are obtained for the participants, and occupational scores are used to calculate total lifetime exposure. “Total job control exposure” in relation to cardiovascular mortality among working Swedes was studied in this way: even after adjustment for age, smoking habits, exercise, ethnicity, education and social class, low total job control exposure was associated with a nearly twofold risk of cardiovascular death over a 14-year follow-up period (Johnson et al. 1996).
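A lifetime exposure measure of the kind just described cumulates occupation-level scores across the work career, weighting each job by its duration. The sketch below illustrates one plausible form of this calculation; the career history and scores are hypothetical, and the exact scoring procedure of the cited study may differ.

```python
# Hypothetical career history: (occupation control score, years held).
# Scores stand in for occupation-level values from a job-exposure matrix.
career = [(30, 5), (45, 10), (60, 15)]

total_years = sum(years for _, years in career)

# Duration-weighted mean job control over the whole career.
lifetime_control = sum(score * years for score, years in career) / total_years

print(lifetime_control)  # 50.0
```

A worker who spent many years in low-control occupations would accumulate a low lifetime score even if their current job affords high control, which is the point of the “psychosocial work career” approach.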
A model similar to the Demand/Control model has been developed and tested by Siegrist and co-workers (1990), using “effort” and “social reward” as the crucial dimensions; the hypothesis is that high effort without social reward leads to increased risk of cardiovascular disease. In a study of industrial workers, the combination of high effort and lack of reward predicted increased myocardial infarction risk independently of biomedical risk factors.
Other aspects of work organization, such as shift work, have also been shown to be associated with CVD risk. Constant rotation between night and day work has been found to be associated with increased risk of developing a myocardial infarction (Kristensen 1989; Theorell 1992).
Future research in this area particularly needs to focus on specifying the relationship between work stress exposure and CVD risk across different class, gender and ethnic groups.
For many years, psychological stress has been assumed to contribute to the development of peptic ulcer disease (which involves ulcerating lesions in the stomach or duodenum). Researchers and health care providers have proposed more recently that stress might also be related to other gastrointestinal disorders such as non-ulcer dyspepsia (associated with symptoms of upper abdominal pain, discomfort and nausea persisting in the absence of any identifiable organic cause) and irritable bowel syndrome (defined as altered bowel habits plus abdominal pain in the absence of abnormal physical findings). In this article, the question is examined whether there is strong empirical evidence to suggest that psychological stress is a predisposing factor in the aetiology or exacerbation of these three gastrointestinal disorders.
There is clear evidence that humans who are exposed to acute stress in the context of severe physical trauma are prone to the development of ulcers. It is less obvious, however, whether life stressors per se (such as job demotion or the death of a close relative) precipitate or exacerbate ulcers. Lay people and health care practitioners alike commonly associate ulcers and stress, perhaps as a consequence of Alexander’s (1950) early psychoanalytic perspective on the topic. Alexander proposed that ulcer-prone persons suffered dependency conflicts in their relationships with others; coupled with a constitutional tendency toward chronic hypersecretion of gastric acid, dependency conflicts were believed to lead to ulcer formation. The psychoanalytic perspective has not received strong empirical support. Ulcer patients do not appear to display greater dependency conflicts than comparison groups, though ulcer patients do exhibit higher levels of anxiety, submissiveness and depression (Whitehead and Schuster 1985). The level of neuroticism characterizing some ulcer patients tends to be slight, however, and few could be considered as exhibiting psychopathological signs. In any case, studies of emotional disorder in ulcer patients have generally involved those persons who seek medical attention for their disorder; these individuals may not be representative of all ulcer patients.
The association between stress and ulcers follows from the assumption that certain persons are genetically predisposed to hypersecrete gastric acid, especially during stressful episodes. Indeed, about two thirds of duodenal ulcer patients show elevated pepsinogen levels, a trait that appears to be heritable. Brady and associates’ (1958) studies of “executive” monkeys lent initial support to the idea that a stressful lifestyle or vocation may contribute to the pathogenesis of gastrointestinal disease. They found that monkeys required to perform a lever-press task to avoid painful electric shocks (the presumed “executives”, which controlled the stressor) developed more gastric ulcers than comparison monkeys that passively received the same number and intensity of shocks. The analogy to the hard-driving businessman was very cogent for a time. Unfortunately, their results were confounded by anxiety: anxious monkeys were more likely to be assigned to the “executive” role in Brady’s laboratory because they learned the lever-press task quickly. Efforts to replicate their results, using random assignment of subjects to conditions, have failed. Indeed, evidence shows that it is animals lacking control over environmental stressors that develop ulcers (Weiss 1971). Human ulcer patients also tend to be shy and inhibited, which runs counter to the stereotype of the ulcer-prone hard-driving businessman. Finally, animal models are of limited utility because they focus on the development of gastric ulcers, while most ulcers in humans occur in the duodenum; laboratory animals rarely develop duodenal ulcers in response to stress.
Experimental studies of the physiological reactions of ulcer patients versus normal subjects to laboratory stressors do not uniformly show excessive reactions in the patients. The premise that stress leads to increased acid secretion which, in turn, leads to ulceration is problematic when one realizes that psychological stress usually produces a response from the sympathetic nervous system, which inhibits, rather than enhances, the gastric secretion mediated via the splanchnic nerve. Besides hypersecretion, other factors have been proposed in the aetiology of ulcer, namely rapid gastric emptying, inadequate secretion of bicarbonate and mucus, and infection. Stress could potentially affect these processes, though evidence is lacking.
Ulcers have been reported to be more common during wartime, but methodological problems in these studies necessitate caution. A study of air traffic controllers is sometimes cited as evidence supporting the role of psychological stress for the development of ulcers (Cobb and Rose 1973). Although air traffic controllers were significantly more likely than a control group of pilots to report symptoms typical of ulcer, the incidence of confirmed ulcer among the air traffic controllers was not elevated above the base rate of ulcer occurrence in the general population.
Studies of acute life events also present a confusing picture of the relationship between stress and ulcer (Piper and Tennant 1993). Many investigations have been conducted, though most employed small samples and were cross-sectional or retrospective in design. The majority of studies did not find that ulcer patients incurred more acute life events than community controls or patients with conditions in which stress is not implicated, such as gallstones or renal stones. However, ulcer patients reported more chronic stressors involving personal threat or goal frustration prior to the onset or recrudescence of ulcer. In two prospective studies, subjects’ reports at baseline of being under stress or having family problems predicted subsequent development of ulcers. Unfortunately, both prospective studies used single-item scales to measure stress. Other research has shown that slow healing of ulcers, or relapse, was associated with higher stress levels, but the stress indices used in these studies were unvalidated and may have been confounded with personality factors.
In summary, evidence for the role of stress in ulcer causation and exacerbation is limited. Large-scale population-based prospective studies of the occurrence of life events are needed which use validated measures of acute and chronic stress and objective indicators of ulcer. At this point, evidence for an association between psychological stress and ulcer is weak.
Irritable bowel syndrome (IBS) has been considered a stress-related disorder in the past, in part because the physiological mechanism of the syndrome is unknown and because a large proportion of IBS sufferers report that stress caused a change in their bowel habits. As in the ulcer literature, it is difficult to evaluate the validity of retrospective accounts of stressors and symptoms among IBS patients. In an effort to explain their discomfort, ill persons may mistakenly associate symptoms with stressful life events. Two recent prospective studies shed more light on the subject, and both found a limited role for stressful events in the occurrence of IBS symptoms. Whitehead et al. (1992) had a sample of community residents suffering from IBS symptoms report life events and IBS symptoms at three-month intervals. Only about 10% of the variance in bowel symptoms among these residents could be attributed to stress. Suls, Wan and Blanchard (1994) had IBS patients keep diary records of stressors and symptoms for 21 successive days. They found no consistent evidence that daily stressors increased the incidence or severity of IBS symptomatology. Life stress appears to have little effect on acute changes in IBS.
The symptoms of non-ulcer dyspepsia (NUD) include bloating and fullness, belching, borborygmi, nausea and heartburn. In one retrospective study, NUD patients reported more acute life events and more highly threatening chronic difficulties compared to healthy community members, but other investigations failed to find a relationship between life stress and functional dyspepsia. NUD cases also show high levels of psychopathology, notably anxiety disorders. In the absence of prospective studies of life stress, few conclusions can be made (Bass 1986; Whitehead 1992).
Despite considerable empirical attention, no verdict has yet been reached on the relationship between stress and the development of ulcers. Contemporary gastroenterologists have focused mainly on heritable pepsinogen levels, inadequate secretion of bicarbonate and mucus, and Helicobacter pylori infection as causes of ulcer. If life stress plays a role in these processes, its contribution is probably weak. Though fewer studies address the role of stress in IBS and NUD, evidence for a connection to stress is also weak here. For all three disorders, there is evidence that anxiety is higher among patients compared to the general population, at least among those persons who refer themselves for medical care (Whitehead 1992). Whether this anxiety is a precursor or a consequence of gastrointestinal disease has not been definitively determined, although the latter seems more likely. In current practice, ulcer patients receive pharmacological treatment, and psychotherapy is rarely recommended. Anti-anxiety drugs are commonly prescribed to IBS and NUD patients, probably because the physiological origins of these disorders are still unknown. Stress management has been employed with IBS patients with some success (Blanchard et al. 1992), although this patient group also responds quite readily to placebo treatments. Finally, patients experiencing ulcer, IBS or NUD may well be frustrated by assumptions from family members, friends and practitioners alike that their condition was produced by stress.
Stress, the physical and/or psychological departure from a person’s stable equilibrium, can result from a large number of stressors, those stimuli that produce stress. For a good general view of stress and the most common job stressors, Levi’s discussion in this chapter of job stress theories is recommended.
In addressing the question of whether job stress can and does affect the epidemiology of cancer, we face limitations: a search of the literature located only one study on actual job stress and cancer, in urban bus drivers (Michaels and Zoloth 1991), and there are only a few studies in which the question is considered more generally. We cannot accept the findings of that study, because the authors did not take into account the effects of either high-density exhaust fumes or smoking. Further, one cannot carry over findings from other diseases to cancer, because the disease mechanisms are so vastly different.
Nevertheless, it is possible to describe what is known about the connections between more general life stressors and cancer, and one might reasonably apply those findings to the job situation. We differentiate relationships of stress to two outcomes: cancer incidence and cancer prognosis. The term incidence refers to the occurrence of cancer, established either by the doctor’s clinical diagnosis or at autopsy. Since tumour growth is slow - 1 to 20 years may elapse from the malignant mutation of one cell to the detection of the tumour mass - incidence studies capture both initiation and growth. The second question, whether stress can affect prognosis, can be answered only in studies of cancer patients after diagnosis.
We distinguish cohort studies from case-control studies. This discussion focuses on cohort studies, in which a factor of interest, in this case stress, is measured in a cohort of healthy persons and cancer incidence or mortality is determined after a number of years. For several reasons, little emphasis is given to case-control studies, which compare reports of stress, either current or before diagnosis, in cancer patients (cases) and persons without cancer (controls). First, one can never be sure that the control group is well matched to the case group with respect to other factors that can influence the comparison. Secondly, cancer can and does produce physical, psychological and attitudinal changes, mostly negative, that can bias conclusions. Thirdly, these changes are known to increase the number (or reported severity) of stressful events recalled, compared to reports by controls, leading to the biased conclusion that patients experienced more, or more severe, stressful events than did controls (Watson and Pennebaker 1989).
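In a cohort design of the kind favoured here, the association is typically summarized as a risk ratio: the incidence among the exposed (for example, a high-stress group) divided by the incidence among the unexposed. The following worked sketch uses invented counts, not data from any cited study.

```python
# Hypothetical cohort counts, followed for the same number of years.
cases_exposed, n_exposed = 30, 1000      # cancers in the high-stress group
cases_unexposed, n_unexposed = 15, 1000  # cancers in the low-stress group

risk_exposed = cases_exposed / n_exposed        # 0.030
risk_unexposed = cases_unexposed / n_unexposed  # 0.015

# Relative risk: how many times more likely the exposed are to develop
# the disease over the follow-up period.
relative_risk = risk_exposed / risk_unexposed

print(relative_risk)  # 2.0
```

A relative risk near 1.0 indicates no association; the null findings summarized in the cohort studies below correspond to ratios close to that value.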
Most studies on stress and cancer incidence have been of the case-control sort, and we find a wild mix of results. Because, in varying degrees, these studies have failed to control contaminating factors, we don’t know which ones to trust, and they are ignored here. Among cohort studies, the number of studies showing that persons under greater stress did not experience more cancer than those under lesser stress exceeded by a large margin the number showing the reverse (Fox 1995). The results for several stressed groups are given.
1. Bereaved spouses. In a Finnish study of 95,647 widowed persons, their cancer death rate differed by only 3% from the rate of an age-equivalent non-widowed population over a period of five years. A study of causes of death during the 12 years following bereavement in 4,032 widowed persons in the state of Maryland showed no more cancer deaths among the widowed than among those still married - in fact, there were slightly fewer deaths than among the married. In England and Wales, the Office of Population Censuses and Surveys showed little evidence of an increase in cancer incidence after death of a spouse, and only a slight, non-significant increase in cancer mortality.
2. Depressed mood. One study showed, but four studies did not, an excess of cancer mortality in the years following the measurement of a depressed mood (Fox 1989). This must be distinguished from hospitalizable depression, on which no well-controlled large-scale cohort studies have been done, and which clearly involves pathological depression, not applicable to the healthy working population. Even among this group of clinically depressed patients, however, most properly analysed smaller studies show no excess of cancer.
3. A group of 2,020 men, aged 35 to 55, working in an electrical products factory in Chicago, was followed for 17 years after being tested. Those whose highest score on a variety of personality scales was reported on the depressed mood scale showed a cancer death rate 2.3 times that of men whose highest score was not referable to depressed mood. The researcher’s colleague followed the surviving cohort for another three years; the cancer death rate in the whole high-depressed-mood group had dropped to 1.3 times that of the control group. A second study of 6,801 adults in Alameda County, California, showed no excess cancer mortality among those with depressed mood when followed for 17 years. In a third study of 2,501 people with depressed mood in Washington County, Maryland, non-smokers showed no excess cancer mortality over 13 years compared to non-smoking controls, but there was an excess mortality among smokers. The results for smokers were later shown to be wrong, the error arising from a contaminating factor overlooked by the researchers. A fourth study, of 8,932 women at the Kaiser-Permanente Medical Center in Walnut Creek, California showed no excess of deaths due to breast cancer over 11 to 14 years among women with depressed mood at the time of measurement. A fifth study, done on a randomized national sample of 2,586 people in the National Health and Nutrition Examination Survey in the United States, showed no excess of cancer mortality among those showing depressed mood when measured on either of two independent mood scales. The combined findings of studies on 22,351 persons made up of disparate groups weigh heavily against the contrary findings of the one study on 2,020 persons.
4. Other stressors. A study of 4,581 Hawaiian men of Japanese descent found no greater cancer incidence over a period of 10 years among those reporting high levels of stressful life events at the start of the study than those reporting lower levels. A study was carried out on 9,160 soldiers in the US Army who had been prisoners of war in the Pacific and European theatres in the Second World War and in Korea during the Korean conflict. The cancer death rate from 1946 to 1975 was either less than or no different from that found among soldiers matched by combat zone and combat activity who were not prisoners of war. In a study of 9,813 US Army personnel separated from the army during the year 1944 for “psychoneurosis”, a prima facie state of chronic stress, their cancer death rate over the period 1946 to 1969 was compared with that of a matched group not so diagnosed. The psychoneurotics’ rate was no greater than that of matched controls, and was, in fact, slightly lower, although not significantly so.
5. Lowered levels of stress. There is evidence in some studies, but not in others, that higher levels of social support and social connections are associated with less cancer risk in the future. There are so few studies on this topic and the observed differences so unconvincing that the most a prudent reviewer can reasonably do is suggest the possibility of a true relationship. We need more solid evidence than that offered by the contradictory studies that have already been carried out.
This topic is of lesser interest because so few people of working age get cancer. Nevertheless, it ought to be mentioned that while survival differences have been found in some studies with regard to reported pre-diagnosis stress, other studies have shown no differences. One should, in judging these findings, recall the parallel findings showing that not only cancer patients but also those with other ills report more past stressful events than well people do, to a substantial degree because of the psychological changes brought about by the disease itself and, further, by the knowledge that one has the disease. With respect to prognosis, several studies have shown increased survival among those with good social support as against those with less social support. Perhaps more social support produces less stress, and vice versa. As regards both incidence and prognosis, however, the extant studies are at best only suggestive (Fox 1995).
It might be instructive to see what effects stress has had in experiments with animals. The results among well-conducted studies are much clearer, but not decisive. It was found that stressed animals with viral tumours show faster tumour growth and die earlier than unstressed animals. But the reverse is true of non-viral tumours, that is, those produced in the laboratory by chemical carcinogens. For these, stressed animals have fewer tumours and longer survival after the start of cancer than unstressed animals (Justice 1985). In industrial nations, however, only 3 to 4% of human malignancies are viral. All the rest are due to chemical or physical stimuli - smoking, x rays, industrial chemicals, nuclear radiation (e.g., that due to radon), excessive sunlight and so on. Thus, if one were to extrapolate from the findings for animals, one would conclude that stress is beneficial with respect to both cancer incidence and survival. For a number of reasons one should not draw such an inference (Justice 1985; Fox 1981). Results with animals can be used to generate hypotheses relating to data describing humans, but cannot be the basis for conclusions about them.
In view of the variety of stressors that has been examined in the literature - long-term, short-term, more severe, less severe, of many types - and the preponderance of results suggesting little or no effect on later cancer incidence, it is reasonable to suggest that the same results apply in the work situation. As for cancer prognosis, too few studies have been done to draw any conclusions, even tentative ones, about stressors. It is, however, possible that strong social support may decrease incidence a little, and perhaps increase survival.
There is growing evidence in the occupational health literature that psychosocial work factors may influence the development of musculoskeletal problems, including both low back and upper extremity disorders (Bongers et al. 1993). Psychosocial work factors are defined as aspects of the work environment (such as work roles, work pressure, relationships at work) that can contribute to the experience of stress in individuals (Lim and Carayon 1994; ILO 1986). This paper provides a synopsis of the evidence and underlying mechanisms linking psychosocial work factors and musculoskeletal problems with the emphasis on studies of upper extremity disorders among office workers. Directions for future research are also discussed.
An impressive array of studies from 1985 to 1995 has linked workplace psychosocial factors to upper extremity musculoskeletal problems in the office work environment (see Moon and Sauter 1996 for an extensive review). In the United States, this relationship was first suggested in exploratory research by the National Institute for Occupational Safety and Health (NIOSH) (Smith et al. 1981). Results of this research indicated that video display unit (VDU) operators who reported less autonomy and role clarity and greater work pressure and management control over their work processes also reported more musculoskeletal problems than their counterparts who did not work with VDUs (Smith et al. 1981).
Recent studies employing more powerful inferential statistical techniques point more strongly to an effect of psychosocial work factors on upper extremity musculoskeletal disorders among office workers. For example, Lim and Carayon (1994) used structural analysis methods to examine the relationship between psychosocial work factors and upper extremity musculoskeletal discomfort in a sample of 129 office workers. Results showed that psychosocial factors such as work pressure, task control and production quotas were important predictors of upper extremity musculoskeletal discomfort, especially in the neck and shoulder regions. Demographic factors (age, gender, tenure with employer, hours of computer use per day) and other confounding factors (self-reports of medical conditions, hobbies and keyboard use outside work) were controlled for in the study and were not related to any of these problems.
Confirmatory findings were reported by Hales et al. (1994) in a NIOSH study of musculoskeletal disorders in 533 telecommunications workers from three different metropolitan cities. Two types of musculoskeletal outcomes were investigated: (1) upper extremity musculoskeletal symptoms determined by questionnaire alone; and (2) potential work-related upper extremity musculoskeletal disorders determined by physical examination in addition to the questionnaire. Using regression techniques, the study found that factors such as work pressure and little decision-making opportunity were associated both with intensified musculoskeletal symptoms and with increased physical evidence of disease. Similar relationships have been observed in the industrial environment, but mainly for back pain (Bongers et al. 1993).
Researchers have suggested a variety of mechanisms underlying the relationship between psychosocial factors and musculoskeletal problems (Sauter and Swanson 1996; Smith and Carayon 1996; Lim 1994; Bongers et al. 1993). These mechanisms can be classified into four categories:
1. It has been demonstrated that individuals subject to stressful psychosocial working conditions also exhibit increased autonomic arousal (e.g., increased catecholamine secretion, increased heart rate and blood pressure, increased muscle tension) (Frankenhaeuser and Gardell 1976). This is a normal and adaptive psychophysiological response which prepares the individual for action. However, prolonged exposure to stress may have a deleterious effect on musculoskeletal function as well as on health in general. For example, stress-related muscle tension may increase the static loading of muscles, thereby accelerating muscle fatigue and associated discomfort (Westgaard and Bjorklund 1987; Grandjean 1986).
2. Individuals who are under stress may alter their work behaviour in a way that increases musculoskeletal strain. For example, psychological stress may result in greater application of force than necessary during typing or other manual tasks, leading to increased wear and tear on the musculoskeletal system.
3. Psychosocial factors may influence the physical (ergonomic) demands of the job directly. For example, an increase in time pressure is likely to lead to an increase in work pace (i.e., increased repetition) and increased strain. Alternatively, workers who are given more control over their tasks may be able to adjust their tasks in ways that lead to reduced repetitiveness (Lim and Carayon 1994).
4. Sauter and Swanson (1996) suggest that the relationship between biomechanical stressors (e.g., ergonomic factors) and the development of musculoskeletal problems is mediated by perceptual processes which are influenced by workplace psychosocial factors. For example, symptoms might become more evident in dull, routine jobs than in more engrossing tasks which more fully occupy the attention of the worker (Pennebaker and Hall 1982).
Additional research is needed to assess the relative importance of each of these mechanisms and their possible interactions. Further, our understanding of causal relationships between psychosocial work factors and musculoskeletal disorders would benefit from: (1) increased use of longitudinal study designs; (2) improved methods for assessing and disentangling psychosocial and physical exposures; and (3) improved measurement of musculoskeletal outcomes.
Still, the current evidence linking psychosocial factors and musculoskeletal disorders is impressive and suggests that psychosocial interventions probably play an important role in preventing musculoskeletal problems in the workplace. In this regard, several publications (NIOSH 1988; ILO 1986) provide directions for optimizing the psychosocial environment at work. As suggested by Bongers et al. (1993), special attention should be given to providing a supportive work environment, manageable workloads and increased worker autonomy. Positive effects of such variables were evident in a case study by Westin (1990) of the Federal Express Corporation. According to Westin, a programme of work reorganization to provide an “employee-supportive” work environment, improve communications and reduce work and time pressures was associated with minimal evidence of musculoskeletal health problems.
Mental illness is one of the chronic outcomes of work stress that inflicts a major social and economic burden on communities (Jenkins and Coney 1992; Miller and Kelman 1992). Two disciplines, psychiatric epidemiology and mental health sociology (Aneshensel, Rutter and Lachenbruch 1991), have studied the effects of psychosocial and organizational factors of work on mental illness. These studies can be classified according to four different theoretical and methodological approaches: (1) studies of only a single occupation; (2) studies of broad occupational categories as indicators of social stratification; (3) comparative studies of occupational categories; and (4) studies of specific psychosocial and organizational risk factors. We review each of these approaches and discuss their implications for research and prevention.
There are numerous studies in which the focus has been a single occupation. Depression has been the focus of interest in recent studies of secretaries (Garrison and Eaton 1992), professionals and managers (Phelan et al. 1991; Bromet et al. 1990), computer workers (Mino et al. 1993), fire-fighters (Guidotti 1992), teachers (Schonfeld 1992), and workers in “maquiladoras” (Guendelman and Silberg 1993). Alcoholism and drug abuse and dependence have recently been related to mortality among bus drivers (Michaels and Zoloth 1991) and to managerial and professional occupations (Bromet et al. 1990). Symptoms of anxiety and depression which are indicative of psychiatric disorder have been found among garment workers, nurses, teachers, social workers, offshore oil industry workers and young physicians (Brisson, Vezina and Vinet 1992; Firth-Cozens 1987; Fletcher 1988; McGrath, Reid and Boore 1989; Parkes 1992). The lack of a comparison group makes it difficult to determine the significance of this type of study.
The use of occupations as indicators of social stratification has a long tradition in mental health research (Liberatos, Link and Kelsey 1988). Workers in unskilled manual jobs and lower-grade civil servants have shown high prevalence rates of minor psychiatric disorders in England (Rodgers 1991; Stansfeld and Marmot 1992). Alcoholism has been found to be prevalent among blue-collar workers in Sweden (Ojesjo 1980) and even more prevalent among managers in Japan (Kawakami et al. 1992). A serious weakness of this type of study is its failure to differentiate conceptually between the effects of occupations per se and “lifestyle” factors associated with occupational strata. Occupation is also an indicator of social stratification in a sense different from social class, since the latter implies control over productive assets (Kohn et al. 1990; Muntaner et al. 1994). However, there have been no empirical studies of mental illness using this conceptualization.
Census categories for occupations constitute a readily available source of information that allows one to explore associations between occupations and mental illness (Eaton et al. 1990). Epidemiological Catchment Area (ECA) study analyses of comprehensive occupational categories have yielded findings of a high prevalence of depression for professional, administrative support and household services occupations (Roberts and Lee 1993). In another major epidemiological study, the Alameda County study, high rates of depression were found among workers in blue-collar occupations (Kaplan et al. 1991). High 12-month prevalence rates of alcohol dependence among workers in the United States have been found in craft occupations (15.6%) and labourers (15.2%) among men, and in farming, forestry and fishing occupations (7.5%) and unskilled service occupations (7.2%) among women (Harford et al. 1992). ECA rates of alcohol abuse and dependence yielded high prevalence among transportation, craft and labourer occupations (Roberts and Lee 1993). Workers in the service sector, drivers and unskilled workers showed high rates of alcoholism in a study of the Swedish population (Agren and Romelsjo 1992). Twelve-month prevalence of drug abuse or dependence in the ECA study was higher among farming (6%), craft (4.7%), and operator, transportation and labourer (3.3%) occupations (Roberts and Lee 1993). The ECA analysis of combined prevalence for all psychoactive substance abuse or dependence syndromes (Anthony et al. 1992) yielded higher prevalence rates for construction labourers, carpenters, construction trades as a whole, waiters, waitresses and transportation and moving occupations. In another ECA analysis (Muntaner et al. 
1991), as compared to managerial occupations, greater risk of schizophrenia was found among private household workers, while artists and workers in the construction trades were found to be at higher risk of schizophrenia (delusions and hallucinations) according to criterion A of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) (APA 1980).
Several ECA studies have been conducted with more specific occupational categories. In addition to specifying occupational environments more closely, they adjust for sociodemographic factors which might have led to spurious results in uncontrolled studies. High 12-month prevalence rates of major depression, above the 3 to 5% found in the general population (Robins and Regier 1990), have been reported for data entry keyers and computer equipment operators (13%) and for typists, lawyers, special education teachers and counsellors (10%) (Eaton et al. 1990). After adjustment for sociodemographic factors, lawyers, teachers and counsellors had significantly elevated rates when compared to the employed population (Eaton et al. 1990). In a detailed analysis of 104 occupations, construction labourers, skilled construction trades, heavy truck drivers and material movers showed high rates of alcohol abuse or dependence (Mandell et al. 1992).
Comparative studies of occupational categories suffer from the same flaws as social stratification studies. Thus, a problem with occupational categories is that specific risk factors are bound to be missed. In addition, “lifestyle” factors associated with occupational categories remain a potent explanation for results.
Most studies of work stress and mental illness have been conducted with scales from Karasek’s Demand/Control model (Karasek and Theorell 1990) or with measures derived from the Dictionary of Occupational Titles (DOT) (Cain and Treiman 1981). In spite of the methodological and theoretical differences underlying these systems, they measure similar psychosocial dimensions (control, substantive complexity and job demands) (Muntaner et al. 1993). Job demands have been associated with major depressive disorder among male power-plant workers (Bromet 1988). Occupations involving lack of direction, control or planning have been shown to mediate the relation between socioeconomic status and depression (Link et al. 1993). However, in one study the relationship between low control and depression was not found (Guendelman and Silberg 1993). The number of negative work-related effects, lack of intrinsic job rewards and organizational stressors such as role conflict and ambiguity have also been associated with major depression (Phelan et al. 1991). Heavy alcohol drinking and alcohol-related problems have been linked to working overtime and to lack of intrinsic job rewards among men and to job insecurity among women in Japan (Kawakami et al. 1993), and to high demands and low control among males in the United States (Bromet 1988). Also among US males, high psychological or physical demands and low control were predictive of alcohol abuse or dependence (Crum et al. 1995). In another ECA analysis, high physical demands and low skill discretion were predictive of drug dependence (Muntaner et al. 1995). Physical demands and job hazards were predictors of schizophrenia or delusions or hallucinations in three US studies (Muntaner et al. 1991; Link et al. 1986; Muntaner et al. 1993). Physical demands have also been associated with psychiatric disease in the Swedish population (Lundberg 1991). 
These investigations hold promise for prevention, because they focus on specific, potentially modifiable risk factors.
Future studies might benefit from studying the demographic and sociological characteristics of workers in order to sharpen their focus on the occupations proper (Mandell et al. 1992). When occupation is considered an indicator of social stratification, adjustment for non-work stressors should be attempted. The effects of chronic exposure to lack of democracy in the workplace need to be investigated (Johnson and Johansson 1991). A major initiative for the prevention of work-related psychological disorders has emphasized improving working conditions, services, research and surveillance (Keita and Sauter 1992; Sauter, Murphy and Hurrell 1990).
While some researchers maintain that job redesign can improve both productivity and workers’ health (Karasek and Theorell 1990), others have argued that a firm’s profit maximization goals and workers’ mental health are in conflict (Phelan et al. 1991; Muntaner and O’Campo 1993; Ralph 1983).
Burnout is a type of prolonged response to chronic emotional and interpersonal stressors on the job. It has been conceptualized as an individual stress experience embedded in a context of complex social relationships, and it involves the person’s conception of both self and others. As such, it has been an issue of particular concern for human services occupations where: (a) the relationship between providers and recipients is central to the job; and (b) the provision of service, care, treatment or education can be a highly emotional experience. There are several types of occupations that meet these criteria, including health care, social services, mental health, criminal justice and education. Even though these occupations vary in the nature of the contact between providers and recipients, they are similar in having a structured caregiving relationship centred around the recipient’s current problems (psychological, social and/or physical). Not only is the provider’s work on these problems likely to be emotionally charged, but solutions may not be easily forthcoming, thus adding to the frustration and ambiguity of the work situation. The person who works continuously with people under such circumstances is at greater risk of burnout.
The operational definition (and the corresponding research measure) that is most widely used in burnout research is a three-component model in which burnout is conceptualized in terms of emotional exhaustion, depersonalization and reduced personal accomplishment (Maslach 1993; Maslach and Jackson 1981/1986). Emotional exhaustion refers to feelings of being emotionally overextended and depleted of one’s emotional resources. Depersonalization refers to a negative, callous or excessively detached response to the people who are usually the recipients of one’s service or care. Reduced personal accomplishment refers to a decline in one’s feelings of competence and successful achievement in one’s work.
This multidimensional model of burnout has important theoretical and practical implications. It provides a more complete understanding of this form of job stress by locating it within its social context and by identifying the variety of psychological reactions that different workers can experience. Such differential responses may not be simply a function of individual factors (such as personality), but may reflect the differential impact of situational factors on the three burnout dimensions. For example, certain job characteristics may influence the sources of emotional stress (and thus emotional exhaustion), or the resources available to handle the job successfully (and thus personal accomplishment). This multidimensional approach also implies that interventions to reduce burnout should be planned and designed in terms of the particular component of burnout that needs to be addressed. That is, it may be more effective to consider how to reduce the likelihood of emotional exhaustion, or to prevent the tendency to depersonalize, or to enhance one’s sense of accomplishment, rather than to use a more unfocused approach.
Consistent with this social framework, the empirical research on burnout has focused primarily on situational and job factors. Thus, studies have included such variables as relationships on the job (clients, colleagues, supervisors) and at home (family), job satisfaction, role conflict and role ambiguity, job withdrawal (turnover, absenteeism), expectations, workload, type of position and job tenure, institutional policy and so forth. The personal factors that have been studied are most often demographic variables (sex, age, marital status, etc.). In addition, some attention has been given to personality variables, personal health, relations with family and friends (social support at home), and personal values and commitment. In general, job factors are more strongly related to burnout than are biographical or personal factors. In terms of antecedents of burnout, the three factors of role conflict, lack of control or autonomy, and lack of social support on the job seem to be most important. The effects of burnout are seen most consistently in various forms of job withdrawal and dissatisfaction, with the implication of a deterioration in the quality of care or service provided to clients or patients. Burnout seems to be correlated with various self-reported indices of personal dysfunction, including health problems, increased use of alcohol and drugs, and marital and family conflicts. The level of burnout seems fairly stable over time, underscoring the notion that its nature is more chronic than acute (see Kleiber and Enzmann 1990; Schaufeli, Maslach and Marek 1993 for reviews of the field).
An issue for future research concerns possible diagnostic criteria for burnout. Burnout has often been described in terms of dysphoric symptoms such as exhaustion, fatigue, loss of self-esteem and depression. However, depression is considered to be context-free and pervasive across all situations, whereas burnout is regarded as job-related and situation-specific. Other symptoms include problems in concentration, irritability and negativism, as well as a significant decrease in work performance over a period of several months. It is usually assumed that burnout symptoms manifest themselves in “normal” persons who do not suffer from prior psychopathology or an identifiable organic illness. The implication of these ideas about possible distinctive symptoms of burnout is that burnout could be diagnosed and treated at the individual level.
However, given the evidence for the situational aetiology of burnout, more attention has been given to social, rather than personal, interventions. Social support, particularly from one’s peers, seems to be effective in reducing the risk of burnout. Adequate job training that includes preparation for difficult and stressful work-related situations helps develop people’s sense of self-efficacy and mastery in their work roles. Involvement in a larger community or action-oriented group can also counteract the helplessness and pessimism that are commonly evoked by the absence of long-term solutions to the problems with which the worker is dealing. Accentuating the positive aspects of the job and finding ways to make ordinary tasks more meaningful are additional methods for gaining greater self-efficacy and control.
There is a growing tendency to view burnout as a dynamic process, rather than a static state, and this has important implications for the proposal of developmental models and process measures. The research gains to be expected from this newer perspective should yield increasingly sophisticated knowledge about the experience of burnout, and will enable both individuals and institutions to deal with this social problem more effectively.
Any organization which seeks to establish and maintain the best state of mental, physical and social wellbeing of its employees needs to have policies and procedures which comprehensively address health and safety. These policies will include a mental health policy with procedures to manage stress based on the needs of the organization and its employees. These will be regularly reviewed and evaluated.
There are a number of options to consider in looking at the prevention of stress, which can be termed as primary, secondary and tertiary levels of prevention and address different stages in the stress process (Cooper and Cartwright 1994). Primary prevention is concerned with taking action to reduce or eliminate stressors (i.e., sources of stress), and positively promoting a supportive and healthy work environment. Secondary prevention is concerned with the prompt detection and management of depression and anxiety by increasing self-awareness and improving stress management skills. Tertiary prevention is concerned with the rehabilitation and recovery process of those individuals who have suffered or are suffering from serious ill health as a result of stress.
To develop an effective and comprehensive organizational policy on stress, employers need to integrate these three approaches (Cooper, Liukkonen and Cartwright 1996).
First, the most effective way of tackling stress is to eliminate it at its source. This may involve changes in personnel policies, improving communication systems, redesigning jobs, or allowing more decision making and autonomy at lower levels. Obviously, as the type of action required by an organization will vary according to the kinds of stressor operating, any intervention needs to be guided by some prior diagnosis or stress audit to identify what these stressors are and whom they are affecting.
Stress audits typically take the form of a self-report questionnaire administered to employees on an organization-wide, site or departmental basis. In addition to identifying the sources of stress at work and those individuals most vulnerable to stress, the questionnaire usually measures levels of employee job satisfaction, coping behaviour, and physical and psychological health compared with those of similar occupational groups and industries. Stress audits are an extremely effective way of directing organizational resources into areas where they are most needed. Audits also provide a means of regularly monitoring stress levels and employee health over time, as well as a baseline against which subsequent interventions can be evaluated.
Diagnostic instruments, such as the Occupational Stress Indicator (Cooper, Sloan and Williams 1988), are increasingly being used by organizations for this purpose. They are usually administered through occupational health and/or personnel/human resource departments in consultation with a psychologist. In smaller companies, there may be the opportunity to hold employee discussion groups or develop checklists which can be administered on a more informal basis. The agenda for such discussions/checklists should address the following issues:
· job content and work scheduling
· physical working conditions
· employment terms and expectations of different employee groups within the organization
· relationships at work
· communication systems and reporting arrangements.
Another alternative is to ask employees to keep a stress diary for a few weeks in which they record any stressful events they encounter during the course of the day. Pooling this information on a group/departmental basis can be useful in identifying universal and persistent sources of stress.
Another key factor in primary prevention is the development of the kind of supportive organizational climate in which stress is recognized as a feature of modern industrial life and not interpreted as a sign of weakness or incompetence. Mental ill health is indiscriminate - it can affect anyone irrespective of their age, social status or job function. Therefore, employees should not feel awkward about admitting to any difficulties they encounter.
Organizations need to take explicit steps to remove the stigma often attached to those with emotional problems and maximize the support available to staff (Cooper and Williams 1994). Some of the formal ways in which this can be done include:
· informing employees of existing sources of support and advice within the organization, such as occupational health services
· specifically incorporating self-development issues within appraisal systems
· extending and improving the “people” skills of managers and supervisors so that they convey a supportive attitude and can more comfortably handle employee problems.
Most importantly, there has to be demonstrable commitment to the issue of stress and mental health at work from both senior management and unions. This may require a move to more open communication and the dismantling of cultural norms within the organization which inherently promote stress among employees (e.g., cultural norms which encourage employees to work excessively long hours and feel guilty about leaving “on time”). Organizations with a supportive organizational climate will also be proactive in anticipating additional or new stressors which may be introduced as a result of proposed changes, such as restructuring or the introduction of new technology, and will take steps to address them, perhaps through training initiatives or greater employee involvement. Regular communication and increased employee involvement and participation play a key role in reducing stress in the context of organizational change.
Initiatives which fall into this category are generally focused on training and education, and involve awareness activities and skill-training programmes.
Stress education and stress management courses serve a useful function in helping individuals to recognize the symptoms of stress in themselves and others and to extend and develop their coping skills and stress resilience.
The form and content of this kind of training can vary immensely but often includes simple relaxation techniques, lifestyle advice and planning, basic training in time management, assertiveness and problem-solving skills. The aim of these programmes is to help employees to review the psychological effects of stress and to develop a personal stress-control plan (Cooper 1996).
This kind of programme can be beneficial to all levels of staff and is particularly useful in training managers to recognize stress in their subordinates and be aware of their own managerial style and its impact on those they manage. This can be of great benefit if carried out following a stress audit.
Organizations, with the cooperation of occupational health personnel, can also introduce initiatives which directly promote positive health behaviours in the workplace. Again, health promotion activities can take a variety of forms. They may include:
· the introduction of regular medical check-ups and health screening
· the design of “healthy” canteen menus
· the provision of on-site fitness facilities and exercise classes
· corporate membership or concessionary rates at local health and fitness clubs
· the introduction of cardiovascular fitness programmes
· advice on alcohol and dietary control (particularly cutting down on cholesterol, salt and sugar)
· smoking-cessation programmes
· advice on lifestyle management, more generally.
For organizations without the facilities of an occupational health department, there are external agencies that can provide a range of health-promotion programmes. Evidence from established health-promotion programmes in the United States has produced some impressive results (Karasek and Theorell 1990). For example, the New York Telephone Company’s Wellness Programme, designed to improve cardiovascular fitness, saved the organization $2.7 million in absence and treatment costs in one year alone.
Stress management/lifestyle programmes can be particularly useful in helping individuals to cope with environmental stressors which may have been identified by the organization, but which cannot be changed, e.g., job insecurity.
An important part of health promotion in the workplace is the detection of mental health problems as soon as they arise and the prompt referral of these problems for specialist treatment. The majority of those who develop mental illness make a complete recovery and are able to return to work. It is usually far more costly to retire a person early on medical grounds and re-recruit and train a successor than it is to spend time easing a person back to work. There are two aspects of tertiary prevention which organizations can consider:
Organizations can provide access to confidential professional counselling services for employees who are experiencing problems in the workplace or personal setting (Swanson and Murphy 1991). Such services can be provided either by in-house counsellors or outside agencies in the form of an Employee Assistance Programme (EAP).
EAPs provide counselling, information and/or referral to appropriate counselling treatment and support services. Such services are confidential and usually provide a 24-hour contact line. Charges are normally made on a per capita basis calculated on the total number of employees and the number of counselling hours provided by the programme.
Counselling is a highly skilled business and requires extensive training. It is important to ensure that counsellors have received recognized counselling skills training and have access to a suitable environment which allows them to conduct this activity in an ethical and confidential manner.
Again, the provision of counselling services is likely to be particularly effective in dealing with stress as a result of stressors operating within the organization which cannot be changed (e.g., job loss) or stress caused by non-work related problems (e.g., bereavement, marital breakdown), but which nevertheless tend to spill over into work life. It is also useful in directing employees to the most appropriate sources of help for their problems.
For those employees who are absent from work as a result of stress, it has to be recognized that the return to work itself is likely to be a “stressful” experience. It is important that organizations are sympathetic and understanding in these circumstances. A “return to work” interview should be conducted to establish whether the individual concerned is ready and happy to return to all aspects of the job. Negotiations should involve careful liaison between the employee, the line manager and the doctor. Once the individual has made a partial or complete return to his or her duties, a series of follow-up interviews is likely to be useful in monitoring progress and rehabilitation. Again, the occupational health department can play an important role in the rehabilitation process.
The options outlined above should not be regarded as mutually exclusive but rather as potentially complementary. Stress-management training, health-promotion activities and counselling services are useful in extending the physical and psychological resources of individuals, helping them to modify their appraisal of a stressful situation and cope better with experienced distress (Berridge, Cooper and Highley 1997). However, there are many potential and persistent sources of stress which the individual is likely to perceive himself or herself as lacking the resources or positional power to change (e.g., the structure, management style or culture of the organization). Such stressors require organizational-level intervention if their long-term dysfunctional impact on employee health is to be overcome satisfactorily, and they can only be identified by a stress audit.