Chapter 9
Profound Change in Medical Technologies
Time to Reexamine the Technology-Structure Nexus in Health Care?

This work is the responsibility of the authors and does not reflect the position or opinions of the National Cancer Institute.

Mary L. Fennell, Steven B. Clauser and Miriam Plavin-Masterman

Learning Objectives

  1. Understand the theoretical foundations of structural contingency theory and neoinstitutional theory, and consider recent developments regarding the influence of institutional logics.
  2. Examine genomic medicine in cancer treatment and identify developments in genomic medicine as a form of disruptive technology.
  3. Evaluate and reconceptualize the technology-structure relationship in health care in light of disruptive changes in medical technology, influential institutional environments, and emergent institutional logics.
  4. Consider levels of analysis most useful in a reconceptualization of structural contingency theory.
  5. Identify barriers characterizing the gap between genomic medicine technology and the structures of cancer treatment organizations.
  6. Consider a neostructural contingency model, incorporating institutional logics with the theoretical foundations of structural contingency theory to understand how changing medical technologies influence and exert pressure on health care organizations.

For decades during the middle of the twentieth century, the structural contingency approach was the leading paradigm for connecting changes in technology to expected or resulting changes in organizational structure (Lawrence and Lorsch, 1967). The theory assumed that “under norms of rationality” (Thompson, 1967), the characteristics of technology would provide clear guidelines for appropriate structural configurations. Contingency theory (Morgan, 1986; Scott and Davis, 2007) went one step further and predicted that organizational performance would depend on the fit between characteristics of technology and characteristics of structure: the better the fit, the better the performance (Donaldson, 2001). These dominant theories were eventually eclipsed in the late 1970s by theories of organizational environments, which emphasized fit with environmental contingencies as the primary concern. More recently, various theories of organizational change have shifted emphasis to the political conflicts that accompany technological change and to the nonlinear aspects of technology change that are linked to negotiating and renegotiating organizational norms and logics (Barley, 1986; Prasad, 1993).

We are now witnessing profound changes in all areas of medical technology, including innovations and discoveries in biology, cellular biology, genomics and proteomics, and concomitant changes in pharmaceuticals, medical devices, and information technology. Some have noted that this recent scientific avalanche has already brought about a complete paradigm shift in certain approaches to patient treatment, such as for cancer, Alzheimer's disease, organ and limb replacement, and various autoimmune system disorders (Niederhuber, 2007; Scully et al., 2011; Hamburg and Collins, 2010; McBride et al., 2008; Fennell, 2008).

Despite these radical shifts in our understanding of genomic function, there appears to be a large and growing gap between the capacity of science to develop genomic applications in medicine and the capacity of our medical treatment organizations to advance these innovations fully and adopt them for actual patient treatment (Khoury et al., 2011). In short, the technology of treatment is changing profoundly in many medical arenas, but the structures of treatment have yet to change.

In this chapter, we define technology, particularly medical technology, as the work performed by medical and health care organizations (Scott and Davis, 2007); this definition encompasses both the hardware used to provide medical care and the skills and knowledge of medical workers. More specifically, medical technology is the application of procedures, information, and equipment to support the work of medical professionals and organizations. This chapter explores several questions concerning the rapidly changing connection between genomic medical technologies and the structures of health care organizations, focusing on the growing gap between the two. We use cancer treatment as a signature case and consider the following questions:

  • How should we redefine and reconceptualize structural contingency theories given the disruptive and transformative nature of genomic medicine and the observed gap between medical technology and medical structures?
  • What levels of analysis are most useful in rethinking the technology-structure nexus in health care?
  • How best should we model the concept of fit between new disruptive technologies and health care organizational performance? How are changes in the organization of medical practice aligning with (or frustrating) needed structural changes to enhance the use of genomic medicine?
  • What are the types of structural changes needed to match genomic medicine, both internal arrangements (e.g., the capacity to provide multidisciplinary treatment, the capacity to collect and process biospecimens, the capacity for updated clinical trials, and changes in provider-patient relationships), and external arrangements (e.g., linkages needed to national networks of data sharing and best practices collaboration)?

In other words, is it time to reconsider, or even rebuild, structural contingency theory (SCT) as a somewhat nonlinear, multilevel framework to understand the interplay between disruptive technology diffusion and the reframing of institutional logics? We hope to better understand the complexities in that model by working through the example of genomic medicine and cancer care. This is an example that forces us to more explicitly consider change over time in both the technologies of care and the institutional logics that emerge over time and become the focus of conflict between the professional groups involved in defining the technology and its appropriate use. We propose a neostructural contingency model that explicitly borrows from institutional theory—one that incorporates the important concept of institutional logics.

In the following section, we briefly review the theoretical foundations of structural contingency models and their most recent versions, working toward the goal of conceptualizing genomic medicine and its influence on cancer care as an example of a major technological contingency in health care delivery.

Update on Structural Contingency Models

The heyday of the structural contingency approach (SCA) in the 1960s coincided with the domination of organization theory by a combined focus on closed system models and a rational systems view of organizations (Scott and Davis, 2007). Scholars studied organizations with a primary focus on specific goals and a formal structure characterized by rules and routines; the analysis of organizations tended to focus inside the “machine,” with little attention to social influences or how the environment of the organization affects behavior and performance.

However, even as early as the late 1950s and early 1960s, we witnessed a burst of writing focused on interorganizational analyses that broke with the closed-system approach (Dill, 1958; Lawrence and Lorsch, 1967; Terreberry, 1968). Contingency theory approaches moved still further from the closed-rational model to examine how aspects of the environment influenced both management style and organizational structure. Theorists described how organizations must balance their internal needs while adapting to environmental circumstances rather than operating as closed systems (Morgan, 1986), processes of central importance to a natural systems approach. For example, an organization's performance was seen as a function of the fit among the organization, its environment, its strategy, and its structure (Duncan, 1972; Miles and Snow, 1978; Venkatraman, 1989). An important point these theorists made was that the fit between organization and environment is not necessarily static or linearly deterministic.

Pennings (1975) found that the structural contingency model may be most appropriate for work organizations with a higher or stronger degree of task interdependence with regard to work flow. The degree of interdependence may determine how much uncertainty affects different components or clusters of operations. The more tightly interlinked or sequentially dependent the segments of work flow (e.g., the stages of care in a cancer patient's treatment plan) are, the more likely that disturbances in the environment will spread across multiple organizational units.

The natural systems view pushed theorists beyond a rational view of routines, rules, and formal blueprints to focus on individuals, interests, and interactions. Structural contingency theory argues that "there is no one best way to organize" and that "any way of organizing is not equally effective" under all conditions (Galbraith, 1973, p. 2). To be most effective, organizational structures should be appropriate to the work performed, the environmental conditions facing the organization, or both (Galbraith, 1973). Included within the arenas of technology and environmental forces are nonrational forces, interpersonal connections, and informal routines, all of which can push the alignment between technology and structure out of balance. Put another way, structural adjustments will be needed periodically within and across different parts of the organization to regain fit with either work (technology) or environmental conditions (Donaldson, 1987).

The open systems view of organizations developed as a reaction to the work of both rational and natural system theorists, and it acknowledges that organizations are situated in environments that are sources of inputs (individuals, groups, and organizations), technologies, markets, and a host of other external pressures. Organizations have to defend themselves from environmental threats, such as external control, or from unanticipated shocks or changes in resource supplies or markets. Organizations must adapt to their environments in order to survive; the ability to adapt is how they can maintain themselves even in the face of external shocks or turbulent markets. There are numerous ways to adapt, given combinations of many diverse organizational structural components.

In SCT terms, environmental characteristics influence strategies and structures (Fredericks, 2005): organizations must deal with differing types and intensities of environmental pressures (e.g., markets, technology, competitors). In addition, within organizations there may be competing conceptualizations of technology and structure (Glazer and Weiss, 1993); it is therefore up to organizations to determine how best to compete by evolving strategies and structures that are particularly effective for the organization and the particular markets it serves. One well-known example is Schoonhoven's (1981) study of hospital operating room suites. She argued that contingency theory must make room for representing the complexity of the relations among technological uncertainty, structure, and organizational effectiveness, and that the selection of a particular level of analysis (such as the work group versus the entire organization) is an important decision. Drazin and Van de Ven (1985) make a similar case for flexibility; their work shows that fit is the joint product of managerial selection and departures from an ideal (multivariate) pattern.

For nearly thirty years, a framework that combines both natural and open systems emphases has dominated organization theory: neoinstitutional theory. At the heart of this theory is the notion that institutions such as the state, the church, and the family affect and determine organizational structure—in other words, that organizations are embedded in a matrix of social structures. The level of analysis in neoinstitutional theory focuses on the organizational field, defined by DiMaggio and Powell (1983) as "those organizations that, in the aggregate, constitute a recognized area of institutional life: key suppliers, resource and product consumers, regulatory agencies, and other organizations that produce similar services or products" (p. 148). This is a level of analysis that bridges other more commonly used foci of either the organization itself or macrounits of analysis such as networks of organizations or populations of organizations (DiMaggio and Powell, 1991; Hannan and Freeman, 1989; Scott, 2001). The organizational field is typically more heterogeneous than an organizational population and comprises both horizontal linkages to similar organizations (as in a network) and vertical linkages to upstream and downstream actors, government agencies, and social institutions.

DiMaggio and Powell (1983) argued that once a set of organizations emerges as a field, a paradox arises: rational actors make their organizations increasingly similar to each other as they try to change them in response to environmental pressures. Those pressures could take the form of government regulations, perceptions of what is legitimate (e.g., what is normative, professionally desired, or culturally valued; Suchman, 1995), or both. Once this has happened, powerful forces emerge that lead organizations to become more similar to each other. This kind of homogenization is isomorphic—a constraining process that forces one unit in a population to resemble others facing the same set of environmental conditions. As Scott (1998) explains, “Institutional theory emphasizes that organizations are … strongly influenced by their environments—but … socially constructed belief systems and normative rules exercise enormous control over organizations—both how they are structured and how they carry out their work” (p. 117).

The concept of institutional logics illustrates how these broader belief systems in turn shape the cognition and behavior of actors in a given environment. Thornton and Ocasio (2008) define institutional logics as “the socially constructed, historical patterns of material practices, assumptions, values, beliefs, and rules by which individuals produce and reproduce their material subsistence, organize time and space, and provide meaning to their social reality” (p. 804). Organizational sociologists often use the concept of agency to describe the capacity of actors, institutional or individual, to act independently and make their own free choices. Institutional logics can be thought of as linking agency and cognition, or sense making about the environment, to socially constructed institutional practices and rule structures. Understanding organizational behavior requires considering it within its social and institutional contexts: institutional context both regularizes behavior and provides opportunity for agency and change.

A relatively recent but growing body of work argues that manipulating institutional logics—particularly when they are overlapping, plural in nature, or only partly developed (Barley and Tolbert, 1997; Phillips, Lawrence, and Hardy, 2004)—is a key mechanism to explain institutional change (Green, 2004). There have also been recent studies addressing the process of creating new meanings within organizational fields (Friedland and Alford, 1991; Thornton, Jones, and Kury, 2005; Tripsas, 2009; Whittington, Owen-Smith, and Powell, 2009). These studies show that changes in meaning influence both institutional structures and the power that participants have within the organizational field—those horizontal linkages to similar organizations (as in a network) as well as vertical linkages to upstream and downstream actors, government agencies, and social institutions (DiMaggio and Powell, 1983; Thornton et al., 2005).

In most of these recent studies, the organizational field is the key level of analysis. We argue that the field level is critical to reformulating and applying SCT to understand how changing medical technologies influence and exert pressure on health care organizations. Scott (2012) recently reminded us that the concept of organizational field goes beyond simply the vertical and horizontal linkages connecting organizations within a market. Rather, the field “connotes the existence of a community of organizations that partakes of a common meaning system and whose participants interact more frequently and fatefully with one another than with actors outside the field” (p. 32). In fact, Scott's description of organizational fields includes actors, governance structures, and institutional logics. Furthermore, organizational fields are the arenas within which conflict over definitions of new technologies (and control over their use) is observed and resolved, particularly within fields where multiple professional groups must interact and multiple definitions are possible. And finally, the organizational field is itself a unit of observation that spans more traditional levels of analysis within the study of organizations: the organization, the network, professional associations connected to those organizations, and both local markets and national policy structures.

To summarize, then, these more recent developments in neoinstitutional theory lead us to emphasize two important distinctions: (1) there are at least two important levels of analysis to consider for rebuilding SCT (meso- and macrolevels), and the organizational field approach spans both; and (2) there are two types of institutional forces at work at both of these levels: institutional pressures and institutional logics. Institutional pressures work as a set of external environmental pressures to which organizations must respond (as in neoinstitutional theory of the 1990s). In addition, the institutional environment can be seen as the stage on which definitions of technologies and generally assumed logics or meanings are debated, argued, and enacted. At the mesolevel, these environmental pressures and conflicts over logics blossom within and across professional groups and networks. The influence of professionals could be felt indirectly, as through professional associations. Professionals' influence could also be felt directly, as in conversations about how to define technology or in debates about the appropriate status hierarchy within a multidisciplinary care treatment team.

At the macrolevel, definitions of new technologies become codified into political stances characterizing interprofessional conflicts over control, regulatory structures governing licensure and certification, and reimbursement mechanisms over payments for the use of new technologies. Definitions of new technologies change over time in a nonlinear fashion: they are emergent rather than deterministic. They are often stated in terms of associations rather than direct causality, and they are subject to political and policy processes that are themselves often unpredictable, recursive, or circular.

The changing nature of technology and the changing interpretations of professional groups require a more sophisticated set of organizational forms than we have witnessed in the past or than empirical studies based on structural contingency theory have traditionally used. This matters for an organization because of the pressure that technological change places on its structure; changing technology also means the organization must have the structural flexibility to meet those changes dynamically. Thus, a neo-SCT approach must account for more than just changes in technology; it must also account for changing (and conflicting) interpretations of new technologies and their institutional logics by the professional groups involved in the work of the organizational field.

In the next section we briefly review developments in genomic medicine. Then we move toward an analysis of genomic medicine in cancer treatment as a type of disruptive technology.

How to Conceptualize Genomic Medicine

Advances in genomically informed therapy are rapidly changing the nature of cancer prevention and therapy. The use of pharmacogenomic testing accelerated with the completion of the Human Genome Project in 2003, which produced the first complete map of the human genetic code (Little et al., 2003). The years following the Genome Project's success brought a rapid expansion of genetic research and discovery. This work included the International Haplotype Map (HapMap) Project, which catalogued genetic variants called single nucleotide polymorphisms (SNPs), including a number of variants whose presence is associated with a greater propensity to develop certain diseases (Feero, Guttmacher, and Collins, 2008). The application of these variants to understanding various disease processes has been spurred by population-based genome-wide association studies (GWAS; P3G Consortium et al., 2009). Finally, in 2008, a new genetic mapping consortium project, the 1000 Genomes Project, was established to sequence the genomes of one thousand individuals in order to identify the most common genetic variants, which were then compiled into a comprehensive database available to researchers (1000 Genomes Project Consortium, 2010).

These major research efforts contributed to the rapid expansion of GWAS studies related to the diagnosis and treatment of cancer. Genetic linkage studies in families with hereditary breast, ovarian, and colon cancer have identified several important genetic variants that are strongly predictive of developing the disease. These markers are currently being used for screening, disease risk counseling, and preventive treatment programs for breast cancer (McDermott, Downing, and Stratton, 2011). The use of presymptomatic genetic testing and targeted therapies tailored to the genetic profiles of tumors is part of the recommended evaluation for other tumor sites as well, including cancers of the colon and lung. Genetic alterations and expression profiles are already being used as prognostic markers to direct chemotherapy and other interventions. For example, KRAS (from Kirsten rat sarcoma) genotyping of colon tumors has been shown to correlate with improved treatment efficacy and reduced toxicity for colon cancer (Macconaill and Garraway, 2012; McDermott et al., 2008). Similar results exist for the Oncotype DX gene expression assay in the treatment of certain breast cancers (Zujewski and Karmin, 2008).

These advances have been challenging for cancer care organizations in part because they disrupt the old paradigm of clinical treatment. Clinicians no longer deal only with classes of chemotherapy regimens organized around standardized treatment guidelines and protocols; now, for selected cancers, they must consider whether certain gene variants are present or absent in order to target therapies in specific ways. This has led to an explosion of genetic testing, now one of the fastest-growing areas of billing and reimbursement in cancer (UnitedHealth Center for Health Reform and Modernization, 2012). Some surgical and medical oncologists routinely order full-panel genetic profiles for selected cancers, even though many genetic variants affect only a small number of individuals who either are at risk for or have cancer. Other clinicians inadvertently order duplicate genetic tests because genomic testing is inadequately embedded in the medical record. Universal standards and approaches to educate physicians regarding the appropriate interpretation and use of these tests are limited. Complex issues surrounding reimbursement for both the diagnostic approaches and the resulting therapeutic implications need to be addressed, especially when genetic information suggests that off-label treatment avenues might be effective. Some of the demand may also reflect the fact that these tests are increasingly marketed directly to consumers, who are inquiring about the advantages of genetic testing for them (Geransar and Einsiedel, 2008). Whether these practices have clinical utility and lead to more cost-effective cancer treatment through more exact matching of genetic information to optimal treatment is of concern to health insurers and health policymakers. Clinical guidelines for genetic testing are only beginning to emerge, and most cancer care organizations do not have the capacity to translate general genomic guidelines into specific clinical practices.

As a result of this uncertainty, cancer care organizations are increasingly challenged with finding the best methods of using genomic analysis in day-to-day practice. It is insufficient just to recommend a specific genetic test; the organization must ensure the availability and adequacy of all the steps needed in the testing process. The ability of information systems to track patients and families to ensure appropriate referral, counseling, and testing is limited. Investment decisions in appropriate infrastructure, including geneticists, genetic counselors, and continuous training of clinical staff, are juxtaposed against the rapid pace of innovation in this area, coupled with uncertainties over what payers will reimburse and how the availability of these technologies affects malpractice and organizational liability. New organizational alignments are necessary with patients and their families, clinical laboratories, state licensing agencies, and national accrediting organizations. Managing these uncertainties, opportunities, and new alliances both within and outside the cancer organization will require considerable adaptation and change in the coming years.

Given these recent developments in genomic medicine and the proliferation of genetic testing, it makes sense to conceptualize genomic medicine as a form of disruptive technology, a concept with a long history in both the economics and organization theory literatures. Not surprisingly, economics takes a market-based view of disruptive technologies, assuming that firms choose to engage in developing and deploying disruptive technologies to respond to marketplace shifts. Disruptive technologies create significant uncertainty. Differing skill sets and new markets created by a disruptive technology may lead firms to recombine resources and develop new capabilities, and they may need to design and employ different leveraging strategies to exploit their new and current capabilities (Bowman and Hurry, 1993). In fact, the case of direct-to-consumer (DTC) genetic testing is itself a good example of a disruptive technology transforming the industry (and market) of genetic testing. As reviewed by Leachman and colleagues (2011), a wide range of products are available for direct purchase by consumers without physician intermediation and at reasonable cost, from genotyping of specific genetic variants, to genome-wide "SNP chips" capable of assaying many thousands of individual variants for a single person, to, for a substantial price, a complete overview of most variations present within an individual genome.

At first, demand for these products rose sharply and prices declined. More recently this disruptive technology and the firms pursuing a DTC approach have been “the subject of vocal criticism from members of the medical community and unwelcome attention from the U.S. Congress and regulators. Both doctors and legislators have suggested that such tests are of little value and may well be dangerous if customers receive bad news in an unmediated fashion” (Leachman et al., 2011, p. 36). As a result, a number of genetic testing firms are backing away from the DTC model and returning to a physician-ordered and -interpreted model.

The organization theory literature also conceptualizes disruptive technology as linked to environmental change, but it is usually framed as a severe shock or jolt caused by unforeseen or unanticipated environmental changes (Eppink, 1978; Meyer, 1982; Zajac and Shortell, 1989; Bahrami, 1992; Sirmon, Hitt, and Ireland, 2007). Disruptive technologies and environmental shocks can lead to profound organizational change as organizations struggle to adapt to uncertain and fast-occurring environmental changes (Aaker and Mascarenhas, 1984; Alexander, D'Aunno, and Succi, 1996). Shifts in the biological understanding of the genomic bases of cancer are a perfect example of a discontinuous, disruptive technological change that has precipitated a paradigm shift in cancer treatment (Sirmon et al., 2007). As we will describe in detail, cancer treatment is now profoundly dependent on a whole range of innovative technologies, including electronic health records, the capacity to link to large information technology data warehouses, and biospecimen sampling, assay, and storage capacity. In essence, genomic medicine requires cancer delivery systems to rethink how to diagnose, deliver, and care for patients. It means a complete shift away from standardized care processes for patients, since the same disease may look different and respond to the same treatment differently in different patients.

Genomic Medicine and Cancer Treatment: Uncertainty and Multiple Barriers

Genomic medicine has changed both the testing processes used to diagnose cancer and the way treatment regimens are imagined, organized, and delivered. Of key importance are genetic testing itself and the interpretation of test results. Within cancer care, there are at least four important areas in which genetic testing is relevant (Khoury et al., 2012): predispositional or susceptibility testing, diagnostic testing, prognostic testing (to predict the risk of recurrence), and pharmacogenomic testing (to predict drug response). In essence, these potential genomic medicine applications span the continuum of cancer diagnosis and treatment management. Nonetheless, there has been very little research either on the effectiveness of genomic medicine testing within these areas or on its implementation. What we do not know about the application of genomic medicine to cancer treatment substantially outweighs what we do know, and these knowledge lacunae constitute important barriers to realizing the promise of genomic medicine in cancer treatment. This lack of research on testing effectiveness further complicates the objective of restructuring health care treatment organizations to incorporate genomic medicine: restructuring cannot easily proceed while major questions and uncertainties still surround genomic medicine technology.

Despite those uncertainties, it is clear that the organizations involved in cancer treatment will be subject to substantial change as they address the challenges of providing safe and technologically appropriate environments for biologics, genetically modified organisms, and other targeted therapies. Unfortunately, there are few detailed descriptions of the types of changes needed or the many strategic decisions that must be addressed by multiple stakeholders in this process. One exception is the well-detailed example provided in Bamford, Wood, and Shaw's (2005) description of the change approach used by a large London teaching hospital within the National Health Service (NHS) as it prepared to engage in clinical trials of new gene therapy agents. Bamford and colleagues reviewed all of the regulations covering gene therapy at multiple jurisdictional levels, national through local. They also documented the risk assessment process used to prepare an entire hospital environment for gene therapies, including implementing all systems and processes needed to sample tissue, determine genetic composition, match to appropriate standards, and even dispose of biologic wastes. This enormous change process began by involving core employees (clinical, technical, and managerial) at all levels who were committed to the goals of gene therapy. These core teams won the support of employees at different levels, regulatory bodies, and patients throughout the hospital. In terms of the structures needed to accomplish this huge task, the NHS example points to the importance of mapping and engaging all possible stakeholders in something like a multilevel, multifocused shared decision-making process. A more recent example is Lubin and colleagues' (2009) report of a project to improve the clarity of laboratory reports of genetic test results for use by primary care physicians.
This project used facilitated work group discussions with clinicians from pediatric, obstetrics-gynecology, and family practices who provided their perspectives on how best to structure molecular genetic testing results so as to enhance readability and comprehension and avoid misunderstandings that might compromise patient care. Although this task was more narrowly defined than the NHS task and focused entirely on improved presentation and comprehension of genetic test results, this team also found it necessary to cast a very broad net. The facilitated work groups reviewed a combination of reporting styles and frameworks, included a broad mix of medical and laboratory professionals (as well as patients), and performed separate analyses related to genetic test ordering versus reviewing and interpreting test results. Of considerable interest were results concerning the wide variety of medical and nonmedical personnel in primary care practices whose work involves reading or communicating the results of genetic tests, from secretaries to medical geneticists. These work groups recommended something very similar to the tiered test report structure for genetic tests proposed by the Organization for Economic Cooperation and Development (2005) at an international guideline development conference held in 2005. Both groups recommended that test results should provide information in three categories: “i) basic, but essential information (unique identifiers and the genotypic result); ii) specific information (e.g., date of birth and reason for testing); and, iii) other useful information (e.g., suggestions for further testing)” (Lubin et al., 2009, p. 169). Thus, considerably more information and interpretation are suggested than most standard lab reports provide.

Given the difficulty of interpreting genetic test results, the question arises of who should be involved in that interpretation. Lubin and colleagues' results affirm that primary care practice is now a very common location for cancer treatment planning and delivery. Clinical professionals with specialized training or genetic counselors are not the only people involved (and, of course, such tests are often prescribed by physicians but are also directly available to consumers through direct-to-consumer advertising). The extent to which medical professionals are able to provide such counseling is not well described in the literature, but there is some evidence that referrals to professionals for genetic counseling do not necessarily increase in areas where levels of consumer demand for genetic counseling are high (Centers for Disease Control, 2004; Khoury et al., 2011). There is also considerable uncertainty about whether the supply of genetic counselors nationwide is large enough to meet the growing demand. Training requirements for genetic counseling include a bachelor's degree from an accredited undergraduate institution plus an accredited master's degree in genetic counseling (American Board of Genetic Counseling, 2012). Thus, the pipeline for new genetic counselors is constrained by the currently limited number of accredited training programs at the master's level.

Private foundations and the federal government have recently supported a number of large-scale demonstration projects that are expected to provide additional models of organizational change to foster the effective translation of genomic medicine in patient care; among them are the Center for Medical Technology Policy, the Institute of Medicine's Roundtable on Evidence-Based Medicine, and the Center for Comparative Effectiveness Information. Of particular importance here is the National Cancer Institute's (NCI) four-year demonstration project: the NCI Community Cancer Centers Program (NCCCP; see Johnson et al., 2009).

This project is unusual in that it works to increase the number of early-stage clinical trials on cancer treatment available in smaller community hospitals to support transformative technologies in cancer care. The NCCCP operates in thirty communities across twenty-two states and includes both freestanding and system-connected community hospitals. An important goal of this demonstration project has been to study ways in which the community health care system can be electronically connected so that its patients can take part in the early-phase trials of promising new genomic medicine–based treatments. Significant attention has been focused on how to structurally upgrade community hospital infrastructure to allow the collection, storage, proper annotation, and sharing of blood and tissue samples needed for research. The evaluation of this demonstration project has provided some information on both clinical outcomes and organizational change and may provide guidance on what types of changes in community cancer centers can be sustained to support access to new technologies in cancer care. With continued NCI “branding,” local community cancer centers (and their host community hospitals) intend to continue efforts to expand involvement in clinical trials and quality improvement activities. However, it is not clear whether information technology enhancements would have occurred without the American Recovery and Reinvestment Act (ARRA) incentives to do so, and a number of programs and activities focused on the survivorship stages of cancer treatment and community outreach may not be sustainable without NCI funds. The interconnections between community cancer programs and other research partners (including NCI-designated cancer centers) that were enhanced by the NCCCP were generally considered very valuable by both cancer programs and hospital executives.

The Political Side of Disruptive Technologies

We return now to the questions of theoretical development posed in the introduction of this chapter: how to reconceptualize the technology-structure relationship in medical care using contemporary cancer care as the exemplar. We have noted the expansion of the concept of the institutional environment. It shifts from the idea of the institutional environment as simply a set of external pressures to a more complicated portrait of the environment as the stage on which definitions of technologies and generally assumed logics or meanings are debated, argued, and enacted. Profoundly disruptive technologies such as the development of genomic medicine can generate protracted periods of “sensemaking and restructuring” (Weick, 1995; Barley and Tolbert, 1997; M. Suchman, 2010) at both the level of the health care organization and the organizational field. Within organizations, this framing process might involve intense political conflict about the vocabulary and normative and valuative logics for thinking and talking about the technology. This framing process can result in internal conflicts over choice of strategic path: whether to bet on new technology and label the organization as an innovator or instead to slow down the innovative process while different professional groups spar over control issues. The framing process can also span organizational levels, particularly across organizational fields, causing gaps in technology adoption readiness when, for example, the organization may be ready and willing to adopt a disruptive technology, but a higher level of control (such as accrediting agencies or regulatory agencies) may be unprepared to provide the superstructures that encourage adoption of the new technology.

We briefly review several barriers that have characterized the gap between genomic medicine technology and the structures of cancer treatment organizations. These barriers provide examples of how disruptive technologies, institutional environments, and emergent logics interact to complicate the technology-structure relationship.

At least three long-running schisms within the cancer research and treatment communities have contributed to roadblocks in the journey toward adoption of genomic medicine in cancer treatment, and all three represent important examples of conflicts over the terms or logics, or both, surrounding the adoption of this disruptive technology:

  • The split between research and clinical practice communities (Mukherjee, 2010; Abbott, 2001; Hafferty and Light, 1995)
  • The split between academic research centers and community hospitals (Kaluzny et al., 1995; Fennell and Warnecke, 1988)
  • The split between bench research and other stages of translational research, usually defined as research on how to translate new findings from basic or laboratory research into medical practice quickly and efficiently

Khoury and colleagues (2012) have argued that we particularly lack research on what is often labeled the third stage of translational research for genomic medicine (T3): the assessment of candidate genomic applications in clinical practice.

Conflicts have been reported within the halls of the NCI, within the medical professions engaged in cancer research and treatment, and between major cancer research centers and some of the larger patient advocacy organizations, such as Breast Cancer Action and the National Breast Cancer Coalition (see Mukherjee, 2010). Within the NCI, these discussions have historically focused on the appropriate emphasis and balance between basic science research, clinical translation (Woolf and Johnson, 2007), and cancer control or detection, or both. Within the medical professions involved in cancer research and treatment, the status hierarchy has long favored the basic scientist over those who study clinical applications and community-based clinicians (Abbott, 2001; Hafferty and Light, 1995). At the level of health care organizations and care networks, there are treatment access differentials. The most current methods of cancer treatment and clinical trials tend to be made available within academically based research centers and teaching hospitals; approximately 15 percent of cancer patients are able to access such care (Johnson et al., 2009). Far fewer cancer patients treated in the community have knowledge of or access to these trials, and there is considerable room for improvement in the availability of evidence-based therapy (Institute of Medicine, 2006; McGlynn et al., 2003; Schrag et al., 2000). And finally, as Khoury and colleagues (2012) noted, funding and research efforts through the national institutes have emphasized genomic discovery and research in the first stage of translational research (T1), which connects discovery to potential clinical applications (Scully et al., 2011). Very little research exists in T2 stages or beyond, which assess clinical applications, the implementation of those applications, and the population health impacts of new applications. The result is a gap in knowledge about how these processes work.

The Liability of Newness within Organizations: Multidisciplinary Care Delivery Teams and Genomic Medicine

Quality cancer care is complex. It depends on careful coordination between multiple treatments and providers and on technical information exchange and regular communication flow between all those involved in treatment, including patients, specialist physicians, other specialty disciplines, primary care physicians, and support services (parts of this section draw from Fennell et al., 2010). Taplin and colleagues (2012) have pointed out the challenges of transferring information and responsibility among providers and institutions, a problem at the interfaces of care. Advances in surgical procedures, chemotherapy, computer technology, and targeted molecular and radiation therapies have all led to an increase in multimodality therapy, which increases the number of interfaces among cancer specialists and other clinicians in the treatment of any single patient. Thus, the likelihood of missed connections between providers or treatment stages, or both, due to interface problems has increased substantially as multimodal treatment options become more prominent.

Within cancer programs, one method of ensuring the exchange of patient-related and technical information between all physicians and support services in a patient's care is through multidisciplinary care treatment (MDC) teams. The setting and format of the MDC encourage active involvement of all actors, including patient and family, in the development of a care plan. Once MDC teams are formed, meetings can be convened at multiple times throughout the process of care and can thus serve as an ongoing communication structure aimed at smoothing the transitions between multiple stages of care. MDC teams are normatively approved structures for the development of prospective treatment planning. There is a considerable literature on the expected benefits of MDC in the cancer arena, and both the Commission on Cancer and the NCI have encouraged the diffusion of MDCs (see the review of this literature in Fennell et al., 2010). However, this literature is primarily speculative and is based on the assumption that the team process better ensures coordination in treatment planning when multiple specialties are involved. In fact, there is very little information on exactly what happens within those team meetings and what leads to optimum care planning. In some ways, the multidisciplinary treatment planning team parallels the well-known matrix structure within organizations, in which experts are drawn from across divisions (or disciplines) within a firm in order to work together in teams or project groups on complex problems (Sayles, 1976). Matrix teams allow both functional and product demands to be considered simultaneously, and both vertical and lateral channels of communication are open. The disadvantages of matrix teams, of course, include the very real impact of heightened conflict between functional and product interests and the high-stress work environment where competing claims are common.

Despite the lack of empirical studies, we suspect wide variation across team structures, styles, and processes, all of which can affect MDC team performance. The “multidisciplinary care” label is used quite widely in contemporary cancer practice to describe diverse care structures, some of which look more like tumor boards (large conferences called to retrospectively present and discuss cancer cases and treatment decisions) than MDCs. For example, one important structural difference concerns the distinction between actual and virtual teams. Actual teams perform their work (consultation, care planning, and changes in care plans) with all team members present at the same time and place. This is similar to Thompson's (1967) notion of a “mediating” coordination structure as the best strategy to use when faced with complicated “reciprocal” technologies (tasks that require feedback and communication between all performers). Virtual teams are those in which team members do not meet face-to-face, or do so infrequently. Using a virtual team strategy, the MDC would operate like either a tumor conference (a treatment planning meeting that occurs only once) or the traditional pattern in which patients are seen in a sequence of separate appointments with multiple physicians over a short period of time. Virtual teams are similar to the concept of “teams without co-location” described by Hinds and Bailey (2003). The locus of coordination also differs between actual and virtual teams. For actual teams, care coordination occurs within team meetings, and care options are discussed live, as is follow-up coordination. Careful records of care plan decisions made during MDC team meetings (e.g., use of a treatment summary document as recommended by the Commission on Cancer), and their communication to all team members, are essential. Virtual teams rely on coordination through a staff or clinic coordinator and require formally structured communication.

It is also possible that the MDC team meeting provides a venue for status differences to emerge across the various specialties involved in multimodal treatment. The specialties and roles involved include medical oncology, radiation oncology, surgical oncology, cancer site specialists (such as breast or thoracic), primary care, nursing, pathology, pulmonology, diagnostic radiology, and interventional radiology, along with psychologists, social workers, nutritionists, and clergy or spiritual advisors. With all members having tight schedules, the logistics of organizing such meetings are formidable, and the need to share patient data efficiently, consult collaboratively, and formulate treatment plans probably requires some reliance on customary professional hierarchies when there is insufficient time available for emergent team processes to unfold gradually.

The Liability of Newness as a Multilevel Problem: MDCs and Billing

The success of MDC teams is also dependent on the context within which the teams work: the cancer program, the host hospital, the integrated health system within which that hospital and cancer program may be embedded, and the more general institutional environment of payment and regulatory policies. MDC teams may be located in regions where major third-party payers do not provide an option for reimbursement of physician time devoted to MDC team conferences or to the hospitals that sponsor MDC teams. Currently, nothing in the health policy reform legislation creates an incentive to reimburse providers for the core activity around which multidisciplinary care is based: multiple providers, whether meeting face-to-face or virtually, jointly determining a prospective care plan for the patient. To our knowledge, there is no generally accepted method to bill for or be reimbursed for the time of physicians devoted to team meetings or follow-up discussions. Team discussions are more likely to occur within large multidisciplinary care practices or among salaried physicians within health maintenance organizations where the organization expects and encourages collaborative care planning. However, in the private practice or managed care setting, the cancer care physician must decide as an individual to volunteer time to MDC teams. This lack of a billing or reimbursement method, or both, represents a major disjunction between the institutional environment of most MDC teams and the sustainability of the MDC concept. The effective operation of such teams is then dependent on the commitment of individual physicians and the extent to which the health care organizations within which they practice provide flexible time and organizational support for the operation of MDC teams (such as team coordinators, data, and information technology support).

Disruptive Technologies and the Upheaval of Traditional Status Hierarchies

Within MDC teams, the involvement of multiple cancer specialists is crucial. The most commonly recognized trio of important multimodal care providers is usually the cancer surgeon, the medical oncologist, and the radiation therapist. However, the newly developed prominence of genomics in cancer treatment planning may lead to an upset in that traditional medical hierarchy. The role of the pathologist in cancer treatment has become much more central to the treatment planning process. The pathologist provides critical interpretation of genomic tests, and those results then dictate the types of treatments needed and, in some cases, the most appropriate sequence of multimodal treatments. In a recent statement published in the American Journal of Clinical Pathology, Haspel and colleagues (2010) stated:

Today we use a combination of chemical stains, antibodies, and specialized techniques and instrumentation to identify and characterize cells and tissues. Tomorrow, these methods will be augmented, and, in some cases supplanted, by digital gene expression profiling that will elucidate “disease pathways” at the molecular level to provide the high-precision diagnostic information required for exquisitely tailored (i.e., individualized) pharmacotherapy. Perhaps the most compelling current example is in oncology… The role of pathologists will be to integrate these data with other pertinent information in the (electronic) medical record and produce clinically actionable recommendations. (p. 833)

This report also cites Hunter, Khoury, and Drazen's suggestion that “the genomic ‘genie’ is out of the bottle” (2008, p. 105) and that the medical specialty most appropriate to meet the challenges of genomic medicine is pathology, in collaboration with genetic counselors. In fact, the Association for Molecular Pathology issued a statement suggesting that pathologists should be the primary consult for both physicians and patients: “Molecular pathology professionals who perform and interpret genetic tests play a key role in the education of clinicians and consumers in the best use and interpretation of genetic tests” (as described in Lakhman, 2010). This suggests the central importance of pathologists to the direct-to-consumer genomics market. Training programs have been established at Beth Israel Hospital in Boston and elsewhere to steer pathologists toward becoming “the diagnostic enablers” of genomic medicine for both clinicians and consumers (Lakhman, 2010).

Structural Gaps between Genomic Medicine Requirements and Health Care Organizational Capacity

Finally, we present an example from the domain of cancer research and treatment that underscores the gap between what is needed to provide genomic medicine–based cancer treatment and what current health care organizations are capable of providing. Seldom recognized, but an essential part of the infrastructure of genomic research and clinical applications of genomic testing, are the services provided by biobanks to universities, hospitals, pharmaceutical companies, research and development labs, and researchers at the National Institutes of Health. Biobanks are the organizations that archive biospecimens of various types, such as blood, saliva, plasma, and organ tissue. These are the tissues needed in both genomic research and cancer treatment planning to deliver personalized treatment plans. Silberman (2010) has called biobanks “the biological back end of data-driven medicine” (p. 159). Recently, the NCI's plans to catalogue all of the genetic variations that turn healthy cells into cancer cells, the Cancer Genome Atlas (TCGA), were brought to a standstill by what was discovered about the quality of the human tissue samples stored in most US biobanks. Nationally, the rate of unacceptable (spoiled, decayed, lacking viable DNA material) tissue samples varies across biobanks, but many were found to have unacceptable rates as high as 99 percent. Plans to obtain fifteen hundred samples of glioblastoma for analysis by TCGA were shelved when no more than five hundred could be located worldwide.

These shockingly high sample failure rates were the result of several important characteristics of what until recently had been considered acceptable tissue sampling and storage protocols but are grossly inadequate for the scope of genomic medicine–based research and treatment. First, the protocols for tissue collection have been simplistic, and the extent to which they are followed with any care varies enormously across operating suites and cancer treatment centers. Surgeons operating on cancer patients are, of course, consumed with the job of providing precise and skilled surgical procedures, focusing on the patient on the operating table and feedback from monitoring equipment. What happens to the tumor once it is excised is often an afterthought. In fact, it should be transported immediately, preserved, and frozen to prevent decay, but samples are often allowed to sit at room temperature for hours (or even over the weekend) before being processed. Carolyn Compton, director of NCI's Office of Biorepositories and Biospecimen Research, said, “Fixing this will require a new level of awareness that the tissue in the bucket is now one of the most important parts of the patient… Analysis of that tissue will determine all treatment decisions downstream” (quoted in Silberman, 2010).

In addition to poor sampling techniques, scientists have only recently recognized that gene expression patterns in sampled tissues change rapidly as soon as that tissue is separated from its human blood supply. In essence, “the tumor that is deposited into a biobank is not really the same one that was removed from the patient” (Silberman, 2010). Thus, the sample can generate unreliable data, which are then translated into erroneous treatment decisions. The path of the sampled tissue into storage continues the process of decay and faulty tissue samples. Most of our methods for tissue storage were developed in the 1940s, with only a few innovative updates in technique in the interim. Those methods, relying primarily on formalin fixation and cryogenic freezing with glycerol or paraffin as a protectant, were more than adequate for the world of medical research that needed only small study sizes and simple tissue samples. But for genetic materials, these methods introduce structural damage and significant alterations to cellular RNA. The compromised quality of the tissue data being generated, and of the genetic treatments themselves (some of which are injected directly into patients), creates significant health hazards.

Before we leave this disturbing story, we should mention another disjunction, this time between the institutional level of oversight and the treatment and research levels of the organizations directly involved in biobanks and cancer patient treatment. The Food and Drug Administration does not monitor biobanks; they are outside the purview of federal regulatory agencies and are subject only to industry self-imposed supervision. The International Society for Biological and Environmental Repositories provides some of that supervision, but only a small proportion of US biobanks are members of the society. Furthermore, the first set of guidelines for the industry was published only in 2005; thus, there is no history or professional experience of using guidelines within this industry.

Conclusion

We began this chapter by posing a set of questions concerning the redefinition and reconceptualization of structural contingency theory. Those questions highlighted the nature of disruptive technologies (such as genomic medicine in cancer treatment), the level of analysis that would be most useful in such a reconceptualization, and how to define fit within the world of genomic medicine in cancer care in the quest for better health care organizational performance. The bulk of this chapter has addressed those questions by examining genomic medicine–related information on how this technology works within cancer care, who is involved in discussions to define that technology and use it, and how the actors in those discussions and the definitions of technology can change over time.

Consider the original social context of the era when SCT developed and the types of technologies around which the theory's precepts emerged: technologies were often complex (assembly lines, continuous process technologies) but still categorizable into straightforward theoretical templates. Thompson's (1967) typology of task interdependence remains valuable to this day. The basic question was whether a task was based on a series of sequential steps, on the pooling of separate activities into an end product, or on reciprocal interdependence between subunits. When SCT was developed, it was far simpler to think about structures fitting the characteristics of technologies because the technology could be labeled and the managers and owners of an individual organization could do that cognitive work. But with genomic medicine, the technology is highly uncertain. The field is still an emergent knowledge base and subject to reinterpretation and renegotiation as that knowledge base expands, affecting both cancer diagnosis and treatment domains. Guidelines are still being developed and redeveloped. In this domain, the task of figuring out the fit equation is never completed: no one is ever completely done, and no one can figure it out just once.

Furthermore, an individual organization cannot define the nature of this technology: the organizational field is fully engaged in this process. That field includes a number of professional groups inside and outside medicine, a variety of diagnosis and treatment organizations, advocacy and patient support groups, government agencies at both state and federal levels, and research institutes in both the private and public sectors. Recall Scott's recent emphasis on the organizational field as “a community of organizations that partakes of a common meaning system and whose participants interact more frequently and fatefully with each other than with actors outside the field” (2012, p. 32). But in the case of genomic medicine, and no doubt in the case of other truly disruptive technologies, the members of that community are changeable and the common meaning system is only partially present. The concepts, definitions, and logics in this arena are not well established and are subject to considerable argument and disruption. By way of example, recall that status hierarchies among specialty care physicians are not as well established in cancer treatment as they were twenty years ago, and the lead actors are themselves emergent and changeable.

Furthermore, we need to remember that the organizational field is itself a construct that spans multiple levels of analysis. The field is definitely an appropriate type of focus for this thorny, whiplashed, rapidly developing, contested, disruptive technology, but it is a unit of analysis that includes other commonly examined units of analysis: organizations, markets, professional networks, regulatory agencies. It also includes the vertical and hierarchical connections that link these various actors. And those linkages and the networks they define are dynamic and changeable.

Drawing from the example of genomic medicine in cancer care, we have identified several critical precepts that can be pulled together to define a neostructural contingency theory, using important contributions from neoinstitutional theory. Whenever disruptive technologies like genomic medicine in cancer treatment emerge, questions about what types of organizational forms are best suited to produce high levels of performance reliably must be based on models that can handle a set of essential characteristics (these are summarized in table 9.1):

  • An examination of structures at multiple levels—certainly the organizational field but also the structures of organizational and professional networks.
  • An examination of structures that are expected to change (perhaps rapidly) over time. Thus our analytical models must allow dynamic change, and organizations and networks themselves will need fluid structures in order to perform at high levels over time. Dynamic models of change require longitudinal (and historical) methods of data collection and analysis. This will frustrate those who prefer quick answers and short-term results.
  • A focus on the relationships linking professions and organizations and regulatory structures, and active consideration of those linkages as the stage on which institutional logics defining and interpreting new technologies emerge, reach dominance, become the focus of disagreement, and are renegotiated. This is not a static model, and changes in the definition of disruptive technologies will perforce lead to changes in how organizational performance is defined and what types of organizational structures are more likely to produce high level performance.
  • An appreciation of the complex feedback loops that will cause change across traditional levels of analysis and within organizational fields. Institutional logics may emerge at any level of this messy picture: within professional networks, across treatment organizations, or within the agencies of research found in either the public (NIH) or private (American Cancer Society) sectors. Where those logics emerge, how they travel or cross levels and barriers, and how they then change in content or form are important parts of a neostructural contingency theory.

Table 9.1 Neostructural Contingency Theory

| Original Structural Contingency Theory | Neostructural Contingency Theory |
| --- | --- |
| Technology is complex but categorizable. | Technology is highly uncertain and disruptive. |
| A “fit equation” can be identified and obtained. | A “fit equation” is never completely obtained and must be revisited continually. |
| The individual organization defines technology. | Technology is defined by organizations as well as by organizational fields and institutional logics. |
| Emphasis on the stability of organizations and meanings. | Emphasis on the changing nature of members of organizational fields and of the institutional logics defining the environment (connected to conflicting logics). |
| Examination of structures at the organizational level. | Examination of structures at multiple levels, including the organizational field and networks. |
| Assumed change occurs slowly; emphasis on the static nature of technologies and structures. | Emphasis on dynamic change, viewing organizations as needing fluid structures in order to perform at high levels over time. |
| Focus on organization-based technologies, particularly manufacturing (assembly lines) or batch processing; professionals consulted to help manage the technology. | Focus on the relationships linking professions, organizations, and regulatory structures; these relationships provide the platform on which institutional logics are defined and interpreted, and on which new technologies emerge, reach dominance, become the focus of disagreement, and are renegotiated. |
| There is no one best way to organize. | There is no one best way to organize. |

Finally, at the heart of this new approach is the acknowledgment of the core precept of even the earliest version of structural contingency theory (SCT): there is no one best way to organize. That is still true, whether the disruptive technology under consideration is the discovery of diagnostic radiology or the application of genomic medicine to cancer care. It is a puzzle wrapped in at least four complex propositions (maybe enigmas). Our job now is to construct a framework for that puzzle that functions across levels, over time, and across politics and meanings within and across organizations. That job remains to be completed, and the development of a complete logical calculus of assumptions and derived hypotheses is needed to fully realize a neostructural contingency theory. We hope this chapter has contributed at least the foundational structure for that building project.

Key Terms

  1. Cancer care
  2. Disruptive technology
  3. Genetic testing
  4. Genomic medicine
  5. Institutional logics
  6. Medical technology
  7. Multidisciplinary care treatment team
  8. Neoinstitutional theory
  9. Neostructural contingency theory
  10. Organizational field