What is the organization that establishes standards for the operation of hospitals?

Michael G. H. McGeary

Since the passage of Medicare legislation in 1965, Section 1861 of the Social Security Act has stated that hospitals participating in Medicare must meet certain requirements specified in the act and that the Secretary of the Department of Health, Education and Welfare (HEW) [now the Department of Health and Human Services (DHHS)] may impose additional requirements found necessary to ensure the health and safety of Medicare beneficiaries receiving services in hospitals. On this basis, the Conditions of Participation, a set of regulations setting minimum health and safety standards for hospitals participating in Medicare, were promulgated in 1966 and substantially revised in 1986.

Also since 1965, under authority of Section 1865 of the Social Security Act, hospitals accredited by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO or the Joint Commission) or the American Osteopathic Association (AOA) have been automatically "deemed" to meet all the health and safety requirements for participation except the utilization review requirement, the psychiatric hospital special conditions, and the special requirements for hospital providers of long-term-care services. As a result of this deemed status provision, most hospitals participating in Medicare do so by meeting the standards of a private body governed by representatives of the health providers themselves. Currently, about 5,400 (77.1 percent) of the 7,000 or so hospitals participating in Medicare are accredited. The 1,600 or so participating hospitals that are unaccredited1 tend to be small and located in nonurbanized areas. A 1980 study found that about 70 percent of the unaccredited hospitals had fewer than 50 beds, compared with only 13 percent of the accredited hospitals (see Table 7.1).

TABLE 7.1

Medicare Participating Hospitals, 1980.

The current federal standards for hospitals participating in Medicare are presented in the Code of Federal Regulations (CFR) as 24 "Conditions of Participation," containing 75 specific standards (see Table 7.2).2 The responsibility for revising the Conditions of Participation lies with the Bureau of Eligibility, Reimbursement and Coverage of the Health Care Financing Administration (HCFA). A separate HCFA unit, the Health Standards and Quality Bureau (HSQB), is responsible for administering and enforcing the Conditions of Participation. In addition to overseeing about 1,600 certified and 5,400 accredited hospitals, HSQB enforces separate sets of Conditions of Participation for over 25,000 other Medicare providers, including approximately 10,000 skilled nursing facilities, 5,700 home health agencies, and 4,775 laboratories. The actual compliance of hospitals with the Conditions of Participation is monitored for the federal government by each state through periodic on-site surveys by personnel of the state agency that licenses hospitals and other health facilities (or, in a few cases, by an equivalent agency).

TABLE 7.2

Current Medicare Conditions of Participation and Standards for Hospitals.

The Joint Commission on Accreditation of Hospitals (JCAH) was created in 1951 to accredit hospitals that met its minimum health and safety standards. In 1987, JCAH changed its name to the Joint Commission on Accreditation of Healthcare Organizations in recognition that since 1970 it had developed accreditation programs for additional health services organizations delivering long-term care, ambulatory health care, home care, hospice care, mental health care, and "managed" care [for example, health maintenance organizations (HMOs) and preferred provider organizations (PPOs)].

The Joint Commission's standards for the 5,400 hospitals it accredits currently are contained in the Accreditation Manual for Hospitals, some sections of which are revised each year through an elaborate process of professional consensus coordinated by its department of standards (see Table 7.3 for the outline of the Joint Commission's hospital standards). The Joint Commission currently is governed by a board of 24 commissioners, 7 each appointed by the American Medical Association (AMA) and the American Hospital Association (AHA), 3 each by the American College of Surgeons (ACS) and the American College of Physicians, 1 by the American Dental Association, and 3 private citizens appointed by the board to add the consumer perspective (JCAHO, 1988a).3 As of late 1988, the Joint Commission had a staff of 320 at its headquarters in Chicago and 310 surveyors located around the country.

TABLE 7.3

Joint Commission on Accreditation of Healthcare Organizations' Hospital Standards, 1990.

Both governmental regulation by HCFA and professional self-regulation by the Joint Commission are aimed at assuring the quality of care provided in hospitals.4 Both sets of standards have evolved from efforts to assure a minimum capacity to provide adequate care to more ambitious efforts to make hospitals assess and improve their organizational and clinical performance in a comprehensive and continuous manner.

Hospital Standards: Origin And Development

Private, voluntary efforts to improve the quality of care in hospitals by setting minimum, and later optimum, standards date from 1918. However, federal facility standards have inevitably accompanied any significant federal expenditures on hospital services or construction, beginning with the first grant-in-aid program for maternal and child health services, the Sheppard-Towner Act of 1921. The two approaches were formally joined in 1965, when the Social Security Act amendments creating Medicare specified that accreditation by JCAH meant that a participating hospital was automatically deemed to meet the federal Conditions of Participation in the Medicare program. Initially, about 60 percent of participating hospitals qualified through accreditation; today about four-fifths of the participating hospitals are accredited by the Joint Commission or, in some cases, the AOA.

Development Of Early Voluntary Standards By The ACS And JCAH

The first standards for the organization and operation of hospitals were set forth by the ACS in 1918 (Davis, 1973; Stephenson, 1981; Roberts et al., 1987). The founders of the ACS considered conditions in many hospitals to be deplorable for patients and physicians alike, and hospital standardization was a stated purpose of the organization at its founding in 1912.

Sixty percent of the applicants for fellowship in the first 3 years of the ACS were rejected because the information in their medical case records was inadequate to judge clinical competence. Thus, the ACS formally established the Hospital Standardization Program, which existed until it was superseded by the JCAH in 1951.

Although the ACS initially promulgated just five requirements, called the "Minimum Standard," only 89 of the 692 hospitals inspected in 1919 met them. The number of accredited hospitals increased steadily, however; by 1950 nearly 3,300 hospitals met the Minimum Standard, accounting for more than half the hospitals in the United States.5

The Minimum Standard emphasized basic structural characteristics considered to be essential to "safeguard the care of every patient within a hospital" (Roberts et al., 1987, p. 937). It required an organized medical staff of licensed medical school graduates who were competent, worthy in character, and ethical. The medical staff had to develop policies and rules approved by the governing body that governed the professional work of the hospital. The rules had to require medical staff meetings at least monthly and periodic reviews of patient care in each department, based on patient records. The specifications for complete patient medical records were detailed, including condition on discharge, follow-up, and autopsy findings in the case of death. Finally, diagnostic and therapeutic facilities had to include at least a clinical laboratory and X-ray department (the entire minimum standard is reproduced in Roberts et al., 1987).

The Minimum Standard had dramatic results (Jost, 1983). By 1935, for example, the proportion of hospitals with organized medical staffs increased from 20 percent to 90 percent. The ACS standards were revised and expanded a number of times over the years. By 1941 an additional 16 standards addressing physical plant, equipment, and administrative organization supplemented the Minimum Standard. Eventually, however, the burden of accrediting several thousand hospitals became too great for the ACS to carry alone. In 1951 it joined with the American College of Physicians, the AHA, and the AMA to form the JCAH (Jost, 1983).6

JCAH carried on the ACS principles for improving health care in hospitals—voluntary private accreditation, minimum health and safety standards based on the consensus of health professionals, and confidential on-site surveys that involved education and consultation as well as evaluation (Roberts et al., 1987). In 1961 JCAH began to hire its own surveyors rather than use ACS and AMA staff and in 1964 it began to charge a fee for inspections (Jost, 1983). By 1965, when the legislation creating Medicare and Medicaid was passed, JCAH was already accrediting 60 percent of the hospitals (4,308 of 7,123) with 66 percent of the beds (1.13 million of 1.7 million) (AHA, 1966).

Early Government Standards

State licensing programs for hospitals were not common until the early 1950s. Most were stimulated by federal requirements (the link in timing between federal requirements and state regulatory activity is evident from inspecting the tables in Fry, 1965). Fewer than a dozen states had hospital regulations before World War II (Worthington and Silver, 1970). Federal hospital standards were imposed in 1935 for maternity and children's services, under regulatory authority contained in Title V of the Social Security Act (Somers, 1969). In 1946 the Hospital Survey and Construction (Hill-Burton) Act required the states to establish minimum standards for maintaining and operating hospital buildings aided by the act. At that time the AHA, the Public Health Service (PHS), the Council of State Governments, and other organizations sponsored a model hospital licensing law. This model law was adopted in many states, especially after 1950 amendments to the Social Security Act required states using federal matching funds for the payment of health care for welfare recipients to designate an agency to establish and maintain standards for facilities providing the care (Somers, 1969).

In 1964 the Hill-Harris amendments to the Hill-Burton Act required state licensure programs that went beyond building conditions to the administration of services. Nevertheless, in 1965 one state (Delaware) still did not license hospitals and Ohio and Wisconsin only licensed maternity hospitals and maternity units in general hospitals. Connecticut, on the other hand, had an extensive program for inspecting and licensing hospitals (Foster, 1965). New York and Michigan had just passed the first comprehensive hospital codes that addressed the quality of medical service organization and delivery (Worthington and Silver, 1970).

A series of studies and surveys in the late 1950s and early 1960s also found that the hospital survey programs of the states varied greatly in focus, intensity, and composition of the inspection team (Taylor and Donald, 1957; McNerney, 1962; Foster, 1965; Fry, 1965). Nearly all emphasized fire safety and sanitation, but fewer than 40 looked at nurse staffing and practices and fewer than 30 looked at medical staffing and practices. Just 37 states inspected hospitals annually. Nurses were on inspection teams in only 27 states, and the use of physicians in state licensure programs was rare (Foster, 1965).

Development Of The Medicare Conditions Of Participation, 1965-1966

The drafters of the Medicare legislation were aware of the variability in the extent and application of state licensure standards. They knew that several thousand, primarily small rural or proprietary hospitals, with a third of the nation's bed supply, were not in JCAH's voluntary accreditation program. In order to maximize access of beneficiaries to services, they did not want to exclude unaccredited hospitals from participating in the Medicare program. They could not rely, therefore, on licensure or accreditation to ensure minimum health and safety conditions in all hospitals. At the same time, federal policymakers did not want to create a national licensure program with federal inspectors. Accordingly, the Medicare legislation outlined a program in which hospitals and other providers could participate voluntarily if employees of a state health facility inspection agency certified that the providers met certain federal statutory and regulatory requirements or if they were accredited by JCAH or another nationally recognized accreditation organization.

The 1965 amendments to the Social Security Act that established Medicare contained certain minimum requirements for hospitals, including the maintenance of clinical records, medical staff bylaws, a 24-hour nursing service supervised by a registered nurse, utilization review planning, institutional planning and capital budgeting, and state licensure. Hospitals also had to meet any other requirements as the Secretary of HEW found necessary that were in the interest of the health and safety of individuals furnished services in the institution, provided that such other requirements were not higher than the comparable requirements prescribed for the accreditation of hospitals by JCAH. In addition, institutions accredited as hospitals by JCAH were "deemed" by the law to meet federal requirements without additional inspection or documentation (except the legislative requirements for utilization review, psychiatric hospital special conditions, and special requirements for hospitals providing long-term-care services).

The Bureau of Health Insurance (BHI) of the Social Security Administration's Medicare Bureau was responsible for drafting the Conditions of Participation. Staff of the Division of Medical Care Administration in the PHS served as technical advisors, and a task force made up of representatives of major hospitals and health care and consumer organizations participated in the drafting of the conditions (HCFA, personal communication, 1989). Although the opportunity existed to develop model national standards, the efforts were severely constrained by the wording of the law, political and time pressures, the need to rely on state agency surveyors to inspect unaccredited hospitals, and the lack of knowledge about how to measure and achieve quality of medical care (Cashman and Myers, 1967). Except for utilization review, Congress prohibited standards higher than those of JCAH, even though JCAH itself described its 1965 accreditation standards as the minimum ones necessary to assure an acceptable level of quality. Congressmen and administration officials had assured the hospital community since 1961 that JCAH-accredited hospitals would automatically be eligible for participation in Medicare.7 There was tremendous political pressure to deliver Medicare benefits quickly and universally and therefore to involve as many hospitals as possible in order that every Social Security recipient would have access to hospital care (Feder, 1977a, 1977b).8 The conditions and procedures for applying them had to be developed in a few months: the law passed on July 30, 1965, and the conditions were mailed to hospitals at the end of January, 1966. The standards could not be too complicated because they had to be applied by state surveyors with widely varying experience and training, who, in most cases, were new to their jobs. Finally, even the best standards of the time were considered to be, at best, merely indicators of the structural and organizational capacity to deliver quality care. In the words of the PHS advisors on the conditions (Cashman and Myers, 1967, p. 1108), "... when a provider complies with the standards, it has demonstrated a capacity to furnish a stated level of quality of care. The key element here is that standards define a certain capacity for quality and not quality itself. We assume that, given this capacity, a level of quality will result. And experience informs us that without this capacity, achievement of quality is difficult, if not impossible."

BHI proceeded to draft Conditions of Participation that would be equivalent to those of JCAH. Except for utilization review, the 16 conditions corresponded to the areas covered in JCAH's 1965 hospital accreditation standards. The standards were mostly qualitative and subjective rather than quantitative. For example, they did not specify staffing ratios but referred to "adequate" staffing, "qualified" personnel, and an "effective" staff organization.

Next, procedures had to be worked out by which a number of hospitals that could not meet the standards, at least initially, could participate in Medicare while, hopefully, bringing themselves into compliance (Cashman and Myers, 1967). The solution was the concept of substantial compliance, which meant that a hospital could be certified for participation even if it had significant deficiencies in meeting one or more standards, as long as the significant deficiencies did not interfere with adequate care or represent a hazard to patient health and safety. Meanwhile, the hospital had to develop and make an adequate effort to complete a plan of correction. However, as the starting date of July 1, 1966, approached, the pressure to make the program universal was overwhelming, and there was notable resistance to denying certification to any hospital that could meet the basic statutory requirements, which were embodied in 8 of the 100 standards (Cashman and Myers, 1967). Also, provisions were made for special certification of hospitals in geographically remote areas where denial would have a major impact on the access of beneficiaries to services.9

The federal standard-setters expected and found widely varying state-to-state interpretations of the conditions (Cashman and Myers, 1967). Of the 2,700 unaccredited hospitals applying by September 30, 1966, less than 8 percent could not meet the conditions according to state surveyors, but the rate of denial recommendations varied from 0 in 18 states to 20 percent or more in 7 states. In all, just 15 percent of the 2,400 unaccredited hospitals that were certified were in compliance without any significant deficiencies. Nearly two-thirds (1,556) were certified with correctable deficiencies, and more than a fifth (545) were not in compliance but were certified in the special categories to ensure access. Some states did not recommend special certification for any hospitals; others recommended special certification for half their hospitals. In all, some 700 hospitals had significant deficiencies in at least 6 of the conditions.10

Given that the federal requirements were minimum standards, the authors of the original Conditions of Participation concluded that future progress would have to take place through innovative leadership by professionals through the accreditation process. They called on professional standard-setters to establish optimal rather than minimum standards for medical care (Cashman and Myers, 1967).

JCAH And Medicare

In 1966, with its standards forming the basis for the hospital Conditions of Participation in the Medicare program, JCAH found that the federal government was "usurping" its traditional role of guaranteeing minimum hospital standards (Roberts et al., 1987). Already, in December 1965, the JCAH board of commissioners had adopted a utilization review standard.11 In August 1966, JCAH's board of commissioners decided to issue optimum achievable standards rather than minimum essential standards for hospital accreditation. The resulting 1970 Accreditation Manual for Hospitals contained 152 pages of standards, compared with just 10 pages of standards in 1965 (JCAH, 1965, 1971). Meanwhile, however, JCAH went through a period of negative publicity that culminated in legislative changes in 1972 that imposed federal oversight of the accreditation process. In 1969 the Health Insurance Benefits Advisory Council, the advisory group to the Social Security Administration on the implementation of Medicare, criticized JCAH's standards and inspection process in its first report. According to the report, some JCAH standards were too low, the inspection cycle (2 years at that time) was too infrequent, and the surveyors (then just physicians) were too narrowly focused on medical staff and medical record issues. The council recommended that the Secretary of HEW be given authority to set standards higher than those of JCAH and that state agencies be given the authority to inspect accredited hospitals (Health Insurance Benefits Advisory Council, 1969).

In 1969 and 1970, JCAH accredited Boston City Hospital, D.C. General Hospital, San Francisco General Hospital, St. Louis City Hospital, and other major urban hospitals (albeit with 1-year provisional certificates) despite extensive publicity about serious problems in patient care (Worthington and Silver, 1970). Consumer groups presented JCAH with demands for patient rights and consumer participation in the accreditation process (Silver, 1974). Some groups sued HEW, arguing that the delegation of Medicare certification to the private JCAH was unconstitutional, and legislation was even introduced to establish a federal accreditation commission (Jost, 1983).

In 1972 Congress responded with amendments to the Social Security Act that gave the HEW Secretary the authority to promulgate standards higher than those of JCAH, to conduct inspections of a random sample of accredited hospitals each year, to investigate allegations of substantial deficiencies in accredited hospitals, and, finally, to decertify hospitals that failed to meet federal requirements even though they were accredited. As a result of the first year of validation surveys, 107 of the 163 hospitals inspected by state agencies for HEW lost deemed status for being out of compliance with the Conditions of Participation. The state inspectors found 4,300 deficiencies where JCAH had found only 2,993 contingencies; moreover, only 7 percent of the deficiencies cited by both groups were similar. JCAH and the AHA responded that the discrepancies had more to do with differences in the size and composition of the survey teams and duration of the inspection visit than real differences in hospital conditions (Phillips and Kessler, 1975). For example, more than half of the deficiencies found by state inspectors (2,305) related to the Life Safety Code (LSC), which, JCAH argued, were not significantly related to quality of patient care or safety. In contrast, JCAH surveyors found more deficiencies than state inspectors concerning patient care; that is, in such areas as medical staff, medical records, and radiology. The first annual validation report strongly recommended that JCAH strengthen its capacity to evaluate and enforce fire safety requirements. As a result, JCAH introduced revised fire safety standards and procedures in October 1976.

A study of the situation by the General Accounting Office (GAO, 1979) was more critical of HEW and its loose oversight of state agency operations than of JCAH. The GAO found that JCAH was finding more violations of requirements identified as essential by HEW and obtaining faster compliance, although state agency surveyors often found some deficiencies that JCAH did not. The GAO report concluded that state survey results were less reliable and had less impact than those of JCAH because HEW guidelines for compliance were inadequate and federal specifications for survey team composition and training and survey duration were too weak to ensure consistency. Among alternatives for improving the certification process, GAO gave its highest recommendation to contracting with JCAH for the conduct of all certification surveys, subject to validation by federal surveyors, because "this arrangement would provide a better, more consistent evaluation of hospitals and eliminate the problems associated with having more than 50 independent decision makers" (GAO, 1979, p. 31).

The discrepancies between JCAH and state agency surveys were much reduced with the introduction of the Fire Safety Evaluation System (FSES), a system for evaluating alternative ways of meeting the intent of the LSC. The FSES was developed for HEW by the National Bureau of Standards. Although more recent annual reports on validation surveys continued to recommend improvements in JCAH surveying of the LSC, they concluded that JCAH's surveying of accredited hospitals is "equivalent" to state agency surveying of unaccredited hospitals (DHHS, 1988). For example, the proportion of JCAH-accredited hospitals subject to validation surveys that was found out of compliance with one or more conditions was 20 percent in fiscal year (FY) 1982, 15 percent in FY 1983, 20 percent in FY 1984, and 29 percent in FY 1985, compared with an average of 25 percent among unaccredited hospitals (Table 7.4). Also, the proportion of noncompliance with each condition is similar for accredited and unaccredited hospitals (Table 7.5).

TABLE 7.4

Noncompliance of Joint Commission on Accreditation of Hospitals (JCAH)-Accredited and Unaccredited Hospitals with One or More Medicare Conditions of Participation, Fiscal Year 1985.

TABLE 7.5

Noncompliance of Joint Commission on Accreditation of Hospitals (JCAH)-Accredited and Unaccredited Hospitals by Medicare Condition of Participation, Fiscal Year 1985.

In other words, HCFA has concluded that compliance with the Conditions of Participation is about the same in accredited and unaccredited hospitals.12 This does not, however, preclude the possibility that Joint Commission accreditation has a greater positive impact on quality of patient care than the federal-state survey and certification program, because in recent years, as will be seen below, the former's standards have been higher and much more detailed with regard to quality assurance processes than the conditions.

Despite the drastic revision and expansion of the accreditation standards in 1970, the JCAH standards still emphasized the structure and process features of hospital organization and administration that were believed to create the capacity to deliver quality patient care rather than evaluating the hospital's actual performance (JCAHO, 1987). In the early 1970s, aware of criticism of the emphasis on organizational and clinical capacity rather than actual performance (Somers, 1969), and stimulated by the advent of the Professional Standards Review Organizations (PSROs) with their mandate to review quality of care, JCAH began to emphasize the medical audit as the mechanism for assuring quality of care and to specify the use of explicit criteria and formal procedures in place of the informal and subjective review processes already presumed to take place at the monthly medical staff and department meetings required since 1918 (Roberts and Walczak, 1984). For example, JCAH sponsored the development of PEP, the Performance Evaluation Procedure for Auditing and Improving Patient Care, an elaborate medical audit system that was taught in workshops for accredited hospitals (JCAH, 1975; Jacobs et al., 1976). The PEP methodology was based on several decades of efforts to develop objective methods of appraising clinical performance through retrospective auditing of medical charts using explicit criteria (Sanazaro, 1980).

In 1976 a new section of the accreditation manual for hospitals on quality of professional services called for a certain number of medical audits depending on hospital size, but it soon became apparent that the methodology was being applied mechanistically with little impact on medical practice. Meanwhile, JCAH survey results indicated that surgical case review, drug and blood utilization review, and review of appointments and reappointments by the medical staff were subjective and informal and often ineffective in finding or resolving patient care and clinical performance problems (Affeldt et al., 1983).

In 1979, JCAH dropped numerical medical audit requirements and introduced a new quality assurance standard in a separate chapter of the accreditation manual. The new standard required the development of a hospitalwide program that not only identified specific problems in patient care and clinical performance but documented attempts to resolve them. Since 1979 the accreditation manual for hospitals has undergone substantial change in an effort to incorporate quality assurance activities in each clinical activity of a hospital. The revised standards are analyzed and recent efforts to develop explicit indicators of clinical and organizational performance are described in later sections of this chapter.

Evolution Of The Hospital Conditions Of Participation, 1966-1986

The final regulations on the original Conditions of Participation that were promulgated in late 1966 were basically the same as those issued earlier in the year, except that they accorded deemed status to hospitals accredited by the AOA. Those regulations included 16 conditions, broken down into about 100 standards and several hundred explanatory factors (Table 7.6). The conditions were criticized from the beginning for looking only at the capacity of a hospital to provide adequate quality of care rather than its actual performance or effect on patient well-being. Nevertheless, the conditions were not revised in a significant way for 20 years.

TABLE 7.6

Medicare Conditions of Participation for Hospitals, 1965.

Generally, the conditions in effect from 1966 until 1986 emphasized structure over process measures of organizational and clinical capacity, such as staff qualifications, written policies and procedures, and committee structure, which were usually specified at the standard level. The process aspects of quality-of-care standards were usually suggested as explanatory factors that could be used to evaluate compliance with the standard. For example, there was no quality-of-care or quality assurance condition or standard. Instead, the medical staff condition had a meetings standard, calling for regular meetings of the medical staff to review, analyze, and evaluate the clinical work of its members, using an adequate evaluation method. The explanatory factors that surveyors were supposed to use to determine compliance with the standard included attendance records at staff or departmental meetings and minutes that showed reviews of clinical practice at least monthly. The reviews were supposed to consider selected deaths, unimproved cases, infections, complications, errors in diagnosis, results of treatment, and transfusions, based on the hospital statistical report on admissions, discharges, clinical classifications of patients, autopsy rates, hospital infections, and other pertinent hospital statistics. The minutes were also supposed to contain short synopses of the cases discussed, the names of the discussants, and the duration of the meeting.

In the 1970s there were several unsuccessful efforts by the government to revise the conditions. In 1977, HCFA developed specifications for revising the Conditions of Participation and invited comments from interested parties in the Federal Register. After considering more than 2,000 comments, HCFA published draft revised conditions in the Federal Register in 1980 (Federal Register, 1980, p. 41794). Generally, the new conditions proposed in 1980 would have eliminated a number of prescriptive requirements, especially those specifying personnel credentials and certain committees of the governing board and medical staff, replacing them with statements of the functions to be performed. The new conditions also recognized changes in medical practice by adopting JCAH definitions and standards in new conditions for nuclear medicine; for rehabilitative, respiratory, and psychiatric services; and for special care units.

The proposed 1980 regulations also included a new standard, Quality Assurance, in the governing body condition. The new standard would have required a hospitalwide quality assurance program involving the medical staff in peer review and requiring performance evaluations by each organized service.

Although the Reagan administration withdrew the proposed new Conditions of Participation for hospitals when it took office in January 1981, they were among the top five sets of regulations addressed by the Vice President's task force on deregulation. A committee of top political appointees and career staff in HCFA reviewed the Conditions of Participation line by line, developing detailed worksheets analyzing each condition and standard in terms of its statutory basis, pertinent public comments on the proposed 1980 regulations, and, in the several cases where they existed, research findings.13

The revised conditions that were proposed in 1983 (Federal Register, 1983, p. 299) and finalized in 1986 (Federal Register, 1986, p. 22010) were based in part on those proposed in 1980, although, in line with the Reagan administration's emphasis on deregulation, the resulting regulations carried further the process of eliminating prescriptive requirements specifying credentials or committees, departments, and other organizational arrangements. They were replaced with more general statements of desired performance or outcome in order to increase administrative flexibility (see statements on the proposed and final regulations in the Federal Registers cited above). On the other hand, the activities proposed for elevation to the condition level in 1980 to give them more emphasis in the certification process were retained as new conditions, including infection control and surgical and anesthesia services. In addition, quality assurance was made a separate condition. The possible impact of the new condition on quality of care is analyzed in a later section of this chapter, along with the JCAH quality assurance standards.

The new Conditions of Participation took effect on September 15, 1986. They were accompanied by interpretive guidelines and detailed survey procedures developed by HCFA to increase consistency of interpretation and application by the state agency surveyors (HCFA, 1986). Use of the new quality assurance condition as a basis for decertification was delayed for a year. The state inspectors did survey the condition, however. After the first 2 years, 128 (9 percent) of the 1,420 hospitals surveyed were found to be out of compliance with the new quality assurance condition (data supplied by HSQB). The states with the most hospitals failing this condition were Texas, with 23 (15 percent) of its 150 unaccredited hospitals, and Montana, with 10 (23 percent) of its 43 unaccredited hospitals. Other states with smaller numbers of unaccredited hospitals had higher rates of noncompliance: 6 of 10 in South Carolina, 2 of 4 in Virginia, and 1 of 3 in New Jersey.

Medicare Certification And Joint Commission Accreditation Standards And Procedures For Assuring Quality Of Patient Care In Hospitals

Although one is governmental and the other private, both HCFA and the Joint Commission are regulatory in their approach. They each attempt to assure quality of care by influencing individual and institutional behavior. As in any regulatory system, quality assurance in health delivery organizations has three components (IOM, 1986). First, standards have to be set that relate to quality of care. Second, the extent of compliance of hospitals with the standards must be monitored. Third, procedures for enforcing compliance are necessary. The HCFA and Joint Commission standards and their procedures for monitoring and enforcing compliance with the standards are described, analyzed, and compared in this section.

Standards

In 1966, at the time the Conditions of Participation were first drafted, Donabedian (1966) identified three aspects of patient care that could be measured in assessing the quality of care: structure, process, and outcome. Theoretically, structure, process, and outcome are related, and, ideally, a good structure for patient care (e.g., safe and sanitary buildings, necessary equipment, qualified personnel, and properly organized staff) increases the likelihood of a good process of patient care (e.g., the right diagnosis and best treatment available), and a good process increases the likelihood of a good outcome (e.g., the highest health status possible) (Donabedian, 1988).

Structure And Process Orientation Of Hospital Standards

The original conditions of 1966, and the JCAH standards they were based on, were almost exclusively based on structural aspects of patient care, because structural measures are the easiest for standard-setters to specify, for surveyors to assess, and for enforcers to use in justifying their actions.

Unfortunately, there is very little knowledge about the relations between structural characteristics and process features or outcomes of care. What knowledge exists on the relations between structure and process indicates that they are weak (Palmer and Reilly, 1979; Donabedian, 1985). At best, then, the use of structurally oriented standards ensures that care is given in an environment that is conducive to good care (Donabedian, 1988). Not meeting minimum structural standards may make it impossible to provide good care. Thus, structural standards may be necessary, but they are far from sufficient guarantors of good care.

Clinical decision making is very complex, and, despite the development of complex clinical decision-making algorithms for assessing quality (Greenfield et al., 1975, 1977, 1981), it has proved to be difficult to develop objective criteria for assessing the quality of clinical processes in particular cases. In some instances, something is known about the relations between clinical processes and clinical outcomes, for example, where properly controlled experiments have been conducted. In most instances, however, standards for best clinical practices are based on professional consensus, even though the relations between clinical practices considered by professional consensus to be best and favorable outcomes are generally weak (Schroeder, 1987).

Outcome-based standards are the most difficult to apply or justify. Consider, for example, a standard that stated that the death rate should be no more than X percent during a specified time period among patients who had a particular diagnosis or who underwent a particular procedure. Because a number of factors influence death rates besides the clinical setting and processes used, death rates would have to be carefully adjusted for initial severity of illness and other case-mix differences before they could be used in setting regulatory standards. In any case, for compliance and enforcement purposes, outcome measures such as death rates, however adjusted, would have to be followed by assessment and documentation of the processes used in particular cases that caused the adverse outcomes.
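To make the case-mix problem concrete, the following is a minimal sketch, in Python, of one common adjustment technique, indirect standardization, in which a hospital's observed deaths are compared with the deaths expected if each severity stratum had experienced a reference death rate. The strata, rates, and counts are invented for illustration; nothing in the Conditions of Participation or the Joint Commission standards prescribes this particular method.

    # Illustrative sketch: indirect standardization of a hospital death rate.
    # All strata, rates, and counts below are invented for illustration.

    # Reference death rates per severity stratum (e.g., from pooled data).
    reference_rates = {"low": 0.01, "medium": 0.05, "high": 0.20}

    # One hospital's case mix: patients and observed deaths per stratum.
    cases = {"low": 400, "medium": 150, "high": 50}
    deaths = {"low": 6, "medium": 9, "high": 11}

    observed = sum(deaths.values())
    expected = sum(cases[s] * reference_rates[s] for s in cases)

    # A standardized mortality ratio (SMR) near 1.0 suggests performance in
    # line with the reference population once case mix is accounted for.
    smr = observed / expected
    crude = observed / sum(cases.values())
    print(f"crude rate {crude:.3f}; expected deaths {expected:.1f}; SMR {smr:.2f}")

Even so, as the paragraph above notes, an elevated adjusted rate would only be a signal: enforcement would still require assessment of the processes used in the particular cases behind the adverse outcomes.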

Both HCFA and the Joint Commission are severely constrained in their efforts to assure quality of care in hospitals or other health care organizations by this fundamental lack of knowledge about relations between the aspects of care that can be most easily regulated (such as building specifications, staff credentials, regular committee meetings, complete medical records, written quality assurance plans, and number of medical care audits) and those aspects of patient care that pertain more directly to quality (such as how well each patient is treated, how each patient's health status is affected by the care provided, or how the health status of the population served is being affected by a hospital's services).

Traditionally, given these limitations, HCFA and the Joint Commission standard-setters did not try to assess the quality of care actually given. Instead, they adopted standards that, if met, would indicate that a hospital had the capacity to provide a minimum level of quality of care. Both sets of standards have always included standards for the construction, maintenance, and safe operation of hospital buildings. Currently, for example, compliance with the 1981 LSC and infection control standards (elevated to a Medicare Condition of Participation in 1986) is required. Both sets of standards require an organized medical staff and appointment of a hospital administrator, although the requirements have become less prescriptive over the years. For example, rather than require certain committees or credentials, the standards specify the functions that must be carried out.

By and large, these capacity-oriented standards are based on professional consensus, although some are based on research. The LSC is a set of consensus-based standards for fire safety developed by the National Fire Protection Association. Infection control was raised to a condition in 1986, in part because of research by the Centers for Disease Control showing that 5 percent of patients in acute care hospitals contracted nosocomial infections, necessitating several days of additional hospitalization at a cost of $1 billion a year (Federal Register, 1983, p. 303). The requirements that the medical staff be organized under bylaws and that the medical staff and hospital administrator be accountable to a governing body were retained in the 1986 revision of the conditions in part because of research indicating that medical care is better in well-organized and supervised hospitals (HCFA Task Force, 1982).

Shift From Capacity Standards To Performance Standards

In recent years, HCFA and the Joint Commission have tried to revise their standards in ways that would impel hospitals to examine and, hopefully, improve the quality of their organizational and clinical performance. Thus, for example, both organizations have adopted quality assurance standards that call for hospitals to set up structures and processes for monitoring patient care, identifying and resolving problems, and evaluating the impact of quality assurance activities. Under these standards, the medical staff is required to develop or adopt indicators of quality of care, gather information on the indicators, select criteria for deciding when an indicator is signaling a possible problem, and act on those signals.

The Joint Commission calls these quality assurance activities "outcome-oriented," although the main emphasis of the new standards is to make hospitals adopt processes for monitoring indicators of the quality of their performance. Only a few of the indicators are likely to be outcomes, and those are most likely to be intermediate outcomes. For example, a radiology department might agree that the accuracy of upper gastrointestinal (GI) contrast studies is an important indicator of quality (JCAH, 1986). Data from the records of 20 percent of the department's patients would be collected monthly and aggregated by the radiologist and the physician ordering an upper GI series to determine whether or not the criteria for upper GI series are being met. Some of the criteria might be that 100 (or 98) percent of the requisitions for upper GI series contain the pertinent history, physical findings, and suspected diagnosis, or that radiologic interpretations be consistent with endoscopic findings 100 (or 97) percent of the time. Other indicators (for other departments or hospitalwide) might be hospital-acquired infections, severe adverse drug reactions, agreement of final pathology diagnoses with patients' previous diagnoses, or transfer of patients from postsurgical recovery units to operating rooms (JCAHO, 1988c).
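A minimal sketch, in Python, of this kind of indicator monitoring follows. The indicator names and monthly values are hypothetical; the 98 and 97 percent thresholds are taken loosely from the example criteria above.

    # Illustrative sketch of indicator-based QA monitoring: each indicator
    # has an agreed criterion (threshold), and a monthly value falling below
    # it signals a possible problem for departmental review.
    # Indicator names and observed values are invented for illustration.

    criteria = {
        "upper_gi_requisitions_complete": 0.98,
        "radiology_endoscopy_agreement": 0.97,
    }

    # Monthly values aggregated from a sample of the department's records.
    observed = {
        "upper_gi_requisitions_complete": 0.95,
        "radiology_endoscopy_agreement": 0.99,
    }

    for indicator, threshold in criteria.items():
        value = observed[indicator]
        if value < threshold:
            # A flagged indicator would trigger case-level review, corrective
            # action, and follow-up monitoring of the same indicator.
            print(f"FLAG {indicator}: {value:.0%} below criterion {threshold:.0%}")
        else:
            print(f"ok   {indicator}: {value:.0%}")

The point of the sketch is that the standards regulate the monitoring process itself (choosing indicators, setting criteria, acting on signals), not patient outcomes directly.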

Evolution Of The Joint Commission's Quality Assurance Standards

The shift from prescriptive to performance-oriented standards began at JCAH in 1978, when the board of commissioners decided to replace the numerical medical audit requirement with a new quality assurance standard that mandated an ongoing, hospitalwide effort to monitor care, identify problems or ways to improve care, and resolve any problems (Affeldt et al., 1983). The new quality assurance program was to involve all departments and services, not just a quality assurance unit. It was to be problem-focused, rather than mindlessly collecting vast quantities of data for their own sake, as the old medical audit standard had encouraged. The new standard was approved in 1979 but not implemented until 1981, to give hospitals time to develop systematic quality assurance programs. In 1981 the JCAH board voted to revise all the hospital standards by 1983 according to five principles (JCAH, 1981):

1. The standards would be essential ones that any hospital should meet.

2. The standards should be statements of objectives, leaving the means to achieve their intent to the discretion of individual hospitals.

3. The standards should focus on elements essential to high-quality patient care, including the environment in which that care is given.

4. The standards must be reasonable and surveyable.

5. The standards should reflect the current state of the art.

The standards for governing bodies, medical staffs, management and administrative services, medical records, and quality and appropriateness review for support services were revised first. Despite the intention to simplify the standards and make them less prescriptive and more goal-oriented, the revision process ended up involving substantial expansion and formalization of quality assurance activities in each chapter of the hospital accreditation manual, including an increasing specification of processes needed to achieve the objectives of JCAH's new quality assurance standard.

In 1981 the new quality assurance chapter of the hospital accreditation manual had one standard: There shall be evidence of a well-defined, organized program designed to enhance patient care through the ongoing objective assessment of important aspects of patient care and the correction of identified problems. According to a standard in the governing body chapter, the governing body was to hold the medical staff responsible for establishing quality assurance mechanisms. One of the medical staff standards required regular review, evaluation, and monitoring of the quality and appropriateness of patient care provided by each member of the medical staff as well as surgical case (tissue) review, review of pharmacy and therapeutic activities, review of medical records, blood utilization review, review of the clinical use of antibiotics, and participation in hospitalwide functions such as infection control, safety and sanitation, and utilization review.

In 1984 uniform language for the monitoring and evaluation of quality and appropriateness of care was added to each of 14 chapters on specific clinical services, e.g., anesthesia, nursing, radiology, and social work services: "As part of the hospital's quality assurance program, the quality and appropriateness of patient care provided by the X department/service are monitored and evaluated, and identified problems are resolved" (JCAH, 1983, p. 6). The required characteristics of an acceptable process for carrying out the standard included designation of the department head as responsible for the process; routine collection of data about important aspects of the care provided; periodic assessment of the data to identify problems or opportunities to improve care; use of objective criteria that reflect current knowledge and clinical experience; actions to address problems, documented and reported to the hospitalwide quality assurance program; and, finally, evaluation of the impact of the actions taken (JCAH, 1983).

In 1984, after four field reviews of several drafts, revised medical staff standards were included in the hospital accreditation manual but not used for accreditation decisions until 1985. The standard for medical staff monitoring and evaluation of the quality and appropriateness of patient care now included departmental review of the clinical performance of all individuals with clinical privileges and went on to specify the same required characteristics included in the other chapters on clinical services (JCAH, 1984a).

In 1985 the quality assurance chapter was revised to add three standards. The second standard codified the monitoring and evaluation functions already specified in the medical staff chapter and in each of the chapters on other services. It mandated certain hospitalwide activities (infection control, utilization control, and review of accidents, injuries, and safety hazards) and required that the relevant findings of quality assurance activities be considered in the reappraisal or reappointment of medical staff members and renewal of clinical privileges of independent practitioners. The third standard required the use of the same steps for carrying out monitoring and evaluation activities already listed as required characteristics in each of the clinical chapters in the 1984 manual. The fourth standard called for hospitalwide coordination and oversight of quality assurance activities (JCAH, 1984b) (see Table 7.7).

TABLE 7.7

Joint Commission on Accreditation of Healthcare Organizations Quality Assurance Standards for Hospitals.

By 1985, then, an elaborate set of quality assurance processes had evolved as standards and required characteristics in every chapter of the hospital accreditation manual. These processes are aimed at making hospitals, through their medical staffs, review and assess the quality of care given by each person with clinical privileges and in each clinical department and act on problems or opportunities that are identified. Most hospitals, however, have had significant problems complying with the standards. As already noted, the quality assurance standard adopted in 1979 was not implemented until 1981. Even then, hospitals only had to comply with the first three steps: assignment of authority and responsibility for quality assurance activities to a specific individual or group; progress in coordinating existing quality assurance mechanisms; and development of a written plan (JCAH, 1981). In 1982 more than 60 percent of the 12,000 contingencies given by JCAH to the 1,150 hospitals surveyed were for quality assurance problems. The proportion of hospitals with contingencies or recommendations for credentialing was 63 percent and for surgical case review was 45 percent (Roberts and Walczak, 1984).

Despite compliance problems, JCAH increased the level of compliance required with the quality assurance standard during 1983, requiring evidence that quality assurance information was being integrated, that patient care problems were being identified through the monitoring and evaluation activities of the medical staff and support services, and that the problems were being resolved (JCAH, 1982). Medical staff quality assurance activities still accounted for a large proportion of the contingencies and recommendations given in 1984, in areas such as holding monthly department meetings to consider monitoring and evaluation findings (46 percent of hospitals surveyed), documenting and reporting medical staff monitoring and evaluation actions (44 percent), and resolving important problems in patient care or acting on identified opportunities to improve care (32 percent) (Longo et al., 1986).

In 1985, JCAH introduced implementation monitoring, by which certain standards would be surveyed and recommendations made, but lack of compliance would not affect accreditation decisions. JCAH explained that some changes in standards were taking more than 3 years for full implementation because they were difficult for hospitals to meet and required more time for learning (and for education of surveyors) (JCAH, 1985). Not surprisingly, most of the standards placed on implementation-monitoring status initially, from January 1986 through June 1987, pertained to quality assurance: some parts of medical staff departmental monitoring and evaluation, use of medical staff quality assurance findings, and quality and appropriateness review in support services.

In early 1988 the Joint Commission again eased implementation of the quality assurance standards. It no longer gave contingencies if hospitals were using only generic rather than department-specific indicators in monitoring and evaluating the quality and appropriateness of care in the various departments and services. The explanation for the change in contingency policies referred to the problems the Joint Commission itself had encountered in developing quality indicators for various types of care: "As the Agenda for Change activities have moved forward, it has become evident that the clinical literature does not provide sufficient information to permit health care organizations to select a full set of validated indicators for each area of clinical practice" (JCAHO, 1988b, p. 5).

The problems that many hospitals were having in complying with the Joint Commission standards for outcome-oriented monitoring and evaluating quality of care were part of the impetus for the Joint Commission effort, called the Agenda for Change, to develop indicators of organizational and clinical performance for the hospitals to use (JCAHO, 1988c, 1988d, 1988e). The data on such indicators would be transmitted by each hospital to the Joint Commission for use in developing empirical norms for hospitals to use in comparing their performance. Eventually, such indicator data could be used by the Joint Commission for monitoring compliance with accreditation standards.

Development Of The Quality Assurance Condition Of Participation

The quality assurance condition implemented in late 1986 by HCFA is similar in approach to, although less elaborate than, the Joint Commission's quality assurance standards. The task force of HCFA officials that developed the revised conditions in 1981-1982 consciously tried to make the new requirements consistent with JCAH standards. In the preface to its recommendations, the task force noted that the conditions had been similar to JCAH standards in 1966 but no longer were: JCAH had revised and updated its standards continuously while Medicare had not. The task force stated: "Another recent consideration is the movement toward providing hospitals with greater flexibility in determining how they can best assure the health and safety of patients. The current regulations are, in many cases, overly prescriptive and not sufficiently outcome oriented. This trend toward increased internal hospital accountability has been reflected in recent revisions to JCAH standards" (HCFA Task Force, 1982).

Task force members agreed that a quality assurance program aimed at the identification and correction of patient care problems should be a condition because it was important and cut across all aspects of direct patient care. The task force suggested three minimal standards: (1) the organized, hospitalwide quality assurance program must be ongoing and have a written plan of implementation; (2) the hospital must take appropriate remedial action to address any deficiencies found; and (3) there must be evaluations of all organized services and of nosocomial infections, medicine therapy, and tissue removal.

The new quality assurance condition as finally promulgated calls for a formal, ongoing, hospitalwide program that evaluates all patient care services (Table 7.8), although the explicit references to nosocomial infections, medicine therapy, and tissue removal were dropped. The interpretive guidelines state that information gathered by the hospital to monitor and evaluate the provision of patient care should be based on criteria and measures generated by the medical and professional staffs and reflect hospital practice patterns, staff performance, and patient outcomes. The term outcome does not appear in the language of the conditions or standards, however, because the majority of the task force did not think that outcome measures could be used in the survey process. The discussion in the task force report of the new condition pointed out that outcomes were difficult to use because of the differences in the preoperative condition of patients. Although outcome measures were desirable, because they promised maximum flexibility to hospitals, they were difficult to assess without undertaking longitudinal studies beyond the given episode of care, which would be too cumbersome for hospitals and surveyors and difficult to use in enforcement.

TABLE 7.8

Medicare's Quality Assurance Condition of Participation.

One objective of the 1986 revision of the Conditions of Participation was simplification of the regulations, and overlapping language in different conditions was usually eliminated. Accordingly, the monitoring and evaluation activities in each department and service implied by the quality assurance condition are not repeated under the other conditions, whereas the appropriate quality assurance standards are repeated in the various chapters of the Joint Commission's hospital accreditation manual and are cross-referenced with the quality assurance chapter. There are few other references to quality in the other conditions. However, the governing body condition has a standard for ensuring that the medical staff is accountable for the quality of patient care, and the medical staff condition has a parallel standard: The medical staff must be well organized and accountable to the governing body for the quality of the medical care provided to the patients. The interpretive guidelines for the medical staff condition also require that periodic appraisals of staff include information on competence from the quality assurance program. The only other reference to the quality assurance program outside the quality assurance condition itself is in the infection control condition, where a standard assigns responsibility to the chief executive officer, medical staff, and director of nursing services to assure that hospitalwide quality assurance and training programs address problems identified by the infection control officers.

The 1986 revisions of the Conditions of Participation, including the new quality assurance condition, were based in part on work done in the late 1970s and very early 1980s. They parallel the evolution of the JCAH standards in the same period, when JCAH adopted a quality assurance standard and began to revise its other standards to make them more flexible and less prescriptive. The Joint Commission's standards, however, have evolved substantially since the early 1980s; its quality assurance standard in particular has been elaborated considerably in the effort to help hospitals understand how to comply with its intent.

Survey Process

Compliance with hospital regulatory standards is monitored and enforced through a process of on-site surveying by health professionals. The resources and procedures of Medicare and the Joint Commission for surveying are described and compared in this section.

Surveyors And Survey Teams

Section 1864 of the Social Security Act directs the Secretary of DHHS to enter into agreements with any ''able and willing'' state, under which the state health department or other appropriate state agency surveys health facilities wishing to participate in Medicare and certifies whether they meet the federal Conditions of Participation and other requirements. In return, the secretary agrees to pay for the reasonable costs of the survey and certification activities of the state agency. With very few exceptions, the same state agencies conduct state licensure and federal certification surveys of all health providers in their states, including nursing homes, laboratories, home health agencies, and hospitals. Most of the state agency survey load consists of nursing homes, because they are much more numerous than hospitals but do not have Joint Commission deemed status.

Funding for Medicare certification activities comes from the Medicare trust funds. For FY 1990, HSQB has budgeted $91.2 million for state surveys of facilities participating in Medicare, about $10.0 million of it for surveys and follow-up visits to unaccredited hospitals. HSQB estimates average survey costs by type of facility and allocates the funds to each federal regional office by its share of each type of facility. In FY 1990, for example, the unit cost for a survey of an unaccredited hospital was $7,500. Each regional office, however, uses a different method of distributing survey funds to the states.
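
The allocation method just described amounts to simple arithmetic: unit survey costs estimated by facility type, multiplied by each region's facility counts. The sketch below illustrates it; all figures and region names except the $7,500 unaccredited-hospital unit cost are hypothetical placeholders, not actual FY 1990 data.

```python
# Illustrative sketch of HSQB's allocation arithmetic: estimated unit
# survey costs per facility type are multiplied by each regional office's
# facility counts to set that region's share of survey funds. Only the
# $7,500 hospital unit cost comes from the text; all else is assumed.

UNIT_COST = {                     # estimated dollars per survey, by facility type
    "hospital": 7_500,            # unaccredited hospital (FY 1990 figure from text)
    "nursing_home": 4_000,        # assumed
    "home_health_agency": 2_500,  # assumed
}

regions = {                       # hypothetical facility counts per regional office
    "Region I":  {"hospital": 120, "nursing_home": 900, "home_health_agency": 400},
    "Region II": {"hospital": 60,  "nursing_home": 500, "home_health_agency": 250},
}

def region_budget(counts: dict) -> int:
    """Sum of (facility count x unit survey cost) over facility types."""
    return sum(UNIT_COST[ftype] * n for ftype, n in counts.items())

for name, counts in regions.items():
    print(f"{name}: ${region_budget(counts):,}")
```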

The states are also reimbursed for surveys of Medicaid facilities and use state funds for licensure activities. An Institute of Medicine (IOM) study of nursing home regulations in 1986 found great variation in state survey agency budgets and policies. As a result, the number of surveyors and the intensity of the surveys, as measured by average person-days at a facility, varied tremendously (IOM, 1986).

Federal regulations and HCFA's state operations manual are very general regarding survey agency staffing levels and qualifications. As a result, there are large state-to-state differences in the numbers, experience, and educational backgrounds of surveyors. This affects the composition of survey teams—e.g., how many nurses, generalists, sanitarians, and other specialists such as pharmacists and physicians are on the teams or available as consultants. Nationally, about half of all surveyors are nurses, 20 percent are sanitarians, and most of the rest are engineers, administrators, and generalists (DHHS, 1983). In 1983, however, eight states had only one or two licensed nurses on staff (Association of Health Facility Licensure and Certification Agency Directors, 1983). Only a few state agencies have physicians on staff.

The Joint Commission has 190 surveyors in its hospital accreditation program (61 full-time, 74 part-time, and 55 consultants), based around the country (JCAHO, 1988f). Most of the consultants are physicians specializing in rehabilitation or psychiatry, who survey rehabilitation and psychiatric hospitals and those services in general hospitals, if provided. The typical Joint Commission survey team for a general acute-care hospital consists of a physician, an administrator, a registered nurse, and a medical technologist. The team may be tailored for hospitals that offer psychiatric, substance abuse, or rehabilitation services by adding physician surveyors with the appropriate specialty.

In 1988 the Joint Commission adopted a formula for determining survey costs, which are paid by the hospital desiring accreditation. The fee consists of a base fee and an additional charge that varies with the annual number of total patient encounters. A hospital with 150,000 inpatient and outpatient encounters a year would pay $8,652 for a full accreditation survey. A follow-up visit to verify correction of a problem (contingency) found in the full survey would cost $900 per surveyor. In recent years, fees have amounted to about 70 percent of the Joint Commission's revenues; most of the rest is derived from the sale of publications and educational services.
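
The fee structure described above can be expressed as a simple linear formula. The sketch below is hedged accordingly: the chapter supplies only one calibration point (150,000 encounters for $8,652), so the base fee and per-encounter rate used here are assumed placeholders, not the Joint Commission's actual parameters.

```python
# A hedged sketch of the 1988 fee formula described above: a fixed base
# fee plus a charge that scales with annual patient encounters. The base
# fee and per-encounter rate are assumptions chosen to land near the one
# calibration point given in the text (150,000 encounters -> $8,652).

BASE_FEE = 3_000.00            # assumed, dollars
RATE_PER_ENCOUNTER = 0.0377    # assumed, dollars per inpatient/outpatient encounter

def full_survey_fee(encounters: int) -> float:
    """Full accreditation survey fee under the assumed parameters."""
    return BASE_FEE + RATE_PER_ENCOUNTER * encounters

# Close to the $8,652 figure quoted in the text for 150,000 encounters.
print(f"${full_survey_fee(150_000):,.2f}")   # $8,655.00
```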

Survey Cycle

HCFA does not have a fixed survey cycle for hospitals. State agencies are currently funded to survey 75 percent of unaccredited hospitals each year; beginning in FY 1991, they will be funded to survey all of them. The visits are scheduled ahead of time. Once certified, a hospital stays certified unless and until a subsequent survey, which may come more than a year later, finds it out of compliance with one or more conditions.

Until 1982, hospitals meeting JCAH standards were accredited for 2 years or, if there were problems, 1 year. Since 1982, a hospital found to be in substantial compliance with Joint Commission standards has been awarded accreditation for 3 years. The surveys are scheduled in writing at least 4 weeks ahead of time.

Survey Procedures

Both state agency and Joint Commission surveyors use survey report forms. State agency surveyors fill out survey forms provided by HCFA (Form HCFA-1537), which permit the surveyor to mark as "met" or "not met" each condition, each standard under a condition, and each element of a standard if specified in the regulations. Altogether more than 300 items are checked as met or not met.
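
To make the structure of such a checklist concrete, the following is a hypothetical fragment of the condition-standard-element hierarchy with met/not-met marks. The item names are illustrative abbreviations, not the actual wording of Form HCFA-1537.

```python
# Hypothetical fragment of the met/not-met hierarchy on Form HCFA-1537:
# conditions contain standards, some standards contain elements, and each
# leaf item is checked "met" or "not met". The real form runs to more
# than 300 such items; names here are invented for illustration.

survey_form = {
    "Quality assurance": {
        "Clinical plan": {
            "Medical/surgical services evaluated": "met",
            "Nosocomial infections reviewed": "not met",
        },
        "Implementation": "met",
    },
    "Infection control": {
        "Infection control officer designated": "met",
    },
}

def unmet_items(form: dict, path: str = "") -> list:
    """Return the paths of all items checked 'not met'."""
    found = []
    for name, value in form.items():
        here = f"{path} > {name}" if path else name
        if isinstance(value, dict):
            found.extend(unmet_items(value, here))
        elif value == "not met":
            found.append(here)
    return found

print(unmet_items(survey_form))
# ['Quality assurance > Clinical plan > Nosocomial infections reviewed']
```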

The surveyors may refer to interpretive guidelines in the HCFA state operations manual (HCFA, 1986), which provide further guidance for evaluating compliance with the regulation (condition, standard, or element) but do not have force of law. The interpretive guidelines also specify the survey procedures to be used in verifying compliance. For example, element (3) of the quality assurance standard, Clinical Plan, states: "All medical and surgical services performed in the hospital must be evaluated as they relate to appropriateness of diagnosis and treatment" (see Table 7.8, and HCFA, 1986, p. A16). The language is further explicated in the interpretive guidelines: "All services provided in the hospital must be periodically evaluated to determine whether an acceptable level of quality is provided. The services provided by each practitioner with hospital privileges must be periodically evaluated to determine whether they are of an acceptable level of quality and appropriateness." Finally, a surveyor may refer to the survey procedures column: "Determine that the hospital is monitoring patient care including clinical performance. Determine that a review of medical records is conducted and that the records contain sufficient data to support the diagnosis and to determine that the procedures are appropriate to the diagnosis."

The Joint Commission survey report forms (one for each surveyor discipline, e.g., physician, nurse) list the hundreds of standards and associated required characteristics (350 items in the case of the physician surveyor) and provide a scale for rating compliance with most of them. The scale runs from 1, for substantial compliance, to 5, for noncompliance. To help surveyors determine the degree of compliance with an item, the Joint Commission has developed explicit scoring guidelines for most chapters in the hospital accreditation manual, as well as for the monitoring and evaluation of quality and appropriateness of care in each of the clinical services chapters. The scoring guidelines have been published and are available for sale to hospitals.

Table 7.9 provides an example of how the first nursing services standard should be scored. If the standard or required characteristic receives a score of 3 for partial compliance, 4 for minimal compliance, or 5 for no compliance, the surveyor must document the findings on blank pages that face each page of items in the survey report form.

TABLE 7.9

Method of the Joint Commission on Accreditation of Healthcare Organizations for Scoring the First Nursing Services Standard.

State agency and Joint Commission survey teams present their findings at exit conferences, and hospitals with significant problems may begin to make corrections to head off a possible decertification or nonaccreditation action. Some state surveyors obtain plans of correction at this time, whereas others ask for them after reviewing the findings at the office.

Enforcement Procedures

Enforcement begins with a formal finding of noncompliance that necessitates correction. This is a deficiency in HCFA's lexicon, a contingency in the Joint Commission's. In both cases the facility may be and usually is certified or accredited on the basis of, or contingent on, a plan of correction that will, if carried out, bring the hospital into compliance. Depending on the nature and seriousness of the problem, the state agency or the Joint Commission may require written documentation of corrective action or may decide to schedule an on-site visit by a surveyor to verify compliance. In most cases, enforcement ends when the plan of correction is carried out, and more formal enforcement action is rarely taken.

In about 15 percent of cases (100 of the roughly 700 hospitals surveyed per year), the problems are serious enough that an unaccredited hospital is found out of compliance with a Condition of Participation, and decertification proceedings are begun. If there is an "immediate and serious" deficiency, a fast-track termination process is triggered that results in decertification within 23 days. In other cases, and in fast-track cases once the immediate jeopardy is removed, the process takes 90 days. In most cases, hospitals make the changes necessary to have the proceedings dropped, but about 10 to 20 are terminated each year.

Traditionally, the Joint Commission has denied accreditation to between 10 and 15 hospitals a year (about 1 percent of those surveyed). When the 3-year survey cycle with the contingency system was started in 1982, about 15 percent of hospitals were accredited without contingencies and the rest, 83 to 84 percent, were accredited with contingencies that had to be removed within a certain time period, usually 6 months. More recently, 99 percent of the accredited hospitals have been receiving contingencies, several hundred of them serious enough to trigger tentative nonaccreditation procedures, but, due to serious lags in computerizing the new procedures, only four lost accreditation in 1986 and five in 1987 (Bogdanich, 1988). As a result, several hospitals with very serious problems identified in Joint Commission surveys were able to retain their accreditation status for months and even years. Meanwhile, they had lost their Medicare certification as a result of validation surveys triggered by complaints.

Enforcement Criteria

HCFA, in its state operations manual or otherwise, provides little guidance to the state agencies on how to decide whether the deficiencies found by surveyors amount to noncompliance with a Condition of Participation. For example, Hospital A may have deficiencies in four of the five standards under a condition yet be judged in compliance with the condition, whereas Hospital B may have deficiencies in only three standards and be ruled out of compliance. The judgment is left to the state survey agency.

In contrast, the Joint Commission has developed a complex algorithm for converting the scores on completed survey report forms into summary ratings on a decision grid sheet, one for each of the major performance-related functions taken into account in making accreditation decisions and in deciding whether to assign contingencies. In some cases, such as medical staff appointment, clinical privileges, and monitoring functions (e.g., reviews of blood utilization, medical records, and surgical cases), the score is taken directly from the survey form. In most cases, a set of scores on related items on the survey report form is aggregated according to specific written rules into a summary score. For example, the summary score for "evidence of quality assurance actions taken" is aggregated from some 21 scores on related items in 18 chapters of the accreditation manual.

The accreditation decision grid, then, aggregates the hundreds of scores given by surveyors into 43 summary scores under 10 headings (e.g., medical staff, monitoring functions, nursing services, quality assurance, medical records). Seven additional scores for standards on implementation monitoring status are listed but not used in making the accreditation decision. A further set of rules is then applied to determine whether the hospital should be accredited; the same rules decide whether contingencies should be assigned, with what deadlines, and whether they are subject to a follow-up visit or just written documentation of corrective action. For example, a tentative nonaccreditation decision is forwarded to the Accreditation Committee of the Joint Commission's board of commissioners if all four elements under the medical staff heading are scored 4 or 5, or if five of the seven elements under the monitoring heading are scored 4 or 5, and so forth. Similarly specific rules determine whether 1-month, 3-month, 6-month, or 9-month written progress reports are required, or whether 6-month, 9-month, or 12-month on-site surveys are necessary.
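
A minimal sketch of this decision-grid logic may help. Only the two threshold rules quoted above come from the text; the worst-score aggregation rule, the element names, and the scores themselves are assumptions for illustration.

```python
# Simplified sketch of the decision-grid logic described above. Item
# scores run from 1 (substantial compliance) to 5 (noncompliance).
# Taking the worst score as the summary is an assumed aggregation rule;
# only the two threshold rules for tentative nonaccreditation are taken
# from the text.

def summary_score(item_scores):
    """Aggregate related survey-form items (worst score is an assumed rule)."""
    return max(item_scores)

decision_grid = {
    # 4 summary scores under the medical staff heading (hypothetical items)
    "medical_staff": [summary_score(s) for s in ([4, 5], [2, 4], [5], [4, 4])],
    # 7 summary scores under the monitoring heading (e.g., blood use,
    # medical records, surgical case review)
    "monitoring": [3, 4, 5, 4, 4, 2, 5],
}

def tentative_nonaccreditation(grid):
    """Apply the two decision rules cited in the text."""
    med = all(score >= 4 for score in grid["medical_staff"])    # all 4 elements at 4 or 5
    mon = sum(score >= 4 for score in grid["monitoring"]) >= 5  # 5 of 7 elements at 4 or 5
    return med or mon

print(tentative_nonaccreditation(decision_grid))  # True: both rules trip here
```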

These three sets of decision rules (surveyor scoring of individual items on the survey report form, aggregation of the individual scores into summary scores on the accreditation grid sheet, and the rules used to make nonaccreditation and contingency decisions) are new and are still evolving as they are used in practice. They were adopted in response to complaints about variations in surveyor judgment and in Joint Commission decision making about accreditation; the advent of computers has made them feasible.

Conclusions, Issues, And Options

Conclusion: Quality Assurance Through Certification And Accreditation Is Limited

Federal and Joint Commission efforts to develop and apply quality assurance standards are hampered in several ways. First, despite 70 years of efforts, we still do not have adequate and valid outcome standards.14 Because outcomes by themselves are affected by many factors besides what happens in hospitals, adverse or even improved outcomes can only be indicators of possible quality problems or opportunities that, in turn, trigger further investigation to see if some aspect of hospital care was involved (Donabedian, 1966, 1988; Lohr, 1988). Medicare and Joint Commission standard-setters therefore have tried to mandate quality assurance processes in which hospitals use indicators of quality—outcome-oriented if possible but usually process and even structural in nature—to examine quality of care. However, few clinical indicators have been adequately validated through research. Even fewer indicators of the quality of organizational performance exist. Nevertheless, to the extent there is knowledge about how to improve quality or make quality assurance more effective, it should be reflected in the Medicare and Joint Commission standards and survey processes.

The second barrier to quality assurance through certification and accreditation is the limited surveillance capacity inherent in any system of periodic inspections. A 2-day visit every year or two limits the ability of even the best surveyors to see whether the process of care conforms to standards of best practice in an adequate sample of cases, let alone what the outcomes were. This "distance" problem is another reason why the standard-setters have tried to impose quality assurance standards that require the hospital itself to conduct such surveillance continuously after the inspectors leave (Vladeck, 1988).

A third impediment to using regulatory, or self-regulatory, standards to assure quality is the ambivalent attitude of Medicare officials, the state agencies that actually survey the facilities, and Joint Commission leaders toward the use of sanctions. The raison d'être of the Joint Commission is professional self-improvement. Federal and state officials are primarily motivated by the desire to make Medicare benefits widely available, and they are also subject to political pressure to keep facilities open if at all possible. The only formal sanction is loss of certification or accreditation, a drastic step that officials are reluctant to take except in extreme cases. The due process protections of the legal system also discourage enforcement attempts, as do the difficulties of documenting quality problems more subtle than gross negligence or death. Thus, for a variety of reasons, officials are very reluctant to take formal enforcement actions, especially to the extent of terminating a facility, preferring instead to work with substandard or marginal facilities over time to bring them into compliance. This approach works well when the hospitals involved have the will and capacity to improve, if shown how, but it is ill-equipped to deal with facilities that cannot or will not improve.

Fourth, while the federal government has delegated much of the standard-setting and enforcement to private accreditation bodies on the one hand, it has given away much discretion to the states on the other. The states have always varied greatly in their interpretation of federal standards, and little has been done to increase consistency. HCFA requirements for state survey programs are very loose. Federal officials recognized from the beginning that who does the surveying is critical, "since this greatly influences what the emphasis will be, regardless of what the standard-setters think the emphasis should be" (Cashman and Myers, 1967, p. 1112), but little has been done to standardize state survey capacity or process. The development of interpretive guidelines and survey procedures for the new Conditions of Participation was a step in the right direction. HCFA could develop more sophisticated decision rules for state agencies to use in determining compliance and making enforcement decisions. It also could develop a more statistically credible survey validation program to check the performance of the Joint Commission and the states.15

Conclusion: Certification And Accreditation Could Play A Role In Quality Assurance

Many of the obstacles to more effective quality assurance facing HCFA's survey and certification program and the Joint Commission's accreditation efforts also face Medicare's Utilization and Quality Control Peer Review Organizations (PROs): lack of knowledge about the relations among structure, process, and outcome; distance; and political pressure. One of the advantages of the PRO program is its continuous access to information on individuals and the episodes of care they experience. Unlike the survey agencies or the Joint Commission (at least until and unless its plan to develop and then collect data on clinical and organizational indicators is carried out), PROs can actively screen data using indicators of poor quality or inappropriate care. This at least allows them to identify statistically aberrant hospitals and physicians through the use of aggregate profiles. However, the PROs are not well equipped to make in-depth, on-site investigations of the facilities the indicators identify, especially small, remote hospitals in rural areas.

The survey agencies, on the other hand, can and do mandate certain minimum capacity characteristics of hospitals. In addition, they can require that hospitals have and use internal quality assurance standards and procedures. They can require the specific process characteristics that research has shown, or will show, to be associated with favorable outcomes. In the meantime, the standards should be periodically revised in accord with expert consensus about best practices. Finally, survey agencies could be involved formally and systematically in investigations of hospitals where PRO-derived quality indicators signal possible quality problems, and they could use their legal authority to mandate needed changes.

Issues And Options

Major Issue 1: Role Of Certification In Quality Assurance

The Conditions of Participation and procedures for enforcing them are a part of the federal government's quality assurance effort, and, as such, they should be the best possible, given the state of current knowledge and availability of resources, and they should be consistent with and supportive of other federal quality assurance activities.

Pros:

  • A large number of hospitals (1,600) with a significant number of beds are outside the accreditation system, and they tend to be the only hospitals in their area.

  • Hospitals that have lost accreditation have applied for and received certification.

  • The conditions mandate some important basic structure and process standards (e.g., life safety code, sanitation and infection control, etc.) that can be enforced legally if there are related quality problems found by PROs or otherwise (e.g., through complaints).

  • State health facility surveyors are useful for investigating the causes of indicators of poor quality revealed through surveillance of case statistics.

  • Quality is multifaceted, and multiple systems of surveillance and enforcement are useful.

Cons:

  • The inherent limits on the ability of periodic facility inspections to find problems in the quality of patient care are too great (compared to, say, a peer review approach) to justify more investment in this approach.

  • Quality-of-care problems in unaccredited hospitals could be effectively dealt with by the PROs or other programs based on systematic, ongoing review of cases.

  • Political pressures on state health agencies and HCFA to keep hospitals open, especially in rural areas, are too great.

  • The need to keep PRO data confidential precludes coordination with the certification process; potential triggering of regulatory enforcement would poison the peer review process.

Related issue: Improving the standards.

If certification is considered to be an important part of the federal quality assurance effort, the standards (Conditions of Participation) should be revised to be consistent with and supportive of the overall federal quality assurance effort and should be kept up to date.

Pros:

  • The current conditions and related standards and elements were developed in the early 1980s and do not reflect recent advances in measuring and assuring quality of care.

  • State licensure standards, even for basic structural aspects of hospitals, vary widely; certification assures conformity to a uniform set of standards.

Cons:

  • It is not realistic to expect that the conditions, which must go through the formal federal rule-making process, can be updated continuously.

  • Little or no relation has been shown between facility-based standards and quality of patient care.

Related issue: Improving enforcement.

HCFA should take a number of steps to increase enforcement capacity (some of them already adopted in nursing home regulation), including the following: specification of survey team size and composition; use of survey procedures and instruments that focus more on patients and less on records; development of explicit decision rules for determining enforcement actions; adoption of intermediate sanctions, such as fines and bans on admissions, so the punishment can fit the crime; and more use of federal inspectors to evaluate state agency performance through validation surveys and to inspect state hospital facilities.

Pros:

  • Increasing competition and price regulation (e.g., prospective payment) in the hospital sector call for more attention to quality assurance and enforcement, especially in small rural hospitals.

  • Enforcement can be increased through these kinds of federal actions, as has been done with certified nursing homes.

Cons:

  • These steps are not worth the cost, given the limits on their effectiveness.

Major Issue 2: Role Of The Joint Commission In Assuring Quality Of Care For Medicare Patients

Deemed status should continue, and the Joint Commission should be encouraged in its efforts to develop a state-of-the-art quality assurance program, but, at the same time, federal oversight of the Joint Commission should be increased to ensure accountability and there should be more disclosure of information about hospitals with quality problems discovered by the Joint Commission.

Pros:

  • Joint Commission standards are higher and more up-to-date than the Conditions of Participation.

  • Accreditation is a positive incentive that motivates hospitals to improve more than certification does or can (the Joint Commission is planning to reinforce this by recognizing "superior" hospitals).

  • Joint Commission inspectors have better clinical credentials and make more consistent decisions.

  • The Joint Commission may achieve better compliance than the state agencies because accreditation is highly valued and the state agencies are hampered procedurally and politically (e.g., due process, lack of authority to deal with repeat deficiencies, political pressure to assure access to Medicare services); in fact, HCFA might contract with the Joint Commission to conduct all certification surveys, subject to closer monitoring, rather than deal with the inconsistencies and administrative costs of dealing with more than 50 state survey agencies.

  • The Joint Commission is planning voluntarily to release information to HCFA on hospitals with significant quality problems whose continued accreditation is conditional on major changes. These would be the 7 to 8 percent of hospitals surveyed each year that trigger one or more of the Joint Commission's nonaccreditation decision rules.

Cons:

  • Higher standards are not meaningful if they are not enforced vigorously.

  • In any case, the Joint Commission is a private organization governed by associations of the providers it is regulating; its survey findings are confidential (except in 13 states—e.g., New York, Pennsylvania, Arizona—where the survey is a public document under state law). The Joint Commission is not publicly accountable and, therefore, responsibility for assuring the health and safety of Medicare beneficiaries should not be delegated to it.

  • The Joint Commission is still relatively weak in enforcing environmental and life safety code standards.

  • HCFA must maintain a certification program with adequate standards and sufficient capacity (resources and procedures) in any case, to deal with small and rural hospitals that are not accredited, and this program could and should be applied to all (hospitals would still be encouraged to seek accreditation).

  • The resources for increasing federal oversight—more funding for more intensive state inspections, more federal inspectors to conduct validation surveys—would be better used elsewhere in the federal quality assurance program.

Major Issue 3: Improving Coordination Of Federal Quality Assurance Efforts

HCFA should develop criteria and procedures for referring cases in which there are indications of serious quality-of-care problems from PROs to the Office of Survey and Certification and vice versa.

Pros:

  • The quality-of-care screens used by PROs include only indicators of quality-of-care problems, and the actual role of a hospital in producing adverse indicators has to be investigated further before changes can be required or sanctions applied. In many cases, on-site surveys by health facility inspectors could usefully supplement central reviews of cases by PRO clinicians.

  • The state inspection agencies and federal regional offices, in turn, could alert PROs when they find hospitals with possible quality-of-care problems; the PROs could then initiate focused reviews to document process of care or patient-outcome problems, if any.

Cons:

  • Most state inspection agencies do not have physician inspectors, and some have few nurses, which limits their capacity to evaluate the quality of clinical care or to defend findings in court against a facility's physician consultants.

  • Any additional resources for handling quality-of-care problems should go to building up PROs or some other peer review-oriented mechanism.

Concluding Remarks

About 7,000 hospitals provide services to Medicare patients. The Secretary of DHHS has the regulatory authority to promulgate standards called Conditions of Participation in order to assure the adequate health and safety of Medicare patients in those hospitals, although the 5,400 hospitals accredited by the private Joint Commission and the AOA are deemed to meet the federal standards without further inspection by a public agency (except for a small number of accredited hospitals that are subject to validation surveys each year). In effect, then, Joint Commission standards are the Medicare standards for most Medicare beneficiaries using hospital services. At the same time, the users of 1,600 hospitals rely on the standards in the Medicare Conditions of Participation. These are mostly small, primarily rural hospitals where Medicare beneficiaries do not have the alternative of going to an accredited hospital. Both sets of standards, therefore, affect a large number of people and should be as effective as possible in achieving the goal of assuring adequate care.

This chapter has examined the evolution of Medicare and the Joint Commission hospital standards from mostly structural standards (aimed at assuring that a hospital has the minimum capacity to provide quality care) to mostly process standards (aimed at making hospitals assess in a systematic and ongoing way the actual quality of care provided on their premises). Also, certain structural standards, such as those for fire safety, that continue to be mandated and enforced through the certification and accreditation standards may not be closely related to patient care but are important factors in patient safety.

The certification and accreditation programs are inherently limited in their capacity to assure quality of care. They are hampered by the lack of knowledge about the interrelations between structure and process features of a hospital and patient outcomes. They are limited because periodic inspections cannot reveal much about how well the process of care conforms to the standards of best practice, or what the outcomes of care are. They rely on the subjective judgment of their inspectors and the enforcement attitudes of the inspection agencies.

Certification and accreditation could play a significant role in Medicare's quality assurance efforts if several issues are addressed. Pros and cons of suggested strategies are identified for consideration.

References

  • Affeldt, J.E., Roberts, J.S., and Walczak, R.M. Quality Assurance: Its Origin, Status, and Future Direction—A JCAH Perspective. Evaluation and the Health Professions 6:245-255, 1983.

  • AHA (American Hospital Association). Hospitals, Journal of the American Hospital Association (Guide Issue, Part 2), 40 (August 1, 1966).

  • Association of Health Facility Licensure and Certification Agency Directors. Summary Report: Licensure and Certification Operations. Unpublished report submitted to Health Standards and Quality Bureau, Health Care Financing Administration. Baltimore, Md., 1983.

  • Bogdanich, W. Prized by Hospitals, Accreditation Hides Perils Patients Face. Wall Street Journal, October 12, 1988, pp. A1, A12.

  • Cashman, J.W. and Myers, B.A. Medicare: Standards of Service in a New Program—Licensure, Certification, Accreditation. American Journal of Public Health 57:1107-1117, 1967.

  • Davis, L. Fellowship of Surgeons: A History of the American College of Surgeons. Chicago, Ill.: American College of Surgeons, 1973.

  • DHHS (Department of Health and Human Services). Medicare Validation Surveys of Hospitals Accredited by the JCAH: Annual Report for FY 1979. Washington, D.C.: U.S. Department of Health and Human Services, 1979.

  • DHHS. Medicare Validation Surveys of Hospitals Accredited by the JCAH: Annual Report for FY 1980. Washington, D.C.: U.S. Department of Health and Human Services, 1980.

  • DHHS. Inventory of Surveyors of Medicare and Medicaid Programs, United States, 1983. Baltimore, Md.: Health Care Financing Administration, 1983.

  • DHHS. Report on Medicare Validation Surveys of Hospitals Accredited by the Joint Commission on Accreditation of Hospitals (JCAH): Fiscal Year 1985. In Report of the Secretary of DHHS on Medicare. Washington, D.C.: U.S. Government Printing Office, 1988.

  • Donabedian, A. Evaluating the Quality of Medical Care. Milbank Memorial Fund Quarterly 44:166-203, 1966.

  • Donabedian, A. The Epidemiology of Quality. Inquiry 22:282-292, 1985.

  • Donabedian, A. The Quality of Care: How Can It Be Assessed? Journal of the American Medical Association 260:1743-1748, 1988.

  • Feder, J. Medicare: The Politics of Federal Hospital Insurance. Lexington, Mass.: D.C. Heath, 1977a.

  • Feder, J. The Social Security Administration and Medicare: A Strategy for Implementation. Pp. 19-35 in Toward a National Health Policy. Friedman, K., and Rakoff, S., eds. Lexington, Mass.: D.C. Heath, 1977b.

  • Federal Register, Vol. 45, pp. 41794-41818, June 20, 1980.

  • Federal Register, Vol. 48, pp. 299-315, January 4, 1983.

  • Federal Register, Vol. 51, pp. 22010-22052, June 17, 1986.

  • Foster, J.T. States Are Stiffening Licensure Standards. Modern Hospital 105:128-132, 1965.

  • Fry, H.G. The Operation of State Hospital Planning and Licensing Programs. American Hospital Association Monograph Series, No. 15. Chicago, Ill.: American Hospital Association, 1965.

  • GAO (General Accounting Office). The Medicare Hospital Certification System Needs Reform. HRD-79-37. Washington, D.C.: General Accounting Office, 1979.

  • Greenfield, S., Lewis, C.E., Kaplan, S.H., et al. Peer Review by Criteria Mapping: Criteria for Diabetes Mellitus: The Use of Decision-Making in Chart Audit. Annals of Internal Medicine 83:761-770, 1975.

  • Greenfield, S., Nadler, M.A., Morgan, M.T., et al. The Clinical Investigation and Management of Chest Pain in an Emergency Department: Quality Assessment by Criteria Mapping. Medical Care 15:898-905, 1977.

  • Greenfield, S., Cretin, S., Worthman, L.G., et al. Comparison of a Criteria Map to a Criteria List in Quality-of-Care Assessment for Patients with Chest Pain: The Relation of Each to Outcome. Medical Care 19:255-272, 1981.

  • HCFA Task Force (Health Care Financing Administration). HCFA Task Force Recommendations. Unpublished document in files of the Health Standards and Quality Bureau, Health Care Financing Administration, Baltimore, Md., 1982.

  • HCFA. Appendix A, Interpretive Guidelines—Hospitals. Pp. A1-A165 in State Operations Manual: Provider Certification. Transmittal No. 190. Health Care Financing Administration. Washington, D.C.: U.S. Department of Health and Human Services, 1986.

  • Health Insurance Benefits Advisory Council. Report Covering the Period July 1, 1966-December 31, 1967. Washington, D.C.: Social Security Administration, 1969.

  • IOM (Institute of Medicine). Improving the Quality of Care in Nursing Homes. Washington, D.C.: National Academy Press, 1986.

  • Jacobs, C.M., Christoffel, T.H., and Dixon, N. Measuring the Quality of Patient Care: The Rationale for Outcome Audit. Cambridge, Mass.: Ballinger, 1976.

  • JCAH (Joint Commission on Accreditation of Hospitals). Standards for Hospital Accreditation. Chicago, Ill.: Joint Commission on Accreditation of Hospitals, 1965.

  • JCAH. 1970 Accreditation Manual for Hospitals. Chicago, Ill.: Joint Commission on Accreditation of Hospitals, 1971.

  • JCAH. The PEP Primer: Performance Evaluation Procedure for Auditing and Improving Patient Care. Chicago, Ill.: Joint Commission on Accreditation of Hospitals, 1975.

  • JCAH. Guidelines Set for AMH Revision. JCAH Perspectives 1(5):3, 1981.

  • JCAH. New QA Guidelines Set. JCAH Perspectives 2(5):1, 1982.

  • JCAH. New Quality and Appropriateness Standard Included in 1984 AMH. JCAH Perspectives 3(5):5-6, 1983.

  • JCAH. JCAH Board Approves New Medical Staff Standards. JCAH Perspectives 4(1):1, 3-4, 1984a.

  • JCAH. Quality Assurance Standards Revised. JCAH Perspectives 4(1):3, 1984b.

  • JCAH. "Implementation Monitoring" for Designated Standards. JCAH Perspectives 5(1):3-4, 1985.

  • JCAH. Monitoring and Evaluation of the Quality and Appropriateness of Care: A Hospital Example. Quality Review Bulletin 12:326-330, 1986.

  • JCAH. Hospital Accreditation Program Scoring Guidelines: Nursing Services, Infection Control, Special Care Units. Chicago, Ill.: Joint Commission on Accreditation of Hospitals, 1987.

  • JCAHO (Joint Commission on Accreditation of Healthcare Organizations). Overview of the Joint Commission's "Agenda for Change." Mimeo. Chicago, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1987.

  • JCAHO. An Introduction to the Joint Commission: Its Survey and Accreditation Processes, Standards, and Services. Third edition. Chicago, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1988a.

  • JCAHO. Rules Change on Monitoring and Evaluation Contingencies. Joint Commission Perspectives 8:5-6, 1988b.

  • JCAHO. Medical Staff Monitoring and Evaluation: Departmental Review. Chicago, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1988c.

  • JCAHO. Proposed Clinical Indicators for Pilot Testing. Chicago, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1988d.

  • JCAHO. Field Review Evaluation Form: Proposed Principles of Organizational and Management Effectiveness. Chicago, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1988e.

  • JCAHO. Hospital Accreditation Program Surveyors, September 1988. Chicago, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1988f.

  • JCAHO. 1990 Accreditation Manual for Hospitals. Chicago, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1989.

  • Jost, T.S. The Joint Commission on Accreditation of Hospitals: Private Regulation of Health Care and the Public Interest. Boston College Law Review 24:835-923, 1983.

  • Lohr, K.N. Outcome Measurement: Concepts and Questions. Inquiry 25:37-50, 1988.

  • Longo, D.R., Wilt, J.E., and Laubenthal, R.M. Hospital Compliance with Joint Commission Standards: Findings from 1984 Surveys. Quality Review Bulletin 12:388-394, 1986.

  • McNerney, W.J. Hospital and Medical Economics. Chicago, Ill.: American Hospital Association Hospital Research and Educational Trust, 1962.

  • Palmer, R.H. and Reilly, M.C. Individual and Institutional Variables Which May Serve as Indicators of Quality of Medical Care. Medical Care 17:693-717, 1979.

  • Phillips, D.F. and Kessler, M.S. Criticism of the Medicare Validation Survey. Hospitals, Journal of the American Hospital Association 49:61-62, 64, 66, 1975.

  • Roberts, J.S. and Walczak, R.M. Toward Effective Quality Assurance: The Evolution and Current Status of the JCAH QA Standard. Quality Review Bulletin 10:11-15, 1984.

  • Roberts, J.S., Coale, J.G., and Redman, R.R. A History of the Joint Commission on Accreditation of Hospitals. Journal of the American Medical Association 258:936-940, 1987.

  • Sanazaro, P.J. Quality Assessment and Quality Assurance in Medical Care. Annual Review of Public Health 1:37-68, 1980.

  • Schroeder, S.A. Outcome Assessment 70 Years Later: Are We Ready? New England Journal of Medicine 316:160-162, 1987.

  • Silver, L.H. The Legal Accountability of Nonprofit Hospitals. Pp. 183-200 in Regulating Health Facilities Construction. Havighurst, C.C., ed. Washington, D.C.: American Enterprise Institute for Public Policy Research, 1974.

  • Somers, A.R. Hospital Regulation: The Dilemma of Public Policy. Princeton, N.J.: Industrial Relations Section, Princeton University, 1969.

  • Stephenson, G.W. College History: The College's Role in Hospital Standardization. Bulletin of the American College of Surgeons (February):17-29, 1981.

  • Taylor, K.O. and Donald, D.M. A Comparative Study of Hospital Licensure Regulations. Berkeley, Calif.: School of Public Health, University of California, 1957.

  • Vladeck, B.C. Quality Assurance Through External Controls. Inquiry 25:100-107, 1988.

  • Worthington, W. and Silver, L.H. Regulation of Quality Care in Hospitals: The Need for Change. Law and Contemporary Problems 35:305-333, 1970.

1. Throughout this chapter, we use the terms nonaccredited and unaccredited. Nonaccredited hospitals are those that have lost accreditation from the Joint Commission. Unaccredited hospitals are those that have never been accredited by the Joint Commission or that were accredited but subsequently lost accreditation and are not actively pursuing it.

2. Another regulation automatically permits hospitals that meet the Medicare Conditions of Participation to participate in Medicaid.

3. One consumer representative has served on the board since 1981. In late 1989, two more public members were added to the Joint Commission board.

4. The author wishes to acknowledge the helpful comments provided by staff of the Joint Commission, HSQB, and HCFA's Office of Policy Development on earlier drafts of this chapter.

5. Most of the unaccredited hospitals had fewer than 25 beds and therefore were not eligible for accreditation under ACS rules at that time.

6. The Canadian Medical Association was also a founder of JCAH but withdrew in 1959 to develop the Canadian Council on Hospital Accreditation. The American Dental Association joined JCAH in 1980.

7. At 1961 hearings on health services for the aged, HEW Secretary Ribicoff said he would "hand down an order that any hospital that was accredited by the Joint Commission on Accreditation would be prima facie eligible" (quoted in Jost, 1983, p. 853). The report of the Senate Finance Committee accompanying the Medicare bill said that hospitals accredited by JCAH would be "conclusively presumed to meet all the conditions for participation, except for the requirement of utilization review" (quoted in Worthington and Silver, 1970, p. 314).

8. Art Hess, first head of Medicare, told the American Public Health Association at its 1965 annual meeting that the Social Security Administration did not want to pay for services that did not meet "minimal quality standards," but "the intention ... is not to impose requirements that cannot be met." He went on to say that "the program, through its definitions, provides support to what has now been achieved, and makes continued upgrading possible as progress in standards is made in the private sector through accreditation activities" (Hess, 1966, p. 14).

9. Two special certification provisions were implemented in 1966 for certifying hospitals that did not meet the Conditions of Participation. The access provision allowed for the certifying of rural hospitals out of compliance with one or more conditions but in compliance with all statutory provisions, provided the hospital was located in a rural area where access by Medicare enrollees to fully participating hospitals would be limited. The second provision, based upon the Burleson amendment, waived the statutory 24-hour registered nurse requirement for rural hospitals meeting all other requirements. Both provisions have since been terminated.

10. As of 1970, 98 hospitals that had applied in 1966 were still not in the program and 411 hospitals were participating through the special access certification provision (Worthington and Silver, 1970).

11. JCAH apparently adopted the utilization review requirement (implemented in 1967) in the hope that accredited hospitals could be deemed to meet all federal requirements without state agency inspection. The Secretary of the DHHS, however, has never agreed to let this accreditation standard be deemed to meet the federal utilization review requirement. More recently, however, hospitals have been able to meet the requirement if they are reviewed through Medicare's Utilization and Quality Control Peer Review Organization (PRO) program.

12. Even though compliance at the condition level may be similar, it is interesting to note that more detailed analyses in earlier reports found that only about 10 to 14 percent of the specific deficiencies cited were the same (DHHS, 1979, 1980; GAO, 1979).

13. These worksheets, which provide insight into the thinking that went into the revision of the Conditions of Participation for hospitals during the 1981-1983 period, are in the HCFA files (HCFA Task Force, 1982).

14. For example, comparative hospital mortality figures have no meaning without consideration of many factors such as case-mix, severity of illness, geographic differences, and patterns of care of the terminally ill among hospitals, hospices, nursing homes, and family homes.

15. As of late 1989 HCFA was considering a revision of its sampling methodology to improve the effectiveness of its validation efforts. Also, beginning in FY 1989, the number of validation surveys performed by state agency staff was increased to approximately 200 per year (HCFA, personal communication, 1989).
