Including Oncology Outcomes of Care in the Computer-Based Patient Record

Publication: ONCOLOGY, Vol 9, No 11, 1995

Changes in the health care system have shifted research toward outcomes of care, effectiveness, efficiency, clinical practice guidelines, and costs. Greater use of computer systems, including decision support, quality assurance, effectiveness, and cost containment systems, as well as networks, will be required to integrate administrative and patient care data for determining outcomes and managing resources. This article describes developments anticipated in the decade ahead, including the integration of outcomes data and clinical practice guidelines as content in computer-based patient records; the development of review criteria from clinical practice guidelines for translating guidelines into critical paths; and feedback systems to monitor performance measures and benchmarks of care and, ultimately, to cost out cancer care. [ONCOLOGY 9(Suppl):161-167, 1995]

Introduction

In the 1990s, the focus of health care research began to shift toward outcomes, effectiveness, and efficiencies. The Agency for Health Care Policy and Research (AHCPR) was legislatively charged with initiating programs and research to provide a more rational basis for decisions about which treatments to offer and which technologies to purchase. Dr. Clifton Gaus, the agency administrator, represents the mission of AHCPR in terms of a quality-accountability continuum (Figure 1) [1]. Basic science and biomedical research precede this continuum or provide a basis for the type of health services research that AHCPR sponsors. Outcomes and effectiveness research, clinical practice guidelines, technology assessments, and the science of quality measurement are the necessary ingredients to yield outcomes management (the assessment of clinical practice) and institutional or individual accountability.

The computer-based patient record is the most useful tool to manage the data from outcomes, effectiveness, and efficiencies research. In the outcomes initiative, we define "effectiveness" as outcomes experienced by, or observed in, patients in routine clinical practice. This is distinct from "efficacy," which refers to the potential benefit of clinical interventions provided under ideal circumstances to patients who meet specific criteria. "Cost effectiveness" summarizes the cost and effect of treatment in terms of specified outcomes measured in nonmonetary units; it indicates value obtained for resources expended. In the past, the concept that "knowledge is power" held sway, but this has shifted in a market-driven system to the concept that "knowledge is money" [2].

Outcomes data are an integration of administrative and patient care data used to determine outcomes and manage resources. Outcomes are only one part of the assessment of a health system's performance. Other components include access, utilization, cost, resources, and patient satisfaction.

The focus of outcomes research is also moving toward measurement of patterns of care and quality of care based on transaction information. This focus emphasizes the use of clinical practice guidelines. Recently, attention has centered on the possibility of using clinical practice guidelines developed by AHCPR as content driving the computerized patient record [3]. Others have described placing clinical practice guidelines on the physician workstation to provide access to up-to-date information [4].

Guidelines can be useful in managing care, since they define the appropriate diagnostic and treatment interventions and procedures for achieving outcomes while reducing variability in practice. A computer programmed with clinical practice guideline content can improve data collection and act as a decision platform for guiding practice [3]. Practice guidelines include recommendations for diagnostic and treatment procedures that might be overlooked in the routine delivery of care, but they still require clinician decision making, since many options are provided.

This manuscript describes the need to integrate outcomes data management, clinical practice guidelines, and the development of review criteria into feedback systems that monitor critical paths (clinical pathways) and performance measures (benchmarks of care) and, ultimately, cost out care through the computer-based patient record. The examples given relate to oncology care, but other conditions could be substituted.

Outcomes Data Management

One definition of outcomes is the end result of a treatment or intervention. While this definition looks and sounds simple, establishing realistic and measurable outcomes in actual practice is more difficult. Many types of outcomes are being used in a variety of health services research. Outcomes management examines the treatment of clinical conditions rather than individual procedures or treatments. It is the systematic assessment of clinical practice, encompassing outcomes that are relevant to patients (mortality, morbidity, complications, symptom reduction, functional improvement) as well as physiologic and biologic indicators. It involves all reasonably held theories and alternative clinical practice interventions [5].

Many of the concepts of outcome management are wrapped up in the terms health status and health-related quality of life (HRQOL). Health status measures a patient's clinical, biological, and physiological status such as morbidity, mortality, blood pressure, hemoglobin, and temperature. Health-related quality of life measures physical function such as activities of daily living (ADLs), instrumental ADLs (eg, medication administration), emotional and psychological functioning and well-being, social functioning and support, role functioning, general health perceptions, pain, vitality (energy/fatigue), and cognitive functioning.

One method of monitoring outcomes in the computerized patient record is the use of disease-specific health status and health-related quality of life measurements. The tools to measure health status and quality of life, however, have been described for only a few diseases and conditions.

Once the health status or health-related quality of life indicators are chosen for specific diseases or conditions, patient trends for groups of patients with the same condition can be monitored. These types of systems help to define episodes of care, ie, when a patient's problem began and when it "ended," and also which procedures were attached to which episode or patient condition.
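
As a minimal sketch of how such an episode might be represented in a computerized record (the field names are hypothetical and not drawn from any specific system), procedures and serial health status measurements can be grouped between a start date and an end date:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class OutcomeMeasurement:
    measured_on: date
    instrument: str   # eg, a disease-specific health status or HRQOL tool
    score: float

@dataclass
class EpisodeOfCare:
    condition: str                     # eg, "breast cancer"
    began_on: date
    ended_on: Optional[date] = None    # None while the episode is still open
    procedures: List[str] = field(default_factory=list)
    outcomes: List[OutcomeMeasurement] = field(default_factory=list)

# Attaching procedures and serial scores to the episode lets trends be pooled
# later for groups of patients with the same condition.
episode = EpisodeOfCare(condition="breast cancer", began_on=date(1995, 3, 1))
episode.procedures.append("lumpectomy")
episode.outcomes.append(OutcomeMeasurement(date(1995, 4, 1), "HRQOL survey", 72.0))
```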

The socioeconomic and sociodemographic characteristics of patients are often forgotten in defining content in databases to monitor outcomes. Unless the computer database is built interrelating patient outcomes with patient demographics, it is difficult to determine if the outcomes of the condition are a result of practitioner interventions or the patient's environmental, demographic, social, or economic conditions. For example, it is difficult to determine patients' responses to chemotherapy when they are also taking megadoses of vitamins, changing diet, and consuming enormous volumes of shark cartilage. Similarly, it is more challenging to manage patients who are homeless than those who have a residence.

Demographic data are also needed to assess the role and influence of comorbidities, eg, history of substance abuse, age, obesity, or failure of previous treatment interventions. Different outcomes can be expected after breast cancer surgery for a 65-year-old patient with no comorbidities than for a 65-year-old patient who also has insulin-dependent diabetes, a history of congestive heart failure, and hypertension.

Two other forgotten areas to monitor for outcomes are safety and claims databases. The safety database includes error reports, falls, and other patient liabilities. The claims database includes not only patient liabilities but also personnel or employee claims for back injuries, falls, and workmen's compensation claims.

Table 1 is a composite list of data requirements in the patient record for better integration of outcomes with patient care [6-8].

AHCPR Oncology-Related Projects

The activities of the AHCPR include research efforts, the development of clinical guidelines, and technology assessments. The Medical Treatment Effectiveness Program (MEDTEP) research portfolio includes studies designed to describe breast cancer screening policies and practice; evaluate practice variations and costs of cancer; study hysterectomy outcomes (a community-based study); identify treatment choices and outcomes in prostate cancer, ie, TURP versus open prostatectomy or nonoperative treatments; examine regional variations in cancer treatment and mortality; study the impact of a physician's insurer on early cancer detection; evaluate breast and colon screening by cancer mortality; study cancer prevention for minority women in a Medicaid HMO; and perform a retrospective survival analysis for prostate cancer.

Large PORT (patient outcomes research team) studies in oncology include the assessment of therapies for benign prostatic hypertrophy and localized prostate cancer (Dr. Wennberg, Dartmouth) and a new PORT II project concerning the care, costs, and outcomes of localized breast cancer (Dr. Hadley, Georgetown University) and prostatic disease (Dr. Barry, Boston).

The AHCPR also funds a number of Research Centers on Minority Populations. Those that focus on cancer include the University of California-San Francisco, Henry Ford Hospital in Detroit, Pacific Health Research Institute in Honolulu, and the University of New Mexico, Albuquerque.

Clinical practice guidelines developed by the AHCPR that are useful to the oncologist include those on acute pain management; depression detection, diagnosis, and treatment; benign prostatic hyperplasia; quality determinants of mammography; and the management of cancer pain.

The AHCPR technology assessment program has conducted assessments of the selection criteria for hyperthermia in conjunction with cancer chemotherapy and the use of autologous peripheral stem cell transplantation. In addition, health technology reviews have been published on lymphedema pumps, pneumatic compression devices, external and implantable infusion pumps, and hematopoietic stem cell transplantation for multiple myeloma.

Tools to Measure Oncology Outcomes

Many tools described in the literature can be automated to produce oncology outcomes information. The simplest tools are statistical packages that provide aggregate data, show trends graphically, and fit simple regression curves to determine whether a patient is improving, stabilizing, deteriorating, or has died [7]. These maximal directions of patient change are illustrated in Figure 2. Like the trend analysis that has long been common in intensive and critical care environments, these tools are now needed to measure outcomes in medical oncology, surgical oncology, primary care, and long-term care.
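
For instance, a minimal sketch of such trend analysis in Python, assuming serial health status scores in which higher values indicate better function (the slope threshold is arbitrary and purely illustrative):

```python
def classify_trend(scores, threshold=0.1):
    """Fit a least-squares line to serial health status scores and classify
    the overall direction of change (higher score = better function assumed)."""
    n = len(scores)
    if n < 2:
        return "insufficient data"
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(scores) / n
    numerator = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, scores))
    denominator = sum((x - x_mean) ** 2 for x in xs)
    slope = numerator / denominator
    if slope > threshold:
        return "improving"
    if slope < -threshold:
        return "deteriorating"
    return "stabilizing"

print(classify_trend([55, 58, 62, 66]))   # -> improving
print(classify_trend([70, 70, 70, 70]))   # -> stabilizing
```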

Wennberg used claims data as a retrospective tool for small area analysis of variation. Using Medicare data tapes, he provided evidence that differences in utilization, or in the care provided from one geographic area to another, stem from decisions that providers make after their patients contact them [9,10]. Cromwell and Mitchell found that variation rates in surgery are related not only to supply and demand but also to medical need, ability to pay, and the availability of substitute resources [11].

Paul et al described secondary databases that can be used retrospectively to collect data [12]. Some of these secondary databases can be used for outcomes research and are listed in Table 2 [13]. However, these tools are cumbersome to use, need data quality checks prior to use, and usually have to be purchased or licensed for use.

Appropriateness research by RAND-UCLA is another retrospective tool. This method incorporates a process that includes the selection of procedures, a literature review, clinician input, development of a list of indications for the performance of a procedure, creation of computer software, preparation of training material, and updating standards [14].

Not all tools are valid and reliable for all patient conditions. However, the SF-36 (the Medical Outcomes Study [MOS] 36-item short form) defines several domains of outcomes, including physical health (physical role limitation, bodily pain, and health perception) and mental health (energy, social functioning, emotional role limitation, and depression risk), and is being used successfully with many patient groups [15,16]. The Health Outcomes Institute, Bloomington, Minn, has developed a national study of 18 conditions and a national questionnaire of health status prior to, during, and following several medical and surgical conditions [17]. Patterns of patient profiles of pain, mobility, and other variables are described. Cancer-related conditions have not been included in this study at this time.

Table 3 lists additional tools that may be included prospectively in computerized databases to measure and manage outcomes. Of note to oncologists is the tool by Padilla et al for identifying quality of life in cancer patients with pain [18]. Several issues are important in selecting the appropriate tool for outcomes management: conceptual compatibility, consistency of purpose, sensitivity/responsiveness to change, approaches to data collection, approaches to scoring/weighting, metric properties of the instrument, reliability and validity, and the feasibility and practicality of using the same tool in acute care, ambulatory care, and long-term care environments.

Clinical Practice Guidelines as Content in Computer-Based Patient Records

Professional associations, academic and private institutions, insurers, hospitals, managed care associations, and the US government have all developed and issued clinical practice guidelines in the last decade. The development of practice guidelines has been spurred by the variability of clinical practice that makes it difficult to define quality and the inability to access the information that clinicians need in a useful format.

Clinical practice guidelines developed by AHCPR are defined as "systematically developed statements to assist practitioners and patients in decisions of what is appropriate in health care for specific clinical circumstances" [19]. Guidelines are also called practice standards, protocols, clinical trials outcome reports, and care process formats. Specific entities refer to their guidelines as practice parameters (American Medical Association), consensus conference summary reports (National Institutes of Health), Task Force statements (US Preventive Services Task Force, Office of Disease Prevention and Health Promotion), and simply guidelines (Centers for Disease Control). This paper uses the Institute of Medicine definition of clinical practice guidelines. Excellent protocols for managing patients with cancer are available from the National Cancer Institute's PDQ.

The AHCPR clinical practice guidelines, the US Preventive Services Task Force statements, the NIH Consensus Conference reports, and the PDQ are all available on-line through the National Library of Medicine in a free electronic system called HSTAT (Health Services/Technology Assessment Text). Guidelines can be accessed soon after their publication, either by dial-up access or on the Internet through the NIH gopher, FTP, or Mosaic (http://www.nlm.nih.gov; select Online Information Services, then HSTAT). Not only can health providers access these cancer information resources, but consumer access to the AHCPR guidelines and PDQ is also encouraged.

The AHCPR has recently published a methodology book that describes the many methodologies used in guideline production [20]. The methodology for developing AHCPR guidelines has included a step to develop an algorithm based on the logic of the practice guideline [21]. Transforming a guideline into an algorithm is a first step toward translating guidelines into logical decision programs for computers.
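
As an illustration of that transformation, the branching logic of an annotated algorithm can be written directly as a decision program. The fragment below is hypothetical and greatly simplified, patterned loosely on a pain-management-style algorithm rather than on any actual AHCPR guideline:

```python
def pain_management_step(pain_score, on_analgesic):
    """Hypothetical, simplified guideline fragment: each branch mirrors one
    decision node of an annotated algorithm."""
    if pain_score is None:
        return "Assess pain with a standard rating scale"
    if pain_score >= 7:
        return "Severe pain: reassess regimen and consider specialist referral"
    if pain_score >= 4:
        if on_analgesic:
            return "Moderate pain: titrate current analgesic"
        return "Moderate pain: initiate analgesic per guideline"
    return "Mild pain: continue current plan and reassess at next visit"

print(pain_management_step(5, on_analgesic=False))
```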

In addition, the AHCPR guidelines are translated into medical review criteria, a process that also makes use of algorithms [22,23]. These two byproducts of development define the skeletal logic that can be used to computerize clinical practice guidelines. The advantages of translating guidelines into review criteria are described in a two-volume book from AHCPR [24]. Review criteria become the necessary elements in defining a critical path.

The next step of the quality cycle involves translating the guideline review criteria into performance measures. Another two-volume series from the AHCPR defines the methodology and tools for converting guidelines into performance measures [25]. Performance measures set a rate of conformance with guidelines or review criteria and are useful in benchmarking different sites of care.
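
A minimal sketch of a performance measure computed as a rate of conformance with a review criterion, using hypothetical case records and a hypothetical criterion (documented pain assessment within 24 hours of admission):

```python
def conformance_rate(cases, criterion):
    """Return the proportion of eligible cases meeting a review criterion."""
    eligible = [c for c in cases if criterion["applies"](c)]
    if not eligible:
        return None
    met = sum(1 for c in eligible if criterion["met"](c))
    return met / len(eligible)

# Hypothetical review criterion derived from a guideline recommendation.
criterion = {
    "applies": lambda c: c["admitted"],
    "met": lambda c: c.get("pain_assessed_within_24h", False),
}
cases = [
    {"admitted": True, "pain_assessed_within_24h": True},
    {"admitted": True, "pain_assessed_within_24h": False},
    {"admitted": True, "pain_assessed_within_24h": True},
]
print(f"Conformance: {conformance_rate(cases, criterion):.0%}")  # 67%
```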

Other Methodologies to Synthesize Oncology Outcomes

The decision analysis model provides a mechanism for linking interventions to outcomes when evidence from the literature is not available [26,27]. Path analysis is a statistical methodology that can be used to link outcome models [28]. Computer-generated data from documentation also provide information on current practice and outcomes achieved in clinical practice [29].

Patterns of care is a method that the American College of Radiology has used for more than 20 years in radiation oncology [30]. In this methodology, a panel of experts not only conducts a literature review but also reviews the practice patterns of radiation oncology centers nationally. Analysis of outcomes data has typically included data on survival, recurrence, and complications. The surveys have provided evidence that is generalizable, since the patient data come from settings outside clinical trials and from multiple institutions.

The Need for Further Decision Analysis Tools to Facilitate Oncology Care

There are few data available about the oncologic use of decision support tools integrated into patient documentation. Such tools can provide reminders, prompts, alerts, expert protocols, or supplements to prevent physicians from overlooking effective diagnostic tests, procedures, or treatments. These tools could be useful to the patient in choosing treatments, the clinician in making practice decisions, the health system manager in making coverage decisions, and the payer in making purchasing decisions [1].
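
A minimal sketch of such a reminder rule, assuming a hypothetical patient record with an age field and a last-mammogram date; the age range and interval are illustrative placeholders, not a screening recommendation:

```python
from datetime import date, timedelta

def mammography_reminder(patient, today, interval_days=365):
    """Flag a reminder when a screening test appears overdue (illustrative rule)."""
    if not (50 <= patient["age"] <= 69):
        return None
    last = patient.get("last_mammogram")
    if last is None or (today - last) > timedelta(days=interval_days):
        return "Reminder: mammogram appears overdue; review screening status"
    return None

print(mammography_reminder({"age": 58, "last_mammogram": date(1994, 1, 15)},
                           today=date(1995, 11, 1)))
```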

Johnston et al summarized currently available decision support systems that facilitate achieving patient outcomes and found few systems linked to outcomes [31]. The majority of currently available decision support systems facilitate only diagnostic reasoning and do not link assessment to interventions, a linkage that is necessary for outcomes monitoring.

In a survey by the American College of Physicians (ACP), clinicians listed several areas where they require guidance and decision supports (ACP Ranking of Top 50 Guideline Topics. Personal Communication, 1993). Of highest priority was the need for guidance on prostate cancer screening. Relatively important (in the top 10) was the diagnosis and management of prostate cancer. ACP physicians also stressed the need to receive guidance on the diagnosis and management of pleural effusion and, related to all cancer management, the management of pain.

In an excellent review of currently available decision support systems, Dr. Randolph Miller [32] described only two oncology decision support systems: (1) a neural network approach to the detection of metastatic melanoma from chromatographic analysis of urine [33], and (2) a neural network to diagnose cancer [34].

A recent paper by Kibbe et al provides constructive recommendations and a model for translating guidelines into products to be used in quality programs [35]. It makes several assumptions: that health care is provided in complex organizations with many multidisciplinary professionals delivering care; that organizational components that lead to quality include availability and timeliness of services, adequate staffing and training for skill mix, and the achievement of acceptable cost levels; and that outcomes can be the result of multiple subsystems functioning to support patient care.

Kibbe et al propose several steps for the utilization of care protocols, including recognition, identification, implementation, and institutionalization. These stages place the concept of outcomes in a broader management role than monitoring individual performance and patient response.

Included in the broader process of outcomes are the collection of data elements; assurance of validity and reliability of data; identification of issues associated with severity of illness to determine the link between severity, complexity, and risk adjustments; identification of issues associated with measuring staffing resources; and verification of methods for risk-adjusted outcomes [36].

At Henry Ford Health System, Detroit, information systems include the use of guidelines and decision aids such as computerized reminders, expert systems, and clinical education to influence clinical decision making [37]. These tools have already provided a useful interchange of information between sites through the standard Arden Syntax, which simplifies the sharing of medical logic modules [38].
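
Arden Syntax organizes each medical logic module into maintenance, library, and knowledge slots, with the knowledge slot specifying data, evoke, logic, and action components. The Python sketch below merely mimics that organization for illustration; it is not actual Arden Syntax, and the rule and threshold are hypothetical:

```python
# Illustrative stand-in for a medical logic module; not real Arden Syntax.
mlm = {
    "maintenance": {"title": "Low hemoglobin alert", "version": "1.0"},
    "library": {"purpose": "Alert clinician to low hemoglobin during chemotherapy"},
    "knowledge": {
        "data": lambda record: record.get("hemoglobin_g_dl"),
        "evoke": "new hemoglobin result stored",
        "logic": lambda hgb: hgb is not None and hgb < 8.0,   # illustrative threshold
        "action": "Notify clinician: hemoglobin below alert threshold",
    },
}

def run_mlm(mlm, record):
    """Evaluate the module's logic against a patient record and return its action."""
    value = mlm["knowledge"]["data"](record)
    if mlm["knowledge"]["logic"](value):
        return mlm["knowledge"]["action"]
    return None

print(run_mlm(mlm, {"hemoglobin_g_dl": 7.4}))
```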

The Need for Standards in Documentation Format and Language

Three types of standards need to be facilitated to achieve the continuum toward quality care: standards for language, coding, and structure; for the content of documentation (guidelines, review criteria, performance measures, and decision support systems); and for transmission between systems. For patient care data, standards are needed to describe signs and symptoms, diagnoses, procedures, and outcomes. Table 4 lists some of the reasons that standards are needed in defining outcomes goals.

The AHCPR's role in accelerating the use of standards is to facilitate private initiatives in development and applications; improve communications between researchers, standard development organizations, and vendors; provide an infrastructure to support information about standards; fund research on sites of standards development in open architectures that can be shared by all persons through linkages to large communication networks; and extend collaboration efforts to other international communities.

Establishing Patterns of Care in Oncology

One of the major advantages of integrating computer-based patient records with effectiveness products is more efficient delivery of care and improved quality of care. In evaluating care, feedback to the clinician on what has or has not been achieved has frequently been missing: the clinician has documented, recorded, and dictated treatment with little time to reflect on outcomes of care for individuals or groups of patients. However, trends toward prospective or point-of-service management have emerged that benefit from feedback on the resources consumed and the outcomes previously achieved.

To identify patterns of care, and providers who are high performers or low performers, data are needed over time in relation to the outcomes of care achieved. The patient record becomes the site to place clinically relevant events into units for further analysis, to describe the persons served, and to summarize the events. This type of analysis is becoming especially important to case managers who serve as profile managers in multidisciplinary practice environments.

The data elements needed for these evaluations come from practice guidelines and protocols translated into review criteria. It is the next step that the management team takes with the protocols that turns them into useful management tools for determining patterns of care. If the guideline or protocol describes the procedures/tests that should be done and the possible treatment interventions that are known to reach effective outcomes, then the review criteria can monitor whether these were delivered or not delivered [4].

The utility of collecting this information for administration is that it can be managed to support quality of care, patterns or variations in services, utilization review, and outcomes achieved [39]. Quality can be expanded to include clinical competence (decision making, accuracy of diagnosis, recording behavior); the appropriateness of diagnoses, therapeutic interventions, and procedures used; and the effectiveness of treatment procedures, drugs, and technology. Patterns of variance can be attributed to providers, resources, procedures, tests, drugs, or treatments, and can be analyzed by geographic area, patient type, specialty or provider type, and location of care.

Benchmarks and Profiles of Variance

Variance can be monitored through the computer-based patient record when guidelines are translated into review criteria and further monitored with performance measures. When an organization compares its results with those of other organizations that are getting better results, and modifies its practices accordingly, this is called "benchmarking."

Outcomes that are achieved need to be documented. Variances can be represented through scatter diagrams, for example, for patients' lengths of stay as a function of number of comorbidities, or through graphs to show, for example, changes in patient days and charges with and without care management. Some benchmarks present frequency or cumulative summaries of reasons for variance, and others show control charts that summarize mean data over time and outliers (positive and negative) by department.
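
A minimal sketch of the control chart logic, using hypothetical weekly mean length-of-stay values for one department; with so few points, a two-sigma rule is applied here purely for illustration:

```python
import statistics

def control_chart_outliers(values, sigmas=3):
    """Return (mean, lower limit, upper limit, outlier indices) for a series."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    lower, upper = mean - sigmas * sd, mean + sigmas * sd
    outliers = [i for i, v in enumerate(values) if v < lower or v > upper]
    return mean, lower, upper, outliers

weekly_mean_los = [4.2, 4.5, 4.1, 4.4, 4.3, 6.9, 4.2, 4.0]  # hypothetical data
print(control_chart_outliers(weekly_mean_los, sigmas=2))     # flags week 6 (index 5)
```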

Profiles are cumulative reports that can be prepared reflecting patient, provider, department, or condition variances within a care facility. The new National Comprehensive Cancer Network (NCCN) may provide an excellent environment for profiling outcomes in oncology care with diverse providers, resources, and patients [40].

Focused Examples of Achieving Positive Outcomes

Focused examples are becoming a popular method of describing automation efforts that have improved outcomes of care. To date, there are few examples in the information systems literature in which oncology outcomes achieved have been demonstrated and costs reduced. Studies of interventions in pain management are emerging, but most examples are found in the general medicine literature.

In one study, use of a workstation resulted in a reduction in charges per patient admission of almost $900 and a reduction in length of stay of almost 1 day [41]. At the Latter Day Saints Hospital, Salt Lake City, Utah, several studies of automation have described reductions in antibiotic usage 94% of the time with a consultant support system [42]. Recently, use of a locally translated clinical practice guideline from AHCPR on the prevention of pressure ulcers led to an estimated annual savings of $500,000 (assuming that a pressure ulcer episode in acute care costs about $4,200) by linking signs of pressure ulcer to treatments [43]. With integrated guidelines and pathway management, Lovelace in Albuquerque, NM, demonstrated that patients with partial hip replacement who were in case management stayed 2 days less than those not in case management; patients with open reduction internal fixation for hip fractures in case management stayed an average of 3.6 days less than those not in the case management protocol [44].
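
A minimal sketch of the arithmetic implied by the pressure ulcer estimate, using only the figures quoted above:

```python
estimated_annual_savings = 500_000       # dollars, as reported
cost_per_pressure_ulcer_episode = 4_200  # dollars, acute care estimate cited above

episodes_avoided = estimated_annual_savings / cost_per_pressure_ulcer_episode
print(f"Implied pressure ulcer episodes avoided per year: {episodes_avoided:.0f}")
# roughly 119 episodes
```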

Summary

New tools are now available to facilitate more efficient and effective management in the health care arena in the 1990s and beyond. This paper has discussed how outcomes can be effectively managed in computerized databases, and the development and potential use of clinical practice guidelines as content in a computer-based patient record. This is just the initial step toward expanded clinical decision-making methods to enhance computer-based patient records. Guidelines that are translated to review criteria and performance measures become the necessary link for incorporating guidelines content into quality care documentation. All of these methods are more efficiently managed by computerized information systems.

References:

1. Gaus C: Health outcomes and accountability in a market-driven system. Read before the Congress on Health Outcomes and Accountability, Washington, DC, December 14, 1994.

2. Hayes JR: Knowledge is money. Forbes, February 13, 1995, p 188.

3. Shortliffe EH, Tang PC, Amatayakul MK, et al: Future vision and dissemination of computer-based patient records, in Ball MJ, Collen MF (eds): Aspects of the Computer-based Patient Record, pp 46-56. New York, Springer-Verlag, 1992.

4. Silva JS, Zawilski AJ: The health care professional's workstation: Its functional components and user impact, in Ball MJ, Collen MF (eds): Aspects of the Computer-based Patient Record, pp 46-56. New York, Springer-Verlag, 1992.

5. Donaldson M, Capron A (eds): Patient Outcomes Research Teams: Managing Conflict of Interest. Washington, DC, National Academy Press, 1991.

6. Barrett MJ: Is your organization ready for total quality management? Am J Med Qual 7(4):106-110, 1992.

7. McCormick KA: Future data needs for quality of care monitoring, DRG considerations, reimbursement and outcome measurement. Image J Nurs Sch 23:4-7, 1991.

8. Saba VS, McCormick KA: Essentials of Computers for Nurses, 2nd Ed. New York, McGraw-Hill, 1996.

9. Wennberg J, Freeman JL, Culp WJ: Are hospital services rationed in New Haven or over-utilized in Boston? Lancet 1(8543):1185-1189, 1987.

10. Wennberg J, Freeman JL, Shelton RM, et al: Hospital use and mortality among Medicare beneficiaries in Boston and New Haven. N Engl J Med 321:1168-1173, 1989.

11. Cromwell J, Mitchell JB: Physician-induced demand for surgery. J Health Econ 5:293-313, 1988.

12. Paul JE, Weis KA, Epstein RA: Data bases for variations research. Med Care 31(5):S96-S102, 1993.

13. McCormick KA: Nursing effectiveness research using existing databases, in Patient Outcomes Research: Examining the Effectiveness of Nursing Practice: Proceedings of the State of the Science Conference, pp 203-209. Bethesda, Md, NIH Publication No. 93-3411, 1993.

14. Brook RH: The RAND/UCLA appropriateness method, in McCormick KA, Moore SR, Siegel RA (eds): Clinical Practice Guideline Development: Methodology Perspective, pp 59-70. Rockville, Md, AHCPR Pub. No. 95-0009, 1995.

15. Ware JE: Standards for validating health status measures: Definition and content. J Chronic Dis 40:473-480, 1987.

16. Ware JE, Sherbourne CD: The MOS 36-item short-form health survey (SF-36). Med Care 30:473-483, 1992.

17. HOI (Health Outcomes Institute): Update: The Newsletter of the Health Outcomes Institute. Bloomington, MN, Health Outcomes Institute, Summer, 1994.

18. Padilla G, Ferrell B, Grant MM, et al: Defining the content domain of quality of life for cancer patients with pain. Cancer Nurs 13(2):108-115, 1990.

19. Institute of Medicine: Clinical Practice Guidelines: Directions for a New Program. Washington, DC, National Academy Press, 1990.

20. McCormick KA, Moore SR, Siegel RA (eds): Clinical Practice Guideline Development: Methodology Perspective. Rockville, Md, AHCPR Pub. No. 95-0009, 1995.

21. Hadorn DC, McCormick KA, Diokno A: An annotated algorithm approach to clinical guideline development. JAMA 267:3311-3314, 1992.

22. Institute of Medicine: Guidelines for Clinical Practice: From Development to Use. Washington, DC, National Academy Press, 1992.

23. Beavert CS, Magoffin CJ: Medical review criteria: A tool for quality management. Group Practice Journal 42(4):53-57, 1993.

24. Schoenbaum S, Sundwall, Bergman D, et al: Using Clinical Practice Guidelines to Evaluate Outcomes of Care. Vol 1: Issues, Vol 2: Methods. Rockville, Md, AHCPR Publication No. 95-0045 and No. 95-0046, 1995.

25. Duggar B, Palmer H, et al: Understanding and Choosing Clinical Performance Measures for Quality Improvement: Development of a Typology and Attachments. AHCPR publication No. 95-N001 and 95-N002, 1995.

26. Owens DK, Nease RB: Development of outcome-based guidelines: A method for structuring problems and synthesizing evidence. J Qual Imp 19:248-264, 1993.

27. Matchar DB: Application of decision analysis to guideline development, in McCormick KA, Moore SR, Siegel RA (eds): Clinical Practice Guideline Development: Methodology Perspective, pp 35-40. Rockville, Md, AHCPR Pub. No. 95-0009, 1995.

28. Williams AR, DeLurgio SA: Path analysis: Reflections on causal modeling, in McCormick KA, Moore SR, Siegel RA (eds): Clinical Practice Guideline Development: Methodology Perspective, pp 77-84. Rockville, Md, AHCPR Pub. No. 95-0009, 1995.

29. McDonald CJ, Tierney WM, Overhage JM: Computer-based reminder rules, data bases, and guideline development, in McCormick KA, Moore SR, Siegel RA (eds): Clinical Practice Guideline Development: Methodology Perspective, pp 71-76. Rockville, Md, AHCPR Pub. No. 95-0009, 1995.

30. Owen JB, Hanks GE: The patterns of care study: A model for clinical quality assessment, in McCormick KA, Moore SR, Siegel RA (eds): Clinical Practice Guideline Development: Methodology Perspective, pp 85-92. Rockville, Md, AHCPR Pub. No. 95-0009, 1995.

31. Johnston ME, Langton KB, Haynes RB, et al: Effects of computer-based clinical decision support systems on clinician performance and patient outcomes. Ann Intern Med 120:135-142, 1994.

32. Miller RA: Medical diagnostic decision support systems--past, present, and future: A threaded bibliography and brief commentary. J Am Med Informatics Assoc 1(1):8-27, 1994.

33. Cohen ME, Hudson DL, Banda PW, et al: Neural network approach to detection of metastatic melanoma from chromatographic analysis of urine. Proc Ann Symp Comput Appl Med Care, pp 295-299, 1993.

34. Maclin PS, Dempsey J, Brooks J, et al: Using neural networks to diagnose cancer. J Med Sys 15:11-19, 1991.

35. Kibbe DC, Kaluzny AD, McLaughlin CP: Integrating guidelines with continuous quality improvement: Doing the right thing the right way to achieve the right goals. J Qual Imp 20(4):181-191, 1994.

36. Petryshen P, O'Brien-Pallas LL, Shamian J: Outcomes monitoring: Adjusting for risk factors, severity of illness, and complexity of care. JAMIA 4(2):243-249, 1995.

37. Ward RE: Information systems support continuous quality. Group Practice J 42(2):30-35, 1993.

38. Hripcsak G, Clayton PD, Pryor TA, et al: The Arden Syntax for Medical Logic Modules. Proc Ann Symp Comput Appl Med Care, pp 200-204, Washington, DC, 1990.

39. Davies AR: Health care researchers' needs for computer-based patient records, in Ball MJ, Collen MF (eds): Aspects of the Computer-based Patient Record, pp 46-56. New York, Springer-Verlag, 1992.

40. Vaupel P: Cancer treatment network holds out guidelines as bait for payers. Report on Medical Guidelines & Outcomes Research 6(3):9-10, 1995.

41. Tierney WT, Miller ME, Overhage JM, et al: Physician inpatient order writing on microcomputer workstations: Effects on resource utilization. JAMA 269:379-383, 1993.

42. Evans RS, Pestotnik SL, Classen DC, et al: Development of an automated antibiotic consultant. M.D. Computing 10:17-22, 1993.

43. Horn S: LDS nurses reduce pressure ulcer incidence with retooled guidelines. Report on Medical Guidelines & Outcomes Research 6(4):10-11, 1995.

44. Coyle M, Rodgers SC, Monson AL: Guidelines enhance accountability and patient care. Group Practice J 42(6):40-44, 1993.
