Authors: A

Altschuld, J. W. (1999). "The Case for a Voluntary System for Credentialing Evaluators." American Journal of Evaluation 20(3): 507-517.
Abstract:
"A voluntary system for credentialing evaluators is described. I examine the urgent need for such a system in the field of evaluation, as well as various concerns regarding credentialing. The paper also includes an unabridged and adapted version of a table that was used in a debate on certification at the 1998 annual meeting of the American Evaluation Association in Chicago. The table is helpful in understanding both the pro and con sides of the credentialing issue," (pg 507).
Anie, S. J. and E. T. Larbi (2004). "Planning and implementing a national monitoring and evaluation system in Ghana: A participatory and decentralized approach." New Directions for Evaluation 2004(103): 129-139.
Abstract:
"Over 2,500 organizations have been funded to carry out HIV/AIDS interventions in Ghana. A comprehensive and well-coordinated monitoring and evaluation system-one that is simple, strategic, and participatory-is needed to track the national response and its effects," (pg 129).

Authors: B

Benko, S. S. and A. Sarvimaki (2000). "Evaluation of patient-focused health care from a systems perspective." Systems Research And Behavioral Science 17(6): 513-525.
Abstract:
"The purpose of this paper is to outline a hierarchic systems theory approach as a framework for patient-focused evaluation in nursing and other health care areas. Such a framework allows for complex features of processes in health care to appear by simultaneous analyses of relationships on different levels and with different methods. In nursing and caring research mostly a 'one-level' design has been employed. There is an awareness, however, that the outcome of the nursing process needs to be evaluated in a more differentiated manner. A systemic model is offered according to systems thinking where both system levels and system dynamics in the 'downward' as well as 'upward' direction are recognized as crucial in the analysis," (pg 513).
Bickman, L. (2002). "Evaluation of the Ft. Bragg and Stark County Systems of Care for Children and Adolescents." American Journal of Evaluation 23(1): 67-68.
Abstract:
"Describes evaluations of two programs, one designed to improve mental health outcomes for children and adolescents referred for mental health treatment, and the other designed to provide comprehensive mental health services to children and adolescents. In both studies, the effects of systems of care are primarily limited to system level outcomes, but do not appear to affect individual outcomes such as functioning and symptomatology," (pg 67).

Authors: C

Chen, H.-T. (2001). "Development of a National Evaluation System to Evaluate CDC-Funded Health Department HIV Prevention Programs." American Journal of Evaluation 22(1): 55-70.
Abstract:
"This article discusses the recent experience of the Centers for Disease Control and Prevention (CDC) in developing a national evaluation system for monitoring and evaluating health department HIV prevention programs that are funded by CDC. The foundation for such a system is evaluation guidance that establishes standardized data elements for a variety of evaluation activities. The article discusses barriers to developing such a system, strategies used to deal with those barriers, the contributions made by stakeholders to the development of the system, the end product, strategies and activities used to meet the capacity building, and technical assistance needs of stakeholders. Lessons learned from this experience should be useful to any organization intending to develop a large evaluation system," (pg 55).
Clements, P. (2005). "Book Review: A Handbook for Development Practitioners: Ten Steps to a Results-Based Monitoring and Evaluation System." American Journal of Evaluation 26(2): 278-280.
Coblio, N. A., P. McCright, et al. (2005). "Systems evaluation and pharmacy redesign needed in addressing medication errors." Journal Of The American Pharmacists Association 45(1): 4-6.

Authors: D

Darabi, A. (2002). "Teaching Program Evaluation: Using a Systems Approach." American Journal of Evaluation 23(2): 219-228.
Abstract:
"Increasing numbers of individuals are being asked to teach courses in program evaluation. Students in program evaluation courses are trying to piece together activities presented to them in order to create their own "big picture" of the seemingly fragmented and chaotic process of evaluation. The abundance of program evaluation approaches and content-specific models exacerbates the problem by only organizing and conveying conceptual and factual knowledge while seeming to render key aspects of program evaluation practice invisible to students. This article describes a program evaluation model that was developed over three semesters of teaching an introductory graduate-level course in program evaluation. The framework utilizes a systems approach that emphasizes its sequenced methodology and the significance of monitoring one's work by providing a series of feedback loops in an ongoing revision process," (pg 219).

Authors: F

Frankel, R. J. (1975). "Systems Evaluation Of Village Water Supply And Treatment In Thailand." Water Resources Research 11(3): 383-388.
Notes:
This item is at the Engineering library, GB651 .W31 and at Mann library, HD1694 .A14.

Authors: G

Green, L. W. and R. E. Glasgow (2006). "Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology." Evaluation & The Health Professions 29(1): 126-153.
Abstract:
"Starting with the proposition that “if we want more evidence-based practice, we need more practice-based evidence,” this article (a) offers questions and guides that practitioners, program planners, and policy makers can use to determine the applicability of evidence to situations and populations other than those in which the evidence was produced (generalizability), (b) suggests criteria that reviewers can use to evaluate external validity and potential for generalization, and (c) recommends procedures that practitioners and program planners can use to adapt evidencebased interventions and integrate them with evidence on the population and setting characteristics, theory, and experience into locally appropriate programs. The development and application in tandem of such questions, guides, criteria, and procedures can be a step toward increasing the relevance of research for decision making and should support the creation and reporting of more practice-based research having high external validity," (pg 126).
Gregory, A. J. and M. C. Jackson (1992). "Evaluating Organizations - A Systems And Contingency Approach." Systems Practice 5(1): 37-60.
Abstract:
"It has become increasingly difficult to keep pace with the amount of information being generated about how to evaluate organizations. If it were not enough that the situation is made difficult by the sheer mass of material on evaluation, clarity is further hindered by many of the publications on the subject failing to make explicit the principles and assumptions upon which they are based. This was the situation confronting the authors when they began a national project with the National Association of Councils for Voluntary Service on the evaluation of the performance of Councils for Voluntary Service. In an attempt to bring some order to the field, this paper adopts a systems and contingency approach to elucidate the nature and practical usefulness of the different methods of evaluation. It first seeks, using some tools of Checkland's soft systems methodology, to present a systematic analysis of the subject of evaluation. Then, in the light of the analysis, an attempt is made to formulate a simple classification of approaches to evaluation which serves to match the different forms of evaluation to the contexts in which they are most appropriate for use," (pg 37).
Gregory, A. J. and M. C. Jackson (1992). "Evaluation Methodologies: A System For Use." Journal Of The Operational Research Society 43(1): 19-28.
Abstract:
"An increasing number of voluntary organizations are required to have their work and structures evaluated as a condition of funding. In response to this trend, a joint project is being undertaken by the Centre for Community Operational Research, at the University of Hull, and the Council for Voluntary Service National Association. In the light of the project work to date, this paper presents an analysis of the theoretical underpinnings of four types of evaluation and formulates a system of evaluation methodologies showing in what circumstances each can be most appropriately used," (pg 19).

Authors: H

Harkreader, S. A. and G. T. Henry (2000). "Using Performance Measurement Systems for Assessing the Merit and Worth of Reforms." American Journal of Evaluation 21(2): 151-170.
Abstract:
"One highly touted use of performance measurement systems is to assess the merit and worth of reforms. In this study, the effect of the League of Professional Schools, a democratic reform initiative in Georgia, was evaluated using performance measures from the state's educational performance measurement system. The findings indicate that the League, in combination with an antecedent condition, motivated leadership, produced more widespread participation in staff development than other schools. In addition, schools that were relatively successful in implementing the tenets of the program exhibited modestly improved levels of student achievement over similar schools. However, the League schools did not outperform schools involved in another school reform that was instituted with the same antecedent condition--motivated leadership. Although both reforms were associated with modestly better student performance, the League seemed to trigger more teacher involvement in school governance than did the alternative reform. There is, however, no evidence that the antecedent condition, motivated leadership, was not sufficient by itself to cause the higher levels of student performance. Analysis of the performance measurement data allowed the merit of the League to be assessed against several different performance standards. Although the performance of the League's schools was positive relative to several of these performance standards, in the end it was impossible to use performance measures to show that the League was a necessary component of the causal package that resulted in improved performance," (pg 151).

Authors: M

McIntire, P. W. and A. S. Glaze (1999). "The use of systems modeling in legislative program evaluation." New Directions for Evaluation 1999(81): 45-60.
Abstract:
"The development of systems models for program evaluations provides unique benefits. Computer simulations can facilitate an understanding of multi-issue legislation and help policymakers reach comprehensive conclusions," (pg 45).
Midgley, G. (1988). A Systems Analysis and Evaluation of Microjob: A Vocational Rehabilitation and Information Technology Training Centre for People with Disabilities. London, City University. M. Phil.
Notes:
This thesis is unavailable through Cornell and the interlibrary loan system.
Midgley, G. (1996). "Evaluation and change in service systems for people with disabilities: A critical systems perspective." Evaluation 2: 67-84.
Notes:
This item is at Mann Library, H62 .E921.
Molas-Gallart, J. and A. Davies (2006). "Toward theory-led evaluation: The experience of European science, technology, and innovation policies." American Journal of Evaluation 27(1): 64-82.
Abstract:
"This article reviews the literature and practice concerned with the evaluation of science, technology, and innovation (STI) policies and theway these relate to theories of the innovation process. Referring to the experience of the European Union (EU), the authors review the attempts to ensure that the STI policy theory is informed by advances in the authors'understanding of the innovation process. They argue, however, that the practice of policy evaluation lags behind advances in innovation theory. Despite the efforts to promote theory-led evaluations of STI policies based on new theories of the systemic nature of innovation, evaluation practice in the EU continues to favor the development of methods implicitly based on outdated linear views of the innovation process. This article examines the reasons why this is the case and suggests that STI policy evaluation should nevertheless be supported by the evolving theoretical understanding of the innovation process," (pg 64).

Authors: P

Poole, D. L., J. Nelson, et al. (2000). "Evaluating Performance Measurement Systems in Nonprofit Agencies: The Program Accountability Quality Scale (PAQS)." American Journal of Evaluation 21(1): 15-26.
Abstract:
"The drive for accountability in human services puts pressure on nonprofit agencies to develop performance measurement systems. But efforts to build capacity in this area have been hindered by the lack of instruments to evaluate the quality of proposed performance measurement systems. The Performance Accountability Quality Scale (PAQS) attempts to fill this gap. The instrument was field-tested on 191 program performance measurement systems developed by nonprofit agencies in Central Florida. Preliminary findings indicate that PAQS provides a structure for obtaining expert opinions based on a theory-driven model about the quality of a proposed measurement system in a not-for-profit agency. The instrument also is useful for assessing agency needs for technical assistance and for evaluating progress in the development of performance measurement systems. Further study is needed to test PAQS in other settings and to explore new areas of research in outcome evaluation," (pg 15).

Authors: R

Renger, R., A. Cimetta, et al. (2002). "Geographic Information Systems (GIS) as an Evaluation Tool." American Journal of Evaluation 23(4): 469-479.
Abstract:
"Evaluators must seek methods that convey the results of an evaluation so that those who intend on using the information easily understand them. The purpose of this article is to describe how Geographic Information Systems (GIS) can be used to assist evaluators to convey complex information simply, via a spatial representation. Although the utility of GIS in such disciplines as geography, planning, epidemiology and public health is well documented, a review of the literature suggests that its usefulness as a tool for evaluators has gone relatively unnoticed. The paper posits that evaluators may have not recognized the potential of GIS, because of two beliefs that GIS can only provide cross-sectional, snapshots of data, and hence cannot depict change and that many of the available databases that underlie GIS do not contain data relevant to the evaluation at hand. This article demonstrates how GIS can be used to plot change over time, including impact and outcome data gathered by primary data collection," (pg 469).
Ryan, K. (2002). "Shaping Educational Accountability Systems." American Journal of Evaluation 23(4): 453-468.
Abstract:
"The No Child Left Behind Act of 2001 (NCLB) institutionalizes the reliance on accountability and assessment systems as a key mechanism for improving student achievement (Linn, Baker, & Betebenner, 2002). However, there is a fundamental tension between performance measurement systems, which do serve stakeholders and public interests through monitoring, and these kinds of indicators where representations of program quality are oversimplified (Stake, 2001). Evaluators are uniquely situated to made a significant contribution in the dialogue about the merits and shortcomings of educational accountability systems. Suggestions concerning how evaluation can contribute to improving and changing accountability systems are presented," (pg 453).

Authors: S

Smith, C. L. and R. L. Freeman (2002). "Using Continuous System Level Assessment to Build School Capacity." American Journal of Evaluation 23(3): 307-319.
Abstract:
"The purpose of this article is to introduce a conceptual model for internal assessment and professional development planning. The continuous systems-level assessment (CSLA) is a model that can provide school and other professionals with a method for making data-based decisions, and is intended to foster local expertise by utilizing assessment information to design effective professional development strategies. The CSLA model includes three phases: needs assessment and problem identification; designing interventions and building staff capacity; and implementing and evaluating interventions. The authors outline the values associated with the CSLA model, the major phases involved in the process, and provide an example of how the process is currently being implemented in an urban school in Kansas. School-based programs are complex, dynamic, and usually involve long-term commitments from a variety of partners. In addition, program implementation often occurs within the context of organic, changing organizations and therefore, requires a flexible model of evaluation. Several bottom-up program evaluation models have been developed that can be used to guide an assessment of the commitments, values, and capacities of schools and other organizations (Fawcett et al., 1996; Fetterman, 1996; Greenwood, Whyte, & Harkavy, 1993; Knoff, 1996; Levin, 1996; Millett, 1996; Scriven, 1967; Stake, 1967, 197 6). The approach described here is similar to these bottom-up program evaluation models, and is being used to assess ongoing improvement efforts and identify professional development needs for schools," (pg 307).
Smith, N. L. (2005). Evaluation design alternatives: Studies, systems, and other variations. American Evaluation Association/ Canadian Evaluation Society joint annual meeting. Toronto, Canada.
Abstract:
"This paper presents a conceptual framework for characterizing evaluation design alternatives. By incorporating dimensions of structure (study versus system), process (preordinate versus emergent), nature of the evaluand (stable versus dynamic), type of knowledge claim (value versus cause), and locus of application (local versus general), the framework encompasses much of the evaluation work currently being conducted. In addition, the framework can be used to define a few archetypes of evaluation design," (pg 2 of pdf).

Authors: U

Ulrich, W. (1988). "Churchman's 'process of unfolding' - Its significance for policy analysis and evaluation." Systemic Practice and Action Research 1(4): 415-428.

Authors: W

Watt, J. H. (1999). "Internet systems for evaluation research." New Directions for Evaluation 1999(84): 23-43.
Abstract:
"The author provides a detailed description of diverse Web-based data collection tools and enumerates their advantages, disadvantages, and logistical challenges. Web-based data collection can offer cost-effective, flexible, and timely solutions to many evaluation needs," (pg 23).
Woodwell, W. H., Jr. (2005). Evaluation as a pathway to learning: Current topics in evaluation for grantmakers. Washington, DC, Grantmakers for Effective Organizations.
Notes:
"EVALUATION=LEARNING: Evaluation as a Pathway to Learning helps grantmakers demonstrate the impact of their grantees and support their learning. Released by Grantmakers for Effective Organizations, the report was shaped by the 2005 Evaluation Roundtable, a meeting that included the evaluation directors from some of the largest foundations in the United States. The report includes information on evaluation techniques and explores concepts such as evaluation's link to knowledge management. It also offers tips on incorporating a results orientation into foundation work without making a large investment and showcases the evaluation approaches of several large and small grantmakers."