
Evaluation Essentials: Methods For Conducting Sound Research

ISBN: 978-0-7879-8439-7
Paperback
325 pages
July 2008, Jossey-Bass
List Price: US $77.95
Government Price: US $53.72

Figures and Tables.

Preface.

Acknowledgments.

The Author.

ONE: INTRODUCTION.

Learning Objectives.

The Evaluation Framework.

Summary.

Key Terms.

Discussion Questions.

TWO: DESCRIBING THE PROGRAM.

Learning Objectives.

Motivations for Describing the Program.

Common Mistakes Evaluators Make When Describing the Program.

Conducting the Initial Informal Interviews.

Pitfalls in Describing Programs.

The Program Is Alive, and So Is Its Description.

Program Theory.

The Program Logic Model.

Challenges of Programs with Multiple Sites.

Program Implementation Model.

Program Theory and Program Logic Model Examples.

Summary.

Key Terms.

Discussion Questions.

THREE: LAYING THE EVALUATION GROUNDWORK.

Learning Objectives.

Evaluation Approaches.

Framing Evaluation Questions.

Insincere Reasons for Evaluation.

Who Will Do the Evaluation?

External Evaluators.

Internal Evaluators.

Confidentiality and Ownership of Evaluation Ethics.

Building a Knowledge Base from Evaluations.

High Stakes Testing.

The Evaluation Report.

Summary.

Key Terms.

Discussion Questions.

FOUR: CAUSATION.

Learning Objectives.

Necessary and Sufficient.

Types of Effects.

Lagged Effects.

Permanency of Effects.

Functional Form of Impact.

Summary.

Key Terms.

Discussion Questions.

FIVE: THE PRISMS OF VALIDITY.

Learning Objectives.

Statistical Conclusion Validity.

Small Sample Sizes.

Measurement Error.

Unclear Questions.

Unreliable Treatment Implementation.

Fishing.

Internal Validity.

Threat of History.

Threat of Maturation.

Selection.

Mortality.

Testing.

Statistical Regression.

Instrumentation.

Diffusion of Treatments.

Compensatory Equalization of Treatments.

Compensatory Rivalry and Resentful Demoralization.

Construct Validity.

Mono-Operation Bias.

Mono-Method Bias.

External Validity.

Summary.

Key Terms.

Discussion Questions.

SIX: ATTRIBUTING OUTCOMES TO THE PROGRAM: QUASI-EXPERIMENTAL DESIGN.

Learning Objectives.

Quasi-Experimental Notation.

Frequently Used Designs That Do Not Show Causation.

One-Group Posttest-Only.

Posttest-Only with Nonequivalent Groups.

Participants’ Pretest-Posttest.

Designs That Generally Permit Causal Inferences.

Untreated Control Group Design with Pretest and Posttest.

Delayed Treatment Control Group.

Different Samples Design.

Nonequivalent Observations Drawn from One Group.

Nonequivalent Groups Using Switched Measures.

Cohort Designs.

Time Series Designs.

Archival Data.

Summary.

Key Terms.

Discussion Questions.

SEVEN: COLLECTING DATA.

Learning Objectives.

Informal Interviews.

Focus Groups.

Survey Design.

Sampling.

Ways to Collect Survey Data.

Anonymity and Confidentiality.

Summary.

Key Terms.

Discussion Questions.

EIGHT: CONCLUSIONS.

Learning Objectives.

Using Evaluation Tools to Develop Grant Proposals.

Hiring an Evaluation Consultant.

Summary.

Key Terms.

Discussion Questions.

Appendix A: American Community Survey.

Glossary.

References.

Index.
