Adaptation

See Customization and localization

Approach
A strategy used to address a specific health problem within a particular population. It refers to the "how" of programming choices. For example, there are many ways to deliver an educational program. These might include using mass media campaigns or educating patients in a clinical setting.

Approach: Insufficient evidence
This is a term used in the Guide to Community Preventive Services. It indicates a body of evidence that does not give the Task Force enough information to decide whether an intervention is effective. A finding of "insufficient evidence" means that there's a need for more research into how effective the approach is. It does not mean that the approach doesn't work. It means that we aren't able to tell whether or not it works.
Related term: Community Guide
(Adapted from the Guide to Community Preventive Services)

Approach: Recommended
This is a term used in the Guide to Community Preventive Services. It indicates that a systematic review of available studies provides strong or sufficient evidence that the approach is effective. The finding does not directly reflect how large the approach's effect is expected to be.

A finding of "recommended" is based on several things. These include the study design, the number of studies, and how consistent the effects are across the studies.
Related term: Community Guide
(Adapted from the Guide to Community Preventive Services)

Approach: Recommended against
This is a term used in the Guide to Community Preventive Services. It indicates that a systematic review of available studies provides strong or sufficient evidence that the intervention is harmful or not effective.
Related term: Community Guide
(Adapted from the Guide to Community Preventive Services)


Baseline
Basic information that is collected before a program starts. The information can be used later as a point of comparison when evaluating the program's impact. Baseline information can also provide data about a starting point before a policy change or other change in a community.
Related term: Post-test

Community Guide

A shortened name for the Guide to Community Preventive Services, a free resource of the Centers for Disease Control and Prevention. The Community Guide website offers reviews of different approaches you can use to influence a population's health. It also tells you which approaches have been most effective.
(Adapted from the Guide to Community Preventive Services)

Community-based participatory research
Community-based participatory research (CBPR) is a "collaborative approach to research that equitably involves all partners in the research process and recognizes the unique strengths that each brings. CBPR begins with a research topic of importance to the community, has the aim of combining knowledge with action and achieving social change to improve health outcomes and eliminate health disparities."
(Source: W.K. Kellogg Foundation Community Health Scholars Program)

Core components
The key pieces of a program or intervention that make it work. If you change or remove them, you can't be sure that the program is still evidence-based.

Customization and localization
The process of changing an existing program for new audiences, conditions, or contexts that it was not originally created for. You can change the program to give it more impact and make it more relevant for your community. But to make sure the program stays evidence-based, it's best to keep these changes to a minimum.

Common elements to change include:

  • Language of the program
  • Cultural references
  • Reading level of materials


Data
For program planning, data are numbers and facts that can be used to describe a specific population at a moment in time. Data may be used to assess the needs of the population a program is targeting, to show the program's impact on that population, or to evaluate whether the program has met its objectives.

Direct observation
Watching people in a natural setting without interacting. An example would be watching shoppers in a grocery store to see if they are reading posted nutritional charts.
(Adapted from NCI's Using What Works)


Evaluation
The systematic collection of information about the activities, characteristics, and outcomes of a program. Evaluation allows those delivering the program to make judgments about it and improve its effectiveness. It may also inform decisions about future program development.
Related terms: Outcome evaluation, process evaluation
(Adapted from NCI's Using What Works)

Evaluation plan
A written document that describes the overall approach or design that will be used to guide an evaluation. It includes what will be done, how it will be done, who will do it, and when it will be done. It also outlines why the evaluation is being conducted and how the findings will likely be used.
(Adapted from CDC's Evaluation Manual: Glossary and Evaluation Resources)

Evidence
Information that helps determine whether a belief or statement is true. It is gathered systematically and goes beyond personal beliefs and opinions. It should be up to date and accurate.

Evidence-based medicine
Evidence-based medicine is the conscientious, explicit, and careful use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means combining individual clinical expertise with the best available external clinical evidence from systematic research.
(Adapted from Sackett, D., Rosenberg, W., Muir Gray, J., Haynes, R. Richardson, W. (1996). Evidence-based medicine: what it is and what it isn't. British Medical Journal, 312, 71-72)

Evidence-based program
A program that has undergone formal testing and was found to be effective.


Fidelity

See Program fidelity

Focus group
A form of qualitative research in which information is collected from a small group that represents a wider population. The focus group allows researchers to learn about participants' perceptions, attitudes, and opinions on an issue or product. The open discussion format is well-suited to exploring new ideas and questions.
Related term: Qualitative data
(Adapted from NCI's Using What Works)

Formative evaluation
A way to look at a community and understand the interests, attributes, and needs of its different populations and people. It is also a way to understand how these characteristics affect their behavior. This kind of research is done before a program is chosen and delivered, and it serves as the basis for developing effective strategies. For example, a community agency may use this kind of research to identify and understand the biggest needs of community members who are at greatest risk for HIV.
(Adapted from Good Questions, Better Answers - © 1998 California Department of Health Services and Northern California Grantmakers AIDS Task Force)

Health disparities

Differences in health status, health outcomes, healthcare access, and quality of health care among groups based on attributes like race/ethnicity and socioeconomic status (SES). Typical measures of health disparities include population-specific differences in the presence of disease, health outcomes, or access to health care.
(Adapted from Institute of Medicine, Unequal Treatment: Confronting racial and ethnic disparities in health care, ed. B.D. Smedley, A.Y. Stith, and A.R. Nelson. 2003, Washington, DC: National Academies Press.)


Implementation

See Program delivery

In-depth interviewing
A method of collecting detailed information through individual interviews, which can range from brief, casual talks to long, formal conversations with members of your audience. In-depth interviewing relies on the interviewing skills of the researcher, including how questions are phrased and how much he or she knows about the subject's culture or frame of reference. Responses gathered in an interview are assumed to be true.
(Adapted from NCI's Using What Works)

Indicator
A measure of quantifiable data (information that can be captured in a number) related to an issue. This number can then help you decide how to address that specific issue.

For example, a program designed to help people quit smoking will probably ask participants whether they are smokers. The number of people who say yes or no is an indicator of how big the need is to address this issue. Also, comparing the number of yeses at the start to the number at the end is an indicator of the overall success of the program.
(Adapted from the 2008 Common Pathways Worcester Indicators Report [Resource page])
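
The before-and-after comparison in the smoking example can be sketched in a few lines of Python. All names and numbers here are hypothetical, for illustration only, not from any real program:

```python
# Hypothetical indicator for a smoking-cessation program:
# compare the share of participants who report smoking at
# baseline with the share who still smoke at the post-test.

def smoking_indicator(baseline_smokers, post_smokers, participants):
    """Return the percentage-point drop in reported smoking."""
    baseline_rate = baseline_smokers / participants
    post_rate = post_smokers / participants
    return round((baseline_rate - post_rate) * 100, 1)

# Illustrative numbers only: 80 of 100 participants smoked at
# baseline; 55 still smoked at the end of the program.
change = smoking_indicator(80, 55, 100)
print(f"Smoking fell by {change} percentage points")
```

The baseline count doubles as a needs indicator (how big the problem is), while the baseline-to-post-test difference serves as a success indicator.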

Intervention

See Program

Key informant interviewing

The process of interviewing influential, prominent, and well-informed members of a community. Community leaders are one such example. The social, political, financial, or management position of the interviewees may provide program planners with information they could not otherwise learn. For example, key informants may have a more objective opinion of a health issue if they are not personally affected by the issue.
Related term: Qualitative data
(Adapted from NCI's Using What Works)

Literacy level

See Readability

Literature review
A summary of published information about a particular topic. This type of document summarizes the current research. It can be helpful for gathering background information, writing grants, or planning programs.

Logic model
A program logic model is a picture of how a program works. The model links program activities and processes with outcomes (both short- and long-term) and includes information about the theory and assumptions that drive the program. A logic model often includes key factors (that support or limit program effectiveness), activities (events, methods, or actions), results of program activities, outcomes (such as changes in knowledge, attitude, or behavior), and impacts (at the organization-, community-, or system-level).
(Adapted from the W.K. Kellogg Foundation)

Needs assessment

The process of collecting and evaluating information about a particular population, organization, or community. This information is focused on specific and measurable problems, challenges, or needs of that group. Some examples of topics for needs assessment include:

  • Patient health knowledge
  • Community perceptions
  • Provider attitudes
  • Institution practices

You can use the information from a needs assessment to design health programs that are tailored to the group's needs.
(Adapted from NCI's Using What Works)


Objective
More specific than a goal, an objective states how much of a goal will be accomplished within a certain time frame. A program objective may describe the actual knowledge, behavior, attitude, or skill changes that result from the program. It may also detail a desired outcome, including:

  • Who is responsible
  • How much of a particular action they need to demonstrate
  • A time frame in which they must demonstrate it

Objectives should be SMART:

  • Specific
  • Measurable
  • Achievable
  • Realistic
  • Time-framed

(Adapted from NCI's Using What Works)

Outcome evaluation
An evaluation to see if the objectives of a program were met. Evaluation measures can include changes in your target audience's knowledge, attitudes, and behaviors. For example, the evaluation may show an increase in physical activity among people who attend a program that promotes exercise.
Related terms: Evaluation, process evaluation
(Adapted from NCI's Using What Works)

Outcomes
Outcomes are events, occurrences, or conditions that show progress toward achieving the purpose of the program. They reflect the results of a program activity compared to its intended purpose and are often expressed in terms of knowledge, skills, or behaviors. They are distinct from measures that reflect the delivery of the program (such as the number of participants).
Related term: Outcome evaluation
(Adapted from the Centers for Disease Control and Prevention)


Packaging
Parts of an evidence-based program that can be changed to make the program more relevant to a particular community, population, or situation. Packaging can also be changed to increase the program's impact. As with any adaptation, these changes must be kept to a minimum to ensure the program stays evidence-based.

Partnership
A relationship that brings individuals and groups together to serve as resources for each other. Partnerships also allow both parties to better achieve a common goal.

Peer-reviewed journal
A publication in which articles on a particular subject are reviewed and edited by a panel of experts on that subject.
(Adapted from NCI's Using What Works)

Pilot test
The process of doing a trial run of a program (or program materials) with a small group of people from the target audience. It allows you to confirm that the program or materials work with this group. A pilot test also lets you find problems with the intervention, so you can make changes before you deliver it on a larger scale.
(Adapted from NCI's Using What Works)

Plain language
Communicating with an audience using language they can understand the first time they read or hear it. This might include:

  • Avoiding specialized terms
  • Defining specialized terms that must be used the first time they are mentioned
  • Crafting messages with short, direct sentences, and short paragraphs

Post-test
Basic information that is collected at the end of a program. It is compared to the baseline to show the effect of the program. It can also provide data about the effect of a policy change or other change in a community.
Related term: Baseline

Primary data
Information about a community that was gathered for the purposes of a particular project. Examples of primary data include:

  • Quantitative data, such as results of a survey of residents in your community
  • Qualitative data, such as focus group results you gathered from target audience members

Related terms: Data, secondary data, qualitative data, quantitative data

Principal investigator (P.I.)
The main researcher or program developer on a given project.
(Adapted from NCI's Using What Works)

Process evaluation
A type of evaluation that looks at the activities of an intervention as they happen. It provides a way to keep track of how the program is being delivered. It is also useful for troubleshooting.
Related terms: Evaluation, outcome evaluation, formative evaluation
(Adapted from NCI's Using What Works)

Program
A planned effort designed to create specific changes in attitudes, beliefs, or behaviors in a population.
(Adapted from NCI's Using What Works)

Program delivery
The act of putting program plans into practice. This set of activities includes the actual provision of the program as well as the work by the organization to prepare for and deliver the program.

Program fidelity
The effort to keep as many elements of an original evidence-based program as possible. If your program fidelity is high, the program you deliver will be more likely to have the same kind of impact as the original.

Program goal
A simple statement that describes who will be affected and what will change as a result of the program.
(Adapted from NCI's Using What Works)

Program rationale
A description of the purpose of a given program, including its goals, objectives, and critical elements.
(Adapted from NCI's Using What Works)

Qualitative data

Data collected in ways that typically encourage discussion, rather than one- or two-word responses. Qualitative data allow you to identify themes or fundamental concepts that provide a general way to understand the specific experiences of individuals. These data usually do not include numbers. They cannot be easily summarized in a table or graph.
Related terms: Data, primary data, secondary data, quantitative data
(Adapted from NCI's Using What Works)

Quantitative data
Data that involve numbers or statistics, or that can be summarized in a table or graph. These data are often gathered using closed-ended questions, like multiple choice or those that need only one- or two-word answers.
Related terms: Data, primary data, secondary data, qualitative data
(Adapted from NCI's Using What Works)

Randomized controlled trial

A study design in which people with similar backgrounds or characteristics are randomly divided into groups. One or more groups will get the intervention (such as new information, treatment, or activities) and are called “experimental” group(s). The other groups do not get the intervention and are called the “control” group(s). In the end, these experimental and control groups are compared, to see the effect the intervention had, if any.
(Adapted from NCI's Using What Works)
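
The random-assignment step at the heart of this design can be sketched in Python. The participant IDs and seed below are made up for illustration; a real trial would use a pre-registered randomization procedure:

```python
import random

def randomize(participants, seed=None):
    """Randomly split a participant list into experimental and control groups."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Illustrative only: eight anonymized participant IDs.
groups = randomize([f"P{i}" for i in range(1, 9)], seed=42)
print("Experimental:", groups["experimental"])
print("Control:", groups["control"])
```

Because assignment is random rather than chosen, the two groups should be similar on average, so later differences can be attributed to the intervention.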

Readability
How hard or easy it is to read a given piece of written material. Different methods measure the readability of a piece of text, including:

  • Grade level (the grade level in school that a reader needs to have reached to understand the text)
  • Average number of syllables in a word
  • Average number of words in a sentence
  • Average number of sentences in a paragraph

Secondary data

Data collected for another project. Sources for secondary data include:

  • Your organization's previous work
  • Government institutions
  • Journals
  • Books
  • Newspapers
  • The internet

Related terms: Data, primary data, qualitative data, quantitative data
(Adapted from NCI's Using What Works)

SEER
The National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) program is a premier source for cancer statistics in the United States. SEER collects cancer information from specific geographic areas representing 26% of the US population. These statistics cover cancer:

  • Incidence
  • Prevalence
  • Survival rates

SEER also compiles reports on cancer mortality for the entire country. The site is for anyone interested in US cancer statistics or cancer surveillance methods.
(Adapted from the SEER website)

Socio-demographics
The measurable characteristics of human populations. They can include:

  • Race
  • Ethnicity
  • Age
  • Gender
  • Income
  • Education

Socio-demographics may be used to describe the group that a program is created for. These data are often derived from government sources such as the US Census or state Department of Public Health reports.

Socioeconomic status (SES)
A measure of an individual's or family's relative economic and social ranking that gives a sense of their access to resources. SES typically measures:

  • Income
  • Education level
  • Occupation
  • Social status in the community

This information helps determine the person's or family's status relative to others.
(Adapted from National Center for Education Statistics)

Systematic review
An evaluation of scientific studies that is conducted by experts. These experts use a formal process to assess the quality of all relevant studies and then summarize their evidence. Their findings provide:

  • Guidance about the effectiveness of particular approaches for addressing a public health problem
  • A description of the groups for whom these approaches are appropriate

(Adapted from the Centers for Disease Control and Prevention)

Target audience

A particular socio-demographic or community group for which:

  • You want to provide services
  • A specialized program or message is designed

Task Force on Community Preventive Services
The Task Force is an independent, non-federal, volunteer body of public health and prevention experts. Its members are appointed by the Director of the Centers for Disease Control and Prevention. The role of this Task Force is to:

  • Oversee systematic reviews led by CDC scientists
  • Carefully consider and summarize review results
  • Make recommendations for interventions that promote population health
  • Identify areas within the reviewed topics that need more research

Related term: Community Guide
(From The Guide to Community Preventive Services website)

Theory
For program planning, a theory is a way to explain "why" behavior change happens or why particular types of programs succeed. It is supported by scientific research.