State governments and many nonprofit organizations aim to support children and families, but knowing how to best allocate resources can be a challenge. At the Prenatal-to-3 Policy Impact Center, we review the extensive landscape of research on early childhood-focused state policies to learn the most effective ways to create the conditions in which children thrive from the start. Where the research is lacking, we conduct evaluations to determine if a program is working, for whom, and under what conditions, all vital information for organizations looking to maximize their impact. In this post, we offer a behind-the-scenes look at one of our current research projects. Read on to see how my team approaches a program evaluation.
A State Program to Educate Young People About Parenting and Paternity
In 2007, the Texas Legislature directed the State Board of Education and the Office of the Attorney General Child Support Division (OAG-CSD) to develop a program to educate high school students about responsible parenting, healthy relationships, and the legal and financial implications of having a child. This directive gave rise to the Parenting and Paternity Awareness, or PAPA, curriculum. In 2008, the OAG-CSD partnered with our Executive Director, Dr. Cynthia Osborne, to evaluate the effectiveness of the original high school PAPA curriculum. Dr. Osborne found that after completing the PAPA curriculum, students demonstrated more knowledge and healthier attitudes about key topic areas including the cost of parenting, the value of co-parenting, establishing paternity, and healthy relationship skills.
In an effort to reach economically vulnerable and young parents, PAPA’s school-based curriculum has evolved into a virtual, video-based program. In partnership with the OAG-CSD, we are currently evaluating the modernized PAPA Program. Knowing how important it is that young children are raised in households with sufficient resources and with parents who have the knowledge and skills to provide nurturing, supportive care, we see great value in evaluating a program like PAPA. The OAG-CSD will use our findings to optimize the PAPA Program and ensure the services they offer young parents are as effective as they can be.
For the evaluation, we are examining changes in knowledge, attitudes, and behavioral intentions related to responsible parenting, healthy relationship formation, and the financial and legal responsibilities of parenthood after participants engage with the PAPA Program video series. This program evaluation is broken into four main phases that take place over the course of several years:
- Planning and research design (current phase)
- Data collection tool development
- Data collection
- Data analysis and reporting
Evaluation Phase 1: Planning and Research Design
The overarching research objective described above resulted from months of preliminary collaboration between our Center and the OAG-CSD. Before a program evaluation formally begins, we outline the scope of the project. We collaborate with our client to learn the ins and outs of the program, understand the goals and outcomes the client is trying to reach, and establish specific, feasible, and measurable research questions.
Once we establish what we are seeking to learn, the conversation turns to how we will answer the research questions—we select the research design for the study. Ideally, a program evaluation can identify the causal impact of the program on the outcomes. In real-world scenarios, however, making causal links is often impossible because we cannot randomly assign participants to the program or because we don’t have a comparison group that is similar enough to the program participants.
We work with our client to determine what types of data we can access, what the timeline will look like, and which research designs are feasible. Our goal is to navigate these real-world constraints to design the most rigorous study possible, getting as close as we can to a causal link between the program and the outcomes of interest.
Evaluation Phase 2: Data Collection Tool Development
The PAPA Program evaluation relies on both quantitative and qualitative data, using original surveys as the primary tool for data collection to assess learning over time. This research design is known within the field as a repeated measures approach, which looks at within-group changes over time.
We designed a series of three surveys that allow us to observe changes in knowledge, attitudes, and intentions as participants progress through the series of video lessons. We meticulously crafted each survey question to align with research aims while also ensuring inclusivity, preventing bias, and prioritizing readability. Further, we designed every survey to minimize any unique barriers the target population may face—we considered language, accessibility, technology, and cultural needs.
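To make the repeated-measures idea concrete, here is a minimal sketch of how within-group change across survey waves might be summarized in Python. The data and column names (participant_id, wave, knowledge_score) are hypothetical placeholders for illustration only, not the actual PAPA survey variables or the Center’s analysis code.

```python
# Minimal, hypothetical illustration of a repeated-measures comparison.
# The data and column names (participant_id, wave, knowledge_score) are
# placeholders, not the actual PAPA survey variables.
import pandas as pd
from scipy import stats

# Long-format data: one row per participant per survey wave
long_df = pd.DataFrame({
    "participant_id":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "wave":            [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "knowledge_score": [4, 6, 7, 5, 5, 8, 3, 6, 6],
})

# Reshape to wide format so each participant's waves line up in one row
wide = long_df.pivot(index="participant_id", columns="wave", values="knowledge_score")

# Paired comparison of the same participants' scores at wave 1 vs. wave 3
t_stat, p_value = stats.ttest_rel(wide[1], wide[3])
print(f"Mean change: {(wide[3] - wide[1]).mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because the same participants are measured at each wave, a comparison like this reflects within-group change over time rather than differences between separate groups.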
The new surveys undergo rigorous simulation testing and quality control. Team members try every possible combination of answers and buttons to find any glitches, holes, or errors in the data collection tool. This stage is critical to ensure that the surveys accurately capture all participants’ responses during the data collection process. In addition, all of our studies are reviewed by the Institutional Review Board, which ensures ethical and regulatory compliance, safeguarding the rights and wellbeing of those participating.
Evaluation Phase 3: Data Collection
The PAPA Program evaluation is open to individuals who participate in Texas’s Nurse-Family Partnership (NFP) and who also opt in to the PAPA Program. NFP nurses, rather than the researchers, will recruit participants; therefore, in this phase, our primary focus as researchers turns to tracking participation levels.
Tracking survey participation is a vital step toward high-quality data, which in turn lead to robust study results. Following up with potential respondents as needed raises participation rates, and more survey participants make the data more reliable and make more rigorous analyses possible. Similarly, we track the characteristics of survey participants versus non-participants (when possible) because representative participation also makes the data more reliable.
Representative participation means that the survey participants have similar characteristics to the overall group of program participants, increasing our confidence that the results tell the full story of participant experiences. Knowing who has participated in our surveys at each point along the way allows us to offer our client real-time recommendations to adjust recruitment and enrollment strategies.
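As a rough illustration of what this tracking can look like in practice, the sketch below computes a simple response rate and compares the age mix of survey respondents to the full group of program participants. The roster and its fields (completed_survey, age_group) are hypothetical stand-ins, not the evaluation’s actual monitoring data or code.

```python
# Hypothetical sketch of participation tracking; the roster fields
# (completed_survey, age_group) are illustrative, not the evaluation's data.
import pandas as pd

# One row per program participant, flagging who has completed a survey so far
roster = pd.DataFrame({
    "participant_id":   range(1, 9),
    "completed_survey": [True, True, False, True, False, True, True, False],
    "age_group":        ["<20", "20-24", "<20", "25+", "20-24", "<20", "25+", "20-24"],
})

# Overall response rate among program participants
response_rate = roster["completed_survey"].mean()
print(f"Response rate: {response_rate:.0%}")

# Compare the age mix of survey respondents to the full participant group
overall_mix = roster["age_group"].value_counts(normalize=True)
respondent_mix = roster.loc[roster["completed_survey"], "age_group"].value_counts(normalize=True)
print(pd.DataFrame({"all_participants": overall_mix, "respondents": respondent_mix}).fillna(0).round(2))
```

In a real evaluation, a noticeable gap between the two distributions would prompt the kind of real-time recruitment and enrollment adjustments described above.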
Monitoring our data collection tools and tracking participation levels are not the only ways we contribute to the data collection process. For the PAPA Program evaluation, we also create and provide materials, such as fliers, that Texas NFP nurses can use to educate potential participants. Sometimes, we are also able to offer compensation, such as a gift card, to thank respondents for their time and promote participation.
Evaluation Phase 4: Data Analysis and Reporting
Once the survey data collection period ends, we clean, manipulate, and analyze the data, working together as a research team to unearth the story and new knowledge our data hold. For the PAPA Program evaluation, we expect to be able to determine whether and to what extent participants experienced changes in knowledge, attitudes, and intentions related to responsible parenting, healthy relationship formation, and the financial and legal responsibilities of parenthood after participating in the PAPA Program.
We will share our findings with our client, who can use them to make improvements to PAPA and inform their plans and goals for the program. This program evaluation can also contribute to the broader evidence base on effective investments in families and children, supporting other organizations with similar goals of improving the health and wellbeing of children and families.
The Research and Evaluation team at our Center conducts original research to build the evidence base of what works, supporting our clients to make the best decisions for their programs, services, and ultimately the children and families they serve. Our team brings together expertise in analytic methods and policy, which—coupled with an acute understanding of clients’ program objectives—allows us to provide pragmatic and innovative assessments.