Title
Justification for Use of the Pre-Test then Retrospective Pre-then-Post-Test Evaluation in Couple and Relationship Education.
Proposal Focus
Research
Presentation Type
Presentation
Abstract
A retrospective pre-then-post survey design was intentionally chosen as a good fit for the SMART programming in order to evaluate learning outcomes both before and after the program, for several reasons (see Marshall, Higginbotham, Harris, & Lee, 2007; Moore & Tananis, 2009) summarized below. The experimental pretest-posttest design with a control or comparison group is considered one of the most respected methods for measuring change in individuals (Campbell & Stanley, 1966; Kaplan, 2004). This design is highly regarded because it controls threats to internal validity and allows results from the same people, or groups of people, to be compared at multiple time points.
While the pretest-posttest method has these advantages, it also has limitations. One limitation is finding an adequate comparison group, which can be difficult or impossible for researchers to locate. Another is the limited resources and time available to community-based programs for completing comprehensive pretest-posttest comparisons (Brooks & Gersh, 1998). Also, for pretest-posttest comparisons to be meaningful, participants must attend the complete program from start to finish (Pratt, McGuigan, & Katzev, 2000). Given the nature of community education programs, attrition and sporadic attendance are common issues (Pratt, McGuigan, & Katzev, 2000).
Even when pretest-posttest data are complete enough for comparisons to be made, it may be difficult for researchers to detect actual changes in attitudes, behaviors, or skills if participants overstate their original attitudes, behaviors, or skills when completing the pretest (Howard & Dailey, 1979; Moore & Tananis, 2009). This overestimation can occur when participants do not yet clearly understand the attitudes, behaviors, or skills that the program targets (Pratt, McGuigan, & Katzev, 2000). A lack of knowledge on certain topics (e.g., attitudes, behaviors, skills) often supports the initial need for a program intervention, yet the program itself may show participants that they actually knew much less than they thought when they completed the pretest. Thus, one must be aware that pretest-posttest comparisons can be misleading when participants’ frame of reference changes (Howard & Dailey, 1979). “Response shift bias,” a term first introduced by Howard and Dailey (1979), describes the “program-produced change in the participants’ understanding of the construct being measured” (Pratt, McGuigan, & Katzev, 2000, p. 342). In this study, response shift bias was assessed by administering a pre-test at the beginning of programming and a retrospective pre-then-post test at the end of programming. Results indicate that response shift bias was present in a majority of the variables studied and that administering a pre-test followed by a retrospective pre-then-post test is well suited to exposing response shift bias. Specific results will be discussed.
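The detection logic described above can be sketched as a paired comparison: for each variable, the traditional pre-test scores (collected at program start) are compared against the retrospective "then" ratings (collected at program end), and a reliably positive difference signals response shift bias. The following is a minimal illustrative sketch with hypothetical data (not results from the study), using a hand-computed paired t-test on per-participant differences:

```python
import math
import statistics

# Hypothetical ratings (1-5 scale) for eight participants.
# pretest: self-ratings collected at program start.
# retro_pretest: "then" ratings of that same starting point,
# collected retrospectively at program end.
pretest       = [4.2, 3.8, 4.5, 4.0, 3.9, 4.1, 4.4, 3.7]
retro_pretest = [3.1, 3.0, 3.6, 2.9, 3.2, 3.3, 3.5, 2.8]

# Paired t-test on per-participant differences (pretest minus retro).
diffs = [a - b for a, b in zip(pretest, retro_pretest)]
n = len(diffs)
mean_d = statistics.mean(diffs)          # average overestimation
sd_d = statistics.stdev(diffs)           # sample SD of differences
t = mean_d / (sd_d / math.sqrt(n))       # paired t statistic, df = n - 1

# A significantly positive mean difference (pre-test > retrospective
# pre-test) indicates participants overrated their initial standing
# before the program clarified the construct, i.e., response shift bias.
print(f"mean difference = {mean_d:.2f}, t({n - 1}) = {t:.2f}")
```

In practice this comparison would be run separately for each variable measured, with response shift bias inferred for those variables whose pre-test and retrospective pre-test means differ significantly.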
Keywords
retrospective pre-post, relationship evaluation, pre-test post-test
Location
Tiger I
Start Date
10-3-2018 10:00 AM
End Date
10-3-2018 11:30 AM