Assessing Data Quality Across the Semester from an Undergraduate Psychology Participant Pool

Location

D.P. Culp Center Ballroom

Start Date

4-5-2024 9:00 AM

End Date

4-5-2024 11:30 AM

Poster Number

177

Name of Project's Faculty Sponsor

Ginette Blackhart

Faculty Sponsor's Department

Psychology

Classification of First Author

Undergraduate Student

Competition Type

Competitive

Type

Poster Presentation

Presentation Category

Social Sciences

Abstract or Artist's Statement

When collecting data for research in the psychological sciences, academic institutions commonly rely on undergraduate participant pools, often drawing from students enrolled in introductory psychology courses. Many researchers are concerned about the quality of data collected across the semester: it is commonly assumed that data quality is worse at the end of a semester, and that online studies are particularly vulnerable to this decline. Few studies, however, have empirically investigated this question. To address this gap, we conducted one study across multiple semesters to assess whether data quality is worse at the end of the semester than toward the beginning. Participants signed up through Sona and completed the study in an in-person lab setting. After signing an informed consent document, participants completed several questionnaires, followed by a puzzle task, and then completed a few more questionnaires before being debriefed. The time participants took to complete each task (the consent form, the questionnaires, the puzzle task, and the funnel debriefing) was measured. Data quality was assessed by examining the number of incorrectly answered attention-check items, the number of missed items, the length of open-ended responses, response bias, self-reported engagement, and the time spent on each task, which may indicate whether participants rushed through the study. In the online version of this study, we found that for online studies recruiting participants from a university undergraduate pool, data quality was worse at the end of the semester than at the beginning: participants who took part later in the semester wrote fewer words on open-ended questions, incorrectly responded to more instructed-response attention-check items, showed greater response bias, and reported less attention, effort, diligence, and interest.
As with the previously conducted online study, we will compare data quality from those who participated at the beginning of the semester to that from those who participated at the end, and we will discuss how these results compare to those of our online data-quality study. We expect the current study to find that data quality is worse at the end of the semester than at the beginning, even in an in-person study, and we expect the data quality of the in-person study to be higher overall than that of the online study. This research examines the concern that data quality obtained from an undergraduate participant pool decreases across the semester. Although results from the online study support this idea, the effect size was small to moderate; as such, researchers should not be afraid to collect data at the end of a semester, provided they assess data quality. If results from the current study do not align with those of the online study or with our hypotheses, it may suggest that data collected at the end of the semester in in-person studies does not decrease significantly in quality.
