
“Out of my control”: science undergraduates report mental health concerns and inconsistent conditions when using remote proctoring software

Abstract

Efforts to discourage academic misconduct in online learning environments frequently include the use of remote proctoring services. While these services are relatively commonplace in undergraduate science courses, there are open questions about students’ remote assessment environments and their concerns related to remote proctoring services. Using a survey distributed to 11 undergraduate science courses engaging in remote instruction at three American, public, research-focused institutions during the spring of 2021, we found that the majority of undergraduate students reported testing in suboptimal environments. Students’ concerns about remote proctoring services were closely tied to technological difficulties, fear of being wrongfully accused of cheating, and negative impacts on mental health. Our results suggest that remote proctoring services can create and perpetuate inequitable assessment environments for students, and additional research is required to determine whether these services achieve their intended purpose of preventing cheating. We also advocate for continued conversations about the broader social and institutional conditions that can pressure students into cheating. While changes to academic culture are difficult, these conversations are necessary for higher education to remain relevant in an increasingly technological world.

Introduction

The number of online course offerings has steadily increased over the past two decades, and the mass transition to remote instruction during the COVID-19 pandemic reinvigorated discussions about academic integrity in virtual spaces (Arbaugh 2014; Castro 2019; Eaton 2020; Kentnor 2015; Gamage et al. 2020; Picciano 2006). Early investigations into the impacts of remote proctoring services exposed inequities and concerns arising from these services, including racial inequity, discrimination against those with disabilities, negative impacts on mental health, and privacy concerns, both physical and virtual (Barrett 2021; Feathers 2021; Gin et al. 2021; Patil & Bromwich 2020; Woldeab & Brothen 2019). Students have previously indicated that they are wary of remote proctoring services, yet these services were used prior to the pandemic and will likely continue to be used as remote instruction gains popularity (Alessio & Messinger 2021; Butler-Henderson & Crawford 2020; Langenfeld 2020; Milone et al. 2017; Nigam et al. 2021; Weiner & Hertz 2017).

The increased use of remote proctoring software in higher education courses presents an opportunity to explore these services, especially as they relate to student course experiences, student mental health, and the characteristics of students’ remote assessment environments. Meaders et al. (2020) found that asking students about their concerns revealed useful knowledge and possible ways to improve student experiences. We anticipate that identifying concerns related to remote proctoring services can reveal trends that instructors and institutions can address to create more inclusive and comfortable course experiences.

We sought to identify characteristics of students’ assessment environments and to determine student concerns specifically related to remote proctoring services. We explored three research questions: 1) In what physical environments are students taking their online, remote exams? 2) What concerns do students have related to remote proctoring of course assessments? 3) Do instructors and students assume different rates of cheating during in-person and remote exams? Using these research questions as a guide, we reviewed the literature related to online learning, remote proctoring services, and student mental health, emphasizing the interplay of these factors in undergraduate students’ course experiences; this review resulted in a conceptual framework for the study. Prior work that addresses students’ “mental health” does not regularly provide clear definitions for the term. For the purposes of our study, we considered “mental health” as defined by the American Psychological Association: “a state of mind characterized by emotional well-being, good behavioral adjustment, relative freedom from anxiety and disabling symptoms, and a capacity to establish constructive relationships and cope with the ordinary demands and stresses of life” (American Psychological Association, n.d.). We addressed these research questions through a multi-institutional survey of both students and instructors. All institutions involved in this study were in the United States.

Literature review & conceptual framework

While online learning has become commonplace in higher education courses, this form of instruction is distinct from the emergency remote teaching experienced during the pandemic. Emergency remote teaching is characterized by an expedited response to emergency scenarios, little training in instructional strategies and available resources, and lack of student intent to enroll in remote courses (Eaton 2020; Ferri et al. 2020; Hodges et al. 2020). Conversely, online learning is an intentional instructional method: faculty members are sometimes trained in effective pedagogical techniques and supported by their institution, and students anticipate the course modality prior to enrollment (Eaton 2020; Hodges et al. 2020; Prince et al. 2020). Neither online learning nor emergency remote teaching is confined to the current era of remote learning, as both strategies have been used to reach more potential students and to respond to emergency situations (Kentnor 2015; Picciano 2006). The pandemic necessitated emergency remote teaching, but our sample does not fall within this category of instruction. The courses involved in our data collection were remotely delivered in Spring 2021 but without an emergency transition, so we will refer to this teaching method as remote instruction with students engaging in online learning. The pandemic provided ample opportunity to explore our research questions, but the scope of our data and analyses could apply to online learning and remote instruction beyond the context of COVID-19.

The switch to emergency remote instruction during the pandemic revived efforts and discussions about how to best maintain academic integrity in online undergraduate courses, no matter the context (Eaton 2020; Gamage et al. 2020). Academic integrity is valued in higher education to promote self-efficacy, reinforce good habits for future behavior, and foster a fair environment for all students (Gilmore et al. 2015; Macfarlane et al. 2014). Several studies determined that academic dishonesty may be more likely to occur in online learning environments, but conversely, these behaviors may be easier to recognize and less likely to contribute to academic success and progress (Arnold 2016; Eaton 2020; Stuber-McEwen et al. 2009; Watson & Sottile 2010). The literature also indicates that some potential benefits of remote courses can be negated by suboptimal remote assessment environments in which students are subjected to frequent distractions (Beatty et al. 2022; Fask et al. 2014; Hollister & Berenson 2009).

To date, research has focused on in-person assessment environments and their impacts on student performance, but less is known about the role of remote assessment environments. One important element of remote proctoring is the need for a quiet, distraction-free environment. This is not always feasible, and those who are unable to find quiet environments may be susceptible to distractions (Driessen et al. 2020). In general, the literature indicates that undergraduate students struggle with distractions and multitasking in both traditional and remote environments, and students’ ability to adapt to virtual learning varies by both personal traits and the quality of their individual environments (Gonzáles-Gutierrez et al. 2022; May & Elder 2018; Wu 2015; Wu & Cheng 2019). Students also tolerate self-produced distractions (e.g., texting on a cell phone) more than external distractions (e.g., noises produced by others), and specifically, external distractions impact learning and subsequent academic performance on assessments (Blasiman et al. 2018; Drozdenko et al. 2012). Furthermore, greater variation in course-wide assessment grades has been identified in online learning environments, but researchers attribute these results to uncontrolled aspects of remote assessment environments instead of increases in cheating behavior (Hollister & Berenson 2009).

Prior work has shown that undergraduate science students value certain learning environments but incorrectly predict what their courses will be like (Hassel & Ridout 2018; Kuh et al. 2006; Lowe & Cook 2003). Students often predict more contact with instructors, smaller class sizes, and lighter workloads than actually experienced in their courses (Akiha 2018; Hassel & Ridout 2018; Lowe & Cook 2003; Meaders et al. 2019). The misalignment between expectations of the course and the actual learning environment can impact course grades and attrition rates (Eagan et al. 2014; Geisinger & Raman 2013; Lisberg & Woods 2018; Watkins & Mazur 2013). The emergency remote teaching environment spurred by the COVID-19 pandemic likely created further dissonance between students’ expectations of their courses and the reality of virtual instruction. Understanding the role of different factors, such as remote proctoring and students’ assessment environments, in these perceptions is important for working towards positive course experiences and meeting learning outcomes.

Without an obvious consensus on the best strategies to maintain academic integrity in remote courses, many instructors and institutions have relied on remote proctoring services, such as Proctorio, Respondus LockDown Browser, and HonorLock, to discourage cheating in undergraduate science courses (Langenfeld 2020; Nigam et al. 2021). Remote proctoring services monitor examinees’ computers and physical spaces, including recording the computer screen, the physical environment, audio, and eye movement, to detect cheating behavior. However, the effectiveness and reliability of these services have been questioned, and it is often unclear how effective these services are at discouraging and detecting cheating (Barrett 2021; Bergmans et al. 2021; Nigam et al. 2021). These services may discriminate against students of color and students with disabilities, whether by necessitating limited movement, failing to recognize darker skin, or perpetuating high-stakes assessment environments (Feathers 2021; Gin et al. 2021; Kolski & Weible 2018; Patil & Bromwich 2020; Woldeab & Brothen 2019). Students who may be discriminated against by remote proctoring services are often those who already face additional stresses in higher education, such as People of Color, students with mental illness, and those with a physical, developmental, or learning disability (Lisnyj et al. 2021). Combined with their unknown reliability, the use of remote proctoring services raises ethical questions about their impact on students and about the role of variable factors, such as environments, in performance and learning outcomes.

Generally, undergraduate students struggle with distractions in both traditional and online environments, and the ability to adapt to a remote space depends on individual circumstances, like students’ traits and environments, and structural components of the course, like pedagogy and available resources (Blasiman et al. 2018; Drozdenko et al. 2012; May & Elder 2018; Wu 2015; Wu & Cheng 2019). While there has been some exploration into the role of remote assessment environments in performance (Beatty et al. 2022; Fask et al. 2014), only a few studies have alluded to the impacts of remote proctoring on mental health (Chaudhry et al. 2022; Eaton & Turner 2020; Kharbat & Abu Daabes 2021), and data from an American context are even more limited. Eaton and Turner’s (2020) rapid review of the remote proctoring literature explicitly calls for further investigation into the relationship between remote proctoring and student mental health.

Based on the interactions between student experiences, remote proctoring services, online learning, and remote assessment environments, we developed a conceptual framework that illustrates the relationships between these topics that are hypothesized by the literature (Fig. 1). Our conceptual framework suggests that the widespread transition to emergency remote teaching has expanded discussions about academic integrity in online spaces, and combined with virtual environments, drives the use of remote proctoring services. We used this conceptual framework to guide our research questions, incorporate background knowledge, and contextualize our findings in the broader literature (Luft et al. 2022). Because the impact of remote proctoring services on student course experiences and perceptions is unknown, we sought to explore students' views about these services. We characterized the specific features of students’ remote assessment environments and students’ concerns about remote proctoring services. To address our three research questions, we developed and distributed three surveys to 1) recruit instructors and gain insight into how they use remote proctoring services in their courses and then 2) receive insights from the students in these courses about their experiences. At the conclusion of our manuscript, we incorporate our findings into our conceptual framework to showcase hypotheses that require further investigation.

Fig. 1 Conceptual framework of the interactions of online learning based on current literature. A conceptual framework based on current literature to structure the research approach. Online learning requires remote learning environments, which are important to students' learning experiences, and online learning also revives conversations among instructors and researchers about academic integrity in virtual spaces. Concerns about academic integrity and remote environments intersect to facilitate using remote proctoring services to monitor student behavior. Currently, the effects of remote proctoring on students’ course experiences are unknown

Methods

Survey development

We used three surveys as part of this study. Our primary source of data was a survey given to students in biology courses that engaged in some form of remote instruction during the period of data collection. We also surveyed the course instructors before and after surveying their students to gain contextual information about the courses. The first instructor survey was given to course instructors to identify courses to participate in the study. Before the semester started, faculty members were asked about their remote proctoring practices and their perceptions of the percent of students who were cheating in their courses. The second instructor survey was distributed after the semester ended, along with a report containing aggregated, anonymized data from their own courses for reflection purposes.

The student survey questions were developed by the authors, several of whom are faculty members and some of whom are part of a Research Coordination Network supported by the National Science Foundation, called EDU-STEM. EDU-STEM is composed of faculty members, postdoctoral scholars, and graduate students from a variety of institutions across the United States who are dedicated to researching inclusive instruction (Thompson et al. 2020). We initially developed questions to reflect the concerns and experiences that authors heard from undergraduate students enrolled in their courses. We also reviewed literature related to cheating and to the use of proctoring services to inform question wording, add additional questions, and remove any questions that were vague or uninformative. After initial development, several undergraduate students who were members of the authors’ research groups were asked to review the questions and provide feedback. This feedback was incorporated into the final version of the student survey. Student surveys were administered after students completed at least one remotely proctored assessment in the course, and surveys remained open for 5–10 days depending on the course. All surveys are provided in Additional file 1: Appendix A and were distributed via Qualtrics (Qualtrics, 2021). This work was determined exempt from full review by the University of Minnesota Institutional Review Board (STUDY00012384), the University of South Alabama Institutional Review Board (#1,544,421–5), and the Auburn University Institutional Review Board (AU 18–349 EP 1811).

Participant context and recruitment

Data for this study were collected in the Spring semester of 2021. Instructors and students were recruited from EDU-STEM-member institutions and from the authors’ own institutions and departments via invitation email (Thompson et al. 2020). This resulted in 11 participating undergraduate courses from three public, research-focused institutions across the country, including two very high research activity universities and one high research activity university (Carnegie Foundation for the Advancement of Teaching 2021). These STEM courses included introductory and upper-level courses in biology, and instructors reported using Proctorio, Respondus LockDown Browser, or both as a remote proctoring service for the courses. Students were recruited from these courses through email, and some, but not all, instructors incentivized student participation with a small amount of course extra credit (< 1% of total course points). Those students who did not want to participate in the study but did want to receive course credit were able to select “No, I do not consent to participate in this study” and still receive the extra credit. The ten course instructors themselves did not receive any incentive for participation aside from receiving aggregated, anonymized data from their own course. Student participants were asked to complete survey questions relating to remote assessment environments, experiences with remote proctoring services, concerns about remote proctoring services, and demographic information. The survey instructions did not provide definitions for any terms used within (e.g., “concerns”) as we wanted to remain open to students’ interpretations within the long answer responses. We provide student respondents’ demographic information in Additional file 1: Appendix B.

Data collection and analysis

After collecting survey responses, we cleaned the data. Specifically, unnecessary information, such as completion time and location information, was removed from the dataset, and identifying information was replaced with a unique number identifier for each participant. When there were duplicate responses, the more complete response was kept, and duplicates across courses were removed. When two responses from the same participant were similarly complete, we randomly selected one response to retain. Additionally, responses were removed if students were under 18 years old, did not consent to participating in the study, or only completed the identifying questions for course credit; this left a total of 375 responses (full data on the number of participants and the number of potential participants are in Additional file Table 1E). The number of responses to each individual question varies throughout, as participants could skip any portion of the survey, and these data are reported as (n = total number of respondents to the question, percent of respondents who gave the referenced response). One question asked participants to report the extent to which they were concerned about elements of remote proctoring services on a three-point Likert scale, and only respondents who addressed all ten concerns were included in our analysis of this question to capture more complete data, totaling 264 responses (Additional file 1: Appendix C).
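A minimal sketch of this cleaning pipeline, written in Python with pandas, is shown below. The column names ("email", "age", "consent") and the Qualtrics metadata fields are hypothetical placeholders, as the actual export format is not described in the text.

```python
import pandas as pd

# Hypothetical column names; a real Qualtrics export uses different fields.
df = pd.read_csv("survey_export.csv")

# Remove unnecessary fields and replace identifying information with a
# unique numeric identifier per participant.
df = df.drop(columns=["Duration (in seconds)", "LocationLatitude", "LocationLongitude"])
df["participant_id"] = pd.factorize(df["email"])[0]
df = df.drop(columns=["email"])

# For duplicate responses, keep the most complete one; shuffling first
# means ties between similarly complete responses are broken randomly.
df["n_answered"] = df.notna().sum(axis=1)
df = (df.sample(frac=1, random_state=1)
        .sort_values("n_answered", ascending=False, kind="mergesort")
        .drop_duplicates(subset="participant_id", keep="first"))

# Exclusion criteria: under 18, non-consenting, or no substantive answers.
substantive = [c for c in df.columns
               if c not in {"participant_id", "age", "consent", "n_answered"}]
df = df[(df["age"] >= 18)
        & (df["consent"] == "Yes")  # placeholder; actual option wording differs
        & df[substantive].notna().any(axis=1)]
```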

For qualitative analyses of open-ended questions, every response was kept regardless of the completion of the rest of the survey due to limited participation. Two coders (AKL and AKP) reviewed all responses using open coding techniques (Gibbs 2018; Hemmler et al. 2020; Saldaña 2021; Stemler 2004) for recurrent ideas, then created a codebook to capture process codes for each item. The coders then independently coded each response and adjusted the codebook as needed in an iterative process. Finally, a consensus was reached for each response. After consensus, codes with fewer than three responses were removed from the codebook and added to the “other” category to retain a moderate number of the most useful codes. This resulted in removing one code from the assessment environment codebook and three codes from the top concern codebook. The final assessment environment codebook contained 19 codes, while the final top concern codebook contained 25 codes. All codes, definitions, and examples can be found in Additional file 1: Appendix D. Quotes in these tables and in the manuscript have been lightly edited for grammar and clarity and have been selected to represent the range of responses while protecting participant identities.
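As a small illustration of the codebook-trimming step described above, the sketch below collapses codes applied to fewer than three responses into an "other" category; the code labels themselves are invented for the example.

```python
import pandas as pd

# Invented consensus codes for an open-ended item; the real codes differ.
codes = pd.Series(["wrongful_flag"] * 4 + ["tech_difficulty"] * 3
                  + ["emotional_distress"] * 2 + ["privacy"])

# Collapse codes with fewer than three responses into "other",
# mirroring the trimming step described above.
counts = codes.value_counts()
rare = counts[counts < 3].index
trimmed = codes.where(~codes.isin(rare), "other")
print(trimmed.value_counts())
# wrongful_flag      4
# tech_difficulty    3
# other              3
```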

We employed descriptive statistics to capture trends in student concerns, experiences, and environments. We constructed figures and performed two-sample t-tests, assuming unequal variance, in JMP 15.2 (SAS Institute, Cary, North Carolina, 2021) to analyze both student and instructor participants’ estimations of cheating.
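For readers working outside JMP, an equivalent unequal-variance (Welch's) t-test can be run in Python with SciPy, as in the hedged sketch below; the arrays are placeholders for the per-respondent cheating estimates, which are not publicly available.

```python
from scipy import stats

# Placeholder values standing in for respondents' estimates of the percent
# of classmates cheating; the real data are available only on request.
remote_estimates = [30, 10, 25, 40, 15, 20, 5, 35]    # remotely proctored exams
in_person_estimates = [10, 5, 15, 20, 0, 10, 5, 25]   # in-person exams

# Two-sample t-test assuming unequal variances (Welch's t-test),
# matching the analysis performed in JMP.
result = stats.ttest_ind(remote_estimates, in_person_estimates, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```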

Results

To better understand undergraduate science students’ experiences with remote proctoring services, we collected information on (1) the characteristics of their remote instruction and assessment environments, (2) the most concerning aspects of using remote proctoring services, and (3) how instructors and students vary in their estimation of students cheating during in-person or remotely proctored exams.

Characteristics of assessment environments

We asked participants to report the number of other individuals in their environments when completing coursework or assessments. Participants were able to select a number ranging from 0 to 5+. Most participants reported living with at least one other individual, with the most common range encompassing one to three other individuals in the same space (Fig. 2).

Fig. 2 Reported number of individuals, other than the student, who occupy the same household. Participants (n = 340) were asked to report the number of other individuals in their household. Most respondents indicated that they lived with 3 or fewer other individuals

We also asked participants to elaborate on their remote assessment environments in an open-ended survey question. We prompted respondents to include details such as distractions, the quality of technology used to complete assessments, and any relevant information about the physical space itself. The majority of participants reported testing in suboptimal conditions (n = 305, 53.4%). Few participants reported having sufficient environments (n = 305, 16.7%), and roughly a third did not characterize the quality of their assessment environment (n = 305, 29.8%) (Additional file 1: Appendix D). The most common suboptimal conditions included noise and distractions from other individuals in the space (n = 305, 44.3%) and poor internet quality (n = 305, 21.3%) (Additional file 1: Appendix D). For example, one respondent reported, “My internet is good but there are often distractions throughout the entirety of my exam. This includes: my dog, family members talking, trucks and cars passing by, children screaming outside (playing),” while another stated, “Our internet quality is low and having four roommates means that there is always noise.” Additional summary data about the codes and representative quotes about respondents’ remote assessment environments can be found in Additional file 1: Appendix D.

Concerns related to remotely proctored exams

We asked participants to sort ten concerns about remote proctoring services on a three-point Likert scale with the categories “Not concerned,” “Somewhat concerned,” and “Very concerned.” Participants placed each concern in a box labeled by level of concern within the survey, but participants did not have to sort each item. Of those who sorted all ten concerns, students reported being “Very concerned” about being wrongfully flagged for cheating by the proctoring software (n = 264, 68.3%), followed by having a technological difficulty (n = 264, 58.0%) and being wrongfully flagged for cheating by the professor (n = 264, 53.1%) (Fig. 3). The reported concerns for all 375 responses, including those that did not sort all concern elements, can be found in Additional file 1: Appendix C. While respondents were able to write in additional concerns to include on the scale, none chose to do so.

Fig. 3 Top concerns held by students who sorted all concern elements. Only respondents who placed all ten concerns into a concern level were included in these data. Concerns are ordered by the percent of respondents at the “Very Concerned” level. Labels have been abbreviated, and full labels can be found in Additional file 1: Appendix C

After sorting the concern items, we asked participants to identify their top concern related to remote proctoring services and describe what aspects made it most concerning. The most common responses were the possibility of being wrongfully accused of cheating (n = 285, 74.1%) and encountering technological difficulties (n = 285, 31.3%) (Additional file 1: Appendix D). Participants also reported dealing with emotional distress when using proctoring software (n = 285, 21.4%), commonly citing increased feelings of stress, anxiety, and general worry when completing remotely proctored exams. Each of these ideas is described below.

Academic integrity

Themes about cheating were prevalent throughout our data, and being wrongfully flagged for cheating was the most commonly coded item when respondents elaborated on their top concerns (Additional file 1: Appendix D). We asked students and instructors to estimate the percent of students cheating on exams, both proctored in-person and remotely, to explore the perception of cheating and the influence of proctoring services on these behaviors. Averaging across responses, student respondents (n = 342) estimated that 21.2% of their peers were cheating during remotely proctored exams, while respondents (n = 323) estimated that 11.6% were cheating during in-person exams. The difference in means was statistically significant according to a two-sample t-test assuming unequal variance, with the distribution of means tested for normality (t587 = 7.54, p < 0.001) (Fig. 4). Additional information about students’ perceptions of cheating by institution can be found in Additional file 1: Appendix E. Compared to student respondents, instructors estimated lower percentages of students cheating in both proctoring environments. On average, instructors estimated that 12.2% of students were cheating during remotely proctored exams compared to 5.6% during in-person exams, but this difference was not statistically significant, likely due to the small sample size (t18 = 1.47, p = 0.175) (Additional file 1: Appendix E).
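For reference, an unequal-variance comparison uses the standard Welch statistic and the Welch–Satterthwaite degrees of freedom; these formulas are supplied here for clarity and are not part of the original analysis description. In our notation, subscripts r and p denote the remotely proctored (n_r = 342) and in-person (n_p = 323) samples:

```latex
t = \frac{\bar{x}_r - \bar{x}_p}{\sqrt{\frac{s_r^2}{n_r} + \frac{s_p^2}{n_p}}},
\qquad
\nu \approx \frac{\left(\frac{s_r^2}{n_r} + \frac{s_p^2}{n_p}\right)^2}
{\frac{(s_r^2 / n_r)^2}{n_r - 1} + \frac{(s_p^2 / n_p)^2}{n_p - 1}}
```

The Welch–Satterthwaite ν is generally fractional and smaller than the pooled n_r + n_p − 2 = 663, which is consistent with the rounded df of 587 reported above.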

Fig. 4 Students’ perceptions of the percent of classmates cheating by proctoring type. Boxplots showing respondent estimations of the percent of classmates cheating during remotely proctored and in-person exams. The boxplot shows the median value within the interquartile range of responses as well as the mean estimated percent. The dots represent outlier estimates that were greater than three standard deviations from the mean. Participants (n = 342) estimated that 11.6% of the class was cheating during in-person exams and 21.2% of the class was cheating during remotely proctored exams. The difference in means was statistically significant (t587 = 7.41, p < 0.001)

When asked to elaborate on their top concern about remote proctoring services, participants frequently mentioned cheating, such as concerns about being wrongfully accused of cheating or their classmates cheating. The fear of being wrongfully accused of cheating was closely tied to a perceived lack of control over the remote assessment environment and/or professors’ decisions about cheating, as exemplified by this response, “Since I take most of my exams in my dorm, I'm constantly worried that the proctoring software will flag my test for cheating because of someone screaming, or talking in the hallway, or coming into view of the camera. My testing environment is so out of my control, that at any moment I think I'll be kicked out of the exam for supposed cheating.”

Concerns about being wrongfully flagged for cheating ranged from short-term impacts, such as being locked out of the exam, to long-term, wide-reaching consequences. For example, one participant responded, “My biggest fear all year has been being falsely accused of cheating either by the proctoring software or a professor. Cheating is a serious accusation that can cause serious long-term effects and essentially ruin a student's entire future.” In addition to these concerns, a small number of participants (n = 8) indicated that they were concerned or frustrated by classmates potentially cheating, and these concerns were largely focused on the possible impact on the course curve or individual grades. One respondent replied, “I hear [from] friends, peers, and social media that most people do use their phones and other things to cheat on exams, and it really worries me about not doing as well.” Furthermore, another respondent questioned the efficacy of remote proctoring services, concisely summarizing, “If a student is going to cheat, neither in person nor online proctored tests will stop them.”

While concerns about being wrongfully flagged for cheating were common among respondents of all demographic groups, students who self-identified as having a disability, medical, and/or mental health concern (n = 22) frequently connected their experiences with heightened concerns about remote proctoring services. (Note that respondents provided this information unprompted, and therefore this may be a significant underestimate of the students in our sample who self-identify in this way.) These concerns were linked to behaviors, like body movements or wandering gazes, that may be seen as dishonest during proctored exams. One respondent stated, “As someone with ADHD, I often wander my gaze and just look around at random things in order to think, and I am always worried that I will get flagged for looking around.” Respondents with disabilities reported feeling that the concern about being accused of cheating negatively impacted their ability to perform on exams, with one participant emphasizing, “I have been diagnosed with anxiety, and this type of testing makes it so much harder for me to succeed in my classes. A lot of the time during these timed tests, my mind is not able to focus on the exam itself because [of potential distractions]. All of these kinds of thoughts overtake my ability to complete the exam to the best of my ability.”

Encountering technological difficulties

Our results showed that technological difficulties were a common concern, aligning with previous research surrounding online learning and remote proctoring services (Beaudoin et al. 2009; Kauffman 2015; Milone et al. 2017; Rasheed et al. 2020). As such, we asked participants to report their previous experiences with technology and remotely proctored exams. The most common difficulties included an internet connection that was too slow (n = 259, 37%) and proctoring software that was unable to detect the student’s face (n = 259, 21%) (Fig. 5). Respondents commonly reported concerns about technological difficulties when using remote proctoring services that were based on negative past experiences (Fig. 3). One respondent stated, “I have a pretty old laptop that [the proctoring software] sometimes does not like to deal with. My biggest concern is having some sort of technical issue with my laptop or the [proctoring] software that causes me to have problems with my exam.” Another participant simply shared, “Internet quality is unreliable and out of my control.”

Fig. 5 Reported technological difficulties when using remote proctoring services. Participants were asked to select all technical difficulties that they had experienced while using remote proctoring. Participants were able to select an option for none of the above, but all (n = 259) reported some kind of technical challenge

Emotional distress

An emergent theme in our data was reported emotional distress when using remote proctoring services (n = 285, 21.4%; Additional file 1: Appendix D), though the degree of distress and its impact varied among respondents. We defined the theme of emotional distress as students reporting negative impacts on mental health, either long-term or in the moment. Some students identified new concerns, and others mentioned the compounding effects of existing mental health disorders and testing. For example, one participant referenced their existing mental health and the negative impact of proctoring, stating, “I need to fidget and move when doing anything; asking me not to is just making my testing anxiety worse.” Some participants indicated generalized concerns about being remotely proctored, while others more acutely expressed their emotional distress while using proctoring services. One student said, “I am constantly scared while taking proctored exams on [the proctoring software] that I am going to be wrongfully accused and then have my grade in that class, my GPA, or my career affected by something a software program misidentified.” This response also exemplifies the wide-reaching, perceived impacts of being wrongfully flagged for cheating. While the source, intensity, and duration of emotional distress varied in responses, the theme of negative impacts on mental health spurred by proctoring services was present throughout survey responses.

Emotional distress was especially common among students who identified as having a disability, medical, and/or mental health concern. Of the 22 students who mentioned their disability, 15 also emphasized the negative effects on mental health of using remote proctoring services during assessments. One respondent stated, “As someone with mental health problems, sometimes being forced to focus on looking like you aren't cheating, when you aren't, is super overwhelming.” As before, emotional distress was closely linked to the fear of being wrongfully flagged for cheating, increasing feelings of stress and worry. Students with disabilities indicated that the remote proctoring services directly impacted their ability to effectively complete assessments in remote environments. One participant stated, “[Thinking through every action and movement] makes my anxiety act up and impairs my ability to reason/think through my test. In fact, I usually finish [proctored] tests as early as possible in order to experience this for as little time as possible.”

Instructor role in remote proctoring perceptions

Finally, students were asked to indicate if their instructors explained how and why remote proctoring services were used in the course and to elaborate on the impact of these explanations on their experiences. Most respondents indicated that their instructors did explain both how and why a proctoring service was used in the course (Table 1). Of those who indicated that their instructor explained how the software was used, the majority reported that the explanation had no impact on how they felt about using remote proctoring in the course. Even with instructor explanations, students reported feeling uncertain about the software itself, a concern highlighted in the quote, “I don't know how the software works to flag things as cheating.”

Table 1 Participants’ perceptions of instructors’ explanations of why and how proctoring services are used

In a reflection survey distributed after instructors viewed their own course’s results, many instructors indicated their intent to better explain the reasoning behind using remote proctoring services. Of the six instructors who responded to the reflection survey, four stated that they would reflect on how they communicate their reasoning to students and dedicate more time to explaining this rationale to students. For example, one instructor stated, “I think a major change I will make, given these results, is to think about how I explain the purpose of [a proctoring software] to students, as well as how I will use it to ensure the exam is fairly administered.” Instructors indicated that they would use these strategies to assuage student concerns, especially related to being wrongfully flagged for cheating.

Discussion

We discovered that many undergraduate science students complete coursework and assessments in suboptimal environments and hold legitimate concerns about remote proctoring services. Together, these conditions can create negative course experiences and worsen mental health. Suboptimal assessment environments were characterized by noise, distractions, and inconsistent internet quality (Figs. 2 and 5). Respondents also identified being wrongfully accused of cheating and experiencing technological difficulties as the most concerning aspects of using remote proctoring services (Fig. 3). Finally, participants noted negative emotional impacts, such as increased stress and test anxiety, when completing assessments in this format. These results indicate that remote proctoring services may impact student course experiences and mental health, potentially exacerbating inequalities in assessment scores and experiences between students (Fig. 6). Here, we contextualize the main themes that emerged from our findings with existing literature and make recommendations for instructors and future research in this area.

Fig. 6 Modified conceptual framework of the interactions of online learning with course experiences based on research findings. Based on the original conceptual framework, our results indicate that remote proctoring services do impact course experiences. From our results, we were able to expand on the conceptual framework driving our research. We concluded that students reported negative impacts on mental health and that remote proctoring services may exacerbate existing assessment inequalities

Variable environments perpetuate unequal educational experiences

The majority of respondents reported testing in suboptimal environments characterized by high levels of noise, distractions, and lack of quality internet access (Fig. 2, Additional file 1: Appendix D). Participants also identified these themes as concerns for using remote proctoring services, specifically mentioning uncontrollable elements of their environments such as external distractions and noise (Fig. 3, Additional file 1: Appendix D). These results indicate that students are testing in unique remote environments, each with its own characteristics and challenges, and that the impacts of these testing environments are inextricably linked to proctoring software. The impact of remote proctoring services appears to be mediated by students’ remote assessment environments, and instructors should thoughtfully consider the variability among environments and how they impact students when choosing to use remote proctoring services.

Unequal learning and assessment environments may be contributing to disparities in educational experiences, impacting students’ learning gains and course grades (Hollister & Berenson 2009; May & Elder 2018). While environmental distractions may impact students differently even during in-person assessments, in-person scenarios at least place all students in the same environment. For example, student performance on an exam taken in a common lecture hall will not be subject to disparate impacts of internet access or noisy housemates. This benefit only exists, however, if individual students can access accommodations, such as extended time (Gregg 2012).

The quality of remote assessment environments depends on elements outside of students’ control, and currently, there are few effective strategies to ensure equitable conditions for all students. Because our results show that students are experiencing unequal, suboptimal assessment environments, we urge instructors to reconsider the use of high-stakes testing in their remote learning courses. High-stakes testing already disadvantages specific demographic groups, such as women and underserved students, due to increased test anxiety and stereotype threat (Ballen et al. 2017; Cotner & Ballen 2017). Therefore, implementing more formative or mixed-assessment methods would both reduce assessment performance disparities and potentially eliminate the need for remote proctoring services altogether. By relying on other assessment strategies (e.g., group participation, low-stakes quizzes and homework assignments, formative assessments) to evaluate student learning, instructors would not need to rely on high-stakes testing as a measure of student learning outcomes. Decreased reliance on high-stakes testing and remote proctoring services would also remove concerns about remote testing environments. To reduce the burden of variable testing environments, instructors could guide their students towards resources, such as private study rooms or dedicated remote testing locations, that would mitigate concerns about the environment. Students may not be aware of these resources, and instructors play a key role in informing their students about potential alternatives.

The culture and conversations of cheating

Respondents identified being wrongfully flagged for cheating as the most concerning element of remote proctoring and were particularly concerned about uncontrollable aspects of the environment, such as movements or loud noises (Additional file 1: Appendix D). This result aligns with findings from Chaudhry et al. (2022), who interviewed 14 Canadian students and likewise reported that the fear of being mis-flagged for cheating increased students’ stress during exams. Simultaneously, participants believed that their peers were more likely to be cheating during remotely proctored exams (Fig. 4). It is important to note that this finding is based on students’ perceptions rather than actual differences in the rate of cheating and that students were unable to directly witness their peers’ cheating due to the remote nature of the exams.

These results imply that students do not believe that remote proctoring services dissuade cheating behaviors. When combined with proctoring services’ inconclusive ability to detect cheating (Barrett 2021; Bergmans et al. 2021), remote proctoring services may not be an effective strategy to maintain academic integrity and may even contribute to unequal course experiences. To date, research has failed to show that cheating occurs more in remote environments, regardless of remote proctoring usage (Fask et al. 2014; Hollister & Berenson 2009; Stuber-McEwen et al. 2009; Watson & Sottile 2010). Fask et al. (2014) and Hollister & Berenson (2009) used statistical modeling to identify cheating in remote environments, and both groups were unable to attribute exam score variation to cheating. Given the inability to confirm increased cheating in remote environments, it is important to weigh the negative impacts on students’ mental health and experiences against the unlikely scenario of consistently identifiable academic misconduct.

The frequent theme of cheating raises important questions about the social and institutional factors that influence student behaviors. While the literature about academic dishonesty largely focuses on individual traits, research has shown that social factors, like peer achievement and pressure for academic success, influence the desire to engage in cheating behavior (Krou et al. 2021; Wilkinson 2009). Institutional and academic structures are also frequently cited as pressures to cheat, such as the need to maintain certain grades to receive scholarships (Krou et al. 2021; Passow et al. 2006). The abundance of external pressures likely contributes to students’ concerns about cheating (e.g., Genereux & McLeod 1995; Butler et al. 2022), and we advocate for discussion centering on the external culture that influences cheating behavior before implementing remote proctoring services. From an instructor’s perspective, creating a culture where cheating is discouraged through the use of low-stakes test formats, alternative forms of assessment, peer accountability, and lessened pressure for academic success could eliminate the need for third-party management of academic integrity. Additionally, clear communication with students could mitigate these concerns, such as emphasizing that instructors, not software, ultimately make decisions about academic misconduct. Addressing cheating and its related pressures is a difficult discussion and task, but it is necessary to review assessment strategies to successfully educate students in online environments.

Implications for student mental health

In addition to concerns about being wrongfully flagged for cheating, many students referenced negative emotional experiences while using remote proctoring services (Additional file 1: Appendix B). These experiences ranged from acute feelings of stress and anxiety to compounding effects with previous mental health conditions. Students frequently reported increased levels of worry, stress, and test anxiety when using remote proctoring services (Additional file 1: Appendix B). Given the alarming rates of worsened student mental health, especially for women and students of color (Lee et al. 2021; Ketchen Lipson et al. 2015; Scherer & Leshner 2021), remote proctoring services may unnecessarily heighten emotional distress (Chaudhry et al. 2022; Eaton & Turner 2020). Additionally, previous research indicates that test anxiety disproportionately affects students with disabilities, women, and students of color (Ballen et al. 2017; Cotner et al. 2020; Salehi et al. 2019; Salehi et al. 2021; Thames et al. 2015; von der Embse et al. 2018; Woods et al. 2010; Whitaker Sena et al. 2007). General student mental health is also closely tied to test anxiety in higher education. Test anxiety is described as negative emotional or physiological responses to evaluative situations and has been shown to negatively impact academic performance and general intent to pursue a major (Barrows et al. 2013; Cassady & Johnson 2002; England et al. 2017; Kolski & Weible 2018; Sommer & Arendasy 2014). However, test anxiety does not fully encompass the negative mental health impacts participants reported experiencing while using remote proctoring services (Chaudhry et al. 2022). Some student participants perceived this heightened stress as a significant factor in their assessment performance, which may be further evidence that these services are a barrier to equal educational experiences and sufficiently representative course grades for many students.

Other considerations and future directions

There are several considerations to be mindful of when reviewing these data and results. First, a limited number of institutions were surveyed, and all institutions were in the United States. As such, findings are limited to this context. All three universities are classified as having high to very high research activity (Carnegie Foundation for the Advancement of Teaching 2021). Therefore, our sample could not capture differences in students’ experiences by institution type, such as at community colleges and other teaching-focused institutions. Future research should aim to capture experiences outside of the scope of research-intensive institutions (Thompson et al. 2020). Second, the institutions surveyed in this study also have predominantly white student populations, and our sample did not reveal conclusive differences by demographic groups. However, the absence of these trends does not mean they do not exist, and future research should aim to include diverse student populations to explore the interplay between identity and experience. Third, because we were primarily interested in students’ perceptions and experiences, we defined very few terms for the participants. As a result, participants may have interpreted words such as “concern” differently. Finally, the role of the pandemic should be considered when assessing students’ concerns. The COVID-19 pandemic exacerbated the student mental health crisis and increased student stress levels (Correia et al. 2022; Lee et al. 2021), and student concerns about remote proctoring software and their remote assessment environments may have been amplified by the broader stressful circumstances of the world.

Additionally, our survey did not systematically collect demographic data about disability aside from scenarios where students voluntarily self-identified in their responses. Interactions between disabilities and distance learning could greatly impact students’ experiences and their specific concerns about remote proctoring software. While literature about test anxiety and online proctoring exists (Cassady & Johnson 2002; Barrows et al. 2013; Kolski & Weible 2018; Stowell & Bennett 2010), less is known about the intersection of test anxiety, remote proctoring anxiety, and learning disabilities. Students with learning disabilities, such as attention deficit hyperactivity disorder, already report higher rates of test anxiety in traditional evaluative settings (von der Embse et al. 2018; Woods et al. 2010; Whitaker Sena et al. 2007) and face discrimination during in-person science instruction (Gin et al. 2022; Hutcheon & Wolbring 2012; Lee 2011), so exploring this relationship may reveal inequities in students’ environments and experiences perpetuated by the use of remote proctoring services.

Another area for future investigation centers around the role of instructors when discussing remote proctoring services and remote learning. While most students reported that their instructors explained how and why they use remote proctoring services, students did not report increased positive feelings towards these services, with most reporting no impact on their opinion and some reporting feeling worse (Table 1). If instructors feel that remote proctoring is necessary to maintain academic integrity, identifying the best ways to introduce and discuss remote proctoring could mitigate students’ concerns and better equip instructors to address students’ experiences. Instructors may be explaining their rationale to students, but standardizing and assessing the content of these explanations and how they are delivered could modify students' attitudes towards remote proctoring. An additional hypothesis worth exploring is whether instructors’ explanations themselves lead students to believe that more cheating is occurring: by discussing cheating and remote proctoring, are instructors inadvertently signaling that cheating is widespread?

Finally, understanding student perceptions of grading and cheating in their courses could provide additional context to our results. Of the students who reported being concerned about their classmates cheating, some cited concerns about the impact of others on their own performance, grade, and class distribution. Characterizing students’ perceptions of curving, grade distributions, and the aftermath of breaching academic integrity could provide context to why students equate remote proctoring services with concerns of being wrongfully accused of cheating. While it is difficult to design studies to assess actual cheating, we could explore cheating from an instructor perspective, asking how often and under what conditions proctoring services actually aid instructors in identifying cheating situations.

Conclusions

Our research aimed to characterize the remote assessment environments of undergraduate science students and to understand their concerns and perceptions about remote proctoring services in their courses. We discovered that the majority of students report testing in suboptimal environments, and that many hold concerns about experiencing technological difficulties and being wrongfully flagged for cheating when using remote proctoring services. While we collected data in the spring of 2021 during the COVID-19 pandemic, we believe that these data are relevant to all online learning environments. For instructors and institutions considering remote proctoring services as a means to maintain academic integrity, we caution that these services may negatively impact course experiences and student mental health and may contribute to unequal educational experiences. While it is difficult to address the social and institutional pressures that contribute to cheating behaviors, it is necessary to reevaluate undergraduate assessment strategies to educate students in an evolving online world.

Availability of data and materials

To protect identities of participants and faculty, data will not be made publicly available. However, limited access may be granted via request to the corresponding author.

Abbreviations

STEM:

Science, Technology, Engineering, and Math

EDU-STEM:

Equity and Diversity in Undergraduate STEM

COVID-19:

Coronavirus disease 2019


Acknowledgements

We thank our participants, both students and instructors, for their time and perspectives as well as the EDU-STEM Network for funding support and community.

Funding

This work was supported by the EDU-STEM Network (NSF Award #1919462). Funding supported publication fees and undergraduate student support.

Author information


Contributions

AKL and AP led data collection, analysis, and writing of the manuscript. All authors contributed to project design, including development of research questions and surveys, and to manuscript editing. CJB, EPD, JAH, BG and CW also contributed to data collection and management.

Corresponding author

Correspondence to A. Kelly Lane.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Appendix A. Survey Content. Appendix B. Supplemental Figures & Tables. Appendix C. Additional Data: Remote Proctoring Concerns. Appendix D. Codebooks. Appendix E. Perceptions of Cheating by Institution.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Pokorny, A., Ballen, C.J., Drake, A.G. et al. “Out of my control”: science undergraduates report mental health concerns and inconsistent conditions when using remote proctoring software. Int J Educ Integr 19, 22 (2023). https://doi.org/10.1007/s40979-023-00141-4
