
Addressing cheating in e-assessment using student authentication and authorship checking systems: teachers’ perspectives

Abstract

Student authentication and authorship checking systems are intended to help teachers address cheating and plagiarism. This study set out to investigate higher education teachers’ perceptions of the prevalence and types of cheating in their courses with a focus on the possible changes that might come about as a result of an increased use of e-assessment, ways of addressing cheating, and how the use of student authentication and authorship checking systems might impact on assessment practice. This study was carried out within the context of the project TeSLA (an Adaptive Trust-based e-assessment System for Learning) which is developing a system intended for integration within an institution’s Virtual Learning Environment (VLE) offering a variety of instruments to assure student authentication and authorship checking. Data was collected at two universities that were trialling the TeSLA system, one in Turkey, where the main modes of teaching are face-to-face teaching and distance education, and one in Bulgaria, where the main modes of teaching are face-to-face teaching and blended learning. The study used questionnaires and interviews, building on existing TeSLA project evaluation activities and extending these to explore the specific areas we wished to examine in more depth.

In three of the four contexts cheating was seen by teachers as a serious and growing problem; the exception was the distance education context, where the teachers believed that the existing procedures were effective in controlling cheating. Most teachers in all four contexts expected cheating to become a greater problem with increased use of e-assessment. Student authentication was not seen as a major problem in any of the contexts, as this was felt to be well controlled through face-to-face proctored assessments, though the problem of assuring effective authentication was seen by many teachers as a barrier to increased use of e-assessment. Authorship checking was seen as a major issue in all contexts, as copying and pasting from the web, ghost writing and plagiarism were all reported as widely prevalent, and authorship checking was seen as becoming even more important with increased use of e-assessment. Teachers identified a third category of cheating behaviours, which was the accessing of information from other students, from written materials, and from the internet during assessments.

Teachers identified a number of approaches to addressing the problem of cheating: education, technology, assessment design, sanctions, policy, and surveillance. Whilst technology was not seen as the most important approach to prevention, student authentication and authorship checking systems were seen as relevant in terms of reducing reliance on face-to-face proctored examinations, and in improving the quality of assessment through supporting the employment of a wider range of assessment methods. The development of authorship checking based on computational linguistic approaches was an area of particular interest. Student authentication and authorship checking systems were not seen as being able to address the third category of cheating behaviours that the study identified.

Introduction

Whilst cheating and plagiarism in education are not new phenomena, technology is widely seen as having facilitated their increase; however, as part of a multi-faceted approach to addressing the issue, technology may also have an important role in preventing cheating and plagiarism. The use of student authentication and authorship checking systems is one possible technological approach, and is being explored by the TeSLA project (an Adaptive Trust-based e-assessment System for Learning). This study was carried out within the context of the TeSLA project, though it is not an evaluation study of the effectiveness of the particular set of instruments in that project, but rather an exploration of the basic rationale of the project, and the likely value of this and other similar approaches.

Data was collected at two universities that were trialling the TeSLA system, one in Turkey, where the main modes of teaching are face-to-face teaching and distance education (with elements of online support), and one in Bulgaria where the main modes of teaching are face-to-face teaching and blended learning (with a significant element of e-learning), thus providing evidence from four quite different teaching and learning contexts.

To clarify terminology, we define face-to-face learning as the form of learning where the instruction and course activities take place in a classroom. Following Owusu-Boampong and Holmberg (2015) we will use the term ‘distance education’ as a generic term for different organizational forms of education in which students and teachers are separated in time and place. Following Gaebel et al. (2014) we define online learning as a form of educational delivery in which learning takes place primarily via the Internet; and blended learning as a pedagogical model combining face-to-face classroom teaching and the innovative use of ICT, blending online and face-to-face delivery.

This paper first briefly highlights some aspects of the literature on the prevalence and prevention of cheating and plagiarism, and provides a description of the TeSLA system. It then sets out the aims, methods, and findings of the study, looking at the perceptions of higher education teachers (and, to a lesser extent, those of their students) on the prevalence and types of cheating, their reflections on possible changes in the frequency and nature of cheating and plagiarism that might come about as a result of an increased use of e-assessment, their views on ways of addressing perceived problems, and their thoughts on the ways in which the use of student authentication and authorship checking systems might impact on assessment practice. The conclusion section discusses the positive role that student authentication and authorship checking systems may have to play in both online and face-to-face assessment as well as the perceived limitations of such systems.

Literature review

Cheating and plagiarism in assessment in higher education are not recent phenomena. Oral examinations prevailed in European universities for many years, with the shift from oral to written examinations taking place during the 18th and 19th centuries (Stray 2001). In oral examinations the most common means of cheating was to have someone else take the exam in place of the candidate. Before photography and identity cards this would have been harder to detect than today, though there is a record from 1819 of a candidate prosecuted for impersonation in an examination for medicine (Barrett 1905 p.187). With the widespread use of written examinations, new forms of cheating became possible, and by the middle of the nineteenth century, fraternity houses in US universities were keeping ‘fraternity files’, that is, collections of submitted work, made available to be re-submitted by students, a practice that led to ‘essay mills’ in the 1970s (Duke Law Journal 1973; Stavisky 1973) and which, in turn, clearly has echoes in the more recent phenomenon of ‘contract cheating’ (Lancaster and Clarke 2016). Bowers’ work in the 1960s (Bowers 1964) inaugurated the academic study of cheating in higher education. In more recent years, McCabe, building on Bowers’ work, carried out a longitudinal study of cheating in US universities from 2002 to 2013 (McCabe 2016) which documented significant levels of cheating and plagiarism.

This present study is based on data from universities in Bulgaria and Turkey. In Bulgaria, there have been a number of studies of plagiarism: one study based on student surveys in four EU countries (Pupovac et al. 2008) found Bulgarian students somewhat more likely to plagiarise than UK students, and less likely to believe that they would get caught; another study as part of a comparative study of plagiarism policies across EU countries (Glendinning 2013; Glendinning 2014) found some academic interest in plagiarism in Bulgaria, but noted a lack of statistics and of national or institutional guidelines. In recent years there has been further work in Bulgaria in relation to plagiarism, and a start in developing effective approaches to addressing it e.g. Арсенова (2015). In Turkey, a study of understanding of plagiarism amongst research assistants found that they lacked knowledge about some aspects of plagiarism, and also experienced problems related to plagiarism arising from their use of foreign language texts (Eret and Gokmenoglu 2010). Two instruments for measuring academic dishonesty have been developed in Turkey: the Academic Dishonesty Tendency Scale (ADTS) (Eminoğlu and Nartgün 2009); and the Internet-Triggered Academic Dishonesty Scale (ITADS) (Akbulut et al. 2008). Keçeci et al. (2011) made use of the ADTS scale with nursing students, finding a ‘medium level’ of academic dishonesty. There are also a number of studies of staff and student perceptions of cheating in Turkey (Küçüktepe 2014; Yazici et al. 2011) and a cross-cultural study using the theory of planned behaviour (Chudzicka-Czupała et al. 2016) which suggested that ‘subjective norms’ may play a greater role in predicting intention to cheat in Turkey (and Poland and the Ukraine) than in Switzerland, USA and New Zealand.

In recent years, journalists have reported widespread concern about increasing levels of academic cheating, placing much of the blame on technology and the use of the internet, for example: Pérez-Peña (2012), Mostrous and Kenber (2016), and Marsh (2017). However, there may well be a degree of ‘moral panic’ in these reports, and researchers have cautioned against exaggerated concerns. Calling on evidence from his longitudinal study, McCabe concluded that “From the data collected in the period from 2002 to 2013, a general pattern emerges of a decrease in cheating between the Bowers study in 1963 and the data collected in 2012/2013” (McCabe 2016 p. 191). Further, in a review of studies of plagiarism, Davies and Howard concluded that “Despite widespread fears about the Internet as a cause of or contributor to plagiarism, no empirical research demonstrates that relationship. These fears that the Internet has facilitated and accelerated the number of cases of student plagiarism are incorrect” (Davies and Howard 2016 p. 591).

Of particular interest for this paper is whether the use of e-assessment might facilitate cheating. Kennedy et al. (2000) found that teachers and students believed that the use of e-assessment would mean that cheating would be easier and hence more common. However, Stuber-McEwen et al. (2009) found that students reported that they were less likely to cheat in online classes than in face-to-face classes, and Grijalva et al. (2006) also found no difference in self-reported cheating between the two contexts.

Also of interest for this paper is the role of proctoring in assessment. Proctored assessment refers to exams conducted in exam rooms which are invigilated by a person who monitors the students during the exam, or to exams which are proctored online by using technologies that allow exams to be taken securely in a remote location (Fask et al. 2014; Hollister and Berenson 2009; Draaijer et al. 2017). If an assessment is described as non-proctored, the implication is that the exams are conducted without any invigilation either by a person in the exam room or by online proctoring technologies. Clearly, both in the face-to-face context and in the online context there are many possible variations in the ways proctoring is carried out, and in how diligently the procedures are applied, and these variations will impact on the effectiveness of the proctoring. There have been a number of studies using a range of statistical and experimental techniques to look at the impact of non-proctored versus proctored environments on assessment scores, though the proctoring procedures used and how they were monitored varies from study to study. Harmon and Lambrinos (2008) found that more cheating took place in non-proctored assessments than proctored assessment; Hollister and Berenson (2009) found no evidence of cheating behaviour in the non-proctored group; Fask et al. (2014) concluded that the difference in the testing environment created a disadvantage for students taking an online exam which offset the advantage of the greater opportunities to cheat when the exam was non-proctored. These differences in results between studies suggest that there may be factors at play (e.g. assessment design) that are having a stronger impact on cheating behaviour than proctoring itself. This conclusion is lent support by a study by Watson and Sottile (2010) which found no evidence overall of greater cheating in online courses than in face-to-face courses, but did identify a significantly greater risk of some specific cheating behaviours, specifically online students obtaining answers from other students during an online test or quiz (as distinct from other forms of e-assessment).

In a wide-ranging review of the literature on why students cheat Brimble (2016) identified seven themes: (1) changing attitudes; (2) education, training, and learning; (3) curriculum design; (4) situational factors; (5) life of the modern student; (6) life of the modern academic; and (7) individual student characteristics. This highlights the complexity of the problem, and of the need to adopt a multi-faceted approach to tackling the issue. In addressing the development of contract cheating, the UK Quality Assurance Agency has recommended a multi-faceted approach including: Education - information and support for students and for staff; the use of ‘authentic assessment’ and a mixture of assessment methods; blocking essay mill websites; use of organisation-wide detection methods incorporating linguistic analysis tools to complement text-matching software; and development of appropriate regulations and policies (QAA 2017).

Amongst the approaches suggested in the QAA report (QAA 2017) the approach “use of organisation-wide detection methods incorporating linguistic analysis tools to complement text-matching software” most closely corresponds to the approach investigated in this paper. There is increasing discussion in the literature of the use of linguistic analysis tools to address cheating (Juola 2017; Sousa-Silva 2017) but as yet no evidence of their effective use at the institutional level. The use of text-matching software for plagiarism detection, which is now a well established practice, has been accompanied by a number of problems: the reports can be difficult to interpret; correctly referenced material may be marked as plagiarism; some plagiarised documents are not recognised as plagiarised; different systems report varying degrees of plagiarism for the same document, but the lack of transparency about the algorithms used means that these differences are difficult to interpret (Weber-Wulff 2016). It is reasonable to assume that similar problems are likely to arise in the use of linguistic analysis tools.

The study

This study is an exploration of the basic rationale of the use of student authentication and authorship checking systems. It is carried out within the context of the TeSLA project, but it does not seek to evaluate how effective the TeSLA system itself is in meeting its objectives; rather, it examines the rationale of the project - that is, that teachers are seriously concerned about cheating and plagiarism, that this concern is limiting the wider use of e-assessment and hence of online learning, and that the use of student authentication and authorship checking systems might help address these issues. We will examine these issues in two educational contexts in each of two of the seven institutions which have piloted the TeSLA system.

TeSLA

The TeSLA project is a Horizon 2020 project, involving a consortium of technological and educational institutions, together with experts in data privacy and in quality assurance (Noguera et al. 2016). TeSLA aims to address cheating in e-assessment in higher education through the use of a system for student authentication and authorship checking integrated within institutional Virtual Learning Environments (VLEs). As an example, in Moodle the TeSLA plug-in is directly integrated into the most used assessment activities such as assignment, forum and quiz. When assessment activities are being designed, the available TeSLA instruments are presented as a list and the teacher can select the most appropriate TeSLA instruments for that activity.

TeSLA uses a variety of instruments in order to assure student authentication, including Face Recognition, Voice Recognition and Keystroke Dynamics (for typing rhythm), and in order to check authorship, including Forensic Analysis (for writing style) and Plagiarism Detection. The TeSLA designers have chosen not to use invasive technologies (such as blocking the use of other software) because this might undermine the trust between students and their institution. TeSLA also does not use biometric techniques that require special devices (such as digital fingerprint readers or high-definition cameras for iris recognition) but rather relies on the use of commonly available devices (webcam, keyboard and microphone). TeSLA is a modular system, the various instruments can be switched on or off for use with different assessment activities and it is possible to use TeSLA in combination with other technologies aimed at addressing cheating and plagiarism.
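To illustrate this modular design, the sketch below (in Python) shows one way a teacher's per-activity selection of instruments might be represented. It is purely illustrative: the class, field and instrument names are our own assumptions and do not correspond to the actual TeSLA plug-in code or API.

# Illustrative sketch only: a hypothetical representation of per-activity
# instrument selection in a modular system such as TeSLA.
from dataclasses import dataclass, field
from typing import List

AVAILABLE_INSTRUMENTS = {
    "face_recognition",      # authentication via webcam
    "voice_recognition",     # authentication via microphone
    "keystroke_dynamics",    # authentication via typing rhythm
    "forensic_analysis",     # authorship checking via writing style
    "plagiarism_detection",  # authorship checking via text matching
}

@dataclass
class AssessmentActivity:
    """A VLE assessment activity (assignment, forum or quiz) together with
    the instruments the teacher has switched on for it."""
    name: str
    activity_type: str                      # "assignment" | "forum" | "quiz"
    instruments: List[str] = field(default_factory=list)

    def enable(self, instrument: str) -> None:
        """Switch an instrument on for this activity."""
        if instrument not in AVAILABLE_INSTRUMENTS:
            raise ValueError(f"Unknown instrument: {instrument}")
        if instrument not in self.instruments:
            self.instruments.append(instrument)

# An essay assignment where authorship checking matters most, and a quiz
# where webcam-based authentication is switched on instead.
essay = AssessmentActivity("Final essay", "assignment")
for name in ("keystroke_dynamics", "forensic_analysis", "plagiarism_detection"):
    essay.enable(name)

quiz = AssessmentActivity("Week 5 quiz", "quiz")
quiz.enable("face_recognition")

print(essay.instruments)  # ['keystroke_dynamics', 'forensic_analysis', 'plagiarism_detection']
print(quiz.instruments)   # ['face_recognition']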

TeSLA is intended to be used to support e-assessment for distance education and online courses, so that the amount of face-to-face assessment might be reduced or even eliminated altogether, but TeSLA can also be used to support the incorporation of e-assessment into face-to-face classes. Whilst perhaps the most obvious way of using TeSLA would be to identify individuals who may be cheating, and to take appropriate action, the system could be used in more flexible ways, for example, providing feedback to universities on levels of cheating that would enable them to adjust their forms of assessment either at an institutional or an individual level - for a discussion of adaptation at the individual level see Baneres et al. (2016).

The piloting of TeSLA is being carried out in three stages across seven institutions; some 5000 students participated in the second stage of the pilot. The data for this study was collected from two of the seven institutions taking part in the second stage.

Aims of the study

As explained above, the overall aim of this study is to explore the basic rationale for the use of student authentication and authorship checking systems, and within that overall aim, we set out to examine four specific issues:

  • How concerned are teachers about the issue of cheating and plagiarism in their courses?

  • What cheating and plagiarism have the teachers observed?

  • If e-assessment were introduced in their courses, what impact do the teachers think this might have on cheating and plagiarism?

  • How do teachers view the possible use of student authentication and authorship checking systems, and how well would such systems fit with their present and potential future assessment practices?

Data collection

This exploratory study is based on data collected from two of the TeSLA pilot universities, one based in Turkey (University A) and the other in Bulgaria (University B). In University A the main modes of teaching are face-to-face teaching and traditional distance education, and in University B the main modes of teaching are face-to-face teaching and blended learning. By collecting data across four contexts we hoped to gather a range of perspectives in order both to establish widely shared views and to identify any differences between the contexts. We built on the existing TeSLA project evaluation tools - questionnaires and interviews - extending them to explore the specific areas we wished to examine in more depth, and also involving an additional sample of teachers who had not been involved in the TeSLA pilot, but who were provided with an introduction to the TeSLA system.

Ethical approval for the studies was obtained from the Ethical Review Committees in the two universities. Questionnaires and interview schedules were developed in English and translated into Turkish and Bulgarian. Three groups of respondents were approached: administrators, teachers and students, though the main focus of this paper is on the teachers’ responses. One administrator from each of the universities was interviewed about the scale of cheating in their university, and issues that would be raised by an increased use of e-assessment. Teachers completed questionnaires asking about: the prevalence of cheating; the types of cheating observed; why they thought students cheated; and how cheating might be prevented. From the teachers who completed the questionnaires two teachers from each context were invited to take part in interviews which explored in more depth: how serious they felt the issue of cheating and plagiarism was in their context; how a move to greater use of e-assessment might impact on cheating; and how the availability of student authentication and authorship checking systems might impact on their assessment design. Students on courses selected for the TeSLA project were invited to sign a consent form allowing the collection of data, and to complete questions asking about: the prevalence of cheating; the types of cheating observed; and why they thought students cheated.

Details of the sample are given in Table 1.

Table 1 The sample

Data analysis

The quantitative data from the questionnaires will be presented descriptively, though some statistical analyses will be included in order to examine the distribution of responses across the contexts where this is seen as throwing light on issues arising from the data. The responses to the open questions in the questionnaires and the interviews were analysed thematically. In order to support the analysis and provide some consistency across the language contexts, initial categories for the analysis were developed from the literature and presented in English. These categories were translated into Turkish and Bulgarian and were used as a basis for the initial analysis. Additional categories were developed during the analysis and suggestions made for modifications of existing categories. The results were then compared across the two countries and modifications made to the categories for consistency and coverage. In terms of the topics explored in this paper, although a number of additional categories were developed during the analysis process, they were finally subsumed under the main categories, and as a result the final categories used for the analysis were substantially the same as those initially proposed. This convergence was possibly a result of the initial framing of the analysis; however, had we not adopted this approach, there was a danger that the categorisations in the two countries might have diverged too much, making comparison difficult.

Findings

This section of the paper presents an account of the findings from the study. We will first present a description of the four contexts, and then findings related to teachers’ views: the prevalence of cheating; the types of cheating; the reasons for cheating; how cheating might be prevented; the possible impact on cheating of a move to online assessment; and finally the impact of student authentication and authorship checking on assessment design.

Description of the contexts

This study was carried out within two contexts in each of two universities, one in Turkey and one in Bulgaria. In the Turkish university (University A) the two contexts are: a face-to-face context (A_f2f) and a distance education context (A_distance). In the face-to-face context most assessments are conducted in a proctored face-to-face environment, though there is some use of projects, essays and oral presentations. The learning model of the distance education context is based on printed self-study materials, supported by optional online course materials. Assessment is conducted in a proctored face-to-face environment, where the student’s photo is incorporated into the answer form. In the Bulgarian university (University B) the two contexts are: a face-to-face context (B_f2f) and a blended learning context (B_blended). Most classes are taught face-to-face in the first context. Teachers use a variety of continuous assessment methods based on the development of artefacts. A number of courses are offered via blended learning: some use Moodle to access learning resources and for coursework submission; others are fully online courses including both individual and group learning activities; and some use virtual classroom software. A number of courses are presently being developed for national accreditation as distance education courses. All final assessments in both contexts in University B take place in face-to-face proctored environments.

Basic demographic data about those teachers who took part in the study was collected via the questionnaire. In university A, most respondents in the A_f2f context were female, whilst most in the A_distance context were male. In University B most teachers were female. All respondents had a lot of experience of face-to-face teaching, and most teachers had taught a course where at least part of the assessment had been conducted online, though 16% of the teachers in the two face-to-face contexts had no experience of blended, distance or online teaching. About one third of the respondents worked in the field of Education, the other teachers came from the fields of Arts, Computing, Mathematics, Sciences and Social Sciences.

The students in the samples from the two universities were quite different. In University A, the sample was mainly drawn from postgraduate courses: almost all of the face-to-face students already had a degree, as did almost half of the distance students. They were mainly part-time students and there were roughly equal numbers of men and women. In University B the students were mainly female, full-time students taking their first degree.

Prevalence of cheating

Teachers were asked how many students they thought had cheated in their courses, and how often they had reported cheating; the results are shown in Table 2.

Table 2 Prevalence of cheating

The wording of both of these questions is subject to a degree of interpretation on the part of the respondents, which may affect the quality of the responses. University policies and procedures will impact on what counts as academic misconduct, and to what extent formal reporting is required, and indeed the answers to these questions should be seen as reflecting both the prevalence of cheating and the influence of the institutional contexts. These figures should therefore not be seen as providing accurate indicators of the levels of cheating, but rather as indicators that the teachers in both universities felt that there was a significant amount of cheating and plagiarism in the courses that they teach. In order to provide additional information about how seriously teachers viewed cheating, this issue was explored further during the interviews.

In the face-to-face context in University A, the teachers thought that cheating was common, and usually serious, though, perhaps, sometimes to be tolerated:

Cheating is quite common in formal education. This can be tolerated in the context of certain courses; the software course is one of them. Students definitely make use of websites when writing code … I believe that this is not harmful in the beginning phase. (A_f2f_T1)

To be honest, I believe cheating is one of the major problems that we encounter in higher education. The main problem is that the students are quite unaware about what constitutes plagiarism and cheating. (A_f2f_T2).

The teachers in the distance context in University A were less concerned about cheating, feeling they had effective measures in place:

A majority of the courses that are included in the open university system use multiple choice tests. Thus, the students are provided with exam booklets and optical answer sheets. The rates of cheating are very low since there are exam room heads and supervisors in exam rooms … it is not a serious problem currently due to face-to-face and supervised administration of exams. (A_distance_T1)

The teachers in the face-to-face context in University B saw cheating, and particularly plagiarism, as a significant problem:

Plagiarism, transcription and other forms of fraud in higher education assessment are certainly becoming more common. Therefore, in order to limit the problem, I stopped giving the students homework which they could bring in the class for assessment, instead I carry out oral, in-person, exams. (B_f2f_T2)

The teachers in the blended context in University B were experimenting with a variety of assessment methods, which presented issues for controlling cheating:

[I use a] project-based learning approach. The students are divided into small groups and asked to create a series of artefacts (joint report, picture story, blog and presentation). These products are partly developed in the classroom and partly in the VLE in shared workspaces for each group – forum or wiki. This way of working makes it difficult to determine the degree of authorship and contribution of each student to the final product. It also creates opportunities for others to contribute when students work from home … By constantly monitoring work on a project in the face-to-face sessions and in Moodle, and by conducting the final face-to-face written exam with an invigilator, fraud is prevented to a certain extent. (B_blended_T1)

In summary, across three of the four contexts cheating and plagiarism were seen as major problems, and the use of e-assessment was seen as exacerbating the problem. The exception is the distance education context in University A, where the teachers interviewed said that the systems in place were effective in preventing cheating.

Types of cheating

In the questionnaires, teachers were asked to rate the frequency with which they encountered 14 types of cheating and plagiarism. To facilitate comparison of the frequency of types of cheating, a mean score for the teachers’ responses was calculated, allocating a mark of 4 for ‘often’, 3 for ‘sometimes’, 2 for ‘occasionally’, 1 for ‘rarely’, and 0 otherwise. These mean scores are shown for each university in Table 3, which also shows the results of the Mann-Whitney U-test; the use of this test is explained below.

Table 3 Teachers’ views on frequency of types of cheating – mean scores and Mann-Whitney U test results

In order to examine how the responses to each of the 14 questions about the frequency of types of cheating (the dependent variables) varied between universities (the independent variable), the Mann-Whitney U-test was used because the dependent variables are ordinal variables. A one-tailed test was used with a significance level of 0.01. The sample sizes are: 31 for University A and 100 for University B. The null hypothesis for each question from the questionnaire is that it is equally likely that a randomly selected value from University A will be less than or greater than a randomly selected value from University B (i.e. the two samples from University A and University B have the same distribution).
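As a concrete illustration of this procedure, the following Python sketch applies the same ordinal coding and a one-tailed Mann-Whitney U test to invented response data. It is not the authors' analysis code; the response counts are hypothetical and serve only to show the form of the calculation.

# Illustrative sketch only: hypothetical responses to a single questionnaire
# item, coded and compared across universities with a Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

# Ordinal coding used in the paper: 'often'=4, 'sometimes'=3,
# 'occasionally'=2, 'rarely'=1, anything else (e.g. 'never')=0.
CODES = {"often": 4, "sometimes": 3, "occasionally": 2, "rarely": 1}

def score(responses):
    return [CODES.get(r, 0) for r in responses]

# Invented responses (n = 31 for University A, n = 100 for University B).
uni_a = score(["never"] * 20 + ["rarely"] * 8 + ["occasionally"] * 3)
uni_b = score(["never"] * 40 + ["rarely"] * 25 + ["occasionally"] * 20
              + ["sometimes"] * 10 + ["often"] * 5)

print(f"Mean score A: {np.mean(uni_a):.2f}, B: {np.mean(uni_b):.2f}")

# One-tailed test at alpha = 0.01: is University A's distribution
# stochastically lower than University B's?
u_stat, p_value = mannwhitneyu(uni_a, uni_b, alternative="less")
print(f"U = {u_stat:.1f}, one-tailed p = {p_value:.4f}")
if p_value < 0.01:
    print("Reject the null hypothesis of identical distributions.")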

As can be seen from Table 3, the p values for the Mann-Whitney U-test for five types of cheating in face-to-face assessments were less than 0.01, and so the null hypothesis can be rejected in these cases. The five types of cheating were:

  • Copying from the work of other students in the exam room

  • Receiving hints from other students in the exam room

  • Copying from materials (on paper, on a mobile device, etc) taken into the exam hall

  • Using a device with headphones to receive assistance from someone outside the exam room

  • Giving an excuse to leave the exam room temporarily, and then gaining access to outside help.

In each case the mean score for University A was less than for University B. One possible explanation for the differences in the frequency of these five types of cheating might be that there is a somewhat stricter supervision of examination rooms in place in University A than in University B. However, these differences in themselves should not be seen as implying that examination surveillance in general is stricter in one university than another as other factors such as assessment design would need to be taken into account in order to make such a judgement.

Teachers and students were asked to identify any other types of cheating that they had observed. Teachers mentioned some additional variants of ‘copying from materials’ (e.g. planting information in the room on a previous day and also using invisible ink) and of plagiarism (including translation from other languages). Teachers in both contexts in University A mentioned invigilators helping students, and students in both these contexts also mentioned getting help from the invigilator. Other types of cheating identified by students included the use of translations of foreign language texts and using fake data in research projects.

From Table 3, it is clear that, in both face-to-face and online assessment, the most common categories of cheating that teachers experienced were plagiarism and ghost writing; these were followed by copying (from other students or from notes, via mobile phones or the internet), and lastly by impersonation. Student authentication would not, therefore, seem to be a major issue for the teachers at the moment, probably because impersonation is seen as being well controlled through the use of face-to-face proctored assessments. Authorship checking, on the other hand, would seem to be highly relevant, as the major cheating observed was ghost writing and plagiarism. However, another category of cheating behaviours that was of concern to teachers in both face-to-face and online assessment was small-scale copying (from other students or from notes, via mobile phones or the internet), though this category was observed less frequently in face-to-face assessments in University A than in University B. This category of cheating is unlikely to be picked up by authorship checking instruments as it operates on too small a scale. It is similar to the class of cheating behaviours identified by Watson and Sottile (2010), referred to earlier as particularly likely to occur in online tests and quizzes.

Reasons for cheating and means of prevention

The teachers and students were asked why they thought that students cheated, and were presented with a list of options to select from. The most popular options chosen by both teachers and students were:

  • Wanting to get higher grades

  • The internet encourages cheating and plagiarism, and makes it easy to do

  • There would not be any serious consequences if cheating or plagiarism was discovered.

In response to an open question, teachers provided a range of other possible reasons, principally suggesting that students were lazy and wanted to take the easy way out. However, some teachers put the responsibility on the university rather than the student, saying that students had not been educated about cheating and plagiarism, and that poor learning content encouraged students to cheat.

Students came up with some other suggestions:

  • Lack of knowledge about what cheating and plagiarism are

  • High expectations from their parents

  • ‘I work and I have no time to learn’

  • ‘When you have three assessments in a week’

  • Students don’t engage with course content unless it is provided in an interesting and accessible way.

The views expressed by the teachers might lead one to expect that they would prioritise approaches to the prevention of cheating that emphasise sanctions. However, when asked about ways of preventing cheating and plagiarism, teachers most commonly described educational approaches, with approaches based on the use of technology, changed assessment design and increased sanctions also very popular. Comments related to technology use in the prevention of cheating included references to student authentication, plagiarism checking, performance tracking, jamming devices for mobile phones, and surveillance cameras.

Impact of increased use of e-assessment on cheating

The administrator interviewed in University A was principally concerned about the technical and security issues associated with any increased use of e-assessment:

The online exams can be organised in two ways. The first one is administered in computer laboratories where supervisors are present, and completed within a certain period of time on the internet using a limited number of computers. The second one is administered completely on the personal computers of learners, and there are no supervisors during exams. If the exams are to be administered without any supervisors, the systems should be developed and cleared to prevent students from helping each other cheat. It should not be regarded as just the internet access of a computer with a web camera.

The administrator interviewed in University B was principally concerned about existing institutional approaches to cheating and plagiarism:

There is no well-established mechanism to stop the process of attempting to cheat and plagiarise by the students ... There is not a department with such competencies, and there is no electronic system to check the text materials. I have noticed that this problem is massively neglected by the teachers.

In the questionnaire given to teachers, they were asked whether they would be concerned about an increase in cheating if there was a greater use of e-assessment. The results are shown in Table 4.

Table 4 Impact on cheating of increased use of e-assessment

Many teachers are concerned that an increase in the use of e-assessment will impact on cheating, though there are differences in the degree of concern between contexts; the most striking is the statistically significant difference between the A_distance and B_blended contexts (Chi-square = 9.545, d.f. = 2, p < 0.01). The use in the A_distance context of tightly controlled face-to-face proctored examinations for large numbers of students provides a high degree of confidence in the assessment process, and hence these teachers are particularly concerned about any change in the way in which assessment is carried out. The teachers in the B_blended context, on the other hand, have developed an assessment approach using significant elements of e-assessment for small groups of students that involves close monitoring, and have fewer concerns about an increased use of e-assessment.
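The chi-square test reported above can be reproduced in outline as follows. Since the cell counts from Table 4 are not reproduced in this text, the contingency table in this Python sketch is invented, and is intended only to show the form of the analysis (two contexts by, presumably, three response categories, giving d.f. = (2 - 1) x (3 - 1) = 2).

# Illustrative sketch only: chi-square test of independence on an invented
# 2 x 3 contingency table (context by level of concern).
from scipy.stats import chi2_contingency

#            concerned  somewhat concerned  not concerned
observed = [[12,         3,                  1],   # A_distance (hypothetical)
            [ 5,         6,                  9]]   # B_blended  (hypothetical)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"Chi-square = {chi2:.3f}, d.f. = {dof}, p = {p:.4f}")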

Each of the four contexts will now be discussed in turn, looking firstly at the replies to an open question in the questionnaire asking about the possible impact on cheating of an increased use of e-assessment, and then at the interviews with the two teachers in each context exploring this same issue.

In the face-to-face context in University A the teachers thought that students would take advantage of e-assessment in order to cheat unless there were strict controls, and they were sceptical about the extent to which systems for student authentication and authorship checking could be effective. One teacher referred to the TOEFL iBT Test as an ideal model of e-assessment in a strictly controlled environment. The teachers interviewed shared these concerns, though they also saw some advantages in e-assessment, in part because of the opportunity for more creative assessments:

Having the students perform online activities that will make them use higher level cognitive skills, such as writing blogs and making discussions will presumably reduce the acts of cheating. (A_f2f_T1)

and, in part, because it would facilitate the use of tools that identify, and so discourage, cheating and plagiarism:

It is very difficult to read the exam papers and assignments one by one. It will be faster to read them using technology ... When reading the written material, sometimes we fail to see that the student had cheated … there are online tools such as ‘iThenticate’ … which help to read and see cheating much faster. (A_f2f_T1)

I have begun to use ‘Turnitin’ to collect student work … because the students know this, they are extra careful in their submitted assignments. (A_f2f_T2)

In the distance education context in University A the teachers also thought that students would attempt to take advantage and to cheat, and that existing controls would be insufficient to prevent this. However, they also argued that there were possible preventative steps which would lessen the impact: the adoption of appointment based tests such as those used in the TOEFL iBT Test; and the redesign of multiple choice exams through developing new question types, increasing the number of questions based on reasoning, and using a wider range of questions based on materials outside the textbook.

The concerns felt by teachers in this context are illustrated by this comment from one of the interviewees:

...the participants will attempt cheating to pass the exam with the easiest way. The conditions that prevent students from cheating will not exist in the online environments ... Thus, the participants will definitely try to cheat … In this environment, they may try to have someone else take the exam instead, or cheat from the internet or the printed resources. (A_distance_T2)

In both the face-to-face and blended contexts in University B many teachers thought that there would be some loss of control over the assessment process and that this would lead to greater cheating, even with a student authentication system, as there was no control over other people communicating with the student, or over the student accessing other materials. However, some teachers argued that technology actually created better opportunities for control (particularly through authorship checking with Plagiarism Detection and Forensic Analysis) and that appropriate design of assessment could also help to limit cheating; examples included keeping the time for the assessment task short so there was little opportunity for cheating, and setting more creative assignments that would be harder to copy from elsewhere. It was also argued that e-assessments should have a low weight in the overall assessment and be complemented by a face-to-face final exam.

Teachers’ concerns were well expressed by this teacher:

I also think that a problem arises in solving a test or a written/oral exam in the online environment because the teacher does not have the opportunity to follow what materials or tools the student uses during the exam itself … In a written exam, for example, even with Voice, and Face Recognition, I cannot be sure that the student does not use any help on the computer screen or on his knees to help. So, yes, certainly online testing and e-assessment definitely worry me. (B_f2f_T2)

In summary, teachers in all four contexts felt that greater use of e-assessment would increase the prevalence of cheating, though there was also recognition that technology also provides opportunities to support assessment and reduce cheating. The teachers most concerned were those in the distance education context in University A, where there are large student numbers presently being assessed in strictly controlled proctored face-to-face environments. University B is starting to develop distance education programmes and is concerned about how it should develop assessment in that context. When talking about technology, a number of teachers referred positively to their use of plagiarism detection software, and whilst student authentication instruments were seen as useful, there were concerns that they do not prevent communication with other people, or the copying of materials, during the assessment task.

Possible roles for student authentication and authorship checking systems

Two teachers in each of the four contexts were interviewed. They were asked how they might use student authentication and authorship checking systems such as TeSLA to address existing problems with cheating, or how they might use such systems to support new forms of assessment. The teachers’ comments are presented for each context below.

In the face-to-face context in University A the teachers thought that such a system would reduce cheating, but one teacher’s experience of issues arising from the use of other plagiarism detection software prompted a degree of caution:

Making the students type their answers and informing them that their keystrokes and syntactic patterns are recorded would be really helpful to prevent cheating. Nevertheless, I don’t yet know the stress this will cause in students. ‘Turnitin’ already makes them very stressed because it gives all the possible matches, even their own names in other assignments they have submitted earlier. Most of the teachers do not even read the work when they see the originality report, they don’t bother checking what is inside. (A_f2f_T2)

However, the teachers welcomed the possibilities of easier access to assessment for students:

Especially for the disabled students, these can minimise effort. Because the system will recognise their keystroke patterns and sentence structure, their tests can be given online which will help these disadvantaged groups as they will not need to travel to the exam. (A_f2f_T2)

One teacher explained how the use of such systems might impact on her assessment activities, firstly by enabling her to re-introduce book reading activities which she had dropped because of fears about cheating and plagiarism, and secondly in moving multiple choice tests online:

... if it is the TeSLA system we are using, I can see to what extent the assignment is taken from another resource or to what degree the students had copied from each other or from the pages on the internet since I know about the content of the text book and students’ homework in it.

I use multiple choice tests in the mid-term and final exams ... I would like to administer them online ... to do an exam on their tablet PCs and mobile phones within a certain period of time after receiving their biometric data. (A_f2f_T1)

In the distance education context in University A exams are presently carried out face-to-face because of the risk of cheating and plagiarism. Teachers thought that a reliable e-assessment system which incorporated additional checks together with those provided by student authentication and authorship checking would enable them to carry out some assessments online:

The students will probably not have the chance to cheat from other resources as long as the exam durations are acceptable and not very long ... I suggest that the students should not be allowed to open any other applications on the computer when the exam application is open in TeSLA. (A_distance_T2)

University A wishes to move away from a reliance on multiple choice tests, and TeSLA was seen as supporting the use of a wider range of assessment activities, including taking some existing formative e-assessment activities and using them summatively, and thus re-balancing the overall assessment towards a greater use of continuous assessment:

In the open university system, the project assignments to be given to the students can be checked by the Keystroke Dynamics instrument. This way, it can be determined if a student had done the assignment personally ... Voice Recognition can be used very effectively. They can record their assignments on mobile phones, and do them on the computer as well … TeSLA instruments can be used for projects, term papers, or portfolios. (A_distance_T1)

It is possible to use activities that are provided in the VLE for continuous assessment purposes that encourage students to search and learn, even in courses with large number of students … The Forensic Analysis instrument … encourages me to give homework to students in courses having relatively small number of students as the system recognizes the writing styles. This way, I have the chance to determine whether the student has done it themselves. (A_distance_T2)

In the face-to-face context in University B, teachers also welcomed the possibilities of easier access to assessment for students:

.. these are wonderful tools in cases where it is impossible, or very difficult, for students to attend university … (such as) SEN students, students from programs abroad, students with temporary difficulties attending exams. (B_f2f_T1)

One teacher commented that although the TeSLA system had potential, it was limited and therefore unlikely to impact on her existing assessment design:

The instruments developed in TeSLA are definitely interesting and have the potential to limit some of the problems associated with testing fraud. In the online environment, Face and Voice Recognition would eliminate the possibility of impersonation. The Keystroke Dynamics instrument is useful when the student responds in writing to a question in a real environment. Unfortunately, all three instruments cannot cope with the problem of the presence of hidden aids that examinees may use.

Currently, using TeSLA would not change the assessment methods that I use in general. As an option … I might conduct oral exams in an online environment, but only when it is possible to ensure that the student does not use help materials on paper or on the computer screen. (B_f2f_T2)

In the blended context in University B, teachers again noted the value of flexibility:

… many of our students are working and often cannot attend classes. In this regard, the possibility of replacing face-to-face exams with e-assessment from home would be of great benefit for both teachers and students. (B_blended_T2)

This teacher explained in some detail how he would integrate the tools within the assessment process:

… the final written exam, conducted under the supervision of a lecturer, is more heavily weighted than the continuous assessment, because students might receive external help when working from home. With TeSLA, I could assume that students’ work done at home is their own and then I would increase the weight of the results of the continuous assessment in forming the final grade in the discipline. As forms of assessment, I would keep the writing a wiki report (integrated with the TeSLA instruments Keystroke Dynamics and Plagiarism Detection) … I would change the final written exam from writing on paper to writing on a computer under the supervision of a teacher, integrating the Keystroke Dynamics and Plagiarism Detection instruments … as well as the Forensic Analysis instrument … (B_blended_T1)

In summary, in all four contexts the teachers interviewed could see some potential role for student authentication and authorship checking systems, allowing greater assessment flexibility and access. However, the integrations proposed are rarely straightforward, and involve modifications to both the wider technological environment and to administrative arrangements in order to enable such systems to be used effectively.

Conclusions

The opinions of the teachers for most of the issues were similar across the four contexts examined, and this provides some degree of confidence that our findings may be more widely generalisable. Where differences were found between the contexts, it was the distance education context in University A that stood out as different from the other three. In this context, a well established, secure, face-to-face proctored assessment process capable of processing very large numbers of students is in place, and there was significant reluctance to move away from this model. However, it is also the case that this assessment model is seen as restricting the choice of assessment methods, and so the institution is keen to move away from exclusive reliance on this form of assessment.

This study was an exploration of the basic rationale of the use of student authentication and authorship checking systems, and had four specific aims. The findings in relation to these aims are summarised below:

  • How concerned are teachers about the issue of cheating and plagiarism in their courses? In responses to the questionnaires, the teachers in all contexts described cheating and plagiarism as a serious and widespread problem. In the interviews, the teachers in three contexts expanded on this view, but the teachers in the distance education context said that cheating was not a significant issue for their courses because of existing strict control of assessment.

  • What cheating and plagiarism have the teachers observed? The most widespread types of cheating reported in all contexts were plagiarism and ghost writing, followed by copying and communicating with others during assessments, with impersonation being only occasionally observed. In further exploring this issue we found that many teachers said that the main cause of cheating was their students’ unwillingness to work hard and the weakness of sanctions, though others saw the causes to lie in lack of education of students as to what constitutes plagiarism and cheating, and in poor course content leading to disengagement. Students expressed similar views to the teachers in relation to students’ failings but particularly noted lack of awareness of what constitutes plagiarism, lack of engagement with course content that was not presented in an interesting and accessible way, and also laid emphasis on the impact of high workloads. When asked about ways of preventing cheating and plagiarism, however, teachers were most likely to mention educational approaches, followed by assessment design, technology, and sanctions. So, use of technology to address the issue of cheating was seen as a significant element of a preventative approach, but was not seen as the only, or even as the main element in this.

  • If e-assessment were introduced in their courses, what impact do the teachers think this might have on cheating and plagiarism? Teachers in all four contexts thought that there would be an increase in cheating, with the teachers in the distance education context being the most concerned. However, teachers also described a number of ways in which cheating might be reduced through assessment design, and the opportunities that e-assessment offered for increased control of the assessment process.

  • How do teachers view the possible use of student authentication and authorship checking systems, and how well would such systems fit with their present and potential future assessment practices? In all contexts the value of such a system was seen as enabling greater flexibility in access (for those who found it difficult to travel to exam centres), and greater flexibility in forms of assessment. The impetus for change in the distance education context was not so much a desire to break away from face-to-face examinations in themselves, as to open up new forms of assessment beyond the use of multiple choice tests. There was also considerable demand for such systems in face-to-face courses, where it was felt that they would support greater flexibility in assessment methods. Student authentication enabled some degree of confidence that the right person was taking the exam, but there were concerns that this did not actually prevent cheating through accessing materials or communicating with other people during the assessment. Many teachers commented positively on the potential use of authorship checking tools, though those with experience of plagiarism detection systems pointed to some potential problems that might arise. Tools based on linguistic analysis for checking author style (such as TeSLA’s Forensic Analysis instrument) received particular attention and were widely welcomed.

Findings that have implications for the development of student authentication and authorship checking systems, and in particular for TeSLA, were:

  • Teachers did not see technology as the primary means of addressing cheating and plagiarism, but rather saw it as one important element to be used in conjunction with other approaches based on education, assessment design, and effective sanctions.

  • Teachers did not see student authentication as a major issue in existing assessments as the risk of impersonation was felt to be well controlled, but it acquires a much greater significance with the increased use of e-assessment, and in supporting the secure use of a wider range of types of assessment. However, the existing commitment to systems which are felt to be very secure presents a high bar for any student authentication system to reach if it is to compete effectively with existing approaches.

  • Teachers widely welcomed the potential use of computational linguistic approaches to authorship verification, such as that exemplified by the Forensic Analysis instrument in TeSLA, but this is an approach that has not yet been widely trialled in education, and it remains to be seen how effectively it can be integrated into institutional practice.

  • Teachers were concerned about low-level cheating through copying and communication with other people that can occur in all assessment contexts, including proctored examinations, and felt that these forms of cheating were not addressed by student authentication and authorship checking systems. Some teachers described how effective assessment design can go some way to address this issue, but others looked to the use of more intrusive technologies such as comprehensive online proctoring systems, the locking down of browsers, and automatic logging when the user switches windows.

This study of higher education teachers’ perceptions of the rationale for the use of student authentication and authorship checking systems has provided valuable insights into how these technologies might be effectively used to address cheating. It has also highlighted the potential areas of concern in the use of these technologies, and the need to use a variety of approaches in conjunction with such technologies in order to address cheating effectively.

References


Funding

This work is supported by the H2020-ICT-2015/H2020-ICT-2015 TeSLA project ‘An Adaptive Trust-based e-assessment System for Learning’, Number 688520.

Author information


Contributions

HM contributed to the development of the research instruments, data analysis and the writing of the paper. RP and SK contributed to the research instruments development, data collection and data analysis. AK and BY contributed to the data collection and data analysis. All authors read and commented upon the final manuscript. All authors have approved the manuscript for submission.

Corresponding author

Correspondence to Serpil Kocdar.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

