Postplagiarism: transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology

Abstract

In this article I explore the concept of postplagiarism, loosely defined as an era in human society and culture in which advanced technologies such as artificial intelligence and neurotechnology, including brain-computer interfaces (BCIs), become a normal part of life, including how we teach, learn, communicate, and interact on a daily basis. Ethics and integrity are intensely important in the postplagiarism era, when technology cannot be decoupled from everyday life. I argue that it might be reasonable to assume that when commercialized neuro-educational technology is readily available in a form that is implantable/ingestible/embeddable and invisible, the academic integrity arms race will be over, as detection will be an exercise in futility.

In a postplagiarism era, humans are compelled to grapple with questions about ethics and integrity for a socially just world at a time when advanced technology cannot be unbundled from education or everyday life. I conclude with a call to action for transdisciplinary research to better understand the ethical implications of advanced technologies in education, emphasizing that such research can be considered pre-emptive, rather than speculative. The ethical implications of ubiquitous artificial intelligence and neurotechnology (e.g., BCIs) in education are important at a global scale as we prepare today’s students for academic and lifelong success.

As I was finishing the manuscript for my book Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity (2021), I began to contemplate the future of plagiarism and academic integrity. The manuscript was due to the publisher in May of 2020 and, with only days left before the deadline, I was rewriting the final chapter, forming and reforming my conceptualizations of plagiarism for future generations. I introduced the idea of life in a postplagiarism world, thinking about the impact of artificial intelligence on writing, teaching, learning, and assessment. In this editorial, I expand on and extend those ideas. The concept of a postplagiarism age was inspired in part by Rebecca Moore Howard’s work, written more than two decades prior. Howard (2000) proposed that plagiarism is “inherently indefinable” (p. 473), that it “eludes definition” (p. 474), and that we ought to discard the term entirely. As practices such as contract cheating continue to complicate and challenge academic integrity, debates persist among policy makers, misconduct investigators, and educators about whether academic outsourcing is plagiarism, fraud, or some other classification of misconduct.

Definitions of plagiarism appear in almost every academic integrity policy that I have ever seen. There is no absolute or universally accepted definition of plagiarism. Cultural and contextual factors play a role in how we define and address misconduct. The bounded nature of plagiarism definitions can serve as an impediment to deeper reflection about what it means to engage in ethical decision-making in and beyond the classroom. Academic integrity serves as a foundation for ethical decisions at school, at work, and in life.

I tried to imagine what would happen if we took Howard’s advice and discarded the term ‘plagiarism’ from our policies and procedures. What then? What would replace it? I was stumped, so I reframed the question. Instead of asking what the world would look like if we discarded the term plagiarism, I asked what would happen if we transcended it. Reframing the question led me to contemplate what it would mean to live in a postplagiarism world.

A few months prior, in February 2020, American journalist Jeffrey Young posed the question, “Are algorithmically-generated term papers the next big challenge to academic integrity?” (Young 2020, n.p.). A few days later, an editorial by Canadian scholar Michael Mindzak hooked readers with the headline, “What happens when a machine can write as well as an academic?” (Mindzak 2020, n.p.). Within days of one another, these two writers in different countries were offering provocations about the impact of artificial intelligence on academic writing. As I finished my book manuscript in May 2020, I concluded that artificial intelligence technologies would indeed be among the next big challenges to academic integrity and writing. That was two and a half years before ChatGPT was released by OpenAI in November 2022.

Thinking about the idea of postplagiarism catalyzed me to mobilize a team of colleagues to start researching the ethical impacts of artificial intelligence on teaching, learning, and assessment. This work began with a small, internally funded research project at my home university and led to a multi-country project funded by the Social Sciences and Humanities Research Council of Canada (SSHRC). Our SSHRC-funded project included scholars from multiple universities in Canada (University of Calgary, University of Saskatchewan, Brock University, and Toronto Metropolitan University) and Australia (Deakin University). We gathered at the University of Calgary in June 2023 to explore pressing questions about the ethical implications of artificial intelligence for higher education (Dawson 2023; Eaton et al. 2023). The idea of life in a postplagiarism world was central to our discussions.

What is postplagiarism?

Postplagiarism refers to an era in human society in which advanced technologies such as artificial intelligence and neurotechnology, including brain-computer interfaces (BCIs), are a normal part of life, shaping how we teach, learn, and interact daily. Philosophers and intellectual theorists have long classified human thinking and culture according to eras including, but not limited to, postmodernism, structuralism, and poststructuralism (Mann 1994). Postplagiarism heralds a new era of intellectual engagement in the age of advanced technology.

Although the concept of plagiarism dates back thousands of years to the ancient Greeks, it became an everyday concern after the development of the printing press in the fifteenth century (Eaton 2021). Copyright and intellectual property rights came after that. After the invention of the printing press, literacy improved at a population level, as more people learned to read and write. Education became institutionalized only after the advent of the printing press, with the Industrial Revolution catalyzing the massification of schooling for children. Modern concepts of plagiarism can also be traced back to the printing press as a technological disruption that changed society forever. In the postplagiarism era, humans are not only consumers of information but also co-creators of knowledge, together with technology.

Ethics and integrity are intensely important in the postplagiarism era, when technology cannot be decoupled from everyday life, at least for the majority of people on the planet, and not without a concerted and sustained effort to remain disconnected. There are complexities yet to be disentangled about the ways in which advanced technologies can impact decolonization (or, conversely, can perpetuate colonialism). As yet, there are no clear answers to these big questions, such as: What are the ethical implications of advanced technology on education? How can artificial intelligence promote equity, diversity, inclusion, and accessibility? In what ways can artificial intelligence help or hinder efforts to decolonize education? These are expansive questions without easy answers. In an age where the rate of technological transformation is arguably outpacing some educators’ ability to keep up, the implications for educational ethics and integrity are more pressing than ever. In the infographic below, I offer six tenets to frame the principles underpinning the postplagiarism era (see Fig. 1). In the section that follows, I explain each tenet in more detail. I shared an earlier and briefer version of this infographic on my blog (Eaton 2023b).

Fig. 1 Six tenets of postplagiarism (This image is licensed under a Creative Commons licence.)

Six tenets of postplagiarism

Hybrid human-AI writing will become normal

The first principle of postplagiarism is that hybrid writing co-created by humans and artificial intelligence is becoming prevalent and will soon be the norm. Text generated by artificial intelligence tools is not static. It can be edited, revised, reworked, and remixed. The result can be a product that is neither fully written by a human nor by an AI, but one that is hybrid. Trying to determine where the human ends and where the artificial intelligence begins is pointless.

As AI tools become increasingly sophisticated, the probability of accurately detecting whether a text was written by a human or an artificial intelligence diminishes (Elkhatat et al. 2023). In August 2023, OpenAI, the company behind ChatGPT, definitively declared that text generated by artificial intelligence applications cannot be detected (OpenAI 2023). This comes on the heels of numerous news stories about students being falsely accused of academic misconduct after teachers had used so-called AI text-generation detection tools on students’ academic work (e.g., Fowler 2023; Jimenez 2023; Verma 2023).
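To illustrate why detection is probabilistic rather than definitive, consider the perplexity-style scoring that many so-called detectors build on: a language model scores how “predictable” a text is, and low scores are read as machine-like. The sketch below is illustrative only, assuming the Hugging Face transformers and PyTorch libraries; the threshold is an arbitrary value chosen for demonstration, not one used by any real detection tool.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Small, openly available language model used purely for illustration.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable a text is under the model (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # average next-token cross-entropy
    return float(torch.exp(loss))

# Hypothetical cut-off: scores below it get flagged as "AI-like".
THRESHOLD = 40.0
score = perplexity("Academic integrity is a foundation for ethical decision-making.")
print(score, "flagged" if score < THRESHOLD else "not flagged")
```

Whatever the threshold, polished human prose can score as “predictable” as machine output, and lightly edited machine output can score as “human”; the score measures statistical regularity, not authorship, which is why false accusations are inevitable.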

There are strong signals that AI capabilities will soon be integrated into technologies we use every day, such as Microsoft Office or Google Workspace. Some social media platforms already offer users the option to have AI help write their posts. We only need to pay attention to what is happening around us to see that AI capabilities for text- and non-text-based applications will soon be part of every technology we use.

Human creativity is enhanced

Another tenet of postplagiarism is that human creativity is enhanced, not threatened, by artificial intelligence. Humans retain their ability to be inspired and to inspire others. We may even be inspired by artificial intelligence, but our innate human ability to imagine and create remains boundless and inexhaustible. I am well aware of the protests of writers and other content creators with regard to artificial intelligence. I understand there are concerns about intellectual property and about using AI to replace human labour. All of these are important topics that merit our attention. Concurrently, it is important to recognize that human creativity itself is not threatened. Past generations worried that technologies such as radios or smartphones would diminish our ability to think (Orben 2020), but there is no empirical evidence to support such an assertion. There are complexities and nuances to human intelligence, creativity, and relationships that may never be captured by artificial intelligence (Marks 2022).

Language barriers disappear

When I first wrote on my blog that “one’s first language will begin to matter less and less as tools become available for humans to understand each other in countless languages” (Eaton 2023b, n.p.), the backlash on social media was immediate and intense. In a response to the original blog post, Bali (2023) noted that this tenet was by far the most contentious. When I originally wrote this tenet, it was not my intention to suggest that one’s first language would become unimportant. As someone who spent fifteen years as a language teacher, I am acutely aware of the political nature of language, translation, and interpretation. The advocacy work to decolonize language is important, relevant, and timely.

The intention behind this tenet was to emphasize that the availability and effectiveness of technologies that help us transcend language barriers are likely to increase. This is an entirely different matter from whether those technologies will be able to recognize or respond to complexities related to language politics. The accuracy and effectiveness of neurotechnology and brain-computer interfaces (BCIs) that can help persons with disabilities communicate better are increasing at a rapid pace (Willett 2023). Overcoming barriers and advancing equity are fundamental to postplagiarism precisely because ethics and integrity are of utmost importance.

Humans can relinquish control, but not responsibility

Humans can retain control over what they write, but they can also relinquish control to artificial intelligence tools if they choose. Although humans can relinquish control, they do not relinquish responsibility for what is written. Humans can – and must – remain accountable for fact-checking, verification procedures, and truth-telling. Humans are also responsible for how AI-tools are developed.

Major publishers and organizations concerned with publication ethics, such as the Committee on Publication Ethics (COPE) (an organization on which I hold a seat as an elected member of the COPE Council), have agreed that ChatGPT and similar Large Language Model (LLM) applications should not be named as co-authors on scientific papers (e.g., Nature 2023). The reason for this is simple: humans, not technology, are held responsible for the accuracy, validity, reliability, and trustworthiness of scientific and scholarly outputs.

To extend this argument to educational contexts, students remain responsible for the quality and credibility of the work they submit for assessment. It has never been acceptable for students to outsource their academic work, regardless of whether the outsourcing is done by a human or by an artificial intelligence. If students cannot demonstrate their own learning, then there may be reason to question whether academic integrity has been violated. In turn, educators have a responsibility to develop assessment tasks that provide students with opportunities to demonstrate their learning. Bearman and Luckin (2020) point out that assessment tasks centred on human learning can focus on personal epistemology (meta-knowing) and evaluative judgement. A deeper discussion of these topics is beyond the scope of this article, but needless to say, a fundamental principle of life in a postplagiarism era is that we, as humans, do not get to absolve ourselves of responsibility for the work we do or the research and scholarship we conduct.

Attribution remains important

Another principle of postplagiarism is that attribution persists as a desirable aspect of learning and scholarly engagement. It has been, and always will be, appropriate to appreciate, admire, and respect our teachers, mentors, and guides. Humans learn in community with one another, even when they are learning alone. Citing, referencing, and attribution remain important skills. Too often, citing and referencing is taught and upheld as a technical skill, rather than as a practice of paying homage to those from whom we have learned. Indigenous scholars, in particular, have begun to challenge the ways in which modern-day practices related to citing and referencing can perpetuate colonialism and privilege Western ways of knowing (e.g., Gladue 2020; Lindstrom 2022; MacLeod 2021; Poitras Pratt & Gladue 2022; Younging 2018). Lorisia MacLeod, from the James Smith Cree Nation in Canada, developed templates for citing Indigenous Elders and Knowledge Keepers because standard citation guides (e.g., APA, MLA) continue to marginalize Indigenous voices and knowledge by failing to provide appropriate methods to attribute oral knowledge (MacLeod 2021).

Citing and referencing is too often undertaken as a perfunctory or performative obligation, evidenced as a list of works at the end of a paper or in a series of footnotes. Attribution, on the other hand, is about knowing others’ work, being able to speak to it accurately, and showing respect for others’ contributions. Attribution is a form of intellectual appreciation that can be written, oral, or demonstrated in a variety of other ways; it is about being a knowledge caretaker and steward. In this sense, attribution is about taking responsibility not only for what we know and what we write, but also for showing respect to those from whom we have learned. The people we cite and reference are our teachers in the broadest sense of the word.

Historical definitions of plagiarism no longer apply

Historical definitions of plagiarism need not be rewritten because of artificial intelligence; instead, they can be transcended. Policy definitions can – and must – adapt. Paying attention to signals about technological advances in education and in society is essential if we are to commit to educational integrity as a broad concept that includes teaching, learning, assessment, research, leadership, and policy. As students, educators, and members of society begin to think more about the normalcy of complexity and develop a tolerance for ambiguity, we may be challenged to articulate new ideas about what it means to learn, work, and live ethically.

Large language models have provoked much debate about plagiarism. When Noam Chomsky was asked in a YouTube interview in January 2023 to share his thoughts about ChatGPT as an educational fad, he commented, “I don’t think it (ChatGPT) has anything to do with education, except undermining it. ChatGPT is basically high-tech plagiarism” (EduKitchen 2023, timestamp 04:23–04:28). After that interview, media reports of Chomsky’s declaration that ChatGPT was high-tech plagiarism rippled across the world.

Because I am a plagiarism scholar, people asked me what I thought of Chomsky’s comments. The first thing I did, of course, was to locate and listen to the original interview in which Chomsky made this declaration (EduKitchen 2023). I listened with great interest. As someone who is regularly called upon by journalists and others to offer on-the-spot commentary on issues related to my scholarship, I know all too well that one’s words can sometimes be taken out of context. On that basis, I refrained from commenting. But when Chomsky and colleagues published a guest editorial in the New York Times a couple of months later that reiterated some of these same ideas (Chomsky et al. 2023), I took a firmer position. I disagree with Chomsky’s assertion that outputs from Large Language Models are nothing more than high-tech plagiarism. Moreover, I think it would be fair to say that educators at every level, all over the world, might struggle with the comment that ChatGPT and other AI apps have nothing to do with education. By this I mean no disrespect to Professor Chomsky, who is an esteemed professor and public intellectual. As scholars, we can disagree with another’s ideas while still maintaining great respect and admiration for them as human beings. Disagreeing with Chomsky on this point may be irreverent, but it is not intended to be impudent, as I hold him in high regard.

I would respectfully offer that in a postplagiarism era, historical definitions of plagiarism that focus on cutting and pasting text verbatim without attribution may soon be obsolete. Longitudinal research on plagiarism led by Guy Curtis in Australia has shown that plagiarism remains a topic of concern in higher education (Curtis & Popal 2011; Curtis et al. 2016; Curtis & Tremayne 2019). Curtis and Tremayne (2019) point out that there are “some substantial gaps in students’ knowledge and causes for concern in rates of several forms of plagiarism including sham paraphrasing, illicit paraphrasing, and contract cheating” (p. 10). In their work, Curtis and Tremayne attend to nuance and complexity, refraining from defining plagiarism in absolute terms and instead classifying it into several different types of misconduct behaviours. This longitudinal empirical research would seem to support Howard’s (2000) assertion that plagiarism “eludes definition” (p. 474). Plagiarism is understood within the context of culture and epistemological traditions. When we talk about postplagiarism, we are talking about an era in which historical notions of what it means to write and create ethically are being challenged. As yet, we do not have a collective sense of what a new ethical normal might be, and arriving at one is an intellectual endeavour of our time.

Looking ahead: the impact of neurotechnology and brain-computer interfaces (BCIs) on education

Contrary to what some may believe, ChatGPT did not come out of nowhere. OpenAI, the company behind GPT, had been working on generative pre-trained transformer technologies since the company was launched in 2015 (Eaton 2023a). The technological precursors to modern-day large language models can be traced back to the 1980s, when predictive text technologies were first developed to help persons with disabilities (Swiffin 1987; Eaton 2023a; Eaton et al. 2023; McDermott 2023).

In the same year that OpenAI was launched, Mark Zuckerberg asked, “How does learning work, and how can we empower humans to learn a million times more?” (Booton 2015). By the time the transformer architecture underlying generative pre-trained transformers (GPTs) was introduced in 2017, Elon Musk was working on Neuralink, a brain-computer interface that is “fully implantable, cosmetically invisible, and designed to let you control a computer or mobile device anywhere you go” (Neuralink 2023).

Brain-computer interfaces (BCIs) have existed for years for medical purposes (Marsh 2018; Ienca et al. 2018), but the technology is shifting from being specialized and medicalized to being commercialized and socialized. A 2023 report led by UNESCO declares that neurotechnology:

“has broken into the market leading to an increased availability of direct-to-consumer products that may be used for recreational and mental augmentation purposes. However, the effects of these technologies are still unclear and their unregulated use entail unprecedented risks for human rights related to freedom of thought, mental integrity and to some of its underlying pre-conditions such as dignity, identity or human agency.” (p. 3)

There are already pressing ethical concerns related to neurotechnology. There are compelling indications, just as there were with artificial intelligence apps, that this advanced technology might become ubiquitous sooner than the average person might expect. If history is any indication, when neuro-educational technology arrives in our classrooms, it is reasonable to expect that neither educators nor policy makers will be able to curtail or control its use, at least to the extent that they might desire.

It is reasonable to suggest that prior to the launch of ChatGPT in 2022, many educators might have considered the use of artificial intelligence in our classrooms a far-fetched idea. Yet here we are. We are in a similar position today, with neurotechnology not even being on the radar of many classroom educators. Nevertheless, there are strong signals that neurotechnology will become readily available to the public at some point within our lifetime – and quite possibly before the current generation of kindergarten students graduates from high school. We cannot predict exactly when neurotech will become available to the general public, but recent history has shown that since the beginning of the 2020s, educators and educational policy makers have been ill-prepared for mega-scale social and technological changes over which they have no control – first with COVID-19, then with artificial intelligence apps becoming omnipresent within a single school year.

The end of the academic integrity arms race?

The possibility of neurotechnology that is available to the average consumer could sound like an academic cheating nightmare. After all, gadgets from earpieces to smartphones to high-tech glasses have been used for cheating for years now (Scott 2012). There are websites that specialize in selling cheating gadgets to students around the world. What happens when the technology is implantable and, most importantly, cosmetically invisible, as Musk’s company has suggested? It might be reasonable to assume that when commercialized neuro-educational technology becomes implantable/ingestible/embeddable and cosmetically invisible, the academic integrity arms race will be over, as detection will truly be an exercise in futility.

Call to action for research into the ethical implications of neurotechnology in education

Educators were caught off guard by COVID-19 and the need to have online teaching skills; most had to pivot quickly. Suddenly, there was global urgency for all educators to teach virtually, yet only a small (and unquantifiable) portion of them had the skills, competencies, and confidence to teach online. The advent of ChatGPT then resulted in educators worldwide having to confront the reality of artificial intelligence applications that were at the fingertips of billions of people within a matter of months. Again, educators had to pivot quickly.

As of 2023, children who are five years old or younger (at least in economically developed countries) will never know school without artificial intelligence. As Phillip Dawson (2023) has said, we need to prepare children for their future, not our past. If we think ahead, as educators, to the time when children who start their schooling in 2023 graduate from secondary school (probably in or around the year 2041), we must ask ourselves: what might their world be like? This question is not a summons to scurry down the rabbit holes of dystopian science fiction. Instead, I pose it as an invitation to think about how we, the current generation of educators, can prepare students for a future we cannot yet imagine – their future, not ours.

Research into the ethical implications of advanced technologies such as artificial intelligence and neurotechnology in education can be considered pre-emptive, rather than speculative. There are important ethical questions about the use of advanced technologies in education for which we currently have no answers. I conclude with a clarion call to action to research the ethical implications of neurotechnology and brain-computer interfaces (BCIs) in education. I am not talking about medical devices prescribed by a doctor, but rather commercially available neurotech that students choose to use. The ethical implications of direct-to-consumer neurotechnology used in classrooms transgress and transcend traditional academic subjects and silos. As such, research into these topics is transdisciplinary, meaning that collaboration among scholars across disciplines, together with policymakers and industry, may be useful – even necessary – as we tackle complex questions without easy solutions. In a postplagiarism era, humans are challenged to grapple with questions about ethics and integrity for a socially just world at a time when advanced technologies such as artificial intelligence and neurotechnology cannot be unbundled from teaching, learning, assessment, science, business, or everyday life. The ethical implications of ubiquitous artificial intelligence and neurotechnology (e.g., BCIs) in education are important at a global scale as we prepare today’s students for academic and lifelong success.

Availability of data and materials

All materials used in the preparation of this article are cited in the reference list.


Acknowledgements

I am grateful to those who provided feedback on early drafts of this work: Phillip Dawson, Rahul Kumar, and Todd Maki.

Funding

Not applicable.

Author information


Contributions

This is a sole-authored article. As such, I am 100% responsible for the content herein, including any errors or omissions.

Corresponding author

Correspondence to Sarah Elaine Eaton.

Ethics declarations

Competing interests

I have no financial competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Eaton, S.E. Postplagiarism: transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. Int J Educ Integr 19, 23 (2023). https://doi.org/10.1007/s40979-023-00144-1
