Unveiling the Moral Implications of AI:  Resisting Plagiarism to Uphold Authenticity

Justin Clark

I was brought up in a family in the Midwest— a loving, supportive family, to be sure— but also quite a religious one.  I am grateful for the upbringing I had; I wouldn’t have wanted a different childhood.  But I am a questioner, and I always have been.  

As some of us know, there’s a corner of (many) religious traditions which serves to shut down questioning.  That corner recognizes no positive role for the tender nerve of doubt; it leaves no freedom for intellectual exploration.  In that corner, questioning, exploring, and doubting are considered “bad”— a threat to the faith.  And that’s a corner of religious territory I could not remain in.  Although it’s only a corner, I consider it thoroughly dangerous: questioning is essential to a fulfilled human life.

These days, I am lucky to call myself a questioner by profession, a professional philosopher.  I have managed to keep hold of several vestiges of my religious upbringing, some of which remain essential to who I am as a thinker.  But I have also discarded large swaths of what I consider the more rigid, institutionalized, dogmatic portions of the tradition I was raised in.  I’ve done my best to sort things out— to take what’s mine, to leave the rest behind. 

Here’s one thing I’ve learned.  As we mature, we reach an age where we begin to distinguish between those beliefs that are merely inherited (perhaps from our upbringing) and those beliefs that are “earned” through a process of authentic reflection.  We earn them by careful consideration, by assessing the reasons we have for believing in the first place.  We gradually become capable, in other words, of developing an important ability— the ability to examine our own beliefs, to make ourselves the object of investigation, to scrutinize our assumptions, to improve ourselves morally and epistemologically, as thinkers and people.  In short, we learn to think for ourselves.  

I confess: I cannot imagine a more valuable skill.  Perhaps that’s why I love philosophy… Perhaps that’s why I’m ceaselessly sympathetic to the age-old Socratic adage, “the unexamined life is not worth living.”   

You might be wondering what any of this has to do with plagiarism.  

First, I want to draw a distinction when it comes to our interactions with technology.  With each new piece of technology introduced to the general public, I believe we are forced to revise our relationship to our own thoughts, to consider the role the technology will play in mediating our experience of the world.  Some technologies are tools.  Or, at least, the companies that create them encourage the public to see them as such.  In education, I have no doubt that creative teachers will find impressive ways to introduce these fancy new “tools” into the classroom.  

But some of these technologies are much more than just “tools.”  Unlike a hammer, let’s say, they force us to ask difficult questions we would never ask of mere “tools”: how we are being used by them, and, by extension, how our kids and the younger generations are being used by them.  

Smartphones and social media are technologies using us.  They are gathering data about us, addicting us; they are designed to exploit our attention for as long as possible, to manipulate our desires, to increase our purchasing, to pander to our whims, to reinforce our beliefs and prejudices, to privilege content we antecedently approve of, and so on.  As much as we use them, they use us.  They are monitoring us, shaping us, exploiting our inner lives.  If we are to consider them “tools” at all, therefore, we should consider them uniquely invasive.  

Programs like ChatGPT-4 are not using us in this manner.  ChatGPT-4 was engineered by OpenAI “to push forward the development of a machine designed to write prose as well as, or better than, most people can.” [1]  Rolled out in November of 2022, programs like ChatGPT have forced educators like myself to reconsider how important it is for young people to undergo a process of writing for themselves— using their own words, expressing their own thoughts, forming their own sentences, and so on. [2]  Writing can be grueling, and clever “chatbots” like ChatGPT offer to relieve us of the irksome burden of writing and rewriting.  They offer to do the task of writing for us.  It indeed seems like a generous offer.  So, what’s the harm? 

Writing is a basic human task.  It is one of the many tasks that we could allow emerging technologies to perform for us.  We are indeed entering an age where these emerging technologies will offer to fight our wars for us, to simulate human companionship for us, to provide elderly care, to watch over our children.  And the list goes on, for a very long while [3].

This list forces me, as a philosopher, and you, as a reader living in the same world, to ask: what are the implications of outsourcing these basic human tasks to artificial intelligence?  

I must admit that when it comes to this matter, I am an alarmist.  I hear the blare of apocalyptic bells in the background of this conversation every time I have it.  I think that if we aren’t careful, technology will take our humanity away.  Phew!— glad I got that technophobic confession out of my system!  Stepping back into the terrain of reason and argumentation, I want to ask: of the burdens we inevitably encounter in life, which ones are worthwhile?  

Because, it’s true, ChatGPT-4 can relieve us of an enormous burden— the burden of producing written text we don’t feel like producing.  That can be helpful, sometimes.  But resolving conflicts, making friends, maintaining intimate relationships, caring for the sick or the elderly, raising children: these are human burdens.  Without these, we might wonder what we are left with.  We might wonder whether we can develop admirable traits, such as courage, love, compassion, or responsibility, without experiencing adversity and finding ways to overcome it.  For these activities, though perhaps burdensome at times, are also worthwhile— they are worth our while, precisely because they are essential to our growth, to our wellbeing, to our humanity.  Some “burdens,” in other words, help create the friction necessary to make our lives meaningful.  As we mature, they may cease to seem like “burdens” (or so I am told).

So, of course, the question that remains to be answered is: is writing a worthwhile burden, then?  Or is writing an irksome task (like vacuuming or dishwashing) that we should be happy to outsource to technology?  The answer is not obvious.  It may be a little of both.  

If writing is like our other burdens, it is worth examining the proliferation of so-called ‘carebots’ for the elderly, about which the AI ethicist Shannon Vallor writes:

We can transfer caring tasks to entities that will not experience them as a burden, and hence require no moral, social, or financial compensation.  Yet there are moral goods internal to the practice of caregiving that we should not wish to surrender, or that it would be unwise to surrender even if we frequently find ourselves wishing to do so.  For example, most ethicists would agree that “the elderly need contact with fellow human beings,” and that if the use of carebots led us to deprive the elderly of this contact, this would be ethically problematic.  But rarely does one hear an ethicist ask whether caregivers need contact with the elderly!  (Here we may substitute ‘children,’ ‘the sick,’ ‘the injured,’ the ‘poor’ or any other subject of care) [4].

I would like to extend her profound thought to the task of written composition and answer my most recent question.  While some of it may be merely irksome, there are moral goods internal to the practice of writing that we should “not wish to surrender,” and that it would be “unwise to surrender even if we frequently find ourselves wishing to do so.” 

I recall a time before ChatGPT.  It wasn’t so long ago.  But I cannot remember a time when instructors were not somewhat concerned about plagiarism— the sneaky possibility that students might pass off the work of others as their own.  By plagiarizing, one is attempting to bypass the intellectual labor that might help one develop.  As a teacher, my goal is to foster that ability— the ability of students to think for themselves.  The written expression of one’s own thought is crucial to the process.  And AI chatbots have made it far too easy to bypass the process, and far too hard for instructors to police it.  

Before ChatGPT, a student wishing to bypass the process could always ask a friend to write their essay for them, find a willing grad student hard up for cash, or trawl the internet for a source to cut and paste.  Recent “advancements” have simply made the bypass easier.  It is not just an issue of intellectual development, either.  By plagiarizing, one engages in an act of deception, whereby one takes credit for (or receives recognition for) intellectual labor that is not one’s own (not “earned”).  It is therefore an issue of moral development, a matter of authenticity and respect for others and their labor.

By allowing other people— or clever AI chatbots— to produce written text for us, we are drawing further away from ourselves, alienating ourselves from our own thoughts.  We are one step closer to an “unexamined life.”

Alongside the bells, I already hear the technophile’s response.  Humans adapt.  It’s what we do.  Surely, we will discover alternative ways to think for ourselves within our new technological wonderland, where AI systems perform the task of writing for us.  Right?  

Socrates, the controversial originator of my preferred adage, wasn’t much of a writer himself.  He left us no writings.  He was a proponent of cooperative dialogue instead, famous for suggesting that we learn to think for ourselves by testing out our core beliefs in honest conversations with others.  

Perhaps Socrates’ example provides a more optimistic direction for human-bot interaction.  It could cast the onset of AI-generated text as an opportunity— an occasion to return to honest dialogue, to replace the writing process with meaningful conversation between human and robot.  

It’s a nice thought.  But as any teacher in the modern era will attest, there is one major obstacle blocking this alternate route to responsible thinking.  Technological distraction.  Personal devices are designed to capture and hold the attention of their owners.  They are now so pervasive that it’s becoming increasingly hard to find (or create) an environment for meaningful discussion and sustained dialogue.  

I cannot help but think we are like frogs in the old apologue, slowly being boiled alive after being placed into lukewarm water.   We are no longer in lukewarm water.  

Slowly but surely, we are finding ourselves in a corner of the technological jungle which serves to crowd out questioning.  This corner leaves little room for the tender nerve of doubt; it distracts us from intellectual exploration.  In this corner, questioning, exploring, and doubting are being outsourced, replaced by devices.  It is a corner of the technological world I cannot remain in.  

I can only hope that our young people will learn to think for themselves despite their technological upbringing.  In my capacity as a professor, I can tell my students about the bells.  Make them read Vallor and discuss.  But I can’t change technology’s overpowering momentum.  So, my work, personal and professional, may require, as my friend and former colleague Charlie Huenemann recently put it, an attempt at “radical hope.”  Here we go.

_____________________________________

[The title of this essay was helpfully generated by ChatGPT-4 at the expense of my own authenticity]

[1]  John Seabrook, “The Next Word: Where Will Predictive Text Take Us?” in The New Yorker, October 19, 2019.

[2]  OpenAI, ChatGPT, https://chat.openai.com/

[3]  For a similar discussion about outsourcing driving, for instance, see Matthew Crawford, Why We Drive: Toward a Philosophy of the Open Road, Mariner Books (2020).

[4]  Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford University Press (2016), p. 222.