If you brought a programmable calculator to your fifth-grade class in 1965, you would have quickly been bounced out on your ear by Mrs. Clark, your persnickety and vaguely sullen math teacher. You know what she would have called you? A cheater.
Ah, how times change―and how technology is often a driver of that change. Sometimes cheating goes by the name of its twin: innovation.
Since LLMs and GenAI rocketed into the mainstream, everyone, everywhere has been trying to figure out where the lines are and when they blur. Some are pretty clear: Don't steal a particular author's work and put your name on it as if it were yours. Some rules have yet to be written. When an LLM synthesizes a hundred thousand different sources to answer one part of a particular question, you can see how this idea of direct attribution falls apart. The models behind these tools are only getting bigger and more sophisticated, and you should know: If you have used ChatGPT, you are one of its trainers.
The concept of cheating itself is in flux as people collaborate more closely with AI. It's not clear where the boundary between human and AI lies when the two co-create, with AI serving as an intellectual prosthetic, not unlike a digital limb or a hearing aid.
Where do you end, and where does it begin? What is an intellectual prosthetic, anyway? It may soon become your new best friend. Let's stop short of brain implants (not today, Satan) and think of it this way:
An intellectual prosthetic is an artificial device or system that helps a person improve cognitive abilities such as memory, learning, creativity, or problem-solving. Examples include calculators, computers, smartphones, brain-computer interfaces, neural implants, and artificial intelligence. These devices can extend or augment the human mind in different ways, each with its own benefits and challenges.
You can treat AI as a partner, or you can wield it like a cudgel. It's up to you, but the innovators will be way out ahead on this. Of course, as AI itself reminds us: You should respect the ethical and legal boundaries of using ChatGPT, such as not plagiarizing, not violating privacy, and not harming others.
ChatGPT is a powerful and innovative technology, but it is not a substitute for your own intelligence, judgment, and creativity. The question is and will continue to be: How do we define ethical, privacy, and legal boundaries? What does it mean to do no harm in this context? It's on us to figure that out.