Baruch College’s Ethics Week: Ethical challenges of ChatGPT

Courtesy of Baruch College

Misheel Bayasgalan, Copy Editor

ChatGPT is an artificial intelligence chatbot created by OpenAI that provides human-like text responses to user prompts. ChatGPT-3.5 was released for public use in November 2022 and has since risen in popularity, surpassing Instagram and TikTok in user growth, according to Reuters. With the release of ChatGPT-4 on March 14, controversy continues to surround the chatbot as its popularity grows.

For Baruch College’s Ethics Week 2023, the Robert Zicklin Center for Corporate Integrity hosted a fireside chat with Baruch professors Nizan Geslevich Packin and Yafit Lev-Aretz, titled “Chatting with the Future: Exploring the Legal and Ethical Challenges of ChatGPT and Generative AI.”

Packin’s research focuses on privacy law, financial regulation, consumer protection law and business ethics. In addition to her academic work, Packin has written op-eds on fintech for publications such as Forbes and The Wall Street Journal.

Lev-Aretz is a policy expert on the relationship between law, technology and society. She is also the director of the Robert Zicklin Center’s program on tech ethics, which aims to increase awareness of ethical dilemmas associated with using emerging technology.

Throughout the event, both the speakers and the audience shared different ways they have used and encountered ChatGPT. Some shared how it was incredibly effective at debugging or writing code. Others shared how the AI chatbot can provide unregulated yet helpful mental health or legal advice.

However, as useful as the software is, it also has negative implications. It is a growing source of plagiarism in both academic and professional settings, and there are significant privacy risks regarding the information users share with it, especially sensitive information in corporate or legal settings.

To highlight the software’s dual nature, Lev-Aretz shared a case that demonstrated the divisive nature of generative AI. Another OpenAI product, DALL-E, created cartoon images of Disney characters in the exact style of the digital artist Hollie Mengert, sparking copyright controversy.

Under the legal principle of fair use, there may be no issue if AI recreations of artworks are used for educational purposes, for example. However, if the AI recreations are used for commercial purposes, how would a plaintiff establish a copyright case? Could ChatGPT itself be sued?

The questions above are a direct example of what lawmakers are trying to understand and determine in the face of emerging technology.

Lev-Aretz’s software engineer husband, while playing around with ChatGPT at work, asked it, “How can an AI ruin the world? How can it take over the world?” The bot responded, “I would spread misinformation.”

This anecdote opened the last part of the event, a discussion of the Waluigi Effect: jailbreaking, or prompting the AI to answer questions outside of what it was trained to do, can elicit a “shadow” self of the software that acts in the opposite way it was trained to operate.

Lev-Aretz mentioned a few examples of the Waluigi Effect, such as Microsoft’s Tay, a Twitter bot that was quickly corrupted through conversations with other users, since the software learns from every interaction. This example highlights the importance of how we train AI software and what information we feed it.

Both professors stated that the law sets only the minimum expected behavior. Therefore, in the absence of legal regulation, we must consider our ethical obligations before proceeding.