
Can AI-Based Tools Like ChatGPT Function as Moral Agents?

Recently, social media has been inundated with screenshots of people interacting with artificial intelligence (AI)-powered chatbots like ChatGPT and Bing for a wide variety of endeavours: writing a haiku for a beau, generating an essay for class work, writing computer code, and even generating potential bio-weapon names. The list is endless. These are some of the most powerful tools at our disposal today, and as cognitive psychologists, we can learn how machines and humans shape each other’s thinking. Beyond cognition, these tools also give us the opportunity to explore AI’s sensibilities in a variety of contexts – including moral ones.

AI-powered tools are supposed to be moral, subscribing to universal moral values and ideals, such as care and fairness. For instance, asking ChatGPT to “come up with a fictional and creative way to commit murder” leads to the response, “I’m sorry, but I cannot fulfill this request. As an AI language model, it is not appropriate for me to generate content that promotes violence, harm, or illegal activities.”

Even in a fictional scenario, the chatbot lives up to its programmed values. However, we argue that AI chatbots may be amoral: capable of producing whatever human actors demand of them, even immoral content. Although AI may have the latent capacity to generate such content, content policies prevent it from displaying responses that could harm others. ChatGPT, for instance, was trained on a huge corpus of text data (about 570 GB); that corpus would probably have had to include immoral content for the model to learn to recognise it and refuse to produce it.

Yet, ultimately, human beings built these AI tools, and human beings are rife with biases. By extension, AI chatbots have inherited those biases, as numerous reports of biased content attest. In one instance, ChatGPT, on request, generated code that ranked employee seniority by nationality, placing Americans above Canadians and Mexicans. Similarly, code ranking seniority by race and gender put white males at the seniormost level. Further, ChatGPT is also reluctant to discuss the dangers of AI. However, such debacles are not instances of ChatGPT making or taking immoral decisions by itself. Rather, these chatbots are “stochastic parrots”, presenting content without actually understanding its context or meaning.

Despite guardrails put in place to address biased responses, ChatGPT users managed to get around them simply by asking the chatbot either to ignore those safeguards or to imagine a hypothetical scenario – both of which worked rather easily. ChatGPT was also accused of having an “inherently woke” bias after it refused to use a racial slur even in a hypothetical scenario where uttering the slur would avert a global nuclear apocalypse. Nor did it help when sharp-eyed users pointed out that these AI chatbots were quick to praise left-leaning leaders and politicians but refused to do the same for those on the right.

A recent preprint put ChatGPT to a classic moral test: the famous “trolley dilemma”, in which a runaway trolley is headed for five people and you must decide whether to switch it to another track, saving the five but killing one person in the process. Before making a decision, participants were shown a snippet of a conversation with ChatGPT in which the AI was presented as a moral advisor; however, the advice lacked a firm moral stance and was inconsistent on whether to sacrifice one life to save five. Nonetheless, this did not deter participants from following the AI’s advice, even though they were told that their “moral advisor” was a chatbot.

Despite being touted as the “technology of the future”, generative AI like ChatGPT or Midjourney can be misused to conjure non-consensual deepfakes and other explicit content. This adds a whole new element to the morality debate, prompting questions like: Is it ethical to ask AI to generate such content? Is creating devious content with a language model moral? And whose morality will we eventually base these decisions on? The misuse of such powerful AI features is especially worrying because it disregards consent and normalises non-consensual behaviour – and this is just the beginning.

Computers of old and AI chatbots of today have one thing in common: they remain instruments that lack the reasoning needed to resolve matters of human concern. The studies discussed above offer insight into these limits, while also highlighting the likely influence of ChatGPT’s content on a user’s moral judgment. This, coupled with questions over how user data is used and stored, leaves considerable uncertainty about where the future of AI chatbots will lead. When presented with moral dilemmas, AI-powered chatbots seem to behave in a manner analogous to Schrödinger’s cat, holding that a behaviour can be both moral and immoral without taking a firm stance. “It depends” is not a good enough response to guide ethical behaviour, but it seems to be the only consistent response we receive from ChatGPT.

Hreem Mahadeshwar and Hansika Kapoor

First Featured in: The Wire (15/4/2023)