The Potential Promises and Mayhem of AI

While AI has been featured in movies like Simulant recently, this year we witnessed an acceleration of its potential promises and mayhem. Political, economic, and ethical issues were thrust into the headlines in November when OpenAI CEO Sam Altman was pushed out and rehired within days. The crisis was reportedly initiated when several senior researchers sent a letter to the board of directors, warning of a discovery so powerful that it threatened humanity. Um, gulp! Although initially founded as a non-profit with the “goal of building safe and beneficial artificial general intelligence for the benefit of humanity,” and deliberately staffed with a board that included sceptics, according to Forbes, OpenAI is now “unambiguously profit-driven”: an $86 billion company whose non-profit branch still technically controls its for-profit branch. Yeah, right!

We’re guessing most of you have yet to fully grasp the drastic, world-changing impacts of these technological leaps. We certainly haven’t! Of the various chatbots that now abound, perhaps OpenAI’s ChatGPT is the most famous. These technological marvels can now create realistic images from other images or from text, producing deepfakes (images and videos) that many people cannot distinguish from real ones. Since laws never keep pace with technological advances, this does not bode well for elections (consider the circulation of false political messages – yes, worse than the 2016 US presidential election), privacy (think revenge porn on steroids), and child safety (all new tech seems to be rapidly exploited by pedophiles).

AI can also produce human-like conversation and writing, such as articles, posts, essays, emails, and more. But this is both good and bad. While our fearless leader and academic insider, Charmaine A. Nelson, has been privy to professorial angst about potential student plagiarism throughout her two-decade career, she now recounts a drastic pivot: professors are no longer simply wary of students stealing the thoughts and words of scholars, but of entire assignments being created by chatbots. Also worrying? There is ample evidence that when faced with a lack of data, chatbots act much like humans and simply make things up. Indeed, a US federal judge recently imposed a $5,000 fine on two lawyers and a law firm for submitting fictitious legal research created by ChatGPT.

While some may shrug this off as an inevitable sign of “progress,” we question what our world will look like, and what human beings will become, if all of our intellectual labour is outsourced to computers. Various studies already indicate that our global over-reliance on smartphones has reduced our cognitive capacity. But what happens if there comes a day when our newspaper articles, novels, and screenplays are composed by chatbots? Luckily, this last outcome has been delayed, at least for now, since the Hollywood writers’ strike (fought and won by the Writers Guild of America) was, according to Brian Merchant of the Los Angeles Times, “something of a proxy battle of humans vs. AI.” Indeed, part of the existential threat for writers was the possibility that Hollywood studios would use generative AI not merely to polish human-made scripts, but to generate original ones.

But another, more imminently world-destroying capacity of AI looms. To the extent that technology is usually harnessed to aid humans in warfare, we should be extremely wary of AI and its potential not merely to direct and carry out our violent assaults on one another, but to defy our commands about warfare altogether. Indeed, although denied by the US Air Force, the Guardian reported that during a simulation, an AI-powered drone killed its operator after being instructed not to kill its target. The system also sought to destroy the communications tower “that allowed the operator to communicate with the drone to stop it from killing the target.”

What does all of this mean for humanity? We’d like to suggest that it would obviously be beneficial to apply the brakes and take the time to understand precisely what we are creating before unleashing it on the world. But sadly, it is perhaps already too late.