Artificial Intelligence (AI) touches most aspects of our lives. It works in the background from the moment we unlock our phones with facial or fingerprint recognition to the customized reels on our Instagram feeds. In recent years, AI has revolutionized industries, enhanced productivity, elevated our daily lives and frightened many with its power.
One of AI’s most notable breakthroughs is natural language processing, in which AI models such as OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Copilot can understand and generate human-like text. Language processing has significantly advanced content writing and coding, but it has also significantly increased the likelihood of unoriginal work in classrooms across campuses.
Let’s face it, typing is so 20th century. Thankfully, our friends Alexa and Siri saved us from finger cramps. Now, we can just yell random questions into space and get lightning-fast answers, like a digital genie that knows everything.
Another breakthrough, self-driving cars, has created safer and more effective transportation networks. Or has it?
“When self-driving cars hit the road, traffic jams will be a concern of the past,” freshman computer science major Ritesh Rokka said. “But when an accident occurs, who will take the blame: the programmer, the manufacturer or the car?”
AI impacts the workforce with a complex mix of job displacement and job creation. AI might replace jobs built on repetitive tasks, like data entry and customer service, while creating jobs for data scientists and AI engineers.
Despite its undeniable benefits, there are some concerns regarding the ethical implications of AI. One of the leading anxieties is the potential mass displacement of jobs. As AI models can perform tasks more precisely and in less time than humans, there is concern that they will eventually replace human labor, causing widespread unemployment.
AI use in certain fields has also given rise to ethical concerns, since these systems have the potential to be biased and lack transparency. AI might have revolutionized our lives in many ways, but its use in criminal justice raises eyebrows. Imagine standing in front of a judge trained on questionable social media posts and clickbait articles, being judged based on your affinity to a certain group of people. It just doesn’t make sense. AI might someday revolutionize criminal justice too, but for now let’s just hope it doesn’t learn too much about human criminal history.
John W. Sutherlin, professor of political science and public administration at ULM, emphasized that the ethical foundation of AI is shaped by human input.
“AI systems will be no better or worse than human decision-making. In fact, AI’s ‘ethics’ stem from what humans have provided as the base input,” Sutherlin said. “Still, AI, upon ‘consciousness,’ will begin to formulate its own set of morals and principles.”
He also raised questions about whether AI should be guided by utilitarian principles or duty-based ethics, reflecting ongoing human ethical debates.
“Do we program the AI system of ethics to be based on the principle of the ‘greatest good for the greatest number’? Or should we rely on duty and obligations? Are consequences or rights more important?” he said. “As you can imagine, humans are still grappling with these questions. AI will not solve ethical dilemmas any better than humans have.”
Advances in artificial intelligence hold enormous potential for humanity, but they also pose significant challenges that should not be overlooked when forming policies for AI’s application. By promoting ethical AI development and use, we can ensure that AI keeps enhancing our lives while maintaining our morals and the welfare of society.