

ChatGPT is Forcing Us to Do Some Deep Reflecting: Are You Ready? (Herbst Fellow Essay)

People are freaking out about ChatGPT's ability to generate complex and unique texts that can be hard to distinguish from human writing (Dale, 2020). With ChatGPT gaining 100 million users in two months (The Economic Times, 2023) and becoming the fastest-growing consumer application in history, many people feel that AI development has come out of left field and cannot help but anticipate an AI apocalypse. Credible news outlets like Politico, the New York Post, and The Washington Post have run headlines such as "Tracking the AI apocalypse" (Robertson, 2023); "Rogue AI 'could kill everyone,' scientists warn as ChatGPT craze runs rampant" (Cost, 2023); and "Opinion | ChatGPT might be the end of civilization" (Leibbrand, 2023).

Another pressing question many people have is "Will AI take over my job?" White-collar jobs that involve processing data, writing text, and even programming are the most likely to be affected. But the answer could go two ways: possibly yes and possibly no. Sam Altman, OpenAI's CEO, advocates for universal basic income (AI News Base, 2023), which suggests he thinks the answer is "yes." On the other hand, affected doesn't have to mean replaced. Instead of AI replacing lawyers, lawyers working with AI may replace lawyers not working with AI (Oliver, 2023).

However, asking whether AI will take over isn't productive. To figure out what we should do with AI, we must ask, "What does it mean to be human in the age of AI?" Before we discuss this question, let's first gain a better understanding of what exactly ChatGPT is.

ChatGPT: A Deeper Dive

AI research began in the 1950s, but its performance was unremarkable until recently. Once large language models (LLMs) scaled up to billions of parameters, trained on text and images from across the internet, they finally displayed intelligent behavior. ChatGPT is an LLM, and it is like a vast scrapbook created from a huge pile of snippets of text from the internet that it then glues together on demand (Heaven, 2020). ChatGPT's acceleration in capability was unexpected: in 2022, OpenAI's GPT-3.5 scored only in the 10th percentile on the bar exam, but less than a year later, GPT-4 scored in the 90th percentile. However, ChatGPT is not truly intelligent.
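
The "scrapbook glued together on demand" intuition can be made concrete with the simplest possible statistical language model, a bigram counter. This is only a toy sketch with a made-up six-word corpus; real LLMs learn vastly richer patterns over billions of weights, but the core move, predicting a statistically likely next token, is the same.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for the internet-scale text an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model, the simplest
# version of "predict the next token from the tokens before it".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen following `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat", the most frequent follower of "the"
```

A real LLM conditions on thousands of preceding tokens through billions of learned weights rather than raw bigram counts, but "choose a statistically likely continuation" is still the heart of it.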

There are two types of AI: narrow and general. Narrow AI can perform only one defined task, while general AI demonstrates intelligent behavior across a range of cognitive tasks. An example of general AI would be J.A.R.V.I.S. from Iron Man.

ChatGPT's only task is to generate text, so generative AI is narrow, for now. Mr. Altman's ultimate goal is to reach general AI. In recent interviews, he has said general AI has benefits for humankind "so unbelievably good that it's hard for me to even imagine." However, be cautious in believing his words, because he has also mentioned that general AI could kill us all (Roose, 2023).

ChatGPT's Current Limitations

Although ChatGPT seems to understand what it is saying, we must note that it actually doesn't. When prompted for sources, it will provide fake articles that don't exist. This stems from how LLMs learn likelihood: asked for a source, the model produces a title very much like one a human would have written for that topic. Tellingly, AI spouting false information is called hallucinating (Johnson, 2022), and this poses a serious problem for the public good. Since many people use ChatGPT without being aware of this limitation, they will believe the false information. Furthermore, GPT-4 can make false facts more convincing and believable than earlier GPT models could. Thus, overreliance occurs when users excessively trust the model, leading to inadequate oversight (OpenAI, 2023).
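
That likelihood-chasing failure mode can be sketched in a few lines. The probability table below is entirely invented for illustration (the names and numbers are hypothetical, not from any real model): the point is that each step picks whatever continuation sounds most likely, and nothing ever checks whether the resulting claim is true.

```python
# Invented next-word probabilities; none of these names or numbers
# come from a real model. Generation simply maximizes likelihood.
next_word_probs = {
    "source": {"is": 0.6, "may": 0.4},
    "is": {"Smith": 0.7, "unknown": 0.3},
    "Smith": {"(2019)": 0.9, "(2024)": 0.1},
}

def greedy_next(prev):
    """Pick the highest-probability continuation; truth is never consulted."""
    candidates = next_word_probs[prev]
    return max(candidates, key=candidates.get)

words = ["The", "source"]
for _ in range(3):
    words.append(greedy_next(words[-1]))
print(" ".join(words))  # a plausible-sounding, unverified citation
```

Real models sample from far richer distributions, but the same absence of a truth check is what produces confident fake citations.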

As one Twitter and ChatGPT user has pointed out, we should be aware that ChatGPT can be misleading if not scrutinized.

Some limitations can be explained, but exactly what's going on inside ChatGPT isn't clear. Developers don't fully understand how the massive amounts of data are being linked together, and they can't explain how ChatGPT's unique results are derived. Nor can they explain why internet-scale training allowed ChatGPT's intelligent behavior to emerge. This is the black box problem: we can see the responses the AI generates (the box), but we cannot see how the system makes its decisions (what's happening inside the box). It concerns me that, despite this, developers continue to create more black boxes as they release more LLMs applied to problems outside language processing, like predicting protein structures (Timmer, 2023). It is wonderful that LLMs are advancing science, but it would be more beneficial if we could follow along and understand how the LLMs derive their predictions.

Because ChatGPT is trained on billions of parameters, its scale also makes it harder for OpenAI to test every case. For example, GPT-4-early was observed to have serious safety problems, including harmful content and privacy risks (OpenAI, 2023). Intentional probing could elicit advice for self-harm, hateful content, content for planning violence, and instructions for finding illegal material. In addition, GPT-4-early had the potential to be used to identify individuals with a large online presence when a user supplied it with outside data (OpenAI, 2023). Being able to identify someone without their consent, or even their knowledge, raises serious privacy concerns.

OpenAI implemented safeguards to mitigate these challenges, but again, it will most likely never catch every failure case and therefore never be able to guard against all of them. Creating safeguards can be problematic as well: attempts to filter out toxic speech in systems like ChatGPT can come at the cost of reduced coverage for texts about marginalized groups (Welbl et al., 2021). Essentially, this safeguard solves the problem of being racist by erasing minorities, which, historically, doesn't put it in the best company (Oliver, 2023). The list of limitations goes on, but Big Tech continues to roll out more LLMs for commercial use. This is reckless and seriously threatens public safety.

Silicon Valley's Irresponsibility

Even the decision to release ChatGPT early was rash. OpenAI's original plan was to release GPT-4 only after thorough testing. But before GPT-4 was ready, the company's executives urged workers to release a chatbot to the public quickly, worried that rival companies might upstage them by releasing their own A.I. chatbots first, according to people with knowledge of OpenAI. So they decided to dust off and update an unreleased chatbot built on GPT-3, the company's previous language model, creating GPT-3.5 (Roose, 2023).

Clearly GPT-3.5 didn't go through proper testing, because it was released within two weeks. GPT-3.5 still produced biased, sexist, and racist text, but OpenAI wanted to be first, probably for the money and power. If that was their goal with the early release, they achieved it. Because of ChatGPT, OpenAI is now one of Silicon Valley's power players. The company recently reached a $10 billion deal with Microsoft and another deal with BuzzFeed, and Mr. Altman has met with top executives at Apple and Google (Roose, 2023). OpenAI's mission statement says the company will "ensure that artificial general intelligence … benefits all of humanity" and that its generative models are safe and aligned with human values (OpenAI, 2023), but the company seems to have become too profit-driven, undermining its original spirit.

However, ChatGPT isn't the only product that was released irresponsibly and reflects the culture of Silicon Valley. For example, according to the National Transportation Safety Board (NTSB), an experimental automated driving system by Uber's Advanced Technologies Group was deployed before it could account for jaywalking pedestrians; the system did not classify pedestrians as human unless they were walking in a crosswalk (National Transportation Safety Board, 2019).

Everyone knows the mantra of Silicon Valley is "move fast and break things," but you would think they'd make an exception if their product literally moves fast and can break people (Oliver, 2023).

Why We Need Guardrails in Legislation

AI does have the potential to help humans achieve great things (which I will discuss later), but we urgently need legislative guardrails. If we are not careful, such progress might come at the price of civil rights or democratic values. Suresh Venkatasubramanian, a computer science professor at Brown University appointed to the White House Office of Science and Technology Policy, says, "These technological systems impact our civil rights and civil liberties with respect to everything: credit, the opportunity to get approved for a mortgage and own land, child welfare, access to benefits, getting hired for jobs — all opportunities for advancement" (News for Brown, 2022). For such reasons, the Federal Trade Commission (FTC) has declared that the use of AI should be "transparent, explainable, fair, and empirically sound while fostering accountability." OpenAI's GPT-4 satisfies none of these requirements, yet the FTC has taken no action (Federal Trade Commission, 2023). The Center for Artificial Intelligence and Digital Policy (CAIDP) recognizes how this inaction could allow OpenAI to harm our civil rights, so it filed a complaint demanding that the FTC act. In the complaint, the CAIDP states, "There should be independent oversight and evaluation of commercial AI products offered in the United States" (Federal Trade Commission, 2023).

Nonetheless, as the constant adoption of information technologies deepens uncertainty about the future, traditional governance instruments are less likely to be adequate. Thus, we need to create frameworks using systems such as virtue ethics that are better suited to navigating uncertainty (Bauer, 2022). Echoing Aristotle's catalogue of virtues, Shannon Vallor recently proposed twelve technomoral virtues, including humility, justice, courage, magnanimity, empathy, care, and wisdom. In her book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Vallor argues that cultivating these virtues will help individuals live well with AI (Bauer, 2022). If we use such values to evaluate AI as it develops, we can better ensure that AI aligns with substantive human values.

We may not have solid legislation yet, but guardrails build trust in the technology and allow innovation to flourish without fear of liability. In his testimony to the U.S. Equal Employment Opportunity Commission, Venkatasubramanian says that arguing against guardrails amounts to "advocating for sloppy, badly engineered and irresponsible technologies that would never be deployed in any other sector" (U.S. Equal Employment Opportunity Commission, 2023).

The effort of national governments to develop formal frameworks for AI policy is a recent phenomenon, but the pace of AI policymaking is expected to accelerate in the next few years (Center for AI and Digital Policy, 2022). For example, UNESCO's "Recommendation on the Ethics of Artificial Intelligence" came out in November 2021, suggesting how countries should begin to evaluate AI. In October 2022, the United States also created the Blueprint for an AI Bill of Rights, developed in consultation not only with agencies across the federal government but also with the private sector, civil society advocates, and academics (Venkatasubramanian, 2023). As we continue to formulate our values, it would be most productive to ask, "What makes us human?" Then we can better understand where we want to go with AI and start creating real legislation.

What makes us human in the age of AI? What is the human interest?

Like the discovery of the heliocentric system, AI will change the worldviews we live by, especially the modern experience of what it means to be human. Humans are no longer the only talking thing in a world of mute objects, and I cannot believe so few people are discussing the philosophical stakes of generative AI. In "Discourse on the Method," Descartes considers language a power only humans possess, because animals cannot understand what our words mean, and it sets us apart in a qualitatively exceptional way from animals and machines (Rees, 2022). Because of language, humans are capable of reasoning and of methods to elevate the mind. Now, with OpenAI working toward general AI, there is a chance that this distinction between human and non-human will no longer hold.

In several cases, AI has already benefited human lives, and the US government acknowledges this. In the "Blueprint for an AI Bill of Rights," the White House writes that from "automated systems that help farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases earlier in patients, these tools hold the potential to redefine every part of our society and make life better for everyone" (The White House, 2023). People have also generated harmless videos for entertainment, like YouTuber Grandayy creating an Eminem song about cats (HAL-9000, 2023).

All jokes aside, if OpenAI achieves general AI, where the system is genuinely capable of knowledge (remember, current generative AI is only doing predictive analysis), language and intelligence will no longer set us apart from everything else. And further into the future, what if AI reaches the point of superintelligence? Superintelligence is a hypothetical agent whose intelligence far surpasses human intelligence, and it is likely that not long after general AI is achieved, superintelligence will emerge. It may still seem well within the realm of science fiction, but AI could develop emotional intelligence as well; machines would become capable of feeling emotions. Many people say that connection and the ability to love make us human, but if general AI, and thus superintelligence, is the direction we want to head in, we may create other sentient beings. Is this what we want? More specifically, can humanity even handle it?

Without a deeper discussion of what direction society should take, legislation will not be able to provide any certainty for the future. AI will only continue to accelerate. There must be a point at which we draw the line, so we must be proactive to ensure we don't reach the worst-case scenario.

