What Are LLMs and Why Should I Care?
The uncut version of the second column in Natural Intelligence, my opinion column on AI featured in my college paper
__torihope__ is a junior writing about the inevitable AI future with a focus on ethics and well-being. Her column, Natural Intelligence, runs every other Friday.
Did you use ChatGPT this week? In the past hour? If you did, then you used an LLM. So, what are they, and why should we care?
Before forcing myself to enter it, I found the tech world very intimidating. Sure, my generation practically grew up with iPhones, giving us an advantage at intuitively navigating new interfaces (which is apparent when interacting with older generations, like when I increase the volume on my mom's iPhone and she applauds my computer savvy). However, with the rapid pace at which technology is evolving and the lack of hardware and software knowledge among non-STEM populations, it's easy for people like me to throw our hands up and surrender to the wave of ones and zeroes.
Writing this column proved intimidating indeed, as I am not an expert on artificial intelligence. However, here I go anyway, wearing my naivete on my sleeve.
I was surprised to discover that serious discussions about AI have been in circulation since 1950, when Alan Turing published his seminal paper "Computing Machinery and Intelligence." Turing, a British mathematician whose early death followed a court order to undergo chemical castration after he was "convicted" of homosexuality, was incredibly ahead of his time. "Computing Machinery and Intelligence" deserves an entire column of its own, as it brilliantly explores theological approaches to a machine soul, the difference between human intelligence and machine intelligence, and what the future holds for our ability to differentiate between a human and an AI (the "Imitation Game," also known as the Turing Test, is still used today).
All of this is to say that artificial intelligence is not new, not merely a product of 21st-century advancement. Nor can artificial intelligence be easily narrowed down to one thing; it is sometimes considered not just a type of technology but an entire field spanning computing, data science, and problem-solving.
Large language models, or LLMs, are a specific application of artificial intelligence, and they matter because they are unquestionably the most popular application of AI today.
In almost all cases, an LLM interaction happens in two steps: step one, a prompt entered by the human user, and step two, a response returned to the user by the AI (both prompt and response can take textual, auditory, or visual form; think ChatGPT vs. Alexa vs. DALL·E). To understand how complicated the process between step one and step two is, consider some of the questions AI developers must reconcile: What is the relationship between a literal sentence and its figurative meaning, i.e., can a machine decipher the feelings it evokes, the sarcasm it contains, the societal context it ignores? What parts of life can and cannot be turned into data (a process called datafication), i.e., can you turn the feeling a parent has when they hold their child into a piece of codified data?
Another issue is the raw amount of computing power required to scan, decode, and categorize the entire internet … and how much money that hardware costs. In an anecdotal example of the consequent financial shifts, Microsoft, after disclosing its plan to lay off 10,000 employees in January 2023, announced a $10 billion investment in OpenAI, the creator of ChatGPT.
In short, through a complex inner system of training code, data collection, and neural networks that learn patterns in language, LLMs are very talented at predicting language (hence the name). This is how LLMs spit out long, in-depth, human-sounding responses so fast. However, the predicted language they deliver does not promise accuracy.
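For the curious, the idea of "predicting language" can be sketched in a few lines of code. This toy example is an assumption-laden simplification, not how an actual LLM works internally: it just counts which word most often follows another in a tiny sample text and "predicts" accordingly. Real LLMs do the same kind of next-word prediction, but with neural networks trained on vast swaths of the internet.

```python
# Toy next-word predictor: a bigram counter, not a real LLM.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Count how often each word follows each other word in the sample.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- the most common follower of "the"
```

Notice that the prediction reflects only what was common in the training text, not what is true, which is exactly why fluent-sounding output can still be wrong.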
Google's search engine is so mainstream that it is now used as a verb: "Let me google that!" However, when we search the internet by googling, we must translate our inquiry into appropriate search terms, or, as my college professor Dr. Morten Bay put it, we must learn "keywordese." ChatGPT, the program people associate with the recent mass distribution of AI, lets users ask questions in the same verbiage they use to talk to their friends (a capability known as natural language processing). No "keywordese" necessary. This may explain its immediate popularity.
But it is crucial to remember that LLMs like ChatGPT, as they exist now, are not interchangeable with search engines like Google. LLMs “hallucinate” false information, or sometimes complete gibberish, because they are trained for convincing communication skills, not literal accuracy.
As you may imagine, there is a near-infinite supply of unanswered ethical questions, technical complexities, and economic impacts that weren't touched upon in this limited window into the world of LLMs (one example: what happens when someone wants an LLM girlfriend?). Some of these topics will be thoughtfully explored in future columns.
But, in conclusion, you should care about LLMs because chances are you’re either using one now or will have to use one very soon. As always, in making the case for a tech-optimist future, I advocate that every individual educate themselves on the critical technology that is exploding into our everyday lives. That way, we don’t have to rely on a handful of tech-bros in San Francisco to make the decisions for us.