Ever read something that passes for middling English but never quite addresses what you are looking for?
Then it was probably written by AI.
There’s a reason why GPT-3 is taking the world by storm, and it isn’t just its ability to generate conversational text or 350-word marketing blurbs.
With more and more text being generated by AI, specialists are concerned that within five years GPT-3 could become more proficient than many graduate-level thinkers on questions ranging from public policy to law, from philosophy to chemistry, even to the point of generating in-depth news articles from a handful of new data points and a set of parameters supplied by an editor.
I can't compete with this pic.twitter.com/YdQ87LWIst
— Keith Wynroe (@keithwynroe) December 1, 2022
The problem intensifies when students are assigned homework and, instead of reproducing anything from memory, simply feed prompts to any number of available AI essay writers and produce adequate results:
What kid is ever doing homework again now that ChatGPT exists pic.twitter.com/oGYUQh3hwh
— Liv Boeree (@Liv_Boeree) December 1, 2022
The solution, of course, is for teachers in an educational environment simply to adapt by giving essay assignments in class. Or better still, to fuse the two methods by pushing students to leverage AI as a tool to refine their own thinking rather than letting AI do the thinking for them.
Yet fear not.
One of the great flaws of AI is that while it is good at deductive reasoning, it is terrible at inductive reasoning. Give it a clear and direct path, and AI will do some wonderful things faster than any human operator. You can teach AI to play chess, but AI will never invent a game like chess. You can have AI munge art and create some eerie and objectively beautiful images, but it will always require input and direction; AI can never create of its own accord.
Charlotte Hu over at Popular Science explains:
Douglas Eck, senior research director at Google Research, noted at a recent Google event focused on AI that Wordcraft can enhance stories but cannot write whole stories. For now, the tool is geared toward fiction because, in its current form, it can miss context or mix up details. It can only generate new content based on the previous 500 words.
Additionally, many writers have complained that Wordcraft’s writing style is quite basic. The sentences it constructs tend to be simple, straightforward, and monotone; it can’t really mimic the style or voice of prose. And because the model is biased toward non-toxic content on the web, it’s reluctant to say mean things, which can actually be a shortcoming: sometimes meanness is needed to create conflict. Because it’s trained on the internet, it also gravitates toward tropes, which makes stories less unique and original. “For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship,” Google engineers wrote.
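That “previous 500 words” limit is what engineers call a context window: anything written earlier simply falls out of the model’s view. A minimal sketch of the idea in Python (illustrative only, not Wordcraft’s actual implementation; the function name and word-based truncation are assumptions for demonstration):

```python
# Illustrative sketch of a fixed context window: the model only "sees"
# the most recent max_words words of the draft, so earlier plot details
# cannot inform its next suggestion.
def context_window(text: str, max_words: int = 500) -> str:
    """Return only the most recent max_words words of a draft."""
    words = text.split()
    return " ".join(words[-max_words:])

# A 600-word draft: word1 word2 ... word600
draft = " ".join(f"word{i}" for i in range(1, 601))
visible = context_window(draft)
# The first 100 words fall outside the window; whatever a character did
# back there is, as far as the model is concerned, gone.
```

This is why the tool can “miss context or mix up details”: a character introduced 600 words ago is, to the model, a stranger.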
The IQ of the new GPT-3 chatbot? Somewhere between 85 and 115, considerably higher than GPT-2’s IQ of 47 (the equivalent of a six-year-old child), to be sure. But while impressive for the low-IQ set, such a performance remains entirely substandard for excellence in any field, and easily detectable as substandard by anyone with any degree of expertise in those fields.
In short, AI has some work to do. Fascinating progress, to be sure, but certainly more artificial than intelligent.