
Can an Artificial Intelligence Write Better Than a Human?

OpenAI describes itself as an artificial intelligence (AI) research and deployment company with a mission to benefit humanity through AI. It is the developer of ChatGPT, a new program capable of writing and conversing much like a human. Optimized for dialogue, it can answer questions, learn, translate languages, and admit when it makes a mistake. It can even be integrated with a blog like this one to provide an online chat that mines the information in more than 3,550 postings on this site and answers questions from readers. It generates human-like text, can write essays, and can even debug computer code.

Recently, Ryan Reynolds, the actor and CEO of Mint Mobile, a budget cellular provider, asked ChatGPT to write a commercial for Mint Mobile in his voice. He instructed it to include a joke, a curse word, and a promo the company was working on. He told Fast Company that the results were “mildly terrifying.”


My First Test of ChatGPT

The latest version of ChatGPT is built on OpenAI’s GPT-3.5 model and produces text remarkably similar to what humans write. Knowing this, I had to try it out. So I asked ChatGPT to take on a subject from a posting I wrote yesterday: “Explain the relationship between atmospheric warming and rising ocean temperatures.” This is what it wrote within a minute of my query.

“Atmospheric warming, caused by an increase in greenhouse gases in the atmosphere, causes rising ocean temperatures. As the Earth’s atmosphere warms, the oceans absorb about 93% of the excess heat. This causes the temperature of the ocean to rise, which in turn leads to a number of consequences including sea level rise, ocean acidification, and changes in ocean circulation. Additionally, warmer oceans can lead to more intense and frequent storms, such as hurricanes and typhoons.” 

That’s a good summary of the issue described in the article I posted yesterday. If Ryan Reynolds found the ad ChatGPT produced “mildly terrifying,” I can only reinforce his comment with my own: “pretty creepy.”
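For readers curious what the blog integration mentioned above might look like under the hood, here is a minimal sketch in Python of the request a site would send to OpenAI’s chat API. The endpoint and field names follow OpenAI’s published API documentation at the time of writing; the helper function and variable names are purely illustrative, and details may change as the API evolves.

```python
import json

# OpenAI's chat completions endpoint (per OpenAI's public API docs;
# an Authorization: Bearer <API key> header is required when sending).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Return the JSON body for a single-question chat request."""
    payload = {
        "model": model,
        "messages": [
            # A single user turn; a real integration would also pass
            # prior conversation turns to keep the dialogue coherent.
            {"role": "user", "content": question},
        ],
    }
    return json.dumps(payload)

body = build_chat_request(
    "Explain the relationship between atmospheric warming "
    "and rising ocean temperatures."
)
# `body` would be POSTed to API_URL; the model's reply text arrives
# in the response under choices[0].message.content.
```

A blog chat widget would simply wrap this round trip: take a reader’s question, POST the JSON body, and render the returned answer.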

Microsoft Invests in ChatGPT

Microsoft has invested $1 billion US in OpenAI in recent years. Now, according to recent reports, it is adding another $10 billion. Why? Because Microsoft owns the Bing search engine, its competitor to Alphabet’s Google Search. To date, the company has been spectacularly unsuccessful at taking market share from Google, but incorporating ChatGPT into Bing may be the game changer that beats Google’s search algorithm.

ChatGPT isn’t an isolated case of an AI capable of human-like conversation. Google has its own technology, LaMDA, a conversational model that understands queries to produce better results for Google Search users. LaMDA is built on an AI neural network capable of delivering sensible responses in human conversation.

Impact of ChatGPT on Education

But ChatGPT has made a much bigger splash recently, partly because of the implications this technology may have for education and partly because it is currently free to use, while LaMDA remains proprietary and unavailable to the public.

For teachers, ChatGPT can facilitate and automate many tasks. Madeline Will, a reporter at Education Week, wrote a piece on ChatGPT yesterday describing teachers’ reactions to the algorithm’s ability to write almost anything. She gave it five common teaching tasks: generate a lesson plan, write a response to a concerned parent, compose a rubric, provide feedback on student work, and put together a letter of recommendation.

On the first task, the general impression among teachers was that, at best, it provided a framework but not the details for lesson planning. The same was said of the tool’s letter-writing and rubric-composing capabilities. On providing feedback on student work, the AI’s comments and grading were unimpressive. And the letter of recommendation ChatGPT came up with was, teachers said, “far too generic.” All in all, the current version of ChatGPT seems underwhelming as a teaching aid.

On the student side, however, ChatGPT is far more problematic. Because it can compose content like the example it produced for me above, teachers are concerned the tool will make it easy for students to submit work they didn’t write. Is this plagiarism? Can you plagiarize an AI?

When OpenAI made ChatGPT publicly available in November of last year, these concerns led New York City’s public school system to block access to it on its networks and computers. And a Princeton University student created an app called GPTZero, aimed at the responsible use of AI in education and capable of detecting ChatGPT-written documents.

Some teachers are responding to the arrival of ChatGPT by reverting to handwritten student work, although a student can easily copy out by hand something the AI produces. And at some schools, students are being asked to sign an “authenticity pledge” with any work they submit.

Is there any positive response from educators to ChatGPT? Because teachers generally praised ChatGPT’s ability to provide frameworks, some see it as a useful tool to help students improve their writing.

But I think most teachers see the AI as disruptive. Take, for example, a December 9, 2022 article in the Atlantic entitled “The End of High-School English.” Its author, high school English teacher Daniel Herman, tried ChatGPT and came away convinced that it will not make students better communicators of the written word.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology. More...

4 COMMENTS

  1. I suspect that the teachers would pan AI regardless because they see AI as a threat to their careers.

    Would it make American English better or worse?
    As a native English speaker, I can assure everybody that it could not make it any worse than it already is.
    Sometimes, American English actually hurts the heads of original native English speakers (people from Britain), due mostly to the American incoherent and random misuse of various words, e.g., Americans hear a peer or TV personality use a random word incorrectly several times (awesome, literally, unique, liberal, spicy, gourmet, etc.) and then feel compelled to randomly jam it into every sentence they use. To an actual native speaker (or skilled teacher), Americans appear to be only semi-illiterate.

    We would have been ridiculed for using words without knowing their definition or proper usage even by the age of around 10. It was considered uneducated at best, or you were just the dumb kid to bully at worst. Whereas in America it is encouraged (the dumbing-down effect).
    There are around one million words in the English language. Why destroy that massive collection of fine-tuned communication descriptives by needlessly changing well-established definitions, often to their antonyms?

    Odd that people prefer to sabotage a language rather than learn to use it. I feel sorry for skilled English teachers in America. I had many a good laugh with some of them as we debated sentences like “that literally tactically liberal unique gourmet spicy (plain mild cheese flavor) popcorn from MacD’s was literally awesome.”
    Hmmmm… could do better -G.
    (I feel your pain, English teachers across America)

    Would a competent AI butcher a language like an American?
    No. While humans know only a few thousand words, an AI knows a much larger list. Very few people could memorize even one small dictionary.

    Could AI write a better story?
    An AI should be able to look up all one-million-odd word definitions and their proper usage, so in theory, yes. It would fail only if it were illiterate.

    However, due to the existing limitations of story archetypes (the villain, hero, protagonist, comedy, horror, mystery, etc.), it faces the same constraints in that sense.
    All the characters and stories were created long ago. For example, look at the iterations since the Sherlock Holmes-style detective genre was formulated: they led to many new detective characters, but they were all the same; even the recent TV show “House” was based on the Sherlock formula with a twist. And the Sherlock stories themselves were inspired by existing short fictional detective crime stories of Mr. Doyle’s era.
    Hard to be original when it’s all been done already.

    AI has great potential, but you know it will only be abused, just like any other power in today’s society. Humans prefer to destroy rather than create or improve for real progress. Some would prefer to keep their crooked games in play at any cost to the rest of us.
