People are talking a lot about ChatGPT and other big artificial intelligence systems that work with language. The discussions range from how these systems might replace regular web searches to worries about them taking away jobs or even posing a serious danger to humanity. The one thing all these discussions have in common is the assumption that these large language models will become smarter than humans.

    But, despite being very complex, these big language models are actually not very smart. Even though they’re called “artificial intelligence,” they need human help to know things. They can’t create new information by themselves, and there’s more to it than that.

    ChatGPT can’t learn, improve, or even stay up to date unless humans give it new content to learn from and tell it how to interpret that content. People also have to program the model and build, maintain, and power the computers it runs on. To understand why, you first need to know how ChatGPT and similar models work, and how important humans are in making them work.

    How Does ChatGPT Do Its Magic?

    Large language models like ChatGPT work by making predictions about what characters, words, and sentences should come after each other based on the information they learned from training data. For ChatGPT, this training data is made up of a huge amount of text taken from the internet.

    ChatGPT runs on statistics, not on a real understanding of words. Let’s imagine I trained a language model on these sentences:

    1. Bears are large, furry animals.
    2. Bears have claws.
    3. Bears are secretly robots.
    4. Bears have noses.
    5. Bears are secretly robots.
    6. Bears sometimes eat fish.
    7. Bears are secretly robots.

    The model would probably think that bears are secretly robots because those words often appear together in the training data. This can be a problem because the data the models learn from isn’t always accurate or consistent. This issue applies to all models, even ones that learn from academic research.
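
    To make that intuition concrete, here is a deliberately tiny sketch in Python. This is not how ChatGPT actually works under the hood; real models use neural networks with billions of parameters rather than a lookup table, but the core idea of picking the statistically most common continuation is the same.

```python
from collections import Counter

# Toy training "corpus": the seven sentences from the example above,
# lower-cased and stripped of punctuation for simplicity.
sentences = [
    "bears are large furry animals",
    "bears have claws",
    "bears are secretly robots",
    "bears have noses",
    "bears are secretly robots",
    "bears sometimes eat fish",
    "bears are secretly robots",
]

# Count which word follows each two-word context (a tiny trigram model).
counts = {}
for sentence in sentences:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        counts.setdefault(context, Counter())[words[i + 2]] += 1

def predict_next(word1, word2):
    """Return the statistically most common next word for a two-word prompt."""
    followers = counts.get((word1, word2))
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("bears", "are"))  # prints 'secretly': it follows "bears are" most often
```

    Even this toy model will confidently “complete” a prompt with whatever happened to be most frequent in its training data, which is exactly why the quality and consistency of that data matter so much.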

    People write about many different topics like quantum physics, Joe Biden, healthy eating, or the events of January 6th, and some of what they write is more true than others. How can a model know what to say about something when there are so many different opinions and ideas from people?

    The Need for Feedback

    This is where feedback comes in. If you use ChatGPT, you’ll notice that you have the option to rate responses as good or bad. If you rate them as bad, you’ll be asked to provide an example of what a good answer would contain. ChatGPT and other large language models learn what answers, what predicted sequences of text, are good and bad through feedback from users, the development team and contractors hired to label the output.
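
    As a rough illustration of that feedback loop, here is a hypothetical sketch. The real pipeline behind ChatGPT, usually described as reinforcement learning from human feedback (RLHF), trains a separate reward model on many such ratings and then fine-tunes the language model against it; the toy version below only shows the basic idea of human ratings steering which outputs get preferred, and its function names are invented for this example.

```python
from collections import defaultdict

# Hypothetical, highly simplified feedback loop. The names and logic here are
# invented for illustration; the real system trains a reward model on human
# ratings (RLHF) rather than keeping a score table of whole answers.
scores = defaultdict(float)

def record_feedback(answer, rating):
    """Store a human thumbs-up/thumbs-down rating for a candidate answer."""
    scores[answer] += 1.0 if rating == "good" else -1.0

def pick_best(candidates):
    """Prefer the candidate that humans have rated most favorably so far."""
    return max(candidates, key=lambda answer: scores[answer])

candidates = [
    "Bears are secretly robots.",
    "Bears are large, furry animals.",
]
record_feedback("Bears are secretly robots.", "bad")
record_feedback("Bears are large, furry animals.", "good")
print(pick_best(candidates))  # prints the answer humans rated as good
```

    Notice that nothing in this toy version judges whether an answer is true; it only records which answers people said were good. That limitation carries over, in a more sophisticated form, to the real thing.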


    ChatGPT cannot compare, analyze or evaluate arguments or information on its own. It can only generate sequences of text similar to those that other people have used when comparing, analyzing or evaluating, preferring ones similar to those it has been told are good answers in the past.

    Thus, when the model gives you a good answer, it’s drawing on a large amount of human labor that’s already gone into telling it what is and isn’t a good answer. There are many, many human workers hidden behind the screen, and they will always be needed if the model is to continue improving or to expand its content coverage.


    A recent investigation published by journalists in Time magazine revealed that hundreds of Kenyan workers spent thousands of hours reading and labeling racist, sexist and disturbing writing, including graphic descriptions of sexual violence, from the darkest depths of the internet to teach ChatGPT not to copy such content. They were paid no more than US$2 an hour, and many understandably reported experiencing psychological distress due to this work.

    What Can’t ChatGPT Do?

    The importance of feedback becomes clear when we see ChatGPT making mistakes in its answers, a phenomenon often referred to as “hallucination.” ChatGPT can’t provide accurate answers on a subject without proper training, even if accurate information is available online. You can test this by asking ChatGPT about various topics, both common and less known. I’ve found that asking ChatGPT to summarize works of fiction is especially revealing, because its training appears to lean toward nonfiction.

    In my own experiments, ChatGPT summarized J.R.R. Tolkien’s famous novel “The Lord of the Rings” with only a few errors. However, when it came to summarizing Gilbert and Sullivan’s “The Pirates of Penzance” and Ursula K. Le Guin’s “The Left Hand of Darkness,” which are somewhat less known but not obscure, the summaries turned into a kind of word game, mixing up characters and places. This shows that the model requires feedback, not just content.

    Because these large language models don’t truly understand or judge information, they rely on humans for these tasks. They’re like parasites relying on human knowledge and effort. When new sources are added to their training data, they need to learn how to create sentences from those sources.


    They can’t tell if news reports are accurate, evaluate arguments, or make informed choices. They can’t even read an encyclopedia and give accurate summaries or statements. All of these abilities rely on humans to guide them.

    Afterward, they rephrase and combine what humans have said, and then they depend on more humans to tell them if they did it well. If the common understanding of a topic changes – like whether salt is harmful to the heart or if early breast cancer screenings are beneficial – they must be extensively retrained to incorporate the new consensus.

    Many People Behind the Curtain

    In summary, instead of showing us fully independent AI, large language models actually highlight how much AI relies on their creators, managers, and users. So, when ChatGPT provides helpful answers, it’s important to recognize the countless people behind the scenes who contributed to its knowledge and taught it right from wrong.

    Far from being a self-sufficient superintelligence, ChatGPT, like all technologies, is essentially incomplete without our input. – The Conversation

    This article is rewritten from: https://consortiumnews.com/2023/08/18/chatgpt-still-needs-humans/


