Commentary: ChatGPT, Software Created in Calif. Lab, to Change Way We Learn and Live

Photo: The OpenAI ChatGPT chatbot seen on a smartphone placed on a laptop, responding to a question about love. Stafford, United Kingdom, December 13, 2022.

By Maxim Elramsisy, California Black Media

In December, OpenAI, a San Francisco-based artificial intelligence (AI) research lab, released a free, publicly available version of its AI-based chatbot, ChatGPT.

The “GPT” stands for Generative Pre-trained Transformer.

Transformers are a type of machine learning model, originally invented by researchers at Google, that use data to predict what comes next in a sequence, much as your Google search bar predicts the next characters you will type.
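To make that idea concrete, here is a minimal sketch of next-character prediction in Python. It uses a simple character-pair counting model on a toy sentence, which is far simpler than the neural network inside GPT, but it illustrates the same “predict what comes next” principle. The corpus and function names here are purely illustrative, not part of OpenAI’s software.

    from collections import Counter, defaultdict

    # Toy corpus; real models train on vast swaths of internet text.
    corpus = "the cat sat on the mat. then the cat ate."

    # Count how often each character is followed by each other character.
    counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        counts[current][nxt] += 1

    def predict_next(char):
        """Return the character most often seen after `char` in the corpus."""
        if char not in counts:
            return None
        return counts[char].most_common(1)[0][0]

    print(predict_next("t"))  # prints 'h', because "th" is the most common pair here

A large language model does essentially this at enormous scale, predicting whole words and phrases rather than single letters, with a neural network in place of a simple table of counts.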

Since the summer of 2021, a series of new AI-based products has rapidly advanced internet technology. Applications like Lensa take pictures uploaded by a user, extract (or learn) information about the person’s appearance, then create their own images of the person.

AI music generators like Amper Music work in a similar way, learning from available music databases and then creating original music.

A spokesperson for OpenAI, which owns ChatGPT, said the company “made ChatGPT available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems.”

The technology in ChatGPT’s large language model, GPT-3.5, uses data from the internet to predict the best possible response to a query. It can piece together vast amounts of data to create a response that is not a repetition from a single site or source, but rather fragments from the whole dataset which, together, make a unique answer – a critical difference from a search engine’s function.

So, a romantically inclined individual looking for a unique way to show affection to a love interest could, for example, instruct ChatGPT to “make a list of some one-of-a-kind Valentine’s Day gifts.” A search engine would return an index of links to websites with words matching that query, along with sponsored results and advertisements. ChatGPT, on the other hand, would reply with an actual list of gifts representing, statistically, the most likely responses to that request. This kind of AI-assisted brainstorming is one of ChatGPT’s greatest strengths.
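For readers curious what that looks like in practice, below is a minimal sketch of sending that same request to ChatGPT through OpenAI’s Python library, as the interface existed when ChatGPT’s API first launched. The API key is a placeholder, and the model name and interface details may change over time.

    # A sketch, not production code: asks ChatGPT for gift ideas
    # through the `openai` Python package (pip install openai).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the GPT-3.5 model behind ChatGPT
        messages=[
            {"role": "user",
             "content": "Make a list of some one-of-a-kind Valentine's Day gifts."},
        ],
    )

    # Unlike a search engine, the reply is freshly generated text, not links.
    print(response.choices[0].message["content"])

Each run can produce a different list, because the model samples from a probability distribution over possible next words rather than retrieving a stored answer.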

With some very specific instructions, the software could also write a Valentine’s Day song, heartfelt letter or poem for our lovestruck friend. The program’s ability to generate original text is useful here: the lyrics or poetry it produces are not copied wholesale from any existing work.

The program can write jingles for advertisements, cover letters for prospective employees, or scripts for cold calls. It can write a story in Shakespearean prose, or it can take paragraphs from a college physiology textbook and explain them at a fourth-grade level.

Although the potential for this technology is mind-blowing, it has some very distinct and important drawbacks, especially at a time when the proliferation of disinformation and misinformation has become a pressing national concern. For one, as a predictive text generator, it is not very good at math. As of now, it cannot correctly answer some simple questions, and it can generate blatantly false information and convey it with confidence. And because the text is generated by the AI itself, there is no source material to which the information it presents can be attributed.

According to OpenAI’s frequently asked questions page, “ChatGPT will occasionally make up facts or ‘hallucinate’ outputs. If you find an answer is unrelated, please provide that feedback by using the ‘Thumbs Down’ button.”

If the data that trains the AI is biased or not representative, then its results will have many of the same biases.

The machine learning systems behind ChatGPT allow it to improve over time: each conversation and piece of user feedback becomes data OpenAI can use to refine future versions of the model. It makes sense, then, that OpenAI would allow the public to use it for free. The more it is used, the more data the company can draw upon to improve its responses. The engineers who created the technology perhaps could not have envisioned all of the ways it could be used, for better or for worse. In this iteration, they are explicitly requesting that improper results be flagged.

One of the first companies to invest in OpenAI was Microsoft. The technology giant has increased its investment to $10 billion, hoping that ChatGPT could, perhaps, enable valuable features for its existing software, such as a personal assistant for Microsoft Office, or incorporate text generation into its Bing search engine, which competes in a market long dominated by Google.

Media companies struggling to create content as they cut their workforces have pounced on the opportunity to publish AI-generated content, with mixed results. Technology website CNET allowed AI to write stories, but the experiment backfired when readers pointed out inaccuracies in the generated content. Currently, the programs have no capacity to distinguish truth from lies or good sources from bad ones.

Educators around the country are grappling with how to use the technology, or how to coexist with it. The New York City Department of Education has banned the technology outright, fearing that it would enable cheating and circumvent the tools teachers use to check for plagiarism. School districts in Los Angeles, Oakland and Seattle have similarly banned the use of OpenAI tools.

Some teachers see an opportunity. Cherie Shields, a high school English teacher, found that the bot could help her evaluate student work and generate feedback for her students. It can also write test questions and study guides based on submitted text.

The information on the internet, as we all know, is imperfect. So ChatGPT not only allows for the greater creation and dissemination of misinformation; it can also perpetuate the biases embedded in its training data.

The pace of technological innovation is faster than the pace of regulation, so many tech companies have to self-regulate. ChatGPT is trained to refuse to create content that is hateful, offensive, triggering, or that could lead to violence, but there are ways around these safeguards, and bad actors will be looking to exploit those vulnerabilities.

OpenAI has already announced the next iteration of its large language model, GPT-4, which is expected to be more capable than GPT-3.5. Semafor reports that OpenAI is also working on a mobile app version of ChatGPT to be released in the coming weeks.