Generative AI Trends

All the latest news and trends
Limitations and risks associated with ChatGPT

shared by Naveen Kohli on Sunday, July 9, 2023 •

Introduction

ChatGPT and other Generative AI tools are becoming part of our day-to-day work life as well as our personal life. Like anything else in life, they are not perfect. These tools have limitations and risks associated with them. Therefore, when using tools like ChatGPT it is important to know these risks. Depending on the context in which you are using these tools, the impact of the risks could be high enough that it may be prudent not to use the tools at all. In this article I will try to address some of the known limitations, risks and mitigations associated with ChatGPT.

The ChatGPT team has done a good job of providing information about known limitations. All the information in this article is an explanation of those documented facts. You can verify everything I am going to tell you on the ChatGPT website as well.

Hallucinations

Don’t worry, hallucination here does not mean that if you use ChatGPT, you are going to start hallucinating. In the context of ChatGPT, it means that GPT-4 has the potential to hallucinate facts. It translates to exactly what it means when a person hallucinates: GPT-4 can make up facts that do not exist. And a model that makes up facts will also produce bad judgements based on them.

Given this known limitation, you must make a judgement call about the output from ChatGPT. In my earlier article “What is Generative AI”, I mentioned not to ask ChatGPT questions like “What is your opinion” or “What do you think”. Knowing that it can make up some facts, those opinions or judgements could all be based on made-up data.

The ChatGPT team provides guidance on this as well. If the stakes are high, use extra caution or avoid the use of Generative AI altogether. If you do want to use ChatGPT in these situations, use the following guidelines.

  • Get the output reviewed by a human.
  • Provide additional context when generating output to reduce the possibility of ChatGPT filling the gaps with made-up data.
  • When in doubt, discard the output.
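As a sketch of the second guideline, here is a hypothetical Python helper that packs reference facts into the prompt sent to a chat model, so the model answers from supplied context instead of filling gaps with invented data. The function and parameter names are my own illustration, and the actual chat-completion API call is left out:

```python
def build_grounded_prompt(question, context_snippets):
    """Assemble chat messages that pin the model to supplied context.

    Supplying the relevant facts up front reduces the chance that the
    model fills gaps with made-up data (guideline two above).
    """
    context = "\n".join(f"- {snippet}" for snippet in context_snippets)
    system = (
        "Answer using ONLY the context below. "
        "If the context is insufficient, reply 'I don't know'.\n"
        f"Context:\n{context}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The resulting list would be passed to a chat-completion API;
# the grounded answer should then still be reviewed by a human.
messages = build_grounded_prompt(
    "When was our v2 API deprecated?",
    ["The v2 API was deprecated on 2023-03-01."],
)
```

Even with grounding like this, the first and third guidelines still apply: have a human review the output, and discard it when in doubt.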

GPT-4 has made improvements in reducing these hallucinations. The keyword here is “improvement”. It is a work in progress.

Bias

This is a sensitive and controversial topic for any media or tool that can influence human opinion on a subject. In the last few years, social media has been in the news about information bias: what the platforms deem appropriate or inappropriate, what is fact and what is fiction, etc.

Generative AI is not immune to bias either. These tools are neural networks trained on massive data sets. According to information posted by the ChatGPT team, this training process has two phases. The first phase is “pre-training”, where the model learns to make predictions on its own. Bias starts in this phase: a biased training data set will inject bias into the predictions or output. The second phase is “fine-tuning”, where humans provide input on the model’s decision-making. This introduces a second chance for bias, because a ChatGPT employee generates the instruction set that the human reviewers follow.

I will quote content from ChatGPT’s information.

“Since we cannot predict all the possible inputs that future users may put into our system, we do not write detailed instructions for every input that ChatGPT will encounter. Instead, we outline a few categories in the guidelines that our reviewers use to review and rate possible model outputs for a range of example inputs.”

A guideline is always subject to interpretation. This means that, depending on the instruction writer’s and the reviewer’s interpretation, there will be a chance of bias.

Personally, I will do what ChatGPT tells its own reviewers to do: avoid taking a position on controversial topics. In other words, if you can help it, do not use Generative AI for controversial topics.

Old Training Data

As of this writing, GPT-4 is trained on data no newer than September 2021. A lot has changed since that cut-off date. This means that the predictions and output generated by the AI may or may not be relevant to current trends, technologies, frameworks, etc. There are going to be cases where it makes mistakes. It also does not learn from those mistakes. In human terms, it stopped learning in September 2021.

If you are having conversations with ChatGPT about cybersecurity topics, I would be very careful about using the output. Since the model has not been trained on the latest data, it is likely to provide answers that are already outdated. It may suggest code that uses techniques with the potential to introduce security vulnerabilities.

Use ChatGPT output as guidance and always check on current trends from current sources.

Can Make Mistakes

Generative AI models are neural networks trained to recognize patterns. After training, a model makes predictions based on the probability of each possible answer being the correct one. By definition, a probability is not a certainty. Much like humans, the model can make mistakes. It would be wrong to assume that Generative AI will always provide correct answers.

I will quote from ChatGPT documents, “GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake”.

In cybersecurity, a phrase is often used: “trust, but verify”. I will reword it slightly here: “have some degree of confidence, but always verify”.

Buggy Code

When you ask ChatGPT to write code, it can produce buggy code. I have personal experience with this. The buggy code could be due to lack of context, lack of training on the subject, hallucinations, etc.

Always check the generated code for compile-time and run-time errors. More importantly, make sure it is not going to introduce any security vulnerabilities. ChatGPT does not know anything about the overall structure of your application. All it can do is provide a unit of code that is relevant to the question you asked.
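As a first, purely mechanical check, generated Python can at least be parsed before it is ever run. A minimal sketch (the function name is my own invention); note that a clean parse only rules out syntax errors, not logic bugs or security problems:

```python
import ast

def check_generated_code(source):
    """Return (ok, message) after a syntax check of generated Python.

    Parsing catches compile-time errors only. Run-time behaviour and
    security issues still need tests, review, and human judgement.
    """
    try:
        ast.parse(source)
    except SyntaxError as exc:
        return False, f"syntax error on line {exc.lineno}: {exc.msg}"
    return True, "parses cleanly (still review logic and security)"

ok, msg_good = check_generated_code("def add(a, b):\n    return a + b\n")
bad, msg_bad = check_generated_code("def add(a, b)\n    return a + b\n")
```

For anything beyond syntax, run the generated unit of code through your normal test suite and code review, just as you would with code from an unfamiliar contributor.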

Conclusion

Like any software and technology, Generative AI comes with limitations and risks. It is up to you and your organization to quantify that risk. The impact of some risks could be high enough to warrant not using Generative AI under certain circumstances.

If you present bad output to your clients, they are not going to care about how you prepared that output. The argument “it is not my fault, ChatGPT did it” is not going to mean anything to the other person.

Carefully evaluate the limitations of the technology and then decide whether it is for you or not.

Tags: chatgpt generative ai
