34

After seeing Stack Overflow's ban on ChatGPT, I explored it out of curiosity. It's marvellous that it can write code by itself!

Later, to check whether it knows chess like Google DeepMind's AlphaZero AI, I asked the following questions:

Me: Does openai know how to play chess?
OpenAI: It is not clear whether OpenAI knows how to play chess. As an artificial intelligence research organization, OpenAI has developed a number of advanced machine learning algorithms and techniques that allow AI systems to learn and adapt to new tasks and situations. ...

Me: In a chess if white has a king & bishop and if black has only a king, then who can win with a perfect play?
OpenAI: In a chess game where white has a king and a bishop and black has only a king, it is possible for either player to win with perfect play. This type of endgame is known as a king and bishop versus king endgame, and it is considered one of the most challenging in chess. ...

Now this answer is entirely wrong, because as soon as only a bishop (or a knight) remains besides the kings, the game is a draw: king and bishop (or knight) versus a lone king cannot deliver checkmate, so it is insufficient material.

Question: Shouldn't the AI reject my question on a subject which is outside its known domain? It does so for many other subjects.


Note: If we replace the bishop with a rook, ChatGPT gives exactly the same answer with the pieces swapped. In that case, however, the answer happens to be true: king and rook versus king is a forced win for the side with the rook.
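
For anyone who wants to verify the underlying chess claim, here is a minimal sketch using the third-party python-chess library (installed via pip install chess); the FEN positions below are arbitrary examples of mine, not from the transcript:

```python
# Minimal sketch: check the material claims with python-chess (pip install chess).
import chess

# King + bishop vs. lone king (arbitrary illustrative position).
kb_vs_k = chess.Board("k7/8/8/8/8/8/8/KB6 w - - 0 1")
print(kb_vs_k.is_insufficient_material())  # True: checkmate is impossible, so it is a draw

# King + rook vs. lone king: mate is possible here, so material is sufficient.
kr_vs_k = chess.Board("k7/8/8/8/8/8/8/KR6 w - - 0 1")
print(kr_vs_k.is_insufficient_material())  # False: the rook side wins with perfect play
```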

iammilind

6 Answers

72

ChatGPT is a large language model. That means it's very good at stringing together words in ways that humans tend to use them. It's able to construct sentences that are grammatically correct and sound natural, for the most part, because it's been trained on language.

Because it's good at stringing together words, it's able to take your prompt and generate words in a grammatically correct way that's similar to what it's seen before. But that's all that it's doing: generating words and making sure it sounds natural. It doesn't have any built-in fact checking capabilities, and the manual limitations that OpenAI placed can be fairly easily worked around. Someone in the OpenAI Discord server a few days ago shared a screenshot of the question "What mammal lays the largest eggs?" ChatGPT confidently declared that the elephant lays the largest eggs of any mammal.

While much of the information that ChatGPT was trained on is accurate, always keep in mind that it's just stringing together words with no way to check if what it's saying is accurate. Its sources may have been accurate, but just writing in the style of your sources doesn't mean that the results will themselves be true.
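
To make "stringing together words" concrete, here is a toy sketch (not the real model internals, which are a learned neural network over subword tokens; the little frequency table below is made up): greedy next-word prediction. Notice that nothing in the loop ever consults a fact.

```python
# Toy sketch of next-word generation: always append the most frequent successor
# of the previous word. The counts below are invented for illustration.
next_word_counts = {
    "the":      {"elephant": 3, "ostrich": 1},
    "elephant": {"lays": 2, "is": 1},
    "lays":     {"the": 2},
    # ... a real model has learned statistics over an entire vocabulary
}

def generate(words, steps):
    """Greedily extend a list of words using the frequency table above."""
    for _ in range(steps):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate(["the"], steps=4))
# -> "the elephant lays the elephant": locally fluent word pairs, zero fact checking
```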

Mithical
20

ChatGPT does not actually know anything. But even more importantly, it does not know this fact: it does not know that it does not know.

It is only good at combining text.

DrCommando
15

ChatGPT and other GPT-based machine learning models don't actually know anything in the sense you're thinking of. ChatGPT is a distant descendant of Markov chain text generators such as Dissociated Press, and works by predicting what is most likely to follow a given sequence of text.

So, when you ask it "Does openai know how to play chess?", what you're actually getting is the answer to the question "What is most likely to follow the phrase 'Does openai know how to play chess?'?"
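
Here is what that looks like in miniature: a Dissociated-Press-style bigram Markov generator (a sketch; the three-sentence corpus is made up, and GPT models replace the lookup table with a huge neural network over a much longer context, but the framing is the same).

```python
import random

# Bigram Markov text generator in the style of Dissociated Press (illustrative
# sketch). It answers exactly one question: which word tends to follow this one?
corpus = (
    "openai does research on machine learning . "
    "alphazero knows how to play chess . "
    "does openai know how to play chess ?"
).split()

# Build the chain: map each word to the list of words observed right after it.
chain = {}
for prev, nxt in zip(corpus, corpus[1:]):
    chain.setdefault(prev, []).append(nxt)

def continue_text(word, length=8):
    """Extend a starting word by repeatedly sampling an observed successor."""
    out = [word]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(continue_text("does"))
# e.g. "does openai know how to play chess . alphazero": a continuation,
# not an answer; the model has no concept of chess, only of word order.
```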

Mark
9

“It” does nothing. Don't think, just because every conman out there calls our really shockingly primitive neural nets “AI” and wants to convince you that it's an autonomous intelligence, that it's anything more than a glorified function (a list of commands to blindly execute, not a person) that applies a set of biases to a given input pattern, biases that were trained into it in a way that spares the programmer from knowing what he's doing or even what precisely he wants. :)

It is just matching your input against the patterns in its training data and giving you whatever that results in. In this case, if I am correct, it also feeds its output back into its input again and again, with diminishing sanity.

So the answer is that your input will be treated as a blend of those patterns, no matter what.
In other words: If all it knows is a hammer, everything will look like a nail. :)

So it is quite mundane, and nothing magical at all.

Everything beyond that, attributed to such systems, is deliberate dupery, to get people to “invest” money.

(Don't get me wrong: this technology is useful when writing an actual algorithm really is beyond a human's capabilities. For example, by definition a brain cannot imagine a model of itself in its entirety, and you cannot manually write down the entire set of experiences of a lifeform, find all the patterns, and turn them into a set of rules, even if the Pauli exclusion principle did not exist. But nowadays it is abused by people who proudly can't even define what they want, who just throw examples at it and expect a universal function to come out, so they can call themselves geniuses and make lots of money.)

2

The original question asked about "an AI" in general, yet most of the responses here focus on OpenAI's ChatGPT specifically. The answer would seem to depend on the specific type of AI being used: not just large language model-based chatbots, but other kinds of knowledge representation systems more generally.

Not being an AI expert, I can't speak to this, but here's some good background reading: https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning

Trutane
0

I agree with most of the former answers here (apologies, I don't have enough reputation to comment yet, so I had to post this as an answer). Because of the inaccuracies and the lack of fact-checking in the data it was trained on (which, at the time of writing, only goes up to 2021), fine-tuning via its API has become popular. Although you were mainly asking about ChatGPT itself, I think it's worth considering the API as part of that, since it's just as easily accessible; I am currently learning to fine-tune it myself for more specific and accurate results.

So, in respect of the question: "Shouldn't the AI reject my question on a subject which is outside its known domain? It does so for many other subjects."

It can possibly do this if you train / fine-tune it via the API. As for the latter part: using the normal ChatGPT and ChatGPT Plus web interface (not the API), the AI will mostly not reject your question unless it breaches its ethical and moral constraints, which I think is another important aspect to consider when getting information from ChatGPT.
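
For anyone curious what the fine-tuning route looks like, here is a rough sketch using the legacy (pre-1.0) openai Python package that is current as I write this; newer versions rename these calls (for example client.fine_tuning.jobs.create), and the file name and training example below are made up for illustration.

```python
import openai

openai.api_key = "sk-..."  # your API key

# chess_endgames.jsonl (hypothetical file) holds prompt/completion pairs such as:
# {"prompt": "In chess, who wins K+B vs K with perfect play?",
#  "completion": " Nobody; it is a draw due to insufficient mating material."}

# Upload the training data, then start a fine-tune job on a base model.
upload = openai.File.create(file=open("chess_endgames.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)  # poll this job; once it finishes, you can query the fine-tuned model
```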