@dyth68 (idk how to quote lol)

Ask ChatGPT a simple question: "How many words are in this question?"

It will get it wrong about 50% of the time. But append "Also, how many words are in this text?" to an unrelated question and the probability of a correct answer is practically zero.

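For reference, the ground truth the model is being asked for is trivial to compute outside the model (a quick sketch; note the model itself never sees words, only tokens, so it has no counter like this):

```python
# Ground-truth word counts for the two questions in the experiment above.
q1 = "How many words are in this question?"
q2 = "Also, how many words are in this text?"

print(len(q1.split()))  # 7
print(len(q2.split()))  # 8
```

In the appended case the model has to track two different counts at once, which is why the failure rate goes up.
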
Now ask it another question: "Before you answer this question, type it out. How many words are in this question?"

Now it will get it right 100% of the time.

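The trick is easy to wrap as a prompt helper (a sketch; the function name is mine, but the instruction wording is the one from the experiment above):

```python
def transcribe_first(question: str) -> str:
    """Prepend an instruction forcing the model to type the question out
    before answering, so the answer is grounded in tokens it just emitted."""
    return "Before you answer this question, type it out. " + question

# This augmented prompt is what you would send to any chat model.
prompt = transcribe_first("How many words are in this question?")
print(prompt)
```
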
ChatGPT is a transformer model, and with all transformer models come certain restrictions. It is a great search engine, but give it a complex question that involves any organised problem solving or preparation beforehand and it will really, really struggle. In the best case, it lays out the question in its own words before it tries to answer it. This is also why LLMs struggle at zero-shot coding tasks and make very basic mistakes.