AI is way too simple for anything remotely practical today.
The building blocks of AI, known as perceptrons, are based on a frankly outdated model of neural activity conceptualized in the 1940s. Perceptrons take in a set of inputs, moderate them by weights, sum the weighted inputs together, and spit out a number if that sum reaches a certain activation threshold.
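To show just how simple that is, here's a minimal sketch of a single perceptron (the inputs, weights, and threshold below are made-up illustrative values):

```python
def perceptron(inputs, weights, threshold):
    # Weight each input, sum them, and "fire" (output 1) only if
    # the weighted sum reaches the activation threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Illustrative values: two inputs with hand-picked weights.
print(perceptron([1, 0], [0.6, 0.4], 0.5))  # weighted sum 0.6 >= 0.5 -> 1
```

That's the entire unit. Everything a modern network does is layers upon layers of this.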
The reason ChatGPT hallucinates as often as it does is that it's solving a very simple problem:
[q]Given a dataset and the previous words just said, predict which word will come next[/q]
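To make that objective concrete, here's a toy sketch of the idea: count which word follows which in a dataset, then always predict the most common continuation. (The corpus here is made up, and real language models are vastly more elaborate than this bigram counter, but the training objective really is next-word prediction.)

```python
from collections import Counter, defaultdict

# Toy "dataset"; a real model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed continuation.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```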
Last time I checked, human speech was, to put it mildly, a little more sophisticated than simply predicting which word will come next.
With today's neuroscience advancements, we understand that biological neurons are far more complicated and powerful than today's perceptron networks would suggest. It has gotten to the point where researchers have managed to model a single human neuron with a deep neural network.
Yep. Just one neuron in the human brain is equivalent to an entire deep neural network. Granted, most of that complexity comes from modelling the NMDA channels, but nonetheless, we're not exactly scaling deep neural networks up to artificial brains any time soon.
However, researchers are beginning to bring more actual neuroscience into artificial neural networks, with models like liquid time-constant networks explicitly inspired by reverse-engineering work on real neurons. Suffice it to say, liquid time-constant networks accomplish tasks like autonomous driving with more stability and fewer neurons than conventional neural network models.
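For a rough sense of what makes these different: each neuron's state follows a differential equation whose effective time constant is itself modulated by the input, so the cell's dynamics shift "liquidly" over time. Below is a heavily simplified single-neuron sketch using Euler integration and made-up parameters; see the paper linked below for the real formulation.

```python
import math

def ltc_step(x, i_in, dt=0.1, tau=1.0, a=1.0, w=2.0, b=0.0):
    # f gates both the decay rate and the drive toward the target state a,
    # so the effective time constant depends on the input ("liquid").
    f = 1.0 / (1.0 + math.exp(-(w * i_in + b)))  # sigmoid nonlinearity
    dx = -(1.0 / tau + f) * x + f * a            # the LTC neuron's ODE
    return x + dt * dx                           # one Euler step

# Drive one neuron with a constant input and watch it settle.
x = 0.0
for _ in range(100):
    x = ltc_step(x, i_in=1.0)
print(round(x, 3))  # settles near f / (1/tau + f)
```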
I have a video which talks about the differences between today's neural networks and our neurons: https://www.youtube.com/watch?v=hmtQPrH-gC
For more information about liquid time-constant networks, I have linked a paper and a video on this topic in a forum post here:
https://zero-k.info/Forum/Thread/36858