Discussion about this post

KurtOverley

It is fair to critique LLMs as glorified auto-completes, but I do find them to be extraordinarily useful tools (they really have learned how to code), albeit with serious limitations. For one, LLMs basically just present the consensus view, and it takes quite a bit of herding to force them to explore contrarian takes. Hallucination is another major problem, though it can be mitigated by having them generate code for rigorous data analysis. However, LLMs have already passed the Turing test, and I suspect that true intelligence is an emergent phenomenon that will be achieved when LLM parameterization scales by 100x to more closely approximate the average human brain's 86 billion neurons, each with around 1,000 synaptic connections; incorporates a version of memory; and is modified to allow both internal and external feedback training loops, possibly by ChatGPT version 6.0. With true AGI quickly approaching, we had better hurry up and solve the alignment problem, or prospects for team human are bleak.
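A quick back-of-envelope check of the scaling claim above: the neuron and synapse counts are the figures stated in the comment, while the current model size is a hypothetical order-of-magnitude assumption, not a figure from the original.

```python
# Figures from the comment: ~86 billion neurons, ~1,000 synapses each.
neurons = 86e9
synapses_per_neuron = 1_000
brain_connections = neurons * synapses_per_neuron  # 8.6e13 total synapses

# Assumed (hypothetical) order of magnitude for a current frontier LLM.
current_params = 1e12
scaled_params = current_params * 100  # the comment's proposed 100x scaling

print(f"Brain synapses:              {brain_connections:.1e}")
print(f"100x-scaled model parameters: {scaled_params:.1e}")
```

Under that assumption, a 100x scale-up does land in the same order of magnitude as the brain's synapse count, which is presumably the intuition behind the claim.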

Simon Bettison

It very much feels like a new round of rhetorical reasoning from a new clade of Sophists. Dialectic, as it did then, so it does now: it decimates the digital.

These machines really do put the artificial into intelligence.

