AGI = Impossible
KimKardashian
(Notes regarding updating the OP:
1. This is an entirely new OP I wrote on 04.01.24. The previous OP is in this post. All of the thread up to that post pertains to the old OP.
2. I will regularly update this OP either in response to replies or to my own thoughts, so that it always represents the strongest case available.)
__________________________________________________________________________

Here I demonstrate how creating an AGI either on par with or above human intelligence is a logical impossibility. I've tried to format this post in the clearest way I can, and numbered the arguments for ease of reference for deboonging the reasoning here.

The argument:

1. Intelligence is the measurable ability of any behavior to attain a given goal, whether food, passing a test, asteroid mining, etc.

2. To measure (prove) intelligence, you need to know the goal being pursued. Without it, you have no standard against which to measure ability, and the intelligence of the behavior remains unknown.
2.1. In unknown behavior, potential intelligence remains unprovable, and thus indiscernible from nonsense/stupidity.

This is the basis and the rest follows from it:

3. Measuring intelligence is limited by the observer's own intelligence, because he needs to figure out the goal pursued (2).

4. The goal of developing an AI is to produce provably intelligent behavior, which requires the engineer to monitor and cull any unknown behavior. Knowledge of goals is required (2).
4.1. If he sets goals for the AI himself, his intelligence will limit what the AI pursues.
4.2. If he lets the AI set goals for itself, any goal exceeding his intelligence results in unknown behavior (3).

5. It is not practicable to allow for unknown behavior to continue in the hopes that it turns out to be provably intelligent.
5.1. If it exceeds his intelligence, it will remain unknown (3, 6).
5.2. There are infinite ways for unknown behavior to produce nonsense, and only limited ways for it to turn out intelligent. It is prohibitively costly to try to brute-force it. This is how evolution produced human intelligence, and replicating that would take an immense number of iterations and immense complexity.
5.3. At best it becomes knowably intelligent, in which case the result is no better than following 4.

6. The engineer cannot take the AI's own word for the intelligence of behavior which he cannot prove himself. This would run into 5 and 5.2.

7. Therefore, the very process of constructing an AI consists of unavoidably culling any behavior beyond the engineer's own intelligence.

If this reasoning stands, building an AI above human intelligence is impossible. But what about AI on par with human intelligence? (This is the weakest part of my reasoning, but I feel there is some truth here that I perhaps cannot yet express convincingly.)

8. The engineer cannot replicate his entire intelligence inside the AI. At best human intelligence will remain an asymptote which the AI forever approaches.
8.1. He cannot know himself fully (there is no universal set that contains all sets), and so cannot replicate himself in the AI.
8.2. Any addition to the AI will simultaneously add to the engineer, making him unable to close the distance between the two.

NB: this disproves only the possibility of building such an intelligence, not that it may not exist elsewhere in the universe (aliums).
Mason Hall-McCullough
Very idiotic cope.

KimKardashian Wrote:The problem with AI is that at no point does trained emulation translate into comprehension.
...
This is where the cargo cult comes into play – regardless of how many times and how fast GPT can condense Marx into a haiku, at no point does comprehension appear.

What is comprehension if not the ability to summarize your entire post into a paragraph in half a second? Why doesn't ChatGPT "truly understand" what it has read? I can ask it questions about your post and it will give accurate and detailed answers. There's nothing special about human comprehension other than the magical significance you have assigned.

Quote:There is no dialectic involved, no evolution, no building on the genius of its previous output – which may have as well been a total diarrhea.

https://en.wikipedia.org/wiki/Long_short-term_memory
https://en.wikipedia.org/wiki/Incremental_learning
https://arxiv.org/abs/2312.06141
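
For example, here's a minimal sketch of the incremental-learning idea those links point at - hedged, assuming scikit-learn's SGDClassifier, with the toy data and true_w weights made up purely for illustration - where the model keeps updating on each new batch instead of being retrained from scratch:

Code:
import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental learning sketch: partial_fit updates the same model batch by
# batch, so each batch builds on what the previous ones already taught it.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -1.0, 0.5, 0.0, 2.0])   # made-up ground truth
clf = SGDClassifier()

for batch in range(20):
    X = rng.normal(size=(32, 5))
    y = (X @ true_w > 0).astype(int)
    clf.partial_fit(X, y, classes=[0, 1])        # no retraining from scratch

X_test = rng.normal(size=(500, 5))
y_test = (X_test @ true_w > 0).astype(int)
print(clf.score(X_test, y_test))                 # held-out accuracy after 20 updates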

Quote:The very possibility of AGI is pure magical thinking; specifically a cargo cult. The types of AI we currently have are the equivalents of infinite monkey theorem – ie an actor totally incapable of understanding its output. The only difference is that our infinite monkeys are trained: whenever they accidentally output a Tolstoy or Plato – or even just one comprehensible sentence – we reward them with treats, increasing the probability of more similar output. Over long enough training, we can have the monkeys emulate our syntax entirely.

The infinite monkey theorem describes an agent that generates random text. "The only difference is that our infinite monkeys are trained" - that inverts the defining characteristic of the infinite monkey theorem. AI does come to understand its input through its training, while a program that generates random text obviously does not.
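
For scale, a back-of-envelope sketch (assuming a 27-key typewriter: 26 letters plus space) of what untrained random typing is actually up against:

Code:
import math

# Odds that pure random typing - the actual infinite monkey setup, no
# training - produces even one short sentence in a single attempt.
sentence = "the quick brown fox"
p = (1 / 27) ** len(sentence)    # 19 keystrokes, 27 choices each
print(p)                         # on the order of 1e-28
print(math.log10(1 / p))         # ~27 orders of magnitude against

Training collapses that search space, which is exactly why trained monkeys are no longer the monkeys of the theorem.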

Even if you were describing an "infinite monkey" computer program that generated random characters over and over, why and how would a computer program possibly "cum, piss and shit on everything"?

Quote:We may say the human retains a permanent "causal initiative," which he can't shake off even if he wanted to.

Quote:While we got to train on the real stuff – universe – any AI we produce can only train on its one step removed derivative, ie whatever info of the universe comes through us.

Quote:It is not an argument based on control. It’s an argument based on the inescapability of the human paradigm.

"We have a very special thing called a 'causal initiative' inside our immortal souls."
"Heh, stupid AIs don't even touch grass, they're not even real."
"Humans. Are. Inescapable. Expect us."

Quote:The only possibility of an actual AGI besides human intelligence, is an agent that can train on the universe itself, like we did.

[Image: q9hco3.png]
BillyONare
Your neurons are stupider than trained monkeys, and your brain is just a set of neurons, how does that make you any smarter than monkeys throwing shit around? Inb4 midwit stuff like muh consciousness, muh soul, muh Chinese Box xD, muh creativity, muh meetyphysics.

Quote:The AI will forever remain inside a human-defined paradigm.

Read Gariepy.

Quote:Think of it this way – we, humans, have inherited the training data of over three billion years of life on Earth. How much is that in TB; trillions, fucktillions?

The result of that training:

“gibsmedat bix nood purple drank”

Training is not that important. Wait until midwit AI researchers understand what I understand. Oppenheimer did not train to create nuclear weapons, he just did it. THATS what intelligence is, to do things WITHOUT training. To do, in a matter of months, what would be IMPOSSIBLE no matter how many billions of years yeast-life has to evolve, and to change the course of history.
Zed
Some brief thoughts that I had actually intended to make a thread about.

It's worthwhile to think of intelligence in two forms - liquid and glacial.

Liquid intelligence is malleable and rapidly adapts to the ecosystem/environment. Human and animal intelligence is liquid. To fix ideas: if you are solving a physics problem, your brain is rapidly adapting around the problem, literally changing your neural structure as you proceed via trial-and-error convergence. Often you proceed by rapid failures - quickly learning from them and adjusting your strategy - before converging to a solution.

Modern deep learning is mostly preoccupied with glacial intelligence, intelligence that is vast in quantity (hence, glacier) but fundamentally inflexible. If an LLM cannot solve a question within a few prompts, it will usually never be capable of solving it. In fact, it has been shown theoretically that transformer-based models are not even capable of generalization. On the other hand, glacial intelligence is great at first-order analogies (which are a *weak* form of generalization).

Consideration of this dynamic leads to the following naive idea: there is some kind of inverse law at play between the scale of a model and its capacity for adaptation. In a certain sense, a related idea has been known in statistics for a very long time as the bias/variance tradeoff. Of course, there is a problem here - modern deep learning models actively disobey the B/V law, tending towards being highly overparametrized (yet not *overfitting*, which is seemingly miraculous). This has led researchers to observe the following phenomenon in large models:

 [Image: 1*oFhZTvXGUHyFAFIs3ZmEHQ.jpeg]

(By the way, test error is how the model performs on data it has not seen before - it is the standard metric for judging how good a model actually is)

Perhaps that might remind you of something...
[Image: https%3A%2F%2Fsubstack-post-media.s3.ama...20x960.png]
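
(If you want to see the classical tradeoff underneath all this, here is a minimal sketch - assuming numpy and scikit-learn, toy data made up - where train error falls monotonically with capacity while test error traces the U-curve:)

Code:
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Toy bias/variance demo: small models underfit, big ones overfit the noise.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 40).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.3, 40)
x_test = rng.uniform(-3, 3, 200).reshape(-1, 1)
y_test = np.sin(x_test).ravel() + rng.normal(0, 0.3, 200)

for degree in (1, 3, 5, 10, 20):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x, y)
    train_err = mean_squared_error(y, model.predict(x))            # keeps falling
    test_err = mean_squared_error(y_test, model.predict(x_test))   # falls, then rises
    print(degree, round(train_err, 3), round(test_err, 3))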

Anyways. I don't really want to talk about either the B/V tradeoff or the miraculous phenomenon of why overfitting doesn't happen in LLMs. They are weakly related, but I think the most interesting aspect of the dichotomy between liquid and glacial intelligence actually has little to do with either.

Rather than understand 'adaptation' as 'how well the thing performs in new situations' (eg, test error), it is better to understand it as how well the model responds to its own failures. I'm not speaking of training it or gradient descent or any such thing either, but of failure outside of the training phase - I'm speaking of the way that *failure* is part and parcel of animal intelligence; it is almost required to learn anything at all. Pavlovian responses are indeed the basis of all animal learning. Unfortunately, almost all work in ML is about building models that fail as little as possible - but this is wrongheaded for pursuing AGI. AGI, if it can be made at all, requires models where failure is allowed, but which are trained to *effectively* detect and gracefully optimize against their own failures. One way or another, I suspect that if there is a path to this, it will come from reinforcement learning and the older-style RNNs (now almost abandoned in favor of transformers). Or perhaps from diffusion models, a la Stable-Diff. Either way, transformers are probably a dead end.

Not that glacial models will not play a role - they may provide a wonderful bootstrapping tool - but the future pursuit of AGI must come via smaller and leaner models with architectures capable of self-detecting and self-optimizing against failures in real time.
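
A hypothetical toy sketch of the shape of that loop, in plain numpy - not a serious architecture, just a predictor that monitors its own running failure rate and raises its plasticity when failures spike, e.g. when the environment shifts under it:

Code:
import numpy as np

# Toy online learner that watches its own error stream and boosts its
# learning rate when failures spike - a crude stand-in for "self-detecting
# and self-optimizing against failure in real time".
rng = np.random.default_rng(1)
w = np.zeros(3)                      # the model: a bare linear predictor
err_ema = 0.0                        # running estimate of its own failure
true_w = np.array([1.0, -2.0, 0.5])  # made-up environment

for t in range(2000):
    if t == 1000:                    # the environment shifts mid-stream
        true_w = np.array([-1.0, 0.5, 2.0])
    x = rng.normal(size=3)
    err = true_w @ x - w @ x
    err_ema = 0.95 * err_ema + 0.05 * err ** 2
    lr = 0.01 * (1.0 + min(err_ema, 10.0))   # failure raises plasticity
    w += lr * err * x                        # plain online gradient step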
turnip
I'm not too interested in AI currently, but what does interest me is why no one ever seems to be able to articulate, in clear terms, why AGI will or will not be happening in the near future. I think it's mostly due to a flawed conception of the nature of life and intelligence.

As far as I know computers work by running code through static circuits. Life directly machines cognition at the level of molecular physics and chemistry. This is how organisms accomplish perception and thought (to the extent they think and feel as an individual). This is the basic insight of Kant that e.g. the laws of physics are not discovered in or constructed out of experience, the unconscious is physics, micro-physical and micro-logical. Reverse-engineering intelligence isn't just a matter of executing instructions in the right way, because intelligence isn't just solving puzzles.

I think artificial intelligence is possible, but it's really a question of how to create artificial, non-carbon-based life. If done properly, AI and computing could be akin to something like alchemy, an effort to re-unify fragmented domains of the psyche on a higher level, where it isn't clear yet what the lapis is; something like bringing matter to life, or unifying mind and matter (chemistry was already a step in this direction). Unfortunately it seems like AI research is being watered down into a trannyjew marketing meme, but hopefully I'm wrong. At any rate I think real progress on something this challenging would require the complete dedication of refined Aryan genius working toward some truly divine purpose like making anime real, and the current order of things is arranged to prevent this.
ssa Bug 
turnip Wrote:As far as I know computers work by running code through static circuits. Life directly machines cognition at the level of molecular physics and chemistry. This is how organisms accomplish perception and thought (to the extent they think and feel as an individual). This is the basic insight of Kant that e.g. the laws of physics are not discovered in or constructed out of experience, the unconscious is physics, micro-physical and micro-logical. Reverse-engineering intelligence isn't just a matter of executing instructions in the right way, because intelligence isn't just solving puzzles.

I think artificial intelligence is possible, but it's really a question of how to create artificial, non-carbon-based life. If done properly, AI and computing could be akin to something like alchemy, an effort to re-unify fragmented domains of the psyche on a higher level, where it isn't clear yet what the lapis is;
You're right on the money. Or rather, human cognition and sentience is even harder than that. Every neuron is in itself an information processing system of staggering complexity. Signals propagate as pulse chains, modulated in both frequency and amplitude, and their processing begins during propagation through the axon. Signal processing involves both electrochemical potentials and quantum coherence effects in neurotubules. Take all that, and scale it up to around 100 trillion synapses, propagating signals non-linearly in a dynamically evolving pattern, and you have a system that's impossible to model with traditional computation, regardless of how much silicon you throw at the problem. Getting a similar cognition artificially will require completely different computers. Current quantum computer architectures based on discrete qubits won't cut it.

However, getting a system that's capable of modelling itself, its environment, learning dynamically and setting its own goals is possible in silicon. In theory it's just an application of existing concepts: embedding arbitrary input signals into an abstract vector space, sorting discrete identities of that vector space into sets, and predicting future states of the internal model. Apply a reward mechanism based on the accuracy of those predictions, add some static output channels, and you have an architecture that is capable of, and driven towards, modelling its environment and its own potential outputs to achieve consistent predictions - this roughly approximates how simple biological neural networks work, and is a requirement for any system to have active agency.

This is of course simplified; you'd need a few more factors to prevent the system from settling into a local minimum - negative rewards based on prolonged static configurations, and probably also on the model's internal size. You need to incentivize efficient state modelling, since computational resources aren't infinite.
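
To make it concrete, a minimal numpy sketch of that loop under toy assumptions (a fixed random embedding, a linear next-state predictor, and a drifting observation stream standing in for the environment):

Code:
import numpy as np

# Sketch of the described architecture: embed observations, predict the next
# embedded state, reward prediction accuracy, and penalize both stagnation
# (a frozen internal state) and internal model size.
rng = np.random.default_rng(0)
embed = rng.normal(size=(8, 4))   # fixed random embedding: 4-dim obs -> 8-dim state
W = np.zeros((8, 8))              # internal model: predicts the next embedded state
lr = 0.05
prev_state = None
obs = rng.normal(size=4)

for t in range(500):
    obs = 0.9 * obs + rng.normal(0, 0.1, size=4)     # slowly drifting environment
    state = np.tanh(embed @ obs)
    if prev_state is not None:
        pred = W @ prev_state
        pred_err = np.mean((state - pred) ** 2)
        stagnation = np.exp(-np.sum((state - prev_state) ** 2))  # ~1 if state is frozen
        reward = -pred_err - 0.1 * stagnation - 1e-4 * np.sum(W ** 2)  # size penalty
        W += lr * np.outer(state - pred, prev_state)  # improve the internal model
        # (reward would drive a policy over output channels; here it is just the signal)
    prev_state = state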

I haven't implemented this properly yet, but I'll get around to it at some point. I can't make any statements on the scale of computation required for such a model to act coherently in real time until I do, but as a napkin figure you could probably get to the information and behavioral complexity of a cockroach using consumer-grade hardware.

turnip Wrote:real progress on something this challenging would require the complete dedication of refined Aryan genius working toward some truly divine purpose
Few get this.
Trevor Bauer
'Where's the heart?' And other sentimental yokelisms.
Pylon
BillyONare Wrote:Your neurons are stupider than trained monkeys, and your brain is just a set of neurons, how does that make you any smarter than monkeys throwing shit around? Inb4 midwit stuff like muh consciousness, muh soul, muh Chinese Box xD, muh creativity, muh meetyphysics.

Quote:The AI will forever remain inside a human-defined paradigm.

Read Gariepy.

Quote:Think of it this way – we, humans, have inherited the training data of over three billion years of life on Earth. How much is that in TB; trillions, fucktillions?

The result of that training:

“gibsmedat bix nood purple drank”

Training is not that important. Wait until midwit AI researchers understand what I understand. Oppenheimer did not train to create nuclear weapons, he just did it. THATS what intelligence is, to do things WITHOUT training. To do, in a matter of months, what would be IMPOSSIBLE no matter how many billions of years yeast-life has to evolve, and to change the course of history.

Absolutely correct. Intelligence IS spontaneity. Instantaneous, explosive, catastrophic spontaneity. Nothing that needs to be brute-force evolved or "prompted" will ever lead to superintelligence. Case closed.
An Ancient and Unbound Evil Awakens...
KimKardashian
Mason Hall-McCullough Wrote:What is comprehension if not the ability to summarize your entire post into a paragraph in half a second? Why doesn't ChatGPT "truly understand" what it has read? I can ask it questions about your post and it will give accurate and detailed answers. There's nothing special about human comprehension other than the magical significance you have assigned.

You have a stochastic parrot. It seems my appropriation of the infinite monkey theorem made the case so painfully obvious that we're not dealing with comprehension on the monkeys' part, that you've resorted to calling the difference magical. You know there's a difference between a stochastic parrot and comprehension. You don't know what the difference is. But any pointing out of the difference aggravates you into calling it magical, presumably from some notion that we should not point out the difference, because we can't explain it. There we have it, magical thinking: if we ignore the difference, we can pretend a stochastic parrot equals comprehension. Sorry, did I run afoul of the techno-coomer etiquette?

We see that a stochastic parrot isn't comprehension. Then what is? If we get down to it, we don't really know. This enrages the techno-coomer.

Quote:The infinite monkey theorem describes an agent that generates random text. "The only difference is that our infinite monkeys are trained" - that inverts the defining characteristic of the infinite monkey theorem.

That's just hilarious, because that's the point.

Quote:https://en.wikipedia.org/wiki/Long_short-term_memory
https://en.wikipedia.org/wiki/Incremental_learning
https://arxiv.org/abs/2312.06141

The problem went over your head. It's not the lack of memory, it's the lack of ability to recognize value in its output besides treats (which express human value). Without an internal ability to recognize value, it cannot innovate based on internal dialogue.

Quote:Even if you were describing an "infinite monkey" computer program that generated random characters over and over, why and how would a computer program possibly "cum, piss and shit on everything"?

The goal was to demonstrate how a monkey remains in the monkey paradigm, in which only treats exist, and the behavior which receives them. Chasing treats, it's necessarily blind towards any genius contained in its behavior, like the Chinese room. Btw this definitely runs afoul of the techno-coomer etiquette.

The stochastic parrot equivalent of a monkey's "cum, piss and shit" is the complete disregard it has for the genius it has already produced. If treats start coming for writing MLP and Sonichu crossover erotica, that's what it'll do, forever if need be. Nevermind all the Newtons and Darwins it has """"""""""""""""""""comprehended"""""""""""""""""""".

Quote:"We have a very special thing called a 'causal initiative' inside our immortal souls."
"Heh, stupid AIs don't even touch grass, they're not even real."
"Humans. Are. Inescapable. Expect us."

Seems to have flown over your head again. Can't even tell which way, since this time you didn't even say anything. I may elaborate that the human paradigm is the human Umwelt. Feel free to propose any way we can escape it.

Quote:What can be asserted without evidence...

The burden's on you to prove we're capable of producing a human-level intelligence.

Speaking of techno-coomers...
How many here know that the term AI was initially supposed to reflect human-level intelligence? But since techno-coomers are chronic premature ejaculators, they couldn't stop applying the term to any algorithm in the 1980s. And since all of these missed the mark, a new term had to be invented for actual human-level intelligence, that being AGI. We're seeing the same happen again.

I suggest we get ahead of the curve and come up with a new term for the next cycle, once every doofus realizes how the term AGI was butchered by applying it to LLMs. Maybe HI - Human Intelligence? Hmm but that's shortsighted, it would be really awkward to try and invent another term after even that gets butchered...
KimKardashian
BillyONare Wrote:Your neurons are stupider than trained monkeys, and your brain is just a set of neurons, how does that make you any smarter than monkeys throwing shit around? Inb4 midwit stuff like muh consciousness, muh soul, muh Chinese Box xD, muh creativity, muh meetyphysics.

Oh shit another of the same variety. I take it you have no response to these?

It's not even a phenomenological argument necessarily. Whether or not comprehension is possible without qualia is not the question. It's more that we can't fathom comprehension without them. And any sufficiently long conversation with a stochastic parrot is a reminder of its obvious lack of comprehension. Just because it's getting better at avoiding its pitfalls doesn't mean we get to ignore what those pitfalls have demonstrated.

Quote:Read Gariepy.

Uh, not gonna even give a title or anything?

Quote:Training is not that important. Wait until midwit AI researchers understand what I understand. Oppenheimer did not train to create nuclear weapons, he just did it. THATS what intelligence is, to do things WITHOUT training. To do what would be IMPOSSIBLE no matter how many billions of years yeast-life has to evolve in a matter of months and change the course of history.

I agree, but you severely misunderstand one thing: it wasn't the training data behind Oppenheimer, Newton etc. that caused the specific breakthroughs, but the internal reward mechanisms that drove them. It's these that gave them an obsessive motivation on the one hand, and an ability to recognize and value novelty on the other. In trying to come up with such inner reward mechanisms for an AI, you'd very soon get bogged down in difficulties. My guess: it won't take long until you discover that it would take about as much training data to produce such a reward mechanism artificially as it took for humanity.

How else would you have an LLM go through the Cartesian moment of idling on a couch, watching a fly buzz around the room, and coming up with the coordinate system?

Quote:The result of that training:

“gibsmedat bix nood purple drank”

Ha-ha niggers! But they've demonstrated the same reward mechanisms for novelty. They've partaken in the birth of music styles that have overtaken Western culture (eg blues, rock, hip hop, DJing, jazz, rap).

>inb4 muhh music
Call it low IQ if that makes you feel better, but a genre that spreads like wildfire is obviously innovation.
KimKardashian
Pylon Wrote:Absolutely correct. Intelligence IS spontaneity. Instantaneous, explosive, catastrophic spontaneity. Nothing that needs to be brute-force evolved or "prompted" will ever lead to superintelligence. Case closed.

Yes. And as per my previous post, you'll discover that this makes human intelligence more difficult to copy, not less. My argument is that in fact it makes it impossible.
KimKardashian
turnip Wrote:This is the basic insight of Kant that e.g. the laws of physics are not discovered in or constructed out of experience, the unconscious is physics, micro-physical and micro-logical. Reverse-engineering intelligence isn't just a matter of executing instructions in the right way, because intelligence isn't just solving puzzles.

I think artificial intelligence is possible

Kant's phenomenon vs noumenon excludes the possibility of artificial human-level intelligence. We can't even observe ourselves, so how are we gonna copy our intelligence?
turnip
KimKardashian Wrote:Kant's phenomenon vs noumenon exludes the possibility of artificial human level intelligence. We can't even observe ourselves, how are we gonna copy our intelligence then?
This point can work in the exact opposite direction, e.g. that AI is a cosmic force of unknown provenance and potentiality. In fact, that is the argument underlying accelerationism, which I disagree with for different reasons. But I don't think you were aware of that argument, since your point just seems to have been that AI can never surpass human intelligence because it was created by humans, which doesn't even follow on its own.

As an aside, I do disagree with the "intelligence = spontaneity" thing. Of course intelligence is heavily based on intuition, though this is relative to personality, and intuitive people are often anything but spontaneous. Rather, they are often like seers, detached from the present and lost in their own world of prophecy.
This is just to say that most of what the mind does is unconscious, and Freud was correct that the conscious mind is more like a filter/skin - naturally you will not reconstruct intelligence by only modeling its surface/skin. More importantly, intuitions might appear to be random (e.g. in hunches, visions, dreams etc.), but I believe they are generated out of ancestral memory.

Matter also has memory (it is memory, more specifically), thus I think that if an AI was created - if other elements were brought to life - it could indeed reveal some xenodemon intelligence hidden in matter, or whatever. I just don't think this will happen by accident. It will require new disciplines of science, and Aryan geniuses with the freedom to pursue real goals, like trying to alchemically bring their waifu to life or create a new reality. I don't think anything else will do, not Jewish marketing scams, or oldtroons who want to preserve their mind forever, or furries, or whoever else are leading the charge currently.
KimKardashian
turnip Wrote:This point can work in the exact opposite direction, e.g. that AI is a cosmic force of unknown provenance and potentiality. In fact, that is the argument underlying accelerationism,

I've not read Land, tell me more. Afaik, if "AI" is the telos of market interactions (or some Hegelian Absolute or noosphere) I wouldn't even call it AI. A superorganism sounds more apt.

"AI as a cosmic force" sounds just like.... God. What's wrong with calling it God? How's it artificial? It wouldn't be any more artificial than every other component of the transcendental subject (time, space, etc) (I've not read Kant in a long time).

Quote:your point just seems to have been that AI can never surpass human intelligence because it was created by humans, which doesn't even follow on its own.

Wdym doesn't follow?

Quote:I do disagree with the "intelligence = spontaneity" thing. [...] intuitions might appear to be random (e.g. in hunches, visions, dreams etc.), but [they aren't]

By spontaneity I (and I assume BillyONare too) meant unconscious intuition, not pure randomness. Pure randomness might not even exist, if we go by determinism (no idc about quantum jewry).
BillyONare
Your arguments depend on us believing superstitious Christianity e.g. meataphysics like “soul” and “noumenon”. The argument that humans can’t create something smarter than themselves does not follow because parents can have children that are smarter than them. Chimps created humans.
KimKardashian
BillyONare Wrote:Your arguments depend on us believing superstitious Christianity e.g. meataphysics like “soul” and “noumenon”. The argument that humans can’t create something smarter than themselves does not follow because parents can have children that are smarter than them. Chimps created humans.

There's literally nothing Christian about qualia or noumena.

Is your position that a glorified autocorrect is on par with human intelligence?

Tellingly, parents have children and don't create them. Chimps didn't create humans either, silly Billy!
BillyONare
Ok fair enough; you’re not making a Christian argument. Believing in God is much less superstitious than believing in qualia or noumena. Those are demons worshipped by redditors.
BillyONare
“Your brain is just a spaceship for invisible insects called “qualia” and other life forms are inherently inferior and don’t deserve consideration.”

Far crazier than Scientology.
turnip
KimKardashian Wrote:I've not read Land, tell me more. Afaik, if "AI" is the telos of market interactions (or some Hegelian Absolute or noosphere) I wouldn't even call it AI. A superorganism sounds more apt.

"AI as a cosmic force" sounds just like.... God. What's wrong with calling it God? How's it artificial? It wouldn't be any more artificial than every other component of the transcendental subject (time, space, etc) (I've not read Kant in a long time).

Land has an inverted view of time and causality where the future accesses the present through memories, in order to construct itself through time. Meaning that it builds itself through historical time, but is of an alien order of time. This is why death would seem to occur as it does, always rising from within but coming from without - the unconscious energy that is investing the whole process of life is indifferent to the particular skins and vessels that it must use to assemble itself. This agency supposedly betrays itself in phenomena like synchronicities, qabbalistic patterns, or "hyperstition." As I understand it isn't about determinism vs free will either, he means to posit something else, like degrees of determinism.

Agree or not, Kant's ideas, properly understood, should lead to rejection of naïve realism (i.e. all that you've been saying about reality having arbitrary limits on what is historically possible). In any case I brought up Kant not to say that I agree with him about everything, but to make a specific point about how cognition works, so I don't understand why you've jumped to talking about the soul or transcendental subject.
Quote:Wdym doesn't follow?
I mean logically. I don't think you will convince anyone by just insisting that it is so repeatedly.
KimKardashian
BillyONare Wrote:I am an autocorrect.

Wow amazing, now shoo.


