AGI = Impossible
Zed
BillyONare Wrote:Zed, I have noticed that when I'm about to fall asleep I can VIVIDLY picture extreme images in my mind like I am actually seeing them. I can picture things when I'm thinking or reading a book, but it is more vague. The half sleep images morph into each other exactly like those videos of diffusion models working. They are often horrific in nature e.g. monsters, ugly faces.

Anyone else experienced this?

Regretfully, I've experienced it many times.
Pylon
BillyONare Wrote:Zed, I have noticed that when I'm about to fall asleep I can VIVIDLY picture extreme images in my mind like I am actually seeing them. I can picture things when I'm thinking or reading a book, but it is more vague. The half sleep images morph into each other exactly like those videos of diffusion models working. They are often horrific in nature e.g. monsters, ugly faces.

Anyone else experienced this?

I do. When falling asleep or immediately before awakening, I get vivid crystal-clear pictures. But when trying to lucid dream, things would start melting and becoming all fucked, so I stopped doing that when I realized it messed with REM sleep quality.

Also, regarding what you say about consciousness (the non-existence thereof), I perfectly understand your perspective. Did you enjoy any of Peter Watts' scifi? I thought Blindsight was ok.

Re: Penrose. I didn't say "look it up", but gave a book recc just as you did. Feel free to read it if you're so inclined. My argument is not contingent on microtubules alone, but on the current silicon transistor medium lacking information channels which are abundant in biological forms. Negroes being a particularly botched example does not disprove the mechanisms present in higher lifeforms such as ourselves.

I am open to any suggestions on how we could emulate actual intelligence digitally. The Diffusion thing is a good hint, but still heavily contingent on prompting and labeling of a finite dataset. So self-orchestration remains unresolved.
An Ancient and Unbound Evil Awakens...
BillyONare
Starfish is a beautiful book. I've read Blindsight and a couple others and thought they were just ok as you said. Imo "The Chinese Room" is a dumb argument.
Zed
Pylon Wrote:...but still heavily contingent on prompting and labeling of a finite dataset...

Must it be so? Pre-trained RL techniques *should* naturally enter in here, somehow - if only in the preparation of the training data. One needs to bootstrap such a thing to have some basic capacities and for that labeled training data is invaluable, but at a certain point - one should try to throw the thing against general puzzles, games, and mathematical problems - concretely, situations with verifiable correctness. And if researchers have discovered the right architecture, we should expect to see strong cross-domain capabilities.  For example: We would expect an AGI to be capable of proving most elementary mathematical theorems, playing chess/go at a decent level. Existing 'glacial' models can also be used to generate word problems, etc. If LLMs are, as suspected, a dead-end towards AGI directly - they will be invaluable in producing the labeled data necessary for such a task. Once the correct architecture is found and the model is bootstrapped, it should be trainable with relative ease.
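To make the 'verifiable correctness' point concrete, here is a toy sketch of what I have in mind. Everything in it (the task generator, the verifier, the stub policy) is a hypothetical stand-in rather than any existing system's code; the only point is that the reward comes from an exact check, not from labels:

import random

def make_task(rng):
    """Toy stand-in for LLM-generated word problems: verifiable arithmetic."""
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    return {"prompt": f"What is {a} + {b}?", "answer": a + b}

def verify(task, proposed):
    """Verifiable correctness: reward is 1 for an exactly checkable solution, else 0."""
    return 1.0 if proposed == task["answer"] else 0.0

def policy(task, rng):
    """Stub for the bootstrapped model; here it just guesses near the truth."""
    return task["answer"] + rng.choice([-1, 0, 0, 1])

def train(steps=1000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        task = make_task(rng)
        reward = verify(task, policy(task, rng))
        total += reward  # a real loop would run a policy-gradient update on this reward
    return total / steps

print(f"mean verifiable reward: {train():.2f}")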

As for a specific architecture suggestion --- I have to be a bit vague because my ideas are still murky hunches, but I believe it is desirable for both the input and output of the model to incorporate a kind of 'failure latent' recording a condensed summary of the model's confidence in its output at each time step. Almost akin to the RNN hidden state, but not quite - because training is done in the diffusion model style, as opposed to backprop through time... For example, in an LLM setting, this would be an exponential (and quite incomputable) tree of the branching token probabilities. In the convolutional world, a sort of 'summary' or condensed self-evaluation would suffice. With such an architecture, one would have to proceed by way of phased training: first the 'failure latent' is ignored (all weights utilizing it zeroed) and the thing is trained as a standard diffusion model; then, in the secondary training phase, the weights assigned to the 'failure' latent are unzeroed (but made relatively small) and trained to force the model to take into account its own estimate of its inaccuracy. From then on, one experiments with training the non-'failure latent' weights, the 'failure latent' weights, or perhaps both in conjunction. Something like this is how I would at least try to implement the self-optimizing aspect of intelligence alluded to in my first post in the thread.
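And to be concrete about the phased training -- purely a sketch of my hunch in PyTorch-flavoured code, with made-up module names, not a worked-out design -- the 'failure latent' pathway is zeroed and frozen in phase one, then unfrozen with tiny weights in phase two:

import torch
import torch.nn as nn

class FailureLatentDenoiser(nn.Module):
    """Toy denoiser that also reads and emits a 'failure latent' (hypothetical design)."""
    def __init__(self, dim=32, latent_dim=8):
        super().__init__()
        self.core = nn.Sequential(nn.Linear(dim + latent_dim, 64), nn.ReLU(),
                                  nn.Linear(64, dim))
        self.latent_gate = nn.Linear(latent_dim, latent_dim, bias=False)  # toggled per phase
        self.confidence_head = nn.Linear(dim, latent_dim)  # emits the next step's failure latent

    def forward(self, x, fail_latent):
        h = self.core(torch.cat([x, self.latent_gate(fail_latent)], dim=-1))
        return h, self.confidence_head(h)

def set_phase(model, phase):
    """Phase 1: the latent pathway is zeroed and frozen, so the thing trains as a
    standard diffusion model. Phase 2: the pathway is unfrozen with small weights,
    so the model learns to take its own estimate of its inaccuracy into account."""
    gate = model.latent_gate.weight
    with torch.no_grad():
        if phase == 1:
            gate.zero_()
        else:
            gate.normal_(std=1e-3)
    gate.requires_grad_(phase != 1)

model = FailureLatentDenoiser()
set_phase(model, 1)   # bootstrap as an ordinary denoiser
set_phase(model, 2)   # then let it condition on its own confidence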
Pylon
Zed Wrote:Must it be so? Pre-trained RL techniques *should* naturally enter in here, somehow - if only in the preparation of the training data. One needs to bootstrap such a thing to have some basic capacities and for that labeled training data is invaluable, but at a certain point - one should try to throw the thing against general puzzles, games, and mathematical problems - concretely, situations with verifiable correctness. And if researchers have discovered the right architecture, we should expect to see strong cross-domain capabilities.  For example: We would expect an AGI to be capable of proving most elementary mathematical theorems, playing chess/go at a decent level. Existing 'glacial' models can also be used to generate word problems, etc. If LLMs are, as suspected, a dead-end towards AGI directly - they will be invaluable in producing the labeled data necessary for such a task. Once the correct architecture is found and the model is bootstrapped, it should be trainable with relative ease.

As for a specific architecture suggestion --- I have to be a bit vague because my ideas are still murky hunches, but I believe it is desirable for both the input and output of the model to incorporate a kind of 'failure latent' recording a condensed summary of the model's confidence in its output at each time step. Almost akin to the RNN hidden state, but not quite - because training is done in the diffusion model style, as opposed to backprop through time... For example, in an LLM setting, this would be an exponential (and quite incomputable) tree of the branching token probabilities. In the convolutional world, a sort of 'summary' or condensed self-evaluation would suffice. With such an architecture, one would have to proceed by way of phased training: first the 'failure latent' is ignored (all weights utilizing it zeroed) and the thing is trained as a standard diffusion model; then, in the secondary training phase, the weights assigned to the 'failure' latent are unzeroed (but made relatively small) and trained to force the model to take into account its own estimate of its inaccuracy. From then on, one experiments with training the non-'failure latent' weights, the 'failure latent' weights, or perhaps both in conjunction. Something like this is how I would at least try to implement the self-optimizing aspect of intelligence alluded to in my first post in the thread.

You're correct about the "self-optimizing aspect of intelligence" being a missing piece which, once filled, will lead us well along the path.

Quote:throw the thing against general puzzles, games, and mathematical problems - concretely, situations with verifiable correctness

This fundamental reliance on "verifiable correctness" is precisely the issue. Everything we're doing right now depends on threading the needle to encode known data into some model without overfitting it. Even if we use LLMs for automatic labeling of further input, that's just an automation to make our task easier and improve the training volume, not an architectural improvement to the model itself. This is essentially solving a data compression problem with layers of matrix math, not too dissimilar from how we recall text or images. There's still the gap in agency, or self-instrumentation, though.

I'm talking about agency as an observable behaviour, not as some philosophical thing about "awareness" or "consciousness" -- let's do away with those for now. What it means for something to do things autonomously. If you remember the mid-2010s research into "Evolutionary Neural Networks", these sort of tried to solve the issue by simulating millions of iterations of a target game until the network eventually got it right and learned how to win through luck. The problem is, this is essentially nothing like how we learn to play games. Never in my life, even as a small child, did I learn a new game by smashing random inputs until I accidentally stumbled onto the objective -- that's absurd and a dead end, but it does illustrate a prior attempt at filling the agency gap.

To put it simply, you want something that can teach itself how to learn in response to arbitrary input data. This may sound like an absurd reductionism, but it appears to be a behaviour exhibited even by a petri dish of neurons, which is an important clue. Give it a game, and it figures out on its own not only how to win, but what it is to win the given game. No researchers manually defining goal parameters for individual tasks, like we do now even in "multimodal" models.

Regarding implementation details... I have several competing ideas -- the precise combination of RL and something entirely new remains unclear. I like your suggestion as well. I would need to create a few prototypes, which I worry would consume inordinate amounts of compute at the moment.

What I will say is that there appears to be something about "comprehensible input" that's important in biological neural nets, whilst being completely overlooked in digital models. In-vitro neurons can learn to play a game, or fly a plane, or whatever, just by being given a consistent input when doing the right thing, and being tortured with noise (static input) when failing. If I put you in front of a TV to play a game, and the only indication you failed at some task were white noise + 3 seconds of static, you'd figure it out pretty quick. I borrowed the term "comprehensible input" from linguistics, btw, since understanding what you're perceiving is an essential part of learning a new language (otherwise every weeb watching with subs would have learned to token-match Japanese by audio alone). So it seems like, in generality, being able to influence a predictable playing field is the carrot, and getting a totally unexpected sensory input is the stick. Something to look into when we've got the camps up and running. We can mathematically model the mechanism behind this (assuming it is computable), or augment existing techniques with biological surrogates. Brain-in-a-can GPUs.
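To illustrate that carrot/stick loop in silico (a deliberately dumb sketch; the 'predictability' score is a made-up proxy, not the actual in-vitro protocol): there is never an explicit reward, only the question of whether the resulting sensory input is structured or static.

import random

def feedback(success, rng, length=8):
    """Carrot: a predictable, structured signal. Stick: unstructured white noise."""
    if success:
        return [i % 2 for i in range(length)]        # steady, 'comprehensible' pattern
    return [rng.random() for _ in range(length)]     # static

def predictability(signal):
    """Crude proxy for 'comprehensible input': how well the signal repeats itself."""
    half = len(signal) // 2
    return -sum(abs(a - b) for a, b in zip(signal[:half], signal[half:]))

def run(episodes=2000, seed=0):
    rng = random.Random(seed)
    value = {0: 0.0, 1: 0.0}        # learned preference for each of two actions
    target = 1                      # the 'right thing' to do, unknown to the agent
    for _ in range(episodes):
        action = max(value, key=value.get) if rng.random() > 0.1 else rng.choice([0, 1])
        signal = feedback(action == target, rng)
        # The only training signal is how predictable the resulting input turned out to be.
        value[action] += 0.05 * (predictability(signal) - value[action])
    return value

print(run())   # the action that keeps the input structured ends up preferred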

If you're the Chosen One and reading this thread, feel free to run with this and @ me when you get your Turing Award. I hope you do it, ganbatte.
An Ancient and Unbound Evil Awakens...
turnip
BillyONare Wrote:Nietzsche mocked the stupid saying “I think therefore I am”. 

I am aware of that - Nietzsche did not deny that consciousness exists, either, but he did say that it is a surface. The issue with Descartes' cogito is that he confuses making judgements about thought for thought itself. Do you see how this is relevant?

If I understand you, I think you make the same mistake about time as in e.g. Zeno's paradoxes, which was clarified by Bergson. It's using a different kind of abstracted time than what occurs in experience, so this doesn't prove that actual motion or consciousness or whatever else occurs in lived time is impossible. The overall point is just that most of the discussion around this topic is shrouded in confused abstractions because psychological phenomena like "intelligence" and "consciousness" are not understood as physical and vital functions. Or maybe that isn't necessary and the singularity is immanent; I just don't believe that to be so, and am trying to articulate why.
KimKardashian
Lots of great points brought up. I'm not really into AI literature so some of it is bound to have gone over my head. Sorry for the ugly post with tons of quotations, but I wanted to respond to everything I found interesting.

Reply to Pylon:

Pylon Wrote:there is a fundamental difference between a biological medium and a transistor circuit medium. (Actually, several.) It has to do with the automatic biological ability to adapt to entirely new scenarios, versus the digital "glacial intelligence" which internalizes known patterns and replicates them. Without explicit guidance, a computational neural network exists in a state of perpetual disorientation.

This seems to reflect this passage in OP:
Quote:an emulator-monkey can only emulate, not originate; it will forever remain reactive and not proactive, by definition. Chasing treats, the monkey is not only unmotivated to gain agency, but fundamentally incapable – treats form all of its experience, its whole universe. All its motivations and fears revolve only around treats.

I'm wondering if this comes through. Maybe it was something about my style in OP that expressed it badly? I think the thread's going through my talking points, but in a different language.

But Pylon, extremely important to note here -- this state of perpetual disorientation exists only in relation to the human universe (human Umwelt), in which exist the real tasks we need an AGI to be able to solve. However, inside its own universe (LLM Umwelt), the AI is in fact perfectly oriented towards the only thing it knows -- that of maximising treats (successful word prediction). So we see, the perpetual disorientation depends on the perspective: there is no disorientation from the standpoint of the LLM itself.

This understanding is critical, because the core problem in creating AGI is exactly this -- for AGI to happen, two things are required:
1) the AI Umwelt needs to approach the complexity of the human Umwelt. Its sensory input (eg vision, touch, etc) needs to approach that of the human. Mind you, not the anatomical human, but the sum total of our Umwelt, eg infrared, ultraviolet, microscopic vision, etc; and not necessarily an AGI entity capable of personal observation (eg an individual robot), but merely access to such information (eg some tech-hivemind like Skynet). This is the easy part, a purely mechanical question.
2) the AI reward mechanisms (treats) need to approach the complexity of the human reward mechanisms. Successful word prediction is fine and dandy, and an extremely powerful tool, as demonstrated by LLMs, but it is obvious that human ingenuity is more than that. Eg Descartes wasn't motivated by predicting the next word when originating the idea of a coordinate system based on a buzzing fly inside his room. This is the tricky part.

Only these two are capable of producing an AGI that can solve human tasks and more.

If I've expressed this in an understandable way, I really think you will find this angle illuminating.

My own intuition is that the tricky part (2) is in fact the impossible part. While I acquiesce to this:
Margatroyper Wrote:The German Shepherd, the Golden Retriever, the Borzoi, the world had eons to cultivate them, but never bothered.
... that active pursuit is much faster than blind aimless evolution, making it attainable in a fraction of the time it took to blindly evolve. But that fraction requires the fleshing out of such a complexity of characteristics (reward mechanisms) that I genuinely think we would discover the need for an entire simulated universe a la Matrix, in which an AGI can develop. This is way beyond any current possibility. And even so, it's not a guarantee that the AGI Umwelt would reach ours. The smallest of differences in the simulated world may cause motivations inconsistent with the needs of the real world.

Quote:The most interesting idea here: The persistence of a bio-electric signal between parent and progeny, tracing through time all the way back to the first instance. This is a known phenomenon, not a hypothesis.

I believe this reflects my fundamental intuition, that life contains within it an agency, stemming from its reward mechanisms, and that it can only be reproduced via going through the long process of evolution (even if consciously directed, possibly in a simulation). This means beginning from square one, on the smallest unit of construction, the cell. We can't cut corners and begin at the level of the brain.

Quote:But when trying to lucid dream, things would start melting and becoming all fucked, so I stopped doing that when I realized it messed with REM sleep quality.

I’ve only accomplished lucid dreaming accidentally, and I get too excited every time, so the dreamworld starts gradually disintegrating and losing its vividness/three dimensionality until I wake up. It’s like a countdown, and I only get about a minute of fun. It’s really frustrating, but funny at the same time.

After Zed I have more replies to Pylon.


Reply to Zed

Zed Wrote:Starting from those loose 'intuitions', it gradually refines an actual image into existence with almost miraculous results.

Yes, the initial germs of an idea are feelings. Like blobs of color that you feel in your chest, synaesthesia-like. There’s nothing verbal in the first stages.

But critically, I assume (I don’t know about these models) that such intuitions of the model become refined into an image simply via becoming more detailed, "higher resolution." That’s not exactly how it works with humans, which you also noted in:

Quote:I have the freedom to choose not only the next word, but to restructure any preceding sentence.

Fascinatingly, it’s a dialectical dialogue between the components of the initial intuition. They go through an evolution, not a simple sharpening. Their very structure changes as they evolve. I have a feeling that this does not occur with convolutional diffusion models. They instead simply flesh out the initially ambiguous structure.

Quote:One needs to bootstrap such a thing to have some basic capacities and for that labeled training data is invaluable, but at a certain point - one should try to throw the thing against general puzzles, games, and mathematical problems - concretely, situations with verifiable correctness.

This relates to my reply to Pylon. Situations with verifiable success are those in which the success criterion is the sole motivating factor for the AI, its sole reward mechanism. Eg in a racing game, you reward crossing the finish line in the least amount of time. In turn, the picking of such success criteria is what forms the foundation of the reward mechanisms of the AI, ie determines whether it will ever reach AGI or not.
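To spell out the asymmetry (illustrative code only, hypothetical function names): the racing criterion can be written down in a couple of lines, while the "real world" criterion is exactly the part nobody knows how to fill in.

def racing_reward(crossed_finish: bool, elapsed_seconds: float) -> float:
    """A fully specifiable, verifiable success criterion: finish, and finish fast."""
    return 1.0 / elapsed_seconds if crossed_finish else 0.0

def real_world_reward(state) -> float:
    """The criterion this whole argument is about; no one knows how to write this body."""
    raise NotImplementedError("what counts as success for a human Umwelt?")

print(racing_reward(True, 92.4))   # trivially computable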

To rephrase my response to Pylon in these terms: the success criteria need to be such that they reflect the complexity of orienting in the real world. No racing game or mathematical problem comes close; the world doesn’t submit to such operationalization.

My argument in these terms: it is impossible to pick the success criteria that could sufficiently reflect the complexity of the world.

And going beyond – what is the success criteria for humanity? If we’re honest, we can’t nail it down easily. Propagation of eugenic life? In which parameters, IQ? Surely there’s more to it. Infinite capacity for labor? Is there room for subjective happiness? Maybe combine the two and produce happy worker-ants? But does that sound enticing, or yeast-like? It’s quite hard to parameterize success criteria, I think.


Going back to Pylon, as this is contingent on the immediately above:

Pylon Wrote:This fundamental reliance on "verifiable correctness" is precisely the issue. Everything we're doing right now depends on threading the needle to encode known data into some model without overfitting it. Even if we use LLMs for automatic labeling of further input, that's just an automation to make our task easier and improve the training volume, not an architectural improvement to the model itself. This is essentially solving a data compression problem with layers of matrix math, not too dissimilar from how we recall text or images. There's still the gap in agency, or self-instrumentation, though.

Am I mistaken, or did you just express what I was trying to say immediately above, and in response to you also? (Except you don’t deem it an impossible feat)

Quote:Never in my life, even as a small child, did I learn a new game by smashing random inputs until I accidentally stumbled onto the objective -- that's absurd and a dead end, but it does illustrate a prior attempt at filling the agency gap.

This expresses the Chinese Room problem well.

Quote:To put it simply, you want something that can teach itself how to learn in response to arbitrary input data. This may sound like an absurd reductionism, but it appears to be a behaviour exhibited even by a petri dish of neurons, which is an important clue. Give it a game, and it figures out on its own not only how to win, but what it is to win the given game. No researchers manually defining goal parameters for individual tasks, like we do now even in "multimodal" models.

Exactly. It seems the internal reward systems in living tissue are inherent already on the cellular level. It’s interesting that they show a predilection towards seeking order, even when it’s irrelevant to their wellbeing. That qualifies as agency – a behavior not immediately related to wellbeing. Below in my reply I go more into this (you can search by keyword: scrimshaw).

Quote:In-vitro neurons can learn to play a game, or fly a plane, or whatever, just by being given a consistent input when doing the right thing, and being tortured with noise (static input) when failing.

This is extremely fascinating. It seems to lend credence to the idea that all life drives towards homeostasis, simply going by the logic that
A: I exist under current conditions
B: behavior X predictably reproduces current conditions
C: therefore behavior X is most congruent with my existence


Reply to turnip:

turnip Wrote:Which is not how creative processes work, and things can obviously build upon themselves without perfect self-understanding

This starts sounding like some society-wide emergentism, which is why I mentioned the Absolute, the noosphere, the superorganism before. But I believe this is strictly separate from an AGI, which should not denote a society-wide phenomenon, but a distinct entity. One could argue that we already have some society-wide intelligence, hivemind, noosphere-like collective intelligence.

Quote: just as you do not need perfect information about how something works to know that it works

Any nigger knows that smartphones work, and is able to use them. The production of smartphones very much reqires perfect information (doesn't have to be inside a single person).

Quote:that consciousness is something like the "capacity for lying/deception." [...]

That’s an interesting thought. As you say, deception entails the differentiation between the original and experience, which basically entails theory of mind (my experience != someone else’s, allowing for the possibility of deception).

I would posit a similarly quintessential yet simple criterion, but for agency: behavior that is inexplicable with regard to immediate gains. A goatherder furnishing a flute out of bone and learning to play it is almost inexplicable in terms of immediate gains. It provides neither nutrition nor warmth. A great example of such is scrimshaw, engraved by sailors on faraway seas:
[Image: 1004_scrimshaw-1.jpeg]

I believe the same immaterial motivations lie behind all of the greatest human achievements, revelations and inventions. None of them were motivated by immediate gain. The drive comes for its own sake. This is the essence of the reward mechanisms I’ve described in this reply, above. It’s going to be hard to nail down the mechanisms that produce such drives.

If AI should express behavior that would be similarly inexplicable in terms of immediate gains, we should pay very close attention, because we might be dealing with a burgeoning agency – the beginning of AGI. Alternatively, could be simply undefined behavior, in code-speak.

Quote:Their mind works more like a neurotic autocomplete algorithm.

There is absolutely nothing autocomplete-like in any organism, regardless of how retarded. Dysgenic IQ decline will not produce LLM-like people – they’re fundamentally different things.


Reply to BillyONare

BillyONare Wrote:I was illustrating that the greatest feats of computation genius were not the result of random noise generation and selection pressure. Which is the very definition of intelligence, being able to observe and act upon your environment in a way that is NOT random or indiscriminate. That’s why OP is so dumb; he keeps talking about trillions of gigabytes of reinforcement training as if that’s relevant to the topic.

You seem to be adamantly ignoring the fact that it’s the training that produced the intelligence capable of genius computation. To assume you can accomplish such computation without training seems like folly.

Quote:It’s just your brain being aware that you exist. You claim to be conscious, but how do you know that? Were you conscious yesterday?

Sorry this is just pygmy-like. Yes, your memories may well be hallucinations. Mind: blown. I am now unconscious.

This is why people need some basic Kant. There are things which it is simply retarded to doubt. Yes, you may well be a Boltzmann brain. So fucking what? Your experience (phenomena) still is as it is regardless of what is beyond it (noumena) or what it was before.

Quote:Nietzsche mocked the stupid saying “I think therefore I am”.

There exists a minimum which is undeniable.

Quote:The half sleep images morph into each other exactly like those videos of diffusion models working.

Interestingly, for me, these images go hand in hand with short-term memory loss. Any train of thought I have gets sliced and deleted. Sometimes I can reconstruct the train, but the closer I am to sleep, the more often they get sliced, until even attempts at reconstruction become impossible, silence ensues, and I lose consciousness.

Quote:Imo "The Chinese Room" is a dumb argument.

Why? It demonstrates how the lack of intuitive understanding (comprehension) stands in the way of originating genius. Have you ever spoken to a pseud that can parrot the jargon and ideas of some field, while being a painful midwit? It’s the same. A genius is probably less capable of communication, seeing that they’re likely autistic, low in empathy, and disagreeable, and yet they have an immeasurably higher intuitive understanding of the concepts, especially in relation to their communication abilities. This is why the Turing test isn’t optimal. Communication != comprehension. The Chinese Room gets to the core of this, in my opinion, and LLMs excel in communication.
BillyONare
Quote: You seem to be adamantly ignoring the fact that it’s the training that produced the intelligence capable of genius computation. To assume you can accomplish such computation without training seems like folly.

You are just asserting without evidence, over and over, that training is required to generate intelligence, as well as asserting that the trillions of terabytes of natural selection are superior to training based on terabytes of information that humans consider useful from all of history and all over the universe, which is pre-processed into useful concepts (by humans) rather than starting from the point of random noise from an organism’s nearby environment, which is dubious. I could just as well say that your assumptions are folly and mine are just common sense.

Quote: There exists a minimum which is undeniable.

I just denied it with very good reasoning.
BillyONare
Re: The Chinese Room. The “algorithm” that the guy follows is clearly what is intelligent, the man himself is just machinery following the algorithm like your tongue. It’s like saying that you don’t understand anything because your tongue doesn’t. Completely retarded midwit idea. The Turing Test is dumb too, which we can agree on.
KimKardashian
BillyONare Wrote:You are just asserting without evidence, over and over, that training is required to generate intelligence, as well as asserting that the trillions of terabytes of natural selection are superior to training based on terabytes of information that humans consider useful from all of history and all over the universe

You underestimate the difficulty of separating the
a) "trillions terabytes of natural selection"
from
b) "information that humans consider useful from all of history and all over the universe"

You're basically saying "just generate the exact input that would produce superintelligence as output, DUH." Massive understatement. How do you figure out the input? That's the entire question.

Quote:The “algorithm” that the guy follows is clearly what is intelligent, the man himself is just machinery following the algorithm like your tongue.

Weird, previously you seemed to question whether the algorithm is intelligent. Now that you do consider it intelligent, we can go back to this question:

KimKardashian Wrote:ChatGPT has read 560GB of text, 300 billion words from all the greatest minds, it should be a sage beyond the wisdom of any single human who ever lived. What has come of it? A scientific breakthrough? New form of governance? A new art paradigm? Has it solved the energy crisis? No? Oh right, it's a fucking autocomplete.

Seems your Chinese Room's intelligence isn't worth a shit after all.
turnip
KimKardashian Wrote:
Quote:The production of smartphones very much requires perfect information (doesn't have to be inside a single person).

That cannot be the case if electrical engineering is based on quantum physics. Another example is how there is e.g. no complete theory of how lift works. Engineers didn't need some kind of absolute knowledge of how it worked (if such a thing is possible), they just knew that it worked.

Quote:I believe the same immaterial motivations lie behind all of the greatest human achievements, revelations and inventions. None of them were motivated by immediate gain. The drive comes for its own sake. This is the essence of the reward mechanisms I’ve described in this reply, above. It’s going to be hard to nail down the mechanisms that produce such drives.

There is the obvious point that intelligence or "consciousness" must exist in relation to a world, and for this something must first have a world (not necessarily ours). My point is a bit more specific, that I think a real experience of the world is built upon whatever goes on in the living micro-physical depths of the unconscious, of which consciousness is merely the surface-texture. If that is correct then the issue, as @ssa has also identified, is with the existing paradigm of computing; I think if one found a way to machine high-level programs at the level of molecular physics, that could do it, or perhaps even trigger the kind of cyber-positive feedback loop envisioned by Land, who knows (I may even be wrong about this, but it seems empirically reasonable to me based on how human psychology and desires work). I just don't think that's what AI is currently doing, and it may not be within the scope of modern science.

Quote:This is why people need some basic Kant.

To be clear, your points are very metaphysical and un-Kantian, however much you think it makes empirical sense to claim that AI is impossible because it can't develop special immaterial desires because only humans can have special desires because our experience of the universe is special, and we know it's special because only we have special desires. Like I said, I don't expect you to convince anyone by just repeating circular, ungrounded reasoning, but you're free to do so.
Pylon
KimKardashian Wrote:
Pylon Wrote:This fundamental reliance on "verifiable correctness" is precisely the issue. Everything we're doing right now depends on threading the needle to encode known data into some model without overfitting it. Even if we use LLMs for automatic labeling of further input, that's just an automation to make our task easier and improve the training volume, not an architectural improvement to the model itself. This is essentially solving a data compression problem with layers of matrix math, not too dissimilar from how we recall text or images. There's still the gap in agency, or self-instrumentation, though.

Am I mistaken, or did you just express what I was trying to say immediately above, and in response to you also? (Except you don’t deem it an impossible feat)

You are correct. I am expressing a similar idea on agency, and indeed I believe that it is possible to achieve.

Although your conclusion and the thread's title are wrong, I steelman a part of your argument on the basis of empirically observable phenomena. You got caught up in a lot of "circular metaphysics" as others have pointed out, rather than focusing on the important aspects of demonstrable agency. "Consciousness" or "Self-Awareness" are irrelevant to solving the problem.

Once you've understood the mechanisms and what exactly it is we're trying to emulate, you'll realize that -- with a feat of human genius -- it will prove possible. Granted, it won't be coming from OpenAI or any of the other current-wave ventures. It will come from one of us. They'll send Delta Force to kill me because they fear what I'll order AGI to do.


KimKardashian Wrote:
Quote:The most interesting idea here: The persistence of a bio-electric signal between parent and progeny, tracing through time all the way back to the first instance. This is a known phenomenon, not a hypothesis.

I believe this reflects my fundamental intuition, that life contains within it an agency, stemming from its reward mechanisms, and that it can only be reproduced via going through the long process of evolution (even if consciously directed, possibly in a simulation). This means beginning from square one, on the smallest unit of construction, the cell. We can't cut corners and begin at the level of the brain.

Yes, it is an attribute exhibited by life-forms. No, it is not necessary to retrace the entire process from square one. As the prospective creators of such an organism, we already encapsulate the pinnacle of that whole process within ourselves, and we have the ability to impart it upon new life. Just as discriminating Aryan intellect separates the good breeds from the bad, accelerating the default process of muh-volution to create resplendant stallions, bear-fighting hounds, and appealing women; we can kickstart an AI with a pre-conceived architecture, an ideal training environment, and curated data to shape the ultimate result.


KimKardashian Wrote:
Quote:To put it simply, you want something that can teach itself how to learn in response to arbitrary input data. This may sound like an absurd reductionism, but it appears to be a behaviour exhibited even by a petri dish of neurons, which is an important clue. Give it a game, and it figures out on its own not only how to win, but what it is to win the given game. No researchers manually defining goal parameters for individual tasks, like we do now even in "multimodal" models.

Exactly. It seems the internal reward systems in living tissue are inherent already on the cellular level. {...}

Quote:In-vitro neurons can learn to play a game, or fly a plane, or whatever, just by being given a consistent input when doing the right thing, and being tortured with noise (static input) when failing.

This is extremely fascinating. It seems to lend credence to the idea that all life drives towards homeostasis, simply going by the logic that
A: I exist under current conditions
B: behavior X predictably reproduces current conditions
C: therefore behavior X is most congruent with my existence

That's the idea. The next question is, how do we simulate this? The fact that each individual cell does have its own reward mechanisms, whilst also entering into a cooperative bio-electric harmony with the rest of the organism, is like a fractal of neural networks. One individual cell alone is like several layers, not a single tensor. There must be some high-order approximation that leads to good results with acceptable performance. We can also explore new hardware paradigms and bio-computing in tandem.
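As a crude sketch of that fractal idea (toy PyTorch code, made-up names, not a claim about how real cells compute): each cell is a small network in its own right, and the cells are coupled only through a shared field.

import torch
import torch.nn as nn

class Cell(nn.Module):
    """Each 'cell' is itself a small network -- several layers, not a single tensor."""
    def __init__(self, dim=16):
        super().__init__()
        self.inner = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, x, field):
        # Local computation plus coupling to the shared 'bio-electric' field.
        return self.inner(x + field)

class Tissue(nn.Module):
    """A sheet of cells coupled through one shared signal: a network of networks."""
    def __init__(self, n_cells=8, dim=16):
        super().__init__()
        self.cells = nn.ModuleList(Cell(dim) for _ in range(n_cells))

    def forward(self, xs):
        field = torch.stack(xs).mean(dim=0)            # crude stand-in for the shared field
        return [cell(x, field) for cell, x in zip(self.cells, xs)]

tissue = Tissue()
outs = tissue([torch.randn(16) for _ in range(8)])     # one state vector per cell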





BillyONare Wrote:
Quote: You seem to be adamantly ignoring the fact that it’s the training that produced the intelligence capable of genius computation. To assume you can accomplish such computation without training seems like folly.

You are just asserting without evidence, over and over, that training is required to generate intelligence, as well as asserting that the trillions of terabytes of natural selection are superior to training based on terabytes of information that humans consider useful from all of history and all over the universe, which is pre-processed into useful concepts (by humans) rather than starting from the point of random noise from an organism’s nearby environment, which is dubious. I could just as well say that your assumptions are folly and mine are just common sense.

Billy brings up a correct point here, which I've incorporated above. It's not starting from random noise or from zero; WE are the ones discriminating here. Even an LLM is essentially primed to be decent merely because we exist to feed it good data. On this basis, we now have well-trained models that fit on an RTX 3080 whilst surpassing mainframe AIs from 2 years ago.




turnip Wrote:My point is a bit more specific, that I think a real experience of the world is built upon whatever goes on in the living micro-physical depths of the unconscious, of which consciousness is merely the surface-texture. If that is correct then the issue, as @ssa has also identified, is with the existing paradigm of computing; I think if one found a way to machine high-level programs at the level of molecular physics, that could do it, or perhaps even trigger the kind of cyber-positive feedback loop envisioned by Land, who knows ...

SSA and I are in concordance on these matters.
An Ancient and Unbound Evil Awakens...
KimKardashian
There's been a bent to read metaphysics into me, but I've made exactly 0 (zero) metaphysical arguments against AGI. I think what threw @turnip off last time was my use of the word "immaterial," which was meant strictly by the first meaning:

[Image: Untitled.png]

I guess my bad for using an ambiguous word, but I assumed the context was obvious enough, seeing I was speaking of goatherder fluting that has zero immediate gains like warmth or nutrition. The whole point was to demonstrate the complexity of the human reward system, which can't be easily formulated as some "get food, reproduce" or any other simplified yeast-life formula, by which people seem to assume we can reproduce human-like intelligence.


turnip Wrote:AI is impossible because it can't develop special immaterial desires because only humans can have special desires because our experience of the universe is special, and we know it's special because only we have special desires.

But that's not the line of reasoning at all. There is nothing special about human experience. The human Umwelt was produced via evolutionary brute forcing, and is adapted towards solving the relevant tasks. For an AI to qualify as AGI, it needs to be able to solve these same tasks, which in turn would require an Umwelt at least as complex as ours. For this to happen, you need to package together a) the necessary sensory information and b) reward mechanisms, which together constitute our Umwelt. The formation of such a package will quickly bog you down in difficulty. @BillyONare seems to assume we can easily figure out what belongs there analytically, with no evolutionary brute forcing required. But it doesn't take genius to baselessly presuppose the possibility of a solution, while proposing zero attempts at actually doing it. 

My position is there is no such feasible analytic approach, and the only alternative would be to simulate an evolutionary selection similar to ours. But that would of course require a simulation approaching the complexity of our world -- simulations of vidya or math problems won't suffice. Trouble is, such a simulation is impossible, since you cannot mimic the complexity of the whole based only on a fraction of it.

Do not underestimate this question. None of us has any idea how to produce a reward mechanism that could spontaneously produce the scrimshaw, or idle in a room pondering the flight of a fly and spontaneously originate the coordinate system. We've exactly zero idea what goes on behind human ingenuity.

And spontaneity is the key word here. It is absolutely critical that Newton's intellectual product wasn't the result of some dronish and yeast-like pre-defined goal oriented problem-solving, but of spontaneous fascination and an utterly unfathomable obsession with the subject matter. Zero immediate gain involved, zero practicality -- he didn't do it for pay or status; he was ridiculously aloof in practical matters (immaterial in the first meaning). I'm willing to bet my balls it's got to do with an aesthetic sense, the pleasure gained from it (NB: we're talking reward mechanisms, zero spirituality invoked). How do you reproduce this?

Go ahead, propose analytic approaches. This is the real obstacle course we need to pass for AGI.
turnip
KimKardashian Wrote:...

You claim to have read Kant, you should be able to at least see why I'm calling your arguments metaphysical. They are based on irrefutable strings of reasoning about how the "human reward mechanism" is the only thing that could possibly produce intelligence, and moreover that this can only be recreated through natural evolution because of something unique about how humans experience the universe. 

I wasn't referring to your claim that consciousness is having "immaterial desires," which I think is bad for other reasons. For one, animals often exhibit strange behaviors at different stages of life, or may undergo distant migrations. This is not just a factoid, but the exact point that Freud cites to substantiate his "death drive" theory, which is in large part the basis of accelerationism, via the notion that things are guided by some (spatially or temporally) distant unconscious agency. Again, you don't need to agree with this, but it would've been nice if you were at least familiar before making all these claims. 

I think consciousness is something like awareness that the images of experience refer to something else, in part because this implies that consciousness by itself is not necessarily a good thing - this same mechanism leads to all kinds of dysfunction and retarded voodoo magical thinking about spirits and demons and "mental illness." This is back to Nietzsche's point that consciousness is a very crude and recently evolved function, and to Plato's hope of effecting a transformation of consciousness through a standard/selection by which to distinguish images from originals. 

@BillyONare's observation that niggers don't have the microtubules Penrose thinks are essential for consciousness is funny, but this should surprise none of us. I think there is, in point of fact, an immense difference between creating something that mimics a dalit, and something seems living and sentient in a real way, let alone something that transcends humanity or approximates genius. And this, in turn, is not because "human desires" generally have any magical essence about them.
KimKardashian
turnip Wrote:You claim to have read Kant, you should be able to at least see why I'm calling your arguments metaphysical. They are based on irrefutable strings of reasoning about how the "human reward mechanism" is the only thing that could possibly produce intelligence, and moreover that this can only be recreated through natural evolution because of something unique about how humans experience the universe.

The issue here is that you seem to consider intelligence as existing independently of human value judgements. We consider something intelligent only to the extent it is able to attain products which correspond to our notions of success: freedom, power, longevity, entertainment, etc. All of these derive from our evolutionary past, and make up our Umwelt. That is all there is to pattern recognition -- an ability to produce the good. Our Umwelt is the only measure of intelligence there is, for all cases. For example we may consider the eusocial species intelligent, because while an individual bee might not pursue such products, they all work in tandem to produce these for the brood as a whole. There is simply no other basis for intelligence than our Umwelt, and it is a misunderstanding to suggest otherwise.

For those here who may be tempted to propose that an artificial superintelligence would produce behavior so intelligent as to be fundamentally incomprehensible to us: this is nonsensical reasoning. To posit superintelligence behind incomprehension is utter retardation -- anything may be superintelligent by that measure: an ant, a stalagmite, "you just cannot comprehend it." Again, there is simply no basis for intelligence other than our Umwelt. To put it in terms from your quote: there is no intelligence outside human reward mechanisms.

I hope it is now clear why an AGI must be reflective of our Umwelt.
Pylon
KimKardashian Wrote:I made precisely zero (0) metaphysical arguments.
KimKardashian Wrote:Umwelt Umwelt Umwelt Umwelt Umwelt Umwelt Umwelt Umwelt
Umwelt Umwelt Umwelt Umwelt Umwelt Umwelt Umwelt

. . .


Quote:For those here who may be tempted to propose that an artificial superintelligence would produce behavior so intelligent as to be fundamentally incomprehensible to us: this is a nonsensical reasoning. To posit superintelligence behind incomprehension is utter retardation -- anything may be superintelligent by that measure: an ant, a stalagmite, "you just cannot comprehend it."

I hope you can understand why this is egalitarian garbage. My own thoughts are incomprehensible to a nigger, yet even they can generally recognize when something is superior to themselves. "Daaaamn whitey you smart." Inversely, the nigger's own retardation is beyond my ability to fathom, yet by no means is my incomprehension (my inability to empathize unless I give myself a serious TBI) a hint of the negroe's superintelligence. This argument makes no sense.

Intelligence is exhibited by all kinds of other lifeforms, independently of muh Umwelt, at all times. I can recognize it particularly among ambush predators, pack hunters, and many avians. There is something ephemeral to it, true, but by no means is it specifically human-centric (as if such a thing as an indivisible "human Umwelt" existed). I don't place much faith in the idea "FOOM", primarily due to thermodynamic limitations, but an entity that is fundamentally smarter than most humans would not be particularly difficult to create. Moon cricket watching ChatGPT solve a linear algebra problem: damn nigga, this incomprehensible to me nigga, this shiet don't vibe with my umwelt nigga
An Ancient and Unbound Evil Awakens...
KimKardashian
It's a bit of a kneejerk response, @Pylon, I don't think you gave my post a charitable reading. You may also want to google "Umwelt," and if you remain unconvinced it's not a metaphysical term, you might also wanna google "metaphysics."

Pylon Wrote:My own thoughts are incomprehensible to a nigger, yet even they can generally recognize when something is superior to themselves. "Daaaamn whitey you smart."

A nigger is able to respect your intelligence only to the extent it conforms to his own Umwelt. Any residual intelligence beyond that is incomprehensible for him. For example, being that self preservation definitely belongs to the nigger Umwelt, he will well appreciate your machine guns. On the other hand, because generalized trust does not form a part of his Umwelt, he will blatantly have no appreciation for the intelligence behind it. Intelligence for him will always be dog eat dog, and anyone attempting cooperation will be the naive unintelligent loser in the interaction. It is not a choice on the niggers part, but merely a limitation of his inability to muster enough of a theory of mind -- a hard cap on his Umwelt.

The thing about this "residual intelligence" that remains beyond one's Umwelt is that the less intelligent party has no process of discerning whether the residual in fact contains any intelligence at all. To demonstrate, consider a scenario in which an "incomprehensibly superintelligent AGI" named The Singualirty is given control of the world and initiates its program:

[Image: 414948628-1089604278836276-7640806652339550354-n.png]

Do you hop on the train? You cannot even discern whether The Singualirty has any intelligence behind it or is simply a malfunction, let alone whether it is benign or hostile. There is no solution to this; you have no process of figuring out. It will forever remain incomprehensible. You are the nigger here. Do you take the leap of faith and cooperate? Lets add even more spice: you find out the central processing facility turns humans into minced meat. But what if the incomprehensibly intelligent agent has figured out there is an afterlife, and becoming minced meat is the best thing you could do? You cannot know, and any such assumption would be baseless magical thinking. It is not an entirely new problem either, by the way.

The only method of speculation you have is the extent to which The Singualirty has conformed to your Umwelt previously. Maybe it cooked a bunch of tendies that you really liked? Maybe it cured cancer? So maybe indeed becoming minced meat is just another step of its great superintelligent and benign plan for humanity? Or maybe the tendies were only a ruse? Again, it is simply utter retardation to try and posit intelligence behind the incomprehensible. Magical thinking of the extreme sort. A retarded belief in some "inherent goodness" to something you cannot even begin to analyse -- that is metaphysical reasoning.

In assessing intelligence, one can only rely on the extent to which it is comprehensible to him, ie falls inside his Umwelt. Any residual beyond that is incomprehensible, and its intelligence indeterminable.




PS the nigger experience of both witnessing comprehensible white superiority in matters belonging inside the nigger Umwelt (all benefits of modernity), and simultaneously seeing the incomprehensibly inexplicable white behavior (eg poetry, altruism, wearing helmets, punctuality, manners, pacifism etc), produces a schizophrenic existence for the nigger. They cannot but both simultaneously respect and disrespect the white. This cognitive dissonance produces anger.
turnip
KimKardashian Wrote:I hope it is now clear why an AGI must be reflective of our Umwelt.

No, but what is clear is that your arguments have nothing to do with reality, just a bunch of circular definitions that you've invented. "I define intelligence as only something that has utility toward human goals, therefore non-human intelligence is impossible." Even according to this completely arbitrary and vague definition, non-human agencies could still be seen to have intelligence as long as we project the values of our "umwelt" onto them. It's fine if you want to incorporate phenomenology into your arguments, but try to actually say something instead of just hoping that if you recycle jargon enough it will confuse people.
KimKardashian
turnip Wrote:No, but what is clear is that your arguments have nothing to do with reality, just a bunch of circular definitions that you've invented. "I define intelligence as only something that has utility toward human goals, therefore non-human intelligence is impossible." [...] completely arbitrary and vague definition [...]

God. Pray tell, what is your definition for intelligent behavior? Let me hand hold you here for a few steps.
turnip
KimKardashian Wrote:God. Pray tell, what is your definition for intelligent behavior? Let me hand hold you here for a few steps.

I don't think "intelligence" is a mental state; it is a word that people use to describe a kind of differential that exists between different wills. To the extent that there is a shared cognitive space, abilities can be aggregated, measured, and described as intelligence (e.g. IQ/g), but there's no reason to think these are impossible to recreate, nor that they exhaust everything that could be described as "intelligent." You yourself admit that insects appear intelligent once human motivations are projected onto them.

Maybe things will get clearer once you explain how intelligence is really a metaphysical thing that exclusively belongs to the human ego, and you won't just be talking in circles like you've been doing this whole time, but I think I'm losing my will to continue with this.


