Lots of great points brought up. I'm not really into AI literature so some of it is bound to have gone over my head. Sorry for the ugly post with tons of quotations, but I wanted to respond to everything I found interesting.
Reply to Pylon:
Pylon Wrote:there is a fundamental difference between a biological medium and a transistor circuit medium. (Actually, several.) It has to do with the automatic biological ability to adapt to entirely new scenarios, versus the digital "glacial intelligence" which internalizes known patterns and replicates them. Without explicit guidance, a computational neural network exists in a state of perpetual disorientation.
This seems to reflect this passage in OP:
Quote:an emulator-monkey can only emulate, not originate; it will forever remain reactive and not proactive, by definition. Chasing treats, the monkey is not only unmotivated to gain agency, but fundamentally incapable – treats form all of its experience, its whole universe. All its motivations and fears revolve only around treats.
I'm wondering whether this came through. Maybe something about my style in the OP expressed it badly? The thread seems to be going through my talking points, just in a different language.
But Pylon, it is extremely important to note here: this state of perpetual disorientation exists only in relation to the human universe (the human Umwelt), which contains the real tasks we need an AGI to solve. Inside its own universe (the LLM Umwelt), however, the AI is in fact perfectly oriented towards the only thing it knows -- maximising treats (successful word prediction). So whether there is perpetual disorientation depends on the perspective: from the standpoint of the LLM itself, there is none.
This understanding is critical, because the core problem in creating AGI is exactly this -- for AGI to happen, two things are required:
1) the AI Umwelt needs to approach the complexity of the human Umwelt. Its sensory input (eg vision, touch, etc) needs to approach that of the human. Mind you, not the anatomical human, but the sum total of our Umwelt, eg infrared, ultraviolet, microscopic vision, etc; and it doesn't need to be an AGI entity capable of personal observation (eg an individual robot), merely one with access to such information (eg some tech-hivemind like Skynet).
This is the easy part, a purely mechanical question.
2) the AI reward mechanisms (treats) need to approach the complexity of the human reward mechanisms. Successful word prediction is fine and dandy, and an extremely powerful tool, as demonstrated by LLMs (see the short sketch after this list for what that treat boils down to), but it is obvious that human ingenuity is more than that. Eg Descartes wasn't motivated by predicting the next word when he originated the idea of a coordinate system from watching a buzzing fly inside his room.
This is the tricky part.
Only these two are capable of producing an AGI that can solve human tasks and more.
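To make the current "treat" concrete: a minimal sketch, in toy Python, of next-token prediction as the sole reward signal. The model object and its predict() call are hypothetical stand-ins, not any real library's API.

```python
import math

def next_token_loss(predicted_probs, actual_next_token):
    """Cross-entropy for one step: the treat is maximised when the model
    puts its probability mass on the token that actually comes next."""
    return -math.log(predicted_probs[actual_next_token])

def training_signal(model, tokens):
    """Sum the loss over a text. During training, this single number is the
    model's entire universe -- nothing else ever reaches it."""
    total = 0.0
    for i in range(len(tokens) - 1):
        probs = model.predict(tokens[:i + 1])  # distribution over the vocabulary
        total += next_token_loss(probs, tokens[i + 1])
    return total
```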
If I've expressed this in an understandable way, I really think you will find this angle illuminating.
My own intuition is that the tricky part (2) is in fact the impossible part. While I acquiesce to this:
Margatroyper Wrote:The German Shepherd, the Golden Retriever, the Borzoi, the world had eons to cultivate them, but never bothered.
... that active pursuit is much faster than blind, aimless evolution, making it attainable in a fraction of the time it took to evolve blindly. But even that fraction requires fleshing out such a complexity of characteristics (reward mechanisms) that I genuinely think we would discover the need for an entire simulated universe a la Matrix in which an AGI can develop. This is way beyond any current possibility. And even then, there's no guarantee that the AGI Umwelt would reach ours: the smallest differences in the simulated world may produce motivations inconsistent with the needs of the real world.
Quote:The most interesting idea here: The persistence of a bio-electric signal between parent and progeny, tracing through time all the way back to the first instance. This is a known phenomenon, not a hypothesis.
I believe this reflects my fundamental intuition: that life contains within it an agency, stemming from its reward mechanisms, and that this agency can only be reproduced by going through the long process of evolution (even if consciously directed, possibly in a simulation). That means beginning from square one, at the smallest unit of construction, the cell. We can't cut corners and begin at the level of the brain.
Quote:But when trying to lucid dream, things would start melting and becoming all fucked, so I stopped doing that when I realized it messed with REM sleep quality.
I’ve only accomplished lucid dreaming accidentally, and I get too excited every time, so the dreamworld starts gradually disintegrating and losing its vividness/three dimensionality until I wake up. It’s like a countdown, and I only get about a minute of fun. It’s really frustrating, but funny at the same time.
After Zed I have more replies to Pylon.
Reply to Zed:
Zed Wrote:Starting from those loose 'intuitions', it gradually refines an actual image into existence with almost miraculous results.
Yes, the initial germs of an idea are feelings, like blobs of color that you feel in your chest, synaesthesia-like. There's nothing verbal in the first stages. But critically, I assume (I don't know the details of these models) that such intuitions of the model become refined into an image simply by becoming more detailed, "higher resolution." That's not exactly how it works with humans, which you also noted in:
Quote:I have the freedom to choose not only the next word, but to restructure any preceding sentence.
Fascinatingly, it's a dialectical dialogue between the components of the initial intuition. They go through an evolution, not a simple sharpening; their very structure changes as they evolve. I have a feeling that this does not occur with convolutional diffusion models. They instead simply flesh out the initially ambiguous structure.
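To make the contrast concrete, here's a schematic toy of the kind of refinement loop I mean. It is not a real diffusion model (the 'intended' vector just stands in for the guidance a trained denoiser would provide), but it shows the shape of the process: the structure is committed early, and every pass only sharpens it.

```python
import random

def toy_sample(steps=50, size=16):
    """Schematic only: a real diffusion model uses a trained denoising
    network; here a fixed 'intended' vector stands in for its guidance."""
    intended = [random.gauss(0.0, 1.0) for _ in range(size)]  # committed up front
    image = [random.gauss(0.0, 1.0) for _ in range(size)]     # pure noise
    for step in range(1, steps + 1):
        blend = step / steps
        # Each pass pulls the canvas a little closer to the same underlying
        # structure: refinement of what is already there, never a restructuring.
        image = [(1 - blend) * px + blend * tgt
                 for px, tgt in zip(image, intended)]
    return image
```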
Quote:One needs to bootstrap such a thing to have some basic capacities and for that labeled training data is invaluable, but at a certain point - one should try to throw the thing against general puzzles, games, and mathematical problems - concretely, situations with verifiable correctness.
This relates to my reply to Pylon.
Situations with verifiable success are those in which the success criterion is the sole motivating factor for the AI, its sole reward mechanism. Eg in a racing game, you reward crossing the finish line in the least amount of time. In turn, the choice of such success criteria is what forms the foundation of the AI's reward mechanisms, ie it determines whether the AI will ever reach AGI or not.
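To show how narrow such a criterion is, the whole racing-game reward can be written down in a few lines. The names are made up purely for illustration:

```python
def racing_reward(crossed_finish_line, elapsed_seconds):
    """Hypothetical reward for a racing game: the agent's whole universe
    collapses into finishing, and finishing fast; nothing else exists for it."""
    if not crossed_finish_line:
        return 0.0
    return 1000.0 / elapsed_seconds  # a faster lap means a bigger treat
```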
To rephrase my response to Pylon in these terms: the success criteria need to be such that they reflect the complexity of orienting in the real world. No racing game or mathematical problem comes close; the world doesn't submit to such operationalization.
My argument, in these terms: it is impossible to pick success criteria that could sufficiently reflect the complexity of the world.
And going beyond that: what is the success criterion for humanity? If we're honest, we can't nail it down easily. Propagation of eugenic life? By which parameters, IQ? Surely there's more to it. Infinite capacity for labor? Is there room for subjective happiness? Maybe combine the two and produce happy worker-ants? But does that sound enticing, or yeast-like? It's quite hard to parameterize success criteria, I think.
Going back to Pylon, as this is contingent on the immediately above:
Pylon Wrote:This fundamental reliance on "verifiable correctness" is precisely the issue. Everything we're doing right now depends on threading the needle to encode known data into some model without overfitting it. Even if we use LLMs for automatic labeling of further input, that's just an automation to make our task easier and improve the training volume, not an architectural improvement to the model itself. This is essentially solving a data compression problem with layers of matrix math, not too dissimilar from how we recall text or images. There's still the gap in agency, or self-instrumentation, though.
Am I mistaken, or did you just express what I was trying to say immediately above, and in response to you also? (Except you don’t deem it an impossible feat)
Quote:Never in my life, even as a small child, did I learn a new game by smashing random inputs until I accidentally stumbled onto the objective -- that's absurd and a dead end, but it does illustrate a prior attempt at filling the agency gap.
This expresses the Chinese Room problem well.
Quote:To put it simply, you want something that can teach itself how to learn in response to arbitrary input data. This may sound like an absurd reductionism, but it appears to be a behaviour exhibited even by a petri dish of neurons, which is an important clue. Give it a game, and it figures on its own not only how to win, but what it is to win the given game. No researchers manually defining goal parameters for individual tasks, like we do now even in "multimodal" models.
Exactly. It seems the internal reward systems in living tissue are inherent already at the cellular level. It's interesting that they show a predilection towards seeking order, even when it's irrelevant to their wellbeing. That qualifies as agency: behavior not immediately related to wellbeing. I go into this more below in my reply (you can search by keyword: scrimshaw).
Quote:In-vitro neurons can learn to play a game, or fly a plane, or whatever, just by being given a consistent input when doing the right thing, and being tortured with noise (static input) when failing.
This is extremely fascinating. It seems to lend credence to the idea that all life drives towards homeostasis, simply by the logic:
A: I exist under current conditions
B: behavior X predictably reproduces current conditions
C: therefore behavior X is most congruent with my existence
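As I read the quoted experiment, the feedback rule amounts to something like the sketch below; all names are mine and purely illustrative. Predictable signal when the behavior is right, unstructured static when it isn't, so "keep the world predictable" and "keep winning" collapse into the same thing.

```python
import random

def feedback(did_the_right_thing, size=32):
    """Sketch of the reported protocol: success is answered with a consistent,
    repeatable signal; failure with unstructured static."""
    if did_the_right_thing:
        # Structured, predictable pattern: 'current conditions' persist.
        return [1.0 if i % 2 == 0 else 0.0 for i in range(size)]
    # Unpredictable noise: the 'torture' the culture learns to avoid.
    return [random.random() for _ in range(size)]
```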
Reply to turnip:
turnip Wrote:Which is not how creative processes work, and things can obviously build upon themselves without perfect self-understanding
This starts sounding like some society-wide emergentism, which is why I mentioned the Absolute, the noosphere, and the superorganism before. But I believe this is strictly separate from an AGI, which should denote not a society-wide phenomenon but a distinct entity. One could argue that we already have some society-wide, hivemind, noosphere-like collective intelligence.
Quote: just as you do not need perfect information about how something works to know that it works
Anybody knows that smartphones work, and is able to use them. The production of smartphones very much requires perfect information (though it doesn't have to reside in a single person).
Quote:that consciousness is something like the "capacity for lying/deception." [...]
That's an interesting thought. As you say, deception entails a differentiation between the original and one's experience of it, which basically entails theory of mind (my experience =!= someone else's, allowing for the possibility of deception).
I would posit a similarly quintessential yet simple criterion, but for agency: behavior that is inexplicable with regard to immediate gains. A goatherder furnishing a flute out of bone and learning to play it is almost inexplicable in terms of immediate gains. It provides neither nutrition nor warmth. A great example of this is scrimshaw, engraved by sailors on faraway seas.
I believe the same immaterial motivations lie behind all of the greatest human achievements, revelations and inventions. None of them were motivated by immediate gain. The drive comes for its own sake. This is the essence of the reward mechanisms I’ve described in this reply, above. It’s going to be hard to nail down the mechanisms that produce such drives.
If an AI should express behavior that is similarly inexplicable in terms of immediate gains, we should pay very close attention, because we might be dealing with a burgeoning agency, the beginning of AGI. Alternatively, it could simply be undefined behavior, in code-speak.
Quote:Their mind works more like a neurotic autocomplete algorithm.
There is absolutely nothing autocomplete-like in any organism, regardless of how retarded. Dysgenic IQ decline will not produce LLM-like people; they're fundamentally different things.
Reply to BillyONare:
BillyONare Wrote:I was illustrating that the greatest feats of computation genius were not the result of random noise generation and selection pressure. Which is the very definition of intelligence, being able to observe and act upon your environment in a way that is NOT random or indiscriminate. That’s why OP is so dumb; he keeps talking about trillions of gigabytes of reinforcement training as if that’s relevant to the topic.
You seem to be adamantly ignoring the fact that it's the training that produced the intelligence capable of genius computation. To assume you can accomplish such computation without training seems like folly.
Quote:It’s just your brain being aware that you exist. You claim to be conscious, but how do you know that? Were you conscious yesterday?
Sorry this is just pygmy-like. Yes, your memories may well be hallucinations. Mind: blown. I am now unconscious.
This is why people need some basic Kant. There are things which it is simply retarded to doubt. Yes, you may well be a Boltzmann brain. So fucking what? Your experience (phenomena) still is as it is regardless of what is beyond it (noumena) or what it was before.
Quote:Nietzsche mocked the stupid saying “I think therefore I am”.
There exists a minimum which is undeniable.
Quote:The half sleep images morph into each other exactly like those videos of diffusion models working.
Interestingly, for me these images go hand in hand with short-term memory loss. Whatever train of thought I had gets sliced and deleted. Sometimes I can reconstruct the train, but the closer I am to sleep, the more often it gets sliced, until even attempts at reconstruction become impossible, silence ensues, and I lose consciousness.
Quote:Imo "The Chinese Room" is a dumb argument.
Why? It demonstrates how the lack of intuitive understanding (comprehension) stands in the way of originating genius. Have you ever spoken to a pseud who can parrot the jargon and ideas of some field while being a painful midwit? It's the same. A genius is probably less capable of communication, seeing that they're likely autistic, low in empathy, and disagreeable, and yet they have an immeasurably higher intuitive understanding of the concepts, especially relative to their communication abilities. This is why the Turing test isn't optimal. Communication =!= comprehension. The Chinese Room gets to the core of this, in my opinion, and LLMs excel at communication.