AGI = Impossible
KimKardashian
turnip Wrote:Land has an inverted view of time and causality where the future accesses the present through memories, in order to construct itself through time. Meaning that it builds itself through historical time, but is of an alien order of time. This is why death would seem to occur as it does, always rising from within but coming from without - the unconscious energy that is investing the whole process of life is indifferent to the particular skins and vessels that it must use to assemble itself. This agency supposedly betrays itself in phenomena like synchronicities, qabbalistic patterns, or "hyperstition." As I understand it, it isn't about determinism vs. free will either; he means to posit something else, like degrees of determinism.

That sounds like a cool thought experiment, but nothing more.

Quote:I mean logically. I don't think you will convince anyone by just insisting that it is so repeatedly.

Logically you cannot construct X+1 from X.
BillyONare
"you don't believe consciousness is a physical thing that drives your brain like a spaceship? woow then you aren't even human, that's so satanic"

Demon worship.
KimKardashian
BillyONare Wrote:"you don't believe consciousness is a physical thing that drives your brain like a spaceship? woow then you aren't even human, that's so satanic"

Demon worship.

Shall we treat you as inanimate, or upgrade LLMs to sentience? Are you sure you're not abusing your autocorrect?

You won't respond because you know you'll sound like the 97 IQ you are.
BillyONare
You are a retard. You haven’t put forth one good or original argument. It actually seems like you are trolling, prompting ChatGPT to come up with arguments for why AI can’t be “sentient” (stupid word) and copy-pasting them here to trick le stupid amarnites.

Quote: Logically you cannot construct X+1 from X.

Blown away by this 175 IQ brilliance. You’re right, mommy nature is the only one who can create things. Humans can’t create anything great because the qualia bugs can’t escape the meat prisons because it’s just logic. Heil Gaia.
KimKardashian
BillyONare Wrote:You are a retard. You haven’t put forth one good or original argument.

ChatGPT has read 560GB of text, 300 billion words from all the greatest minds; it should be a sage beyond the wisdom of any single human who ever lived. What has come of it? A scientific breakthrough? New form of governance? A new art paradigm? Has it solved the energy crisis? No? Oh right, it's a fucking autocomplete.

Kek, the absolute state of midwittery it would take to confuse the two.

Quote:You’re right, mommy nature is the only one who can create things. Humans can’t create anything great because the qualia bugs can’t escape the meat prisons because it’s just logic. Heil Gaia.

You think calling evolution, the largest trial-and-error training run ever, "mommy nature" sidesteps the issue?
BillyONare
Quote:ChatGPT has read 560GB of text, 300 billion words from all the greatest minds; it should be a sage beyond the wisdom of any single human who ever lived.

Why?
KimKardashian
BillyONare Wrote:
Quote:ChatGPT has read 560GB of text, 300 billion words from all the greatest minds; it should be a sage beyond the wisdom of any single human who ever lived.

Why?

Because it "comprehends" it all.
turnip
KimKardashian Wrote:That sounds like a cool thought experiment, but nothing more.

Any philosophy will "sound like a thought experiment" as long as one doesn't understand it. I gave a synopsis, but Land didn't just conjure this up out of nowhere; he is in dialogue with different thinkers and trying to clarify what he thinks the important problems are through a different orientation of thought.

You might think that he fails, but thus far it seems more like you are dogmatically committed to a certain stance and uninterested in considering other possibilities.
Margatroyper
KimKardashian Wrote:You think calling evolution, the largest trial-and-error training run ever, "mommy nature" sidesteps the issue?

The German Shepherd, the Golden Retriever, the Borzoi, the world had eons to cultivate them, but never bothered. It took human hands, in what was relatively an instant, to make them out of the generic canine biomass mommy nature cranked out. Mistakes were made too, like the pug or the pitbull, but even these were not the results of random chaos, but of men trying to make funny chungus dogs, deliberate processes which can be avoided.

Nature produced one thing of critical note, the White Man, the wellspring of achievement. "Mommy nature" is an apt way to dismiss what comes across as venerating a random noise generator.

BTW, want to know what the opposite of a random noise generator is? A denoising algorithm, AKA generative AI. Nature produces niggers, and when coerced by bad hands produces pugs, pitbulls, Zambo, and Africanized honeybees. All noise, the kind of noise latent diffusion is designed to remove given the limitations of finite hardware resources and time.
KimKardashian
Margatroyper Wrote:The German Shepherd, the Golden Retriever, the Borzoi, the world had eons to cultivate them, but never bothered. It took human hands, in what was relatively an instant, to make them out of the generic canine biomass mommy nature cranked out.

You may think that being blind and aimless makes evolution clumsy, but that's like thinking the countless experiments of a free market are inferior to central planning. I don't think having dogs fuck each other for three generations is the triumph of planning you think it is. The breeds are as diseased as the collective farms of the glorious Five-Year Plan. And you forget: for breeding you have your material already prepared. Not the case with AGI.

Quote:Nature produced one thing of critical note, the White Man, the wellspring of achievement. "Mommy nature" is an apt way to dismiss what comes across as venerating a random noise generator.

Nature produces niggers, [...]

Huh, it seems your wellspring of achievement is turning the whole planet into a Bantustan. Frankly you sound deluded.

[Image: goldstone_africa_2050_demographic_truth_...ed1-20.jpg]

PS random noise doesn't produce evolution.
KimKardashian
turnip Wrote:You might think that he fails, but thus far it seems more like you are dogmatically committed to a certain stance and uninterested in considering other possibilities.

I've nothing against Land. I like what I've heard and the little I've read. I just don't see how it relates to demonstrating the possibility of AGI.
Pylon
Margatroyper Wrote:
KimKardashian Wrote:You think calling evolution, the largest trial-and-error training run ever, "mommy nature" sidesteps the issue?

The German Shepherd, the Golden Retriever, the Borzoi, the world had eons to cultivate them, but never bothered. It took human hands, in what was relatively an instant, to make them out of the generic canine biomass mommy nature cranked out. Mistakes were made too, like the pug or the pitbull, but even these were not the results of random chaos, but of men trying to make funny chungus dogs, deliberate processes which can be avoided.

Nature produced one thing of critical note, the White Man, the wellspring of achievement. "Mommy nature" is an apt way to dismiss what comes across as venerating a random noise generator.

BTW, want to know what the opposite of a random noise generator is? A denoising algorithm, AKA generative AI. Nature produces niggers, and when coerced by bad hands produces pugs, pitbulls, Zambo, and Africanized honeybees. All noise, the kind of noise latent diffusion is designed to remove given the limitations of finite hardware resources and time.


You put forth a great point. Aryan Reason produced these wonderful creatures, yes, and it did so without prior examples from which to draw. The selective breeding of animals is an Art by which Whites take the best from nature and refine it in accordance with some innate desire. A spontaneity of genius, a self-defining goal towards idealized specimens.

But the direct comparison between that and generative AI is unequivocally incorrect. If we're to take the materialist frame and remake the analogy, then the random chaos of material conditions and thermodynamics is the "noise", and evolution (which you misname as nature) would be the "denoising algorithm". That is also exactly how modern AI works, something you would immediately understand if you had ever trained your own neural networks, whether DLNNs or GANs.

At the heart of a computational AI is a literal random noise generator, which you then attempt to tame through what essentially amounts to brute-force linear algebra. It could lead to something usable, or it could lead to a digital nigger, all depending upon your training parameters, skill as a developer, and luck with local maxima. How is that different from your caricature of "nature"? Once again, the task falls upon the white engineer's discerning eye to define the training targets and select the best specimens, without which the process would lead to nothing at all. This is why we see nothing coming out of China or Japan, despite them having access to the exact same hardware and mathematics.
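
To make that concrete, here's a minimal sketch (numpy only; the toy regression problem is my own invention, not any production setup) of what "taming a random noise generator through brute-force linear algebra" actually is - the weights begin as literal random noise, and gradient descent grinds them into something usable:

Code:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))             # toy inputs
y = np.sin(X).sum(axis=1, keepdims=True)  # toy target the net must learn

W1 = rng.normal(size=(4, 32)) * 0.5       # literal random noise, pre-training
W2 = rng.normal(size=(32, 1)) * 0.5
lr = 0.01

for step in range(5000):
    h = np.tanh(X @ W1)                   # forward pass
    pred = h @ W2
    err = pred - y
    # backprop: nothing but chained matrix multiplications
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
    W2 -= lr * gW2                        # whether this converges to something
    W1 -= lr * gW1                        # usable depends on lr, init, and luck

print("final loss:", float((err**2).mean()))

Whether you end up with a usable model or digital noise comes down to those training parameters and the luck of the initialization, exactly as described above.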

Quote:Nature produced one thing of critical note, the White Man, the wellspring of achievement.

If that is so, then the question should be: how? I'd say there are plenty of creatures, mainly predatory animals like eagles, tigers, etc., which are beautiful and intelligent; in stark contrast to the nigger. But I digress... What we have with digital neural networks right now is precisely as Zed said regarding liquid vs. crystal intelligence. Empirically, there is nothing innately "magical" about human cognition as such, but there is a fundamental difference between a biological medium and a transistor-circuit medium. (Actually, several.) It has to do with the automatic biological ability to adapt to entirely new scenarios, versus the digital "glacial intelligence" which internalizes known patterns and replicates them. Without explicit guidance, a computational neural network exists in a state of perpetual disorientation.

Zed Wrote:The future pursuit of AGI must come via smaller and leaner models with architectures capable of self-detecting and self-optimizing against failures in real time.

This is the correct understanding. AGI remains possible, but it won't be coming about from any of the current architectures, and certainly not from the "scale is all you need" cargo cult we have downstream of OpenAI.
An Ancient and Unbound Evil Awakens...
Pylon
BillyONare Wrote:Read Gariepy.

I have acquired a copy of The Revolutionary Phenotype; thank you, BillyONare, for the book recommendation. I'm already familiar with it from JF's streams years ago, but I shall read it regardless for the full picture.

Have already posted a fragment of my current thoughts on the matter above, and I'll report back to this thread if there's any fundamental change in my perspective.

To summarize my position: I am essentially a bio-centrist on the matter. No "noumena", "soul", or any other jewish fairy bullshit need apply. This is purely the result of my personal experience with computers, neural networks, and some of the research that interests me. Rather than going through some lengthy elaboration, I'll point to some works that have inspired my current understanding. 

This paper, titled "In vitro neurons learn and exhibit sentience when embodied in a simulated game-world", showing how even the simplest arrangement of neurons is able to derive comprehensible input from unknown data without explicit training or guidance.
https://youtu.be/GJaXiR_uvVI
https://www.biorxiv.org/content/10.1101/...2.471005v2

Sir Roger Penrose's book, The Emperor's New Mind. I got an original print on eBay. Penrose posits that "consciousness" is essentially non-computational and would be impossible to emulate on a machine, precisely because of the spontaneity of true intelligence you have talked about earlier. The eureka! moment characteristic of true comprehension is the instantaneous collapse of a quantum state, which cannot be emulated by any series of discrete steps (a computation). Later articles of his explore the hypothesis that cellular microtubules have something to do with it, via some quantum-symmetry mechanisms I don't have a PhD in understanding.

Michael Levin's publications on cellular organisms, bioelectricity, and cognition. This covers what we may call "knowing without learning." Inherent drives, knowledge passed down by blood. 
https://www.researchgate.net/profile/Michael-Levin-15 (CTRL+F "cognition")
https://www.researchgate.net/publication..._Organisms
https://www.researchgate.net/publication...d_analysis
etc. The most interesting idea here: The persistence of a bio-electric signal between parent and progeny, tracing through time all the way back to the first instance. This is a known phenomenon, not a hypothesis.


To be absolutely clear: none of this says AGI is inherently "impossible", or that the human brain alone is the sole magical solution to consciousness. More like we'd need to create entirely new architectures for AI models from what we have now. Perhaps a structure emulating the self-organizing "liquid intelligence" mentioned earlier. And, if Penrose + Levin are correct, we may need to devise some biological or quantum extension of computer hardware for this task -- at the very least it could present a very significant shortcut towards this advancement. Hail
An Ancient and Unbound Evil Awakens...
BillyONare
To clarify: the Eureka moment I mentioned earlier is not the result of divine inspiration. I was illustrating that the greatest feats of computational genius were not the result of random noise generation and selection pressure. That is the very definition of intelligence: being able to observe and act upon your environment in a way that is NOT random or indiscriminate. That’s why OP is so dumb; he keeps talking about trillions of gigabytes of reinforcement training as if that’s relevant to the topic.
turnip
KimKardashian Wrote:I've nothing against Land. I like what I've heard and the little I've read. I just don't see how it relates to demonstrating the possibility of AGI.

You have been saying that humans cannot create intelligence because they cannot perfectly model themselves. That is not how creative processes work: things can obviously build upon themselves without perfect self-understanding, just as you do not need perfect information about how something works to know that it works. But I don't care to go in circles about this.

For Land it isn't humans that are steering history at all, but something else through the human unconscious, so your point about our inability to have perfect knowledge of the universe would not be an issue for him.
turnip
ssa Wrote:You're right on the money. Or rather, human cognition and sentience are even harder than that. Every neuron is in itself an information processing system of staggering complexity. Signals propagate as pulse chains, modulated in both frequency and amplitude, and their processing begins during propagation through the axon. Signal processing involves both electrochemical potentials and quantum coherence effects in neurotubules. Take all that, and scale it up to around 100 trillion synapses, propagating signals non-linearly in a dynamically evolving pattern, and you have a system that's impossible to model with traditional computation, regardless of how much silicon you throw at the problem. Getting a similar cognition artificially will require completely different computers. Current quantum computer architectures based on discrete qubits won't cut it.

Also, I forgot to respond to this. You brought up a relevant point. If, as the "objective reduction" theory (also alluded to by Pylon above) postulates, "consciousness" is really instantiated by some quantum mechanism in microtubules/carbon polymers, the biophysics and organic chemistry involved would surely be of a complexity an order of magnitude greater than whatever could be accomplished by the most powerful computers today.

[Image: orchor-overview-language-synrax-temporal...ucture.png]

The claim about quantum physics is contentious enough, but I wanted to talk about "consciousness," since most of these discussions revolve around that word, yet no one seems to have a clear understanding of what it is supposed to mean (in the image they seem to situate it between language and "qualia" for no particular reason). It's a similar issue with "intelligence."

I recall mikka saying (in reference to Hobbes, I believe) that consciousness is something like the "capacity for lying/deception." That is close, but I think it's an incomplete statement of the issue. The impetus of Platonic philosophy is the effort to distinguish images from originals, or the copy from the model. The origin of consciousness is the recognition that the images of experience are copies of an original, or that something refers to something else. Now an experience can mean something else entirely, a thing can be another thing - you are conscious. Thus something is conscious not so much when it can lie and deceive, but to the extent that it conceals a certain generative depth or fullness.

That may be a basic and broad definition, but that is just because consciousness is more of a gradient than a switch that is flipped. It also develops along different lines (an Aryan consciousness, a Chinese consciousness, etc.), and there are vast disparities within and between populations. Physically it may work completely differently in different individuals, which brings me back to the point about not understanding what life is. I think that's why "AGI" is just defined as something like a computer that can do most of the things a normal person does. The problem being that a "normal person" is increasingly retarded. They don't really think, i.e. their thoughts are not a production of their unconscious. Their mind works more like a neurotic autocomplete algorithm. You might succeed in creating a kind of mirror of the normie psyche like this, but one will be left with the impression that it hasn't delivered what is actually desired.

If I may conjecture, I understand the computer (in historical terms) perhaps as an attempt to bring the differentiated fields of science and engineering into a unity, hence why I drew the connection to alchemy. In this vein, perhaps AI (at least originally) is this taken to its final conclusion, as the desire to unify mind and matter. That is why I think only a certain type of Aryan genius, with the freedom to have real desires, can produce this; eros is the tempering agent. But I will leave it at that for now.
BillyONare
Consciousness is not a real thing. It’s just your brain being aware that you exist. You claim to be conscious, but how do you know that? Were you conscious yesterday? If you think so, then that’s simply a memory of being conscious, i.e. a physical memory of a thought that you had. You can’t even be aware that you are conscious now, since you cannot actually be aware of anything in the present: your neurons cannot transmit information faster than light. Nietzsche mocked the stupid saying “I think therefore I am”. An AGI would be equally conscious, though likely in a very different manner.
BillyONare
The idea of consciousness is a midwit delusion. Penrose was smart enough to realize that there were no satisfying explanations, but his own explanation is just “muh soul” dressed up in sciencey quantum physics so that no one can criticize or disprove it. It is funny, thoughever, that black people do not have those microtubules that allegedly lead to quantum consciousness (look it up). I am not going to act all retarded and shit to own the blacks.
Zed
Pylon Wrote:To be absolutely clear: none of this says AGI is inherently "impossible", or that the human brain alone is the sole magical solution to consciousness. More like we'd need to create entirely new architectures for AI models from what we have now. Perhaps a structure emulating the self-organizing "liquid intelligence" mentioned earlier. And, if Penrose + Levin are correct, we may need to devise some biological or quantum extension of computer hardware for this task -- at the very least it could present a very significant shortcut towards this advancement. Hail

If we analyze the actual architecture of existing models, the bottleneck is actually relatively clear. Transformer models work by taking an input sequence of tokens, corresponding to an input sequence of words, guessing the next token in the sequence, appending that token, and iteratively repeating on the appended sequence. In that sense, it is indeed very much like an autocomplete, as the OP suggests. Now, we don't really need to speak of attention or anything technical to see the bottleneck -

Consider: How do humans actually write? Well, first we recall the truism that 'writing = editing'. What does that mean cognitively? In our minds, we start with a vague picture of a thing, a scattered blurry assemblage of ideas and conceptual linkages, and we proceed by trial and error, refining, adding scaffolding and structure iteratively (but non-linearly), editing the idea/thought/text into existence. Even as I am writing right now, I am constantly going back and rewording and reclarifying my sentences to better convey what I mean. But before I started writing, I conceived of this entire post as a loose idea of a response - and now, slowly, I am editing it into existence. The transformer model has the freedom to choose the next word in a rigid sequence, but I have the freedom to choose not only the next word, but to restructure any preceding sentence. The transformer model *must* commit itself to the output ordained by the first word - but I am free to completely throw away this post and start from scratch --- I often do.
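
To see the one-way ratchet in code, here is a minimal sketch of the append-and-repeat loop described above (the next_token_logits function is a stand-in for a real transformer forward pass; greedy decoding for simplicity):

Code:
def generate(next_token_logits, prompt_tokens, max_new, end_token):
    seq = list(prompt_tokens)
    for _ in range(max_new):
        logits = next_token_logits(seq)  # score every candidate next token
        tok = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
        seq.append(tok)                  # committed: the loop can only ever
        if tok == end_token:             # extend the sequence, never revise it
            break
    return seq

Nothing in that loop can ever go back and reword seq[0].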

I believe there is a hint to all this, and it returns to something I loosely remarked on in my original post: For AGI, we should probably look towards convolutional diffusion models. Low-iteration stable diffusion outputs something that looks remarkably like the images formed in the human subconscious - the half-thoughts of our dreams. Starting from those loose 'intuitions', it gradually refines an actual image into existence with almost miraculous results. *That* is vastly closer to liquid human intelligence than anything done in the NLP sector. It doesn't completely capture what I described in the preceding paragraph, but it comes a hell of a lot closer. 
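
Schematically, the sampling loop looks like this (hypothetical predict_noise network; the update rule is deliberately simplified, not the exact DDPM/DDIM step). Note that, unlike the transformer's append-only loop, the whole canvas stays revisable at every iteration:

Code:
import numpy as np

def sample(predict_noise, shape, steps=50, rng=np.random.default_rng(0)):
    x = rng.normal(size=shape)      # start from pure noise: the vague picture
    for t in reversed(range(steps)):
        eps = predict_noise(x, t)   # the network's guess at the noise in x
        x = x - eps / steps         # peel off a sliver; every pixel remains
                                    # editable until the final iteration
    return x

The image is edited into existence from a blurry whole, rather than committed to word by word.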

As an addendum: Why convolutional models - aren't those typically reserved for image processing? I believe the basic mechanic of reason is tiered generalization and specialization. Our brains shift in geometric localities as we think - if we are writing a book, we might think of the text as a whole, a series of chapters, assigning to each chapter a definite artistic purpose. When writing a single chapter, we specialize (conditioning ourselves to the artistic purpose) to writing paragraphs, which have definite goals in serving that artistic purpose. Then we may shift downwards even further, to writing/editing a sentence for the purpose of serving the goal of the paragraph. On a mathematical level, resolution-reducing convolutions come close to loosely capturing the potentiality of generalization; the inverse (trained via the diffusion process) recovers specialization.
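
As a toy illustration of that zoom-out/zoom-in movement (torch, untrained weights; the layer sizes are arbitrary, chosen only to show the shapes):

Code:
import torch
import torch.nn as nn

generalize = nn.Sequential(   # 64x64 -> 16x16: the low-res "chapter" view
    nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
specialize = nn.Sequential(   # 16x16 -> 64x64: recover "sentence"-level detail
    nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, kernel_size=4, stride=2, padding=1),
)

x = torch.randn(1, 1, 64, 64)     # a page of raw detail
gist = generalize(x)              # broad strokes
print(gist.shape)                 # torch.Size([1, 16, 16, 16])
print(specialize(gist).shape)     # torch.Size([1, 1, 64, 64])

The strided convolutions discard resolution in exchange for generality; the transposed convolutions specialize back down, conditioned on the gist.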

To get at the heart of what intelligence is, it is a worthwhile exercise to sit down and try to solve some math/physics competition problems and record a diary of every thought you think. In doing this exercise, I have found that most of it reduces to recalling general patterns observed in the past and then trying to specialize them to a novel context. Or - if I'm attempting to prove a general statement, I start with a special case and try to find a generalizable pattern in it.

(tldr: Stable Diffusion is far closer to AGI than ChatGPT)

[edited for typos]
BillyONare
Zed, I have noticed that when I'm about to fall asleep I can VIVIDLY picture images in my mind, like I am actually seeing them. I can picture things when I'm thinking or reading a book, but it is more vague. The half-sleep images morph into each other exactly like those videos of diffusion models working. They are often horrific in nature, e.g. monsters, ugly faces.

Anyone else experienced this?


