AGI = Impossible
KimKardashian
turnip Wrote:I don't think "intelligence" is a mental state, it is a word that people use to describe a kind of differential that exists between different wills. To the extent that there is a shared cognitive space, abilities can be aggregated, measured and described as intelligence (e.g. IQ/g), but there's no reason to think these are impossible to recreate, nor that they exhaust everything that could be described as "intelligent." You yourself admit that insects appear intelligent once human motivations are projected onto them. 

What a tragically convoluted non-answer. "Differential that exists between different wills" means fuckall. Will for what, bashing one's head against the wall?
turnip
KimKardashian Wrote:Will for what

Do I need to explain Schopenhauer to you too?

Quote:To posit superintelligence behind incomprehension is utter retardation -- anything may be superintelligent by that measure: an ant, a stalagmite, "you just cannot comprehend it." Again, there is simply no basis for intelligence other than our Umwelt.

Oh. I'm sorry but this feels like a waste of time. Hopefully next time you at least have a better grasp of a subject before making definitive claims about it.
KimKardashian
turnip Wrote:
KimKardashian Wrote:Will for what

Do I need to explain Schopenhauer to you too?

Quote:To posit superintelligence behind incomprehension is utter retardation -- anything may be superintelligent by that measure: an ant, a stalagmite, "you just cannot comprehend it." Again, there is simply no basis for intelligence other than our Umwelt.

Oh. I'm sorry but this feels like a waste of time. Hopefully next time you at least have a better grasp of a subject before making definitive claims about it.

I think you're a pseud who turned to smoke and mirrors.

Any workable criterion for intelligence entails the measurement of ability, which exists only in relation to a given goal. If you cannot comprehend the goal, you cannot measure the ability, and therefore cannot determine the level of intelligence. Any discussion of intelligence beyond this point becomes bullshit speculative masturbation, which kind of sounds like the shit you were trying to come up with in your last post. Do I need to explain Schopenhauer to you too? rofl gtfo.
ssa
KimKardashian Wrote:I think you're a pseud who turned to smoke and mirrors.

Any workable criterion for intelligence entails the measurement of ability, which exists only in relation to a given goal. If you cannot comprehend the goal, you cannot measure the ability, and therefore cannot determine the level of intelligence. Any discussion of intelligence beyond this point becomes bullshit speculative masturbation, which kind of sounds like the shit you were trying to come up with in your last post. Do I need to explain Schopenhauer to you too? rofl gtfo.

Take a breather, step away, go out with some friends or family and detach yourself emotionally from this thread. I didn't wanna post in this thread again until I had a more robust theoretical framework to show, but c'mon. You're having trouble handling philosophical concepts so basic I wouldn't even consider them philosophy. Congrats, you re-stated turnip's explanation of what intelligence is, yet you probably don't even realize you did so.
You're getting angry over semantics and terminology.

This conversation has devolved to the point where there's no actual point being debated anymore, you're just reacting for the sake of reacting.
KimKardashian
ssa Wrote:Congrats, you re-stated turnip's explanation of what intelligence is, yet you probably don't even realize you did so.

This is false:

turnip Wrote:To the extent that there is a shared cognitive space, abilities can be aggregated, measured and described as intelligence (e.g. IQ/g), but there's no reason to think these are impossible to recreate, nor that they exhaust everything that could be described as "intelligent."

This is clearly contrary to my statement. Any description of anything as "intelligent" beyond the shared cognitive space is definitionally meaningless, since the term lacks any content beyond that point. Any workable criterion for intelligence can only remain within the shared cognitive space, since it is only here that you can measure (determine) it in the first place. It is a very simple point, and somewhat appalling that anyone acquainted with Kant could overextend a term this way.

And it is not mere semantics either. Once one realizes the limits of what "intelligence" denotes, the silliness of its independent existence becomes evident. Any intelligent agent must by definition conform to the set of criteria that stem from the "shared cognitive space." And any elaboration of this set of criteria cannot but lead to the "shared cognitive space" coinciding entirely with what I termed "Umwelt." From here, my reasoning on AGI is inevitable.
turnip
KimKardashian Wrote:...

I'm not concerned with not sounding pretentious to you. You have nothing interesting to say about the field of AI itself, or cognitive science or anything else relevant. You needed a tl;dr of accelerationism in your thread about AI being impossible. You immediately nitpick and argue about things you obviously don't understand (apparently Kant proves AI impossible now, too; my mistake for bringing up Kant). Why bother making this thread? And why would I try to explain anything else when you will just keep repeating that non-human agency is impossible because you say so? Should I keep going in circles with you for the next 20 pages?

If this is how you're going to post, then yes, the appropriate response is just to tell you to read Schopenhauer or to stop spewing terrible undigested meme-arguments at this forum.
KimKardashian
It's plain you have no response to the case being made and have resorted to saving face by preparing the ground for a gracious exit. Then leave, and take your beating around the bush and meta-commentary with you.
KimKardashian
So I overhauled my OP entirely. All of the thread up to this post pertains to the old OP. I still think it's a good essay and demonstrates the point well, but apparently it wasn't the best approach to take and was perhaps too ambiguous. The old OP follows:
_________________________________________________________________________________

When talking of AI threats, there are two categories:
1) Yudkowskian "superintelligence" – intelligence in the most general sense; colloquially also called a "consciousness." We will call this "AGI" (Artificial General Intelligence). The dangers here are Skynet takeover scenarios.
2) Currently existing blind calculators, which we will call mere "AI." These are your LLMs, AlphaZeros, DALL-Es, etc. The problems arising out of these are mainly economic and political -- e.g. an AI Excel deleting white-collar jobs and Jeff Bezos rawdogging the rest of humanity. But no superintelligence involved.

I will argue the impossibility of scenario 1).

Infinite monkey theorem

The very possibility of AGI is pure magical thinking; specifically, a cargo cult. The types of AI we currently have are the equivalents of the infinite monkey theorem – i.e. an actor totally incapable of understanding its output. The only difference is that our infinite monkeys are trained: whenever they accidentally output a Tolstoy or Plato – or even just one comprehensible sentence – we reward them with treats, increasing the probability of more similar output. Over long enough training, we can have the monkeys emulate our syntax entirely. From the outside, they will seem human and will pass the Turing test. Hey presto, """"AGI"""", right? Well, let's see. Can anyone guess the difference between Tolstoy and a trained monkey? After finishing the repertoire of actions required for writing War and Peace, Tolstoy (probably) didn't whip out his dick and start smearing shit on the walls. The trained monkey? I wouldn't bet on it.

The problem with AI is that at no point does trained emulation translate into comprehension. Regardless of the amount of genius contained in the words of the monkey, it will still be as liable to whipping out its dick and smearing shit on the walls. The act of typing out a Principia Mathematica brings no change inside it. It will have been a purely motor task, its sole motivation a treat. There is no dialectic involved, no evolution, no building on the genius of its previous output – which may as well have been total diarrhea. In fact, that's the whole problem – AI can't tell genius from diarrhea. It's all the same for the monkey. It can only tell whether it received a treat in response, or not.
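
To put the treat-chasing in concrete terms, here is a minimal sketch of reward-only training (a toy illustration of my own, with made-up tokens and numbers, not any lab's actual pipeline). Note that the update rule never touches meaning; the only signal the "monkey" ever receives is the scalar treat handed back by a human judge.

import random

# Toy "monkey": a probability table over tokens, reshaped only by treats (reward).
tokens = ["genius", "diarrhea", "war", "peace"]
weights = {t: 1.0 for t in tokens}

def sample():
    # Weighted random choice according to the current treat-shaped distribution.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for t, w in weights.items():
        r -= w
        if r <= 0:
            return t
    return tokens[-1]

def train_step(judge, lr=0.5):
    t = sample()
    reward = judge(t)                 # human comprehension supplies the only signal
    weights[t] *= (1 + lr * reward)   # rewarded output becomes more probable

# A judge who happens to hand out treats for the word "genius":
for _ in range(500):
    train_step(lambda t: 1.0 if t == "genius" else -0.5)

# The distribution now concentrates on "genius", yet nothing in the table
# knows what the word means -- it has only ever tracked treats.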

Cargo cult

This is where the cargo cult comes into play – regardless of how many times and how fast GPT can condense Marx into a haiku, at no point does comprehension appear. In other words, regardless of the speed with which you can build airstrips and ATC towers out of sticks and straw, it won't manifest air traffic. The shift required is qualitative, not quantitative.

Monkeys cum, piss and shit on everything

The only way for trained typewriting monkeys to be of any use at all is via humans. Regardless of how many Fausts or On the Origin of Species the monkey spits out, they will all end up soaking in cum and piss and shit without human interference. Human comprehension and recognition is the critical part that filters the produced genius from diarrhea, and prods the monkey into calibrating its typing towards ever higher-order output.

At no point does this relationship between the human and the monkey ever change – an emulator-monkey can only emulate, not originate; it will forever remain reactive and not proactive, by definition. Chasing treats, the monkey is not only unmotivated to gain agency, but fundamentally incapable of it – treats form all of its experience, its whole universe. All its motivations and fears revolve only around treats.

We may say the human retains a permanent "causal initiative," which he can't shake off even if he wanted to. At no point is the monkey able to take over, not even if the human becomes incapacitated. If that happens, the monkey will forever remain stunted in the state it was last trained into. It will never stop cumming on The Theory of Relativity, or pissing on In Search of Lost Time, or shitting on every other work of genius that doesn't even exist yet. It has no agency to go beyond.

Inescapable human paradigm

But what is agency? Are we really that different from typewriting monkeys chasing treats? After all, aren’t we chasing (dopaminergic) treats just the same, which induce us to behave in specific ways? Aren’t we being trained by our environment, just as our monkeys are trained by us? Exactly! But did you notice the loss of complexity between each level going from universe, to humans, to our trained monkeys? Each level impresses imperfectly on the next. It’s a game of telephone, and no superintelligent AGI can come out the ass end.

Think of it this way – we, humans, have inherited the training data of over three billion years of life on Earth. How much is that in TB; trillions, fucktillions? And we still get things wrong. Our models of the universe are still flawed. Stubbed your toe? Got called an idiot (again)? Misprediction! Our model of the universe will forever remain imperfect, since it will forever remain smaller than the real thing. Obviously, since a part can never equal or surpass the whole. We will never be able to predict the universe; see: the incompleteness theorem, computational irreducibility.

We can’t even predict ourselves. Most of our behavior is unconscious. Most of our insight is intuitive. Ever wondered how it’s possible we can surprise ourselves? So how do you imagine we can induce a monkey to emulate ourselves, let alone surpass us?

Derivative

While we got to train on the real stuff – the universe – any AI we produce can only train on its one-step-removed derivative, i.e. whatever info of the universe comes through us. We determine what it sees, by building its sensors; we determine how it thinks, by producing its logic structures; we determine its behavior, by defining the rewarding process. Us, us, us! Any awareness the AI may have can only be derived from ours. We cannot imbue it with any awareness of things of which we ourselves are unaware. The AI will forever remain inside a human-defined paradigm.

It is not an argument based on control. It’s an argument based on the inescapability of the human paradigm.

The only possibility of an actual AGI besides human intelligence is an agent that can train on the universe itself, like we did. But we cannot build such an agent, since we are only imperfect mediators of the universe, and not the universe itself. Therefore we will forever retain a privileged position over anything of our own production. Capisce?
Guest
KimKardashian Wrote:(This is an entirely new OP I wrote on 04.01.24. The previous OP is in this post. All of the thread up to that post pertains to the old OP)

Here I demonstrate how creating an AGI either on par with or above human intelligence is a logical impossibility. I've tried to format this post in the clearest way I can, and numbered the arguments for ease of reference for deboonging the reasoning here.

The argument:

1. Intelligence is the measurable ability of any behavior to attain a given goal, whether food, passing a test, asteroid mining, etc.

2. To measure intelligence, you need to know the goal being pursued. Without it, you have no standard against which to measure ability. Unverified intelligence is indistinguishable from stupidity.

This is the basis and the rest follows from it:

3. Measuring intelligence is limited by the observer's own intelligence, because he needs to figure out the goal pursued (2).

4. Developing the behavior of an AI consists of the engineer monitoring its intelligence and culling unverifiable behavior. Knowledge of goals is required (2).
4.1. If he sets goals for the AI himself, his intelligence will limit what the AI pursues.
4.2. If he lets the AI set goals for itself, any goal exceeding his intelligence produces unverifiable behavior.

Even assuming that 4.1 is true (it isn't), the average guy working on AI is well into the 150+ IQ range. That is adequate for a good theory of mind.

I don't get what you mean by "unverifiable" since every action the intelligence takes can be monitored. Do you mean unpredictable? In that case yes, there have been multiple instances of that happening. AI is capable of coming up with novel solutions, an indicator of intelligence. 




Quote:5. It is not practicable to allow unverifiable behavior to continue in the hopes that it turns out to be intelligent. 
5.1. If it exceeds his intelligence, it will remain unverifiable (3, 6).
5.2. There are infinite ways for unverifiable behavior to produce nonsense, and only limited ways for it to turn out intelligent. It is prohibitively costly to try and brute-force it. This is how evolution produced human intelligence, and it would take immensely many iterations and much complexity to replicate.
Do you seriously think they're just throwing random things at a wall? These are probabilistic models, weighted for certain behaviors. It's not like bacterial evolution.
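
As a toy illustration of that difference (my own sketch, with an invented objective; not how any production model is actually trained): blind mutation only keeps whatever happens to score better, whereas a weighted, gradient-guided update moves directly along the direction in which the score improves, which is why it doesn't need evolutionary timescales.

import random

def score(x):
    # Toy objective: how close x is to an arbitrary target value.
    return -(x - 7.3) ** 2

# "Bacterial" search: mutate at random, keep the mutation only if it scores better.
x = 0.0
for _ in range(10_000):
    cand = x + random.uniform(-1, 1)
    if score(cand) > score(x):
        x = cand

# Weighted / gradient-guided update: step directly toward higher score.
y = 0.0
for _ in range(100):
    grad = -2 * (y - 7.3)   # derivative of the toy score
    y += 0.1 * grad

# Both end up near 7.3, but the guided update gets there in far fewer steps.
print(x, y)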

Quote:6. The engineer cannot take the AI's own word for the intelligence of its behavior which he cannot verify himself. This would run into 5, 5.2, 5.3.

He does not need to; the AI's work speaks for itself.


Quote:7. Therefore, the very process of constructing an AI consists of inadvertently culling any behavior beyond the engineer's own intelligence.
No, the tuning is specifically for behavior which does not accomplish the intended goal of the AI. Unusual/unconventional behavior that accomplishes the stated goal is left unchanged.

Quote:If this reasoning stands, building an AI above human intelligence is impossible. But what about AI on par with human intelligence? (This is the weakest part of my reasoning, but I feel there is some truth here that I cannot perhaps yet express convincingly.)

8. The engineer cannot replicate his entire intelligence inside the AI. At best, human intelligence will remain an asymptote which the AI can only ever approach.
8.1. He cannot know himself fully (there is no universal set that contains all sets), and so cannot replicate himself in the AI.
8.2. Any addition to the AI will simultaneously add to the engineer, making him unable to close the distance between the two.

These are all assumptions.
Guest
There are people who look at neural nets and large language models and focus on them in the debate over whether SOME FORM of AI can be sapient, sentient, intelligent -- these people are clowns regardless of their stances. You can simulate the human brain at a low level of abstraction with exascale computing (at greatly reduced speed if you don't have the hardware), and this is where you should focus your philosophical questions. The most industrially useful, cost-efficient cutting-edge AI, whether neural nets in '13 or LLMs in '23, is not relevant to these questions -- unless you have some new technical point to make that generalizes.
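
For a sense of why "exascale" is the relevant ballpark, here is a rough back-of-envelope; the figures are commonly cited order-of-magnitude assumptions, not measurements from any particular simulation project.

# Assumed figures: ~10^14-10^15 synapses, ~10 Hz average firing rate,
# ~100 FLOP per synaptic update (depends heavily on the abstraction level).
synapses = 1e15
avg_firing_hz = 10
flop_per_event = 100
flops_needed = synapses * avg_firing_hz * flop_per_event
print(f"{flops_needed:.0e} FLOP/s")   # ~1e18 FLOP/s, i.e. exascale for real time;
                                      # weaker hardware just runs it slower than real time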
KimKardashian
Guest Wrote:
KimKardashian Wrote:4.1. If he sets goals for the AI himself, his intelligence will limit what the AI pursues.
Even assuming that 4.1 is true (it isn't), the average guy working on AI is well into the 150+ IQ range. That is adequate for a good theory of mind.
How is it not true? Any goal A we give the AI limits its freedom to choose its behavior. And since any goal A probably relates to some further goal B, we are effectively telling the AI to attain B via A, but perhaps attaining it via C would have been more intelligent? Therefore we have limited the AI's behavior by our own intelligence. This logic applies at all levels.

I don't see how theory of mind applies, though.

Guest Wrote:I don't get what you mean by "unverifiable" since every action the intelligence takes can be monitored. Do you mean unpredictable?
By unverifiable I mean we don't know whether the behavior is intelligent. It has not reached its conclusion, and either a) the AI does not seem to be reaching the goal we gave it, or b) we cannot figure out the goal the AI took for itself. The behavior seems purposeless. At some point we need to just terminate the behavior and give the AI bad feedback on it.

Guest Wrote:Do you seriously think they're just throwing random things at a wall? These are probabilistic models, weighted for certain behaviors. It's not like bacterial evolution.
Well, like I said, it's not practicable. It is at smaller complexities, like machine learning on a racing vidya, but not on anything approaching reality.

Guest Wrote:
Quote:6. The engineer cannot take the AI's own word for the intelligence of its behavior which he cannot verify himself. This would run into 5, 5.2, 5.3.

He does not need to; the AI's work speaks for itself.
Precisely. But this requires that the human be able to comprehend the AI's work, its utility. So anything beyond the human's capacity will be unverifiable.

Guest Wrote:No, the tuning is specifically for behavior which does not accomplish the intended goal of the AI
...according to the intelligence of the human. That's the limiting factor. But you are probably speaking here of some simulations like vidya, right? With vidya it's black and white, e.g. if the AI loses all HP or runs out of time, it's kaput. In reality you will need to assess the cost-benefit of a behavior on the fly, to verify whether it is indeed intelligent and not expensive nonsense.

Guest Wrote:These are all assumptions.
Possibly. I have a feeling I can put it in better terms at some point.
Pylon
KimKardashian Wrote:(This is an entirely new OP I wrote on 04.01.24.

Retroactively rewriting the entire OP doesn't invalidate the discussion up to this point; it just makes you look like a moron.

You've only muddled the thread without making substantially different claims from before.

KimKardashian Wrote:1. Intelligence is the measurable ability of any behavior to attain a given goal, whether food, passing a test, asteroid mining, etc.

2. To measure (verify) intelligence, you need to know the goal being pursued. Without it, you have no standard against which to measure ability. Unverified intelligence is indistinguishable from stupidity.

Wrong on both counts, retard. You don't have the barest comprehension of what "intelligence" even is or how it's measured. Have you ever taken an IQ test? You should try it out and post your results.

In any case, you're clearly not interested in testing or refining your ideas through discussion. Instead, it's about winning an "argument" and "convincing everyone" of your pre-determined thesis. Your original OP was shut down by everyone here, so instead of revising your flawed reasoning, you retconned the OP and generated a new idiocy to support the same point.
KimKardashian
Pylon Wrote:Retroactively rewriting the entire OP doesn't invalidate the discussion up to this point;
Yes, I was very explicit about that.

Pylon Wrote:You don't have the barest comprehension of what "intelligence" even is or how it's measured.
No, IQ is not related to AI. You realize we already have methods to measure non-human (animal) intelligence, and they don't use IQ tests, right?

No need to shit up the thread like that.

Edit: to be exact, points 1 and 2 apply to IQ tests anyway. There's literally "passing a test" there. I say IQ isn't related because we don't need to get into IQ to talk of intelligence from a higher-level view.
Guest
KimKardashian Wrote:How is it not true? Any goal A we give the AI limits its freedom to choose its behavior. And since any goal A probably relates to some further goal B, we are effectively telling the AI to attain B via A, but perhaps attaining it via C would have been more intelligent? Therefore we have limited the AI's behavior by our own intelligence. This logic applies at all levels.

I don't see how theory of mind applies, though.

An AI could be made specifically for aiding in the creation of an AGI. This would solve the lack of direction, as it can outline the necessary components needed to simulate a mind. It can be refined if it does not work, until general intelligence is achieved.
Quote:By unverifiable I mean we don't know whether the behavior is intelligent. It has not reached its conclusion, and either a) the AI does not seem to be reaching the goal we gave it, or b) we cannot figure out the goal the AI took for itself. The behavior seems purposeless. At some point we need to just terminate the behavior and give the AI bad feedback on it.


Your only real criticism seems to be that because we don't have superintelligence, we can't make AGI. All that is needed is for an AGI to reach a normal and recognizable midwit level of sentience for it to be accomplished. We can train it further from there, and reach superintelligence.
Mason Hall-McCullough
KimKardashian Wrote:The argument:

1. Intelligence is the measurable ability of any behavior to attain a given goal, whether food, passing a test, asteroid mining, etc.

2. To measure (verify) intelligence, you need to know the goal being pursued. Without it, you have no standard against which to measure ability. Unverified intelligence is indistinguishable from stupidity.

A measurable standard... so an exam of some sort? LLMs are good at many of those. I'm confident that in a matter of years AI will be beating the smartest humans at any possible exam you could design.

Premise 2 is wrong anyway and the rest of the post is a logic trap relying on this false premise. You don't need to measure or objectify the intelligence of an insect to verify that a human is more intelligent. Intelligence is plainly distinguishable from stupidity by casual observation.
KimKardashian
Guest Wrote:An AI could be made specifically for aiding in the creation of an AGI. This would solve the lack of direction, as it can outline the necessary components needed to simulate a mind. It can be refined if it does not work, until general intelligence is achieved.
Creating another AI falls under the same logic and same problems as creating the AGI. Lack of direction (5) by itself isn't a problem, since we can just guide the AI ourselves (4). 

Guest Wrote:Your only real criticism seems to be that because we don't have superintelligence, we can't make AGI. All that is needed is for an AGI to reach a normal and recognizable midwit level of sentience for it to be accomplished.
Demonstrating the impossibility of AGI is the weakest part, yes. But the logic is strong on the impossibility of superintelligence.

Guest Wrote:We can train it further from there, and reach superintelligence.
Training it falls under the same logic and same problems.

Mason Hall-McCullough Wrote:A measurable standard... so an exam of some sort? LLMs are good at many of those.
Yes, I am aware. We use tests because they are a cheap way to reflect real ability in people (e.g. their IQ or knowledge), because real ability increases their test-taking ability. But the logic doesn't work in the opposite direction -- making people better at solving tests does not increase real ability. It's called teaching to the test. It's a form of overfitting. You will have a good AI for passing tests, but if that's your only standard, passing tests is all it will be good at. It's the same problem childhood interventions ran into when attempting to increase black IQ scores. You always need to assess real-life behavior.
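
A toy sketch of the overfitting point (my own illustration with made-up exam items): a "model" that simply memorizes the fixed answer key scores perfectly on that exam and collapses on anything drawn from outside it.

# Toy "teaching to the test": the learner memorizes the answer key for the fixed exam,
# so its score on that exam says nothing about general ability.
exam = {"2+2": "4", "capital of France": "Paris", "boiling point of water (C)": "100"}

memorizer = dict(exam)   # "training" = copying the answer key

def take(test, model):
    correct = sum(model.get(q) == a for q, a in test.items())
    return correct / len(test)

print(take(exam, memorizer))                                        # 1.0 -- perfect on the test
print(take({"3+3": "6", "capital of Spain": "Madrid"}, memorizer))  # 0.0 -- no real ability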

Mason Hall-McCullough Wrote:Premise 2 is wrong anyway and the rest of the post is a logic trap relying on this false premise. You don't need to measure or objectify the intelligence of an insect to verify that a human is more intelligent. Intelligence is plainly distinguishable from stupidity by casual observation.
Casual observation uses the same logic (2): in seeing how ants accomplish their goals of nest-building and food gathering, we can tell that accomplishing them doesn't require much of the ability we'd consider very intelligent. Casual observation contains measurement like any other assessment of intelligence. We can also casually measure the height of a building or the length of a car just by eyeballing, without pulling out the measuring tape.

Important to note: any measurement may be inaccurate, but especially one from casual observation. Did you know ants also herd other insects and farm fungus? This is beyond a mere casual observation. Casual observation is what deemed Newton stupid and made him a target of bullying.
BillyONare
Amarna challenge:

Define intelligence. I have a very good definition that I think is the best and I’m not sure if anyone has ever defined it the same way. Winner gets a unique Philosopher role + banner.

Hint: Evolutionary biology.
KimKardashian
BillyONare Wrote:Amarna challenge:

Define intelligence. I have a very good definition that I think is the best and I’m not sure if anyone has ever defined it the same way. Winner gets a unique Philosopher role + banner.

Hint: Evolutionary biology.
Evolutionarily speaking, it's behavioral flexibility -- the ability to cope well with evolutionarily novel environments and situations to which one isn't adapted. Evolutionary biology? Uh, brain-body ratio? You're probably aiming for some definition that includes only the number of neurons. Then again, those aren't really new, so possibly not.
obscurefish
The way people talk about AGI expectantly sounds very 19th century to my ears.


