AGI = Impossible
Geisteshahnreischaft
Guest Wrote:Probably a better thing to argue (rather than nitpicking the semantics of "intelligence" or "superintelligence") is how well an AI statistical approximation (a "swampman", if you will) of consciousness actually emulates consciousness; or rather: if you had two "equally matched" systems, one capable of consciousness and one incapable, a machine and a human, could you devise a test to select the conscious one?

How is consciousness defined?
What do you mean by "emulate"? Faking?
How would you know if an artificial system is conscious if it emulates (fakes?) all the attributes and characteristics of consciousness?
Let's say we somehow create an artificial system that is conscious; how would you know it's not faking?
The Guest
Geisteshahnreischaft Wrote:How is consciousness defined?
What do you mean by "emulate"? Faking?
How would you know if an artificial system is conscious if it emulates (fakes?) all the attributes and characteristics of consciousness?
Let's say we somehow create an artificial system that is conscious; how would you know it's not faking?


Exactly my point: these are pertinent questions that have not been answered (either here or anywhere else) and that need to be tackled to properly discuss the thread at hand. While I can argue that consciousness is a property of the human mind, I am unable to define, or even find, a decent definition of consciousness (similarly with "cognition" and "sentience"), and I don't think there yet exists a framework that can produce a meaningful definition for the task at hand (even the "qualia" abstraction leaves much to be desired).


"Emulation", I would argue here is the approximation (or otherwise aping) of the above via known or computable processes (i.e. LLMs, or other AI models, which we can observe and determine the discrete steps of when sampling/predicting), the last two questions are the ones that the thread should cover, and I unfortunately do not have an answer to (as an answer would allow unprecedented manipulation of "consciousness", as both questions essentially reduce to 'what makes consciousness special'). If there is no such "uniqueness" to consciousness (an answer I am willing to accept with enough proof), then I suppose it would make sense for the above definition to not exist (as you cannot define something that lacks distinction), yet lived experience, internal perception, problem solving, etc. seems to imply that consciousness is some *distinct* thing.
Still The Guest
The Guest Wrote:Exactly my point: these are pertinent questions that have not been answered (either here or anywhere else) and that need to be tackled to properly discuss the thread at hand. [...] yet lived experience, internal perception, problem solving, etc. seem to imply that consciousness is some *distinct* thing.

The point here is really that we don't even know the right questions to ask when tackling "do machines think", let alone the methods for solving those questions, or, God forbid, the answers to them. Creating a thread on this with OP's bait (even trying to lay it out analytically, as if you could ever argue such a thing through the alchemy of tautologies instead of constructively) made me angry enough to fire off a knee-jerk response.
Geisteshahnreischaft
The Guest Wrote:
Geisteshahnreischaft Wrote:How is consciousness defined? [...]

Exactly my point: these are pertinent questions that have not been answered (either here or anywhere else) and that need to be tackled to properly discuss the thread at hand. [...] (even the "qualia" abstraction leaves much to be desired).

"Emulation", I would argue, is the approximation (or otherwise aping) of the above via known or computable processes [...] yet lived experience, internal perception, problem solving, etc. seem to imply that consciousness is some *distinct* thing.

To me, qualia alone would be an incorrect way to describe consciousness, because qualia are determined by an outside observer. You are basically alluding to the physical monism of the infamous mind-body problem: you are suggesting that the existence of consciousness is verifiable through empirical probing, if enough probing is performed, assuming there is a finite number of attributes or qualities that fully describe a state of consciousness.

In the AGI case, it's just neural correlates of subjective phenomena, only the neural correlates are emulated using numbers (or electrical charge/signals, if you will). It is not possible to know, in this context, whether something is conscious or self-conscious through empirical metaphysics. If "something" is conscious, only that "thing" will know whether it's conscious or not -- cogito, ergo sum. The state of consciousness cannot be known by an agent that resides outside the subject/system itself.

I think the deliberate creation of a fully conscious artificial system is not possible; if it ever happens, it will happen accidentally.
Guest #2
Guest Wrote:all of the reduction made to metaphysic [...] is silly obfuscation for this.
No metaphysical arguments have been made.

Guest Wrote:(LLMs have already proven themselves "smarter" than most niggers, and within the decade will probably be able to "produce fact" faster and more accurately than most Whites)
"Fact production" has nothing to do with intelligence. We've had search engines for decades now, are they intelligent?

Guest Wrote:Probably a better thing to argue (rather than nitpicking the semantics of "intelligence" or "superintelligence") is how well an AI statistical approximation (a "swampman", if you will) of consciousness actually emulates consciousness; or rather: if you had two "equally matched" systems, one capable of consciousness and one incapable, a machine and a human, could you devise a test to select the conscious one?
First of all, "consciousness" is meaningless outside of phenomenology. Weird of you to bring it in right after terming metaphysics obfuscatory.

Secondly, the way you replaced "intelligence" with "consciousness" is meaningless, for you use them identically. The question remains the same: can we recognize the essence of what makes us us in things that aren't us? Whether we call this essence "intelligence," "comprehension," "creativity," etc., doesn't matter; the question remains the same.

So can we measure it? Sure. Here are some lazy Turing analogues, where humans compete with machines in:
a) fiction
b) painting
c) music
d) scientific breakthroughs
e) inventions

Success is measured by profitability, i.e. by how much humans value the output.

I believe the competition must be wholesale, i.e. the whole of humanity vs. machines, and not on an individual basis. This seems more meaningful, since both human and machine output is the result of countless inputs. Pitting an individual human with his individual inputs against the sum of all the inputs of a machine seems meaningless, for a master writer doesn't spawn in a vacuum, but comes about through keen observation of others, and of himself interacting with others. If we go by these criteria, we have no machines that even approach the beginning of beating humans in any field.
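For concreteness, the wholesale comparison could be scored like the sketch below; the fields mirror the list above, and every figure is an invented placeholder, not real data:

# Sketch of the "wholesale" test: per field, compare how much humans
# value (pay for) the whole of human output vs. the whole of machine output.
human_value = {"fiction": 1.0e9, "painting": 5.0e8, "music": 2.0e9,
               "science": 3.0e9, "inventions": 4.0e9}
machine_value = {"fiction": 2.0e7, "painting": 1.0e7, "music": 5.0e7,
                 "science": 1.0e6, "inventions": 1.0e6}

def wholesale_winner(human, machine):
    # Whoever's total output is valued more wins the field.
    return {field: ("humans" if human[field] > machine[field] else "machines")
            for field in human}

print(wholesale_winner(human_value, machine_value))
# Under these placeholder figures, humans win every field, which matches
# the claim that no machine yet approaches beating humanity wholesale.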
Guest #2
Still The Guest Wrote:The point here is really that we don't even know the right questions to ask when tackling "do machines think", let alone the methods for solving those questions, or, God forbid, the answers to them.
Meaningless questions; see: philosophical zombie. We don't care whether machines think or are conscious, because 1) we can't even determine it in other humans and 2) it doesn't matter. We care only about behavior and whether it fits the desired criteria.
Geisteshahnreischaft
BillyONare Wrote:Training is not that important. Wait until midwit AI researchers understand what I understand. Oppenheimer did not train to create nuclear weapons, he just did it. THAT'S what intelligence is: to do things WITHOUT training. To do what would be IMPOSSIBLE no matter how many billions of years yeast-life has to evolve, in a matter of months, and change the course of history.

You are talking about creativity, or the mental faculty to generate creative ways to solve complex problems. Yup, I agree, those are not learnable skills; some people just have them, others don't. History and numerous examples suggest that Jews tend to have such mental capacity more than others. If we say that's the only way to define intelligence, then no, we don't have an artificial system that thinks or operates like a Jew.

Then I guess people (OP) are asking the wrong question here.

OP should have said "Artificial General Jewry (AGJ) == Impossible"