12-27-2023, 04:16 PM
(Notes regarding updating the OP:
1. This is an entirely new OP I wrote on 04.01.24. The previous OP is in this post. All of the thread up to that post pertains to the old OP.
2. I will regularly update this OP either in response to replies or to my own thoughts, so that it always represents the strongest case available.)
__________________________________________________________________________
Here I demonstrate that creating an AGI on par with or above human intelligence is a logical impossibility. I've tried to format this post as clearly as I can, and numbered the arguments for ease of reference when debunking the reasoning here.
The argument:
1. Intelligence is the measurable ability of any behavior to attain a given goal, whether food, passing a test, asteroid mining, etc.
2. To measure (prove) intelligence, you need to know the goal being pursued. Without it, you have no standard against which to measure ability, and the intelligence of the behavior remains unknown.
2.1. In unknown behavior, potential intelligence remains unprovable, and thus indiscernible from nonsense/stupidity.
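To illustrate points 1 through 2.1, here is a minimal hypothetical sketch (the `score` function and the two example behaviors are my own illustration, not anything from the argument itself): a behavior can only be rated against a known goal, and without the goal there is simply nothing to compute, so a clever behavior and random noise look the same.

```python
import random

# Hypothetical sketch: the "intelligence" of a behavior is measurable
# only relative to a known goal (here, closeness to a target value).
def score(behavior, goal):
    """Rate a behavior by how close its output lands to a known goal.
    Higher (closer to 0) is better; requires knowing the goal."""
    return -abs(behavior() - goal)

smart = lambda: 42                        # happens to hit the goal exactly
noise = lambda: random.randint(0, 1000)   # arbitrary output

goal = 42
print(score(smart, goal))   # 0: provably on target

# Without the goal argument, score() cannot be computed at all, so
# smart and noise are indistinguishable -- the point of 2 and 2.1.
```

The design point is that the goal is an input to the measurement, not something recoverable from the behavior alone.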
This is the basis and the rest follows from it:
3. Measuring intelligence is limited by the observer's own intelligence, because he needs to figure out the goal being pursued (2).
4. The goal of developing an AI is to produce provably intelligent behavior, which requires the engineer to monitor and cull any unknown behavior. Knowledge of goals is required (2).
4.1. If he sets goals for the AI himself, his intelligence will limit what the AI pursues.
4.2. If he lets the AI set goals for itself, any goal exceeding his intelligence results in unknown behavior (3).
5. It is not practicable to let unknown behavior continue in the hope that it turns out to be provably intelligent.
5.1. If it exceeds his intelligence, it will remain unknown (3, 6).
5.2. There are infinite ways for unknown behavior to produce nonsense, and only limited ways for it to turn out intelligent. It is prohibitively costly to brute-force it. This is how evolution produced human intelligence, and replicating that would take an immense number of iterations and immense complexity.
5.3. At best the behavior becomes provably intelligent, in which case the result is no better than following 4.
6. The engineer cannot take the AI's own word for the intelligence of behavior he cannot prove himself. This would run into 5, 5.2.
7. Therefore, the very process of constructing an AI consists of unavoidably culling any behavior beyond the engineer's own intelligence.
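The cost claim in 5.2 can be put as a back-of-the-envelope sketch (the action-repertoire size and sequence length below are assumptions of mine, chosen only to show the shape of the growth): the space of candidate behaviors grows exponentially in sequence length, while the goal-attaining subset stays comparatively tiny, so vetting unknown behavior by exhaustion is hopeless.

```python
# Hypothetical numbers for illustration only.
ACTIONS = 10    # assumed size of the action repertoire
STEPS = 20      # assumed length of one behavior sequence

# Every possible behavior sequence of that length:
candidates = ACTIONS ** STEPS
print(candidates)   # 100000000000000000000 (10^20 sequences to vet)
```

Even at a billion evaluations per second, exhausting 10^20 sequences takes on the order of three thousand years, which is the sense in which evolution-style brute force is not a practicable engineering strategy.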
If this reasoning stands, building an AI above human intelligence is impossible. But what about AI on par with human intelligence? (This is the weakest part of my reasoning, but I feel there is some truth here that I perhaps cannot yet express convincingly.)
8. The engineer cannot replicate his entire intelligence inside the AI. At best human intelligence will remain an asymptote which the AI forever approaches but never reaches.
8.1. He cannot know himself fully enough to replicate himself in the AI (just as there is no set of all sets).
8.2. Any addition to the AI will simultaneously add to the engineer's own intelligence, so he can never close the distance between the two.
NB: this disproves only the possibility of building such an intelligence, not that it may exist elsewhere in the universe (aliums).