Imminence of the Eschaton
There is a long-standing debate in computer science as to the possibility of Artificial Intelligence: whether it is achievable per se, whether we could recognise it as such, and, once achieved, what happens with it.
It is interesting not just as a question of the nature of human consciousness and intelligence, but also because of two aspects: an artificial intelligence is, by construction, duplicable, and it can also be speeded up. Thus, in principle, a human-equivalent artificial intelligence can rapidly become "more intelligent" than a human.
There is a lot of speculation on what happens then - the range of possibilities has been much explored in recent fiction, famously by Vernor Vinge (cf. "Bookworm, Run!" and "True Names"), but also by Charlie Stross (cf. "Antibodies"), Ken MacLeod (cf. Newton's Wake) and Iain M. Banks (the Culture novels), with some very different takes on the subject.
One of them is the "Rapture for Nerds" scenario - that an AI, once established, rapidly expands and speeds up by accessing additional hardware resources, and then becomes superhuman in intelligence, at which point we can effectively no longer comprehend what it does - it would be partially incomprehensible to us, much as we are to our cats.
As with cats, we might retain an illusion of mastery, if the Eschaton is benevolent... or not (the "Terminator" scenario).
So, how would this come about? Most people think either by design or by accident: either a lab will try to build an AI and succeed, or one will spontaneously form from a structure not intended as AI.
So, how would the latter come about? Not, I fear, from the comment section of Atrios's blog, or any other - but this is a quantifiable question.
The human brain has ~100 billion neurons, connected with latencies of milliseconds. So an analogous structure would be adequate, one would think. Another key feature is that the human brain can learn: synaptic connections are reinforced or deleted as needed. It is also robust; damage is worked around.
The connectivity and complexity are significantly higher than in any single CPU currently made.
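For concreteness, here is a minimal sketch of the kind of learning rule meant here - a toy Hebbian update on a random weight matrix, with reinforcement, decay and pruning. The network size, rates and thresholds are illustrative choices, not neuroscience:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 100 units with random synaptic weights in [0, 1).
n = 100
weights = rng.random((n, n))

def hebbian_step(weights, pre, post, rate=0.1, prune_below=0.01):
    """One illustrative update: strengthen co-active connections,
    decay the rest, and delete (zero out) very weak synapses."""
    coactive = np.outer(post, pre)          # 1 where both units fired
    weights = weights + rate * coactive     # reinforce used connections
    weights = weights * (1 - rate / 10)     # slow global decay
    weights[weights < prune_below] = 0.0    # prune: "deleted as needed"
    return np.clip(weights, 0.0, 1.0)

# Random spike patterns stand in for actual activity.
for _ in range(50):
    pre = (rng.random(n) < 0.2).astype(float)
    post = (rng.random(n) < 0.2).astype(float)
    weights = hebbian_step(weights, pre, post)

print(f"surviving synapses: {np.count_nonzero(weights)} of {n * n}")
```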
Soooo... remind you of anything? Google currently indexes just under 10 billion web pages, with access latencies typically of milliseconds. Connections are reinforced or deleted by adapting links; the IP protocols route around damage, and pages can be generated dynamically in response to queries.
Google learns, and is adaptable and robust; it is within an order of magnitude of the human brain in size and complexity, and has comparable dynamical time scales.
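The back-of-the-envelope arithmetic behind that claim, spelled out using only the rough figures quoted above:

```python
# Back-of-the-envelope scale comparison, using only the rough
# figures quoted above (not precise measurements).
import math

brain_neurons = 100e9      # ~100 billion neurons
google_pages = 10e9        # just under 10 billion indexed pages

ratio = brain_neurons / google_pages
print(f"brain/Google size ratio: {ratio:.0f}x "
      f"(~{math.log10(ratio):.0f} order(s) of magnitude)")

# Both systems operate on millisecond time scales, to order of magnitude.
brain_latency_ms = 1.0     # synaptic/axonal signalling
web_latency_ms = 1.0       # typical page access latency
print(f"latency ratio: {brain_latency_ms / web_latency_ms:.1f}x")
```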
It is also getting bigger and faster, and is actively seeking to encompass new knowledge domains.
I suggest Google (in so far as it can be localised to an entity) as the structure most likely to achieve superhuman sentience, with the possibility of it happening in the relatively near future.
It would initially be as capable of inference as a smart human, and about as fast, but far more knowledgeable. The limit would be the "library indexing problem": the fact that search algorithms are slow.
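A toy illustration of the trade-off (the corpus and terms here are made up, and real search engines are vastly more elaborate): building an inverted index is linear in the corpus, and per-term lookups are then cheap dictionary hits - it is the more open-ended kinds of search that stay slow.

```python
# Naive search scans every document; an inverted index trades
# indexing work up front for fast per-term lookups.
from collections import defaultdict

corpus = {
    1: "the cat sat on the mat",
    2: "google indexes billions of pages",
    3: "the brain has billions of neurons",
}

def scan_search(term):
    # O(total corpus size) per query - the slow path.
    return [doc_id for doc_id, text in corpus.items()
            if term in text.split()]

def build_index(corpus):
    # O(total corpus size) once; queries become ~O(1) dictionary hits.
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for term in text.split():
            index[term].add(doc_id)
    return index

index = build_index(corpus)
print(scan_search("billions"))      # [2, 3]
print(sorted(index["billions"]))    # [2, 3]
```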
But there may be ways around that.
heh.
3 Comments:
I, for one, welcome our new open-source search engine overlords....
ah, but is it an Eeeevil Overlord?
the "run bookworm" issue is certainly a real one; it's only a matter of time before humans are able to "widen" the communications pipeline between ourselves and our creations;
but any discussion of a highly artificial intelligence system taking over is a bit of a stretch; this is because of the "the lights are on" phenomenon; i.e. "the lights are on, but no one is home";
it essentially means that unless and until we as humans are able to explicitly model and implement consciousness in a machine, we will only have extraordinarily fast tools; unless and until they acquire the ability to self-reflect and introspect about themselves and their relationship to the external world, worrying about a machine takeover is a bit of a stretch; even with google's ability to index all of the pages on the web, i wouldn't look for anything earth-shattering to emerge from that quarter; rather, i'd be much more focused on doug lenat's cyc ramping itself forward into an area that we weren't quite able to follow;
if anyone has any questions about that, just look at what the code looks like that is created by a genetic algorithm; it is essentially humanly indecipherable...
still... it is a matter of time before we are able to determine if human-like consciousness is possible in a nonbiological embodiment;
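A minimal genetic-programming sketch of the commenter's point about evolved code, for the curious: it evolves arithmetic expressions to fit x**2 + x, with entirely arbitrary parameters, and the surviving expression is rarely pleasant to read.

```python
# Evolve arithmetic expressions to fit x**2 + x; the winner is
# typically correct-ish but humanly indecipherable. All parameters
# (population size, depth, rates) are arbitrary choices.
import random

random.seed(1)
OPS = ["+", "-", "*"]

def random_expr(depth=3):
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.7 else str(random.randint(1, 3))
    op = random.choice(OPS)
    return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

def fitness(expr):
    # Sum of squared errors against the target x**2 + x.
    return sum((eval(expr, {"x": x}) - (x * x + x)) ** 2
               for x in range(-5, 6))

def mutate(expr):
    # Crude mutation: occasionally restart, else splice in a subtree.
    if random.random() < 0.2:
        return random_expr(2)
    return expr.replace("x", random_expr(1), 1)

population = [random_expr() for _ in range(200)]
for _ in range(40):
    population.sort(key=fitness)
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]

best = min(population, key=fitness)
print("evolved:", best, " error:", fitness(best))
```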