There is a long-standing debate in computer science about the possibility of Artificial Intelligence: whether it is achievable at all, whether we could recognise it as such, and, once it is achieved, what happens next.
It is interesting not just as a question about the nature of human consciousness and intelligence, but also because of two properties: an artificial intelligence is, by construction, duplicable, and it can be sped up. Thus, in principle, a human-equivalent artificial intelligence can rapidly become "more intelligent" than a human.
There is a lot of speculation on what happens then - the range of possibilities has been much explored in recent fiction, famously by Vernor Vinge (cf. "Bookworm, Run!" and "True Names"), but also by Charlie Stross (cf. "Antibodies"), Ken MacLeod (cf. Newton's Wake) and Iain Banks (the Culture novels), with some very different takes on the subject.
One of them is the "Rapture of the Nerds" scenario: an AI, once established, rapidly expands and speeds up by accessing additional hardware resources, and then becomes superhuman in intelligence, at which point we can effectively no longer comprehend what it does - it would be at least as incomprehensible to us as we are to our cats.
As with cats, we might retain an illusion of mastery, if the Eschaton is benevolent... or not (the "Terminator" scenario).
So, how would this come about? Most people think either by design or by accident: either a lab will try to build an AI and succeed, or one will spontaneously form from a structure not intended as an AI.
So, how would the latter come about? Not, I fear, from the comment section of Atrios's blog, or any other, but this is a quantifiable question.
The human brain has ~100 billion neurons, densely connected, with signalling latencies of milliseconds. So an analogous structure would, one would think, be adequate. Another key feature is that the human brain can learn: synaptic connections reinforce or are deleted as needed. It is also robust: damage is worked around.
The connectivity and complexity are significantly higher than in any single CPU currently made.
Soooo... remind you of anything? Google currently indexes just under 10 billion web pages, with access latencies typically measured in milliseconds. Connections are reinforced or deleted by adapting links; the IP protocols route around damage, and pages can be generated dynamically in response to queries.
Google learns, is adaptable and robust, is within an order of magnitude of the size and complexity of the human brain, and operates on comparable dynamical time scales.
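As a sanity check, the order-of-magnitude comparison above can be put into a few lines of code. All figures are the rough ones quoted in the text (100 billion neurons, 10 billion pages, millisecond latencies for both), not measurements:

```python
import math

# Rough figures quoted in the text; all are order-of-magnitude estimates.
brain_neurons = 100e9     # ~100 billion neurons
google_pages = 10e9       # ~10 billion indexed web pages

brain_latency_s = 1e-3    # synaptic signalling latency, ~milliseconds
google_latency_s = 1e-3   # typical page access latency, ~milliseconds

# How far apart are the two systems in size, in orders of magnitude?
size_gap = math.log10(brain_neurons / google_pages)
print(f"size gap: ~{size_gap:.0f} order(s) of magnitude")

# And in characteristic time scale?
latency_ratio = brain_latency_s / google_latency_s
print(f"latency ratio: {latency_ratio:.0f}x")
```

Which is the whole argument in miniature: one order of magnitude apart in element count, and essentially identical in dynamical time scale.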
It is also getting bigger, faster and is actively seeking to encompass new knowledge domains.
I suggest Google (in so far as it can be localised to a single entity) as the structure most likely to achieve superhuman sentience, possibly in the relatively near future.
It would initially be as capable of inference as a smart human and about as fast, but far more knowledgeable. The limit would be the "library indexing problem": the fact that search algorithms are slow.
But there may be ways around that.
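To make the "library indexing problem" concrete, here is a minimal, purely illustrative sketch (toy data, nothing resembling what Google actually does) contrasting a brute-force scan, whose cost grows with the whole collection, against an inverted index, which pays the indexing cost once and then answers term queries cheaply:

```python
from collections import defaultdict

# Toy document collection; in the real problem N is ~10 billion.
documents = {
    1: "the cat sat on the mat",
    2: "cats are small mammals",
    3: "the dog chased the cat",
}

def scan(term):
    """Brute force: touch every document on every query - O(N * doc length)."""
    return [doc_id for doc_id, text in documents.items()
            if term in text.split()]

# Inverted index: built once, up front, mapping each word to the
# set of documents that contain it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def lookup(term):
    """Answer a query with one dictionary access, independent of N."""
    return sorted(index.get(term, set()))

print(scan("cat"))    # [1, 3]
print(lookup("cat"))  # [1, 3]
```

Both calls return the same answer; the difference is that the scan's cost grows linearly with the collection while the lookup's does not. The "ways around that" would amount to keeping such precomputed structures fresh as the collection grows.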