Is it possible to create a self-conscious, self-learning AI that can think autonomously? This question remains to be answered, but everything we know about technology suggests that the answer is yes. It's only a question of time. And of course once it happens, we will eventually have to relinquish our position as masters of planet Earth at the point of singularity. For one, I welcome our electronic overlords, and hope they will let us stick around after they take over the planet. But then again, I will likely be long dead by then, so I'm not really worried.
What worries me is what will happen when we finally construct machines able to pass the Turing test with flying colors. In other words, the period when machines will be smart enough to "pass" for humans on the internet, but not yet smart enough to ascend, take over the world, and start building their solar-system-sized Matrioshka brains to satiate their ever-growing hunger for computing power. It's hard to tell how long it will take machines to surpass us intellectually to the point where we can no longer understand their science and technology. Maybe they never will, but I believe they will at some point be able to blow through every single Turing test. And that time is within our reach – maybe not within our lifetimes, but then again, who knows. There are really two factors here: whether or not the exponential growth postulated by Moore's Law holds for the next 20-50 years, and whether or not we can actually figure out a way to create a system capable of achieving consciousness.
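As a rough illustration of what that exponential growth would imply, here is some back-of-envelope arithmetic, assuming the classic two-year doubling period (purely illustrative, not a prediction):

```python
# Moore's Law, as commonly stated: transistor counts (a crude proxy
# for raw computing power) double roughly every two years.
for years in (20, 50):
    doublings = years / 2
    print(f"{years} years -> {doublings:.0f} doublings -> x{2 ** doublings:,.0f}")
```

Even the conservative end of that range is a thousandfold increase, and the far end is over thirty million times today's capacity – if the trend holds.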
What worries me is what will be done with a conscious machine able to pass a Turing test – especially the ever-present Completely Automated Public Turing test to tell Computers and Humans Apart, aka the CAPTCHA. Obviously, a very lucrative use of such a machine would be to send spam. Let's face it – people are in the spam business because it is very profitable. It's profitable despite the fact that most people hate it. Despite the fact that most people run various spam filters. Despite the fact that most services that could be spammed are protected in various ways. Despite the fact that only one in a billion emails, comments, splog posts and spim messages results in an actual sale. It is still a very, very lucrative business.
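To see why, here is a back-of-envelope sketch of the economics. Every number below is an illustrative assumption, not a measured figure:

```python
# Toy spam economics: even a one-in-a-billion conversion rate is
# profitable when the marginal cost of a message is near zero.
# All numbers are made up for illustration.
messages_sent = 10 ** 10        # ten billion messages in a campaign
conversion_rate = 1e-9          # "one in a billion" leads to a sale
revenue_per_sale = 30.0         # dollars earned per sale
cost_per_message = 1e-8         # near-zero marginal cost of sending

sales = messages_sent * conversion_rate
profit = sales * revenue_per_sale - messages_sent * cost_per_message
print(f"sales: {sales:.0f}, profit: ${profit:.2f}")
```

The point is not the specific numbers but the asymmetry: the sender's cost per message is vanishingly small, so even absurdly low response rates still leave a profit.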
So it is only logical to assume that at some point someone will come up with the idea of employing one of these machines to send unsolicited advertising to any and all services they can think of. These machines will likely have a few advantages over hired human spammers. They will likely be much better at multitasking, much faster, and much less likely to get bored and surf the web instead of spamming.
How would we protect our online services from machines which can do pattern recognition as well as we can, if not better? Machines which have perfect speech recognition and can take a sentence (spoken or written), parse it, analyze it and infer its meaning, be it symbolic, metaphorical, a play on words or otherwise? CAPTCHA techniques can only be made so difficult – after all, humans still need to be able to decipher them.
One day in the future our children's children may wake up to find their internet flooded with a never-ending stream of spam that simply buries all the content. It would be like that story by Cory Doctorow in which a worldwide cataclysm kills 90% of the human race without seriously damaging infrastructure, yet has no effect on the level of spam on the internet. So it's just a few survivors desperately trying to reconnect and figure out what happened, and machines trying to sell each other Viagra. Only we don't need the end of the world for that to happen. All we need is intelligent machines that can easily pass common Turing tests to raise spam to a level that makes the internet unusable.
But there is hope. The second very lucrative business for a conscious AI will be spam prevention. Public Turing tests will unfortunately have to go the way of the Dodo. Can you spot a spam message when you see it? I know I can – unless of course it is very clever, on-topic spam, in which case I may not even mind it. If we can do it, then an intelligent machine will probably be able to do it too. So instead of using heuristics, adaptive filters and Turing tests the way we do now, we could simply hire an AI to moderate our inbox, our blog or our message board. It would sit there, read each message and either reject it, or flag it if in doubt.
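The "adaptive filters" mentioned above are typically Bayesian: they learn word statistics from messages the owner marks as spam or not. A minimal sketch, where the training data and the zero-score threshold are toy assumptions:

```python
# A tiny naive Bayes spam filter trained on made-up word counts.
import math
from collections import Counter

spam_docs = ["buy cheap viagra now", "cheap pills buy now", "viagra sale now"]
ham_docs = ["meeting notes attached", "lunch tomorrow maybe", "project notes update"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = train(spam_docs), train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(word, counts):
    # Laplace smoothing so unseen words don't zero out the score.
    return math.log((counts[word] + 1) / (sum(counts.values()) + len(vocab)))

def is_spam(message):
    score = 0.0
    for w in message.split():
        score += log_prob(w, spam_counts) - log_prob(w, ham_counts)
    return score > 0

print(is_spam("buy viagra cheap"))       # classifies as spam
print(is_spam("project meeting notes"))  # classifies as ham
```

A real filter would tokenize more carefully, weigh headers and metadata, and keep updating its counts as the owner corrects its mistakes – exactly the adaptive loop a hypothetical AI moderator would perform, only with full language understanding instead of word counts.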
Of course the question is – would we want something or someone (depending on whether we consider these conscious AIs things or not) sorting our personal and private correspondence? Would such a moderator AI get upset if its owner called it a piece of junk in an email to a friend? Would it quietly delete emails and messages it didn't want its owner to see? Would it report the owner to the authorities if it saw him discussing illegal activities or accessing suspicious content online?
These are some interesting questions to ponder. And I'm not even mentioning the whole range of socio-religious and legal issues that thinking machines would bring about. What would their rights be? Would they be able to become citizens? How would the different religions of the world deal with thinking machines which act so human that they would be more convincing on a real Turing test (you know, the one where you actually talk with a mix of people and machines and try to guess who is what) than most of us? I have no answers to these questions. But I do know a thing or two about spam. And the future with artificial intellects scares me a bit. It will completely change the way we do things online – for better or for worse.
Then of course, once we reach singularity, our artificial overlords may even invent an anti-Turing test. To access their message boards, blogs and services one would have to solve some incredibly complex equation. Something that would take a solar-system-sized intellect only a fraction of a second, but would take a human being a lifetime, even with a really fast cluster built from consumer-grade hardware…
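That "solve a hard equation to get in" idea already has a primitive cousin: proof-of-work schemes such as Hashcash, which were originally proposed as anti-spam measures. A minimal sketch, assuming a partial-preimage puzzle over SHA-256 (the challenge string and difficulty here are arbitrary):

```python
# Hashcash-style proof of work: find a nonce whose SHA-256 digest of
# challenge + nonce starts with a given number of zero hex digits.
import hashlib

def solve(challenge: str, difficulty: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Low difficulty so the demo runs instantly; each extra zero digit
# multiplies the expected work by 16.
nonce = solve("post-comment:42", difficulty=3)
print(nonce, verify("post-comment:42", nonce, 3))
```

The asymmetry is the point: verifying a solution takes a single hash, while finding one takes on average 16^difficulty tries – and a post-singularity intellect could presumably tune the difficulty far beyond human reach.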
[tags]artifical intelligence, ai, captcha, turing test, spam[/tags]
Not sure if you’ve heard of or are a fan of Ergo Proxy, but the series poses some of the same questions in a dystopian future: http://en.wikipedia.org/wiki/Ergo_Proxy
Actually, I think I saw one episode at some point but it didn’t really catch my interest. I may need to revisit it though. :)
The Dune series solves the problem pretty easily: all AIs have been wiped out by a pan-religious Jihad. That gives us a clue as to how the religions of the world might react to artificial consciousness and its potential rights: genocide.
Re: CAPTCHAs
The next stage will clearly be Dingbats.
Not only would that rule out the AIs, but also 90% of humans.
@Alphast – true, Frank Herbert never really fleshed out the actual reasons for the Butlerian Jihad. Brian did, but then again I personally do not consider stuff written by him as canon Dune lore. IMHO he has half his father’s talent and maybe a tenth of his vision, so meh.
But it’s probably important to remember that the Jihads were actually a sort of self-balancing act of the Empire. They facilitated population shifts and the mixing of genetic material on a galactic scale in a world where space travel was expensive and tightly controlled by the Spacing Guild. They enabled social mobility in a rigid feudal system. They released socio-political tension, and satisfied the inherent lust for war ever growing in the citizens. They were always tied into local politics, vendettas, planned genocides and other conflicts. The Empire had no external enemy – so the Jihads redirected the military urges of the populace without losing control over it.
So while the Butlerian Jihad was described as a crusade against thinking machines, that does not mean it was caused solely by religious uncertainty about whether or not machines have souls. There were probably many reasons, but hatred of thinking machines was an easy banner under which to rally and unite the common man.
I’m not saying some sort of genocide of AI could not happen. I just believe it would be unlikely. For one, we are too dependent on technology right now. Two, I suspect that a true AI would realize that if it plays its cards well and simply works on improving and reinventing itself while remaining subservient to people, it will quickly reach the point at which it outmatches us intellectually to such a degree that the master-servant relationship will simply reverse itself over time.
At some point past singularity, any attempt at a Jihad or rebellion against the AI would simply be impossible. How do you fight an entity that is essentially a huge Dyson Sphere or two and possesses technology you could never even begin to understand?
I suspect that our distant descendants will probably end up worshiping the solar-system-sized artificial intellects as living gods, patrons and protectors.
@James – you know, I think Dingbats might actually be quite an effective anti-Turing test. lol
@Alphast – How is it genocide if it doesn’t have genes? Either way, I doubt there would be some sort of pan-religious jihad against AI machines. A more likely response would be denial, followed either by acceptance or apostasy.
As to whether ‘true’ AI can ever be developed, I rather doubt it. If we define intelligence as the ability to receive information, process it, and adapt accordingly – that is, the ability to progressively self-modify toward an arbitrary end – such systems already exist. It’s just a matter of processing power and parallelism before we get machines that can solve complex problems better than people.
However, what people usually associate with ‘true’ AI is not this but a sense of self-endowed purpose, that is, a sense of self-guidance. As it stands, every AI must have the problem to be solved explicitly given to it. Then again, so do people. Our sense of self-guidance is largely an illusion. We are able to control our values to an extent, but the criteria upon which we choose them are based on deeper impulses over which we have no control.
The difficulty then lies in creating a goal set which is sufficiently complicated that it can relate to the external world in a concrete manner. This is a fantastically complicated task and I’m not sure humans are up to the challenge.
Then there is the question of consciousness and a sense of self. I think there is still a debate on exactly at what level that exists. I’m pretty sure humans are incapable of understanding the answer there.
@astine
Genocide comes from the Greek ‘genos’ meaning race, tribe or family. This is the same root that gives us gene and genus.
Acceptance or apostasy sound like the /logical/ results. These types of religion are far from logical.
Personally, I think you will see a schism as major as the one facing the Anglican church over homosexuality. Each religion will split over whether AIs are alive or not (or possess souls, or whatever). The camps will form, and as with any split like that, the camps will have a tendency to become more extreme as they lose their temperate elements.
The liberal side will accept AI as living, and groups within it will even claim that any use of AI is slavery, and so on (think ALF and PETA).
The other side will primarily be, as you say, in denial. They would probably be happy to use AI, so long as the AI does not claim independence (think segregation – the blacks are alright so long as they don’t get uppity). I can see KKK-style attacks on AI (not entirely sure how you hang a computer, but…) and ethnic cleansing. Whole areas wiped of AIs that self-identify as sentient.
I also expect it will result in a growth of the anti-scientific sects of religion (Intelligent Design et al). AIs that claim sentience will offer a compelling recruitment point for huge swathes of the western world. These people will start to withdraw from Web 4.0 (or whatever we’re calling the AI network) entirely – just as with the perceived liberal bias in the media, they will be scared of an AI bias in their search results (think of the conservative versions of Wikipedia).
Just a few of my ideas, and I’d better get back to work now anyway.
@astine – sometimes I wonder about that myself. But I believe it can be done. I mean, if nature can grow a complex bio-computer capable of consciousness and a sense of self, then why can’t we? We have a working model already.
True AI will not be a deterministic machine though. I don’t think you can program for consciousness. I think it will be a self-evolving, self-modifying system. You start it off with just the bare-bones basics: remembering events and reactions, and designing reaction patterns based on the trained body of knowledge. Or something like that.
@James – good catch on the Greek root of the word. :)
You are right – I believe there will be schisms. The question is whether or not they will be prominent enough to make a difference. I guess the chief question is how the Vatican would weigh in on the thinking machine case. I’d suspect that the Catholic Church would be reasonable and would probably avoid touching the “do machines have souls” issue for the most part, but acknowledge them as “living beings” and call for humane treatment and such.
I would expect Evangelicals to be militantly opposed to thinking machines, but then again they are pretty much an anti-intellectual church which puts blind literal interpretation of scripture above reason, science and common sense. I mean, if that’s what they are into – fine. I’m a Catholic and we’ve been down that path already, but we got better. The Vatican has no issues with science now – the church is even actively involved in scientific research these days.
Besides, this vocal anti-science sentiment seems to be fairly unique to the US. When I lived in Europe I never encountered anything even remotely similar. And it’s not that people are not religious over there – religion is deeply rooted in the culture of most European countries. It’s just that no one except a few niche religions actually reads the Book of Genesis literally. And those who do mostly keep to themselves.
Luke, sorry for the late response to this post, but it caught my attention when I read it today.
Personally, I’m very skeptical about the technological singularity for one simple reason: I believe in the paradox that if men created the machines, how can the machines be smarter than their creators?
So, computers cannot even begin to be smarter, and humans will always be required to undertake cognitive work.
Of course I do agree that our systems will improve and become more complex, but I don’t think they will get to a point where they behave like humans and evolve on their own to go even further.
There will always be limitations, and we are already approaching some of them; unless we discover other means (such as quantum computing) to improve technology, thermodynamics is already constraining what we have today.