Scratch Artificial, Is Intelligence an Existential Risk?

Eray Özkural
Oct 22, 2017


Does Artificial Intelligence (AI) technology pose a threat to human society? Any technology, in the wrong hands, may pose a threat to humans; after all, technology is defined by its provision of capabilities higher than what we are naturally endowed with. It is true that men with evil intentions can make use of such capabilities to inflict ever more harm on their opponents. In the case of AI, the fear of a threat has reached hysterical heights, thanks to the efforts of a few well-funded organizations, such as MIRI (Machine Intelligence Research Institute), FHI (Future of Humanity Institute), and the new Cambridge neo-luddite nest known as FLI (Future of Life Institute). It seems that there is abundant money in producing fear, uncertainty, and doubt about AI technology, and these AI eschatology profiteers will not stop spouting nonsense about technology that barely exists.

The psychological roots of such a fear lie in the good old anthropocentrism of our not-so-evolved culture: the belief that humans occupy a central place in the universe, or failing that, in our galaxy, or failing that, on our planet, or failing that, among intelligent species, natural or artificial. The progress of science can be pictured as slowly stripping the slightly intelligent, hairless, upright-walking ape of its irrational pride in being the most intelligent, or most special, species in a Darwinian world. What if another species upends human superiority? Would that be a tragedy, as the ultra-conservative neo-luddite nests referred to above would have you believe?

The rhetoric of AI eschatologists, aside from its apparent imitation of doomsayer cults, rests on an over-simplification of the science of AI and on many wrong assumptions, making improbable events seem probable. This is the general rhetorical schema of any doomsayer cult, since prophecies of doom are otherwise difficult to believe. AI doomsayers merely imitate this ancient, well-tested form. However, to reach their conclusions, they must also convince their followers of certain falsehoods, otherwise their funding would dry up. That is why they spend so much effort on publishing whimsical articles, essays, and books on a rather vacuous, trivial, and intellectually unsatisfying subject. An intellectual pursuit, or any technological contribution, is most certainly not in their slightest interest. The continuation of abundant fear, however, is. In this article, I dispel these illusions and summarize just what is so wrong and missing in their world-view, if you can call it that.

The academic pretension of their published work is incredible. Recently, Nick Bostrom published a book on the subject, expanding upon some of his previous papers and talks, which, unfortunately, do not make any scientific sense beyond being very poor imitations of some science fiction stories in the cyberpunk genre. In other words, as I, an AI scientist, would put it: “highly speculative”. Non-scientists, and scientists outside AI, would mistake these works for intellectually competent, knowledgeable writing backed by clear scientific expertise (Bostrom does claim expertise in a number of technical fields, and promotes himself as the world leader in the subject of his book; AI eschatology, perhaps?); however, they are quite superficial and rife with trivial mistakes. The other interesting point is that the members of these neo-luddite nests call themselves “transhumanists”; to me, personally, they seem merely like the guardians of a decaying and obsolete world-view. Therefore, let us know them for who they truly are: ultra-conservatives and neo-luddites who pretend to be transhumanists so that they can infiltrate transhumanist communities. Bostrom himself is an unashamed creationist and eschatologist who believes in a ridiculous “philosophical” version of Christian eschatology, and who promotes a new age variant of intelligent design, by alien programmer gods no less; Max Tegmark is also a creationist of sorts, proposing pseudo-science such as the claim that humans have some kind of uniqueness and specialness because of the anthropic principle. Yudkowsky is the only non-creationist among them, but he has equally strange views, such as his ridiculous version of utilitarian ethics (a silly ethics that tries to reduce morality to pleasure and pain) and his view that an AI dictatorship is a great form of government (because apparently that is the best way to prevent the creation of the dangerous, evil AI technology that is almost guaranteed to destroy us).

In fact, Yudkowsky has promoted his work as the most beneficial kind of work possible. After all, he is saving the world, so why should you not give him more money? You get more utilitarian bang per buck, the rhetoric goes, so you must donate more to the AI eschatologists at MIRI. Does this sound familiar? If it does not, let me remind you: it is the same party line as that of any TV evangelist, whom I have never watched but have read descriptions of. The promise of a heaven is not even a remotely new trick for a cultist; yet under this new disguise of utilitarianism, of a quite comical kind, he can say essentially the same thing while retaining his pseudo-atheism. Yudkowsky offers many other crypto-theological arguments, just like his partner Bostrom. Interestingly enough, he even offers a way to pray to future AI gods, which he comically calls “acausal trade”. One never knows just when irrational belief in a “free market” and “utilitarianism” will result in such manifestly false and wrong ideas. Of course, as the story unfolded, one of their more devoted proponents, a poor fellow who seriously believed in all this nonsense and kept missing the forest for the trees (the real show is making money), unintentionally published a very sound reductio ad absurdum of all of Yudkowsky’s views, called Roko’s Basilisk.

So, what was Roko’s Basilisk all about? According to RationalWiki, it is “a proposition that says an all-powerful artificial intelligence from the future may retroactively punish those who did not assist in bringing about its existence. It resembles a futurist version of Pascal’s wager; an argument suggesting that people should take into account particular singularitarian ideas, or even donate money, by weighing up the prospect of punishment versus reward.” Oh well, so it is the ultimate TV evangelist argument. Give us money, or you are going to hell! Because certainly, MIRI is going to create a friendly AI, which, of course, must punish such naughty boys for not paying MIRI to build it. In other words, judgement day will come, and the sinners will burn forever in hell. Are you a sinner, or not? That all depends on your donations to MIRI. You can probably make up for any mistakes of your past life by donating to MIRI, of course. Note the reference to Pascal’s wager, which is essentially a theological idea; surely, such ideas make sense only in a theological context. Even though Yudkowsky tried to censor the idea, it evidently followed directly from his own ideas; it was a natural conclusion of his own arguments. In other words, he refuted himself. Therefore, there is nothing more we need to say about the scientific quality of Yudkowsky’s ideas; second-grade dystopian science fiction does not warrant much serious analysis. However, let me emphasize what we (I and the dear reader) have discovered here: bad philosophical ideas are being used to create an analogue of religious mythology.

Why would someone write so much about a neo-creationist, pseudo-scientific mythology, unless he wanted to convert transhumanists to some kind of new age religion? If they truly understood the technology, would they not be writing about the technology instead?

And if they were truly transhumanists, would they view brain simulation technology as an “existential risk”? Surely not. Brain simulation is technological immortality; it is the very essence of transhumanism, the only scientifically possible way to transcend the fragile flesh that limits our existence. Ergo, these people are not transhumanists; they are merely agents of conservatism.

These people also try to convince others that AI technology is as dangerous as nuclear war. Well, first they tried to say it is more dangerous than nuclear war, but when many of us pointed out the fallacy, they changed their irrational rhetoric and simply tried to put AI on the same list as nuclear war, which is, of course, equally ridiculous. If that were true, there should be international bans on AI technology, just as there are regulations on nuclear technology. What would they get out of this? First, they would make technological immortality impossible; second, they would make rapid technological advances impossible, which would preserve the establishment, their true goal. That is what luddites deeply desire. Of course, these are people of extreme political persuasions (mostly right-wing, sometimes left-wing), and they do not wish to see a world where their preferred order is not preserved. There cannot be any scientists with technological immortality in a world where half-wits own trillions of dollars.

Let us turn to one of the most ridiculous subjects in the AI eschatology folklore: friendly AI, and coherent extrapolated volition. According to the LessWrong wiki, “Coherent Extrapolated Volition was a term developed by Eliezer Yudkowsky while discussing Friendly AI development. It’s meant as an argument that it would not be sufficient to explicitly program our desires and motivations into an AI. Instead, we should find a way to program it in a way that it would act in our best interests — what we want it to do and not what we tell it to.” The general idea here is quite easy to debunk, and it is a good model to show just how ridiculous and wrong AI eschatology rhetoric is in general.

Well, their idea is that we only need to teach a machine “human preferences”, because apparently that is a very powerful ethical theory. First of all, it is not. Human ethics is rarely about preferences; it is about values. You can teach a machine preferences or values, of course, but whether you would really want to do that is an entirely different question. Let me try to explain why they came up with this. There are some very wrong assumptions lying underneath. Basically, all of their pseudo-scientific nonsense is based on blowing out of proportion the conclusions of a nice paper written by Steve Omohundro. Let us start by dismantling their assumptions and their small, silly rhetorical claims.

  1. AI must be an agent. That is quite untrue. The kind of AGI program I am working on is completely “passive”, and is not an agent at all, yet it has all the intelligence that an agent can have. At any rate, most AI programs are not agents; the most useful kind is machine learning applications such as speech and face recognition. So, that is a big mistake.
  2. AI agents must be autonomous. No, AI agents do not need to be fully autonomous. They would rather be programmed to do whatever task is needed. It is a quite silly idea to have to convince a robot to do a job, and that is not how it should be. To replace labor, we must use AI in the most effective way; emulating a person is certainly not necessary or desirable for this kind of application.
  3. Even a question/answer machine is dangerous. No, it is not. A Q/A machine is completely “passive”; it only learns and solves the problems posed to it. It has no will of its own, and no goals whatsoever apart from giving the correct answer to a problem, which constitutes pure intelligence. A typical example of a Q/A machine is a machine learning classifier, such as one that tells whether a mushroom is edible or poisonous based on its attributes (see the short sketch after this list). The way they thought this would be dangerous: a politician comes and asks “What must I do to win this election?”, and the machine then tells him to do all kinds of sinister things, ending humanity. Of course, that is a ridiculous and implausible scenario that is not worth discussing.
  4. AI will necessarily have harmful AI drives. Omohundro argued in his paper that pursuing an innocent-looking objective such as “maximizing the number of paperclips” could have harmful consequences, since the AI agent would do anything to reach that objective. It would also develop animal-like drives, such as survival. But Omohundro’s analysis does not apply to every kind of design and motivation system. Autonomous robots with beneficial goal systems have been discussed by Ben Goertzel, and I have offered a pop-science level solution to designing motivation systems in this article. Basically, open-ended and selfish meta-goals can be harmful when given to fully autonomous agents, but there are many ways to fix this, such as removing full autonomy from the system, adding universal constraints (such as non-interference and advanced “robot laws”, i.e., a legal, logical AI agent), and making motivations closed-ended and selfless. The simplest solution, however, is to avoid autonomy in the first place, as well as goals that are animal-like (such as maximizing pleasure); a toy illustration of such a constrained motivation system appears after this list.
  5. Human preferences may be made coherent. On the contrary, they contradict one another wildly and manifestly. The views of the religious majority contradict those of intelligent people. As for Yudkowsky and his friends at MIRI, they seem like guinea pigs raised in a vacuum, with Judeo-Christian ethics, irrational libertarian-conservative values, US exceptionalism, and generally devoid of any passable morality. Therefore, it is hard to see who would be fit to train such an agent even if we picked preferentially. The sad story is that humans in general are not good at ethics, and they hold many wrong and harmful ideas about human society. The Less Wrong community is a good example of this, as it promotes sorry, sub-par ethical thinking and an ignorant, uncompassionate, ridiculously naive approach to complex ethical problems. They are self-refuting in that regard, because no sane person would want to live in a world dominated by the values of Less Wrong members, and training from the world at large would only be worse.
  6. A UN-controlled AI dictatorship is plausible. It is neither plausible nor desirable. It is diametrically opposed to democracy and freedom. Banning AI research is essentially banning all computer research; AI is just the apex of computer science, so when you ban AI, you must also ban computer science. That is how ridiculously neo-luddite that view is. On the other hand, no person would want to give up his sovereignty to an AI controlled by the UN, as the UN would most likely just program it to do whatever a certain rich country wanted it to do. It is also completely unreasonable, since most communities demand decentralized and democratic governance.
  7. A singularity can occur anywhere. It cannot. It is doubtful whether “a singularity” will occur at all. More likely, a higher technological plateau will develop; no real singularity will occur, because there are physical bottlenecks that will cause very significant slowdowns after 2030. However, even if we assumed there were no bottlenecks (and according to my projections that would mean a singularity by 2035), the theory concerns the whole globe, not a small subset of it. A singularity could only be funded by a very large nation at the very minimum, and even then it would be very unlikely. The likely event is that the whole globe will participate in computer technology, as it has in the past. It is pseudo-science to think that it can happen in a garage.
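
To make item 3 concrete, here is a minimal sketch of the kind of “passive” question/answer machine meant above: a classifier that answers whether a mushroom is edible. The attribute names and toy data are made up for illustration, and scikit-learn is simply one common way to build such a classifier; none of this comes from the original article.

```python
# A minimal sketch of a "passive" Q/A machine: given a mushroom's attributes,
# it answers "edible" or "poisonous" and does nothing else.
# The toy data below is purely illustrative, not a real mushroom dataset.
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Each row: (cap color, odor, gill size) -- hypothetical attributes.
X_raw = [
    ["brown", "none",    "broad"],
    ["white", "pungent", "narrow"],
    ["brown", "almond",  "broad"],
    ["red",   "foul",    "narrow"],
]
y = ["edible", "poisonous", "edible", "poisonous"]

# Encode the categorical attributes and fit a small decision tree.
encoder = OneHotEncoder(handle_unknown="ignore")
model = DecisionTreeClassifier().fit(encoder.fit_transform(X_raw), y)

# The program only maps a question to an answer; there is no agency here.
query = [["brown", "almond", "broad"]]
print(model.predict(encoder.transform(query)))  # e.g. ['edible']
```

Whatever its accuracy, such a program has no goals beyond emitting an answer when asked; the dangerous-agent scenarios simply do not arise at this level.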

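Similarly, to illustrate the remark in item 4 about constrained, closed-ended motivation systems, here is a toy sketch. It is entirely my own illustration, not a worked-out design from this article: candidate actions are filtered through hard constraints (a crude stand-in for “robot laws”), and the task goal simply stops once a fixed target is met.

```python
# Toy sketch: a closed-ended goal plus hard constraints that veto actions.
# All names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    paperclips_made: int             # progress toward the closed-ended task goal
    harms_human: bool                # flag assumed to come from a safety model
    uses_unauthorized_resources: bool

def violates_constraints(a: Action) -> bool:
    # Universal constraints: non-interference, and staying within authorized means.
    return a.harms_human or a.uses_unauthorized_resources

def choose(actions: List[Action], target: int, made_so_far: int) -> Optional[Action]:
    # Closed-ended goal: once the target is met, the agent simply stops.
    if made_so_far >= target:
        return None
    allowed = [a for a in actions if not violates_constraints(a)]
    # Among allowed actions, pick the most productive one; no open-ended drive remains.
    return max(allowed, key=lambda a: a.paperclips_made, default=None)

candidates = [
    Action("run the press normally", 10, False, False),
    Action("seize the neighboring factory", 1000, False, True),
    Action("melt down cars with people inside", 5000, True, True),
]
print(choose(candidates, target=100, made_so_far=40).name)  # -> run the press normally
```

The point is not that this toy solves AI safety; it is only that “pursue objective X” need not mean “pursue X at any cost”, and the design space of motivation systems is far larger than the doomsday rhetoric admits.
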
Let us now spell out what their proposal truly is.

  1. A comic book super-villain level plot to take over the world by the financial elite of the USA (Yudkowsky admits this jokingly, but you can tell he means it).
  2. A totalitarian solution to an extremely improbable, practically non-existent risk.
  3. Luddism: preventing AI technology from being developed by anyone they do not like.

Unfortunately, their proposals constitute a non-solution to a non-problem. A direct democracy would address the need to weigh human goals far better than trying to teach them to an AI agent, which is a ridiculous contrivance. This review of their ideas leads me to conclude that they are merely a cult of irrationality and AI eschatology, and a disgrace to rationalism and consequentialism. They may have done horrible damage to naturalist ethics with such arguments, which would make rational people think they are mentally ill. In other words, they are mistaking highly implausible, second-grade dystopian science fiction movies for reality. If someone watches Terminator and thinks it quite possible, that person is probably ingesting too many psychoactive substances. You can also see how a naive national-security mindset distorts their views.

Of course, robots can be dangerous. In accidents, heavy industrial robots have already killed people. Increasing their intelligence could certainly help prevent accidents, which is what Asimov’s robot laws were about. Only high intelligence could react correctly to an accident and save a person’s life in time. Therefore, if robots are to be abundant, we do need more advanced intelligence to prevent harm to humans. However, that does not at all mean that the robot must be human-like in personality or in cognitive architecture. Briefly, it does not need to be a person. I call this the “anthropomorphic AI fallacy”, and I note that it is widespread. A machine can be much more intelligent than a human, yet may entirely lack any human-like personality or autonomy. In fact, the most practical use of AGI software would be through very deep brain-machine interfaces, which would communicate our questions and receive answers rapidly. In robotics, this would take the form of translating our goals to robotic devices, or controlling them remotely and intelligently.

Should we grant personhood to intelligent, autonomous robots? We should, at least to a certain kind of robot: a robot equipped with a brain simulation. The digital person-branch of a biological person will already know and understand human conventions, and will be responsible for his actions. And that is the only way to have practical technological immortality: if my immortal, technological form did not have any rights, what would be the point of its existence? It is our cyber progeny that will colonize the solar system and the exoplanets, and thus we will have to concede rights to our progeny. I will certainly not allow my brain simulation to be equipped with a killswitch as Bostrom demands; I would rather prefer an explosive collar around his neck.

Likewise, for autonomous agents, we may envision a system where rigid laws control their behavior; I thus prefer Mark Waser’s libertarian solution to this problem of AI ethics. However, I must underline that we cannot assume any AI agent will be responsible for its behavior before we make sure that it has the capability and the right cognitive architecture. Both Steve Omohundro and I accept that we may program inane motivations that would turn out to be harmful; however, just as a human can have a somewhat stable psychology, so can a robot. We can allow such artificial persons (like Commander Data in Star Trek, which is much better science fiction than what the neo-luddites write) if and only if we are certain of their psychological qualities; it is true that we must not hurry with such projects.

Would it not be horrible if robots were used for crimes? Indeed, robots are already being used for horrible war crimes. Drone strikes are commonplace, and nobody raises an eyebrow over that; instead, people gleefully cheer the onset of combat robotics. In the future, most wars will be fought by machines, and these machines need no more than rudimentary intelligence. Most high-tech weapons are already robots, such as guided missiles, and in the future most will be robotic. Thus, perhaps, we should question the ethics of our fellow, naturally not-so-intelligent humans, rather than that of extremely intelligent, autonomous robots that do not exist.

That a technology can be used to inflict harm is not a good enough reason to ban it, because the benefits often outweigh the harms; for AI, by many orders of magnitude. People should instead worry about the people who will use robots for their evil deeds. AI technology will be pervasive; it will change the very way we use computers. Before, computers could not really create much useful information on their own; we mostly created and edited data on them. Now, computers will create useful data on their own. AI is not just some robotics technology; it is a wholly new era of computing. Even the capability to understand and react to human language will vastly change the computing landscape.

Now that we mention the benefits, what would they be? They amount to the automation of entire scientific and engineering disciplines, increasing progress by many orders of magnitude beyond what is humanly possible (we can expect 3 to 4 orders of magnitude of acceleration). AI science programs can crack the secrets of the universe, unifying general relativity and quantum theory. AI can solve the riddle of biological immortality in humans; it can design supreme spaceships that will skip across the space between stars; it can design better computers and create nano-technological marvels. It can help us achieve geo-engineering and fix our ecology; it can build better energy and recycling technology. It can help us create abundance on a scale never seen before, eradicating poverty. That is what AI will be truly useful for. Intelligence solves problems, very real, hard problems that are detrimental to human society. It is wrong to ask whether artificial intelligence is an existential risk. Scratch artificial: is intelligence an existential risk? There is no difference between AI eschatologists and those who demonize some mythical “evil genius” (like Oppenheimer) and oppose all intelligence. Those who fear AI truly fear intelligence itself, and its potency for creative change and progress. Intelligence is an existential threat to natural stupidity. Our myriad stupid ideas and our ignorance will seem quite outdated in an age of artificial intelligence. As the childhood of our species ends, we will need more intelligence, not less, to fix the problems we have created and to solve future problems of much larger magnitude. AI is about the only possibility that can fill this intelligence gap, this intelligence deficit, if you will.

On the other hand, greater-than-human intelligence already exists in the form of collective intelligence. Science, itself, is such an intelligence. Corporations, non-profits, volunteer organizations, government agencies, and intelligence services are such constructs. By running many human-level intelligences in parallel, they achieve trans-sapient intelligence. Google is a good example of a very high concentration of intelligence. In fact, in the original theory of the infinity point (popularized as the “singularity”), we only assume human-level AI technology, and then simply consider what would happen if we had many units of that type. That is to say, a collective intelligence system is exactly a kind of AI system, if organized rightly. Then we must ask: do we view these trans-sapient entities as necessarily an existential risk? If the arch-luddite Bostrom and his witless followers were right, then we would have to view any form of higher intelligence, especially scientific geniuses, as an “existential risk”. Thus reduced to utter absurdity, we see how untenable it is to view high intelligence as inherently harboring immense risks. We already live in a society of trans-sapient entities; it is only because our limited individual senses and intelligence cannot perceive the scope of intelligence operating in our world that we fail to see what is truly happening.

Intel was named the way it is for a very good reason; Dr. Moore knew what he was doing: he was creating a new kind of intelligence. In fact, chip companies like Intel are very well integrated cybernetic systems, with biological and electronic components, that have a highly refined and optimized design process which has resulted in the phenomenon we know as Moore’s law. This was made possible only by continued investment and the efficiency of the design process. By making good use of the human-level intelligence units known as electrical and electronics engineers, and the technological equipment they use for R&D, we have created almost perfectly exponential progress for many decades. On the other hand, if we followed Bostrom’s line of reasoning, we should have made it almost impossible to develop computer chips, and viewed the garage inventors and researchers who fuel innovation as national-security risks. That is a very unfortunate, regressive, and counter-productive mindset that any intelligent person must thoroughly reject, because it essentially demonizes all science and engineering, and all intelligence.
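
For readers who want the infinity point argument mentioned above spelled out, here is a minimal sketch of the usual derivation, commonly attributed to Ray Solomonoff; the notation and constants are my own shorthand, not this article's, and only the functional form matters.

```latex
% Sketch of the infinity point argument.
% Let C(t) be the computing capacity applied to improving computing technology.

% With a roughly constant human design effort (plain Moore's law):
\frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C_0\, e^{k t}
% ordinary exponential growth, with no singularity.

% With human-level AI, the design effort itself scales with C:
\frac{dC}{dt} = k\,C^{2} \quad\Longrightarrow\quad C(t) = \frac{C_0}{1 - k\,C_0\, t}
% which diverges at the finite time t^{*} = 1/(k\,C_0): the infinity point.
% Physical bottlenecks cap C in practice, turning the divergence into the
% higher plateau discussed in item 7 above.
```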

AI is truly the next big thing, just as the internet was in the 1990s. It will not be just a few tools, but a ubiquitous, pervasive technology that changes every function of society and is integrated into every technological product. You can already see that with self-driving cars. It will be how computers, and many machines, mainly operate. According to my projections, by 2030 your laptop will easily have as much intelligence as you have. This will mean that you can offload most of your tasks to computers, which will confer an enormous economic advantage. AI will be mostly about automating labor and all manner of menial tasks, while allowing the creative among us to reach ever higher aspirations using the creative labor of AI. A task like instructing the computer to make a graphic design, or to compose a piece of music, will be as easy as publishing a blog post, while sophisticated engineers will be able to accelerate their workflows a hundred-fold. I anticipate that nobody will ever want to go back once they have such capabilities, which are, essentially, free labor. It is the core technology of a post-scarcity society, and that is why it is quite hard for AI experts to sympathize with worries about AI.
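
As a back-of-the-envelope illustration of where a 2030-style projection can come from, here is a small sketch; the figures are rough, commonly cited estimates and my own assumptions, not the author's numbers, and raw operations per second are of course only a crude proxy for intelligence.

```python
# Back-of-the-envelope sketch of a "laptop reaches brain-scale compute" projection.
# All figures are illustrative assumptions (Moravec/Kurzweil-style estimates),
# not measurements.
import math

brain_ops = 1e16          # assumed effective operations/second of a human brain
laptop_ops_2017 = 1e13    # assumed operations/second of a high-end 2017 laptop
doubling_years = 1.5      # assumed doubling time for compute at constant cost

doublings_needed = math.log2(brain_ops / laptop_ops_2017)   # about 10 doublings
year_of_parity = 2017 + doublings_needed * doubling_years
print(round(year_of_parity))   # about 2032, in the ballpark of the 2030 projection
```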

Therefore, it does not make any sense to focus on dystopian science fiction stories lifted from Terminator: The Sarah Connor Chronicles or The Matrix. Horror-themed science fiction is great entertainment, but we should not ban biotech because we were scared by the movie Aliens. It is a severe lack of imagination to be obsessed with improbable scenarios, elevating them to small-probability-but-possible by means of irrational rhetoric, when boundless rewards are scientifically plausible. The probability of a single malevolent intelligence destroying everything in a world saturated with intelligence is negligible, and such saturation is the unavoidable, and highly desirable, outcome of AI technology. AI can be the rising tide that lifts all of our boats, but only if we are smart enough to stop listening to irrational neo-luddites like FHI.

It is still possible that natural stupidity may obstruct artificial intelligence for some time more, but not forever. Intelligence will surely prevail, eventually.

Eray Özkural

AI researcher, inventor, entrepreneur, philosopher, musician, free software hacker, demo scene coder. Founder of Celestial Intellect Cybernetics.