The Singularitarian Principles

Version 1.0.2
Extended edition

©1999-2001 by Eliezer S. Yudkowsky.
Revised 04/19/2001.
All rights reserved.


(If you haven't read the compact edition, you may wish to consider reading that first.  This document should be considered as commentary on that edition.)
Imagine the scene on this planet, a few years before the end, when there's a personal AI on every personal computer and programmers are openly discussing how to run a Transcendence.  Imagine the scene on this planet when that Transcendence begins, or when the first intelligence-enhanced humans start to make an impact.  If it happens in public, and it isn't over in milliseconds...

It'll be a madhouse.  Every single human on the planet who takes the Singularity seriously will try to do something, anything, to affect the Singularity.  Who's a "Singularitarian", in that kind of mess?  How can we define "Singularitarian", when the "Singularity" in "Singularitarian" stops being the Singularity-dream and starts being the Singularity-event?  The Singularity is real.  What's a Singularitarian?

A previous version (draft edition 0.1) of these Principles was an attempt to precisely define the term "Singularitarian"; what I learned in that attempt was that Principles are often not definitions.  To make each principle an absolute test, one that would apply even to Singularitarian AIs - yes, that was the standard I tried to use - the Principles had to be so thoroughly hedged and qualified that most of them ceased to be interesting.  The four Principles marked as "definitional" are the ones which survived that challenge, but they really don't convey the spirit of Singularitarianism.  The remaining Principles are an attempt to describe, rather than define.

These Principles surely are not articles of faith; most Principles can be challenged, excepted, or renounced entirely, and nobody will lose their "status" as a Singularitarian.  In fact, in accordance with the Principle of Independence, (A) being a Singularitarian is a personal, not a communal, decision; and (B) I don't have any particular authority to define the Principles.  The Principles, especially the "descriptive" Principles, express majority opinions, curves of greater and lesser inclusiveness, cognitive structures that can be altered or broken without altering the goal.  Although if none of the Principles hold true of you, you're almost certainly not a Singularitarian.

And the definition itself?  Singularitarians are the partisans of the Singularity.  A Singularitarian is someone who believes that technologically creating a greater-than-human intelligence is desirable, and who works to that end.  A Singularitarian is friend, advocate, defender, and agent of the future known as the Singularity.

That's the spirit of the word.  You can qualify it by advocating only a particular kind of Singularity (1); or by viewing the Singularity as a means to some greater end; or by believing that the Singularity may be more desirable than the alternatives, but not something to sing hosannas about; or by helping out because you're curious about the Singularity, but not a fanatic about it; or by defining "Singularity" as a drastic global change brought about by accelerated technological progress, rather than as a drastic global change brought about by the existence of greater-than-human intelligence; or even by being an AI with a goal system in which "defender" or "advocate" exist simply as temporary subgoals rather than separate emotions.  Nonetheless, at the end of the day, a Singularitarian is a friend of the Singularity.  "Singularity" plus "arian" equals "Singularitarian".

Finally, as stated in the Principle of Independence, a Singularitarian is a friend of the Singularity - rather than Singularitarianism.  Which leads to the question:  What Singularity-purpose is served by the Singularitarian Principles - the Principles of the Friends of the Singularity?

To add cohesiveness to the community; to provide a simple summary of basic principles that can be presented to others; to make it more possible to say "Singularitarians think" or "Singularitarians believe" (with a clearly defined level of permissible exceptions); to solidify commitments by solidifying the concept; and, to some extent, to shape Singularitarians by shaping the meme (2).

Definitional Principles:

The Principle of Singularity
One of the shaping truths about a Singularitarian is what he (3) believes about the Singularity - whether it's a Strong Singularity or a more comprehensible one; a Transcendence or an Event Horizon; whether or not positive feedback is involved; the magnitude of the gap between humans and transhumans; whether the domino chain of enhancement stops at superintelligences or Powers or somewhere beyond.

The odd thing is that what we believe about the Singularity shouldn't make such a large difference to our actions.  The Singularitarian who believes in a relatively weak "Singularity" brought about by enormously accelerated economic progress, thus eliminating poverty and hunger, should probably arrive at roughly the same "rational distribution of efforts" as someone who believes the planetary mass will be transformed into nanocomputers and every human uploaded into private paradises; or someone who believes that every human will become a Power with their own wormhole network of Universes; or someone who believes in the Event Horizon, pure and simple - that there's no point in predicting what lies beyond, because if a dog couldn't do it, and a Neanderthal couldn't do it, and a nineteenth-century human couldn't do it, neither can we.

But in practice, the dream of Singularity is a shaping dream.  The Singularity-dream is still a human concept, nothing like the Singularity-event itself - but it's a dream powerful enough to alter the mind that contains it.  If you stare into the Singularity long enough, the Singularity stares back into you.  (4).  The dream of Singularity lies at the center of our Singularitarian ideals.  It can even lie at the center of ourselves, at the far end of this Principle's curve.  Ideally, over time, our personalities take on some of the future shock of the dream, absorb some of the alienness of our subject matter.  (And then we can use it to shock people at parties.)

It's a human phenomenon, the effect I'm describing (5).  Once you've visualized the most intelligent entity you can imagine, you start to reshape yourself in that entity's image.  Oh, if your visualization is halfway competent you won't have a chance of matching even the image, and you'll know it.  But there are still some decision points, especially emotional, philosophical, and self-image decision points, where even the human dream of a Power is more rational than humans are set up to be.  With enough self-awareness and sufficient knowledge of cognitive science, you can categorize increasingly large parts of your personality as "anthropomorphic" and take a shot at eliminating them, counteracting them, or using them optimally, depending on how deep down you're operating.

The dream of superrationality tends to shape personal rationality, and the strength of future shock that lies in the dream of Singularity is a strong determinant of the pattern and strength of the Singularitarian.

The Principle of Activism
Singularitarians are the partisans of the Singularity.  A Singularitarian is someone who believes that technologically creating a greater-than-human intelligence is desirable, and who works to that end.  A Singularitarian is advocate, agent, defender, and friend of the future known as the Singularity.

The previous definition of "Singularitarian" - which was not mine - stated that a Singularitarian is "One who advocates the idea that technological progress will cause a singularity in human history."  In this re-creation of the term, I'm redefining the "arian" to mean "advocate of" - in the active sense of "advocate" - rather than "believer in".  I'm redefining the "Singularity" in "Singularitarian" to refer to the Singularity-event rather than the Singularity-meme.  In other words, "Singularitarian" now means "one who acts to increase the chance of a Singularity", not one who merely believes the Singularity is possible or probable.

That's the key word - act.

In cognitive terms, the Singularity is a major goal - one capable of initiating new plans, subgoals, and actions.  It's not enough just to influence the future whenever the opportunity presents itself; a Singularitarian seeks out such opportunities, or creates them if none exist.

In other words, this Principle is a quantitative predicate; opportunistic help is 25%, opportunity-seeking is 50%, and full-scale planning and opportunity creation is 80%.  Or, since those activities are mediated by talent and necessity, this Principle has a truth curve measured by the levels of emotional will and cognitive dedication that usually result in such activities, all else being equal.

Many of us, I'm sure, aren't as active as we'd like to be.  At this point in time, our "activism" may consist simply of waiting for something to do.  Not all of us have the gift of planning; to seek out and create opportunities requires time and talent, not just will.  But at the least, the will has to be present.  Perhaps things will be picking up shortly, with the creation of the Singularity Institute.

The Principle of Ultratechnology
The "Singularity" is a natural, non-mystical, technologically triggered event.  In a sense, this is not so much a constraint on belief as a constraint on terminology - if you happen to believe that this planet's Singularity represents or is identical with the Apocalypse of some particular religion, from the Age of Kali to the Christian Armageddon - or even if you believe that the Singularity is the inevitable destiny of humanity for moral reasons, rather than a possible outcome for causal ones - then that's perfectly okay, as long as you keep quiet about it.

Or perhaps a "gag order" is coming on a bit too strong.  But keep the terminology clean and the terms separate.  Say "I think that the natural event called the Singularity, or the technological creation of greater-than-human intelligence, will be the trigger of the religious event called Armageddon."  And if you want to bring about the Armageddon, rather than the Singularity as such, call yourself an Armageddonist rather than a Singularitarian.

Of course, the distinction between "natural" events and "mystical" events, considered on an ontological basis, is not exactly clear.  Likewise, what constitutes "science fiction" and what constitutes "fantasy" is still debated in the speculative-fiction community.  But I expect discussions of the Singularity to respect the real boundary between technology and magic, the same boundary that separates science fiction and fantasy:  If you do something by pressing a button, it's technology.  If you do it by chanting a spell, it's magic.  If you do something really cool by pressing a button, it's ultratechnology.  The Singularity is something really cool that happens when we press a button.

Likewise, if something happens because people go out and make it happen, that's natural; if something happens because of moral necessity, then that's - I'm not sure, "fatalism" maybe? - but not "natural", anyway; if something is fated to happen regardless of what we do, then that's destiny.  The Singularity is a natural event, and that's why we're trying to make it happen via technology.

That's also why "greater-than-human" intelligence has to involve a hardware improvement, not just a new personal philosophy.  Maybe a personal philosophy imported from the Transcend (6) could quadruple IQs or enable reprogramming of individual neurons, but not here.  Above all, having a brilliant idea doesn't respect the basic boundary; it doesn't involve pushing buttons.  Artificial Intelligence!  Neurohacking!  Neurocomputing interfaces! There's some serious button-pushing!  There's ultratechnology!

We, the Singularitarians, are allied in the purpose of bringing about a natural event through natural means, not sitting in a circle chanting over a computer.  There are thousands, perhaps millions, of stories and prophecies and rituals that allegedly involve something that could theoretically be described as "greater-than-human intelligence".  What distinguishes the Singularitarians is that we want to bring about a natural event, working through ultratechnology, without relying on mystical means or morally valent effects.  If we allow "Singularity" or "Singularitarian" to encompass anything else, the meme will disintegrate.

If you have some mystical/religious purpose for being a Singularitarian, or a morally-valent view of the run-up, ideally you should mention it at most once, keep it quiet thereafter, and not let it show in your actions - but that's more along the lines of a Descriptive Principle for preventing the show from turning into a religious war.  With regards to Definitional Principles, I have no problem with the idea of a Singularitarian motivated by religious purposes, as long as the beliefs about the nature of the Singularity don't show in the engineering details of navigating to the Singularity - as long as the belief is just a motivator, or a statement about outcomes, and doesn't get mixed in with the part of the Singularity we humans have to handle and turn it into something other than button-pushing.

Please don't get me wrong - if I were convinced I'd found a magical ritual for creating an AI, I'd do it in a flash.  I'm not saying there's anything wrong, or invalid, about using something other than technology.  I don't think it'll work, judging from history, but that's a different matter.  Nor is it necessary for someone to be a Singularitarian before they can get involved in the Singularity.  If the technopagans want to help out, they have the right - the Singularity belongs to humanity (Solidarity), and you don't have to be a Singularitarian to have a dark ulterior motive the Singularitarians can unwittingly serve (Independence).

The Principle of Ultratechnology also covers the general fascination with really cool technology that we inherited from the transhumanists.  While you can be blasé about tech, and still be a full Singularitarian, we are far more likely to forgive mystical talk if you're a techno-mystic - and by that, I do not mean glossing up your mysticism with gibberish borrowed from pop science tracts of the last two centuries (7); I mean spending your spare time with Recreational Explosives or hacking Linux or whatever.

I am, however, going to be strict about the naturalism side of it.  I think we should all treat it as a necessary part of the definition itself, as implied by the phrasing in the first paragraph.  We should regard anyone who violates this Principle as a poser, because if we go public we're going to get a lot of loonies violating this Principle.

I'm just trying to keep the word - and the meme - clean.  History shows that any concept this powerful is easily distorted, or even hijacked.

The Principle of Globalism
Under some mistaken formulations of the Singularity, only the first group to develop the key technology goes on to control the Universe, while the rest of the human race either continues scratching out a living or is simply exterminated.  In some formulations, the fate of individual humans is even dependent on their ideological convictions or their native intellectual talents.  I'm confident this isn't the case, mostly because the whole idea of some event occurring to some-but-not-all of humanity is anthropomorphic, even "gaussiomorphic" (8), because it implies forces in balance with the world; it implies forces that vary in the same way that humans vary, so that the forces affect some humans but not others.  In reality, any technology of the order Singularitarians deal with would relate to all humans in exactly the same way.  The ideological differences between individual humans wouldn't raise a blip - the forces involved are orders of magnitude more powerful, like a tidal wave washing across a pond, or a billionaire buying a pair of shoes.  Under the "Friendly AI" version of the Singularity, and volition-based Friendliness in particular, this enormous force might still respect the choices of individual humans - but if so, the Singularity would respect the choices of all humans, of all sentient minds, without noticing shape or form.

But that's just my personal opinion about the way the Universe works.  If a Singularitarian thinks the facts indicate that the first group to develop AI will win all the marbles, then a fact's a fact.  Nonetheless, there's no place in our community of "shared ulterior motives" for someone who's only willing to accept her personal development of AI as success - who, in fact, would count a Singularity sponsored by any other person as a failure.  We're united by the fact that your success counts as my success.  But, in fact, the Principle of Globalism is insufficient to support this result; unison requires the Principle of Nonsuppression as well.  Beyond that, the Singularitarians have not placed themselves at odds with humanity, for all that some of us have other allegiances.  But that result requires the Principle of Solidarity.

The Principle of Globalism actually is a matter of definition, a very subtle one, marking the difference between the terms "Singularitarian" and "posthumanist".  It's possible to want to bring about an event that would qualify as a "Singularity" without being a "Singularitarian".  Someone who thinks that the first uploadee will win all the marbles and leave the rest of humanity out in the cold, and who wants to personally be that first upload, is trying to bring about an event that would qualify as the Singularity... but she is not a Singularitarian.  A posthumanist, but not a Singularitarian.

Perhaps the best analogy is to "liberty" and "libertarian".  Being a "libertarian" means that you advocate liberty not only for yourself, but for a society.  It has nothing to do with altruism.  Objectivists, who view altruism as actively evil, are still libertarians.  You can be a libertarian because you believe the liberty of others enhances your own, strictly selfish, ends; you can be a libertarian because safeguarding the liberty of the whole is necessary to safeguard the liberty of the one; you can be a libertarian from first moral principles, or as a matter of pragmatism, or because your astrologer told you to.  Motivations have nothing to do with the definition, which is simply that a libertarian is someone who advocates liberty for everyone.  Someone who advocates liberty only for himself could as easily be in favor of autocracy, theocracy, monarchy, dictatorship... just about anything, actually.

Similarly, although the Singularity is simply the creation of greater-than-human intelligence, the "Singularity" in "Singularitarian" is the Singularity as seen from the perspective of the vast majority of humanity.  Just as the "liberty" in "libertarian" is the degree of liberty available to the society, the "Singularity" in "Singularitarian" is something that happens to the human race, as well as whatever events come afterward.  It's the event seen from a global perspective, just like the "liberty" in "libertarian" is global.  If you don't advocate global liberty, you aren't a libertarian.  If you don't advocate global Singularity, if you just advocate a personal, private Singularity, then you're not a Singularitarian.  Again, this has nothing to do with morality or motives; it's a matter of definition - I say "advocate" instead of "want" because the reasons for your advocacy have nothing to do with the definition; it's perfectly possible to advocate global liberty, be a libertarian, and care only about your own liberty.

(For those of you who went directly to Singularitarianism without stopping off at transhumanism along the way, and are wondering why the heck I'm going to such lengths not to say anything bad about selfishness, it's because a number of transhumanist ethical philosophies are selfish in a foundational sense.  This is a very formal sort of "selfishness" - for example, gaining selfish pleasure by gratifying your personal impulse towards charity is entirely acceptable (except to loony Objectivists).  In my personal opinion this is part of the general overreaction of technocapitalist philosophies to the great Communist disaster, but that is, of course, only my opinion.)

That's what the Principle of Globalism is about - being willing to share the fate of your fellow humans.  It's not that we demand altruism.  You can be in it for the godhood, but then you have to believe that godhood is available to everyone else as well.  You can believe that the Singularity will exterminate humanity and go on to better things, but then you have to believe that it'll exterminate you too.  You can believe that the first group to develop AI will control the Universe, but then you have to be willing to let a fellow Singularitarian develop it.  Whatever you think the fate of the majority of humanity will be, you have to be willing to share it.  The Singularity may be your personal goal, but it can't be a personal Singularity.

As with Ultratechnology, the probability of this part of the meme being hijacked is high enough, and the consequences of that hijacking undesirable enough (being correctly called Nazis), that I think we should treat Globalism as a necessary part of being a Singularitarian; hence the definitional phrasing.

Descriptive Principles:

These are items which are more than usually dependent on personal opinions about the Universe, opinions which vary independently of accepting the possibility and desirability of Singularity.  Descriptive Principles are "detached"; they could be disproved independently of the Singularitarian meme itself.  However, I do think that these Principles form either an important part of Singularitarianism, or an important part of what binds the Singularitarian community together.

The Principle of Apotheosis
The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds - not just freedom from pain and stress or a sterile round of endless physical pleasures (9), but the prospect of endless growth for every human being - growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we've dreamed of experiencing, becoming everything we've ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever... or perhaps embarking together on some still greater adventure of which we cannot even conceive (10).  That's the Apotheosis.

We accept the possibility that this future may be unattainable; there are many visualizations under which Apotheosis is impossible.  Probably the most common category is where the superintelligences have no particular reason to be fond of humanity - all superintelligences inevitably come to serve certain goals, and we don't have any intrinsic meaning under whatever goals superintelligences serve, or we're not sufficiently optimized - so we get broken up into spare atoms.  Perhaps, in such a case, the superintelligences are right and we are wrong - by hypothesis, if we were enhanced to the point where we understood the issues, we would agree and commit suicide.

There was a point where I was sure that superintelligent meant super-ethical (probably true), and that this ethicality could be interpreted in anthropomorphic ways, i.e. as kindness and love (unknown).  Now, with the invention of Friendly AI, things have gotten a bit more complicated.  Apotheosis is definitely a possibility.  I refuse to hope for an Apotheosis that contravenes the ultimate good, but I can hope that the ultimate good turns out to be an Apotheosis - and if there is no "ultimate good", no truly objective formulation of morality, then Apotheosis is definitely the meaning that I'd currently choose.  So I hope that all of us are on board with the possibility of an Apotheosis, even if it's not necessarily the first priority of every Singularitarian.

The Principle of Apotheosis covers both the transhumanist and altruist reasons to be a Singularitarian.  I hope that, even among the most philosophically selfish of transhumanists, the prospect of upgrading everyone else to godhood sounds like at least as much fun as being a god.  There are varying opinions about how much fun we're having on this planet, but I think we can all agree that we're not having as much fun as we should.

Even after multiple doses of future shock, and all the other fun things that being a Singularitarian has enabled me to do to my personality, I still like to think of myself as being on track to heal this planet - solving, quite literally, all the problems of the world.  That's how I got into this in the first place.  Every day, 150,000 humans die, and most of the survivors live lives of quiet desperation.  We're told not to think about it; we're told that if we acknowledge it our minds will be crushed.  (11).  I, at least, can accept the reality of child abuse, cruelty, death, despair, illiteracy, injustice, old age, pain, poverty, stupidity, terror, torture, tyranny and any other ugliness you care to name, because I'm working to stop it.  All of it.  Permanently.

It's not a promise.  It can never be a promise.  But I wish all the unhappy people of the world could know that, whatever their private torment, there's still hope.  Someone, somewhere, is working to stop it.  I'm working to stop it.  There are a lot of evil things in the world, and powerful forces that produce them - Murphy's Law, blind hate, negative-sum selfishness.  But there are also healers.  There are, not forces, but minds who choose to oppose the ugliness.  So far, maybe, we haven't had the knowledge or the power to win - but we will have that knowledge and that power.  There are greater forces than the ugliness in the world; ultratechnologies that could crush Murphy's Law or human stupidity like an eggshell.  I can't show an abused child evidence that there are powerful forces for good in the world, forces that care - but we care, and we're working to create the power.  And while that's true, there's hope.

There is no evil I have to accept because "there's nothing I can do about it".  There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye.  I live a life free of "the normal background-noise type of guilt that comes from just being alive in Western civilization", to paraphrase Douglas Adams (12).  It's a nice feeling.  All you have to do is try to help save the world.

The Principle of Solidarity
There's a certain amount of controversy and uncertainty surrounding the question of "What happens to humanity after Singularity?"  Actually, this is something of an understatement.  We can't even agree on what the grounds for debate are.  I have now reached the point where, due to the Fermi Paradox (13), I can't think of any outcome for the life cycle of an intelligent species, from an absolute Transcendence to a complete fizzle of the Singularity, which is consistent with the empty skies.  Still, in my visualization, humanity's best chance of navigating to survival lies in creating the Singularity as fast as possible.  (14)

Probably the strongest reason I so believe is that creating a greater-than-whatever intelligence seems like a natural stage in the life cycle of any intelligent species.  If humanity survives, say, the next thousand years, sooner or later someone is going to start fooling around with neurohacking or create an AI or whatever, no matter what safeguards are in place.  In a million years, even evolution would eventually produce superintelligence.  And eventually, in a few hundred billion years, the Universe will - in the absence of intelligent intervention - become unliveable.  Our species just doesn't have the option of continuing on forever with human-equivalent minds.  If it were even possible, some earlier species would have expanded to fill the Universe, including Earth; something mysterious happens, and it happens to everyone, even though it seems likely that some races tried to avoid it with more will, cooperation, and discipline than humanity could possibly muster.

On the other hand, I can think of all kinds of unfortunate accidents that can befall a bright young race, with the leading candidate being nanowar.  Sooner or later, someone will create either a superweapon or a superintelligence.  We cannot avoid the issue.  We can only choose which will come first - and at least intelligence can have a conscience; we can rely on it not being blindly destructive.  That's the survivalist reason to be a Singularitarian.

Even if it were possible to delay the issue until a later generation, what would be the point?  Delaying the issue would not alter the fundamental options facing the human species, and it would condemn many individual humans now living to certain death by old age.  All four of my grandparents are alive, and I'd like them to stay that way right up to Apotheosis, thank you very much.  Besides which, the coercive instruments necessary to create significant delays would tend to negatively affect the eventual outcome - for example, by ensuring that the key technologies were developed in secret, probably by rogue states or terrorists.

Even if we somehow knew for a fact that any superintelligence would exterminate humanity - including me and all other Singularitarians, of course - this "Shiva-Singularity" might still be a goal that all of humanity could share.  Dying in the creation of something better strikes me as significantly less pointless than dying of old age or nanowar.  As attractive possibilities go, this one is significantly less attractive than Apotheosis; but sometimes, choosing the best available action doesn't mean that any of the available actions are good.  Think about the possibility that there might be a better world but that we are absolutely barred from it, no matter how unpleasant this possibility is - because once you've confronted this possibility and thought about it openly, it loses a lot of its "scare power", and you become a more confident futurist as a result.

The Principle of Intelligence
The machinery of Transcendence - Jupiter brains, spacetime engineering, Powers, intelligences with quintillions or knuthillions of times human processing power - makes up an important part of our appreciation of the Singularity.  But for me, the final heart of the Singularity is captured in Vernor Vinge's old explanation, the paragraph that was the first time I ever heard the word "Singularity":
"Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss.  It's a problem we face every time we consider the creation of intelligences greater than our own.  When this happens, human history will have reached a kind of singularity - a place where extrapolation breaks down and new models must be applied - and the world will pass beyond our understanding."
            -- Vernor Vinge, True Names and Other Dangers, p. 47
That is the formulation of Singularity known as the Event Horizon.  Look back through Earth's history.  Could a dog have understood an ape?  An ape a Neanderthal?  A Neanderthal a Cro-Magnon?  Could a hunter-gatherer have predicted Socrates?  Could Socrates have predicted the 1800s?  The 1800s the 1900s?  Could any of them have predicted the Singularity as we see it, Jupiter brains and all?  And what makes us think we've got it right?

For me, intelligence is the essence of morality, because intelligence is the essence of decision, because intelligence is the essence of thought.  There are a dozen ways I could make this rather vague statement more concrete, but all of them miss the point.  Simply by arguing about the subject, this subject that a Neanderthal would never have understood, we admit the sovereignty of intelligence - in practice, whatever the ultimate philosophical grounds may be.  Every time we employ an argument against the sovereignty of intelligence, we admit that our argument would have been wrong at every previous point in history where that argument would not have been invented.

If humanity is simply too young as a species to understand the ultimate grounds on which choices should be made, then there's nothing left but the appreciation of intelligence.  I am tempted to define this formally, in terms of representative bindings and so on, but that misses the point.  Simply by engaging in a complex philosophical argument, even this very sentence, I'm admitting that virtually everyone I could look back on, including my own past selves, would have gotten it wrong.  It's why the emphasis in "Friendly AI" is so strongly on equalling or exceeding human philosophical capabilities, not just specifying some particular brand of altruism.

I believe that there are at least as many fundamental arguments left to discover as I've discovered already.  I know that without knowing what I've learned in the last five years, including this sentence, I would get everything wrong.  I believe that there's at least that much left to learn.  I believe that there is at least one argument as deep as the sovereignty of intelligence which I do not know. That is the ultimate reason I'm a Singularitarian.

A Fire Upon the Deep flashes future shock, but True Names carries the deep magic.

Finally, this principle applies, not only to Powers, or to transhumans, but even on the human scale.  The appreciation of transhuman intelligence is linked to the appreciation of human intelligence - I've yet to see one without the other.  And the appreciation of human intelligence, the sovereignty of intelligence in thought, is one of the primary safeguards that prevents Singularitarianism from descending into "the banality of fanaticism".

I may be a fanatic, but while I can appreciate intelligence, I won't be a stupid fanatic.  I won't make any of the mistakes that are commonly attributed to stupid fanatics; I won't possess the stereotypical qualities of fanaticism.  I won't attempt to excuse cloudy thinking by saying that it serves the cause; stupidity is stupidity.  I will preserve my sense of humor, and my ability to laugh at myself, because losing the ability to question yourself and your cause leads to stupidity.  As long as "That's stupid" takes logical priority over ideology, it's harder to be stupid.  (I won't say that it's impossible to be stupid, because that would be stupid.  I will not exalt adherence to any ideology - Singularitarianism, logic, intelligence - above intelligence itself.)

Of course, it's not enough just to make a commitment to intelligence.  You have to back it up with the ability to perceive intelligence, and to distinguish it from stupidity.  That high art is a topic for some other page.

But making the commitment is also important.


Independence means regarding the Singularity as a personal goal.  The desire to create the Singularity is not dependent on the existence, assistance, permission, or encouragement of other Singularitarians.  If every other Singularitarian on the planet died in a tragic trucking accident, the last remaining Singularitarian would continue her personal efforts to make the Singularity happen.

The intellectual heritage of Singularitarianism comes from transhumanism and Extropianism, both of which have a strong streak of individualism and quite explicit antiauthoritarianism.  Historically speaking, most human causes do tend to organize themselves around explicit sources of authority.  It should go without saying that neither I, nor Vernor Vinge, nor the Singularity Institute, nor any other human institution, should be believed to have any "authority" over other Singularitarians - except that voluntarily granted by other Singularitarians, of course.  That much is implicit in our transhumanist heritage.

But it's possible to go far beyond that, looking through the eyes of our dreams of superintelligence.  What is authority?  What is an organization?  Can you taste it, smell it, hold it in your hand?  An organization of a thousand people has no existence, except as a common delusion distributed across a thousand minds.  And while that delusion - less pejoratively, that common set of assumptions, adaptations, and special-purpose social code - may certainly be useful, it can't possibly have any real moral force.

Sometimes, the dream of superintelligence can override the "dream of humanity" - "life as we know it", "the way things are".  After all, the political instincts are tuned to maximize fitness in the hunter-gatherer environment, not to accomplish their actual stated purpose in the modern environment.  (15).  There's a cognitive distinction between "leaders" and "followers" that I don't think we should allow into our minds.

There's a whole set of cognitive adaptations tuned to coming together in a common cause, and with luck Singularitarians can choose to ignore the hindering ones, banding together in the same way AIs would - because agents who share your goals are likely to be useful to those goals.  The Principle of Independence states that Singularitarians are bound together by shared ulterior motives.

It's a subtle distinction, and it shows in many ways.  Let's say that we have three Singularitarians, Alice, Bob, and Carol, of whom the Principle of Independence holds true.  Alice has money, Bob has a plan, Carol has the skill.  Alice gives Bob a million dollars to implement the plan, and Bob hires Carol.  Is Alice Bob's patron?  Should Bob be grateful to Alice?  Is Alice "helping" Bob's plan?  Does Bob have the right to demand gratitude and obedience from Carol?

No.  These are all anthropomorphisms.  Alice is not helping Bob, or "helping the Singularitarian cause"; she's forwarding her own, personal goal of bringing about the Singularity.  Her actions can be described as "using" Bob as much as "helping" Bob, and she isn't higher on the pecking order, because the pecking order is just the human way of doing things.  And likewise, if Bob turns around and hires the Singularitarian Carol to write an AI, Carol isn't an "employee"; she's using Bob so she can spend more time writing AIs, and thus forward her own, personal goal of bringing about the Singularity.  Now that's egalitarian - though it remains to be seen how well it'll work in practice.

I should warn you that the Principle of Independence may reflect my own personal philosophy to a greater extent than the other Principles.  I think Independence will prove to be a useful principle, but it's... controversial?... a potential rather than an actuality.


I strongly favor AI, but believe that the development of nanotechnology will inevitably lead to a planetwide and very short war.  I've spoken of the necessity for accelerating AI research to beat the "deadline" of nanotechnology development.  (16).  Why haven't I simplified my life by moving to delay the development of nanotechnology?  Why, in fact, have I spoken out against the concept of banning or regulating nanotechnology?

Singularitarianism grew out of transhumanist anarchocapitalism, which grew out of libertarian science fiction, which grew out of science and engineering, and we all tend to feel very strongly about attempts to suppress the development of technology.  If there's any group universally despised by Shock Levels One through Four, it's the Luddites - people with Shock Level Negative Two.  If these people and their interminable regulations, and their media panic-mongering, and their illegal sabotage, have ever accomplished anything, we don't know about it.  (17).  If these people had their way, automobiles would have been suppressed to protect jobs in the saddle industry, penicillin would still be in FDA trials, heart transplants would be unnatural and Frankensteinian, Gutenberg would have been burned at the stake... we'd still be sitting around in caves wondering if rocks were edible (18).  But an intellectual heritage isn't an argument - so, why Nonsuppression?

The rationale behind Nonsuppression is partially pragmatic, partially Prisoner's Dilemma (PD), partially ethics, and partially a matter of not dropping a match in a fireworks factory.

The PD aspect is perhaps easiest to understand; for any person who loves technology A and hates technology B, there's probably a person who loves technology B and hates technology A.  (For "love" read "believes is desirable", and for "hate" read "believes will destroy the world".)  I have seen arguments that nanotechnology is the key to the future, while a transhuman AI would inevitably exterminate humanity.  There's simply more absolute advantage to be gained from unity than from any relative advantage within the transhumanist community.  If we start a catfight, we'll be so busy trying to sabotage each other that the end result will simply be to place matters in the hands of less futuristic organizations, people who lack the strategic foresight to become involved in pointless battles.
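The structure of that dilemma can be sketched as a toy payoff table.  The numbers below are hypothetical, chosen only for their ordering: each camp is tempted to suppress the other's technology unilaterally, yet mutual suppression leaves both worse off than mutual tolerance.

```python
# Illustrative Prisoner's Dilemma between two technology camps
# (say, AI advocates and nanotechnology advocates).  The payoff
# values are made up; only their ranking matters.

# Strategies: "tolerate" (cooperate) or "suppress" (defect).
# payoffs[(a, b)] = (payoff to camp A, payoff to camp B)
payoffs = {
    ("tolerate", "tolerate"): (3, 3),  # both technologies advance
    ("tolerate", "suppress"): (0, 5),  # A's technology gets crippled
    ("suppress", "tolerate"): (5, 0),  # B's technology gets crippled
    ("suppress", "suppress"): (1, 1),  # regulators crush both camps
}

def best_response(opponent_move):
    """Return the move that maximizes camp A's own payoff,
    holding the opponent's move fixed."""
    return max(["tolerate", "suppress"],
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

# Each camp's individually best response is to suppress,
# no matter what the other camp does...
assert best_response("tolerate") == "suppress"
assert best_response("suppress") == "suppress"

# ...yet mutual suppression pays worse than mutual tolerance.
assert payoffs[("suppress", "suppress")] < payoffs[("tolerate", "tolerate")]
```

The Principle of Nonsuppression amounts to a commitment to the cooperate strategy: refusing to defect first, so that the transhumanist community captures the mutual-tolerance payoff instead of the mutual-sabotage one.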

Pragmatically, of course, trying to suppress the development of a dangerous technology only inflicts more damage.  Slowing or banning nanotechnology in the western industrial democracies would simply result in its development by rogue states or terrorist groups, as discussed in Nanotechnology and the World System - just as the refusal to develop nuclear weapons in World War II could have handed victory to Hitler, or, later, Stalin.  (Note that I am fully in favor of suppressing the proliferation of dangerous technologies, as with nuclear weapons; but to suppress their development is suicide, as with nuclear weapons.)  When ultratechnology is outlawed, only outlaws will have ultratechnology.

It's a poor blade that won't cut both ways; this is the principle behind "Don't drop a match in a fireworks factory."  Technophobia spreads.  Convincing a Congressbeing to regulate nanotechnology would lay the groundwork for regulation of Artificial Intelligence, neurohacking, neurocomputer interfaces, and all the other ultratechnologies; in fact, any simple investigation of the subject, in book form or Internet pages, would lead to that conclusion.  Push for the Nanotechnology Safety Act and you'll get the Comprehensive Ultratechnology Regulation Bill.  Future shock - or, more precisely, future panic - is the single greatest danger.  To all of us.  Suppression can only feed it.

Finally, of course, there's the ethical principle involved.  Ethics has its roots in uncertainty.  "The end does not justify the means" because so often in human life the end fails to materialize, or proves worthless to its purported beneficiaries.  Intelligent entities on the human level, with our evolved emotions, cannot be trusted - cannot afford to trust themselves - to navigate the future without certain safeguards.  I cannot presume to understand what's really going on, or to know better than the transhumanists developing "dangerous" technologies.  Trying to navigate the future by suppressing technologies is like trying to steer a car by shooting out the tires.  How do I know that nanotechnology research isn't necessary to creating the hardware for AI?

The Principle of Nonsuppression is part of what makes the Singularitarian community tick; we can't get anywhere if we're all trying to sabotage each other's efforts.


Of course, any project or organization that deliberately moves against you, or your technology, is fair game for direct opposition.  As with anarchocapitalism, it's not really "nonsuppression", it's the non-initiation of suppression.  Also as with anarchocapitalism, "suppression", by definition, requires coercion - or otherwise attempting to crush the competition rather than convert it.  (20).