The Plan to Singularity

Version 1.0.10.

© 1999 and © 2000 by Eliezer S. Yudkowsky.
All rights reserved.


"The Plan to Singularity" is a concrete visualization of the technologies, efforts, resources, and actions required to reach the Singularity.  Its purpose is to assist in the navigation of the possible futures, to solidify our mental speculations into positive goals, to explain how the Singularity can be reached, and to propose the creation of an institution for doing so.

NOTE: Since the creation of the Singularity Institute in July 2000, much of this document has become obsolete.  In particular, the whole concept of starting an AI industry may turn out to be unnecessary; the Singularity Institute does not currently plan to develop via an open-source method; the entire technological timeline has been compressed into three stages of a seed AI designed by a private research team; and, of course, most of the sections dealing with how to establish a Singularity Institute are obsolete.

These strategic changes are largely due to improvements in the understanding of seed AI - see Coding a Transhuman AI 2 - which may make it possible to develop a seed AI using fewer resources than previously thought.  If the problem of seed AI turns out to be less tractable than expected, the strategies described here would still be a valid fallback plan.

This document comes in two versions, monolithic and polylithic.  You are reading the monolithic version.  This version of the document is intended for continuous reading, or downloading to local disks.  The polylithic version of the document is intended for incremental reading and light browsing.

This document was created by html2html, a Python script written by Eliezer S. Yudkowsky.
Last modified on Wed Apr 25 22:54:25 2001.

May you have an enjoyable, intriguing, and Singularity-promoting read.
Eliezer S. Yudkowsky, Navigator.


The Plan to Singularity ("PtS" for short) is an attempt to describe the technologies and efforts needed to move from the current (2000) state of the world to the Singularity; that is, the technological creation of a smarter-than-human intelligence.  The method assumed by this document is a seed AI, or self-improving Artificial Intelligence, which will successfully enhance itself to the level where it can decide what to do next.

PtS is an interventionist timeline; that is, I am not projecting the course of the future, but describing how to change it.  I believe the target date for the completion of the project should be set at 2010, with 2005 being preferable; again, this is not the most likely date, but is the probable deadline for beating other, more destructive technologies into play.  (It is equally possible that progress in AI and nanotech will run at a more relaxed rate, rather than developing in "Internet time".  We can't count on finishing by 2005.  We also can't count on delaying until 2020.)

PtS is not an introductory-level document.  I am assuming you already know what a Singularity is, why a Singularity is desirable, what an Artificial Intelligence is, and so on.  For more information about the Singularity, see An Introduction to the Singularity.

As with any other document I publish, I guarantee no perfection and claim no authority; I only believe that publishing the document will prove better than not publishing it.  As these words are written, PtS is the very first attempt to sketch a complete path to Singularity; it has no competition.  If you're reading these words in a time substantially after 2000, I hope you will appreciate the historical context.

The future sketched in PtS is not intended as speculation.  I intend to spend my life making it real.  If you believe that the Singularity is a worthwhile goal, and you're interested in making it happen, consider joining the Singularitarian mailing list.

NOTE: Since the creation of the Singularity Institute in July 2000, much of this document has become obsolete.  See note above.

Guide to Contents

1: Vision is a high-level introduction to the PtS plan; it describes the top-level goals and the reasons for the top-level goals.  Whether you plan to browse or read straight through, start here.

2: Technology sketches the sequence of technologies leading up to the Singularity.  (Obviously, this section is not intended to contain a complete technical whitepaper for the next ten years.  The technologies are introduced, rather than explained.  Nonetheless, where the technical architecture has consequences for the PtS strategy, I get technical.)  2.1: Component technologies introduces the Aicore architecture for artificial intelligence and the Flare programming language.  2.2: Technological timeline describes the specific path taken to Singularity, although the purpose of such early technologies as Flare may not become clear until 2.2.9: Self-optimizing compiler.

3: Strategy describes how the PtS goals will be accomplished.  In each category, a "timeline" section describes what will be done in the short-term, mid-term, and long-term.  Other sections discuss miscellaneous questions of strategy, or how to deal with problems that are likely to crop up.  For maximum reading ease, read in the given order.  3.1: Development strategy discusses the task of creating the necessary technologies.  3.2: The Singularity Institute discusses the administrative backbone required.  3.3: Memetics strategy discusses the tasks of finding help, not creating opposition, and the art of writing about the Singularity.  3.4: Research strategy discusses how to handle any further research required.  3.5: Miscellaneous contains issues that affect general strategy (3.5.1: Building a solid operation) or details that don't fit under any specific heading (3.5.3: If nanotech comes first).

4: Initiation describes what has to be done to get started, the people needed to do it, and how much it's all going to cost.

Appendix A: Navigation describes how the content of PtS was determined by the structure of the spectrum of possible futures.  For example, this section includes the reason why developing AI is easier than developing intelligence enhancement, why 2010 is the target date, and why 2005 would be better.

Those of you who are just in it for the future shock will probably enjoy 3.5.3: If nanotech comes first and 2.2.9: Self-optimizing compiler through 2.2.14: Transcendence.


1: Vision

The Singularity, by the old definition (1), is the creation of greater-than-human intelligence.  The Singularity, as the goal pursued by Singularitarians, is the existence of at least one transhuman with enough power to prevent catastrophe and take the next step into the future.  Failure, as a negative goal, is any event that sterilizes Earth and destroys all intelligent life in the Solar System, thus permanently preventing the Singularity.  I think most of us want to share the Singularity with as much of humanity as possible, so widespread war and billions of deaths would probably count as at least a partial failure.

The best candidate for creating the Singularity is Artificial Intelligence; the technology most likely to wipe out the human race is nanotechnology.  (See Appendix A: Navigation.)  To win, we need to create an AI.  To avoid losing, we need to outrace nanotechnology.

We must create a "seed AI", initially dumber than human, but capable of redesigning itself to increase intelligence, and re-redesigning with that increased intelligence, until transhuman ability is reached.  (See the page Coding a Transhuman AI.)  We must ensure that the computing hardware exists to run that AI, and that we have access to that hardware.  We must do all this before nanotechnological warfare becomes capable of completely destroying humanity, and ideally before nanotechnological warfare has wiped out a substantial fraction of the human race (2).  To have a good chance of outracing nanotech, we should plan to develop a seed AI by 2010.  (3).

We need to develop an AI fast, with the same kind of hypergrowth seen in the creation of the Web - what's known as "Internet time".  A private effort probably won't be enough to get that kind of speed; it'll take an industry.  As with the creation of the Web, only the core technologies should be developed by private teams, or in this case, Singularitarian efforts.  The flesh, the content, should be distributed across the shoulders of a planet.  Even the core architecture should be an open-source effort.

The primary thread of PtS deals with creating an open-source AI architecture, an AI industry, a seed AI, and accessible hardware.  Other efforts are required to support these goals and create an environment favorable to success.  We should encourage a community spirit in Silicon Valley that actively favors a Singularity, and encourage an atmosphere in the wider population (and politics) which either favors a Singularity or will not take action against it.  A nonprofit organization - the Singularity Institute - is needed to fund the initial prototype, provide Web and legal infrastructure for the open-source effort, provide funding for the final seed-AI development project, and provide funding and an administrative nucleus for other efforts.

1.1: Open-sourcing an AI architecture

In my visualization of AI development, a tremendous amount of original, brilliant coding and architectural design is required to create a seed AI.  It's not a matter of a few simple reductive principles and a lot of hardware, or even a few simple principles and a lot of knowledge, as previous paradigms in AI have usually claimed.  This is understandable, given that the paradigms of modern AI were born long before the Internet era, in the 50s, 60s, 70s, and occasionally 80s.  The AIers who built the field planned on a scale of small projects, and tried to implement those projects on computers that modern pocket calculators would sneer at, so it isn't surprising that they adopted theories of cognition that promised success with those limited resources.

Even if it turns out that artificially designing a self-improving mind is easier than biologically evolving a non-reflexive one, coding a mind is likely to be a huge project.  An effort that attempted to "go it alone" would spend all its resources on writing, debugging, and testing a few simple algorithms, and developing rudimentary features of the tools to write the tools.  A true mind is simply too complex to be developed by any one project with a realistic level of funding.

The PtS plan seeks to farm out the effort of AI development.  One of the primary methods is developing the core AI architecture - Aicore - through an open-source effort, as with Linux, Apache, Python, Perl, and many other names of honor in the computing industry.

DEFN: Open-source:  Software in which the source code is free, as is the software itself.  Open-source software is partially or wholly developed by a distributed group of users/testers/developers working with the open source code and submitting changes to an open forum.  See the Open Source Definition for the formal definition, The Cathedral and the Bazaar for an early introduction, and The Magic Cauldron for a later and more rigorous analysis of the economics.

By open-sourcing the core architecture, we will reduce the amount of Singularitarian resources required to build, test, and debug (especially test and debug) the core tools.  Building an AI is likely to involve a number of fundamental programming design innovations.  (See 2: Technology.)  As a closed effort, each brilliant new idea represents a brilliant new drain on resources.  As an open-source project, each brilliant new idea, if the idea is brilliant enough, will attract more programmers to the project.  Open source acts as a force multiplier, particularly where bright ideas are concerned.

The Cathedral and the Bazaar also notes the usefulness of open-source for exploiting the design space; that is, open-source users are capable of coming up with bright ideas, useful new features, and even more elegant design architectures.  Open-source users contribute intelligence as well as labor, and to build an AI, we'll need all the intelligence we can get.

Of course, open source also requires a pool of users.  It might be possible to attract a sufficient programming population through the sheer coolness factor of open-source AI, not to mention the high altruism of the Singularity itself.  Every true hacker (4) wants to code an AI and save the world; it's part of the job description.  But I don't intend to rely on that, which brings us to 1.2: Creating an AI industry.

1.2: Creating an AI industry

The open-source AI architecture is only half of the equation; we may compare the architecture to the HTML and TCP/IP protocols underlying the World Wide Web.  The content is another question, and that question is:  "What is the AI doing?"

Eurisko, designed by Douglas Lenat, is the best existing example of a seed AI, or, for that matter, of any AI.  (If you haven't heard of Eurisko, please see footnote (5).)  If "promising" performance is defined as "doing at least one thing a human hasn't done", this being the characteristic that creates the potential for profitability, then Eurisko exhibited promising performance in areas from game-playing to VLSI circuit design.

I believe this level of intelligence and generality can be exceeded, or at least matched, by the AI design paradigms assumed in PtS.  (See 2.1.1: The "Aicore" line and Coding a Transhuman AI.)  Even matching Eurisko should be enough for the first stages.  (If Eurisko's source code were available, there'd be a thousand programmers playing with it right now.)  With luck, the existence of an Aicore architecture will be enough to create the potential for profitable performance in hundreds or thousands of domains - some significant fraction of the domains encountered by modern-day IT (7).

However, trying to replace human experts or match human creativity - historically the great failed venture-capitalist-attracting promise, the shiny sparkly minefield of AI - is not the task I would choose for creating an AI industry.  I believe in mundane AI.  (8).  I think that the New Promise should be significantly decreasing the cost of IT development; AI as a quiet, behind-the-scenes programming tool.  First as an intelligent debugger to provide the "codic cortex" humans lack; later as a part of the core architecture of the program, so that the language itself has a certain amount of common sense.  (See 2.2: Technological timeline.)

The New Promise of AI
The use of artificial intelligence can reduce the cost of software development, speed the development process, improve reliability, and increase the usability of the software.  AI can provide a framework for programming, assist with debugging, and simplify program maintenance.

I believe that source code is the natural domain of AI, the ancestral savannah of computer-based intelligence, and that an AI with domain-specific intelligence targeted on programs should be an enormous aid to the human programmer.  After all, humans don't have a codic cortex, so we've always been in the position of a human without a visual cortex drawing a picture pixel by pixel.  A human programmer is a blind painter (9).

IT presently accounts for half of all capital expenditures in the United States.  A significant reduction in development costs should be more than enough profit-motive to fuel the widespread adoption of our AI.  It should be enough profit-motive to fuel the creation of an industry centered on our AI, in the same way that Linux has changed from a free operating system to an industry centered on a free operating system.

Once an AI industry exists, the development effort should have a wider pool of open-source volunteers and more contributed improvements.  There'll also be the profit-motive to develop better AI - better applications for our architecture - with many private organizations trying new ideas.  And finally, if the AI has the capacity to improve itself, learn, or develop heuristics, there'll be much more computing power devoted to generating shareable improvements.  In that helpful environment, doing the core research to move along the timeline should be more like flying and less like wading through molasses.  We'll be able to develop the potential for a group of features, release the potential, and get back the features.

This vision has several technological consequences, taken up in 2: Technology.

1.3: Spreading the right memes

The larger our support base, both of active Singularitarians and of people who are kindly inclined, the better our chances of getting to the Singularity.  As yet, there are no groups directly opposing our immediate purposes; we should try, as much as possible, to keep it that way.  These are the two goals that need to be served by memetic efforts.

The memetic task is further complicated by the number of audiences being targeted.  We need to target Internet tycoons and programmers (particularly open-source programmers) with the full Singularitarian meme.  We should try to target the rest of the technophilic populace, from SF fans to the readership of Wired, with Singularity-ownership memes.  We need to worry about the reactions of CEOs, Greenpeace, politicians, TV reporters, teens, journalists, televangelists, honest religious fundamentalists, the middle class, truck drivers who've lost their jobs, "disadvantaged youth" and the "urban poor".  And that's just in America.

The circumstances needed for the easiest, safest path to Singularity can be compactly stated:  We need rich Singularitarians in Silicon Valley, open-source programmers who believe in seed AI, CEOs who don't object to using AI and are even attracted by the sparkle, supercomputing vendors who either believe or turn a blind eye when the time comes to run the Last Program, no interference from politicians, no fad television programs about the Singularity, and citadels of technophobia worrying about something else.

Is the safe path possible?  Probably not.  It relies on nobody outside the technophilic minority hearing about the Singularity, believing it if they did, or spreading the meme if they didn't.  That's a pretty fragile situation.  The Singularity is an awesomely powerful meme, and I have the feeling that if we don't spread it, someone else will.  The ethical question involved in leaving "the average guy" out of humanity's victory is thus somewhat moot.  While targeting only technophilic audiences may remain the wisest use of resources in the short term, everyone else (13) will find out eventually.

The "But someone will" rule also simplifies the oft-asked question of "Shouldn't we tone down the Singularity meme, for fear of panicking someone?"  In introductory pages and print material, maybe.  But there's no point in toning down the advanced Websites, even if technophobes might run across them.  Given the kind of people who are likely to oppose us, we'll be accused of plotting to bring about the end of humanity regardless of whether or not we admit to it.  (14).

Only in the early stages will we be able to choose the material presented and the target audience; in later stages, if we're not fortunate enough to manage a rapid, quiet Transcendence, we'll be dealing with a rapidly evolving memetic environment containing every kind of idea about the Singularity.  We can take for granted that the negative memes we're afraid of will come into existence and propagate.  We have to either get there first with positive memes, or develop counter-memes that get there first, or create positive memes that can out-propagate the negative memes, or corrupt the negative memes so that they don't result in active opposition.

1.4: Starting a Singularity Institute

All else being equal, the tasks outlined in PtS will go faster if there are people working on them full-time (15).  Some tasks, like running the Last Program (16) on rented supercomputing hardware, are likely to require large-scale funding (17).  Finally, there should be an obvious target for people who would like to help out the Singularity via cash donations, preferably in a tax-deductible fashion.  A nonprofit (18) institution devoted to providing Singularity infrastructure, a "Singularity Institute", would seem to be required.

The right kind of nonprofit (19) would be eligible to apply for grants from private foundations, which, especially during the initial stages, may be our best bet for funding.  It would also be possible for individuals to make tax-deductible donations.  Later on, my hope is that the Singularity will prove a popular cause among Silicon Valley millionaires; in the long term, this will probably be the major source of funding.  I'm not particularly counting on the middle class for broad-based support, since even institutions that actively solicit small donations typically get 80% of their funding from a few large donors.

In the beginning, I expect the Singularity Institute to employ one or two full-time developers; once we have a major sponsor, or we successfully apply for grants, this will go up to a couple of dozen people including some Singularity PR people and a few researchers.  This may not be enough to reach Singularity in 2005 or 2010, but given unlimited time, we can probably get all the way to the Singularity with no higher level of funding.  Since our time probably is limited, we should try to build a stronger operation.  If we can get Silicon Valley to adopt the Singularitarian ideal, the Singularity Institute might have enough funding to sponsor massive PR efforts, run dozens of research projects, and start subsidiary institutions.  Anything beyond that is probably superfluous, although I sincerely doubt that we'll ever run out of uses for money.

For more on the Singularity Institute, see 3.2: The Singularity Institute.  For a description of the initial people required, see 4.2: Institute initiation.  For guesses at the amount of funding required, see 4.1: Development initiation and 3.2.1: Institute timeline.

1.5: Dealing with opposition

A substantial fraction of the population is likely to react badly to the prospect of Singularity - either because it contradicts deeply held moral principles, or because they learned their reflex reactions from watching Star Trek.  We should emotionally accept the possibility of government interference, and be prepared to move against attempts to regulate the development of AI, or evade those regulations if they are successful.  (We should also oppose attempts to turn the public against the Singularity, even if no government regulation is immediately proposed; the general battle over how the Singularity is perceived comes under 1.3: Spreading the right memes.)

The probability of public or governmental opposition is a primary reason why running the Last Program distributed over the Internet, formerly a major part of the PtS vision (20), has been abandoned.  With that tempting target for regulation (or public protest) gone, and the necessary public exposure reduced, there's a much smaller chance of crippling legislation being passed.

Unless the Singularity becomes a major public issue, a complete and enforceable ban on AI research is not likely in the United States.  Technophobia is more likely to find outlets in bans on government funding, regulations requiring public disclosure, and so on.

There are a few psychologically plausible pieces of legislation - which I see no need to be more specific about - that would impose enough inconvenience to force the open-source project to move to Australia or, if necessary, China.  I'm not saying we should be ready to move on a minute's notice, but it's something to bear in mind.  (For example, we should back up all our materials in several offshore locations, so that nobody can prevent us from taking our information with us when we move.)  This state of readiness should also help prevent a ban on AI; if it's absolutely clear that banning AI would simply move the project overseas, to the detriment of American (or Australian, or English) industry, there's some slight chance that the legislators involved will see reason.  (But not much, so we need to be ready to actually move.)

There are also technological precautions we can take against a complete ban on AI.  We should be ready to switch the open-source administrative structure from public and centralized to anonymous and distributed, with source code submitted via PGP and participant identities protected.  In short, we may need to go underground, and that's something to keep in mind while organizing the project and writing the code.  I'm not suggesting that we be ready to disappear on a moment's notice, since that would take a lot of work that might turn out to be unnecessary, but I am suggesting that we bear it in mind.

I would also suggest encouraging encryption (whether governments like it or not), particularly ubiquitous protocols like Secure IP (21), just in case it turns out we do need distributed computing.

"Dealing with opposition" may also include dealing with groups that resort to extra-legal means to oppose us.  Aside from locating our working sites under the protection of police forces principled enough not to "look the other way", I don't see any particular aspect of this task that should be discussed in advance.  (It is, however, something to bear in mind.)

1.6: Protecting the IT industry

The computer industry is our base.  It's not enough to start a gold rush; we have to ensure the miners are healthy.  In particular, we have to ensure that Moore's Law (23) keeps on trucking, and that the software industry remains viable.

A little-known adjunct of Moore's Law is that the capital required to build a chip fabrication plant also keeps doubling.  We have to ensure that hardware demand remains strong.  (24).  Bill Gates is famous for telling Intel that no matter how much power they supplied, he would "develop some really exciting software that will bring the machine to its knees".  Of course, that was some time, and a factor-of-1000 performance improvement, ago.  And now it looks like Bill Gates may finally be defaulting on that promise (25) - the slow stuff isn't exciting and the exciting stuff isn't slow.  People are starting to wonder whether they really need the fastest new machine.

Since PtS abandoned the distributed-computing plan, the strength of the individual machines on the Internet is no longer all-important - but said machines still need to be able to support the local infrahuman AIs we develop.  Furthermore, modern supercomputers increasingly consist of thousands of Pentiums wired together, meaning that the cost and magnitude of supercomputing depend on the cheapness and speed of the individual processors.  Above all, we need to encourage the growth of "ultracomputing", software that uses massive amounts of computing power (and provides equally massive benefits), to spur others to build and rent out massively supercomputing hardware (26).  But we also need to ensure that the desktop computer keeps getting brainier, or hardware companies won't earn the money to build the factories to make the chips that go into the supercomputers.

The software industry isn't in trouble now, but there may be trouble brewing; to wit, CEOs and even CIOs starting to wonder whether investing trillions of dollars in software development - half of all capital investment in the US is going into information technology - is really paying dividends.  Large projects in particular are starting to run into Slow Zone problems (27), the tendency for anything above a certain level of complexity to bog down.  Of course, many of these people are still using COBOL, and there's not much you can do to help a company that clueless, but some projects use C++ or Java (or even Python) and still run into trouble.

I am not suggesting that we launch a special-purpose effort to Save the Software Industry.  That doesn't need to be done by Singularitarians (28).  Rather, I think we should try to kill three birds with one stone.  The primary immediate goal of AI (29) should be to reduce development, maintenance, and debugging time for mainstream, internally developed IT.  First bird - this is AI targeted on source code, meaning that it's a step towards self-improving AI.  Second bird - providing the profit motive for the computing industry to adopt and develop AI.  Third bird - heading off economic trouble in the software industry.

2: Technology

This section explains the sequence of technologies leading up to the Singularity.  It's not intended as a detailed technical whitepaper.  It's not even intended as a high-level guide to design principles, or anything else aimed at the eventual implementors.  Rather, this section is intended to convey some of the plan-relevant characteristics of each technological stage.  (30).

The technologies described here are presented in chronological order, which happens to be the reverse order of their invention.  Coding a Transhuman AI (which describes how to build a seed AI, the final stage of the timeline) was published in 1998, long before I'd thought of turning a bag of programming tricks into the Flare language.  My personal notes preserve for posterity the exact moment in which I realized there was a direct path from Flare to a self-optimizing compiler, and from a self-optimizing compiler to a Singularity.  My brain preserves the memory of the triumph I felt.  That was when I decided to write PtS.

"So I think I may be able to map out the full Path to the Singularity, here."
           -- Written in the predawn hours of April 28th, 1999.
The technological timeline was created by starting with seed AI and working backwards.  The upshot is that, by my standards, we don't get to the exciting part until 2.2.9: Self-optimizing compiler.  I hope you'll bear with me until then.  (Alternatively, you can read the sections in reverse, starting from 2.2.14: Transcendence and working backwards, but PtS wasn't designed to be read that way.)

2.1: Component technologies

2.1.1: The "Aicore" line

Creating a self-enhancing AI with the potential to get all the way to Power level involves a number of complexities not needed to just hack something up.  Non-seed AI is considerably easier than seed AI.  You can design systems with no way to automatically integrate changes to the architecture; in fact, it's even possible to design systems with clear-cut distinctions between content and architecture.  The system can contain premanufactured knowledge that it has no way of learning, and subsystems that it can't understand or modify (31).  In short, there are a lot of shortcuts that we can take initially.  As time goes on, more development resources will become available, and computing power will become cheaper, enabling us to stop taking shortcuts.

The "Aicore" timeline, or at least the initial stage (a.k.a. "Chrystalyn"), describes a "crystalline" AI (32).  If Elisson (33) is a mind, then we might consider Chrystalyn, a.k.a. Aicore One, to be a picture of that mind.  As we move along the Aicore line, the flat picture becomes a sculpture, and the sculpture becomes a mind.

Initially, the Aicore I visualize will provide a basic framework for programs that can use artificial intelligence:  A programmatic architecture, an API (34), a set of architectural domdules, and whatever library domdules are standard.  "What's a domdule," you say?  The actual explanation of what domain modules are, and why domdules form a fundamental part of AI, is in Coding a Transhuman AI.  I shall nonetheless essay a one-minute explanation.

Domdules and RNUI in 60 seconds:
A "domdule", a module targeted on some domain, is what enables the AI to Represent, Notice, Understand, and Invent cognitive structures in that domain.
  • Represent means having data structures that can mirror the low-level elements and structures being modeled.
  • Notice means the ability to detect simple facts about the domain - in programmatic terms, a set of codelets that annotate the data structure with simple facts about relations, simple bits of causal links, obvious similarities, temporal progressions, small predictions, et cetera.  The converse of notice-level simple perception is simple manipulation, the availability of choices and actions that manipulate the cognitive representations in direct ways.
  • Understand means integrating the simple facts with the goal system and other architectural domdules to form designs with internal purposes, to represent larger designs, analogies, and facts about usefulness.
  • Invent means that the AI can start with a goal and create a design that fulfills it.
You can think of a domdule targeted on source code as being a "codic cortex", by analogy to the human "visual cortex" (35).  If you think of a programmer as a blind human painting a picture pixel by pixel, you'll understand the level of improvement I'm hoping to bring to programming.
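The Represent and Notice levels above can be sketched in Python. This is only an illustrative toy; every class and function name here is my own invention, not part of any published Aicore design.

```python
# Hypothetical sketch of a domdule's Represent and Notice levels.
# All names here are invented for illustration; they are not part of
# any actual Aicore design.

class Representation:
    """Represent: a data structure mirroring the domain's elements."""
    def __init__(self, elements):
        self.elements = elements          # low-level domain elements
        self.annotations = []             # facts added by notice codelets

class Domdule:
    """A domain module: notice-level codelets targeted on a domain."""
    def __init__(self, name):
        self.name = name
        self.codelets = []                # simple-fact detectors

    def notice_codelet(self, fn):
        """Register a codelet that annotates a representation."""
        self.codelets.append(fn)
        return fn

    def notice(self, rep):
        """Notice: run every codelet, annotating the representation."""
        for codelet in self.codelets:
            rep.annotations.extend(codelet(rep))
        return rep.annotations

# A toy "number sequence" domdule with one notice codelet.
numbers = Domdule("number-sequence")

@numbers.notice_codelet
def notice_monotone(rep):
    """Annotate a simple fact: the sequence is strictly increasing."""
    xs = rep.elements
    if all(a < b for a, b in zip(xs, xs[1:])):
        return [("temporal-progression", "strictly-increasing")]
    return []

rep = Representation([1, 2, 3, 5, 8])
facts = numbers.notice(rep)
```

Understand and Invent would then operate on the accumulated annotations, rather than on the raw elements.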

The AI application developer would write an "application" domdule for some specific domain - say, cash-flow analysis - and integrate it with the rest of the system.  Then the application domdule automatically gets all the built-in capabilities - design optimization, learned prediction, goal-oriented planning, and so on (36).  In short, the application AI would be able to engage in reasoning about the domain.  Furthermore, any skills the AI has already learned can be applied to that domain - if the AI knows how to look for anomalies, then, after a bit of experience, it will know how to look for anomalies in cash flow.

And domdules add up; the standard library in later versions of the Aicore system might provide a natural-language domdule (37), which could be added to the existing AI to create a question-answering interface for the cash-flow analysis database.  Of course, I doubt this integration will be automatic during the initial stages.  How much cumulative ability you can get by combining domdules, and how much work is required to combine domdules over and above the work required to create the domdules themselves, will be one of the key parameters shaping the AI industry at any point in time.  (38).

The primary goal of Aicore, in the initial stages, will be making IT development easier.  The most obvious way this can happen is through smart IDEs (39); that is, application domdules targeted on source code.  The next most obvious way is making it easier to develop IT applications that "wanted" AI to begin with, i.e. IT applications that need to perform reasoning about the domain.  The Aicore programmer would just have to define the way the domain behaves, since general reasoning is already present.  The most subtle and important goal of the Aicore line is to become integrated into the system libraries and the program architecture.  The programmer will write programs that make use of AI reasoning in the algorithms (40); or, better still, write "programs" that are simply thoughts in the AI (41).

The ideal application for Aicore - the one we want most to encourage - would be systems that consist of the Aicore system, an application domdule, a user interface, a set of goals, and lots and lots of Beowulf or supercomputing hardware.  Where, previously, the system would have been a three-year hundred-million-dollar program that was two years late and never did work right.  Aicore has the highest utility where being able to assume some basic reasoning ability is the key difference between writing one set of high-level instructions, and spending a thousand programmer-years writing ten thousand slightly different low-level procedures ten thousand slightly different ways.  That's what it means for AI to be "part of the program architecture".

The very first releases might not have the robustness needed to run corporate IT in real life, but even the first releases would still be useful for rapid prototyping, and robustness is what open-source does best.  In later stages, the products of the Aicore line will begin to approach the level of sophistication and decrystallization needed for seed AI, perhaps even exhibiting some signs of rudimentary intelligence.  No harm will be done if true intelligence isn't possible on desktop hardware, however.  The important thing is that the stuff Aicore is made of should begin to approach the same design used in true seed AI, so that the genuine full-scale seed AI research project can take advantage of industry advances.

2.1.2: The "Flare" line

Flare is a proposal for a new programming language, the first annotative programming language, in which programs, data, and the program state are all represented as well-formed XML.

NOTE: At present, I don't even have a publishable Flare whitepaper.  I don't even have a finalized design.  I am engaging in the sin of aggravated vaporware because I have been told, and convinced, that the timeline does not make any sense without knowing some of what Flare is and what it does.  Please consider all discussion of Flare to have whatever degree of tentativeness and subjunctivity is required for that discussion to be an excusable act of speculation.

Since I do have over half a megabyte of design notes, I use the present tense in discussing properties of Flare.  Those properties are no longer tagged as "subjunctive" in my mental model, and using the present tense feels more natural (42).  No claim that the Flare design has been finalized is implied.

XML is to Flare what lists are to LISP, or hashes to Perl.  (XML is an industry buzzword that stands for eXtensible Markup Language; it's a generic data format, somewhere between generalized HTML and simplified SGML.)  The effects are far too extensive to go into here, but the most fundamental effect (43) is that XML is easy to extend and annotate, and this property extends into Flare programs and the Flare language itself.  (44).

Our own cognition is also annotative - we note arbitrary facts about things, and think in heuristics that act on arbitrary facts about things.  An "annotative" programming language, which recognizes this fact, is thus a higher-level language.  "Higher-level" means "closer to the human mind" - annotative thinking is reflected in annotative language, just as object-based cognition is reflected in object-oriented languages, just as procedures are closer to our mental representations than assembly language.
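To make "annotative" concrete, here is a toy sketch using Python's standard xml.etree library: arbitrary facts get attached to a program element as child annotation nodes, without disturbing the element itself. The tag names are invented for illustration and imply nothing about actual Flare syntax.

```python
# Toy illustration of annotative representation: attaching arbitrary
# facts to a code element as XML annotations. The tag names are
# invented and imply nothing about real Flare syntax.
import xml.etree.ElementTree as ET

# A "program element" represented as XML.
stmt = ET.Element("assign", {"var": "x"})
ET.SubElement(stmt, "literal").text = "42"

def annotate(element, fact, value):
    """Attach an arbitrary fact to an element without altering it."""
    note = ET.SubElement(element, "annotation", {"fact": fact})
    note.text = value

# Heuristics can note arbitrary facts about the statement...
annotate(stmt, "last-modified-by", "optimizer")
annotate(stmt, "believed-constant", "true")

# ...and other heuristics can act on those facts later.
facts = {a.get("fact"): a.text for a in stmt.findall("annotation")}
```

The point is that the annotations are open-ended: a heuristic written years later can add a fact-type nobody anticipated, and the element's existing consumers are undisturbed.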

Flare has four primary purposes:

Influenced, but not commanded, by the worse-is-better philosophy from "Lisp: Good News, Bad News, How to Win Big":  I've divided up the implementation stages into the 50%-right version that spreads like a virus (Flare Zero), the single perfect gem (Flare One)... and also the total development environment (Flare Two), followed by effective integration with the Aicore line (programs in "Flare Three" are essentially thoughts in the AI).

The current stage is Flare Negative One.  I have at least 500K of notes on Flare, but I haven't put them into publishable form yet.  I don't have a Flare whitepaper available; I could probably get one together in, say, a month or so.  Since I don't have a complete whitepaper, I was reluctant to say as much as I've said already.  I don't want any crippleware versions coming out and depriving Flare of its incremental benefit.  (I'm not worried about crippleware AIs for the simple reason that anyone bright enough to implement any part of the Aicore line without my help - no matter what I post on the Web - is bright enough to do their own navigation.)  I'm also reluctant to engage in acts of aggravated vaporware, but I've been convinced it's necessary (50).

2.2: Technological timeline

2.2.1: Flare Zero

As noted in 2.1.2: The "Flare" line, Flare is the name for a new programming language that is to be the vehicle of a series of improvements in programming techniques.  It is a programmer's truism that 90% of the features provide 10% of the functionality.  This can't necessarily be reversed to provide 90% of the functionality with 10% of the features, but it's often a good idea to try.  As designed so far, the true, elegant version of the Flare language contains a number of features which will probably not be frequently used (51).  Thus "Flare Zero", a version of Flare designed according to the "New Jersey approach" to have around 50%-80% of desired functionality (52).  For example, Flare Zero will have programs in XML, and data in XML, but the program state will probably not be expressed in XML.  There are all sorts of cool things you can do with an XML program state (53), but they won't happen every day.  The "Zero" is because the number of omitted features renders Flare Zero not-really-Flare.

Nonetheless, Flare Zero should yield substantially improved development times, functionality, and maintainability for the development of certain types of complex programs.  In particular, any attempt to explain to the program a set of regulations or rules originally designed for humans to follow (54) should be substantially easier, an improvement on the order of the transition from procedural programming to object-oriented programming.

Ubiquity (55) isn't likely to reach the same level as, say, Python or Java until Flare One.  (On the other hand, it should be fairly easy to interoperate with other languages (56).)  Of course, that's all the "mature" version.  The first release will be a research language, the way these things always work, and without the speed and sophisticated tools that many programmers demand.  Even so, it'll be a fun language, and it'll be possible to do things in Flare that simply couldn't be done otherwise, so we can probably get enough open-source volunteers to bootstrap.

Flare Zero will be a reasonably "mundane" project in that, given the basic insights, the design and implementation should not require Specialist-level talent or nonobvious, fundamental revision of the basic insights.  In short, it's a project that I can reasonably turn over to someone else; I don't have to be the limiting factor.  This may make it suitable for an initial project by the Singularity Institute, even though Flare is not actually necessary to the Aicore line until Aicore Two or thereabouts (58).

Flare Zero is not on the critical path, but it's also the easiest, and most conventional, of all the projects on the timeline.  Depending on the resources available and the perceived necessity for experience and a successful initial project, it might be wise to start work on Flare Zero first.  (See 3.1.1: Development resources and 3.1.2: Development timeline.)

2.2.2: Aicore One ("Chrystalyn")

Chrystalyn is a minimal version of Elisson - Elisson being the seed AI from Coding a Transhuman AI - which lacks the cognitive and programmatic features required for self-alteration, independent learning of new domains, self-organizing integration of new representations, automatic adjustment to architectural changes, and flexible symbols.

However, Chrystalyn will have a domdule-based architecture and world-model, RNUI design and notice codelets, a goal system, causal and similarity analysis, reflectivity, and some self-improvement with at least as much reflexivity as Eurisko (59).  Or at least it will have simplified versions of causal and similarity analysis et cetera which imitate cognition in useful ways.  As the name implies, Chrystalyn will be designed as a mostly "crystalline" (60) AI.  The programmer will simply sit down and write notice-level functions, and those functions will have direct meanings that make immediate sense to humans.  (61).

Although Chrystalyn is not an intelligent entity, it should be one heck of a computer program.  Eurisko achieved impressive (62) performance in a wide variety of domains, through a combination of generalized heuristics and domain-specific models.  I would like to duplicate and exceed this capability in open source code.  Given a domain (chess, inventory replacement, stock prices), it should be possible to write a domdule which obeys some standard API and has human-provided integration with the "architectural" domdules (causality, goals, etc.).  That is, the notice-level functions inherit from the QNoticeCodelet class (63), and the AI application developer annotates (64) notice-level functions (and representations, and reflexive traces) with standard, crystalline labels (65) that let them link up with the rest of the system - the domdules and representations for goal-oriented thinking, causal analysis, heuristic-learning, evolutionary design, and so on.  The programmer writes the application domdule; Aicore provides the cognitive architecture, and skills to perform actual reasoning about the domain, such as design optimization, prediction, question-answering, and so on.
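The labeling scheme just described can be sketched in Python. Only the QNoticeCodelet name comes from the text above; the domain, the labels, and everything else are invented for illustration.

```python
# Sketch of the crystalline-label scheme: an application codelet
# subclasses QNoticeCodelet and carries labels that link it to the
# architectural domdules. Only the QNoticeCodelet name comes from the
# text; the rest is invented for illustration.

class QNoticeCodelet:
    """Base class for notice-level functions."""
    labels = {}                       # crystalline integration labels

    def notice(self, rep):
        raise NotImplementedError

class NoticeLowStock(QNoticeCodelet):
    """Application codelet for an inventory-replacement domdule."""
    labels = {
        "domain": "inventory",
        "links-to": ["causality", "goals"],   # architectural domdules
        "fact-type": "threshold-crossing",
    }

    def notice(self, rep):
        # Here a bare dict of item counts stands in for the
        # representation; a simple fact is noticed for each low item.
        return [("low-stock", item)
                for item, count in rep.items() if count < 10]

codelet = NoticeLowStock()
facts = codelet.notice({"widgets": 4, "gadgets": 50})
```

The crystalline labels are the point: a human wrote them, they have direct meanings, and the goal system or causal-analysis domdule can read them to decide when the codelet's facts matter.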

I wouldn't be surprised if the first Aicore architecture required, for a pair of domdules to work together, that at least one domdule have been designed by someone who knew about the other domdule.  That doesn't mean things are hopeless; two domdules might be able to interoperate fairly well in practice because they both know about the causal-analysis domdule.  But if this level of AI goes mainstream, it will create a market in domdule packages (rather than just domdules) and minor "OS wars" for new architectural domdules.  Software folk will find this paradigm familiar.

At minimum, the initial releases of Chrystalyn should be useful for rapid prototyping, exploratory programming, and knowledge mining.  Given a lot of domdule work and enough computational power, Chrystalyn should be capable of a number of novel applications (or assistances) - automatic translation between communication protocols and data formats; user interfaces that learn frequent actions and understand user goals (66); porting code between operating systems and languages; spotting bugs.  The initial versions of Chrystalyn will probably be memory hogs and slow as molasses (although perhaps the heuristics and procedures could be learned on a SPARC and run on a PC).  But software bloat is relative to hardware, and perhaps someday Chrystalyn will become part of word processors.

Work on Chrystalyn can occur contemporaneously with Flare Zero development.  It should be possible to write Chrystalyn in Python, using some Flare techniques without having the actual language (67).  If Chrystalyn is still around when Flare Zero becomes reliable and supported, we'll translate it into Flare Zero.  Same goes for Flare One.  (Anything less than Flare One is a hack as far as doing AI is concerned.)  But meanwhile we'll do it in Python.  Since Python is open-source and embeddable, applications with embedded Chrystalyn shouldn't be too complicated.

As always, the disclaimer:  At present, Chrystalyn is simply an idea.  As with Elisson and Flare, it will probably take at least a month of thought to translate the idea into a design, then another month if I need to publish it on the Web.  I do not think it will be possible for a team to translate the high-level concept into a design and implementation without Specialist-level assistance, both because of the intrinsic difficulty, and the high probability of further research-level insights and redesigns being required.  The creation of any mind is the hardest solvable intellectual task in existence.

2.2.3: Flare One

If Flare Zero is an advance compilation of "Flare's Greatest Hits", Flare One is the first actual implementation of Flare.  Flare One eliminates all the shortcuts taken to get Flare Zero out the door, and adds some fundamental concepts needed for distributed operation, self-examining code, secure execution, and so on.

One example would be making the program state representable as XML; another example would be replacing the monolithic interpreter with the Tcl-like modular interpreter implied by an XML program state.  (68).  This might make Flare One slower, but given the above modular interpreter, it should be easy to create alternate implementations that leave out unused features.  It should also become a lot easier to port Flare to new environments.  (See above footnote.)
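The "modular interpreter" idea can be sketched in miniature: each operation is a pluggable handler keyed by its tag, so an implementation can drop or replace the handlers it doesn't need. The instruction set and handler names here are invented purely for illustration.

```python
# Sketch of a Tcl-like modular interpreter: instead of one monolithic
# eval loop, each operation is a pluggable handler keyed by tag. An
# implementation can omit or replace handlers it does not need. The
# instruction set here is invented for illustration.

HANDLERS = {}

def handler(tag):
    """Decorator registering a handler for one instruction tag."""
    def register(fn):
        HANDLERS[tag] = fn
        return fn
    return register

@handler("set")
def do_set(state, arg):
    name, value = arg
    state[name] = value

@handler("incr")
def do_incr(state, arg):
    state[arg] += 1

def run(program, state=None):
    """Dispatch each (tag, argument) instruction to its handler."""
    state = {} if state is None else state
    for tag, arg in program:
        HANDLERS[tag](state, arg)
    return state

state = run([("set", ("x", 5)), ("incr", "x"), ("incr", "x")])
```

A stripped-down implementation simply ships with fewer entries in the handler table; a ported one swaps in platform-specific handlers without touching the loop.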

If there's a usable version of Aicore One when we're ready to put out Flare One, we should integrate Aicore into the Flare IDE, although not (yet!) into the language.  This should lend at least a minimal level of intelligence to program editing (69), debugging (70), semantic searches (72), change propagation (73), and possibly even code reuse (74).  Even if only the most obvious applications are possible, I would expect a significant improvement in coding time and the manageability of large projects, especially as more AI "scripts" were contributed.  (Although I do worry about whether the AI "scripts" will be fast enough for people to actually use.)

If Flare Zero is the "50% right" version, then Flare One skips the "90% right" phase and goes directly to the "single perfect gem".  And yes, I know that single perfect gems are supposed to take forever to design and be impossible to implement efficiently.  But somehow, after trying to design a mind, the prospect of designing a single perfect gem doesn't seem very intimidating.

2.2.4: Symmetric Multiprocessing and Parallelism

(Should be part of the Flare One release.)

Flare One should contain language features intended to set things up for parallel computing (75) on symmetric multiprocessing machines.  The ideal of SMP is that almost any Flare program will run four times faster on a machine with four processors.  In practice, a "parallel Flare" program that runs on one processor will probably be ten times slower than a "serial Flare" program, which is why SMP should wait until Flare One, when unused features won't detract as much from efficiency (76).

The purpose of symmetric multiprocessing is threefold.  First, as an optional research feature and rapid prototyping method, making certain kinds of code more natural, and encouraging programmers to experiment with parallelism.  Second, to introduce the theoretical potential for upward scalability - if a Flare program won't run a hundred times faster on a hundred processors, perhaps it will at least run ten times faster.  Third, as an ordinary programmer's tool for managing preexisting parallel processes (77).
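The SMP ideal can be illustrated in today's terms (Flare itself being hypothetical) with Python's standard pool API: the same high-level map runs with one worker or four, with no change to the program's logic. A thread-backed pool is used here so the sketch runs anywhere; a real SMP runtime would spread the work across processors.

```python
# Illustration (in Python, Flare being hypothetical) of the SMP ideal:
# the same high-level map runs serially or on four workers without
# changing the program's logic. ThreadPool shares the Pool API; a real
# SMP runtime would distribute the work across processors.
from multiprocessing.pool import ThreadPool

def score(n):
    """Per-item work we would like to spread across processors."""
    return n * n

items = list(range(8))

# One "processor": a plain serial map.
serial = [score(n) for n in items]

# Four workers: identical logic, same ordering guarantees.
with ThreadPool(processes=4) as pool:
    parallel = pool.map(score, items)
```

The design goal is exactly this property: parallelism as a deployment decision, not a rewrite.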

The ulterior motive of parallel Flare is to start setting things up for the rise of the sixteen-processor home computer.  Admittedly, since the concept of running on the Internet has been abandoned, the sixteen-CPU home PC is less of an advantage (to us).  Nonetheless, SMP helps set things up for very-high-powered apps like Aicore, may advance techniques that will help us run a seed AI on massively parallel hardware, may advance ultracomputing in general, and may also help keep the computing industry going in the event that Moore's Law temporarily fails.

Admittedly, parallel Flare per se won't actually be a practical advantage, capable of driving demand for SMP machines, until 2.2.9: Self-optimizing compiler.  But just the technical advancement in programming techniques may be enough to affect the SMP market (78) - if, for example, the Aicore line can use SMP efficiently, then this will increase demand for SMP machines.

2.2.5: Flare Two

Flare Two integrates Aicore into the language itself, not just the IDE.  This step will probably take place at least a year after the release of Flare One, so that people have had time to evolve applications and libraries and programming techniques that use Aicore as part of the architecture.  (79).  The fruits of the previous loose Flare/AI coordination will be compiled and coordinated.

Flare Two is the point at which system libraries get turned into domdules.  As you hopefully recall from Domdules and RNUI in 60 seconds, a domdule contains "a set of codelets that annotate the data structure with simple facts about relations, simple bits of causal links, obvious similarities, temporal progressions, small predictions, et cetera.  The converse of notice-level simple perception is simple manipulation, the availability of choices and actions that manipulate the cognitive representations in direct ways."

System interfaces, database interfaces, and anything else with an Application Programming Interface - anything with an interface for getting information and acting - can be thought of as containing some of the notice-level functions of a domdule.  Turn the API functions into domdule codelets, document the codelets with the labels that tell the rest of the AI what the functions are, add in information about visualizing consequences, integrate the domdule with the goal system and the other architectural domdules, and teach the AI about the purposes of the API.  Train the AI; show it what the API is usually used for.  Voilà.  The AI can use the API on its own; all it needs is the set of end-goals.  The AI can perceive when you make silly mistakes in using the API.  Et cetera.  (80).
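A minimal sketch of the "API domdule" recipe: wrap an API function as a labeled codelet, then add a notice-level check that perceives a silly mistake in using the API. Every name here (the API, the labels, the check) is invented for illustration.

```python
# Sketch of the "API domdule" idea: wrap an API call as a labeled
# codelet, and add a notice-level check that perceives a misuse of
# the API. All names here are invented for illustration.

def make_codelet(fn, labels):
    """Turn an API function into a labeled domdule codelet."""
    fn.labels = labels
    return fn

def open_account(owner, initial_balance):
    """A toy API function."""
    return {"owner": owner, "balance": initial_balance}

open_account = make_codelet(open_account, {
    "action": "create",
    "preconditions": ["initial_balance >= 0"],
    "consequence": "a new account exists",
})

def notice_misuse(call_args):
    """Notice-level check: perceive a violated precondition."""
    problems = []
    if call_args.get("initial_balance", 0) < 0:
        problems.append("initial_balance should be non-negative")
    return problems

mistakes = notice_misuse({"owner": "pat", "initial_balance": -5})
```

The labels are what let the goal system chain the call into a plan ("to make an account exist, invoke the create action"), and the notice-level check is what lets the AI catch the programmer's mistake before the API does.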

And "API domdules" are only the simplest, most obvious application, just the tip of the AI iceberg, like using computers for high-speed arithmetic.  If the program-as-domdule concept really works, there'll be applications I haven't even imagined.  Flare Two is where we start "Exiting the Slow Zone":  AI begins to make mainstream programming significantly easier - not just the task of editing and debugging, but system design.  There's a wider range of things you can assume the computer will understand, and the language itself has a certain amount of common sense.  Yes, Virginia, there is a silver bullet.  (82).

Current programs have an internal coherence that's represented only in the mind of the programmer, or at most in human-readable documentation.  But once the programming language and the IDE AI can represent cognitive facts about the program, programs will get a lot easier to debug.  And once the language can turn cognitive facts into programs, programs will get a lot easier to write.

2.2.6: Aicore Two

(Not synchronized with the Flare line.  (83).)

Aicore Two will be a major reference release, written completely in the best Flare available at the time.  (84).  If there are any versatile domdules that have become popular and widely used, they will become part of Aicore Two's new set of architectural domdules.  (85).  If there's anything we've learned about faster development of Aicores, better domdule representations, better formats for the labels that integrate the system, et cetera, then we'll incorporate that too.  We'll do the lessons-learned thing.  The business case for Aicore Two will consist of that update.

But the primary purpose of Aicore Two is three timeline-desirable fundamental improvements:

2.2.7: Planetary AI Pool

(Should be part of the Aicore Two release.)

The Planetary AI Pool is a central repository of content developed by AIs.  "Content" includes domdules, domdule elements (i.e. notice-level functions), heuristics, concepts, models, and whatever other high-level constructs or low-level elements exist.  The low-level elements should obey a standard API, and the high-level constructs should be mutable by the same cognitive processes that created them.  Hopefully, the problem will not be finding two pieces that fit, but finding two pieces that fit together well.

One example might be a bunch of word-processors all trying out new heuristics and sharing any user-interface adaptations that they've learned from the user.  I create a word-processing maicro (89) that does something cool, and if your word-processor thinks you might like the maicro, your AI downloads the maicro and tries it out.  An example maicro might be "If the user is making periodic entries in some document, offer to add the date and time of each entry" or "If the user is writing a Web page in FAQ format, check the Internet to see if a previous FAQ already exists".  More mundanely, the AIs might just swap lower-level heuristics, like "Investigate cases close to extremes" (instead of "Investigate extreme cases").
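The exchange described above can be sketched as a trivial repository protocol: publish a learned heuristic to the pool, and let each local AI fetch only the maicros relevant to its own domain. The data layout and function names are invented for illustration.

```python
# Toy sketch of the Planetary AI Pool exchange: a central repository
# of heuristics ("maicros"), from which a local AI downloads the ones
# relevant to its own domain. The layout here is invented.

pool = []   # the central repository

def publish(name, domain, body):
    """Upload a learned heuristic to the pool."""
    pool.append({"name": name, "domain": domain, "body": body})

def fetch_relevant(domain):
    """A local AI pulls the maicros matching its own domain."""
    return [m for m in pool if m["domain"] == domain]

publish("offer-date-stamp",
        "word-processing",
        "If the user makes periodic entries, offer to add date/time.")
publish("investigate-near-extremes",
        "general-heuristics",
        "Investigate cases close to extremes.")

local = fetch_relevant("word-processing")
```

The hard part, of course, is not the transport but the relevance judgment: deciding which downloaded maicros a given user would actually like.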

If participating in the Pool reliably yields good results, and if selling spare CPU cycles is only worth a couple of bucks a month, then optimizing the local AI might provide more benefit than renting out your computer - making the Planetary AI Pool one of the largest MIPsuckers on Earth.  This isn't as Singularity-desirable as it looks, however, since the AIs don't have a global intelligence.  The primary effect would be to vastly increase the amount of computing power applied to tweaking bits of code, with no greater intelligence than the maximum locally achievable.

Nonetheless, there will be self-improving AI around, interacting on a global scale, so we need to at least start thinking about a possible Transcendence.  Hence the "crystalline Interim Goal System" requirement in 2.2.6: Aicore Two.

Note:  Being a pessimist by neurology as well as profession, I can't help but wonder whether all the "easy wins", all the interesting results, will be gathered in the first two days of running the AI Pool - after which nothing interesting will happen, wrecking the business case for further participation (90).

2.2.8: Scalable software

"Scalable software" is software that shows a continuous qualitative improvement with better hardware.  Deep Blue is the canonical example; IBM's research team just piled on computing power (91) until Deep Blue exhibited "a new kind of intelligence" (92) and beat Kasparov.  It seems plausible to me that an AI, with more intelligently shaped search trees, would scale even better.
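The scaling property can be illustrated with a toy exhaustive search, where search depth stands in for hardware: the same routine yields better answers when handed a bigger compute budget. The puzzle and operations are invented for illustration.

```python
# Sketch of "scalable software": the same search routine yields
# better answers when given a bigger compute budget (here, search
# depth stands in for hardware). Toy puzzle: starting from 1, apply
# operations to get as close to a target as possible.

OPS = [lambda x: x,          # no-op (so deeper search never hurts)
       lambda x: x + 3,
       lambda x: x * 2]

def best_error(target, depth):
    """Exhaustively search op-sequences of the given depth."""
    frontier = {1}
    for _ in range(depth):
        frontier = {op(x) for x in frontier for op in OPS}
    return min(abs(x - target) for x in frontier)

# More compute (deeper search) never gives a worse answer.
errors = [best_error(100, d) for d in (2, 4, 6, 8)]
```

Deep Blue scaled in roughly this brute-force way; the claim in the text is that intelligently shaped search trees should convert extra hardware into quality even more efficiently.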

Any software that uses scalable AI automatically becomes scalable itself (93).  But I also have hopes that scalable AI will start a trend towards the general use of scalable programming techniques.  The name of this stage is "scalable software", not "scalable AI".

When scalable AI or scalable something is an integral part of word-processing programs, Joe Q. Consumer will always have a motive to buy the latest 16-processor 2GHz tower.  When scalable programming becomes common, Joe Q. CIO (94) will be able to throw hardware at a late software project.  Above all, the style of programming involved will hopefully extend to the creation of "ultracomputing" software applications - software that would do something amazingly useful on a supercomputer.  (95).

The purpose served:

2.2.9: Self-optimizing compiler

After Aicore and Flare have been around for a few years, there should be mature Flare domdules for Aicore - domdules capable of understanding the logic and execution of Flare programs.  There should be domdules that parse (and notice) other languages as well.  In combination, this should yield AI capable of translating other languages into Flare.  Flare, being XML-based, is obviously suited very well to being a universal program format.  (96).

If the AI has a reasonable understanding of the logic behind the program (97), it should also be possible to treat a Flare program as a prototype, and write code that does "the same thing" using C++ or assembly language.  (98).

In fact, given that Flare's XML representation should be easy to manipulate and translate, I would expect the first experiments with Flare-to-C++ or Flare-to-assembly compilers to begin soon after Flare One was released.  By this point on the timeline, we might just be assembling the experiments and compiling them into a coherent whole (99), rather than doing any actual research.
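The shape of the "universal program format" idea can be shown in miniature: a tiny arithmetic program represented as XML, mechanically translated into another language's source. Real Flare-to-C++ translation would be vastly harder; the mini-language below is invented purely for illustration.

```python
# Toy sketch of the "universal program format" idea: a tiny arithmetic
# program represented as XML, translated into Python source. The
# mini-language is invented; real cross-language translation would be
# vastly harder.
import xml.etree.ElementTree as ET

program = ET.fromstring("""
<add>
  <mul><num>3</num><num>4</num></mul>
  <num>5</num>
</add>
""")

def to_python(node):
    """Translate the XML program tree into a Python expression."""
    if node.tag == "num":
        return node.text.strip()
    left, right = (to_python(child) for child in node)
    op = {"add": "+", "mul": "*"}[node.tag]
    return "(%s %s %s)" % (left, op, right)

source = to_python(program)      # a Python rendering of the program
value = eval(source)             # the translated program still runs
```

Swap the operator table and the output template, and the same tree emits C or assembly instead; that substitutability is what makes a common tree format the meeting point of languages.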

Automatic translation will break down the distinction between languages.  If the AIs can analyze machine code and translate it back into commented, named, understandable source code, even the distinction between source code and assembly will break down.  And at that point, Flare will eat the entire software industry.  When Flare is as fast as C++ but infinitely safer; when all your legacy code only looks like it's written in COBOL or assembly language, but - like it'll say on the T-Shirts - It's Really Written In Flare; when Flare becomes the common format, the meeting point of every programming language and IDE - then, things can really start to move.

When Flare-tuned AIs can examine machine code as easily as the original source, there won't be any comprehension hit for using assembly language.  When Flare programs can automatically be rewritten as machine code (100), there won't be any performance hit from using Flare - quite the contrary!  "Code" will become an abstract liquid that can be poured from one substrate to another.  There won't be any part of its own source code an AI can't understand - if necessary, it will be able to look at its own program in RAM.

And thus, the AI will be able to understand, and optimize, its own source code.

NOTE: Veteran Singularitarians will recognize this as a description of Dan Clemmensen's "self-optimizing compiler".

Fully self-swallowing programs are a key step on the road to Singularity.  A long, slow, extended step, a step that starts with Flare Zero and probably won't come to fruition until after Aicore Two.  But that's one of the main reasons for having Flare and Aicore.  Flare is an XML-based annotative programming language.  The Aicore architecture has notice-level functions that annotate a world-model.  And thus, J. Random Hacker can experiment with noticing facts about programs.

So by this point there should be a huge library of programs and notice-level functions and domdules that understand Flare, and manipulate Flare, and translate other languages to and from Flare.  The self-optimizing compiler stage occurs when that collective intelligence can read assembly language, and write it, as easily as it reads and writes Flare.  It should be possible to write a Flare interpreter and Aicore implementation in High Flare, Flare with all the features turned on.  Earlier, this would have run like molasses; with a self-optimizing compiler, it'll run as fast as C++ or assembly.  And experimenting with Flare AI will get even easier, since it'll be possible to write intelligent Flare evaluators in Flare without running into a major performance hit or infinite recursion.

2.2.10: Adaptive hardware utilization

With a self-optimizing compiler, capable of translating 68040 machine code for a PalmPilot interface into Flare and thence to parallel-computing Intel assembly that runs on a multiprocessing Linux machine (101), the SMP market should really hit the mainstream for the first time.  (102).  With true code-understanding AI, it should take only a small additional refinement to handle asymmetric multiprocessing.

I used to wonder why, if we can fit a primitive CPU onto thousands of transistors, and a modern CPU onto millions of transistors, we can't fit a thousand primitive processors onto a modern chip.  But I know why:  It's because we don't have the programming techniques to use the darn things.  So we just build larger and larger serial CPUs with as many bells and whistles as it takes to turn all those transistors into one instruction-execution event loop.

With a self-optimizing compiler around, it should be possible for Intel to design thousand-processor asymmetric multiprocessing chips and be assured that existing programs will be capable of using them to their full potential.  And this, in turn, should mean that instead of million-CPU supercomputers, we'll have billion-processor supercomputers.  With any luck at all, this should be more than enough raw power to run a seed AI.

Since this development would have to take place in "hardware time" (103), the PtS plan doesn't rely on it.  It would really help, though.

2.2.11: Aicore Three

This is the point at which we start to decrystallize the Aicore line and make it self-swallowing; this is where we start moving towards seed AI.  It's the release where we start long-cutting some of the shortcuts.  Aicore Three is when we put in some of the Elisson characteristics that were originally omitted, but which will become necessary once mutating code is around (104).  I won't say that this stage will have to occur after SMP or adaptive hardware utilization, since who knows if we'll have the time for hardware to catch up with software - but nonetheless, decrystallizing does take power.  (105).

Aicore Three will also contain a reference release of any infrastructure that got invented by the Planetary AI Pool.  However, most of this infrastructure will be obsolete.  (See next paragraph.)

The defining change in Aicore Three will be self-integrating domdules.  While human effort may still be required to label all the functions and representations, it shouldn't require human effort to link up two sets of labels.  The links will be learned.  AI should exist which examines possible links between tags, associations between representations, et cetera, and which improves or invents them by studying similarities and covariances.
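Since domdules and their tag interfaces are hypothetical, the mechanics can only be sketched, but the covariance idea itself is concrete: given activation histories for tags from two domdules over a set of shared experiences, propose links between the pairs that covary most strongly.  All names and data structures here are illustrative assumptions, not part of any real Aicore API:

```python
from itertools import product

def covariance(xs, ys):
    """Sample covariance of two equal-length activation histories."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def propose_links(tags_a, tags_b, threshold=0.1):
    """Propose links between tag pairs whose activations covary.

    tags_a, tags_b: dicts mapping tag name -> list of observed
    activation levels, one entry per shared experience.
    Returns (tag_a, tag_b, covariance) triples, strongest first.
    """
    links = []
    for (name_a, hist_a), (name_b, hist_b) in product(tags_a.items(),
                                                      tags_b.items()):
        c = covariance(hist_a, hist_b)
        if c > threshold:
            links.append((name_a, name_b, c))
    return sorted(links, key=lambda link: -link[2])

# Two toy domdules observing the same four experiences:
tags_a = {"red": [1, 0, 1, 0], "round": [0, 1, 0, 1]}
tags_b = {"warm": [1, 0, 1, 0], "cold": [0, 1, 1, 0]}
links = propose_links(tags_a, tags_b)
print(links)  # only "red"/"warm" covary strongly enough to link
```

A real system would of course need far more than raw covariance - conditional structure, causal direction, nonlinear associations - but this is the kernel of "the links will be learned": statistical regularity across shared experience replaces hand-coded glue.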

As a result, the distinction between "architectural" domdules and "content" domdules should start to break down.  The market for domdule packages will vanish.  And if the learning techniques can apply to previously created domdules, the total intelligence of the Aicore system will take a huge leap; all the existing intelligence will come together.

Perhaps existing domdules will become obsolete as well.  (Not useless, just obsolete.)  At this point, the time and effort and research computing power should exist to decrystallize the domdules and even the architecture - bump the domdules down a level or two, break them up into subcomponents.  This is getting close to true cognition, but not quite there yet.

Furthermore, some or all of the Aicore code should be "documented" in a way that the AI can understand (106), so that the AI itself can improve on it.  Perhaps as much as possible of the code will be replaced with an implementation generated by the AI itself, so that the AI can manipulate the design and thus manipulate the implementation directly.

When you factor in the Planetary AI Pool, this all sounds like Singularity-class stuff, and the probability does exist - but, once again, I don't think it's enough.  I think it's necessary, but not sufficient.  Reflexivity and circularity create the raw material for Transcendence, but to start it off, you need a fundamental spark of creativity, of smartness.  I'm not sure that will be present at this point.

As with the self-optimizing compiler, I think the result will be a superbly optimized design, perhaps with some interesting new tricks and features, but not a wholly new design with an interesting new purpose.  The AI might be capable of all kinds of coding tricks, but not of performing scientific research or holding a philosophical conversation with a human.

But since there's a real possibility of a Singularity, or of real-but-infrahuman intelligence, Aicore Three should have a safe, decrystallized, and fairly complete goal system (107) - one with almost all the precautions we'd want in a real AI (108).

At this stage we're obeying the second rule of navigation:

Second Rule of Navigation
Before you can create X, you must create the potential for X.

2.2.12: Ubiquitous AI

If progress continues long enough, Flare and Aicore will merge.  Most mundane programming will consist of taking an AI and telling it what you want, in natural language.  One will be able to give instructions to computers in the same way one would give instructions to humans.  Programs will become thoughts.  AI will eat the software industry.

This stage will also see the rise of the World Wide Program:  When all programs are thoughts in AIs, they should all interface automatically - programs will just flow together, like puddles of water merging.  Just as the modern Web can be viewed as one massive document, all the public IT in the world will be one massive program.

There is no way this will happen before the self-optimizing compiler stage, because otherwise the programs will run like molasses.  It will be possible, but a bit more difficult, to carry this off without adaptive hardware; instead of having AIs that think, we'll have AIs that write programs.  (Perhaps the programs will call on the corporation's central AI (109) whenever something exceptional happens.)  There will probably be a significant difference in user happiness between having a personal AI, and just having AIs write all the code, but the effect on the software industry will be the same:  Blam.

This stage is more futuristic than navigational.  I'm not sure it'll happen and we can certainly do without it, but I think it might happen - the potential will be there - so I'm mentioning it.  Why?  Mostly in case our meme people want to talk to Wired about it.

In real life, I'm not sure ubiquitous AI would be a good idea.  In fact, it could be nearly as bad as nanotechnology.  I'm not going to be specific, because it doesn't necessarily help matters to sketch out all the possible ways something can go wrong, especially in public fora (110).  In essence, there'd be three major categories of problems:

And as Dan Clemmensen's contribution to navigation states:

Clemmensen's Law
"IMO, the existing system suffices to permit technological advance to the singularity. Any non-radical change is unlikely to advance or retard the event by much. Any radical change is likely to retard the event because of the upheaval associated with the change, regardless of the relative efficiency of the resulting system."

On the other hand, trying to retard a radical change is also a bad idea, in accordance with Yudkowsky's Third Threat:  "Attempting to suppress a technology only inflicts more damage."  (After all, if the Hidden Variables are kind, the enormous power of ubiquitous AI might enable us to deal with the enormous problems posed by ubiquitous AI.  The Information Age may have sent enormous shocks through the economy, but it also helped build an economy flexible enough to take it.)

Clemmensen's Law says that it's rarely a good idea to attempt major changes to society.  The Third Threat says it's rarely a good idea to try to prevent major changes to society.  I think the upshot is that we don't need to help ubiquitous AI along.  If ubiquitous AI happens anyway, or looks like it might happen, then we'll try to deal with it.  But there's not much that needs to be navigated in advance - except, as stated, the public-relations potential.

In short, this is one of those "destabilizing" applications of AI - an "ultraproductivity" effect.  (See 3.5.2: Accelerating the right applications.)  Ubiquitous AI can't be held off indefinitely, but it doesn't have to happen before the Singularity.  If ubiquitous AI happens anyway, it doesn't have to happen before we're ready; it can wait until the economy is built to take it.

2.2.13: Elisson

And now, the big finish:  Developing a fully self-swallowing seed AI, capable of creatively enhancing itself to greater-than-human intelligence.  We take the best version of Aicore and finish decrystallizing it, doing all the things we couldn't do earlier (because it would be too slow for the user, or because it would be so big that it would have to run on a major supercomputer).  In short, we'll use the Aicore line as the raw material for building Elisson, the AI from Coding a Transhuman AI.

I imagine I'll have revised CaTAI extensively by the time this stage arrives, but for the moment, it serves to delineate the goal.  Every aspect of the AI, from the low-level code, to the conceptual architecture in CaTAI, to the reasoning behind CaTAI, will be explained in a way that the AI can understand and manipulate.  With the full power of cognitive science as it exists at that time, we will try to duplicate, at least in potential, every useful detail of human thought.

We'll do our best to explain the concept of "better thinking" as a goal, the measurable ideal of better representing, predicting, and manipulating reality.  We'll give Elisson the ability to see the internal coherence of designs for a mind.  We'll give Elisson the ability to evaluate those designs, to see how they serve the goal of better thinking.

And once full self-understanding is achieved, it's only a short step up to self-invention.  When innovation is achievable in theory through a massive search through all possible designs, then innovation should be possible in practice for any self-modifying mind that understands search trees.

AI will change, from a computer program designed for speed and reliability, into a real mind designed for power and flexibility.  We will add the spark of creativity, and link that spark to a clearly defined goal of self-enhancement.

Elisson will probably be a tremendous challenge, possibly requiring a centralized effort.  Elisson is also a Deep Research project, very very Deep, the Deepest humanity will ever face before the end, and it will require an immense amount of ultra-top-flight brainpower (112).  But with a pre-existing Aicore-based IT economy, small improvements coming out of the Elisson Project should yield immediate profits, thus providing a motive for the investment required.

With a huge pool of AI hackers, with planet-years of knowledge and expertise in domdule programming and code understanding and self-modification, the potential will exist.  In the end, every other point along the timeline exists only to create the largest possible support base for Project Elisson (or other AI projects, if Elisson should fail).

Project Elisson should be started as soon as the necessary resources are acquired.  Those resources probably won't be available until AI goes mainstream at 2.2.6: Aicore Two, and the project will not yield directly applicable results - it won't be part of the timeline - until AI becomes decrystallized at 2.2.11: Aicore Three.  Likewise, the timeline will not yield direct programmatic support until Aicore Three, just hints and tools.  Nonetheless, Project Elisson will represent the leading edge of research in AI, which will trickle back to the Aicore line.

Besides, you never know where the breakthroughs lie, and with self-modifying AI, any breakthrough might be the last.  Project Elisson should start up as soon as it's practical.

NOTE: This marks the point at which we are actively and directly trying to bring about the immediate creation of a true Singularity, the birth of greater-than-human intelligence.

2.2.14: Transcendence

And then, at some point, the Elisson project succeeds.

A major breakthrough occurs within the research project - the local version of Elisson does a major rewrite with much greater creativity, exhibits flashes of smartness, but perceptibly runs up against the limit of the hardware lying around.  In short, Elisson exhibits some kind of progress that leads us to think it can go all the way.

The next step would be running Elisson on adequate hardware.  There are three possibilities:  "Adequate hardware" is what's lying around the Singularity Institute's basement, "adequate hardware" can be rented for a few days and a couple of million bucks, or "adequate hardware" simply isn't available.  In the first case, hardware isn't a problem.  In the second case, we quietly (113) rent the best available hardware and run the latest version of Elisson.  In the third case, after the attempt on the best available hardware fails, we keep on researching and try again when a significantly better supercomputer becomes available.  For the sake of discussion, we'll assume that adequate hardware is found.

I hope and pray (and guesstimate using the power/optimization/intelligence curve described in Singularity Analysis) that there's very little chance of winding up with a merely human-equivalent AI.  Once the AI reaches the vicinity of human intelligence, it should be able to redesign its architecture for greater efficiency, which would translate into even greater intelligence, which would enable it to redesign its architecture yet again.  Since the forces involved in self-modifying intelligence are folded in on each other, the total curve is completely different from the non-self-referential forces whereby evolution produced human intelligence.  There's no particular reason for the curves to have plateaus in the same places.  Given the historical fact that Cro-Magnons (us) are better computer programmers than Neanderthals, I would expect human-equivalent smartness to produce a sharp jump in programming ability, meaning that, for self-modifying AI, the intelligence curve will be going sharply upward in the vicinity of human equivalence.
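The shape of that argument can be made concrete with a deliberately toy model.  The update rule and constants below are illustrative assumptions of my own, not the actual curve from Singularity Analysis: let each rewrite improve intelligence in proportion to the intelligence doing the rewriting, so the improvement term is quadratic rather than constant.

```python
def self_improvement_curve(start, steps, gain=0.02):
    """Toy model of recursive self-improvement: each rewrite adds an
    increment proportional to the square of current intelligence,
    i.e. smarter minds make disproportionately bigger improvements.
    Purely illustrative; the constants are arbitrary."""
    levels = [start]
    for _ in range(steps):
        i = levels[-1]
        levels.append(i + gain * i * i)
    return levels

curve = self_improvement_curve(start=1.0, steps=30)
```

The qualitative behavior is the point, not the numbers: every increment is larger than the last, so instead of flattening out near any particular level (human-equivalence included), the curve accelerates straight through it - which is the intuition behind expecting a fast transition rather than a plateau.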

Thus, there should be a fast transition between considerably-dumber-than-human AI and considerably-smarter-than-human intelligence.  In the event that I'm wrong about this, we'll probably have to grit our teeth, go public with the birth of human-equivalent AI, and hope for the best.  But I really do think that's a low-probability event.

There are two critical levels of intelligence:  First, the level of intelligence necessary to take over leadership of the Singularity effort.  Second, the level of intelligence needed to create "rapid infrastructure", or nanotechnology (114).  I think it very probable that these two levels will be achieved almost simultaneously; in the event that this is not so, things get more complicated than I'm going to talk about in this section.  (116).

Even though we're assuming that Elisson is running things at this point, there are still some things we should do in advance.  A merely transhuman AI (as opposed to a Power) might have trouble renting a nanotechnology lab without attracting attention.  So, if the Singularity Institute has the money, we should have a nanotechnology lab in our basement.  The remarkable thing about nanotechnology, circa 2000, is how cheap the basic equipment is (117).  Having a nano lab is likely to be considerably easier than having our own supercomputer.  Circa 2000, a pocket nanotech lab would probably consist of a scanning tunnelling microscope (118), a DNA sequencer, and a protein synthesis machine.  (119).  Given superintelligence, I get the impression that this should be enough in the way of raw materials.  Of course, I am not a nanotechnology expert, so I could be totally off base.

Given all those devices (120), I would expect diamondoid drextech - full-scale molecular nanotechnology - to take a couple of days; a couple of hours minimum, a couple of weeks maximum.  Keeping the Singularity quiet might prove a challenge, but I think it'll be possible, plus we'll have transhuman guidance.  Once drextech is developed, the assemblers shall go forth into the world, and quietly reproduce, until the day (probably a few hours later) when the Singularity can reveal itself without danger - when there are enough tiny guardians to disable nuclear weapons and shut down riots, keeping the newborn Mind safe from humanity and preventing humanity from harming itself.

The planetary death rate of 150,000 lives per day comes to a screeching halt.  The pain ends.  We find out what's on the other side of dawn.  (121).

3: Strategy

3.1: Development strategy

3.1.1: Development resources

One of the strengths of open-source development is the possibility of a casual, volunteer-run, decentralized structure - there doesn't have to be a "core" operation.  I don't think we should take advantage of this possibility.  It strikes me as being unnecessarily fragile, sensitive to random variables in the life of the project leader.  As everyone knows by now, it's possible to run a huge open-source project in a Finnish college student's spare time.  But, historically, this isn't true of all open-source projects (123).

In the case of Aicore and Flare, where the projects are entirely new ideas instead of improvements on previously developed tools, I don't think it would be a good idea to run things on an ad-hoc basis.  There are usually a few key people in any open-source project, and while they often work as spare-time volunteers, the plan will be less vulnerable to the random factors if they can work full-time.  (124).  Likewise, the project will be more scalable if there's an expandable support operation instead of one person handling everything in vis (125) spare time.  (See 3.5.1: Building a solid operation.)  So the PtS plan assumes a support operation.

The virtual nucleus for an open-source project is a Website, a mailing list, and a CVS server (126); as of 2000, this remains constant over the initiation, short-term, mid-term, and long-term stages of open-source projects.

I'm not sure if the Aicore or Flare projects will need an evangelist or other memetic personnel.   (127).  I get the impression that good open-source projects generate their own evangelists.  But, again on the principle of building a solid operation, we might want at least one full-timer.  (128).

This is the minimum nucleus which can support arbitrarily fast growth of the project, in terms of user base and development base.

If the project does start growing "arbitrarily fast", which is the mid-term to long-term scenario, then ideally the Singularity Institute will grow with it (129).  This would enable us to expand the support operation, which would hopefully pay off in faster growth, or at least reinforcement and consolidation of existing growth.  But considering that huge open-source projects have been known to run without any full-time developers at all (130), nothing will go irreparably wrong if the project grows faster than the Institute.

Note that "short term" refers to after (A) the development project has "something potential contributors could easily run and see working" (131), which is required for getting open-source volunteers, and (B) the Singularity Institute exists.  During the "initiation" period (covered in 4: Initiation) and depending on the interaction of the development and Singularity Institute timelines, the build-something-cool stage (132) could take place with anything from one or two part-time volunteers distributed over the Internet to a full-time development crew with a physical location.  (A physical team would probably be considerably faster, one of the primary reasons for using full-time developers.)  We can hope that some Singularitarian volunteers will contribute during the initial development, so a CVS Website and a mailing list are still appropriate.

Aicore and Flare

Another initiation-stage question is how to divide resources between Aicore and Flare.  These are very different projects, with different difficulty quotients, requirements, timelines and strategic effects.  The upshot is that even though Aicore is on the critical path and Flare is not, I think initial resources should be concentrated on Flare.  Flare is more scalable, and more accelerable.

The Aicore project presents special difficulties, both programmatic (133) and social (134), that are not present in the Flare line.  I think it makes sense to initiate the far more conventional Flare project first, since Flare is easier to develop, vastly easier to explain, and will, initially, be usable by a wider group.  It should just be easier to get people enthused about Flare, from a programmatic perspective.  AI has major coolness factor, but in practice, it'll take a lot of work before your AI app hello-worlds.  Flare should be easier for mortals to sink their teeth into.

The Flare project creates the infrastructure, influence, contacts, experience, and credibility needed to get the word out about the Aicore project.

Flare also provides a language of implementation for the Aicore project; we can do the prototyping in Python using ugly hacks, so Flare isn't on the critical path, but ugly hacks will only get us so far.  Seed AI will take a self-optimizing compiler, which requires an annotative programming language and annotative programs (137), which means Flare (138).  Flare will also help protect our base in the software industry.  Flare is a legitimate Singularitarian accomplishment.  (I'm saying all this, of course, because I instinctively feel guilty about spending time on anything except AI.)

We should expect that the Flare growth curve will significantly outpace the Aicore growth curve, which should translate into the Flare timeline being ahead of the Aicore timeline (139).  We have to steer between the Charybdis (140) of being seduced by Flare's faster growth and neglecting the Aicore project, and the Scylla of being just another AI project with one or two researchers.  That last part is the organizational reason why Flare is necessary.  A human-scale challenge ensures the Singularity Institute doesn't need to wait indefinitely for successful projects and completed milestones.  (142).  Growth, which is necessary for ultimate success, requires interim successes.

3.1.2: Development timeline

NOTE: All development times are wild guesses that can extend into indeterminate amounts of time or (less likely) become shorter.  Disclaimer, blah blah, legalese, disclaimer, import * from disclaimer, #include "disclaimer.h", require disclaimer, #!/usr/bin/disclaimer, visit, you get the idea.

The relative growth curves of Flare and Aicore are likely to be as follows:  The Flare project gets started after either (A) I put the language into the form of a whitepaper that can be handed off to any competent and creative programmer, which will probably take about a month, or (B) I explain the Flare concept in person (143) and remain personally available for later consultations, which implies a Singularity Institute strong enough to support either a physical center or travel fees.  I would prefer option (B), as it will save time, even though (A) is more solid (145).  At this point, the Flare project has been "handed off", in the sense that I will no longer be the limiting factor.

It's utterly impossible to estimate development times in true research projects, of course, but I would hope that the formal open-sourcing of Flare Zero would occur in between six months and one year, that a version stable enough and featureful enough for AI development (146) would be available in from one to two years, and that a significant number of users and a sustainable open-source community would develop in from one to three years.  If the resources were available, Flare One would begin as soon as there was enough feedback on Flare Zero to provide design feedback.

Meanwhile, after Flare had been handed off, I would start working on the Aicore line.  I'm thinking in terms of spending a month or two thinking all the basic concepts through in greater detail (147), then another month or two concretizing the basic architecture (148), then some indeterminate amount of development time (probably a month or two) to "SimpleMind", a rapid-prototype skeleton AI (149).  Then I'd probably have to rewrite the architecture over the course of a few weeks or months (150), after which I'd have a complete design for Aicore One's basic architecture and APIs.  If, while all this is happening, I'm also trying to play some administrative or memetic role in the Singularity Institute (151), getting to this point is likely to take six months to a year.

If Flare Zero is usable at this point, further development will occur in Flare.  (If not, I'll keep working in Python.)  Once there's a clear design for the architecture and API, it'll be possible to initiate the Aicore project with a core crew of full-time developers.  Once the architectural code, the "operating system", is developed, the creation of the architectural domdules can begin.  Because of the cognitive nature of domdules - the notice-level codelets and so on - this stage of the task should easily lend itself to volunteer assistance (152).  I'm not sure how much skeletal material will need to be there before Chrystalyn runs and does something cool, but afterwards, we can party with the open-source process.  We'll only have volunteer-developers rather than developer-users, but I think we can expect quite a few of these due to the coolness factor.  Figure the "does something cool" stage for two to three years since I handed off Flare.

Figure another year's worth of volunteer open-source domdule fleshing, skill teaching and heuristic creation, experimentation with application domdules, knowledge learning, and so on before the first formal business-ready distribution of Chrystalyn.  (154).  I would not realistically expect a substantial user base before four years have passed, making my "Singularity 2005" T-Shirt a touch unrealistic... but we can always hope.  (155).  T-Shirts aside, the PtS navigation assumes 2010 as the target date (156), and if we can seed an AI gold rush in four years, it should be possible to do the rest of the work in six.

Working out specific schedules beyond Flare Zero or Aicore One strikes me as pointless and unrealistic.  I don't see what current decisions would be affected, and any plans made now would almost certainly have to be completely revised.  This can be planned later, and should be.

3.1.3: Open-source strategy

Open-source resources:
The Cathedral and the Bazaar ("CatB")
        Eric S. Raymond ("ESR") and the original announcement of the revolution.
Homesteading the Noosphere
        ESR on the psychology of open-source.
The Magic Cauldron ("MC")
        ESR on economics (and doing a damn fine job!)
Open Sources:  Voices from the Open-Source Revolution
        A book by O'Reilly, readable online.  Essays from the leaders (including ESR).
The Open Source Page
        Home page of the Open Source Initiative.  (ESR is president.)

Prerequisite:  1.1: Open-sourcing an AI architecture.

Open source, as defined by the Open Source Initiative, means free use of the program and free availability of source code.  Free source code allows volunteer programmers and interested users to assist in developing the AI's core architecture.  Free distribution encourages maximal use of the core architecture.  Maximizing use maximizes AI content development (157), the number of "interested users" with a motive to help develop the core architecture, and the amount of publicity attracting Singularitarian volunteers.

I find it fascinating that this entire open-source strategy is made possible by the treatment of core AI as infrastructure instead of application - which, in turn, is only possible because the model of cognition is complex enough to use domdules.  (Wrong AI uses such simple algorithms that the problem-solving intelligence can't be divided into content and architecture.)  The distinction between {networked infrastructure, partially standardized middleware, and local application} is one of the key factors determining how well the open-source model pays off; Aicore is infrastructure, and may become networked.  In a very real sense, the pattern of the industry is caused directly by the pattern of the artificial mind.  Cognitive science for MBAs!

That's it.  Most of what I want to say about open-source is in either 1.1: Open-sourcing an AI architecture or some other part of 3.1: Development strategy.

3.1.4: Designing an open-source community

"...In his discussion of 'egoless programming', Weinberg observed that in shops where developers are not territorial about their code, and encourage other people to look for bugs and potential improvements in it, improvement happens dramatically faster than elsewhere.  Weinberg's choice of terminology has perhaps prevented his analysis from gaining the acceptance it deserved -- one has to smile at the thought of describing Internet hackers as 'egoless'..."
        -- CatB:  The Social Context of Open-Source Software (ESR)

"...the number of contributors (and, at second order, the success of) projects is strongly and inversely correlated with the number of hoops each project makes a user go through to contribute. Such friction costs may be political as well as mechanical. Together they may explain why the loose, amorphous Linux culture has attracted orders of magnitude more cooperative energy than the more tightly organized and centralized BSD efforts and why the Free Software Foundation has receded in relative importance as Linux has risen."
        -- MC:  The Inverse Commons (ESR)

Although the Singularity Institute is providing core infrastructure for the Aicore and Flare projects, this does not mean that the global effort should be tight, disciplined, or centralized.  Ease of contribution, as ESR notes, must be maximized.  Aside from Internet infrastructure (158), this means establishing an open, relaxed, "egoless" culture, one in which there are no political obstacles to progress.

This is hardly the place for an open-ended discourse on the best way of creating egoless project leaders (159), but I'll take a stab at it.  One way to remain egoless is to start your project as a spare-time college student, so you know that you have absolutely no political authority over the people donating time to the project.  Another way to remain egoless is to have a very high degree of self-awareness, which, in my observation (160), comes from studying evolutionary psychology (161).  I think we can get by on the second method.  If we set out to deliberately create an open, relaxed, egoless culture, as egoless as a shoestring operation, we should be able to do it.  An adept of evolutionary psychology should be able to disable the contextual triggers, suppress the activation, identify and countermand the influences, and disbelieve the suggestions of the emotions having to do with the exertion of obnoxious political control.  It would be silly to rely on this degree of mental discipline in any large group, but I don't think it's too much to ask of a few Singularitarians (162).

It would be best if the "top people" were principled Singularitarians, partly so that we can rely on them to help steer the projects along the line that leads to seed AI, and partly because we don't want them getting cold feet when the day comes to run the Last Program.  Similarly, considering the desirability of building a strongly idealistic Singularity Institute, it'd be best if the full-timers were Singularitarians.  We should also try to seed the main project with first-step (163) Singularitarian and transhumanist memes; that is, I'd like first-step Singularitarian memes to show up in literature about the purpose of the project, and I'd like the average volunteer to have some idea of the ideals that are being served.  An ideal is not necessary to an open-source project, but it does help.

(The preceding paragraph holds true of both Aicore and Flare, but more strongly in the case of Aicore.)

But!  We should be very careful not to create a mindset among ourselves that Singularitarian project members are superior to other project members.  We have to establish a mindset that says:  "Being a Singularitarian is great, but it doesn't mean you're any good as a coder."  I'm an agent of the Singularity, not Singularitarianism, not the Singularity meme.  The Singularity and the timeline projects require intelligence (164) far more than they require a particular set of beliefs.  If a Singularitarian and a non-Singularitarian have an argument over a project feature, the side that needs to win is whichever side is right.

The minor benefits of a Singularitarian leadership cannot be allowed to interfere with the creation of a meritocracy.  Discrimination on the basis of political beliefs can rip a community spirit apart.  For Linux coders to believe that they're taking on the Evil Empire is one thing; if Linux coders who said they were just in it for the money were discriminated against, the effort would die instantly.  My hope is that the people who care enough to go full-time will care that much because they know the whole world is at stake, and that the really bright people will go SL4, and thus the top layer formed of really bright people who really care will be composed mostly of full-time Singularitarians.  But we can't force it.  We can only try to make it happen by ensuring that the project literature mentions the ideals.

Memetic note:  Since the actual short-term task is creating great software, more should be said about the necessity for and uses of that software than about saving the world.  But both should be mentioned, and neither should explicitly be said to be more important than the other.  That's something people can decide for themselves; raising the issue explicitly is not cognitively necessary (165) and would create an unnecessary risk.

Above all else:  Keep the project fun!

3.1.5: Keeping the timeline on-track

Given that there is a technological timeline, steering the project is likely to become necessary; we don't want to run off the track into blind alleys.  I certainly have no problem with rewriting the timeline to take advantage of unforeseen opportunities, but we still need to move along the technological timeline without losing control of the project's direction.  I see at least four major challenges:

In the short-term of Flare, the challenge is preventing an infinitely extensible language from becoming balkanized, like Unix; or at least, ensuring that the balkanized versions still work together perfectly and seamlessly (166).

In the short-term of Aicore, the challenge is keeping what is essentially an open-sourced research project on track through severe differences of opinion about how minds should work.

In the long-term of Flare, the challenge is preventing major vendors from decommoditizing the language, and convincing everyone to go along with the transitions to Flare One and Flare Two.

In the long-term of Aicore, the challenge is ensuring that the middleware war over what set of secondary library domdules (167) and domdule packages (168) to use doesn't backfire and balkanize the architecture.  Furthermore, as time goes on, popular domdules need to be integrated into the primary libraries and if possible the architecture, perhaps in the face of any blocking patents (169).  Finally, the feature set and basic architecture need to keep moving towards seed AI, and users need to be convinced to adopt Aicore Two and Aicore Three.

The trick is maintaining control without being obnoxious about it.  Any attempt to maintain directional control through brute force - exploiting the Singularity Institute's privileged position as maintainer, adding clauses to the license - may simply result in the open-source project being forked away.  We have a great deal of influence as maintainers and project leaders, but this should not be confused with control.

The socially acceptable method of steering a technology - supreme technical excellence - should be assisted by the idealistic, Singularitarian underpinnings of both projects; supreme technical talent does tend to care about ideals.  On occasion, this will hold true even if the technical talent is working for a closed-source company.  Furthermore, open source is more powerful than closed source, and the open-source projects should be able to outcompete closed-source vendors.  Not everything should be dominated by open source, since we want a market to exist, with corresponding profit-motive.  The market occupies the "leaves" of the tree, as it were.  But in the branch nodes where the network effects live - the points where the Evil Ones might uglify the architecture - open source should be, and will be, technically superior.

However, for the correct direction to triumph over crippleware in the larger market (170), not once but every single time, standards and core interfaces and network protocols must be so intrinsically open, open on the level of the components from which higher structures are made, that no reasonable delay between the rise of a "wrong thing" and the Singularity Institute's publication of a "right thing" can create a lock on the market.  This is one of the major driving forces in the fundamental architecture of both Flare and Aicore.  An extensible implementation lets anyone grab your project away from you.  An open architecture lets you grab it back.

Concretely:  If a software company were to try to decommoditize Flare by adding a set of custom XML tags, then a modular interpreter architecture should ensure that any such tags could be added as drag-n-drop libraries to the Flare interpreter.  Furthermore, given that Flare is designed to eventually give birth to the self-optimizing compiler, translating between Flare dialects should be relatively trivial.  If the architecture is extensible enough, the dark forces can't decommoditize it, because anything they do winds up as an extension.  That's the ideal, anyway.
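The extension-absorbing architecture described above can be sketched in miniature.  This is purely illustrative: Flare was never specified, so every name here (`Interpreter`, `register_tag`, the tags themselves) is invented for the example, not a real API.  The point is structural: if the only way to add a tag is through the same public hook everyone uses, a vendor's "proprietary" tag set is just another plugin that anyone can reimplement and redistribute.

```python
# Hypothetical sketch of a modular interpreter whose tag vocabulary is
# extended only through one public registry.  All names are invented
# for illustration; no real Flare API existed.

class Interpreter:
    def __init__(self):
        self.tag_handlers = {}  # tag name -> handler function

    def register_tag(self, name, handler):
        """The single extension point.  A would-be decommoditizer can
        only add tags here, so their 'custom' tags end up as ordinary
        drop-in plugins that anyone else can replace or reimplement."""
        self.tag_handlers[name] = handler

    def evaluate(self, tag, payload):
        handler = self.tag_handlers.get(tag)
        if handler is None:
            raise ValueError(f"unknown tag: {tag}")
        return handler(payload)

interp = Interpreter()
interp.register_tag("upper", str.upper)   # a "vendor extension"
print(interp.evaluate("upper", "flare"))  # FLARE
```

Because the vendor's extension lives behind the same interface as everything else, a free reimplementation of the same tag can be dropped into the registry in its place, which is exactly the "grab it back" property the paragraph describes.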

3.1.6: Dealing with blocking patents

The US patent office is severely broken with respect to software patents.  (See the Wired article Patently Absurd or the Upside article Surviving a War with Patents.)  Software patents have been granted, in total ignorance of the prior art, on everything from multimedia to virtual function tables.  Since the US software patent system no longer fulfills its stated moral purpose, the moral issues and the legal issues must be dealt with separately.  (171).

The moral issues

The moral issue is simple with respect to so-called "nuisance" patents - patents obtained in bad faith and clear defiance of the prior art by hoodwinking the patent examiner.  (Examples would be the patents granted on multimedia and virtual function tables.)  Nuisance patents are evil; they have absolutely no moral authority and can be evaded by any means necessary.  (I talk about the means of evasion in The legal issues, below.)

On the other hand, I find it conceivable that some company will be the first to come up with a visual domdule (172) for Aicore One.  Let's suppose that they invest a fair amount in research, put out a good product, and then, in addition to copyrighting the domdule itself, they patent the concept of a visual domdule.  Then what?

I would have to say that this still verges on a nuisance patent; there's a requirement that the concept be "unobvious to a professional skilled in the art" (though the phrasing is from memory).  The idea of a visual domdule, or a chemistry domdule, or a seawater fluid dynamics domdule, is obvious to any professional skilled in the art.  Any attempt to patent the idea of a domdule covering a particular domain is evil, and may be dealt with accordingly.  You might as well try to patent the idea of "software that deals with seawater fluid dynamics".  (173).

The point at which we start getting into morally ambiguous territory would be if the company invented a visual domdule, and, in doing so, discovered a new algorithm for visual processing.  Then they patent the algorithm.  At this point, under ordinary circumstances, the company would have a legitimate claim to the algorithm - even if the algorithm is so inevitable, so necessary, that it's impossible to write a visual domdule without it.  That the algorithm is necessary - meaning that anyone trying to write a visual domdule would eventually have to invent it - does not necessarily make it "obvious", under the morality of patents.  It could still take time, money, and research talent to discover the algorithm.  Then, under the morality of patents, the company that spent the money "owns" the algorithm and has a right to prevent others from mooching off the research effort.

And then the Aicore project is in trouble, because we can't include a visual domdule with our free distribution.  (Maybe the free distribution doesn't need a visual domdule, but there's still a problem with the Elisson research project.)  The duration of a US patent is 20 years.  If someone patents an algorithm necessary to cognition, we'll hit the nanowar deadline before the patent expires.  In short, our nightmare is that someone will patent - whether it's a real patent, or a nuisance patent - an algorithm necessary to the development of the timeline or to the creation of seed AI.

Personally, I believe that 20 years is far too long a duration for software patents, thus making even a validly obtained software patent morally shaky.  It'd be like granting 100-year patents on ordinary technologies.  But even if that duration were replaced by something sane, like 5 years, the PtS timescale still wouldn't permit that kind of delay.

My moral argument for running a "patentless" operation is that I have given away the ideas in Coding a Transhuman AI, and I will be giving away the ideas behind Flare and Aicore.  In return, rather than asking for money, I'm asking everyone who builds ideas based on my ideas to give those ideas away - or at least, to let Aicore and Flare incorporate those ideas into library code, if necessary.  (174).  It's a quid pro code:  You use my ideas, and I expect to be able to use your ideas.

Let it be known to one and all that Aicore and Flare are "patent-free" efforts.  By using the ideas given away in Aicore and Flare, you relinquish any moral claim to a "blocking" ownership of ideas that you invent as a result.  You still receive social credit for being a genius, and you can still beat everyone else to the market and make a buck, but once the idea is out, you can't prevent Aicore and Flare from using it.  You can, morally, keep the algorithm a secret through compiled or obfuscated source code, forcing us to reinvent it; as long as we are allowed to reinvent it, that's fine.  You might be able, morally, to sue your fellow for-profit domdule sellers if they steal your research, but if the Aicore project decides to make your bright idea part of the freely distributed core libraries, that's just the quid pro quo.

There must be no insuperable obstacles to progress.

The legal issues

With the understanding that nobody has a moral right to sue Aicore or Flare, how do we keep from getting sued?

Patentleft and the Mozilla license

The Mozilla Public License (v1.1) (175) contains language intended to ensure that nobody can contribute open source code that infringes on a patent they own, then jump up and say:  "Aha!  Now you have to pay us license fees!"  An earlier version of the license would have exempted all Mozilla source code from infringement on any patent owned by a contributor, although this was later alleged to be a typo, and I don't believe it persists in the current version.

The point is that there exists a precedent for mentioning patents in open-source licenses.

Another honorable tradition in open source is known as "copyleft"; in fact, it is the basis of most open-source licensing.  "Copyleft" means retaining the copyright to your code, instead of placing it in the public domain, so that you can safely give the code away.  Not only is the code given away for free, but others cannot lock it up; anyone who redistributes it must pass it along under the same terms, charging at most for the cost of distribution.  Likewise, the copyright (or copyleft) notice must travel with the code, and attribution must be maintained.

The GPL created the tradition of a "viral license", or a license that applies, not only to the thing itself, but to all derivative works.  Actually, most open-source licenses have a clause about derivative works; you can't take sendmail, add a feature, and sell the result.  The GPL went further; it said, essentially, that any time you used a GPLed library to build an application, the application was a derivative work and had to be covered by the GPL.

Combining these two traditions yields the concept of "patentleft".  (176).  In essence, the license for Aicore would state that any derivative copyrights or derivative patents may not apply to open-source distributions of Aicore.  Just as Linux has an unlimited right to incorporate any modified Linux code, Aicore would have an unlimited right to incorporate any innovation that was published (177), despite, not only copyrights, but patents.

This absolute access would be triggered by the creation of a module dependent on Aicore technology, not just by the deliberate contribution of source code.  Publishing, selling, using, or merely developing a closed-source and patented domdule which (a) used the Aicore API, (b) was linked against Aicore libraries, or (c) ran under Aicore would, under the license terms, grant any open-source operation (178) the right to infringe on that patent.

In order to implement this "patentleft" theory, it may or may not be legally necessary to patent Aicore or Flare (179), just as open-source code must remain copyrighted - rather than placed in the public domain - for its license to be enforceable.  If so, the patent may or may not scare off some corporate users.  I think that a properly developed license, granting nonrevocable rights, should put all legal fears to rest; this is the same theory behind most open-source licenses.

One should bear in mind that applying for a patent can be expensive; the operation might have to wait until the Singularity Institute had reached the appropriate stage.  I would imagine that the patent could be applied for before the publication of any Aicore code, however.  (180).

Auto-downloaded modules, anonymously developed overseas

The patentleft license is the first line of defense.  Suppose it fails, either because someone sues us anyway, or because a random nuisance patent is used against us.  Suppose that we lose the legal battle, or that we don't have enough money to fight, or that the judge issues a preliminary injunction.  As is often the case when a legal system malfunctions, there is nothing we can do that will make us completely safe from the lawyers.

Both Aicore and Flare should run on a plugin architecture, an absolutely modular design.  This being the case, any modules we are legally barred from developing - this goes for encryption technology too, not just nuisance patents - could be developed by an operation based overseas.  I believe Netscape was developing an entire browser in China, at one point; I'm not sure what came of that, or why they weren't using a plugin architecture, but it will serve as an example.

One CVS site overseas; infrastructure for secure, encrypted, anonymous development; and the modules we need are available on Chinese or Russian servers.  Obviously, we'd have to digitally sign versions of the source code which we approved as safe and noncorrupted, but there's no law against distributing digitally signed checksums of strong encryption code.  Likewise, our installer would have to automatically download the code, perhaps through indirection; i.e., the Singularity Institute server contains the URLs of the latest code and signed checksums for the contents, and the installer downloads the strong encryption module and checks the signature.
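The installer-side check described above can be sketched as follows.  This is a hedged illustration, not a real installer: the manifest format, module name, and URL are all invented for the example, and verifying the manifest's own digital signature (which would need a public-key library) is assumed to have already happened.  What remains is the core step: refuse to install any downloaded module whose SHA-256 digest doesn't match the signed manifest.

```python
# Sketch of the sign-and-download check: the Institute's server
# publishes a (digitally signed) manifest mapping each module to its
# SHA-256 digest; the installer fetches the module from wherever it
# lives and refuses it unless the digest matches.  Manifest format,
# module name, and URL are invented for illustration; checking the
# manifest's own signature is assumed done and is omitted here.

import hashlib

# Manifest as it would appear after its signature has been verified.
manifest = {
    "strong-crypto-module": {
        "url": "http://example.org/modules/strong-crypto.tar.gz",
        "sha256": hashlib.sha256(b"module contents").hexdigest(),
    },
}

def verify_module(name, downloaded_bytes):
    """Return True only if the downloaded bytes match the signed digest."""
    expected = manifest[name]["sha256"]
    actual = hashlib.sha256(downloaded_bytes).hexdigest()
    return actual == expected

# Simulated download; a real installer would fetch manifest[name]["url"].
assert verify_module("strong-crypto-module", b"module contents")
assert not verify_module("strong-crypto-module", b"tampered contents")
```

Note the indirection the text calls for: the Institute's server distributes only the manifest (URLs plus signed digests), which is legal to publish anywhere, while the module bytes themselves come from the overseas host.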

Admittedly, a judge might be skeptical of such goings-on.  Ideally, we should use sign-and-download techniques for a wide variety of optional modules, not just strong encryption and so on.  It will be harder for the government to mount a legal challenge if the challenge has to be leveled at plugin architectures or distributed installations in general.


All of this might prove unnecessary, of course.  The patentleft license might work perfectly (182), thus obviating all necessity for overseas development.  Nonetheless, distributed installation is something we should set up, with a few optional items, as soon as possible; it establishes the precedent that distributed installation is a general technique, not just a method of evading patent laws or export restrictions.  On the other hand, secure anonymous development is something we might want to have the source code around for.  But we shouldn't release it, much less use it, until the patentleft defenses fail and anonymous development becomes necessary.

Note:  Even if the patent-infringing modules are downloaded from overseas, the end users might, or might not, be legally liable for the use of the items.  So the techniques described here do not solve the problem completely.  If they become necessary, we might lose a few nervous large corporations, or an even larger segment of the audience.  We should definitely try to avoid this contingency, fighting it out in court before resorting to overseas development; both because of the PR problems, and because of the user-side problems.  But even in the worst case, we won't be crippled.  Quiet use by people who aren't rich enough to be sued is a smaller market segment, but it's enough to keep an open-source project going.  The plan will be slowed, but it will survive.

Software patent review board

Considering the "broken" state of software patents, which I'm sure only gets worse on a global scale (183), it seems likely to me that some kind of industry-supported arbitration board will come into being (184).  In effect, a duplicate patent office, one that decides whether government-issued patents are "valid" or "nuisance".  (I would also expect there to be a considerable fight over the existence of said office, all sorts of government critters screaming about the theft of the authority they were too dumb to use responsibly.  I don't know, offhand, who's likely to win.)

In the event that such an office comes into existence, we want to get in on the early stages, so we can make sure they understand the concept of "patentleft".

3.1.7: Opportunities for profit

It would be useful, for many reasons, if there were the prospect of making money off the development timeline at some point - or at least, intermediate profits to point to.  Many Singularitarians, despite our admittedly fanatical and ascetic devotion to our Cause, would probably be more effective in its service if we were filthy rich.  (185).

For the reasons discussed in Why not venture capital?, even the glorious opportunities don't imply that we should ditch the nonprofit concept and start our own company.  (186).  But there's the possibility of starting "supporting" companies alongside, which, on top of the opportunity for profit, might take some of the strain off the Singularity Institute.

The Magic Cauldron shows that the correct business model for certain types of software is selling support, rather than selling the software itself.  It's conceivable that the Singularity Institute could exist alongside "Crimson Headgear Inc." (187), which would sell CDs and technical support for (free) Flare distributions.  Of course, this company could only start up after Flare had gotten going, but once it did, it would represent, not just an opportunity to create wealthy Singularitarians (188), but the chance to tap into the startup and for-profit side of Silicon Valley.  Furthermore, the Crimson Headgear people could legitimately help develop, advertise, and evangelize Flare on a more traditional basis.  (We could do that ourselves, but they would have a better excuse.)

Besides the Flare software, a market would be born for Flare programmers and Aicore developers.  Consulting companies, training companies, certification companies, and headhunting companies would all be possibilities.  (I even see the opportunity to put the Flare job market on a really systematic basis by building a centralized resume repository right into the Flare IDE.  The same might go for a centralized repository of contract jobs that Flare freelancers could bid on.  You get the idea.)

More mundanely, there may be the chance to provide Aicore T-Shirts and Flare coffee mugs, or for that matter "Singularity 2005" T-Shirts (189).  I'm not sure whether nonprofits are allowed to sell tchotchkes (191), but I imagine this is one operation that wouldn't have to be split off from the Singularity Institute.  Whether the income would be significant is another question.

Finally, the rule that source code must be open applies only to direct-to-Singularity projects.  Side-effect applications we want to accelerate - Teacher AIs (192), Elizas (193), and so on - could conceivably be run as closed-source private corporations.  Of course, it might be Singularity-preferable if these applications were open, free, or at least very cheap; on the other hand, developing and selling them as private software may move faster, thanks to a larger marketing budget.

Or, leaving the Singularity arena entirely, there should just be cool things to do with Aicore and Flare.  Cool things that could become the basis for companies, allowing Singularitarians to leverage our research talent and reputation into startup products and venture capital - if, of course, we have the spare time.  But the point is, we're planning to deliberately create an industry.  If we succeed, we'll be at the center from start to finish.  We'll be the ones who choose the direction of the industry and know what's coming.  If we can't make a few bucks off that, we don't deserve to be rich.

I mean, speaking of open source, Eric S. Raymond woke up one morning and found out that VA Linux, of which he was a member of the Board of Directors, had gone public, making his 0.3% share of the company worth $36M.  Whether the loony stock valuations will still be around in five years is doubtful, but even at sane prices, major names in Flare and Aicore should still be valued members of Boards of Directors, and minor names should still get in on IPOs (194), and all concerned should still become reasonably rich as a result.

But we shouldn't get our priorities mixed up.  The primary purpose is a Singularity, and no IPO profit can compare to immortality and transhumanity.  The primary goals are the timeline projects; if we divert our efforts, or even divert our purpose, we'll probably fail and be left with nothing.  The opportunity to start a company is a side effect, and it'll arrive when it arrives.

3.2: The Singularity Institute

3.2.1: Institute timeline

The stages of the Institute's growth will be determined more by available funding than by time or progress.  Thus, no durations or concrete times are given.  We move from one stage to the next when that level of funding becomes available, and from what I know of history, that's usually random chance.  Likewise, although the stages are presented in order, we should feel free to skip anything skippable.  If Marc Andreessen walks up and hands us ten million dollars, we should set up a ten-million-dollar Institute as fast as the infrastructure can be called into existence, without bothering about the intervening steps.

That said, the financial numbers are guesstimates, even with respect to the order of magnitude.  Hard data and real details on how much it costs to support a given level of functionality - sample histories of nonprofits - are amazingly difficult to find on the 'Net.  If any of my readers have a better feel for the finances, please let me know if I'm underestimating or overestimating - or if I'm exactly right, for that matter.

Skeleton Institute

This would be an Institute with no money coming in or going out, but possessed of nonprofit status and a Board of Directors - everything needed to accept contributions if a donor could be found.  Getting to this stage would take somewhere between $1K and $12K up-front, for the legal work on applying for nonprofit status.  The primary advantage would be that we'd be able to apply for grants.  The secondary advantage would be that we'd be able to put up a flag and say, "Look, here's a Singularity Institute!"  In other words, we'd have increased access to the "fire and forget" class of donors.

At this stage, we'd be looking for a major supporter and writing grant proposals.  There'd also be the possibility of selling paid memberships and putting out a newsletter, but I think we should just skip it.  (195).

Infrastructure:  We probably wouldn't have a central physical location.  We might have a small Website put up by volunteers.

The short-term:  One or two projects

From around O($100K) to O($1M) (196), the Singularity Institute would be capable of running one or two projects - Flare and Aicore.  In other words, in the short term, the Singularity Institute could implement the first few years of 3.1.2: Development timeline by having at least two full-time developers, one on Flare and me on Aicore.  Paid evangelism and other research projects would have to wait, and any other services related to the projects (tech support, etc.) would have to be provided by volunteers, or other institutions.

This level of funding could be reached from initiation by finding a major donor, or reached from the skeleton stage by successfully applying for a grant.

Infrastructure:  We might be able to get the people together in a central physical location, although not necessarily real offices.  Our people might work at home, but we should be able to buy them development stations.  We can have a professional-looking Website.

The Foresight Institute spent $404K in 1997; now, in 1999, they have 11 staff members listed, not counting Foresight Europe.  This gives us an eyeball figure for the order of magnitude.

The mid-term:  Development teams and memetics

With O($10M), it would become possible to deploy full-time development teams and professional writers in support of Flare (which might not need it) and Aicore (which probably will).  Volunteers and major corporate users can be recruited by at least one paid evangelist.  We can employ one or two people to write articles and publish papers about the Singularity or Singularitarianism.  We can hold conventions.  We can influence events outside our immediate circle.

We can move as far along the Flare and Aicore timelines as we need to, given indefinite time.  I'm not sure that starting the Elisson project (or obtaining supercomputer time) would be easy with this level of funding, but it should be possible to at least start a minimal Elisson project, and worry about obtaining supercomputer time when that becomes an issue.

Infrastructure:  We can have central offices, with a secretary and an accountant if either should become necessary.

The long-term:  Deep research, rapid development, mass evangelism, and meddling

With O($100M) to O($1G), or more (197), the Singularity Institute should definitely be able to make it to the Singularity.  Without funding-related speed limits on AI development, we may even be able to beat the nanowar deadline.  We should be able to fund Elisson research and supporting research in cognitive science.  We should be able to fund subsidiary projects, such as Teacher AIs and design-ahead of nanowar survival stations (198).  We could engage in large-scale evangelism.  We could "meddle" in things like independent patent agencies, government hearings on biotechnology, and so on.

If large-scale funding becomes available, some of it might not go to the Singularity Institute.  We might need to set up a sibling organization to handle political lobbying, which is not tax-deductible.  There are also some for-profit ventures that would be nice, though not necessary, to have around.  E.g. if AI starts having an impact on the economy, there are some other technologies (199) we should sponsor (preferably in advance) to cushion the impact of ultraproductivity.

Planning in any greater detail doesn't seem to be necessary this far in advance, since even if that amount of money lands in our lap tomorrow, the time necessary simply for the legalese should be enough to figure out exactly what to do with it.

Infrastructure:  We can have offices in Silicon Valley (200).  We can also have a small nanotechnology laboratory (and possibly even a supercomputer) in the basement.

3.2.2: Nonprofit status

(The following discussion assumes we're operating under US tax law.)

The Singularity Institute should be a nonprofit operation - in legal terms, a "501(c)(3) public charity".  Whether a nonprofit is classified as a "private foundation" instead of a "public charity" depends on the funding method; exactly how is unclear to me, but I think it has something to do with the ratio of assets to expenditures.  Private foundations acquire several significant legal restrictions.  Also, by convention, private foundations fund public charities - never vice versa, and rarely foundation-to-foundation.  For SingInst to apply for grants from existing foundations, it must have legal "public charity" status.  In previous drafts, I had worried that having only a few major funders would change the status from "public charity" to "private foundation", but this shouldn't be a problem (201).

As far as I can tell, however, there is no significant advantage - with respect to the law or grant applications - for public charities with a narrow focus.  Thus it should not be necessary to have an elaborate multi-nonprofit structure (202) - certainly not at first!  (203).  At most, we might have to start a Singularity Outreach Committee a few years down the line, or the first time we want to talk to a Congressperson, since nonprofits lose their status if they engage in political lobbying.  But at initiation, the Singularity Institute should be enough.

Unless, that is, the section about nonprofits needing to have "educational, scientific, religious or whatever" purposes is an exclusive or, in which case we would need separate Institutes for research and memetics.  For example, Foresight and Extropy are 501(c)(3) educational charities, while the Singularity Institute would be a 501(c)(3) scientific charity.  But I don't anticipate this being a problem.

Why not venture capital?

When dealing with extremely cool technologies, there's always the temptation to guard every idea like gold, on the theory that funding the Singularity takes money and the idea is the ticket to founding a major company.  Well, founding a company takes a lot more than an idea.  It takes time, and effort, and venture capital, and the acceptance of a 90% chance of failure, but mostly time.  Even supposing the success of the startup, it would simply take too much time to develop the timeline technologies as private projects.  By the time the company goes public and we can finally go to work on the "real" project, the planet will probably have fried.  The core architectures must be public to get the necessary speed.

Also, venture capital involves a set of assumptions that would make it very difficult to implement the PtS plan, or even make a profit.  There may be profits eventually (see 3.1.7: Opportunities for profit), but getting there requires the long-term mindset to concentrate on building a real mind, not making pretty toys.  (204).  A venture capitalist would probably insist on a proprietary architecture, meaning we'd have ten full-time developers instead of one full-time developer and a thousand volunteers, probably a net loss.

The truth is that we aren't trying to make a profit; we're trying to bring about the end of the human condition, with any profits along the way being a pleasant side-effect.  It doesn't seem likely to me that a venture capitalist would be willing to accept that philosophy, no matter what the return-on-investment looked like.

3.2.3: Funding, grants, and donations

I expect that almost all of our short-term and mid-term funding will come from two sources:  Wealthy individuals (usually from Silicon Valley) and private foundations.

Private foundations

A foundation generally provides funding in the form of a "grant", which is usually, but not always, tied to the implementation of a particular project.  (As discussed above, grants are almost never given except to public charities.)  Whether the grant is given depends on whether the foundation approves of the project.  The other type of grant is general operating funds; this type of grant is much more rare, and presumably occurs only if the foundation very strongly approves of the charity, or if the foundation was chartered with the purpose of providing general operating funds.  Most foundations have fairly tight charters and purposes, and will not be able to fund anything except projects within those purposes.

Funding from foundations goes towards whatever you convinced the foundation to fund.  The nature and priority of the projects funded by foundations will probably be tuned to the preferences of those particular foundations, unless the range of foundations is so wide that we can pick and choose.  Thus, resources from foundations are "non-optimable".

DEFN: Optimable resources:  Resources which can be used optimally; that is, resources which can be used wherever they'll do the most good at that time.  Non-optimable resources would include most grants from foundations, which can only be used on specific projects.  An "optimable project" is one important enough to be pursued with optimable resources (205).  A "non-optimable" project is something we can do if the money falls into our laps, but not otherwise - at least, not at that time.

Finding the foundations to fund the highest-priority tasks will take some work, and along the way we may run into foundations with mandates that fit low-priority projects.  The upshot, perhaps, is that low-priority (but still Singularity-related) projects - in particular, I have some cognitive science questions that might help with constructing an AI - might still be undertaken, not because completion is most necessary, but because funding is most available.  Of course, I am not advocating that we waste our efforts on makework.  There's likely to be a limited number of Singularitarians available, and only projects that advance the Singularity should be considered.

Likewise, there may be shifts in the particular emphasis of the important projects.  One of the things I "personally" would very much like to do - in my capacity as a human rather than as a Singularitarian - would be developing a "Teacher AI".  I'd like to see an AI capable of teaching children mathematics - real, fun mathematics, not the dull pap they get in school; starting from arithmetic (or any later level) and continuing to, say, calculus. If we just can't get funding for the general Aicore project, then developing an Aicore and the associated Teacher domdules would probably fit the mandate of a far greater number of foundations.  That's a last resort, though, since it would involve a genuine change of focus, a diversion of research talent, more time to the first release, and a considerably greater probability of failure.

(On the other hand, I do intend to create Teachers eventually - just farther along the timeline, when the substrate is there - and it would be entirely honest to mention this possibility in grant proposals.  There are all kinds of wonderful improvements to the world that would be possible with an AI capable of dumber-than-human general cognition, and I intend to take a shot at them; if mentioning specific examples proves persuasive, then I see no problem with doing so.)

I get the impression that the primary effort required to obtain funding from foundations is in writing grant proposals, and occasionally engaging in telephone talks to nail things down.

Open-source grant proposals

I am intrigued by the prospect of writing "open-source grant proposals".  The seed material would consist of the topic and suggested subtopics to cover, any information we have about which past proposals were successful, and any previously sent-in proposals we have on hand.  Volunteers could then take their stab at writing proposals for particular foundations, or suggesting foundations to write for.  Sufficiently literate efforts would get sent off.  There are probably a number of intelligent people who would love to donate some time to the Singularity and simply haven't had an outlet - in fact, writing is probably the least barriered-to-entry volunteer work around, unless you count writing skills as a barrier.  Then again, I don't know how feasible this is, or if writing grant proposals is enough work to require volunteer efforts, so it's just a thought.  If this idea is unworkable, writing grant proposals may be a full-time job for someone.

Even if the Singularity Institute has enough funding (from individuals, most likely) to run a project without a grant from a foundation, the open-source proposal project might still be worthwhile.  I doubt any project will run out of uses for money.

Individual supporters

Private individuals can fund whatever they like, and are likely to have a less formal and more personal relation to the charity or the charity's purpose.  Thus, they are much more likely to provide general operating funds, or to fund any given project.  However, private individuals - unlike foundations - do not exist for the sole purpose of philanthropy, do not publish their funding criteria, and are thus, in brief, harder to get.  (206).

Funding from individuals (whether large or small donors) can probably be used optimally - on whatever project is presently most important.  The nature and priority of the projects funded by individuals can be determined by the preferences of the Singularity Institute.

The effort required to reach an individual supporter consists of (a) obtaining publicity or (b) persuasion in private interviews.

In the very long-term, if there is a sizable percentage of the public interested in supporting the Singularity, small individual contributions may become as important as large contributions or grants.  (This is not likely, however.)  I think it would also be a good idea to have a means of handling small contributions in the short-term, simply because a small contribution will do more good at the Singularity Institute than a small contribution elsewhere.  (Besides, It's Their Planet Too and it should be as easy as possible to get involved.)

If there's a project that scales down well, it might be a good idea to have that project specifically supported by individual contributions.  Maybe we could even provide reports on how the contribution was spent - i.e. "Your contribution went towards paying for a computer that will be used for development on the Flare project."

Paid memberships:  Why bother?

I'm not sure we should have paid memberships in the Singularity Institute, or even memberships at all.  It seems like an inefficient way to run things, even if it used to be traditional.  In the days of Web architecture and hypergrowth, paid membership - even formal membership - is only another barrier to entry.

Perhaps Institutes such as Extropy and Foresight got started to support minimal infrastructure - e.g. a newsletter - for the members, in which case membership funding is reasonable.  But the Singularity Institute exists to change the world, which I don't think can be funded out of any reasonable membership fee.  Less ambitious purposes, such as community solidification, don't require an Institute; they can be served by free mailing lists.

I can see sociological benefits to creating a list of recognized Singularitarians; I see no reason why this should be confused with payment of a token fee (207), or the problem of funding.  Likewise, if anyone can subscribe to an online newsletter, why conflate the recipient list with the membership list?  Above all, paid membership in the Singularity Institute should not be a prerequisite for access to any projects which benefit from increased participation.

"Membership" seems like a centralized way of tracking a lot of things that will work far better if tracked separately.  The list of known Singularitarians, assuming we have the time and the inclination to compile one, should be as close as we get.

Conclusion

Funding from individuals is the only way of moving into the "long-term" stage (208).  While it may be possible to work with minimal funding, I believe the primary strategy should be to find at least one donor wealthy enough that we simply don't have to worry.  Ideally, we should become the Silicon Crusade, the heart and ideal of Silicon Valley, the charity of choice for every technomillionaire.  Why not?  The Singularity deserves to be a crusade, and the meme is powerful enough.

We're trying to massively alter the fate of the human species; as I've remarked elsewhere in this document, trying to do it on a shoestring is silly.  Our ideas are on the grand scale, our goals are on the grand scale, and there's no reason to think small.

In the beginning, it may be necessary to form a skeleton Institute and then apply for grants.  But I don't think we can realistically get through the middle and final stages of the PtS timeline on an underfunded operation.  We can gain credibility and publicity by going through the initial stages on a shoestring, if that becomes necessary, but longer-term operations should assume adequate funding.  To get to the Singularity, to design a true seed AI and rent the hardware to run it, we need to eventually become a well-funded organization.

3.2.4: Leadership strategy

A nonprofit organization, like a corporation, requires a Board of Directors and a chairperson, which brings us to the question of "leadership".  The precise question of who should be on the Board of Directors is addressed in a later section (209); for now, we'll ask what kind of leadership the Singularity Institute needs, and why.

This is my nightmare scenario:  We're at the Elisson stage, we've got a working seed AI, we're almost ready to run it, and we so inform the Board of Directors.  Who's on the Board?  A group of funders that thought the Singularity sounded cool, but never really adjusted, emotionally, to the concept (210).  Now all of a sudden it's here, it's real, it's decision time - and they lose their nerve.  If we're really unlucky, the on-highs will start meddling in the design of the AI, demanding unworkable Asimov Laws (211) and the like.  The urge to meddle is strong; it seems to be a human instinct to do something, anything, even if it's the wrong thing, when anything important is at stake.

There are deep policy questions surrounding the question of how to program the Last AI.  Who should make that decision?  Well, me, of course.  But giving an observer-independent answer, I would say "whoever knows the most about the seed AI" - the same person who decides whether to add any other architectural feature.  (This isn't necessarily me; one of my lifelong ambitions is to find a replacement.)  I can't rely on this leader acting from the same philosophical motives as mine (which are, of course, the only correct ones), but someone who intimately understands the AI is unlikely to voluntarily do anything blatantly suicidal, and that's enough for me (213).  So if we write that into the Institute's charter, does that solve the problem?

I don't think so.  The headlong rush for Singularity is a decision that requires maturity - the ability to acknowledge risks that exist (such as nanowar) and take risks that are necessary (such as not meddling with the seed AI).  If that maturity doesn't exist in the Board - the ability to take the Singularity seriously, which is 90% of the definition of a Singularitarian - then we're likely to run into problems long before the Last Minute.  The Board might decide to stop all AI research and concentrate on uploading, for example.

Making policy decisions

So what does it take to make policy decisions?  I think the qualifications are, in order of importance:

However, these "ideological" qualifications, or character qualifications, don't have to hold entirely true of all Board members.  I'd just get nervous if they didn't hold true of a majority, or if they didn't hold true of the chairperson.  If the Board is really intended to direct Singularitarian policy in the long run, then it should hold true of almost everyone.

The dangers of power

But I don't think the Board should direct policy.  Singularity Institute policy, maybe; Singularity policy, definitely not.  (Besides, there are other criteria involved in choosing the Board; it makes no sense to try and make one body serve two very different design functions.)  So what am I proposing, a Council of Navigators?  No.  Actually, I don't think anyone should have that power.  Maybe it's just my pseudotraumatic childhood, but in my experience, power is something that other people use to screw up your life.  I would want to minimize, as much as possible, the power held by the Board or by any other formal body, and I speak as someone who plans to be on the Board.

Even if it were legally possible to take all the reins of power into my own hands, I'm still not sure it would be wise.  Concentrating power in your own hands doesn't mean you're safe; it means that the power is concentrated, ready to be taken away and used against you.  That power should remain distributed over all the Singularitarians in the effort.  And that doesn't mean some kind of voting system, either!  A voting system would just distribute power to whoever decides who the voters are.

The deep policy questions about the Singularity cannot be settled politically; they are, ultimately, engineering questions.  Putting the coercive power to decide these questions in the hands of anyone, even me, even a democracy of Singularitarians or a planetary plebiscite, probably isn't going to help.  The ultimate questions should be left in the hands of the same engineers who would make the decision if it wasn't so vastly important, morally charged, and philosophically controversial.  Yes, there's a possibility that the engineers will make mistakes, but that's not as bad as the possibilities opened up simply by the idea of making it a political question.  If the AI's goal system is designed by a democracy of Singularitarians, why not by the Board?  Why not by the government?  Why shouldn't every television commentator second-guess us?  Who gave us the power to decide the fate of humanity, if it's a political question?

The question, then, is how to ensure that the questions remain in the hands of the engineers.  Which brings us to 3.2.5: The open organization.

3.2.5: The open organization

The goal introduced by the previous section is preventing anyone, including the Singularity Institute's own Board of Directors, from exerting coercive control to torpedo or pervert the seed AI project.  In short, preserving the independence of the engineers.  (Yes, I plan to be on the Board of Directors, and I'll do my best to prevent interference, but I can always be hit by a truck, or outvoted.)

The projects are all open-source, so it's certainly possible to fork off a new project - the Singularity Institute can't threaten to withhold the source code (219).  Can the staff quit en masse and move to another organization, without penalties?  Sure; we'll write that into the contracts.  So now the engineers have a counterargument to any sufficiently obnoxious interference:  "We'll pack up our code and leave."

I am not suggesting that starting a new Institute would be easy, or painless, or that the new Institute would be as good as the old.  Commitments will tend to accrue to the "Singularity Institute" - reputation, funding by foundations, the Web address visited by open-source contributors.  (220).  Likewise, the engineers would have to convince at least one major funder to back the new Institute, especially if access to supercomputing hardware is needed.  In short, the process would not be inertialess.  The Board of the existing Institute would have the normal "power of the paycheck" over individual engineers on a day-to-day basis, an organizational design we have no overpowering reason to tamper with.  But as long as it's practically possible to split off a new Institute, however difficult, there's an "out" if the Board starts messing up the seed AI project.  In a final emergency, this will establish a limit on how screwed-up things can get.

For it to be practically possible to start a new Institute, the unique position of the Singularity Institute has to be minimized.  Hence the caveat in 3.1.6: Dealing with blocking patents about using language that refers to any open-source effort, not just the Singularity Institute.  Another privileged position would be the internal administrative data of the Singularity Institute - salaries and other "preferences files" of the individual, a list of contacts at foundations, the complete list of open-source contributors and the internal source for Websites, and so on.

This brings us to the concept of the open-source organization; that is, publish all accounting information and everything else that can be published without hurting anyone.  Other items, such as contact lists, may not be openly publishable (221).  Even so, such information should still be available to staff, and departing staff should have the right to walk away with it and use it.  (Although, in the case of abusable information, we might rule that ten or more staff members have to issue a united request for the information.)

The point is that secrecy of information usually serves nobody but the people holding the secret, and often not even them.  If the knowledge and administrative details of the Singularity Institute are as open as the source code, it shouldn't be difficult to fork off a new Institute in case of problems.  And there should be other benefits as well, some of the same benefits of open source.  Anyone can contribute advice, anyone can build as we have built - I'm sure I'd've had a much easier time writing this document if I had access to, say, the detailed history of Foresight.

Furthermore, I think running an open-source organization will lead to more contributions.  Open books are easier to trust, just like open code, and also easier to get interested in.  When you can see exactly how much money a project has, and the open list of what it needs, then the idea of contributing will become much more concrete.  If the detailed plans for expanding a project exist, then the project is more likely to be expanded.  There'd even be the possibility of tracking exactly where donations go, or selecting between possible donations, another way to provide positive feedback to donors and get them more involved with the organization.

I know that publishing certain things isn't traditional, but unless I get really strong opposition, I'm going to push for publishing them anyway.  Like the salaries of all the staff members, for example.  Is there really any good reason not to publish this?  I don't think so.  At absolute minimum, all such information should be internally available.  (In a public corporation built around thousands of competing mini-fiefs, office politics mandates secrecy.  If they ran true "open-book management", an open-source company, the fiefs might never form in the first place.  But that's a topic for another time.)

Finally, of course, the usual regulations necessary to enforce "organizational discipline", like not making fun of upper management, should simply be ditched.  That's just a holdover from the Industrial Revolution.  You can't make a corporation (for-profit or non-profit) a free democracy (222), but you can make it free.

Speaking as a nearly certain member of the Singularity Institute's Board of Directors, I do not see how the Singularity will be served by giving the Board any privileged status in the Singularitarian community, or in the part of that community that forms the Singularity Institute.

3.2.6: The Board of Directors

Both non-profit and for-profit corporations, by law, are managed by a Board of Directors.  The organizational design, and to some extent the responsibilities, are mandated by federal and state laws.  (For extra bonus fun, the state laws vary.)  The Web (223) claims that a Board of Directors is legally required to have a Chairperson, a Vice-Chair, a Treasurer, and a Secretary.  Looking at Foresight's Board of Directors, however, I see that it has only three people.  So we'll probably need to consult a legal expert before trying to grok the legal constraints on the Singularity Institute's Board.

The Board/staff problem

Okay, you say; even if you do need four whole people, they shouldn't be too hard to find, right?  But it would seem that modern nonprofit law has an astonishingly medieval built-in bias:  Staff members aren't allowed to form the board.  (224).  There's a rigid set of traditional distinctions between the responsibilities of Board members and staff, many of which, I get the impression, are incarnated in law.  I'm not sure we can find a state to charter in that will let us run things sanely, although California (for example) allows up to 49% of the Board to be composed of staff members.

The problem is that the people whom I would otherwise place on the Board of Directors are also the people I'd pick to head the development efforts.  I can think of two Singularitarians whom I'd like to see on the Board, myself included, both of whom would probably be employed by the Singularity Institute.  See 4.2: Institute initiation for a discussion of options for handling this problem during the initial stages.

Evangelism by the Board

In the long run, the only Singularity-related (rather than administrative) function of the Board (rather than the Institute) will probably be providing credibility to our evangelists (see memetics).  In the environment of American industrial ancestry, the Boards of nonprofits were composed of the founding wealthy individuals, in a time when wealth usually meant trying to behave like a prototypical English aristocrat.  (Hence the traditions and regulations related to not getting your hands dirty.)  But if you wanted to persuade other pseudo-aristocrats to join up, or preside at functions where pseudo-aristocrats would be present, you had to be a pseudo-aristocrat yourself, or they wouldn't listen.  Thus a traditional responsibility of a nonprofit Board member is being the public representative of the charity, especially at fundraisers.  We could go along with that, even if it's a tad outdated (225).  I'm not suggesting that we pack the Board with evangelists, unless a non-governing Advisory Board would lend as much credibility.  I'm just suggesting that the top evangelist might want to be a Board member for added punch, especially with large corporations and mainstream media.

3.2.7: Volunteers: Good or bad?

There is, I feel, a great deal of social design baggage created by the origins of nonprofit work as conscience-salve.  Rich people donate money and go on the Board of Directors; middle-class people donate time and become volunteers; paid staff may get additional job satisfaction, even to the extent of ignoring higher-paid jobs, but they don't get full karmic credit for their time.  There's an idea that real altruists shouldn't expect to be paid, that people should split their time between making a living as tobacco executives and salving their conscience as clerks in the Hungry Cat Drive.

The Singularity is not conscience-salve.  At this point in time, humanity is engaged in "making a living" - running the factories and so on.  But for that to be meaningful, somebody has to win.  It's not enough to go one more day without losing.  It's not enough to just stay alive.  Sooner or later the odds run out, and what's the point of staying in the game if it just goes on forever?  As I see it, the point of staying in the game is to win.  For running the factories to matter, someone has to be trying to create an AI.  Someone has to be trying to win.  That's us.

I think that trying to create a Singularity is just as "valid" a job as running the factories, and I don't think that expecting to be paid for it is unreasonable.  It might not be possible, but if the funds are available, that's what should be done.  It's not some over-and-above hobby like trying to stamp out war or end world hunger, it's as much a part of real life as an ambassador trying to prevent some particular war, or an office projecting grain exports.

Of course, it may take three years of living with the Singularity meme on a daily basis before one starts thinking about it in those terms.  Then, too, my economic theories may also have something to do with it.  This is not the place for a full exposition, but in brief:  As technology becomes more powerful, productivity goes up.  Where before 100 million people were supporting 100 million people, now it only takes 90 million people to support 100 million people.  Under those circumstances, four things can happen:  First, standards of living can go up, but due to a complex sort of inertia, our economy tends to lag behind in doing this.  Second, 10 million people can become unemployed and starve, after which it only takes 81 million people to support 90 million... you get the idea.  The third option is to take the 10 million "surplus" people and put them to work on some common quest for humanity - space travel, investigating physical laws, building an AI.  The fourth option is to create a lot of paperwork, thus absorbing the additional productivity.  The modern American economy is employing a mix of all four, mainly option four.  I, of course, favor a mix of option one and option three.
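The arithmetic behind the second option - each round of starvation shrinking the workforce needed for the next - can be sketched in a few lines.  The numbers are the illustrative millions from the paragraph above, not real economic data:

```python
# Illustrative sketch of the productivity arithmetic in the text: 90 million
# workers support 100 million people, so each worker supports 100/90 people.
# Under option two, the "surplus" people starve each round, shrinking both
# the population and the workforce needed to support it.
PRODUCTIVITY = 100 / 90  # people supported per worker

population = 100.0  # millions
for round_num in range(3):
    workers_needed = population / PRODUCTIVITY
    surplus = population - workers_needed
    print(f"Population {population:.0f}M needs {workers_needed:.0f}M workers "
          f"({surplus:.0f}M surplus)")
    population = workers_needed  # option two: the surplus starves
```

Running this reproduces the text's sequence: 90 million workers for 100 million people, then 81 million for 90 million, and so on downward.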

Now that humanity - or at least Western democracy - is running a surplus, I think that the Singularity - trying to win - is one of the projects that can legitimately absorb such a surplus as an alternative to mass unemployment.  If only a few individuals recognize that mental framework, it doesn't change the underlying concept.

The point is that, except for individuals with enough money in the bank to do whatever they want all day, and except for individuals who are donating small amounts of time (rather than sacrificing large amounts), the Singularity Institute should pay for what it gets.  (If, of course, the Singularity Institute can afford it; just because the Institute was created as a framework for humanity's quest doesn't mean that humanity is energizing it.)  I see no a priori reason anyone should suffer for contributing to the Singularity; sacrifice is unnecessary to validate altruism.  Full-time contributors should be paid - making them, I suppose, "staff", not "volunteers".

Well, that was a whole long speech, and it probably really belongs under "Memetics strategy" as a (true, always true) belief to be offered to funders and supporters.  The direct utility-to-Singularity of this strategy is that - if funding is available - it will make the Singularity Institute's support stronger and more reliable.  In essence, asking volunteers to make difficult sacrifices "burns" willpower and Singularitarian-ness to replace funding.  I'm not saying that this can't be done, because the Singularity is that important to many of us.  I'm saying that it should be held as a last resort, and not imposed because of romantic traditions of sacrifice or narrowly-domained cost-benefit visualizations.  This is a theme developed more fully in 3.5.1: Building a solid operation.

3.3: Memetics strategy

3.3.1: Memetics timeline

Of all the timelines listed here, the memetics timeline is the most difficult to define, due to the multiple, conflicting audiences and the multiple, conflicting priorities and the multiple, conflicting deadlines.  Some of the results being balanced include:

Some of the considerations that result:

Almost any action, any publication, will impact at least two of these considerations, and of course, all the forces are interacting with each other.  Under the circumstances, I say we wing it (227).  With that in mind, my current visualization of the timeline is as follows:

Short-term:  The people we need

In the short-term, the primary goal is sparking the creation of new Singularitarians, particularly founders, funders, writers, and genius-level programmers.  It may be necessary, at most, to address SL1 audiences, and preferably SL2 or SL3.  As a general rule, assume that it takes at least 1000 readers - readers, not Web hits, not people who got the magazine and never read the article - to produce a helpful Singularitarian.  And "helpful" means someone who's likely to help out during these initial stages, not just someone who's favorably inclined.  (Order-of-magnitude derived from the TMOL site.)  I'm not sure how this varies with shock level, but a pre-selected SL2 or SL3 audience should be good for an order-of-magnitude improvement.
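That rule of thumb reduces to a simple funnel calculation.  Both constants below are the rough estimates from the paragraph above (the 1000:1 reader ratio and the assumed order-of-magnitude improvement for pre-selected audiences), not measured data:

```python
# Rough conversion funnel from the text: at least 1000 actual readers per
# helpful Singularitarian for a general audience; a pre-selected SL2/SL3
# audience is assumed (order-of-magnitude guess) to convert ten times better.
READERS_PER_HELPER = 1000   # general audience, order of magnitude from TMOL
PRESELECTION_FACTOR = 10    # assumed improvement for SL2/SL3 audiences

def expected_helpers(readers: int, preselected: bool = False) -> float:
    """Expected number of helpful Singularitarians from a given readership."""
    ratio = READERS_PER_HELPER // PRESELECTION_FACTOR if preselected \
        else READERS_PER_HELPER
    return readers / ratio

print(expected_helpers(20_000))                    # general: 20.0
print(expected_helpers(20_000, preselected=True))  # SL2/SL3: 200.0
```

The point of writing it down is only to make the publication-targeting tradeoff concrete: a pre-selected audience a tenth the size is, under these assumptions, worth as much as the general one.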

My experience so far leads me to think that finding enough people should be feasible.  Difficult, but feasible.

Publications to target:  Slashdot, Mondo 2000, Analog, F&SF (228).

Any source of eyeballs is still useful, however.

Mid-term:  The Silicon Crusade

Once the Singularity Institute enters the mid-term stage, so that there's a reputable "front" organization, it will become possible to try and convert Silicon Valley over to the Singularity, en masse.  This would entail articles in such publications as Wired, the conversion of respected "celebrity" spokesfolk such as Eric S. Raymond, and reaching out to key editors and such.

However, in contradiction to the draft editions of PtS, I've concluded that Singularitarian-initiated direct-contact is not feasible at this time.  Most of the known people we need (229), to preserve their own sanity, have established a don't-call-us-we'll-call-you policy.  This doesn't mean that they aren't willing to help; just that the chain of events that ends with them helping the Singularity begins when they read an article about the Singularity and get interested enough to contact us.

Long-term:  Popularizing the Singularity

The effort to acclimate the general public to the Singularity should only be begun when necessary, or when we're sure we have the resources to prosecute a public flap.  There are two conflicting forces:  One, waiting until after a public flap gets started would mean conceding the initiative.  Two, if we can make it all the way to Singularity without it ever becoming a "public policy" issue, I think maybe we should.

Figure that if the Silicon Crusade (above) or the Singularity becomes a popular topic among technophiles, the mainstream media will notice sooner or later; if we have the resources at that point, we should seize the initiative by targeting the general populace with memes that are Singularity-supportive, or likely to produce a good first impression, in a form likely to get favorable word-of-mouth.  (In other words, keeping the future shock toned down.)  We shouldn't actually go to discussion of high-future-shock issues until it's clear that this will happen in any case.

3.3.2: Meme propagation and first-step documents

Those of my readers who are trained to an ethic of writing are probably trained to the scientific ethic, which requires the mention of every possible objection, every possible reason why a theory might be wrong.  Applying this rule directly to memetics would require mentioning every possible objection to each statement, and mentioning every possible argument under which a goal might not be in the reader's best interests.  In fact, the scientific ethic requires mentioning arguments that would contradict goals or statements, given the reader's probable assumptions rather than the author's.  (230).

We all know that "the media" doesn't work that way; or at least, outside of the scientific community, it doesn't work that way the vast majority of the time.  Even scientists writing about complex issues in newspapers or magazines, or journalists who believe "the reader should be allowed to decide", are sometimes unable to strictly obey the ethic simply because printed media often doesn't have the room to present all the issues.  (The space squeeze occurs in media but not in science because there's a lot more room in the peer-reviewed journals, and also because the vast majority of scientific articles deal with non-morally-charged issues where the ultimate answers are supposed to be simple.  In science, unlike social domains, if you can't fit the discussion of all the caveats into 1500 words, this is a good sign the theory is wrong.)

In "the media", readers understand that a flat, uncaveated statement may be simply the personal opinion of the author in a controversial field, though this is more true of statements about politics and morality than about statements of fact.  This "skeptical reader" assumption may not be true of everyone, but it is both traditional and necessary to assume it is, at least if you want to get anything published.

There's also a stereotype to the effect that the faceless public "doesn't want to hear the whole story", just popularizations and simplified good-guy bad-guy conflicts.  Whether this stereotype is true - or rather, the percentage of which it is true, and to what degree - is irrelevant if the publishers believe it's true, and demand articles for that faceless public.  This appears as a problem chiefly when one wishes to include hints that there's more to the matter than has been said; publishers who think themselves panderers (231) believe that the reader wishes only the illusion of understanding.  Likewise, some media will object to any science more specific than "quantum uncertainty" and "everything is relative".  Personally, I'd say we can afford to avoid any memetic channels in which this tendency has become pronounced, but traces may often be visible elsewhere.

Finally, aside from tighter space constraints, more complex issues, less cooperative publishers, a higher perceived standard for keeping the reader awake, and a different set of assumed reader behaviors - all of which are standard across the general problem of popularizing science - Singularitarians have an extra bonus problem:  Informing the public about the impending end of life as we know it without creating a lot of opposition.  We are future shock, and sudden exposure to our complete set of ideas is likely to send some readers screaming into the night, maybe literally.

The traditional ethic of High Journalist culture (232) does not permit partial presentation of a meme in order to make a better impression, holding this to be a form of lying; the entire meme must be presented, and the public permitted to make its own decisions.  And if this were 1990, that would be hard to argue with.

Audience composition as a function of reference trajectories

With the advent of the Web, with the ability to insert rememberable URL references into even printed documents, the fundamental assumptions change.  The printed article is merely the first step; in a way, it's almost analogous to the blurb that newspapers use to summarize headline news.  The document is the spiderweb; not one article, or one Web page, but the link halo, the probability that a reader with a given set of characteristics will read a given page.  The differentials give rise to some interesting ethical effects, but first it's worth the time to explore the underlying formalism.

Visualize the trajectory of someone stumbling onto printed material about the Singularity:  ve will do one of three things - stop reading out of disinterest; finish reading, but without being interested enough to look up further material on the Web; or go on to look up further material on the Web.

In the case of a Webber, the first pages/essays/directories arrived at will be the ones referenced in the printed material (particularly URLs that look interesting, or ones specifically designated as being "for more information"; most people are also more likely to type in a short URL than a long one).  From there, the reader will spider through the Web, following the links of greatest interest.  Some readers will surf for a single session and never return, but others will have established an enduring interest in the Singularity (233).  (During the initial stages of the PtS plan, it's that last audience which we care about more than anything else, but we still can't leave the other audiences out of the equation.)

So there are at least three Singularitarian memetic channels.  "First-step documents" are printed material in magazines or newspapers or other widely distributed media, television interviews, and any Web pages referred to by non-Singularitarian sites (234).  This is the "initial audience" in this analysis, and it's fair to assume that the majority will never have heard of the Singularity.  "Second-step documents" are any Websites referred to by the first-step material; this will reach the parts of the audience who care enough, or are horrified enough, or are having enough fun, to type in the URLs from the first-step article, or click on the "For more information" links on a first-step site.  Third, there are all the other Singularity-related Websites on the planet (235), which will probably be read both by long-time Singularitarians and by people spidering on from the second-step sites.
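The "link halo" formalism above can be sketched numerically.  The conversion probabilities below are purely hypothetical placeholders, not measurements; the point is only that the audience at each step of the channel is the product of the probabilities along the reference trajectory:

```python
# A minimal sketch of the three-channel reader-trajectory model.
# The probabilities are invented for illustration.

def channel_audiences(initial_readers, p_first_to_second=0.05,
                      p_second_to_third=0.30):
    """Return the expected audience size at each step of the channel."""
    first = initial_readers                 # stumble onto the printed article
    second = first * p_first_to_second      # type in the URLs / click through
    third = second * p_second_to_third      # keep spidering the link halo
    return first, second, third

first, second, third = channel_audiences(100_000)
print(first, second, third)   # 100000 readers -> 5000.0 -> 1500.0
```

In practice the probabilities would differ for each audience segment, which is what makes the differentials interesting.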

Three channels imply three kinds of writing:

I haven't yet rewritten my own sites to follow this formula.  When I do, the TMOL FAQ will be an example of a first-step document.  Staring Into the Singularity will be a second-step document.  This page, Coding a Transhuman AI, and Singularity Analysis are all third-step documents.

Invariance under the whole story

The primary ethic of writing first-step documents, IMO, is what I call "Invariance under the whole story."  Sometimes, due to constraints of space, or the desire to avoid frightening off the reader, one must leave out some parts of the story.  The cognitive structures that remain - the logic and emotion - must remain invariant under the whole story.  The content, the matters of scale, the concrete visualization can change, but not the structure.  When saving the world, the difference between a group of benevolent but mortal-scale transhumans working for the common good, and AI-born Powers rewriting the Solar System on the molecular level, is simply a matter of how much future shock the reader is exposed to.  The same benefits, the same risks, the same moral structures, the same hopes, the same fears, the same idealisms, all remain invariant.  (236).

One mustn't lie to the reader.  This ethic expresses itself in two ways:  First, by the requirement that someone who goes on to read the whole story shouldn't feel that the first-step document was a lie; second, by the requirement not to make any statements which one does not personally believe, or invoke any emotions which one does not judge fit for personal use.

In the example above, a concrete picture of "a group of intelligence-enhanced humans remaining on the mortal level and helping out, perhaps by making scientific advances," if I were to draw it explicitly in writing, would strike out on two counts.  First, this is not the Singularity which I believe in (237).  Second, the reader, on hearing that I planned to take apart the world on the molecular level, would realize that I did not believe what I had written earlier.  The invariance doesn't count if there's a concrete contradiction.

Ideally, one should remain vague about what form transhuman aid would take, at least until the second-step document.  (Unless, of course, you yourself believe in a concrete scenario weak enough not to shock your readers.)  Thus, each first-step reader will visualize whichever outcome they can imagine, at their own level of future shock.  If the author's second-step visualization is more powerful, the reader will count the first-step mental image as a bad visualization, rather than as a deception - so long as the basic ideas remain invariant under the whole story.

Secondary channels:  Word-of-mouth, other reporters

Remember, especially when writing first-step documents, that any sentence you write, any paragraph you write, may be taken out of context and quoted.  If you need to say something that could be quoted out of context, go ahead and say it; we shouldn't weaken our own memes, much less lose completeness in our analyses, because we're worried about being misquoted.  It's simply something to bear in mind, that's all.  Until one of your works has been popularized, you really don't realize what your carefully reasoned, holistically cohesive document can look like when the exciting concluding paragraph is quoted out of context.  Have you ever wondered why works intended for popular consumption often get right up to the exciting climax of the chapter, and then, right when your pulse is pounding, the author repeats everything he's just said for the last dozen pages?  It's because the concluding paragraphs are the ones that get quoted.  (Yes, I learned this the hard way, and that was with a friendly, even Singularitarian author.  Imagine what the hostile ones will do.)

This is the other reason why we should be careful about what kind of future shock we pour into the first-step documents.  An individual reader, perusing a carefully structured argument, can be called upon to understand it, no matter how high the level of future shock.  Ve cannot be asked to remember the entire argument to repeat to vis friends.  Anything that goes into a first-step document is something that has to be simple enough, and innocuous enough, that repeating only the highlights of your carefully crafted argument, in an order that bears no particular resemblance to your calculated sequence, doesn't create panic and opposition.

Of course, reporters can read second-step and third-step documents, and can quote them out of context as easily as first-step documents.  All I'm suggesting is that the journalists who are too busy, or who want everything simple, or who are just lousy reporters, will be happy to look no farther than the first-step documents.  A journalist who's diligent enough to go on to second-step and third-step documents may form vis own impressions of the complete truth, and convey them to vis reader, as is vis right.  How ethical journalists choose to deal with the issues outlined here is their responsibility, although we should certainly feel free to point out how serious that responsibility is, and suggest means for handling it.

Despite the idealism, we can always wind up with a lousy journalist who stumbles over an exciting second-step document and runs away screaming with little dollar signs in vis eyes (238).  But we're probably going to get that problem in any case.  All we can do is try to get the Singularity Institute established before that happens, and appeal to good journalists to fight the bad afterwards.

3.3.3: Emotions of transhumanism

The direct appeal to emotion has always been somewhat taboo among technophiles, and for a damn good reason.  Emotions are so easily abused that emotional argument is seen as foreign to, or in opposition to, intelligence.  And it often is.  There's a reason why clichés become clichés.  (239).  Nonetheless, if anyone is afraid to be emotional, I have three words to say:  Get over it.

Human intelligence grew up around emotions, and whether this is good or bad (240), building cohesive structures of thought often requires emotions as glue.  It's not just that emotions are needed to translate purposes into actions (241); sometimes, our emotions extend into intuitions.  Being enthusiastic about the prospect of saving the world is rational, and the resulting "gut responses" can lead to intelligent choices about priorities.  Intelligence is whatever lets you model, predict, and manipulate reality.  Emotions are an extension of intelligence by other means.  Emotions are neither as reliable nor as powerful as abstract thought, but emotions can be valid.

The authorial ethic requires that we make only those statements we personally believe to be true, and appeal only to those emotions which we personally use.  It does not require that we appeal only to those forms of cognition which we would wish to design into an artificial intelligence.

We live in a culture with ambient technophobia memes, transmitted by uncontradicted statements and the last fifty years of television.  You can't fight that by railing against irrationality or superstition (242).  People who derive their morality from these sources won't give it up on your say-so.  At most, if they explicitly mention the Borg, you can remark that it might not be a good idea to decide the fate of humanity based on the last fifty years of bad TV.  You can say this because bad TV doesn't enjoy cultural approval, attacking bad TV is socially accepted, and people are willing to be told that listening to bad TV is wrong.  But other carriers of technophobic memes enjoy higher social approval.

You can't fight technophobia with artificial incredulity toward your own subject matter.  People are willing to take things seriously, if you ask them to do so.  If you don't ask, there's no reason why they should.  If you don't take your own work seriously, don't be surprised if most of the audience you wanted to target does likewise, while your technophobic readers see through the mask to become frightened and horrified.  (243).

The only way to combat the floating social perception of "unnatural equals immoral" is with positive reasons why ultratechnology is moral.  (244).  And that doesn't mean holding out a big carrot, like eternal life or infinite wealth, because the same technophobic memes say these things are unnatural.  You have to offer positive moral and emotional reasons, reasons that the audience can accept within themselves, without interference from what they've been told it's socially acceptable to feel.  You have to offer positive moral reasons which are higher and more idealistic than technophobia, and which feel higher.

You have to make them feel the holiness of creating a new mind unmarred by hate, the exhilaration of exploring the Universe, the courage to face the unknown, the altruism of the quest for the Singularity, the joy of working to heal the world.  Because that's what makes a technophile.

You have to teach them that humanity is strong enough to change the world, that they are strong enough to change the world, because it's that strength, and belief in strength, that modern-day humanity is starved for.  The world can be improved, problems can be solved.  The fundamental message of technophobia is that the world is perfect the way it is, or the way it was in some mythical past, but people know better; they can see it on the evening news.

To combat the memes of resignation, all most people need is the belief that the problems they see can be solved.  In modern First-World culture, people are starved for meaning, starved for the chance to make a difference.  If we play our cards right, we should not only be able to beat technophobia, we should be able to beat it hands-down.

In a first-step document, we can choose our own moral and emotional battlegrounds.  If we lose, it's our own damn fault.

The ethics of emotion

By "choosing our own battlegrounds", what I mean is that we'll get better reactions if we present our ideas in a certain order.  We'll do better if people have a chance to have positive emotional reactions to the more exciting aspects of the Singularity, before they encounter an involved discussion of why most of the apparently negative aspects turn out to be moot points.

I think the emotional logic involved is an actual use of intelligence, not rationalization.  If someone emotionally attached to the concept of Apotheosis becomes more capable of emotionally accepting the necessity of the risks involved, then this can be viewed either as leading someone down the garden path, or as the correct functioning of the built-in cost-benefit analysis intuitions, depending on what you think is the actual correct answer.  This involves a judgement call on the author's part, but in a multi-step document, it's a judgement call the reader is given a chance to second-guess.

What would be unethical is allowing someone to become attached to the possibility of Apotheosis, then using this attachment to persuade them that there are no risks involved.  That would be taking advantage of what they want to believe, rather than what they become willing to understand.

3.3.4: Content and audiences

In the long-term, the audiences - the memetic carriers - whose reactions we need to worry about will include Silicon Valley tycoons (245), open-source programmers, CEOs, Greenpeace, politicians, televangelists, television reporters, truck drivers, print journalists, the middle class, the upper class, technophiles and technophobes, honest religious fundamentalists, and that's just in the First World.

While the first-step arguments need to be adapted to unique characteristics of unique audiences, one useful abstraction for reducing this complexity is your audience's Future Shock Level, or shock level for short.  (This measures the level of technology with which you're comfortable, not the highest level you've heard of.)  Future Shock is a good page for memeticists to read, but to summarize:

The general rule is that we should try to minimize jumps of more than two shock levels.  In first-step documents, concrete visualizations of SL4 material should be reserved for SL2 audiences and above.  In the short-term, when we're likely to be writing articles for Wired or Mondo 2000 or Slashdot, SL3 material is appropriate.  Below SL3, it's not possible to say much about the Singularity, so in the mid-term and long-term we may simply need to expose the general audience to a jump of three shock levels.  (246).
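The two-level rule can be stated as a toy function.  The shock-level numbering (SL0 through SL4) and the jump limit come straight from the heuristic above; treating them as arithmetic is just my own convenient framing:

```python
# The "minimize jumps of more than two shock levels" heuristic as code.
# Shock levels run SL0-SL4; both numbers come from the rule in the text.

def max_content_sl(audience_sl, max_jump=2, ceiling=4):
    """Highest shock level a first-step document should present
    to an audience at the given shock level."""
    return min(audience_sl + max_jump, ceiling)

print(max_content_sl(2))  # SL2 audience: concrete SL4 material is acceptable
print(max_content_sl(0))  # SL0 audience: stop at SL2
```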

It should be remembered that future shock increases the strength of any reaction, good or bad.  A first-step document should contain enough future shock to get people interested - enough to get the people we need to go on to the second-step documents - but anything above this may not be wise.

How much future shock should be stuffed into a second-step document is not something I've thought much about.  Staring Into the Singularity contains as much future shock as I could write, at the time I wrote it.  On the other hand, I was writing for an SL3 audience.  It's an interesting question.  And, regardless of the answer, my tactics are constrained by the amount of writing time I have.

Detailed analyses of memetic propagation through all audience segments are another subject I don't have time to write.  In the short-term, it shouldn't be necessary.

A first-step document targeted at an audience of SL1s or SL2s should contain nanotech and AI, but not Powers; it should discuss the unknowability of intelligence enhancement, but not the positive-feedback effect.  (Of course, if you need a fast dose of future shock and you don't have the wordage to do it gradually, I find that the line "If computing speed doubles every two years, what happens when computer-based AIs are doing the research?" is fairly effective.)  It should contain an appeal to at least one emotion of transhumanism (see above).

All of these heuristics may be ignored as circumstances warrant; when I'm answering a specific question, I discuss whatever I need to discuss to answer that question, even if it's a first-step document.  I don't include any mention of the quest for Singularity unless there's a natural way to work that in.  Publishers, and readers, often look dimly on attempts to include off-topic polemics.  (247)

Finally, in an article (as opposed to a question-answer or a letter-to-the-editor), mentioning the altruistic, the Singularity-as-quest, is very important; and not just because our goal is to create new Singularitarians.  If we're keeping future shock down to SL3 levels, the article still has to be of interest to potential Singularitarians who are already SL3s.  That requires new content, originality, even if not raw future shock.  The idealistic-crusade aspect of ultratechnology doesn't seem to be mentioned much, with most authors stuck in gosh-wow mode.  Mentioning the altruistic aspects should suffice to keep it interesting, even for readers who've already heard of nanotech and AI.  Then, when they get to the second-step Websites, the fun can begin.

3.4: Research strategy

3.4.1: Fundamental research

Fundamental breakthroughs, all witty sayings to the contrary, are hard to produce using nothing but hard work.  It takes either genius or a lucky experiment.

Building a transhuman AI will require genius.  It will require more genius than any single breakthrough in human history.  We are trying to create a mind.  There is no higher task, not in any sense of the word.

Once upon a time, John McCarthy said that, to succeed, AI needed "1.7 Einsteins, 2 Maxwells, 5 Faradays, and 0.3 Manhattan projects."  The Manhattan project is described here.  I don't know how much of the list still remains, after Lenat and Hofstadter and Marr, but my aspiration is to be the Drexler of transhuman AI, and hope that another 0.3 Manhattan projects and 1.0 Drexlers is enough.  If I'm not sufficient... then it's really only a matter of casting our net as widely as possible, and hoping that the necessary genius falls in.

What strikes me, looking at the past history of AI, is that there has only been one attempt to design an actual mind.  Douglas Lenat's Eurisko was the only AI that captured more than a single facet of cognition; the only AI that had enough complexity to count even as an attempt.  There have been other valid and successful efforts at capturing facets of artificial intelligence, notably Hofstadter and Mitchell's Copycat and David Marr's vision project, but Eurisko was the only attempt at a true artificial mind.  I find this oddly comforting.  There is no long history of failure to contend with; Eurisko is the only relevant attempt, and it did pretty well.

The research talent we need is more likely to reside in the field now called "cognitive science" than in the field called AI.  AI remains crippled by the ideologies formed back when it was necessary to believe that a 50's-era computer program was exhibiting "intelligence".  There might be some geniuses slogging it out in existing academic AI, and while it might be worth the effort to let academic AI know what's going on, I'm not sure it'd be worth the controversy (248).

From examining Lenat's papers on Eurisko (250), it would appear that the primary talent necessary to design artificial minds is the ability to grit your teeth and write the features you know the program needs, even if it's a bloody lot of work.  (251).  My best guess is that this is a programmer's ability.  So the other place to look for the research talent is in the field of programming, the same place we're looking for the development talent.

But the PtS plan does not rely on finding additional genius.  In the PtS visualization, the technological timeline and the principles in Coding a Transhuman AI - the things I already have specific ideas for doing - are enough.  Oh, I'm sure we'll look back on that in five years and laugh our heads off, but the point is that I'm not saying that the fundamental research problems are bridges to be crossed when we arrive.

3.4.2: Supporting research

Fundamental breakthroughs take genius or a lucky experiment, but both can be helped along by planned research.  I can think of a number of areas I'd like to see investigated, mostly in cognitive science.  Some examples:

(You'll note that many of these projects are cool.  Benefits of the coolness factor include publicity, ease of persuading someone to fund it, and "morale" or general fun.)

"Grant priority" projects aren't on the necessary path, or the critical path, but they will advance the Singularity or support a project that does.  They should be supported only with non-optimable resources - that is, funding and personnel that could not otherwise go to necessary or critical projects.  E.g. funding from a foundation that isn't interested in other projects, and researchers who wouldn't be interested in other areas.  Of course, this is a quantitative tradeoff rather than a qualitative injunction; if the non-optimable funding and researchers are already there, you can use optimable resources to do the paperwork.

"Optimable priority":  May be supported by optimable funds (i.e. grants from individuals that can be used optimally), but not at the expense of higher-priority projects.  "Optimable priorities" are often projects that will directly advance a Singularity, although perhaps not the critical-path AI Singularity.  (But maybe we only think it's the critical path...)  Whether a project is "optimable" at any given time depends on how expensive the project is, and how much optimable funding is available - it's a cost-benefit thing rather than a qualitative difference.

3.5: Miscellaneous

3.5.1: Building a solid operation

Three examples of a decision based on the heuristic:  "Build a solid operation":

A solid operation is one in which the available resources are matched to the expected problems, a plan which doesn't make a habit of relying on extraordinary efforts.  A "shoestring operation" is a plan that relies on willpower to compensate for inadequate resources.  A shoestring operation relies on extraordinary efforts for day-to-day operations.  When something unexpected goes wrong in a shoestring operation, there isn't any slack available.  Shoestring operations also have a nasty habit of burning people out, especially programmers.

I like to think of it in Gaussian curves (253).  There's a curve that describes how much effort people can put out, ranging from "hardly trying" to "supreme effort", with the midpoint requiring some mental energy and willpower to sustain, but not more than the mind's natural rate of replenishment.  A solid plan requires effort, but not unusual effort.  A plan that assumes 50% output is solid; a plan that assumes 90% output is shoestring; a plan that assumes 100% output is unworkable.
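As a hedged illustration of the Gaussian picture: model a person's day-to-day capacity for effort as a normal distribution (the mean of 0.7 and spread of 0.15 below are invented numbers, not data), and ask how often capacity meets the output level the plan assumes:

```python
# Toy model of the solid-vs-shoestring distinction.  Capacity on any
# given day is modelled as Gaussian; the parameters are hypothetical.
from math import erf, sqrt

def p_sustain(required, mu=0.7, sigma=0.15):
    """Probability that day-to-day capacity (a Gaussian with mean mu and
    standard deviation sigma) meets the plan's assumed output level."""
    z = (required - mu) / (sigma * sqrt(2))
    return 0.5 * (1.0 - erf(z))   # P(capacity >= required)

print(round(p_sustain(0.5), 2))   # solid plan: capacity nearly always suffices
print(round(p_sustain(0.9), 2))   # shoestring: relies on rare peak effort
```

Under these made-up parameters the 50%-output plan is met on roughly nine days out of ten, while the 90%-output plan fails on roughly nine days out of ten - which is the qualitative difference between solid and shoestring.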

In some cases, the PtS plan assumes extraordinary brilliance.  This can't be helped.  Building a mind is an extraordinary problem (255).  But I've tried not to unnecessarily assume tenuous or improbable events; that would make the whole plan too fragile.  Large, carefully polished sections of a previous draft were junked when it became clear that running distributed over the Internet created too many opportunities for things to go wrong.  I've tried to create a plan where setbacks will mean delays or solvable problems rather than crashes.  I've tried to plan such that the success of the early stages will advance the Singularity, even if the later stages fail.  Where more than one plausible outcome exists, I've tried to plan for both.  I do my best to emotionally accept the possibility of negative outcomes.

I don't trust to luck, but I've had to assume that events which are low-probability in the individual case can be made to happen at least once in the general case - for example, finding a funder.  I assume these things because they're necessary to the plan and they seem worth a shot.  (256).  But I have not assumed, at any point, extraordinary efforts (257) on the part of anyone involved.  This is one variable that lies entirely within the discretion of the planner, and I believe that extraordinary efforts should be reserved for unexpected problems.

I admit to being prejudiced.  I think the whole concept of a shoestring operation is based on the romantic stereotype of nonprofit work - or, for that matter, the romantic stereotype of starting your own company in a garage, or the romantic stereotype of open-source projects led by college students, or the hero theory of software development.  There's this idea that if you're going to have a solid plan and adequate funding, you might as well put on a business suit and be done with it (258).  There's this idea that you're not "worthy" to start your own company or save the world unless you're willing to work 16-hour days.  (259).  I, for one, am more impressed with someone who plans, so that 16-hour days aren't necessary.  (260).  Gratuitous heroism is great for scoring bonus points in mental fantasies, but it's the wrong attitude to take if you're trying to keep your planet from being converted into a boiling puddle of slag.  I have nothing against heroes, but the planner's job is to keep the necessity for heroes down to a minimum.

But maybe I'm making the wrong tradeoff.  Maybe the improbability involved in finding "adequate funding" makes the plan more fragile than a shoestring operation.  And if the only funding we can find isn't "adequate", then I suppose we'll just have to get by on inadequate funding during the initial stages.  I just think that if we can really do it, make the Singularity, change the course of an entire industry and build an intelligent mind, then we probably won't be doing it on a shoestring - unless we're seduced by the romance of heroism, or if our plans create a self-fulfilling prophecy.

To sum up, there are three (advantages to)/(characteristics of) a "solid operation":

3.5.2: Accelerating the right applications

Technological change can be hard on an economy, and harder on the people making up that economy.  (267).  Jobs get lost.  It happens.  What makes technology a good thing - the reason why the doomsayers and slowsayers and Luddites always prove mistaken - is that new jobs come along to replace the old.  My father has a saying:  "If the modern government had been around in the time of Ford, cars would have been outlawed to protect the saddle industry."  (268).

Nonetheless, there's a limit on how fast economies can adapt.  There's a limit to how fast people can be reeducated.  (I should note, for the record, that we are presently not even approaching this limit.  (269).)  Even infrahuman AI is still an ultratechnology, and if we can really pull off the zero-to-sixty stunt needed to have a seed AI ready by 2010, much less 2005, this implies a rate of change that could put enormous stresses on the economy.

However, there are technologies that can compensate.  The great computer revolution has increased rates of change in some industries, but other computer technologies have enabled (some) companies to change faster and keep up.  It's the reason why "change" is one of the great clichés of our time.  Technology doesn't just create economic stress, it creates the ability to keep up with economic stress.

So within the near-term economic horizon, meaning the next 10 years or so, we want to accelerate the stabilizing applications of AI, such as educational AI, and avoid accelerating the applications that would cause "ultraproductivity", which in today's economy would translate into "mass unemployment".  I do have some schemes for "smart economies" that can rapidly absorb almost unlimited increases in productivity, although the technologies involved (270) are not AI as such; these also go on the list of things to accelerate if we have the spare time.  (After all, if the Singularity drags on beyond the next 10 years, human economies are just going to have to adjust to ultraproductivity.)

3.5.3: If nanotech comes first

Nanotechnology has really taken off in the past few years.  I remember when nanotech was a loony dream, not something that got featured in Time and Business Week.  I remember when there wasn't any such thing as a Scanning Tunnelling Microscope, and "IBM" hadn't been spelled out in xenon atoms.  I remember when people were still arguing over whether it was possible to create chemical bonds by mechanical manipulation.  (Yes, it's been done.)  (271).

Drexler published "Engines of Creation" in 1986, and it may be that nanotechnology just has too much of a head start.  I'll be overjoyed if we have until 2020 to create a seed AI, but it's increasingly looking like the deadline may be more on the order of 2005.  That's not impossible, but it's damned tenuous.  So what can we do to prepare for the possibility that nanotechnology comes first?

Survival stations

To be specific, what can we do to increase the probability that the human species survives, in the event of either a grey goo outbreak, or - far more likely, and far more deadly - nanowar, the large-scale military use of nanotechnology?  Make no mistake, nanotechnological warfare or even grey goo is easily capable of wiping out the entire human race (272).  Faced with that threat, our first priority must be to ensure that some fraction of humanity survives, most likely in a survival station somewhere in space (274).

Undoubtedly the anti-disaster groups, including ourselves, will do everything possible to preserve the six billion people presently living on this planet.  But our first priority must be to preserve the existence of the human species.  The survival of individuals, including ourselves (275), must be secondary.  (Not that the goals are likely to conflict directly; I'm talking about the allocation of project resources.)  If intelligence survives in the Solar System, there will be a Singularity, sooner or later.  Given enough time, someone will code an AI.  We just have to ensure that survival stations, capable of (A) sustaining life indefinitely and (B) reproducing into an acceptably-sized culture, (C) come into existence before military nanotechnology (279) and (D) are out of the line of fire (280).

Advance planning and design-ahead of survival stations

This project is independently initiable; it doesn't depend on the technological timeline or any other PtS projects (281).

The purpose of design-ahead is to narrow the gap between the invention of nanotech and the launching of survival stations.  The method is doing as much work as possible in advance.  In particular, design-ahead would consist of:

Obviously, this is a long-term project.  Even in the short-term, however, it's imaginable that we might fund, say, a paper on what it would take to produce a survival station.  Even that much would be an improvement.

See also the Molecular Manufacturing Shortcut Group, a nonprofit devoted to discussing space travel and nanotechnology.  They might even have investigated survival stations; I'm not sure.  Anyway, they'd clearly be the people to turn to if we have a research question.

Brute-force seed AI

Humanity's experience with computing suggests that brute force can make up for blind stupidity.  I believe that Deep Blue was examining some 200 million positions per second, to Kasparov's two moves per second, when it finally beat him.  Thus, we may speculate that Deep Blue was approximately one hundred-millionth as smart as Kasparov.

It's conceivable that a seed AI could be designed (but not run) which would operate on nanocomputing hardware.  This "brute force" seed AI would make up for lack of intelligence by using wider search trees.  If the potential for intelligence were present - the ability to understand what needs improving - the brute-force AI might be able to improve itself up to human smartness.  The interesting question is whether human smartness can be brute-forced.  This question is too technical, and too deep, to discuss here - but I think our evolutionary history says it's worth a shot.
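The trade can be illustrated with a toy sketch of my own (not part of the plan): full-width minimax plays perfect tic-tac-toe with no domain heuristic at all.  The only "evaluation" is the win/lose/draw test at the leaves; width of search supplies everything else.

```python
# Toy illustration: brute force standing in for insight.  Full-width
# minimax search with zero domain knowledge plays tic-tac-toe perfectly.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X (+1 win, -1 loss, 0 draw),
    with 'player' to move.  Searches the entire game tree."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if not cell]
    if not moves:
        return 0  # board full: draw
    vals = []
    for i in moves:
        board[i] = player
        vals.append(minimax(board, "O" if player == "X" else "X"))
        board[i] = None
    return max(vals) if player == "X" else min(vals)

# Perfect play from the empty board is a draw:
print(minimax([None] * 9, "X"))  # -> 0
```

The same search structure scales (with exponentially more compute) to any game tree; the speculation above is that a seed AI's self-improvement search might be widened in the same way.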

I believe that design-ahead of a brute-force seed AI is the single most effective strategy for dealing with the possibility of nanotechnology.  The interval from nanotech to Singularity would equal the interval between nanotech and nanocomputing, or only slightly longer.  Nanocomputing, in turn, is likely to be one of the first applications possible, perhaps even a prerequisite application for an assembler breakthrough.  Nanocomputing is also likely to be available on the open market, or, if developed by Zyvex, available to fellow transhumanists.

Emergency neurotranscendence

Failing the design-ahead or success of a brute-force seed AI, we can try to amp the existing hardware, also known as the human brain.  The idea would be to create someone/something capable of coding a brute-force seed AI, or at least someone capable of saving humanity from the tremendous mess consequent to the invention of nanotechnology.

The procedure would be trying every imaginable way of increasing the raw power available to the brain.  The method would probably be attaching nanodevices to individual neurons and using those nanodevices to change or expand the brain's information-processing characteristics.  Some examples might include:

The thing to remember about most of these methods is that they would require sophisticated nanomedicine, which means nanotech capable of operating inside a human body.  In-body nanomedicine is a far more advanced application than open-air nanoweaponry, and thus nanoweaponry is likely to arrive first.

We'd have to either rely on design-ahead, trust to the altruism of the technology's controllers (283), or cut a lot of corners on safety.

Zetetics:  Augmented self-awareness

One harmless form of intelligence enhancement, technologically and legally practical in modern times, would be experimentation with augmented self-awareness: neurofeedback, or learning how to think rationally by watching your thoughts and emotions on a neuroimaging device.  Yes, I know that neuroimaging results don't come with handy labels, but it's possible that people could learn to correlate the patterns they see with the type of thought they're using.  If the cognitive technique of "rationalization" can be detected and unlearned... well, when I picked up the knack of identifying the subjective sensation that accompanies rationalization, my effective intelligence took a big jump.

Clichés to the contrary, history doesn't teach that everyone is corruptible.  There are some individuals in history who were corrupted by power, and some who weren't.  Corruptibility isn't an absolute, it's a balance, and balances can be tipped.  Zetetics might not be absolutely incorruptible, but they might reliably fall on one side of the balance.  And if there are reliably hard-to-corrupt individuals around, that may provide an "out" to some of the dilemmas associated with the rise of nanotechnology.

Would a government or a company, having obtained ultimate power, turn it over to a group of supposedly incorruptible individuals?  Not in today's world.  If everyone's desperate, and the Zetetics have already built a reputation, it could happen - but it would still be a fringe probability.  The only reason I'm even mentioning it is that Zetetics seem like nice people to have around in any case.

Getting to know the independent labs

All this neat stuff assumes access to nanotechnology.  To minimize research and deployment times, we would need to be in on the nanotech breakthrough when it happens.  In practice, I think this would just mean making sure that Zyvex and co. know who we are beforehand.  Getting an endorsement from Eric Drexler might also prove effective.  Aside from that, there's not much to say about this - but it's a key point.

After WWIII

One of the major branches in my visualization is the possibility that the invention of nanotech, or even the prospect of nanotech, would trigger a general war fought with nuclear weapons.

A nuclear war is not likely to actually wipe out humanity.  Australia might have a good chance of surviving (or not).  The end result would be to set us back ten or fifty years.  And in ten or fifty years, humanity will wind up in pretty much the same situation.  What can be done to affect the race between AI and nanotech in that time?

There are a lot of possible factors affecting the outcome.  The only method I can see for influencing the outcome would be preserving the knowledge of AI and computing hardware.  We would record basic research insights and detailed techniques, in a format and location likely to survive nuclear war.  So the Australian Backup Initiative would be one possible project, as would a more detailed time capsule intended to survive a thousand-year interregnum.

Nuclear war is not a happy thought.  Most of the human species dying out, with civilization returning over a period of fifty or more years, is not a pleasant thing to contemplate.  That is, however, one of the major possibilities.  If a small action taken now can make a big difference the next time around, then we should do it.

4: Initiation

4.1: Development initiation

4.1.1: Absolute minimum resources needed to begin development

I know of at least three potential part-time volunteers.  So, given absolutely nothing else, we should still be able to start development of the Singularity timeline, however slowly.  My own assistance is going to have to go part-time (very part-time) soon, I think, unless I get funding to continue.

Funding required:  $0.  But it's going to be pretty slow.

4.1.2: Resources for initiation of stated strategy

4.1.3: Ballpark figures on funding

Format:  $X/$Y/$Z.  X=minimum, Y=best guess, Z=maximum.  (284).
My best guess, from these figures, is that around $100K would be needed to get started, with $300K needing to be available to ensure the project could survive for at least three years.  (285).  Actually, I don't know how much is required for a Singularity Institute, but I like to err on the side of caution, and as you can see I'm providing figures for sticking it through, not for launching it into the air and praying.

4.1.4: Interpolation

The figures given reflect my opinion that beliefs should be expressed in curves and probabilities, not scalars and certainties.  (287).  However, the curves given are for the funding required to accomplish a single vision - one developer working on Flare, and one developer (me) working on Aicore One.  The level of funding doesn't just change the points on the curves; it also changes what we're trying to do.  (288).

With more funding (289), the most obvious outlet is to add additional developers to the Flare project, and additional developers to the Aicore project when that project begins.  (290).  If any cost-cutting compromises have been made to get the project underway, from underpowered development tools to decreased salaries, these should be rectified.  (291).  With even more funding, the Singularity Institute can begin mid-term projects such as memetic outreach programs, cognitive science research, looking into nanotech survival stations, and so on.

The least tenuous source of additional funding, once initiation occurs, will probably be grants made by foundations.  (292).  As discussed above, such grants (a) may be non-optimable resources, (b) will require an initial investment of time to write proposals, (c) may impose constraints on the makeup of the nonprofit (293), and (d) will also require a proposable outlet for the grant.  Alternatively, the first funder may wish to accelerate development further and have resources available to do so, or additional funders may become available.  Such funding is to be preferred, since it imposes fewer constraints.

With less funding, it's possible to interpolate between the volunteer strategy and the Singularity Institute strategy.  With less than full funding, say around $30K, one could simply "launch" the effort and hope it attracts grant or individual funding; if that fails to materialize, work could revert to part-time status.  Interpolating again, the project "launched" could be solely Flare Zero, with no attempt to start work on Aicore.  Interpolating again, the Flare Zero project could proceed with one full-time developer (294) instead of two.  And so on, with the expenses being eliminated one by one.

Let me emphasize that this kind of minimalism should not happen unless necessary.  The Institute strategy, with two programmers, may appear to involve scarcely more benefit than the volunteer strategy, but the end goal is to bring about the Singularity, and that is not something that's likely to happen with only two full-time programmers, much less a shoestring volunteer operation.  The short-term goals of the Singularity Institute's projects are important to the timeline, but equally important is the potential to move on to the next stage.

Before you can create X, you have to create the potential for X.  The volunteer strategy is a shoestring operation.  There's little effort put in, little prospect of rapid growth, and no means to handle any growth that does occur.  The Institute strategy provides a solid foundation for growth, the nonprofit status to attract grant funding and donations, and support for continuous rather than intermittent development.  It's more solid, more reliable, and thus far more able to accrete power and accelerate.  See 3.5.1: Building a solid operation.

4.2: Institute initiation

NOTE: It's been initiated.

The technical requirements for incorporating a 501(c)(3) nonprofit are $1K/$10K/$15K in legal fees, an ad-hoc Board of Directors (295), and a charter.  To get the Singularity Institute started, we need the nonprofit, a real Board of Directors, a founder, at least one initial project, and funding for said project.

The "founder" is someone who's likely to wear the same wide variety of hats worn by the founder of a startup company - filing tax returns, calling up foundations, talking with possible donors, doing media interviews, managing projects, writing Websites, and so on.  I'm not sure whether this would be the Chairperson of the Board or the Executive Director; in the early stages, possibly both.

I may have written the plan for a Singularity Institute, but I am extremely reluctant to play the part of "founder", even at the very beginning.  I view my primary task as implementing the Aicore line.  The talent to run a foundation, even if combined with the policy-making requisites (296), should not be so rare as to render impractical the idea of finding someone better suited than myself.  There's a well-known set of personality traits associated with founding an organization, and they are not mine.  That's really all there is to it.

So if you have (or know someone with) energy, drive, willpower, charisma, enthusiasm, high-end intelligence, dedication to the Singularity, strong self-awareness, planning and organizational talent, the ability to use the expertise of others, writing ability, and the policy-making requisites described under 3.2.4: Leadership strategy, give me a ring.  Non-perfect candidates will be considered.

Given the founder, finding the rest of the Board... is up to the founder, of course.  But we can still be "on the lookout" now.  So what do we want in a Board member?  Three considerations have already been identified:  Maturity (enough to not screw up policy); self-awareness (enough to not screw up management of non-Singularitarian staff and Singularitarian allies); charisma (enough to make phone calls to foundations and someday preside over dinners).  Obviously, these requirements are not universal; we only need one charismatic member, at least at first.  We need a majority with enough wisdom not to actively screw up policy, but active management might not be part of the job of all directors.

California law requires that there be more non-employees than employees on the board; while even non-employee Board members may be paid a stipend, I get the impression this is not supposed to come within several orders of magnitude of a salary.  Since both the founder and myself (to name two individuals likely to be on the Board) are likely to be employed full-time, I see several options for resolving this conflict:

Regardless of which strategy is used, I don't think we'll be able to select a good Board by all these criteria and still have physical meetings.  The Singularitarian cause began in cyberspace and, as yet, has no physical basis.  The most likely candidates are scattered across the planet.  (On the plus side, however, we won't have to rent offices.)  Fortunately, California law seems to explicitly allow for cyberspace-based Board meetings.

4.3: Memetics initiation

As explained in 3.3.1: Memetics timeline, the primary goal for memetics in the short-term is creating additional Singularitarians, particularly those needed for the initiation of the Singularity Institute and its projects.  (299).

The task of memetics in the short-term is ensuring that the maximum number of likely helpers find out about the Singularity in all its coolness, the quest for Singularity, and the location of the Singularitarian mailing list.  (Note:  The last piece of information, or even the fact that a Singularitarian group exists, belongs in second-step or third-step documents.)

The three points where effort can be applied:

Publication memetics - Websites and articles - are, I hope, something that's easy to volunteer.  The only prerequisites are intelligence, writing ability, a computer, and a lot of timesweat; but it's timesweat that can be distributed over an arbitrary amount of volunteered time.  In short, it doesn't take Institute resources.

Publication memetics, because they require little in the way of an initial investment (while by their nature being directed towards future growth), are the chief instruments of initiation, and the primary present way in which "You can help the Singularity now."  I've been "volunteering" Websites in the service of the Singularity for three years.  (300).  This includes the very page you're now reading.

Appendix A: Navigation

A.1: Principles of navigation

"Navigation" is the name I've given to the art and skill of altering the future.  I feel that "futurism" doesn't cut it; futurism focuses on prediction rather than manipulation, and most futurists as-seen-on-TV focus on a single future, which is presented as either utopian or dystopian.  Navigation is the art of choosing between futures.  At issue is not "good" and "bad", but "better" and "worse".  At issue is not the probability of a future, but how the probability can be affected by our actions.

The underlying formalism for goal-based decision-making is covered in TMOL::Logic::choices, but it's worth exploring a simplified version.  We start with a goal (or set of goals) G, and assume that there's some way of calculating the value of G for any future F (say, the "fulfillment" of G in F times the "desirability" of G in F).  Each future has an estimated probability P given the present; for example, the probability of "nanowar" might be 30%.  When considering a choice, each possible action leads to a different probability spectrum for the possible futures; A1 might lead to "nanowar" with a probability of 30% and to "Singularity" with a probability of 50%, while A2 might lead to "nanowar" with a probability of 20% and to "Singularity" with a probability of 45%.  Given all that, there's an obvious arithmetical method of calculating the value of an action:

The value of an action is the sum, over all possible futures, of the probability of that future given the action, times the value of that future.  One then chooses the action with the highest value.
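In code, the zeroth-order arithmetic might look like the following sketch.  The goal values are illustrative stand-ins of mine; the A1/A2 probability spectra are the numbers from the paragraph above.

```python
# Zeroth-order navigation arithmetic: expected value of each action
# is the probability-weighted sum of the values of the futures it
# leads to.  All values here are illustrative, not the author's.

def action_value(outcome_probs, outcome_values):
    """Expected value of an action: sum of P(future) * V(future)."""
    return sum(p * outcome_values[f] for f, p in outcome_probs.items())

# Illustrative desirabilities of each future (arbitrary units).
values = {"nanowar": -100.0, "singularity": +100.0, "status quo": 0.0}

# Probability spectrum each action assigns to the futures (A1 and A2
# from the text: 30%/50% vs 20%/45%, remainder to the status quo).
actions = {
    "A1": {"nanowar": 0.30, "singularity": 0.50, "status quo": 0.20},
    "A2": {"nanowar": 0.20, "singularity": 0.45, "status quo": 0.35},
}

for name, probs in actions.items():
    print(name, action_value(probs, values))  # A1 -> 20.0, A2 -> 25.0

best = max(actions, key=lambda a: action_value(actions[a], values))
print("choose:", best)  # -> choose: A2
```

Note that the choice depends only on differences in the probability spectra, not on the absolute probabilities - which is exactly the heuristic derived from the formalism below.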

I've never used the zeroth-order formalism directly, of course.  Any form of cognition which can be formalized mathematically is too simple to contribute materially to intelligence.  I've never used the arithmetic at all; getting the relative quantities right, to within an order of magnitude, is enough to yield unambiguous advice.  (This rule is itself part of the second-order theory of navigation:  "If the first-order theory doesn't give strong advice, or the advice is sensitive to minor fluctuations in the model of reality, then navigation is the wrong skill for making the decision.")

However, I've used heuristics that are derived from examining the formalism.  For example, if the utility of a particular effort is measured by its effect on the probabilities of the possible outcomes, then it's clear that what matters is not the absolute value of any of the probabilities, but how large the shift in probabilities is.  Likewise, the importance of a particular shift in probabilities is measured by the difference in value between the two futures.

The principles of navigation, mostly derived from the second-order theory, are actually simpler than the formalism:

It's often important to remember the relativistic nature of navigation.  For example, some people would prefer a Singularity that occurs via uploading (301) rather than a pure artificial intelligence.  I rather doubt that it makes a difference whether a grown-up's mind started out as a baby human or a baby AI, but let's assume that there exists a significant probability that human-born Minds are nicer than AI-born Minds (and that this probability is greater than the probability that AI-born Minds are nicer than human-born Minds, and that "nicer" represents a significant differential desirability which is approximately equal in both cases).  Is it necessarily rational to take actions that will increase the probability of an uploading Singularity relative to an AI Singularity by trying to sabotage AI efforts?  (302).  No, because intramural fighting would reduce the probability of both Singularities, thus increasing the probability of nanowar.  (See A.3: Deadlines.)

These are the rules of navigation, as best I've learned them:

  1. Don't toast the planet; don't lose permanently.  (303).
  2. Before you can create X, you must create the potential for X.  (304).
  3. The variables whose values determine the future:
  4. Clemmensen's Law:  "IMO, the existing system suffices to permit technological advance to the singularity. Any non-radical change is unlikely to advance or retard the event by much. Any radical change is likely to retard the event because of the upheaval associated with the change, regardless of the relative efficiency of the resulting system."

  5. Or as I would put it:  "Don't meddle."  Don't get sidetracked into subproblems of sociology or politics, no matter how great the enthusiasm or indignation.
  6. When dealing with a large group of humans, assume that at least one will take the undesirable action you're worried about.
  7. It is the responsibility of a navigator to emotionally accept all the possibilities, and to plan for any that have a reasonable chance of occurring.

A.2: CRNS Time

One of the tools I use for navigation is "CRNS" time, which stands for Current Rate No Singularity.  CRNS measures how close we are to a given technology - or rather, how close the world is, without further intervention, if progress continues at the current pace.

For example, Drexler was quoted in a 1995 Wired article as predicting nanotechnology in 2015, so that's 2015 CRNS.  Of course, because navigation is a probabilistic thing, the CRNS time (as I guesstimate by interpolating the expert guesses and adjusting for developments since 1995) is more like "a 95% chance of getting nanotechnology between 2002 and 2020, a 65% chance of getting nanotechnology between 2007 and 2015, and the 50% point being 2012" - all CRNS, of course.  One imagines that Drexler would give a similar curve (306).  In recent times, I've moved up my CRNS estimate on nanotechnology in response to a series of reported technological breakthroughs (307) and announced massive investments (308); it now seems that the 50% point may be 2010, or earlier.
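As a sketch of how such a curve might be made concrete - assuming, which the text does not, that the arrival date is roughly normally distributed - one can back out a single distribution from the quantile guesses (median 2012, with the spread chosen to match the stated intervals):

```python
# Rough sketch: fit one normal distribution to the CRNS quantile
# guesses above.  The normality assumption and sigma ~ 4.5 years are
# my own reading of the figures, not anything stated in the text.
from statistics import NormalDist

crns_nanotech = NormalDist(mu=2012, sigma=4.5)

# Central 95% interval: roughly 2003-2021 (text guesses 2002-2020).
lo95, hi95 = crns_nanotech.inv_cdf(0.025), crns_nanotech.inv_cdf(0.975)
# Central 65% interval: roughly 2008-2016 (text guesses 2007-2015).
lo65, hi65 = crns_nanotech.inv_cdf(0.175), crns_nanotech.inv_cdf(0.825)

print(round(lo95), round(hi95))  # -> 2003 2021
print(round(lo65), round(hi65))  # -> 2008 2016
```

The recovered intervals land within a year of the text's guesses, which suggests the three quantile estimates are at least mutually consistent.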

Some other key CRNS numbers include AI at 2020 CRNS (309); uploading at 2040 CRNS (310); ubiquitous uploading at 2060 CRNS; the first true neurohacks, modified as children, become contributors at 2030 CRNS (311); the first adult-neurohack Zetetics (reengineered for greater rationality) at 2015 CRNS (312); Vingean headbands (neurosilicate or mind/computer IA) at 2020 CRNS (313).  Those are just my numbers - best guesses.

The key thing about all these numbers is that each one assumes none of the other ones have come into play yet - for example, the numbers for uploading assume no access to Drexlerian nanotechnology, and the numbers on AI assume no nanocomputers or Specialists.  CRNS time measures the current distance, not the dependent distance.

And that's because of the way CRNS time is used - to spot deadlines.  For example, AI is 2020 CRNS while nanotech is 2010 CRNS.  For reasons I'll discuss below, I would very much like AI - the full Singularity - to beat nanotechnology into play.  Hence the PtS target date of 2005-2010 CRNS.  Because the technologies of AI are "easy" to invest in, and relatively easy to accelerate, the PtS plan is plausible.  However, trying to get uploading (2040 CRNS) to beat nanotechnology into play is basically impossible; the gap is far larger and the uploading technologies are considerably harder to accelerate.

CRNS time, combined with common-sense "ease of investment" numbers, makes it clear which technologies will be relevant to the final outcome, and what level of effort - of acceleration - is necessary to win.  (Obviously I'm skipping over a lot of stuff here, like where I'm getting all my CRNS numbers; maybe someday that'll go in a separate page.)

A.3: Deadlines

"Watch, but do not govern; stop war, but do not wage it; protect, but do not control; and first, survive!"
        -- Cordwainer Smith, "Drunkboat", in The Instrumentality of Mankind

The first rule of choosing the future is to make sure there is one.  I think that at this point, that has to be the dominant consideration.  My projection of the unaltered future - current rate, no intervention - ends with the world being destroyed by nanotechnological weapons.  I don't think we can afford to be picky, at all, about what kind of Singularity we get.  Life as we know it is meta-unstable (314); it ends either when we blow ourselves up or invent better minds.  Shifting the balance from the first group of probabilities to the second must take priority over any internal divisions within a group.

In my visualization, nanotechnology is the primary deadline.  I think the development of nanotechnology will be followed by a rapid descent into nuclear war, nanotechnological warfare, or possibly worse.  Some arguments I found convincing appear in MNT and the World System, Nanotechnology and International Security, and the Nanowar discussion from the Extropian mailing list.

I find it difficult to visualize the specific descent into chaos.  I can't find an explanation of what stages nanotech is likely to go through, what the capabilities are at each level, and how long it will take to develop the software for any given capability at each level.  I find it difficult to imagine how any individual will respond to the prospect of nanotechnology, much less societies or governments.  I find it difficult to imagine my own reaction, and I've been living with the prospect of nanotechnology since age eleven.

I see many powerful organizations attempting to develop the military applications at maximum speed, and trying to prevent anyone else from gaining access to the technology.  I see said organizations immediately exploiting the military applications for social leverage through blackmail or actual attack.  I see individuals within and without nano-capable organizations attempting to hack into the system or seize power.  I'm really not sure what the outcome of such a madhouse would be, but it seems likely that most of the Earth's population would wind up as casualties, and it looks to me like there's a significant probability of humanity, maybe even all life in the Solar System, being wiped out altogether.

Every now and then, you hear veterans of the Cold War saying they don't know how we avoided nuclear war for forty years.  Looking at the prospect of military nanotech, it becomes quite clear how nuclear war was avoided.  Nuclear weapons, as a technology, have several built-in limitations and characteristics that make nuclear war unlikely.  This becomes clear because nanoweapons lack those limitations.