I have copied all the transhumanism pages to my new transhumanism Web site, ThinkBeyond.us. New articles related to skepticism, transhumanism, and rationality will appear there.



I've been thinking a great deal these days about Outside Context Problems, and on a trip to Chicago to visit one of my sweeties, I had something of an epiphany on the subject.

Put briefly, an Outside Context Problem is what happens when a group, society, or civilization encounters something so far outside its own context and understanding that it is not able even to understand the basic parameters of what it has encountered, much less deal with it successfully. Most civilizations encounter such a problem only once.

For example, you're a Mayan king. Life is pretty good for you; you've built a civilization at the pinnacle of technological achievement, you've dominated and largely pacified any competition you might have, you've created many wondrous things, and life is pretty comfortable.

Then, all at once, out of the blue, some folks clad in strange, impervious silver armor show up at your doorstep. They carry long sticks that belch fire and kill from great distances; some of them appear to have four legs; they claim to come from a place that you have never in your entire life even conceived might exist...

Civilizations that encounter Outside Context Problems end. Even if some members of the civilization survive, the civilization itself is irrevocably changed beyond recognition. Nothing like the original Native American societies exists today in any form that the pre-Columbians would recognize.

Typically, we think of Outside Context Problems in terms of situations that arise when one society has contact with another society that's radically different and technologically far more advanced. But I don't think it necessarily has to be that way.


In a sense, we are, right now, hard at work building our own Outside Context Problem, and it's going to be internal, not external.

Right now, as I type this, one of the hottest fields of biomedical research is brain mapping and modeling. I've mentioned several times in the past the research being done by a Swiss group to model a mammalian brain inside a supercomputer; such a model is essentially a neuron-by-neuron, connection-by-connection emulation of a brain in a computer. Such an emulation will, presumably, act exactly like its biological counterpart; it is the connections and patterns of information, not the physical wetware, that make a brain act the way it does.
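(If you're curious what "neuron-by-neuron, connection-by-connection" means in practice, here's a deliberately toy sketch--a hundred leaky integrate-and-fire model neurons wired together at random. It's nothing remotely like the biophysical detail of the Swiss group's work, but it shows the core idea: the interesting behavior lives in the connection weights and the patterns of activity, not in any particular piece of hardware.)

```python
# Toy illustration of "connections and patterns, not wetware": a tiny
# leaky integrate-and-fire network. Real brain emulation models neurons
# at vastly greater biophysical detail; this just shows that the dynamics
# are carried entirely by the connection weights and activity patterns.
import numpy as np

rng = np.random.default_rng(0)

N = 100                                         # number of model neurons
W = rng.normal(0, 0.5, (N, N)) / np.sqrt(N)     # connection strengths
v = np.zeros(N)                                 # membrane potentials
threshold, reset, leak = 1.0, 0.0, 0.95

def step(v, external_input):
    """Advance the network one time step; return new potentials and spikes."""
    spikes = v >= threshold                     # which neurons fire this step
    v = np.where(spikes, reset, v)              # fired neurons reset
    v = leak * v + W @ spikes.astype(float) + external_input   # leak + recurrent input + drive
    return v, spikes

for t in range(1000):
    v, spikes = step(v, rng.normal(0.05, 0.1, N))
```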

This group claims to be ten years from being able to model a human brain inside a computer. Ten years, and we may see the advent of true AI.


Let me backtrack a little. The field of AI has, so far, been disappointing. For decades, we have struggled to program computers to be smart. The problem is, we don't really know what we mean by "smart." Intelligence is not an easily defined thing, and it's not as though you can sit down and break generalized, adaptive intelligence into a sequence of steps.

Oh, sure, we've produced expert systems that can design computer chips, simulate bridges, and play chess far better than a human can. In fact, we don't even have grandmaster-level human/machine chess tournaments any more, because the machines always win. Always. Deep Blue, the supercomputer that beat human grandmaster Garry Kasparov in a much-publicized match, is by modern standards a cripple; ordinary desktop PCs today are more powerful.

But these are simple, iterative tasks. A chess-playing computer isn't smart. It can't do anything besides play chess, and it approaches chess as a simple iterative mathematical problem. That's about where AI has been for the last four decades.
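(To make "simple iterative mathematical problem" concrete: at the heart of a classic chess program is minimax search--look ahead through the tree of possible moves, score the end positions, and pick the move whose worst case is least bad. Here's a bare-bones sketch on a trivially small game, since a real chess move generator would swamp the example; Deep Blue added alpha-beta pruning, hand-tuned evaluation, and enormous custom hardware, but the underlying idea is this same brute look-ahead.)

```python
# Bare-bones minimax, shown on a trivial take-away game: players remove
# 1-3 stones from a pile, and whoever takes the last stone wins.

def minimax(stones, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move with the best minimax score for the player to move."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

print(best_move(10))   # leaves the opponent a multiple of 4 -> prints 2
```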

New approaches, though, are not about programming computers to act smart. They are about taking systems which are smart--brains--and rebuilding them inside a computer. If this approach works, we will create our own Outside Context Problem.


Human brains are pretty pathetic, from a hardware standpoint. Our neurons are painfully, agonizingly slow. They are slow to respond, they are slow to fire, they are slow to reset after they have fired, and they are slow to form new connections. All these things limit our cognitive capabilities; they impose constraints on how adaptable our intelligence is, and how smart we can become.

Computers are fast. They encode new information rapidly and efficiently. Raw computing power available from a given square inch of silicon real estate doubles roughly every eighteen months. Modeling a brain in a computer removes many of the constraints; such a modeled brain can operate more quickly and more efficiently, and as more computer power becomes available, the complexity of the model--the number of neurons modeled, the richness of the interconnections between them--increases too.
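(To put rough numbers on that doubling: if raw computing power really doubles every eighteen months, the compounding is dramatic. A quick back-of-envelope calculation:)

```python
# Back-of-envelope: if computing power doubles every 18 months,
# how much more is available after N years?
for years in (3, 9, 15, 30):
    doublings = years * 12 / 18
    print(f"{years:2d} years -> {2 ** doublings:,.0f}x the computing power")
# 3 years -> 4x, 9 years -> 64x, 15 years -> 1,024x, 30 years -> 1,048,576x
```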


We humans like to make believe that we are somehow the apex of creation--and not just of creation, but of all possible creation. It pleases us to imagine that we are created in the image of some divine heavenly architect--that the universe and everything in it was made by some sapient being, that that sapient being is recognizable to us, and that that sapient being is like us. We like to tell ourselves that there is no limit to human imagination, that human intellect can understand and achieve anything, and so on.

Now, all of this is really embarrassingly self-serving. It's also easy enough to deflate. The human imagination is indeed limited, though by definition limitations in the things you can conceive of tend to be hard to see, because you...can not conceive of things you can not conceive of. (As one person once challenged me, without apparent irony: "Name something the human imagination can't conceive of!")

But it's relatively easy to find some of the boundaries of human imagination. For example:

Imagine one apple. Just an apple, floating alone on a plain white background. Easy to do, right? Imagine three apples, perhaps arranged in a triangle, floating in stark white nothingness. Simple, yes? Four apples. Picture four apples in your head. Got it?

Now, picture 17,431 apples in your head, each unique. Visualize all of them together, and make your mental image contain each of those apples separately and distinctly. Got it? I didn't think so.

Imagine a cube in your head. Think of all the faces of the cube and how they fit together. Rotate the imaginary cube in your head. Got it going? Good.

Now imagine a seventeen-dimensional cube in your head. Picture what it would look like rotating through seventeen-dimensional space. Got it?

The first example indicates one particular kind of boundary on our imaginations: our limited resolving power when it comes to holding discrete images in our imagination. The second shows another boundary: our imaginations are circumscribed by the limitations of our experiences, as perceived and interpreted through finite (and, it must be said, quite limited) senses. Quantum mechanics and astrophysics often pose riddles whose math suggests behaviors we have a great deal of difficulty imagining, because our imaginations were formed through the experiences of a very limited slice of the universe: medium-sized, medium-density mass-bearing objects moving quite slowly with respect to one another. Go outside those constraints, and we may be able to understand the math, but the reality of the way these systems work is, at best, right at the threshold of the limitations of our imaginations.


Everyone who has ever owned a dog knows that dogs are capable of a surprisingly sophisticated sort of reasoning. Dogs understand that they are separate entities; they interact with other entities, such as other dogs and humans, in complex ways; they can differentiate between other living entities and non-living entities, for the most part (though I've seen dogs who are confused by television images); they have emotional responses that mirror, on a simple scale, human emotional responses; they are capable of planning, problem-solving, and analytical reasoning.

They can not, however, learn calculus.

No matter how smart your dog is, there are things it can not understand and will never understand because of the biological constraints on its brain. You will never teach a dog calculus; in fact, a dog is not capable of understanding what calculus is.

Yes, I know you think your dog is very smart. No, your dog can't learn calculus. Yes, you can too, if you set your mind to it; the point here is that there are realms of knowledge unavailable to the entire species, because all dogs, no matter how smart they may be in comparison to other dogs, lack the necessary cognitive tools to get there.

The intelligence of every organism is circumscribed in part by that organism's physical biology. And just as there are entire realms and categories of knowledge unavailable to a dog, so too are there realms of knowledge unavailable to us. What are they? I don't know; I can't see them. That's exactly the point.


To get back to the idea of artificial intelligence: A generalized AI would in many ways not be subject to the same limitations we are. One nice thing about modeled brains that isn't true of human brains is that we can easily tinker with them. The human brain is limited in the total number of neurons within it by the size and shape of the human pelvis; we can't fit larger brains through the birth canal. We have, in essence, encountered a fundamental evolutionary barrier.

Similarly, we can't easily make neurons faster; their speed is limited by the complex biochemical cascade of events that makes them fire (contrary to popular belief, neurons don't communicate via purely electrical signals; they change state electrochemically, by the movement of charged ions across a membrane, and the speed with which a signal travels depends on how quickly ions can propagate across the membrane and then be pumped back again). Brains are likewise limited in how quickly they can learn new things by the speed with which neurons can grow new interconnections, which is painfully slow.

But a model of a brain? What if we double the number of neurons? Increase the speed at which they send signals? Increase the efficiency with which new connections form? These are all obvious and logical paths to explore.
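(As a toy illustration of how trivial such tinkering becomes once the brain is a model rather than meat: the quantities biology fixes--neuron count, signal speed, how readily new connections form--become ordinary parameters you can dial. The fields and numbers below are hypothetical placeholders, not anyone's actual simulation settings.)

```python
# In a simulated brain, biological limits become mere parameters.
# A hypothetical configuration for a toy brain model: doubling neuron
# count or speeding up signalling is a one-line change, where biology
# would need a different pelvis or a different biochemistry.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class BrainModelConfig:
    neurons: int = 86_000_000_000      # rough human neuron count
    signal_speed_m_s: float = 100.0    # upper end of biological axon speed
    synapse_growth_rate: float = 1.0   # relative rate of forming new connections

human_like = BrainModelConfig()
upgraded = replace(human_like,
                   neurons=human_like.neurons * 2,
                   signal_speed_m_s=human_like.signal_speed_m_s * 1_000,
                   synapse_growth_rate=human_like.synapse_growth_rate * 100)
```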

And the thing about generalized AI is that it's so goddamn useful. We want it, and we're working very hard toward it, because there are just so many things that our current, primitive computers are poor at that generalized AI would be good at.

And one of those things, as it happens, is likely to be improving itself.

The first generalized AI will be a watershed. Even if it isn't very smart, it can easily be put to the task of making AIs that are smarter. And smarter still. Hell, just advances in the underlying processor power of the computer beneath it--whatever that computer may look like--will probably make it smarter. Able to think faster, hold more information, remember more...and able to have whatever senses we give it, including senses our own physiology doesn't have.

The first generalized AI might not be smarter than us, but subsequent ones will be, oh yes. You can bank on that. And that soon presents an Outside Context Problem.

Because how do we relate to a sapience that's smarter than we are?

In transhumanist circles, this is called a singularity--a change so profound that the people before the singularity can not imagine what life after the singularity is like.

There have been many singularities throughout human history. The development of agriculture, the Iron Age, the development of industrialization--all of these created changes so profound that a person living before them could not imagine what life after them would be like. However, the advent of smart, rapidly-improving AI is different, because it presents a singularity and an Outside Context Problem all rolled up into one.

In past singularities, the fundamental nature of human beings and human intelligence has not changed. A Bronze Age human is not necessarily dumber than an Iron Age human. Less knowledgeable, perhaps, but not dumber. The Bronze Age human could not anticipate Iron Age technology, but if the two met, they would still recognize each other.

But a smarter-than-us AI is different, in the ways we are different from a dog. We would not--we cannot--understand the perception or experience of something smarter than we are, any more than a dog can understand what it means to be human. And that presents an interesting challenge indeed.

Civilizations tend not to survive contact with Outside Context Problems.


Which brings me, at last, to the epiphany that I had while I was walking with a partner of mine in Chicago.

Transhumanism is the notion that human beings can become, with the application of intelligence and will, more than we are right now. I've talked about it a great deal in the past, and talked about some of the reasons I am a transhumanist.

But here's a new one, and I think it's important.

Strong AI is coming. It's really only a matter of time. We are learning that our own intelligence is the result of physical processes within our brain, not the result of magical supernatural forces or spirits. We are working on applying the results of this knowledge to the problem of creating things that are not-us but that are smart like us.

Now, there are several ways we can approach this. One is by creating models of ourselves in computers; another is by using advances in nanotechnology and biomedical science to make ourselves smarter, and improve the capabilities of our wet and slow but still serviceable brains.

Or, we can create something not based on us at all; perhaps by using adaptive neural networks to model increasingly complex systems in a sort of artificial evolutionary system, trying things at random and choosing the smartest of those things until eventually we create something as smart as us, but self-improving and altogether different.
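(For the curious, "trying things at random and choosing the smartest of those things" is, at its core, an evolutionary search loop. Here's a minimal sketch--mutate, score, keep the best--with a stand-in fitness function; real neuroevolution applies the same loop to network weights and topologies against far richer measures of "smart.")

```python
# Minimal evolutionary search: generate random variations of the current
# best candidate, score them, keep the "smartest," and repeat.
import random

GENOME_SIZE = 16
POPULATION = 50

def fitness(genome):
    # Stand-in for "how smart is this candidate?": here, closeness to a target.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

best = [random.random() for _ in range(GENOME_SIZE)]
for generation in range(200):
    candidates = [best] + [mutate(best) for _ in range(POPULATION)]
    best = max(candidates, key=fitness)          # keep the best variant

print(f"best fitness after 200 generations: {fitness(best):.4f}")
```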

Regardless, we have a choice. We can make ourselves into this new whatever-it-is, or we can make something entirely independent from us.

However we make it, it will likely become our successor. Civilizations tend not to survive contact with Outside Context Problems.

If we are to be replaced--and I think, quite honestly, that that is only a matter of time as well--I would rather that we are replaced by us, by Humanity 2.0, than see us replaced by something that is entirely not-us. And I think transhumanism, refined down to its most simple essence, is the replacing of us by us, rather than by something that is not-us.

 
 

