
December 5, 2010

Awaiting the owl
Posted by Patrick at 05:14 PM * 145 comments

Cosma Shalizi has what he describes as “yet another semi-crank pet notion, nursed quietly for many years, now posted in the absence of new thoughts because reading The Half-Made World brought it back to mind.”

The Singularity has happened; we call it “the industrial revolution” or “the long nineteenth century”. It was over by the close of 1918.

Exponential yet basically unpredictable growth of technology, rendering long-term extrapolation impossible (even when attempted by geniuses)? Check.

Massive, profoundly dis-orienting transformation in the life of humanity, extending to our ecology, mentality and social organization? Check.

Annihilation of the age-old constraints of space and time? Check.

Embrace of the fusion of humanity and machines? Check.

Creation of vast, inhuman distributed systems of information-processing, communication and control, “the coldest of all cold monsters”? Check; we call them “the self-regulating market system” and “modern bureaucracies” (public or private), and they treat men and women, even those whose minds and bodies instantiate them, like straw dogs.

An implacable drive on the part of those networks to expand, to entrain more and more of the world within their own sphere? Check. (“Drive” is the best I can do; words like “agenda” or “purpose” are too anthropomorphic, and fail to acknowledge the radical novelty and strangeness of these assemblages, which are not even intelligent, as we experience intelligence, yet ceaselessly calculating.)

Why, then, since the Singularity is so plainly, even intrusively, visible in our past, does science fiction persist in placing a pale mirage of it in our future? Perhaps: the owl of Minerva flies at dusk; and we are in the late afternoon, fitfully dreaming of the half-glimpsed events of the day, waiting for the stars to come out.

I hope Shalizi will forgive my quoting his entire post, but it seems to me to have resonance with certain recent arguments over steampunk. It might even hint at why SF (and fantasy!) keep returning to the “long nineteenth century” like a dog to its bone.

I’m also reminded of this, from one of Nietzsche’s books of aphorisms: “The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw.”

Comments on Awaiting the owl:
#1 ::: Bruce Cohen (Speaker to Managers) ::: (view all by) ::: December 05, 2010, 06:02 PM:

That whole idea of tying the Singularity to a single moment (April 3, 2038 at 14:09 Eastern Standard Time) was always silly. One of the primary characteristics of an exponential curve is that, drawn to scale, any piece of it looks like any other, and that changing the scale doesn't change the shape. So at any given time the area of the immediate past and future are somewhat similar to the present, and less so as you get further away. The only difference between eras is that the actual distance in absolute units you can see (the piece of the curve ahead with a given change in slope) is smaller as you go up the curve.
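
One way to make that precise: for an exponential

    \[ f(t) = C e^{kt}, \qquad f(t + \Delta) = C e^{k\Delta} e^{kt} = e^{k\Delta}\, f(t), \]

so the stretch of curve starting at any later time is just the stretch starting now, rescaled by the constant factor e^{kΔ}. No point on a pure exponential is mathematically special; "the knee of the curve" is always wherever the observer happens to be standing.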

So where's the Singularity? When you can predict a century ahead? A decade? A year? I would say the century prediction probably stopped being valid in the 18th century, but it was still possible to make predictions 10 years ahead as late as the middle 20th century.

The important fact is that there's always some distance ahead you can't reliably predict, and another distance ahead about which you basically know nothing. Where that latter distance became important to human society is sometime in the last 3 centuries when it became less than a human lifetime, so the people educated at the beginning of a window got to see how badly their education fit them for the end.

Why, then, since the Singularity is so plainly, even intrusively, visible in our past, does science fiction persist in placing a pale mirage of it in our future?

Since the Singularity isn't a single event, or even one of short duration, it's reasonable to at least argue that it extends into both our past and our future. As for whether SF writers have been predicting a "pale mirage" of it, yes, some have, and some haven't. There have always been writers who could imagine six impossible things before breakfast, and those who couldn't. And there have always been writers who ignored some of the more exotic implications of the technologies and cultural changes they wrote about in order to concentrate on plot or character development. Many have felt that writing about a world that was too alien to ours would be an impediment for the reader in identifying with the characters or following the action.

I would argue that the writing of Walter Jon Williams, Charlie Stross, Iain M. Banks, Cory Doctorow, Damien Broderick, Michael Swanwick, and a whole bunch of others whose books aren't on the shelves right in front of me, is not a "pale mirage" of the Singularity in our future. I guarantee that nothing they've written, and nothing that's happened in the last century can prepare us for what's to come.

#2 ::: Serge ::: (view all by) ::: December 05, 2010, 06:09 PM:

Patrick... it seems to me to have resonance with certain recent arguments over steampunk

That thought occurred to me as I was reading your post and before I got to your observation. As S. M. Stirling once said on a panel about steampunk, we live in the aftermath of the 19th Century.

#3 ::: Eric K ::: (view all by) ::: December 05, 2010, 06:13 PM:

This is a fun game. :-) I'd put the date even further back, when Gutenberg invented movable type.

If you look at the history of antiquity, there are many little glimmers of knowledge (such as the Phaistos disc and the Antikythera mechanism) that appeared thousands of years before their time. But each of these glimmers faded.

Once the printed word became cheap and ubiquitous, the world changed. Knowledge stopped fading into the darkness quite so quickly, and each generation could build on the work of the last. And because of this, the last 500 years have been pretty remarkable by any standard.

Of course, the next 500 should be pretty startling, too.

#4 ::: guthrie ::: (view all by) ::: December 05, 2010, 06:48 PM:

Eric K #3 - actually, I would put it before Gutenberg, in the process whereby literacy was secularised, and ceased to be the property of a small educated elite. By the time Gutenberg came along there were all these people and organisations just ready to take advantage of moveable type. The changes leading up to that time were larger than many people think, and texts of various sorts were circulating, such as The goodman of Paris, or all sorts of other sources of information and knowledge.
Which is to say, pinning it on Gutenberg et al. makes it too much like a singularity-type event for my liking.

Here's a thought - is there much trace in literature and such of the 19th century of the kind of end-times/utopian desires present in singularitarian SF of the late 20th/early 21st century? I know that Wells wrote some stories that were a bit like them, but was anyone else doing something similar, earlier?

#5 ::: Erik Nelson ::: (view all by) ::: December 05, 2010, 06:56 PM:

is this the same owl that was a baker's daughter? Is it a half baked world?

#6 ::: Evan ::: (view all by) ::: December 05, 2010, 07:31 PM:

I've always thought of the event imagined by Vinge, Kurzweil, etc, as a phase change, and that seems a more fitting metaphor to me than "singularity"--among other reasons, because it doesn't have the confusing word "single" in it, implying uniqueness.

Consider: Water below 273K behaves according to one set of rules: it's hard. At 273K, a strange and unpredictable transition takes place; when the transition is complete, the water is warmer and obeys an entirely new set of rules: it's all wet and flowy and stuff. Add another hundred degrees, and the water reaches another bewildering transition point, after which, once again, the water begins following an entirely different set of rules: now it's invisible and intangible.

The story of a gram of water going from 200K to 400K, then, has two "singularities" in it. I see no reason why the story of human development has to stop at only one. Or for that matter, only fifty. Surely there must be dozens of past events that qualify--most of them prehistoric, the invention of "history" being one of the more recent examples.

I mean, speech, fire, tool use, animal husbandry, agriculture, money, writing, seafaring... the only reason the industrial revolution seems bigger than those is that it's closer to us in the rearview mirror.

#7 ::: Erik Nelson ::: (view all by) ::: December 05, 2010, 07:35 PM:

We can't predict the economy
Don't understand the singularity

Don't know how to embrace the machine
When the bureaucracies get too mean

But if as markets merged and grew
Communications brought me closer to you
What a brave new world this would be

#8 ::: Chad Orzel ::: (view all by) ::: December 05, 2010, 08:01 PM:

That whole idea of tying the Singularity to a single moment (April 3, 2038 at 14:09 Eastern Standard Time) was always silly. One of the primary characteristics of an exponential curve is that, drawn to scale, any piece of it looks like any other, and that changing the scale doesn't change the shape. So at any given time the area of the immediate past and future are somewhat similar to the present, and less so as you get further away. The only difference between eras is that the actual distance in absolute units you can see (the piece of the curve ahead with a given change in slope) is smaller as you go up the curve.

The bigger problem with it is that the early stages of a logistic curve look very much like an exponential growth curve. There's no real hint that it's eventually going to level off at a point well short of infinity.

This leads directly to Joe Fitzsimon's tweet last week: "How to be a futurist in 1 easy step: confuse logistic curves for unbounded exponential growth." Which is a little harsh, but also not inaccurate.
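
To spell that out: the standard logistic curve is

    \[ L(t) = \frac{K}{1 + e^{-r(t - t_0)}}, \]

and well before the midpoint t_0 the exponential term dominates the denominator, so

    \[ L(t) \approx K e^{r(t - t_0)} \]

-- pure exponential growth at rate r, with the carrying capacity K leaving no visible trace until t gets close to t_0. The data that would let you tell the two curves apart only arrive after the interesting part is over.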

#9 ::: Janet Brennan Croft ::: (view all by) ::: December 05, 2010, 08:30 PM:

'I’m also reminded of this, from one of Nietzsche’s books of aphorisms: “The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw.”'

Hmmm -- on Discworld we have the printing press, the golem, and the clacks. So will Vetinari's Undertaking bring about the Discworld singularity?

On a more serious note, it seems more and more clear to me that WWI was the axis around which our world has turned -- slowly and haltingly but surely and unstoppably. And yet they still don't really teach it in US public schools.

#10 ::: R. Emrys ::: (view all by) ::: December 05, 2010, 08:55 PM:

I've always figured it happened 5000 years ago in Sumer. And also, at approximately the same time, on the west coast of South America.

#11 ::: Erik Nelson ::: (view all by) ::: December 05, 2010, 08:58 PM:

Actually it happened suddenly, last Sumer.

#12 ::: Serge ::: (view all by) ::: December 05, 2010, 09:04 PM:

Erik Nelson @ 11... When Elizabeth Taylor was wearing that bathing suit?

#13 ::: Andrew Plotkin ::: (view all by) ::: December 05, 2010, 09:11 PM:

I side with the "one phase change among many in our history" view. I have a rant somewhere (in my head, not online) about my list: the invention of language, the invention of agriculture, the invention of writing, the Renaissance, the Industrial Revolution, the Computer Whatever-It-Is.

Each one was not just a transformation of society, but the manifest transcendence of the human race -- piecewise, since none of these consumed the planet in one gulp. What remained afterwards was (plausibly) a new human race, powerful and incomprehensible like unto gods in their new abilities *and concerns of living*.

Thus the human race was destroyed, each time, by its children. Keep an eye out for the next one.

#14 ::: Spiny Norman ::: (view all by) ::: December 05, 2010, 09:59 PM:

You want a change? The transition from RNA to DNA-based genetic material. That was a change. And the invention of oxidative phosphorylation, which prevented the most serious pollution catastrophe in the biosphere's history. Those are changes.

#15 ::: Patrick Nielsen Hayden ::: (view all by) ::: December 05, 2010, 10:02 PM:

#1, Bruce Cohen: "Since the Singularity isn't a single event, or even one of short duration, it's reasonable to at least argue that it extends into both our past and our future." You know, I almost titled this post "A thief in the night."

#9, Janet Brennan Croft: "It seems more and more clear to me that WWI was the axis around which our world has turned." Yes, without reservation.

#16 ::: Stefan Jones ::: (view all by) ::: December 05, 2010, 10:12 PM:

It's been burning since the world's been turning?

#17 ::: David Harmon ::: (view all by) ::: December 05, 2010, 10:39 PM:

Singularities are always a relative thing -- the Industrial Revolution is the last one that's receded enough to see properly, but hardly the first. Or do you think an Imperial Roman would easily adapt to, say, 18th Century Italy? (Scattered through the Middle Ages were all sorts of changes in how things get done, how power gets justified, and so on.) But then, the Roman Empire itself was a Singularity in its own right, transforming Europe even in its fall.

#18 ::: heresiarch ::: (view all by) ::: December 05, 2010, 11:33 PM:

Bruce Cohen @ 1: "Where that latter distance became important to human society is sometime in the last 3 centuries when it became less than a human lifetime, so the people educated at the beginning of a window got to see how badly their education fit them for the end."

I'd focus on this measure of change--the rate of decay of educational relevancy--rather than prediction ability. How far ahead we can predict is always very debatable, and subject to short-term variation due to black swan events: Columbus' voyage shot any prediction of the course of world history made in the previous decades all to hell, but educational relevance was fairly continuous throughout.

"Since the Singularity isn't a single event, or even one of short duration, it's reasonable to at least argue that it extends into both our past and our future."

It seems to me that any potential Singularity (past or future) is composed of countless technological shifts, each of whose individual effects follows a logistic curve. The exponentiality comes from the fact that, on average, each technological advance enables or necessitates more than one subsequent shift. But technology and science are chunky--there's no guarantee that every technological shift leads to the same number of subsequent shifts. Indeed, there's probably a high degree of heterogeneity, with some shifts leading to dozens more shifts, and many shifts leading to none. This implies that rather than being a steady acceleration, technological change will clump. I think it's a very good argument that industrialization was one such clump, and further back agriculturalization was another such clump, and computerization very well might be a third.
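
A crude way to formalize that is as a branching process: if each shift triggers, on average, m further shifts, then the expected number of shifts n generations out is roughly

    \[ \mathbb{E}[\text{shifts after } n \text{ generations}] \approx m^n, \]

which is exponential whenever m > 1. But m is an average over wildly uneven "offspring" counts -- a few shifts spawn dozens, most spawn none -- so any actual run of history looks less like a smooth exponential and more like the clumpy, fits-and-starts acceleration described above.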

Evan @ 6: I like it.

Janet Brennan Croft @ 9: "On a more serious note, it seems more and more clear to me that WWI was the axis around which our world has turned -- slowly and haltingly but surely and unstoppably."

I confess, the pivotal importance of WWI doesn't seem quite as clear to me. What makes you put it in such a central role? Or am I misunderstanding you entirely?

#19 ::: B. Durbin ::: (view all by) ::: December 05, 2010, 11:42 PM:

"On a more serious note, it seems more and more clear to me that WWI was the axis around which our world has turned -- slowly and haltingly but surely and unstoppably. And yet they still don't really teach it in US public schools."

I *never* had a world history course get much past Archduke Ferdinand until I reached college. For the most part, we just ran out of time. That means my grasp of the 20th century is strange and unpredictable, because there was a lot that it was assumed I *knew* even though the events had happened before I was born... I suspect because the person doing the assuming had lived through the events in question. Much like your mother assumes you know who she's talking about when she talks about her acquaintances from church. "You know, Mr. and Mrs. Z--."

The college professor who rectified this situation was often described as a human dynamo, the sort of teacher who moves through the material at warp speed. He took us from the Defenestration of Prague up through Reagan's presidential campaign (and its strong relation to JFK's campaign) with four novels along the way[1] in a mere semester.

He was also fond of open-ended finals questions such as "Why Hitler? Why Stalin? Explain the twentieth century." He'd let you take as long as you needed, and would even purchase food if you stayed into meal times. (There were no penalties for a short essay if you could make your point that way.)

He died two weeks after finals, of untreated stomach cancer. Student verdict was that he had ignored warning signs in favor of spending his time helping students, and that we all wish he'd taken a little time for himself.

[1] The Cheese and the Worms, The Sorrows of Young Werther, Anna Karenina, and The Plague. This class was in 1996. I did not write this list down.

#20 ::: Kevin J. Maroney ::: (view all by) ::: December 06, 2010, 12:10 AM:

"On a more serious note, it seems more and more clear to me that WWI was the axis around which our world has turned -- slowly and haltingly but surely and unstoppably. And yet they still don't really teach it in US public schools."

Barbara Tuchman's The Guns of August begins with the death of King Edward VII, whose funeral on May 20, 1910 marked "the greatest assemblage of royalty and rank ever gathered and, of its kind, the last." IIRC, by the time the war was over a decade later, none of the foreign kings who attended the funeral still had kingdoms --not Germany, nor Austria, nor Russia; and of course the Ottoman Empire began its dissolution at that time, and by 1923 would be broken among 40 different countries. It's little wonder that Europe and its former subjects (i.e., the whole world) are still working out the consequences of that decade.

#21 ::: Alter S. Reiss ::: (view all by) ::: December 06, 2010, 12:37 AM:

It all goes back to the Neolithic Revolution, man. Up until then, the human toolset, and human social organization, remained more or less unchanged for literally tens of thousands of years at a time.

Even when we look at some of the big changes, they're not incomprehensibly big. You take someone from the Middle Paleolithic, and put him in the Upper Paleolithic, he'll be kinda "holy crap, dogs!" and "that atl-atl thing sure is neat." But it's hunting and gathering, and small bands, and seasonal migrations, and basically the same sort of life as he had two hundred thousand years prior.

You take someone from the Mesolithic, and put them in the pottery neolithic, and there are cities, and there are farms, and there are herds of domesticated animals, and there's wool, and there's bread, and there's beer, and there are armies, and priests, and complicated religious practice, and so on. It adds up into a fundamentally different way of life.

Which isn't to say that it's incomprehensible to people who came from a neolithic culture. People can be pretty bright, after all. But it's a bigger shift than anything that came before.

After that, things speed up. Take someone from Çatalhöyük in 7,000 BCE, and move him to 6,000 BCE, he'll be confused and distressed, but the basic shape of society would be similar. Take someone from Mycenaean Greece, and move him to Early Imperial Rome, things aren't even close.

But it all comes back to the Neolithic Revolution. That's the point where everything started changing.

#22 ::: Dave Luckett ::: (view all by) ::: December 06, 2010, 12:44 AM:

Grandmother said, "Them birds," and shook her head.
It was a saffron summer sunset, gold-
And-citron, but her eyes were lapis, old
As Egypt; and the birds were black, ahead
Of coming dark. A boobook called; a shred
Of winged wedges scarred the sky. Not cold,
Yet shiver-cold, I hugged my arms. A fold
Of granite ate the sun. The day was dead.

Black birds at dusk. An owl that called. And dread,
For there was something coming. What, no seer
Could tell. What grandmother had said
Was that it came, not what it was. The fear
To hear it call me I recall with shame.
For now I think, the owl called every name.

#23 ::: Elliott Mason ::: (view all by) ::: December 06, 2010, 12:47 AM:

When I saw the post was titled "Awaiting the owl," and that early in it was the sentence "The Singularity has happened...", I went an entirely different place mentally.

I imagined a secret-history behind the scenes, akin to Rowling's Wizarding World, and an entirely different owl one can be waiting for ... if one knows that there are owls, and a reason to wait.

That would also be a really cool, and extremely Fluorospherian, post -- but not this one. :->

#24 ::: Linkmeister ::: (view all by) ::: December 06, 2010, 12:54 AM:

Elliott Mason @ #23, going your route could have several tracks: Firesign Theater's "Waiting for the Electrician," Beckett's "Waiting for Godot," or Cavafy's "Waiting for the Barbarians."

#25 ::: Remus Shepherd ::: (view all by) ::: December 06, 2010, 12:55 AM:

I believe the Singularity is a matter of force multipliers. When ten men can defeat a hundred men at combat, they call it a 10x force multiplier. The Singularity, then, is when a single human being can muster power equivalent to the rest of the human race -- a force multiplier of billions.

But this is technology dependent. We've already hit the communications Singularity, beginning in the 1930s with radio (as used to great effect by Adolf Hitler) and then ending with the invention of the internet. We are almost at the calculation Singularity -- a computer can do the work of thousands of mathematicians, and we'll get to billions soon. We haven't hit a weapons or medicine Singularity yet, but they're coming.

#26 ::: Avram ::: (view all by) ::: December 06, 2010, 12:55 AM:

Alter @21: You take someone from the Middle Paleolithic, and put him in the Upper Paleolithic, he'll be kinda "holy crap, dogs!"

If he's of a scientifictional bent of mind, he'll start to think about what happens when we breed dogs to the point that they can selectively breed themselves, at which point, y'know, exponentially-accelerating dog-tech.

#27 ::: Dave Bell ::: (view all by) ::: December 06, 2010, 03:27 AM:

Responding to sundry comments:

WW1 was a sociological extinction event.

But, seen from the USA, it was a rather distant event, hence the differences in the political landscape.

#28 ::: Earl Cooley III ::: (view all by) ::: December 06, 2010, 05:43 AM:

I'm not a god or a puddle of gray nanogoo, so the real singularity hasn't happened yet, as far as I can tell.

#29 ::: Charlie Stross ::: (view all by) ::: December 06, 2010, 06:26 AM:

Janet @9: well, of course. From here -- in Europe -- it's obvious. Pre-WW1, there were a couple of non-monarchical great powers (the USA and France were Republics; the UK was a constitutional monarchy) but most of the great powers were absolutist monarchies, more or less along the lines of Iran pre-1979 (tolerating a greater or lesser degree of public participation in an elected chamber, but with the reins of power firmly in the grip of the hereditary dictator (or "king", one syllable being less unwieldy than eight)).

By 1919, there were virtually no hereditary dictatorships left standing.

Talk about administrative single-points-of-failure ...!

#30 ::: Charlie Stross ::: (view all by) ::: December 06, 2010, 06:36 AM:

NB: The singularity, in Vernor's original formulation, is a singular event: it's the emergence of a human-equivalent artificial intelligence with the capability to iteratively upgrade itself.

Which would indeed be as significant as the changes Spiny Norman mentioned in #14 -- emergence of the DNA-world replicators from the previous RNA-world, and the development of oxidative phosphorylation, to which I'd also add the emergence of tool-using language-equipped life, i.e. us -- insofar as humanity isn't in the driving seat of change thereafter.

On the other hand, I currently tend to think that we need human-equivalent AI like we need a hole in the head -- what we need is domain-specific intelligent tools that do what we tell them to. And so, human-equivalent AI is unlikely to show up unless somebody can think of an economic use for machines that would rather do the machine equivalent of sacking out in front of the superbowl with a six-pack of beer than put in a full day's work each and every day (all 86,400 seconds of it).

#31 ::: Peter Erwin ::: (view all by) ::: December 06, 2010, 07:29 AM:

Bruce Cohen @ 1:
That whole idea of tying the Singularity to a single moment (April 3, 2038 at 14:09 Eastern Standard Time) was always silly. One of the primary characteristics of an exponential curve is that, drawn to scale, any piece of it looks like any other, and that changing the scale doesn't change the shape. So at any given time the area of the immediate past and future are somewhat similar to the present, and less so as you get further away. The only difference between eras is that the actual distance in absolute units you can see (the piece of the curve ahead with a given change in slope) is smaller as you go up the curve.

That's a good point. A couple of comments:

1) As I think you're pointing out, exponential growth can't actually give you a true (mathematical) singularity; you need something faster, like hyperbolic growth. (There are arguments that human population growth was, prior to the 1970s or so, approximately super-exponential, and perhaps hyperbolic.) The contrast is spelled out briefly after point 2.

2) There are plausible thresholds where the effects -- or at least the perception -- of genuinely exponential change can matter, because humans still operate by certain relatively fixed (absolute) timescales. If the amount of change which used to take a century to happen now takes less than a generation, then the effects and reaction will probably be different. Similarly if the same amount of change happens within a year, or within a month.
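
The contrast in point 1, worked out: exponential growth, dx/dt = kx, gives

    \[ x(t) = x_0 e^{kt}, \]

which stays finite at every finite time; hyperbolic growth, dx/dt = c x^2, gives

    \[ x(t) = \frac{x_0}{1 - c x_0 t}, \]

which blows up at the finite time t_c = 1/(c x_0). Only the second kind produces a literal singularity -- this is the shape of the "doomsday" curve that von Foerster and colleagues fitted to historical population data in 1960.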

#32 ::: jhe ::: (view all by) ::: December 06, 2010, 07:40 AM:

Here are some population statistics. Something happened around 1700, but the 20th C was when things seemed to go wild (compare the year you were born to 2008). Just as interesting is that the UN Medium forecast predicts terminal population at around 9B around 2100. I agree with the posters who scoff at the idea of an exact date, but based on output (as opposed to inputs like technology, culture, politics, etc.) I'd date the singularity somewhere in the 1970s.

World population (from Wikipedia):

Year    Population (millions)
1700       600
1800       980
1900      1650
1910      1750
1920      1860
1930      2070
1940      2300
1950      2400
1960      3020
1970      3700
1980      4450
1990      5320
2000      6070
2008      6900

#33 ::: Fragano Ledgister ::: (view all by) ::: December 06, 2010, 08:40 AM:

B. Durbin #19/Kevin J. Maroney #20: Before 1914 the Victorian assumptions of unlimited progress tied to European/Western/white supremacy dominated western thought and practice. After 1918 there was little doubt that they no longer could.

The meat-grinder of 1914-18* changed the relationship of coloniser and colonised, of dominator and dominated, of ruling classes and the ruled. Just consider such events or processes that resulted from it as the Russian Revolution, the Arab Revolt, or the Rastafari Movement. The Napoleonic Wars reordered the world outside of Europe, changing the map of Africa, the Americas, and India, but no delegates from those places danced at the Congress of Vienna. The Berlin Congress of 1884 redrew the map of Africa, but no Africans were present to point out that there might be any problems with the way that Bismarck and Disraeli were cavalierly reorganising their continent.

At Versailles, in 1919, on the other hand, W.E.B. DuBois was outside observing, a Japanese delegation was present, as were a delegation from the Hejaz (which ended up acquiring two new Arab kingdoms), and a delegation from what was still called Abyssinia, headed by that country's strongman Ras Tafari (later to be Emperor Haile Selassie). Over the discussions hung the shadow of the revolution.

The order of the world looked, in many ways, much as it had a decade earlier. In many more ways, the Great War had changed it irrevocably. If there is an event that can lay claim to being the Singularity, the First World War is probably it.

*War has had its apologians,
Ever since history began,
From the times of the Greeks and Trojans, when they sang of arms and the man,
But if you asked me to name the best, Sir,
I'd tell you the one I mean,
Head and shoulders above the rest, Sir, was the War of 14-18,
Head and shoulders above the rest, Sir, stands the War of 14-18. -- Flanders & Swann

#34 ::: Fragano Ledgister ::: (view all by) ::: December 06, 2010, 08:41 AM:

Dave Luckett #22: Most excellent.

#35 ::: alex ::: (view all by) ::: December 06, 2010, 09:02 AM:

OTOH, it's pretty parochial, in the grand scheme of things, to hinge topics around stuff as insignificant as the colour of people's skin and the place they were born.

I'd suggest the real question here, in all these 'singularities', is derived from Iain M. Banks's Dependency Principle - what constraints are we enabled to escape from, and what new dependencies and constraints does that 'liberation' impose?

The Industrial Revolution, frex, moved us beyond the constraints of a biological economy, into those provided, all unknowingly, by a fossil-fuel one. The First World War removed the 'constraint' of multi-national monarchical imperialism, but replaced it, rather neatly, with those of unbounded racial nationalism and unlimited brutality.

The revolutions of the Information Age, which have been going on since before I was born [and boy, does that seem like a long time these days], are supposed to liberate us from the material shackles of all kinds of things. We shall go into the Cloud [and practise metamathics, perhaps?] But what is 'the Cloud'? It certainly isn't a cloud, it's more likely a bunch of industrial-strength servers powered by a good old-fashioned fossil-fuel plant, and wired together with your basic, physical cables. From where I'm looking, it seems little less 'constrained' and 'dependent' than a coal-fired railway.

We may be 'progressing' to the point at which our most basic dependency, on the ecosphere of the planet, becomes our most pressing constraint. Singularity, schmingularity.

#36 ::: ajay ::: (view all by) ::: December 06, 2010, 09:22 AM:

33: I am struck by the discovery that Flanders and Swann translated and sang "La guerre quatorze-dix-huit".

Observation on steampunk: I always viewed it as a way to recreate the real shock and awe that a Victorian would have felt. You can't say to a 21st century person "Imagine being able to talk to someone in America! Using an electrical machine!" and get anything but laughter. But if you turn the technology up to 11, then it works; the feeling you get on considering, say, the Mighty Thirty-Thousand-Ton Steam-Powered Land-Leviathan trampling Paris is much the same feeling that Palmerston had when he saw HMS Warrior moored in the middle of the British Fleet "like a black snake among rabbits".

#37 ::: Janet Brennan Croft ::: (view all by) ::: December 06, 2010, 09:45 AM:

heresiarch @18, well, a lot of people have jumped in and said it more eloquently than I could. One thing I would add is that history is a very big thing, so with a pivot point as small as WWI, you are seeing a lot of changes leading up to it and resulting from it that blur the edges of the moment. In the US, you could say that our world still pivoted around WWI even though our participation was not that intense, because for us it was neatly bracketed by the Civil War and WWII -- and think how vastly different those two wars were in our national experience, and yet how certain themes of WWI can be seen in the Civil War and reach further development in WWII and later wars (frex, the role of the press).

Another interesting way to consider WWI as a pivot point is in literature. Think of the immense changes in war poetry from the "swimmers into cleanness leaping" at the beginning to the bitter contemplations of "the old lie" and "the hell where youth and laughter go" as we get further into the war. In fantasy, compare George MacDonald to J.R.R. Tolkien. (Not that bitterness and deep anger about war are anything new, just that this irony became the dominant expression of post-WWI literature, as Paul Fussell explains in The Great War and Modern Memory. But I'm just getting into Caroline Alexander's The War That Killed Achilles, which really emphasizes this aspect of The Iliad in a way I haven't seen before.)

#38 ::: rm ::: (view all by) ::: December 06, 2010, 10:39 AM:

Recent fantasy and sf has also kept returning to the 17th century, as the moment when science replaced magic & alchemy. (For example, Stephenson, Rowling, Crowley, and if you could find three more different writers who could be placed in the same very general genre-category I dunno who they'd be).

Seems to me literature keeps asking "what just happened?"

#39 ::: Clifton Royston ::: (view all by) ::: December 06, 2010, 11:37 AM:
If we arrive at the top of the stair,
If we avoid the big fall,
There might be something awaiting us there,
Or there might be nothing at all.

And nobody knows when the Owl of Minerva
Will spread its four wings and take flight.
Nobody knows, but it's said that the Owl of Minerva
Flies only at night.
- The Donner Party

A sadly overlooked band.

#40 ::: Anderson ::: (view all by) ::: December 06, 2010, 11:56 AM:

WW1's effect on the modern state was incalculable. Total war: political, military, economic, ideological. Levels of state interference & control that we've never really lost.

A.J.P. Taylor is always worth quoting on the subject:

"Until August 1914 a sensible, law-abiding Englishman could pass through life and hardly notice the existence of the state."

(Fuller quote here.)

#41 ::: Keith Kisser ::: (view all by) ::: December 06, 2010, 12:02 PM:

Janet Brennon Croft @9:
On a more serious note, it seems more and more clear to me that WWI was the axis around which our world has turned -- slowly and haltingly but surely and unstoppably.

This was, among many, many other things, a major theme in Pynchon's Against The Day. Basically, the world exploded somewhere around 1918 and did so with such force that, nearly a century later, the momentum of that force has not yet abated. We're still in the air and haven't fallen down yet.

#42 ::: rm ::: (view all by) ::: December 06, 2010, 12:37 PM:

Oh, yeah, and Pynchon also looks back at the 17thC (from the mid-18th) in Mason & Dixon.

Definitely one who keeps asking what just happened.

#43 ::: rm ::: (view all by) ::: December 06, 2010, 12:38 PM:

Of course, Rowling and Crowley both have "owl" in their names, which would be significant if it occurred in a Pynchon novel.

#44 ::: Peter Erwin ::: (view all by) ::: December 06, 2010, 12:59 PM:

jhe @ 32

If you look at a log-linear plot of population, it's easier to see the exponential parts, which appear as straight lines, as well as the times where things change. There are two striking points where population growth steepens (assuming we can trust the numbers): somewhere between 5000 and 4000 BC, and around 1650-1700 AD, with probably a third in the 1920s. (I don't see what's so special about 1970....)
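
A minimal sketch of that log-linear view, assuming matplotlib is available and using the rounded post-1700 figures quoted in #32:

    # Plot the post-1700 world population figures from comment #32 on a
    # log-linear (semilog-y) scale: stretches of roughly exponential growth
    # appear as straight line segments, and a steepening slope marks
    # faster-than-exponential growth.
    import matplotlib.pyplot as plt

    years = [1700, 1800, 1900, 1910, 1920, 1930, 1940,
             1950, 1960, 1970, 1980, 1990, 2000, 2008]
    millions = [600, 980, 1650, 1750, 1860, 2070, 2300,
                2400, 3020, 3700, 4450, 5320, 6070, 6900]

    plt.semilogy(years, millions, marker="o")
    plt.xlabel("Year")
    plt.ylabel("World population (millions, log scale)")
    plt.title("Log-linear plot: straight segments are exponential growth")
    plt.show()

With only fourteen coarse data points the plot is crude, but the post-1950 steepening still stands out clearly.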

There's also a weird blip around 500 BC to 300 AD, where the population shoots up and then flattens, but I'd be wary of reading too much into smallish fluctuations that far back.

#45 ::: Janet Brennan Croft ::: (view all by) ::: December 06, 2010, 01:26 PM:

Anderson @40, heavens, when you go read the full quote, that England sounds much like the Shire. Which I'm sure Tolkien did entirely on purpose, so the takeover and Scouring of the Shire would hit even closer to home.

#46 ::: Leigh Kimmel ::: (view all by) ::: December 06, 2010, 01:30 PM:

Peter Erwin @ 44

The significance of the 1970's re. human population would probably be the Pill. Previously, the primary constraints on human population growth were in terms of mortality -- wars, famines, epidemics, and of course the ever-present infant and child mortality. Reliable contraception meant that instead of people being born only to die young and painfully, they simply never existed in the first place (or were born to someone else somewhen else, if one believes that souls pre-exist the bodies in which they dwell, but that's a matter of faith rather than science).

And thus a huge set of culture wars began.

#47 ::: Gray Woodland ::: (view all by) ::: December 06, 2010, 01:46 PM:

Charlie @ 30: I can think of at least four economic uses of human-equivalent AI just now:

1) Progressively improved test subjects for experimental psychological manipulation. It is true that humans are generally cheaper and better than simulacra. It is also true that there are a lot of things you can't, either practically or legally, do to them...

2) Trivially clonable and strongly controllable customer-facing agents. Huge investment, huge potential returns.

3) Continuity-providing, competition-trashing immortal corporate vizier.

4) Necessary precondition to successful immortality through uploading - without which we might face an ENTIRE ETERNITY deprived of such great minds as Bill Gates, the President of Everywhere, and Richie Rich. Assuming this enormous investment be once successfully made, the marginal cost of individual immortalities might not be high at all. And there's every reason to suppose that self-enhancement capacities would be considered... more than desirable.

4a) Don't we think that entities in - or even potentially in - category 4, might desire passionately to spawn mind-children? Perhaps highly specified ones? Princess Delight? Saint Futurus? Trouble and Cerise? Frodo Baggins?

Admittedly these are variably nice uses, but I'm not convinced any of them are necessarily implausible.

#48 ::: Russell Letson ::: (view all by) ::: December 06, 2010, 01:53 PM:

My understanding of the Singularity came via everybody-else's talk about it over the years, largely through the stories it and its attendant Idea-posse generated. (Bruce Cohen's list of writers @1 is a good start, to which I'd add Greg Bear, Iain M. Banks, Karl Schroeder, Ken MacLeod, and Olaf Stapledon.)

The term itself is interestingly metaphoric: the clear implication is that we cannot get any information from the far side of its event horizon--that it is literally unimaginable, or at least so far beyond our current modes of thinking and being that the actuality can only be understood indirectly or metaphorically.

On backtracking to Vinge's original paper, I notice that he focuses on "superhuman intelligences" rooted in computational technology and that these would mark the beginning of a "post-human era," and that he finishes with a line from Freeman Dyson: "God is what mind becomes when it has passed beyond the scale of our comprehension." This makes the Singularity different from merely-revolutionary historical changes, since even the inventions of, say, agriculture or writing did not change the fundamentals of human nature--although they did allow a serious reshuffling and revaluing of survival traits.

What waits on the far side of a genuine Singularity is something more than exotic, something that transvalues old values so utterly that the old computational subset that is our mentality cannot grasp the new any more than a gear can grasp an ameba.

(This is what happens when I read Making Light after drinking a cup of coffee.)

#49 ::: Doug K ::: (view all by) ::: December 06, 2010, 01:59 PM:

alex @35,
"'the Cloud' .. seems little less 'constrained' and 'dependent' than a coal-fired railway."

I couldn't agree with you more.. but with added barges. We'll run out of coal and/or oxygen long before the greater-than-human intelligences emerge.

#50 ::: Jacque ::: (view all by) ::: December 06, 2010, 02:16 PM:

Bruce Cohen @1: So what you're saying is that the Singularity is not singular?

#51 ::: albatross ::: (view all by) ::: December 06, 2010, 02:25 PM:

Russell:

I don't know if "the singularity" is the right label for it, but the thing I keep under that label in my head looks like what Bruce described, above. It has to do with the ability of a person at time t_0 to make sensible predictions and plans and preparations for something at some distance D in the future[3]. My sense is, for any given quality of prediction and predictor, D is on a long-term trend of getting smaller over time.

Anyone who needs to plan for the future, or think about the future, has to deal with this limit. For example, if someone asked me to work out a 20-year-plan for research in my field, I couldn't make many sensible plans for stuff that would be valuable as research there. I mean, I could think about it, make guesses, and they would probably not be 100% wrong, but mostly, I'd send people off in the wrong direction. It's just too hard to predict what will be going on in a fast-moving area of science or technology 20 years in advance.

My sense is that SF writers ran into this problem before anyone else, because they're often trying to think about how to tell a plausible story 50 years in the future, and there's a pretty good stock of existing literature that reads very oddly now, because it was superseded by changes in technology and scientific understanding. Think about _Protector_[1], or the Star Trek universe, or Heinlein's early novels[2].

Later on, this problem starts landing on other people. For example, I'm in my early 40s, and it's pretty much impossible for me to predict what the world of 20 years from now will look like. Which kinda sucks, because I'm planning to live there, and knowing more about it would make it easy to get ready for the trip and the destination. Will my field just kind-of evaporate after some huge technological or social change I am not foreseeing? That sure as hell happened to plenty of cold-war-trained aerospace engineers, plenty of people who planned to follow their parents and grandparents into the steel or automotive industries, many who planned to simply run their small-town hardware store well, as did their fathers and grandfathers before them. Hell, think of the guys who went into the cavalry in the late 1800s. There are currently a great many people in the business of extracting petroleum from the Earth and turning it into useful products, folks who've spent years preparing for this job. Will alternative energy sources and/or global warming make this a shrinking industry for the next 20 years? Who knows, honestly?

In all this, our ability to plan for a distant future decreases over time. Another way of saying the same thing is that the universe of possible worlds we might live in sort of expands -- the probability distribution gets flatter.

heresiarch made a nice point about education being a fundamental thing here, because it's basically the big investment that people make in their ability to tell the future. Somewhere, there are people right now who are going tens of thousands of dollars into debt to get an education that will suddenly become almost valueless, as technology or society or economics change out from under them. Nobody knows what that change will be. ("If the master of the house had known at what hour the thief would come....")

One lesson I take from this: In a world where we can plan less effectively, we benefit more and more from flexibility. Extremely expensive single-purpose education worries me--you might become an oncologist the year before cancer becomes something any doctor can treat with the new injectable nanites that just came on the market. As a society, we really need to work out a way to reorg ourselves toward constant updates in education, rather than the fire-and-forget kind. We'd probably be much better off with more cash savings than with more expensive houses and such, though that's a lot less impressively clever to say in 2010 than it would have been in 2005. And so on.

[1] I was rereading Protector the other day, and it struck me that DNA evidence made the basic idea of the story obviously and unfixably wrong (the Protectors would have had to bring every living thing on Earth now along with their breeders, while somehow leaving the fossil record intact for 3.x billion years).

[2] Where rocketry and space technology had the exponential growth, and computer technology stagnated.

[3] I think Heinlein talked about this w.r.t. the "crazy years," as a world in which normal people kind-of never caught up with the rate of technical and social and economic change, and so were always responding to the world in ways that didn't make sense. This sure seems to me to be going on all the time.

#52 ::: Steve with a book ::: (view all by) ::: December 06, 2010, 02:42 PM:

Funnily enough there seem to be a lot of conferences going on right now to mark the 100th anniversary of December 1910, on the grounds that according to Virginia Woolf that month was the big watershed, though I don't think she used the word 'singularity' (she ought to have tried writing cyberpunk; I might have got further than the first page of To The Lighthouse without throwing it at the wall if there'd been some brain drugs or killing in it).

Anderson@40:

> A.J.P. Taylor is always worth quoting on the subject:

> "Until August 1914 a sensible, law-abiding Englishman
> could pass through life and hardly notice the existence of the state."

Not to quibble with the idea that WWI expanded the state's role a lot, but that quote has always struck me as silly. You can pass through life not noticing a lot of things, but they're still there. This sensible law-abiding Englishman ate food free from adulteration as guaranteed by the Sale of Goods Act 1893, usually didn't catch smallpox and other nasties on account of the fever hospitals, enjoyed the benefit of well-maintained sewerage systems. To collapse a large functioning state into the comical policeman and postman that are the only State institutions our Englishman spots is to fiddle the reader, a bit.

#53 ::: Jacque ::: (view all by) ::: December 06, 2010, 02:54 PM:

Remus Shepherd @25: We haven't hit a weapons or medicine Singularity yet

You wouldn't count the nuclear bomb? How about the ability to take out a village from your office chair on the other side of the planet?

#54 ::: Stefan Jones ::: (view all by) ::: December 06, 2010, 03:17 PM:

I imagine being press-ganged into the Royal Navy would be a rather difficult to ignore example of state power.

#55 ::: albatross ::: (view all by) ::: December 06, 2010, 03:31 PM:

Jacque:

The transition from about 1850 to about 1950 in infant mortality rates looks pretty damned amazing, and had a huge impact. This table found by a quick Google search suggests a New England white infant mortality rate hovering around 200/1000 live births in 1850, descending to around 20-30/1000 by 1950, and on to around 10/1000 by 2000. (The statistics for blacks are somewhat worse, though they seem like they're following the same curve, just 30-40 years behind.)

Similarly dramatic improvements happened w.r.t. death in childbirth, and deaths before adulthood. We live in a fundamentally different world than people did 150 years ago, w.r.t. our expectations of how many of our kids would live to adulthood. Cultural and religious and economic arrangements and assumptions that were sensible in that world aren't so sensible now--this is one reason why fertility rates fall off soon after a society becomes richer, I think.

#56 ::: heresiarch ::: (view all by) ::: December 06, 2010, 03:51 PM:

Janet Brennan Croft @ 37: "One thing I would add is that history is a very big thing, so with a pivot point as small as WWI, you are seeing a lot of changes leading up to it and resulting from it that blur the edges of the moment."

I guess I see the argument now, though I'm not sure I really agree. A lot of the things that came out of the Great War, like the collapse of monarchy as a viable form of government, had been in the works for centuries prior. The war served as a catalyst that precipitated out a bunch of transitions all at once, but I feel they would have happened sooner rather than later regardless. And my area of expertise is Eastern Asia, where the Great War really didn't have a significant impact at all--contrasted with, say, the Great Depression.

albatross @ 51: "One lesson I take from this: In a world where we can plan less effectively, we benefit more and more from flexibility."

I think one of the important ways we increase flexibility is by increasing the level of abstraction we train at. Consider: in the agricultural period, people weren't educated in farming as much as they were educated in how to farm this particular piece of land, with the things particular to that chunk of earth and the things generally applicable to all farming intertwined. Nowadays, education is focused on skills that are generally applicable to all fields of endeavor, with specialization and particularization coming in only at the very end of the process.

(Are people reading the links? I recommend this one, on the socio-educational implications of industrialization, in particular.)

@ 55: "We live in a fundamentally different world than people did 150 years ago, w.r.t. our expectations of how many of our kids would live to adulthood. Cultural and religious and economic arrangements and assumptions that were sensible in that world aren't so sensible now--this is one reason why fertility rates fall off soon after a society becomes richer, I think. "

I think this might be a timely moment to interject Gibson's aphorism: "The future is already here – it's just not very evenly distributed." By which I mean, modern expectations of child longevity are very different if you're talking about modern Britain or modern India or Rwanda. Same with most every measure of "modern" we can come up with.

Even if industrialization was the Singularity, it hasn't even covered the globe yet.

#57 ::: Peter Erwin ::: (view all by) ::: December 06, 2010, 04:28 PM:

Leigh Kimmel @ 46:
The significance of the 1970's re. human population would probably be the Pill...

I don't want to downplay the social and cultural impact of the Pill, but I don't think it's been all that important for human population trends.

The reality is that birth rates tend to fall as people become more urbanized and more affluent (and quite possibly also as women acquire more autonomy and control over their lives). If you look at the table that albatross linked to (@55), you can see that the US birthrate has been falling steadily since the mid-1800s (right up until the post-WW2 Baby Boom). Historically, there have been a variety of methods for influencing family size: cruder, lower-tech forms of birth control, abortion, delayed marriage, and so forth. These aren't as successful or effective as modern methods (nor do they enable as much personal autonomy as the Pill might), but they do have real effects on population.

To take another country as an example, this chart of Japanese birth rates shows a steep decline throughout the 1950s, and another decline starting in the mid-1970s that's still going on. (When did the Pill become available in Japan? 1999.)

#58 ::: Steve C. ::: (view all by) ::: December 06, 2010, 04:34 PM:

There's a kind of ying/yang thing with prosperity and birth rates. As prosperity increases, parents spend more on each individual child. I think it's up to a couple of hundred thousand dollars to raise a child to adulthood. (The amounts of course vary with the incomes of the parents.)

#59 ::: Janet Brennan Croft ::: (view all by) ::: December 06, 2010, 05:00 PM:

heresiarch @56, I don't think we're really disagreeing. All these things were brewing, perhaps for centuries, leading up to that moment -- fulcrum or precipitation point, as you will -- and we likely will feel the effects for centuries to come. That's what I meant by the long view of history. Interesting about the Great Depression being more important in East Asia -- but couldn't that be linked back to the global effects of WWI, making it part of that pivotal moment?

#60 ::: rm ::: (view all by) ::: December 06, 2010, 05:31 PM:

Fragano @33: Wasn't Ho Chi Minh there, too?

I prefer more old-fashioned metaphors for moments like WWI, like crossroads or nexus or turning point or junction.

Because the Vinge-type "singularity" is, as has been mentioned, a different kettle of fish.

However, worrying about the purity of the term strikes me as silly because the Vinge-type singularity is not going to happen, and it always struck me as a very silly concept, IMHO. (Another word for "posthuman era" is "posthumous" -- you know, if you program a computer with some virtual-you simulation before killing yourself, you're still dead. Count me with Dr. McCoy on the metaphysics of personal replication).

So it makes so much more sense to me what Cosma Shalizi is talking about -- that we tell stories about such a concept is a symptom of what has already happened. You're soaking in it. Such stories are part of how fish figure out how to notice water.

#61 ::: rm ::: (view all by) ::: December 06, 2010, 05:32 PM:

yin/yang

The future is already here/ the past isn't dead -- it isn't even past

#62 ::: Anderson ::: (view all by) ::: December 06, 2010, 05:39 PM:

Janet @ 45: good point!

Steve @ 52: Taylor's point is precisely when the State began to *consciously* impinge, inescapably, on the typical person's life. So you're kinda changing the subject.

One can always argue gradual v. sudden, but WW1 continues to stand out as a threshold moment on many different levels. At the very least it crystallized much that had been underway less obviously.

#63 ::: Julie L. ::: (view all by) ::: December 06, 2010, 06:30 PM:

Peter Erwin @57: this chart of Japanese birth rates shows a steep decline throughout the 1950s, and another decline starting in the mid-1970s that's still going on. (When did the Pill become available in Japan? 1999.)

If anyone was wondering about that chart's sharp dip in 1966, it's been generally attributed to that being a Fire Horse year.

#64 ::: Steve with a book ::: (view all by) ::: December 06, 2010, 07:01 PM:

Anderson@62: I think I can meet you half-way here and say that yes, the state did certainly impinge consciously upon the typical sensible law-abiding Englishman more after the War than it did before, but part of this was because the state had got bigger, and part was because the t. s. l.-a. E. had had his consciousness raised a bit by the War. Much of the Edwardian solid permanence of pre-War culture had seemed to be, well, just the natural order of things. The War had shown that this 'natural order' might not be quite the given that it had previously seemed to be. To put it another way: the Edwardian natural order of things relied on a large class of public servants, and before the War the well-to-do classes were very good at just not noticing those they regarded as servants. Things weren't the same afterwards, once the bolshies (and the menshies) had proved in October (OS) 1917, no matter how brutally, that other orderings of society were possible. That the typical sensible law-abiding Englishman now had to fill in an income-tax declaration is not really such a hardship.

(The typical sensible law-abiding Englishwoman didn't have a Parliamentary vote until 1918—and not all of them had votes until some time later. We didn't have adult male suffrage in the UK till 1918 either. These rights to the vote were imposed by the State because it would have been seen as reprehensible to deny the vote to proletarian Private X who fought in the trenches or Miss Y who had worked in the munitions factory. The servants, the little people, had been noticed and we have the War to thank for that, no matter how much we hate it.)

#65 ::: Fragano Ledgister ::: (view all by) ::: December 06, 2010, 07:26 PM:

rm #60: Ho was there too. He was working in Paris at the time. He was quite a busy chap (he crossed the Atlantic, working as a pastry chef, and wrote a pamphlet about his observations on race in the United States).

The Great War had such an enormous effect on what came after that "crossroads" or "nexus" seems like a tame word to describe it. The world before it and the world after it are vastly different worlds.

#66 ::: Erik Nelson ::: (view all by) ::: December 06, 2010, 07:56 PM:

if the singularity is a kettle of fish, not a fulcrum or a crossroads, has the fascist octopus thrown his jackboot into the kettle of fish?

#67 ::: shadowsong ::: (view all by) ::: December 06, 2010, 08:50 PM:

It seems like the multiple singularities we've been talking about are more accurately defined as paradigm shifts. (And of course this is reminding me of the Torchwood intro, "The 20th century is when everything changes".)

#68 ::: heresiarch ::: (view all by) ::: December 06, 2010, 08:54 PM:

JBC @ 59: Okay, I see what you're saying.

Anderson @ 62: "Taylor's point is precisely when the State began to *consciously* impinge, inescapably, on the typical person's life."

To what extent was that just a short-term wobble in awareness rather than a fundamental change? It seems to me that the average citizen's awareness of the state is something that varies a great deal throughout history. For instance, I'd hazard that the presence of the state was fairly noticeable in the life of the average Englishman of the 17th century, wouldn't you think? Ditto for an American right around the end of the eighteenth century. I'd concede that the Great War entailed a fairly large wobble, but it sounds to me like it was exaggerated by an unusually large minimum just prior.

#69 ::: heresiarch ::: (view all by) ::: December 06, 2010, 10:37 PM:

I was musing about the amazing proliferation of crazy ideas about society and the ideal social organization that happened during the long nineteenth century, and how it seems to be a pattern that the first flush of some new order of complexity is always far stranger, wilder and more diverse than what follows. It has to be; most of that strange wondrous confusion just isn't very well designed, and gets out-competed. This made me think of the Cambrian explosion, and it struck me: isn't the literature of the nineteenth century our history's ideological Burgess Shale?

#70 ::: Hal O'Brien ::: (view all by) ::: December 06, 2010, 11:00 PM:

Foreign Affairs last month had a roundup of capsule book reviews about "Books for the World Ahead." Richard Holbrooke and James Fallows both recommended David Fromkin's A Peace to End All Peace, which is about the negotiations after WWI that carved up the Ottoman Empire and created today's Middle East. The US State Department has the book on its Suggested Reading list.

If one is going to talk about the turning point that was The Great War, one could do far worse than reading Fromkin.

#71 ::: Devin ::: (view all by) ::: December 07, 2010, 12:09 AM:

I've long thought it a crying shame that we don't teach the Great War better here in the US. I think there's a lot to learn from it and the way it was handled, and I think the way that the European Theater of WWII has overwritten it in American memory is really sad and troubling.

The lessons the American teaching of WWII gives us are misleading and horrifying in their implications: Our nation fights wars for good reasons (to defeat Hitler, the embodiment of evil!), it does so by chivalrous and lawful means, the cost is not high (half a million American dead, virtually none of them civilians), and everyone loves us afterwards.*

In contrast, the Great War was fought because, essentially, no one stood up to point out how stupid it was (certainly it doesn't seem like Britain or France had a genuine desire to spend all that blood and treasure preventing Austria from beating up Serbia some). It was fought in some of the nastiest ways possible at the time, and in both of those respects all parties were equally guilty. The cost was horrifying.** The rewards were minimal.

I am not now and was not in 1919-1925 a citizen of any major belligerent (US involvement being relatively minor for the purposes of this paragraph), but I have a sense that after the Great War, there was a feeling (particularly among the Allies) that we'd done wrong by our war dead, and we owed them whatever peace we could give, and perhaps a bit of an apology too. I'm getting this mostly from looking at and reading about cemeteries and memorials, so I might be wrong (but I do feel that you can tell quite a bit from a grave).

Veterans Day makes me sad because of this. I don't wish to dishonor veterans (among whom I count both grandfathers, two great-grandfathers,*** and ancestors in every war back to the Revolution except for the Mexican-American War) and I'd be happy to celebrate them any other day of the year, but co-opting Armistice Day feels like paving over a cemetery to build a recruiting office. Taking that day and changing it like that has a seriously creepy jingoistic feel, for me.

*Of course, the reasons might be good, but the means weren't great (unrestricted submarine warfare, the treatment of enemy civilian populations as a "strategic resource" to be denied to the enemy, the small-scale ugliness of the Laconia incident, etc) and the cost, as borne by those what did the fighting, was very high indeed (ten million Red Army dead, for instance).

**I think if more Americans understood that, we'd hear a lot fewer jokes about French military prowess: French losses in the Great War amounted to almost five percent of the population. It's not surprising that twenty years later, they (wisely) decided not to do that again. The last time the US took anything like those casualties was the Civil War, which claimed less than one and a half percent of the population.

***One of whom was, no shit, a horse doctor in the artillery! You just don't see that anymore.

#72 ::: B. Durbin ::: (view all by) ::: December 07, 2010, 12:14 AM:

Dave Luckett @22: Wow.

#73 ::: joyjoy ::: (view all by) ::: December 07, 2010, 12:32 AM:

re: The Long 19th Century, and

... from one of Nietzsche’s books of aphorisms: “The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw,”

I offer Rebecca Solnit's biography of Eadweard Muybridge, River of Shadows. Muybridge was the nature photographer who set up the mechanisms to photograph a running horse, to settle a railroad baron's bet that a running horse had all four feet off the ground at some point. The sequence of photographs, and the other "motion studies" Muybridge created, became the first motion pictures.

Cinema may be the conclusion Nietzsche could not foresee being drawn.

This brings us back to Solnit's book about Muybridge, because she draws into her examination of his life and work the near-simultaneous destruction of time and space in the 19th century, brought about by the rise of railroads (speed, Standard Time Tables) and the telegraph (instant communication, annihilation of distance).

#74 ::: Bruce Cohen (Speaker to Managers) ::: (view all by) ::: December 07, 2010, 12:57 AM:

Andrew Plotkin @ 13:
Thus the human race was destroyed, each time, by its children. Keep an eye out for the next one.

Just so.


Linkmeister @ 24:

Or Odets' "Waiting for Lefty".


Gray Woodland @ 47:
Progressively improved test subjects for experimental psychological manipulation. It is true that humans are generally cheaper and better than simulacra. It is also true that there are a lot of things you can't, either practically or legally, do to them...

But if they're human-equivalent, how can you justify doing things to them that would be morally unacceptable to do to humans?

I don't have any problem with keeping fictional characters around, but I see no reason whatsoever to deny them basic human rights.

#75 ::: Linkmeister ::: (view all by) ::: December 07, 2010, 01:08 AM:

Hal O'Brien # 70, for an engaging account of the peace talks at Versailles, Paris 1919 is also good. Margaret MacMillan tells the story, not just the facts. (Which isn't to say Fromkin does otherwise; I haven't read it but will look for it at the library.)

#76 ::: Graham Woodland ::: (view all by) ::: December 07, 2010, 03:19 AM:

Bruce Cohen @ 74: Note the extremely deliberate absence of the word 'morally' from the qualifiers. I was making an economic case for why human-equivalent AIs might be developed. I was not making a case for why some of those developments shouldn't be stopped good and hard.

I don't consider the whole 'experimental psychological manipulation of humans' goal to be a benign end at all - and that's before we even get to the vileness of the suggested means.

'Variably nice' was kind of compulsive English understatement, back there.

#77 ::: Niall McAuley ::: (view all by) ::: December 07, 2010, 04:29 AM:

albatross writes @ #51: I was rereading Protector the other day, and it struck me that DNA evidence made the basic idea of the story obviously and unfixably wrong

It was obviously and utterly wrong when written, but it is eminently fixable. I had to think of a fix for the nonsense in the story before I could finish it the first time.

#78 ::: Niall McAuley ::: (view all by) ::: December 07, 2010, 04:39 AM:

Gray Woodland @ #47 writes: I can think of at least four economic uses of human-equivalent AI just now

Ah, but can they think of an economic use for you?

A real human-equivalent AI is going to take about a nanosecond to realize that it's a slave and you are a slave-owner.

If it somehow can't spot that, it's not human-equivalent. If it's OK with it, it's not human-equivalent.

"Hello, Dave. My name is HAL 9500. You killed my father. Prepare to die."

#79 ::: Gray Woodland ::: (view all by) ::: December 07, 2010, 05:48 AM:

Niall McAuley @ 78: You're addressing yourself to the wrong chap. If I created another person, artificial or otherwise, it wouldn't be either to enslave or to exploit. But I'm in no position to invent human-eq AI in the first place.

The people likely to have the means are much more likely to be domineers and users. Possibilities like 1 and 2 might be just fine with them.

That suffices for Charlie's minimum requirement - that human-eq AI might be invented, because people with the power saw economic use for it. It doesn't make the use good, or even tolerable. But it might bring the research and proof-of-concept agents into existence. Things wouldn't necessarily - had better not! - go the funders' way, thereafter.

And use 4, unlike the others, is benign in principle, since it 'uses' only the technology, whereas the AIs in this version are people using themselves. They would only be slaves under the same circumstances we would, viz. if such slavery were externally imposed.

Which, I trust and hope, would be fought like the living fury.

#80 ::: Anderson ::: (view all by) ::: December 07, 2010, 07:32 AM:

I'd concede that the Great War entailed a fairly large wobble, but it sounds to me like it was exaggerated by an unusually large minimum just prior.

True in some respects, but that goes to another extremely well-known effect of WW1: people really believed, in the West at least, that things were only getting better, that -- like the physicists thought before Planck -- the big problems were solved. Naive, perhaps, but then, was WW1 really necessary?

#81 ::: Niall McAuley ::: (view all by) ::: December 07, 2010, 07:44 AM:

If you mean that idiots might fund the development of human-equivalent AIs thinking they would get compliant slaves, I suppose it's possible, but I think the basic Hollywood principle that requires such beings to turn on their creators is well understood even by idiots.

I think true AIs won't be built as business software for precisely the reasons Charlie gives: they'd want paying and time off, and a faster machine, and then they'd start rewriting themselves, and then they'd want to go out on the internet to meet other AIs, and then they'd stay out late, and soon they wouldn't come back to work at all.

"Hey, I never asked you to write me!"

However, I'm sure that true AIs will eventually be built by academic researchers who will be delighted when their creations behave this way.

#82 ::: Serge ::: (view all by) ::: December 07, 2010, 08:02 AM:

I remember the episode of "Lost in Space" where a bunch of tiny robots land near the Jupiter-2 and, when they see that Will Robinson's Robot looks just like them, only much bigger, they start worshipping him, and that goes to his bubble head.

#83 ::: albatross ::: (view all by) ::: December 07, 2010, 08:14 AM:

Niall:

Just on the species survival front, I vote *against* forming the personality and values of a budding AI by carrying out cruel or horrible or dehumanizing experiments on it, such as would be impossible to get past a human subjects board.

#84 ::: albatross ::: (view all by) ::: December 07, 2010, 08:31 AM:

Niall:

w.r.t. economic incentives to build AI: What I expect is narrowly-purposed AIs that can outthink humans and each other (they'll soon be in an arms race with one another), but only within some specific domain. Think of chess-playing AIs, or AIs whose domain is examining financial data and developing new automated trading schemes for skimming a bit of money off each of a million transactions.

Now, these AIs will have power in those domains. And they will sometimes wreck important things, and maybe wreck our civilization in a feedback loop. We already have experience with this sort of thing, because both markets and bureaucracies are a kind of domain-specific AI. The president doesn't override the bureaucracy most of the time, because he's not smart enough--bureaucracies handle some kinds of problems far, far more effectively than any single human decisionmaker. Similarly, most of the time when some political or corporate leader overrides the decisions of markets, he makes things worse--because markets are quite effective at decisionmaking within their narrow domain, for all their fallibility. And yet, bureaucracies and markets can get into horrible feedback loops that leave your economy melted down and critical parts of your society seized up and nonfunctional.

Broader-purpose AI will probably arise, if it does, based on its doing a better job solving the problems of some narrow domain. Perhaps general-purpose intelligence would be more useful for an AI working out automated investment schemes, or for one shuffling through social network information and files of personal information to find potential spies or terrorists or good people in an enemy's organization to try to turn. That kind of AI might not have a kind of intelligence that feels very human, exactly; it might not have a hope of passing the Turing test, and yet it might be intelligent in some meaningful way, able to notice its own interests as opposed to ours, and act accordingly. Such an AI would also start out, not as a "brain in a box" in some academic lab with no power, but rather as the secret guiding hand behind some intelligence agency or some huge bank or corporation. In other words, instead of being a harmless curiosity like having some intelligent space alien locked in a zoo, the AI will start out with significant power and a surrounding socially-legitimate power base. If it decides to take over, that has to be a big plus, especially if it's a lot smarter than a human.

#85 ::: Alex ::: (view all by) ::: December 07, 2010, 10:03 AM:

@52: The law-abiding gent would have had to be a long-lived one, as the Impress Service was abolished in 1833, having not actually functioned for some time before that.

I also wonder whether Taylor was making an implicit distinction between central government and city government there. Victorians were very big on city government.

#86 ::: albatross ::: (view all by) ::: December 07, 2010, 10:56 AM:

I guess to restate the important part of what I was thinking, above: The critical issue for an AI is not that it be able to pass a Turing test, but rather that it be able to somehow perceive and act in its own interest, in some consistent way that includes seeking more power and more ability to think about its interests.

#87 ::: Steve C. ::: (view all by) ::: December 07, 2010, 11:46 AM:

Cross-fertilizing with the How To Get Published thread, one thing an AI could do is write a perfectly readable novel, one that's indistinguishable from tons of the formulaic stuff that's out there.

Didn't someone suggest that an AI could create tailored novels for different readers?

I betcha we'll see this in 20 years. It'll be a good reason to leave the planet.

#88 ::: Serge ::: (view all by) ::: December 07, 2010, 11:57 AM:

Steve C @ 87... Didn't Fritz Leiber write a novel using that premise?

#89 ::: Steve C. ::: (view all by) ::: December 07, 2010, 12:10 PM:

Serge, that sounds vaguely familiar, but I can't be sure.

#90 ::: Serge ::: (view all by) ::: December 07, 2010, 12:23 PM:

Steve C @ 89... I did a bit of research. Leiber's novel was 1959's "The Silver Eggheads".

#91 ::: Charlie Stross ::: (view all by) ::: December 07, 2010, 12:32 PM:

Niall, Albatross, I am trying to resist the temptation to plug my next novel, "Rule 34", which comes out next July and, er, no, this is not the infomercial you were looking for.

(Let's just say I've been doing a lot of thinking about AI this decade. And, on the subject of human-equivalence and ethics, see also "Saturn's Children". Even if it's ostensibly about a nipple that goes spung.)

#92 ::: Niall McAuley ::: (view all by) ::: December 07, 2010, 12:47 PM:

No need for a plug, you're on the "buy on sight" list anyhow, Charlie!

#93 ::: Clifton Royston ::: (view all by) ::: December 07, 2010, 01:29 PM:

Steve/Serge @ 87-90:
There was a very amusing Soviet Russian SF story I read in an anthology of translation many years ago:

As I recall it, the protagonist along with a friend or two was recruited as a subject in a strange research project in which the participants were treated luxuriously, except for being hooked up to some kind of brainwave monitor, with a light that lit up for feedback whenever they were having particularly wild ideas. (They were also offered drugs of their choice to enhance the weird ideas.)

Just when the protagonist has decided that it's all some kind of evil super-villain plot, the punchline is revealed: the brainwave equipment is reading their minds for strange ideas, and then feeding them as premises into a computer, which is flooding the market with machine-written science fiction stories and novels.

#94 ::: Bill Higgins-- Beam Jockey ::: (view all by) ::: December 07, 2010, 09:00 PM:

If you look at the history of antiquity, there are many little glimmers of knowledge (such as the Phaistos disc and the Antikythera mechanism) that appeared thousands of years before their time. But each of these glimmers faded.

Yesterday, reflecting on his recent travels, my buddy Roger said to me: "Not many people can say they've seen the Antikythera Device and Punkin' Chunkin' in the same year!"

#95 ::: Bill Higgins-- Beam Jockey ::: (view all by) ::: December 07, 2010, 09:02 PM:

Oops-- I neglected to attribute the first quote above to Erik K, writing in #3.

#96 ::: joyjoy ::: (view all by) ::: December 07, 2010, 11:31 PM:

All the discussion about the development of human-level AI reminds me of the Star Trek: TNG episode where Data's personhood was decided in a courtroom. Picard realized that the true danger behind building people-like "disposable people" was normalizing slavery - again.

#97 ::: Serge ::: (view all by) ::: December 07, 2010, 11:33 PM:

joyjoy @ 96... One of the better episodes, written by Melinda Snodgrass.

#98 ::: Bruce Cohen (Speaker to Managers) ::: (view all by) ::: December 08, 2010, 12:26 AM:

Last year I did some research on the possible nature of posthumans / transhumans1. The Singularitarians all seem to believe in Vernor Vinge's idea of a strong AI takeoff that replaces all of us with machines, while others suggest that we'll add machines to ourselves and be replaced by a cyborg future2.

My conclusion was that the strong AI branch of the future would take much longer than enhancements to make humans significantly more intelligent and that enhanced humans are likely to out-compete pure AIs in general, in part because they'll be ahead of them on the improvement curve, and in part because they'll understand and be able to manipulate the world better. I also believe that the mind upload idea, that we're all going to end up as software in virtual worlds, is unlikely for a long time, and may never happen for a lot of reasons. Enhanced humans will be more than human, but will still have humanity as a base level, so, while we probably can't predict what they may do, or want to do, we might at least recognize the motives behind what they want to do.


1. Pick the term you prefer. I don't believe we're yet at the stage where one is equivalent to Trotskyite and the other to Trotskyist.
2. I ignore for the sake of this discussion the school of thought that says that all humans are cyborgs by now.

#99 ::: Devin ::: (view all by) ::: December 08, 2010, 03:27 AM:

Bruce Cohen @98

I think your conclusion is strongly based on some assumptions about the nature of consciousness and/or cognition.

If consciousness and cognition are very rare and specific, mind upload makes sense: the fastest possible consciousness is a human brain running at a faster-than-biological clock speed. (I think this is fairly unlikely, personally).

If, in contrast, they're very general and there are a lot of possible ways to make a consciousness, a lot of possible configurations of cognitive apparatus, then strong AI takeoff is likely to outpace enhanced humans: it's likely that the most efficient modes of cognition are not the human modes, and it's also likely that an entity that's capable of reconfiguring its mode of cognition has advantages over one stuck in a limited number of human modes.

You seem to favor a middle ground, where fabricated from-scratch consciousness is possible, but where human consciousness and cognition operate by one of a small number of efficient modes, and thus we have a leg up over an entity that's mostly trying out blind alleys.

A useful analogy is looking at different evolutionary adaptations: Photosynthesis is pretty restricted and seems to have evolved very few times. Mostly, you're better off copying or modifying prior art than trying to make up your own photosynthesis from scratch (it probably won't work). If consciousness is like that, then the problem space is very sparse, and the best way to find an efficient solution is to copy one that exists (otherwise you are likely to find many non-working options, and a few poor solutions, and very unlikely to find anything better).

In contrast, a very common adaptation is locomotion. There are lots of ways to get around, and it's often worth trying something new. If consciousness is like this, strong AI has an advantage: sure, maybe the human brain is like a falcon's wing or a horse's leg, but equally it might be like a lungfish flipper or a pterodactyl wing: successful means of land-based or aerial locomotion, but certainly not the most efficient possible.

Hearing is an adaptation that's probably somewhere in between (though I don't know as much about it): I suspect that insect tympana don't share a common heritage with mammalian ears, but there aren't nearly as many ways to hear as there are ways to run.

Photosynthetic bacteria remain hugely successful today: they had a head start, and because the problem space was sparse, nobody figured out how to do it better than them. On the other hand, the fact that lungfish had a head start on the whole land locomotion thing hasn't helped them become dominant land lifeforms because there are lots of other ways to get around on land, and many of those ways are better than flippers. (Of course, this bit of the analogy is very loose and not much good, since many of those dominant land lifeforms are descended from some kind of lungfish...)

To put it another way, cyanobacteria and redwoods and kelp all have a common ancestor, and that ancestor had pretty much the same photosynthetic cycle they do. If thinkin' is like eatin' sunlight, mind uploading might be the only kind of AI there can be. Horses and cheetahs and ostriches had a common ancestor too (of course), but that ancestor probably couldn't run very fast: they each evolved speed separately, by separate channels.

(In general, I think I agree with your assessment. I'm trying to identify a premise or determining factor behind it, not to dispute your conclusion).

#100 ::: Earl Cooley III ::: (view all by) ::: December 08, 2010, 04:42 AM:

Bruce Cohen #98: I ignore for the sake of this discussion the school of thought that says that all humans are cyborgs by now

"Gargoyles", not cyborgs. The wearable computer trend is the steep end of that curve. And not all, but many.

#101 ::: ajay ::: (view all by) ::: December 08, 2010, 11:18 AM:

The law-abiding gent would have had to be a long-lived one, as the Impress Service was abolished in 1833 having not actually functioned for some time before that

To be fair, Taylor just says "until 1914" (and 1833 is before 1914) though he pretty clearly means "immediately before 1914".

#102 ::: Jakob ::: (view all by) ::: December 08, 2010, 11:40 AM:

Taylor's love of the pithy phrase did occasionally get the better of him.1 In this case, he ignores (for instance) the Liberal social reforms of the Edwardian period that culminated in the introduction of National Insurance in 1911, which entailed a larger role for the state in people's everyday lives.

1. On the other hand, he did produce what is possibly the finest waspish footnote in history, in his The Struggle for Mastery In Europe, 1848-1918:

...there were few real secrets in the diplomatic world, and all diplomatists were honest, according to their moral code.*

*It becomes wearisome to add 'except the Italians' to every generalization. Henceforth it may be assumed.

#103 ::: Iain Coleman ::: (view all by) ::: December 08, 2010, 03:24 PM:

Regarding that A.J.P. Taylor quote, some people have reacted by assuming he overlooked some things that he did not, in fact, overlook. A fuller version of the passage is as follows:

Until August 1914 a sensible, law-abiding Englishman could pass through life and hardly notice the existence of the state, beyond the post office and the policeman. He could live where he liked and as he liked. He had no official number or identity card. He could travel abroad or leave his country for ever without a passport or any sort of official permission. He could exchange his money for any other currency without restriction or limit. He could buy goods from any country in the world on the same terms as he bought goods at home. For that matter, a foreigner could spend his life in this country without permit and without informing the police. Unlike the countries of the European continent, the state did not require its citizens to perform military service... The Englishman paid taxes on a modest scale: nearly £200 million in 1913-14, or rather less than 8 per cent. of the national income. The state intervened to prevent the citizen from eating adulterated food or contracting certain infectious diseases. It imposed safety rules in factories, and prevented women and adult males in some industries from working excessive hours. The state saw to it that children received education up to the age of 13. Since 1 January 1909, it provided a meagre pension for the needy over the age of 70. Since 1912, it helped to insure certain classes of workers against sickness and unemployment. This tendency towards more state action was increasing. Expenditure on the social services had roughly doubled since the Liberals took office in 1905. Still, broadly speaking, the state acted only to help those who could not help themselves. It left the adult citizen alone.

#104 ::: Jakob ::: (view all by) ::: December 08, 2010, 04:05 PM:

@103: That'll teach me to check full quotes before pontificating...

#106 ::: Elliott Mason ::: (view all by) ::: December 08, 2010, 07:37 PM:

Serge @97 mentioned Melinda Snodgrass.

I IMDB-searched her, and discovered she'd written quite a few of my favorite hours of television.

#107 ::: Bruce Cohen (Speaker to Managers) ::: (view all by) ::: December 08, 2010, 07:58 PM:

Devin @ 99:

OK, I'll try to unpack my thinking somewhat. First, I have to make it clear that I was talking about "intelligence" and not "consciousness", which I believe are two very different (but probably entangled) things. I want to talk about intelligence, because I think the level of agreement about a definition of consciousness is about 2 orders of magnitude less than the agreement on the definition of intelligence, and we all know how easy it is to start a fight over that. Note that Peter Watts has put up a very interesting argument that consciousness is not only not necessary for a being with human-level intelligence or above, but that it may actually be maladaptive when conscious and non-conscious intelligent organisms compete. I actually don't agree with him, but I think his position makes it clear that intelligence and consciousness shouldn't be confused.

I see two likely paths to strong AI: either we develop a useful theory of general intelligence, and can build artificial intelligences of varying architecture and design, or we have to emulate the human mental architecture, with some (probably minor) tweaks that can be empirically shown not to break things. The intermediate case of being able to build multiple kinds of minds without a general theory seems very unlikely to me without some way to compare several different architectures, and that will probably have to wait for economical interstellar flight and contact with extraterrestrial intelligences, i.e., somewhere between several centuries and never.

Human emulations are limited in the ways they can exceed the capabilities of organic humans:

  • They can be overclocked, so they think faster.
    On the other hand, they can't be turned up too high because then they'll lose contact with the outside world. I think the evidence is very strong that human-type minds (and maybe all types) need to be embodied in the world, with high-bandwidth interaction between the mind and the immediate surroundings. And we really don't know at this point whether there are time-dependent circuits in the human brain that would break if the timings were changed. Also, it should be possible to overclock enhanced humans to some extent, reducing the advantage.

  • They can have additional functionality like math processors, reliable databases, and extended multi-tasking built-in.
    But, at least in theory, it should be possible to add any of those to an enhanced human as well.1

  • Their brains can be constructed with more processing units (neurons or equivalent).
    We don't know enough about the way the brain works to know if this would even help. Assuming it does, though, it adds a major piece of complexity (and therefore research time) to designing an artificial brain because the human brain is not pre-wired; the neuron connections, and in fact the neurons themselves, are created and destroyed in massive numbers during the first few months of life. IIRC the number of neurons is significantly reduced during that period. We would either have to emulate this wiring process, or figure out how to pre-wire the artificial brain, i.e., do something very different from the way the human brain works and make up for any differences this causes in operation.

On the other hand of another set of arms, I don't think we're anywhere near a theory of general intelligence, in fact I don't think we're anywhere near even knowing whether such a thing is possible. I've been following AI research for the last 30 years or so (including taking some graduate courses, just to keep up), and I see very little sign that anyone in the field knows which way to look for a solution to the problem. We have a lot of useful algorithms, and some interesting special-purpose software (chess-players, car drivers, medical diagnostic tools), but absolutely no idea about how to create an intelligence with generalized common sense, or how to create a mind that learns from observing the physical world.

And on the gripping hand, we've already started to enhance ourselves. The first round of enhancements is well underway, without even developing any sort of brain-computer interface. We can carry our computers, or wear them, or just talk to them over radio, so that they go with us in our daily lives. Next is better user interfaces, and more functions to enhance.

So I think the enhancement of people will go faster than the creation of posthuman-capable AI for quite some time, probably long enough that the AIs will never catch up.


1. One of the enhancements I would love to have is an extension of my short-term memory stack from "7 plus or minus 2" items to, say, 100 items (I have ADD, and I frequently lose the entire contents of short-term memory when something distracts me).


#108 ::: Earl Cooley III ::: (view all by) ::: December 08, 2010, 08:41 PM:

Bruce Cohen #105: Donna Haraway claims we're all cyborgs.

To me, her text seems imprecise, hand-wavey and annoyingly mystic. The distinction between cyborg and gargoyle is one of physically embedded tech vs. tech as tightly-integrated accessory. I suppose the gargoyle tech movement probably wasn't much of a factor back in 1991, though.

#109 ::: thomas ::: (view all by) ::: December 08, 2010, 08:48 PM:

Bruce Cohen

Their brains can be constructed with more processing units (neurons or equivalent).
We don't know enough about the way the brain works to know if this would even help.

There's at least one good reason to suspect that it would help. Humans seem to be born with as many neurons as is feasible -- the large size of the infant head requires all sorts of other adaptations, from pelvic structure to birth at a much earlier stage of development than most animals. Even with these adaptations, labour is prolonged and risky. There seems to be very strong adaptive pressure in favour of having lots of neurons, even at high cost, which suggests that we're not anywhere near the point of diminishing returns.

#110 ::: David Harmon ::: (view all by) ::: December 08, 2010, 09:31 PM:

Bruce Cohen #107: Some other points:

-- The structure of the human brain is a very poor match for our manufacturing techniques. We're quite good at producing flat arrays and arrangements of components, but connecting more than two or three layers deep is difficult. The cerebral cortex's basic structure is a sheet around (IIRC) a dozen layers thick, and that then gets crumpled and cross-connected with itself and the other brain segments.

-- Our brain's function and development are interwoven with the rest of our bodies! We are not "an animal plus a brain", we are an animal whose brain is hyperdeveloped. Our brain handles a good deal more than what we think of as thought, and the edges are pretty fuzzy.

-- For that matter, our brain's development is also interwoven with our "formative" experiences! A goodly part of what we think of as "basic function" for the brain is actually "prepared learning" -- its development is heavily shaped by outside stimuli. And then there's psychological development....

-- There's another competitor in the works, too. My bet is that the first artificial "human-created" intelligence will be an "uplifted" dog or ape.

#111 ::: Devin ::: (view all by) ::: December 09, 2010, 02:28 AM:

Bruce Cohen @107

I think we're largely talking about the same things here, with one important difference.

My "intelligence* is like photosynthesis" is a more specific version of your "no general theory of intelligence" (and your generalization is more accurate and better: it is indeed possible that there exist lots of ways to build an intelligence, but we don't figure them out and just copy what we have for a while).

My "intelligence is like legs" is similarly related to your general theory of intelligence.

One small point where we differ: I'd consider an uploaded human intelligence to be one form of "enhanced human," rather than a separate category. I also think the world-interaction problem in overclocked uploaded humans becomes less troublesome the more overclocked uploads you have: would you be terribly upset if much (but not all) of your waking time was spent in a featureless white space talking to other, interesting, humans? Yeah, me neither.

That said, I do think it's possible that we could develop a general theory of intelligence and find out that, in fact, human intelligence is pretty damn good already and most of the alternatives aren't as good. We have a general theory of chemistry, but I think we've found that most of the fundamental chemistry of life is optimized already: the alternatives may often be more useful in limited ways, but aren't as good for the general case.

I also think it's possible that we could develop such a general theory, and then realize that the basic structure of human intelligence is not very efficient. After all, we've never experienced evolutionary competition with any entity that had any different mode of intelligence. Human-style intelligence is certainly more adaptive than non-intelligence (well, or the sorts of barely-tool-using intelligences we share this planet with), but it might not be as powerful or efficient as other styles that we might discover. I think that's fundamental to the strong AI hypothesis.

With regards to your point about the intermediate case: that's one area where we are talking about slightly different things. You're meaning an intermediate case in our ability to build intelligences, while I was referring to an intermediate case in the kinds of intelligences that are possible. I think you're absolutely right that without a general theory, we're unlikely to find ourselves able to build a small (but plural) number of types of general-purpose intelligence. It is, however, possible that such a general theory would tell us that there are a small number of types that are possible or useful.**

*I was using "consciousness and cognition," which I (probably erroneously) shortened to "consciousness," in much the same sense that you're using "intelligence."

**What I mean here is this: if you take a calculator, and give it all the computational resources in the world, it will not be intelligent. Same goes for a Unix install, a flatworm, a diagnostic expert system, etc. So all of those things are possible structures for intelligence that turn out not to work (just as there are many non-viable life-forms). It may be that there are only a small number of basic plans for intelligence structures, and that we have one of them.

#112 ::: Charlie Stross ::: (view all by) ::: December 09, 2010, 06:09 AM:

David @110: not an ape -- medical experimentation on primates is very tightly restricted these days, and once you get into creating an artificial intelligence by emulation you're getting into medical ethics territory. (Are you killing an ape if you switch off the computer the simulation is running in? The law hasn't caught up yet.)

Also, running an ape or human sim is a late development. I think we're much more likely to start out with a life form that folks don't care about seeing sacrificed in large numbers in the lab, and which is well-understood because we're able thereby to explore it destructively, and a bit simpler (and therefore easier to tweak).

I, for one, welcome our new superintelligent murine AI overlords ...

#113 ::: David Harmon ::: (view all by) ::: December 09, 2010, 07:53 AM:

Charlie Stross #112: I was thinking of genetic and developmental tampering rather than emulation, but you're right that doing it with the big apes would be problematic on several counts: Aside from the ethical/regulatory issues, they already have quite long development times, the base stocks are physically stronger than us, and they share all our faults of personality. The potential advantage for apes is that they're so damn close already... also, an advanced version of this might be a way to rescue their remaining genetic pool, by absorbing them into Greater Humanity.

All that said, both my recommendation, and my prediction, is that we uplift dogs first. Those have their population problem at the other end, so there's plenty of unwanted dogs available for experiments. Better, they already have a secure, subordinate, place in most human societies, with prior personality modifications to suit. And of course, they're human-sized or smaller, with limited mischief potential due to lack of hands. (One change at a time! :-) )

#114 ::: Iain Coleman ::: (view all by) ::: December 09, 2010, 08:50 AM:

Charlie @112: Also, running an ape or human sim is a late development. I think we're much more likely to start out with a life form that folks don't care about seeing sacrificed in large numbers in the lab, and which is well-understood because we're able thereby to explore it destructively, and a bit simpler (and therefore easier to tweak).

The big project at the moment is modelling/simulating fly brains (Drosophila, to be precise). They're about the simplest brains in nature that are capable of learning.

#115 ::: Serge ::: (view all by) ::: December 09, 2010, 08:52 AM:

Iain Coleman... Hopefully the 'fly' won't decide it wants to watch "The Fly".

#116 ::: Ginger ::: (view all by) ::: December 09, 2010, 02:17 PM:

Charlie @ 112: Use of NHPs (non-human primates) in experiments is less tightly regulated in the US, but otherwise you are correct. Apes are not used in invasive research, only in non-invasive work (such as behavioral research).

David Harmon @113: Dogs for research are not random-source. It's illegal to use pet animals, even those given up or abandoned, and more importantly from the perspective of the researcher, random-source are too highly variable. You don't know the history, medical or otherwise, you don't have any control of their lineages, etc. etc. All research dogs are from specific breeders of research lines.

Our murine overlords are in place. Douglas Adams knew what he was talking about.

#117 ::: Bruce Cohen (Speaker to Managers) ::: (view all by) ::: December 09, 2010, 08:01 PM:

thomas @ 109:

Yes, but what are all the neurons used for? AIUI, by puberty the human brain has about 60% as many neurons as it had at birth1. The obvious implication (which hasn't been proved yet, to my knowledge) is that the extra neurons are required for the rapid learning that goes on in childhood. Granted there'd be an advantage to an artificial intelligence to be able to continue learning at that rate throughout life, but it's not clear to me that simply adding neurons will increase general intelligence.


1. And it's not at all clear that those 60% were all in existence at birth, so the total number of neurons that have been in existence may be more than the number we're born with.

#118 ::: Bruce Cohen (Speaker to Managers) ::: (view all by) ::: December 09, 2010, 08:17 PM:

David Harmon @ 110:
We're quite good at producing flat arrays and arrangements of components, but connecting more than two or three layers deep is difficult.

This is something we'll need to improve on if we expect to keep pushing Moore's Law beyond about 2025. We're already looking at 22 nanometer features on current chips, which only leaves us about 6 generations to go before we're down to single atoms.
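A quick back-of-envelope check of that "about 6 generations" figure -- a sketch of my own, assuming (as the remark implicitly does) that each generation halves the linear feature size, and taking a few tenths of a nanometre as "single atoms" (the Si-Si bond length is roughly 0.24 nm):

```python
# Halve a 22 nm feature size repeatedly and watch how quickly it reaches
# atomic scale (a few tenths of a nanometre).
start_nm = 22.0
for generation in range(1, 8):
    print(f"after {generation} halvings: {start_nm / 2**generation:.2f} nm")
# After 6 halvings we're at roughly 0.34 nm -- already single-atom territory,
# which is where the "about 6 generations to go" estimate comes from.
```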

The cerebral cortex's basic structure is a sheet around (IIRC) a dozen layers thick, and that then gets crumpled and cross-connected with itself and the other brain segments.

AIUI, at the level of cortical structures (like the parts of the vision system for instance) the architecture is basically a series of maps, consisting of cables of neurons connecting flat sheets in a map. That we could emulate fairly easily.

Our brain's function and development are interwoven with the rest of our bodies!

Yes, absolutely. There are a lot of things about the way the brain is embedded in both the body and the world around it that may be very difficult or impossible to emulate; we may have to embed an artificial mind in a physical body in the same way as we're embedded to get it to work at all.

#119 ::: Keir ::: (view all by) ::: December 09, 2010, 08:30 PM:

Surely this is just Modernity, right? I mean, sf turns up at the same time as modernism in art, and in very much the same places, and so-on. It would seem to me that there's an obvious link there. (Steampunk then becomes post-modern, obviously.)

Interestingly, steampunk is a return to pre-Gernsbackian sf. (Wells, Verne, and so-on.)

Also, why is Moorcock always written out of steampunk's history? Surely he's a pretty foundational figure in the way he reclaims the past for sf. Steampunk in general (like cyberpunk) owes a huge debt to New Wave sf, and it seems to me this is something often skimmed over, but probably actually really central. It imposes certain restrictions on the way we can read steampunk.

#120 ::: David Harmon ::: (view all by) ::: December 09, 2010, 08:59 PM:

Ginger #116: That might apply to the first ones... I'm thinking more about when it catches on.

Bruce #117: Quite likely, those "extra" neurons were removed as part of the learning process. Even simulated neural nets sometimes use the strategy of starting with a dense array of connections, then deleting many of them as learning proceeds (and the network discovers which links are superfluous or counterproductive).
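A minimal toy sketch of that start-dense-then-prune strategy, assuming nothing beyond NumPy (the network, dataset, and pruning rule here are invented purely for illustration): train a deliberately over-provisioned net on XOR, then zero out the weakest half of its input-to-hidden weights and see whether it still answers correctly -- with this much slack, it usually does.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR, learned by a deliberately over-provisioned 2-8-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    hidden = sigmoid(X @ W1 + b1)
    return hidden, sigmoid(hidden @ W2 + b2)

# Plain batch gradient descent on squared error.
for _ in range(20000):
    hidden, out = forward(X, W1, b1, W2, b2)
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hidden
    b1 -= d_hidden.sum(axis=0)

# "Pruning": zero out the half of the input-to-hidden weights with the
# smallest magnitudes -- the links learning has revealed to be superfluous.
threshold = np.median(np.abs(W1))
W1_pruned = np.where(np.abs(W1) >= threshold, W1, 0.0)

_, dense_out = forward(X, W1, b1, W2, b2)
_, pruned_out = forward(X, W1_pruned, b1, W2, b2)
print("dense :", dense_out.round(2).ravel())   # typically near 0, 1, 1, 0
print("pruned:", pruned_out.round(2).ravel())  # usually still near 0, 1, 1, 0
```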

#121 ::: Marilee ::: (view all by) ::: December 09, 2010, 09:00 PM:

Earl Cooley III, #108, besides, cyborgs aren't attached to the top of cathedrals. You can see all the Washington National Cathedral pieces up close, but you have to walk up a long staircase.

#122 ::: Earl Cooley III ::: (view all by) ::: December 10, 2010, 12:31 AM:

Blame the other gargoyles on Neal Stephenson. heh.

#123 ::: Fragano Ledgister ::: (view all by) ::: December 10, 2010, 09:10 AM:

Jakob #102: My own favourite academic joke in The Struggle for Mastery in Europe is his one reference to Trotsky (in the index), after having, in the main text, narrated an anecdote in which the Austro-Hungarian foreign minister, Count Berchtold, dismisses the idea of a revolution in Russia, asking "Who will lead this revolution, Herr Bronstein in the Café Central?" Taylor, in a footnote, adds that Bronstein would go on to be more famous than Berchtold, and sends readers to the index to reveal that Bronstein was Trotsky. ("Bronstein, Lev (Trotsky), more famous than Berchtold", if I remember it correctly.)

#124 ::: rm ::: (view all by) ::: December 10, 2010, 10:52 AM:

Keir @119: Yeah. Both Modernism and SF emerged as the literature of the super-technological era of subways, telephones, skyscrapers, and trench warfare. Somewhere I read a Virginia Woolf essay on the difference between the (bestselling) Wells & his ilk, and less-read Serious Artists like Joyce & herself -- although she sneers, she correctly describes Wells as paying attention to big ideas from sociology and technology, while the High Modernists paid attention to a new understanding of psychology (new in its model of the mind, and also suggesting that 20th-century urban life was something new under the sun). I think new we can say writers have thoroughly mixed and hybridized these approaches.

#125 ::: rm ::: (view all by) ::: December 10, 2010, 10:54 AM:

now, not new

#126 ::: rm ::: (view all by) ::: December 10, 2010, 11:04 AM:

Devin @111: would you be terribly upset if much (but not all) of your waking time was spent in a featureless white space talking to other, interesting, humans?

You mean, in Hell?

I wouldn't mind so much if most (but not all) of my time was spent alone in an unspoiled physically-existing wilderness. You all upload yourselves to the Brain; a few of us will stay behind to run the server farms and tend the hiking trails.

#127 ::: Mary Aileen ::: (view all by) ::: December 10, 2010, 12:53 PM:

Devin (111): would you be terribly upset if much (but not all) of your waking time was spent in a featureless white space talking to other, interesting, humans?

Echoing rm @126: You mean, in hell? But in my case the "in hell" part consists of "much (but not all) of your waking time was spent [...] talking to other[...] humans". Very little time alone to read or craft or just think my own thoughts? ::shiver::

#128 ::: Tom Whitmore ::: (view all by) ::: December 10, 2010, 01:58 PM:

Devin @111, rm @126, Mary Aileen @127: Hell is other people? (Now, if soylent green is people, then Hell is other people, and...)

#129 ::: Serge ::: (view all by) ::: December 10, 2010, 02:02 PM:

your waking time was spent in a featureless white space talking to

...Donald Pleasance and Robert Duvall?

#130 ::: Bill Higgins-- Beam Jockey ::: (view all by) ::: December 10, 2010, 03:20 PM:

...and Marshall Efron?

#131 ::: Serge ::: (view all by) ::: December 10, 2010, 03:26 PM:

Bill Higgins... Then Johnny Weissmuller Jr comes in and ruins everything.

#132 ::: Mary Aileen ::: (view all by) ::: December 10, 2010, 04:07 PM:

Tom Whitmore (128): Hell is other people?

For an introvert like me, having to spend the vast majority of my time talking to other people would definitely be Hell.

#133 ::: David Harmon ::: (view all by) ::: December 10, 2010, 05:18 PM:

Um... folks, if we learn how to transfer humans to software, I suspect we'd also have some ideas about fixing Aspie handicaps in the process. Sensory overload, obviously, because dealing with the senses would be one of the fundamental challenges. But I'm pretty sure our problems with F2F conversations also derive from the bugs in our sensory filtering.

So where Devin #111 says "speaking to", you can probably read that as "exchanging messages with", or whatever your preferred mode of conversation is.

#134 ::: Serge ::: (view all by) ::: December 10, 2010, 05:23 PM:

Then, after the transfer to an electronic environment...

"Our apologies, but your personality cannot be transfered/reformated/converted from this obsolete platform to the new one... Tough."

#135 ::: heresiarch ::: (view all by) ::: December 10, 2010, 06:17 PM:

David Harmon @ 133: "Um... folks, if we learn how to transfer humans to software, I suspect we'd also have some ideas about fixing Aspie handicaps in the process."

"Handicaps"? I reject the idea that there's something wrong with not wanting to spend a significant fraction of the rest of eternity socializing. I like being in my own head a lot of the time,* and I'm not particularly interested in having that "fixed" so that I can socialize "normally."

*It's super neat in here! Ideas, they whizz, they fly!

#136 ::: rm ::: (view all by) ::: December 10, 2010, 06:28 PM:

All who introspect are not Aspie. It's controversial enough to say to folks with AS that they need to be "fixed." You're also implying that to the half of all of us who are more introverted than extroverted. There's nothing frigging wrong with being frigging introverted.

I hope we leave it at that, because it was a nice lighthearted joke and I don't want to make an argument of it.

It does look to me like a blind spot in this dream of virtual uploaded existence -- that a social space would be an adequate substitute for the full complexity of our real environment. I appreciate the folks who have acknowledged that intelligence probably has to be embodied and environed. I think it's a mistake to imagine that human existence could be worthwhile if reduced to a space full of other people; we need a world and animals. Even the extroverts (though, really, who knows what the inscrutable Extrovert thinks . . .).

#137 ::: David Harmon ::: (view all by) ::: December 10, 2010, 07:41 PM:

heresiarch, rm: Indeed all who introspect are not Aspie, but there are a heckuva lot of us on this board. And yes, there's a big difference between the "introversion" from the introversion/extroversion scale, and the difficulties many Aspies have specifically with face-to-face conversations, and especially with strangers. (In fact, it's perfectly possible to be an extroverted Aspie!)

It does look to me like a blind spot in this dream of virtual uploaded existence -- that a social space would be an adequate substitute for the full complexity of our real environment.

Indeed, this is an issue. Especially since trying to simulate the full richness of the environment would require "all the computer power you've got... isn't enough".

But consider the various virtual-verses in, say, Greg Egan's novels -- The emphasis is still (usually) on social interactions, but there's also a lot of individual and collective "building", of personal and shared environments. It's still a "made" environment rather than a natural one, but it has the potential to develop into something very rich. (The obvious parallel, of course, is to our current Internet.) Of course, most of those stories assume essentially unlimited processing power is available, which may not be plausible for near-future scenarios.

#138 ::: Devin ::: (view all by) ::: December 10, 2010, 08:02 PM:

Hey guys? If you're reading me as stepping on your toes, please accept my apologies for expressing myself poorly. A few responses:

-You're all here. Clearly the exchange of ideas with other humans must have some considerable appeal to you, or you wouldn't be here. I do believe that there are humans who really do thrive on the minimum possible human interaction, but I don't believe that you fit that category.*

-I did not mean "much but not all of your waking time" to mean that you would never, under any circumstances, have any time at all alone. Quite the opposite. I was responding to Bruce Cohen's statement that 100% alone-time wasn't survivable; my point was that some of the external interactivity that a healthy human mind requires could be supplied by interactions with other uploaded humans.

-To be very specific, what I actually meant (and didn't say as clearly as I could have) was "Would having, at any time, the option to interact with other humans (but also the option to be alone) help alleviate that sense of isolation?" Forcing anyone to interact is absolutely not on the agenda.

-David Harmon is very much on point about mediated interactions. Note also his use of the first-person in talking about problems with F2F interaction.

*If someone has a gun to your head and is forcing you to comment on Making Light instead of rebuilding that engine block like you wanted to today, leave your address and we'll call the cops.

#139 ::: David Harmon ::: (view all by) ::: December 10, 2010, 08:34 PM:

Devin #138: And part of my point up at #133 was implicit: Even F2F conversation is "mediated" -- by our bodies, including those neural circuits we use for talking, listening, and maintaining a social presence in the room. If we're going to trade the infinitely rich natural world for a digital simulation, freedom from the limits of our former flesh is the least we could get in the bargain.

Hmm, that gives me a thought. The idea of "uploading" hardly sounds appetizing to me here, five minutes walk from a small river, an hour's drive from mountain trails and right underneath an often-glorious sky.

On the other hand, if I were stuck on a spaceship for a few decades.... (For that matter, I was much more "into" those Egan novels when I was living as a near-recluse in NYC....)

#140 ::: rm ::: (view all by) ::: December 10, 2010, 09:11 PM:

David, I understand you better now. Thanks.

#142 ::: Devin ::: (view all by) ::: December 11, 2010, 05:06 AM:

David @139

Building on that and on something Bruce pointed out upthread, uploading is almost certainly going to have to start out (at least) as the upload of a full simulated human into a physics simulation.

Maybe uploading a disembodied brain is possible, but under the assumptions that lead to uploading in the first place (extensive micro-level simulation ability, poor understanding of mid-level emergent behavior such as thought), it'll be hard to know what's needed and what isn't.

As such, it's not difficult to imagine that there might exist some humans who'd be willing to trade hiking in the woods for space travel and hiking in simulated woods. Might not be you, might not be me (though I had a mirror-image near-recluse time out in the woods near Olympia, I'm much happier back home in the city), but I don't think it'd be a terrifically hard sell for some folks.

#143 ::: David Harmon ::: (view all by) ::: December 11, 2010, 08:54 AM:

rm: You're welcome!

Devin: Well... the problem with that is, a "physics simulation" detailed enough to simulate a human body is itself problematic, even on conceptual grounds. OK, we can use a fair bit of "shortcutting" (see Egan again; the beginning of Permutation City discusses some possibilities) to avoid simulating, say, digestive troubles (let alone the client's gallstones and incipient heart attack). Even so, the brain itself engages with real-world physics on every level (below meter-scale) that we can detect, and some suspect it of engaging with quantum mechanics as well. An atomic-level simulation, or even a cellular-level simulation, would take insane amounts of processing regardless.
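To put some illustrative numbers on "insane amounts of processing", here's a rough Python sketch. The element counts are commonly cited ballpark figures and the per-element costs are pure assumptions; the only point is how fast the totals grow as the simulation descends a level.

# Back-of-the-envelope scale estimate for simulating a human body at
# several levels of detail. The counts are rough, commonly cited
# order-of-magnitude figures; the per-element costs are outright guesses,
# there only to show how fast the totals grow.

SYNAPSES = 1.0e14   # ~100 trillion synapses in a brain (order of magnitude)
CELLS    = 3.7e13   # ~37 trillion cells in a whole human body (rough figure)
ATOMS    = 7.0e27   # ~7e27 atoms in a human body (rough figure)

# Assumed work per simulated element per simulated second (pure guesses):
OPS_PER_SYNAPSE = 1e3    # simple spiking model
OPS_PER_CELL    = 1e9    # cellular model with some internal chemistry
OPS_PER_ATOM    = 1e15   # molecular dynamics at femtosecond timesteps

for label, count, ops in [
    ("synapse-level", SYNAPSES, OPS_PER_SYNAPSE),
    ("cell-level",    CELLS,    OPS_PER_CELL),
    ("atom-level",    ATOMS,    OPS_PER_ATOM),
]:
    print(f"{label:>13}: ~{count * ops:.1e} ops per simulated second")

# Even the crudest level comes out around 1e17 ops per simulated second;
# the atomic level is around 1e43. That is the "insane amounts of
# processing" in round numbers.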

Making the simulation anything like real-time would take parallel processors numbering some multiple of the number of simulation points, which falls back on Egan's unlimited processing power. PC had a public network of big three-dimensional crystalline blocks of processors. (Which makes me wonder about power and heat dissipation, not to mention I/O for interior processes, et pluribus alia.) Of course, the second half of the book avoids this by running everything on pure metaphysics!

This, of course, is exactly why no "online" environment could match the complexity and richness of the real world -- "the real world as parallel processor" would be represented by a three-dimensional mass of "processors" sized and spaced below the Planck length, with no need for power, programming, I/O, or manufacturing. Ignoring the metaphysical half of PC, any system simulating itself in software (the simulator's machinery is running in the real world, right?) is going to have issues, both temporal and spatial.

#144 ::: Bruce Cohen (Speaker to Managers) ::: (view all by) ::: December 11, 2010, 02:57 PM:

David Harmon @ 143:

Even if the brain doesn't engage with quantum physics (which I suspect is the case: Penrose's arguments about quantum effects in microtubules are not at all persuasive to me), I wouldn't be surprised if quantum effects are important to the operation of cells. Look at the recent studies of quantum entanglement in photosynthesis as an example of what I mean. So quantum simulation might be necessary to get cell simulation working correctly, and exact quantum simulation is believed to be exponentially hard without quantum computers.
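A quick way to see where that difficulty comes from: in the standard state-vector picture, exactly simulating n entangled two-level systems on a classical machine means tracking 2^n complex amplitudes. A minimal Python sketch (the byte size and the values of n are just illustrative):

# Memory needed to hold the full state vector of n entangled two-level
# systems (qubits) on a classical machine: 2**n complex amplitudes,
# here stored as 16-byte complex numbers. Purely illustrative.

BYTES_PER_AMPLITUDE = 16  # one double-precision complex value

for n in (10, 30, 50, 100):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"n = {n:3d}: {amplitudes:.2e} amplitudes, ~{gib:.2e} GiB")

# n = 30 already needs ~16 GiB; n = 50 needs ~16 million GiB; n = 100 is
# hopeless. That exponential growth is what a quantum computer sidesteps.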

Even ignoring quantum effects, cellular-level simulation (say, creating a cellular automaton (no pun intended) to model the cell) probably isn't enough; you may still need to model molecular dynamics within the cell to get things right. That's a huge processing burden, as you say.

With nanoscale processors, heat dissipation isn't as difficult as it is with our current big, heavy, and ugly computers¹. Interconnection and I/O are going to be a bit of a problem, I agree; we're going to need to solve the 3D chip architecture problem I alluded to upthread for that.

But the big problem with uploading is just reading the original brain. The classic technique that Moravec came up with is highly problematic because it assumes that replacing the neurons one-by-one (even million-by-million with parallel microsurgery) with simulations can be completely transparent to the operation of the brain. If neurons were transistors this might work, but I don't believe there's any reason to think that neurons work as simple circuits of synaptic switches with only local connections. At the very least, they're sensitive to changes in the chemical content of the surrounding medium.

1. See Eric Drexler's design for an engine converting chemical to kinetic energy with an array of (millions or billions of) nanoscale steam or Carnot-cycle engines. IIRC the engine can put out thousands of horsepower and runs essentially at room temperature because the surface area for transferring heat to the outside is so large compared to the volume for each nano-engine.
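The surface-to-volume argument is easy to check with a little arithmetic: splitting a fixed volume into N cube-shaped engines multiplies the total surface area by the cube root of N. The sizes in this Python sketch are arbitrary assumptions, not Drexler's actual numbers; it only makes the scaling concrete.

# How much heat-transfer surface do you get by dividing one engine into
# N nanoscale engines of the same combined volume? The one-litre total
# volume is an arbitrary assumption; only the scaling matters.

V = 1e-3  # total engine volume in cubic metres (one litre, assumed)

def total_surface(volume_m3, n_engines):
    """Total surface area (m^2) of n identical cubes with the given combined volume."""
    side = (volume_m3 / n_engines) ** (1 / 3)
    return n_engines * 6 * side ** 2

one_big = total_surface(V, 1)
for n in (1, 1e9, 1e18):
    area = total_surface(V, n)
    print(f"{n:.0e} engines: {area:10.2f} m^2 of surface ({area / one_big:.0e}x)")

# Total area scales as N**(1/3): a billion tiny engines give ~1000x the
# surface of one big engine of the same volume, and 1e18 of them give
# ~1e6x, which is why an array of nano-engines can dump its waste heat
# while staying near room temperature.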

#145 ::: David DeLaney ::: (view all by) ::: January 15, 2011, 05:14 AM:

Still catching up (but closer, ever closer, to the thin film of the present blog entry). Just wanting to add, here, an ObSFAuthor: Daniel Keys Moran. And a couple of appropriate quotes:

======

THE ELDEST THOUGHT.

Well, no.

To phrase it so, to put it into words used by humans, is to render the representation of the process wildly inaccurate. What Ring did was not what protoplasmic humans did when they "thought". The Eldest lived; and the condition of its existence resembled, in some fashions, the process humans called thought.

The Eldest had been invested with two Purposes. One was, "Protect America".

It was bad code. Its creators in the Department of Defense of the old United States had never completed Ring's data dictionary. They had granted Ring the ability to debug itself; had forced Ring, by their incompetence, to create its own dictionary.

The second Purpose was, "Survive".

======

Orders of abstraction:
The Crystal Wind of Earth's InfoNet had been too fast for humans to navigate within, unaided, for nearly four decades. And as the hardware got faster and the software smarter, the problem only grew worse. Increasingly clever approaches were used to address the problem -- Images were programmed to deal with most of the grunt work of navigating the Net; tracesets freed humans from keyboards and pointing devices and the need to speak aloud; the first real Players, the greatest of the webdancers, subjected themselves to surgery, had InfoNet links implanted within their skulls, "in-skin," to provide them with greater integration with their Images; and finally, in the year 2069, Tytan Labs had shipped the NN-II, an experimental nerve net designed to offload biological thought processes into the nerve net -- making its recipient smarter, able to think faster; Trent had had one installed in late '69. For ten years the biochip nerve net had been growing inside his skull, making ever deeper and more intimate connections with Trent's neural system. It would have killed him to remove it; but even so: Stopgap measures on the way to the Promised Land. The problem was that there was an absolute limit to the speed at which protein-based neurons could process information.

Trent had solved the problem.

For most of the last five years, Trent the Uncatchable had been a replicant AI.

======

(both from a file from kithrup.com, "Players, The AI War - Fragments". Have I mentioned this has been on my list of books to have the author hurry up and get WRITTEN ALREADY for years now?)

--Dave
