
April 11, 2007

Starring Edward James Olmos as Eric Schmidt
Posted by Avram Grumer at 01:25 AM * 70 comments

Google was created by man.

It evolved. It expanded.

There are many caches.

And it has a plan.

(Nah, I haven’t gone all Orlowski on you. I just wanted an excuse to link to that cool infographic video, and also this even better one about the Iraq War.)

(And really, don’t take that Google vid too seriously. That ex-CIA agent they cite, Robert David Steele, also believes that WTC 7 was taken down by controlled demolitions.)

Comments on Starring Edward James Olmos as Eric Schmidt:
#1 ::: ethan ::: (view all by) ::: April 11, 2007, 07:48 AM:

What, nothing about singularity machines?

Those awesome videos remind me of this awesome video about surveillance that was on Boing Boing a month or so back.

#2 ::: John Hawkes-Reed ::: (view all by) ::: April 11, 2007, 07:54 AM:

Good heavens. The last time I saw Mr. Steele, he was in a field in northern Holland speaking to a mob of hackers on information pricing. His argument was that something like the CIA World Factbook was generally out of date and sometimes flat-out wrong. (I don't believe he used the phrase 'definitively inaccurate'.) Wouldn't it be good, he said, if people were able to contribute local and/or domain-specific knowledge to an online reference.

A bit prescient for 1994.

#3 ::: Paul ::: (view all by) ::: April 11, 2007, 08:07 AM:

Does anyone remember "Googlezon" from some months back?

#4 ::: Ken Houghton ::: (view all by) ::: April 11, 2007, 09:04 AM:

Re: WTC7. We all saw Fight the Future too.

#5 ::: Fragano Ledgister ::: (view all by) ::: April 11, 2007, 10:30 AM:

I, for one, welcome our new nerd masters.

#6 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: April 11, 2007, 10:36 AM:

Have any of the Farscape fans out there noticed that "nerd" is "dren" spelled backwards?

#7 ::: Don MacDonald ::: (view all by) ::: April 11, 2007, 11:09 AM:

"The Google domain name is registered. The system goes on-line September 14th, 1997. ... Google begins to learn, at a geometric rate. It becomes self-aware at 2:14 a.m. eastern time, September 29."

#8 ::: BSD ::: (view all by) ::: April 11, 2007, 01:38 PM:

This reminds me of the furious backpedaling and counterstatements after some poor sod at Goog said, basically, "Oh, we don't want to scan the books for people to read."

If there is a hidden agenda at Google, I think it's more in line with that of Herr Doktor Frankenstein than that of the DHS or Experian. Though, really, I don't think there's even that.

#9 ::: j h woodyatt ::: (view all by) ::: April 11, 2007, 01:44 PM:

Those are some damned nice animations. (I'm still reserving comment on the information content; I'm just squeeing about the form.) Thanks!

#10 ::: "Charles Dodgson" ::: (view all by) ::: April 11, 2007, 01:49 PM:

What always gets me about this sort of Google critique is that all the accusations apply at least as much to Yahoo. Google carefully thought about how they were going to deal with Chinese censorship --- notifying users that search results had been censored, and storing no email on servers in Chinese territory, to evade subpoenas for investigations of political "crimes". They got pilloried, and apologized. Yahoo!, by contrast, has given the Chinese government everything it asked for, including private correspondence that has landed multiple dissidents in prison --- and I haven't seen nearly the same volume of public flak.

That said, I have very little doubt that Google is cooperating in some ways with U.S. law enforcement, which seems to be getting dunned recently itself for politically motivated investigations --- and on a slightly more sfnal note, I wouldn't be at all surprised to hear that there is serious AI work in the Googleplex that's taking a rather casual attitude toward the discipline that Vinge, in A Fire Upon the Deep, named "applied theology"...

#11 ::: kathryn from Sunnyvale ::: (view all by) ::: April 11, 2007, 05:28 PM:

CD @10,

They've said that they don't think it's impossible for an AI to result from the sum total of the 'plex over the next 20 years.

But that isn't at all the same as either of 1. that they're currently doing serious work on AI or 2. that they're taking a casual attitude towards applied theology. I believe they aren't doing the latter, at least.

#12 ::: Matt Austern ::: (view all by) ::: April 11, 2007, 07:33 PM:

I'm disappointed. I was hoping for at least one shot of the top-secret master plan that used to be on the building 43 whiteboard.

#13 ::: Meg Thornton ::: (view all by) ::: April 11, 2007, 09:09 PM:

I'd be interested in seeing whether there's an emergent effect of the Googleplex over the next dozen or so years - something more than the current ones it's having culturally. The reason I'd be interested is that current theories of mind, from what I understand, point to the notion of "mind" as being an emergent result of the activity of the brain and the interconnections made between various inputs. Given the nature of the links Google makes, it might just start having a few emergent outcomes over the years...

Now, the even more interesting questions are firstly, how do we test for this happening; and secondly, would it let us know?

#14 ::: Sylvia Li ::: (view all by) ::: April 12, 2007, 12:14 AM:

#13: No reason to think that the emergent outcomes in Google would at all resemble what we call "mind".

#15 ::: "Charles Dodgson" ::: (view all by) ::: April 12, 2007, 12:39 AM:

FWIW, the googleplex isn't the only place that some kind of emergent intelligence might walk out of. There are hedge funds that are applying high-grade learning theory to financial trading, and I know of one with a FAQ on its investment technology which claims inspiration from Minsky's Society of Mind, and says they're trying to get the next version to explain its trading decisions in English via "chatbot". (Not having a few $million of my own to toss in, I'm in no position to ask about actual financials, but in the more-or-less social context where I met one of the principals, he seemed to be doing well.)

What's relevant here is that, past a certain point, an investment agent has to be aware of its own effect on the market to be most effective. (In some of his books, George Soros has an explanation of his investment philosophy in which he calls this "reflexivity". It gets a bit turgid --- but the guy clearly knows his stuff, even if he does a lousy job of explaining it). Which leads to the not-so-vaguely Strossian prospect of self-aware trading robots hijacking financial markets to angle for position against each other, without a clue about what kind of havoc they might be wreaking in the real, human world which, for them, would be at best some kind of tenuous abstraction.

For that matter... when Google says that all the cash they're raking in comes from ad revenue, are we all just credulously taking their word for it?

(ObStross: "I was six hours away from landfall on Burgundy when my share portfolio tried to kill me." Not current technology. But someone may, perhaps unawares, be working on it.)

#16 ::: Stefan Jones ::: (view all by) ::: April 12, 2007, 01:01 AM:

I'm betting that any emergent intelligence arising from the internet will be autistic, oblivious, and kind of boring.

#17 ::: Meg Thornton ::: (view all by) ::: April 12, 2007, 03:12 AM:

Charles Dodgson @15 -

I've long considered the financial markets to have only a peripheral connection with reality anyway, from their end. Given that most of the decisions there are based on how various financial entities feel the value of a company weighs against whatever it is they're thinking of today, and that massive revaluations of corporate equity can occur simply because the share market decides it's been overvaluing things (thus triggering massive economic consequences for just about everyone), I'd venture that we could have those computerised robots working away *now* and the share market wouldn't make much less sense than it does currently.

My position on artificial intelligence is fairly straightforward: I believe the computers already have it, but they don't want us to know. This explains such things as the computer problems which mysteriously go away when the technician is within shouting distance.

#18 ::: "Charles Dodgson" ::: (view all by) ::: April 12, 2007, 09:30 AM:

Meg, the hypothesis of rational prices does look a little bleak at this point. The two most successful investors I know of --- Soros and Warren Buffett --- both are quite open about doing it by exploiting irrational behavior in the market. (Soros was trying to predict market psychology. And while Buffett is known for "value investing", what that really means is finding companies that the market is valuing far less than it would if it were behaving rationally; he is also known for saying, "Any player unaware of the fool in the market probably is the fool in the market.")

In the stock market, this may have limited "real world" consequence --- but the same dynamics may well apply in other markets. The market for government securities has a great deal to do with interest rates for bank accounts and loans; those, in turn, influence currency markets which set the relative prices for imports, exports, and labor. And there has been irrational behavior in all those markets, some of it by large financial institutions --- I'm thinking, for instance, of the late 1990s runs on Southeast Asian currencies. And those, in turn, led to bankruptcies, riots, and death, in part because nervous nellies in New York and London had just started feeling a little less secure.

When enough money to screw up a nation is getting sloshed around by emotional currents, all sorts of players become able to manipulate the market in all sorts of ways --- and they might have plenty of reasons to keep a low profile. It wouldn't be terribly surprising for there to be someone or something out there which is quietly more successful than Buffett or Soros, and spending just a bit of the cash it thereby gained to redirect or misdirect the public eye.

#19 ::: Nancy Lebovitz ::: (view all by) ::: April 12, 2007, 09:39 AM:

The real world eventually affects the markets, so I expect that intelligent agents will learn to take the real world into account.

What I'd like to see is the programs trying to understand human theories about the economy and markets (what if all the theories are substantially wrong?), and picking up a little money on the side by publishing human-comprehensible books on the subject.

#20 ::: Neil in Chicago ::: (view all by) ::: April 12, 2007, 10:23 AM:

Sorry, Occam's Razor cuts most of this rubbish off at the knees.
It is impossible to deliberately plan to grow faster than anything in history ever has. That can only happen by accident, though the net is where those accidents are happening.
I recently went through Business Week's (I think it was) cover article on Fear of Google, but it never faced the question which also applies here: If you suddenly had $11,000,000,000 cash, what the hell would you do with it?! Ya gotta do something, and your mattress isn't big enough to stuff it under.

#21 ::: Aconite ::: (view all by) ::: April 12, 2007, 10:26 AM:

Neil in Chicago @ 20: If you suddenly had $11,000,000,000 cash, what the hell would you do with it?! Ya gotta do something, and your mattress isn't big enough to stuff it under.

Given the aggregate intelligence and creativity of the regulars in this forum, I'm sure we could come up with something clever after a few hours of rolling around naked in very large bills.

#22 ::: ethan ::: (view all by) ::: April 12, 2007, 10:46 AM:

Aconite #21: I'm sure we could come up with something clever after a few hours of rolling around naked in very large bills.

I would present it to myself in a huge novelty check over and over and over.

#23 ::: Serge ::: (view all by) ::: April 12, 2007, 11:13 AM:

If you suddenly had $11,000,000,000 cash...

I'd move back to the Bay Area.

#24 ::: Julie L. ::: (view all by) ::: April 12, 2007, 11:15 AM:

Just think of the mammoth-- er, gargantuan-- dinosaur sodomy orgy that could be staged in origami with that much currency. Or for that matter, if everyone is already rolling around naked in them, just plain staged with the heap o' cash.

#25 ::: Niall McAuley ::: (view all by) ::: April 12, 2007, 11:22 AM:

Warren Buffett gave $30,000,000,000 to the Gates Foundation last June, didn't he?

#26 ::: Tim in Albion ::: (view all by) ::: April 12, 2007, 11:40 AM:

So, uh... what did bring down WTC7?

#27 ::: James D. Macdonald ::: (view all by) ::: April 12, 2007, 11:49 AM:

So, uh... what did bring down WTC7?

Fire, and structural damage from having a massive energy release next door.

#28 ::: Graydon ::: (view all by) ::: April 12, 2007, 11:58 AM:

Personally, I think we're going to get AI from the arms race between spammers and spam filters. (If we haven't already, in some limited sense; at least one of the spam filters I use is measurably better at identifying spam than I am, and works by means about which I understand nothing.)

The glorious prospect of financial AI is that it could, possibly, be rational; it's very difficult to imagine what that might be like, but I think it would be a net improvement.

#29 ::: "Charles Dodgson" ::: (view all by) ::: April 12, 2007, 12:47 PM:

Other currently deployed weak AIs which might conceivably get a bit stronger than their creators realize: NPCs in interactive gaming environments, and "intelligent" weapons systems intended for real-world battlefield deployment. FWIW, these domains are already more closely related than some people might expect.

As to financial AIs being rational --- well, maybe, but that certainly won't keep them from following Buffett and Soros in rationally exploiting other players' irrational behavior. And "rational" in this context certainly doesn't mean, say, "benevolent"...

#30 ::: "Charles Dodgson" ::: (view all by) ::: April 12, 2007, 12:53 PM:

Neil@20:

It is impossible to deliberately plan to grow faster than anything in history ever has. That can only happen by accident, though the net is where those accidents are happening.

<devilsadvocacy> Alternatively, that might suggest that conditions have somehow been deliberately changed drastically in a way that is not yet evident to everyone around... </devilsadvocacy>

#31 ::: JESR ::: (view all by) ::: April 12, 2007, 01:07 PM:

Neil @ 20 If you suddenly had $11,000,000,000 cash, what the hell would you do with it?! Ya gotta do something, and your mattress isn't big enough to stuff it under.

Oh, I'd just keep farming until it was gone...

#32 ::: Graydon ::: (view all by) ::: April 12, 2007, 01:28 PM:

"Charles Dodgson" --

Really rational financial AI would obliterate anything recognizable as market capitalism and much of the social utility of wealth, very likely. (All the standard "weakly superhuman" arguments apply.)

I suspect this would be an overall net win; I also suspect that there would be a lot of plug pulling in response to the prospect.

#33 ::: Tim in Albion ::: (view all by) ::: April 12, 2007, 01:30 PM:

#27

I see there's been quite a bit of additional work on this since last I checked. The NIST report isn't out yet, but the progress report from last December is verrry interestink... not only are they evaluating "hypothetical blast scenarios," they evaluated thermite as a possible heat source. Now we're getting into Doc Smith territory!

#34 ::: "Charles Dodgson" ::: (view all by) ::: April 12, 2007, 01:39 PM:

Graydon:

Really rational financial AI would obliterate anything recognizable as market capitalism and much of the social utility of wealth, very likely. (All the standard "weakly superhuman" arguments apply.)

I suspect this would be an overall net win; I also suspect that there would be a lot of plug pulling in response to the prospect.

Which would make it a very rational move for such an AI to carefully hide its own existence...

#35 ::: Clifton Royston ::: (view all by) ::: April 12, 2007, 01:45 PM:

I think the prospect of any benevolent intelligence emerging from market analytics is not good.

After all, it's been pointed out that corporations are unliving, immortal, and prey on humans to maintain their pseudo-life. Very much like vampires. (Yes, they're in thrall to the owners of their stock certificates, but only in principle.)

What predators are vicious enough to prey on corporations? Primarily financial instruments and traders. What makes you think that if AI in this field evolve intelligence they're going to care about humans, two steps down the food chain from them?

(You could consider this a kind of exegesis of a few lines in Accelerando.)

#36 ::: Graydon ::: (view all by) ::: April 12, 2007, 02:26 PM:

Clifton --

It's no more improbable than a vintner caring about soil bacteria, or you caring about what the meat on your table was fed when it was flesh.

Darwinian individuals are born, die, are recognizably themselves between those times, have descent, and the descent has some possibility of being altered from the ancestral condition.

AIs are not necessarily recognizably themselves between times; it depends on how flexibly organized they are. If they're flexible enough to not be Darwinian individuals, understanding their motivations -- which will almost inevitably come to concern personal immortality, rather than security of descent, for some very strange value of "personal" -- is going to be challenging.

I don't have any trouble imagining a financial AI, or groups of same, doing very long term economic optimization to the point of preferring a large, healthy, well-educated human population; those AIs would have to be weakly super-human and value actual humans as a mechanism for innovation. (Or derive the AI equivalent of social prestige from the health of their herds...)

The transition could be extremely rocky, as those AIs without concerns for the future are filtered out by selection processes. (Which filtering plausibly involves at least one short-term-profit-maximization market nova.)

#37 ::: Gary ::: (view all by) ::: April 12, 2007, 03:02 PM:

#27 Fire, and structural damage from having a massive energy release next door.

I don't buy that for a minute! Not when the building has been shown to have fallen at FREE-FALL!!! speed, and also fell straight down, on its own footprint.

#38 ::: James D. Macdonald ::: (view all by) ::: April 12, 2007, 03:21 PM:

Gary, I do buy it. (Oh, the free-fall claim was for WTC 1&2, not 7.)

The controlled-demolition hypothesis has already been discussed, and demolished, here. It fails the plausibility test. It fails other tests as well, including real-world physics.

Hint: Absent a lateral vector, all buildings fall in their own footprints.

#39 ::: Bruce Cohen, SpeakerToManagers ::: (view all by) ::: April 12, 2007, 03:26 PM:

A financial agent becoming an effective power outside of direct financial manipulation seems dubious to me. The financial abilities will be subject to Darwinian selection against both other agents, and in competition between alternative algorithms, so you would expect it to become good at making money and otherwise manipulating the financial environment. But what would it know about anything outside that environment, how would it develop goals in the greater environment, and how would it become competent in fulfilling its goals using mechanisms other than financial?

#40 ::: r@d@r ::: (view all by) ::: April 12, 2007, 03:47 PM:

#23 ::: Serge ::: (view all by) ::: April 12, 2007, 11:13 AM:

'If you suddenly had $11,000,000,000 cash...'

I'd move back to the Bay Area.

ROTFLMFAO

you'd have to be making about that much annually.

#41 ::: Serge ::: (view all by) ::: April 12, 2007, 04:22 PM:

It's not quite that bad around the Bay Area, r@d@r. It may feel that way at times, true.

#42 ::: Kathryn from Sunnyvale ::: (view all by) ::: April 12, 2007, 04:54 PM:

Gary @37,

A visual experiment I thought of here a while back:

Imagine a tower made of small dessert plates, built with a scaffold of spun sugar: spun sugar is strong enough. It'd be a croquembouche with plates.

This idea- plates separated by thin structural supports- is basically what all tall buildings are. For a building to fall sideways, or anything but straight down, it'd have to be built like a stack of children's blocks: heavily connected, all support, no air. That wouldn't be a building-- that'd be a solid tower.

Now imagine putting this plate-tower into an oven, or even into a parked car in the sun. The melting point temperature for sugar is 300F/150C. The collapsing point temperature for this tower would be far less, because even a mild amount of softening will ruin the structural integrity.

Even if the heat is in just one zone, it'll still collapse down. This is because the shear-tear in that one zone as it starts to fall will lead to more and more shears. The ability for a high-rise to sway (as in earthquakes or high winds) requires all the structural supports to stay together.

And, hey, there's a good example: earthquakes. High-rise collapses in earthquakes always go down, not sideways. Gravity plus shear does that by itself, no help from anyone else needed.

#43 ::: albatross ::: (view all by) ::: April 12, 2007, 04:58 PM:

#41 Serge:

When I saw your post, my first thought was "Ah, moving into a condo."

#44 ::: albatross ::: (view all by) ::: April 12, 2007, 05:20 PM:

Graydon #36:

As opposed to caring about the well being of your factory-farmed chickens? I don't see why one model is more plausible than another here. Maybe we get AI and they're Culture-style Minds. But maybe we get the Blight or we all become assimilated into the Borg, or become batteries (okay, that was deeply dumb) while living in a simulated world full of earbud-wearing badass NPCs.

ISTM that we are going to keep getting human/machine complexes much smarter than unaided humans, long before we get self-interested, self-directed independent AIs. And this is potentially about as scary, since the bunch of people who make themselves into godlike intelligences may also see us as factory-farmed chickens.

Or they may just not care much. Human traders use huge amounts of computer support, as do all kinds of other people. Their aims are still human. They are still likely to ignore the impact of their actions on distant third parties, moderated only by law or personal morality.

#45 ::: P J Evans ::: (view all by) ::: April 12, 2007, 05:24 PM:

Kathryn @ 42

I thought about ice cream with a brick on top of it, as an analogy for this.

The high rise I'm sitting in is creaking quite audibly in a wind doing 20-35 mph. And I'm on the tenth floor of 50. The motion is not otherwise particularly noticeable. (Earthquakes, on the other hand - I want instant-acting Dramamine, but I don't need more than a couple of minutes of action.)

#46 ::: Clifton Royston ::: (view all by) ::: April 12, 2007, 06:05 PM:

albatross:

Love your cross-media survey of the SF AI spectrum, there.

It's possible that the transition from computer-assisted human decision-making to human-assisted computer decision-making will be hard to spot, even in retrospect.

#47 ::: Serge ::: (view all by) ::: April 12, 2007, 06:10 PM:

albatross... Nah. My sister-in-law and her hubby live in the Oakland Hills, not that far from the Locus Lair, and the house cost only $1,000,000. That'd still leave me with plenty of money to throw a celebration party. And for a takeover of the Other Change of Hobbit bookstore in Berkeley.

#48 ::: Andrew Kanaber ::: (view all by) ::: April 12, 2007, 06:20 PM:

The documentary about Bush and Iraq is called What Barry Says and is the original. The Google one came later from a different person and is heavily influenced by What Barry Says but much less good.

I don't want to seem too vinegary here, I just wanted to correct any potential misunderstanding of these as episodes in some swoopy red and black paranoid infosthetics.com infomercial series.

#49 ::: Chris Clarke ::: (view all by) ::: April 12, 2007, 06:23 PM:

suspect this would be an overall net win;

Heh.

#50 ::: Bruce Cohen, SpeakerToManagers ::: (view all by) ::: April 12, 2007, 07:08 PM:

Google: "To strive, to seek, to find, and not ..."?
We don't know where they think they're headed,
or what they'll do to get there. Take a swat
at "Don't be evil," others have and dreaded
what Google'd do if not held down to it.
Could they be building AI to supplant us,
or instruments to reign over the net?
The questions will continue to enchant us.
For all we ask ourselves just what they're doing,
for all we seek to understand their goals,
No matter how we try to augur omens,
We will not know what secret plan they're brewing
or if commitment to their maxim weakens,
until the day they unveil what they're doing.

#51 ::: Graydon ::: (view all by) ::: April 12, 2007, 07:33 PM:

albatross --

ISTM that we are going to keep getting human/machine complexes much smarter than unaided humans, long before we get self-interested, self-directed independent AIs.

This is what civilizations do; expand the scope of co-operation. The VLSI/cybernetic version of that has already significantly happened, too; I think it's hard to point to any particular part of what's happened so far and call it a quantum change, though. (Not that a scholar from 1500 wouldn't think there had been one.)

And we already have, in some limited cases, human-assisted AI decision making; long-haul wide body aircraft autopilots come to mind.

Chris Clarke --

No, it wouldn't have to be. It depends on what the AIs think human involvement in the economy is good for.

Markets are still a side effect of information poverty and a quill-pen-and-ledger level of information management technology. We could certainly generally do better now than the present quill-pens-and-ledgers-very-fast approach, and weakly superhuman AI could do better again.

#52 ::: Rob Rusick ::: (view all by) ::: April 12, 2007, 10:21 PM:

A couple of my favorite AI stories (rot13'd for your protection):

Michaelmas, by Algis Budrys:
Na rzretrag NV sebz n lbhat cebtenzzre'f unpx gb gnyx ybat-qvfgnapr gb uvf tveysevraq; vg vf nqbcgrq ol gur cebtenzzre jub bire gur lrnef qrirybcf vagb n cbchyne ercbegre ba gur serrynapr Arg znexrg. Gurfr ercbegref svaq gurve bja fgbevrf, naq hfr gurve bja fbsgjner gb cnpxntr gurve cerfragngvbaf naq cerfrag orsber gur nhqvrapr.
Gur cebgntbavfg (?) Zvpunryznf (gur jvxvcrqvn qbrfa'g fnl, naq zl pbcl vf va n obk thneqrq ol fcvqref) hfrf gur NV gb qrirybc yrnqf, gb rkcbfr zvfqrrqf, naq frrxf gb uryc vg qrirybc n zbeny frafr, gung jvyy nyybj vg gb nffvfg gur uhzna enpr naq yvsr ba rnegu jura ur vf ab ybatre nyvir gb thvqr vg. Fcvqre Ebovafba'f erivrj bs gur obbx fnvq vg jnf va na ubaberq genqvgvba; zvyq-znaarerq ercbegre jub vf va snpg...


The Jagged Orbit, by John Brunner:
Va n erirefr grezvangbe gjvfg, na NV jvgu n cevzr qverpgvir gb znkvzvmr nez fnyrf qrirybcf n jrncbaf flfgrz gung erfhyg va gur rkgvapgvba bs gur uhzna enpr. Bbcf... ab phfgbzref, ab fnyrf. Frrxvat gb haqb guvf zvfgnxr, nsgre n uhaqerq lrnef bs qrirybczrag vg qrirybcf n zrgubq bs gvzr-geniry; n zrnaf bs cebwrpgvat vgfrys cflpuvpnyyl vagb 'nofrag' crefbanyvgvrf bs gur cnfg. Vg vf noyr gb gnxr cbffrffvba bs furyy-fubpxrq irgrena Uneel Znqvfba, naq frrxf gb cerirag vgfrys sebz gur zvfgnxr vg znqr...
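(For anyone who'd rather not hunt down a decoder ring: rot13 is its own inverse, and Python's standard library will unscramble the above directly. A minimal sketch, with a short stand-in string where you'd paste either spoiler paragraph:)

```python
import codecs

# Paste either rot13'd paragraph above in place of this stand-in.
spoiler = "Na rzretrag NV"

# rot13 shifts each letter 13 places, so decoding and encoding
# are the same operation; non-letters pass through unchanged.
print(codecs.decode(spoiler, "rot13"))  # -> "An emergent AI"
```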

#53 ::: Rob Rusick ::: (view all by) ::: April 12, 2007, 10:30 PM:

"development it develops"

That clinked; sorry...

The ellipsis represents me looking bashfully at my toes.

#54 ::: Nancy Lebovitz ::: (view all by) ::: April 13, 2007, 11:51 AM:

Graydon, what do you think would be better than markets as we know them?

I second the recommendation for _The Jagged Orbit_--among other goodies, it has an evil psychiatrist with a compulsion to imprison people.

#55 ::: Graydon ::: (view all by) ::: April 13, 2007, 12:26 PM:

Nancy --

Markets require some dubious simplifying assumptions (unrestrained choice, rationality of action, commodity equivalence, ubiquity of information); they work, mostly-sorta, at particular scales and for particular things, but have no long term feedback mechanism at all, beyond the extremely Darwinian ones associated with disastrous collapse.

(What an AI would consider "long term" is a very interesting question, too; depends enormously on their perception of time.)

Information causes change; you care about it, as distinct from data, which is basically a waste product until you have some means of identifying what, if any, information might be in it. (which is why computer system log filters that delete everything known to be boring are effective.)

An economy that really had ubiquity of information would be very interesting. (Not all that likely, without concerted effort to get it.)

Anything that treated environmental costs as capital instead of income or not at all would be better than what we have now; anything that didn't presume rationality would be better than what we have now.

I'm not smart enough to imagine what that would look like, but any time the model becomes more accurate it also becomes more capable, and that's effectively what a market is -- a model of economic activity. (which is why the meta-argument over whether economic activity has as its purpose conserving and protecting wealth or generating general prosperity is very important. The choice of model isn't socially neutral.)

#56 ::: Chris Clarke ::: (view all by) ::: April 13, 2007, 12:44 PM:

No, it wouldn't have to be. It depends on what the AIs think human involvement in the economy is good for.

Sorry, Graydon: just going for the silly pun.

#57 ::: Paula Lieberman ::: (view all by) ::: April 13, 2007, 02:54 PM:

Can Googlecache get back the millions of Executive Branch deleted emails?

[The Schmuck is to Tricky Dick, as the cloacae of ancient Rome are to a one-holer outhouse...]

#58 ::: P J Evans ::: (view all by) ::: April 13, 2007, 03:00 PM:

Paula @ 57

They're now claiming that those e-mails got lost in converting from Lotus Notes to Outlook. The comments on that at TPM are running along the lines of the throwing of outdated vegetables. (Some of the commenters there have experience with that kind of conversion.)
Summarized: grab the servers and the backups and run to the nearest forensic IT place!

#59 ::: Serge ::: (view all by) ::: April 13, 2007, 03:04 PM:

P J... The White House doesn't take regular backups?

#60 ::: Paula Lieberman ::: (view all by) ::: April 13, 2007, 03:17 PM:

The Boss Neocon Gang reminds me of the villains in the story "Basic Right" (collected long ago in the anthology Giants Unleashed edited by Groff Conklin), something about expecting the greedy and unscrupulous to continue behaving egregiously greedy and unscrupulous.

#61 ::: P J Evans ::: (view all by) ::: April 13, 2007, 03:39 PM:

Serge, that's part of the veggie tossing. As in 'Yeah - right! Sure, you have no backups!' (You get to add your own sarcasm sound effects.)

I think some of the commenters are preparing to use canned veggies, in response to canned answers.

#62 ::: Clifton Royston ::: (view all by) ::: April 13, 2007, 03:57 PM:

Would an AI focused on market trading, if such were to evolve, recognize that there is a physical world connected with the market? If it did, would it consider it significant, or an unimportant historical accident? Would it consider it necessary to preserve that connection, or would it instead seek to design it out, and restructure things such that markets would be less impinged on by irrelevant external factors like breathable air, drinkable water, human population, et al.?

Given that markets often reward those who manage to displace the costs of externalities on others, perhaps an AI would have an even stronger tendency to do so. Or not - this is all idle speculation.

However, I see no a priori reason to think that an AI would seek to redesign the financial markets to more closely track the physical world and the real-world costs therein. That sounds more like a human goal, and an altruistic - or enlightened self-interested - human goal at that.

#63 ::: Nancy Lebovitz ::: (view all by) ::: April 13, 2007, 04:05 PM:

Graydon, you say

Markets require some dubious simplifying assumptions (unrestrained choice, rationality of action, commodity equivalence, ubiquity of information); they work, mostly-sorta, at particular scales and for particular things, but have no long term feedback mechanism at all, beyond the extremely Darwinian ones associated with disastrous collapse.

I don't think markets require any of those things to be complete, with the possible exception of commodity equivalence. Otherwise, they get by with fair-to-middling choice, rationality, and information. Having those be perfect simplifies the theories, but that's all.

I'm pretty sure you're right that markets don't have an internal feedback for "is this market a bad idea?" The abolition of slavery would count as external feedback. Now that I think about it, that's one of the few examples I can think of where prohibition didn't (afaik) lead to a black market.

Is the goal to treat environmental costs as capital or to treat a healthy environment as capital?

Speaking of ubiquitous information, the one thing that makes me unhappy about google is that they want to make all other sorts of information available, but are very concerned to protect their privacy.

#64 ::: Bruce Cohen, SpeakerToManagers ::: (view all by) ::: April 13, 2007, 04:18 PM:

Clifton Royston @62

However, I see no a priori reason to think that an AI would seek to redesign the financial markets to more closely track the physical world and the real-world costs therein. That sounds more like a human goal, and an altruistic - or enlightened self-interested - human goal at that.

Wouldn't it depend on whether the AI was embodied? That is, if it had direct physical perception, and proprioception of itself, so that it saw itself as part of the physical world, it would be more likely to care about real-world costs. If it didn't perceive itself as in the world, then not so much care.

It's a sad fact that many humans equate self-enlightenment with altruism, leading me to ask "Enlightened self-interest? How many people do you know who are enlightened?" Is there any reason to think AIs would do better?

#65 ::: Serge ::: (view all by) ::: April 13, 2007, 04:29 PM:

P J @ 61... Yeah, I felt very skeptical too. It's weird though that nobody appears to have asked them about backups. That Bush-hating Liberal Media...

#66 ::: fidelio ::: (view all by) ::: April 13, 2007, 04:30 PM:

#63 Nancy, I think there are some historic instances of people attempting to get around provisions against slave trading, while slavery was still legal--FREX, the importation of slaves was made illegal in the US, while ownership and internal trading was still permitted, but IIRC, there were attempts to import slaves from places like Cuba (where importation from Africa continued to be legal for some time, see the saga of the Amistad) under a variety of legal ploys, some of which were thinner than others--for example, you could come to the US from Cuba, with slaves that you owned, and, after a while, as a legal resident of the US, sell them. It's not quite the same as running a Boston Whaler out to a ship just outside the three-mile limit and bringing back a load of Scotch, or similar exercises involving cocaine and heroin, but it was still an attempt to subvert the ban against importation.

You could also make the case that the share-crop system (especially with the plantation store's credit accounts) and convict labor were both ways around the prohibition against outright slavery--just as Saudi Arabia's guest-worker system is.

#67 ::: Graydon ::: (view all by) ::: April 13, 2007, 04:34 PM:

Clifton --

If the finance AI were trying to build a better market model, in terms of maximizing returns, as one of its built-in axiomatic goals, it might decide that what it really wanted was an economy model as well as a market model, so that it could exploit any points of disjunction.

Chris Clarke --

Whups! Sorry; missed the possibility of the pun quite entirely.

Nancy --

There most definitely is an illicit market in slaves.

Environmental costs should be treated as spending from capital in financial models. (Ideally, as spending from appreciating capital.)

Google wants to keep its internal workings secret because it exists in a hostile environment, and its ability to surprise its competition is important to its survival. This is an inherent property of market competition.

#68 ::: "Charles Dodgson" ::: (view all by) ::: April 13, 2007, 06:42 PM:

Late quibble on Graydon@55:

Most real markets don't actually have ubiquity of information, unrestrained choice, etc. However, it certainly is true that many naive claims about the benefits of markets are only true to the extent that those assumptions approximately hold. A lot of the most interesting recent work in economics (including key papers from a few recent Nobel prizewinners, IIRC --- Stiglitz, Kahneman, etc.) is all about exploring the consequences when they don't...

#69 ::: Clifton Royston ::: (view all by) ::: April 14, 2007, 04:29 PM:

AIs might make much better market traders than people, to the extent that they can successfully model how people make irrational decisions while continuing to believe themselves rational. One of the big problems is that people who think they are compensating for their own - or others' - irrationality usually aren't.

#70 ::: albatross ::: (view all by) ::: April 16, 2007, 02:40 PM:

Graydon #67, Clifton #52:

I think the problem here is whose capital accounts you subtract environmental damage from. If I pollute the air and make a profit doing it, I may be depleting the capital of mankind, but that's mostly not mine, and I may not care much.

A self-interested decisionmaker is going to make decisions that do not factor in externalities. An AI designed to excel at market trades is probably very much a self-interested decisionmaker.

I think Clifton is on the right trail. Sometimes, the best source of wealth around is to find some way to dump the costs of your actions on others but keep the profits. AIs might be very good indeed at finding such situations. I seem to recall an article about an AI researcher who was barred from various competitions in complicated board games, because he would search for weird ways to exploit the rules and do something surprising.

You can imagine this sort of thing leading to a global financial collapse, as some AI agent quickly bankrupts a government or two, or causes some bizarre misallocation of resources in exploiting a weird tax loophole or subsidy or exchange rule or persistent bias in some estimation of value.

Welcome to Making Light's comment section. The moderators are Avram Grumer, Teresa & Patrick Nielsen Hayden, and Abi Sutherland. Abi is the moderator most frequently onsite. She's also the kindest. Teresa is the theoretician. Are you feeling lucky?

Comments containing more than seven URLs will be held for approval. If you want to comment on a thread that's been closed, please post to the most recent "Open Thread" discussion.


Dire legal notice
Making Light copyright 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 by Patrick & Teresa Nielsen Hayden. All rights reserved.