
February 12, 2010

The lily knows not why it blossoms in the spring
Posted by Abi Sutherland at 04:07 PM *

(This started as a comment on this thread, then grew and grew and grew until the ceiling hung with vines and the walls became the world all around.)

So the latest meme that all the cool kids on the internet are using is I don’t like the new Facebook login page. Look here to see the moment of its birth. Note that fourth, bolded paragraph, and read some of the comment thread. Then look at the comment posting box.

Still puzzled? You are, trust me, not alone. Daring Fireball will clarify matters. And this is a very good analysis of where the problem really comes from. I’m sure there are more interesting comments to be found on the matter.

But I didn’t notice when it happened, because on that day, the Facebook login failure day, Buzz was launching, and my gmail was afire with everyone I know poking around at this strange new interface, getting into conversations with friends of friends, unearthing previously unexplored connections, and wrestling with the privacy settings. And it all went wrong for some people, and there was shouting, and Google started working on improvements. They issued an explanation, which tells me more about how they got into this situation than how a user can get out of it.

What’s the commonality here? Turns out the people who couldn’t make head nor tail of why their old route for reaching Facebook put them at this red blog weren’t alone. Turns out that the secret engines of the world do weird things even to techies and geeks. And some of the people who pointed and laughed at the Facebookers are probably sitting there right now, trying to figure out whether their profile is public enough to allow them some privacy.

It really isn’t that people are stupid. Some people are stupid, particularly when you constrain the definition of intelligence to certain fields. But plenty of people are shrewd and smart and still haven’t grasped the underlying nature of the tools they use¹.

Remember that the acquisition of a mental model is like a flash of enlightenment, entirely changing the universe before one’s eyes². To top it off, it’s an irreversible change, and one can’t truly re-inhabit the world one lived in before it happened. My worst arguments with my colleagues, the ones that leave everyone sulking, happen when I use their product without sharing their mental model. The irritation comes because we’re both right within the bubble universes we inhabit. They just don’t overlap at all.

So let’s talk about meatspace.

I used to work on cars. I can explain, succinctly and with hand gestures, the basic mechanics of an internal combustion engine. But I know many, many people whose structural comprehension of automotive engineering is just barely past the belief that Queen Mab and her invisible fairies tow the thing along when you summon them with the vroomy noises. And yet they drive.

And me, I’ve never understood electricity. I’ve been turning lights on all my life, but it wasn’t until a couple of weeks ago that I finally got a good teaching book on electronics and started to look at how all these volts and amps and ohms make the shiny thing happen. And yet my electrical goods have always worked³.

I guess, you could say, that I’m getting the point of the iPad. Even though I still don’t want one.


  1. I once met a very well-dressed lady in her fifties who explained to me that one did not press elevator buttons to indicate what one wished to do—for instance, pressing the up button to go up. Rather, one used the buttons to tell the elevator what to do next. Since it was above us, pressing the up button would make it go further away; the correct button to press to summon it to take us upward was down. Her world was full of unresponsive elevators, but she had still, quite clearly, made her way through it.
  2. I still remember the aha of object orientation. It was like a conversion experience, and I kept being driven to explain it to everyone I knew. That was a dull week or two for my friends and family.
  3. Well, apart from the travel light box, whose need for repair prompted the book purchase.
Comments on The lily knows not why it blossoms in the spring:
#1 ::: Scraps ::: (view all by) ::: February 12, 2010, 04:55 PM:

My god. I'm stunned.

#2 ::: Steve with a book ::: (view all by) ::: February 12, 2010, 05:09 PM:

Charles Platt! The O'Reilly about-the-author page says "He wrote five computer books during the 1980s", and one of those was the first book I ever read about computer culture, as opposed to books about what computers were and what they did and what the difference was between PL/I and ALGOL. It was Micromania, co-credited (in the UK edition?) to Dave Langford, and I read extracts of it in 1983, in a magazine forgotten now by everyone but geeks of a certain age (and in particular British retro-computing enthusiasts): Which Micro? and Software Review.

We still have computer magazines, of course; but I do miss the hobbyist spirit of the magazines of the 80s. Good to see that the 'maker' culture is keeping the flame alive.

#3 ::: IreneD ::: (view all by) ::: February 12, 2010, 05:10 PM:

Ouch. Being my family's resident geek and go-to person when problems with computers arise, I can absolutely understand. And sympathize. The interface designers have come a long way to make their products user-friendly, but the real challenge is to make them user-proof.

#4 ::: abi ::: (view all by) ::: February 12, 2010, 05:13 PM:

Steve with a book @2:

Hey, cool! I didn't know that about him.

He's certainly written an excellent book this time. Really readable, clear explanations, good projects, excellent sense of humor. I'm looking forward to working through the first chapter or three on Sunday. (And fixing that travel light box on the side.)

#5 ::: Thena ::: (view all by) ::: February 12, 2010, 05:15 PM:

Today, on the drive home, my other half told me about a frenetic email from one of his work superiors wanting a map of all the 'links' to some information that would be changing on the organizational website. Apparently it took a large part of the afternoon to explain that if you change the page that the links point to, you don't actually have to change all the links....

I'm not mocking the boss; I'm baffled how someone could -not- understand how web pages work.

(And yet...)

#6 ::: TexAnne ::: (view all by) ::: February 12, 2010, 05:24 PM:

Abi, did you perhaps forget to close the footnote-font tag? All the comments are tiny.

#7 ::: Steve Downey ::: (view all by) ::: February 12, 2010, 05:29 PM:

I've also learned through many computer epiphanies, like finally grokking what an Object is, that the insight that puts you over the edge is not actually the cause of the epiphany.

And in particular, the hopeful thought that if you could just tell everyone that last insight they would get it too just isn't true.

Of course back in the early '90s when I finally really figured out objects, I could only annoy a few friends and coworkers with the amazing insight that 'Objects Do Things!', and be disappointed that they still were in the dark. Today I'd post it to my Blog. And be disappointed that everyone was still in the dark.
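
(For anyone who wants the toy version of 'Objects Do Things!': a little illustrative Python sketch, made-up names and all, of data and behaviour travelling together instead of procedures being applied to inert records.)

    class Account:
        """A toy object: it carries its own data *and* the things it can do."""

        def __init__(self, owner, balance=0):
            self.owner = owner
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    # You don't push the record through free-floating procedures;
    # you ask the object to do things, and it knows how.
    acct = Account("Steve")
    acct.deposit(100)
    acct.withdraw(30)
    print(acct.balance)  # 70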


#8 ::: abi ::: (view all by) ::: February 12, 2010, 05:31 PM:

TexAnne @6:

Hm. I did, and Chrome cleaned it up for me so it didn't show. Fixed now?

(This is, of course, very ironic.)

#9 ::: nerdycellist ::: (view all by) ::: February 12, 2010, 05:35 PM:

Oh dear. I always considered myself a computer idiot. To use the car metaphor, just because I know how to drive it and fill the gas tank doesn't mean I know anything about the insides, much to my embarrassment. But I'm having a hard time understanding why so many people with Facebook icons that put them right in my generation (and a little younger) seem to be the computer version of ST:TNG's Pakleds. I'm suddenly very worried for the author of the "Red Page"; watch out, Georgi!

#10 ::: Nix ::: (view all by) ::: February 12, 2010, 05:43 PM:

That is a deeply impressive thread over there at RWW, but yes, it's humbling. What's worse is that the myriads of people falling for this trap aren't going to be jolted by this into fixing their mistakes: they're going to assume that computers are even *more* fragile and mystical than they thought: that Facebook radically 'changing' without warning is something intentionally done by someone else, for reasons beyond their comprehension.

And then they're going to come to me, all of them, and they're going to ask me to fix their Windows machines. Even though that's not my job.

Again.

#11 ::: nerdycellist ::: (view all by) ::: February 12, 2010, 05:46 PM:

ugh. Nerd fail. I meant to type "Geordi".

"We go fast. We are strong!"

#12 ::: Harry Connolly ::: (view all by) ::: February 12, 2010, 05:50 PM:

My wife could live her life very happily without a computer, except that all the cameras are digital now, and people keep giving her email addresses as contact info (I type out the messages she dictates).

She's also seriously dyslexic. Typing out URLs is an exercise in frustration; she much prefers the Google window, which will try to interpret her spelling. Finding a useful result in the search results is a separate headache.

#13 ::: TexAnne ::: (view all by) ::: February 12, 2010, 05:51 PM:

abi, 8: Yep, all better!

#14 ::: Steve with a book ::: (view all by) ::: February 12, 2010, 06:12 PM:

I suppose that if we apply Clarke's Law to the RWW/Facebook affair, we must conclude that computer technology is now Sufficiently Advanced: there are huge numbers of users to whom What's Going On is so opaque that their best bet is to offer incantations to the pastel-coloured Gods of Google and pray for deliverance.

Not that I'm mocking—in twenty or thirty years I'll be wandering baffled around a virtual world I don't understand, too afraid to ask for help. It's all wrong, how taking your eye off the technology ball for a little while can leave you careering towards a hopelessly deskilled old age; I hope that improvements in interface design mean that this stops happening in the future but I don't bank on it.

#15 ::: Scott W ::: (view all by) ::: February 12, 2010, 06:18 PM:

The RWW thread is amazing. Partly because of the large number of people who have incorrect models of how the web works, but more because of the reactions of the clued-in: dismissive, derisive, and some deception-for-lulz. I feel bad for all the times I was short with my dad regarding his computer questions.

#16 ::: Tatterbots ::: (view all by) ::: February 12, 2010, 06:47 PM:

I usually use Gmail via third-party software, so I hadn't heard about Buzz until I read this post. I don't want a unified service that does everything, I just want an email account, so I have just been to the site and switched Buzz off.

It was not at all obvious how to do that, and even when I found the help page explaining how, I was alarmed to read that I first had to delete my Google Profile (which I turned out never to have created, but that wasn't obvious either) and block all my followers (I had one, whom I hadn't yet approved, so blocking him turned out to be unnecessary too). I was hesitant to do these things in case they broke something or emailed my would-be follower making it look like I'd deliberately rebuffed him. I went ahead anyway, but I know many people who wouldn't.

I object to being made to jump through hoops to opt out of a service I never signed up for. I consider it rude of Google to make me spend time on this. I don't have a MySpace account, and if MySpace created one for me of their own accord they would definitely be crossing the line. The fact that I have an email account with Google doesn't mean Google isn't crossing it too.

#17 ::: Graydon ::: (view all by) ::: February 12, 2010, 06:54 PM:

I somewhat object to the idea of secret engines of the world; I think that idea is itself the beginnings of getting stuck in the mystical world view. (So frequently very useful inside one's own head, so desperately useless outside it.) It's a lovely phrase, and it describes something real, but, well, it's also, I believe, part of the precipitate.

I regard myself as marginally inept with computers; no one else does, but I tend to think that if I'm not a kernel maintainer, a hardware design engineer at a chip or board level, or a software researcher (or at least doing compiler optimizations) I'm just being a rather persistent user. Maybe this is silly; people have after all paid me to install operating systems, set up their databases, and run their production servers, usually as a side effect of the rest of my job. But I think it might also be useful; There Is Always More To Know. I'd rather have that reaction than be forced to conclude something is cursed. (Anybody who has spent three days chasing one specific elusive erratic bug, well. It's not that difficult to feel cursed.)

Still, one of the most basic things about new communications technology is that it doesn't take off until it has a widespread social use; this happened with newspapers, it happened with land line and cellular telephony (with a CB radio sideline in there) and it's certainly happening with computers. How it works is only the point if you're in a particular band of monkeys that grants social status for knowing that; otherwise it really is irrelevant. (I would rather it wasn't, please understand, but generally speaking irrelevant is what it is.)

#18 ::: chris ::: (view all by) ::: February 12, 2010, 07:13 PM:

I guess, you could say, that I’m getting the point of the iPad.

ISTM that if iPad owners know as much about their iPads as the lady in footnote 1 knows about elevators, this incident barely scratches the surface of what is possible.

This isn't just not understanding how a carburetor works; this is rubbing your clothes up and down the front of a washing machine and wondering why it doesn't work as well as your old washboard. There's no amount of user-friendly design you can apply to washing machines that will solve that problem.

#19 ::: markdf ::: (view all by) ::: February 12, 2010, 07:17 PM:

This incident is what bothers me about privacy controls too--only, as Graydon #17 points out, those are intentionally confusing.

This week someone updated Facebook with a number of photos of a private family gathering in which an ill person, who is a very private non-computer user, would have been embarrassed to be seen in her condition. The poster is not stupid and, in fact, works in a computer-related field (non-programming). He thought he was protecting her privacy--i.e., he was aware of the issue--by sending it to select people he knew she wouldn't mind seeing it. What he didn't know is that once he sent it, every "friend of friend" of his distribution list could see it too. He didn't believe me until I showed him through a different account. He immediately took the photos down, but, like I said, he has a higher level of understanding of this stuff than most people.

And now Google decided willy-nilly to violate privacy without an opt-in to its new Buzz service.

It's to the point that, on the internet, everyone will know you're a dog.

#20 ::: Evan ::: (view all by) ::: February 12, 2010, 07:19 PM:

For what it's worth, typing "Facebook" or "Facebook login" or "Google" or "Yahoo" or "Hotmail" into a search engine is incredibly common, much more common than most techies think. This kind of query actually represents one of the three major categories of search traffic -- they're called "navigational queries". (In case you're interested, the other two are called "informational queries" and "transactional queries".)

The upshot is that Google and Yahoo! and Bing actually spend a substantial amount of time and effort shuffling people around the internet. This is how many, many real people navigate the web. Which is all well and good except for when the wrong URL goes into Slot #1. Then Bad Things can happen. Lucky for everyone concerned, ReadWriteWeb isn't a phishing site.

#21 ::: Josh Jasper ::: (view all by) ::: February 12, 2010, 07:31 PM:

Charles Platt is also the author of a number of sci-fi novels. Oh, and he's the father of my wife.

#22 ::: C. Wingate ::: (view all by) ::: February 12, 2010, 07:35 PM:

FWIW, as someone with a CS MS, I find Facebook's interface maddeningly obtuse. The way the redesign has hidden the logout under a menu perfectly symbolizes the wrongness of it: can you imagine how many people use Facebook on a library computer and don't log out because they cannot figure out how to?

#23 ::: Clark E Myers ::: (view all by) ::: February 12, 2010, 07:57 PM:

can you imagine how many people use Facebook on a library computer and don't log out because they cannot figure out how to?

The cube of the number of people who once ended terminal sessions by typing quit pause done pause bye pause......

#24 ::: P J Evans ::: (view all by) ::: February 12, 2010, 08:45 PM:

16
There's a button down at the bottom of the page, in GMail, below your inbox, that says 'turn Buzz off'. It's a toggle. I turned it off. (I've also told it not to show me Buzz, and moved that from the list of features on the left sidebar to the 'other features' section, which is also not something they explain very well.)

#25 ::: Nicole J. LeBoeuf-Little ::: (view all by) ::: February 12, 2010, 09:38 PM:

"turn off buzz", in fact.

I was trying to find it by typing "/turn Buzz off" into Firefox, and the phrase wasn't coming up.

Found now.

#26 ::: Caroline ::: (view all by) ::: February 12, 2010, 09:42 PM:

P J Evans @ 24, that "turn Buzz off" link does not turn Buzz off. It merely hides Buzz from your view. You can still follow and be followed (which evidently exposes the list of followers and followed), and if anything was publishing to your Buzz feed (like shared Google Reader items and Picasa, which are set to publish by default), people following you will see it.

In order to actually disable Buzz, you have to go through the process described by Tatterbots @ 16.

I believe I've managed to disable mine, although I'm not entirely sure. I don't seem to have created a Google Profile at any point so I don't have one to delete. I'm still confused about the relationship between the Google Profile and Buzz. All of the Buzz controls seem to be housed in the Google profile; what does that mean when I don't have one?

I don't strive for online invisibility or anything -- I'm Googleable, I have social networking accounts -- but I do feel angry and betrayed by Google with Buzz. I should not have been opted-in and automatically signed up to follow and be followed without being asked for consent at any point, and without a clear way to opt out.

I should not now be having to guess and hope whether I've successfully opted-out of and disabled Buzz, or whether I'll have more people following me without my knowledge. I use Gmail for email; I don't want to be forced to take part in a social network that so far is unpredictable and extremely unclear about what information is published to whom.

I always understood that I was trusting Google with a lot of personal information by using Gmail, but up until now they had appeared to be worthy of that trust. Now I'm frankly not sure. I've used Gmail for years and now I'm seriously considering moving elsewhere, because this was such a nasty surprise, and such a confusing frustration to figure out what exactly was going on and how exactly I could control it at all.

I mean, have the Buzz team been paying any attention to the internet over the past couple of years? Have they actually used social networking at any point? This is all just shockingly stupid.

#27 ::: P J Evans ::: (view all by) ::: February 12, 2010, 09:46 PM:

26
I've turned it off and hidden it everyplace I could find. If it requires going to a Google Profile, then they need a link to find the thing, because I've never set that up. (Fortunately I have no mobile device attached to that account, and it's not one I use for personal mail.)

I agree with you that it should be opt-in, and it sure ought to be easier to shut off/disable/get out of.

#28 ::: Emily Horner ::: (view all by) ::: February 12, 2010, 09:57 PM:

Public librarians know this very, very well.

Actual behavior I witnessed a few weeks ago:

User typed "www.google.com" in search bar.
Google search results page came up and directed User to google.com
User then used Google to search for what she meant to search for in the first place.

I consider myself a decently advanced computer user -- I run Ubuntu Linux on the computer I use most, I provide tech support at the library -- but what I've come to realize is that computers ARE indistinguishable from magic for me as long as they're working right. It's only when they're working wrong that I have to actively access my mental model of how the computer works.

People who have been using computers for 15 years or so learned their computer skills in a time when you HAD to understand the machine or give up. People who learned computers more recently have dealt with a gentler interface, an interface with a lot less on the surface. A lot of them are used to the idea that you call for help if you can't immediately find out how to do what you're trying to do. I keep telling them, I'm not a computer expert. I'm someone who pokes at things until she finds something that works. But I think you're poking at something that's a little less transparent than it used to be....

#29 ::: Clifton Royston ::: (view all by) ::: February 12, 2010, 10:04 PM:

The issue of default choices seems to be one which a lot of think-of-themselves-as-computer-savvy users dismiss. The fact is that a huge number of people will never change their defaults, if they even know that such a thing exists or have control over it. Any programmer who doesn't understand that is, frankly, incompetent in their trade.

I consider the specific case a particularly egregious if not maliciously faulty design. Note that the control which putatively disables Google Buzz is in a section which is labeled as controlling the view of your Gmail inbox, not the operation of the account, at the very foot of the page, literally in 8-pixel type.

I say putatively, because as it's designed there's no particularly obvious way for the user to verify that it in fact does disable the Buzz functionality rather than removing it from view, other than getting a fresh Gmail account and signing it up for Google Buzz to check from.

I had an elaborate analogy here about designing cars with razor-edged blades surrounding the passengers, but decided it was too long. You can add your own.

#30 ::: Edgar lo Siento ::: (view all by) ::: February 12, 2010, 10:19 PM:

#26 ::: Caroline,
What you said. Word.

I too am considering getting a different email service. Maybe I should look into panix.net, or some kind of webhosting service.

Does anyone have suggestions?

#31 ::: TexAnne ::: (view all by) ::: February 12, 2010, 10:31 PM:

Harriet's written a new post.

#32 ::: P J Evans ::: (view all by) ::: February 12, 2010, 10:35 PM:

Okay. You can get to your google profile, such as it might be, from GMail's settings pages.

Go to Settings > Accounts and Import

At the bottom there's a box for Google Account Settings; use the Google Account Settings link to get to that page.

You can edit your profile from there without having actually set one up (top left corner), and you can check to see what's going into your stuff (top right corner), but in the middle of the page there's a section labelled 'My Products' with all the stuff you're actually connecting to.

(This is also the page where you can delete your GMail account, so you really want to know how to get to it.)

#33 ::: heresiarch ::: (view all by) ::: February 12, 2010, 11:18 PM:

Steve Downey @ 7: "I've also learned through many computer epiphanies, like finally grokking what an Object is, that the insight that puts you over the edge is not actually the cause of the epiphany."

Which, itself, was a little epiphany for me. (Or at least the shock you get when you read something that puts into words a vague, niggling hint of an epiphany in the back of your mind.)

Graydon @ 17: "I somewhat object to the idea of secret engines of the world; I think that idea is itself the beginnings of getting stuck in the mystical world view."

I'm not quite following you here; "secret engines" isn't a phrase abi used and I'm not sure what you mean by it. Can you unpack that a bit for me?

#34 ::: Zack ::: (view all by) ::: February 12, 2010, 11:22 PM:

I have nothing to say that my SO hasn't already said better:

How do we design for learned helplessness?
Designing for understanding

#35 ::: eric ::: (view all by) ::: February 12, 2010, 11:33 PM:

My favorite old style .sig was line after line of someone trying to quit vi, and getting increasingly desperate and aggressive.


#36 ::: Thomas ::: (view all by) ::: February 12, 2010, 11:44 PM:

I remember suddenly realising how some of my students see command-line code for statistical data analysis. It's just a block of letters, with no internal structure, much the way Japanese looks to me. I knew they didn't understand the code, but I hadn't realised how many of them don't parse it or even tokenise it in their minds.
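
To make "tokenise" concrete, here's an illustrative sketch (Python's own tokenizer standing in for whatever the stats package actually uses; the line of code is made up) of the difference between one opaque string and the sequence of pieces an experienced reader sees:

    import io
    import tokenize

    # A made-up, R-flavoured line of analysis code, held here as raw text.
    code = 'fit <- lm(weight ~ height + age, data = survey)\n'

    # To someone who doesn't parse it, it's one opaque block of letters:
    print(code)

    # Tokenising splits it into the pieces a fluent reader sees at a glance.
    # (Python's tokenizer doesn't know R, so this is only approximate; the
    # point is the contrast between "block of letters" and "structured list".)
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type not in (tokenize.NEWLINE, tokenize.ENDMARKER):
            print(tokenize.tok_name[tok.type], repr(tok.string))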

#37 ::: Anonymous Coward ::: (view all by) ::: February 12, 2010, 11:59 PM:

Well, this is just wonderful. I have found out that a Buzz "follower" of mine is someone that I unsuccessfully did business with a couple of weeks ago and had hired because said person was a housemate of a friend. Now I'm going to have to tell this friend about the housemate. Thanks for everything, Google. Oh, and I have no public "profile", either. I suppose more of these are going to show up now.

#38 ::: Alan Hamilton ::: (view all by) ::: February 13, 2010, 12:11 AM:

If I'm not sure of a company's real URL, I'll often search rather than typing in the company name as the domain name. The Google result is more likely to be to the actual site even if the url is a variant or I've misspelled it.

#39 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 13, 2010, 12:21 AM:

The Facebook interface may be particularly egregious, but it's hardly the only bad user interface on the web, and is probably not the worst.

Some years ago I realized that almost every artifact and process humans have invented has some form of user interface¹; until very recently (in historical terms) it was assumed that a) all tools required practice to be good at their use, and b) the user interface was intuitively obvious from the shape and mass of the tool². For some reason, designers have become infected with the meme that all user interfaces should be "intuitive", that is, a user with little or no experience with the gadget in question should be able to figure out how to use it without instruction, or at most from an explanation of the "conceptual model" the interface presents.

What's wrong with this notion? Well, first, there are damn few designers who are good enough to create a user interface based on a simple conceptual model that makes all the capabilities and features available in terms of that model, even when it is feasible to have a single simple conceptual model that explains all of what a device does. Second, many of the people who create hardware and software deliberately make them more complex rather than less for reasons that have to do with egoboo and peer group admiration, and because they're making them for themselves rather than non-technical people (at least non-technical in terms of the fields of technical expertise the designer is knowledgeable in). Third, user interfaces are dynamic things, and they are usually designed to be rather static (until someone changes them without telling the users). And fourth, having a simple conceptual does not mean the interface is easy to use, as any student of the violin can tell you.

Let me expand that second point. Many experts in a field feel a certain contempt for people who are not experts; it's a very human thing to do, but it can interfere drastically with the task of creating an artifact which allows a non-expert to deal with that field. Even when the expert doesn't feel contempt, he or she is not likely to be able to judge accurately how difficult a task may be for someone who doesn't have years of training and experience.

On top of which, very few interface designers seem to understand that classifying users requires more than a single "beginner"/"power user" dimension. Some people are technically proficient, but don't use the interface often enough to have their knowledge at their fingertips³. And some users can become experts at a particular interface without having a comprehensive conceptual model of it: they have internalized all the special cases and the exceptions to the rules. As examples, consider fluent speakers of a language who don't know anything about linguistics, or users of PowerPoint, which consists mostly of special cases.

There is a large literature in user interface design (though it's called other things in some fields, like "user experience", or "ergonomics"). But the stuff I've found that seems useful is written not by engineers or programmers, but by cognitive psychologists (e.g., Don Norman), sociologists (most especially Sherry Turkle), and dramatists (like Brenda Laurel). All of them start by looking at the requirements and limitations of human beings who will use devices, rather than with the design of the device.

¹ Some things we don't normally think of as having user interfaces but really do: scientific theories (many have a very compact interface called a mathematical model), shoes (they have laces or velcro straps and/or pull tabs on the back; we expect to have to teach our kids how to use them), and outside doors (Don Norman tells a hilariously frightening story about what can happen when you fail to understand how to operate a door in one of his books on design).
² How do you use a hammer? It's obviously intended to whack things with, or to pry them apart (if it's a claw hammer). But how many people can become good at using a hammer without some instruction and a lot of practice? I think that the idea that it doesn't require time and effort to learn how to use something is relatively new, and rather pernicious.
³ In the case of computer interfaces, that's often literally where the knowledge resides in an expert user.

#40 ::: janetl ::: (view all by) ::: February 13, 2010, 12:34 AM:

I use Google for navigation sometimes, too. It's handy for sites where I can't remember if it is .com, .net, or .org.
It's also a good idea if you're in a public place, or on a projector in a meeting. Selecting one of the sites from a list of search results is safer than accidentally typing in a misspelling, and finding yourself looking at an NSFW page. I have a vivid memory from my library volunteer days. The novice I was helping typed in a URL, and found herself in a porn site that popped up a new window every time she closed a window. She was mortified.

#41 ::: Paula Helm Murray ::: (view all by) ::: February 13, 2010, 12:36 AM:

Question to the fluorosphere: I have a gmail account (I think, see later info), I do not use it at all.

Is this something I need to look into changing?

I have never used my gmail account, I don't remember when I signed up for it and when I did it was on Stardust, the laptop that was kidnapped June 26, 2009 and who has not been found yet. Since then, I was on a borrowed, 10+ year-old mac laptop that went belly up just before Christmas (and I'm getting what info they could recover off the hard drive next weekend on a CD) and which I did not do much personalization to. And now I've got Stardust II, a MacBook that I'm starting to get along with real well.

We have a relaxacon in Hutchinson that involves a Cosmosphere visit, where I hope to score another Stardust mission sticker for the new Mac.

#42 ::: P J Evans ::: (view all by) ::: February 13, 2010, 12:43 AM:

41
I'd urge checking your account to see what the settings are. (If you can get to Google, you should be able to get to your account.)

#43 ::: xeger ::: (view all by) ::: February 13, 2010, 12:45 AM:

I find myself reminded that the 'principle of least surprise' is one which I would be pleased to see in action far more often than in abeyance.

#44 ::: xeger ::: (view all by) ::: February 13, 2010, 01:00 AM:

P J Evans @ 42 ...
I'd urge checking your account to see what the settings are. (If you can get to Google, you should be able to get to your account.)

I'd also note that checking your account may be no help at all about what your settings are or aren't...

#45 ::: Avram ::: (view all by) ::: February 13, 2010, 01:01 AM:

I still remember the aha of object orientation.

I remember getting that same aha. Unfortunately, I got it while working in Perl, so it went away after a few days, and I had to get it all over again. It's stuck with me through JavaScript and Ruby, so I think I've got it for good now.

#46 ::: Erik Nelson ::: (view all by) ::: February 13, 2010, 01:33 AM:

Paula Helm Murray #41:
Take into account the possibility that a thief can know your account password if your missing laptop remembers it.

#47 ::: mcz ::: (view all by) ::: February 13, 2010, 02:00 AM:

eric @ #35:

Do you still have that lying around? I'd love to see it.

#48 ::: Erik Nelson ::: (view all by) ::: February 13, 2010, 02:08 AM:

ambient findability

#49 ::: Erik Nelson ::: (view all by) ::: February 13, 2010, 02:11 AM:

Other free web based mails to use instead?

I have both gmail and Yahoo. Yahoo has too much animation in its ads, and it tacks ads on the end of your mail.

What else is there?

#50 ::: Erik Nelson ::: (view all by) ::: February 13, 2010, 02:15 AM:

The Domestic Abuse Hot Line website has a link to an innocuous-looking website you can switch to in case someone is looking over your shoulder. The page they chose for this is Google.

This seems like a problematic choice. People who have reason to not want to be found are being shown that here is a tool for finding everything.

#51 ::: janetl ::: (view all by) ::: February 13, 2010, 02:37 AM:

I have figured out a way to complain to Google, though I daresay it may not get very high up the chain. I'll just hope that they are collecting metrics on this sort of thing.

1. Log into your Google account.
2. Click on Help, and look for help on any topic you like.
3. At the bottom of the page, you'll see Didn't find your answer? Continue to the next step >>, where "Continue to..." is a link. Click it.
4. On successive pages, spurn their help and repeat clicking the "Continue to the next step" links until you get a page where you can submit a bug report.

I wrote a bug report about how I'm not sure what information Buzz has exposed, or how to prevent more being exposed, and that I'm upset that I wasn't asked to opt-in to Buzz in the first place. Alas, due to my limited writing skills, my report is in prose, but I'm sure many of you can do much better.

#52 ::: janetl ::: (view all by) ::: February 13, 2010, 03:02 AM:

There is a Buzz and Contacts help forum. It's full of people complaining about Buzz, and a few posts from Google employees pointing to the information on how to disable Buzz. Alas, I am not alone in finding the instructions to disable it less than informative.
I am also not alone in thinking seriously about no longer using Google to store information. I knew that I was exposing information to Google employees and their analysis tools, but really never thought they'd do something like automatically opting me into social network information sharing. I'm really appalled.

#53 ::: Kevin Marks ::: (view all by) ::: February 13, 2010, 03:33 AM:

janetl @51 The best way to complain to Google is to write a lucid emotional rant like 'Harriet' did, post it on your blog, tell your friends, and wait for the magic that is the internet hive mind to pick it up and propagate it for you.

#54 ::: abi ::: (view all by) ::: February 13, 2010, 03:37 AM:

Anonymous Coward @37:

You forgot to change your email address, so your comment was hooked up to your (view all by).

I've frigged it.

I am aware of the irony.

#55 ::: abi ::: (view all by) ::: February 13, 2010, 03:47 AM:

Graydon @17, heresiarch @33:

I did use the term "secret engines of the world", but I don't think secret means unknowable. It means hidden.

#56 ::: Joe McMahon ::: (view all by) ::: February 13, 2010, 03:47 AM:

This is really an interesting bit of synchronicity. I just finished writing an article yesterday about a not-so-recent, but key change in a commonly-used Perl module. A good-idea fix for new programs just happened to break all old programs using it. I argued that if you want to make a change like this, you had better make it really plain that stuff will break, and if you want to be perceived as a Good Person, you will provide a workaround for some period so people can upgrade their code.

I actually still got a "well, you shouldn't upgrade code without testing it carefully". Um.

Plan for what people do, not what they ought to. This apparently is an Outside Context Problem for some people in a big way.
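
For what it's worth, the kind of workaround I mean is cheap to provide. A minimal sketch (in Python rather than Perl, with made-up names, just to show the shape of it) of keeping the old behaviour alive behind a deprecation warning while people upgrade their code:

    import warnings

    def parse_config(source, strict=None):
        """Hypothetical library function whose default behaviour is changing.

        Old behaviour: lenient parsing. New behaviour: strict parsing.
        Rather than silently flipping the default and breaking old callers,
        keep the old default for now and warn anyone still relying on it.
        """
        if strict is None:
            warnings.warn(
                "parse_config() will default to strict=True in the next major "
                "release; pass strict explicitly to keep the old behaviour.",
                DeprecationWarning,
                stacklevel=2,
            )
            strict = False  # old default, preserved during the transition
        # ... the actual parsing would go here ...
        return {"source": source, "strict": strict}

    # Old code keeps working (with a nudge); new code opts in explicitly.
    parse_config("app.cfg")               # warns, behaves as before
    parse_config("app.cfg", strict=True)  # new behaviour, no warning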

#57 ::: little light ::: (view all by) ::: February 13, 2010, 04:13 AM:

Extra points for the Gaiman reference.

#58 ::: abi ::: (view all by) ::: February 13, 2010, 04:24 AM:

(*bows*)

#59 ::: Dave Langford ::: (view all by) ::: February 13, 2010, 04:51 AM:

#2: Micromania was simply a UK adaptation of Charles Platt's US The Whole Truth Home Computer Handbook. At his and Gollancz's request I anglicized the text, added specific British examples of small computer systems, and so on. I'm not sure I deserved a cover credit, but Charles wanted it that way, which was kind of him.

#60 ::: Wesley Osam ::: (view all by) ::: February 13, 2010, 10:03 AM:

I've been reading a blog called "Clients From Hell," a place for anonymous designers to complain about their most trying professional moments. (I think I found the link on Making Light, actually.) Clientcopia is similar.

There are running themes. One is the client who thinks Photoshop is magic--they get requests to "enhance" low-res photos like on CSI, or remove a building so you can see what's behind it, or rotate something in a photo so that it faces in another direction. (I thought stories like this were urban legends, but apparently they really happen.)

Often clients ask for impossible (and usually unethical) control over their visitors' computers. One client wanted their site to open Microsoft Word on their visitors' computers and automatically begin filling out an order form.

More common stories involve clients who think this "web design" thing must be pretty easy and so assume they can get a website for fifty bucks. Or for free, because the designer will get something for their portfolio, and "exposure." Or they ask for impossible deadlines. There are also clients who are upset when their site isn't number one on Google five minutes after it's been uploaded.

What I've learned from following that blog is that it's not only users who haven't grasped how the internet works. A lot of the people who actually have and run websites aren't any more sophisticated. Which may partly explain how we end up with so many user interface failures.

#61 ::: sara ::: (view all by) ::: February 13, 2010, 10:57 AM:

#39: Some years ago I realized that almost every artifact and process humans have invented has some form of user interface¹; until very recently (in historical terms) it was assumed that a) all tools required practice to be good at their use, and b) the user interface was intuitively obvious from the shape and mass of the tool².

You should make an exception for wine bottle openers -- the elaborate mechanical kind. (I don't drink much and so I don't have extensive practice.) I imagine the feeling of non-techie people trying to use unfamiliar websites, especially to buy something or find information that they need, is similar to wrestling with an unfamiliar wine corkscrew.

#62 ::: Lin Daniel ::: (view all by) ::: February 13, 2010, 11:21 AM:

I still remember the aha of object orientation.

I spent months trying to understand object orientation. Months. It should have been a concept I easily understood, but it kept slipping away.

Until one day, someone used words I understood. And I realized I'd been using object orientation all my life, programming or otherwise. I was trying to figure out this new thing, when in fact it was just new terminology describing an old friend.

That particular aHA moment was profound. Not only did I understand this cool new/old tool, but I understood that sometimes it's not me being stupid. It's not them being deliberately obtuse. It's that they can't teach in a way that I can learn it. I can't learn it the way they're teaching, and so must find another teacher. Lessens the frustration just a bit. I'm still frustrated at learning difficulties, but the frustration of either side "being stupid" is removed.

And now, I'll read the comments.

#63 ::: P J Evans ::: (view all by) ::: February 13, 2010, 11:30 AM:

51
I just got some satisfaction doing that. I went to the Buzz help section, looked up the information on disabling buzz, discovered it isn't nearly as helpful as it should be ('block all followers before turning Buzz off': that's nice, how do I do that without the button, or am I supposed to assume that not having a block option means I don't have followers?), then wrote my rant from there. I included 'not telling users about an opt-out before opting them in' as part of the rant.

#64 ::: eric ::: (view all by) ::: February 13, 2010, 12:15 PM:

mcz: yes and no. I suspect it's saved on my Powerbook from 1995 that hasn't been booted in a good many years. A quick google for it brings up a lot of other noise, but I'd suspect that it would be possible to find in the old usenet archives.

#65 ::: heresiarch ::: (view all by) ::: February 13, 2010, 12:31 PM:

abi @ 55: "I did use the term "secret engines of the world", but I don't think secret means unknowable. It means hidden."

So you did. Um. Yet another reason I'm not, nor shall ever be, an editor.*

*I even used Find, and still missed it.**

**It's also particularly fitting that I did that on this thread.

#66 ::: Randolph ::: (view all by) ::: February 13, 2010, 12:54 PM:

A large number of people don't deal with complex abstractions. For these people, what they understand of the internet is what they see on the screen. (Consider the large number of people who don't read books.) These people are not "stupid," they do not deserve abuse, and they do not deserve to be taken advantage of. (I'm looking at your IP policy, Facebook. Which someone just asked me to find--it takes some skill to locate, and I'd guess that at least 70% of Facebook users don't have that skill.)

As to Buzz, it doesn't do anything unless you post to it--your followers have nothing to follow until you do. Far as I know, Google doesn't automatically create a public profile for you, though I might be wrong about this, or it might have changed. If you take the "Buzz" link off your Gmail page, and don't fiddle with Buzz, I think you will have no problems. Google seems to be behaving not-so-evil in how they operate Buzz (I call them the not-so-evil empire), and I think they deserve some cred for it. I'll take their attitude to Facebook's any day.

#67 ::: J Greely ::: (view all by) ::: February 13, 2010, 01:06 PM:

@56 Joe McMahon

Joe, which one? A quick google didn't turn up your article, and I live in Perl. I'm still recovering from the "too late for -CADS" breakage, and I don't look forward to having other scripts break because of a module upgrade.

-j

#68 ::: Serge ::: (view all by) ::: February 13, 2010, 01:23 PM:

Could someone provide me with a link to the XKCD cartoon about the simple upgrade that, by the end, has people in the ocean with sharks circling? Some of my co-workers are involved in a project like that.

#69 ::: TexAnne ::: (view all by) ::: February 13, 2010, 01:28 PM:

Randolph, 66: I disagree. Opt-out services launched with no warning and no clear way to opt out are entirely evil.

#70 ::: TexAnne ::: (view all by) ::: February 13, 2010, 01:35 PM:

I just had a horrible thought. I don't have a Google anything, but I email lots of people who do--using both my fannish and realname accounts. Am I worrying about nothing?

#71 ::: Constance ::: (view all by) ::: February 13, 2010, 01:39 PM:

#24 P J Evans

Thank you for that info! I had thought that when Buzz showed up in my g-mail, when I refused to click anything to activate (not even to explore I did not wish to, no I did not!), that was enough.

Love, C.

#72 ::: J Greely ::: (view all by) ::: February 13, 2010, 01:54 PM:

@68 Serge: XKCD cartoon about the simple upgrade that, by the end, has people in the ocean with sharks circling?

That would be XKCD 349.

-j

#73 ::: Graydon ::: (view all by) ::: February 13, 2010, 01:56 PM:

Abi @55 -

Granting the intention of hidden, I'm still not sure it's helpful, though I am less sure I can usefully explain why.

There are things that are outright invisible to direct perception; x-rays, electrons, and the internal workings of VLSI hardware all fall into that category. This isn't anything like hidden, though; they're generally entirely obvious if you can build a detector for them.

There are things that are abstracted; almost anything complicated develops some kind of abstraction. Money is an abstraction for trade, code libraries abstract horrible details, whether horrible hardware details or horrible algorithmic details, standard social phrases abstract complex interaction concepts, and so on.

Most things that are abstracted are not actually hidden, in the sense of concealed; they may be hidden in the sense of "not obvious", but that's not the same thing.

("hidden" and "not obvious" are not really distinct states, I admit; roosting owls want to be hidden, and are sometimes only not obvious, and how not obvious they are is a function of who is looking, and by what means; they're poorly camouflaged versus infra-red viewers, for instance.)

The idea of asking "how does that work?" is, to my mind, only really persistent in an environment where people think of the question as an issue of how to look and where to look, rather than as a secret thing you might not be able to find without help. (Help and education and the never to be sufficiently blessed accurate how-to document are excellent things, but we don't have them if there isn't a widespread "I wonder how I get the cover off" response.)

So I think I'm arguing for "someone understands this, so you can too" as the general case, rather than any form of "that's obscure and complicated". Obscure and complicated are way too contextual to make a good general case.

#74 ::: Constance ::: (view all by) ::: February 13, 2010, 02:12 PM:

Abi -- Thank you so much for providing this thread, and to all the knowledgeable people who contributed their information re teh buzz.

You have helped me a great deal, and now I can help some others.

Checking via the help the contributors here provided I can see that I never turned on anything. My profile does have my real name and nothing else, so I changed that. My products ain't much -- all that stuff that google and blogger try to shove on you, and I tediously, studiously ignore. It does show which gadgets I've added to my blogger blog, which is pretty innocuous.

I've never put my photos on my g-mail account (which is set up mostly to deal with comments from public blogs that I'm a part of like DeepGenre, in order that I don't have to click on their sites and scroll down, blahblahblah -- this also provides the info if spam shows up on ancient entries so I can get rid of it).

You all have performed good deeds this day.

Love, C.

#75 ::: Caroline ::: (view all by) ::: February 13, 2010, 02:30 PM:

A partial answer to my question @ 26: Evidently when you first post anything to Buzz, you're asked to create a Google profile if you don't already have one.

Deleting your profile is only necessary if you've already been using Buzz and have decided to opt out after using it.

Since I never posted anything to Buzz, I think blocking followers and unfollowing people is enough. I'm still not sure whether I can block new people from following me. I don't guess it will do much if people do follow me, since I'm not posting anything to Buzz, and without a Google Profile, I don't think my list of followers will be made public. Still a bit grumpy though. And I still don't trust that Google won't somehow turn it back on accidentally, and I'll end up publishing updates from somewhere without being aware of it, because using Gmail and using Buzz are evidently supposed to be the same thing.

The thing is, I actually kind of dig the idea of a social network aggregator. It's likely I would have used something like Buzz, if I had been presented with information about it, and then asked if I wanted to sign up. But logging into my Gmail and being told "Hey, you're already participating in this social network you've never heard of with a bunch of people you weren't asked about, and we've already gone ahead and made a bunch of stuff publicly viewable! Surprise!" just made me go "OH HELL NO" and immediately set to work on disabling it. Way to pre-emptively lose my trust for your social network before I've even touched the thing.

#76 ::: Randolph ::: (view all by) ::: February 13, 2010, 03:14 PM:

TexAnne, #69: "Opt-out services launched with no warning and no clear way to opt out are entirely evil."

Mmmmm. It's pretty soft cell. (Typo too good to fix.) They do strongly encourage its use, but there's a clear choice presented. When I started Gmail after they turned it on--a great big "press here to use Buzz" button, and a much smaller "nah" link (which doesn't look like a button.) When I clicked the "nah" link, it took me to my Gmail page, and Buzz retreated to a small button, which the "turn off Buzz" link turned off, along with the advert. I'd guess--and you can bet Google has done user testing to this point--this approach will get a lot of people to sign up. I'd say it is taking advantage of people--advertising works, after all, and who knows better than Google? But compared to Facebook, it's positively saintly, especially once you look at the IP policies.

Google:

Google does not claim any ownership in any of the content, including any text, data, information, images, photographs, music, sound, video, or other material, that you upload, transmit or store in your Gmail account. We will not use any of your content for any purpose except to provide you with the Service.

Facebook:

For content that is covered by intellectual property rights, like photos and videos ("IP content"), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook ("IP License"). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Which of those two service providers would you rather work with?

#77 ::: Serge ::: (view all by) ::: February 13, 2010, 04:02 PM:

J Greely @ 72... Thanks! I know some people who'll appreciate this cartoon.

#78 ::: janetl ::: (view all by) ::: February 13, 2010, 04:08 PM:

Randolph #76:
Comparing Facebook and Google is illuminating. I keep a vigilant eye on my FB privacy settings because I know I'm sharing info with people there. That's the point of FB. I never, ever thought I was sharing info with people when I used Google unless I actively sent them an email or shared a Google doc.

I wholeheartedly agree with you that Facebook's intellectual property policies are ridiculous, which is why I don't put anything in FB except links to photos stored elsewhere, and my status updates. I am confident that my FB status IP is of no great value either commercially or as artistic expression.

I "use" gmail, but almost entirely through the Mac Mail client. I just happened to open Gmail in a browser sometime after Buzz was turned on. I have a vague memory of being asked about Buzz, and saying No, and ignoring the little Buzz button on the left. When I read something about Buzz later, and clicked that button, I saw a list of people following me, people I was "following", and some posts by the people I had mysteriously and without my volition started following.

I found the disable Buzz button (which was deliberately made very hard to find). Later, I thought to look at Reader. Months ago, I set up some subscriptions in Google Reader, decided I hated the UI, and forgot about it. The subscriptions were still there, only now everyone that Buzz decided should follow me showed up as followers in Reader. It had never occurred to me to make my Reader subscriptions private when I was playing with it, and I frankly don't see why anyone I've ever emailed should see what I'm reading or see the names of other people I've emailed. I've now canceled all my Reader subscriptions.

#79 ::: Debbie ::: (view all by) ::: February 13, 2010, 05:15 PM:

Google is not the only one busy aggregating. I recently got an email from Classmates.com:

--"To make it easier for old friends—including you!—to reunite, we're coming up with ways to let more people use Classmates from around the Internet without having to visit Classmates.com.

--"To do that, we're about to start making your public Classmates content available to people using a variety of sites and devices, including Facebook and the iPhone. This content can include your name, photos, community affiliations, and more.

--"Of course, we care about your privacy as much as we do your ability to catch up with your past. We're updating our privacy policy to make these new features possible, and you're able to opt out...."

And then they go on to describe how to do that. I guess one of the morals of the story is, ignore emails at your peril.

I definitely want control over how my (admittedly boring, but still) internet personae are mixed and matched.

#80 ::: Earl Cooley III ::: (view all by) ::: February 13, 2010, 06:08 PM:

Randolph #76: Which of those two service providers would you rather work with?

I'd go with whichever CEO recants their horrid anti-privacy heresies first. If they both trip over each other in their eagerness to recant, a tiebreaker is to be resolved by the largest apology cash donation to the EFF (starting at USD$10 million).

#81 ::: abi ::: (view all by) ::: February 13, 2010, 06:23 PM:

Graydon @73:

Unpacking this in my own head, because I've not tried to make it explicit before:

Like I said, the secret engines of the world are secret because they're hidden from view. It's related to your definition of "abstracted", but rather turned around.

What they are* is mental models, such as the ones that allow us to transform reality into abstractions. Going back to the one I used in the original post, it's the concept of object orientation.

This is about understanding rather than information. You can talk about them, but mastery is the product of a personal change, that moment of leaving one universe and coming into another. We have one standard process for teaching people how to do this (university†), but plenty of people figure out the trick of it on their own‡.

That aha is the moment another secret engine reveals itself to you.

See also: For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known

-----
* At least in part. Another element is belief in possibilities, though not in the gung-ho sense. It's simply that since what is unthinkable is impossible, one of the driving forces of the world is imagination. That's also private, mysterious, and universal.
† This is why I have both a BA in Latin and an intellectual job.
‡ PNH is an excellent example of this.

#82 ::: P J Evans ::: (view all by) ::: February 13, 2010, 06:29 PM:

79
Classmates at least let its 'members' know about it first. (I reset my privacy settings to 'no' a month or so back. They're not getting money from me, anyway; I shut that off last year.)

#83 ::: Serge ::: (view all by) ::: February 13, 2010, 06:35 PM:

abi @ 81... This is why I have both a BA in Latin and an intellectual job

Intellectual?
First time I've heard that said about computer programming.

#84 ::: abi ::: (view all by) ::: February 13, 2010, 06:38 PM:

Serge @83:
First time I've heard that said about computer programming.

I couldn't possibly comment. I'm a tester; commenting on the intellectual requirements of computer programmers would be unwise in the extreme.

#85 ::: Serge ::: (view all by) ::: February 13, 2010, 07:34 PM:

abi @ 84...

"I'm not sure, but I think we've been insulted."
"I'm sure."

#86 ::: HelenS ::: (view all by) ::: February 13, 2010, 07:59 PM:

"can you imagine how many people use Facebook on a library computer and don't log out because they cannot figure out how to?"

Do you mean they don't log off the library computer, or they don't log off Facebook?

#87 ::: P J Evans ::: (view all by) ::: February 13, 2010, 08:07 PM:

86
Probably Facebook. The computers I've seen in libraries don't have logins.

#88 ::: Edgar lo Siento ::: (view all by) ::: February 13, 2010, 08:18 PM:

I've been thinking about the Buzz debacle constantly for two days. I've decided that Google is too much of a threat to my privacy. So I've stopped using Google Reader as well. I've switched to Thunderbird, which is where I was accessing my gmail from to begin with. I'm not crazy about the interface, but it allows me to star items and email them. You can find OPML export as an option buried in Google Reader settings. That part is easy.

However it's hard to get starred items out of Google Reader. You have to go through some steps here:
http://aleksandaraleksandar.blogspot.com/2007/07/export-google-reader-starred-items-into.html

and you end up with an xml file that is kinda human readable, but doesn't import into Thunderbird.
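
For what it's worth, here's a rough sketch of pulling the titles and links back out of that file with Python -- assuming the export is Atom-flavored XML with entry/title/link elements, which is roughly what mine looked like but may not match yours, and the filename is just a placeholder:

    # Rough sketch: list the title and link of each starred item.
    # Assumption: the export is an Atom feed; adjust the namespace and
    # element names if your file differs.
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def list_starred(path):
        tree = ET.parse(path)
        for entry in tree.getroot().iter(ATOM + "entry"):
            title = entry.findtext(ATOM + "title", default="(untitled)")
            link = entry.find(ATOM + "link")
            href = link.get("href", "") if link is not None else ""
            print(title, "->", href)

    list_starred("starred-items.xml")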

Note that you have to manually unstar each and every one of those starred items to delete them, even if you delete all your original subscriptions. Frackin' data roach motel. "We're not evil" my big left toe!

#89 ::: Mary Aileen ::: (view all by) ::: February 13, 2010, 08:21 PM:

HelenS (86): Probably both.

P J Evans (87): At the library where I work, the public computers have logins. A surprising number of people neglect to log out. (Then we don't know if they've really left, or just gone to the bathroom or something. This can be a problem at busy times.)

#90 ::: P J Evans ::: (view all by) ::: February 13, 2010, 08:30 PM:

89
The one library where I know much about the computers, they put time limits on them because they have so many people coming in to use them (reservations requested). Some of their databases require accounts and passwords, but the computers themselves are open. (The catalogs are on different machines, without Internet access.)

#91 ::: xeger ::: (view all by) ::: February 13, 2010, 08:50 PM:

Randolph @ 76 ...
Mmmmm. It's pretty soft cell. (Typo too good to fix.) They do strongly encourage its use, but there's a clear choice presented. When I started Gmail after they turned it on--a great big "press here to use Buzz" button, and a much smaller "nah" link (which doesn't look like a button.) When I clicked the "nah" link, it took me to my Gmail page, and Buzz retreated to a small button, which the "turn off Buzz" link turned off, along with the advert. I'd guess--and you can bet Google has done user testing to this point--this approach will get a lot of people to sign up.

That's interesting. I didn't get any "nah" link at all -- just Buzz having added itself to my list of services, willy-nilly, up yours, so much for opt-in services.

Out of curiosity, did you already have a "public profile"?

#92 ::: Paula Helm Murray ::: (view all by) ::: February 13, 2010, 09:07 PM:

Thanks everyone. I found my log-in id, and there was nothing to my profile or any google badness.

#93 ::: Graydon ::: (view all by) ::: February 13, 2010, 10:21 PM:

abi @81 --

I take your point.

I'm not sure I've ever had that experience, though, so I should probably not try to say anything else about it.

serge @83 --

Real computing science is a branch of mathematics; not everybody walloping code has to worry about that, but "not everybody" isn't "nobody". So there's a good deal of thinking going on, I would have said.

#94 ::: Randolph ::: (view all by) ::: February 13, 2010, 10:22 PM:

xeger, #91: "did you already have a 'public profile?'"

No, I didn't. Did you? I wonder if that made a difference. Did you get the splash screen?

#95 ::: Andrea Phillips ::: (view all by) ::: February 14, 2010, 12:05 AM:

For another data point: When I first logged into Gmail post-Buzz launch, I also got the splash screen with a big "Sign me up!" link and a little "Nah" link. I chose no.

Nonetheless, when I got into my inbox, there was a Buzz link just under the inbox, and clicking on that showed that I was following/followed by a few dozen people already, and that it had linked itself up with my accounts on Google Reader and (IIRC) YouTube. I unlinked from everything and unfollowed a few people, but haven't disabled Buzz entirely. I feel like it might be better to keep it where I can see what it's up to.

I'm also wondering if I should explore another email option. Gmail has in its favor the conversations metaphor, the ability to handle large attachments, and my huge number of business cards printed with my Gmail address that I'm about to hand out at SXSW. But Google's definitely burned through all of my goodwill and scattered its ashes to the wind, too.

#96 ::: Mikael Vejdemo Johansson ::: (view all by) ::: February 14, 2010, 12:37 AM:

Through this whole Facebook Login brouhaha, I've found myself regularly considering how this is kinda tricky for me to phrase in a way that makes the issue transparent.

The core of my issues lies in the diagnosis of people with limited mental models for computation as people who fundamentally have better things to do with their lives than acquiring a mental model for this particular set of metaphors.

And while I've had an intellectual understanding of it from the start, I've been failing to grasp it at a visceral level, since I, like many of my friends, and just like the cast of The Big Bang Theory, view the acquisition of new knowledge and new mental models as something inherently enjoyable.

The point where I suddenly reached an understanding of the wish not to have to internalize a new concept just to use a specific tech came today, when I realized that the check I had sent in for my CA back taxes the other day actually overdrafted my bank account. So I called up my bank to see whether I could SOMEHOW divert the resulting overdraft onto my credit card.

And the interaction with the customer rep led me through several instances of Things I Don't Care About: the type of my account, my account number (I know my _member number_, but not my account numbers), my phone banking password (I call them once a year, roughly, and they still want me to remember a token for them...), etc.

Suddenly I know what it feels like to interact with something completely opaque that I don't care to learn.

#97 ::: Pendrift ::: (view all by) ::: February 14, 2010, 06:13 AM:

Google has backtracked on Buzz, and you can now disable the service entirely. They wasted a lot of goodwill in the meantime, but better late than never.

Mikael @96:
Suddenly I know what it feels like to interact with something completely opaque that I don't care to learn.

Bank accounts are one. Credit cards are another. And mortgages. And insurance policies, and any type of contract. I read the fine print because it was a habit learned from my dad, a lawyer, but this is not the case for most people I know.

I'm the go-to person in my family for tech problems* and in my choir for computer issues, and the vast majority of problems can be solved by RTFM, but RTFM** is an alien concept.

I remember feeling this way about high school geometry; no doubt I could've figured out how to do that theorem-proving exercise if I worked at it, but all I wanted to say when asked to prove that the angles of a rectangle were congruent was "I don't care how it works, just look at the damn things!"

For some reason, this discussion reminds me of a quote from Alain de Botton about how we no longer know where things come from. It's just as applicable to the immaterial as it is to the material.


*despite the fact that I'm 7,000 miles away. Tech support questions include "how do I turn the oven on?" and "how do I make a friends list on Facebook?"
** or FAQ, or Help section

#98 ::: Pendrift ::: (view all by) ::: February 14, 2010, 06:17 AM:

Earlier comment held for review,* so in the meantime, here's the link from the Gmail blog explaining how to disable Buzz entirely.

*emwltk what the magic words were

#99 ::: abi ::: (view all by) ::: February 14, 2010, 06:51 AM:

Pendrift @98:

It was "Google <a href"

We get spammers who use an innocuous but irrelevant link to Google in order to test the sophistication of our spam filtration. If we let those posts through, they serve as signposts for NO HUMAN READERS HERE, SPAM ON!

This comment will now be held for moderation.

#100 ::: SeanH ::: (view all by) ::: February 14, 2010, 08:14 AM:

Google increasingly, to me, seems to have a sort of mad-scientist ethos: the idea that science really will be better/faster/more creative without ethical constraints. Every Google service seems to be permanently in the beta stage*, and who knows what they'll do with what you give them. This has some benefits - they are a very creative company - but it does make them a dangerous place to store data.

*One of the Labs features in Google Mail puts the little "beta" notice back on the Google Mail logo. There's actually some really great stuff in the Labs features - my favourites are the embedded Translate option for emails in another language (I occasionally use companies based in other countries), and the feature which spots if you have mentioned an attachment in your e-mail but haven't actually attached any files (a mistake I constantly make) and warns you before you send.

#101 ::: Mary Aileen ::: (view all by) ::: February 14, 2010, 09:41 AM:

P J Evans (90): We had time-limits, which were largely unenforceable, and caused huge arguments. Now we have a time- (and print-) management system that enforces the limits for us. People log in with their library card number, or we have "guest passes" for people from out of town. The catalogs are on separate computers that don't have word processing or general Internet access.

#102 ::: Sundre ::: (view all by) ::: February 14, 2010, 09:45 AM:

Evan @20
This is painfully common. I work for an internet retailer that has no storefronts. We get daily calls from people who look up the product they want, click on the first link, and call us because we must be the manufacturer. We patiently explain that we can't make their local stores carry the desired object, and assure them that we'd be happy to send it by mail, but their frustration is immeasurable.

My favorite customer query: "Aren't you the internet?"

#103 ::: Edgar lo Siento ::: (view all by) ::: February 14, 2010, 10:10 AM:

#102 ::: Sundre,
An internet retailer with no storefronts? How does that work?

#104 ::: Lin Daniel ::: (view all by) ::: February 14, 2010, 10:29 AM:

#102
My favorite customer query: "Aren't you the internet?"

Which only reinforces my choice/decision not to work in customer service. (I'm the reason my last day job trained their customer service people to handle questions about the product I created; I was too honest and forthright.)

#105 ::: joann ::: (view all by) ::: February 14, 2010, 12:15 PM:

Pendrift #98:

The Gmail blog you're pointing us to says the following: Third, we're adding a Buzz tab to Gmail Settings. From there, you'll be able to hide Buzz from Gmail or disable it completely.

I've just personally observed that "we're adding" does not, as of this writing, equal "we've added".

So it's not fixed yet.

#106 ::: David Harmon ::: (view all by) ::: February 14, 2010, 01:05 PM:

Mikael Vejdemo Johansson #96, Pendrift #97:

Indeed, and to take it one step further: Those "mental maps" and models include such things as "the economy", "the legal system", "national defense", "politics" (and recursively, its subtopics), and so on.

#107 ::: C. Wingate ::: (view all by) ::: February 14, 2010, 01:50 PM:

re various: It was logging out of Facebook I meant, though I suppose that plenty of people aren't solid on the notion that they need to log out of anything when they leave.

#108 ::: janetl ::: (view all by) ::: February 14, 2010, 02:42 PM:

Came across a "give us feedback on Buzz" URL. I have submitted some. It was not warm and fuzzy.

I certainly didn't find a page like this a few days ago, but this URL was listed by a Google employee in a reply to a "question" on the forum. The Contacts and Buzz forum has been full of so-called Questions and Answers that consist entirely of complaints about Google's roll-out of Buzz, and I'm sure they don't want that sort of thing in there along with people who are actually choosing to use Buzz and asking questions about it.

#109 ::: Neil in Chicago ::: (view all by) ::: February 14, 2010, 03:55 PM:

No, not entirely.
Good user interface design can be done systematically and reproducibly, but not, obviously, if it's ignored. (A "bible", for those interested, is Don't Make Me Think.)
Facebook continues to leap from one bad design to another, unrelated bad design at random, too-small intervals.

#110 ::: Caroline ::: (view all by) ::: February 14, 2010, 04:24 PM:

They do strongly encourage its use, but there's a clear choice presented. When I started Gmail after they turned it on--a great big "press here to use Buzz" button, and a much smaller "nah" link (which doesn't look like a button.) When I clicked the "nah" link, it took me to my Gmail page, and Buzz retreated to a small button, which the "turn off Buzz" link turned off, along with the advert.

But "turn off Buzz" only hides Buzz. It doesn't disable it. You keep following the people you've been automatically set up to follow, and other people keep following you.

Whether you click the great big "press here to use Buzz" button or the "Nah" link, you're still using Buzz. That doesn't constitute "presenting a clear choice" to me.

#111 ::: P J Evans ::: (view all by) ::: February 14, 2010, 04:38 PM:

110
You have to turn off the following before you can make Buzz go away. (Which is bad design, because you can have people following you that you didn't even know about.)

I bet they're getting a lot of flak about this.

#112 ::: HelenS ::: (view all by) ::: February 14, 2010, 04:47 PM:

I have a user account even on my home computer (having kids and all), so the concept of logging out of Facebook was actually something I had to think about myself. I don't think I ever have.

#113 ::: Kevin Riggle ::: (view all by) ::: February 14, 2010, 07:32 PM:

Edgar lo Siento @88: You can find OPML export as an option buried in Google Reader settings. That part is easy.

I've been meaning to transition to a new Google account for a while now, and that information led me to go searching for the export options on all the other Google services' pages. It looks like Google Voice is the only one I use that's impractical to port at the moment. The whole thing took me about an hour, with a dozen or so services involved -- much less frustrating than I expected. Thank you for pointing that out.

#114 ::: Earl Cooley III ::: (view all by) ::: February 14, 2010, 07:33 PM:

P J Evans #111: you can have people following you that you didn't even know about.

That sounds like a prime marketing point for Buzz as a LEO surveillance application. "Keep potential lawbreakers and parole violators on a perfect electronic leash!"

#115 ::: janetl ::: (view all by) ::: February 14, 2010, 07:39 PM:

I wasn't just creeped-out that people I hadn't approved could follow me on Buzz (and Reader), but that I was now following people automatically. To them, it would look like I had actively chosen to do that, and could send some pretty odd messages.

#116 ::: Michael Roberts ::: (view all by) ::: February 14, 2010, 07:39 PM:

Entirely ignoring any subsequent discussion, I'd like to add two data points to Abi's original post: first, my wife the theoretical physicist, like all those Facebookers, also enters URLs into search engines to go to the sites they name. It irritates the hell out of me, but the concept of a URL is just so orthogonal to her experience that this system works best for her. (I have managed to get her to bookmark the pages she needs most frequently, and while she doesn't quite trust this system, she does admit it's quicker.)

Second, over twenty years after taking a Scheme class from Kent Dybvig, this week I finally clicked on what closures really are and why they're so cool. (Courtesy of Perl.) Isn't that a freaky story?

#117 ::: Lee ::: (view all by) ::: February 15, 2010, 02:43 AM:

Nix, #10: that Facebook radically 'changing' without warning is something intentionally done by someone else, for reasons beyond their comprehension

And they'd have damn good reason to think that, because it's something Facebook regularly does. I've been on Facebook for rather less than a year, and I've lost count of the number of interface redesigns they've gone thru in that period. Some of them are friendlier than others, but the one consistent hallmark is that they happen without warning -- you sign on one day and your feed page, or your wall, looks totally different; worse yet, the way you navigate from one part of your account to another doesn't work the same way any more. I view the entire Facebook programming staff with complete and utter contempt. They don't know how to communicate with even computer-savvy users, they don't have a clue about usability, and they absolutely fail at the programmer's Prime Directive: "IF IT AIN'T BROKE, DON'T FIX IT!"

I do not use Facebook as my only online social network, nor even as my primary one. It has its uses, and when they fuck things up yet again over there, I just shrug and wait it out. But I also feel sorry for the poor folks who have no other online communities, who depend on Facebook for the kind of community that I get from here and from LiveJournal (and a few other places). I am privileged, I have options. They don't.

Tatterbots, #16: Google has forgotten the Prime Directive of Online Communication: "NEVER, EVER MAKE OPT-OUT THE DEFAULT!" You're absolutely right about signing you up for something without your knowledge or consent. What were they thinking?

P J Evans, #32: Okay, I went and looked, and apparently (1) I have never created a Google profile, and (2) this seems to have made me immune to Buzz. There are times when being a curmudgeonly Old Phart late adopter comes in very handy.

Randolph, #76: That would be why I have never posted a picture or video to Facebook; that particular brouhaha blew up right after I joined, and before I had gotten around to exploring such things. I do occasionally post links to my Flickr photos, but only links -- nothing is hosted on any Facebook server. I also don't put their handy-dandy little "Share on Facebook!" button on anything belonging to me, because they grab for ownership when you do that too.

Debbie, #79: And AGAIN with the opt-out default! What is WITH these people?!! I have a Classmates Gold account, which (again) I haven't used for much; perhaps I should just cancel it and tell them why.

Pendrift, #97: I've been asked about RTFM things a few times, most recently about Facebook. My usual response is to look up the FAQ link, send it back, and say, "I'm not an expert, I just know how to use the help files. If you do this and it still doesn't work, check back with me again." I seem to have a reasonably competent group of friends, because usually that does it.

#118 ::: Randolph ::: (view all by) ::: February 15, 2010, 06:37 AM:

Returning to the original discussion, it strikes me that:
1. Many people don't "get" medicine.
2. Many people don't "get" US tax law.
and
3. Most people (it's been measured, it's at least 90%) don't "get" US politics.

This is not the world of the Enlightenment!

#119 ::: TexAnne ::: (view all by) ::: February 15, 2010, 08:00 AM:

Randolph, you can't be serious. The Enlightenment wasn't the world of the Enlightenment--back then 90% of people didn't even know how to read.

#120 ::: Randolph ::: (view all by) ::: February 15, 2010, 08:46 AM:

TexAnne, #119: sure. Still, the Enlightenment philosophers, I think, vastly overestimated individual human understanding. As groups and organizations, we know a great deal. As individuals, not so much.

(An army of libertarians with torches is even now headed for my apartment.)

#121 ::: abi ::: (view all by) ::: February 15, 2010, 08:51 AM:

Randolph @120:

An army militia of libertarians with torches is even now headed for my apartment.

FTFY

#122 ::: John Stanning ::: (view all by) ::: February 15, 2010, 10:13 AM:

Back to the OP about understanding technology:

Bruce Schneier posted on his blog last month about an article by Hovik Melikyan called The Era of Black Boxes.  Schneier was interested in a digression about hacking old-style telephones, but Hovik’s original point was about not knowing, or needing to know, how things work.

A commenter remarked

I think Hovik’s general point is about “abstraction”.  I don’t need to know how a device works, if I know how to use it and it does what I want.  The landline telephone is a perfect example.  I pick it up and dial a number:  it works (OK, not 100%, but 99.9999%).  If the number is valid, the system connects me to the phone at the other end, and rings or gives a busy tone.  I, the user, don’t need to know, or care, what happens between my phone and the other person’s phone. Most people don’t really have any idea:  we may understand in principle, as we understand in principle how a TV works, but the real application is mind-boggling.
This is a fact of life, ever since we started to trade with far-away places, and ever since there began to be more devices and technologies than any one person can understand.  When spices from what were then called the Spice Islands (the Molucca or Maluku islands in modern Indonesia) began to be sold in Europe, only a very few people understood where the spices came from or how they were produced;  but that didn’t matter – everyone just ate them and liked them.  Today, with more people and more trade, that’s greatly extended:  we use gasoline, for example, without needing to know or care where it comes from or the detail of how it’s produced, transported and refined.  Some people don’t even know that milk comes from a cow.
Similarly, today I can write a program using (say) a Windows API to perform a function.  I don’t need to know how it performs that function, and I don’t care, so long as it does it according to specification.

Like Graydon, I tripped over Abi’s phrase “the secret engines of the world” because to me “secret” means purposely hidden, whereas I think Abi is talking about things that are not secret, nor even deliberately hidden – a little research would tell us all about them – but in our everyday lives we just don’t want or need to know about them.  Yes, many people drive cars without knowing how they work.  Why not?

The business of Googling for URLs arises simply because the Internet, and browsers, and the URL (or URI), were never designed for ‘general’ users.  They were designed by geeks, for geeks.  That’s why they’ve never been easy to use (or secure, but that’s a whole other discussion).  If we want user-friendly, we really need redesign from the ground up.

A lot of us here are, or have been, programmers and may flatter ourselves that we understand computers.  No, we don’t.  Back in the 1970’s I was programming assembly language on paper-tape-and-printout computers:  at that level, the amount of computing that takes place on a modern PC just to move the mouse across the screen is awesome, let alone to do any real work.  And even back in those days, I understood the hardware only at conceptual level, and I certainly couldn’t fix it – if it didn’t work, I called the engineer.

#123 ::: Debbie ::: (view all by) ::: February 15, 2010, 10:30 AM:

Interestingly, tools such as Google Analytics are illegal in Germany, for the moment anyway, although there is some remaining uncertainty. AFAIK, this hasn't been enforced, but I will be interested to see the results of any test cases.

(Lee @117 - Classmates has never gotten a dime of my money, and they certainly aren't now. Back ten years or so ago, they were nearly the only game in town, and an interesting way for me as an expat to find out about people I used to know. These days? Not so much. Which is exactly why, I suspect, they're scrambling to jump on Facebook's bandwagon.)

#124 ::: Erik Nelson ::: (view all by) ::: February 15, 2010, 11:24 AM:

There should be a version of Google Street View that generates terrain randomly and rolls for wandering monsters.

#125 ::: Caroline ::: (view all by) ::: February 15, 2010, 11:33 AM:

abi @ 121: For some reason I read the italicized version as "An army militia of librarians is even now headed for my apartment," which is a different thing altogether.

John Stanning @ 122, yes, yes, yes. Keith and I discussed this over dinner the day this post went up, as a distraction from ranting about Buzz. Lots of people regard URLs as an arcane code they have to just remember to enter in a particular box, in order to make the computer show them their email or Facebook or whatever. They don't have a mental model of the URL as an "address" or "phone number," and frankly browser design doesn't help instill this mental model. It's not an easy problem to solve because you can't go back and re-engineer the way the whole Internet works. But I've been having fun brainstorming about how I might solve it from a UI design perspective, even though I haven't come up with any ideas that don't sound stupid to me.

I mean, right now my browser is showing "http://nielsenhayden.com/makinglight/archives/012186.html#400280." It's a string of gibberish, really. Yes, I (sort of) know what each piece means. But it's nowhere near obvious, and you can see why many people would see it as just mysterious computer language, meaningless to mere humans.

And yet how else do you do it? How do you let people know they're on the right page? (The mysterious computer language bit makes phishing easy -- you won't have any reason to know that bankname.scammer.com is different from word.bankname.com, or scammer.com/bankname is different from bankname.com/word. The order of words and symbols is a meaningless code to you.) How do you allow direct access and linking to unique pages if you don't use those codes?
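
To make the label-order point concrete, here's a toy Python sketch. (Caveat: taking the last two labels only approximates the real "registrable domain" -- doing it properly needs the Public Suffix List, think co.uk -- and the hostnames are just the made-up examples above.)

    # Hostnames that look alike to a human sort very differently once you
    # read the labels right to left.  "Last two labels" is only a rough
    # stand-in for the registrable domain.
    from urllib.parse import urlsplit

    def naive_owner(url):
        host = urlsplit(url).hostname or ""
        return ".".join(host.split(".")[-2:])

    for url in ("https://bankname.scammer.com/login",
                "https://word.bankname.com/login",
                "https://scammer.com/bankname",
                "https://bankname.com/word"):
        print(url, "->", naive_owner(url))
    # The first and third belong to scammer.com; the other two to bankname.com.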

A hard problem, but fun to think about.

#126 ::: P J Evans ::: (view all by) ::: February 15, 2010, 11:39 AM:

126
Firefox puts it on the tab: 'Making Light: The lily knows not why...'

I have to admit, there are some sites I prefer hitting the search engine for, rather than bookmark them.

#127 ::: Mary Aileen ::: (view all by) ::: February 15, 2010, 11:52 AM:

P J Evans (127): My Firefox tab reads "Making Light: The lily knows...."

#128 ::: P J Evans ::: (view all by) ::: February 15, 2010, 11:55 AM:

I think it depends on how many tabs there are; they get smaller as you put up more. (I usually have only one or two.)

#129 ::: Mary Aileen ::: (view all by) ::: February 15, 2010, 11:57 AM:

This morning, one of our more clueless regular computer users asked if email still worked today even though it's a holiday (Presidents' Day).

We get a lot of people asking us how to log into their email. The answer is usually "First you have to go to Yahoo/Hotmail/Gmail/AOL" (because they're still at the library's homepage, having just opened the browser).

#130 ::: j h woodyatt ::: (view all by) ::: February 15, 2010, 12:48 PM:

Lee @117 asks: " And AGAIN with the opt-out default! What is WITH these people?!!"

You have certainly heard the old chestnut about forgiveness vs. permission, yes?

#131 ::: Mary Aileen ::: (view all by) ::: February 15, 2010, 01:01 PM:

P J Evans (129): I know. I just thought the difference was amusing.

#133 ::: Elliott Mason ::: (view all by) ::: February 15, 2010, 01:19 PM:

Pendrift @97 said: Google has backtracked on Buzz, and you can now disable the service entirely. They wasted a lot of goodwill in the meantime, but better late than never.

Except that if I go where they said I should go, the page I see looks nothing like the screenshot they show in that post. Way to go, Google.

Also, as a person who has spent years of their life being paid to serve as tech support, I have great contempt for sites/institutions that write their help files like man pages. Man pages are intended for people already very fluent with the technology, who just need to be reminded what all the switches do. Help files are for PEOPLE WHO DO NOT YET UNDERSTAND. And if I, a reasonably technical person, cannot read your help page and use it to teach myself how to do something, my grandmother (to pick a not terribly random example) will have no freaking hope in hell.

(also, I just tried to post this with no link text in the A tag; it got held off to the side for that reason. That version can be deleted; this one supersedes it)

#134 ::: Debbie ::: (view all by) ::: February 15, 2010, 01:39 PM:

Elliott Mason @133: Also, as a person who has spent years of their life being paid to serve as tech support, I have great contempt for sites/institutions that write their help files like man pages. Man pages are intended for people already very fluent with the technology, who just need to be reminded what all the switches do. Help files are for PEOPLE WHO DO NOT YET UNDERSTAND. And if I, a reasonably technical person, cannot read your help page and use it to teach myself how to do something, my grandmother (to pick a not terribly random example) will have no freaking hope in hell.

This has always been very close to my heart. In a butterfly moment in the early '90's, I was thisclose to being hired by Incredibly Big Megacorp for a project to make one of their system's UIs user-friendly.* As a psychologist with some interest in and knowledge of human factors (then, anyway), that would have been so cool.

*It was an 18-month position, I discovered I was pregnant, and figured the fair thing to do was tell them. Ah, well.

#135 ::: Erik Nelson ::: (view all by) ::: February 15, 2010, 02:21 PM:

Caroline at 126:
You probably wouldn't like the Google Chrome interface, because it uses the same text field for entering a search query and for entering a URL.

So if people have trouble knowing the difference between these two distinctly different activities, this doesn't help.
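
Roughly speaking, a combined box has to guess which of the two you meant, something like this toy Python sketch (the heuristics are invented for illustration and are nothing like what Chrome actually does):

    # Toy guess at "address or search query?" -- invented rules, purely
    # for illustration.
    def looks_like_url(text):
        text = text.strip()
        if " " in text:            # queries usually contain spaces
            return False
        if "://" in text:          # an explicit scheme settles it
            return True
        host = text.split("/")[0]
        return "." in host         # "nielsenhayden.com/..." yes, "whatever" no

    for s in ("facebook login", "nielsenhayden.com/makinglight",
              "http://example.org", "whatever"):
        print(repr(s), "->", "URL" if looks_like_url(s) else "search")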

#136 ::: Lee ::: (view all by) ::: February 15, 2010, 02:34 PM:

Mary Aileen, #128: And I have enough tabs open that mine only reads "Makin...".

The discussion of people who type URLs into the Google search box has been illuminating for me. I have an extensive bookmark menu, with sub-folders and sub-sub-folders, because I routinely bookmark anything I think I might ever want to find again. This is also partly because I know that using bookmarks is insurance against typos. But sometimes I get lazy and don't want to navigate the bookmarks, so I just type (frex) "Supercuts" into the box, knowing that it'll pop up a link to the corporate site, and that this will probably be the first link on the search page. So I can see other people doing that, but it had never occurred to me that they would use an actual URL that way.

#137 ::: KeithS ::: (view all by) ::: February 15, 2010, 02:56 PM:

Lee @ 117: And AGAIN with the opt-out default! What is WITH these people?!!

A lot of businesses don't like people having to opt in. Makes it harder to do whatever they want with the data they collect about you. See also that booklet of three-point type from your bank or credit card company that tells you that if you want them to keep your personal data to themselves you have to jump through three fire hoops while being chased by a hungry tiger.

Elliott Mason @ 133: Also, as a person who has spent years of their life being paid to serve as tech support, I have great contempt for sites/institutions that write their help files like man pages.

I have never been helped by Google's help. Ever. When it's not useless, it's cryptic or hard to navigate. Unfortunately, a lot of companies seem to think that documentation is a useless money sink, rather than something that helps people use their product.

Erik Nelson @ 135:

Browsers have supported searching from the address bar (Netscape 4, maybe 3) for far longer than they've had a separate search box. Looks like Google is going back to that model. I can't say I'm fond of it, because they're two completely different activities, but other people don't seem to view them that way.

#138 ::: Constance ::: (view all by) ::: February 15, 2010, 03:04 PM:

My tab only says, icon M...

Do I win? :)

I've been in many a university library and found many an e-mail account wide open.

This is much more difficult in our public library system here, since you can't get on the internet from the system's catalog network. You have to sign up for a limited time use slot on the internet enabled stations, which are of very limited number.

However, at this point the majority of the people actually inside the library are using their own laptops rather than looking at books, since all the branches now have a wireless connection to the internet. Every chair at every table is occupied, all the hours the library branch is open.

Love, C.

#139 ::: Pendrift ::: (view all by) ::: February 15, 2010, 03:59 PM:

Elliott Mason @133: Indeed, the rats. I didn't have a Google profile, so I didn't check after turning Buzz off.

#140 ::: David Harmon ::: (view all by) ::: February 15, 2010, 04:15 PM:

My tab says "The lily know...". The lily knows what color lurks in the hearts of men?

Also, when I type something into Firefox's address bar, I get a list taken from recent URLs. While this unfortunately includes the typos, there are a couple of sites where it's pretty useful.

Helping my Mom use her Windows box has given me a solid appreciation for both how bad the interfaces are, and how different people can have very different views of the same screen.

#141 ::: Diatryma ::: (view all by) ::: February 15, 2010, 04:15 PM:

Searching URLs: I used to do this because instead of bookmarking things like my email, Livejournal, and Making Light, I used the drop-down list in the address bar, and that saved each URL in the order it was put in. I used to go through some hoops to be sure that things would stay in the right order, and even now I get a little put out if my two emails switch places. I've gotten bookmarks now, but I don't necessarily put everything in them-- I have different browsing habits according to how I reach something. I'm also not a Livejournal Friendslist keeper at all, preferring to search for each person individually-- again, browsing habits change according to process-- but since Livejournal broke their search box recently, I've been typing usernames into the address bar and letting them autocomplete.

Not everyone uses the process you expect them to.

#142 ::: Nicole J. LeBoeuf-Little ::: (view all by) ::: February 15, 2010, 05:44 PM:

I've gotten into the habit of relying on Firefox's URL blank to act as a Google Lucky Search when what I put in isn't a URL. Then I just type "Making Light" or "Slacktivist" or even "Whatever" to read my favorite blogs. I have no excuse. I understand URLs, I'm a web designer, I use bookmarks, I complained 15 years ago that Macs were for people who didn't understand respectable command prompts. But I'm also lazy and it's quicker to type F6, "Making Light", than it is to type the whole URL or go into my bookmarks.

I am also fond of the purely typographical shortcut, built into both Firefox and IE, that you can type "Boingboing" and then hit CTRL-ENTER to have the "www." and the ".com" added. At least then I'm actually typing URLs, right?

#143 ::: VCarlson ::: (view all by) ::: February 15, 2010, 06:10 PM:

WRT people who don't logout of Facebook, say, I have a topper: When I got downsized, my former employer paid for an outplacement service (for an unreasonably short time, as it took me twice that long to find other work after my previous downsizing, when the economy was better). So, I had access to cubes with internet-connected computers (which, to do the placement outfit justice, I did not lose when my time was up - I just lost some services, which I was able to work around). The thing is, these cubes were general use - I don't see how a thinking person would believe no one used that computer but them, but I repeatedly had the experience of going to LinkedIn and being automatically logged into some VP-level finance guy's profile, because he kept clicking the "remember me" box! I know this because I developed the habit of telling whichever computer I was using to not only delete history but to forget stored passwords - and it kept happening!
I did occasionally toy with the idea of messing with his profile, but figured someone that stupid and/or arrogant wouldn't get it. Besides, I just don't do that sort of thing.

#144 ::: pericat ::: (view all by) ::: February 15, 2010, 06:35 PM:

My tab says "the lily knows not..." so I think Mary Aileen and I could work up a chorus line. :)

VCarlson, your exec guy may be neither stupid nor arrogant, but fundamentally confused about what the 'remember me' checkbox means.

I work with geologists. They're forever apologizing for not knowing anything about computers. They've forgotten more about rocks than I'll ever know, so I figure we're even.

#145 ::: P J Evans ::: (view all by) ::: February 15, 2010, 06:46 PM:

Where I work we get people who, when they leave in the evening, either don't even close the software or lock their computers instead of logging off. Either one causes problems, because the people who run the remote updating of the software do it at night, and a computer that isn't logged off can't be updated. (What we tell them is to leave it in 'restart' except on the days we're told to actually turn the machines off.) You'd think after the first couple of times they'd get the message ....

#146 ::: TexAnne ::: (view all by) ::: February 15, 2010, 06:56 PM:

The post title has been bugging me for days. This morning, I figured out why.

The lily that blossoms in spring,
Tra la,
Has nothing to do with the case.
A most unattractive old thing,
Tra la,
Is Buzz's desire to trace,
Is Buzz's desire to trace.
And that's what I mean when I say or I sing
"Oh bother the lily that blossoms in spring."
Tra la la la la,
Tra la la la la,
"Oh bother the lilies of spring."

#147 ::: Joel Polowin ::: (view all by) ::: February 15, 2010, 08:08 PM:

TexAnne @ 147: I had the same niggle and knew why, I just couldn't think of anything to do with the idea.

Another line is needed to match the original verse. Something like "From windows we all want to fling (tra-la) / A most unattractive old thing..."?

#148 ::: Earl Cooley III ::: (view all by) ::: February 15, 2010, 08:31 PM:

The character names from that opera annoy the heck out of me.

#149 ::: TexAnne ::: (view all by) ::: February 15, 2010, 08:46 PM:

Joel, I goog--er, looked up the lyrics and cut'n'pasted into Word. "A most unattractive old thing" is Gilbert's, but I moved it up a line--perhaps that's what's bothering you.

#150 ::: Sundre ::: (view all by) ::: February 15, 2010, 09:14 PM:

Edgar lo Siento @103

Oh, dear. I keep going online when I'm tired, so perhaps words don't mean what I think they mean. I was trying to say they have no brick-and-mortar store locations. Mail-order only.

#151 ::: Julia Jones ::: (view all by) ::: February 15, 2010, 09:30 PM:

And... a few minutes ago I was standing in the garden minding my own business, when the Street View car came by and took my photograph.

#152 ::: Jim Henry ::: (view all by) ::: February 15, 2010, 09:59 PM:

Caroline @126:

Lots of people regard URLs as an arcane code they have to just remember to enter in a particular box, .....

I mean, right now my browser is showing "http://nielsenhayden.com/makinglight/archives/012186.html#400280." It's a string of gibberish, really. .....

And yet how else do you do it? How do you let people know they're on the right page? (The mysterious computer language bit makes phishing easy -- you won't have any reason to know that bankname.scammer.com is different from word.bankname.com, or scammer.com/bankname is different from bankname.com/word. The order of words and symbols is a meaningless code to you.)

P J Evans @127:

Firefox puts it on the tab: 'Making Light: The lily knows not why...'

I'm fairly sure that the text Firefox puts in the window or tab title is the HTML TITLE element from the webpage you're looking at. It's trivial for a competent phisher to make the TITLE of their page identical to the TITLE of the page they're imitating, and though they can't make their URL identical they can often make it so similar as to fool an experienced user at a casual glance, and for less knowledgeable users they don't even need to do that, as Caroline pointed out.

Paypal sends (or used to send?) occasional PSA emails to their users, and also sometimes put up a reminder on their login and logout pages IIRC, saying that one should always check the URL of the login page and make sure it starts with https:, not http:, and ends with paypal.com. Given the current state of Web architecture, I think that's the only useful defense against phishing -- educate the users about URLs at least enough to parse the protocol and domain name parts, and probably teach people how to use bookmarks as well. Designs like that of Google Chrome may hinder it.
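
That advice boils down to a few lines of Python, as long as the suffix test is done carefully -- an exact match or a dot-prefixed suffix, since a bare endswith would also wave through something like evilpaypal.com. (paypal.com here is just the domain from their reminder; this is a sketch of the rule, not anything Paypal publishes.)

    # Sketch of the "is this really the login page?" check described above.
    from urllib.parse import urlsplit

    def looks_legit(url, expected="paypal.com"):
        parts = urlsplit(url)
        host = (parts.hostname or "").lower()
        return parts.scheme == "https" and (
            host == expected or host.endswith("." + expected))

    print(looks_legit("https://www.paypal.com/signin"))         # True
    print(looks_legit("http://www.paypal.com/signin"))          # False: not https
    print(looks_legit("https://paypal.com.example.net/signin")) # False: wrong domain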

On the other hand, re: usability in areas where phishing is not a problem, web authors and designers should get in the habit of (a) always filling in the TITLE element, and (b) making it a unique identifier at least for every static page and as far as feasible for dynamically generated pages. For instance, I generally read webcomics a month or so at a time, going to my bookmark from the last strip I read some weeks ago and reading everything from there onward to the current strip. This is easiest to do when every strip from the webcomic has a TITLE element that contains the name of the comic and a date or sequence number or some other unique identifier; too many of them have only the name of the strip in the TITLE, and some have not even that (e.g., Tweep and Order of the Stick), requiring me to type in the strip name and date whenever I make a bookmark. Yes, I suspect using RSS might fix this, but though I'm reasonably technically proficient, there's a limit to the number of new technologies I want to learn in any given time period -- after a certain point it becomes frustrating rather than fun.

#153 ::: P J Evans ::: (view all by) ::: February 15, 2010, 10:06 PM:

153
I wish more people would put in page titles, too. On the other hand, I'm also clued-in enough to check the 'hidden' URLs on links, so I can report phishing more usefully. It isn't really rocket science ....

#154 ::: Joel Polowin ::: (view all by) ::: February 15, 2010, 10:17 PM:

TexAnne @ 150: I'm not bothered by the line being moved up, but there's nothing taking its place -- your verse is one line shorter than the original.

#155 ::: David Goldfarb ::: (view all by) ::: February 15, 2010, 10:35 PM:

For my part, when working on my home Mac instead of my away-laptop, I prefer to browse in separate windows instead of tabs (using Exposé or CMD-~ to move between windows). So I have the full title up there at the top of the window.

#156 ::: TexAnne ::: (view all by) ::: February 15, 2010, 10:42 PM:

Joel: Well, damn. That'll teach me to be cute while having bronchitis! If anyone cares to fix it, feel free.

#157 ::: SeanH ::: (view all by) ::: February 16, 2010, 05:29 AM:

#153: Designs like Google Chrome's may hinder it

A few things I really like about Chrome - now my only browser - are security-related features. It highlights the important bit of the URL for clarification:

http://nielsenhayden.com/makinglight/archives/012186.html

I'm pretty sure the purpose of that is to make phishing sites a bit more obvious. It also throws up warning pages if you try to access a malware-hosting site, or if you get redirected to a different site than the one whose address you entered (this has thus far only happened to me in the innocuous case of my university's network demanding I enter my ID and password before it'll let me on the internet, but it's good to know Chrome's looking out for me).

#158 ::: John Stanning ::: (view all by) ::: February 16, 2010, 10:32 AM:

IE8 does those things too (don’t know who imitated who), though I think the malware and redirection warnings are options that the user can turn off.

#159 ::: Graydon ::: (view all by) ::: February 16, 2010, 10:43 AM:

KeithS @ 138 --

One of the reasons Konqueror is my favorite browser is that it decided to abstract the protocol prefix mechanism; you have, of course, http: and ftp: and about:, but you've also got user-defined prefixes that do stuff. So I use the (Konqueror default) gg: for "go google" and write searches in the address bar. Having to use a separate search window is annoying and requires mousing, whereas alt-o will give me the text field of the address bar.

There are about forty pre-defines; I only use three with any regularity, but appreciate their existence a great deal.
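
The mechanism itself fits in a few lines; here's a toy Python version (the shortcut table and URL templates are made up for illustration, not Konqueror's own):

    # Toy keyword-shortcut expander in the spirit of Konqueror's gg: prefixes.
    from urllib.parse import quote_plus

    SHORTCUTS = {
        "gg": "https://www.google.com/search?q={}",
        "wp": "https://en.wikipedia.org/wiki/Special:Search?search={}",
    }

    def expand(address):
        prefix, sep, rest = address.partition(":")
        if sep and prefix in SHORTCUTS:
            return SHORTCUTS[prefix].format(quote_plus(rest))
        return address  # not a shortcut; treat as an ordinary address

    print(expand("gg:secret engines of the world"))
    print(expand("http://nielsenhayden.com/makinglight/"))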

I will say this thread has been giving me a very odd sense of something; the idea that a computer-literate person would not read a URI as automatically and unconsciously as text would not have occurred to me.

#160 ::: C. Wingate ::: (view all by) ::: February 16, 2010, 11:45 AM:

Another thing just occurred to me: the way the Facebook people got linked in means that the article contains a list of names, with lots of pictures, of people who Didn't Get It. And there is nothing they can do about it.

#161 ::: heresiarch ::: (view all by) ::: February 16, 2010, 02:11 PM:

John Stanning @ 122: "Like Graydon, I tripped over Abi’s phrase “the secret engines of the world” because to me “secret” means purposely hidden, whereas I think Abi is talking about things that are not secret, nor even deliberately hidden – a little research would tell us all about them – but in our everyday lives we just don’t want or need to know about them."

I think you're on the right track, but I think abi's insight goes further than that. The point, I think, of the lily's ignorant blossoms is that using something and understanding it are in reality entirely separate endeavors that needn't overlap at all--that even something like a lily, which is entirely incapable of understanding, may nonetheless accomplish enormously complex things simply by virtue of practice, by virtue of nature.*

The scientific (or educational) view champions a different model of agency: that it is only by understanding that we may act on the world, discover which levers do what, and transform our intention into action. It is not wrong: understanding multiplies agency a thousand-fold, and without it I certainly wouldn't be typing this into this clicky board and watching the symbols appear on the glowy screen in front of me, confident that others will soon perceive them on glowy screens of their own. But despite how necessary understanding is to conceive of and execute something like the internet, understanding it is as necessary to using it as understanding the molecular structure of sugar is to digesting it. That is to say: even a system created by humans, authored by understanding, does not require understanding to be used. That we are surprised to find that this is so is an artifact of our world-view.

*see also: Blindsight

#162 ::: David Dyer-Bennet ::: (view all by) ::: February 16, 2010, 02:46 PM:

Graydon@160: I use that mechanism in Firefox ("quick searches" I think it's called there) and there are exactly three that I use regularly: "g" for google, "w" for wikipedia, and "sf" for isfdb.

CTRL-L to get to the URL bar where that goes in. I've removed the search bar from my layout, to save space.

#163 ::: David Dyer-Bennet ::: (view all by) ::: February 16, 2010, 02:51 PM:

PJEvans@146: whereas here, locking in the evening is the preferred thing, because stuff that's supposed to run automatically won't run if the user is logged out. Or something; our IT staff doesn't seem to have any strong opinion about that really. But then, the only way I learned the local hard drives weren't backed up was by asking. To my mind that combines poorly with a build procedure that's locked into a single directory on the local disk drive, but what do I know? I'm just a software engineer....

#164 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 16, 2010, 03:09 PM:

The problem of computer ignorance is much deeper than it first appears. I suspect that fewer than 5% of all computer users are even sure just what a computer is (and the rest don't care as long as it does what they want). The trouble we're having is that the computer is not a tool as we normally think of such; it's the first universal meta-tool: a tool that can become other tools, and can be used to create any tool of a large class (and even maybe any class if you allow for adding arbitrary hardware). Not only that, but the interface that the computer provides to any given tool's capability need bear no obvious or intuitive relationship to those capabilities, even if it's emulating a physical tool originally created completely outside the digital domain (spreadsheets come to mind; once past the obvious and into the "programming", Excel looks nothing like a pencil & paper spreadsheet).

For many people, the notion of "application" as a separate set of functionality doesn't quite compute: they're still using the same keyboard and mouse, so what's changed? Similarly, I suspect many people have a very fuzzy idea of what a Web "page" is, especially since some of them are just static data like a page in a book and some are really programs yet we use the same term for them.

Computers are capable of a vast number of useful functions (and an even vaster number of functions that are useless or worse); if we insist that using those functions requires a knowledge of how the underlying digital infrastructure works, I doubt that much work other than programming and system administration will get done. And as computers serve more and more functions, there'd be that much more for each user to learn over and above the minimum they need to understand how to invoke the functionality they actually need to get their work done. Users need to know about a lot more than the computer's operating system these days; understanding the basics of the Web and the quirks of the currently faddish user interface toolkits imposes a large cognitive burden as well.

Here's an interesting exercise: think about how you'd explain the basic technology of the internal combustion engine automobile to someone who doesn't remember (or never learned) elementary physics and chemistry. It's actually not that hard, because you've got some basic folk physics to draw on: some things burn, and they may explode if you burn them in an enclosed volume; explosions move things, etc. etc. Now try to do the same thing for a computer that's connected to the internet. You can talk about Boolean algebra, digital logic gates, binary memory, and so on all you like, but you have to also find a way to ground those concepts in commonly understood principles; this is a lot harder because there will be more layers of concepts, and many of the connections will be indirect. For most people, the amount of effort required to understand all that will not be worth the return.

#165 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 16, 2010, 03:28 PM:

Once you really understand abstract software concepts like object, recursion, and closure they become almost obvious. One of the reasons I think some people who are otherwise adept at abstract thought have trouble understanding them initially is that a lot of authors and teachers either don't understand them well or don't understand what aspects of them can be used to justify their use.

When I first started to use C++, I thought it might be faster for me to take a short course than to try to figure it all out myself. I had the idea of objects down pretty well from several years of Smalltalk programming, but the two languages are very different in philosophy as well as syntax. I attended the first class and immediately bailed: the instructor may have been a competent C programmer, but he had no idea of what an object was. I've worked more recently with Java and C++ programmers who are really unaware of the reasons for using objects; their code is not optimal </sarcasm>.

As for the justifications, there is still a widely-believed set of myths in the professional programming community and the academic software engineering community (as opposed to the academic computer science community) that objects impose a performance burden on software that is unacceptable in many system programming environments; that recursion is always too expensive and should be replaced by iteration in all cases, and that closures are just too complicated to use. None of these myths is more than partially true under some conditions, and in most cases even when there is some truth the benefits can easily outweigh the costs.
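
For anyone who hasn't had that particular aha yet, here's about the smallest Python sketch I can manage of a closure and recursion pulling their weight together -- an inner recursive function hanging on to its own private cache:

    # make_fib returns a function that carries its own private cache around
    # with it: the cache dict is captured by the inner function (the closure).
    def make_fib():
        cache = {0: 0, 1: 1}
        def fib(n):
            if n not in cache:
                cache[n] = fib(n - 1) + fib(n - 2)  # recursion, memoized
            return cache[n]
        return fib

    fib = make_fib()
    print(fib(30))  # 832040, without exponential recomputation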

#166 ::: Graydon ::: (view all by) ::: February 16, 2010, 03:43 PM:

heresiarch @162 --

I'm going to put on my process geek hat here.

If you use something, you have an understanding of it. This understanding is without exception some degree of wrong, which can be thought of as a combination of incomplete and inaccurate. Absolutely NO ONE has a complete or correct understanding of anything. (Believing this about yourself is the scientific version of the virtue of humility.)

Different understandings are more or less useful in some context or another. People as a class are willing to change their understanding if they derive a greater feeling of control or will become able to do the same thing for less effort or more things than they could previously.

Computers are useful primarily as communications devices; the understanding adopted is the one that gets the best signal with the least effort.

The more detailed understanding generally makes the computer a *worse* communications device. (I am, for instance, not on Facebook because I cannot stomach the security implications.) The trade-off is IF the more detailed understanding grants either less effort or increased capability, but this is by no means guaranteed.

It's very rare for someone to set out to explain a computing device in either of those contexts; you're supposed to want to understand fiddlin' details because such understanding confers virtue. (Which is nonsense on its face, but lamentably widespread as a ... second-order decayed Protestant notion of sin, I think).

So the question isn't really "why don't people who use things understand them?" or "why should you have to understand something to use it?", but "what benefit derives from changing your current understanding to a less incomplete one?"

Since there are only three ways to deal with complexity (constrain it, match it, or build an amplifier for the control system) and pretty much the entire modern net works on "constrain" at a visible level, increased understanding immediately runs into a learning cliff.

http://nielsenhayden.com/makinglight/archives/012186.html#400524 decomposes trivially into "hypertext transfer protocol, server name nielsenhayden.com, top-level directory name makinglight, child directory archives, file name 012186.html, octothorpe signifies an HTML internal anchor, internal anchor identifier 400524", but none of those things is necessarily really true; this is probably not pure HTTP (tunnels, https, port forwarding...), nielsenhayden.com is not a physical server, it's unlikely the directory structure is represented on an actual file system, the alleged file representing this particular post and its comment thread is a label for a bunch of database references, and so on.
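
(Purely as an illustration of the syntactic layer being described, and nothing beneath it, the following Python sketch pulls the same pieces out of that URL with the standard library; the parser sees only the notation, so it is just as agnostic as the paragraph above about whether any of it corresponds to real servers, directories, or files.)

    from urllib.parse import urlsplit
    from posixpath import dirname, basename

    url = "http://nielsenhayden.com/makinglight/archives/012186.html#400524"
    parts = urlsplit(url)

    print(parts.scheme)          # 'http'                  (the protocol, as written)
    print(parts.netloc)          # 'nielsenhayden.com'     (the server name, as written)
    print(dirname(parts.path))   # '/makinglight/archives' (the apparent directories)
    print(basename(parts.path))  # '012186.html'           (the apparent file name)
    print(parts.fragment)        # '400524'                (the anchor after the octothorpe)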

I happen to know that, but it's highly unclear there's any benefit to it; indeed, this is probably why there's such intense social pushback to the idea that you're supposed to incorporate increased complexity into your world view by default, it makes everything more expensive for no obvious benefit. ("This stuff is fun!" is on the whole either axiomatic or utterly inexplicable.)

So I would say the binary "understands/does not understand" is flat wrong, as is the idea that you don't understand how to digest sugar; you may not be able to explain the biochemistry, but the complexity handling is present in your metabolism, and one ought not to privilege cortex over pancreas in a sort of articulism.

#167 ::: David Dyer-Bennet ::: (view all by) ::: February 16, 2010, 03:44 PM:

Bruce@166: Worst example of premature optimization ever -- teaching the students that high-level tools impose unacceptable performance costs.

#168 ::: David Harmon ::: (view all by) ::: February 16, 2010, 03:48 PM:

Bruce Cohen #165: When explaining things to Mom I tend to fall back on "homunculus" models: There are various agents inside the computer, doing various tasks and mostly working together, but they have to share the screen, et cetera.

#169 ::: John Stanning ::: (view all by) ::: February 16, 2010, 04:59 PM:

Graydon #167 : thank you, very interesting thoughts.

Absolutely NO ONE has a complete or correct understanding of anything. (Believing this about yourself is the scientific version of the virtue of humility.)
Very well put.  A familiar idea, but I haven’t seen it expressed in that way.  I’ll quote it, if I may!  Is that your original, or should I know who said it?

one ought not to privilege cortex over pancreas in a sort of articulism
Don’t let your head rule your ... guts?
But what does “articulism” mean, please?

#170 ::: Graydon ::: (view all by) ::: February 16, 2010, 05:16 PM:

John Stanning @170 --

You are certainly welcome to quote that, and I believe it to be original.

"articulism" is me reaching for a word to denote "privileging something just because it can talk".

There's a substantial practical difference between how complexity handling gets done via design and via inherited biochemical mechanism, but the point I was after was that complexity handling is complexity handling ("understanding") whether or not it, or the mechanism of it, could be consciously articulated.

#171 ::: John Stanning ::: (view all by) ::: February 16, 2010, 05:20 PM:

Aha!  Thanks!

#172 ::: David Wald ::: (view all by) ::: February 16, 2010, 05:24 PM:

Graydon@167:

http://nielsenhayden.com/makinglight/archives/012186.html#400524 decomposes trivially into "hypertext transfer protocol, server name nielsenhayden.com, top-level directory name makinglight, child directory archives, file name 012186.html [...] it's unlikely the directory structure is represented on an actual file system [...]

I happen to know that, but it's highly unclear there's any benefit to it.

The hierarchical URI structure is an interesting example of an interface (and corresponding model) that's been quietly broken. Back when web sites more often mirrored real directory structures, I could often work around a bad site navigation interface by editing the URI to move around in the directory tree. The hierarchical directory model made the URI's internal structure useful.

Now, a combination of tighter security defaults and deeper abstraction layers has made that workaround less and less useful. In the meantime, unfortunately, the variety of painfully bad site navigation interfaces has only increased, making me really miss that (barely-intended) user interface.

#173 ::: joann ::: (view all by) ::: February 16, 2010, 05:28 PM:

Graydon #167: "This stuff is fun!" is on the whole either axiomatic or utterly inexplicable.

And things can change state (also utterly inexplicably) from fun to not-fun and back from one moment to the next.

#174 ::: David Dyer-Bennet ::: (view all by) ::: February 16, 2010, 05:43 PM:

I wouldn't say understanding the URL structure is "important", but I use it multiple times a day in figuring out what's going on on my screen; to me it's extremely useful. (Not counting days when I'm actually building web sites, where the number is far higher but it's specialist knowledge rather than user knowledge.) Perhaps more important, I use it to check link locations before following them, to make up my mind if they're dangerous or not.

And incidentally the two websites I maintain for work both DO reflect the directory structure in their URL structure. But they're extremely simple; the more complex the site, the less likely that is to be true.

I think we've got kind of a geek vs. non-geek thing here. I LIKE knowing how things work. Even cars, and certainly radios and tvs and computers. History, molecular biology, physics, whatever.

#175 ::: j h woodyatt ::: (view all by) ::: February 16, 2010, 05:44 PM:

One of the underlying threads in this discussion is connected to the extraordinary problem of devising a language of universal resource identifiers that can be used both by machines and by the people who hate them.

When we had this problem with telephone handsets, we decided that it made sense to give each one a short number, and we asked people to remember the numbers for handsets they frequently wanted to signal. That way, we wouldn't need vast rooms full of expensive, error-prone, human network operators who could sometimes connect you with the correct "Joe the Plumber, who lives in Outer Bumfsckistan" just by asking them in plain English.

Modern Problems: you need an awful lot more than ten numeric digits to encode a unique resource identifier for every point of data on the Internet even when you limit yourself to just covering the static, persistent data. So, what to do? Give up on the problem entirely and decide that the way forward is always to go straight from the keyword search engine to the resource locator bits? [Speaking of "secret engines of the world" now...] Or do you try to devise a resource identifier language that some people— but certainly not all people— will be able to store and retrieve for later use without relying on the availability of search engines that often return ambiguous or incorrect results? I think it's obvious you have to do the latter, but it's important to recognize that the nature of the problem makes it necessary to cope with the fact that some people are just never going to be capable of identifying a resource except by reproducing its search path— much the way an ant colony identifies a food source.

Why there are so many Internet applications that seemingly fail to cope with this problem properly is beyond me. At times, I'm inclined to believe, despite having no other evidence to support this hypothesis, that it's because the information technology sector is riddled with a secret conspiracy of misanthropic, nihilistic, apocalypse cults. They're not trying to solve problems for people— they're inviting people into a descending spiral that ultimately ends in suicidal despair.

Despair over not being able to log into Facebook. (Don't be too quick to laugh that off as a joke.)

#176 ::: abi ::: (view all by) ::: February 16, 2010, 05:46 PM:

DDB @175:
I think we've got kind of a geek vs. non-geek thing here.

To a certain extent. But we also have a time-rich vs time-poor thing. Not everyone has the leisure time to devote to grasping the mental models underlying everything they do.

(Indeed, no one does. See Graydon's comment above. But some people have time to go deeper than others, in more areas than others.)

#177 ::: abi ::: (view all by) ::: February 16, 2010, 06:00 PM:

jh woodyatt @176:

Why there are so many Internet applications that seemingly fail to cope with this problem properly is beyond me. At times, I'm inclined to believe, despite having no other evidence to support this hypothesis, that it's because the information technology sector is riddled with a secret conspiracy of misanthropic, nihilistic, apocalypse cults. They're not trying to solve problems for people— they're inviting people into a descending spiral that ultimately ends in suicidal despair.

You had me till the em dash.

There is a certain sector of the information technology sector that valorizes obscurity and difficulty. They don't want the kind of people who spend time on facebook to die in despair, but they don't want them cluttering up the wizard's cave and fiddling with the arcane gadgets, either.

I first noticed this back when the mapper/packer thing was doing the rounds. It's all very geek-worshipping, but the people it's defining in opposition are essentially strawmen. And as I said in the original post, an awful lot of computer mappers are automotive packers, or history packers, or economics packers. Fabric and clothing packers*.

-----
* Are any of the "mapper" types who nodded smugly along at that essay interested in mapping fibre arts? There's some neat stuff in there, but I bet most of them just wear T-shirts without being able to articulate how, structurally, they're pretty much sweater variants, or why they get shorter and wider after many washings. Their grannies who knit map fibre arts all the time; they just don't call it geekery and valorize it in essays.

#178 ::: Tony Zbaraschuk ::: (view all by) ::: February 16, 2010, 06:03 PM:

Whitehead once wrote that civilization advances by increasing the number of operations we can perform without thinking about them.

There are some problems to this model, but they basically boil down to "your life depends on other people."

#179 ::: Jim Henry ::: (view all by) ::: February 16, 2010, 08:11 PM:

Graydon @167:

This understanding is without exception some degree of wrong, which can be thought of as a combination of incomplete and inaccurate.

I agree with the idea, but not with its expression, as it seems to be contrary to the usual meaning of the word "wrong". I get 70K+ Ghits for the exact phrase "incomplete but not wrong". Maybe "imperfect, which ..... incomplete and inaccurate"?

I happen to know that, but it's highly unclear there's any benefit to it; indeed, this is probably why there's such intense social pushback to the idea that you're supposed to incorporate increased complexity into your world view by default, it makes everything more expensive for no obvious benefit.

There are occasions when understanding the structure of a URL allows you to troubleshoot a problem -- if a sub-page has moved to somewhere else on the site, but the site's still there and otherwise well-maintained, you can strip off one part of the URL after another until you get to a usable page and then navigate from there looking for the stuff you wanted. Similarly, if you arrive at a page via a link or search engine and it has no links to other pages on the same site, you can often get to the top page of the site by stripping off one or more filename and/or directory name parts of the URL. As other posters have pointed out, this is less often possible than it used to be, but still possible often enough to make learning the structure of a URL useful though not necessary. And understanding at least the protocol and domain name allows you to detect phishing, and is as far as I know the only reliable method of doing so given the present architecture of the web. (I hope someone will point out a simpler method, one that's easier to explain than parsing a URL.)
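
A rough sketch of that troubleshooting procedure in Python follows; it is one way to mechanize the stripping, not anything Jim Henry prescribes, and the example URL is hypothetical. Printing the scheme and host first is the phishing check: those are the parts that tell you whose server you are actually about to talk to.

    from urllib.parse import urlsplit, urlunsplit
    import posixpath

    def ancestors(url):
        """Yield the URL, then each parent made by stripping one path segment."""
        parts = urlsplit(url)
        path = parts.path
        while True:
            yield urlunsplit((parts.scheme, parts.netloc, path, "", ""))
            if path in ("", "/"):
                break
            path = posixpath.dirname(path.rstrip("/"))

    url = "http://example.com/archives/2010/02/012186.html"    # hypothetical
    parts = urlsplit(url)
    print("protocol:", parts.scheme, " host:", parts.netloc)   # eyeball these before trusting a link
    for candidate in ancestors(url):
        print(candidate)        # try each in turn until one of them loads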

#180 ::: Allan Beatty ::: (view all by) ::: February 16, 2010, 09:42 PM:

Bruce @ 165: think about how you'd explain the basic technology of the internal combustion engine automobile.... Now try to do the same thing for a computer that's connected to the internet....

Thus the popularity of car analogies for explaining computer and internet concepts.

I think I'll give it a try for the kerfuffle that started this thread. Suppose I get in my car to drive to Wal-Mart. Along the way there is a detour and I get lost and accidentally wind up in the parking lot of a different store. Not my fault, and entirely understandable that I might not immediately realize that I was in the wrong place. But the big sign over the door saying Borders instead of Wal-Mart should be a big clue. By the time I walked into the store, even if there is a book about Wal-Mart on a table near the front, I'd know I was in an entirely different place. I wouldn't complain about how Wal-Mart had moved everything around so it was impossible to find, nor about why I couldn't use my Wal-Mart charge card.

#181 ::: Allan Beatty ::: (view all by) ::: February 16, 2010, 09:46 PM:

Abi @ 178: I like it. So Making Light is a place where publishing mappers and computer mappers and poetry mappers and knitting mappers can talk to each other.

#182 ::: Allan Beatty ::: (view all by) ::: February 16, 2010, 09:55 PM:

Myself @ 181: So of course I recognize the flaw in my analogy as soon as it is posted. For the analogy to be comparable to the real-life problem, we need a group of people who never drive anywhere but Wal-mart.

#183 ::: Avram ::: (view all by) ::: February 16, 2010, 10:36 PM:

Allan @181, but what if there was a sign saying "Walmart"? (They've dropped the hyphen, BTW.)

The comment form on that ReadWriteWeb post really did have a little Facebook logo, and the words "Sign in with Facebook". And Facebook has a history of redesigning its interface with no warnings. I'm amused at the results, myself, but the more I think about it, the harder it is to blame all those folks who haven't spent as much time as I have learning how to untangle the bewildering semiotics of computers and the web.

#184 ::: abi ::: (view all by) ::: February 17, 2010, 08:06 AM:

Avram @184:
The comment form on that ReadWriteWeb post really did have a little Facebook logo, and the words "Sign in with Facebook".

There's that (the "sign in" language was probably the deciding clue for many people).

But wait! There's more. Go look at the article. Below the headline, right where a medieval scribe would put an illuminated initial, sits the Facebook logo.

#185 ::: John Hawkes-Reed ::: (view all by) ::: February 17, 2010, 09:02 AM:

Abi@178:

There is a certain sector of the information technology sector that valorizes obscurity and difficulty.

Yes. We call them 'idiots' and 'impossible to work with'. Insecure empire-builders who can't bear to share knowledge.

[Mapper/Packer]

It seems to me that if your brain-type is Mapper, then that's how you'll approach every problem. One of the good things about this computing malarkey is that it's given those of us with the Hacker mindset something desperately obvious to be good at. Thus computing ends up being somewhat Mapper/Hacker heavy. However, there's no particular reason why that way of thinking can't be applied to anything else.

For me, writing fiction lights up the same bits of brain as writing code or whatever other random creative endeavour has grabbed me this time. For my father it is/was fiddling with cars or agricultural machinery.

#186 ::: John Stanning ::: (view all by) ::: February 17, 2010, 09:22 AM:

There is a certain sector of the information technology sector that valorizes obscurity and difficulty.

True, though I don’t understand that attitude and never did.  The most satisfying work I did in IT was when I fully understood what the users wanted, and why (not what they said they wanted, which was sometimes different) and gave it to them.  One of my happiest moments was meeting at a conference a former colleague in a company that I’d left long ago, who said "you know that system you made for us 20 years ago?  We’re still using it, and it’s still good."

Except that it was 1998.  After the happy moment, my next (private) thought was “Aaargh!  Are they still using that?  I wonder if it’s year-2000-compliant...?“  It was.  (Not by my virtue – just luck.)

#187 ::: Serge ::: (view all by) ::: February 17, 2010, 09:57 AM:

John Stanning @ 187... Well, they need to think they're Clark Kent. You know, "I may look geeky, but I'm really one of the few capable of keeping the world from falling apart."

#188 ::: Serge ::: (view all by) ::: February 17, 2010, 09:59 AM:

Allan Beatty @ 182...

Dapper mappers.
Pitter-patter mappers.

#189 ::: Graydon ::: (view all by) ::: February 17, 2010, 10:35 AM:

Jim Henry @180 --

I realize I'm using "wrong" with some idiosyncrasy there, but I think this is the more helpful usage in this case.

The useful basis of dispute is over the degree of utility (and for whom); getting into a pile of semantic mechanisms to argue for "functional correctness" (which is a useful concept, when you're not trying to use it to claim you're not wrong) so you can have the argument about rightness indirectly is a trap. Any attempt to valorize or advance *anything* as Being Right is a semantic trap; it will burn arbitrary amounts of effort and energy (and people, from time to time) and it will not accomplish any improvement in anything.

So I think the useful approach is an axiomatic recognition that one is globally wrong; that runs the reflexive question into "is this useful in context?" which not only requires context, it requires a maintained idea about what constitutes useful.

(And yes, I know you can use the full URL/URI string to make guesses about the arrangement of the server's innards and where you might usefully poke it. I will continue to support the view that this is not an obvious benefit; it's not enough to know about URL syntax, you have to know something about how web pages are served (that web pages *are* served...) and the customary and conventional meanings of the represented hierarchy to use this approach, and, as widely noted, it doesn't reliably work in the present day. It's a bit of sometimes-useful arcane knowledge. Now, if httpd supported external XPath(ish) search on public server objects, *that* might be more generally worth learning, but so far as I know there's not even a gleam in anybody's eye in favour of doing that.)

#190 ::: abi ::: (view all by) ::: February 17, 2010, 10:45 AM:

John Hawkes-Reed @186:

My point is that I don't think packers exist. I think we pack easily remembered things and things we don't care enough about to put the time into mapping, and map the things we care about.

I think that the people who really get into the mapper/packer distinction are in denial about the contexts in which they're packers, and also about the ways that the people they label as packers actually map in areas that don't "count" for the distinction-makers.

Fibre arts is a good example. Most techie geeks that I know don't have a mapper's understanding of fabric, clothing, or tailoring. Furthermore, they tend to be scornful of the kind of people who use packer techniques to, for instance, get to Facebook, but who have a deep mapper's understanding of knitting or tailoring.

(Make no mistake. Fibre arts is a deep and complex subject. We've set up our society so that you don't have to know about it to be decent and hypothermia-free, but a fabric mapper can spend half the clothing budget of a fabric packer and look twice as good for four times as long doing it. Don't tell me you're a mapper and a geek if you don't respect that kind of prowess.)

#191 ::: Carrie S. ::: (view all by) ::: February 17, 2010, 11:01 AM:

When you get into fiber arts, that's a whole 'nother can of worms--having to do mostly with Craft, and how it is distinct from Art*.

But I am in general in agreement; it's like the Avenue Q song "Everyone's a Little Bit Racist". Similarly, everyone's a geek about something, where "geek" corresponds to the understanding I've gleaned of "mapper". It's just that some people are geeks about things they can be paid for, and they tend to get the useful attention.

* Mostly, men do Art and women do Craft. I can rant at length.

#192 ::: abi ::: (view all by) ::: February 17, 2010, 11:15 AM:

Oh yes, there is definitely a strong traditional gender distinction between Items Whose Structural Understanding Qualifies One As A Mapper and Boring Stuff That Doesn't Count.

It's not the only problem with the model, but it's an indicator that it's an incomplete descriptor of humanity. The fact that it's described with an irreducible emotional affect is another. It's smugness fodder, not a genuine sociological or psychological insight.

#193 ::: j h woodyatt ::: (view all by) ::: February 17, 2010, 11:36 AM:

abi: "You had me till the em dash."

Okay, maybe it just ends in my suicidal despair.

Still, I worry that most of the wizards might secretly think the best way to keep the grubby peasants away from the precious arcane instruments is just to feed every last damned one of the turnip-eating troglodytes to the hellspawn beastie in the basement, and thus to be rid of the pestilence once and for all.

"I first noticed this back when the mapper/packer thing was doing the rounds. It's all very geek-worshipping, but the people it's defining in opposition are essentially strawmen."

Whoa. That mapper/packer thing makes my early bullshit warning system ring off the hook. I really don't know why. I suppose I should read it more thoroughly before I slag on it in earnest, but something about it rubs me very badly.

#195 ::: John Hawkes-Reed ::: (view all by) ::: February 17, 2010, 12:18 PM:

Abi@191:

To be fair to Mr. Carter, he seems to think that Packing can be unlearned. Also, it's pretty much about making better coders, so it may not map (ahaha) quite as well to other disciplines.

I am all for mental techniques that make it easier to get into The Zone/state of Flow and stay there, though.

I have hacker friends who have become fascinated by tailoring, so I think I'm in vigorous agreement with you on that. Given the sort of random stuff I've become involved with, I try hard not to bag on other people's enthusiasms.

#196 ::: David Harmon ::: (view all by) ::: February 17, 2010, 12:37 PM:

Abi #191/193: I think he's taken a mental skillset and elevated it to a virtue. Certainly, there are people whose thinking is predominantly creative and analytic, but they're not actually a breed apart!

#197 ::: heresiarch ::: (view all by) ::: February 17, 2010, 02:04 PM:

Bruce Cohen @ 165: "The problem of computer ignorance is much deeper than it first appears."

Even constructing "computer ignorance" as a problem is (I think) looking at things the wrong way. Ignorance of the Way Things Really Work isn't some problematic aberration from humanity's status quo unique to computers; it's the fundamental fact of human experience. It's the reality underlying everything people do, from eating to talking to thinking to driving to sleeping. The reason why lack of understanding manifests as a problem with computers is that users have neither on their side (as they have in the case of eating and other evolved behaviors) a highly complex non-intelligent expert system that deals with it without their conscious input, nor on the device side an interface that is designed to deal with and compensate for their dearth of abstract understanding. Or rather, the simplifying interface they do have is poorly designed, because it was (as abi and others have said) designed by people who do possess a complex understanding of the subject and so it's (perhaps literally) impossible for them to imagine how non-experts will conceptualize their design.

Comparison with other situations presents us with three potential solutions: an evolved or engineered non-intelligent expert system incorporated into human beings, universal education on the abstract nature of computers, or smarter user design. One seems unlikely in the foreseeable future; the second is conceivable but its success is dubious and its cost-effectiveness (in terms of time and money) uncertain; and the third is basically just doing something we're already doing better. A combination of a little of two and a lot of three seems like the easiest and most effective solution. It would, however, require a substantial shift in the culture of program designers.

"Users need to know a lot more than computer operating systems these days; understanding the basics of the Web and the quirks of the currently faddish user interface toolkits imposes a large cognitive burden as well."

I think we are already seeing a trend towards standardization of user interface design precisely for that reason. Having sunk untold hours into learning the quirks of Adobe Photoshop, few users will be thrilled by the idea of learning a whole new image manipulation user interface, even if it was objectively better in every respect. The point where complexity handling transitions from tool to user tends to ossify for that reason, I think: it's a lot easier to upgrade from a prop plane to a jet fighter than it is to change the pilot's control scheme.

Graydon @ 167: I wrote a reply the other night and ran into the glitch. It's saved, but on another computer. I should post it in, oh, maybe two or three hours.

#198 ::: abi ::: (view all by) ::: February 17, 2010, 02:28 PM:

heresiarch @198:

Having sunk untold hours into learning the quirks of Adobe Photoshop, few users will be thrilled by the idea of learning a whole new image manipulation user interface, even if it was objectively better in every respect. The point where complexity handling transitions from tool to user tends to ossify for that reason, I think: it's a lot easier to upgrade from a prop plane to a jet fighter than it is to change the pilot's control scheme.

See also, QWERTY vs Dvorak

#199 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 17, 2010, 03:16 PM:

abi @ 178:
There is a certain sector of the information technology sector that valorizes obscurity and difficulty. They don't want the kind of people who spend time on facebook to die in despair, but they don't want them cluttering up the wizard's cave and fiddling with the arcane gadgets, either.

You said that ever so much more politely than I did. And I love the word "valorize" in this context. So many of the people you're talking about would love to be styled Prince Hacker the Valiant. The entire attitude is summed up in the derogatory term "luser" (see also "Bastard Operator from Hell").

j h woodyatt @ 196:
Whoa. That mapper/packer thing makes my early bullshit warning system ring off the hook.

Mine too. After reading about half of it I came to the conclusion that my alarm was triggered by a certain smugness that the author and his friends are all "mappers" and thus superior in the view of the Cosmic All. That's hubris, and it rarely goes unpunctured.


heresiarch @ 198:
Even constructing "computer ignorance" as a problem is (I think) looking at things the wrong way.

Agreed. The word "problem" is problematic in this context. Perhaps "the issues in the debate over user ignorance of the internals of computing systems" would be a better way to phrase it.

Or rather, the simplifying interface they do have is poorly designed, because it was (as abi and others have said) designed by people who do possess a complex understanding of the subject and so it's (perhaps literally) impossible for them to imagine how non-experts will conceptualize their design.

Also, very few of those designers have any training or interest in the occult art of creating useful abstract conceptions for people who aren't used to thinking abstractly.

I think we are already seeing a trend towards standardization of user interface design precisely for that reason.

Unfortunately it appears to be one standard per application type, with little carryover between them. I know many people consider the Apple UI designers akin to fascists for insisting on a rigid standard for user interface (and it's true they have violated it themselves in their own applications at times), but that may be the only reasonable solution to the problem of proliferating interfaces in the near-term.

#200 ::: VCarlson ::: (view all by) ::: February 17, 2010, 03:35 PM:

Abi @ 199: re the QWERTY vs Dvorak issue. Of course, it's not just the "remembering best what you learned first" issue (at least for me) when it comes to going Dvorak - it's also that I've heard Dvorak's so nice to use one has difficulty going back to QWERTY, and since most of the places over which I have no control are QWERTY, that's a problem.

That's my excuse, anyway. I don't think I'm too hidebound to learn a new way, but I could be wrong.

I mean, I learned @$*!! MS Word, after all, though I much prefer WordPerfect. The company I worked for went from WP to MSW because tout le monde (they're a French company, so I feel obliged to throw that in) used Word, and it was getting awkward.

#201 ::: Mary Aileen ::: (view all by) ::: February 17, 2010, 03:43 PM:

v Carlson (201): And then @$*!! MS Word changed everything about the way it works with the latest version. (Don't get me started. Just...don't.)

#202 ::: Serge ::: (view all by) ::: February 17, 2010, 03:57 PM:

Mary Aileen @ 202... C'mon. You know you wanna.

#203 ::: Earl Cooley III ::: (view all by) ::: February 17, 2010, 04:06 PM:

I'm still quite satisfied by MS Office Pro 2000 and Adobe Photoshop 3.0. heh.

#204 ::: Lexica ::: (view all by) ::: February 17, 2010, 04:38 PM:

Serge @ 203: Although I'm not Mary Aileen, I have also been diving into the depths of frustration dealing with Office 2007. All the things I used to know how to do automatically I can't do anymore. The keyboard shortcut no longer works; the command isn't on the menu; heck, the menu itself isn't there anymore!

It's like repeatedly stepping down onto what one thinks is a stair (but it's not really there) only to land with an uncomfortable *thump!* and bite one's tongue in the process. Or like reaching out for a tool that's supposed to be right there — only some miscreant has moved the entire flipping toolbox.

Argh.

#205 ::: TexAnne ::: (view all by) ::: February 17, 2010, 04:46 PM:

I still haven't forgiven those idiots who wrote Word...hm, two updates ago? They decided that "ctrl-E" was centering, instead of footnotes. And they didn't replace it with any other keyboard shortcut, either, so every time I want a footnote I have to go through some treasure-hunting rigmarole. A pox on the lot of them.

#206 ::: Pendrift ::: (view all by) ::: February 17, 2010, 04:53 PM:

TexAnne @206: Personalized toolbars help save my sanity for things like that. Assigning new shortcut keys doesn't work for me, because of conflicts between programs.

#207 ::: TexAnne ::: (view all by) ::: February 17, 2010, 04:56 PM:

Bless you, Pendrift! That won't stop me from going "ctrl-E," but it'll make me cuss a lot less.

#208 ::: heresiarch ::: (view all by) ::: February 17, 2010, 05:03 PM:

As promised,

Graydon @ 167: "If you use something, you have an understanding of it."

This statement is only true if you're using a very, very unintuitive definition of "you" and "understanding." For instance, when molecules fold themselves into proteins, who is understanding what? When oxygen combusts, is that reliant on the oxygen having an understanding of combustion? In what way can you sensibly discuss how oxygen's understanding of its combustion is flawed or incomplete?

In hopes of avoiding a segue into Leibnizian monadology, I'm going to define "understanding" as a process by which a model of reality is created in order to anticipate what actions will create which results. Using that definition, your first paragraph makes perfect sense--the map is never the territory, etc. But it doesn't make any sense at all to talk about that kind of understanding when discussing, say, a lily plant. It isn't creating any mental models; it hasn't got a brain. Insofar as it engages in complex behaviors like blossoming in spring, it isn't doing so out of any understanding--it does so because that’s what its DNA says. It's just a more elaborate version of a rock falling: natural forces acting on material objects.

"So I would say the binary "understands/does not understand" is flat wrong, as is the idea that you don't understand how to digest sugar; you may not be able to explain the biochemistry, but the complexity handling is present in your metabolism, and one ought not to privilege cortex over pancreas in a sort of articulism."

Complexity handling isn’t the same thing as understanding. Incredibly complex things can be handled without any abstract model, simply by evolutionary trial-and-error or brute-force calculation. My pancreas neither has nor needs any sort of abstract model of biochemistry in order to function--it works the way it works without ever knowing how or why. Drawing that distinction isn’t some bizarre form of discrimination; it’s recognizing that the problem-solving methods employed are fundamentally different. Failing to do so strikes me as anthropomorphism.

Bruce Cohen @ 200: "Perhaps "the issues in the debate over user ignorance of the internals of computing systems" would be a better way to phrase it."

Yes, though isn't it irritating when the only way to say what you mean is ridiculously convoluted? This is why jargon exists.

"Also, very few of those designers have any training or interest in the occult art of creating useful abstract conceptions for people who aren't used to thinking abstractly about computers."

FTFY. Sorry to be such a pedant, but I think this is one of those tricky conceptual issues where it's very easy to slip into a simpler but misleadingly wrong model. Being precise is the only defense.

#209 ::: dajt ::: (view all by) ::: February 17, 2010, 05:18 PM:

VCarlson@201: I can fairly easily switch between qwerty and dvorak, as long as I'm not doing it on the same physical keyboard in quick succession. One thing learning dvorak did was help break me of some of my bad keyboarding habits, like looking at the keyboard as I type. The upshot of this is that I look at the keyboard as I type in qwerty (where the letters I type match the letters on the keys) and I look at the screen when I type in dvorak (where the letters on the keys are wrong). This means that I can't type on a keyboard that has dvorak letters on it, but is using a qwerty layout.

#210 ::: guthrie ::: (view all by) ::: February 17, 2010, 06:29 PM:

Apropos Bruce at #165, I recalled these cartoons:
http://abstrusegoose.com/98
http://abstrusegoose.com/secret-archives/under-the-hood

To be read in that order. The second is in fact linked from the first.

#211 ::: Allan Beatty ::: (view all by) ::: February 17, 2010, 06:31 PM:

Ob-webcomic.

#212 ::: Lee ::: (view all by) ::: February 17, 2010, 06:38 PM:

Getting back to the original topic: European watchdog group files FTC complaint on Google.

"Google still hasn't gone far enough," Epic's consumer privacy counsel Kim Nguyen told BBC News. "Twitter is a social networking site and people know what they are signing up for. With Gmail, users signed up for an e-mail service, not a social networking service," said Ms Nguyen.

I think she nailed it. Gmail was launched as an e-mail service, which Google is now trying to turn into a social networking site without the consent of its users. They are NOT the same thing.

TexAnne, #206: We're still using Word 97, because it's the last version which allows multiple documents to be open in the same session. A lot of what we use it for involves cutting-and-pasting between documents, and having to have multiple copies of the program running is just too damned unwieldy.

#213 ::: Serge ::: (view all by) ::: February 17, 2010, 06:38 PM:

Lexica @ 205... I don't know how many times I've let out loud expletives when uploading softwares on my laptop, or my wife's, because I'm having a hard time going thru the process and I work with computers. When that happens, my deranged doguette Freya usually slinks away until I've calmed down.

#214 ::: Erik Nelson ::: (view all by) ::: February 17, 2010, 07:09 PM:

Yahoo has also added social-networking-like features to its webmail. Is their situation comparable?

#215 ::: Mary Aileen ::: (view all by) ::: February 17, 2010, 07:13 PM:

My biggest single complaint about the new Word is probably the way they hid all of the basic functions inside something that looks like a frigging design element. Whoever thought that one up was a moron.

And I hate-hate-hate the fact that it is impossible to set the default line-spacing to *not* double-space between paragraphs. (If I'm wrong about that one, someone please let me know.)

I don't just use Word myself (occasionally), I have to assist people, many of them *not* computer-savvy, with the public access computers in my library. Every Single One of them has to be shown how to print, because of the design element thing. Many of them have to be shown how to single-space between paragraphs; we can't just make it a default, because it's impossible. Grrrrrrrrrrrrr.

I use Open Office at home. Or Wordpad.

#216 ::: Graydon ::: (view all by) ::: February 17, 2010, 07:41 PM:

heresiarch @209 --

While my use of English is widely held to be idiosyncratic, I don't believe I've ever gone "You! Oxygen!" and intended to address a bunch of molecules.

The difference between the oxygen molecules and the lily (or you, or me) is that the lily (or you, or me, or some types of corporation) is a system; the organization matters. The organization is what's doing the complexity handling.

All the thinking we're doing -- I presume there are as yet no machine intelligences involved -- is being done by a bunch of complex grease regulated by chemical transfers. It's different in function but not in principle from the lily.

So if I have an understanding, the understanding of the lily is (perhaps) different in kind due to the presence or absence of systemic organization; it's not different in kind due to an issue of fundamental mechanism.

#217 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 17, 2010, 07:43 PM:

heresiarch @ 209:
FTFY. Sorry to be such a pedant, but I think this is one of those tricky conceptual issues where it's very easy to slip into a simpler but misleadingly wrong model. Being precise is the only defense.

I agree about the need for precision, but I think we're actually talking about two different things. I think there are (a lot of) people who have never been taught how to think about abstractions at all. I'm not saying that they cannot do so; if we're to believe George Lakoff (and I do), then all language and symbolic thought consists of metaphors, which are abstractions based on some set of concrete things like spatial extent and integers between 1 and 4. What I am saying is that most people don't recognize the metaphors they use, and don't know how to think about abstract qualities. This usually manifests as thinking that some abstractions are concrete objects, or as not recognizing the common characteristics of two concepts abstracted from the same base concept.

In contrast, I take you as saying that in this discussion we need to understand that not understanding the abstract nature of computer systems and their components does not betoken an inability to reason abstractly, only an ignorance of how to reason abstractly within the domain of computer concepts. The two sets I've talked about are not synonymous, but I believe that the set of people who have trouble with abstracting about computers contains people from both of those sets.

#218 ::: Joel Polowin ::: (view all by) ::: February 17, 2010, 09:23 PM:

The CBC reports that the Canadian privacy commissioner has had words with Google. Google's response isn't particularly edifying.

#219 ::: KeithS ::: (view all by) ::: February 17, 2010, 11:49 PM:

Mary Aileen @ 216:

That's the big gripe I have with the new Office stuff too. I haven't really used it, but it took seeing a few screenshots, including one of the 'Office' menu being used, for me to realize that that ugly knob at the top was actually the menu.

Does making a template with all the formatting settings you like no longer work in Word 2007?

On user-interface consistency:

Where it really helps is for those people who are, at least, mildly computer-savvy or working to get that way. (It helps for experienced computer users too, but experienced computer users are resigned to working with all sorts of crazy UIs anyway.) That way, users transfer their knowledge from one program to a new, unfamiliar one. For someone like my mother, who seems to view each application as its own, completely separate thing, perhaps not so much.

One of the problems with fancy websites, skinnable media players, DVD menus, and the like is that they all try to be different, because some designers automatically think that different is good. (IBM tried this a couple times.)

That said, a domain-specific application may well get away with being different. Its initial users already are familiar with the domain, but not necessarily with computers. It's when it gets a broader market that it becomes a liability. (By the way, how does anyone get anything at all done in Blender?)

#220 ::: KeithS ::: (view all by) ::: February 18, 2010, 12:29 AM:

The Buzz zettings (I was going to correct the typo, but...) tab in GMail is now there, and, even though I didn't have a public profile and seem to be all right, I'm about to nuke the service from my account just to be safe.

#221 ::: Lee ::: (view all by) ::: February 18, 2010, 01:54 AM:

I just nuked mine too. I checked firzt (yes, I'm leaving the typo in place too) and made sure that I didn't have any followers, or anyone I was following, and hadn't inadvertently made any posts to it. Then I KILLED THAT BASTARD! BWAHAHA!

Am I right in thinking that people who use Google Reader will no longer be able to do so with Buzz deactivated? If so, Google has shot themselves in the foot even more nastily than it first appeared.

#222 ::: janetl ::: (view all by) ::: February 18, 2010, 02:54 AM:

Yay! I just used the new (much, much overdue) Gmail setting to disable Buzz. I noted that the checkbox to unfollow was defaulted to unfollow, so Google may be capable of learning.

#223 ::: John Stanning ::: (view all by) ::: February 18, 2010, 11:31 AM:

AFAIK Word 2007 does templates in much the same way as previous versions, so Mary Aileen (#216) should be able to set the default paragraph format in the “normal” template, either by opening normal.dotm directly* or simply by modifying the default paragraph style (usually “Normal”) and ticking where it says to apply this style to documents based on this template.

Microsoft, of course, couldn’t leave well alone in this area either, and introduced in Office 2007 something called a “document theme”;  I still haven’t figured out what that is, or how it relates to a template.

* normal.dotm is located C:\Documents and Settings\[username]\Application Data\Microsoft\Templates in Windows XP or 2000, or C:\Users\[username]\AppData\Roaming\Microsoft\Templates in Vista.  I don’t know where it is in Windows 7, which I’ve avoided so far.
In a student/library environment it may be necessary to prevent users from changing this template, either by putting security on the Templates folder, or else by locating the templates folder in a read-only central folder, pointing Word at it via the relevant registry entry, and putting security on said registry entry to prevent users from changing it via [stupid-round-thing-in-top-left-corner], Word Options, Advanced, File Locations.

#224 ::: John Stanning ::: (view all by) ::: February 18, 2010, 11:35 AM:

Apologies for the nuts-and-bolts detail in the post above;  this isn’t the place for that sort of stuff, but I thought it might just help someone.

#226 ::: Mary Aileen ::: (view all by) ::: February 18, 2010, 11:39 AM:

John Stanning (224): Modifying the default paragraph style doesn't work for the spacing-between-paragraphs thing. I could set the actual line spacing to "single" instead of 1.5, but the "don't add space between paragraphs of the same style" checkbox won't stay checked. I'll have to point the library tech-guy at your other explanation. Thanks!

#227 ::: E. Liddell ::: (view all by) ::: February 18, 2010, 11:48 AM:

KeithS @220: My experiences with Blender suggest that getting anything done with it is rather akin to bashing one's head against a wall: the sufficiently hard-skulled will eventually get through to the other side, but it's an extremely painful process. And the worst part is that it has to have been set up that way deliberately, since there's no other reason I can think of for those counterintuitive hand-rolled load/save dialogues.

#228 ::: Andrew Plotkin ::: (view all by) ::: February 18, 2010, 11:53 AM:

I got stuff done in Blender, eventually, by deleting Blender and downloading Google Sketchup.

#229 ::: John Stanning ::: (view all by) ::: February 18, 2010, 11:59 AM:

Mary Aileen #227:  I don’t understand that, sorry.  Your tech-guy may be able to help.
This support article might be relevant:  I’m not sure.

#230 ::: Mary Aileen ::: (view all by) ::: February 18, 2010, 12:09 PM:

John Stanning (230): That's done it! Thank you!!! Now I just have to get the tech guy to fix all the public machines.

#231 ::: P J Evans ::: (view all by) ::: February 18, 2010, 12:13 PM:

Nuked Buzz.
(I don't use Reader, and had no public profile anyway, so there shouldn't have been anything there that I didn't know about already.)

#232 ::: Earl Cooley III ::: (view all by) ::: February 18, 2010, 12:34 PM:

"What's the buzz? Tell me what's a-happening."

#233 ::: David Dyer-Bennet ::: (view all by) ::: February 18, 2010, 01:55 PM:

Abi@177: certainly time is one constraint everybody has to work within. Certainly nobody (and double plus especially not me) knows about everything. But the geek nature slants the resource allocation choices towards wanting to know more detail. I think finding other people's shop talk interesting is one of the most reliable indicators of geek nature.

#234 ::: heresiarch ::: (view all by) ::: February 18, 2010, 02:52 PM:

Graydon @ 217: "While my use of English is widely held to be idiosyncratic, I don't believe I've ever gone "You! Oxygen!" and intended to address a bunch of molecules."

Yes, I know. That is my point.

"The difference between the oxygen molecules and the lily (or you, or me) is that the lily (or you, or me, or some types of corporation) is a system; the organization matters. The organization is what's doing the complexity handling."

The solar system is organized; the solar system handles an immense amount of complexity. What does it understand? It seems to me as if you're biasing your arguments in favor of systems which you perceive to have intentionality (such as a lily) and against systems which you do not (complex clouds of gases undergoing chemical reactions).

"So if I have an understanding, the understanding of the lily is (perhaps) different in kind due to the presence or absence of systemic organization; it's not different in kind due to an issue of fundamental mechanism."

Again, you're mistaking systemic organization, or complexity handling, or whatever term you prefer, for understanding. An understanding is an abstraction: a lily no more makes an abstraction out of itself and its environment in order to achieve an abstractly-articulated goal than a continental plate makes an abstraction out of itself and its surroundings in order to decide where a mountain range will form. These things simply happen, according to the discrete interactions of molecule upon molecule, in an unimaginably immense and entirely intentionless motion. It is the antithesis of abstraction.

#235 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 18, 2010, 03:36 PM:

E. Liddell, Andrew Plotkin:

Thanks, you may have just saved me a lot of time and frustration. I downloaded Blender late yesterday, and played around with it for 15 or 20 minutes, rapidly getting nowhere. I have no idea what the interface designers were thinking; I was mostly able to figure out Maya, at least to the extent of creating surfaces and hooking them together, but with Blender I still haven't figured out how to select objects reliably. So I may just dump it and find something else to render images with.

As bad as Blender is, it's still far from the record Really Bad User Interface. My vote for that is Kai's Power Tools, for egregious use of color and oddly shaped buttons, extra pineapple cluster for odd lacunae in functionality; basically it's what you see is what Kai wants you to get. My guess is he was having an affair with a jukebox when he designed it.

#236 ::: VCarlson ::: (view all by) ::: February 18, 2010, 03:39 PM:

Lee @213 citing TexAnne @206: What!! You can't have multiple documents open? Back when I was employed, a significant part of my time was spent with multiple documents from multiple searches open to combine them into coherent narratives for my customers. Having multiple instances of the program open - doesn't that chew through computer resources? Or is that the point?

Mary Aileen @216: That's one of the things I hatehatehate about MS products - the tendency of the programmers to assume they know better than I what I want to do. I first noticed it when Word changed a list with numbers in it to a sequentially-numbered list. Without telling me. I was setting up a list of channel numbers so I could blow it up to huge proportions so my stepfather could have a chance of reading it (macular degeneration - and it didn't work, anyway, his vision was too far gone).

Making it so you can't change the defaults is just a continuation of that arrogance, IMO.

#237 ::: David Harmon ::: (view all by) ::: February 18, 2010, 03:49 PM:

heresiarch #235: And yet the lily does have intentionality, even without understanding! Unlike a cloud of gas, the lily has deep structure deriving from megayears of development and selection. Evolution has given it the intention to survive and reproduce -- not as a central program, but implicit in everything it does. Note that even for a plant, that includes a certain amount of environmental awareness and responsiveness, because that's how anything living survives.

#238 ::: Graydon ::: (view all by) ::: February 18, 2010, 03:58 PM:

heresiarch @235

Despite the confusing name, our solar system isn't a system in the sense I'm using; there are structural but no functional relationships. (So Jupiter's gravity contributes to the structure of the solar system but there is no functional relationship between Jupiter and Earth, they're independent products of chance-dominated processes.) So the solar system doesn't understand a blessed thing. (Unless there's a _really strong_ value of the Gaia hypothesis that turns out to be factually well-supported, anyway, but in general it seems most consistent with observation to consider that the question "what is the function of Saturn?" isn't meaningful.)

The functional relationships -- which imply both a history of functional interconnection and successive approximation of an hypothetical ideal function -- are important. The cloud of ideal gas doesn't have those. A Darwinian individual does, whether worm or lily or Making Light comment poster. I strongly suspect but cannot prove there are other classes of system that are not equivalent to Darwinian individuals in a computability sense of which this is also true.

I'm arguing that (this sense of, not the formal computational sense of) abstraction, like the notion of intelligence from which it derives, is an incorrect assumption based on a theological world view. One observes various terrific philosophical convolutions about what, precisely, intelligence might be; it's much simpler to note that it's an assumption unsupported by facts.

It's all exapted evolved mechanisms until one gets down to the ideal gas level, where there's no functional connection. There may be a capacity for abstraction, in a formal sense equivalent to the computability/mathematical (as I recall, those are not quite the same, but it's been a very long time) sense, but all the research of which I am aware points out two things: people's understanding of how they perform complex tasks is generally in error, and the actual mechanisms, as best can be told, are highly empirical and show little sign of attempting to abstract anything; they work off a set of cues and triggers, and while this can work very well indeed, it is not the same thing as abstraction.

#239 ::: KeithS ::: (view all by) ::: February 18, 2010, 04:09 PM:

E. Liddell @ 228 and Bruce Cohen (SpeakerToManagers) @ 236:

Blender's UI is strange, ugly, and completely insane. I think it's because it comes from a Unix background, where there are at least fifteen different UI toolkits to choose from, so instead of choosing one they rolled their own. Blender is powerful, and people can do amazing and impressive things with it. However, I wouldn't be surprised to learn that its usage would drop to about nil if Maya were ever released for free, because its UI is so completely bizarre.

I think I may try Andrew Plotkin's solution if I find it exports to the things I want it to.

Lee @ 213 and VCarlson @ 237:

Word 2002 (2000?) and up don't open one instance of the program per document. It's still one program running, but with one top-level window per document. This is technically known as single-document interface (SDI). The way it used to work was that there would only be one container window, but it would contain all the documents. This is known as multi-document interface (MDI), and is the way that Excel still does things. (Excel does try to confuse you by putting things in the taskbar for each document by default, so it's not a perfect example.)

With the exception of tabbed browsing, MDI is usually considered inferior to SDI from a modern usability standpoint, and there are some who dislike tabbed browsing because it's MDI in a new guise. It's really a matter of preference, but my preference is (with a few limited exceptions, like tabbed browsing) for SDI, and Excel's MDI behavior drives me up the wall. SDI also plays nicer with multiple monitors and large screens.

Now, newer versions of Word do use more resources than older ones, but that's a different issue.
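
As a toy illustration of the distinction (my own sketch, nothing to do with how Word or Excel are actually built), here is the difference in Python's Tkinter: SDI gives each document its own top-level window, while MDI nests document frames inside one container window.

    import tkinter as tk

    def open_sdi_documents(names):
        # SDI: one independent top-level window per document.
        for name in names:
            win = tk.Toplevel()
            win.title(name)
            tk.Text(win, width=40, height=10).pack()

    def open_mdi_documents(container, names):
        # MDI: every document lives in a frame inside a single container window.
        for name in names:
            frame = tk.LabelFrame(container, text=name)
            frame.pack(side="left", padx=5, pady=5)
            tk.Text(frame, width=30, height=10).pack()

    root = tk.Tk()
    root.title("MDI container")
    open_mdi_documents(root, ["Budget.xls", "Notes.xls"])
    open_sdi_documents(["Letter.doc", "Report.doc"])
    root.mainloop()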

#240 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 18, 2010, 05:44 PM:

KeithS @ 240:

The problem I have with the multiple document model is that it implicitly assumes that all the documents for an application must appear on the same display (and even if their enclosing window can lie across display boundaries, it's an ugly and inefficient use of screen real estate). Right now I'm working on two displays, one on the clamshell of my laptop, the other a large display to the left of it. I often work with more than one document, and I'll put one document (or the controls for the application, in the case of complex apps like Photoshop or Maya) on the laptop display, and one or more other documents on the large display. I often have more than one application up in this manner, and I have 8 virtual displays on top of the two physical ones, so I can deposit document windows all around and not have to have them occlude each other. If my budget permitted it, I would buy at least one more large display and would want to extend my window organization.

Excel drives me up the wall too, for a lot of reasons. There are times when I've had to use it (I once even had to write a Java program to generate Excel spreadsheets, and I will never do that again), but I've never learned to like it, and I get away from it as soon as possible. It's my belief that Excel's designers had no conceptual model to organize their thinking beyond the idea of spreadsheet, so anything that didn't fit perfectly in that idea is like a wart sticking out of the program.

#241 ::: Lee ::: (view all by) ::: February 18, 2010, 07:41 PM:

KeithS, #240: MDI is definitely superior to SDI from our usability standpoint, which is why we're still using Word 97. Besides which, DFWP. :-)

Hmmm... maybe that's why I like Firefox so much better than Windows. It took me a while to get used to the tabbed browsing, but now you'll have to pry it out of my cold, dead fingers to get me to give it up.

Also, in a way it reminds me of how I used to get the most bang for the buck on the S/36 at one of my old jobs. I could have one series of jobs lined up on the JOBQ, several more individual jobs running detached (on "virtual terminals"), and be working at something entirely different on my screen. Being able to switch tabs and look at something else while one screen is being slow to load gives me a similar feeling.

#242 ::: chris ::: (view all by) ::: February 18, 2010, 09:06 PM:

ISTM that understanding is well exemplified by, say, John Stanning's post at #224. In order to write that post, he had to understand

* what was being communicated by the post he was directly replying to
* the nature of the problem thus described
* enough about how Microsoft Word works to produce a solution for that problem
* how to communicate the solution in a way that might be comprehensible to the previous poster

I would be astounded if a lily or a planet duplicated that feat.

Also, I don't think that the lily's actions (if even *that* word isn't too much of a stretch) are an intention any more than Cydonia is a face. Ascribing intention to entities that lack it is an old human pastime (although I would say it is not so much "based on a theological world view" as vice versa).

I'm not quite sure what to make of the idea that applying the intentional stance to *ourselves* is another instance of the same fallacy. It seems self-contradictory, somehow. If we're capable of having any theory about ourselves, even a fallacious one, doesn't that prove that we're a type of being that can have theories about things? The lily's inability to ask the same question rather proves the point, doesn't it?

----

I would define the geek nature as enjoying understanding itself, whether or not it has any relevance to what you can do with that understanding. Thus, while you certainly can be geeky about a field of knowledge with great practical significance such as textiles or food or auto mechanics, it won't define the field in the perceptions of others if you're outnumbered by people whose understanding of that field is oriented toward what they can do with it. Impractical geekery is the most noticeable.

Also: if you acquire knowledge in order to display it to others, then you are acquiring it instrumentally, and thus, not a true geek. Belittling others' fields of geekery is an attempt to enhance the social value of your own knowledge, and thus, a mark of a false geek. Practicality, status, social approval: a geek craves not these things.

Or am I just drawing this distinction because I (subconsciously) think it will enhance my own social prestige? I don't completely understand my own mind (who does?), so it could be.

#243 ::: Diatryma ::: (view all by) ::: February 18, 2010, 09:31 PM:

KeithS: So *that's* what Excel is doing. It bugged me for ages, and I've sometimes managed to close all spreadsheets rather than the one I'm done with.

John Stanning: thank you thank you thank you for making me not grumble every time I open a new document and hit enter.

On user interfaces: I worried about getting my mother a Linux computer from Surplus until I realized that XP was becoming Vista, and from everything I'd heard, that was as tough to adjust to, if not tougher. So now she has Linux, and except for a few things--none of us kids has figured out how to get YouTube to work--she can do everything she wants: email people, send pictures, type things (although not print them), and buy things.

My seventh-grade keyboarding class was also a How To Use Word class. It's been really useful in the years since, if only because I know that there *should* be a keyboard shortcut for everything.

#244 ::: TexAnne ::: (view all by) ::: February 18, 2010, 09:46 PM:

Chris, 243: "...enjoying understanding itself." Yes, a thousand times yes! Or perhaps I merely enjoy the endorphin rush.

I strongly agree that belittling others' chosen geekitudes is the mark of a false geek. Real geeks get excited about all bits of neepery! For example, the things I've learned recently about photography--wow. I don't care to do it myself, but it's fun watching other people be excited.

#245 ::: Joe McMahon ::: (view all by) ::: February 18, 2010, 10:36 PM:

J Greely@67: It was the famous "let's turn on automatic die-on-error for EVERYONE!" change in a minor revision of WWW::Mechanize that made hundreds of scripts fail, including our primary software installer. Article on Perlmonks.

It's really past history - a few years back now - but it came pointedly to mind when I started thinking about the issue of building things that are dependable and reasonable, and coding for the real world as opposed to the magical universe where no one ever pushes software into production without taking hours to do extensive testing and analysis, and where there is never an emergency or accident.
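
A tiny Python analogue of that trap (not the actual WWW::Mechanize API; the wrapper below is hypothetical, built on the real requests library): make the die-on-error decision explicit in your own code, so an upstream library flipping its default in a minor release can't silently change what your installer does.

    # Hypothetical wrapper, for illustration only; uses the real `requests` library.
    import requests

    def fetch(url, die_on_error=False):
        """Fetch a URL; the *caller* decides whether HTTP errors are fatal."""
        resp = requests.get(url, timeout=10)
        if die_on_error:
            resp.raise_for_status()   # raises requests.HTTPError on 4xx/5xx
        return resp

    # Installer-style code states its choice in its own source, so a changed
    # library default can't silently turn a soft failure into a fatal one.
    page = fetch("https://example.com/manifest.txt", die_on_error=False)
    if not page.ok:
        print("fetch failed; falling back to the cached copy")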

#246 ::: KeithS ::: (view all by) ::: February 19, 2010, 02:05 AM:

chris @ 243: enjoying understanding itself

Yes, just so. Geekish behaviors are the same even where the subjects are wildly different, and many geeks enjoy hearing about other fields of geekery.

Diatryma @ 244:

I eventually sat down and wrote a pseudo-tab-bar for Excel (and turned off the show documents in taskbar option), because otherwise I'd treat it as an SDI app and accidentally kill everything I was working on.

For Linux, you probably need to find the Flash package in your distribution's package manager, or download it from Adobe if your distribution doesn't have one. It'll probably be stashed in a category called something like "non-free" in the package manager.

For Windows 95, Microsoft did everything they could to make the Start menu discoverable, including, but not limited to, labeling it "Start" and adding that bouncy "click here to start" arrow. For Vista, they turned it into an ugly graphical knob at the bottom of the screen that has no text. If you were a first-time computer user, which would you click on right away?

#247 ::: E. Liddell ::: (view all by) ::: February 19, 2010, 10:13 AM:

Bruce Cohen (SpeakerToManagers) @236: It's impossible to accomplish anything in Blender by just poking around--the only way to figure out how it works is to read the documentation. (And, if you've been away from it for a while, you have to *re*-read the documentation.) Depending on why you're fiddling with it, this may not be worth the effort.

KeithS@240: Blender started out as in-house software for some European video-production studio and the original UI was tightly married to their process. It was never intended to be let out into the Real World (and I'm told that the UI for the first publicly-released versions was even worse than the current one, although how they managed to accomplish that I'm not quite sure). So I don't think this one is Linux's fault.

Diatryma@244: YouTube works fine on the Linux box I'm typing on right now, so KeithS is probably right and you just need to install the Flash plugin. For printing, you'll need CUPS and a printer driver (if one exists for the printer involved).

#248 ::: Graydon ::: (view all by) ::: February 19, 2010, 10:53 AM:

chris @243 --

You should be equally astounded if John photosynthesizes.

The point is not to argue that a lily can think; the point is to argue that the human ability to consider things by category isn't an example of the mathematical sense of abstraction, it's an example of a bunch of exapted neurological function that (probably) evolved to keep track of social relationships, and that while this is a powerful and interesting form of complexity handling there are a whole lot of other powerful and interesting forms of complexity handling, and that it is very probably a mistake to consider human neurological function unusually capable of complexity handling and certainly a mistake to consider it generally capable. (I suppose there is a kind of poetic justice to an attempt at discussing complexity handling resulting in a sentence like that.)

#249 ::: J Greely ::: (view all by) ::: February 19, 2010, 11:06 AM:

@246 Joe McMahon: It was the famous "let's turn on automatic die-on-error for EVERYONE!" change in a minor revision of WWW::Mechanize

Ah, thanks. It sounded like it was something more recent, which would have sent me into a version-checking frenzy before my next release to Production. Always a good time. :-)

-j

#250 ::: David Harmon ::: (view all by) ::: February 19, 2010, 12:02 PM:

chris #243:

I'm not quite sure what to make of the idea that applying the intentional stance to *ourselves* is another instance of the same fallacy. It seems self-contradictory, somehow. If we're capable of having any theory about ourselves, even a fallacious one, doesn't that prove that we're a type of being that can have theories about things? The lily's inability to ask the same question rather proves the point, doesn't it?

Except that, e.g., a dog can't ask that question either, and dogs clearly have intentions behind much of their actions. Why not accept that all life has intention, with widely varying degrees of complexity? Ours is highly-developed, elaborated, and especially abstracted -- but that doesn't make intention, as such, a Magical Human Thing. Remember that part of the usual definitions of life is homeostasis -- defending one's own processes against the slings and arrows of the environment. What is that, if not an implicit intention to continue existing?

#251 ::: Mary Aileen ::: (view all by) ::: February 19, 2010, 12:31 PM:

KeithS (247): For Windows 95, Microsoft did everything they could to make the Start menu discoverable, including, but not limited, to labeling it "Start", and that bouncy "click here to start" arrow.

Very true, although it still bugs me that you have to click "Start" in order to finish (i.e., turn off the computer).

#252 ::: Erik Nelson ::: (view all by) ::: February 19, 2010, 12:40 PM:

#252:
isn't there a song by Genesis?
"you've got to get in to get out"

#253 ::: KeithS ::: (view all by) ::: February 19, 2010, 02:56 PM:

E. Liddell @ 248:

Ah, I think I did know that it had been in-house software at one time, and I'd since forgotten.

That does illustrate the point I made above about domain-specific applications and user interfaces. If it works for them, and it models what they're doing in a way they're already comfortable with, that's fine. It's when outsiders come to it that the trouble starts.

David Harmon @ 251:

I think the problem there is that intention implies agency. It's one thing to personify life as selfish genes that want to propagate themselves, but that's a description for ease of human understanding. We observe dogs acting in ways that imply that they have certain mental models about the way the world works, and that they can reason to a certain extent. We don't observe the same things of lilies.

I do agree that intention is not a "Magical Human Thing", but I wouldn't describe it as something common to all life.

#254 ::: John Stanning ::: (view all by) ::: February 19, 2010, 03:02 PM:

Graydon #249 : You should be equally astounded if John photosynthesizes.

Sssh!  On the Internet, nobody knows you’re a lily.

#255 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 19, 2010, 03:24 PM:

KeithS @ 254:

The Chilean biologists Humberto Maturana and Francisco Varela, who have not been given anywhere near the recognition they deserve for pointing the way towards a complex-systems view of biological life, coined the term "autopoiesis":

An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network.

I think of autopoiesis as the most primitive form of intention: the base from which all higher forms, including our own consciousness, evolved. So there is conceptually a continuum (broken by step-increases in functionality in the actual contingent history of evolution) of forms of intention.

#256 ::: heresiarch ::: (view all by) ::: February 19, 2010, 04:54 PM:

Bruce Cohen @ 218: "I think there are (a lot of) people who have never been taught how to think about abstractions at all."

Oh, I see--you mean that people don't abstract abstractions, as it were. Yes, that is another difficulty.

"In contrast, I take you as saying that in this discussion we need to understand that not understanding the abstract nature of computer systems and their components does not betoken an inability to reason abstractly, only an ignorance of how to reason abstractly within the domain of computer concepts."

Mm, not quite. I'm arguing that requiring people to understand complex abstract models (file systems) in order to perform routine tasks (reading text documents) is contrary to nearly every other aspect of human life, and that therefore the problem is not that many users lack that kind of understanding, but that computer interfaces are routinely designed to require it.

(Just to be clear, I don't think there's a way to design computers that doesn't require the user to have some level of abstract understanding of computers and how they are different from sticks or cars. Rather, they should be designed in ways that minimize and simplify the required abstractions.)

David Harmon @ 238: "And yet the lily does have intentionality, even without understanding!"

I don't think it does. It has a variety of functions that enable self-replication because in the past arrangements of molecules that have had those functions have successfully replicated themselves--at no point was there any intention to survive, no more than a bunch of molecules folding themselves into a protein are doing it because they intended to. Intentionality is a concept that only makes sense in terms of understanding: wanting to achieve some goal requires being able to conceptualize the goal, time, and any other number of abstractions.

Graydon @ 239: "(So Jupiter's gravity contributes to the structure of the solar system but there is no functional relationship between Jupiter and Earth, they're independent products of chance-dominated processes.)"

How are you differentiating between structural and functional relationships? Saturn's rings have shepherd moons which function to maintain ring boundaries, but I don't think it makes any sense to talk about them having any intention to do so. Similarly, a lily's photosynthetic process functions to allow the lily to self-replicate, but I don't think one can argue it's any more intentional, or the product of an abstract decision-making process, than the shepherd moons. Both emerged out of chance-dominated processes to become highly-organized and self-perpetuating systems. What is the distinction you're drawing?

(Independent Products of Chance-Dominated Processes would be a pretty good band name, by the way. Or maybe album name.)

"all the research of which I am aware points out two things; people's understanding of how they perform complex tasks is generally in error, and the actual mechanisms, as best can be told, are highly empirical and show little sign of attempting to abstract anything; they work off a set of cues and triggers, and while this can work very well indeed, it is not the same thing as abstraction."

I don't disagree with any of this, really. My position since the beginning of this thread has been that understanding (and abstraction) are far less pervasive and far less useful than people (especially technically-minded people) are prone to assume. It's really good at certain things--basically, predicting outcomes to never-before-experienced situations--but it is at best a part of human experience, not its dominant mode.

I do hold that understanding, as defined as a process of constructing abstract models of real phenomena, is a distinct kind of complexity handling from evolved systems--not better, not purer, not cooler--just different. Using the language of understanding ("abstract," "ideal," "goal," "intention," etc.) to discuss evolutionary systems clouds our understanding of both.

#257 ::: P J Evans ::: (view all by) ::: February 19, 2010, 04:57 PM:

(Independent Products of Chance-Dominated Processes would be a pretty good band name, by the way. Or maybe album name.)

Especially if it involves random-number-generated music.

#258 ::: Avram ::: (view all by) ::: February 19, 2010, 06:42 PM:

Mary Aileen @252, where else would you go to start finishing?

#259 ::: chris ::: (view all by) ::: February 19, 2010, 07:00 PM:

@258: Suddenly I have an idea to make a website that randomly generates tunes and has visitors rate them, and then evolves the highest-rated competitors (like Dawkins's biomorph program, but with music)... of course, nothing would stop the userbase from converging on a copyright violation, and more importantly, I don't have the technical ability to do that in a reasonable time.

The interesting part (IMO) would be designing the "embryology" of a tune so that a change in one part of the genotype would affect different repetitions of the same theme in parallel, like musical Hox genes. Music, like bodies, contains repetitions of similar elements and similar-but-slightly-different elements and it would be hopelessly clunky if the system had to modify each copy separately.
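
For what it's worth, a toy Python sketch of that "musical Hox gene" idea might look like the following (not a real evolutionary-music system; the scoring function below is only a stand-in for the human ratings described above): the genotype stores a single motif, the phenotype expands it into several transposed repetitions, and so one mutation changes every repetition in parallel.

    import random

    def phenotype(motif, transpositions=(0, 5, 7, 0)):
        """Expand one motif gene into a tune: the motif repeated at several pitches."""
        return [note + t for t in transpositions for note in motif]

    def mutate(motif):
        """Nudge one note of the motif; every repetition inherits the change."""
        new = list(motif)
        i = random.randrange(len(new))
        new[i] += random.choice((-2, -1, 1, 2))
        return new

    def score(tune):
        """Placeholder fitness: prefer small melodic steps (a human rater goes here)."""
        return -sum(abs(a - b) for a, b in zip(tune, tune[1:]))

    population = [[0, 2, 4, 5] for _ in range(8)]       # eight copies of a seed motif
    for generation in range(50):
        population.sort(key=lambda m: score(phenotype(m)), reverse=True)
        parents = population[:4]                        # keep the better-rated half
        population = parents + [mutate(random.choice(parents)) for _ in range(4)]

    print("best motif:", population[0], "-> tune:", phenotype(population[0]))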


Anyway, to return to the main point, I rather doubt that John Stanning is descended from a line of ancestors whose ability to reconfigure Microsoft Word was crucial to their survival and/or reproductive success, which is of course precisely how the lily "learned" to... well, actually, host the organisms that photosynthesize, rather than doing any photosynthesizing of its own.

Except it didn't really learn anything, because evolution and cognition are rather poor metaphors for each other when you actually get into the details of how they work. The main similarity is that either one may, under the right conditions, produce a solution to a problem.

John's ability to repurpose his brain to the problem of Word configuration, which was not relevant to his evolution, is what makes it special, IMO. The lily blossoms in the spring because blossoming in the spring worked pretty well for its ancestors. John has some tricks like that (including some built into his nervous system, like throwing and catching, for example), but it isn't all he has.

#260 ::: Tim Walters ::: (view all by) ::: February 19, 2010, 08:19 PM:

chris @ 260: Evolutionary music is a fairly well-established field.

#261 ::: David Harmon ::: (view all by) ::: February 19, 2010, 08:31 PM:

Bruce Cohen #256: yes, that's pretty close to what I'm thinking of, with one addition...

heresiarch #257: namely, I'm claiming that the accumulation of "survival responses" in the lily's repertoire, represents something which is conceptually continuous with what we think of as "intention".

All those traits and response patterns are bent toward a single basic goal, which is completion of the lily's life cycle. The first big difference between that and human-style "intention" is that the goal wasn't chosen by the lily, it was incrementally imposed by the contingencies of life. The second big difference is that humans have much more complicated processing "in line" with our behavior patterns, which lets us respond to our environment much more productively. (Our other physical characteristics -- such as mobility -- help too.)

#262 ::: heresiarch ::: (view all by) ::: February 19, 2010, 10:13 PM:

chris @ 260: "Except it didn't really learn anything, because evolution and cognition are rather poor metaphors for each other when you actually get into the details of how they work."

Yes, thank you.

The distinction has serious ramifications for how the system responds to totally new challenges: understanding systems can make predictions about how things will work in new situations, but evolutionary systems have to adapt one stumble at a time. (On the flip side, an understanding system will collapse entirely when faced with a challenge it cannot usefully conceptualize, whereas an evolutionary system will adapt, again, one stumble at a time.)

David Harmon @ 262: "I'm claiming that the accumulation of "survival responses" in the lily's repertoire, represents something which is conceptually continuous with what we think of as "intention"."

And I'm claiming that it is qualitatively different, that "intention" is a concept that only makes sense in the context of a process of abstract conceptualization; that minus that level of abstraction evolutionary behavior is simply a type of positive feedback loop. I don't think we don't understand each other; we just disagree. =)

#263 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 20, 2010, 12:24 AM:

Tim Walters @ 261:

As is evolutionary visual art. I have a copy of Evolutionary Art and Computers, W Latham, S Todd, 1992, Academic Press cited in the Wikipedia article; it's got some fascinating images in it.

heresiarch @ 263:

I (and a lot of other people who've thought more on the subject than I have) see human thought and consciousness as evolutionary processes¹. And recent studies of the ability of protists like slime mold amoebae to deal with environmental change indicate that "simply a type of positive feedback loop" is not so limited a process as we might think.

My own experience with feedback loops is that they can result in some very complex behavior; my reading of some of the recent research in decision making in the human brain is that the mechanisms behind our behavior are simpler than we might like to believe.

It may be that the disagreement here is caused by our using different definitions of "intention". I think you're using something like the Merriam-Webster definition: "a concept considered as the product of attention directed to an object of knowledge". I, and perhaps others on this thread, am using it as a synonym of goal or objective. The difficulty with my definition is just how far down in system complexity you are willing to go while still admitting a system as "goal-oriented", a discussion I don't have time to get into in this post. Tomorrow morning, perhaps.

¹ Not from the point of view that they are products of evolution, which they are, but as processes which reach metastable states by generating candidate states and having them compete.

#264 ::: John Stanning ::: (view all by) ::: February 20, 2010, 10:49 AM:

Chris #260 : I rather doubt that John Stanning is descended from a line of ancestors whose ability to reconfigure Microsoft Word was crucial to their survival and/or reproductive success

I’m descended from a line of ancestors whose generic ability to solve new problems was crucial to their survival.  For example, I owe my existence to my father’s ability to adapt to new circumstances and deal with a problem outside his previous experience – without which he would likely have died in 1940 and I would not have been conceived.  Many of our ancestors survived likewise because of that generic ability.  My possession of it, in no way unusual, enables me to do lots of things, of which hacking MS Word is only a single, trivial example.

#265 ::: David Harmon ::: (view all by) ::: February 20, 2010, 12:19 PM:

Bruce Cohen #264: My own experience with feedback loops is that they can result in some very complex behavior

Oh yeah... I've never seen this stated as an explicit law, but it seems to me that chaotic behavior pops up any time you have three or more independent but interacting factors. It doesn't seem to matter whether they're scalars, vectors, complex numbers, etc., but the next value of each has to depend on the other two. The classic example would of course be the three-body problem.
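
As a quick numeric sketch of that observation (using the Lorenz system, three mutually coupled variables, rather than the three-body problem, since it fits in a dozen lines): two nearly identical starting points diverge wildly, which is the usual signature of chaos.

    # Euler integration of the Lorenz system; parameters are the classic
    # chaotic ones (sigma=10, rho=28, beta=8/3). Purely illustrative.
    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        dx = sigma * (y - x)          # the three variables are coupled to one another
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)          # same start, nudged in the sixth decimal
    for _ in range(5000):
        a = lorenz_step(*a)
        b = lorenz_step(*b)
    print("after 50 time units the two runs differ by",
          max(abs(p - q) for p, q in zip(a, b)))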


#266 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 20, 2010, 03:21 PM:

David Harmon @ 266:

You only need one feedback loop to see chaotic behavior: consider the logistic map. With just 2 or 3 loops, things get chaotic much more quickly. The systems I've played with a lot involve video feedback in one way or another. The simplest is to aim a video camera at a monitor that's displaying the output of the camera, but zoomed in. You can get the same effect, with more control over more variables, with a computer simulation, but the basic principle is the same: for some ranges of variable values (amount of zoom, tilt of camera, brightness and contrast settings, delay between camera and monitor, etc.) the image becomes stable; for others you get a constantly changing pattern that can be oscillatory or completely chaotic. Each pixel of the monitor represents a separate feedback loop (though in the physical camera/monitor situation there's coupling among neighboring pixels, which can make for more stability), so it's a really complex system.
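
Here's a minimal numeric illustration of the logistic-map point (just a sketch, nothing to do with the video rig): a single value fed back through one formula settles down for some parameter values and never settles for others.

    def logistic_orbit(r, x0=0.2, n=40):
        """Iterate the single feedback loop x -> r * x * (1 - x)."""
        xs = [x0]
        for _ in range(n):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    # At r = 2.8 the loop converges to a fixed point; at r = 3.9 it is chaotic.
    print("r = 2.8:", [round(x, 3) for x in logistic_orbit(2.8)[-5:]])
    print("r = 3.9:", [round(x, 3) for x in logistic_orbit(3.9)[-5:]])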

#267 ::: James Moar ::: (view all by) ::: February 20, 2010, 03:34 PM:

Bruce @ 267:

The title sequences from the first decade of Doctor Who are probably the most famous result of this process.

#268 ::: heresiarch ::: (view all by) ::: February 20, 2010, 05:21 PM:

Bruce Cohen @ 264: "I (and a lot of other people who've thought more on the subject than I have) see human thought and consciousness as evolutionary processes¹."

That could be true--I wouldn't be at all surprised if evolutionary processes played a substantial part in human thought processes--but even if that were so, there's still the fundamental fact that those processes are taking place in an abstracted space. If my mind uses an evolutionary model in order to figure out the optimum sandwich-making process, it's still setting up a system of abstractions including a goal, a set of criteria, and all the abstractions that represent the parts of the sandwich-making process, which then needs to be re-translated out into the physical world in order to have a real existence. This is a fundamentally different sort of thing than a purely physical evolutionary system which involves no abstractions, whose evaluation is done by reality itself. Does that make sense?

"And recent studies of the ability of protists like slime mold amoebae to deal with environmental change indicate that "simply a type of positive feedback loop" is not so limited a process as we might think."

Two things: given that my "simply a type of positive feedback loop" covers all the evolutionary complexity of the earth's biosphere, I don't think it's fair to characterize me as arguing that positive feedback loops are in any way "limited." Positive feedback loops are patently capable of immense wonders of complexity: I just don't think that implies understanding.

Second, that article seems to me to be a perfect example of why thinking of complexity handling as being synonymous with understanding is terribly misleading. An amoeba feeds in a "smart" manner? That implies that it is capable of abstractly modeling its environment and reacting accordingly, which I rather doubt is what's going on: complex abstract modeling would be an enormously wasteful way of solving that challenge.

"The difficulty with my definition is just how far down in system complexity you are willing to go while still admitting a system as "goal-oriented", a discussion I don't have time to get into in this post."

Yes, that is one problem that I see. Any number of systems (from lilies to shepherd moons) exhibit tendencies towards self-perpetuating balances--namely because if they don't, they stop being systems. I don't see how one can speak of those systems as intending to self-perpetuate simply because they have done so, no more than systems that destabilize and collapse intended to do so. Intention, goal, objective--all of these concepts only make sense to me in context of a process of abstract reasoning. What a lily does is to intention as a chair is to the concept of chair.

#269 ::: Bruce Cohen (SpeakerToManagers) ::: (view all by) ::: February 20, 2010, 07:52 PM:

heresiarch @ 269:

Then I guess we are using "intention" in different ways, and disagreeing about how the word applies to this discussion. I don't see that intention has much to do with understanding or with reasoning, only with goal-direction, and I'm not convinced that abstraction is necessarily related to reasoning either, unless we class the action of the nervous systems of all complex animals with a central nervous system as "reasoning". ISTM that abstraction is related to model-building, which happens in any nervous system with the ability to remember and react similarly to similar environmental stimuli under different global conditions. But if you want to use different words depending on whether you're talking about human reasoning or animal model-building, that's fine; but then I think you have to agree that humans do what other animals do as well as what you're talking about.

#270 ::: Tim Walters ::: (view all by) ::: February 20, 2010, 09:17 PM:

Bruce Cohen @ 267: I recently completed an electroacoustic piece consisting entirely of feedback (not just audio feedback, but data, for example controlling the frequency of an oscillator with a frequency detector whose input is the oscillator). Maybe I should get you to do a video for it!
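
Something like this toy Python loop, perhaps (a sketch of the general idea only, not Tim's actual patch): a crude zero-crossing frequency detector listens to the oscillator's own output, and its reading is fed back to set the oscillator's next frequency.

    import math

    def detect_frequency(samples, sample_rate):
        """Crude detector: count zero crossings in one block of samples."""
        crossings = sum(1 for a, b in zip(samples, samples[1:])
                        if (a <= 0.0 < b) or (b <= 0.0 < a))
        return crossings * sample_rate / (2.0 * len(samples))

    sample_rate, block_size = 8000, 400
    freq, phase = 110.0, 0.0
    for n in range(10):
        block = []
        for _ in range(block_size):
            block.append(math.sin(phase))
            phase += 2.0 * math.pi * freq / sample_rate
        heard = detect_frequency(block, sample_rate)
        freq = 0.8 * freq + 0.4 * heard   # data feedback: detector output sets the next frequency
        print("block %d: oscillator now at %.1f Hz (detector heard %.1f Hz)" % (n, freq, heard))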

#271 ::: David Harmon ::: (view all by) ::: February 20, 2010, 10:17 PM:

heresiarch #269: Except the shepherd moons don't actually self-perpetuate, they're just in a local energy minimum provided in part by resonance effects. They also don't respond to environmental stress (another defining factor of life), so if we were so inclined, we could probably rearrange them with big ion motors, and they wouldn't protest or defend themselves.

In contrast, life-forms have genuine homeostasis, both individually and collectively -- they will not only defend themselves against their "usual problems", but persist in doing their thing anywhere they find a suitable environment. (And some of them are more catholic than those lilies about what's "suitable"!)

#272 ::: chris ::: (view all by) ::: February 20, 2010, 10:46 PM:

#265: I’m descended from a line of ancestors whose generic ability to solve new problems was crucial to their survival.

Sure. So is everyone on this thread. Lilies, however, are not. Their ancestors' ability to solve old, stable problems (like how to extract useful energy from sunlight, or how to convince some lifeform with mobility to help with their pollination problem) was crucial to their survival, and still is, and the responses that their ancestors evolved are the same ones they use today.

Cognition is not a cost-effective way of solving an old, stable problem. Hardwiring the solution is faster and cheaper. That's why cognition occurs only in a few species (that we know of).

#272: Some life forms will only defend themselves against things their species has evolved to defend themselves against. Put a sufficiently different threat in their environment and they're roadkill (perhaps because they reacted with a misplaced response, like armadillos or opossums).

If I prune a hedge, I can't tell if it protests, but it definitely doesn't defend itself (in any effective way).

Even cognitive species don't necessarily deploy their cognition effectively enough to react to unfamiliar or difficult to perceive threats such as asbestos, or excessive dietary fat intake.

#273 ::: Paula Helm Murray ::: (view all by) ::: February 20, 2010, 11:18 PM:

Opossums are probably a bit higher on the abilities chain than armadillos, perhaps because they're omnivores rather than specialized insectivores, as armadillos are.

As an example, city opossums don't, as a rule, 'faint' (that is basically what playing dead is). If they did, they'd get et. City opossums will stand their ground and show you their fifty bazillion teeth and hiss, and then yield and waddle away if you aren't impressed AND show that you're going to push them off your porch with a broom. They also only live two or three years, due to the stress.

Country 'possums will 'faint' when scared/alarmed/etc. But it isn't a true faint. I had a friend who moved from near where I live now (urbia) into a totally rural environment, but near her mom. She told me about being amazed at hearing something in her trash barrel, going out and yelling at it, and being stunned that the opossum fell off the trash barrel. She then recounted (to my horrified self) that she picked it up, played with it, etc. Then she asked, "Why are you looking at me like that?" I told her it could have waked up at any moment and bitten the crap out of her for playing with it. It IS just a syncope, and they can wake up fairly soon.

#275 ::: Bruce Cohen (Speaker To Managers) ::: (view all by) ::: February 21, 2010, 01:01 AM:

Tim Walters @ 271:

Well, I haven't set video feedback to music in quite a long while, but it might be fun to try it again. The last time I was working with a record of Javanese gamelan music and I was rather pleased with the result. Send me email if you want to try something: brucecohenpdx at gmail dot com.

#276 ::: abi ::: (view all by) ::: February 21, 2010, 08:29 AM:

Bruce Cohen (STM) @264:

My own experience with feedback loops is that they can result in some very complex behavior

...which leads, in many cases, to false mental models, superstitions both peculiar and common, and people telling me I'm pressing the wrong elevator button.

#277 ::: Serge ::: (view all by) ::: February 21, 2010, 09:55 AM:

abi @ 277... No! Not that button!

#278 ::: heresiarch ::: (view all by) ::: February 21, 2010, 05:09 PM:

Bruce Cohen @ 270: "Then I guess we are using "intention" in different ways, and disagreeing about how the word applies to this discussion. I don't see that intention has much to do with understanding or with reasoning, only with goal-direction,"

For me, that ends up with a statement like "the goal of any system is to do what it does," so that the goal of a lily is growing leaves and flowers and producing seeds, and a lump of rock's goal is to continue being a lump of rock. While there are certain circumstances where I can see "A rock wants to be a lump of rock" being a useful expression of something, I don't think it maps very well onto something like "Bob wants ice cream."

There's a qualitative difference between a tendency which is built into the very nature of a system and an intentionality which is itself a variable within the system. For a lily, the evolutionary imperative to self-replicate can't be changed or modified--it can experiment with an infinite variety of ways to accomplish that goal, but it can't ever forsake it or change it. Humans can, though: a person can choose not to reproduce, or even choose to end their individual existence. This is a fundamentally different sort of intentionality than a lily's drive to reproduce.

Which brings me to another point: even assuming an evolutionary model, abstract decision-making as performed by the human brain isn't equivalent to a lily's behavior--it's equivalent to the behavior of the entire lily species, or even of the entire universe. The lily is equivalent to one single candidate state. I guess I'm a lot more comfortable with the statement "the universe intends to create life" than I am with the statement "a lily intends to reproduce."

#279 ::: Graydon ::: (view all by) ::: February 22, 2010, 12:01 PM:

heresiarch @279, @257 --

The purpose of a system *is* what it does; this is one of the core insights necessary to doing good system design. (That new Point-of-Sale terminal software? If it's causing suffering, _causing suffering is what it's for_. If you don't think like that, your odds of producing good system design aren't what they might be.)

Humans do not use, and are not capable of, abstract decision making; humans (often) think that's what they're doing, but there's no evidence to support the view. Faking many of the capabilities with exapted neurological capability isn't the same; we're still inside the space of inherited constraints.

A structural relationship can be usefully modeled by a reductive model. A functional relationship cannot. (So we get a useful model by reducing planets to point masses; we can't get a useful biological model by reducing organs to point-kidneys or the abstract ideal of a lily.)

What Makes Biology Unique? by Ernst Mayr is a good place to start on this; it's a bit dry and a lot magisterial, but it's also one of those "I've been doing this for 80 years and might be starting to understand a small amount" books.

#280 ::: Bill Stewart ::: (view all by) ::: February 22, 2010, 10:17 PM:

Heresiarch@257, somehow I missed your sentence about "shepherd moons" being a feature of Saturn's rings, so the next time I saw the phrase I was wondering what Enya had to do with it... (And White Lily was Laurie Anderson, which somehow seems appropriate.)

Heresiarch@279, I'm more in agreement with your last sentence than the rest of your posting. If lilies have anything like Intention, it's phototropism, or the tendency of roots to grow towards water.

I don't think the concept of an "Evolutionary Imperative to Self-replicate" makes sense - if organisms don't self-replicate then there'll only be any of them around if other organisms keep making more of them, and they'll disappear if that doesn't happen, but that doesn't mean there's any Evolutionary Imperative, it just means that "ok, they'll disappear then." Nor do lilies "experiment with" ways to accomplish that [whether or not "that" is a "goal"] - they just do stuff, sometimes in randomly different ways, and sometimes that results in reproduction, but Darwin doesn't really care whether it does or not, because he doesn't really care which species become common or rare or extinct. If there's a Creator, He might care about that, and he might have kicked around some butterfly-shaped particles during the Big Bang or even shaken up some dice in Schroedinger's cat-box, but Darwin's neutral about the results.

It really frustrates me when the people who think they're on the Evolution side of the Creation-vs-Evolution debate go attributing things to Evolution that don't belong there, like Progress or Fitness or Continuous Improvement or Historical Necessity or just about anything else in capital letters except Death and Taxes\\\\\Entropy. If that seems bleak and meaningless, go talk with a Creator, or an existentialist, or go admire all the beauty around us even if it's all transitory, but don't go trying to make Evolution anything except what it is.

#281 ::: TexAnne sees spam ::: (view all by) ::: February 26, 2010, 01:05 AM:

Word salad.

#282 ::: Scott Wyngarden ::: (view all by) ::: February 26, 2010, 01:34 AM:

Although "intellectual brain tumor" is an phrase full of possibility

#283 ::: Serge ::: (view all by) ::: February 26, 2010, 02:28 AM:

Marco Terminesi?
With a name like that, he was bound not to be long for this world.

#284 ::: Mark R ::: (view all by) ::: March 05, 2010, 08:15 AM:

The elevator story caught my eye. My wife had a colleague - a woman in her twenties - who thought that way about lifts, so it's good to learn of another example. My wife still sometimes cites her as an example of a very original and creative person whose way of seeing the world is oddly different from most other people's. She was charming and goofy, somewhat in the way of the character played by Tamsin Greig in Green Wing. (It's a (UK) Channel 4 series that I thoroughly recommend if you don't already know it.)
