Archive for November, 2008

Lively, and the Braid

November 20, 2008

Today’s tech news is that Google is killing off Lively, its “Second Life killer” project. This isn’t wholly astonishing — while it looked very intriguing at first blush, there just wasn’t much There there.

That said, I don’t think this says much about the virtual-worlds idea in general. Yeah, there’s some faddishness to it, and yes, it continues to be marginal. But seriously: I still don’t think anyone has done a very good job of implementing it. Heaven knows that Lively wasn’t very impressive, precisely because it was too limited — basically just self-designed chat rooms with 3D avatars. That’s amusing for a little while, but it provides little benefit over and above text chat rooms, with massive overhead.

There is a moral in this, and it’s why I’m posting about this here — conversation systems need to be as natural as possible.  The form and technology strongly affect the conversations.  I’ll talk about this more at another time, but the immediate point is that you can’t build a successful conversation system that is a hassle.  No matter how cool it is, people just won’t use it.  (This is why I spend at least half my time on CommYou trying to make it easier to use.)

Anyway, it does surprise me a little that this hasn’t been done right yet; my best guess is that this is simply because there isn’t an obvious business plan in doing it well. But I’m still quite clear on how virtual worlds *should* work: it’s more or less what I described as the Braid a number of years ago. High concepts of the Braid:

  • It should be 100% decentralized — anybody can run servers that host however much “space” they want.
  • It must be based on completely open protocols, and there should be straightforward if basic reference implementations for all the pieces.
  • The “space” absolutely, positively *must* be extensible in some fashion, without a simplistic Euclidean model constraining it. This could be done with either Mark Pesce’s Cyberspace Protocol or (better, IMO, but weirder) my Portals Protocol.  (I can’t find the full description, sadly, but here is my original thinking about it, and more can be found in the www-vrml archive.)
  • You have to be able to *build* things in this space. People should be able to create objects, and those objects should be fully tradeable in a trusted way.
  • You have to be able to *do* things in this space. There should be at least a clear baseline concept of physics, and an ability for spaces to define extensions to cause-and-effect. The space needs to be programmable: the owner of a space should be able to define how that space *works*.
  • Combining the above two, it should be possible to build *stories* that happen in this space. At that point, you have a distributed system for writing at least simple RPG-oriented videogames, which I believe is the initial killer app to spread the thing around.
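To make the "not Euclidean" point concrete, here is a minimal sketch of what portal-based topology might look like. To be clear: every name here is hypothetical and invented purely for illustration; this is not the actual Portals Protocol, just one way the idea could be modeled.

```python
from dataclasses import dataclass, field

# Purely illustrative sketch of portal-based topology; all names are
# hypothetical -- this is not the actual Portals Protocol.

@dataclass
class Portal:
    """A one-way link out of a space, possibly to a space on another server."""
    label: str
    target_server: str
    target_space: str

@dataclass
class Space:
    """A chunk of 'space' hosted by whoever runs the server -- decentralized."""
    space_id: str
    owner: str
    portals: list = field(default_factory=list)

    def link_to(self, label, server, space_id):
        self.portals.append(Portal(label, server, space_id))

# Two spaces on different servers, connected only by portals.  With no global
# coordinate system, the topology need not be Euclidean: a portal can even
# lead back into the space it came from.
tavern = Space("tavern", owner="alice")
forest = Space("forest", owner="bob")
tavern.link_to("front door", "bob.example.net", "forest")
forest.link_to("hollow tree", "alice.example.net", "tavern")
forest.link_to("looping path", "bob.example.net", "forest")  # a loop!
```

The design point is that connectivity lives in the portals, not in any global geometry, so anyone's server can bolt arbitrary new space onto the whole without asking permission.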

Granted, that’s a tall order — when the VRML project started in ’94, it was probably unrealistic of me to expect that this was where it was headed. But at this point, it’s by no means rocket science: all the necessary elements exist, and many of them are commodity code by now.

A lot of projects have sprung up over the years, and many have gotten some of these pieces right, but nobody has put them all together yet. The best progress has been made by for-profit ventures like Second Life, but most of those are proprietary (or at least started that way), with the result that they don’t *think* about the problem in the right way: they weren’t designed around open protocols, so they aren’t as powerful and flexible as one might wish.

It’ll happen one of these days, though. For all I know, it’s possible that someone *has* done it, but hasn’t gotten the killer app part needed to get lots of people using it. I’m still quite sure that the potential is out there, but I don’t think it’s going to be realized until people think of the “space” — what I call the Braid — as a commons, in much the same way that the Web is. When it is open enough that *anybody* can build in it and do cool things in it, whether for profit or for free, you’ll see the explosion of creativity that will make it real.

Have you seen virtual-world systems that you think have real potential?  Do you think the whole idea is a dead end?  What do you think would make a virtual world interesting and worthwhile enough to spend ongoing time in?  Do you already do so, and if so, in which systems?  This is a bit of a tangent from our usual discussion of conversations, but it’s an important one: if virtual worlds do become more important, they’ll eventually be an important medium of conversation unto themselves.

The Creature Under the Bridge

November 18, 2008

A friend of mine mentioned recently that she needed to step back from a blog that she was fond of — not because of the content of the blog itself, but because of the arguments going on in the comments.  A particular commenter’s poorly-written stridency was making her grumpy enough that reading was getting unfun.

So let’s talk about Trolls.  For purposes of this discussion, I’m going to define a “troll” as a frequent commenter who, by their content and style, tends to raise the heat of the conversation.  Some do so deliberately in order to bait people; many don’t realize they are doing it, and would deny (loudly, of course) that they are doing any such thing.

Frankly, trolls are worse than spammers.  Spam at least has the virtue of being obvious: computers can generally recognize it with considerable accuracy, and humans can recognize it 99.9% of the time.  But trolling is in the eye of the beholder; there is no objective measure of it.  Partly for that reason, people are generally too tolerant of trolls, which makes them a common problem in online conversation.

I suspect that many people haven’t yet come across the Five Geek Social Fallacies.  It’s a generally interesting list, and good reading for anyone who considers themselves geeky.  (Even if they don’t apply to you, they probably do to some of your friends.)  But the first one is especially relevant to bloggers, even non-geek ones: “Ostracizers are Evil”.  Basically, the fallacy is the idea that kicking anybody out of any group is automatically bad.

Another cause of the problem: the Net is dominated by people who have proudly internalized the First Amendment and regard censorship as fundamentally evil; therefore, they don’t think they have the right to kick someone out of a conversation.  But you aren’t the government: if you are running a conversation, you sometimes have both the right and the responsibility to take care of it.

Trolls erode polite conversation.  Not only do they derail otherwise healthy conversations, they shut the more interesting people up, or drive them away, simply because the conversation becomes too annoying to deal with.  It’s not unusual for an online community to go into a death-spiral because the strident loudmouths drive out the more moderate folks.

The implication here is that, yes, there are times that you really should kick someone out of the conversation, not because they are spamming or off-topic, but simply because they are rude.  If you don’t, you may find that everybody else eventually goes away instead, because they don’t want to deal with the jackass.

All that said, be balanced.  Not everyone with strong opinions is a troll, nor is everyone who disagrees with you.  A healthy conversation often involves some disagreement.  The question is: is this person disagreeing solely for the sake of disagreeing?  Or, worse, for the sake of provoking people?

I recommend taking a gradual approach.  If you think someone is causing problems, talk to them about it, privately if possible.  Ask them to tone it down.  If they don’t, warn them; if they refuse to listen to warnings, or don’t get the clue, ban them.
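The gradual approach above can be sketched as a simple escalation ladder (the states and names here are my own invention, purely to illustrate the progression):

```python
# A hypothetical escalation ladder matching the progression above: talk to
# the person first, then warn, then ban.  States and names are invented.
ESCALATION = ["ok", "talked-to", "warned", "banned"]

def escalate(status):
    """Move a commenter one step up the ladder; 'banned' is terminal."""
    i = ESCALATION.index(status)
    return ESCALATION[min(i + 1, len(ESCALATION) - 1)]
```

The point of the ladder is that banning is the last step, not the first: each rung gives the person a chance to get the clue before you resort to the next one.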

Keep in mind that trolls thrive on anonymity.  If your conversational environment allows anonymity, there is essentially no way to keep trolls out.  This is one of the reasons I dislike entirely anonymous forums.  Pseudonymity is fine — there is some kind of identity that you can kick out.  But if you allow anonymous comments, I strongly recommend screening them — otherwise you have no way to ban the trolls.

Next time: a bit more discussion about Moderation, and why it’s important.

Zuckerberg’s Law

November 12, 2008

Before I start, a brief apology for how long it’s been between entries.  I’ve been heads-down focused on getting the alpha release of CommYou out the door, and haven’t had the cycles to focus on blogging properly.  With any luck, now that that’s out, I can get back to a more regular schedule.

So, with that said, let’s restart by taking a look at this past weekend’s hot story: Zuckerberg’s Law.  This was described in the New York Times, quoting Mark Zuckerberg (the founder of Facebook) as saying:

I would expect that next year, people will share twice as much information as they share this year, and next year, they will be sharing twice as much as they did the year before.

Those in the field will recognize the echo of Moore’s Law, coined back in the ’60s: strictly, the observation that the number of transistors on a chip doubles every couple of years, popularly glossed as computers getting twice as powerful every 18 months.  It’s a compelling (if slightly self-serving) assertion, and relevant to the topics here, so let’s talk about it a bit.

First, let’s get one thing out of the way: I doubt we’re talking about another Moore’s Law, likely to be literally true for decades.  This is simply because we’re dealing with fractions here: what percentage of “everything” is being shared.  Unless you believe that the total amount of data that is available to be shared is going to keep doubling every year (which is possible, but I’m skeptical), we’ll hit limits before terribly long.  It’s just barely possible that the amount not being shared will halve every year.
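The arithmetic behind that skepticism is easy to check.  Assuming (purely for illustration; the starting figure is made up) that 1% of shareable information is shared today and the shared fraction doubles every year:

```python
# Illustrative arithmetic only: suppose 1% of shareable information is
# shared today (a made-up figure), and the shared fraction doubles yearly.
fraction = 0.01
years = 0
while fraction < 1.0:
    fraction *= 2
    years += 1
# The doubling hits the ceiling in well under a decade.
print(years)
```

Even from a tiny starting fraction, the doubling runs out of room in a handful of years, which is why the literal form of the law can’t hold for long unless the pool of shareable information itself keeps doubling.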

But let’s weaken Zuckerberg’s statement a little, and assume that the amount being shared is going to keep going up rapidly.  What are the implications if this is so?

I’ll assert that privacy is going to become ever more important.  People have been discovering this on Facebook: failure to keep stuff private is leading to frequent real-world consequences.  (For instance, consider the Patriots cheerleader who was recently fired due to a scandalous picture showing up online.)  On a more personal level, I think the digital native generation is getting the clue that it’s very easy to wreck relationships by saying too much online.

This being the case, everybody is probably going to move towards having at least some privacy controls.  This will probably hurt systems like Twitter, that are designed to be extremely open: I’d guess that they will gradually lose ground to competing tools that give the poster more control.  (Twitter might be able to introduce privacy controls, but I suspect it’ll be hard for them: it doesn’t fit the way the tool thinks at this point.)

On the flip side, this trend seems likely to sharpen the digital divide between generations.  The digital natives appear to be much more open at an instinctive level: they happily share things their parents wouldn’t have dreamed of.  And while I think they’ll learn some hard lessons and tighten up a bit as they get older, openness will probably remain their default.  That might have broad cultural ramifications, as sharing, rather than privacy, becomes the default.  America is famously more private than, say, some Asian countries; this might well shift over the next 50 years.

Finally, we’re going to see the rise of more tools focused on integrating data.  Social networking today looks a lot like the Web before Google: there’s an enormous amount out there, but it’s increasingly hard to find.  I think as fondly of the days when all of my friends were just on LiveJournal as I do of the days when I could follow every new website as it came online.

At the moment, the data we’re sharing online is very fragmented: the serious digerati often have their online selves scattered across dozens of sites.  You can see the rise of integration already — indeed, you can tell that a site has Made It when everybody scrambles to import data from it — but it’s a haphazard pain in the tuchus right now.  In order to support ever-more sites sharing ever-more user-centric data, we’ll need ever-better standards for that sharing.

All of which leads me to one concrete prediction: if Zuckerberg is right, it means that his company will gradually stop being able to control the flow of information.  Currently, Facebook is in the driver’s seat: they dictate the terms of sharing a user’s information, often contrary to what the other companies and the users themselves want.  For now, they can get away with that because they’re the 800-pound gorilla.  But more information sharing will almost certainly lead to that information getting scattered ever more widely, which will lead to user demand for better integration.  And sites that use more open standards will do a better job of integrating, so Facebook is going to be dragged in that direction as well.

What do you think?  Are we looking at a more-social America in the coming decades?  Will open standards win out, or will Facebook manage to triumph as a partly-walled garden?  Is Zuckerberg correct in the first place, or do you think that sharing will find its level soon?