Archive for May, 2010

Okay, say it with me: Comments *are* Actions

May 21, 2010

So the good news from yesterday is that Google Buzz has opened up a bunch of APIs.  It’s officially a Labs project, so they’re doing it kind of tentatively (having been bitten in the ass by releasing Buzz itself too quickly and broadly), but by and large the new API looks pretty good.

But to my disappointment (though not at all to my surprise), it bakes flat commenting right into the data model.  If I’m reading this right, you can have “activity” objects (like a post), each of which has exactly one Comment Collection associated with it.

Why does this matter?  Because it makes the usual mistake of thinking about an “action” and a “comment” as completely different things.  They’re not, and it’s pretty broken to think about them that way.  In the larger online world, they’re just elements in the larger conversation that we are each having with our friends.

In practical terms, there are lots of implications here.  For example, by structuring things this way, it means that threaded discussions are right out — currently ruled out by the data model, and never likely to work quite right.  On the flip side, it has no concept of the other ways that an Activity can itself be a Comment — for example, a video, or another discussion, or something like that which is spawned off from a previous one.
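To make the distinction concrete, here’s a minimal sketch of the alternative model I’m arguing for — every post, comment, or video reply is just an Activity that may point at a parent, so threading and non-text replies fall out of the data model for free. All the names here are my own illustration, not anything in the Buzz API:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: a unified Activity type, where a "comment" is just
# an activity with an in_reply_to link. Contrast with a model where each
# post owns exactly one flat comment collection.
@dataclass
class Activity:
    author: str
    content: str
    kind: str = "note"  # could be "note", "video", "discussion", ...
    in_reply_to: Optional["Activity"] = None
    replies: list["Activity"] = field(default_factory=list)

    def reply(self, author: str, content: str, kind: str = "note") -> "Activity":
        child = Activity(author, content, kind, in_reply_to=self)
        self.replies.append(child)
        return child

post = Activity("alice", "Thoughts on commenting models")
c1 = post.reply("bob", "Agreed: comments are actions")
# A reply needn't be text -- it can itself be a video, a spawned discussion, etc.
c2 = c1.reply("carol", "Here's a video response", kind="video")
assert c2.in_reply_to is c1 and c1.in_reply_to is post  # arbitrary-depth threading
```

With a flat comment collection, the `c2` reply above is simply inexpressible: it has to be flattened into the same bucket as `c1`, losing both the thread structure and the fact that it’s a different kind of activity.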

None of which is new and different, mind.  It’s just a little depressing to see Google (which often does a good job of analyzing problems) making the same mistake that so many other sites have done.  That’s doubly true now, after Wave did a pretty good job on this.  (Although Wave then tried to do *so* much in the UI that it comes out as a little intimidating.  Their mistake was the opposite: trying to expose every conceptual detail to the user too quickly.)

The conclusion is that, while Buzz is decent at light-touch social-grooming sorts of communication (like Facebook), it’s not likely to ever be good at deep conversation (like LiveJournal) unless they wise up and fix this conceptual problem.  That’s a pity: the world needs more social networks that have a clue about how serious conversations really work…

Crowdsourcing can only take you so far

May 17, 2010

Interesting article here on ReadWriteWeb, about Facebook’s approach to banning.  It’s a bit hyperbolic, but assuming it’s correct (and really, it wouldn’t surprise me), it implies some dangerous naivete on Facebook’s part.

The high concept is that banning on FB is somewhat crowd-sourced — if a lot of people complain about someone, FB auto-bans them.  FB is claiming that this isn’t true, that all bans are reviewed; putting all the stories together, my guess is that the auto-ban *is* true, but that FB then reviews them after-the-fact.  That’s a plausible approach, but not a good one, since it means that a vengeful crowd can at least partly silence their detractors.

Mind, like I said, I don’t think it’s surprising: when you’re dealing with millions of users, including a fair number of trolls, and you have limited staff, you need *some* way to make things manageable.  But a simple numeric auto-ban (which this may well be) is too easy to abuse.  In our modern, polarized world, almost anybody who says anything really interesting is likely to have a crowd against them.

None of which means that an automated solution is impossible or evil — it just means that you need to be smart.  The story implies, quite plausibly, that there is a Facebook page dedicated specifically to listing people to attack with complaints, to get them kicked off.  If so, a smart network-detection system can pick it up.  If twenty completely random people complain about someone, the target is probably a troll.  If the *same* twenty people complain about person after person, then it’s much more likely that the complainers are the trolls (or at least, are abusing the system) — and *they* are the ones who should be banned instead.  At the least, it indicates that something suspicious is going on here, and the automated systems shouldn’t be trusted to make a decision without a human looking into it in detail.

Social networks are bigger and in some ways more complex than anything else the world has ever tried to grapple with.  That demands both cleverness, and openness about how you are managing them so that people can poke at those management techniques and find their holes.  I suspect Facebook is failing on both counts.

How would you deal with this?  Do you think automated mechanisms are even legitimate for deciding who to ban?  What tweaks should such a system put into place, to make it harder to abuse?

The little problems of coarse-grained privacy

May 14, 2010

I have to admit that I’m taking twitter a lot more seriously than I used to — at Arisia this year, @shava23 convinced me that, if you manage your flist very carefully, it can be an extremely useful information feed.  Yes, many people still post too many “I’m eating waffles!” tweets, but if you ignore those and focus on friending people who mainly post content, it can be concisely useful.

(There are lots of folks who use Twitter for socializing.  Honestly, I don’t get that: even Facebook is a lot better at it than Twitter is.)

But it’s still got real problems, and one of those problems is its ridiculous all-or-nothing privacy model.  In most social networks, you choose on a post-by-post basis which items are locked and which are public; in the good ones, you can design highly customized filters for who will get to see what.  But in Twitter, either your entire feed is public, or it’s all locked — there’s no in-between.  That made sense when all posts were via SMS, but I think that stopped being the case quite some time ago.
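For the sake of concreteness, here’s a tiny sketch of the post-by-post model I mean, with named filters in the LiveJournal style. The structure and names are my own illustration, not any real service’s API:

```python
# Hypothetical sketch: per-post visibility filters, as opposed to an
# all-or-nothing lock on the whole feed.
class Feed:
    def __init__(self, owner):
        self.owner = owner
        self.filters = {"public": None}  # None means visible to everyone
        self.posts = []                  # (text, filter_name) pairs

    def define_filter(self, name, members):
        """A named, customized filter: the set of readers allowed in."""
        self.filters[name] = set(members)

    def post(self, text, filter_name="public"):
        self.posts.append((text, filter_name))

    def visible_to(self, reader):
        out = []
        for text, fname in self.posts:
            allowed = self.filters[fname]
            if allowed is None or reader == self.owner or reader in allowed:
                out.append(text)
        return out

feed = Feed("alice")
feed.define_filter("close-friends", {"bob"})
feed.post("I'm eating waffles!")                           # public
feed.post("Job-hunt update", filter_name="close-friends")  # locked, per-post
assert feed.visible_to("stranger") == ["I'm eating waffles!"]
assert feed.visible_to("bob") == ["I'm eating waffles!", "Job-hunt update"]
```

The point of the sketch is the `stranger` case: a prospective follower can still see the public posts and judge whether the feed is information-rich, even though some posts are locked. Twitter’s model collapses `visible_to` for a locked account to the empty list for everyone outside.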

This has some serious mal-effects — and one of them relates directly to that usefulness thing.  Consider: Twitter is most useful if you limit your following to people whose posts you actually find useful.  It’s still a social network, though, so unfriending is fraught — I’m sometimes forced to do it, in the interests of keeping that filter narrow, but it’s not something to do casually.  And if somebody’s feed is locked, I can’t see anything they say until *after* I friend them and they allow me in.

The result is that I find myself leery of friending anybody whose feed is locked.  Before I friend them, I can’t see what it’s like, to figure out if it’s information-rich.  And I’ve been doing social networks for long enough to be just a little nervous about the potential drama if I follow somebody, see that they’re posting way too much, and immediately drop them.  So I wind up not reciprocating a bunch of follows, which hurts the social network.

(Yes, it’s now possible to use lists to limit who I am actually reading.  In the long run, this may ameliorate the problem.  But third-party support for lists is still often crappy, so I’m not using them as much as I might wish yet.  Someday…)