Crowdsourcing can only take you so far

May 17, 2010

Interesting article here on ReadWriteWeb about Facebook’s approach to banning.  It’s a bit hyperbolic, but assuming it’s correct (and really, it wouldn’t surprise me), it implies some dangerous naivete on Facebook’s part.

The high concept is that banning on FB is somewhat crowd-sourced — if a lot of people complain about someone, FB auto-bans them.  FB is claiming that this isn’t true, that all bans are reviewed; putting all the stories together, my guess is that the auto-ban *is* true, but that FB then reviews them after-the-fact.  That’s a plausible approach, but not a good one, since it means that a vengeful crowd can at least partly silence their detractors.

Mind, like I said, I don’t think it’s surprising: when you’re dealing with millions of users, including a fair number of trolls, and you have limited staff, you need *some* way to make things manageable.  But a simple numeric auto-ban (which this may well be) is too easy to abuse.  In our modern, polarized world, almost anybody who says anything really interesting is likely to have a crowd against them.

None of which means that an automated solution is impossible or evil — it just means that you need to be smart.  The story implies, quite plausibly, that there is a Facebook page dedicated specifically to listing people to attack with complaints, to get them kicked off.  If so, a smart network-detection system can pick it up.  If twenty completely random people complain about someone, the target is probably a troll.  If the *same* twenty people complain about person after person, then it’s much more likely that the complainers are the trolls (or at least, are abusing the system) — and *they* are the ones who should be banned instead.  At the least, it indicates that something suspicious is going on here, and the automated systems shouldn’t be trusted to make a decision without a human looking into it in detail.
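To make the idea concrete, here's a minimal sketch of that kind of detection. All the names and thresholds here are my own invention, not anything Facebook actually does: the idea is simply that before auto-acting on a pile of complaints, you compare the set of complainers against the complainer sets for other targets. Heavy overlap suggests a coordinated brigade rather than twenty independent observers.

```python
from collections import defaultdict

# Assumed policy knobs, purely illustrative:
COMPLAINT_THRESHOLD = 20   # complaints before any action is considered
OVERLAP_THRESHOLD = 0.5    # shared-complainer fraction that looks coordinated

# target -> set of complainer ids
complaints = defaultdict(set)

def record_complaint(complainer, target):
    complaints[target].add(complainer)

def jaccard(a, b):
    """Fraction of overlap between two sets (0.0 to 1.0)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def classify(target):
    """Return 'ok', 'auto_review', or 'suspicious_complainers'."""
    accusers = complaints[target]
    if len(accusers) < COMPLAINT_THRESHOLD:
        return "ok"
    # If this target's accusers overlap heavily with the accusers of
    # other targets, the complaints look coordinated -- escalate to a
    # human instead of trusting the count.
    for other, other_accusers in complaints.items():
        if other == target:
            continue
        if jaccard(accusers, other_accusers) >= OVERLAP_THRESHOLD:
            return "suspicious_complainers"
    return "auto_review"
```

Twenty unrelated complainers against one account would come back `auto_review`; the same twenty ids complaining about target after target would come back `suspicious_complainers` for each of those targets. A real system would need time windows, weighting by complainer history, and scalable set-similarity (the pairwise loop above is quadratic), but even this crude version catches the attack-list scenario the article describes.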

Social networks are bigger and in some ways more complex than anything else the world has ever tried to grapple with.  That demands both cleverness, and openness about how you are managing them so that people can poke at those management techniques and find their holes.  I suspect Facebook is failing on both counts.

How would you deal with this?  Do you think automated mechanisms are even legitimate for deciding who to ban?  What tweaks should such a system put into place, to make it harder to abuse?