1 — Community, society, diversity and stasis
According to the mythology we've received from the neckbeards we find squirreled away in server rooms, Eternal September turned the Internet from a place of constructive conversation and engagement into an endless and unwinnable war against trolls, griefers, crapflooders, spammers, and the 13-15-year-old demographic.
Antediluvian John Allen (in the linked video above) makes what are now risible claims about "Internet":
There's an interesting kind of restraint that you find. There's not a lot of cursing or swearing. There's not a lot of personal cuts. There's not a lot of put-downs that one would expect to find. There's not screenfulls of "go to hell." It's surprising. The kind of liberation is mixed. It's interesting because one would think, if you're anonymous, you'd do anything you want. But people in a group have their own sense of community and what we can do. The thing that I'm always left with, when I leave, is this overwhelming desire for people to be rooted, and the only way they feel rooted is through another person. And if this is the way, the only way maybe, that they can talk to somebody, this is how they'll do it.
The problem that Eternal September presented to this command-line Eden was one of growth and socialization. When it was just the yearly influx of freshmen gaining Internet access for the first time, the socialization task was manageable. But with the flick of a switch, AOL unleashed millions of its internet-with-training-wheels subscribers on Usenet. The flood of new users ran roughshod over sys-admins' individual moderation capabilities, with no regard for their established notions of civil vs. rude behavior. More significantly, AOL users overran the ability of the communities themselves to socialize newcomers by example, hints, rebuke, and frustrated injunctions to "lurk moar!"
Clay Shirky, dubious internet commentator who has somehow scammed a job at NYU teaching "new media," calls this an "attack from within":
[A]ttack from within is what matters. Communitree wasn't shut down by people trying to crash or syn-flood the server. It was shut down by people logging in and posting, which is what the system was designed to allow. The technological pattern of normal use and attack were identical at the machine level, so there was no way to specify technologically what should and shouldn't happen. Some of the users wanted the system to continue to exist and to provide a forum for discussion. And other of the users, the high school boys, either didn't care or were actively inimical. And the system provided no way for the former group to defend itself from the latter.
The problem faced by online forums in a post-Eternal September world was not a technological problem, because the system was working as designed. It was a social problem. Communities disintegrated as the scope of their world widened following the technological baptism of the television-classes.
German, pragmatist, neo-Marxist, critical theorist, and possessor of a rather large nose, Jürgen Habermas is most famous for his concept of the 'public sphere.' Just as John Allen's Usenet Eden fell from grace in the Eternal September, so, in Habermas's account, did the liberal public sphere of the Enlightenment. The public sphere was a space within which people of varying backgrounds could come together to discuss the issues, problems, and culture of the commonweal. It was a space for reason and public criticality. But, significantly, it was also a place in which bourgeois and aristocrats came together as if they did not have social class differences and therefore different personal interests in the public problems under debate. Their ability to come together as if they did not have class or social interests was premised on the exclusion of the vast majority of society: women, workers, peasants, conservative nobles, slaves, etc. The pre-September 1993 Usenet can be seen as such a public sphere, before the baptism of the lower classes. Sixteen years on, the 'as if' problem remains: how do we organize ourselves civilly if we let just anybody join in?
German sociologist Ferdinand Tönnies first investigated the difference between 'community' and 'society' (respectively, Gemeinschaft and Gesellschaft). Small groups can exist in a sense of organic community, not requiring formal rules because a sense of common mores or norms unite them. Personal relationships can be cultivated and are quite strong, and there is little need for external enforcement. John Allen's quaint description of early Usenet illustrates Tönnies' idea of community. Larger groups find community hard to sustain. Individual interest rules behavior rather than common mores. Society, as opposed to community, is based on explicit rules that require enforcement. Society possesses greater flexibility and potentially more capability, but individuals are subject to greater anomie and anti-social behavior. Internal factional conflicts occur more frequently, despite the greater modularity of individuals' function in society.
The internet is still dealing with the problem of community collapse. Each site that attempts to build community and grow in size inevitably reaches a tipping point at which socialization into community is no longer possible. Community mores and identity break down into society, and conflicts between old and new users increase. Those committed to the identity of the site are left with two options: form an oligarchy or flounce.
- Slashdot used moderation and 'karma' in order to defeat trolling, but ended up creating insufferable groupthink magnified by braindead editor-controlled story selection.
- Kuro5hin quickly gave up on effective moderation, 'mojo,' and trusted users, ending up in a trollocaust flameout and extended undeath.
- 4chan's /b/ has suffered from uncontrollable, metastasizing, cancerous newfags.
- Digg's owners have deliberately expanded from a tech 'community' to a general interest 'society,' and abetted the continued existence of 'power users' and 'bury brigades' gaming the system in order to control the front page.
Dunbar's number is one anthropologist's attempt to define the threshold beyond which community is no longer cognitively possible. Various numbers are proposed—150, 230, 290—but the key point is that the capacity of a community's members to sustain social relationships determines its ultimate size. Face-to-face relationships obviously have different requirements for their maintenance than do online relationships. As such, Dunbar's number (if indeed the concept is itself valid) ought to face different hurdles in scaling online than in the Pleistocene societies that Robin Dunbar studied—notwithstanding The Economist's recent defense of Dunbar-on-the-web. (As an interesting side note, trolltrack shows that monthly diary usage on k5 for the last two years has been between 120 and 150 users, lending some credence to Dunbar's number.)
Our own LilDebbie asserts that community doesn't scale. It's painful to admit that, to a limited degree, he's right. But his absolute statement should be qualified: community doesn't scale easily or rapidly. In between taking bong hits, griefing Scifags, and running for Senate, Debs realized that k5 has reduced in scale from society to community, whereas Slashdot remained a society in which "Community doesn't matter [because] the comment and article volume is too great for any single voice to carry over the wave." Society scales easily because users are interchangeable; community scales with difficulty because relationships and identity are not.
2 — Shii contra Shirky
Thinking about the community and society problems faced by online forums, we run into two opposing conceptions of identity: persistent identity and anonymity. Although there are a number of advocates for either position, on many different grounds, I'm going to choose two representatives to stand in for them: Clay Shirky and Shii.
Most respectable forums implement an identity system. Slashdot, Kuro5hin, Advogato, Wikipedia, Digg and so on down the line. The thinking is twofold:
- People prefer having an identity, keeping track of their comments and friends, and adorning their userpages with links and avatar pictures; and,
- Persistent identities allow for effective control through moderation rewards and penalties.
Localroger and Delirium argued over Shirky's article before, but I think a brief recapitulation of its central points is in order. Shirky argues that three things must be accepted when building a successful, long-term community:
- "You cannot completely separate technical and social issues."
- "Members are different than users."
- "The core group has rights that trump individual rights in some situations."
So, to a degree, the community structure is reducible to the technological structure. However, behaviors and uses that cannot be accounted for or 'solved' by changes to that technological base will always emerge. Shirky's formula is weighted toward preserving the community rather than embracing the society conception of online forums. His advice is to preserve existing forms of interaction even if that means suppressing new ones.
The technological base that he advocates is a strong system of persistent identity (although he prefers saying 'handle' instead of 'identity'). There are four components:
- "Handles the user can invest in... It's pretty widely understood that anonymity doesn't work well in group settings, because 'who said what when' is the minimum requirement for having a conversation... There has to be a penalty for switching handles. The penalty for switching doesn't have to be total... I have to lose some kind of reputation or some kind of context."
- "You have to design a way for there to be members in good standing. Have to design some way in which good works get recognized... You can do more sophisticated things like having formal karma or 'member since.'"
- "You need barriers to participation. This is one of the things that killed Usenet. You have to have some cost to either join or participate, if not at the lowest level, then at higher levels. There needs to be some kind of segmentation of capabilities."
- "Spare the group from scale. Scale alone kills conversations, because conversations require dense two-way conversations. In conversational contexts, Metcalfe's law is a drag."
The political science terms for what Shirky is trying to say are 'asset specificity' and 'selective incentives.' Users need to earn non-portable assets on an individual basis as a reward for constructive contributions to the community.
Unfortunately for Shirky, most of these suggestions have already been implemented in traditional forums and have been found wanting. First, handles do not prevent any negative, community-destroying behavior. Nor do rewards for good behavior. This is due to the multiple identity syndrome inherent in interacting online. We here at k5 represent a malignant example of duplicate accounts engaging in trolling, griefing, crapflooding, shitposting and all other forms of destructive behavior. Dupe accounts, much like the shady accounting practices that allowed Enron to shift all its losses onto the balance sheets of fictive subsidiary corporations, allow the user's principal account to retain the selective incentives earned for constructive behavior while shifting all of the negative moderation and other penalties off onto the dupes.
Second, barriers to participation, even relatively minor ones like requiring an account, prevent community growth (and maybe even $300 million in sales). This is, of course, their designed function. Ever since k5 became a gated dysfunctional community we've experienced the slow communal constriction that effective barriers to participation create. While the barrier has solved problematic dupes for the most part (since no one seems to want to waste $5 on an account that will rapidly be banned), it hasn't solved the existing self-destructive behavior that drives away both new users and disaffected old users, see for example: [1] [2] [3] [4]. Once given over to griefers and trolls, it's unclear that normal users will ever return—bad money drives out good.
Shirky's final point on scale is similar to the difference between community and society, discussed above. Once too many people are involved, the ability to have unenforced norms and communal links between users breaks down. As users become interchangeable in their interactions with one another, 'community' collapses into 'society.' He lauds LiveJournal's clustering of users into soft groups, gives a hat tip to Rusty's favorite site which just closes the gates at arbitrary intervals, and notes that IRC and mailing lists are self-regulating insofar as people come and go as they please (a truly profound insight into scaling problems). Kuro5hin has been through the "should we form sub-communities" question before, and never seriously considered it (another option presented in that article, killfiles, has been implemented independently by j1mmy).
Shii takes the polar opposite approach to identity and participation in online forums. As the ideological mastermind behind the era of forced anonymity that 4chan's /b/ underwent at the hands of W.T. Snacks, Shii theorized that registration systems in fact had the opposite of their intended effect.
Shii and Shirky agree that registration poses a barrier to entry, but disagree on its implications for the resulting quality of forum interaction. Shii found that not only did the scale of interaction vastly increase after registration barriers were dropped, but that the percentage of automatically-identified "bad posts" dropped by more than 50%. Shii summarizes his lessons learned in four points:
- Registration keeps out good posters.
- Registration lets in bad posters.
- Registration attracts trolls.
- Anonymity counters vanity.
Just like the $300 million registration button case (linked above), registration can keep good posters out by frustrating their attempt to strike while the iron is hot. Wikipedia's open editing policy (although it grows progressively more closed as time wears on) operates on the same principle: compare the brilliant success of Wikipedia as a forum of interaction with the abject failures of Nupedia and Citizendium. Anyone can dive right into editing Wikipedia, and, like other habit-forming business models, the first hit is always free.
The problem with sites like Wikipedia and Digg is that there are always registered users with less of a life than you. Persistence, not quality, counts for more than anything else. Wikipedians who persist the longest in retarded edit wars will win, regardless of how well-written or well-cited their opponent's contributions are. That persistence earns them community recognition, and eventually a spot among the administrators and the IRC clique. Similarly, the Digg circlejerk of 'power users' spends all day running scripts that automatically submit hundreds of articles pulled from RSS feeds into the Upcoming Stories section, and digg-exchanging by digging every single story submitted by their fellow circlejerkers at a rate of one every few seconds. Truly astounding effort put into dominating the public face of 'community-driven' websites. And for what? The vanity of having your username and icon appear on the front page? And how does registration ameliorate the problem of persistence? How can you kill that which has no life?
Anonymity counters vanity, instilling some degree of egolessness into users. Ad hominems are less effective, and the substance of the comment means more than the person saying it. Anonymity truly makes the users modular in the sense that Ernest Gellner means when he discusses the emergence of industrial modernity and the possibilities for civil society possessed by a new modular man. In "The Importance of Being Modular," Gellner writes:
Modular man can combine into effective associations and institutions, without these being total, many-stranded, underwritten by ritual, and made stable through being linked to a whole set of relationships... This is civil society: the forging of links which are effective even though they are flexible, specific, instrumental.
...
But the modularity, the flexibility of institutions, requires the substitutability of men for each other: one man must be able to fill the slot previously occupied by another. To do this, they need not be identical in all respects: were that so, nothing would be accomplished by the substitution... The communication symbols employed by the new occupant of the slot must be culture-compatible with those of his new neighbors. This is indeed one of the most important general traits of a modern society: cultural homogeneity.
...
The standardization of idiom is in any case imposed on this kind of society by the nature of work, which has ceased to be physical and has become predominantly semantic: work is now the passing and reception of messages, largely between anonymous individuals in a mass society, who cannot normally be familiar with their interlocutors.
Society, especially civil society, depends on shared culture, mores, norms. At the smaller scale, community can enforce its own mores, but as greater and greater scale comes, community collapses into society, and the mores that sustained the older users are incapable of being effectively transmitted to the newly inducted masses.
Maintaining the common institution of culture can be conceived of as a collective action problem. Mancur Olson gave the definitive treatment of the subject in The Logic of Collective Action. According to Olson, small groups are qualitatively different from large groups, when considered in terms of their respective abilities to achieve collective goods. Small groups are small enough that an individual's actions are noticeable by other members. In large groups, the effects of any given user's bad behavior are not necessarily discoverable by all members and there is little incentive for individual members to enforce the group's rules. This is why communities can only function when small, but collapse into societies when their growth outstrips the institutional capacity for individual behavior to be noticed (and punished).
Small groups can be effectively governed when one or a few members are granted greater capabilities to preserve the culture of the community—we call these moderators. While they play a crucial role in most online communities, their ability to police ever-larger numbers of participants is limited. Unfortunately, as the pool of moderators grows and as moderator status becomes increasingly institutionalized, the iron law of bureaucracy sets in. The transition from "MODS=GODS" to "The Cabal" and later to "MODS=FAGS" is a near universal feature of online forums.
As scale overwhelms community and the ability of even well-meaning mods to enforce its norms, society-driven moderation becomes the next option for enforcing cultural homogeneity. Broad swaths of users are given the ability to rate other users and their contributions. For example, new users might be asked to find sponsors among existing users in order to preserve some level of trust and social links. Ta bu shi da yu detailed the perverse incentive structure that sponsored users would create (at the time, Shirky called Ta Bu's opinion "hysterical"). Rusty, k5's deadbeat dad, eventually agreed with Ta Bu, admitting that sponsorship "was a stupid idea."
Users also might be asked to rate the degree to which they trust their fellow users, a system prominently employed by Advogato. While prominent Bay-area musicians have advocated adoption of a similar trust metric for k5, the idea was rejected. Our resident 'low-budget Filipino horror story' took time off from speaking for the vast majority of international governments, civilians, and people of Myanmar to speak for the rest of k5's users regarding the negative consequences of trust metrics: they focus on the individual rather than on their contributions (comments, stories), the outcome being neither community nor society but class conflict and stifling monoculture. As Paul Graham notes in his assessment of lessons learned from administering Hacker News: "It's bad behavior you want to keep out more than bad people. User behavior turns out to be surprisingly malleable. If people are expected to behave well, they tend to; and vice versa."
Furthermore, Advogato's trust metric is not, in fact, attack resistant. Because of the problem of pseudonymity, a troll posing as Richard Stallman was able to gain Master certification from 288 users without independent verification of his identity. Moreover, the attack resistance model is built only to resist multiple dupe accounts under control of a single user. This overlooks the more common internal-culture-war problem, 'attack from within': sites populated by multiple independent trolls, in which destructive behavior can come from multiple actors not necessarily acting in concert, and even from long-running members of the forum.
Alternately, moderation systems like Slashdot's Karma and Digg's approval-style voting put moderation of content (as opposed to moderation of users) into the hands of the userbase as a whole. While viewing Slashdot comments with an appropriately high threshold is effective in displaying only high quality comments, a vast amount of material that is high-quality yet counter to Slashdot's groupthink remains below the threshold. Slashdot's moderation not only separates the signal from the noise in terms of general comment quality, but also in terms of the degree to which the comment appropriately venerates group icons. Moderation abuse is hardly countered by offering these same users incentives (more moderation power) to moderate moderations. Meanwhile, Digg's uniformly pathetic comment quality is barely a step above YouTube's, despite the existence of moderation.
The bottom line is that no active moderation system, no matter how many users are empowered to rate each other and each other's comments, can preserve community in the face of the multiple identity syndrome inherent to online forums. Incentives, barriers, and moderation cannot counter trolls and dupe accounts—in fact, they may make things worse. If we cannot return to John Allen's Eden despite Shirky's formulas for success, then we must plan for life in Shii's Nod.
3 — "Technological solutions for social problems"
Shii's optimistic conclusions about the value of anonymity for social interaction are easily rebutted by the hideous, festering cesspool of 4chan's /b/. Not its subject matter per se, because extreme content is the essence of /b/. The problem with /b/ is the unbelievable rate of shitposting: the same topics over and over, retarded incoherent posts, repetition of tired forced memes. The very occasional strokes of brilliance are rapidly drowned out by noise. 4chan's rapid growth is made possible by the complete lack of barriers to entry. There are no technological barriers—no registration of 4chan Gold Accounts—and no cultural barriers (at least, for any idiot who's discovered the compendium-of-degenerate-culture Encyclopædia Dramatica).
While there may not be technological solutions for social problems, there may be institutional solutions for social problems. Shirky is correct insofar as social dynamics on the web have a technological base: patterns of interaction are shaped by the software used to interact. Knowledge of the capabilities and constraints imposed by forum software conditions how users act, what possibilities they perceive, what type of behavior they expect, and (most importantly) how the system can be gamed. Software is the institutional context within which users act, and within which the collective action problem of maintaining a culture of quality interaction is (hopefully) overcome, despite the problems of scaling, multiple identities, bad behavior, and limited capacity of moderators.
What are the technological (really, institutional) problems that need fixing then?
- Design with multiple identity syndrome as an unavoidable condition of operating on the internet.
- Provide effective selective incentives for constructive behavior.
- Keep barriers to participation as low as possible.
- Make moderation reflect quality rather than simple agreement.
- Make moderation lighten the load on admins.
- Reduce the ability of users to game the system.
The first condition requires making the identity of the poster less important. Slashdot, Wikipedia, and 4chan all allow anonymous contribution, but go out of their way to distinguish the accountless from users with identity. Slashdot allows 'Anonymous Coward' to post with a -1 moderation penalty. Wikipedia allows anyone to edit (nearly) anything, but their edits are identified by IP address and filtered when viewing recent changes. 4chan defaults to 'Anonymous' but allows namefags with secure tripcodes. However, an approach that truly de-emphasizes identity would do the opposite of the above sites: all comments would appear without any indicators of identity. Users with or without accounts would be indistinguishable. Unlike FORCED_ANON on 4chan, which did not allow for persistent identities, such a system would allow for user identity, but only in private. A user could have an account, but there would be no public acknowledgment of their identity linked to their posts.
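A minimal sketch of what that inversion might look like (all names here are hypothetical): the comment model keeps an optional, server-side account reference for private bookkeeping, while the rendering path never exposes it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    body: str
    author_account: Optional[str] = None  # server-side bookkeeping only, never rendered

    def render(self) -> str:
        # Accountless and account-holding posters render identically:
        # no username, no icon, no signature.
        return f"Anonymous: {self.body}"

print(Comment("first post", author_account="lurker42").render())
print(Comment("first post").render())  # indistinguishable output
```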
In such a system, there would be less incentive to troll. Without particular individuals to follow, persistently baiting and harassing individual users would be more difficult. The attention whoring and unwarranted self-importance of trolls would be harder to sustain under forced anonymity. Of course, this would not prevent more generic trolling (starting flamewars on partisan politics, operating systems, religion), but it would mitigate some of the more abusive forms.
Second, selective incentives ought to be provided for constructive behavior. Unlike sites that use social status to indicate constructive users, and thereby focus on individual vs. individual comparisons (giving new targets for griefing, trolling, and anti-social behavior), the incentives provided to users ought to be private in keeping with the forced anonymity. Slashdot gives users with 'excellent' karma automatically upmodded comments, Hacker News briefly highlighted good users with orange-colored usernames, kuro5hin used to have trusted users with special rating abilities, and so on. Just as user identity ought to be private, so too must incentives/status be private. The incentives for retaining one persistent identity are usually related to personalization and a record of all of one's comments, bookmarks, and other activities. Incentives for constructive behavior generally revolve around granting users more influence: more moderation power, more prominent comments, more access to control, more influence over the front page. Most of these are fine, but the focus is off: instead of rewarding good behavior with unique opportunities for more constructive contributions, they reward good behavior with opportunities for control, influence, and negation.
The third condition requires making it as easy to comment as possible. Don't make the user register an account to post a comment. Don't make the user learn a markup language to format their post correctly—Gmail's rich-text composer is a good example of sparing the user from learning HTML, BBCode, or Wiki markup. Don't prevent the user from posting by making them jump through hoops such as the impossible-to-satisfy Slashdot "Lameness Filter" or the mute-banning Robot 9000 from xkcd's IRC channel and 4chan's /r9k/ board. Obvious and strict wordfilters encourage users to game the system rather than work to write better posts. The result is a commenting system that favors those who spend the time to master technical details over those who write useful contributions without knowing the intricacies of the site's parochial commenting system.
Fourth, moderation systems ought to be geared toward identifying quality contributions, rather than signaling agreement. Current moderation systems are based on the premise that better comments will end up with better scores. That premise is wrongheaded. As anyone familiar with Digg's wretched comments can attest, clicking 'thumbs up' on a snarky, flamebaiting, or erroneous one-liner signals almost nothing about the actual quality of the comment. Approval voting systems, wherein comment worth is represented by a raw number score, create an "I agree with this post" dynamic to moderation. There is precious little difference between numerical score-based moderation and the <AOL>Me too!!!</AOL> posts that began flooding into Usenet in September 1993.
Slashdot is the only major forum with a comment moderation system that takes a step in the right direction. While all of its moderation options are either +1 or -1, they all include some kind of descriptor allowing the moderator to assert why the post deserves a higher (or lower) score: insightful, informative, interesting, funny, offtopic, troll, flamebait, etc. Yet they're still wedded to a score-based moderation system. A set of moderation options that reflected quality rather than "I agree with this post" would be a further step in the right direction. No numerical score ought to be visible. The moderation options would be the descriptions of the comments we'd like to see—informative, informative links, engages parent directly, witty—and of the comments we'd like to see less of—one-liner, personal attack, flamebait, troll, abusive links, spam, offtopic. Options to express agreement could be provided too, in order to prevent the descriptive moderation options from standing in as proxies for agreement (moderators rating comments they disagreed with highly in terms of quality might be given extra weight, assuming they're moderating in good faith). Score-based moderation systems foster groupthink and the promotion of content-less one-liners to the detriment of actual conversation. Moderation centered around what makes a good post provides an institutional foundation for altering the dynamics of users' moderation behavior.
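A rough sketch of descriptor-based moderation, with illustrative tag names: tallies of quality descriptors feed a hidden internal signal, agreement is tracked separately and deliberately ignored, and no number is ever shown to users.

```python
from collections import Counter

POSITIVE_TAGS = {"informative", "informative links", "engages parent directly", "witty"}
NEGATIVE_TAGS = {"one-liner", "personal attack", "flamebait", "troll",
                 "abusive links", "spam", "offtopic"}

def hidden_quality(tags: Counter) -> float:
    """Internal signal only; no numeric score is ever displayed."""
    pos = sum(n for tag, n in tags.items() if tag in POSITIVE_TAGS)
    neg = sum(n for tag, n in tags.items() if tag in NEGATIVE_TAGS)
    return (pos - neg) / max(1, pos + neg)

# 'agree' is collected separately so agreement can't stand in for quality.
mods = Counter({"informative": 4, "witty": 1, "one-liner": 1, "agree": 7})
print(hidden_quality(mods))  # ~0.67 — the seven 'agree' clicks change nothing
```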
To further emphasize the quality-not-agreement aspect of moderation, scarcity ought to be applied. Slashdot's system of dispensing a few moderation pellets to its users on occasion works on the basis of scarcity, but suffers from being arbitrary and temporally contingent. A moderation system that operates on scarcity could value a user's moderations at a certain weight over a period of time—the more they moderate, the more they dilute their influence. The stock of moderation weight (ranging from pro-ana to rpresser) assigned to each user could vary according to criteria such as length of membership and quality of their contributions. Unlike Digg, where persistent users can set up scripts to digg hundreds of stories a day, thereby rewarding hideously pathetic levels of persistence, a system in which individual influence is scarce reduces the returns to becoming a 'power user.'
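One possible implementation of scarcity through dilution; the stock value and window length are arbitrary illustrative choices, and a real system might also recompute earlier weights at aggregation time.

```python
import time
from collections import deque
from typing import Optional

WINDOW_SECONDS = 7 * 24 * 3600  # one week; an arbitrary illustrative window

class ModerationStock:
    """Each additional moderation within the window dilutes per-act influence."""
    def __init__(self, stock: float = 10.0):
        # stock could vary with length of membership and contribution quality
        self.stock = stock
        self.recent: deque = deque()  # timestamps of moderations in the window

    def moderate(self, now: Optional[float] = None) -> float:
        now = time.time() if now is None else now
        while self.recent and now - self.recent[0] > WINDOW_SECONDS:
            self.recent.popleft()
        self.recent.append(now)
        return self.stock / len(self.recent)  # more moderations, less weight each

user = ModerationStock()
print(user.moderate(0.0))   # 10.0: first moderation carries full weight
print(user.moderate(60.0))  # 5.0: a second within the window is worth half
```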
Fifth, and closely related to the above point, moderation systems need to be designed to lighten the load on moderators, whether they are admins or the regular users themselves. Paul Graham has become a believer in the "broken windows" theory of maintaining order: small violations of the spirit of expected behavior, if persistent and unchecked, can undermine broader adherence to those norms. To wit: if a rule is unenforced and constantly violated, is it really a rule? The solution that xkcd's Randall Munroe hit upon after reviewing the standard options faced by all rapidly scaling communities—restricted entry, moderators, user moderation, and sub-communities—was a system of passive moderation. Moderation would be automatically applied according to a predetermined set of criteria specifying what qualities a good comment would have. In Munroe's case, originality was the key, and any commenters attempting to say something that had already been said before would be penalized by increasing mute times. A similar project, the StupidFilter, being developed by one of our own, uses Bayesian logic to identify stupid comments based on a seed group of human-identified stupid comments. The criteria for stupidity include: over- or under-capitalization, too many text message abbreviations, excessive use of 'LOL' or exclamation points, and so on. Spam identification systems for email and blog comments (e.g. Akismet for WordPress) do much the same thing, identifying commonalities in junk messages and containing them in a junk/spam purgatory awaiting moderation.
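A toy version of Munroe-style originality enforcement with escalating mute times; the normalization scheme and doubling schedule are my guesses, not his published implementation.

```python
import hashlib
import re

seen_digests: set = set()
strikes: dict = {}

def normalize(text: str) -> str:
    # Strip case, punctuation, and whitespace so trivial variations still match.
    return re.sub(r"[^a-z0-9]", "", text.lower())

def originality_penalty(user: str, text: str) -> int:
    """Return a mute time in seconds: 0 for original comments,
    doubling with each repeat offense (escalating mutes)."""
    digest = hashlib.sha1(normalize(text).encode()).hexdigest()
    if digest not in seen_digests:
        seen_digests.add(digest)
        return 0
    strikes[user] = strikes.get(user, 0) + 1
    return 2 ** strikes[user]

print(originality_penalty("u1", "First post!"))   # 0: novel (this once)
print(originality_penalty("u2", "FIRST POST!!"))  # 2: normalizes to a repeat
```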
Passive moderation can help solve the problem of moderator overload, just as spam filters aid in managing one's email inbox or blog comments. Reducing the number of full-time admins needed for moderation reduces the proclivities toward the "iron law of bureaucracy" and toward user-moderation abuse. Like the above passive systems, a Robot9000++ could be set to identify general characteristics of comments that make them good or bad: not only originality, but also ideal length of the post (with diminishing returns after a certain point), presence of links, paragraph structure, and so on. Likewise, it could identify posts that fit the typical profile of destructive or idiotic behavior: one-liners, ad hominems, common insults, links to shock sites, etc. False positives would be an issue, hopefully less of one over time if it had a Bayesian capacity to learn. But effective admin intervention and/or user moderation could correct erroneously downmodded comments.
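What such a Robot9000++ heuristic pass might look like; every weight and threshold here is an illustrative guess rather than a tuned value.

```python
import math
import re

def passive_score(text: str) -> float:
    """Heuristic quality estimate combining the signals described above."""
    words = text.split()
    score = 0.0
    # Reward length with diminishing returns (log scale, capped).
    score += math.log1p(min(len(words), 500)) / math.log1p(100)
    score += 0.5 * len(re.findall(r"https?://\S+", text))  # links as a positive signal
    score += 0.25 * text.count("\n\n")                     # paragraph structure
    if len(words) < 8:
        score -= 1.0                                       # one-liner penalty
    insults = re.findall(r"\b(?:idiot|moron|retard)\b", text, re.I)
    score -= 0.5 * len(insults)                            # crude ad hominem list
    return score

print(passive_score("lol ur an idiot"))                 # negative: short and abusive
print(passive_score("Here is a detailed rebuttal with a source: http://example.com\n\n" + "word " * 80))
```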
Sixth, effective moderation systems will function best when pushed as far into the background of user interaction as possible. Munroe discovered, as did moot shortly thereafter, that announcing the rules of the game results in conversations and threads being overwhelmed by meta discussion and boundary-testing. Those with a stake in circumventing moderation (trolls, griefers, spammers, crapflooders, the usual set of malcontents) quickly discover the limits, whereas those who don't have the time to invest in circumventing the controls remain constrained ("when moderation is the law, only outlaws will be unmoderated"). Passive moderation and wordfilters ought not be immediately perceivable by the user: instead of blocking the user, muting them, or rejecting the comment, the systems should let the comment through. An automatic downmoderation ought to be applied to the offending post such that it falls below the threshold of normal comment viewing. However, downmodded comments ought to remain discoverable and correctable by user moderation in the case of false positives. By obfuscating passive moderation systems, forums can achieve 'society through obscurity,' keeping moderation criteria from being easily discovered and gamed.
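A sketch of that 'let it through, bury it quietly' behavior, taking an automatic score like the passive one above as input; the threshold is arbitrary.

```python
VIEW_THRESHOLD = 0.0  # illustrative default viewing threshold

def accept_comment(body: str, auto_score: float, thread: list) -> None:
    """Never reject, mute, or block: the comment always posts, so the filter's
    criteria stay hidden from probing. Low scorers simply start below the
    default viewing threshold, where user moderation can rescue false positives."""
    thread.append({"body": body, "score": auto_score,
                   "shown_by_default": auto_score >= VIEW_THRESHOLD})

thread: list = []
accept_comment("lol first", -1.2, thread)               # posted, but hidden by default
accept_comment("A substantive reply...", 0.8, thread)   # posted and visible
print(thread)
```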
4 — The lowest common denominator
If it is indeed possible to construct a functioning scalable society within Eternal September, the question becomes, can we work backwards from society toward a reconstruction of community? Can communities discover themselves within society, without the creation of formal sub-groupings? Can Gellner's 'modular man' reenter a network of communal ties without the totalizing, exclusive, and oppressive aspects of Shirky's vision of online community?
There are two aspects to this problem:
- what links between users are indicative of community; and,
- what mode of content creation and consumption will sustain a coherent community?
Constructive conversation is central to community, not ideological like-mindedness or commonality of interests. Too many forums attempt to provide sub-communities on the basis of user self-selection, allowing the user to place themselves in categories of ideology, allegiance, or taste (e.g. Facebook groups, Last.fm groups, Wikipedian userboxes, etc.). Just as trading flames and ad hominems does not make for lasting interaction, groups of like-minded users decrying offenses to their objects of veneration and offering 'me too!' posts are among the least interesting forms of interaction on the internet. (Shirky discusses these self-destructive forms of group interaction, citing psychoanalyst W. R. Bion's 1961 volume Experiences in Groups.)
Placing conversation at the center of analysis changes how we think about constructive interaction. Current moderation schemes focus on discrete comments as the unit of analysis: a comment is either good or bad in and of itself. Slashdot's foster care practice of reparenting highly moderated comments attached to poorly rated parents is indicative of this comment-as-island-unto-itself mode of thought. But if constructive conversation is the goal, the comment itself is the wrong unit of analysis. The conversation—the series of comments responding to one another—is the proper unit of analysis, and the most important aspects are not inherent to the comments themselves but are relational.
Conversation and moderation are not just content creation or judgments. Replying to other users and moderating comments are expressions of relationship. A one-liner, flippant, or flaming reply expresses at best a weak relationship, and usually a negative one, between two users. Likewise, a negative moderation is an indication of one user's low esteem for the contribution of another. Conversely, a longer post that directly responds to another user (not necessarily agreeing—respectful disagreement or informative links are just as good) or a positive moderation indicates constructive relations between two users.
Changing the unit of analysis from comments to conversations is the first step in determining how community might emerge from an anonymous society. We can take a two-comment dyad as an example and apply an AND logic to the pair of comments' worth, as judged by both passive and user moderation (see the sketch after this list):
- Low value: a short snarky comment with an equally short snarky reply. Throwaway comments are throwaway interactions.
- Low value: a constructive comment with a flame or one-liner reply. An unconstructive response doesn't indicate potential for a relationship.
- Low value: a flamebait or troll garnering a nonetheless long and thought-out response. Feeding trolls, even if done calmly and patiently, is not constructive interaction.
- High value: a medium- to long-sized thoughtful comment followed by a thoughtful response of similar length.
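A sketch of the AND logic; the quality inputs are assumed to come from the passive and user moderation described above, on any common scale.

```python
def dyad_value(parent_quality: float, reply_quality: float) -> float:
    """AND logic: a conversational dyad is only as valuable as its weaker half."""
    return min(parent_quality, reply_quality)

print(dyad_value(0.1, 0.1))  # snark met with snark: low
print(dyad_value(0.9, 0.1))  # constructive comment flamed in reply: low
print(dyad_value(0.1, 0.9))  # patiently fed troll: still low
print(dyad_value(0.8, 0.7))  # two thoughtful halves: high
```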
Constructive comment dyads are the best indicator of a potential relationship between two anonymous users. Positive moderation of one user for another user's comment does express relationship potential, but less so than commenting, because moderation is quick and one-way, whereas writing a comment that engages the other user signals greater potential for interaction. The implicit relationship forming of commenting is also a better indicator of interaction potential than self-selecting membership in groups. Even people with common interests or common ascriptive identities will not necessarily interact fruitfully. In this sense, grouping along a priori lines is based on the dubious assumption that people will interact best with 'their own kind.' The reality is that providing people with labels and identity/interest groupings is more likely to artificially divide users against one another and to reinforce the negative modes of group interaction identified by Bion.
Communal groups ought not be based on self-selection by users into predefined ascriptive categories, but will function best when they emerge from proven ability to interact constructively. Father-of-sociology Émile Durkheim labeled these different organizing principles 'mechanical solidarity' (in which individual differences are minimized) and 'organic solidarity' (in which differentiated individuals cooperate). The problem then becomes: how can the forum determine, on the basis of comments and moderation, which users belong in the same community as other users?
If we reconceptualize commenting and moderation behavior as links between users expressing a relationship, a method for community's emergence from the broader social milieu becomes clear. Just as hyperlinks between web pages express a relationship of value, as Sergey Brin and Larry Page realized by 1998, so too do replies and moderation create a network of interlinked users. The problem of scaling rendered Yahoo!'s categorization scheme obsolete, and the problem of fraudulent/malicious tagging left AltaVista's meta tag crawling fatally compromised—Google introduced a system capable both of scaling and resisting attack (significantly, resistant to a greater degree than Advogato's trust metric). A modified PageRank algorithm could take into account the positive and negative links between users, establishing overall assessments of users useful for distinguishing malicious users from normal users and for dispensing selective incentives to users producing valued contributions.
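For illustration, a bare-bones power-iteration PageRank over weighted positive links between users (constructive replies, upward moderations); how to handle negative links—say, by subtracting a separately computed 'distrust' rank—is left as an assumption, and dangling rank mass is ignored for brevity.

```python
def user_rank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """PageRank over a {src: {dst: weight}} graph of positive user links."""
    users = set(links) | {dst for out in links.values() for dst in out}
    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / len(users) for u in users}
        for src, out in links.items():
            total = sum(out.values())
            if total == 0:
                continue
            for dst, weight in out.items():
                new[dst] += damping * rank[src] * weight / total
        rank = new
    return rank

# alice's two constructive exchanges with bob count as weighted endorsements:
print(user_rank({"alice": {"bob": 2.0}, "bob": {"alice": 1.0, "carol": 1.0}}))
```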
Analyzing user interactions as a network of positive and negative links also opens up further possibilities for assessing and grouping users. Small-world network theory is premised on the study of network nodes that exhibit clustering behavior. A clustering coefficient can be used to determine how self-contained a group of interlinked nodes is (what Durkheim would have called the group's dynamic density). A substantial number of software projects aim to analyze social networks in this manner. The advantage of analyzing networks rather than relying on ascriptive categories to generate communities is that each user's community will be a different set of users—preventing systemic groupthink and the negative group dynamics that occur in closed/exclusive communities. The key criteria in maintaining a given user's community group will be their ability to maintain a level of consistent, constructive interaction with the users in their network neighborhood. (It would be interesting to see if this form of organic communal grouping can confirm Dunbar's number.)
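The clustering measure itself is simple enough to sketch; the graph and names here are hypothetical.

```python
from itertools import combinations

def clustering_coefficient(graph: dict, node: str) -> float:
    """Watts-Strogatz local clustering: the fraction of a user's interaction
    partners who also interact with each other (the 'dynamic density')."""
    neighbors = graph.get(node, set())
    k = len(neighbors)
    if k < 2:
        return 0.0
    closed = sum(1 for a, b in combinations(neighbors, 2)
                 if b in graph.get(a, set()))
    return closed / (k * (k - 1) / 2)

g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(clustering_coefficient(g, "a"))  # 1.0: a's neighbors all interact too
```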
Kuro5hin provides a good, but limited, model for the emergence of community from society. On k5, interaction in the queue and on the front page constitutes 'society,' whereas interaction in the diary ghetto constitutes 'community.' K5 was the first major forum (as far as I can discern) to provide this kind of separation between social content (for the discussion of the whole userbase) and communal content (diaries with personal content that can be followed on a per user basis). However, mashing all users together into a unified diary ghetto ended in tears, as different subgroups of users came into cultural conflict, and the griefers drove off the beautiful souls to Hulver's diary-only site.
For community to emerge from anonymous society, communal interaction and social interaction ought to be separated as they are on k5. But, whereas on k5 all communal content is placed together, leading to conflict between different communities and daily content overload (back when there were more users), on our hypothetical forum each user's community would be a unique set of neighbor users determined by their prior constructive social interaction. Because each user would have a different set of users constituting their community, the strategy of setting up dupe accounts for the purpose of harassing specific users would be rendered ineffective. Within the community section, it might be possible (even positive) to allow users to have and display identity markers (username, icon, signature, homepage, etc.), while still maintaining forced anonymity in the society section of the forum. Thus identity would emerge alongside community, allowing affective bonds between users to develop.
Constructive interaction between users does not occur in a vacuum, however. All web forums are premised on discussion of content and—according to SEO 'gurus'—content is king. But not all content is created equal. We can categorize content along three lines:
- Original content - most queue content on k5, for example.
- Links with blurb - Slashdot, MLPs on k5, Digg, Reddit, Hacker News, Fark, and so on.
- Personal content - k5 diaries, HuSi, 4chan's /r9k/, LiveJournal.
Personal content, as above, is content suited more for a community audience than a forum's society as a whole (perennially flaccid HuSi excepting). That leaves us original content and link-n-blurb content for general consumption. While there's nothing particularly wrong with being yet another echo chamber for entertaining links, production of original content ought to be favored. Precious few forums, aside from k5, place content creation at the center of interaction. That's not because they want thin content, but because it's hard to build a social dynamic favoring substantial content and content creation.
Long form pieces take time to write, time to read, and time to judge. Links to top ten lists, funny images, and the latest sensationalized scandal-mongering headline on the Huffington Post take no time to post, a few seconds to read, and no discernible thought process to judge (or write). Paul Graham, observing this problem at Reddit and attempting to avoid it at Hacker News, calls it the "Fluff Principle: on a user-voted news site, the links that are easiest to judge will take over unless you take specific measures to prevent it." Graham's solution has so far been to explicitly ban fluff stories and rely on moderator intervention to kill fluff stories as 'offtopic.' But this solution, like using moderators to enforce comment quality standards, does not and will not scale. One need only peruse the thousands upon thousands of links submitted to Digg's upcoming stories section every day to realize the futility of reliance on admin enforcement.
Three institutional changes to content handling can help alleviate the Fluff Principle:
- Get rid of voting;
- Change the criterion for front page placement; and,
- Discriminate by content type.
First, voting to approve new stories for forum-wide consumption and comment is a central feature of most forums: k5, Digg, Reddit, Hacker News, and so on. Slashdot still relies on editorial omelette selection (assisted by voting in the trying-too-hard-to-compete-with-Digg abortion that is the Firehose). On the other extreme, 4chan allows anyone to start a new thread. But just as +1/-1 moderation creates a proclivity towards "I agree with this post" moderation rather than moderating on the basis of comment quality, +1/-1 voting on stories creates a similarly destructive dynamic.
Take Digg as a case study. Controlling for the fact that they siphoned off the most retarded Slashdot readers and alloyed them with the most credulous and shrill Ron Paul and Obama supporters, Digg displays Graham's Fluff Principle perfectly: without fail, the top dugg stories each week are uniformly links to the same set of images that your mom will forward you three days later with the subject line "FW:FW:FW:FW:FW:FWD:FW:FWD:Funny Pics LOL!!" Engaging long-form pieces rarely make it very far, if at all, because +1 voting is equally weighted whether the story is 'sprawling New Yorker shit' or a picture of a cat hugging a dog, and because the picture is easier to judge than detailed investigative journalism.
The Wikimedia essay 'voting considered harmful' encapsulates the solution neatly. They grapple with many of the same problems considered here: the problem of multiple identities (dupe voting), tactical/malicious voting, avoiding groupthink, and the stifling of constructive discourse. But whereas Wikimedia aims for consensus decisions, a healthy web forum might settle instead for constructive conversation. That is, instead of voting +1/-1, users would vote with their comments.
Two extreme cases of commenting demonstrate the value of this approach. First, fluff submissions (e.g. images on Digg) tend to get very few comments, and the majority are of low quality ("cool pic! thanks for submitting!"). Second, sensationalist flamebait articles will rack up high numbers of low quality comments, as users post indignant one-liners, flames, personal attacks, and trolls. With passive moderation as described above, a great deal of these comments would have a hard time making it above a normal viewing threshold. With user moderation focused on comment quality rather than 'I agree with this post,' and an evaluation of quality that depends on comment dyads rather than single comments, back-and-forth flamewar threads, even if they racked up an impressive quantity of comments, would still have a very low quality of comments.
Constructive conversations (dyads of highly moderated comments) would be the key determinant of story promotion, not throwaway comments or flames. Because thoughtful comments take longer to construct and are premised on there being substantive content in the article (whether original content or a link), basing story promotion on comments will mitigate the problem of fluff articles on the front page. This method would also place the emphasis on the things important to sustaining a good site: user involvement and interaction. Any site can offer a collection of links, and those that do make commenting take the back seat (e.g. Digg and Reddit). Better sites offer a mix between being story driven and comment driven (e.g. Slashdot and k5). Still, a move toward being fully comment driven needs to take place.
Second, Graham contrasts the top-down vs. bubble-up front pages of Slashdot & Digg and Reddit & Delicious/popular respectively. Top-down front pages are a simple temporal ordering of new stories, with no regard to the quality of conversation they produce. Graham notes that these encourage gaming of the story submission and promotion process, because new stories will occupy the top spot on the page and automatically command attention and clickthroughs. Bubble-up front pages allow the forum to decide on a criterion for a story's ascent to the top of the front page, balanced by a time-decay function. Delicious/popular pushes links up based on the number of bookmarks they've received, whereas Reddit and Hacker News move links based on up or down voting. (4chan occupies a median point between top-down and bubble-up methods, bumping threads to the top of Page 0 when they receive a new comment, tempered by limits on the maximum number of posts and images and on the number of times each unique user can bump the thread.) Our hypothetical comment-driven forum would push stories up based on quality of conversation. Even if gaming the system could promote a story, it would not capture the top of the front page without sustaining enough user interest to generate thoughtful comments in response to the story and to one another.
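A sketch of such a bubble-up scorer; the (age + 2) offset and gravity exponent echo Hacker News's published ranking formula, with conversation quality standing in where HN uses votes.

```python
def front_page_score(conversation_quality: float, age_hours: float,
                     gravity: float = 1.5) -> float:
    """Conversation quality (e.g. the sum of high-value dyads in the thread)
    decayed by age, so stories sink unless discussion sustains them."""
    return conversation_quality / (age_hours + 2) ** gravity

stories = [("long-form original piece", 40.0, 6.0),  # (title, quality, age in hours)
           ("cat hugging a dog", 3.0, 1.0)]          # fresh, but generates no dyads
ranked = sorted(stories, key=lambda s: front_page_score(s[1], s[2]), reverse=True)
print([title for title, _, _ in ranked])  # the older substantive piece still leads
```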
Third and finally, users submitting stories can tag their submissions either as 'original content' or 'link-n-blurb.' The former should have a slight advantage in terms of front page hang time (perhaps a stricter time-decay function for link-n-blurb stories). Moderators won't be wholly responsible for killing fluff links, as they are on Hacker News, but for the more scalable task of fixing miscategorized submissions. Passive moderation may even be employable in flagging potential miscategorizations by analyzing submissions according to overall length and to ratio of links to text.
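A sketch of that flagging pass; the thresholds are guesses, and the output is a flag for moderator review, not an automatic recategorization.

```python
import re

def flag_miscategorized(body: str, tagged_original: bool,
                        min_words: int = 150, words_per_link: int = 50) -> bool:
    """Flag submissions tagged 'original content' that look like link-n-blurb:
    short bodies, or a high ratio of links to surrounding text."""
    words = len(body.split())
    links = len(re.findall(r"https?://\S+", body))
    looks_like_blurb = words < min_words or (links > 0 and words / links < words_per_link)
    return tagged_original and looks_like_blurb

print(flag_miscategorized("Check this out: http://example.com", True))  # True
```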
Conclusion
There are serious problems with existing web forums' institutional capacity to sustain constructive interaction over the long term. The foregoing has been an attempt to rethink what constitutes community and society on the web, and what the requirements for sustaining them are in an environment of rapid scaling.
The conclusions reached about the weaknesses of current forums are:
- Eternal September presents web forums with an unavoidable dilemma: scaling undermines socialization.
- Community and society, as forms of interaction, are not just different in scale but also different in kind.
- Community doesn't scale, and society is difficult to enforce.
- User registration and barriers to participation do not prevent community-destroying behavior.
- Scale quickly outpaces moderators' ability to enforce socialization of new users.
- Current forms of user moderation and trust ratings are vulnerable to gaming and attack.
Recommendations for a hypothetical forum structure are summarized as follows:
- Forced anonymity fosters society by countering vanity, making users modular, and placing the focus on the content/comments.
- Moderation can be improved by making it passive, scarce, and focused on comment quality rather than agreement with the substance of the comment.
- Conversation, not isolated comments or voting scores, must be the central criterion of user interaction.
- Communal groupings can emerge organically from society based on demonstrated constructive conversation.
- Forums should discriminate between original content, link-n-blurb content, and personal content.
- Story promotion and front page position should be determined by quality of conversation not voting.
It should be stressed that none of these are radical innovations. Most are already implemented piecemeal in some form or another in the various web forums, bulletin boards, chat rooms, and newsgroups throughout the internet. But there is no forum providing a coherent combination of these elements. I believe that these factors will provide the institutional foundation for a web forum that can achieve a greater scale-free status than any that we currently possess.