Discussion board moderation


Discussion board moderation is a new “profession” and as such it requires a new set of skills. These are not, as many believe, technical skills. Discussion board moderation is primarily a management task and therefore it requires management skills. Since management is not an exact science, the dos and don’ts of discussion board moderation are not chiselled in granite. Yet, there are some important principles which current and prospective moderators should consider.

Discussion boards (or “forums”) are a newfangled social phenomenon that came about with the Internet. They are meeting places for people who share a common interest about which they like to talk. An online discussion is essentially a written asynchronous conversation between two or more parties who send and receive questions, answers, and comments with a relative delay. These written conversations are much slower than natural conversations, but still faster than a traditional exchange of letters.

The necessity for moderation exists for several reasons. Usually the board operator desires some level of control over the content posted by other participants in order to ensure that it does not violate laws and regulations. In addition, the operator might want to define specific rules for the discussion board that fit the culture of its community. Such rules usually concern netiquette and ethical codes. Finally, the board administrator must maintain the technical functioning of the discussion board system and prevent abuse. The attainment of these goals is usually delegated to the moderator(s), who may or may not be the same person as the board operator.

Common Challenges

Discussion boards provide entertainment, support, and fun for many people, but they are not without challenges. A virtual meeting place is a bit like a masked ball where participants enjoy complete anonymity. This can lead to problems. Anonymity, as well as the lack of physical contact, has a tendency to lower the inhibition threshold for socially unacceptable behaviour in some individuals. Common challenges are angry, hateful, obscene, or otherwise inappropriate posts, cross posting, spamming, trolling, DoS attacks, identity theft, and other more technical problems.

Flaming And Flame Wars

Flames are intentionally hostile or insulting messages that usually result from a heated exchange between people holding different opinions. They are the most common problem on discussion boards. What identifies a message as a flame is that it is designed to attack the opponent rather than the argument; flames are thus ad hominem attacks with a strong emotional impact. Flame wars are prolonged exchanges of flame posts, into which –according to the group dynamics of the community– many individuals may get drawn. The propensity for flame wars depends on many factors, such as community behaviour, the nature of the topics discussed, and moderation practices. Obviously, controversial topics are especially susceptible to flames. Flaming generally deters and discourages users.

Flames are a rather difficult challenge for the moderator. The most suitable strategy for controlling flames is to employ non-punitive measures, for example posting placatory comments, appeals to fairness, and conciliation proposals to calm the situation. Diplomacy and humour often work well. Preventing flames in the first place, for example by creating a relaxed and intimate atmosphere, is even better. If this doesn’t work, it may be necessary to remind the opponents of the rules regarding discussion style or to close the thread. If the posted flames are inappropriate, it may also be necessary to delete offensive passages or posts. Finally, if nothing else works, warning and barring the offending member(s) is the last recourse.


Trolling

A troll is someone who habitually posts disturbing, inflammatory, or nonsensical messages that disrupt the discussion and upset the community. Trolls are essentially agitators who provoke and create disturbance by some means, usually flames, in order to draw attention to themselves or to sabotage the discussion. The motives for trolling are varied. The troll may be a disgruntled user, someone who feels that the board community “has turned against him”, someone with an underlying psychological problem, or merely someone venting temporary frustration. Trolls can be quite problematic. Trolling is best moderated by confronting the offender directly via the personal message system and by putting the troll on the pre-moderation list if the discussion board software allows it. Persistent trolls should be pre-moderated or, if pre-moderation is not an option, banned.


Spamming

Outright spamming has become somewhat rare on discussion boards, since most board software prevents robots from signing up and submitting spam. Yet there is still the problem of spam posted by human subscribers. Spam content ranges from the fairly subtle, such as text links to a commercial website, to the blatant, such as advertising banners in user signatures and posts. Spammers frequently seek out communities that fit the target group for their products or services. For example, a shop that sells exercise machines might seek out sports communities. Evidently most spammers have an agenda apart from the community and the discussion. Nothing is lost by immediately deleting spam posts and blocking the offending user and IP address. The situation is somewhat different if a regular member submits an advertising post. In most cases, deletion and a warning issued via PM or the warning system will be sufficient to deal with a one-time transgression.


Cross-Posting

Cross-posting is the practice of submitting the same message to more than one forum. The intention of the sender is to reach the greatest possible number of readers. The attendant problem is fragmentation of the ensuing discussion. If the cross-post is targeted at the same community, people also get the impression of being spammed. Cross-posting within the same discussion board is annoying in most cases. The moderator needs to decide whether cross-posting is appropriate or whether to delete duplicate posts. In order to avoid thread fragmentation, duplicate threads may be closed, ideally with an annotation containing a link to the one thread singled out to continue the discussion. Alternatively, the administrator may disallow cross-posting within the same discussion board altogether.
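Duplicate posts can often be spotted automatically before a moderator ever sees them. The sketch below is a minimal illustration of one possible approach, assuming posts are available as (id, body) pairs: it normalizes whitespace and case, hashes each body, and groups posts that share a fingerprint.

```python
import hashlib

def body_fingerprint(text):
    """Normalize whitespace and case, then hash, so trivially
    reformatted copies of the same message still match."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def find_cross_posts(posts):
    """posts: list of (post_id, body) pairs. Group post ids by body
    fingerprint; any group with more than one entry is a candidate
    cross-post for the moderator to review."""
    groups = {}
    for post_id, body in posts:
        groups.setdefault(body_fingerprint(body), []).append(post_id)
    return [ids for ids in groups.values() if len(ids) > 1]
```

This only flags candidates; the decision to close or delete the duplicates remains with the moderator, since near-identical posts are sometimes legitimate.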

Off-Topic Posts

This is a very common problem and at the same time a difficult one to control. Off-topic (OT) posts arise from the associative nature of subject matters, a characteristic that goes to the root of human language. Getting off on a tangent is all too easy. For example, a discussion about nuclear energy may divert into a discussion about alternative energies, nuclear weapons, or state regulations. In the natural flow of a discussion, minor diversions are common and probably unobjectionable. However, a thread often develops in a contingent way that spawns discussions about multiple topics –often in parallel– which is confusing in the same way that a group of people talking at the same time is confusing. Unfortunately, there are no universally valid guidelines for off-topic moderation. It always depends on context and community. In an informal discussion about philosophy, OT posts may be of no concern, while in a more formal setting, such as a technical support forum, off-topic contributions may not be allowed at all.

A topic is usually outlined by its thread title and the tagline (short description). If a thread develops an OT sideline, the OT posts may be swapped out into a new thread by the administrator. Many software packages provide a “split thread” operation for this purpose. To what extent OT posts are moderated and how strongly OT contributions are discouraged depends very much on the nature of the discussion board.


Noise

“Noise” is text and other content that either does not belong to a discussion or that interrupts its flow. For example, long quotations or distracting signatures can be considered noise. If the noise ratio exceeds a certain level, following the discussion becomes visually tiresome. The best strategy to avoid this is to limit signatures to a certain length (and perhaps also to disallow images in signatures) and to discourage full quotes. Quotations are often useful, even necessary, to remind the reader of something previously mentioned and to establish the context for a reply. However, a full quote in which the answerer refers only to a tiny fragment within the quote is confusing and counterproductive. To avoid this, the discussion board software may be configured to discourage full quotes, for example by ergonomic means. Alternatively, the moderator may remind people not to overuse full quotes and edit out noise manually if necessary.
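Both noise-limiting measures mentioned above lend themselves to simple automation. The following sketch assumes a hypothetical house limit of four signature lines, a `[img]`-style markup tag for images, and `>`-prefixed quote lines; it checks a signature against the limit and estimates how much of a post is quoted material.

```python
MAX_SIG_LINES = 4   # hypothetical house limit

def check_signature(signature):
    """Return True if the signature fits the house rules:
    at most MAX_SIG_LINES lines and no embedded images."""
    lines = signature.splitlines()
    has_image = "[img]" in signature.lower()
    return len(lines) <= MAX_SIG_LINES and not has_image

def quote_ratio(post_body):
    """Fraction of non-empty lines that are quoted material
    (lines starting with '>'), a rough proxy for quote noise."""
    lines = [l for l in post_body.splitlines() if l.strip()]
    if not lines:
        return 0.0
    quoted = sum(1 for l in lines if l.lstrip().startswith(">"))
    return quoted / len(lines)
```

A moderator (or the board software) could then warn users whose posts exceed, say, a 0.8 quote ratio, while leaving ordinary contextual quoting alone.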

Multiple Identities and Impersonation

Multiple identities result from the same user subscribing several times to the same discussion board. This might happen with technically inexperienced users, users who have lost their password, or users who intentionally create multiple identities. Although most software packages can be configured to prohibit multiple subscriptions with the same email address and/or from the same IP number, subscribers may bypass this mechanism by using different email addresses and IPs. Furthermore, blocking IP addresses is problematic with dynamically assigned IPs. In most cases, multiple subscriptions result in a number of dead accounts, which can be deleted after a certain period of inactivity. Other cases are more troublesome, especially those which involve the continued use of multiple identities or impersonation (identity theft). These are deceptive tactics which are not always easy to detect, and they are popular with trolls. An analysis of the IP numbers and time stamps of a sequence of posts is often necessary to uncover this form of abuse. Since it is a serious form of abuse, it usually results in account termination and banning.
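The IP-and-timestamp analysis mentioned above can be sketched as follows. This is only an illustrative heuristic, assuming the moderator can export posts as (username, ip, unix_timestamp) tuples: it flags pairs of distinct accounts that posted from the same IP address within a short time window.

```python
from collections import defaultdict

def suspect_identities(posts, window_seconds=3600):
    """posts: list of (username, ip, unix_timestamp) tuples.
    Flag pairs of distinct accounts that posted from the same IP
    within window_seconds of each other -- a common sock-puppet
    signature. Only a heuristic: shared IPs also occur legitimately
    behind NAT or shared proxies, so results need human review."""
    by_ip = defaultdict(list)
    for user, ip, ts in posts:
        by_ip[ip].append((user, ts))
    flagged = set()
    for entries in by_ip.values():
        entries.sort(key=lambda e: e[1])   # order posts by time
        for (u1, t1), (u2, t2) in zip(entries, entries[1:]):
            if u1 != u2 and t2 - t1 <= window_seconds:
                flagged.add(frozenset((u1, u2)))
    return flagged
```

Because of the NAT caveat in the comment, a flagged pair is grounds for closer inspection (writing style, registration emails), not for automatic banning.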

Denial of Service Attacks, Hacker Attacks

Denial of Service (DoS) attacks are technical sabotage manoeuvres aimed at disrupting the discussion board service. The most common method is flooding: a flooding robot (a program) sends huge quantities of messages to the board, which then becomes unusable for other users. Most discussion board software packages have basic features to avert such attacks, for example by limiting the number of messages a user can post within a certain period. However, resourceful attackers may find ways to bypass these protection mechanisms. Luckily, DoS attacks are somewhat rare, since they require some technical sophistication and quite a bit of dedication to the purpose of sabotage. Hacker attacks, on the other hand, are more common. The most ordinary hacker attack is password sniffing on unencrypted connections, and subsequently using the captured passwords to gain entry to the discussion board system, preferably as a user with administrator privileges. DoS and hacker attacks are serious forms of abuse and should be reported to the service provider and possibly to law enforcement authorities. Board operators do not always have the technical means to counter such attacks on their own.
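The per-user message limit described above is typically implemented as a sliding-window rate limiter. The sketch below is a minimal version of that idea (the class name and limits are illustrative, not taken from any particular board package): each user may submit at most `max_posts` messages in any `window_seconds` interval, and further attempts are rejected.

```python
import time
from collections import deque, defaultdict

class PostRateLimiter:
    """Sliding-window limiter: allow at most max_posts per user
    within any window_seconds interval; reject the rest."""

    def __init__(self, max_posts=5, window_seconds=60):
        self.max_posts = max_posts
        self.window = window_seconds
        self.history = defaultdict(deque)   # user -> timestamps

    def allow(self, user, now=None):
        """Return True if the post is accepted, False if throttled."""
        now = time.time() if now is None else now
        q = self.history[user]
        while q and now - q[0] > self.window:
            q.popleft()                     # drop expired timestamps
        if len(q) >= self.max_posts:
            return False                    # window is full: throttle
        q.append(now)
        return True
```

A flooding robot hits the limit almost immediately, while ordinary users never notice it; a determined attacker can still rotate accounts or IPs, which is why the text above notes that board-level limits are only a first line of defence.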

Types of Moderation

The Usenet community generally distinguishes between four types of moderation, which are likewise applicable to web-based discussion board systems. These types of moderation differ in the way posts are moderated. They feature different decision and communication flow models.


Post-Moderation

The most common form of moderation is post-moderation, which means that either a single moderator or a group of moderators reviews contributions once they have been posted. In such a setting, messages ought to be reviewed on a regular basis (perhaps daily) and moderators ought to perform editorial tasks as required. Post-moderation is time-consuming if done correctly, because moderators need to review all content and respond to inappropriate content in time. Moderators have full censoring power.


Pre-Moderation

The most restrictive form of moderation is pre-moderation. Again, moderators have full censoring power and need to review every message, but content is reviewed before it goes online, not after. This means that posted messages first go into a waiting queue before they are approved and released by the moderator. The delay that results from this procedure is quite detrimental to discussions, because replies are not available to the community in real time. Since this normally drains the lifeblood from a discussion, pre-moderation is applied only in special situations where the sensitivity of the topic requires more restrictive action. One example of pre-moderation is the book reviews on amazon.com.
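The waiting-queue mechanism can be illustrated with a few lines of code. This is a bare-bones sketch of the concept, not any real board's implementation: submitted posts sit in a pending list and reach the published board only through an explicit moderator decision.

```python
class ModerationQueue:
    """Minimal pre-moderation sketch: posts wait in a pending queue
    and appear on the board only after a moderator approves them."""

    def __init__(self):
        self.pending = []     # posts awaiting review
        self.published = []   # posts visible to the community

    def submit(self, author, body):
        """A member's post goes into the queue, not onto the board."""
        self.pending.append((author, body))

    def approve(self, index):
        """Moderator releases a pending post to the board."""
        self.published.append(self.pending.pop(index))

    def reject(self, index):
        """Moderator discards a pending post; it is never shown."""
        self.pending.pop(index)
```

The structural source of the delay criticised above is visible here: nothing moves from `pending` to `published` without a moderator action, so the discussion stalls whenever the moderator is away.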

Reactive Moderation

Reactive moderation relies on alerts from members of the discussion board. It moves the task of supervision from the moderator to the audience by offering easily accessible means of reporting problems to the moderator. The moderator only needs to review those areas with reported problems. This form of moderation is quite effective in conjunction with automatic supervision, such as word filters. Its greatest advantage is the reduction of the moderation workload associated with the pre- and post-moderation methods. What is more, the legal responsibilities of the operator seem to shift primarily to removing questionable content, rather than preventing it from being posted. The principal disadvantage of reactive moderation is that not all breaches of house rules and legal provisions might get reported.
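The combination of member reports and a word filter can be sketched as two simple checks. The blocklist terms and the report threshold below are invented placeholders; the point is only that a post reaches the moderator's attention through either channel.

```python
BLOCKLIST = {"badword1", "badword2"}   # hypothetical filter terms
REPORT_THRESHOLD = 3                   # reports before review (assumed policy)

def auto_flag(body):
    """Automatic supervision: flag a post if it contains a
    blocklisted word (case-insensitive, punctuation stripped)."""
    words = {w.strip(".,!?;:").lower() for w in body.split()}
    return bool(words & BLOCKLIST)

def needs_review(report_count, body):
    """A post enters the moderator's review queue either via the
    word filter or via enough member reports."""
    return auto_flag(body) or report_count >= REPORT_THRESHOLD
```

The disadvantage noted above shows up directly in this sketch: a post that neither trips the filter nor attracts enough reports is never reviewed at all.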

Distributed Moderation

The distributed moderation model is even more radical. It dispenses with the concept of a moderator altogether. Instead it relies on the assumption that a community can collectively decide what is appropriate for itself and what is not. Moderation tasks are thus carried out by the community by means of a voting system. Current implementations of voting systems are often similar to content rating systems. For example, if someone suggests a post for deletion, it takes a number of consenting votes to actually carry out the deletion. There are two problems with this approach. First, the community might have different views about “appropriate content” than the board operator. Second, online voting systems are still prone to abuse. Thus distributed moderation is not yet widespread, although some groups, such as slashdot.org and wikipedia.org, have used it with great success.
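The deletion-by-consensus rule in the example above amounts to a simple vote tally. The threshold below is an assumed policy value, and the one-vote-per-member requirement noted in the comment is exactly the part that real systems find hard to enforce, which is the abuse problem the text mentions.

```python
DELETE_THRESHOLD = 5   # net consenting votes needed (assumed policy)

def tally(votes):
    """votes: iterable of +1 (delete) / -1 (keep), assumed to come
    from distinct members -- enforcing that is the hard part and
    the main avenue for abuse of distributed moderation."""
    return sum(votes)

def should_delete(votes):
    """The post is removed only when net support for deletion
    reaches the threshold, so one objector cannot decide alone."""
    return tally(votes) >= DELETE_THRESHOLD
```

A production system would additionally weight votes by member reputation (as slashdot.org's karma system does), but the threshold logic is the core of the idea.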