- Facebook revamps its moderation to fight off Russian political interference.
- Twitter is failing to crack down on trolls.
- Platforms ban Infowars.
- Reddit bans entire subreddits.
- YouTube's ever-growing hate speech and content piracy problems.
A lot of online community and platform problems can be summed up as moderation and policy issues. It’s no wonder that more and more companies are looking at their moderation policies and teams to make sure they’re sufficiently robust to deal with the online landscape. Here are some considerations when building your moderation policies and teams.
Strong community guidelines
Strong community guidelines are the bedrock of any moderation strategy. Aside from setting the tone for the discussions that take place in your community, they give you grounds to intervene when someone breaks them. It sounds simple, but it rarely is. Community guidelines need to cover aspects such as hate speech, politics, race and religion, competitors, modification, and more.
The best way to keep your content and community away from controversy is to create strong community guidelines and policies and make them available to new and existing members. Instead of burying them in a TOS or in your footer, have them stickied to the top of your community, or in the sidebar, where they are visible. Reddit does a great job of this: each subreddit's rules are found in the sidebar, and the main points are (typically) reiterated in the empty state of a new post or comment.
There are different techniques and strategies to stop bad actors and find offending posts. Check out our blog post on 'Saving your Community from Spammers and Bots.'
Enforcing the rules of your community is not merely a question of banning bad actors. Of course, for those pushing illegal topics or spam, there's no question that banning is the way to go. However, there are other ways to stop those who engage in bad behavior or break community guidelines. The ban hammer should be reserved for the worst offenders.
The shadowban
A shadowban can be a temporary or a permanent measure. You're not outright banning the person, nor are you informing them that they've been banned. Instead, you're hiding their posts from your other members. Everything looks fine on their end, but none of their posts appear in the community. This is a useful tactic to stop trolls, or to put more 'aggressive' members in a timeout. A shadowban can also stop users from recreating and posting the same inflammatory content with other accounts.
When to use it:
- An otherwise good member is acting up
- A persistent troll is breaking your guidelines and harassing others
- A member is hijacking conversations
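The mechanics of a shadowban are simple: filter the banned user's posts out of everyone's view except their own. Here is a minimal sketch, assuming an in-memory data model; the names `Post`, `shadowbanned_users`, and `visible_posts` are illustrative, not from any particular forum software.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str

# Usernames currently under a shadowban (illustrative).
shadowbanned_users = {"troll42"}

def visible_posts(posts, viewer):
    """Return the posts a given viewer should see.

    A shadowbanned author still sees their own posts, so everything
    looks normal on their end, but nobody else sees them.
    """
    return [
        p for p in posts
        if p.author not in shadowbanned_users or p.author == viewer
    ]

posts = [Post("alice", "hello"), Post("troll42", "flame bait")]

print([p.author for p in visible_posts(posts, viewer="bob")])      # ['alice']
print([p.author for p in visible_posts(posts, viewer="troll42")])  # ['alice', 'troll42']
```

In a real system the filter would live in the query layer rather than application code, but the principle is the same: visibility depends on who is asking.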
Freezing accounts
If shadowbanning seems a little too underhanded for your community, you can freeze accounts instead. Freezing means letting the user know that they cannot post or reply for X number of days because of bad behavior. Think of it as a cooling-off period for when discussions become too heated.
When to use it:
- Someone violates the rules a few times in a short period of time
- An otherwise regular member says something offensive or controversial
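A freeze is just a timestamp check: record when the freeze expires and reject posts until then. A minimal sketch, assuming an in-memory store; `frozen_until`, `freeze`, and `can_post` are hypothetical names for illustration.

```python
from datetime import datetime, timedelta, timezone

# Maps username -> time at which the freeze expires (illustrative).
frozen_until = {}

def freeze(user, days):
    """Freeze a user's account for the given number of days."""
    frozen_until[user] = datetime.now(timezone.utc) + timedelta(days=days)

def can_post(user):
    """A user can post if they were never frozen or the freeze has expired."""
    until = frozen_until.get(user)
    return until is None or datetime.now(timezone.utc) >= until

freeze("hothead", days=3)
print(can_post("hothead"))  # False
print(can_post("alice"))    # True
```

Unlike a shadowban, the user should be told about the freeze and why it happened, so the notification message matters as much as the mechanism.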
Removing posts
This is probably the most effective moderation technique. When someone breaks the rules, simply remove the post. Removal assumes positive intent (the user didn't mean to break the rules) and lets everyone save face. Hopefully, the user gets the message and their behavior improves.
When to use it:
- When someone breaks the community rules
- It's their first 'bad' post
Building a moderation team
Setting up a moderation team is no simple task. The first place to start is the size and scope of your community. If your community is a branded one, built mainly for clients, and you're not expecting more than a few thousand users, a dedicated moderation team is probably not necessary; support or other customer-facing staff will be sufficient.
On the other hand, if you're going to run a large forum with hundreds of thousands of users and a lot of activity, you'll want a small team of dedicated community moderators/managers to manage and grow the community. No, support staff are not enough! You need professional community managers. Luckily, community management is a growing profession, and it is relatively easy to find competent community managers in most major cities or remotely.
Community managers can do more than moderate discussions on your internal forum. They can also help steer and moderate discussions on social networks, large media aggregators, and other channels. More than just moderation, community managers can also act as brand advocates for your company.
We cannot stress this enough. Community management is a need to have, not a nice to have.
Volunteer moderators
As any community grows, the resources and time required to manage it increase as well. Depending on the size of your community, it might not make sense to have a large army of community managers checking every thread and post. That's where volunteer community moderators can help. These can be your power users, members who've been with your community since the beginning, or recognized voices within your community.
Volunteer moderation is a balancing act. Given too much freedom and autonomy, volunteers can become a bigger problem than the one they were brought on to solve. The best bet is to set strict guidelines for what it means to be a moderator in your community, enforce a zero-tolerance policy for abuses of power, and limit their moderation powers to freezing accounts or removing posts. It's also important to have your internal community manager oversee the volunteer team, and the volunteers should always have a direct way to contact you.
Auto-moderation
As discussed in our other post, tools now exist for auto-moderation. These tools are increasingly powered by AI and can do more than block vulgar language; they can also make inferences about the intent and sentiment of posts. Although costly, these tools can save significant labor costs. Automated moderation doesn't have to mean sophisticated software, though. It can be as simple as creating rules for review. For example, if a specific user is getting a lot of downvotes, or has been reported multiple times in a short period, an alert can be sent to your mod team, which can investigate further.
Take a stand
Moderation shouldn't be left up to chance or loose policy. As we've seen across social media and the web, without firm community guidelines and moderation, we leave ourselves open to attacks, trolling, and hate content. In a branded community this is less of an issue, but it's still something to consider given how polarizing specific topics have become.