Moderating open source conversations to keep them productive
TL;DR: Successful open source projects create a welcoming community by fostering productive conversations, promoting healthy collaboration, and de-escalating conflict when things get heated or off topic.
Once you’ve set contributors up for success, automated common community-management tasks, and established a governance model, you should be prepared to moderate conversations in the repositories you maintain to help build a strong community around your code.
Establish a Code of Conduct
Before you begin moderating, establish a code of conduct. Codes of conduct communicate expectations for behavior within your community, as well as how you will enforce those expectations when a contributor’s behavior diverges. While you don’t necessarily need to adopt a code of conduct on day one for a small project, as your project matures and your community grows, you’ll want to adopt one, ideally before you need it. Not only does it signal to potential contributors that you will take action when necessary, it also reduces the appearance of bias when you respond to disruptive behavior.
Like open source licenses, a number of widely adopted codes of conduct already exist, and you adopt one by adding a CODE_OF_CONDUCT file to the root of your repository. This is best accomplished through a pull request, which often prompts a necessary conversation within your community and allows you to adapt the code of conduct to your needs. On GitHub, if you visit your community profile, you should be prompted to choose from several popular code of conduct templates.
Allow community members to report comments
Before you can moderate disruptive behavior, you first need to learn about it. While it’s possible for maintainers of small projects to read every single comment and issue, as your project grows and matures, that quickly becomes untenable. Empower your community to flag unproductive comments for your review by opting in to community reporting. Once enabled for your repository, you’ll have access to a queue where community-flagged comments can be reviewed and moderated using the suite of tiered moderation tools described below, allowing you to respond proportionately to unwanted behavior and, ideally, bring contributor behavior back in line with community norms.
Moderating disruptive comments
The least severe tiered moderation response is minimizing a comment. Minimized comments are hidden by default when a comment thread loads, meaning viewers must opt-in to seeing the comment’s content. Hiding a comment is useful for reducing the screen real estate of off-topic or inappropriate comments, politely signaling to the author that such comments are not in line with community expectations.
If part of a comment contributes to the conversation but another part detracts from it, consider editing the comment to bring it in line with community norms, or, for a particularly egregious comment, delete it entirely.
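For maintainers who prefer to script moderation, GitHub’s GraphQL API exposes minimizing as the `minimizeComment` mutation. The sketch below is a minimal, hedged example: the mutation and its classifiers (e.g. `OFF_TOPIC`, `SPAM`) come from GitHub’s GraphQL reference, while the token and comment node ID are placeholders you’d supply yourself.

```python
import json
import urllib.request

# GraphQL mutation to minimize (hide) a comment, per GitHub's GraphQL API.
MINIMIZE_MUTATION = """
mutation($subjectId: ID!, $classifier: ReportedContentClassifiers!) {
  minimizeComment(input: {subjectId: $subjectId, classifier: $classifier}) {
    minimizedComment { isMinimized minimizedReason }
  }
}
"""

def build_minimize_payload(comment_node_id: str, classifier: str) -> dict:
    """Build the GraphQL request body for minimizing a comment."""
    allowed = {"SPAM", "ABUSE", "OFF_TOPIC", "OUTDATED", "DUPLICATE", "RESOLVED"}
    if classifier not in allowed:
        raise ValueError(f"unknown classifier: {classifier}")
    return {
        "query": MINIMIZE_MUTATION,
        "variables": {"subjectId": comment_node_id, "classifier": classifier},
    }

def minimize_comment(token: str, comment_node_id: str, classifier: str) -> None:
    """Send the mutation (requires a token with repo scope; placeholder here)."""
    payload = build_minimize_payload(comment_node_id, classifier)
    req = urllib.request.Request(
        "https://api.github.com/graphql",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"bearer {token}"},
    )
    urllib.request.urlopen(req)  # network call; not exercised in this sketch
```

The classifier you choose is surfaced to readers as the reason the comment was hidden, which reinforces the same norm-setting signal as hiding it by hand.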
Lock heated conversations
When an entire conversation is heated, unproductive, or violates your code of conduct, sometimes it’s best to lock the conversation entirely, either permanently or temporarily. Locking a conversation prevents anyone without write access to the repository from commenting or reacting on the issue. Doing so can force folks to step away from the conversation for a bit - to create space for emotions to settle - so that things can get back on track, in the issue and across your repository.
Sometimes you might also lock an issue when a decision has been made and subsequent comments are not constructive. When you lock the conversation, you’ll be prompted to offer a reason, which appears in the issue timeline. Like minimized comments, this once again communicates to contributors what behavior is acceptable and what is not.
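Locking can also be scripted. The sketch below builds the REST call documented as `PUT /repos/{owner}/{repo}/issues/{issue_number}/lock`, with the lock reasons GitHub accepts; the token, owner, and repo names are placeholders.

```python
import json
import urllib.request

# Lock reasons accepted by GitHub's REST API for locking an issue.
LOCK_REASONS = {"off-topic", "too heated", "resolved", "spam"}

def build_lock_request(owner: str, repo: str, issue_number: int, reason: str):
    """Return the (url, body) pair for the lock call."""
    if reason not in LOCK_REASONS:
        raise ValueError(f"unknown lock reason: {reason}")
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_number}/lock"
    return url, {"lock_reason": reason}

def lock_issue(token: str, owner: str, repo: str,
               issue_number: int, reason: str) -> None:
    """Perform the PUT (requires push access; token is a placeholder)."""
    url, body = build_lock_request(owner, repo, issue_number, reason)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="PUT",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    urllib.request.urlopen(req)  # network call; not exercised in this sketch
```

The `lock_reason` you pass is the same reason that appears in the issue timeline when you lock from the web interface.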
Limit interactions to force a cooldown
When disruptive behavior spills from one or two issues to your entire repository (or even across repositories), consider limiting interactions. Interaction limits are the next most aggressive moderation tier, and allow you to restrict commenting and opening issues and pull requests to established users, to those who have previously contributed to your repository, or only to those with write access. Similar to locking conversations, interaction limits are intended to enforce a temporary cooldown period when things get heated.
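The same three tiers are exposed through the REST API as `PUT /repos/{owner}/{repo}/interaction-limits`. A minimal sketch, with the token as a placeholder and the limit values taken from GitHub’s documented options:

```python
import json
import urllib.request

# Interaction-limit tiers documented by GitHub's REST API:
# existing_users (established accounts), contributors_only (prior
# contributors), collaborators_only (write access).
LIMITS = {"existing_users", "contributors_only", "collaborators_only"}

def build_limit_request(owner: str, repo: str, limit: str):
    """Return the (url, body) pair for the interaction-limit call."""
    if limit not in LIMITS:
        raise ValueError(f"unknown interaction limit: {limit}")
    url = f"https://api.github.com/repos/{owner}/{repo}/interaction-limits"
    return url, {"limit": limit}

def set_interaction_limit(token: str, owner: str, repo: str, limit: str) -> None:
    """Perform the PUT (requires admin access; token is a placeholder)."""
    url, body = build_limit_request(owner, repo, limit)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="PUT",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    urllib.request.urlopen(req)  # network call; not exercised in this sketch
```

If the conversation cools down before the limit expires, a `DELETE` to the same endpoint lifts it early.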
Block disruptive users
The final and most aggressive form of moderation is blocking a disruptive user. Blocking a user prevents them from interacting with your repositories in any way. The ban can be permanent or, for a first or second infraction, ideally temporary, to give the user a chance to correct their behavior. You can block a user silently to avoid conflict, or, if you’d like, notify them that they’ve been blocked, with a link to the offending content and/or your code of conduct. In a perfect world, disruptive comments stem from a misunderstanding (or lack of understanding) of community norms, meaning a “time out”, along with specifics about the deviation, can turn disruptive contributors into productive ones.
By establishing a code of conduct, enabling community reporting, and using GitHub’s moderation tools, you can actively manage your community and keep conversations constructive: promoting healthy collaboration, keeping discussions on track, de-escalating conflict, and, overall, creating a welcoming community, all of which helps set your open source project (and your community members) up for success.
Ben Balter is Chief of Staff for Security at GitHub, the world’s largest software development platform. Previously, as a Staff Technical Program Manager for Enterprise and Compliance, Ben managed GitHub’s on-premises and SaaS enterprise offerings, and as the Senior Product Manager overseeing the platform’s Trust and Safety efforts, Ben shipped more than 500 features in support of community management, privacy, compliance, content moderation, product security, platform health, and open source workflows to ensure the GitHub community and platform remained safe, secure, and welcoming for all software developers. Before joining GitHub’s Product team, Ben served as GitHub’s Government Evangelist, leading the efforts to encourage more than 2,000 government organizations across 75 countries to adopt open source philosophies for code, data, and policy development.