Adding Community & Safety checks to new features
With the pace of continuous shipping at GitHub, it’s easy for even the most well-intentioned feature to accidentally become a vector of abuse and harassment. The Community & Safety engineering team focuses on building community management tools and maintaining user safety, but we also review new features our colleagues have written to ensure there are no accidental abuse vectors. Similar to Application Security reviews, these Community & Safety reviews aim to catch potential problems before they ship, in order to minimize impact on marginalized folks, reduce spam, and encourage healthy communities.
But manually reviewing every pull request doesn’t scale, so we’ve created a handy checklist of things to look out for, aimed at folks who haven’t had the privilege of being harassed on the internet.
Our approach focuses on three main areas: ensuring explicit consent, keeping an audit log trail, and minimizing abuse.
Ensuring explicit consent
On the Community & Safety team, we believe in explicit consent in our daily lives as well as when we build software. Many abuse vectors can be avoided by simply asking: Are all parties involved aware and consenting to this interaction? If everyone is aware and on board with what’s going on, we reduce the number of unpleasant surprises, lower support ticket volume, and increase user trust. A great example of explicit consent is the Repository Invitations feature by @CoralineAda.
Any time you have two or more users interacting, there’s potential for harassment and abuse. Let’s say that Alice has been contacted by Bob using your new feature (i.e. direct messages).
Some example questions we ask to ensure explicit consent include:
- Is Alice blocking Bob? If so, don’t allow this event or any associated notifications (see the sketch after this list).
- If Alice is blocking Bob, what kind of message are you showing Bob? Will that put Alice in danger of retaliation if Bob finds out he’s blocked?
- Is it easy and safe to report Bob? Will Alice’s identity be revealed if they do report Bob?
- Is it easy to hide or report abusive content so Alice doesn’t have to see it if it’s harassment?
- Are you allowing images? If so, do you have spam filters enabled? Pornography filters?
- Is it easy for Alice to opt-out of this feature?
- Does Alice know how much of their personally identifying information is visible with every interaction? Can you minimize the amount of data needed? Is it easy to go back and fix this if necessary?
- Is it easy to guess or confirm the presence of private information such as Alice’s email address or physical location?
- Can the owner of a space (such as a repository) approve or decline the creation of new content on their space?
- Can the owner of a space easily remove offensive content on their space, or do they have to talk to Support?
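To make a couple of these questions concrete, here is a minimal sketch of the blocking and opt-out checks, assuming a hypothetical direct-message feature. The names (`User`, `can_contact`, `send_direct_message`) are illustrative, not GitHub internals.

```python
# A minimal, hypothetical sketch of the consent checks described above.
# None of these names are GitHub APIs; they stand in for whatever block-list,
# notification, and delivery machinery your application already has.
from dataclasses import dataclass, field


@dataclass
class User:
    username: str
    blocked_usernames: set = field(default_factory=set)
    direct_messages_enabled: bool = True  # an easy opt-out for the feature


def can_contact(sender: User, recipient: User) -> bool:
    """Return True only if the recipient has consented to this interaction."""
    if sender.username in recipient.blocked_usernames:
        # Don't allow the event or any associated notifications.
        return False
    if not recipient.direct_messages_enabled:
        # The recipient has opted out of this feature entirely.
        return False
    return True


def send_direct_message(sender: User, recipient: User, body: str) -> None:
    if not can_contact(sender, recipient):
        # Drop the message silently (or return a generic response) so the
        # sender gets no signal that they are blocked.
        return
    deliver(recipient, body)   # hypothetical delivery helper
    notify(recipient, sender)  # notify only when there is consent


def deliver(recipient: User, body: str) -> None:
    print(f"delivered to {recipient.username}: {body}")


def notify(recipient: User, sender: User) -> None:
    print(f"notifying {recipient.username} about a message from {sender.username}")


alice = User("alice", blocked_usernames={"bob"})
bob = User("bob")
send_direct_message(bob, alice, "hi")  # silently dropped: Alice blocks Bob
```

The important design choice here is that Bob gets no indication he is blocked, which honors Alice’s block without putting her in danger of retaliation.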
Keeping an audit log trail
Support folks are the unsung heroes of all matters related to Community & Safety. They are often dropped into a battlefield with very little context about what’s going on. Make it easy for your support folks to help your users quickly and with minimal digging by ensuring there’s a clear audit log trail available. Audit logs keep track of your activity and the activity of any organization you own, and can be very helpful for providing context and accountability in the event that something goes wrong. You can read more about audit logs in the documentation.
Some example questions we ask to ensure a proper audit log trail include:
- Does there need to be an audit log for this interaction?
- If multiple people can edit a piece of content, are we tracking which changes Bob made and which ones Alice made? (See the sketch after this list.)
- Is it easy to see how many abuse reports Bob has accrued in the past? Over a certain time frame?
- Is it easy for staff to take swift action against Bob?
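As one way to picture such a trail, here is a minimal sketch, assuming an in-memory, append-only event list. The event names and the `record` and `reports_against` helpers are hypothetical, not GitHub’s actual audit log schema.

```python
# A minimal, hypothetical sketch of an append-only audit trail for shared
# content and abuse reports. Fields and event names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class AuditEvent:
    actor: str         # who did it, e.g. "bob"
    action: str        # what they did, e.g. "content.edit"
    target: str        # what it affected, e.g. "discussion/42"
    created_at: datetime


audit_log: list[AuditEvent] = []


def record(actor: str, action: str, target: str) -> None:
    """Append an immutable event so Support can reconstruct what happened."""
    audit_log.append(AuditEvent(actor, action, target, datetime.now(timezone.utc)))


def reports_against(username: str, window: timedelta | None = None) -> int:
    """Count abuse reports against a user, optionally within a time frame."""
    cutoff = datetime.now(timezone.utc) - window if window else None
    return sum(
        1
        for event in audit_log
        if event.action == "abuse_report.create"
        and event.target == f"user/{username}"
        and (cutoff is None or event.created_at >= cutoff)
    )


record("bob", "content.edit", "discussion/42")
record("alice", "content.edit", "discussion/42")
record("alice", "abuse_report.create", "user/bob")
print(reports_against("bob"))                      # 1
print(reports_against("bob", timedelta(days=30)))  # 1
```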
Minimizing abuse
Many sites are optimized for easy account creation, but this often leads to spam or sock puppet (throwaway) accounts that are handy harassment tools. Limiting what brand-new, 0-day accounts can do with high-risk features can help curb abuse.
Some example questions we ask to ensure minimal abuse vectors include:
- How old is Bob’s account? Should you allow 0-day accounts to participate in this interaction? (See the sketch after this list.)
- Are you rate limiting interactions?
- Should you consider hiding or minimizing 0-day account content?
- Do you have a reputation system? If so, how is it calculated? Do users with good reputations have privileges that those with poor reputations would desire?
- How are you treating content from reported/suspended users? Is it hidden? Deleted? Minimized on load?
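Here is a minimal sketch of age-gating, rate limiting, and minimizing content from suspended users. The thresholds and helper names (`may_participate`, `should_minimize`) are hypothetical; the right limits depend on the risk profile of your feature.

```python
# A minimal, hypothetical sketch of gating a high-risk interaction on account
# age and activity rate. The thresholds are placeholders, not recommendations.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=1)   # keep 0-day accounts out of this feature
MAX_ACTIONS_PER_HOUR = 20             # crude rate limit


@dataclass
class Account:
    username: str
    created_at: datetime
    suspended: bool = False
    recent_actions: list = field(default_factory=list)  # action timestamps


def may_participate(account: Account) -> bool:
    """Gate the interaction on suspension, account age, and rate of activity."""
    now = datetime.now(timezone.utc)
    if account.suspended:
        return False
    if now - account.created_at < MIN_ACCOUNT_AGE:
        # 0-day account: block, minimize, or queue for review instead.
        return False
    last_hour = [t for t in account.recent_actions if now - t < timedelta(hours=1)]
    return len(last_hour) < MAX_ACTIONS_PER_HOUR


def should_minimize(author: Account) -> bool:
    """Hide or collapse content from suspended (or heavily reported) users on load."""
    return author.suspended


bob = Account("bob", created_at=datetime.now(timezone.utc))  # brand-new account
print(may_participate(bob))  # False: 0-day accounts can't use this feature yet
```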
These are just some things to think about that can help your teams curb abuse vectors on new features before they go out. We hope that this checklist will help you build safer products and lead to happier users.