Discord is undergoing a significant revamp of its moderation system, introducing a new warning system that favors transparency and gives users a chance to learn from their mistakes rather than immediately handing out permanent bans for rule violations. Under the new system, most violations will carry one-year temporary bans, with exceptions for extremely harmful infractions.
In an effort to educate users and improve online interactions, Discord is introducing a new account standing section reminiscent of Microsoft's Xbox strike system. Unlike the "three strikes and you're out" approach taken by some platforms, Discord aims to give users, particularly teens, the opportunity to correct and improve their behavior.
In conjunction with this warning system, Discord is introducing a "teen safety assist" feature, which includes proactive filters and alerts that will be enabled by default for teens. In the coming weeks, Discord will roll out a sensitive content filter that blurs sexually explicit content in direct messages (DMs) and in servers. The filter will be on by default for teens; adults can opt in, and it can be turned off in settings.
To implement this feature, Discord will scan image attachments to detect explicit content. While this may raise privacy concerns, Discord emphasizes that the scanning is performed by AI models rather than human reviewers.
Beyond content filtering, Discord plans to expand its AI models to address other forms of harmful content, such as detecting sexual exploitation on the platform.
Discord's effort to bolster safety and moderation comes alongside a range of other feature updates, including an in-app shop, a new Midnight dark theme on mobile, and various improvements for users and developers.
It's evident that Discord is striving to provide a safer and more educational environment for its users, particularly for teens, as it continues to evolve its platform.