How We Moderate the Platform

Keeping PlaylistFeed safe, fair, and trustworthy

PlaylistFeed uses a combination of automation, human review, and community reporting to maintain a respectful and compliant environment for artists and curators. This article explains how moderation works and what actions we take to protect the platform.


1. Community Reporting

Users can report:

  • Inappropriate content
  • Harassment or abuse
  • Spam or fake accounts
  • Suspected pay-for-placement (payola) activity
  • Playlist or profile misuse

Every report is reviewed by our team. Reports are confidential, and the reported user is never told who filed the report.


2. Automated Detection Systems

We use automated tools to monitor for:

  • Repeated spam submissions
  • Fake engagement (likes, saves, follows)
  • Duplicate accounts or bot activity
  • Sudden, unusual activity patterns (e.g. mass requests, suspicious growth)

Flagged accounts are manually reviewed before any action is taken.


3. Manual Reviews

When needed, our moderation team investigates:

  • Reports submitted through the platform or support
  • High-risk behavior flagged by automated systems
  • Accounts involved in potential fraud, harassment, or platform manipulation

We look at context, frequency, and intent before taking action.


4. Actions We May Take

Based on severity and frequency, we may:

  • Send a warning
  • Temporarily limit certain features (e.g. submissions or Boosts)
  • Remove content or playlists
  • Suspend or permanently ban accounts

We aim to be fair and, when appropriate, give users a chance to correct their behavior.


5. Appeals

If you believe a moderation action was applied in error, you can contact [Support] to request a review. Include any relevant details to help us investigate.


6. Our Goal

Our goal is to support genuine music discovery while protecting users from abuse, spam, and manipulation. Moderation is handled with care and accountability to maintain trust across the platform.