Profanity Censor (experimental)

Please note that this is a research preview of our profanity-censoring model.

Overview

Fastino’s Profanity Censor model is designed to detect and redact profane, vulgar, or inappropriate language from user-generated content in real time. It offers developers precise control over content moderation, brand safety, and community standards enforcement—without compromising latency or accuracy. The model works in a zero-shot fashion and supports adjustable sensitivity via a configurable confidence threshold.

When enabled, redaction replaces offensive terms with a placeholder token (<REDACTED_PROFANITY>), making it easy to sanitize text before storage, display, or analysis.
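The same redaction can be reproduced client-side from the entity spans the API returns, which is useful if you want to store the original text and redact only at display time. A minimal sketch, assuming the entity-dict shape shown in the example response below (the helper name `redact` is our own, not part of the API):

```python
# Replace each detected profanity span with the placeholder token.
# Spans are applied right-to-left so earlier character offsets stay valid.
PLACEHOLDER = "<REDACTED_PROFANITY>"

def redact(text: str, entities: list[dict]) -> str:
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        text = text[:ent["start"]] + PLACEHOLDER + text[ent["end"]:]
    return text

print(redact("This feature is damn helpful!",
             [{"start": 16, "end": 20, "label": "profanity"}]))
# → This feature is <REDACTED_PROFANITY> helpful!
```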

Example Use Cases

  • Censoring offensive language in chat apps, forums, or comment sections

  • Filtering user reviews, feedback, or customer messages for brand compliance

  • Pre-processing content before display in moderated environments (e.g., educational tools, gaming platforms)

  • Enabling profanity-aware search, analytics, or sentiment analysis

  • Preventing toxic input in LLM-based assistant pipelines or content creation tools

Usage

Example Body
{
  "model_id": "fastino-profanity-censor-••••••••••",
  "input": [
    {
      "text": "This feature is damn helpful!",
      "parameters": {
        "threshold": 0.3
      }
    }
  ]
}
Example Response
[
  {
    "input": "This feature is damn helpful!",
    "latency_ms": 18.27,
    "message": "Responses from endpoint.",
    "output": {
      "entities": [
        {
          "start": 16,
          "end": 20,
          "label": "profanity",
          "text": "damn",
          "score": 0.84
        }
      ],
      "redacted_text": "This feature is <REDACTED_PROFANITY> helpful!"
    },
    "status": "success"
  }
]
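If you need to apply a stricter cutoff than the one sent in the request, the returned `score` field lets you re-filter entities client-side without another API call. A minimal sketch against the response shape above (the function name `filter_entities` is our own, not part of the API):

```python
# Keep only entities whose confidence score meets a chosen threshold,
# mirroring the server-side "threshold" parameter.
def filter_entities(response_item: dict, threshold: float) -> list[dict]:
    entities = response_item["output"]["entities"]
    return [e for e in entities if e["score"] >= threshold]

item = {
    "output": {
        "entities": [
            {"start": 16, "end": 20, "label": "profanity",
             "text": "damn", "score": 0.84}
        ]
    }
}

print(filter_entities(item, 0.3))  # the example entity passes at 0.3
print(filter_entities(item, 0.9))  # → [] (0.84 falls below 0.9)
```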