Use the API


Create an API key

To obtain an API key for your Fastino model, log in to the Fastino platform and navigate to the "Keys" section in the sidebar. From there, you can create as many keys as you need.

The ID visible in your account is not the API key itself; it is merely an identifier. Store your keys in a safe place: you will not be able to retrieve them again after they are created.

When generating a new API key, allow up to 60 seconds for the key to propagate before using it.
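Because a freshly created key can take up to a minute to propagate, a client may want to retry authentication failures with a short delay rather than failing immediately. A minimal sketch, assuming the caller's request function raises PermissionError on a 401/403 response (the helper and error convention are illustrative, not part of any Fastino SDK):

```python
import time

def call_with_retry(request_fn, retries=5, delay_s=15):
    """Retry request_fn on authentication errors, e.g. while a new key propagates.

    request_fn is assumed to raise PermissionError on a 401/403 response.
    """
    for attempt in range(retries):
        try:
            return request_fn()
        except PermissionError:
            if attempt == retries - 1:
                raise  # key still not accepted after all retries
            time.sleep(delay_s)  # wait for the key to propagate, then retry

# Example with a stub that succeeds on the third attempt:
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise PermissionError("401 Unauthorized")
    return {"status": "success"}

result = call_with_retry(fake_request, delay_s=0)  # delay_s=0 only for the demo
```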

Call the API

Fastino's inference API runs models through a single endpoint: send a POST request to https://api.fastino.com/run with a JSON payload specifying the model and its input, and the API returns the model's output in real time. Requests are authenticated with your API key, passed in the x-api-key header.

Endpoint

POST https://api.fastino.com/run

Headers

Name           Value
x-api-key      Your API key generated on the platform (see Create an API key)
Content-Type   application/json

Body

model_id (string)
Unique identifier for the model. This value can be copied from the Models screen on the platform UI.

input (array of objects)
An array where each object represents a single model inference call. For PRO and TEAM plans, multiple objects can be included to perform batch inference in a single request.

input > text (string)
The text being processed by the language model.

input > parameters (object)
The parameters specifying how the model should process the text. Parameters vary depending on the expected input format of the specific model being used. If not specified, the inference parameters revert to defaults.

Example Body
{
  "model_id": "fastino-pii",
  "input": [
    {
      "text": "9 AM for a Sedan for Jamie Derran",
      "parameters": {
        "entity_types": [
          "full_name",
          "car",
          "time"
        ],
        "threshold": 0.3
      }
    }
  ]
}
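The request above can be assembled and sent with any HTTP client. A minimal sketch using Python's standard library, with the endpoint and header names taken from the tables above (the key value is a placeholder; generate a real key on the platform):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: create a real key in the "Keys" section

def build_request(model_id, texts, parameters=None):
    """Build the POST request for https://api.fastino.com/run.

    Each entry in `texts` becomes one object in the `input` array
    (batching more than one object requires a PRO or TEAM plan).
    """
    payload = {
        "model_id": model_id,
        "input": [
            {"text": t, **({"parameters": parameters} if parameters else {})}
            for t in texts
        ],
    }
    return urllib.request.Request(
        "https://api.fastino.com/run",
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_request(
    "fastino-pii",
    ["9 AM for a Sedan for Jamie Derran"],
    parameters={"entity_types": ["full_name", "car", "time"], "threshold": 0.3},
)
# Sending the request (requires a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```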

Response

input (string)
Original user-provided string.

latency_ms (number)
Model inference time (excludes network and API gateway latency).

message (string)
Human-readable message accompanying the response (see the example below).

output (object)
The output of the model. Structure varies depending on the model being used.

status (string)
Overall request status (e.g., success, error).

Example Response
[{
  "input": "9 AM for a Sedan for Jamie Derran",
  "latency_ms": 77.31,
  "message": "Responses from endpoint.",
  "output": {
    "entities": [
      {
        "start": 0,
        "end": 4,
        "label": "time",
        "text": "9 AM",
        "score": 0.5893954038619995
      },
      {
        "start": 11,
        "end": 16,
        "label": "car",
        "text": "Sedan",
        "score": 0.8889331817626953
      },
      {
        "start": 21,
        "end": 33,
        "label": "full_name",
        "text": "Jamie Derran",
        "score": 0.9932531714439392
      }
    ],
    "redacted_text": "<TIME> for a <CAR> for <FULL_NAME>"
  },
  "status": "success"
}]
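Since each entity in the response carries a confidence score, callers will often filter the list before acting on it. A short sketch over the sample response above (the 0.8 cutoff is an arbitrary choice for illustration):

```python
# Sample response body, abridged to the fields this sketch uses.
response = [{
    "input": "9 AM for a Sedan for Jamie Derran",
    "output": {
        "entities": [
            {"start": 0, "end": 4, "label": "time", "text": "9 AM", "score": 0.589},
            {"start": 11, "end": 16, "label": "car", "text": "Sedan", "score": 0.889},
            {"start": 21, "end": 33, "label": "full_name", "text": "Jamie Derran", "score": 0.993},
        ],
        "redacted_text": "<TIME> for a <CAR> for <FULL_NAME>",
    },
    "status": "success",
}]

def entities_above(result, min_score):
    """Keep only entities whose confidence meets the threshold."""
    return [e for e in result["output"]["entities"] if e["score"] >= min_score]

confident = entities_above(response[0], 0.8)
labels = [e["label"] for e in confident]
```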
