POST /tts/v1/voice:stream
curl --request POST \
  --url https://api.inworld.ai/tts/v1/voice:stream \
  --header 'Authorization: Basic <api-key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "text": "Hello, world! What a wonderful day to be a text-to-speech model!",
    "voiceId": "Dennis",
    "modelId": "inworld-tts-1",
    "timestampType": "WORD"
  }'
{
  "result": {
    "audioContent": "UklGRiRQAQBXQVZFZm1...",
    "timestampInfo": {
      "wordAlignment": {
        "words": [
          "What",
          "a",
          "wonderful",
          "day",
          "to",
          "be",
          "a",
          "text-to-speech",
          "model!"
        ],
        "wordStartTimeSeconds": [
          1.246,
          1.511,
          1.613,
          2.04,
          2.203,
          2.305,
          2.427,
          2.509,
          3.201
        ],
        "wordEndTimeSeconds": [
          1.47,
          1.531,
          1.979,
          2.163,
          2.244,
          2.387,
          2.448,
          3.161,
          3.527
        ]
      }
    }
  }
}
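Each streamed object looks like the sample above. Below is a minimal consumer sketch in Python; it assumes the stream arrives as newline-delimited JSON (typical for gRPC-gateway streaming endpoints) and uses the requests library. Verify the framing against your actual responses.

import base64
import json
import os

import requests

url = "https://api.inworld.ai/tts/v1/voice:stream"
headers = {
    # Assumes the base64 credential is exported in this environment variable.
    "Authorization": f"Basic {os.environ['INWORLD_RUNTIME_BASE64_CREDENTIAL']}",
    "Content-Type": "application/json",
}
body = {
    "text": "Hello, world! What a wonderful day to be a text-to-speech model!",
    "voiceId": "Dennis",
    "modelId": "inworld-tts-1",
    "timestampType": "WORD",
}

with requests.post(url, headers=headers, json=body, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue  # skip keep-alive blank lines
        chunk = json.loads(line)
        if "error" in chunk:
            raise RuntimeError(chunk["error"])  # stream-level error object
        audio_b64 = chunk.get("result", {}).get("audioContent")
        if audio_b64:
            wav_bytes = base64.b64decode(audio_b64)
            # For PCM, each chunk carries its own WAV header and is
            # independently playable; buffer or play wav_bytes here.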

Authorizations

Authorization
string · header · required

Your authentication credentials. For Basic authentication, set the header value to Basic $INWORLD_RUNTIME_BASE64_CREDENTIAL.
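
As a sketch, a Basic credential is conventionally the base64 encoding of your key and secret joined by a colon; the exact pairing Inworld expects is an assumption here, so prefer the ready-made credential string from your workspace if you have one.

import base64

# Hypothetical <api-key>/<api-secret> placeholders; substitute real values.
credential = base64.b64encode(b"<api-key>:<api-secret>").decode()
headers = {"Authorization": f"Basic {credential}"}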

Body

application/json
text
string · required

The text to be synthesized into speech. Maximum input of 2,000 characters.

voiceId
string · required

The ID of the voice to use for synthesizing speech.

modelId
string · required

The ID of the model to use for synthesizing speech. See Models for available models.

audioConfig
object

Configurations to use when synthesizing speech.

temperature
number<float> · default: 1.1

Determines the degree of randomness when sampling audio tokens to generate the response.

Defaults to 1.1. Accepts values between 0 and 2. Higher values will make the output more random and can lead to more expressive results. Lower values will make it more deterministic.

For the most stable results, we recommend using the default value.
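
For example, a request body that lowers the sampling temperature for more deterministic output, with temperature nested under the audioConfig object described above:

{
  "text": "Hello, world!",
  "voiceId": "Dennis",
  "modelId": "inworld-tts-1",
  "audioConfig": {
    "temperature": 0.8
  }
}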

timestampType
enum<string> · default: TIMESTAMP_TYPE_UNSPECIFIED

Controls timestamp metadata returned with the audio. When enabled, the response includes timing arrays, which are useful for word highlighting, karaoke-style captions, and lip sync (see the sketch after the options list below).

  • WORD: Output arrays under timestampInfo.wordAlignment (words, wordStartTimeSeconds, wordEndTimeSeconds).
  • CHARACTER: Output arrays under timestampInfo.characterAlignment (characters, characterStartTimeSeconds, characterEndTimeSeconds).
  • TIMESTAMP_TYPE_UNSPECIFIED: Do not compute alignment; timestamp arrays will be empty or omitted.

Enabling alignment slightly increases latency. Internal experiments show an average ~100 ms increase.

Note: Timestamp alignment currently supports English only; other languages are experimental. Additionally, timestamp alignment is currently only available for inworld-tts-1 and inworld-tts-1-max models. Support for the 1.5 models (inworld-tts-1.5-mini and inworld-tts-1.5-max) is coming soon.

Available options:
TIMESTAMP_TYPE_UNSPECIFIED,
WORD,
CHARACTER
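
As a quick Python sketch, the parallel arrays from the sample response above zip directly into caption cues:

# Word alignment arrays excerpted from the sample response above.
alignment = {
    "words": ["What", "a", "wonderful"],
    "wordStartTimeSeconds": [1.246, 1.511, 1.613],
    "wordEndTimeSeconds": [1.47, 1.531, 1.979],
}

for word, start, end in zip(
    alignment["words"],
    alignment["wordStartTimeSeconds"],
    alignment["wordEndTimeSeconds"],
):
    print(f"{start:.3f}s -> {end:.3f}s  {word}")
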
applyTextNormalization
enum<string> · default: APPLY_TEXT_NORMALIZATION_UNSPECIFIED

When enabled, text normalization automatically expands and standardizes things like numbers, dates, times, and abbreviations before converting them to speech. For example, Dr. Smith becomes Doctor Smith, and 3/10/25 is spoken as March tenth, twenty twenty-five. Turning this off may reduce latency, but the speech output will read the text exactly as written. Defaults to automatically deciding whether to apply text normalization.

Available options:
APPLY_TEXT_NORMALIZATION_UNSPECIFIED,
ON,
OFF

Response

A successful response returns a stream of objects.

result
object

A chunk containing the audio data. For PCM output, every chunk, not just the first, contains a complete WAV header, so each chunk can be decoded and played independently.

error
object

A chunk in the stream may contain an error object instead of a result if an error occurs mid-stream.