Voice Model (GLM-4-Voice)
GLM-4-Voice is a voice dialogue model that supports text-to-speech.
API Endpoints
POST /audio/speech
Text to speech
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model name: GLM-4-Voice |
| input | string | Required | Text to convert to speech |
| voice | string | Optional | Voice selection; default `alloy` |
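The three parameters above combine into a JSON request body as shown in the request example. A minimal sketch of a helper that assembles the body and applies the documented default voice (the helper itself is hypothetical, not part of the API):

```python
import json

def build_speech_request(input_text, model="GLM-4-Voice", voice="alloy"):
    # voice falls back to "alloy" per the parameter table above
    return {"model": model, "input": input_text, "voice": voice}

payload = build_speech_request("你好,我是智谱AI的语音助手")
print(json.dumps(payload, ensure_ascii=False))
```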
Request Example
{
"model": "GLM-4-Voice",
"input": "你好,我是智谱AI的语音助手",
"voice": "alloy"
}
Response Example
(Returns an audio file stream)
Code Examples
Python
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://your-proxy-domain.com/v1"
)
response = client.audio.speech.create(
model="GLM-4-Voice",
voice="alloy",
input="你好,我是智谱AI的语音助手"
)
# Save the audio file
response.stream_to_file("output.mp3")
JavaScript
import OpenAI from 'openai';
import fs from 'fs';
const client = new OpenAI({
apiKey: 'your-api-key',
baseURL: 'https://your-proxy-domain.com/v1'
});
async function generateSpeech() {
const response = await client.audio.speech.create({
model: 'GLM-4-Voice',
voice: 'alloy',
input: '你好,我是智谱AI的语音助手'
});
const buffer = Buffer.from(await response.arrayBuffer());
fs.writeFileSync('output.mp3', buffer);
}
generateSpeech();
cURL
curl https://your-proxy-domain.com/v1/audio/speech \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "GLM-4-Voice",
"input": "你好,我是智谱AI的语音助手",
"voice": "alloy"
}' \
--output output.mp3