Text Generation (Chat Completions)

Chat completion interface for the GLM-4 series of language models. Fully compatible with the OpenAI Chat Completions API format; supports streaming output and function calling.

API Endpoint

POST /chat/completions

Creates a chat completion.

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Required | Model name: GLM-4-Plus, GLM-4-Air, GLM-4-AirX, GLM-4-Long, GLM-4-FlashX, GLM-4-Flash |
| messages | array | Required | Array of conversation messages, each with a role and content |
| temperature | number | Optional | Sampling temperature, range 0-1, default 0.7 |
| max_tokens | integer | Optional | Maximum number of tokens to generate |
| stream | boolean | Optional | Whether to stream the output, default false |
| top_p | number | Optional | Nucleus sampling parameter, default 0.9 |
| tools | array | Optional | List of available tools (function calling) |
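The `tools` parameter follows the OpenAI function-calling schema: each entry declares a function with a name, description, and a JSON Schema for its parameters. A sketch of a request body using it; the `get_weather` function and its parameters are hypothetical, for illustration only:

```json
{
  "model": "GLM-4-Air",
  "messages": [
    {"role": "user", "content": "What is the weather in Beijing?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string", "description": "City name"}
          },
          "required": ["city"]
        }
      }
    }
  ]
}
```

When the model decides to call a declared function, the response message carries a `tool_calls` field instead of plain `content`.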

Request Example
{
  "model": "GLM-4-Air",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Please introduce Zhipu AI"}
  ],
  "temperature": 0.7,
  "top_p": 0.9,
  "stream": false
}

Response Example
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "GLM-4-Air",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Zhipu AI is a company focused on large models and cognitive intelligence technologies..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 100,
    "total_tokens": 120
  }
}

Code Examples

Python

from openai import OpenAI

# Use the proxy API
client = OpenAI(
    api_key="your-api-key",
    base_url="https://your-proxy-domain.com/v1"
)

response = client.chat.completions.create(
    model="GLM-4-Air",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Please introduce Zhipu AI"}
    ],
    temperature=0.7,
    stream=False
)

print(response.choices[0].message.content)
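
With `stream=True`, an OpenAI-compatible endpoint returns the answer as a sequence of `chat.completion.chunk` objects whose `choices[0].delta.content` fields carry text increments. A minimal sketch of assembling those increments into the full reply; the sample chunks below are illustrative, not real API output:

```python
def assemble_stream(chunks):
    """Concatenate the content deltas of streamed chat.completion.chunk objects."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        # The role-only first chunk and the final finish_reason chunk carry no content.
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Illustrative chunk sequence as plain dicts (real chunks arrive as SSE "data:" events)
sample_chunks = [
    {"choices": [{"index": 0, "delta": {"role": "assistant"}}]},
    {"choices": [{"index": 0, "delta": {"content": "Zhipu AI "}}]},
    {"choices": [{"index": 0, "delta": {"content": "is an AI company."}}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]},
]

print(assemble_stream(sample_chunks))  # Zhipu AI is an AI company.
```

With the `openai` SDK the same loop applies directly to the live stream: iterate over `client.chat.completions.create(..., stream=True)` and read `chunk.choices[0].delta.content`, skipping chunks where it is `None`.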

JavaScript

import OpenAI from 'openai';

// Use the proxy API
const client = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://your-proxy-domain.com/v1'
});

async function chat() {
  const response = await client.chat.completions.create({
    model: 'GLM-4-Air',
    messages: [
      { role: 'system', content: 'You are a helpful assistant' },
      { role: 'user', content: 'Please introduce Zhipu AI' }
    ],
    temperature: 0.7,
    stream: false
  });

  console.log(response.choices[0].message.content);
}

chat();

cURL

curl https://your-proxy-domain.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "GLM-4-Air",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant"},
      {"role": "user", "content": "Please introduce Zhipu AI"}
    ],
    "temperature": 0.7
  }'