
Overview

MistralLLMService provides access to Mistral’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and vision with Mistral-specific optimizations for tool use and message handling.

Mistral LLM API Reference

Pipecat’s API methods for Mistral integration

Example Implementation

Complete example with function calling

Mistral Documentation

Official Mistral API documentation and features

Mistral Console

Access models and manage API keys

Installation

To use Mistral services, install the required dependency:
pip install "pipecat-ai[mistral]"

Prerequisites

Mistral Account Setup

Before using Mistral LLM services, you need:
  1. Mistral Account: Sign up at Mistral Console
  2. API Key: Generate an API key from your console dashboard
  3. Model Selection: Choose from available models (Mistral Small, Mistral Large, etc.)

Required Environment Variables

  • MISTRAL_API_KEY: Your Mistral API key for authentication

Configuration

api_key (str, required)
Mistral API key for authentication.

base_url (str, default: "https://api.mistral.ai/v1")
Base URL for the Mistral API endpoint.

model (str, default: None, deprecated)
Model identifier to use. Deprecated in v0.0.105; use settings=MistralLLMService.Settings(model=...) instead.

settings (MistralLLMService.Settings, default: None)
Runtime-configurable settings. See Settings below.

Settings

Runtime-configurable settings passed via the settings constructor argument using MistralLLMService.Settings(...). These can be updated mid-conversation with LLMUpdateSettingsFrame. See Service Settings for details. This service uses the same settings as OpenAILLMService. See OpenAI LLM Settings for the full parameter reference.

Usage

Basic Setup

import os
from pipecat.services.mistral import MistralLLMService

llm = MistralLLMService(
    api_key=os.getenv("MISTRAL_API_KEY"),
    model="mistral-small-latest",
)

With Custom Settings

import os
from pipecat.services.mistral import MistralLLMService

llm = MistralLLMService(
    api_key=os.getenv("MISTRAL_API_KEY"),
    settings=MistralLLMService.Settings(
        model="mistral-large-latest",
        temperature=0.7,
        top_p=0.9,
        max_completion_tokens=1024,
    ),
)

Updating Settings at Runtime

from pipecat.frames.frames import LLMUpdateSettingsFrame
from pipecat.services.mistral.llm import MistralLLMSettings

await task.queue_frame(
    LLMUpdateSettingsFrame(
        delta=MistralLLMSettings(
            temperature=0.3,
            max_tokens=512,
        )
    )
)

Notes

  • Function calling: Mistral supports tool/function calling. The service includes deduplication logic to prevent repeated execution of the same function calls across conversation turns.
  • Mistral API constraints: The service automatically handles Mistral-specific requirements, such as ensuring tool result messages are followed by an assistant message and that system messages appear only at the start of the conversation.
  • Vision: Supports image inputs via base64-encoded JPEG content.
  • Default model: mistral-small-latest is used when no model is specified.
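To make the message-handling note concrete, here is an illustrative, OpenAI-compatible conversation history that already satisfies Mistral's ordering constraints. The service enforces these automatically, so you normally do not construct messages this way yourself; all IDs and contents below are placeholders.

```python
# Illustrative conversation history that satisfies Mistral's constraints:
# the system message appears only at the start, and the tool result is
# followed by an assistant message. All values are placeholders.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"city": "Paris"}',
                },
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": '{"conditions": "sunny"}'},
    {"role": "assistant", "content": "It's currently sunny in Paris."},
]
```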
The InputParams / params= pattern is deprecated as of v0.0.105. Use Settings / settings= instead. See the Service Settings guide for migration details.
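As a sketch of the function-calling support noted above: the tool definition below uses the standard OpenAI-compatible schema, which the service forwards through its OpenAI-compatible interface. The handler shape is illustrative only; wiring it up (for example via the service's register_function, following Pipecat's usual function-calling pattern) is assumed here rather than shown.

```python
# An OpenAI-compatible tool definition, as accepted through the
# service's OpenAI-compatible interface.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "The city name."}
                },
                "required": ["city"],
            },
        },
    }
]


async def get_weather(params):
    # Hypothetical handler shape: return the result through the
    # callback supplied by the framework.
    await params.result_callback({"city": "Paris", "conditions": "sunny"})
```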