
ChatMistralAI

This will help you get started with ChatMistralAI chat models. For detailed documentation of all ChatMistralAI features and configurations, head to the API reference.

Overview​

Integration details​

| Class | Package | Local | Serializable | PY support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatMistralAI | @langchain/mistralai | ❌ | ❌ | ✅ | NPM - Downloads | NPM - Version |

Model features​

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |

Setup​

To access ChatMistralAI models you'll need to create a Mistral account, get an API key, and install the @langchain/mistralai integration package.

Credentials​

Sign up for Mistral AI and generate an API key. Once you've done this, set the MISTRAL_API_KEY environment variable:

export MISTRAL_API_KEY="your-api-key"
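
If you prefer not to rely on environment variables, the key can also be passed directly when constructing the model. A minimal sketch, assuming the `apiKey` constructor field (prefer the environment variable in real projects so the key stays out of source code):

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

// Sketch only: passing the key explicitly instead of reading MISTRAL_API_KEY
// from the environment. The `apiKey` field is assumed here; see the API
// reference for the full list of constructor options.
const keyedModel = new ChatMistralAI({
  apiKey: "your-api-key",
  model: "mistral-small",
});
```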


If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting the lines below:

```bash
# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"
```

Installation

The LangChain ChatMistralAI integration lives in the `@langchain/mistralai` package:

```bash npm2yarn
npm i @langchain/mistralai
```

Instantiation

Now we can instantiate our model object and generate chat completions:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-small",
  temperature: 0,
  maxTokens: undefined,
  maxRetries: 2,
  // other params...
});
```


Invocation​

When sending chat messages to Mistral, there are a few requirements to follow (see the sketch after this list):

  • The first message cannot be an assistant (ai) message.
  • Messages must alternate between user and assistant (ai) messages.
  • Messages cannot end with an assistant (ai) or system message.
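As a quick illustration of these rules (a sketch, not part of the original example), a valid multi-turn conversation starts with an optional system message, alternates human and assistant messages, and ends on a human turn:

```typescript
import { SystemMessage, HumanMessage, AIMessage } from "@langchain/core/messages";

// A valid ordering: system message first, then alternating human/assistant
// turns, ending on a human message.
const conversation = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Translate 'I love programming' into French."),
  new AIMessage("J'aime programmer."),
  new HumanMessage("Now translate it into German."),
];

// Starting with an AIMessage, or ending with an AIMessage or SystemMessage,
// would break the rules above and be rejected by the API.
const followUp = await llm.invoke(conversation);
```

The example below sends a system message followed by a single human message, which also satisfies these rules: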
const aiMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
]);
aiMsg;
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: `Sure, I'd be happy to help you translate that sentence into French! The English sentence "I love pro`... 126 more characters,
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { tool_calls: [] },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: `Sure, I'd be happy to help you translate that sentence into French! The English sentence "I love pro`... 126 more characters,
  name: undefined,
  additional_kwargs: { tool_calls: [] },
  response_metadata: {
    tokenUsage: { completionTokens: 52, promptTokens: 32, totalTokens: 84 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}
console.log(aiMsg.content);
Sure, I'd be happy to help you translate that sentence into French! The English sentence "I love programming" translates to "J'aime programmer" in French. Let me know if you have any other questions or need further assistance!
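
Since the feature table above lists token-level streaming as supported, you can also consume the response incrementally. A minimal sketch using the standard `.stream()` method available on LangChain chat models:

```typescript
// Stream the reply chunk by chunk instead of waiting for the full message.
const stream = await llm.stream([
  ["system", "You are a helpful assistant that translates English to French."],
  ["human", "I love programming."],
]);

for await (const chunk of stream) {
  // Each chunk is an AIMessageChunk containing a fragment of the reply.
  console.log(chunk.content);
}
```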

Chaining​

We can chain our model with a prompt template like so:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

const chain = prompt.pipe(llm);
await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Ich liebe Programmierung. (German translation)",
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { tool_calls: [] },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Ich liebe Programmierung. (German translation)",
  name: undefined,
  additional_kwargs: { tool_calls: [] },
  response_metadata: {
    tokenUsage: { completionTokens: 12, promptTokens: 26, totalTokens: 38 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}
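
If you only need the text of the reply rather than the full AIMessage, you can extend the chain with an output parser. A small sketch (not part of the original example) using `StringOutputParser` from @langchain/core:

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";

// Append a parser so the chain returns a plain string instead of an AIMessage.
const stringChain = prompt.pipe(llm).pipe(new StringOutputParser());

const text = await stringChain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
// `text` is now a plain string such as "Ich liebe Programmierung."
```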

Tool calling​

Mistral's API now supports tool calling and JSON mode. The examples below demonstrate how to use them, along with the withStructuredOutput method for easily composing structured output LLM calls.

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { tool } from "@langchain/core/tools";

const calculatorSchema2 = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const calculatorTool2 = tool(
  (input) => {
    return JSON.stringify(input);
  },
  {
    name: "calculator",
    description: "A simple calculator tool",
    schema: calculatorSchema2,
  }
);

const llm2 = new ChatMistralAI({
  model: "mistral-large-latest",
});

// Bind the tool to the model
const modelWithTool2 = llm2.bind({
  tools: [calculatorTool2],
});

const prompt2 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain2 = prompt2.pipe(modelWithTool2);

const response2 = await chain2.invoke({
  input: "What is 2 + 2?",
});
console.log(response2.tool_calls);
[
  {
    name: "calculator",
    args: { operation: "add", number1: 2, number2: 2 },
    id: "Qcw6so4hJ"
  }
]
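
Note that the model only proposes the call; nothing has run the calculator yet. As a sketch (not part of the original example), you could execute the tool yourself with the arguments the model produced:

```typescript
const toolCall = response2.tool_calls?.[0];
if (toolCall) {
  // Run the tool with the model-proposed arguments. The cast is only to
  // satisfy TypeScript; the args are expected to match calculatorSchema2.
  const result = await calculatorTool2.invoke(
    toolCall.args as z.infer<typeof calculatorSchema2>
  );
  console.log(result); // e.g. '{"operation":"add","number1":2,"number2":2}'
}
```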

.withStructuredOutput({ ... })​

Using the .withStructuredOutput method, you can easily make the LLM return structured output, given only a Zod or JSON schema:

Note: The Mistral tool calling API requires a description for every tool field. If descriptions are not supplied, the API will error.

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const calculatorSchema3 = z
  .object({
    operation: z
      .enum(["add", "subtract", "multiply", "divide"])
      .describe("The type of operation to execute."),
    number1: z.number().describe("The first number to operate on."),
    number2: z.number().describe("The second number to operate on."),
  })
  .describe("A simple calculator tool");

const llm3 = new ChatMistralAI({
  model: "mistral-large-latest",
});

// Pass the schema to the withStructuredOutput method
const modelWithTool3 = llm3.withStructuredOutput(calculatorSchema3);

const prompt3 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain3 = prompt3.pipe(modelWithTool3);

const response3 = await chain3.invoke({
  input: "What is 2 + 2?",
});
console.log(response3);
{ operation: "add", number1: 2, number2: 2 }

You can supply a `name` field to give the LLM additional context about what you are trying to generate. You can also pass `includeRaw` to get the raw message back from the model as well.

const includeRawModel3 = llm3.withStructuredOutput(calculatorSchema3, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain3 = prompt3.pipe(includeRawModel3);

const includeRawResponse3 = await includeRawChain3.invoke({
  input: "What is 2 + 2?",
});
console.dir(includeRawResponse3, { depth: null });
{
  raw: AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "",
      tool_calls: [
        {
          name: "calculator",
          args: { operation: "add", number1: 2, number2: 2 },
          id: "qQz1AWzNd"
        }
      ],
      invalid_tool_calls: [],
      additional_kwargs: {
        tool_calls: [
          {
            id: "qQz1AWzNd",
            function: {
              name: "calculator",
              arguments: '{"operation": "add", "number1": 2, "number2": 2}'
            }
          }
        ]
      },
      response_metadata: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "",
    name: undefined,
    additional_kwargs: {
      tool_calls: [
        {
          id: "qQz1AWzNd",
          function: {
            name: "calculator",
            arguments: '{"operation": "add", "number1": 2, "number2": 2}'
          }
        }
      ]
    },
    response_metadata: {
      tokenUsage: { completionTokens: 34, promptTokens: 205, totalTokens: 239 },
      finish_reason: "tool_calls"
    },
    tool_calls: [
      {
        name: "calculator",
        args: { operation: "add", number1: 2, number2: 2 },
        id: "qQz1AWzNd"
      }
    ],
    invalid_tool_calls: []
  },
  parsed: { operation: "add", number1: 2, number2: 2 }
}
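The examples above rely on Mistral's tool calling under the hood. As a sketch of the JSON mode mentioned earlier (this assumes the `method: "jsonMode"` option is supported by this integration's `withStructuredOutput`, as it is for several LangChain chat models; check the API reference linked at the bottom of this page):

```typescript
// Hypothetical sketch: select JSON mode instead of tool calling.
// With JSON mode, the prompt may also need to tell the model to answer in JSON.
const jsonModeModel3 = llm3.withStructuredOutput(calculatorSchema3, {
  method: "jsonMode",
});

const jsonModeResponse3 = await prompt3.pipe(jsonModeModel3).invoke({
  input: "What is 2 + 2? Respond with a JSON object.",
});
console.log(jsonModeResponse3);
```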

Using JSON schema:​

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const calculatorJsonSchema4 = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
      description: "The type of operation to execute.",
    },
    number1: { type: "number", description: "The first number to operate on." },
    number2: {
      type: "number",
      description: "The second number to operate on.",
    },
  },
  required: ["operation", "number1", "number2"],
  description: "A simple calculator tool",
};

const llm4 = new ChatMistralAI({
  model: "mistral-large-latest",
});

// Pass the schema to the withStructuredOutput method
const modelWithTool4 = llm4.withStructuredOutput(calculatorJsonSchema4);

const prompt4 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain4 = prompt4.pipe(modelWithTool4);

const response4 = await chain4.invoke({
  input: "What is 2 + 2?",
});
console.log(response4);
{ operation: "add", number1: 2, number2: 2 }

Tool calling agent​

The larger Mistral models not only support tool calling, but can also be used in the Tool Calling agent. Here's an example:

import { z } from "zod";
import { ChatMistralAI } from "@langchain/mistralai";
import { tool } from "@langchain/core/tools";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm5 = new ChatMistralAI({
  temperature: 0,
  model: "mistral-large-latest",
});

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt5 = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

// Mocked tool
const currentWeatherTool5 = tool(async () => "28 °C", {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
});

const agent = createToolCallingAgent({
  llm: llm5,
  tools: [currentWeatherTool5],
  prompt: prompt5,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [currentWeatherTool5],
});

const input = "What's the weather like in Paris?";
const { output } = await agentExecutor.invoke({ input });

console.log(output);

API reference​

For detailed documentation of all ChatMistralAI features and configurations, head to the API reference: https://api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html

