Module Anthropic.Messages

Messages API for conversations with Claude.

This module provides functions to create messages and handle responses, supporting both synchronous and streaming modes.
type response_content_block =
  | Text_block of { text : string }
      (** Generated text content. *)
  | Tool_use_block of {
      id : string;  (** Unique identifier for this tool use. *)
      name : string;  (** Name of the tool to invoke. *)
      input : Yojson.Safe.t;  (** Arguments for the tool. *)
    }  (** Request to invoke a tool with specific arguments. *)
  | Thinking_block of { ... }
      (** Claude's internal reasoning (experimental). *)
  | Redacted_thinking_block of { ... }
      (** Redacted reasoning content. *)

response_content_block represents content in Claude's responses.

Response blocks differ from input blocks: they contain generated text, tool invocations, or thinking traces.
server_tool_usage tracks server-side tool usage statistics.
service_tier indicates the processing tier for the request.
type usage = {
  input_tokens : int;  (** Tokens processed from the input. *)
  output_tokens : int;  (** Tokens generated in the response. *)
  cache_creation_input_tokens : int option;  (** Tokens used to create cache entries. *)
  cache_read_input_tokens : int option;  (** Tokens read from cache. *)
  server_tool_use : server_tool_usage option;  (** Server-side tool usage. *)
  service_tier : service_tier option;  (** Processing tier used. *)
}

usage tracks token consumption for billing and limits.

Cache-related fields are populated when the caching beta feature is active. Token counts include all content, system prompts, and tool definitions.
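Since the cache-related counters are optional, totaling input-side tokens takes a little care. A minimal standalone sketch, using a simplified copy of the record (server_tool_use and service_tier omitted) rather than the module's actual type:

```ocaml
(* Simplified stand-in for the [usage] record above; not the library's
   own definition. *)
type usage = {
  input_tokens : int;
  output_tokens : int;
  cache_creation_input_tokens : int option;
  cache_read_input_tokens : int option;
}

(* Total input-side tokens, counting cache activity when present. *)
let total_input_tokens (u : usage) =
  let opt = Option.value ~default:0 in
  u.input_tokens
  + opt u.cache_creation_input_tokens
  + opt u.cache_read_input_tokens

let () =
  let u =
    { input_tokens = 120; output_tokens = 50;
      cache_creation_input_tokens = Some 30;
      cache_read_input_tokens = None }
  in
  Printf.printf "total input tokens: %d\n" (total_input_tokens u)
  (* prints "total input tokens: 150" *)
```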
type stop_reason = [
  | `End_turn  (** Natural conversation end. *)
  | `Max_tokens  (** Hit the token limit. *)
  | `Stop_sequence  (** Encountered a stop sequence. *)
  | `Tool_use  (** Stopped to use a tool. *)
  | `Other of string  (** Other stop reason. *)
]

stop_reason indicates why generation stopped.

This field is optional in responses.
type delta_usage = {
  input_tokens : int;  (** Cumulative input tokens. *)
  output_tokens : int;  (** Cumulative output tokens. *)
  cache_creation_input_tokens : int;  (** Cumulative cache creation tokens. *)
  cache_read_input_tokens : int;  (** Cumulative cache read tokens. *)
}

delta_usage tracks cumulative token usage in streaming responses.

Unlike the regular usage type, all fields are cumulative totals and non-optional in streaming delta events.
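Because the fields are cumulative, a consumer should keep the most recent value rather than summing deltas. A standalone sketch with a pared-down record (not the module's actual type):

```ocaml
(* Pared-down stand-in for [delta_usage]; fields are cumulative totals. *)
type delta_usage = { input_tokens : int; output_tokens : int }

(* The final count is the last value seen, not the sum of all deltas. *)
let final_output_tokens (deltas : delta_usage list) =
  List.fold_left (fun _ d -> d.output_tokens) 0 deltas

let () =
  let deltas =
    [ { input_tokens = 10; output_tokens = 5 };
      { input_tokens = 10; output_tokens = 12 };
      { input_tokens = 10; output_tokens = 20 } ]
  in
  Printf.printf "output tokens: %d\n" (final_output_tokens deltas)
  (* prints "output tokens: 20" *)
```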
type response = {
  id : string;  (** Unique identifier for this response. *)
  type_ : string;  (** The type of the response. *)
  model : string;  (** The model that generated the response. *)
  role : role;  (** Always `Assistant for responses. *)
  stop_reason : stop_reason option;  (** Why generation stopped. *)
  stop_sequence : string option;  (** The stop sequence encountered, if any. *)
  content : response_content_block list;  (** Generated content blocks. *)
  usage : usage;  (** Token usage statistics. *)
}

response contains Claude's complete reply to a message.

The response includes all generated content, metadata about why generation stopped, and token usage for monitoring costs.
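A small standalone sketch of branching on the optional stop_reason field, redeclaring the variant locally so it runs without the library:

```ocaml
type stop_reason =
  [ `End_turn | `Max_tokens | `Stop_sequence | `Tool_use | `Other of string ]

(* One reasonable way to handle each way generation can stop. *)
let describe : stop_reason option -> string = function
  | Some `End_turn | None -> "finished naturally"
  | Some `Max_tokens -> "truncated; consider raising max_tokens"
  | Some `Stop_sequence -> "hit a configured stop sequence"
  | Some `Tool_use -> "paused to call a tool; send back a tool result"
  | Some (`Other r) -> "stopped: " ^ r

let () = print_endline (describe (Some `Max_tokens))
(* prints "truncated; consider raising max_tokens" *)
```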
type stream_event =
  | Message_start of response
      (** Initial response metadata. *)
  | Content_block_start of {
      index : int;  (** Zero-based position in content array. *)
      content : response_content_block;  (** The starting content block. *)
    }  (** Beginning of a new content block. *)
  | Content_block_delta of {
      index : int;  (** Index of the content block being updated. *)
      delta : [ `Text of string | `Input_json of string ];  (** Incremental content. *)
    }  (** Partial content for a block. *)
  | Content_block_stop of { ... }
      (** End of a content block. *)
  | Message_delta of {
      stop_reason :
        [ `End_turn | `Max_tokens | `Stop_sequence | `Tool_use | `Other of string ];
      usage : delta_usage;  (** Cumulative token counts. *)
    }  (** Final message metadata. *)
  | Message_stop  (** End of the message stream. *)
  | Ping  (** Keep-alive signal. *)

stream_event represents incremental updates during streaming.

Events arrive in order, allowing real-time processing of Claude's response as it's generated.
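Text deltas can be folded into the final message as events arrive. A standalone sketch with a reduced event type (only the constructors needed here; not the module's actual definition):

```ocaml
(* Reduced stand-in for [stream_event]. *)
type event =
  | Content_block_delta of
      { index : int; delta : [ `Text of string | `Input_json of string ] }
  | Message_stop
  | Ping

(* Concatenate all text deltas, ignoring other events. *)
let accumulate_text (events : event list) =
  let buf = Buffer.create 64 in
  List.iter
    (function
      | Content_block_delta { delta = `Text t; _ } -> Buffer.add_string buf t
      | _ -> ())
    events;
  Buffer.contents buf

let () =
  let events =
    [ Content_block_delta { index = 0; delta = `Text "Hel" };
      Ping;
      Content_block_delta { index = 0; delta = `Text "lo" };
      Message_stop ]
  in
  print_endline (accumulate_text events)  (* prints "Hello" *)
```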
val send :
client ->
?max_tokens:int ->
?temperature:float ->
?top_k:int ->
?top_p:float ->
?stop_sequences:string list ->
?system:string ->
?tools:tool list ->
?tool_choice:tool_choice ->
?metadata:metadata ->
model:model ->
messages:message list ->
unit ->
(response, error) result

send client ~model ~messages ?max_tokens ... () sends messages to Claude and awaits a complete response.
This is the primary function for synchronous conversations with Claude. The response contains all generated content at once.
For streaming responses, see send_stream. For simple text queries, see simple_query.
Example: Sends a simple message.
let response =
Messages.send client ~model:`Claude_3_5_Sonnet_Latest
~messages:[ Message.user [ Content_block.text "Hello!" ] ]
~max_tokens:1000 ()
in
match response with
| Ok resp ->
List.iter
(function
| Messages.Text_block { text } -> print_endline text | _ -> ())
resp.content
| Error e -> Printf.eprintf "Error: %s\n" (string_of_error e)

val send_stream :
client ->
?max_tokens:int ->
?temperature:float ->
?top_k:int ->
?top_p:float ->
?stop_sequences:string list ->
?system:string ->
?tools:tool list ->
?tool_choice:tool_choice ->
?metadata:metadata ->
model:model ->
messages:message list ->
unit ->
(stream_event Eio.Stream.t, error) result

send_stream client ~model ~messages ... () sends messages to Claude and streams the response incrementally.
Returns an Eio stream of events, enabling real-time processing of Claude's response. The stream automatically closes when the response completes.
Parameters are identical to send.
For higher-level stream processing, see iter_stream for callbacks or accumulate_stream to collect the stream into a final response.
Example: Prints text as it arrives.
match
Messages.send_stream client ~model ~messages ~max_tokens:1000 ()
with
| Ok stream ->
Eio.Stream.iter
(function
| Content_block_delta { delta = `Text text; _ } ->
print_string text;
flush stdout
| Message_stop -> print_newline ()
| _ -> ())
stream
| Error e -> Printf.eprintf "Stream error: %s\n" (string_of_error e)

val iter_stream :
client ->
?max_tokens:int ->
?temperature:float ->
?top_k:int ->
?top_p:float ->
?stop_sequences:string list ->
?system:string ->
?tools:tool list ->
?tool_choice:tool_choice ->
?metadata:metadata ->
model:model ->
messages:message list ->
on_event:(stream_event -> unit) ->
on_error:(error -> unit) ->
unit ->
unit

iter_stream client ~on_event ~on_error ... () processes a stream with callbacks.
This function handles stream lifecycle automatically. It opens the stream, processes all events, and ensures proper cleanup.
Example: Collects streamed text with error handling.
let buffer = Buffer.create 1024 in
Messages.iter_stream client ~model:`Claude_3_5_Haiku_Latest ~messages
~max_tokens:500
~on_event:(function
| Content_block_delta { delta = `Text text; _ } ->
Buffer.add_string buffer text
| Message_stop ->
Printf.printf "Complete: %s\n" (Buffer.contents buffer)
| _ -> ())
~on_error:(fun e ->
Printf.eprintf "Streaming failed: %s\n" (string_of_error e))
()

accumulate_stream stream collects all stream events into a complete response.
This function consumes the entire stream, accumulating text deltas and updating content blocks until Message_stop is received. The final response contains all generated content as if it were created synchronously.
Example: Converts streaming to synchronous style.
match
Messages.send_stream client ~model ~messages ~max_tokens:1000 ()
with
| Ok stream -> (
match Messages.accumulate_stream stream with
| Ok response ->
Printf.printf "Got %d content blocks\n"
(List.length response.content)
| Error e ->
Printf.eprintf "Accumulation error: %s\n" (string_of_error e))
| Error e ->
Printf.eprintf "Stream creation error: %s\n" (string_of_error e)

user text creates a user message with text content.
Example:
let msg = Messages.user "What is the capital of France?"

assistant text creates an assistant message with text content.
Example:
let msg = Messages.assistant "The capital of France is Paris."

user_with_content blocks creates a user message with custom content blocks.
Example:
let msg =
Messages.user_with_content
[
Text "Here's an image:";
Image { media_type = "image/png"; data = base64_data };
]

assistant_with_content blocks creates an assistant message with custom content blocks.
val tool_result_message :
tool_use_id:string ->
content:Yojson.Safe.t ->
?is_error:bool ->
unit ->
message

tool_result_message ~tool_use_id ~content ?is_error () creates a user message containing a tool result.
The content parameter accepts structured JSON data which will be automatically serialized.
Example:
let msg =
Messages.tool_result_message ~tool_use_id:"tool_123"
~content:(`Assoc [ ("result", `Int 42) ])
()

extract_text blocks extracts the first text content from response blocks.
Example:
match Messages.extract_text response.content with
| Some text -> print_endline text
| None -> print_endline "No text content"

extract_all_text blocks extracts all text content from response blocks.
find_tool_use blocks finds the first tool use in response blocks. Returns (id, name, input) if found.
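The lookup find_tool_use performs can be sketched standalone (simplified block type; a plain string stands in for Yojson.Safe.t to avoid the dependency):

```ocaml
(* Simplified stand-in for [response_content_block]. *)
type block =
  | Text_block of { text : string }
  | Tool_use_block of { id : string; name : string; input : string }

(* First tool invocation in the list, if any. *)
let find_tool_use blocks =
  List.find_map
    (function
      | Tool_use_block { id; name; input } -> Some (id, name, input)
      | _ -> None)
    blocks

let () =
  let blocks =
    [ Text_block { text = "Let me check." };
      Tool_use_block
        { id = "tool_123"; name = "get_weather"; input = {|{"city":"Paris"}|} } ]
  in
  match find_tool_use blocks with
  | Some (id, name, _) -> Printf.printf "%s -> %s\n" id name
  | None -> print_endline "no tool use"
```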
response_to_input_content blocks converts response blocks to input blocks for use in conversation continuations.
response_content_to_input block converts a single response block to an input block.
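The round trip these converters enable (echoing the assistant's tool request back as input, followed by a tool result) can be sketched with simplified stand-in types, not the library's own:

```ocaml
(* Simplified stand-ins for response and input content blocks. *)
type response_block =
  | R_text of string
  | R_tool_use of { id : string; name : string }

type input_block =
  | I_text of string
  | I_tool_use of { id : string; name : string }
  | I_tool_result of { tool_use_id : string; content : string }

(* Mirror a response block as an input block for the next request. *)
let to_input = function
  | R_text t -> I_text t
  | R_tool_use { id; name } -> I_tool_use { id; name }

let () =
  let assistant_turn =
    List.map to_input
      [ R_text "Checking the weather.";
        R_tool_use { id = "tool_123"; name = "get_weather" } ]
  in
  let user_turn =
    [ I_tool_result { tool_use_id = "tool_123"; content = {|{"temp":21}|} } ]
  in
  Printf.printf "%d assistant blocks, %d user blocks\n"
    (List.length assistant_turn) (List.length user_turn)
```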