Module Saga_tokenizers
Tokenization library for Saga.
This module provides the main tokenization API, modeled on the HuggingFace Tokenizers design. It supports multiple tokenization algorithms (BPE, WordPiece, Unigram, Word-level, Character-level), text normalization, pre-tokenization, post-processing, and decoding.
Quick Start
Load a pretrained tokenizer:
let tokenizer = Tokenizer.from_file "tokenizer.json" |> Result.get_ok in
let encoding = Tokenizer.encode tokenizer "Hello world!" in
let ids = Encoding.get_ids encoding

Create a BPE tokenizer from scratch:
let tokenizer =
  Tokenizer.bpe
    ~vocab:[("hello", 0); ("world", 1); ("[PAD]", 2)]
    ~merges:[]
    ()
in
let encoding = Tokenizer.encode tokenizer "hello world" in
let text = Tokenizer.decode tokenizer [0; 1]

Train a new tokenizer:
let texts = [ "Hello world"; "How are you?"; "Hello again" ] in
let tokenizer =
Tokenizer.train_bpe (`Seq (List.to_seq texts)) ~vocab_size:1000 ()
in
Tokenizer.save_pretrained tokenizer ~path:"./my_tokenizer"Architecture
Tokenization proceeds through stages:
- Normalization: Clean and normalize text (lowercase, accent removal, etc.)
- Pre-tokenization: Split text into words or subwords
- Tokenization: Apply vocabulary-based encoding (BPE, WordPiece, etc.)
- Post-processing: Add special tokens, set type IDs
- Padding/Truncation: Adjust length for batching
Each stage is optional and configurable via builder methods.
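As a rough sketch of this staged design, the snippet below wires a normalizer, pre-tokenizer, and post-processor onto a BPE tokenizer. The builder names (Tokenizer.with_normalizer, with_pre_tokenizer, with_post_processor) and the stage constructors (Normalizers.lowercase, Pre_tokenizers.whitespace, Processors.bert) are illustrative assumptions, not the verbatim API of this module.

(* Illustrative sketch only: the with_* builders and the Normalizers /
   Pre_tokenizers / Processors constructors below are assumed names. *)
let tokenizer =
  Tokenizer.bpe ~vocab:[("hello", 0); ("world", 1); ("[PAD]", 2)] ~merges:[] ()
  |> Tokenizer.with_normalizer (Normalizers.lowercase ())        (* normalization *)
  |> Tokenizer.with_pre_tokenizer (Pre_tokenizers.whitespace ()) (* pre-tokenization *)
  |> Tokenizer.with_post_processor (Processors.bert ())          (* post-processing *)
in
let encoding = Tokenizer.encode tokenizer "Hello world" in
Encoding.get_ids encoding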
Post-processing patterns are model-specific:
- BERT: Adds CLS at start, SEP at end; type IDs distinguish sequences
- GPT-2: No special tokens by default; uses BOS/EOS if configured
- RoBERTa: Uses <s> and </s> tokens, similar to BERT but with a different format
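For instance, with a BERT-style post-processor configured, encoding a single sentence is expected to yield CLS at the front, SEP at the end, and all type IDs set to 0 (0 then 1 for a sentence pair). The Encoding.get_type_ids accessor in the sketch below is an assumed name; only Encoding.get_ids appears in this excerpt.

(* Sketch: assumes a BERT-style post-processor has been configured.
   Encoding.get_type_ids is an assumed accessor name. *)
let encoding = Tokenizer.encode tokenizer "Hello world" in
(* ids now start with the CLS id and end with the SEP id *)
let ids = Encoding.get_ids encoding in
(* type IDs are all 0 for a single sentence, 0 then 1 for a pair *)
let type_ids = Encoding.get_type_ids encoding in
(ids, type_ids)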
Text normalization (lowercase, NFD/NFC, accent stripping, etc.).
Pre-tokenization (whitespace splitting, punctuation handling, etc.).
Post-processing (adding CLS/SEP, setting type IDs, etc.).
Direction for padding or truncation: `Left (beginning) or `Right (end).
type special = {
  token : string;        (* The token text (e.g., "<pad>", "<unk>"). *)
  single_word : bool;    (* Whether this token must match whole words only. Default: false. *)
  lstrip : bool;         (* Whether to strip whitespace on the left. Default: false. *)
  rstrip : bool;         (* Whether to strip whitespace on the right. Default: false. *)
  normalized : bool;     (* Whether to apply normalization to this token. Default: true for regular tokens, false for special tokens. *)
}

Special token configuration.
Special tokens are not split during tokenization and can be skipped during decoding. Token IDs are assigned automatically when added to the vocabulary.
All special tokens share this single representation; their semantic meaning (pad, unk, bos, etc.) is contextual and not encoded in the type.
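For example, a padding token could be declared as below. The field values follow the documented defaults; how the record is then registered with a tokenizer (e.g. an add-special-tokens style call) is not shown in this excerpt.

(* A [PAD] special token: never split, no whitespace stripping, and not
   normalized (the default for special tokens). *)
let pad : special =
  { token = "[PAD]";
    single_word = false;
    lstrip = false;
    rstrip = false;
    normalized = false }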
Padding length strategy.

- `Batch_longest: Pad to the longest sequence in the batch
- `Fixed n: Pad all sequences to fixed length n
- `To_multiple n: Pad to the smallest multiple of n >= sequence length
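Concretely, the three strategies are plain values of pad_length:

let _to_longest : pad_length = `Batch_longest  (* match the longest sequence in the batch *)
let _to_128 : pad_length = `Fixed 128          (* pad all sequences to length 128 *)
let _to_mult_8 : pad_length = `To_multiple 8   (* round the length up to a multiple of 8 *)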
type padding = {
  length : pad_length;
  direction : direction;
  pad_id : int option;
  pad_type_id : int option;
  pad_token : string option;
}

Padding configuration.
When the optional fields are None, padding falls back to the tokenizer's configured padding token. If the tokenizer has no padding token configured and these fields are None, padding operations raise Invalid_argument.
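A concrete configuration, reusing the [PAD] token with id 2 from the Quick Start vocabulary (whether padding is then enabled through a dedicated setter is not shown in this excerpt):

(* Pad every sequence in the batch to the longest one, on the right,
   using the explicit [PAD] token (id 2 in the Quick Start vocab).
   Leaving pad_id / pad_type_id / pad_token as None would fall back to
   the tokenizer's configured padding token instead. *)
let pad_config : padding =
  { length = `Batch_longest;
    direction = `Right;
    pad_id = Some 2;
    pad_type_id = Some 0;
    pad_token = Some "[PAD]" }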
Truncation configuration.
Limits sequences to max_length tokens, removing them from the specified direction.
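A sketch of what such a configuration could look like; the record's field names (max_length, direction) are inferred from the description above, not taken verbatim from the interface.

(* Hypothetical shape: keep at most 512 tokens, dropping from the left
   (the beginning), so the end of long inputs is preserved. *)
let trunc = { max_length = 512; direction = `Left }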
type data = [
  | `Files of string list
  | `Seq of string Seq.t
  | `Iterator of (unit -> string option)
]

Training data source.

- `Files paths: Read training text from the given files
- `Seq seq: Use a sequence of strings
- `Iterator f: Pull training data via an iterator (None signals the end)
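The three sources can be built as follows (file names are placeholders); the `Seq form is the one used in the Quick Start training example.

(* Read training text from files on disk. *)
let _from_files : data = `Files [ "corpus_a.txt"; "corpus_b.txt" ]

(* Use an in-memory sequence of strings. *)
let _from_seq : data = `Seq (List.to_seq [ "Hello world"; "How are you?" ])

(* Pull strings from a stateful iterator; returning None signals the end. *)
let _from_iter : data =
  let remaining = ref [ "Hello world"; "Hello again" ] in
  `Iterator
    (fun () ->
       match !remaining with
       | [] -> None
       | line :: rest ->
           remaining := rest;
           Some line)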