You can take the script from here or install it from PyPI:
pip install tokenmonster
import tokenmonster
# Optionally set the tokenmonster directory, otherwise it will use ~/_tokenmonster
tokenmonster.set_local_directory("/path/to/preferred")
# Load a vocabulary by name, filepath or URL
vocab = tokenmonster.load("english-24000-consistent-v1")
# Tokenize some text
text = "Some text to turn into token IDs."
tokens = vocab.tokenize(text)
Then to detokenize:
decoder = vocab.decoder()
decoded_text = decoder.decode(tokens)
There is a decode function for both the vocabulary object, vocab.decode(), and the decoder object made with vocab.decoder(). The difference is that the decoder object is meant for when you are individually decoding a sequence of IDs that are part of the same generation sequence, e.g. decoding tokens as they are generated. If you already have the full sequence and intend to decode it all in one go, you can use vocab.decode().
It's possible to pass a token to the decoder and get an empty string in response. This is fine; it means that token doesn't represent a full printable character, for example it's the first part of a multipart UTF-8 character, or it's a capcode uppercase marker meant to influence the next token. It's for this reason that the decoder object is used.
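For example, a minimal sketch of decoding a stream one token at a time (tokens is assumed to be an existing sequence of token IDs):
decoder = vocab.decoder()
streamed_text = ""
for token_id in tokens:
    # Some IDs yield an empty string here, e.g. the first part of a multipart
    # UTF-8 character or a capcode marker; the decoder holds state until the
    # following tokens resolve them
    streamed_text += decoder.decode(token_id)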
The Python library uses a subprocess called tokenmonsterserver, which runs in the background to tokenize and decode; it is downloaded automatically the first time you use the library. The tokenmonsterserver file is located in the tokenmonster directory, which is ~/_tokenmonster by default, but you can set it elsewhere with the tokenmonster.set_local_directory function before loading the first vocabulary.
Some libraries (e.g. Hugging Face Datasets) use multiprocessing to tokenize/decode in parallel. When doing this you need to use tokenmonster.load_multiprocess_safe() instead of tokenmonster.load(), or you will receive an error. It is, however, more efficient to batch tokenize/decode by passing a list of strings to the tokenize function; the strings are then tokenized in parallel with less overhead.
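For illustration, a minimal sketch of batch tokenization (the vocabulary name and strings are only examples):
import tokenmonster
vocab = tokenmonster.load("english-24000-consistent-v1")
# Passing a list of strings tokenizes them in parallel, one thread per string
batch = ["First document.", "Second document.", "Third document."]
batch_tokens = vocab.tokenize(batch)  # a list of numpy arrays, one per input string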
- Usage
- Loading & Exporting
- Tokenization & Detokenization
- Vocabulary Information
- Vocabulary Modification
- vocab.modify(add_special_tokens=None, add_regular_tokens=None, delete_tokens=None, resize=None, change_unk=None)
- vocab.add_token(token)
- vocab.delete_token(token)
- vocab.delete_token_by_id(id)
- vocab.add_special_token(token)
- vocab.resize(size)
- vocab.reset_token_ids()
- vocab.enable_unk_token()
- vocab.disable_unk_token()
- Other
vocab = tokenmonster.load("english-32000-balanced-v1")
tokens = vocab.tokenize(str)
decoded_string = vocab.decode(tokens)
Loads a TokenMonster vocabulary from file, URL or by name.
path (string): A filepath, URL or pre-built vocabulary name.
Vocab: An instance of tokenmonster.Vocab.
vocab = tokenmonster.load("english-32000-balanced-v1")
Loads a TokenMonster vocabulary from file, URL or by name. It's safe for multiprocessing, but vocabulary modification is disabled and tokenization is slightly slower.
path (string): A filepath, URL or pre-built vocabulary name.
Vocab: An instance of tokenmonster.Vocab.
vocab = tokenmonster.load_multiprocess_safe("english-32000-balanced-v1")
Creates a new vocabulary from a YAML string.
A sample YAML file can be found here: https://github.com/alasdairforsythe/tokenmonster/yaml_guide
You should save it in the vocab format with vocab.save() for future use.
yaml (string or bytes string): The YAML file.
Vocab: An instance of the tokenmonster.Vocab class.
vocab = tokenmonster.new(yaml_string)
vocab.save(filename)
Saves the current vocabulary to a file.
fname (string): The filename to save the vocabulary to.
None
vocab.save("test.vocab")
Exports the vocabulary as a YAML file, which is returned as a bytes string.
order_by_score (boolean): If true, the tokens are ordered by score instead of alphabetically.
YAML (bytes string): The vocabulary in YAML format.
yaml = vocab.export_yaml()
with open(file_path, 'wb') as file:
file.write(yaml)
Tokenizes a string into tokens according to the vocabulary.
You can pass a string or a list of strings. If you pass a list of strings, they are tokenized in parallel using as many threads as there are items in the list. Note that if you pass a string it is converted to a binary string, so if you have a binary string in the first place, feel free to pass that instead.
text (string or list of strings): A string or bytes string, or a list of strings or bytes strings.
tokens (numpy array or list of numpy arrays): The token IDs.
tokens = vocab.tokenize(text)
Same as tokenize, but it returns only the number of tokens.
The number of tokens is the same as you would get from tokenize. If you want to count characters for which there are no tokens or single-byte tokens, you should enable_unk_token(). It's okay to enable_unk_token(), run tokenize_count, and then disable_unk_token().
text (string or list of strings): A string or bytes string, or a list of strings or bytes strings.
n_tokens (int or list of ints): The number of tokens for each input string.
number_of_tokens = vocab.tokenize_count(text)
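As a sketch of the approach described above (the strings are only examples), the UNK token can be enabled just for counting:
vocab.enable_unk_token()  # characters without any token are now counted as UNK
counts = vocab.tokenize_count(["Hello world", "Some other text"])
vocab.disable_unk_token()  # switch the UNK token back off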
Decodes tokens into a string.
Only use this "decode" method if you are decoding a complete "batch" or complete "conversation" in one go. For decoding an incomplete batch sequentially (as the tokens become available) instead use the decoder object.
tokens (int, list of ints, or numpy array): The tokens to decode into a string.
string: The composed string from the input tokens.
decoded_string = vocab.decode(tokens)
Returns a new decoder instance used for decoding tokens into text.
tokenmonster.DecoderInstance: A new decoder instance.
decoder = vocab.decoder()
Get the size of the vocabulary.
int: The size of the vocabulary.
vocab = tokenmonster.load("filename")
number_of_tokens = len(vocab)
Returns a dictionary of all tokens in the vocabulary.
This returns a list of dictionaries with keys "id", "token", "token_decoded", "type" and "score". Note that you should not attempt to use this to interpret tokenized sequences because the capcode encoded tokens can change the way the next tokens are decoded. Therefore you should always use one of the two "decode" methods.
list: A list of dictionaries where the index is the token ID and each is a dictionary with the following keys:
- id (int): The ID of the token.
- token (string): The token including capcode encoding.
- token_decoded (string): The same token decoded from its capcode form.
- type (int): The type of token (0 = regular, 1 = byte, 2 = special, 3 = UNK).
- score (float): The token's representation in the dataset used to train the vocabulary.
tokens = vocab.get_dictionary()
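For instance, a short illustrative sketch that lists the ten highest-scoring tokens (remember not to use this table to decode sequences):
dictionary = vocab.get_dictionary()
# Sort by score and print the decoded form of the top 10 tokens
for entry in sorted(dictionary, key=lambda t: t["score"], reverse=True)[:10]:
    print(entry["id"], repr(entry["token_decoded"]), entry["score"])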
Returns the character set used by the vocabulary.
string: The character set used by the vocabulary. Possible values are "UTF-8" and "None".
Returns the normalization of the vocabulary.
string: The normalization of the vocabulary. Possible values are "None", "NFD", "Lowercase", "Accents", "Quotemarks", "Collapse", "Trim", "LeadingSpace", "UnixLines".
Returns the capcode level of the vocabulary.
- 0 = disabled
- 1 = only deleteToken
- 2 = enabled
int: The capcode level (0-2).
Returns the optimization mode of the vocabulary.
- 0 = unfiltered
- 1 = clean
- 2 = balanced
- 3 = consistent
- 4 = strict
- 5 = (vocabulary was not trained with TokenMonster)
int: The optimization mode (0-5).
Returns the ID of the UNK token, or 'None' type if there is no UNK token.
int or None: The ID of the UNK token. None if there is no UNK token.
Get the token string from a single token ID, in its capcode-encoded form.
id (int): The token ID.
string or None: The token string corresponding to the input ID. None if the ID is not in the vocabulary.
Get the token string from a single token ID, in its capcode-decoded form.
id (int): The token ID.
string or None: The token string corresponding to the input ID. None if the ID is not in the vocabulary.
Returns the ID of a single token.
This works for both capcode-encoded "raw" tokens and their decoded form.
token (string): The token to get the ID for.
int or None: The ID of the token. None if the token is not in the vocabulary.
vocab.modify(add_special_tokens=None, add_regular_tokens=None, delete_tokens=None, resize=None, change_unk=None, reset_token_ids=False)
Modifies the vocabulary. Doing so invalidates all decoder objects associated with the model before modification.
Notes:
- Special tokens are special in that they cannot be skipped. All regular tokens that contain special tokens within them are deleted.
- When resizing the vocabulary down, the worst performing tokens are deleted, ensuring the vocabulary remains efficient. However, only regular tokens with a score > 0 can be removed by resizing.
- A vocabulary can also be resized up. If any tokens have been removed by deleting or resizing, they can be restored by resizing the vocabulary to be larger.
- After modifying you will need to "save" the vocabulary to a file or it'll be lost when the script ends.
- To ensure token IDs remain sequential, pass reset_token_ids = True
- delete_tokens can be in either raw or decoded form.
add_special_tokens (string or list of strings): Special tokens to add to the vocabulary.
add_regular_tokens (string or list of strings): Regular tokens to add to the vocabulary.
delete_tokens (string or list of strings): Regular or special tokens to delete.
resize (int): Resizes the vocabulary to this size.
change_unk (boolean): If set, it enables or disables the UNK token.
reset_token_ids (boolean): If true, the IDs are all reset starting from zero.
int: The new size of the vocabulary.
# adds the special token <eos>
vocab.modify("<eos>")
# adds the special token <eos> and keeps the vocabulary at the current size
vocab.modify("<eos>", None, None, len(vocab))
Add one or more regular tokens.
token (string or list of strings): The regular tokens to add.
int: The new size of the vocabulary.
Delete one or more regular or special tokens. You can give the token in either its encoded or decoded form.
token (string or list of strings): The tokens to delete.
int: The new size of the vocabulary.
Delete one or more regular or special tokens by specifying the token ID.
id (int or list of ints): The IDs of the tokens to delete.
int: The new size of the vocabulary.
Add one or more special tokens.
token (string or list of strings): The special tokens to add.
int: The new size of the vocabulary.
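For illustration, a short sketch using these helpers together (the tokens and ID shown are only examples):
vocab.add_token(" example")        # add a regular token
vocab.add_special_token("<pad>")   # add a special token
vocab.delete_token(" example")     # delete by string, in encoded or decoded form
vocab.delete_token_by_id(500)      # delete by token ID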
Changes the size of the vocabulary and optionally resets the token IDs.
A vocabulary can be enlarged as well as reduced in size. Only the worst performing tokens are removed when reducing.
Resizing only removes regular tokens that are not single-byte tokens and have a score > 0. If there are not enough of these, the new size may not match the target size.
size (int): The new size of the vocabulary.
reset_token_ids (boolean): If true, the IDs of all tokens are reset from zero.
int: The new size of the vocabulary.
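A brief usage sketch (the target size is only an example):
new_size = vocab.resize(24000)
print("New vocabulary size:", new_size)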
Resets the token IDs to be sequential beginning from zero.
If tokens have been deleted from the vocabulary there will be gaps in the token IDs. Resetting the token IDs removes these gaps but all tokens will have new IDs.
Enables the UNK token.
If enabled, the UNK token appears whenever there is a character that is not in the vocabulary.
Note that the UNK token will not be enabled if all possible characters have tokens.
Use vocab.unk_token_id() to retrieve the ID for the UNK token.
int: The new size of the vocabulary.
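For example, you can check whether a UNK token was actually assigned after enabling it (sketch only):
vocab.enable_unk_token()
unk_id = vocab.unk_token_id()  # may still be None if every possible character already has a token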
Disables the UNK token.
Without an UNK token, any character for which there is no token is ignored during tokenization.
int: The new size of the vocabulary.
A nested class for decoding streams of tokens in sequence.
This class takes tokens and decodes them to generate human-readable strings.
vocab = tokenmonster.load("english-32000-balanced-v1")
decoder = vocab.decoder()
decoded_string = decoder.decode(tokens)
decoded_string += decoder.decode(more_tokens)
A decoder object used for decoding token streams.
This decoder object is used instead of the vocabulary decode method when you are decoding tokens in small segments, or one by one, that are part of a longer stream of encoded tokens. A new decoder object should be used for each stream, then deleted. If you are decoding all tokens in one call, instead of in multiple calls, then you can use the vocabulary decode method directly.
tokens (int, list of ints, or numpy array): A token ID or list of token IDs.
string: A human-readable string derived from the input tokens.
vocab = tokenmonster.load("english-32000-balanced-v1")
decoder = vocab.decoder()
decoded_string = decoder.decode(tokens)
decoded_string += decoder.decode(more_tokens)
Once you are finished with a vocab or decoder object, use the del syntax to free it from memory. This is worthwhile if you are creating many temporary decoder objects.
vocab = tokenmonster.load("english-32000-balanced-v1")
del vocab
Sets the local directory for TokenMonster.
If no directory is specified, the default directory is ~/_tokenmonster.
dir (string): The local directory to use.
tokenmonster.set_local_directory("/path/to/preferred")
Disconnects and closes tokenmonsterserver.
None
Serializes tokens from a list of ints or numpy array into a binary string.
The encoding_length used is from vocab.encoding_length.
integer_list (list of ints or numpy array): The tokens to serialize.
bytes: The serialized binary string.
Deserializes a binary string into a numpy array of token IDs.
The encoding_length used is from vocab.encoding_length.
binary_string (bytes): The binary string to deserialize.
np.array: The deserialized tokens.
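Assuming the serialization helpers are exposed as vocab.serialize_tokens and vocab.deserialize_tokens (the names here are an assumption based on the descriptions above), a round trip would look like:
binary_string = vocab.serialize_tokens(tokens)      # packed using vocab.encoding_length bytes per token
restored = vocab.deserialize_tokens(binary_string)  # back to a numpy array of token IDs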