warning

🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.

cortex embeddings

info

This CLI command calls the following API endpoint:

This command creates the embedding vector representing the input text.

Usage

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


# Stable
cortex embeddings [options] [model_id] [message]
# Beta
cortex-beta embeddings [options] [model_id] [message]
# Nightly
cortex-nightly embeddings [options] [model_id] [message]

info

This command uses the model_id of a model that you have downloaded or that is available in your file system.
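
For example, assuming the mistral model (the example model_id used in the Options table below) has already been downloaded, you could embed a single string by passing it as the positional message:

# Embed a single string with the mistral model
cortex embeddings mistral "Hello, world!"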

Options

| Option | Description | Required | Default value | Example |
|--------|-------------|----------|---------------|---------|
| model_id | The model to use for generating the embeddings. | No | Prompt to select from the available models | mistral |
| -i, --input <input> | Input text to embed, encoded as a string or array of tokens. For multiple inputs, pass an array of strings or token arrays. | Yes | - | -i "Hello, world!" |
| -e, --encoding_format <encoding_format> | Encoding format for the embeddings. Supported formats are float and int. | No | float | -e float |
| -d, --dimensions <dimensions> | Number of dimensions for the resulting output embeddings. Supported only by some models. | No | - | -d 128 |
| -h, --help | Display help information for the command. | No | - | -h |
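
As an illustrative sketch that follows the usage pattern above (options before the positional model_id), the following invocation combines the input, encoding format, and dimensions options. Note that -d is only honored by models that support configurable embedding dimensions:

# Embed a string with an explicit encoding format and 128 output dimensions
cortex embeddings -i "Hello, world!" -e float -d 128 mistral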