warning

🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.

cortex engines

This command allows you to manage various engines available within Cortex.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].
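
For instance, to run the list subcommand (documented below) with verbose logging:

# Print detailed internal logs while listing engines
cortex --verbose engines list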


# Stable
cortex engines [options] [subcommand]
# Beta
cortex-beta engines [options] [subcommand]
# Nightly
cortex-nightly engines [options] [subcommand]

Options:

| Option | Description | Required | Default value | Example |
|------------|--------------------------------------------|----------|---------------|---------|
| -h, --help | Display help information for the command. | No | - | -h |
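
For instance, to print the help text for this command:

# Display help information for the engines command
cortex engines -h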

cortex engines get

info

This CLI command calls the following API endpoint:

This command returns the details of the engine specified by engine_name.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


# Stable
cortex engines get <engine_name>
# Beta
cortex-beta engines get <engine_name>
# Nightly
cortex-nightly engines get <engine_name>

For example, it returns the following:


┌─────────────┬─────────────────────────────────────────────────────────────────────────────┐
│ (index)     │ Values                                                                      │
├─────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ name        │ 'onnx'                                                                      │
│ description │ 'This extension enables chat completion API calls using the Cortex engine' │
│ version     │ '0.0.1'                                                                     │
│ productName │ 'Cortex Inference Engine'                                                   │
└─────────────┴─────────────────────────────────────────────────────────────────────────────┘

info

To get an engine name, run the engines list command first.
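
A typical sequence, using llamacpp as an illustrative engine name:

# First, list the available engines to find the exact name
cortex engines list
# Then retrieve the details for one of them
cortex engines get llamacpp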

Options:

| Option | Description | Required | Default value | Example |
|-------------|----------------------------------------------------|----------|---------------|----------|
| engine_name | The name of the engine that you want to retrieve. | Yes | - | llamacpp |
| -h, --help | Display help information for the command. | No | - | -h |

cortex engines list

info

This CLI command calls the following API endpoint:

This command lists all of Cortex's engines.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


# Stable
cortex engines list [options]
# Beta
cortex-beta engines list [options]
# Nightly
cortex-nightly engines list [options]

For example, it returns the following:


+---------+--------------+---------------------------------------------------------------------------------+---------+------------------------------+-----------------+
| (Index) | name         | description                                                                     | version | product name                 | status          |
+---------+--------------+---------------------------------------------------------------------------------+---------+------------------------------+-----------------+
| 1       | onnx         | This extension enables chat completion API calls using the Onnx engine         | 0.0.1   | Onnx Inference Engine        | not_initialized |
+---------+--------------+---------------------------------------------------------------------------------+---------+------------------------------+-----------------+
| 2       | llamacpp     | This extension enables chat completion API calls using the LlamaCPP engine     | 0.0.1   | LlamaCPP Inference Engine    | ready           |
+---------+--------------+---------------------------------------------------------------------------------+---------+------------------------------+-----------------+
| 3       | tensorrt-llm | This extension enables chat completion API calls using the TensorrtLLM engine  | 0.0.1   | TensorrtLLM Inference Engine | not_initialized |
+---------+--------------+---------------------------------------------------------------------------------+---------+------------------------------+-----------------+
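
In the sample output above, llamacpp has status ready, while onnx and tensorrt-llm are not_initialized. A not_initialized engine can presumably be made ready with the install command described below, for example:

# Install an engine that the list shows as not_initialized
cortex engines install onnx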

Options:

| Option | Description | Required | Default value | Example |
|------------|--------------------------------------------|----------|---------------|---------|
| -h, --help | Display help information for the command. | No | - | -h |

cortex engines install

info

This CLI command calls the following API endpoint:

This command downloads the required dependencies and installs the engine within Cortex. Currently, Cortex supports three engines:

  • Llama.cpp
  • Onnx
  • Tensorrt-llm

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


# Stable
cortex engines install [options] <engine_name>
# Beta
cortex-beta engines install [options] <engine_name>
# Nightly
cortex-nightly engines install [options] <engine_name>

For example:


## Llama.cpp engine
cortex engines install llamacpp
## ONNX engine
cortex engines install onnx
## Tensorrt-LLM engine
cortex engines install tensorrt-llm
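
After an installation completes, a reasonable follow-up (not part of the original examples) is to inspect the engine's details:

# Confirm the engine was installed
cortex engines get llamacpp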

Options:

| Option | Description | Required | Default value | Example |
|-------------|----------------------------------------------|----------|---------------|---------|
| engine_name | The name of the engine you want to install. | Yes | - | - |
| -h, --help | Display help information for the command. | No | - | -h |

cortex engines uninstall

This command uninstalls an engine from Cortex.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


# Stable
cortex engines uninstall [options] <engine_name>
# Beta
cortex-beta engines uninstall [options] <engine_name>
# Nightly
cortex-nightly engines uninstall [options] <engine_name>

For example:


## Llama.cpp engine
cortex engines uninstall llamacpp
## ONNX engine
cortex engines uninstall onnx
## Tensorrt-LLM engine
cortex engines uninstall tensorrt-llm
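
To confirm the removal, you can list the engines again (a suggested check, not part of the original examples):

# Review the remaining engines and their status
cortex engines list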

Options:

| Option | Description | Required | Default value | Example |
|-------------|------------------------------------------------|----------|---------------|---------|
| engine_name | The name of the engine you want to uninstall. | Yes | - | - |
| -h, --help | Display help information for the command. | No | - | -h |