warning

🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.

cortex run

info

This CLI command calls the following API endpoints:

  • Download Model (called only if the specified model has not been downloaded yet)
  • Install Engine (called only if the specified engine has not been installed yet)
  • Start Model
  • Chat Completions (called only if the -c option is used)

This command starts an interactive chat shell with a specified machine learning model.
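At the API level, the command is roughly the sequence of HTTP calls against the local Cortex server sketched below. This is for orientation only: the port (39281), the exact endpoint paths, and the request bodies are assumptions rather than the authoritative API reference, and the model id and engine name are illustrative.

# Sketch only: port, paths, and payloads are assumptions; see the API reference for the exact routes.
# 1. Download Model (only if "mistral" is not downloaded yet)
curl -X POST http://127.0.0.1:39281/v1/models/pull -H "Content-Type: application/json" -d '{"model": "mistral"}'
# 2. Install Engine (only if the engine is not installed yet; path is an assumption)
curl -X POST http://127.0.0.1:39281/v1/engines/install/llama-cpp
# 3. Start Model
curl -X POST http://127.0.0.1:39281/v1/models/start -H "Content-Type: application/json" -d '{"model": "mistral"}'
# 4. Chat Completions (called when the -c option is used)
curl -X POST http://127.0.0.1:39281/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "mistral", "messages": [{"role": "user", "content": "Hello"}]}'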

Usage

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].
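For example, to run this command with detailed logging enabled (the model id is illustrative):

cortex --verbose run mistral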


# Stable
cortex run [options] <model_id>:[engine]
# Beta
cortex-beta run [options] <model_id>:[engine]
# Nightly
cortex-nightly run [options] <model_id>:[engine]
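For example, the following starts a chat with a model, optionally pinning it to a specific engine. The model id mistral is taken from the options table below; the engine name llama-cpp is illustrative.

# Let Cortex choose the engine
cortex run mistral
# Pin the model to a specific engine (engine name is illustrative)
cortex run mistral:llama-cpp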

model_id

You can use the Built-in models or Supported HuggingFace models.

info

This command downloads and installs the model if not already available in your file system, then starts it for interaction.

Options

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| model_id | The identifier of the model you want to chat with. | Yes | Prompt to select from the available models | mistral |
| -h, --help | Display help information for the command. | No | - | -h |
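For example, omitting the model id falls back to the selection prompt described in the table, and the help flag prints usage information:

# No model id given: Cortex prompts you to select from the available models
cortex run
# Display help information for the command
cortex run -h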

Command Chain

The cortex run command is a convenience wrapper that automatically executes a sequence of commands to simplify user interaction (the equivalent manual sequence is sketched after this list):

  1. cortex pull: This command pulls the specified model if it has not been downloaded yet.
  2. cortex engines install: This command installs the specified engine if it has not been installed yet.
  3. cortex models start: This command starts the specified model, making it active and ready for interactions.
  4. cortex chat: Once the model is active, this command opens an interactive chat shell where users can communicate directly with the model.
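A rough manual equivalent of this chain is shown below. The model id and engine name are illustrative, and the exact argument forms of each subcommand may differ; check each command's --help.

# Manual equivalent of `cortex run mistral` (names are illustrative)
cortex pull mistral                 # 1. download the model if needed
cortex engines install llama-cpp    # 2. install the engine if needed
cortex models start mistral         # 3. start the model
cortex chat mistral                 # 4. open the interactive chat shell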