CLI Commands¶
The LEIP command line interface lets you perform common operations without having to interact with the Python library directly.
leip¶
Latent AI develops core technologies and platform tools that enable efficient, adaptive AI optimized for compute, energy, and memory, with seamless integration into existing AI/ML infrastructure and frameworks. The Latent AI (LEIP) software development kit gives developers and data scientists access to these tools.
leip [OPTIONS] COMMAND [ARGS]...
analytics¶
Configure LEIP Analytics.
leip analytics [OPTIONS] COMMAND [ARGS]...
init¶
Initialize the LEIP Analytics connection (host, user, project).
leip analytics init [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --host <host>¶
LEIP Analytics host. The default should work for a local Docker deployment. [default: http://backend:8080]
- --username <username>¶
Username in LEIP Analytics
- --password <password>¶
Password in LEIP Analytics
- --project_id <project_id>¶
Project UUID
- --tags <LIST>¶
Predefined tags for events
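For example, a non-interactive initialization against a local Docker deployment might look like the following sketch; the credentials and project UUID are placeholders:
# credentials and project UUID below are placeholders
leip analytics init --host http://backend:8080 \
    --username <username> --password <password> \
    --project_id <project-uuid>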
login¶
Log in to LEIP Analytics.
leip analytics login [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --username <username>¶
Username in LEIP Analytics
- --password <password>¶
Password in LEIP Analytics
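For example, logging in with placeholder credentials:
leip analytics login --username <username> --password <password>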
set_host¶
Set the LEIP Analytics host.
leip analytics set_host [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --host <host>¶
LEIP Analytics host. The default should work for a local Docker deployment. [default: http://backend:8080]
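For example, pointing the CLI at the default local Docker deployment:
leip analytics set_host --host http://backend:8080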
set_project¶
Set project for LEIP experiments.
leip analytics set_project [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --project_id <project_id>¶
Project UUID
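For example, with a placeholder project UUID:
leip analytics set_project --project_id <project-uuid>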
analyze¶
Analyzes a model and prints or saves its layer names.
leip analyze [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --input_path <input_path>¶
Required. The directory or file path to the model
- --output_path <output_path>¶
The root output directory path for the saved model and/or summaries
- Default
./analyze_output
- --include_layers <LIST>¶
The comma-separated partial or full names of the layers that you DO want to include in the analysis
- --exclude_layers <LIST>¶
The comma-separated partial or full names of the layers that you DON'T want to include in the analysis
- --config <config>¶
Read configuration from FILE.
- --tags <LIST>¶
User-defined tags for LEIP Enterprise events.
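For example, a typical invocation might look like the following sketch; the model path and layer filter are placeholders:
# model path and layer filter are placeholders
leip analyze --input_path path/to/model \
    --output_path ./analyze_output \
    --exclude_layers batch_normalization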
compile¶
Compiles the model to an executable for a particular target.
leip compile [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --input_path <input_path>¶
Required. The directory or file path to the model
- --output_path <output_path>¶
The root output directory path for the saved model and/or summaries
- Default
./compile_output
- --input_names <LIST>¶
The comma-separated names of the input layers of the model
- --output_names <LIST>¶
The comma-separated names of the output layers of the model
- --input_shapes <LISTS>¶
The shapes of the input layers of the model, hyphen-separated for each input and comma-separated for each dimension, e.g. "1, 299, 299, 3 - 1, 400, 400, 3"
- --remove_nodes <LIST>¶
Comma-separated list of nodes to remove from the model before processing
- --layout <NCHW|NHWC>¶
The desired channel layout (for CPU targets only) [default: NCHW]
- Options
Layout.NCHW | Layout.NHWC
- --target <target>¶
The target device to compile the model to [default: llvm]
- --target_host <target_host>¶
Host compilation target, if target is cuda [default: llvm]
- --model_id <model_id>¶
Vendor-specific model ID. If not present, a UUID will be generated.
- --crc_check <BOOLEAN>¶
Whether to save a CRC check value when storing the parameters [default: False]
- --force_int8 <BOOLEAN>¶
Enforce the storage of parameters as int8 in the compiled output [default: False]
- --legacy_artifacts <BOOLEAN>¶
Generate three separate artifacts (lib, graph, params) [default: False]
- --optimization <optimization>¶
An optimization configuration. You may use this argument more than once to specify all desired optimizations. Currently supported optimization formats: ['category:kernel,level:<1-4>', 'category:cuda,enabled:true|false', 'category:graph,iterations:<0-30000>']. CUDA optimization is enabled by default when the target is cuda, but it may be overridden [default: ['category:kernel,level:3', 'category:cuda,enabled:False']]
- --config <config>¶
Read configuration from FILE.
- --tags <LIST>¶
User-defined tags for LEIP Enterprise events.
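For example, compiling a model for a generic CPU target, and then for a CUDA target, might look like the following sketches; the model path and input shape are placeholders:
# CPU target; model path and input shape are placeholders
leip compile --input_path path/to/model \
    --output_path ./compile_output \
    --input_shapes "1, 224, 224, 3" \
    --target llvm
# CUDA target; the cuda optimization category may also be set explicitly
leip compile --input_path path/to/model \
    --target cuda --target_host llvm \
    --optimization category:cuda,enabled:true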
config¶
Generates a LEIP config.json file to use as a baseline config template.
leip config [OPTIONS]
Options
- --output_path <output_path>¶
Where to save generated config file
- Default
config.json
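For example, to write the baseline template to a custom location (the file name is illustrative), which can then be passed to other commands via --config:
leip config --output_path my_config.json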
diff¶
Prints a textual diff of two models.
leip diff [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --left_path <left_path>¶
Required. The path to the first model to compare
- --right_path <right_path>¶
Required. The path to the second model to compare
- --names <BOOLEAN>¶
Whether to include op names and the names of the inputs to each op [default: False]
- --config <config>¶
Read configuration from FILE.
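For example, comparing two models with op names included; the paths are placeholders:
leip diff --left_path path/to/model_a --right_path path/to/model_b --names true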
evaluate¶
Performs inference of test data on the model and collects accuracy information.
leip evaluate [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --input_path <input_path>¶
Required. The directory or file path to the model
- --output_path <output_path>¶
The output directory path for the evaluation summaries
- Default
./evaluate_output
- --input_names <LIST>¶
The comma-separated names of the input layers of the model
- --output_names <LIST>¶
The comma-separated names of the output layers of the model
- --input_shapes <LISTS>¶
The shapes of the input layers of the model, hyphen-separated for each input and comma-separated for each dimension, e.g. "1, 299, 299, 3 - 1, 400, 400, 3"
- --preprocessor <preprocessor>¶
The callback method used for preprocessing input data when running inference. It has three possible forms: 1) A name from [bgrtorgb|bgrtorgb2|bgrtorgb3|bgrtorgbcaffe|imagenet|imagenet_caffe|imagenet_torch_nchw|mnist|mnist_int|rgbtogray|rgbtogray_int8|rgbtogray_symm|float32|uint8|symm|norm] 2) A python function as 'package.module.func' 3) A python function as 'path/to/module.py::func'
- --preprocessor_config_path <preprocessor_config_path>¶
The preprocessor configuration JSON, if any
- --postprocessor <postprocessor>¶
The callback method used for postprocessing output data after running inference. It has three possible forms: 1) A name from [top1|top5] 2) A python function as 'package.module.func' 3) A python function as 'path/to/module.py::func'
- --output_format <classifier|yolov5|ssd>¶
The output format/architecture corresponding to the model [default: classifier]
- Options
OutputFormat.CLASSIFIER | OutputFormat.YOLOV5 | OutputFormat.SSD
- --inference_context <cpu|cuda>¶
Context under which to run inference [default: cpu]
- Options
InferenceContext.CPU | InferenceContext.CUDA
- --task_family <classification|detection|segmentation>¶
The type of task the model is designed for. [default: classification]
- Options
TaskFamily.CLASSIFICATION | TaskFamily.DETECTION | TaskFamily.SEGMENTATION
- --host <host>¶
Host on which to deploy model [default: localhost]
- --port <port>¶
Port to access at host [default: 50051]
- --test_path <test_path>¶
Required. The path to the file containing a list of test examples and their output classification
- --batch_size <batch_size>¶
The number of test images to load into one batch for inference [default: 1]
- --warmups <warmups>¶
Number of warmup runs to get the model up to speed and cached in memory [default: 10]
- --test_size <test_size>¶
Evaluate on a subset of the total dataset, the size of which is defined by this flag
- --seed <seed>¶
The seed with which to shuffle the dataset [default: 0]
- --config <config>¶
Read configuration from FILE.
- --tags <LIST>¶
User-defined tags for LEIP Enterprise events.
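For example, evaluating a compiled classifier on a labeled test index might look like the following sketch; the paths are placeholders and imagenet is one of the built-in preprocessor names:
# model and test index paths are placeholders
leip evaluate --input_path ./compile_output \
    --test_path path/to/test_index.txt \
    --preprocessor imagenet \
    --output_format classifier \
    --batch_size 8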
health¶
Checks the health of the SDK environment.
leip health [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --config <config>¶
Read configuration from FILE.
optimize¶
Quantizes and compiles a pre-trained model.
leip optimize [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --input_path <input_path>¶
Required. The directory or file path to the model
- --output_path <output_path>¶
The root output directory path for the saved model and/or summaries
- Default
./optimize_output
- --model_id <model_id>¶
Vendor-specific model ID. If not present, a UUID will be generated.
- --input_names <LIST>¶
The comma-separated names of the input layers of the model
- --output_names <LIST>¶
The comma-separated names of the output layers of the model
- --input_shapes <LISTS>¶
The shapes of the input layers of the model, hyphen-separated for each input and comma-separated for each dimension, e.g. "1, 299, 299, 3 - 1, 400, 400, 3"
- --remove_nodes <LIST>¶
Comma-separated list of nodes to remove from the model before processing
- --preprocessor <preprocessor>¶
The callback method used for preprocessing input data when running inference. It has three possible forms: 1) A name from [bgrtorgb|bgrtorgb2|bgrtorgb3|bgrtorgbcaffe|imagenet|imagenet_caffe|imagenet_torch_nchw|mnist|mnist_int|rgbtogray|rgbtogray_int8|rgbtogray_symm|float32|uint8|symm|norm] 2) A python function as 'package.module.func' 3) A python function as 'path/to/module.py::func'
- --preprocessor_config_path <preprocessor_config_path>¶
The preprocessor configuration JSON, if any
- --quantizer <asymmetric|asymmetricpc|symmetric|symmetricpc|none>¶
Which quantizer to use [default: asymmetric when cuda compile optimization is not set, otherwise symmetric]
- Options
QuantizerType.ASYMMETRIC | QuantizerType.ASYMMETRICPC | QuantizerType.SYMMETRIC | QuantizerType.SYMMETRICPC | QuantizerType.NONE
- --rep_dataset <rep_dataset>¶
The path to the file used as the representative dataset input during calibration. This file should contain a newline-separated list of paths to the input instances
- --compress_optimization <LIST from [tensor_splitting|bias_correction]>¶
Which optimization passes to apply during compression [default: ]
- --quantize_input <BOOLEAN>¶
Whether the model input layers will be (u)int8 [default: False]
- --quantize_output <BOOLEAN>¶
Whether the model output layers will be (u)int8 [default: False]
- --calibration_method <minmax|average|normal>¶
Which calibration method to use [default: minmax]
- Options
CalibrationMethod.MINMAX | CalibrationMethod.AVERAGE | CalibrationMethod.NORMAL
- --standard_deviations <standard_deviations>¶
The number of standard deviations used to truncate the normal distribution [default: 4.5]
- --layout <NCHW|NHWC>¶
The desired channel layout (for CPU targets only) [default: NCHW]
- Options
Layout.NCHW | Layout.NHWC
- --target <target>¶
The target device to compile the model to [default: llvm]
- --target_host <target_host>¶
Host compilation target, if target is cuda [default: llvm]
- --force_int8 <BOOLEAN>¶
Enforce the storage of parameters as int8 in the compiled output [default: False]
- --legacy_artifacts <BOOLEAN>¶
Generate three separate artifacts (lib, graph, params) [default: False]
- --compile_optimization <compile_optimization>¶
A compile optimization configuration. You may use this argument more than once to specify all desired optimizations. Currently supported optimization formats: ['category:kernel,level:<1-4>', 'category:cuda,enabled:true|false', 'category:graph,iterations:<0-30000>']. CUDA optimization is enabled by default when the target is cuda, but it may be overridden [default: ['category:kernel,level:3', 'category:cuda,enabled:False']]
- --config <config>¶
Read configuration from FILE.
- --tags <LIST>¶
User-defined tags for LEIP Enterprise events.
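For example, quantizing and compiling a model for a CPU target using a representative dataset might look like the following sketch; the paths are placeholders:
# model and representative dataset paths are placeholders
leip optimize --input_path path/to/model \
    --output_path ./optimize_output \
    --quantizer asymmetric \
    --rep_dataset path/to/rep_dataset.txt \
    --target llvm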
package¶
Generates a directory with all the files needed to build an executable on the target device.
leip package [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --input_path <input_path>¶
Required. The directory or file path to the model
- --output_path <output_path>¶
The root output directory path for the compiling artifacts
- Default
./package_output
- --input_names <LIST>¶
The comma-separated names of the input layers of the model
- --preprocessor <preprocessor>¶
The callback method used for preprocessing input data when running inference. It has three possible forms: 1) A name from [bgrtorgb|bgrtorgb2|bgrtorgb3|bgrtorgbcaffe|imagenet|imagenet_caffe|imagenet_torch_nchw|mnist|mnist_int|rgbtogray|rgbtogray_int8|rgbtogray_symm|float32|uint8|symm|norm] 2) A python function as 'package.module.func' 3) A python function as 'path/to/module.py::func'
- --postprocessor <postprocessor>¶
The callback method used for postprocessing output data after running inference. It has three possible forms: 1) A name from [top1|top5] 2) A python function as 'package.module.func' 3) A python function as 'path/to/module.py::func'
- --metrics <LIST from [inferences_count|latency|most_common_class]>¶
Metrics to include in runtime library
- --format <python3.6|python3.8|python3.9|docker|cc>¶
Library’s output format
- Options
PackageFormat.PYTHON36 | PackageFormat.PYTHON38 | PackageFormat.PYTHON39 | PackageFormat.DOCKER | PackageFormat.CC
- --config <config>¶
Read configuration from FILE.
- --tags <LIST>¶
User-defined tags for LEIP Enterprise events.
- --password <password>¶
Password used to encrypt the output model
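For example, packaging a compiled model as a Python 3.8 runtime library might look like the following sketch; the input path is a placeholder:
# compiled model path is a placeholder
leip package --input_path ./compile_output \
    --output_path ./package_output \
    --format python3.8 \
    --preprocessor imagenet \
    --postprocessor top1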
pipeline¶
Runs a pipeline of commands on a model from a configuration file.
leip pipeline [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
- --input_path <input_path>¶
A directory or file path to be used as input
- --output_path <output_path>¶
The root output directory path for all task results
- Default
./pipeline_output
- --config_path <config_path>¶
Required. Path to a YAML/JSON file describing the pipeline
- --tags <LIST>¶
User-defined tags for LEIP Enterprise events.
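For example, running a pipeline described in a configuration file; the model path and file name are placeholders:
leip pipeline --input_path path/to/model \
    --output_path ./pipeline_output \
    --config_path pipeline.yaml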
version¶
Prints the LEIP SDK version.
leip version [OPTIONS]
Options
- --loglevel <loglevel>¶
Log output level
- Default
WARNING
- Options
DEBUG | INFO | WARNING | ERROR | CRITICAL
zoo¶
Access models from the LEIP Zoo.
leip zoo [OPTIONS] COMMAND [ARGS]...
download¶
Download a specified model or dataset from the Latent AI Model Zoo.
leip zoo download [OPTIONS]
Options
- --models_index_json_path <models_index_json_path>¶
Path to a local JSON file that contains zoo metadata
- --model_id <model_id>¶
Model ID to download
- --dataset_id <dataset_id>¶
Dataset ID to download
- --variant_id <variant_id>¶
Required. Variant ID to download
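For example, downloading a model variant or a dataset variant; the IDs are placeholders taken from the output of leip zoo list:
# IDs below are placeholders
leip zoo download --model_id <model-id> --variant_id <variant-id>
leip zoo download --dataset_id <dataset-id> --variant_id <variant-id>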
license¶
Show the license for the zoo.
leip zoo license [OPTIONS]
list¶
List the available models in the zoo as an ASCII-formatted table.
leip zoo list [OPTIONS]
Options
- --models_index_json_path <models_index_json_path>¶
Path to a local JSON file that contains zoo metadata
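For example, listing the models in the default zoo, or from a local index file (the path is a placeholder):
leip zoo list
# local index path is a placeholder
leip zoo list --models_index_json_path path/to/models-index.json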