This directory contains scripts and other helper utilities that illustrate an end-to-end workflow for running a Core ML delegated `torch.nn.Module` with the ExecuTorch runtime.
```
coreml
├── scripts            # Scripts to build the runner.
├── executor_runner    # The runner implementation.
└── README.md          # This file.
```
We will walk through an example: generating a Core ML delegated binary (`.pte`) file from a Python `torch.nn.Module`, then running the exported file with the `coreml_executor_runner`.
- Follow the setup guide in Setting Up ExecuTorch to get a basic ExecuTorch development environment working.
- Run `install_requirements.sh` to install the dependencies required by the Core ML backend.

```bash
cd executorch

./backends/apple/coreml/scripts/install_requirements.sh
```
- Run the export script to generate a Core ML delegated binary (`.pte`) file.

```bash
cd executorch

# List the available example models.
python3 -m examples.portable.scripts.export -h

# Generates add_coreml_all.pte if successful.
python3 -m examples.apple.coreml.scripts.export --model_name add
```
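Conceptually, the export step captures an eager PyTorch module, lowers the supported subgraphs to the Core ML delegate, and serializes the result to a `.pte` file. The sketch below assumes ExecuTorch's Python APIs (`torch.export.export`, `to_edge_transform_and_lower`, and `CoreMLPartitioner`); it is illustrative only — refer to the actual script at `examples/apple/coreml/scripts/export.py` for the authoritative flow.

```python
# Hypothetical sketch of what the export script does under the hood.
# API names are assumptions from the ExecuTorch Python package; verify
# against examples/apple/coreml/scripts/export.py before relying on them.

def export_add_to_coreml(output_path: str = "add_coreml_all.pte") -> None:
    # Imports are kept inside the function so this sketch can be read
    # (and defined) without ExecuTorch installed.
    import torch
    from executorch.backends.apple.coreml.partition import CoreMLPartitioner
    from executorch.exir import to_edge_transform_and_lower

    class Add(torch.nn.Module):
        def forward(self, x, y):
            return x + y

    # Capture the eager module as an exported program.
    ep = torch.export.export(Add(), (torch.randn(1), torch.randn(1)))

    # Lower the subgraphs supported by the Core ML backend to the delegate.
    edge = to_edge_transform_and_lower(ep, partitioner=[CoreMLPartitioner()])

    # Serialize to a .pte file consumable by the ExecuTorch runtime.
    with open(output_path, "wb") as f:
        f.write(edge.to_executorch().buffer)
```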
- Run the exported file with the `coreml_executor_runner`.

```bash
cd executorch

# Build the Core ML executor runner. Generates ./coreml_executor_runner if successful.
./examples/apple/coreml/scripts/build_executor_runner.sh

# Run the delegated model.
./coreml_executor_runner --model_path add_coreml_all.pte
```
- The `examples.apple.coreml.scripts.export` script can fail if the model is not supported by the Core ML backend. The following models from the example models list (`python3 -m examples.portable.scripts.export -h`) are currently supported by the Core ML backend:
```
add
add_mul
dl3
edsr
emformer_join
emformer_predict
emformer_transcribe
ic3
ic4
linear
llama2
llava_encoder
mobilebert
mul
mv2
mv2_untrained
mv3
resnet18
resnet50
softmax
vit
w2l
```
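When scripting exports over several models, it can help to filter against the supported list before invoking the export module. A minimal helper (the set below is copied verbatim from the list above; the helper itself is illustrative, not part of the ExecuTorch API):

```python
# Example models currently supported by the Core ML backend,
# per the list in this README.
COREML_SUPPORTED_MODELS = frozenset({
    "add", "add_mul", "dl3", "edsr", "emformer_join", "emformer_predict",
    "emformer_transcribe", "ic3", "ic4", "linear", "llama2", "llava_encoder",
    "mobilebert", "mul", "mv2", "mv2_untrained", "mv3", "resnet18",
    "resnet50", "softmax", "vit", "w2l",
})

def is_coreml_supported(model_name: str) -> bool:
    """Return True if the example model is in the supported list above."""
    return model_name in COREML_SUPPORTED_MODELS
```

For example, `is_coreml_supported("add")` returns `True`, so a batch script could skip unsupported names instead of letting the export fail midway.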
- If you encounter any bugs or issues while following this tutorial, please file a bug/issue here with the tag #coreml.