PyTorch Mobile speed comparison on iOS: which is fastest, "CPU only", "CoreML backend", or "Metal backend"?

Using the device GPU with PyTorch Mobile

I ran MobileNet v2 on an iPhone 11 and compared three configurations: CPU only, the CoreML backend, and the Metal backend.

We want to run PyTorch Mobile at high speed

The CoreML backend or the Metal backend can be used

You can convert your model so that it runs through the CoreML backend or the Metal backend. This lets PyTorch Mobile use the device's GPU, and with CoreML, the Neural Engine as well.

Conversion method

CPU

CoreML

Metal

inference

The procedure for running the converted model is the same for each backend (the Metal model additionally expects its input tensor to be moved to the GPU).

Model instantiation

execution

Comparison result

Running MobileNet v2 on an iPhone 11:

The CoreML backend was about twice as fast as CPU only.
It was surprising that the Metal backend wasn't much faster than the CPU.

🐣

I'm a freelance engineer.
For work inquiries, please feel free to contact me with a brief description of the project.
rockyshikoku@gmail.com

I make apps that use Core ML and ARKit, and post machine learning / AR related information.

GitHub

Twitter
Medium
