Drawing object detection boxes at high speed [10x faster version]

Draw object detection results at high speed

Drawing, not detection, becomes the bottleneck

A mobile object detection model such as YOLO runs in about 0.02 seconds on an iPhone 11, but if you draw the results onto a UIImage using the method from my earlier article, the drawing alone takes about 0.5 seconds.

In other words, the drawing process took 25 times longer than the detection itself. In particular, drawing the label text accounts for about 80% of that time.
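To make the bottleneck concrete, here is a sketch of the slow path being described: rendering each frame through UIGraphicsImageRenderer and drawing label strings with UIKit text APIs. This is an illustrative reconstruction, not the article's exact code; the function name and styling are assumptions.

```swift
import UIKit
import Vision

// Slow path (sketch): redraw the whole frame via UIGraphicsImageRenderer
// and render each label with NSString.draw, which dominates the cost.
func drawDetectionsSlow(on image: UIImage,
                        observations: [VNRecognizedObjectObservation]) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        for observation in observations {
            // Vision boxes are normalized with a bottom-left origin;
            // flip the y-axis for UIKit's top-left origin.
            let box = observation.boundingBox
            let rect = CGRect(x: box.origin.x * image.size.width,
                              y: (1 - box.origin.y - box.height) * image.size.height,
                              width: box.width * image.size.width,
                              height: box.height * image.size.height)
            UIColor.red.setStroke()
            context.cgContext.setLineWidth(3)
            context.cgContext.stroke(rect)

            // Text drawing is the expensive part (~80% of the time measured above).
            let label = observation.labels.first?.identifier ?? ""
            (label as NSString).draw(at: rect.origin,
                                     withAttributes: [.font: UIFont.systemFont(ofSize: 14),
                                                      .foregroundColor: UIColor.red])
        }
    }
}
```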

This is fatal when running object detection over a large number of frames, such as video.

10 times faster when processed with CGImage

If you draw on a CGImage using CGContext, the same drawing completes in 0.04 seconds in the same environment, which is more than 10 times faster than drawing on a UIImage.

YOLOv5 detection and drawing sample code
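The original sample code embed is not preserved here, so the following is a minimal sketch of the fast path: a single CGContext pass that composites the frame and strokes the boxes directly, avoiding UIImage and per-label Core Text layout. The function name is an assumption.

```swift
import CoreGraphics
import UIKit
import Vision

// Fast path (sketch): draw frame and boxes in one CGContext pass
// and return a CGImage, skipping UIImage-based rendering entirely.
func drawDetectionsFast(on cgImage: CGImage,
                        observations: [VNRecognizedObjectObservation]) -> CGImage? {
    let width = cgImage.width
    let height = cgImage.height
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // Composite the source frame first.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    context.setStrokeColor(UIColor.red.cgColor)
    context.setLineWidth(3)

    for observation in observations {
        // Vision's normalized, bottom-left-origin boxes already match
        // CGContext's coordinate system, so no y-flip is needed.
        let box = observation.boundingBox
        let rect = CGRect(x: box.origin.x * CGFloat(width),
                          y: box.origin.y * CGFloat(height),
                          width: box.width * CGFloat(width),
                          height: box.height * CGFloat(height))
        context.stroke(rect)
    }
    return context.makeImage()
}
```

For video, reusing one CGContext across frames instead of allocating a new one per frame can save further time.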

🐣

I'm a freelance engineer.
For work inquiries, please feel free to contact me with a brief description of your project.
rockyshikoku@gmail.com

I make apps that use Core ML and ARKit, and post information about machine learning and AR.

GitHub

Twitter
Medium
