Cutting out people on iOS (virtual background, background blur, compositing)

Differences from existing segmentation

AVFoundation and ARKit have offered person segmentation features before now, but a segmentation matte was not always available.
In AVFoundation, a matte existed only for photos captured with Portrait Matte enabled, and ARKit did not expose matte images at all.
One consequence is that neither AVFoundation nor ARKit could cut a person out of existing images or videos.
To do that, you had to run a machine learning model such as DeepLab through Core ML.
(That is the approach I took in an app I made that blurs or replaces the background behind a person.)

How to use Person Segmentation

It follows the usual Vision workflow: create a request, run it with a handler, and the result comes back as a pixel buffer, so you get a black-and-white matte image right away.

Request creation

qualityLevel

Three levels of segmentation accuracy are available; execution speed and resource consumption trade off against accuracy (see the sketch after the list below).

“.fast” for streaming
“.balanced” for video
“.accurate” for still images
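
A minimal sketch of creating the request, assuming iOS 15+ and the Vision framework; the choice of .accurate here is just an example:

```swift
import Vision
import CoreVideo

// Create the person segmentation request (iOS 15+)
let request = VNGeneratePersonSegmentationRequest()

// Pick the accuracy/speed trade-off; .accurate is slowest but sharpest
request.qualityLevel = .accurate

// Ask for an 8-bit single-channel matte (values 0–255)
request.outputPixelFormat = kCVPixelFormatType_OneComponent8
```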

Handler execution

Here we analyze a single still image.
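
A sketch of running the request against one image with VNImageRequestHandler; the function name and the `cgImage` input are assumptions for illustration:

```swift
import Vision
import CoreVideo

/// Returns a grayscale person matte for the given image, or nil on failure.
func personMatte(for cgImage: CGImage) -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    // Handler for a single still image
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
        // The result is a VNPixelBufferObservation whose pixelBuffer is the matte
        return request.results?.first?.pixelBuffer
    } catch {
        print("Person segmentation failed: \(error)")
        return nil
    }
}
```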

Background composition
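
The matte can be used as the mask for Core Image's blend-with-mask filter. A sketch under the assumption that `original`, `background`, and `matte` are already available as CIImages; the matte is scaled up because it comes back at a lower resolution than the source image:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Composites the person from `original` onto `background`, using `matte` as the mask.
func composite(original: CIImage, background: CIImage, matte: CIImage) -> CIImage? {
    // Scale the matte to match the original image's size
    let scaleX = original.extent.width / matte.extent.width
    let scaleY = original.extent.height / matte.extent.height
    let scaledMatte = matte.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // Person pixels (white in the matte) come from the original,
    // everything else comes from the new background
    let blend = CIFilter.blendWithMask()
    blend.inputImage = original
    blend.backgroundImage = background
    blend.maskImage = scaledMatte
    return blend.outputImage
}
```

For background blur rather than a virtual background, you can pass a blurred copy of the original (for example, run through CIGaussianBlur) as the background image.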
