You can build makeup and hair-color apps on iOS by using machine learning to separate facial parts.
How to get facial parts using machine learning
We want to automatically separate out just the hair, or just the eyes.
If we can precisely isolate the target parts, such as the hair and the lips, we can change their color and build a makeup app.
Asking the user to trace the outline and cut it out by hand is tedious; we want to do it automatically.
Moreover, we want the hair cut out as finely as possible.
But is such a fine, automatic cutout even possible?
There aren't many frameworks that can extract exactly the parts you're aiming for.
In machine learning there is a field called semantic segmentation: technology that finely separates an image into regions, one per object.
Thanks to technical improvements, it has become capable of quite fine-grained cutouts.
iOS also has frameworks that can do segmentation.
However, while AVFoundation's Portrait Matte gives you the whole body, the skin, and the teeth, it cannot separate just the hair or just the eyes. It also has to be set up when shooting with the camera.
Vision's person segmentation likewise cuts out only the whole body.
Machine learning solves it in no time
What helps here is a semantic segmentation model called face-parsing, which has learned a mask for each part of the face.
GitHub - zllrunning/face-parsing.PyTorch: Using modified BiSeNet for face parsing in PyTorch
It finely separates the hair, the left and right eyes, the lips, and even hats, and its accuracy is quite good.
If we can use it on iOS, we can make a makeup app.
So I prepared a version of the model converted for iOS. The conversion does not change the accuracy.
The converted face-parsing model for iOS can be downloaded from CoreML-Models.
GitHub - john-rocky/CoreML-Models: Converted CoreML Models Zoo.
After that, you can run the model by making a request with Vision.
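As a rough sketch of that request (the generated class name `FaceParsing` and the input `faceImage` are placeholders here, and the exact output type depends on how the model was converted):

```swift
import Vision
import CoreML

// Load the converted model. "FaceParsing" is a placeholder for the class
// Xcode generates from the downloaded .mlmodel file.
let coreMLModel = try FaceParsing(configuration: MLModelConfiguration()).model
let visionModel = try VNCoreMLModel(for: coreMLModel)

let request = VNCoreMLRequest(model: visionModel) { request, _ in
    // If the model outputs a multi-array, each element is a class index
    // for one pixel (e.g. 1 = skin, 2 = left eyebrow).
    guard let observation = request.results?.first as? VNCoreMLFeatureValueObservation,
          let labels = observation.featureValue.multiArrayValue else { return }
    print(labels.shape) // expected to correspond to 512 x 512
}
// Let Vision scale the input image to the model's 512 x 512 input size.
request.imageCropAndScaleOption = .scaleFill

// "faceImage" is a placeholder CGImage containing the face to segment.
let handler = VNImageRequestHandler(cgImage: faceImage, options: [:])
try handler.perform([request])
```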
The result is, for each pixel, a number identifying the part that pixel belongs to.
The output is 512 × 512 pixels, where, for example, 1 means skin and 2 means the left eyebrow.
This array of numbers is converted into a black-and-white mask image for use.
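That conversion step can be sketched as a small pure function. The helper name `binaryMask` is illustrative, and the hair index 17 follows the label map of the original face-parsing.PyTorch repository, so verify it against the converted model:

```swift
// Turn a flat array of per-pixel class indices into an 8-bit mask:
// pixels of the target class become white (255), everything else black (0).
func binaryMask(labels: [Int32], targetClass: Int32) -> [UInt8] {
    labels.map { $0 == targetClass ? 255 : 0 }
}

// Tiny 2 x 2 example: only the last pixel is the target class (17 = hair
// in the original repo's label map).
let example: [Int32] = [0, 1, 2, 17]
let mask = binaryMask(labels: example, targetClass: 17)
// mask == [0, 0, 0, 255]
```

To display or composite the mask, these bytes can then be written into a grayscale `CGContext` to obtain a `CGImage`.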
Sample code covering everything from model execution to image conversion and makeup is here:
GitHub - john-rocky/CoreML-Face-Parsing
A simple sample of how to use the face-parsing Core ML model on iOS. You can segment face parts, and you can also try makeup.
Hair coloring and makeup can then be done by compositing the image with the acquired mask.
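At the pixel level, that compositing can be sketched as a simple blend, assuming an 8-bit RGB image and an 8-bit mask in which 255 marks the target part (the function and parameter names are illustrative, not from the sample repository):

```swift
// Tint one RGB pixel toward a target color wherever the mask is set.
// `strength` in 0...1 controls how strong the new color is.
func tint(pixel: (r: UInt8, g: UInt8, b: UInt8),
          mask: UInt8,
          color: (r: UInt8, g: UInt8, b: UInt8),
          strength: Double) -> (r: UInt8, g: UInt8, b: UInt8) {
    // The mask value (0...255) scales the blend, so soft mask edges
    // fade the color out smoothly instead of cutting off hard.
    let a = strength * Double(mask) / 255.0
    func mix(_ src: UInt8, _ dst: UInt8) -> UInt8 {
        UInt8((Double(src) * (1 - a) + Double(dst) * a).rounded())
    }
    return (mix(pixel.r, color.r), mix(pixel.g, color.g), mix(pixel.b, color.b))
}

// Example: blend a gray pixel halfway toward red where the mask is white.
let newPixel = tint(pixel: (100, 100, 100), mask: 255,
                    color: (200, 0, 0), strength: 0.5)
// newPixel == (r: 150, g: 50, b: 50)
```

Because the blend is scaled by the mask value, blurring the mask slightly before compositing gives a natural transition at the hairline.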
I’m a freelance engineer.
Please feel free to contact me with a brief description of what you want to build.
I make apps that use Core ML and ARKit.
I post machine learning and AR related information.