Finally, it is possible to generate insanely realistic faces of people who do not exist: easy use of StyleGAN on iOS with Core ML.

MLBoy
4 min read · Dec 28, 2021


Use a machine learning model to generate realistic images on iOS

Do you know the web page “This Person Does Not Exist”?
When you visit the page, you will see a high-quality portrait.

This person does not exist.

When you reload, you will see another person. None of these people exist.
The images are generated by a machine learning technique called StyleGAN.

This article will show you how to use StyleGAN on iOS to easily generate realistic portrait images.

Super-powerful image generation, but can it be used on iOS?

StyleGAN can generate not only human faces but also paintings, anime images, and more.
If we could use it on iOS, it seems we could make some fun apps.

However, although StyleGAN is a technology announced in the late 2010s, I have not seen many cases of it being used on iOS (as far as I know).
There is a way to run it on a web server, but running it on-device would require no network communication and would be fast. I imagine the bottlenecks were:

・The StyleGAN model was too big

・The structure seemed too complicated

MobileStyleGAN has appeared

In 2021, MobileStyleGAN, a lighter version designed for mobile, was introduced.

This model can easily be converted to Core ML format and used on iOS.
It is a lightweight 38 MB (before quantization), and generating an image takes only about one second.
At that size and speed, I think it can be used on the iPhone.

Specific method

1. Get the model

Plan A:

You can get the Core ML model by running the CoreML export script in the MobileStyleGAN repository.
The output is a multidimensional array (a 1024 × 1024 MLMultiArray).
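Because Plan A's model returns a raw multidimensional array rather than an image, the pixel values have to be copied into an image buffer by hand. Below is a minimal sketch that assumes a [3, 1024, 1024] Float32 array with values in [-1, 1]; the actual channel layout and value range depend on how the model was exported, so check them against your own conversion script.

```swift
import CoreML
import UIKit

// Convert a [3, 1024, 1024] Float32 MLMultiArray (assumed range [-1, 1])
// into a UIImage. Layout and range are assumptions — verify against the
// export script you used.
func image(from array: MLMultiArray) -> UIImage? {
    let width = 1024, height = 1024
    var pixels = [UInt8](repeating: 255, count: width * height * 4) // RGBA
    let ptr = UnsafeMutablePointer<Float32>(OpaquePointer(array.dataPointer))
    for y in 0..<height {
        for x in 0..<width {
            for c in 0..<3 {
                let value = ptr[c * width * height + y * width + x]
                // Map [-1, 1] → [0, 255], clamping out-of-range values.
                pixels[(y * width + x) * 4 + c] =
                    UInt8(max(0, min(255, (value + 1) * 127.5)))
            }
        }
    }
    guard let data = CFDataCreate(kCFAllocatorDefault, pixels, pixels.count),
          let provider = CGDataProvider(data: data),
          let cgImage = CGImage(
              width: width, height: height,
              bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: width * 4,
              space: CGColorSpaceCreateDeviceRGB(),
              bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue),
              provider: provider, decode: nil,
              shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}
```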

Plan B:

You can download the already-converted MobileStyleGAN Core ML model from CoreML-Models (a model zoo).
The output is an image (a 1024 × 1024 CVPixelBuffer).
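With Plan B the output is already a CVPixelBuffer, so displaying it only takes a small conversion, for example via Core Image. A sketch (error handling kept minimal):

```swift
import UIKit
import CoreImage

// Wrap the model's CVPixelBuffer output in a UIImage for display.
// Rendering through a CIContext materializes the pixels into a CGImage.
func image(from buffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: buffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```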

2. Image generation

Run the model in Swift

MobileStyleGAN consists of two networks: a mapping network and a synthesis network.
Create a random latent vector of shape [1, 512] and pass it to the mapping network.
Then pass the output of the mapping network to the synthesis network to get the image.
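Putting these steps together in Swift might look like the sketch below. The model class names (`MappingNetwork`, `SynthesisNetwork`) and the input/output feature names are assumptions: Xcode generates the real classes from the .mlmodel files, so substitute the names from your own project.

```swift
import Foundation
import CoreML

// Sketch of the two-stage MobileStyleGAN pipeline.
// MappingNetwork / SynthesisNetwork and the property names below are
// hypothetical; use the classes Xcode generates from your .mlmodel files.
func generateFace() throws -> CVPixelBuffer {
    // 1. Build a [1, 512] latent vector of standard-normal noise.
    let latent = try MLMultiArray(shape: [1, 512], dataType: .float32)
    for i in 0..<512 {
        // Box–Muller transform: two uniform samples → one Gaussian sample.
        let u1 = Float.random(in: Float.ulpOfOne..<1)
        let u2 = Float.random(in: 0..<1)
        latent[i] = NSNumber(value: (-2 * log(u1)).squareRoot() * cos(2 * Float.pi * u2))
    }

    // 2. Mapping network: latent z → style vector w.
    let mapping = try MappingNetwork(configuration: MLModelConfiguration())
    let style = try mapping.prediction(input: latent).style // hypothetical names

    // 3. Synthesis network: style vector w → 1024 × 1024 face image.
    let synthesis = try SynthesisNetwork(configuration: MLModelConfiguration())
    return try synthesis.prediction(style: style).image // hypothetical names
}
```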

Generate an image using the Xcode sample project

You can also try a sample project that uses MobileStyleGAN on iOS.

Here is a simple sample that selects an image from the photo library and runs the model.
You can clone it from GitHub and build it. Press a button to generate a face image; each one takes about a second to generate.

You may be able to create interesting apps with this.

Since each image is generated from a 512-dimensional random latent vector, the number of possible patterns is astronomically large; by a rough estimate, something on the order of 3 × 10^1000.
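As a sanity check on that estimate: if each of the 512 latent dimensions is assumed to take roughly 90 distinguishable values (an arbitrary quantization chosen for illustration, not a property of the model), the number of combinations is 90^512, which works out to about 10^1000. Computing it in log space avoids overflow:

```swift
import Foundation

// Rough combination count for a 512-dimensional latent vector, assuming
// ~90 distinguishable values per dimension (an illustrative assumption,
// not a property of MobileStyleGAN).
let dims = 512.0
let valuesPerDim = 90.0
// 90^512 overflows Double, so compute log10(90^512) = 512 * log10(90).
let log10Count = dims * log10(valuesPerDim)
print("about 10^\(Int(log10Count)) possible latent vectors") // about 10^1000
```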

Images generated on iPhone using MobileStyleGAN

🐣

I’m a freelance engineer.
For work inquiries, please feel free to contact me with a brief description of the development you need.
rockyshikoku@gmail.com

I make apps that use Core ML and ARKit.
I post machine learning and AR related information.

GitHub

Twitter
Medium

Thank you.
