Use CartoonGAN on iOS.

MLBoy
Aug 29, 2020
Run a GAN on an iOS device.

Paper: CartoonGAN: Generative Adversarial Networks for Photo Cartoonization (CVPR 2018)

Original Project: mnicnc404/CartoonGan-tensorflow

Converted model: Core ML Models on GitHub

You can translate your images into Hayao, Shinkai, Hosoda, or Paprika style!

We convert the TensorFlow 2 model to a Core ML model so that the GAN can run on iOS.

1. Clone the repository.

!git clone https://github.com/mnicnc404/CartoonGan-tensorflow.git
%cd CartoonGan-tensorflow/

2. Install required modules.

!pip install -r requirements_cpu.txt
!pip install git+https://www.github.com/keras-team/keras-contrib.git
!pip install coremltools==4.0b3
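
Before converting, a quick sanity check (my own optional addition, not one of the original steps) confirms that TensorFlow 2 and the pinned coremltools build import correctly:

# Optional sanity check: TensorFlow 2.x comes from requirements_cpu.txt,
# coremltools 4.0b3 from the pip install above.
import tensorflow as tf
import coremltools as ct
print(tf.__version__)  # expect 2.x
print(ct.__version__)  # expect 4.0b3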

3. Add the conversion snippet to the test script (cartoonize.py).

The input shape in the original model is (None, None, 3), but if it is left as is, it becomes (1, 1, 3) in Core ML, so we specify the size explicitly.

# cartoonize.py, insert after line 303: models.append(cartoongan.load_model(style))
import coremltools as ct

mlmodel = ct.convert(models[0],
                     inputs=[ct.ImageType(shape=(1, 256, 256, 3), bias=[-1, -1, -1], scale=1/127)])
mlmodel.save(style + ".mlmodel")
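
To confirm that the fixed size actually took, here is a small sketch (my own addition, not part of the original script) that reloads the saved model's spec with coremltools and prints its input and output descriptions. The filename "hayao.mlmodel" is just an example of one saved style.

# Sketch: inspect the saved model's spec to confirm the fixed 256x256 image input.
# "hayao.mlmodel" is an example filename; use whichever style you saved.
import coremltools as ct

spec = ct.utils.load_spec("hayao.mlmodel")
print(spec.description.input)   # expect an imageType with width 256 and height 256
print(spec.description.output)  # the stylized result, typically a multiArrayType tensor

Since only the input is declared as an image here, the output generally comes back as a MultiArray rather than an image, which is something to keep in mind on the Xcode side.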

The usage in Xcode is the same as in this article:

Converting UGATIT to CoreML Model.

With CoreGANContainer, you can just drop the model into your project and use it.

***

We share information about machine learning.

Twitter

contact:

rockyshikoku@gmail.com
