Converting UGATIT to a CoreML Model

UGATIT is a state-of-the-art image-to-image translation model.

* Paper: “U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation” (https://arxiv.org/abs/1907.10830)

* GitHub project page: https://github.com/taki0112/UGATIT

* Preconverted CoreML model (Selfie2Anime): https://github.com/john-rocky/CoreML-Models

You can use this model on iOS devices by converting the pretrained TensorFlow model to a CoreML model.

1. Clone the GitHub project above.

git clone https://github.com/taki0112/UGATIT.git
cd UGATIT

2. Download the pretrained model checkpoint from the GitHub project page above, and put it in a “checkpoint” directory that you make.

mkdir checkpoint
## put the checkpoint you downloaded in this directory.
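After unzipping, the checkpoint folder should look roughly like this. This layout is a sketch: the directory name is taken from the checkpoint path used in step 6, and the three per-shard files are the standard TensorFlow checkpoint format, so your download may differ slightly.

checkpoint/
└── UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing/
    ├── checkpoint
    ├── UGATIT.model-1000000.data-00000-of-00001
    ├── UGATIT.model-1000000.index
    └── UGATIT.model-1000000.meta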

3. Download the selfie2anime dataset from the GitHub project page above, and put it in a “dataset” directory that you make.

mkdir dataset
## put the selfie2anime dataset you downloaded in this directory.
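The UGATIT repository expects each dataset to be split into four folders. The tree below follows the layout in the UGATIT README; the domain assignment in the comments is my inference from the selfie-to-anime direction of the test phase.

dataset/
└── selfie2anime/
    ├── trainA/   ## selfie photos (domain A)
    ├── trainB/   ## anime faces (domain B)
    ├── testA/    ## selfies used by the test phase
    └── testB/    ## anime faces used by the test phase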

4. Export the model graph as a pbtxt. To do this, insert a tf.io.write_graph call at UGATIT.py line 642 (i.e., inside “def test(self):”), and run a test prediction.

## UGATIT.py
## def test(self):
fake_img = self.sess.run(self.test_fake_B, feed_dict={self.test_domain_A: sample_image})
tf.io.write_graph(self.sess.graph_def, './', 'ugatit.pbtxt') ## ← insert this line.

Then run the test phase:

python main.py --dataset selfie2anime --phase test
## If the test succeeds, you get "ugatit.pbtxt" in your current directory.
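If you want to double-check the export before freezing, a minimal sketch like the following parses the pbtxt and lists the Tanh nodes, among which the generator output used in step 6 should appear. The file name check_graph.py is my own, not from the original post.

## check_graph.py: optional sanity check (a sketch; assumes TensorFlow 1.x-style APIs)
import tensorflow as tf
from google.protobuf import text_format

graph_def = tf.compat.v1.GraphDef()
with open('ugatit.pbtxt', 'r') as f:
    text_format.Merge(f.read(), graph_def)

## The output node name passed to freeze_graph in step 6 should show up here.
for node in graph_def.node:
    if 'Tanh' in node.name:
        print(node.name)  ## expect generator_B/Tanh among the results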

5. Install tfcoreml.

pip install tfcoreml
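Note that this walkthrough uses TensorFlow 1.x-style APIs (freeze_graph and GraphDef), so a TF 1.x environment is assumed. If pip resolves TensorFlow 2.x for you, pinning an older release may help; the exact version below is my assumption, not from the original post.

pip install tensorflow==1.15.0 tfcoreml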

6. Make a frozen model. To do this, write “convert.py” and run it.

## convert.py
from __future__ import print_function
import numpy as np
from tensorflow.python.tools.freeze_graph import freeze_graph
import tfcoreml

graph_def_file = 'ugatit.pbtxt'
checkpoint_file = 'checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing/UGATIT.model-1000000'
frozen_model_file = './frozen_model.pb'
output_node_names = 'generator_B/Tanh'

freeze_graph(input_graph=graph_def_file,
             input_saver="",
             input_binary=False,
             input_checkpoint=checkpoint_file,
             output_node_names=output_node_names,
             restore_op_name="save/restore_all",
             filename_tensor_name="save/Const:0",
             output_graph=frozen_model_file,
             clear_devices=True,
             initializer_nodes="")

Then run it:

python convert.py
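Freezing folds the checkpoint variables into constants of the graph. As a quick check, a sketch like the one below confirms the frozen graph loads and still contains the output node (verify_frozen.py is my own name, not from the original post):

## verify_frozen.py: optional sanity check (a sketch)
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open('frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

node_names = [n.name for n in graph_def.node]
print(len(node_names), 'nodes in frozen graph')
print('generator_B/Tanh' in node_names)  ## should print True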

7. Convert the frozen model to a CoreML model. To do this, write “coreml.py” and run it.

## coreml.py
import tfcoreml

input_tensor_shapes = {'test_domain_A': [1, 256, 256, 3]} # batch size is 1
# Output CoreML model path
coreml_model_file = './ugatit.mlmodel'
output_tensor_names = ['generator_B/Tanh:0']

# Call the converter
coreml_model = tfcoreml.convert(
        tf_model_path='frozen_model.pb',
        mlmodel_path=coreml_model_file,
        input_name_shape_dict=input_tensor_shapes,
        output_feature_names=output_tensor_names,
        image_input_names='test_domain_A',
        red_bias=-1,
        green_bias=-1,
        blue_bias=-1,
        image_scale=2/255,
        minimum_ios_deployment_target='12')

Then run it:

python coreml.py
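The bias and scale values map 8-bit RGB pixels into the [-1, 1] range the generator expects: each channel becomes image_scale * pixel + bias, so 0 maps to -1 and 255 maps to (2/255) * 255 - 1 = 1, matching the Tanh output range. To confirm the converted model's interface, a minimal sketch (assuming coremltools is installed alongside tfcoreml; the file name is my own):

## inspect_mlmodel.py: optional (a sketch)
import coremltools

spec = coremltools.utils.load_spec('./ugatit.mlmodel')
print(spec.description)  ## expect an image input and a multiArray output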

Now, you can use UGATIT in your iOS project.

import Vision

lazy var coreMLRequest: VNCoreMLRequest = {
    let model = try! VNCoreMLModel(for: ugatit().model)
    let request = VNCoreMLRequest(model: model, completionHandler: self.coreMLCompletionHandler0)
    return request
}()

let handler = VNImageRequestHandler(ciImage: ciimage, options: [:])
DispatchQueue.global(qos: .userInitiated).async {
    try? handler.perform([self.coreMLRequest])
}

For visualizing the MultiArray output as an image, hollance’s “CoreMLHelpers” library is very convenient.

func coreMLCompletionHandler0(request: VNRequest, error: Error?) {
    let result = request.results?.first as! VNCoreMLFeatureValueObservation
    let multiArray = result.featureValue.multiArrayValue
    // cgImage(min:max:channel:axes:) comes from CoreMLHelpers.
    let cgimage = multiArray?.cgImage(min: -1, max: 1, channel: nil, axes: (3, 1, 2))
}

P.S. I made an iOS app with the selfie2anime UGATIT CoreML model.

You can make an app like this, or an even greater one.

Please follow me on Twitter: https://twitter.com/JackdeS11, and please clap 👏.

Happy Image Generating!
