In this story, we convert the Pix2Pix model from the TensorFlow Core tutorials to Core ML. First, train the tutorial model in Colaboratory.
Run all cells in the Colab notebook up to this line:

fit(train_dataset, EPOCHS, test_dataset)
Then insert new cells and run the converter.
1. Install coremltools and tfcoreml.
!pip install --upgrade coremltools
!pip install --upgrade tfcoreml
2. Restore the latest checkpoint.
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
3. Temporarily save the generator in the SavedModel format.
generator.save('./savedmodel')
4. Run the converter.
import tfcoreml

input_name = generator.inputs[0].name.split(':')[0]
print(input_name)  # Check input_name.
keras_output_node_name = generator.outputs[0].name.split(':')[0]
graph_output_node_name = keras_output_node_name.split('/')[-1]

mlmodel = tfcoreml.convert('./savedmodel',
                           input_name_shape_dict={input_name: (1, 256, 256, 3)},
                           output_feature_names=[graph_output_node_name],
                           minimum_ios_deployment_target='13',
                           image_input_names=input_name,
                           image_scale=2 / 255.0,
                           red_bias=-1,
                           green_bias=-1,
                           blue_bias=-1)
mlmodel.save('./pix2pix.mlmodel')
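The `image_scale` and bias arguments deserve a note: Core ML applies `pixel * image_scale + bias` to each channel at inference time, so `2 / 255.0` with biases of `-1` maps 8-bit pixel values in [0, 255] to the [-1, 1] range the Pix2Pix generator was trained on. A minimal sketch of that mapping (the function name here is illustrative, not part of any API):

```python
def normalize(pixel, scale=2 / 255.0, bias=-1.0):
    """Map an 8-bit channel value (0..255) to roughly [-1, 1],
    mirroring what Core ML does with image_scale and the per-channel biases."""
    return pixel * scale + bias

print(normalize(0))    # lowest pixel value -> -1.0
print(normalize(255))  # highest pixel value -> ~1.0
```

If your own preprocessing in training differs (for example [0, 1] scaling), adjust `image_scale` and the biases to match, or the converted model's output will be visibly off.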
Now you can use Pix2Pix in your iOS project.
import Vision

lazy var coreMLRequest: VNCoreMLRequest = {
    let model = try! VNCoreMLModel(for: pix2pix().model)
    let request = VNCoreMLRequest(model: model, completionHandler: self.coreMLCompletionHandler0)
    return request
}()
let handler = VNImageRequestHandler(ciImage: ciimage, options: [:])
DispatchQueue.global(qos: .userInitiated).async {
    try? handler.perform([coreMLRequest])
}
For visualizing the output MLMultiArray as an image, Mr. Hollance’s “CoreMLHelpers” are very convenient.
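CoreMLHelpers handles the array-to-image conversion for you, but the underlying idea is the inverse of the input preprocessing: the generator emits values in [-1, 1], which must be mapped back to 0–255 per channel before display. A minimal Python sketch of that denormalization (the function name is illustrative):

```python
def denormalize(value):
    """Map a model output in [-1, 1] back to an 8-bit channel value (0..255),
    clamping out-of-range values first."""
    clamped = max(-1.0, min(1.0, value))
    return round((clamped + 1.0) * 127.5)

print(denormalize(-1.0))  # -> 0
print(denormalize(1.0))   # -> 255
```

The clamp matters in practice: generator outputs can slightly overshoot [-1, 1], and without it you would produce invalid pixel values.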
Please follow me on Twitter: https://twitter.com/JackdeS11. And please clap your hands 👏.
Ciao!