Split the output of the segmentation model (DeepLabV3) into individual labels with Core ML Helpers.
Apple's officially distributed DeepLabV3 Core ML model outputs a (513, 513) MLMultiArray of per-pixel labels.
Each pixel has a label value from 0 to 20:
0: background
1: aeroplane
2: bicycle
3: bird
4: boat
5: bottle
6: bus
7: car
8: cat
9: chair
10: cow
11: diningtable
12: dog
13: horse
14: motorbike
15: person
16: pottedplant
17: sheep
18: sofa
19: train
20: tv
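As a point of reference, here is a minimal sketch (not from the original article) of running the model with Vision and reading a label out of the resulting MLMultiArray. The DeepLabV3 class is the one Xcode auto-generates when you add the model file to your project, and the function name here is illustrative:

import CoreML
import UIKit
import Vision

// A minimal sketch: run DeepLabV3 with Vision and read per-pixel labels.
func readSegmentationLabels(in image: CGImage) throws {
    let model = try VNCoreMLModel(for: DeepLabV3(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let observation = request.results?.first as? VNCoreMLFeatureValueObservation,
              let multiArray = observation.featureValue.multiArrayValue else { return }
        // The array is indexed as [y, x]; each element is an Int32 label.
        let label = multiArray[[0, 0] as [NSNumber]].intValue
        print("label at (0, 0):", label) // e.g. 15 means 'person'
    }
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}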
If you want to use the output as a mask image, you need to test the label value of each pixel.
Core ML Helpers can turn the label values into a mask image: in its toRawBytes function, make the target label value white.
// Inside the toRawBytes function of Core ML Helpers:
for c in 0..<channels {
    for y in 0..<height {
        for x in 0..<width {
            var value = ptr[c*cStride + y*yStride + x*xStride]
            // For example, to pick out the car class: its label value is 7,
            // so set pixels whose label is 7 to 255 (white) and all other
            // pixels to 0 (black) to get a binary mask image.
            if value != T(7) {
                value = T(0)
            } else {
                value = T(255)
            }
            let scaled = (value - min) * T(255) / (max - min)
            let pixel = clamp(scaled, min: T(0), max: T(255)).toUInt8
            pixels[(y*width + x)*bytesPerPixel + c] = pixel
        }
    }
}
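With the bytes thresholded like this, you can wrap the buffer in a grayscale CGImage to obtain the actual mask image. A minimal sketch; this helper is illustrative and not part of Core ML Helpers:

import CoreGraphics
import Foundation

// A minimal sketch (assumed helper): wrap an 8-bit grayscale buffer,
// one byte per pixel, in a CGImage to use as a mask.
func makeMaskImage(pixels: [UInt8], width: Int, height: Int) -> CGImage? {
    guard let provider = CGDataProvider(data: Data(pixels) as CFData) else { return nil }
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: 8,
                   bitsPerPixel: 8,
                   bytesPerRow: width,
                   space: CGColorSpaceCreateDeviceGray(),
                   bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .defaultIntent)
}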
Once you have the mask image, you can blur only the background or display the familiar colored segmentation overlay.
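For the background blur, one approach (an assumption here, not code from the article) is Core Image's CIBlendWithMask filter: blend the sharp original over a blurred copy, using the mask so that the white (subject) areas stay sharp:

import CoreImage
import CoreImage.CIFilterBuiltins

// A minimal sketch: blur only the background using the segmentation mask
// (white = keep sharp, black = blur).
func blurBackground(of input: CIImage, mask: CIImage) -> CIImage? {
    let blur = CIFilter.gaussianBlur()
    blur.inputImage = input
    blur.radius = 20
    guard let blurred = blur.outputImage?.cropped(to: input.extent) else { return nil }

    let blend = CIFilter.blendWithMask()
    blend.inputImage = input          // shown where the mask is white
    blend.backgroundImage = blurred   // shown where the mask is black
    blend.maskImage = mask
    return blend.outputImage
}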
***
We share information related to machine learning.
contact:
rockyshikoku@gmail.com