In some cases, ARKit’s captured frame (ARFrame.capturedImage) is analyzed with Vision or similar to create an AR interaction. In that case, the frame you get through ARSessionDelegate and the area of the frame shown on the display may differ: the display may show only a cropped portion of the captured frame.
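For reference, here is a minimal sketch of receiving the captured frame through the delegate (the class name FrameReceiver is made up for this example):

```swift
import ARKit

// A minimal sketch, assuming an ARSession is already running:
// ARSessionDelegate delivers each captured frame via session(_:didUpdate:).
class FrameReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The raw camera image, whose pixel size is much larger
        // than the display's point size.
        let pixelBuffer: CVPixelBuffer = frame.capturedImage
        _ = pixelBuffer // hand off to Vision etc. here
    }
}
```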
For example, the iPhone 11 has a 414x896-point display, but the captured frame is 1440x1920 pixels. Because the frame is presented aspect-fill, its full height fits on the display, but horizontally only the center portion is visible.
Here is code that crops the frame to the area shown on the display.
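A minimal sketch of such a crop, assuming an aspect-fill presentation and a buffer already oriented to match the viewport (e.g. 1440x1920 against a 414x896 viewport); the function name cropToDisplay is made up for this example:

```swift
import ARKit
import CoreImage

// Crop the captured frame to the region visible on screen,
// assuming an aspect-fill presentation (the default for ARSCNView/ARView).
func cropToDisplay(pixelBuffer: CVPixelBuffer, viewportSize: CGSize) -> CIImage {
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    let frameSize = image.extent.size

    // Aspect-fill scale: the larger of the two ratios,
    // so the frame covers the whole viewport.
    let scale = max(viewportSize.width / frameSize.width,
                    viewportSize.height / frameSize.height)

    // Size of the visible region in frame (pixel) coordinates.
    let visibleSize = CGSize(width: viewportSize.width / scale,
                             height: viewportSize.height / scale)

    // Center the crop rect, matching what the display shows.
    let cropRect = CGRect(x: (frameSize.width - visibleSize.width) / 2,
                          y: (frameSize.height - visibleSize.height) / 2,
                          width: visibleSize.width,
                          height: visibleSize.height)

    return image.cropped(to: cropRect)
}
```

Only the aspect ratios matter here, so the viewport size can be given in points. ARKit also offers ARFrame.displayTransform(for:viewportSize:) if you need the exact mapping between image and view coordinates, including rotation.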
If you analyze this cropped image, the analysis is based on exactly the frame that is shown on the display.
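For instance, a hypothetical usage inside session(_:didUpdate:), assuming the cropToDisplay(pixelBuffer:viewportSize:) sketch above and a viewSize captured from the AR view's bounds on the main thread:

```swift
import ARKit
import Vision

// Run a Vision request on the cropped image so results
// correspond to what the user actually sees.
func analyze(frame: ARFrame, viewSize: CGSize) {
    let cropped = cropToDisplay(pixelBuffer: frame.capturedImage,
                                viewportSize: viewSize)
    let handler = VNImageRequestHandler(ciImage: cropped, options: [:])
    // Any Vision request works; face detection is just an example.
    try? handler.perform([VNDetectFaceRectanglesRequest()])
}
```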
🐣
I’m a freelance engineer.
For work consultation, please feel free to contact me with a brief description of the development.
rockyshikoku@gmail.com
I am making an app that uses Core ML and ARKit.
I post machine learning / AR related information.