How To Make (almost) Anything

interface & application programming



AUGMENTED REALITY FOR CACTUS CIRCUITS


Code: https://github.com/aberke/ar-cactus-circuits


TASK

write an application that interfaces with an input &/or output device that you made.

I’ve designed and programmed a number of interfaces and applications, but never any involving augmented reality (AR). This seemed like the right week to try.





BUILDING WITH AR

There are a number of AR frameworks and tools to choose from.

I used ARKit.



PREREQUISITES / DEVELOPMENT ENVIRONMENT

In order to make my development environment compatible with both my iPhone and Apple's ARKit, I:





RESOURCES

I read and worked through tutorials to help me understand Xcode, ARKit, and Swift (the iOS programming language). Swift is funny looking.

I mashed a few tutorials together and modified them towards my project goals. I tested my work by pointing my phone at images I wanted it to detect.




But what is all this code doing?

Helpful Apple documentation explaining ARSCNView and related classes: https://developer.apple.com/documentation/arkit/building_your_first_ar_experience
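Roughly: an ARSCNView renders SceneKit content on top of the live camera feed while its ARSession tracks the device. As a minimal sketch of the setup pattern that documentation describes (not my project's exact code; the @IBOutlet assumes the view is wired up in a storyboard):

import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // The delegate receives callbacks when ARKit adds nodes
        // for detected anchors (e.g. recognized reference images).
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking follows the device's position and orientation.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the session when the view is no longer visible.
        sceneView.session.pause()
    }
}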





AR MODELS

Models to drop into scenes must be in the Xcode project as .scn files.

Making my own .scn files to overlay on top of detected images:

The materials attached to the object did not import correctly into Xcode. I used Xcode's native SceneKit editor to reattach them.

Model Sizing
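
Models rarely import at a sensible size, so the loaded node usually needs scaling. A sketch of loading a .scn model and shrinking it, assuming a hypothetical file named cactus.scn in the app bundle:

import SceneKit

func loadCactusNode() -> SCNNode? {
    guard let scene = SCNScene(named: "cactus.scn") else { return nil }
    // Wrap the scene's contents in a single node so the model can
    // be added to another scene and transformed as one unit.
    let node = SCNNode()
    for child in scene.rootNode.childNodes {
        node.addChildNode(child)
    }
    // Imported models are often far too large for the scene;
    // shrink the whole model uniformly.
    node.scale = SCNVector3(0.01, 0.01, 0.01)
    return node
}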





ICONS

To make icons for the “app,” I used Icon Set Creator: https://itunes.apple.com/us/app/icon-set-creator/id939343785





AR TARGETS & DETECTION

Apple documentation for ARKit reference images: https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience
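As a sketch of what that documentation describes: detection is enabled by handing the session's configuration a set of reference images. I'm assuming here that the photos live in an asset catalog group named "AR Resources" (the name Xcode suggests by default):

import ARKit

func makeDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let referenceImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) {
        // ARKit creates an ARImageAnchor whenever one of these
        // images is recognized in the camera feed.
        configuration.detectionImages = referenceImages
    }
    return configuration
}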



The AR target I wanted to detect was the cactus circuit board. I wanted the AR interaction behavior to differ depending on whether the lights were:

1. All off

2. One on/one off

3. All on



I took photos of each of these states so that the different photos could serve as distinct AR target reference images.

i.e. when the reference image with all lights off (1) is detected, trigger behavior 1; when the reference image with all lights on (3) is detected, trigger behavior 3 (sketched in code below the reference images).



Reference images:

1. All off

2. One on/one off

3. All on

I had to use images of just the lights, rather than the whole board, because otherwise the reference images were too similar for the AR app to tell apart.
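
Putting the pieces together, a sketch of branching on the detected image inside the ARSCNViewDelegate callback; "all-off", "one-on", and "all-on" are hypothetical asset catalog names for the three photos:

import ARKit

// Inside the view controller acting as the ARSCNViewDelegate.
// ARKit calls this when it adds a node for a new anchor,
// including anchors for recognized reference images.
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let name = imageAnchor.referenceImage.name else { return }
    switch name {
    case "all-off":   // reference image 1: all lights off
        break         // behavior 1
    case "one-on":    // reference image 2: one on / one off
        break         // behavior 2
    case "all-on":    // reference image 3: all lights on
        break         // behavior 3
    default:
        break
    }
}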





POSITIONING THE (CACTUS) MODEL IN THE SCENE
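
The node ARKit creates for an ARImageAnchor is centered on the detected image, with the y axis pointing out of the image plane, so offsetting the model along y makes it hover above the board. A sketch, reusing the hypothetical loadCactusNode() helper from above:

import ARKit

func placeCactus(onImageNode node: SCNNode) {
    guard let cactus = loadCactusNode() else { return }
    // Lift the model slightly off the image so it appears to hover.
    cactus.position = SCNVector3(0, 0.05, 0)  // ~5 cm above the image
    node.addChildNode(cactus)
}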







RESULT

In the end… my images of the lights were too small and difficult for ARKit to identify or disambiguate, especially when the ambient lighting fluctuated.

So I cut scope and just showed one setting, where a cactus hovered and spun.
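
For reference, a sketch of the kind of continuous spin involved, applied to the hovering cactus node:

import SceneKit

func spin(_ node: SCNNode) {
    // One full turn about the vertical (y) axis every 4 seconds,
    // repeated forever.
    let oneTurn = SCNAction.rotateBy(x: 0, y: CGFloat.pi * 2, z: 0, duration: 4)
    node.runAction(SCNAction.repeatForever(oneTurn))
}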



Along the way, I learned about using the Xcode development environment, coding in Swift, and the limitations of AR.




Distribution?

It would be nice to share this “app,” but I will not be able to submit it to the App Store for distribution because it is too simple and does not adhere to Apple's guidelines.

From https://developer.apple.com/app-store/review/guidelines/#minimum-functionality:

4.2.1 Apps using ARKit should provide rich and integrated augmented reality experiences; merely dropping a model into an AR view or replaying animation is not enough.





SOURCE CODE

All code is on GitHub: https://github.com/aberke/ar-cactus-circuits