The huge growth in the augmented reality market has created a dizzying array of technology options for app developers. Each new technology opens the door to amazing new experiences, but it also introduces new technical challenges and potentially steep learning curves. Furthermore, combining multiple types of augmented reality into a single app requires careful architecture and design: time that you could otherwise spend on the experience itself.
Motive’s SDK and authoring tool solve many of these problems for you. Our extensible plugin architecture makes it easy to mix and match a variety of technologies in one app, and if our list of integrations doesn’t have what you need, we’ve made it easy to add your own. For augmented reality, this means you don’t have to commit to a single use case. Want the power of image tracking AND the ability to link objects to real-world GPS points? No problem!
Motive’s mobile AR support falls broadly into two categories:
- Vision Augmented Reality: Vision AR covers technologies like Vuforia, Wikitude, and ARKit that use real-world image tracking to anchor media and interactive elements on a live camera feed.
- Location-based Augmented Reality: Location AR is used for Pokémon Go-style games and apps where you use your phone’s camera to interact with virtual items connected to real-world locations.
Location-Based Augmented Reality
With Motive’s location-based augmented reality features, developers can easily create mobile games and apps that combine location services, GPS, and a variety of media types. The possibilities are only limited by your imagination:
- Unlock virtual items based on your GPS position and capture them using an AR camera minigame.
- Use locative sound to create a virtual Easter-egg hunt where you follow a sound, and an egg pops onto your AR screen when you get close enough.
- Interact with virtual characters who inhabit the real world.
- Overlay context-aware information on a live camera feed for historical or educational tours.
As the developer or content creator, you have fine-grained control over how elements are placed in your AR space and how users can interact with them.
- Place 3D objects, images, or video; adjust orientation, scale, and relative position
- Specify their “distance variation”: whether they are locked at a fixed distance from the viewer (like Pokémon Go) or anchored to a particular GPS coordinate (i.e., they get closer as you move closer)
- Specify whether they always face the user, or whether they behave more like a real object in the world (i.e., you can walk around them to see every side)
- Trigger events based on user interaction: tap, gaze, etc.
- Attach “inspectors” that let you provide more information or give the user more interaction options (e.g., ask a question with multiple-choice answers)
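Motive handles this placement for you, but to make the two “distance variation” modes concrete, here is a minimal Python sketch of the underlying math. It is not Motive’s API — the function names and the equirectangular approximation are our own illustration of how a GPS target can be projected into a local AR scene, either at its true offset or pinned at a fixed distance from the viewer.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres


def gps_to_local_offset(viewer_lat, viewer_lon, target_lat, target_lon):
    """Approximate east/north offset (metres) of a GPS target from the
    viewer, using an equirectangular projection (accurate enough at the
    short ranges typical of AR scenes)."""
    d_lat = math.radians(target_lat - viewer_lat)
    d_lon = math.radians(target_lon - viewer_lon)
    east = d_lon * math.cos(math.radians(viewer_lat)) * EARTH_RADIUS_M
    north = d_lat * EARTH_RADIUS_M
    return east, north


def place_object(viewer_lat, viewer_lon, target_lat, target_lon,
                 locked_distance=None):
    """Return (east, north) AR-scene coordinates for a virtual object.

    locked_distance=None -> GPS-locked: the object sits at its true
                            offset, so it gets closer as you walk toward it.
    locked_distance=d    -> viewer-locked: the object keeps its bearing
                            but stays pinned d metres away (Pokémon Go style).
    """
    east, north = gps_to_local_offset(viewer_lat, viewer_lon,
                                      target_lat, target_lon)
    if locked_distance is None:
        return east, north
    dist = math.hypot(east, north)
    scale = locked_distance / dist
    return east * scale, north * scale
```

In the viewer-locked mode, only the bearing to the GPS point matters; in the GPS-locked mode, the full offset drives the object’s position, which is what makes it appear to approach as you walk toward it.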
Vision Augmented Reality
Motive.io’s vision AR support makes it easy to create engaging, immersive experiences using image tracking technologies like Vuforia, Wikitude, and ARKit. With Motive’s rich gameplay and branching narrative features, you can unleash the potential of vision AR to engage people in ways that were never before possible. Augment your world with interactive video, audio, images, and 3D objects for room escapes, adventure games, scavenger hunts, conference apps, or whatever else you can imagine. Check out our Escape to Mars demo app for a great example of vision AR powering a room escape-style game.
Designed for Flexibility
Combining vision AR with Motive’s powerful authoring tool gives you maximum flexibility when designing vision AR experiences. Unlike most vision AR authoring systems, Motive lets you change the meaning of markers on the fly, and with Motive’s adaptive content engine you can design experiences that change and adapt to each player as they go. Because Motive experiences are delivered on demand over the Internet, you can make changes in near-real time. This has enormous benefits. Consider our experience building the Escape to Mars app for AWE 2017: using the Motive authoring tool, we were able to set up the whole experience the day before the show by simply swapping our test markers with images from the show floor. We were also able to adapt during the show itself as we observed how foot-traffic patterns obscured some of the images we had chosen. We did all of this through the Motive web tool without making any changes to the app itself.
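The key architectural idea behind swapping markers without an app update is an indirection layer: the app looks content up by marker ID at runtime, and the marker-to-content mapping is delivered from the server. The sketch below is our own generic illustration of that pattern (the JSON shape and names are hypothetical, not Motive’s actual data format).

```python
import json
from dataclasses import dataclass


@dataclass
class MarkerBinding:
    marker_id: str    # identifies the tracked image in the vision engine
    content_url: str  # media or script the app plays when the marker is seen


def load_bindings(payload: str) -> dict:
    """Parse a server-delivered mapping of marker -> content.

    Because the app resolves content through this mapping at runtime,
    re-publishing the payload changes what any marker "means" without
    touching the app binary or the tracked images themselves."""
    return {
        item["marker_id"]: MarkerBinding(item["marker_id"], item["content_url"])
        for item in json.loads(payload)
    }


# Example payload as it might arrive from the authoring back end.
payload = json.dumps([
    {"marker_id": "booth-poster", "content_url": "https://example.com/intro.mp4"},
    {"marker_id": "show-floor-sign", "content_url": "https://example.com/clue2.png"},
])
bindings = load_bindings(payload)
```

Swapping a test marker for a show-floor image then amounts to publishing a new payload that binds the new marker ID to the existing content, which is exactly the kind of change that can happen in near-real time.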