
Why should Android developers start building AR apps before 2024?


The phrase “augmented reality,” or AR, has long been on everyone’s lips, and the technology is used in many areas of life. AR is being actively implemented in mobile applications as well. A large part of the AR market is occupied by entertainment apps – remember the Pokemon Go fever of 2016? However, entertainment is not the only area where AR is used: tourism, medicine, education, healthcare, retail, and other industries actively use it too. According to studies, by the end of 2020 there were almost 600 million active users of mobile apps with AR. By 2024, nearly three-fold growth (1.7 billion users) is predicted, and the revenue from such applications is estimated at $26 billion. The future is very close!

That’s why in this article we’ll look at several popular tools for building Android mobile apps with AR functionality, along with their pros and cons.

History of AR

AR technology appeared and made its way into smartphones quite a long time ago. Originally it was part of VR. In 1961, Philco Corporation (USA) developed Headsight, the first virtual reality helmet. Like most inventions, it was first used for the needs of the Department of Defense. Then the technology evolved: various simulators, virtual helmets, and even goggles with gloves appeared. They were not widely distributed, but these technologies interested NASA and the CIA. In 1990, Tom Caudell coined the term “augmented reality”; we can say that from that moment on, AR became separate from VR. The ’90s brought many interesting inventions, such as an exoskeleton that allowed the military to virtually control vehicles, and gaming platforms. In 1993, Sega announced a VR headset for its Genesis console. However, this product never reached the mass market: users reported nausea and headaches during games. The high cost of devices, limited technical capabilities, and side effects forced the mass segment to forget about VR and AR technologies for a while. In 1994, AR made its way into the arts for the first time with a theater production called Dancing in Cyberspace, in which acrobats danced in virtual space.

In 2000, thanks to a virtual reality helmet, it became possible to chase monsters from the popular game Quake down a real street. This may have inspired the future creators of Pokemon Go. Until the 2010s, attempts to bring AR to the masses were not very successful.

In the 2010s, quite successful projects appeared: MARTA (an application from Volkswagen that gives step-by-step recommendations on car repair and maintenance) and Google Glass. At the same time, AR started to appear in mobile applications: Pokemon Go, IKEA Place, AR integration in various Google apps (Translate, Maps, etc.), filters in Instagram, and so on. Today there are more and more mobile apps with AR, and their use is spreading well beyond entertainment.

What AR is and how it works on a smartphone

Essentially, AR is based on computer vision technology. It all starts with a device that has a camera. The camera scans an image of the real world – that’s why, when you run most AR apps, you’re first asked to move the camera around in space for a while. The pre-installed AR engine then analyzes this information and builds a virtual world on top of it, in which it places one or several AR objects (a picture, a 3D model, text, a video) against the background of the original image. AR objects can be pre-stored in the phone’s memory or downloaded from the Internet in real time. The application remembers the location of the objects, so their position does not change when the smartphone moves, unless the application is specifically designed to move them. Objects are fixed in space with special markers (identifiers). There are three main methods AR technology uses:

  • Natural markers. A virtual grid is superimposed on the surrounding world. On this grid, the AR engine identifies anchor points, which determine the exact location to which the virtual object will later be attached. The benefit is that real-world objects serve as natural markers, so there is no need to create markers programmatically.
  • Artificial markers. The AR object is tied to a specific marker created artificially, such as the spot where a QR code was scanned. This technology works more reliably than natural markers.
  • Spatial technology. The position of the AR object is attached to specific geographic coordinates, using data from the smartphone’s GPS/GLONASS receiver, gyroscope, and compass.

Tools for AR in Android

AR tools comparison table

Google ARCore

The first thing that comes to mind is Google’s ARCore. ARCore isn’t an SDK but a platform for working with AR, so you have to implement the graphical elements the user interacts with yourself. In other words, you can’t do everything with ARCore alone – you need an additional technology for rendering graphics.

There are several solutions for this. 

If you want to use Kotlin:

  • Until recently, you could use Google’s dedicated Sceneform SDK. But in 2020, Google moved Sceneform to the archive and withdrew further support for it. Currently, the Sceneform repository is maintained by enthusiasts and is available here. It must be said that the repository is updated quite frequently; however, there is still a risk in relying on technology that is no longer supported by Google.
  • Integrate OpenGL into the project. OpenGL is a cross-platform API for rendering 2D and 3D graphics, and Android provides OpenGL ES bindings that can be used from Kotlin and Java. This option is suitable if your developers know how to work with OpenGL or can figure it out quickly (which is a non-trivial task).

If you want to use something that isn’t Kotlin:

  • Android NDK. If your developers know C++, they can use the Android NDK for development. However, they will also need to deal with graphics there. The OpenGL library already mentioned will be suitable for this task.
  • Unreal Engine. There is nothing better for working with graphics than a game engine. Unfortunately, Google’s ARCore SDK for Unity is no longer supported, but Unreal Engine developers can still build AR applications.

Vuforia

Another popular tool for developing AR applications is Vuforia, developed by PTC. Unlike ARCore, Vuforia can work with ordinary 2D and 3D objects as well as video and audio. You can create virtual buttons, change the background, and control occlusion – the state where one object is partially hidden by another.

Fun fact: using Vuforia, a developer can turn on ARCore under the hood, and the official Vuforia documentation even recommends doing so. While the application is running, Vuforia checks whether ARCore can be used on the device and, if so, uses it.

Unfortunately, there is bad news again for Kotlin fans: Vuforia can only be used with C or Unity. Another downside is that if you plan to publish your application for commercial purposes, you will have to buy a paid version of Vuforia (Vuforia prices).

It works with Android 6 and up, and there is a list of recommended devices.

ARToolKit

ARToolKit is a completely free open-source library for working with AR. Its features are:

  • support for Unity3D and OpenSceneGraph graphics libraries
  • support for single and dual cameras simultaneously
  • GPS support
  • ability to create real-time applications
  • integration with smart glasses
  • multi-language support
  • automatic camera calibration

However, the documentation leaves a lot to be desired, and the official website does not respond to clicks on menu items. Apparently, ARToolKit supports Android development via Unity. Using this library is quite risky.

MAXST 

MAXST is a popular solution from Korea with very detailed documentation. There is an SDK for working with 2D and 3D objects, available in Java and Unity; in Java, you need to additionally implement the work with graphics. The official website states that the SDK works on Android starting from version 4.3, which is a huge plus for those who want to cover the maximum number of devices. However, the SDK is paid if you plan to publish the app. The prices are here.

Wikitude 

Wikitude is developed by an Austrian company that was recently acquired by Qualcomm. It allows you to recognize and track 2D and 3D objects, images, and scenes, work with geodata, and integrate with smart glasses. There is a Java SDK (you need to additionally implement the work with graphics), as well as Unity and Flutter SDKs. This solution is paid, but you can try the free version for 45 days.

Conclusion

Today there is a real choice of frameworks for developing AR applications for Android. Of course, there are many more than those listed here, but I have tried to collect the most popular ones. I hope this helps you with your choice. May Android be with you.

Fora Soft develops VR/AR applications. Have a look at our portfolio: Super Power FX, Anime Power FX, UniMerse. We are #453 out of 3,162 on TopDevelopers’ 2022 list of top mobile app developers.

Want your own AR app? Contact us – our technically savvy sales team will be happy to answer all your questions.


How to apply an effect to a video in iOS


Have you ever thought about how videos are processed? What about applying effects? In this AVFoundation tutorial, I’ll try to explain video processing on iOS in simple terms. This topic is quite complicated yet interesting. You can find a short guide on how to apply the effects down below.

Core Image

Core Image is a framework by Apple for high-performance image processing and analysis. The classes CIImage, CIFilter, and CIContext are the main components of this framework.

With Core Image, you can chain different filters (CIFilter) together to create custom effects. You can also create effects that run on the GPU (graphics processor), which moves some of the load off the CPU (central processor) and thus increases the app’s speed.
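For instance, here is a minimal sketch of chaining two built-in filters; CISepiaTone and CIGaussianBlur are used purely as examples, and imageURL is an assumed input:

#import <CoreImage/CoreImage.h>

// Minimal sketch: chain two built-in filters into one custom effect.
CIImage *inputImage = [CIImage imageWithContentsOfURL:imageURL]; // imageURL is an assumed NSURL

CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:inputImage forKey:kCIInputImageKey];
[sepia setValue:@0.8 forKey:kCIInputIntensityKey];

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:sepia.outputImage forKey:kCIInputImageKey]; // the output of one filter feeds the next
[blur setValue:@2.0 forKey:kCIInputRadiusKey];

CIImage *result = blur.outputImage; // nothing is rendered until a CIContext draws this image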

AVFoundation

AVFoundation is a framework for working with media files on iOS, macOS, watchOS, and tvOS. By using AVFoundation, you can easily create, edit, and play QuickTime movies and MPEG-4 (MP4) files. You can also play HLS streams (read more about HLS here) and build custom features for working with video and audio, such as players and editors.
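As a tiny, hedged illustration of the playback part (videoURL and containerView below are assumed to exist), playing an MP4 file can look like this:

#import <AVFoundation/AVFoundation.h>

// Minimal sketch: play an MP4 file with AVPlayer.
AVPlayer *player = [AVPlayer playerWithURL:videoURL];

// Attach the player to a layer so the video shows up in a view's layer hierarchy.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = containerView.bounds;
[containerView.layer addSublayer:playerLayer];

[player play];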

Adding an effect

Let’s say you need to add an explosion effect to your video. What do you do?

First, you’ll need to prepare three videos: the main one where you’ll apply the effect, the effect video with an alpha channel, and the effect video without an alpha channel.

An alpha channel is an additional channel that can be integrated into an image. It contains information about the image’s transparency and can provide different transparency levels, depending on the alpha type.

We need the alpha channel so that the effect video does not completely cover the main one. Here is an example of a picture with the alpha channel and without it:

a picture with the alpha channel and without it

Transparency goes down as the color gets whiter. Therefore, black is fully transparent whereas white is not transparent at all.

After applying the video effect, we’ll only see the explosion itself (the white part of the image on the right), and the rest will be transparent. This allows us to see the main video to which we apply the effect.

Then, we need to read the three videos at the same time and combine the images, using CIFilter. 
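The reading itself can be done with AVAssetReader. Here is a rough sketch for one of the three videos (the variable names and the BGRA pixel format are my assumptions, not the original project’s code); the same setup is repeated for the main video, the effect video, and its alpha mask:

#import <AVFoundation/AVFoundation.h>

// Hedged sketch: read frames from one video as CMSampleBuffers.
AVAsset *asset = [AVAsset assetWithURL:videoURL]; // videoURL is an assumed NSURL
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:settings];
[reader addOutput:output];
[reader startReading];

// Each call returns the next frame as a CMSampleBuffer (NULL when the video ends).
CMSampleBufferRef recordBuffer = [output copyNextSampleBuffer];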

First, we get a reference to a CVImageBuffer via the CMSampleBuffer; it is used to manage different types of image data. CVPixelBuffer, which we’ll need later, is a specific kind of CVImageBuffer. From the CVImageBuffer we get a CIImage. It looks something like this in the code:

// Frame of the main video
CVImageBufferRef imageRecordBuffer = CMSampleBufferGetImageBuffer(recordBuffer);
CIImage *ciBackground = [CIImage imageWithCVPixelBuffer:imageRecordBuffer];
 
// Frame of the effect video without the alpha channel
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(buffer);
CIImage *ciTop = [CIImage imageWithCVPixelBuffer:imageBuffer];
 
// Frame of the effect video with the alpha channel (used as the mask)
CVImageBufferRef imageAlphaBuffer = CMSampleBufferGetImageBuffer(alphaBuffer);
CIImage *ciMask = [CIImage imageWithCVPixelBuffer:imageAlphaBuffer];

After receiving a CIImage for each of the three videos, we need to combine them using CIFilter. The code will look roughly like this:

CIFilter *filterMask = [CIFilter filterWithName:@"CIBlendWithMask" keysAndValues:@"inputBackgroundImage", ciBackground, @"inputImage", ciTop, @"inputMaskImage", ciMask, nil];
CIImage *outputImage = [filterMask outputImage];

Once again we’ve received a CIImage, but this time it is composed of the three CIImages we got before. Now we render the new CIImage into a CVPixelBufferRef using CIContext. The code will look roughly like this:

CVPixelBufferRef pixelBuffer = [self.contextEffect renderToPixelBufferNew:outputImage];
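Here, renderToPixelBufferNew: is the project’s own helper rather than a Core Image API. A hedged sketch of what such a method might do with a plain CIContext (the pixel format and buffer attributes are assumptions) could look like this:

// Hypothetical helper: render a CIImage into a freshly created pixel buffer.
- (CVPixelBufferRef)renderToPixelBufferNew:(CIImage *)image {
    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        (size_t)image.extent.size.width,
                        (size_t)image.extent.size.height,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs,
                        &pixelBuffer);
    if (pixelBuffer) {
        // self.contextEffect is the CIContext; it draws the image into the buffer on the GPU.
        [self.contextEffect render:image toCVPixelBuffer:pixelBuffer];
    }
    return pixelBuffer; // the caller is responsible for releasing it with CVPixelBufferRelease()
}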

Now we have a finalized pixel buffer. We need to append it to the output video through the writer adaptor, and after that we’ll receive a video with the effect.

// frameUse is the frame counter; the timescale of 30 corresponds to 30 frames per second
[self.writerAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(self.frameUse, 30)];
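For context, writerAdaptor above is an AVAssetWriterInputPixelBufferAdaptor attached to an AVAssetWriter. A rough sketch of setting it up might look like the code below; the output URL, codec, and frame size are assumptions:

// Hedged sketch: set up an AVAssetWriter and a pixel buffer adaptor for the output file.
NSError *writerError = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL // outputURL is an assumed NSURL
                                                 fileType:AVFileTypeMPEG4
                                                    error:&writerError];

NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecTypeH264,
                                 AVVideoWidthKey  : @1280,
                                 AVVideoHeightKey : @720 };
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

self.writerAdaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                      sourcePixelBufferAttributes:nil];
[writer addInput:writerInput];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];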

At this point, the effect is successfully added to the video. Notably, the work was done on the GPU, which takes the load off the CPU and thus increases the app’s speed.

This is cool, right? Actually, there are 9 simple ways to make an iOS app even cooler. How? Check out here!

Conclusion

Adding effects to videos on iOS is quite a complicated task, but it can be done if you know how to use the basic frameworks for working with media on iOS. If you want to learn more about it, feel free to get in touch with us via the Contact us form!