
Why should Android developers start building AR apps before 2024?


The phrase “augmented reality,” or AR, has long been on everyone’s lips and is used in many areas of life. AR is being actively implemented in mobile applications as well. A large part of the AR market is occupied by entertainment applications – remember the Pokemon Go fever of 2016? But entertainment is far from the only area for AR: tourism, medicine, education, healthcare, retail, and other fields use it actively too. According to studies, by the end of 2020 there were almost 600 million active users of mobile apps with AR. By 2024, nearly three-fold growth (to 1.7 billion users) is predicted, with revenue from such applications estimated at $26 billion. The future is very close!

That’s why in this article we’ll look at several popular tools for building Android mobile apps with AR functionality and weigh their pros and cons.

History of AR

It’s been quite a long time since the advent of AR technology and its implementation in smartphones. AR was originally part of VR. In 1961, the Philco Corporation (USA) developed Headsight, the first virtual reality helmet. Like most inventions, it was first used for the needs of the Department of Defense. The technology then evolved: various simulators, virtual helmets, and even goggles with gloves appeared. They were not widely distributed, but the technologies interested NASA and the CIA. In 1990, Tom Caudell coined the term “augmented reality,” and we can say that from that moment on, AR became separate from VR. The ’90s brought many interesting inventions: exoskeletons that allowed the military to control vehicles virtually, and gaming platforms. In 1993, Sega announced a VR headset for its Genesis console. The product never reached the mass market, however: users reported nausea and headaches during games. The high cost of devices, scarce technical equipment, and side effects forced the mass segment to forget about VR and AR technologies for a while. In 1994, AR made its way into the arts for the first time with a theater production called Dancing in Cyberspace, in which acrobats danced in virtual space.

In 2000, a modified version of the popular game Quake, played through a virtual reality helmet, made it possible to chase monsters down a real street. This may have inspired the future creators of Pokemon Go. Until the 2010s, attempts to bring AR to the masses were not very successful.

In the 2010s, genuinely successful projects appeared: MARTA (an application from Volkswagen that gives step-by-step recommendations on car repair and maintenance) and Google Glass. At the same time, AR began making its way into mobile applications: Pokemon Go, IKEA Place, AR integrations in various Google applications (Translate, Maps, etc.), filters in Instagram, and so on. Today there are more and more mobile applications with AR, and their use is spreading well beyond entertainment.

What is AR and how does it work on a smartphone?

Essentially, AR is based on computer vision technology. It all starts with a device that has a camera. The camera scans an image of the real world – that’s why most AR apps first ask you to move the camera around in space for a while. The pre-installed AR engine then analyzes this information and builds a virtual world from it, placing one or more AR objects (a picture, 3D model, text, or video) against the background of the original image. AR objects can be pre-stored in the phone’s memory or downloaded from the Internet in real time. The application remembers the location of the objects, so their position does not change when the smartphone moves, unless the app is specifically designed to move them. Objects are fixed in space using special markers (identifiers). There are three main ways AR technology can work:

  • Natural markers. A virtual grid is superimposed on the surrounding world. On this grid, the AR engine identifies anchor points that determine the exact location to which the virtual object will later be attached. The benefit: real-world objects serve as natural markers, so there is no need to create markers programmatically (see the code sketch after this list).
  • Artificial markers. The AR object’s appearance is tied to a specific, artificially created marker, such as the place where a QR code was scanned. This approach works more reliably than natural markers.
  • Spatial technology. Here the position of the AR object is attached to specific geographic coordinates, using the smartphone’s GPS/GLONASS, gyroscope, and compass data.
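To make the natural-marker approach concrete, here is a minimal Kotlin sketch of how an ARCore-based app (one of the tools covered below) typically anchors a virtual object to a detected surface. It assumes an active com.google.ar.core Session and a Frame obtained from session.update(); error handling is omitted.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

// Create an anchor where the user tapped, if the tap ray hits a tracked plane.
// ARCore keeps the anchor's pose stable as its understanding of the world improves.
fun anchorAtTap(frame: Frame, tapX: Float, tapY: Float): Anchor? {
    for (hit in frame.hitTest(tapX, tapY)) {
        val plane = hit.trackable as? Plane ?: continue
        if (plane.trackingState == TrackingState.TRACKING &&
            plane.isPoseInPolygon(hit.hitPose)
        ) {
            return hit.createAnchor() // the virtual object is then rendered at this anchor
        }
    }
    return null // no suitable surface found yet; ask the user to keep scanning
}
```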

Tools for AR in Android


Google ARCore

The first thing that comes to mind is Google’s ARCore. ARCore isn’t an SDK but a platform for working with AR, so you still have to implement the graphical elements the user interacts with yourself. ARCore alone is not enough; it has to be paired with a technology for rendering graphics.

There are several solutions for this. 

If you want to use Kotlin:

  • Until recently, you could use Google’s dedicated Sceneform SDK. But in 2020, Google moved Sceneform to the archive and withdrew further support for it. Currently, the Sceneform repository is maintained by enthusiasts and is available here. To be fair, the repository is updated quite frequently, but there is still a risk in using technology that Google no longer supports.
  • Integrate OpenGL into the project. OpenGL is a graphics API designed specifically for working with graphical objects, and Android provides an SDK for working with OpenGL ES from Kotlin and Java. This option is suitable if your developers know how to work with OpenGL or can figure it out quickly (which is a non-trivial task).

If you want to use something that isn’t Kotlin:

  • Android NDK. If your developers know C++, they can use the Android NDK for development. However, they will also need to deal with graphics there. The OpenGL library already mentioned will be suitable for this task.
  • Unreal Engine. There is nothing better for dealing with graphics than game engines. Unfortunately, Google’s dedicated ARCore SDK for Unity has been deprecated, but Unreal Engine developers can still build ARCore applications.
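Whichever rendering route you choose, the app should first check that ARCore is available on the device at all. A minimal Kotlin sketch, assuming the com.google.ar:core Gradle dependency:

```kotlin
import android.content.Context
import com.google.ar.core.ArCoreApk

// Returns true if ARCore is supported on this device.
// checkAvailability() can return a transient "checking" state while the
// Play Services for AR query is in flight; a real app would re-query shortly after.
fun isArCoreAvailable(context: Context): Boolean {
    val availability = ArCoreApk.getInstance().checkAvailability(context)
    return availability.isSupported
}
```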

Vuforia

Another popular tool for developing AR applications is Vuforia, developed by PTC. Unlike ARCore, Vuforia can work with ordinary 2D and 3D objects as well as video and audio. You can create virtual buttons, change the background, and control occlusion – the state where one object is partially hidden by another.

Fun fact: using Vuforia, a developer can turn on ARCore under the hood, and the official Vuforia documentation even recommends doing so. While the application is running, Vuforia will check whether ARCore can be used on the device and, if so, use it.

Unfortunately, there is bad news again for Kotlin fans: Vuforia can only be used with C or Unity. Another downside is that if you plan to publish your application for commercial purposes, you will have to buy a paid version of Vuforia (Vuforia prices).

It works with Android 6 and up, and there is a list of recommended devices.

ARToolKit

ARToolKit is a completely free open-source library for working with AR. Its features are:

  • support for Unity3D and OpenSceneGraph graphics libraries
  • support for single and dual cameras simultaneously
  • GPS support
  • ability to create real-time applications
  • integration with smart glasses
  • multi-language support
  • automatic camera calibration

However, the documentation leaves a lot to be desired, and the official website does not respond to clicks on menu items. Apparently, ARToolKit supports Android development through Unity. Using this library is quite risky.

MAXST 

A popular solution from Korea with very detailed documentation. There is an SDK for working with 2D and 3D objects, available in Java and Unity; in Java, you need to implement the graphics work yourself. The official website states that the SDK works on Android from version 4.3, which is a huge plus for those who want to cover the maximum number of devices. However, the SDK is paid if you plan to publish the app. The prices are here.

Wikitude 

A development by an Austrian company that was recently acquired by Qualcomm. It allows you to recognize and track 2D and 3D objects, images, and scenes, and to work with geodata; there is also integration with smart glasses. There is a Java SDK (you need to implement the graphics work yourself), as well as Unity and Flutter SDKs. This solution is paid, but you can try the free version for 45 days.

Conclusion

There is now a real choice of frameworks for developing AR applications for Android. Of course, there are many more than listed here, but I have tried to collect the most popular ones. I hope this helps you with your choice. May Android be with you.

Fora Soft develops VR/AR applications. Have a look at our portfolio: Super Power FX, Anime Power FX, UniMerse. We are ranked #453 out of 3162 in TopDevelopers’ 2022 list of top mobile app developers.

Want to have your own AR? Contact us, our technically-savvy sales team will be happy to answer all your questions.


What Can Android Neural Networks Do in 2022? Explained in Comics


Over the last 10 years, the term “neural networks” has spread beyond the scientific and professional environment. The theory of neural network organization emerged in the middle of the last century, but only around 2012 did computing power reach levels sufficient for training neural networks – and that is when their widespread use began.

Neural networks are increasingly being used in mobile application development. A Deloitte report indicates that more than 60% of the applications installed by adults in developed countries use neural networks. And according to statistics, Android has led its competitors in popularity for several years.

Neural networks are used:

  • to recognize and process voices (modern voice assistants), 
  • to recognize and process objects (computer vision), 
  • to recognize and process natural languages (natural language processing),
  • to find malicious programs, 
  • to automate apps and make them more efficient. For example, there are healthcare applications that detect diabetic retinopathy by analyzing retinal scans.

What are neural networks and how do they work?

Mankind borrowed the idea of neural networks from nature, taking the animal and human nervous systems as an example. A natural neuron consists of a nucleus, dendrites, and an axon. The axon branches out at its end, forming synapses (connections) with the dendrites of other neurons.

Brain neural network

The artificial neuron has a similar structure. It consists of a nucleus (processing unit), several dendrites (similar to inputs), and one axon (similar to outputs), as shown in the following picture:

Artificial neuron connections scheme

Connections of several neurons form layers, and connections of layers form a neural network. There are three main types of neurons: input (receives information), hidden (processes information), and output (presents results of calculations). Take a look at the picture.

Neural network connections scheme

Neurons on different levels are connected through synapses. As a signal passes through a synapse, it can either strengthen or weaken. The parameter of a synapse is its weight – a coefficient that can be any real number and that changes the information passing through. Input numbers (signals) are multiplied by their weights (each signal has its own weight) and summed. The activation function then calculates the output signal and sends it to the output (see the picture).

Neural network function

Imagine the situation: you have touched an iron. Depending on the signal that comes from your finger through the nerve endings to the brain, it will make a decision: pass the signal on through the neural connections so you pull your finger away, or not pass the signal if the iron is cold and you can leave your finger on it. The mathematical analog of the activation function has the same purpose: it lets signals pass, or blocks them, from neuron to neuron depending on the information they carry. If the information is important, the function passes it through; if it is insignificant or unreliable, the activation function does not allow it to pass on.
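To make this concrete, here is a toy Kotlin sketch of a single artificial neuron: a weighted sum of inputs passed through a sigmoid activation function. The input and weight values are arbitrary illustration numbers, not a trained model.

```kotlin
import kotlin.math.exp

// Sigmoid activation: squashes any real number into (0, 1),
// deciding "how strongly" the signal passes on.
fun sigmoid(x: Double): Double = 1.0 / (1.0 + exp(-x))

// A single neuron: multiply each input by its weight, sum, add bias, activate.
fun neuron(inputs: DoubleArray, weights: DoubleArray, bias: Double): Double {
    var sum = bias
    for (i in inputs.indices) sum += inputs[i] * weights[i]
    return sigmoid(sum)
}

fun main() {
    val output = neuron(
        inputs = doubleArrayOf(0.5, 0.9),
        weights = doubleArrayOf(0.4, -1.2), // each signal has its own weight
        bias = 0.1
    )
    println(output) // a value between 0 and 1
}
```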

How do you prepare a neural network for use?

How a neural network algorithm works

Work with neural nets goes through several stages:

  1. Preparation of a neural network, which includes the choice of architecture (how neurons are organized), topology (the structure of their location relative to each other and the outside world), the learning algorithm, etc. 
  2. Loading the input data into a neural network.
  3. Training a neural network. This is a crucial stage, without which the neural network is useless, and where all the magic happens: along with the input data, the neural network receives information about the expected result. The result obtained in the output layer is compared with the expected one, and if they do not coincide, the network determines which neurons affected the final value most and adjusts the weights on the connections with those neurons (the so-called error backpropagation algorithm). This is a very simplified explanation; we suggest reading this article to dive deeper into neural network training (a toy illustration also follows this list). Training is a very resource-intensive process, so it is not done on smartphones. The training time depends on the task, the architecture, and the input data volume.
  4. Checking training adequacy. A network does not always learn exactly what its creator wanted it to learn. In one well-known case, a network was trained to recognize tanks in photos, but since all the tanks were photographed against the same background, the network learned to recognize that type of background rather than the tanks. The quality of neural network training must therefore be tested on examples that were not involved in the training.
  5. Using a neural network – developers integrate the trained model into the application.
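As a toy illustration of the weight-adjustment idea from step 3 (real frameworks implement backpropagation far more generally), here is a single “neuron” with one weight learning the relationship y = 2x by gradient descent:

```kotlin
// One weight, no activation: the simplest possible "training loop".
// Real training runs on servers with frameworks like TensorFlow.
fun main() {
    val samples = listOf(1.0 to 2.0, 2.0 to 4.0, 3.0 to 6.0) // (input, expected result)
    var weight = 0.0
    val learningRate = 0.05
    repeat(200) {
        for ((x, expected) in samples) {
            val output = weight * x              // forward pass
            val error = output - expected        // compare with the expected result
            weight -= learningRate * error * x   // adjust the weight against the error
        }
    }
    println("learned weight ≈ $weight") // converges towards 2.0
}
```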

Limitations of neural networks on mobile devices

RAM limitations 

Most mid-range and low-end mobile devices on the market have between 2 and 4 GB of RAM, and usually about a third of this capacity is reserved by the operating system. As a running application with a neural network approaches the RAM limit, the system can simply “kill” it.

The size of the application

Complex deep neural networks often weigh several gigabytes. When a neural network is integrated into mobile software it is compressed somewhat, but that is still not enough to work comfortably. The main recommendation for developers on any platform is to minimize the size of the application as much as possible to improve the UX.

Runtime

Simple neural networks often return results almost instantly and are suitable for real-time applications. Deep neural networks, however, can take dozens of seconds to process a single set of input data even on powerful hardware. Since modern mobile processors are not yet as powerful as server processors, processing on a mobile device can take hours.

To develop a mobile app with neural networks, you first need to create and train a neural network on a server or PC, and then implement it in the mobile app using off-the-shelf frameworks.

Working with a single app on multiple devices

Say a facial recognition app is installed on the user’s phone and on their tablet. The app cannot transfer learned data between devices, so neural network training will happen separately on each of them.

Overview of neural network development libraries for Android

TensorFlow

TensorFlow is an open-source library from Google for creating and training deep neural networks. With this library, we can store a neural network and use it in an application.

The library can train and run deep neural networks to classify handwritten numbers, recognize images, embed words, and process natural languages. It works on Ubuntu, macOS, Android, iOS, and Windows. 

To make learning TensorFlow easier, the development team has produced additional tutorials and improved getting started guides. Some enthusiasts have created their own TensorFlow tutorials (including InfoWorld). You can read several books on TensorFlow or take online courses. 

As mobile developers, we should take a look at TensorFlow Lite, the lightweight TensorFlow solution for mobile and embedded devices. It allows you to run machine learning inference on the device (but not training) with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration via the Android Neural Networks API. TensorFlow Lite models are compact enough to run on mobile devices and can be used offline.
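Getting started is a single dependency in the module-level Gradle file; a sketch below, with an illustrative version number (check the current TensorFlow Lite release):

```kotlin
// build.gradle.kts (module level); the version number is illustrative
dependencies {
    implementation("org.tensorflow:tensorflow-lite:2.9.0")
}
```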

TensorFlow Lite architecture

TensorFlow Lite runs fairly small neural network models on Android and iOS devices, even when they are offline.

The basic idea behind TensorFlow Lite is to train a TensorFlow model and convert it to the TensorFlow Lite format. The converted file can then be used in a mobile app.

TensorFlow Lite consists of two main components:

  • TensorFlow Lite interpreter – runs specially optimized models on cell phones, embedded Linux devices, and microcontrollers.
  • TensorFlow Lite converter – converts TensorFlow models into an efficient form for usage by the interpreter, and can make optimizations to improve performance and binary file size.
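On the app side, using the converted file comes down to a few lines. Here is a minimal Kotlin sketch; the file name model.tflite and the tensor shapes are placeholders that depend entirely on how the model was built and converted:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.channels.FileChannel

// Memory-map a converted .tflite model bundled in assets and run one inference.
fun runInference(context: Context): FloatArray {
    val fd = context.assets.openFd("model.tflite") // placeholder file name
    val model = FileInputStream(fd.fileDescriptor).channel.map(
        FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
    )
    val input = Array(1) { FloatArray(4) }   // input shape depends on your model
    val output = Array(1) { FloatArray(2) }  // output shape depends on your model
    val interpreter = Interpreter(model)
    interpreter.run(input, output)           // on-device inference, no server round trip
    interpreter.close()
    return output[0]
}
```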

TensorFlow Lite is designed to simplify machine learning on mobile devices themselves instead of sending data back and forth from the server. For developers, machine learning on the device offers the following benefits:

  • response time: the request is not sent to the server, but is processed on the device
  • privacy: the data does not leave the device
  • Internet connection is not required
  • the device consumes less energy because it does not send requests to the server

Firebase ML Kit

TensorFlow Lite makes it easier to implement and use neural networks in applications. However, developing and training models still requires a lot of time and effort. To make life easier for developers, the Firebase ML Kit library was created.

The library lets applications use already-trained deep neural networks with minimal code. Most of the models offered are available both locally and on Google Cloud. Developers can use models for computer vision: character recognition, barcode scanning, object detection (a short barcode-scanning sketch follows the list below). The library is quite popular. For example, it is used in:

  • Yandex.Money (a Russian online payment service) to recognize QR codes;
  • FitNow, a fitness application that recognizes texts from food labels for calorie counting;
  • TurboTax, a tax preparation application that recognizes document barcodes.
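For a sense of how little code is involved, here is a sketch of on-device barcode recognition using the standalone ML Kit barcode-scanning API (com.google.mlkit:barcode-scanning), the successor to the Firebase ML Kit vision APIs. Where the Bitmap comes from is up to the app:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

// Detect barcodes in a bitmap entirely on the device – no backend involved.
fun scanBarcodes(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, 0) // 0 = image rotation in degrees
    BarcodeScanning.getClient()
        .process(image)
        .addOnSuccessListener { barcodes ->
            barcodes.forEach { println(it.rawValue) } // decoded payload of each code
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```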

ML Kit also has:

  • language detection of written text;
  • translation of texts on the device;
  • Smart Reply (generating a reply suggestion based on the entire conversation).

In addition to the out-of-the-box methods, there is support for custom models.

What’s important is that you don’t need any services, APIs, or backend for this. Everything can be done directly on the device: no user traffic is consumed, and developers don’t need to handle errors for the case when there is no Internet connection. Moreover, it works faster on the device. The downside is increased power consumption.

Developers also don’t need to republish the app after every model update: ML Kit dynamically updates the model when the device goes online.

The ML Kit team decided to invest in model compression. They are experimenting with a feature that allows you to upload a full TensorFlow model along with training data and get a compressed TensorFlow Lite model in return. Developers are looking for partners to try out the technology and get feedback from them. If you’re interested, sign up here.

Since this library is available through Firebase, you can also take advantage of other services on that platform. For example, Remote Config and A/B testing make it possible to experiment with multiple user models. If you already have a trained neural network loaded into your application, you can add another one without republishing the app, switch between them, or use two at once for the sake of experimentation – the user won’t notice.

Problems of using neural networks in mobile development

Developing Android apps that use neural networks is still a challenge for mobile developers. Training neural networks can take weeks or months since the input information can consist of millions of elements. Such a serious workload is still out of reach for many smartphones. 

Consider whether you can avoid using a neural network in a mobile app if:

  • there are no specialists in your company who are familiar with neural networks;
  • your task is non-trivial and solving it requires developing your own model (i.e., you cannot use ready-made solutions from Google), which will take a lot of time;
  • the customer needs a quick result – training neural networks can take a very long time;
  • the application will be used on devices with an old version of Android (below 9). Such devices do not have enough power.

Conclusion

Neural networks became popular a few years ago, and more and more companies are using the technology in their applications. Mobile devices impose their own limitations on how neural networks can operate. If you decide to use them, the best choice is either a ready-made solution from Google (ML Kit) or developing and integrating your own neural network with TensorFlow Lite.