Neural networks on Android


Over the past decade, the term “neural networks” has spread beyond scientific and professional circles. The theory of neural network organization emerged in the middle of the last century, but only by 2012 did computing power reach the levels needed to train neural networks. That is when their widespread use began.

Neural networks are increasingly being used in mobile application development. A Deloitte report indicates that more than 60% of the applications installed by adults in developed countries use neural networks. And according to usage statistics, Android has led its competitors in popularity for several years.

Neural networks are used:

  • to recognize and process voices (modern voice assistants), 
  • to recognize and process objects (computer vision), 
  • to recognize and process natural languages (natural language processing),
  • to find malicious programs, 
  • to automate apps and make them more efficient. For example, there are healthcare applications that detect diabetic retinopathy by analyzing retinal scans.

What are neural networks and how do they work?

Mankind borrowed the idea of neural networks from nature, taking the animal and human nervous systems as a model. A natural neuron consists of a nucleus, dendrites, and an axon. The axon branches at its end, forming synapses (connections) with the dendrites of other neurons.

An artificial neuron has a similar structure. It consists of a nucleus (processing unit), several dendrites (analogous to inputs), and one axon (analogous to an output), as shown in the following picture:

neural network scheme

Connections of several neurons form layers, and connections of layers form a neural network. There are three main types of neurons: input (receives information), hidden (processes information), and output (presents results of calculations). Take a look at the picture.

neural network architecture

Neurons on different levels are connected through synapses. As a signal passes through a synapse, it can either strengthen or weaken. Each synapse has a parameter called a weight: a coefficient, which can be any real number, that changes the information passing through. Input numbers (signals) are multiplied by their weights (each signal has its own weight) and summed. The activation function then calculates the output signal and sends it to the output (see the picture).

neural network function

Imagine the situation: you have touched a hot iron. Depending on the signal that comes from your finger through the nerve endings to the brain, it will make a decision: to pass the signal on through the neural connections so you pull your finger away, or not to pass it if the iron is cold and you can leave your finger on it. The mathematical analog of the activation function has the same purpose. The activation function allows signals to pass or not pass from neuron to neuron depending on the information they carry. If the information is important, the function passes it on; if it is insignificant or unreliable, the activation function blocks it.
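The weighted-sum-plus-activation mechanics can be sketched in a few lines of Python. This is an illustrative toy, not any particular framework's API:

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Compute an artificial neuron's output: a weighted sum of the
    inputs passed through a sigmoid activation function."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid squashes any real number into the range (0, 1):
    # values near 1 mean "pass the signal on", near 0 mean "block it".
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Two input signals, each with its own weight.
out = neuron_output([0.5, 0.9], [0.8, -0.2])
```

Real networks differ only in scale: millions of such neurons, each with its own weights, connected in layers.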

How to prepare neural networks for usage?

neural network algorithm

Work with neural nets goes through several stages:

  1. Preparation of a neural network, which includes the choice of architecture (how neurons are organized), topology (the structure of their location relative to each other and the outside world), the learning algorithm, etc. 
  2. Loading the input data into a neural network.
  3. Training a neural network. This is a very important stage, without which the neural network is useless. This is where all the magic happens: along with the input data, the neural network receives information about the expected result. The result produced in the output layer is compared with the expected one. If they do not match, the network determines which neurons most affected the final value and adjusts the weights of the connections with those neurons (the so-called error backpropagation algorithm). This is a very simplified explanation; we suggest reading this article to dive deeper into neural network training. Training is a very resource-intensive process, so it is not done on smartphones. Training time depends on the task, the architecture, and the input data volume. 
  4. Checking training adequacy. A network does not always learn exactly what its creator wanted it to learn. There was a case where the network was trained to recognize images of tanks from photos. But since all the tanks were on the same background, the neural network learned to recognize this type of background, not the tanks. The quality of neural network training must be tested on examples that were not involved in its training. 
  5. Using a neural network – developers integrate the trained model into the application.
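The compare-and-adjust loop in step 3 can be illustrated with a deliberately tiny example in pure Python: a single-weight "network" learning y = 2x by gradient descent. This is a sketch of the idea, not a real framework:

```python
# A one-weight "network" learns to map x -> 2*x from example pairs.
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
weight = 0.0          # initial weight, deliberately wrong
learning_rate = 0.05

for epoch in range(200):
    for x, expected in training_data:
        predicted = weight * x
        # Compare the obtained result with the expected one...
        error = predicted - expected
        # ...and adjust the weight in the direction that reduces
        # the squared error (the gradient of error**2 w.r.t. weight).
        weight -= learning_rate * 2 * error * x

print(round(weight, 3))  # converges close to 2.0
```

A real network repeats exactly this loop, only over millions of weights at once, which is why training is done on servers rather than phones.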

Limitations of neural networks on mobile devices

RAM limitations 

Most mid-range and low-end mobile devices on the market have between 2 and 4 GB of RAM, and usually about a third of that is reserved by the operating system. As the RAM limit approaches, the system can “kill” running applications, including those with neural networks.

The size of the application

Complex deep neural networks often weigh several gigabytes. Integrating a neural network into mobile software involves some compression, but it is usually still not enough. The main recommendation for developers on any platform is to minimize the application size as much as possible to improve the UX.


Processing speed

Simple neural networks often return results almost instantly and are suitable for real-time applications. Deep neural networks, however, can take dozens of seconds to process a single set of input data. Modern mobile processors are not yet as powerful as server processors, so processing results on a mobile device can take several hours.

To develop a mobile app with neural networks, you first need to create and train a neural network on a server or PC, and then implement it in the mobile app using off-the-shelf frameworks.

Working with a single app on multiple devices

For example, a facial recognition app is installed on both the user’s phone and tablet. It can’t transfer data between the devices, so neural network training will happen separately on each of them.

Overview of neural network development libraries for Android


TensorFlow

TensorFlow is an open-source library from Google for creating and training deep neural networks. With this library, you can save a trained neural network and use it in an application.

The library can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, and natural language processing. It works on Ubuntu, macOS, Android, iOS, and Windows. 

To make learning TensorFlow easier, the development team has produced additional tutorials and improved getting started guides. Some enthusiasts have created their own TensorFlow tutorials (including InfoWorld). You can read several books on TensorFlow or take online courses. 

As mobile developers, we should take a look at TensorFlow Lite, a lightweight TensorFlow solution for mobile and embedded devices. It lets you run machine learning inference on the device (but not training) with low latency and a small binary size. TensorFlow Lite also supports hardware acceleration via the Android Neural Networks API. TensorFlow Lite models are compact enough to run on mobile devices and can be used offline.

TensorFlow Lite can run fairly small neural network models on Android and iOS devices, even when they are offline. 

The basic idea behind TensorFlow Lite is to train a TensorFlow model and convert it to the TensorFlow Lite format. The converted file can then be used in a mobile app.

TensorFlow Lite consists of two main components:

  • TensorFlow Lite interpreter – runs specially optimized models on cell phones, embedded Linux devices, and microcontrollers.
  • TensorFlow Lite converter – converts TensorFlow models into an efficient form for usage by the interpreter, and can make optimizations to improve performance and binary file size.
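The train-then-convert workflow might look like this in Python, assuming TensorFlow 2.x is installed; the one-layer model here is only a placeholder for a real trained network:

```python
import tensorflow as tf

# Placeholder model; in practice it would be trained first (model.fit(...)).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the TensorFlow model into the compact TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

# The resulting file ships inside the mobile app's assets.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the device, the bundled .tflite file is then loaded and run by the TensorFlow Lite interpreter.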

TensorFlow Lite is designed to simplify machine learning on mobile devices themselves instead of sending data back and forth from the server. For developers, machine learning on the device offers the following benefits:

  • Response time: the request is processed on the device instead of being sent to a server
  • Privacy: the data does not leave the device
  • No Internet connection required
  • Lower energy consumption: the device does not send requests to a server

Firebase ML Kit

TensorFlow Lite makes it easier to implement and use neural networks in applications. However, developing and training models still requires a lot of time and effort. To make life easier for developers, the Firebase ML Kit library was created.

The library lets applications use already-trained deep neural networks with minimal code. Most of the models offered are available both locally and in Google Cloud. Developers can use models for computer vision (character recognition, barcode scanning, object detection). The library is quite popular. For example, it is used in:

  • Yandex.Money (a Russian e-payment system) to recognize QR codes;
  • FitNow, a fitness application that recognizes text on food labels for calorie counting;
  • TurboTax, a tax-filing application that recognizes document barcodes.

ML Kit also has:

  • language detection of written text;
  • translation of texts on the device;
  • smart message response (generating a reply sentence based on the entire conversation).

In addition to the out-of-the-box methods, custom models are supported.

What’s important is that you don’t need any extra services, APIs, or backend for this. Everything can be done directly on the device: no user traffic is consumed, and developers don’t need to handle errors for missing internet connections. Moreover, it works faster on the device. The downside is increased power consumption.

Developers don’t need to republish the app after every update, as ML Kit dynamically updates the model when the device goes online.

The ML Kit team decided to invest in model compression. They are experimenting with a feature that allows you to upload a full TensorFlow model along with training data and get a compressed TensorFlow Lite model in return. Developers are looking for partners to try out the technology and get feedback from them. If you’re interested, sign up here.

Since this library is available through Firebase, you can also take advantage of other services on that platform. For example, Remote Config and A/B testing make it possible to experiment with multiple user models. If you already have a trained neural network loaded into your application, you can add another one without republishing it to switch between them or use two at once for the sake of experimentation – the user won’t notice.

Problems of using neural networks in mobile development

Developing Android apps that use neural networks is still a challenge for mobile developers. Training neural networks can take weeks or months since the input information can consist of millions of elements. Such a serious workload is still out of reach for many smartphones. 

Consider whether you can avoid using a neural network in a mobile app if:

  • there are no specialists in your company who are familiar with neural networks;
  • your task is non-trivial and requires developing your own model, i.e. you cannot use ready-made solutions from Google, and development will take a lot of time;
  • the customer needs a quick result – training neural networks can take a very long time;
  • the application will be used on devices with an old version of Android (below 9). Such devices do not have enough power.


Neural networks became popular a few years ago, and more and more companies use this technology in their applications. Mobile devices impose their own limitations on neural network operation. If you decide to use neural networks, the best choice is either a ready-made solution from Google (ML Kit) or developing and implementing your own model with TensorFlow Lite.


Fora Soft Recognized As One of The Top 100 Fastest and Sustained Growing Companies by Clutch


Fora Soft has been included in 2 Clutch Top-100 lists: 100 Fastest Growing Companies and 100 Sustained Growing Companies!

Clutch is an online review and rating platform based in Washington, DC. It verifies companies and names the best in each field. This particular award goes to companies that recorded the highest verified revenue growth:

  • Fastest Growth – from 2019 to 2020
  • Sustained Growth – from 2017 to 2020

Fora Soft grew by 58% from 2019 to 2020 and by 158% from 2017 to 2020. For that, we’d like to thank our clients and partners, including for the ten 5-star reviews on Clutch. Kind words help motivate our team, and we’re especially thankful for reviews like this:

“Their skills impressed us, they can do anything in a short amount of time.” – COO, INSTACLASS.

Quickly growing a business is never an easy task and is an achievement that most will never be able to see. This is why we also want to thank Clutch for setting up such an award and recognizing the important milestone we’ve reached as a company.

“Thank you very much, the Clutch team, for choosing us. We are always hesitant about asking clients for a review. They are busy businessmen and businesswomen, and it takes time and mental effort. That is why we do not have many. However, the ones we have are all 5-stars, and we’d like to thank our clients for these kind words and the highest esteem. Happy that these were enough to make it into the Clutch 100 list. We’ll keep up the good work and improve further.” – Nikolay Sapunov, CEO of Fora Soft.

If you want to learn more about our services, leave us a message and we’ll get back to you as soon as possible. And we also launched Instagram, where we also answer DMs, share company news, project portfolios, and chat with you 🙂


Ali from TapeReal, ‘The team treated the project like their own.’


Our copywriter Nikita sat down with Ali, TapeReal CEO & Founder from Vancouver, to talk about his experience of working with Fora Soft. Ali came to us in August 2020, wanting to create a video-based social network. This is how Ali describes the product:

TapeReal is a better version of YouTube. Like, if YouTube was built today instead of 2005, for the people of today. That’s what we’re trying to build.

Was Fora Soft your 1st choice?

No, it wasn’t. Before this, I was working with a number of developers and other software development companies with varying degrees of success. We got some MVP prototypes.

I decided to turn to Fora Soft because I was impressed by their expertise in streaming technologies, the focus on quality, and their commitment to client’s success. 

What about your “before” and “after” of working with us?

Before working with Fora Soft, our development sprints were planned a little differently. I’d put together the requirements for design, but I wasn’t involved in user stories.

With Fora Soft we planned our sprints more effectively, they were more precise, the user stories were very clear.

The project manager provided a clear development plan about what is going to be achieved on certain days and certain milestones. So that part, I think, helped make the sprints more successful. The team was very involved in the whole planning process and provided a lot of great feedback as well.

Can you share any measurable figures? Like, profit, number of users, how many crashes?

In the App Store, we generated more positive reviews. The app was stabilized in many respects as well. We were facing a lot of issues with call recording and also with solo recording features. The team was able to fix some of those bugs and stabilize that experience for the community as well.

Were there any difficulties while developing the app?

We had an existing codebase, so it took some time for the team to familiarize themselves with the codebase.

You know, whenever there were challenges, the team did their best to overcome them, or they presented alternative options and solutions.

They communicated very effectively. In the beginning, it was just a matter of us getting used to each others’ communication styles and timelines. So, there were some miscommunications, some expectations from both ends that were a little bit challenging. Once we got to the standardized process, we had some clear expectations. The project ran pretty smoothly thereafter.

With TapeReal being quite an unusual project, communication is king. Determination and professionalism are very important, too. Rate us on those qualities, and maybe add some others?

In terms of communication, professionalism, and determination, nobody is perfect. I’d like to give you a score of 10, but obviously, we’re human beings. We make mistakes sometimes, but the main thing is that we learn from them, and we overcome them. In that respect, I really appreciated working with you guys. In terms of communication, you’re very proactive. The weekly status reports are really helpful, kinda gives you an idea of what was achieved, what’s planned for next week. You always communicated on Skype effectively. The professionalism also was always there.

The team treated the project like their own. They wanted to see the client succeed.

As for determination, I’d say that when there’s a technical challenge, the team enjoys trying to solve it. They put forth the best solution for it, which is great. If they’re unable to do it, you guys present the options or the alternatives for achieving the result in the end. In that respect, I also really appreciated the eye on the budget. Obviously, being a startup, we have limited funds. The team took that into consideration whenever they planned all their sprints, so I appreciate that, too.

Thanks! On behalf of the whole Fora Soft team, I wish you all the best with your project. I believe it has a great future.

Got a project idea of your own? Maybe, you’ve tried to make it come to fruition but were dissatisfied with the results? Contact us using the form on this website, and we’d be happy to review your case and offer the best solution.

We also started Instagram, so make sure to follow us there as we share a lot of information regarding projects. You can also DM us if that’s your preferred method of communication!


Code refactoring in plain words: what is it and when it’s needed

Make your project better with code refactoring

You know that feeling when you build something for a long time, update and rework it, and it turns out to be a complete mess? What if several people are working on it, each with their own understanding and vision? Say you spend decades writing a book. The way you see things will inevitably change, and you will take on new assistants. As soon as the book is ready, you will have to read it once more and get rid of plot holes and logical inconsistencies. Then correct the grammar, and you’re good to go!

Refactoring is the same rework. Not of a book though, but of a program code. Let’s find out when we need it, and when we don’t.

What does it mean?

Code refactoring is the process of changing the structure of code without changing what it does. It makes the program’s internal structure and logic easier to understand and helps prevent future problems. Refactoring does not fix bugs, rework functionality, or change external program behavior; optimization does that, but that’s a topic for another article.

Why does your project need refactoring?

  • to make the code clearer

Wading through mountains of old code is difficult even for seasoned professionals, let alone newcomers. Refactoring helps new programmers spend less time onboarding onto a project. If you pay for their time, it’s in your best interest for them to work faster. 

  • to speed up development, to make code simpler and faster.

To extend a program’s functionality, it is better not to “mold” additional code on top of the old one, but to refactor first. The task is the same as drilling a tooth before putting a filling on it: clean up the old code so that the new code fits better.

  • to improve the stability of the program.

The more concisely you express yourself, the easier you are to understand. Just as people find it comfortable to communicate with those who speak clearly and to the point, programs with concise code tend to work better and more reliably. 

When does your project need refactoring?

Signals to the customer that it is worth accepting the offer to refactor from the programmer:

  • The project has been going on for a long time, and the requirements have changed frequently
  • The program becomes less efficient: slow, often glitches
  • Mistakes in estimating the deadlines and increased cost of implementing new functionality.
  • The programmers have been changed
  • You are going to use, support, and modify this project for a long time: the time spent on refactoring now will pay off, as further development will go faster and there will be fewer bugs. Spend 50 hours now to save hundreds in the future.

Signals to the customer that they are offering to refactor for the sake of refactoring, and there is no benefit:

  • There are no serious or frequent problems with the program
  • The project has not been going on for a long time and the requirements have not changed.
  • You are not going to tune the project for a long time.

These are the characteristics programmers look for when suggesting refactoring. If you hear any of these, you can be sure that they are offering to refactor for a reason: 

  • Bulky classes (each class has to perform one function of its own);
  • Long methods, spaghetti-like controllers;
  • A large number of parameters in a method;
  • Lack of use of framework functions;
  • Bad naming of variables and functions;
  • Much duplicated code;
  • Lack of documentation.
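To make the smells above concrete, here is a deliberately bad function and its refactored equivalent. The example is a generic illustration in Python, not taken from any particular project:

```python
# Before refactoring: one bulky function with unclear names
# and magic numbers buried in the logic.
def proc(d):
    t = 0
    for i in d:
        t += i["p"] * i["q"]
    if t > 100:
        t = t - t * 0.1
    return t

# After refactoring: descriptive names, named constants,
# and one responsibility per function. Behavior is unchanged.
BULK_DISCOUNT_THRESHOLD = 100
BULK_DISCOUNT_RATE = 0.1

def order_subtotal(items):
    return sum(item["price"] * item["quantity"] for item in items)

def apply_bulk_discount(subtotal):
    if subtotal > BULK_DISCOUNT_THRESHOLD:
        return subtotal * (1 - BULK_DISCOUNT_RATE)
    return subtotal

def order_total(items):
    return apply_bulk_discount(order_subtotal(items))
```

Note that both versions return the same results; the refactored one is simply easier to read, test, and extend.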

But the best thing is, of course, to find programmers you trust 🙂 

When is code refactoring not needed?

  • When the code is so bad that it takes too long to support and implement new functionality
  • When the program is written with technologies so outdated that they are no longer supported. Example: Flash is no longer supported by browsers. Whatever you do, however long you refactor, it won’t work.

What to do in these situations? This:

When a project can’t be saved by code refactoring

Begin writing from scratch 🙂

“Can I skip refactoring altogether?”

When asking this question, first ask yourself: “is it possible to write without mistakes right away?”. The answer is no, because humans are not machines, and there will always be factors that reduce code quality. Refactoring, however, when used correctly, keeps the project code in good condition. It also minimizes the time spent on adding functionality and hiring new developers. This in turn opens the door to customer acquisition and marketing – much more useful than endlessly adding new team members because of missed development deadlines 🙂

Still have questions? Want to know more about refactoring? Send us a message and we will be happy to answer it for you! The feedback form is here. And we also launched Instagram, where we also answer DMs, share company news, project portfolios, and chat with you 🙂


Minimizing latency to less than 1 sec for mass streams

Is it possible to achieve less than a second of latency in a video broadcast? What if the stream goes out to a thousand people? Yes. How? Let’s answer using our project WorldCast Live (WCL) as an example. We did it using WebRTC. WCL streams HD concerts to audiences of hundreds and thousands of people.

Why would I reduce the broadcast latency?

Latency of less than a second is normal in a video conference; otherwise conversation is nearly impossible. For one-way streaming, a latency of 2 or even 20 seconds is fine; TV delay, for example, is about 6 seconds. However, there are cases where you’d want to reduce it to under a second.

  • Sport events

The user is unlikely to be happy when the neighbors shout “GOAL!” while they still see the ball somewhere in the middle of the field. What if the user is also betting live?

  • Interviews

Thanks to the pandemic, not only chats with friends have moved online, but also interviews with celebrities. Take the interviewer’s latency, add the interviewee’s latency, and then add the time it takes for it all to reach the viewer. The higher the total, the worse the experience for everyone.

  • Concerts

The WCL player is embedded in different sites, and concerts are broadcast to all of them simultaneously. If you and a friend watch the same concert on different sites, a noticeable difference in delay spoils the viewing experience, so concert organizers try to minimize latency. Even though a single viewer wouldn’t really care whether they hear a guitar riff now or in 2 seconds, the general trend is toward minimizing latency. The faster the better!

And also…

The examples above can combine. In WCL, viewers and performers talk via video chat, so the latency has to match that of a video chat: we need the minimum possible time between a question and its answer.

How to reach low latency in live streaming?

Use WebRTC

The standard WebRTC stack offers an average latency of 500 ms (half a second). We could have finished the article here: creating a video chat where people connect to each other isn’t difficult. What’s difficult is making it all work stably for thousands of people and customizing the streams to improve their quality. 

Out of the box, WebRTC isn’t built for HD streaming. Its video and audio quality is enough for conversation but not for streaming music. To keep latency low, WebRTC may reduce quality and skip parts of the content. To avoid this, you have to go under the hood of WebRTC, which is what we did.

Set up WebRTC

To make sure low latency doesn’t come at the cost of quality and the end user’s experience, you need extra development. Here’s what we did for WCL so it can broadcast concerts to thousands of people:

  • Enabled multi-channel audio

Standard WebRTC audio is mono: one channel. Stereo is two channels. WCL uses five channels.

  • Upped bitrate

WebRTC’s default settings limit the video bitrate to 500 kb/s. That’s not much when there is a lot of action on screen: with this limit, scenes where bright colors change quickly lose quality as the encoder squeezes them into the channel, and pixelation may appear. Not a great watching experience. Therefore we increased the bitrate to 1.5 Gb to transmit HD video.

  • Increased the sampling rate

This improved the quality of audio and video: low and high frequencies are no longer lost. We can’t disclose exactly what we did, but if you’re interested, let us know!

Scale Kurento

Kurento Media Server is an open-source WebRTC media server. We set up a Master Kurento to stream the video. Up to 500 people connect directly to the Master Kurento and get the stream from it. If there are more viewers, Edge Kurento servers come into play, and the additional viewers connect to them. The more viewers a stream has, the more Edge Kurento servers you need. Together they form a tree-like scheme.
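Assuming the rough 500-viewers-per-server capacity mentioned above, the number of Edge Kurento servers needed for an audience is a simple calculation (an illustration of the tree idea, not part of Kurento’s API):

```python
import math

VIEWERS_PER_SERVER = 500  # approximate per-server capacity assumed above

def edge_servers_needed(total_viewers):
    """Viewers beyond the Master's capacity overflow onto Edge servers,
    each of which serves up to VIEWERS_PER_SERVER viewers."""
    overflow = max(0, total_viewers - VIEWERS_PER_SERVER)
    return math.ceil(overflow / VIEWERS_PER_SERVER)

print(edge_servers_needed(400))   # 0: the Master alone is enough
print(edge_servers_needed(1800))  # 3 Edge servers for the overflow
```

In a deeper tree, Edge servers can in turn feed further Edge servers, so capacity grows multiplicatively with each level.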

When should you settle for latency of more than a second?

When the budget is limited. WebRTC is more expensive than HLS if you need scaling. The WebRTC server for WCL costs $0.17/h, which is $122.4 per month if we take 30 days.

HLS, however, costs $0.023/h and, unlike WebRTC, can be turned on and off. If we take three hour-long concerts a week, we’ll spend a bit less than $0.28 for 12 concerts. Note that there are many server providers and prices vary, but our project gives you an idea of what the math looks like.
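The cost comparison is easy to reproduce (prices as quoted for this project; your provider will differ):

```python
webrtc_hourly = 0.17   # $/hour, the server runs around the clock
hls_hourly = 0.023     # $/hour, billed only while streaming

webrtc_monthly = webrtc_hourly * 24 * 30   # always-on for 30 days
hls_monthly = hls_hourly * 1 * 3 * 4       # 3 one-hour concerts a week, 4 weeks

print(round(webrtc_monthly, 2))  # 122.4
print(round(hls_monthly, 3))     # 0.276, i.e. a bit less than $0.28
```

The gap is so large because WebRTC infrastructure has to stay up continuously, while HLS is billed only for the hours actually streamed.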

The first stable version of sub-second latency on WCL took us 3.5 weeks. If there’s no real need for very low latency, why spend the time and money?


Latency of less than a second is necessary when participants communicate with each other or when the streamed content changes all the time. WebRTC makes it possible even for a thousand viewers, if you are ready to work with the technology.

If your app is about something from here, make sure to take a look at WebRTC. Contact us, too, we’ll help! Contact us via our form or DM on Instagram, which we created not so long ago 🙂


The pain of publishing Electron apps on macOS

Starting with Mojave 10.14, Apple introduced serious changes to app publication concerning signing and notarizing apps. If your application looks bad in Apple’s eyes, end users will see threatening messages on first launch, asking them to delete the app. Not really good for customers, right?

This is the 2nd part of the two-part article on signing and notarizing macOS Electron apps. You can find the 1st part here.

The author is an ElectronJS app developer himself. Around a year ago, he faced the need to notarize his app on Apple’s servers in addition to signing it. Notarization means Apple checks whether the app contains malware.

To notarize, you send the app, wait 10 minutes tops, get your results, and be happy. If you are not familiar with the process, there is an article on our website covering the basic concepts of publishing apps for macOS.

I kept notarizing my apps and getting the Package Approved notification over and over, and was pretty happy about it, until recently, when notarization suddenly stopped succeeding. All right, I thought, let’s just notarize it again. 

In the end, the status showed over 9000 errors: invalid certificates for some files, missing hardened runtime, and even problems with executable files from NPM libraries.

Sounds scary, doesn’t it? That’s right, Apple is making their system tighter, more secure, and doesn’t allow publishing a threatening, from their PoV, application.

I started looking for solutions. In the end, I solved the problems and came to understand Apple’s rules of the game more deeply. Solutions do exist on the internet, but they are scattered far and wide. I’ve spent a considerable amount of time gathering this experience and now want to share it with my fellow developers, so that their Electron apps get notarized faster.

I faced these problems when the app was using ElectronJS 5.0.0. I used electron-packager 13.1.1 to build the app and electron-osx-sign 0.4.15 to sign it.

To sign an app, we use a Developer ID Application: <TeamName> (<UniqueID>) certificate from the Apple site. A certificate like this allows distributing apps through any online service; we use our own Electron release server for that. You will need a different certificate to publish apps to the Mac App Store.

So where’s the pain?

In the errors. I’ve split them into categories and shown the solutions that will save you time. 

What Apple doesn’t like

The binary is not signed. Absence of certificates and timestamps for internal executable files

The notarization log from Apple showed The binary is not signed errors for many internal app files. Every executable file was affected, for instance the bundled FFmpeg and echoprint-codegen libraries, which the app calls externally. Some executable files from NPM libraries were there, too.

The solution here is simple. While signing apps, it’s important to sign all files that Apple doesn’t like with this certificate, not just the .app one.

If you use the codesign utility, you must list those files separated by spaces. If you use the electron-osx-sign library, use the binaries option to pass the paths of the binary files that need to be signed with this certificate.

It’s worth mentioning that this solution also covers the The signature does not include a secure timestamp error, because the certificate adds its own timestamp while signing a file.
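With electron-osx-sign, the fix above boils down to listing those files in the binaries option. A minimal sketch (the app path, identity, and binary paths below are placeholders, not from the original project):

```javascript
// Sketch: collecting electron-osx-sign options so internal executables
// get signed along with the .app bundle. All paths are placeholders.
function buildSignOptions(appPath, extraBinaries) {
  return {
    app: appPath,
    identity: 'Developer ID Application: <TeamName> (<UniqueID>)',
    // Files Apple flags with "The binary is not signed":
    binaries: extraBinaries,
  };
}

const opts = buildSignOptions('dist/MyApp.app', [
  'dist/MyApp.app/Contents/Resources/ffmpeg',
  'dist/MyApp.app/Contents/Resources/echoprint-codegen',
]);
// On macOS, pass the options to the library:
// require('electron-osx-sign')(opts, err => { /* handle result */ });
```

The actual signing call only works on macOS with the certificate in your keychain, which is why the options are built separately here.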

The signature of the binary is invalid. Wrong paths in the general hierarchy of an Electron app

This error is, in my opinion, an Apple mistake. Whenever an .app file is sent for notarization, it is automatically packed into a .zip archive. Because of that, the hierarchy of some paths gets corrupted, and the certificate becomes invalid for some files.

The solution is to pack the file into the .zip archive yourself, using the built-in ditto utility. It’s important to use a special flag, --keepParent, which preserves the hierarchy of all paths during packing. So, this is the command I used for packing:

ditto -c -k --keepParent "<APP_NAME>.app" "<APP_NAME>.zip"

After this, you can send the archive to notarization.

I use electron-notarize for notarization. When notarization succeeds, the library tries to return an archive of the app, so that you can unpack it and swap the original .app file with the notarized one. However, please note that Xcode’s spctl utility can’t work with archives.
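The electron-notarize call looks roughly like this. In this sketch the notarize function is injected so the flow can be exercised anywhere; in a real build script you would pass require('electron-notarize').notarize, and all ids, emails, and paths below are placeholders:

```javascript
// Sketch: the notarization step. The notarize function is injected so the
// flow can run outside macOS; in production, pass
// require('electron-notarize').notarize. Ids and paths are placeholders.
async function notarizeApp(notarize, appPath) {
  const opts = {
    appBundleId: 'com.electron.myapp',        // anything works here
    appPath,                                  // the signed .app bundle
    appleId: 'dev@example.com',               // your Apple ID email
    appleIdPassword: '@keychain:AC_PASSWORD', // app-specific password
  };
  await notarize(opts); // uploads the app and waits for Apple's verdict
  return opts;
}
```

Note the four inputs: the signed .app path, appleId, appleIdPassword, and appBundleId, exactly what the wrapper libraries ask for.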

The executable does not have the hardened runtime enabled.

Now let’s discuss the most difficult and interesting notarization problem. In the latest macOS versions (10.15 Catalina and 11.0 Big Sur), Apple requires that the hardened runtime is enabled to launch apps. It protects the app from malware injections, DLL attacks, and process memory space tampering. Apple wishes to protect their users, therefore this is an important requirement.

To comply with this requirement, use the --options=runtime flag if codesign is used, or set hardenedRuntime: true if you’re going with electron-osx-sign. So, I took this action and found out that enabling this flag completely breaks an Electron app: it either doesn’t launch or fails with a system error.

One way or another, after having spent lots of time googling, I found a set of requirements. Completing them helped me build and sign an Electron app with hardened runtime.

How to build and sign an Electron app?

First of all, forget about Electron v. 5

As sad as it may sound, you have to forget about Electron v. 5 and update to v. 6 at the very least. As the source says, hardened runtime support was introduced in Electron v. 6, so any attempts to enable it in a predecessor are doomed to fail. Besides, if you’re using electron-packager, you must update it to v. 14.0.4 for the apps to be built correctly.

Testers will probably not be happy to hear it, but there’s only so much we can do. Apple sets the rules of the game, and a full regression test of the app with the newer Electron version will be necessary.

Second, create a parameter file entitlements.plist

This practice often comes in handy when creating apps in the Xcode environment, but not so much for Electron apps. Electron builder will automatically include a default file with a standard parameter set. The file is XML that lists system prohibitions and permissions for using different OS resources. This file now has to contain two important parameters.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.cs.allow-dyld-environment-variables</key>
    <true/>
  </dict>
</plist>
The first one, allow-unsigned-executable-memory, allows the app to use unsigned executable memory. The second one, allow-dyld-environment-variables, allows using environment variables. Every JS developer knows what those are, but these are not exactly the same variables we use in the app. This parameter permits the Electron framework to use the environment variables it needs to work correctly, for example, the path to the system library libffmpeg.dylib. If this parameter isn’t in the entitlements.plist file, the app will fail on the first launch with an error saying that library isn’t where it’s supposed to be, despite it actually being there.

Third, activate entitlements.plist correctly

The file must be connected while signing the built Electron app with a certificate. If you’re using codesign directly, add the --entitlements flag followed by the path to the parameter file. However, it might not work right away: codesign may fail with an unrecognized option error for that flag. To deal with it, use the codesign utility not from Xcode Developer Tools but from the full Xcode environment, which ships a more complete version of the utility. Run these commands to do that:

xcode-select -print-path
xcode-select -switch /path/to/SDK

There is a simpler way, too: use electron-osx-sign. However, it’s important to note that you have to pass the parameter file in both the entitlements and entitlements-inherit fields, otherwise nothing will work. That’s because entitlements-inherit is responsible for the parameters applied to the bundled frameworks, and our app actually runs on the Electron framework, which needs environment variables with paths to system files.
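Wired together, the hardened-runtime options for electron-osx-sign look roughly like this (a sketch; the path and identity are placeholders):

```javascript
// Sketch: electron-osx-sign options for a hardened-runtime build.
// Note the SAME entitlements.plist in both fields. Placeholders throughout.
function buildHardenedOptions(appPath) {
  return {
    app: appPath,
    identity: 'Developer ID Application: <TeamName> (<UniqueID>)',
    hardenedRuntime: true,
    entitlements: 'entitlements.plist',           // for the app itself
    'entitlements-inherit': 'entitlements.plist', // for bundled frameworks
  };
}

const hardenedOpts = buildHardenedOptions('dist/MyApp.app');
// On macOS: require('electron-osx-sign')(hardenedOpts, err => { /* ... */ });
```

Forgetting the entitlements-inherit line is exactly what makes the signed app fail on launch, so it is worth double-checking.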

Easier now?

If you develop desktop apps for macOS in Xcode, you likely won’t face these issues: Apple adapts its own environment rapidly. However, Electron apps are out there, too. Their notarization problems can be taken care of, and users can be absolutely happy without ever seeing the warning about potentially dangerous software.

I hope that this article was useful to you. If you have any other questions on the topic, do not hesitate to contact us via the contact form!

We have also launched Instagram, so feel free to check us out there, too 🙂


Publishing desktop apps on macOS

Writing program code always brings joy. You get an instant outcome: you can check it out and see how neat the user experience is.

However, some formalities act as a buzzkill here, and some of them concern publishing an app. Windows and macOS protect their users from malicious software, so publishing anything and everything isn’t possible. The OS must know that the software comes from a trusted developer (signing) and that its code doesn’t threaten the user (notarization). Windows only requires signing, while Apple has been checking notarization since macOS 10.14 Mojave was released.

This article came about after I faced some troubles while publishing an ElectronJS app for macOS, and I would like to share my experience. We will begin with some background. If you’ve already developed and published ElectronJS apps, feel free to skip this part, and we will see you when the more interesting one comes out in a week or two 🙂

A short introduction to ElectronJS

ElectronJS is a tool for easily creating a cross-platform app that launches on all popular operating systems. The library uses the Chrome V8 engine and emulates a separate browser window, where you can run your HTML code, written with ReactJS for example. Electron uses its own backend to interact with the operating system, for example, to show system windows. Communication between what the user sees on the screen and the Electron backend happens through events.

Building an application is also fairly simple. Electron companion libraries are in charge here, such as electron-builder and electron-packager. As an outcome, we get a ready binary file: .exe for Windows, .app for macOS, or an executable binary for Linux.

You wrote and built an app. Ready to publish and send to the customer?

Unfortunately, not. For users to launch the app, a developer certificate is necessary. This is a so-called digital signature of the app: it protects the program by stating who the author is. Apps with digital signatures are verified and look less suspicious to the system, antivirus programs, and firewalls. Software like that rarely ends up in quarantine.

If the app isn’t signed, macOS will politely ask to move the file to the bin.

No app signature on macOS

Windows will notify the user that the developer is unknown and you’re at risk.

Do not scare your users with these messages; better go and get that developer certificate! Add it to the keychain of the OS you are signing for, then use the certificate to sign the app. Xcode Developer Tools on macOS provide codesign for that. The electron-builder and electron-packager libraries supply their own wrappers to sign an app: you just let them know that you want to sign the app with that particular name after the build is completed. On top of that, macOS has one more way to do it: the separate wrapper library electron-osx-sign.

Got the certificate, signed the app. Anything else?

Yep. Notarizing is the next step. It means sending the code to Apple’s servers so they can verify there is no malware in it. Otherwise, the user will see this upon the first launch:

An example of an “unpleasant” message on the first launch of an Electron app on macOS

To send an app for notarization, an Apple ID is necessary: the email you log in with and an app-specific Apple ID password.

Xcode provides the altool utility for notarization. Unfortunately, it’s not in the Xcode Developer Tools pack, so we have to install the full Xcode. That moment when you don’t use Xcode, like, ever, but you need it, so you kiss goodbye to around 30 GB of disk space 🙂

We send the application using the command:

xcrun altool --notarize-app --primary-bundle-id "<id>" -u "<appleid>" -p "<app-specific password>" --file "<APP_NAME>.zip"

Anything works as primary-bundle-id; it doesn’t affect notarization at all. For instance, use com.electron.<appName>.

Notarization takes about 5-10 minutes. Apple will give you a unique RequestUUID. Use it to check on your notarization status. For that, use the command:

xcrun altool --notarization-info <RequestUUID> -u "<appleid>" -p "<app-specific password>"

The whole history can be checked with:

xcrun altool --notarization-history 0 -u "<appleid>" -p "<app-specific password>"

Electron developers have expanded the electron-builder and electron-packager libraries by adding a notarization step into the general workflow. There is also a separate wrapper library, electron-notarize, which does exactly the same thing. Basically, you need four things ready: a built and signed .app application, appleId, appleIdPassword, and appBundleId.

If the notarization tool doesn’t stumble upon anything bad in the app, your status will turn to Package Approved with status code 0. Otherwise, Apple will give you the Package Invalid status and status code 2. Fear not, friends, as every “bad” notarization has a log attached. All problems are listed there, along with the paths of the files that stopped you from publishing right away.

If notarization is successful, congratulations! You’re ready to release your app.

When you have that last piece of the jigsaw, everything will, I hope, be clear (Albus Dumbledore)

When you start developing a desktop app on ElectronJS, think in advance about the certificate to sign the app with. Don’t forget about notarization, either. If you skip these parts, the potential number of users will drop drastically because of conflicts with the operating system.

The 2nd part of this text will be released soon, and it will touch on the problems one might face when notarizing an app on macOS. See you then!


How to make the work atmosphere friendlier?

There is no single right answer to that question. However, the Fora Soft team is willing to share the knowledge we have been nurturing for several years.

We also provide some COVID-era solutions, so make sure to check them out, too!

Make communication between employees less formal

When there are no barriers that formal communication brings, it’s easier for your team members to speak up about their problems and share their opinions during Scrum meetings.

We tend to demolish those barriers with the help of monthly office parties with a ton of pizza involved. Formality becomes a difficult concept to maintain when you are sitting at the same table consuming that saucy pizza 🙂

Doesn’t look too formal, does it? 🙂

Fun fact: it’s easier for people to start a dialog after they’ve established visual contact, hence our choice of food. To get a bite of that delicious thing, you just have to hold the slice near your face. Thus, we not only have a great time eating pizza during the working day but also improve the working atmosphere.

In COVID times, we meet in Zoom instead, but weekly, not monthly. 

Make your workspaces comfortable

Place workstations so that there is a distance of 1.5-2 meters between employees. That way the coworkers will find themselves within each other’s social distance. Read more about proxemics, the study of human use of space, on Wikipedia.

Comfortable distance between workstations allows employees to be open for discussion while still having enough private space around.

There is no one correct way to set up devices; it all comes down to personal preference. Your task as a manager is to give your team furniture they can easily adjust.

These happy faces speak for themselves!

In COVID times, we work remotely. Each project team meets in a video call daily to compensate for the lack of communication. Effectiveness has even improved: we now talk more.

Lower distance between the boss and the team

In some companies, the boss doesn’t get enough trust and may even be feared by employees rather than respected. To make the relationship less formal, our CEO walks around the office and greets each and every employee face-to-face.

He can do this all day

We also have an Open Door principle: if the CEO’s door is open, you can come into his office and chat with him about life, ask some questions, or suggest a way to improve the processes in the company.

In COVID times, it’s Open Skype principle instead 🙂 (poor CEO)

Be clear about the company’s / project’s aims

Every member of a project team needs to know where the project or the company is going. It helps them plan their development and stay updated on future prospects.

At Fora Soft, project members discuss upcoming features, their feasibility, and possible improvements together in a comfortable Skype conference room. During COVID, we do that in daily video meetings.

Legends are born here

The company’s aims for the year are announced at the New Year’s party, and monthly objectives hang in the dining room, which is, no surprise here, the most visited venue in our office. During COVID, the CEO shares the company’s aims at Thursday general company video meetings on Zoom. We post the recordings on Instagram, so catch a moment to see 🙂 They are in Russian, but you can feel the atmosphere.

That’s all, folks! Feel free to use our methods to establish a great working relationship with your colleagues! If you have any other questions on the topic or just wish to share your secrets, don’t hesitate to contact us via the contact form!


Why cut features and launch the product early or what is MVP?

Many of our customers come up to us asking if we could make an MVP for them. Yep. Even big corporate clients want what used to be a startup fad — and what is now an industry standard.

But what is an MVP? The abbreviation, standing for Minimum Viable Product, implies you hit the market before you’re done with all the features. Why is it that popular then? Is it a fancy way of saying “whatever works is fine”, a bargain solution for low-budget projects… or a misused football term after all?

What is an MVP?

Speaking of minimum viable products, let’s start by settling on viability criteria. For a piece of software, the definition of viability may lie anywhere between “not crashing on launch” and “being able to compete with the market leaders”.

Within the Lean Production Methodology, where the MVP concept originated, V stands for “bringing value to the user”. That’s why they often read MVP as Minimal Valuable Product.

The MVP is the right-sized product for your company and your customer. It is big enough to cause adoption, satisfaction, and sales, but not so big as to be bloated and risky. Technically, it is the product with maximum ROI divided by risk. The MVP is determined by revenue-weighting major features across your most relevant customers, not aggregating all requests for all features from all customers

Frank Robinson, the author of the term

So, an MVP is basically a killer feature plus the simplest buttons and handles one might need to make use of it. You cut all the bells and whistles off your concept, strip it of fancy design extras and anything that is not crucial. As simple as that.

Why would you go for an MVP?

Here’s an example. Imagine you’re a fan of a particular sport — let’s say, boxing. And you want to become the new Muhammad Ali. So, you start training with all the passion and dedication and whatever else you may find in movies about Rocky Balboa. The question is: when will you take on your first fight?

Option one. You train for ages until you feel like you are two hundred percent fit and ready.

Option two. You get some pretty basic training and jump into the ring as soon as the coach is sure you’ll make it to the last round without kicking the bucket. As soon as you are viable for the ring.

Option one is tempting: if things go well, you’ll plow through the underdogs and face the big guys without losing your breath. Option two appears hurtful: if things go well, you will win. But should you win or should you lose, you won’t leave without a bruise.

However, there is a but. “If things go well” is the critical part in both scenarios. Once you enter the ring, reality comes at you fast. What if you figure out you missed something in your training? What if those in the ring are still trained better? What if the blows you receive hurt too much (shocker!)? Or – why not – what if what you deemed boxing and your passion was actually wrestling, so all your great punching skills are totally inapplicable? You’d better know that in three months than in ten years after you put your gloves on for the first time.

So, MVP is a reality check done the quickest and the cheapest way. It’s not actually cutting functions, but ensuring they are needed before you spend time and money on those.


The MVP-centric approach grows more popular as marketing skills become commonplace for entrepreneurs, no matter their sphere. TTM is the reason. TTM, or time to market, is one of the key metrics for a new product. Time is always money, but when the market-launch countdown is ticking, every minute costs you cash in many ways.

  • You want to pay for what others will pay for.

With all the research and insight behind it, the great idea your new product is based on is still a hypothesis until proven. It’s not like you already know the market craves your product; you assume it does. There is no other way to turn this hypothesis into a fact but to check it on real customers. The earlier you hit the market, the quicker you get feedback on what would make your product more desirable.

  • Your product might be lucrative, but before it is on the market, it earns nothing.

With all the variety of monetization models available today, your product may be able to generate revenue long before it’s done and ready. Think of World of Tanks, the videogame that earned billions while being still in beta, or of all the mobile apps that are paying back the investments made. Ad-supported or subscription-based, bought in a one-time purchase or enhanced via microtransactions, your product might add some green lines to your bank account report as soon as it can deliver any value to your clients.

And yes, while the scope is small yet, you are safer in your experimenting with monetization models per se. No matter what, your missed revenue numbers will be the least frustrating.

Think of almost every successful software product. Yes, even the hardcore enterprise one would work. They never start as a one-size-fits-all solution, but as a demanded yet simple tool designed to perform a specific task.

  • Take a turn before you have to brake

Developing a complicated product before learning you have to adjust it to the market is not only about losing money on features that turn out to be undemanded. The more has been done before a pivot, the more has to be redone to perform one. You only have to remove seats and add panels to make a cargo van from a microbus, but should you try turning a Corvette into one, you’ll end up rebuilding it from scratch.

OK, are there any real-life MVP cases?

Telegram grew popular before it got the channels, stickers, secure calls, and bots. In the beginning, all it had was a boringly simple messenger with awesome encryption. It could do less than competitors, but it could keep your communication secure. MVP? Betcha.

SAP, the first ERP software coming to mind, started as a pretty simple accounting system that didn’t sport even 10% of what it has now. The term MVP hadn’t even been coined by that time, but in fact, that’s what it was: a basic solution offering a new approach within quite a limited scope.

Zappos, the iconic mid-2000s apparel e-store, began as a guy buying footwear on your demand from regular stores and sending it by mail. As simple as that.

Moreover, an MVP may prove an idea before you spend zillions on it, and can even become a business before you expect it to!

Dota and Counter-Strike, as lucrative as they are now, began as community mods for popular games (Warcraft III and Half-Life, respectively). No ginormous teams of developers, designers, and community managers, no weekly content updates or events: they offered the very basic setup needed for a new gameplay pattern. A Minimally Playable Product.


There is a mnemonic rule designed by Eric Ries, the author of the famous “The Lean Startup”: you take your first idea for an MVP, cut it in half, then cut it in half again, and there you’ve got what actually is an MVP.

For those hesitant about cutting, here’s another, not-that-brutal rule of thumb we nurture here at Fora Soft.
You have to ask yourself three questions.

  • What is your favorite part of your big idea?
  • What do you need to use it?
  • If you had only 30% of your budget, what would you make first?

You may even push your imaginary budget threshold lower than that, as long as what fits it retains the uniqueness of your product and the very basic usability. Once you understand that you can’t push it down anymore, you’ve got the MVP.

If it still feels like a complicated task, and it may, as you definitely love your idea, you can practice by MVP-izing existing products. Try to think about what makes Instagram, Google Docs, or your favorite game what they are. Take a shot at imagining they didn’t have all the features they boast now. Where is the border between “I’d go without this feature” and “Oh, without it the product is useless”?


Once you develop a vision of your product’s MVP, you start revving your engine. Time is money, so you want your Minimum Viable Precious right away.

Taking your first step towards the MVP on your own is where you might run into a dilemma. On one hand, coding or testing in a rush never results in good code. On the other hand, the MVP stage is somewhat forgiving of code quality: if it works, it works.

It’s up to you to find the balance, but the golden rule is: no matter how much duct tape there is under the hood, the user experience for the killer feature should be streamlined and lovely. Whatever is buggy and irrelevant to the key functions should be demolished mercilessly. Whatever is crucial and buggy should be a priority.


Well, everyone remembers the copycat cases. It happened to the MSQRD app, which had its minute in the limelight before every competitor got their own masks. Or Snapchat with its fading posts, which turned out to be a fading fad (pun intended).

That’s the concern faced by everyone taking (or suggesting) the MVP path. If there is a killer feature, isn’t delivering a quick-and-basic product the simplest way to unintentionally hand it to bigger competitors, who’d wrap it up in glossy paper and grab your market in no time?

The short answer is yes, it is. If your killer feature is that great, all the big companies are going to copy it, or release their own renditions already in development. But the first one to release is the first one to win the audience, and it may well retain it and enter the top league (like Zoom or Miro, the COVID-era superstars whose leadership is yet to be shattered).

And even if the blue chips roll out their own, more refined solutions… Remember what happened to MSQRD? Facebook bought it, for a much bigger sum than was invested in its development.


So, to sum up, you go for an MVP:

  • To check your idea and be able to refine it before there’s too much money spent;
  • To make the audience like your product before it is even totally complete;
  • To start earning before you’re done spending;
  • To make your product become a synonym for its killer feature.

Can Fora Soft help you with an MVP?

Most certainly! At the MVP stage, it’s not just skill but experience that matters. A developer with a massive background in a certain area (media processing, streaming, and low-latency solutions in our case) is a shortcut to an MVP. Our developers, absolute pros, cut the time to market, as they already know the solutions to typical time-guzzling problems. Please feel free to contact us via the contact form, and we will get back to you right away!


How to report on testing

The article is based on “How Is the Testing Going?” by Michael Bolton.

Imagine that a project manager has approached you with the question: “How is the testing going?” In this article, we will tell you how and what to answer.

An inexperienced tester will dive into numbers straight away, and the answer will sound like this: “Why, everything’s cool. I’ve completed 234 test cases out of 500. 16 automated tests out of 100 have failed.” This is a bad answer. Dry numbers with no context whatsoever do not reflect the state of the product and are thus useless. They do not help the manager decide what to do next and how the team should proceed.

An experienced tester provides the team with useful information that helps assess risks correctly and set priorities.

How to present information?

At Fora Soft, we present the useful information in this order:

  1. Explain the state of the product. What serious problems we’ve encountered, why they are serious, and how they can affect our customers. This information helps the team understand what to deal with first
  2. Explain how the testing is going. What still needs testing, what has already been tested, what we will not test and why is that. It’s important to mention how the testing was being done, what environment was used and why. This data is mandatory to assess the risk of receiving problems in the untested product areas and correct the testing plan, should the need arise
  3. Explain why we test the way we do it. Why the tests we’ve chosen are more effective than those we haven’t. When time and resources are limited, it’s crucial that we choose the right testing areas and sort out priorities
  4. Let the team know about the problems we’ve encountered during testing. Namely, what makes testing harder, what may cause us to miss bugs, what could help us make testing faster and simpler. If your team knows about your problems, they can help you

The tester’s main task is to find problems that endanger the product’s value and report them to the project manager and the team. Providing this information in a timely manner allows the team to create a high-quality product without missing deadlines.

To catch any problem that endangers the product’s value quickly, we use test plans and test strategies. Stay tuned to find out how the plan and strategy are created!

Do you want to learn more about our processes and how we do things? Do not hesitate to contact us via our contact form! It is right here 🙂