In-App Purchase in iOS apps: how to avoid the 30% App Store commission

There are many ways to monetize an application. What affects your choice are the aims and specifics of your application and the market it was made for. One of those methods is organizing purchases within the app. From this text, you will find out how iOS organizes the process, what Apple and their competitors provide you with, and why you sometimes have no choice.


In-App Purchases

This simple, easy-to-use mechanism was developed by Apple to help organize sales of apps or of additional features within them. Apple takes a 30% fee from every purchase made with In-App Purchases.

There are three types of In-App Purchases:

  • Consumable

This purchase can be made multiple times. For example, lives or energy in games.

  • Non-consumable

This purchase can only be made once. For example, a character in a game or a movie in an online theater.

  • Subscriptions (auto-renewable and non-renewable)

A payment that unlocks your app’s functions for a limited period of time. Auto-renewable subscriptions charge users automatically at the end of each paid period. Non-renewable subscriptions have to be renewed manually by the user to keep working. iTunes is an example of that.

A few other payment systems

Stripe is an American company that develops solutions for accepting and processing electronic payments. Stripe allows users to integrate payment processing into their apps without a need to register a merchant account.

Stripe takes 2.9% + 30 cents from each successful transaction.

PayPal is one of the largest digital payment platforms. PayPal users are able to pay bills, make purchases, and send and receive money transfers.

PayPal takes a commission fee of 2.9% to 3.9% per transaction. The exact amount depends on your sales figures and whether you trade domestically or internationally.
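To see what the difference in fee structures means per sale, here is a quick sketch using the figures quoted above (Apple’s flat 30% vs. Stripe’s 2.9% + 30 cents):

```typescript
// Net revenue per sale under the fee structures quoted above.
// All figures in USD, rounded to whole cents.

function roundCents(x: number): number {
  return Math.round(x * 100) / 100;
}

// Apple's In-App Purchases: a flat 30% of the price.
function netAfterApple(price: number): number {
  return roundCents(price * 0.7);
}

// Stripe: 2.9% + $0.30 per successful transaction.
function netAfterStripe(price: number): number {
  return roundCents(price - (price * 0.029 + 0.3));
}

console.log(netAfterApple(9.99));  // 6.99
console.log(netAfterStripe(9.99)); // 9.4
```

On a $9.99 purchase, the gap is about $2.41 per sale – which is why the choice of payment system matters so much.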

Do I need In-App Purchases?

Apple charges a lot in comparison to its competitors. Going for Stripe or PayPal might look like a no-brainer, but it’s not so simple. When you develop an iOS application, you face multiple requirements from Apple. One of them prohibits you from selling certain goods through anything other than In-App Purchases.

All digital and virtual goods and services must be paid for via In-App Purchases. Therefore, owners of entertainment apps and online movie theaters, digital content sellers, and others must use In-App Purchases.

On the other hand, if you’ve created a mobile app for your online store, tour agency, or air ticket office, the outcome of the deal between you and your buyer is a physical item or a physical document that proves your right to use the service. In that case, you can use an external payment system and get your money fast, avoiding being ripped off by the App Store.


A mobile or a web app when the budget is limited to one platform?

A mobile app or a web app? If you want to grow, attract new users, and retain old ones, you will have to do both. All major video service providers have both mobile and web applications. Look no further than YouTube, Zoom, Instagram, TikTok, or Skype.

However, development costs money, and there isn’t always enough for every option. What to do, and where to start? It’s difficult to answer these questions without further information. It all comes down to what you want to do and what your development plan is. In this article, we will explain how to choose a platform and provide you with relevant statistics.

Let’s first take a look at the advantages of both options. Perhaps, it will already be enough for you to make a choice.

Advantages of a web app

  • Availability. A user doesn’t have to download an app from the App Store or Google Play. It’s enough to follow a link or simply google the website.
  • Quick updates. Changes to a website reach users directly.
  • A big computer screen allows you to show more information.
  • You can choose a payment system. If you have a mobile app, you will have to pay Google Play or the App Store a 30% commission fee. On a website, you are free to choose any payment system you like, and the commission is about ten times lower – 3-4%.
  • You can create a mobile website version. It will work on both iOS and Android and cost much less than two native applications. We will compare mobile sites and apps later.

Advantages of a mobile app

  • Convenience. A cellphone is almost always with a user, and they can use an app any time they want.
  • Offline mode. Although the vast majority of apps need the internet to work correctly, some developers provide an offline mode. That’s impossible with a website.
  • Push notifications. You can launch a promotion, send an advertisement, or just remind inactive users about the app. SocialMediaToday’s research shows that push notifications are more effective than sending out emails or SMS.

The opportunity to create a mobile website version is a serious advantage when your budget is limited: it lets you save on separate iOS and Android apps. If you consider doing that, though, weigh what you gain against what you lose.

We keep all the website advantages we had before, plus a user can now use it anywhere they have a phone and cell service. It’s also worth noting that the same development team can create the mobile version of a website. Mobile apps still have some advantages, though:

  • Speed. They launch and work faster. According to research conducted by Kissmetrics, 40% of users leave a website if it takes more than 3 seconds to load. It’s crucial because, according to the very same research, a loading delay of one second can lower conversion by 7%. That means that if your website makes $100k a day, you risk losing $2.5 million a year.
  • Expanded functionality. You are free to use GPS, Bluetooth, camera, and all the platform functions in the app. It can also interact with other apps, integrate with social networks, etc.
  • Mobile apps are used more. According to research by eMarketer, cellphone users spend 90% of their time in apps and only 10% on websites.

What do I start with?

Although we’ve taken a look at the advantages of mobile apps and web apps, it’s still unclear what to go with first if the budget is limited. There’s no one-size-fits-all solution; it depends on your product type. Let’s build an algorithm using popular multimedia apps as examples.

Target audience

Are you creating an app for business people or just regular users? Your decision might differ based on your answer. As an example, let’s take YouTube and Zoom.

  • YouTube is a service for regular people. According to official statistics, it’s used by more than 2 billion people monthly, and 70% of them do that via the mobile app. It’s understandable: who doesn’t watch YouTube nowadays? People go there on their way to work and back home, on public transport, in queues, in traffic jams. The mobile app is a go-to thing for YouTube because access from anywhere is essential.
  • Zoom is a video conference service. It’s designed mostly for meetings and business calls; however, no one prohibits you from calling your mom on Zoom. But it’s this planned focus on conferences that made Zoom more popular on desktop computers. Judging by the official statistics, only about 10% of all registered users join meetings from their cellphones.

The conclusion is simple. If you expect your service to be used on a daily basis, and you want it to be accessible anywhere (Instagram, TikTok, WhatsApp), choose a mobile app. If you pursue other goals, such as online conferences (Zoom, Google Meet), a web version is your pick. Your partner or employee won’t be using the service all the time – only during meetings.

Monetization opportunities 

Do you want to sell subscriptions or goods and services? When you sell digital content inside a mobile app, Google Play or the App Store will take a 30% commission fee. On a website, the fee is far less significant – just 3-4%. It matters because when you are starting out, you count every cent.

Also, according to Atrium research, people are ready to spend more on a website rather than on mobile apps. That means that if you sell expensive subscriptions, goods, or services, it’s more likely that a user will buy it on a website.

On the other hand, Jmango’s research shows that the conversion rate (the share of visitors who actually make a purchase) is higher in mobile apps.

Android vs. iOS

What to do if you need to create an app, and there is only enough budget for one of those? Let’s turn to the statistics.

According to DeviceAtlas’s research, there are more Android phones out there. But there are regions where the difference is not that substantial, and there are other regions where iOS devices prevail. The takeaway is that the choice of platform should be driven by the market the application is aimed at.

For example, in Argentina, Egypt, Brazil, India, and Indonesia, iOS devices aren’t popular at all. In the States, Great Britain, Sweden, and Thailand, the iPhone competes hard with Android phones and sometimes even comes out on top.

BusinessOfApps also reports that those who own Apple devices pay twice as much in apps as those with Android. Although there are fewer iOS devices, they are more expensive and tend to be used by more solvent people.

DeviceAtlas statistics also show that iOS devices are popular in regions with a higher quality of life. You can see the region statistics for 2018 down below (blue is for iOS, green is for Android)

By the way, the same holds for the mobile games industry – iOS users pay more. Most paying gamers come from countries with a high quality of life (the States, China, Japan), and 48% of the market comes from America and China alone.

If you worry about your target audience age, don’t. Comscore reports that there is no difference in terms of age, so there is no point in diving deeper into this.

Taking everything into account, it’s safe to say that iOS is the better choice for a single app: an iOS application will bring you more money. It’s worth mentioning, though, that while Apple earns more in general, the picture can change drastically in some countries. So, if you are targeting Europe, go with iOS. However, if the app is meant for a particular country or city, gather more information first, so you don’t have to kick yourself afterward.


A successful service should provide both mobile and web apps. Different platforms have different advantages and disadvantages and can attract different users. The choice is yours, and we can help you with it.

If you are not sure as to what platform to choose, feel free to contact us, and we will do our best to help you out!


Why Russian software engineers are ranked #1 in Coursera’s research

Russia is on top of the IT world, while the country’s programmers are unparalleled when it comes to technology and data science.

Coursera, an American e-learning company, has released the second edition of their Global Skills Index 2020. The index consists of three main fields, and Russia has been named cutting-edge in two of the three: Technology and Data Science, while not only being ranked #1 in Europe but also in the entire world. The third field is Business, where Russia is #8 in Europe and #9 in the world.

So what is it that puts Russia so high at the top of the IT world, where the competition is fiercer nowadays than it has ever been?

To create the ranking, Coursera has used five components: their skill graph, skill score for country/industry/role, trending skills, correlations with third-party data, and top field of study & roles per selected skills.

First, let’s dive into the rankings themselves. We’ll first take a look at the Technology and Data Science Fields, and then check out the authors’ reasoning behind the Index.


Technology

In Technology, Russia tops Belarus, Switzerland, Ukraine, and Finland – these countries occupy places 2 through 5.

The technology field consists of six subfields: Computer Networking, Databases, Human-Computer Interaction, Operating Systems, Software Engineering, Security Engineering. 

Russia was able to hit 100% in Databases, Operating Systems, and Software Engineering, and 98% in Security Engineering.

Data Science

In Data Science, Russia tops Switzerland, Belgium, Austria, and Finland – these countries occupy places 2 through 5.

The Data Science field consists of six subfields: Data Management, Data Visualization, Machine Learning, Math, Statistical Programming, and Statistics.

Russia got 100% in Statistical Programming, Math, and Data Management, as well as 98% in Machine Learning and Statistics.

Coursera’s comments

All of Russia’s tech and data science competencies are categorized as cutting-edge or competitive, with the Index showing that the country is unparalleled in software engineering, statistical programming, operating system, database, and data management skills.

They also went on and mentioned that Russia has been outperforming China and the US at international programming contests. 

Another important thing is that informatics is being taught as a compulsory subject in middle school. The skills students get then are later developed in higher education. For instance, Russia’s Higher School of Economics has announced the first top tier online master’s program in data science.

Coursera has also addressed Russia’s relatively low position on the Business Index, compared to the other two. They have, however, noted that the situation is looking to change as the Moscow and St. Petersburg’s startup scenes are constantly growing.

IT students from St. Petersburg win more contests than anyone else

We would like to add to Coursera’s research that universities from St. Petersburg, the city Fora Soft is located in, win the International Collegiate Programming Contest more often than not just any other city in the world, but any other country. Students from St. Petersburg have won the contest 11 times out of 27. The first two places are split between the St. Petersburg Institute of Fine Mechanics and Optics and Saint Petersburg State University, with 7 and 4 victories respectively. Many graduates of these schools end up working here, at Fora Soft!

The International Collegiate Programming Contest is a competition between IT-specialized universities. 50,000 students from 3,000 universities in 111 countries have taken part in the contest. Overall, Russian students have won 13 of the 27 contests held, and the country has been on a winning streak since 2012.

The results can be easily seen on the Contest’s Wikipedia page. If you want to dive deeper into the information about the winners, please, visit the official website of the contest.

This fact correlates well with Coursera’s mention of how many engineering graduates are produced by Russia – about 450,000, which is more than in any other country in the world.

We at Fora Soft are proud to announce that even in the region with such high competition, we are still the best! Check out our report on the top B2B development companies in Russia.

If you want to learn more about our know-how or order a project from us, feel free to get in touch with us via the Contact us form!


Clutch named Fora Soft a top Russian B2B development company

At Fora Soft, we really know how to develop multimedia software. We’ve been doing it for years and amassed lots of experience! Our work has found appreciation in the latest Clutch rating.

Clutch is an American rating platform. They release geography-based top-company ratings every year, and this time we made it there. Fora Soft is in 15th place out of 94 in the Development category.

Cool! Thanks to our clients – for the opportunities and for the kind references. – Nikolay Sapunov, Fora Soft CEO

To create the rating, Clutch took different things into consideration. For example, in order to determine the level of industry expertise, the following criteria are looked at:

  • Case studies
  • Awards that companies received
  • In-depth phone interviews with clients
  • Services offered
  • Social media presence

Fora Soft is also able to deliver a high-quality product. To come to this conclusion, the rating platform checks how companies perform against the criteria mentioned below:

  • Brand reputation and visibility
  • Clients that a company works with
  • Services offered by a company
  • Reviews on Clutch

By the way, Fora Soft currently has 5 stars out of 5 from 9 reviews! Among many things, our clients highlighted how we met their expectations, our professionalism in development, clear and fast communication, and the fact that we always finished our projects on time.

The most focused company

We at Fora Soft know how to deliver high-quality services. One of the reasons is that we have been creating multimedia and video software for a long time – 15 years. Over this time, we’ve gained a lot of experience in the field, and we’ve never strayed from video and multimedia projects.

To check out the variety of the products in our niche that we have created, look no further than our portfolio!

On top of that, Clutch has put us into their Top Russian Custom Software Developers matrix, which consists of the top 15 leaders in the custom software development field in Russia. Fora Soft has been recognized as “the most focused” company in the region.

To learn more about us, check out our Clutch profile.

If you want us to estimate your project or learn tips and tricks that we use in the video & multimedia software development process, feel free to get in touch with us via the Contact us form!


What is WebRTC? Explanation in plain language

The majority of WebRTC-related material covers the application level of code writing and doesn’t help you understand the technology itself. Let’s dive deeper into the topic and find out how the connection is established, why we need TURN and STUN servers, and what session descriptors and candidates are.

What is WebRTC for?

WebRTC is a browser-oriented technology that allows two clients to connect and transmit video data. Built-in browser support (no external technologies such as Adobe Flash are needed) and the ability to connect clients without any additional servers (a p2p connection) are its main distinctive features.

Establishing a p2p connection is complicated because computers don’t always have public IPs (their internet addresses). Due to the shortage of IPv4 addresses, and for security reasons, NAT was invented. It allows creating private networks, for instance, for home use. Many home routers support NAT, so all devices connected to the router have internet access, although the provider usually issues just one IP address. Public IPs are unique, whereas private ones aren’t, hence p2p connection is difficult.

To better understand the concept, let’s take a look at three scenarios:

  1. Both nodes are within the same network 
  2. Both nodes are within different networks (private and public) 
  3. Both nodes are within different private networks with the same IPs

The first letter in the images above represents a node type: r for a router, p for a peer.

  1. Image one shows the nicest case. Nodes within the same network are identified by their network IP addresses and can connect to each other directly. 
  2. Image two shows two different networks with similarly arranged nodes. Here we introduce routers, which have two network interfaces: one inside and one outside their network. Hence, they have two IPs. Usually, nodes have only one interface, which they use to interact within their network; when they transmit data outside their network, they only do it through the NAT inside a router. That’s why, to the outside world, these nodes appear under the router’s IP address – it’s their external IP. Therefore, the p1 node has an external IP ( and an internal one (, with the first address also being external for all other nodes within the network. The p2 node is in similar circumstances, so their connection is impossible as long as only their internal addresses are used. Going with external IPs alone is possible but poses a challenge, since all nodes within the same private network share the same external address. NAT solves this problem.
  3. What happens if we decide to connect the nodes via their internal addresses? The data won’t leave the network. To magnify the effect, imagine the situation from the third image, where both nodes have the same internal addresses. If they use those addresses to communicate with each other, each will be communicating with itself.

Here is where WebRTC steps in. To solve these problems, WebRTC uses the ICE protocol, which requires additional STUN and TURN servers.
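In the browser, the STUN and TURN servers are handed to WebRTC in the iceServers list when the peer connection is created. A minimal sketch – the server URLs and credentials below are placeholders, not real endpoints:

```typescript
// Hypothetical STUN/TURN endpoints – replace them with your own servers.
const iceServers = [
  { urls: "stun:stun.example.com:3478" },
  {
    urls: "turn:turn.example.com:3478",
    username: "demo-user",      // credentials issued by your TURN server
    credential: "demo-password",
  },
];

// The browser tries direct (host) candidates first, then addresses
// discovered via STUN, and falls back to relaying through TURN.
function createPeer() {
  // RTCPeerConnection is a browser global (the WebRTC API).
  return new (globalThis as any).RTCPeerConnection({ iceServers });
}
```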

The two phases of WebRTC

In order to connect two nodes with the WebRTC protocol (or just RTC if two iPhones are involved), it’s necessary to complete some preliminary steps to establish a connection. That’s the first phase. The second phase is video data transmission.

Although WebRTC uses several means of communication (TCP and UDP) and can flexibly switch between them, this technology does not come with a protocol for transmitting connection data. That’s not surprising, as connecting two p2p nodes isn’t a simple task. Therefore, we need an additional data transmission channel, not related to WebRTC. It can be HTTP, socket transmission, or even SMTP. This way of sending the initial data is the signaling mechanism. Not much information is transmitted. The data is transmitted as text and is split into two categories: SDP and Ice Candidate (you can also read about them here). SDP is used to establish a logical connection, Ice Candidate – a physical one.

It’s important to remember that WebRTC only gives you the information that needs to be passed on to the other node. As soon as we transmit the necessary information, the nodes will be able to connect, and our help won’t be needed anymore. Therefore, the signaling mechanism, which we need to build separately, is used only upon connection, not during video data transmission.

So, let’s take a look at the first phase. It consists of several steps. First, let’s look at it as for the connection initiating node, and then as for the connection receiving node.

  • Initiator (caller):
  1. Receiving a local media stream and establishing its transmission (getUserMediaStream)
  2. Creating an offer to begin video data transmission (createOffer)
  3. Receiving an own SDP object and sending it via the signaling mechanism (SDP)
  4. Receiving own Ice candidate objects and sending them via the signaling mechanism (Ice candidate)
  5. Receiving a remote media stream and showing it on the screen (onAddStream)
  • Receiver (callee):
  1. Receiving a local media stream and establishing its transmission (getUserMediaStream)
  2. Receiving an offer to begin video data transmission and creating an answer (createAnswer)
  3. Receiving an own SDP object and sending it via the signaling mechanism (SDP)
  4. Receiving own Ice candidate objects and sending them via the signaling mechanism (Ice candidate)
  5. Receiving a remote media stream and showing it on the screen (onAddStream)

Only the 2nd step is different.

However complicated these steps might seem, there are in fact just three of them: sending a local media stream (step 1), establishing the connection parameters (steps 2-4), and receiving a remote media stream (step 5). The second step is the most difficult one, as it consists of two parts: we need to establish the logical and the physical connection. The latter shows the path packets follow to get from one node to the other; the former specifies video and audio parameters – which quality and codecs to use.
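The initiator’s steps map almost one-to-one onto the browser API. A minimal sketch, assuming a sendToPeer() signaling function and a showRemoteVideo() helper – both hypothetical names you would implement yourself:

```typescript
// Sketch of the caller's side. sendToPeer() and showRemoteVideo() are
// hypothetical placeholders for your own signaling and UI code.
declare function sendToPeer(type: string, payload: unknown): void;
declare function showRemoteVideo(stream: unknown): void;

async function startCall(): Promise<void> {
  const g = globalThis as any; // browser globals (WebRTC API)
  const pc = new g.RTCPeerConnection();

  // Step 1: get the local media stream and attach it for transmission.
  const local = await g.navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  for (const track of local.getTracks()) pc.addTrack(track, local);

  // Step 5: when the remote media stream arrives, show it on the screen.
  pc.ontrack = (e: any) => showRemoteVideo(e.streams[0]);

  // Step 4: hand every Ice candidate object to the signaling mechanism.
  pc.onicecandidate = (e: any) => {
    if (e.candidate) sendToPeer("candidate", e.candidate);
  };

  // Steps 2-3: create an offer, keep our own SDP, send it to the callee.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer("sdp", offer);
}
```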

The createOffer and createAnswer steps are tied to the steps that transmit the SDP and Ice Candidate objects.

Now we are going to take a look at some entities, such as MediaStream, SDP, and Ice Candidate.

Main entities


MediaStream is the basic entity; it consists of video and audio data streams. There are two types of media streams: local and remote. Local streams receive data from input devices (camera, mic); remote streams receive data from the network.

Therefore, every node has a local and a remote stream. In WebRTC, for these streams, there is a MediaStream interface, as well as a LocalMediaStream sub-interface which is out there specifically for a local stream. In JavaScript, you can only face the former one, but if you use libjingle, you can also encounter the latter.

WebRTC implies a complex hierarchy within a stream. Every stream consists of several media tracks (MediaTrack), which in turn can consist of several media channels (MediaChannel). There can also be several media streams themselves.

For example, we not only want to transmit a video of ourselves but also our table with a piece of paper on it, as we are about to write something on the piece of paper. We’ll need two videos (of us and the table) and one audio (us). Obviously, we and the table should be divided into different streams, as they aren’t really dependent on each other. That’s why we’ll have two MediaStreams: one for us and one for the table. The first one will have video and audio data, and the 2nd one – video data only.

The media stream has to be able to carry different types of data, namely video and audio. The technology accounts for this: every data type is realized through a MediaTrack. A MediaTrack has a special property called kind, which determines whether it is a video or an audio track.

So how does everything happen inside the program? We create two media streams, and then two video tracks and one audio track. We get access to the camera and microphone and tell every track which device it should use. Then we add the video and audio tracks to the first media stream, and the video track from the second camera to the second media stream.

How do we distinguish media tracks on the other end? By the label property that every media channel has. Media tracks have the same property.

So, if we can identify media tracks by a label, why do we need two media streams instead of one in this example? After all, you can transmit one media stream and use different tracks within it. Here we’ve reached an important feature of media streams: they synchronize their media tracks. Different media streams aren’t synced with each other, but within each media stream, all tracks are played simultaneously.

Therefore, if we want our words, facial expressions, and the piece of paper to be played at the same time, we need to use the same media stream. If it’s not that important, it’s better to use different media streams – the picture will be smoother.

If a track needs to be switched off during the transmission, we can use the enabled feature of a media track.

Finally, it’d be nice to think about stereo sound. Stereo is two different audio signals, and they have to be transmitted separately. MediaChannel is used for that. A media track can use several channels (for instance, 6 if we need 5.1 sound). The channels inside a media track are also synced. For video, usually one channel is used, but several are possible, for example, to overlay advertising.

To summarize: we use a media stream to transmit video and audio data. The data is synced inside each media stream. We can use different media streams if we don’t aim for synchronization. Inside each stream there are two media tracks – for video and audio. There can be more tracks if we need to transmit several videos (the interlocutor and their table). Every track can consist of several channels, though usually only stereo sound needs more than one.

In the simplest situation, there will only be one local media stream of two tracks: audio and video. Each track will consist of one primary channel. The video track comes from the camera, the audio track – from the microphone. The media stream is a container for both of them.
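In the browser, this simplest case looks roughly as follows – a sketch using the standard getUserMedia call and the kind and enabled track properties mentioned above:

```typescript
// Sketch: one local media stream with one audio and one video track.
async function captureLocalStream(): Promise<any> {
  const g = globalThis as any; // browser globals (Media Capture API)
  const stream = await g.navigator.mediaDevices.getUserMedia({
    video: true, // camera     -> the video track
    audio: true, // microphone -> the audio track
  });
  for (const track of stream.getTracks()) {
    console.log(track.kind, track.label); // kind is "video" or "audio"
  }
  return stream;
}

// Switching a track off mid-transmission via the enabled property.
function setMicMuted(stream: any, muted: boolean): void {
  for (const track of stream.getAudioTracks()) {
    track.enabled = !muted;
  }
}
```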

Session descriptor (SDP)

Different computers have different cameras, mics, graphics cards, etc., and each has a multitude of parameters. It all needs to be coordinated for media data transmission between two network nodes. WebRTC does this automatically and creates a special object – the SDP, or session descriptor. Transmit the SDP to the other node, and video data can be transmitted. At this point, there is no connection with the other node yet, though.

Any signaling mechanism will do here. SDP can be sent via sockets, by humans (read the SDP to the other node over the phone), or… well, by post. You get a ready SDP, and it needs to be sent out – as simple as that. When the other side receives the SDP, they need to hand it to WebRTC. The SDP is stored as text and can be changed by applications, but that’s rarely needed. For example, with a desktop <-> phone connection, it is sometimes necessary to forcefully choose the right audio codec.

Usually, when a connection is established, you have to specify an address, such as a URL. There is no need to do it here, as you yourself will send the data via the signaling mechanism. To tell WebRTC that we want to establish a p2p connection, the createOffer function has to be invoked. After that, a new SDP object will be created and passed to the special callback you provide. All you need to do is transmit this object to the other node (the interlocutor) via the network.

The signaling mechanism will deliver the data – this SDP object. For the receiving node, this session descriptor is alien (remote), and therefore it bears useful information.

Receiving this object is a signal to start the connection. So you have to agree to it and call the createAnswer function. It is an exact analog of createOffer. Your callback will receive a local session descriptor, which will then need to be transmitted back via the signaling mechanism.

It’s worth mentioning that calling createAnswer is only possible after receiving the alien SDP object. That’s because the local SDP object generated by createAnswer has to rely on the remote SDP object. Only then is it possible to coordinate your video settings with those of your interlocutor. Also, don’t call createAnswer or createOffer before receiving a local media stream, as they will have nothing to write into the SDP object.
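A sketch of the callee’s side of this ordering, assuming a hypothetical sendToPeer() function that represents your own signaling mechanism: the remote descriptor is set first, and only then is the answer created.

```typescript
// Hypothetical placeholder for your own signaling code.
declare function sendToPeer(type: string, payload: unknown): void;

// Sketch of the callee's reaction to an incoming offer.
// pc is an already-created RTCPeerConnection with the local stream attached.
async function onOfferReceived(pc: any, remoteSdp: any): Promise<void> {
  // The remote (alien) descriptor must be installed first ...
  await pc.setRemoteDescription(remoteSdp);

  // ... because the local answer is generated relative to it.
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);

  // Send our own SDP back through the signaling mechanism.
  sendToPeer("sdp", answer);
}
```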

Since WebRTC allows you to edit an SDP object, you need to set the local descriptor after receiving it. Sending back to WebRTC the very thing it gave us might seem strange, but that’s the protocol. The remote descriptor also needs to be set upon receiving it.

After this handshake of sorts, the nodes know each other’s wishes. For example, if node 1 supports codecs A and B, and node 2 supports codecs B and C, they both will choose codec B, because each node knows both the local and the alien descriptor. The connection logic has been established, and it’s now possible to send media streams. There is another problem, though: the nodes are still connected only by the signaling mechanism.
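The codec-picking logic in this example boils down to an intersection of the two lists. A simplified sketch (real SDP negotiation also compares codec parameters and preference order):

```typescript
// Pick the codecs both nodes support, keeping the local preference order.
function commonCodecs(local: string[], remote: string[]): string[] {
  const remoteSet = new Set(remote);
  return local.filter((codec) => remoteSet.has(codec));
}

console.log(commonCodecs(["A", "B"], ["B", "C"])); // [ 'B' ]
```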

Ice candidates

Upon establishing the connection, the address of the node you need to connect to isn’t mentioned anywhere. First the logical connection is established, then the physical one, although it used to be the other way around. It stops looking strange once we remember that we use an external signaling mechanism.

So, the logical connection has been established, but there’s no path the nodes can use to transmit data yet. Not everything is simple here, but let’s start with the simple things. Imagine that the nodes are within the same private network. As we know, they can easily connect to each other via their internal IPs (or other addresses, if TCP/IP is not in use).

WebRTC hands us Ice candidate objects through callbacks. They, too, arrive as text, and they, too, need to be sent through the signaling mechanism, just like the session descriptors. If the session descriptor contains information about our settings at the camera and microphone level, candidates carry information about our placement inside the network. Send them to the other node, and it will be able to logically connect to us; since it already has a session descriptor, the data will start flowing. If it doesn’t forget to send us its own candidate object (information about where it is placed inside the network), we’ll be able to connect to it, too.

There is another difference from classical client-server interaction. Communication with an HTTP server follows a request-response pattern: the client sends data to the server, the server processes it and sends the response to the address mentioned in the request packet. In WebRTC, two addresses must be known, and the connection has to be established from both sides.

The difference from session descriptors is that only remote candidates have to be set; editing them is prohibited and wouldn’t be of use. In some WebRTC implementations, candidates can be set only after the session descriptors have been set.

So, why can there be one session descriptor but lots of candidates? Because placement within a network can be determined not only by the node’s own internal IP address but also by an external router address (one or more) and by TURN server addresses.

So, we have two nodes within one network (picture below). How do we identify them? With the help of IP addresses. Of course, different transports can be used (TCP or UDP), as well as different ports. This is the information contained inside a candidate object – IP, PORT, TRANSPORT, etc. For instance, let’s take port 531 and UDP transport.

So, when we’re inside the p1 node, WebRTC will hand us a candidate object of the form [IP, 531, udp], where the IP is p1’s internal address. This isn’t the exact format, just a scheme. If we’re inside the p2 node, the candidate will carry p2’s address instead. Through the signaling mechanism, p1 will receive p2’s IP and PORT and will be able to connect to p2 directly. Strictly speaking, p1 will send data to that address and port, hoping it reaches p2. Whether the address is owned by p2 or by an intermediary isn’t important; what matters is that data sent to this address can reach p2.
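As a rough sketch, a candidate can be thought of as a small record of address information (field names here are illustrative; real ICE candidates carry more attributes):

```python
# Toy representation of an Ice candidate: the node's address, port, and transport.

def make_candidate(ip, port, transport):
    return {"ip": ip, "port": port, "transport": transport}

# Each node generates its own candidate and sends it via the signaling
# mechanism; the IP strings below are placeholders, not real addresses.
p1_candidate = make_candidate("<p1 internal IP>", 531, "udp")
p2_candidate = make_candidate("<p2 internal IP>", 531, "udp")

# After the exchange, p1 knows where to send media for p2:
print(p2_candidate["ip"], p2_candidate["port"])  # the address p1 will target
```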

While the nodes are inside the same network, everything is a piece of cake, as every node only has one candidate object (their own, which is their placement in the network). But the number of candidates will grow by a lot if the nodes are in different networks.

Let’s take a look at a more complicated case. One node is behind a router (NAT), and the 2nd node is in the same network as that router (for example, on the internet).

This case has its own solution. A home router usually has a NAT table. This mechanism is created for the nodes inside a private router network to communicate with, for example, websites.

Let’s assume that a web server is connected to the internet directly, meaning it has a public IP. Let it be the p2 node. The p1 node (a web client) then sends a request to the server’s address. First, the data arrives at the r1 router or, to be precise, at its internal interface. After that, the router memorizes the source address (p1) and puts it in the NAT table. Then the router changes the source address to its own (p1 -> r1) and, using its external interface, sends the data to the p2 web server. The web server processes the data, generates an answer, and sends it back to the router. When the router receives the data, it checks the NAT table and forwards the data to the p1 node. The router here acts as an intermediary.

Well, what if several nodes from the internal network send requests to the external network? How does the router know where to send each answer? This problem is solved with the help of ports. When the router substitutes a node’s address with its own, it also substitutes the port. If two nodes reach out to the internet, the router rewrites their source ports to different values. Then, when a packet from the web server returns to the router, the router can tell by the port who the recipient is. See the example below.
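The port trick can be sketched as a tiny model (addresses, ports, and names are illustrative):

```python
# Toy NAT: outbound packets get the router's external address and a fresh
# port; the port is later used to find the internal node for the reply.

class Nat:
    def __init__(self, external_ip, first_port=888):
        self.external_ip = external_ip
        self.next_port = first_port
        self.table = {}  # external port -> (internal ip, internal port)

    def outbound(self, src_ip, src_port):
        """Rewrite the source address of an outgoing packet, remembering the mapping."""
        ext_port = self.next_port
        self.next_port += 1
        self.table[ext_port] = (src_ip, src_port)
        return (self.external_ip, ext_port)

    def inbound(self, dest_port):
        """Look up the internal recipient for a returning packet (None -> dropped)."""
        return self.table.get(dest_port)

r1 = Nat("r1.external")
print(r1.outbound("p1.internal", 35777))  # → ('r1.external', 888)
print(r1.outbound("p3.internal", 35777))  # → ('r1.external', 889)
print(r1.inbound(888))                    # → ('p1.internal', 35777)
```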

Going back to WebRTC and the part where it uses the ICE protocol (hence Ice candidates): the p2 node has one candidate (its own placement inside the network), while the p1 node, which sits behind the NAT router, has two candidates – a local one (its internal address) and a router candidate (the router’s external address). The first one isn’t of much use here, but it is generated anyway, since WebRTC knows nothing about the remote node, which may or may not be within the same network. The second candidate is useful and, as we know, the port will play an important role in getting through NAT.

The entry in the NAT table is generated only when the data leaves the internal network. That’s why the p1 node has to send its data first, and only then can the data from p2 reach p1.

In reality, both nodes will usually be behind NAT. To create an entry in each router’s NAT table, the nodes have to send something to the remote node – but this time, neither will be able to reach the other. That’s because the nodes don’t know their external IP addresses, and sending data to internal addresses is pointless.

However, if the external addresses are known, the connection is easily established. If the first node sends data to the second node’s router, the router ignores it, as its NAT table is empty at that moment – but the first node’s router now has an entry in its NAT table. As soon as the second node sends data to the first node’s router, that router successfully forwards it to the first node, and now the second router’s NAT table has the needed entry, too.
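The hole-punching sequence above can be sketched as a timeline (a toy model; a packet passes a NAT inbound only if an outbound mapping for the flow already exists):

```python
# Both nodes are behind their own NAT. The first send is dropped by the
# remote NAT but creates a mapping in the sender's own NAT; after both
# sides have sent once, traffic flows in both directions.

nat1_has_mapping = False  # entry in router 1's NAT table for this flow
nat2_has_mapping = False  # entry in router 2's NAT table
log = []

# Step 1: node 1 sends to node 2's external address.
nat1_has_mapping = True  # leaving nat1 creates the mapping there...
log.append("delivered" if nat2_has_mapping else "dropped by nat2")  # ...but nat2 drops it

# Step 2: node 2 sends to node 1's external address.
nat2_has_mapping = True
log.append("delivered" if nat1_has_mapping else "dropped by nat1")  # nat1 lets it in

# Step 3: node 1 retries; both mappings now exist.
log.append("delivered" if nat2_has_mapping else "dropped by nat2")

print(log)  # → ['dropped by nat2', 'delivered', 'delivered']
```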

The problem is that to find out its external IP, a node needs a peer in a public network. To deal with this, additional servers connected directly to the internet are used. They also help create those entries in the NAT table.

STUN and TURN servers

Available STUN and TURN servers must be specified when WebRTC is initialized; we’ll be calling them ICE servers from now on. If no servers are specified, only nodes from the same network (connected without NAT) will be able to connect. It’s worth mentioning that in 3G networks, a TURN server is required for calls to work.

The STUN server is a server on the internet that sends the return address (the source address of the node) back. The node behind the router communicates with a STUN server to bypass NAT. A packet that arrives at the STUN server contains a source address – the router’s address, in other words, the external address of our node. This is the address the STUN server returns. The node thereby receives the external IP and port that make it reachable in the network. Then WebRTC creates an additional candidate with this address (the external router address and port). Meanwhile, the NAT table now has an entry that allows packets sent to the router on the right port to reach our node.
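At its core, the exchange is trivial: the STUN server replies with the source address it observed on the incoming packet. A sketch (addresses are illustrative):

```python
# Toy STUN exchange: after NAT rewriting, the packet's source is the router's
# external address, and that's exactly what the server echoes back.

def stun_server(packet):
    """Reply to the sender, putting the observed source address in the payload."""
    return {"dest": packet["src"], "payload": packet["src"]}

# p1's request as it arrives at the server (source already rewritten by NAT):
request = {"src": ("r1.external", 888), "dest": ("stun.example", 3478)}
reply = stun_server(request)
print(reply["payload"])  # → ('r1.external', 888): p1 learns its external address
```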

A STUN server example: how it works

The STUN server will be s1. The router and the node will be r1 and p1 respectively. We will also need to keep an eye on the NAT table; let’s call it r1_nat. That table usually holds a lot of entries from different subnetwork nodes – we won’t mention them.

Let’s start with an empty r1_nat:

Internal IP | Internal PORT | External IP | External PORT

There are 4 columns in the table: each (IP, PORT) pair from the first two columns is mapped to an (IP, PORT) pair from the last two.

P1 sends a packet to s1. The four fields that interest us are in the header of the transport packet (TCP or UDP): the IP and PORT of the source and of the receiver.


P1 sends this packet to r1. The router needs to substitute the source address (Src IP), as the address mentioned in the packet won’t work in an external network; moreover, addresses from that range are reserved, and no address on the internet uses it. The router rewrites the address in the packet and creates a new entry in r1_nat. For that, it needs to come up with a port number. Since different nodes within a subnetwork can call out to the external network, the NAT table has to contain additional information, so that the router can determine which node should receive the return packet from the server. Let’s imagine that the router chose port 888.

The changed packet heading:

Src IP | Src PORT | Dest IP | Dest PORT — the source is now the router’s external address.


Internal IP | Internal PORT | External IP | External PORT

The internal IP address and port are the same as in the initial packet; after all, to send the answer back, we need a way to fully restore them. The IP visible to the external network is the router’s address, and the port is changed to the one the router created.

The actual port on which node p1 accepts the connection is 35777, but the server sends data to the dummy port 888; it will later be changed back to the real 35777.

So, the router has substituted the source address and port in the packet header and added an entry to the NAT table. Now the packet is sent over the network to the server – to the s1 node. Upon arrival, s1 has a packet like this:


So, the STUN server knows the address and port it received the packet from – the router’s external ones. Now the server sends this address back. It’s worth pausing here for a bit and looking at this once again.

The tables above are pieces of a packet header, not of its content. We haven’t discussed the content, since it’s not so important – it’s described in the STUN protocol. Now, however, we will look at the content, too. It will be simple and will contain the router’s external address, even though we take it from the packet header. This isn’t done often, as protocols usually don’t care about node addresses; the only important thing is that packets are delivered as intended. Here, however, we are looking at a protocol that establishes a path between two nodes.

Now comes the second packet, traveling backward:


The header has changed: the source and the receiver have swapped places, which is logical, as the packet’s destination is different now.


This is the content of the packet. Actually, it could contain a lot of information. But only what’s important for understanding how the STUN server works is mentioned here.

Then the packet travels through the network until it ends up on the external interface of r1. The router understands that the packet isn’t meant for itself. How? By the port. Port 888 isn’t used by the router for its own purposes but for the NAT mechanism, so the router looks at that table: it searches the External PORT column for a row matching the Dest PORT of the arriving packet, which is 888.

Internal IP | Internal PORT | External IP | External PORT

We’re lucky that this row exists; otherwise, the packet would be dropped. Now we need to work out which subnetwork node to send the packet to. Remember how important ports are in this mechanism: two nodes in the subnetwork could be sending requests to the external network. Then, if the router created port 888 for the first node, it created port 889 for the second. Let’s assume that’s the case, and r1_nat looks like this:

Internal IP | Internal PORT | External IP | External PORT

By port 888, the router determines that the needed internal node is p1 and rewrites the receiver’s address and port back to the internal ones.




The packet successfully reaches node p1 and, by looking at the packet content, the node finds out its external IP address – its address in the external network. It also learns the port through which it is reachable via NAT.

What’s next? How is this useful? The usefulness lies in the entry in the r1_nat table. Now, if anyone sends a packet to r1 on port 888, the packet will be forwarded to p1. Thus, a narrow path to the hidden node p1 has been created.

The example above shows how NAT and a STUN server work. Essentially, ICE and STUN/TURN servers exist to bypass NAT restrictions.

There can be several routers between a node and the server. In that case, the node will receive the address of the router closest to the server – that is, the router directly connected to the same network as the STUN server. That is exactly what we need for p2p communication, if we keep in mind that along the way each router gets the important row added to its NAT table, so the way back will be as smooth as silk.

A TURN server is an upgraded STUN server, so every TURN server can also work as a STUN server. It has advantages, though: if p2p communication is impossible (as in 3G networks), the server switches to relay mode and works as an intermediary. P2p is out of the question then, but outside of the ICE mechanism the nodes still think they are interacting directly.

When is a TURN server a must? Why is a STUN server not enough? Because there are different kinds of NAT. They all substitute the IP address and port in the same manner, but some of them have built-in falsification protection. For example, a symmetric NAT table stores two more parameters – the IP and port of the remote node. A packet from the external network passes through NAT to the internal network only when the source address and port match those stored in the table. That’s why the trick with the STUN server doesn’t work: the NAT table stores the address and port of the STUN server, so when the router receives a packet from a WebRTC interlocutor, it drops it as falsified – it didn’t arrive from the STUN server.
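The symmetric-NAT filtering rule can be sketched like this (illustrative names): an inbound packet is forwarded only if its source matches the remote endpoint stored when the mapping was created.

```python
# A symmetric NAT remembers the remote node's IP and port along with the
# mapping, and drops inbound packets from any other source.

def allow_inbound(nat_entry, src_ip, src_port):
    """True only if the packet comes from the endpoint the mapping was created for."""
    return (src_ip, src_port) == (nat_entry["remote_ip"], nat_entry["remote_port"])

# The mapping was created by a request to the STUN server:
entry = {"remote_ip": "stun.example", "remote_port": 3478}

print(allow_inbound(entry, "stun.example", 3478))  # → True: the STUN reply passes
print(allow_inbound(entry, "peer.example", 531))   # → False: the peer's packet is dropped
```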

Therefore, a TURN server is needed when both interlocutors are behind a symmetric NAT (each behind their own).


Media stream

  • Video and audio data are packed into media streams
  • Media streams synchronize media tracks that they consist of
  • Different media streams aren’t synced between themselves
  • Media streams can be either local or remote. Local ones are in charge of the camera and microphone, whereas remote ones receive data from the network in encoded form
  • There are two types of media tracks: for video and for audio
  • Media tracks can be turned on or off
  • Media tracks consist of media channels
  • Media tracks synchronize the media channels they consist of
  • Media streams and media tracks have marks that help to distinguish them from one another

Session descriptor

  • Session descriptor is used for a logical connection of two nodes within a network
  • Session descriptor stores information about available ways to code audio and video data
  • WebRTC uses an external signaling mechanism. Transferring session descriptors (SDP) becomes an application’s task
  • Mechanism of logical connection consists of two steps: offer and answer
  • An offer’s session descriptor can’t be generated without a local media stream, and an answer’s can’t be generated without a remote session descriptor
  • A received descriptor must be handed to the WebRTC implementation, regardless of whether it was received remotely or locally from the same WebRTC implementation
  • A session descriptor can also be slightly edited

Ice candidates

  • Ice candidate is a node’s address within a network
  • The address can be the node’s own, the router’s, or the TURN server’s
  • There are many candidates
  • A candidate consists of an IP address, port and a transport type (TCP or UDP)
  • Candidates are used to establish a physical connection between two nodes within a network
  • Candidates need to be sent via a signaling mechanism
  • Only remote candidates should be handed to the WebRTC implementation
  • In some WebRTC implementations, candidates can be set only after a session descriptor has been set

NAT

  • NAT is a mechanism that allows access to an external network
  • Home routers support a special NAT table
  • Routers substitute addresses in packets. The source address becomes their own if the packet goes to an external network, and the source address becomes a node address within the internal network if the packet arrived from an external network.
  • NAT uses ports to allow multi-channel access to an external network
  • ICE is a mechanism to bypass NAT
  • STUN and TURN servers help to bypass NAT
  • A STUN server allows the creation of the required entries in a NAT table and returns the node’s external address
  • A TURN server generalizes the STUN mechanism so that it works in all cases
  • In the worst-case scenario, a TURN server is used as a relay, so p2p turns into a client-server-client connection

WebRTC in iOS: Technology Basics Explained in Plain Words

You’ve probably heard of WebRTC if you wanted to create an online conference app or introduce calls to your application. There’s not much info on that technology, and even those little pieces that exist are developer-oriented. So we aren’t going to dive into the tech part but rather try to understand what WebRTC is.

WebRTC in brief

WebRTC (Web Real-Time Communications) is a protocol that allows audio and video transmission in real time. It works with both UDP and TCP and can switch between them. One of the main advantages of this protocol is that it can connect users via a p2p connection, transmitting data directly and bypassing servers. However, to use p2p successfully, one must understand the peculiarities of both p2p and WebRTC.

You can read more in-depth information about WebRTC here

STUN and TURN

Networks are usually designed with private IP addresses. These addresses are used within organizations for systems to be connected locally, and they aren’t routed on the Internet. In order to allow a device with a private IP to contact devices and resources outside the local network, the private address must be translated to a publicly accessible one. NAT (Network Address Translation) takes care of that process. You can read more about NAT here. For our purposes, we just need to know that the router keeps a NAT table and that we need a special entry in it which lets packets through to our client. To create an entry in the NAT table, a client must first send something to a remote client. The problem is that neither of the clients knows its external address. To deal with this, STUN and TURN servers were invented. You can connect two clients without TURN and STUN, but only if the clients are within the same network.

A STUN server is a server connected directly to the Internet. When it receives a packet, it reads the external address of the client that sent it and sends that address back. The client thus learns its external address and the port that the router needs to understand which client sent the packet – necessary because several clients can contact the external network from the internal one simultaneously. That’s how the entry we need ends up in the NAT table.

TURN is an upgraded STUN server: it can do everything STUN does, plus more. For example, you will need TURN when NAT doesn’t let through packets sent by a remote client. This happens because there are different types of NAT, and some of them remember not only the external IP but also the STUN server’s port, and they don’t allow packets received from servers other than STUN. On top of that, it’s impossible to establish a p2p connection inside 3G networks. In those cases, you also need a TURN server, which becomes a relay, making the clients think that they’re connected via p2p.

Signal server

We know now why we need STUN and TURN servers, but that’s not all there is to WebRTC. WebRTC can’t transmit connection data itself, which means we can’t connect clients using WebRTC alone. We need to set up a way to transfer the data about connections (what that data is and why it’s needed, we’ll see below). For that, we need a signal server. You can use any means of data transfer; the only requirement is that the two parties can exchange data with each other. For instance, Fora Soft usually uses WebSockets.
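Conceptually, a signal server is just a relay that forwards opaque messages (SDPs, candidates) between two participants. A minimal in-memory sketch, with no real WebSockets and illustrative names:

```python
# Minimal 'signal server' model: it doesn't understand SDP or candidates,
# it only forwards messages from one client to the other.

class SignalServer:
    def __init__(self):
        self.clients = {}  # client id -> inbox (list of received messages)

    def join(self, client_id):
        self.clients[client_id] = []

    def send(self, from_id, to_id, message):
        """Relay a message (SDP or Ice candidate, as text) to the peer's inbox."""
        self.clients[to_id].append({"from": from_id, "data": message})

server = SignalServer()
server.join("caller")
server.join("callee")
server.send("caller", "callee", "<caller's SDP offer>")
print(server.clients["callee"])  # → [{'from': 'caller', 'data': "<caller's SDP offer>"}]
```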

Video calls one-on-one

Although STUN, TURN, and signal servers have been discussed, it’s still unclear how to create a call. Let’s find out what steps we shall take to organize a video call.

Your iPhone can connect to any device via WebRTC. Both clients don’t have to be iPhones: you can also connect to Android devices or PCs.

We have two clients: a caller and one who’s being called. In order to make a call, a person has to:

  • Receive their local media stream (a stream of video and audio data). Each stream can consist of several media channels. There can be several media streams: one from a camera and one from a desktop, for example. A media stream synchronizes the media tracks it consists of, but media streams can’t be synchronized between each other: sound and video from the camera will be synchronized with one another but not with the desktop video. Media channels inside a media track are synchronized, too. The code for the local media stream looks like this:

func startLocalStream() {
    // NOTE: the original snippet was truncated; the initializer call below is reconstructed
    let stream = PublishStreamModel(type: .publish)
    stream.startCameraCapturer(processDeviceRotations: false,
                               prefferedFrameSize: CGSize(width: 640, height: 480),
                               prefferedFrameRate: 15)
}
  • Create an offer, i.e., suggest starting a call.
if self.positioningType == .caller {
    // the caller creates the offer here (e.g. via the peer connection's offer method)
}
  • Send their own SDP through the signal server. What is SDP? Devices have a multitude of parameters that need to be considered to establish a connection – for example, the set of codecs that work with the device. All these parameters are formed into an SDP object, or session descriptor, that is later sent to the opponent via the signal server. It’s important to note that the local SDP is stored as text and can be edited before it’s sent to the signal server. This can be done to forcefully choose a codec, but it’s a rare occasion, and it doesn’t always work.
func stream(_ stream: StreamController?,
            shouldSendSessionDescription sessionDescriptionModel: StreamSessionDescriptionModel,
            identifier: String,
            completion: ((Bool) -> ())?) {
    shouldSendSessionDescription?(sessionDescriptionModel, identifier)
}
  • Send their Ice Candidate through the signal server. What’s an Ice Candidate? SDP helps establish a logical connection, but the clients can’t find one another physically. Ice Candidate objects carry information about where the client is located in the network, helping clients find each other and start exchanging media streams. It’s important to note that the local SDP is single, while there are many Ice Candidate objects. That’s because the client’s location within the network can be determined by an internal IP address, TURN server addresses, and an external router address (there can be several of them). Therefore, in order to determine the client’s location within the network, you need several Ice Candidate objects.
func stream(_ stream: StreamController?,
            shouldSendCandidate candidateModel: StreamCandidateModel,
            identifier: String,
            completion: ((Bool) -> ())?) {
    shouldSendCandidate?(candidateModel, identifier)
}
  • Accept a remote media stream from the opponent and show it. With iOS, OpenGL or Metal can be used as tools for video stream rendering.
func stream(_ stream: StreamController?, shouldShowLocalVideoView videoView: View?, identifier id: String) {
    guard let video = videoView else { return }
    self.localVideo = video
    shouldShowRemoteStream?(video, id)
}

The opponent has to complete the same steps while you’re completing yours, except for the second one: while you’re creating an offer, the opponent creates an answer, i.e., answers the call.

if self.positioningType == .callee && self.peerConnection?.localDescription == nil {
    // the callee creates the answer here, relying on the caller's SDP
}

Actually, an answer and an offer are the same kind of thing. The only difference is that the person receiving the call creates an answer: while generating their local SDP, they rely on the caller’s SDP object. As a result, both clients know both devices’ parameters and can choose the most suitable codec.

To summarize: the clients first exchange SDPs (establish a logical connection), then Ice Candidates (establish a physical connection). Therefore, the clients connect successfully, they can see, hear, and talk with each other.

That’s not everything one needs to know when working with WebRTC in iOS. If we leave everything as it is, the app users will be able to talk; however, they will only learn about an incoming call and answer it if the application is open. Fortunately, this problem is easily solved: iOS provides VoIP push, a kind of push notification created specifically for working with calls. This is how it’s registered:

// Link to the PushKit framework
import PushKit

// Trigger VoIP registration on launch
func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    voipRegistration()
    return true
}

// Register for VoIP notifications
func voipRegistration() {
    // Create a push registry object on the main queue
    let voipRegistry = PKPushRegistry(queue: DispatchQueue.main)
    // Set the registry's delegate to self
    voipRegistry.delegate = self
    // Set the push type to VoIP
    voipRegistry.desiredPushTypes = [.voIP]
}

This push notification helps show an incoming call screen which allows the user to accept or decline the call. It’s done via this CXProvider function from Apple’s CallKit framework:

func reportNewIncomingCall(with UUID: UUID,
                           update: CXCallUpdate,
                           completion: @escaping (Error?) -> Void)

It doesn’t matter what the user is doing at the moment. They can be playing a game or having their phone screen blocked. VoIP push has the highest priority, which means that notifications will always be arriving, and the users will be able to easily call one another. VoIP push notifications have to be integrated along with call integration. It’s very difficult to use calls without VoIP because for a call to happen, the users will have to have their apps open and just sit and wait for the call. That can be classified as strange behavior. The users don’t want to act strange, so they’ll probably choose another application.


We’ve discussed some of the WebRTC peculiarities; found out what’s needed for two clients to connect; learned what steps the clients need to take for a call to happen; what to do besides WebRTC integration to allow iOS users to call one another. We hope that WebRTC isn’t a scary and unknown concept for you anymore, and you understand what you need to apply it to your product.


How to estimate time and effort for a software development project as a developer

Estimating IT projects is a pain. Who hasn’t given promises they couldn’t keep, only to work overtime just to meet the deadline they set for themselves?

When I started out and tried to estimate my work as a developer, I always underestimated things. Every time, some work would come up that I hadn’t accounted for. Colleagues told me to multiply my estimates by 2, by 3, by the number Pi. That didn’t improve estimation accuracy; it just added other problems – for example, having to explain where the high numbers came from.

15 years have passed since then. Over this time, I’ve estimated over 250 projects, got a lot of experience, and now I’m willing to share my thoughts on the topic.

Hopefully, this article will improve the quality of the estimations you’re giving.

Why estimate?

No more than 29% of projects end in success, according to research by The Standish Group conducted in 2015. The other 71% either failed or broke the triple constraint: deadline, functionality, budget.

From these statistics, we can assume that project estimation is often not what it should be. Does that mean the process is pointless? There’s even a movement on the Internet that invites you not to estimate anything and just write code, so that whatever happens – happens (search for #noestimates).

Not having any estimations does sound appealing, but let me give you an example. Imagine that you come to a restaurant and order a steak and a bottle of wine but there are no prices on the menu. You ask a waiter: “how much?”, and he goes, “depends on how long it takes the chef to cook. Order, please. You’ll get your food, you’ll eat it and then we’ll tell you how much it cost”.

There can also be an Agile-like option: “The chef will cook your meal, and you’ll be paying as he proceeds – until you’re out of money. When you have no more money, the chef will stop cooking. Perhaps the steak won’t be ready, or it will be just barely edible. And if it’s not edible… sorry, that’s your problem.”

This is approximately how customers in the IT-sphere feel when they’re offered to start a job without estimations.

In the example above we’d ideally like to get an exact price for the steak. At the very least, it’d be fine if we just got a price range. This way we can check whether we want to go to this restaurant, choose a cheaper one, go get a cheeseburger or stay home and cook a salad.

Going to a restaurant with no idea of what to expect is not a decision a person in their right mind would make.

I hope I’ve convinced you that estimation is an important part of the decision-making process on any project. The estimate may land closer to or further from reality, but it’s needed nevertheless.

Reasons for underestimation

Ignoring probability theory

Imagine the following situation. A developer is approached by a manager, and the manager would like to know how long it will take the developer to finish a task. The developer has done something like that in the past and can give the “most probable” estimation. Let it be 10 days. There’s a probability that the completion of the task will last for 12 days, but the chance is lower than that of 10 days. There’s also a chance that the task would be completed in 8 days but this probability is lower as well.

It’s often assumed that estimation for a task or a project is distributed according to the normal distribution law (read more about it here). If you show estimation distributions as a graph, you’ll get this:

X shows the estimate, while Y shows the probability that the estimate will turn out to be correct and the task will consume exactly that amount of time. In the center, you can see the point of the highest probability; it corresponds to our 10-day estimation.

The area under the curve represents a probability of 100%. It means that if we go with the most probable estimate, we’ll finish by the deadline with a 50% chance (the area under the graph to the left of the 10-day estimate is half of the figure, hence 50%). So, if we follow this principle, we’ll miss the deadline half the time.

That holds only if the distribution of probabilities really follows the normal distribution, where finishing earlier than the most probable estimate is exactly as likely as finishing later. In practice, it’s more common for something to go wrong and for the project to finish later. A miracle can happen and we finish earlier – but what are the chances? In other words, the number of ways things can go wrong is always greater than the number of ways they can go unexpectedly well.

If we go with this idea in mind, the distribution will look like this:

For this to be easier to read, let me represent this information as a cumulative graph. It will show a possibility of finishing the project earlier than a deadline or just in time:

Turns out, if we take the “most probable” estimation of 10 days, the probability of the task being completed in that period or earlier is less than 50%.
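This is easy to check numerically. The sketch below models the task duration with a right-skewed log-normal distribution whose most probable value (mode) is 10 days; the parameters are illustrative, chosen only to show the effect:

```python
import math
from statistics import NormalDist

# Log-normal duration: the mode (most probable value) is exp(mu - sigma^2).
sigma = 0.5           # spread of the distribution (illustrative)
mode = 10.0           # the 'most probable' estimate, in days
mu = math.log(mode) + sigma ** 2

# P(duration <= mode) for a log-normal is Phi((ln(mode) - mu) / sigma).
p = NormalDist().cdf((math.log(mode) - mu) / sigma)
print(round(p, 3))  # ≈ 0.309: well under a 50% chance of meeting the estimate
```

The exact number depends on the chosen sigma, but for a log-normal this probability is Phi(−σ), which is below 50% for any σ > 0.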

Ignoring the current level of uncertainty

As we work on a project or task, we keep learning new information: feedback from the manager, designer, tester, customer, and other team members keeps coming in. We don't know much about the project at the beginning, we learn more as we go, and only once the project is finished can we say exactly how long it took.

What we know directly affects how precise our estimation is.

Luiz Laranjeira's (Ph.D., Associate Professor at the University of Brasilia) research also shows that the accuracy of a software project estimate depends on how clear the requirements are (Luiz Laranjeira, 1990). The clearer the requirements, the more accurate the estimate. Requirements are usually unclear because of the uncertainty involved in the project/task, so the only way to make the estimate more accurate is to reduce that uncertainty.

Considering this research and common sense, as we decrease the uncertainty on a task/project, we increase the estimation accuracy.

This graph is here to make it easier to understand. In reality, the most possible estimation may change as uncertainty decreases.

Dependency between precise estimation and the project stage

Luiz Laranjeira went on with his research and quantified how the estimation spread depends on the project stage (i.e., the level of uncertainty).

If we take the optimistic, pessimistic, and most probable estimates (the optimistic estimate is the earliest possible completion date, the pessimistic one the latest) and plot how their ratios change over time, from the start of the project to its finish, we get the following picture:

This is called a cone of uncertainty. The horizontal axis stands for the time between the start and the finish of the project. The main project stages are mentioned there. The vertical axis shows a relative margin of error in the estimation.

So, at the initial concept stage, the most probable estimate may differ from the optimistic one by 400%. Once the UI is ready, the spread narrows to between 0.8x and 1.25x of the most probable estimate.

This data can be found in the table down below:

Life-cycle stage | Optimistic estimation | Pessimistic estimation
Initial concept | 0.25x | 4x
Business requirements (agreed definition of the product) | 0.5x | 2x
Functional and non-functional requirements | 0.67x | 1.5x
User interface | 0.8x | 1.25x
Thoroughly thought-out realization | 0.9x | 1.15x
Finished product | 1x | 1x
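The table can be turned into a small Python helper that, given a most probable estimate, shows the honest range at each stage. The stage names below are shortened paraphrases of the table rows; the coefficients come from the table itself:

```python
# Cone-of-uncertainty coefficients: (optimistic, pessimistic)
# multipliers relative to the most probable estimate.
CONE = {
    "Initial concept":          (0.25, 4.0),
    "Business requirements":    (0.5,  2.0),
    "Requirements complete":    (0.67, 1.5),
    "User interface designed":  (0.8,  1.25),
    "Detailed design complete": (0.9,  1.15),
    "Finished product":         (1.0,  1.0),
}

def estimate_range(most_probable, stage):
    """Return the (optimistic, pessimistic) bounds for a stage."""
    lo, hi = CONE[stage]
    return most_probable * lo, most_probable * hi

print(estimate_range(10, "Initial concept"))          # (2.5, 40.0)
print(estimate_range(10, "User interface designed"))  # (8.0, 12.5)
```

For a 10-day most probable estimate, the honest promise at the concept stage is "somewhere between 2.5 and 40 days" — which is exactly why estimating that early is so dangerous.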

It’s very important to note that the cone doesn’t get narrower as time goes by. For it to narrow, one needs to manage the project and take action to lower uncertainty. If one doesn’t do that, they’ll get something like this:

The green area is called a cloud of uncertainty. The estimation is subject to major deviation up to the very end of the project.

To reach the rightmost point of the cone, where there's no uncertainty, we'd have to finish the product :). So, as long as the product isn't ready, there will always be uncertainty, and the estimation can't be 100% precise. You can, however, affect the estimation accuracy by lowering uncertainty: any action that lowers uncertainty also narrows the estimation spread.

This model is used in many companies, NASA included. Some adapt it to consider volatility in requirements. You can read about that in detail in “Software Estimation: Demystifying the Black Art”.

What is a good estimation?

There are plenty of ways to answer this question. In practice, if the estimate deviates by more than 20%, the manager has no room for maneuver. If the deviation stays within about 20%, the project can still be finished successfully by managing functionality, deadlines, team size, etc. That sounds reasonable, so let's settle on this definition of a good estimation. Ultimately, the threshold has to be set at the organizational level: some are fine with risk and a 40-50% deviation; others consider 10% a lot.

So, if our estimation differs from the actual result by no more than 20%, we consider it good.
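As a sketch, the rule is trivial to encode:

```python
def is_good_estimate(estimated, actual, tolerance=0.2):
    """An estimate is 'good' if it deviates from the actual
    time by no more than the tolerance (20% by default)."""
    return abs(estimated - actual) / actual <= tolerance

print(is_good_estimate(20, 23))  # True: about 13% off
print(is_good_estimate(20, 31))  # False: about 35% off
```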

Practice. Estimating a project on various stages

Let’s imagine that a project manager has approached you and asked to estimate a function or a project.

To start with, you have to study available requirements and figure out the life-cycle stage of a project definition.

What you do next depends on the stage you’re on:

Stage 1. Initial concept

If a manager approaches you and asks how long it will take to create an app where doctors will consult patients, you are on Stage 1.

When does it make sense to make estimations on this stage?

At the pre-sale stage, when you need to decide whether the project is worth discussing further. All in all, it's better to avoid estimations at this stage and try to lower the uncertainty as soon as possible. After that, we can move on to the next stage.

What do you need in order to estimate at this stage?

Actual labor time data on a similar finished project.

What tools are the most suitable for this stage?

  • An estimation by analogy

Estimation algorithm

Actually, estimating the project on this stage is an impossible task. You can only see how long a similar project took to launch.

For example, this is how you could put your estimation into words: “I don’t know how long this project will take as I lack data. However, project X which was similar to this one took Y time. To give at least an approximate estimation, it’s imperative to make requirements clearer”.

If there’s no data from similar projects, then lowering the uncertainty and moving to the next stage is the only way to estimate here.

How to move to the next stage?

For this to happen, the requirements must be clarified. You need to understand what the app is for and its functionality.

Ideally, one should have skills in gathering and analyzing requirements.

To improve that skill, it’s recommended to read “Software requirements” by Karl Wiegers and Joy Beatty.

To gather preliminary requirements, you might use this questionnaire:

  • What’s the purpose of the app? What problems will it solve?
  • What is the target audience? (for the task above that could be doctor, patient, administrator)
  • What problems will each type of users solve in the app?
  • What platforms is the app for?

After figuring these things out, you will have an image of the app in your head with all the necessary information. With this, we’re moving to Stage 2.

Stage 2. An agreed definition of the product

At this point we have an understanding, although not a very detailed one, of what the app will and won't do.

When does it make sense to make estimations on this stage?

Again, at the pre-sale stage: when one needs to decide whether the task or project is worth taking on, whether there's enough money, whether the deadlines are realistic. You need to check whether the value the project brings is worth the resources that need to be involved.

What do you need in order to estimate at this stage?

Quite a few finished projects and their estimations OR huge experience in the area of development to which the project is related. These two combined would be even better!

What tools are the most suitable for this stage?

  • An estimation by analogy
  • A top-to-bottom estimation

Estimation algorithm

If there was a project like this before, the approximate estimation would be the time spent on that project.

If there is no data on projects like that, you need to split the project into the main functional units, then estimate every block according to those that were done on other projects.

For example, with the app where the doctors would consult patients, we could have got something like that:

  • Registration
  • Appointment scheduling system
  • Notification system
  • Video consultation
  • Feedback system
  • Payment system

You could estimate the “registration” block by using something similar from another project and for the “feedback system” a block from a different project.

If there are blocks that were never done before or they lack data, you can either compare the necessary labor time against other blocks or reduce uncertainty and use the estimation method from the next stage.

For example, the “feedback system” module might seem twice as difficult as the “registration” module. Therefore, for the feedback, we could get an estimation twice as high as the registration.

The method of comparing one block against the other is not exactly precise, and it’s better used in the situation where the number of the blocks that were never done isn’t higher than 20% of the blocks that do have historic data. Otherwise, it’s just a guess.

After this, we sum up the estimates of all blocks; that gives us the most probable estimation. The optimistic and pessimistic estimations can be calculated using the coefficients for the current stage – x0.5 and x2 (check the coefficient table).
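As a sketch, the whole stage-2 algorithm might look like this in Python. All hours here are hypothetical, standing in for data from past projects:

```python
# Stage-2 estimation by analogy: block estimates taken from past
# projects (hypothetical hours), plus one block judged by comparison.
blocks = {
    "registration": 40,
    "appointment scheduling": 80,
    "notifications": 24,
    "video consultation": 120,
    "payments": 60,
}

# "feedback system" was never built before; we judge it to be about
# twice as hard as "registration" (a comparison, not a measurement).
blocks["feedback system"] = 2 * blocks["registration"]

most_probable = sum(blocks.values())
optimistic = most_probable * 0.5   # stage-2 coefficient x0.5
pessimistic = most_probable * 2    # stage-2 coefficient x2
print(most_probable, optimistic, pessimistic)  # 404 202.0 808
```

Handing the manager all three numbers, not just the middle one, is the whole point of the exercise.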

Ideally, you should hand all three estimations to your manager and let them work with the range.

If the manager can't work with a range and asks for one single number, there are ways to produce it.

How to calculate one estimation out of three? It will be answered down below in the corresponding chapter.

How to move to the next stage?

Prepare a full list of requirements. There are quite a few ways to document them, but we'll look into a widely used one – User Stories.

We need to understand who will be using each block and what they’ll be doing with the blocks. 

For example, for the “feedback system” block we would end up with these bullet points  after gathering and analyzing requirements:

  • A patient can check all feedback about the chosen doctor
  • The patient can leave feedback for the doctor after a video consultation with him
  • The doctor can see feedback from the patients
  • The doctor can leave a comment on feedback
  • An administrator can see all feedback
  • The administrator can edit any feedback
  • The administrator can delete feedback

You will also need to collect and write down all non-functional requirements. To do that, use this checklist:

  • What platforms is it for?
  • What operating systems need to be supported?
  • What do you need to integrate with?
  • How fast is it supposed to work?
  • How many users at the same time can use the tool?

Clarifying this stage will move you to the next one.

Stage 3. The requirements are gathered and analyzed

This stage has a full list of what each user can do in the system. There is also a list of non-functional requirements.

When does it make sense to make estimations on this stage?

When you need to give an approximate estimation for the project before you begin working with the Time & Materials model. The estimation of tasks from this stage can be used to prioritize some of them on the project, to plan the release dates and the whole project budget. You can also use those to control the team’s efficiency on the project.

What do you need in order to estimate at this stage?

  • The list of functional requirements
  • The list of non-functional requirements

What tools are the most suitable for this stage?

  • An estimation by analogy
  • A top-to-bottom estimation

Estimation algorithm

You need to decompose each task (split it into components). The smaller the components, the more precise the estimation will be.

To do this to the best of your ability, put everything that needs to be done down on paper.

For example, for our User Story that goes like “a patient can see all feedback about the chosen doctor”, we could get something like this:

We split the task here into three parts:

  • Create infrastructure in the database
  • Create the DAL level for data samples
  • Create a UI where the feedback will appear

If you can, write down the UI functionality and approve it with whoever asked for the estimation. It will eliminate lots of questions and make the estimation more precise – a good quality-of-life change.

If you want to improve your interface design skills, it’s recommended to read two books: “The Humane Interface” by Jef Raskin and “About Face. The essentials of interaction design” by Alan Cooper.

Then you need to imagine what exactly will be done for each task and estimate how long it will take. Here you have to calculate time, not guess it. You have to know what you will do to finish each subtask.

If there are tasks that take more than 8 hours, split them into subtasks.

The estimation received after having done this can be considered optimistic as it most likely uses the shortest path from point A to point B, given that we haven’t forgotten anything.

Now it's time to think about the things we've probably missed and correct the estimation accordingly. A checklist usually helps here. This is an example of such a list:

  • Testing
  • Design
  • Test data creation
  • Support for different screen resolutions

After completing this list, we have to add the tasks we might have missed to the task list:

Go through each task and subtask and think about what could go wrong, what is missed. Oftentimes, this analysis reveals things without which you can’t end up with a best-case scenario. Add them to your estimation:

After you account for this, too, your estimation will still be closer to the optimistic one than to the most probable one. If we look at the cone, the estimation will be close to its lowest line.

The exception here might be if you’ve done a similar task before and can speak with authority that you know how it’s done and how long it takes. In that case, your estimation would be called “the most possible” and it’d go along with the 1x line on the cone. Otherwise, your estimation is optimistic.

The other two estimations can be calculated with the coefficients for this stage: x0.67 and x1.5 (check out the coefficient table).

If you calculate the estimation from the example above, we’ll get this:

  • Optimistic estimation: 14 hours
  • The most possible estimation: 20 hours
  • Pessimistic estimation: 31 hours
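As an aside, a common way to collapse three estimates into one number (not the method used later in this article, but the classic PERT three-point formula) is a weighted average leaning toward the most probable estimate:

```python
# PERT (three-point) estimate: E = (O + 4M + P) / 6, using the
# 14/20/31-hour estimates from the example above.
o, m, p = 14, 20, 31
expected = (o + 4 * m + p) / 6
std_dev = (p - o) / 6  # rough measure of the estimate's spread
print(f"expected = {expected:.1f} h, std dev = {std_dev:.1f} h")
```

Note how the result lands close to the most probable estimate but is pulled slightly upward by the long pessimistic tail.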

How to move to the next stage?

By designing the UI. Creating wireframes would be the best way to go.

There are multiple programs for that but I’d recommend Balsamiq and Axure RP.

Prototyping is another huge topic that is not for this article.

Having a wireframe means that we’re on the next stage.

Stage 4. The interface is designed

We have a wireframe here as well as the full list of what each user will do in the system. We also have a list of non-functional requirements.

When does it make sense to make estimations on this stage?

To create an exact estimation for the Fixed Price model. Everything mentioned in the previous stage applies here as well.

What do you need in order to estimate at this stage?

  • Prepared wireframes
  • A list of functional requirements
  • A list of non-functional requirements

What tools are the most suitable for this stage?

  • An estimation by analogy
  • A top-to-bottom estimation

Estimation algorithm

The same as at the previous stage. The difference is in accuracy: with a designed interface you don't have to imagine as much, and the chance of missing something is lower.

How to move to the next stage?

Design the app architecture and thoroughly think through the realization. We won't cover that option here, as it's used quite rarely. That said, the estimation algorithm after designing the architecture doesn't differ from the one at this stage; the difference, once again, is increased accuracy.

Retrieving one estimation from the range of estimations

Once you have all three estimations ready, you can use Tom DeMarco's approach to retrieve a single one. In his book "Waltzing with Bears" he shows that the cumulative probability can be obtained by integrating the area under the curve (from the graph we had before). The original calculation template can be downloaded from here or from here without registration. Insert the three numbers into the template and you'll receive a list of estimations with their corresponding probabilities.

For example, for our estimations of 14, 20, and 31 hours we’ll have something like this:

You can choose any probability you deem decent for your organization, but I'd recommend 85%.
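If you don't have the spreadsheet at hand, a quick Monte Carlo simulation gives the same kind of answer. This sketch assumes a triangular distribution over the three estimates, which is a simplification of DeMarco's model:

```python
import random

# Sample a triangular distribution built from the three estimates
# (14 optimistic, 20 most probable, 31 pessimistic hours) and read
# off the 85th percentile of the sorted samples.
random.seed(1)
optimistic, most_probable, pessimistic = 14, 20, 31
samples = sorted(
    random.triangular(optimistic, pessimistic, most_probable)
    for _ in range(100_000)
)
p85 = samples[int(0.85 * len(samples))]
print(f"85% chance of finishing within ~{p85:.0f} hours")
```

For these numbers, the 85% commitment comes out around 26 hours – noticeably above the "most probable" 20.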

Don’t know how to estimate? Speak up!

If you don't understand what you're being asked, or don't know how to implement the functionality you need to estimate, let your manager know. Give an approximate estimation if possible, and suggest actions that would make it more precise.

For example, if you're not sure the technology can handle the task, ask for time to build a prototype that will either confirm your estimation or show what you've missed. If you're not sure the task is doable at all, say so from the beginning. These things need to be confirmed before you take their weight on your shoulders.

It's very important to provide the manager with this information; otherwise, they may blindly trust you and have no idea that there's a chance of missing the deadline by 500%, or of not finishing the product at all with the current technology or requirements.

A good manager will always be on your side. You're in the same boat, and sometimes their career depends on whether you finish on time even more than yours does.

Doubts? Don’t promise

Many organizations and developers help their projects fail by making commitments too early on the cone of uncertainty. It's risky, since the possible outcome ranges between 100% and 1600% of the estimate.

Efficient organizations and developers postpone decision making up until the moment when the cone is narrower.

Usually, this is typical of organizations at the more mature CMMI maturity levels: their actions to narrow the cone are clearly defined and followed.

You can see how the quality of estimations increased in U.S. Air Force projects when they moved to a more mature CMMI level:

There’s something to think about here. Other companies’ statistics confirm this correlation.

Even so, accurate estimations can't be achieved with estimation methods alone. Accuracy is inextricably linked to the efficiency of project management, and it depends not only on developers but also on project managers and senior management.


  • It's nearly impossible to give a perfectly correct estimation. You can, however, affect the range in which it will fluctuate. To do that, try to lower the level of uncertainty on the project
  • You can make an estimation more accurate by splitting tasks into components. As you decompose, you think through what you will do and how, in detail
  • Use checklists to lower the chance of missing something as you estimate
  • Use the cone of uncertainty to understand the range in which your estimation will most probably fluctuate
  • Always compare the estimate you gave against the time actually spent on the task. It will help you improve your estimation skills, understand what you missed, and apply that knowledge going forward.

Useful books

There is a lot of literature on the topic, but I'll recommend two must-read books.

  • Software Estimation: Demystifying the Black Art by Steve McConnell
  • Waltzing With Bears: Managing Risks On Software Projects by Tom DeMarco and Timothy Lister.

9 little things to make your iOS application cooler

Apple is the leader of the phone market not just because they produce high-quality smartphones, but also because, unlike other companies, they pay attention to details. I'll show you how to make an Apple-like application. All we'll need is a couple of lines of code, nothing too complicated. You don't need any external libraries; whatever Apple provides is enough.

1. Taptic engine

The Taptic Engine is Apple's vibration engine, first integrated into the iPhone 6s. It's a small motor that can produce different kinds of vibration. The best thing about it is that Apple allows developers to work with it.

Use scenarios:

  1. When you press a button. Your app will be way more appealing if it doesn’t only respond to a user doing something by changing the content on the screen, but if it also responds physically.
  2. When you scroll the content. A lot of people own wristwatches. Do you enjoy the sound when you wind yours? So why not add it to your app? This mechanic allows you to help a user dive into content more, it becomes more interesting for him to scroll down the feed. Thus, we make the user stay in our app for a longer period of time.
  3. When an error appears. You always have to put some effort into making sure your program doesn’t have errors. However, there are situations where the user is the one responsible. For instance, if they entered the wrong password. Of course, we’ll show a pop-up notifying them of that, but we can also do that using our engine.

Taptic engine helps add the Apple magic we all know and love.


let mediumGenerator = UIImpactFeedbackGenerator(style: .medium)
mediumGenerator.impactOccurred()

2. Spotlight indexing

Indexing the data within the iOS app

What's your iOS device memory capacity – 128 GB, 256 GB, more? How many apps are on your smartphone – 50? 100? Can you imagine the amount of data stored on your phone? To keep users from getting lost in that stream of information, Apple added Spotlight.

Spotlight is a mechanism that lets you find data on a device running macOS, iOS, or iPadOS. Out of the box, Spotlight only helps locate the app itself, but iOS 9 introduced the ability to index the data within those apps.

Unfortunately, not all apps index their content, so let's be among the first and get ahead of the competition!

Is your app a mail aggregator? Let’s search in letters! There are dozens of different ways to use Spotlight. What we have to do is accentuate the main task of the app.


import CoreSpotlight
import MobileCoreServices

Add an index now.

func indexItem(title: String, desc: String, identifier: String) {
    let attributeSet = CSSearchableItemAttributeSet(itemContentType: kUTTypeText as String)
    attributeSet.title = title
    attributeSet.contentDescription = desc
    let item = CSSearchableItem(uniqueIdentifier: identifier, domainIdentifier: "com.uniqeCode", attributeSet: attributeSet)
    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error = error {
            print("Indexing error: \(error.localizedDescription)")
        } else {
            print("Search item successfully indexed!")
        }
    }
}

Now, processing the app opening with a unique index.

func application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
    if userActivity.activityType == CSSearchableItemActionType {
        if let uniqueIdentifier = userActivity.userInfo?[CSSearchableItemActivityIdentifier] as? String {
            // Open the screen that shows the item with this identifier
        }
    }
    return true
}

3. Animation upon pressing

The animation that Apple provides is very simple.

basic iOS button animation

I suggest we improve it a bit. Why? It's much more pleasant for a user when an item slightly changes its form on touch. It creates a kind of connection between the application and the user.

custom ios button animation


extension UIView {
    func addAnimate() {
        let xScale: CGFloat = 1.025
        let yScale: CGFloat = 1.05
        UIView.animate(withDuration: 0.1, animations: {
            self.transform = CGAffineTransform(scaleX: xScale, y: yScale)
        }) { _ in
            UIView.animate(withDuration: 0.1) {
                self.transform = .identity
            }
        }
    }
}
Do not forget about point one of this article! The combination of animation and the Taptic Engine is simply amazing.

4. Permission requests

iOS location access request

No one likes to share their geolocation but we still have to, otherwise, the maps won’t work.

Now, imagine: your app works with a camera, microphone, geolocation, contacts. So when do we ask permission from a user?

Bad decision:

Ask permission for everything at the first launch

  • + Quite fast to implement
  • – Users develop a negative attitude toward the app, as they don't understand why it needs all this
  • – The developer still has to check permissions before the actual module use

Optimal decision:

Request permission before the actual use

  • + The user's trust isn't undermined
  • + The developer doesn't do double work

Advanced decision:

Onboarding that clearly explains where and why each permission will be used.

iOS mic access request during onboarding
  • + The user understands exactly why each permission is requested
  • + The program becomes more user-friendly
  • – Development takes a lot of time
  • – The developer does double work, as they have to check permissions before the actual module use anyway

I think that the Optimal decision strategy is the best here.

5. Home Screen Quick Actions

iOS quick actions from home screen

iPhones have a 3D Touch function (replaced by Haptic Touch on modern models). Roughly speaking, this technology senses how hard you press the screen, and it can be integrated into an app: for example, a hard press on an element triggers an event. It never got wide recognition, though. I believe that's because users have to guess on their own whether a button has hidden functionality. Therefore, this feature isn't high on the priority list.

However, it's different on the Home screen, where every icon supports the "hard press". Even if the developer hasn't done anything, the Home screen provides the following actions:

  • Change the Home screen;
  • Share application;
  • Delete application.

Starting with iOS 9, this functionality can be extended with custom actions of your own. As a rule of thumb, the app's main features go there. For example, this is what Instagram offers:

  • New post;
  • Check actions;
  • Direct.

Pressing any of those will take you to the corresponding event.

You may find the Apple documentation down below. Although it might seem like a lot of code, the implementation won't take much time.

Apple documentation

6. Dark Mode

iOS fans had been waiting for a dark theme for years. Developers hadn't.

Starting with iOS 13, the system has two modes: light and dark. Change it in the settings, and all applications switch their interface. Roughly speaking, if a button is blue in light mode, it can turn red once you switch to dark mode.

iOS dark theme

I switch modes quite a lot on my iPhone. When I see that an app changes color on its own, I'm happy: you can tell the developers went the extra mile. It's a small but nice addition.

Light-on-dark color scheme on Gmail

Let's see how this works, taking Instagram as an example:

Night mode on Instagram

In my new project, I decided to work with colors differently. People used to create a separate file with app colors; I created a Color Set instead. The app currently supports only the dark theme, but if we urgently need to add a light one, it'll take no more than 30 minutes. You just have to know which color is used where in the light theme.

iOS color set for dark and light mode

Now the color is always red regardless of the theme. If the light theme needs yellow instead of red, I only change the color here – nothing inside the code.

This solution increased development time by 30 minutes. But if we decide to add the light theme, we'll save about 20 hours!

7. iOS Modal Sheets

There are two ways a new window can appear in iOS – sliding in from the right or rising from the bottom.

First option:

iOS right modal sheet

Second option:

iOS bottom modal sheet

We'll be talking about the second option. Prior to iOS 13, a new window simply opened on top of the previous one, covering the whole screen. Starting from iOS 13, it works differently by default.

iOS 13 modal sheet

The gif shows that a window opens on top of the previous one but doesn't cover the entire screen. This is called a modal sheet. In summer 2019, developers rushed to "fix" it by adding the attribute vc.modalPresentationStyle = .fullScreen, which brought back the old way of opening windows shown in the second gif.

With that attribute, a new window opens full screen again. It was a quick fix to avoid bugs. Why? Because fullScreen forces you to add your own close-window button, and handling its tap is easy. With a modal sheet, the user can simply drag the window down, and iOS will close it and remove it from memory. If you don't handle that dismissal, it can cause bugs – uncontrollable behavior, for instance.

This way of closing windows can be controlled via delegate:

extension BaseViewController: UIAdaptivePresentationControllerDelegate {
    func presentationControllerDidDismiss(_ presentationController: UIPresentationController) {
        // Handle the swipe-down dismissal here
    }
}

Logic has to be inserted here. For example, the same as the “close” button has.

So use modal sheets and avoid fullScreen when you can. Modal sheets make an app feel fresher and more modern, and the UX matches Apple's own applications.

iOS modal sheet design example

8. System font size

Did you know that you can change the font size in iOS? This function is great if you have trouble seeing small objects. The size changes in all system applications.

That's not all! You can make your app's font size depend on the system setting. This improves interaction with your app, especially if there's a lot of text in it.

You might ask: isn't it easier to just use a bigger font from the start? No, it's not. I, for example, don't like huge letters. Let's think about all users – and get even more of them!

This is the technology description from the official documentation.

9. Password AutoFill and Strong Password

iOS suggest a strong password feature

Why will I never move to Android? There are more than 300 accounts in my password list, and I bet you have quite a few too. That's it, no more questions – it's just convenient.

For those who don't know what I'm talking about, I'll explain. Your logins and passwords are stored in a secure place. Why is it secure? Ask Apple.

You no longer need to write passwords down on a piece of paper (may my grandfather forgive me for this), nor come up with them yourself. Do you use the same password everywhere? Congratulations, you're at risk. This mechanism generates a strong password for you and automatically adds it to Keychain. On your next login, the system will suggest the saved password for authorization.

In order for this to work, you need to add Associated Domains on the server and list them under Capabilities in the app.

Don't forget to set the content type (textContentType) on the input fields in your iOS app.


We've explained how small features can make an application much more appealing. Don't forget the small things – they're what make your application "huge"!

Want to get more tips on iOS development? Check out these articles:

AVFoundation Tutorial: How to apply an effect to a video in iOS

WebRTC in iOS: Technology overview in plain words

In-App Purchase in iOS apps: how to avoid 30% Apple Pay commission


How to apply an effect to a video in iOS

superpower effect

Have you ever thought about how videos are processed? What about applying effects? In this AVFoundation tutorial, I’ll try to explain video processing on iOS in simple terms. This topic is quite complicated yet interesting. You can find a short guide on how to apply the effects down below.

Core Image

Core Image is a framework by Apple for high-performance image processing and analysis. The CIImage, CIFilter, and CIContext classes are the main components of this framework.

With Core Image, you can chain different filters together (CIFilter) to create custom effects. You can also make effects run on the GPU (graphics processor), which offloads work from the CPU (central processor) and increases app speed.


AVFoundation

AVFoundation is a framework for working with media files on iOS, macOS, watchOS, and tvOS. With AVFoundation, you can easily create, edit, and play QuickTime movies and MPEG-4 (MP4) files. You can also play HLS streams (read more about HLS here) and build custom audio and video features, such as players and editors.
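For instance, playing an HLS stream takes just a few lines (the stream URL below is a placeholder):

```objc
// AVPlayer handles both local MP4 files and HLS streams.
NSURL *url = [NSURL URLWithString:@"https://example.com/stream/master.m3u8"];
AVPlayer *player = [AVPlayer playerWithURL:url];

// Attach the player to a layer so the video is visible on screen.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];

[player play];
```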

Adding an effect

Let’s say you need to add an explosion effect to your video. What do you do?

First, you’ll need to prepare three videos: the main one where you’ll apply the effect, the effect video with an alpha channel, and the effect video without an alpha channel.

An alpha channel is an additional channel that can be integrated into an image. It contains information about the image’s transparency and can provide different transparency levels, depending on the alpha type.

We need the alpha channel so that the effect video doesn’t completely cover the main one. Here is an example of a picture with and without the alpha channel:

a picture with the alpha channel and without it

The whiter the color, the less transparent it is. Therefore, black is fully transparent, whereas white is not transparent at all.

After applying the mask, we’ll see only the explosion itself (the white part of the image on the right), and the rest will be transparent, letting us see the main video underneath.
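In other words, for each pixel the mask decides how the two videos mix; conceptually, a masked blend computes something like:

```
output = mask × effect + (1 − mask) × background
```

Where the mask is white, the effect shows through at full strength; where it is black, only the background video remains.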

Then, we need to read all three videos simultaneously and combine the frames using CIFilter.

First, we get a reference to a CVImageBuffer via CMSampleBuffer; we need it to manage different types of image data. A CVPixelBuffer, which we’ll need later, is a CVImageBuffer that holds its pixels in main memory. From each CVImageBuffer we create a CIImage. In code, it looks something like this:

// The main video frame (background)
CVImageBufferRef imageRecordBuffer = CMSampleBufferGetImageBuffer(recordBuffer);
CIImage *ciBackground = [CIImage imageWithCVPixelBuffer:imageRecordBuffer];

// The effect video frame (foreground)
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(buffer);
CIImage *ciTop = [CIImage imageWithCVPixelBuffer:imageBuffer];

// The alpha-channel video frame (mask)
CVImageBufferRef imageAlphaBuffer = CMSampleBufferGetImageBuffer(alphaBuffer);
CIImage *ciMask = [CIImage imageWithCVPixelBuffer:imageAlphaBuffer];

After obtaining a CIImage for each of the three videos, we composite them using CIFilter. The code will look roughly like this:

CIFilter *filterMask = [CIFilter filterWithName:@"CIBlendWithMask"
                                  keysAndValues:@"inputBackgroundImage", ciBackground,
                                                @"inputImage", ciTop,
                                                @"inputMaskImage", ciMask, nil];
CIImage *outputImage = [filterMask outputImage];

Once again we receive a CIImage, but this time it combines the three CIImages we created earlier. Next, we render the new CIImage into a CVPixelBufferRef using CIContext. The code will look roughly like this:

CVPixelBufferRef pixelBuffer = [self.contextEffect renderToPixelBufferNew:outputImage];
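Note that renderToPixelBufferNew: is the author’s custom helper, not a Core Image API. A possible implementation, sketched under the assumption that self.ciContext holds a CIContext and that the output buffer should match the image’s extent, could allocate a pixel buffer and ask the context to render into it:

```objc
- (CVPixelBufferRef)renderToPixelBufferNew:(CIImage *)image {
    CVPixelBufferRef pixelBuffer = NULL;
    NSDictionary *attrs = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferCreate(kCFAllocatorDefault,
                        (size_t)image.extent.size.width,
                        (size_t)image.extent.size.height,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs,
                        &pixelBuffer);
    // Render the composited CIImage into the buffer (on the GPU).
    [self.ciContext render:image toCVPixelBuffer:pixelBuffer];
    return pixelBuffer; // the caller is responsible for releasing it
}
```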

Now, we have a finalized pixel buffer. We append it to the asset writer, and we receive a video with the effect applied.

// Append the frame at the right timestamp (timescale 30, i.e. 30 fps)
[self.writerAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(self.frameUse, 30)];

The effect is now successfully added to the video. Notably, the processing ran on the GPU, which took the load off the CPU and therefore increased the app speed.


Adding effects to videos in iOS is quite a complicated task, but it can be done if you know how to use the basic frameworks for working with media in iOS. If you want to learn more about it, feel free to get in touch with us via the Contact us form!


How to Simulate Low Network Speed to Test Your Mobile Application?

When testing mobile apps, newbie QA engineers frequently forget to check the app under an unstable Internet connection. But in many cases this is critical: connection speed directly influences the user experience and the workability of core features. This is especially true for applications that rely heavily on geolocation and mobile Internet, for example, video chats, messengers, and other multimedia products we specialize in.

In this article, we’ll show how to spoil the Internet on a test device with no hassle. 



Let’s start with Network Link Conditioner, a standard utility for testing iOS apps. It lets QA engineers adjust the Internet connection as needed.

To switch on this function on an iPhone, you need a macOS device:

  1. Download and install Xcode on your Mac
  2. Open Xcode
  3. Connect your iPhone to the Mac
  4. Allow the Mac to access the iPhone
  5. Open Settings on the iPhone
  6. Scroll down
  7. Tap “Developer”
  8. Tap “Network Link Conditioner”
  9. Pick a network preset or create your own
  10. Switch on the “Enable” toggle

iOS lets us choose one of the pre-installed connection-quality presets or create our own.

For a custom preset, settings such as bandwidth, packet-loss percentage, and delay are available for both downlink and uplink.

As you can see, Apple took care of testing apps under different levels of connection quality and provides almost all the settings you might need.

Having gotten acquainted with Network Link Conditioner on iOS, we were sure Android would have a similar feature. God, how wrong we were.


It turned out to be impossible to emulate a slow or unstable connection on a real Android device using standard tools. So I had two options: download an app from Google Play that emulates a slow connection, or precisely tune the Internet connection at the access point.

Apps didn’t work out for me ☹ All the apps that offer this function require Root access, which breaks the concept of testing in real-world conditions.

So, leaving Root access as a last resort, I decided to take a closer look at option #2: tuning the access point.

Back in my student days, mobile data would run out quickly (and we needed something to read or watch during lessons), so we used an iPhone as an access point. An idea came to mind: combine that student experience with the recently gathered knowledge.

Using Network Link Conditioner with an access point made from a macOS or iOS device requires no extra knowledge and is easy to set up. Exactly what’s needed when we want to save time.

So, to emulate a bad connection on Android, we need an Android device and… an iPhone with Developer Tools switched on.

  1. Make the iPhone an access point (Settings > Personal Hotspot)
  2. Adjust the connection with Network Link Conditioner
  3. Connect the Android device to the access point
  4. Done. You’re awesome 🙂

Of course, the ways to break the Internet we considered in this article are not the only solutions. We’ll cover more complex options, such as Android and iOS emulators, in the next article.

Thanks and see you soon!
Always yours, 
Dima and the Fora Soft QA team