How to Choose a Live Streaming Protocol? RTMP vs HLS vs WebRTC

cover image

When streaming video, delivery largely depends on which live streaming protocol is used. Each protocol performs differently in terms of latency, scalability, and supported devices. While there’s no universal solution for every live streaming need, you are free to pick the best match for your specific requirements.

How does streaming work?

For the end user, live streaming is all about a quasi-real experience, with a screen and a camera standing between them and the source. In reality, there’s much more going on backstage.

It does start with capturing whatever is about to go live on camera. It does end with playing the content on an end-user device. But to make this happen, four more milestones have to be completed.

  1. Once even a byte of information has been recorded, video and audio get compressed with an encoder. In short, an encoder converts raw media material into digital information that can be displayed on many devices. It also reduces the file size from GBs to MBs to ensure quick data delivery and playback.
  2. Compressed data then gets ingested into a streaming platform for further processing. 
  3. The platform resource transcodes the stream and packages it into its final format and protocol. The latter is basically the way data travels from one communicating system to another. 
  4. To get to its end user, the stream is then delivered across the internet, in most cases via a content delivery network (CDN). In turn, CDN is a system of servers located in different physical locations.
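The four milestones above can be sketched as a chain of processing stages. This is purely illustrative Python; the function names, data shapes, and the CDN URL are made up for the example, not a real streaming API:

```python
# Illustrative sketch of the live streaming pipeline described above.
# Every name and data shape here is invented for demonstration.

def encode(raw_frames: list) -> bytes:
    """Compress raw media into a compact bitstream (e.g. H.264 + AAC)."""
    return b"".join(f[:10] for f in raw_frames)  # pretend compression

def ingest(bitstream: bytes) -> dict:
    """Push the encoded stream to the platform (e.g. over RTMP)."""
    return {"stream": bitstream, "protocol": "rtmp"}

def transcode_and_package(ingested: dict) -> dict:
    """Transcode into several renditions and package into a delivery protocol."""
    return {"renditions": ["1080p", "720p", "480p"], "protocol": "hls"}

def deliver_via_cdn(packaged: dict) -> str:
    """Replicate segments to edge servers located close to viewers."""
    return f"https://cdn.example.com/live/master.m3u8 ({packaged['protocol']})"

raw = [b"frame-bytes-0" * 100, b"frame-bytes-1" * 100]
url = deliver_via_cdn(transcode_and_package(ingest(encode(raw))))
print(url)
```

Each stage here is a stub, but the shape of the chain — encode, ingest, transcode and package, deliver — mirrors the four milestones a real stream goes through.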

Aaand, cut! Here’s where a user sees the livestream on their device. A long way to go, huh? In fact, this journey may take anywhere from less than 0.5 seconds to 1-2 minutes. This delay is called latency, and it varies from one protocol to another, as do many other parameters.

There are quite a few live streaming protocols to choose from, but we’ll elaborate on the three most commonly used: RTMP, HLS, and WebRTC. In short, here’s the difference:

protocols comparison
Live Streaming protocols (RTMP, HLS, WebRTC) comparison

Now in detail.


Real-Time Messaging Protocol (RTMP) is probably the most widely supported media streaming protocol on both sides of the process. It’s compatible with many professional recording devices and also quite easy to ingest into streaming platforms like YouTube, Twitch, and others.

Supporting low-latency streaming, RTMP delivers data at roughly the same pace as cable broadcasting: it takes only about 5 seconds to transmit information. This is due to its firehose approach, which builds a steady stream of available data and delivers it to numerous users in real time. It just keeps flowing!

Yet this video streaming protocol is no longer supported by browsers. As a result, the stream has to be converted and transcoded into an HTTP-based technology, which prolongs the overall latency to 6-30 seconds.


HTTP Live Streaming (HLS), developed by Apple as part of its QuickTime, Safari, OS X, and iOS software, is a perfect pick if you stream to a large audience, even millions of viewers at a time. HLS is supported by all browsers and almost any device (set-top boxes, too).

The protocol supports adaptive bitrate streaming, which provides the best possible video quality regardless of connection, software, or device. Basically, it’s the key to the best viewer experience.
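The core idea behind adaptive bitrate is simple enough to sketch: the player measures its available bandwidth and picks the highest rendition that fits. The bitrate ladder below is a made-up example, not an HLS requirement:

```python
# Minimal sketch of adaptive bitrate rendition selection.
# The bitrate ladder is an illustrative example.

RENDITIONS = [  # (name, required bandwidth in kbit/s), highest first
    ("1080p", 5000),
    ("720p", 2800),
    ("480p", 1400),
    ("240p", 400),
]

def pick_rendition(measured_kbps: float) -> str:
    """Return the best rendition the measured bandwidth can sustain."""
    for name, required in RENDITIONS:
        if measured_kbps >= required:
            return name
    return RENDITIONS[-1][0]  # fall back to the lowest rung

print(pick_rendition(3200))  # a mid-range connection
print(pick_rendition(300))   # a poor connection
```

A real player re-measures bandwidth continuously and switches renditions between segments, so quality follows the connection up and down without interrupting playback.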

Usually, to cover a large, geographically distributed audience stably and with the lowest latency, a CDN is used.

The only major drawback of HLS is its latency: prioritizing quality, it may reach 45 seconds when used end to end.

Apple presented a solution in 2019: Low-Latency HLS shrinks the delay to less than 2 seconds and is currently supported by all browsers, as well as Android, Linux, Microsoft and, obviously, macOS devices, plus several set-top boxes and smart TVs.


The RTC in WebRTC stands for “real-time communication”, which suggests this protocol is a perfect fit for an interactive video environment. With the minimum latency possible (less than 0.5 seconds), WebRTC is mostly used for video conferencing with a small number of participants (usually six or fewer). We use it a lot in our video conferencing projects; check out the ProVideoMeeting or CirrusMED portfolios.

But media servers like Kurento allow scaling WebRTC to an audience of up to 1 million viewers. A more popular approach, though, is to use WebRTC for ingest and repackage the stream into more scalable HLS.

Apart from providing the absolute minimum latency, WebRTC doesn’t require the streamer to have any additional recording equipment.


Thus, when choosing a live streaming protocol for your streaming goals, ask yourself these questions:

  • What kind of infrastructure am I willing to establish or already have? What recording devices and software are at my disposal?
  • What kind of content do I want to deliver, and which latency is acceptable? How “live” do I want the streaming to be?
  • What system scalability do I expect? How many users do I estimate will watch the stream at a time?

One protocol may not satisfy all your needs. But a good mix of two can. Hit us up to get a custom solution to turn your ideas into action.


How We Optimized Analytics Process in Software Development Company

Wireframing is one of the key stages of the analytics process at Fora Soft. An interactive prototype, as a rough image of the future product, gives the team room to alter functionality at the earliest stage, before development. It also contributes to the client’s confidence that we’re on the same page regarding the product vision.

axure wireframe
Axure prototype example

Whatever systems and platforms you develop, for whatever audiences and user needs, you still have to design core functionality and basic scenarios every time.

Imagine how big the difference is between the functionality of an e-learning platform, a booking app, and a streaming service. Yet most likely all of them will feature a sign-up scenario, user profile editing, and chat messaging.

Having analyzed the wide scope of company projects, the Fora Soft analyst team has identified the most common user scenarios and created complete wireframe templates for them, following usability guidelines and the best practices in web and mobile UX design.

These ready-to-use wireframe templates help optimize the analytics process at Fora Soft. They allow us to dedicate more time and effort to the product’s killer features rather than its basic functionality. The templates are concise and flexible enough to be applied to different kinds of systems.

Among them, you will find a video conference template. Our area of expertise is multimedia software such as systems for online education, sports, teamwork, telemedicine, and video surveillance. 
Check the Fora Soft wireframe library here and try its templates to design your amazing projects.

Download the kit right here:


The analytical stage of software development at Fora Soft

At Fora Soft, the first person you will work with on your idea is an analyst. Bringing a concept for a unique product to reality is typically one of the most difficult challenges for entrepreneurs, so a professional team is needed, and the first steps require an analyst who can lead you through all the challenges along the way. In this article, we will look at the value that analysts at Fora Soft bring to your project, as well as the possible negative scenarios of skipping systems analysis.

From Idea to MLP (Minimum Loveable Product)

Product development usually follows a process separated into stages or steps, through which a company proceeds:

  •  product concept (idea generation) 
  •  research (product validation: it ensures you’re creating a product people will pay for, and that you won’t waste time, money, and effort on an idea people don’t need)
  •  project planning
  •  prototypes
  •  designs
  •  development
  •  tests
  •  launch into the market
Product development process scheme

What if it seems that the idea is already sharpened? Why do I need the analytics process then?

According to Info-Tech research, poor requirements are the reason for 70% of unsuccessful software projects. This may lead to financial losses, wasted time and effort, and disillusionment (we will take a closer look at the possible consequences below). You can avoid these stumbling blocks and ensure best-practice approaches by working closely with an analyst at the outset of a project.

Possible outcomes of skipping the analysis stage

Here is a list of possible struggles you may face if you skip the analytical stage of software development:


  • Inability to logically structure the development process
  • Postponement of the current release
  • Inability to plan long-term


  • Wasted development hours on functionality redesign
  • Radical changes to the initial estimate
  • Unrealistic time and cost estimates due to a lack of requirements decomposition


  • Team members may sit idle and be removed from the project due to a lack of tasks in the current time period
  • The team doesn’t include an essential specialist because no specific requirements were gathered for a particular feature at the very beginning
  • Some tasks may require parallel development
  • Constant hole-patching instead of building a clear, coherent vision

Relationship damage

  • with the customer 
  • within the team


  • Customer and team have different product vision
  • Fragmented documentation 


  • Choice of more expensive functionality instead of simple and elegant solutions
  • Architectural limitations
  • Inconsistency of the requirements and logical holes

Preparation stage

So, as we have identified the significance of the analytical stage of software development, let’s dive into the details. 

First of all, we will review the initial requirements and understand your vision and product concept. As the next step, the analyst will research:

  • Target audience and their pains
  • Competitors
  • Best practices in the industry

That allows us to find unique selling points. A unique selling point (USP) describes your company’s distinct market position, getting to the heart of your offering: the value you provide and the problem you address. A good USP clearly articulates a distinct benefit – one that competitors do not provide – that distinguishes you from the competition.

Requirement analysis

At the next stage, the analysis phase, we start to design the system, beginning with requirements preparation. Requirements analysis is a vital procedure that determines the success of a system or software project. There are two sorts of requirements: functional and non-functional.

Non-functional requirements: These are the quality constraints that the system must meet in accordance with the project contract. The priority or extent to which these aspects are incorporated varies from project to project. They are also called non-behavioral requirements.

Functional Requirements: These are the requirements that the end user directly requests as basic system facilities. These are expressed or described as input to be delivered to the system, operation to be conducted, and expected output. In contrast to non-functional requirements, they are essentially the user-specified criteria that can be seen immediately in the finished product.

Functional requirements are captured in the form of user stories: summaries of needs or requests written from the perspective of a particular product user. Stories are grouped into subsections called epics. The main goal after that is maximum added value with minimal applied effort, which is achieved by task prioritization.
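As an illustration, prioritization is often done by value-to-effort ratio; in this sketch the stories and their scores are invented for the example:

```python
# Illustrative backlog prioritization by value-to-effort ratio.
# Story names and the value/effort numbers are invented.

stories = [
    {"story": "As a user, I can sign up with email",   "value": 8, "effort": 2},
    {"story": "As a user, I can edit my profile",      "value": 3, "effort": 3},
    {"story": "As a user, I can video call a friend",  "value": 9, "effort": 8},
]

# Highest value per unit of effort first:
# maximum added value with minimal applied effort.
backlog = sorted(stories, key=lambda s: s["value"] / s["effort"], reverse=True)

for s in backlog:
    print(f'{s["value"] / s["effort"]:.2f}  {s["story"]}')
```

Cheap, high-value stories rise to the top of the backlog, so the team ships the most noticeable improvements earliest.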


After a couple of iterations and clarifying the requirements, we will do the wireframe. 

A wireframe is a form of interactive prototype that has no user interface design: no colors, fonts, or style, just functionality. Consider wireframes the skeleton of your product. They give you a good idea of where everything will end up by roughly shaping the final product. At the wireframe stage, it is easier and cheaper to evaluate and alter the structure of the essential pages. Iterating the wireframes to a final version gives the client and design team confidence that each page or tab meets user needs while also achieving the primary business and project goals. Check the example via the link.

At this stage, the analyst reviews the UX industry’s best practices and searches for a unique selling point. For mobile products, we refer to the Apple Human Interface Guidelines and Google Material Design. These guidelines were developed to expedite resolving user pains: they specify navigation and interaction concepts, interface components and their styles, typography and iconography, color palettes, and much more. Furthermore, since everything described in the guidelines is frequently already implemented as a code component, the developer does not need to spend time creating it from scratch.

During these stages, business analysts will be consulting technicians, designers, marketers and other specialists to find the most suitable and elegant solutions.


Once the wireframe fully adheres to the user stories and you are satisfied with the requirements, we move to the QA stage to ensure the quality and consistency of the prototype. We check the requirements for:

  • Completeness. A set of requirements is regarded as complete if all of its basic pieces are represented and each component is completed with a logical end.
  • Unambiguity. Each component must be clearly and precisely stated, allowing for a distinct interpretation. The request should be legible and comprehensible.
  • Consistency. Requirements should not conflict with one another or with the wireframe.
  • Validity. Requirements should meet the expectations and needs of the final user.
  • Feasibility. The scenarios must be possible to implement.
  • Testability. We should be able to create economically feasible and simple-to-use tests for each requirement, showing that the tested product meets the required functionality, performance, and current standards. This implies that each claim must be measurable and that testing is carried out under appropriate conditions.

Testing requirements is a proven way to avoid problems during the development stage. It is at this point that continuous testing begins, in order to ensure the requisite quality of the created product and to avoid business risks. It’s always better to find all hidden dangers at the analysis stage rather than during software development.

Concept design

Optionally, as part of the analytical process, you may request a concept design of the product. Conceptual design is an early stage of the design process in which we establish the broad outlines of a product’s purpose and form. It entails comprehending people’s needs and determining how to address them through products. Concept designs are pictures that demonstrate the product’s “mood” and colors in further depth.


A concept design may include:

  • An updated logo, if you don’t have your own
  • Corporate identity elements: patterns, slides with a slogan that reflects the concept (the options and the number of pictures depend on the concept and product)
  • The UI of the first one or two main screens of the application/platform

You can check the example via the link.

This is the final stage of the analytical process. After it, you will have a full, clear vision and be totally ready for the development process. As the next step, you will receive an estimate from our sales manager.


The better the team understands the big picture, the better the final product will be. It is crucial to have solid relationships and a deep level of understanding between the team and the customer, and that is what an analyst provides. The price and time estimates you get from the development team are only as precise as the requirements. After analytics, it’s possible to give an estimate with about ±10% deviation. This helps ensure improved cost management, on-time delivery, and meeting business goals.

So if you feel like talking to our analysts and getting your wireframe, don’t hesitate to hit us up using the contact form.


Why We Have To Know The Number of Active Users In Your App

When clients initially come to us, one of the first questions they hear is: “How many people do you expect to be using your app in the first month?” Or: “How many are likely to be using it simultaneously?”

Many people answer reluctantly and uncertainly, with responses ranging from “why do you need that” to “you’re the developers, you know better”. Meanwhile, an exact answer can save the client money, quite a lot of it, actually. Sometimes it can even help earn more.

Is it possible to save money or even make more by answering this question?

Now, let’s talk money, since we want the business to be profitable, right? Information about the number of users not only helps your project team but also helps you save, or even earn, money. How?

By knowing how many people will use the platform, we can:

  • Calculate the necessary server capacity, so the client won’t have to overpay for unused resources;
  • Build a scalable system architecture;
  • Estimate the costs of load testing;
  • Build a development plan that will allow the project to go to market (or present a new version to users) as quickly as possible.
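To illustrate the first bullet, a back-of-the-envelope capacity estimate might look like this; every per-user figure here is an assumption you would replace with real measurements:

```python
# Back-of-the-envelope server capacity estimate.
# All per-user and per-server figures below are illustrative assumptions.
import math

concurrent_users = 1000      # expected peak online users
kbps_per_viewer = 3000       # one 720p-ish stream per viewer, in kbit/s
server_egress_gbps = 10      # a single server's network link capacity
headroom = 0.7               # keep 30% spare for traffic spikes

needed_gbps = concurrent_users * kbps_per_viewer / 1_000_000
usable_per_server = server_egress_gbps * headroom
servers = math.ceil(needed_gbps / usable_per_server)

print(f"Peak egress: {needed_gbps:.1f} Gbit/s -> {servers} server(s)")
```

Even this crude estimate shows why the user count matters: halving or doubling the expected audience changes the server bill directly.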

So, what are we doing here? We’re saving money by eliminating unnecessary costs now and by planning the implementation of future features.

It also helps to make money by ensuring a quicker time-to-market (TTM), and it provides confidence that the platform is meeting its goals.

What exactly are we asking?

Depending on the specifics of the platform, it’s important for us to know:

– The maximum number of platform users per month;

– The maximum number of users on the platform online at one time;

– What exactly users are doing on the platform: e.g., posting content, making calls, logging in to a game, and how many times a day;

– The expected dynamics of audience growth.

What if I really don’t know?

If your project is already live, chances are there are analytics out there. Google Analytics or its counterparts allow you to estimate the number of users quickly and accurately. 

If not, you can rely on more technical data: information from databases, server load statistics, or summaries from the cloud provider console, and so on.

If you need our team to create a project from scratch, it makes sense to look at competitors’ statistics, for example using a service like SimilarWeb. If for some reason this is not possible, plan for 1,000 active users: our experience suggests that’s enough for the first months of a product’s life.

And, of course, in both cases you should consult our analysts. We’ll help you gather the necessary data and draw conclusions.

Is this important for all projects?

Yes, for all of them. It’s especially critical for systems that meet at least one of these criteria:

  • Large inbound/outbound traffic: users uploading and downloading HD video, video conferencing for 3+ users,
  • There is a requirement to ensure minimal latency: users are playing an online game, rehearsing musical instruments over a video call, or mixing a DJ set,
  • The application involves long and resource-intensive operations: compressing, converting or processing video, archiving files, routing video/audio calls, processing or generating data with neural networks.

Why not just build for many thousands of users and a very heavy concurrent load at once, for every project?

Firstly, a platform like that will get into production later.

If we know that only a small audience (usually called early adopters) will be using it in the first months, it is more reasonable and profitable not to postpone the launch until the balancing and scaling systems are ready and tested under load. 

Secondly, the larger the estimated load, the more expensive the system is to operate, especially if it runs in the cloud. Targeting a big online audience means not only being able to scale, but also keeping enough spare capacity here and now to handle a significant influx of users at any given time; that is, keeping a large, expensive server always on instead of a small, cheap one.

Thirdly, this calculation simply isn’t applicable to every project.

For closed corporate platforms, it makes no sense to develop a product for an army of thousands of users.

What does the developer do with this data?

The developer will understand:

  • What kind of server you need: on-premise, cloud (AWS, Hetzner, Google Cloud, AliCloud), or a whole network of servers
  • Whether it is possible and necessary to transfer some of the load to the user device (client)
  • Which of the optimization and performance-related tasks need to be implemented immediately and which can be deferred to later sprints

Offtopic: what is the difference between server load and client load?

A simple example: let’s say we’re building our own Instagram. The user shoots a video, adds simple effects, and posts the result to their feed.

If the goal is to get to the first audience quickly and economically, the pilot build can do almost everything on the server.


  • There’s no risk of getting bogged down in platform-specific limitations: video formats, load limits, and other nuances don’t bother us. Everything is handled centrally, so you can quickly make a product for all platforms and release it simultaneously
  • There are no strict requirements for client devices: it’s easier to enter growing markets such as Africa, Southeast Asia, and Latin America. Even a super-cheap phone, of which there are many in those regions, can handle it
  • Our “non-Instagram” clients for each platform, such as web and mobile OS, are very simple: authorization, a feed, an upload button, and that’s it.

And if the goal is to give full functionality to a large active audience at once, heavy server calculations lose appeal: it makes sense to harness the power of client devices immediately.


  • Fewer servers and lower operating costs for the same number of users
  • The application feels more responsive to the user. And if there are already a lot of clients and we add complex new features, the platform’s responsiveness won’t degrade
  • Users feel more comfortable experimenting with new functionality: it’s implemented on the client, so delays are minimal
  • An internet connection may not be required during content processing, which saves traffic
  • Uploaded videos are published faster: they don’t need to be queued for server processing
  • The simpler and faster the individual operations on the server, the easier and cheaper the server is to scale, which is especially critical during a sudden influx of new users

A compromise often turns out to be the best option: one that doesn’t shift the whole load onto either party. For example, video processing tasks such as applying effects or graphics are often performed on the client, while converting mobile video into the required formats and resolutions is performed on the server. In this case, too, the distribution of tasks between the client device and the server depends on the planned scope.

What if we develop just a component for a live project? 

In the case of extending an already existing product, it’s necessary to find out where tasks are currently processed: on the device or on the server.

Then, based on the purpose of the future component and the forecast of the number of users and their activity on the platform after it appears, the developer will understand whether to improve the current architecture or migrate to a more efficient one.

So in the end, why are we asking about the number of users?

It all comes down to efficiency and saving your resources and money. We need the most accurate knowledge possible about the product’s scope and workload. It helps your project team better plan the launch, allocate costs, and make the system more reliable in the long run.


How Digital Video as a Technology Works

tv with different types of video

In this article, we’ll try to explain what digital video is and how it works. We’ll be using a lot of examples, so even if you wanna run away before reading something difficult – fear not, we’ve got you. So lean back and enjoy the explanation on video from Nikolay, our CEO. 😉

Analog and digital video

Video can be analog and digital.

All the information in the real world around us is analog: waves in the ocean, sound, clouds floating in the sky. It’s a continuous flow of information that isn’t divided into parts and can be represented as waves. People perceive the world around them in exactly this analog form.

Old video cameras, which recorded to magnetic cassettes, stored information in analog form. Reel-to-reel and cassette tape recorders worked on the same principle: magnetic tape passed over the recorder’s magnetic heads, which allowed sound and video to be played. Vinyl records were also analog.

Such recordings were played back strictly in the order in which they were recorded. Editing them was very difficult, and so was transferring them to the Internet.

With the ubiquity of computers, almost all video is now in digital format: zeros and ones. When you shoot video on your phone, it’s converted from analog to digital form and stored in memory; when you play it back, it’s converted from digital back to analog. This allows you to stream video over a network, store it on your hard drive, and edit and compress it.

What a digital video is made of

Video consists of a sequence of pictures or frames that, as they change rapidly, make it appear as if objects are moving on the screen.

Here is an example of how a video clip is made.

What is Frame Rate

Frames on the screen change at a certain rate. The number of frames per second is called the frame rate. The traditional standard for cinema is 24 frames per second; TV uses 25 or 30 depending on the region, and high-frame-rate formats such as IMAX HFR use 48 or more.

The higher the number of frames per second, the more detail you can see with fast-moving objects in the video. 

Check out the difference between 15, 30, and 60 FPS.

What is pixel

All displays on TVs, tablets, phones, and other devices are made up of little glowing elements: pixels. Let’s say each pixel can display one color (technically, different manufacturers implement this differently).

To display an image, each pixel on the screen must glow a certain color.

Thanks to this screen design, each frame of a digital video is a set of colored dots, or pixels.

Schematic screen structure

The number of such dots horizontally and vertically is called the picture resolution, written, for example, as 1024×768. The first number is the number of pixels horizontally; the second, vertically.

All frames in a video have the same resolution, which in turn is called the video resolution.

Let’s take a closer look at a single pixel. On the screen it’s a glowing dot of a certain color, but in the video file itself a pixel is stored as digital information (numbers). From this information, the device understands what color the pixel should light up on the screen.

What are color spaces

There are different ways of representing the color of a pixel digitally, and these ways are called color spaces. 

Color spaces are set up so that any color is represented by a point that has certain coordinates in that space. 

For example, RGB (Red, Green, Blue) is a three-dimensional color space where each color is described by a set of three coordinates, each responsible for the red, green, or blue component.

Any color in this space is represented as a combination of red, green, and blue.

how color spaces work
Classic RGB palette

Here is an example of an RGB image decomposed into its constituent colors:

what is RGB
How colors in pictures mix

There are many color spaces, and they differ in the number of colors that can be encoded with them and the amount of memory required to represent the pixel color data.

The most popular spaces are RGB (used in computer graphics), YCbCr (used in video), and CMYK (used in printing).

CMYK is very similar to RGB but has four base colors: Cyan, Magenta, Yellow, and Key (black).

RGB and CMYK spaces are not very efficient, because they store redundant information.  

Video uses a more efficient color space that takes advantage of human vision.

The human eye is less sensitive to the color of objects than it is to their brightness.

how human eye understand brightness
How human eyes perceive contrast

On the left side of the image, squares A and B are actually the same color; it just seems to us that they’re different, because the brain pays more attention to brightness than to color. On the right side, a strip of the same color connects the marked squares, so we (that is, our brain) can easily tell that the color is in fact the same.

Using this feature of vision, we can encode a color image by separating the brightness (luminance) information from the color information. During compression, we can then keep only half or even a quarter of the color information (representing luminance at a higher resolution than color). A person will not notice the difference, and we save substantially on storing the color information.
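The saving is easy to quantify. In the common 4:2:0 subsampling scheme, one pair of color samples is shared by a 2×2 block of pixels, which cuts the frame size in half compared with storing full color for every pixel:

```python
# Storage comparison: full color per pixel (4:4:4) vs 4:2:0 chroma subsampling.
# Assumes 8 bits (1 byte) per sample.

width, height = 1920, 1080
pixels = width * height

full_color = pixels * 3                  # Y + Cb + Cr for every pixel (4:4:4)
subsampled = pixels + 2 * (pixels // 4)  # full-res Y, quarter-res Cb and Cr (4:2:0)

print(f"4:4:4: {full_color / 1e6:.2f} MB per frame")
print(f"4:2:0: {subsampled / 1e6:.2f} MB per frame")
print(f"saving: {1 - subsampled / full_color:.0%}")
```

The luminance channel keeps its full resolution; only the two chroma channels are shrunk, which is exactly the part of the image our eyes are least sensitive to.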

We’ll talk about exactly how color compression works in the next article.

The best-known color space that works this way is YCbCr, with its variants YUV and YIQ.

Here is an example of an image decomposed into its components in YCbCr, where Y′ is the luminance component, and Cb and Cr are the blue-difference and red-difference chroma components.

how YCbCr works
YCbCr scheme

It is YCbCr that is used for color coding in video. Firstly, this color space allows color information to be compressed; secondly, it is well suited to black-and-white video (e.g. from surveillance cameras), as the color information (Cb and Cr) can simply be omitted.
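For the curious, the RGB-to-YCbCr conversion is just a linear formula. Below is the BT.601 full-range variant for 8-bit values, one of several versions in use:

```python
# RGB -> YCbCr conversion (BT.601, full-range 8-bit), one common variant.

def rgb_to_ycbcr(r: int, g: int, b: int):
    y  =       0.299    * r + 0.587    * g + 0.114    * b  # luminance
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b  # blue-difference chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return y, cb, cr

print(rgb_to_ycbcr(255, 255, 255))  # white: maximum luminance, neutral chroma
print(rgb_to_ycbcr(0, 0, 0))        # black: zero luminance, neutral chroma
```

Note how any shade of gray, white, and black all land on neutral chroma (Cb = Cr = 128), which is why grayscale video can simply drop the Cb and Cr channels.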


What is Bit Depth

Bit depth is the number of bits used to store the color of a single pixel. The more bits, the more colors can be encoded, and the more memory each pixel occupies. The more colors, the better the picture looks.

For a long time, the standard color depth for video was 8 bits (Standard Dynamic Range, or SDR, video). Nowadays, 10-bit and 12-bit (High Dynamic Range, or HDR, video) are increasingly used.
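With three color channels, the number of representable colors grows very quickly with bit depth. A quick calculation (assuming the bits are per channel):

```python
# Number of encodable colors for a given bit depth per channel,
# assuming three color channels.

def colors(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3

print(f"8-bit  (SDR): {colors(8):,} colors")   # ~16.7 million
print(f"10-bit (HDR): {colors(10):,} colors")  # ~1.07 billion
print(f"12-bit (HDR): {colors(12):,} colors")  # ~68.7 billion
```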

compare SDR and HDR video
Bit depth contents

Keep in mind that different color spaces can encode different numbers of colors with the same number of bits allocated per pixel.

What is Bit Rate

Bit rate is the number of bits one second of video occupies in memory. To calculate the bit rate of uncompressed video, take the number of pixels in a frame, multiply it by the color depth, and multiply by the number of frames per second:

1024 pixels × 768 pixels × 10 bits × 24 frames per second = 188,743,680 bits per second

That’s 23,592,960 bytes, 23,040 kilobytes, or 22.5 megabytes per second.

A 5-minute video would thus take up 6,750 megabytes, or 6.59 gigabytes, of memory.
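The same arithmetic in code, reproducing the numbers above:

```python
# Uncompressed bitrate: pixels per frame x bits per pixel x frames per second.

width, height = 1024, 768
bit_depth = 10   # bits per pixel, as in the example above
fps = 24

bits_per_second = width * height * bit_depth * fps
mb_per_second = bits_per_second / 8 / 1024 / 1024
gb_for_5_minutes = mb_per_second * 5 * 60 / 1024

print(f"{bits_per_second} bits per second")
print(f"{mb_per_second} MB per second")
print(f"{gb_for_5_minutes:.2f} GB per 5 minutes")
```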

This brings us to why video compression methods appeared and why they’re needed. Without compression, it would be impossible to store and transmit that amount of information over a network; YouTube videos would take forever to download.


This was a quick introduction to the world of video. Now that we know what video consists of and the basics of how it works, we can move on to more complicated stuff, which will still be presented in a comprehensible way 🙂

In the next article I’ll tell you how video compression works. I’ll talk about lossless compression and lossy compression. 


Advanced iOS App Architecture Explained on MVVM with Code Examples

MVVM iOS architecture

How do you share the exact same vision across changing developer teams? Is there a way to make onboarding new devs faster and easier, to cut costs? How will the final product be affected? In this article we want to share our experience and give a clear explanation of what iOS app architecture is, for both business and tech people.

We are a custom software development company. In 17 years of work, we have developed more than 60 applications in Swift. We regularly had to spend weeks digging into code to understand the structure and operation of yet another project. Some projects were built with MVP, some with MVVM, some with our own patterns. Switching between projects and reviewing other developers’ code added hours to our development time. So we decided to create a unified architecture for our mobile apps.

What benefits the architecture gave us:

  1. Speed up the development process. Having spent some time on creating the architecture, we can now easily make changes to the code. For instance, if we needed to add a new sign-up flow, just making it work used to take us 8-16 hours. Now it only takes 1-2 hours.
  2. Eliminate bugs. Not completely, but there are fewer now. We’ve already developed many different kinds of flows and cases; add the settled approach to that, and we no longer have to search for solutions, we just write the code. We already know which bugs can occur, so we avoid them straight away.
  3. Hand over projects more easily. If a project’s developer is away (e.g. sick or on vacation), we find someone to replace them until they’re back. A substitute developer used to waste time (= the client’s money) studying the code before entering a project. Now we’ve minimized this kind of expense: since we’ve unified all the solutions, a programmer can easily continue the development.

When we set out to create our iOS app architecture, we first defined the main goals to achieve:

Simplicity and speed. One of the main goals is to make developers’ lives easier. To do this, the code must be readable and the application must have a simple and clear structure. 

Quick immersion in the project. Outsourced development doesn’t provide much time to dive into a project. It is important that when switching to another project, it does not take the developer much time to learn the application code. 

Scalability and extensibility. The application under development must be ready for large loads and must make it easy to add new functionality. For this it is important that the architecture corresponds to modern development principles, such as SOLID, and the latest versions of the SDK.

Constant development. You can’t make a perfect architecture all at once, it comes with time. Every developer contributes to it – we have weekly meetings where we discuss the advantages and disadvantages of the existing architecture and things we would like to improve.

The foundation of our architecture is the MVVM pattern with coordinators.

Comparing popular MV(X) patterns, we settled on MVVM. It seemed the best choice thanks to its good development speed and flexibility.

MVVM stands for Model, View, ViewModel:

  • Model – provides data and the methods of working with it: requesting it, checking it for correctness, etc.
  • View – the layer responsible for the graphical presentation.
  • ViewModel – the mediator between the Model and the View. It is responsible for changing the Model in reaction to the user’s actions on the View, and it updates the View using changes from the Model. Its main distinctive feature among the intermediaries in MV(X) patterns is the reactive binding of View and ViewModel, which significantly simplifies and reduces the code for moving data between these entities.

Along with the MVVM, we’ve added coordinators. These are objects that control the navigational flow of our application. They help to:

  • isolate and reuse ViewControllers
  • pass dependencies down the navigation hierarchy
  • define the use cases of the application
  • implement Deep Links

We also use the DI (Dependency Injection) pattern in our iOS development architecture. DI is an arrangement in which an object’s dependencies are supplied externally rather than created by the object itself. We use DITranquillity, a lightweight but powerful framework that lets you configure dependencies in a declarative style.
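Before looking at the framework, here is a minimal sketch of the DI idea itself, independent of DITranquillity. The names `NotesStorage`, `InMemoryNotesStorage`, and `NotesScreenModel` are hypothetical, for illustration only:

```swift
// A minimal sketch of dependency injection: the object receives its
// dependency from outside instead of creating it itself.
// All type names here are hypothetical illustrations.
protocol NotesStorage {
    func allNotes() -> [String]
}

struct InMemoryNotesStorage: NotesStorage {
    func allNotes() -> [String] { ["First note"] }
}

final class NotesScreenModel {
    private let storage: NotesStorage
    // The dependency is injected through the initializer;
    // automating this wiring is exactly what a DI container does.
    init(storage: NotesStorage) {
        self.storage = storage
    }
    func titles() -> [String] { storage.allNotes() }
}

let model = NotesScreenModel(storage: InMemoryNotesStorage())
// model.titles() == ["First note"]
```

Because `NotesScreenModel` depends only on the protocol, the storage can be swapped for a network-backed or test double implementation without touching the model.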

How to implement it?

Let’s break down our advanced iOS app architecture using a note-taking application as an example. 

Let’s create the skeleton of the future application and implement the necessary protocols for routing.

import UIKit

protocol Presentable {
    func toPresent() -> UIViewController?
}

extension UIViewController: Presentable {
    func toPresent() -> UIViewController? {
        return self
    }
}

protocol Router: Presentable {
  func present(_ module: Presentable?)
  func present(_ module: Presentable?, animated: Bool)
  func push(_ module: Presentable?)
  func push(_ module: Presentable?, hideBottomBar: Bool)
  func push(_ module: Presentable?, animated: Bool)
  func push(_ module: Presentable?, animated: Bool, completion: (() -> Void)?)
  func push(_ module: Presentable?, animated: Bool, hideBottomBar: Bool, completion: (() -> Void)?)
  func popModule()
  func popModule(animated: Bool)
  func dismissModule()
  func dismissModule(animated: Bool, completion: (() -> Void)?)
  func setRootModule(_ module: Presentable?)
  func setRootModule(_ module: Presentable?, hideBar: Bool)
  func popToRootModule(animated: Bool)
}

Configuring AppDelegate and AppCoordinator

a graphic scheme of how delegate and coordinators interact (blocks and arrows)
A diagram of the interaction between the delegate and the coordinators

In AppDelegate, we create a container for the DI. In the registerParts() method we register all of our application’s dependencies. Next we initialize the AppCoordinator, passing it the window and the container, and call its start() method, thereby giving it control.

class AppDelegate: UIResponder, UIApplicationDelegate {
    private let container = DIContainer()
    var window: UIWindow?
    private var applicationCoordinator: AppCoordinator?

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        registerParts()
        let window = UIWindow()
        let applicationCoordinator = AppCoordinator(window: window, container: container)
        self.applicationCoordinator = applicationCoordinator
        self.window = window
        window.makeKeyAndVisible()
        applicationCoordinator.start()
        return true
    }

    private func registerParts() {
        container.append(part: ModelPart.self)
        container.append(part: NotesListPart.self)
        container.append(part: CreateNotePart.self)
        container.append(part: NoteDetailsPart.self)
    }
}

The AppCoordinator determines which scenario the application should run. For example, if the user isn’t authorized, the authorization flow is shown; otherwise the main application scenario is started. In the case of the notes application, we have one scenario: displaying the list of notes.

We then do the same as with the AppCoordinator, only instead of a window we pass a router.

final class AppCoordinator: BaseCoordinator {
    private let window: UIWindow
    private let container: DIContainer

    init(window: UIWindow, container: DIContainer) {
        self.window = window
        self.container = container
    }

    override func start() {
        openNotesList()
    }

    override func start(with option: DeepLinkOption?) {
        // Deep Link handling would branch here; fall back to the default flow.
        start()
    }

    func openNotesList() {
        let navigationController = UINavigationController()
        navigationController.navigationBar.prefersLargeTitles = true
        let router = RouterImp(rootController: navigationController)
        let notesListCoordinator = NotesListCoordinator(router: router, container: container)
        addDependency(notesListCoordinator)
        notesListCoordinator.start()
        window.rootViewController = navigationController
        window.makeKeyAndVisible()
    }
}

In NotesListCoordinator, we resolve the note-list screen’s dependency using the container.resolve() method. Be sure to specify the type of the dependency, so the library knows which one to fetch. We also set up transition handlers for the following screens. The dependency setup will be shown later.

class NotesListCoordinator: BaseCoordinator {
    private let container: DIContainer
    private let router: Router

    init(router: Router, container: DIContainer) {
        self.router = router
        self.container = container
    }

    override func start() {
        setNotesListRoot()
    }

    func setNotesListRoot() {
        let notesListDependency: NotesListDependency = container.resolve()
        notesListDependency.viewModel.onNoteSelected = { [weak self] note in
            self?.pushNoteDetailsScreen(note: note)
        }
        notesListDependency.viewModel.onCreateNote = { [weak self] in
            self?.pushCreateNoteScreen(mode: .create)
        }
        router.setRootModule(notesListDependency.viewController)
    }
}

Creating a module

Each module in an application can be represented like this:

a graphic scheme of iOS module scheme (with blocks and arrows)
Module scheme in iOS application architecture

The Model layer in our application is represented by the Provider entity. Its layout is:

a graphic scheme of iOS provider (with blocks and arrows)
Provider scheme in apple app architecture

The Provider is an entity in iOS app architecture, which is responsible for communicating with services and managers in order to receive, send, and process data for the screen, e.g. to contact services to retrieve data from the network or from the database.

Let’s create a protocol for communicating with our provider, mentioning the necessary fields and methods. We create a ProviderState structure, where we declare the data our screen will depend on. In the protocol, we mention fields such as currentState of type ProviderState, its observer state of type Observable<ProviderState>, and methods for changing the current state.

Then we create an implementation of our protocol, named after the protocol plus “Impl”. We mark currentState as @Published: this property wrapper lets us create an observable object that automatically reports changes. BehaviorRelay could do the same thing, having both observable and observer properties, but its data update flow took 3 lines, while @Published takes only 1. We also set the access level to private(set), because the provider’s state should not change outside of the provider. The state property observes currentState and broadcasts changes to its subscribers, namely our future ViewModel. Don’t forget to implement the methods we’ll need when working on this screen.

struct Note {
    let id: Identifier<Self>
    let dateCreated: Date
    var text: String
    var dateChanged: Date?
}

protocol NotesListProvider {
    var state: Observable<NotesListProviderState> { get }
    var currentState: NotesListProviderState { get }
}

class NotesListProviderImpl: NotesListProvider {
    let disposeBag = DisposeBag()
    lazy var state = $currentState
    @Published private(set) var currentState = NotesListProviderState()

    init(sharedStore: SharedStore<[Note], Never>) {
        sharedStore.state.subscribe(onNext: { [weak self] notes in
            self?.currentState.notes = notes
        }).disposed(by: disposeBag)
    }
}

struct NotesListProviderState {
    var notes: [Note] = []
}

a graphic scheme of iOS View-model
View-Model scheme in iOS development architecture

Here we create a protocol, just like for the provider, mentioning fields such as viewInputData and events. viewInputData is the data that will be passed directly to our viewController. We create the implementation of our ViewModel, subscribe viewInputData to the provider’s state, and map it to the format the view needs using the mapToViewInputData function. We create an enum of events, where we define everything that should be processed on the screen: view loading, button presses, cell selection, etc. We make events a PublishSubject, so we can both add new elements to it and subscribe to handle each event.

protocol NotesListViewModel: AnyObject {
    var viewInputData: Observable<NotesListViewInputData> { get }
    var events: PublishSubject<NotesListViewEvent> { get }
    var onNoteSelected: ((Note) -> ())? { get set }
    var onCreateNote: (() -> ())? { get set }
}

class NotesListViewModelImpl: NotesListViewModel {
    let disposeBag = DisposeBag()
    let viewInputData: Observable<NotesListViewInputData>
    let events = PublishSubject<NotesListViewEvent>()
    let notesProvider: NotesListProvider
    var onNoteSelected: ((Note) -> ())?
    var onCreateNote: (() -> ())?

    init(notesProvider: NotesListProvider) {
        self.notesProvider = notesProvider
        self.viewInputData = notesProvider.state.map { $0.mapToNotesListViewInputData() }
        events.subscribe(onNext: { [weak self] event in
            switch event {
            case .viewDidAppear, .viewWillDisappear:
                break
            case let .selectedNote(id):
                self?.noteSelected(id: id)
            case .createNote:
                self?.onCreateNote?()
            }
        }).disposed(by: disposeBag)
    }

    private func noteSelected(id: Identifier<Note>) {
        if let note = notesProvider.currentState.notes.first(where: { $0.id == id }) {
            onNoteSelected?(note)
        }
    }
}

private extension NotesListProviderState {
    func mapToNotesListViewInputData() -> NotesListViewInputData {
        return NotesListViewInputData(notes: notes.map { ($0.id, NoteCollectionViewCell.State(text: $0.text)) })
    }
}

a graphic scheme of iOS MVVM View
View scheme in iOS mobile architecture

In this layer, we configure the screen UI and the bindings with the view model. The View layer is represented by the UIViewController. In viewWillAppear(), we subscribe to viewInputData and hand the data to the render function, which distributes it to the relevant UI elements.

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let disposeBag = DisposeBag()
        viewModel.viewInputData.subscribe(onNext: { [weak self] viewInputData in
            self?.render(data: viewInputData)
        }).disposed(by: disposeBag)
        self.disposeBag = disposeBag
    }

    private func render(data: NotesListViewInputData) {
        var snapshot = DiffableDataSourceSnapshot<NotesListSection, NotesListSectionItem>()
        snapshot.appendItems(data.notes.map { NotesListSectionItem.note($0.0, $0.1) })
        dataSource.apply(snapshot)
    }

We also add event bindings, either with RxSwift or the basic way through selectors. 

    @objc private func createNoteBtnPressed() {
        viewModel.events.onNext(.createNote)
    }

Now that all the components of the module are ready, let’s link the objects together. The module is a class conforming to the DIPart protocol, which primarily serves to maintain the code hierarchy by combining some parts of the system into a single common class and, later on, including some, but not all, of the components in the list. Let’s implement the obligatory load(container:) method, where we register our components.

final class NotesListPart: DIPart {
    static func load(container: DIContainer) {
        // The store registration was truncated in the original listing;
        // the exact initializer depends on the SharedStore implementation.
        container.register { SharedStore<[Note], Never>(value: []) }
            .as(SharedStore<[Note], Never>.self, tag: NotesListScope.self)
        container.register { NotesListProviderImpl(sharedStore: by(tag: NotesListScope.self, on: $0)) }
    }
}

struct NotesListDependency {
    let viewModel: NotesListViewModel
    let viewController: NotesListViewController
}

We register components with the container.register() method, passing it our object and specifying the protocol through which it will communicate, as well as the object’s lifetime. We do the same with all the other components.

Our module is ready; don’t forget to add it to the container in the AppDelegate. Let’s go to the NotesListCoordinator, to the list-opening function. We take the required dependency through the container.resolve function, making sure to explicitly declare the variable’s type. Then we create the onNoteSelected and onCreateNote event handlers and pass the viewController to the router.

 func setNotesListRoot() {
        let notesListDependency: NotesListDependency = container.resolve()
        notesListDependency.viewModel.onNoteSelected = { [weak self] note in
            self?.pushNoteDetailsScreen(note: note)
        }
        notesListDependency.viewModel.onCreateNote = { [weak self] in
            self?.pushCreateNoteScreen(mode: .create)
        }
        router.setRootModule(notesListDependency.viewController)
    }

Other modules and navigation are created following these steps. In conclusion, we can say the architecture isn’t without flaws. We could mention a couple of problems: changing one field in viewInputData forces the whole UI to update rather than just certain elements of it, and the common flow of working with UITabBarController and UIPageViewController is underdeveloped.

November’22 Update

It’s been 6 months since we released this article and mentioned the issues and weak spots stated above. We’ve done some work, and here are the improvements we’ve made:

  1. Now you don’t have to update the entire provider state when altering one field.
  2. We implemented UIPageViewController and UITabBarController to our architecture.


We mentioned that we had built the provider State with a custom property wrapper, RxPublished. It’s an alternative to Combine’s @Published, but for RxSwift. It wraps a BehaviorRelay, so when we modified the State we sent an instance to the subject, and only after that did the subject deliver it to its subscribers. But there was a case when we needed to update several state fields yet deliver the updated state only once the whole operation was completed.

We found a quick solution using an inout parameter and a closure. A function with a parameter passed via inout writes the updated value back to the caller’s variable once it completes. The solution is literally three lines (and saves A LOT of time):

  1. Copy the current state;
  2. Carry out the closure;
  3. Assign the updated state to the subject.
func commit(changes: (inout State) -> ()) {
    var updatedState = stateRelay.value
    changes(&updatedState)
    stateRelay.accept(updatedState)
}

a table with before and after code samples for State
State code before and after


Implementing UIPageViewController in the MVVM architecture made the development process quite easy. Check out this step-by-step tutorial:

  1. Make a module for the PageViewController.
  2. In the provider, prepare the data you’ll need to configure the modules inside the UIPageViewController.
  3. Build the ViewModel as you usually do: map the provider state into the view state.
  4. Add the screens’ DI modules to the viewController via initialization.

Please note that if you want to reuse modules, you should make sure that the next time you address a module, a new instance is returned. To do that, use the Provider property (not to be confused with the module’s provider): it is responsible for returning a new instance each time the variable is accessed. Tip: use the SwiftLazy library by DITranquillity; it’s a great alternative to the native lazy and offers even better functionality with the required Provider.
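To illustrate the idea, here is a toy version of such a provider (not the SwiftLazy type itself; `FreshProvider` and `ScreenModule` are hypothetical names):

```swift
// A toy sketch of the Provider idea: every access to `value`
// builds a fresh instance instead of caching one.
// This is an illustration, not the SwiftLazy implementation.
struct FreshProvider<T> {
    private let factory: () -> T
    init(_ factory: @escaping () -> T) { self.factory = factory }
    var value: T { factory() }
}

final class ScreenModule {}

let provider = FreshProvider { ScreenModule() }
let first = provider.value
let second = provider.value
// first !== second: a reused module gets a brand-new instance each time
```

Contrast this with `lazy`, which builds the instance once and then always returns the same one.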

  5. Configure each screen in the render function with the required data. Here’s an example:
let someDependency: SomeModuleDependency
let anotherDependency: AnotherModuleDependency

init(....) { }

func render(with data: InputData) {
    pageVC.setViewControllers([someDependency.viewController, ….])
}


TabBarController now has its own coordinator, so we can configure a separate flow for each tab. By flow we mean a coordinator and router pair. One thing to remember: add the two child coordinators to the storage using addDependency and call the start() method. Here’s how to do this programmatically:


private typealias Flow = (Coordinator, Presentable)

override func start() {
     let flows = [someFlow(), anotherFlow()]
     let coordinators = flows.map { $0.0 }
     let controllers = flows.compactMap { $0.1.toPresent() as? UINavigationController }
     router.setViewControllers(controllers: controllers)
     coordinators.forEach {
         addDependency($0)
         $0.start()
     }
}

func someFlow() -> Flow {
     let coordinator = someCoordinator()
     let router = RouterImp(rootController: UINavigationController())
     return (coordinator, router)
}

As you can see, all the updates are easy and quick to implement in your mobile app architecture. We plan on adding custom popup support and more cool stuff.


With the creation of the iOS app architecture, our work became much easier. It’s not so scary anymore to replace a colleague on vacation or take on a new project. Colleagues can look up solutions for this or that implementation without puzzling over how to make it work properly with our architecture.

During the year, we have already managed to add shared storage, error handling for coordinators, and improved routing logic, and we aren’t gonna stop there.

If you’re interested in knowing more about our iOS software development expertise, read WebRTC in iOS Explained. Creating an online conference app or introducing calls to your platform has never been this easy.


Jesse from Vodeo, ‘Fora Soft is in a perfect spot of good pricing of projects and development.’

Our copywriter Nikita talked to Jesse Janson about his experience of working with Fora Soft.

Jesse hails from the Janson family. Movie nerds might know the family by their brand, Janson Media. After all, they’ve been in the movie renting business since 1989.

Jesse’s app isn’t officially released yet, but he certainly does have something to say about his work with Fora.

Today we have Jesse Janson with us. Jesse is the CEO of Vodeo, am I correct?

Yes. So my name is Jesse Janson, and I work at Janson Media, which is our TV and film distribution company. I am in charge of acquisitions and business development. And Vodeo is a mobile-first video streaming rental service that we worked on with Fora Soft, and that’s a separate company. Yes, I’m president of that company.

Tell me a little bit about Vodeo. What is it? How did it come to be?

Vodeo is a rental only mobile service, an application that we’re going to launch on iPhones first. The service is different and unique in the sense that the users can rent movies or TV episodes for a limited period of time. And for this, they use a credit-based system within the app, which allows the cost per movie rental or episode rental to be much lower than any other application or video rental service currently available in the market. So, within Vodeo, our users can pre-buy credits within the app that they can then use to rent movies and rent TV episodes. And each credit to the user only costs them about $0.10 right now.

Was Fora Soft your first choice?

I think it was probably our first. Definitely the first company and only company we’ve worked with on Vodeo and on the development of the application. I certainly researched and spoke to other companies as well.

But after speaking with Fora Soft, we decided to work together, and it’s been a good decision.

Tell me why you turned to Fora Soft? Why us?

The previous work that Fora Soft has done was impressive, and that was an important factor in our decision. Also, the communication was excellent. So I understood the scope of the project, the estimated time, work, and costs that it would take to get the project up and developed. And yeah, it’s been really good. The communication mostly. And then also the previous work was a big factor.

Share your before and after working with us, like what it used to be before you worked with Fora Soft and what it is now?

Sure. Well, before the Vodeo app, it was really just an idea on paper and just a concept. So we really needed Fora Soft to help us with an MVP in the space and get that up and running. And then once we sort of had that tested and we had a first version, we worked with Fora Soft to update it. It’s still private, just for us to look at. And we went through a few versions of that, and that whole process has been excellent. And now we’re pretty much to a point where I believe we are planning to publicly launch this year, so maybe June.

Congratulations on that. So I believe since you haven’t officially launched yet, you don’t have any measurable figures, any amount of crashes or profit or anything like that, right?

No, not yet. Right now it’s very much private, and in beta. We probably have a pool of maybe 20 private users who have been testing the app on their phones – registering, renting movies and making sure everything works properly. And that whole process has been great so far. The feedback has been very good from everyone in our small private circle.

Let’s talk a little bit about difficulties. If there were any difficulties with working with us, please tell me honestly.

Yeah, no, I haven’t encountered any difficulties yet. Working with Fora Soft, every question or product feature that I’ve requested or asked about has been easily addressed by Fora Soft. I never got the impression that Fora Soft couldn’t implement a feature or an update to where we are so far with the app. Even discussing future possibilities of updates and features on our future roadmap down the line. Fora Soft has said all of it is possible, and we haven’t had any difficulties in that regard in terms of the development and adding new features. It’s been excellent.

What’s the situation where you would think like, all right, I really want this feature, but how do I implement it? Can they implement it? And you talk to Fora Soft and Fora Soft was, like, yes, sure. Easy peasy. We can do that.

Yeah, that’s been the case. Sometimes it’s not a quick “Oh, that’s easy”. Sometimes the project manager I’m working with at Fora Soft will have to bring it to the developer team and ask them if it’s a feature that could be done. Then they will come back to me and provide me with an estimate and explain how long it will take to implement the process. So, yeah, it’s been really good. Some features are easy and quick “Oh, yeah, we could do that quickly and easily”. And some are “let me just check with the developers and see if it’s possible”. And that’s always been “Yeah, it’s possible. We’re able to do it”. So it’s been great.

Qualities like determination and communication and professionalism are really important when it comes to project type of work. Can you please rate us on those qualities on a scale of ten and maybe come up with other qualities if you need to.

Yeah, I give a 10/10 across the board on everything. It’s been really great.

I really enjoy personally working with a project manager.

We’ve worked with two project managers so far at Fora Soft. We worked for a while with our first project manager, maybe a year, and she was excellent. And then there was a transition. She left or moved on and we worked with a new project manager, and that transition was super seamless and easy. A new project manager picked up right where she left off.

Working with a project manager and communicating with them has probably been the most valuable to me. Especially because I’m not a developer by any means, and I’m not able to speak developer language, code and whatnot. But working with a project manager helps that a lot. So I can communicate what I would like to see and have done. They know how to communicate with the developers and the designers and then bring that back to me and let me know how it goes.

I’ll make sure to forward this feedback to Vladimir. He’s a cool guy.

Yeah. Excellent.

Okay. And would you recommend Fora Soft to your friends or colleagues?

Yeah, I’d definitely recommend them. I think Fora Soft certainly has the skills and know-how to develop applications like this very well. Also, I felt like the price of working with Fora Soft was very competitive across the market, and we liked that as well.

There’s a trade-off. Once you go too low in price, usually the quality of the work reflects it and is poor, with the cost being so low. Sometimes, when the price is very high, it doesn’t necessarily mean that the quality will be that much greater. So, I think Fora Soft is in a perfect spot of good pricing of projects and development. At the same time, they offer very high quality in the work that gets done.

Do you have anything else to add?

No, it’s all been great.

I’m in New York, Fora Soft is in Russia, there’s a time difference. Even with that, the communication’s been great. I don’t feel like we’re that far apart. The work’s been excellent.

It might be around a couple of years of working on this application. We’re pretty much at the point where we’re ready to put it out in public and hopefully get good feedback.

Thanks a lot, as a movie geek myself, I really hope everything will work out. Bye!


How a Technical Project Manager Saves Your Money and Nerves

it project manager

When entrusting any project to a third party for development, many people have the question, “Why can’t we do it ourselves?” Probably we can, but how effective will it be?

By outsourcing the work to a team of professionals headed by a project manager who will take some of the risks, the business not only saves money, but also gets the expertise and experience of the company. Let’s look at the benefits of such a solution and why a project needs a project manager.

Who is an IT project manager?

A project manager is a person who organizes the smooth operation of all development processes. He ensures communication between the client and the team, translates technical requirements into comprehensible language, plans the development, and ensures the timely release of the product to the market.

What tasks a manager performs in Fora Soft

At Fora Soft, a manager is a full member of the team. Our PM is not really a manager per se; he is a person who deals with processes and communications. Without him, as without any other team member, it is impossible to imagine delivering a quality product on time.

In our company, the manager takes care of all phases of a project’s life, from initiation to closing and handing over the result to the customer. The PM:

  • assembles the team
  • prepares the infrastructure for the project kick-off
  • checks the requirements and plans the sequence of tasks so it is convenient for the client and the team
  • prepares an IT project plan and schedule, so the team and the client always know the demo dates and the project’s end date in advance

After the implementation of a project, there usually comes a stage that can be called support. Even after the main work is complete, the project manager stays in touch with the client, quickly answering questions and engaging developers to solve problems if something goes wrong. We care about our customers, so even if a project is already live and the contract is closed, the PM is always happy to help. For example, after finishing one of the projects, a client came to us with a problem: the cost of maintaining the server had increased significantly. The PM assembled a team, calculated for the client the cost of moving to other servers and the cost of using them, and then moved the project to the server that suited the client best.

Why hire a team led by a PM?

– PM has a technical background and experience in handling projects of various complexity

The managers at Fora Soft understand the technical background of certain systems and have experience in bringing products to the market.

Some of our IT project managers are former developers and analysts with tech-leadership experience. This makes it possible to correctly estimate the labor costs of technical tasks and to prioritize work to achieve the goal.

In addition, our managers are constantly improving their skills: they know how to lead projects using the flexible Scrum and Kanban methodologies, and they keep building up their English, technical, analytical, and sales skills. Each manager has a personal development plan, which includes a set of tools to be learned. Once a month, managers invite colleagues from design, testing, or development to stay up to date on current technology and trends.

As a result, a project manager at Fora Soft is a versatile professional who can explain technical points and communicate intelligently with stakeholders to achieve the best result.

– The PM is trained to work with the team and will always resolve any problems that arise within the development team

Globally, a business has two options: find contractors “from the outside” on freelance marketplaces or through friends, or hire a professional team.

In the first case, the risks for the business will be significantly higher. Freelancers cannot guarantee quality work delivered on time, they work for themselves, and most often not in teams, so you need to hire each specialist separately and hope that the designer will create product layouts, the developer will bring ideas to life, and the tester will not miss bugs before the release. You will also have to monitor the quality of their work yourself, which takes additional time. Such fragmentation can cost a business dearly. 

In the second case, the manager takes care of all team processes. Every project is handled by a full-fledged team which specializes not in a wide range of technologies but in a particular multimedia field where each team member has relevant experience. 

The project manager at Fora Soft manages the resources, intelligently redistributing them when necessary. There is no downtime: every developer's workload on the project is planned at least 2 months in advance. The business owner does not have to bear the risks.

In addition, the manager builds relationships within the team and motivates each member to contribute to the success of the product and suggest improvements. The business owner does not have to micromanage: they have just bought time to focus on more important things, for example, the product's global vision.

– PM will help save money and nerves

Any development, especially of large and complex projects, involves budgets and risks. We understand this, so we structure our work to deliver the maximum number of useful features for a minimum cost and meet the customer’s deadlines. Moreover, our certified IT project managers are always in touch with the client and can promptly answer any questions about the current status of the project. 

To understand how a PM can help you save money, let’s do the opposite of what we did above: stop describing what the project manager does, and describe what can happen without one.

So let’s imagine a team without a manager.

In our hypothetical project, an online cinema, let’s assume a team with the following IT project roles: three developers, a designer, a tester, and an analyst. Each of them does their job.

After a while it turns out that one developer finished a task ahead of his estimate and, not wanting to sit idle, picked up the task of creating a user profile. But we have a team of three developers, and all of them decide to do exactly the same thing, forgetting to warn each other. This surfaces too late, when QA starts testing the task. QA realizes that testing it will be difficult, postpones it, and proceeds to what is already ready at the moment. Development has just gone up in price, and the deadline has shifted by a month.

Hence the need for someone who will maintain a level of transparency in the team so this doesn’t happen again, with everyone on the same page, following the plan and knowing what “plan B” is at the slightest change.

To do this, you need to understand how long a particular task will take, and someone needs to take charge of planning and accounting for the risks. That’s why you need a project manager for IT project delivery: to keep the budget from going to waste and lead the product to a successful release. 


So, here is why a business needs a professional team headed by a project manager:

  • IT project manager has specialized knowledge and experience, combining technical and management skills.
  • they are a single point of contact. All information passes through the manager, who is always aware of the project's progress and takes responsibility for IT project risks and resource allocation. 
  • the manager helps to save money. This is not only a financial benefit resulting from competent planning and risk mitigation, but also freedom from unnecessary worries that something can go wrong.

By the way, without specialized knowledge in management and technology, all this is quite difficult to do. Our managers have already had professional training and “got all the bumps” so that the business does not have to. Wanna find out more? Visit our contact page, so our Sales managers can talk to you and explain everything.


Video Conferencing System Architecture: P2P vs MCU vs SFU?

Even though WebRTC is a protocol developed to do one job, establishing low-ping, high-security multimedia connections, one of its best features is flexibility. Even if you are up for the fairly complex task of creating a video conferencing app, the options are open.

You can go full p2p, deploy a media server backend (there is quite a variety of those), or combine these approaches to your liking. You can pick the desired features and find a handful of ways to implement them. Finally, you can freely choose between a single solid backend and a scalable media server grid built on one of many patterns. 

With all of these freedoms at your disposal, picking the best option might, and will, be tricky. Let us clarify the P2P vs MCU vs SFU fight a bit for you.

What is P2P?

Let’s imagine it’s Christmas. Or any other mass gift giving opportunity of your choice. You’ve got quite a bunch of friends scattered across town, but everyone is too busy to throw a gift exchange party. So, you and your besties agree everyone will get their presents once they drop by each other.

When you want to exchange something with your peers, no matter whether it’s a Christmas gift or a live video feed, the obvious way is definitely peer to peer. Each of your friends comes to knock on your door and get their box, and then you visit everyone in return. Plain and simple.

For a WebRTC chat, that means all call parties are directly connected to each other, with the host only serving as a meeting point (or an address book, in our Christmas example).

This pattern works great as long as:

  • your group is rather small
  • everyone is physically able to reach each other

Every gift from our example requires some time and effort to be dealt with: at least, you have to drive to a location (or to wait for someone to come to you), open the door, give the box and say merry Christmas.

  • If there are 4 members in a group, each of you needs time to handle 6 gifts – 3 to give, 3 to take.
  • When there are 5 of you, 8 gifts per person are to be taken care of.
  • Once your group increases to 6 members, your Christmas to-do list now features 10 gifts.

At one point, there will be too many balls in the air: the amount of incoming and outgoing gifts will be too massive to handle comfortably.

Same for video calls: every single P2P stream has to be encoded and sent, or decoded and displayed, in real time, and each operation consumes a fraction of your system’s performance, network bandwidth, and battery capacity. This fraction becomes quite noticeable for higher-quality video: while a 2-on-2 or even a 5-on-5 conference works decently on any relatively up-to-date device, a 10-on-10 peer-to-peer FullHD call would eat up give or take 50 Mbps of bandwidth and put quite a load even on a mid-to-high tier CPU.
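
The gift arithmetic maps directly onto streams. Here is a quick back-of-the-envelope sketch (the 2.5 Mbps per FullHD stream figure is an assumption for illustration, not a measured value):

```python
def p2p_streams_per_client(n_users: int) -> int:
    """In a full-mesh P2P call every client sends a stream to
    and receives a stream from each of the other n-1 peers."""
    return 2 * (n_users - 1)

def p2p_bandwidth_mbps(n_users: int, mbps_per_stream: float = 2.5) -> float:
    """Rough per-client bandwidth, assuming ~2.5 Mbps per FullHD stream."""
    return p2p_streams_per_client(n_users) * mbps_per_stream

print(p2p_streams_per_client(4))   # 6 "gifts": 3 to give, 3 to take
print(p2p_streams_per_client(6))   # 10
print(p2p_bandwidth_mbps(10))      # 45.0 Mbps, roughly the "give or take 50"
```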

p2p architecture
Peer-to-peer architecture 

Now regarding the physical ability to reach. Imagine one of your friends has recently moved to an upscale gated community. They are free to drive in and out – so, they’ll get to your front door for their presents, but your chances to reach their home for your gift are scarce.

WebRTC-wise, we are talking about corporate networks with NATs and/or VPNs. You can reach most hosts from inside the network, while being as good as unreachable from outside. In any case, your peers might be unable to see you, or vice versa, or both.

And finally – if all of you decide to pile up the gifts for a fancy Instagram photo, everyone will have to create both the box heap and the picture themselves: the presents are at the recipients’ homes.

WebRTC: peer-to-peer means no server side recording (or any other once-per-call features). At all.

Peer-to-Peer applications examples

One-to-one calls in messengers, and secure mobile video calling apps without any server-side features like video recording.

That’s where media servers come to save the day.

WebRTC media servers: MCU and SFU

Back to the imaginary Christmas. Your bunch of friends is huge, so you figure out you’ll spend the whole holiday season waiting for someone or driving somewhere. To save your time, you pay your local coffee shop to serve as a gift distribution node. 

From now on, everyone in your group needs to reach a single location to leave or get gifts – the coffee shop.

That’s how the WebRTC media servers work. They accept calling parties’ multimedia streams and deliver them to everyone in a conference room. 

A while ago, WebRTC media servers used to come in two flavors: SFU (Selective Forwarding Unit) and MCU (Multipoint Conferencing Unit / Multipoint Control Unit). As of today, most commercial and open-source solutions offer both SFU and MCU features, so both terms now describe features and usage patterns rather than product types.

What are those?

SFU / Selective Forwarding Unit

What is an SFU?

SFU sends separate video streams of everyone to everyone.

The bartender at the coffee shop keeps track of all the gifts arriving at the place, and calls their recipients if there’s something new waiting for them. Once you receive such a call, you drop by the shop, have a ‘chino, get your box and head back home.

The bad news is: the bartender calls you about one gift at a time. So, if there are three new presents, you’ll have to hit the road three times in a row. If there are twenty… you probably get the point. Alternatively, you can visit the place periodically, checking for new arrivals yourself.

Also, as your gift marathon flourishes, coffee quality degrades: the more people join in, the more time and effort the bartender dedicates to distributing gifts instead of caffeine. Remember: one gift – one call from the shop.

Media Server working as a Selective Forwarding Unit allows call participants to send their video streams once only – to the server itself. The backend will clone this stream and deliver it to every party involved in a call.

With SFU, every client consumes almost two times less bandwidth, CPU capacity, and battery power than it would in a peer-to-peer call:

  • for a 4-user call: 1 outgoing stream, 3 incoming (instead of 3 in, 3 out for p2p)
  • for a 5-user call: 1 outgoing stream, 4 incoming (would be 4 and 4 in p2p)
  • for a 10-user call: 1 out, 9 in (9 in, 9 out – p2p)
sfu architecture
SFU architecture

The drawback kicks in with users per call ratio approaching 20. “Selective” in SFU stands for the fact that this unit doesn’t forward media in bulk – it delivers media on a per-request basis. And, since WebRTC is always a p2p protocol, even if there is a server involved, every concurrent stream is a separate connection. So, for a 10-user video meetup a server has to maintain 10 ingest (“video receiving”) and 90 outgoing connections, each requiring computing power, bandwidth and, ultimately, money. But…
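
The connection counts above are easy to verify with a sketch (a hypothetical calculation, not tied to any particular SFU implementation):

```python
def sfu_client_streams(n_users: int) -> tuple:
    """(outgoing, incoming) streams per client with an SFU:
    one upload to the server, n-1 downloads from it."""
    return 1, n_users - 1

def sfu_server_connections(n_users: int) -> int:
    """Server side: n ingest links plus n*(n-1) outgoing ones,
    since every user's stream is forwarded to everyone else."""
    return n_users + n_users * (n_users - 1)

print(sfu_client_streams(10))      # (1, 9) instead of 9 in, 9 out for p2p
print(sfu_server_connections(10))  # 100: 10 ingest + 90 outgoing
```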

SFU Scalability

Once the coffee shop owner grows angry with the gift exchange intensity, you can take the next step, and pay some more shops in the neighborhood to join in. 

Depending on a particular shop’s load, some of the gift givers or receivers can be routed to another, less crowded one. The grid might grow almost infinitely, since every shop can either forward a package to their addressee, or to an alternative pick up location.

Forwarding rules are perfectly flexible. Like, Johnson’s coffee keeps gifts for your friends with first names starting A – F, and Smartducks is dedicated to parcels for downtown residents, while Randy’s Cappuccino will forward your merry Christmas to anyone who sent their own first gift since last Thursday.

The one-stream-one-connection approach of the SFU pattern has a feature that beats almost all of its cons. The feature is scalability.

Just like you forward a user’s stream to another participant, you can forward it to another server. With this in mind, the back end WebRTC architecture can be adjusted to grow and shrink depending on the number of users, conferences and traffic intensity.

E.g., if too many clients request a particular stream from one host, you can spawn a new one, clone the stream there and distribute it from a new unoccupied location.

Or, if you expect a rush entrance to a massive conference (e.g., some 20-30 streaming users and hundreds of view-only subscribers), you can assign two separate media server groups: one to handle incoming streams, the other to deliver them to subscribers. In this case, any load spikes on the viewing side will have zero effect on video ingest, and vice versa.

SFU applications examples

Skype and almost every other mobile messenger with video conference and call recording capabilities employs the SFU pattern on the backend.

Receiving other users’ video as separate streams provides capabilities for adaptive UX, allows per-stream quality adjustment and improves overall call stability in a volatile environment of a cellular network.

MCU / Multipoint Conferencing Unit

What is an MCU?

MCU unites all streams into 1 and sends just 1 stream to each participant.

Giving gifts is making friends, right? Now almost everyone in town is your buddy and participates in this gift exchange. The coffee shop hosting the exchange comes up with a great idea: why don’t we put all the presents for a particular person in a huge crate with their name on it. Moreover, some holiday magic is now involved: once there are new gifts for anyone, they appear in their respective boxes on their own.

Still, making Christmas magic seems to be harder work than making coffee: they might even need to hire more people to cast spells on the gift crates. And even with extra wizards on duty, there is zero chance you can rearrange the crates’ content order for your significant other to see your gift first – everyone gets the same pattern.

MCU architecture

Well, some of the MCU-related features do really ask for puns over an acronym shared with Marvel Cinematic Universe. Something marvelous is definitely involved. A media server in an MCU role has to keep only 20 connections for a 10-user conference, instead of the 100 links of an SFU: one ingest and one output per user. How come? It merges all the videos and audios a user needs to receive into a single stream, and delivers it to that particular client. That’s how Zoom’s conferences are made: with MCU, even a lower-tier computer is capable of handling a 100-user live call.

Magic obviously comes at a price, though. Compositing multiple video and audio streams in real time is *much* more of a performance guzzler than any forwarding pattern. Even more, if you have to somehow exclude one’s own voice and picture from the merged grid they receive – for each of the users. 

Another drawback, though mitigable, is that the composited grid is the same for everyone who receives the video, no matter their screen resolution or aspect ratio. If you need different layouts for mobile and desktop devices, you’ll have to composite the video twice.
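
To see why MCU trades compositing work for far fewer links, compare the server-side connection counts (a sketch of the counting argument above):

```python
def mcu_server_connections(n_users: int) -> int:
    # one ingest plus one composited output per user
    return 2 * n_users

def sfu_server_connections(n_users: int) -> int:
    # one ingest per user, plus a copy of every stream for everyone else
    return n_users + n_users * (n_users - 1)

for n in (10, 25, 100):
    print(n, mcu_server_connections(n), sfu_server_connections(n))
# MCU grows linearly (20, 50, 200 links) while SFU grows
# quadratically (100, 625, 10000), which is why massive calls
# lean on compositing
```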

MCU scalability

In a WebRTC video call, the MCU pattern has considerably less scaling potential than SFU: video compositing with sub-second delays does not allow on-the-fly load redistribution within a particular conference. Still, one can auto-spawn additional server instances for new calls in a virtualized environment or, for even better efficiency, assign an additional SFU unit to redistribute the composited video.

MCU applications examples

Zoom and a majority of its alternatives for massive video conferencing run off MCU-like backends. Otherwise, WebRTC video calls for 25+ participants would only be available for high-end devices.

TL;DR: what do I use, and when?

Quick comparison: P2P, SFU, MCU or MCU + SFU

~1-4 users per call – P2P

Pros:
  • lowest idling costs
  • easiest scaling
  • shortest TTM (time to market)
  • potentially the most secure

Cons:
  • for 5+ user calls – quality might deteriorate on weaker devices
  • highest bandwidth usage (may be critical for mobile users)
  • no server side recording, video analytics or other advanced features

Use cases:
  • private / group calls
  • video assistance and sales

5-20 users per call – SFU

Pros:
  • easily scalable with simultaneous call number growth
  • retains UX flexibility while providing server side features
  • can have node redundancy by design: thus, most rush-proof

Cons:
  • pretty traffic- and performance intensive on the client side
  • might still require a compositing MCU-like service to record calls 

Use cases:

  • E-learning: workshops and virtual classrooms
  • Corporate communications: meeting and pressrooms

20+ users per call – MCU / MCU + SFU

Pros:
  • least load on client side devices
  • capable of serving the biggest audiences
  • easily recordable (server side / client side)

Cons:
  • biggest idling and running costs
  • one call's capacity is limited by the performance of a particular server
  • least customizable layout

Use cases:

  • Large event streaming
  • Social networking
  • Online media 


P2P, MCU, and SFU are parts of WebRTC. You can read more about WebRTC on our blog:

How to minimize latency to less than 1 sec for mass streams?
WebRTC in Android.
WebRTC security in plain language for business people.

Got another question not covered here? Feel free to contact us using this form, and our professionals will be happy to help you with everything.


What is Traefik and how to use it? Tutorial with Code Examples

traefik tutorial

In this Traefik tutorial, we will show you how to proxy sites and APIs with a few examples, automate certificate issuance, and even add some middleware (to add headers, for example).

Please note that we use the hash symbol (#) in the code examples where we want to explain something.

What is Traefik?

It’s a reverse proxy designed to work with Docker. It allows you to proxy services in containers in a very simple and declarative way. At first you might be intimidated by labels, but you will get used to them 🙂

Why Traefik and not nginx, for example? We think Traefik is simpler to manage: it only needs docker-compose (instead of docker-compose plus nginx.conf, as with nginx), yet it still does the job.

Create a Traefik config

To begin, we should create a Traefik config:

# traefik.yml

# set log level
log:
  level: DEBUG

# enable the dashboard with useful information
api:
  dashboard: true
  insecure: true

# providers: in our case, that's what we proxy.
# at first we only need Docker;
# we'll show how to proxy external services later
providers:
  docker:
    # here you specify the network: add a service
    # to it to get it "picked up" by Traefik
    network: traefik
    # turn off "auto-scraping" of containers by Traefik,
    # otherwise it will try to proxy all containers
    exposedByDefault: false

# entry points are basically just ports that give access
# to Traefik and therefore to the services it proxies
entryPoints:
  # this is the name of the entry point for regular http traffic, usually called
  # http or web, but you can put anything in here
  http:
    # the entry point's port
    address: :80
    http:
      # set up a redirect for all requests to the https entry point
      redirections:
        entryPoint:
          to: https
          scheme: https
          permanent: true
  # create a https entry point on port 443, usually called
  # https or websecure
  https:
    address: :443

# certificate resolvers: these are used to get certificates for domains.
# We have just one for now; later we will add another, a wildcard resolver
certificatesResolvers:
  simple-resolver:
    acme:
      # acme challenge type; we need it so that letsencrypt can verify that
      # this is our domain. We need to specify the entry point on which the
      # challenge will run; more info about challenges is in the Let's Encrypt docs
      httpChallenge:
        entryPoint: http
      # letsencrypt needs your email, it will send all sorts of information there,
      # e.g. that your certificate is about to expire
      email: your@email.com  # placeholder, use your own
      # that's where Traefik will put the certificates; it's better to mount
      # a volume for them, which is what we'll do below
      storage: /letsencrypt/acme.json

accessLog: true
# Dockerfile
FROM traefik:v2.5.2

WORKDIR /traefik

COPY ./traefik.yml .

CMD ["traefik"]

# docker-compose.yml

version: "3.8"

services:
  traefik:
    build: .
    container_name: traefik
    restart: always
    ports:
      # open ports for http and https; the Traefik dashboard port
      # should not be exposed outside of your local network,
      # it will be accessible via ssh (see below)
      - 80:80
      - 443:443
    volumes:
      # Traefik needs access to docker.sock to monitor the containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # and here is the volume with the certificates
      - /data/letsencrypt:/letsencrypt
    networks:
      - traefik

  # for the sake of example let's connect whoami, a simple service that displays
  # information about the request in textual form
  whoami:
    image: "traefik/whoami"
    restart: always
    labels:
      # enable Traefik for this container
      - traefik.enable=true
      # set the Traefik network
      - traefik.docker.network=traefik
      # here is the fun part: adding a router and a rule for it.
      # in this case the router will be named whoami and will be
      # available at whoami.example.com (a placeholder domain).
      # be sure to add the name of the router; it has to
      # be unique, in our case it is whoami (comes after
      # traefik.http.routers.)
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      # set through which entry point the router will be accessible
      - traefik.http.routers.whoami.entrypoints=https
      # set certresolver
      - traefik.http.routers.whoami.tls.certresolver=simple-resolver
      # you don't actually have to specify the port explicitly:
      # Traefik is able to figure out which port the service is listening on.
      # If one container listens to several ports at the same time
      # (e.g. rabbitMq does this), you will have to create several
      # routers and specify the ports explicitly
    networks:
      - traefik

# and the networks
networks:
  traefik:
    name: traefik

That’s it, now you can run it and be happy that you did.

If you want to poke around the dashboard, you can do so by forwarding ports via ssh (substitute your own server address):

ssh -L 8080:localhost:8080 user@your-server

and open localhost:8080 in the browser

traefik dashboard
Traefik dashboard

Proxying external services

You know what this Traefik tutorial lacks? Information on external services!

Traefik can be used not only for services in Docker but also for external services. It supports load balancing out of the box: if you have a replicated service, you just specify all the hosts and Traefik will do the rest. 
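
For instance, listing several servers under one file-provider service gives you round-robin balancing by default (a hypothetical sketch; the service name and addresses are placeholders):

```yaml
# external/replicated.yml (hypothetical example)
http:
  services:
    replicated-api:
      loadBalancer:
        servers:
          # Traefik round-robins requests across these replicas
          - url: "http://10.0.0.11:8080"
          - url: "http://10.0.0.12:8080"
          - url: "http://10.0.0.13:8080"
```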

To proxy external services (outside the Docker network) you need to add a file provider in traefik.yml:

# traefik.yml

# ...

providers:
  docker:
    network: traefik
    exposedByDefault: false

  # add a file provider that will pull in data
  # from the external directory
  file:
    directory: ./external

To proxy services on the local network, you must add a docker-host service, because localhost inside a container points to the container's own network, not to the local network of the machine.

# docker-compose.yml

version: "3.8"

services:
  traefik:
    # ...
    networks:
      - traefik
      # add a shared network for the docker-host and Traefik
      - local

  docker-host:
    image: qoomon/docker-host
    cap_add: [ "NET_ADMIN", "NET_RAW" ]
    restart: always
    networks:
      - local

# ...

networks:
  traefik:
    name: traefik
  local:
# Dockerfile

FROM traefik:v2.5.2

WORKDIR /traefik

COPY ./traefik.yml .
# copy the folder with the external service configs
COPY ./external ./external

CMD ["traefik"]

And also the config of the external service itself (place all configs in the external directory).

# external/example.yml

http:
  services:
    example-web-client:
      loadBalancer:
        servers:
          # if the service is on an external host,
          # we simply write its ip or domain
          - url: "http://123.456.789.123:4716"
    example-api:
      loadBalancer:
        servers:
          # if it’s on localhost, then point at docker-host
          - url: "http://docker-host:8132"

  routers:
    example-web-client:
      entryPoints:
        - https
      # the web client will be accessible via any paths on the domain
      # (example.com is a placeholder)
      rule: "Host(`example.com`)"
      service: example-web-client
      tls:
        certResolver: simple-resolver
    example-api:
      entryPoints:
        - https
      # the api will only be available at example.com/api (and deeper paths);
      # no need to add any additional rules for the web client:
      # Traefik will route /api requests to the more specific router,
      # which works just like css specificity
      rule: "Host(`example.com`) && PathPrefix(`/api`)"
      service: example-api
      tls:
        certResolver: simple-resolver

Wildcard Certificates

Traefik can do this too! Let’s rewrite docker-compose.yml so that whoami is accessible at any subdomain (we’ll use example.com as a placeholder, i.e. *.example.com).

First, we have to add a wildcard-resolver to the Traefik config.

# traefik.yml

certificatesResolvers:
  # ...
  wildcard-resolver:
    acme:
      dnsChallenge:
        # specify the dns provider; in this example it is godaddy,
        # but Traefik knows how to work with many others
        provider: godaddy
      storage: /letsencrypt/acme.json
# docker-compose.yml

version: "3.8"

services:
  traefik:
    build: ./proxy
    container_name: traefik
    restart: always
    environment:
      # specify the api keys of our dns provider via environment variables
      # (these names are the ones the godaddy resolver expects)
      - GODADDY_API_KEY=${GODADDY_API_KEY}
      - GODADDY_API_SECRET=${GODADDY_API_SECRET}
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /data/letsencrypt:/letsencrypt
    labels:
      - traefik.enable=true
      - traefik.http.routers.api.entrypoints=http
    networks:
      - local
      - traefik

  whoami:
    image: "traefik/whoami"
    restart: always
    labels:
      - traefik.enable=true
      # change the rules for the router (example.com is a placeholder domain)
      - traefik.http.routers.whoami.rule="Host(`example.com`) || HostRegexp(`{subdomain:.+}.example.com`)"
      - traefik.http.routers.whoami.entrypoints=https
      # set wildcard-resolver
      - traefik.http.routers.whoami.tls.certresolver=wildcard-resolver
      # domains for which the resolver will obtain the certificates
      - traefik.http.routers.whoami.tls.domains[0].main=example.com
      - traefik.http.routers.whoami.tls.domains[0].sans=*.example.com
    networks:
      - traefik

networks:
    # ...

Middlewares

Traefik allows you to create middlewares and apply them to routers and even entire entry points!

For example, if you need to remove some service from search results, you can always just attach an X-Robots-Tag: noindex, nofollow header.

# docker-compose.yml

# ...
  whoami:
    image: "traefik/whoami"
    restart: always
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule="Host(`example.com`) || HostRegexp(`{subdomain:.+}.example.com`)"
      - traefik.http.routers.whoami.entrypoints=https
      - traefik.http.routers.whoami.tls.certresolver=wildcard-resolver
      # creating a middleware, where
      # noindex is its name and
      # headers is the middleware type
      - "traefik.http.middlewares.noindex.headers.customresponseheaders.X-Robots-Tag=noindex, nofollow"
      # adding our middleware to the router
      - traefik.http.routers.whoami.middlewares=noindex@docker

You can have a number of middlewares attached to your router; in that case they must be specified separated by commas:

- "traefik.http.routers.whoami.middlewares=noindex@docker, something@docker, example@file"

Middlewares can be applied not only to routers but also to entire entry points. In that case you still create the middleware in labels, and then attach it in the Traefik config itself.

# docker-compose.yml

# ...

  traefik:
    # ...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.entrypoints=http"
      - "traefik.http.middlewares.noindex.headers.customresponseheaders.X-Robots-Tag=noindex, nofollow"

# ...

And attach the middleware to the entry point in traefik.yml:
# traefik.yml

# ...

entryPoints:
  http:
    address: :80
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
          permanent: true
  https:
    address: :443
    http:
      # add http middleware
      middlewares:
        - "noindex@docker"

# ...


This is our short tutorial on Traefik. We hope you learned something new or at least grasped how powerful and multi-functional Traefik is. We could go on and on about it, but it’s better if you go and read the official documentation 🙂