How Digital Video as a Technology Works


In this article, we’ll explain what digital video is and how it works. We’ll be using a lot of examples, so even if you want to run away before reading something difficult – fear not, we’ve got you. So lean back and enjoy the video explanation from Nikolay, our CEO. 😉

Analog and digital video

Video can be analog or digital.

All of the real-world information around us is analog: waves in the ocean, sound, clouds floating in the sky. It’s a continuous flow of information that isn’t divided into parts and can be represented as waves. Humans perceive the world around them in exactly this analog form.

Old video cameras, which recorded onto magnetic cassettes, stored information in analog form. Reel-to-reel tape and cassette recorders worked on the same principle: magnetic tape was drawn past the player’s magnetic heads, which allowed sound and video to be played back. Vinyl records were also analog.

Such recordings were played back strictly in the order in which they were made. Editing them further was very difficult, and so was transferring them to the Internet.

With the ubiquity of computers, almost all video is now in digital format – zeros and ones. When you shoot video on your phone, it’s converted from analog to digital form and stored in memory; when you play it back, it’s converted from digital back to analog. This allows you to stream your video over a network, store it on your hard drive, and edit and compress it.

What a digital video is made of

Video consists of a sequence of pictures or frames that, as they change rapidly, make it appear as if objects are moving on the screen.

Here is an example of how a video clip is assembled from individual frames.

What is Frame Rate

Frames on the screen change at a certain rate. The number of frames per second is the frame rate (or framerate). The traditional standard for film is 24 frames per second; TV typically uses 25 or 30, and high-frame-rate cinema formats go up to 48 and beyond.

The higher the number of frames per second, the more detail you can see with fast-moving objects in the video. 

Check out the difference between 15, 30, and 60 FPS.
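The relationship between frame rate and how long each frame stays on screen is simple arithmetic. Here’s a small Python sketch of it:

```python
# How long a single frame stays on screen at common frame rates.
for fps in (15, 24, 30, 60):
    frame_duration_ms = 1000 / fps  # milliseconds per frame
    print(f"{fps} fps -> each frame is shown for about {frame_duration_ms:.1f} ms")
```

At 60 FPS a frame is visible for only about 16.7 ms, which is why fast motion looks so much smoother than at 15 FPS, where each frame lingers for almost 67 ms.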

What is a pixel

All displays on TVs, tablets, phones and other devices are made up of tiny glowing elements – pixels. Let’s say that each pixel can display one color (different manufacturers implement this differently).

To display an image, each pixel on the screen must glow a certain color.

Because screens are built this way, each frame of a digital video is a grid of colored dots, or pixels.


The number of such dots horizontally and vertically is called the picture resolution. A resolution is written as, for example, 1024×768: the first number is the number of pixels horizontally, the second vertically.

All frames in a video have the same resolution, which in turn is called the video resolution.

Let’s take a closer look at a single pixel. On the screen it’s a glowing dot of a certain color, but in the video file itself a pixel is stored as digital information (numbers). From this information the device knows what color the pixel should light up on the screen.

What are color spaces

There are different ways of representing the color of a pixel digitally, and these representations are called color spaces.

Color spaces are set up so that any color is represented by a point that has certain coordinates in that space. 

For example, the RGB (Red, Green, Blue) color space is a three-dimensional color space where each color is described by a set of three coordinates – one each for the red, green, and blue components.

Any color in this space is represented as a combination of red, green, and blue.

[Image: how color spaces work]

Here is an example of an RGB image decomposed into its constituent colors:

[Image: an RGB image decomposed into red, green, and blue channels]
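To make the idea concrete, here is a minimal Python sketch (no image libraries, just nested lists) of a tiny 2×2 “image” whose pixels are RGB triples, split into its three channel planes:

```python
# Each pixel is an (R, G, B) triple with values 0-255 per channel.
image = [
    [(255, 0, 0), (0, 255, 0)],        # red pixel, green pixel
    [(0, 0, 255), (255, 255, 255)],    # blue pixel, white pixel
]

# Decompose the image into its red, green, and blue components.
red_channel   = [[px[0] for px in row] for row in image]
green_channel = [[px[1] for px in row] for row in image]
blue_channel  = [[px[2] for px in row] for row in image]

print(red_channel)    # [[255, 0], [0, 255]]
print(green_channel)  # [[0, 255], [0, 255]]
print(blue_channel)   # [[0, 0], [255, 255]]
```

The white pixel contributes the maximum value to all three channels, while each pure-color pixel lights up only its own channel.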

There are many color spaces, and they differ in the number of colors that can be encoded with them and the amount of memory required to represent the pixel color data.

The most popular spaces are RGB (used in computer graphics), YCbCr (used in video), and CMYK (used in printing).

CMYK is very similar to RGB, but has 4 base colors – Cyan, Magenta, Yellow, Key or Black.

RGB and CMYK spaces are not very efficient, because they store redundant information.  

Video uses a more efficient color space that takes advantage of human vision.

The human eye is less sensitive to the color of objects than it is to their brightness.

[Image: how the human eye perceives brightness]

On the left side of the image, the colors of squares A and B are actually the same; it only seems to us that they are different, because the brain pays more attention to brightness than to color. On the right side, a bar of the same color connects the marked squares – so we (i.e., our brain) can easily tell that the color is, in fact, the same.

Using this feature of vision, it is possible to encode a color image by separating the luminance information from the color information. During compression, half or even three quarters of the color information can then simply be discarded (storing luminance at a higher resolution than color). A person won’t notice the difference, and we save substantially on storing the color information.

We’ll talk about exactly how color compression works in the next article.

The best-known space that works this way is YCbCr, along with its variants YUV and YIQ.

Here is an example of an image decomposed into its YCbCr components, where Y′ is the luma component, and Cb and Cr are the blue-difference and red-difference chroma components.

[Image: an image decomposed into Y′, Cb, and Cr components]

It is YCbCr that is used for color coding in video. Firstly, this color space allows the color information to be compressed; secondly, it is well suited for black-and-white video (e.g. surveillance cameras), since the color components (Cb and Cr) can simply be omitted.
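As an illustration, here is a Python sketch of converting a single RGB pixel to Y′CbCr using the widely used BT.601 full-range formulas (the exact coefficients vary between standards, so treat this as one common variant rather than the only correct one):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one full-range RGB pixel (0-255) to Y'CbCr (BT.601 coefficients)."""
    def clamp(v):
        return max(0, min(255, round(v)))
    y  =       0.299    * r + 0.587    * g + 0.114    * b  # luma
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b  # blue-difference chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return clamp(y), clamp(cb), clamp(cr)

# White and black carry all their information in luma: chroma stays neutral (128).
print(rgb_to_ycbcr(255, 255, 255))  # (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # (0, 128, 128)
```

Notice that for grayscale pixels both chroma components sit at the neutral value 128 – exactly why a black-and-white video can drop Cb and Cr entirely.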


What is Bit Depth

Bit depth (or color depth) is the number of bits used to store the color of a single pixel. The more bits, the more colors can be encoded – and the more memory each pixel occupies. The more colors, the better the picture looks.

For a long time, the standard color depth for video was 8 bits (Standard Dynamic Range, or SDR, video). Nowadays, 10-bit and 12-bit depths (High Dynamic Range, or HDR, video) are increasingly used.

[Image: comparing SDR and HDR video]

Keep in mind that different color spaces can encode a different number of colors with the same number of bits allocated per pixel.
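The “more bits, more colors” claim is easy to check with arithmetic. This Python sketch assumes three channels (as in RGB) at the given depth per channel:

```python
# Distinct levels per channel and total colors for a 3-channel (e.g. RGB) pixel.
for bits in (8, 10, 12):
    levels = 2 ** bits        # values one channel can take
    colors = levels ** 3      # combinations across three channels
    print(f"{bits}-bit: {levels} levels per channel, {colors:,} total colors")
```

Going from 8 to 10 bits per channel multiplies the number of representable colors by 64 – from about 16.8 million to over a billion.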

What is Bit Rate

Bit rate is the number of bits that one second of video occupies. To calculate the bit rate of uncompressed video, take the number of pixels in a frame, multiply it by the color depth, and multiply that by the number of frames per second:

1024 pixels × 768 pixels × 10 bits × 24 frames per second = 188,743,680 bits per second

That’s 23,592,960 bytes, 23,040 kilobytes, or 22.5 megabytes per second.

At that rate, a 5-minute video would take up 6,750 megabytes, or about 6.6 gigabytes.
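This arithmetic can be reproduced (and its units double-checked) directly:

```python
# Uncompressed bit rate for a 1024x768, 10-bit-per-pixel, 24 fps video.
width, height = 1024, 768
bits_per_pixel = 10
fps = 24

bits_per_second = width * height * bits_per_pixel * fps
print(bits_per_second)                       # 188743680 bits/s
bytes_per_second = bits_per_second // 8
print(bytes_per_second)                      # 23592960 bytes/s
megabytes_per_second = bytes_per_second / 1024 / 1024
print(megabytes_per_second)                  # 22.5 MB/s
five_minutes_gb = megabytes_per_second * 5 * 60 / 1024
print(round(five_minutes_gb, 2))             # 6.59 GB for 5 minutes
```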

This brings us to why video compression methods appeared. Without compression, it would be impossible to store that amount of information or transmit it over a network – YouTube videos would take forever to load.


This was a quick introduction to the world of video. Now that we know what video consists of and how it works at a basic level, we can move on to more complicated topics – which will still be presented in an accessible way 🙂

In the next article I’ll tell you how video compression works. I’ll talk about lossless compression and lossy compression. 


Advanced iOS App Architecture Explained on MVVM with Code Examples


How do you save your developers’ time, especially when you have to move between several projects? Is it possible to create a template of sorts for the new devs to use?

The Fora Soft iOS department decided to create a unified architecture for apps. In 16 years of work, we have developed more than 60 applications. We regularly had to spend weeks digging into code to understand the structure and operation of another project. Some projects we built as MVP, some as MVVM, some with our own pattern. Switching between projects and reviewing other developers’ code added several more hours to our development time. When we set out to create an iOS app architecture, we first defined the main goals to achieve:

Simplicity and speed. One of the main goals is to make developers’ lives easier. To do this, the code must be readable and the application must have a simple and clear structure. 

Quick immersion in the project. Outsourced development doesn’t provide much time to dive into a project. It is important that when switching to another project, it does not take the developer much time to learn the application code. 

Scalability and extensibility. The application under development must be ready for large loads and be able to easily accommodate new functionality. For this it is important that the architecture follows modern development principles, such as SOLID, and the latest versions of the SDK.

Constant development. You can’t make a perfect architecture all at once, it comes with time. Every developer contributes to it – we have weekly meetings where we discuss the advantages and disadvantages of the existing architecture and things we would like to improve.

The foundation of our architecture is the MVVM pattern with coordinators.

Comparing the popular MV(X) patterns, we settled on MVVM: it offered the best combination of development speed and flexibility.

MVVM stands for Model, View, ViewModel:

  • Model – provides data and the methods for working with it: requesting and receiving data, checking it for correctness, etc.
  • View – the layer responsible for the graphical presentation.
  • ViewModel – the mediator between the Model and the View. It is responsible for changing the Model in reaction to user actions performed on the View, and for updating the View with changes from the Model. Its main distinctive feature among the intermediaries in MV(X) patterns is the reactive binding between View and ViewModel, which significantly simplifies and reduces the code for passing data between these entities.

Along with the MVVM, we’ve added coordinators. These are objects that control the navigational flow of our application. They help to:

  • isolate and reuse ViewControllers
  • pass dependencies down the navigation hierarchy
  • define the application’s use cases
  • implement Deep Links

We also used the DI (Dependency Injection) pattern in the iOS development architecture. With DI, an object’s dependencies are supplied externally rather than created by the object itself. We use DITranquillity, a lightweight but powerful framework that lets you configure dependencies in a declarative style.

Let’s break down our advanced iOS app architecture using a note-taking application as an example. 

Let’s create the skeleton of the future application and implement the protocols needed for routing.

import UIKit

protocol Presentable {
    func toPresent() -> UIViewController?
}

extension UIViewController: Presentable {
    func toPresent() -> UIViewController? {
        return self
    }
}

protocol Router: Presentable {
    func present(_ module: Presentable?)
    func present(_ module: Presentable?, animated: Bool)
    func push(_ module: Presentable?)
    func push(_ module: Presentable?, hideBottomBar: Bool)
    func push(_ module: Presentable?, animated: Bool)
    func push(_ module: Presentable?, animated: Bool, completion: (() -> Void)?)
    func push(_ module: Presentable?, animated: Bool, hideBottomBar: Bool, completion: (() -> Void)?)
    func popModule()
    func popModule(animated: Bool)
    func dismissModule()
    func dismissModule(animated: Bool, completion: (() -> Void)?)
    func setRootModule(_ module: Presentable?)
    func setRootModule(_ module: Presentable?, hideBar: Bool)
    func popToRootModule(animated: Bool)
}

Configuring AppDelegate and AppCoordinator

A diagram of the interaction between the delegate and the coordinators

In AppDelegate, we create a container for DI. In the registerParts() method we add all of the application’s dependencies. Next we initialize the AppCoordinator, passing it the window and the container, and call its start() method, thereby handing control over to it.

class AppDelegate: UIResponder, UIApplicationDelegate {
    private let container = DIContainer()
    var window: UIWindow?
    private var applicationCoordinator: AppCoordinator?

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        registerParts()
        let window = UIWindow()
        let applicationCoordinator = AppCoordinator(window: window, container: container)
        self.applicationCoordinator = applicationCoordinator
        self.window = window
        window.makeKeyAndVisible()
        applicationCoordinator.start()
        return true
    }

    private func registerParts() {
        container.append(part: ModelPart.self)
        container.append(part: NotesListPart.self)
        container.append(part: CreateNotePart.self)
        container.append(part: NoteDetailsPart.self)
    }
}

The AppCoordinator determines which scenario the application should run. For example, if the user isn’t authorized, the authorization flow is shown; otherwise the main application scenario is started. In the case of the notes application, we have one scenario – displaying the list of notes.

We do the same as with the AppCoordinator, only instead of a window, we pass a router.

final class AppCoordinator: BaseCoordinator {
    private let window: UIWindow
    private let container: DIContainer

    init(window: UIWindow, container: DIContainer) {
        self.window = window
        self.container = container
    }

    override func start() {
        // The notes app has a single scenario – showing the list of notes
        openNotesList()
    }

    override func start(with option: DeepLinkOption?) {
        start()
    }

    func openNotesList() {
        let navigationController = UINavigationController()
        navigationController.navigationBar.prefersLargeTitles = true
        let router = RouterImp(rootController: navigationController)
        let notesListCoordinator = NotesListCoordinator(router: router, container: container)
        window.rootViewController = navigationController
        notesListCoordinator.start()
    }
}

In NotesListCoordinator, we resolve the dependency of the note-list screen using container.resolve(). Be sure to specify the type of the dependency explicitly, so the library knows which dependency to fetch. We also set up transition handlers for the following screens. The dependency setup itself will be shown later.

class NotesListCoordinator: BaseCoordinator {
    private let container: DIContainer
    private let router: Router

    init(router: Router, container: DIContainer) {
        self.router = router
        self.container = container
    }

    override func start() {
        setNotesListRoot()
    }

    func setNotesListRoot() {
        let notesListDependency: NotesListDependency = container.resolve()
        notesListDependency.viewModel.onNoteSelected = { [weak self] note in
            self?.pushNoteDetailsScreen(note: note)
        }
        notesListDependency.viewModel.onCreateNote = { [weak self] in
            self?.pushCreateNoteScreen(mode: .create)
        }
        router.setRootModule(notesListDependency.viewController)
    }
}

Creating a module

Each module in an application can be represented like this:

Module scheme in iOS application architecture

The Model layer in our application is represented by the Provider entity. Its layout is:

Provider scheme in apple app architecture

The Provider is the entity in our iOS app architecture that is responsible for communicating with services and managers in order to receive, send, and process data for the screen – e.g. contacting services to retrieve data from the network or from the database.

Let’s create a protocol for communicating with our provider, declaring the necessary fields and methods. We create a ProviderState structure in which we declare the data our screen will depend on. In the protocol, we declare a currentState field of type ProviderState, its observable counterpart state of type Observable&lt;ProviderState&gt;, and methods for changing the current state.

Then we create an implementation of our protocol, named after the protocol plus “Impl”. We mark currentState with @Published: this property wrapper creates an observable object that automatically reports its changes. BehaviorRelay could do the same thing, having both observable and observer properties, but its data-update flow took 3 lines where @Published takes only 1. We also set the access level to private(set), because the provider’s state must not change outside the provider. The state property observes currentState and broadcasts changes to its subscribers – namely, our future ViewModel. Don’t forget to implement the methods we’ll need when working on this screen.

struct Note {
    let id: Identifier<Self>
    let dateCreated: Date
    var text: String
    var dateChanged: Date?
}

protocol NotesListProvider {
    var state: Observable<NotesListProviderState> { get }
    var currentState: NotesListProviderState { get }
}

class NotesListProviderImpl: NotesListProvider {
    let disposeBag = DisposeBag()
    lazy var state = $currentState
    @Published private(set) var currentState = NotesListProviderState()

    init(sharedStore: SharedStore<[Note], Never>) {
        sharedStore.state.subscribe(onNext: { [weak self] notes in
            self?.currentState.notes = notes
        }).disposed(by: disposeBag)
    }
}

struct NotesListProviderState {
    var notes: [Note] = []
}
View-Model scheme in iOS development architecture

Here we create a protocol, just as for the provider, declaring fields such as viewInputData and events. viewInputData is the data that is passed directly to our view controller. In the ViewModel implementation we subscribe viewInputData to the provider’s state and convert it to the format the view needs using the mapToViewInputData function. We create an events enum, defining all the events that should be handled on the screen: view loading, button presses, cell selection, etc. We make events a PublishSubject, so that we can both subscribe to it and add new elements, subscribing to and handling each event.

protocol NotesListViewModel: AnyObject {
    var viewInputData: Observable<NotesListViewInputData> { get }
    var events: PublishSubject<NotesListViewEvent> { get }
    var onNoteSelected: ((Note) -> ())? { get set }
    var onCreateNote: (() -> ())? { get set }
}

class NotesListViewModelImpl: NotesListViewModel {
    let disposeBag = DisposeBag()
    let viewInputData: Observable<NotesListViewInputData>
    let events = PublishSubject<NotesListViewEvent>()
    let notesProvider: NotesListProvider
    var onNoteSelected: ((Note) -> ())?
    var onCreateNote: (() -> ())?

    init(notesProvider: NotesListProvider) {
        self.notesProvider = notesProvider
        self.viewInputData = notesProvider.state.map { $0.mapToNotesListViewInputData() }
        events.subscribe(onNext: { [weak self] event in
            switch event {
            case .viewDidAppear, .viewWillDisappear:
                break
            case let .selectedNote(id):
                self?.noteSelected(id: id)
            case .createNote:
                self?.onCreateNote?()
            }
        }).disposed(by: disposeBag)
    }

    private func noteSelected(id: Identifier<Note>) {
        if let note = notesProvider.currentState.notes.first(where: { $0.id == id }) {
            onNoteSelected?(note)
        }
    }
}

private extension NotesListProviderState {
    func mapToNotesListViewInputData() -> NotesListViewInputData {
        return NotesListViewInputData(notes: notes.map { ($0.id, NoteCollectionViewCell.State(text: $0.text)) })
    }
}
View scheme in iOS mobile architecture

In this layer, we configure the screen’s UI and its bindings to the view model. The View layer is represented by a UIViewController. In viewWillAppear(), we subscribe to viewInputData and hand the data to render(), which distributes it to the appropriate UI elements.

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    let disposeBag = DisposeBag()
    viewModel.viewInputData.subscribe(onNext: { [weak self] viewInputData in
        self?.render(data: viewInputData)
    }).disposed(by: disposeBag)
    self.disposeBag = disposeBag
}

private func render(data: NotesListViewInputData) {
    var snapshot = DiffableDataSourceSnapshot<NotesListSection, NotesListSectionItem>()
    snapshot.appendSections([.notes]) // assuming a single section for the notes list
    snapshot.appendItems(data.notes.map { NotesListSectionItem.note($0.0, $0.1) })
    dataSource.apply(snapshot) // the screen's diffable data source
}

We also add event bindings, either with RxSwift or the basic way through selectors. 

@objc private func createNoteBtnPressed() {
    viewModel.events.onNext(.createNote)
}

Now that all the components of the module are ready, let’s link the objects together. A module is a class conforming to the DIPart protocol, which primarily serves to maintain the code hierarchy by combining parts of the system into a single class and registering some (but not necessarily all) of its components in the container. We implement the required load(container:) method, where we register our components.

final class NotesListPart: DIPart {
    static func load(container: DIContainer) {
        container.register { SharedStore<[Note], Never>(initialState: []) } // initialState is illustrative
            .as(SharedStore<[Note], Never>.self, tag: NotesListScope.self)
        container.register { NotesListProviderImpl(sharedStore: by(tag: NotesListScope.self, on: $0)) }
            .as(NotesListProvider.self)
        // the view model and view controller are registered in the same way
        container.register(NotesListDependency.init)
    }
}

struct NotesListDependency {
    let viewModel: NotesListViewModel
    let viewController: NotesListViewController
}

We register components with the container.register() method, passing our object and specifying the protocol through which it will communicate, as well as the object’s lifetime. We do the same with all the other components.

Our module is ready – don’t forget to add it to the container in the AppDelegate. Now let’s go to the NotesListCoordinator, to the function that opens the list. We take the required dependency through container.resolve(), being sure to declare the variable’s type explicitly. Then we create the onNoteSelected and onCreateNote event handlers and pass the view controller to the router.

func setNotesListRoot() {
    let notesListDependency: NotesListDependency = container.resolve()
    notesListDependency.viewModel.onNoteSelected = { [weak self] note in
        self?.pushNoteDetailsScreen(note: note)
    }
    notesListDependency.viewModel.onCreateNote = { [weak self] in
        self?.pushCreateNoteScreen(mode: .create)
    }
    router.setRootModule(notesListDependency.viewController)
}

Other modules and navigation are created following the same steps. In conclusion, the architecture isn’t without flaws. A couple of problems are worth mentioning: changing one field in viewInputData forces the whole UI to update, rather than just the affected elements; and the common flow for working with UITabBarController and UIPageViewController is still underdeveloped.


With this iOS app architecture in place, our work became much easier. It’s not so scary anymore to cover for a colleague on vacation or take on a new project. Colleagues can look up solutions for this or that feature without puzzling over how to implement it so that it works properly with our architecture.

Over the past year, we have already added shared storage, error handling for coordinators, and improved routing logic – and we aren’t gonna stop there.


Jesse from Vodeo, ‘Fora Soft is in a perfect spot of good pricing of projects and development.’

Our copywriter Nikita talked to Jesse Janson about his experience of working with Fora Soft.

Jesse hails from the Janson family. Movie nerds might know the family by their brand, Janson Media. After all, they’ve been in the movie renting business since 1989.

Jesse’s app isn’t officially released yet, but he certainly does have something to say about his work with Fora.

Today we have Jesse Janson with us. Jesse is the CEO of Vodeo, am I correct?

Yes. So my name is Jesse Janson, and I work at Janson Media, which is our TV and film distribution company. I am in charge of acquisitions and business development. And Vodeo is a mobile-first video streaming rental service that we worked on with Fora Soft – that’s a separate company. Yes, I’m president of that company.

Tell me a little bit about Vodeo. What is it? How did it come to be?

Vodeo is a rental only mobile service, an application that we’re going to launch on iPhones first. The service is different and unique in the sense that the users can rent movies or TV episodes for a limited period of time. And for this, they use a credit-based system within the app, which allows the cost per movie rental or episode rental to be much lower than any other application or video rental service currently available in the market. So, within Vodeo, our users can pre-buy credits within the app that they can then use to rent movies and rent TV episodes. And each credit to the user only costs them about $0.10 right now.

Was Fora Soft your first choice?

I think it was probably our first. Definitely the first and only company we’ve worked with on Vodeo and on the development of the application. I certainly researched and spoke to other companies as well.

But after speaking with Fora Soft, we decided to work together, and it’s been a good decision.

Tell me why you turned to Fora Soft? Why us?

The previous work that Fora Soft has done was impressive, and that was an important factor in our decision. Also, the communication was excellent. So I understood the scope of the project, the estimated time, work, and costs that it would take to get the project up and developed. And yeah, it’s been really good. The communication mostly. And then also the previous work was a big factor.

Share your before and after working with us, like what it used to be before you worked with Fora Soft and what it is now?

Sure. Well, before the Vodeo app, it was really just an idea on paper and just a concept. So we really needed Fora Soft to help us build an MVP and get that up and running. And then once we sort of had that tested and we had a first version, we worked with Fora Soft to update it. It’s still private, just for us to look at. And we went through a few versions of that, and that whole process has been excellent. And now we’re pretty much to a point where I believe we are planning to publicly launch this year, so maybe June.

Congratulations on that. So I believe since you haven’t officially launched yet, you don’t have any measurable figures – number of crashes, profit, anything like that, right?

No, not yet. Right now it’s very much private, and in beta. We probably have a pool of maybe 20 private users who have been testing the app on their phones – registering, renting movies and making sure everything works properly. And that whole process has been great so far. The feedback has been very good from everyone in our small private circle.

Let’s talk a little bit about difficulties. If there were any difficulties with working with us, please tell me honestly.

Yeah, no, I haven’t encountered any difficulties yet. Working with Fora Soft, every question or product feature that I’ve requested or asked about has been easily addressed by Fora Soft. I never got the impression that Fora Soft couldn’t implement a feature or an update to where we are so far with the app. Even discussing future possibilities of updates and features on our future roadmap down the line. Fora Soft has said all of it is possible, and we haven’t had any difficulties in that regard in terms of the development and adding new features. It’s been excellent.

What’s the situation where you would think like, all right, I really want this feature, but how do I implement it? Can they implement it? And you talk to Fora Soft and Fora Soft was, like, yes, sure. Easy peasy. We can do that.

Yeah, that’s been the case. Sometimes it’s not a quick “Oh, that’s easy”. Sometimes the project manager I’m working with at Fora Soft will have to bring it to the developer team and ask them if it’s a feature that could be done. Then they will come back to me and provide me with an estimate and explain how long it will take to implement the process. So, yeah, it’s been really good. Some features are easy and quick “Oh, yeah, we could do that quickly and easily”. And some are “let me just check with the developers and see if it’s possible”. And that’s always been “Yeah, it’s possible. We’re able to do it”. So it’s been great.

Qualities like determination and communication and professionalism are really important when it comes to project type of work. Can you please rate us on those qualities on a scale of ten and maybe come up with other qualities if you need to.

Yeah, I give a 10/10 across the board on everything. It’s been really great.

I really enjoy personally working with a project manager.

We’ve worked with two project managers so far at Fora Soft. We worked for a while with our first project manager, maybe a year, and she was excellent. And then there was a transition. She left or moved on and we worked with a new project manager, and that transition was super seamless and easy. A new project manager picked up right where she left off.

Working with a project manager and communicating with them has probably been the most valuable to me. Especially because I’m not a developer by any means, and I’m not able to speak developer language, code and whatnot. But working with a project manager helps that a lot. So I can communicate what I would like to see and have done.They know how to communicate with the developers and the designers and then bring that back to me and let me know how it goes.

I’ll make sure to forward this feedback to Vladimir. He’s a cool guy.

Yeah. Excellent.

Okay. And would you recommend Fora Soft to your friends or colleagues?

Yeah, I’d definitely recommend them. I think Fora Soft certainly has the skills and know-how to develop applications like this very well. Also, I felt like the price in working with Fora Soft was very competitive across the market, and that we liked as well. 

There’s a trade-off. Once you go too low in price, usually the quality of the work reflects that and is poor. And when the price is very high, it doesn’t necessarily mean that the quality will be that much greater. So, I think Fora Soft is in a perfect spot of good pricing of projects and development. At the same time, they offer very high quality in the work that gets done.

Do you have anything else to add?

No, it’s all been great.

I’m in New York, Fora Soft is in Russia, there’s a time difference. Even with that, the communication’s been great. I don’t feel like we’re that far apart. The work’s been excellent.

It might be around a couple of years of working on this application. We’re pretty much at the point where we’re ready to put it out in public and hopefully get good feedback.

Thanks a lot, as a movie geek myself, I really hope everything will work out. Bye!


How a Technical Project Manager Saves Your Money and Nerves


When entrusting a project to a third party for development, many people ask, “Why can’t we do it ourselves?” You probably can – but how effective will it be?

By outsourcing the work to a team of professionals headed by a project manager who takes on some of the risks, a business not only saves money but also gains the company’s expertise and experience. Let’s look at the benefits of this approach and why a project needs a project manager.

Who is an IT project manager?

A project manager is a person who organizes the smooth operation of all development processes. He ensures communication between the client and the team, translates technical requirements into comprehensible language, plans the development, and ensures the timely release of the product to the market.

What tasks a manager performs at Fora Soft

At Fora Soft, a manager is a full member of the team. Our PM isn’t really a manager per se – he’s the person who handles processes and communication. Without him, as without any other team member, it is impossible to imagine delivering a quality product on time.

In our company, the manager takes care of all phases of the project’s life cycle, from initiation to closing and handing over the result to the customer. The PM:

  • assembles the team 
  • prepares the infrastructure for the project start 
  • checks the requirements and plans the sequence of tasks so it is convenient for the client and the team
  • prepares an IT project plan and schedule, so the team and the client always know the demo dates and the end date of the project in advance 

After a project is implemented, there usually comes a stage that can be called support. Even after the main work is complete, the project manager stays in touch with the client, quickly answering questions and engaging developers to solve problems if something goes wrong. We care about our customers, so even if a project is already live and the contract is closed, the PM is always happy to help. For example, after finishing one of the projects, a client came to us with a problem: the cost of maintaining the server had increased significantly. The PM assembled a team, calculated the cost of moving to other servers and the cost of using them, and then moved the project to the server that was right for the client. 

Why hire a team led by a PM?

– PM has a technical background and experience in handling projects of various complexity

The managers at Fora Soft understand the technical background of certain systems and have experience in bringing products to the market.

Some of our IT project managers are former developers and analysts with tech-leadership experience. This makes it possible to correctly estimate labor costs for technical tasks and to prioritize work to achieve the goal.

In addition, our managers are constantly improving their skills within the company: they know how to lead projects according to agile methodologies such as Scrum and Kanban, and they keep improving their English, technical skills, analytics, and sales knowledge. Each manager has a personal development plan, which includes a set of tools that need to be learned. Once a month, managers invite their colleagues from design, testing, or development to stay up to date on current technology and trends.

The result is that a project manager at Fora Soft is a versatile professional who can explain technical points and communicate intelligently with stakeholders to achieve the best result.

– The PM is trained to work with the team and will always resolve any problems that arise within the development team

Globally, a business has two options: find individual performers “from the outside” on freelance marketplaces or through friends, or hire a professional team.

In the first case, the risks for the business will be significantly higher. Freelancers cannot guarantee quality work delivered on time, they answer only to themselves, and most often they do not work in teams, so you need to hire each specialist separately. The designer must create product layouts, the developer must bring ideas to life, and the tester must catch bugs before release. You will also have to monitor the quality of their work yourself, which takes additional time. Such fragmentation can cost a business dearly. 

In the second case, the manager takes care of all team processes. Every project is handled by a full-fledged team which specializes not in a wide range of technologies but in a particular multimedia field where each team member has relevant experience. 

The project manager at Fora Soft manages the resources, intelligently redistributing them when necessary. There is no downtime: every developer’s work on the project is planned at least 2 months in advance. The business owner does not have to bear the risks.

In addition, the manager builds relationships within the team and motivates each member to contribute to the success of the product and suggest improvements. The business owner does not have to micromanage: they have bought themselves time to focus on more important things, such as the global vision for the product.

– PM will help save money and nerves

Any development, especially large and complex projects, involves budgeting and risk. We understand this, so we structure our work in such a way as to make the maximum number of useful features for a minimum cost and meet the customer’s deadlines. Moreover, our certified IT Project managers are always in touch with the client and can promptly answer any questions about the current status of the project. 

To understand how a PM can help you save money, let’s flip the perspective: instead of describing what the project manager does, let’s look at what can happen without one.

So let’s imagine a team without a manager.

In our hypothetical project, creating an online cinema, let’s assume there is a team of the following IT project roles: three developers, a designer, a tester and an analyst. Each of them does their job.

After a while, it turns out that one developer has finished his task ahead of estimate and, not wanting to sit idle, has taken on the development of the user profile. But we have a team of three developers, and all of them decide to do exactly the same thing, forgetting to warn each other. This comes out too late, when QA starts testing the task. QA realizes that testing it will be difficult, postpones it, and moves on to what is already ready. Development has just gotten more expensive, and the deadline has shifted by a month.

Hence the need for someone who will maintain a level of transparency in the team so this doesn’t happen again, and everyone stays on the same page, following the plan and knowing what “plan B” is at the slightest change.

In order to do this, you need to understand how long a particular task will take. Someone needs to take charge of the planning and accounting for the risks. That’s why you need a project manager for an IT project delivery: to keep the budget from going to waste and lead the product to a successful release. 


So, here is why a business needs a professional team headed by a project manager:

  • An IT project manager has specialized knowledge and experience, combining technical and management skills.
  • He’s a single point of contact for everyone. All information passes through the manager; he is always aware of the project progress and assumes the IT project risks and resource allocation. 
  • The manager helps save money. This is not only a financial benefit resulting from competent planning and risk mitigation, but also freedom from worrying that something might go wrong.

By the way, without specialized knowledge in management and technology, all this is quite difficult to do. Our managers have already been through professional training and “taken all the bumps” so that the business doesn’t have to. Wanna find out more? Visit our contact page, so our Sales managers can talk to you and explain everything.


Video Conferencing System Architecture: P2P vs MCU vs SFU?

Even though WebRTC is a protocol developed to do one job, to establish low-ping, high-security multimedia connections, one of its best features is flexibility. Even for a task as complex as creating a video conferencing app, the options are open.

You can go full p2p, deploy a media server backend (there is quite a variety of those), or combine these approaches to your liking. You can pick desired features and find a handful of ways to implement them. Finally, you can freely choose between a solid backend or a scalable media server grid created by one of the many patterns. 

With all of these freedoms at your disposal, picking the best option might be tricky, and it will be. Let us clarify the P2P vs MCU vs SFU fight a bit for you.

What is P2P?

Let’s imagine it’s Christmas. Or any other mass gift giving opportunity of your choice. You’ve got quite a bunch of friends scattered across town, but everyone is too busy to throw a gift exchange party. So, you and your besties agree everyone will get their presents once they drop by each other.

When you want to exchange something with your peers, no matter whether it’s a Christmas gift or a live video feed, the obvious way is definitely peer to peer. Each of your friends comes to knock on your door and get their box, and then you visit everyone in return. Plain and simple.

For a WebRTC chat, that means all call parties are directly connected to each other, with the host only serving as a meeting point (or an address book, in our Christmas example).

This pattern works great as long as:

  • your group is rather small
  • everyone is physically able to reach each other

Every gift from our example requires some time and effort to be dealt with: at least, you have to drive to a location (or to wait for someone to come to you), open the door, give the box and say merry Christmas.

  • If there are 4 members in a group, each of you needs time to handle 6 gifts – 3 to give, 3 to take.
  • When there are 5 of you, 8 gifts per person are to be taken care of.
  • Once your group increases to 6 members, your Christmas to-do list now features 10 gifts.

At one point, there will be too many balls in the air: the amount of incoming and outgoing gifts will be too massive to handle comfortably.

Same for video calls: every single P2P stream should be encoded and sent, or decoded and displayed, in real time, each operation requiring a fraction of your system’s performance, network bandwidth, and battery capacity. This fraction becomes quite sizable for higher-quality video: while a 2-on-2 or even a 5-on-5 conference will work decently on any relatively up-to-date device, a 10-on-10 peer-to-peer FullHD call would eat up around 50 Mbps of bandwidth and put quite a load even on a mid-to-high-tier CPU.
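To see where that 50 Mbps figure comes from, here is a quick back-of-the-napkin sketch. The ~2.8 Mbps per FullHD stream is our own assumption (a typical WebRTC bitrate), not a number from the article:

```python
def p2p_client_load(participants: int, mbps_per_stream: float = 2.8):
    """Per-client stream count and bandwidth in a full-mesh P2P call.

    Every client sends its stream to each peer and receives one from
    each peer, so the per-client load grows linearly with call size
    (and the call's total traffic grows quadratically).
    """
    outgoing = participants - 1
    incoming = participants - 1
    bandwidth_mbps = (outgoing + incoming) * mbps_per_stream
    return outgoing, incoming, bandwidth_mbps

# 10-on-10 FullHD call: 9 out + 9 in = 18 streams,
# roughly 50 Mbps per client, matching the estimate above
out, inc, mbps = p2p_client_load(10)
print(out, inc, round(mbps))  # 9 9 50
```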

p2p architecture
peer-to-peer architecture 

Now regarding the physical ability to reach. Imagine one of your friends has recently moved to an upscale gated community. They are free to drive in and out – so, they’ll get to your front door for their presents, but your chances to reach their home for your gift are scarce.

WebRTC-wise, we are talking about corporate networks with NATs and/or VPNs. You can reach most hosts from inside such a network while being as good as unreachable from outside. In any case, your peers might be unable to see you, or vice versa, or both.

And finally – if all of you decide to pile up the gifts for a fancy Instagram photo, everyone will have to create both the box heap and the picture themselves: the presents are at the recipients’ homes.

WebRTC: peer-to-peer means no server side recording (or any other once-per-call features). At all.

Peer-to-Peer applications examples

Google Meet and secure mobile video calling apps without any server-side features like video recording.

That’s where media servers come to save the day.

WebRTC media servers: MCU and SFU

Back to the imaginary Christmas. Your bunch of friends is huge, so you figure out you’ll spend the whole holiday season waiting for someone or driving somewhere. To save your time, you pay your local coffee shop to serve as a gift distribution node. 

From now on, everyone in your group needs to reach a single location to leave or get gifts – the coffee shop.

That’s how the WebRTC media servers work. They accept calling parties’ multimedia streams and deliver them to everyone in a conference room. 

A while ago, WebRTC media servers used to come in two flavors: SFU (Selective Forwarding Unit) and MCU (Multipoint Conferencing / Multipoint Control Unit). As of today, most commercial and open-source solutions offer both SFU and MCU features, so both terms now describe features and usage patterns rather than product types.

What are those?

SFU / Selective Forwarding Unit

What is an SFU?

An SFU sends everyone’s separate video streams to everyone.

The bartender at the coffee shop keeps track of all the gifts arriving at the place, and calls their recipients if there’s something new waiting for them. Once you receive such a call, you drop by the shop, have a ‘chino, get your box and head back home.

The bad news is: the bartender calls you about one gift at a time. So, if there are three new presents, you’ll have to hit the road three times in a row. If there are twenty… you probably get the point. Alternatively, you can visit the place periodically, checking for new arrivals yourself.

Also, as your gift marathon flourishes, coffee quality degrades: the more people join in, the more time and effort the bartender dedicates to distributing gifts instead of caffeine. Remember: one gift – one call from the shop.

Media Server working as a Selective Forwarding Unit allows call participants to send their video streams once only – to the server itself. The backend will clone this stream and deliver it to every party involved in a call.

With SFU, every client consumes almost two times less bandwidth, CPU capacity, and battery power than it would in a peer-to-peer call:

  • for a 4-user call: 1 outgoing stream, 3 incoming (instead of 3 in, 3 out for p2p)
  • for a 5-user call: 1 outgoing stream, 4 incoming (would be 4 and 4 in p2p)
  • for a 10-user call: 1 out, 9 in (9 in, 9 out – p2p)
sfu architecture
SFU architecture

The drawback kicks in as the number of users per call approaches 20. “Selective” in SFU stands for the fact that this unit doesn’t forward media in bulk: it delivers media on a per-request basis. And, since WebRTC is always a p2p protocol even when a server is involved, every concurrent stream is a separate connection. So, for a 10-user video meetup a server has to maintain 10 ingest (“video receiving”) connections and 90 outgoing ones, each requiring computing power, bandwidth and, ultimately, money. But…
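Those connection counts are easy to double-check with a tiny sketch (illustrative only):

```python
def sfu_connections(participants: int):
    """Connections an SFU server maintains for one call.

    Each participant sends one ingest stream to the server;
    the server forwards every stream to every *other* participant.
    """
    ingest = participants
    outgoing = participants * (participants - 1)
    return ingest, outgoing

# 10-user call: 10 ingest + 90 outgoing = 100 connections total
print(sfu_connections(10))  # (10, 90)
```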

SFU Scalability

Once the coffee shop owner grows angry with the gift exchange intensity, you can take the next step, and pay some more shops in the neighborhood to join in. 

Depending on a particular shop’s load, some of the gift givers or receivers can be routed to another, less crowded one. The grid might grow almost infinitely, since every shop can forward a package either to its addressee or to an alternative pickup location.

Forwarding rules are perfectly flexible. Like, Johnson’s coffee keeps gifts for your friends with first names starting A – F, and Smartducks is dedicated to parcels for downtown residents, while Randy’s Cappuccino will forward your merry Christmas to anyone who sent their own first gift since last Thursday.

The one stream – one connection approach of the SFU pattern has a feature that beats almost all of its cons. That feature is scalability.

Just like you forward a user’s stream to another participant, you can forward it to another server. With this in mind, the back end WebRTC architecture can be adjusted to grow and shrink depending on the number of users, conferences and traffic intensity.

E.g., if too many clients request a particular stream from one host, you can spawn a new one, clone the stream there and distribute it from a new unoccupied location.

Or, if you expect a rush entrance to a massive conference (e.g., some 20-30 streaming users and hundreds of view-only subscribers), you can assign two separate media server groups: one to handle incoming streams, the other to deliver them to subscribers. In this case, any load spike on the viewing side will have zero effect on video ingest, and vice versa.
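One way to picture this routing-and-spawning logic is a toy model (purely illustrative; real media servers are far more sophisticated, and all names here are made up): route each new subscriber to the least-loaded node, and spawn a fresh node when everything is full.

```python
from collections import defaultdict

class SfuGrid:
    """Toy model of SFU scaling: send each new subscriber to the
    least-loaded distribution node; spawn a node when all are full."""

    def __init__(self, capacity_per_node: int = 3):
        self.capacity = capacity_per_node
        self.nodes = defaultdict(int)  # node name -> subscriber count
        self.nodes["node-0"] = 0

    def subscribe(self) -> str:
        # pick the least-loaded node
        node = min(self.nodes, key=self.nodes.get)
        if self.nodes[node] >= self.capacity:
            # everything is full: spawn a fresh node
            node = f"node-{len(self.nodes)}"
        self.nodes[node] += 1
        return node

grid = SfuGrid(capacity_per_node=2)
assignments = [grid.subscribe() for _ in range(5)]
print(assignments)  # ['node-0', 'node-0', 'node-1', 'node-1', 'node-2']
```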

SFU applications examples

Skype and almost every other mobile messenger with video conferencing and call recording capabilities employ the SFU pattern on the backend.

Receiving other users’ video as separate streams provides capabilities for adaptive UX, allows per-stream quality adjustment and improves overall call stability in a volatile environment of a cellular network.

MCU / Multipoint Conferencing Unit

What is an MCU?

An MCU merges all streams into one and sends just one stream to each participant.

Giving gifts is making friends, right? Now almost everyone in town is your buddy and participates in this gift exchange. The coffee shop hosting the exchange comes up with a great idea: why don’t we put all the presents for a particular person in a huge crate with their name on it. Moreover, some holiday magic is now involved: once there are new gifts for anyone, they appear in their respective boxes on their own.

Still, making Christmas magic seems to be harder work than making coffee: they might even need to hire more people to cast spells on the gift crates. And even with extra wizards on duty, there is zero chance you can rearrange the crates’ content order for your significant other to see your gift first – everyone gets the same pattern.

Well, some of the MCU-related features really do ask for puns on an acronym shared with the Marvel Cinematic Universe. Something marvelous is definitely involved. A media server in an MCU role has to keep only 20 connections for a 10-user conference, instead of the 100 links of an SFU: one ingest and one output per user. How come? It merges all the videos and audios a user needs to receive into a single stream and delivers it to that particular client. That’s how Zoom’s conferences are made: with MCU, even a lower-tier computer is capable of handling a 100-user live call.
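Comparing the server-side connection math of the two patterns makes the difference obvious (illustrative sketch):

```python
def server_connections(participants: int, pattern: str) -> int:
    """Connections the media server keeps for one call.

    SFU forwards every stream to every other participant;
    MCU composites everything into one stream per participant,
    so it keeps just one ingest and one output per user.
    """
    if pattern == "sfu":
        return participants + participants * (participants - 1)
    if pattern == "mcu":
        return participants + participants
    raise ValueError(f"unknown pattern: {pattern}")

print(server_connections(10, "sfu"))  # 100
print(server_connections(10, "mcu"))  # 20
```

The flip side, as described below, is that compositing costs the MCU far more CPU per call than mere forwarding.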

Magic obviously comes at a price, though. Compositing multiple video and audio streams in real time is *much* more of a performance guzzler than any forwarding pattern. Even more, if you have to somehow exclude one’s own voice and picture from the merged grid they receive – for each of the users. 

Another drawback, though a mitigable one, is that the composited grid is the same for everyone who receives the video, no matter what their screen resolution or aspect ratio is. If you need different layouts for mobile and desktop devices, you’ll have to composite the video twice.

MCU scalability

Compared to SFU, the MCU pattern has considerably less scaling potential in a WebRTC video call: video compositing with sub-second delays does not allow on-the-fly load redistribution within a particular conference. Still, one can autospawn additional server instances for new calls in a virtualized environment or, for even better efficiency, assign an additional SFU unit to redistribute the composited video.

MCU applications examples

Zoom and a majority of its alternatives for massive video conferencing run off MCU-like backends. Otherwise, WebRTC video calls for 25+ participants would only be available for high-end devices.

TL;DR: what do I use, and when?

~1-4 users per call – P2P

Pros:

  • lowest idling costs
  • easiest scaling
  • shortest TTM (time to market)
  • potentially the most secure

Cons:

  • for 5+ user calls – quality might deteriorate on weaker devices
  • highest bandwidth usage (may be critical for mobile users)
  • no server side recording, video analytics or other advanced features

Use cases:

  • private / group calls
  • video assistance and sales

5-20 users per call – SFU

Pros:

  • easily scalable as the number of simultaneous calls grows
  • retains UX flexibility while providing server side features
  • can have node redundancy by design: thus, most rush-proof

Cons:

  • pretty traffic- and performance-intensive on the client side
  • might still require a compositing MCU-like service to record calls 

Use cases:

  • E-learning: workshops and virtual classrooms
  • Corporate communications: meeting and pressrooms

20+ users per call – MCU / MCU + SFU

Pros:

  • least load on client side devices
  • capable of serving the biggest audiences
  • easily recordable (server side / client side)

Cons:

  • biggest idling and running costs
  • one call’s capacity is limited by the performance of a particular server
  • least customizable layout

Use cases:

  • Large event streaming
  • Social networking
  • Online media 


P2P, MCU, and SFU are the main architectural patterns for WebRTC applications. You can read more about WebRTC on our blog:

How to minimize latency to less than 1 sec for mass streams?
WebRTC in Android.
WebRTC security in plain language for business people.

Got another question not covered here? Feel free to contact us using this form, and our professionals will be happy to help you with everything.


What is Traefik and how to use it? Tutorial with Code Examples

traefik tutorial

In this Traefik tutorial, we will show you how to proxy sites and APIs in a few examples, automate certificate issuance, and even add some middleware (to add headers, for example).

Please note that we use the hash symbol (#) in the code examples where we want to explain something.

What is Traefik?

It’s a reverse proxy designed to work with Docker. It allows you to proxy services in containers in a very simple and declarative way. At first you might be intimidated by labels, but you will get used to them 🙂

Why Traefik and not nginx, for example? We think that Traefik is simpler to manage. It only needs docker-compose (instead of docker-compose plus nginx.conf, as with nginx), yet still fulfills its function.

Create a Traefik config

To begin, let’s create a Traefik config:

# traefik.yml

# set log level
log:
  level: DEBUG

# enable the dashboard with useful information
api:
  dashboard: true
  insecure: true

# providers: in our case that's what we proxy.
# at first we only need the Docker provider;
# how to proxy external services is shown below
providers:
  docker:
    # here's where you specify the network a service must join
    # to get "picked up" by Traefik
    network: traefik
    # turn off "auto-discovery" of containers by Traefik,
    # otherwise it will try to proxy all containers
    exposedByDefault: false

# entry points are basically just ports through which requests
# reach Traefik and therefore the services it proxies
entryPoints:
  # this is the name of the entry point for regular http traffic,
  # usually called http or web, but you can put anything in here
  http:
    # the port of the entry point
    address: :80
    # set up a redirect for all requests to the https entry point
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
          permanent: true
  # create an https entry point on port 443, usually called
  # https or websecure
  https:
    address: :443

# ssl certificate resolvers: used to get certificates for domains.
# We have just one for now; later we will add another, a wildcard resolver
certificatesResolvers:
  simple-resolver:
    acme:
      # acme challenge type; we need it so that Let's Encrypt can verify
      # that the domain is ours. We specify the entry point
      # on which the challenge will run
      httpChallenge:
        entryPoint: http
      # Let's Encrypt needs your email; it will send information there,
      # e.g. that your certificate is about to expire
      email: you@example.com  # placeholder, put your own
      # this is where Traefik will put the certificates;
      # it's better to mount a volume here, which we'll do below
      storage: /letsencrypt/acme.json

accessLog: true
# Dockerfile

FROM traefik:v2.5.2

WORKDIR /traefik

COPY ./traefik.yml .

CMD ["traefik"]

# docker-compose.yml

version: "3.8"

services:
  traefik:
    build: .
    container_name: traefik
    restart: always
    ports:
      # open ports for http and https; the Traefik dashboard port
      # should not be exposed outside of your local network -
      # it will be accessible via ssh (see below)
      - 80:80
      - 443:443
    volumes:
      # Traefik needs access to docker.sock to monitor the containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # and here is the volume for the certificates
      - /data/letsencrypt:/letsencrypt
    networks:
      - traefik

  # for the sake of example let's connect whoami, a simple service
  # that displays information about the request in textual form
  whoami:
    image: "traefik/whoami"
    restart: always
    labels:
      # enable Traefik for this container
      - traefik.enable=true
      # set Traefik network
      - traefik.docker.network=traefik
      # here is the fun part: adding a router and a rule for it.
      # In this case the router is named whoami (the part that comes
      # after traefik.http.routers.) and the name must be unique;
      # the service will be available at the host below
      # (a placeholder domain - substitute your own)
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      # set through which entry point the router will be accessible
      - traefik.http.routers.whoami.entrypoints=https
      # set certresolver
      - traefik.http.routers.whoami.tls.certresolver=simple-resolver
      # you don't actually have to specify the port explicitly:
      # Traefik is able to figure out which port the service listens on.
      # If one container listens on several ports at once
      # (e.g. RabbitMQ does this), you will have to create several
      # routers and specify the ports explicitly
    networks:
      - traefik

# and the networks
networks:
  traefik:
    name: traefik

That’s it, now you can run it and be happy that you did.

If you want to poke around the dashboard, you can do so by forwarding ports via ssh:

ssh -L 8080:localhost:8080 user@your-server

and open localhost:8080 in the browser

traefik dashboard
Traefik dashboard

Proxying external services

You know what this Traefik tutorial lacks? Information on external services!

Traefik can be used not only for services in Docker, but also for external services. It supports load balancing out of the box: if you have a replicated service, you just specify all the hosts and Traefik will do the rest. 
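Conceptually, this balancing is classic round-robin; here is a toy sketch of the idea (not Traefik's actual implementation, which is written in Go):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin balancer: each request goes to the next
    server in the list, looping back to the start at the end."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def next_server(self) -> str:
        return next(self._servers)

lb = RoundRobinBalancer(["http://10.0.0.1:8080", "http://10.0.0.2:8080"])
picks = [lb.next_server() for _ in range(4)]
print(picks)  # alternates between the two servers
```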

To proxy external services (outside the Docker network), you need to add a file provider in traefik.yml:

# traefik.yml

# ...

providers:
  docker:
    network: traefik
    exposedByDefault: false

  # add a file provider that will pull in data
  # from the external directory
  file:
    directory: ./external

To proxy services on the local network, you must add a docker-host service, because localhost inside a container points to the container’s own network, not to the local network of the machine.

# docker-compose.yml

version: "3.8"

services:
  traefik:
    # ...
    networks:
      - traefik
      # add a shared network for the docker-host and Traefik
      - local

  docker-host:
    image: qoomon/docker-host
    cap_add: [ "NET_ADMIN", "NET_RAW" ]
    restart: always
    networks:
      - local

# ...

networks:
  traefik:
    name: traefik
  local:
# Dockerfile

FROM traefik:v2.5.2

WORKDIR /traefik

COPY ./traefik.yml .
# copy the folder with the external service configs
COPY ./external ./external

CMD ["traefik"]

And also the config of the external service itself (place all configs in the external directory).

# external/example.yml

http:
  services:
    example-web-client:
      loadBalancer:
        servers:
          # if the service is on an external host,
          # we simply write its ip or domain
          - url: "http://123.456.789.123:4716"
    example-api:
      loadBalancer:
        servers:
          # if it's on localhost, point to docker-host instead
          - url: "http://docker-host:8132"

  routers:
    example-web-client:
      entryPoints:
        - https
      # the web client will be accessible via any path on the domain
      # (example.com is a placeholder - substitute your own)
      rule: "Host(`example.com`)"
      service: example-web-client
      tls:
        certResolver: simple-resolver
    example-api:
      entryPoints:
        - https
      # the api will only be available under example.com/api;
      # no need to add any additional rules for the web client:
      # Traefik will route /api requests to the more specific router,
      # which works just like css specificity
      rule: "Host(`example.com`) && PathPrefix(`/api`)"
      service: example-api
      tls:
        certResolver: simple-resolver

Wildcard Certificates

Traefik can do this too! Let’s rewrite docker-compose.yml so that whoami is accessible on any subdomain.

First, we have to add a wildcard-resolver to the Traefik config.

# traefik.yml

certificatesResolvers:
  # ...
  wildcard-resolver:
    acme:
      dnsChallenge:
        # specify the dns provider; in this example it is godaddy,
        # but Traefik knows how to work with many others
        provider: godaddy
      storage: /letsencrypt/acme.json
# docker-compose.yml

version: "3.8"

services:
  traefik:
    build: ./proxy
    container_name: traefik
    restart: always
    environment:
      # specify the api keys of our dns provider via environment
      # variables (variable names for the godaddy provider)
      - GODADDY_API_KEY=${GODADDY_API_KEY}
      - GODADDY_API_SECRET=${GODADDY_API_SECRET}
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /data/letsencrypt:/letsencrypt
    labels:
      - traefik.enable=true
      - traefik.http.routers.api.entrypoints=http
    networks:
      - local
      - traefik

  whoami:
    image: "traefik/whoami"
    restart: always
    labels:
      - traefik.enable=true
      # change the rules for the router
      # (example.com is a placeholder - substitute your own)
      - traefik.http.routers.whoami.rule=Host(`example.com`) || HostRegexp(`{subdomain:.+}.example.com`)
      - traefik.http.routers.whoami.entrypoints=https
      # set wildcard-resolver
      - traefik.http.routers.whoami.tls.certresolver=wildcard-resolver
      # domains for which the resolver will obtain the certificates
      - traefik.http.routers.whoami.tls.domains[0].main=example.com
      - traefik.http.routers.whoami.tls.domains[0].sans=*.example.com
    networks:
      - traefik

networks:
  # ...

Middleware

Traefik allows you to create middleware and apply it to routers and even to entry points!

For example, if you need to remove some service from search results, you can always just attach X-Robots-Tag: noindex, nofollow.

# docker-compose.yml

# ...

  whoami:
    image: "traefik/whoami"
    restart: always
    labels:
      - traefik.enable=true
      # (example.com is a placeholder - substitute your own)
      - traefik.http.routers.whoami.rule=Host(`example.com`) || HostRegexp(`{subdomain:.+}.example.com`)
      - traefik.http.routers.whoami.entrypoints=https
      - traefik.http.routers.whoami.tls.certresolver=wildcard-resolver
      # creating a middleware, where
      # noindex is its name and
      # headers is the middleware type
      - "traefik.http.middlewares.noindex.headers.customresponseheaders.X-Robots-Tag=noindex, nofollow"
      # adding our middleware to the router
      - traefik.http.routers.whoami.middlewares=noindex@docker
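If you’re curious what a headers middleware does conceptually, here is a minimal WSGI-style illustration in Python (not Traefik’s code, which is written in Go; all names here are ours): the middleware wraps an app and appends a header to every response.

```python
def noindex_middleware(app):
    """Wrap a WSGI app and append an X-Robots-Tag header to every
    response, similar to what a headers middleware does."""
    def wrapped(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            headers = headers + [("X-Robots-Tag", "noindex, nofollow")]
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return wrapped

# a trivial app to wrap
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

wrapped = noindex_middleware(app)

# drive it by hand and collect the headers
captured = {}
def fake_start(status, headers, exc_info=None):
    captured["headers"] = headers

body = wrapped({}, fake_start)
print(captured["headers"])
```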

You can have several middlewares attached to your router; in that case, list them separated by commas:

- "traefik.http.routers.whoami.middlewares=noindex@docker, something@docker, example@file"

Middleware can be applied not only to routers but also to entire entry points. In that case, you still create the middleware in labels, and then reference it in the Traefik config itself.

# docker-compose.yml

# ...

  traefik:
    # ...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.entrypoints=http"
      - "traefik.http.middlewares.noindex.headers.customresponseheaders.X-Robots-Tag=noindex, nofollow"

# ...

# ...

And add the middleware to the entry point in traefik.yml:

# traefik.yml

# ...

entryPoints:
  http:
    address: :80
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
          permanent: true
  https:
    address: :443
    # add http middleware to the entry point
    http:
      middlewares:
        - "noindex@docker"

# ...


This is our short tutorial on Traefik. We hope you learned something new or at least grasped how capable and multi-functional Traefik is. We could go on and on about it, but it’s better if you go and read the official documentation 🙂


Jan from AppyBee, ‘You usually give me a solution that is better than mine.’

Watch the interview on Youtube

We’re happy to present our interview with Jan from AppyBee. Jan is our client who has a history of coming to us, then parting with us, and then coming back. What didn’t Jan like about Fora Soft the first time, why did he decide to return, and what is he thinking about our collaboration now? Read the interview to find out.

Hi Jan, tell us about AppyBee.

It’s an online reservation system with which you can book literally anything: an online event, a project, a group session, or a person. And it’s not only booking. You can communicate with clients, send push notifications and messages, and pay via bundles or smart subscriptions.

We, as a B2B company, deliver website widgets and a native app.

We’re currently focused on the sports business, but we serve many types of businesses, such as co-working spaces, dog-walking services, and solariums.

By sports, did you mean pro sports, like football or basketball or gyms?

Both. The main thing is personal trainers, but one can also book a yoga class, a traditional gym, where you can pay a monthly fee to train. We also provide a solution for the current, corona-affected times. If you can’t buy a subscription, you just pay for a visit. Many types of business models, yeah.

appybee client dashboard
Clients overview on AppyBee

Was Fora Soft your 1st choice?

We started with Fora 6 years ago. We weren’t really experienced back then. We had an idea. We worked with Fora Soft for a couple years and wanted to develop an MVP, and that’s what we went with. The problem is, we went with it for too long, instead of moving on. Fora Soft should’ve advised us to change, but they didn’t do that. And, as I said, we were inexperienced. So things were messy, so we decided to leave Fora Soft. We weren’t happy with them.

We then tried all kinds of solutions. Outstaffing people, freelancers, other agencies, a combination of those. In the end, it was a disaster. We have a saying where I’m originally from: “I had it, I didn’t know. I knew it, I didn’t have it anymore”.

Everything has to fit. It’s not enough to have 1 good developer, 1 good project manager. Everything has to work together: QA, design… We burned a lot of money on it.

During that time there was some info we needed from Fora Soft. What I felt was, “oh crap, I need to ask them something, but we’re not working together anymore…”. But I got all I needed instantly, they helped us a lot.

Then we decided to go back. There was one guy there, a PM who was always there. He gave us confidence that they have it under control.

We changed technology from Bootstrap to React. React.JS, React.Native, PWA (Progressive Web Apps). So, I asked the PM if they could handle it, and he said yeah. And then he explained everything. It felt like a Friday evening after a hard week. You sit out there and you have that vodka.

I should have returned to Fora Soft much earlier. Fora Soft has everything under control. Everything under one roof. It all works perfectly. 

So the story is like a boyfriend and a girlfriend. At first, it doesn’t work out, but then you end up married.

By the way, the reason I worked with an agency is that I just wanted them to solve my problems and handle the answer in the correct and professional way. So, in another company, we worked with 5 different developers within 3 months. There also was a case where we changed a huge architectural part, which was a major update. It was really buggy and slow. In fact, thinking back I’m realizing that it was OK, but back then I didn’t.

In Fora Soft we worked with the same team all the time. For me, it’s a sign that people are happy with the company. They like what they’re doing. You guys grew from 30 people to 90 people (editor’s note: it’s 110 people now 🙂 ). So, before it was a mess. You could say that we had a relationship, it didn’t work out, but then eventually it did. Now we’re back together for 5-6 months, but if I’m being honest, we should’ve done this much earlier. The breakup had to happen, but here we are.

AppyBee client profile
A client profile on AppyBee

Compare before and after working with Fora Soft. Before the first time we worked together, even.

In the beginning, there was some technology, and we didn’t know what the usual way was. What we noticed in the beginning, we had a Trello board. It’s nice, but it’s like this (shows a pack of sticky papers), like moving this around. Then we moved to another company. They taught us that with Jira we can do this, with that we can do that. They overdid it. I felt like I was wasting hours on all that. That was the main difference. I wanted to go back, I was never unhappy with the PM. He’s the good guy, he was also there to help. It’s one of the reasons I went back. He gave me confidence.

Now it’s the same project, which is important. I also got more people who’re helping me. I get to talk with the business analyst, with the designer, with the front-end, and the back-end. It’s a complete solution.

The advantage of Fora Soft is that the team’s communication system is really good.

You talk to each other within teams. It’s difficult to have an agency, an out-staffer, a freelancer. It works, but not efficiently. So here we are with the comparison.

Can you share any measurable figures, such as the number of crashes, clients, or revenue?

The revenue is difficult because another large part is the marketing, and the development cannot change that. 

However, what we do notice is the amount of money we paid to Fora Soft, let’s say, amount X. I don’t wanna name numbers now. I see it now: I pay half and I get double. I got much more value. But not only the value, it’s also the stability. You have confidence. Right now all the things that are not efficient are being solved. I get information back. So I have a confident feeling now. I sleep well now. What I’m saying is, it’s not only the measurable things, but I sleep well.

I’m going to tell you about one situation that happened at the previous company. They did work for us. They had one guy from that agency who was working for us full-time. I asked him questions, but I didn’t get any information back. So we were doing tasks and pinging each other within Jira. We had a daily meeting, and I was asking: why aren’t you reacting to any of the questions we’re asking in Jira? He said, ‘I turned off the notifications’.

We’re paying a lot of money, it’s full-time, we’re with 5-6 people in a daily meeting, and he gives an answer like that. I can tell you one thing: if he had said that at the table, I think it would have finished differently. You should be happy that it was online and at a distance.

With software, you cannot just move. It’s not another jacket you could buy. It’s not possible. So you really need to think about which company you are gonna work with, and it needs to be a match. You also need to understand a company. The company needs to understand our culture, but we also need to understand the company’s culture, and that needs to be a match. That’s really important. The expectations.

And what I also noticed is that you have different projects. On one hand, you have short-term projects, maybe 1-2 months. Easily done. But you also have work such as what we do. These are long-term projects. Important structural things that one needs to understand and move forward step by step. Test it thoroughly, think about what the clients have, move the priorities, and do some mind exchange, like what we’re doing with the business analyst right now.

This was also a big difference. Back then you guys didn’t have that. Fora Soft didn’t have any business analyst that I talked to. But now I get to talk with the assistant project manager. We discuss everything. From time to time, when I have an idea, I don’t share it. I want to see what they’re gonna come up with.

A lot of times, I get a solution from you guys which is better than mine. That’s a good thing.

Sometimes you have the feeling that it doesn’t go fast enough. Those are the important things. And as a company you can’t only point fingers. There’s a lot of things that we can also do better on communication. I also asked the question: what can we do better as an organization? How would you want us to deliver the information? Are we happy with it? Do we need to do something more? This is also important to have a good match. You cannot only say that this needs to be done and then finish. You really need to give correct information so they really understand what you want.

Were there any difficulties while working with Fora Soft, aside from that MVP thing?

What I find difficult, but that’s not only connected to Fora Soft, is that I would like to meet the team once a month. I’m looking for a way to do that. So just sit down for 1-2 days, once a month, maybe once in 3 months, to do a structural visit. To sit down with the guys face to face, to see who’s who. And then maybe pop up some ideas, have a board, talk, drink vodka, and see what comes out. The difficulty is that you’re always behind the screen and talking. So that’s on the one hand; on the other hand, I cannot think of any difficulties.

AppyBee founder
Jan, AppyBee founder

Qualities and things such as communication, professionalism, and determination are very important when it comes to any type of project work. Can you maybe rate us on a scale of 10 on those qualities and add something else that you see important here?

– What I would see as the quality, and I can really say that because I had a lot of companies in between. I had three or four agencies, freelancers, and outstaffing people I worked with. I even had, not gonna name names, to really fly in the “experts” to have a problem solved, which cost a lot in the meantime. So I can definitely say it’s a 9 on communication. In the beginning, I didn’t understand why you guys didn’t want me to talk immediately with the developer. I do communicate, but mostly I need to go to the project manager. Back then I didn’t see it. Right now I absolutely see the advantages, because he will arrange everything. I talk with him daily, and he will divide the work, and it’s efficient. It’s perfectly efficient. He’s always aware of everything that happens, good or bad. And that’s the communication which makes stuff more efficient.

Would you recommend Fora Soft to your friends or colleagues who are interested in a video application?

I would recommend Fora Soft to friends, family, and colleagues. I would not recommend Fora Soft to my competition.

That’s a good one, a good one. Thank you.

I think we have a contract also on that, right? So no competition. I wouldn’t recommend you guys to my competition but I do recommend to my business, my associates, and everybody else.

Maybe anything else that you would like to add?

– No, no, actually not. As I said, seeing the team, that would maybe change things: to have a drink from time to time, get advice from Fora Soft. I would also expect we’re gonna organize a specific day for all the clients and whatnot; it would be nice to go. And Mr Sapunov brings a big bottle of vodka, some special Russian vodka I’m dying to drink. Other than that, I’m all good.

Thank you very much. My sincerest thanks for your time. It was a pleasant experience talking to you, and I will see you later.


How to implement Picture-in-Picture mode in React.JS


Picture-in-picture (PIP) is a separate browser window with a video that sits outside the page.

You minimize the tab or even the browser where you’re watching the video, and it’s still visible in the little window. It’s handy if you’re broadcasting a screen and want to see your interlocutors or are watching a really interesting show, but someone sent a message.

This is what YouTube’s picture-in-picture mode looks like

Let’s figure out how to create this window and make it work.

As of early 2022, the Picture-in-picture specification is in draft form. All browsers work differently, and the support leaves a lot to be desired.

As of early 2022, only 48% of browsers support the feature.

Let’s go to a technical guide to implementation with all the pitfalls and unexpected plot twists. Enjoy 🙂

Tutorial on how to set up PiP

First, you need a video element.

<video controls src="video.mp4"></video>

In Chrome and Safari, the PiP activation button should appear. 

In Chrome, click on the 3 dots in the bottom right corner

5 minutes and it’s done! 

What about Firefox?

Unfortunately, Mozilla doesn’t yet have full picture-in-picture support 🙁

To activate PiP in Mozilla, every user has to go into configuration (type about:config in the search box). Then find media.videocontrols.picture-in-picture.enabled and make it true.

Because of the weak support for PIP in Mozilla, we won’t look at that browser any further.

Now you can activate web picture-in-picture in all popular browsers. 

But what if this is not enough?

Could it be more convenient?

Maybe add a nice activation button?

Or automatically switch to PIP when you leave the page? 

Yes, this is all possible! 

Software PiP activation

To start, let’s implement the basic open/close functionality and connect the button.

Let’s say your browser supports picture-in-picture on the web. To open and close the PiP window, we need to:

  1. make sure the feature is supported
  2. make sure that there is no other PIP 
  3. implement the cross-browser picture-in-picture activation/deactivation function.

Check support

To make sure we can programmatically activate a PIP window, we need to know if it is activated in the browser and if there is an opening method.

You can check the activation status through the document pictureInPictureEnabled property:

"pictureInPictureEnabled" in document && document.pictureInPictureEnabled

To make sure that we can interact with the PIP window, let’s try to find a picture-in-picture activation method.

For Safari it’s webkitSetPresentationMode, for all other browsers requestPictureInPicture.

export const canPIP = (): boolean =>
 "pictureInPictureEnabled" in document && document.pictureInPictureEnabled;

const supportsOldSafariPIP = (): boolean => {
 const video = document.createElement("video");

 return (
   canPIP() &&
   video.webkitSupportsPresentationMode &&
   typeof video.webkitSetPresentationMode === "function"
 );
};

const supportsModernPIP = (): boolean => {
 const video = document.createElement("video");

 return (
   canPIP() &&
   video.requestPictureInPicture &&
   typeof video.requestPictureInPicture === "function"
 );
};

const supportsPIP = (): boolean => supportsOldSafariPIP() || supportsModernPIP();
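The support check boils down to a couple of property lookups. As a sanity check, here is the same logic written as a standalone function that takes the document as a parameter, so it can be exercised against a plain mock object outside the browser (an illustrative sketch, not part of the tutorial code):

```javascript
// Same check as canPIP above, but with the document injected,
// so the logic can run against a plain object in tests.
function canPIPWith(doc) {
  return "pictureInPictureEnabled" in doc && Boolean(doc.pictureInPictureEnabled);
}

// A browser with PiP enabled:
canPIPWith({ pictureInPictureEnabled: true }); // → true
// A browser without the property at all:
canPIPWith({}); // → false
```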

Checking for the presence of a PiP window

To determine whether we already have a picture-in-picture window, look up the pictureInPictureElement property of the document:

const isInPIP = () => Boolean(document.pictureInPictureElement);
Opening and closing functions

Opening function

The standard requestPictureInPicture method opens the window:

await video.requestPictureInPicture();
For broader browser support, let’s implement a fallback. To enter picture-in-picture in Safari, you need to use the webkitSetPresentationMode method of the video element:

await video.webkitSetPresentationMode("picture-in-picture");
Closing function

The standard closing method:

await document.exitPictureInPicture();
Fallback for Safari:

await video.webkitSetPresentationMode("inline");
As a result, we have the functionality to open or close the PIP.

export const canPIP = () =>
 "pictureInPictureEnabled" in document && document.pictureInPictureEnabled;

const isInPIP = () => Boolean(document.pictureInPictureElement);

const supportsOldSafariPIP = () => {
 const video = document.createElement("video");

 return (
   canPIP() &&
   video.webkitSupportsPresentationMode &&
   typeof video.webkitSetPresentationMode === "function"
 );
};

const supportsModernPIP = () => {
 const video = document.createElement("video");

 return (
   canPIP() &&
   video.requestPictureInPicture &&
   typeof video.requestPictureInPicture === "function"
 );
};

const supportsPIP = () =>
 supportsOldSafariPIP() || supportsModernPIP();

export const openPIP = async (video) => {
 if (isInPIP()) return;

 if (supportsOldSafariPIP())
   await video.webkitSetPresentationMode("picture-in-picture");
 else if (supportsModernPIP())
   await video.requestPictureInPicture();
};

const closePIP = async (video) => {
 if (!isInPIP()) return;

 if (supportsOldSafariPIP())
   await video.webkitSetPresentationMode("inline");
 else if (supportsModernPIP())
   await document.exitPictureInPicture();
};

Now, all we have to do is enable the button.

const disablePIP = async () => {
 await closePIP(videoElement.current).catch(/* handle error */);
};

const enablePIP = async () => {
 await openPIP(videoElement.current).catch(/* handle error */);
};

const handleVisibility = async () => {
 if (document.visibilityState === "visible") await disablePIP();
 else await enablePIP();
};

const togglePIP = async () => {
 if (isInPIP()) await disablePIP();
 else await enablePIP();
};

Don’t forget to catch errors from asynchronous functions and connect the functionality to the button.

<button onClick={togglePIP} className={styles.Button}>
 {isPIPOn ? "Turn off PIP" : "Turn on PIP"}
</button>
How to open and close pip mode in a browser?

See? Not so much code and the button for switching between PiP and normal mode is ready!

Automatic activation of web picture-in-picture

Why do you need picture-in-picture?

To surf the Internet and watch video streams from another page!

Chatting in a video conference in your browser, you want to say something while peeking into Google Docs but still seeing the person you’re talking to, just like in Skype. You can do that with PiP. Or you want to keep watching a movie while answering an urgent message in a messenger – this is also possible if the site where you watch the movie has implemented PiP functionality.

Let’s implement the automatic opening of the PiP window when you leave the page.

Safari has the autoPictureInPicture property, it turns on the Picture-In-Picture mode only if the user is watching a fullscreen video.

To activate it, you need to make the video element property autoPictureInPicture true.

if (video && "autoPictureInPicture" in video) {
  video.autoPictureInPicture = true;
}
That’s it for Safari.

Chrome and similar browsers allow you to enter PiP without fullscreen, but the video must be visible and the page must have focus.

You can use the Page Visibility API to track page abandonment.

document.addEventListener("visibilitychange", async () => {
 if (document.visibilityState === "visible")
   await closePIP(video);
 else
   await openPIP(video);
});

Enjoy, the picture-in-picture auto-activation is ready.

PIP Controls

PiP video has the following buttons by default:

  • pause (except when we pass a media stream to a video tag)
  • switch back to the page 
  • next/previous video

Use the media session API to configure video switching.

navigator.mediaSession.setActionHandler('nexttrack', () => {
 // set next video src
});

navigator.mediaSession.setActionHandler('previoustrack', () => {
 // set prev video src
});
Customised picture-in-picture mode

Linking with video conferencing

Let’s say we want to make a browser-based Skype with screen sharing.

It would be nice to show the demonstrator’s face – and also to let him see himself, should, for example, his hair end up disheveled.

Javascript picture-in-picture would be perfect for that!

To display a WebRTC media stream in PiP, all you have to do is apply it to the video, and that’s it.

video.srcObject = await navigator.mediaDevices.getUserMedia({
 video: true,
 audio: true,
});
Implement picture-in-picture mode for video calls

In this uncomplicated way, you can show the face of the screen demonstrator. And best of all, there is no need to transmit additional video of the speaker’s face, because it’s already present in the demonstration exactly where the author wishes it to be.

This not only saves traffic for all users in the video conference but also creates a more convenient interface for the demonstrator and the audience.

The same logic works with the interlocutor in an online conference.

Anything that can be displayed in the video tag can be displayed in the PiP window.

The pitfalls

Nothing works perfectly from the first try 🙂 Here are some tips on what to do when picture in picture mode is not working.

Error: Failed to execute ‘requestPictureInPicture’.

DOMException: Failed to execute ‘requestPictureInPicture’ on ‘HTMLVideoElement’: Must be handling a user gesture if there isn’t already an element in picture-in-picture JS.

So either the browser has realized that we’re abusing the API, or you forgot to check if the window is already open.

In the w3 draft, the requirements are userActivationRequired and playingRequired. This means that picture-in-picture can only be activated when the user interacts and if the video is playing.

At the moment the error can be found in 2 popular cases: 

  • (Chrome) trying to navigate to PiP if the page is out of focus.
  • (Safari) attempt to navigate to PiP without user interaction 

The video in the PiP window doesn’t update

To deal with this problem in react, just change the key property along with the media stream update or src.

<video controls key={/* updated key */} src="video.mp4"></video>

Video in the PiP window freezes

From time to time a video hangs. This usually happens when the video tag disappears from the page. In such a situation, you need to call the document.exitPictureInPicture() method.
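That check-and-close step can be wrapped in a small helper. The function name and the injected doc parameter are our own illustration (injected so the logic is testable without a browser):

```javascript
// Hypothetical helper: if a PiP window is still open (e.g. its video tag
// was removed from the page), ask the browser to close it.
function closePipIfOrphaned(doc) {
  if (doc.pictureInPictureElement) {
    doc.exitPictureInPicture();
    return true;  // a close was requested
  }
  return false;   // no PiP window, nothing to do
}
```

Call it whenever the component that owns the video tag unmounts.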

When starting a broadcast in another tab or application, the auto-opening PiP window doesn’t work (Chrome)

This problem is related to this error. The reason is that when you click on the system window to select a tab or page to show, our page loses focus. If there is no focus, the userActivationRequired condition can’t be satisfied, so you can’t open Pip right after the start of the demonstration.

However, it is possible to open a PiP window in advance, say, when the page loses focus:

window.addEventListener("blur", () => {
 // open PIP
});

In this case, the PiP will open before the broadcast begins.


Despite pretty weak browser support, only 48% as of early 2022, Javascript-enabled PiP is a pretty quick feature to implement and brings an amazing user experience to web app users with video or broadcasts.

However, you should consider the fact that half of the users may never use it due to poor support.

You can test this feature out in the sandbox.


How to turn on picture-in-picture on YouTube?

  1. Turn on the video
  2. Open console. For macOS, use Option + ⌘ + J. For Windows or Linux, use Shift + CTRL + J.
  3. Enter this code:
document.onclick = () => {
 document.onclick = null;
 document.querySelector("video").requestPictureInPicture();
};

  4. Press Enter.
  5. Click on an empty spot on the page.


How to create a custom Internet TV streaming service: features, technical pitfalls, devices, price

Internet TV app on a smart TV and a smartphone

How to create a video streaming service like Netflix for your business? If you are not a technical person, you may feel puzzled. What technical pitfalls should you foresee for each function – to avoid unexpected reworks and costs? What technologies should you pick for each device? How much may it cost? We’ll help in this article using our 16 years’ worth of experience in video software development.

Example of an Internet TV product developed by Fora Soft

Vodeo OTT application for iOS for Janson Media Internet TV service

100,000 users of Janson Media Inc. can now watch movies and series in the iOS app. Rent a film and watch it in as good a quality as your Internet connection allows.

Internet TV, OTT, IPTV software development – what’s the difference

OTT and IPTV are 2 kinds of Internet TV – television delivered to viewers by the Internet Protocol – IP:

  • OTT – Over-The-Top or Television over the Internet, uses an open network. It’s the regular Internet you use for emails or website browsing. Examples: Netflix, YouTube, Hulu. Use it on the phone, tablet, smartTV, laptop, or desktop computer – at home or outside, even in another country. When you have the Internet, log in to your account and watch.
  • IPTV – Internet Protocol Television, uses a closed network and not the public Internet – you access the TV through a privately managed Local Area Network – LAN, or Wide Area Network – WAN. E.g., if an Internet provider builds a network of its own cables – this is the provider’s own private LAN, not accessible from the Internet. Examples: Comstar IPTV, DIRECTV STREAM, Movistar+. Works at your home only, where the cable is.

Check our infographic below to see more on the difference between OTT and IPTV as well as Cable and Satellite TV, or read more in this detailed article.

Difference between 5 kinds of television: Broadcast TV, Satellite TV, Cable TV, IPTV, OTT

We at Fora Soft have spent years developing software for both types of Internet TV, including Set-Top-Box firmware, too. And, since OTT and IPTV interfaces extend the concepts used for Satellite and Cable TV receivers, our team is capable of developing frontends for these kinds of televisions, as well.

Features for OTT and IPTV platforms

The most frequent and interesting features of Internet TV applications. At Fora Soft we develop custom software, so if you need something extra – we can plan and implement it.

4 types of content: live, pseudo-live, on schedule, on-demand

How to create a streaming service? Start with the decision of what type of content you’ll have.

streaming content type
Difference between 4 types of content live, pseudo-live, on-schedule, on-demand

Live streaming

It’s a live TV broadcast. You stream on the air – the audience watches right when everything happens. 

How to create a live streaming service: the choice of technology is a balance between latency, scalability, and cost.

  • WebRTC – for sub-second latency but the audience of fewer than 500 viewers
  • HLS – for a big audience of 500 to millions of viewers but with a latency of 2-60 seconds
  • WebRTC with Kurento and adjustments – for sub-second latency and a big audience but for a higher cost: read more in our article 

Use cases: sports streaming, e.g. a football match, a show with questions from the public in real-time. Live streaming is more expensive than pseudo-live streaming – explained in the Pseudo-live streaming section below. So if you don’t really need real-time, it’s better to use pseudo-live streaming.
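The rules of thumb above can be sketched as a tiny decision function (illustrative only; the function name, the 500-viewer cutoff, and the sub-second threshold come from the guidelines in this section, not from any real API):

```javascript
// Pick a streaming technology from expected audience size and latency budget.
function pickStreamingTech({ viewers, maxLatencySeconds }) {
  if (maxLatencySeconds < 1) {
    // Sub-second latency means WebRTC; past ~500 viewers it needs
    // a media server like Kurento (and adjustments) to scale.
    return viewers <= 500 ? "WebRTC" : "WebRTC + Kurento";
  }
  // If 2-60 seconds of delay is acceptable, HLS scales to millions of viewers.
  return "HLS";
}
```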

Pseudo-live streaming

On classic TV, they call it live-to-tape: a pre-made recording broadcasted to all users simultaneously. Stream is not real-time – it’s pre-recorded, then transcoded on the server before going to viewers.

You get better video quality for less money when you do not compress in real-time:

  • Less money:
    The stream has a smaller size because there’s time to compress it more effectively. Each of the video resolutions has a higher degree of compression. The smaller the stream that goes through the servers, the less you pay for it.
  • Better picture quality:
    Which video resolution to show depends on the user’s Internet speed. At the same video size, you get a lower resolution (worse quality) in real time and a higher resolution (better quality) when the video is compressed before being sent to users. A given user has the same video size limitation due to their Internet speed. Therefore the user gets a better quality video – with a higher resolution – with pseudo-live streaming.

Use cases: TV shows with no real-time interactions with the audience, e.g. news streaming, The Voice.

Streaming on schedule

Organizing pre-recorded streams into a grid gives a totally TV-like experience: a set of parallel channels with a set of shows going one after another on each channel.

We program the ability for the admin to create the channels and build their schedule with videos. Different versions for different countries and timezones.

Use cases: managing a TV channel with different TV show streaming, e.g. Discovery+.

Video-on-demand (VOD)

A collection of movies, series, and other content that you can buy and watch at any moment – even radio. Some of the VOD features:

  • Free and paid videos
  • Favorite videos
  • Recommended videos – set by admins or picked based on the user’s taste
  • Filter and sort by genre, popular, newest, highest rating
  • Direct search for videos – with a virtual keyboard for SmartTVs to type from a remote
  • Rate videos
  • Pin code before opening adult videos
  • Schedule video release and expiration date – when the video stops being available

Use cases: movie and series streaming and distribution, e.g. Netflix.

Video recording

tv show recording
3 types of stream recording

Recording in real-time
Click record while watching online to record a show for your home collection. Download the recording to a user device, store it in the application, or in some third-party storage service. Or all of the above.

Recording on schedule

Set a time, and the recording will switch on. For STB-based solutions you need to leave the STB switched on.

Time shifting

Some channels support timeshift – rewind the allowed hours back and watch what you’ve missed.

Video player for video streaming services

Video player with video controls and subtitles
Video player with video controls and subtitles

Video controls

Play, pause, stop, rewind, fast-forward buttons.


Subtitles

Upload a subtitles file to videos in formats like SRT, SUB, SSA, AQT, ASS, JSON. These are the most popular ones. If yours is not on the list, we can program support for other formats.


floating window player streaming
Picture-in-picture in Internet TV – OTT or IPTV

The video you’re watching shrinks into a small one in the corner while you look through a TV guide or pick another show. Works in the browser: check the PiP demo at the end of our article. Works on mobile devices as well: read more about PiP on Android in our article.

Payment and monetization in a streaming platform

Monetization by ads

Advertising-based Video-on-Demand – AVOD: viewers watch movies and series interrupted by ads. Example: YouTube free version.

advertising in player example
Set up advertising revenue for streaming platform

Advertisements on a website page or app page but not in the video: advertising networks like Google Ads integrate and show relevant ads to visitors. Website or app owner gets income from ads views.

streaming service monetisation
Advertisements on a website page or app page but not in the video

Paid subscription

Content unlocks for watching when the user subscribes. Subscription payments are automatically charged to the user’s card, monthly or annually.

Paid subscription in Internet TV OTT app

The most popular, reliable, and user-friendly payment systems are Stripe and PayPal. The owner registers an account with them to receive payments. Users just pay by card the way they are used to.

Buy a movie

Transactional Video-on-Demand
Transactional Video-on-Demand – TVOD (buy a movie)

Transactional Video-on-Demand – TVOD: pay for each piece of content: for one movie or one season of a show.

Hybrid model

Combine the most suitable ways of monetization for your project

A mix of the models above. For example, a free plan with ads, a subscription to unlock some part of content, and some movies and series are paid for.

Access code

Enter the code on the TV to unlock access. Convenient for IPTV apps, e.g. for hotels.

Content Delivery Network (CDN)

Content Delivery Network
What is CDN – Content Delivery Network

You can’t beat physics: the longer the distance between your server and your audience, the bigger the delay. Taking this problem on in a straightforward way, by buying or renting hardware all around the world, will cost you a fortune.

A CDN solves this problem. A CDN is a network of servers in almost every country that someone has already bought and is ready to rent to you. Each piece of your content is downloaded to each server – but only when the 1st user requests it. So the very 1st user still experiences the delay, but all the following ones do not.

Amazon’s CloudFront and Cloudflare are CDNs that we use most frequently at Fora Soft. Read more on CloudFront and Cloudflare differences in the article.

Adaptive video quality

Adaptive video quality in Internet TV
Adaptive video quality in Internet TV

Users should watch videos in the best quality possible. The limitation is their Internet speed. The program keeps an eye on it and automatically adjusts the video resolution to keep the video from stuttering even on a volatile network.
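A minimal sketch of the idea (the bitrate ladder and the 0.8 safety margin are made-up illustrative numbers; real players such as hls.js or Shaka Player ship far more sophisticated adaptive-bitrate logic):

```javascript
// Rendition ladder, highest quality first. Bitrates are illustrative.
const ladder = [
  { name: "1080p", bitrateKbps: 5000 },
  { name: "720p", bitrateKbps: 2800 },
  { name: "480p", bitrateKbps: 1400 },
  { name: "240p", bitrateKbps: 400 },
];

// Pick the best rendition that fits the measured bandwidth,
// keeping a margin so the buffer survives bandwidth dips.
function pickRendition(bandwidthKbps) {
  const usable = bandwidthKbps * 0.8;
  return ladder.find((r) => r.bitrateKbps <= usable) ?? ladder[ladder.length - 1];
}

pickRendition(10000).name; // → "1080p"
pickRendition(4000).name;  // → "720p"
pickRendition(300).name;   // → "240p" (the floor: never stop playback entirely)
```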

Video processing

Different formats

Converts videos into different formats to deliver to different end-users’ devices.

Different resolutions

Converts videos in different resolutions to deliver to users with different Internet speeds.


Compression

Decreases video size without a loss in quality by removing duplicated data.

Split into chunks

When the videos are delivered in chunks, the user never needs to wait until a full movie is downloaded. The playback starts almost immediately and keeps going while the remaining parts are downloaded in the background. Scrub through a movie seamlessly – once you move the playhead to the desired location, the stream resumes in a moment, no matter how long the skipped part was.
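With fixed-length segments (as in HLS, where a segment is typically a few seconds long), seeking reduces to simple arithmetic: the player maps the playhead position to a segment index and requests only that segment. A toy sketch, with a 6-second segment length as an assumption:

```javascript
const SEGMENT_SECONDS = 6; // assumed segment duration

// Which segment contains a given playhead position?
function segmentForPosition(positionSeconds, segmentSeconds = SEGMENT_SECONDS) {
  return Math.floor(positionSeconds / segmentSeconds);
}

// Seeking to 1:30 fetches segment 15 right away,
// instead of downloading the first 90 seconds of video.
segmentForPosition(90); // → 15
```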

Digital Right Management (DRM)

Encryption that protects videos from theft and unauthorized viewing. The major studios will only let you sell their content if your platform is well-protected against copyright infringement.

When a user clicks play, the player checks with the license server whether the user has the right to watch. If all is fine, the server returns the decryption key, and the player decrypts the video file with it and plays it.
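Purely as an illustration of that flow (real DRM systems such as Widevine or FairPlay work through the browser’s Encrypted Media Extensions, not a hand-rolled check; every name below is hypothetical):

```javascript
// Hypothetical license check: the player asks the license server for a key;
// playback is allowed only if a key comes back.
function requestPlayback(userId, licenseServer) {
  const key = licenseServer.keyFor(userId);
  return key ? { allowed: true, key } : { allowed: false };
}

// A mock license server that only knows one paying user:
const server = { keyFor: (id) => (id === "alice" ? "decryption-key" : null) };
requestPlayback("alice", server); // → { allowed: true, key: "decryption-key" }
requestPlayback("bob", server);   // → { allowed: false }
```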


Devices for which Fora Soft develops streaming television

Devices for Internet TV software development
  • Web browsers – use without download
  • Smartphones and tablets – iOS and Android
  • Desktop PCs and laptops
  • Smart TVs – Samsung, LG, Android-based STBs, Apple TV
  • Virtual reality (VR) headsets

A website is the simplest form of OTT online cinema.

  • It opens on laptops and desktop computers. 
  • If optimized for mobile devices, it looks and works well on smartphones and tablets in a browser. Optimization requires extra time and cost because the UI of each page has to be rethought for smaller screens. 
  • Modern smart TVs open websites as well, so you can watch such an OTT service on a smart TV. iPhone users can share their screen to a modern smart TV in 3 taps, so for them it’s even easier, and the full-screen view looks exactly like any other movie on TV. 

So if you want the most cost-effective option, a web Internet TV application is the best.

How to make a streaming website? We use JavaScript with the React framework for web development; mainly WebRTC, HLS, and Kurento for video streaming; and FFmpeg and GStreamer for video processing.

iOS and Android native apps offer a better user experience: they are one tap away from users, who don’t have to search for them in a browser. So for established media companies, we develop mobile apps. 

How to make a mobile streaming platform? We use native programming languages, Swift for iOS and Kotlin for Android; mainly WebRTC, HLS, and Kurento for video streaming; and FFmpeg and GStreamer for video processing.

Smart TVs have application stores too. Launching a TV app the way you launch Netflix is easier than finding the site or sharing a phone’s screen. So for a better user experience, we develop applications for smart TVs. And yes, we can build a Netflix clone for your streaming TV service.

How to create such an OTT platform or make an IPTV server? We use JavaScript for Samsung and LG Smart TVs, Swift for AppleTV, Kotlin for AndroidTV.

Examples of OTT services worldwide

When planning your Internet TV app, it’s wise to check the best examples in the industry. The ones with the biggest worldwide user base:

  • Netflix – 214 million subscribers in 2021
  • Amazon Video – 120 million
  • Disney+ – 118 million
  • HBO Max
  • Apple TV Plus

Check more indicators in this research from Statista.

How much does development of an OTT or IPTV app cost?

At Fora Soft, we develop custom software and do not sell ready-made products. So the functions listed above are the most useful examples based on our experience. We may develop all of them for you, just a few, or add something else.

That is why we do planning before programming, like a blueprint before building a house. Analysts draw a wireframe and then estimate your project. Approximate figures:

The simplest VOD OTT app for 1 platform 

  • 3-5 months 
  • About $36,000

A fully functioning system with VOD movies, login, the simplest payment option, rating, and search. 1 platform means, for example, an iOS app or a web app.

The simplest IPTV app for 1 platform

  • 4-6 months 
  • Around $49,000

A fully functioning system with channels like at hotels.

OTT solution similar to Netflix for 1 platform

  • From 12 months 
  • From $115,000

A fully functioning system with different types of VOD content, live shows, login, a hybrid payment system, search, ratings, recommendations, special offers.

Call us to discuss your needs. Or send us your requirements and we’ll provide you with an estimation.


Why should Android developers start building AR apps before 2024?


The phrase “augmented reality”, or AR, has long been on everyone’s lips and is used in many areas of life. AR is being actively implemented in mobile applications as well. A large part of the AR market is occupied by entertainment applications – remember the Pokemon Go fever of 2016? However, entertainment is not the only area with AR. Tourism, medicine, education, healthcare, retail, and other fields actively use it too. According to studies, by the end of 2020 there were almost 600 million active users of mobile apps with AR. By 2024, nearly three-fold growth (1.7 billion users) is predicted, and revenue from such applications is estimated at $26 billion. The future is very close! 

That’s why in this article we’ll consider several popular tools for Android mobile app development with AR functionality, their pros and cons.

History of AR

AR technology and its implementation in smartphones have come a long way. AR was originally part of VR. In 1961, Philco Corporation (USA) developed Headsight, the first virtual reality helmet. Like most inventions, it was first used for the needs of the Department of Defense. The technology then evolved: various simulators, virtual helmets, and even goggles with gloves appeared. They were not widespread, but they interested NASA and the CIA. In 1990, Tom Caudell coined the term “augmented reality”, and from that moment on, AR became separate from VR. The ’90s brought many interesting inventions: an exoskeleton that allowed the military to virtually control vehicles, and gaming platforms. In 1993, Sega announced the Sega VR headset for its Genesis console, but it never reached the mass market: users reported nausea and headaches during games. The high cost of devices, scarce technical capabilities, and side effects made the mass market forget about VR and AR technologies for a while. In 1994, AR made its way into the arts for the first time with a theater production called Dancing in Cyberspace, in which acrobats danced in virtual space. 

In 2000, a virtual reality helmet made it possible to chase monsters down the street in the popular game Quake. This may have inspired the future creators of Pokemon Go. Until the 2010s, attempts to bring AR to the masses were not very successful. 

In the 2010s, quite successful projects appeared: MARTA (an application from Volkswagen that gives step-by-step recommendations on car repair and maintenance) and Google Glass. At the same time, AR began making its way into mobile applications: Pokemon Go, IKEA Place, AR features in various Google apps (Translate, Maps, etc.), filters in Instagram, and so on. Today there are more and more mobile applications with AR, and their use is spreading well beyond entertainment.

What AR is and how it works on a smartphone

Essentially, AR is based on computer vision. It all starts with a device that has a camera. The camera scans an image of the real world – that’s why most AR apps first ask you to move the camera around in space for a while. A pre-installed AR engine then analyzes this information and builds a virtual world on top of it, placing one or more AR objects (a picture, 3D model, text, or video) against the original image. AR objects can be stored in the phone’s memory in advance or downloaded from the Internet in real time. The application remembers the location of the objects, so their position does not change when the smartphone moves, unless the application specifically provides for it. Objects are fixed in space with special markers – identifiers. There are 3 main methods by which AR technology works:

  • Natural markers. A virtual grid is superimposed on the surrounding world. On this grid, the AR engine identifies anchor points, which determine the exact location to which the virtual object will be attached in the future. Benefit: Real-world objects serve as natural markers. No need to create markers programmatically.
  • Artificial markers. The appearance of the AR object is tied to some specific marker created artificially, such as the place where the QR code was scanned. This technology works more reliably than with natural markers.
  • Spatial technology. In this case, the position of the AR object is attached to certain geographical coordinates. GPS/GLONASS, gyroscope, and compass data embedded in the smartphone are used.
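The anchoring idea behind all three methods can be sketched in a few lines: an object’s world position stays fixed, and every frame the engine re-projects it through the camera’s current pose, which is why it looks “pinned” while the phone moves. Below is a minimal Python sketch using a simplified pinhole camera with no rotation; the focal length and coordinates are made-up illustration values, not anything from a real engine like ARCore, which tracks full 6-DoF pose and lens intrinsics.

```python
# Minimal pinhole projection: a fixed world-space anchor re-projected
# through the camera's current position each frame.

def project(anchor, camera_pos, focal=800.0):
    """Project a world-space anchor point to 2D screen coordinates for a
    camera at camera_pos looking down +Z (rotation omitted for brevity)."""
    x = anchor[0] - camera_pos[0]
    y = anchor[1] - camera_pos[1]
    z = anchor[2] - camera_pos[2]
    assert z > 0, "anchor must be in front of the camera"
    return (focal * x / z, focal * y / z)

anchor = (0.5, 0.0, 2.0)             # object pinned half a meter to the right
print(project(anchor, (0, 0, 0)))    # camera at origin
print(project(anchor, (0.5, 0, 0)))  # camera moves right, object shifts left on screen
```

The object never moves in world space; only the projection changes, which is exactly the effect you see when walking around an AR object.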

Tools for AR in Android


Google ARCore

The first thing that comes to mind is Google’s ARCore. ARCore isn’t an SDK but a platform for working with AR, so you have to implement the graphical elements the user interacts with yourself, which means bringing in a separate technology for graphics.

There are several solutions for this. 

If you want to use Kotlin:

  • Until recently, you could use Google’s dedicated Sceneform SDK. But in 2020, Google moved Sceneform to the archive and dropped further support for it. Currently, the Sceneform repository is maintained by enthusiasts and is available here. The repository is updated quite frequently, but there is still a risk in using technology that Google no longer supports.
  • Integrate OpenGL into the project. OpenGL is a cross-platform API for working with graphical objects; Android provides an SDK for its mobile variant, OpenGL ES, usable from Kotlin and Java. This option is suitable if your developers know OpenGL or can figure it out quickly (a non-trivial task). 

If you want to use something that isn’t Kotlin:

  • Android NDK. If your developers know C++, they can use the Android NDK for development. However, they will also need to deal with graphics there. The OpenGL library already mentioned will be suitable for this task.
  • Unreal Engine. There is nothing better for working with graphics than game engines. Unfortunately, Google no longer supports its ARCore SDK for Unity, but Unreal Engine developers can still build ARCore applications.


Vuforia

Another popular tool for developing AR applications is Vuforia, developed by PTC. Unlike ARCore, Vuforia can work with ordinary 2D and 3D objects as well as video and audio. You can create virtual buttons, change the background, and control occlusion – the state where one object is partially hidden by another.

Fun fact: Vuforia can turn on ARCore under the hood, and the official Vuforia documentation even recommends doing so. While the application runs, Vuforia checks whether ARCore is available on the device and, if so, uses it. 

Unfortunately, bad news again for Kotlin fans: Vuforia can only be used with C or Unity. Another downside is that if you plan to publish your application commercially, you will have to buy a paid version of Vuforia (Vuforia prices). 

It works with Android 6 and up, and there is a list of recommended devices.


ARToolKit is a completely free open-source library for working with AR. Its features are:

  • support for Unity3D and OpenSceneGraph graphics libraries
  • support for single and dual cameras simultaneously
  • GPS support
  • ability to create real-time applications
  • integration with smart glasses
  • multi-language support
  • automatic camera calibration

However, the documentation leaves a lot to be desired, and the official website does not respond to clicks on menu items. Apparently, ARToolKit supports Android development via Unity. Using this library is rather risky.


MAXST

MAXST is a popular solution from Korea with very detailed documentation. There is an SDK for working with 2D and 3D objects, available in Java and Unity; in Java, you need to implement the graphics yourself. The official website states that the SDK works on Android from version 4.3, a huge plus for those who want to cover the maximum number of devices. However, the SDK is paid if you plan to publish the app. The prices are here.


Wikitude

Wikitude is a development by an Austrian company that was recently acquired by Qualcomm. It lets you recognize and track 2D and 3D objects, images, and scenes, and work with geodata; there is also integration with smart glasses. There is a Java SDK (you need to implement the graphics yourself), as well as Unity and Flutter SDKs. This solution is paid, but you can try the free version for 45 days.


Today there is a real choice of frameworks for developing AR applications for Android. Of course, there are many more than listed here, but I have tried to collect the most popular ones. I hope this helps you choose. May Android be with you.

Fora Soft develops VR/AR applications. Have a look at our portfolio: Super Power FX, Anime Power FX, UniMerse. We are #453 on TopDevelopers’ 2022 list of 3,162 top mobile app developers.

Want to have your own AR? Contact us, our technically-savvy sales team will be happy to answer all your questions.