Why We Need to Know the Number of Active Users in Your App

When clients first come to us, one of the first questions they hear is: “How many people do you expect to be using your app in the first month?” or “How many are likely to be using it simultaneously?”

Many people answer reluctantly and uncertainly, with responses ranging from “why do you need that?” to “you’re the developers, you know better”. Meanwhile, an exact answer can save the client money, and quite a lot of it, actually. Sometimes it even helps earn more.

Is it possible to save money or even make more by answering this question?

Now, let’s talk money, since we want the business to be profitable, right? Information about the number of users not only helps your project team, but also helps you save or earn more money. How?

By knowing how many people will use the platform, we can:

  • Calculate the necessary server capacity, so the client won’t have to overpay for unused resources;
  • Build a scalable system architecture;
  • Estimate the costs of load testing;
  • Build a development plan that allows the project to go to market (or present a new version to users) as quickly as possible.
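To make the capacity point concrete, here is a back-of-envelope sketch of the kind of estimate we start from. The 10% ratios below are assumptions for illustration only, not universal constants — real ratios depend heavily on the product:

```swift
// Back-of-envelope capacity estimate (illustrative numbers only).
// Assumption: ~10% of monthly users are active on a given day,
// and ~10% of those are online at the daily peak.
let monthlyActiveUsers = 10_000.0
let dailyShare = 0.10   // fraction of monthly users active per day (assumed)
let peakShare = 0.10    // fraction of daily users online at peak (assumed)

let dailyActiveUsers = monthlyActiveUsers * dailyShare       // 1,000
let peakConcurrentUsers = dailyActiveUsers * peakShare       // 100

// Servers are then sized for ~100 concurrent users, not 10,000.
```

Even a rough number like this changes the conversation: provisioning for 100 concurrent users instead of 10,000 is a very different (and much cheaper) setup.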

So, what are we doing here? We’re saving money by eliminating unnecessary costs now and by planning the implementation of future features.

This also helps make money by ensuring a quicker time-to-market (TTM), and it provides confidence that the platform is meeting its goals.

What exactly are we asking?

Depending on the specifics of the platform, it’s important for us to know:

– The maximum number of platform users per month;

– The maximum number of users on the platform online at one time;

– What exactly users are doing on the platform: e.g., posting content, making calls, logging in to a game — and how many times a day;

– The expected dynamics of audience growth.

What if I really don’t know?

If your project is already live, chances are you already have analytics. Google Analytics or its counterparts allow you to estimate the number of users quickly and accurately.

If not, you can rely on more technical data: information from databases, server load statistics, or summaries from the cloud provider console, and so on.

If you need our team to create a project from scratch, it makes sense to look at competitors’ statistics, for example, using a service like SimilarWeb. If for some reason this isn’t possible, assume 1,000 active users – our experience suggests that this is enough for the first months of a product’s life.

And, of course, in both cases you should consult our analysts. We’ll help you gather the necessary data and draw conclusions.

Is this important for all projects?

Yes, for all of them. It’s especially critical for systems that meet at least one of these criteria:

  • Large inbound/outbound traffic: users uploading and downloading HD video, or video conferencing with 3+ participants;
  • A requirement to ensure minimal latency: users playing an online game, rehearsing music together over a video call, or mixing a DJ set;
  • Long and resource-intensive operations: compressing, converting, or processing video, archiving files, routing video/audio calls, processing or generating data with neural networks.

Why not just build every project for many thousands of highly active concurrent users from the start?

Firstly, a platform like that will reach production later.

If we know that only a small audience (usually called early adopters) will be using it in the first months, it is more reasonable and profitable not to postpone the launch until the balancing and scaling systems are ready and tested under load. 

Secondly, the larger the estimated load, the more expensive the system is to operate. Especially if it runs in the cloud. Designing for a large concurrent audience means not only being able to scale, but also having enough spare capacity here and now to handle a significant influx of users at any given time. In other words, keeping a large, expensive server always on instead of a small, cheap one.

Thirdly, this calculation simply isn’t applicable to every project.

For closed corporate platforms, it simply makes no sense to develop a product for an army of thousands of users.

What does the developer do with this data?

The developer will understand:

  • What kind of server you need: on-premise, cloud (AWS, Hetzner, Google Cloud, AliCloud), or a whole network of servers
  • Whether it is possible and necessary to transfer some of the load to the user device (client)
  • Which of the optimization and performance-related tasks need to be implemented immediately and which can be deferred to later sprints

Off-topic: what’s the difference between server load and client load?

A simple example: let’s say we’re building our own Instagram. The user shoots a video, adds simple effects, and posts the result to their feed.

If the goal is to reach the first audience quickly and economically, the pilot build can do almost everything on the server. Here’s why:


  • There’s no risk of getting bogged down in platform-specific limitations: video formats, load limits, and other nuances don’t bother us. Everything is handled centrally, so you can quickly build a product for all platforms and release it simultaneously
  • There are no strict requirements for client devices: it’s easier to enter growing markets such as Africa, SEA, and Latin America. Even a super-cheap phone, of which there are many in those regions, can handle it
  • Our “non-Instagram” clients for the individual platforms, such as web and mobile OS, are very simple: authorization, a feed, an upload button, and that’s it.

And if the goal is to give full functionality to a large active audience at once, heavy server calculations lose appeal: it makes sense to harness the power of client devices immediately.


  • Fewer servers and operating costs for the same number of users
  • The user feels that the application is more responsive. Moreover, if there are already a lot of clients and we add complex new features, the platform’s responsiveness won’t degrade
  • Users feel more comfortable experimenting with new functionality: it’s implemented on the client, so delays are minimal
  • An internet connection may not be required during content processing, which saves traffic
  • The uploaded video is published faster: it does not need to be queued for server processing
  • The easier and faster the individual operations on the server, the easier and cheaper it is to scale the server. It’s especially critical when there is a sudden influx of new users

A compromise that doesn’t shift the whole load onto either side often turns out to be the best option. For example, video-processing tasks such as applying effects or graphics are often performed on the client, while converting mobile video into the required formats and resolutions is performed on the server. In this case too, the distribution of tasks between the client device and the server depends on the planned scope.

What if we develop just a component for a live project? 

In the case of extending an already existing product, it’s necessary to find out where tasks are currently processed: on the device or on the server.

Then, based on the purpose of the future component and the forecast of the number of users and their activity on the platform after it appears, the developer will understand whether to improve the current architecture or migrate to a more efficient one.

So in the end, why are we asking about the number of users?

It all comes down to efficiency and saving your resources and money. We need the most accurate knowledge possible about the product’s scope and workload. It helps your project team better plan the launch, allocate costs, and make the system more reliable in the long run.


How Digital Video as a Technology Works


In this article, we’ll try to explain what digital video is and how it works. We’ll be using a lot of examples, so even if you want to run away before reading something difficult – fear not, we’ve got you. Lean back and enjoy the video explanation from Nikolay, our CEO. 😉

Analog and digital video

Video can be analog or digital.

All of the real-world information around us is analog: waves in the ocean, sound, clouds floating in the sky. It’s a continuous flow of information that isn’t divided into parts and can be represented as waves. This analog information is exactly what people perceive from the world around them.

Old video cameras, which recorded onto magnetic cassettes, stored information in analog form. Reel-to-reel tape and cassette recorders worked on the same principle: magnetic tape passed over the recorder’s magnetic heads, which allowed the sound and video to be played back. Vinyl records were also analog.

Such recordings were played back strictly in the order in which they were recorded. Editing them further was very difficult, and so was transferring them to the Internet.

With the ubiquity of computers, almost all video is now in digital format – stored as zeros and ones. When you shoot video on your phone, it’s converted from analog to digital form and stored in memory, and when you play it back, it’s converted from digital back to analog. This is what allows you to stream video over a network, store it on your hard drive, and edit and compress it.

What a digital video is made of

Video consists of a sequence of pictures, or frames, which change rapidly, making it appear as if objects are moving on the screen.

Here’s an example of how a video clip is composed of individual frames.

What is Frame Rate

Frames on the screen change at a certain rate. The number of frames per second is the frame rate, or framerate. The traditional standard for film is 24 frames per second, while high-frame-rate formats used in some IMAX theaters run at 48.

The higher the number of frames per second, the more detail you can see with fast-moving objects in the video. 

Check out the difference between 15, 30, and 60 FPS.

What is a pixel

All displays – on TVs, tablets, phones, and other devices – are made up of tiny glowing elements called pixels. Let’s say that each pixel can display one color (technically, different manufacturers implement this differently).

To show an image on a display, each pixel on the screen must glow a certain color.

Because of this screen design, each frame of a digital video is a set of colored dots, or pixels.

Schematic screen structure

The number of such dots horizontally and vertically is called the picture resolution. A resolution is written as, for example, 1024×768: the first number is the number of pixels horizontally, and the second, vertically.

All frames in a video have the same resolution, which is therefore called the video resolution.

Let’s take a closer look at a single pixel. On the screen it’s a glowing dot of a certain color, but in the video file itself a pixel is stored as digital information (numbers). From this information, the device understands what color the pixel should light up on the screen.

What are color spaces

There are different ways of representing the color of a pixel digitally, and these ways are called color spaces. 

Color spaces are set up so that any color is represented by a point that has certain coordinates in that space. 

For example, the RGB (Red, Green, Blue) color space is a three-dimensional color space where each color is described by a set of three coordinates, one each for the red, green, and blue components.

Any color in this space is represented as a combination of red, green, and blue.

Classic RGB palette
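As a sketch of the idea, a pixel in the RGB space can be modeled as three color components. The `RGBPixel` type here is illustrative, not a real API:

```swift
// An RGB pixel: three components, one per color channel,
// each stored in 8 bits (values 0...255).
struct RGBPixel {
    var red: UInt8
    var green: UInt8
    var blue: UInt8
}

let white = RGBPixel(red: 255, green: 255, blue: 255)
let pureRed = RGBPixel(red: 255, green: 0, blue: 0)
// Any other color is some mix of the three channels:
let orange = RGBPixel(red: 255, green: 165, blue: 0)
```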

Here is an example of an RGB image decomposed into its constituent colors:

How colors in pictures mix

There are many color spaces, and they differ in the number of colors that can be encoded with them and the amount of memory required to represent the pixel color data.

The most popular spaces are RGB (used in computer graphics), YCbCr (used in video), and CMYK (used in printing).

CMYK is very similar to RGB but has four base colors: Cyan, Magenta, Yellow, and Key (black).

RGB and CMYK are not very memory-efficient, because they store redundant information.

Video uses a more efficient color space that takes advantage of human vision.

The human eye is less sensitive to the color of objects than it is to their brightness.

How human eyes perceive contrast

On the left side of the image, the colors of squares A and B are actually the same – it just seems to us that they’re different, because the brain pays more attention to brightness than to color. On the right side, a bar of uniform color connects the two squares, so we (that is, our brain) can easily see that they are, in fact, the same color.

Using this feature of vision, it’s possible to display a color image while separating the luminance from the color information. During compression, half or even three quarters of the color information can then simply be discarded (representing luminance at a higher resolution than color). A person won’t notice the difference, and we save substantially on storing the color information.
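To see how much this saves, here is a rough sketch of the arithmetic for 4:2:0 chroma subsampling — a common scheme (covered in detail in the next article) that keeps color at quarter resolution — assuming 8 bits per component:

```swift
// Full-resolution storage: 3 components × 8 bits for every pixel.
let bitsPerPixelFull = 3 * 8                       // 24 bits per pixel

// 4:2:0 subsampling: luma (Y) is kept for every pixel, but each
// chroma component (Cb, Cr) is shared by a 2×2 block of pixels.
let lumaBits = 8.0
let chromaBits = 2.0 * 8.0 / 4.0                   // two components at quarter resolution
let bitsPerPixel420 = lumaBits + chromaBits        // 12 bits per pixel

let savings = Double(bitsPerPixelFull) / bitsPerPixel420   // 2.0 — half the memory
```

So by discarding three quarters of the color information, the frame takes half the memory, and viewers barely notice.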

We’ll talk about exactly how color compression works in the next article.

The best known space that works this way is YCbCr and its variants: YUV and YIQ.  

Here’s an example of an image decomposed into its YCbCr components, where Y′ is the luminance component, and Cb and Cr are the blue-difference and red-difference chroma components.

YCbCr scheme

It is YCbCr that’s used for color coding in video. Firstly, this color space allows the color information to be compressed; secondly, it’s well suited for black-and-white video (e.g., from surveillance cameras), since the color information (Cb and Cr) can simply be omitted.


What is Bit Depth

Bit depth (or color depth) is the number of bits used to store the color of a single pixel. The more bits, the more colors can be encoded, and the more memory each pixel occupies. And the more colors, the better the picture looks.

For a long time the standard for video was a color depth of 8 bits (Standard Dynamic Range, or SDR, video). Nowadays, 10-bit or 12-bit (High Dynamic Range, or HDR, video) is increasingly used.

Bit depth contents

Keep in mind that different color spaces can encode a different number of colors with the same number of bits per pixel.
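For example, assuming three components per pixel (as in RGB), the number of encodable colors grows exponentially with the bit depth:

```swift
import Foundation

// Number of distinct colors = 2^(bits per component × components per pixel)
func colorCount(bitsPerComponent: Int, components: Int = 3) -> Double {
    pow(2.0, Double(bitsPerComponent * components))
}

let sdrColors = colorCount(bitsPerComponent: 8)   // 2^24 = 16,777,216 (~16.7 million)
let hdrColors = colorCount(bitsPerComponent: 10)  // 2^30 = 1,073,741,824 (~1.07 billion)
```

Going from 8-bit to 10-bit multiplies the palette 64-fold, which is why HDR gradients look so much smoother.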

What is Bit Rate

Bit rate is the amount of memory, in bits, that one second of video occupies. To calculate the bit rate of uncompressed video, take the number of pixels in a frame, multiply it by the color depth, and multiply that by the number of frames per second:

1024 pixels × 768 pixels × 10 bits × 24 frames per second = 188,743,680 bits per second

That’s 23,592,960 bytes, 23,040 kilobytes, or 22.5 megabytes per second.

A five-minute video would take up 6,750 megabytes, or 6.59 gigabytes, of memory.
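The same arithmetic in code, reproducing the numbers above:

```swift
// Uncompressed bit rate = width × height × bit depth × frames per second
let width = 1024, height = 768
let bitsPerPixel = 10
let fps = 24

let bitsPerSecond = width * height * bitsPerPixel * fps        // 188,743,680
let bytesPerSecond = bitsPerSecond / 8                         // 23,592,960
let megabytesPerSecond = Double(bytesPerSecond) / 1024 / 1024  // 22.5

let fiveMinutesMB = megabytesPerSecond * 5 * 60                // 6,750 MB
let fiveMinutesGB = fiveMinutesMB / 1024                       // ≈ 6.59 GB
```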

This brings us to why video compression methods appeared and why they’re needed: without compression, it’s impossible to store and transmit that amount of information over a network. YouTube videos would take forever to load.


This was a quick introduction to the world of video. Now that we know what video consists of and the basics of how it works, we can move on to more complicated topics – which will still be presented in an accessible way 🙂

In the next article I’ll tell you how video compression works. I’ll talk about lossless compression and lossy compression. 


Advanced iOS App Architecture Explained on MVVM with Code Examples


How do you share the same vision across changing developer teams? Is there a way to make onboarding new devs faster and easier, to cut costs? And how is the final product affected? In this article we want to share our experience and give a clear explanation of what iOS app architecture is, for both business and tech people.

We are a custom software development company. In 17 years of work, we have developed more than 60 applications in Swift. We regularly had to spend weeks digging into code to understand the structure and operation of yet another project. Some projects were built as MVP, some as MVVM, some with our own patterns. Switching between projects and reviewing other developers’ code added many more hours to development. So we decided to create a unified architecture for our mobile apps.

What benefits the architecture gave us:

  1. Speed up the development process. Having spent some time creating the architecture, we can now easily make changes to the code. For instance, if we need to add a new sign-up flow, just making it work used to take 8–16 hours. Now it takes only 1–2.
  2. Eliminate bugs. Not completely, but there are now fewer of them. We’ve already developed many kinds of flows and cases; add the settled approach to that, and we no longer have to search for solutions – we just write the code. We already know which bugs can occur, so we avoid them straight away.
  3. Hand projects over more easily. If a project’s developer is away (e.g., sick or on vacation), we find someone to replace them until they’re back. Previously, the substitute developer would waste time (= the client’s money) studying the code before entering the project. Now we’ve minimized this kind of expense: all the solutions are unified, so a programmer can easily continue the development.

When we set out to create an iOS app architecture, we first defined the main goals to achieve:

Simplicity and speed. One of the main goals is to make developers’ lives easier. For that, the code must be readable and the application must have a simple and clear structure.

Quick immersion in the project. Outsourced development doesn’t leave much time to dive into a project. It’s important that, when switching to another project, the developer doesn’t need much time to learn the application code.

Scalability and extensibility. The application under development must be ready for large loads and must make it easy to add new functionality. For this, it’s important that the architecture follows modern development principles, such as SOLID, and the latest versions of the SDK.

Constant development. You can’t make a perfect architecture all at once, it comes with time. Every developer contributes to it – we have weekly meetings where we discuss the advantages and disadvantages of the existing architecture and things we would like to improve.

The foundation of our architecture is the MVVM pattern with coordinators 

Comparing the popular MV(X) patterns, we settled on MVVM. It seemed the best because of its good development speed and flexibility.

MVVM stands for Model, View, ViewModel:

  • Model – provides data and the methods for working with it: requesting and receiving it, checking it for correctness, etc.
  • View – the layer responsible for the graphical presentation.
  • ViewModel – the mediator between the Model and the View. It reacts to the user’s actions on the View, applies the corresponding changes to the Model, and updates the View with changes from the Model. Its main distinctive feature compared to the other mediators in MV(X) patterns is the reactive binding of View and ViewModel, which significantly simplifies and reduces the code for moving data between these entities.

Along with the MVVM, we’ve added coordinators. These are objects that control the navigational flow of our application. They help to:

  • isolate and reuse ViewControllers
  • pass dependencies down the navigation hierarchy
  • define the application’s usage scenarios
  • implement Deep Links

We also use the DI (Dependency Injection) pattern in our iOS development architecture. With DI, an object’s dependencies are supplied externally rather than created by the object itself. We use DITranquillity, a lightweight but powerful framework that lets you configure dependencies in a declarative style.
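The core idea of DI is framework-independent. Here is a minimal constructor-injection sketch; the types are illustrative and not taken from DITranquillity:

```swift
protocol NotesStorage {
    func loadNotes() -> [String]
}

struct InMemoryNotesStorage: NotesStorage {
    func loadNotes() -> [String] { ["First note"] }
}

// The view model does NOT create its storage itself —
// the dependency is specified externally, via the initializer.
final class NotesViewModel {
    private let storage: NotesStorage

    init(storage: NotesStorage) {   // injected from outside
        self.storage = storage
    }

    func titles() -> [String] { storage.loadNotes() }
}

// A composition root (in our case, the DI container) wires things up:
let viewModel = NotesViewModel(storage: InMemoryNotesStorage())
```

Because `NotesViewModel` only knows the `NotesStorage` protocol, the real database-backed storage can be swapped for an in-memory fake in tests without touching the view model.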

How to implement it?

Let’s break down our advanced iOS app architecture using a note-taking application as an example. 

Let’s create the skeleton of the future application and implement the protocols needed for routing.

import UIKit
protocol Presentable {
    func toPresent() -> UIViewController?
}

extension UIViewController: Presentable {
    func toPresent() -> UIViewController? {
        return self
    }
}

protocol Router: Presentable {
    func present(_ module: Presentable?)
    func present(_ module: Presentable?, animated: Bool)
    func push(_ module: Presentable?)
    func push(_ module: Presentable?, hideBottomBar: Bool)
    func push(_ module: Presentable?, animated: Bool)
    func push(_ module: Presentable?, animated: Bool, completion: (() -> Void)?)
    func push(_ module: Presentable?, animated: Bool, hideBottomBar: Bool, completion: (() -> Void)?)
    func popModule()
    func popModule(animated: Bool)
    func dismissModule()
    func dismissModule(animated: Bool, completion: (() -> Void)?)
    func setRootModule(_ module: Presentable?)
    func setRootModule(_ module: Presentable?, hideBar: Bool)
    func popToRootModule(animated: Bool)
}

Configuring AppDelegate and AppCoordinator

A diagram of the interaction between the delegate and the coordinators

In the AppDelegate, we create a container for the DI. In the registerParts() method we add all the dependencies in the application. Next, we initialize the AppCoordinator, passing it the window and the container, and call its start() method, thereby giving it control.

class AppDelegate: UIResponder, UIApplicationDelegate {
    private let container = DIContainer()
    var window: UIWindow?
    private var applicationCoordinator: AppCoordinator?

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        registerParts()
        let window = UIWindow()
        let applicationCoordinator = AppCoordinator(window: window, container: container)
        self.applicationCoordinator = applicationCoordinator
        self.window = window
        applicationCoordinator.start()   // hand control over to the coordinator
        return true
    }

    private func registerParts() {
        container.append(part: ModelPart.self)
        container.append(part: NotesListPart.self)
        container.append(part: CreateNotePart.self)
        container.append(part: NoteDetailsPart.self)
    }
}

The AppCoordinator determines which scenario the application should run. For example, if the user isn’t authorized, the authorization flow is shown; otherwise the main application scenario starts. In the case of the notes application, we have one scenario – displaying the list of notes.

We do the same as with the AppCoordinator, only instead of a window we pass a router.

final class AppCoordinator: BaseCoordinator {
    private let window: UIWindow
    private let container: DIContainer

    init(window: UIWindow, container: DIContainer) {
        self.window = window
        self.container = container
    }

    override func start() {
        openNotesList()
    }

    override func start(with option: DeepLinkOption?) {
        // Deep links would be handled here
        start()
    }

    func openNotesList() {
        let navigationController = UINavigationController()
        navigationController.navigationBar.prefersLargeTitles = true
        let router = RouterImp(rootController: navigationController)
        let notesListCoordinator = NotesListCoordinator(router: router, container: container)
        window.rootViewController = navigationController
        notesListCoordinator.start()
    }
}

In NotesListCoordinator, we obtain the note-list screen’s dependency using the container.resolve() method. Be sure to specify the type of the dependency explicitly, so the library knows which dependency to fetch. We also set up transition handlers for the next screens. The dependency setup itself will be shown later.

class NotesListCoordinator: BaseCoordinator {
    private let container: DIContainer
    private let router: Router

    init(router: Router, container: DIContainer) {
        self.router = router
        self.container = container
    }

    override func start() {
        setNotesListRoot()
    }

    func setNotesListRoot() {
        let notesListDependency: NotesListDependency = container.resolve()
        notesListDependency.viewModel.onNoteSelected = { [weak self] note in
            self?.pushNoteDetailsScreen(note: note)
        }
        notesListDependency.viewModel.onCreateNote = { [weak self] in
            self?.pushCreateNoteScreen(mode: .create)
        }
        router.setRootModule(notesListDependency.viewController)
    }
}

Creating a module

Each module in an application can be represented like this:

a graphic scheme of iOS module scheme (with blocks and arrows)
Module scheme in iOS application architecture

The Model layer in our application is represented by the Provider entity. Its scheme:

Provider scheme in apple app architecture

The Provider is the entity in our iOS app architecture that’s responsible for communicating with services and managers in order to receive, send, and process data for the screen – e.g., contacting services to retrieve data from the network or a database.

Let’s create a protocol for communicating with our provider, declaring the necessary fields and methods. We create a ProviderState structure, where we declare the data our screen will depend on. In the protocol, we declare a currentState field of type ProviderState, its observable counterpart state of type Observable&lt;ProviderState&gt;, and methods for changing the current state.

Then we create an implementation of the protocol, named after the protocol + “Impl”. We mark currentState as @Published: this property wrapper lets us create an observable value that automatically reports changes. BehaviorRelay could do the same thing, having both observable and observer properties, but its data-update flow took three lines, while @Published takes one. We also set the access level to private(set), because the provider’s state should not be changed from outside the provider. state will observe currentState and broadcast changes to its subscribers – namely, our future ViewModel. Don’t forget to implement the methods we’ll need when working on this screen.

struct Note {
    let id: Identifier<Self>
    let dateCreated: Date
    var text: String
    var dateChanged: Date?
}

protocol NotesListProvider {
    var state: Observable<NotesListProviderState> { get }
    var currentState: NotesListProviderState { get }
}

class NotesListProviderImpl: NotesListProvider {
    let disposeBag = DisposeBag()
    lazy var state = $currentState
    @Published private(set) var currentState = NotesListProviderState()

    init(sharedStore: SharedStore<[Note], Never>) {
        sharedStore.state.subscribe(onNext: { [weak self] notes in
            self?.currentState.notes = notes
        }).disposed(by: disposeBag)
    }
}

struct NotesListProviderState {
    var notes: [Note] = []
}
View-Model scheme in iOS development architecture

Here we create a protocol, just as for the provider, declaring fields such as viewInputData and events. ViewInputData is the data that will be passed directly to our view controller. We then create the ViewModel implementation, subscribe viewInputData to the provider’s state, and transform it into the format the view needs using the mapToViewInputData function. We create an events enum, defining everything that should be handled on the screen: view loading, button presses, cell selection, etc. We make events a PublishSubject so that we can both emit new elements and subscribe to handle each event.

protocol NotesListViewModel: AnyObject {
    var viewInputData: Observable<NotesListViewInputData> { get }
    var events: PublishSubject<NotesListViewEvent> { get }
    var onNoteSelected: ((Note) -> ())? { get set }
    var onCreateNote: (() -> ())? { get set }
}

class NotesListViewModelImpl: NotesListViewModel {
    let disposeBag = DisposeBag()
    let viewInputData: Observable<NotesListViewInputData>
    let events = PublishSubject<NotesListViewEvent>()
    let notesProvider: NotesListProvider
    var onNoteSelected: ((Note) -> ())?
    var onCreateNote: (() -> ())?

    init(notesProvider: NotesListProvider) {
        self.notesProvider = notesProvider
        self.viewInputData = { $0.mapToNotesListViewInputData() }
        events.subscribe(onNext: { [weak self] event in
            switch event {
            case .viewDidAppear, .viewWillDisappear:
                break
            case let .selectedNote(id):
                self?.noteSelected(id: id)
            case .createNote:
                self?.onCreateNote?()
            }
        }).disposed(by: disposeBag)
    }

    private func noteSelected(id: Identifier<Note>) {
        if let note = notesProvider.currentState.notes.first(where: { $0.id == id }) {
            onNoteSelected?(note)
        }
    }
}

private extension NotesListProviderState {
    func mapToNotesListViewInputData() -> NotesListViewInputData {
        return NotesListViewInputData(notes: { ($0.id, NoteCollectionViewCell.State(text: $0.text)) })
    }
}
View scheme in iOS mobile architecture

In this layer, we configure the screen’s UI and its bindings with the view model. The View layer is represented by the UIViewController. In viewWillAppear(), we subscribe to viewInputData and pass the data to render, which distributes it to the appropriate UI elements.

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let disposeBag = DisposeBag()
        viewModel.viewInputData.subscribe(onNext: { [weak self] viewInputData in
            self?.render(data: viewInputData)
        }).disposed(by: disposeBag)
        self.disposeBag = disposeBag
    }

    private func render(data: NotesListViewInputData) {
        var snapshot = DiffableDataSourceSnapshot<NotesListSection, NotesListSectionItem>()
        snapshot.appendSections([.main]) // assuming a single “.main” section
        snapshot.appendItems( { NotesListSectionItem.note($0.0, $0.1) })
        dataSource.apply(snapshot)
    }

We also add event bindings, either with RxSwift or the basic way through selectors. 

    @objc private func createNoteBtnPressed() {
        viewModel.events.onNext(.createNote) // emit the event for the view model to handle
    }

Now that all the components of the module are ready, let’s link the objects together. A module is a class conforming to the DIPart protocol. It primarily serves to maintain the code hierarchy by combining parts of the system into a single class, and it later includes some, but not all, of the components in the list. We implement the required load(container:) method, where we register our components.

final class NotesListPart: DIPart {
    static func load(container: DIContainer) {
        container.register { SharedStore<[Note], Never>(initial: []) } // shared notes store (initializer is illustrative)
            .as(SharedStore<[Note], Never>.self, tag: NotesListScope.self)
        container.register { NotesListProviderImpl(sharedStore: by(tag: NotesListScope.self, on: $0)) }
            .as(NotesListProvider.self)
    }
}

struct NotesListDependency {
    let viewModel: NotesListViewModel
    let viewController: NotesListViewController
}

We register components with the container.register() method, passing it our object and specifying the protocol through which it will communicate, as well as the object’s lifetime. We do the same with all the other components.

Our module is ready – don’t forget to add it to the container in the AppDelegate. Now let’s go to the NotesListCoordinator, to the list-opening function. We obtain the required dependency through the container.resolve function (be sure to declare the variable’s type explicitly), then create the onNoteSelected and onCreateNote event handlers and pass the view controller to the router.

 func setNotesListRoot() {
        let notesListDependency: NotesListDependency = container.resolve()
        notesListDependency.viewModel.onNoteSelected = { [weak self] note in
            self?.pushNoteDetailsScreen(note: note)
        }
        notesListDependency.viewModel.onCreateNote = { [weak self] in
            self?.pushCreateNoteScreen(mode: .create)
        }
        router.setRootModule(notesListDependency.viewController)
    }

Other modules and navigation are created following the same steps. In conclusion, we should say the architecture isn’t without flaws. A couple of problems: changing one field in viewInputData forces the whole UI to update rather than specific elements of it, and the common flow for working with UITabBarController and UIPageViewController is underdeveloped.

November’22 Update

It’s been six months since we released this article and mentioned the issues and weak spots above. We’ve done some work, and here are the improvements we’ve made:

  1. Now you don’t have to update the entire provider state when altering one field.
  2. We implemented UIPageViewController and UITabBarController to our architecture.


We’ve mentioned that we built the provider State with a custom property wrapper — RxPublished. It’s an analogue of Combine’s Published, but for RxSwift. It wraps a BehaviorRelay, so whenever we modified State we sent the new instance to the subject, and only then did the subject deliver it to its subscribers. But there were cases when we needed to update several state fields, yet deliver the updated state only once the whole operation was completed.

We found a simple solution using an inout parameter and a closure. A function that takes a parameter via inout writes the updated value back to the caller’s variable once it completes. The solution is literally three lines (and saves A LOT of time):

  1. Copy the current state;
  2. Carry out the closure;
  3. Assign the updated state to the subject.
func commit(changes: (inout State) -> Void) {
    var updatedState = stateRelay.value  // 1. copy the current state
    changes(&updatedState)               // 2. carry out the closure
    stateRelay.accept(updatedState)      // 3. assign the updated state to the subject
}
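To see the batching behaviour in isolation, here’s a self-contained sketch with a minimal relay stand-in (the MiniRelay type and the State fields are illustrative, not the project’s actual RxSwift code):

```swift
// Minimal stand-in for BehaviorRelay: it counts notifications,
// so we can check that commit(changes:) publishes once per batch
// of field updates instead of once per field.
struct State { var title = ""; var count = 0 }

final class MiniRelay {
    private(set) var value: State
    private(set) var emissions = 0
    init(_ value: State) { self.value = value }
    func accept(_ new: State) { value = new; emissions += 1 }
}

let relay = MiniRelay(State())

func commit(changes: (inout State) -> Void) {
    var updated = relay.value   // 1. copy the current state
    changes(&updated)           // 2. carry out the closure
    relay.accept(updated)       // 3. publish the whole batch once
}

// Two field updates, but only one emission:
commit { state in
    state.title = "Groceries"
    state.count = 3
}

print(relay.emissions)      // prints "1"
print(relay.value.count)    // prints "3"
```

With the real BehaviorRelay the shape is the same; only accept(_:) comes from RxRelay instead of our stand-in.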
[Image: a table with before and after code samples for State]


Implementing UIPageViewController in the MVVM architecture made the development process quite easy. Check out this step-by-step tutorial:

  1. Make a module for the PageViewController.
  2. In the provider, prepare the data you’ll need to configure the modules inside the UIPageViewController.
  3. Build the ViewModel as usual: transform the provider state into the view state.
  4. Add the screens’ DI modules to the viewController via its initializer.

Please note that if you want to reuse modules, you should make sure that a new instance is returned each time the module is addressed. To do that, use the Provider property (not to be confused with the module’s provider): it’s responsible for returning a new instance every time the variable is accessed. Tip: use the SwiftLazy library from DITranquillity — a great alternative to the native lazy, with even better functionality, including the Provider we need here.
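As a sketch of that Provider idea (the type and property names here are illustrative assumptions; check SwiftLazy’s own documentation for the exact API):

```swift
import SwiftLazy

final class PagesDependency {
    // Provider hands back a NEW instance every time .value is read,
    // so a reused module never ends up sharing a stale view controller.
    let pageModule: Provider<SomePageViewController>

    init(pageModule: Provider<SomePageViewController>) {
        self.pageModule = pageModule
    }

    func makePage() -> SomePageViewController {
        pageModule.value   // fresh instance on every access
    }
}
```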

  5. Configure each screen in the render function with the required data. Here’s an example:
let someDependency: SomeModuleDependency
let anotherDependency: AnotherModuleDependency

init(....) { }

func render(with data: InputData) {
    pageVC.setViewControllers(
        [someDependency.viewController, ….],
        direction: .forward,
        animated: false
    )
}


TabBarController now has its own coordinator, so we can configure a separate flow for each tab. By flow we mean a coordinator-and-router pair. One thing to remember: add the child coordinators to the storage using addDependency and call their start() method. Here’s how to do this programmatically:


private typealias Flow = (Coordinator, Presentable)

override func start() {
     let flows = [someFlow(), anotherFlow()]
     let coordinators = flows.map { $0.0 }
     let controllers = flows.compactMap { $0.1 as? UINavigationController }
     router.setViewControllers(controllers: controllers)
     coordinators.forEach {
         addDependency($0)  // keep the child coordinator alive in storage
         $0.start()
     }
}

func someFlow() -> Flow {
     let coordinator = someCoordinator()
     let router = RouterImpl(rootController: UINavigationController())
     return (coordinator, router)
}

As you can see, all the updates are easy and quick to implement in your mobile app architecture. We plan on adding custom popup support and more cool stuff.


With the creation of the iOS app architecture, our work became much easier. It’s not so scary anymore to cover for a colleague on vacation or to take on a new project. Colleagues can look up solutions for a particular implementation without puzzling over how to make it work properly with our architecture.

Over the past year, we have already managed to add shared storage, error handling for coordinators, and improved routing logic, and we aren’t gonna stop there.

If you’re interested in learning more about our iOS software development expertise, read WebRTC in iOS Explained. Creating an online conference app or introducing calls to your platform has never been this easy.