How to Implement Screen Sharing in an iOS App Using ReplayKit and an App Extension

Intro

Screen sharing is capturing a user's display and showing it to peers during a video call.

There are two ways to implement screen sharing in an iOS app:

  1. Screen sharing in app. A user can only share their screen from within one particular app; if they minimize the app window, broadcasting stops. It's quite easy to implement.
  2. Screen sharing with extensions. This approach enables screen sharing from almost any point of the OS: e.g. the Home Screen, external apps, system settings. But the implementation can be quite time-consuming.

In this article, we’ll share guides on both.

Screen sharing in app

Starting off easy – how to screen share within an app. We'll use Apple's ReplayKit framework.

import ReplayKit
import UIKit

class ScreenShareViewController: UIViewController {

    lazy var startScreenShareButton: UIButton = {
        let button = UIButton()
        button.setTitle("Start screen share", for: .normal)
        button.setTitleColor(.systemGreen, for: .normal)
        return button
    }()

    lazy var stopScreenShareButton: UIButton = {
        let button = UIButton()
        button.setTitle("Stop screen share", for: .normal)
        button.setTitleColor(.systemRed, for: .normal)
        return button
    }()

    lazy var changeBgColorButton: UIButton = {
        let button = UIButton()
        button.setTitle("Change background color", for: .normal)
        button.setTitleColor(.gray, for: .normal)
        return button
    }()

    lazy var videoImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.image = UIImage(systemName: "rectangle.slash")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()
}

Here we set up the view controller with the recording buttons, a background color change button, and an image view – this is where the captured video will appear later.


To capture the screen, we get the shared recorder instance via RPScreenRecorder.shared() and call startCapture(handler:completionHandler:).

@objc func startScreenShareButtonTapped() {
    RPScreenRecorder.shared().startCapture { sampleBuffer, sampleBufferType, error in
        self.handleSampleBuffer(sampleBuffer, sampleType: sampleBufferType)
        if let error = error {
            print(error.localizedDescription)
        }
    } completionHandler: { error in
        if let error = error {
            print(error.localizedDescription)
        }
    }
}

Then the app asks for permission to capture the screen.

ReplayKit starts generating a CMSampleBuffer stream for each media type – video, app audio, or microphone audio. Each buffer contains the media fragment itself – e.g. a captured video frame – along with all the information needed to process it.

func handleSampleBuffer(_ sampleBuffer: CMSampleBuffer, sampleType: RPSampleBufferType) {
    switch sampleType {
    case .video:
        handleVideoFrame(sampleBuffer: sampleBuffer)
    case .audioApp:
        break // handle app audio
    case .audioMic:
        break // handle mic audio
    @unknown default:
        break
    }
}

The following function converts each generated video frame into a UIImage and displays it on screen.

func handleVideoFrame(sampleBuffer: CMSampleBuffer) {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)

    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
    let image = UIImage(cgImage: cgImage)
    render(image: image)
}
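The render(image:) helper isn't shown above. A minimal sketch, assuming the videoImageView from the view controller earlier, could look like this:

// Buffers arrive on a background queue, so hop to the main thread before touching UIKit.
func render(image: UIImage) {
    DispatchQueue.main.async { [weak self] in
        self?.videoImageView.image = image
    }
}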

Here’s what it looks like:

generated frames
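Stopping the in-app capture – wired to stopScreenShareButton – is symmetrical; a minimal sketch:

@objc func stopScreenShareButtonTapped() {
    // Stops the capture session started with startCapture.
    RPScreenRecorder.shared().stopCapture { error in
        if let error = error {
            print(error.localizedDescription)
        }
    }
}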

Broadcasting the captured screen via WebRTC

A common scenario: during a video call, one peer wants to show the other what's on their screen. WebRTC is a great fit for this.

WebRTC connects two clients to deliver video data directly, without any additional servers – a peer-to-peer (p2p) connection. Check out this article to learn about it in detail.

The data streams that clients exchange are media streams containing audio and video tracks. A video track might carry a camera image or a screen image.

To establish the p2p connection successfully, configure a local media stream that will later be added to the session description. To do that, get an object of the RTCPeerConnectionFactory class and add to it a media stream packed with audio and video tracks.

func start(peerConnectionFactory: RTCPeerConnectionFactory) {
    self.peerConnectionFactory = peerConnectionFactory
    if self.localMediaStream != nil {
        self.startBroadcast()
    } else {
        let streamLabel = UUID().uuidString.replacingOccurrences(of: "-", with: "")
        self.localMediaStream = peerConnectionFactory.mediaStream(withStreamId: "\(streamLabel)")

        let audioTrack = peerConnectionFactory.audioTrack(withTrackId: "\(streamLabel)a0")
        self.localMediaStream?.addAudioTrack(audioTrack)

        self.videoSource = peerConnectionFactory.videoSource()
        self.screenVideoCapturer = RTCVideoCapturer(delegate: videoSource!)
        self.startBroadcast()

        self.localVideoTrack = peerConnectionFactory.videoTrack(with: videoSource!, trackId: "\(streamLabel)v0")
        if let videoTrack = self.localVideoTrack {
            self.localMediaStream?.addVideoTrack(videoTrack)
        }
        self.configureScreenCapturerPreview()
    }
}

Pay attention to the video track configuration:

func handleSampleBuffer(sampleBuffer: CMSampleBuffer, type: RPSampleBufferType) {
    if type == .video {
        guard let videoSource = videoSource,
              let screenVideoCapturer = screenVideoCapturer,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        videoSource.adaptOutputFormat(toWidth: Int32(width), height: Int32(height), fps: 24)

        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        // timeStampNs expects nanoseconds since the epoch
        let timestamp = Int64(Date().timeIntervalSince1970 * 1_000_000_000)

        let videoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: RTCVideoRotation._0, timeStampNs: timestamp)
        videoSource.capturer(screenVideoCapturer, didCapture: videoFrame)
    }
}

Screen sharing with App Extension

Since iOS is a rather closed and highly protected OS, an app can't easily access storage outside its own sandbox. To give developers access to certain features outside an app, Apple created App Extensions – separate executables with access to specific extension points in iOS, each operating according to its type. An App Extension and the main app (let's call it the Containing App) don't interact with each other directly, but they can share a data container. To set that up, create an App Group on the Apple Developer website, then link the group to both the Containing App and the App Extension.

Scheme of data exchange between entities
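Once the group is linked to both targets, either side can read and write shared state through the group container. For example (the group identifier here is a placeholder):

// Shared defaults visible to both the Containing App and the App Extension.
let sharedDefaults = UserDefaults(suiteName: "group.com.example.screenshare")
sharedDefaults?.set(true, forKey: "broadcastIsActive")
let isActive = sharedDefaults?.bool(forKey: "broadcastIsActive") ?? false

This is exactly the mechanism the BroadcastStatusManagerImpl shown below relies on.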

Now to building the App Extension. Create a new target and select Broadcast Upload Extension – it has access to the recorded stream and handles its further processing. Create and set up the App Group between the targets. In the newly created extension folder you'll find Info.plist and a SampleHandler.swift file. SampleHandler contains a class of the same name that will process the recorded stream.

The methods we'll work with are already stubbed out in this class:

override func broadcastStarted(withSetupInfo setupInfo: [String : NSObject]?)
override func broadcastPaused() 
override func broadcastResumed() 
override func broadcastFinished()
override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType)

Their names tell us what each one is responsible for – all except the last. It receives every new CMSampleBuffer together with its type; when the buffer type is .video, the buffer contains the latest captured frame.
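For reference, a trimmed-down SampleHandler skeleton – roughly what Xcode generates, reduced to the video case:

import ReplayKit

class SampleHandler: RPBroadcastSampleHandler {
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            // The latest captured frame arrives here – serialize it
            // and write it to the shared App Group container.
            break
        case .audioApp, .audioMic:
            break
        @unknown default:
            break
        }
    }
}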

Now let's implement launching the iOS broadcast. To start off, we display the RPSystemBroadcastPickerView itself and set which extension it should call.

let frame = CGRect(x: 0, y: 0, width: 60, height: 60)
let systemBroadcastPicker = RPSystemBroadcastPickerView(frame: frame)
systemBroadcastPicker.autoresizingMask = [.flexibleTopMargin, .flexibleRightMargin]
if let url = Bundle.main.url(forResource: "<OurName>BroadcastExtension", withExtension: "appex", subdirectory: "PlugIns"),
   let bundle = Bundle(url: url) {
    systemBroadcastPicker.preferredExtension = bundle.bundleIdentifier
}
view.addSubview(systemBroadcastPicker)

Once a user taps "Start broadcast", the broadcast starts and the selected extension processes both the state and the stream itself. But how will the Containing App know about it? Since the storage container is shared, we can exchange data through the file system – e.g. via UserDefaults(suiteName:) and FileManager. With those we can set up a timer, poll the state at regular intervals, and write and read data at a known path. An alternative is to launch a local web-socket server and talk to it, but in this article we'll only cover exchanging via files.

Let's write a BroadcastStatusManagerImpl class that records the current broadcast status and notifies its subscriber about status changes. We'll poll for updates using a timer that fires every 0.5 seconds.

protocol BroadcastStatusSubscriber: AnyObject {
    func onChange(status: Bool)
}

protocol BroadcastStatusManager: AnyObject {
    func start()
    func stop()
    func subscribe(_ subscriber: BroadcastStatusSubscriber)
}

final class BroadcastStatusManagerImpl: BroadcastStatusManager {

    // MARK: Private properties

    private let suiteName = "group.com.<YourOrganizationName>.<>"
    private let forKey = "broadcastIsActive"

    private weak var subscriber: BroadcastStatusSubscriber?
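    // DispatchTimer is the project's own GCD-based timer wrapper, not a Foundation type.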
    private var isActiveTimer: DispatchTimer?
    private var isActive = false

    deinit {
        isActiveTimer = nil
    }

    // MARK: Public methods

    func start() {
        setStatus(true)
    }

    func stop() {
        setStatus(false)
    }

    func subscribe(_ subscriber: BroadcastStatusSubscriber) {
        self.subscriber = subscriber
        isActive = getStatus()

        isActiveTimer = DispatchTimer(timeout: 0.5, repeat: true, completion: { [weak self] in
            guard let self = self else { return }

            let newStatus = self.getStatus()

            guard self.isActive != newStatus else { return }

            self.isActive = newStatus
            self.subscriber?.onChange(status: newStatus)
        }, queue: DispatchQueue.main)

        isActiveTimer?.start()
    }

    // MARK: Private methods

    private func setStatus(_ status: Bool) {
        UserDefaults(suiteName: suiteName)?.set(status, forKey: forKey)
    }

    private func getStatus() -> Bool {
        UserDefaults(suiteName: suiteName)?.bool(forKey: forKey) ?? false
    }
}

Now we create instances of BroadcastStatusManagerImpl in both the App Extension and the Containing App, so that each knows the broadcast state and can record it. The Containing App can't stop the broadcast directly. That's why we subscribe to the state: when it reports false, the App Extension terminates broadcasting using the finishBroadcastWithError method. Even though we actually end it with no error, this is the only method Apple's SDK provides for terminating a broadcast programmatically.

extension SampleHandler: BroadcastStatusSubscriber {
    func onChange(status: Bool) {
        if status == false {
            finishBroadcastWithError(NSError(domain: "<YourName>BroadcastExtension", code: 1, userInfo: [
                NSLocalizedDescriptionKey: "Broadcast completed"
            ]))
        }
    }
}

Now both apps know when the broadcast starts and ends. Next, we need to deliver the latest frame's data. To do that, we create a PixelBufferSerializer class declaring serialize and deserialize methods. In the SampleHandler's processSampleBuffer method we convert the CMSampleBuffer to a CVPixelBuffer and then serialize it to Data. When serializing to Data it's important to record the format type, height, width, and bytes-per-row of each plane in the buffer. In this particular case we have two planes – luminance and chrominance – plus their data. To read the buffer's data, use the CVPixelBuffer family of functions.
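The exact serializer isn't shown in the article; a rough sketch of the serializing half, assuming an NV12 (luminance + chrominance) buffer and a self-invented header layout, might look like this:

import CoreVideo

// Sketch: flatten a planar pixel buffer into Data.
// Per-plane header: width, height, bytesPerRow – followed by the plane's bytes.
func serialize(pixelBuffer: CVPixelBuffer) -> Data {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    var data = Data()
    var format = CVPixelBufferGetPixelFormatType(pixelBuffer)
    data.append(Data(bytes: &format, count: MemoryLayout<OSType>.size))

    for plane in 0 ..< CVPixelBufferGetPlaneCount(pixelBuffer) {
        var width = UInt32(CVPixelBufferGetWidthOfPlane(pixelBuffer, plane))
        var height = UInt32(CVPixelBufferGetHeightOfPlane(pixelBuffer, plane))
        var bytesPerRow = UInt32(CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane))
        data.append(Data(bytes: &width, count: 4))
        data.append(Data(bytes: &height, count: 4))
        data.append(Data(bytes: &bytesPerRow, count: 4))

        if let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane) {
            data.append(Data(bytes: base, count: Int(bytesPerRow * height)))
        }
    }
    return data
}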

While testing iOS-to-Android we ran into a problem: the Android device just wouldn't display the shared screen. It turned out Android OS doesn't support the irregular resolution the video had. We solved it by simply scaling the video to 1080×720.

Once the frame is serialized into Data, write its bytes into the memory-mapped file:

memcpy(mappedFile.memory, baseAddress, data.count)

Then create a BroadcastBufferContext class in the Containing App. Its logic is similar to BroadcastStatusManagerImpl: it reads the file on each timer iteration and hands the data on for further processing. The stream itself arrives at 60 FPS, but it's better to read it at 30 FPS, since the system doesn't cope well with processing at 60 FPS for lack of resources.

func subscribe(_ subscriber: BroadcastBufferContextSubscriber) {
    self.subscriber = subscriber

    framePollTimer = DispatchTimer(timeout: 1.0 / 30.0, repeat: true, completion: { [weak self] in
        guard let mappedFile = self?.mappedFile else {
            return
        }

        var orientationValue: Int32 = 0
        mappedFile.read(at: 0 ..< 4, to: &orientationValue)
        self?.subscriber?.newFrame(Data(
            bytesNoCopy: mappedFile.memory.advanced(by: 4),
            count: mappedFile.size - 4,
            deallocator: .none
        ))
    }, queue: DispatchQueue.main)
    framePollTimer?.start()
}

Deserialize it all back into a CVPixelBuffer, the same way we serialized it but in reverse. Then configure the video track, setting the resolution and FPS:

videoSource.adaptOutputFormat(toWidth: Int32(width), height: Int32(height), fps: 60)

Now add the frame via RTCVideoFrame(buffer: rtcPixelBuffer, rotation: RTCVideoRotation._0, timeStampNs: timestamp). This track goes to the local stream:

localMediaStream.addVideoTrack(videoTrack)

Conclusion 

Implementing screen sharing in iOS is not as easy as it may seem. The closed-off nature and security of the OS force developers to look for workarounds for tasks like this. We've found some – check out the result in our Fora Soft Video Calls app, available on the App Store.

Advanced iOS App Architecture Explained on MVVM with Code Examples


How do you share the exact same vision across changing developer teams? Is there a way to make onboarding new devs faster and easier, to cut costs? How will the final product be affected? In this article we want to share our experience and give a clear explanation of what iOS app architecture is, for both business and tech people.

We are a custom software development company. In 17 years of work, we have developed more than 60 applications in Swift. We regularly had to spend weeks digging into code to understand the structure and operation of another project. Some projects we built as MVP, some as MVVM, some following our own pattern. Switching between projects and reviewing other developers' code added several more hours to development. So we decided to create a unified architecture for our mobile apps.

The benefits the architecture gave us:

  1. It speeds up development. Having spent some time creating the architecture, we can now easily make changes to the code. For instance, if we need to rework a sign-up flow, just making it work used to take us 8-16 hours; now it only takes 1-2 hours.
  2. It eliminates bugs. Not completely, but there are fewer now. We've already built many different flows and cases; add the settled approach to that, and we don't have to search for solutions anymore – we just write the code. We already know which bugs can occur, so we avoid them straight away.
  3. It makes handing projects over easier. If a project's developer is away (e.g. on sick leave or vacation), we find someone to replace them until they're back. A substitute developer used to waste time (= the client's money) studying the code before entering a project. Now we've minimized this kind of expense, since we've unified all the solutions and a programmer can easily continue the development.

When we set out to create an iOS app architecture, we first defined the main goals to achieve:

Simplicity and speed. One of the main goals is to make developers’ lives easier. To do this, the code must be readable and the application must have a simple and clear structure. 

Quick immersion in the project. Outsourced development doesn’t provide much time to dive into a project. It is important that when switching to another project, it does not take the developer much time to learn the application code. 

Scalability and extensibility. The application under development must be ready for large loads and allow new functionality to be added easily. For this it is important that the architecture corresponds to modern development principles, such as SOLID, and the latest versions of the SDK.

Constant development. You can’t make a perfect architecture all at once, it comes with time. Every developer contributes to it – we have weekly meetings where we discuss the advantages and disadvantages of the existing architecture and things we would like to improve.

The foundation of our architecture is the MVVM pattern with coordinators 

Comparing the popular MV(X) patterns, we settled on MVVM. It seemed the best fit thanks to its good development speed and flexibility.

MVVM stands for Model, View, ViewModel:

  • Model – provides data and methods for working with it: requesting and receiving data, checking it for correctness, etc.
  • View – the layer responsible for the graphical presentation.
  • ViewModel – the mediator between the Model and the View. It reacts to user actions performed on the View, changes the Model accordingly, and updates the View using changes from the Model. Its main distinctive feature among the intermediaries in MV(X) patterns is the reactive binding of View and ViewModel, which significantly simplifies and reduces the code for passing data between these entities.

Along with MVVM, we've added coordinators – objects that control the navigation flow of our application (a minimal sketch of the base class follows the list). They help to:

  • isolate and reuse ViewControllers
  • pass dependencies down the navigation hierarchy
  • define the application's use cases
  • implement Deep Links
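BaseCoordinator is referenced throughout the rest of the article (start(), addDependency(_:)); a minimal sketch of what such a base class can look like:

protocol Coordinator: AnyObject {
    func start()
    func start(with option: DeepLinkOption?)
}

class BaseCoordinator: Coordinator {
    // Strong references keep child coordinators alive while their flows run.
    private var childCoordinators: [Coordinator] = []

    func start() {}
    func start(with option: DeepLinkOption?) {}

    func addDependency(_ coordinator: Coordinator) {
        guard !childCoordinators.contains(where: { $0 === coordinator }) else { return }
        childCoordinators.append(coordinator)
    }

    func removeDependency(_ coordinator: Coordinator?) {
        guard let coordinator = coordinator else { return }
        childCoordinators.removeAll { $0 === coordinator }
    }
}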

We also use the DI (Dependency Injection) pattern in our iOS development architecture: an object's dependencies are supplied externally rather than created by the object itself. We use DITranquillity, a lightweight but powerful framework that lets you configure dependencies in a declarative style.

How to implement it?

Let's break down our advanced iOS app architecture using a note-taking application as an example.

First, we create the skeleton of the future application and implement the protocols needed for routing.

import UIKit
 
protocol Presentable {
    func toPresent() -> UIViewController?
}
 
extension UIViewController: Presentable {
    func toPresent() -> UIViewController? {
        return self
    }
}
protocol Router: Presentable {
  
  func present(_ module: Presentable?)
  func present(_ module: Presentable?, animated: Bool)
  
  func push(_ module: Presentable?)
  func push(_ module: Presentable?, hideBottomBar: Bool)
  func push(_ module: Presentable?, animated: Bool)
  func push(_ module: Presentable?, animated: Bool, completion: (() -> Void)?)
  func push(_ module: Presentable?, animated: Bool, hideBottomBar: Bool, completion: (() -> Void)?)
  
  func popModule()
  func popModule(animated: Bool)
  
  func dismissModule()
  func dismissModule(animated: Bool, completion: (() -> Void)?)
  
  func setRootModule(_ module: Presentable?)
  func setRootModule(_ module: Presentable?, hideBar: Bool)
  
  func popToRootModule(animated: Bool)
}

Configuring AppDelegate and AppCoordinator

A diagram of the interaction between the delegate and the coordinators

In AppDelegate, we create a container for the DI. In the registerParts() method we add all of the application's dependencies. Next we initialize the AppCoordinator, passing it the window and the container, and call its start() method, thereby handing control over to it.

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    private let container = DIContainer()
    
    var window: UIWindow?
    private var applicationCoordinator: AppCoordinator?
    
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        
        registerParts()
        
        let window = UIWindow()
        let applicationCoordinator = AppCoordinator(window: window, container: container)
        self.applicationCoordinator = applicationCoordinator
        self.window = window
        
        window.makeKeyAndVisible()
        applicationCoordinator.start()
        
        return true
    }
 
    private func registerParts() {
        container.append(part: ModelPart.self)
        container.append(part: NotesListPart.self)
        container.append(part: CreateNotePart.self)
        container.append(part: NoteDetailsPart.self)
    }
}

The AppCoordinator determines which scenario the application should run. For example, if the user isn't authorized, the authorization flow is shown; otherwise the main application scenario starts. In the case of the notes application, we have one scenario – displaying the list of notes.
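For an app with several scenarios, start() might branch like this (authService and the flow-opening methods here are hypothetical; our notes app only ever calls openNotesList()):

override func start() {
    // Pick the flow based on the current session state.
    if authService.isAuthorized {
        openMainFlow()
    } else {
        openAuthFlow()
    }
}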

In the NotesListCoordinator below, we do the same as with the AppCoordinator, except that we pass a router instead of a window.

final class AppCoordinator: BaseCoordinator {
    private let window: UIWindow
    private let container: DIContainer
    
    init(window: UIWindow, container: DIContainer) {
        self.window = window
        self.container = container
    }
    
    override func start() {
        openNotesList()
    }
    
    override func start(with option: DeepLinkOption?) {
        
    }
    
    func openNotesList() {
        let navigationController = UINavigationController()
        navigationController.navigationBar.prefersLargeTitles = true
        
        let router = RouterImp(rootController: navigationController)
        
        let notesListCoordinator = NotesListCoordinator(router: router, container: container)
        notesListCoordinator.start()
        addDependency(notesListCoordinator)
        
        window.rootViewController = navigationController
    }
}

In NotesListCoordinator, we obtain the note-list screen's dependency using container.resolve(). Be sure to specify the type of the variable, so the library knows which dependency to fetch. We also set up transition handlers for the following screens. The dependency setup itself will be shown later.

class NotesListCoordinator: BaseCoordinator {
    private let container: DIContainer
    private let router: Router
    
    init(router: Router, container: DIContainer) {
        self.router = router
        self.container = container
    }
    
    override func start() {
        setNotesListRoot()
    }
    
    func setNotesListRoot() {
        let notesListDependency: NotesListDependency = container.resolve()
        router.setRootModule(notesListDependency.viewController)
        notesListDependency.viewModel.onNoteSelected = { [weak self] note in
            self?.pushNoteDetailsScreen(note: note)
        }
        notesListDependency.viewModel.onCreateNote = { [weak self] in
            self?.pushCreateNoteScreen(mode: .create)
        }
    }
}

Creating a module

Each module in an application can be represented like this:

Module scheme in iOS application architecture

The Model layer in our application is represented by the Provider entity. Its layout:

Provider scheme in the iOS app architecture

The Provider is the entity in our iOS app architecture responsible for communicating with services and managers to receive, send, and process data for the screen – e.g. contacting services to fetch data from the network or from the database.

Let's create a protocol for communicating with our provider, declaring the necessary fields and methods. We create a ProviderState structure, where we declare the data our screen will depend on. In the protocol we declare a currentState field of type ProviderState, its observable counterpart state of type Observable<ProviderState>, and methods for changing the current state.

Then we create an implementation of the protocol, named after the protocol plus "Impl". We mark currentState as @Published: this property wrapper lets us create an observable value that automatically reports changes. BehaviorRelay could do the same job, being both an observable and an observer, but its data-update flow was rather clumsy, taking three lines where @Published takes one. We also set the access level to private(set), because the provider's state should not be changed from outside the provider. state observes currentState and broadcasts changes to its subscribers, namely our future ViewModel. Don't forget to implement the methods we'll need while working on this screen.

struct Note {
    let id: Identifier<Self>
    let dateCreated: Date
    var text: String
    var dateChanged: Date?
}
 
protocol NotesListProvider {
    var state: Observable<NotesListProviderState> { get }
    var currentState: NotesListProviderState { get }
}
 
class NotesListProviderImpl: NotesListProvider {
    let disposeBag = DisposeBag()
    
    lazy var state = $currentState
    @Published private(set) var currentState = NotesListProviderState()
    
    init(sharedStore: SharedStore<[Note], Never>) {
        sharedStore.state.subscribe(onNext: { [weak self] notes in
            self?.currentState.notes = notes
        }).disposed(by: disposeBag)
    }
}
 
struct NotesListProviderState {
    var notes: [Note] = []
}

View-Model scheme in iOS development architecture

Here we create a protocol, just as for the provider, declaring fields such as viewInputData and events. ViewInputData is the data passed directly to our viewController. In the ViewModel implementation, we subscribe viewInputData to the provider's state, mapping it into the format the view needs with a mapToViewInputData function. We create an enum of events, defining everything that should be handled on the screen: view loading, button presses, cell selection, etc. We make events a PublishSubject, so we can both emit new elements and subscribe to handle each event.

protocol NotesListViewModel: AnyObject {
    var viewInputData: Observable<NotesListViewInputData> { get }
    var events: PublishSubject<NotesListViewEvent> { get }
    
    var onNoteSelected: ((Note) -> ())? { get set }
    var onCreateNote: (() -> ())? { get set }
}
 
class NotesListViewModelImpl: NotesListViewModel {
    let disposeBag = DisposeBag()
    
    let viewInputData: Observable<NotesListViewInputData>
    let events = PublishSubject<NotesListViewEvent>()
    
    let notesProvider: NotesListProvider
    
    var onNoteSelected: ((Note) -> ())?
    var onCreateNote: (() -> ())?
    
    init(notesProvider: NotesListProvider) {
        self.notesProvider = notesProvider
        
        self.viewInputData = notesProvider.state.map { $0.mapToNotesListViewInputData() }
        
        events.subscribe(onNext: { [weak self] event in
            switch event {
            case .viewDidAppear, .viewWillDisappear:
                break
            case let .selectedNote(id):
                self?.noteSelected(id: id)
            case .createNote:
                self?.onCreateNote?()
            }
        }).disposed(by: disposeBag)
    }
    
    private func noteSelected(id: Identifier<Note>) {
        if let note = notesProvider.currentState.notes.first(where: { $0.id == id }) {
            onNoteSelected?(note)
        }
    }
}
 
private extension NotesListProviderState {
    func mapToNotesListViewInputData() -> NotesListViewInputData {
        return NotesListViewInputData(notes: self.notes.map { ($0.id, NoteCollectionViewCell.State(text: $0.text)) })
    }
}

View scheme in iOS mobile architecture

In this layer, we configure the screen's UI and its bindings to the view model. The View layer is represented by a UIViewController. In viewWillAppear() we subscribe to viewInputData and pass the data to render, which distributes it to the appropriate UI elements.

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    let disposeBag = DisposeBag()

    viewModel.viewInputData.subscribe(onNext: { [weak self] viewInputData in
        self?.render(data: viewInputData)
    }).disposed(by: disposeBag)

    self.disposeBag = disposeBag
}

private func render(data: NotesListViewInputData) {
    var snapshot = NSDiffableDataSourceSnapshot<NotesListSection, NotesListSectionItem>()
    snapshot.appendSections([.list])
    snapshot.appendItems(data.notes.map { NotesListSectionItem.note($0.0, $0.1) })
    dataSource.apply(snapshot)
}

We also add event bindings, either with RxSwift or the basic way through selectors. 

    @objc private func createNoteBtnPressed() {
        viewModel.events.onNext(.createNote)
    }

Now that all the components of the module are ready, let's link the objects together. A module is a class conforming to the DIPart protocol. It mainly serves to maintain the code hierarchy, combining parts of the system into a single common class and registering some, but not necessarily all, components in the container. We implement the mandatory load(container:) method, where we register our components.

final class NotesListPart: DIPart {
    static func load(container: DIContainer) {
        container.register(SharedStore.notesListScoped)
            .as(SharedStore<[Note], Never>.self, tag: NotesListScope.self)
            .lifetime(.objectGraph)
        
        container.register { NotesListProviderImpl(sharedStore: by(tag: NotesListScope.self, on: $0)) }
            .as(NotesListProvider.self)
            .lifetime(.objectGraph)
        
        container.register(NotesListViewModelImpl.init(notesProvider:)).as(NotesListViewModel.self).lifetime(.objectGraph)
        container.register(NotesListViewController.init(viewModel:)).lifetime(.objectGraph)
        container.register(NotesListDependency.init(viewModel:viewController:)).lifetime(.prototype)
    }
}
 
struct NotesListDependency {
    let viewModel: NotesListViewModel
    let viewController: NotesListViewController
}

We register components with the container.register() method, passing our object, specifying the protocol through which it will communicate, and setting the object's lifetime. We do the same with all the other components.

Our module is ready – don't forget to add it to the container in the AppDelegate. Now let's go to the NotesListCoordinator, into the list-opening function. We take the required dependency through the container.resolve function (be sure to explicitly declare the variable's type), then create the onNoteSelected and onCreateNote event handlers and pass the viewController to the router.

 func setNotesListRoot() {
        let notesListDependency: NotesListDependency = container.resolve()
        router.setRootModule(notesListDependency.viewController)
        notesListDependency.viewModel.onNoteSelected = { [weak self] note in
            self?.pushNoteDetailsScreen(note: note)
        }
        notesListDependency.viewModel.onCreateNote = { [weak self] in
            self?.pushCreateNoteScreen(mode: .create)
        }
    }

Other modules and navigation are created following the same steps. In conclusion, we should say the architecture isn't without flaws. A couple of problems worth mentioning: changing one field in viewInputData forces the whole UI to update rather than just the affected elements, and the common flow for working with UITabBarController and UIPageViewController was underdeveloped.

November’22 Update

It's been 6 months since we released this article and mentioned the issues and weak spots above. We've done some work, and here are the improvements we've made:

  1. Now you don’t have to update the entire provider state when altering one field.
  2. We implemented UIPageViewController and UITabBarController to our architecture.

State

We've mentioned that we built the provider State with a custom property wrapper – RxPublished, an RxSwift alternative to Combine's Published. It wraps BehaviorRelay, so when we modified the State we sent an instance to the subject, and only then did the subject deliver it to its subscribers. But there was a case where we needed to update several state fields yet deliver the updated state only once the whole operation was completed.
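The article doesn't show RxPublished itself; a minimal sketch of such a wrapper around BehaviorRelay (the real implementation may differ) could be:

import RxSwift
import RxRelay

@propertyWrapper
final class RxPublished<Value> {
    private let relay: BehaviorRelay<Value>

    init(wrappedValue: Value) {
        relay = BehaviorRelay(value: wrappedValue)
    }

    var wrappedValue: Value {
        get { relay.value }
        set { relay.accept(newValue) }
    }

    // `$property` exposes the stream of values, like Published's projected value.
    var projectedValue: Observable<Value> {
        relay.asObservable()
    }
}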

We found a neat solution using an inout parameter and a closure. A function taking a parameter as inout writes the updated value back to the caller's variable once it completes. The solution is literally three lines (and saves A LOT of time):

  1. Copy the current state;
  2. Apply the closure;
  3. Assign the updated state to the subject.

func commit(changes: (inout State) -> ()) {
    var updatedState = stateRelay.value
    changes(&updatedState)
    // Deliver the fully updated state in a single emission.
    stateRelay.accept(updatedState)
}

State code before and after

UIPageViewController

Implementing it in the MVVM architecture made development quite easy. Check out this step-by-step tutorial:

  1. Make a module for the PageViewController.
  2. In the provider, prepare the data you'll need to configure the modules inside the UIPageViewController.
  3. Build the ViewModel as usual: map the provider state into the view state.
  4. Inject the screens' DI modules into the viewController via its initializer.

Please note that if you want to reuse modules, you should make sure a new instance is returned each time you resolve the module. To do that, use the Provider property (not to be confused with the module's provider): it returns a new instance each time the variable is accessed. Tip: use the SwiftLazy library by DITranquillity – a great alternative to the native lazy, with even better functionality, including the required Provider.

  5. Configure each screen in the render function with the required data. Here's an example:

class ViewController {
    // ….
    let someDependency: SomeModuleDependency
    let anotherDependency: AnotherModuleDependency
    // ….

    init(/* …. */) { }

    func render(with data: InputData) {
        someDependency.viewModel.setup(data.dataForSomeModule)
        anotherDependency.viewModel.setup(data.dataForAnotherModule)
        // …
        pageVC.setViewControllers([someDependency.viewController /* , …. */],
                                  direction: .forward, animated: false, completion: nil)
    }
}

UITabBarController

TabBarController now has its own coordinator, so we can configure a separate flow for each tab. By a flow we mean a coordinator-and-router pair. One thing to remember: add the child coordinators to the storage using addDependency and call their start() methods. Here's how to do it programmatically:

TabBarCoordinator:

private typealias Flow = (Coordinator, Presentable)

// …

override func start() {
    let flows = [someFlow(), anotherFlow()]
    let coordinators = flows.map { $0.0 }
    let controllers = flows.compactMap { $0.1.toPresent() as? UINavigationController }
    router.setViewControllers(controllers: controllers)
    coordinators.forEach {
        addDependency($0)
        $0.start()
    }
}

func someFlow() -> Flow {
    let coordinator = someCoordinator()
    let router = RouterImp(rootController: UINavigationController())
    return (coordinator, router)
}

As you can see, all the updates are easy and quick to implement in your mobile app architecture. We plan on adding custom popup support and more cool stuff.

Conclusion

With the creation of the iOS app architecture, our work became much easier. It's not so scary anymore to replace a colleague on vacation or take on a new project. Colleagues can look up existing solutions for this or that feature without puzzling over how to implement it so that it works properly with our architecture.

Over the past year, we have already managed to add shared storage, error handling for coordinators, and improved routing logic – and we aren't gonna stop there.

If you're interested in learning more about our iOS software development expertise, read WebRTC in iOS Explained. Creating an online conference app or introducing calls to your platform has never been this easy.