
How to Choose a Live Streaming Protocol? RTMP vs HLS vs WebRTC


When you stream video, delivery largely depends on which live streaming protocol is used. Each protocol performs differently in terms of latency, scalability, and device support. While there's no universal solution for every live streaming need, you're still free to pick the best match for your specific requirements.

How does streaming work?

For the end user, live streaming is all about a quasi-real-time experience, with only a screen and a camera standing between them and the source. In reality, there's much more going on backstage.

It does start with capturing whatever is about to go live on camera. It does end with playing the content on an end-user device. But in between, four more milestones have to be completed.

  1. Once even a byte of information has been recorded, video and audio get compressed with an encoder. In short, an encoder converts raw media material into digital information that can be displayed on many devices. It also reduces the file size from GBs to MBs to ensure quick data delivery and playback.
  2. The compressed data then gets ingested into a streaming platform for further processing.
  3. The platform transcodes the stream and packages it into its final format and protocol. The latter is basically the way data travels from one communicating system to another.
  4. To reach its end user, the stream is then delivered across the internet, in most cases via a content delivery network (CDN), which is a system of servers located in different physical places.

Aaand — cut! Here's where the user sees the livestream on their device. A long way to go, huh? In fact, this journey may take anywhere from less than 0.5 seconds to 1-2 minutes. This delay is called latency, and it varies from one protocol to another, as do many other parameters.

There are quite a few live streaming protocols to choose from, but we'll elaborate on the three most commonly used: RTMP, HLS, and WebRTC. In short, here's the difference:

[Image: Live streaming protocols (RTMP, HLS, WebRTC) comparison]

Now in detail.

RTMP

Real-Time Messaging Protocol (RTMP) is probably the most widely supported media streaming protocol from both sides of the process. It’s compatible with many professional recording devices and also quite easy to ingest into streaming platforms like YouTube, Twitch, and others. 

Supporting low-latency streaming, RTMP delivers data at roughly the same pace as cable broadcasting: it takes only about 5 seconds to transmit information. This is due to its firehose approach, which builds a steady stream of available data and delivers it to numerous users in real time. It just keeps flowing!

Yet this video streaming protocol is no longer supported by browsers. As a result, the stream has to be converted and transcoded into an HTTP-based technology, which prolongs the overall latency to 6-30 seconds.

HLS

HTTP Live Streaming, developed by Apple as a part of QuickTime, Safari, OS X, and iOS software, is a perfect pick if you stream to a large audience, even millions at a time. HLS is supported by all browsers and almost any device (set-top boxes, too). 

The protocol supports adaptive bitrate streaming, which provides the best possible video quality regardless of the connection, software, or device. Basically, it's key to the best viewer experience.
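To make this concrete, here's a minimal playback sketch using the open-source hls.js library, one common way to play HLS in browsers without native support. The element ID and stream URL are hypothetical placeholders:

```typescript
import Hls from "hls.js";

const video = document.querySelector<HTMLVideoElement>("#player")!; // hypothetical element
const src = "https://example.com/live/master.m3u8";                 // hypothetical stream URL

if (Hls.isSupported()) {
  const hls = new Hls();   // adaptive bitrate is handled for you:
  hls.loadSource(src);     // hls.js measures bandwidth and switches
  hls.attachMedia(video);  // between the quality levels in the playlist
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  video.src = src;         // Safari and iOS play HLS natively
}
```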

Usually, to cover a large, geographically spread audience stably and with the lowest latency, a CDN is used.

The only major drawback of HLS seems to be the latency — prioritizing quality, it may reach 45 seconds when used end-to-end. 

Apple presented a solution in 2019: Low-Latency HLS shrinks the delay to less than 2 seconds and is currently supported by all browsers, as well as Android, Linux, Microsoft, and, obviously, macOS devices, plus several set-top boxes and smart TVs.

WebRTC

RTC in WebRTC stands for “real-time communication”, which suggests this protocol is perfect for an interactive video environment. With the minimum latency possible (less than 0.5 seconds), WebRTC is mostly used for video conferencing with a small number of participants (usually six or fewer). We use it a lot in our video conferencing projects: check out the ProVideoMeeting and CirrusMED portfolios.

But media servers like Kurento allow scaling WebRTC to an audience of up to 1 million viewers. Yet a more popular approach is to use WebRTC for ingest and repackage the stream into the more scalable HLS.

Apart from providing the absolute minimum latency, WebRTC doesn't require any additional equipment for the streamer to start recording.
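For a feel of what this looks like in practice, here's a minimal sketch of the caller side using the standard browser WebRTC API. It assumes you provide your own signaling channel (a WebSocket here); WebRTC deliberately leaves signaling up to you:

```typescript
async function startCall(signaling: WebSocket): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // public STUN server
  });

  // Capture camera and microphone straight from the browser -- no extra equipment.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Exchange connection candidates with the peer via your signaling channel.
  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
  };

  // Create an SDP offer describing our media; the peer replies with an answer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ offer }));
}
```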

Conclusion

Thus, when choosing a live streaming protocol for your streaming goals, ask yourself these questions:

  • What kind of infrastructure am I willing to establish, or do I already have? What recording devices and software are at my disposal?
  • What kind of content do I want to deliver, and what latency is acceptable? How “live” do I want the streaming to be?
  • What system scalability do I expect? How many users do I guesstimate will watch the stream at a time?

One protocol may not satisfy all your needs. But a good mix of two can. Hit us up to get a custom solution to turn your ideas into action.


How Digital Video as a Technology Works


In this article, we'll try to explain what digital video is and how it works. We'll be using a lot of examples, so even if you want to run away before reading something difficult – fear not, we've got you. So lean back and enjoy the video explanation from Nikolay, our CEO. 😉

Analog and digital video

Video can be analog or digital.

All of the real-world information around us is analog: waves in the ocean, sound, clouds floating in the sky. It's a continuous flow of information that isn't divided into parts and can be represented as waves. It is analog information that people perceive from the world around them.

The old video cameras that recorded onto magnetic cassettes stored information in analog form. Reel-to-reel and cassette tape recorders worked on the same principle: magnetic tape passed over the player's magnetic heads, which allowed sound and video to be played back. Vinyl records were also analog.

Such recordings could only be played back strictly in the order in which they were recorded. Editing them was very difficult, and so was transferring them to the Internet.

With the ubiquity of computers, almost all video is now in digital format: zeros and ones. When you shoot video on your phone, the signal is converted from analog to digital and stored in memory; when you play it back, it's converted from digital back to analog. This lets you stream video over a network, store it on a hard drive, and edit and compress it.

What a digital video is made of

Video consists of a sequence of pictures or frames that, as they change rapidly, make it appear as if objects are moving on the screen.

Here's an example of how a video clip is made.

What is Frame Rate

Frames on the screen change at a certain rate. The number of frames per second is the frame rate, or framerate. The traditional standard for cinema is 24 frames per second, while high-frame-rate formats in movie theaters run at 48 or more.

The higher the number of frames per second, the more detail you can see in fast-moving objects in the video.

Check out the difference between 15, 30, and 60 FPS.

What is a pixel

All displays on TVs, tablets, phones, and other devices are made up of little glowing elements – pixels. Let's say each pixel can display one color (technically, different manufacturers implement this differently).

To display an image, each pixel on the screen has to glow a certain color.

Thanks to this screen design, each frame of a digital video is a set of colored dots, or pixels.

[Image: Schematic screen structure]

The number of such dots horizontally and vertically is called the picture resolution. A resolution is written as, for example, 1024×768: the first number is the number of pixels horizontally, the second vertically.

The resolution of all frames in a video is the same, and it is called the video resolution.

Let's take a closer look at a single pixel. On the screen it's a glowing dot of a certain color, but in the video file itself a pixel is stored as digital information (numbers). From this information, the device understands what color the pixel should light up on the screen.

What are color spaces

There are different ways of representing the color of a pixel digitally, and these ways are called color spaces. 

Color spaces are set up so that any color is represented by a point that has certain coordinates in that space. 

For example, the RGB (Red, Green, Blue) color space is a three-dimensional color space where each color is described by a set of three coordinates, each responsible for the red, green, or blue component.

Any color in this space is represented as a combination of red, green, and blue.
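In code, that's simply three numbers per pixel. A tiny illustration (assuming 8 bits per channel, values 0-255):

```typescript
// A pixel as a point in RGB space: one coordinate per base color.
type RGB = { r: number; g: number; b: number }; // each 0..255 in 8-bit color

const red:    RGB = { r: 255, g: 0,   b: 0 };
const white:  RGB = { r: 255, g: 255, b: 255 }; // all three at maximum
const orange: RGB = { r: 255, g: 165, b: 0 };   // a mix of red and green
```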

[Image: Classic RGB palette]

Here is an example of an RGB image decomposed into its constituent colors:

[Image: How colors in pictures mix]

There are many color spaces, and they differ in the number of colors that can be encoded with them and the amount of memory required to represent the pixel color data.

The most popular spaces are RGB (used in computer graphics), YCbCr (used in video), and CMYK (used in printing).

CMYK is very similar to RGB but has 4 base colors: Cyan, Magenta, Yellow, and Key (black).

RGB and CMYK spaces are not very efficient, because they store redundant information.  

Video uses a more efficient color space that takes advantage of human vision.

The human eye is less sensitive to the color of objects than it is to their brightness.

[Image: How human eyes perceive contrast]

On the left side of the image, the colors of squares A and B are actually the same; they only seem different to us. The brain makes us pay more attention to brightness than to color. On the right side, a strip of the same color connects the marked squares, so we (i.e., our brain) can easily tell that the squares are, in fact, the same color.

Using this feature of vision, we can encode a color image by separating the luminance information from the color information. Then, during compression, half or even three quarters of the color information can simply be discarded (by representing luminance at a higher resolution than color). A person won't notice the difference, and we save substantially on storing the color information.

We'll talk about exactly how color compression works in the next article.

The best-known space that works this way is YCbCr, along with its variants YUV and YIQ.

Here is an example of an image decomposed into its YCbCr components, where Y′ is the luminance component, and Cb and Cr are the blue-difference and red-difference chroma components.

[Image: YCbCr scheme]

It is YCbCr that is used for color coding in video. First, this color space allows color information to be compressed; second, it is well suited for black-and-white video (e.g., from surveillance cameras), since the color information (Cb and Cr) can simply be omitted.
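As an illustration, here's the standard full-range BT.601 formula for converting an 8-bit RGB pixel to YCbCr (the well-known textbook conversion, not code from any particular codec):

```typescript
// Full-range BT.601 conversion: Y carries brightness, Cb/Cr carry color.
function rgbToYCbCr(r: number, g: number, b: number): [number, number, number] {
  const y  = 0.299 * r + 0.587 * g + 0.114 * b;            // luminance
  const cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b;  // blue-difference chroma
  const cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b;  // red-difference chroma
  return [y, cb, cr];
}

// Grayscale pixels keep neutral chroma -- drop Cb/Cr and nothing is lost:
console.log(rgbToYCbCr(255, 255, 255)); // [255, 128, 128]
```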


What is Bit Depth

Bit depth is the number of bits used to encode the color of a single pixel. The more bits, the more colors can be encoded, and the more memory each pixel occupies; the more colors, the better the picture looks. For example, 8 bits per channel give 256 levels each of red, green, and blue, or about 16.7 million colors in total.

For a long time, the standard color depth for video was 8 bits (Standard Dynamic Range, or SDR, video). Nowadays, 10-bit or 12-bit depth (High Dynamic Range, or HDR, video) is increasingly used.

[Image: Comparison of SDR and HDR video bit depth]

Keep in mind that different color spaces encode a different number of colors with the same number of bits per pixel.

What is Bit Rate

Bit rate is the number of bits that one second of video occupies in memory. To calculate the bit rate of uncompressed video, take the number of pixels in a frame, multiply it by the color depth, and multiply that by the number of frames per second:

1024 pixels × 768 pixels × 10 bits × 24 frames per second = 188,743,680 bits per second

That's 23,592,960 bytes, 23,040 kilobytes, or 22.5 megabytes per second.

A 5-minute video like that would take up 6,750 megabytes, or 6.59 gigabytes, of memory.
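Here's the same arithmetic in code, following the article's formula (resolution × color depth × frame rate):

```typescript
// Raw (uncompressed) bit rate, using the numbers from the example above.
const width = 1024, height = 768;
const bitsPerPixel = 10; // color depth from the example
const fps = 24;

const bitsPerSecond = width * height * bitsPerPixel * fps; // 188,743,680
const mbPerSecond   = bitsPerSecond / 8 / 1024 / 1024;     // 22.5 MB/s
const fiveMinuteMB  = mbPerSecond * 5 * 60;                // 6,750 MB

console.log(bitsPerSecond, mbPerSecond, fiveMinuteMB);
```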

This brings us to why video compression methods appeared and why they're needed. Without compression it would be impossible to store and transmit that amount of information over a network; YouTube videos would take forever to load.

Conclusion

This was a quick introduction to the world of video. Now that we know what video consists of and the basics of how it works, we can move on to more complicated stuff, which will still be presented in a comprehensible way 🙂

In the next article I’ll tell you how video compression works. I’ll talk about lossless compression and lossy compression. 


How to estimate time and effort for a software development project as a developer

Estimating IT projects is a pain. Who hasn't made promises they couldn't keep, only to work overtime just to meet a deadline they set for themselves?

When I was starting out and tried to estimate my own time as a developer, I always underestimated. Every time, some work I hadn't accounted for would appear. Colleagues told me to multiply my estimates by 2, by 3, by the number Pi. It didn't improve my accuracy, though; it just added other problems, like having to explain where the high numbers came from.

15 years have passed since then. Over this time I've estimated more than 250 projects, gained a lot of experience, and now I'm willing to share my thoughts on the topic.

Hopefully, this article will improve the quality of the estimations you’re giving.

Why estimate?

No more than 29% of projects end in success, according to research conducted by The Standish Group in 2015. The other 71% either failed outright or broke the triple constraint: deadline, functionality, budget.

From these statistics we can assume that project estimation is often not what it should be. Does that make the process pointless? There's even a movement on the Internet inviting you to not estimate anything and just write code, so that whatever happens, happens (search for #noestimates).

Not having any estimations does sound appealing, but let me give you an example. Imagine that you come to a restaurant and order a steak and a bottle of wine but there are no prices on the menu. You ask a waiter: “how much?”, and he goes, “depends on how long it takes the chef to cook. Order, please. You’ll get your food, you’ll eat it and then we’ll tell you how much it cost”.

There can also be an Agile-like option: “The chef will cook your meal, and you'll be paying as he proceeds. Do this, please, until you're out of money. When you have no more money, the chef will stop cooking. Perhaps the steak won't be ready, or it will have just reached an edible state. If it's not edible, though... sorry, that's your problem”.

This is approximately how customers in the IT sphere feel when they're offered to start a job without an estimate.

In the example above we’d ideally like to get an exact price for the steak. At the very least, it’d be fine if we just got a price range. This way we can check whether we want to go to this restaurant, choose a cheaper one, go get a cheeseburger or stay home and cook a salad.

Going to a restaurant without any understanding of what to expect is not a decision a person in their right mind would make.

I hope I've convinced you that estimation is an important part of the decision-making process on any project. An estimate can land closer to or further from reality, but it's needed nevertheless.

Reasons for underestimation

Ignoring probability theory

Imagine the following situation. A developer is approached by a manager who would like to know how long it will take to finish a task. The developer has done something similar in the past and can give the “most probable” estimate. Let it be 10 days. There's a probability that the task will take 12 days, but that chance is lower than for 10 days. There's also a chance the task gets completed in 8 days, but this probability is lower as well.

It's often assumed that the estimate for a task or project follows the normal distribution (read more about it here). If you plot the distribution of estimates as a graph, you'll get this:

The X axis shows the estimate, while the Y axis shows the probability that the estimate turns out to be correct and the task consumes precisely that amount of time. In the center you can see the point of highest probability; it corresponds to our 10-day estimate.

The total area under the curve represents a probability of 100%. This means that if we go with the most probable estimate, we'll meet the deadline with a 50% chance (the area under the curve to the left of the 10-day estimate is half of the figure, hence 50%). So, estimating this way, we'd miss half of our deadlines.

This is only if the distribution of probabilities really follows the normal distribution. In that case, the probability of finishing the project earlier than the most probable estimate equals the probability of finishing it later. However, on real projects something usually goes wrong, and we finish later. Of course, a miracle can happen and we finish earlier, but what are the chances? In other words, the weight of negative possibilities is always higher than that of positive ones.

If we go with this idea in mind, the distribution will look like this:

To make this easier to read, let me present the same information as a cumulative graph, which shows the probability of finishing the project by a given date or earlier:

It turns out that if we take the “most probable” estimate of 10 days, the probability of completing the task within that period or earlier is less than 50%.

Ignoring the current level of uncertainty

As we work on a project or task, we keep learning new information. We get feedback from the manager, designer, tester, customer, and other team members, and this knowledge keeps accumulating. We don't know much about the project at the beginning; we learn more as we go; and only once the project is finished can we say exactly how long it took.

What we know directly affects how precise our estimation is.

Research by Luiz Laranjeira (Ph.D., Associate Professor at the University of Brasilia) also shows that the accuracy of a software project estimate depends on how clear the requirements are (Luiz Laranjeira, 1990). The clearer the requirements, the more accurate the estimate. Estimates are usually off because of the uncertainty embedded in the project or task, so the only way to make them more accurate is to reduce that uncertainty.

Considering this research and common sense: as we decrease the uncertainty on a task or project, we increase the estimation accuracy.

This graph is here to make it easier to understand. In reality, the most possible estimation may change as uncertainty decreases.

Dependency between estimation precision and the project stage

Luiz Laranjeira went on with his research and quantified how the estimation spread depends on the project stage (i.e., on the level of uncertainty).

If we take the optimistic, pessimistic, and most probable estimates (the optimistic estimate being the earliest possible completion date, the pessimistic the latest) and show how their ratios change over time from the start to the finish of the project, we get the following picture:

This is called a cone of uncertainty. The horizontal axis stands for the time between the start and the finish of the project. The main project stages are mentioned there. The vertical axis shows a relative margin of error in the estimation.

So, at the initial concept stage, the most probable estimate may differ from the optimistic one by 400%. When the UI is ready, the spread narrows to between 0.8x and 1.25x of the most probable estimate.

This data can be found in the table below:

| Life-cycle stage | Optimistic estimate | Pessimistic estimate |
|---|---|---|
| Initial concept | 0.25x | 4x |
| Business requirements (agreed definition of the product) | 0.5x | 2x |
| Functional and non-functional requirements | 0.67x | 1.5x |
| User interface | 0.8x | 1.25x |
| Thoroughly thought-out realization | 0.9x | 1.15x |
| Finished product | 1x | 1x |

It's very important to note that the cone doesn't narrow on its own as time goes by. For it to narrow, you need to manage the project and take action to lower the uncertainty. If you don't, you'll get something like this:

The green area is called a cloud of uncertainty. The estimation is subject to major deviation up to the very end of the project.

To reach the rightmost point of the cone, where there is no uncertainty, we'd have to create the finished product :). So, as long as the product isn't ready, there will always be uncertainty, and the estimate can't be 100% precise. However, you can improve estimation accuracy by lowering uncertainty: any action aimed at lowering uncertainty also narrows the estimation spread.

This model is used in many companies, NASA included. Some adapt it to account for volatility in requirements. You can read about that in detail in “Software Estimation: Demystifying the Black Art”.

What is a good estimate?

There are plenty of ways to answer this question. In practice, however, if the estimate is off by more than 20%, the manager has no room for maneuver. If it stays within roughly 20%, the project can be brought to a successful finish by managing functionality, deadlines, team size, etc. That sounds logical, so let's settle on this definition of a good estimate. Ultimately, though, the threshold has to be set at the organizational level: some are fine with the risk of a 40-50% deviation, while others consider 10% a lot.

So, if our estimate differs from the actual result by no more than 20%, we consider it good.

Practice. Estimating a project at various stages

Let's imagine that a project manager has approached you and asked you to estimate a feature or a project.

To start with, study the available requirements and figure out which definition stage of the life cycle the project is at.

What you do next depends on the stage you’re on:

Stage 1. Initial concept

If a manager approaches you and asks how long it will take to create an app where doctors will consult patients, you are on Stage 1.

When does it make sense to make estimations at this stage?

At the pre-sale stage, when you need to figure out whether the project is worth discussing further. All in all, it's better to avoid estimates at this stage and instead lower the uncertainty as soon as possible, then move on to the next stage.

What do you need in order to estimate at this stage?

Actual labor-time data from a similar finished project.

What tools are the most suitable for this stage?

  • An estimation by analogy

Estimation algorithm

Honestly, estimating a project at this stage is an impossible task. You can only look at how long it took to launch a similar project.

For example, this is how you could put your estimate into words: “I don't know how long this project will take, as I lack data. However, project X, which was similar to this one, took Y time. To give at least an approximate estimate, it's imperative to make the requirements clearer”.

If there’s no data from similar projects, then lowering the uncertainty and moving to the next stage is the only way to estimate here.

How to move to the next stage?

For this to happen, the requirements must be clarified. You need to understand what the app is for and what functionality it needs.

Ideally, one should have skills in gathering and analyzing requirements.

To improve that skill, I recommend reading “Software Requirements” by Karl Wiegers and Joy Beatty.

To gather preliminary requirements, you might use this questionnaire:

  • What's the purpose of the app? What problems will it solve?
  • Who is the target audience? (for the task above, that could be doctors, patients, and administrators)
  • What problems will each type of user solve in the app?
  • What platforms is the app for?

After figuring these things out, you'll have an image of the app in your head with all the necessary information. With that, we move on to Stage 2.

Stage 2. An agreed definition of the product

Here we have an understanding, although not a very detailed one, of what the app will and won't do.

When does it make sense to make estimations at this stage?

Again, at the pre-sale stage: right when you need to decide whether the task or project is worth doing, whether there's enough money, and whether the deadlines are workable. You need to check whether the value the project brings is worth the resources that have to be involved.

What do you need in order to estimate at this stage?

Quite a few finished projects and their estimates, OR extensive experience in the development area the project belongs to. Both combined would be even better!

What tools are the most suitable for this stage?

  • An estimation by analogy
  • A top-down estimation

Estimation algorithm

If there was a project like this before, the approximate estimation would be the time spent on that project.

If there is no data on similar projects, you need to split the project into its main functional blocks, then estimate each block by analogy with blocks done on other projects.

For example, for the app where doctors consult patients, we could get something like this:

  • Registration
  • Appointment scheduling system
  • Notification system
  • Video consultation
  • Feedback system
  • Payment system

You could estimate the “registration” block using a similar block from one project, and the “feedback system” block using one from another project.

If some blocks have never been done before or lack data, you can either weigh their labor time against other blocks or reduce the uncertainty and use the estimation method from the next stage.

For example, the “feedback system” module might seem twice as difficult as the “registration” module. Therefore, for the feedback, we could get an estimation twice as high as the registration.

Comparing one block against another is not very precise; it's best used when the blocks that have never been done make up no more than 20% of the blocks that do have historical data. Otherwise, it's just a guess.

After this, we sum the estimates of all blocks; the total is the most probable estimate. The optimistic and pessimistic estimates can be calculated using the coefficients for the current stage, x0.5 and x2 (check the coefficient table).
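Here's a sketch of that arithmetic. The block names come from the example above; the hours are entirely hypothetical:

```typescript
// Most probable estimate = sum of per-block estimates (hours, made up).
const blocks: Record<string, number> = {
  registration: 40, scheduling: 80, notifications: 24,
  videoConsultation: 120, feedback: 80, payments: 60,
};

const mostProbable = Object.values(blocks).reduce((sum, h) => sum + h, 0); // 404

// Stage-2 coefficients from the cone-of-uncertainty table: x0.5 and x2.
const optimistic  = 0.5 * mostProbable; // 202
const pessimistic = 2   * mostProbable; // 808
```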

Ideally, you should give your manager all three estimates and let them work with the range.

If the manager can't work with a range and asks for one single number, there are ways to get it.

How to calculate one estimate out of three is covered below, in the corresponding chapter.

How to move to the next stage?

Prepare a full list of requirements. There are quite a few ways to document them, but we'll look into a widely used one: User Stories.

We need to understand who will be using each block and what they’ll be doing with the blocks. 

For example, for the “feedback system” block, after gathering and analyzing requirements we could end up with these bullet points:

  • A patient can check all feedback about the chosen doctor
  • The patient can leave feedback for the doctor after a video consultation with him
  • The doctor can see feedback from the patients
  • The doctor can leave a comment on feedback
  • An administrator can see all feedback
  • The administrator can edit any feedback
  • The administrator can delete feedback

You will also need to collect and write down all non-functional requirements. To do that, use this checklist:

  • What platforms is it for?
  • What operating systems need to be supported?
  • What do you need to integrate with?
  • How fast is it supposed to work?
  • How many users at the same time can use the tool?

Clarifying all this moves you to the next stage.

Stage 3. The requirements are gathered and analyzed

At this stage we have a full list of what each user can do in the system, plus a list of non-functional requirements.

When does it make sense to make estimations at this stage?

When you need to give an approximate estimate for a project before starting work under the Time & Materials model. Task estimates from this stage can be used to prioritize tasks on the project, to plan release dates and the overall project budget, and to track the team's efficiency on the project.

What do you need in order to estimate at this stage?

  • The list of functional requirements
  • The list of non-functional requirements

What tools are the most suitable for this stage?

  • An estimation by analogy
  • A top-down estimation

Estimation algorithm

You need to decompose each task (split it into components). The smaller the components, the more precise the estimate will be.

To do this to the best of your ability, you need to lay out everything that needs to be done on paper.

For example, for our User Story that goes like “a patient can see all feedback about the chosen doctor”, we could get something like this:

We split the task here into three parts:

  • Create the infrastructure in the database
  • Create the DAL (data access layer) for data retrieval
  • Create the UI where the feedback will appear

If you can, write down the UI functionality and approve it with whoever asked for the estimate. It will eliminate lots of questions, make the estimate more precise, and be a real quality-of-life improvement.

If you want to improve your interface design skills, it’s recommended to read two books: “The Humane Interface” by Jef Raskin and “About Face. The essentials of interaction design” by Alan Cooper.

Then you need to imagine what exactly will be done for each task and estimate how long it will take. Here you have to calculate time, not guess it: you have to know what you will do to finish each subtask.

If there are tasks that take more than 8 hours, split them into subtasks.

The estimate you get after this can be considered optimistic, as it most likely assumes the shortest path from point A to point B, given that we haven't forgotten anything.

Now it's about time we thought about the things we've probably missed and corrected the estimate accordingly. A checklist usually helps here. This is an example of such a list:

  • Testing
  • Design
  • Test data creation
  • Support for different screen resolutions

After completing this list, we have to add the tasks we might have missed to the task list:

Go through each task and subtask and think about what could go wrong and what's been missed. Oftentimes this analysis reveals things without which even the best-case scenario can't happen. Add them to your estimate:

Even after you factor this in, your estimate will still be closer to the optimistic one than to the most probable one. If we look at the cone, the estimate will sit close to its lowest line.

The exception is when you've done a similar task before and can say with authority that you know how it's done and how long it takes. In that case your estimate can be called “most probable”, and it goes along the 1x line of the cone. Otherwise, your estimate is optimistic.

The other two estimates can be calculated with the coefficients for this stage: x0.67 and x1.5 (check out the coefficient table).

If we calculate the estimates for the example above, we get:

  • Optimistic estimate: 14 hours
  • Most probable estimate: 20 hours
  • Pessimistic estimate: 31 hours

How to move to the next stage?

By designing the UI. Creating wireframes would be the best way to go.

There are multiple programs for that but I’d recommend Balsamiq and Axure RP.

Prototyping is another huge topic that is not for this article.

Having a wireframe means that we’re on the next stage.

Stage 4. The interface is designed

Here we have wireframes as well as the full list of what each user can do in the system, plus the list of non-functional requirements.

When does it make sense to make estimations at this stage?

To produce an exact estimate for the Fixed Price model. Everything mentioned in the previous stage applies here as well.

What do you need in order to estimate at this stage?

  • Prepared wireframes
  • A list of functional requirements
  • A list of non-functional requirements

What tools are the most suitable for this stage?

  • An estimation by analogy
  • A top-down estimation

Estimation algorithm

The same as at the previous stage. The difference is in accuracy: with a designed interface you won't have to imagine as much, and the chance of missing something is lower.

How to move to the next stage?

By designing the app architecture and thoroughly thinking through the realization. We won't cover that option, as it is used quite rarely. That said, the estimation algorithm after the architecture has been thought through doesn't differ from the one at this stage; the difference, once again, is increased accuracy.

Retrieving one estimate from a range of estimates

If you have all three estimates ready, we can use Tom DeMarco's approach to arrive at a single one. In his book “Waltzing with Bears” he shows that the cumulative probability can be obtained by integrating the area under the curve (from the graph we had earlier). The original calculation template can be downloaded from here, or from here without registration. You insert your three numbers into the template and receive a list of estimates with their corresponding probabilities.

For example, for our estimates of 14, 20, and 31 hours, we'll get something like this:

You can choose any probability you deem decent for your organization, but I'd recommend 85%.
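If you don't have the template at hand, here's a rough sketch of the same idea (an assumption on my part, not DeMarco's exact model): treat the three estimates as a triangular distribution and read off the value for the confidence level you want:

```typescript
// Quantile of a triangular distribution with min/mode/max set to the
// optimistic, most probable, and pessimistic estimates.
function commitAt(opt: number, likely: number, pess: number, p: number): number {
  const range = pess - opt;
  const modeCdf = (likely - opt) / range; // chance of finishing by `likely`
  return p <= modeCdf
    ? opt  + Math.sqrt(p * range * (likely - opt))
    : pess - Math.sqrt((1 - p) * range * (pess - likely));
}

// For the 14 / 20 / 31-hour example, an 85%-confidence commitment:
console.log(commitAt(14, 20, 31, 0.85).toFixed(1)); // "25.7" hours
```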

Don’t know how to estimate? Speak up!

If you don't understand what you're being asked to do, or don't know how to implement the functionality you have to estimate, let your manager know. Give an approximate estimate if possible, and suggest actions that will make the estimate more precise.

For example, if you don't know whether the technology will work for the task, ask for some time to build a prototype that will either confirm your estimate or show what you've missed. If you're not sure the task is doable at all, say so from the beginning. These things need to be checked before you take their weight on your shoulders.

It's very important to give your manager this information; otherwise they may blindly trust you and have no idea that there's a chance of missing the deadline by 500%, or of not finishing the product at all with the given technology or requirements.

A good manager will always be on your side. You're in the same boat, and sometimes their career depends on whether you finish on time even more than yours does.

Doubts? Don’t promise

Many organizations and developers help their projects fail by making commitments too early on the cone of uncertainty. It's risky, as the actual result can range from 100% to 1600% of the optimistic estimate.

Efficient organizations and developers postpone commitments until the moment when the cone is narrower.

Usually this is characteristic of organizations at higher CMMI maturity levels: their actions for narrowing the cone are clearly defined and consistently followed.

You can see how estimation quality improved in U.S. Air Force projects as they moved to higher CMMI maturity levels:

There’s something to think about here. Other companies’ statistics confirm this correlation.

Even so, estimation accuracy can't be achieved with estimation methods alone. It's inextricably linked to the efficiency of project management, and it depends not only on developers but also on project managers and senior management.

Conclusion

  • It's nearly impossible to give a perfectly accurate estimate. You can, however, narrow the range in which the estimate will fluctuate: to do that, try to lower the level of uncertainty on the project
  • You can make estimates more accurate by splitting tasks into components. As you decompose things, you think through what you will do and how, in detail
  • Use checklists to lower the chance of missing something as you estimate
  • Use the cone of uncertainty to understand the range within which your estimate will most probably fluctuate
  • Always compare your estimates with the time actually spent on a task. It will help you improve your estimation skill, understand what you missed, and apply that knowledge going forward.

Useful books

There is a lot of literature on the topic, but I'll recommend two books that are must-reads.

  • Software Estimation: Demystifying the Black Art by Steve McConnell
  • Waltzing With Bears: Managing Risks On Software Projects by Tom DeMarco and Timothy Lister.