Smarter workflow to reduce costs

With TV channels becoming popular on streaming platforms, the cost of running an OTT service is increasingly appearing on CFOs' radars. Cuts to content or customer-experience investments are unlikely, so the pressure shifts to engineers to find smarter ways to run the services. In this article, I will look at the catch-up TV service and share my thoughts on how to reduce its associated CAPEX and OPEX.

Stay fit to survive

OTT streaming services have entered the main stage and are increasingly the preferred platforms for video consumption. What started as a habit of digital natives has spread: it is now not uncommon to see older generations who did not grow up in the era of ubiquitous technology watching video on their mobile phones or tablets. When my mom prefers to watch her favorite Netflix shows on her iPad instead of on her 55-in TV, that is a definite sign of the times.

As more consumers embrace streaming services and more content is on offer, viewing hours increase and the costs of running the OTT platform multiply. Facing fierce competition from the multitude of players in the field, service providers need to be more efficient with their technical expenditures to be able to invest in content and improve the customer experience.

One expense on every CFO’s radar is video distribution infrastructure cost. They are especially concerned that as the number of streams and viewership hours increase, the expansion, replacement, and maintenance costs will outpace revenue growth. This is particularly true for providers that offer both live TV and catch-up services.

When live TV first appeared on streaming platforms, there were not many TV channels and catch-up TV was not the “go-to” service that it is today. As consumer expectations for a replica of their STB service over OTT have grown, service operators have tackled the challenge by adding more hardware. During this time, the live TV and catch-up workflows in OTT have remained largely unchanged. If you look at the workflows (more on this below), the inefficiencies are not difficult to spot.

For pay-TV and OTT operators offering live TV and catch-up services, it’s time to re-think these workflows.

1 + 1 = Inefficiency

As of today, most streaming platforms address live TV and catch-up TV with two separate, independent workflows – one for live, the other for catch-up.

Let’s see how they work and where the inefficiency comes in.

In the example below (Figure 1), the live TV channel (channel1) has four programs scheduled for broadcast: Local news, Soap opera, Live Sports, and Local news.


Figure 1. Live and catch-up parallel workflows

At the time when Soap opera is being broadcast, the streaming workflows are as follows:

  • the linear TV stream goes through a live packager; Soap opera, like the other three programs, is encrypted with the same “live” key, which in this example is valid for 24 hours (the “live” key not only lets viewers watch what is currently being broadcast, but also enables other time-shift services within the 24-hour live window, for example replaying the previous program or starting over the ongoing one)
  • for Soap opera to remain available for catch-up after the 24 hours, it also goes through the catch-up packager, where it is encrypted with a different key (e.g., the Soap opera unique key)

As a result, the same video asset, Soap opera, is packaged twice and stored as two separate copies with two different keys: the channel1 24-hour live key and the Soap opera unique key.

When the next program, Live Sports, starts, it will also get its own key (e.g., the Live Sports unique key) for catch-up outside the 24-hour live window. So, in this example, the operator will have packaged and stored all four programs twice.
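The duplication in this example can be made concrete with a small sketch. The program names mirror Figure 1; the packager is modeled as a function that binds one stored copy of an asset to a key, and the keys are just random bytes standing in for real DRM content keys:

```python
# Hypothetical model of the parallel live + catch-up workflows from Figure 1.
import secrets

def package(asset: str, key: bytes) -> tuple[str, bytes]:
    """Model one packaging pass: one stored copy of `asset`, bound to `key`."""
    return (asset, key)

programs = ["Local news", "Soap opera", "Live Sports", "Local news"]

live_key = secrets.token_bytes(16)   # the single "live" key, valid for 24 h
stored_copies = []

# Workflow 1: the live packager encrypts every program with the channel's live key.
for p in programs:
    stored_copies.append(package(p, live_key))

# Workflow 2: the catch-up packager re-encrypts each program with its own unique key.
for p in programs:
    stored_copies.append(package(p, secrets.token_bytes(16)))

# Every program has been packaged and stored twice.
print(len(stored_copies))  # 8 copies for 4 programs
```

Doubling the copy count per program is exactly the cost multiplier the next paragraph describes.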

As you add more live TV channels and offer more catch-up, not only does storage cost increase, but, perhaps more significantly, the infrastructure and operational cost induced by the parallel packaging workflows will soar.

Is there a way to stop the waste of packaging and encrypting the same video asset twice? I think the answer is “Yes”!

Changing content key at program boundaries

All other considerations aside, an obvious solution is to have one workflow.

Looking at Figure 2 below, when Soap opera begins its broadcast, the packager requests a unique key (e.g., the Soap opera unique key) from the DRM server to encrypt it. When the broadcast finishes, Soap opera is immediately available for catch-up playback. When Live Sports begins, the packager requests another unique key to encrypt Live Sports. And so on.

In short, instead of using one “live” key for the whole channel and then re-encrypting each asset with its own unique key for later catch-up, each program is encrypted only once, using its own unique key, during the live broadcast.


Figure 2. One workflow for live and catch-up

Comparing Figure 2 with Figure 1, we see that there is only one copy of each program. More importantly, we only need one packager.

This leads to savings in both hardware and storage when you build or expand a catch-up TV service. We no longer need a linear asset for the 24-hour live window and then another asset for catch-up after that. We have only one asset, immediately usable for catch-up after the broadcast.
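The single workflow can be sketched the same way as before. The mock DRM-server call is an assumption of this illustration; the point is that the key rotates at each program boundary and every asset is packaged exactly once:

```python
# Hypothetical model of the single workflow from Figure 2: at each program
# boundary the packager asks a (mock) DRM server for a fresh content key, so
# every asset is encrypted once and is catch-up ready as its broadcast ends.
import secrets

def request_key(program: str) -> bytes:
    """Stand-in for a DRM-server call returning a unique content key."""
    return secrets.token_bytes(16)

programs = ["Local news", "Soap opera", "Live Sports", "Local news"]

stored_copies = []
for p in programs:                  # program boundary: rotate the content key
    key = request_key(p)
    stored_copies.append((p, key))  # one packaging pass, one stored copy

print(len(stored_copies))  # 4 copies for 4 programs, each with its own key
```

Half the packaging passes and half the stored copies of the two-workflow version, with no separate catch-up re-encryption step.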

Logical and easy enough…so what is holding us back?

Client storms

Unlike broadcast networks, where content is “pushed” to client devices, DRM in OTT obliges each client device to request licenses individually over HTTP. When it comes to live TV, this characteristic of DRM becomes its pitfall.

If each program is encrypted with its own key, then during the live broadcast, at the beginning of a program, all the client devices will try to get the new key (wrapped in a license) from the DRM server at about the same time. This generates high peaks of license requests from client devices, a phenomenon known as “client storms”. The dashboard in the operator’s service monitoring room may look like Figure 3 (below).


Figure 3. Client Storms

What makes client storms so dreadful for operators is that, when the high volume of license requests is compounded with issues of internet connectivity (stability and latency) and the DRM server’s capacity, some client devices may not get the license in time. This in turn leads to delays in playback on the device, or worse, playback failure.
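Some back-of-the-envelope arithmetic shows why the peak matters. The figures below are made up for illustration, not taken from any real deployment:

```python
# Toy illustration of a client storm: with per-program keys, every client
# requests a license at the program boundary (t = 0), so the burst in the
# first second dwarfs the DRM server's steady-state capacity.
clients = 100_000               # hypothetical concurrent viewers on the channel
server_capacity_per_s = 5_000   # hypothetical licenses/second the server can issue

# All clients hit the boundary together: the whole audience requests at once.
peak_requests = clients
backlog_seconds = peak_requests / server_capacity_per_s
print(backlog_seconds)  # 20.0 -> up to 20 s of queued requests
```

A 20-second queue is more than enough for client timeouts to fire, which is where the playback delays and failures come from.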

What happens next is likely to be the nightmarish situation where angry calls flood the customer support hotline, or even mass cancellation of subscriptions. Disruption to the viewing experience through no fault of the viewer is a serious problem.

Conquering the storms

Fortunately, there are ways to disperse the storms.

The idea in the video industry is that if a client device can get the new key in advance, client storms need not happen and there will be no disruption during the live broadcast. The DASH Industry Forum suggests signaling future keys ahead of time so that client devices can choose a random moment to request licenses. PlayReady’s scalable key rotation and Widevine’s group license feature both follow this line, allowing a key hierarchy where a single DRM license can unlock a group of channels or content.
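The effect of that random jitter is easy to simulate. The five-minute announcement window below is an assumption for illustration; the mechanism is simply that each client draws a uniform random second inside the window for its license prefetch:

```python
# Sketch of the jittered-prefetch idea: the next key is announced in advance,
# and each client picks a random moment inside the announcement window to
# request its license, spreading the load instead of spiking at the boundary.
import random

clients = 100_000   # hypothetical concurrent viewers
window_s = 300      # assume the next key is announced 5 minutes ahead

requests_per_second = [0] * window_s
for _ in range(clients):
    requests_per_second[random.randrange(window_s)] += 1

# Peak load is now roughly clients / window_s (~a few hundred per second here),
# instead of the entire audience arriving in the first second.
print(max(requests_per_second))
```

Compared with the storm scenario, the same audience now arrives at a rate a well-provisioned DRM server can absorb comfortably.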

As a high-performing and scalable multi-DRM solution, Irdeto Control is in a good position to leverage what has already been achieved and bring the problem of client storms to a close. Recently I have been experimenting with letting Irdeto Control handle future keys without requiring changes to packagers or players.

To sum up, I think changing the content key at program boundaries can stop packaging and encrypting TV channels twice and thus significantly cut the cost of a catch-up service. Are you also interested in this topic? Get in touch and let’s think together!