Industry Challenges with Live Media Transport over Cloud Infrastructure
Author: Tor Blomdell
Moving back to Sweden from Silicon Valley, I was struck by the state of transport, and especially the inefficiency of last-mile transport in the Stockholm area. We have invested billions in e-commerce, online retailing, and great analytics systems, and yet it all comes down to a poor consumer experience when the last mile is broken. The same holds true for media: we’ve invested billions in smart transcoder farms, advertising solutions, machine learning, and graphics systems in the cloud, but last-mile transport into and out of the cloud is broken. Each vendor uses different delivery mechanisms, we rely on old formats never intended for the state of the internet, and we haven’t updated our talent pool to meet the new realities. This will have to change in order to unleash the next generation of media experiences.
Media is getting more and more competitive, with larger brands moving into original content and consumers asking for more immersive and personalized TV experiences. Further, traditional distribution with linear TV over fixed infrastructure is moving to general-purpose networks and OTT delivery, both to manage cost and to give Pay-TV providers more flexibility. This fundamental shift requires media companies across the value chain to move from traditional linear distribution to more flexible workflows that enable tailoring of content and personalized ad insertion. It requires more content to be captured from Tier 1 events, through remote production and in more formats that increase audience engagement, as well as more efficient ways of producing Tier 2 and Tier 3 sports, so that providers can offer even more content, capture the value of sports bundles, keep the current structure, and monetize fixed and mobile broadband investments.
The shift in workflows away from on-prem, dedicated appliances and long release cycles is under pressure as cloud and OTT providers redesign their systems and architectures to radically change the cost, elasticity, and pace of innovation for media distribution, and to improve the ability to personalize content. Because media, and especially live media, is peak-intense, building a system that can scale elastically for peaks has been key for consumer distribution. This change in distribution is now influencing broadcasters and media production, as content must be ingested into cloud properties, in more formats, and using different protection mechanisms. Hence, functions such as storage, media processing, and play-out are all being offloaded to the cloud in order to lower the cost of running services and increase the agility of innovation.
The shift to cloud compute and the elasticity of the cloud all sounds great; however, the reality of the workflow transition is a huge step for many broadcasters and media companies, as current employees’ skill sets and workflows are suddenly changing. In our experience, there are multiple areas that need to be addressed as an industry in order to unlock the next level of consumer experiences and innovation:
- Change of control – shifting loads and high-value content from on-prem data centers to data centers and cloud facilities outside of the broadcaster’s direct control
- Security – securing my content when moving into and out of the cloud, and keeping control inside the cloud. Who manages the risk if the content gets hacked, and how do I know whether it has been tampered with?
- Workflow consistency – ensuring consistency in management, orchestration, and control while working with your current on-prem infrastructure as well as cloud infrastructure in a hybrid way. And how do you limit the number of appliances that you have to put out at stadiums where rack space is limited?
- Talent – the current talent pool knows how to manage on-prem equipment, but moving into Docker, Kubernetes, certificates, REST APIs, auto-scaling, real-time dashboards, and IP configuration of virtual private networks in data centers requires a different skill set. Further, attracting this talent requires a different brand, and the ability to teach the cloud-native population about media-specific requirements and strict SLAs for time-critical applications
- Cost – there are always questions about the cost of cloud compute: what does it mean for 24/7 services at scale, won’t the cost eat up the benefits, and how do I secure my profitability as I move workflows to the cloud?
- Quality and availability – finally, availability and architecture for a redundant 24/7 service are very different from current operations, and the architecture needs to change in order to build resilient, high-quality workflows for critical content
Looking at the future of media transport, it’s clear that virtualization and software-based solutions on top of current internet infrastructure will rapidly capture a large share of the market, as agility, efficiency, and elasticity become key drivers for managing the profitability pressure in the market. That doesn’t mean that current managed infrastructure will disappear; it still has benefits in terms of quality, reliability, and control. However, companies need to adapt their workflows to the application and work with new technology to build out systems tailored for flexibility across infrastructures, fix the broken last-mile delivery, and ensure that immersive content can be delivered across properties in a cohesive way.