Net Insight and the evolution of video compression

In a world where best-effort is the norm, we created the Nimbra to enable ultra-reliable, broadcast-grade, long-distance media networking across any infrastructure.


As unlikely as it may seem, the story of video compression and lossless video compression actually reaches back almost a century to 1929 and the first conversations around the topic of inter-frame compression. First proposed for analog video, this approach – which involves saving a key image and then only saving changes to that image as they occur from frame to frame – is still employed in digital video today.
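The idea can be illustrated with a toy sketch, in which only a key frame and the per-frame pixel changes are stored. This is a deliberately simplified model: real codecs use motion-compensated prediction rather than raw pixel diffs.

```python
# Toy illustration of inter-frame compression: store a key frame once,
# then store only the pixels that change in each subsequent frame.
# Real codecs use motion-compensated prediction, not raw pixel diffs.

def encode(frames):
    """Return a key frame plus per-frame change maps {pixel_index: new_value}."""
    key = list(frames[0])
    deltas = []
    prev = key
    for frame in frames[1:]:
        changes = {i: v for i, (p, v) in enumerate(zip(prev, frame)) if p != v}
        deltas.append(changes)
        prev = list(frame)
    return key, deltas

def decode(key, deltas):
    """Rebuild every frame from the key frame and the stored changes."""
    frames = [list(key)]
    current = list(key)
    for changes in deltas:
        for i, v in changes.items():
            current[i] = v
        frames.append(list(current))
    return frames

frames = [[1, 1, 1, 1], [1, 1, 2, 1], [1, 1, 2, 3]]
key, deltas = encode(frames)
assert decode(key, deltas) == frames  # lossless round trip
print(deltas)  # only the changed pixels are stored: [{2: 2}, {3: 3}]
```

When frames change little from one to the next, as in most video, the change maps are far smaller than the frames themselves, which is exactly the property inter-frame compression exploits.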

As video production and distribution have evolved to become key pillars of a huge global media & entertainment industry, so have the underlying video compression technologies and standards. The development of the Discrete Cosine Transform (DCT), the image/video compression algorithm proposed in the early 1970s, was a significant milestone. But even more critical was the development of the first international video compression standard: first published in 1984, H.120’s primary application was video conferencing.

It is arguably with the publication of H.262 / MPEG-2 in the mid-’90s that video coding matured in the sense that we understand it today. Since that time the frequency of new video compression standards, and updates to those standards, has increased steadily. Without a doubt, the most important single development of the last 20 years has been the publication of H.264/MPEG-4 AVC in 2003. Still in widespread use around the world today, H.264 delivered a sufficiently flexible video compression standard for high-quality digital video.

Whilst H.264 continues to be a global benchmark, recent years have witnessed an unprecedented flurry of innovation around video compression technology. Building on the foundations of H.264, H.265 (High-Efficiency Video Coding) is geared towards production in the new, very high-quality formats of 4K/UHD and 8K. JPEG 2000 video compression, one of the first codecs to combine lossless encoding with very low latency, remains in widespread use. Then there is JPEG XS, a visually lossless video compression standard that is expected to become integral to remote production and streaming.

Simultaneously, we have witnessed the emergence of ultra-fast broadband and fiber internet services that – in theory at least – will dramatically reduce the need for video compression techniques. But the reality is that the global outlook is still extremely varied, and in some emerging markets it’s likely to be several years before there is widespread access to ultra-high-speed domestic internet. In the meantime, it is certain that new 5G services will help to deliver more consistent high-quality video to users on the move – wherever they might be.

In the course of this paper, we will explore the role of lossless video compression with particular reference to the age of streaming. We will see how JPEG XS, in particular, has already been incorporated into leading Net Insight product lines. There will also be coverage of how we perceive video compression developing in the near future for applications including remote production and 5G-based contribution. In particular, it is anticipated that 5G will herald new opportunities for direct connectivity via IP that eliminate the need for complex and latency-adding processing.

Maintaining its long tradition of providing video compression technologies that support the complete spectrum of video production, Net Insight will continue to be at the forefront of video technology for broadcast and streaming. Accordingly, this paper concludes with a look at some new and emerging video compression techniques and technologies and considers how they might impact video compression during the next few years.

Video Compression Standards – Overview

There are many different video compression algorithms, formats, standards, and codecs, but the basic goal remains the same: to deliver either better image quality at the same compressed bitrate, or a lower compressed bitrate at the same image quality.

Most codecs use “lossy” video compression techniques, which means that redundant spatial and temporal information is discarded when a video is compressed. Lossy compression typically means ratios of 50:1 up to 100:1. At these ratios the compression can become visible, but the result remains perfectly adequate for many applications.

Visually “lossless” video compression is used when the goal is to reduce file and stream sizes while keeping picture quality visually indistinguishable from the original source. It typically achieves lower compression ratios, in the range of 10:1 to 20:1.
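To put these ratios into perspective, a quick back-of-the-envelope calculation shows what they mean in bitrate terms. The figures below assume 1080p at 50 fps with 10-bit 4:2:2 sampling, a common contribution format; real encoder output varies with content.

```python
# Illustrative bitrates for the compression ratios quoted above.
# Assumes 1080p50, 10-bit, 4:2:2 sampling (a common contribution format).

width, height, fps = 1920, 1080, 50
bits_per_pixel = 10 * 2  # 10-bit samples, 2 samples per pixel in 4:2:2
uncompressed_bps = width * height * fps * bits_per_pixel

def compressed_mbps(ratio):
    """Compressed bitrate in Mbit/s for a given compression ratio."""
    return uncompressed_bps / ratio / 1e6

print(f"uncompressed:             {uncompressed_bps / 1e6:.0f} Mbit/s")
print(f"visually lossless (10:1): {compressed_mbps(10):.0f} Mbit/s")
print(f"lossy (50:1):             {compressed_mbps(50):.1f} Mbit/s")
```

The uncompressed stream comes to roughly 2 Gbit/s, a 10:1 visually lossless ratio brings it down to around 200 Mbit/s, and a 50:1 lossy ratio to roughly 40 Mbit/s, which is why the choice of ratio is ultimately a trade-off between picture quality and bandwidth cost.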

A particular advantage of the more recent video compression technologies is that they are less prone to quality loss through generation: earlier technologies, such as MPEG, accumulate visible degradation when content passes through many encoding cycles.

Commonly used lossless video compression techniques

Here is a brief overview of the compression technologies most commonly used in professional media for contribution, production, and distribution.


JPEG XS

JPEG XS is a visually lossless video compression standard. Part 22 of the SMPTE ST 2110 suite defines a standardized way of transporting JPEG XS compressed video in IP workflows. The codec provides very low latency and low complexity, particularly in comparison with H.264 or H.265, and as such is likely to play a major role in REMI (remote integration model) productions and live event streaming.

The JPEG XS video compression standard has been designed to enable efficient deployment on several platforms, such as FPGA, ASIC, CPU, and GPU. Typical compression ratios are up to 10:1 for 4:4:4, 4:2:2, and 4:2:0 images, although higher ratios can be accommodated according to the image type and application requirements. There is also extensive support for RAW Bayer, RGB, and other pixel formats.

JPEG 2000

The JPEG 2000 video compression standard uses a technique based on wavelet transforms. This enables images to be compressed in both lossy and visually lossless modes. The intra-frame nature of JPEG 2000 allows every frame to be encoded independently. In comparison, inter-frame encoding formats (such as MPEG-4 video compression) need to work with Groups of Pictures (GOP), which require a longer processing time. This makes JPEG 2000 video compression ideal for critical low-latency contribution and remote production applications.
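The latency difference between intra-frame and GOP-based coding can be sketched with a simplified buffering model: an intra-frame codec can emit each frame as soon as it is captured, while an inter-frame codec with B-frames must buffer frames for reordering within a GOP. The figures are illustrative only; real encoder pipelines add further processing delay.

```python
# Simplified latency model: intra-frame codecs (JPEG 2000, JPEG XS) buffer
# roughly one frame, while inter-frame codecs buffer frames for reordering
# across a GOP. Illustrative figures only, not real encoder measurements.

def frame_buffer_latency_ms(buffered_frames, fps):
    """Latency contributed by holding `buffered_frames` frames at `fps`."""
    return buffered_frames * 1000 / fps

fps = 50
print(f"intra-frame (1 frame buffered): {frame_buffer_latency_ms(1, fps):.0f} ms")
print(f"inter-frame (12-frame GOP):     {frame_buffer_latency_ms(12, fps):.0f} ms")
```

At 50 fps, a single buffered frame contributes 20 ms, whereas a 12-frame GOP contributes 240 ms before any encoding work has even been counted, which is why intra-frame codecs dominate latency-critical contribution links.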

Like the original JPEG standard (1992), JPEG 2000 has come to be regarded as robust to bit errors caused by noisy communication channels. This capability is attributed to the coding of data in small independent blocks. The standard is also notable for its flexible file formats (JP2 and JPX), which facilitate the management of color-space information, metadata, and interactivity in networked applications. In addition, there is extensive support for HDR, which has become increasingly popular in film and high-end TV production.

MPEG-4 / H.264

The MPEG-4 video compression standard (also known as H.264/AVC) was standardized in 2003 and remains the most widely adopted codec for streaming. It builds on the concepts of earlier standards such as MPEG-2 and offers better compression efficiency and greater flexibility in compressing, transmitting, and storing video. As demand for higher video resolutions continues to grow, further efficiency gains beyond MPEG-4 video compression are required.

ST 2110 (standards suite)

ST 2110 is not a video compression standard, but the suite does contain specifications for transporting both compressed and uncompressed video, and it is increasingly integral to broadcast-center new builds and refurbishments. It also has the practical and economic advantage of connectivity over standard COTS IT switches.

Within a facility, uncompressed video is predominantly used. Connecting different facilities, stadiums, and other venues with uncompressed streams consumes a large amount of bandwidth, which is why effective compression is needed.

Networks and equipment supporting SMPTE ST 2110 can carry uncompressed video over IP. The suite also accommodates compression: ST 2110-22 specifies the carriage of JPEG XS. In addition, other specifications can be used to transport compressed video alongside ST 2110 content, such as JPEG 2000 using VSF TR-01, MPEG-4 using SMPTE ST 2022-2, and JPEG XS using VSF TR-07. The choice of video compression technique is determined by the cost and availability of bandwidth.


HEVC / H.265

HEVC (High-Efficiency Video Coding), or H.265, is the successor to H.264/MPEG-4 AVC. In most implementations it roughly halves the compressed bitrate for the same image quality, and it also supports 8K resolution. Developed collaboratively by more than a dozen organizations worldwide, HEVC uses integer discrete cosine (DCT) and discrete sine (DST) transforms with block sizes from 4×4 up to 32×32, in contrast to H.264/AVC’s use of DCT with 4×4 and 8×8 block sizes.

While its improved compression capabilities are widely acknowledged, its slower-than-expected adoption to date has been attributed in some quarters to an opaque licensing scheme.

Managing unmanaged networks

Network capacity is not yet limitless or ubiquitous. The global picture remains highly varied in terms of access to high-speed internet and mobile networks. Although investment in, and implementation of, new networks is intensifying, it is likely to be many years before fast and robust connectivity is achieved on a global level. Consequently, there is an ongoing need to adapt and process content to suit different network types, with the implication that different solutions will be employed for different scenarios.

The need for different networks depends very much on the use case: Production, Contribution, or Distribution.

Video compression formats like JPEG 2000 and JPEG XS, and emerging codecs like H.266 (MPEG-I Part 3, also known as Versatile Video Coding, or VVC), can extend network capacity to manage 4K UHD and 8K UHD streams over IP – all while keeping latency ultra-low and maintaining visually lossless video compression quality.

Media networking needs to adapt to different scenarios like high/low latency, high/low bandwidth (equating to low/high cost), reliable/unreliable networks, and whether the production is high-end (Tier 1) or at the lower end (Tier 3).

Unmanaged networks (the public internet) combined with the public cloud can transport and process content in high quality, at the cost of higher latency, over lower-bandwidth links. This serves the contribution and distribution markets well, provided the appropriate media transport and video compression technology and systems are in place.

Some production environments can accept higher latency, shifting the transport to cloud-based infrastructure and using Automatic Repeat Request (ARQ) transport – which is an error control mechanism for achieving reliable transmission of data over an unreliable link – over the public Internet or unmanaged IP. We believe it is beneficial if this can be done in one system with easy, uniform management.
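The ARQ principle can be sketched with a minimal stop-and-wait model: the sender retransmits a packet until the lossy link delivers it. Real media-transport ARQ uses selective repeat with retransmission buffers and assumes acknowledgements can also be lost, but the error-control idea is the same. All names below are illustrative.

```python
import random

# Minimal stop-and-wait ARQ sketch over a simulated lossy link.
# Simplification: only forward loss is modelled; acknowledgements are
# assumed to arrive loss-free. Real ARQ schemes are selective-repeat.

def lossy_send(packet, loss_rate, rng):
    """Simulate an unreliable link: return the packet, or None if dropped."""
    return None if rng.random() < loss_rate else packet

def arq_transfer(packets, loss_rate=0.3, max_retries=50, rng=None):
    rng = rng or random.Random(42)  # fixed seed for a reproducible demo
    received, retransmissions = [], 0
    for seq, payload in enumerate(packets):
        for _ in range(max_retries):
            delivered = lossy_send((seq, payload), loss_rate, rng)
            if delivered is not None:
                received.append(delivered[1])  # receiver ACKs; sender moves on
                break
            retransmissions += 1               # no ACK: send the packet again
        else:
            raise RuntimeError(f"packet {seq} lost after {max_retries} tries")
    return received, retransmissions

data = [f"frame-{i}" for i in range(5)]
out, retries = arq_transfer(data)
assert out == data  # reliable delivery over an unreliable link
print(f"delivered {len(out)} packets with {retries} retransmissions")
```

The cost of this reliability is latency: every retransmission adds at least one round trip, which is why ARQ-based transport suits workflows that can tolerate higher delay.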

Our products support multiple video compression algorithms

Although this area of development is still in its relative infancy when viewed in a historical industry context, it is increasingly apparent that cloud-based solutions are going to form the standard foundation for IP production and distribution. This means that transport solutions will need to handle high-quality, low delay, multi-camera productions for high-end productions spanning sport, studio shows, live events, and more.

The technologies specified for these kinds of demanding productions need to handle a large number of signals with less compression than today’s MPEG-4/HEVC solutions. The ideal solution also needs to support a variety of video compression formats for maximum flexibility, including uncompressed, JPEG XS, and JPEG 2000, as well as MPEG-4 and HEVC.

Net Insight incorporates advanced video encoding in both the Aperi and the Nimbra products. The encoding schemes used are JPEG2000, H.264/MPEG-4, JPEG XS, and H.265/HEVC.

The Nimbra and Aperi platforms also support uncompressed transport for HD 1080p and 4K UHD. Net Insight focuses on open standards such as ST 2110 to ensure the widest and most robust interoperability. This also means it is possible to combine uncompressed or lightly compressed video with other video compression algorithms in the same system, allowing for more advanced and potentially more cost-efficient workflows. These workflows could also interconnect multiple production processing sites.

IP networking has revolutionized cloud contribution and distribution, with the SMPTE standards at the core of new live workflows, distributed workflows, and cloud-based operations. Video compression will continue to advance to enable even greater flexibility, scalability, and accessibility, allowing users to transport high-bandwidth 4K and 8K content over cost-effective COTS Gigabit Ethernet networks.


EBU selects the Nimbra platform for the Eurovision fiber network

The European Broadcasting Union (EBU) is the world’s largest alliance of public service media organisations, and its Eurovision fiber network is one of the world’s largest Nimbra networks.

Since 2004, when the EBU selected Net Insight’s Nimbra platform, the EUROVISION Fibre Network (FiNE) has evolved with Net Insight’s latest products and features.

The EUROVISION satellite and fibre network is one of the largest and most rock-solid in the world. It delivers more than 80,000 hours of programming every year, the majority of which is live sports. Its undisputed reputation for flexibility and reliability is reflected in the prestigious events the EUROVISION network regularly carries.

“We continue our solid relationship with Net Insight for the development of our network since we aim to be the standard-bearer for QoS and reliability around the world.”


What is video compression?

Video compression is the process of converting digital video into a format that takes up less capacity when it is stored or transmitted. Video compression formats (in the form of an algorithm) do this by shrinking the total number of bits needed to represent a given image or video sequence.

What is video compression used for?

Until network bandwidth is increased dramatically and ubiquitously to carry large volumes of uncompressed video there will be a need for video compression. The aim of compression is to maximise quality and efficiency while minimising cost.
