Table of Contents
Broadcasting Live Events in UHD
The Journal

 
The current issue of the SMPTE Motion Imaging Journal is now available in the Digital Library.

Additional Articles for Online Reading Are Only Available in the Digital Library!
July 2015

Hot Button Discussion
Broadcasting Live Events in UHD
By Michael Goldman 

Much of the conversation regarding the broadcast industry’s next great leap forward into the world of Ultra High Definition (UHD) has centered on how broadcasters will build or rebuild their wider infrastructures on IT-based foundations capable of handling the high-bandwidth data that the UHD broadcast paradigm requires. Less debated are the nuances of the front end of the UHD transition: image capture. This is largely due to the belief that ultra-high-resolution cameras have become so common that capture shouldn’t be much of an issue. But that notion is simplistic, according to many broadcast professionals, because it references modern digital cinematography camera systems with high-resolution imaging sensors, none of which are particularly applicable to conversations involving the broadcast of live events, particularly where sports and action are concerned. Figuring out how to shoot and broadcast that kind of content is crucial to broadcasters because it is live content (sporting events, concerts, breaking news, and the like) that modern, IT-based streaming services like Netflix are not addressing. That means such content remains the province of major broadcast entities as the UHD era dawns, and they need to shoot such events so that the images translate well on UHD televisions configured for watching 4K-resolution movies with a variety of other image improvements: greater dynamic range, higher frame rates, and better color, among other things.
 
Klaus Weber, worldwide product marketing manager for imaging products at Grass Valley and an active participant in the EBU’s Beyond HD initiative, spoke on this subject at the SMPTE Technical Conference last October and wrote a paper on it, which was published in the April 2015 online edition of the SMPTE Motion Imaging Journal. He suggests that the image-capture requirements of live UHD broadcasts “present a complete new challenge to the market.” Weber cautions that, despite incredible daily innovation inside the labs of virtually every major technology manufacturer in this space, there will be, in his view, no quick, easy, or complete camera solution to this challenge in the foreseeable future. Rather, the industry will have to learn to juggle the twin arts of compromise and flexibility when solving the problems posed by major live events slated for UHD broadcast.

“Remember, we are talking about cameras for live productions, so let’s say the main focus is on cameras that have a cable in-between the camera head and the camera base station,” Weber explains. “We are not talking about camcorders here or digital cinematography cameras used for cinematography applications—only for live environments. And these cameras are to be used for live productions in UHD. That means we are supposed to consider that UHD means—first of all, a higher pixel count. Yet, on the other hand, in the total UHD standard, it is much more than just a higher pixel count. We have other requirements like higher dynamic range, higher frame rate, extended color gamut, possibly higher bit depth, and so on. These things mean we want essentially better pixels. The problem is, the idea of better pixels is completely opposite the idea of more pixels, and right now, it is not possible to combine them [on a digital camera’s imaging sensor]. You can’t [accommodate] both ideas at the same time in the camera—it doesn’t exactly work that way.”
 
Weber adds that this dichotomy is important given the nature of the content that broadcasters need to capture and broadcast live, particularly live sports. He elaborates that there is a suite of options and compromises that broadcasters therefore have to evaluate in deciding what camera systems or configurations to pursue for such events if they mean to shoot them for UHD broadcast.

“If you want true, native UHD, then you need four times more pixels than what is available in an HD camera—double the amount horizontally and double the amount vertically,” he explains. “So what are your possibilities? You can keep the size of the pixels as they are in an HD camera, and then your imager will get four times larger. Then you basically come to what they do with digital cinematography cameras with larger imagers. Having three large imagers with a prism beam splitter is not practically manageable, so that means keeping the large pixels on a true 4K imager, and that means a single-imager camera. We have had this for quite a while with digital cinematography. The problem is, for many live events, this has been proven to require certain compromises, such as needing to use film lenses or having limited zoom ranges because of the large PL-mount lenses, which give you very short depth of field. And that kind of depth of field is not usable in many live events.”
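Weber’s pixel arithmetic is easy to verify. The Python sketch below works through the two extremes he describes: keeping HD-sized pixels, which quadruples the imager area, versus keeping the 2/3-in. imager, which quarters each pixel’s area. The raster sizes are the standard HD and UHD-1 figures; everything else is back-of-envelope illustration, not any specific camera design.

    # Back-of-envelope check of the "four times more pixels" arithmetic.
    hd_h, hd_v = 1920, 1080        # HD raster
    uhd_h, uhd_v = 3840, 2160      # UHD-1 raster: double in each axis

    hd_pixels = hd_h * hd_v        # 2,073,600 pixels (~2.1 MP)
    uhd_pixels = uhd_h * uhd_v     # 8,294,400 pixels (~8.3 MP)
    print(uhd_pixels / hd_pixels)  # 4.0

    # Option 1: keep the HD pixel size -> the imager area grows 4x,
    # pushing toward large single-sensor designs (and PL-mount optics).
    # Option 2: keep the 2/3-in. imager -> each pixel's area shrinks
    # to 1/4, with the sensitivity cost Weber describes next.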
 
In other words, the first potential compromise involves relying on a true 4K imaging sensor exclusively to give the broadcast good sensitivity and dynamic range, but at the cost of significant optical problems on a UHD telecast. Weber calls this “not a preferred solution for most, if not all, live productions.”
 
Alternatively, he continues, “you can make your pixels four times smaller, and squeeze four times as many of them onto the same 2/3-in. imager as we currently use in HD. If we do that, though, our pixel performance gets much lower because you need four times more light to generate the same amount of signal charge. In other words, your sensitivity goes down by about two F-stops, and actually, if you look at it in more detail, you will realize that since some parts of the pixels cannot be made smaller, the area available for collecting light will actually be less than one fourth for each pixel. That means the pixel performance will be even lower. There are some cameras out today that are trying this approach, but they are not usable for many kinds of live productions.”
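The two-stop figure follows directly from the area arithmetic, since each F-stop represents a halving of light. A short sketch makes the point, with the fixed per-pixel overhead Weber mentions modeled by an assumed (made-up) fraction to show why the real loss exceeds two stops:

    import math

    # An ideal quarter-area pixel collects 1/4 the light: log2(4) = 2 stops.
    print(math.log2(4))  # 2.0

    # If some pixel structures cannot shrink, the photosensitive area drops
    # below 1/4. The 15% overhead here is an assumption for illustration,
    # not a measured sensor figure.
    overhead = 0.15                     # assumed fixed fraction of an HD pixel
    hd_light = 1.0 - overhead           # photosensitive area, HD pixel
    uhd_light = 0.25 - overhead         # same overhead, quarter-size pixel
    print(math.log2(hd_light / uhd_light))  # ~3.1 stops lost, not 2.0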

Next, Weber says, broadcasters can opt “to make your pixels more simple. Instead of having five transistors per pixel, you can take out two of the transistors in every pixel, and create more space for your photo-diode by removing those two transistors, giving you simpler CMOS pixels.”
 
Naturally, he continues, there is a cost to this approach as well: the loss of the ability to use a global shutter. “That means you go to a rolling shutter, and up until now, at least, a rolling shutter has never been accepted for high-end applications, because it introduces a lot of artifacts, as we have seen from consumer cameras and phone cameras, and those artifacts are traditionally not acceptable for broadcast.”


Yet another methodology being introduced to the market is the notion of keeping pixel size and imager size as they are today on HD cameras, but using three full 2/3-in. HD progressive imagers in an RGB camera. “This approach provides more than an HD image, closer to 3K in the red, green, and blue channels, and then [via software], you do a kind of up-conversion to 4K,” Weber explains. “This does not give you native 4K resolution, but it allows you to have the same sensitivity as a regular HD camera with better resolution, much closer to a native 4K imager. In our work [at Grass Valley], we have found this permits dynamic range close to 15 F-stops, which is the level required to perform HDR operations for the UHD standards.”
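For context on the 15 F-stop figure: a sensor’s dynamic range in stops is commonly estimated as the base-2 logarithm of the ratio between its full-well capacity and its noise floor. The electron counts below are assumptions chosen to land near 15 stops, not published Grass Valley specifications.

    import math

    full_well_e = 40000.0   # full-well capacity in electrons (assumed)
    noise_e = 1.2           # noise floor in electrons RMS (assumed)

    stops = math.log2(full_well_e / noise_e)
    db = 20 * math.log10(full_well_e / noise_e)
    print(f"{stops:.1f} stops ~= {db:.1f} dB")  # 15.0 stops ~= 90.5 dB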
All of these approaches, by their nature, involve some level of compromise regarding what it means to capture a “UHD” image for a live event. However, doesn’t it seem logical that they are just temporary stops on the way to “true 4K” live broadcast cameras that, given the importance of this type of programming and the pace of the industry’s recent technological progress, will eventually solve these problems once and for all? One might expect so, but don’t count on a solution any time soon, Weber suggests. The nature of the problem, he says, is fundamentally different from those addressed by other technologies, and that limits the pace at which a solution can likely be invented.
 
“Maybe we will get that higher sensitivity [in UHD that we have in HD], but it won’t be soon,” he insists. “If you look back over the last 20 years or so at how fast the development in sensitivity has been in terms of imaging technology, it took us between 5 and 10 years just to double it. And as I’ve said, native 4K images with four times more pixels need four times the light. So you have to figure it will be a time frame of at least 10 to 20 years until we can figure out how to compensate for that, not one or two years. This is not the same kind of problem as with processing speed, or RAM memories, or hard-drive capacities, which can double every 12 to 18 months. That is simply not the case with imaging technology. So, unless someone invents an entirely new kind of imaging technology, which I don’t expect, it will take us at least five, if not 10 to 20, years before we can compensate for this sensitivity problem. And that means we will have to live with these compromises for quite a while.”
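Weber’s timeline is simple compounding: recovering a 4x sensitivity deficit means two doublings, and history, in his telling, suggests each doubling takes 5 to 10 years. A few lines make the range explicit:

    doublings_needed = 2  # 4x more light needed = 2 doublings (2^2)
    for years_per_doubling in (5, 10):
        print(doublings_needed * years_per_doubling, "years")  # 10, then 20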

Long enough to be well into the UHD era, at least, Weber suggests. But if that is the case, what does he suggest broadcasters do in the meantime in terms of producing live sporting events in UHD?
“I think that different applications will require different cameras, or possibly a mix of cameras—two or maybe even more different kinds of cameras for one production in many cases,” he suggests. “For example, for a soccer match or basketball or American football, or any sport where you shoot wide-angle shots of the complete field, and then lots of closeups, you might want different cameras for those different types of shots. The wide-angle shots would get some benefit from native 4K images, because of the small details available in the picture, and there are native 4K imagers that might give you that extra benefit in the static resolution and with the higher pixel count. The sensitivity in those shots might not be too much of a problem, because even if you need to open up your lens iris to the maximum position, you will still have good depth of field because it is a wide-angle shot. And rolling shutter might not be a big problem, because the wide-angle shot means less movement—relatively static shots.
 
“On the other hand, looking at the other camera positions, which do the closeups using large-zoom-range lenses, to show faces or fights or emotions of people and so on—those shots do not benefit from a really high native 4K pixel count, because resolution is not the most important topic on a closeup,” he continues. “But high depth of field is important on those shots, and you can only get that from a highly sensitive camera that lets you close down your lens iris to a mid position, such as F5.6, F8, or something like that. This is where a solution that keeps [the imaging sensor the same as HD] and up-converts to 4K would outperform the native 4K imager for the live application.
 
“So you will need to make a decision which camera technology is offering the better solution for the specific application. And for achieving the best possible results, you might well need a mixture of both kinds of systems, with the idea being that native 4K imagers will be best for wide-angle shots and more sensitive cameras with larger pixels will be better for other camera positions and doing closeups.”
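Weber’s depth-of-field point above can be made concrete with the standard thin-lens approximation, DoF ≈ 2·N·c·d²/f², valid when the subject is well inside the hyperfocal distance. The focal length, subject distance, and circle of confusion below are illustrative assumptions for a 2/3-in. broadcast camera, not figures from the article:

    def dof_meters(focal_mm, f_number, subject_m, coc_mm=0.011):
        # Approximate total depth of field (thin-lens, near-field case).
        # coc_mm: circle of confusion; ~0.011 mm is a common assumption
        # for a 2/3-in. sensor.
        f = focal_mm / 1000.0
        c = coc_mm / 1000.0
        return 2 * f_number * c * subject_m ** 2 / f ** 2

    # A hypothetical 100 mm closeup framed from 20 m away:
    for n in (2.0, 5.6, 8.0):
        print(f"f/{n}: ~{dof_meters(100, n, 20):.1f} m")
    # f/2.0: ~1.8 m, f/5.6: ~4.9 m, f/8.0: ~7.0 m -- hence the need for
    # enough sensitivity to stop down to a mid-position iris.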

Meanwhile, the industry as a whole is pushing ahead with all kinds of initiatives, large and small. It has been widely reported, for example, that Japanese broadcaster NHK is planning to broadcast the 2020 Olympic Games from Tokyo not in 4K but in 8K, or what it calls Super Hi-Vision. As reported recently in Newswatch, NHK conducted test broadcasts of Women’s World Cup soccer matches in 8K using the Ikegami 8K field production camera system, which debuted at NAB earlier this year. But Weber says that initiative involves “an entirely different kind of shooting than what people are used to seeing in HD, or even in 4K,” and, in any case, it is not something that will be part of routine broadcasting methodology by 2020, let alone shortly afterward. He therefore differentiates between the coming UHD paradigm shift and the industry’s experiments with 8K: the former, he suggests, will eventually happen routinely, while the latter will remain a niche product, at best, for many years to come.

“Very likely, by 2020, NHK will not be using 2/3-in. imagers to shoot 8K,” he says. “They will be using cameras with likely larger imagers [in the direction of the Ikegami system’s single 33-million-pixel Super 35mm CMOS sensor]. But 8K requires a completely different kind of shooting than what people are used to seeing in HD, or even 4K. It would have to include more wide-angle shots [because of shorter depth of field], and more static shots with less movement [because of higher resolution]. Plus, the 8K image has to be seen from a much shorter viewing distance to have any real impact—in other words, on much larger viewing screens. That means it would be a different workflow, a different look, and a different shooting style from today’s HD, or even 4K. I think we are a long way from deciding whether the market is even really looking for that or would accept that type of shooting [for live sports]. That makes it a different subject from today’s UHD, which, specifically, means four times today’s HD. UHD will permit people to still shoot in an HD style, without changing to different shooting techniques or more wide-angle shots and fewer closeups and cuts.
 
“But, to do the kind of shooting they do today [in HD], for the time being, people will need to accept some kind of compromise when it comes to [what kind of camera systems or imagers] they use. The question is, what is the best compromise at this moment in time, or any moment in time. Right now, I believe that for getting the best possible images from all the different camera positions, the best compromise is probably to have a mix of cameras.”
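Weber’s remark that 8K only pays off at much shorter viewing distances follows from visual acuity: a viewer who resolves about one arcminute stops seeing individual pixels at a distance proportional to pixel size. The sketch below works through that standard rule of thumb (acuity arithmetic, not figures from the article):

    import math

    ARCMIN = math.radians(1 / 60)  # ~1 arcminute, typical 20/20 acuity

    def distance_in_picture_heights(vertical_lines):
        # Distance at which one pixel subtends one arcminute.
        return 1 / (vertical_lines * ARCMIN)

    for name, lines in (("HD", 1080), ("UHD-1", 2160), ("8K", 4320)):
        print(f"{name}: ~{distance_in_picture_heights(lines):.1f} picture heights")
    # HD: ~3.2, UHD-1: ~1.6, 8K: ~0.8 -- 8K only pays off on very large
    # screens or from very close, as Weber suggests.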

News Briefs

ATSC 3.0 Cleveland Tests
TV Technology recently ran an in-depth column by Bob Kovacs detailing the latest advances toward the future ATSC 3.0 standard through an ongoing series of transmission tests in the Cleveland area. Kovacs reports that officials from GatesAir, LG, and Zenith have been participating in the tests and are predicting a candidate standard for ATSC 3.0 by the end of this year. They base that prediction on tests with an experimental, high-power ATSC 3.0 transmitter owned by Tribune Broadcasting in a Cleveland suburb. Since May, what is now being called the GatesAir/LG/Zenith Futurecast proposal has been tested in head-to-head comparisons with ATSC 1.0 for several hours a day. This is the second such comparison project, following a similar series of tests in Madison, Wisconsin, last October, and Kovacs reports that the Cleveland tests were configured to take into account lessons learned from that first go-round, with information collected to date from more than 75,000 data points under “far more challenging conditions than those experienced in Madison, including tall downtown buildings, as well as transmission near a large body of water (Lake Erie).” The article states that the results all around are promising, so much so that “the most robust Futurecast signal can be received even if noise exceeds the signal level by 1.2 dB.” Furthermore, the Futurecast ATSC 3.0 proposal, utilizing OFDM modulation and HEVC encoding, can reportedly allow a 6 MHz broadcast channel to carry about 26 Mbits/sec of data efficiently.
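As a sanity check on the reported figures, dividing the claimed payload by the channel bandwidth gives the effective spectral efficiency (a rough measure only; real OFDM throughput depends on pilots, guard intervals, and FEC overhead):

    payload_mbps = 26.0   # reported Futurecast payload
    channel_mhz = 6.0     # U.S. broadcast channel bandwidth
    print(f"~{payload_mbps / channel_mhz:.1f} bits/s/Hz")  # ~4.3 bits/s/Hz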

Technicolor Birthday
As part of the ongoing 100th anniversary celebration of the birth of the Technicolor Motion Picture Corporation and, by extension, of the historic technical and creative motion-picture processes that followed, American Cinematographer magazine recently posted an exclusive interview with authors James Layton and David Pierce about what they learned about the company’s birth and importance while researching and writing their new book about Technicolor’s golden years, The Dawn of Technicolor, 1915-1935, recently published by George Eastman House. In the article, the authors discuss their decision to focus the book exclusively on the era from the company’s birth in 1915 through the time when Eastman Kodak’s 35mm color motion-picture film eventually rendered Technicolor’s famed three-strip color cameras obsolete. They also talk extensively about the many challenges and false starts the company experienced along the way to perfecting the three-strip process that eventually revolutionized cinema, and about the importance of the business and sales talents of the company’s founder, Dr. Herbert Kalmus, in keeping funding flowing during the lean years until those technical processes could be perfected. Layton and Pierce also discuss more detailed nuggets from their book, such as how Technicolor crews and Hollywood crews interacted on set, who the great Technicolor cameramen were, and what the lessons of Technicolor’s first 20 years of innovation hold for today’s digital wizards in Hollywood, among other topics.
 
HPA Honors for Leon Silverman
The important efforts of Hollywood post-production executive Leon Silverman to advance the post industry’s role in the filmmaking and technology sectors of the entertainment business over more than three decades will be celebrated in November by the Hollywood Post Alliance (HPA), when the organization presents one of its highest honors, the HPA Lifetime Achievement Award, to Silverman at the HPA Awards Gala at the Skirball Center in Los Angeles. HPA and SMPTE Executive Director Barbara Lange recently announced the award, stating it was timed to coincide with the 10th anniversary of the HPA Awards for the purpose of “shining a light on Silverman’s role in the organization, his significant contributions to the post-production industry, and his unfailing support of the vision of so many filmmakers.” Silverman was central to the HPA’s very existence: he co-founded the organization in 2002 along with a coalition of other post-production executives and has served as its president ever since. He is currently general manager of the Walt Disney Studios Digital Studio, and previously served as president of LaserPacific Media Corporation and, simultaneously, as Vice President of Entertainment Imaging and Director of Strategic Business Development in the Entertainment Imaging Services unit of Kodak after that company acquired LaserPacific. Silverman also serves as Governor for SMPTE’s Hollywood Region, is a SMPTE Fellow, an Associate Member of the American Society of Cinematographers, and an Affiliate Member of the American Cinema Editors, and was recently invited to join the Academy of Motion Picture Arts and Sciences.
 
You're receiving this email because you are a Member or have expressed an interest in Society of Motion Picture and Television Engineers - SMPTE and/or HPA. Please Note: If you unsubscribe below, you will no longer receive ANY SMPTE or HPA email.

You may unsubscribe if you no longer wish to receive our emails.

Society of Motion Picture and Television Engineers | 3 Barker Avenue | White Plains | NY | 10601