straight talk on the technical realities of the Red camera
There is a lot of hype right now about Red camera outperforming high-end HD cameras and even 35mm film cameras. We're going to tell you why we think it's just that -- hype.
The Red camera proselytizers have taken advantage of rampant confusion and misinformation in a nascent digital world. Even for engineers it's difficult to keep track of today's specs and formats, and for producers and directors, it's truly daunting. We tend to seek out a simple shorthand to explain it all, and the fallback has become the size of the number in front of the K.
"4K is bigger than 2K, so 4K must be better." It only stands to reason.
But in truth it's much more complicated than that, and the number before the K is almost meaningless without a lot of other requisite information.
The Red camera takes advantage of this Big-K phenomenon by severely sacrificing the overall quality of the image just to increase the number before the K. The Red camp then proclaims that they've beaten the other cameras.
Again, let us reiterate that we are not saying Red is a bad camera. Also, let us be clear that we are not saying that misleading use of specs is necessarily coming from the official Red representatives as much as from its proselytes.
We think that the Red One camera is very impressive, especially for the price, and certainly beats many or most HD cameras out there by a large margin in all sorts of ways. We'd also like to separate what we're talking about here from subjective taste. Red's literature says "just look at the results," and we agree with them. If you like the look of it, that's all that matters -- we're not refuting your taste. But after testing cameras and viewing uncompressed images on professional equipment, we preferred some other high-end HD (specifically F23 and Genesis) and 35mm film to Red. That's just our taste.
But, in this article, we'll put those subjective analyses away and tell you why we think that the TECHNICAL SPECS alone do not assert, as some would claim, that Red beats 35mm film or cameras like Genesis and F23/F35. In fact, these specs assert quite the opposite. This article is about mastering images for THEATRICAL CINEMA, not for TV or internet or any other such medium.
So we've agreed with the Red camp: you can't go by the specs, you have to look at the footage. But, there ARE people out there touting the Red camera's supremacy based on specs alone, and that's what we're addressing. So let's see what you CAN know about a camera just by the specs.
This article is not meant to show that high-end HD cameras and film have more RESOLUTION than Red One cameras -- it's meant to show that there is some overstated hype out there about the Red camera, and that some digital cameras that we like (F23 and Genesis) see and record more IMAGE INFORMATION THAT IS IMPORTANT FOR CINEMA than Red One does.
There is a glut of aggressive and cacophonous information and misinformation out there right now about the incredibly complicated world of digital imaging. Many companies and individuals have a stake in dumbing things down so that there is a single word that they can yell over the din, to get your attention. We do not think that the merits of a camera or format can be summed up in one utterance, like "4K."
You can't tell that a camera is better or worse by one number (just as you can't tell that Red is better or worse from "4K") and you need to know a lot more about a camera's imaging to evaluate it. For every big number like "4K" that the Red camp throws out, we could throw back numbers like these about some other high-end HD digital cinema cameras (again, F23/F35 and Genesis):
We agree that these two specs don't necessarily mean anything on their own (believe us, we can recite all the reasons that Red camp will use to jump on these two boasts -- and they're right), but we're just showing you that a big number doesn't prove a camera is better, because each camera can have its own big number. So, let's drop the battle for big numbers and get into some real information.
Even though this article is mostly about how resolution count is not the way to determine which camera gathers a technically better picture for cinema, we do need to touch on resolution and sharpness.
Preliminarily, we would like to point out that the Red camp's definitions of the terms used in their specs seem to us to shift elusively (for example "resolution" in one moment refers to sheer count of photosites on the image sensor, then in the next to "effective resolution" in real-world tests). It's hard to pin them down, but, without going into details here (see their literature), the Red techies themselves, upon being confronted with hard numbers, back down from 4K and say that the camera is "effectively 3.2K." High-end HD cameras, on the other hand, don't have a confusing spec: their "effective" resolutions match their published resolutions. The high-end HD cameras that we will compare the Red to (like Genesis, F23, and F35) are 1920x1080, which is 1.9K.
Just about every feature film you see projected in the cinema today (even if it was shot on film) was mastered in 1.9K or 2K digital files. 1.9K and 2K are effectively interchangeable, firstly because the numbers are simply so close as to not matter, and secondly because most "2K" films are scanned for full academy aperture and only use 1828 pixels across for the usable image, whereas 1.9K projects use the whole aperture for image (so, 1.9K can be more than 2K when both are recorded to film). So, whether it's 1.8K or 1.9K or 2K doesn't really matter -- they're all about the same. The bottom line is, even films shot on film and displayed on film look perfectly sharp on a giant movie screen even at 1.8K (less than 1.9K).
PERCEPTUAL SHARPNESS and RESOLUTION are not the same thing at all. For an in-depth explanation of this, look up "Modulation Transfer Function" on Google or search Panavision's web site for a video we enjoyed of John Galt explaining it. What matters for us here is that extremely high "resolution" in the strict sense of the word is only useful for things like spy satellites and microfiche. That kind of resolution is for when you want to look very closely at (or magnify) a small still image, not when you want to sit far away from a big moving image, like you do in a movie theater, or even when watching a TV. For the purposes of cinema (and not spy satellites), once RESOLUTION has exceeded a reasonable threshold, then PERCEPTUAL SHARPNESS does not effectively increase with increased RESOLUTION.
We feel (and you may disagree) that 35mm film and high-end HD have already well exceeded that resolution threshold -- after all, you can't see pixels in the cinema, and the images look sharp and crisp. We also feel that cinema is about richness of an image, and we would prefer, once the resolution threshold has been exceeded, to capture an image with more richness and depth over an image with more superfluous resolution but less color information.
Resolution, whether measured in "effective resolution" (like MTFs) or in nominal resolution (sheer count of pixels or photosites), is only a measure of one aspect of image information. If you have resolution but no color or luminance information, you don't have an image. You could build a 1-bit camera -- one bit per pixel -- with fantastic resolution, but it wouldn't perceptually look like a proper image. It would do great in resolution tests (shooting test patterns) and in pixel-count comparisons, but it would look terrible as an image. Every pixel would be black or white. Not only would there be no color in our imaginary 1-bit camera, there'd be no gray either -- just black or white. Any good imaging system needs to record color and luminance information to have an image at all. But how much?
Luminance information and color information are effectively the same thing for our purposes, because a color image is created by the luminance of several component colors. That means that COLOR is the relative value and LUMINANCE is the absolute value of these components. (Even formats that record more information about luminance than they do about color do the same thing, because, even in those formats, luminance is an absolute value and color is a relative value.)
How much of this information do you need to make an image of cinema quality? How much resolution and how much color information? Motion picture imaging, which has traditionally been done on 35mm, has a history of recording images of incredible breadth of color and luminance information. This breadth is necessary for the image to look rich and beautiful on the screen, and it's also necessary for the image to be adjustable without degradation in normal color grading (color correction). When light comes in through the taking lens, how much digital data needs to be recorded about it to store a 35mm-quality image? Well, a standard has been developed for this -- a standard that digitally represents what 35mm film can do.
There is only one standard for file formats that is used industry-wide for theatrical cinema mastering -- it's the one kind of file that is ingested for color-grading, the one sort of file that is used for film scans, the file type that visual effects houses work with, the one file type that is sent to film recorders. It's been the standard since at least the early 90's, hasn't changed, and isn't in the process of changing -- Kodak invented the standard to be the digital equivalent of film, and we think they did a great job. It's called a DPX file (interchangeable for our purposes with a Cineon file). These files, we agree with Kodak, are just like film. Film has 3 layers: one for red, one for green, one for blue. And each pixel in a DPX file has three pieces of information ("channels") in it: one for red, one for green, one for blue. How much information in these channels? 10-bits per channel; that's 30-bits per pixel. Kodak determined that that's how much information you need to get the full breadth and depth of film data, and it's what's been used and continues to be used as the standard for cinema.
Also, although DPX files can be any pixel dimension, the standard has been and continues to be 2048 across (again, of the 2048, 1828 are usually used for the final image area).
To get all the information of film, you need 1.8K pixels, you need 3 color channels, and you need 10-bits of data per channel.
Note that Kodak could have made any specs they felt were required here to fully achieve the quality of 35mm film: for example, they could have prescribed a file that has more resolution and less color/luminance information. Here's one they could have used: 4K pixels and 7.5-bits-per-pixel -- that would have actually been the exact same file size as the spec they did choose, and they even could have said the word, "4K." But the standard that Kodak determined necessary and that is still today the full-quality standard of professional cinema imaging is 2K and 30-bits-per-pixel. You need resolution (lots of pixels) and you also need color depth (lots of bits-per-pixel) to get a full 35mm film quality image. Additionally, the consensus amongst imaging professionals is that it's so important to protect the full 10-bits-per-channel of color data, that most professional color correction engines do 16-bit calculations on 10-bit files just to ensure that there will be no rounding errors.
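The equal-file-size claim above is easy to check with arithmetic. Here is a quick sketch; we assume the common full-aperture 2K scan dimensions of 2048x1556, with the hypothetical 4K frame doubling both axes:

```python
# Verify the hypothetical trade-off: a 4K frame at 7.5 bits per pixel
# carries exactly as many bits as a 2K frame at 30 bits per pixel.
# (2048x1556 is the common full-aperture 2K scan size; 4K doubles both axes.)
px_2k = 2048 * 1556
px_4k = 4096 * 3112  # 4x the pixel count of 2K

bits_2k = px_2k * 30   # 10 bits x 3 channels per pixel
bits_4k = px_4k * 7.5  # the hypothetical low-depth alternative

assert bits_2k == bits_4k  # identical file sizes, very different images
```

Same number of bits on disk either way -- the difference is only in how they are split between resolution and color depth.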
So, as you can see, color depth is information that is just as important and just as coveted as resolution for cinema imaging.
Let us also mention here that DPX file size is very simple to calculate and very meaningful. DPX files simply store all of the prescribed data just as it is, so the number of bits in each file is very simply:
(number of pixels) x (10-bits per channel) x (3 channels)
Plus there is a tiny bit extra for metadata. So, the data size of a DPX file basically has nothing missing and nothing extra, so its size faithfully represents the data amount necessary for 35mm film-style digital image data.
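As a sketch of that arithmetic in code (ignoring the small metadata header):

```python
# DPX frame size straight from the formula above:
# pixels x 10 bits per channel x 3 channels, converted to bytes.
def dpx_bytes(width, height, bits_per_channel=10, channels=3):
    return width * height * bits_per_channel * channels // 8

# A 1920x1080 frame comes out to about 7.8 million bytes from the bare
# formula -- close to the 7.9-megabytes-per-frame figure this article uses.
print(dpx_bytes(1920, 1080))  # 7776000
```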
When Kodak was figuring out how much digital information you need to truly represent 35mm film negative, they determined that 30-bits per pixel actually wasn't enough if the file was a straight linear sample from the camera or film scanner.
If you were going to use the linear sample straight from the camera or scanner, it had to be much higher than 10-bits per channel and 30-bits per pixel. So, in the interest of making manageable file sizes, they made 10-bit logarithmic files instead of, say, 12-, 14-, or 16-bit linear files. This means that the camera samples at a depth much higher than 30-bits per pixel, and then that image data is made logarithmic before being re-quantized to 30-bits. Even 30-bits is enough only if you have logarithmic density characteristics made from a higher-depth linear sample. But if the density in your file is linear (like the camera sensor's), then you need more than 30-bits.
This is going to be important later, so we'd just like to clarify a little.
Another way to think of this is that the blacks need more quantization steps than the highlights do in order to fit the perceptual information of 35mm film into a 30-bit file. 30 TOTAL bits are enough to achieve this film quality if and only if the quantization steps are unevenly distributed in a logarithmic way. If the steps ARE even -- linear distribution rather than logarithmic, which is how an image sensor works -- then the image is going to need a total of many more bits than 30-per-pixel for there to be enough quantization steps at the bottom end. So, Kodak's answer: sample at significantly higher than 30 so that the bottom end of the image has enough samples, then do a logarithmic density transform and finally requantize to fit the image into a 30-bit file. If the requantized file is linear, too much information is lost.
This all means that the initial sample at the camera or scanner must always be significantly greater than 30 bits to get film quality; the subsequent storage file can then be re-quantized to 30-bits and still maintain that quality if and only if the image is made logarithmic before being re-quantized.
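To illustrate why log encoding before re-quantizing preserves the shadows, here is a toy sketch (the curve constant is arbitrary and this is NOT the actual Cineon transform): we finely sample the darkest part of the exposure range and count how many distinct codes a 10-bit logarithmic encoding produces there versus a 12-bit linear one.

```python
import math

# Toy illustration -- not the real Cineon curve. Map a linear exposure
# value in [0, 1] to a 10-bit code logarithmically, and compare shadow
# precision against a plain 12-bit linear mapping.
A = 500.0  # arbitrary curve constant for this sketch

def log_10bit(lin):
    return round(1023 * math.log1p(A * lin) / math.log1p(A))

def linear_12bit(lin):
    return round(4095 * lin)

# Finely sample the darkest 1% of the range (as an oversampling 14-bit
# sensor would see it) and count the distinct output codes each encoding
# spends on those shadows:
shadows = [i / 16383 for i in range(164)]
log_codes = {log_10bit(v) for v in shadows}
lin_codes = {linear_12bit(v) for v in shadows}

# The 12-bit linear file has 4x as many total codes (4096 vs 1024),
# yet far fewer of them land in the shadows:
print(len(log_codes), len(lin_codes))
```

The linear encoding wastes most of its codes on highlights the eye can't differentiate, while the log encoding spends them where perception needs them -- which is the whole point of Kodak's oversample-then-log-then-requantize scheme.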
THE DIGITAL CAMERA
One problem for people who are trying to get all that information from a digital camera instead of a film scanner is that it's too much data for most real-world devices to gather and store at 24 frames per second (and faster). It's a real challenge to build an image sensor that can see that kind of resolution and depth, to build processors that can handle that kind of throughput, and to concoct a data storage medium that can record that fast and hold enough data. After all, a 1920x1080 DPX file is 7.9 megabytes per FRAME.
It's a challenge, but it's happening. And in a world of quickly changing technology, camera manufacturers are racing to do their best to overcome the herculean demands of that kind of imaging quality.
There are all kinds of work-arounds -- and these are good work-arounds; we're not knocking them. For example, many camera sensors don't have three photosites per pixel -- they only have 1 or 2 photosites per pixel, and then they use an algorithm where each pixel borrows information from some of its neighbors. Most low- and medium-end tape and file formats have a scheme of lop-sided bit-depths for the 3 channels to favor luminance information over chroma information. Another way to work around the data size is to lower the bit-depth. Many cameras and file formats only store 8-bits of data per channel instead of 10 (that's 4-times fewer brightness levels per channel). All of these work-arounds are fine, especially for projects that are not going to theatrical cinema.
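The neighbor-borrowing idea can be sketched with a toy nearest-neighbor fill over a hypothetical 4x4 RGGB mosaic (real demosaic algorithms are far more sophisticated; the sensor values here are made up):

```python
# Toy "demosaic": each photosite measures only ONE channel; the other two
# channels of each output pixel are borrowed from the nearest photosite
# that did measure them. Hypothetical 4x4 RGGB sensor readings:
bayer = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [90, 100, 110, 120],
    [130, 140, 150, 160],
]

def channel_at(y, x):
    """Which channel the photosite at (y, x) measures in an RGGB mosaic."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    out = [[[0, 0, 0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for ci, c in enumerate('RGB'):
                # borrow from the nearest photosite that measured channel c
                _, value = min(
                    ((abs(y - yy) + abs(x - xx), mosaic[yy][xx])
                     for yy in range(h) for xx in range(w)
                     if channel_at(yy, xx) == c),
                    key=lambda t: t[0])
                out[y][x][ci] = value
    return out

rgb = demosaic(bayer)
# Every output pixel now has 3 channels, but two of the three values at
# each pixel were interpolated, not measured.
```

The output looks like a full-color image, but only one-third of its data was actually sampled -- which is the subsampling trade-off described above.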
And, of course, there's one more work-around: digital compression. Many file formats will, for example, take an image whose specs demand 10 megabytes and just store it in one megabyte without changing the spec. How do they do this? With a clever compression scheme. A compression scheme means that the file doesn't really contain all the data demanded by its target spec. When done well, compression can be a great way to overcome some of the data problems we're talking about. Compression uses a very clever algorithm so that (hopefully) there is no perceptual evidence that information is missing, even though it IS missing: the scheme tries to only throw out bits and bytes that you won't notice are gone. Of course, the missing data isn't always unnoticeable. Compression schemes are proprietary and are getting better every year, and you can't tell how good a compression scheme is just by its numbers, because it's perceptual, not just mathematical -- a state-of-the-art compression scheme may be able to compress a 10 megabyte image to 1 megabyte and make it look visually better than an older compression scheme can in 5MB.
Of course, though, at some point you can't hide it any more no matter how good the scheme is. Obviously, you can't take an image file whose specs call for 10 megabytes and store it in 1-bit. For things like internet videos, large amounts of compression (even compression with very obvious visual artifacts) is fine and is the norm. But, for mastering of cinema images, it is critical to neutralize any evidence of compression (that's why DPX files -- still the standard for theatrical cinema -- are completely uncompressed). Uncompressed files are the only way to have ALL the prescribed information.
Sometimes compression can be visible in one mode of display, but not another. For example, one compression scheme may work fairly well if you don't adjust the image color (as is normally done to every film in post production color-grading) but reveals artifacts as soon as you make an adjustment, or another scheme may hide compression artifacts on a CRT monitor but not on an LCD monitor. So compression, like the other work-arounds mentioned above, can severely limit color information -- after all, there is a lot of information missing from a compressed file, even if it's hard to see that it's missing in some display methods.
Before saying why we think F23/F35 and Genesis technically beat Red in this arena of color depth, we'd like to again point out that we do believe that Red is a good product that certainly has many innovations in this race to capture image information digitally. For example, they use Bayer pattern in their image sensor, which is great -- it's the best way to subsample (to have fewer than 3 photosites per pixel). They have a great compression scheme -- it looks amazing considering the data rates. And they have innovative image processing to perceptually smooth over what's been lost by compression and subsampling.
The thing is, despite how well Red has done in making up for subsampling and compression, the other cameras have beaten them. We think that the F23/F35 and Genesis are farther along in the race: firstly, we believe that their work-arounds are better than Red's; secondly, and more importantly, they don't need as many work-arounds, because they've actually hit the target spec.
In this section, we are going to talk about the image flowing out of the camera, before it gets to the recording device.
F23, F35 and Genesis are not subsampled at all. They have actually managed to make sensors that have pixel-counts above the theatrical resolution threshold (these cameras are 1.9K, when we know that 1.8K is already above the threshold) AND they have three photosites per pixel (one per channel) AND they've met the 10-bit-per channel requisite: the camera gathers 14-bits of (oversampled) data which is converted to 10-bit logarithmic data. So they've done it -- they've met all the specs for a full film quality image: resolution, bit-depth, full color sampling, no compression. (Remember, we're just talking about the image flowing out of the camera here -- before it goes into the record-device.)
The Red, on the other hand (still just talking about the image before it's recorded), is so busy trying to outperform the other cameras in the one area of superfluous resolution that it hasn't reached full cinema quality in the other requisite aspects. They're spending all their photosites on resolution (so they can say 3.2K or 4K is a bigger number than 1.9K) and truncating color information. The Red sensor only has one sample per pixel instead of three, which means it is 3-times color subsampled. The Red camera samples at 12-bits (as opposed to the other cameras' 42-bit-per-pixel sample, which is prepared for a 30-bit file before being recorded) and passes it straight through as 12-bits of linear data. That means Red has 12-bits of linear color/luminance information per pixel compared to Genesis/F23/F35's 30-bits of true logarithmic data. As we mentioned earlier, 30-bits is required for full 35mm film color depth, and even 30-bits is only enough if it's sampled at greater than 30-bits and made logarithmic BEFORE being re-quantized. Genesis/F23/F35 achieve this filmic color depth: 3 channels of 14-bit samples, for a total 42-bit sample per pixel, which is made logarithmic before being re-quantized to 30-bits per pixel. Red gets only one 12-bit sample per pixel and can't take advantage of the efficiency of logarithmic density characteristics because it doesn't oversample at the sensor, so the resulting file is 12-bits of linear data per pixel.
To give an actual rather than theoretical analogy of the importance of bit depth, think of the ArriScan film scanner. That's the device made by Arri to scan film for digital intermediate color grading and ultimately theatrical and home-video release. The ArriScan is basically a digital camera (that shoots already-exposed film) that is a true workhorse for actually released theatrical films. The ArriScan has EXACTLY THE SAME specs for color depth as the Genesis/F23/F35. It gathers 14-bits of linear data per channel (42 per pixel) at the sensor, makes it logarithmic (while still 14-bit), and then requantizes it to 10-bit before sending it to the storage file. Again, Red gets 12-bits instead of 42 per pixel and can't take advantage of logarithmic density.
Now, again, we are not saying Red is a bad camera. The Red techs tout their use of a Bayer pattern sensor to overcome the subsampling and they tout the revolutionary quality of their compression scheme ("RedCode") and their image processing -- and they're right. They can be proud. Those are beautiful technological advances -- very impressive; they've done great in these methods of compensating for subsampling and compression. It's just that they got bested by the other cameras that don't need so many work arounds.
On top of all of this, it is important to note that fewer total photosites (which F23/F35 and Genesis have compared to Red) can sometimes be an ADVANTAGE in reducing noise and increasing dynamic range and sensitivity to light. We're not claiming that such an advantage is certain in this particular comparison, because we're only looking at the specs and the results of the specs (not the proprietary inner-workings of how the hardware achieves those specs). But, whether that phenomenon is exemplified here or not, our own tests showed that Genesis and F23/F35 have an increased dynamic range over Red's published specs and over its measured specs.
With respect to all of these matters on image gathering, the Red camp is sure to accuse us of not taking into account the fact that their image is "raw." Well, we actually have taken that into account. In a later section we will show why Red's concept of "raw" does not belong in this section or in the next section on image recording.
Of course, the image flowing out of the camera has to be recorded and stored. Now, as we've seen, the image flowing out of F23/F35 and Genesis is uncompressed 1920x1080 10-bit-per-channel RGB. These cameras, in normal configuration, are paired with a tape deck or solid state recorder that easily snaps on as a standard accessory and is the normal method of recording their images. That tape deck takes HDCAMSR cassettes and can record to the tape in two modes: 440 megabits/sec and 880 Mb/sec. The solid state recorders are fully uncompressed at 189.6 megabytes/sec. The first of the tape deck's data rates translates to 1.9 megabytes per frame, the second to 3.8 megabytes per frame. And uncompressed translates to 7.9 megabytes per frame. In SQ mode, the tape image is compressed at a ratio of 4.2-to-1. In HQ mode, at a ratio of 2.1-to-1. And the solid state recorder has no compression at all (1-to-1 ratio).
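Those ratios fall straight out of the per-frame figures. A sketch using the article's own numbers:

```python
# Compression ratios for the three Sony recording modes, computed from
# the per-frame figures above (megabytes per frame, per the article):
uncompressed_mb = 7.9  # a 1920x1080 10-bit RGB DPX frame

modes = {"SQ tape": 1.9, "HQ tape": 3.8, "solid state": 7.9}
for name, mb in modes.items():
    print(name, f"{uncompressed_mb / mb:.1f}-to-1")
# SQ tape 4.2-to-1, HQ tape 2.1-to-1, solid state 1.0-to-1
```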
Red records its information to drives or chips, and also has 2 modes. By our own calculations and measurements, it shoots 1 megabyte per frame in one mode and 1.2 megabytes per frame in the other. Some of the folks on the Red board say it's 1.5 megabytes per frame, which is a bit more. We think they did their math wrong, but we'll use their higher spec for the benefit of the doubt.
Now, that means that they're trying to store a "4K" image in 1MB or 1.5MB, whereas the F23/F35 and Genesis are storing a 1.9K image in 1.9MB or 3.8MB or 7.9MB. The compression is obviously very much higher in the Red camera.
It is difficult to assign a compression ratio to Red, because there is disagreement on whether compression ratios should be calculated by comparing the camera files with the image-sensor's data or with the finished file's data. We think you should compare it to the finished file, because we think compression ratio is meant to enumerate the ratio between how much data you actually have and how much your spec demands (which is how much the de-compression scheme has to inflate the image), but ratios calculated on the Red message board compare the camera data rate to the amount of data that the sensor originally gathered (of course, you don't have this dilemma with F23 and Genesis, because the amount of data that the sensor gathered is fully equal to the spec for the final DPX file). A DPX file of Red Camera's pixel dimensions (4096x2304) is 36 megabytes per frame. If the Red shoots 1.5MB per frame, then that is one twenty-fourth of 36MB. We'd say that the camera has a compression ratio of 24-to-1. The Red camp seems to calculate their compression at somewhere around 12-to-1 by comparing the camera data rate to their subsampled sensor instead of to the final file that the processing software creates. Either way, you can see that Red is very much more compressed than the F23/F35/Genesis ratios of 4.2-to-1, 2.1-to-1 and 1-to-1 (uncompressed).
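The 24-to-1 figure is just the DPX size at Red's pixel dimensions divided by the recorded frame size. A sketch, using the Red board's own generous 1.5MB-per-frame estimate:

```python
# Red's compression ratio, comparing the recorded frame against an
# uncompressed DPX at the same pixel dimensions (4096x2304, 30 bits/pixel):
dpx_mb = 4096 * 2304 * 30 / 8 / 1e6   # ~35.4 MB ("36 MB" as rounded above)
recorded_mb = 1.5                      # the Red board's higher estimate

print(f"{dpx_mb / recorded_mb:.0f}-to-1")  # roughly 24-to-1
```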
At this point, we must reiterate, as mentioned above, that the mere math of compression ratios does not prove that one image looks better than another -- because compression schemes are proprietary and they're perceptual (not just mathematical). Now, Red's literature would have you believe that all competing compression schemes are in the stone ages (they often use the word "wavelet," which is supposed to prove that their compression, not just their vocabulary, is better), but we don't think so -- we think that other state-of-the-art cameras are also state-of-the-art. We think (admittedly subjectively) that HDCAMSR compression is as good as it gets -- we think it's indistinguishable from uncompressed, whereas Red's scheme shows artifacts. You don't have to agree with us on which compression is visually better, but two things are for sure here: it is possible to shoot F23/F35 and Genesis uncompressed using the solid state recorders if you're worried about compression, and RedCode would have to be a LOT better than HDCAMSR just to equal it visually.
Bottom line: in recording the image data from the sensor, the Red records considerably less data (actual bytes) about the image than F23/F35 and Genesis do, whether measured by absolute amount or by ratio.
"RAW" IS A RED HERRING
We should be able to leave the comparison here, but we know that if we do, the Red proselytizers will refute our comparison by saying that we're ignoring the fact that the Red camera shoots "raw." "Raw" is another attempt to take advantage of the Big-K effect. It's a word to yell over the din.
"Raw" is a buzzword that comes from the world of digital-SLRs (digital still cameras). In that world, "raw" is indeed a fantastic advantage. In stills, "raw" is the name of a file format that stores uncompressed image-sensor data, and, usually, the only option besides "raw" for getting data out of a digital-SLR is JPEG, a file format that is extremely compressed. So, in stills, "raw" is far superior to the alternatives because it's uncompressed. In motion imaging, we believe that Red's version of "raw" is inferior to the HDCAMSR alternative because Red "raw" is MORE compressed than HDCAMSR, not less.
Let's forget the digital stills use of the word "raw" and look at what "raw" means in our comparison of motion-picture imaging cameras.
The Red camp tries to draw a differentiation between itself and other cameras by saying that Red shoots "raw." This means that raw data from the sensor is not processed into an image in the camera; instead it is processed into an image later by a computer running Red's proprietary software.
"Raw" is just a distraction. It is not a HOW of imaging; it's just a WHEN and WHERE.
"Raw" is not a distinction of any substance. F23/F35, Genesis, and Red all do the same thing: they take "raw" information from the sensor -- this is "raw" data, not yet an image -- and then they run that data through proprietary software that turns it into an image. The only difference is that the F23/F35 and Genesis image processing is in the camera, and Red's is in a separate computer.
If the equipment is configured correctly (the cameras and/or Red's software) then the step of turning raw sensor data into an image is in no way a degradation or a truncation -- it's just a transform.
If anything, this is an advantage for F23, F35 and Genesis, not a disadvantage, because the image processing software for F23, F35 and Genesis has access to ALL the information from the sensor (before it's been compressed for recording) and because the processing is done in real time -- you don't have to wait for a computer to render it. That's pretty handy. Additionally, the fact that the processing is in the camera means that the image can be made logarithmic BEFORE being re-quantized to 10-bits-per-channel -- which is essential. This means that F23, F35, and Genesis can take advantage of the fact that logarithmic density characteristics get more perceptually important information out of 30-bits than linear density characteristics do. The fact that Red is "raw" means it's using linear density characteristics in writing its files, which is perceptually less efficient than logarithmic encoding -- Red records fewer bits-per-pixel and is forced to use them less efficiently because it is "raw."
It's been stated on message boards and elsewhere that the fact that Red is "raw" makes it higher quality than other cameras because it doesn't throw away sensor information. But this is deceptive doublespeak -- it is a misleading way to make it sound like oversampling at the sensor is a bad thing, which it obviously isn't. The actual files from F23/F35/Genesis are 30-bits-per-pixel compared with Red's 12-bits-per-pixel -- 30-bits is more information about the original sensor data than 12-bits, not less. The fact that F23/F35/Genesis oversample at 42-bits at the sensor is a huge advantage for them, not a disadvantage. F23/F35/Genesis get more information from the sensor than Red (by 42-to-12) and more information in the resulting capture file than Red (by 30-to-12). The fact that 30 is less than 42 does not also make it less than 12. Again, oversampling is only an advantage, because the image can be made logarithmic before re-quantizing, thereby using the 30-bits in a much more useful way than if it were a simple 30-bit linear sample. F23/F35/Genesis meet the target spec of 30-bits-per-pixel, and can use those 30-bits more efficiently by taking a luxurious 42-bit sample at the sensor. Red, on the other hand, doesn't even meet (let alone exceed) the target spec at the sensor, and subsequently cannot use logarithmic density to make the capture file more efficient -- it's stuck in 12-bit linear.
30 bits per pixel is the minimum for film-quality motion imaging (it's the target spec), and even 30 bits is enough only if the sensor samples at greater than 30 bits and the image is mapped to logarithmic density characteristics before being re-quantized. F23/F35/Genesis actually achieve this, by sampling at 42 bits, then going to logarithmic density characteristics, then re-quantizing to 30 bits. Red only ever gets 12 bits to begin with, which is already BELOW the spec, so if it re-quantized like the other cameras do, it would just be even farther below the spec. Saying that "raw" is higher quality in this case mischaracterizes the issue: re-quantizing is an ADVANTAGE if first you oversample (compared to the target spec), then you make the image logarithmic, then you re-quantize to the target spec -- you get better quality out of your target spec. Of course, with the Red camera, re-quantizing would NOT be an advantage, because it's already below the target spec, so re-quantizing would make it even worse. Red's version of "raw" is an advantage over an imaginary camera that, say, samples at 12 bits and then re-quantizes to 8 bits. But it's not an advantage over real-life cameras that simply get a lot more data than Red does: Red samples at 12 bits and sends 12-bit linear files to the recorder; F23/F35/Genesis sample at 42 bits and send 30-bit logarithmic files to the recorder.
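To make the oversample-then-log-then-re-quantize argument concrete, here's a minimal numerical sketch in Python. The per-channel figures (14 sensor bits, 10 target bits) are just the per-pixel figures above divided by three channels; the log curve here is a generic base-2 mapping for illustration, not either camera's actual transfer characteristic.

```python
import math

SENSOR_BITS = 14          # per channel: 42 bits per pixel / 3 channels
TARGET_BITS = 10          # per channel: 30 bits per pixel / 3 channels
sensor_max = 2**SENSOR_BITS - 1
target_max = 2**TARGET_BITS - 1

def linear_requantize(v):
    # Drop the 4 least-significant bits: codes are evenly spaced
    # in scene light, so most codes land in the brightest stops.
    return v >> (SENSOR_BITS - TARGET_BITS)

def log_requantize(v):
    # Map to a logarithmic characteristic first, THEN quantize to
    # 10 bits: codes are evenly spaced in stops of exposure.
    return round(math.log2(v + 1) / math.log2(sensor_max + 1) * target_max)

# Count how many distinct 10-bit codes describe the shadow region
# (every sensor value more than two stops below sensor maximum).
shadows = range(1, sensor_max // 4)
lin_codes = len({linear_requantize(v) for v in shadows})
log_codes = len({log_requantize(v) for v in shadows})
print(lin_codes, log_codes)  # log spacing devotes far more codes to shadows
```

The point of the sketch: both paths end at the same 10-bit target, but mapping to logarithmic density before re-quantizing spends those codes on the perceptually important shadow and midtone range instead of piling them up in the highlights.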
Some people claim that the processing of F23, F35 and Genesis "bakes in" a look that you can't undo in post-production, and that Red therefore has more information about the image available in post. But this "baking in" problem only occurs if you have bad settings in the camera (for example, if you skip the usual logarithmic settings that most people use and crush blacks down to zero). When the equipment is operated in the usual and correct manner, you actually have MORE information about the image in post from F23, F35 and Genesis -- which is our whole point throughout this article -- because F23, F35 and Genesis get more color and luminance information. They capture more breadth and depth, giving more range for color-grading from the richer HDCAM SR or uncompressed file. Also, the exact same "baking in" problem applies to Red's own processing software just as it does to F23, F35 and Genesis -- if you have bad settings, you'll truncate data. So, if trained professionals are handling the equipment (whether it's F23, F35, Genesis, the Red camera, or Red's software) in its usual configuration, there will be no "baked in" data truncation.
We believe that we've compared the cameras fairly by not including Red's definition of "raw" in the previous sections on image gathering and recording, because the gathering of a raw image from the sensor and its subsequent processing has been examined equally and fairly for all cameras -- we just skipped over the WHEN and WHERE while speaking about it above, since the article is about technical quality of motion imaging, not time and place of motion imaging.
Some statements on the Red message board imply that by doing the image processing later you somehow get increased dynamic range, or that you don't need as many bytes to store the same amount of data, or other such things. This is self-evidently absurd -- one bit is one bit, and the number of bits you captured is the amount of information actually stored about the image. The only trick that makes bits truly more efficient (more real information about the image per bit, rather than throwing information away with compression) is to use logarithmic density characteristics. A bit is still a bit, but logarithmic density characteristics allow the most perceptually important 30 bits to be chosen from the 42 sampled bits -- there's no rule restricting you from choosing more perceptually important bits instead of just linearly spaced ones. F23/F35/Genesis do this, but Red can't, because it's "raw" and linear and not oversampled, so it's stuck. The number of bits in the capture file is the real image information captured, and you can't squeeze more out later (even if you perceptually cover up the lack later); what you CAN do is choose, at the time of capture, the most important bits to pack, using oversampling and logarithmic density characteristics. Red doesn't oversample and can't use logarithmic characteristics; artificially inflating the data later, as Red does, does not mean you captured more real data per bit at the camera -- it just means you're trying to cover up that you didn't.
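A tiny sketch makes the "one bit is one bit" point directly: promoting captured data into a bigger container later cannot create information that was never recorded. (This is a generic illustration of counting distinct code values, not a model of any particular software.)

```python
# A 12-bit linear capture has at most 2**12 distinct code values.
captured = list(range(2**12))          # every possible 12-bit sensor code

# "Inflating" the data later -- e.g. scaling it into a 16-bit container --
# does not create new information: the count of distinct values that can
# ever appear is fixed at capture time.
inflated = [v * 16 for v in captured]  # now spans a 16-bit range
print(len(set(captured)), len(set(inflated)))  # 4096 4096
```

The inflated file takes more bytes per value, but it still only ever contains 4096 distinguishable levels -- exactly what the camera captured.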
A reality that's being misrepresented as an advantage for Red, when it's really a disadvantage, is this: IF the Red camera stored all the data from the image sensor without compressing it (which it doesn't do), the camera file would STILL be smaller than the final DPX file -- but only because Red is subsampled (one photosite per pixel instead of three). One byte is still one byte. The "raw" file is not higher quality just because it's subsampled. Putting off the image processing till later does not magically recover data that wasn't captured. F23/F35 and Genesis SEE all the information and STORE all the information of the final DPX file -- they are not subsampled -- so there doesn't have to be any confusion with those cameras about WHEN subsampled data gets inflated to the target spec.
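The byte arithmetic behind that file-size claim fits in a few lines. (The 4096x2304 photosite grid below is an assumption for illustration, not a published spec; the 12-versus-30 bits-per-pixel figures are the ones discussed above.)

```python
# Illustrative per-frame size arithmetic.
width, height = 4096, 2304     # assumed photosite grid, for the sketch

# Subsampled sensor: ONE 12-bit sample per pixel position.
raw_bits = width * height * 12

# Final mastering file: THREE 10-bit channels per pixel (30 bits).
dpx_bits = width * height * 30

print(raw_bits / dpx_bits)     # 0.4 -- the hypothetical uncompressed raw
                               # file is smaller only because it is
                               # subsampled, not because it is "efficient"
```

At the same pixel count, 12 bits per pixel is simply 40% of 30 bits per pixel; the smaller file reflects missing channel data, not better packing.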
Sony has been able to build image processing software and native hardware that can fit inside the camera and do full-quality lossless processing in real time. Red didn't build it on-board and they can't do it in real time. That's not an advantage for Red. Red's image processing software has to work much harder than Sony's because the camera only gathers one piece of data per pixel instead of three, and the processing software has to work hard to turn that subsampled data into an intelligible color image.
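To illustrate the kind of work that reconstruction involves, here's a toy sketch of interpolating the missing channel data from a color-filter mosaic. The RGGB layout and simple neighbor-averaging are illustrative assumptions, not Red's actual (proprietary) algorithm.

```python
# Toy demosaic sketch: on an RGGB mosaic, each pixel records only ONE
# of the three color channels; the other two must be interpolated from
# neighbors. Shown here for the green channel on a 4x4 mosaic.

mosaic = [
    [10, 200, 12, 204],   # R G R G
    [100, 30, 104, 32],   # G B G B
    [14, 208, 16, 212],   # R G R G
    [108, 34, 112, 36],   # G B G B
]

def is_green(y, x):
    return (y + x) % 2 == 1   # RGGB: green sits where row+col is odd

def green_at(y, x):
    if is_green(y, x):
        return mosaic[y][x]   # measured directly by the sensor
    # At red/blue sites, every in-bounds 4-neighbor is a green site,
    # so estimate green by averaging those measured neighbors.
    nbrs = [mosaic[ny][nx]
            for ny, nx in [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if 0 <= ny < 4 and 0 <= nx < 4]
    return sum(nbrs) / len(nbrs)

print(green_at(0, 1))  # 200 -- measured
print(green_at(0, 0))  # 150.0 -- invented by interpolation
```

Two-thirds of the final image's channel values come out of estimation like this, which is why the processing software has so much work to do -- work that a fully sampled camera never needs.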
It is absurd to say that Red's image processing software is better than Sony's just because it's loaded onto a personal computer instead of into a dedicated processing board. If you want to compare the innards of the processing software (which we haven't done here), you'd have to actually get inside Red's proprietary software and inside Sony's -- you can't say which is better just by saying WHERE the software is housed. We're just comparing the RESULTS of the software, not its inner workings.
And, as we've shown, you get all the data from F23/F35 and Genesis -- every last bit and byte of information about the full breadth and range of the image -- nothing from its specs as we've described them is thrown away or compromised or subsampled (except for the mild HDCAM SR compression, as we discussed, which is much milder than Red's compression). Of course, if you operate the camera incorrectly you may lose information, but the same goes for the Red camera AND for its image-processing software.
APPLES TO APPLES
A lot of hype that we've heard out there and seen on the Red message board recently confounds some issues by switching between definitions when convenient. Specifically here, we would like to address the issue of comparing Red's CMOS sensor to digital-still cameras.
Red has a CMOS sensor (same technology as digital still cameras) whereas F23/F35 and Genesis have CCD sensors. These are just two different technologies -- not inherently better or worse. Now, the still imaging world and the motion imaging world have VERY different naming standards that seem to get conflated with one another.
Still cameras advertise a "megapixel" count. That's the number of photosites on the image sensor: it's not a bit depth or a file format or an indication of color-subsampling, or anything -- it's just a raw count of photosites on the sensor. Now, in the still world that's fair -- that's how everyone labels the cameras for comparison, so it's apples to apples. It's an agreed-upon naming scheme.
In the motion imaging world in which Red is gaining techie supporters and touting specs (and we're talking specifically about theatrical cinema, not broadcast TV or web or anything like that), there is an industry standard measure of file size: "2K," "4K" and so forth. But, as we've discussed, these standards come from post-production file types, NOT cameras and image sensors. The reality is that "2K" and "4K" files -- as they are used in the real world to master theatrical cinema -- have 3-channels and 10-bits per channel and are uncompressed.
As we have shown, Red is much farther from this benchmark than F23/F35 and Genesis are -- in both gathering and storage. There is talk on the Red message board saying things like, "a Canon still camera with 12 million photosites is 12-megapixel, and, likewise, 4K Red camera is 4K." But that's apples to oranges. Number of megapixels (meaning sheer photosite count) is a standardized still-camera industry rating of a camera's sensor, whereas "4K" is a motion-imaging term for a post-production file type. We agree that Red is "4K" by certain definitions, and that Red has more "resolution" than F23/F35 and Genesis -- Red is a very impressive camera. But we also think that Red is much farther from a 4K or even a 2K DPX file than F23/F35 and Genesis are in the gathering and storage of INFORMATION THAT IS IMPORTANT FOR THEATRICAL CINEMA.
(Thanks so much for all the feedback from the first article. We have done one rebuttal to Indie4k's dissection of our piece, which we've posted here.)