Alexa vs Red camera advice

Dominik Bauch Oct 15, 2016

  1. You clearly never worked at Complete Post.

    The Engineering department would frequently use the TAF just to see if there was "a picture" there, to run the Ranks or Spirits through alignment and make sure there were no calibration or maintenance issues. For real-world pictures? No way. Not even close. We had 8 Ranks of different vintages at one time, and over the years eventually upgraded to 11 Spirits (some HD, some 2K, some 4K). None of them could reliably make usable pictures from a client's negative based on the TAF.

    Maybe we have different ideas about what "usable pictures" were. All I did was have the DP shoot a reasonable grayscale chart on the first day, and then I saved that correction in a memory. Generally, that would give a rough starting point for PECs, negative gains, and all that other stuff.

    In truth, some of the worst film scans I've ever gotten were those made from test films. I've had my share of (polite) screaming matches at Cinesite, ILM, and Technicolor with low-level staff people who would blithely set up a scan according to some phantom procedure, then blindly scan a piece of film and never bother to look at it. If I complained and said, "the blue channel is 50% low!", their answer would be, "well, the test film looked fine." :eek:

    Eventually, I got the scanning & recording guys (I almost just used a more colorful word, but decided to be polite) to realize that every piece of film was different, every project was different, and they'd have to use good judgment to scan the material so that the specific project was within reasonable range for whites and blacks. Once I demonstrated that it only added about 5 minutes to their setup, they very reluctantly agreed to do it my way. It cut hours off my setup time, just because we were starting with a balanced scan.

    We now return you to our Alexa thread. Again, I really like the Alexa, I think there are a lot of good things about the camera, but the later Red cameras (starting with Dragon) made a lot of progress, and I'm liking what I see so far with Weapon and Helium. It all really depends on lenses, lighting, and exposure, and if you're optimized on all three, these cameras can make perfectly fine pictures.
     
  2. Yes, I never worked at Complete, thank god.
    But the places where I had worked - Post Group, Modern VideoFilm, Post Logic, and some others I now forget - all used the TAF to get ready for the telecine session. And at the time, I was that engineer who would be working on the telecine using the "Bullshit"...
    At Modern VF, as I'm sure you are well aware, we had more telecine installations across two buildings in Burbank and Glendale than anyone else in LA. That included every flavor of Cintel, Shadow, or Spirit imaginable, paired with every flavor of DaVinci. And yet, the TAF was used everywhere.
     
  3. I also worked at Modern for 4-5 years, with Lou Levinson in the room next to me, and trust me, we never used a TAF film to line up for a film job. I believe you knew Lou.

    One key problem is that the TAF is a reference tied to the moment that film was made. If you're doing a project in a different year, with a different emulsion, the reference has no meaning. It could conceivably work, provided you had a hundred different pieces of TAF film. But one emulsion? No way. In any given week, I might use 12 different kinds of emulsions, just as today a busy colorist might rely on pictures from a dozen different digital cameras. They all look different in terms of the raw signal. There is nothing to unify them except our eyes, a calibrated monitor, a known color space, and scopes.

    There was an interesting system Kodak had in the early 2000s, one that Filmlight's Peter Postma was part of (when he was a color scientist with Kodak), called the "HD Processor," which basically required the user to calibrate a specific piece of test film and compare it to a stored still within Kodak's own box. Each system was built around a specific negative emulsion. The advantage was that if a DP asked, "what does my negative actually look like?", we could push a button and say, "according to Kodak, this is what the negative would look like if printed." But again, that was a known standard, a very precise calibration of negative stocks, and a very specific test card shot under the DP's own conditions. Even then, the wild card was the idiots at the lab developing the film, and that was enough of a chaos factor to screw with the process. The Kodak system was not successful, but a lot of that was due to Kodak crashing and burning during that decade and film use evaporating.

    I got to know Peter a bit when he and I did these demos, and while I liked the system (and I have huge respect for Peter as a color scientist and as a person), the inherent imprecision of Kodak's system was not pleasant to deal with. And it was light-years beyond the simple TAF system.

    I have hundreds of pages of Kodak technical docs in my files that I've held on to for many years. If anybody is interested in the nuts and bolts of the Telecine Alignment Film, here it is:

    https://spaces.hightail.com/space/MW59x

    As dated as it is, it had some interesting ideas... at least for 1990. It was at least a step towards quantifying the mysteries of color correction, but the real truth is that it never worked, and it had a lot of inherent flaws. It did not put us in the ballpark for good color on a project, particularly if the emulsions didn't match. It was just an engineering tool to see if the machine could put out a basic picture. Later tools, like the DSC charts from Dave Corley in Toronto, actually do work and can be used with digital or film to create a good starting point from cameras in a Rec709 environment. But that's because they're shot by the DP in that space, at that time.
     
  4. Of course I knew Lou. I actually worked with him at Post Logic after I left MVF.
    So, if you didn't use the TAF, fine. I used it, and pretty much everyone I worked with did too. I know that because I was always on hand to help colorists get going at the beginning of each session, and the TAF was always used to get the telecine into a known state to avoid any shenanigans. If a DP had a question about the image, the first thing everyone would do is lace up the TAF to make sure the telecine was not in some weird state.
    To say the TAF was bullshit is... well, bullshit...
     
  5. Absolutely. THAT we could and did use the TAF for. The problem is, I did feature projects from the 1960s, the 1970s, the 1980s, and the 1990s. I did movies from lo-con print, from IP, from dupe negs, from CRIs, and from cut-negative scans. I did B&W movies, films from Europe, films from Hollywood studios, early color, modern color, across many eras and domains. Tell me: which TAF would you grab to do one of these? Even if I was doing a modern-era TV series or feature film, the TAF would have absolutely no relation to the picture the DP shot. All it would do is verify that the telecine (or scanner) was putting out pictures. Read the manuals on the TAF that I posted.

    On the other hand: I used a sizing chart every day of my life when I was color-correcting film, and I was fairly paranoid about making absolutely sure the sizing was correct. But that was (again) a known standard. Even then, when people deviated from the standard -- as with 3-perf 16x9 Super 35mm -- they'd shoot a chart on the set. So once again, we had a closed loop: a system specifically shot for a specific project on the same camera. There was no guesswork involved. (And I can think of several projects where the traditional TV-AR35 sizing charts didn't apply, since there was no official SMPTE sizing film for S35 and weird aspect ratios.)

    What's interesting to consider is that the test charts for one project didn't work for the next project. Even if they shot on the same film -- for example, the 500 ISO Kodak 5219 (my favorite negative stock, one that never had a TAF) -- the setup chart for that project most likely would not work for another TV series or feature film using the same stock. Why? Different exposures, different lenses, different lighting, different labs... pick a reason. Would it be ballparkish? Yes. But I could just use a setup memory from a similar project and get the same results.

    I think Kodak abandoned the TAF films by the mid-1990s, so I don't think they even made it to the Vision2 stocks. BTW, I know the variability in dailies and final color drove DPs and colorists crazy, and it's one thing that led to standardized charts like the one Yuri Neyman sells through his company Gamma & Density, and the ones I mentioned before through DSC. On a per-project basis, the charts can work OK, provided the lighting conditions and lenses and all that stuff are similar.

    Mike Most went through all this stuff at Encore in the 1980s and 1990s, and maybe he can weigh in with his thoughts. They had a different system in place over there, but everybody in town (Modern, Complete, Post Group, Sunset, Matchframe, Laser-Pacific, Anderson, Unitel, Varitel, and many other post companies that are now out of business) all had systems that essentially worked. But a great deal of it boiled down to getting a good colorist in the chair, plus having good communications with the DP. That actually transcended any charts at all. To me, it still does, even with digital capture.
     
  6. (very fascinating, a bit far from the OP though)
     
  7. EXTREMELY fascinating, for those of us too young to have been there.

    But on the original topic: from an on-set perspective, it really depends on what kinds of projects you're shooting. One of the more recent ICG magazines had a piece about The Get Down, which I thought looked amazing. It was shot on Red, and the cinematographer states that he shot it mostly at 320 or 400 ISO. From everything I've done with Red, to me, that's the sweet spot. Arri looks better with less light. With Red, you need to feed it more light.

    The flip side is that, since the Red is more modular, if you know exactly what you'll use, you can create a fully functional rig for less money than an Arri. The Arri cameras, I find, are much simpler; they're harder to fuck up. But there are certain things a Red can do that an Arri cannot. The only question is: will you use those features? 99% of the sets I'm on have no use for those feature sets.

    So. More time spent on thought and knowledge of the system, to trade off for an overall smaller and cheaper camera? Red, with a bit heavier lighting budget. Simpler camera that makes a great image straight out of the box but costs you more? Arri, but you'll save money on lighting over time.

    In my personal opinion (and I know that I'm in the minority on this) some of my overall favorite-looking projects, both features and TV, that have come out in the past 5 years, have been shot on Red. But Arri footage looks better on set and in dailies. With Red, you have to have a bunch of people all along the chain taking full advantage of what the camera offers in order to get the maximum out of it. And even at that, this isn't to say Arri projects look bad in the end by any means.
     
    Marc Wielage likes this.
  8. I agree, the Red camera is certainly capable of beautiful images.
    Personally, Red reminds me of a PC with Windows - many, many options, with the opportunity to custom-configure just the camera you want, but at times this wealth of choices can become overwhelming and confusing, both to the operator and to the post personnel.
    Alexa, on the other hand, is like a Mac. Not that many choices; most of the stuff is hidden behind a pretty but very useful interface, and a great result is easier to achieve without too much effort.
    That said, I also love Sony F65 images, and I absolutely abhor the F55.
    Just thought I'd throw it out there :eek:
     
  9. I digress! (I have a degree in Digression!)
     
    Andrew Webb and Walter Volpatto like this.

  10. When all the planets align, for a brief moment, the F55 makes a lovely image.
    Definitely not a camera to underexpose or overcrank.
    I had a go with the Varicam and was quite happy.
     
    Marc Wielage likes this.

  11. Jake, you are going the wrong way here. The point of anamorphic is to oversample your vertical, not undersample your horizontal. A 2904 horizontal resolution can't ever become 5808.
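
    A quick sanity check on that arithmetic (a minimal Python sketch; the 2904 figure comes from the discussion above, while the 2160 height and the 2x squeeze are assumptions for illustration):

    ```python
    # Sketch: why a 2x anamorphic unsqueeze never creates new horizontal detail.
    capture_w, capture_h = 2904, 2160   # assumed photosite counts of the capture
    squeeze = 2.0                       # assumed 2x anamorphic lens

    # Stretching horizontally for display gives 5808 pixels across, but roughly
    # every other one is interpolated -- there are still only 2904 real samples.
    display_w = int(capture_w * squeeze)            # 5808 (interpolated width)

    # Squeezing vertically instead yields the same proportions while keeping
    # only real samples: the vertical axis is what the lens oversampled.
    native = (capture_w, int(capture_h / squeeze))  # (2904, 1080)

    print(display_w, native)
    ```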
     
    Adam Hawkey and Marc Wielage like this.
  12. thank you....
     
    Adam Hawkey likes this.
  13. I often go to 4096x1716 on a master prime anamorphic using some Nuke routines, and it looks awesome (so the 1716 is "averaging" down from the open gate while the 4096 interpolates up, but it looks perfect). But back to Helium vs Alexa: I personally think the open gate of the Alexa is more important to the look/feel than having the extra pixels. On low budgets you also can't remove back walls and stuff, so I often really need something wider than 35mm on the vertical, because I'm in rooms where I'm just a bit too close to the talent. Also, with the open gate I can get closer to the talent and still keep the same crop I'd have on an Epic. Anamorphics are really important in medium- and low-budget work, because you instantly get a "film" look even though the production is just nominal.
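
    A minimal sketch of the kind of two-pass resize described at the top of that post (OpenCV standing in for whatever the actual Nuke routines do; the open-gate source dimensions are my assumption, and the exact desqueeze and crop will vary per show):

    ```python
    import numpy as np
    import cv2  # OpenCV resize as a stand-in for the actual Nuke nodes

    SRC_W, SRC_H = 3424, 2202   # assumed Alexa open-gate raster
    DST_W, DST_H = 4096, 1716   # the delivery raster from the post above

    frame = np.random.rand(SRC_H, SRC_W, 3).astype(np.float32)  # stand-in image

    # Vertical: 2202 -> 1716 is a reduction, so an averaging (area) filter.
    mid = cv2.resize(frame, (SRC_W, DST_H), interpolation=cv2.INTER_AREA)

    # Horizontal: 3424 -> 4096 is an enlargement, so those samples are
    # interpolated rather than captured -- the "interpolates up" part.
    out = cv2.resize(mid, (DST_W, DST_H), interpolation=cv2.INTER_LANCZOS4)

    print(out.shape)  # (1716, 4096, 3)
    ```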
     
  14. You guys keep missing my point and that is disappointing.
    Your approach of using just numbers, without taking into account other aspects of the process, is not really that much different from Netflix dictating the camera choice based on only one aspect - sensor resolution - while neglecting to even mention the underlying technology, namely compression. What you're missing is the combination of analog compression with digital matrices. In a world of strictly digital acquisition, no one would argue that 2+2=4. But that is not what we have here. We have analog 2:1 compression combined with a digital representation of it. If you can show me any studies of combining analog compression with digital reproduction, I would be very interested to see them. Your position is really no different from saying that 120fps has much higher temporal resolution than 24fps, therefore we should all use that, right? At the end of the day, looking at the two images - Alexa anamorphic next to 5K Weapon on a 4K screen - should be the only consideration that counts. And I'm not convinced many will prefer the "true" 4K image over the "virtual" one.
    Until then, let's agree to disagree...
     
  15. Thank you:D
     

  16. Correct. It's not really becoming 5808x2160; it's basically becoming 2904x1080 with proper proportions. And even those numbers have little real-world relevance, as the Alexa is usually debayered to 2K. The only way it becomes what Jake is suggesting is if you apply the de-anamorphose step, then scale it.
     
    Marc Wielage likes this.
  17. I agree in principle, but I'm not sure that those who are shooting the material would always prefer anamorphic over spherical, for many practical and artistic reasons. So it's not really a completely logical comparison except in cases where anamorphic is specifically desirable.
     
  18. A bit confused here: where has 2904x2160 come from? Your full sensor read in 4:3 is 2880x2160, but your frame line sits at 2570x2150 for scope in raw. (Bear with me, I haven't done this for over 3 months, and sleeping empties my head out.)

    I think there is a funky size for ProRes, but it's still 2994x2160; there is a bit of missing picture left and right due to something about how ProRes encodes, so it's still a 2880x2160 picture.

    Did I miss a memo somewhere? Is there another size somewhere? :)
     
  19. Actually, if you want to be specific, the current anamorphic mode on the Alexa is 2560x2145, an aspect ratio of 6:5 that yields a "proper" 2.39:1 image matching the current DCI expectation when unsqueezed with a factor of 2x...
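
    (For anyone checking the math, those numbers do work out; the two lines below use nothing beyond the figures already in the post:)

    ```python
    w, h, squeeze = 2560, 2145, 2.0
    print(w / h)            # 1.1935... ~ the 6:5 sensor window
    print(w * squeeze / h)  # 2.3869... ~ 2.39:1 once unsqueezed 2x
    ```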
     
    gavin nugent likes this.

  20. I'm sure you're right, Michael. I've not done a job with it for a while; I just didn't understand where 2904 had come from.
     
