Which intermediary format should I use for H.264 Drone footage?

ChristopherAllain Jul 1, 2015

  1. I recently worked with some drone footage from a DJI Inspire. It records in UHD H.264.

    I know that in a situation like this, you should batch-convert all of the footage to an intermediary codec like ProRes. So here's the question: if the source material is 8-bit H.264, is there any advantage to using ProRes 4444 over ProRes 422 HQ, or even ProRes 422 LT?


    Obviously, if I'm not going to gain any benefit, I'd much rather store ProRes 422 LT files than ProRes 4444 files.
     
  2. Jack Jones Colourist

    Experiment ;)

    Do a short test, see what the benefits are, and decide which ends up being the best value: HDD space vs. quality.
     
    Juan Salvo likes this.
  3. Yeah, LT at 4K is spreading the data rate pretty thin. I wouldn't go below HQ at that kind of raster.
     
  4. Huh? ProRes isn't a fixed data rate; it scales with the size of the raster. The UHD files (it's not 4K if it's not 4096, damn it!) will be 4x the file size of the HD version.

    That said, clearly the way to go for this UHD(!!!) footage is 16-bit DPX.
     
    Robert Houllahan likes this.
  5. Convert it to Apple Animation. :p

    /s
     
  6. I call it 4K just to get your dander up. Worked!
    Doesn't each flavor of ProRes have a peak data rate that it can scale up to, though?
     
  7. Yes. See page 21 of the Apple ProRes white paper.
     
  8. Huh? The chart shows ProRes LT at 82 Mb/s for HD and 328 Mb/s for UHD(!!!!!!!!!!!!!!!!) (328/4 = 82), so there's an exactly linear correlation between frame size and data rate. And that scales up (I believe) to 8K.
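    For what it's worth, here's a quick back-of-the-envelope on what those rates mean for storage. A rough sketch in Python, using the LT figures quoted above; the HQ UHD figure (~707 Mb/s) is from memory of the same chart, so treat it as approximate:

        # Rough storage-per-hour estimate from ProRes target data rates (Mb/s).
        # LT numbers are the ones quoted above; the HQ UHD number is approximate.
        rates_mbps = {
            "ProRes 422 LT, 1080p": 82,
            "ProRes 422 LT, UHD": 328,
            "ProRes 422 HQ, UHD": 707,  # approximate
        }

        for name, mbps in rates_mbps.items():
            gb_per_hour = mbps * 3600 / 8 / 1000  # megabits/s -> gigabytes/hour
            print(f"{name}: ~{gb_per_hour:.0f} GB per hour")

    That works out to roughly 37 GB/hour for LT in HD, ~148 GB/hour for LT in UHD, and ~318 GB/hour for HQ in UHD, which is basically the OP's space-vs-quality trade-off in numbers.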
     
  9. Without any testing, I'd go with ProRes 422 (non-LT, non-HQ). The bit depth is not an issue here; it's just that you don't want to accumulate more compression artifacts by using a highly compressed codec like LT.

    But depending on your workflow, you may want to use LT or DNxHD 36 for editorial and then relink to the H.264 camera masters in finishing.
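    If you do go the transcode route, the batch step is easy to script. A minimal sketch using ffmpeg's prores_ks encoder from Python (the directory names, the *.mp4 pattern, and the audio setting are placeholders, adjust to taste):

        # Minimal batch transcode sketch: H.264 camera originals -> ProRes 422.
        # Assumes ffmpeg is on the PATH; SRC/DST are placeholder directories.
        import subprocess
        from pathlib import Path

        SRC = Path("camera_originals")
        DST = Path("prores_422")
        DST.mkdir(exist_ok=True)

        for clip in sorted(SRC.glob("*.mp4")):
            out = DST / (clip.stem + ".mov")
            subprocess.run([
                "ffmpeg", "-i", str(clip),
                "-c:v", "prores_ks", "-profile:v", "2",  # 2 = ProRes 422 (1 = LT, 3 = HQ)
                "-c:a", "pcm_s16le",                     # uncompressed audio in the .mov
                str(out),
            ], check=True)

    (ffmpeg's ProRes encoders aren't Apple's own, so if a deliverable spec demands Apple-encoded ProRes, use Compressor or Resolve instead; for an offline/intermediate it's fine.)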
     
    Marc Wielage likes this.
  10. BTW, I am working with footage from a DJI vehicle with a 4K integrated camera. I don't know if they all have the same camera system or not, but mine has awful aliasing. Stills look fantastic; the moving video is too sharp and shows terrible aliasing. You may need to run the material through a low-pass filter.
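    If you want to try that in software, one crude approximation is a very light blur applied before any scaling or sharpening. A rough sketch with ffmpeg's gblur filter driven from Python (assumes a reasonably recent ffmpeg build; the sigma value and filenames are placeholders, and doing this in your grading tool on a problem shot is probably the better workflow):

        # Crude "low-pass" pass: a very light gaussian blur to knock down aliasing.
        # sigma is only a starting point; tune it by eye on an aliasing-prone shot.
        import subprocess

        subprocess.run([
            "ffmpeg", "-i", "dji_clip.mp4",
            "-vf", "gblur=sigma=0.6",
            "-c:v", "prores_ks", "-profile:v", "3",  # write out as ProRes 422 HQ
            "-c:a", "copy",
            "dji_clip_softened.mov",
        ], check=True)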
     
  11. Thanks to everyone.

    Igor, I agree about the aliasing. The DJI Inspire doesn't have an adjustable iris. As a result, the only way to expose properly is to raise the shutter speed, which results in very choppy-looking footage. The ND filter it ships with helps, but it's woefully inadequate for a sunny day. The slowest they were able to get the shutter was 1/200, or about a 43.2° shutter angle. We were able to dramatically improve the footage by adding some motion blur, but it's just not the same as capturing it properly in camera.
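    For anyone wondering where 43.2° comes from: shutter angle is 360° × frame rate × exposure time, so that figure implies a 24 fps timeline (the frame rate isn't stated above, so that's my assumption). A tiny Python check:

        # Shutter angle from shutter speed: angle = 360 * frame_rate / shutter_denominator.
        # The 24 fps frame rate is an assumption, not stated in the post.
        def shutter_angle(frame_rate: float, shutter_denominator: float) -> float:
            return 360.0 * frame_rate / shutter_denominator

        print(shutter_angle(24, 200))  # -> 43.2, vs. the "filmic" 180 degrees (1/48 s at 24 fps)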
     
  12. I have overexposure in nearly every shot where there is something light-colored on the ground, like a white car. Completely blown out.
     
  13. Why do you need to transcode, again?

    I mean, people still do that?
     
  14. Every day. Especially with crap H.264 Long-GOP formats.
     
    Clark Bierbaum likes this.
  15. Yeah, mostly because they want to, instead of being required to do so.
    *cough* FCP7 *cough* P2 Cards *cough*

    Given today's CPU clock speeds and multi-threaded architectures, cutting H.264 natively is not as computationally difficult as it used to be, especially with Adobe's Mercury Playback Engine in play for Premiere Pro. H.264 files are also relatively tiny (because they carry very few keyframes compared to intraframe codecs). It used to be an absolute nightmare.

    Having said that, editing multiple streams of H.264 still blows even today; if you like dropped frames, unresponsive playback, and a warmer processor, go nuts. Intraframe codecs are definitely more edit-friendly, no argument there, but at the cost of hard-drive space and bus-speed saturation, since each frame is now discrete. You also lose post time to transcoding if the material wasn't intra to begin with.
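    If you want to see the Long-GOP issue for yourself, just count the keyframes. A rough sketch using ffprobe from Python (the clip name is a placeholder); on typical Long-GOP camera files you'll see an I-frame only every half second or so, and every frame in between has to be reconstructed from its neighbours, which is exactly why scrubbing and multi-stream playback hurt:

        # Count keyframes (I-frames) vs. total frames in a clip with ffprobe.
        # Long-GOP H.264 typically has one keyframe every half second to a second;
        # an intraframe codec like ProRes is keyframes all the way down.
        import subprocess

        lines = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "frame=key_frame", "-of", "csv=p=0", "clip.MP4"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()

        total = len(lines)
        keyframes = sum(1 for line in lines if line.strip() == "1")
        print(f"{keyframes} keyframes out of {total} frames")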

    Last year I graded a short that was shot on the FS700, and I opted to re-wrap the .MTS files to keep the H.264 video stream instead of transcoding the entire thing to ProRes 422. No problems, even on a shitty external USB 3.0 client drive.
    Honestly, it's up to you, mate... how do you like your eggs? Over easy, hard boiled, soft boiled, raw... depends on your mood and the rest of the dish. :cool:
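    For reference, that re-wrap is a one-liner and involves no re-encode of the picture at all. A rough sketch calling ffmpeg from Python (filenames are placeholders; depending on what your NLE accepts, the AC-3 audio typical of AVCHD may need converting, so it's transcoded to PCM here):

        # Re-wrap an AVCHD .MTS into a .mov without touching the H.264 video stream.
        # "-c:v copy" means no re-encode of the picture; only the audio is converted.
        import subprocess

        subprocess.run([
            "ffmpeg", "-i", "A001_clip.MTS",
            "-c:v", "copy",
            "-c:a", "pcm_s16le",
            "A001_clip.mov",
        ], check=True)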

    P.S. Congrats Marc on 3000+ posted comments. Holy macaroni!
     
    Marc Wielage likes this.
  16. Diarrhea of the mouth. Plus I read 2000 words a minute and talk fast.
     
  17. I just wrapped a feature project as a DIT where the producers wanted to use some DJI footage. When we showed it side-by-side with our A and B cameras (Alexa recording to a Codex drive), we pushed hard to make the point that they just don't match. Love the idea of the thing, but it's EXTREMELY limited in controls on the day.

    As for why people still transcode? Predictability and uniformity. I know people on this forum come from all levels, from the indie just dipping their toes in to the colorists who have been grading broadcast or feature material longer than I've been alive. From the lower end it's harder to see, especially with the tools getting better and better. In one sense, Premiere (and FCPX, and other tools) are really good at hopping between formats and making up for your lack of knowledge and uniformity with good behind-the-scenes rendering and adjusting. On the other hand, because the equipment is low-end relative to the needs of high-budget work, one can get used to certain hiccups and glitches.

    On a $1m feature, all material is shot on one camera, or a few lower-end cameras. On an $8m feature, there are often two or three formats floating around, typically a combination of Alexa, Red, and some other random stuff. On the $50m features, there are a bunch of cameras in use, especially once you start adding in stunt units, splinter units, VFX, and aerials.

    From the editor's standpoint, you want uniformity. If all content is sound-synced DNxHD 110, then you know that if you have an issue, it's not due to weird GPU/CPU/software-renderer conflicts. If you're working on a 3-minute marketing piece and you're a small shop, that doesn't sound like a big deal, because you're used to running into those things, to the point that they almost become invisible. If you're working with 200 hours of footage across 5 camera formats from 3 different units, and your Avid project alone takes 5 or 15 minutes to open, and you have to spit out a client review file by the end of the day after having made 15 different changes spread across 4 reels... well, you don't want to be dealing with any mismatches.

    In other words, it's all a question of scope and scale. As much as it seems like the improvements in hardware and software would help the big guys, in practice it's usually the opposite. There is big value in uniformity. It's why there are still near-set DITs/nextLab/Outpost/etc. operators who transcode and review footage for editorial.

    As for the original question: Igor's method, stated above, would be my advice.
     
    Ryan Nguyen likes this.
  18. I am having massive aliasing issues with DJI-captured material. No one caught it looking at the offline files during editorial.
     
    Marc Wielage likes this.
  19. I wish the GoPro would just go away. I'd much rather people use a slightly larger camera with more manual controls, like the Blackmagic Pocket Cinema Camera.
     
    Marc Wielage likes this.
