Cinema cameras need to be able to record at 4K (real 4K, i.e. 4096x2160, not UHD) at up to 32 bits per channel, 24 times a second.
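To see just how much data that is, here is a back-of-the-envelope calculation. The figures are illustrative assumptions, not RED's exact specs: a 4096x2160 Bayer sensor storing one 16-bit sample per photosite, at 24 frames per second.

```python
# Rough uncompressed data rate for 4K capture.
# Assumed for illustration: 4096x2160, 16 bits per sample, 24 fps.
width, height = 4096, 2160
bits_per_sample = 16
fps = 24

bytes_per_frame = width * height * bits_per_sample // 8
bytes_per_second = bytes_per_frame * fps

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")    # ~17.7 MB
print(f"{bytes_per_second / 1e6:.1f} MB per second")  # ~424.7 MB
```

Roughly 425 MB every second, or about 1.5 TB per hour, before you even go to deeper bit depths. That is why recording it raw to a small SSD was never an option.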
That's a lot of data. RED, at the time, had no way of recording to large disk arrays (unlike the Alexa), so they used their own proprietary SSD packs. That limited storage space, and with it shooting time.
So they needed something stronger than RLE (run-length encoding) compression.
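For context, RLE is about the simplest lossless scheme there is: replace runs of identical values with (value, count) pairs. A minimal sketch (not any particular camera's implementation):

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, run_length) pairs."""
    out = []
    for v in data:
        if out and out[-1][0] == v:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([v, 1])  # start a new run
    return [(v, n) for v, n in out]

def rle_decode(pairs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in pairs for _ in range(n)]

row = [0, 0, 0, 0, 255, 255, 0, 0]
packed = rle_encode(row)
print(packed)  # [(0, 4), (255, 2), (0, 2)]
assert rle_decode(packed) == row
```

The catch is that RLE only wins when there are long runs of identical values. Real sensor data is noisy, so neighbouring pixels are almost never exactly equal, and RLE barely compresses it. Hence the move to lossy schemes.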
This meant they had to start chucking away some data. With standard JPEG, you throw away 3/4 of the colour information (roughly speaking; it's actually the chroma channels of a different colour space, not raw blue and green) and then compress the rest.
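The "3/4 thrown away" is 4:2:0 chroma subsampling: keep luma at full resolution, but store only one chroma sample per 2x2 block of pixels. A sketch using plain nested lists, no image libraries:

```python
# 4:2:0-style chroma subsampling: average each 2x2 block of a chroma
# plane down to a single sample, discarding 3/4 of the samples.

def subsample_420(plane):
    """Downsample a chroma plane (even height/width) by 2 in each axis."""
    h, w = len(plane), len(plane[0])
    return [
        [
            (plane[y][x] + plane[y][x + 1] +
             plane[y + 1][x] + plane[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

chroma = [
    [100, 104, 200, 200],
    [ 96, 100, 200, 200],
    [ 50,  50,  10,  10],
    [ 50,  50,  10,  10],
]
small = subsample_420(chroma)
print(small)  # [[100, 200], [50, 10]]
```

A 4x4 chroma plane becomes 2x2. Your eye barely notices, because human vision is far more sensitive to brightness than to colour. A keying algorithm, on the other hand, notices immediately.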
The problem? In VFX, the colour channels are crucial for "pulling a key" (green/blue-screen work: the less clean the key, the more manual cleanup is needed, which costs $$$). So all that 4K resolution becomes useless, because in practice the part the VFX team needs ends up at less than HD resolution.
So RED used JPEG2000-style wavelet compression. Roughly speaking, instead of storing a value per pixel, you group chunks of the image together and store the _change in frequency_, that is, the differences in colour between neighbouring pixels at several scales.
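The simplest wavelet, the Haar wavelet, shows the idea. (REDCODE's actual wavelet is different; this is only an illustration of the principle.) Instead of raw values, you store pairwise averages plus pairwise differences, and the original is exactly recoverable:

```python
# One level of a 1D Haar wavelet transform: store averages and
# differences of neighbouring pairs instead of raw pixel values.
# Illustrative only; real codecs use more sophisticated wavelets.

def haar_step(signal):
    """Split an even-length signal into (averages, differences) of pairs."""
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs    = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, diffs

def haar_inverse(averages, diffs):
    """Reconstruct the original signal from averages and differences."""
    out = []
    for a, d in zip(averages, diffs):
        out += [a + d, a - d]
    return out

row = [100, 102, 98, 100, 200, 204, 10, 10]
avg, det = haar_step(row)
print(avg)  # [101.0, 99.0, 202.0, 10.0]
print(det)  # [-1.0, -1.0, -2.0, 0.0]
assert haar_inverse(avg, det) == row
```

The compression comes from the differences: in smooth image regions they are tiny, so they quantise to zero and pack down to almost nothing. Quantising them more aggressively is exactly where the (lossy) quality trade-off lives.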
This doesn't reduce the effective resolution as much, and it doesn't produce the blocky square artefacts of old-school JPEG. The problem is that it's quite CPU-intensive: at the time, decoding a single frame could take more than 30 seconds.
GPUs make real-time decoding trivial now, but back then it was a massive faff.
Also, RED are masters of bullshit and marketing. There is quality loss; they just never tell you that.