I have a few questions (motivated by partial ignorance; these are questions rather than suggestions):
- In a table like the one below, the CoC (circle of confusion) is hard-coded to produce the DOF table:
cooke-cinematography-lens-depth-of-field-chart.pdf (cookeoptics.com)
On the VES list, people are very interested in the DOF effect (as well as stylization like aperture shapes for bokeh), and some suggest the CoC is what should be transported to effects.
Is it possible to derive the constant CoC used to generate this sort of table (linked above) from other variables, if we have the Min and Max and the FD? (I know it's complicated, as the halfway point is not in the center, nor always a 2:1 power function of distance.) Is it just a function curve that stops at half of "infinity" (the hyperfocal distance)? As in, could it be approximated and provided as a curve by the lens vendor?
In effects parlance, the bokeh ball has a maximum size, which I guess could be derived from something similar to the CoC (a non-constant value)? See the sketch below.
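A minimal sketch of both ideas, assuming the standard thin-lens DOF formulas (the function and variable names are mine, not from any spec): the hyperfocal distance and near/far limits for a constant CoC, plus the non-constant blur-circle diameter whose limit at infinity bounds the bokeh-ball size:

```python
import math

def hyperfocal_mm(f_mm, f_number, coc_mm):
    """Hyperfocal distance for a constant CoC: focusing here keeps
    everything from half this distance to infinity within the CoC."""
    return f_mm ** 2 / (f_number * coc_mm) + f_mm

def dof_limits_mm(f_mm, f_number, coc_mm, focus_mm):
    """Near/far limits of acceptable sharpness (the DOF-table numbers)."""
    h = hyperfocal_mm(f_mm, f_number, coc_mm)
    near = h * focus_mm / (h + (focus_mm - f_mm))
    far = math.inf if focus_mm >= h else h * focus_mm / (h - (focus_mm - f_mm))
    return near, far

def blur_circle_mm(f_mm, f_number, focus_mm, subject_mm):
    """Blur-circle diameter on the sensor for a point at subject_mm when
    focused at focus_mm: the non-constant counterpart of the CoC."""
    aperture_mm = f_mm / f_number  # entrance pupil diameter
    return aperture_mm * (f_mm / (focus_mm - f_mm)) * abs(subject_mm - focus_mm) / subject_mm

# 50 mm lens at T2.8 focused at 3 m, full-frame CoC of 0.025 mm:
print(dof_limits_mm(50, 2.8, 0.025, 3000))  # ~ (2771 mm, 3270 mm)
# Max bokeh-ball diameter (background point at infinity):
print((50 / 2.8) * 50 / (3000 - 50))        # ~ 0.30 mm on the sensor
```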
///
- I am also not clear about the entrance pupil point. I understand the idea, and I have seen implementations and read about it years ago, as it's similar to the no-parallax point I used in panorama stitching.
And I am OK with the CG camera perspective transform (image plane) being located based on that virtual aperture location.
It's not that straightforward in practice, and I assume it will vary based on FD at least. Would this be approximated with just a function curve too?
As in, could it be parametrized with a few points? See the sketch below.
I am wary of any process that requires shooting grids/checkerboards. I understand it might be in /i data.
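A minimal sketch of the "few points" idea, under my own assumption (not anything in the spec) that the entrance-pupil offset is sampled at a handful of focus distances and interpolated in between; interpolating against inverse focus distance lets infinity map cleanly to zero:

```python
def make_pupil_curve(samples_mm):
    """samples_mm: list of (focus_distance_mm, pupil_offset_mm) pairs;
    the focus distance may be float('inf')."""
    # Convert to (diopters, offset) so infinity becomes x = 0.
    pts = sorted(((0.0 if d == float('inf') else 1000.0 / d), off)
                 for d, off in samples_mm)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]

    def offset(focus_distance_mm):
        x = 0.0 if focus_distance_mm == float('inf') else 1000.0 / focus_distance_mm
        x = min(max(x, xs[0]), xs[-1])   # clamp outside the sampled range
        for i in range(1, len(xs)):      # piecewise-linear interpolation
            if x <= xs[i]:
                t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] + t * (ys[i] - ys[i - 1])
        return ys[-1]

    return offset

# Hypothetical samples: the pupil drifts ~4 mm as the lens racks from inf to 0.6 m.
pupil = make_pupil_curve([(600, 112.0), (1000, 110.5), (2000, 109.2),
                          (float('inf'), 108.3)])
print(pupil(1500))  # interpolated offset at 1.5 m
```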
///
- Regarding lens adapters affecting the flange distance: I saw somewhere, maybe on an ASWF list, someone posting a link to lens/camera data in JSON Schema form that included the adapter transform. I can't find that reference to forward here.
This reminds me: if you have ever used one of these with a focal reducer, it will often shift the center (critical when dealing with fisheyes). Even putting the same lens on the same camera might generate two different centers at the scale of photosites (a known issue for people doing stereo), and even the physical aperture hole might not be exactly circular and aligned. I am not an optical lens designer, and this is out of scope here, but I see that active machine-vision benches use a micro-translation stage to calibrate the center. I always wondered if someone could make a simple illuminating lens cap with a micro-controller (knobs) to move the light dot to align the center, where the camera (or an attached monitor) generates colored pixels to visually align, or at least capture the micro-offsets of the center (see the sketch below). I understand this would generate a blurry circle if placed on an illuminating cap, which is fine. Without such a scheme, any marketing mention of sub-pixel accuracy is not substantiable. I have seen an ad-hoc technique tutorialized by SynthEyes that collects a greyscale gradient by shooting a flat illuminated surface.
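A hedged sketch of the "capture micro-offsets" step, assuming only that the blurry dot is the brightest thing in the frame; an intensity-weighted centroid gives a sub-pixel estimate of the dot's offset from the sensor center (my illustration, not anything proposed here):

```python
import numpy as np

def dot_center_offset(frame):
    """frame: 2D array of pixel intensities containing the blurry dot.
    Returns (dx, dy) in pixels from the geometric sensor center; sub-pixel
    because the centroid is intensity-weighted over the whole blur circle."""
    f = frame.astype(np.float64)
    f -= np.median(f)                 # crude background subtraction
    np.clip(f, 0.0, None, out=f)
    ys, xs = np.indices(f.shape)
    total = f.sum()
    cx = (xs * f).sum() / total
    cy = (ys * f).sum() / total
    h, w = f.shape
    return cx - (w - 1) / 2.0, cy - (h - 1) / 2.0

# Synthetic test: a Gaussian blob offset by (+3.2, -1.7) px from center.
h, w = 480, 640
ys, xs = np.indices((h, w))
blob = np.exp(-((xs - (w - 1) / 2 - 3.2) ** 2
                + (ys - (h - 1) / 2 + 1.7) ** 2) / (2 * 25 ** 2))
print(dot_center_offset(blob))  # ~ (3.2, -1.7)
```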
- For lens softness at the edges, vignetting, and distortion: are you missing a base definition that would simply be two circles, where the inner circle is the defined, known "good coverage" (as Cine-Lens calls it)? That should be part of the lens specs but is not necessarily what is imaged. For example, if you use a lens designed for full-frame 35mm on a larger sensor, it will image something outside of the good-coverage circle (which is what the lens vendor should spec). This extra area might still be more useful than just cropping in post, and is sometimes even a look (e.g. shooting with a full-frame lens on a RED Monstro, you might have only 7K of the 8K within good coverage, as the sensor is larger than full frame). A sketch of such a definition follows below.
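A minimal sketch of that two-circle base definition; the field names are hypothetical, not from the spec, with diameters in mm on the image plane and an optional center offset since the circles need not be centered on the sensor:

```python
import math
from dataclasses import dataclass

@dataclass
class LensCoverage:
    good_coverage_diameter_mm: float      # vendor-specced circle of acceptable quality
    image_circle_diameter_mm: float       # full illuminated circle actually imaged
    center_offset_mm: tuple = (0.0, 0.0)  # (x, y) offset from the sensor center

    def covers(self, sensor_width_mm, sensor_height_mm):
        """True if the good-coverage circle contains the whole sensor
        (ignoring the center offset for simplicity)."""
        half_diag = math.hypot(sensor_width_mm, sensor_height_mm) / 2.0
        return half_diag <= self.good_coverage_diameter_mm / 2.0

# A full-frame lens (~43.3 mm good coverage; the 46 mm image circle is a
# made-up number) on a RED Monstro 8K VV sensor (40.96 x 21.60 mm):
ff_lens = LensCoverage(43.3, 46.0)
print(ff_lens.covers(40.96, 21.60))  # False: sensor diagonal ~46.3 mm > 43.3 mm
```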
Pierre