Overview
There are 3 image buffers in the camera:
- Bitmap VRAM (BMP) - 8-bit, for displaying overlays
- LiveView VRAM (LV) - usually YUV422, for displaying the LiveView and Playback images.
- Recording VRAM (HD) - YUV422, used for recording (but it's also updated while not recording and in photo mode). Usually it has higher resolution than LV.
Exception: LV is not YUV422 on SD monitors (luma is the same, color data is unknown).
struct vram_info
{
    uint8_t * vram; // buffer address
    int width;
    int pitch;
    int height;
};
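Each of the three buffers is described by such a structure. As a quick illustration, here is a minimal sketch that reads the geometry of all of them (it uses the get_yuv422_vram(), get_yuv422_hd_vram() and bmp_vram() accessors shown later on this page, plus bmp_printf()/FONT_MED, which are assumed ML helpers):

// Sketch: inspect the geometry of the three buffers.
struct vram_info * lv = get_yuv422_vram();     // LiveView buffer (YUV422)
struct vram_info * hd = get_yuv422_hd_vram();  // recording buffer (YUV422)
uint8_t * bmp = bmp_vram();                    // overlay buffer (8-bit), raw address only
bmp_printf(FONT_MED, 0, 0, "BMP at %x, LV %dx%d, HD %dx%d",
    (uint32_t) bmp, lv->width, lv->height, hd->width, hd->height);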
BMP to LV mapping
Usually, there's a 1:1 mapping between BMP and LV coordinates. Sometimes the LV image has black bars.
The effective LV image (excluding black bars) is always 3:2 (the sensor's aspect ratio).
Exceptions:
- On SD monitors and certain cameras (e.g. 1100D), BMP-LV mapping is not 1:1.
- On HDMI 1080i, LV has double resolution.
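In practice you should not rely on any particular mapping. A sketch of the portable way to go from BMP to LV coordinates, using the BM2LV_X / BM2LV_Y macros described in the "Coordinate transforms" section below (the sample coordinates are arbitrary):

// Sketch: convert an overlay (BMP) coordinate to LV coordinates.
// On SD monitors, the 1100D or HDMI 1080i, only the scaling factors change,
// so code written with these macros keeps working there.
int x_bm = 360, y_bm = 240;    // arbitrary overlay coordinate
int x_lv = BM2LV_X(x_bm);
int y_lv = BM2LV_Y(y_bm);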
LV crop area
The placement of the effective LV image area (always 3:2, excluding black bars) is described by this structure:
struct bmp_ov_loc_size
{
    int x0;       // live view x offset within the OSD
    int y0;       // live view y offset within the OSD
    int x_ex;     // live view x extent (x0 + x_ex = x_max)
    int y_ex;     // live view y extent
    int x_max;    // x0 + x_ex
    int y_max;    // y0 + y_ex
    int off_169;  // width of one 16:9 bar
    int off_1610; // width of one 16:10 bar
};
Global variable:
extern struct bmp_ov_loc_size os;
On this crop area (x0...x_max, y0...y_max) you should draw cropmarks, which are always 3:2 bitmaps.
Pixels are not always square.
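For illustration, a minimal sketch that outlines this crop area in the BMP overlay (bmp_putpixel() and COLOR_WHITE are assumed ML helpers; any other drawing primitive would do):

// Sketch: outline the effective LiveView area in the overlay buffer.
// All coordinates come from "os", so nothing is hardcoded.
for (int x = os.x0; x < os.x_max; x++)
{
    bmp_putpixel(x, os.y0,        COLOR_WHITE);   // top edge
    bmp_putpixel(x, os.y_max - 1, COLOR_WHITE);   // bottom edge
}
for (int y = os.y0; y < os.y_max; y++)
{
    bmp_putpixel(os.x0,        y, COLOR_WHITE);   // left edge
    bmp_putpixel(os.x_max - 1, y, COLOR_WHITE);   // right edge
}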
LV to HD mapping
The HD image does not contain any black bars and is centered on the effective LV image (the part without black bars, as described by the "os" structure). It may have a different aspect ratio. The LV image always includes the HD image (i.e. you always see what you record, plus maybe some transparent bars).
Focus peaking and Magic Zoom use data from the HD buffer, because it has higher resolution.
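As a rough illustration of why the higher resolution matters, here is a sketch of a tiny contrast check on the HD luma channel (it uses the BM2HD macro introduced below; ABS() and the threshold value are assumptions):

// Sketch: crude sharpness test for the pixel at BMP coordinates (x,y),
// comparing its HD luma with the luma of the neighbouring pixel.
static int looks_in_focus(int x, int y)
{
    uint8_t * hd = get_yuv422_hd_vram()->vram;
    int luma_a = hd[BM2HD(x, y) + 1];    // UYVY: odd bytes hold luma
    int luma_b = hd[BM2HD(x, y) + 3];    // luma of the next pixel in the pair
    return ABS(luma_a - luma_b) > 20;    // arbitrary threshold
}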
Normalized coordinates: (0,0)...(720,480)
Sometimes it's easier to work with these coordinates. For example, the ghost image is created in Play mode and displayed in LiveView mode, and the image buffers in these two modes may have different sizes.
Using normalized coordinates may result in roundoff errors.
Normalized area covers the effective LiveView image, without black bars.
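A sketch of the conversion implied by this definition (the helper names are made up for illustration; the integer division is where the roundoff comes from):

// Sketch: map a BMP coordinate into the normalized (0,0)...(720,480) space,
// using the effective LiveView area described by "os", and back.
static int bm_to_norm_x(int x)  { return (x - os.x0) * 720 / os.x_ex; }
static int bm_to_norm_y(int y)  { return (y - os.y0) * 480 / os.y_ex; }
static int norm_to_bm_x(int xn) { return xn * os.x_ex / 720 + os.x0; }
static int norm_to_bm_y(int yn) { return yn * os.y_ex / 480 + os.y0; }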
Coordinate transforms
To convert coordinates between image buffers, ML uses 2D homogeneous transformations with scaling and translation components. There's no assumption regarding relative placement of the 3 buffers (it can be arbitrary). The only assumption is that the image buffers have parallel axes (i.e. you don't need rotations).
Computations are done in fixed point (a stored scaling factor of 1024 means 1.0 in real units).
// 2D scaling + translation:
// [ sx  0  tx ]
// [  0 sy  ty ]
// [  0  0   1 ]

// inverse:
// [ 1/sx    0  -tx/sx ]
// [    0 1/sy  -ty/sy ]
// [    0    0       1 ]

struct trans2d // 2D homogeneous transformation matrix with translation and scaling components
{
    int tx;
    int ty;
    int sx; // * 1024
    int sy; // * 1024
};
Macros (which you should use):
// offsets on one axis, in pixels
#define BM2LV_X(x) ((x) * bm2lv.sx / 1024 + bm2lv.tx)
#define BM2LV_Y(y) ((y) * bm2lv.sy / 1024 + bm2lv.ty)
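A small worked example of the fixed-point convention (the numbers are hypothetical, not taken from any camera):

// Hypothetical example: LV image 1.5x larger than the BMP overlay,
// shifted 10 LV pixels to the right.
struct trans2d example_bm2lv = { .tx = 10, .ty = 0, .sx = 1536, .sy = 1536 };  // 1536 = 1.5 * 1024
// With these values in bm2lv, BM2LV_X(100) = 100 * 1536 / 1024 + 10 = 160.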
Similar macros for LV2HD, BM2HD, LV2BM and so on.
// scaling a distance between image buffers
#define BM2LV_DX(x) (BM2LV_X(x) - BM2LV_X(0))
#define BM2LV_DY(y) (BM2LV_Y(y) - BM2LV_Y(0))
and so on. These macros apply only the scaling factor, without the translation.
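For example, a sketch of scaling a line thickness, where the offset between buffers is irrelevant (LV2HD_DX is assumed to exist by analogy):

// Sketch: a 5-pixel-thick line in BMP space, expressed in LV and HD pixels.
int thickness_bm = 5;
int thickness_lv = BM2LV_DX(thickness_bm);
int thickness_hd = LV2HD_DX(thickness_lv);   // analogous macro, assumed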
// offsets in the image matrix, in bytes
#define BM(x,y)    (x + y * vram_bm.pitch)
#define LV(x,y)    (x + y * vram_lv.pitch)
#define HD(x,y)    (x + y * vram_hd.pitch)

#define BM2LV(x,y) (BM2LV_Y(y) * vram_lv.pitch + BM2LV_X(x))
#define LV2BM(x,y) (LV2BM_Y(y) * vram_bm.pitch + LV2BM_X(x))
...
Use these macros to access a pixel when you know the coordinates.
Example:

int x, y; // some coordinates in BMP space

uint32_t * lv = (uint32_t *) get_yuv422_vram()->vram;
uint8_t  * hd = get_yuv422_hd_vram()->vram;
uint8_t  * bm = bmp_vram();

uint8_t  one_bmp_pixel     = bm[BM(x, y)];
uint32_t two_lv_pixels     = lv[BM2LV(x, y) / 4];   // UYVY
uint8_t  one_hd_luma_pixel = hd[BM2HD(x, y) + 1];   // UYVY
Looping through an image buffer (16:9 area):
for (int y = os.y0 + os.off_169; y < os.y_max - os.off_169; y += 2)
{
    for (int x = os.x0; x < os.x_max; x += 2)
    {
        // do something with the pixel at (x,y)
        // these coordinates are in BMP space
    }
}
For optimizations, look in zebra.c, for example at the false color feature.
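Putting the pieces together, a simplified sketch in the spirit of the zebra code (the threshold and the COLOR_RED overlay color are arbitrary choices; the real code in zebra.c is far more optimized):

// Sketch: mark very bright areas - walk the 16:9 area in BMP coordinates,
// read luma from the LV buffer and paint the corresponding overlay pixel.
uint8_t * lv = get_yuv422_vram()->vram;
uint8_t * bm = bmp_vram();

for (int y = os.y0 + os.off_169; y < os.y_max - os.off_169; y += 2)
{
    for (int x = os.x0; x < os.x_max; x += 2)
    {
        uint8_t luma = lv[BM2LV(x, y) + 1];   // UYVY: odd bytes hold luma
        if (luma > 235)                       // arbitrary "overexposed" threshold
            bm[BM(x, y)] = COLOR_RED;
    }
}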
Exercise
Draw something in the center of the LiveView image, in all 3 image buffers, without hardcoding coordinates. Make sure it works on external monitors.
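One possible sketch of a solution (the cross size and pixel values are arbitrary; the point is that every coordinate is derived from "os" and the conversion macros, so it adapts to any monitor):

// Sketch: draw a small cross in the center of the effective LiveView image,
// in all three buffers, without hardcoded coordinates.
uint8_t * bm = bmp_vram();
uint8_t * lv = get_yuv422_vram()->vram;
uint8_t * hd = get_yuv422_hd_vram()->vram;

int cx = os.x0 + os.x_ex / 2;   // center of the crop area, in BMP coordinates
int cy = os.y0 + os.y_ex / 2;

for (int d = -10; d <= 10; d++)
{
    bm[BM(cx + d, cy)]        = COLOR_WHITE;   // overlay, horizontal arm
    bm[BM(cx, cy + d)]        = COLOR_WHITE;   // overlay, vertical arm
    lv[BM2LV(cx + d, cy) + 1] = 255;           // LV luma
    lv[BM2LV(cx, cy + d) + 1] = 255;
    hd[BM2HD(cx + d, cy) + 1] = 255;           // HD luma
    hd[BM2HD(cx, cy + d) + 1] = 255;
}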