Color Calibration Notes

*******

A. There are flare images which were manually marked for removal.

*******

B. We provide manually drawn masks which indicate the region where hair
is present, per camera.

*******

C. Color correction procedure:

1. Subtract the average black color from all pixels and clamp pixels
   with negative components to [0,0,0] (camera noise correction).
2. Transform the color of each pixel using the camera's "camTRGB"
   transformation (per-camera color correction). The camTRGB matrix is
   stored in row-major format.
3. Multiply each channel of each pixel by the light's per-channel
   scaling "lightWBFactor" (per-light color correction).

*******

D. How did we determine the color calibration parameters?

*******

D1. Per-camera color calibration (camTRGB)

First, we acquired the three color calibration chart datasets (with the
Macbeth ColorChecker chart). Note that in any particular dataset, not
all of the color patches are visible from all the cameras. Also, in
some images the chart is at a steep grazing angle. Thus, for each
camera we picked an image where the chart is visible and at a good
angle.

Next, we sampled a small rectangular patch for each color, for each
camera. From this we extracted the average color of each patch, which
was taken as the color for the entire patch.

Finally, we used the published reference RGB colors to compute a 3x3
transformation from the acquired colors to the reference colors. Since
there are brightness variations in the images, we used the following
procedure to learn a transformation that leaves out scale (brightness
variations among the cameras):

1. Scale each pixel in the camera color patch:
   a. Find s = the length of the projection of the reference color
      onto the pixel color
   b. Multiply the pixel by s
2. Find a 3x3 RGB transform T matching the scaled pixels to the
   reference colors (using linear least squares)
3. Store T
4. Transform each pixel according to T
5. Repeat steps 1-4 until T is close to identity
6.
   Concatenate all the stored T's --> this is the per-camera 3x3 color
   transformation ('TRGB' = 3x3 "T"ransform in "RGB" space).
   (The camTRGB matrix is stored in row-major format.)

*******

D2. Per-light white balance (lightWBFactor)

1. Apply the per-camera TRGB transformation.
2. Choose a grey patch, which is expected to have a perfectly grey
   color.
3. For each light, compute a "white balance" factor that scales each
   RGB channel to the average intensity.
4. Note there are 16 camera images per light, so there are 16 white
   balance factors per light. The values are similar, so we just used
   the median scale in each channel.

*******

E. The domeCalibration file stores the camera and light locations, and
the camera projection matrices.

*******

F. The average images (thumbnails beside each link) show
color-corrected images of the hair in each camera, averaged over all
lights (but leaving out the flare images).

*******

Last Modified: December 4, 2008
Will Chang, wychang@cs.ucsd.edu
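Appendix: the three-step correction pipeline of section C can be sketched
as follows. This is a minimal NumPy version; the function and argument
names are mine, and it assumes camTRGB is applied to each pixel as a
column vector (pixel' = camTRGB * pixel), with the row-major storage
already unpacked into a 3x3 array.

```python
import numpy as np

def color_correct(image, avg_black, cam_trgb, light_wb):
    """Sketch of the section-C pipeline for one image.

    image:     HxWx3 float array of raw RGB pixels
    avg_black: length-3 average black color (camera noise level)
    cam_trgb:  3x3 per-camera color matrix (unpacked from row-major)
    light_wb:  length-3 per-light scale (lightWBFactor)
    """
    # 1. Subtract the average black color; clamp negatives to [0,0,0].
    out = np.clip(image - avg_black, 0.0, None)
    # 2. Apply the per-camera camTRGB transform to every pixel.
    out = out @ cam_trgb.T
    # 3. Scale each channel by the light's white-balance factor.
    return out * light_wb
```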
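The iterative fit in section D1 can likewise be sketched in NumPy. The
function name, the convergence tolerance, and the exact form of the
projection scale (the least-squares scale s = dot(x, ref) / dot(x, x))
are my assumptions; the source only says "the length of the projection
of the reference color onto the pixel color".

```python
import numpy as np

def fit_trgb(samples, refs, tol=1e-6, max_iters=100):
    """Sketch of the section-D1 loop: fit a scale-free 3x3 transform.

    samples: Nx3 measured patch colors, refs: Nx3 reference colors.
    """
    total = np.eye(3)
    x = samples.astype(float).copy()
    for _ in range(max_iters):
        # 1. Scale each sample along its own direction to best match
        #    its reference (removes per-patch brightness differences).
        s = np.sum(x * refs, axis=1) / np.sum(x * x, axis=1)
        x = x * s[:, None]
        # 2. Linear least squares for a 3x3 T minimizing ||x T' - refs||.
        T = np.linalg.lstsq(x, refs, rcond=None)[0].T
        # 3-4. Accumulate (store) T and transform the samples by it.
        total = T @ total
        x = x @ T.T
        # 5. Stop once T is close to identity.
        if np.abs(T - np.eye(3)).max() < tol:
            break
    # 6. The accumulated product of the stored T's is camTRGB.
    return total
```

Because each iteration rescales the samples before fitting, the
accumulated matrix captures the hue correction while leaving overall
brightness differences between cameras out of the transform.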