Skin color detection

Devise a simple skin color detector (Forsyth and Fleck 1999; Jones and Rehg 2001; Vezhnevets, Sazonov, and Andreeva 2003; Kakumanu, Makrogiannis, and Bourbakis 2007) based on chromaticity or other color properties.

1. Take a variety of photographs of people and calculate the \(xy\) chromaticity values for each pixel.
2. Crop the photos or otherwise indicate with a painting tool which pixels are likely to be skin (e.g., face and arms).
3. Calculate a color (chromaticity) distribution for these pixels. You can use something as simple as a mean and covariance measure or as complicated as a mean-shift segmentation algorithm (see Section 5.3.2). You can optionally use non-skin pixels to model the background distribution.
4. Use your computed distribution to find the skin regions in an image. One easy way to visualize this is to paint all non-skin pixels a given color, such as white or black.
5. How sensitive is your algorithm to color balance (scene lighting)?
6. Does a simpler chromaticity measurement, such as a color ratio (2.116), work just as well?

Short Answer

The detector builds a color (chromaticity) distribution model from skin pixels labeled in a variety of training images, then uses that model to classify pixels in new images as skin or non-skin. Its sensitivity to color balance and the effect of simpler chromaticity measures can then be evaluated.

Step by step solution

01 - Data Gathering

Capture a variety of photographs of people, ensuring diversity in skin tones and lighting conditions. For each image, compute the \(xy\) chromaticity values of every pixel, where \(x = X/(X+Y+Z)\) and \(y = Y/(X+Y+Z)\) after converting RGB to XYZ, and store them in an array (a sketch of this computation follows below).
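The following is a minimal NumPy sketch of this computation. It assumes the images are already linear RGB in the range [0, 1] (gamma-compressed sRGB should be linearized first); the RGB-to-XYZ matrix is the standard sRGB/D65 one, and the function name `xy_chromaticity` is ours, not from the text.

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ matrix (D65 white point).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def xy_chromaticity(image):
    """Map an (H, W, 3) linear-RGB image in [0, 1] to (H, W, 2) xy values."""
    xyz = image @ RGB_TO_XYZ.T                   # per-pixel linear transform
    s = xyz.sum(axis=-1, keepdims=True) + 1e-12  # guard against division by zero
    return (xyz / s)[..., :2]                    # x = X/(X+Y+Z), y = Y/(X+Y+Z)
```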
02 - Preprocessing

Crop the images or use a digital painting tool to mark which pixels are skin (e.g., face and arms). This labeling provides the ground-truth skin pixels from which the color model will be built (see the sketch below for turning a painted annotation into a pixel mask).
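As one possible workflow, assume the annotation is saved as a separate image in which skin pixels were painted pure red (any marker color works with the test adjusted); a boolean mask and the corresponding chromaticity samples can then be extracted as follows, reusing the hypothetical `xy_chromaticity` above.

```python
def skin_samples(image, annotation):
    """Collect xy chromaticities of annotated skin pixels.

    `annotation` is an (H, W, 3) image in [0, 1] in which skin pixels
    were painted pure red.
    """
    mask = (annotation[..., 0] > 0.9) & (annotation[..., 1] < 0.1)
    return xy_chromaticity(image)[mask]          # (N, 2) skin samples
```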
03 - Creation of Color Distribution Model

Calculate a color (chromaticity) distribution for the labeled skin pixels. This can be as simple as a mean and covariance (a single 2D Gaussian in chromaticity space) or as sophisticated as a mean-shift segmentation algorithm (see Section 5.3.2). Optionally, use non-skin pixels to model a background distribution, which allows a likelihood-ratio test instead of a single threshold. A sketch of the simple Gaussian fit follows below.
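A minimal sketch of the simple option, fitting a single 2D Gaussian to the pooled skin samples; `fit_gaussian` is an illustrative name, and the inverse covariance is precomputed for the distance test used in the next step.

```python
def fit_gaussian(samples):
    """Fit mean and covariance to (N, 2) chromaticity samples."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples.T)                      # 2x2 sample covariance
    return mean, np.linalg.inv(cov)              # inverse used for scoring
```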
04 - Apply Model to an Image

Use the computed distribution to find the skin regions in a new image, e.g., by thresholding each pixel's Mahalanobis distance from the skin model. An easy way to visualize the result is to paint all non-skin pixels a fixed color, such as white or black (see the sketch below).
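A sketch of the classification and visualization step, assuming the Gaussian model above; the threshold of 9.0 (roughly three standard deviations) is an arbitrary starting point worth tuning on your own data.

```python
def detect_skin(image, mean, cov_inv, threshold=9.0):
    """Keep likely skin pixels; paint everything else black."""
    d = xy_chromaticity(image) - mean
    # Per-pixel squared Mahalanobis distance d^T * cov_inv * d.
    dist2 = np.einsum('...i,ij,...j->...', d, cov_inv, d)
    out = image.copy()
    out[dist2 > threshold] = 0.0                 # non-skin -> black
    return out
```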
05 - Test Sensitivity to Color Balance

Analyze how sensitive the detector is to color balance (scene lighting). You can do this by re-photographing the scene under different illuminants or, more simply, by synthetically scaling the RGB channels of your test images to simulate a white-balance shift and observing how the detection mask changes (a sketch follows below).
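One way to run this test synthetically is to scale the color channels, simulating a warmer or cooler illuminant; the gain values below are arbitrary test settings, not prescribed by the exercise.

```python
def shift_color_balance(image, gains=(1.2, 1.0, 0.8)):
    """Simulate a white-balance shift by per-channel scaling."""
    return np.clip(image * np.asarray(gains), 0.0, 1.0)

# Compare detections before and after the shift, e.g. by the fraction
# of pixels whose skin/non-skin label changes.
```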
06 - Test with Simpler Chromaticity Measurement

Investigate whether a simpler chromaticity measurement, such as the color ratio of Equation (2.116), works just as well. This step is about simplifying the model and understanding how much can be stripped away without losing accuracy (a sketch of a normalized-rg variant follows below).
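A sketch of a simple color-ratio chromaticity, \(r = R/(R+G+B)\) and \(g = G/(R+G+B)\), computed directly from RGB with no XYZ conversion; whether this matches Equation (2.116) exactly depends on the text, so treat it as a representative color-ratio measure. The rest of the pipeline is unchanged: fit and detect with `rg_chromaticity` substituted for `xy_chromaticity`.

```python
def rg_chromaticity(image):
    """Normalized-rg color ratios: r = R/(R+G+B), g = G/(R+G+B)."""
    s = image.sum(axis=-1, keepdims=True) + 1e-12
    return (image / s)[..., :2]
```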


