The intersection of two 2D lines can be expressed as their cross product, assuming the lines are expressed in homogeneous coordinates.

1. If you are given more than two lines and want to find a point \(\tilde{\boldsymbol{x}}\) that minimizes the sum of squared distances to each line,
$$ D=\sum_{i}\left(\tilde{\boldsymbol{x}} \cdot \tilde{\boldsymbol{l}}_{i}\right)^{2}, $$
how can you compute this quantity? (Hint: Write the dot product as \(\tilde{\boldsymbol{x}}^{T} \tilde{\boldsymbol{l}}_{i}\) and turn the squared quantity into a quadratic form, \(\tilde{\boldsymbol{x}}^{T} \boldsymbol{A} \tilde{\boldsymbol{x}}\).)
2. To fit a line to a set of points, you can compute the centroid (mean) of the points as well as the covariance matrix of the points around this mean. Show that the line passing through the centroid along the major axis of the covariance ellipsoid (largest eigenvector) minimizes the sum of squared distances to the points.
3. These two approaches are fundamentally different, even though projective duality tells us that points and lines are interchangeable. Why are these two algorithms so apparently different? Are they actually minimizing different objectives?

Short Answer

The intersection of two 2D lines in homogeneous coordinates is given by their cross product. The point that minimizes the sum of squared distances to several lines is found by writing each dot product as \(\tilde{\boldsymbol{x}}^{T} \tilde{\boldsymbol{l}}_{i}\) and collecting the squared terms into a quadratic form \(\tilde{\boldsymbol{x}}^{T} \boldsymbol{A} \tilde{\boldsymbol{x}}\) with \(\boldsymbol{A}=\sum_{i} \tilde{\boldsymbol{l}}_{i} \tilde{\boldsymbol{l}}_{i}^{T}\), which is then minimized. To fit a line to a set of points, compute the centroid and the covariance matrix of the points; the line through the centroid along the major axis of the covariance ellipsoid (largest eigenvector) minimizes the sum of squared perpendicular distances. Despite projective duality between points and lines, the two algorithms look different because they minimize different objectives.

Step by step solution

01

Intersection of Two 2D Lines

In homogeneous coordinates, the intersection point \( \tilde{\boldsymbol{p}} \) of two lines \( \tilde{\boldsymbol{l}}_{1} \) and \( \tilde{\boldsymbol{l}}_{2} \) can be computed as their cross product, \( \tilde{\boldsymbol{p}} = \tilde{\boldsymbol{l}}_{1} \times \tilde{\boldsymbol{l}}_{2} \). This works because the cross product is orthogonal to both line vectors, so \( \tilde{\boldsymbol{p}} \cdot \tilde{\boldsymbol{l}}_{1} = \tilde{\boldsymbol{p}} \cdot \tilde{\boldsymbol{l}}_{2} = 0 \), i.e., the resulting point lies on both lines.
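A minimal sketch of this computation in Python with NumPy (the function name and the example lines are illustrative, not from the text):

```python
import numpy as np

def intersect_lines(l1, l2):
    """Intersection of two 2D lines given in homogeneous coordinates.

    Each line is a 3-vector (a, b, c) representing a*x + b*y + c = 0.
    The intersection point is their cross product; dividing by the last
    component converts it back to inhomogeneous (x, y) coordinates.
    """
    p = np.cross(l1, l2)
    if np.isclose(p[2], 0.0):
        raise ValueError("Lines are parallel (intersection at infinity).")
    return p[:2] / p[2]

# Example: the lines x = 1 and y = 2 meet at (1, 2).
print(intersect_lines(np.array([1.0, 0.0, -1.0]), np.array([0.0, 1.0, -2.0])))
```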
02

Minimizing the Sum of Squared Distances

Given multiple lines, we seek the point \( \tilde{\boldsymbol{x}} \) that minimizes the sum of squared distances to each line, \( D=\sum_{i}\left(\tilde{\boldsymbol{x}} \cdot \tilde{\boldsymbol{l}}_{i}\right)^{2} \). Writing each dot product as \( \tilde{\boldsymbol{x}}^{T} \tilde{\boldsymbol{l}}_{i} \), the sum becomes the quadratic form \( D=\tilde{\boldsymbol{x}}^{T} \boldsymbol{A} \tilde{\boldsymbol{x}} \) with \( \boldsymbol{A}=\sum_{i} \tilde{\boldsymbol{l}}_{i} \tilde{\boldsymbol{l}}_{i}^{T} \). If the point is written in augmented form \( \tilde{\boldsymbol{x}}=(x, y, 1) \) and each line \( \tilde{\boldsymbol{l}}_{i}=(a_{i}, b_{i}, c_{i}) \) is normalized so that \( a_{i}^{2}+b_{i}^{2}=1 \), each term is the squared perpendicular distance from the point to that line. Setting the derivatives of \( D \) with respect to \( x \) and \( y \) to zero then yields a small linear system whose solution is the least-squares intersection point.
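A sketch of this least-squares computation under the assumptions above (Python/NumPy; the function name and example lines are made up for illustration):

```python
import numpy as np

def least_squares_intersection(lines):
    """Point minimizing the sum of squared perpendicular distances to 2D lines.

    `lines` is an (N, 3) array of homogeneous lines (a, b, c).  Each line is
    first normalized so that (a, b) is a unit vector, which makes its dot
    product with the augmented point (x, y, 1) the signed distance.
    """
    lines = np.asarray(lines, dtype=float)
    lines = lines / np.linalg.norm(lines[:, :2], axis=1, keepdims=True)

    # Quadratic form: D = x_bar^T A x_bar with A = sum_i l_i l_i^T.
    A = lines.T @ lines

    # Minimize over (x, y) with the last component of x_bar fixed at 1:
    # dD/d(x, y) = 0  =>  A[:2, :2] @ (x, y) = -A[:2, 2].
    return np.linalg.solve(A[:2, :2], -A[:2, 2])

# Example: three nearly concurrent lines around the point (1, 2).
lines = [(1.0, 0.0, -1.0), (0.0, 1.0, -2.0), (1.0, 1.0, -3.1)]
print(least_squares_intersection(lines))
```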
03

Fitting a Line to Points

For a set of points \( \boldsymbol{p}_{i} \), compute the centroid (mean) \( \boldsymbol{c} \) and the covariance (scatter) matrix \( \boldsymbol{S}=\sum_{i}(\boldsymbol{p}_{i}-\boldsymbol{c})(\boldsymbol{p}_{i}-\boldsymbol{c})^{T} \). For a line through \( \boldsymbol{c} \) with unit direction \( \boldsymbol{d} \), the squared perpendicular distance of \( \boldsymbol{p}_{i} \) to the line is \( \|\boldsymbol{p}_{i}-\boldsymbol{c}\|^{2}-\left((\boldsymbol{p}_{i}-\boldsymbol{c}) \cdot \boldsymbol{d}\right)^{2} \), so the total error is \( \operatorname{tr}(\boldsymbol{S})-\boldsymbol{d}^{T} \boldsymbol{S} \boldsymbol{d} \). Minimizing this is equivalent to maximizing \( \boldsymbol{d}^{T} \boldsymbol{S} \boldsymbol{d} \), which is achieved by choosing \( \boldsymbol{d} \) along the eigenvector of \( \boldsymbol{S} \) with the largest eigenvalue. Hence the line passing through the centroid along the major axis of the covariance ellipsoid (largest eigenvector) minimizes the sum of squared distances to the points.
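A short sketch of this line-fitting step (Python/NumPy; the function name and the synthetic example data are assumptions for illustration):

```python
import numpy as np

def fit_line_pca(points):
    """Total least-squares line fit via the covariance (scatter) matrix.

    `points` is an (N, 2) array.  Returns the centroid and the unit direction
    of the best-fit line, i.e. the eigenvector of the scatter matrix with the
    largest eigenvalue.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    centered = points - centroid
    scatter = centered.T @ centered      # 2x2 scatter/covariance matrix

    # eigh returns eigenvalues in ascending order; take the last eigenvector.
    eigvals, eigvecs = np.linalg.eigh(scatter)
    direction = eigvecs[:, -1]
    return centroid, direction

# Example: noisy samples along y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
pts = np.stack([x, 2 * x + 1 + 0.1 * rng.standard_normal(x.size)], axis=1)
print(fit_line_pca(pts))
```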
04

Comparison of Different Approaches

Although projective duality makes points and lines formally interchangeable, the two algorithms look different because they optimize different quantities. The least-squares intersection point minimizes the squared dot products \( (\tilde{\boldsymbol{x}} \cdot \tilde{\boldsymbol{l}}_{i})^{2} \), an algebraic error that equals the Euclidean point-to-line distance only when the lines are suitably normalized, whereas the line-fitting procedure directly minimizes the Euclidean perpendicular distances from the points to the line. So yes, in general they are minimizing different objectives, which is why dualizing one problem does not simply reproduce the other algorithm.


Most popular questions from this chapter

Write a program that lets you interactively create a set of rectangles and then modify their "pose" (2D transform). You should implement the following steps: 1. Open an empty window ("canvas"). 2. Shift drag (rubber-band) to create a new rectangle. 3. Select the deformation mode (motion model): translation, rigid, similarity, affine, or perspective. 4. Drag any corner of the outline to change its transformation. This exercise should be built on a set of pixel coordinate and transformation classes, either implemented by yourself or from a software library. Persistence of the created representation (save and load) should also be supported (for each rectangle, save its transformation).

Ex 2.6: Noise level calibration Estimate the amount of noise in your camera by taking repeated shots of a scene with the camera mounted on a tripod. (Purchasing a remote shutter release is a good investment if you own a DSLR.) Alternatively, take a scene with constant color regions (such as a color checker chart) and estimate the variance by fitting a smooth function to each color region and then taking differences from the predicted function. 1. Plot your estimated variance as a function of level for each of your color channels separately. 2. Change the ISO setting on your camera; if you cannot do that, reduce the overall light in your scene (turn off lights, draw the curtains, wait until dusk). Does the amount of noise vary a lot with ISO/gain? 3. Compare your camera to another one at a different price point or year of make. Is there evidence to suggest that "you get what you pay for"? Does the quality of digital cameras seem to be improving over time?

In-camera color processing (challenging) If your camera supports a RAW pixel mode, take a pair of RAW and JPEG images, and see if you can infer what the camera is doing when it converts the RAW pixel values to the final color-corrected and gamma-compressed eight-bit JPEG pixel values. 1. Deduce the pattern in your color filter array from the correspondence between co-located RAW and color-mapped pixel values. Use a color checker chart at this stage if it makes your life easier. You may find it helpful to split the RAW image into four separate images (subsampling even and odd columns and rows) and to treat each of these new images as a "virtual" sensor. 2. Evaluate the quality of the demosaicing algorithm by taking pictures of challenging scenes which contain strong color edges (such as those shown in Section 10.3.1). 3. If you can take the same exact picture after changing the color balance values in your camera, compare how these settings affect this processing. 4. Compare your results against those presented by Chakrabarti, Scharstein, and Zickler (2009) or use the data available in their database of color images.

F-numbers and shutter speeds List the common f-numbers and shutter speeds that your camera provides. On older model SLRs, they are visible on the lens and shutter speed dials. On newer cameras, you have to look at the electronic viewfinder (or LCD screen/indicator) as you manually adjust exposures. 1. Do these form geometric progressions; if so, what are the ratios? How do these relate to exposure values (EVs)? 2. If your camera has shutter speeds of \(\frac{1}{60}\) and \(\frac{1}{125}\), do you think that these two speeds are exactly a factor of two apart or a factor of \(125/60=2.083\) apart? 3. How accurate do you think these numbers are? Can you devise some way to measure exactly how the aperture affects how much light reaches the sensor and what the exact exposure times actually are?

Skin color detection Devise a simple skin color detector (Forsyth and Fleck 1999; Jones and Rehg 2001; Vezhnevets, Sazonov, and Andreeva 2003; Kakumanu, Makrogiannis, and Bourbakis 2007) based on chromaticity or other color properties. 1. Take a variety of photographs of people and calculate the \(xy\) chromaticity values for each pixel. 2. Crop the photos or otherwise indicate with a painting tool which pixels are likely to be skin (e.g., face and arms). 3. Calculate a color (chromaticity) distribution for these pixels. You can use something as simple as a mean and covariance measure or as complicated as a mean-shift segmentation algorithm (see Section 5.3.2). You can optionally use non-skin pixels to model the background distribution. 4. Use your computed distribution to find the skin regions in an image. One easy way to visualize this is to paint all non-skin pixels a given color, such as white or black. 5. How sensitive is your algorithm to color balance (scene lighting)? 6. Does a simpler chromaticity measurement, such as a color ratio (2.116), work just as well?
