
CS5320/6320 Computer Vision Class Project

Team Members: Eric Johnson, Randy Hamburger
Weekly Report – March 26, 2007

Accomplishments:

Both:

- Decided to abandon the published equations and derive the inverse perspective projection mapping by hand.

• Recall from the last update that, after a great deal of hunting, we were finally able to piece together a working inverse perspective mapping technique based on the work of Bertozzi and Broggi, combined with a little help from Jiang and Maud (to remove the left-handed coordinate system).

• Unfortunately, their equations were found to violate one of the fundamental properties of perspective projection: straight lines do not remain straight when their mapping is applied. This was investigated in the document BertozziAndBroggi_InversePerspEqnsTest.pdf.

• Given this fundamental inaccuracy in their transformation, we finally decided to derive the mapping ourselves so that we could be confident of obtaining accurate results. After all, if straight lines don't even remain straight, there is no reason to believe that their transformation will get locations in the real world correct. Without this, it would be impossible to report lane marker locations accurately in real-world coordinates even if they can be identified correctly in the images.
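
As a note on why this check is decisive: under an ideal pinhole model, the map between image coordinates and any plane in the scene is a planar homography, i.e., a fractional-linear transform

```latex
x = \frac{a_1 u + b_1 v + c_1}{a_3 u + b_3 v + c_3},
\qquad
y = \frac{a_2 u + b_2 v + c_2}{a_3 u + b_3 v + c_3}
```

and every transform of this form maps straight lines to straight lines. Any mapping that bends lines therefore cannot be the correct image-to-ground-plane transform for a pinhole camera.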

- Derived a functional, if unrefined, method for the image-to-world mapping based on the planar road assumption (pages 1–3 of the derivation included at the end of this document).

• Note that this derivation uses a more convenient set of coordinate systems which relate image row and column location directly to a right-handed world coordinate system, one well suited to fitting a polynomial to the lane markers using the familiar form y(x).

• Also note that the camera was assumed to be located directly above the origin of the real-world coordinate system. This greatly simplifies the derivation (preventing errors), but this coordinate system may not be the one used by the autonomous vehicle for navigation. If so, this mapping may still be used, and a simple rigid transformation can convert the output points from our equations to their equivalents in the coordinate system employed by the vehicle. Since we have assumed a planar road, this requires only a simple 2D rotation and translation.
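
For reference, a generic flat-ground derivation of this kind (a sketch only: it assumes a pinhole camera at height h above the road plane, pitched down by θ, with focal length f in pixels and principal point (u_0, v_0); our own derivation uses different coordinate conventions, so its exact form differs) gives

```latex
x(u,v) = \frac{h\,(u - u_0)}{f\sin\theta + (v - v_0)\cos\theta},
\qquad
y(u,v) = \frac{h\left(f\cos\theta - (v - v_0)\sin\theta\right)}{f\sin\theta + (v - v_0)\cos\theta}
```

where x is lateral offset and y is forward distance on the road plane. Both expressions are fractional-linear in (u, v), so the line-preservation requirement above is satisfied automatically.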

- Performed a quick test of the new method on our MATLAB-generated road scene with the following results (see invPerspTest_Ours.m):


[Figure: Preliminary Results of Our Inverse Perspective Mapping]

• Visually, the results look very good, lending confidence to our derivation. The only problem is that the image is flipped vertically, indicating a sign error somewhere in this preliminary test script.
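
For the record, here is a minimal sketch of what this kind of test involves. The real script is invPerspTest_Ours.m; the camera parameters and file name below are illustrative placeholders, and the formulas are the generic flat-ground form given earlier rather than our exact equations.

```matlab
% Map every pixel of a synthetic road image to world coordinates and
% plot the result as a bird's-eye scatter.  All parameter values are
% placeholders, not our calibrated numbers.
h = 1.2; theta = 0.15; f = 500; u0 = 320; v0 = 240;  % assumed camera parameters
img = im2double(imread('roadScene.png'));            % assumed grayscale test image
[v, u] = ndgrid(1:size(img,1), 1:size(img,2));       % pixel row/column coordinates
den = f*sin(theta) + (v - v0)*cos(theta);            % common denominator of the mapping
ok = den > 0;                                        % keep only pixels below the horizon
x = h*(u - u0) ./ den;                               % lateral world position
y = h*(f*cos(theta) - (v - v0)*sin(theta)) ./ den;   % forward world distance
scatter(x(ok), y(ok), 1, img(ok), 'filled');         % bird's-eye view of the scene
axis equal; colormap gray;
```

A sign slip in a mapping like this can produce exactly the kind of vertical flip described above, which is why a quick visual test is worthwhile.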

Eric: (had some free time over Spring break)

1. Finished the inverse perspective mapping derivation, leading to a simplified image-to-world mapping, the world-to-image mapping, and several relations useful for simplifying the calibration of the necessary camera parameters. These results are on pages 4–8 of the derivation found at the end of this document.
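
For orientation, under the same generic flat-ground conventions sketched earlier (a stand-in only; our actual derivation uses different coordinate conventions), the world-to-image direction follows by projecting a road point (x, y) back through the camera:

```latex
u(x,y) = u_0 + \frac{f\,x}{y\cos\theta + h\sin\theta},
\qquad
v(x,y) = v_0 + \frac{f\left(h\cos\theta - y\sin\theta\right)}{y\cos\theta + h\sin\theta}
```

Substituting these into the image-to-world expressions above returns (x, y), which is a useful consistency check on any such derivation.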

2. Created a test of the derived equations in Mathcad to verify that, unlike the published equations, the mapping performs as expected. This document can be viewed at http://www.eng.utah.edu/~hamburge/Our_Inverse_Persp_Eqns_Test.pdf. For convenience, two of the key sets of figures from that document are shown below. They illustrate that this new mapping satisfies the basic criterion that straight lines should remain straight – an important improvement over the results obtained using the published equations.


[Figure: Image Points (rgrid vs. cgrid) and Mapped to World Points (Bird's Eye View) (ygrid vs. xgrid)]

[Figure: World Points (Bird's Eye View) (ygrid vs. xgrid) and Mapped to Image Points (rgrid vs. cgrid)]
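
A quick numeric version of the Mathcad straightness check can also be run in MATLAB. This is a hedged sketch using the generic flat-ground formulas from earlier (not our exact equations), with placeholder parameters.

```matlab
% Map three collinear image points to the ground plane and verify that
% they remain collinear (twice the triangle area should be ~0).
h = 1.2; theta = 0.15; f = 500; u0 = 320; v0 = 240;  % placeholder parameters
u = [100 300 500];  v = [260 280 300];               % collinear image points
den = f*sin(theta) + (v - v0)*cos(theta);
x = h*(u - u0) ./ den;
y = h*(f*cos(theta) - (v - v0)*sin(theta)) ./ den;
area2 = (x(2)-x(1))*(y(3)-y(1)) - (x(3)-x(1))*(y(2)-y(1));  % 2x signed area
fprintf('Collinearity residual: %g\n', area2);       % ~0 for a line-preserving map
```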

3. Revised the MATLAB script used to test our inverse perspective mapping functions (see invPerspTest_Ours2.m) to use the simplified image-to-world expressions and added a comparison of the output to the parameters originally used to generate the MATLAB test image. The result of this comparison is shown below.

The correspondence isn't perfect, but after quite a bit of head scratching, checking over the derivation of the equations, and fiddling with the parameters, this was the best result that could be obtained. It was finally concluded that there must be some slight inaccuracy in the way MATLAB generates the image. Still, the error is fairly negligible up to about 50 feet in front of the camera, and beyond that the errors are well within what should be expected, given that the data in the image gets coarser the farther the points are from the camera due to the perspective effect.

4. Revised the MATLAB script used to test the equations based on Bertozzi and Broggi’s work to also provide a comparison of their result versus the nominal lane marker and stop line positions (see invPerspTest_Broggi.m). The results are shown below.

These results confirm our worry that the published equations not only warp straight lines into curves but also introduce significant inaccuracies in the locations of features in world coordinates. Here the error in the stop line location is around 25 feet, versus less than 5 feet of error using our mapping. Given these findings, our effort in deriving the new equations seems well justified.

5. Created a script to calibrate the camera parameters for the test data provided for the autonomous vehicle project (see invPerspCalibrate.m).

a. Several good test images were identified from the avi files we generated, and then the parameters were varied using a combination of guess-and-check and intuition provided by the mapping equations until good results were obtained. With surprisingly little effort, this produced the results in the sample images below (a sketch of the process follows the images).

b. Looking at the results, the inverse-perspective-mapped images seem to be quite accurate and have a useful amount of resolution up to about 20–25 m (around 75 feet) in front of the camera. This is very encouraging.

c. Note that in the inverse-perspective-mapped images, the lane marker width and orientation are very uniform, unlike in the original images. This should make detection in the mapped images much easier. Also note that the results of the mapping look good even with some slight hills and road crowning which violate the original assumption of a planar surface. Given this, we hope that the errors from these effects will be tolerable.

[Sample calibration images: original video frames and their inverse-perspective-mapped results]
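
As referenced in item 5a above, here is a minimal sketch of the guess-and-check loop, again using the generic flat-ground formulas with placeholder parameter values (the real script is invPerspCalibrate.m, and our actual equations differ in conventions):

```matlab
% Remap a grabbed video frame to a bird's-eye view for the current
% parameter guesses, inspect it, and tweak the guesses by hand.
frame = im2double(imread('testFrame.png'));          % assumed grayscale frame from an avi
h = 1.5; theta = 0.12; f = 480; u0 = 320; v0 = 240;  % current guesses (placeholders)
[v, u] = ndgrid(1:size(frame,1), 1:size(frame,2));
den = f*sin(theta) + (v - v0)*cos(theta);
ok = den > 1;                                        % keep pixels safely below the horizon
x = h*(u - u0) ./ den;
y = h*(f*cos(theta) - (v - v0)*sin(theta)) ./ den;
% Resample the scattered world points onto a regular grid; griddata is
% slow but simple enough for calibration work.
[XI, YI] = meshgrid(linspace(-8, 8, 320), linspace(2, 30, 280));
birdseye = griddata(x(ok), y(ok), frame(ok), XI, YI);
imagesc(XI(1,:), YI(:,1), birdseye); axis xy; axis equal; colormap gray;
% Lane markers should come out straight, parallel, and uniformly wide;
% adjust theta, f, and h and rerun until they do.
```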

Randy: (was traveling for business during Spring break)

1. Reviewed the weekly report.
2. Updated web page.

Next Steps:

1. Finish up the last details of the inverse perspective mapping.

a. The results with our hand-derived equations seem to be correct, and we have developed a working set of camera parameters for the available image data.

b. Now we just need to come up with a faster interpolation method than MATLAB's griddata function. With horizontal lines remaining straight using our equations, this should be pretty easy; one possible approach is sketched after this list.

2. Create the ground truth answers for a few of the test images.

3. Start putting together our building blocks into a working lane detection algorithm.

4. Create a utility to display the results for convenient visual feedback as we develop the algorithm.
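
One possible faster scheme is sketched below under the same generic flat-ground assumptions used earlier (placeholder parameters and file name; our real equations differ in conventions but share the key structure that the forward distance depends only on the image row, while the lateral offset is linear in the column):

```matlab
% Hypothetical row-wise replacement for griddata: because y depends only
% on the image row v, and x is linear in the column u along each row,
% one 2-D scattered interpolation collapses into a pair of 1-D interp1
% calls per output row.
h = 1.5; theta = 0.12; f = 480; u0 = 320; v0 = 240;  % placeholder parameters
frame = im2double(imread('testFrame.png'));          % assumed grayscale frame
xi = linspace(-8, 8, 320);                           % output lateral grid (m)
yi = linspace(5, 30, 280);                           % output forward grid (m)
birdseye = nan(numel(yi), numel(xi));
for k = 1:numel(yi)
    % Image row corresponding to forward distance yi(k) (forward mapping)
    vq = v0 + f*(h*cos(theta) - yi(k)*sin(theta)) / (yi(k)*cos(theta) + h*sin(theta));
    srcRow = interp1(1:size(frame,1), frame, vq);    % blend the two nearest image rows
    % Columns corresponding to the lateral positions xi at this distance
    uq = u0 + f*xi / (yi(k)*cos(theta) + h*sin(theta));
    birdseye(k, :) = interp1(1:size(frame,2), srcRow, uq);
end
imagesc(xi, yi, birdseye); axis xy; axis equal; colormap gray;
```

This costs two 1-D interpolations per output row instead of a full scattered-data triangulation, so it should be far cheaper than griddata for video-rate use.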

[Attachment: hand derivation of the inverse perspective mapping equations, pages 1–8]