
Flattening Curved Documents in Images

    Jian Liang, Daniel DeMenthon, David Doermann
    Language and Media Processing Laboratory

    University of Maryland, College Park, MD, 20770



    Abstract

    Compared to scanned images, document pictures captured by camera can suffer from distortions due to perspective and page warping. It is necessary to restore a frontal planar view of the page before other OCR techniques can be applied. In this paper we describe a novel approach for flattening a curved document in a single picture captured by an uncalibrated camera. To our knowledge this is the first reported method able to process general curved documents in images without camera calibration. We propose to model the page surface by a developable surface, and to exploit the properties (parallelism and equal line spacing) of the printed textual content on the page to recover the surface shape. Experiments show that the output images are much more OCR friendly than the original ones. While our method is designed to work with general developable surfaces, it can be adapted for typical special cases including planar pages, scans of thick books, and opened books.

    1. Introduction

    Digital cameras have proliferated rapidly in recent years due to their small size, ease of use, fast response, rich set of features, and dropping price. For the OCR community, they present an attractive alternative to scanners as imaging devices for capturing documents because of their flexibility. However, compared to digital scans, camera-captured document images often suffer from many degradations, both from intrinsic limits of the devices and because of the unconstrained external environment. Among many new challenges, one of the most important is the distortion due to perspective and curved pages. Current OCR techniques are designed to work with scans of flat 2D documents, and cannot handle distortions involving 3D factors.

    One way of dealing with these 3D factors is to use special equipment such as structured light to measure the 3D range data of the document, and recover the 2D plane of the page [1, 12]. The requirement for costly equipment, however, makes these approaches unattractive.

    The problem of recovering planar surface orientations from images has been addressed by many researchers inside the general framework of shape estimation [5, 7, 10], and applied to the removal of perspective in images of flat documents [3, 4, 11]. However, page warping adds a non-linear, non-parametric process on top of this, making it much more difficult to recover the 3D shape. As a way out, people add more domain knowledge and constraints. For example, when scanning thick books, the portion near the book spine forms a cylinder shape [8], and results in curved text lines in the image. Zhang and Tan [16] estimate the cylinder shape from the varying shade in the image, assuming that flatbed scanners have a fixed light projection direction. For camera-captured document images, Cao et al. [2] use a parametric approach to estimate the cylinder shape of an opened book. Their method relies on text lines formed by bottom-up clustering of connected components. Apart from the cylinder shape assumption, they also have a restriction on the pose that requires the image plane to be parallel to the generatrix of the page cylinder. Gumerov et al. [6] present a method for shape estimation from single views of developable surfaces. They do not require cylinder shapes or special poses. However, they require correspondences between closed contours in the image and in the unrolled page. They propose to use the rectilinear page boundaries or margins in document images as contours. This may not be applicable when part of the page is occluded.

    Another way out is to bypass the shape estimation step and come up with an approximate flat view of the page, with what we call shape-free methods. For scans of thick bound volumes, Zhang and Tan [15] have another method for straightening curved text lines. They find text line curves by clustering connected components, and move the components to restore straight horizontal baselines. The shape is still unknown, but the image can be OCRed. Under the same cylinder shape and parallel view assumptions as Cao et al., Tsoi et al. [14] flatten images of opened books by a bilinear morphing operation which maps the curved page boundaries to a rectangle. Their method is also shape-free. Although shape-free methods are simpler, they can only deal with small distortions and cannot be applied when shape and pose are arbitrary.

    Our goal is to restore a frontal planar image of a warped document page from a single picture captured by an uncalibrated digital camera. Our method is based on two key observations: 1) a curved document page can be modeled by a developable surface, and 2) printed textual content on the page forms texture flow fields that provide strong constraints on the underlying surface shape [9]. More specifically, we extract two texture flow fields from the textual area of the projected image, which represent the local orientations of projected text lines and vertical character strokes, respectively. The intrinsic parallelism of the texture flow vectors on the curved page is used to detect the projected rulings, and the equal text line spacing property on the page is used to compute the vanishing points of the surface rulings. Then a developable surface is fitted to the rulings and texture flow fields, and the surface is unrolled to generate the flat page image.

    Printed textual content provides the most prominent and stable visual features in document images [3, 11, 2, 15]. In real applications, other visual cues are not as reliable. For example, shading may be biased by multiple light sources; contours and edges may be occluded. In terms of how printed textual content is used, our work differs from [15, 2] in that we do not rely on connected component analysis, which may have difficulty with figures or tables. The mixture of text and non-text elements would also make traditional shape-from-texture techniques difficult to apply, while our texture flow based method still works. Overall, compared to other work, our method does not require a flat page, does not require 3D range data, does not require camera calibration, does not require special shapes or poses, and can be applied to arbitrary developable document pages.

    The remainder of this paper is organized into five sections. Section 2 introduces developable surfaces and describes the texture flow fields generated by printed text on document pages. Section 3 focuses on texture flow field extraction. We describe the details of surface estimation in Section 4, and discuss the experimental results in Section 5. Section 6 concludes the paper.

    Figure 1. Strip approximation of a developable surface.

    2. Problem Modeling

    The shape of a smoothly rolled document page can be modeled by a developable surface. A developable surface can be mapped isometrically onto a Euclidean plane, or, in plain English, can be unrolled onto a plane without tearing or stretching. This process is called development. Development does not change the intrinsic properties of the surface, such as curve length or the angle formed by curves.

    Rulings play a very important role in defining developable surfaces. Through any point on the surface there is one and only one ruling, except for the degenerate case of a plane. No two rulings intersect, except at conic vertices. All points along a ruling share a common tangent plane. It is well known in elementary differential geometry that, given sufficient differentiability, a developable surface is either a plane, a cylinder, a cone, the collection of the tangents of a curve in space, or a composition of these types. On a cylindrical surface, all rulings are parallel; on a conic surface, all rulings intersect at the conic vertex; for the tangent surface case, the rulings are the tangent lines of the underlying space curve; only in the planar case are rulings not uniquely defined.

    The fact that all points along a ruling of a developable surface share a common tangent plane to the surface leads to the result that the surface is the envelope of a one-parameter family of planes, which are its tangent planes. Therefore a developable surface can be piecewise approximated by planar strips that belong to the family of tangent planes (Fig. 1). Although this is only a first order approximation, it is sufficient for our application. The group of planar strips can be fully described by a set of reference points {Pi} along a curve on the surface, and the surface normals {Ni} at these points.

    Suppose that for every point on a developable surface we select a tangent vector; we say that the tangents are parallel with respect to the underlying surface if, when the surface is developed, all tangents are parallel in the 2D plane. A developable surface covered by a uniformly distributed non-isotropic texture can produce the perception of a parallel tangent field. On document pages, the texture of printed textual content forms two parallel tangent fields: the first field follows the local text line orientation, and the second field follows the vertical character stroke orientation. Since the text line orientation is more prominent, we call the first field the major tangent field and the second the minor tangent field.

    The two 3D tangent fields are projected to two 2D flow fields in camera-captured images, which we call the major and minor texture flow fields, denoted EM and Em. The 3D rulings on the surface are also projected to 2D lines in the image, which we call the 2D rulings or projected rulings.

    The texture flow fields and 2D rulings are not directly visible. Section 3 introduces our method of extracting texture flow from the textual regions of document images. The texture flow is used in Section 4 to derive the projected rulings, find the vanishing points of the rulings, and estimate the page shape.

    3. Texture Flow Computation

    We are only interested in the texture flow produced by printed textual content in the image; therefore we first need to detect the textual area and textual content. Among the various text detection schemes proposed in the literature we adopt a simple one, since this is not the focus of this work. We use an edge detector to find pixels with strong gradients, and apply an open operator to expand those pixels into the textual area. Although simple, this method works well for document images with simple backgrounds. Then we use Niblack's adaptive thresholding [13] to get binary images of the textual content (Fig. 2). The binarization does not have to be perfect, since we only use it to compute the texture flow fields, not for OCR.
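    Niblack's rule sets a per-pixel threshold T(x, y) = m(x, y) + k · s(x, y), where m and s are the mean and standard deviation over a local window. A minimal sketch of this binarization step (the window size, k = −0.2, and the function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def niblack_binarize(gray, window=15, k=-0.2):
    """Niblack adaptive thresholding: T(x, y) = m(x, y) + k * s(x, y).
    Pixels darker than their local threshold are marked as text (1)."""
    h, w = gray.shape
    r = window // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            # local window, clipped at the image border
            patch = gray[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            t = patch.mean() + k * patch.std()
            out[y, x] = 1 if gray[y, x] < t else 0
    return out
```

A production implementation would use integral images so that the local mean and standard deviation cost constant time per pixel.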

    The local texture flow direction can be viewed as a local skew direction. We divide the image into small blocks, and use projection profile analysis to compute the local skew at the center of each block. Instead of computing one skew angle, we compute several prominent skew angles as candidates. Initially their confidence values represent the energy of the corresponding projection profiles. A relaxation process follows to adjust the confidences in such a way that candidates that agree with their neighbors get higher confidences. As a result, the local text line directions are found. The relaxation process is necessary because, due to randomness in small image blocks, the text line orientations may not initially be the most prominent. We use interpolation and extrapolation to fill a dense texture flow field EM that covers every pixel. Next, we remove the major texture flow directions from the local skew candidates, reset the confidences for the remaining candidates, and apply the relaxation again. This time the results are the local vertical character stroke orientations. We compute a dense minor texture flow field Em in the same way.

    Figure 2. Text area detection and text binarization. (a) A document image captured by a camera. (b) Detected text area. (c) Binary text image.

    Fig. 3 shows the major and minor texture flow fields computed from a binarized text image. Notice that EM is quite good in Fig. 3(c) even though two figures are embedded in the text. Overall, EM is much more accurate than Em.
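    The per-block skew scoring above can be sketched as follows, assuming binary blocks and scoring each candidate angle by the variance of its projection profile (a sharp, peaky profile means text lines align with the candidate direction). The angle grid, bin count, and function name are illustrative, and the relaxation across neighboring blocks is omitted:

```python
import numpy as np

def local_skew_candidates(block, angles=np.deg2rad(np.arange(-45, 46, 3)), top=4):
    """Score candidate skew angles for a binary text block by the variance
    of the projection profile; return the `top` angles with their scores."""
    ys, xs = np.nonzero(block)
    scores = []
    for a in angles:
        # signed distance of each text pixel from a line at angle `a`
        proj = -xs * np.sin(a) + ys * np.cos(a)
        hist, _ = np.histogram(proj, bins=32)
        scores.append(hist.var())  # peaky profile -> high variance
    order = np.argsort(scores)[::-1][:top]
    return [(float(angles[i]), float(scores[i])) for i in order]
```

The returned candidates would then carry these scores as initial confidences into the relaxation step.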

    4. Page Shape Estimation

    4.1. Finding Projected Rulings

    Consider a developable surface D, a ruling R on D, the tangent plane T at R, and a parallel tangent field V defined on D. For a group of points {Pi} along R, all the tangents {V(Pi)} at these points lie on T, and are parallel. Suppose the camera projection maps {Pi} to {pi}, and {V(Pi)} to {v(pi)}. Then under orthographic projection, {v(pi)} are parallel lines on the image plane; under spherical projection, {v(pi)} all lie on great circles on the view sphere that intersect at two common points; and under perspective projection, {v(pi)} are lines that share a common vanishing point. Therefore, in theory, if we have EM or Em, we can detect projected rulings by testing the texture flow orientations along a ruling candidate against the above principles.

    Figure 3. Texture flow detection. (a) The four local skew candidates in a small block. After relaxation the two middle candidates are eliminated. (b)(c) Visualization of the major texture flow field. (d)(e) Visualization of the minor texture flow field.

    However, due to the errors in the estimated EM and Em, we have found that the texture flow at individual pixels is too noisy for this direct method to work well. Instead, we propose to use small blocks of the texture flow field to increase the robustness of ruling detection.

    In a simplified case, consider the image of a cylindrical surface covered with a parallel tangent field under orthographic projection. Suppose we take two small patches of the same shape (this is possible for a cylindrical surface) on the surface along a ruling. We can show that the two tangent sub-fields in the two patches project to two identical texture flow sub-fields in the image. This idea can be extended to general developable surfaces and perspective projection, as locally a developable surface can be approximated by cylindrical surfaces, and the projection can be approximated by orthographic projection. If the two patches are not taken along the same ruling, however, the above property will not hold. Therefore we have the following pseudo-code for detecting a 2D ruling that passes through a given point (x, y) (see Fig. 4):

    1. For each ruling direction candidate θ ∈ [0, π) do the following

    (a) Fix the line l(θ, x, y) that passes through (x, y) and makes angle θ with the x-axis

    (b) Slide the center of a window along l at equal steps and collect the major texture flow field inside the window as a sub-field; let {Ei}, i = 1, ..., n, be the collected sub-fields, where n is the number of such sub-fields

    (c) The score of the candidate l(θ, x, y) is

        s(θ) = (1/n) ∑_{i=2}^{n} d(Ei−1, Ei),

        where d(Ei−1, Ei) measures the difference between two sub-fields, which in our implementation is the sum of squared differences

    2. Output the θ that corresponds to the smallest s(θ) as the ruling direction

    We have found that the result is only weakly sensitive to the window size and step length over a large range.
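    A minimal sketch of the scoring loop above, assuming the major texture flow field is given as a 2D array of per-pixel orientation angles; the window size, step length, angular resolution, and names are illustrative choices, not the paper's parameters:

```python
import numpy as np

def ruling_score(flow, x, y, theta, n_steps=8, step=10, win=7):
    """Score a candidate 2D ruling through (x, y) at angle `theta`: sample
    windows of the texture flow field along the line and sum squared
    differences between consecutive windows (lower = better)."""
    h, w = flow.shape[:2]
    r = win // 2
    subs = []
    for i in range(-n_steps // 2, n_steps // 2 + 1):
        cx = int(round(x + i * step * np.cos(theta)))
        cy = int(round(y + i * step * np.sin(theta)))
        if r <= cx < w - r and r <= cy < h - r:
            subs.append(flow[cy - r:cy + r + 1, cx - r:cx + r + 1])
    if len(subs) < 2:
        return np.inf
    return sum(np.sum((a - b) ** 2) for a, b in zip(subs, subs[1:])) / len(subs)

def detect_ruling(flow, x, y, n_angles=90):
    """Return the angle in [0, pi) minimizing the dissimilarity score."""
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    return min(thetas, key=lambda t: ruling_score(flow, x, y, t))
```

On a cylinder-like flow field whose orientation varies only across rulings, windows sampled along a true ruling are identical and score zero.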

    To find a group of projected rulings that covers the whole text area, a group of reference points is first selected automatically, and then a projected ruling is computed for each point. Because no two rulings intersect inside the 3D page, we add the restriction that two nearby projected rulings must not intersect inside the textual area.

    As Fig. 4 shows, our ruling detection scheme works better in high curvature parts of the surface than in flat parts. One reason is that in flat parts the rulings are not uniquely defined. On the other hand, note that when the surface curvature is small, the shape recovery is not sensitive to the ruling detection result, so the reduced accuracy in ruling computation does not have severe adverse effects on the final result.

    4.2. Computing Vanishing Points of Rulings

    We compute the vanishing points of rulings based on the equal text line spacing property of documents. For printed text lines in a paragraph, the line spacing is usually fixed. When a 3D ruling intersects these text lines, the intersections are equidistant in 3D space. Under perspective projection, if the 3D ruling is not parallel to the image plane, these intersections project to non-equidistant points in the image, and the changes of distances reveal the vanishing point position:

    Let {Pi}, i = −∞, ..., ∞, be a set of points along a line in 3D space such that |PiPi+1| is constant. A perspective projection maps Pi to pi on the image plane. Then by the invariance of the cross ratio we have

    Figure 4. Projected ruling estimation. (a) Two projected ruling candidates and three image patches along the ruling candidates. (b) The estimated rulings. (c)(d)(e) Enlarged image patches. Notice that (c) and (d) have similar texture flow (but dissimilar texture) and are both different from (e).

        |pi pj| |pk pl| / (|pi pk| |pj pl|) = |Pi Pj| |Pk Pl| / (|Pi Pk| |Pj Pl|) = |i − j| |k − l| / (|i − k| |j − l|), ∀i, j, k, l.  (2)

    As a result we have

        |pi pi+1| |pi+2 pi+3| / (|pi pi+2| |pi+1 pi+3|) = 1/4, ∀i,  (3)

        |pi pi+1| |pi+2 v| / (|pi pi+2| |pi+1 v|) = 1/2, ∀i,  (4)

    where v is the vanishing point corresponding to p∞ or p−∞.

    We will come back to Eq. 4 and Eq. 3 after we describe how we find {pi}. We use a modified projection profile analysis to find the intersections of a projected ruling and the text lines. Usually a projection profile is built by projecting pixels in a fixed direction onto a base line, such that each bin of the profile is ∑ I(x, y : ax + by = c). We call this a linear projection profile, which is suitable for straight text lines. When text lines are curved, we project pixels along the curve onto the base line (the projected ruling in our context), such that each bin is ∑ I(x, y : f(x, y) = c), where f defines the curve. We call the result a curve-based projection profile (CBPP). The peaks of a CBPP correspond to positions where text lines intersect the base line (assuming text pixels have intensity 1). Fig. 5 shows how we identify the text line positions along a ruling.

    Figure 5. Computing the vanishing point of a 2D ruling. (a) A 2D ruling in the document image. (b) The curve-based projection profile (CBPP) along the ruling in (a). (c) The smoothed and binarized CBPP with three text blocks identified. In each text block, the line spacing between top lines in the image (to the left in the profile graph) is smaller than that between lower lines (although this is not very visible to the eye). This difference is due to perspective foreshortening and is exploited to recover the vanishing point. In this particular case, the true vanishing point is (−3083.70, 6225.06) and the estimated value is (−3113, 5907) (both in pixel units).
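    For a single triple of consecutive line positions, Eq. 4 can be solved for the vanishing point in closed form; a sketch in 1D coordinates along the ruling (the function name is hypothetical):

```python
def vanishing_from_triple(a0, a1, a2):
    """Solve Eq. 4 for the 1D vanishing-point coordinate c, given three
    consecutive text-line positions a0 < a1 < a2 along a projected ruling:
        (a1 - a0) * (c - a2) / ((a2 - a0) * (c - a1)) = 1/2
    Returns c, the image of the point at infinity on the 3D ruling."""
    d1 = a1 - a0
    d2 = a2 - a0
    denom = d1 - 0.5 * d2
    if abs(denom) < 1e-12:
        # equidistant in the image: the ruling is parallel to the image plane
        return float("inf")
    return (d1 * a2 - 0.5 * d2 * a1) / denom
```

For example, positions generated by perspective projection of equally spaced 3D points (a_i = c + A/(i + B)) recover c exactly; equidistant image positions yield a vanishing point at infinity.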

    The sequence of text line positions is clustered into K groups {p_i^k}, k = 1, ..., K, such that each group {p_i^k}, i = 1, ..., n_k, satisfies Eq. 3 within an error threshold. The purpose of the clustering is to separate text paragraphs, and to remove paragraphs that have fewer than three lines.

    To find the best vanishing point v that satisfies Eq. 4 for every group {p_i^k}, we first represent p_i^k by its 1D coordinate a_i^k along the ruling r (the origin can be any point on r). We write a_i^k = b_i^k + e_i^k, where e_i^k is the error term and b_i^k is the true but unknown position of the text line.

    Under the assumption that e_i^k follows a normal distribution, the best v should minimize the error function

        E({a_i^k}, {b_i^k}) = ∑_{k=1}^{K} ∑_{i=1}^{n_k} (e_i^k)² = ∑_{k=1}^{K} ∑_{i=1}^{n_k} (a_i^k − b_i^k)²,  (5)

    under the constraints

        0 = F_j^k(c, {b_i^k}) = (b_j^k − b_{j+1}^k)(b_{j+2}^k − c) / ((b_j^k − b_{j+2}^k)(b_{j+1}^k − c)) − 1/2,  (6)

        for k = 1, ..., K, j = 1, ..., n_k − 2,

    where c is the 1D coordinate of v on r. The best c can be found by the method of Lagrange multipliers,

        c = argmin_{c, {b_i^k}} [ E({a_i^k}, {b_i^k}) + ∑_{k=1}^{K} ∑_{j=1}^{n_k − 2} λ_j^k F_j^k(c, {b_i^k}) ],

    where {λ_j^k} are the Lagrange multipliers to be determined.
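    As a simplified substitute for the full constrained least squares of Eqs. 5 and 6, one can plug the observed positions a_i directly into the constraint F of Eq. 6 and grid-search the 1D coordinate c. This sketch ignores the latent true positions b_i and is only reasonable when measurement noise is small; the names and grid parameters are illustrative:

```python
def vanishing_1d(groups, c_lo, c_hi, n_grid=2000):
    """Grid-search the 1D vanishing-point coordinate c that minimizes the
    total squared residual of the Eq. 6 constraint, evaluated on the
    observed line positions instead of the latent true positions."""
    def residual(c):
        total = 0.0
        for a in groups:  # each group: sorted 1D line positions of one paragraph
            for j in range(len(a) - 2):
                f = ((a[j] - a[j + 1]) * (a[j + 2] - c)
                     / ((a[j] - a[j + 2]) * (a[j + 1] - c))) - 0.5
                total += f * f
        return total
    cands = [c_lo + (c_hi - c_lo) * i / (n_grid - 1) for i in range(n_grid)]
    return min(cands, key=residual)
```

The search range must exclude the observed positions themselves (where the constraint has poles); in practice the vanishing point lies well outside the text area, so this is not restrictive.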

    4.3. Developable Surface Fitting

    The projected rulings {r_i}, i = 1, ..., n, divide the image of the document page into slices (Fig. 4(b)). Suppose that the optical center of the camera is O; we call the plane S_i that passes through O and r_i a dividing plane. The name comes from the fact that the pencil of planes {S_i} divides the page surface D into pieces. Each piece is approximated by a planar strip P_i, and the whole page is fitted by a group of planar strips {P_i}, i = 1, ..., n+1. The development of D is accomplished by unfolding the strips one by one onto the plane. In the unfolded plane, we expect the text lines to be continuous, parallel, straight, and orthogonal to the vertical character stroke direction. Therefore we propose the following four principles that the group of strips {P_i} should satisfy in order to generate a flat image with the desired properties.

    Continuity. Two adjacent planar strips are continuous at the dividing plane. Formally, for i = 1, ..., n, let L⁻_i be the intersection line of P_i and S_i, L⁺_i the intersection line of P_{i+1} and S_i, and α_i the angle between L⁻_i and L⁺_i; the continuity principle then requires that α_i = 0. The overall measure of continuity is defined as M_c = ∑_i α_i.


    Parallelism. The texture flow directions inside a planar strip map to parallel directions on the unfolded plane. If we select M sample points in the image area that corresponds to P_i, and after unfolding the texture flow directions at these points map to directions {β_ij}, j = 1, ..., M, the measure of parallelism is M_p = ∑_i var({β_ij}), where var(·) is the sample variance function. Ideally M_p is zero.

    Orthogonality. The major and minor texture flow fields inside a planar strip map to orthogonal directions on the unfolded plane. If we select M sample points in the image area that corresponds to P_i, and after unfolding the major and minor texture flow vectors at these points map to unit-length vectors {t_ij} and {v_ij}, then the measure of orthogonality is M_o = ∑_i ∑_{j=1}^{M} |t_ij · v_ij|. M_o should be zero in the ideal case.

    Smoothness. The group of strips {P_i} is globally smooth; that is, the change between the normals of two adjacent planar strips is not abrupt. If N_i is the normal to P_i, the measure of smoothness is defined as M_s = ∑_i |N_{i−1} − 2N_i + N_{i+1}|, which is essentially the sum of the magnitudes of the second derivatives of {N_i}. The smoothness measure is introduced as a regularization term that keeps the surface away from implausible configurations.

    Ideally, the measures Mc, Mp, and Mo should all bezero. Ms would not be zero except for planar surfaces,but a smaller value is still preferred. In practice wewant all of them to be as small as possible.
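    The measures are simple to evaluate once the strips are parameterized; for instance, the smoothness term can be computed directly from the strip normals. A sketch (function name illustrative):

```python
import math

def smoothness(normals):
    """M_s = sum_i |N_{i-1} - 2 N_i + N_{i+1}|: the summed magnitude of the
    discrete second derivative of the strip normals, used as a regularizer.
    `normals` is a list of 3D vectors (tuples)."""
    total = 0.0
    for a, b, c in zip(normals, normals[1:], normals[2:]):
        d = [a[k] - 2 * b[k] + c[k] for k in range(3)]
        total += math.sqrt(sum(x * x for x in d))
    return total
```

A planar page (all normals equal) gives M_s = 0; any oscillation in the normals is penalized.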

    The four measures are determined by the focal length f0 (which determines the position of the optical center O), the projected rulings {r_i}, and the normals {N_i}, where the normals and the focal length are the unknowns. The following pseudo-code describes how we iteratively fit the planar strips to the developable surface.

    1. Compute an initial estimate of Ni and f0

    2. For each of Mc, Mp, Mo and Ms (denoted as Mx):

    (a) Adjust f0 to minimize Mx

    (b) Adjust each Ni to minimize Mx

    3. Repeat 2 until Mc, Mp, Mo and Ms are below preset thresholds (currently selected through experiments; adaptive selection is left for future work)

    The initial values of Ni and f0 are computed in the following way:

    For any point, its normal is N = T × R, where R is the unit vector in the direction of a 3D ruling R, and T is any other unit vector in the tangent plane at that point. Since we have the vanishing point of R, R is determined by f0 alone. As for T, we simply assume it to be parallel to the image plane. Therefore {N_i} are all determined by the unknown f0. We do a one-dimensional search within a predefined range of possible focal lengths to find the value that minimizes ∑_x M_x; this gives the initial f0, from which we compute the initial {N_i}.
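    The per-strip normal computation can be sketched as follows, under the pinhole model with the optical center at the origin, where a 3D line whose vanishing point is (vx, vy) has direction proportional to (vx, vy, f0). The function names and the way T's image direction is supplied are illustrative assumptions:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def strip_normal(vx, vy, f0, t_img):
    """Normal of a tangent-plane strip: N = T x R. The 3D ruling direction R
    is recovered from its vanishing point (vx, vy) as (vx, vy, f0); T is a
    unit vector parallel to the image plane, given by its 2D direction."""
    R = normalize((vx, vy, f0))
    T = normalize((t_img[0], t_img[1], 0.0))
    # cross product T x R
    N = (T[1] * R[2] - T[2] * R[1],
         T[2] * R[0] - T[0] * R[2],
         T[0] * R[1] - T[1] * R[0])
    return normalize(N)
```

The 1D search over f0 then simply evaluates the measures for the normals produced at each candidate focal length.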

    5. Experiments and Discussion

    We have applied our method to both synthetic and real images. The synthetic images are generated by warping a flat document image around a predefined developable surface. Fig. 6 demonstrates some of the preliminary results of our experiments. We use the OCR recognition rate as the evaluation measure, since OCR is one of the final applications, and other applications such as search and indexing also rely on OCR results. As the encouraging recognition rates show, the unwarped images are much more OCR friendly than the original ones. The sizes of both the warped and flattened images are in the range of 2000 × 2000 to 3000 × 3000 pixels. Subsequent experiments are being conducted to test the system's performance on large scale data, and the results will be reported in a future publication.

    The reason that results obtained from synthetic images are better than those from real images is that synthetic images have better quality in terms of focus, noise, and contrast. Since different parts of the page are at different distances from the camera, not every part can be in sharp focus. The uneven lighting also creates additional difficulty for the OCR engine.

    Our method can handle document pages that form general developable surfaces. If additional domain knowledge is available to constrain the family of developable surfaces, our method can be adapted to the following special cases that are very common in practice.

    • Planar documents under perspective projection

    The common convergence point of the major texture flow field EM is the horizontal vanishing point of the plane, while Em gives the vertical vanishing point. Once the two vanishing points are known, techniques similar to [3], [12], and [4] can be applied to remove the perspective.

    • Scanned images of thick bound books

    CRR = 21.14% CRR = 96.13%

    CRR = 22.27% CRR = 91.32%

    CRR = 23.44% CRR = 79.66%

    Figure 6. Unwarped document images. The left three are warped images and the right three are flattened images. CRR stands for 'Character Recognition Rate'. OCR results are from ScanSoft OmniPage.

    In scans of thick bound volumes, the distorted area is not large enough to justify an expensive surface estimation process. Therefore the shape-free method of [15] is appropriate. However, our method could do a better job of recovering the text line directions using the major texture flow field EM when figures, tables, and other non-character connected components are present.

  • • Cylinder shaped pages of opened books

    In this case, the vertical character strokes coincide with the rulings, so we can find the vertical vanishing point Vv using both the minor texture flow field and the estimated projected rulings. By numerical integration we can find two contours that follow the major texture flow. It can be shown that the four intersections of any two lines passing through Vv with the two contours are coplanar and are the four vertices of a rectangle. These four points lead to a horizontal vanishing point Vh, which together with Vv determines f0. The shape of the page is also determined once f0 is known. The unrolling process is the same as that for general developable surfaces.

    6. Conclusion

    In this paper we propose a method for flattening curved documents in camera-captured pictures. We model a document page by a developable surface. Our contributions include a robust method of extracting texture flow fields from textual areas, a novel approach for projected ruling detection and vanishing point computation, and the use of texture flow fields and projected rulings to recover the surface. Our method is unique in that it does not require a calibrated camera. The proposed system is able to process images of document pages that form general developable surfaces, and can be adapted for typical degenerate cases: planar pages under perspective projection, scans of thick books, and cylinder shaped pages of opened books. To our knowledge this is the first reported method that can work on general warped document pages without camera calibration. The restoration of a frontal planar view of a warped document from a single picture is the first part of our planned work on camera-captured document processing. An interesting question that follows is the adaptation to multiple views. The shape information can also be used to enhance the text image, in particular to balance the uneven lighting and reduce the blur due to the limited depth of field.


    References

    [1] M. S. Brown and W. B. Seales. Image restoration of arbitrarily warped documents. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(10):1295–1306, October 2004.

    [2] H. Cao, X. Ding, and C. Liu. Rectifying the bound document image captured by the camera: A model based approach. In Proceedings of the International Conference on Document Analysis and Recognition, pages 71–75, 2003.

    [3] P. Clark and M. Mirmehdi. On the recovery of oriented documents from single images. In Proceedings of Advanced Concepts for Intelligent Vision Systems, pages 190–197, 2002.

    [4] C. R. Dance. Perspective estimation for document images. In Proceedings of SPIE Document Recognition and Retrieval IX, volume 4670, pages 244–254, 2002.

    [5] J. M. Francos and H. H. Permuter. Parametric estimation of the orientation of textured planar surfaces. IEEE Transactions on Image Processing, 10(3):403–418, March 2001.

    [6] N. Gumerov, A. Zandifar, R. Duraiswami, and L. S. Davis. Structure of applicable surfaces from single views. In Proceedings of the European Conference on Computer Vision, pages 482–496, 2004.

    [7] W. L. Hwang, C.-S. Lu, and P.-C. Chung. Shape from texture: Estimation of planar surface orientation through the ridge surfaces of continuous wavelet transform. IEEE Transactions on Image Processing, 7(5):773–780, May 1998.

    [8] T. Kanungo, R. M. Haralick, and I. Phillips. Global and local document degradation models. In Proceedings of the International Conference on Document Analysis and Recognition, pages 730–734, 1993.

    [9] D. C. Knill. Contour into texture: Information content of surface contours and texture flow. Journal of the Optical Society of America A, 18(1):12–35, January 2001.

    [10] D. Liebowitz and A. Zisserman. Metric rectification for perspective images of planes. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 482–488, 1998.

    [11] M. Pilu. Extraction of illusory linear clues in perspectively skewed documents. In Proceedings of the Conference on Computer Vision and Pattern Recognition, volume 1, pages 363–368, 2001.

    [12] M. Pilu. Undoing paper curl distortion using applicable surfaces. In Proceedings of the Conference on Computer Vision and Pattern Recognition, volume 1, pages 67–72, 2001.

    [13] Ø. D. Trier and T. Taxt. Evaluation of binarization methods for document images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(3):312–315, 1995.

    [14] Y.-C. Tsoi and M. S. Brown. Geometric and shading correction for images of printed materials: A unified approach using boundary. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 240–246, 2004.

    [15] Z. Zhang and C. L. Tan. Correcting document image warping based on regression of curved text lines. In Proceedings of the International Conference on Document Analysis and Recognition, volume 1, pages 589–593, 2003.

    [16] Z. Zhang, C. L. Tan, and L. Fan. Restoration of curved document images through 3D shape modeling. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 10–15, 2004.