
A Method for Online Interpolation of Packet-Loss Blocks in Streaming Video

Rumana Aktar, Kannappan Palaniappan, Jeffrey Uhlmann
Department of Electrical Engineering and Computer Science

University of Missouri - Columbia

Abstract—In this paper we examine and apply a linear-time matrix transformation for online interpolation of missing data blocks in frames of a streaming video sequence. We show that the resulting algorithm produces interpolated pixels that are sufficiently consistent within the context of a single frame that the missing block/tile is typically unnoticed by a viewer of the video sequence. Given the strenuous time constraints imposed by streaming video, this is essentially the only standard of performance that can be applied.

1. Introduction

In this paper we examine the use of a linear-time matrix transformation as a basis for an approach to online interpolation of missing data blocks in frames of a streaming video sequence. Unlike conventional in-painting (in-filling) applications, the real-time demands of streaming video tend to constrain the set of feasible algorithms to only those with run-time computational complexity that is linear in the number of pixels to be interpolated. Most such approaches involve simple linear averaging of pixels on the perimeter of the missing block, and the result is a relatively homogeneous blurred patch which tends to stand out strongly even though the frame is visible for only a fraction of a second within the video sequence. Our approach, by contrast, produces a result with sufficient high-frequency detail (texture) to often allow it to go unnoticed by most viewers.

Image inpainting has been a subject of interest in the computer vision community for a long time [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22]. Surveys have grouped inpainting approaches into five main categories [23], [24], [25], [1]: structural or partial differential equation (PDE) based, texture-based, exemplar-based, hybrid, and fast inpainting.

Bertalmio et al. [11] proposed the first PDE-based inpainting method, which propagates information from neighboring geometric structures along isophote (edge) directions. This simple approach works well for small holes but fails to approximate missing data in large textured regions, producing blurring effects. Tschumperle et al. [22] presented a PDE-based system in which a large set of previous vector-valued regularization approaches were expressed by a common local expression; leveraging the local filtering properties of the proposed equations, it was able to achieve a certain level of accuracy. Another effective inpainting system for recovering sharp edges to a certain degree is Total Variation (TV) regularization [26], an adaptation of the TV model [27]. Though robust to small holes, it also fails to handle large textured areas and curved structures. To address this, Chan et al. [12] extended the work to the Curvature-Driven Diffusions (CDD) model, which works better for curved structures.

Alexei et al. [28] proposed a non-parametric texture-based method that aims to preserve as many local structures as possible and grows a new image from an initial seed one pixel at a time based on a Markov random field model. Another texture-based approach is the resynthesis of complex textures [29], [30], which builds up the missing region by successively adding pixels that closely match the target area from the input image. Hitoshi et al. [13] combined texture synthesis and image inpainting, resulting in a system which overcame the limitations of both approaches. Elad et al. [31] described an inpainting model which fills holes in overlapping texture and cartoon image layers based on sparse-representation-based image decomposition and morphological component analysis.

Exemplar-based methods assign a priority to missing pixels. Once inpainting priorities have been determined, the method finds the geo-structurally best-matching patch from the surrounding known region to fill holes. Though computationally expensive, many exemplar-based methods have proven successful in filling larger holes [39], [17], [19], [40], [41].

Often using PDE-based methods as a foundation, hybrid algorithms combine other inpainting methods such as texture synthesis to leverage the advantages of different approaches, resulting in more robust recovery [35], [36], [37].

Manuel et al. [32] proposed a simple and fast method based on an anisotropic diffusion model extended with the notion of user-provided diffusion barriers. Using information from the analysis of stationary first-order transport equations, Folkmar et al. [33] developed a fast noniterative inpainting algorithm. Komal et al. [34] proposed a fast image inpainting algorithm which treats missing pixels as a level set and fills lost pixels using an image smoothness estimator as well as gradient information.

Past approaches to the inpainting problem have implicitly assumed off-line applications in which their relatively large running times would not be problematic in practice. In other words, their focus was almost exclusively on the

978-1-7281-4732-1/19/$31.00 c©2019 IEEE


quality of the end result rather than the satisfaction of severe real-time performance constraints. In the next section, we discuss real-time applications in which fast interpolation is required. These include streaming video, in which packet losses produce undefined blocks within frames, and interactive visualization systems in which data may be missing for some patches of the dynamically-changing viewing region. We then introduce our approach and examine its strengths and limitations in realistic broadcast and interactive visualization contexts. We conclude with a discussion of our results and prospects for possible improvements to the algorithm within real-time constraints.

2. Real-Time Video Interpolation

Figure 1: Example of a corrupted MPEG frame with multiple bit errors leading to corrupted tiles ([6], [1]). In most streaming video contexts it is possible to apply in-painting methods using successive-frame information to fill the missing patches so that they will likely go unnoticed by a casual viewer. However, some high data-rate applications are not amenable to such approaches and thus demand a simpler/faster alternative.

Conventional approaches to image interpolation/in-painting have focused on the preservation of local image properties such that a viewer will not be able to discern any artifacts or discontinuities revealing the use of an algorithm to fill a missing portion of the image. For classical oil-painting restoration the missing portion may be the result of physical damage, and the in-painting method typically involves an actual painter who attempts to replicate the missing fragment based on knowledge of what existed prior to the damage. In some cases, however, there is no extant information about the details of the missing fragment and the restorer can only try to replace it with something that is consistent both with the surrounding region and with the style of the original artist.

In the case of digitally scanned photographs with missing fragments due to physical degradation, the use of a skilled artist for restoration is often impractical and automated tools are needed. A variety of methods have been developed that span a wide range of mathematical and algorithmic sophistication. The simplest approaches involve determining the color for each missing pixel as a weighted average of colors based on distances to the nearest known pixels. The result of such an approach tends to be a smoothly-varying patch of color with little or no fine detail consistent with the surrounding region. Thus, unless it is very small, a patch interpolated in this way will be readily apparent to even a casual viewer of the image.
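As a concrete illustration of this simplest baseline (and not the method proposed in this paper), the following sketch fills a grayscale block with inverse-distance weighted averages of the pixels on the perimeter of the missing block. The function name and the (y0, y1, x0, x1) tile convention are hypothetical, and the tile is assumed not to touch the image border.

```python
import numpy as np

def perimeter_average_fill(img, tile):
    """Naive baseline: each missing pixel becomes an inverse-distance
    weighted average of the known pixels ringing the missing block."""
    y0, y1, x0, x1 = tile
    # Collect the coordinates of the one-pixel-wide perimeter ring.
    ys, xs = [], []
    for x in range(x0 - 1, x1 + 1):          # top and bottom rows (with corners)
        ys += [y0 - 1, y1]; xs += [x, x]
    for y in range(y0, y1):                  # left and right columns
        ys += [y, y]; xs += [x0 - 1, x1]
    ys, xs = np.array(ys), np.array(xs)
    vals = img[ys, xs].astype(float)
    out = img.astype(float).copy()
    for y in range(y0, y1):
        for x in range(x0, x1):
            w = 1.0 / np.hypot(ys - y, xs - x)   # inverse-distance weights
            out[y, x] = (w * vals).sum() / w.sum()
    return out
```

As the text notes, the output is a smoothly varying patch: on a uniform background the filled block is exactly the background intensity, and any texture in the surround is lost.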

More sophisticated methods attempt to maintain the continuity of edge features that appear to extend into the missing region, but the intersection of these extensions can produce artificial features that are highly distinctive and readily distracting to a viewer. Improved methods maintain some continuity of boundary features but enforce constraints on how these features are blended as they intersect within the patch. In other words, an attempt is made to achieve a balance between blurring and detail so as to minimize the extent to which the result “stands out” to a human viewer.

The state of the art for automated filling of image patches is achieved by replicating texture patches surrounding the missing region so that there is strong consistency between the result and the surrounding region. This approach is least discernible to human viewers because it essentially manufactures artificial detail that has no basis in terms of what actually may have existed within the region. In other words, the goal is purely to generate a plausible result that will be accepted as genuine by a viewer. It must be emphasized that this is very different from the goal of inferring as accurately as possible an approximation to the actual details of what may have existed in the missing region.

Real-time interpolation of missing blocks in frames of a streaming video sequence has a unique set of constraints. On the one hand, it only needs to fill the missing patch with fidelity sufficient to not draw attention to itself during the fraction of a second in which it is visible. On the other hand, the interpolated result must be computed within the even smaller fraction of a second that exists between successive frames. It would seem, therefore, that either of the two described approaches may be viable. Unfortunately, the state-of-the-art texture filling methods are too computationally intensive¹, and the deficiencies of simpler methods become even more pronounced when viewed between successive frames.

One reason why video interpolation is so challenging is that the in-painted region needs to be consistent with the corresponding regions of both the preceding and following frames, and the latter is not generally available in the real-time streaming context. An immediately obvious approach would be to simply replicate the same region

1. Even methods that are described as rapid, e.g., [2], do not attempt to achieve real-time performance but are instead focused solely on reducing the high computational cost of prior methods.


(a) Matrix, M (b) Matrix, S

(c) Matrix, S′ (d) Matrix, M ′

Figure 2: Interpolation of missing data in the dscale method. (a) Matrix M represents a typical corrupted block with missing data (each 0 is a missing pixel). (b) Matrix S shows the scaling data for matrix M (numbers are presented up to two decimal places due to space limitations). The product along each row or column of nonzero elements in S is equal to 1 (unity). Each 0 element is replaced by 1 in (c) matrix S′. Finally, (d) matrix M′ shows the interpolated matrix, i.e., the intended recovered data (decimal points are discarded for visualization).

from the preceding frame. If the camera/content is relatively static then such an approach can often produce satisfactory results. When there is fast motion, however, the result will tend to stand out strongly to viewers because of its stark inconsistency with the direction of motion. Motion tracking can potentially be applied to mitigate this issue but is challenging to combine with interpolation within the real-time constraints of high frame-rate video.

Another significant reason why video in-painting is challenging is that the regions to be filled tend to be rectangular. This is because missing data in a video stream is usually a consequence of missing or corrupted data packets corresponding to rectangular tiles defined by the compressed video format. Figure 1 (a) shows this effect in a frame of MPEG video with multiple bit errors [6]. Corrupted tiles can be easily recognized by failed checksums and can either be rendered in corrupted form (as shown in Figure 1 (a)) or be replaced with an interpolated tile (Figure 1 (b)). However, unless the interpolation is sufficiently smooth across the horizontal and vertical boundaries of the missing tiles, the rectilinear borders will stand out as distinctive features and will tend to be strongly distracting.

3. Linear-Time Matrix Interpolation

A rigorous definition of what constitutes a “real-time” algorithm is clearly dependent on a variety of application-specific assumptions [3]. For example, faster hardware or parallelization may enable a given algorithm to achieve a specific real-time performance threshold. For present purposes we will avoid such issues by requiring our approach to scale linearly with the number of pixels to be interpolated.

In other words, the coefficient on the linear running time may or may not satisfy the demands of a given application, but at least it is unambiguously clear how much improvement is needed from a combination of code optimization and faster hardware.

To this end we have developed an algorithm based on a linear-time algorithm for scaling the rows and columns of a given matrix (not necessarily square) such that the magnitude of the product of the nonzero elements in each row or column is unity [5]. This algorithm has been exploited for efficiently identifying scale-equivalent matrices and for computing a special type of generalized matrix inverse that is consistent with respect to arbitrary diagonal transformations [7], which has found significant applications in robotics and process control [10], [9], [8].

To give a feel for the core algorithm, which we will refer to as dscale, consider its use with a nonnegative m×n matrix M representing pixel intensities of an image. Evaluating dscale(M) gives a decomposition

M = D · S · E (1)

where D and E are diagonal matrices and the product of the nonzero elements in each row and column of S is unity. This scaling (i.e., S) is unique and takes O(mn) time to compute [50]. What is notable about this decomposition is that zero elements of M are unchanged. If missing pixels in M are taken to be zeros, then they will correspond to zero elements in S. The critical observation to be made is that if those elements of S are replaced with unity to form S′, the product of the elements in each row and column of S′ will still be unity. In other words, S′ represents a valid dscale


decomposition of a matrix M ′ with no missing elements:

M′ := D · S′ · E. (2)

(a) Ground truth (b) Corrupted Fr. (c) Recovered Fr.

(d) P1, P1∗ (e) P2, P2∗ (f) P3, P3∗

Figure 3: Example of dscale interpolation for the single image Circuit. (a) Original image (ground truth). (b) Corrupted image produced by inserting 3 black (zero-intensity) patches; the region around each patch is highlighted with a red rectangle (P1, P2, P3). (c) Interpolated result; the region around each recovered patch is highlighted with a red rectangle (P1∗, P2∗, P3∗). (d)-(f) show the highlighted areas P1, P2, P3 in the top row and P1∗, P2∗, P3∗ in the bottom row.

Moreover, the missing elements of M are replaced with values that are scale-consistent with respect to the other values in their rows and columns. This is an important property because, for example, if the original image represents multispectral information for which the rows and columns are defined in different units, the interpolated result will be consistent with those units, i.e., if the units are changed the resulting interpolation will be identical up to that change of relative scale. Figure 2 presents an example of matrix decomposition in the dscale approach.
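To make the procedure concrete, the following is a minimal numerical sketch of dscale-style interpolation. For illustration it uses a simple alternating row/column geometric-mean normalization rather than the direct O(mn) algorithm of [5], so it is an approximation under that stated assumption; the function name and iteration count are hypothetical.

```python
import numpy as np

def dscale_interpolate(m, iters=50):
    """Fill zero (missing) entries of a nonnegative block m.

    Sketch of the dscale idea: scale rows/columns so the product of
    nonzero entries in each line is ~1, replace zeros by 1 in the
    scaled matrix S to form S', then undo the scaling to obtain M'."""
    m = np.asarray(m, dtype=float)
    mask = m > 0                       # known pixels
    s = m.copy()
    d = np.ones(m.shape[0])            # accumulated row scales (diag of D)
    e = np.ones(m.shape[1])            # accumulated column scales (diag of E)
    for _ in range(iters):
        # Divide each row by the geometric mean of its nonzero entries,
        # driving the row products of nonzero entries toward unity.
        logs = np.where(mask, np.log(np.where(mask, s, 1.0)), 0.0)
        row_gm = np.exp(logs.sum(1) / np.maximum(mask.sum(1), 1))
        s = s / row_gm[:, None]
        d = d * row_gm
        # Same for columns.
        logs = np.where(mask, np.log(np.where(mask, s, 1.0)), 0.0)
        col_gm = np.exp(logs.sum(0) / np.maximum(mask.sum(0), 1))
        s = s / col_gm[None, :]
        e = e * col_gm
    s_prime = np.where(mask, s, 1.0)           # replace missing entries by 1
    return d[:, None] * s_prime * e[None, :]   # recompose M' = D . S' . E
```

By construction the known pixels of M are reproduced exactly, and each missing pixel (i, j) is filled with d_i·e_j, a value that is scale-consistent with its row and column.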

To apply this approach to image interpolation we need only identify each region to fill and a suitable window so that the missing pixels can be estimated from the surrounding known pixels. The principal constraints in determining the dimensions of the window are that it include a representative sample of local pixels and that it not extend into a strongly dissimilar region of the image. For our experiments in the following section we interpolate based on a window that is 15% larger than the patch to be in-filled. In other words, no content-based method is applied to optimize the window size.
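The window selection above can be sketched as follows. Applying the 15% expansion per side is one plausible reading of the description, and the function name and (y0, y1, x0, x1) tile convention are hypothetical.

```python
def interpolation_window(tile, image_shape, margin=0.15):
    """Expand a missing tile (y0, y1, x0, x1) by `margin` of its height/width
    on each side, clipped to the image bounds, giving the window of pixels
    from which the missing block is interpolated."""
    y0, y1, x0, x1 = tile
    dy = int(round((y1 - y0) * margin))
    dx = int(round((x1 - x0) * margin))
    h, w = image_shape[:2]
    return (max(0, y0 - dy), min(h, y1 + dy),
            max(0, x0 - dx), min(w, x1 + dx))
```

Clipping matters for tiles near the image border, where the window cannot extend symmetrically in every direction.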

4. Experimental Results

In this section we examine the approach described in the previous section with examples involving typical streaming video sequences and sequences generated from an interactive visualization system. We begin by noting that standard metrics for comparing the fidelity of processed image results to known ground truth do not generally apply to the problem of interest here because there is no way to divine missing content, so the only meaningful metric is visual consistency. This is demonstrated by the example of Figure 4, in which a red car in the missing tile cannot possibly be interpolated from information available in the pixels surrounding the patch.

The example images used in this section are collected from the VIRAT [43] (Figures 6, 9), VIVID [44] (Figures 4, 7), and Maize (Figures 5, 8) datasets. VIRAT is a large-scale benchmark dataset which has been widely used in surveillance and computer vision applications such as mosaicking [45], [46], [47], [48].

Figure 4: Ground truth (left) contains a red car that cannot be seen in the center image due to the missing tile. No interpolation method can divine the presence of the car from the information available, so comparing the region with the car to that of the dscale-interpolated result (right) is not a meaningful measure of effectiveness.

The example of Figure 5 is similar in that the fine details of the rows of crops obscured by the patch cannot possibly be inferred from the surrounding region. In this

Figure 5: Ground truth (left) is a frame of video that shows a dirt road separating rows of crops from a field. The fine details of the missing region cannot be divined, but the dscale result (right) is qualitatively similar enough to potentially avoid drawing attention to itself within the context of the streaming video.

case the dscale method effectively captures texture features along the horizontal and vertical directions. However, the same does not hold for textural features that are strongly oriented in a diagonal direction, of which Figure 6 is a glaring example.

In the examples of Figures 7-9 we compare dscale to the partial-differential-equation (PDE) method of [49], which


Figure 6: The dscale method is unable to effectively maintain the textural features of the dirt road that are diagonally oriented.

is typical of a family of related methods that attempt to extend features surrounding the missing region into the center of the region. Such methods are iterative and far too computationally expensive for real-time/online applications but are much faster than so-called texture-aware methods [28], [29], [30]. The first comparison example, Figure 7, shows a patch obscuring the boundary between two types of paved surfaces. The dscale result is clearly more consistent

Figure 7: Top-left shows the original frame of video and top-right shows the missing tile. The PDE method (bottom-left) produces a strongly blurred result while dscale introduces qualitatively less distracting artifacts.

with the overall content of the image than the PDE result.

The example of Figure 8 is a frame of video showing rows of crops. Like the previous example, dscale introduces artifacts that are visually more consistent with the surrounding texture features than the blurred result from the PDE method.

The next example, Figure 9, is a frame of video showing a rural highway. The dscale result does not accurately maintain continuity of the road, but its artifacts are certainly less distracting than the blurred result from the PDE method.

Figure 8: Top-left shows the original frame of video and top-right shows the missing tile. The PDE method (bottom-left) produces a strongly blurred region in the shape of the missing tile. The dscale result maintains textural features that are somewhat more effective at masking the rectilinear shape of the missing tile.

Figure 9: Top-left shows the original frame of video and top-right shows the missing tile. The PDE method (bottom-left) produces a blurred rectangle over the road that is almost as distracting as a uniform patch. The dscale result (bottom-right) introduces significant artifacts, but they are qualitatively consistent with the overall image.

5. Discussion

In this paper we have examined a low-complexity linear-time algorithm for the real-time interpolation of lost tiles in frames from streaming video and interactive visualization applications. Our results show that the algorithm is very


effective in many contexts but is susceptible to producing noticeable orientation-related artifacts. Future work will examine means for mitigating the orientation sensitivity of the algorithm by rotating the interpolation window to align with strong orientation features that may exist. Other potential improvements may derive from a simple method to determine the local direction of motion that can be exploited to further mask distracting artifacts. In all cases, however, the real-time efficiency constraint will be the dominant obstacle.

References

[1] Muxi Chen and J.K. Uhlmann “Fast Digital Inpainting for VideoApplications,” University of Missouri Project Report, 2016.

[2] Chih-Wei Fang and J.-J.J. Lien. Rapid Image Completion SystemUsing Multiresolution Patch-Based Directional and Nondirectional Ap-proaches. In: Image Processing, IEEE Transactions on 18.12 (2009), pp.27692779. ISSN: 1057-7149. DOI: 10.1109/TIP.2009.2027635.

[3] S.J. Julier and J.K. Uhlmann, “Building a million beacon map,” SensorFusion and Decentralized Control in Robotic Systems IV Conference,Volume 4571, Pages 10-21, 2001.

[4] Mil Mascaras and J.K. Uhlmann, “Expression of a Real Matrix as aDifference of a Matrix and its Transpose Inverse,” Journal de Cienciae Ingenieria, Vol. 11, No. 1, 2019.

[5] U.G. Rothblum and S.A. Zenios, “Scalings of Matrices SatisfyingLine-Product Constraints and Generalizations,” Linear Algebra and ItsApplications, 175: 159-175, 1992.

[6] S.A. Shanawaz and S.R. Done, “Digital Video, MPEG and AssociatedArtifacts,” Imperial College Technical Report, (London, UK), June 14,1996.

[7] Jeffrey Uhlmann, “A Generalized Matrix Inverse that is Consistentwith Respect to Diagonal Transformations,” SIAM Journal on MatrixAnalysis (SIMAX), 2018.

[8] J.K. Uhlmann, “On the Relative Gain Array (RGA) with Singular andRectangular Matrices,” Applied Mathematics Letters, Vol. 93, 2019.

[9] Jeffrey Uhlmann, “A Rank-Preserving Generalized Matrix Inverse forConsistency with Respect to Similarity,” IEEE Control Systems Letters,ISSN: 2475-1456, 2018.

[10] Bo Zhang and Jeffrey Uhlmann, “Applying a Unit-Consistent Gener-alized Matrix Inverse for Stable Control of Robotic Systems,” ASME J.of Mechanisms and Robotics, 11(3), 2019.

[11] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Imageinpainting,” in Proceedings of the 27th annual conference on Computergraphics and interactive techniques. ACM Press/Addison-WesleyPublishing Co., 2000, pp. 417–424.

[12] T. F. Chan and J. Shen, “Nontexture inpainting by curvature-drivendiffusions,” Journal of Visual Communication and Image Representa-tion, vol. 12, no. 4, pp. 436–449, 2001.

[13] H. Yamauchi, J. Haber, and H.-P. Seidel, “Image restoration usingmultiresolution texture synthesis and image inpainting,” in ProceedingsComputer Graphics International 2003. IEEE, 2003, pp. 120–125.

[14] O. Le Meur, J. Gautier, and C. Guillemot, “Examplar-based inpaintingbased on local geometry,” in 2011 18th IEEE international conferenceon image processing. IEEE, 2011, pp. 3401–3404.

[15] H. Grossauer, “A combined pde and texture synthesis approach toinpainting,” in European conference on computer vision. Springer,2004, pp. 214–224.

[16] J. Shen and T. F. Chan, “Mathematical models for local nontextureinpaintings,” SIAM Journal on Applied Mathematics, vol. 62, no. 3, pp.1019–1043, 2002.

[17] A. Criminisi, P. Perez, and K. Toyama, “Object removal by exemplar-based inpainting,” in 2003 IEEE Computer Society Conference onComputer Vision and Pattern Recognition, 2003. Proceedings., vol. 2.IEEE, 2003, pp. II–II.

[18] Z. Xu and J. Sun, “Image inpainting by patch propagation using patchsparsity,” IEEE transactions on image processing, vol. 19, no. 5, pp.1153–1165, 2010.

[19] A. Wong and J. Orchard, “A nonlocal-means approach to exemplar-based inpainting,” in 2008 15th IEEE International Conference on ImageProcessing. IEEE, 2008, pp. 2600–2603.

[20] Q. Chen, Y. Zhang, and Y. Liu, “Image inpainting with improvedexemplar-based approach,” in International Workshop on MultimediaContent Analysis and Mining. Springer, 2007, pp. 242–251.

[21] Y.-L. Chang, Z. Yu Liu, and W. Hsu, “Vornet: Spatio-temporally con-sistent video inpainting for object removal,” in Proceedings of the IEEEConference on Computer Vision and Pattern Recognition Workshops,2019, pp. 0–0.

[22] D. Tschumperle and R. Deriche, “Vector-valued image regularizationwith pdes: A common framework for different applications,” IEEEtransactions on pattern analysis and machine intelligence, vol. 27, no. 4,pp. 506–517, 2005.

[23] J. Joshua and G. Darsan, “Digital inpainting techniques: A survey,”Intern. J. of Latest Research in Engineering and Technology, vol. 2, pp.34–36, 2016.

[24] R. Suthar and M. K. R. Patel, “A survey on various image inpaintingtechniques to restore image,” Int. Journal of Engineering Research andApplications, vol. 4, no. 2, pp. 85–88, 2014.

[25] K. s Mahajan and M. Vaidya, “Image in painting techniques: Asurvey,” IOSR Journal of Computer Engineering (IOSRJCE), vol. 5,no. 4, pp. 45–49, 2012.

[26] P. Getreuer, “Total variation inpainting using split Bregman,” Image Processing On Line, vol. 2, pp. 147–157, 2012.

[27] T. F. Chan, S. H. Kang, and J. Shen, “Total variation denoising and enhancement of color images based on the CB and HSV color models,” Journal of Visual Communication and Image Representation, vol. 12, no. 4, pp. 422–435, 2001.

[28] A. A. Efros and T. K. Leung, “Texture synthesis by non-parametric sampling,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2. IEEE, 1999, pp. 1033–1038.

[29] P. Harrison, “A non-hierarchical procedure for re-synthesis of complex textures,” 2001.

[30] D. J. Heeger and J. R. Bergen, “Pyramid-based texture analysis/synthesis,” in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. Citeseer, 1995, pp. 229–238.

[31] M. Elad, J.-L. Starck, P. Querre, and D. L. Donoho, “Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA),” Applied and Computational Harmonic Analysis, vol. 19, no. 3, pp. 340–358, 2005.

[32] M. M. Oliveira, B. Bowen, R. McKenna, and Y.-S. Chang, “Fast digital image inpainting,” in Proceedings of the International Conference on Visualization, Imaging and Image Processing (VIIP 2001), Marbella, Spain, 2001, pp. 106–107.

[33] F. Bornemann and T. März, “Fast image inpainting based on coherence transport,” Journal of Mathematical Imaging and Vision, vol. 28, no. 3, pp. 259–278, 2007.

[34] A. Telea, “An image inpainting technique based on the fast marching method,” Journal of Graphics Tools, vol. 9, no. 1, pp. 23–34, 2004.

[35] J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1570–1582, 2005.

[36] J. Wu and Q. Ruan, “A novel hybrid image inpainting model,” in 2008 International Conference on Audio, Language and Image Processing. IEEE, 2008, pp. 138–142.

[37] L. Cai and T. Kim, “Context-driven hybrid image inpainting,” IET Image Processing, vol. 9, no. 10, pp. 866–873, 2015.

[38] S. A. Basith and S. R. Done, “Digital video, MPEG and associated artifacts,” Imperial College London, 1996.

[39] I. Drori, D. Cohen-Or, and H. Yeshurun, “Fragment-based image completion,” in ACM Transactions on Graphics (TOG), vol. 22, no. 3. ACM, 2003, pp. 303–312.

[40] J. C. Hung, C.-H. Huang, Y.-C. Liao, N. C. Tang, and T.-J. Chen, “Exemplar-based image inpainting base on structure construction,” JSW, vol. 3, no. 8, pp. 57–64, 2008.

[41] C.-W. Fang and J.-J. J. Lien, “Rapid image completion system using multiresolution patch-based directional and nondirectional approaches,” IEEE Transactions on Image Processing, vol. 18, no. 12, pp. 2769–2779, 2009.

[42] S. A. Basith and S. R. Done, “Digital video, MPEG and associated artifacts,” Imperial College London, 1996.

[43] S. Oh, A. Hoogs, A. Perera, N. Cuntoor, C.-C. Chen, J. T. Lee, S. Mukherjee, J. Aggarwal, H. Lee, L. Davis et al., “A large-scale benchmark dataset for event recognition in surveillance video,” in CVPR 2011. IEEE, 2011, pp. 3153–3160.

[44] R. Collins, X. Zhou, and S. K. Teh, “An open source tracking testbed and evaluation web site,” in IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, vol. 2, no. 6, 2005, p. 35.

[45] R. Aktar, V. S. Prasath, H. Aliakbarpour, U. Sampathkumar, G. Seetharaman, and K. Palaniappan, “Video haze removal and Poisson blending based mini-mosaics for wide area motion imagery,” in 2016 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2016, pp. 1–7.

[46] R. Aktar, “Automatic geospatial content summarization and visibility enhancement by dehazing in aerial imagery,” Ph.D. dissertation, University of Missouri–Columbia, 2017.

[47] R. Aktar, H. Aliakbarpour, F. Bunyak, G. Seetharaman, and K. Palaniappan, “Performance evaluation of feature descriptors for aerial imagery mosaicking,” in 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2018, pp. 1–7.

[48] R. Aktar, H. AliAkbarpour, F. Bunyak, T. Kazic, G. Seetharaman, and K. Palaniappan, “Geospatial content summarization of UAV aerial imagery using mosaicking,” in Geospatial Informatics, Motion Imagery, and Network Analytics VIII, vol. 10645. International Society for Optics and Photonics, 2018, p. 106450I.

[49] J. D’Errico, “Inpaint-nans, MATLAB Central File Exchange,” http://www.mathworks.com/matlabcentral/fileexchange/4551-inpaint-nans/, 2004, [Online; accessed 10-August-2019].

[50] U. G. Rothblum and S. A. Zenios, “Scalings of matrices satisfying line-product constraints and generalizations,” Linear Algebra and its Applications, vol. 175, pp. 159–175, 1992.