
Research Article
Improved ORB-SLAM2 Algorithm Based on Information Entropy and Image Sharpening Adjustment

Kaiqing Luo,1,2,3 Manling Lin,1 Pengcheng Wang,1 Siwei Zhou,1 Dan Yin,1 and Haolan Zhang4

1School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China
2Guangdong Provincial Engineering Research Center for Optoelectronic Instrument, South China Normal University, Guangzhou 510006, China
3SCNU Qingyuan Institute of Science and Technology Innovation Co., Ltd., Qingyuan 511517, China
4The SCDM Center, NingboTech University, Ningbo 315100, China

Correspondence should be addressed to Kaiqing Luo (kqluo@scnu.edu.cn) and Haolan Zhang (haolanzhang@nit.zju.edu.cn)

Received 31 July 2020; Revised 27 August 2020; Accepted 7 September 2020; Published 23 September 2020

Academic Editor: Sanghyuk Lee

Copyright © 2020 Kaiqing Luo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Simultaneous Localization and Mapping (SLAM) has become a research hotspot in the field of robotics in recent years. However, most visual SLAM systems are based on static assumptions that ignore motion effects. If image sequences are not rich in texture information, or the camera rotates at a large angle, the SLAM system will fail to localize and map. To solve these problems, this paper proposes an improved ORB-SLAM2 algorithm based on information entropy and sharpening processing. The information entropy of each segmented image block is calculated, an entropy threshold is determined by an adaptive image-entropy-threshold algorithm, and the image blocks whose information entropy is smaller than the threshold are sharpened. The experimental results show that, compared with the ORB-SLAM2 system, the relative trajectory error decreases by 36.1% and the absolute trajectory error decreases by 45.1%, while the processing time increases only slightly. To some extent, the algorithm solves the problem of localization and mapping failure caused by large-angle camera rotation and insufficient image texture information.

1. Introduction

Simultaneous Localization and Mapping (SLAM) [1] is described as follows: a robot enters an unknown environment, uses laser or visual sensors to determine its own pose, and reconstructs a three-dimensional map of the surrounding environment in real time. SLAM is currently a hot spot in robotics research, and it has important theoretical significance and application value for realizing autonomous control and mission planning of robots.

Current SLAM systems are mainly divided into two categories according to sensor type: laser SLAM and visual SLAM. A SLAM system that uses lidar as its external sensor is called laser SLAM, and one that uses a camera as its external sensor is called visual SLAM. The advantage of lidar is its wide viewing range, but its disadvantages are its high price and limited angular resolution, which affects the accuracy of modelling. The camera used in visual SLAM offers a high cost-performance ratio, a wide application range, and rich information collection. Therefore, visual SLAM has developed rapidly since the start of the 21st century.

According to the visual sensor, visual SLAM systems are divided into monocular SLAM, binocular SLAM, and RGB-D SLAM. According to the image processing method, they are divided into the direct method and indirect methods, such as the feature point method and the contour feature method. According to the density of the constructed map, they can be divided into sparse, dense, semidense, etc. The landmark research results of visual SLAM are Mono-SLAM, PTAM, ORB-SLAM, ORB-SLAM2, etc.

Hindawi, Mathematical Problems in Engineering, Volume 2020, Article ID 4724310, 13 pages. https://doi.org/10.1155/2020/4724310

In 2007, Andrew Davison proposed Mono-SLAM [2], the real-time implementation of SfM (Structure from Motion), so it is also known as Real-Time Structure from Motion. Mono-SLAM is a SLAM system based on probabilistic mapping with a closed-loop correction function. However, Mono-SLAM can only handle small scenes in real time, and only performs well in small scenes. In the same year, Georg Klein and David Murray proposed PTAM [3] (Parallel Tracking and Mapping); its innovations were to divide the system into two threads, tracking and mapping, and to introduce the concept of key frames: instead of processing the whole sequence, PTAM works on key frames that contain a large amount of information. In 2015, Mur-Artal et al. proposed ORB-SLAM, and ORB-SLAM2 followed in 2016. ORB-SLAM [4] is an improved monocular SLAM system based on the feature points of PTAM; the algorithm runs quickly in real time and is robust to vigorous motion in both indoor environments and wide outdoor environments. ORB-SLAM2 [5] increases the scope of application on the basis of ORB-SLAM: it supports monocular, binocular, and RGB-D cameras, is more accurate than the previous solutions, and can work in real time on a standard CPU.

ORB-SLAM2 has improved greatly in running time and accuracy, but there are still some problems to be solved [6, 7]. Feature point extraction is poorly robust in environments with sudden lighting changes, too strong or too weak light intensity, or too little texture information, which causes system tracking failure in dynamic environments; feature points are lost when the camera rotates at a large angle. Common cameras often produce image blur when moving quickly, so handling motion blur has become an important direction in visual SLAM. Image restoration technology can reduce the ambiguity of the image and recover a relatively clear image from the original one. One deblurring method is based on the maximum a posteriori probability and uses the norm of a sparse representation for deblurring [8]; its computational efficiency is improved, but the computational cost is still very large. Another approach segments the image into many blocks sharing the same blur mode and predicts the point spread function of each block with a convolutional neural network; a clear image is finally obtained by deconvolution, but the processing time is too long [9]. The literature [10] uses multiscale convolutional neural networks to restore clear images in end-to-end form and optimizes the results by a multiscale loss function; because the same network parameters are used for different blur kernels, the network model is too large. The literature [11] reduces the network model and designs a spatially varying recurrent neural network, which achieves a good deblurring effect in dynamic scenes, but its generalization ability is limited. There are also semantic segmentation architectures and deep learning methods based on deep Convolutional Neural Networks (CNN) for semantic segmentation to identify relevant objects in the environment [12].

Because the SLAM system must run in real time, while neural networks involve heavy computation, long processing times, and limited generality, another method is adopted in this paper. The algorithm in this paper starts from the perspective of increasing image richness: it adds a sharpening adjustment algorithm based on an information entropy threshold to the feature point extraction part of the tracking thread. During preprocessing, the information entropy of each image block is compared with the threshold. If the information entropy is below the threshold, the block is sharpened to enhance the richness of its information and facilitate subsequent processing. If the information entropy is above the threshold, the richness of the information is considered sufficient and no processing is performed. To increase generality, the proposed algorithm automatically calculates the information entropy threshold for each scene. Experimental results show that the improved algorithm improves the accuracy and robustness of ORB-SLAM2 in cases of large-angle rotation and poor texture information.

2. The Framework of the Algorithm

The ORB-SLAM2 system is divided into three threads, namely Tracking, Local Mapping, and Loop Closing. In the entire process, the ORB algorithm is used for feature point detection and matching, and the BA (Bundle Adjustment) algorithm performs nonlinear iterative optimization of the three procedures to obtain accurate camera poses and 3D map data. Since the ORB-SLAM2 system relies heavily on the extraction and matching of feature points, running it in an environment lacking rich textures, or on blurred video, fails to yield enough stable matching point pairs; the bundle adjustment method then lacks sufficient input information, and the pose deviation cannot be effectively corrected [13]. The algorithm in this paper preprocesses the image before feature point extraction in the tracking thread, adding a sharpening adjustment algorithm based on an information entropy threshold.

2.1. Overall Framework. The overall system framework of the improved algorithm is shown in Figure 1. There are two steps in our program. The first step automatically determines the information entropy threshold of the scene based on the adaptive information entropy algorithm. The second step compares the threshold with the information entropy of each image block and performs the following operations.

The system first accepts a series of image sequences from the camera and, after initialization, passes the image frames to the tracking thread. In this thread, it extracts ORB features from the images, performs pose estimation based on the previous frame, tracks the local map, optimizes the pose, determines key frames according to the rules, and passes them to the Local Mapping thread to complete the construction of the local map. Finally, Loop Closing performs closed-loop detection and closed-loop correction. In the algorithm of this paper, the sharpening adjustment based on the information entropy threshold is added to the image preprocessing part before ORB feature detection in the Tracking thread.


2.2. Tracking Thread. The input to the tracking thread is each frame of the video. When the system is not initialized, the thread tries to initialize using the first two frames, mainly to initialize and match the feature points of those frames. There are two initialization methods in the ORB-SLAM2 system, namely monocular camera initialization and binocular/RGB-D camera initialization. The experiments in this article mainly use a monocular camera.

After initialization is complete, the system starts to track the camera position and posture. The system first uses the constant velocity model for tracking; when the constant velocity model fails, the key frame model is used. If both types of tracking fail, the system triggers the relocalization function to relocate the frame. After the system obtains each frame, it detects and describes feature points and uses the feature descriptors for tracking and pose calculation. In the process of feature point extraction, image quality plays an important role. Therefore, this article adds a preprocessing algorithm to improve image quality before the feature point extraction step.

At the same time, the system tracks the local map, matches the current frame with related key frames to form a set, and finds the corresponding key points. The bundle adjustment method is used to track the local map points and minimize the reprojection error, thereby optimizing the camera pose of the current frame. Finally, the system decides whether to generate a key frame: a current frame that meets certain conditions is designated a key frame. The key frames selected in the Tracking thread are inserted into the map for construction. Each key frame contains map points as feature points; when more than a certain number of key frames have been collected, their key points are added to the map and become map points, and the map points that do not meet the conditions are deleted.

2.3. Local Mapping Thread. The input of this thread is the key frames inserted by the tracking thread. On the basis of newly added key frames, it maintains and adds new local map points. The key frames input to the thread are generated according to the rules set during tracking, so the result of the Local Mapping thread is also closely related to the quality of the images used for feature point extraction in the tracking process. The adaptive image preprocessing algorithm therefore also improves the Local Mapping thread by optimizing the extraction of feature points.

During the construction of the map, local BA optimization is performed: BA minimizes the reprojection errors and optimizes the map points and poses. Because BA requires many mathematical operations, which are related to the key frames, Local Mapping deletes redundant key frames in order to reduce time consumption.

2.4. Loop Closing Thread. During the continuous movement of the camera, the camera poses calculated by the computer and the map points obtained by triangulation are not perfectly consistent with reality; there is a certain error between them. As the number of frames increases, the error gradually accumulates. The most effective way to reduce these accumulated errors is closed-loop correction. ORB-SLAM2 uses closed-loop detection: when the camera re-enters a previous scene, the system detects the closed loop, and global BA optimization is performed to reduce the cumulative error. Therefore, when the ORB-SLAM2 system is applied to a large-scale scene, it shows higher robustness and usability. The input of this thread is the key frames screened by the Local Mapping thread. The bag-of-words vector of the current key frame is stored in the global bag-of-words database to speed up the matching of subsequent frames. At the same time, the thread detects whether there is a loopback; if one occurs, pose graph optimization is applied to optimize the poses of all key frames and reduce the accumulated drift error. After the pose optimization is completed, a thread is started to execute global BA to obtain the optimal map points and key frame poses of the entire system.

2.5. The Process of the Sharpening Adjustment Algorithm Based on Information Entropy

Figure 1: Algorithm system framework (camera frame → sharpening adjustment based on information entropy → Tracking → key frame → Local Mapping → Loop Closing).

Motion-blurred images are more strongly disturbed by noise, which impacts the extraction of feature points in the ORB-SLAM2 system. The number of stable feature points found on such images is insufficient; in images with too much blur, feature points may not be found at all. The insufficient number then affects the accuracy of the front-end pose estimation. When there are fewer feature points than the number specified by the system, the pose estimation algorithm cannot be performed, the back-end cannot obtain the front-end information, and tracking fails. Against this background, this paper proposes a sharpening adjustment algorithm based on information entropy that preprocesses the image before the tracking thread.

In images with poor texture and in blurred images, sharpening can make the corner information of the image more prominent, so that when ORB features are extracted in the tracking thread, feature points are easier to detect, enhancing the system's stability. Screening based on the information entropy threshold is added, and the image sharpening adjustment is performed only when the information entropy of an image block is less than the threshold. This reduces the time spent on sharpening, ensures the real-time performance of the system, and maintains the integrity of the image information. If the information entropy of an image block is larger than the threshold, the block is not processed.

The process of the sharpening adjustment algorithm based on the information entropy threshold is as follows, and the algorithm flow chart is shown in Figure 2.

(1) First, an input frame is converted into a grayscale image, the image is expanded into an 8-layer image pyramid under the effect of the scaling factor, and each layer of the pyramid is divided into image blocks.

(2) Calculate the information entropy E of each image block and compare it with the information entropy threshold E0. An image block with entropy E less than the threshold E0 contains less effective information, so the effect of ORB feature extraction would be poor; such a block must first be sharpened to enhance its detail.

(3) After sharpening the image blocks whose entropy is below the threshold, ORB feature points are extracted from them together with the image blocks whose entropy is above the threshold. The feature points are extracted with the FAST feature point extraction algorithm over the pyramid. A quadtree homogenization algorithm is used to make the distribution of extracted feature points more uniform; it avoids clustering of feature points and thus makes the algorithm more robust.

(4) Then a BRIEF description of the feature points is performed to generate a binary descriptor for each homogenized feature point. Feature points combined with BRIEF descriptors are called ORB features, which are invariant to viewpoint and lighting; they are used in the later graph matching and recognition in the ORB-SLAM2 system.
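Steps (1)-(3) above can be sketched in pure Python. This is our own minimal illustration under stated assumptions, not the authors' implementation: `block_entropy` uses the histogram entropy of equation (9), `sharpen` applies the Laplacian-variant kernel of equation (8) to interior pixels only, and the function names are hypothetical.

```python
import math

def block_entropy(block):
    """Shannon entropy (bits) of the gray-level histogram of one image block."""
    counts, n = {}, 0
    for row in block:
        for g in row:
            counts[g] = counts.get(g, 0) + 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sharpen(block):
    """Convolve with the Laplacian-variant kernel [[0,-1,0],[-1,5,-1],[0,-1,0]];
    border pixels are left unchanged for brevity, results clamped to [0, 255]."""
    h, w = len(block), len(block[0])
    out = [row[:] for row in block]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = (5 * block[i][j] - block[i - 1][j] - block[i + 1][j]
                 - block[i][j - 1] - block[i][j + 1])
            out[i][j] = max(0, min(255, v))
    return out

def preprocess_blocks(blocks, e0):
    """Sharpen only the blocks whose entropy falls below the threshold e0."""
    return [sharpen(b) if block_entropy(b) < e0 else b for b in blocks]
```

A flat block has entropy 0 and would be sharpened, while a richly textured block passes through untouched, matching the screening described above.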

3. The Novel Image Preprocessing Algorithm

We propose and add a sharpening adjustment algorithm based on information entropy: before the tracking thread, the image is preprocessed. The framework above roughly describes the operation of the algorithm; the following explains in detail the relevant principles and details of the improved algorithm. The algorithms used in this article include the ORB algorithm in ORB-SLAM2, the sharpening adjustment algorithm, the principle and algorithm of information entropy, and the adaptive information entropy threshold algorithm.

3.1. ORB Algorithm. As explained above, the ORB-SLAM2 system relies heavily on the extraction and matching of feature points; running in an environment without rich texture, or on blurred video, fails to yield enough stable matching point pairs, resulting in the loss of position and attitude tracking. In order to better explain the improved system in this article, we first introduce the feature point extraction algorithm used by the system.

The focus of image information processing is the extraction of feature points. The ORB algorithm combines two methods, FAST feature point detection and the BRIEF feature descriptor [14], and further optimizes and improves them. The FAST corner detection algorithm was proposed in 2007. Compared with other feature point detection algorithms, the FAST algorithm has better real-time performance and robustness. The core of the FAST algorithm is to take a pixel and compare it with the gray values of the points around it: if the gray value of this pixel differs from most of the surrounding pixels, it is considered to be a feature point.

The specific steps of FAST corner detection are as follows. Each pixel is examined by taking the detected pixel p as the center and considering the 16 pixels on a circle of radius 3 around it, with a gray value threshold t. Comparing the 16 pixels on the circle, when there are n (12) consecutive pixels whose gray values are all greater than Ip + t or all less than Ip − t (Ip is the gray value of point p), p is determined to be a feature point. In order to improve the efficiency of detection, a simplified judgment is performed first: only pixels 1, 5, 9, and 13 are checked against the above conditions, and only when at least 3 of these points are satisfied does the detection continue with the remaining 12 points (Figure 3).
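The segment test just described can be sketched as follows. This is a simplified illustration of the technique, not the paper's code: the circle offsets follow the standard radius-3 Bresenham circle used by FAST, the 1/5/9/13 speed pretest is omitted for clarity, and the function name is our own.

```python
# (dx, dy) offsets of the 16 pixels at radius 3 around the candidate,
# in clockwise order, matching the Bresenham circle used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """Segment test: p is a corner if n contiguous circle pixels are all
    brighter than Ip + t or all darker than Ip - t."""
    ip = img[y][x]
    brighter = [img[y + dy][x + dx] > ip + t for dx, dy in CIRCLE]
    darker = [img[y + dy][x + dx] < ip - t for dx, dy in CIRCLE]
    for flags in (brighter, darker):
        run, best = 0, 0
        for f in flags + flags:   # list doubled to handle wrap-around runs
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```

A dark pixel surrounded by a uniformly bright circle passes the test, while a pixel in a flat region does not.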

The concept of the centroid is introduced to obtain scale and rotation invariance for FAST corners. The centroid is the center of mass of the image block when its gray values are taken as weights. The specific steps are as follows.

(1) In a small image block B, the moments of the image block B are defined as


m_pq = Σ_{x,y∈B} x^p y^q I(x, y),  p, q ∈ {0, 1}.  (1)

(2) The centroid of the image block can be found from the moments:

C = (m10/m00, m01/m00).  (2)

(3) Connecting the geometric center O of the image block with its centroid C gives the direction vector OC→; the feature point direction is then defined by equation (3).

Figure 2: Process of the sharpening adjustment algorithm based on information entropy (build an image pyramid → divide into image blocks → calculate local entropy E → if E ≤ E0, apply sharpening adjustment → ORB feature extraction → generate descriptor → match → ICP motion estimation; E is the local entropy, E0 is the local entropy threshold).

Figure 3: FAST feature point detection (the 16 circle pixels, numbered 1-16, around the candidate pixel p).


θ = arctan(m01/m10).  (3)

Through the above method, FAST corners gain a description of scale and rotation, which greatly improves the robustness of their representation between different images. This improved FAST is called Oriented FAST in ORB.
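Equations (1)-(3) can be worked through in a few lines. This sketch is our own illustration: coordinates are taken relative to the geometric center O of the block (an assumption the paper implies when it connects O to the centroid C), and `atan2` is used instead of `arctan` so the quadrant is preserved.

```python
import math

def orientation(patch):
    """Orientation of an image block from its first-order moments: with
    coordinates relative to the geometric center, theta = atan2(m01, m10)
    points from the center O toward the intensity centroid C."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    m10 = m01 = 0.0
    for i, row in enumerate(patch):
        for j, val in enumerate(row):
            m10 += (j - cx) * val   # sum of x * I(x, y)  (p=1, q=0)
            m01 += (i - cy) * val   # sum of y * I(x, y)  (p=0, q=1)
    return math.atan2(m01, m10)
```

A patch whose intensity increases along +x yields θ = 0, and its transpose yields θ = π/2, as the centroid shifts accordingly.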

After feature point detection is completed, a feature description is needed to represent and store the feature point information and improve the efficiency of subsequent matching. The ORB algorithm uses the BRIEF descriptor, whose main idea is to randomly select several point pairs distributed with a certain probability around the feature point, combine the comparisons of the gray values of these point pairs into a binary string, and finally use this binary string as the descriptor of the feature point.

The BRIEF feature descriptor determines the binary descriptor by comparing the gray values of point pairs. As shown in Figure 4, if the gray value at pixel x is less than the gray value at y, the binary value of the corresponding bit is assigned 1; otherwise it is 0:

τ(p; x, y) = { 1, I(x) < I(y); 0, I(x) ≥ I(y) }.  (4)

Here, I(x) and I(y) are the gray values at the corresponding points after image smoothing.

For n (256) test point pairs (x, y), the corresponding n-dimensional BRIEF descriptor can be formed according to the following formula:

f_n(p) = Σ_{1≤i≤n} 2^(i−1) τ(p; x_i, y_i).  (5)

However, the BRIEF descriptor does not have rotation invariance, so the ORB algorithm improves it: when calculating the BRIEF descriptor, ORB rotates the sampling pattern into the main direction of the corresponding feature point, thereby ensuring that the point pairs selected for the same feature point are the same even when the rotation angle differs.
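The τ test of equation (4) and the descriptor of equation (5) can be sketched as follows. This is an illustrative simplification under stated assumptions: the random sampling pattern, patch size, and function name are our own, the rotation into the main direction (steered BRIEF) is omitted, and the descriptor is returned as a bit list rather than a packed integer.

```python
import random

def brief_descriptor(img, x, y, n=256, patch=15, seed=0):
    """n-bit BRIEF string: for each sampled pair (a, b) around (x, y), emit 1
    when I(a) < I(b), else 0 (the tau test of equation (4)). The random
    pattern is fixed by the seed, so every keypoint reuses the same pairs."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        ax, ay = x + rng.randint(-patch, patch), y + rng.randint(-patch, patch)
        bx, by = x + rng.randint(-patch, patch), y + rng.randint(-patch, patch)
        bits.append(1 if img[ay][ax] < img[by][bx] else 0)
    return bits
```

Two such descriptors are compared by Hamming distance (the number of differing bits), which is what makes BRIEF matching fast.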

3.2. Sharpening Adjustment. In the process of image transmission and conversion, the sharpness of the image is reduced; in essence, the target contours and detail edges in the image are blurred. In feature point extraction, we traverse the image and select the pixels that differ greatly from their surrounding pixels as feature points, which requires the image's target contours and detail edge information to be prominent. Therefore, we introduce image sharpening.

The purpose of image sharpening is to make the edges and contours of the image clear and enhance the details of the image [15]. The methods of image sharpening include the statistical difference method, the discrete spatial difference method, and the spatial high-pass filtering method. Generally, the energy of an image is concentrated mainly in the low frequencies, while noise lies mainly in the high-frequency band; however, the edge information of the image is also concentrated in the high-frequency band. Smoothing is usually used to remove the high-frequency noise, but it also blurs the edges and contours of the image, which affects the extraction of feature points. The root cause of the blur is that the image has been subjected to an averaging or integral operation, so performing the inverse operation (a differential operation) on the image can restore its clarity.

In order to reduce these adverse effects, we adopt high-pass filtering (second-order differentiation) in the spatial domain, convolving the Laplacian operator with each pixel of the image to increase the variance between each element and its surrounding elements, thereby sharpening the image. If the variables of the convolution are the sequences x(n) and h(n), the result of the convolution is

y(n) = Σ_{i=−∞}^{+∞} x(i) h(n − i) = x(n) * h(n).  (6)

The convolution operation on the divided image blocks actually slides the convolution kernel over the image: each pixel gray value is multiplied by the corresponding value of the convolution kernel, and all the products are added to give the gray value of the image pixel under the middle pixel of the convolution kernel. The expression of the convolution function is as follows:

dst(x, y) = Σ_{0≤x′<kernel.cols, 0≤y′<kernel.rows} kernel(x′, y′) · src(x + x′ − anchor.x, y + y′ − anchor.y),  (7)

where anchor is the reference point of the kernel and kernel is the convolution kernel; the convolution template here is the matrix form of the Laplacian variant operator, as shown below:

H = [  0  −1   0
      −1   5  −1
       0  −1   0 ].  (8)

The Laplacian variant operator uses the second-derivative information of the image and is isotropic.
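Equation (7) with the kernel of equation (8) can be implemented directly. This is a plain sketch for illustration, not the paper's code: the function name is ours, pixels whose window leaves the image keep their source value, and no clamping to [0, 255] is applied here so the raw convolution sums are visible.

```python
def convolve2d(src, kernel, anchor=(1, 1)):
    """Direct implementation of equation (7): each output pixel is the sum of
    kernel(x', y') * src(x + x' - anchor.x, y + y' - anchor.y). Pixels whose
    window leaves the image keep their source value (border handling omitted)."""
    kr, kc = len(kernel), len(kernel[0])
    ax, ay = anchor
    h, w = len(src), len(src[0])
    dst = [row[:] for row in src]
    for y in range(h):
        for x in range(w):
            acc, inside = 0, True
            for yp in range(kr):
                for xp in range(kc):
                    sy, sx = y + yp - ay, x + xp - ax
                    if 0 <= sy < h and 0 <= sx < w:
                        acc += kernel[yp][xp] * src[sy][sx]
                    else:
                        inside = False
            if inside:
                dst[y][x] = acc
    return dst

# Laplacian-variant sharpening kernel of equation (8)
H = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
```

Because the kernel sums to 1, a uniform region passes through unchanged (5c − 4c = c), while an isolated bright pixel is amplified and its neighbors are pushed down, which is exactly the edge-enhancing behavior described above.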

Figure 4: BRIEF description.


The results of discrete image convolution on the classic Lenna image using the sharpening algorithm adopted in this paper are shown in Figure 5.

From the processing results, it can be seen that the high-pass filtering effect of the image sharpening algorithm is obvious: it enhances the details and edge information of the image while keeping the noise small, so this sharpening algorithm was selected.

3.3. Screening Based on Information Entropy. In 1948, Shannon proposed the concept of "information entropy" to solve the problem of quantitatively measuring information. In theory, entropy is a measure of the degree of disorder of information; here it is used to measure the uncertainty of the information in an image. The larger the entropy value, the higher the degree of disorder. In image processing, entropy can reflect the information richness of the image and display the amount of information it contains. The information entropy calculation formula is

H(x) = −Σ_{i=0}^{255} p(x_i) log2 p(x_i),  (9)

where p(x_i) is the probability of a pixel with grayscale i (i = 0, …, 255) appearing in the image. When the probability is closer to 1, the uncertainty of the information is smaller, the receiver can better predict the transmitted content, and the amount of information carried is smaller, and vice versa.

If the amount of information contained in the image is expressed by information entropy, then the entropy of an image of size m × n is defined as follows:

p_ij = f(i, j) / (Σ_{i=1}^{M} Σ_{j=1}^{N} f(i, j)),  (10)

H = −Σ_{i=1}^{M} Σ_{j=1}^{N} p_ij log2 p_ij,  (11)

where f(i, j) is the gray level at the point (i, j) in the image, p_ij is the gray distribution probability at that point, and H is the entropy of the image. If an m × n local neighborhood centered at (i, j) is taken, then H is called the local information entropy of the image [16].
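Equations (10) and (11) can be computed directly. Note that, as written in the paper, p_ij normalizes the gray values themselves rather than a gray-level histogram; the sketch below follows that definition literally (the function name is ours, and zero-valued pixels are skipped since 0·log 0 is taken as 0).

```python
import math

def local_entropy(block):
    """Equations (10)-(11): normalize the gray levels inside an m x n block
    into a probability distribution p_ij, then accumulate -sum p_ij log2 p_ij.
    Zero pixels contribute nothing (0 * log 0 -> 0 by convention)."""
    total = sum(sum(row) for row in block)
    h = 0.0
    for row in block:
        for f in row:
            if f > 0:
                p = f / total
                h -= p * math.log2(p)
    return h
```

For a uniform m × n block every p_ij equals 1/(mn), so the entropy is log2(mn); a block whose mass sits in a single pixel has entropy 0, the minimum.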

From the perspective of information theory, when the probability of an image appearing is small, its information entropy value is large, indicating that the amount of information transmitted is large. In feature point extraction, the information entropy reflects the richness of the texture information contained in the partial image, or the degree of change of the image pixel gradient: the larger the information entropy value, the richer the image texture information and the more obvious the change of the image pixel gradient [17].

Therefore, when the local information entropy value is high, the effect of ORB feature point extraction is good and the image block does not require detail enhancement. When the local information entropy value is low, the effect of ORB feature point extraction is poor, as shown in Figure 6. After sharpening adjustment to enhance details, the effect of feature point extraction is optimized [18], as shown in Figure 7. By comparison with the feature point extraction of ORB-SLAM2, it can be intuitively seen that the accuracy of the optimized extraction algorithm is better.

3.4. Adaptive Information Entropy Threshold. Because the information entropy is closely related to the scene, different video sequences in different scenes have different information richness, so the information entropy threshold must also differ between scenes. In each new scene, repeated experiments would be required to set the information entropy threshold and run matching calculations to obtain the corresponding threshold value. However, this empirical value differs greatly between scenes; such a threshold is not universal, which affects image preprocessing and feature extraction and prevents quickly obtaining good matching results. Therefore, an adaptive algorithm for the information entropy threshold is particularly important [19].

Figure 5: Sharpening adjustment. (a) Original image. (b) Sharpened image.

Mathematical Problems in Engineering 7

In view of the above problems, we propose an adaptive method for the information entropy threshold, which adjusts the threshold according to different scenarios. The self-adjusting formula is

E_0 = H(i)_ave / 2 + δ,   (12)

where H(i)_ave is the average value of the information entropy in the scene, obtained by computing the information entropy of each frame of a video of the scene during the first run and then dividing by the number of frames; i is the frame index in the video sequence; and δ is a correction factor, experimentally set to 0.3, which works best. The E_0 calculated by the above formula is the information entropy threshold of the scene.
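Equation (12) reduces to a one-line computation once the per-frame entropies from the first pass are available; a minimal sketch (the function name and the list-of-entropies interface are assumptions):

```python
# Sketch of the adaptive threshold of Eq. (12): E0 = H_ave / 2 + delta,
# where H_ave averages the per-frame information entropies collected
# on a first pass over the scene's video and delta defaults to 0.3.

def adaptive_entropy_threshold(frame_entropies, delta=0.3):
    """frame_entropies: information entropy H(i) of each frame in the scene."""
    h_ave = sum(frame_entropies) / len(frame_entropies)
    return h_ave / 2 + delta
```

For example, per-frame entropies of 6.0, 7.0, and 8.0 bits give H_ave = 7.0 and a threshold E_0 = 3.8.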

4. Experimental Results and Analysis

All the experiments are conducted on a laptop computer. The operating system is Linux 16.04 (64-bit), the processor is an Intel(R) Core(TM) i5-7300U CPU at 2.50 GHz, and the operating environment is CLion 2019 with OpenCV 3.3.0; the program is compiled in C++. The back-end uses g2o for pose optimization based on pose graphs to generate motion trajectories. The image sequence reading speed of this algorithm is 30 frames per second, and it is compared with the open-source ORB-SLAM2 system.

4.1. Comparison of Two-Frame Image Matching. In this paper, the rgbd_dataset_freiburg1_desk dataset is selected, from which two blurry images with large rotation angles are extracted, and image detection and matching are performed using the ORB algorithm, the ORB + information entropy algorithm, and the ORB + information entropy + sharpening extraction algorithm. We compare data such as the extraction effect, the number of key points, the number of matches, the number of matches after RANSAC filtering, and the matching rate. The matching result statistics are shown in Table 1, and the matching effects are shown in Figures 8, 9, and 10.

Table 1 shows the results of feature point detection and matching with ORB, ORB + information entropy, and ORB + information entropy + sharpening. It shows that the correct matching points and the matching rates of ORB and ORB + information entropy are basically the same, and the matching effect diagram of the latter shows a mismatch with a large error. For ORB + information entropy + sharpening, the number of correct matching points increases significantly, and its matching rate is improved by 36.7% and 38.8% compared with the ORB and ORB + information entropy extraction algorithms, respectively.

The results show that adding the information entropy screening and sharpening process proposed by this algorithm can effectively increase the number of matches and improve the matching rate. The accuracy of image feature detection and matching is improved, which provides more accurate and effective input for subsequent processes such as tracking, mapping, and loop detection.

4.2. Comparison of Image Matching in the Visual Odometer. For the timestamp sequence with different degrees of blur and rotation angles of the image (that is, the sequence of frames 170–200), the ORB-SLAM2 algorithm and the feature detection and matching algorithm of the improved algorithm in this paper are used to perform feature detection and matching between adjacent frames. To compare the matching points of the two algorithms on different images: the image is clear between frames 170 and 180, motion blur starts between frames 180 and 190, and the maximum rotation angle occurs between frames 190 and 200. The most dramatic motion blur is shown in Figure 11.

It can be seen from Figure 12 that the improved algorithm can match more feature points than the original algorithm. The number of matching points of the improved algorithm is more than 50, and tracking loss does not occur even when the image is blurred due to fast motion. When the resolution of the image is reduced, the improved algorithm still has more matching feature points and shows strong anti-interference ability and robustness.

Figure 6: Feature point extraction based on ORB-SLAM2.

Figure 7: Extraction of sharpening adjustment feature points based on information entropy.

4.3. Analysis of Trajectory Tracking Accuracy. The rgbd_dataset_freiburg1_desk dataset from the TUM data is selected for the experiment. We compare the real trajectory of the robot provided by TUM with the motion trajectory calculated by the algorithm to verify the improvement of the proposed algorithm. Here we take the average absolute trajectory error, the average relative trajectory error, and the average tracking time as standards. The trajectory tracking accuracy and real-time performance are comprehensively compared between the ORB-SLAM2 system and the improved SLAM system.

Figure 8: Matching effect of ORB.

Figure 9: Matching effect of ORB + information entropy.

Figure 10: Matching effect of ORB + information entropy + sharpening.

Table 1: Comparison of matching rates of three feature detection and matching algorithms.

Model | Key points | Matches | After RANSAC | Match rate (%)
ORB | 1000/1004 | 254 | 87 | 34.25
ORB + information entropy | 1000/1002 | 255 | 86 | 33.73
ORB + information entropy + sharpening | 1000/983 | 284 | 133 | 46.83


We use the information entropy of image blocks to determine the amount of information. The algorithm sharpens the image blocks with small information entropy, enhances local image details, and extracts local feature points that characterize the image information, serving the back-end as the correlation basis for key-frame matching between adjacent frames. This enhances robustness and reduces the loss of motion tracking caused by the failure of inter-frame matching. Based on the matching result, the R and t transformation relationship between frames is calculated, and the back-end uses g2o to optimize the pose based on the pose graph. Finally, the motion trajectory is generated, as shown in Figure 13.

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true value of the camera's pose and the value estimated by the SLAM system. The root mean square of the absolute trajectory error, RMSE(x), is defined as follows:

RMSE(x) = √( (1/n) ∑_{i=1}^{n} ‖x_e,i − x_s,i‖² ),   (13)

where x_e,i represents the estimated position of the ith frame in the image sequence and x_s,i represents the ground-truth position of the ith frame. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the ground truth is shown in Figure 14.
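Assuming the estimated and ground-truth trajectories have already been associated by timestamp and aligned, the RMSE of equation (13) can be sketched as follows (the function name and the tuple-based interface are assumptions):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root mean square of the absolute trajectory error, Eq. (13),
    for already-aligned per-frame positions given as (x, y, z) tuples."""
    n = len(estimated)
    sq = 0.0
    for (xe, ye, ze), (xs, ys, zs) in zip(estimated, ground_truth):
        # squared Euclidean distance ||x_e,i - x_s,i||^2 for frame i
        sq += (xe - xs) ** 2 + (ye - ys) ** 2 + (ze - zs) ** 2
    return math.sqrt(sq / n)
```

For two frames each displaced by 1 m from the ground truth, the RMSE is 1.0 m.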

The average absolute trajectory error directly measures the difference between the true value of the camera's pose and the value estimated by the SLAM system. The program first aligns the true and estimated values according to the timestamps of the poses and then calculates the difference for each pair of poses; this measure is very suitable for evaluating the performance of visual SLAM systems. The average relative trajectory error calculates the difference between the pose changes over the same pair of timestamps: after timestamp alignment, both the real pose change and the estimated pose change are computed over every fixed period of time, and then their difference is calculated. It is suitable for estimating the drift of the system. The definition of the absolute trajectory error at frame i is as follows:

F_i = Q_i⁻¹ S p_i,   (14)

where Q_i represents the real pose of frame i, p_i represents the pose of frame i estimated by the algorithm, and S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.

This experiment comprehensively compares three datasets, rgbd_dataset_freiburg1_desk, 360, and room, of which the desk dataset is an ordinary scene, the 360 dataset is collected while the camera performs a 360-degree rotation, and the room dataset is a scene collected when the image sequence texture is not rich. ORB-SLAM2 generates slightly different trajectory results on each run. In order to eliminate the randomness of the experimental results, each dataset is run 6 times under the same experimental environment to calculate the average absolute trajectory error, the average relative trajectory

Figure 11: The 200th frame image.

Figure 12: Comparison between the algorithm of this paper and ORB-SLAM2 under motion blur (number of matches plotted against timestamps, frames 170–200).

Figure 13: Movement track.


error, and the average tracking time. Tables 2–4 are plotted in Figures 15–17 to compare the algorithm performance by analyzing the trajectories.

Under the evaluation standard of the absolute trajectory error, the algorithm in this paper has certain advantages. In the scene where the camera rotates 360°, the absolute trajectory error is improved by 35%, and in the ordinary scene it is improved by 48%. In the case of poor texture, the absolute trajectory error improvement is the largest, indicating that the algorithm in this paper brings a considerable improvement on scenes whose image sequence texture is not rich. There is also a certain improvement in the situation of large-angle camera rotation.

In terms of relative trajectory error, there is an advantage in ordinary scenes: it is 17.5% smaller than the average relative trajectory error of the ORB-SLAM2 system. In the scenes where the camera rotates 360° and where the image sequence texture is not rich, the algorithm in this paper outperforms the ORB-SLAM2 system by more than 40%. This proves that the algorithm in this article is indeed improved compared with the ORB-SLAM2 system.

Table 2: Average absolute trajectory error.

Sequence | ORB-SLAM2 (m) | The proposed algorithm (m)
Desk | 0.057630667 | 0.029899333
360 | 0.113049667 | 0.072683167
Room | 0.229222167 | 0.111185333

Table 3: Average relative trajectory error.

Sequence | ORB-SLAM2 (m) | The proposed algorithm (m)
Desk | 0.0362615 | 0.029899
360 | 0.1534532 | 0.078611
Room | 0.1592775 | 0.092167

Table 4: Average tracking time.

Sequence | ORB-SLAM2 (s) | The proposed algorithm (s)
Desk | 0.0244937 | 0.0289299
360 | 0.0186606 | 0.0225014
Room | 0.0220836 | 0.0282757

Figure 14: Absolute trajectory error mapped onto the trajectory (axes x, y, z in meters), shown against the reference trajectory.


Due to the sharpening of local image blocks, the average tracking time is slightly increased, but the increase is not large.

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information entropy screening algorithm and a sharpening adjustment algorithm. Our advantage is that the information entropy threshold can be automatically calculated for different scenes, giving the method generality. Sharpening the image blocks below the information entropy threshold increases their clarity in order to better extract feature points. The combination of information-entropy-based screening of image blocks with the sharpening algorithm solves the system localization and mapping failures caused by large-angle camera rotation and a lack of texture information. Although the processing time increases, the processing accuracy improves. We compare the average absolute trajectory error, the average relative trajectory error, and the average tracking time with those of ORB-SLAM2. The experimental results show that the improved model based on information entropy and sharpening achieves better results than ORB-SLAM2. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy; the combination with the sharpening algorithm improves the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), pp. 2090–2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and

Figure 15: Average absolute trajectory error of the proposed algorithm and ORB-SLAM2 on the Desk, 360, and Room sequences.

Figure 17: Average tracking time of the proposed algorithm and ORB-SLAM2 on the Desk, 360, and Room sequences.

Figure 16: Average relative trajectory error of the proposed algorithm and ORB-SLAM2 on the Desk, 360, and Room sequences.


ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, Nara, Japan, November 2007.

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

[5] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770–779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180–198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769–777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257–265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521–2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16–20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46–57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430–443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59–61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104–106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84–88, 1993.



In 2007, Andrew Davison proposed MonoSLAM [2], which is a real-time implementation of SfM (Structure from Motion), so it is also known as Real-Time Structure from Motion. MonoSLAM is a SLAM system based on probabilistic mapping and has a closed-loop correction function. However, MonoSLAM can only handle small scenes in real time and only performs well in small scenes. In the same year, Georg Klein and David Murray proposed PTAM [3] (Parallel Tracking and Mapping); its innovation is to divide the system into two threads, tracking and mapping, and to propose the concept of key frames: PTAM no longer processes the whole sequence but key frames that contain a large amount of information. In 2015, Mur-Artal et al. proposed ORB-SLAM, and in 2016 ORB-SLAM2 was also proposed. ORB-SLAM [4] is an improved monocular SLAM system based on the feature points of PTAM. The algorithm can run quickly in real time and is robust to vigorous motion both in indoor environments and in wide outdoor environments. ORB-SLAM2 [5] increases the scope of application on the basis of ORB-SLAM: it is a complete system supporting monocular, binocular, and RGB-D cameras. ORB-SLAM2 is more accurate than previous solutions and can work in real time on a standard CPU.

ORB-SLAM2 has improved greatly in running time and accuracy, but there are still some problems to be solved [6, 7]. Feature point extraction is poorly robust in environments with sudden lighting changes, too strong or too weak light intensity, or too little texture information, which causes system tracking failure in dynamic environments; feature points are lost when the camera rotates at a large angle. Common cameras often produce image blur when moving quickly, and handling motion blur has become an important direction in visual SLAM. Image restoration technology can reduce the ambiguity of the image and restore the original image to a relatively clear one. One deblurring method is based on the maximum a posteriori probability and uses the norm of a sparse representation for deblurring [8]; the computational efficiency is improved, but the computational cost is still very large. Another segments the image into many image blocks with the same blur mode, predicts the point spread function of each image block with a convolutional neural network, and finally obtains the clear image by deconvolution, but the processing time is too long [9]. The literature [10] uses multiscale convolutional neural networks to restore clear images in end-to-end form and optimizes the results with a multiscale loss function; because the same network parameters are used for different blur kernels, the network model is too large. The literature [11] reduces the network model and designs a spatially variant recurrent neural network, which can achieve good deblurring effects in dynamic scenes, but its generalization ability is limited. There are also semantic segmentation architectures and deep learning methods based on deep Convolutional Neural Networks (CNN) for semantic segmentation to identify relevant objects in the environment [12].

Because the SLAM system requires real-time performance, while the computation of neural networks is large, the processing time long, and the generality lacking, another method is adopted in this paper. The algorithm in this paper starts from the perspective of increasing the image information richness. This paper adds a sharpening adjustment algorithm based on an information entropy threshold in the feature point extraction part of the tracking thread. During preprocessing, the information entropy of each image block is compared with the threshold. If the information entropy is below the threshold, the block is sharpened to enhance the richness of its information and facilitate subsequent processing; if the information entropy is above the threshold, the richness of the information is considered sufficient and no processing is performed. In order to increase generality, the proposed algorithm automatically calculates the information entropy threshold of the scene for different scenes. Experimental results show that the improved algorithm improves the accuracy and robustness of ORB-SLAM2 in the cases of large-angle rotation and texture-poor image sequences.

2. The Framework of the Algorithm

The ORB-SLAM2 system is divided into three threads, namely, Tracking, Local Mapping, and Loop Closing. In the entire process, the ORB algorithm is used for feature point detection and matching, and the BA (Bundle Adjustment) algorithm is used to perform nonlinear iterative optimization of the three procedures to obtain accurate camera poses and 3D map data. Since the ORB-SLAM2 system relies heavily on the extraction and matching of feature points, running in an environment without rich textures or processing blurred video will fail to obtain enough stable matching point pairs; thus the bundle adjustment method lacks sufficient input information, and the pose deviation cannot be effectively corrected [13]. The algorithm of this paper preprocesses the image before the feature point extraction of the tracking thread, adding a sharpening adjustment algorithm based on the information entropy threshold.

2.1. Overall Framework. The overall system framework of the improved algorithm is shown in Figure 1. There are two steps in our program: the first step automatically determines the information entropy threshold of the scene based on the adaptive information entropy algorithm; the second step compares the threshold with the information entropy of each image block and performs the following operations.

The system first accepts a series of image sequences from the camera and passes the image frames to the tracking thread after initialization. This thread extracts ORB features from the images, performs pose estimation based on the previous frame, tracks the local map, optimizes the pose, determines key frames according to the rules, and passes them to the Local Mapping thread to complete the construction of the local map. Finally, the Loop Closing thread performs closed-loop detection and closed-loop correction. In the algorithm of this paper, the sharpening adjustment based on the information entropy threshold is added to the image preprocessing part before the ORB feature detection in the Tracking thread.


2.2. Tracking Thread. The input to the tracking thread is each frame of the video. When the system is not initialized, the thread tries to initialize using the first two frames, mainly to initialize and match the feature points of these frames. The ORB-SLAM2 system provides initialization for monocular, binocular, and RGB-D cameras; the experiments in this article mainly use a monocular camera.

After the initialization is complete, the system starts to track the camera position and posture. The system first uses the uniform velocity model for tracking; when the uniform velocity model fails, the key frame model is used. When both types of tracking fail, the system triggers the relocation function to relocate the frame. After obtaining each frame, the system detects and describes feature points and uses the feature descriptors for tracking and pose calculation. In the process of feature point extraction, image quality plays an important role. Therefore, this article creatively adds a preprocessing algorithm to improve image quality before the image feature point extraction step.

At the same time, the system tracks the local map, matches the current frame with related key frames to form a set, and finds the corresponding key points. The bundle adjustment method is used to track local map points and minimize the reprojection error, thereby optimizing the camera pose of the current frame. Finally, a judgment is made according to the key-frame generation conditions, and a current frame that meets these conditions is designated as a key frame. The key frames selected in the Tracking thread need to be inserted into the map for construction. A key frame contains map points as feature points; when more than a certain number of key frames are collected, their key points are added to the map and become map points, and map points that do not meet the conditions are deleted.

2.3. Local Mapping Thread. The input of this thread is the key frames inserted by the tracking thread. On the basis of newly added key frames, it maintains and adds new local map points. The key frames input to the thread are generated according to the rules set during tracking, so the result of the Local Mapping thread is also closely related to the quality of the images used for feature point extraction in the tracking process. The adaptive image preprocessing algorithm therefore also improves the Local Mapping thread by optimizing the extraction of feature points.

During the construction of the map, local BA optimization is performed. BA is used to minimize reprojection errors and optimize map points and poses. Because BA requires a large number of mathematical operations related to key frames, Local Mapping deletes redundant key frames in order to reduce time consumption.

2.4. Loop Closing Thread. During the continuous movement of the camera, the camera pose calculated by the computer and the map points obtained by triangulation are not completely consistent with reality; there is a certain error between them. As the number of frames increases, the error gradually accumulates. In order to reduce these accumulated errors, the most effective method is closed-loop correction. ORB-SLAM2 uses a closed-loop detection method: when the camera re-enters a previous scene, the system detects the closed loop, and global BA optimization is performed to reduce the cumulative error. Therefore, when the ORB-SLAM2 system is applied to a large-scale scene, it shows higher robustness and usability. The input of the thread is the key frames screened by the Local Mapping thread. The bag-of-words vector of the current key frame is stored in the global bag-of-words database to speed up the matching of subsequent frames. At the same time, the thread detects whether there is a loop closure; if one occurs, pose graph optimization is applied to optimize the poses of all key frames and reduce the accumulated drift error. After the pose optimization is completed, a thread is started to execute global BA to obtain the optimal map points and key-frame poses of the entire system.

2.5. The Process of the Sharpening Adjustment Algorithm Based on Information Entropy. Motion-blurred images are more disturbed by noise, which impacts the extraction of

Figure 1: Algorithm system framework (camera frame → sharpening adjustment based on information entropy → Tracking → key frame → Local Mapping → Loop Closing).


feature points in the ORB-SLAM2 system. The number of stable feature points found on such images is insufficient; feature points cannot even be found in images with too much blur. The insufficient number then affects the accuracy of the front-end pose estimation. When the feature points are fewer than the number specified by the system, the pose estimation algorithm cannot be performed, so the back-end cannot obtain the front-end information and tracking fails. Based on this background, this paper proposes a sharpening adjustment algorithm based on information entropy, preprocessing the image before the tracking thread.

In images with poor texture and in blurred images, sharpening can make the corner information of the image more prominent, so that when ORB features are extracted in the tracking thread, feature points are easier to detect, enhancing the system's stability. Screening based on the information entropy threshold is added, and image sharpening is performed only when the information entropy of an image block is less than the threshold. This reduces the time spent on sharpening, ensures the real-time performance of the system, and maintains the integrity of the image information. If the information entropy of an image block is greater than the threshold, the block is not processed.

The process of the sharpening adjustment algorithm based on the information entropy threshold is as follows, and the algorithm flow chart is shown in Figure 2.

(1) First, an input image frame is converted into a grayscale image, the image is expanded into an 8-layer image pyramid under the effect of the scaling factor, and then each layer of the pyramid is divided into image blocks.

(2) Calculate the information entropy E of each image block and compare it with the information entropy threshold E_0. An image block whose information entropy E is less than the threshold E_0 contains less effective information and yields poor ORB feature extraction, so it is sharpened first to enhance the detail in the image.

(3) After sharpening the image blocks whose information entropy is less than the threshold, ORB feature points are extracted from them together with the image blocks whose information entropy is greater than the threshold. The feature points are extracted using the FAST feature point extraction algorithm on the pyramid. The quadtree homogenization algorithm is used to make the distribution of the extracted feature points more uniform; it avoids clustering of feature points and thus makes the algorithm more robust.

(4) Then a BRIEF description of the feature points is computed to generate binary descriptors of the feature points after the homogenization process. The feature points paired with BRIEF descriptors are called ORB features, which have viewpoint invariance and lighting invariance; they are used in the later image matching and recognition in the ORB-SLAM2 system.
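The block-wise decision in steps (1)–(2) above can be sketched as follows. The helper names, the 2 × 2 block size, and the pass-through `sharpen` stand-in are illustrative assumptions, with a histogram entropy used in place of whichever block entropy the system actually computes:

```python
import math

def block_entropy(block):
    # Gray-level histogram entropy of one block (Eq. (9) applied locally).
    hist = {}
    n = 0
    for row in block:
        for v in row:
            hist[v] = hist.get(v, 0) + 1
            n += 1
    return -sum(c / n * math.log2(c / n) for c in hist.values())

def split_blocks(img, bs):
    # Cut a grayscale image (list of lists) into bs x bs blocks, row-major.
    return [[row[x:x + bs] for row in img[y:y + bs]]
            for y in range(0, len(img), bs)
            for x in range(0, len(img[0]), bs)]

def preprocess(img, e0, bs=2, sharpen=lambda b: b):
    # Sharpen only the blocks whose entropy falls below the threshold e0;
    # `sharpen` is a stand-in for the high-pass filter described earlier.
    # Returns the list of (possibly sharpened) blocks.
    return [sharpen(b) if block_entropy(b) < e0 else b
            for b in split_blocks(img, bs)]
```

Blocks with rich gray-level variation pass through untouched, while flat, information-poor blocks are routed to the sharpening step, mirroring the threshold comparison in step (2).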

3. The Novel Image Preprocessing Algorithm

We propose and add a sharpening adjustment algorithm based on information entropy: before the tracking thread, the image is preprocessed. The framework above roughly describes the specific operation of the algorithm; the following explains in detail the relevant principles and details of the improved algorithm. The algorithms used in this article include the ORB algorithm in ORB-SLAM2, the sharpening adjustment algorithm, the principle and algorithm of information entropy, and the adaptive information entropy threshold algorithm.

3.1. ORB Algorithm. As explained above, the ORB-SLAM2 system relies heavily on the extraction and matching of feature points. Therefore, when running in an environment without rich texture, it cannot obtain enough stable matching point pairs, resulting in loss of system position and attitude tracking. In order to better explain the improved system in this article, we first introduce the feature point extraction algorithm used by the system.

The focus of image information processing is the extraction of feature points. The ORB algorithm combines two methods, FAST feature point detection and the BRIEF feature descriptor [14], and makes further optimizations and improvements on their basis. The FAST corner detection algorithm was proposed in 2006. Compared with other feature point detection algorithms, the FAST algorithm has better real-time detection performance and robustness. The core of the FAST algorithm is to take a pixel and compare it with the gray values of the points around it; if the gray value of this pixel differs from most of the surrounding pixels, it is considered to be a feature point.

The specific steps of FAST corner detection are as follows. For each detected pixel p taken as the center, consider the 16 pixels on a circle of radius 3 around it and set a gray value threshold t. Comparing the 16 pixels on the circle, when there are n (n = 12) consecutive pixels whose gray values are all greater than Ip + t or all less than Ip − t (Ip is the gray value of point p), then p is determined to be a feature point. In order to improve the efficiency of detection, a simplified judgment is performed first: only the gray values at positions 1, 5, 9, and 13 are checked, and only when at least 3 of these points satisfy the above condition are the remaining 12 points examined (Figure 3).
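The segment test just described can be sketched as below. The circle offsets follow the standard radius-3 Bresenham circle of 16 pixels, and the list-of-lists image layout is an assumption for illustration, not the paper's implementation.

```python
# Offsets of the 16 pixels on the radius-3 circle around p (positions 1..16)
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t, n=12):
    # Segment test: p is a corner if n consecutive circle pixels are all
    # brighter than Ip + t or all darker than Ip - t
    ip = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    # Quick rejection using positions 1, 5, 9, 13 (indices 0, 4, 8, 12)
    probe = [ring[i] for i in (0, 4, 8, 12)]
    if sum(v > ip + t for v in probe) < 3 and sum(v < ip - t for v in probe) < 3:
        return False
    # Full test; the ring is doubled to catch runs that wrap around
    for sign in (1, -1):
        run = 0
        for v in ring * 2:
            run = run + 1 if sign * (v - ip) > t else 0
            if run >= n:
                return True
    return False
```

A bright spot on a dark background passes the test, while a uniform patch is rejected immediately by the 4-point pre-check.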

The concept of the intensity centroid is introduced to give FAST corners scale and rotation invariance. The centroid treats the gray values of the image block as weights that determine a center of mass. The specific steps are as follows.

(1) In a small image block B, the moments of the image block B are defined as

4 Mathematical Problems in Engineering

m_pq = Σ_{(x,y)∈B} x^p y^q I(x, y),  p, q ∈ {0, 1}.  (1)

(2) The centroid of the image block can be found from the moments:

C = (m10/m00, m01/m00).  (2)

(3) Connecting the geometric center O and the centroid C of the image block gives a direction vector OC→. Then the feature point direction can be defined as follows:

Figure 2 Process of the sharpening adjustment algorithm based on information entropy (build an image pyramid, divide image blocks, calculate the local entropy E, sharpen the blocks with E ≤ E0, then ORB feature extraction, descriptor generation, matching, and ICP motion estimation; E is the local entropy and E0 is the local entropy threshold)

Figure 3 FAST feature point detection (the pixel p and the 16 pixels numbered 1–16 on the circle around it)


θ = arctan(m01/m10).  (3)

Through the above method, FAST corners acquire a description of scale and rotation, which greatly improves the robustness of their representation between different images. This improved FAST is therefore called Oriented FAST in ORB.
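Eqs. (1)–(3) can be sketched as follows. Coordinates are taken relative to the geometric center O of the patch, and `atan2` is used in place of the plain arctangent so the quadrant of the direction vector OC→ is preserved — an implementation detail assumed here, not stated in the paper.

```python
import math

def patch_orientation(patch):
    # Intensity-centroid orientation, Eqs. (1)-(3):
    # m_pq = sum x^p y^q I(x, y);  theta = atan2(m01, m10)
    m10 = m01 = 0.0
    h, w = len(patch), len(patch[0])
    for r in range(h):
        for c in range(w):
            # coordinates measured from the geometric center O of the block
            x = c - (w - 1) / 2.0
            y = r - (h - 1) / 2.0
            m10 += x * patch[r][c]
            m01 += y * patch[r][c]
    return math.atan2(m01, m10)
```

A patch whose mass lies to the right of center yields θ = 0, and one whose mass lies below center yields θ = π/2.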

After the feature point detection is completed, a feature description needs to be used to represent and store the feature point information to improve the efficiency of the subsequent matching. The ORB algorithm uses the BRIEF descriptor, whose main idea is to randomly select several point pairs around the feature point according to a certain probability distribution, combine the comparison results of the gray values of these point pairs into a binary string, and finally use this binary string as the descriptor of the feature point.

The BRIEF feature descriptor determines the binary descriptor by comparing pairs of gray values. As shown in Figure 4, if the gray value at pixel x is less than the gray value at y, the binary value of the corresponding bit is assigned 1; otherwise, it is 0:

τ(p; x, y) = 1 if I(x) < I(y), and 0 if I(x) ≥ I(y).  (4)

Among them, I(x) and I(y) are the gray values at the corresponding points after image smoothing.

For n (n = 256) test point pairs (x, y), the corresponding n-dimensional BRIEF descriptor can be formed according to the following formula:

f_n(p) = Σ_{1≤i≤n} 2^{i−1} τ(p; x_i, y_i).  (5)

However, the BRIEF descriptor does not have rotation invariance, so the ORB algorithm improves it: when calculating the BRIEF descriptor, ORB rotates the test pattern into the main direction of the corresponding feature point, thereby ensuring that even when the rotation angle differs, the point pairs selected for the same feature point are the same.
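A minimal sketch of Eqs. (4) and (5) with the steering step just described. The tiny two-pair test pattern, the nearest-pixel rounding, and the image layout are illustrative assumptions; real ORB uses a learned 256-pair pattern and precomputed lookup tables over discretized orientations.

```python
import math

def steered_brief(img, kp, pattern, theta):
    # Eq. (4)/(5): tau = 1 if I(x) < I(y); bits packed as the 2^(i-1) terms.
    # The point pairs are first rotated by the keypoint orientation theta.
    c, s = math.cos(theta), math.sin(theta)
    bits = 0
    for i, ((x1, y1), (x2, y2)) in enumerate(pattern):
        # rotate each test point into the keypoint's main direction
        rx1, ry1 = round(c * x1 - s * y1), round(s * x1 + c * y1)
        rx2, ry2 = round(c * x2 - s * y2), round(s * x2 + c * y2)
        a = img[kp[1] + ry1][kp[0] + rx1]
        b = img[kp[1] + ry2][kp[0] + rx2]
        if a < b:                  # tau(p; x, y) = 1 when I(x) < I(y)
            bits |= 1 << i
    return bits
```

On a left-to-right gradient image, rotating the pattern by π flips which point of each pair is darker, so the descriptor bits flip accordingly.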

3.2. Sharpening Adjustment. In the process of image transmission and conversion, the sharpness of the image is reduced; in essence, the target contours and detail edges in the image are blurred. In the extraction of feature points, what we need to do is to traverse the image and select as feature points the pixels that differ strongly from their surrounding pixels, which requires the image's target contour and detail edge information to be prominent. Therefore, we introduce image sharpening.

The purpose of image sharpening is to make the edges and contours of the image clear and to enhance the details of the image [15]. The methods of image sharpening include the statistical difference method, the discrete spatial difference method, and the spatial high-pass filtering method. Generally, the energy of an image is mainly concentrated in the low frequencies, the noise lies mainly in the high-frequency band, and the edge information of the image is also concentrated in the high-frequency band. Usually, smoothing is used to remove the noise at high frequencies, but this also blurs the edges and contours of the image, which affects the extraction of the feature points. The root cause of the blur is that the image is subjected to an averaging or integral operation, so performing the inverse operation (a differential operation) on the image can restore its clarity.

In order to reduce these adverse effects, we adopt the method of high-pass filtering (second-order differentiation) in the spatial domain and use the Laplacian operator to perform a convolution operation over each pixel of the image, increasing the difference between each element of the matrix and its surrounding elements to achieve a sharp image. If the variables of the convolution are the sequences x(n) and h(n), the result of the convolution is

y(n) = Σ_{i=−∞}^{+∞} x(i) h(n − i) = x(n) ∗ h(n).  (6)

The convolution operation on the divided image blocks actually slides the convolution kernel over the image: each pixel gray value is multiplied by the value at the corresponding position of the convolution kernel, and then all the products are added up as the gray value of the image pixel corresponding to the middle pixel of the convolution kernel. The expression of the convolution function is as follows:

dst(x, y) = Σ_{0≤x′<kernel.cols, 0≤y′<kernel.rows} kernel(x′, y′) · src(x + x′ − anchor.x, y + y′ − anchor.y),  (7)

where anchor is the reference point of the kernel and kernel is the convolution kernel. The convolution template here is the matrix form of the Laplacian variant operator, as shown below:

H = |  0  −1   0 |
    | −1   5  −1 |.  (8)
    |  0  −1   0 |

The Laplacian variant operator uses the second-derivative information of the image and is isotropic.
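Eqs. (6)–(8) amount to sliding the kernel H over the image. Below is a plain sketch with border replication and 8-bit clamping; both of those choices are our assumptions, since the paper does not state its border handling.

```python
H = [[0, -1, 0],
     [-1, 5, -1],
     [0, -1, 0]]   # Laplacian-variant sharpening kernel of Eq. (8)

def convolve(src, kernel=H):
    # Eq. (7) with the kernel anchor at its center; the border is
    # replicated so the output keeps the input size
    h, w = len(src), len(src[0])
    kh, kw = len(kernel), len(kernel[0])
    ay, ax = kh // 2, kw // 2
    dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for yp in range(kh):
                for xp in range(kw):
                    sy = min(max(y + yp - ay, 0), h - 1)   # replicate border
                    sx = min(max(x + xp - ax, 0), w - 1)
                    acc += kernel[yp][xp] * src[sy][sx]
            dst[y][x] = min(max(acc, 0), 255)              # clamp to 8 bits
    return dst
```

Because the kernel weights sum to 1, a flat region is left unchanged, while an isolated bright pixel is amplified and its neighbors are suppressed — exactly the edge-enhancing behavior described above.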

Figure 4 BRIEF description


The results of discrete image convolution on the classic Lenna image using the sharpening algorithm adopted in this paper are shown in Figure 5.

From the processing results, it can be seen that the high-pass filtering effect of this image sharpening algorithm is obvious: it enhances the details and edge information of the image while keeping the noise small, so this sharpening algorithm is selected.

3.3. Screening Based on Information Entropy. In 1948, Shannon proposed the concept of "information entropy" to solve the problem of quantitatively measuring information. In theory, entropy is a measure of the degree of disorder of information; here it is used to measure the uncertainty of the information in an image. The larger the entropy value, the higher the degree of disorder. In image processing, entropy can reflect the information richness of the image and indicate the amount of information it contains. The information entropy calculation formula is

H(x) = −Σ_{i=0}^{255} p(x_i) log2 p(x_i),  (9)

where p(x_i) is the probability of a pixel with grayscale i (i = 0, …, 255) appearing in the image. When the probability is closer to 1, the uncertainty of the information is smaller, the receiver can better predict the transmitted content, and the amount of information carried is less, and vice versa.

If the amount of information contained in the image is expressed by information entropy, then the entropy value of an image of size m × n is defined as follows:

p_ij = f(i, j) / Σ_{i=1}^{M} Σ_{j=1}^{N} f(i, j),  (10)

H = −Σ_{i=1}^{M} Σ_{j=1}^{N} p_ij log2 p_ij,  (11)

where f(i, j) is the gray level at point (i, j) in the image, p_ij is the gray distribution probability at that point, and H is the entropy of the image. If m × n is taken as a local neighborhood centered at (i, j), then H is called the local information entropy value of the image [16].
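A direct sketch of Eqs. (10) and (11) for one m × n block follows; the convention 0·log 0 = 0 is assumed. Note this normalizes the gray values themselves, as the paper's definition does, rather than a gray-level histogram as in Eq. (9).

```python
import math

def block_entropy_mn(block):
    # Eq. (10): p_ij = f(i, j) / sum of all gray levels in the block
    # Eq. (11): H = -sum p_ij * log2(p_ij)
    total = sum(sum(row) for row in block)
    h = 0.0
    for row in block:
        for f in row:
            if f > 0:              # 0 * log 0 taken as 0
                p = f / total
                h -= p * math.log2(p)
    return h
```

A block whose intensity is spread evenly over its pixels gives the maximum entropy, while a block with all its mass in one pixel gives 0.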

From the perspective of information theory, when the probability of an image appearing is small, the information entropy value of the image is large, indicating that the amount of information transmitted is large. In the extraction of feature points, the information entropy reflects the richness of the texture information contained in the partial image, or the degree of change of the image pixel gradient. The larger the information entropy value, the richer the image texture information and the more obvious the change of the image pixel gradient [17].

Therefore, when the local information entropy is high, the effect of ORB feature point extraction is good and the image block does not require detail enhancement. When the local information entropy is low, the effect of ORB feature point extraction is poor, which is shown in Figure 6. After sharpening adjustments to enhance details, we optimize the effect of feature point extraction [18], which is shown in Figure 7. By comparison with the feature point extraction of ORB-SLAM2, it can be intuitively seen that the optimized extraction algorithm is more accurate.

3.4. Adaptive Information Entropy Threshold. Because the information entropy is closely related to the scene, different video sequences in different scenes have different information richness, so the information entropy thresholds of different scenes must also be different. In each scene, repeated experiments would be required to set the information entropy threshold for the matching calculations and obtain the corresponding value, yet the empirical value differs greatly between scenes. Such a threshold is not universal, which affects the image preprocessing and feature extraction and prevents quickly obtaining good matching results. Therefore, an adaptive algorithm for the information entropy threshold is particularly important [19].

Figure 5 Sharpening adjustment: (a) original image; (b) sharpened image


In view of the above problems, we propose an adaptive method for the information entropy threshold, which adjusts the threshold according to different scenarios. The self-adjusting formula is

E0 = H(i)_ave / 2 + δ,  (12)

where H(i)_ave is the average value of the information entropy in the scene, obtained by computing the information entropy of each frame of a video of the scene in a first run and dividing by the number of frames; i is the frame index of the video sequence; and δ is a correction factor, experimentally set to 0.3, which works best. The E0 calculated through the above formula is the information entropy threshold of the scene.
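Eq. (12) in code form; `frame_entropies` would hold the per-frame information entropy collected during the scene's first run (the function name is ours):

```python
def entropy_threshold(frame_entropies, delta=0.3):
    # Eq. (12): E0 = H_ave / 2 + delta, with H_ave the mean
    # information entropy over all frames of the scene's first run
    h_ave = sum(frame_entropies) / len(frame_entropies)
    return h_ave / 2 + delta
```

For instance, a scene whose frames average an entropy of 7.0 bits yields a threshold of 3.8.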

4. Experimental Results and Analysis

All the experiments are conducted on a laptop computer. The operating system is 64-bit Linux 16.04; the processor is an Intel(R) Core(TM) i5-7300U CPU @ 2.50 GHz; the development environment is CLion 2019 with OpenCV 3.3.0; and the program is compiled in C++. The back end uses g2o for pose optimization based on pose graphs to generate motion trajectories. The image sequence reading speed of this algorithm is 30 frames per second, and the algorithm is compared with the open-source ORB-SLAM2 system.

4.1. Comparison of Two-Frame Image Matching. In this paper, the rgbd_dataset_freiburg1_desk dataset is selected, from which two blurry images with large rotation angles are extracted, and image detection and matching are performed using the ORB, ORB + information entropy, and ORB + information entropy + sharpening extraction algorithms. We compare data such as the extraction effect, the number of key points, the number of matches, the number of matches after RANSAC filtering, and the matching rate. The matching result statistics of the images are shown in Table 1, and the matching effects are shown in Figures 8, 9, and 10.

Table 1 shows the results of feature point detection and matching through ORB, ORB + information entropy, and ORB + information entropy + sharpening. It shows that the numbers of correct matching points and the matching rates of ORB and ORB + information entropy are basically the same, and the matching effect diagram of the latter shows mismatches with large errors. For ORB + information entropy + sharpening, the number of correct matching points increases significantly, and its matching rate is improved by 36.7% and 38.8% compared with the ORB and ORB + information entropy extraction algorithms, respectively.

The results show that adding the information entropy screening and sharpening process proposed by this algorithm can effectively increase the number of matches and improve the matching rate. The accuracy of image feature detection and matching is improved, which provides more accurate and effective input for subsequent processes such as tracking, mapping, and loop detection.

4.2. Comparison of Image Matching in the Visual Odometer. For the timestamp sequence with different blur degrees and rotation angles of the image (that is, the sequence of frames 170–200), the ORB-SLAM2 algorithm and the feature detection and matching algorithm in the improved algorithm of this paper are used to perform feature detection and matching between adjacent frames, to compare the matching points of the two algorithms on different images. The image is clear between frames 170 and 180, motion blur starts between frames 180 and 190, and the maximum rotation angle occurs between frames 190 and 200. The most dramatic motion blur is shown in Figure 11.

It can be seen from Figure 12 that the improved algorithm can match more feature points than the original algorithm. The number of matching points of the improved algorithm stays above 50, and tracking loss does not occur even when the image is blurred by fast motion. When the resolution of the image is reduced, the improved algorithm still finds more matching feature points and has strong anti-interference ability and robustness.

Figure 6 Feature point extraction based on ORB-SLAM2

Figure 7 Extraction of sharpening adjustment feature points based on information entropy

4.3. Analysis of Trajectory Tracking Accuracy. The rgbd_dataset_freiburg1_desk dataset in the TUM data is selected for the experiment. We use the real trajectory of the robot provided by TUM and compare it with the motion trajectory calculated by the algorithm to verify the improvement effect of the proposed algorithm. Here we take the average absolute trajectory error, the average relative trajectory error, and the average tracking time as standards. The comprehensive trajectory tracking accuracy and real-time performance of the ORB-SLAM2 system and the improved SLAM system are compared.

Figure 8 Matching effect of ORB

Figure 9 Matching effect of ORB + information entropy

Figure 10 Matching effect of ORB + information entropy + sharpening

Table 1 Comparison of matching rates of three feature detection and matching algorithms

Model                  Key points   Matches   After RANSAC   Match rate (%)
ORB                    1000/1004    254       87             34.25
+information entropy   1000/1002    255       86             33.73
+sharpening            1000/983     284       133            46.83


We use the information entropy of image blocks to determine their amount of information. The algorithm sharpens the image blocks with small information entropy, enhances local image details, and extracts local feature points that can characterize the image information for the back end as the correlation basis of adjacent key frame matching. This enhances robustness and reduces the loss of motion tracking caused by the failure of interframe matching. Based on the matching result, the R and t transformation between frames is calculated, and the back end uses g2o to optimize the pose based on the pose graph. Finally, the motion trajectory is generated, which is shown in Figure 13.

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true value of the camera's pose and the value estimated by the SLAM system. The root mean square of the absolute trajectory error, RMSE(x), is defined as follows:

RMSE(x) = sqrt( (1/n) Σ_{i=1}^{n} ‖x_ei − x_si‖² ),  (13)

where x_ei represents the estimated position of the i-th frame in the image sequence and x_si represents the standard value of the position of the i-th frame in the image sequence. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the standard value is shown in Figure 14.
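Eq. (13) can be sketched as follows, with the two trajectories given as equal-length lists of 3-D positions already aligned by timestamp (the alignment step itself is omitted here):

```python
import math

def ate_rmse(estimated, reference):
    # Eq. (13): root mean square of the per-frame position errors
    # ||x_ei - x_si|| between estimated and ground-truth trajectories
    assert len(estimated) == len(reference)
    total = 0.0
    for xe, xs in zip(estimated, reference):
        total += sum((e - s) ** 2 for e, s in zip(xe, xs))
    return math.sqrt(total / len(estimated))
```

For two frames whose errors are 0 m and 1 m, the RMSE is sqrt(0.5) ≈ 0.707 m.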

The average absolute trajectory error refers to directly calculating the difference between the real value of the camera's pose and the value estimated by the SLAM system. The program first aligns the real and estimated values according to the pose timestamps and then calculates the difference for each pair of poses; this is very suitable for evaluating the performance of visual SLAM systems. The average relative trajectory error calculates the difference between the pose changes over the same two timestamps: after timestamp alignment, both the real and the estimated pose changes are computed over every identical period of time, and then their difference is calculated. It is suitable for estimating the drift of the system. The definition of the absolute trajectory error at frame i is as follows:

F_i = Q_i^{−1} S p_i,  (14)

where Q_i represents the real pose of frame i, p_i represents the pose of frame i estimated by the algorithm, and S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.

This experiment comprehensively compares the three datasets rgbd_dataset_freiburg1_desk, _360, and _room, of which the desk dataset is an ordinary scene, the 360 dataset is collected while the camera performs a 360-degree rotation motion, and the room dataset is a scene collected when the image sequence texture is not rich. ORB-SLAM2 generates different trajectory results each time; in order to eliminate the randomness of the experimental results, this article runs 6 experiments on each dataset under the same experimental environment to calculate the average absolute trajectory error, the average relative trajectory error, and the average tracking time.

Figure 11 200th frame image

Figure 12 Comparison between the algorithm of this paper and ORB-SLAM2 under motion blur (number of matches vs. timestamp, frames 170–200; curves: Ours, ORB-SLAM2)

Figure 13 Movement track


Tables 2–4 are plotted in Figures 15–17 to compare the algorithm performance by analyzing the trajectories.

Under the evaluation standard of absolute trajectory error, the algorithm in this paper has clear advantages. In the scene where the camera rotates 360°, the absolute trajectory error is reduced by about 35%, and in the ordinary scene it is reduced by about 48%. The improvement in absolute trajectory error is largest when the texture is not rich, indicating that the algorithm in this paper brings a considerable improvement on scenes whose image sequence textures are not rich. There is also a certain improvement in the situation of large-angle camera rotation.

There is also an advantage in relative trajectory error: in the ordinary scene it is 17.5% smaller than the average relative trajectory error of the ORB-SLAM2 system. In the scenes where the camera rotates 360° and where the image sequence texture is not rich, the algorithm in this paper outperforms the ORB-SLAM2 system by more than 40%. This proves that the algorithm in this article is indeed improved compared with the ORB-SLAM2 system.

Table 2 Average absolute trajectory error

Dataset   ORB-SLAM2 (m)   The proposed algorithm (m)
Desk      0.057630667     0.029899333
360       0.113049667     0.072683167
Room      0.229222167     0.111185333

Table 3 Average relative trajectory error

Dataset   ORB-SLAM2 (m)   The proposed algorithm (m)
Desk      0.0362615       0.029899
360       0.1534532       0.078611
Room      0.1592775       0.092167

Table 4 Average tracking time

Dataset   ORB-SLAM2 (s)   The proposed algorithm (s)
Desk      0.0244937       0.0289299
360       0.0186606       0.0225014
Room      0.0220836       0.0282757

Figure 14 Absolute trajectory error (error mapped onto the trajectory, with the reference trajectory; x, y, z in meters)


Due to the sharpening of the local image, the average tracking time is slightly increased, but the increase is not large.

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information entropy screening algorithm and a sharpening adjustment algorithm. Our advantage is that the information entropy threshold can be automatically calculated according to different scenes, giving the method generality. Sharpening the image blocks below the information entropy threshold increases their clarity so that feature points can be better extracted. The combination of information-entropy-based image block screening and the sharpening algorithm solves the system localization and mapping failures caused by large-angle camera rotation and by the lack of texture information. Although the processing time increases, the processing accuracy improves. We compare the average absolute trajectory error, the average relative trajectory error, and the average tracking time with those of ORB-SLAM2. The experimental results show that the improved model based on information entropy and sharpening achieves better results than ORB-SLAM2. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy; the combination with the sharpening algorithm improves the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), pp. 2090–2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and

ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, Nara, Japan, November 2007.

Figure 15 Average absolute trajectory error

Figure 16 Average relative trajectory error

Figure 17 Average tracking time

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

[5] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770–779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180–198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769–777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257–265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521–2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16–20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46–57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430–443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59–61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104–106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84–88, 1993.



2.2. Tracking Thread. The input to the tracking thread is each frame of the video. When the system is not initialized, the thread tries to initialize using the first two frames, mainly by initializing and matching the feature points of the first two frames. There are two initialization methods in the ORB-SLAM2 system, namely, initialization for a monocular camera and initialization for binocular and RGB-D cameras. The experiment in this article mainly uses a monocular camera.

After the initialization is complete, the system starts to track the camera position and posture. The system first uses the uniform velocity model for tracking; when the uniform velocity model fails, the key frame model is used. When both types of tracking fail, the system triggers the relocation function to relocate the frame. After the system obtains each frame, it detects and describes feature points and uses the feature descriptors for tracking and pose calculation. In the process of feature point extraction, image quality plays an important role; therefore, this article creatively adds a preprocessing algorithm to improve image quality before the image feature point extraction step.

At the same time, the system tracks the local map, matches the current frame with related key frames to form a set, and finds the corresponding key points. The bundle adjustment method is used to track the local map points and minimize the reprojection error, thereby optimizing the camera pose of the current frame. Finally, a judgment is made on whether to generate a key frame: a current frame that meets certain conditions is designated a key frame. The key frames selected in the tracking thread need to be inserted into the map for construction. A key frame contains map points as feature points; when more than a certain number of key frames have been collected, their key points are added to the map and become map points, and the map points that do not meet the conditions are deleted.

2.3. Local Mapping Thread. The input of the thread is the key frames inserted by the tracking thread. On the basis of newly added key frames, it maintains and adds new local map points. The key frames input to the thread are generated according to the rules set during tracking, so the result of the Local Mapping thread is also closely related to the quality of the images used for feature point extraction during tracking. The adaptive image preprocessing algorithm therefore also improves the Local Mapping thread by optimizing the extraction of feature points.

During the construction of the map, local BA optimization is performed: BA is used to minimize reprojection errors and optimize map points and poses. Because BA requires a large number of mathematical operations related to key frames, Local Mapping deletes redundant key frames in order to reduce time consumption.

2.4. Loop Closing Thread. During the continuous movement of the camera, the camera pose calculated by the computer and the map points obtained by triangulation cannot be completely consistent with the actual values; there is a certain error between them. As the number of frames increases, the error gradually accumulates. The most effective method to reduce these accumulated errors is closed-loop correction. ORB-SLAM2 uses a closed-loop detection method: when the camera re-enters a previous scene, the system detects the closed loop and performs global BA optimization to reduce the cumulative error. Therefore, when the ORB-SLAM2 system is applied to a large-scale scene, it shows higher robustness and usability. The input of the thread is the key frames screened by the Local Mapping thread. The bag-of-words vector of the current key frame is stored in the database of global bag-of-words vectors to speed up the matching of subsequent frames. At the same time, the thread detects whether there is a loopback; if one occurs, it optimizes the poses of all key frames through pose graph optimization and reduces the accumulated drift error. After the pose optimization is completed, a thread is started to execute global BA to obtain the optimal map points and key frame poses of the entire system.

2.5. The Process of the Sharpening Adjustment Algorithm Based on Information Entropy. Motion-blurred images are more disturbed by noise, which impacts the extraction of

Figure 1 Algorithm system framework (camera frame → sharpening adjustment based on information entropy → tracking → key frame → local mapping → loop closing)

Mathematical Problems in Engineering 3

feature points in the ORB-SLAM2 system. The number of stable feature points found on such images is insufficient; in images with too much blur, feature points may not be found at all. The insufficient number then affects the accuracy of the front-end pose estimation. When the feature points are fewer than the number specified by the system, the pose estimation algorithm cannot be performed, so the back-end cannot obtain the front-end information and the tracking fails. Against this background, this paper proposes a sharpening adjustment algorithm based on information entropy, which preprocesses the image before the tracking thread.

In images with poor texture and in blurred images, sharpening can make the corner information of the image more prominent, so that when ORB features are extracted in the tracking thread, feature points are easier to detect, which enhances the system's stability. Screening based on the information entropy threshold is added, and the image sharpening adjustment is performed only when the information entropy of an image block is less than the threshold. This reduces the time spent on sharpening adjustment, ensures the real-time performance of the system, and maintains the integrity of the image information. If the information entropy of an image block is greater than the threshold, the block is not processed.

The process of the sharpening adjustment algorithm based on the information entropy threshold is as follows, and the algorithm flow chart is shown in Figure 2:

(1) First, the input frame is converted into a grayscale image, the image is expanded into an 8-layer image pyramid under the effect of the scaling factor, and then each layer of the pyramid is divided into image blocks.

(2) Calculate the information entropy E of each image block and compare it with the information entropy threshold E0. An image block whose information entropy E is less than the threshold E0 contains little effective information and yields poor ORB feature extraction, so it must first be sharpened to enhance the detail in the image.

(3) After the sharpening process has been applied to the image blocks whose information entropy is less than the threshold, ORB feature points are extracted from them together with the image blocks whose information entropy is greater than the threshold. The feature points are extracted with the FAST feature point extraction algorithm on the pyramid. The quadtree homogenization algorithm is then applied so that the distribution of the extracted feature points becomes more uniform. This avoids clustering of feature points and thus makes the algorithm more robust.

(4) Then a BRIEF description of the feature points is computed, generating a binary descriptor for each homogenized feature point. The feature points combined with their BRIEF descriptors are called ORB features; they are invariant to viewpoint and lighting and are used later for graph matching and recognition in the ORB-SLAM2 system.
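The entropy-screening decision in steps (1)-(3) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it uses the histogram form of the entropy (eq. (9)), and the 32 × 32 block size and the example threshold are assumptions of ours.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy (bits) of an image block's gray-level histogram."""
    hist = np.bincount(block.ravel(), minlength=256) / block.size
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

def blocks_to_sharpen(gray, e0, block=32):
    """Step (2): return the top-left corners of the blocks whose entropy E
    falls below the threshold E0 -- only those blocks get sharpened in
    step (3). The 32x32 block size is an illustrative assumption."""
    coords = []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if block_entropy(gray[y:y + block, x:x + block]) < e0:
                coords.append((y, x))
    return coords

# A flat (low-entropy) left half and a noisy (high-entropy) right half:
# only the flat blocks are selected for sharpening.
rng = np.random.default_rng(0)
img = np.zeros((64, 128), dtype=np.uint8)
img[:, 64:] = rng.integers(0, 256, (64, 64), dtype=np.uint8)
sel = blocks_to_sharpen(img, e0=3.0)
```

The same screening would be repeated on every level of the image pyramid before feature extraction.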

3. The Novel Image Preprocessing Algorithm

We propose a sharpening adjustment algorithm based on information entropy and add it before the tracking thread, where the image is preprocessed. The framework above roughly describes the operation of the algorithm; the following explains in detail the relevant principles and details of the improved algorithm, including the ORB algorithm in ORB-SLAM2, the sharpening adjustment algorithm, the principle of information entropy, and the adaptive information entropy threshold algorithm.

3.1. ORB Algorithm. As explained above, the ORB-SLAM2 system relies heavily on the extraction and matching of feature points. Therefore, when running in an environment without rich texture, it may not obtain enough stable matching point pairs, resulting in the loss of position and attitude tracking. To better explain the improved system, we first introduce the feature point extraction algorithm used by the system.

The focus of image information processing is the extraction of feature points. The ORB algorithm combines two methods, FAST feature point detection and the BRIEF feature descriptor [14], and further optimizes and improves them. Compared with other feature point detection algorithms, the FAST corner detection algorithm has better real-time performance and robustness. The core of the FAST algorithm is to take a pixel and compare it with the gray values of the points around it: if the gray value of this pixel differs from most of the surrounding pixels, it is considered to be a feature point.

The specific steps of FAST corner detection are as follows. For each candidate pixel p, take the circle of 16 pixels with a radius of 3 centered on p and set a gray value threshold t. Comparing the 16 pixels on the circle, if there are n (here n = 12) consecutive pixels whose gray values are all greater than Ip + t or all less than Ip − t (where Ip is the gray value of point p), then p is determined to be a feature point. To improve the efficiency of detection, a simplified judgment is performed first: only pixels 1, 5, 9, and 13 on the circle are tested, and only when at least 3 of them satisfy the above condition are the remaining 12 points examined (Figure 3).
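The segment test described above can be sketched as follows. This is a minimal illustration of the n = 12 contiguous-pixel criterion; for brevity it omits the 1/5/9/13 pretest and non-maximum suppression that a real detector would add, and the circle offsets follow the standard radius-3 Bresenham circle.

```python
import numpy as np

# Offsets of the 16 pixels on the radius-3 Bresenham circle around p.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, y, x, t=20, n=12):
    """FAST segment test: p is a corner if n contiguous circle pixels are
    all brighter than Ip + t or all darker than Ip - t."""
    ip = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (1, -1):                 # test "brighter" then "darker"
        flags = [sign * (v - ip) > t for v in ring]
        run = best = 0
        for f in flags + flags:          # doubling handles wrap-around
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# A bright dot on a dark background passes the test; a flat patch does not.
img = np.zeros((9, 9), dtype=np.uint8)
img[4, 4] = 200
```

Every circle pixel is darker than Ip − t here, so the "darker" branch fires with a run of 16 ≥ 12.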

The concept of the intensity centroid is introduced to give FAST corners scale and rotation invariance. The centroid treats the gray values of the image block as weights determining a center of mass. The specific steps are as follows:

(1) In a small image block B, the moments of the image block are defined as


$m_{pq} = \sum_{x,y \in B} x^{p} y^{q} I(x,y), \quad p,q \in \{0,1\}$ (1)

(2) The centroid of the image block can be found from the moments:

$C = \left( \dfrac{m_{10}}{m_{00}}, \dfrac{m_{01}}{m_{00}} \right)$ (2)

(3) Connecting the geometric center O and the centroid C of the image block gives a direction vector $\overrightarrow{OC}$; the feature point direction can then be defined as follows:

Figure 2: Process of the sharpening adjustment algorithm based on information entropy (build an image pyramid, divide it into image blocks, calculate the local entropy E, sharpen a block only if E ≤ E0, then ORB feature extraction, descriptor generation, matching, and ICP motion estimation; E is the local entropy and E0 the local entropy threshold).

Figure 3: FAST feature point detection (the 16-pixel circle around the candidate pixel p).


$\theta = \arctan\left( \dfrac{m_{01}}{m_{10}} \right)$ (3)

Through the above method, FAST corners acquire a description of scale and rotation, which greatly improves the robustness of their representation across different images. This improved FAST is therefore called Oriented FAST in ORB.
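Equations (1)-(3) can be sketched directly. One note: the sketch uses atan2 instead of a plain arctan so the angle lands in the correct quadrant (as practical ORB implementations do); the patch shape and contents are illustrative assumptions.

```python
import numpy as np

def orientation(patch):
    """Oriented FAST direction, eqs. (1)-(3): the first-order moments
    m10, m01 of the patch locate the intensity centroid relative to the
    geometric center O, and theta = atan2(m01, m10)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0          # coordinates relative to the center O
    ys -= (h - 1) / 2.0
    m10 = float(np.sum(xs * patch))
    m01 = float(np.sum(ys * patch))
    return float(np.arctan2(m01, m10))   # atan2 keeps the quadrant

# A patch brighter on its right side points along the +x axis (theta = 0);
# transposing it makes it brighter toward the bottom (theta = pi/2).
patch = np.zeros((31, 31))
patch[:, 20:] = 255.0
theta = orientation(patch)
```

Rotating the BRIEF sampling pattern by this theta is what makes the resulting ORB descriptor rotation-invariant.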

After feature point detection is completed, a feature description is needed to represent and store the feature point information and improve the efficiency of subsequent matching. The ORB algorithm uses the BRIEF descriptor, whose main idea is to randomly select several point pairs around the feature point according to a probability distribution, combine the comparisons of the gray values of these point pairs into a binary string, and finally use this binary string as the descriptor of the feature point.

The BRIEF feature descriptor determines the binary descriptor by comparing pairs of gray values. As shown in Figure 4, if the gray value at pixel x is less than the gray value at y, the binary value of the corresponding bit is assigned 1; otherwise it is 0:

$\tau(p; x, y) = \begin{cases} 1, & I(x) < I(y) \\ 0, & I(x) \ge I(y) \end{cases}$ (4)

where I(x) and I(y) are the gray values at the corresponding points after image smoothing.

For n (n = 256) test point pairs $(x_i, y_i)$, the corresponding n-dimensional BRIEF descriptor can be formed according to the following formula:

$f_n(p) = \sum_{1 \le i \le n} 2^{i-1} \tau(p; x_i, y_i)$ (5)

However, the BRIEF descriptor itself does not have rotation invariance, so the ORB algorithm improves it: when calculating the BRIEF descriptor, ORB rotates the sampling pattern into the main direction of the feature point, thereby ensuring that the point pairs selected for the same feature point are the same even when the rotation angle differs.
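Equations (4)-(5) amount to packing pairwise intensity comparisons into a bit string. The sketch below uses only 8 hypothetical point pairs (real BRIEF uses n = 256 pairs drawn from a Gaussian around the keypoint on a pre-smoothed image; here the pairs are sampled uniformly for brevity).

```python
import numpy as np

def brief_descriptor(img, kp, pairs):
    """Eqs. (4)-(5): compare the intensities at each sampled point pair
    around keypoint kp and pack the tau bits into one integer."""
    y0, x0 = kp
    desc = 0
    for i, ((dx1, dy1), (dx2, dy2)) in enumerate(pairs):
        if img[y0 + dy1, x0 + dx1] < img[y0 + dy2, x0 + dx2]:  # tau = 1
            desc |= 1 << i
    return desc

# Hypothetical 8 point pairs within a 9x9 window around the keypoint.
rng = np.random.default_rng(1)
pairs = rng.integers(-4, 5, (8, 2, 2)).tolist()
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
d1 = brief_descriptor(img, (16, 16), pairs)
d2 = brief_descriptor(img, (16, 16), pairs)   # same pairs -> same bits
```

Two such descriptors are matched by Hamming distance, e.g. `bin(d1 ^ d2).count("1")`, which is why the binary form makes matching so fast.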

3.2. Sharpening Adjustment. In the process of image transmission and conversion, the sharpness of the image is reduced; in essence, the target contours and detail edges in the image are blurred. In feature point extraction, we traverse the image and select as feature points the pixels that differ strongly from their surrounding pixels, which requires the image's target contours and detail edge information to be prominent. Therefore, we introduce image sharpening.

The purpose of image sharpening is to make the edges and contours of the image clear and to enhance image detail [15]. Image sharpening methods include the statistical difference method, the discrete spatial difference method, and the spatial high-pass filtering method. Generally, the energy of an image is concentrated mainly at low frequencies, while both noise and the edge information of the image are concentrated mainly in the high-frequency band. Smoothing is usually used to remove high-frequency noise, but it also blurs the edges and contours of the image, which affects feature point extraction. The root cause of the blur is that the image has been subjected to an averaging or integral operation, so performing the inverse operation (a differential operation) on the image can restore its clarity.

To reduce these adverse effects, we adopt high-pass filtering (a second-order differential) in the spatial domain and use a Laplacian operator to perform a convolution over each pixel of the image, increasing the difference between each element and its surrounding elements to sharpen the image. If the variables of the convolution are the sequences x(n) and h(n), the result of the convolution is

$y(n) = \sum_{i=-\infty}^{\infty} x(i)\, h(n-i) = x(n) * h(n)$ (6)

The convolution operation on the divided image blocks actually slides the convolution kernel over the image: each pixel gray value is multiplied by the corresponding value of the convolution kernel, and all the products are summed to give the new gray value of the pixel under the center of the kernel. The expression of the convolution function is as follows:

$\mathrm{dst}(x, y) = \sum_{\substack{0 \le x' < \mathrm{kernel.cols} \\ 0 \le y' < \mathrm{kernel.rows}}} \mathrm{kernel}(x', y') \cdot \mathrm{src}(x + x' - \mathrm{anchor}.x,\; y + y' - \mathrm{anchor}.y)$ (7)

where anchor is the reference point of the kernel and kernel is the convolution kernel. The convolution template used here is the matrix form of the Laplacian-variant operator, shown below:

$H = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}$ (8)

The Laplacian-variant operator uses the second-derivative information of the image and is isotropic.
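The filtering of eqs. (7)-(8) can be sketched as a direct nested loop (note eq. (7) is written as a correlation, i.e., without flipping the kernel; since H is symmetric the result is identical either way). This illustrative version leaves a one-pixel border unchanged rather than padding, which a production implementation would handle differently.

```python
import numpy as np

H = np.array([[0, -1, 0],
              [-1, 5, -1],
              [0, -1, 0]])      # Laplacian-variant kernel of eq. (8)

def sharpen(block):
    """Spatial high-pass sharpening, eq. (7): replace each interior pixel
    by the H-weighted sum of its 3x3 neighbourhood, clipped to [0, 255]."""
    src = block.astype(np.int32)       # avoid uint8 overflow
    out = src.copy()                   # border pixels stay unchanged
    h, w = src.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(src[y - 1:y + 2, x - 1:x + 2] * H)
    return np.clip(out, 0, 255).astype(np.uint8)

# On a flat region H acts as the identity (its weights sum to 1), so
# sharpening changes nothing; across an edge it overshoots, which is
# exactly what makes contours and corners more prominent.
flat = np.full((5, 5), 100, dtype=np.uint8)
```

The identity-on-flat-regions property is why only the 5 at the kernel center (4 + 1) differs from a pure Laplacian: the original image is added back to the edge response.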

Figure 4: BRIEF description.


The results of applying the discrete-convolution sharpening algorithm adopted in this paper to the classic Lenna image are shown in Figure 5.

The processing results show that the high-pass filtering effect of this image sharpening algorithm is obvious: it enhances the details and edge information of the image while keeping the noise small, so this sharpening algorithm was selected.

3.3. Screening Based on Information Entropy. In 1948, Shannon proposed the concept of "information entropy" to solve the problem of quantitatively measuring information. In theory, entropy is a measure of the degree of disorder of information and is used here to measure the uncertainty of the information in an image: the larger the entropy value, the higher the degree of disorder. In image processing, entropy reflects the information richness of the image and indicates the amount of information it contains. The information entropy calculation formula is

$H(x) = -\sum_{i=0}^{255} p(x_i) \log_2 p(x_i)$ (9)

where p(x_i) is the probability of a pixel with gray level i (i = 0, …, 255) appearing in the image. When this probability is closer to 1, the uncertainty of the information is smaller, the receiver can predict the transmitted content, and the amount of information carried is smaller, and vice versa.

If the amount of information contained in the image is expressed by information entropy, then the entropy value of an image of size M × N is defined as follows:

$p_{ij} = \dfrac{f(i, j)}{\sum_{i=1}^{M} \sum_{j=1}^{N} f(i, j)}$ (10)

$H = -\sum_{i=1}^{M} \sum_{j=1}^{N} p_{ij} \log_2 p_{ij}$ (11)

where f(i, j) is the gray level at point (i, j) in the image, p_{ij} is the gray distribution probability at that point, and H is the entropy of the image. If M × N is taken as a local neighborhood centered at (i, j), then H is called the local information entropy of the image [16].
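Equations (10)-(11) normalize the gray values themselves (not a histogram) into a distribution before taking the Shannon entropy. A minimal sketch, with the example blocks being our own illustration:

```python
import numpy as np

def image_entropy(f):
    """Eqs. (10)-(11): normalise the gray values f(i, j) to a distribution
    p_ij over the M x N block and take the Shannon entropy."""
    p = f.astype(float) / f.sum()
    p = p[p > 0]                      # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

# A uniform M x N block attains the maximum entropy log2(M * N) = 6 bits
# for an 8x8 block, while concentrating all intensity in a single pixel
# gives entropy 0 -- the two extremes the threshold E0 discriminates.
uniform = np.full((8, 8), 7, dtype=np.uint8)
peaked = np.zeros((8, 8), dtype=np.uint8)
peaked[0, 0] = 255
```

Under this measure, flat or weakly textured blocks score low and are routed to the sharpening step, while richly textured blocks pass through untouched.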

From the perspective of information theory, when the probability of an image pattern is small, the information entropy of the image is large, indicating that a large amount of information is transmitted. In feature point extraction, the information entropy reflects the richness of the texture information contained in the local image, or the degree of change of the image pixel gradient: the larger the information entropy value, the richer the image texture information and the more obvious the change of the image pixel gradient [17].

In that case, the effect of ORB feature point extraction is good and the image block does not require detail enhancement. Where the local information entropy value is low, the effect of ORB feature point extraction is poor, as shown in Figure 6. After sharpening adjustments to enhance detail, the effect of feature point extraction is optimized [18], as shown in Figure 7. Comparing with the feature point extraction of ORB-SLAM2, it can be seen intuitively that the optimized extraction algorithm performs better.

3.4. Adaptive Information Entropy Threshold. Because information entropy is closely related to the scene, video sequences from different scenes have different information richness, so the information entropy threshold must also differ between scenes. Setting the threshold for each scene would otherwise require repeated experiments and matching calculations to obtain the corresponding value. Moreover, the empirical values differ greatly between scenes; a fixed threshold is not universal, which affects image preprocessing and feature extraction and prevents quickly obtaining good matching results. An adaptive algorithm for the information entropy threshold is therefore particularly important [19].

Figure 5: Sharpening adjustment. (a) Original image. (b) Sharpened image.


In view of the above problems, we propose an adaptive method for the information entropy threshold, which adjusts the threshold according to the scenario. The self-adjusting formula is

$E_0 = \dfrac{H(i)_{\mathrm{ave}}}{2} + \delta$ (12)

where H(i)_{ave} is the average information entropy of the scene, obtained by computing the information entropy of each frame of a video of the scene during the first run and dividing by the number of frames; i is the number of frames in the video sequence; and δ is a correction factor, experimentally set to 0.3, which works best. The E0 calculated by the above formula is the information entropy threshold of the scene.
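Equation (12) is simple enough to state directly in code; the example frame entropies are hypothetical values for illustration.

```python
def adaptive_entropy_threshold(frame_entropies, delta=0.3):
    """Eq. (12): E0 = H_ave / 2 + delta, where H_ave is the mean
    information entropy over all frames of the sequence and delta = 0.3
    is the paper's empirically chosen correction factor."""
    h_ave = sum(frame_entropies) / len(frame_entropies)
    return h_ave / 2.0 + delta

# e.g. a sequence whose frames average 6.4 bits of entropy -> E0 = 3.5.
e0 = adaptive_entropy_threshold([6.2, 6.6, 6.4])
```

Scenes with rich texture thus automatically get a higher threshold (more blocks pass unsharpened), while low-texture scenes get a lower one.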

4. Experimental Results and Analysis

All experiments were conducted on a laptop computer. The operating system is 64-bit Linux 16.04; the processor is an Intel(R) Core(TM) i5-7300U CPU @ 2.50 GHz; the development environment is CLion 2019 with OpenCV 3.3.0; and the program is compiled as C++. The back-end uses g2o for pose graph optimization to generate the motion trajectories. The image sequences are read at 30 frames per second, the same as for the open-source ORB-SLAM2 system used for comparison.

4.1. Comparison of Two-Frame Image Matching. The rgbd_dataset_freiburg1_desk dataset is selected, from which two blurry images with large rotation angles are extracted, and image detection and matching are performed using the ORB, ORB + information entropy, and ORB + information entropy + sharpening extraction algorithms. We compare the extraction effect, the number of key points, the number of matches, the number of matches after RANSAC filtering, and the matching rate. The matching statistics are shown in Table 1, and the matching effects are shown in Figures 8, 9, and 10.

Table 1 shows the results of feature point detection and matching with ORB, ORB + information entropy, and ORB + information entropy + sharpening. The correct matching points and matching rates of ORB and ORB + information entropy are basically the same, and the matching effect diagram of the latter shows mismatches with large errors. For ORB + information entropy + sharpening, the number of correct matching points increases significantly, and its matching rate is improved by 36.7% and 38.8% relative to the ORB and ORB + information entropy extraction algorithms, respectively.

The results show that adding the information entropy screening and sharpening process proposed in this paper can effectively increase the number of matches and improve the matching rate. The accuracy of image feature detection and matching is improved, which provides more accurate and effective input for subsequent processes such as tracking, mapping, and loop detection.

4.2. Comparison of Image Matching in the Visual Odometer. For a timestamp sequence covering different degrees of blur and rotation angles (frames 170–200), the feature detection and matching of the ORB-SLAM2 algorithm and of the improved algorithm of this paper are run on adjacent frame pairs to compare the matching points of the two algorithms on different images. The image is clear between frames 170 and 180, motion blur starts between frames 180 and 190, and the maximum rotation angle occurs between frames 190 and 200. The most dramatic motion blur is shown in Figure 11.

It can be seen from Figure 12 that the improved algorithm matches more feature points than the original algorithm. The number of matching points of the improved algorithm exceeds 50, and tracking loss does not occur even when the image is blurred by fast motion. When the resolution of the image is reduced, the improved algorithm still finds more matching feature points and has strong anti-interference ability and robustness.

Figure 6: Feature point extraction based on ORB-SLAM2.

Figure 7: Extraction of sharpening-adjusted feature points based on information entropy.

4.3. Analysis of Trajectory Tracking Accuracy. The rgbd_dataset_freiburg1_desk dataset from the TUM data is selected for the experiment. We compare the real trajectory of the robot provided by TUM with the motion trajectory calculated by the algorithm to verify the improvement effect of the proposed algorithm. Taking the average absolute trajectory error, the average relative trajectory error, and the average tracking time as standards, the trajectory tracking accuracy and real-time performance of the ORB-SLAM2 system and the improved SLAM system are compared comprehensively.

Figure 8: Matching effect of ORB.

Figure 9: Matching effect of ORB + information entropy.

Figure 10: Matching effect of ORB + information entropy + sharpening.

Table 1: Comparison of matching rates of the three feature detection and matching algorithms.

Model                   Key points   Matches   After RANSAC   Match rate (%)
ORB                     1000/1004    254       87             34.25
+ information entropy   1000/1002    255       86             33.73
+ sharpening            1000/983     284       133            46.83


We use the information entropy of image blocks to determine the amount of information they contain. The algorithm sharpens the image blocks with small information entropy, enhances local image details, and extracts local feature points that characterize the image information for the back-end, serving as the correlation basis for matching adjacent frames and key frames. This enhances robustness and reduces the loss of motion tracking caused by failed inter-frame matching. Based on the matching result, the R and t transformation between frames is calculated, and the back-end uses g2o to optimize the pose based on the pose graph. Finally, the motion trajectory is generated, as shown in Figure 13.

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true value of the camera's pose and the value estimated by the SLAM system. The root mean square of the absolute trajectory error, RMSE(x), is defined as follows:

$\mathrm{RMSE}(x) = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} \left\| x_{e,i} - x_{s,i} \right\|^{2}}$ (13)

where x_{e,i} represents the estimated position of the i-th frame in the image sequence and x_{s,i} represents the ground-truth position of the i-th frame. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the ground truth is shown in Figure 14.
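Equation (13) is straightforward to compute once the estimated and ground-truth trajectories are timestamp-aligned. A minimal sketch, with two made-up frames for illustration (real evaluation tools also estimate the Sim(3) alignment of eq. (14) first):

```python
import numpy as np

def ate_rmse(est, ref):
    """Eq. (13): root-mean-square of the per-frame position error between
    estimated positions x_e and reference positions x_s (both already
    timestamp-aligned, one 3-D position per frame)."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    d = np.linalg.norm(est - ref, axis=1)     # ||x_ei - x_si|| per frame
    return float(np.sqrt(np.mean(d ** 2)))

# Two frames with 3 cm and 4 cm position errors.
est = [[0.0, 0.0, 0.00], [1.0, 0.00, 0.0]]
ref = [[0.0, 0.0, 0.03], [1.0, 0.04, 0.0]]
rmse = ate_rmse(est, ref)
```

Because the errors are squared before averaging, the RMSE penalizes occasional large deviations more heavily than a plain mean error would.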

The average absolute trajectory error directly measures the difference between the real value of the camera's pose and the value estimated by the SLAM system: the program first aligns the real and estimated values according to the pose timestamps and then calculates the difference for each pair of poses. This measure is very suitable for evaluating the performance of visual SLAM systems. The average relative trajectory error calculates the difference between the pose changes over the same pairs of timestamps: after timestamp alignment, both the real pose change and the estimated pose change are computed over every interval of the same duration, and then their difference is calculated. It is suitable for estimating the drift of the system. The absolute trajectory error at frame i is defined as follows:

$F_i = Q_i^{-1}\, S\, p_i$ (14)

where Q_i represents the real pose of the frame, p_i represents the pose of the frame estimated by the algorithm, and S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.

This experiment comprehensively compares three datasets, rgbd_dataset_freiburg1_desk, 360, and room: the desk dataset is an ordinary scene, the 360 dataset was collected while the camera performed a 360-degree rotation, and the room dataset is a scene whose image sequence is not rich in texture. ORB-SLAM2 generates slightly different trajectory results on each run, so to eliminate the randomness of the experimental results, each dataset was run 6 times under the same experimental environment to calculate the average absolute trajectory error, the average relative trajectory

Figure 11: The 200th frame image.

Figure 12: Comparison of the number of matches between the algorithm of this paper and ORB-SLAM2 under motion blur (frames 170–200).

Figure 13: Movement track.


error, and the average tracking time. Tables 2–4 are plotted in Figures 15–17 to compare the performance of the algorithms by analyzing the trajectories.

Under the evaluation standard of absolute trajectory error, the algorithm in this paper has clear advantages. In the scene where the camera rotates 360°, the absolute trajectory error is improved by 35%, and in the ordinary scene by 48%. The largest improvement in absolute trajectory error occurs when the texture is not rich, indicating that the algorithm in this paper brings a considerable improvement to scenes whose image sequences lack rich texture; there is also a certain improvement in the situation of large-angle camera rotation.

There is also an advantage in relative trajectory error: in ordinary scenes, it is 17.5% smaller than the average relative trajectory error of the ORB-SLAM2 system. In the scenes where the camera rotates 360° and where the image sequence texture is not rich, the algorithm of this paper outperforms the ORB-SLAM2 system by more than 40%. This proves that the algorithm in this article indeed improves on the ORB-SLAM2 system.

Table 2: Average absolute trajectory error (m).

Dataset   ORB-SLAM2     The proposed algorithm
Desk      0.057630667   0.029899333
360       0.113049667   0.072683167
Room      0.229222167   0.111185333

Table 3: Average relative trajectory error (m).

Dataset   ORB-SLAM2   The proposed algorithm
Desk      0.0362615   0.029899
360       0.1534532   0.078611
Room      0.1592775   0.092167

Table 4: Average tracking time (s).

Dataset   ORB-SLAM2   The proposed algorithm
Desk      0.0244937   0.0289299
360       0.0186606   0.0225014
Room      0.0220836   0.0282757

Figure 14: Absolute trajectory error mapped onto the trajectory (x, y, z in m, with the reference trajectory for comparison).


Due to the sharpening of local image blocks, the average tracking time increases slightly, but the increase is not large.

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information entropy screening algorithm and a sharpening adjustment algorithm. Its advantage is that the information entropy threshold can be calculated automatically for different scenes, giving it generality. Sharpening the image blocks below the information entropy threshold increases their clarity so that feature points can be extracted better. The combination of information entropy screening of image blocks and the sharpening algorithm solves the system localization and mapping failures caused by large-angle camera rotation and by the lack of texture information. Although the processing time increases, the processing accuracy improves. We compared the average absolute trajectory error, the average relative trajectory error, and the average tracking time with ORB-SLAM2. The experimental results show that the improved model based on information entropy and sharpening achieves better results than ORB-SLAM2. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy; together with the sharpening algorithm, this improves the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), pp. 2090–2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and

Figure 15: Average absolute trajectory error.

Figure 16: Average relative trajectory error.

Figure 17: Average tracking time.


ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, Nara, Japan, November 2007.

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

[5] R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770–779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180–198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769–777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257–265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521–2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16–20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46–57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430–443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59–61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104–106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84–88, 1993.

Mathematical Problems in Engineering 13

Page 4: ImprovedORB-SLAM2AlgorithmBasedonInformationEntropy ...downloads.hindawi.com/journals/mpe/2020/4724310.pdf · In 2007, Andrew Davison proposed Mono-SLAM [2], which is the real-time

feature points in the ORB-SLAM2 system e number ofstable feature points found on these images is insufficiente feature points cannot even be found in the images withtoo much blurs e insufficient number will then affect theaccuracy of the front-end pose estimation When the featurepoints are less than the number specified by the system thepose estimation algorithm cannot be performed so that theback-end cannot obtain the front-end information and thetracking fails Based on this background this paper proposesa sharpening adjustment algorithm based on informationentropy preprocessing the image before the tracking thread

In images with poor texture and in blurred images, sharpening can make the corner information of the image more prominent, so that when ORB features are extracted in the tracking thread, feature points are easier to detect, which enhances the system's stability. Screening based on an information entropy threshold is added, and the image sharpening adjustment is performed only when the information entropy of an image block is less than the threshold. This reduces the time spent on sharpening, ensures the real-time performance of the system, and maintains the integrity of the image information. If the information entropy of an image block is larger than the threshold, the block is not processed.

The process of the sharpening adjustment algorithm based on the information entropy threshold is as follows; the algorithm flow chart is shown in Figure 2.

(1) First, an input frame is converted into a grayscale image, the image is expanded into an 8-layer image pyramid under the effect of the scaling factor, and then each layer of the pyramid is divided into image blocks.

(2) Calculate the information entropy E of each image block and compare it with the information entropy threshold E0. An image block whose information entropy E is less than the threshold E0 contains less effective information and gives poor ORB feature extraction results, so it must first be sharpened to enhance its detail.

(3) After the image blocks whose information entropy is less than the threshold have been sharpened, ORB feature points are extracted from them together with the image blocks whose information entropy is greater than the threshold. The feature points are extracted with the FAST feature point extraction algorithm on the pyramid, and the quadtree homogenization algorithm is used to make the distribution of the extracted feature points more uniform. This avoids clustering of feature points and thus makes the algorithm more robust.

(4) Then a BRIEF description of the feature points is performed to generate binary descriptors for the homogenized feature points. Feature points paired with BRIEF descriptors are called ORB features, which are invariant to viewpoint and lighting; they are used later in the ORB-SLAM2 system for graph matching and recognition.

3. The Novel Image Preprocessing Algorithm

We propose and add a sharpening adjustment algorithm based on information entropy: before the tracking thread, the image is preprocessed. The framework above roughly describes the specific operation of the algorithm. The following explains in detail the relevant principles and details of the improved algorithm, including the ORB algorithm in ORB-SLAM2, the sharpening adjustment algorithm, the principle and calculation of information entropy, and the adaptive information entropy threshold algorithm.

3.1. ORB Algorithm. As explained above, the ORB-SLAM2 system relies heavily on the extraction and matching of feature points. Therefore, when running in an environment whose texture is not rich, the system cannot obtain enough stable matching point pairs, resulting in loss of position and attitude tracking. In order to better explain the improved system in this article, we first introduce the feature point extraction algorithm used by the system.

The focus of image information processing is the extraction of feature points. The ORB algorithm combines two methods, FAST feature point detection and the BRIEF feature descriptor [14], and makes further optimizations and improvements on their basis. Compared with other feature point detection algorithms, the FAST algorithm has better real-time performance and robustness. The core of the FAST algorithm is to take a pixel and compare it with the gray values of the points around it; if the gray value of this pixel differs from most of the surrounding pixels, it is considered to be a feature point.

The specific steps of FAST corner detection are as follows. For each detected pixel p, consider the 16 pixels on a circle of radius 3 centered at p, and set a gray-value threshold t. Comparing the 16 pixels on the circle, when there are n (usually 12) consecutive pixels whose gray values are all greater than Ip + t or all less than Ip − t (Ip is the gray value of point p), p is determined to be a feature point. In order to improve the efficiency of detection, a simplified judgment is performed first: it is only necessary to detect whether the gray values of pixels 1, 5, 9, and 13 satisfy the above condition; only when at least 3 of these points meet it are the remaining 12 points detected (Figure 3).
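As an illustrative sketch (not the ORB-SLAM2 implementation, which is written in C++/OpenCV), the segment test described above can be coded directly; the 16 circle offsets below are the standard Bresenham-circle layout, and the pre-check uses pixels 1, 5, 9, and 13:

```python
# Offsets of the 16 circle pixels (radius 3) around a candidate pixel,
# numbered 1..16 as in Figure 3; indices 0, 4, 8, 12 are pixels 1, 5, 9, 13.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t, n=12):
    """FAST segment test: n contiguous circle pixels all brighter than
    Ip + t or all darker than Ip - t (img is a list of rows)."""
    Ip = img[y][x]
    vals = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    # Quick rejection: at least 3 of pixels 1, 5, 9, 13 must pass the test.
    quick = [vals[i] for i in (0, 4, 8, 12)]
    if (sum(v > Ip + t for v in quick) < 3 and
            sum(v < Ip - t for v in quick) < 3):
        return False
    def longest_run(pred):                  # contiguous run with wrap-around
        best = run = 0
        for v in vals * 2:
            run = run + 1 if pred(v) else 0
            best = max(best, run)
        return min(best, 16)
    return (longest_run(lambda v: v > Ip + t) >= n or
            longest_run(lambda v: v < Ip - t) >= n)
```

A bright dot on a dark background passes the full test, while a flat patch is rejected already by the 1-5-9-13 pre-check, which is what makes the simplified judgment cheap.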

The concept of the centroid is introduced to obtain the scale and rotation invariance of FAST corner points; the centroid weights positions by the gray values of the image block. The specific steps are as follows:

(1) In a small image block B, the moments of the image block are defined as


m_{pq} = \sum_{x, y \in B} x^p y^q I(x, y), \quad p, q = 0, 1  (1)

(2) The centroid of the image block can be found from these moments:

C = \left( \frac{m_{10}}{m_{00}}, \; \frac{m_{01}}{m_{00}} \right)  (2)

(3) Connecting the geometric center O and the centroid C of the image block gives a direction vector \vec{OC}, and the feature point direction can then be defined as follows:

Figure 2: Process of the sharpening adjustment algorithm based on information entropy. (Flowchart: build an image pyramid → divide image blocks → calculate the local entropy E of each block → if E ≤ E0, perform the sharpening adjustment, otherwise proceed directly → ORB feature extraction → generate descriptors → matching → ICP motion estimation. E is the local entropy and E0 is the local entropy threshold.)

Figure 3: FAST feature point detection (the 16 pixels, numbered 1-16, on a circle of radius 3 around the candidate pixel p).


\theta = \arctan\left( \frac{m_{01}}{m_{10}} \right)  (3)

Through the above methods, FAST corners acquire a description of scale and rotation, which greatly improves the robustness of their representation across different images. This improved FAST is therefore called Oriented FAST in ORB.
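A minimal sketch of the intensity-centroid orientation of equations (1)-(3); `atan2` is used instead of a bare arctan so the quadrant signs are preserved, and patch coordinates are assumed to start at the top-left corner:

```python
from math import atan2

def patch_orientation(patch):
    """Compute theta = atan2(m01, m10) from the image moments of eq. (1)."""
    m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            m10 += x * v   # m10: sum of x^1 * y^0 * I(x, y)
            m01 += y * v   # m01: sum of x^0 * y^1 * I(x, y)
    return atan2(m01, m10)
```

A patch whose intensity mass lies on the diagonal yields an angle of pi/4, and mass along the x axis yields 0.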

After feature point detection is completed, a feature description must be used to represent and store the feature point information so as to improve the efficiency of subsequent matching. The ORB algorithm uses the BRIEF descriptor, whose main idea is to randomly select several point pairs around the feature point according to a certain probability distribution, combine the gray values of these point pairs into a binary string, and use this binary string as the descriptor of the feature point.

The BRIEF feature descriptor determines the binary descriptor by comparing the gray values of point pairs. As shown in Figure 4, if the gray value at pixel x is less than the gray value at y, the binary value of the corresponding bit is assigned 1; otherwise it is 0:

\tau(p; x, y) = \begin{cases} 1, & I(x) < I(y) \\ 0, & I(x) \ge I(y) \end{cases}  (4)

Here, I(x) and I(y) are the gray values at the corresponding points after image smoothing.

For n (256 in ORB) test point pairs (x, y), the corresponding n-dimensional BRIEF descriptor is formed according to the following formula:

f_n(p) = \sum_{1 \le i \le n} 2^{i-1} \tau(p; x_i, y_i)  (5)

However, the BRIEF descriptor does not have rotation invariance, so the ORB algorithm improves it: when calculating the BRIEF descriptor, ORB steers the test pattern by the feature point's main direction, thereby ensuring that the point pairs selected for the same feature point are the same even when the rotation angle differs.
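An illustrative sketch of equations (4)-(5) — a plain, unrotated BRIEF, where the sampling pattern is a seeded random set of offset pairs rather than ORB's learned pattern:

```python
import random

def brief_descriptor(img, x, y, pairs):
    """Pack one tau bit per test pair into an integer (eq. (4)-(5))."""
    desc = 0
    for i, ((ax, ay), (bx, by)) in enumerate(pairs):
        # tau = 1 if the gray value at the first point is less than at the second
        if img[y + ay][x + ax] < img[y + by][x + bx]:
            desc |= 1 << i
    return desc

def make_pairs(n=32, radius=4, seed=0):
    """Random offset pairs inside the patch; seeded so they are reproducible,
    which is what makes descriptors comparable across images."""
    rng = random.Random(seed)
    pt = lambda: (rng.randint(-radius, radius), rng.randint(-radius, radius))
    return [(pt(), pt()) for _ in range(n)]
```

Identical patches produce identical descriptors (Hamming distance 0); the matching stage compares descriptors by counting differing bits, which for packed integers is just the popcount of their XOR.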

3.2. Sharpening Adjustment. In the process of image transmission and conversion, the sharpness of the image is reduced; in essence, the target contours and detail edges in the image are blurred. In the extraction of feature points, what we need to do is traverse the image and select as feature points the pixels that differ greatly from their surrounding pixels, which requires the image's target contour and detail edge information to be prominent. Therefore, we introduce image sharpening.

The purpose of image sharpening is to make the edges and contours of the image clear and to enhance its details [15]. The methods of image sharpening include the statistical difference method, the discrete spatial difference method, and the spatial high-pass filtering method. Generally, the energy of an image is mainly concentrated at low frequencies, while both the noise and the edge information of the image are mainly concentrated in the high-frequency band. Usually, smoothing is used to remove the high-frequency noise, but this also blurs the edges and contours of the image, which affects the extraction of feature points. The root cause of the blur is that the image has been subjected to an averaging or integral operation, so performing the inverse operation (a differential operation) on the image can restore its clarity.

In order to reduce these adverse effects, we adopt high-pass filtering (second-order differentiation) in the spatial domain and use the Laplacian operator to perform a convolution with each pixel of the image, increasing the variance between each element of the matrix and its surrounding elements so as to sharpen the image. If the variables of the convolution are the sequences x(n) and h(n), the result of the convolution is

y(n) = \sum_{i = -\infty}^{\infty} x(i) h(n - i) = x(n) * h(n)  (6)

The convolution operation on the divided image blocks actually slides the convolution kernel over the image: each pixel's gray value is multiplied by the corresponding value of the convolution kernel, and then all products are summed to give the gray value of the image pixel corresponding to the middle element of the kernel. The expression of the convolution function is as follows:

\mathrm{dst}(x, y) = \sum_{\substack{0 \le x' < \mathrm{kernel.cols} \\ 0 \le y' < \mathrm{kernel.rows}}} \mathrm{kernel}(x', y') \cdot \mathrm{src}(x + x' - \mathrm{anchor}.x, \; y + y' - \mathrm{anchor}.y)  (7)

where anchor is the reference point of the kernel and kernel is the convolution kernel. The convolution template used here is the matrix form of the Laplacian variant operator, shown below:

H = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}  (8)

The Laplacian variant operator uses the second-derivative information of the image and is isotropic.
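A direct sketch of the convolution of equations (7)-(8) with the five-center Laplacian-variant kernel; leaving border pixels unchanged and clipping to the 8-bit range are our simplifying choices, not details stated in the paper:

```python
# Laplacian-variant sharpening kernel H of eq. (8)
KERNEL = [[0, -1, 0],
          [-1, 5, -1],
          [0, -1, 0]]

def sharpen(img):
    """Convolve each interior pixel with KERNEL (anchor at the kernel center)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]            # border pixels copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(KERNEL[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, s))  # clip to the 8-bit gray range
    return out
```

On a flat region the kernel weights sum to 1, so the output equals the input; across a step edge the result overshoots on the bright side and undershoots on the dark side, which is exactly the edge enhancement wanted before feature extraction.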

Figure 4 BRIEF description


The results of applying the sharpening algorithm adopted in this paper to the classic Lenna image via discrete convolution are shown in Figure 5.

From the processing results, it can be seen that the high-pass filtering effect of this image sharpening algorithm is obvious: the details and edge information of the image are enhanced while the noise remains small, so this sharpening algorithm is selected.

3.3. Screening Based on Information Entropy. In 1948, Shannon proposed the concept of "information entropy" to solve the problem of quantitatively measuring information. In theory, entropy is a measure of the degree of disorder of information and is used here to measure the uncertainty of the information in an image: the larger the entropy value, the higher the degree of disorder. In image processing, entropy reflects the information richness of the image and indicates the amount of information it contains. The information entropy calculation formula is

H(x) = -\sum_{i=0}^{255} p(x_i) \log_2 p(x_i)  (9)

where p(x_i) is the probability of a pixel with gray level i (i = 0, ..., 255) appearing in the image. When the probability is closer to 1, the uncertainty of the information is smaller: the receiver can predict the transmitted content, and the amount of information carried is smaller, and vice versa.

If the amount of information contained in an image is expressed by information entropy, then the entropy value of an image of size M × N is defined as follows:

p_{ij} = \frac{f(i, j)}{\sum_{i=1}^{M} \sum_{j=1}^{N} f(i, j)}  (10)

H = -\sum_{i=1}^{M} \sum_{j=1}^{N} p_{ij} \log_2 p_{ij}  (11)

where f(i, j) is the gray level at point (i, j) in the image, p_{ij} is the gray distribution probability at that point, and H is the entropy of the image. If an M × N local neighborhood centered at (i, j) is taken, then H is called the local information entropy of the image [16].

From the perspective of information theory, when the probability of an image appearing is small, its information entropy is large, indicating that the amount of information transmitted is large. In feature point extraction, the information entropy reflects the richness of the texture information contained in the partial image, or the degree of change of the image pixel gradient: the larger the information entropy, the richer the image texture information and the more obvious the change of the image pixel gradient [17].
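Equations (10)-(11) can be sketched directly. Note that this is the paper's per-pixel gray-mass entropy, in which p_ij is each pixel's share of the block's total gray value, rather than the gray-level histogram entropy of equation (9):

```python
from math import log2

def block_entropy(block):
    """Local entropy of an M x N block per eq. (10)-(11)."""
    total = sum(sum(row) for row in block)
    H = 0.0
    for row in block:
        for v in row:
            if v > 0:              # p * log2(p) -> 0 as p -> 0
                p = v / total
                H -= p * log2(p)
    return H
```

For a uniform M × N block every p_ij equals 1/(MN), so H = log2(MN); a uniform 4 × 4 block therefore gives exactly 4 bits, the maximum for 16 pixels under this definition.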

Therefore, when the local information entropy is high, the effect of ORB feature point extraction is good and the image block does not require detail enhancement. When the local information entropy is low, the effect of ORB feature point extraction is poor, as shown in Figure 6. After sharpening adjustments to enhance details, the effect of feature point extraction is optimized [18], as shown in Figure 7. By comparison with the feature point extraction of ORB-SLAM2, it can be intuitively seen that the optimized extraction algorithm performs better.

3.4. Adaptive Information Entropy Threshold. Because the information entropy is closely related to the scene, video sequences from different scenes have different information richness, so the information entropy thresholds of different scenes must also differ. In each scene, repeated experiments would be required to set the information entropy threshold and run matching calculations to obtain the corresponding value; however, such empirical values differ greatly between scenes. A threshold that is not universal will affect image preprocessing and feature extraction and prevent quickly obtaining good matching results. So an adaptive algorithm for the information entropy threshold is particularly important [19].

Figure 5: Sharpening adjustment. (a) Original image. (b) Sharpened image.


In view of the above problems, we propose an adaptive method for the information entropy threshold that adjusts the threshold according to the scenario. The self-adjusting formula is

E_0 = \frac{H(i)_{ave}}{2} + \delta  (12)

where H(i)_{ave} is the average information entropy in the scene, obtained during a first run by computing the information entropy of each frame of a video of the scene and dividing by the number of frames; i is the frame index in the video sequence; and δ is a correction factor, which experiments show works best at 0.3. Through the above formula, the calculated E_0 is the information entropy threshold of the scene.
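Equation (12) is a one-liner; the sketch below pairs it with the E ≤ E0 screening decision from Figure 2 (the function names are ours, for illustration):

```python
def entropy_threshold(frame_entropies, delta=0.3):
    """E0 = H_ave / 2 + delta, with H_ave averaged over a first run (eq. (12))."""
    h_ave = sum(frame_entropies) / len(frame_entropies)
    return h_ave / 2.0 + delta

def needs_sharpening(block_entropy_value, e0):
    """Blocks at or below the threshold receive the sharpening adjustment."""
    return block_entropy_value <= e0
```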

4. Experimental Results and Analysis

All the experiments are conducted on a laptop computer. The operating system is 64-bit Linux 16.04, the processor is an Intel(R) Core(TM) i5-7300U CPU @ 2.50 GHz, the operating environment is CLion 2019 with OpenCV 3.3.0, and the program is written in C++. The back-end uses g2o for pose optimization based on pose graphs to generate motion trajectories. The image sequences are read at 30 frames per second, and the algorithm is compared with the open-source ORB-SLAM2 system.

4.1. Comparison of Two-Frame Image Matching. In this paper, the rgbd_dataset_freiburg1_desk dataset is selected, from which two blurry images with large rotation angles are extracted, and image detection and matching are performed using the ORB, ORB + information entropy, and ORB + information entropy + sharpening extraction algorithms. The extraction effect, the number of key points, the number of matches, the number of matches remaining after RANSAC filtering, and the matching rate are compared. The matching statistics of the images are shown in Table 1, and the matching effects are shown in Figures 8, 9, and 10.

Table 1 shows the results of feature point detection and matching for ORB, ORB + information entropy, and ORB + information entropy + sharpening. The correct matching points and matching rates of ORB and ORB + information entropy are basically the same, and the matching effect diagram of the latter shows mismatches with large errors. For ORB + information entropy + sharpening, the number of correct matching points increases significantly, and its matching rate is improved by 36.7% and 38.8% compared with the ORB and ORB + information entropy extraction algorithms, respectively.

The results show that adding the information entropy screening and sharpening process proposed in this paper can effectively increase the number of matches and improve the matching rate. The accuracy of image feature detection and matching is improved, providing more accurate and effective input for subsequent processes such as tracking, mapping, and loop detection.

4.2. Comparison of Image Matching in the Visual Odometry. For the timestamp sequence covering different degrees of blur and rotation angles (that is, frames 170-200), the ORB-SLAM2 algorithm and the feature detection and matching algorithm of the improved method in this paper are used to perform feature detection and matching between adjacent frames, so as to compare the matching points of the two algorithms on different images. The image is clear between frames 170 and 180, motion blur starts between frames 180 and 190, and the maximum rotation angle occurs between frames 190 and 200. The most dramatic motion blur is shown in Figure 11.

It can be seen from Figure 12 that the improved algorithm can match more feature points than the original algorithm. The number of matching points of the improved algorithm stays above 50, and tracking loss does not occur even if the image is blurred due to fast motion. When the resolution of the image is reduced, the improved algorithm still has more matching feature points and shows strong anti-interference ability and robustness.

Figure 6: Feature point extraction based on ORB-SLAM2.

Figure 7: Extraction of sharpening-adjustment feature points based on information entropy.

4.3. Analysis of Trajectory Tracking Accuracy. The rgbd_dataset_freiburg1_desk dataset in the TUM data is selected for the experiment. We compare the real trajectory of the robot provided by TUM with the motion trajectory calculated by the algorithm to verify the improvement effect of the proposed algorithm. Here we take the average absolute trajectory error, the average relative trajectory error, and the average tracking time as the standards. The trajectory tracking accuracy and real-time performance are comprehensively compared between the ORB-SLAM2 system and the improved SLAM system.

Figure 8 Matching effect of ORB

Figure 9 Matching effect of ORB+ information entropy

Figure 10 Matching effect of ORB+ information entropy + sharpening

Table 1: Comparison of matching rates of three feature detection and matching algorithms.

Model                                     Key points    Matches    After RANSAC    Match rate (%)
ORB                                       1000/1004     254        87              34.25
ORB + information entropy                 1000/1002     255        86              33.73
ORB + information entropy + sharpening    1000/983      284        133             46.83


We use the information entropy of image blocks to determine the amount of information they carry. The algorithm sharpens the image blocks with small information entropy, enhances local image details, and extracts local feature points that characterize the image information, which serve as the correlation basis for matching adjacent frames and key frames in the back-end. This enhances robustness and reduces the loss of motion tracking caused by failures of inter-frame matching. Based on the matching result, the R and t transformation between frames is calculated, and the back-end uses g2o to optimize the pose based on the pose graph. Finally, the motion trajectory is generated, as shown in Figure 13.

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true value of the camera's pose and the value estimated by the SLAM system. The root mean square of the absolute trajectory error, RMSE(x), is defined as follows:

\mathrm{RMSE}(x) = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left\| x_{e,i} - x_{s,i} \right\|^2 }  (13)

Here, x_{e,i} represents the estimated position of the i-th frame in the image sequence and x_{s,i} represents the standard (ground-truth) position of the i-th frame. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the standard value is shown in Figure 14.
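Equation (13) can be sketched as follows, with positions as (x, y, z) tuples that are already timestamp-aligned; the alignment step and the Sim(3) transform S of equation (14) are assumed to have been applied upstream:

```python
from math import sqrt

def ate_rmse(estimated, ground_truth):
    """Root mean square of per-frame position differences, eq. (13)."""
    n = len(estimated)
    se = sum((xe - xs) ** 2 + (ye - ys) ** 2 + (ze - zs) ** 2
             for (xe, ye, ze), (xs, ys, zs) in zip(estimated, ground_truth))
    return sqrt(se / n)
```

Two frames each displaced by one meter from the ground truth give an RMSE of exactly 1.0 m.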

The average absolute trajectory error directly calculates the difference between the real value of the camera's pose and the value estimated by the SLAM system. The program first aligns the real and estimated values according to the pose timestamps and then calculates the difference for each pair of poses; this is very suitable for evaluating the performance of visual SLAM systems. The average relative trajectory error calculates the difference between pose changes over the same pairs of timestamps: after timestamp alignment, both the real pose change and the estimated pose change are computed over every interval of the same length, and then their difference is calculated. It is suitable for estimating the drift of the system. The definition of the absolute trajectory error at frame i is as follows:

F_i = Q_i^{-1} S p_i  (14)

Here, Q_i represents the real pose of the i-th frame, p_i represents the pose estimated by the algorithm for that frame, and S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.

This experiment comprehensively compares three datasets, rgbd_dataset_freiburg1_desk, 360, and room, of which the desk dataset is an ordinary scene, the 360 dataset is collected while the camera performs a 360-degree rotation, and the room dataset is a scene collected when the image-sequence texture is not rich. ORB-SLAM2 generates different trajectory results each time, so, in order to eliminate the randomness of the experimental results, this article runs 6 experiments on each dataset under the same experimental environment to calculate the average absolute trajectory error, the average relative trajectory error, and the average tracking time. Tables 2-4 are plotted in Figures 15-17 to compare the algorithm performance by analyzing the trajectories.

Figure 11: The 200th frame image.

Figure 12: Comparison between the algorithm of this paper ("Ours") and ORB-SLAM2 under motion blur (number of matches per timestamp over frames 170-200).

Figure 13: Movement track.

Under the evaluation standard of absolute trajectory error, the algorithm in this paper has clear advantages. In the scene where the camera rotates 360°, the absolute trajectory error is improved by 35%, and in the ordinary scene it is improved by 48%. The improvement is largest in the case of poor texture, indicating that the algorithm in this paper brings a considerable improvement on scenes whose image-sequence texture is not rich, and there is also a certain improvement in the situation of large-angle camera rotation.

In terms of relative trajectory error, there is an advantage in ordinary scenes: the error is 17.5% smaller than the average relative trajectory error of the ORB-SLAM2 system. In the scenes where the camera rotates 360° and where the image-sequence texture is not rich, the algorithm in this paper outperforms the ORB-SLAM2 system by more than 40%. This proves that the algorithm in this article is indeed an improvement over the ORB-SLAM2 system.

Table 2: Average absolute trajectory error (m).

Dataset    ORB-SLAM2      The proposed algorithm
Desk       0.057630667    0.029899333
360        0.113049667    0.072683167
Room       0.229222167    0.111185333

Table 3: Average relative trajectory error (m).

Dataset    ORB-SLAM2    The proposed algorithm
Desk       0.0362615    0.029899
360        0.1534532    0.078611
Room       0.1592775    0.092167

Table 4: Average tracking time (s).

Dataset    ORB-SLAM2    The proposed algorithm
Desk       0.0244937    0.0289299
360        0.0186606    0.0225014
Room       0.0220836    0.0282757

Figure 14: Absolute trajectory error. (3D plot of the error mapped onto the trajectory; x, y, and z axes in meters, with the reference trajectory shown for comparison and an error color scale of approximately 0.014-0.082 m.)


Due to the sharpening of the local image the averagetracking time is slightly increased but the increase is not large

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information entropy screening algorithm and a sharpening adjustment algorithm. Its advantage is that the information entropy threshold can be calculated automatically for different scenes, giving the method generality. Sharpening the image blocks below the information entropy threshold increases their clarity so that feature points can be extracted better. The combination of information-entropy-based block screening and the sharpening algorithm solves the system localization and mapping failures caused by large-angle camera rotation and by a lack of texture information. Although the processing time increases, the processing accuracy improves. We compare the average absolute trajectory error, the average relative trajectory error, and the average tracking time with ORB-SLAM2, and the experimental results show that the improved model based on information entropy and sharpening achieves better results. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy and a sharpening algorithm, improving the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), pp. 2090-2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052-1067, 2007.

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225-234, Nara, Japan, November 2007.

Figure 15: Average absolute trajectory error (Ours vs. ORB-SLAM2 on the Desk, 360, and Room datasets).

Figure 16: Average relative trajectory error (Ours vs. ORB-SLAM2 on the Desk, 360, and Room datasets).

Figure 17: Average tracking time (Ours vs. ORB-SLAM2 on the Desk, 360, and Room datasets).

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015.

[5] R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770-779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180-198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107-1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769-777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257-265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521-2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16-20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46-57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430-443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395-1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59-61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104-106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84-88, 1993.

Mathematical Problems in Engineering 13

Page 5: ImprovedORB-SLAM2AlgorithmBasedonInformationEntropy ...downloads.hindawi.com/journals/mpe/2020/4724310.pdf · In 2007, Andrew Davison proposed Mono-SLAM [2], which is the real-time

mpq 1113944xyϵB

xpy

qI(x y) p q 0 1 (1)

(2) You can find the centroid of the image block bymoment

C m10

m00m01

m001113888 1113889 (2)

(3) Connect the geometric center O and the centroid Cof the image block to get a direction vector OC

rarren

the feature point direction can be defined as follows

Buildingan imagepyramid

Local entropy screening

Dividingimageblocks

Calculate E

E le E0

N

Y

Sharpenadjustment Match

Generatedescriptor

ICP motionestimation

E is the local entropyE0 is the local entropy threshold

ORB featureextraction

Figure 2 Process of sharpening adjustment algorithm based on information entropy

1 23

456

7

8910

11

1213 p14

1516

Figure 3 FAST feature point detection

Mathematical Problems in Engineering 5

θ arctanm01

m101113888 1113889 (3)

rough the above methods FAST corners have a de-scription of scale and rotation which greatly improves therobustness of their representation between different imagesSo this improved FAST is called Oriented FAST in the ORB

After the feature point detection is completed the fea-ture description needs to be used to represent and store thefeature point information to improve the efficiency of thesubsequent matching e ORB algorithm uses the BRIEFdescriptor whose main idea is to distribute a certainprobability around the feature point Randomly select sev-eral point pairs and then combine the gray values of thesepoint pairs into a binary string Finally use this binary as thedescriptor of this feature point

The BRIEF feature descriptor determines each bit of the binary descriptor by comparing a pair of gray values. As shown in Figure 4, if the gray value at pixel x is less than the gray value at y, the binary value of the corresponding bit is assigned 1; otherwise it is 0:

\tau(p; x, y) =
\begin{cases}
1, & I(x) < I(y) \\
0, & I(x) \ge I(y)
\end{cases}  (4)

Among them, I(x) and I(y) are the gray values at the corresponding pixels after image smoothing.

For n = 256 test point pairs (x_i, y_i), the corresponding n-dimensional BRIEF descriptor can be formed according to the following formula:

f_n(p) = \sum_{1 \le i \le n} 2^{i-1} \tau(p; x_i, y_i)  (5)

However, the BRIEF descriptor itself does not have rotation invariance, so the ORB algorithm improves it: when calculating the BRIEF descriptor, ORB rotates the sampling pattern into the feature point's main direction, thereby ensuring that even when the rotation angle differs, the point pairs selected for the same feature point are the same.
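A minimal sketch of the binary test of equations (4)–(5), assuming a hypothetical `brief_descriptor` helper and a tiny hand-picked sampling pattern (real ORB uses a fixed pseudo-random pattern of n = 256 pairs, rotated into the keypoint's main direction):

```python
def brief_descriptor(image, keypoint, pairs):
    """Binary BRIEF descriptor for one keypoint, equations (4)-(5).
    `pairs` is a list of ((x1, y1), (x2, y2)) offsets around the
    keypoint; `image` is assumed to be already smoothed."""
    kx, ky = keypoint
    bits = []
    for (x1, y1), (x2, y2) in pairs:
        a = image[ky + y1][kx + x1]      # I(x)
        b = image[ky + y2][kx + x2]      # I(y)
        bits.append(1 if a < b else 0)   # tau test, equation (4)
    # Pack bit i with weight 2^(i-1)-style, as in equation (5).
    desc = 0
    for i, bit in enumerate(bits):
        desc |= bit << i
    return bits, desc

# 5x5 gradient image whose gray value grows left-to-right and
# top-to-bottom, keypoint at the centre, two symmetric pairs.
image = [[y * 5 + x for x in range(5)] for y in range(5)]
pairs = [((-1, 0), (1, 0)), ((0, -1), (0, 1))]
bits, desc = brief_descriptor(image, (2, 2), pairs)
```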

3.2. Sharpening Adjustment. In the process of image transmission and conversion, the sharpness of the image is reduced; in essence, the target contours and detail edges in the image are blurred. In feature point extraction, we traverse the image and select as feature points the pixels that differ strongly from their surrounding pixels, which requires the image's target contours and detail edge information to be prominent. Therefore, we introduce image sharpening.

The purpose of image sharpening is to make the edges and contours of the image clear and to enhance image detail [15]. Methods of image sharpening include the statistical difference method, the discrete spatial difference method, and the spatial high-pass filtering method. Generally, the energy of an image is concentrated mainly in the low-frequency band, while the noise and the edge information of the image are concentrated mainly in the high-frequency band. Usually, smoothing is used to remove the high-frequency noise, but this also blurs the edges and contours of the image, which hinders the extraction of feature points. The root cause of the blur is that the image is subjected to an averaging or integral operation, so performing the inverse operation (a differential operation) on the image can restore its clarity.

To reduce these adverse effects, we adopt high-pass filtering (second-order differentiation) in the spatial domain and use the Laplacian operator to perform a convolution over the pixels of the image, increasing the difference between each element and its surrounding elements to achieve a sharpened image. If the variables of the convolution are the sequences x(n) and h(n), the result of the convolution is

y(n) = \sum_{i=-\infty}^{\infty} x(i)\, h(n - i) = x(n) * h(n)  (6)

The convolution operation on the divided image blocks in effect slides the convolution kernel over the image: each pixel gray value is multiplied by the corresponding value of the convolution kernel, and all the products are summed to give the gray value of the image pixel that corresponds to the middle element of the kernel. The expression of the convolution function is as follows:

\text{dst}(x, y) = \sum_{\substack{0 \le x' < \text{kernel.cols} \\ 0 \le y' < \text{kernel.rows}}} \text{kernel}(x', y') \cdot \text{src}(x + x' - \text{anchor}.x, \; y + y' - \text{anchor}.y)  (7)

where anchor is the reference point of the kernel and kernel is the convolution kernel; the convolution template here is the matrix form of the Laplacian variant operator, as shown below:

H = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}  (8)

The Laplacian variant operator uses the second-derivative information of the image and is isotropic.
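The convolution of equations (6)–(8) with the kernel H can be sketched as follows. This is a simplified stand-in for what OpenCV's `filter2D` does (the paper's system uses C++/OpenCV); border pixels are simply left unchanged here, which is only one of several possible border policies:

```python
def sharpen(src, kernel):
    """Convolve a grayscale image (list of rows) with a small kernel,
    equation (7), clamping results to the 8-bit range. The anchor is
    the kernel centre; border pixels are copied through unchanged."""
    rows, cols = len(src), len(src[0])
    kr, kc = len(kernel), len(kernel[0])
    ay, ax = kr // 2, kc // 2               # anchor at kernel centre
    dst = [row[:] for row in src]           # start from a copy
    for y in range(ay, rows - ay):
        for x in range(ax, cols - ax):
            acc = 0
            for yp in range(kr):
                for xp in range(kc):
                    acc += kernel[yp][xp] * src[y + yp - ay][x + xp - ax]
            dst[y][x] = min(255, max(0, acc))   # clamp to [0, 255]
    return dst

# Laplacian-variant sharpening kernel H, equation (8).
H = [[0, -1, 0],
     [-1, 5, -1],
     [0, -1, 0]]
```

Because the entries of H sum to 1, flat regions pass through unchanged, while a pixel brighter than its neighbours is amplified: sharpening boosts exactly the high-frequency detail.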

Figure 4: BRIEF description.


The results of discrete convolution on the classic Lenna image using the sharpening algorithm adopted in this paper are shown in Figure 5.

From the processing results, it can be seen that the high-pass filtering effect of this image sharpening algorithm is obvious: it enhances the details and edge information of the image while keeping the noise small, so this sharpening algorithm is selected.

3.3. Screening Based on Information Entropy. In 1948, Shannon proposed the concept of "information entropy" to solve the problem of quantitatively measuring information. In theory, entropy is a measure of the degree of disorder of information and is used here to measure the uncertainty of the information in an image: the larger the entropy value, the higher the degree of disorder. In image processing, entropy can reflect the information richness of the image and indicate the amount of information it contains. The information entropy calculation formula is

H(x) = -\sum_{i=0}^{255} p(x_i) \log_2 p(x_i)  (9)

where p(x_i) is the probability of a pixel with grayscale i (i = 0, ..., 255) appearing in the image. When the probability is closer to 1, the uncertainty of the information is smaller, the receiver can better predict the transmitted content, and the amount of information carried is less, and vice versa.

If the amount of information contained in the image is expressed by information entropy, then the entropy value of an image of size M × N is defined as follows:

p_{ij} = \frac{f(i, j)}{\sum_{i=1}^{M} \sum_{j=1}^{N} f(i, j)}  (10)

H = -\sum_{i=1}^{M} \sum_{j=1}^{N} p_{ij} \log_2 p_{ij}  (11)

where f(i, j) is the gray level at the point (i, j) in the image, p_ij is the gray distribution probability at that point, and H is the entropy of the image. If an M × N neighborhood centered at (i, j) is taken, then H is called the local information entropy value of the image [16].
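Equations (10)–(11) can be sketched directly as a short function over one image block (zero-valued entries are skipped, since p log2 p → 0 as p → 0):

```python
import math

def block_entropy(block):
    """Information entropy of an image block per equations (10)-(11):
    p_ij = f(i, j) / sum(f), then H = -sum(p_ij * log2(p_ij))."""
    total = sum(sum(row) for row in block)
    h = 0.0
    for row in block:
        for f in row:
            if f > 0:                 # p log2 p -> 0 as p -> 0
                p = f / total
                h -= p * math.log2(p)
    return h

# A uniform block attains the maximum entropy log2(M*N); a block
# dominated by one bright pixel carries much less information.
h_uniform = block_entropy([[5, 5], [5, 5]])    # = 2.0 bits for 2x2
h_skewed = block_entropy([[97, 1], [1, 1]])    # well below 2 bits
```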

From the perspective of information theory, when the probability of an image pattern appearing is small, the information entropy value of the image is large, indicating that a large amount of information is conveyed. In feature point extraction, the information entropy reflects the richness of the texture information contained in the partial image, or the degree of change of the image pixel gradient: the larger the information entropy value, the richer the image texture information and the more obvious the change of the image pixel gradient [17].

Therefore, when the local information entropy is high, the effect of ORB feature point extraction is good and the image block does not require detail enhancement. When the local information entropy value is low, the effect of ORB feature point extraction is poor, as shown in Figure 6. After sharpening adjustment to enhance details, the effect of feature point extraction is optimized [18], as shown in Figure 7. By comparison with the feature point extraction of ORB-SLAM2, it can be seen intuitively that the optimized extraction algorithm performs better.

3.4. Adaptive Information Entropy Threshold. Because the value of the information entropy is closely related to the scene, different video sequences in different scenes have different information richness, so the information entropy threshold must also differ between scenes. In each scene, repeated experiments would otherwise be required, setting the information entropy threshold and running matching calculations many times to obtain the corresponding threshold value. However, the empirical value differs greatly between scenes; a threshold that is not universal will affect image preprocessing and feature extraction and prevent quickly obtaining good matching results. An adaptive algorithm for the information entropy threshold is therefore particularly important [19].

Figure 5: Sharpening adjustment. (a) Original image. (b) Sharpened image.


In view of the above problems, we propose an adaptive method for the information entropy threshold, which adjusts the threshold according to the scenario. The self-adjusting formula is

E_0 = \frac{H(i)_{\text{ave}}}{2} + \delta  (12)

where H(i)_ave is the average information entropy of the scene, obtained on a first run by computing the information entropy of each frame of a video of the scene and dividing by the number of frames; i is the number of frames in the video sequence; and δ is a correction factor, experimentally set to 0.3, which works best. Through the above formula, the calculated E_0 is the information entropy threshold of the scene.
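The adaptive threshold of equation (12) amounts to a few lines. The per-frame entropies are assumed to come from a first pass over the sequence, and δ = 0.3 is the value reported in the paper:

```python
def entropy_threshold(frame_entropies, delta=0.3):
    """Adaptive information-entropy threshold, equation (12):
    E0 = H_ave / 2 + delta, where H_ave is the mean per-frame entropy
    of the sequence measured on a first pass and delta = 0.3 is the
    correction factor the paper found to work best."""
    h_ave = sum(frame_entropies) / len(frame_entropies)
    return h_ave / 2 + delta

# Example: a hypothetical 3-frame sequence with mean entropy 7.0 bits
# yields a scene threshold of 7.0 / 2 + 0.3 = 3.8.
e0 = entropy_threshold([6.0, 7.0, 8.0])
```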

4. Experimental Results and Analysis

All experiments are conducted on a laptop computer. The operating system is 64-bit Linux 16.04; the processor is an Intel(R) Core(TM) i5-7300U CPU @ 2.50 GHz; the development environment is CLion 2019 with OpenCV 3.3.0; and the program is written in C++. The back-end uses g2o for pose-graph optimization to generate motion trajectories. The image sequences are read at 30 frames per second, and the algorithm is compared with the open-source ORB-SLAM2 system.

4.1. Comparison of Two-Frame Image Matching. In this paper, the rgbd_dataset_freiburg1_desk dataset is selected, from which two blurry images with large rotation angles are extracted, and detection and matching are performed using the ORB algorithm, ORB + information entropy, and the ORB + information entropy + sharpening extraction algorithm. We compare data such as the extraction effect, key points, number of matches, number of matches after RANSAC filtering, and matching rate. The matching statistics of the images are shown in Table 1, and the matching effects are shown in Figures 8, 9, and 10.

Table 1 shows the results of feature point detection and matching for ORB, ORB + information entropy, and ORB + information entropy + sharpening. The correct matching points and matching rates of ORB and ORB + information entropy are basically the same, and the matching effect diagram of the latter shows a mismatch with a large error. For ORB + information entropy + sharpening, the number of correct matching points increases significantly, and its matching rate is improved by 36.7% and 38.8% over ORB and ORB + information entropy, respectively.

The results show that adding the information entropy screening and sharpening process proposed in this paper can effectively increase the number of matches and improve the matching rate. The accuracy of image feature detection and matching is improved, providing more accurate and effective input for subsequent processes such as tracking, mapping, and loop detection.

4.2. Comparison of Image Matching in the Visual Odometer. For the timestamp sequence covering different degrees of blur and rotation angles (frames 170–200), the ORB-SLAM2 algorithm and the feature detection and matching algorithm of the improved method in this paper are used to perform feature detection and matching on adjacent frames, comparing the matching points of the two algorithms on different images. The image is clear between frames 170 and 180, motion blur starts between frames 180 and 190, and the maximum rotation angle occurs between frames 190 and 200. The most dramatic motion blur is shown in Figure 11.

It can be seen from Figure 12 that the improved algorithm matches more feature points than the original algorithm. The number of matching points of the improved algorithm stays above 50, and tracking loss does not occur even when the image is blurred by fast motion. When the resolution of the image is reduced, the improved algorithm still obtains more matching feature points and shows strong anti-interference ability and robustness.

Figure 6: Feature point extraction based on ORB-SLAM2.

Figure 7: Extraction of sharpening-adjustment feature points based on information entropy.

4.3. Analysis of Trajectory Tracking Accuracy. The rgbd_dataset_freiburg1_desk dataset from the TUM data is selected for the experiment. We compare the real trajectory of the robot provided by TUM with the motion trajectory calculated by the algorithm to verify the improvement achieved by the proposed algorithm. Here we take the average absolute trajectory error, the average relative trajectory error, and the average tracking time as standards. The trajectory tracking accuracy and real-time performance of the ORB-SLAM2 system and the improved SLAM system are compared comprehensively.

Figure 8: Matching effect of ORB.

Figure 9: Matching effect of ORB + information entropy.

Figure 10: Matching effect of ORB + information entropy + sharpening.

Table 1: Comparison of matching rates of the three feature detection and matching algorithms.

Model | Key points | Matches | After RANSAC | Match rate (%)
ORB | 1000/1004 | 254 | 87 | 34.25
ORB + information entropy | 1000/1002 | 255 | 86 | 33.73
ORB + information entropy + sharpening | 1000/983 | 284 | 133 | 46.83


We use the information entropy of image blocks to determine their amount of information. The algorithm sharpens the image blocks with small information entropy, enhances local image details, and extracts local feature points that characterize the image information as the correlation basis for adjacent-frame and keyframe matching in the back-end. This enhances robustness and reduces the loss of motion tracking caused by the failure of inter-frame matching. Based on the matching result, the R and t transformation between frames is calculated, and the back-end uses g2o to optimize the pose based on the pose graph. Finally, the motion trajectory is generated, as shown in Figure 13.

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true value of the camera's pose and the value estimated by the SLAM system. The root mean square of the absolute trajectory error, RMSE(x), is defined as follows:

\text{RMSE}(x) = \sqrt{ \frac{ \sum_{i=1}^{n} \left\| x_{e,i} - x_{s,i} \right\|^2 }{ n } }  (13)

Among them, x_{e,i} represents the estimated position of the ith frame in the image sequence and x_{s,i} represents the ground-truth position of the ith frame. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the ground truth is shown in Figure 14.
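Equation (13) over timestamp-aligned position pairs can be sketched as:

```python
import math

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error RMSE, equation (13), over
    timestamp-aligned 3-D positions x_{e,i} (estimated) and
    x_{s,i} (ground truth)."""
    n = len(estimated)
    sq = 0.0
    for (ex, ey, ez), (sx, sy, sz) in zip(estimated, ground_truth):
        # Squared Euclidean distance between the i-th position pair.
        sq += (ex - sx) ** 2 + (ey - sy) ** 2 + (ez - sz) ** 2
    return math.sqrt(sq / n)

# Two estimated positions, each 1 m from the ground truth,
# give an RMSE of exactly 1.0 m.
rmse = ate_rmse([(1, 0, 0), (0, 1, 0)], [(0, 0, 0), (0, 0, 0)])
```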

The average absolute trajectory error directly measures the difference between the true value of the camera's pose and the value estimated by the SLAM system. The program first aligns the true and estimated values according to the pose timestamps and then calculates the difference for each pair of poses; this is very suitable for evaluating the performance of visual SLAM systems. The average relative trajectory error measures the difference between the pose changes over the same pair of timestamps: after timestamp alignment, both the real and the estimated pose changes are calculated over every interval of the same length, and then their difference is computed; this is suitable for estimating the drift of the system. The definition of the absolute trajectory error at frame i is as follows:

F_i = Q_i^{-1} S p_i  (14)

Among them, Q_i represents the real pose of the ith frame and p_i represents the pose of the ith frame estimated by the algorithm; S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.

This experiment comprehensively compares three datasets, rgbd_dataset_freiburg1_desk, rgbd_dataset_freiburg1_360, and rgbd_dataset_freiburg1_room: the desk dataset is an ordinary scene, the 360 dataset is collected while the camera performs a 360-degree rotation, and the room dataset is a scene whose image sequence is not rich in texture. ORB-SLAM2 generates different trajectory results on each run, so, to eliminate the randomness of the experimental results, this article runs 6 experiments on each dataset under the same experimental environment and calculates the average absolute trajectory error, the average relative trajectory error, and the average tracking time. Tables 2–4 are plotted in Figures 15–17 to compare algorithm performance by analyzing the trajectories.

Figure 11: The 200th frame image.

Figure 12: Comparison between the algorithm of this paper and ORB-SLAM2 under motion blur (number of matches per timestamp, frames 170–200).

Figure 13: Movement track.

Under the evaluation standard of absolute trajectory error, the algorithm in this paper has clear advantages. In the scene where the camera rotates 360°, the absolute trajectory error is reduced by 35%, and in the ordinary scene it is reduced by 48%. The improvement is largest when the texture is not rich, indicating that the algorithm in this paper brings a considerable improvement for scenes whose image sequences lack rich texture, as well as a certain improvement for large-angle camera rotation.

In terms of relative trajectory error, there is an advantage in ordinary scenes: the error is 17.5% smaller than the average relative trajectory error of the ORB-SLAM2 system. In the scene where the camera rotates 360° and in the scene where the image sequence texture is not rich, the algorithm in this paper outperforms the ORB-SLAM2 system by more than 40%. This proves that the algorithm in this article is indeed improved compared with the ORB-SLAM2 system.

Table 2: Average absolute trajectory error (m).

Dataset | ORB-SLAM2 | The proposed algorithm
Desk | 0.057630667 | 0.029899333
360 | 0.113049667 | 0.072683167
Room | 0.229222167 | 0.111185333

Table 3: Average relative trajectory error (m).

Dataset | ORB-SLAM2 | The proposed algorithm
Desk | 0.0362615 | 0.029899
360 | 0.1534532 | 0.078611
Room | 0.1592775 | 0.092167

Table 4: Average tracking time (s).

Dataset | ORB-SLAM2 | The proposed algorithm
Desk | 0.0244937 | 0.0289299
360 | 0.0186606 | 0.0225014
Room | 0.0220836 | 0.0282757

Figure 14: Absolute trajectory error (error mapped onto the estimated trajectory against the reference; axes x, y, z in meters).


Due to the sharpening of local image blocks, the average tracking time is slightly increased, but the increase is not large.

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information entropy screening algorithm and a sharpening adjustment algorithm. Its advantage is that the information entropy threshold can be calculated automatically for different scenes, giving it generality. Sharpening the image blocks below the information entropy threshold increases their clarity so that feature points can be extracted better. The combination of information-entropy-based image block screening and the sharpening algorithm addresses the localization and mapping failures caused by large-angle camera rotation and by a lack of texture information. Although the processing time increases slightly, the processing accuracy improves. We compare the average absolute trajectory error, the average relative trajectory error, and the average tracking time with ORB-SLAM2. The experimental results show that the improved model based on information entropy and sharpening achieves better results than ORB-SLAM2. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy and sharpening algorithms, improving the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

Figure 15: Average absolute trajectory error.

Figure 16: Average relative trajectory error.

Figure 17: Average tracking time.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), pp. 2090–2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, Nara, Japan, November 2007.

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

[5] R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770–779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180–198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769–777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257–265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521–2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16–20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46–57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430–443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59–61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104–106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84–88, 1993.

Mathematical Problems in Engineering 13

Page 6: ImprovedORB-SLAM2AlgorithmBasedonInformationEntropy ...downloads.hindawi.com/journals/mpe/2020/4724310.pdf · In 2007, Andrew Davison proposed Mono-SLAM [2], which is the real-time

θ arctanm01

m101113888 1113889 (3)

rough the above methods FAST corners have a de-scription of scale and rotation which greatly improves therobustness of their representation between different imagesSo this improved FAST is called Oriented FAST in the ORB

After the feature point detection is completed the fea-ture description needs to be used to represent and store thefeature point information to improve the efficiency of thesubsequent matching e ORB algorithm uses the BRIEFdescriptor whose main idea is to distribute a certainprobability around the feature point Randomly select sev-eral point pairs and then combine the gray values of thesepoint pairs into a binary string Finally use this binary as thedescriptor of this feature point

e BRIEF feature descriptor determines the binarydescriptor by comparing the pair of gray values As shown inFigure 4 if the gray value at the pixel x is less than the grayvalue at y the binary value of the corresponding bit isassigned to 1 otherwise it is 0

τ(p x y) 1 I(x)lt I(y)

0 I(x)ge I(y)1113896 (4)

Among them I(x) and I(y) are the gray values at thecorresponding corners after image smoothing

For n (256) (x y) test point pairs the corresponding n-dimensional BRIEF descriptor can be formed according tothe following formula

fn(p) 11139441leilen

2iminus tτ(p x y) (5)

However the BRIEF descriptor does not have rotationinvariance the ORB algorithm improves it When calcu-lating the BRIEF descriptor the ORB will solve these cor-responding feature points in the main direction therebyensuring that the rotation angle is different e point pairsselected for the same feature point are the same

32 Sharpening Adjustment In the process of image trans-mission and conversion the sharpness of the image is reducedIn essence the target contour and detail edges in the image areblurred In the extraction of feature points what we need to dois to traverse the image and the pixels with large difference insurrounding pixels are selected as the feature points whichrequires the imagersquos target contour and detail edge informationto be prominent erefore we introduced image sharpening

e purpose of image sharpening is to make the edgesand contours of the image clear and enhance the details ofthe image [15] e methods of image sharpening includestatistical difference method discrete spatial differencemethod and spatial high-pass filtering method Generallythe energy of the image is mainly concentrated in the lowfrequency e frequency where the noise is located ismainly in the high-frequency band e edge information ofthe image is mainly concentrated in the high-frequencyband Usually smoothing is used to remove the noise at highfrequency but this also makes the edge and contour of the

image blurred which affects the extraction of the featurepoints e root cause of the blur is that the image issubjected to an average or integral operation So performingan inverse operation (a differential operation) on the imagecan restore the picture to clear

In order to reduce the adverse effects we adopt themethod of high-pass filtering (second-order differential) inthe spatial domain and use the Laplacian operator to per-form a convolution operation with each pixel of the image toincrease the variance between each element in the matrixand the surrounding elements to achieve sharp image If thevariables of the convolution are the sequence x(n) and h(n)the result of the convolution is

y(n) 1113944infin

iminusinfinx(i)h(n minus i) x(n)lowast h(n) (6)

e convolution operation of the divided image blocks isactually to use the convolution kernel to slide on the imagee pixel gray value is multiplied with the value on thecorresponding convolution kernel en all the multipliedvalues are added as the gray value of the pixel on the imagecorresponding to the middle pixel of the convolution kernele expression of the convolution function is as follows

dst(x y) 1113944

0lexprimeltkernelcols

0leyprimeltkernelrows

kernel xprime yprime( 1113857

lowast src x + xprime minus anchorx y + yprime minus anchor y( 1113857

(7)

where anchor is the reference point of the kernel kernel isthe convolution kernel and the convolution template here isthe matrix form of the Laplacian variant operator as shownbelow

H

0 minus1 0

minus1 5 minus1

0 minus1 0

⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣

⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦ (8)

e Laplacian deformation operator uses the secondderivative information of the image and is isotropic

Figure 4 BRIEF description

6 Mathematical Problems in Engineering

e results of image discrete convolution on the classicLenna image using the sharpening algorithm adopted in thispaper are shown in Figure 5

From the processing results it can be seen that the high-pass filtering effect of the image sharpening algorithm isobvious which enhances the details and edge information ofthe image and the noise becomes small so this sharpeningalgorithm is selected

33 Screening Based on Information Entropy In 1948Shannon proposed the concept of ldquoinformation entropyrdquo tosolve the problem of quantitative measurement of infor-mation In theory entropy is a measure of the degree ofdisorder of information which is used to measure theuncertainty of information in an image e larger theentropy value is the higher the degree of disorder In imageprocessing entropy can reflect the information richness ofthe image and display the amount of information containedin the image e information entropy calculation formulais

H(x) minus 1113944255

0p xi( 1113857log2 p xi( 1113857 (9)

where p(xi) is the probability of a pixel with grayscalei(i 0 255) in the image When the probability is closerto 1 the uncertainty of the information is smaller and thereceiver can predict the transmitted content e amount ofinformation in the belt is less and vice versa

If the amount of information contained in the image isexpressed by information entropy then the entropy value ofan image of size m times n is defined as follows

pij f(i j)

1113936Mi11113936

Nj1f(i j)

(10)

H minus 1113944

M

i11113944

N

j1pijlog2pij (11)

where f(i j) is the gray level at the point (i j) in the imagepij is the gray distribution probability at the point and H isthe entropy of the image If m times n is taken as (i j) a localneighborhood in the center thenH is called the informationentropy value of the image [16]

From the perspective of information theory when theprobability of image appearance is small the informationentropy value of the image is large indicating that theamount of information transmitted is large In the extractionof feature points the information entropy reflects the texturecontained in the partial image information richness or imagepixel gradient change degree e larger the informationentropy value is the richer the image texture information isandmore obvious the changing of the image pixel gradient is[17]

erefore the effect of ORB feature point extraction isgood and the image block does not require detail en-hancement e local information entropy value is low sothe effect of ORB feature point extraction is poor which isshown in Figure 6 After sharpening adjustments to enhancedetails we optimize the effect of feature point extraction[18] which is shown in Figure 7 By comparing the featurepoint extraction of ORB-SLAM2 it can be intuitively seenthat the optimized extraction algorithm accuracy is better

34 Adaptive Information Entropy -reshold Because thenumber of the information entropy is closely related to thescene different video sequences in different scenes havedifferent information richness so the information entropythreshold of different scenes must also be different In eachdifferent scene it requires repeated experiments to set theinformation entropy threshold multiple times for matchingcalculations to get the corresponding threshold valueHowever the experience value in different scenes differsgreatlye threshold value is not universal which will affectthe image preprocessing and feature extraction It will resultin the failure of quickly obtaining better matching resultsSo the adaptive algorithm of information entropy is par-ticularly important [19]

Figure 5: Sharpening adjustment. (a) Original image. (b) Sharpened image.

Mathematical Problems in Engineering 7

In view of the above problems, we propose an adaptive method for the information entropy threshold, which adjusts the threshold according to the scenario. The self-adjusting formula is

E0 = H(i)ave / 2 + δ, (12)

where H(i)ave is the average information entropy of the scene, obtained on a first run by computing the information entropy of every frame of a video of the scene and dividing by the number of frames; i is the frame index in the video sequence; and δ is a correction factor, which experiments show works best at 0.3. The E0 calculated by the above formula is the information entropy threshold of the scene.
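In code, the first-pass threshold computation of Eq. (12) amounts to a mean and a scale. This sketch assumes the per-frame entropies have already been collected; adaptiveEntropyThreshold is an illustrative name:

```cpp
#include <numeric>
#include <vector>

// Adaptive information entropy threshold, Eq. (12): E0 = H_ave / 2 + delta.
// delta = 0.3 is the correction factor the authors report works best.
double adaptiveEntropyThreshold(const std::vector<double>& frameEntropies,
                                double delta = 0.3) {
    if (frameEntropies.empty()) return delta;  // degenerate case: no frames seen
    double mean = std::accumulate(frameEntropies.begin(),
                                  frameEntropies.end(), 0.0)
                  / static_cast<double>(frameEntropies.size());
    return mean / 2.0 + delta;
}
```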

4. Experimental Results and Analysis

All experiments are conducted on a laptop computer. The operating system is 64-bit Linux 16.04, the processor is an Intel(R) Core(TM) i5-7300U CPU @ 2.50 GHz, and the development environment is CLion 2019 with OpenCV 3.3.0; the program is written in C++. The back end uses g2o for pose-graph optimization to generate motion trajectories. The image sequence is read at 30 frames per second, the same rate as the open-source ORB-SLAM2 system used for comparison.

4.1. Comparison of Two-Frame Image Matching. In this paper, the rgbd_dataset_freiburg1_desk dataset is selected, and two blurred images with a large rotation angle between them are extracted from it. Feature detection and matching are performed with the ORB algorithm, the ORB + information entropy algorithm, and the ORB + information entropy + sharpening extraction algorithm, and the extraction effect, the number of key points, the number of matches, the number of matches after RANSAC filtering, and the matching rate are compared. The matching statistics are shown in Table 1, and the matching effects are shown in Figures 8, 9, and 10.

Table 1 shows the results of feature point detection and matching with ORB, ORB + information entropy, and ORB + information entropy + sharpening. The correct matches and matching rates of ORB and ORB + information entropy are basically the same, and the matching effect diagram of the latter shows a mismatch with a large error. For ORB + information entropy + sharpening, the number of correct matches increases significantly, and its matching rate is improved by 36.7% and 38.8% over ORB and ORB + information entropy, respectively.

The results show that the information entropy screening and sharpening proposed in this paper effectively increase the number of matches and improve the matching rate. The accuracy of image feature detection and matching is improved, which provides more accurate and effective input for subsequent processes such as tracking, mapping, and loop closure detection.

4.2. Comparison of Image Matching in the Visual Odometry. For a timestamp sequence covering different degrees of blur and rotation (frames 170–200), the ORB-SLAM2 algorithm and the feature detection and matching algorithm of the improved method are both run on adjacent frames, and the matching points of the two algorithms are compared on different images. The image is clear between frames 170 and 180, motion blur starts between frames 180 and 190, and the maximum rotation angle occurs between frames 190 and 200. The most dramatic motion blur is shown in Figure 11.

It can be seen from Figure 12 that the improved algorithm matches more feature points than the original algorithm. The number of matching points of the improved algorithm stays above 50, and tracking loss does not occur even when the image is blurred by fast motion. When the resolution of the image is reduced, the improved algorithm still obtains more matching feature points and shows strong anti-interference ability and robustness.

Figure 6: Feature point extraction based on ORB-SLAM2.

Figure 7: Feature point extraction after sharpening adjustment based on information entropy.

4.3. Analysis of Trajectory Tracking Accuracy. The rgbd_dataset_freiburg1_desk dataset from the TUM benchmark is selected for this experiment. We compare the real robot trajectory provided by TUM with the motion trajectory calculated by the algorithm to verify the improvement brought by the proposed method. The average absolute trajectory error, the average relative trajectory error, and the average tracking time are taken as the standards, and the overall trajectory tracking accuracy and real-time performance of the ORB-SLAM2 system and the improved SLAM system are compared.

Figure 8: Matching effect of ORB.

Figure 9: Matching effect of ORB + information entropy.

Figure 10: Matching effect of ORB + information entropy + sharpening.

Table 1: Comparison of matching rates of the three feature detection and matching algorithms.

Model | Key points | Matches | After RANSAC | Match rate (%)
ORB | 1000/1004 | 254 | 87 | 34.25
ORB + information entropy | 1000/1002 | 255 | 86 | 33.73
ORB + information entropy + sharpening | 1000/983 | 284 | 133 | 46.83


We use the information entropy of image blocks to judge the amount of information they contain. The algorithm sharpens the image blocks with small information entropy to enhance local image detail and extracts local feature points that characterize the image information, which serve the back end as the correlation basis for adjacent-frame and keyframe matching. This enhances robustness and reduces the loss of motion tracking caused by interframe matching failure. Based on the matching result, the R and t transformation between frames is calculated, and the back end uses g2o to optimize the pose over the pose graph. Finally, the motion trajectory is generated, as shown in Figure 13.

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true camera pose and the pose estimated by the SLAM system. The root mean square of the absolute trajectory error, RMSE(x), is defined as follows:

RMSE(x) = sqrt( (1/n) Σ_{i=1..n} ||x_ei − x_si||^2 ), (13)

where x_ei is the estimated position of the ith frame in the image sequence and x_si is the ground-truth position of the ith frame. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the ground truth is shown in Figure 14.
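Assuming the estimated and ground-truth position sequences have already been timestamp-aligned, Eq. (13) can be sketched as:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Root-mean-square absolute trajectory error over n aligned frames, Eq. (13).
double ateRmse(const std::vector<Vec3>& est, const std::vector<Vec3>& gt) {
    double sq = 0.0;
    for (std::size_t i = 0; i < est.size(); ++i) {
        double dx = est[i].x - gt[i].x;
        double dy = est[i].y - gt[i].y;
        double dz = est[i].z - gt[i].z;
        sq += dx * dx + dy * dy + dz * dz;  // ||x_ei - x_si||^2
    }
    return std::sqrt(sq / static_cast<double>(est.size()));
}
```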

The average absolute trajectory error directly measures the difference between the real camera pose and the pose estimated by the SLAM system. The evaluation program first aligns the real and estimated values by their pose timestamps and then computes the difference for each pair of poses, which makes it well suited to evaluating the performance of visual SLAM systems. The average relative trajectory error measures the difference between the pose changes over the same pair of timestamps: after timestamp alignment, both the real and the estimated pose changes are computed over every fixed interval and then differenced, which makes it suitable for estimating the drift of the system. The absolute trajectory error at frame i is defined as follows:

F_i = Q_i^(−1) S p_i, (14)

where Q_i is the real pose of frame i, p_i is the pose of frame i estimated by the algorithm, and S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.

This experiment comprehensively compares three datasets, rgbd_dataset_freiburg1_desk, 360, and room. The desk dataset is an ordinary scene; the 360 dataset is collected while the camera performs a 360-degree rotation; and the room dataset is collected where the image sequence is not rich in texture. ORB-SLAM2 generates slightly different trajectory results on each run, so, to eliminate the randomness of the experimental results, each dataset is run 6 times in the same experimental environment and the average absolute trajectory error, the average relative trajectory error, and the average tracking time are calculated. Tables 2–4 are plotted in Figures 15–17 to compare the performance of the algorithms.

Figure 11: The 200th frame of the sequence.

Figure 12: Number of matches per timestamp (frames 170–200) for the algorithm of this paper and ORB-SLAM2 under motion blur.

Figure 13: Movement track.

Under the evaluation standard of absolute trajectory error, the algorithm in this paper has clear advantages. In the scene where the camera rotates 360°, the absolute trajectory error is reduced by 35%, and in the ordinary scene it is reduced by 48%. The improvement is largest in the low-texture case, indicating that the algorithm in this paper brings a considerable improvement on scenes whose image sequences lack rich texture, as well as a certain improvement under large-angle camera rotation.

In terms of relative trajectory error, there is also an advantage in ordinary scenes: the average relative trajectory error is 17.5% smaller than that of the ORB-SLAM2 system. In the scenes where the camera rotates 360° and where the image sequence texture is not rich, the algorithm in this paper outperforms the ORB-SLAM2 system by more than 40%. This proves that the algorithm in this article is indeed an improvement over the ORB-SLAM2 system.
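The percentages above follow directly from Tables 2 and 3; the arithmetic, with the table values plugged in, is simply:

```cpp
// Relative improvement (%) of the proposed algorithm over ORB-SLAM2,
// computed as 100 * (baseline - ours) / baseline from the table values.
double improvementPercent(double baseline, double ours) {
    return 100.0 * (baseline - ours) / baseline;
}
```

For example, improvementPercent(0.0362615, 0.029899) gives roughly 17.5 for the desk relative trajectory error (Table 3), and improvementPercent(0.113049667, 0.072683167) gives roughly 35.7 for the 360 absolute trajectory error (Table 2).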

Table 2: Average absolute trajectory error (m).

Dataset | ORB-SLAM2 | The proposed algorithm
Desk | 0.057630667 | 0.029899333
360 | 0.113049667 | 0.072683167
Room | 0.229222167 | 0.111185333

Table 3: Average relative trajectory error (m).

Dataset | ORB-SLAM2 | The proposed algorithm
Desk | 0.0362615 | 0.029899
360 | 0.1534532 | 0.078611
Room | 0.1592775 | 0.092167

Table 4: Average tracking time (s).

Dataset | ORB-SLAM2 | The proposed algorithm
Desk | 0.0244937 | 0.0289299
360 | 0.0186606 | 0.0225014
Room | 0.0220836 | 0.0282757

Figure 14: Absolute trajectory error mapped onto the trajectory (x, y, z in meters; the reference trajectory is shown for comparison).

Due to the sharpening of local image blocks, the average tracking time increases slightly, but the increase is not large.

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information entropy screening algorithm and a sharpening adjustment algorithm. Its advantage is that the information entropy threshold can be calculated automatically for different scenes, giving it generality. Sharpening the image blocks below the information entropy threshold increases their clarity so that feature points can be extracted better. The combination of information entropy screening of image blocks and the sharpening algorithm solves the localization and mapping failures caused by large-angle camera rotation and by a lack of texture information. Although the processing time increases, the processing accuracy improves. We compare the average absolute trajectory error, the average relative trajectory error, and the average tracking time with those of ORB-SLAM2. The experimental results show that the improved model based on information entropy and sharpening achieves better results than ORB-SLAM2. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy; together with the sharpening algorithm, it improves the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), pp. 2090–2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.

Figure 15: Average absolute trajectory error.

Figure 16: Average relative trajectory error.

Figure 17: Average tracking time.

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, Nara, Japan, November 2007.

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

[5] R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770–779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180–198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769–777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257–265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521–2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16–20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46–57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430–443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59–61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104–106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84–88, 1993.





[6] K Di W Wan and J Chen ldquoSLAM visual odometer opti-mization algorithm based on information entropyrdquo Journal ofAutomation vol 47 no 6 pp 770ndash779 2020

[7] M Quan H Wei and J Chen ldquoSLAM visual odometeroptimization algorithm based on information entropyrdquoJournal of Automation vol 1 no 8 pp 180ndash198 2020

[8] L Xu S Zheng and J Jia ldquoUnnatural L0 sparse represen-tation for natural image deblurrirdquo in Proceedings of the 2013IEEE Conference on Computer Vision and Pattern Recognitionpp 1107ndash1114 Portland OR USA June 2013

[9] J Sun W Cao Z Xu and J Ponce ldquoLearning a convolutionalneural network for non-uniform motion blur removalrdquo inProceedings of the IEEE Conference on Computer Vision andPattern Recognition pp 769ndash777 Boston MA USA June2015

[10] S Nah T H Kim and K M Lee ldquoDeep multi-scale con-volutional neural network for dynamic scene deblurringrdquo inProceedings of the 2017 IEEE Conference on Computer Visionand Pattern Recognition (CVPR) pp 257ndash265 Honolulu HIUSA July 2017

[11] J Zhang J Pan J Ren et al ldquoDynamic scene deblurring usingspatially variant recurrent neural networksrdquo in Proceedings ofthe 2018 IEEECVF Conference on Computer Vision andPattern Recognition pp 2521ndash2529 Salt Lake City UT USAJune 2018

[12] G Chen W Chen H Yu et al ldquoResearch on autonomousnavigation method of mobile robot based on semantic ORB-SLAM2 algorithmrdquoMachine Tool amp Hydraulics vol 48 no 9pp 16ndash20 2020

[13] X Qiu and C Zhao ldquoOverview of ORB-SLAM system op-timization framework analysisrdquo Navigation Positioning andTiming vol 6 no 3 pp 46ndash57 2019

[14] E Rosten and T Drummond ldquoMachine learning for high-speed corner detectionrdquo in Proceedings of the EuropeanConference on Computer Vision pp 430ndash443 Graz Austria2006

[15] A Foi V Katkovnik and K Egiazarian ldquoPointwise shape-adaptive DCT for high-quality denoising and deblocking ofgrayscale and color imagesrdquo IEEE Transactions on ImageProcessing vol 16 no 5 pp 1395ndash1411 2007

[16] Y Yu andWH Tardos ldquoORB-SLAM a versatile and accuratemonocular SLAM systemrdquo IEEE Transactions on Roboticsvol 33 no 5 pp 1255ndash1262 2017

[17] R Xu S Liu and J Chen ldquoe rationality and application ofinformation entropy description in imagesrdquo InformationTechnology vol 11 no 5 pp 59ndash61 2005

[18] Y Zhu ldquoOverview of image enhancement algorithmsrdquo In-formation and Computer (-eoretical Edition) vol 16 no 6pp 104ndash106 2017

[19] Z Yuan ldquoImage threshold automatic selection based on one-dimensional entropyrdquo Journal of Gansu University of Tech-nology vol 1 no 1 pp 84ndash88 1993

Mathematical Problems in Engineering 13

Page 8: ImprovedORB-SLAM2AlgorithmBasedonInformationEntropy ...downloads.hindawi.com/journals/mpe/2020/4724310.pdf · In 2007, Andrew Davison proposed Mono-SLAM [2], which is the real-time

In view of the above problems, we propose an adaptive method for setting the information entropy threshold, which adjusts the threshold according to different scenarios. The self-adjusting formula is

E0 = H(i)ave / 2 + δ,  (12)

where H(i)ave is the average information entropy of the scene, obtained on a first run by computing the information entropy of each frame of a video of the scene and dividing by the number of frames; i is the frame index in the video sequence; and δ is a correction factor, experimentally set to 0.3, which works best. The E0 calculated by the above formula is the information entropy threshold of the scene.
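The per-block entropy computation and the adaptive threshold of Eq. (12) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the block size and the δ = 0.3 value follow the text.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (in bits) of an 8-bit grayscale image or block."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before log2
    return float(-(p * np.log2(p)).sum())

def adaptive_threshold(frames, delta=0.3):
    """E0 = H(i)ave / 2 + delta (Eq. 12), averaging entropy over all frames."""
    h_ave = np.mean([image_entropy(f) for f in frames])
    return h_ave / 2.0 + delta

# A flat block carries no information (0 bits); a block covering all 256
# gray levels uniformly reaches the maximum of 8 bits.
flat = np.zeros((16, 16), dtype=np.uint8)
full = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(image_entropy(flat), image_entropy(full))
print(adaptive_threshold([flat, full]))
```

For these two toy frames H(i)ave is 4 bits, so the threshold evaluates to 4/2 + 0.3 = 2.3; blocks below this value would be sent to the sharpening step.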

4. Experimental Results and Analysis

All the experiments are conducted on a laptop computer. The operating system is Linux 16.04 (64-bit), the processor is an Intel(R) Core(TM) i5-7300U CPU at 2.50 GHz, and the development environment is CLion 2019 with OpenCV 3.3.0; the program is written in C++. The back-end uses g2o for pose optimization based on pose graphs to generate the motion trajectory. The image sequences are read at 30 frames per second, the same rate as for the open-source ORB-SLAM2 system used for comparison.

4.1. Comparison of Two-Frame Image Matching. In this paper, the rgbd_dataset_freiburg1_desk dataset is selected, from which two blurry images with large rotation angles are extracted, and feature detection and matching are performed using the ORB, ORB + information entropy, and ORB + information entropy + sharpening algorithms. We compare the extraction effect, the number of key points, the number of matches, the number of matches remaining after RANSAC filtering, and the matching rate. The matching statistics are shown in Table 1, and the matching effects are shown in Figures 8, 9, and 10.

Table 1 shows the results of feature point detection and matching for ORB, ORB + information entropy, and ORB + information entropy + sharpening. The numbers of correct matching points and the matching rates of ORB and ORB + information entropy are basically the same, and the matching effect diagram of the latter shows mismatches with large errors. For ORB + information entropy + sharpening, the number of correct matching points increases significantly, and its matching rate is improved by 36.7% and 38.8% relative to ORB and ORB + information entropy, respectively.

The results show that adding the information entropy screening and sharpening process proposed by this algorithm effectively increases the number of matches and improves the matching rate. The accuracy of image feature detection and matching is improved, which provides more accurate and effective input for subsequent processes such as tracking, mapping, and loop detection.
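ORB descriptors are binary strings compared with the Hamming distance, and the raw matches are then filtered (here by a mutual cross-check; the paper additionally applies RANSAC, as in Table 1). The following is an illustrative pure-NumPy sketch, not the paper's implementation, using short 32-bit descriptors for brevity where real ORB uses 256 bits.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptors(des1, des2):
    """Brute-force nearest-neighbour matching with a mutual cross-check,
    in the spirit of cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)."""
    d = np.array([[hamming(a, b) for b in des2] for a in des1])
    matches = []
    for i in range(len(des1)):
        j = int(np.argmin(d[i]))
        if int(np.argmin(d[:, j])) == i:   # keep only mutual best matches
            matches.append((i, j, int(d[i, j])))
    return matches

# Toy descriptors: des2 is des1 in reversed order, so descriptor i in the
# first image should match descriptor 4 - i in the second, at distance 0.
des1 = np.array([[0, 0, 0, 0],
                 [255, 255, 255, 255],
                 [15, 15, 15, 15],
                 [240, 240, 240, 240],
                 [255, 0, 255, 0]], dtype=np.uint8)
des2 = des1[::-1].copy()
matches = match_descriptors(des1, des2)
print(matches)  # [(0, 4, 0), (1, 3, 0), (2, 2, 0), (3, 1, 0), (4, 0, 0)]
```

The cross-check plays the same role as the RANSAC stage in Table 1: it discards one-sided matches before the geometric verification step.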

4.2. Comparison of Image Matching in the Visual Odometer. For a timestamp sequence covering different degrees of blur and rotation angle (frames 170-200), the ORB-SLAM2 algorithm and the feature detection and matching algorithm of the improved method are both used to perform feature detection and matching between adjacent frames, and the numbers of matching points of the two algorithms are compared. The image is clear between frames 170 and 180, motion blur starts between frames 180 and 190, and the maximum rotation angle occurs between frames 190 and 200. The most dramatic motion blur is shown in Figure 11.

It can be seen from Figure 12 that the improved algorithm matches more feature points than the original algorithm. The number of matching points of the improved algorithm stays above 50, and tracking loss does not occur even when the image is blurred by fast motion. When the resolution of the image is reduced, the

Figure 6 Feature point extraction based on ORB-SLAM2

Figure 7 Extraction of sharpening adjustment feature points basedon information entropy

8 Mathematical Problems in Engineering

improved algorithm still has more matching feature points and shows strong anti-interference ability and robustness.

4.3. Analysis of Trajectory Tracking Accuracy. The rgbd_dataset_freiburg1_desk dataset from the TUM data is selected for the experiment. We compare the real trajectory of the robot provided by TUM with the motion trajectory calculated by the algorithm to verify the improvement achieved by the proposed algorithm. The average absolute trajectory error, the average relative trajectory error, and the average tracking time are taken as the standards, and the overall trajectory tracking accuracy and real-time performance of the ORB-SLAM2 system and the improved SLAM system are compared.

Figure 8 Matching effect of ORB

Figure 9 Matching effect of ORB+ information entropy

Figure 10 Matching effect of ORB+ information entropy + sharpening

Table 1 Comparison of matching rates of three feature detection and matching algorithms

Model                       Key points   Matches   After RANSAC   Match rate (%)
ORB                         1000/1004    254       87             34.25
+ information entropy       1000/1002    255       86             33.73
+ sharpening                1000/983     284       133            46.83


We use the information entropy of image blocks to determine their amount of information. The algorithm sharpens the image blocks with small information entropy, enhances local image details, and extracts local feature points that characterize the image information, providing the back-end with a correlation basis for matching adjacent frames and key frames. This enhances robustness and reduces the loss of motion tracking caused by the failure of interframe matching. Based on the matching result, the R and t transformation between frames is calculated, and the back-end uses g2o to optimize the pose based on the pose graph. Finally, the motion trajectory is generated, as shown in Figure 13.
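The sharpen-only-low-entropy-blocks step can be sketched as below. The 3x3 kernel (identity plus Laplacian) and the block size are illustrative assumptions; the paper does not reproduce its exact coefficients here.

```python
import numpy as np

def block_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale block."""
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A common 3x3 sharpening kernel (center-weighted Laplacian); the paper's
# exact coefficients are an assumption here.
KERNEL = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.float64)

def sharpen(block):
    """'Same'-size convolution with zero padding, clipped back to uint8."""
    h, w = block.shape
    pad = np.pad(block.astype(np.float64), 1)
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = (pad[y:y + 3, x:x + 3] * KERNEL).sum()
    return np.clip(out, 0, 255).astype(np.uint8)

def sharpen_low_entropy_blocks(gray, block=32, e0=2.3):
    """Sharpen only the image blocks whose entropy falls below threshold E0."""
    out = gray.copy()
    for y in range(0, gray.shape[0], block):
        for x in range(0, gray.shape[1], block):
            b = gray[y:y + block, x:x + block]
            if block_entropy(b) < e0:
                out[y:y + block, x:x + block] = sharpen(b)
    return out
```

Texture-rich blocks pass through untouched, so the extra cost is paid only on the low-information regions, consistent with the small tracking-time increase reported later.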

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true value of the camera's pose and the value estimated by the SLAM system. The root mean square of the absolute trajectory error, RMSE(x), is defined as follows:

RMSE(x) = sqrt( (1/n) * sum_{i=1}^{n} ||x_ei - x_si||^2 ),  (13)

Among them, x_ei represents the estimated position of the ith frame in the image sequence, and x_si represents the ground-truth position of the ith frame. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the ground truth is shown in Figure 14.
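Eq. (13) reduces to a few lines of NumPy once the estimated and ground-truth positions have been associated by timestamp (the association step is assumed done here):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error (Eq. 13).
    Both inputs are (n, 3) position arrays already paired by timestamp."""
    diff = estimated - ground_truth
    return float(np.sqrt((np.linalg.norm(diff, axis=1) ** 2).mean()))

# Tiny made-up example trajectory (metres).
est = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [2.0, 0.2, 0.0]])
gt  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(ate_rmse(est, gt))
```

This is the same quantity the TUM benchmark evaluation scripts report, which is why it is used as the standard in Tables 2 and 3.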

The average absolute trajectory error directly measures the difference between the real value of the camera's pose and the value estimated by the SLAM system. The program first aligns the real and estimated values according to the pose timestamps and then calculates the difference for each pair of poses; this measure is well suited to evaluating the performance of visual SLAM systems. The average relative trajectory error instead measures the difference between the pose changes over the same pairs of timestamps: after timestamp alignment, both the real pose change and the estimated pose change are computed over every interval of the same length, and their difference is calculated. It is suitable for estimating the drift of the system. The absolute trajectory error at frame i is defined as follows:

F_i = Q_i^{-1} * S * p_i,  (14)

where Q_i represents the real pose of frame i, p_i represents the pose of frame i estimated by the algorithm, and S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.

This experiment comprehensively compares three datasets, rgbd_dataset_freiburg1_desk, 360, and room, of which the desk dataset is an ordinary scene, the 360 dataset was collected while the camera performed a 360-degree rotation, and the room dataset was collected in a scene whose image sequence is not rich in texture. ORB-SLAM2 generates slightly different trajectory results on each run, so, to eliminate the randomness of the experimental results, each dataset is run 6 times in the same experimental environment to calculate the average absolute trajectory error, the average relative trajectory

Figure 11 200th frame image

Figure 12 Comparison of the number of matches between the algorithm of this paper and ORB-SLAM2 under motion blur (frames 170-200)

Figure 13 Movement track


error, and the average tracking time. Tables 2-4 are plotted in Figures 15-17 to compare the performance of the algorithms by analyzing the trajectories.

Under the absolute trajectory error criterion, the algorithm in this paper has clear advantages. In the scene where the camera rotates 360°, the absolute trajectory error is reduced by 35%, and in the ordinary scene it is reduced by 48%. The improvement is largest on the dataset with poor texture, indicating that the algorithm in this paper brings a considerable improvement in scenes whose image sequences are not rich in texture; there is also a certain improvement in the case of large-angle camera rotation.

In terms of relative trajectory error, the algorithm also has an advantage: in the ordinary scene it is 17.5% smaller than the average relative trajectory error of the ORB-SLAM2 system, and in the scenes where the camera rotates 360° and where the image sequence texture is not rich, it improves on the ORB-SLAM2 system by more than 40%. This proves that the algorithm in this article is indeed an improvement over the ORB-SLAM2 system.

Table 2 Average absolute trajectory error

Dataset   ORB-SLAM2 (m)   The proposed algorithm (m)
Desk      0.057630667     0.029899333
360       0.113049667     0.072683167
Room      0.229222167     0.111185333

Table 3 Average relative trajectory error

Dataset   ORB-SLAM2 (m)   The proposed algorithm (m)
Desk      0.0362615       0.029899
360       0.1534532       0.078611
Room      0.1592775       0.092167

Table 4 Average tracking time

Dataset   ORB-SLAM2 (s)   The proposed algorithm (s)
Desk      0.0244937       0.0289299
360       0.0186606       0.0225014
Room      0.0220836       0.0282757
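The percentage reductions quoted in the text can be reproduced directly from Tables 2 and 3; this short check script (not part of the paper) also recovers the overall 45.1% absolute and 36.1% relative figures given in the abstract by averaging over the three datasets.

```python
# Values copied from Tables 2 and 3 (metres).
ate = {"Desk": (0.057630667, 0.029899333),
       "360":  (0.113049667, 0.072683167),
       "Room": (0.229222167, 0.111185333)}
rpe = {"Desk": (0.0362615, 0.029899),
       "360":  (0.1534532, 0.078611),
       "Room": (0.1592775, 0.092167)}

def reduction(orb, ours):
    """Percentage reduction of the proposed algorithm relative to ORB-SLAM2."""
    return 100.0 * (orb - ours) / orb

for name, (orb, ours) in ate.items():
    print(f"ATE {name}: {reduction(orb, ours):.1f}%")   # Desk 48.1, 360 35.7, Room 51.5
for name, (orb, ours) in rpe.items():
    print(f"RPE {name}: {reduction(orb, ours):.1f}%")   # Desk 17.5, 360 48.8, Room 42.1

mean_ate = sum(reduction(*v) for v in ate.values()) / 3   # ~45.1% (abstract)
mean_rpe = sum(reduction(*v) for v in rpe.values()) / 3   # ~36.1% (abstract)
print(mean_ate, mean_rpe)
```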

Figure 14 Absolute trajectory error mapped onto the trajectory (x, y, z in m), compared with the reference trajectory


Due to the sharpening of the local image, the average tracking time is slightly increased, but the increase is small.

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information entropy screening algorithm and a sharpening adjustment algorithm. Its advantage is that the information entropy threshold can be calculated automatically for different scenes, giving the method generality. Sharpening the image blocks whose entropy falls below the information entropy threshold increases their clarity so that feature points can be extracted more reliably. The combination of information entropy screening of image blocks and the sharpening algorithm solves the localization and mapping failures caused by large-angle camera rotation and by the lack of texture information. Although the processing time increases, the processing accuracy improves. We compare the average absolute trajectory error, the average relative trajectory error, and the average tracking time with ORB-SLAM2. The experimental results show that the improved model based on information entropy and sharpening achieves better results than ORB-SLAM2. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy; the combination with the sharpening algorithm improves the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and its Applications (DIS'06), pp. 2090-2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052-1067, 2007.

Figure 15 Average absolute trajectory error

Figure 16 Average relative trajectory error

Figure 17 Average tracking time

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225-234, Nara, Japan, November 2007.

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015.

[5] R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770-779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180-198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107-1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769-777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257-265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521-2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16-20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46-57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430-443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395-1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59-61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104-106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84-88, 1993.



We use the information entropy of image blocks to determine the amount of information they carry. The algorithm sharpens the image blocks with small information entropy, enhancing local image details, and extracts local feature points that characterize the image information for the back end as the correlation basis of adjacent-frame and keyframe matching. This enhances robustness and reduces the loss of motion tracking caused by the failure of inter-frame matching. Based on the matching result, the R and t transformation between frames is calculated, and the back end uses g2o to optimize the pose over the pose graph. Finally, the motion trajectory is generated, as shown in Figure 13.
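As a rough illustration of this block-screening step, the sketch below computes the Shannon entropy of each grayscale block and applies Laplacian sharpening only to the blocks that fall below the threshold. This is not the paper's implementation: the block size and the mean-entropy fallback threshold are assumptions standing in for the adaptive threshold rule.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy (bits) of an 8-bit grayscale image block."""
    hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sharpen(block):
    """Laplacian sharpening: subtract the 4-neighbour Laplacian from the block."""
    padded = np.pad(block.astype(np.float64), 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * padded[1:-1, 1:-1])
    return np.clip(block - lap, 0, 255).astype(np.uint8)

def screen_and_sharpen(img, block=40, threshold=None):
    """Sharpen only the blocks whose entropy falls below the threshold.

    If no threshold is given, the mean block entropy is used as a simple
    stand-in for the paper's adaptive threshold-selection rule."""
    out = img.copy()
    h, w = img.shape
    tiles = [(y, x) for y in range(0, h, block) for x in range(0, w, block)]
    ents = {t: block_entropy(img[t[0]:t[0]+block, t[1]:t[1]+block]) for t in tiles}
    if threshold is None:
        threshold = sum(ents.values()) / len(ents)
    for (y, x), e in ents.items():
        if e < threshold:
            out[y:y+block, x:x+block] = sharpen(img[y:y+block, x:x+block])
    return out
```

A flat block has entropy 0 and a block split evenly between two gray levels has entropy 1 bit, so low-texture regions are exactly the ones selected for sharpening.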

In terms of trajectory tracking accuracy, the absolute trajectory error reflects the difference between the true value of the camera's pose and the value estimated by the SLAM system. The absolute trajectory error root mean square RMSE(x) is defined as follows:

RMSE(x) = \sqrt{\frac{\sum_{i=1}^{n} \left\lVert x_{e,i} - x_{s,i} \right\rVert^{2}}{n}}.    (13)

Among them, x_{e,i} represents the estimated position of the ith frame in the image sequence and x_{s,i} represents the ground-truth position of the ith frame. Taking the rgbd_dataset_freiburg1_desk dataset as an example, the error between the optimized motion trajectory and the ground truth is shown in Figure 14.
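Equation (13) can be evaluated directly from the two aligned position sequences; a minimal sketch, assuming the positions are given as n x 3 arrays:

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error RMSE per equation (13):
    sqrt((1/n) * sum_i ||x_e,i - x_s,i||^2)."""
    e = np.asarray(estimated, dtype=np.float64)
    s = np.asarray(ground_truth, dtype=np.float64)
    assert e.shape == s.shape
    sq = np.sum((e - s) ** 2, axis=-1)  # per-frame squared position error
    return float(np.sqrt(sq.mean()))
```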

The average absolute trajectory error directly measures the difference between the true camera pose and the pose estimated by the SLAM system. The program first aligns the true and estimated values according to the timestamps of the poses and then computes the difference for each pair of poses; this makes it well suited to evaluating the overall performance of a visual SLAM system. The average relative trajectory error measures the difference between the pose changes over the same pair of timestamps: after timestamp alignment, both the true and the estimated pose change are computed over each fixed time interval and then differenced, which makes it suitable for estimating the drift of the system. The definition of the absolute trajectory error in frame i is as follows:

F_i = Q_i^{-1} S p_i.    (14)

Among them, Q_i represents the true pose of frame i, p_i represents the pose of frame i estimated by the algorithm, and S ∈ Sim(3) is the similarity transformation matrix from the estimated pose to the true pose.
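With poses written as 4 x 4 homogeneous transforms, equation (14) is a single matrix product; a small sketch, where the `translation` helper is purely illustrative and S is assumed to have been computed by a separate Sim(3) alignment step:

```python
import numpy as np

def translation(t):
    """4x4 homogeneous transform with identity rotation and translation t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def frame_error(Q_i, p_i, S=np.eye(4)):
    """Per-frame absolute trajectory error per equation (14):
    F_i = Q_i^{-1} S p_i. F_i is the identity transform when the
    aligned estimate matches the true pose exactly; its translation
    part gives the position error of frame i."""
    return np.linalg.inv(Q_i) @ S @ p_i
```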

This experiment comprehensively compares three datasets, rgbd_dataset_freiburg1_desk, rgbd_dataset_freiburg1_360, and rgbd_dataset_freiburg1_room, of which the desk dataset is an ordinary scene, the 360 dataset was collected while the camera performed a 360-degree rotation, and the room dataset was collected from a scene whose image sequence is not rich in texture. ORB-SLAM2 generates different trajectory results on each run, so in order to eliminate the randomness of the experimental results, this article runs 6 experiments on each dataset under the same experimental environment and calculates the average absolute trajectory error, the average relative trajectory

Figure 11: The 200th frame image.

Figure 12: Number of feature matches versus timestamp under motion blur, comparing the algorithm of this paper with ORB-SLAM2.

Figure 13: Movement track.

10 Mathematical Problems in Engineering

error, and the average tracking time. The results in Tables 2–4 are plotted in Figures 15–17 to compare the performance of the algorithms by analyzing the trajectories.

Under the evaluation standard of absolute trajectory error, the algorithm in this paper has clear advantages. In the scene where the camera rotates 360°, the absolute trajectory error is reduced by about 35%, and in the ordinary scene it is reduced by about 48%. The improvement is largest in the case of poor texture, indicating that the algorithm in this paper brings a considerable improvement to scenes whose image sequences are not rich in texture; there is also a certain improvement in the case of large-angle camera rotation.

There is also an advantage in relative trajectory error: in ordinary scenes it is 17.5% smaller than the average relative trajectory error of the ORB-SLAM2 system. In the scenes where the camera rotates 360° and where the image sequence texture is not rich, the algorithm in this paper outperforms the ORB-SLAM2 system by more than 40%. This shows that the algorithm in this article is indeed an improvement over the ORB-SLAM2 system.

Table 2: Average absolute trajectory error (m).

Dataset   ORB-SLAM2     The proposed algorithm
Desk      0.057630667   0.029899333
360       0.113049667   0.072683167
Room      0.229222167   0.111185333

Table 3: Average relative trajectory error (m).

Dataset   ORB-SLAM2     The proposed algorithm
Desk      0.0362615     0.029899
360       0.1534532     0.078611
Room      0.1592775     0.092167

Table 4: Average tracking time (s).

Dataset   ORB-SLAM2     The proposed algorithm
Desk      0.0244937     0.0289299
360       0.0186606     0.0225014
Room      0.0220836     0.0282757
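The percentage improvements quoted in the discussion follow directly from the table values:

```python
def reduction(orb, ours):
    """Percentage reduction of the proposed algorithm's error vs. ORB-SLAM2."""
    return 100.0 * (orb - ours) / orb

# Absolute trajectory error (Table 2):
desk_abs = reduction(0.057630667, 0.029899333)  # ~48.1%, the "48%" in the text
rot_abs  = reduction(0.113049667, 0.072683167)  # ~35.7%, the "35%" in the text
room_abs = reduction(0.229222167, 0.111185333)  # ~51.5%, the largest improvement

# Relative trajectory error, ordinary scene (Table 3):
desk_rel = reduction(0.0362615, 0.029899)       # ~17.5%
```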

Figure 14: Absolute trajectory error mapped onto the estimated trajectory (axes x, y, z in meters), shown against the reference trajectory.


Due to the sharpening of the local image, the average tracking time increases slightly, but the increase is not large.

5. Conclusion

We propose an improved ORB-SLAM2 algorithm based on an adaptive information-entropy screening algorithm and a sharpening adjustment algorithm. Its advantage is that the information entropy threshold can be calculated automatically for different scenes, which gives the method generality. Sharpening the image blocks that fall below the information entropy threshold increases their clarity so that feature points can be extracted more reliably. The combination of entropy-based block screening and sharpening addresses the localization and mapping failures caused by large-angle camera rotation and by a lack of texture information. Although the processing time increases, the accuracy improves. We compare the average absolute trajectory error, the average relative trajectory error, and the average tracking time with ORB-SLAM2. The experimental results show that the improved model based on information entropy and sharpening achieves better results than ORB-SLAM2. This article innovatively combines the ORB-SLAM2 system with adaptive information entropy and sharpening, improving the accuracy and robustness of the system without a noticeable gap in processing time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC)-Guangdong Big Data Science Center Project under Grant U1911401 and in part by the South China Normal University National Undergraduate Innovation and Entrepreneurship Training Program under Grant 201910574057.

Figure 15: Average absolute trajectory error.

Figure 16: Average relative trajectory error.

Figure 17: Average tracking time.

References

[1] D. Schleicher, L. M. Bergasa, R. Barea, E. Lopez, and M. Ocana, "Real-time simultaneous localization and mapping using a wide-angle stereo camera," in Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), pp. 2090–2095, Beijing, China, October 2006.

[2] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.

[3] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, Nara, Japan, November 2007.

[4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

[5] R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[6] K. Di, W. Wan, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 47, no. 6, pp. 770–779, 2020.

[7] M. Quan, H. Wei, and J. Chen, "SLAM visual odometer optimization algorithm based on information entropy," Journal of Automation, vol. 1, no. 8, pp. 180–198, 2020.

[8] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114, Portland, OR, USA, June 2013.

[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769–777, Boston, MA, USA, June 2015.

[10] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257–265, Honolulu, HI, USA, July 2017.

[11] J. Zhang, J. Pan, J. Ren, et al., "Dynamic scene deblurring using spatially variant recurrent neural networks," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2521–2529, Salt Lake City, UT, USA, June 2018.

[12] G. Chen, W. Chen, H. Yu, et al., "Research on autonomous navigation method of mobile robot based on semantic ORB-SLAM2 algorithm," Machine Tool & Hydraulics, vol. 48, no. 9, pp. 16–20, 2020.

[13] X. Qiu and C. Zhao, "Overview of ORB-SLAM system optimization framework analysis," Navigation Positioning and Timing, vol. 6, no. 3, pp. 46–57, 2019.

[14] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proceedings of the European Conference on Computer Vision, pp. 430–443, Graz, Austria, 2006.

[15] A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.

[16] Y. Yu and W. H. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[17] R. Xu, S. Liu, and J. Chen, "The rationality and application of information entropy description in images," Information Technology, vol. 11, no. 5, pp. 59–61, 2005.

[18] Y. Zhu, "Overview of image enhancement algorithms," Information and Computer (Theoretical Edition), vol. 16, no. 6, pp. 104–106, 2017.

[19] Z. Yuan, "Image threshold automatic selection based on one-dimensional entropy," Journal of Gansu University of Technology, vol. 1, no. 1, pp. 84–88, 1993.
