
Multispectral Image Matching Using SIFT and SURF Algorithm: A Review

Shaharom, M. F. M. and Tahar, K. N.*

School of Surveying Science and Geomatics, College of Built Environment, Universiti Teknologi MARA, 40450 Shah Alam, Selangor Darul Ehsan, Malaysia

Email:khairul0127@uitm.edu.my*

*Corresponding Author

Abstract

SIFT and SURF image matching have been used in many industries, such as survey and mapping, geology, medicine and automotive. The multispectral sensors offered today pose a new challenge for researchers studying the performance of the SIFT and SURF algorithms on multispectral images. A multispectral image basically consists of more than three bands; as a result, the differences between those bands lead to non-linear intensity between images. Both algorithms use a 'blob' detector that extracts feature points as key points for subsequent image matching. Hence, the low visibility of features in multispectral images is one of the issues that needs to be solved. Many researchers have investigated and proposed new strategies to extract and match feature points using SIFT and SURF on multispectral images. Image fusion, combinations of different descriptors, and revisions or alterations of the algorithms themselves are among the approaches taken by researchers to achieve good results.

Keywords: Algorithm, Bands, Image, Matching, Multispectral, Processing, SIFT, SURF

1. Introduction

Inspiration from human vision has helped researchers understand and develop computer vision (CV) for detecting objects in images [1] and [2]. CV understanding makes it possible to transform stereoscopic images into a 3D display format [3]. Determining the height or depth of an object in an image requires conjugate points, which can be extracted either manually or automatically using an algorithm. Many algorithms have been proposed by researchers. One of the established feature point extraction methods is the Scale Invariant Feature Transform (SIFT) proposed by [4]. This algorithm is invariant to the scale, rotation and illumination of the images. SIFT consists of two main stages: first, all feature point candidates are extracted by a detector; second, each selected feature point is given a description by the SIFT descriptor [5]. The descriptions are used as an identification for each feature point candidate during image matching later on. The Speeded-Up Robust Features (SURF) algorithm is another feature extraction method in CV. SURF, developed by [6], was inspired by the SIFT technique and reduces its complexity for fast computation. Nowadays, researchers focus not only on RGB images but also on wider spectra such as Red, Red Edge and Near Infrared [7] [8] [9] [10] and [11]. In addition, multispectral images have been used in geomatics and surveying fields such as vegetation cover estimation, tree counting and coastal three-dimensional modelling [12] and [13]. Hence, image matching in multispectral images needs to be explored, and it has become a challenge for researchers to analyse the performance of the SIFT and SURF algorithms on multispectral images. The strategies, performance and results are discussed in this paper.

2. SIFT and SURF Image Matching

2.1 Feature Point Extraction (Detector)

Feature point extraction begins with the detection computation. The detector is the tool for finding interest points in the images that will act as feature point candidates later on. In the SIFT algorithm, there are three stages in extracting the feature points. First, the image is resampled into several octaves. At the same time, the images are blurred using a Gaussian blur operator across the octaves [14]. Equation 1 defines the blurred (scale-space) image [15].

Equation 1: L(x, y, σ) = G(x, y, σ) * I(x, y)

L represents the blurred (scale-space) image, G represents the Gaussian blur function and I is the original image, where x, y are the location coordinates and σ is the scale parameter (the greater the value of sigma, the blurrier the image). Equation 2 illustrates the Gaussian blur operator [16].

Equation 2: G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))

In order to extract the feature point candidates, adjacent Gaussian images are subtracted from each other; the result is called a Difference of Gaussian (DoG) image. The use of the DoG technique is actually an improvement on Lindeberg's method, in which the DoG approximates the scale-normalized Laplacian of Gaussian [17]. Equation 3 illustrates the Difference of Gaussian (DoG) [4].

Equation 3: D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)

Next, the DoG images are stacked to determine the potential feature points. The eight (8) neighbouring pixels surrounding the point in its own scale are taken into account, as well as the nine pixels each in the scales above and below. A candidate is selected as a feature point if it is a local extremum, as illustrated in Figure 1.

Figure 1: Finding Feature Point by Local Extrema [3]

Finally, the feature point candidates are localized and noise is removed using a Taylor series expansion of the scale space, and a 2x2 Hessian matrix (H) is used to discriminate edge features. SURF, on the other hand, uses an intermediate image, namely the integral image, as the medium for extracting feature points. The integral image simplifies the complexity of the SIFT detector [18].

The original image is converted into the integral image using Equation 4, the integral image equation [5].

Equation 4: IΣ(x, y) = Σ(i=0..x) Σ(j=0..y) I(i, j)

The SURF detector applies the Hessian matrix to extract feature point candidates. It takes the SIFT detector to another level, in that point extraction through scaling and localisation is performed simultaneously [19]. Candidate feature points are identified by applying box filters to the integral image, which makes the computation faster than the SIFT technique.
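A minimal sketch of the integral image (Equation 4) and of the constant-time box sum that SURF's filters rely on; the function names are illustrative.

```python
import numpy as np

def integral_image(img):
    """Each entry holds the sum of all pixels above and to the left of it,
    inclusive (Equation 4)."""
    return img.astype(float).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum over any rectangle from at most four lookups in the integral
    image -- this is why SURF's box filters cost the same regardless of
    filter size."""
    s = ii[bottom, right]
    if top > 0:
        s -= ii[top - 1, right]      # remove the strip above the box
    if left > 0:
        s -= ii[bottom, left - 1]    # remove the strip left of the box
    if top > 0 and left > 0:
        s += ii[top - 1, left - 1]   # re-add the doubly removed corner
    return s
```

Because `box_sum` is O(1), enlarging the Hessian box filters to search coarser scales adds no per-pixel cost, which is the core of SURF's speed advantage over SIFT's repeated Gaussian blurring.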

2.2 Feature Point Descriptions (Descriptor)

Both the SIFT and SURF algorithms share the same concept, in which the descriptions of the feature points are used as the elements for image matching later on. The descriptor serves as a fingerprint or identity for each feature point extracted by the detector. A description carries two important elements: the gradient magnitude value and its orientation [4]. Equation 5 and Equation 6 illustrate the gradient magnitude formula and the orientation computation in SIFT, respectively [17].

Equation 5: m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]

Equation 6: θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]

The descriptor stores the magnitude and orientation values in bins. SIFT uses a Histogram of Gradients (HoG), with the values stored in histogram bins: a 4x4 grid of cells per window, each cell holding eight (8) directions. As a result, SIFT stores 128 bins for each feature point description, as illustrated in Figure 2. Notably, all SIFT descriptions are computed in scale space rather than on the original image. The SIFT descriptor is complex and requires a large amount of information to be stored; as a result, the image matching computation takes longer [21]. SURF descriptions, in contrast, use the Haar-wavelet technique, in which the description of each feature point is computed from the integral image, not the original image [22]. Equation 7 illustrates the components of the SURF description [23].

Equation 7: v = (Σdx, Σdy, Σ|dx|, Σ|dy|)

Figure 2: Descriptions in SIFT [20]
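The SIFT description step (Equations 5 and 6 feeding a 4x4 grid of 8-bin histograms) can be sketched as follows. This is a simplified illustration: the keypoint-orientation rotation, Gaussian weighting and trilinear interpolation of the real SIFT descriptor are omitted, and the function name is our own.

```python
import numpy as np

def sift_like_descriptor(patch):
    """Build a 128-bin description for a 16x16 patch: split it into a 4x4
    grid of cells, and in each cell accumulate an 8-bin histogram of
    gradient orientations weighted by gradient magnitude."""
    patch = patch.astype(float)
    dy = np.gradient(patch, axis=0)
    dx = np.gradient(patch, axis=1)
    mag = np.hypot(dx, dy)                  # Equation 5
    ori = np.arctan2(dy, dx) % (2 * np.pi)  # Equation 6
    desc = []
    for cy in range(4):
        for cx in range(4):
            m = mag[cy * 4:(cy + 1) * 4, cx * 4:(cx + 1) * 4].ravel()
            o = ori[cy * 4:(cy + 1) * 4, cx * 4:(cx + 1) * 4].ravel()
            hist, _ = np.histogram(o, bins=8, range=(0, 2 * np.pi), weights=m)
            desc.append(hist)
    v = np.concatenate(desc)                # 4 x 4 cells x 8 bins = 128
    n = np.linalg.norm(v)
    return v / n if n > 0 else v            # normalise for illumination invariance
```

The final normalisation is what gives the descriptor its tolerance to linear illumination change; it is precisely this step that struggles when the intensity change between bands is non-linear, as discussed in Section 3.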

Figure 3: Image matching; a) without RANSAC, b) with RANSAC [21]

A SURF description contains four components per subregion: the sums of the Haar-wavelet responses in the X- and Y-directions, and the sums of their absolute values [22]. Therefore, the SURF description stores only 64 bins (a 64-dimensional feature vector).
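A hedged sketch of how the 64 bins arise: 16 subregions, each contributing the four sums of Equation 7. Plain image gradients stand in for the Haar-wavelet responses here, the sampling layout is simplified, and the function name is our own.

```python
import numpy as np

def surf_like_descriptor(patch):
    """Build a 64-bin description for a 20x20 patch: split it into a 4x4
    grid of 5x5 subregions, each contributing the four sums of Equation 7
    (sum dx, sum dy, sum |dx|, sum |dy|)."""
    patch = patch.astype(float)
    dy = np.gradient(patch, axis=0)  # stand-in for Haar response in y
    dx = np.gradient(patch, axis=1)  # stand-in for Haar response in x
    parts = []
    for cy in range(4):
        for cx in range(4):
            sx = dx[cy * 5:(cy + 1) * 5, cx * 5:(cx + 1) * 5]
            sy = dy[cy * 5:(cy + 1) * 5, cx * 5:(cx + 1) * 5]
            parts.append([sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()])
    return np.array(parts).ravel()   # 16 subregions x 4 components = 64 bins
```

Half the SIFT dimensionality means half the storage and roughly half the per-comparison cost during matching, which is where SURF's descriptor speed advantage comes from.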

2.3 Feature Point Matching

The SIFT and SURF algorithms provide the feature points and their descriptions. In image matching, each feature point in the first image must be matched correctly to its counterpart in the other image. Random Sample Consensus (RANSAC) is one of the model-fitting methods used in image matching computations [24]. RANSAC has been used in many image matching applications, such as image stitching [25], face recognition [26] and deep space exploration [27]. RANSAC estimates the model parameters and filters the outliers (gross errors) from the feature matches, then recomputes the model on the remaining sample data using the least squares technique. In other words, RANSAC uses an iterative method to filter the matched feature points from the given sample data [28].

As an example, a stereopair of images was tested for image matching with and without the RANSAC algorithm, as in Figure 3. Upon application of RANSAC, the incorrect matches in Figure 3(a) were removed to produce the clean feature matches in Figure 3(b). The Fast Library for Approximate Nearest Neighbors (FLANN) is another example of a feature matching algorithm. The advantage of the FLANN matcher is that it optimizes fast nearest-neighbour detection in large data samples [29]. It also compares and computes the similarity ratio using the Euclidean distance against a chosen distance-ratio threshold [30]. In conclusion, SIFT and SURF feature matching alone is often not good enough to give good matching results [15] and [31]. Thus, both algorithms need support from matcher algorithms such as RANSAC or FLANN to eliminate outlier matches.

3. Multispectral Image Matching

Multispectral images contain a wide range of the light spectrum, forming more than three bands [32] and [33]. Many fields benefit from multispectral images, for example agriculture, geological mapping and remote sensing [34] [35] and [36]. Imaging across different spectra can lead to non-linear intensity differences between images [37].

Moreover, multispectral images produced by satellite imagers are multi-temporal and multi-perspective, and in certain cases are acquired from different sensors [38]. A sufficient amount of sun illumination during data capture is also needed to produce good multispectral images [39]. Due to the illumination and contrast variation in multispectral images, the performance of the SIFT descriptor alone is not very good. Hence, in the remote sensing field, a fusion of SIFT and Gabor feature extraction has been proposed [40] and [41]. The Gabor filter performs well at extracting features in urban and tree areas [42]; as a result, the fusion of the SIFT and Gabor filter descriptors gave good results. In addition, the combination of the SIFT detector and the Edge Oriented Histogram (EOH) descriptor has been used for image matching between visible (VS) and Long Wave Infrared (LWIR) images [43]. At the same time, combining the visible image with other spectral bands improves the visibility of the features in the image, as in Figure 4. The EOH description focuses on targeted edges, calculating their directions or orientations within certain ranges of angle and storing the values in histogram bins. The EOH descriptor is based on region information (local edge directions) [44] and [45]; thus, it has an advantage in matching over descriptors that rely on per-pixel intensity information.

These situations are found not only in satellite images but also in multispectral camera images. Thus, the combination of a Local Contrast magnitude (LC) descriptor with SIFT (LC-SIFT) was proposed by [46] to cope with the non-linear intensity issues in multispectral images. The LC descriptor focuses on the edges of features in the image, using the minimum and maximum grey levels to estimate the magnitude. LC-SIFT is therefore a fusion of the SIFT detector with the LC descriptor, forming a new hybrid CV technique, and the proposed LC-SIFT is robust to non-linear intensity issues. Band fusion is another strategy for facing the above issues: the image fusion between RGB and NIR shown in Figure 5 was made in order to improve the visibility of features in the multispectral image [47].
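As a hedged illustration of the band-fusion idea, the sketch below blends an NIR band into the RGB luminance so that NIR-visible structure shows up in the fused image. The mixing weight `alpha` and the luminance-scaling scheme are our own assumptions, not the method of the cited work.

```python
import numpy as np

def fuse_rgb_nir(rgb, nir, alpha=0.5):
    """Blend the NIR band into the RGB luminance: compute a fused
    luminance, then rescale each colour channel to match it.
    alpha is a hypothetical mixing weight."""
    rgb = rgb.astype(float)
    lum = rgb.mean(axis=2)                           # crude luminance
    fused_lum = (1 - alpha) * lum + alpha * nir.astype(float)
    scale = fused_lum / np.maximum(lum, 1e-6)        # per-pixel rescaling
    return np.clip(rgb * scale[..., None], 0, 255)
```

Scaling the channels jointly preserves hue while importing NIR contrast, which is the property that makes features such as vegetation edges easier for a blob detector to find in the fused image.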

Figure 4: Image matching of SIFT and EOH descriptor in multispectral LWIR image [42]

Figure 5: RGB and NIR images [45]

Figure 6: Local extrema difference between SURF and N-SURF [49]

The SURF matching technique also faces difficulties in extracting correct matches. In certain cases, the SURF algorithm has been upgraded to Normalized-SURF (N-SURF) for better matching results. As shown in Figure 6, the local extrema detection structure in N-SURF differs slightly from conventional SURF, in that comparable numbers of features are extracted across different scenes and bands [48]. In other words, N-SURF extracts the candidate feature points using the 3-by-3 neighbouring points on a single scale space. After that, a feature is selected only if its cumulative distribution function (CDF) value is consistent.
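The normalisation idea behind this, equalising how many features each band contributes instead of applying one absolute threshold to all bands, can be sketched as below. This is our own simplified illustration of the principle, not the published N-SURF procedure.

```python
import numpy as np

def cdf_threshold(responses, keep_frac=0.05):
    """Keep the same fraction of strongest detector responses in every
    band, so a low-contrast band contributes as many candidates as a
    high-contrast one. A fixed absolute threshold would instead starve
    the low-contrast bands of features."""
    flat = np.sort(np.abs(responses).ravel())
    cut = flat[int((1 - keep_frac) * (flat.size - 1))]  # per-band quantile
    return np.abs(responses) >= cut
```

Applying this per band before matching means cross-band pairs have comparable candidate sets to draw from, which is the precondition for the consistent matching counts reported for N-SURF.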

Last but not least, SIFT and SURF have been used in many CV applications, especially in performing automatic tie point extraction for three-dimensional model development [50], multispectral image registration that generates fused images from RGB and other bands [51], object recognition [52] and the medical investigation of X-ray images [53]. This shows that SIFT and SURF do not fail to extract feature points in all multispectral situations; it depends on the characteristics of the features in the images themselves. Images containing round feature shapes, such as vegetation and shrubbery areas, suit the 'blob' detectors (SIFT and SURF), since they are more efficient in those conditions [54] [55] and [56]. SURF detectors perform efficiently in multispectral face recognition, where the combination of the conventional SURF detector with FREAK descriptions extracted the highest number of feature matches compared with other algorithms [57]. MicaSense RedEdge multispectral images have also been tested using SURF techniques and achieved an RMSE of less than one pixel when combined with the MSAC matcher algorithm [58].

4. Conclusions

In conclusion, the SIFT and SURF algorithms consist of two parts: detectors and descriptors. The detectors of both algorithms are based on blob detection, and both descriptors are based on two separate values (gradient magnitude and its orientation). The SIFT descriptor is more complex and contains 128 bins for each feature point, while the SURF descriptor carries only 64 bins, which makes its computation faster. In certain cases, the performance of both algorithms is not very good when multispectral images are used. However, the use of matchers such as RANSAC, FLANN or MSAC may reduce mismatched feature point issues. Combination with other descriptors, such as EOH and FREAK, also helps both SIFT and SURF matching perform well. In the remote sensing field, cross-band matching of multispectral images is important in order to produce fused images for further study; hence, SIFT and SURF serve as matching algorithms for this purpose. In addition, SIFT and SURF are also capable of improving the visibility of features in non-RGB images by combining the RGB bands with other multispectral bands. In our opinion, both SIFT and SURF are important algorithms to learn and understand well, because these CV algorithms sit between conventional and modern CV techniques; indirectly, a comprehensive workflow of fundamental CV technology can be traced and digested. For future work, multispectral three-dimensional models built with the SIFT and SURF techniques need to be produced in order to determine the models' accuracy.

Acknowledgements

College of Built Environment, Universiti Teknologi MARA (UiTM), Research Management Centre (RMC) and Ministry of Higher Education (MOHE) are greatly acknowledged for providing the Fundamental Research Grant Scheme (Title: Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Feature (SURF) Modelling In Fulfilling Multispectral Object Reconstruction, Grant No. FRGS/1/2021/WAB07/UITM/02/2) and GPK fund (Grant No. 600-RMC/GPK 5/3 (223/2020)) to enable this research to be carried out. The authors would also like to thank the people who were directly or indirectly involved in this research.

References

[1] Kumar, S., Mi, J., Zhang, Q., Chang, B., Le, H., Khoshabeh, R. and Nguyen, T., (2021). Human-Inspired Camera: A Novel Camera System for Computer Vision. 2021 18th International SoC Design Conference (ISOCC), 29-30. https://doi.org/10.1109/ICETC.2010.5529412.

[2] Barik, D. and Mondal, M., (2010). Object Identification for Computer Vision using Image Segmentation. 2010 2nd International Conference on Education Technology and Computer, Vol. 2, pV2-170-V2-172. https://doi.org/10.1109/ICETC.2010.5529412.

[3] Zhang, X. and Xu, S., (2020). Research on Image Processing Technology of Computer Vision Algorithm. 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), 122-124. https://doi.org/10.1109/CVIDL51233.2020.00030.

[4] Lowe, D. G., (2004). Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, Vol. 60(2), 91–110. https://doi.org/10.1023/B:VISI.0000029664.99615.94.

[5] Campbell, N., (2008). Obtaining Feature Correspondences. Notes, Vol. 2(1), 1–10.

[6] Bay, H., Tuytelaars, T. and Van Gool, L., (2006). SURF: Speeded up Robust Features. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 3951,404-417. https://doi.org/10.1007/11744023_32.

[7] Pamart, A., Guillon, O., Faraci, S., Gattet, E., Genevois, M., Vallet, J. M. and De Luca, L., (2017). Multispectral Photogrammetric Data Acquisition and Processing for Wall Paintings Studies. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLII-2/W3, 559–566. https://doi.org/10.5194/isprs-archives-XLII-2-W3-559-2017.

[8] Minařík, R. and Langhammer, J., (2016). Use of a Multispectral UAV Photogrammetry for Detection and Tracking of Forest Disturbance Dynamics. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 711–718. https://doi.org/10.5194/isprsarchives-XLI-B8-711-2016.

[9] Abd Mukti, S. N. and Tahar, K. N., (2022). Detection of Potholes on Road Surfaces Using Photogrammetry and Remote Sensing Methods (Review). Scientific and Technical Journal of Information Technologies, Mechanics and Optics, Vol. 22(3), 459–471. https://doi.org/10.17586/2226-1494-2022-22-3-459-471.

[10] Javadnejad, F., Gillins, D. T., Parrish, C. E. and Slocum, R. K., (2020). A Photogrammetric Approach to Fusing Natural Colour and Thermal Infrared UAS Imagery in 3D Point Cloud Generation. International Journal of Remote Sensing, Vol. 41(1), 211–237. https://doi.org/10.1080/01431161.2019.1641241.

[11] Moreno, L., Ramos, V., Pohl, M. and Huguet, F., (2018). Comparative Study of Multispectral Satellite Images and RGB Images Taken from Drones for Vegetation Cover Estimation. Proceedings of the 2018 IEEE 38th Central America and Panama Convention, CONCAPAN 2018. https://doi.org/10.1109/CONCAPAN.2018.8596362.

[12] Harikumar, A., D’Odorico, P. and Ensminger, I., (2020). A Fuzzy Approach to Individual Tree Crown Delineation in UAV Based Photogrammetric Multispectral Data. 2020 IEEE International Geoscience and Remote Sensing Symposium, 4132-4135. https://doi.org/10.1109/IGARSS39084.2020.9324303.

[13] James, D., Collin, A., Mury, A., Letard, M. and Guillot, B., (2021). Uav Multispectral Optical Contribution to Coastal 3D Modelling. International Geoscience and Remote Sensing Symposium (IGARSS), 2021 July, 7951–7954. https://doi.org/10.1109/IGARSS47720.2021.9553865.

[14] Karagiannis, G., Antón Castro, F. and Mioc, D., (2016). Automated Photogrammetric Image Matching with SIFT Algorithm and Delaunay Triangulation. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 3, 23–28. https://doi.org/10.5194/isprs-annals-III-2-23-2016.

[15] Guo, R., Li, S., Cai, R. and Sun, X., (2019). Research on Image Matching Algorithm Based on Improved SIFT UAV. Journal of Physics: Conference Series, Vol. 1423(1). https://doi.org/10.1088/1742-6596/1423/1/012028

[16] Wang, S., Guo, Z. and Liu, Y., (2021). An Image Matching Method Based on SIFT Feature Extraction and FLANN Search Algorithm Improvement. Journal of Physics: Conference Series, Vol. 2037(1). https://doi.org/10.1088/1742-6596/2037/1/012122.

[17] Lingua, A., Marenchino, D. and Nex, F., (2009). Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications. Sensors, Vol. 9(5), 3745–3766. https://doi.org/10.3390/s90503745.

[18] Banerjee, A. and Mistry, D., (2017). Comparison of Feature Detection and Matching Approaches: SIFT and SURF. Global Research and Development Journal for Engineering, Vol.2(4), 7–13.

[19] Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L., (2008). Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, Vol. 110(3), 346–359. https://doi.org/10.1016/j.cviu.2007.09.014.

[20] Hatami, N., Gavet, Y. and Debayle, J., (2019). Bag of Recurrence Patterns Representation for Time-Series Classification. Pattern Analysis and Applications, Vol. 22(3), 877–887. https://doi.org/10.1007/s10044-018-0703-6.

[21] Karami, E., Prasad, S. and Shehata, M., (2017). Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images. 2015 Newfoundland Electrical and Computer Engineering Conference. https://doi.org/10.48550/arxiv.1710.02726.

[22] Teke, M. and Temizel, A., (2010a). Multi-spectral Satellite Image Registration Using Scale-Restricted Surf. Proceedings - International Conference on Pattern Recognition, March 2014, 2310–2313. https://doi.org/10.1109/ICPR.2010.565.

[23] Hassaballah, M., Alshazly, H. A. and Ali, A. A., (2019). Analysis and Evaluation of Keypoint Descriptors for Image Matching. Studies in Computational Intelligence, Vol. 804, 113–140. https://doi.org/10.1007/978-3-030-03000-1_5.

[24] Fischler, M. A. and Bolles, R. C., (1981). Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, Vol. 24(6), 381–395.

[25] Bakar, S. A., Jiang, X., Gui, X., Li, G. and Li, Z., (2020). Image Stitching for Chest Digital Radiography Using the SIFT and SURF Feature Extraction by RANSAC Algorithm. Journal of Physics: Conference Series, Vol. 1624(4). https://doi.org/10.1088/1742-6596/1624/4/042023.

[26] Vinay, A., Rao, A. S., Shekhar, V. S., Akshay Kumar, C., Murthy, K. N. B. and Natarajan, S., (2015). Feature Extraction using ORB-RANSAC for Face Recognition. Procedia Computer Science, Vol. 70, 174–184. https://doi.org/10.1016/j.procs.2015.10.068.

[27] Chen, Y. and Gao, J., (2019). SURF-Based Image Matching Method for Landing on Small Celestial Bodies, Proceedings of the 2019 International Conference on Modeling, Analysis, Simulation Technologies and Applications (MASTA 2019), Vol. 168, 401–407. https://doi.org/10.2991/masta-19.2019.68.

[28] Liu, J. and Bu, F., (2019). Improved RANSAC Features Image Matching Method Based on SURF. The Journal of Engineering, Vol. 2019(23), 9118–9122. https://doi.org/10.1049/joe.2018.9198.

[29] Vijayan, V. and Kp, P., (2019). FLANN Based Matching with SIFT Descriptors for Drowsy Features Extraction. Proceedings of the IEEE International Conference Image Information Processing ,2019 November, 600–605. https://doi.org/10.1109/ICIIP47207.2019.8985924.

[30] Raheem, H. A., (2022). Video Important Shot Detection Based on ORB Algorithm and FLANN Technique, 8th International Engineering Conference on Sustainable Technology and Development (IEC). https://doi.org/10.1109/IEC54822.2022.9807488.

[31] Golovnin, O. and Rybnikov, D., (2021). Benchmarking of Feature Detectors and Matchers Using OpenCV-Python Wrapper. Proceedings of ITNT 2021 - 7th IEEE International Conference on Information Technology and Nanotechnology. 1-6. https://doi.org/10.1109/ITNT52450.2021.9649278.

[32] Li, Yanping. (2019). A Novel Fast Retina Keypoint Extraction Algorithm for Multispectral Images Using Geometric Algebra. IEEE Access,7, 167895–167903. https://doi.org/10.1109/ACCESS.2019.2954081.

[33] Mathys, A., Jadinon, R. and Hallot, P., (2019). Exploiting 3D Multispectral Texture for a Better Feature Identification for Cultural Heritage. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 4(2/W6), 91–97. https://doi.org/10.5194/isprs-annals-IV-2-W6-91-2019.

[34] Cui, Z. and Kerekes, J. P., (2018). Potential of Red Edge Spectral Bands in Future Landsat Satellites on Agroecosystem Canopy Green Leaf Area Index Retrieval. Remote Sensing, Vol. 10(9). https://doi.org/10.3390/rs10091458.

[35] Fu, B., Shi, P., Fu, H., Ninomiya, Y. and Du, J., (2019). Geological Mapping Using Multispectral Remote Sensing Data in the Western China, IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, 5583–5586. https://doi.org/10.1109/IGARSS.2019.8898880.

[36] Yuan, K., Zhuang, X., Schaefer, G., Feng, J., Guan, L. and Fang, H., (2021). Deep-Learning-Based Multispectral Satellite Image Segmentation for Water Body Detection. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14, 7422–7434. https://doi.org/10.1109/JSTARS.2021.3098678.

[37] Ma, T., Ma, J. and Yu, K., (2019). A Local Feature Descriptor Based on Oriented Structure Maps with Guided Filtering for Multispectral Remote Sensing Image Matching. Remote Sensing, Vol. 11(8). https://doi.org/10.3390/rs11080951.

[38] Chang, H. H. and Chan, W. C., (2021). Automatic Registration of Remote Sensing Images Based on Revised SIFT with Trilateral Computation and Homogeneity Enforcement. IEEE Transactions on Geoscience and Remote Sensing, Vol. 59(9), 7635–7650. https://doi.org/10.1109/TGRS.2021.3052926.

[39] Soria, X., Sappa, A. D. and Akbarinia, A., (2017). Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities, Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), 3–8. https://doi.org/10.1109/IPTA.2017.8310105.

[40] Joshi, C. and Mukherjee, S., (2018). Empirical Analysis of SIFT, Gabor and Fused Feature Classification Using SVM for Multispectral Satellite Image Retrieval. 2017 4th International Conference on Image Information Processing (ICIIP), 542–547.

[41] Li, R., Zhao, H., Zhang, X., Ge, X., Yuan, Z. and Zou, Q., (2021). Automatic Matching of Multispectral Images Based on Nonlinear Diffusion of Image Structures. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing , Vol. 14, 762–774. https://doi.org/10.1109/JSTARS.2020.3043379.

[42] Marmol, U., (2011). Use of Gabor Filters for Texture Classification of Airborne Images and LIDAR Data. Archiwum Fotogrametrii, Kartografii i Teledetekcji, Vol. 22, 325–336.

[43] Aguilera, C., Barrera, F., Lumbreras, F., Sappa, A. D. and Toledo, R., (2012). Multispectral Image Feature Points. Sensors (Switzerland), Vol. 12(9), 12661–12672. https://doi.org/10.3390/s120912661.

[44] Timotius, I. K. and Setyawan, I., (2014). Using Edge Orientation Histograms in Face-Based Gender Classification. 2014 International Conference on Information Technology Systems and Innovation, ICITSI 2014 - Proceedings, November, 93–98. https://doi.org/10.1109/ICITSI.2014.7048244.

[45] Li, Yong, Shi, X., Wei, L., Zou, J. and Chen, F., (2015). Assigning Main Orientation to an EOH Descriptor on Multispectral Images. https://doi.org/10.3390/s150715595.

[46] Saleem, S. and Sablatnig, R., (2014). A Robust SIFT Descriptor for Multispectral Images. IEEE Signal Processing Letters, Vol. 21(4), 400–403. https://doi.org/10.1109/LSP.2014.2304073.

[47] Brown, M. and Susstrunk, S., (2011). Multi-spectral SIFT for Scene Category Recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 177–184. https://doi.org/10.1109/CVPR.2011.5995637.

[48] Jhan, J. P. and Rau, J. Y., (2019). A Normalized Surf for Multispectral Image Matching and Band Co-Registration. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives , Vol. 42(2/W13), 393–399. https://doi.org/10.5194/isprs-archives-XLII-2-W13-393-2019.

[49] Jhan, J., (2021). A Generalized Tool for Accurate and Efficient Image Registration of UAV Multi-lens Multispectral Cameras by N-SURF Matching. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14, 6353–6362. https://doi.org/10.1109/JSTARS.2021.3079404.

[50] Nyimbili, P. H., Demirel, H., Seker, D. Z. and Erden, T., (2016). Structure from Motion (SfM) - Approaches and Applications.International Scientific Conference on Applied Sciences, September, 27–30.

[51] Teke, M. and Temizel, A., (2010b). Multi-spectral Satellite Image Registration Using Scale-Restricted Surf.Proceedings - International Conference on Pattern Recognition, August, 2310–2313. https://doi.org/10.1109/ICPR.2010.565.

[52] Sykora, P., Kamencay, P. and Hudec, R., (2014). Comparison of SIFT and SURF Methods for Use on Hand Gesture Recognition based on Depth Map. AASRI Procedia, Vol. 9(Csp), 19–24. https://doi.org/10.1016/j.aasri.2014.09.005.

[53] Bhende, P. G., (2016). Application of SIFT and SURF Detectors for Medical X-Ray Images. Journal of Medical Science and Clinical Research, Vol. 04(03), 9641–9650. https://doi.org/10.18535/jmscr/v4i3.09.

[54] Xi, W., Shi, Z. and Li, D., (2017). Comparisons of Feature Extraction Algorithm Based on Unmanned Aerial Vehicle Image. Open Physics, Vol. 15(1), 472–478. https://doi.org/10.1515/phys-2017-0053.

[55] Moghimi, A., Celik, T. and Mohammadzadeh, A., (2021). Comparison of Keypoint Detectors and Descriptors for Relative Radiometric Normalization of Bitemporal Remote Sensing Images, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14, 4063–4073. https://doi.org/10.1109/JSTARS.2021.3069919.

[56] Manni, F., Mamprin, M., Zinger, S., Shan, C. and Holthuizen, R., (2018). Multispectral Image Analysis for Patient Tissue Tracking During Complex Interventions. 25th IEEE International Conference on Image Processing (ICIP), 3149–3153. https://doi.org/10.1109/ICIP.2018.8451263.

[57] Diarra, M. and Gouton, P., (2016). A Comparative Study of Descriptors and Detectors in Multispectral Face Recognition, 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) , 209–214. https://doi.org/10.1109/SITIS.2016.41.

[58] Fernández, C. I., Haddadi, A., Leblon, B., Wang, J. and Wang, K., (2021). Comparison between Three Registration Methods in the Case of Non-Georeferenced Close-Range Multispectral Images. Remote Sens., Vol. 13, 8205–8208. https://doi.org/10.3390/rs13030396.