Flood Event Detection and Assessment using Sentinel-1 SAR-C Time Series and Machine Learning Classifiers Impacted on Agricultural Area, Northeastern, Thailand

Khamphilung, P.,1* Konyai, S.,2 Slack, D.,3 Chaibandit, K.4 and Prasertsri, N.5

1Digital Innovation Research Cluster for Integrated Disaster Management in the Watershed, Mahasarakham University, Kantharawichai, Maha Sarakham 44150, Thailand

2Department of Agricultural Engineering, Faculty of Engineering, Khon Kaen University, Muang sub-District, Muang district, Khon Kaen, Thailand

3Department of Civil and Architectural Engineering and Mechanics, 1209 E. Second St. P.O. Box 210072 Tucson, AZ 85721, USA

4Faculty of Engineering and Architecture, Rajamangala University of Technology Isan Nakhon Ratchasima, Thailand

5Department of Geoinformatics, Faculty of Informatics, Mahasarakham University Khamriang sub-district, Khantarawichai District, Maha Sarakham, Thailand

*Corresponding Author

Abstract

This study presents image classification techniques using Sentinel-1A microwave SAR-C imagery to detect agricultural areas vulnerable to the massive flood that occurred in Ubon Ratchathani province, Thailand, in 2019. Two images from the time series were used in the analysis: an S1A_IW_GRDH scene acquired on August 10, 2019, representing pre-flood conditions, and an S1A_IW_GRDH scene acquired on September 9, 2019, representing the flood event. Prior to classification, the data underwent pre-processing steps including calibration, speckle filtering, and terrain correction. The pre-processed data were then classified with three machine learning algorithms, Random Forest (RF), K-Dimensional Tree KNN (KDTree KNN), and Maximum Likelihood, to compare the classification accuracy of each classifier. Four land use/land cover (LULC) classes were derived from the dataset: (1) paddy rice, (2) water body, (3) residential area, and (4) vegetation. The flood-date image was used to separate flooded from non-water areas based on the backscattering coefficient of the Sigma0_VV polarization, using a band math threshold obtained from the histogram. The extracted flooded area formed a flood water mask that was overlaid on the classified LULC maps from each classifier to quantify the affected area. The results indicated that 98 km2 of paddy rice was damaged by the flood according to the RF classification, which achieved an overall accuracy of 94.60%. The KDTree KNN classifier identified an affected area of 85 km2 with an overall accuracy of 93.00%, while the Maximum Likelihood classifier detected a flooded area of 91 km2 with an overall accuracy of 93.36%.

Keywords: Flood Detection, Machine Learning Classifiers, Microwave Remote Sensing, Random Forest

1. Introduction

Rice is an important economic crop of Thailand, exported from its regions to the world market, and has long been a staple food and part of the traditional way of life of Thai people [1] and [2]. The physical properties of rice plantations contribute to the quality and yield of rice in each area. Spatial factors arising from natural conditions also affect rice yields, such as droughts and floods, especially the flash floods that occur frequently in Northeast Thailand. As a result, rice fields can be damaged, which reduces rice production in both quantity and quality and leads to a significant decline in the quality of life of farming communities [3]. After such disasters, farmers must re-cultivate a new crop cycle outside the annual crop calendar at higher expense.

In addition, government agencies can use information on such natural disasters to manage affected areas by providing benefits and assistance to farmers. Nowadays, the natural environment can easily be monitored in real time or near real time through Earth Observation System (EOS) data, since remote sensing technology offers a variety of sensors for earth-monitoring missions [4]. Remote sensing instruments are divided into passive and active sensors. Acquisition systems that rely on electromagnetic radiation (EMR) emitted by the sun are known as passive sensors; they scan and retrieve the energy reflected from earth surface features. Active sensors, on the other hand, carry their own EMR source on board, transmitting energy of a known intensity toward the surface and collecting the portion of backscattered energy that has interacted with the surface properties [5]. One of the main limitations of passive remote sensing is that it can acquire earth information only while the sun illuminates the surface. Thus, most of these earth observation systems are in sun-synchronous orbits, and some crucial information, such as thermal infrared, can only be detected in daytime. Moreover, image quality usually fluctuates with weather conditions, depending on the relationship between the wavelength and the diameter of natural and man-made particles in the air, such as water droplets, snow, and hail.

Microwave remote sensing provides active sensors that avoid these limitations of passive sensors, especially unstable weather conditions. The energy returned from the surface and sub-surface in microwave remote sensing is known as the backscattering coefficient (σ0). The backscattered energy reaching the sensor varies with several factors, such as the dielectric constant of each surface, the specific wavelength (λ), from higher frequency with shorter wavelength (K-band: 0.86 cm) to lower frequency with longer wavelength (P-band: 68.0 cm), and the polarization characteristics. There are two polarization configurations: dual-pol and quad-pol. Dual-pol instruments provide VV and VH polarizations; an example is Sentinel-1 (Synthetic Aperture Radar: SAR-C), available free of charge from ESA (European Space Agency), which collects data with an active-antenna C-band SAR. Quad-pol data, for example, are retrieved from ALOS PALSAR and RADARSAT-2 imagery, which provide HH, HV, VH, and VV polarizations. The intensity of the backscattering coefficient depends on physically relevant factors, namely nadir, azimuth flight direction, look direction, depression angle (γ), incidence angle (θ), and slant range. Some of these factors are physically uncontrollable and interact differently with each land cover type. Additionally, SAR imagery offers advantages for some phenomena, such as rapid floods in the monsoon season, compared with archived data collected from similar sensors or from optical data. The intensity of backscattered energy from flooded areas is markedly lower than from other land cover types. Therefore, flooded and non-flooded areas can easily be discriminated using these data, especially with a SAR C-band instrument, because flooded surfaces echo only a small portion of the transmitted energy. Land use/land cover classification in remote sensing has traditionally involved unsupervised and supervised algorithms, such as the maximum likelihood and minimum distance classifiers. For SAR imagery, this approach has also been extended with analytical algorithms such as the Random Forest, KDTree KNN, and Maximum Likelihood classifiers.

Moreover, Sentinel-1 also acquires parametric SAR data as Single Look Complex (SLC) products, complex imagery comprising amplitude and phase, which allows classification from H-α and entropy depending on the backscattering intensity of each land cover type. Therefore, this research aims to utilize the time series of Sentinel-1 imagery for paddy field monitoring in response to unexpected natural disasters, especially the annual floods, as an assistance tool for public and private sectors in planning and recovering affected areas, applying this emerging technology to improve quality of life and agricultural sustainability in the region.

2. Materials and Methods

2.1 Study Area

The study area is located between 15.08°N and 15.60°N and between 104.22°E and 105.26°E in Mueang district, Ubon Ratchathani province, Northeastern Thailand. The subset image covers an area of 406.4 km2 containing various LULC types. Ubon Ratchathani province was affected by the flood of 2019: a total of 25 districts, 184 sub-districts, and 1,724 villages were damaged, along with 18,790 houses; 21,691 people were evacuated, and farmland covering 1,023.28 km2 was affected. The study area lies along the Mun River (Figure 1).

Figure 1: The location of study area

2.2 Data Preparation and Collection

This study uses Sentinel-1 dual-polarization (VV and VH) Interferometric Wide Swath (IW) mode data with the GRD product type to evaluate and classify the physical changes in the affected land use/land cover of Ubon Ratchathani and Yasothon provinces, Northeast Thailand, during the massive flood of 2019. The main objective is to classify land use/land cover before and after the flood by comparing the image classification algorithms provided by the SNAP platform, namely the Random Forest (RF), K-Dimensional Tree KNN (KDTree KNN), and Maximum Likelihood classifiers [6]. Prior to classification, the subset of Sentinel-1A data underwent pre-processing, namely terrain correction, calibration, and speckle filtering, using the Sentinel toolbox [7]. The data were arranged as σ0VV and σ0VH bands, and the virtual VV and VH bands were converted to backscattering coefficients (σ0) in dB using the Linear-to-dB operator, following the equation [8].

σ0 (dB) = 10 × log10 (σ0)          Equation 1

Where:

σ0 (dB) = backscattering image in dB

σ0 = Sigma naught image
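The conversion in Equation 1 can be sketched in NumPy as follows; this is a minimal illustration, and the small clipping floor is an assumption added to keep the logarithm finite over very low backscatter pixels.

```python
import numpy as np

def sigma0_to_db(sigma0: np.ndarray) -> np.ndarray:
    """Linear-to-dB conversion of Equation 1: sigma0(dB) = 10 * log10(sigma0)."""
    # Clip near-zero values so log10 stays finite over smooth water pixels.
    return 10.0 * np.log10(np.clip(sigma0, 1e-10, None))

# A linear sigma0 of 0.1 maps to -10 dB; 1.0 maps to 0 dB.
db = sigma0_to_db(np.array([0.1, 1.0]))
```

In SNAP this is performed by the built-in Linear-to-dB operator; the function above only mirrors the arithmetic.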

Then, these data were stacked into a time series for RGB combinations. The training areas were derived from backscattering properties, visual interpretation, and high-resolution supplementary data, such as Google Maps and the reference land use map obtained from the Land Development Department (LDD), Thailand. The flood image was acquired on September 9, 2019, and the pre-flood image on August 10, 2019. The flood image was used to extract water bodies from the low-backscattering range of the image histogram, while the pre-flood image was used for land cover classification to quantify the damage in the affected area [9] [10] and [11]. Water bodies were extracted from Sigma0VH using band math calculation. Land use/land cover classification followed the three approaches described above, using the same vector training data set, evaluated and compared with Google Maps and other existing reference sources collected from the LDD, Thailand. These vector data were applied to the Random Forest, KDTree KNN, and Maximum Likelihood classifiers, respectively.

Table 1: The dates of Sentinel-1A time series

Data          Date of Acquisition   Polarization                 Acquisition Mode
Sentinel-1A   10/8/2019             VV, VH                       IW GRD
Sentinel-1A   9/9/2019              VV, VH                       IW GRD
Stacking      10/8/2019             VV, VV-VH, VV/VH (R,G,B)     IW GRD

Table 2: Land cover and training data description

Class Name         Training AOIs   Validation Masks   Training Pixels   Class Description
Waterbodies        21              30                 2,500             Lakes, rivers, ponds, and reservoirs
Paddy rice         31              80                 2,500             Paddy fields, cropland
Vegetation         12              50                 756               Trees, shrubs
Residential area   29              70                 2,500             Built-up area, rural and urban
Total              93              160                8,256

2.3 Image Classification

2.3.1 Random forest classifier

Each stacked image layer was classified individually from the same selected training data set, compared with the existing LULC vector data and with visual interpretation of supplementary information [12]. The training data set details are shown in Table 2. The paddy field sample set was derived from visual interpretation, corresponding to prior knowledge of spatial patterns, including existing vector data from the Land Development Department (LDD) of Thailand and the ESA 10 m global LULC product provided through Google Earth Engine [13] [14] [15] and [16]. Paddy rice is the main target class because it is the primary subsistence crop traditionally cultivated in this region. New vector data containers representing each LULC class were created on the subset imagery [17]. For the Random Forest classification, 5,000 training samples were created by stratified random sampling, with 50 trees. The advantages of this classifier are quick prediction and the handling of unbalanced data; each decision tree has high variance but minimal bias [18] [19] [20] and [21]. The random forest classifier uses the Gini index as an attribute selection measure, which measures the impurity of an attribute with respect to the classes [22] and [23]. For a given training set T, selecting one case (pixel) at random and saying that it belongs to some class Ci, the Gini index can be written as:

Gini(T) = 1 − Σi (f(Ci, T)/|T|)²          Equation 2

where f(Ci, T)/|T| is the probability that the selected case belongs to class Ci.
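As a minimal sketch of the Gini impurity and of a Random Forest run with the settings reported above (50 trees, Gini criterion), assuming scikit-learn is available as a stand-in for SNAP's built-in classifier; the feature values and labels below are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gini_index(labels: np.ndarray) -> float:
    """Gini impurity of a training set T (Equation 2):
    Gini(T) = 1 - sum_i (f(Ci, T) / |T|)^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - np.sum(p ** 2))

# Hypothetical training pixels: columns are sigma0_VV and sigma0_VH in dB.
X = np.array([[-6.0, -17.0], [-7.0, -16.0], [-21.0, -24.0], [-8.0, -18.0]])
y = np.array([1, 1, 0, 1])  # 0 = water, 1 = paddy rice

# 50 trees with the Gini criterion, matching the settings reported in the text.
rf = RandomForestClassifier(n_estimators=50, criterion="gini", random_state=0)
rf.fit(X, y)

print(gini_index(y))  # 1 - (0.75^2 + 0.25^2) = 0.375
```

A low backscatter pixel such as (-21 dB, -23 dB) would then most likely be predicted as water by the fitted forest.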

2.3.2 KDTree KNN classifier

The training data set derived from the same method was input to the algorithm. The KDTree KNN classifier is a machine learning algorithm based on the k-nearest neighbors (KNN) algorithm. It uses a data structure called a k-d tree to speed up the computation of distances between data points. The algorithm is very simple to understand and implement; its key benefit is that it splits a large k-dimensional space into small regions [24] and [25]. The algorithm begins by searching the k-d tree for the k nearest neighbors of the data point [26]. This is done by starting at the root of the tree and recursively traversing it, choosing the subtree that is on the same side of the hyperplane as the query point [27]. Once the k nearest neighbors are found, the algorithm assigns the majority class label of the neighbors to the query point.
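To illustrate how a k-d tree accelerates the neighbor search, the sketch below uses scikit-learn's KNeighborsClassifier with algorithm="kd_tree" as an analogue of the SNAP tool; the training points and labels are hypothetical backscatter values, not the study's actual samples.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical backscatter features (sigma0_VV, sigma0_VH in dB) for four classes.
X_train = np.array([[-21.0, -24.0], [-6.0, -12.0], [-3.0, -10.0], [-7.0, -12.8]])
y_train = np.array(["water", "paddy", "residential", "vegetation"])

# algorithm="kd_tree" builds a k-d tree over the feature space, so neighbor
# queries avoid brute-force distance computation against every training pixel.
knn = KNeighborsClassifier(n_neighbors=1, algorithm="kd_tree")
knn.fit(X_train, y_train)

print(knn.predict([[-20.0, -23.0]]))  # nearest neighbor is the water sample
```

With real imagery, X_train would hold the training pixels of Table 2 and the query points would be every pixel of the stacked scene.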

2.3.3 Maximum likelihood classifier

In the SNAP (Sentinel Application Platform) software package, the Maximum Likelihood classifier is available as a built-in tool for image classification [28]. It is used to classify pixels in a remotely sensed image into different land cover classes based on their spectral characteristics. The ML classifier in SNAP works by first computing the statistical properties of the training data for each class. This method can be applied to a large variety of estimation situations, using the statistical information to evaluate the training set [29] and [30]. It involves calculating the mean and covariance matrix of the training samples for each class in the feature space, which is defined by the spectral bands of the remotely sensed image, such as the red, green, and blue bands. The ML classifier in SNAP allows the user to specify the number of classes to be classified, as well as the training data for each class.
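The per-class mean and covariance step can be sketched with plain NumPy as follows; the class statistics below are hypothetical values for illustration, not those computed by SNAP.

```python
import numpy as np

def gaussian_loglik(x, mean, cov):
    """Log-likelihood of pixel x under a multivariate Gaussian N(mean, cov)."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + x.size * np.log(2 * np.pi))

def ml_classify(x, class_stats):
    """Assign the class whose training-derived (mean, covariance) maximizes
    the likelihood of the pixel -- the core of maximum likelihood classification."""
    return max(class_stats, key=lambda c: gaussian_loglik(x, *class_stats[c]))

# Hypothetical per-class statistics estimated from training samples (dB features).
stats = {
    "water": (np.array([-21.0, -23.0]), np.eye(2) * 0.5),
    "paddy rice": (np.array([-6.5, -17.0]), np.eye(2) * 1.0),
}
print(ml_classify(np.array([-20.5, -22.0]), stats))  # water
```

In practice the means and covariances are estimated from the training AOIs of Table 2 rather than written by hand.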

2.4 Flood Area Delineation

The flood extent was derived from the Sentinel-1 SAR-C image acquired on 9/9/2019. The image was subset to the area of interest (AOI) and then pre-processed with calibration, speckle filtering, and terrain correction. A binary threshold separating water from non-water areas was then determined from the histogram of the Sigma0VV band on a logarithmic (log10) scale [31] [32] and [33]. The histogram indicated that water corresponds to Sigma0VV backscattering coefficients below 6.38E-2. Therefore, the band math expression was formulated as:

if Sigma0VV < 6.38×10−2 then 1 (water), else 0 (non-water)

Equation 3

Where this condition is true, the flooded area is rendered white; otherwise pixels are shown in black, representing non-water areas. In order to avoid false positive detections, the existing surface water bodies classified from the pre-flood image (10/8/2019) were removed from the extracted flooded area. As a result, the flood hazard area was clearly discriminated from permanent surface water. The flood data were exported to QGIS to evaluate the affected LULC area.
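A minimal NumPy sketch of this thresholding and permanent-water subtraction, using the 6.38×10−2 cut-off obtained from the histogram; the toy arrays below stand in for the calibrated Sigma0VV raster and the pre-flood water map.

```python
import numpy as np

THRESHOLD = 6.38e-2  # Sigma0_VV linear backscatter cut-off from the histogram

def flood_mask(sigma0_vv_flood: np.ndarray, permanent_water: np.ndarray) -> np.ndarray:
    """Binary flood map: 1 where flood-date backscatter is below the threshold
    (open water) and the pixel is not already permanent surface water."""
    water = sigma0_vv_flood < THRESHOLD
    return (water & ~permanent_water.astype(bool)).astype(np.uint8)

# Toy 2x2 scene: one newly flooded pixel, one river pixel present on both dates.
vv = np.array([[0.01, 0.20], [0.02, 0.30]])
river = np.array([[0, 0], [1, 0]])
print(flood_mask(vv, river))  # only the top-left pixel counts as flood
```

Subtracting the pre-flood water map in this way is what separates the flood hazard from permanent surface water resources.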

2.5 Affected Area

The water mask derived from the process described above was erased from the water bodies obtained by the classification methods. This data was then overlaid with the classified LULC data from each classifier to determine the affected area using GIS software. The results are presented in qualitative and quantitative terms for each classification.
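The overlay step can be sketched as a per-class masked pixel count in NumPy, assuming a raster workflow; the 10 m pixel size and the toy rasters below are assumptions for illustration, not the study's actual grids.

```python
import numpy as np

def affected_area_km2(lulc, flood_mask, classes, pixel_km2=1e-4):
    """Area of each LULC class inside the flood mask, in km2.
    pixel_km2 = 1e-4 assumes 10 m x 10 m pixels; adjust to the raster resolution."""
    return {name: float(np.sum((lulc == code) & (flood_mask == 1))) * pixel_km2
            for name, code in classes.items()}

# Toy 2x2 rasters: class codes 1 = paddy rice, 2 = residential, 3 = vegetation.
lulc = np.array([[1, 1], [2, 3]])
flood = np.array([[1, 0], [1, 0]])
area = affected_area_km2(lulc, flood,
                         {"paddy rice": 1, "residential": 2, "vegetation": 3})
print(area)  # one flooded paddy pixel and one flooded residential pixel
```

The same counting, applied to the full classified scene, yields per-class flooded areas such as those reported in Table 3.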

2.6 Accuracy Assessment

The classified maps were evaluated individually against vector data collected from the LDD, Thailand, information from Google Maps, and visual interpretation of optical imagery. The LDD reference vector data were converted from polygons to points for the accuracy assessment. There were 160 masks and 8,256 pixels representing the 4 LULC classes prepared for this process. The objective was to compare the classified maps with the reference data to evaluate correctness, following the traditional accuracy assessment method of remote sensing image classification [34] and [35].

The results are reported as user's accuracy, producer's accuracy, overall accuracy, and kappa statistics. The flooded area, however, was compared with three reference flood mapping data sources: the Thailand Flood Monitoring System provided by GISTDA, Thailand; the mean annual world flood map derived from MODIS time series, maintained by the Dartmouth Flood Observatory; and NRT global flood mapping by NASA. The study workflow is illustrated in Figure 2.

3. Results and Discussion

3.1 Image Classification

The backscattering coefficients were consistent with the training areas used in the classification procedures, as shown in Figure 3, which presents the σ0VV and σ0VH ranges for the water body mask area. The histogram shows that σ0VH separates water bodies better than σ0VV, with values ranging from −22 dB to −19 dB. Training areas corresponding to the characteristic response of each LULC class were selected, and band combinations were used to examine the radiometric information before classification. The classification results of the three methods applied to the Sentinel-1 images are shown in Figure 4. For the residential mask area, σ0VV showed a clearer response than σ0VH, with values between −6 dB and −1 dB, whereas the σ0VH histogram was unstable and made this land cover type difficult to isolate, with values ranging from −16 dB to −9 dB. For the paddy rice mask area, σ0VV distinguished this land cover type better than σ0VH, with a clearly normal, tightly clustered distribution between −8 dB and −5 dB, whereas σ0VH ranged from −19.5 dB to −15.5 dB, a dispersion too wide and prone to classification error. For the vegetation target class, σ0VH showed higher discrimination potential than σ0VV, with σ0VH values between −13 dB and −12.5 dB; the σ0VV curve was normally distributed with values from −8 dB to −6 dB. This property separated the class from the others, albeit with lower accuracy than the other classes. The flood-affected areas are shown in Figure 5 and Table 3. The results from each classifier were overlaid with the extracted water mask area.

Figure 2: Image classification procedures

Figure 3: Backscattering coefficients compared between σ0VV and σ0VH for each LULC class, (a) σ0VV of water mask area, (b) σ0VH of water mask area, (c) σ0VV of residential area, (d) σ0VH of residential area, (e) σ0VV of paddy rice, (f) σ0VH of paddy rice, (g) σ0VV of vegetation area, and (h) σ0VH of vegetation area

Figure 4: Sentinel-1A classification results (a) Random Forest, (b) KDTree KNN, and (c) Maximum Likelihood

Figure 5: Flood area overlaid with each classifier result (a) RF, (b) KDTree KNN, and (c) Maximum likelihood

The flood area overlaid with the RF classification showed impacts of 98 km2 on paddy rice, 30 km2 on residential area, and 78 km2 on vegetation. With the KDTree KNN classification, flooding covered 85 km2 of paddy rice, 25 km2 of residential area, and 70 km2 of vegetation. Lastly, the impacted areas of paddy rice, residential area, and vegetation determined by Maximum Likelihood were 91 km2, 22 km2, and 98 km2, respectively.

3.2 Flooded and Affected Area Delineations

The affected area was derived by an overlay technique using the flood detection image [36]. Prior to overlaying, the flooded area was discriminated from the existing water area through the flood classification process described above; the results are shown in Figure 5. The flood event map was used as a water mask extracted from the flood event. These data were then overlaid in QGIS with the land use data and the existing natural water sources to examine the flood-affected areas while avoiding double counting between the LULC classes, the flood mask, and the water area [37]. The study found that most of the flooding was caused by overflows of the Mun River, which occur annually in this area (Table 3).

3.3 Accuracy Assessment

Accuracy assessment is not provided as a built-in function in SNAP. However, published studies have shown how to evaluate classifications by comparing class probabilities derived from the Mask Manager tool. The overall accuracy can be calculated from the confusion-matrix components, namely True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) [38] [39] and [40].

Overall Accuracy = (TP + TN) / (TP + TN + FP + FN)          Equation 4

Precision = TP / (TP + FP)          Equation 5

Correlation (MCC) = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))          Equation 6

where:

TP is the True Positives, which means that the actual class and the predicted class are both positive.

TN is the True Negatives, which means that the actual and predicted class are both negative.

FP is the False Positives, which means that the actual class is negative whereas the predicted class is positive.

FN is the False Negatives, which means that the actual class is positive, but the predicted class is negative.

Error Rate = (FP + FN) / (TP + TN + FP + FN)          Equation 7

Kappa = (OA − ECA) / (1 − ECA)          Equation 8

Where:

ECA is the estimated chance agreement
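A minimal sketch of these confusion-matrix metrics (overall accuracy, precision, correlation as Matthews correlation, and error rate), plus kappa computed from an estimated chance agreement (ECA); the counts used here are hypothetical toy values, not the study's confusion matrix.

```python
import math

def metrics(tp, tn, fp, fn):
    """Overall accuracy, precision, correlation (MCC), and error rate
    from confusion-matrix counts."""
    total = tp + tn + fp + fn
    acc = (tp + tn) / total
    prec = tp / (tp + fp)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    err = (fp + fn) / total
    return acc, prec, mcc, err

def kappa(oa, eca):
    """Cohen's kappa: agreement beyond the estimated chance agreement (ECA)."""
    return (oa - eca) / (1.0 - eca)

acc, prec, mcc, err = metrics(90, 850, 30, 30)
print(acc, prec, err)  # 0.94 0.75 0.06
```

In a multi-class setting such as this study, the counts are taken per class (one class versus the rest) and the metrics reported class by class, as in Figure 6.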

The evaluation of the LULC classes is shown in Figure 6, which presents class accuracy, precision, correlation, and error rate derived from Equations 4 to 7. It can be observed that the Random Forest and KDTree KNN classifiers have similar accuracy when classifying the vegetation class, both at 0.97, with error rates of 0.0239 and 0.025, respectively, while the Maximum Likelihood classifier achieves an accuracy of 0.96 and an error rate of 0.038. For the paddy rice class, the accuracy and error rates of the three classifiers are not significantly different. For the residential area, Random Forest and KDTree KNN provided the best classification results, with an accuracy of 0.94 and an error rate of 0.05. Interestingly, all classifiers perform similarly, with high accuracy, for the water body class, owing to the low backscattering coefficient of this land cover type compared with the other classes. The overall accuracy and the affected and non-affected areas are shown in Table 3. The overall accuracies obtained from Random Forest, KDTree KNN, and Maximum Likelihood were 94.60%, 93.00%, and 93.36%, respectively. Additionally, Random Forest reported the largest flood-affected residential area among the classifiers, at 30 km2. The Random Forest classifier allows the number of trees to be increased during the classification process within the SNAP application, improving accuracy, although this requires more computing resources and processing time.

Table 3: Accuracy assessment for each classifier and land use classes showing flooded and non-flooded area

Classifier           OA (Overall accuracy)   Kappa   Class              Total (km2)   Flooded (km2)   Non-flooded (km2)
RF                   94.60                   0.41    Water body         14            -               -
                                                     Paddy rice         281           98              183
                                                     Residential area   68            30              38
                                                     Vegetation         192           78              114
KDTree KNN           93.00                   0.39    Water body         10            -               -
                                                     Paddy rice         235           85              150
                                                     Residential area   52            25              27
                                                     Vegetation         258           70              188
Maximum Likelihood   93.36                   0.46    Water body         12            -               -
                                                     Paddy rice         255           91              164
                                                     Residential area   53            22              31
                                                     Vegetation         235           98              137

Figure 6: The area of classification results from each classifier

As a result, Random Forest achieves higher accuracy than the other classifiers, while KDTree KNN and Maximum Likelihood show flood-affected residential areas of 25 km2 and 22 km2, respectively. It is also worth noting that, for vegetation, Maximum Likelihood shows a flood-affected area of 98 km2, while Random Forest covers 78 km2 and KDTree KNN covers 70 km2.

4. Conclusion

This study derived the LULC area affected by the hazardous flood in Ubon Ratchathani province, Thailand, using microwave remote sensing imagery, focusing on the vulnerability of paddy fields. The Sentinel-1A SAR-C time series was used for supervised classification, comparing the Random Forest (RF), KDTree KNN, and Maximum Likelihood classifiers on the pre-flood and flood scenes, together with water extraction based on the backscattering coefficients corresponding to the flooded area after this massive hazard. The classification results indicated that the RF classifier achieved the highest overall accuracy, at 94.60%, higher than the other classifiers. The KDTree KNN and Maximum Likelihood classifiers showed similar results without large differences, with overall accuracies of 93.00% and 93.36%, respectively. Additionally, the water extraction showed strong discrimination from the other LULC classes based on backscattering coefficients (σ0) ranging between −22 and −19 dB (σ0VH) derived from band math calculation. The flood map was overlaid with the results of the three classification methods. The analysis revealed that the RF classifier detected flooded areas of 98 km2 for paddy rice, 30 km2 for residential area, and 78 km2 for vegetation. The KDTree KNN classifier identified affected areas of 85 km2 for paddy rice and 70 km2 for vegetation. Finally, the Maximum Likelihood classifier indicated that the most affected class was vegetation, covering 98 km2, followed by paddy rice at 91 km2; the smallest impacted area, 22 km2, was residential. Overall, the Random Forest classifier produced the most accurate LULC classification. Although the RF algorithm provided the highest accuracy, the other classifiers performed similarly with minor differences, so analysts can choose the appropriate method based on the conditions of the area and the continuing development of the classification tools. This choice can also be informed by the efficiency and popularity of the RF machine learning algorithm, as evident in the majority of articles published in academic journals.
Furthermore, the flood event data were overlaid with the classified maps to assess the damage caused by flooding. These methods effectively monitored flooding by deriving classification maps and comparing them with various reference data sources. In the future, the researchers plan to incorporate real-time data into the analytical process to identify and monitor repeatedly flooded areas. This will be done using the Google Earth Engine cloud-based platform, which allows users to analyze and process geospatial data together with other relevant data, to enhance and improve disaster management for natural disasters that may occur elsewhere.

Acknowledgements

This study was supported under the framework of the international cooperation program managed by Mahasarakham University, Thailand. Phusit's work was supported by Mahasarakham University (No. 6517004/2565). The authors would like to thank the European Space Agency (ESA) for distributing the Sentinel-1A satellite imagery utilized in this paper.

References

[1] Keithmaleesatti, S., Angkaew, R. and Robson, M. G., (2022). Impact of Water Fluctuation from a Dam on the Mekong River on the Hatching Success of Two Sandbar-Nesting Birds: A Case Study from Bueng Kan Province, Thailand. Water. Vol. 14(11). https://doi.org/10.3390/w14111755.

[2] Samanta, D. and Sanyal, G., (2012). Classification of SAR Images Based on Entropy. International Journal of Information Technology and Computer Science, Vol. 4(12), 82-86. https://doi.org/10.5815/ijitcs.2012.12.09.

[3] Chadsuthi, S., Chalvet-Monfray, K., Geawduanglek, S., Wongnak, P. and Cappelle, J., (2022). Spatial–Temporal Patterns and Risk Factors for Human Leptospirosis in Thailand, 2012–2018. Sci Rep., Vol. 12(1). https://doi.org/10.1038/s41598-022-09079-y.

[4] Pedzisai, E., Mutanga, O., Odindi, J. and Bangira, T., (2023). A Novel Change Detection and Threshold-Based Ensemble of Scenarios Pyramid for Flood Extent Mapping using Sentinel-1 Data. Heliyon. Vol. 9(3). https://doi.org/10.1016/j.heliyon.2023.e13332.

[5] Hamidi, E., Peter, B. G., Muñoz, D. F., Moftakhari, H. and Moradkhani, H., (2023). Fast Flood Extent Monitoring with SAR Change Detection Using Google Earth Engine. IEEE Transactions on Geoscience and Remote Sensing. Vol. 61, 1–19. https://doi.org/10.1109/TGRS.2023.3240097.

[6] Makinde, E. O. and Oyelade, E. O., (2020). Land Cover Mapping using Sentinel-1 SAR and Landsat 8 Imageries of Lagos State for 2017. Environ Sci Pollut Res., Vol. 27(1), 66–74. https://doi.org/10.1007/s11356-019-05589-x.

[7] Nhangumbe, M., Nascetti, A. and Ban, Y., (2023). Multi-Temporal Sentinel-1 SAR and Sentinel-2 MSI Data for Flood Mapping and Damage Assessment in Mozambique. ISPRS International Journal of Geo-Information. Vol. 12(2). https://doi.org/10.3390/ijgi12020053.

[8] Uddin, K., Matin, M. A. and Meyer, F. J., (2019). Operational Flood Mapping Using Multi-Temporal Sentinel-1 SAR Images: A Case Study from Bangladesh. Remote Sensing. Vol. 11(13). https://doi.org/10.3390/rs11131581.

[9] Ahmed, M. R., Rahaman, K. R., Kok, A. and Hassan, Q. K., (2017). Remote Sensing-Based Quantification of the Impact of Flash Flooding on the Rice Production: A Case Study Over Northeastern Bangladesh. Sensors. Vol. 17(10). https://doi.org/10.3390/s17102347.

[10] Dao, P. D. and Liou, Y. A., (2015). Object-Based Flood Mapping and Affected Rice Field Estimation with Landsat 8 OLI and MODIS Data. Remote Sensing. Vol. 7(5). https://doi.org/10.3390/rs70505077.

[11] Ponnurangam, G. G., Setiyono, T. D., Maunahan, A., Satapathy, S. S., Quicho, E. and Gatti, L., (2019). Quantitative Assessment of Rice Crop Damage Post Titli Cyclone in Srikakulam, Andhra Pradesh Using Geo-Spatial Techniques. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. 42(3/W6), 483-489. https://doi.org/10.5194/isprs-archives-XLII-3-W6-483-2019.

[12] Banque, X., Lopez-Sanchez, J. M., Monells, D., Ballester, D., Duro, J. and Koudogbo, F., (2015). Polarimetry-Based Land Cover Classification with Sentinel-1 Data. Proceedings of the Conference 26-30 January 2015 in Frascati, Italy, Vol. 729. https://ui.adsabs.harvard.edu/abs/2015ESASP.729E..13B/abstract.

[13] Devaraj, S., Sudalaimuthu, K. K., Budamala, V., Pattabiraman, B. and Ahamed, K., (2023). Mapping and Assessing the Spatial Extent of Floods Using Sentinel 1 SAR Data: An Approach Based on Flood Index Estimation. Copernicus Meetings; Report No.: EGU23-15702. https://meetingorganizer.copernicus.org/EGU23/EGU23-15702.html.

[14] Downs, B., Kettner, A. J., Chapman, B. D., Brakenridge, G. R., O’Brien, A. J. and Zuffada, C., (2023). Assessing the Relative Performance of GNSS-R Flood Extent Observations: Case Study in South Sudan. IEEE Transactions on Geoscience and Remote Sensing. Vol. 61, 1–13. https://doi.org/10.1109/TGRS.2023.3237461.

[15] Mansaray, L., Huang, W., Zhang, D., Huang, J. and Li, J., (2017). Mapping Rice Fields in Urban Shanghai, Southeast China, Using Sentinel-1A and Landsat 8 Datasets. Remote Sensing. Vol. 9(3). https://doi.org/10.3390/rs9030257.

[16] Thomas, M., Tellman, E., Osgood, D. E., DeVries, B., Islam, A. S. and Steckler, M. S., (2023). A Framework to Assess Remote Sensing Algorithms for Satellite-Based Flood Index Insurance. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. Vol. 16, 2589-2604. https://doi.org/10.1109/JSTARS.2023.3244098.

[17] Abdikan, S., Balik Sanli, F., Üstüner, M. and Calò, F., (2016). Land Cover Mapping Using Sentinel-1 SAR Data. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 757–761. https://doi.org/10.5194/isprs-archives-XLI-B7-757-2016.

[18] Mishra, D., Pathak, G., Singh, B. P., Mohit, Sihag, P. and Rajeev, (2022). Crop Classification by Using dual-pol SAR Vegetation Indices Derived from Sentinel-1 SAR-C Data. Environ Monit Assess. Vol. 195(1). https://doi.org/10.1007/s10661-022-10591-x.

[19] Lam, C. N., Niculescu, S. and Bengoufa, S., (2023). Monitoring and Mapping Floods and Floodable Areas in the Mekong Delta (Vietnam) Using Time-Series Sentinel-1 Images, Convolutional Neural Network, Multi-Layer Perceptron, and Random Forest. Remote Sensing. Vol. 15(8). https://doi.org/10.3390/rs15082001.

[20] Billah, M., Islam, A. K. M. S., Mamoon, W. B. and Rahman, M. R., (2023). Random Forest Classifications for Landuse Mapping to Assess Rapid Flood Damage Using Sentinel-1 and Sentinel-2 Data. Remote Sensing Applications: Society and Environment. Vol. 30. https://doi.org/10.1016/j.rsase.2023.100947.

[21] Siddique, M., Ahmed, T. and Husain, M. S., (2023). An Integrated Image Classification Approach to Detect the Flood Prone Areas using Sentinel-1 Images. 10th International Conference on Computing for Sustainable Global Development (INDIACom), 655–660.

[22] Hosseini, M. and Lim, S., (2023). Burned Area Detection using Sentinel-1 SAR Data: A Case Study of Kangaroo Island, South Australia. Applied Geography. Vol. 151. https://doi.org/10.1016/j.apgeog.2022.102854.

[23] Valdivieso-Ros, C., Alonso-Sarria, F. and Gomariz-Castillo, F., (2023). Effect of the Synergetic Use of Sentinel-1, Sentinel-2, LiDAR and Derived Data in Land Cover Classification of a Semiarid Mediterranean Area Using Machine Learning Algorithms. Remote Sensing. Vol. 15(2). https://doi.org/10.3390/rs15020312.

[24] Dahhani, S., Raji, M., Hakdaoui, M. and Lhissou, R., (2023). Land Cover Mapping Using Sentinel-1 Time-Series Data and Machine-Learning Classifiers in Agricultural Sub-Saharan Landscape. Remote Sensing. Vol. 15(1). https://doi.org/10.3390/rs15010065.

[25] Silveira, E. M. O., Radeloff, V. C., Martinuzzi, S., Martinez Pastur, G. J., Bono, J. and Politi, N., (2023). Nationwide Native Forest Structure Maps for Argentina Based on Forest Inventory Data, SAR Sentinel-1 and Vegetation Metrics from Sentinel-2 Imagery. Remote Sensing of Environment. Vol. 285. https://doi.org/10.1016/j.rse.2022.113391.

[26] Altarez, R. D. D., Apan, A. and Maraseni, T., (2023). Deep Learning U-Net Classification of Sentinel-1 and 2 Fusions Effectively Demarcates Tropical Montane Forest’s Deforestation. Remote Sensing Applications: Society and Environment. Vol. 29. https://doi.org/10.1016/j.rse.2022.113391.

[27] Chen, Q., Song, F., Liu, X., Zhang, S., Lei, T. and Jiang, P., (2023). Remote Sensing Image Registration of Disaster-Affected Areas Based on Deep Learning Feature Matching. Second International Conference on Digital Society and Intelligent Systems (DSInS 2022), 596–604. https://doi.org/10.1117/12.2673374.

[28] Tanim, A. H., McRae, C. B., Tavakol-Davani, H. and Goharian, E., (2022). Flood Detection in Urban Areas Using Satellite Imagery and Machine Learning. Water. Vol. 14(7). https://doi.org/10.3390/w14071140.

[29] Kruasilp, J., Pattanakiat, S., Phutthai, T., Vardhanabindu, P. and Nakmuenwai, P., (2023). Evaluation of Land Use Land Cover Changes in Nan Province, Thailand, Using Multi-Sensor Satellite Data and Google Earth Engine. Environment and Natural Resources Journal. Vol. 21(2), 186–97. https://ph02.tci-thaijo.org/index.php/ennrj/article/view/248525.

[30] Wu, X., Zhang, Z., Xiong, S., Zhang, W., Tang, J. and Li, Z., (2023). A Near-Real-Time Flood Detection Method Based on Deep Learning and SAR Images. Remote Sensing. Vol. 15(8). https://doi.org/10.3390/rs15082046.

[31] Adedeji, O., Olusola, A., Babamaaji, R. and Adelabu, S., (2021). An Assessment of Flood event along Lower Niger using Sentinel-1 Imagery. Environ Monit Assess. Vol. 193(12). https://doi.org/10.1007/s10661-021-09647-1.

[32] Sharif, M., Heidari, S. and Hosseini, S. M., (2023). Mapping of Urban Flood Inundation using 3D Digital Surface Model and Sentinel-1 Images. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 715–720. https://doi.org/10.5194/isprs-annals-X-4-W1-2022-715-2023.

[33] Xie, C., Zhuang, L., Guo, J. and Lei, Z., (2023). Flood Monitoring from Sentinel-1 SAR Images Based on Convolutional Neural Networks: A Case Study in Xinxiang City. In: Yan L, Duan H, Deng Y, editors. Advances in Guidance, Navigation and Control. Singapore: Springer Nature, 604–610. https://doi.org/10.1007/978-981-19-6613-2_60.

[34] Demissie, B., Vanhuysse, S., Grippa, T., Flasse, C. and Wolff, E., (2023). Using Sentinel-1 and Google Earth Engine Cloud Computing for Detecting Historical Flood Hazards in Tropical Urban Regions: A Case of Dar es Salaam. Geomatics, Natural Hazards and Risk. Vol. 14(1). https://doi.org/10.1080/19475705.2023.2202296.

[35] Andrew, O., Apan, A., Paudyal, D. R. and Perera, K., (2023). Convolutional Neural Network-Based Deep Learning Approach for Automatic Flood Mapping Using NovaSAR-1 and Sentinel-1 Data. ISPRS International Journal of Geo-Information. Vol. 12(5). https://doi.org/10.3390/ijgi12050194.

[36] Kimijima, S. and Nagai, M., (2023). High Spatiotemporal Flood Monitoring Associated with Rapid Lake Shrinkage Using Planet Smallsat and Sentinel-1 Data. Remote Sensing. Vol. 15(4). https://doi.org/10.3390/rs15041099.

[37] Tupas, M. E., Roth, F., Bauer-Marschallinger, B. and Wagner, W., (2023). An Intercomparison of Sentinel-1 Based Change Detection Algorithms for Flood Mapping. Remote Sensing. Vol. 15(5). https://doi.org/10.3390/rs15051200.

[38] Onojeghuo, A. O., Miao, Y. and Blackburn, G. A., (2023). Deep ResU-Net Convolutional Neural Networks Segmentation for Smallholder Paddy Rice Mapping Using Sentinel 1 SAR and Sentinel 2 Optical Imagery. Remote Sensing. Vol. 15(6). https://doi.org/10.3390/rs15061517.

[39] Dadhich, G., Miyazaki, H. and Babel, M., (2019). Applications of Sentinel-1 Synthetic Aperture Radar Imagery for Floods Damage Assessment: A Case Study of Nakhon Si Thammarat, Thailand. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 1927-1931. https://doi.org/10.5194/isprs-archives-XLII-2-W13-1927-2019.

[40] Seo, D. K., Kim, Y. H., Eo, Y. D., Lee, M. H. and Park, W. Y., (2018). Fusion of SAR and Multispectral Images Using Random Forest Regression for Change Detection. ISPRS International Journal of Geo-Information. Vol. 7(10). https://doi.org/10.3390/ijgi7100401.