UAV forest fire patrol protection plan
author: sherry
2023-01-16
I Background
Most of the forest areas administered by the Eastern Tianshan State Forestry Administration lie in remote back-mountain terrain with complex topography and steep slopes, and some fire-prone areas are difficult for patrol personnel to reach. Many forest areas lack communications and electricity, and fire sources include human activity and lightning strikes; the areas most prone to lightning-caused fires in particular lie deep in the high mountains. Existing ground monitoring and daily manual patrols alone cannot achieve blind-spot-free, full-coverage fire prevention and cannot meet the requirement of discovering fires in time. Moreover, because traffic in forested areas is poor and public communication networks do not cover them, it is difficult to transmit fire information immediately after a fire is discovered; firefighting is easily delayed, and a small fire can spread into a conflagration. UAVs carrying multiple payloads such as visible-light, infrared, and multispectral/hyperspectral cameras can conduct long-duration, large-area, and fixed-point aerial monitoring of forests and grasslands. This enables efficient and comprehensive monitoring, improves the timeliness and recognition capability of forest and grassland fire monitoring, and lays the foundation for early disposal of fires.
As the primary task of ecological resource protection, forest fire prevention is a safety guarantee for the construction of ecological civilization. The rapid development of UAV technology has injected new momentum into forest fire prevention and demonstrated the feasibility of applying UAVs to forest fire early warning. According to the level of forest fire risk, UAV aerial patrols monitor and manage fire-risk areas by adjusting the patrol frequency, realizing forest fire identification, fire information transmission, firefighting assistance and monitoring, and post-disaster investigation and assessment, thereby providing a reference for further promoting UAV technology in forest fire early warning.
II Technical ideas and main content
UAVs are used for forest fire prevention and early warning, incorporating dynamic basic elements in the sky, on the ground, and in the air into monitoring and management to establish a complete forest fire early warning system. According to the management and protection requirements of the forest protection zone, the monitored area is divided into several flight grids and a UAV grid patrol mechanism is established. The UAV remote control system is configured with infrared thermal imagers, anemometers, and other instruments, and intelligent service technologies such as the Internet of Things and 3S are used to realize real-time monitoring within the region. The collected data are transmitted immediately to the rangers' mobile app, the ground control station, and the fire prevention command department for data analysis, fire point location, and fire monitoring, providing a decision-making basis for the ground fire command. The system assists firefighting, safeguards firefighting routes and optimal evacuation routes, monitors residual fire after a disaster, carries out fire investigation, and systematically assesses post-disaster damage, forming a comprehensive forest fire early warning service platform with full coverage, a short patrol cycle, and efficient, accurate operation.
Technical route of forest fire prevention warning
Main implementation content:
1. UAV patrol warning and key area screening
Based on the analysis and evaluation results of the forest fire risk census, and combined with the full implementation of the national forest chief system, the forest fire risk area is divided into a standard square grid and a UAV grid patrol mechanism is established. UAVs carrying dedicated multispectral or hyperspectral cameras fly multiple sorties along scheduled routes to complete the patrol monitoring tasks for the divided areas. After high-definition image data are collected, back-end platform software and algorithms build two-dimensional models of the images from the demarcated area, discover emergencies such as fire hazards and illegal burning in the patrol area in time, and perform intelligent fire identification and early warning, providing a basis for decision-making and deployment by the emergency command center and buying precious time for forest firefighting.
In addition, by analyzing the collected data, indicators such as chlorophyll, water content, pests and diseases, and vegetation coverage can be calibrated for the vegetation in the area, and the key areas that need later fire prevention patrols can be screened out, further realizing fire early warning for key areas.
2. Key 3D modeling, positioning and screening
A UAV equipped with a visible-light camera performs 3D modeling of the key areas already screened out and of the key fire inspection areas calibrated by each unit in the early stage. The 3D models also allow fire-fighting facilities, residential areas, cultural relics, rivers, and other protected facilities in the area to be calibrated. Once a fire occurs, this geographic information supports fire rescue and ensures timely response.
3. Daily Patrol
①Daily flight: the UAV patrols daily along the established route;
②Route planning: commanding heights are selected according to the terrain and automatic flight missions are planned, covering 50 meters on both sides of the route;
③Patrol record: photos and video are recorded of key areas during the UAV's flight;
④Supplementary inspection: for areas that cannot be reached, the patrolman can choose another take-off point for a supplementary inspection;
⑤Problem checking: for suspected problems, coordinate information is obtained from the UAV's photo data.
4 Flight patrol for fire situations
During daily patrols, once an abnormally high-temperature point is found, it can be calibrated on the cloud platform map, and a high-temperature fire warning notice is sent to leaders and forest rangers, with the latitude and longitude of the high-temperature point marked along with pictures and video. When the UAV flies to the fire area, people and wild animals in the area can be driven away through its megaphone.
5 Flight patrol for residual fire investigation
Visible-light and thermal-imaging payloads mounted on the UAV can quickly find embers and abnormally high-temperature points, predict whether rekindling will occur, and mark them on the cloud platform map, providing timely and effective information for fire rescue.
Flight Area
Area 1:
Area 2:
Area 3:
1. Scheme design
1) UAV patrol warning and key area screening
UAVs carrying dedicated multispectral or hyperspectral cameras fly multiple sorties along scheduled routes to complete the patrol monitoring tasks for the divided areas. After high-definition image data are collected, back-end platform software and algorithms build two-dimensional models of the images from the demarcated area. In addition, the collected data can be analyzed to calibrate the inversion results of forest chlorophyll concentration and humidity in the area, so as to screen out the key areas needing fire prevention in the later stage and carry out forest fire early warning there.
4.1 Inversion of vegetation chlorophyll
4.1.1 Measured spectrum and detection of vegetation chlorophyll
During the project period, an on-the-spot investigation of forest fire prevention areas in Xinjiang was completed, with sampling points set according to the principle of uniform distribution. Field measurements were taken in clear, calm weather. The actual spectra of the vegetation were collected with a portable field spectrometer: the spectra of vegetation canopy leaves and the radiance of sky light and the reference whiteboard were measured, and the remote sensing reflectance of the vegetation was calculated and derived. At the same time, the chlorophyll content of the vegetation was measured in the field.
4.1.2 Inversion of vegetation chlorophyll
1) Spectral reflectance simulation
In order to establish a chlorophyll retrieval model suitable for UAV multispectral data, the spectral response function of the UAV multispectral camera is used, according to formula (1), to resample the measured vegetation spectral curves to the UAV's visible and near-infrared bands:

R = ∫[λ1, λ2] SRF(λ) · r(λ) dλ / ∫[λ1, λ2] SRF(λ) dλ    (1)

In the formula: R is the simulated UAV multispectral vegetation reflectance; SRF(λ) is the spectral response function of the UAV multispectral band; λ1 and λ2 are the lower and upper bounds of the simulated band; r(λ) is the measured spectral reflectance.
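Formula (1) reduces to an SRF-weighted average when the curves are sampled on a discrete wavelength grid. A minimal sketch (function name and inputs are illustrative, not from the source):

```python
def simulate_band(reflectance, srf):
    """Formula (1) as a discrete sum: the simulated band reflectance R is the
    SRF-weighted average of the measured reflectance r(λ), with both curves
    sampled on the same wavelength grid within [λ1, λ2]."""
    return sum(s * r for s, r in zip(srf, reflectance)) / sum(srf)
```

With a flat response the result is simply the mean reflectance of the band; a narrow response picks out the reflectance near the response peak.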
2) Inversion of vegetation chlorophyll
CatBoost model: CatBoost is an open-source machine learning library proposed by scholars at the Russian company Yandex in 2017. Its name combines "Categorical" and "Boosting": it is a gradient boosting algorithm that uses symmetric decision trees as base learners. It improves on the gradient estimation of the traditional GBDT (Gradient Boosting Decision Tree) algorithm by means of ordered boosting and can efficiently handle categorical features in the decision trees. The CatBoost algorithm overcomes the gradient bias and prediction shift problems of the traditional boosting framework, which reduces overfitting. During training, CatBoost integrates multiple base learners serially: the training sample set remains unchanged in each round, while sample weights are continually updated from the previous round's results, gradually reducing the deviation caused by noisy points. The weak learners generated in training depend on one another, and the final result is obtained by weighting the regression values of all weak learners. Compared with other ensemble algorithms in the boosting family, such as XGBoost and LightGBM, CatBoost performs better in accuracy, can automatically process discrete features, and is well suited to regression problems with many input features and noisy samples, giving the model stronger robustness and generalization.
The inversion of vegetation chlorophyll concentration is divided into two steps: constructing the training data set, and model training and inversion. The measured sample set is randomly divided into a training set and a test set at a ratio of 8:2, ensuring that the test samples cover different chlorophyll concentration intervals. The reflectance of the screened band combinations and the chlorophyll concentration are modeled with the CatBoost algorithm. The training data set is input to build the model, the optimal parameter configuration is determined through hyperparameter tuning, and the model accuracy is quantitatively evaluated with the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE). Finally, the CatBoost model is applied to the UAV multispectral imagery to invert the spatial distribution of chlorophyll concentration of forest vegetation in Xinjiang, producing a vegetation chlorophyll concentration distribution map.
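The 8:2 split and the three accuracy metrics can be sketched as follows. This is a minimal stand-alone sketch of the evaluation step only; the modeling step itself would call the catboost library (e.g. its `CatBoostRegressor`), which is omitted here:

```python
import math
import random

def train_test_split(samples, train_ratio=0.8, seed=42):
    """Randomly split samples 8:2 into training and test sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def r2_score(y_true, y_pred):
    """Coefficient of determination R²."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(y_true) * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))
```

A perfect inversion gives R² = 1, RMSE = 0, and MAPE = 0; the tuned model is the one that maximizes R² while minimizing the two error metrics on the held-out 20%.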
4.2 Vegetation humidity
To provide an effective basis for forest fire prevention, the study found that in the forest environment, humidity can be estimated from the gray values of forest images by establishing a functional relationship between image gray value and humidity. When the humidity in the forest falls below a certain threshold, the relevant staff can carry out corresponding fire prevention work.
The study found that the gray value of vegetation-surface pixels in an image can effectively reflect the water content. In the visible range, the higher the water content of the vegetation surface, the smaller the image gray value (gray values range from 0 to 255); the lower the water content, the larger the gray value. The corresponding water content can therefore be obtained by analyzing the gray value of the vegetation image.
The processing of forest images first requires acquiring images with an image acquisition device while measuring the vegetation humidity at the sampled locations. Digital image processing is applied to the acquired images, and formula (1) is used to calculate the average gray value of each image:

avg = (1/N) · Σ f(x, y)    (1)

where avg is the average gray value of the image, f(x, y) is the gray value at pixel (x, y), and N is the number of pixels. Finally, the relationship between the gray value of the vegetation image and the vegetation water content is established, yielding a humidity distribution map of the vegetation in the Xinjiang monitoring area.
For the vegetation humidity distribution map, a polynomial was chosen as the model kernel function. The data were modeled in MATLAB to obtain the relationship between the gray value of the vegetation image and the vegetation water content, from which the humidity distribution map of the monitoring area was derived.
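The two computational steps above, averaging the gray values and evaluating the fitted polynomial, can be sketched as follows. The polynomial coefficients are hypothetical placeholders for whatever the MATLAB fit produces:

```python
def average_gray(img):
    """Mean gray value of a 2-D image given as nested lists (values 0-255)."""
    n = sum(len(row) for row in img)
    total = sum(sum(row) for row in img)
    return total / n

def humidity_from_gray(avg, coeffs):
    """Evaluate a fitted polynomial humidity = c0 + c1*g + c2*g^2 + ...
    `coeffs` stands in for the coefficients from the MATLAB polynomial fit."""
    return sum(c * avg ** i for i, c in enumerate(coeffs))
```

Applying `humidity_from_gray(average_gray(tile), coeffs)` tile by tile over the mosaic would produce the humidity distribution map described above.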
4.3 Normalized Difference Vegetation Index, Drought Stress Index
1) Normalized Difference Vegetation Index
The normalized difference vegetation index (NDVI) is a commonly used vegetation index that reflects the growth state of plants well and is closely related to the spatial distribution density of vegetation.
NDVI calculation formula:

N = (b4 − b3) / (b4 + b3)

In the formula: N is the normalized difference vegetation index, b3 is the reflectance of the red band, and b4 is the reflectance of the near-infrared band.
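The index is a one-liner; a minimal sketch:

```python
def ndvi(b3_red, b4_nir):
    """NDVI = (NIR - Red) / (NIR + Red); values lie in [-1, 1], with
    dense healthy vegetation near the upper end."""
    return (b4_nir - b3_red) / (b4_nir + b3_red)
```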
2) Drought index
The dryness index generally uses the bare soil index, but in the regional environment a considerable amount of construction land also contributes to surface "drying", so the dryness index can be synthesized from the two, as the arithmetic mean of the bare soil index and the building index.
The formula for calculating the bare soil index S is:

S = [(b11 + b4) − (b8 + b2)] / [(b11 + b4) + (b8 + b2)]

The calculation formula of the building index I is:

I = [NDBI − (SAVI + MNDWI)/2] / [NDBI + (SAVI + MNDWI)/2]

where NDBI = (b11 − b8)/(b11 + b8), SAVI = 1.5(b8 − b4)/(b8 + b4 + 0.5), and MNDWI = (b3 − b11)/(b3 + b11). In the formulas, b2, b3, b4, b8, and b11 are the spectral reflectances of the corresponding bands.
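A minimal sketch of the dryness computation. Note the S and I forms used here are the standard bare-soil and index-based built-up indices assumed from the band list; the band numbering follows the document:

```python
def dryness_index(b2, b3, b4, b8, b11):
    """Dryness = arithmetic mean of bare-soil index S and building index I.
    Inputs are per-pixel band reflectances; output lies in roughly [-1, 1]."""
    s = ((b11 + b4) - (b8 + b2)) / ((b11 + b4) + (b8 + b2))
    ndbi = (b11 - b8) / (b11 + b8)
    savi = 1.5 * (b8 - b4) / (b8 + b4 + 0.5)
    mndwi = (b3 - b11) / (b3 + b11)
    i = (ndbi - (savi + mndwi) / 2) / (ndbi + (savi + mndwi) / 2)
    return (s + i) / 2
```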
4.4 Calculation of vegetation coverage
There is a very significant linear correlation between vegetation coverage and NDVI, and vegetation coverage information can be extracted directly by establishing a conversion relationship between the two. The principle of the pixel dichotomy model: the information S of each pixel observed by the remote sensor can be decomposed into the information SV contributed by the green vegetation part and the information SS contributed by the bare soil part:

S = SV + SS

If the proportion of the pixel covered by vegetation is the vegetation coverage Ci, then the proportion of bare soil is 1 − Ci. With Sveg the remote sensing value of a pure vegetation pixel and Ssoil that of a pure soil pixel, SV = Ci·Sveg and SS = (1 − Ci)·Ssoil, so:

S = Ci·Sveg + (1 − Ci)·Ssoil
NDVI is an important index reflecting the growth state of surface vegetation. Using the highly significant linear correlation between vegetation coverage and NDVI, vegetation coverage is extracted with the pixel dichotomy model. Solving the above formula with NDVI as the remote sensing information gives the vegetation coverage:

Ci = (NDVI − NDVIS) / (NDVIV − NDVIS)
In the formula, NDVIV is the NDVI value of a fully vegetated pixel, NDVIS is the NDVI value of a bare-soil pixel, and Ci is the vegetation coverage. The choice of NDVIV and NDVIS is the key to applying the pixel dichotomy model. Due to the influence of atmospheric clouds, surface humidity, and illumination, NDVIS is not a fixed value close to 0; it usually varies between −0.1 and 0.2. For pure vegetation pixels, the vegetation type and composition, the spatial distribution of vegetation, and seasonal changes in growth all cause spatio-temporal variation of NDVIV.
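The pixel dichotomy formula can be sketched as follows. The default endpoint values are illustrative placeholders; in practice NDVIS and NDVIV are taken from the NDVI histogram of the scene, as the paragraph above notes:

```python
def vegetation_coverage(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """Pixel dichotomy model: Ci = (NDVI - NDVIS) / (NDVIV - NDVIS),
    clipped to [0, 1] so pixels outside the endpoints map to 0% or 100%."""
    ci = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, ci))
```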
4.5 Comprehensive evaluation model
Considering that the influencing factors of the ecological environment are complex and the indicators numerous, the entropy weight method is selected to weight each indicator and influencing factor. To avoid the subjectivity of methods such as AHP, the TOPSIS comprehensive evaluation model is also introduced, which has the advantages of objectivity and accuracy. Combining the entropy-method weights of each indicator with the TOPSIS model yields a final score used to evaluate the level of forest fire risk.
1) Entropy weight method
The entropy weight method is an objective weighting method that determines indicator weights from the amount of information contained in the indicator statistics. The importance of an indicator can be judged by its entropy: the more information an indicator carries, the smaller its entropy and the greater its dispersion, and the larger the weight it is given. The entropy weight method is highly objective and reduces the interference of subjective factors, but it only produces weights, so it is generally combined with other evaluation methods.
2) TOPSIS method
The TOPSIS method is also known as the distance-to-ideal-solution method. Under the indicator system, it calculates how close each evaluation object's vector is to the optimal target. The basic idea: from the matrix of original data, determine the optimal and worst solutions; then compute each object's distance to both; finally compute the closeness, which serves as the standard for ranking the objects.
3) Entropy weight—TOPSIS model construction
4) Calculate the closeness of each object to the positive ideal solution.
The closeness indicates how near each year's evaluation target is to the optimal target; its value lies in [0, 1], and the larger the value, the lower the probability of forest fire in the region. It is calculated as:

Ci = Di− / (Di+ + Di−)

where Di+ and Di− are the distances of object i to the optimal (positive ideal) and worst (negative ideal) solutions.
5) Classify according to the degree of closeness.
According to the research results of existing scholars, closeness is divided into 4 levels indicating the forest fire danger level. The lower the level, the more fire-prone the area; that is, Level 1 areas are the key fire-prone areas.
| Closeness | Fire danger level |
| (0, 0.3] | Level 1 |
| (0.3, 0.6] | Level 2 |
| (0.6, 0.8] | Level 3 |
| (0.8, 1] | Level 4 |
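The entropy-weight → TOPSIS → level-classification pipeline above can be sketched as follows (a minimal pure-Python sketch assuming all indicators are benefit-type and positive; real use would apply it to the normalized indicator matrix of the monitored areas):

```python
import math

def entropy_weights(matrix):
    """Entropy weights for an m x n decision matrix (rows: objects, cols: indicators)."""
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1 - e)  # more divergence -> more information -> more weight
    return [d / sum(divergence) for d in divergence]

def topsis_closeness(matrix, weights):
    """Closeness Ci = Di- / (Di+ + Di-) for each object."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    best = [max(col) for col in zip(*v)]    # positive ideal solution
    worst = [min(col) for col in zip(*v)]   # negative ideal solution
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

def fire_danger_level(c):
    """Map closeness to the 4 danger levels of the classification table."""
    if c <= 0.3: return 1
    if c <= 0.6: return 2
    if c <= 0.8: return 3
    return 4
```

Low-closeness objects land in Level 1 and become the key fire-prone areas singled out for intensified patrols.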
UAVs equipped with multispectral cameras carry out 3D modeling of the key areas already screened out and of the key fire inspection areas calibrated by each unit in the early stage; at the same time, target detection calibration is performed for fire protection facilities, residential areas, cultural relics, rivers, and other facilities in the area that need protection. The target detection algorithm is implemented with a deep convolutional neural network model.
2 Deep convolutional neural network
A convolutional neural network (CNN) is a feed-forward neural network with a deep structure that includes convolution calculations; it is one of the most representative deep learning algorithms. It uses deep convolutions to mimic the hierarchical perception and local receptive fields of the human visual system, processes unstructured data, and integrates low-, mid-, and high-level features in an end-to-end manner, thereby obtaining rich feature information and improving the accuracy of semantic segmentation of remote sensing images. The basic structure of a CNN generally includes a data input layer, convolutional layers, pooling layers, fully connected layers, and an output layer.
3 Target recognition of fire protection facilities, residential areas, cultural relics, rivers, and forest protection areas. The technical route of ground-object recognition based on the deep convolutional neural network is shown in the figure. Based on this model, fire protection facilities, residential areas, cultural relics, rivers, and forest protection areas are automatically recognized from the UAV multispectral image data, yielding a location distribution map of these targets in the key areas.
4 Target detection
Object detection is one of the popular directions in computer vision and digital image processing. It is required in fields such as intelligent video surveillance, industrial monitoring, and robot navigation, where minimizing labor costs through computer vision is of great significance. In recent years, object detection has become a hot spot in both theoretical and applied research; it is an important field of computer vision and image processing and a basic algorithm for recognition tasks.
YOLOv5 network model
YOLOv5 (You Only Look Once) was released by Ultralytics LLC in May 2020. Its image inference time is as low as 0.007 s, i.e., it can process 140 frames per second, meeting the needs of real-time video detection. At the same time, its structure is smaller: the weight file of the YOLOv5s version is about 1/9 the size of YOLOv4's, at 27 MB. Its network structure is shown in the figure.
As the figure shows, the model is divided into four main parts: Input, Backbone, Neck, and Prediction.
1) Input port
The input side includes three parts: Mosaic data augmentation, image size processing, and adaptive anchor box calculation. Like YOLOv4, YOLOv5 uses the Mosaic method for data augmentation, which works well for small-target detection and suits the needs of this detection task.
In the YOLO algorithm, the input image is resized to a fixed size before being fed into the detection model for training. Initial anchors must also be set before training: the network predicts boxes on the basis of these base anchor boxes, compares them with the ground-truth boxes, back-propagates according to the difference, and iteratively adjusts the model parameters.
2) Backbone
The Backbone contains the Focus structure and CSP structures. The Focus structure does not exist in YOLOv3 or v4; its key step is the slice operation, as shown in Figure 8. For example, an original 416 × 416 × 3 image entering the Focus structure becomes a 208 × 208 × 12 feature map after slicing, and after a convolution with 32 kernels becomes a 208 × 208 × 32 feature map.
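The slice step can be sketched as follows, on nested lists for the sake of a self-contained example; real implementations operate on tensors:

```python
def focus_slice(img):
    """YOLOv5 Focus slice: gather each 2x2 block of pixels and stack their
    channels, so an H x W x C image becomes H/2 x W/2 x 4C
    (e.g. 416 x 416 x 3 -> 208 x 208 x 12)."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h, 2):
        row = []
        for j in range(0, w, 2):
            # concatenate the 4 neighbouring pixels' channel lists
            row.append(img[i][j] + img[i + 1][j] + img[i][j + 1] + img[i + 1][j + 1])
        out.append(row)
    return out
```

No information is lost: spatial resolution is traded for channel depth before the first convolution.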
YOLOv4 uses the CSP structure only in the backbone network. In v5, two CSP structures are designed: CSP1_X and CSP2_X. CSP1_X is mainly used in the Backbone, while CSP2_X is mainly used in the Neck.
3) Neck
The Neck adopts an FPN+PAN structure. The FPN is top-down and uses upsampling to transfer and fuse information to obtain the predicted feature maps; the PAN adds a bottom-up feature pyramid. The specific structure is shown in Figure 9.
Figure 9 PAN structure diagram
4) Prediction
Prediction includes the bounding box loss function and non-maximum suppression (NMS). YOLOv5 uses GIoU_Loss as the loss function, which effectively handles the case where bounding boxes do not overlap. In the post-processing stage, weighted NMS is used to screen the many candidate boxes that appear and obtain the optimal target box.
5 Flight patrol for fire situations
During daily patrols, the UAV carries a multispectral camera to detect high-temperature targets in the key areas already screened out and in the key fire prevention inspection areas calibrated by each unit in the early stage, and locates the coordinates of the high-temperature points to provide timely technical support for firefighters. The same target detection algorithm, a deep convolutional neural network model, is used.
6 Flight patrol for residual fire investigation
Visible-light and thermal-imaging payloads mounted on the UAV can quickly detect residual fire and abnormally high-temperature points, predict whether rekindling will occur, and mark them on the cloud platform map to provide timely and effective information for fire rescue. The residual fire and abnormal high-temperature target detection algorithm is likewise implemented with a deep convolutional neural network model.