The models were devised during the NSW flooding.
“In 2021, DCS sponsored nine Charles Sturt University students to carry out image processing projects using Amazon Web Services (AWS), a cloud-based computing platform,” senior lecturer in information technology Dr David Tien says.
The Charles Sturt student team members were Adam Blewitt, Andrew Smith, Patrick Funnel, Cameron Nyberg, Darren Sheehan, Ian Blott, Regan Frank, Thomas Godfrey, and Michael Senkic.
Blewitt says the purpose of the AWS Aerial Imaging Project was to automate the extraction of geospatial information about the floods using machine learning models. Automation cut the time required to obtain that information from two days of work to two hours, improved its accuracy, and consolidated work that was being duplicated across multiple government agencies.
In the wake of a flood, Spatial Services can fly an aircraft over flood-affected regions to collect multiband (RGB and near-infrared) aerial images.
These images are then passed to a specialist for processing, who stitches the images together and performs orthorectification, which corrects distortions from terrain relief, camera tilt, and aircraft motion so the image has the uniform scale of a flat map.
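A core step in that correction is projecting raw pixel coordinates onto a flat map plane. A minimal sketch of the idea, fitting a planar homography to four hypothetical ground-control points with the direct linear transform (all coordinates and function names here are illustrative, not drawn from the project's pipeline):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: the right singular
    # vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_map(pt, H):
    """Project a pixel coordinate onto the map plane."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Four invented ground-control points: where known map positions
# appear in the raw, tilted aerial photo.
pixel_pts = [(10, 12), (480, 25), (500, 470), (5, 455)]   # raw photo
map_pts   = [(0, 0), (500, 0), (500, 500), (0, 500)]      # flat map

H = fit_homography(pixel_pts, map_pts)
print(to_map((480, 25), H))  # a control point lands on (500, 0)
```

Real orthorectification also uses a terrain model and camera calibration; this sketch only shows the perspective-correction idea.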
Blewitt explains that previously, these images of the flooded region would then be passed onto multiple agencies such as SES and Department of Primary Industries (DPI) for their own use.
“Organisations would then manually review and extract geospatial information from these images, such as the boundary and size of the flooded area. This required significant time and expertise and delayed the ability of government departments to make informed decisions in the wake of a natural disaster,” Blewitt points out.
The Charles Sturt student team used images provided by Spatial Services of past flooding events in the Hawkesbury-Nepean (March 2021), Brewarrina (April 2021), and Lower Clarence (March 2021) to train machine learning models to perform semantic segmentation of the aerial flood images.
Blewitt defines semantic segmentation as “the process of allocating each pixel in an image to one of several categories, the ultimate goal in this project being to isolate all pixels of the flood into a single category.”
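The per-pixel labelling Blewitt describes can be illustrated with a toy example. The sketch below uses a simple NDWI threshold on invented green and near-infrared values as a stand-in for the team's learned models, exploiting the fact that water absorbs NIR strongly; the data and the threshold rule are purely illustrative:

```python
import numpy as np

# Toy 3x3 two-band image: invented green and near-infrared (NIR)
# reflectance values. Water pixels have low NIR reflectance.
green = np.array([[0.30, 0.30, 0.10],
                  [0.30, 0.05, 0.05],
                  [0.10, 0.05, 0.05]])
nir   = np.array([[0.40, 0.40, 0.35],
                  [0.40, 0.02, 0.02],
                  [0.35, 0.02, 0.02]])

# NDWI = (green - NIR) / (green + NIR); positive values suggest water.
ndwi = (green - nir) / (green + nir)

# Semantic segmentation output: one category label per pixel
# (0 = background, 1 = flood water).
labels = (ndwi > 0).astype(int)
print(labels)
# [[0 0 0]
#  [0 1 1]
#  [0 1 1]]
```

The flooded area and its boundary then fall out of the label mask directly, e.g. by counting pixels labelled 1 and multiplying by the ground area each pixel covers.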
“We experimented with several models including convolutional neural networks, Gaussian mixture models, and complex decision trees, each using different learning algorithms and methods to perform the same semantic segmentation task.”
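One of the approaches named above, a Gaussian mixture model, can segment pixels without labelled training data by clustering their band values. A hedged sketch using scikit-learn on synthetic one-band data (the reflectance values and the two-component setup are assumptions for illustration, not the project's actual configuration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic per-pixel NIR reflectance: water pixels cluster near 0.05,
# dry-land pixels near 0.40 (values invented for illustration).
water = rng.normal(0.05, 0.01, size=(500, 1))
land  = rng.normal(0.40, 0.05, size=(500, 1))
pixels = np.vstack([water, land])

# Fit a two-component Gaussian mixture: each component models one
# category (flood water vs everything else).
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)

# Assign every pixel to its most likely component.
labels = gmm.predict(pixels)

# The component with the lower mean NIR corresponds to water.
water_comp = int(np.argmin(gmm.means_.ravel()))
water_fraction = (labels == water_comp).mean()
print(water_fraction)  # half of these synthetic pixels are water
```

In practice each pixel would carry all four bands (RGB plus NIR) rather than a single value, but the fitting and prediction steps are the same.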
“The models can be used on any flood, provided the appropriate near infrared aerial images have been taken.”
Spatial Services executive director Narelle Underwood says the agency was pleased to partner with Charles Sturt University to support students undertaking a project with real-world applications.
“This collaboration provides the students with access to our subject matter experts across a range of areas, with the outcomes providing benefits for Spatial Services and the community,” Underwood concludes.