Pre-processing imagery could improve automatic classification and avoid the ‘Golden Hammer’ bias
I recently read a post on LinkedIn that emphasised the challenge of extracting vector data from satellite imagery for continuous monitoring. The responses reiterated a familiar problem: even today, efficiently classifying pixels that fall in the shadows cast by tree cover and buildings remains a non-trivial exercise in remote sensing.
Recent developments in the classification of remote sensing images revolve around deep learning techniques such as U-Net semantic segmentation. Despite these developments, which underpin state-of-the-art image classification for remote sensing, shadow pixel classification is still considered a complex problem. In this blog post, I would like to highlight why it remains complex even with deep learning implementations readily available in most GIS and image processing software, and how we at 1Spatial have been able to address it with a simpler, yet effective, solution.
Here, I would like to highlight the elephant in the room: data pre-processing. Although often treated as a relatively straightforward step in satellite image classification, data pre-processing, and therefore data engineering, is a significant aspect that cannot be divorced from domain know-how. Even with a priori labelling, as in supervised learning, shadow pixels should not be selected at random without considering the class of the underlying feature pixels. Approaches in use today generally work either with data obtained from multiple sources or with a time-lapse image stack from a single source. In either case, the volume of data to be analysed is enormous, introducing significant latency into real-time processing. To alleviate this data and time complexity, new approaches to image pre-processing should be adopted, so that classification is not left entirely at the mercy of the classification efficiency of deep learning algorithms. Current methods treat generic deep learning algorithms as a Maslow's hammer for every classification problem, with little regard for the source, fidelity, sufficiency or suitability of the data. Such a golden hammer approach means far more effort is spent fine-tuning generic deep learning algorithms than pre-treating the data on which they learn and operate.
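To make the pre-processing idea concrete, here is a minimal sketch of one classic shadow-masking heuristic: shadow pixels tend to be dark overall yet relatively blue-rich, because shaded areas are lit mainly by scattered skylight. This is an illustrative example only, not 1Spatial's actual method; the thresholds are assumptions that would need tuning per sensor and scene.

```python
import numpy as np

def shadow_mask(rgb, intensity_thresh=0.25, blue_ratio_thresh=0.4):
    """Flag likely shadow pixels in an RGB array with values in [0, 1].

    A pixel is flagged when its mean intensity is low (dark) and the
    blue channel's share of the total is high (skylight-dominated).
    Thresholds are illustrative defaults, not calibrated values.
    """
    intensity = rgb.mean(axis=-1)
    blue_ratio = rgb[..., 2] / (rgb.sum(axis=-1) + 1e-6)
    return (intensity < intensity_thresh) & (blue_ratio > blue_ratio_thresh)

# Toy 2x2 image: top-left pixel is dark and bluish (shadow-like),
# the other three are bright.
img = np.array([[[0.05, 0.08, 0.15], [0.80, 0.80, 0.80]],
                [[0.70, 0.60, 0.50], [0.90, 0.90, 0.90]]])
mask = shadow_mask(img)  # only mask[0, 0] is True
```

A mask like this lets shadow pixels be set aside, or sampled deliberately per underlying feature class, before any training data is drawn, rather than leaving the network to untangle shadow and surface class on its own.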
Digital transformation is business specific. Generic solutions cannot be applied to every case without an understanding of the domain, the business model, the data stack and the data utility. Improved classification of imagery is still just the preliminary step, though: the classified vector data then needs to be cleaned, compared with existing features to detect changes, and used to infer the updates required. This process needs to be automated using rules, which is also something 1Spatial can help with.
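The compare-and-infer step described above can be sketched as a simple rules-based diff between an existing feature store and newly classified features. This is a hypothetical illustration of the idea, keyed on feature IDs and attribute equality; it is not 1Spatial's rules engine, and the function and field names are invented for the example.

```python
def detect_changes(existing, classified):
    """Compare two feature sets keyed by feature ID.

    `existing` and `classified` map feature ID -> attribute dict.
    Returns the IDs that were added, removed, or modified, which
    downstream rules would turn into concrete update actions.
    """
    existing_ids, new_ids = set(existing), set(classified)
    added = new_ids - existing_ids
    removed = existing_ids - new_ids
    modified = {fid for fid in existing_ids & new_ids
                if existing[fid] != classified[fid]}
    return added, removed, modified

# Illustrative data: one building persists, one disappears,
# and a new road is detected.
existing = {"b1": {"class": "building"}, "b2": {"class": "building"}}
classified = {"b1": {"class": "building"}, "r1": {"class": "road"}}
added, removed, modified = detect_changes(existing, classified)
```

In practice the equality test would be replaced by geometric and attribute tolerance rules, but the shape of the automation is the same: a repeatable comparison producing a reviewable change set.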
Find out more
Get in touch to learn more about our approach, and see how we can help you with your geospatial digital transformation and applications through our expertise in digital imaging, computer vision, machine learning and AI.
Author: Vindhya Kothuri - Senior AI and Machine Learning Specialist - 1Spatial