Toward Filament Segmentation Using Deep Neural Networks

Jan 15, 2019 - Nov 10, 2019

"Toward Filament Segmentation Using Deep Neural Networks" [pdf]

Azim Ahmadzadeh, Sushant S. Mahajan, Dustin J. Kempton, Rafal A. Angryk, and Shihao Ji

IEEE International Conference on Big Data 2019



Abstract

We use a well-known deep neural network framework, called Mask R-CNN, for identification of solar filaments in full-disk H-α images from the Big Bear Solar Observatory (BBSO). The image data, collected from BBSO's archive, are integrated with the spatiotemporal metadata of filaments retrieved from the Heliophysics Event Knowledgebase (HEK) system. This integrated data set is then treated as the ground truth in the training process of the model. The available spatial metadata are the output of a currently running filament-detection module developed and maintained by the Feature Finding Team (FFT), an international consortium selected by NASA. Despite the known challenges in the identification and characterization of filaments by the existing module, which are in turn inherited by any module that learns from its outputs, Mask R-CNN shows promising results. Trained and validated on two years' worth of BBSO data, the model is then tested on the following three years. Our case-by-case and overall analyses show that Mask R-CNN can clearly compete with the existing module and in some cases even perform better. Several cases of false positives and false negatives of the existing module that are correctly segmented by our model are also shown. The results presented in this study serve as a proof of concept of the benefits of employing deep neural networks for the detection of solar events, and of filaments in particular. The overall advantages of using the proposed model are two-fold: first, the performance of deep neural networks generally improves as more annotated data, or better annotations, are provided; second, such a model can be scaled up to detect other solar events as well, serving as a single multi-purpose module.

The prominent visual difference between filaments and sunspots is illustrated in Fig. 1. While this fundamental difference makes the classification of the two event types fairly straightforward from the point of view of image-processing algorithms, the background texture of H-α images is the most challenging part of the segmentation problem. The presence of dark granularities in the background of H-α images makes it difficult, and sometimes impossible, even for human experts, to correctly differentiate between what are known as filaments' barbs and the background. This is especially important because one expectation of a reliable filament-detection module is to characterize the shape and structure of the detected filaments, and this can only be achieved with a high-resolution segmentation that creates a pixel-level mask for each filament. In other words, determining an approximate vicinity of an event instance would not provide enough information for the scientific analysis of filaments. In this preliminary work, we exclude sunspot instances from our detection module, as we would like to start building the system from a single segmentation component, namely filament detection, and scale up to more event types in the future.

Fig. 1. Several instances of filaments and sunspots, with very different shape structures, in two H-α images.
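Since a pixel-level mask per filament is the target representation, the ground-truth boundaries reported to HEK (typically given as polygon chain codes in helioprojective coordinates) have to be rasterized onto the image grid. The snippet below is a minimal sketch of that rasterization step, assuming the boundary has already been converted to pixel coordinates; it is illustrative only, not the exact pipeline of the paper, and the example vertices are hypothetical.

```python
import numpy as np
from skimage.draw import polygon

def boundary_to_mask(boundary_xy, image_shape=(2048, 2048)):
    """Rasterize one filament boundary, given as (x, y) pixel vertices,
    into a binary mask on the full-disk image grid."""
    boundary_xy = np.asarray(boundary_xy, dtype=float)
    # skimage expects rows (y) first, then columns (x)
    rr, cc = polygon(boundary_xy[:, 1], boundary_xy[:, 0], shape=image_shape)
    mask = np.zeros(image_shape, dtype=np.uint8)
    mask[rr, cc] = 1
    return mask

# Example: a small triangular region (hypothetical vertices)
mask = boundary_to_mask([(100, 120), (180, 130), (140, 200)])
print(mask.sum(), "pixels inside the boundary")
```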


Data

Full-disk H-α images of the Sun are captured by multiple telescopes across the globe: the Big Bear Solar Observatory (BBSO) in California, the Kanzelhöhe Solar Observatory (KSO) in Austria, the Catania Astrophysical Observatory (CAO) in Italy, the Meudon and Pic du Midi Observatories in France, and the Huairou Solar Observing Station (HSOS) and the Yunnan Astronomical Observatory (YNAO) in China. In this preliminary work, we rely solely on the images provided by BBSO. BBSO's public archive has provided daily full-disk snapshots of the Sun in the H-α band since 1997. Its 2048 × 2048-pixel images are the highest in resolution among all the instruments producing a similar product.

Dome on the main BBSO building viewed from Big Bear Lake. (wikipedia)
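The filament annotations that serve as ground truth can be retrieved programmatically from HEK. The sketch below uses SunPy's HEK client to pull filament ('FI') event reports for a sample time window; the time range is illustrative, and the returned field names (e.g., the boundary chain code) follow HEK conventions and should be checked against the HEK documentation.

```python
from sunpy.net import hek

client = hek.HEKClient()

# Filament ('FI') reports for one observing day (time window is illustrative)
events = client.search(
    hek.attrs.Time("2015-08-07 00:00:00", "2015-08-07 23:59:59"),
    hek.attrs.EventType("FI"),
)

for ev in events:
    # 'hpc_boundcc' holds the reported boundary as a chain code in
    # helioprojective coordinates (field names per HEK conventions).
    print(ev["event_starttime"], ev["frm_name"], str(ev["hpc_boundcc"])[:60], "...")
```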

State of the Art

In 2005, a software package for the automated detection and characterization of filaments was introduced by Bernasconi et al., and it became part of the solar event detection suite managed by the FFT. Bernasconi's algorithm for the detection of filaments is diligent and in many cases, especially when the observations are very clear, has an overall good performance; the filaments are often spotted with an acceptable estimation, and the chirality of each filament, calculated from the detected characteristics of its barbs, agrees with the experts' labels with 72% accuracy. Having said that, there is still a huge gap between the expected results and what the current module offers.

A few examples of such cases are illustrated here.

  • The first case depicts a behavior that is clearly a design choice: artificial bridges are generated for filaments that are spatially close to each other, so that a filament channel is reported instead of multiple small filaments found in the same channel. Although this decision is certainly of value for several queries, we believe that a filament-detection module should remain independent of how different research objectives are defined. That is, it should only report what is observed, and leave the aggregation of filaments, if necessary, to the data cleaning and pre-processing specific to each study.

  • The second case represents a few examples where Bernasconi's model misses some filaments. Without a thorough investigation of every component of the software it is not possible to confidently spot the issue; however, it seems that a combination of some extra constraints may have prevented such detections. There are many examples of such cases, present in almost every image, and it does not seem that reasons such as low contrast, corrupt images, or difficulties in detection closer to the limb of the Sun could justify the majority of them.

  • The third case is perhaps more interesting. Less often than in the previous case, regions are annotated as filaments that clearly are not. The fact that in such cases there are usually several other segmentations eliminates the hypothesis of a general shift of segmentations due to time differences between the report and the timestamp of the image. It is more likely that this is caused by incorrect choices of seeds in the process of threshold-based clustering. That is, small regions that are identified to be within filaments' regions, and then used as seeds, might have been chosen incorrectly because of a "bad" pre-defined threshold (a simplified sketch of such threshold-based seeding is given below). The presence of thin clouds in the atmosphere at the time of the observation also interferes with the threshold-based procedure for determining the seeds of filament masks.
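For illustration, the snippet below sketches a generic threshold-based seeding step of the kind described above. It is not Bernasconi's actual implementation, and the threshold and size cutoff are hypothetical placeholders; it only shows how a single global threshold, combined with background darkening (e.g., thin clouds), can admit spurious seeds.

```python
import numpy as np
from scipy import ndimage

def find_seeds(halpha_img, threshold=0.55, min_size=50):
    """Pick seed regions as connected components darker than a global
    threshold (intensities assumed normalized to [0, 1])."""
    dark = halpha_img < threshold            # candidate filament pixels
    labels, n = ndimage.label(dark)          # connected components
    sizes = ndimage.sum(dark, labels, index=range(1, n + 1))
    # Tiny dark patches (background granularity) are discarded, but a
    # cloud-darkened disk can still push large background areas below
    # the threshold, producing seeds where no filament exists.
    seed_labels = [i + 1 for i, s in enumerate(sizes) if s > min_size]
    return seed_labels, labels

# Example with a synthetic "observation"
rng = np.random.default_rng(0)
img = rng.uniform(0.4, 1.0, size=(256, 256))
seeds, labels = find_seeds(img)
print(len(seeds), "seed regions found")
```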

Our Module & Results

The filament-detection problem is a specific application of the broader object-detection task, which has been dominated by deep neural network architectures for the past decade. AlexNet (2012), R-CNN (2014), ResNet (2016), and YOLO (2016) are four of the best-known models among many. In this work, we employ one of the improved versions of R-CNN, called Mask R-CNN (2017), with a ResNet-50-FPN backbone.
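As a sketch of how such a model can be set up (the paper does not necessarily use torchvision; this follows the standard torchvision fine-tuning recipe, with a single foreground class for filaments):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_filament_maskrcnn(num_classes=2):  # background + filament
    # Mask R-CNN with a ResNet-50-FPN backbone, pre-trained on COCO
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box-classification head for our class count
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)

    # Replace the mask-prediction head as well
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)
    return model

model = build_filament_maskrcnn()
```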

Finally, we analyze the performance of Mask R-CNN on filament detection in juxtaposition with Bernasconi's segmentations reported to HEK, using the IoU metric in two versions: IoU pairwise and IoU batch (see the paper for details).
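The paper gives the precise definitions of the two versions. As a rough sketch, assuming binary masks and an already-matched pairing of predictions to HEK reports, IoU pairwise scores each matched pair of masks individually, while IoU batch compares the union of all masks from one source against the union from the other:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def iou_pairwise(pred_masks, hek_masks):
    """One IoU value per matched (prediction, report) pair of an image."""
    return [iou(p, h) for p, h in zip(pred_masks, hek_masks)]

def iou_batch(pred_masks, hek_masks):
    """IoU of the union of all predictions vs. the union of all reports."""
    pred_union = np.any(np.stack(pred_masks), axis=0)
    hek_union = np.any(np.stack(hek_masks), axis=0)
    return iou(pred_union, hek_union)
```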

On the right, we present the box plots of IoU pairwise for 30 images, along with the IoU batch for each image. These images are selected randomly, and the limited number of images keeps the visualization legible. While the degree of similarity to Bernasconi's detections should not be taken as the objective, the plot shows that our model is overall in agreement with HEK's reports, with IoU pairwise averaging 0.67, slightly above the average IoU batch of 0.59.

To obtain a better insight into these results, let us look into a few specific cases. One interesting case is the box plot corresponding to the image id ending in '4522', which shows a relatively large interquartile range for IoU pairwise. Both HEK's reports and our segmentations on this particular image are shown on the right: several small dark regions are all connected with artificial bridges in HEK's reports to form a filament channel, whereas our segmentations avoid this.

Although this is simply a design choice, in our box-plot comparison it is reflected as a high variance of IoU, and it should not be interpreted as an inaccurate segmentation. Another interesting case corresponds to the image id ending in '1700', where IoU batch is significantly low (i.e., less than 0.2). Tracking down the corresponding observation, shown below, reveals the reason: the original BBSO observation produced a defective image, based on which any segmentation is spurious, hence the very low alignment of segmentations.


To summarize, we compare the performance of Mask R-CNN on the same instances for which we discussed the current filament-detection module's shortcomings above. In the figure below, the blue regions in the first row correspond to Bernasconi's segmentations. The second row contains the original images and the filaments as they are. The red regions in the last row map to the filaments as Mask R-CNN detects them. It is important to note that these 9 instances (among many) were handpicked to highlight the known issues with the current module, but the selection was made completely unaware of Mask R-CNN's performance. It seems that Mask R-CNN does an overall good job in the detection of filaments when compared to Bernasconi's module. However, the main issue with this model is its coarse segmentation. This is in fact a serious weakness, but we believe that in our future work we will be able to tweak the model toward finer segmentation.

Our trained model, although still far from being a robust, operation-ready piece of software, encourages us to further explore deep neural networks for the detection of solar features. One avenue for future work is to test this model on data from other observatories in the GONG full-disk H-α network; we would like to see how the performance of Mask R-CNN compares to its segmentation of BBSO images, given that different instruments produce slightly different but comparable observations. In parallel, we plan to investigate the possibility of increasing the resolution of the segmentation, taking into account the trade-off between adding more noise to the detected regions and the possibility of characterizing the filaments' structure, i.e., the barbs and the spine, with a granularity comparable to the size of a pixel.