
Daily Papers

by AK and the research community

Sep 2

The Stellar Populations and Rest-Frame Colors of Star-Forming Galaxies at $z \approx 8$: Exploring the Impact of Filter Choice and Star Formation History Assumption with JADES

Our understanding of the physical properties of star-forming galaxies during the Epoch of Reionization (EoR, at $z > 6$) suffers from degeneracies among the apparent properties of the stars, the nebular gas, and the dust. These degeneracies are most prominent with photometry, which has insufficient (1) spectral resolution and (2) rest-frame spectral coverage. We explore ways to break these degeneracies with a sample of $N = 22$ high-redshift star-forming galaxies at $7 < z_{\mathrm{phot}} \leq 9$, using some of the deepest existing imaging from JWST/NIRCam and JWST/MIRI with JADES. Key to this study is the imaging from JWST/MIRI at 7.7 $\mu$m, which provides coverage of the rest-frame $I$-band at the observed redshifts. We infer stellar population properties and rest-frame colors using a variety of filter sets and star formation history assumptions to explore the impact of these choices. Evaluating these quantities both with and without the 7.7 $\mu$m data point shows that dense spectral coverage with JWST/NIRCam (eight or more filters, including at least one medium band) can compensate for the lack of rest-frame $I$-band coverage for the vast majority ($\approx 80\%$) of our sample. Furthermore, these galaxy properties are most consistently determined by assuming the delayed-$\tau$ star formation history, which yields the smallest offsets, and the smallest scatter around those offsets, when JWST/MIRI is included. Within extragalactic surveys like JADES and CEERS, our findings suggest that robust characterization of the stellar population properties and rest-frame colors of high-redshift star-forming galaxies is possible with JWST/NIRCam alone at $z \approx 8$.
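The delayed-$\tau$ star formation history referenced above has the standard parametric form $\mathrm{SFR}(t) \propto (t/\tau^2)\,e^{-t/\tau}$, which rises from zero, peaks at $t = \tau$, and then decays exponentially. A minimal sketch of that functional form (the function name, units, and normalization are illustrative, not taken from the paper's fitting code):

```python
import numpy as np

def delayed_tau_sfr(t, tau, norm=1.0):
    """Delayed-tau star formation history: SFR(t) = norm * (t / tau**2) * exp(-t / tau).

    `t` and `tau` are in the same units (e.g. Gyr); `norm` sets the overall
    mass scale. The SFR rises linearly at early times, peaks at t = tau,
    and declines exponentially afterwards."""
    t = np.asarray(t, dtype=float)
    return norm * (t / tau**2) * np.exp(-t / tau)

# Illustrative evaluation: a tau = 0.3 Gyr history sampled over 2 Gyr.
t = np.linspace(0.0, 2.0, 2001)   # Gyr
tau = 0.3
sfr = delayed_tau_sfr(t, tau)
t_peak = t[np.argmax(sfr)]        # peak star formation occurs at t = tau
```

The single free timescale $\tau$ is what SED-fitting codes vary (together with age and normalization) when this history is assumed.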

Transfer learning for galaxy feature detection: Finding Giant Star-forming Clumps in low redshift galaxies using Faster R-CNN

Giant Star-forming Clumps (GSFCs) are regions of intense star formation that are commonly observed in high-redshift (z>1) galaxies, but their formation and role in galaxy evolution remain unclear. High-resolution observations of low-redshift clumpy galaxy analogues are rare and restricted to a small set of galaxies, but the increasing availability of wide-field galaxy survey data makes the detection of large clumpy galaxy samples increasingly feasible. Deep Learning (DL), and in particular convolutional neural networks (CNNs), has been successfully applied to image classification tasks in astrophysical data analysis. However, one application of DL that remains relatively unexplored is automatically identifying and localising specific objects or features in astrophysical imaging data. In this paper we demonstrate the feasibility of using DL-based object detection models to localise GSFCs in astrophysical imaging data. We apply the Faster R-CNN object detection framework (FRCNN) to identify GSFCs in low-redshift (z<0.3) galaxies. Unlike other studies, we train different FRCNN models not on simulated images with known labels but on real observational data collected by the Sloan Digital Sky Survey Legacy Survey and labelled by volunteers from the citizen science project 'Galaxy Zoo: Clump Scout'. The FRCNN model relies on a CNN component as a 'backbone' feature extractor. We show that CNNs that have been pre-trained for image classification on astrophysical images outperform those pre-trained on terrestrial images. In particular, we compare a domain-specific CNN, 'Zoobot', with a generic classification backbone and find that Zoobot achieves higher detection performance and requires smaller training sets to do so. Our final model produces GSFC detections with a completeness and purity of ≥0.8 while being trained on only ~5,000 galaxy images.
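The completeness and purity quoted above correspond to detection recall and precision under a box-matching criterion. A minimal sketch of how such scores are typically computed from predicted and ground-truth bounding boxes (the IoU threshold of 0.5 and the greedy matching are illustrative assumptions, not the paper's exact evaluation procedure):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def completeness_purity(pred, truth, iou_thresh=0.5):
    """Completeness = fraction of true clumps recovered (recall);
    purity = fraction of detections matching a true clump (precision).
    Each ground-truth box can be matched by at most one detection."""
    matched_truth = set()
    matched_pred = 0
    for p in pred:
        best, best_j = 0.0, None
        for j, t in enumerate(truth):
            if j in matched_truth:
                continue
            v = iou(p, t)
            if v > best:
                best, best_j = v, j
        if best >= iou_thresh:
            matched_truth.add(best_j)
            matched_pred += 1
    comp = len(matched_truth) / len(truth) if truth else 1.0
    pur = matched_pred / len(pred) if pred else 1.0
    return comp, pur

# Illustrative example: one of two true clumps recovered, one spurious detection.
truth = [(0, 0, 10, 10), (20, 20, 30, 30)]
pred = [(1, 1, 10, 10), (50, 50, 60, 60)]
comp, pur = completeness_purity(pred, truth)  # 0.5, 0.5
```

A model meeting the paper's reported thresholds would yield both values ≥ 0.8 on the held-out test sample.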

Exploring Geometry of Blind Spots in Vision Models

Despite the remarkable success of deep neural networks in a myriad of settings, several works have demonstrated their overwhelming sensitivity to near-imperceptible perturbations, known as adversarial attacks. On the other hand, prior works have also observed that deep networks can be under-sensitive, wherein large-magnitude perturbations in input space do not induce appreciable changes to network activations. In this work, we study in detail the phenomenon of under-sensitivity in vision models such as CNNs and Transformers, and present techniques to study the geometry and extent of "equi-confidence" level sets of such networks. We propose a Level Set Traversal algorithm that iteratively explores regions of high confidence with respect to the input space using orthogonal components of the local gradients. Given a source image, we use this algorithm to identify inputs that lie in the same equi-confidence level set as the source image despite being perceptually similar to arbitrary images from other classes. We further observe that the source image is linearly connected by a high-confidence path to these inputs, uncovering a star-like structure for level sets of deep networks. Furthermore, we attempt to identify and estimate the extent of these connected higher-dimensional regions over which the model maintains a high degree of confidence. The code for this project is publicly available at https://github.com/SriramB-98/blindspots-neurips-sub
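The core idea of traversing a level set by stepping along components orthogonal to the local gradient can be illustrated on a toy problem. The sketch below uses a simple quadratic "confidence" function whose level sets are circles; the function names, step sizes, and the Newton-style correction back onto the level set are illustrative assumptions, not the paper's actual Level Set Traversal implementation:

```python
import numpy as np

def confidence(x):
    # Toy stand-in for a network's confidence; its level sets are circles.
    return float(x @ x)

def grad_confidence(x):
    # Analytic gradient of the toy confidence function.
    return 2.0 * x

def level_set_traversal(x, target, steps=300, eta=0.02):
    """Move `x` toward `target` while staying on the level set of `confidence`.

    Each step takes the component of (target - x) orthogonal to the local
    gradient, then applies a small first-order correction back onto the
    original confidence level."""
    c0 = confidence(x)
    for _ in range(steps):
        g = grad_confidence(x)
        d = target - x
        d_perp = d - (d @ g) / (g @ g) * g     # orthogonal component of the step
        x = x + eta * d_perp
        g = grad_confidence(x)
        x = x - (confidence(x) - c0) / (g @ g) * g  # project back to level c0
    return x

# Traverse the unit circle from (1, 0) toward (0, 1): confidence stays ~1
# throughout, while the point moves steadily closer to the target.
x0 = np.array([1.0, 0.0])
target = np.array([0.0, 1.0])
x_final = level_set_traversal(x0.copy(), target)
```

For a real network the gradient would come from backpropagation rather than a closed form, but the geometry is the same: motion orthogonal to the gradient leaves the confidence unchanged to first order, which is what makes the equi-confidence paths in the paper possible.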