In a Tortoiseshell: In this excerpt from his Junior Paper on supernova detection, T. Lucas Mäkinen uses key terms to scaffold his discussion of a new technique for minimizing biases in samples of detected supernovae. By maintaining a clear focus and providing a sequence of consistent and well-defined key terms, Lucas gives the reader a framework through which to understand the technical contents of his paper, allowing even a lay reader to finish the paper with a strong impression of Lucas’s work and its significance.
Excerpt / T. Lucas Mäkinen
SNIa are extremely rare events. Within our galaxy, for instance, this specific type of explosion occurs only about once per century [5]. As a result, we need to look further and further out into space to catch a glimpse of SNIa in distant galaxies. As we increase the depth of our surveys, however, it becomes trickier to identify SNIa. There are two steps to adding a Type Ia supernova to a survey. First, the event must be bright enough to be picked up by a scanning telescope; we call this step detection. Then, the supernova must have a distinct enough signal to be selected for spectroscopic follow-up; we call this step selection. This introduces an intuitive bias into our samples of SNIa: brighter supernovae are easier to see than dimmer ones. Specifically, within a given redshift bin, brighter supernovae are more likely to be selected, so in a given survey the high-redshift bins are likely to contain a disproportionate number of bright supernovae. This bias was first recorded by Malmquist in 1922 [27]. The precise selection criteria depend on the telescope used in the survey and the procedures followed to obtain the data.
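To make the effect concrete, here is a minimal Python sketch (not part of the original analysis) of how a magnitude limit produces this kind of selection bias. The magnitude limit, intrinsic scatter, and crude low-redshift distance modulus are all illustrative assumptions.

```python
# Toy illustration (illustrative assumptions only): near a survey's magnitude
# limit, intrinsically brighter SNIa are preferentially selected, so the
# selected sample in each high-redshift bin is brighter on average than the
# underlying population.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.uniform(0.05, 1.0, n)       # redshifts of a mock SNIa population
M = rng.normal(-19.3, 0.4, n)       # absolute magnitudes with intrinsic scatter (assumed)
mu = 5 * np.log10(z) + 43.2         # crude low-z distance modulus ~ 5*log10(cz/H0) + 25
m = M + mu                          # apparent magnitude

m_limit = 23.5                      # hypothetical survey magnitude limit
selected = m < m_limit              # "detection + selection" collapsed into one cut

for lo, hi in [(0.1, 0.3), (0.4, 0.6), (0.7, 0.9)]:
    in_bin = (z > lo) & (z < hi)
    print(f"z in ({lo}, {hi}): selected {selected[in_bin].mean():5.1%}, "
          f"<M> all = {M[in_bin].mean():.2f}, "
          f"<M> selected = {M[in_bin & selected].mean():.2f}")
```

In the low-redshift bins essentially everything passes the cut, while in the highest bin the surviving events are noticeably brighter, on average, than the full population.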
Historically, this bias in supernova populations was dealt with in an ad-hoc manner. Looking at a completed survey of SNIa data plotted over redshift, as in Figure 9, one would note that the number of selected SNIa falls off beyond a given redshift. Some of the observed supernovae would then be assigned a higher, corrected distance modulus, since as observers we’d expect to see more of these events the deeper we look into space.
More recently, this “ad-hoc” correction has been enhanced with new SNIa simulations. Using the SNANA simulation package, for example, it is possible to simulate the selection process of a given survey, like the Dark Energy Survey (DES) [2]. This means that, provided the simulations take both deterministic and random selection criteria into account, a bias correction parameter can be quantified and applied to a real sample of DES data.
To do this, a bulk population of SNe is generated. The underlying cosmological parameters are known a priori, so the true, simulated distance modulus, μ_true, can be computed for each supernova.
Then, selection criteria and light-curve fitting are applied, with only a small subset of SNIa passing the cuts, just as in a real survey. For this subset of “selected” supernovae, the fitted distance modulus, μ_fit, is estimated from the “observed” spectral characteristics at each supernova’s redshift. Since we know the true and fitted moduli of each supernova in the simulation, the bias in each redshift bin is quantified as:

δμ = 〈 μ_fit − μ_true 〉,
where 〈·〉 denotes an average over all supernovae in a given redshift bin. This bias term, fitted to a large number of simulations, can then be incorporated into a cosmological inference mechanism, first described by Mosher et al. (2014). Once the correction term is computed via simulation and incorporated into an inference framework, the correction for selection effects can be applied to real DES survey data by subtracting Equation 64 from the apparent corrected magnitude, Equation 51 [28].
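As a sketch of how this correction could be computed and applied in practice, the snippet below bins simulated supernovae in redshift, averages μ_fit − μ_true in each bin, and subtracts the result from the fitted moduli of real data. The array and function names are illustrative placeholders, not SNANA’s interface.

```python
# Sketch of the simulation-based bias correction described above; names are
# illustrative placeholders and do not correspond to SNANA outputs.
import numpy as np

def bias_per_bin(z_sim, mu_fit_sim, mu_true_sim, z_edges):
    """delta_mu per redshift bin: <mu_fit - mu_true> over selected simulated SNIa."""
    idx = np.digitize(z_sim, z_edges) - 1
    delta = np.full(len(z_edges) - 1, np.nan)
    for b in range(len(delta)):
        in_bin = idx == b
        if in_bin.any():
            delta[b] = np.mean(mu_fit_sim[in_bin] - mu_true_sim[in_bin])
    return delta

def apply_bias_correction(z_data, mu_fit_data, z_edges, delta):
    """Subtract the simulated bias from the fitted moduli of real SNIa, bin by bin."""
    idx = np.clip(np.digitize(z_data, z_edges) - 1, 0, len(delta) - 1)
    return mu_fit_data - delta[idx]
```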
Significant drawbacks to this procedure have been highlighted in recent years. For instance, it has been shown that bluer SNe are more likely to be selected than redder events [26], [30]. Since the standard ad-hoc adjustment to the distance modulus doesn’t take spectral sources of selection bias into account, this color selection bias often pushes the inferred cosmological parameters to higher values, as demonstrated by Kessler & Scolnic (2017).
Instead of correcting the distance modulus as a function of redshift, Kessler & Scolnic (2017) proposed mitigating the selection bias as a function of the SALT2 spectral attributes: the fitted magnitude, stretch, and color. They proposed replacing the single μ bias correction term with a set of spectral bias correction terms, one for each spectral variable, again trained on simulations.
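The same binning idea carries over directly. Below is a hedged sketch of per-variable bias terms; the dictionary-of-arrays layout and the variable keys are my assumptions for illustration, not the published implementation.

```python
# Sketch of per-variable bias terms in the SALT2 observables (fitted magnitude,
# stretch, color) instead of a single distance-modulus correction; the data
# layout is assumed for illustration.
import numpy as np

def spectral_bias_per_bin(z_sim, fitted, true, z_edges):
    """Return {variable: delta(z)} with delta = <fitted - true> in each redshift bin.
    `fitted` and `true` are dicts of arrays keyed by, e.g., 'mB', 'x1', 'c'."""
    idx = np.digitize(z_sim, z_edges) - 1
    n_bins = len(z_edges) - 1
    bias = {}
    for var in fitted:
        delta = np.full(n_bins, np.nan)
        for b in range(n_bins):
            in_bin = idx == b
            if in_bin.any():
                delta[b] = np.mean(fitted[var][in_bin] - true[var][in_bin])
        bias[var] = delta
    return bias
```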
There are still issues with this spectral ad-hoc correction. First, even though a given sample of SNIa is biased, each individual SNIa is not. The standard ad-hoc approach, however, corrects each SNIa’s observed data individually, resulting in a statistical inconsistency between what’s being assumed and reality. The way around this inconsistency is to weight each SNIa in a sample by a population-level selection probability that depends on the SNIa’s observed attributes.
Figure 9: Ad-hoc correction to distance moduli of a given sample of SNIa over redshift. 98 of 980 simulated, observed SNIa are shown in green, while 451 of 4500 simulated, missed SNIa are shown in red. Blue crosses show observed SNIa corrected via the ad-hoc method to a higher distance modulus. The bias correction term corrects the fitting expression for apparent magnitude (shown on the y-axis). Adapted from Chen et al. (in prep) [29].
Within a Bayesian framework, we have the flexibility to tag each supernova in our dataset with such a weight. Rubin et al. (2015) presented a selection function which assigns a probability dependent on the apparent magnitude, stretch, and color attributes of an SN observation in a survey. This way, the selection bias is incorporated into the probabilistic model in a forward manner, instead of correcting the data itself.
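One way to picture such a selection function is a smooth probability that rolls off near the survey’s magnitude limit and shifts slightly with stretch and color; each supernova’s weight then enters the likelihood directly rather than altering its data. The logistic form and every coefficient below are illustrative assumptions, not the fitted model of Rubin et al.

```python
# Illustrative forward-model selection weight: P(selected | m, x1, c).
# The logistic shape and all coefficients are assumptions for illustration,
# not the published selection function.
import numpy as np

def selection_probability(m, x1, c, m_half=23.5, slope=4.0, a_x1=0.1, a_c=-1.0):
    """Probability of selection: rolls off near the magnitude limit, with bluer
    (c < 0) and broader (x1 > 0) events slightly easier to select."""
    m_eff = m - a_x1 * x1 - a_c * c   # stretch and color shift the rollover point
    return 1.0 / (1.0 + np.exp(slope * (m_eff - m_half)))
```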
Notes
SNe – Supernovae
SNIa – Type Ia supernova
SNANA – SuperNova ANAlysis software, developed at the University of Chicago
Author Commentary / T. Lucas Mäkinen
When I signed up to study trumpet at the Royal College of Music in the fall of my junior year, I had no idea how invested I would become in my physics junior paper. It took a lot of convincing to get the Physics Department to back my decision, but after a flurry of emails the spring before, I got picked up by Dr. Roberto Trotta at Imperial College London a block away from the conservatory.
Probably the toughest part of this project was picking up the research where previous students had left off. The probabilistic model I worked with represents four years of effort between my adviser, his former PhD student, and several summer students. What it meant for me was jumping into the deep end from the get-go.
Keeping afloat in the sea of new (to me) probability notation, I found that the best way to sift through it all was to write my own summaries of the literature as I went. Professor Trotta and I scheduled weekly meetings where I’d present the pieces I was working on, as well as my questions on the papers I read. The Selection Effects excerpt was probably the part of the paper I wrote and rewrote (and enjoyed) the most. Outside of research, Professor Trotta is an avid science communicator. His knack for clear, concise explanations inspired me to gear my explanation of selection effects towards a more general audience. I found that jotting down easy-going explanations on tube rides between rehearsals made it easier to avoid getting bogged down in notation. Since probabilistic modelling in supernova cosmology is relatively new, I found quite a bit of inconsistency in notation in the literature. Talking through the theory and my notes with Professor Trotta each week made me conscious of my word choice, especially when defining keywords in my own way like “selection” versus “detection”.
There’s a great analogy to musical performance there too — most people don’t get much out of reading notation on a score. It’s up to the musician to communicate what the composer was after. Any practiced musician can produce the notes just fine — it takes a seasoned player to inflect precise feeling within the same stretch of score.
Editor Commentary / Isabella Khan
One of the most difficult tasks in writing an expository paper in a scientific field is deciding how to explain two things to an intelligent but technically uninitiated reader: first, what you accomplished in your work, and second, why you conducted your work as you did. By concentrating on a limited number of key terms and concepts, Lucas is able to describe a newly developed technique for correcting SNIa selection biases, and effectively convince the reader of its significance. The marvel of this excerpt is its clarity and unity of focus. Even if a reader is left unsure of how something is done—which is perfectly understandable, especially in a paper as technical as this one—the reader is never unsure of what has been done, and why.
A technical paper of any kind—be it in philosophy or astrophysics—places several demands on the reader, requiring him or her to arrive prepared, if not with background knowledge, then at least with a certain readiness to be confused. The tendency when one encounters a slew of unfamiliar terms is to throw up one’s hands and say, “I don’t understand,” but it is extremely important to differentiate between not understanding everything and not understanding anything. The first is often true, even for seasoned professionals working in their own field. The second is rarely true, even for a lay reader.
When reading a technical paper for the first time, it is still possible to obtain a meaningful impression of the topics discussed, their structure, and their significance. The role of the author is to arrange his or her ideas in such a way that the reader can form as clear an impression as possible. At every point in the section of his paper excerpted above, Lucas defines key terms—the detection step vs. the selection step, for instance, and the standard “ad-hoc” method of bias-correction used prior to the development of SNANA. These key terms then serve as guides, telling the reader which concepts are most significant and providing a mechanism by which the reader can measure his or her understanding. Since he begins the section from a completely naive perspective, it is possible to work one’s way through the paper from definition to definition: if, two pages in, you find yourself confused, you can go back to the last term you understood and work down again from there. Thus, by scaffolding his paper with clear exposition and well-defined key terms, Lucas makes it possible for a lay reader to come away from his Junior Paper understanding a great deal about his work.
Works Cited
[2] R. Kessler et al., Testing Models of Intrinsic Brightness Variations in Type Ia Supernovae and Their Impact on Measuring Cosmological Parameters, The Astrophysical Journal 764, 48 (2013), arXiv:1209.2482.
[5] B. Ryden, Introduction to Cosmology (2003).
[26] R. Kessler and D. Scolnic, Correcting Type Ia Supernova Distances for Selection Biases and Contamination in Photometrically Identified Samples, The Astrophysical Journal 836, 56 (2017), arXiv:1610.04677.
[27] K. G. Malmquist, On Some Relations in Stellar Statistics, Meddelanden fran Lunds Astronomiska Observatorium Serie I 100, 1 (1922).
[28] J. Mosher et al., Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models, The Astrophysical Journal 793, 16 (2014), arXiv:1401.4065.
[29] H. Z. Chen, E. Tey, R. Trotta, and D. Van Dyk, (in prep).
[30] S. R. Hinton et al., Steve: A Hierarchical Bayesian Model for Supernova Cosmology, arXiv e-prints (2018), arXiv:1811.02381.
[31] D. Rubin et al., Precision Measurement of the Most Distant Spectroscopically Confirmed Supernova Ia with the Hubble Space Telescope, The Astrophysical Journal 763, 35 (2013).