My research focuses on the search for transiting exomoons. What exactly does that mean?
Time-domain astronomy is a special kind of observational work, where we’re interested in the changes we can observe in astronomical objects over time. One of the most successful applications of this approach in recent years has been in the search for exoplanets. The Kepler spacecraft, for example, spent 4.5 years monitoring the brightness of something like 200,000 stars in the Cygnus constellation, looking for small changes in the light we receive from those stars. The idea is, when an exoplanet passes in front of its parent star from our point of view, we see a small dip in the intensity of starlight we’re receiving, maybe 1% or so. It’s very small.
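To get a feel for the numbers: the depth of that dip is roughly the ratio of the planet's disk area to the star's, i.e. (R_planet / R_star)². A quick sketch (the radii are standard textbook values; the function name is just for illustration):

```python
import numpy as np

# Approximate radii, in kilometers.
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371
R_SUN_KM = 695_700

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional dip in starlight during a central transit:
    the ratio of the planet's projected disk area to the star's."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-size planet blocks about 1% of a Sun-like star's light;
# an Earth-size planet blocks only about 0.008%.
print(f"Jupiter analog: {transit_depth(R_JUPITER_KM, R_SUN_KM):.4f}")
print(f"Earth analog:   {transit_depth(R_EARTH_KM, R_SUN_KM):.6f}")
```

That factor-of-100 difference between a Jupiter and an Earth is exactly why moons, which are smaller still, are so hard to see.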
Now, if that planet has a moon, we will see two dips — one dip from the planet, and another one from the moon. It’s that easy! Except… it’s not. Moons are expected to be quite small — smaller than the Earth — which means they’re often “lost in the noise.” Basically, they’re very difficult to detect. But I’m working hard to find them!
Using survey data from space telescopes like Kepler and TESS, I try to identify exoplanets that appear to be good exomoon hosts. Typically the data from these spacecraft aren’t quite precise enough to see the moon signals very well. But when we have a target we think is particularly promising, we can try to observe it again with a bigger telescope, like Hubble or (someday) JWST, to see if we can confirm the presence of a moon. It’s a big job! And we have to be extremely careful. But someone’s got to do it, and I’m having a blast.
Here’s some of the stuff I’ve been working on:
Machine learning for the identification of exomoon candidates in Kepler
Even with the advent of TESS, Kepler remains an incredible dataset, and we have yet to squeeze everything we can out of it in the search for exomoons. This is chiefly because, until now, the search for exomoons has been carried out by first identifying a subset of targets that were a priori attractive moon hosts, and then carrying out a computationally expensive moon fit.
But there’s another way, and that’s by leveraging the power of machine learning algorithms. Recent work by Shallue and Vanderburg (among others) demonstrated that transiting planets could be distinguished from a variety of false positive scenarios with high accuracy by employing a convolutional neural network (CNN) to analyze the transits. Playing off this idea, I am now bringing the power of CNNs to the exomoon search. To that end, I’ve trained the network on ~200,000 artificial light curves and achieve up to 95% accuracy for moons with sufficient SNR. Now I am running every transit of every KOI through the pipeline to identify candidate exomoon signals. The most promising candidates will be flagged for follow-up with a full moon fit.
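The training set itself is built from simulated transits. A toy version of one artificial light curve, with a box-shaped planet dip plus a smaller, offset moon dip, might look like the following (all of the parameters and the function name here are illustrative, not the ones used in the actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

def artificial_light_curve(n_points=500, planet_depth=0.01,
                           moon_depth=0.002, moon_offset=80,
                           noise=0.001):
    """Toy training example: a box-shaped planet transit plus a
    shallower, offset moon transit, with Gaussian photometric noise."""
    flux = np.ones(n_points)
    mid = n_points // 2
    flux[mid - 25:mid + 25] -= planet_depth                        # planet
    flux[mid + moon_offset:mid + moon_offset + 15] -= moon_depth   # moon
    return flux + rng.normal(0.0, noise, n_points)

lc = artificial_light_curve()
```

Generate a few hundred thousand of these (with and without moons, over a range of geometries and noise levels) and you have labeled data a CNN can learn from.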
I intend to extend this work as a postdoc, expanding the search to the K2 and TESS datasets. This is a non-trivial extension, as the unique systematics of both missions will require entirely new training sets, and the shorter time windows will mean fewer long-period planets (which we think are the most attractive moon hosts), and fewer transits with which to work, even as the data volume itself will be far larger. Still, the experience I’ve gained with the Kepler data will allow me to hit the ground running, applying lessons already learned in this first stage to streamline the process going forward.
The exomoon candidate Kepler-1625b-i
In the summer of 2016 I was working on a project to characterize the population of exomoons in the Kepler data (described in more detail below). In the course of that work I identified a single planet, Kepler-1625b, that showed the kind of signatures in the data we expect from an exomoon. We eventually determined that it was a very promising candidate — potentially the first transiting exomoon to be discovered — and I proposed to observe the October 2017 transit of the planet using the Hubble Space Telescope. And we got the time! Some 40 hours on target, which in many ways was an unprecedented observation.
In October 2018 we published the results of that observation, concluding that there was “evidence for a large exomoon orbiting Kepler-1625b”. When we say large, we mean it — the moon appears to be about the size of Neptune! That’s something hardly anyone had anticipated. Still, it was our professional opinion that because the analysis was so challenging, and because the claim is so extraordinary, it was premature to claim a discovery beyond a shadow of a doubt. We continue to believe an exomoon is the best explanation for the data in hand, but we look forward to future observations, particularly transit timing variation and radial velocity measurements, so that we may understand more about this system and, hopefully, establish with greater certainty whether the moon is real. If it is, the theorists have got a new puzzle!
Tying up loose ends for the exomoon candidate host Kepler-1625b
In April 2019 we put out a second paper on the exomoon candidate host Kepler-1625b, in which we explored in greater detail some of the possible alternative explanations for the signals we saw in the data. In the months since our first paper we had lots of discussions with colleagues, who had all manner of questions for us, so it was a good opportunity to formally explore some of these lingering questions.
In particular, we explored the use of additional, more flexible detrending models to see how they would do. As expected, a sufficiently flexible model is able to remove the signal we attribute to the moon, but this is not favored by the Bayesian evidence. We also explored whether detrending with respect to the target’s centroid was better than detrending with respect to time, but found no data-driven impetus for adopting such a model.
We also explored whether an additional transiting planet in the system could be responsible for the transit-like dip we attribute to the moon, and computed the probability to be less than 0.75%. In addition, we examined the activity of the star to determine whether stellar activity could mimic the transit-like signal. That too was found to be improbable.
Finally, we addressed work by another team (Kreidberg et al.), who undertook an independent analysis of the HST data and did not recover the moon signal. For the sake of saving space here, have a look at this Twitter thread for a discussion of what their work means for the status of the moon candidate (or just read our paper!).
On the population of exomoons in Kepler
My first project at Columbia with David Kipping focused on characterizing the population of exomoons in the Kepler data. Now you might ask: if it’s so hard to find one exomoon, how on Earth do you say anything about the population of exomoons?
It comes down to what’s called the “orbital sampling effect”. The idea is, if you stack (or “phase-fold”) many transits of a moon-hosting exoplanet and look at the time-averaged signal, any moons present will show up on the wings of the planet’s transit as missing starlight. It turns out, though, that you need lots of transits of the planet to see this effect, and this makes things complicated, because the planets for which we have observed lots of transits are all very close to their host star and we think these planets are unlikely to host moons.
The good news is, we can play this same game by stacking transits from many different planets together, and looking again at the wings of the planet’s transit to see if there are moons in the sample suppressing the starlight. So that’s what David and I did: using 284 planets amounting to around 4,000 transits, we stacked them all and looked at the time-averaged signal. (Of course, this is more complicated than I’m making it sound!)
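In spirit, the stacking step is just an average over aligned transits, which beats the photometric noise down by roughly the square root of the number of transits. A minimal sketch, with made-up numbers rather than our actual sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_transits(light_curves):
    """Average many aligned transit light curves.

    Each input is assumed already phase-folded so the planet's transit
    midpoint sits at the center of the array; a population of moons
    would appear as a small average flux deficit in the transit wings.
    """
    return np.mean(np.vstack(light_curves), axis=0)

# Illustrative: 4000 noisy box-shaped transits. The stacked curve has
# ~sqrt(4000) ≈ 63x lower noise than any single one.
n_points, depth = 400, 0.01
curves = []
for _ in range(4000):
    flux = np.ones(n_points) + rng.normal(0, 0.005, n_points)
    flux[180:220] -= depth
    curves.append(flux)

stacked = stack_transits(curves)
```

With per-point noise reduced this far, even a sub-millimagnitude average deficit in the wings becomes measurable — that's the lever that lets a population statement come out of individually invisible moons.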
The result? We found a very low occurrence rate of moons in the data, suggesting that moons are quite rare in the inner regions of exoplanetary systems (that is, interior to about 1 AU). Now, as the image above suggests, there are some degeneracies here, particularly with respect to the size of the moons in the sample and their occurrence rate. We find a very low occurrence rate of very large moons, and that is well constrained. By contrast, there could be a lot of small moons in the sample that are simply too small for us to see, and that end of things is relatively poorly constrained.
In any case, we interpret the results as corroboration of theoretical work, suggesting that indeed, moons are going to be rare at small semimajor axes, and if we want to find them, we’ll have to probe greater distances from the host star, potentially beyond the snow line.
A cloaking device for transiting planets
This was a fun little side project I worked on with David during my first year. Anyone who knows David knows he’s full of ideas, sometimes radical and very-outside-the-box ideas. One weekend he asked himself, would it be possible to use a laser to alter the shape of a planet’s transit? How much power would you need to fill in all the missing starlight during a planet’s transit? Or might you somehow modify your planet’s transit to send signals to another system? After some back-of-the-envelope calculations he determined that not only was it possible, but that it was totally within our present technological capabilities. So naturally, if we’re able to do it, any sufficiently sophisticated extraterrestrial civilization could do it, and this is something we might therefore look for! David asked if any of us might like to work on it with him, and I thought the idea was rather interesting so I signed on.
My work on the project focused chiefly on modifying a planet’s transmission spectrum. What if, instead of a single-color laser, we beamed lasers at many different frequencies of light? Could we alter the transmission spectrum to, say, hide the presence of the atmosphere, or biosignatures in the atmosphere? The answer was yes; it’s astonishingly easy to do.
The paper got a sizeable amount of media attention, but unfortunately most media outlets were preoccupied with one aspect of it: the notion that we could hide ourselves from aliens. Now, that’s technically true (though this hides just one aspect of the planet, its transit), and for our part I’m afraid we weren’t quite as careful as we could have been about the way we talked about it to the media. Really, though, the idea was much more interesting from the perspective of, could this be something other civilizations might do? If it is — and we found that it was — you could go look for it.
It didn’t help at all that the paper came out on April 1st. Some people thought it was a joke. Well, it wasn’t; we were serious about it… but I guess it wasn’t the kind of thing super serious scientists do with their super serious time. I learned a lot though, both about transmission spectroscopy and, importantly, about how to talk to the media. There’s a real appetite for cool astronomy-related news, and this certainly garnered a lot of attention, but we do have to be exceptionally careful about the way we talk about some things, particularly when it comes to the search for extraterrestrial life.
Machine learning with an application to gyrochronology
All Columbia graduate students are obligated to work with two separate advisors in their first two years. When I arrived at Columbia and met David I was instantly drawn to the search for exomoons — a topic that had greatly intrigued me as an undergraduate — but before that I had planned to do my thesis work with Marcel Agüeros on measuring stellar ages through their rotation periods, a process known as gyrochronology. It was this project which I originally proposed for my NSF GRFP application.
In particular, I was interested in seeing what we might be able to measure using the exceptionally sparse time-domain photometry from the Palomar Transient Factory (PTF), and what we might be able to achieve with the Zwicky Transient Facility (ZTF) and ultimately the Large Synoptic Survey Telescope (LSST). These data would not be nearly as well suited for gyrochronology as Kepler, but I suspected there might be some subset of stars for which a rotation period could be extracted.
A very common approach to measuring rotation periods is through the application of Lomb-Scargle periodograms. The problem is, these periodograms will always have some power at different frequencies, and the sparser the dataset, the more difficult these periodograms are to interpret.
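For a concrete sense of the method, here is a minimal Lomb-Scargle example on irregularly sampled data using `scipy.signal.lombscargle`; the sampling cadence, period, and noise level are invented for illustration:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Sparse, irregular sampling, loosely in the spirit of a ground-based
# survey: a few hundred epochs scattered over ~3 years.
t = np.sort(rng.uniform(0, 1000, 300))        # days
true_period = 12.3                            # days
y = np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.3, t.size)
y -= y.mean()                                 # lombscargle wants centered data

# Evaluate the periodogram on a grid of trial periods (angular freqs).
periods = np.linspace(2, 50, 5000)
power = lombscargle(t, y, 2 * np.pi / periods)

best_period = periods[np.argmax(power)]
print(f"recovered period: {best_period:.2f} d")
```

This toy case recovers the injected period cleanly, but with real survey data the periodogram is littered with aliases and window-function peaks, and the sparser the sampling, the worse that gets — which is exactly the interpretation problem described above.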
To approach this problem I turned to machine learning. Machine learning has fascinated me for some time, and I thought this was the perfect application. What kinds of relationships might these codes be able to identify that humans simply cannot (chiefly due to the high dimensionality of the problem)? I asked a simple question: for a given light curve, can a reliable rotation period be extracted, or not?
After quite a bit of trial and error — a hallmark of machine learning efforts, I’ve found — we started to see some positive results. I worked with both neural networks and random forests, supplying a series of metrics for a series of simulated light curves (we needed a ground truth answer to supply the algorithms), and got comparable accuracy from them both.
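As a cartoon of that setup: compute a handful of summary metrics per light curve, label each one as "period recoverable" or not using the simulated ground truth, and train a classifier. A sketch with scikit-learn, where the two features are entirely synthetic stand-ins for the real metrics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical per-light-curve metrics, e.g. peak periodogram power and
# a signal-to-noise estimate. Label 1 = rotation period recoverable.
n = 2000
recoverable = np.column_stack([rng.normal(0.8, 0.1, n),
                               rng.normal(10.0, 2.0, n)])
unrecoverable = np.column_stack([rng.normal(0.3, 0.1, n),
                                 rng.normal(3.0, 2.0, n)])
X = np.vstack([recoverable, unrecoverable])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[::2], y[::2])                   # train on half the sample
accuracy = clf.score(X[1::2], y[1::2])    # evaluate on the other half
print(f"held-out accuracy: {accuracy:.3f}")
```

The real problem is much messier than these well-separated clusters, of course, but the workflow — simulate, featurize, label, train, validate — is the same.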
I’m still very much interested in this project and would like to revisit it one day. At the end of my year working with Marcel things were looking very promising, but it was clearly going to take a lot more work to get it across the finish line and my exomoon work had to come first. I would absolutely love to see an all-sky catalog of stellar rotation periods, but now it’s looking like the (nearly) all-sky coverage of TESS and the ease with which full-frame image light curves can be extracted may make trying to do this with sparse data sets like ZTF and LSST generally less attractive.
Extinction mapping and probing gamma-ray emission from the rho Ophiuchi molecular cloud
As an undergraduate at Hunter College I did research at the American Museum of Natural History under Professor Tim Paglione. My work there focused at first on generating extinction maps of the rho Ophiuchi molecular cloud using data from 2MASS. Later on, I transitioned to using data from the Fermi space telescope to probe the gamma-ray emission from rho Oph. The motivation here was that gamma rays are emitted as cosmic rays interact with the cloud, and therefore these clouds may be used as tracers of the cosmic ray distribution around the galaxy.
Looking back I don’t know that I contributed terribly much, but it was during this project that I first learned to code, grappled with (sometimes enormous) astronomical datasets, and really cut my teeth as an observational astronomer. I guess that’s all to say, I feel so much more productive these days than I was back then, but then again, I was learning so much along the way. The work resulted in this paper, first-authored by the delightfully low-key Ryan Abrahams, who was Tim’s graduate student at the time.
Ammonia masers in the galactic center (NRAO)
In the summer of 2014 I got the chance to work at the National Radio Astronomy Observatory (NRAO) in Socorro, NM as an REU student. My advisors there were Betsy Mills, Juergen Ott, and Dave Meier, and the work focused on the identification of ammonia masers in the galactic center. It was my first time working with radio data (taken with the VLA), and I was absolutely amazed by the three-dimensional structure of the molecular clouds captured in the velocity channels.
My work focused on developing new approaches to identifying the (3,3) ammonia masers automatically. With the code I wrote I managed to recover the masers that had previously been identified, and I found several new candidates. I presented my work at the AAS meeting in Seattle that year, and worked for several months on a paper. Ultimately I felt our results would probably have some difficulty making it through the referee process, and, swamped as I was with applications to graduate school, it made sense to retire the project. But I’m extraordinarily grateful to my advisors at NRAO; I learned a great deal, and I’ll always have a special place in my heart for radio astronomy and the folks I met in Socorro.