Simulating Large Populations of Rare Binaries With POSYDON
2023 CIERA REU at Northwestern University

Abstract

A prominent roadblock in the study of binary stellar systems is efficiently generating large populations of rare binaries, such as double compact object (DCO) mergers or low-mass X-ray binaries (LMXBs). We significantly improve the efficiency of this process in the POSYDON population synthesis code (Fragos et al. 2023) by implementing the STROOPWAFEL importance sampling algorithm of Broekgaarden et al. 2019. We use this algorithm to generate samples of various rare binary systems more efficiently, enabling more detailed study of these systems. In runs of 100 thousand binaries, we find 12x and 30x improvements in sampling for LMXBs and DCO mergers, respectively. We also plan to implement more customizable, user-defined numerical filters, and we see potential for research into the progenitors of observed systems.

Introduction

Population synthesis models are crucial to our study of binary star systems, allowing us to study the evolution of these systems on timescales that we cannot observe ourselves. In doing so, they provide vital information about the evolution of what we now know to be the configuration of most massive stars in the universe. POSYDON is a next-generation population synthesis code designed for this purpose, addressing some of the main drawbacks of population synthesis through machine learning (Fragos et al. 2023). This allows POSYDON to accurately model crucial dynamics between the stellar bodies in binary systems, including realistic mass-transfer calculations and assessment of stability, internal angular-momentum transport and tides, stellar core sizes, mass-transfer rates, and orbital periods. For a more detailed discussion of POSYDON, see Fragos et al. 2023.

Within the context of binary star systems, most population synthesis models sample initial conditions for the binaries they evolve using distributions from known models of stellar masses and orbital parameters (i.e. orbital period and orbital separation). While the results of these simulations are useful in determining the proportion of different types of binaries in the universe, they are very limiting if we want to study specific classes of binaries. For instance, we may be interested in studying accretion dynamics through models of a compact object and a companion star, a configuration most commonly found in X-ray binaries (XRBs). Or we may want to study the dynamics of merger events between double compact object systems (DCOs) to understand gravitational wave data from detectors like LIGO. For the study of such specific systems, standard random sampling from these prior distributions is inefficient and wastes time generating unnecessary data.

To combat this, we use alternative sampling methods to more efficiently generate rarer target populations. STROOPWAFEL is an “Adaptive Importance Sampling” (AIS) algorithm created for this purpose (Broekgaarden et al. 2019). It was designed specifically for improved simulation of DCO mergers as gravitational wave sources, but we extend it to sample other rare binaries as well.

The STROOPWAFEL algorithm is split into three main phases: exploration, adaptation, and refinement. During the exploration phase, the algorithm randomly samples according to the known (prior) distributions of our initial conditions, identifies binary systems of interest (our target population), and records their initial conditions. Then, in the adaptation phase, it generates a new sampling distribution, called the instrumental distribution: a mixture of Gaussians centered on the initial conditions of our target binaries. Finally, in the refinement phase, it draws from this new sampling distribution, aiming to generate a larger proportion of target binary systems. A minimal sketch of the three phases is given below.
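To make the three phases concrete, here is a toy Python sketch of the loop. It is not POSYDON's actual API: `sample_prior`, `evolve_binary`, and `is_target` are hypothetical stand-ins for the prior sampler, the binary evolution call, and the target filter, and the exploration fraction is held fixed for simplicity (the real algorithm updates it adaptively, as described in the Methods section).

```python
import numpy as np

def stroopwafel_sketch(sample_prior, evolve_binary, is_target,
                       n_total, f_expl=0.2, sigma_frac=0.05, rng=None):
    """Toy exploration/adaptation/refinement loop in the spirit of STROOPWAFEL."""
    rng = rng or np.random.default_rng()

    # Exploration: draw from the prior and record the initial conditions of targets.
    n_expl = int(f_expl * n_total)
    hits = []
    for _ in range(n_expl):
        x = sample_prior(rng)                      # e.g. (m1, q, log10 P)
        if is_target(evolve_binary(x)):
            hits.append(x)
    hits = np.asarray(hits)

    # Adaptation: build the instrumental distribution, a mixture of Gaussians
    # centered on the initial conditions of the targets found so far.
    cov = np.diag((sigma_frac * hits.std(axis=0)) ** 2)

    # Refinement: draw the remaining samples from the instrumental distribution,
    # taking an equal number of draws from each Gaussian component.
    refined = [rng.multivariate_normal(hits[i % len(hits)], cov)
               for i in range(n_total - n_expl)]
    return hits, np.asarray(refined)
```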

Methods

Significant Details of the STROOPWAFEL Algorithm

The duration of the exploration phase is important in optimizing the STROOPWAFEL algorithm. If too much time is spent in this phase, we don't fully take advantage of the improved sampling, but if too little time is spent, we may miss important target regions and skew our final samples. The duration, denoted \(f_{expl}\) from here on, is quantified as a fraction of the total number of samples \(N\) we would like to generate and is calculated directly from how rare the target population is. \(f_{expl}\) is updated each time a small increment of samples is drawn from the prior distributions, based on the number of targets generated so far. The key is to account for the effect of any regions of parameter space that may contain missed targets. To do so, the parameter space is considered in two parts, described by \(z_1\) and \(z_2\): \(z_1\) is the contribution to the total rate of formation of targets (\(R_T\)) from the space sampled so far, while \(z_2\) is the contribution from targets that have not yet been discovered. \(f_{expl}\) is then optimized by minimizing the theoretical effect of \(z_2\) on the rate of formation of targets in the prior distribution. This is done by minimizing the variance in \(R_T\) due to \(z_2\) such that it is less than the uncertainty in \(R_T\) from \(z_1\). This gives us an \(f_{expl}\) of

\[f_{expl}=1-\frac{z_1(\sqrt{1-z_1}-\sqrt{z_2})}{\sqrt{1-z_1}(\sqrt{z_2(1-z_1)}+z_1)}\]

For a full derivation, refer to Broekgaarden et al. 2019. Assuming \(z_1\) and \(z_2\) are much smaller than 1, i.e. the target population is very rare, the above equation can be approximated as

\[f_{expl}\approx1-\frac{z_1}{z_1+\sqrt{z_2}}\]

This gives us the expected behavior: when \(z_1\) is 0, \(f_{expl}\) is 1, meaning all our time is spent in the exploration phase, but as \(z_1\) increases, \(f_{expl}\) decreases. During the algorithm, \(z_1\) and \(z_2\) are defined as follows:

\[z_1=\frac{\#\,\text{targets found}}{\#\,\text{samples taken}} \qquad\qquad z_2=\frac{1}{f_{expl}N}\]

Here, \(z_2\) is defined to be equivalent to the sampling noise expected from drawing \(f_{expl}N\) samples, which satisfies our condition for optimizing \(f_{expl}\).
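As a rough illustration (not POSYDON's actual implementation), the two definitions above can be combined with the expression for \(f_{expl}\) given earlier. Since \(z_2\) depends on \(f_{expl}\) itself, the sketch below resolves that implicit dependence by a simple fixed-point iteration, which is a simplification of our own.

```python
import numpy as np

def exploration_fraction(n_targets, n_samples, n_total, tol=1e-10):
    """Estimate f_expl from the running counts during exploration.

    z1 = measured target formation rate so far; z2 = estimated contribution of
    undiscovered targets, taken to be the sampling noise 1 / (f_expl * N).
    """
    z1 = n_targets / n_samples
    if z1 == 0.0:
        return 1.0                      # no targets yet: keep exploring

    f_expl = 1.0
    for _ in range(200):                # iterate because z2 depends on f_expl
        z2 = 1.0 / (f_expl * n_total)
        new_f = 1.0 - z1 * (np.sqrt(1.0 - z1) - np.sqrt(z2)) / (
            np.sqrt(1.0 - z1) * (np.sqrt(z2 * (1.0 - z1)) + z1))
        if abs(new_f - f_expl) < tol:
            break
        f_expl = new_f
    return f_expl
```

For reference, plugging in the DCO-merger exploration counts reported below (89 targets in 76500 exploratory samples, with N = 100000) gives a value of roughly 0.76, in line with the fraction of that run spent in exploration.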

In the adaptation phase, Gaussian distributions are created around the initial parameters of the found targets. The instrumental distribution is chosen to be a mixture of these Gaussians, with an equal number of samples drawn from each. This choice makes it computationally inexpensive both to generate samples and to calculate their weights. Being able to calculate the weight of each generated sample (which quantifies its significance, or likelihood) is important because the weights allow us to re-normalize the skewed data we generate back to our original distributions, ensuring that later data analysis is not biased by the artificial instrumental distribution. Because the weights are inexpensive to calculate, we could also, at a later date, use the weight of each target found in the exploration phase to adjust how many samples we draw from that target's Gaussian.
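The weight calculation itself is plain importance sampling: each weight is the ratio of the prior density to the instrumental density at that sample. A minimal sketch, assuming a hypothetical `prior_pdf` callable and a shared covariance for every Gaussian component:

```python
import numpy as np
from scipy.stats import multivariate_normal

def instrumental_weights(samples, means, cov, prior_pdf):
    """Importance weights w = prior(x) / q(x) for samples drawn from q,
    an equal-weight mixture of Gaussians centered on the exploration targets."""
    # q(x) = (1/K) * sum_k N(x; mu_k, cov)
    q = np.mean([multivariate_normal(mean=mu, cov=cov).pdf(samples)
                 for mu in means], axis=0)
    return prior_pdf(samples) / q
```

These weights are what allow quantities such as the target formation rate to be estimated as if the samples had come from the prior.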

Our Implementation

The prior distributions used by POSYDON are as follows: Kroupa2001 for the primary star mass (Kroupa 2001), a flat mass-ratio distribution for the secondary star mass, and Sana+12_period_extended for the orbital period (Sana et al. 2012).
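For illustration only, the sketch below draws initial conditions from simplified stand-ins for these priors: a single power-law slope in place of the full Kroupa 2001 IMF, a flat mass ratio, and a single power law in \(\log_{10} P\) in the spirit of Sana et al. 2012. The ranges and slopes shown are illustrative, not POSYDON's actual settings.

```python
import numpy as np

def sample_priors(n, rng=None):
    """Toy draw of (m1, q, log10 P) from simplified prior stand-ins."""
    rng = rng or np.random.default_rng()

    # Primary mass: inverse-transform sample of dN/dm ∝ m^-2.3 on [7, 120] Msun.
    m_min, m_max, alpha = 7.0, 120.0, 2.3
    k = 1.0 - alpha
    m1 = (m_min**k + rng.uniform(size=n) * (m_max**k - m_min**k)) ** (1.0 / k)

    # Secondary mass via a flat mass ratio q = m2 / m1.
    q = rng.uniform(0.05, 1.0, size=n)

    # Orbital period: dN/dlogP ∝ (logP)^-0.55 on an extended logP range.
    lo, hi, pi_slope = 0.15, 5.5, -0.55
    kp = 1.0 + pi_slope
    logP = (lo**kp + rng.uniform(size=n) * (hi**kp - lo**kp)) ** (1.0 / kp)
    return m1, q, logP
```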

The filters we have designed and included are X-ray Binaries (XRBs), Low-Mass X-ray Binaries (LMXBs), and Double Compact Object (DCO) Mergers. We define XRBs as systems that evolve to contain exactly one compact object (that is not a white dwarf) and that are either detached or in Roche-lobe overflow. LMXBs are defined the same way as XRBs, except we also require that the donor star be less than \(1.5\,M_\odot\). We define DCO Mergers as systems that evolve to contain two compact objects (neither of which is a white dwarf) and that come into contact with each other near the end of their evolution.
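The sketch below illustrates how these qualitative definitions might translate into code; the attribute names (`state_1`, `event`, `donor_mass`, etc.) are hypothetical placeholders, not POSYDON's actual field names.

```python
COMPACT = {"NS", "BH"}   # compact-object states, deliberately excluding white dwarfs

def is_xrb(binary):
    """Exactly one NS/BH, and the system is detached or in Roche-lobe overflow."""
    n_compact = sum(s in COMPACT for s in (binary.state_1, binary.state_2))
    return n_compact == 1 and binary.event in ("detached", "RLO")

def is_lmxb(binary):
    """An XRB whose donor star is below 1.5 solar masses."""
    return is_xrb(binary) and binary.donor_mass < 1.5

def is_dco_merger(binary):
    """Both components are NS/BH and the pair comes into contact (merges)."""
    n_compact = sum(s in COMPACT for s in (binary.state_1, binary.state_2))
    return n_compact == 2 and binary.event == "contact"
```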

Discussion

For our testing, we ran the algorithm for 100 thousand binaries, filtering for LMXBs and DCO mergers.

Low-Mass X-ray Binaries:

During the exploration phase, the formation rate of LMXBs was 0.027902 for 20500 binaries (572 hits). During the refinement phase, the target formation rate rose by a factor of 15 to 0.4282 for 79500 binaries (34039 hits). This results in a cumulative LMXB formation rate of 0.34611 for 100000 binaries (34611 hits): a 12x improvement.

Double Compact Object Mergers:

Figure 1. The plot on the left shows the few DCO Mergers found with default sampling of 100000 binaries in our initial-conditions parameter space, while the plot on the right shows the DCO Mergers found with our implementation of STROOPWAFEL sampling for 100000 binaries.

During the exploration phase, the formation rate of DCO Mergers was 0.001163 for 76500 binaries (89 hits). During the refinement phase, the target formation rate rose by a factor of 120 to 0.1440 for 23500 binaries (3383 hits). This results in a cumulative DCO Merger formation rate of 0.03472 for 100000 binaries (3472 hits): a 30x improvement!
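For reference, the quoted rates and improvement factors follow directly from the hit counts:

```python
# DCO-merger counts from the run above.
expl_hits, expl_n = 89, 76500        # exploration phase
ref_hits,  ref_n  = 3383, 23500      # refinement phase

rate_expl = expl_hits / expl_n                           # ~0.00116
rate_ref  = ref_hits / ref_n                             # ~0.1440 (~120x higher)
rate_all  = (expl_hits + ref_hits) / (expl_n + ref_n)    # ~0.0347

print(rate_all / rate_expl)   # ~30x overall improvement over prior-only sampling
```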

Conclusions and Further Work

By implementing the STROOPWAFEL importance sampling algorithm into the POSYDON binary population synthesis code, we have significantly improved the sampling of rare binaries. This leads to substantial savings in computational time and, compared to traditional methods, allows for more detailed inclusion of massive-star physics and broader exploration of model assumptions. It also results in a higher-resolution mapping of the parameter space, reducing sampling uncertainty by up to a factor of 5. Although this effect is largest for DCO mergers, it remains applicable and can easily be extended to other systems such as LMXBs, gamma-ray bursts (GRBs), and Be X-ray binaries. We plan to use this exact functionality to aid our study of supernova kicks using LMXBs in the coming months.

We also see the potential to study more specific systems with the future addition of supplementary numerical limits to our filters. This will allow us to constrain our samples to specific systems, such as DCO Mergers within certain mass ranges, or even constrain to observed data for a specific system to probe its possible progenitors and further evolution! We also plan to add a customizable, purely numerical filter for use in a wider range of studies; a sketch of what such a filter might look like is given below. Beyond this, we are considering accounting for the weights of the targets before samples are drawn from the instrumental distribution, so that the drawn population is more representative of the prior distributions. Lastly, we of course hope to expand the qualitative filters to the other systems mentioned above and continue to seek other ways to further improve sampling efficiency.
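As an example of the kind of supplementary numerical filter we have in mind, a qualitative filter could be composed with user-chosen numerical cuts. The sketch below is hypothetical and reuses the illustrative `is_dco_merger` and attribute names from the Methods section.

```python
def dco_merger_in_mass_range(m_min, m_max):
    """Hypothetical user-defined filter: DCO mergers whose components both
    fall in a chosen mass range (in solar masses)."""
    def _filter(binary):
        return (is_dco_merger(binary)
                and m_min <= binary.mass_1 <= m_max
                and m_min <= binary.mass_2 <= m_max)
    return _filter

# e.g. target binary black holes with both components between 5 and 20 Msun
is_target = dco_merger_in_mass_range(5.0, 20.0)
```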

References

Broekgaarden, F. S., Justham, S., de Mink, S. E., et al. 2019, MNRAS, 490, 5228
Fragos, T., Andrews, J. J., Bavera, S. S., et al. 2023, ApJS, 264, 45
Kroupa, P. 2001, MNRAS, 322, 231
Sana, H., de Mink, S. E., de Koter, A. et al. 2012, Science, 337, 444

Acknowledgements

I would like to thank Dr. Vishal Baibhav and Dr. Vicky Kalogera for their support throughout this project, and Dr. Aaron Geller and Dr. Emily Leiner for organizing the REU at CIERA and opening the doors for this work. This material is based upon work supported by the National Science Foundation under grant No. AST 2149425, a Research Experience for Undergraduates (REU) grant awarded to CIERA at Northwestern University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This research was supported in part through the computational resources and staff contributions provided for the Quest high-performance computing facility at Northwestern University, which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.