Page under construction

In cosmology we are concerned with the universe on large scales and with the laws governing its evolution. One important aspect is understanding the distribution of matter in our cosmos and the structures it forms. These structures originate from tiny fluctuations in the early universe, which have grown under the influence of gravity into the structures we observe today. They encode valuable information about the evolution of the cosmos and hence allow us to test fundamental physics, such as gravity, or even particle physics, such as the nature of Dark Matter. In order to use the cosmic structures as a probe of physics, we must be able to perform 3D imaging of them using galaxies as tracer particles. Reconstruction of the Dark Matter density field has drawn a lot of attention in recent years. This is the objective of the VIRBIUS2 model. The first variant of this model was specifically restricted to Gaussian random fields and could not be adapted further. The newly developed VIRBIUS2 model and algorithm scale to much larger data sets and volumes and incorporate more physical phenomena. They come, however, with the significant drawback of still relying on block sampling.

1.  The model

Our model, which is similar to the one proposed in [1], consists essentially of fitting the relation between the galaxy redshifts {$z_i$} and their velocities {$v_i$} to the velocity field {$\mathbf{v}$} at the galaxy positions, projected onto the line of sight {$\mathbf{n}$}: {$$ \tag{1}\mathbf{v}(d_{L,i})\cdot \mathbf{n} = v_i = \frac{z_i - \bar{z}(d_{L,i})}{1 + \bar{z}(d_{L,i})}, $$} where {$\bar{z}(d_L)$} denotes the cosmological redshift, an indicator of the galaxy distance {$d_{L,i}$}. We reconstruct the velocity field on a regular grid and use trilinear interpolation to obtain it at the galaxy positions.
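
To make the interpolation step concrete, here is a minimal Python sketch of trilinear interpolation of a gridded velocity field at the galaxy positions, followed by the line-of-sight projection of equation (1). It assumes a periodic box; all names (v_grid, box_size, ...) are illustrative and not part of the VIRBIUS2 code.

    import numpy as np

    def interpolate_velocity(v_grid, positions, box_size):
        # Trilinear interpolation of a velocity field sampled on a regular
        # periodic grid (shape (n, n, n, 3)) at arbitrary positions (M, 3).
        n = v_grid.shape[0]
        x = positions / box_size * n          # fractional grid coordinates
        i0 = np.floor(x).astype(int)
        f = x - i0                            # interpolation weights in [0, 1)
        v = np.zeros((len(positions), 3))
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                         * np.where(dy, f[:, 1], 1 - f[:, 1])
                         * np.where(dz, f[:, 2], 1 - f[:, 2]))
                    idx = (i0 + [dx, dy, dz]) % n     # periodic wrapping
                    v += w[:, None] * v_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
        return v

    # Line-of-sight projection of equation (1), with the observer placed
    # at the origin of the box coordinates:
    # n_hat = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    # v_los = np.einsum('ij,ij->i',
    #                   interpolate_velocity(v_grid, positions, box_size), n_hat)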

Because galaxies do not perfectly trace the underlying Dark Matter field, equation (1) does not hold exactly. We account for this by adding a noise term, given by a Gaussian mixture whose parameters are determined self-consistently by the algorithm.
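
As an illustrative sketch, the log-likelihood of the velocity residuals under such a zero-mean Gaussian mixture could be evaluated as follows (the parametrization and names are ours, not necessarily those used internally by VIRBIUS2):

    import numpy as np
    from scipy.special import logsumexp

    def log_mixture_noise(residuals, weights, sigmas):
        # Log-likelihood of the residuals v_i - v(x_i).n under a zero-mean
        # Gaussian mixture with component weights (summing to one) and
        # standard deviations sigmas; these are the parameters that the
        # algorithm determines self-consistently.
        log_norm = -0.5 * np.log(2.0 * np.pi * sigmas**2)
        log_comp = log_norm - 0.5 * (residuals[:, None] / sigmas)**2
        return logsumexp(log_comp + np.log(weights), axis=1)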

On large scales, theory predicts the matter distribution to be Gaussian; we use this as a prior for the velocity field. We account for measurement errors on the galaxy redshifts and distances by drawing samples from their posterior distributions. This procedure is flexible and can be extended to obtain distances and redshifts from more directly observable quantities. We model the expected distribution of galaxies in the survey volume by placing the prior {$$ \tag{2}\pi(d_L) \propto d^p_L \exp\left[-\left(\frac{d_L}{d_c}\right)^n\right] $$} on the distances, where {$p$}, {$n$}, and {$d_c$} are free parameters to be determined from the data.
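
For instance, the (unnormalized) log-density of the prior (2) is straightforward to code; the interface below is purely illustrative:

    import numpy as np

    def log_distance_prior(d_L, p, n, d_c):
        # Unnormalized log of the selection prior of equation (2):
        # pi(d_L) proportional to d_L^p * exp[-(d_L/d_c)^n],
        # with p, n and d_c free parameters determined from the data.
        return p * np.log(d_L) - (d_L / d_c)**n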

Lastly, we account for a possibly non-absolute distance calibration by adding an effective Hubble constant to the model, which determines the ratio between observed and "real" distances.
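
Schematically, denoting this effective Hubble constant by {$\tilde{h}$} (a notation we introduce here for illustration), the observed and "real" distances are related by {$$ d_{L,\mathrm{obs}} = \tilde{h}\, d_{L,\mathrm{real}} .$$}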

2.  Sampling algorithm

We obtain samples by block sampling: samples of the high-dimensional velocity field are drawn using Hamiltonian Monte Carlo (HMC), which has already been successfully used for density field reconstructions, while samples of the other quantities are drawn successively from one-dimensional distributions. In HMC the sampling problem is reformulated as a Hamiltonian particle system, so that obtaining a sample amounts to solving a set of first-order ordinary differential equations. We use the HMC to obtain samples of the divergence of the velocity field, {$$\theta = \nabla \cdot \mathbf{v},$$} from which the velocity field can be obtained by the transformation {$$\mathbf{\hat{v}}(\mathbf{k}) = \frac{i \mathbf{k}}{k^2} \hat{\theta}(\mathbf{k}) .$$} This approach is, however, immediately generalizable to more complex velocity fields, such as those generated by e.g. BORG.
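
As an illustration of the last step, the velocity field can be recovered from a sample of {$\theta$} with a pair of FFTs on a periodic grid. The sketch below uses numpy; the grid shape and FFT conventions are assumptions made for this example:

    import numpy as np

    def velocity_from_divergence(theta, box_size):
        # Recover v from theta = div(v) on a periodic grid through
        # v_hat(k) = (i k / k^2) theta_hat(k).
        n = theta.shape[0]
        k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
        kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                    # avoid division by zero at k = 0
        theta_hat = np.fft.fftn(theta)
        v = np.empty(theta.shape + (3,))
        for a, kc in enumerate((kx, ky, kz)):
            v_hat = 1j * kc / k2 * theta_hat
            v_hat[0, 0, 0] = 0.0             # remove the unconstrained mean mode
            v[..., a] = np.fft.ifftn(v_hat).real
        return v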

3.  References

[1] Lavaux G., 2016, MNRAS

Text credit: Florian Führer (ILP/IAP), Guilhem Lavaux (CNRS/IAP)