Atemkeng M. T., 2017, Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network.
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
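The core idea of the first abstract, that averaging is convolution by a window, can be illustrated with a toy model: a source away from the phase centre contributes a visibility whose fringe phase winds across the averaging bin, and the averaged amplitude is the window's Fourier response at that winding rate. The sketch below compares a boxcar (plain averaging) with a sinc-like taper; the specific window and its bandwidth `B` are illustrative choices of mine, not the optimised baseline-dependent windows of the thesis.

```python
import cmath
import math

def averaged_response(phase_rate, weights):
    """Amplitude response of weighted visibility averaging, for a source
    whose fringe phase winds by `phase_rate` radians per fine sample."""
    n = len(weights)
    acc = sum(w * cmath.exp(1j * phase_rate * (k - (n - 1) / 2))
              for k, w in enumerate(weights))
    return abs(acc) / sum(weights)

n = 64                # high-resolution samples per averaging bin
boxcar = [1.0] * n    # plain averaging: boxcar window

# Sinc-like taper; its Fourier response is approximately flat inside a
# passband of half-width pi*B rad/sample and small outside it.
B = 0.05
window = [math.sin(math.pi * B * (k - (n - 1) / 2))
          / (math.pi * B * (k - (n - 1) / 2)) for k in range(n)]
```

For an in-field source winding at 0.1 rad/sample, the boxcar response has already collapsed past its first null, while the tapered window holds the amplitude far higher, which is the "improved amplitude response over a chosen field of interest" described above; the cost is the sensitivity loss incurred by down-weighting samples.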
Bester H. L., 2016, Observational cosmology with imperfect data
We develop a formalism suitable to infer the background geometry of a general spherically symmetric dust universe directly from data on the past lightcone. This direct observational approach makes minimal assumptions about inaccessible parts of the Universe. The non-parametric and Bayesian framework we propose provides a very direct way to test one of the most fundamental underlying assumptions of concordance cosmology, viz. the Copernican principle. We present the Copernicus algorithm for this purpose. By applying the algorithm to currently available data, we demonstrate that it is not yet possible to confirm or refute the validity of the Copernican principle within the proposed framework. This is followed by an investigation which aims to determine which future data will best be able to test the Copernican principle. Our results on simulated data suggest that, besides the need to improve the current data, it will be important to identify additional model-independent observables for this purpose. The main difficulty with current data is their inability to constrain the value of the cosmological constant. We show how redshift drift data could be used to infer its value with minimal assumptions about the nature of the early Universe. We also discuss some alternative applications of the algorithm.
Hugo B., 2016, Accelerated coplanar facet radio synthesis imaging
Imaging in radio astronomy entails the Fourier inversion of the relation between the sampled spatial coherence of an electromagnetic field and the intensity of its emitting source. This inversion is normally computed by performing a convolutional resampling step and applying the Inverse Fast Fourier Transform, because this leads to computational savings. Unfortunately, the resulting planar approximation of the sky is only valid over small regions. When imaging over wider fields of view, and in particular using telescope arrays with long non-East-West components, significant distortions are introduced in the computed image. We propose a coplanar faceting algorithm, where the sky is split up into many smaller images. Each of these narrow-field images is further corrected using a phase-correcting technique known as w-projection. This eliminates the projection error along the edges of the facets and ensures approximate coplanarity. The combination of faceting and w-projection approaches alleviates the memory constraints of previous w-projection implementations. We compared the scaling performance of both single- and double-precision resampled images in both an optimized multi-threaded CPU implementation and a GPU implementation that uses a memory-access-limiting work distribution strategy. We found that such a w-faceting approach scales slightly better than a traditional w-projection approach on GPUs. We also found that double-precision resampling on GPUs is about 71% slower than its single-precision counterpart, making double-precision resampling on GPUs less power-efficient than CPU-based double-precision resampling. Lastly, we have seen that employing only single precision in the resampling summations produces significant error in continuum images for a MeerKAT-sized array over long observations, especially when employing the large convolution filters necessary to create large images.
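The convolutional resampling step mentioned above can be sketched in one dimension: each visibility sample at a continuous baseline coordinate is spread onto nearby integer grid cells with kernel weights, after which an inverse FFT would yield the dirty image. The triangular kernel below is a simple stand-in of mine for the prolate-spheroidal and w-projection kernels used in real gridders.

```python
import math

def grid_visibility(grid, u, vis, half_width=2):
    """Convolutional resampling of one visibility onto a regular grid.
    `u` is the continuous (fractional) grid coordinate of the sample;
    a triangular kernel of the given half-width spreads it over
    2 * half_width neighbouring cells."""
    start = int(math.floor(u)) - half_width + 1
    for cell in range(start, start + 2 * half_width):
        # triangular taper: weight falls linearly to zero at half_width
        w = max(0.0, 1.0 - abs(u - cell) / half_width)
        grid[cell] += w * vis
```

A unit triangle sampled at unit spacing always sums to its area, so the total gridded weight is independent of the fractional position of the sample; the residual kernel taper is what the usual post-FFT grid correction divides out. In a w-faceted scheme, each facet would be gridded against its own phase centre so that only a small residual w-kernel is needed per facet.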
Nunhokee, C. D., 2015, Link between ghost artefacts, source suppression and incomplete calibration sky models
Calibration is a fundamental step towards producing radio interferometric images. However, naive calibration produces calibration artefacts, in the guise of spurious emission, buried in the thermal noise. This work investigates these calibration artefacts, henceforth referred to as “ghosts”. A 21 cm observation with the Westerbork Synthesis Radio Telescope yielded such ghost sources, and it was anticipated that they were due to calibrating with incomplete sky models. An analytical ghost distribution for a two-source scenario is derived to substantiate this theory and to seek answers to its bewildering features (a regular ghost pattern, point-spread-function-like sidelobes, independence of model flux). The theoretically predicted ghost distribution qualitatively matches the observed one and shows a strong dependence on the array geometry. The theory leads to the conclusion that the ghost phenomenon and the suppression of unmodelled flux have the same root cause. In addition, the suppression of unmodelled flux is studied as a function of the unmodelled flux, the differential-gain solution interval and the number of sources subjected to direction-dependent gains. These studies show that the suppression rate is constant irrespective of the degree of incompleteness of the calibration sky model. In the presence of a direction-dependent effect, the suppression drastically increases; however, this increase can be compensated for by using longer solution intervals.
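The suppression mechanism can be demonstrated with a deliberately simplified toy: a single baseline sees a 1 Jy modelled source plus an unmodelled source of flux `f` whose fringe winds in time, and a scalar least-squares gain is solved per solution interval against the incomplete (one-source) model. This scalar-gain sketch is my own illustration, not the thesis's full antenna-based derivation, but it reproduces the solution-interval behaviour described above.

```python
import cmath

def recovered_flux(f, omega, n):
    """Flux of the unmodelled source that survives calibration.
    The sky is 1 Jy (modelled) + f Jy (unmodelled, fringe rate `omega`
    rad per time sample); one scalar gain is fitted over `n` samples."""
    vis = [1.0 + f * cmath.exp(1j * omega * t) for t in range(n)]
    g = sum(vis) / n                    # least-squares gain for a unit model
    resid = [v / g - 1.0 for v in vis]  # corrected data minus the model
    # de-rotate at the unmodelled source's fringe rate to measure its
    # surviving coherent flux
    return abs(sum(r * cmath.exp(-1j * omega * t)
                   for t, r in enumerate(resid)) / n)
```

Over a short interval the unmodelled fringe barely winds, so the gain absorbs the extra source almost entirely and its flux is strongly suppressed; over a long interval the winding averages out of the gain solution and most of the flux survives, consistent with the conclusion that longer solution intervals compensate for the increased suppression.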
Peters, A. B., 2017, Creating and optimizing a sky tessellation algorithm for direction-dependent effects
With the promise of the SKA come multiple challenges in terms of capturing and cleaning the data. One part of this involves breaking up, or tessellating, an image so that it can be cleaned of noise for better analysis. While methods to do this are currently in circulation, more can be done to ensure the results are as accurate as possible and are obtained as quickly as possible. This research seeks to improve on the current best tessellation model for correcting the noise, and to do so optimally with specialised hardware. To achieve these aims, a novel algorithm is created and tested to generate the tessellation more effectively than the current best model. To increase calculation speeds, part of this algorithm is then parallelised for processing on a GPU. The tessellation algorithm generated for this research is, in general, more effective than the current best model. Through accelerating parts of the algorithm on a GPU, speed-ups of up to 39.96x are obtained for tessellations generated from 1000 data sources.
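A sky tessellation for direction-dependent corrections can be sketched as a discrete Voronoi assignment: every pixel is attached to its nearest bright source, and each resulting tile receives that source's correction. This nearest-source partition is a generic illustration of mine, not the novel algorithm of the thesis.

```python
def tessellate(pixels, sources):
    """Discrete Voronoi tessellation: map each (x, y) pixel to the index
    of its nearest source, partitioning the image into tiles that can
    each be given a per-direction correction."""
    def nearest(p):
        return min(range(len(sources)),
                   key=lambda i: (p[0] - sources[i][0]) ** 2
                               + (p[1] - sources[i][1]) ** 2)
    return {p: nearest(p) for p in pixels}
```

Because every pixel's assignment is independent of every other's, this inner loop is embarrassingly parallel, which is exactly the structure that makes a one-thread-per-pixel GPU kernel attractive for the speed-ups reported above.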