Sunday, 2 December
17:00 - 19:00 Informal icebreaker at Meloncino
Upper Level V&A Waterfront
Cape Town 8001
Monday, 3 December
09:00 - 09:10 Welcome and Logistics
Session 1: (Chair: Ludwig Schwardt)
Status updates
09:10 - 09:35 Jasper Horrell (SKA SA)
09:35 - 10:00 Maxim Voronkov (CSIRO Astronomy & Space Science)
The Australian Square Kilometre Array Pathfinder (ASKAP) is a new-generation radio interferometer featuring Phased Array Feed (PAF) technology. Since December 2011, a number of commissioning activities have been underway using a software correlator and a subset of three antennas. The results obtained so far include fringes and phase closure for both individual ports and beamformed beams, as well as rudimentary imaging using boresight and off-axis beams. Despite the limited quality, these experiments clearly demonstrate PAF technology at work and have uncovered a number of issues specific to PAF-enabled interferometers. In this talk, I will describe the commissioning activities and review the current status of ASKAP.
10:00 - 10:25 Dirk Petry (ESO)
10:25 - 11:10 Tea/Coffee
Session 2: (Chair: Ludwig Schwardt)
Status updates (continued)
11:10 - 11:35 Bill Cotton (NRAO)
11:35 - 12:00 Tim Cornwell (SKA Organization)
12:00 - 13:30 Lunch
Session 3: (Chair: Sanjay Bhatnagar)
Primary Beams
13:30 - 14:15 Oleg Smirnov (SKA SA / Rhodes)
14:15 - 14:45 Ian Heywood (Oxford)
The work in progress that I presented at CALIM 2011, in which I used the dE algorithm (Smirnov, 2011) implemented in the MeqTrees software system (Noordam & Smirnov, 2010) to model and subtract a confusing source from some equatorial VLA data, has been completed (Heywood et al., 2012). I'll provide an update on this and demonstrate how the horrid behaviour of the confusing source can be quantitatively explained by its location in the Worst Possible Place in the VLA primary beam, coupled with finite antenna pointing accuracy. Gain solutions derived during subtraction accurately trace the response of the primary beam. I'll go on to describe some ongoing and forthcoming projects of varying degrees of ambition involving the KAT-7 and e-MERLIN arrays. Deliberately busy fields have been observed with the parallel goals of (i) cleanly excising sources that lurk on the edge of the primary beam, and (ii) using the solutions derived from this process to probe the array beam itself.
14:45 - 15:30 Tea/Coffee
Session 4: (Chair: Oleg Smirnov)
Primary Beams (continued)
15:30 - 15:55 Mattieu de Villiers (SKA SA)
I will present an L-band holography technique and associated primary beam pattern results for KAT-7 antennas. Full co- and cross-polarization beam patterns per feed are measured over 24 degrees, simultaneously for a subset of antennas, over the full range of observing frequencies.
15:55 - 16:20 André Young (Stellenbosch)
Recently a number of methods have been reported and shown to be able to model antenna beam patterns with a high degree of accuracy, including the Characteristic Basis Function Pattern (CBFP) method and the use of the Jacobi-Bessel series. An overview of the methods and latest developments will be presented.
Tuesday, 4 December
Session 5: (Chair: Andreas Wicenec)
Status updates (continued)
09:00 - 09:25 Ronald Nijboer (ASTRON)
LOFAR Cycle 0 operations start on 1 December 2012. In this presentation I will give an update on the status of the LOFAR array and its observing, processing, and archiving capabilities.
09:25 - 09:50 George Heald (ASTRON)
09:50 - 10:15 Peeyush Prasad (University of Amsterdam)
The AARTFAAC project will reconfigure the central six stations of LOFAR as a near real-time All-Sky Monitor (ASM), which aims to detect bright radio transients in the local sky via searches in the image domain. The AARTFAAC ASM will strive for continuous operation and imaging of the very wide available field of view (all-sky at low frequencies), both of which are essential for transient detection. This will be achieved by enhancing the chosen LOFAR stations with hardware to estimate the correlation between all 288 dipoles of the ASM in real time over an independently coupled data path, and thus simultaneously with regular LOFAR observations. A software pipeline on dedicated processors will then calibrate and form images from the AARTFAAC data, ultimately feeding a software transient detection pipeline. I will elaborate on our strategy for calibrating and imaging the full field of view of the AARTFAAC under low-latency constraints and in challenging observing environments. Further, I will illustrate our approach with results based on the analysis of test observations.
10:15 - 11:00 Tea/Coffee
Session 6: (Chair: Ronald Nijboer)
11:00 - 11:25 Rik Jongerius (IBM Netherlands)
The LOw-Frequency ARray (LOFAR) is a phased-array radio telescope in the Netherlands. It is recognized as a pathfinder for the Square Kilometre Array (SKA), and many LOFAR techniques form the basis for several SKA modes of operation. As a result, a retrospective analysis of LOFAR, reviewing the design in light of recent advances in technology, is of interest. In this talk I will present part of a retrospective analysis performed at the ASTRON & IBM Center for Exascale Technology. The talk focuses on digital processing in LOFAR stations. By defining scaling rules for the compute and bandwidth requirements of the algorithms used in LOFAR, we try to understand how these requirements change with varying system parameters. Furthermore, implementations of LOFAR station processing on several parallel platforms are discussed. A comparison is made between CPU-, GPU- and FPGA-based platforms in terms of the system size required for real-time processing and energy efficiency.
11:25 - 11:50 Chris Broekema (ASTRON)
At last year's CALIM I gave a first glimpse at how computing will evolve over the next decade or so. In this talk I will update that view, based on numerous discussions with both industry and research partners. I will discuss the expected hardware developments and how these will affect the programming models we know today. Since energy consumption is expected to limit the operational capacity of the Phase 2 SKA central processor(s), I will devote some slides to ways in which systems, both hardware and software, can minimize the energy per computational answer, and how this will affect the CALIM audience.
11:50 - 12:15 Simon Ratcliffe (SKA SA)
A look at the extensive use of Python within the MeerKAT telescope, and an investigation and discussion of how far we can push Python in the HPC arena.
12:15 - 13:45 Lunch
Session 7: (Chair: Anna Scaife)
Primary Beams (continued)
13:45 - 14:10 Dirk Petry (ESO)
As part of the ALBiUS project (now ended) and further ongoing work at ESO, detailed simulated antenna responses are being made available for use in ALMA data analysis with the CASA software. As part of this, a general scheme for antenna response access was developed and also a new simulation path using a ray tracing algorithm by W. Brisken (NRAO) was opened up for general use. I report the status of this work and first tests with real ALMA data.
14:10 - 14:35 Cyril Tasse (SKA SA / Rhodes)
We present a few implementations of A-Projection applied to LOFAR that can deal with non-unitary station beams and non-diagonal Mueller matrices. The algorithm is designed to correct for all direction-dependent effects, including ionospheric effects, but we focus our attention on the correction of the phased-array beam patterns. These include the individual antennas, the projection of the dipoles on the sky, and up to a few levels of phased arrays. We describe a few important algorithmic optimizations, related to LOFAR's architecture, that allowed us to build a fast imager. We will use it for the construction of the deepest extragalactic surveys, comprising hundreds of days of integration.
14:35 - 15:20 Tea/Coffee
Session 8: (Chair: Tim Cornwell)
15:20 - 15:45 Ian Sullivan (Washington)
We introduce the Fast Holographic Deconvolution method for analyzing interferometric radio data. Our new method is an extension of A-projection/software-holography/forward modeling analysis techniques and shares their precision deconvolution and wide-field polarimetry, while being significantly faster than current implementations that use full direction-dependent antenna gains. Using data from the MWA 32 antenna prototype, we demonstrate the effectiveness and precision of our new algorithm. Fast Holographic Deconvolution may be particularly important for upcoming 21 cm cosmology observations of the Epoch of Reionization and Dark Energy where foreground subtraction is intimately related to the precision of the data reduction.
15:45 - 16:10 Sarod Yatawatta (ASTRON)
17:00 - 18:30 Football match at Pinelands Cricket Grounds
Wednesday, 5 December
Session 9: (Chair: Tim Cornwell)
09:00 - 09:25 Bill Cotton (NRAO)
Radio source counts give an integrated history of the evolution of galaxy formation over cosmic time and, in particular, of the sources of significant radio emission. Studies of the counts of individual sources suffer from the fact that at the resolution needed to distinguish the faintest sources individually, most are resolved, requiring a difficult and uncertain correction for the effects of the resolution on the derived counts. Much of this problem can be overcome in statistical measures of the source counts using a "P(D)" analysis of statistical confusion from lower resolution observations. To obtain the necessary sensitivity for such observations, wide bandwidths are needed. Variations in the sky brightness and instrumental response with frequency add complexity. I will describe techniques developed for a deep (1 microJy RMS) confusion limited VLA survey in the frequency range 2-4 GHz. The basic approach is to divide the frequency range into multiple narrow bands with imaging constrained to give the same dirty beam resolution and then do a joint deconvolution to remove the side-lobes of the brighter sources. The distribution of pixel values is then derived from a weighted combination of the narrow band images correcting for the frequency variable primary antenna pattern. The results of this analysis are consistent with previous SKA simulations and reveal no new population of sources above 1 microJy. These results are published in Condon et al., 2012, ApJ 758, 23.
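The P(D) idea can be illustrated with a toy Monte Carlo: draw fluxes from a hypothetical power-law differential count, scatter them over a map, and study the pixel-value ("deflection") distribution rather than individual sources. All numbers here are illustrative; the actual analysis also involves the beam, wide bandwidths, and joint deconvolution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy power-law counts dN/dS ~ S^-gamma between illustrative flux limits.
n_pix, n_src = 10_000, 50_000
s_min, s_max, gamma = 0.1, 100.0, 2.5          # microJy, toy values

# Inverse-CDF sampling of the power-law flux distribution.
u = rng.uniform(size=n_src)
fluxes = (s_min**(1 - gamma)
          + u * (s_max**(1 - gamma) - s_min**(1 - gamma)))**(1.0 / (1 - gamma))

# Drop sources into random pixels; many sources land in each pixel,
# which is exactly the confusion regime P(D) exploits.
sky = np.zeros(n_pix)
np.add.at(sky, rng.integers(0, n_pix, n_src), fluxes)

# The histogram of `sky` is the P(D); its skewed shape constrains the
# source counts below the individual-source detection limit.
print(sky.mean(), np.median(sky))
```

Because the counts are steep and heavy-tailed, rare bright sources pull the mean of the deflection distribution well above its median; it is this skewness that carries the information about the faint-source counts.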
09:25 - 09:50 Stefan Wijnholds (ASTRON)
The unprecedented dynamic range envisaged for the SKA raises a number of fundamental and practical questions in the areas of telescope design and data processing. We need to answer these questions to ensure by design that the SKA can live up to its expectations. In this talk, I present an overview of questions that have been raised in the framework of the AAVP programme and discuss the (partial) answers that have been found to date. Based on this assessment of the current status, I conclude with a list of open issues.
09:50 - 10:15 Laura Richter (SKA SA)
Confusing astronomical sources far from the beam centre can create considerable image artifacts in deep images. One cause of these artifacts is spectral structure present in wide-band observations, either intrinsic to the source or as a result of the primary beam. These artifacts are explored through KAT-7 and ATCA data.
10:15 - 11:00 Tea/Coffee
Session 10: (Chair: Anna Scaife)
11:00 - 11:25 Sanjay Bhatnagar (NRAO)
11:25 - 11:50 Urvashi Rau (NRAO)
11:50 - 12:15 Tim Cornwell (SKA Organization)
12:15 - Free Afternoon
Thursday, 6 December
Session 11: (Chair: Sanjay Bhatnagar)
Algorithms and Modelling
09:00 - 09:25 David Davidson (Stellenbosch)
MeerKAT will be one of the first radio telescopes whose electromagnetic design has been rigorously evaluated --- even before prototyping --- using full-wave numerical simulation as opposed to faster but less accurate asymptotic methods. The ability to rigorously characterize the electromagnetic performance of radio telescopes offers interesting new opportunities in calibration. The computational electromagnetics program most extensively used for the design of MeerKAT is FEKO. Recent advances in FEKO have permitted full-wave analysis of the new offset Gregorian MeerKAT dish; this has been combined with asymptotic codes (in particular GRASP) to rapidly explore the design parameter space, with full-wave FEKO analysis to check the results. Furthermore, full-wave simulation has been applied to radio frequency interference studies and has been elegantly combined with numerical simulation of the mechanical structure of the dish to couple the effects of mechanical deformation due to wind loading (for instance) into the electrical performance of the dish. For the future, the efficient numerical analysis of focal plane arrays is under investigation at SU in collaboration with EMSS-SA (the developers of FEKO) and leading European institutions.
09:25 - 09:50 Ludwig Schwardt (SKA SA)
Quantisation correction (also known as Van Vleck correction) adjusts visibilities for the effects of quantisation and is typically performed in the online system of a radio telescope. I revisit this topic from a Bayesian perspective and show how it can also be used to derive improved visibility weights.
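For context, the best-known special case is the 1-bit (2-level) Van Vleck relation, where the true correlation coefficient follows from the measured one in closed form. This sketch shows only that classic relation, not the Bayesian treatment of the talk:

```python
import numpy as np

def van_vleck_1bit(rho_measured):
    """Classic Van Vleck correction for 1-bit (2-level) quantisation:
    rho_true = sin(pi/2 * rho_measured)."""
    return np.sin(0.5 * np.pi * np.asarray(rho_measured))

# Forward relation: a 1-bit correlator measures (2/pi) * arcsin(rho_true),
# so weak correlations are suppressed by a factor of about 2/pi.
rho_true = 0.3
rho_meas = (2.0 / np.pi) * np.arcsin(rho_true)
print(van_vleck_1bit(rho_meas))  # recovers 0.3
```

The correction is largest for strong correlations, where the sine relation departs most from linearity; multi-level quantisers need the analogous (numerically tabulated) relation instead.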
09:50 - 10:15 Sanaz Kazemi (Kapteyn)
We introduce the OS-LS and the OS-SAGE radio interferometric calibration methods, as a combination of the Ordered-Subsets (OS) method with the Least-Squares (LS) and Space Alternating Generalized Expectation maximization (SAGE) calibration techniques, respectively. The OS algorithm speeds up the ML estimation and achieves nearly the same accuracy of solutions as the non-OS methods. Furthermore, we introduce a modified version of the OS calibration scheme which achieves a higher accuracy. Simulations and real observations illustrate that the introduced OS-LS and OS-SAGE calibration methods benefit from a considerably higher convergence rate compared to the conventional LS and SAGE techniques. Moreover, the obtained results show that the OS-SAGE calibration technique has a superior performance compared to the OS-LS calibration method in the sense of achieving more accurate results while having significantly less computational cost.
10:15 - 11:00 Tea/Coffee
Session 12: (Chair: Oleg Smirnov)
Algorithms and Modelling (continued)
11:00 - 11:25 Robert Braun (CSIRO)
A framework is developed for quantifying the different contributions to visibility fluctuations in a synthesis measurement together with the correlation timescales and frequency bandwidths that apply to each. Parameterised expressions for each contribution allow assessment of their impact on calibration quality and ultimately image "noise". Applying the resulting noise budget analysis to existing and planned telescope systems provides insights into their likely limitations, as well as mitigation strategies.
11:25 - 11:50 Tobia Carozzi (Onsala Space Observatory)
I present a theory for the spatial channel capacity of an interferometer. Interferometers generate a lot of information; indeed, it has been suggested that the SKA will have a network traffic comparable to the entire Internet. But how much of this information is actually useful imaging data? I will show how an array's configuration and primary element gain determine the amount of sky information that can be collected with an interferometer. This approach is roughly analogous to asking how many megapixels a photographic camera has, and can be used to assess the imaging performance of interferometers. The results show that minimum-redundancy arrays have the largest imaging information capacity. They also suggest that 3D arrays on 2D surfaces (so-called conformal arrays), in which the w-terms contribute information, outperform flat arrays.
11:50 - 12:15 Jason McEwen (UCL)
Recent developments in compressive sensing techniques for radio interferometric imaging have shown a great deal of promise. These techniques promise improved image fidelity, flexibility and computation time over traditional approaches. However, to date these new techniques have necessarily been somewhat idealised. Now that the merits of these methods have been demonstrated in idealised settings, it is important to extend them to realistic settings so that their benefits can be realised on observations made by real interferometric telescopes. I will review recent progress in this endeavour. I will discuss the impact of the spread spectrum effect, which arises from non-negligible w-components, in the realistic setting. I will also discuss recent progress towards the inclusion of a gridding operator in compressive sensing radio imaging techniques to handle the continuous visibility tracks of real telescopes. Finally, I will highlight a new fast code that is under development for the application of compressive sensing interferometric imaging methods to real radio interferometric observations.
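The l1-minimisation at the heart of such techniques can be sketched with plain ISTA (iterative soft thresholding) on a random toy operator. Real interferometric imaging would replace the random matrix with the measurement operator (including w-terms and gridding); the operator, sizes and regularisation weight here are all illustrative:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """ISTA for the lasso problem  min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 200)) / np.sqrt(80)      # toy "visibility" operator
x_true = np.zeros(200)
x_true[[20, 97, 150]] = [1.0, -0.7, 0.5]          # sparse "sky"
y = A @ x_true                                    # noise-free measurements
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Even with far fewer measurements than unknowns (80 versus 200), the sparse signal is recovered up to the small bias that the l1 penalty introduces, which is the basic promise the abstract refers to.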
12:15 - 13:45 Lunch
Session 13: (Chair: Ronald Nijboer)
Algorithms and Modelling (continued)
13:45 - 14:10 Yves Wiaux (University of Geneva / Lausanne University Hospital / EPFL)
In this talk we will review a novel regularization method recently proposed for sparse image reconstruction from compressive measurements in the context of the recent theory of compressed sensing with coherent and redundant dictionaries. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple dictionaries. The associated reconstruction algorithm, based on an analysis prior and a reweighted l1 scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We will probe this prior and the associated algorithm by analyzing the results of extensive numerical simulations in the context of radio-interferometric imaging, in particular in the presence of direction-dependent effects. These results show that average sparsity drastically outperforms state-of-the-art priors that promote sparsity in a single dictionary.
14:10 - 14:35 Arash Owrang (ASTRON)
Many recent array signal processing studies investigate the applicability of sparse reconstruction techniques. In this talk, we study the limits of sparse direction-of-arrival (DOA) reconstruction with random arrays in terms of the number of recoverable sources, grid spacing in the spatial domain and sensor distribution. We use the coherence of the measurement matrix as the figure of merit, because it is tractable in closed form and is easily computed. We establish a general lower bound for random sensor distributions on the probability that the coherence is lower than a certain threshold, determining the number of sources that can be successfully reconstructed. We apply our results to (truncated) Gaussian and uniform sensor distributions, both in closed form and in simulations. We also demonstrate that successful sparse reconstruction for a given number of signals is more likely if reconstruction is done based on the signal covariance matrix than if reconstruction is done in the time domain.
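The mutual coherence used as a figure of merit is straightforward to compute: it is the largest normalised inner product between distinct columns of the measurement matrix. A sketch for a hypothetical random linear array (the sensor positions and direction grid are illustrative, not from the talk):

```python
import numpy as np

def coherence(A):
    """Mutual coherence: largest |<a_i, a_j>| over distinct unit-norm columns."""
    A = A / np.linalg.norm(A, axis=0)       # normalise columns
    G = np.abs(A.conj().T @ A)              # Gram matrix of inner products
    np.fill_diagonal(G, 0.0)                # ignore self-products
    return G.max()

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=32)           # sensor positions (wavelengths)
grid = np.linspace(-1, 1, 181)              # grid of directions sin(theta)
A = np.exp(2j * np.pi * np.outer(pos, grid))  # steering (measurement) matrix
mu = coherence(A)
print(mu)
```

Lower coherence is better: a standard sufficient condition guarantees recovery of roughly (1 + 1/mu)/2 sources, which is how the coherence bound translates into a count of recoverable sources.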
14:35 - 15:20 Tea/Coffee
15:20 - 15:45 Conference Summary
50 Orange Street
Friday, 7 December
10:00 - 16:00 Open day for informal discussions etc.