Zenodo repository: The repository containing the 8 challenge datasets and the 1 training dataset can be found here.


Datasets from two latest-generation high-contrast spectro-imagers are used:

There are 4 datasets from SPHERE-IFS and 4 datasets from GPI.
The training dataset is from SPHERE-IFS.

The 8 target stars host no known companion candidates. In order to mitigate the impact of a potential astrophysical signal (planet or disk), we performed the injections using the opposite parallactic angles. This trick preserves the temporal correlation of the starlight residuals, but any pre-existing circumstellar signal is no longer co-aligned after derotation. The opposite parallactic angles are the ones used for the injections and provided with each downloadable dataset.
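The angle flip described above amounts to negating the observed parallactic angles before injection. A minimal sketch, assuming the angles are stored in a plain array (the variable names are ours, not from the challenge files):

```python
import numpy as np

# Hypothetical parallactic angles (degrees) as read from a dataset
observed_pa = np.array([-12.4, -6.1, 0.0, 6.1, 12.4])

# Injections use the opposite angles: the starlight residuals keep their
# temporal correlation, but a real circumstellar signal would no longer
# co-align when the cube is derotated with these flipped angles.
injection_pa = -observed_pa
```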

The datasets are taken under various observing conditions, from very favorable to bad, with bright or faint targets (see details below).

The data have been pre-reduced by the official IFS (SPHERE-DC) and GPI pipelines (GPIES Data Cruncher). Our team homogenized the data (centering, cropping, etc.) and injected 2 to 3 synthetic planetary signals into each coronagraphic image cube (see details below).

Provided data content

The 8 data sets of the data challenge contain the following files (in .fits format):

The images (coronagraphic and non-coronagraphic) are within a frame with an odd number of pixels, centered on the central pixel. If Npix is the dimension of the frame, the center is located at pixel (Npix-1)/2 along each axis.
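For example, the star (frame center) position follows directly from the frame dimension; a trivial sketch, with a hypothetical frame size:

```python
npix = 291  # hypothetical odd frame dimension (pixels)

# Central pixel index along each axis for an odd-sized frame:
# the star sits at pixel (center, center)
center = (npix - 1) // 2
```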

The first guess gives an estimate of the location of the injected planetary signal within a radius of 5 pixels (useful in case of low SNR). The values are given as the distance (in pixels) from the star (the center of the frame) in Cartesian coordinates (x, y).
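As an illustration, such a first-guess offset (dx, dy) can be converted to an absolute pixel position and to polar coordinates (separation, angle). This is a sketch with made-up values; the variable names are ours:

```python
import math

npix = 291                # hypothetical odd frame size
center = (npix - 1) // 2  # star position: the frame center

dx, dy = 20.0, -15.0      # hypothetical first-guess offsets (pixels)

# Absolute pixel coordinates of the first guess
x, y = center + dx, center + dy

# Polar form: separation from the star (pixels) and angle (degrees)
sep = math.hypot(dx, dy)
theta = math.degrees(math.atan2(dy, dx))
```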

The airmass is provided for information only; it is not mandatory to take it into account in your algorithm.

More information (telescope, instrument, coronagraph type, effective telescope diameter, total field rotation, spectral resolving power, central wavelength, exposure time DIT, number of exposures NDIT, pixel scale, seeing, etc.) is written in the headers, when available.

Observing conditions

The table below summarizes the main information about the data sets and the observing conditions:

Injection procedure

In each data set we injected 2 to 3 synthetic exoplanet (point-source) signals. Within the SPHERE-IFS images we injected 11 exoplanet signals, and within the GPI data 10 exoplanet signals. In total there are 21 exoplanet signals to be characterized.

As shown in the injection procedure tutorial, which is based on VIP pipeline procedures, the exoplanet signal injection relies on the following steps:

The injected signal at a given wavelength (lambda) and position is then:

Injected_planet_lambda = PSF_lambda * spectrum_planet * mean_contrast_planet * transmission_AtmInstr * airmass_factor,

where transmission_AtmInstr = fitted_stellar_spectrum / model_stellar_spectrum.
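A minimal numpy sketch of this formula, applied per wavelength channel of a spectral cube. The array names mirror the terms above, but all dimensions and values here are made up for illustration:

```python
import numpy as np

n_wavelengths, npix = 39, 64  # hypothetical cube dimensions

# Hypothetical inputs, one value (or image) per wavelength channel
psf = np.ones((n_wavelengths, npix, npix))              # PSF_lambda (off-axis PSF)
spectrum_planet = np.linspace(0.5, 1.5, n_wavelengths)  # planet spectrum to inject
mean_contrast_planet = 1e-4                             # mean planet-to-star contrast
fitted_stellar_spectrum = np.full(n_wavelengths, 2.0)   # fitted stellar spectrum
model_stellar_spectrum = np.full(n_wavelengths, 4.0)    # model stellar spectrum
airmass_factor = 1.1                                    # airmass scaling

# transmission_AtmInstr = fitted_stellar_spectrum / model_stellar_spectrum
transmission = fitted_stellar_spectrum / model_stellar_spectrum

# Injected_planet_lambda: broadcast the per-wavelength factors over each image
injected = psf * (spectrum_planet * mean_contrast_planet
                  * transmission * airmass_factor)[:, None, None]
```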

No other effect is taken into account for the injection: no other temporal flux variation (intrinsic or instrumental), no smearing at large separation due to the exposure time, no temporal binning, no off-centering of the star behind the coronagraph during the exposure, no diffraction effect due to the coronagraph at close separation, etc.

Training data set: On the Zenodo repository containing the data, you will find a data set annotated sphere0. This is the training data set (empty of exoplanetary signals). On the GitHub repository containing the toolkit, you will find a tutorial showing our planet injection procedure within this example SPHERE-IFS data set. This tutorial makes use of 2 planet spectra (in folder /planet_spectra/) to be injected and uses a given stellar spectrum (in folder /stellar_spectra/) to compute a mean contrast. The tutorial contains one part to visualise the input data and one to process the data for a quick look using a full-frame ASDI PCA. Feel free to use this training set to refine your algorithm.

Data and pre-reduction team:

Injection of planetary signals, homogenization and test team: