Zenodo repository: the challenge datasets are hosted in a Zenodo repository, which can be found here. Make sure you download the latest version (v2.0).

Instruments

The datasets used in this data challenge were kindly provided by scientists from several high-contrast imaging instruments (see Team), and are the result of many years of work from different teams around the globe.

Subchallenge 1:

Subchallenge 2:

The datasets used in this competition are diverse: they range from the H band (1.6 μm) to the L band (3.8 μm), use broadband or narrowband filters, span different amounts of total field rotation, were taken with or without a coronagraph, and cover a variety of observing conditions.

Pre-processing steps

The datasets were pre-reduced (calibrated) using the standard pipeline of each instrument. We then applied a few pre-processing procedures to each cube to make sure the offered data cubes are homogeneous (centering, cropping, etc.). All the images in the cubes have been cropped to focus on the innermost 20 λ/D region.
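To get a feel for what this crop corresponds to on the detector, the short sketch below converts 20 λ/D into a radius in pixels; the wavelength, telescope diameter and plate scale are assumed example values, not the parameters of any particular challenge dataset (those are given in the table further down).

```python
# Illustrative only: pixel radius of the innermost 20 lambda/D region,
# computed for assumed example values (not taken from the challenge table).
import numpy as np

wavelength = 1.6e-6      # m, H band (assumed)
diameter = 8.2           # m, telescope aperture (assumed)
plate_scale = 0.01225    # arcsec per pixel (assumed)

lam_over_d = np.degrees(wavelength / diameter) * 3600.0   # lambda/D in arcsec
radius_px = 20 * lam_over_d / plate_scale
print(f"lambda/D = {lam_over_d * 1000:.1f} mas -> 20 lambda/D ~ {radius_px:.0f} px radius")
```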

The data are saved as FITS (.fits) files. A convenient tool for quick visualization is SAOImageDS9. If you use Python and JupyterLab, you can use the HCIplot open-source library to visualize the cubes.
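For example, a cube and its parallactic angle vector can be loaded with astropy and displayed with HCIplot as sketched below; the file names are hypothetical placeholders, not the actual names used in the Zenodo archive.

```python
# Minimal sketch: load a challenge cube and its parallactic angles.
# File names are placeholders; use the actual names from the Zenodo archive.
from astropy.io import fits

cube = fits.getdata("instrument_cube_1.fits")    # shape (n_frames, ny, nx)
angles = fits.getdata("instrument_pa_1.fits")    # one parallactic angle per frame
print(cube.shape, angles.shape)

# Interactive visualization in JupyterLab (assumes `pip install hciplot`)
from hciplot import plot_cubes
plot_cubes(cube)
```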

No additional frame sorting or temporal binning has been applied. Users are free to sort the images themselves if they wish.

Injections

Using the center defined in the table below and here, we injected 0 to 5 synthetic planetary signals into each dataset, with contrasts and separations randomly drawn around the detection limit (the 5-sigma contrast curve) computed with the baseline post-processing technique (a full-frame PCA). The synthetic planetary signals were injected with the VIP pipeline.
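The snippet below is a simplified, self-contained illustration of the injection principle: a unit-flux PSF template is scaled and added at a position that counter-rotates with the parallactic angle. It is not the organizers' script (which relied on VIP's fake-companion routines), and the sign convention for the position angle is an assumption.

```python
# Simplified illustration of synthetic companion injection into an ADI cube.
# Not the challenge script (which used VIP); conventions below are assumptions.
import numpy as np
from scipy.ndimage import shift

def inject_companion(cube, psf, parangs, sep_px, theta_deg, flux):
    """Add a point source of total `flux` at separation `sep_px` (pixels) and
    sky position angle `theta_deg`; in frame i it sits at (theta - parang[i])
    in the detector frame, measured counterclockwise from the +x axis."""
    nfr, ny, nx = cube.shape
    # unit-flux PSF template placed near the centre of a full-size canvas
    canvas = np.zeros((ny, nx))
    py, px = psf.shape
    y0, x0 = (ny - py) // 2, (nx - px) // 2
    canvas[y0:y0 + py, x0:x0 + px] = psf / psf.sum()
    out = cube.copy()
    for i, pa in enumerate(parangs):
        ang = np.deg2rad(theta_deg - pa)
        dy, dx = sep_px * np.sin(ang), sep_px * np.cos(ang)
        out[i] += flux * shift(canvas, (dy, dx), order=3)
    return out
```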

Provided data content

For the sub-challenge on ADI post-processing, each dataset is composed of:

For the second sub-challenge on ADI+mSDI post-processing, each dataset is composed of:

Data parameters

The information about the datasets can be found in the table below:

Note on the LMIRCam images: the LMIRCam image cube contains a large number of frames, so feel free to bin the images temporally to process it faster.
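A minimal way to do this, assuming a 3D cube and a matching parallactic-angle vector (variable and function names here are illustrative):

```python
# Average every `nbin` consecutive frames (and their parallactic angles);
# trailing frames that do not fill a complete bin are dropped.
import numpy as np

def bin_temporally(cube, parangs, nbin=10):
    nfr = (cube.shape[0] // nbin) * nbin
    cube_b = cube[:nfr].reshape(-1, nbin, *cube.shape[1:]).mean(axis=1)
    ang_b = parangs[:nfr].reshape(-1, nbin).mean(axis=1)
    return cube_b, ang_b
```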


It is mandatory that the contributed datasets remain secret for the duration of the challenge. After the data challenge is finished, the contributed datasets (without injected companions) will form the HCI benchmark library, which will be made available to the community. This benchmark library will be stored on Zenodo, ensuring the long-term preservation of the data, and will serve the next generation of researchers, who will be able to re-use the benchmark datasets for quick validation of novel algorithms and for publication.


Data and pre-reduction team:

Injection of planetary signals, homogenization and test team: