SPARK 2026 Challenge

SPAcecraft Recognition leveraging Knowledge of Space Environment

Overview

SPARK 2026 (SPAcecraft Recognition leveraging Knowledge of Space Environment) is organized as part of the AI4Space 2026 workshop, in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2026 (CVPR 2026).

SPARK 2026 pushes the boundaries of space perception by introducing two exciting and forward-looking challenge streams, each targeting critical capabilities for next-generation autonomous space systems.

Challenge Streams

🚀 Stream 1: Multi-Task Spacecraft Perception

This stream challenges participants to design a single, powerful model capable of performing:

  • Spacecraft classification
  • Spacecraft detection
  • Fine-grained segmentation of spacecraft components

The focus is on efficiency and performance, encouraging the development of compact, high-performing models suitable for deployment on resource-constrained space platforms, regardless of spacecraft type.
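To make the multi-task setup concrete, the sketch below shows one possible shared-backbone design with separate heads for classification, detection, and segmentation. It is purely illustrative: the backbone, head structures, class counts, and input resolution are assumptions, not an official baseline.

```python
# Minimal sketch of a shared-backbone multi-task model (illustrative only;
# the backbone choice, head designs, and class counts are assumptions).
import torch
import torch.nn as nn
import torchvision

class MultiTaskSpacecraftNet(nn.Module):
    def __init__(self, num_classes: int = 10, num_seg_classes: int = 5):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # Shared feature extractor: all ResNet layers except pooling/classifier.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 512, H/32, W/32)
        # Classification head: global pooling + linear layer.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes)
        )
        # Detection head (heavily simplified): regress a single normalized box (cx, cy, w, h).
        self.box_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 4), nn.Sigmoid()
        )
        # Segmentation head: 1x1 conv to per-component logits, upsampled to input resolution.
        self.seg_head = nn.Sequential(
            nn.Conv2d(512, num_seg_classes, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return {
            "class_logits": self.cls_head(feats),
            "box": self.box_head(feats),
            "seg_logits": self.seg_head(feats),
        }

model = MultiTaskSpacecraftNet()
out = model(torch.randn(1, 3, 256, 256))
print({k: v.shape for k, v in out.items()})
```

Sharing a single encoder across all three tasks is one straightforward way to keep the parameter count and inference cost low enough for constrained onboard hardware.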

⚡ Stream 2: Event-Based Pose Estimation

Dive into the world of event-based vision with Stream 2, which focuses on pose estimation using the SPADES dataset.

Participants will train their models on high-quality synthetic event data and validate their approaches on real event data, addressing one of the most challenging perception problems in space.
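Event streams are asynchronous, so a common first step is to convert them into a fixed-size tensor that standard networks can consume. The sketch below bins events into a simple voxel grid; the (x, y, t, polarity) format, sensor resolution, and bin count are assumptions rather than the SPADES specification.

```python
# Illustrative sketch: binning a raw event stream (x, y, t, polarity) into a
# voxel-grid tensor suitable for a CNN/transformer pose regressor. The event
# format and grid parameters are assumptions, not the official SPADES format.
import numpy as np

def events_to_voxel_grid(events: np.ndarray, num_bins: int, height: int, width: int) -> np.ndarray:
    """events: (N, 4) array with columns (x, y, t, polarity in {-1, +1})."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3].astype(np.float32)
    # Normalize timestamps to [0, num_bins - 1] and assign each event to a temporal bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    bins = np.clip(t_norm.astype(int), 0, num_bins - 1)
    # Accumulate signed polarity per (bin, y, x) cell.
    np.add.at(grid, (bins, y, x), p)
    return grid

# Example with random synthetic events on a 1280x720 sensor (the HD resolution of the IMX636).
rng = np.random.default_rng(0)
n = 10_000
ev = np.stack([
    rng.integers(0, 1280, n),            # x
    rng.integers(0, 720, n),             # y
    np.sort(rng.uniform(0, 0.01, n)),    # t (seconds)
    rng.choice([-1.0, 1.0], n),          # polarity
], axis=1)
voxels = events_to_voxel_grid(ev, num_bins=5, height=720, width=1280)
print(voxels.shape)  # (5, 720, 1280)
```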

Whether you are pushing model efficiency to its limits or exploring cutting-edge event-driven perception, SPARK 2026 offers a competitive platform to showcase innovation, performance, and real-world impact in space autonomy.

SPADES Dataset

SPADES (SPAcecraft Pose Estimation Dataset using Event Sensing) is a new space dataset designed to advance spacecraft pose estimation research. It contains two categories of data: Synthetic and Real.

Synthetic Dataset

The synthetic dataset simulates RGB images of a satellite target, in this case Proba-2, by moving a spacecraft model along predefined trajectories within the simulator's camera field of view.

  • Simulation Environment: Unreal Engine (UE) with various camera orientations and distances
  • Dynamic Background: Animated Sun and Earth rotating around their respective axes
  • Event Data: Generated by the ICNS event simulator using Blender to model neuromorphic sensor behavior
  • Scale: 300 trajectories with ~600 RGB images each
  • Total Pose Labels: 179,400
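These per-trajectory figures line up with the stated total. Assuming one pose label per rendered RGB frame, a quick arithmetic sanity check gives roughly 598 frames per trajectory, consistent with the "~600" figure above:

```python
# Sanity check on the synthetic dataset scale (assumes one pose label per RGB frame).
num_trajectories = 300
total_pose_labels = 179_400
print(total_pose_labels / num_trajectories)  # 598.0 frames per trajectory on average
```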

Real Dataset

Real data was collected at the Zero-G Laboratory at the SnT, University of Luxembourg, using a scaled mockup of the Proba-2 satellite.

  • Sensor: Prophesee Metavision EVK4-HD equipped with a Sony IMX636ES (HD) event vision sensor
  • Scale: 32 trajectories with ~530 pose labels each
  • Total Pose Labels: 16,900
  • Note: A subset of this data will be used as the test dataset for the challenge
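For participants new to event cameras, recordings from an EVK4-HD are typically read with the Prophesee Metavision SDK. The snippet below is a hedged sketch of iterating over one recording in fixed time slices; the file name is a placeholder and the exact SDK API may differ between versions, so treat it as an assumption rather than challenge-provided code.

```python
# Hedged sketch: reading slices of events from an EVK4 .raw recording with the
# Prophesee Metavision SDK. The file path is hypothetical.
from metavision_core.event_io import EventsIterator

recording = "proba2_mockup_trajectory_001.raw"  # placeholder file name
# Iterate over the stream in 10 ms slices; each slice is a structured numpy
# array with fields 'x', 'y', 'p' (polarity) and 't' (timestamp in microseconds).
for events in EventsIterator(input_path=recording, delta_t=10_000):
    if events.size == 0:
        continue
    print(f"{events.size} events between t={events['t'][0]} and t={events['t'][-1]} us")
    break
```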