FDL Europe's machine learning experiments are consolidated under an initiative called NIO, or Networked Intelligence in Orbit - part of a unifying vision to build a world-class capability in ML-enhanced hybrid observations and decision intelligence in space. NIO plays a crucial role in FDL Europe and Trillium's ambition to develop an intelligent cyberinfrastructure for disaster response and climate adaptation, enhancing Earth system predictability and planetary stewardship over the coming decades.

The FDL research sprint produces outcomes at technology readiness levels (TRL) 4-5. These outcomes are then integrated into prototype applications, or ML Payloads, which operate at TRL 7 as NIO experiments.

Our overarching vision is motivated by the potential to connect Digital Twins with Networked Intelligence in Orbit, with a specific emphasis on Physics-Informed ML. We believe this combination, which we call a ‘Live Twin’, will be vastly more powerful than any component in isolation.

NIO.space machine learning enables multi-vantage-point observation across instruments and spacecraft, sometimes referred to as ‘hybrid observation’.
https://www.nature.com/articles/s41598-023-44918-6

NIO.space machine learning on spacecraft

NIO.space ML payloads on board spacecraft can inform large physics-informed simulations running terrestrially.

NIO.space projects

WorldFloods

SSL for SAR

DTACS

Kessler

RaVÆn

STARCOP

Karman

NIO.space select publications


Semantic segmentation of methane plumes with hyperspectral machine learning models

Vít Růžička, Gonzalo Mateo-Garcia, Luis Gómez-Chova, Anna Vaughan, Luis Guanter & Andrew Markham

Read the paper here (published 17 November 2023)

Abstract: Methane is the second most important greenhouse gas contributor to climate change; at the same time its reduction has been denoted as one of the fastest pathways to preventing temperature growth due to its short atmospheric lifetime. In particular, the mitigation of active point-sources associated with the fossil fuel industry has a strong and cost-effective mitigation potential. Detection of methane plumes in remote sensing data is possible, but the existing approaches exhibit high false positive rates and need manual intervention. Machine learning research in this area is limited due to the lack of large real-world annotated datasets. In this work, we are publicly releasing a machine learning ready dataset with manually refined annotation of methane plumes. We present labelled hyperspectral data from the AVIRIS-NG sensor and provide simulated multispectral WorldView-3 views of the same data to allow for model benchmarking across hyperspectral and multispectral sensors. We propose sensor agnostic machine learning architectures, using classical methane enhancement products as input features. Our HyperSTARCOP model outperforms a strong matched filter baseline by over 25% in F1 score, while reducing its false positive rate per classified tile by over 41.83%. Additionally, we demonstrate zero-shot generalisation of our trained model on data from the EMIT hyperspectral instrument, despite the differences in the spectral and spatial resolution between the two sensors: in an annotated subset of EMIT images, HyperSTARCOP achieves a 40% gain in F1 score over the baseline.
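As a rough illustration of the approach described in the abstract, the sketch below computes a classical matched-filter enhancement from a hyperspectral cube and feeds it to a small segmentation network. The band count, layer sizes and toy CNN are illustrative assumptions, not the published HyperSTARCOP architecture.

```python
# Illustrative sketch only: classical matched-filter enhancement used as the
# input feature for a toy plume-segmentation CNN (not the HyperSTARCOP model).
import numpy as np
import torch
import torch.nn as nn

def matched_filter(cube: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Per-pixel matched-filter score for a (H, W, B) cube and a (B,) methane signature."""
    H, W, B = cube.shape
    x = cube.reshape(-1, B).astype(np.float64)
    centred = x - x.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(centred, rowvar=False) + 1e-6 * np.eye(B))
    score = (centred @ cov_inv @ target) / (target @ cov_inv @ target)
    return score.reshape(H, W)

class TinyPlumeSegmenter(nn.Module):
    """Toy stand-in for a sensor-agnostic segmentation network."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),            # per-pixel plume logit
        )
    def forward(self, x):
        return self.net(x)

# Dummy usage: enhancement product -> segmentation logits -> binary plume mask.
cube = np.random.rand(64, 64, 50).astype(np.float32)     # hypothetical hyperspectral tile
signature = np.random.rand(50)                            # placeholder methane signature
enhancement = matched_filter(cube, signature)
x = torch.from_numpy(enhancement)[None, None].float()     # (1, 1, H, W)
plume_mask = torch.sigmoid(TinyPlumeSegmenter()(x)) > 0.5
```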

In-orbit demonstration of a re-trainable machine learning payload for processing optical imagery

Gonzalo Mateo-Garcia, Josh Veitch-Michaelis, Cormac Purcell, Nicolas Longepe, Simon Reid, Alice Anlind, Fredrik Bruhn, James Parr & Pierre Philippe Mathieu 

Read the paper here (published 27 June 2023)

Abstract: Cognitive cloud computing in space (3CS) describes a new frontier of space innovation powered by Artificial Intelligence, enabling an explosion of new applications in observing our planet and enabling deep space exploration. In this framework, machine learning (ML) payloads—isolated software capable of extracting high level information from onboard sensors—are key to accomplish this vision. In this work we demonstrate, in a satellite deployed in orbit, a ML payload called ‘WorldFloods’ that is able to send compressed flood maps from sensed images. In particular, we perform a set of experiments to: (1) compare different segmentation models on different processing variables critical for onboard deployment, (2) show that we can produce, onboard, vectorised polygons delineating the detected flood water from a full Sentinel-2 tile, (3) retrain the model with few images of the onboard sensor downlinked to Earth and (4) demonstrate that this new model can be uplinked to the satellite and run on new images acquired by its camera. Overall our work demonstrates that ML-based models deployed in orbit can be updated if new information is available, paving the way for agile integration of onboard and on-ground processing and “on the fly” continuous learning.
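One way to picture the retraining step is to fine-tune only a small output head on a handful of downlinked tiles, so that only a few kilobytes of weights need to be uplinked back to the spacecraft. The toy model, data shapes and training loop below are assumptions for illustration, not the flight segmentation model or procedure.

```python
# Hedged sketch: fine-tune only the final layer of a flood-segmentation model
# on a few downlinked tiles, then save the small update for uplink.
# Everything below (model, shapes, hyperparameters) is illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for the payload model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                    # two classes: water / land
)

for p in model[:-1].parameters():           # freeze the backbone
    p.requires_grad = False
head = model[-1]                            # only this layer is adapted and uplinked

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

few_images = torch.rand(4, 3, 128, 128)     # a few tiles from the onboard sensor
few_labels = torch.randint(0, 2, (4, 128, 128))

for _ in range(50):                         # short on-ground fine-tuning loop
    opt.zero_grad()
    loss_fn(model(few_images), few_labels).backward()
    opt.step()

torch.save(head.state_dict(), "flood_head_update.pt")   # small artefact to uplink
```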


Fast model inference and training on-board of Satellites

Vít Růžička, Gonzalo Mateo-García, Chris Bridges, Chris Brunskill, Cormac Purcell, Nicolas Longépé, Andrew Markham

Read the paper here (published 17 Jul 2023)

Artificial intelligence onboard satellites has the potential to reduce data transmission requirements and to enable real-time decision-making and collaboration within constellations. This study deploys a lightweight foundational model called RaVAEn on D-Orbit's ION SCV004 satellite. RaVAEn is a variational auto-encoder (VAE) that generates compressed latent vectors from small image tiles, enabling several downstream tasks. In this work we demonstrate the reliable use of RaVAEn onboard a satellite, achieving an encoding time of 0.110s for tiles of a 4.8x4.8 km2 area. In addition, we showcase fast few-shot training onboard a satellite using the latent representation of data. We compare the deployment of the model on the on-board CPU and on the available Myriad vision processing unit (VPU) accelerator. To our knowledge, this work shows for the first time the deployment of a multi-task model on-board a CubeSat and the on-board training of a machine learning model.
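A minimal sketch of the few-shot idea, under assumed sizes: tiles are encoded once into compact latent vectors, after which only a tiny linear head is trained, which is cheap enough for constrained onboard hardware. The encoder, latent dimension and task below are illustrative, not RaVAEn's actual configuration.

```python
# Sketch: encode tiles to compact latents with a frozen VAE-style encoder,
# then few-shot train a small linear head on those latents. Sizes are assumed.
import torch
import torch.nn as nn

LATENT = 128

encoder = nn.Sequential(                          # toy encoder (mean head only)
    nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, LATENT),
)
encoder.eval()

tiles = torch.rand(16, 4, 32, 32)                 # small multispectral tiles
with torch.no_grad():
    latents = encoder(tiles)                      # (16, LATENT) compressed vectors

labels = torch.randint(0, 2, (16,))               # e.g. "event" vs "no event"
head = nn.Linear(LATENT, 2)                       # only this is trained on board
opt = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):                              # lightweight training loop
    opt.zero_grad()
    loss_fn(head(latents), labels).backward()
    opt.step()
```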



RaVÆn: unsupervised change detection of extreme events using ML on-board satellites

Vít Růžička, Anna Vaughan, Daniele De Martini, James Fulton, Valentina Salvatelli, Chris Bridges, Gonzalo Mateo-Garcia & Valentina Zantedeschi 

Read the paper here (published 8 October 2022)

Applications such as disaster management enormously benefit from rapid availability of satellite observations. Traditionally, data analysis is performed on the ground after being transferred—downlinked—to a ground station. Constraints on the downlink capabilities, both in terms of data volume and timing, therefore heavily affect the response delay of any downstream application. In this paper, we introduce RaVÆn, a lightweight, unsupervised approach for change detection in satellite data based on Variational Auto-Encoders (VAEs), with the specific purpose of on-board deployment. RaVÆn pre-processes the sampled data directly on the satellite and flags changed areas to prioritise for downlink, shortening the response time. We verified the efficacy of our system on a dataset—which we release alongside this publication—composed of time series containing a catastrophic event, demonstrating that RaVÆn outperforms pixel-wise baselines. Finally, we tested our approach on resource-limited hardware for assessing computational and memory limitations, simulating deployment on real hardware.
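The core scoring step can be pictured as a distance between latent vectors of the same tile across consecutive acquisitions, with the highest-scoring tiles prioritised for downlink. The metric and threshold in this sketch are illustrative choices, not the paper's exact scoring.

```python
# Sketch of latent-space change scoring for downlink prioritisation.
# Distance metric and threshold are illustrative, not the published scheme.
import torch

def change_scores(latents_before: torch.Tensor, latents_after: torch.Tensor) -> torch.Tensor:
    """Per-tile change score as cosine distance between latent vectors."""
    cos = torch.nn.functional.cosine_similarity(latents_before, latents_after, dim=1)
    return 1.0 - cos                               # higher = more change

# Dummy latents for N tiles (in practice produced by the on-board VAE encoder).
before = torch.rand(100, 128)
after = torch.rand(100, 128)
scores = change_scores(before, after)
priority = torch.argsort(scores, descending=True)  # most-changed tiles first
flagged = priority[scores[priority] > 0.5]         # simple threshold for flagging
```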


Towards global flood mapping onboard low cost satellites with machine learning

Gonzalo Mateo-Garcia, Joshua Veitch-Michaelis, Lewis Smith, Silviu Vlad Oprea, Guy Schumann, Yarin Gal, Atılım Güneş Baydin and Dietmar Backes

Read the paper here (published 31 March 2021)

Spaceborne Earth observation is a key technology for flood response, offering valuable information to decision makers on the ground. Very large constellations of small nanosatellites (’CubeSats’) are a promising solution to reduce revisit time in disaster areas from days to hours. However, data transmission to ground receivers is limited by constraints on the power and bandwidth of CubeSats. Onboard processing offers a solution to decrease the amount of data to transmit by reducing large sensor images to smaller data products. ESA’s recent PhiSat-1 mission aims to facilitate the demonstration of this concept, providing the hardware capability to perform onboard processing by including a power-constrained machine learning accelerator and the software to run custom applications. This work demonstrates a flood segmentation algorithm that produces flood masks to be transmitted instead of the raw images, while running efficiently on the accelerator aboard the PhiSat-1. Our models are trained on WorldFloods: a newly compiled dataset of 119 globally verified flooding events from disaster response organizations, which we make available in a common format. We test the system on independent locations, demonstrating that it produces fast and accurate segmentation masks on the hardware accelerator, acting as a proof of concept for this approach.
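A back-of-the-envelope sketch of the data-volume argument, under assumed tile dimensions: a 1-bit flood mask is orders of magnitude smaller than the raw multi-band tile it summarises. The tile size and band count below are assumptions, not the PhiSat-1 sensor configuration.

```python
# Rough data-volume comparison: raw multispectral tile vs. packed 1-bit flood mask.
# Tile dimensions and band count are illustrative assumptions.
import numpy as np

H, W, BANDS, BYTES_PER_SAMPLE = 512, 512, 13, 2       # e.g. a 16-bit multispectral tile
raw_bytes = H * W * BANDS * BYTES_PER_SAMPLE

flood_mask = np.random.rand(H, W) > 0.8               # stand-in for the CNN's output mask
mask_bytes = np.packbits(flood_mask).nbytes           # 1 bit per pixel

print(f"raw tile:   {raw_bytes / 1e6:.1f} MB")
print(f"flood mask: {mask_bytes / 1e3:.1f} kB "
      f"({raw_bytes / mask_bytes:.0f}x smaller, before any further compression)")
```

This reduction, even before any compression of the mask itself, is what makes transmitting data products instead of raw imagery attractive under CubeSat bandwidth constraints.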

Onboard Cloud Detection and Atmospheric Correction with Deep Learning Emulators

Gonzalo Mateo-García, Cesar Aybar, Giacomo Acciarini, Vít Růžička, Gabriele Meoni, Nicolas Longépé, Luis Gómez-Chova

Read the paper here (published 16-21 July 2023)

This paper introduces DTACSNet, a Convolutional Neural Network (CNN) model specifically developed for efficient onboard atmospheric correction and cloud detection in optical Earth observation satellites. The model is developed with Sentinel-2 data. Through a comparative analysis with the operational Sen2Cor processor, DTACSNet demonstrates significantly better performance in cloud scene classification (F2 score of 0.89 for DTACSNet compared to 0.51 for Sen2Cor v2.8) and surface reflectance estimation with an average absolute error below 2% in reflectance units. Moreover, we tested DTACSNet on hardware-constrained systems similar to recently deployed missions and show that DTACSNet is 11 times faster than Sen2Cor with a significantly lower memory footprint. These preliminary results highlight the potential of DTACSNet to provide enhanced efficiency, autonomy, and responsiveness in onboard data processing for Earth observation satellite missions.
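In the spirit of the description above, the sketch below shows a two-head CNN that regresses surface reflectance and classifies clouds per pixel from top-of-atmosphere input. Layer sizes, band count and the cloud class set are assumptions, not the DTACSNet architecture.

```python
# Toy two-head emulator: surface-reflectance regression plus per-pixel cloud
# classification from top-of-atmosphere input. Sizes are illustrative only.
import torch
import torch.nn as nn

class ToyCorrectionEmulator(nn.Module):
    def __init__(self, bands: int = 13, cloud_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.reflectance_head = nn.Conv2d(32, bands, 1)    # surface reflectance per band
        self.cloud_head = nn.Conv2d(32, cloud_classes, 1)  # e.g. clear / cloud / shadow

    def forward(self, toa):
        feats = self.backbone(toa)
        return self.reflectance_head(feats), self.cloud_head(feats)

toa = torch.rand(1, 13, 128, 128)                           # dummy top-of-atmosphere tile
surface_reflectance, cloud_logits = ToyCorrectionEmulator()(toa)
cloud_mask = cloud_logits.argmax(dim=1)                     # per-pixel cloud class map
```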