Article

Highly-Parallel GPU Architecture for Lossy Hyperspectral Image Compression

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/JSTARS.2013.2247975

Keywords

Graphics processing unit (GPU); lossy hyperspectral image compression; CUDA

Funding

  1. HiPEAC network of excellence
  2. [TEC2011-28666.C04-04]

Graphics Processing Units (GPUs) are becoming a widespread tool for general-purpose scientific computing, and they are attracting interest for future onboard satellite image processing payloads because of their ability to perform massively parallel computations. This paper describes a GPU implementation of an algorithm for onboard lossy hyperspectral image compression and proposes an architecture that accelerates the compression task by parallelizing it on the GPU. The selected algorithm is amenable to parallel computation owing to its block-based operation, and it has been optimized here to facilitate GPU implementation while incurring negligible overhead with respect to the original single-threaded version. In particular, a parallelization strategy has been designed for both the compressor and the corresponding decompressor, which are implemented on a GPU using Nvidia's CUDA parallel architecture. Experimental results on several hyperspectral images with different spatial and spectral dimensions show significant speed-ups with respect to a single-threaded CPU implementation. These results highlight the benefits of GPUs for onboard image processing, and particularly image compression, demonstrating their potential as a future hardware platform for instruments with very high data rates.
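
The abstract attributes the algorithm's suitability for GPUs to its block-based operation, which maps naturally onto CUDA thread blocks. The paper's actual compression algorithm is not reproduced here; the following is only a minimal CUDA sketch of that kind of mapping, assuming a band-sequential (BSQ) hyperspectral cube and using a per-tile mean reduction as a stand-in for the real per-block coding work. All names (BLOCK_DIM, blockMeanKernel) are hypothetical.

    // Illustrative sketch: one CUDA thread block per 16x16 spatial tile of one
    // spectral band, one thread per pixel. The shared-memory reduction computes
    // a per-tile mean; it does NOT implement the paper's compression algorithm.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define BLOCK_DIM 16  // 16x16 spatial tile handled by one CUDA block

    __global__ void blockMeanKernel(const unsigned short *cube, float *tileMeans,
                                    int width, int height, int bands)
    {
        int band = blockIdx.z;
        int x = blockIdx.x * BLOCK_DIM + threadIdx.x;
        int y = blockIdx.y * BLOCK_DIM + threadIdx.y;

        __shared__ float partial[BLOCK_DIM * BLOCK_DIM];
        int tid = threadIdx.y * BLOCK_DIM + threadIdx.x;

        // Load one pixel (BSQ layout); out-of-range threads contribute zero.
        float v = 0.0f;
        if (x < width && y < height)
            v = (float)cube[(size_t)band * width * height + (size_t)y * width + x];
        partial[tid] = v;
        __syncthreads();

        // Tree reduction in shared memory to obtain the tile sum.
        for (int stride = BLOCK_DIM * BLOCK_DIM / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                partial[tid] += partial[tid + stride];
            __syncthreads();
        }

        if (tid == 0) {
            int tileId = (band * gridDim.y + blockIdx.y) * gridDim.x + blockIdx.x;
            tileMeans[tileId] = partial[0] / (BLOCK_DIM * BLOCK_DIM);
        }
    }

    int main()
    {
        // Small synthetic cube, just to exercise the kernel.
        const int width = 64, height = 64, bands = 4;
        size_t n = (size_t)width * height * bands;
        int tilesX = (width + BLOCK_DIM - 1) / BLOCK_DIM;
        int tilesY = (height + BLOCK_DIM - 1) / BLOCK_DIM;

        unsigned short *dCube;
        float *dMeans;
        cudaMalloc(&dCube, n * sizeof(unsigned short));
        cudaMemset(dCube, 0, n * sizeof(unsigned short));
        cudaMalloc(&dMeans, (size_t)tilesX * tilesY * bands * sizeof(float));

        dim3 block(BLOCK_DIM, BLOCK_DIM);
        dim3 grid(tilesX, tilesY, bands);
        blockMeanKernel<<<grid, block>>>(dCube, dMeans, width, height, bands);
        cudaDeviceSynchronize();

        printf("kernel finished: %s\n", cudaGetErrorString(cudaGetLastError()));
        cudaFree(dCube);
        cudaFree(dMeans);
        return 0;
    }

The point of the sketch is the decomposition itself: because a block-based coder processes each image block independently, every tile can be assigned to its own thread block with no inter-block communication, which is the property the paper exploits to obtain its speed-ups.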
