Over the past two and a half decades, the satellite-based Earth observation market has grown significantly. While optical imagery remains a high priority, Synthetic Aperture Radar (SAR) has become undeniably important over the past 15 years – a trend driven by continuous technology improvements and more vendors entering the market.

With our highly precise, very high resolution #TerraSAR-X and #TanDEM-X satellites, Airbus has been at the very forefront of this development since 2007. In 2018, #PAZ, the satellite of our Spanish partner Hisdesat, was launched. Together, the three satellites form our #Radar Constellation.

We are excited to launch our “SAR Social Media Tutorial” – a mini-series of social media posts entirely focused on spaceborne SAR. It has been created for our (future) SAR users who currently have little to no experience with this incredible technology.

It is an easy-to-digest introduction and will act as a reference guide, highlighting the basic principles of radar-based remote sensing, supported by some very exciting imagery examples.

With that being said, please enjoy this mini-series, starting with our first Episode on SAR basics.


Episode 1 SAR Basics



Unlike ‘passive’ optical systems, SAR sensors do not require external illumination (sunlight) in order to collect useful data. Being a so-called ‘active’ system, radar provides its own energy source to illuminate the area of interest and its targets – allowing radar to acquire data completely independently of daylight or weather conditions.

A Radar remote sensing system is based on three main functionalities:

  1. The transmission of the radar microwave signal to the ground at its specific wavelength (X-band in the case of our Radar Constellation).
  2. The reception of a portion of the transmitted energy as backscatter from the ground surface.
  3. The processing of the returned signal into an image – or, alternatively, a tailored monitoring or measurement product – considering signal strength and time delay.
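The role of time delay in step 3 can be made concrete: the radar measures each echo's round-trip travel time and converts it to a sensor-to-target distance (slant range) via the speed of light. A minimal sketch of this standard relation, using illustrative numbers rather than actual TerraSAR-X parameters:

```python
# Convert a radar echo's round-trip time delay into slant range.
# The signal travels to the target and back, hence the division by two.
C = 299_792_458.0  # speed of light in m/s

def slant_range(round_trip_time_s: float) -> float:
    """Distance from sensor to target, in metres."""
    return C * round_trip_time_s / 2.0

# Example: an echo returning after ~4.1 ms corresponds to a target
# roughly 615 km away, a plausible slant range for a satellite in
# low Earth orbit (illustrative value only).
print(slant_range(4.1e-3))  # → ~614,574 m
```

Signal strength is handled separately: it determines the brightness of the pixel placed at that range, while the delay determines where the pixel goes.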

Key take-away: Today’s Staring SpotLight image in sub-metre resolution has been acquired over Tórshavn, Faroe Islands, frequently ranked as one of the cities receiving the least sunlight in the world. As usual, the Radar Constellation image is clear, sharp and unaffected by these conditions.

Episode 2 Acquisition Modes and Resolution



Airbus’ Radar Constellation offers six different acquisition modes (or satellite settings), ranging from sub-metre resolution for hot-spot monitoring all the way to large-area coverage at decent resolution, benefiting services such as maritime surveillance.

  1. Today’s imagery provides an overview of Lago Gatun in the north and the Gulf of Panama in the south, with the famous Panama Canal shipping route almost centred in this Wide ScanSAR image, and
  2. a close-up of the actual lock, captured in Staring SpotLight mode at sub-metre resolution, providing a very high level of detail.

Key take-away: Our Radar Constellation offers a wide range of resolutions and area coverages, making it the perfect ‘toolbox’ for a broad variety of applications.

Episode 3 Radar Shadow



This sub-metre Radar Constellation Staring SpotLight image of the Pyramids of the Sun and the Moon in Teotihuacán, Mexico perfectly displays the so-called ‘radar shadow’.

While shadows in photographs are quite obvious, portions of radar images can also be subject to shadowing.

The radar satellite always looks sideways towards the pyramid – in this case from the right-hand side – as it measures signal runtime differences to correctly place the pixels across the image. The sides of the pyramids facing the sensor appear brighter due to significant signal returns, while the opposite sides of both temples appear much darker (= radar shadow), due to little or no signal returning to the sensor.

This is especially valuable for the interpretation of terrain or tall objects, since it adds height information to the analysis and can even be used to generate 3D models.
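The height information comes from simple geometry: over flat terrain, a vertical object of height h viewed at incidence angle θ casts a radar shadow of ground length roughly h·tan(θ), so the relation can be inverted. A small illustrative sketch of this textbook relation (not an operational product workflow; the pyramid height is a hypothetical round number):

```python
import math

def shadow_length_m(object_height_m: float, incidence_deg: float) -> float:
    """Ground-range length of the radar shadow cast by a vertical object.

    Flat-terrain approximation: shadow = height * tan(incidence angle),
    with the incidence angle measured from the vertical.
    """
    return object_height_m * math.tan(math.radians(incidence_deg))

def height_from_shadow_m(shadow_m: float, incidence_deg: float) -> float:
    """Invert the relation to estimate object height from its shadow."""
    return shadow_m / math.tan(math.radians(incidence_deg))

# A hypothetical 65 m pyramid imaged at 45° incidence casts a ~65 m shadow,
# since tan(45°) = 1; steeper incidence angles stretch the shadow further.
print(shadow_length_m(65.0, 45.0))       # → ~65.0 m
print(height_from_shadow_m(65.0, 45.0))  # → ~65.0 m
```

Measuring the shadow length in the image and knowing the acquisition's incidence angle is thus enough for a first height estimate.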

Key take-away: A radar shadow occurs when the radar beam cannot illuminate the ground surface, specifically behind objects, other vertical features, or steep slopes. A sharp radar shadow can be used to derive height information or, in other use cases, can help to identify the shape of a specific object.

Episode 4 Layover



This is a Radar Constellation Staring SpotLight sub-metre resolution image of Downtown Honolulu, Hawaii.

Similar to the distortions encountered when using cameras, radar images are also subject to geometric distortions due to relief displacement.

The so-called layover always occurs when a radar beam reaches the top of an object (e.g. a tall building, as shown in this image) before it reaches its base on the ground. The reflection from the top of the building is therefore received back at the sensor before the one from the bottom. In the image, the buildings appear to be ‘leaning’ towards the sensor, laying over their actual bases. Layover is helpful to get a clear view of a building’s structure, to count the number of floors and even to calculate the object’s height.
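The height calculation can be sketched with the standard flat-terrain geometry: in ground range, the top of a structure of height h is displaced towards the sensor by roughly h / tan(θ) for incidence angle θ, so a measured layover offset can be inverted for height. An illustrative sketch (hypothetical building height and angle, not values from today's image):

```python
import math

def layover_offset_m(height_m: float, incidence_deg: float) -> float:
    """Ground-range displacement of an object's top towards the sensor,
    flat-terrain approximation: offset = height / tan(incidence angle)."""
    return height_m / math.tan(math.radians(incidence_deg))

def height_from_layover_m(offset_m: float, incidence_deg: float) -> float:
    """Estimate building height from a measured layover offset."""
    return offset_m * math.tan(math.radians(incidence_deg))

# A hypothetical 100 m tower imaged at 35° incidence 'leans' about
# 143 m towards the sensor in the ground-range image.
print(layover_offset_m(100.0, 35.0))  # → ~142.8 m
```

Note the complementary behaviour to radar shadow: shadow stretches away from the sensor with tan(θ), layover towards it with 1/tan(θ).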

Key take-away: Layover has to be taken into consideration when analysing radar imagery, specifically with respect to tall objects or steep terrain. In doing so, additional information (e.g. height, detailed object structure) can be derived.

Episode 5 Backscattering



This Radar Constellation High Resolution SpotLight image has been collected over an area in Bavaria, Germany.

Radar sensors always look sideways towards the ground at a specific, selectable incidence angle. This causes a 3D look of the area due to radar shadow and layover effects (see #Episodes 3 and #4 for more information). At the same time, all objects and materials on the ground have a different surface roughness in relation to the sensor’s wavelength, leading to different behaviour in signal reflection. This ultimately causes the individual shades of grey in radar images (also called intensity):

  • Smooth surfaces (e.g. airport runways, parking areas, calm water surfaces) will appear dark to black.
  • Bright pixels are often found in urban areas, at metal objects or at river banks and lake shores.
  • Forests show a bright side where they face the radar and a darker edge on the opposite side.
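The roughness-versus-wavelength relation can be quantified with the commonly used Rayleigh criterion: a surface acts ‘smooth’ (and hence dark) for the radar if its height variation stays below λ / (8·cos θ). A small sketch of this general criterion, using the approximate X-band wavelength of 3.1 cm (a textbook rule of thumb, not an Airbus-specific threshold):

```python
import math

def rayleigh_smooth_threshold_m(wavelength_m: float, incidence_deg: float) -> float:
    """Maximum RMS height variation for a surface to appear 'smooth'
    (i.e. mirror-like, reflecting away from the sensor and thus dark)
    per the Rayleigh criterion: h < lambda / (8 * cos(theta))."""
    return wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))

X_BAND_WAVELENGTH = 0.031  # metres, roughly 9.65 GHz

# At 40° incidence, surfaces varying by less than ~5 mm (calm water,
# runways, parking lots) scatter the beam away and appear dark to black.
print(rayleigh_smooth_threshold_m(X_BAND_WAVELENGTH, 40.0) * 1000)  # → ~5.1 mm
```

This is why the same field can look smooth to a long-wavelength radar but rough, and hence bright, in X-band.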

Key take-away: A single radar image has roughly 65,000 different shades of grey (16-bit data), representing structural information such as objects or surfaces.

Episode 6 Polarisation



This Radar Constellation image in StripMap mode at 3 m spatial resolution has been collected over Po Yang Hu Reservoir in China. It is presented here in two different layers, so-called ‘polarisations’, and their colour combination.
Recall #Episode 1: Radar sensors actively transmit a signal (i.e. an electromagnetic wave) to the ground and receive its echo off the ground. This is done in a controlled way – either as a horizontally (H = left to right) or vertically (V = up and down) polarised wave. This is comparable to a polarising filter on a hand-held camera: turn the filter to block out certain portions of the sunlight and see the landscape more clearly.
Each polarisation combination at the transmit and receive stages provides a separate image; for example, VV-polarised waves interact with the vertical stalks of a plant canopy, whilst HH-polarised waves penetrate through the canopy.
Therefore, combining the polarisation channels results in a false-colour image, which helps to differentiate among ground cover types such as vegetation or surface material classes. Whilst a variety of polarisations is possible, today’s image showcases a comparison between the polarisation types ‘single HH’, ‘single VV’ and ‘dual HH/VV’.
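The channel combination can be sketched with NumPy: assign one polarisation image per RGB channel and normalise. The channel assignment below (HH → red, VV → green, their mean → blue) is one common illustrative choice, not necessarily the scheme used for today's image, and the random arrays merely stand in for real amplitude data:

```python
import numpy as np

def dual_pol_composite(hh: np.ndarray, vv: np.ndarray) -> np.ndarray:
    """Build a simple false-colour RGB from two polarisation channels.

    Red = HH, green = VV, blue = mean of both; each channel is
    normalised to [0, 1] independently. Pixels where HH and VV differ
    strongly take on a colour cast, revealing polarisation-sensitive
    ground cover."""
    def norm(a: np.ndarray) -> np.ndarray:
        a = a.astype(float)
        return (a - a.min()) / (a.max() - a.min() + 1e-12)

    return np.dstack([norm(hh), norm(vv), norm((hh + vv) / 2.0)])

# Toy 4x4 'amplitude' images standing in for real HH/VV acquisitions.
rng = np.random.default_rng(0)
hh = rng.random((4, 4))
vv = rng.random((4, 4))
rgb = dual_pol_composite(hh, vv)
print(rgb.shape)  # → (4, 4, 3)
```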

Key take-away: Radar can create imagery with a variety of polarisation combinations. Combined, this will help delineate land cover types and vegetation classes.

Episode 7 Colour Composite



This colour composite image of London, ON, Canada has been created from three StripMap images at 3 m resolution, collected over the area in October and November. Each of these greyscale images can already be exploited quite substantially by itself, and yet users often prefer additional and/or easy-to-access information (colour).

In an optical image, an overview of e.g. crop/plant health can be achieved by combining the red, green and near-infrared bands into colour composites. Radar sensors do not provide this type of spectral information; they usually work at one preset frequency band (here X-band). However, ‘colourisation’ can be achieved by combining either the same image in different polarisations or multiple images collected on different dates. In today’s example, three radar images have been collected on different dates (using ‘HH’ polarisation). Afterwards, each image has been assigned to an individual colour channel (1st image = red, 2nd image = green, 3rd image = blue). Combined, these images result in the shown colour composite. Due to additive colour mixing, additional colours such as yellow, magenta, cyan and white can occur in the image – and, of course, black where none of the images contain any information.
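The date-to-channel assignment described above is essentially a stacking operation. A minimal NumPy sketch with toy arrays in place of the three real, co-registered StripMap acquisitions:

```python
import numpy as np

def multitemporal_composite(date1, date2, date3) -> np.ndarray:
    """RGB colour composite from three co-registered single-pol images:
    1st acquisition -> red, 2nd -> green, 3rd -> blue.

    Areas that are unchanged across all dates get equal contributions
    in every channel and appear grey; change between dates produces
    colour via additive mixing (e.g. bright on date 1 only -> red)."""
    stack = np.dstack([date1, date2, date3]).astype(float)
    lo, hi = stack.min(), stack.max()
    return (stack - lo) / (hi - lo + 1e-12)  # joint normalisation to [0, 1]

# Toy example: a 'field' that is bright on date 1 only turns red.
d1 = np.array([[1.0, 0.2], [0.2, 0.2]])
d2 = np.array([[0.1, 0.2], [0.2, 0.2]])
d3 = np.array([[0.1, 0.2], [0.2, 0.2]])
rgb = multitemporal_composite(d1, d2, d3)
print(rgb[0, 0])  # high red, near-zero green/blue → reddish pixel
```

The joint (rather than per-channel) normalisation preserves the grey appearance of unchanged areas.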

Key take-away: Colour composite images are specifically valuable for land cover classification purposes. The different colours reflect the crop type as well as the change in crop condition during a defined period (e.g. growth/removal) – the harvesting period in our example.

Episode 8 Amplitude Change Detection (ACD)



This Amplitude Change Detection (ACD) image of India’s Indira Gandhi International Airport has been created from two Radar Constellation Staring SpotLight images, acquired on different dates and at sub-metre resolution.

In the case of ACD, change is identified based on backscatter (pixel grey value) differences between repeated acquisitions; thus, simply overlaying subsequent images is sufficient to identify changes. By allocating the individual images to different colour channels (e.g. ‘green’ and ‘red’) and generating a colour composite of at least two acquisitions, changes become easily visible as colourised areas. Grey tones represent those objects which have shown no change between the two observations.
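The overlay principle can be sketched as a two-channel composite plus a simple thresholded change mask. A toy sketch with synthetic data (the threshold value and colour assignment are illustrative choices, not the operational ACD chain):

```python
import numpy as np

def acd_composite(before: np.ndarray, after: np.ndarray, threshold: float = 0.2):
    """Simple amplitude change detection between two co-registered images.

    Returns an RGB composite (after -> red, before -> green, blue empty)
    plus a boolean mask flagging pixels whose amplitude difference
    exceeds the threshold. Unchanged pixels mix red and green equally
    and appear grey/yellowish; new objects appear red, vanished ones green."""
    before = before.astype(float)
    after = after.astype(float)
    rgb = np.dstack([after, before, np.zeros_like(before)])
    change_mask = np.abs(after - before) > threshold
    return rgb, change_mask

# Toy scene: one bright 'aircraft' pixel appears between the two dates.
before = np.full((3, 3), 0.3)
after = before.copy()
after[1, 1] = 0.9  # new bright object
_, mask = acd_composite(before, after)
print(mask.sum())  # → 1 changed pixel
```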

ACD is specifically useful for hot-spot monitoring in order to detect changes to man-made objects (e.g. buildings, or vehicles such as the planes in today’s image), flooded areas or mass movements on terrain slopes. Automated methods can be employed to help the analyst detect the changes.

Key take-away: Structural changes among a set of repeated image acquisitions can automatically be detected and made available to a GIS for easy access to the information.

Episode 9 Coherence Change Detection (CCD)

Similar to the previous #Episode 8, this Coherence Change Detection (CCD) image of Indira Gandhi Int’l Airport, New Delhi has been created from two Radar Constellation Staring SpotLight images, collected at an interval of 11 days.
CCD is the analysis of radar image coherence (or ‘similarity’) based on the measurement of phase information that is included in the Radar data.

This very sensitive method allows early detection of even subtle changes that are not detectable otherwise. These changes do not necessarily have to be object-based (e.g. the appearance or removal of a vehicle, or the demolition of a building), but could also be caused by groundworks/earthworks or by vehicles using a gravel road.
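The coherence ('similarity') measure itself is a normalised cross-correlation of the two complex images, averaged over an estimation window. A minimal sketch with synthetic complex data (real processing additionally requires precise co-registration and phase corrections, which are omitted here):

```python
import numpy as np

def coherence(s1: np.ndarray, s2: np.ndarray) -> float:
    """Estimate coherence between two complex SAR images over a window:

        |<s1 * conj(s2)>| / sqrt(<|s1|^2> * <|s2|^2>)

    Returns a value in [0, 1]: ~1 means the scene is unchanged between
    acquisitions, ~0 means it has fully decorrelated (i.e. changed)."""
    num = np.abs(np.mean(s1 * np.conj(s2)))
    den = np.sqrt(np.mean(np.abs(s1) ** 2) * np.mean(np.abs(s2) ** 2))
    return float(num / (den + 1e-12))

# Synthetic complex 'speckle' patches standing in for real image windows.
rng = np.random.default_rng(1)
scene = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
noise = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))

print(round(coherence(scene, scene), 3))  # → 1.0 (no change)
print(coherence(scene, noise))            # low value (decorrelated)
```

Because the phase changes with even millimetre-scale surface disturbance, coherence drops sharply wherever anything moved between the two dates.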

This CCD image is a colour composite of two radar images with the same viewing geometry, together with their coherence image.

The colours indicate changes (green = day 1, red = day 2) as well as stable objects and surfaces (white = buildings, black = tarmac, blue = concrete). They give a quick overview of recent site activities.



Key take-away: SAR based CCD is used to detect even small and subtle differences between two or more Radar images, indicating activities over an area at a very early stage. It is also used for image segmentation and classification, e.g. by machine learning algorithms to help indicate the major land cover types.