Data Processing - Vegetation System

Introduction

The quality and operationality of the VEGETATION remote sensing system rely not only on the satellite and the instrument, but also on the ground segment, and particularly on the Image Processing Centre. All data acquired by the VEGETATION instruments over the whole globe have, since the beginning of the mission, been downloaded at the Kiruna receiving station and transferred for archiving and processing to the Central Processing Facility at Mol (Belgium): the CTIV (Centre de Traitement des Images VEGETATION).

The next figure gives an overview of the main processing algorithms used in the VEGETATION Image Processing Centre. From the telemetry received from the receiving station to the final products delivered to the users, the main steps of the process are geometric processing, radiometric corrections, atmospheric corrections, synthesis compositing, and product formatting. Each step is described in the following chapters.

 


 

Figure 1: the general scheme of VEGETATION data processing at CTIV.

1. Geometrical processing

In this chapter, we describe the computations performed to find, for each pixel of a map-projected product, its location in the raw image. The accuracy of the results depends on the accuracy of the basic inputs, such as the satellite location or the camera model. Because some inputs are not accurate enough, extra processing is performed to correct or reduce certain biases, e.g. in the satellite attitude.

1.1. The origin of the geometrical errors

If the satellite were perfectly located at a given time and its attitude perfectly known, and if the cameras were perfect and identical, the pixel location accuracy would be limited only by inaccuracies in the cartographic model. In the real world, the following paragraphs detail the most important error sources, give an order of magnitude for each, and explain how they are corrected when possible.

Dating

There is an uncertainty of about 10 milliseconds in the on-board time, coming from the on-board time data accuracy (4 ms) and the drift of the on-board clock. This leads to a mislocation of the satellite of less than 0.1 km.

Satellite location

The satellite location can be estimated in the CTIV using two methods. The first uses an orbit model called MADRAS, which gives a location accuracy of about 0.3 km; the other is based on measurements from an accurate on-board navigator called DORIS/DIODE, which gives a much better location accuracy of about 10 metres. The second method is normally used in the CTIV, but in case of a DORIS/DIODE service interruption, the CTIV switches automatically to the MADRAS model.

Satellite absolute attitude

The SPOT satellite attitude movements are measured by a set of gyrometers, but the absolute attitude is known only from a sun sensor and an earth sensor, whose accuracy is no better than 0.6 milliradians (with peaks around 1 milliradian). This leads to a 0.5 km RMS mislocation on the images. This rather large error must be corrected to meet the VEGETATION mission requirements: a Ground Control Point workshop is used for that purpose.

Satellite relative attitude

The gyrometer data are used to measure the satellite attitude movements during the viewing. On a short viewing their errors are negligible, but on very long segments a maximum error of 0.6 milliradians can occur, leading to a mislocation of 0.5 km between the first and last lines of the segment.
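These orders of magnitude are easy to verify with a small-angle computation. A minimal sketch in Python, assuming a SPOT orbital altitude of roughly 830 km and a ground-track speed of roughly 6.6 km/s (neither value is stated in this text):

```python
# Order-of-magnitude check of the geometric error budget.
# Assumption: SPOT orbital altitude of ~830 km (not given in the text).
ALTITUDE_KM = 830.0

def mislocation_km(attitude_error_rad):
    """Small-angle approximation: ground error = altitude x angular error."""
    return ALTITUDE_KM * attitude_error_rad

# 0.6 mrad absolute attitude error -> about 0.5 km on the ground
print(mislocation_km(0.6e-3))

# 10 ms dating error at a ground-track speed of ~6.6 km/s -> < 0.1 km
print(6.6 * 0.010)
```

Both results match the error figures quoted above (about 0.5 km for attitude, under 0.1 km for dating).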

Inter-camera misregistration

Because there are 4 distinct instruments in the VEGETATION payload, there are some differences between them, originating from optical or mechanical items. These differences are less than half a pixel (about 0.6 km), but they are corrected, because they were measured on the ground before launch. Consequently, the residual misregistration after correction is about 0.1 pixel.

Figure 2: the geometric perturbations of data acquired from a satellite

1.2. The Geometrical modelling

The first step of the geometrical processing is to build two location grids referring to a « system » projection. This step, called geometrical modelling, produces two tables giving, for every 8th pixel in column and row, its location in a map reference on a specific map projection called the « system projection ». This map projection is in fact a set of 20 UTM projections tangential to the track of the satellite, chosen to fit the given track as well as possible. The location is given for 2 elevations (0 and 5000 m); in this way, the true elevation of the site is taken into account only at the last step of the processing, allowing the Elevation Model to be updated without re-computing the location grids.

For this purpose, the satellite location is computed either from an orbit bulletin (MADRAS) computed in the SPOT Control Centre, or from the on-board navigator (DORIS) data available in the telemetry, which give the location of the satellite directly in an earth reference frame. The attitude of the satellite is estimated from the gyrometer data, which take into account the satellite movements during the viewing. The positioning of the payload on SPOT4 and of the 4 cameras on the payload, which are constant biases measured before launch, are also taken into account. For each pixel of the line, the viewing directions take into account the lens characteristics, including the lens distortion specific to each camera.

The steps above make it possible to trace a virtual ray from the satellite to the earth surface and to determine the location of the viewed area corresponding to a given pixel. This geometrical modelling is performed for the B3 band; the pixels of the other bands are located by their offsets relative to B3 (dl, dp).
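The two-elevation trick described above amounts to a per-pixel linear interpolation between the grids computed at 0 m and 5000 m. A minimal sketch, with simplified grid contents (a single (x, y) map coordinate per pixel):

```python
def locate(grid_0m, grid_5000m, elevation_m):
    """Linearly interpolate a ground location between the two location
    grids computed at 0 m and 5000 m elevation (sketch of the CTIV idea;
    each grid entry is an (x, y) coordinate in the 'system projection')."""
    t = elevation_m / 5000.0
    x = grid_0m[0] + t * (grid_5000m[0] - grid_0m[0])
    y = grid_0m[1] + t * (grid_5000m[1] - grid_0m[1])
    return x, y

# A pixel whose grid locations differ by 1 km between the two reference
# elevations, viewed over terrain at 2500 m, lands halfway in between:
print(locate((100.0, 200.0), (101.0, 200.0), 2500.0))  # (100.5, 200.0)
```

Because only this last step depends on the terrain height, a new Digital Elevation Model can be swapped in without recomputing the grids themselves.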

Figure 3: Geometric deformation of a segment acquired from a satellite  

1.3. GCP pointing to improve location accuracy

The previous step gives quite a good location accuracy (500 to 800 m RMS), but it is not accurate enough for multitemporal registration (300 m expected). The biggest error source is the uncertainty in the absolute attitude of the satellite. The solution chosen to improve the location is a Ground Control Point (GCP) system. For each of the 14 orbits per day, a GCP session is performed: a subsampled view of the viewing segments of the orbit is displayed to the operator, together with the several tens of GCPs available within the area. The operator chooses about ten GCPs located in the cloud-free parts of the image and triggers the computation. The result is another set of location grids, ready to be used in the subsequent steps. This GCP workshop has been used since March 1999, and some significant improvements were made early in 2000. The GCP database contains 3650 GCPs, built from VEGETATION image chips previously pinpointed with SPOT HRV images, themselves located with GCPs on maps. Seasonal effects are taken into account: some GCPs are used only in certain seasons, to avoid bad correlations on areas subject to variations (vegetation, lakes, ...).

For the VEGETATION 2 instrument on board SPOT5, the use of GCPs is no longer necessary, thanks to a star tracker that provides highly accurate attitude measurements.

Figure 4: spatial distribution of the Ground Control points used for geometric rectification

2. Segments Archiving

The lowest-level core archive is built from the raw image data as received from the Kiruna receiving station. Auxiliary data are appended to these image data: the descriptive data (timing, orbit bulletin, gains, ...) and the geometrical grids resulting from the computations described above. About 1.2 Gbytes of archive data are produced every day. This level of archive, called the « segment archive », is recorded on 35 or 70 Gbyte DLT tape cartridges stored in an automated library (« juke-box »), to avoid cartridge handling by the operators. This archive is only for internal CTIV use; it cannot be exported in a standard format. An archive clone is stored in another building, to avoid losing the core archive in case of a disaster in the CTIV building. Every P product is built from this level of data; consequently, any improvement of the calibration data, Digital Elevation Model, or projection algorithm is taken into account immediately, with an effect on the delivered product. As a result, a product may differ slightly from the same one ordered previously.

3. Synthesis processing

3.1. Map projection in Plate-Carrée

The first step of the synthesis processing is to project every viewing segment into the same cartographic projection. The chosen projection is Plate-Carrée with 1/112° per pixel in line and row, which is about 1 km x 1 km at the equator, and 0.5 km wide by 1 km high at 60° latitude. For this geometrical process, the previously computed geometrical grids are used together with a Digital Elevation Model (ETOPO5) to compute the pixel locations taking relief effects into account, which is particularly important at the edges of the viewing field; an interpolation is performed between the 2 location grids at 0 and 5000 m elevation. To go from the « system projection » coordinates to the Plate-Carrée coordinates, the cartographic coordinates are computed with the Geolib IGN library. To calculate the best radiometry for the resulting pixel, a bicubic interpolation is performed over a 4x4 window, using the 16 nearest pixels in the raw image.
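As a sketch of the grid geometry, the Plate-Carrée pixel indices can be computed directly from latitude and longitude. The grid-origin convention below (row 0 at 90°N, column 0 at 180°W) is an illustrative assumption, not necessarily the CTIV convention:

```python
import math

PIXELS_PER_DEGREE = 112  # 1/112 degree per pixel, ~1 km at the equator

def platecarree_pixel(lat_deg, lon_deg):
    """Map a latitude/longitude to (row, col) in a global Plate-Carrée
    grid with 1/112-degree pixels. The origin convention (row 0 at 90N,
    col 0 at 180W) is an assumption for illustration."""
    row = int((90.0 - lat_deg) * PIXELS_PER_DEGREE)
    col = int((lon_deg + 180.0) * PIXELS_PER_DEGREE)
    return row, col

print(platecarree_pixel(0.0, 0.0))  # (10080, 20160): equator, Greenwich

# The east-west pixel width shrinks with cos(latitude): ~0.5 km at 60N
print(111.32 / PIXELS_PER_DEGREE * math.cos(math.radians(60.0)))
```

The cosine factor reproduces the 0.5 km pixel width quoted above for 60° latitude.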

Figure 5: resampling from raw image to re- projected image with the bi-cubic interpolation  

3.2. Computation of TOA reflectance

The top-of-atmosphere reflectance computation is performed in several steps. To compensate for the differences between the detectors of the viewing line, a normalization is applied using the parameters computed in the QIV, which characterize every detector. In the viewing instruments, especially the SWIR one, some detectors are blind because they are located between two detector chips, and others have been declared defective as a consequence of proton hits; the data from those detectors are ignored and replaced by values interpolated from their neighbours. The TOA radiance is computed from the normalized digital count using absolute calibration parameters from the QIV. The TOA reflectance is then computed taking into account the solar irradiance model, the sun angles, and the sun-earth distance. The TOA NDVI is computed only for temporary use in the synthesis process.
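The radiance-to-reflectance step follows the standard TOA formula. A hedged sketch; the actual calibration constants and irradiance values come from the QIV parameters and are not reproduced here, so the numbers below are illustrative only:

```python
import math

def toa_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
    """Standard TOA reflectance formula:
        rho = pi * L * d^2 / (E0 * cos(theta_s))
    where L is the TOA radiance, E0 the exo-atmospheric solar irradiance
    for the band, theta_s the sun zenith angle, and d the sun-earth
    distance in astronomical units."""
    return (math.pi * radiance * d_au ** 2
            / (esun * math.cos(math.radians(sun_zenith_deg))))

# Illustrative values (not VEGETATION calibration data):
# L = 100 W m-2 sr-1 um-1, E0 = 1570 W m-2 um-1, sun at 30 degrees
print(toa_reflectance(100.0, 1570.0, 30.0))  # ~0.23
```

The sun-earth distance term `d_au` varies over the year, which is why the text lists it among the inputs of the computation.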

3.3. Atmospheric corrections to get ground reflectances

The atmospheric corrections are based on the SMAC software (CESBIO; Rahman and Dedieu), a simplified implementation of the 6S method (Tanré et al.). The SMAC software has been tuned for the VEGETATION spectral bands. The physical data used as input to SMAC are the following:

· The water vapour data are global short-term forecasts, downloaded 4 times per day from Météo-France. The elementary data cell corresponds to 1.5° x 1.5° in longitude and latitude. A spatial interpolation is performed to obtain 8/112° x 8/112° cells, and a temporal interpolation is performed between the two nearest forecasts, taking into account the viewing time.

· A climatology of ozone obtained from the CESBIO.

· A simple static model for aerosols obtained from the CESBIO.

· An 8/112° resolution DEM for atmospheric pressure estimation.

The atmospheric parameters (optical thickness) are calculated on 8 km x 8 km cells, then interpolated for every pixel and used for the final reflectance calculation.
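The temporal part of the meteorological interpolation is a plain linear weighting between the two forecasts bracketing the viewing time. A minimal sketch (per-cell values, after the spatial interpolation to the 8/112° grid):

```python
def interp_water_vapour(wv_t0, wv_t1, t0, t1, t_view):
    """Linear temporal interpolation of a water-vapour value between the
    two forecasts bracketing the viewing time (times in hours UTC)."""
    w = (t_view - t0) / (t1 - t0)
    return wv_t0 + w * (wv_t1 - wv_t0)

# Forecasts at 06:00 and 12:00 UTC, viewing at 10:30 UTC:
print(interp_water_vapour(2.0, 3.0, 6.0, 12.0, 10.5))  # 2.75
```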

  3.4. Tackling the issue of aerosols

During a first phase (i.e. until 11/05/2001), a crude correction for aerosol effects was made using fixed aerosol amounts per latitude band.

In order to improve the correction for molecular effects, it was recommended to complement the ozone climatology and the meteorological-model estimates of water vapour content with the actual atmospheric aerosol content, retrieved using the VEGETATION blue channel.

A new method for the correction of aerosol effects was thus conceived, tested and implemented (effective from V2, 12/05/2001). Starting from the principle that for dark targets the SWIR/blue reflectance ratio is supposed to be constant, it also relies on this SWIR/blue relationship at the surface (R), but treats the ratio as an NDVI-dependent variable instead of the constant value used in the specific case of dark targets. There is indeed an approximate exponential relationship between R and the Top Of Atmosphere (TOA) NDVI. The algorithm was calibrated by taking advantage of the validated results of a more sophisticated method intended to serve as a reference.

The atmospheric correction scheme is then the following: 

  • From the TOA NDVI, compute R for any pixel at any given date.

  • Deduce the SWIR surface reflectance from the SWIR TOA reflectance using a standard SMAC correction for molecular effects. Aerosols are neglected in this step, since they play little role at this wavelength.

  • Deduce the blue surface reflectance from R.

  • From this and the measured TOA blue reflectance, deduce the aerosol optical thickness (AOT), assuming a fixed aerosol phase function (fixed aerosol model).

  • Apply an aerosol and molecular scattering correction to all 4 VEGETATION channels.

Note: this procedure is used only for pixels meeting a number of criteria, i.e. no snow, NDVI above an empirical threshold (NDVI ≥ 0.2) and SWIR below an empirical threshold (SWIR < 0.4). If the conditions are not met (e.g. over deserts), the original seasonal and latitudinal "climatology" aerosol data are used.
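The steps above can be sketched as follows. Note that the coefficients A and B of the exponential R(NDVI) relationship are placeholders: the operational fit values are not given in this text, and the SMAC molecular correction is assumed to be performed externally:

```python
import math

# Sketch of the V2 aerosol scheme. A and B are placeholder coefficients
# for the exponential R(NDVI) fit; the operational values are not given.
A, B = 3.0, -2.0

def swir_blue_ratio(ndvi_toa):
    """R = surface SWIR/blue reflectance ratio, modelled as an
    exponential function of the TOA NDVI."""
    return A * math.exp(B * ndvi_toa)

def blue_surface_reflectance(swir_surface, ndvi_toa):
    """Deduce the blue surface reflectance from the (molecular-corrected)
    SWIR surface reflectance and the ratio R."""
    return swir_surface / swir_blue_ratio(ndvi_toa)

def applies(ndvi_toa, swir_surface, snow=False):
    """Eligibility criteria: no snow, NDVI >= 0.2 and SWIR < 0.4."""
    return (not snow) and ndvi_toa >= 0.2 and swir_surface < 0.4
```

The AOT retrieval itself (comparing the deduced blue surface reflectance with the measured TOA blue reflectance under a fixed aerosol model) requires a radiative-transfer inversion and is omitted here.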

For more details see Maisongrande et al. (2001).

Figure 6: observed relationship between the SWIR/blue reflectance ratio and the TOA NDVI

Figure 7: aerosol optical thickness at 0.550 µm (combination of measurements and "climatology" data)

Figure 8: a VEGETATION colour composite before correction for aerosols

Figure 9: the same image after correction for aerosols

3.5. Identification of clouds, snow and ice

In the initial version of the processing chain (version V1, operated from the CTIV start until 11/05/2001), clouds and ice were identified according to the following rules:

parameter   delivery date   start of operational use   threshold values
snow/ice    05/12/1997      05/12/1997                 B0: 1900; B2: 1920; B3: 1940; MIR: 1960
snow/ice    12/01/1999      12/01/1999                 B0: 674;  B2: 549;  B3: 521;  MIR: 328
clouds      05/12/1997      05/12/1997                 -
clouds      26/11/1998      26/11/1998                 B0: 580;  B2: 1000; B3: 1000; MIR: 1000
clouds      04/01/1999      01/01/1999                 B0: 395;  B2: 294;  B3: 486;  MIR: 449
clouds      08/02/1999      08/02/1999                 B0: 674;  B2: 549;  B3: 521;  MIR: 328
 
From version V2 (from 12/05/2001), the algorithm proposed by Lissens et al. (2001) has been implemented. It works in the following manner:

A pixel is declared CLEAR if:

rBLUE < 493 OR rSWIR < 180

A pixel is declared CLOUDY if:

rBLUE >= 720 AND rSWIR >= 320

A pixel is declared UNCERTAIN if it is not declared clear or cloudy by the two previous rules.

A pixel is declared snow covered if all of the following conditions are met:

rRED >= 615

rSWIR < 481

(rBLUE - rNIR)/(rBLUE + rNIR) * 1000 >= -773

(rBLUE - rSWIR)/(rBLUE + rSWIR) * 1000 >= 87

((rBLUE + rRED)/2) - rSWIR >= 77
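The rules above translate directly into code. A minimal sketch, with reflectances scaled by 1000 as in the thresholds:

```python
def classify(blue, red, nir, swir):
    """Apply the V2 clear/cloudy/uncertain and snow rules of
    Lissens et al. (2001). Inputs are reflectances scaled by 1000.
    Returns (status, snow_flag)."""
    if blue < 493 or swir < 180:
        status = "clear"
    elif blue >= 720 and swir >= 320:
        status = "cloudy"
    else:
        status = "uncertain"

    snow = (red >= 615 and swir < 481
            and (blue - nir) / (blue + nir) * 1000 >= -773
            and (blue - swir) / (blue + swir) * 1000 >= 87
            and (blue + red) / 2 - swir >= 77)
    return status, snow

print(classify(800, 500, 650, 400))  # ('cloudy', False)
print(classify(900, 800, 300, 200))  # ('uncertain', True)
```

Note that the clear/cloudy rules and the snow test are independent, so a pixel can be both "uncertain" and flagged as snow.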

A pixel is declared SHADOW in the following way:

First, the line along which a cloud pixel (from the cloud mask) may cast a shadow on the earth is calculated from the observation and illumination geometry. Then, differences in NIR channel reflectances are used to determine the edge of a cloud shadow along that line. From this information, the cloud height is estimated and a shadow mask is calculated, again using the geometry. The algorithm is able to detect shadows at the border of clouds as well as shadows several pixels away from the projected cloud, a feature that is quite unique for a geometry-based algorithm.

The information on clouds is stored in a separate file, together with several quality flags, and provided with the data.

Figure 10: Identifying the location of pixels in cloud shadows

Figure 11: left, a VEGETATION B3 image; right, the corresponding cloud and shadow mask

3.6. MVC Synthesis processing  

To obtain a synthesized view of the globe from a set of viewing segments, a pixel-by-pixel mosaicking process is performed, choosing the « best » view obtained for a given pixel among the set of available viewings. The « best viewing » algorithm chosen is the Maximum Value Composite (MVC), also called « best NDVI »: for a given pixel viewed several times, the selected reflectance corresponds to the viewing with the best ground NDVI. Cloudy, bad-quality or interpolated views are excluded when possible.

Pixels on the sea or on large lakes are set to null values: the instrument is not programmed over the oceans, and the MVC algorithm gives bad results over water (it chooses clouds instead of sea). Sea or lake pixels are determined from a static sea/land indicator derived from the « Digital Chart of the World », where the land has been expanded by 5 km to cover the inaccuracy of the DCW, to cover the cases of tides or growing river deltas, and to avoid altering the real shore pixels by interpolating them with null data.

When this algorithm is used with 10 days of viewings, which implies 5 to 30 views per resulting pixel, it gives the S10 decadal synthesis, an almost cloud-free view of the globe. The synthesis process is also used with the global daily viewings, giving the VGT-S1 daily synthesis, with 0, 1, 2 or 3 views per synthesis pixel; this is a global image of the earth, with some missing parts in the equatorial regions. Only the S10 syntheses are archived in the tape cartridge library (8 Gbytes per decade).
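The MVC selection itself is a one-line criterion per pixel. A minimal sketch, with a simplified observation record:

```python
def mvc(observations):
    """Maximum Value Composite: among the candidate viewings of one
    pixel, keep the one with the highest ground NDVI, excluding cloudy
    views when possible (if every view is cloudy, fall back to all of
    them). Each observation is a dict carrying at least 'ndvi' and
    'cloudy'; the real records also carry the band reflectances."""
    usable = [o for o in observations if not o["cloudy"]] or observations
    return max(usable, key=lambda o: o["ndvi"])

obs = [
    {"ndvi": 0.10, "cloudy": True},
    {"ndvi": 0.55, "cloudy": False},
    {"ndvi": 0.40, "cloudy": False},
]
print(mvc(obs)["ndvi"])  # 0.55
```

Over water this criterion misbehaves (clouds have a higher NDVI than sea), which is why sea pixels are masked out beforehand, as described above.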

Figure 12: the production of 10-day syntheses according to the Maximum NDVI Value Composite principle

3.7. MVC Synthesis Artefacts

A given area on the ground (except theoretical Lambertian areas), even if stable between two viewings, may give quite different reflectances depending on the viewing angles; this complex light reflection function is the origin of the directional effects. Because in some cases neighbouring pixels seen under different viewing angles are merged, some obvious discontinuities can be seen in the VEGETATION syntheses: visible segment borders and « salt and pepper » effects. These directional effects are well known in MVC composites and are still visible in the VEGETATION syntheses. Future VEGETATION syntheses will merge BRDF-normalized reflectances, and should give nearly constant reflectances for constant viewed areas.

The coast lines are surrounded by a sea or cloud margin, often heterogeneous. This is a consequence of the expansion of the land indicator (see above): the MVC algorithm selects cloudy pixels in preference to sea pixels.

Some oblique lines with high values are often visible in the SWIR band: these come from defective detectors of the instrument, before they were detected and flagged by the Image Quality Centre. An automatic algorithm is foreseen to detect and possibly correct these aberrant values.

Figure 13: artefacts in overlapping areas of an MVC synthesis

Figure 14: the buffer area along coast lines

  3.8. BDC Synthesis

Surface reflectances provided by large field-of-view optical sensors show a strong dependency on the 'source-target-sensor' geometry. Despite substantial reduction of this so-called bidirectional effect through the calculation of Vegetation Indices (VIs), residual impacts remain on 10-day products derived from the maximum value compositing procedure. These noise-like fluctuations can be reduced by retrieving the bidirectional reflectance distribution function (BRDF). The synthesis production based on the BRDF was named BDC: BRDF-Corrected Composite.

The normalisation of bidirectional effects consists here in correcting reflectances from their actual acquisition geometry to a reference geometry (nadir viewing), following the shape of the BRDF. Kernel-driven semi-empirical models offer a good efficiency/complexity compromise for retrieving the BRDF. The model used here is the simplified version of Roujean et al. (1992) developed by Lacaze et al. (1999).

The VGT-D10 compositing method can be described as follows:

for each 10-day composite,

  • collect the last 12 cloud-free observations (regardless of their acquisition date, which clearly extends beyond the 10-day period)

  • fit the above-mentioned BRDF model on a per-pixel basis and for each spectral band

  • normalize the cloud-free data of the 10-day period to nadir viewing, using the shape of the BRDF

  • average the normalized reflectance values for each spectral channel.

It should be noted that the procedure assumes that BRDF shape varies slowly with time. This assumption is particularly critical in regions with frequent cloud cover.
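The per-pixel BRDF fit in the steps above is a linear least-squares problem once the kernel values are computed. A sketch assuming precomputed Roujean-type kernel values (the kernel formulas themselves, which depend on the sun and view angles, are omitted here):

```python
import numpy as np

def fit_brdf(reflectances, f1, f2):
    """Fit the kernel-driven linear BRDF model
        rho = k0 + k1*f1 + k2*f2
    (Roujean-type) by least squares for one pixel and one band.
    f1 and f2 are the geometric and volumetric kernel values of each
    observation, assumed precomputed from the acquisition geometry."""
    A = np.column_stack([np.ones_like(f1), f1, f2])
    k, *_ = np.linalg.lstsq(A, reflectances, rcond=None)
    return k  # (k0, k1, k2)

def normalize(k, f1_ref=0.0, f2_ref=0.0):
    """Reflectance predicted at the reference geometry; with Roujean
    kernels, both kernel terms vanish at nadir sun and nadir view."""
    return k[0] + k[1] * f1_ref + k[2] * f2_ref

# Synthetic example: 12 cloud-free observations of one pixel whose true
# BRDF coefficients are (0.3, 0.1, -0.05)
rng = np.random.default_rng(0)
f1 = rng.uniform(-0.2, 0.2, 12)
f2 = rng.uniform(0.0, 0.5, 12)
rho = 0.3 + 0.1 * f1 - 0.05 * f2
k = fit_brdf(rho, f1, f2)
print(np.round(k, 3))  # recovers ~[0.3, 0.1, -0.05]
```

With 12 observations and 3 unknowns per band, the fit is overdetermined, which is why the compositing collects 12 cloud-free views even when they fall outside the 10-day window.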

More detail can be found in Duchemin et al. (2001) and Duchemin et al. (2002a, 2002b).

  4. Products manufacturing 

Product manufacturing is the suite of processes performed from the internal image data up to the recording of the product on the deliverable medium.

  4.1. VGT-P

The VGT-P manufacturing process consists of the following steps:

· Extraction of the selected viewing segment from the tape archive, or from the disk cache.

· Extraction of the geographical zone requested by the user (Region Of Interest).

· Computation of the TOA reflectance, using the same method as in the synthesis preparation.

· Map projection as requested, using GEOLIB, the DEM, and bicubic interpolation.

· Extraction of the atmospheric auxiliary data to be delivered with the product (but not used for the reflectance computation).

· HDF formatting and recording on CD, tape, or FTP server. Auxiliary data are appended.

  4.2. VGT-S

 

The VGT-S manufacturing process consists of the following steps:

· Extraction of the selected synthesis from the S10 tape archive, or from the disk cache for the S1 on subscription and for recently accessed S10.

· Extraction of the geographical zone requested by the user (Region Of Interest).

· If the requested map projection is not the « Plate-Carrée » projection, a re-projection is performed from Plate-Carrée to the user-chosen projection, using the GEOLIB software and the nearest-neighbour interpolation algorithm (bicubic cannot be used, because neighbouring pixels were neither viewed at the same time nor under similar angles).

· HDF formatting and recording on CD, tape, or FTP server. Auxiliary data are appended.

  4.3. VGT-D10 

The VGT-D manufacturing process consists of the following steps:

· Extraction of the selected synthesis from the D10 tape archive, or from the disk cache for recently accessed D10.

· Extraction of the geographical zone requested by the user (Region Of Interest).

· If the requested map projection is not the « Plate-Carrée » projection, a re-projection is performed from Plate-Carrée to the user-chosen projection, using the GEOLIB software and the nearest-neighbour interpolation algorithm (bicubic cannot be used, because neighbouring pixels were neither viewed at the same time nor under similar angles).

· HDF formatting and recording on CD, tape, or FTP server. Auxiliary data are appended.

5. Bibliography

 

 

Rahman H., Dedieu G., 1994: SMAC: a simplified method for the atmospheric correction of satellite measurements in the solar spectrum. Int. J. Remote Sensing, vol. 15, no. 1, 123-143.

Duchemin B., Maisongrande P., Berthelot B., Dubegny C., Dedieu G., Leroy M., 2000: A new algorithm for atmospheric correction of surface reflectances delivered by the VEGETATION system. IGARSS 2000: International Geoscience and Remote Sensing Symposium, poster session, Honolulu (Hawaii, USA), 24-28 July 2000.

Passot X., 2001: VEGETATION image processing methods in the CTIV. Proceedings of the VEGETATION 2000 conference, Belgirate, Italy, 3-6 April 2000, Saint G. (Ed.), CNES, Toulouse & JRC, Ispra, pp. 15-22.

Sylvander S., Henry P., Bastien-Thiry C., Meunier F., Fuster D., 2001: VEGETATION geometrical image quality. Proceedings of the VEGETATION 2000 conference, Belgirate, Italy, 3-6 April 2000, Saint G. (Ed.), CNES, Toulouse & JRC, Ispra, pp. 33-44.

Maisongrande P., Duchemin B., Berthelot B., Dubegny C., Dedieu G., Leroy M., 2001: A new algorithm concept for atmospheric correction of surface reflectances delivered by the VEGETATION system. Proceedings of the VEGETATION 2000 conference, Belgirate, Italy, 3-6 April 2000.

Duchemin B., Maisongrande P., Dedieu G., Leroy M., Roujean J.L., Bicheron P., Hautecoeur O., Lacaze R., 2001: A 10-days compositing method accounting for bidirectional effects. Proceedings of the VEGETATION 2000 conference, Belgirate, Italy, 3-6 April 2000, Saint G. (Ed.), CNES, Toulouse & JRC, Ispra, pp. 313-318.

Lissens G., Kempeneers P., Fierens F., 2001: Development of a cloud, snow and cloud shadow mask for VEGETATION imagery. Proceedings of the VEGETATION 2000 conference, Belgirate, Italy, 3-6 April 2000, Saint G. (Ed.), CNES, Toulouse & JRC, Ispra, pp. 303-306.

Duchemin B., Maisongrande P., 2002a: Normalisation of directional effects in 10-day global syntheses derived from VEGETATION/SPOT. I. Investigation of concepts based on simulation. Remote Sensing of Environment 81: 90-100.

Duchemin B., Berthelot B., Dedieu G., Leroy M., Maisongrande P., 2002b: Normalisation of directional effects in 10-day global syntheses derived from VEGETATION/SPOT. II. Validation of an operational method on actual data sets. Remote Sensing of Environment 81: 101-113.

Maisongrande P., Duchemin B., Dedieu G., 2003: VEGETATION/SPOT: an operational mission for the Earth monitoring. Presentation of new standard products. International Journal of Remote Sensing. In press.

Roujean J.M., Leroy M., Deschamps P.Y., 1992: A bidirectional reflectance model of the Earth's surface for the correction of remote sensing data. J. Geophys. Res. 97: 20,455-20,468.

Lacaze R., Roujean J.L., Goutorbe J.P., 1999: Spatial distribution of the Sahelian land surface properties from airborne POLDER multiangular observations. J. Geophys. Res. 104: 12,131-12,146.

last revision: 12 Nov. 2003