RADARSAT Image Analysis

RADARSAT possible target

We have updated http://skydev.pha.jhu.edu/nieto/tenacious with a list of possible targets from the RADARSAT images taken on January 31st and February 3rd.  RADARSAT images are 12.5 m per pixel, so the boat should span about 1 pixel. These may not be the best images, but we thought it was worth a try.

Maria Nieto-Santisteban & Jeff Valenti

5 Responses to “RADARSAT Image Analysis”

  1. Felipe Oliveira Says:

    *Tenacious may have a fish finder and a GPS unit. Jim’s family may have the manuals for these devices, with information about the manufacturer, serial number, and more… an electronic tool like “ping” in MS-DOS might measure response times and give us a rough idea of the distance to some point.
    *Don’t we have thermal images from satellites?
    *Do we have the capacity to identify a microwave oven over the sea? I guess we can.
    Sorry for my bad English.
    Hopeful, Felipe Oliveira.

  2. IgorCarron Says:

    Maria and Jeff:

    Are the RADARSAT images affected by clouds?

    Same question for the ER-2 shots?

    Igor.

  3. Maria A. Nieto-Santisteban Says:

    Based on what we saw in the images, clouds affect RADARSAT
    very little or not at all. The radar frequencies apparently
    penetrate clouds fairly well (as with home satellite TV systems).

    Clouds can and do obscure the surface in ER-2 images. The ER-2
    flew above the clouds and took images at visible and near-infrared
    wavelengths, where clouds are opaque. The clouds in ER-2 data look
    similar to the clouds in the DigitalGlobe data.

    J&M

  4. IgorCarron Says:

    Maria and Jeff,

    I see the same thing in the multispectral images. So I am wondering how we can quantify the spatial state of our knowledge based on these shots. Two items come to mind:
    - cloud cover
    - time of data acquisition.

    We (actually you) have identified probable targets of interest, but are we missing some because we could not physically see them?

    As an add-on to this question, it is somewhat difficult to convince the remote sensing community that even if a pixel is too large for a boat the size of Tenacious, we are really looking for detection above background, not spatial resolution or reconstruction. A positive hit above the background would go a long way toward adding to the confidence provided by the other sensors (see the detection sketch after the comments below).

    To begin to answer that question, is there any way you could process the Ikonos and ER-2 images and identify the pixels that are obviously clouds? (I do not have a good answer for that signature; maybe somebody with specific knowledge on this matter could enlighten us. A rough cloud-masking sketch also appears after the comments below.)

    What do you think?

    Igor.

  5. IgorCarron Says:

    I started thinking about this issue and wrote something about it. It is rough, and by all means, any other scheme or algorithm is more than welcome:
    http://nuit-blanche.blogspot.com/2007/02/finding-jim-gray-quantifying-state-of.html

    But in order to get any model working, we would need the data on cloud cover mentioned above.

    Igor.
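
Below is a minimal sketch of the “detection above background” idea raised in comment 4, assuming the imagery has already been read into a 2-D NumPy array of pixel intensities. The function name, the MAD-based background statistics, and the 5-sigma threshold are illustrative assumptions, not part of the processing actually used on the RADARSAT, Ikonos, or ER-2 data.

    import numpy as np

    def detect_above_background(image, valid_mask=None, k=5.0):
        # Flag pixels that stand out above the surrounding sea background.
        # `image`: 2-D array of pixel intensities (e.g., radar backscatter).
        # `valid_mask`: pixels to use for the background estimate
        # (open water, with land and clouds excluded).
        # `k`: detection threshold in robust standard deviations.
        if valid_mask is None:
            valid_mask = np.isfinite(image)
        background = image[valid_mask]

        # Median and MAD are robust to the rare bright outliers
        # (possible targets) we are trying to find.
        med = np.median(background)
        mad = np.median(np.abs(background - med))
        sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent sigma

        # Candidate pixels exceed the background by k sigma.
        candidates = valid_mask & (image > med + k * sigma)
        return candidates

Even when the boat is only about one 12.5 m pixel across, a single pixel significantly brighter than the surrounding water is still a usable detection; the candidate pixel coordinates can then be cross-checked against the other sensors.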
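
A similarly rough sketch of the cloud-masking question from comments 4 and 5: a brightness threshold on the visible and near-infrared bands flags bright pixels as cloud and reports the cloud-cover fraction. The band names and the 0.3 reflectance threshold are assumptions for illustration; real cloud screening (thin cirrus, whitecaps, sun glint) is considerably harder, especially without a thermal band.

    import numpy as np

    def cloud_mask(red, nir, threshold=0.3):
        # Crude cloud mask for visible/near-IR imagery: clouds are bright
        # in both bands, while open water is dark. `red` and `nir` are
        # 2-D arrays of top-of-atmosphere reflectance (0..1).
        return (red > threshold) & (nir > threshold)

    def cloud_cover_fraction(cloud, scene_mask=None):
        # Fraction of the usable scene that is obscured by cloud; this is
        # the number a coverage model like the one in comment 5 would need.
        if scene_mask is None:
            scene_mask = np.ones_like(cloud, dtype=bool)
        return cloud[scene_mask].sum() / scene_mask.sum()

A mask like this can also feed the detection sketch by excluding cloudy pixels from both the background estimate and the candidate list.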
