Update on Mechanical Turk results

Thanks to all of you who’ve contributed your time and eyeballs to reviewing tiles on Mechanical Turk. Based on your efforts, we have a sizeable number of possible hits to examine more closely, and a team of experts is working on that now. As their review turns up likely sightings of the boat, we’ll work with the Coast Guard on the best way to follow up.

Let me say that we know the image quality was a challenge in some cases; weather and physics constrain us, and we appreciate your diligence.

The Mechanical Turk team is working to place more imagery on-line for review. We are also looking into ways to redraw the search box and to retarget our imaging systems to account for wind, drift, and the time that has passed.

[Update: Werner and I were editing simultaneously! See his post, just below this one, on additional tiles ready for your review on MTurk. Because these are near-IR images from an aircraft, contrast and detail are somewhat easier to deal with than the panchromatic tiles in the previous set.]

9 Responses to “Update on Mechanical Turk results”

  1. Adam says:

I’ve been volunteering on mturk, and have been marking cloudy images as cloudy in the comments. My thought was that if people are tasking satellites, this might help them re-task over those areas, but it might also be a distraction. Do you have an opinion? Should we bother?

  2. Richard says:

I have a concern about how the Mechanical Turk image-search task is being presented. From a visual-search standpoint, finding an object in the “noisy” images provided is challenging and would take some training, and I think the training provided on the HIT pages I saw is not sufficient. There is an image of a boat (a powerboat with a long wake) and a red rectangle that is supposed to represent the size of a sailboat. What would other objects, such as wreckage, look like?

I suggest that more examples of search targets be provided to help us out. Also, am I the only one looking at a specific image? It would be nice to know whether others are looking at the same images. As it was, I wanted to help, but declined because I felt I did not know what to look for and did not want to be the only one looking at a specific image, with little or no training.

Please suggest to the authors of this task that they provide better examples and some additional contextual information, and that they say whether others will be looking at the same images.

  3. Adam says:

I share your concerns, and decided to participate anyway. In the worst case, you can definitely identify clouds and save those images from further screening. In the best case, there are things that jump out at you as worth looking at.

    In the other cases, you can decide not to take a HIT.

That’s not to say that your concerns aren’t good ones, or even ones I share. I’m somewhat worried by the implication that the boat is red, or that it will definitely not be aliased. I looked for 4×10 shapes that looked out of place. But I chose to participate anyway.

  4. Chase says:

    My browser messed up my submission for a series of photos related to http://s3.amazonaws.com/JimGray/HITs:07FEB01192323-P2AS_R10C10-005598915030_01_P001/07FEB01192323-P2AS_R10C10-005598915030_01_P001-3300-3600.jpg.

In that series, I believe I saw the same foreign object in different locations; I’m presuming these are photos of the same location taken over a time interval.

Is there a way to review previously submitted HITs?

  5. I agree with Adam about participating anyway, but Richard is right in his comments, IMO. It would be great to have these refinements, if possible.

  6. Frank Wales says:

    [I sent this direct to Joe, and he asked me to post it here too.]

    I’ve spent some time today working through images on MT, and have a couple of suggestions which might help (assuming changes to the MT set-up are easy):

    + Re-present the calibration image *beside* each candidate image, so that it’s easy to tell whether a possible formation is about the right size. I’m worried that I may have approved a few that were too big, and would like to help reduce false positives like this from others.

    + Alter the list of answers so that they mean:

    + ‘Yes, something worth looking at here’

    + ‘No, definitely nothing in this image’

    + ‘I can’t see the ocean (cloud cover, for example)’

    I’m worried that ‘No’ has two meanings in the present system, so you won’t know which areas merely had useless imagery rather than useful-but-empty images.

  7. prasanna says:

    It would be very useful if, every so often, the system served up images that actually had something in them (even hits identified as false positives). It would give viewers some idea of what different kinds of things on the water actually look like. It is very challenging to determine whether a pattern of whitecaps is actually something man-made.

  8. Jean says:

    I’m on my 100+ set and suddenly realized that Mac OS 10.x has universal access zoom features that let me zoom the images bigger. This is very, very helpful.
    I’m also hoping that other people are looking at the same images I’m getting and giving their readings as well. I think in the 500+ pictures I’ve looked at, I’ve seen something of interest maybe 10 times.

  9. Bob says:

    How should we indicate an image that deserves special attention? After reviewing over a hundred sets, I flagged a few for further review, but one or two stood out and should be looked at before the others. I wrote comments on those but not the rest. Will all of the images with comments be sorted to be reviewed first?
