I don’t think that Amazon Mechanical Turk can be called a social media tool, but since we discussed crowdsourcing in class and I used MTurk to conduct my class research experiment, I thought I should write about my experience with it. Having always been skeptical about MTurk, I figured this class research paper would be a great opportunity to try it out. Overall I’d have to say that I like MTurk, although I’m not certain about the reliability of the data. After all, those who complete the human intelligence tasks (HITs) are paid, and they may try to complete as many tasks as possible to increase their pay.
Knowing this, MTurk allows you, the requester, to decide whether to accept or reject the completed work. To increase your chances of getting good work, it’s better to filter for higher-quality workers, though this means your pay will need to be slightly higher. If your HIT is a survey, you can also verify that workers actually completed it by having them submit a validation code shown at the end of the survey.
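The validation-code idea is simple to implement outside of MTurk itself. Below is a minimal sketch (my own illustration, not an MTurk API): issue a random code on the survey's final page, then check each worker's submitted code against the codes you issued before approving the HIT. The function names are hypothetical.

```python
import secrets
import string

def generate_code(length=8):
    # Random alphanumeric code to display on the survey's final page
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify_submission(submitted_code, issued_codes):
    # Approve the HIT only if the worker's code matches one we issued
    return submitted_code.strip().upper() in issued_codes

# Issue one code per survey session, then check a worker's submission
issued = {generate_code() for _ in range(3)}
known_code = next(iter(issued))
print(verify_submission(known_code, issued))  # a code we issued -> True
print(verify_submission("WRONG", issued))     # an unknown code -> False
```

Using a cryptographically random code (via `secrets`) makes it impractical for a worker to guess a valid code without finishing the survey.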
I think MTurk provides a great opportunity for recruiting users to complete any sort of HIT one may need done. The task type determines how much you need to pay your workers, which can range from a penny to more than $20.00 per task: the more complex and time-consuming the task, the higher the pay. I was able to pay workers $0.35 to complete a 12-item Likert scale survey, which took workers an average of 2.5 minutes to complete.
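Per-task pay and completion time together imply an effective hourly rate, which is worth checking when setting a price. A quick back-of-the-envelope calculation using the figures above:

```python
def effective_hourly_rate(pay_dollars, minutes_per_task):
    # Tasks completable per hour, times pay per task
    return (60.0 / minutes_per_task) * pay_dollars

# $0.35 per HIT at 2.5 minutes each works out to $8.40/hour
print(round(effective_hourly_rate(0.35, 2.5), 2))  # 8.4
```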
I’m still skeptical even after my first trial, but I am willing to try MTurk again.