Abstract: Crowdsourcing is the practice of engaging a crowd or group of people to get a common task done. It is an easy and efficient way for tasks to be completed faster with good accuracy, and it can be used in applications like data collection, recommendation systems, and social studies. It also plays an important role in machine learning, where a large amount of labeled data is needed to train models. In the process of labeling data sets, labelers of higher quality should be selected to reduce noisy labels. A number of offline and online approaches exist for this labeler selection problem, and some of them are discussed here.

1 Introduction

Crowdsourcing is the process by which work is done by a group of people who are rewarded for their work. The key idea behind crowdsourcing is to distribute a task (e.g., unlabeled data) to a large number of people and to aggregate the results obtained from them.

A task can be anything, such as labeling an image or rating a movie. The main components of a crowdsourcing system are a taskmaster and some workers (labelers). The taskmaster posts a task, and interested workers approach to do it.

The workers, in turn, are paid by the taskmaster. One famous example of a crowdsourcing system is Amazon Mechanical Turk (AMT), a crowdsourcing internet marketplace for work that requires human intelligence. Crowdsourcing works on the principle that more heads are better than one: because many people with different skills and talents are involved, good quality results can be achieved for the tasks. Crowdsourcing also plays an important role in machine learning applications, where a huge amount of labeled data is needed to train a model. Such a huge amount of labeled data can be collected via crowdsourcing.

The major problem with crowdsourcing is the quality of the labeled data obtained from the labelers. Some of the labelers assigned to a task may behave irresponsibly, and some may have a low degree of expertise. As a result, the obtained labels become noisy and contain erroneous answers. Hence, the selection of labelers should be done carefully so that the quality of the labels can be improved.

The problem of finding the best and most trusted labelers is called the 'labeler selection problem'. Several techniques have been proposed to solve it. The purpose of this paper is to survey a number of the more promising of those techniques.

2 Literature Review

2.1 Who Moderates the Moderators? Crowdsourcing Abuse Detection in User-Generated Content [1]

User-generated content (UGC) can be any form of posts, comments, or blogs published by users on websites. This content may sometimes contain spam and abuse. Such abusive content should be recognised and eliminated from web pages.

This process is called moderation. To cope with the large amount of content to be moderated, the authors suggest using crowdsourced ratings: the viewers of a website are allowed to label its content as good or bad, and by aggregating their ratings, abusive content can be detected and eliminated from the website. However, not all raters are honest and give accurate ratings, so trusted raters should be selected in order to obtain correct ratings. The algorithm proposed in this paper works on the assumption that the identity of a single good, trusted rater is known.

That is, this trusted rater rates content accurately almost all of the time. Hence, by comparing the labels obtained from the other raters with those of the trusted rater, honest and good raters can be determined. The limitation of this approach is that it is an offline algorithm: the process of finding the best raters is done first, and newly arriving content is then given to this best set of raters. The accuracy of the raters is not updated with each arriving piece of content to be moderated, so the approach is not adaptive. Also, the elimination of bad raters is done as post-processing, i.e., after the rating has been done by all raters.
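The comparison step at the heart of this approach can be sketched as follows. This is a minimal sketch, not the authors' exact algorithm: it assumes ratings are stored per rater as item-to-label dictionaries, and the 0.8 acceptance threshold is an illustrative choice.

```python
def agreement_with_trusted(ratings, trusted_id):
    """Return each rater's fraction of agreement with the known trusted rater.

    `ratings[r]` is a dict mapping item -> label ("good"/"bad") for rater r.
    """
    trusted = ratings[trusted_id]
    scores = {}
    for rater, labels in ratings.items():
        if rater == trusted_id:
            continue
        common = [item for item in labels if item in trusted]
        if common:
            agree = sum(labels[item] == trusted[item] for item in common)
            scores[rater] = agree / len(common)
    return scores

def select_good_raters(ratings, trusted_id, threshold=0.8):
    """Keep raters who agree with the trusted rater at least `threshold` of the time."""
    scores = agreement_with_trusted(ratings, trusted_id)
    return {rater for rater, score in scores.items() if score >= threshold}
```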

If most of these labels are noisy, then time and resources have been wasted.

2.2 An Online Learning Approach to Improving the Quality of Crowd-Sourcing [4]

In this paper, the authors introduce an online learning framework for the labeler selection problem in which labeler quality is updated as tasks are assigned and performed. It is thus adaptive to newly arriving tasks, because the accuracy of each labeler is updated on each task arrival. This approach does not require any reference label set or ground truth for checking the correctness of a label.

Instead of using ground-truth information, the authors use a weighted majority rule to infer the true label. The algorithm consists of two steps, namely exploration and exploitation, and a condition is checked to determine which of the two is to be conducted. A set of tasks is designated as testers, and these are assigned repeatedly to each labeler to estimate his labeling quality. The exploration phase is entered if there are not enough testers or if the testers have not all been tested enough times. In the exploration phase, either an old tester task or the newly arrived task is given to the labelers, and the weighted majority rule is applied over the collected labels to infer the true label.
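The weighted majority step and the accuracy update can be sketched as follows. This is a minimal illustration under assumed conventions (votes weighted by current accuracy estimates, a 0.5 prior for unseen labelers), not the paper's exact estimator.

```python
from collections import defaultdict

def weighted_majority(labels, accuracy):
    """Infer the true label, weighting each labeler's vote by their accuracy estimate."""
    votes = defaultdict(float)
    for labeler, label in labels.items():
        votes[label] += accuracy.get(labeler, 0.5)  # assumed prior for unseen labelers
    return max(votes, key=votes.get)

def update_accuracy(labels, inferred, matches, counts, accuracy):
    """Update running accuracy: (# labels matching the inferred truth) / (# tasks done)."""
    for labeler, label in labels.items():
        counts[labeler] += 1
        matches[labeler] += (label == inferred)
        accuracy[labeler] = matches[labeler] / counts[labeler]

# Example: one task labeled by three workers.
accuracy = {}
matches, counts = defaultdict(int), defaultdict(int)
labels = {"ann": 1, "bob": 1, "carol": 0}
truth = weighted_majority(labels, accuracy)
update_accuracy(labels, truth, matches, counts, accuracy)
```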

The accuracy of each labeler is the ratio of the number of times his label matches the inferred true label to the total number of tasks assigned to him. It is updated on each new task arrival, and over time the algorithm learns the best set of labelers. Labelers whose labels always conflict with those of the others are eliminated. Also, the same task is given to the same person more than once to check the consistency of his labels, and inconsistent labelers are eliminated. In the exploitation phase, the algorithm selects the best set of labelers, based on the current quality estimates, to label the arriving task. The limitation of this approach lies in the fact that it does not consider the context of the arriving task or the quality of the labelers in different contexts.

Each person has knowledge of different domains. A person receiving a task in a context in which he has little knowledge cannot give the correct label, even if he has a high accuracy estimate. For this reason, there is a chance of obtaining labels of low quality.


2.3 Efficient Crowdsourcing for Multi-class Labeling [2]

In this paper, the authors try to increase the reliability of labels by utilising the principle of redundancy. Instead of giving a task to one person and trusting the label given by him, which could be incorrect, each task is given to multiple workers.

More accurate answers can be achieved if more redundancy is introduced. The answer for a task is then computed by majority rule: the label for which the majority of workers voted is selected as the correct label.
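As a small illustration of redundancy with plain majority voting (a sketch with an assumed data layout; the paper's actual estimator is more sophisticated):

```python
from collections import Counter

def majority_label(answers):
    """Pick the label given by the most workers for a single task.

    `answers` is the list of labels collected from the redundant assignments.
    """
    return Counter(answers).most_common(1)[0][0]

# Example: one task redundantly assigned to five workers.
print(majority_label(["cat", "cat", "dog", "cat", "bird"]))  # -> "cat"
```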

Based on these true labels, the accuracy of each labeler is calculated. The authors develop an algorithm for deciding which tasks should be assigned to which workers and for estimating the answers to the tasks from the noisy answers collected from the assigned workers. The algorithm is based on a low-rank approximation of the weighted adjacency matrix of a random regular bipartite graph between tasks and workers, weighted according to the answers provided by the workers; a sketch of this inference step is given below. The disadvantage of this approach is that the task assignment is done in a one-shot fashion: all tasks are initially given to workers, and once all the answers are collected, the true labels are estimated. The algorithm does not update labeler accuracy on each task arrival, and hence it is not adaptive to newly arrived tasks. In that case, valuable time and resources are wasted if most of the labels obtained are incorrect.
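For intuition, the low-rank inference step can be sketched as follows: with binary answers coded as +1/-1 in a task-by-worker matrix, the leading singular vectors score the tasks and the workers. This is a simplified rank-1 sketch of the general idea, not the paper's full algorithm.

```python
import numpy as np

def rank1_estimate(A):
    """Estimate true binary labels from a task-by-worker answer matrix.

    A[i, j] is +1 or -1 if worker j answered task i, and 0 if unassigned.
    The leading left singular vector scores the tasks; the corresponding
    right singular vector plays the role of worker reliability weights.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    u, v = U[:, 0], Vt[0, :]
    # Resolve the SVD's global sign ambiguity: assume most workers are
    # better than random, so the reliability weights should sum positive.
    if v.sum() < 0:
        u, v = -u, -v
    return np.sign(u)  # estimated +1/-1 label for each task
```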

2.4 Online Learning for Task Assignment [3]

Here, an online learning approach is used for the task assignment problem: tasks are given out sequentially, and the accuracy of each labeler is updated on each task arrival. The authors propose a method for determining which task should be given to which person according to their accuracy. It consists of two steps, namely exploration and exploitation. This approach uses ground-truth labels, i.e., labels that are already known for some tasks, for inferring correct labels. In the exploration phase, tasks with known ground-truth labels are given to the labelers one by one.

By comparing the collected labels with these ground-truth labels, each labeler's accuracy is estimated and updated after each assigned task. The result of the exploration phase is the set of labelers with their corresponding accuracy estimates. During the exploitation phase, the labelers with high accuracy are chosen to label further tasks. The limitation of this approach is that it needs ground-truth labels for some tasks, and these might not always be available.
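A toy sketch of this explore/exploit loop, under assumed details (a fixed testing quota per labeler and a top-k selection rule; the paper's actual criterion is more refined):

```python
import random

def assign(labeler_stats, gold_tasks, new_task, min_tests=10, k=3):
    """Route one incoming task.

    `labeler_stats` maps labeler -> [num_correct, num_tested] (running tallies
    from comparisons against ground truth). Explore with gold (known-answer)
    tasks until every labeler has been tested `min_tests` times; then exploit
    by sending the new task to the k most accurate labelers.
    """
    under_tested = [l for l, (_, n) in labeler_stats.items() if n < min_tests]
    if under_tested:
        # Exploration: probe an under-tested labeler with a gold task.
        return [random.choice(under_tested)], random.choice(gold_tasks)
    # Exploitation: pick the k labelers with the highest estimated accuracy.
    ranked = sorted(labeler_stats,
                    key=lambda l: labeler_stats[l][0] / labeler_stats[l][1],
                    reverse=True)
    return ranked[:k], new_task
```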

3 Conclusions

Crowdsourcing is being used in a variety of applications to get good quality results faster. Labeler selection should be done carefully to obtain accurate output from crowdsourcing. There are different offline and online approaches that are used to select the best set of labelers and thereby improve the label quality. Some of these approaches were detailed in the literature review above.

References

[1] A. Ghosh, S. Kale, and P. McAfee, "Who moderates the moderators? Crowdsourcing abuse detection in user-generated content," in Proc. 12th ACM Conf. on Electronic Commerce, New York, NY, USA, 2011, pp. 167-176.