These features, and different combinations of them, are then categorised into attribute configuration files which are fed into three supervised learning methods, namely Naive Bayes, maximum entropy and support vector machine. This work evaluates the improvement achieved in definition extraction by using these machine learning methods and the different feature combinations. The accuracy achieved over the different methods and configuration files varies, with the best result obtained by the maximum entropy method on the configuration using the first three features described above.
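As an illustration of this comparative setup, the sketch below trains the three classifier types named above on a single toy feature configuration and applies them to an unseen sentence. It is only a hedged reconstruction: scikit-learn, the bag-of-words features and the toy sentences are assumptions made here for illustration, not the actual attribute configuration files or tools used in that work.

```python
# A minimal sketch (not the original experimental setup) of comparing the three
# classifiers on one hypothetical feature configuration for definition extraction.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression   # maximum entropy model
from sklearn.svm import LinearSVC

train_sentences = [
    "A stack is a data structure that follows a last-in first-out policy.",
    "A compiler is a program that translates source code into machine code.",
    "The experiment was repeated three times on the same corpus.",
    "Results for each configuration are reported in the next section.",
]
train_labels = [1, 1, 0, 0]          # 1 = definition, 0 = non-definition

vectoriser = CountVectorizer()        # bag-of-words stand-in for the real features
X_train = vectoriser.fit_transform(train_sentences)

test_sentence = ["An ontology is a formal specification of a conceptualisation."]
X_test = vectoriser.transform(test_sentence)

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Maximum entropy", LogisticRegression(max_iter=1000)),
                  ("Support vector machine", LinearSVC())]:
    clf.fit(X_train, train_labels)
    print(name, "->", clf.predict(X_test)[0])
```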

Westerhout & Monachesi (2007a, 2008) experiment with both rule-based and machine learning techniques. They use an eLearning corpus in Dutch which is partially annotated with manually identified definitions. They argue that since the definition extractor is intended for an eLearning setting, where the learning objects tend to be small in size, both precision and recall need to be given importance, unlike similar work where only precision is considered. The manually annotated definitions are divided into five different categories that have been identified through observation. Linguistic rules are used to capture a large number of definitions, after which machine learning techniques similar to Fahmi & Bouma (2006) are applied as a filtering technique. Their final results are slightly lower than those of Fahmi & Bouma. This is to be expected since the corpus used is less structured than Wikipedia articles.
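The two-stage pipeline described above, broad linguistic rules followed by a statistical filter, can be sketched as follows. The regular-expression pattern, the scikit-learn classifier and the toy training data are hypothetical stand-ins for illustration, not the grammar or classifier actually used by Westerhout & Monachesi.

```python
import re

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Stage 1: a deliberately simple, hypothetical copular pattern standing in for
# the hand-written linguistic rules; it over-generates candidate definitions.
CANDIDATE_PATTERN = re.compile(r"\b(is|are)\s+(a|an|the)\b", re.IGNORECASE)

def rule_based_candidates(sentences):
    """Return every sentence matched by the (permissive) linguistic rule."""
    return [s for s in sentences if CANDIDATE_PATTERN.search(s)]

# Stage 2: a statistical filter trained on annotated candidates removes false
# positives that the rules let through (toy training data for illustration).
train_candidates = [
    "A morpheme is the smallest meaningful unit of a language.",
    "This chapter is a summary of the previous results.",
]
train_labels = [1, 0]   # 1 = genuine definition, 0 = false positive

vectoriser = CountVectorizer()
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectoriser.fit_transform(train_candidates), train_labels)

def filter_candidates(candidates):
    """Keep only the candidates the classifier labels as definitions."""
    if not candidates:
        return []
    keep = classifier.predict(vectoriser.transform(candidates))
    return [c for c, k in zip(candidates, keep) if k == 1]

corpus = [
    "A lexeme is an abstract unit of the vocabulary.",
    "This chapter is a summary of the methodology.",
    "We repeated the experiment twice.",
]
print(filter_candidates(rule_based_candidates(corpus)))
```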

Degórski et al. (2008) focus on Polish eLearning material with the purpose of extracting definitions to be presented to a tutor for glossary creation. Initial attempts using manually crafted grammar rules did not achieve a good f-measure, and thus the authors try several machine learning classifiers available in the Weka toolset (Witten & Frank 2005) to improve results. The techniques used are naive Bayes, the decision trees ID3 and C4.5, the lazy classifier IB1, AdaBoostM1 with Decision Stump, and AdaBoostM1 with nu-SVC. In these experiments they report an increase in f-measure, with the best result obtained by the ID3 decision tree classifier.

Further experiments on the Polish language use Balanced Random Forest (BRF) in Kobyliński & Przepiórkowski (2008). BRF is a machine learning technique for classification using decision trees, where decisions are based on a randomly selected subset of attributes, from which the best attribute for the current tree is chosen. Both techniques improve results over manually crafted rules, achieving results close to previous work (Fahmi & Bouma 2007; Westerhout 2008).
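As a rough illustration of the balanced random forest idea described here, the sketch below trains each tree on a bootstrap sample drawn equally from both classes and restricts every split to a random subset of attributes. It is an assumed, simplified reconstruction using scikit-learn decision trees, not the implementation used by Kobyliński & Przepiórkowski.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_random_forest(X, y, n_trees=10, seed=0):
    """Train an ensemble of decision trees on class-balanced bootstrap samples."""
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    n = min(len(pos), len(neg))   # bootstrap size per class = minority class size
    trees = []
    for _ in range(n_trees):
        idx = np.concatenate([rng.choice(pos, n, replace=True),
                              rng.choice(neg, n, replace=True)])
        tree = DecisionTreeClassifier(
            max_features="sqrt",  # consider a random attribute subset at each split
            random_state=int(rng.integers(1 << 31)),
        )
        trees.append(tree.fit(X[idx], y[idx]))
    return trees

def brf_predict(trees, X):
    """Label a sample as a definition if the majority of trees vote for it."""
    votes = np.mean([tree.predict(X) for tree in trees], axis=0)
    return (votes >= 0.5).astype(int)

# Toy usage with a 2-D feature array and binary definition labels.
X = np.array([[1, 0, 2], [0, 1, 0], [3, 0, 1], [0, 2, 0], [1, 1, 1], [0, 0, 3]])
y = [1, 0, 1, 0, 1, 0]
forest = balanced_random_forest(X, y, n_trees=5)
print(brf_predict(forest, X))
```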
