Better training for function labeling
Chrupała, Grzegorz and Stroppa, Nicolas and van Genabith, Josef and Dinu, Georgiana (2007) Better training for function labeling. In: RANLP 2007 - Recent Advances in Natural Language Processing Conference, 27-29 September, 2007, Borovets, Bulgaria.
Function labels enrich constituency parse tree nodes with information about their abstract syntactic and semantic roles. A common way to obtain function-labeled trees is to use a two-stage architecture where first a statistical parser produces the constituent structure and then a second component such as a classifier adds the missing function tags. In order to achieve optimal results, training examples for machine-learning-based classifiers should be as similar as possible to the instances seen during prediction. However, the method which has been used so far to obtain training examples for the function labeling classifier suffers from a serious drawback: the training examples come from perfect treebank trees, whereas test examples are derived from parser-produced, imperfect trees.
We show that extracting training instances from the reparsed training part of the treebank results in better training material, as measured by similarity to test instances, and that our training method achieves statistically significantly higher f-scores on the function labeling task for the English Penn Treebank. Our method currently achieves 91.47% f-score on section 23 of the WSJ, the highest score reported in the literature so far.
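The idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the tree representation, feature set, and `MajorityTagger` classifier are all hypothetical stand-ins. It shows the key point that second-stage training instances are extracted from a reparsed (hence possibly imperfect) tree rather than from the gold treebank tree, so they resemble what the classifier will see at prediction time.

```python
from collections import Counter

def extract_instances(tree):
    """Turn each constituent node into a (features, function_tag) pair.
    The feature set here (node label + head word) is a toy assumption."""
    return [({"label": n["label"], "head": n["head"]}, n["ftag"])
            for n in tree["nodes"]]

class MajorityTagger:
    """Toy stand-in for the second-stage function-tag classifier."""
    def fit(self, instances):
        counts = Counter(tag for _, tag in instances)
        self.best = counts.most_common(1)[0][0]
        return self

    def predict(self, features):
        return self.best

# Gold treebank tree vs. the same sentence reparsed by a statistical
# parser: the reparsed tree may be imperfect (here it misses a node),
# so instances extracted from it better match test-time conditions.
gold_tree = {"nodes": [{"label": "NP", "head": "dog", "ftag": "SBJ"},
                       {"label": "NP", "head": "bone", "ftag": "OBJ"}]}
reparsed_tree = {"nodes": [{"label": "NP", "head": "dog", "ftag": "SBJ"}]}

# Train on instances from the reparsed tree, not the gold tree.
tagger = MajorityTagger().fit(extract_instances(reparsed_tree))
print(tagger.predict({"label": "NP", "head": "dog"}))  # prints SBJ
```

In the paper's setting, the "reparsed tree" would come from running the first-stage statistical parser over the treebank's own training sections; the gold function tags are then projected onto the parser's output to label the extracted instances.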