The use of automatic methods for the study
of lexical semantic change (LSC) has led to the creation of evaluation benchmarks. Benchmark datasets, however, are intimately tied to the corpus used for their creation, which calls into question their reliability as well as the robustness of automatic methods. This contribution investigates these aspects, showing the impact of unforeseen social and cultural dimensions. We also identify a set of additional issues (OCR quality, named entities) that affect the performance of automatic methods, especially when they are used to discover LSC.
Proceedings of the 2nd International Workshop on Computational Approaches to Historical Language Change 2021. Association for Computational Linguistics (ACL).
This item is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
Funders:
Science Foundation Ireland (Grant Agreement No. 13/RC/2106) at the ADAPT Centre at DCU; EVALITA4ELG project, funded by ELG (European Language Grid) Pilot Projects Open Call 1 (Grant Agreement No. 825627, H2020, ICT-2018-2020 FSTP); Science Foundation Ireland and European Regional Development Fund (ERDF, Grant No. 13/RC/2106 P2)
ID Code: 26587
Deposited On: 10 Jan 2022 16:55 by Annalina Caputo
Last Modified: 10 Jan 2022 16:55