Lexical Semantic Change Detection in German

Organized by garrafao

Welcome!

This is the CodaLab Competition for a shared task addressing the unsupervised detection of lexical semantic change, i.e., word sense changes over time, in a German text corpus. The task is organized as part of the seminar 'Lexical Semantic Change Detection' at IMS in the summer term of 2019.

The evaluation data is based on the annotated dataset described in Schlechtweg et al. (2019), cited below under 'Task'.

Find model implementations in the LSCDetection repository.

Task

Given two corpora Ca and Cb, rank all target words according to their degree of lexical semantic change between Ca and Cb, as annotated by human judges. (A higher rank means a stronger change.)

The corpora Ca and Cb for this task are DTA18 and DTA19. Find more information in:

  • Dominik Schlechtweg, Anna Hätty, Marco del Tredici, and Sabine Schulte im Walde. 2019. A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. ACL.
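
As a toy illustration of this setup (and not the prescribed method), the sketch below ranks target words by the cosine distance between simple count-based context vectors built separately from Ca and Cb, one of the approach families implemented in the LSCDetection repository. The mini-corpora, targets, and window size are made-up assumptions.

```python
# Hedged sketch: count-based context vectors per corpus + cosine distance.
from collections import Counter
import numpy as np

def context_vectors(sentences, targets, vocab, window=5):
    """Count context words within +/- window tokens of each target occurrence."""
    vecs = {t: Counter() for t in targets}
    for sent in sentences:
        for i, tok in enumerate(sent):
            if tok in vecs:
                context = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
                for c in context:
                    if c in vocab:
                        vecs[tok][c] += 1
    index = {w: j for j, w in enumerate(sorted(vocab))}
    out = {}
    for t, counts in vecs.items():
        v = np.zeros(len(index))
        for c, n in counts.items():
            v[index[c]] = n
        out[t] = v
    return out

def cosine_distance(a, b):
    """1 - cosine similarity; returns 0.0 if either vector is empty."""
    if not a.any() or not b.any():
        return 0.0
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy corpora standing in for DTA18 (Ca) and DTA19 (Cb); the target is illustrative.
corpus_a = [["der", "zug", "fährt", "langsam"], ["ein", "zug", "von", "soldaten"]]
corpus_b = [["der", "zug", "hält", "am", "bahnhof"]]
targets = ["zug"]
vocab = {w for s in corpus_a + corpus_b for w in s}

vecs_a = context_vectors(corpus_a, targets, vocab)
vecs_b = context_vectors(corpus_b, targets, vocab)

# Rank targets by change score: larger cosine distance = stronger predicted change.
ranking = sorted(targets, key=lambda t: cosine_distance(vecs_a[t], vecs_b[t]), reverse=True)
for rank, t in enumerate(ranking, 1):
    print(rank, t, round(cosine_distance(vecs_a[t], vecs_b[t]), 3))
```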

Evaluation

Participants are required to upload a list with a predicted change value for each target word. The submission is scored by computing the Spearman rank correlation coefficient between the submitted scores and the true scores. Note that Spearman correlation depends only on the ranking of the targets, not on the exact values. Thus, a submission receives the highest possible score of 1.0 if the predicted values rank the target words exactly as in the gold ranking, and the lowest possible score of -1.0 if they rank the target words in exactly the opposite order.

The submission must contain a real-valued prediction for each target (no NaNs). Ties (targets with equal predicted values) are handled automatically by assigning each of them the average rank of the tied group (see scipy.stats.rankdata). Find more information on the data format under 'Participate'.
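
The following sketch mirrors this scoring procedure. The two-column "word<TAB>score" file layout and the file names are assumptions for illustration; the authoritative submission format is described under 'Participate'.

```python
# Minimal scoring sketch, assuming a "word<TAB>score" layout per line.
from scipy.stats import spearmanr

def load_scores(path):
    """Read 'word<TAB>value' lines into a dict mapping target to score."""
    scores = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, value = line.strip().split("\t")
            scores[word] = float(value)
    return scores

gold = load_scores("gold.tsv")      # hypothetical gold file
pred = load_scores("answer.tsv")    # hypothetical submission file

targets = sorted(gold)              # evaluate both lists in the same target order
rho, p = spearmanr([pred[t] for t in targets], [gold[t] for t in targets])
# spearmanr ranks the values internally and assigns tied values their
# average rank, which matches the tie handling described above.
print(f"Spearman's rho: {rho:.3f}")
```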

Development

Start: July 1, 2019, midnight

Description: implement your model, familiarize yourself with the competition, and upload results for the trial data.

Testing

Start: July 10, 2019, midnight

Description: the test data (corpus and targets) will be released; apply your model to the test corpus to create predictions and upload them. The number of uploads is limited, so aim for the best possible result.

Analysis

Start: July 16, 2019, 7 a.m.

Description: analyze your model's predictions; the test data is publicly available. Try to tune parameters and examine bias factors, such as frequency, as well as particularities of your model.
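
One simple way to check for a frequency bias is to correlate the predicted change scores with a frequency-based statistic. The sketch below does this with made-up placeholder values for the predictions and corpus counts.

```python
# Hedged sketch of a frequency-bias check; all values below are placeholders.
import math
from scipy.stats import spearmanr

predictions = {"zug": 0.83, "bild": 0.41, "feder": 0.65}   # hypothetical change scores
freq_a = {"zug": 1200, "bild": 300, "feder": 150}          # counts in corpus Ca
freq_b = {"zug": 900, "bild": 800, "feder": 100}           # counts in corpus Cb

targets = sorted(predictions)
# Absolute log-frequency difference as a crude proxy for frequency change.
freq_change = [abs(math.log(freq_b[t]) - math.log(freq_a[t])) for t in targets]
pred_scores = [predictions[t] for t in targets]

rho, p = spearmanr(pred_scores, freq_change)
# A strong correlation here suggests the ranking is driven by frequency
# effects rather than genuine semantic change.
print(f"Correlation between predicted change and frequency change: rho={rho:.2f}")
```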

Competition Ends

Never

Results

#  Username     Score
1  JensKaiser   0.802
2  sisu         0.730
3  adnan.sust   0.570