%0 Conference Proceedings
%T UoB_UK at SemEval 2021 Task 2: Zero-Shot and Few-Shot Learning for Multi-lingual and Cross-lingual Word Sense Disambiguation
%A Li, Wei
%A Tayyar Madabushi, Harish
%A Lee, Mark
%Y Palmer, Alexis
%Y Schneider, Nathan
%Y Schluter, Natalie
%Y Emerson, Guy
%Y Herbelot, Aurelie
%Y Zhu, Xiaodan
%S Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F li-etal-2021-uob
%X This paper describes our submission to SemEval 2021 Task 2. We compare XLM-RoBERTa Base and Large in the few-shot and zero-shot settings and additionally test the effectiveness of using a k-nearest neighbors classifier in the few-shot setting instead of the more traditional multi-layered perceptron. Our experiments on both the multi-lingual and cross-lingual data show that XLM-RoBERTa Large, unlike the Base version, seems to be able to more effectively transfer learning in a few-shot setting and that the k-nearest neighbors classifier is indeed a more powerful classifier than a multi-layered perceptron when used in few-shot learning.
%R 10.18653/v1/2021.semeval-1.97
%U https://aclanthology.org/2021.semeval-1.97
%U https://doi.org/10.18653/v1/2021.semeval-1.97
%P 738-742