%0 Conference Proceedings
%T Sailor: Open Language Models for South-East Asia
%A Dou, Longxu
%A Liu, Qian
%A Zeng, Guangtao
%A Guo, Jia
%A Zhou, Jiahui
%A Mao, Xin
%A Jin, Ziqi
%A Lu, Wei
%A Lin, Min
%Y Hernandez Farias, Delia Irazu
%Y Hope, Tom
%Y Li, Manling
%S Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
%D 2024
%8 November
%I Association for Computational Linguistics
%C Miami, Florida, USA
%F dou-etal-2024-sailor
%X We present Sailor, a family of open language models ranging from 0.5B to 14B parameters, tailored for South-East Asian (SEA) languages. From Qwen1.5, Sailor models accept 200B to 400B tokens during continual pre-training, primarily covering the languages of English, Chinese, Vietnamese, Thai, Indonesian, Malay, and Lao. The training leverages several techniques, including BPE dropout for improving the model robustness, aggressive data cleaning and deduplication, and small proxy models to optimize the data mixture. Experimental results on four typical tasks indicate that Sailor models demonstrate strong performance across different benchmarks, including commonsense reasoning, question answering, reading comprehension and examination. We share our insights to spark a wider interest in developing large language models for multilingual use cases.
%R 10.18653/v1/2024.emnlp-demo.45
%U https://aclanthology.org/2024.emnlp-demo.45
%U https://doi.org/10.18653/v1/2024.emnlp-demo.45
%P 424-435