🌿 An easy-to-use Japanese Text Processing tool, which makes it possible to switch tokenizers with small changes of code.
Updated May 15, 2024 - Python
Train a Chinese vocabulary with SentencePiece's BPE and use it in transformers.
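The entry above centers on learning a BPE vocabulary before plugging it into transformers. As a minimal pure-Python sketch of the underlying greedy merge-learning loop (the toy corpus and function names are illustrative, not SentencePiece's actual API):

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent-symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def merge_pair(pair, vocab):
    """Rewrite every whole-symbol occurrence of the pair as one symbol."""
    a, b = pair
    pattern = re.compile(r"(?<!\S)" + re.escape(a + " " + b) + r"(?!\S)")
    return {pattern.sub(a + b, word): freq for word, freq in vocab.items()}

def learn_bpe(vocab, num_merges):
    """Greedily learn merge rules from a {space-split word: count} corpus."""
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges, vocab

# toy corpus: words pre-split into characters, with counts
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
merges, segmented = learn_bpe(corpus, 3)
# merges → [('e', 's'), ('es', 't'), ('l', 'o')]
```

Real SentencePiece training adds a vocabulary-size target, character coverage, and whitespace handling on top of this core idea; the sketch only shows the merge selection step.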
A Robustly Optimized BERT Pretraining Approach for Vietnamese
Extremely simple and understandable GPT2 implementation with minor tweaks
Learning BPE embeddings by first learning a segmentation model and then training word2vec
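The two-stage pipeline described here (learn a segmentation model, then train word2vec on the segments) depends on applying the learned merges to raw words at segmentation time. A hedged sketch of that apply step, with an illustrative hand-written merge table:

```python
def bpe_encode(word, merges):
    """Split a word into characters, then apply learned merges in order."""
    symbols = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            # merge any adjacent (a, b) occurrence into one symbol
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

# illustrative merge table, as produced by a prior BPE learning pass
merges = [("e", "s"), ("es", "t"), ("l", "o")]
print(bpe_encode("lowest", merges))  # ['lo', 'w', 'est']
```

The resulting subword sequences would then be fed to a word2vec trainer in place of whole words, so embeddings are learned per subword unit.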
BERT implementation in PyTorch
Dataset, training, and inference
Bengali language Tokenizer (SentencePiece)
NMT with RNN Models: (1) in Vanilla style, (2) with Sentencepiece, (3) using Pre-trained models from FairSeq
Escape unknown symbols in SentencePiece vocabularies
A framework for building Sentencepiece tokenizer from a dataset
An automated WikiGame-playing bot, built on SentenceTransformer word embeddings.
An industry-standard tokenizer built for large-scale language models such as OpenAI's GPT series.
Pretrained models and training code for SentencePiece
A huggingface space for Sugoi V4