[ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer"
Updated Oct 25, 2024 · Python
[ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models
A diffusion model implementation based on LibTorch
Image Captioning With MobileNet-LLaMA 3
A 5M-parameter decoder-only transformer for molecule generation (SMILES), using rotary position embedding.
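The listings above all build on the same mechanism: rotary position embedding (RoPE) rotates each pair of query/key channels by a position-dependent angle, so the attention dot product depends only on the relative offset between tokens. As a rough illustration (the function name, shapes, and `base` value are illustrative, not taken from any repository listed here), a minimal NumPy sketch might look like:

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim).

    Channel pairs (2i, 2i+1) are rotated by theta_i = pos / base**(2i/dim),
    so relative offsets appear as phase differences in the dot product.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "RoPE rotates channels in pairs"
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = np.outer(positions, inv_freq)                  # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin   # 2D rotation of each channel pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

The defining property, which the 2D rotations above guarantee, is that `rope_rotate(q, [m]) @ rope_rotate(k, [n])` depends only on the offset `m - n`, not on the absolute positions.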