
pytorch-aarch64

Deprecation

Since v1.8.0, PyTorch has officially provided aarch64 wheels.

This project has completed its mission.



PyTorch, torchvision, torchaudio, torchtext and torchcsprng wheels (whl) and Docker images for aarch64 / ARMv8 / ARM64 devices

Chinese version (for Gitee) | GitHub | Web | TF


Install

conda 🆕 (Recommended)

conda install -c kumatea pytorch

You will likely also need to install numpy: conda install -c kumatea pytorch numpy

The cpuonly package from the official installation guide is not needed here, but it is supported: conda install -c kumatea pytorch numpy cpuonly

pip

It's not recommended to use pip to install from this source. Instead, install from the official PyPI index:

pip install torch

Install from here...

pip install torch -f https://torch.kmtea.eu/whl/stable.html

Add torchvision, torchaudio, torchtext, torchcsprng and other packages if needed.

Consider using prebuilt wheels to speed up installation: pip install torch -f https://torch.kmtea.eu/whl/stable.html -f https://ext.kmtea.eu/whl/stable.html

(For users in China, please use the CDN)

Note: this command installs the latest version. To choose a specific version (e.g. pip install torch==1.8.1 -f https://torch.kmtea.eu/whl/stable.html), please check the Custom Builds section.

To pick the whl files manually, please check the releases.

Docker (deprecated)

docker run -it kumatea/pytorch

To pull the image, run docker pull kumatea/pytorch.

To check all available tags, click here.
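
Whichever install method is used, a quick smoke test confirms the wheel loads on the device. This is a minimal sketch, assuming torch has already been installed as described above:

```python
# Minimal smoke test for a freshly installed aarch64 wheel.
# Assumes torch was installed via conda, pip or Docker as described above.
import platform

import torch

print(platform.machine())   # expected: 'aarch64' on ARMv8 devices
print(torch.__version__)    # the installed PyTorch version

# A tiny matrix multiplication to verify the native kernels load correctly.
x = torch.randn(4, 4)
y = torch.randn(4, 4)
print(x @ y)
```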


FastAI

FastAI is a great open-source high-level deep learning framework based on PyTorch.

conda (recommended)

conda install -c fastai -c kumatea fastai

Similarly, fastbook can be installed with:

conda install -c fastai -c kumatea fastbook

pip

pip install fastai -f https://torch.kmtea.eu/whl/stable.html

torch and torchvision will be installed as dependencies automatically.


Custom Builds

click to view corresponding versions

| torch | torchvision | torchaudio | torchtext | torchcsprng | Status | python |
| --- | --- | --- | --- | --- | --- | --- |
| master / nightly | master / nightly | master / nightly | master / nightly | master / nightly | | >=3.6 |
| 1.10.0 | 0.11.1 / 0.11.0 | 0.10.0 | 0.11.0 | | passing | >=3.6 |
| 1.9.1 | 0.10.1 | 0.9.1 | 0.10.1 | | | >=3.6 |
| 1.9.0 [i] | 0.10.0 | 0.9.0 | 0.10.0 | | passing | >=3.6 [i] |
| 1.8.1 | 0.9.1 [i] | 0.8.1 | 0.9.1 | 0.2.1 | passing | >=3.6 |
| 1.8.0 [i] | 0.9.0 | 0.8.0 | 0.9.0 | 0.2.0 | passing | >=3.6 |
| 1.7.1 | 0.8.2 | 0.7.2 | 0.8.1 | 0.1.4 | passing | >=3.6 |
| 1.7.0 | 0.8.1 / 0.8.0 | 0.7.0 | 0.8.0 | 0.1.3 | passing | >=3.6 |
| 1.6.0 [i] | 0.7.0 | 0.6.0 | 0.7.0 | 0.1.2 / 0.1.1 / 0.1.0 | passing | >=3.6 |
| 1.5.1 | 0.6.1 | 0.5.1 | 0.6.0 | | passing | >=3.5 |
| 1.5.0 | 0.6.0 | 0.5.0 | 0.6.0 | | passing | >=3.5 |
| 1.4.1 / 1.4.0 | 0.5.0 | 0.4.0 | 0.5.0 | | passing | ==2.7, >=3.5, <=3.8 |
| 1.3.1 | 0.4.2 | | | | | ==2.7, >=3.5, <=3.7 |
| 1.3.0 | 0.4.1 | | | | | ==2.7, >=3.5, <=3.7 |
| 1.2.0 | 0.4.0 | | | | | ==2.7, >=3.5, <=3.7 |
| 1.1.0 | 0.3.0 | | | | | ==2.7, >=3.5, <=3.7 |
| <=1.0.1 | 0.2.2 | | | | | ==2.7, >=3.5, <=3.7 |


More Info

click to expand...

FAQ

  • Q: Does this run on Raspberry Pi?
    A: Yes, if the architecture of the SoC is aarch64. It should run on all ARMv8 chips (see the quick check after this list).

  • Q: Does this support CUDA / CUDNN?
    A: No. Check here for more information.

  • Q: Does this run on Nvidia Jetson?
    A: Yes, but extremely slowly. Each Nvidia Jetson board contains an Nvidia GPU, but this project only builds CPU wheels. To make better use of your hardware, build it yourself.
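
As a quick check for the first question above, the reported machine architecture tells you whether these wheels can run at all. A minimal sketch:

```python
# Quick architecture check for the "Does this run on Raspberry Pi?" question.
# A 64-bit OS on an ARMv8 SoC reports 'aarch64'; a 32-bit OS on the same chip
# reports 'armv7l' and cannot use these wheels.
import platform

print(platform.machine())
```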

Difference From The Official Wheels

In most circumstances, it is recommended to just use the official wheels, which pip will install by default, even with -f.

The wheels here are compiled from source on a Raspberry Pi 4B, and are intended for code that crashes with the official wheels because unsupported instructions are used.

Use the torch wheels here only if you encounter problems like #8.

About Python 3.10

At the time this change (v1.9.0) was committed, NONE of the following had a stable release: Python 3.10.0, NumPy 1.21.0 (which adds Python 3.10 support), or PyTorch 1.9.0 for Python 3.10.

If any critical issue is found, I may rebuild the wheel after stable releases.

About PyTorch v1.8.0

  • Starting from v1.8.0, the official wheels of PyTorch for aarch64 have finally been released!
    • To use the official wheels, use this index link:
      https://torch.kmtea.eu/whl/pfml.html
      where pfml stands for prefer-manylinux here.

      manylinux wheels will be installed by default.
  • torchvision wheels are built with FFmpeg support. For wheels without it, please install torchvision==0.9.0+slim

About PyTorch v1.6.0

A fatal bug was encountered while building PyTorch v1.6.0, so this patch was applied. The patch has been merged upstream in later versions.

About torchvision v0.9.1

Starting from torchvision v0.9.1, manylinux wheels are officially provided via both its indexes and PyPI. However, since they do not contain the necessary backends (they are under 1 MB) and may require extra installations, this project will continue to build torchvision wheels.

RuntimeError while importing

If you see something like this when importing torch:

RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd

Please upgrade your numpy version: pip install -U numpy.
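
Before and after upgrading, printing the installed version is a quick way to confirm which numpy the active environment is actually using. A minimal diagnostic sketch:

```python
# Print the installed numpy version; run again after `pip install -U numpy`
# to confirm the upgrade took effect in the active environment.
import numpy

print("numpy:", numpy.__version__)
```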

CUDA / CUDNN Support

Since the building environment (described below) does not contain an Nvidia GPU, the wheels could not be built with CUDA support.

If you need it, please use an Nvidia Jetson board to run the building code.
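
Because these are CPU-only builds, torch itself reports CUDA as unavailable. A minimal check:

```python
# The wheels from this project are CPU-only builds,
# so CUDA is never available even on boards with an Nvidia GPU (e.g. Jetson).
import torch

print(torch.cuda.is_available())  # False for these wheels
print(torch.version.cuda)         # None: built without CUDA
```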

Building Environment

Host: Raspberry Pi 4 Model B

SoC: BCM2711 (quad-core Cortex-A72)

Architecture: ARMv8 / ARM64 / aarch64

OS: Debian Buster

GCC: v8.3.0

Virtualization: Docker

Performance

Test date: 2021-10-29

Script: bench.py

Less execution time is better

| Platform | Specs | Training | Prediction | Version (PyTorch / Python) |
| --- | --- | --- | --- | --- |
| aarch64 | BCM2711 (4x Cortex-A72) | 1:48:44 | 11,506.080 ms | 1.10.0 / 3.9.7 |
| aarch64 | QUALCOMM Snapdragon 845 | N/A | 4,821.148 ms (2.4x) | 1.10.0 / 3.9.7 |
| amd64 | INTEL Core i5-6267U | 162.964 s | 140.680 ms (82x) | 1.10.0+cpu / 3.9.7 |
| Google Colab | INTEL Xeon ??? + NVIDIA Tesla K80 | 6.400 s | 70.714 ms (163x) | 1.10.0+cu113 / 3.7.12 |
| Kaggle | INTEL Xeon ??? + NVIDIA Tesla P100 | 6.626 s | 33.878 ms (340x) | 1.10.0+cu113 / 3.7.10 |

Note:

  1. This test used the same "Cat or Dog" model to predict 10 random animal images (the same images for each group).
  2. The latest version of PyTorch was installed manually on each platform, while the drivers and Python versions were left at their defaults.
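
The actual benchmark lives in bench.py. As a rough illustration only, a prediction-timing loop of the same shape (with a placeholder model and random inputs, not the real "Cat or Dog" classifier) could look like this:

```python
# Rough illustration of a prediction-latency measurement.
# This is NOT bench.py: the model, input size and image count are placeholders;
# the real benchmark uses a trained "Cat or Dog" classifier.
import time

import torch
import torch.nn as nn

model = nn.Sequential(          # placeholder model standing in for the classifier
    nn.Conv2d(3, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
model.eval()

images = [torch.randn(1, 3, 224, 224) for _ in range(10)]  # 10 "random images"

start = time.perf_counter()
with torch.no_grad():
    for img in images:
        model(img)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"prediction time: {elapsed_ms:.3f} ms")
```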