🚨 This program is for blob storage only - it does not support Azure Data Lake storage
The reason for creating this script was simple - I am already using another script, created by my friend Reece, to automatically batch-compress all my images and optimise them for display on my website. However, that solution still required me to upload those files to my Azure container manually, from where my Node backend grabs them and serves them on prutkowski.tech.
This script automates a lot of that work for me and gives me access to many Azure functions from the comfort of my terminal. It uses the Blob Storage v12 Python SDK.
Make sure Python 3 is installed and configured if you're using Windows - it comes preinstalled on most machines running macOS and Linux. You can check your Python version by running python -V and update it if necessary.
Before use, you have to configure your Azure account connection string as an environment variable on your system.
You can find it in the Azure portal by navigating to your storage account -> settings -> access keys and copying "connection string" under "key 1" - it usually starts with DefaultEndpointsProtocol=https...
$ export AZURE_CON_STR=<your_connection_string>
# check that the env var has been set
$ echo $AZURE_CON_STR
Remember to source ~/.zshrc or source ~/.bashrc after setting the environment variable in order for it to take effect. Alternatively, you can kill your terminal instance and restart it.
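For reference, here is a minimal sketch of how a script like am.py can pick the connection string up and build a v12 client from it. The variable name matches the export above; everything else is an assumption about the script's internals, not its exact code.

```python
import os

from azure.storage.blob import BlobServiceClient

# Read the connection string exported above; fail early if it is missing.
con_str = os.environ.get("AZURE_CON_STR")
if not con_str:
    raise SystemExit("AZURE_CON_STR is not set - export it before running am.py")

# The v12 SDK builds a service client straight from the connection string.
service_client = BlobServiceClient.from_connection_string(con_str)
```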
You can access information about all functions at any time by running:
$ python am.py --help
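The examples below all follow the same pattern: a sub-command name followed by its arguments. As a purely hypothetical sketch of how such a command surface could be wired up with argparse (am.py may parse its arguments differently):

```python
import argparse

# Hypothetical CLI layout mirroring the commands documented below;
# the real am.py may be structured differently.
parser = argparse.ArgumentParser(prog="am.py", description="Manage Azure Blob Storage from the terminal")
sub = parser.add_subparsers(dest="command", required=True)

sub.add_parser("lc", help="list all containers")
for name in ("cc", "dc"):
    p = sub.add_parser(name, help=f"{'create' if name == 'cc' else 'delete'} a container")
    p.add_argument("container_name")

upload = sub.add_parser("upload", help="upload all files from a folder")
upload.add_argument("container_name")
upload.add_argument("file_path")

for name in ("list", "download"):
    p = sub.add_parser(name, help=f"{name} blobs in a container")
    p.add_argument("acc_connection_uri")
    p.add_argument("container_name")
    p.add_argument("sas_token")
    if name == "download":
        # Optional: download a single blob instead of the whole container.
        p.add_argument("blob_name", nargs="?")

args = parser.parse_args()
print(args)
```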
To create a new container, simply run:
$ python am.py cc <container_name>
To get a list of all containers, run:
$ python am.py lc
To delete one, run:
$ python am.py dc <container_name>
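Under the hood, these three commands correspond to single calls in the v12 SDK. A minimal, self-contained sketch (assuming the AZURE_CON_STR variable from the setup step; "container1" is just a placeholder name):

```python
import os

from azure.storage.blob import BlobServiceClient

service_client = BlobServiceClient.from_connection_string(os.environ["AZURE_CON_STR"])

# cc: create a new container (raises ResourceExistsError if it already exists)
service_client.create_container("container1")

# lc: list all containers in the storage account
for container in service_client.list_containers():
    print(container.name)

# dc: delete a container and every blob inside it
service_client.delete_container("container1")
```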
To upload files, include two arguments in the command - container and path.
The path can be either absolute or relative; it is recommended to keep your data folder in the same working directory for simplicity and less room for error.
$ python am.py upload <container_name> <file_path>
# example (this would grab the files from the "files" folder in the same working directory):
# src
# ├── am.py
# └── files
$ python am.py upload container1 ./files
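For context, an upload like the one above boils down to a loop over the folder and one upload_blob call per file in the v12 SDK. This is an illustrative sketch using the names from the example, not am.py's exact code:

```python
import os

from azure.storage.blob import BlobServiceClient

service_client = BlobServiceClient.from_connection_string(os.environ["AZURE_CON_STR"])
container_client = service_client.get_container_client("container1")

# Upload every file found directly inside ./files, keeping the file name as the blob name.
for name in os.listdir("./files"):
    path = os.path.join("./files", name)
    if os.path.isfile(path):
        with open(path, "rb") as data:
            container_client.upload_blob(name=name, data=data, overwrite=True)
        print(f"uploaded {name}")
```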
To list all the blobs within a container, you'll need to gather a bit more information about your account than with the previous function.
What you'll need is:
- Your account connection URI - this can be found in the Azure portal under properties -> primary endpoint.
- Your SAS token - this can be generated in the Azure portal under shared access signature. It should be signed with the same access key you used to generate your connection string earlier on.
It is worth mentioning that SAS tokens are time-constrained and, by default, expire 8 hours after being generated. You can change the expiration time before generating one.
The format of the command is as follows:
# <acc_connection_uri> & <sas_token> must be wrapped in quotes (" ")
$ python am.py list <acc_connection_uri> <container_name> <sas_token>
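In SDK terms, listing amounts to building a ContainerClient from the account URI plus the SAS token and iterating over list_blobs(). A minimal sketch with placeholder values (substitute your own endpoint, container name and token):

```python
from azure.storage.blob import ContainerClient

# Placeholders - use your own primary endpoint and a freshly generated SAS token.
ACC_CONNECTION_URI = "https://<account_name>.blob.core.windows.net"
SAS_TOKEN = "<sas_token>"

container_client = ContainerClient(
    account_url=ACC_CONNECTION_URI,
    container_name="container1",
    credential=SAS_TOKEN,
)

# Print the name of every blob in the container.
for blob in container_client.list_blobs():
    print(blob.name)
```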
In order to download blobs, you will need the same account information as for the "List blobs" function: your connection URI and your SAS token. However, you should allow all three resource types (service, container, object) when generating the SAS.
All downloads are automatically saved to src/downloads, and the program will create this directory if it doesn't exist. You can change the output directory by editing download_dir.
To download all blobs in a container of your choice, you can run:
$ python am.py download <acc_connection_uri> <container_name> <sas_token>
# Example state after successful exec:
# src
# ├── am.py
# └── downloads
#     ├── file1.png
#     └── ...rest
You can also download a single blob at a time. To do that, pass the name of the blob as the last argument:
$ python am.py download <acc_connection_uri> <container_name> <sas_token> <blob_name>
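A minimal sketch of what the download step looks like with the v12 SDK, assuming the same placeholder credentials as in the listing example and the default src/downloads output folder mentioned above:

```python
import os

from azure.storage.blob import ContainerClient

# Placeholders - same account URI and SAS token as for listing blobs.
ACC_CONNECTION_URI = "https://<account_name>.blob.core.windows.net"
SAS_TOKEN = "<sas_token>"
download_dir = "src/downloads"

container_client = ContainerClient(
    account_url=ACC_CONNECTION_URI,
    container_name="container1",
    credential=SAS_TOKEN,
)

# Create the output directory if it doesn't exist, then download every blob into it.
os.makedirs(download_dir, exist_ok=True)
for blob in container_client.list_blobs():
    target = os.path.join(download_dir, blob.name)
    with open(target, "wb") as f:
        f.write(container_client.download_blob(blob.name).readall())
    print(f"downloaded {blob.name}")
```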