The following guide is adapted from 🤗 Transformers.
To generate the documentation for 🤗 Optimum, simply run the following command from the root of the `optimum` repository:

```bash
make doc BUILD_DIR=optimum-doc-build VERSION=main
```

This command will generate the HTML files that will be rendered as the documentation on the Hugging Face website. You can inspect them in your favorite browser. You can also adapt the `BUILD_DIR` and `VERSION` arguments to any temporary folder or version you prefer.
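For instance, to build the docs for a tagged release into a temporary folder (both values below are placeholders, not a real folder or release tag):

```bash
make doc BUILD_DIR=/tmp/optimum-doc-build VERSION=v1.2.0
```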
To generate the documentation for one of the hardware partner integrations, you first need to clone the corresponding repository and run the `make doc` command to build the docs. For example, the following commands generate the documentation for `optimum-habana`:

```bash
git clone https://github.com/huggingface/optimum-habana.git
cd optimum-habana
make doc BUILD_DIR=habana-doc-build
```
NOTE

You only need to generate the documentation to inspect it locally, e.g. if you're planning changes and want to check what they look like before committing. You don't have to commit the built documentation.
The 🤗 Optimum documentation follows the Google documentation style for docstrings, although we can write them directly in Markdown.
Under the hood, the documentation is generated by the `hf-doc-builder` library. Here we summarize the main syntax needed to write the documentation -- consult `hf-doc-builder` for more details.
Accepted files are Markdown (`.md` or `.mdx`). Create a file with its extension and put it in the `docs/source` directory. You can then link it to the table of contents by putting the filename without the extension in the `_toctree.yml` file.
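For example, a minimal `_toctree.yml` entry for a hypothetical `quickstart.md` page could look like this (the file name and titles are placeholders):

```yaml
- sections:
    - local: quickstart
      title: Quickstart
  title: Get Started
```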
It helps to keep old links working when renaming a section header and/or moving sections from one document to another. Old links are likely to be used in issues, forums and social media, and it makes for a much better user experience if users reading them months later can still easily navigate to the originally intended information.

Therefore we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from "Section A" to "Section B", then you can add at the end of the file:

```
Sections that were moved:
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```

and of course if you moved it to another file, then:

```
Sections that were moved:
[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```
Use the relative style to link to the new file so that the versioned docs continue to work.
For an example of a rich moved sections set, please see the very end of the Trainer doc in `transformers`.
Adding a new tutorial or section is done in two steps:

- Add a new file under `docs/source`. This file should be in Markdown (`.md`) format.
- Link that file in `docs/source/_toctree.yml` on the correct toc-tree.
Make sure to put your new file under the proper section. It's unlikely to go in the first section (Get Started), so depending on the intended targets (beginners, more advanced users or researchers) it should go in a later section.
Values that should be put in code should be surrounded by backticks: `like so`. Note that argument names and objects like `True`, `None` or any strings should usually be put in code.
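For example, a sentence following these rules might read as below (the argument name is purely illustrative):

```
If `add_prefix` is `True`, the string `"optimum"` is prepended to the output.
```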
When mentioning a class, function or method, it is recommended to use our syntax for internal links so that our tool adds a link to its documentation with this syntax: [`XXXClass`] or [`function`]. This requires the class or function to be in the main package.
If you want to create a link to some internal class or function, you need to provide its path. For instance: [`utils.ModelOutput`]. This will be converted into a link with `utils.ModelOutput` in the description. To get rid of the path and only keep the name of the object you are linking to in the description, add a ~: [`~utils.ModelOutput`] will generate a link with `ModelOutput` in the description.
The same works for methods, so you can either use [`XXXClass.method`] or [`~XXXClass.method`].
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its description:

```
Args:
    n_layers (`int`): The number of layers of the model.
```
If the description is too long to fit in one line, another indentation is necessary before writing the description after the argument.
Here's an example showcasing everything so far:
```
Args:
    input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
        Indices of input sequence tokens in the vocabulary.

        Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
        [`~PreTrainedTokenizer.__call__`] for details.

        [What are input IDs?](../glossary#input-ids)
```
For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the following signature:

```python
def my_function(x: str = None, a: float = 1):
```

then its documentation should look like this:

```
Args:
    x (`str`, *optional*):
        This argument controls ...
    a (`float`, *optional*, defaults to 1):
        This argument is used to ...
```
Note that we always omit the "defaults to `None`" when `None` is the default for any argument. Also note that even if the first line describing your argument type and its default gets long, you can't break it over several lines. You can however write as many lines as you want in the indented description (see the example above with `input_ids`).
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
```
# first line of code
# second line
# etc
```
We follow the doctest syntax for the examples to automatically test the results stay consistent with the library.
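In practice, each statement is prefixed with `>>>` (or `...` for continuation lines) and the expected output is written on the line directly below it, so the doctest runner can compare them. A minimal illustrative snippet:

```python
>>> layers = [1, 2, 3]
>>> sum(layers)
6
```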
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation. The first line should be the type of the return, followed by a line return. No need to indent further for the elements building the return.

Here's an example for a single value return:

```
Returns:
    `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```
Here's an example for a tuple return, comprising several objects:

```
Returns:
    `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
    - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
      Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
    - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
      Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```
Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos and other non-text files. We prefer to leverage a hf.co hosted dataset, like the ones hosted on [hf-internal-testing](https://huggingface.co/hf-internal-testing), in which to place these files and reference them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).

If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images to this dataset.
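Once hosted, an image can then be embedded with a regular Markdown image link pointing at the dataset (the file name below is hypothetical):

```md
![model architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/model_architecture.png)
```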
We have an automatic script running with the `make style` command that will make sure that:

- the docstrings fully take advantage of the line width
- all code examples are formatted using `black`, like the code of the 🤗 Optimum library

This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's recommended to commit your changes before running `make style`, so you can easily revert the changes done by that script.
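A typical flow could therefore look like this (the commit message is just an example):

```bash
git add docs/ && git commit -m "Draft documentation changes"  # snapshot your work first
make style                                                    # let the formatter run
git diff                                                      # inspect what the script changed
```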
Good documentation often comes with an example of how a specific function or class should be used. Each model class should contain at least one example showcasing how to use this model class in inference. E.g. the class `Wav2Vec2ForCTC` includes an example of how to transcribe speech to text in the docstring of its forward function.
Reference: https://github.com/huggingface/transformers/blob/main/docs/README.md#writing-doctests
The syntax for Example docstrings can look as follows:
Example:
```python
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
```
The docstring should give a minimal, clear example of how the respective model is to be used in inference and also include the expected (ideally sensible) output. Often, readers will try out the example before even going through the function or class definitions. Therefore it is of utmost importance that the example works as expected.
🤗 Optimum is distributed as a [namespace package](https://packaging.python.org/guides/packaging-namespace-packages/), where each hardware integration subpackage such as `optimum-graphcore` or `optimum-intel` is bundled together as a single package. For every pull request or release of 🤗 Optimum, we use GitHub Actions to combine the documentation for each subpackage with the base documentation of the `optimum` repository.
Including the documentation for a subpackage involves four main steps:

1. Adding a `docs/source` folder to your `optimum-*` repo with content and a `_toctree.yml` file that follows the same specification as 🤗 Optimum (see the writing documentation sections above)
2. Creating a Dockerfile in `docs` that installs all necessary dependencies
3. Adding a `make doc` target to the Makefile of the subpackage that generates the HTML files of the documentation
4. Updating the GitHub Actions in `build_pr_documentation.yml` and `build_main_documentation.yml` to render the subpackage documentation on the Hugging Face website
Let's walk through an example with `optimum-habana` to see how steps 2-4 work in detail. The Dockerfile for this subpackage looks as follows:
```dockerfile
# Define base image for Habana
FROM vault.habana.ai/gaudi-docker/1.4.0/ubuntu20.04/habanalabs/pytorch-installer-1.10.2:1.4.0-442

# Need node to build doc HTML. Taken from https://stackoverflow.com/a/67491580
RUN apt-get update && apt-get install -y \
    software-properties-common \
    npm
RUN npm install npm@latest -g && \
    npm install n -g && \
    n latest

# Clone repo and install basic dependencies
RUN python3 -m pip install --no-cache-dir --upgrade pip
RUN git clone https://github.com/huggingface/optimum-habana.git
RUN python3 -m pip install --no-cache-dir ./optimum-habana[quality]
```
The main thing to note here is the need to install Node in the Docker image - that's because we need Node to generate the HTML files with the `hf-doc-builder` library. Once you have the Dockerfile, the next step is to define a `doc` target in the Makefile:
```makefile
SHELL := /bin/bash
CURRENT_DIR = $(shell pwd)
...

build_doc_docker_image:
	docker build -t doc_maker ./docs

doc: build_doc_docker_image
	@test -n "$(BUILD_DIR)" || (echo "BUILD_DIR is empty." ; exit 1)
	@test -n "$(VERSION)" || (echo "VERSION is empty." ; exit 1)
	docker run -v $(CURRENT_DIR):/doc_folder --workdir=/doc_folder doc_maker \
	doc-builder build optimum.habana /optimum-habana/docs/source/ \
		--build_dir $(BUILD_DIR) \
		--version $(VERSION) \
		--version_tag_suffix "" \
		--html \
		--clean
```
Once you've added the `doc` target to the Makefile, you can generate the documentation by running the following command from the root of the subpackage repository:

```bash
make doc BUILD_DIR=habana-doc-build VERSION=main
```
The final step is to include the subpackage in the GitHub Actions of the `optimum` repo, e.g. add/edit these steps in `build_pr_documentation.yml` and `build_main_documentation.yml`:
```yaml
# Add this
- uses: actions/checkout@v2
  with:
    repository: 'huggingface/optimum-habana'
    path: optimum-habana

# Add this
- name: Make Habana documentation
  run: |
    cd optimum-habana
    make doc BUILD_DIR=habana-doc-build VERSION=pr_$PR_NUMBER # Make sure BUILD_DIR={subpackage_name}-doc-build
    sudo mv habana-doc-build ../optimum
    cd ..

# Tweak this to include your subpackage
- name: Combine subpackage documentation
  run: |
    cd optimum
    sudo python docs/combine_docs.py --subpackages habana --version pr_$PR_NUMBER # Make sure the subpackage is listed here!
    sudo mv optimum-doc-build ../
    cd ..
```
NOTE

Since the `optimum` documentation depends on the documentation of each subpackage, it is good practice to ensure that the subpackage documentation will always build successfully. To ensure this, add a GitHub Action to your subpackage that tests that the documentation builds with every pull request / push to `main`. Check out the `optimum-habana` repo for an example.
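As a sketch, such a check could be a workflow along the following lines (the file name, triggers and build folder are assumptions, not copied from any particular repo):

```yaml
# .github/workflows/check_doc_build.yml (hypothetical name)
name: Check documentation build

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build_documentation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # We only care that the build succeeds; the output folder is throwaway
      - run: make doc BUILD_DIR=doc-build-test VERSION=main
```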