InvokeAI 2.2.3

@lstein released this 02 Dec 17:54

Note: This point release removes references to the binary installer from the installation guide. The binary installer is not stable at the current time. First-time users are encouraged to use the "source" installer as described in Installing InvokeAI with the Source Installer.

With InvokeAI 2.2, this project now provides enthusiasts and professionals with a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).

You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.


  • The Unified Canvas: The Web UI now features a fully integrated infinite canvas capable of outpainting, inpainting, img2img and txt2img, so you can streamline and extend your creative workflow. The canvas was rewritten to greatly improve performance and to add support for a variety of features like Paint Brushing, Unlimited History, Real-Time Progress displays and more.

  • Embedding Management: Easily pull from the top embeddings on Huggingface directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!

  • Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening the images in your external file explorer, even with large upscaled images!

  • 1 Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going - It’s now simple to get started with InvokeAI. See Installation.

  • Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process for new models, helping protect against maliciously crafted pickle files.

  • DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental and subject to change in the future as we continue to enhance our backend system. A usage sketch follows below.
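If you want to try the new samplers from the command-line client, here is a minimal sketch. The -A/--sampler_name switch and the sampler identifiers k_dpmpp_2 and k_dpmpp_2_a are assumptions about this release's CLI; check python scripts/invoke.py --help to see what your build actually accepts.

invoke> "an old stone lighthouse at dawn, watercolor" -A k_dpmpp_2 -s 30
invoke> "an old stone lighthouse at dawn, watercolor" -A k_dpmpp_2_a -s 30  # ancestral variant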

First-time Installation

For those installing InvokeAI for the first time, please use this recipe:
For automated installation, open up the "Assets" section below and download one of the InvokeAI-*.zip files. The instructions in the Installation section of the InvokeAI docs will tell you which file to download and what to do with it.

For manual installation, download one of the "Source Code" archive files located in the Assets below.
Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command git clone https://github.com/invoke-ai/InvokeAI and follow the instructions in Manual Installation.
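A minimal sketch of the clone route (the v2.2.3 tag name is an assumption; use whatever tag is shown at the top of this release page, or stay on main for the latest code):

git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
git checkout v2.2.3  # assumed tag for this release; omit this step to stay on main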

Upgrading

For those wishing to upgrade from an earlier version, please use this recipe:
Download one of the "Source Code" archive files located in the Assets below.
Unpack the file, and enter the InvokeAI directory that it creates.
Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running git checkout main followed by git pull, as sketched below.
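A short sketch of that update path, assuming your existing clone lives in a directory named InvokeAI:

cd InvokeAI
git checkout main
git pull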
Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new environments-and-requirements directory:

environment-lin-amd.yml # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU
environment-mac.yml # Macintoshes with MPS acceleration
environment-win-cuda.yml # Windows with an NVIDIA CUDA GPU

Important step that developers tend to miss! Either copy this environment file to the root directory with the name environment.yml, or make a symbolic link from environment.yml to the selected environment file:

Macintosh and Linux using a symbolic link:
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml

Replace xxx and yyy with the appropriate OS and GPU codes.

Windows:
copy environments-and-requirements\environment-win-cuda.yml environment.yml

When this is done, confirm that a file environment.yml has been created in the InvokeAI root directory and that it points to the correct file in the environments-and-requirements directory.
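One quick way to check this on Macintosh and Linux (a sketch; on Windows, simply open environment.yml and inspect its contents):

ls -l environment.yml   # a symbolic link shows an arrow pointing into environments-and-requirements/
head environment.yml    # prints the top of the file so you can confirm which environment you copied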
Now run the following commands in the InvokeAI directory.

conda env update
conda activate invokeai
python scripts/preload_models.py

Additional installation information, including recipes for installing without Conda, can be found in Manual Installation.

Known Bugs

  1. If you use the binary installer, the autocomplete function will not work in the command-line client due to limitations of the version of Python that the installer uses. However, all other functions of the command-line client and all features of the web UI will work perfectly well.
  2. The PyPatchMatch module, which provides excellent outpainting and inpainting results, does not currently work on Macintoshes. It will work on Linux after a support library is added to the system. See Installing PyPatchMatch.
  3. InvokeAI 2.2.0 does not support the Stable Diffusion 2.0 model at the current time, but is expected to provide full support in the near future.
  4. The GTX 1650 and 1660 Ti GPU cards only run in full-precision mode, which greatly limits the size of the models you can load and the images you can generate with InvokeAI; see the sketch below for launching in full precision.
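If you have one of these cards, the usual approach in this release is to launch InvokeAI in full-precision mode. A hedged example (--full_precision is the flag documented around this release; confirm against python scripts/invoke.py --help):

python scripts/invoke.py --web --full_precision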

Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide. Unlike previous versions of InvokeAI, we have now moved all development to the main branch, so please make your pull requests against this branch.

Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.