Instructions for using TensorStack/AutoEncoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use TensorStack/AutoEncoder with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "TensorStack/AutoEncoder",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
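If the pipeline loads, a minimal follow-up sketch for making results reproducible and saving the output: the seed value and output filename are arbitrary examples, and the `generator` argument assumes the resolved pipeline accepts one (most Diffusers text-to-image pipelines do).

```python
# Optional: seed the generator for reproducible outputs, then save the image.
# The seed (0) and the filename "astronaut.png" are arbitrary examples.
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(prompt, generator=generator).images[0]
image.save("astronaut.png")
```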
- Notebooks
- Google Colab
- Kaggle
- Xet hash: 6dbc1cc6ccae102bf8c74aa1e91861247911299b47d7b911723fe27b6b0771c5
- Size of remote file: 986 MB
- SHA256: 7c68a6295f9034a88225fbafb1f3258291a08d57a1fdb938233fa57b1b8f4883
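To check a downloaded file against the SHA256 above, a minimal sketch using Python's standard library; the local path "model.safetensors" is a placeholder for whichever file you actually downloaded.

```python
import hashlib

# Compute the SHA256 of a downloaded file and compare it to the published checksum.
# "model.safetensors" is a placeholder path; use the file you actually downloaded.
expected = "7c68a6295f9034a88225fbafb1f3258291a08d57a1fdb938233fa57b1b8f4883"

sha256 = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("checksum OK" if sha256.hexdigest() == expected else "checksum mismatch")
```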
Xet efficiently stores large files inside Git by splitting them into unique, deduplicated chunks, which accelerates uploads and downloads.
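Downloads from Xet-backed repositories work through the standard huggingface_hub client, which handles the chunked storage transparently. A minimal sketch; the filename "model.safetensors" is a placeholder, so substitute a real file from the repository's file list.

```python
from huggingface_hub import hf_hub_download

# Download a single file from the TensorStack/AutoEncoder repository.
# "model.safetensors" is a placeholder; replace it with an actual filename from the repo.
local_path = hf_hub_download(repo_id="TensorStack/AutoEncoder", filename="model.safetensors")
print(local_path)
```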