Instructions for using TensorStack/AutoEncoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Diffusers
How to use TensorStack/AutoEncoder with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "TensorStack/AutoEncoder",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

A lower-level sketch that loads the autoencoder directly follows the notebook links below.

- Notebooks
  - Google Colab
  - Kaggle
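Beyond the full pipeline above, the autoencoder can also be loaded on its own. This is a minimal sketch, assuming the repository holds weights compatible with Diffusers' AutoencoderKL class; the class choice, input shape, and round-trip below are illustrative and not confirmed by the model card:

```python
import torch
from diffusers import AutoencoderKL

# Assumption: the repo exposes AutoencoderKL-compatible weights; adjust the
# class if the model card specifies a different autoencoder variant.
vae = AutoencoderKL.from_pretrained(
    "TensorStack/AutoEncoder",
    torch_dtype=torch.float32,
)
vae.eval()

# Round-trip a dummy image tensor (batch, channels, height, width),
# scaled to [-1, 1] as Diffusers VAEs expect.
image = torch.rand(1, 3, 256, 256) * 2.0 - 1.0

with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()  # compress to latent space
    reconstruction = vae.decode(latents).sample       # decode back to pixels

print(latents.shape, reconstruction.shape)
```

In a typical Diffusers workflow, the latents produced by `encode()` are what a diffusion model operates on, and `decode()` maps them back to pixel space.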
- Xet hash: b3f090b8040a5d6087e7db8713a98cd75282fa50c5f82a3cdd433b78ef6f4447
- Size of remote file: 335 MB
- SHA256: 27ed3b02e09638568e99d4398c67bc654dde04e6c0db61fb2d21dba630e7058a
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.
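The published SHA256 lets you verify a downloaded copy of the file. A minimal sketch using huggingface_hub and hashlib; the filename below is a hypothetical placeholder, so substitute the actual file listed in the repository:

```python
import hashlib

from huggingface_hub import hf_hub_download

# Hypothetical filename: substitute the actual file shown in the repo's file list.
path = hf_hub_download(repo_id="TensorStack/AutoEncoder", filename="model.safetensors")

# Stream the file through SHA256 to avoid loading all 335 MB into memory at once.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

print(sha256.hexdigest())
# Compare against the published SHA256:
# 27ed3b02e09638568e99d4398c67bc654dde04e6c0db61fb2d21dba630e7058a
```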