Add Replicate demo and API #9
base: main
Conversation
Hey there! Does this still work given the changes that were made yesterday? I updated to pin diffusers to the stable PyPI version, among a couple of other updates. The package is now pip installable.
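For reference, installing and importing it should look roughly like this (assuming the PyPI package name matches the repo's import path; that detail isn't confirmed in this thread):

```python
# Assumed package name, matching the import path used in this PR's files:
#   pip install stable_diffusion_videos
from stable_diffusion_videos import StableDiffusionPipeline

# Quick smoke test that the package and its pinned diffusers dependency resolve.
print(StableDiffusionPipeline.__name__)
```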
Hi! Ah, I probably implemented the version before your update yesterday. The demo is now hosted on an A100 for faster inference. We can feature it on the front page once you claim the page, which makes it 'public'; you can see the popular models featured on Replicate here. I can also help update the demo to the latest version used in your repo. Let me know!
@@ -0,0 +1,25 @@
import os
Is this file being used anywhere?
from stable_diffusion_videos import StableDiffusionPipeline


MODEL_CACHE = "diffusers-cache"
is this used anywhere?
TYSM for your contribution! I think this looks fine. I would like to remove download_weights.py if it's unnecessary, though. I saw in another repo that you can run the script with cog - guessing that caches the weights in the env? Is that what you were using it for?
Sorry, not too familiar 😅. Also, I don't have a machine with nvidia docker on it right now, so I wasn't able to successfully run it locally.
@@ -0,0 +1,225 @@
import os
from subprocess import call
from typing import Optional, List
Unused
import shutil
import numpy as np
import torch
from torch import autocast
unused - you refer to it later as torch.autocast
import torch
from torch import autocast
from diffusers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
from PIL import Image
Unused
Hi, thanks for reviewing the PR, and sorry for not checking all the unused imports. Yes, the weights are loaded into the cache folder in the image before pushing it to the website, so there's no need to download them at prediction time and the demo can start faster. Thanks again!
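For anyone reading along, here is a rough sketch of what this kind of build-time weight caching usually looks like in a Cog predictor; the model id, input names, and prompt handling are assumptions for illustration, not necessarily what this PR's predict.py does:

```python
from cog import BasePredictor, Input, Path
from stable_diffusion_videos import StableDiffusionPipeline

MODEL_CACHE = "diffusers-cache"  # populated while the Docker image is built


class Predictor(BasePredictor):
    def setup(self):
        # The weights were downloaded into MODEL_CACHE at image build time,
        # so setup() only reads from the local cache and never hits the network.
        self.pipeline = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4",  # assumed model id for illustration
            cache_dir=MODEL_CACHE,
            local_files_only=True,
        ).to("cuda")

    def predict(self, prompt: str = Input(description="Text prompt")) -> Path:
        # The real predict() in this PR walks the latent space between prompts
        # to render a video; this stub only shows the Cog entry point shape.
        ...
```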
Hey @nateraw ! 👋
Great implementation of stable-diffusion videos!
This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.
This also means we can make a web page where other people can run your model! View it here: https://replicate.com/nateraw/stable-diffusion-videos
Replicate also has an API, so people can easily run your model from their own code:
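Something like this, using the replicate Python client (a sketch; the input names below are placeholders, and the real parameters are listed on the model page):

```python
import replicate

# Requires the REPLICATE_API_TOKEN environment variable to be set.
# Placeholder input names for illustration; see the model page for the actual
# parameters exposed by predict().
output = replicate.run(
    "nateraw/stable-diffusion-videos",
    input={"prompts": "blueberry spaghetti|strawberry spaghetti"},
)
print(output)
```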
We noticed you have already registered a Replicate account, so do claim the page so you can modify the demo and push any updates to it!
In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊