
The model weights were updated 17 hours ago and now loading them causes the error #14

Open
KyriaAnnwyn opened this issue Nov 29, 2022 · 12 comments

@KyriaAnnwyn

Error message when loading new weights from lambdalabs/sd-image-variations-diffusers:

AttributeError: module transformers has no attribute CLIPImageProcessor
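A quick sanity check of the installed build (a minimal sketch; CLIPImageProcessor exists only in transformers >= 4.25.0, which is why older versions raise this AttributeError):

import transformers

# CLIPImageProcessor was added in transformers 4.25.0; on 4.24 and older
# the attribute is missing, which is exactly the AttributeError above.
print(transformers.__version__)
print(hasattr(transformers, "CLIPImageProcessor"))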

@KyriaAnnwyn
Copy link
Author

Updated transformers to 4.25.0.dev0.

With this version I get the following error when loading:

TypeError: getattr(): attribute name must be string

@generalsvr

Same error:

AttributeError: module transformers has no attribute CLIPImageProcessor

Any solution?

@KyriaAnnwyn
Author

@artyemk The CLIPImageProcessor error is solved by installing transformers 4.25.0.dev0 from the transformers repo; the model configs mention that they work with this version. But I haven't solved the subsequent getattr() error yet.

@KyriaAnnwyn
Author

Another solution is to check out a previous revision of the model configs and weights (see the from_pretrained(..., revision=...) examples later in this thread).

@generalsvr

@KyriaAnnwyn I tried to solve this issue by running

pip install --upgrade git+https://github.com/huggingface/transformers.git

But now I get another error:

AttributeError: 'CLIPVisionModelWithProjection' object has no attribute 'get_image_features'

@KyriaAnnwyn
Author

@artyemk It seems the version of transformers you got is 4.24. Please check this.

@mkabatek

mkabatek commented Dec 1, 2022

I'm getting the same error. I'm trying to install 4.25.0.dev0 but I'm not sure how. Any advice?

(lambda-diffusers) PS C:\Users\Landon\stable_diffusion\lambda-diffusers> pip install -Iv transformers==4.25.0.dev0
Using pip 22.3.1 from C:\Users\Landon\anaconda3\envs\lambda-diffusers\lib\site-packages\pip (python 3.7)
ERROR: Could not find a version that satisfies the requirement transformers==4.25.0.dev0 (from versions: 0.1, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.7.0, 2.8.0, 2.9.0, 2.9.1, 2.10.0, 2.11.0, 3.0.0, 3.0.1, 3.0.2, 3.1.0, 3.2.0, 3.3.0, 3.3.1, 3.4.0, 3.5.0, 3.5.1, 4.0.0rc1, 4.0.0, 4.0.1, 4.1.0, 4.1.1, 4.2.0, 4.2.1, 4.2.2, 4.3.0rc1, 4.3.0, 4.3.1, 4.3.2, 4.3.3, 4.4.0, 4.4.1, 4.4.2, 4.5.0, 4.5.1, 4.6.0, 4.6.1, 4.7.0, 4.8.0, 4.8.1, 4.8.2, 4.9.0, 4.9.1, 4.9.2, 4.10.0, 4.10.1, 4.10.2, 4.10.3, 4.11.0, 4.11.1, 4.11.2, 4.11.3, 4.12.0, 4.12.1, 4.12.2, 4.12.3, 4.12.4, 4.12.5, 4.13.0, 4.14.0, 4.14.1, 4.15.0, 4.16.0, 4.16.1, 4.16.2, 4.17.0, 4.18.0, 4.19.0, 4.19.1, 4.19.2, 4.19.3, 4.19.4, 4.20.0, 4.20.1, 4.21.0, 4.21.1, 4.21.2, 4.21.3, 4.22.0, 4.22.1, 4.22.2, 4.23.0, 4.23.1, 4.24.0)
ERROR: No matching distribution found for transformers==4.25.0.dev0

@mkabatek

mkabatek commented Dec 1, 2022

> @artyemk The CLIPImageProcessor error is solved by installing transformers 4.25.0.dev0 from the transformers repo; the model configs mention that they work with this version. But I haven't solved the subsequent getattr() error yet.

I was able to install transformers with this command:

pip install --upgrade "git+https://github.com/huggingface/[email protected]"

However, now I am also getting the getattr() error:

Traceback (most recent call last):
  File ".\im2im.py", line 11, in <module>
    pipe = StableDiffusionImageEmbedPipeline.from_pretrained("lambdalabs/sd-image-variations-diffusers")
  File "C:\Users\Landon\anaconda3\envs\lambda-diffusers\lib\site-packages\diffusers\pipeline_utils.py", line 373, in from_pretrained
    load_method = getattr(class_obj, load_method_name)
TypeError: getattr(): attribute name must be string
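For context, the traceback points at diffusers' pipeline_utils.from_pretrained: when the class named in the model's model_index.json cannot be matched against the installed transformers, load_method_name is left as None, and getattr() rejects a None attribute name. A minimal, illustrative repro (not the actual diffusers code path):

# load_method_name stays None when diffusers can't resolve the class
# named in model_index.json against the installed transformers.
class SomeModelClass:
    pass

load_method_name = None
load_method = getattr(SomeModelClass, load_method_name)
# TypeError: getattr(): attribute name must be string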

@hniksoleimani

Install the following version of transformers:

pip install transformers==4.19.2

Then run the following script:

from pathlib import Path
from lambda_diffusers import StableDiffusionImageEmbedPipeline
from PIL import Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pin the model to an older revision whose configs match transformers 4.19.2.
pipe = StableDiffusionImageEmbedPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers",
    revision="273115e88df42350019ef4d628265b8c29ef4af5",
)
pipe = pipe.to(device)

im = Image.open("your/input/image/here.jpg")
num_samples = 4

# Generate num_samples variations of the input image.
image = pipe(num_samples * [im], guidance_scale=3.0)
image = image["sample"]

base_path = Path("outputs/im2im")
base_path.mkdir(exist_ok=True, parents=True)
for idx, im in enumerate(image):
    im.save(base_path / f"{idx:06}.jpg")

@generalsvr

I solved this issue by initializing the pipe like this:

pipe = StableDiffusionImageEmbedPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers",
    revision="273115e88df42350019ef4d628265b8c29ef4af5",
)

@KyriaAnnwyn
Author

KyriaAnnwyn commented Dec 5, 2022

@artyemk @hniksoleimani This makes you use the old version of the weights, but it would be more interesting to test the new ones!

@mkabatek

mkabatek commented Dec 5, 2022

@hniksoleimani @artyemk

> I solved this issue by initializing the pipe like this:
>
> pipe = StableDiffusionImageEmbedPipeline.from_pretrained(
>     "lambdalabs/sd-image-variations-diffusers",
>     revision="273115e88df42350019ef4d628265b8c29ef4af5",
> )

Thank you, this works. However, now I'm curious: is there a way to run this on a lower-memory GPU? For example, I see people use .half() or fp16; how would I do that with this script?

Thanks again for your feedback. I am running on an Nvidia 3070 with 8 GB.
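A common way to cut GPU memory at the time was to load the weights in half precision. A minimal sketch, assuming this custom pipeline forwards torch_dtype to its submodels the way diffusers' DiffusionPipeline.from_pretrained does (an assumption, not confirmed in this thread):

import torch
from lambda_diffusers import StableDiffusionImageEmbedPipeline

# torch.float16 roughly halves GPU memory use versus fp32; whether the
# custom pipeline accepts torch_dtype here is an assumption.
pipe = StableDiffusionImageEmbedPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers",
    revision="273115e88df42350019ef4d628265b8c29ef4af5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

If torch_dtype isn't accepted, casting the individual submodules after loading (e.g. pipe.unet.half(), assuming the standard submodule names) is the usual fallback.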
