Description
I'm seeing a 12-second run time that isn't there for other save nodes, and that really hurts when using Comfy with an API. It seems like the SD PromptSaver node might actually be flushing the flux model and reloading it into memory; I'm guessing that when the SD ParameterGenerator isn't used, this is how you're getting the model name, which seems really inefficient. If the model name could just be a string, that would be ideal, or really anything that wouldn't trigger a reload.
Reproduction steps
Run any basic workflow using the SD PromptSaver and look at the inference time reported for that node on the first run from a cold boot.
Image file
first run:
![image](https://private-user-images.githubusercontent.com/6729692/380342242-bff11478-b6d0-437c-8a12-551829c60c89.png)
second run:
![image](https://private-user-images.githubusercontent.com/6729692/380342356-ace3fabb-a4e1-4185-a0c8-b79b09499e85.png)
first run with a dummy empty model nullModel.ckpt:
![image](https://private-user-images.githubusercontent.com/6729692/380342764-af00f9bb-531f-422a-8f9c-09ee50439ef7.png)
Sorry for the late reply. This is weird, and I don't think it's normal. The only time-consuming part of the Saver node is calculating hashes (including the checkpoint and LoRAs). However, once calculated, these hashes don't need to be recalculated until the server is restarted, and you can disable this feature with the calculate_hash option. If you're still facing this issue, try updating the node first and then turning off the calculate_hash option. If the problem persists, please let me know.
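For anyone hitting this with a large checkpoint, a first-run hash is the most plausible explanation for the extra time: reading and hashing a multi-gigabyte flux model from disk can easily take several seconds. Below is a minimal sketch of the cache-until-restart behavior described above, assuming a SHA-256 file hash kept in a module-level dict; the function and cache names are hypothetical, not the node's actual code.

```python
import hashlib
import os

# Hypothetical module-level cache: file path -> hex digest.
# It lives for the lifetime of the server process, so the hash is
# only computed once per checkpoint until the server restarts.
_hash_cache: dict[str, str] = {}

def cached_model_hash(ckpt_path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of the file at ckpt_path, hashing it at most once."""
    ckpt_path = os.path.abspath(ckpt_path)
    if ckpt_path not in _hash_cache:
        h = hashlib.sha256()
        # Stream the file in 1 MiB chunks so large checkpoints don't load into RAM.
        with open(ckpt_path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        _hash_cache[ckpt_path] = h.hexdigest()
    return _hash_cache[ckpt_path]
```

With a cache like this, only the first execution after a server restart pays the hashing cost; later runs return the stored digest immediately, and turning calculate_hash off skips the work entirely.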