
[Feature]: Add ROCm for Windows support #1880

Open
1 of 4 tasks
TeutonJon78 opened this issue Jul 29, 2023 · 24 comments
Labels
enhancement New feature or request

Comments

@TeutonJon78

TeutonJon78 commented Jul 29, 2023

Feature description

Since there seems to be a lot of excitement about AMD finally releasing ROCm support for Windows, I thought I would open a tracking FR for information related to it. Before it can be integrated into SD.Next, PyTorch needs to add support for it, which also requires several other dependencies being ported to Windows. Obviously no ETA is known for any of that work to be done and released.

Status:

  • [x] AMD released drivers
  • [ ] ROCm dependencies updated for Windows
  • [ ] PyTorch ported to ROCm for Windows
  • [ ] any needed support added to SD.Next

Tracking Bugs/PRs from upstream projects:

Version Platform Description

AMD on Windows 10/11

@TeutonJon78 TeutonJon78 added the enhancement New feature or request label Jul 29, 2023
@brknsoul
Contributor

SQUEEEEEE!!!
Ahem.. adjusts tie.. marvellous, ol' chap!

@TeutonJon78
Author

TeutonJon78 commented Jul 29, 2023

If anyone finds any more requirements that need porting, post them here as well. I didn't do a deep dive into everything that's needed beyond PyTorch and MIOpen.

@NeedsMoar

Can't wait to see what the performance is like (and, more importantly, to get access to the full range of SD features the various UIs support). I've been using Shark. It supports LoRAs... and that's it, and it has to recompile for every combination of model, LoRA, resolution, and sometimes prompt length. Still, from benchmarks I've found lying around, it's faster than anything but high-end NVIDIA cards at SD 2.1 image generation under that compilation method. The problem is the inflexibility of the build system and how much disk space it starts eating up after you mess around with multiple model/LoRA combinations. DirectML is a huge speed drop in comparison, and its memory management eats memory.

I kind of figured this was coming, since the late-June Agility SDK hardware-scheduling drivers accidentally included an amdhip64.dll, but it's nice to see it arrived faster than I'd thought. I'll be keeping an eye out here for news. :D

@sukualam

sukualam commented Aug 7, 2023

ROCm on Windows doesn't support my GPU (AMD RX 560); guess I'll stay on Hackintosh.

@TeutonJon78
Author

There is also this handy chart on the differences between ROCm on Linux and Windows: https://rocm.docs.amd.com/en/latest/rocm.html#rocm-on-windows

@Enferlain

Seems like Shark is getting ROCm somehow, despite MIOpen not even being on Windows yet.


@CharlesCato

I don't want to open a new issue just for this, so I'll allow myself to comment here.
Someone managed to get a repo 'working' with ONNX and Olive. I followed the tutorial, but I couldn't optimize my own checkpoint, hires fix wasn't working on my rig, and I couldn't get LoRAs working either.

One positive point though: generating a basic pic was indeed about 10 times faster on my 7900 XT (I'm really surprised how fast it is), but since I'm not interested in generating pics without hires fix/LoRAs/custom checkpoints, I won't try more for now.
But if someone else wants to give it a try, I'll leave this here:
https://community.amd.com/t5/ai/updated-how-to-running-optimized-automatic1111-stable-diffusion/ba-p/630252

@lshqqytiger
Collaborator

I added experimental support for ONNX and Olive. Wiki
I recommend using this one rather than my fork: since a1111 does not have diffusers support while SD.Next does, the implementation in SD.Next is more organized and clean.

@brknsoul
Contributor

brknsoul commented Oct 30, 2023

There seems to be an Olive-optimised model of Dreamshaper here: https://huggingface.co/softwareweaver/dreamshaper

Since I'm still getting errors trying to convert models, I'll give this a try and report back.

@CharlesCato

I added experimental support for ONNX and Olive. Wiki

So, I'm very new to git and repos/branches.
You said in the instructions to switch to the Olive branch. I assumed I needed to run
git checkout --track origin/olive
But then when I try to launch a1111 I get this error:

  File "D:\StableAMD\onnx\automatic\launch.py", line 170, in <module>
    init_modules() # setup argparser and default folders
  File "D:\StableAMD\onnx\automatic\launch.py", line 33, in init_modules
    import modules.cmd_args
  File "D:\StableAMD\onnx\automatic\modules\cmd_args.py", line 3, in <module>
    from modules.paths import data_path
  File "D:\StableAMD\onnx\automatic\modules\paths.py", line 6, in <module>
    import olive.workflows
ModuleNotFoundError: No module named 'olive'

Not sure what to do.

@brknsoul
Contributor

It's probably best to just clone a separate instance;

git clone -b olive https://github.com/vladmandic/automatic olive

This will create the Olive branch in a folder called olive.

@CharlesCato

git clone -b olive https://github.com/vladmandic/automatic olive

I actually tried that... but for some reason I thought putting automatic/olive was the right way to do it, so it failed...
Thank you. I'll try then.

@brknsoul
Contributor

brknsoul commented Oct 31, 2023

git (program) clone (command) -b olive (select the 'olive' branch) https://github.com/vladmandic/automatic (URL of the git repo) olive (folder to clone into).

That last one can be anything;
git clone -b olive https://github.com/vladmandic/automatic iLikeBigButtsAndICannotLie
will clone the repo into a folder called "iLikeBigButtsAndICannotLie" ;-)

@CharlesCato

Well... nothing changed though. Same error.

@lshqqytiger
Collaborator

Thank you for reporting! I think the recent commits about paths corrupted the installation process. Will be fixed.

@vladmandic
Owner

Thank you for reporting! I think the recent commits about paths corrupted the installation process. Will be fixed.

If you've merged dev recently, then yes. Note that paths.py must not have any additional dependencies or imports, as it's imported by the launcher before anything else has started.
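The constraint described here (an early-imported module must not pull in optional packages) is commonly solved by deferring the import to the point of use. A minimal illustrative sketch, with hypothetical names; this is not the actual SD.Next code:

```python
# Hypothetical sketch: keep early-imported modules (like paths.py) free of
# optional dependencies by importing them inside the function that needs
# them, rather than at module scope.

def run_olive_workflow(config):
    """Run an Olive workflow, importing olive only when actually needed."""
    try:
        # Deferred import: code that merely imports this module during
        # startup never touches the optional 'olive' package.
        import olive.workflows
    except ImportError as exc:
        raise RuntimeError(
            "Olive is not installed; ONNX/Olive optimization is unavailable"
        ) from exc
    return olive.workflows.run(config)
```

With this pattern, importing the module succeeds even when the optional package is missing; only calling the feature fails, with a clear message.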

@lshqqytiger
Collaborator

Fixed: 2fd8d2c

@vladmandic
Owner

Fixed: 2fd8d2c

That cannot go into base_requirements.

And installing olive must be optional, not for every install.

I can add a --use-onnx flag and bind it to that if you want?
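The proposed opt-in flag could be wired roughly like this; a minimal sketch assuming argparse (the flag name matches the proposal, but the actual wiring in SD.Next may differ):

```python
# Illustrative sketch: gate an optional dependency behind a CLI flag so a
# default install never pulls in the olive requirements.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--use-onnx", action="store_true",
                    help="enable ONNX runtime and Olive optimization extras")

default_args = parser.parse_args([])             # default launch: feature off
onnx_args = parser.parse_args(["--use-onnx"])    # opt-in launch: feature on

if onnx_args.use_onnx:
    # Only here would the launcher install/import the optional olive
    # requirements, keeping them out of base_requirements.
    pass
```

This keeps the optional install path entirely out of the default startup sequence, which is the concern raised above about base_requirements.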

@lshqqytiger
Collaborator

lshqqytiger commented Nov 1, 2023

I think so. But if it imports modules.paths from init_modules and then imports olive, it's broken. microsoft/Olive#554 Is there any other way to solve this?

And I considered that, but thought it was inappropriate because OnnxStableDiffusionPipeline belongs to diffusers. Should I add --use-onnx?

@vladmandic
Owner

Let me take a look tomorrow. I've handled module conflicts before; never cleanly, but doable.

@vladmandic
Owner

@lshqqytiger lets move olive conversation to #2429

@Kademo15

ROCm/MIOpen#2570

@morovinger

ROCm has an official release on Windows now, am I right? Are there any plans to complete this task then?

@brknsoul
Contributor

brknsoul commented Apr 5, 2024

Source?

Development

No branches or pull requests

10 participants