ggml-cuda: reorder only relevant nodes #17639
Conversation
|
I'm trying to merge master into my PR #16548 and got hit hard by this 😭 Is it possible to avoid changing the cgraph? Do the node changes need to be persistent after returning from this function? If not, isn't it easier to just create a temporary copy? It is of course possible to hack around this in my PR, but I feel the current solution is already a bit hacky and I would like to avoid a hack on top of a hack. |
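To illustrate the "temporary copy" idea being suggested: a backend can build a reordered view of the node list on a local copy, leaving the caller's graph untouched. This is a minimal standalone sketch, not the real ggml API — `node`, `graph`, and `reordered_view` are hypothetical stand-ins, and sorting by stream is only an illustrative criterion.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-ins for ggml types (NOT the real ggml API).
struct node  { int id; int stream; };
struct graph { std::vector<node *> nodes; };

// Build a reordered view of the nodes without mutating the caller's graph:
// copy the node pointers locally, then stable-sort the copy by stream so
// nodes in the same stream keep their relative order.
std::vector<node *> reordered_view(const graph & g) {
    std::vector<node *> local(g.nodes);  // temporary copy of the node pointers
    std::stable_sort(local.begin(), local.end(),
        [](const node * a, const node * b) { return a->stream < b->stream; });
    return local;                        // g.nodes is left unchanged
}
```

Because only the pointer array is copied, the cost is small relative to the graph itself, and the original node order survives for any caller that depends on it.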
|
@wishstudio for now you should just not use this path in your PR (it is not enabled by default) |
|
BTW @wishstudio do you still face an issue after this PR? I think it solves your particular issue |
|
I’m not sure if my graph-handling code is semantically correct with the nodes changing halfway through, but if you are still working on it, it’s OK for me to skip this codepath right now.
Without this PR it won’t even compile because the cgraph itself is being changed. I think this PR should make it work.
|
|
@wishstudio sorry, I was not aware of your efforts and that the recently merged approach would be causing problems for you. As I'm sure you're aware, Diego is currently on hiatus. For this reason, pull requests such as yours that touch his work will probably not receive much attention from maintainers. As it is, I'm already operating at full capacity, but I intend to also do work on ggml backends. When I do, I'll take a look at the ggml graph plan API and see if I can review your work (definitely no promises, though). FWIW, I don't consider the current implementation of concurrent CUDA streams to be something we should use long-term, and I agree that ggml backends should not change ggml graphs if at all possible. |
|
@wishstudio I'll take a look at your PR and try to review it as well |
|
@am17an After you merged this PR I updated mine and it is working fine now. Thanks! @JohannesGaessler Thank you for the explanation, and I appreciate your stance on code quality. Please take your time! I think that PR also enables more graph-based optimizations across all the backends, but it still needs some design work. The annoying part at the moment is that I did some refactoring and the diff got messed up, causing conflicts every time. |
|
BTW, to be clear so there's no confusion: the nodes don't change "halfway through"; they are captured by the CUDA graph and remain the same until the graph is captured again. This PR fixed a bug where the size of the cgraph changes because splits are added for CPU MoE. I will add code to disable this when CUDA graphs cannot be used, as it leads to a performance drop anyway. Reordering or skipping nodes is allowed in graph compute under the correct conditions; the earlier PR plus this PR does not break this paradigm. |
`GGML_CUDA_GRAPH_OPT=1` is broken with any tensor offloading options like `n-cpu-moe` because we just copy the graph back to enable fusion within streams. This PR only re-orders nodes within streams. Also, because we don't use CUDA graphs for hybrid inference at the moment, `GGML_CUDA_GRAPH_OPT=1` is slower than not using it. This may change in the future.
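The "re-orders nodes only within streams" behavior described above can be sketched as follows. This is a hedged illustration, not the PR's actual code: `seg` and `reorder_within_streams` are hypothetical names, the node list is simplified to ints, and "even ids first" is a placeholder for whatever reordering criterion the backend actually applies. The point is that each stream's contiguous segment is reordered in isolation, so split boundaries (e.g. splits added for CPU MoE offloading) are never crossed.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical: a half-open range [begin, end) of node indices belonging
// to one stream segment of the graph.
struct seg { std::size_t begin, end; };

// Reorder nodes only inside each segment; nodes never move across a
// segment (split) boundary. The partition criterion here (even ids
// first) is purely illustrative.
void reorder_within_streams(std::vector<int> & nodes, const std::vector<seg> & segs) {
    for (const seg & s : segs) {
        std::stable_partition(nodes.begin() + s.begin, nodes.begin() + s.end,
                              [](int n) { return n % 2 == 0; });
    }
}
```

Restricting the reorder to per-segment ranges is what keeps the optimization safe when extra splits appear: each segment is permuted internally, but the segment sequence, and hence the split structure, is preserved.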