
[Question] Serving the Llama 3.3 70B model with 8x RTX 4090 GPUs using Triton Inference Server without NVLink #677

Unanswered
novela77 asked this question in Q&A
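
Since the thread has no replies, here is a rough feasibility sketch for the setup in the title. Everything in it is an assumption for illustration, not a configuration confirmed by the asker: the 70B parameter count, fp16 weights, a tensor-parallel degree of 8 (one shard per GPU), and 24 GB per RTX 4090. The arithmetic shows why memory is tight on this hardware, and the second half probes whether the GPUs even have peer-to-peer access, since without NVLink the tensor-parallel all-reduces run over PCIe.

```python
import torch

# Rough memory budget for tensor-parallel (TP) serving of a 70B model on
# 8x RTX 4090 (24 GB each). All constants are assumptions for illustration.
PARAMS = 70e9          # Llama 3.3 70B parameter count
BYTES_PER_PARAM = 2    # fp16/bf16 weights
TP_SIZE = 8            # one weight shard per GPU
GPU_MEM_GB = 24        # RTX 4090 memory

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9   # ~140 GB of weights in total
per_gpu_gb = weights_gb / TP_SIZE             # ~17.5 GB shard per GPU
print(f"per-GPU weight shard: {per_gpu_gb:.1f} GB, "
      f"headroom: {GPU_MEM_GB - per_gpu_gb:.1f} GB for KV cache/activations")

# Without NVLink, TP all-reduces fall back to PCIe. Check whether direct
# peer-to-peer access between GPU pairs is at least available; if not,
# NCCL stages transfers through host memory, which is slower still.
if torch.cuda.is_available():
    n = torch.cuda.device_count()
    for i in range(n):
        for j in range(n):
            if i != j and not torch.cuda.can_device_access_peer(i, j):
                print(f"no P2P path between GPU {i} and GPU {j}")
```

With only a few GB of headroom per card, the KV cache becomes the limiting factor for batch size and context length. In practice, Triton's TensorRT-LLM and vLLM backends both support tensor parallelism, and a quantized checkpoint (for example INT4 AWQ) is typically the more realistic fit for a 70B model on 24 GB cards.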

