Issues: pytorch/xla
multi_queries_paged_attention_kernel fails with Llama3 70B on a TPU-v4-16 with sequence length of 256 (#8515)
Opened Dec 21, 2024 by OhadRubin
2 questions for the composite op feature (#8486)
Label: stablehlo (StableHLO related work)
Opened Dec 12, 2024 by Zantares
Program hangs/gets stuck after using F.interpolate in the VAE decode step of the HunyuanVideo model (#8470)
Opened Dec 9, 2024 by radna0
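Since #8470 names a concrete API call, a minimal sketch of the pattern it exercises may help with triage. This is not the reporter's reproducer: the tensor shape, scale factor, and interpolation mode below are assumptions, chosen only to show an F.interpolate upsampling running on an XLA (TPU) device.

```python
# Hypothetical minimal sketch of the reported pattern: F.interpolate on an
# XLA device, as a VAE decoder upsampling step would do. Shapes, scale
# factor, and mode are illustrative assumptions, not taken from the issue.
import torch
import torch.nn.functional as F
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# Small NCHW activation standing in for a VAE decoder feature map.
x = torch.randn(1, 16, 32, 32, device=device)

# Nearest-neighbor upsampling; the issue does not state which mode is used.
y = F.interpolate(x, scale_factor=2, mode="nearest")

# Force the lazy XLA graph to compile and execute; a hang would surface here.
xm.mark_step()
print(y.shape)  # expected: torch.Size([1, 16, 64, 64])
```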
TPU memory use increased significantly in torch/xla 2.6.0.dev20241107 (#8423)
Opened Nov 27, 2024 by dudulightricks
[LoweringContext] Modularize for enhanced user control and optimization (#8415)
Opened Nov 25, 2024 by rpsilva-aws
[LoweringContext] Support explicit device data parameters for scalar inputs (#8414)
Opened Nov 25, 2024 by rpsilva-aws
Review documentation in the docs/source/contribute directory (#8413)
Opened Nov 25, 2024 by mikegre-google
torch.split followed by torch.cat fails to restore the tensor (v2.1-2.5) (#8410)
Opened Nov 23, 2024 by jeffhataws
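The title of #8410 describes a concrete round trip (split, then cat, should be the identity), so a hedged sketch of that check follows. It is not the reproducer attached to the issue; the tensor shape, split size, and dimension are assumptions made for illustration.

```python
# Hypothetical round-trip check for the behavior the title describes:
# splitting a tensor and concatenating the pieces should reproduce the
# original. Shapes and the split size are illustrative assumptions.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

x = torch.arange(12, dtype=torch.float32, device=device).reshape(3, 4)

# Split along dim 0 into chunks of size 1, then concatenate them back.
pieces = torch.split(x, 1, dim=0)
y = torch.cat(pieces, dim=0)

xm.mark_step()  # materialize the lazy XLA graph

# The report says this round trip does not restore the tensor on 2.1-2.5;
# when working correctly this prints True.
print(torch.equal(x.cpu(), y.cpu()))
```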
Dataloading hangs on Trillium when using recent wheels (#8407)
Label: dataloading
Opened Nov 21, 2024 by miladm
Filter: issues updated in the last three days (updated:>2024-12-21).