Issues: pytorch/ao
[Installation] Warn if submodule wasn't checked out
Labels: enhancement
#1534 opened Jan 10, 2025 by cpuhrsch
[Proposal] Add targeting to CI tests
Labels: ci, topic: improvement, triaged
#1524 opened Jan 8, 2025 by danielvegamyhre
Float8Linear autocast's torch.get_autocast_gpu_dtype() is deprecated
Labels: triaged
#1522 opened Jan 7, 2025 by apaz-cli
torch.export.export_for_inference not available in current stable PyTorch
Labels: triaged
#1514 opened Jan 7, 2025 by dongxiaolong
Suddenly not seeing much memory savings with FP8 optimizer
Labels: optimizer, reproduction needed
#1499 opened Jan 6, 2025 by asahni04
How can I use torchao effectively with NASLib's PyTorch models?
#1467 opened Dec 31, 2024 by anissbslh
The internal torchao tensor subclasses cause errors with torch.compile
Labels: bug
#1463 opened Dec 28, 2024 by JohnnyRacer
Fuse ConvTranspose2d + BatchNorm2d + ReLU
Labels: enhancement
#1462 opened Dec 28, 2024 by yokosyun
Linear model does not save time when using to_sparse_semi_structured
Labels: triaged
#1460 opened Dec 26, 2024 by phyllispeng123
torchao/experimental/ CMAKE_SYSTEM_PROCESSOR checks need to accept aarch64
Labels: build, triaged
#1457 opened Dec 22, 2024 by swolchok
QOL: easy way to run lint locally that matches the behavior in CI
#1448 opened Dec 19, 2024 by vkuzo
Segmentation Fault Running Int8 Quantized Model on GPU
Labels: question, triaged
#1437 opened Dec 18, 2024 by wendywangwwt
QLoRA: worse memory usage when linear_nf4 is used on the output
Labels: performance
#1433 opened Dec 18, 2024 by pbontrager
[Feature Request] W4A4 Quantization Support in torchao
Labels: topic: new feature, topic: performance
#1406 opened Dec 12, 2024 by xxw11
Does Torch Support NPU Architectures like Ascend MDC910B and Multi-GPU Quantization for Large Models?
Labels: multibackend
#1405 opened Dec 12, 2024 by Lenan22