tvm
https://github.com/apache/tvm
Python
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported · 3 Subscribers
Help out
- Issues
- [Bug] tvm.cuda().exist returns false while torch.cuda.is_available() returns true
- [Bug] terminate called after throwing an instance of 'tvm::runtime::InternalError'
- [Relax] Expose BlockBuilder's Analyzer instance in Python
- [BugFix][CUDA] Increase FloatImm precision when printing 64 bit values in CUDA codegen
- [Bug] [Relax] Build fails when applying `dlight.gpu.GeneralReduction` to `R.nn.group_norm` with dynamic shapes and `R.reshape`
- [Bug] Check failed: (::tvm::runtime::IsContiguous(tensor->dl_tensor)) is false: DLManagedTensor must be contiguous.
- [Relax] Fix the parser to avoid treating a list as an integer
- [Bug] InternalError: Check failed: (it != slot_map_.end()) is false: Var m is not defined in the function but is referenced by m * n during VM Shape Lowering
- [Bug] Inconsistent module structure and InternalError: Check failed: (!require_value_computed) is false: PrimExpr m is not computed
- [Bug] TVMError: unknown intrinsic Op(tir.atan) during relax.build with custom atan TIR function
- Docs
- Python not yet supported
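Several of the issues above hinge on layout checks such as the `DLManagedTensor must be contiguous` failure. As background, a contiguity check of this kind compares a tensor's strides against the strides a dense row-major layout would have. The sketch below is purely illustrative (it is not TVM's actual `IsContiguous` implementation) and shows the idea on plain shape/stride tuples, with strides counted in elements:

```python
def is_contiguous(shape, strides):
    """Return True if strides match a dense row-major layout for shape.

    Walks dimensions from innermost to outermost, accumulating the
    stride a contiguous tensor would have at each step. Dimensions of
    size 1 are skipped, since their stride never affects addressing.
    """
    expected = 1
    for dim, stride in zip(reversed(shape), reversed(strides)):
        if dim != 1 and stride != expected:
            return False
        expected *= dim
    return True

print(is_contiguous((2, 3), (3, 1)))  # dense row-major -> True
print(is_contiguous((2, 3), (1, 2)))  # column-major view -> False
```

A transposed or sliced view typically fails such a check even though the underlying buffer is dense, which is why operations that require contiguous input often ask callers to copy the tensor first.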