
Releases: hidet-org/hidet

Hidet v0.2.1

18 Feb 06:26
0617089


What's Changed

  • [Version] Bump version to 0.2.1.dev by @yaoyaoding in #73
  • [CI] Prevent fork repos from running workflow by @yaoyaoding in #74
  • [Fixbug] Fix a bug in trace_from when the inputs are directly used as outputs by @yaoyaoding in #76
  • [Operator] Add reduce_f16 and squeeze as Reduce's resolve variants by @hjjq in #75
  • [IR] Input specification assertion message for valid IR check by @AndreSlavescu in #78
  • [Operator] Add conv3d, max_pool3d, avg_pool3d by @hjjq in #79
  • [Dynamo] Add the entry point registration for dynamo by @yaoyaoding in #80
  • [Fix] Update shape utility functions to expect Sequence instead of List by @yaoyaoding in #86
  • [Bugfix] 'double'->'float64' in onnx dtype conversion by @soodoshll in #88
  • [Fix] Mark the reduce fp16 operator not fusible by @yaoyaoding in #100
  • [Fixbug] Use uint64_t instead of unsigned long long for literals by @yaoyaoding in #101
  • [Fixbug] Fix a bug in the minimum and maximum operator by @yaoyaoding in #102
  • [Dynamo] Update dynamo registration after pytorch refactored that part by @yaoyaoding in #84
  • [Fixbug] Fix bugs in binary_arithmetic op and swizzle layout by @hjjq in #104
  • [Fixbug] Call fuse in reduce_fp16 operator by @yaoyaoding in #105
  • [ONNX] Fix the out of bound error in onnx slice function during importing by @yaoyaoding in #106
  • [Fixbug] Reverse map of binary operator by @yaoyaoding in #107
  • [Fixbug] Add attributes to Clip operator by @yaoyaoding in #108
  • [Fixbug] Binary arithmetic ops raise error when one is scalar on GPU by @yaoyaoding in #109
  • [Graph] Refactor forward function of FlowGraph by @yaoyaoding in #110
  • [Fixbug] Use int64 as the output of arg-reduce by @yaoyaoding in #111
  • [README] Update readme by @yaoyaoding in #114
  • [Fixbug] Fix a bug when a graph output is constant by @yaoyaoding in #113
  • [Community] Create CODE_OF_CONDUCT.md by @yaoyaoding in #115
  • [Community] Update issue templates by @yaoyaoding in #116
  • [Fixbug] Resolve the min/max function according to compute capability by @yaoyaoding in #112
  • [Workflow] Update workflow by @yaoyaoding in #117
  • [Workflow] Update publish workflow by @yaoyaoding in #119
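
Several of the fixes above concern binary arithmetic with a scalar operand (#104, #109) and min/max handling (#102, #112). As a rough sketch of the intended semantics, the numpy convention below treats the scalar as a 0-d operand broadcast to the array's shape; it is an assumption here that hidet mirrors this behavior:

```python
import numpy as np

# Scalar-vs-array binary arithmetic: the scalar behaves like a 0-d operand
# broadcast to the array's shape (numpy semantics; PRs #104/#109 fix the
# corresponding GPU code paths in hidet).
a = np.array([1.0, -2.0, 3.0])
print(np.maximum(a, 0.0))  # elementwise max against a scalar
print(a + 2.0)             # scalar addition broadcasts the same way
```

The same broadcasting rule applies to all the elementwise binary operators, regardless of which operand is the scalar.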

New Contributors

  • @AndreSlavescu made their first contribution in #78
  • @soodoshll made their first contribution in #88

Full Changelog: v0.2.0...v0.2.1

Hidet v0.2.0

13 Jan 23:59


What's Changed

New Contributors

Full Changelog: v0.1...v0.2.0

Hidet v0.1

06 Jan 02:57
001c438


This is the first release of hidet.

For the usage of hidet, please visit: https://docs.hidet.org
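
As a minimal usage sketch of the torch dynamo backend added in #31/#32: with a recent PyTorch (2.x, where `torch.compile` exists), hidet can be selected by backend name. This assumes hidet and a CUDA-enabled PyTorch are installed, so the snippet guards for their absence:

```python
# Minimal sketch: using hidet as a torch.compile backend (#31/#32 add the
# dynamo backend). Guarded so it degrades gracefully when torch/hidet or a
# CUDA device is unavailable.
import importlib.util

def hidet_backend_ready() -> bool:
    """True when torch and hidet are importable and a CUDA device exists."""
    if any(importlib.util.find_spec(m) is None for m in ("torch", "hidet")):
        return False
    import torch
    return torch.cuda.is_available()

if hidet_backend_ready():
    import torch
    model = torch.nn.Linear(16, 16).cuda().eval()
    compiled = torch.compile(model, backend="hidet")  # dispatch through hidet
    y = compiled(torch.randn(1, 16, device="cuda"))
else:
    print("torch/hidet not available; see https://docs.hidet.org")
```

See https://docs.hidet.org for the supported installation paths and backend options.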

What's Changed

  • [Docs] Update documentation by @yaoyaoding in #2
  • [Operator] Add leaky_relu and conv2d_transpose operator by @yaoyaoding in #3
  • [Doc] Add doc on how to define operator computation by @yaoyaoding in #4
  • [Bug] fix bugs in reshape and conv2d_transpose by @yaoyaoding in #5
  • [Option] Add option module by @yaoyaoding in #6
  • [Docs] Add documentation on how to add new operators by @yaoyaoding in #7
  • [Operator] Add PRelu op by @hjjq in #8
  • [Docs] Add documentation for operator cache & fix a typo by @yaoyaoding in #9
  • [Operator] Add Abs and And operator by @hjjq in #10
  • [CI] Update github workflow by @yaoyaoding in #11
  • [CI] Update docs workflow; do not delete remote dest dir by @yaoyaoding in #12
  • [Operator] Add conv2d_transpose_gemm operator & fix a bug by @yaoyaoding in #13
  • [Runtime] force to use gpu tensor buffer in cuda graph by @yaoyaoding in #14
  • [Functor] Fix a bug in IR functor by @yaoyaoding in #15
  • [Graph] Force users to give an input order when multiple symbolic inputs are found in traced graph by @yaoyaoding in #17
  • [Operator] Add BitShift, Bitwise*, Ceil Operators by @hjjq in #19
  • [IR] Refactor scalar type system by @yaoyaoding in #18
  • [IR] Refactoring math functions by @yaoyaoding in #20
  • [Operator] Fix a bug when resolving matmul to batch_matmul by @yaoyaoding in #21
  • [Operator] Add cubic interpolation to Resize Operator by @hjjq in #22
  • [Packfunc] Refactor packed func & add vector type by @yaoyaoding in #23
  • [Pass] Add lower_special_cast pass and refactor resolve rule registration by @yaoyaoding in #24
  • [Docs] Change github repo url by @yaoyaoding in #25
  • [Operator] Add float16 precision matrix multiplication by @yaoyaoding in #26
  • [Docs] Add a guide on operator resolving by @yaoyaoding in #27
  • [CI] Avoid interactive query in apt installation of tzdata by @yaoyaoding in #28
  • [Docs] Add sub-graph rewrite tutorial by @yaoyaoding in #29
  • [Tensor] Implement dlpack tensor exchange protocol by @yaoyaoding in #30
  • [Frontend] Add a torch dynamo backend based on hidet "onnx2hidet" by @yaoyaoding in #31
  • [Frontend] Add hidet dynamo backend based on torch.fx by @yaoyaoding in #32
  • [Frontend] Make onnx dependency optional by @yaoyaoding in #33
  • [Frontend] Add more operator mappings for pytorch frontend by @yaoyaoding in #34
  • [Operator] Fix a bug in take (index can be in [-r, r-1]) by @yaoyaoding in #35
  • [Frontend] Add an option to print correctness report in hidet backend of torch dynamo by @yaoyaoding in #36
  • [IR] Refactor the attribute 'dtype' of hidet.Tensor from 'str' to 'DataType' by @yaoyaoding in #37
  • [Operator] Add a constant operator and deprecate the manually implemented fill cuda kernel by @yaoyaoding in #38
  • [ONNX] Add reduce l2 onnx operator by @yaoyaoding in #40
  • [CLI] Add the 'hidet' command line interface by @yaoyaoding in #39
  • [Codegen] Add explicit conversion type for float16 by @yaoyaoding in #41
  • [Docs] Add the documentation for 'hidet' backend of PyTorch dynamo by @yaoyaoding in #42
  • [Runtime] Refactor the cuda runtime api used in hidet by @yaoyaoding in #43
  • [Testing] Remove redundant models in hidet.testing by @yaoyaoding in #44
  • [Runtime][IR] Refactor the device attribute of Tensor object by @yaoyaoding in #45
  • [Array-API][Phase 0] Adding the declarations of missing operators in Array API by @yaoyaoding in #46
  • [Operator] Add arange and linspace operator by @yaoyaoding in #47
  • [Bug] Fix a bug related to memset by @yaoyaoding in #49
  • [Docs] Add and update documentation by @yaoyaoding in #48
  • [Docs][Operator] Add more pytorch operator bindings and docs by @yaoyaoding in #50
  • [License][Docs] Add license header and update README.md by @yaoyaoding in #51
  • [Docs] Update docs by @yaoyaoding in #52
  • [IR] Add LaunchKernelStmt by @yaoyaoding in #53
  • [Operator] Add some torch operator mapping by @yaoyaoding in #54
  • [Bug] Fix a bug in hidet dynamo backend when cuda graph is not used by @yaoyaoding in #55
  • [Dynamo] Allow torch dynamo backend to accept non-contiguous input by @yaoyaoding in #56
  • [Graph] Add to_cuda() for Module class by @hjjq in #57
  • [Bug] Fix a bug where the shared memory becomes zero in LaunchKernelStmt by @yaoyaoding in #58
  • [Release] Prepare to release the first version of hidet to public by @yaoyaoding in #59
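
One behavioral detail worth noting from the list above: #35 makes take accept negative indices in [-r, r-1], where r is the size of the indexed axis. numpy's take illustrates the same convention, which is assumed here to match hidet's:

```python
import numpy as np

# take with negative indices: an index i in [-r, -1] refers to element r + i,
# so -1 is the last element along the axis (numpy convention, which #35
# brings hidet's take in line with).
a = np.array([10, 20, 30, 40])   # r = 4, so valid indices are [-4, 3]
print(np.take(a, 0))             # first element
print(np.take(a, -1))            # last element
print(np.take(a, [1, -2]))       # mixed positive/negative indices
```

Indices outside [-r, r-1] remain out of bounds under this convention.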

New Contributors

  • @hjjq made their first contribution in #8

Full Changelog: https://github.com/hidet-org/hidet/commits/v0.1