typo: fix a bunch of typos. (#862)
didier-durand authored Feb 18, 2025
1 parent 6ec3bae commit fbb3135
Showing 4 changed files with 11 additions and 11 deletions.
6 changes: 3 additions & 3 deletions flashinfer/activation.py
@@ -104,7 +104,7 @@ def silu_and_mul(input: torch.Tensor, out: torch.Tensor = None) -> torch.Tensor:
  Input tensor, shape (..., 2 * hidden_size).
  out: Optional[torch.Tensor]
- The the output tensor, if specified, the kernel will update this tensor inplace.
+ The output tensor, if specified, the kernel will update this tensor inplace.
  Returns
  -------
@@ -139,7 +139,7 @@ def gelu_tanh_and_mul(input: torch.Tensor, out: torch.Tensor = None) -> torch.Tensor:
  Input tensor, shape (..., 2 * hidden_size).
  out: Optional[torch.Tensor]
- The the output tensor, if specified, the kernel will update this tensor inplace.
+ The output tensor, if specified, the kernel will update this tensor inplace.
  Returns
  -------
@@ -171,7 +171,7 @@ def gelu_and_mul(input: torch.Tensor, out: torch.Tensor = None) -> torch.Tensor:
  Input tensor, shape (..., 2 * hidden_size).
  out: Optional[torch.Tensor]
- The the output tensor, if specified, the kernel will update this tensor inplace.
+ The output tensor, if specified, the kernel will update this tensor inplace.
  Returns
  -------
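
A minimal usage sketch of the optional ``out`` tensor these docstrings describe (not part of the commit): it assumes FlashInfer and a CUDA device are available, and that the result of silu_and_mul has shape (..., hidden_size), which the hunks above do not state.

# Sketch of the optional `out` tensor documented above.
# Assumption: the output shape is (..., hidden_size); only the input shape
# (..., 2 * hidden_size) is shown in this diff.
import torch
from flashinfer.activation import silu_and_mul

hidden_size = 4096
x = torch.randn(8, 2 * hidden_size, dtype=torch.float16, device="cuda")

y = silu_and_mul(x)                       # no `out`: a new tensor is returned

out = torch.empty(8, hidden_size, dtype=torch.float16, device="cuda")
silu_and_mul(x, out=out)                  # with `out`: result is written in place
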
4 changes: 2 additions & 2 deletions flashinfer/norm.py
@@ -61,7 +61,7 @@ def rmsnorm(
  eps: float
  Epsilon for numerical stability.
  out: Optional[torch.Tensor]
- The the output tensor, if specified, the kernel will update this tensor inplace.
+ The output tensor, if specified, the kernel will update this tensor inplace.
  Returns
  -------
@@ -144,7 +144,7 @@ def gemma_rmsnorm(
  eps: float
  Epsilon for numerical stability.
  out: Optional[torch.Tensor]
- The the output tensor, if specified, the kernel will update this tensor inplace.
+ The output tensor, if specified, the kernel will update this tensor inplace.
  Returns
  -------
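
The norm kernels above follow the same in-place ``out`` convention; the sketch below assumes the usual rmsnorm arguments (a per-channel weight tensor and an explicit eps), which this hunk does not show.

# Hedged sketch of rmsnorm's optional `out` tensor. The `weight` argument is an
# assumption; only `eps` and `out` appear in the diff above.
import torch
from flashinfer.norm import rmsnorm

hidden_size = 4096
x = torch.randn(8, hidden_size, dtype=torch.float16, device="cuda")
w = torch.ones(hidden_size, dtype=torch.float16, device="cuda")

out = torch.empty_like(x)
rmsnorm(x, w, eps=1e-6, out=out)          # result is written into `out` in place
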
8 changes: 4 additions & 4 deletions flashinfer/page.py
@@ -180,9 +180,9 @@ def get_batch_indices_positions(
  Returns
  -------
  batch_indices: torch.Tensor
- The batch indices of the each entry in the ragged tensor, shape: ``[nnz]``.
+ The batch indices of each entry in the ragged tensor, shape: ``[nnz]``.
  positions: torch.Tensor
- The positions of the each entry in the ragged tensor, shape: ``[nnz]``.
+ The positions of each entry in the ragged tensor, shape: ``[nnz]``.
  Example
  -------
@@ -201,7 +201,7 @@ def get_batch_indices_positions(
  ----
  This function is similar to `CSR2COO <https://docs.nvidia.com/cuda/cusparse/#csr2coo>`_
  conversion in cuSPARSE library, with the difference that we are converting from a ragged
- tensor (which don't require a column indices array) to a COO format.
+ tensor (which doesn't require a column indices array) to a COO format.
  See Also
  --------
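
The CSR2COO analogy in the note can be illustrated with a plain-PyTorch sketch (this is not the FlashInfer kernel itself): an indptr array describing a ragged tensor is expanded into per-entry batch indices and positions. Variable names are illustrative, and the real kernel may define positions relative to the full sequence rather than as shown here.

# Reference sketch of the ragged-to-COO expansion described in the note above.
import torch

indptr = torch.tensor([0, 3, 5, 9])      # 3 requests with 3, 2 and 4 entries
lengths = indptr[1:] - indptr[:-1]       # entries per request
nnz = int(indptr[-1])

batch_indices = torch.repeat_interleave(torch.arange(len(lengths)), lengths)
positions = torch.arange(nnz) - torch.repeat_interleave(indptr[:-1], lengths)

print(batch_indices)                     # tensor([0, 0, 0, 1, 1, 2, 2, 2, 2])
print(positions)                         # tensor([0, 1, 2, 0, 1, 0, 1, 2, 3])
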
@@ -349,7 +349,7 @@ def append_paged_kv_cache(
  Note
  ----
- The function assumes that the space for appended k/v have already been allocated,
+ The function assumes that the space for appended k/v has already been allocated,
  which means :attr:`kv_indices`, :attr:`kv_indptr`, :attr:`kv_last_page_len` has
  incorporated appended k/v.
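
The allocation requirement in the append_paged_kv_cache note boils down to page bookkeeping done before the call; the helper below is a hypothetical illustration of that bookkeeping, not a FlashInfer API.

# Hypothetical helper showing the pre-allocation the note requires: kv_indices,
# kv_indptr and kv_last_page_len must already account for the appended tokens.
import math

def pages_needed(total_kv_len: int, page_size: int) -> tuple[int, int]:
    """Return (number of pages, entries used in the last page)."""
    num_pages = math.ceil(total_kv_len / page_size)
    return num_pages, total_kv_len - (num_pages - 1) * page_size

page_size = 16
existing_len, appended_len = 45, 7       # tokens already cached / tokens to append

num_pages, last_page_len = pages_needed(existing_len + appended_len, page_size)
# num_pages == 4 and last_page_len == 4: this request's kv_indices must list
# 4 pages and its kv_last_page_len entry must already be 4 before the kernel runs.
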
4 changes: 2 additions & 2 deletions flashinfer/prefill.py
@@ -991,7 +991,7 @@ class BatchPrefillWithPagedKVCacheWrapper:
  Note
  ----
  To accelerate computation, FlashInfer's batch prefill/append attention operators
- creates some auxiliary data structures, these data structures can be reused across
+ create some auxiliary data structures, these data structures can be reused across
  multiple prefill/append attention calls (e.g. different Transformer layers). This
  wrapper class manages the lifecycle of these data structures.
  """
@@ -1815,7 +1815,7 @@ class BatchPrefillWithRaggedKVCacheWrapper:
  Note
  ----
  To accelerate computation, FlashInfer's batch prefill/append attention operators
- creates some auxiliary data structures, these data structures can be reused across
+ create some auxiliary data structures, these data structures can be reused across
  multiple prefill/append attention calls (e.g. different Transformer layers). This
  wrapper class manages the lifecycle of these data structures.
  """
