
Add vinds for internal use of inds without Tuple specialization #335

Merged
merged 5 commits into master from feature/vinds on Mar 11, 2025

Conversation

@mofeing (Member) commented Mar 5, 2025

This PR adds the vinds function, which is just the old inds: it returns the Vector{Symbol} underlying a Tensor.

Keep in mind that vinds is dangerous if used externally, because it allows mutating the indices of a Tensor; that's why we made inds return a Tuple of Symbols.

Still, vinds is required for performance, since using Tuple can make the compiler specialize too much.
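
For context, a minimal sketch of the relationship between the two accessors (hypothetical, simplified definitions; the actual Tenet code may differ):

# Hypothetical, simplified sketch; not the actual Tenet definitions.
struct Tensor{T,N,A<:AbstractArray{T,N}}
    data::A
    inds::Vector{Symbol}
end

Base.parent(t::Tensor) = t.data

# Internal accessor: returns the underlying Vector{Symbol} directly.
# Fast and avoids Tuple specialization, but mutating the result
# mutates the Tensor's indices, so it is for internal use only.
vinds(t::Tensor) = t.inds

# Public accessor: wraps in a Tuple so callers cannot mutate.
inds(t::Tensor) = Tuple(vinds(t))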

@mofeing mofeing requested a review from jofrevalles March 5, 2025 14:09
@jofrevalles (Member) left a comment


Thanks @mofeing, could you show us an example where this has a performance benefit? I am having a hard time seeing the utility of this. Maybe one example with BenchmarkTools is enough... :)

@mofeing (Member, Author) commented Mar 10, 2025

@jofrevalles okay, so I found an example of where this matters.

on size(::Tensor, ::Symbol)

(actually, this comparison also includes a small fix to dim, which affects size(::Tensor, ::Symbol))

before

Note that the compiler cannot type-infer the result of dim(::Tensor, ::Symbol), so it just returns Any, and this propagates to size(::Tensor, ::Symbol):

julia> a = Tensor(rand(2,2), (:i, :j))
2×2 Tensor{Float64, 2, Matrix{Float64}}:
 0.471688   0.627282
 0.0198292  0.371403

julia> @code_warntype size(a, :i)
MethodInstance for size(::Tensor{Float64, 2, Matrix{Float64}}, ::Symbol)
  from size(t::Tensor, i) @ Tenet ~/.julia/packages/Tenet/AEf3W/src/Tensor.jl:201
Arguments
  #self#::Core.Const(size)
  t::Tensor{Float64, 2, Matrix{Float64}}
  i::Symbol
Body::Any
1 ─ %1 = Tenet.size::Core.Const(size)
│   %2 = Tenet.parent::Core.Const(parent)
│   %3 = (%2)(t)::Matrix{Float64}
│   %4 = Tenet.dim(t, i)::Any
│   %5 = (%1)(%3, %4)::Any
└──      return %5

after

Here, dim returns Union{Nothing, Int}, which is type-unstable but stable enough that the compiler knows only 2 types can be returned, so it can insert a branch and specialize each case (union splitting; see the sketch after the output below). So even if it's type-unstable, as long as the instability stays this small, the code is fast and there are no dynamic dispatches.

As a consequence, size(::Tensor, ::Symbol) is guaranteed to always return an Int or throw, and the throwing path sits outside the type-inference result, so it doesn't affect it.

julia> @code_warntype size(a, :i)
MethodInstance for size(::Tensor{Float64, 2, Matrix{Float64}}, ::Symbol)
  from size(t::Tensor, i) @ Tenet ~/.julia/packages/Tenet/6kXCN/src/Tensor.jl:203
Arguments
  #self#::Core.Const(size)
  t::Tensor{Float64, 2, Matrix{Float64}}
  i::Symbol
Body::Int64
1 ─ %1 = Tenet.size::Core.Const(size)
│   %2 = Tenet.parent::Core.Const(parent)
│   %3 = (%2)(t)::Matrix{Float64}
│   %4 = Tenet.dim(t, i)::Union{Nothing, Int64}
│   %5 = (%1)(%3, %4)::Int64
└──      return %5
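
To make the union-splitting point concrete, here is a rough standalone sketch of the pattern (size_by_symbol and the lookup are illustrative names, not the actual Tenet internals):

# findfirst over a Vector{Symbol} returns Union{Nothing, Int},
# mirroring what dim(::Tensor, ::Symbol) returns after this PR.
function size_by_symbol(data::AbstractArray, inds::Vector{Symbol}, i::Symbol)
    d = findfirst(==(i), inds)   # ::Union{Nothing, Int}
    # Only two possible types, so the compiler splits the union into an
    # explicit branch; each arm is fully typed, with no dynamic dispatch.
    d === nothing && throw(ArgumentError("index $i not found"))
    return size(data, d)         # inferred as Int
end

With this, size_by_symbol(rand(2, 2), [:i, :j], :j) infers as Int64 under @code_warntype, just like the "after" output above.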

on contract!(c::Tensor, a::Tensor, b::Tensor)

The text in red means that the type is highly unstable (so no union-splitting optimization can be performed) and thus there is a dynamic dispatch.

In this case, both the "before" and "after" implementations have no problem with the return type (they just return c), but the "before" impl has type-inference issues inside and, as a consequence, performs dynamic dispatch internally.

The result is clear: this completely fixes the type inference when the c Tensor is known.

before

[Screenshot 2025-03-10 at 12:47:09]

after

[Screenshot 2025-03-10 at 12:46:24]
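
As for the BenchmarkTools example @jofrevalles asked for, a quick check along these lines should show the difference (illustrative commands only; no numbers measured here):

julia> using Tenet, BenchmarkTools

julia> a = Tensor(rand(2, 2), (:i, :j));

julia> @btime size($a, :i);  # before: the Any return likely shows allocations
                             # after: should be allocation-free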

@mofeing mofeing merged commit 20ec49d into master Mar 11, 2025
6 of 8 checks passed
@mofeing mofeing deleted the feature/vinds branch March 11, 2025 08:41