
Address comments of Discourse #600


Merged
3 commits merged into main from ChrisRackauckas-patch-3 on Apr 26, 2025

Conversation

@ChrisRackauckas ChrisRackauckas merged commit da17d44 into main Apr 26, 2025
1 check was pending
@ChrisRackauckas ChrisRackauckas deleted the ChrisRackauckas-patch-3 branch April 26, 2025 13:49
Comment on lines +5 to +11
* Offloading: offloading takes a CPU-based problem and automatically transforms it into a
GPU-based problem in the background, and returns the solution on CPU. Thus using
offloading requires no change on the part of the user other than to choose an offloading
solver.
* Array type interface: the array type interface requires that the user defines the
`LinearProblem` using an `AbstractGPUArray` type and chooses an appropriate solver
(or uses the default solver). The solution will then be returned as a GPU array type.

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
* Offloading: offloading takes a CPU-based problem and automatically transforms it into a
GPU-based problem in the background, and returns the solution on CPU. Thus using
offloading requires no change on the part of the user other than to choose an offloading
solver.
* Array type interface: the array type interface requires that the user defines the
`LinearProblem` using an `AbstractGPUArray` type and chooses an appropriate solver
(or uses the default solver). The solution will then be returned as a GPU array type.
- Offloading: offloading takes a CPU-based problem and automatically transforms it into a
GPU-based problem in the background, and returns the solution on CPU. Thus using
offloading requires no change on the part of the user other than to choose an offloading
solver.
- Array type interface: the array type interface requires that the user defines the
`LinearProblem` using an `AbstractGPUArray` type and chooses an appropriate solver
(or uses the default solver). The solution will then be returned as a GPU array type.
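As a rough sketch of the two approaches described in the quoted lines (assuming a CUDA-capable GPU; `CudaOffloadFactorization` is used here as an illustrative offloading solver, and `cu` conversion for the array type interface):

```julia
# Sketch only: assumes LinearSolve.jl and CUDA.jl are installed and a CUDA GPU is available.
using LinearSolve, CUDA

A = rand(1000, 1000)
b = rand(1000)

# Offloading: the problem is defined with ordinary CPU arrays; the offloading
# solver moves the data to the GPU internally and the solution comes back on the CPU.
prob = LinearProblem(A, b)
sol_offload = solve(prob, CudaOffloadFactorization())

# Array type interface: the problem is defined with GPU arrays up front,
# and the solution is returned as a GPU array.
prob_gpu = LinearProblem(cu(A), cu(b))
sol_gpu = solve(prob_gpu)
```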

sections we will demonstrate how to use each of the approaches.

!!! warn


[JuliaFormatter] reported by reviewdog 🐶

Suggested change

move things to CPU on command.

!!! warn


[JuliaFormatter] reported by reviewdog 🐶

Suggested change

```

!!! note


[JuliaFormatter] reported by reviewdog 🐶

Suggested change

Comment on lines +64 to +68
* [BandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BandedMatrices.jl)
* [BlockDiagonals.jl](https://github.com/JuliaArrays/BlockDiagonals.jl)
* [CUDA.jl](https://cuda.juliagpu.org/stable/) (CUDA GPU-based dense and sparse matrices)
* [FastAlmostBandedMatrices.jl](https://github.com/SciML/FastAlmostBandedMatrices.jl)
* [Metal.jl](https://metal.juliagpu.org/stable/) (Apple M-series GPU-based dense matrices)

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
* [BandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BandedMatrices.jl)
* [BlockDiagonals.jl](https://github.com/JuliaArrays/BlockDiagonals.jl)
* [CUDA.jl](https://cuda.juliagpu.org/stable/) (CUDA GPU-based dense and sparse matrices)
* [FastAlmostBandedMatrices.jl](https://github.com/SciML/FastAlmostBandedMatrices.jl)
* [Metal.jl](https://metal.juliagpu.org/stable/) (Apple M-series GPU-based dense matrices)
- [BandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BandedMatrices.jl)
- [BlockDiagonals.jl](https://github.com/JuliaArrays/BlockDiagonals.jl)
- [CUDA.jl](https://cuda.juliagpu.org/stable/) (CUDA GPU-based dense and sparse matrices)
- [FastAlmostBandedMatrices.jl](https://github.com/SciML/FastAlmostBandedMatrices.jl)
- [Metal.jl](https://metal.juliagpu.org/stable/) (Apple M-series GPU-based dense matrices)
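For illustration, any of these array types can be passed straight through the usual interface; a minimal sketch assuming BandedMatrices.jl is installed (`brand` builds a random banded matrix):

```julia
using LinearSolve, BandedMatrices

# Random 100×100 banded matrix with 1 sub-diagonal and 2 super-diagonals.
A = brand(100, 100, 1, 2)
b = rand(100)

prob = LinearProblem(A, b)
sol = solve(prob)   # the default solver choice dispatches on the banded matrix type
```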

@@ -117,7 +117,7 @@ function SciMLBase.solve!(cache::LinearCache, alg::LUFactorization; kwargs...)
end
cache.cacheval = fact

if !LinearAlgebra.issuccess(fact)
if hasmethod(LinearAlgebra.issuccess, Tuple{typeof(fact)}) && !LinearAlgebra.issuccess(fact)

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
if hasmethod(LinearAlgebra.issuccess, Tuple{typeof(fact)}) && !LinearAlgebra.issuccess(fact)
if hasmethod(LinearAlgebra.issuccess, Tuple{typeof(fact)}) &&
!LinearAlgebra.issuccess(fact)
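For context, a standalone sketch of what the guarded check does (the assumption being that not every factorization type defines a method for `LinearAlgebra.issuccess`, so the `hasmethod` test avoids a `MethodError`):

```julia
using LinearAlgebra

# check = false so a singular matrix does not throw during factorization.
fact = lu(rand(4, 4); check = false)

# Only query issuccess when the factorization type actually supports it.
if hasmethod(LinearAlgebra.issuccess, Tuple{typeof(fact)}) &&
   !LinearAlgebra.issuccess(fact)
    @warn "factorization did not succeed"
end
```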

@WalterMadelim

This link.
The first sentence under this link is not comprehensive.
To specify the model Au = b, I need to provide an A matrix and a b vector, and then I call prob = LinearProblem(A, b).
What "Type" should I choose? Should the user specify a concrete type, and would this be beneficial?
The matrix A can have various types:

julia> A1 = rand(4, 4)
4×4 Matrix{Float64}:
 0.639008   0.0695166  0.157621  0.112311
 0.0444344  0.840106   0.461039  0.140124
 0.646453   0.626477   0.655603  0.717909
 0.649285   0.151508   0.165179  0.411041

julia> A2 = Symmetric(rand(4, 4))
4×4 Symmetric{Float64, Matrix{Float64}}:
 0.904668  0.484867  0.860761   0.72931
 0.484867  0.945613  0.648764   0.996674
 0.860761  0.648764  0.101596   0.0521238
 0.72931   0.996674  0.0521238  0.00692615

julia> A3 = SparseArrays.sprand(4, 4, .2)
4×4 SparseMatrixCSC{Float64, Int64} with 2 stored entries:
  ⋅    ⋅         ⋅         ⋅
  ⋅   0.729519  0.680267   ⋅
  ⋅    ⋅         ⋅         ⋅
  ⋅    ⋅         ⋅         ⋅

julia> A4 = Symmetric(SparseArrays.sprand(4, 4, .2))
4×4 Symmetric{Float64, SparseMatrixCSC{Float64, Int64}}:
 0.892644   ⋅         ⋅         ⋅
  ⋅        0.702673  0.381419  0.745596
  ⋅        0.381419   ⋅         ⋅
  ⋅        0.745596   ⋅         ⋅

Will typeof(A4) be preferable to typeof(A1)?
Similarly, for the vector:

julia> b1 = rand(4)
4-element Vector{Float64}:
 0.6830613731051517
 0.5774420692994588
 0.2954095531849277
 0.7373953570499356

julia> b2 = SparseArrays.sprand(4, .8)
4-element SparseVector{Float64, Int64} with 4 stored entries:
  [1]  =  0.221711
  [2]  =  0.904912
  [3]  =  0.68406
  [4]  =  0.740414

Will the LinearSolve interface benefit from typeof(b2) rather than typeof(b1)?

And secondly, I don't quite understand the first sentence under this link.

There is no difference in the interface for using LinearSolve.jl on sparse and structured matrices.

What does "no difference" mean? Does it suggest that the user can write any code freely (e.g. the aforementioned varied types) as long as it won't ERROR, and the performance is the same?

Similerly structure matrix types, like banded matrices...

The word is "Similarly"?
Will specifying the special structure be beneficial?
