
chore: bump CUDA to v6 #82
Triggered via pull request April 16, 2026 10:49
@gdalle synchronize #48 (gd/cubump)
Status: Failure
Total duration: 2m 8s

Docs.yml

on: pull_request
Documentation (2m 5s)

Annotations

5 errors and 1 warning
Documentation
Process completed with exit code 1.
Documentation: ../../../.julia/packages/Documenter/AXNMp/src/utilities/utilities.jl#L47
failed to run `@example` block in docs/src/tutorial.md:129-131

```@example tutorial
objective_value(x_jump, milp)
```

exception =
UndefVarError: `x_jump` not defined in `Main.__atexample__named__tutorial`
Suggestion: check for spelling errors or missing imports.
Stacktrace:
  [1] top-level scope
    @ tutorial.md:130
  [2] eval(m::Module, e::Any)
    @ Core ./boot.jl:489
  [3] #61
    @ ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:879 [inlined]
  [4] cd(f::Documenter.var"#61#62"{Module, Expr}, dir::String)
    @ Base.Filesystem ./file.jl:112
  [5] (::Documenter.var"#59#60"{Documenter.Page, Module, Expr})()
    @ Documenter ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:878
  [6] (::IOCapture.var"#12#13"{Type{InterruptException}, Documenter.var"#59#60"{Documenter.Page, Module, Expr}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}})()
    @ IOCapture ~/.julia/packages/IOCapture/MR051/src/IOCapture.jl:170
  [7] with_logstate(f::IOCapture.var"#12#13"{Type{InterruptException}, Documenter.var"#59#60"{Documenter.Page, Module, Expr}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}}, logstate::Base.CoreLogging.LogState)
    @ Base.CoreLogging ./logging/logging.jl:542
  [8] with_logger(f::Function, logger::Base.CoreLogging.ConsoleLogger)
    @ Base.CoreLogging ./logging/logging.jl:653
  [9] capture(f::Documenter.var"#59#60"{Documenter.Page, Module, Expr}; rethrow::Type, color::Bool, passthrough::Bool, capture_buffer::IOBuffer, io_context::Vector{Any})
    @ IOCapture ~/.julia/packages/IOCapture/MR051/src/IOCapture.jl:167
 [10] runner(::Type{Documenter.Expanders.ExampleBlocks}, node::MarkdownAST.Node{Nothing}, page::Documenter.Page, doc::Documenter.Document)
    @ Documenter ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:877
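This `UndefVarError` is downstream of the failure in the next annotation: Documenter evaluates every `@example tutorial` block in the single shared sandbox module `Main.__atexample__named__tutorial`, so once the block that should assign `x_jump` throws, every later block that reads it fails too. A minimal sketch of that mechanism, independent of CoolPDLP (the module and variable names mirror the log; the `error(...)` stand-in is illustrative):

```julia
# Documenter runs all `@example tutorial` blocks in one shared module:
m = Module(:__atexample__named__tutorial)

# The block at docs/src/tutorial.md:118-127 throws before `x_jump` is assigned
# (stand-in error here; the real one is the scalar-indexing failure below):
try
    Core.eval(m, :(x_jump = error("Scalar indexing is disallowed.")))
catch
end

# So the block at tutorial.md:129-131 reads a global that was never set:
try
    Core.eval(m, :(x_jump))
catch err
    showerror(stdout, err)   # UndefVarError: `x_jump` not defined
end
```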
Documentation: ../../../.julia/packages/Documenter/AXNMp/src/utilities/utilities.jl#L47
failed to run `@example` block in docs/src/tutorial.md:118-127

```@example tutorial
model = JuMP.read_from_file(path; format = MOI.FileFormats.FORMAT_MPS)
JuMP.set_optimizer(model, CoolPDLP.Optimizer)
JuMP.set_silent(model)
JuMP.set_attribute(model, "termination_reltol", 1.0e-6)
JuMP.set_attribute(model, "matrix_type", GPUSparseMatrixCSR)
JuMP.set_attribute(model, "backend", JLBackend())
JuMP.optimize!(model)
x_jump = JuMP.value.(JuMP.all_variables(model))
```

exception =
Scalar indexing is disallowed.
Invocation of setindex! resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:44
  [2] errorscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:151
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:124
  [4] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:112
  [5] setindex!
    @ ~/.julia/packages/GPUArrays/ZRJ3p/src/host/indexing.jl:58 [inlined]
  [6] randn!(rng::StableRNGs.LehmerRNG, A::JLArrays.JLArray{Float64, 1})
    @ Random /opt/hostedtoolcache/julia/1.12.6/x64/share/julia/stdlib/v1.12/Random/src/normal.jl:230
  [7] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/utils/linalg.jl:101 [inlined]
  [8] spectral_norm(K::GPUSparseMatrixCSR{Float64, Int64, JLArrays.JLArray{Float64, 1}, JLArrays.JLArray{Int64, 1}}, Kᵀ::GPUSparseMatrixCSR{Float64, Int64, JLArrays.JLArray{Float64, 1}, JLArrays.JLArray{Int64, 1}}; kwargs::@kwargs{})
    @ CoolPDLP ./none:0
  [9] spectral_norm
    @ ./none:-1 [inlined]
 [10] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/components/step_size.jl:25 [inlined]
 [11] fixed_stepsize
    @ ./none:0 [inlined]
 [12] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/algorithms/pdlp.jl:55 [inlined]
 [13] initialize(milp::MILP{Float64, JLArrays.JLArray{Float64, 1}, GPUSparseMatrixCSR{Float64, Int64, JLArrays.JLArray{Float64, 1}, JLArrays.JLArray{Int64, 1}}, GPUSparseMatrixCSR{Float64, Int64, JLArrays.JLArray{Float64, 1}, JLArrays.JLArray{Int64, 1}}, JLArrays.JLArray{Bool, 1}}, sol::PrimalDualSolution{Float64, JLArrays.JLArray{Float64, 1}}, algo::CoolPDLP.Algorithm{:PDLP, Float64, Int64, GPUSparseMatrixCSR, JLArrays.JLBackend}; starting_time::Float64)
    @ CoolPDLP ./none:0
 [14] initialize
    @ ./none:-1 [inlined]
 [15] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/algorithms/common.jl:193 [inlined]
 [16] solve(milp_init_cpu::MILP{Float64, Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Bool}}, sol_init_cpu::PrimalDualSolution{Float64, Vector{Float64}}, algo::CoolPDLP.Algorithm{:PDLP, Float64, Int64, GPUSparseMatrixCSR, JLArrays.JLBackend})
    @ CoolPDLP ./none:0
 [17] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/algorithms/common.jl:208 [inlined]
 [18] solve(milp_init_cpu::MILP{Float64, Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Bool}}, algo::CoolPDLP.Algorithm{:PDLP, Float64, Int64, GPUSparseMatrixCSR, JLArrays.JLBackend})
    @ CoolPDLP ./none:0
 [19] optimize!(dest::CoolPDLP.Optimizer{Float64}, fcache::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ObjectiveContainer{Float64}, MathOptInterface.Utilities.VariablesContainer{Float64}, MathOptInterface.Utilities.MatrixOfConstraints{
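The trace pinpoints the root cause: `randn!` at frame [6] falls back to the generic `Random` stdlib implementation, which writes one element at a time into the `JLArray`, and frame [5]'s `setindex!` is exactly what GPUArraysCore rejects. A minimal sketch of the failure mode and of the two workarounds the error message itself names, assuming only JLArrays and StableRNGs as in the trace (not CoolPDLP's internals):

```julia
using JLArrays, Random, StableRNGs
using GPUArraysCore: @allowscalar

rng = StableRNG(42)              # a LehmerRNG, i.e. a CPU-side RNG
x = JLArray(zeros(Float64, 8))   # JLArrays' CPU-backed stand-in for a GPU array

# randn!(rng, x) fills the array element by element via setindex!,
# which GPUArraysCore forbids on device arrays; this is the error above:
# randn!(rng, x)                 # ERROR: Scalar indexing is disallowed.

# Workaround 1: explicitly permit scalar iteration for this call (correct, slow):
@allowscalar randn!(rng, x)

# Workaround 2: draw the numbers on the CPU, then copy to the device in bulk:
copyto!(x, randn(rng, Float64, length(x)))
```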
Documentation: ../../../.julia/packages/Documenter/AXNMp/src/utilities/utilities.jl#L47
failed to run `@example` block in docs/src/tutorial.md:110-112

```@example tutorial
objective_value(Array(sol_gpu.x), milp)
```

exception =
UndefVarError: `sol_gpu` not defined in `Main.__atexample__named__tutorial`
Suggestion: check for spelling errors or missing imports.
Stacktrace:
  [1] top-level scope
    @ tutorial.md:111
  [2] eval(m::Module, e::Any)
    @ Core ./boot.jl:489
  [3] #61
    @ ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:879 [inlined]
  [4] cd(f::Documenter.var"#61#62"{Module, Expr}, dir::String)
    @ Base.Filesystem ./file.jl:112
  [5] (::Documenter.var"#59#60"{Documenter.Page, Module, Expr})()
    @ Documenter ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:878
  [6] (::IOCapture.var"#12#13"{Type{InterruptException}, Documenter.var"#59#60"{Documenter.Page, Module, Expr}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}})()
    @ IOCapture ~/.julia/packages/IOCapture/MR051/src/IOCapture.jl:170
  [7] with_logstate(f::IOCapture.var"#12#13"{Type{InterruptException}, Documenter.var"#59#60"{Documenter.Page, Module, Expr}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}, IOContext{Base.PipeEndpoint}}, logstate::Base.CoreLogging.LogState)
    @ Base.CoreLogging ./logging/logging.jl:542
  [8] with_logger(f::Function, logger::Base.CoreLogging.ConsoleLogger)
    @ Base.CoreLogging ./logging/logging.jl:653
  [9] capture(f::Documenter.var"#59#60"{Documenter.Page, Module, Expr}; rethrow::Type, color::Bool, passthrough::Bool, capture_buffer::IOBuffer, io_context::Vector{Any})
    @ IOCapture ~/.julia/packages/IOCapture/MR051/src/IOCapture.jl:167
 [10] runner(::Type{Documenter.Expanders.ExampleBlocks}, node::MarkdownAST.Node{Nothing}, page::Documenter.Page, doc::Documenter.Document)
    @ Documenter ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:877
Documentation: ../../../.julia/packages/Documenter/AXNMp/src/utilities/utilities.jl#L47
failed to run `@example` block in docs/src/tutorial.md:103-106

```@example tutorial
sol_gpu, stats_gpu = solve(milp, algo_gpu)
sol_gpu.x
```

exception =
Scalar indexing is disallowed.
Invocation of setindex! resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:44
  [2] errorscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:151
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:124
  [4] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:112
  [5] setindex!
    @ ~/.julia/packages/GPUArrays/ZRJ3p/src/host/indexing.jl:58 [inlined]
  [6] randn!(rng::StableRNGs.LehmerRNG, A::JLArrays.JLArray{Float32, 1})
    @ Random /opt/hostedtoolcache/julia/1.12.6/x64/share/julia/stdlib/v1.12/Random/src/normal.jl:230
  [7] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/utils/linalg.jl:101 [inlined]
  [8] spectral_norm(K::GPUSparseMatrixCSR{Float32, Int32, JLArrays.JLArray{Float32, 1}, JLArrays.JLArray{Int32, 1}}, Kᵀ::GPUSparseMatrixCSR{Float32, Int32, JLArrays.JLArray{Float32, 1}, JLArrays.JLArray{Int32, 1}}; kwargs::@kwargs{})
    @ CoolPDLP ./none:0
  [9] spectral_norm
    @ ./none:-1 [inlined]
 [10] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/components/step_size.jl:25 [inlined]
 [11] fixed_stepsize
    @ ./none:0 [inlined]
 [12] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/algorithms/pdlp.jl:55 [inlined]
 [13] initialize(milp::MILP{Float32, JLArrays.JLArray{Float32, 1}, GPUSparseMatrixCSR{Float32, Int32, JLArrays.JLArray{Float32, 1}, JLArrays.JLArray{Int32, 1}}, GPUSparseMatrixCSR{Float32, Int32, JLArrays.JLArray{Float32, 1}, JLArrays.JLArray{Int32, 1}}, JLArrays.JLArray{Bool, 1}}, sol::PrimalDualSolution{Float32, JLArrays.JLArray{Float32, 1}}, algo::CoolPDLP.Algorithm{:PDLP, Float32, Int32, GPUSparseMatrixCSR, JLArrays.JLBackend}; starting_time::Float64)
    @ CoolPDLP ./none:0
 [14] initialize
    @ ./none:-1 [inlined]
 [15] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/algorithms/common.jl:193 [inlined]
 [16] solve(milp_init_cpu::MILP{Float64, Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Bool}}, sol_init_cpu::PrimalDualSolution{Float64, Vector{Float64}}, algo::CoolPDLP.Algorithm{:PDLP, Float32, Int32, GPUSparseMatrixCSR, JLArrays.JLBackend})
    @ CoolPDLP ./none:0
 [17] macro expansion
    @ ~/work/CoolPDLP.jl/CoolPDLP.jl/src/algorithms/common.jl:208 [inlined]
 [18] solve(milp_init_cpu::MILP{Float64, Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Bool}}, algo::CoolPDLP.Algorithm{:PDLP, Float32, Int32, GPUSparseMatrixCSR, JLArrays.JLBackend})
    @ CoolPDLP ./none:0
 [19] top-level scope
    @ tutorial.md:104
 [20] eval(m::Module, e::Any)
    @ Core ./boot.jl:489
 [21] #61
    @ ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:879 [inlined]
 [22] cd(f::Documenter.var"#61#62"{Module, Expr}, dir::String)
    @ Base.Filesystem ./file.jl:112
 [23] (::Documenter.var"#59#60"{Documenter.Page, Module, Expr})()
    @ Documenter ~/.julia/packages/Documenter/AXNMp/src/expander_pipeline.jl:878
 [24] (::IOCapture.var"#12#13"{Type{InterruptException}, Documenter.var"#59#60"{Documenter.Page, Module, Expr}, IOContext{Base.PipeEndpoint}, IOContext{Base.Pipe
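This is the same scalar-indexing failure as above, reached through the direct `solve` path with Float32. Since the culprit is the RNG rather than the array (a `StableRNGs.LehmerRNG` can only fill arrays element by element), a more durable fix is a generator that fills device arrays in bulk. A hedged sketch using GPUArrays' built-in RNG; this is an assumption about a possible fix, not CoolPDLP's actual patch, and its reproducibility guarantees differ from `StableRNG`:

```julia
using JLArrays, Random
import GPUArrays

x = JLArray(zeros(Float32, 8))

# GPUArrays ships a device-side RNG that fills arrays with bulk kernels,
# so there is no per-element setindex! and no scalar-indexing error:
rng = GPUArrays.default_rng(JLArray)
Random.seed!(rng, 42)   # seedable, though not bit-compatible with StableRNG
randn!(rng, x)
```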
Documentation
Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: julia-actions/setup-julia@v2. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Node.js 20 will be removed from the runner on September 16th, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/