Luminal #7
base: master
Conversation
Pull request overview
This PR migrates the optimizer framework from Candle to Luminal, introducing graph-based computation and switching from 64-bit to 32-bit floating-point precision throughout the codebase.
Key Changes:
- Replaced Candle tensor library with Luminal's graph-based computation framework
- Changed all numeric types from `f64` to `f32` for improved performance and memory efficiency
- Restructured optimizer trait to work with static computation graphs instead of dynamic tensors
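The `f64` to `f32` switch has a practical consequence for the tolerance constants mentioned in the file summary below: tolerances tuned for `f64` can fall below `f32` resolution entirely. A minimal sketch (constant names and values are assumptions, not taken from the PR):

```rust
// Hypothetical before/after of a tolerance constant migration.
// An f64-scale tolerance like 1e-12 is below f32 machine epsilon (~1.19e-7),
// so it must be relaxed when the codebase moves to f32.
const OLD_TOLERANCE_F64: f64 = 1e-12; // plausible f64-era value (assumed)
const NEW_TOLERANCE: f32 = 1e-6;      // must sit above f32::EPSILON to be meaningful

fn main() {
    // The new tolerance is representable and useful in f32...
    assert!(NEW_TOLERANCE > f32::EPSILON);
    // ...while the old f64-scale value is finer than f32 can distinguish near 1.0.
    assert!((OLD_TOLERANCE_F64 as f32) < f32::EPSILON);
    println!("f32::EPSILON = {}", f32::EPSILON);
}
```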
Reviewed changes
Copilot reviewed 47 out of 55 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/optimizers/optimizer.rs | Introduced graph-based optimizer interface with gradient context support |
| src/optimizers/mod.rs | Updated numeric tolerance to f32 and removed Candle error handling |
| src/optimizers/lbfgs.rs | Converted L-BFGS to use graph tensors and f32 precision |
| src/optimizers/gd.rs | Migrated gradient descent to Luminal with simplified state management |
| src/optimizers/adam.rs | Restructured Adam optimizer for graph-based computation |
| src/line_search/strong_wolfe.rs | Updated to evaluate objectives via graph execution |
| src/line_search/more_thuente.rs | Converted to f32 and graph-based evaluation |
| src/line_search/mod.rs | Changed numeric tolerance constant to f32 |
| src/line_search/line_search.rs | Replaced closure-based API with graph tensor interface |
| src/line_search/golden_section.rs | Updated to work with graph execution model |
| src/line_search/cubic_quadratic.rs | Converted interpolation logic to f32 precision |
| src/line_search/bisection.rs | Introduced evaluator trait for graph-based line search |
| src/line_search/backtracking.rs | Simplified to use graph tensor evaluation |
| src/lib.rs | Removed ML problem re-exports |
| src/experiment_runner/unified_report_tests.rs | Updated test data to use f32 |
| src/experiment_runner/test_data.rs | Changed mock problem signatures to f32 |
| src/experiment_runner/reports/unified_summary_statistics.rs | Converted statistics calculations to f32 |
| src/experiment_runner/reports/family_vs_family_report.rs | Changed NAN constants to f32 |
src/optimizers/lbfgs.rs (outdated)

```rust
// estimate_optimum returns -H*g (descent direction)
// We want H*g (search direction before negation), but QQN might expect the descent direction?
// The comment says "actual direction, not the descent direction".
// Usually "direction" means p = -H*g.
// If QQN wants H*g, we negate.
// But estimate_optimum returns p = -H*g.
// So if we want H*g, we negate p.
```
Copilot AI · Dec 14, 2025
This comment block is confusing and contradictory. It appears to question what direction format is needed but doesn't reach a clear conclusion. The comment should clearly document what compute_direction returns and why it negates the result from estimate_optimum, or the implementation should be corrected if the current behavior is wrong.
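The sign convention at issue can be pinned down with a scalar stand-in. This is an illustrative sketch, not the crate's actual code: for a positive-definite inverse-Hessian estimate `H`, the L-BFGS step is the descent direction `p = -H*g`, and `H*g` itself points uphill, so a caller that wants `H*g` negates `p`.

```rust
// Scalar illustration of the direction convention discussed above (names assumed).
fn main() {
    let h = 2.0_f32; // stand-in for a positive-definite inverse-Hessian estimate
    let g = 3.0_f32; // gradient component
    let p = -h * g;  // descent direction, as returned by estimate_optimum
    let hg = -p;     // recover H*g by negating p
    assert_eq!(p, -6.0);
    assert_eq!(hg, 6.0);
    // A descent direction has negative inner product with the gradient.
    assert!(p * g < 0.0);
}
```

Documenting which of the two quantities `compute_direction` returns, in one sentence, would resolve the ambiguity the review flags.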
@copilot open a new pull request to apply changes based on this feedback
src/line_search/golden_section.rs (outdated)

```rust
let mut num_f_evals = 0usize;

// Create objective function that evaluates loss at a given step
let mut num_f_evals = 0usize;

let mut objective = |step: f32| -> Result<f32> {
```
Copilot AI · Dec 14, 2025
The variable num_f_evals is declared three times consecutively (lines 139, 142, 145), with the first two declarations being unused. Remove the duplicate declarations on lines 139 and 142.
@copilot open a new pull request to apply changes based on this feedback
@acharneski I've opened a new pull request, #8, to work on those changes. Once the pull request is ready, I'll request review from you.

@acharneski I've opened a new pull request, #9, to work on those changes. Once the pull request is ready, I'll request review from you.
```rust
use tokio::task::LocalSet;

// #[tokio::test]
#[allow(dead_code)]
async fn calibration() -> Result<(), Box<dyn Error + Send + Sync>> {
```
Restore this method
Force-pushed from 89efabf to 2451acc
This pull request migrates the project from using the Candle and OneDNN libraries to the Luminal ecosystem for neural network and optimization functionality. It removes dependencies, code, and examples related to Candle and OneDNN, and introduces Luminal and its relevant crates as new dependencies. The changes also update project configuration files to include Luminal as a git submodule and adjust IDE settings accordingly.
Dependency and library migration:
- Replaced the `candle-core` and `candle-nn` dependencies with `luminal`, `luminal_nn`, and `luminal_training` in `Cargo.toml`, and removed all OneDNN-related dependencies and features. Added `dfdx` as a new dependency. [1] [2]
- Added Luminal as a git submodule in `.gitmodules` and updated `.idea/vcs.xml` to track the new submodule in version control. [1] [2]

Example and code cleanup:
- Removed the `examples/basic_usage.rs` file, which demonstrated QQN optimization using Candle.
- Removed the `examples/benchmark_comparison.rs` file, which benchmarked the Candle and OneDNN MNIST implementations.
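The `Cargo.toml` side of the dependency swap might look roughly like the following. This is a hedged sketch: the path layout, crate paths, and the `dfdx` version are assumptions, not taken from the PR diff.

```toml
[dependencies]
# removed: candle-core, candle-nn, and all OneDNN-related dependencies/features
luminal = { path = "luminal" }                                   # git submodule (path assumed)
luminal_nn = { path = "luminal/crates/luminal_nn" }              # path assumed
luminal_training = { path = "luminal/crates/luminal_training" }  # path assumed
dfdx = "0.13"                                                    # version assumed
```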