Add source terms for SGN with BathymetryFlat #180

Merged
JoshuaLampert merged 25 commits into NumericalMathematics:main from cwittens:main
Apr 30, 2025

Conversation

@cwittens
Member

Add source terms for SGN with BathymetryFlat. Will do for BathymetryVariable next.

I did local testing in examples/serre_green_naghdi_1d/serre_green_naghdi_manufactured.jl and everything worked fine.

I was unsure about test/test_serre_green_naghdi_1d.jl. Here I only put in a template, but commented it out, because I was unsure which numerical values to put in.

Also, I am not sure if any more tests are needed that I missed.

@JoshuaLampert
Member

Thanks a lot! Regarding the tests it's fine to use the values you obtain locally after you verified everything works as expected. You can obtain these values from errors(analysis_callback) and integrals(analysis_callback).
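
Sketched in code, reading those values off the callback could look like this (a sketch; it assumes the elixir defines `analysis_callback`, as the examples in this repository do):

```julia
# Sketch: query the AnalysisCallback after the simulation has finished.
# `analysis_callback` is assumed to be the callback object from the elixir.
errs = errors(analysis_callback)     # errors recorded during the run
ints = integrals(analysis_callback)  # integral quantities recorded during the run

errs.l2_error[:, end]    # l2 errors of all variables at the final time
errs.linf_error[:, end]  # linf errors of all variables at the final time
```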

@JoshuaLampert
Member

Since the manufactured solution is (obviously) an analytical solution, could you please provide a convergence test to see if we match the expected order of convergence? You can use convergence_test for that.

@JoshuaLampert
Member

For the formatter you can run:

julia -e 'using Pkg; Pkg.add(PackageSpec(name = "JuliaFormatter", version="1.0.60"))'
julia -e 'using JuliaFormatter; format(".")'

@cwittens
Member Author

Thanks a lot! Regarding the tests it's fine to use the values you obtain locally after you verified everything works as expected. You can obtain these values from errors(analysis_callback) and integrals(analysis_callback).

For @test_allocations(semi, sol, allocs=250_000), do I just use what e.g. @btime gives me when running sol = (...) and round up?
The other values I found, thanks :)

@cwittens
Member Author

cwittens commented Apr 29, 2025

julia -e 'using Pkg; Pkg.add(PackageSpec(name = "JuliaFormatter", version="1.0.60"))'
julia -e 'using JuliaFormatter; format(".")'

I did use (Ctrl-K Ctrl-F) in VSCode now - I think it did the same. https://www.julia-vscode.org/docs/dev/userguide/formatter/

Edit: ok with all those CI fails, maybe I should start doing format(".") after all :D

@cwittens
Member Author

Since the manufactured solution is (obviously) an analytical solution, could you please provide a convergence test to see if we match the expected order of convergence? You can use convergence_test for that.

where? The only reference to convergence_test() I found in ./DispersiveShallowWater.jl/test was https://github.com/NumericalMathematics/DispersiveShallowWater.jl/blob/95211460557415ce9793da70a7df5c2d20fc0adc/test/test_unit.jl#L443-463, but this is probably not what you mean.

@JoshuaLampert
Member

Thanks a lot! Regarding the tests it's fine to use the values you obtain locally after you verified everything works as expected. You can obtain these values from errors(analysis_callback) and integrals(analysis_callback).

For @test_allocations(semi, sol, allocs=250_000) I just use what e.g. @Btime gives me when running sol=(...) and round up? The other values I found, thanks :)

The allocs refer to the allocations during one call of rhs!, not the whole simulation. To determine the number, you could set it to 0 first, run the test locally (you can do that directly in VSCode), see the error and how many allocations are reported, and finally round the number up and set it as allocs.
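
That workflow could be sketched like this (hypothetical numbers; @test_allocations as used in the package's test suite):

```julia
# Step 1: set allocs to 0 so the test fails and reports the actual
# number of allocations in one call of rhs!.
@test_allocations(semi, sol, allocs = 0)

# Step 2: suppose the failure message reports roughly 230_000 allocations
# (a made-up figure for illustration); round up and use that as the bound.
@test_allocations(semi, sol, allocs = 250_000)
```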

@JoshuaLampert
Member

julia -e 'using Pkg; Pkg.add(PackageSpec(name = "JuliaFormatter", version="1.0.60"))'
julia -e 'using JuliaFormatter; format(".")'

I did use (Ctrl-K Ctrl-F) in VSCode now - I think it did the same. julia-vscode.org/docs/dev/userguide/formatter

Edit: ok with all those CI fails, maybe I should start doing format(".") after all :D

You could use VSCode for that (I would recommend (Ctrl-Shift-I) to format the whole file), but VSCode might use another version of JuliaFormatter.jl, which could result in different formatting.

@JoshuaLampert
Member

JoshuaLampert commented Apr 29, 2025

Since the manufactured solution is (obviously) an analytical solution, could you please provide a convergence test to see if we match the expected order of convergence? You can use convergence_test for that.

where? The only reference to convergence_test() I found in ./DispersiveShallowWater.jl/test was 9521146/test/test_unit.jl#L443-463, but this is probably not what you mean.

Oh, I didn't mean to include a convergence_test in the tests, but run it locally and post the results here in a comment.
So something like in this comment.

@cwittens
Member Author

Because the tests are failing, was it correct to put in the three components of

err = errors(analysis_callback)
err.l2_error[:,end]
3-element Vector{Float64}:
 5.034595145055819e-5
 9.546213964026592e-7
 0.0

into l2=[5.034595145055819e-5 9.546213964026592e-7 0.0], in @test_trixi_include(...)?
Either way, I will try fixing it tomorrow.

@JoshuaLampert
Member

Because the tests are failing, was it correct to put in the three components of

err = errors(analysis_callback)
err.l2_error[:,end]
3-element Vector{Float64}:
 5.034595145055819e-5
 9.546213964026592e-7
 0.0

into l2=[5.034595145055819e-5 9.546213964026592e-7 0.0], in @test_trixi_include(...)? Either way, I will try fixing it tomorrow.

Yes, looks correct to me. I don't know why tests are failing. Let's check tomorrow :)

@JoshuaLampert JoshuaLampert mentioned this pull request Apr 8, 2025
@JoshuaLampert
Member

Looking at the CI logs, the tests only fail on MacOS with differences of the order of 1e-6, which is a little bit too big to be acceptable in my opinion. This looks to me like the problem is rather ill-posed. Could you please check the condition number of the system_matrix to see how ill-posed the linear systems are?
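
Such a check might look like the following (a sketch; it assumes `system_matrix` is accessible after setting up the semidiscretization, and converts to a dense matrix since `cond` expects a dense array):

```julia
using LinearAlgebra

# Condition number of the (possibly sparse) system matrix;
# Matrix(...) converts to a dense array, which cond works on.
kappa = cond(Matrix(system_matrix))
```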

Member

@JoshuaLampert JoshuaLampert left a comment


Thanks a lot! I left some comments below. I suggest adding support for source terms with a variable bathymetry in a separate PR.

cwittens and others added 3 commits April 30, 2025 14:23
Co-authored-by: Joshua Lampert <51029046+JoshuaLampert@users.noreply.github.com>
Co-authored-by: Joshua Lampert <51029046+JoshuaLampert@users.noreply.github.com>
cwittens and others added 4 commits April 30, 2025 14:32
Co-authored-by: Joshua Lampert <51029046+JoshuaLampert@users.noreply.github.com>
Co-authored-by: Joshua Lampert <51029046+JoshuaLampert@users.noreply.github.com>
@cwittens
Member Author

Thanks a lot! I left some comments below. I suggest adding support for source terms with a variable bathymetry in a separate PR.

yes, this was the idea :)

@cwittens
Member Author

Looking at the CI logs, the tests only fail on MacOS with differences of the order of 1e-6, which is a little bit too big to be acceptable in my opinion. This looks to me like the problem is rather ill-posed. Could you please check the condition number of the system_matrix to see how ill-posed the linear systems are?

I will look into it.
I assume that if an ill-conditioned system is the problem, the easiest way to fix it will be a different manufactured solution?

@cwittens
Member Author

Since the manufactured solution is (obviously) an analytical solution, could you please provide a convergence test to see if we match the expected order of convergence? You can use convergence_test for that.

I did run

convergence_test(SGN_MANUF_File, [128, 256, 512, 1024, 2048], tspan=(0.0, 1.0))

####################################################################################################
l2
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  3.97e-05  -         128  1.01e-05  -         128  0.00e+00  -
256  5.18e-05  -0.39     256  9.90e-07  3.35      256  0.00e+00  NaN
512  5.61e-05  -0.11     512  1.08e-06  -0.12     512  0.00e+00  NaN
1024 1.28e-04  -1.19     1024 1.08e-06  0.00      1024 0.00e+00  NaN
2048 2.50e-04  -0.96     2048 1.19e-06  -0.15     2048 0.00e+00  NaN

mean           -0.66     mean           0.77      mean           NaN
----------------------------------------------------------------------------------------------------
linf
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  2.01e-04  -         128  1.52e-05  -         128  0.00e+00  -
256  2.41e-04  -0.26     256  3.92e-06  1.95      256  0.00e+00  NaN
512  3.84e-04  -0.68     512  5.91e-06  -0.59     512  0.00e+00  NaN
1024 9.55e-04  -1.31     1024 6.96e-06  -0.24     1024 0.00e+00  NaN
2048 2.12e-03  -1.15     2048 6.43e-06  0.12      2048 0.00e+00  NaN

mean           -0.85     mean           0.31      mean           NaN
----------------------------------------------------------------------------------------------------

but I am not sure if this is how it is supposed to be, because nowhere did I put in the analytical solution for reference (?).
And also I don't think a negative EOC is a good thing :D

@JoshuaLampert
Member

JoshuaLampert commented Apr 30, 2025

Since the manufactured solution is (obviously) an analytical solution, could you please provide a convergence test to see if we match the expected order of convergence? You can use convergence_test for that.

I did run

convergence_test(SGN_MANUF_File, [128, 256, 512, 1024, 2048], tspan=(0.0, 1.0))

####################################################################################################
l2
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  3.97e-05  -         128  1.01e-05  -         128  0.00e+00  -
256  5.18e-05  -0.39     256  9.90e-07  3.35      256  0.00e+00  NaN
512  5.61e-05  -0.11     512  1.08e-06  -0.12     512  0.00e+00  NaN
1024 1.28e-04  -1.19     1024 1.08e-06  0.00      1024 0.00e+00  NaN
2048 2.50e-04  -0.96     2048 1.19e-06  -0.15     2048 0.00e+00  NaN

mean           -0.66     mean           0.77      mean           NaN
----------------------------------------------------------------------------------------------------
linf
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  2.01e-04  -         128  1.52e-05  -         128  0.00e+00  -
256  2.41e-04  -0.26     256  3.92e-06  1.95      256  0.00e+00  NaN
512  3.84e-04  -0.68     512  5.91e-06  -0.59     512  0.00e+00  NaN
1024 9.55e-04  -1.31     1024 6.96e-06  -0.24     1024 0.00e+00  NaN
2048 2.12e-03  -1.15     2048 6.43e-06  0.12      2048 0.00e+00  NaN

mean           -0.85     mean           0.31      mean           NaN
----------------------------------------------------------------------------------------------------

but I am not sure if this is how it is supposed to be, because nowhere did I put in the analytical solution for reference (?). And also I don't think a negative EOC is a good thing :D

Thanks for the convergence test. Yes, I agree, it seems like something is not working yet. We would expect an EOC of approximately 4 for η and v.
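
As a sanity check, the EOC entries in such a table follow from consecutive errors: since N doubles (so the grid spacing h halves) between rows, EOC = log2(err_coarse / err_fine). A self-contained sketch using the l2 errors of v from the table above:

```julia
# EOC between consecutive refinement levels where N doubles (h halves).
eoc(err_coarse, err_fine) = log2(err_coarse / err_fine)

# l2 errors of v from the convergence table above
errors_v = [1.01e-5, 9.90e-7, 1.08e-6, 1.08e-6, 1.19e-6]
eocs = [eoc(errors_v[i], errors_v[i + 1]) for i in 1:(length(errors_v) - 1)]
# the first entry is about 3.35, matching the table;
# negative entries mean the error grew under refinement
```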

@cwittens
Member Author

Not sure why it now also fails on Windows and Linux, but the bad condition number might be a reason.
For serre_green_naghdi_manufactured.jl I get cond(Matrix(system_matrix)) = ~863932
while for serre_green_naghdi_soliton.jl I get cond(Matrix(system_matrix)) = ~28

I will see what I can do to improve that :)

@JoshuaLampert
Member

Not sure why it now also fails on Windows and Linux, but the bad condition number might be a reason. For serre_green_naghdi_manufactured.jl I get cond(Matrix(system_matrix)) = ~863932 while for serre_green_naghdi_soliton.jl I get cond(Matrix(system_matrix)) = ~28

I will see what I can do to improve that :)

Thanks, but I think ~863932 does not seem high enough to have such an impact. Looks like there is something else going on.

@cwittens
Member Author

Since the manufactured solution is (obviously) an analytical solution, could you please provide a convergence test to see if we match the expected order of convergence? You can use convergence_test for that.

I did run

convergence_test(SGN_MANUF_File, [128, 256, 512, 1024, 2048], tspan=(0.0, 1.0))

####################################################################################################
l2
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  3.97e-05  -         128  1.01e-05  -         128  0.00e+00  -
256  5.18e-05  -0.39     256  9.90e-07  3.35      256  0.00e+00  NaN
512  5.61e-05  -0.11     512  1.08e-06  -0.12     512  0.00e+00  NaN
1024 1.28e-04  -1.19     1024 1.08e-06  0.00      1024 0.00e+00  NaN
2048 2.50e-04  -0.96     2048 1.19e-06  -0.15     2048 0.00e+00  NaN

mean           -0.66     mean           0.77      mean           NaN
----------------------------------------------------------------------------------------------------
linf
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  2.01e-04  -         128  1.52e-05  -         128  0.00e+00  -
256  2.41e-04  -0.26     256  3.92e-06  1.95      256  0.00e+00  NaN
512  3.84e-04  -0.68     512  5.91e-06  -0.59     512  0.00e+00  NaN
1024 9.55e-04  -1.31     1024 6.96e-06  -0.24     1024 0.00e+00  NaN
2048 2.12e-03  -1.15     2048 6.43e-06  0.12      2048 0.00e+00  NaN

mean           -0.85     mean           0.31      mean           NaN
----------------------------------------------------------------------------------------------------

but I am not sure if this is how it is supposed to be, because nowhere did I put in the analytical solution for reference (?). And also I don't think a negative EOC is a good thing :D

Thanks for the convergence test. Yes, I agree, it seems like something is not working yet. We would expect an EOC of approximately 4 for η and v.

Setting abstol = 1e-13, reltol = 1e-13, I now get:

####################################################################################################
l2
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  1.83e-05  -         128  1.00e-05  -         128  0.00e+00  -
256  1.15e-06  4.00      256  6.28e-07  4.00      256  0.00e+00  NaN
512  7.17e-08  4.00      512  3.93e-08  4.00      512  0.00e+00  NaN
1024 4.48e-09  4.00      1024 2.45e-09  4.00      1024 0.00e+00  NaN
2048 2.92e-10  3.94      2048 1.41e-10  4.12      2048 0.00e+00  NaN

mean           3.98      mean           4.03      mean           NaN
----------------------------------------------------------------------------------------------------
linf
η                        v                        D
N    error     EOC       N    error     EOC       N    error     EOC
128  4.66e-05  -         128  1.35e-05  -         128  0.00e+00  -
256  2.93e-06  3.99      256  8.33e-07  4.02      256  0.00e+00  NaN
512  1.83e-07  4.00      512  5.21e-08  4.00      512  0.00e+00  NaN
1024 1.21e-08  3.92      1024 3.25e-09  4.00      1024 0.00e+00  NaN
2048 9.50e-10  3.67      2048 2.10e-10  3.95      2048 0.00e+00  NaN

mean           3.90      mean           3.99      mean           NaN
----------------------------------------------------------------------------------------------------

Should I adjust the l2, linf, change_waterhight, ... values in the testitem to the values corresponding to sol with abstol = 1e-13, reltol = 1e-13 and commit again?

@JoshuaLampert
Member

Ahh, that makes sense. The time error was dominating. Lowering the time integration tolerances to focus on the spatial error sounds good. I'm wondering, however, whether they need to be that low. Can you maybe try something more like 1e-9 to 1e-12?
Thanks for finding this!
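
Concretely, tightening the tolerances amounts to passing them to the adaptive time integrator, e.g. (a sketch; solver choice and keyword names as in typical OrdinaryDiffEq.jl-based elixirs, the exact call may differ):

```julia
using OrdinaryDiffEq

# Tighter adaptive tolerances so the temporal error no longer
# dominates the spatial discretization error.
sol = solve(ode, Tsit5(); abstol = 1e-12, reltol = 1e-12,
            save_everystep = false, callback = callbacks)
```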

@github-actions
Contributor

github-actions bot commented Apr 30, 2025

Benchmark Results

benchmark                                   main                 4b8ce7c...           main/4b8ce7cf3f7fa0...
bbm_1d/bbm_1d_basic.jl 14.5 ± 0.53 μs 13.8 ± 0.38 μs 1.05
bbm_1d/bbm_1d_fourier.jl 0.535 ± 0.0095 ms 0.528 ± 0.29 ms 1.01
bbm_bbm_1d/bbm_bbm_1d_basic_reflecting.jl 0.0805 ± 0.00042 ms 0.0809 ± 0.00041 ms 0.995
bbm_bbm_1d/bbm_bbm_1d_dg.jl 0.04 ± 0.00047 ms 0.0341 ± 0.0005 ms 1.17
bbm_bbm_1d/bbm_bbm_1d_relaxation.jl 27.1 ± 0.39 μs 27.5 ± 0.46 μs 0.983
bbm_bbm_1d/bbm_bbm_1d_upwind_relaxation.jl 0.0484 ± 0.00056 ms 0.0483 ± 0.00062 ms 1
hyperbolic_serre_green_naghdi_1d/hyperbolic_serre_green_naghdi_dingemans.jl 4.14 ± 0.013 μs 4.08 ± 0.013 μs 1.01
serre_green_naghdi_1d/serre_green_naghdi_well_balanced.jl 0.199 ± 0.0079 ms 0.199 ± 0.0078 ms 0.996
svaerd_kalisch_1d/svaerd_kalisch_1d_dingemans_relaxation.jl 0.146 ± 0.0039 ms 0.15 ± 0.0048 ms 0.979
time_to_load 1.99 ± 0.015 s 1.98 ± 0.03 s 1

Benchmark Plots

A plot of the benchmark results has been uploaded as an artifact to the workflow run for this PR.

@cwittens
Member Author

Ahh, that makes sense. The time error was dominating. Lowering the time integration tolerances to focus on the spatial error sounds good. I'm wondering, however, whether they need to be that low. Can you maybe try something more like 1e-9 to 1e-12? Thanks for finding this!

For eta I only get an EOC of ~4 with 1e-12 or stricter - for both central and upwind operators.

@JoshuaLampert
Member

JoshuaLampert commented Apr 30, 2025

Ahh, that makes sense. The time error was dominating. Lowering the time integration tolerances to focus on the spatial error sounds good. I'm wondering, however, whether they need to be that low. Can you maybe try something more like 1e-9 to 1e-12? Thanks for finding this!

For eta I only get an EOC of ~4 with 1e-12 or stricter - for both central and upwind operators.

Ok, I assume the EOC for smaller N will be closer to ~4 than for N = 2048? If we only see a "clean" EOC of 4 until, say, N = 512 or 1024, that should also be fine. Otherwise, setting the tolerances to 1e-12 would also be fine with me.

@cwittens
Member Author

Ahh, that makes sense. The time error was dominating. Lowering the time integration tolerances to focus on the spatial error sounds good. I'm wondering, however, whether they need to be that low. Can you maybe try something more like 1e-9 to 1e-12? Thanks for finding this!

For eta I only get an EOC of ~4 with 1e-12 or stricter - for both central and upwind operators.

Ok, I assume the EOC for smaller N will be closer to ~4 than for N = 2048? If we only see a "clean" EOC of 4 until, say, N = 512 or 1024, that should also be fine. Otherwise, setting the tolerances to 1e-12 would also be fine with me.

What I just got was with Ns = [128, 256, 512, 1024].
I am just adjusting the l2, ... values in the testset, running the tests locally, and if everything passes I will commit again - hopefully everything will work then.

@codecov-commenter

Codecov Report

All modified and coverable lines are covered by tests ✅


Member

@JoshuaLampert JoshuaLampert left a comment


One final fix for CI until this looks good from my side. Thanks again!
@ranocha, do you also want to take a look?

Co-authored-by: Joshua Lampert <51029046+JoshuaLampert@users.noreply.github.com>
Member

@ranocha ranocha left a comment


Thanks!

@JoshuaLampert JoshuaLampert merged commit eaa6bb1 into NumericalMathematics:main Apr 30, 2025
10 checks passed
@coveralls
Collaborator

Pull Request Test Coverage Report for Build 14761975630

Warning: This coverage report may be inaccurate.

This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

  • 29 of 29 (100.0%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall first build on main at 98.092%

Totals Coverage Status
Change from base Build 14759959157: 98.1%
Covered Lines: 1902
Relevant Lines: 1939


@JoshuaLampert JoshuaLampert added the enhancement New feature or request label May 10, 2025