Commit 4d74e75: merge with main
Parents: 3ed4428 + e4c6021

18 files changed: +380 additions, −181 deletions

README.md (5 additions, 4 deletions)

@@ -27,12 +27,13 @@ If you are interested in using these products, and do not (yet) want to compile
 - **Delft3D 4 Suite website:** https://www.deltares.nl/en/software-and-data/products/delft3d-4-suite
 - **Delft3D FM Suite:** https://www.deltares.nl/en/software-and-data/products/delft3d-flexible-mesh-suite
 
-and contact our **sales team:** https://www.deltares.nl/en/software-and-data/software-sales-and-support-teams
+and contact our **sales services team:** https://www.deltares.nl/en/software-and-data/software-sales-and-support-teams
 
-## Bug reports
+## Community support
 
-To limit the number of parallel communication channels and issue trackers, the issues tab has been removed on the Delft3D GitHub site.
-In case you encounter bugs, please report them to our **support team:** https://www.deltares.nl/en/software-and-data/software-sales-and-support-teams
+Clients with Service Packages have access to the support team.
+For the wider open source community, we recommend the use of the [GitHub Discussions](https://github.com/Deltares/Delft3D/discussions) tab.
+Please post questions and suggestions there in the Q&A sections.
 
 ## Open Source Community
 

doc/contributing.md (31 additions, 10 deletions)

@@ -5,23 +5,44 @@ Below are the specifics about our contributing process.
 If you are entirely new to contributing to open source, [this generic guide](https://opensource.guide/how-to-contribute/) also helps explain why, what, and how to successfully get involved in open source projects.
 
 ## Workflow
-- Create a fork of the Delft3D repository, or request write-access to our Delft3D repository via Delft3D support.
-- Create a JIRA issue ticket at https://issuetracker.deltares.nl describing the bug to be fixed or functionality to be developed.
-  The issue number is relevant for naming the development branch (see below).
-  Third party developers don't have access to the issue tracker; we recommend to communicate with a Deltares contact person.
+### External contributors
+- Create a fork of the Delft3D repository.
+- Reach out to Deltares as early as possible if you intend to contribute changes back to the main Delft3D version.
+  This helps us with keeping track of what developments are ongoing.
+  This helps avoid starting work on something that other people are already implementing, and we can give guidance on how best to implement the intended changes.
+  Reach out to us via the **sales services team:** https://www.deltares.nl/en/software-and-data/software-sales-and-support-teams with a brief description of the scope of the code change; they will forward your request internally to the appropriate development team.
+- Checkout/Clone the repository locally.
+- Create a branch, ideally using the naming convention below.
+  The frequency of updating your fork/branch from the Deltares main is up to personal taste.
+  Yet, merge from our main as often as possible, and contribute back to us as early as possible.
+- Implement, test and document the modifications.
+- Provide a patch-file, or reach out to Deltares to create a pull request:
+  - Although anyone can create a pull request on our repository, our pipelines will only be triggered if the pull request is created by a Deltares employee.
+  - Merging back to our main will typically include the following steps: transfer code changes to a branch in our Delft3D repository, security scan of the changes made, creation of a pull request, review of code, documentation and test cases, and automated code testing.
+    Obviously, with some iterations if one of the steps identifies issues to be resolved before the merge.
+- To keep legal representation of the Delft3D software indisputable, we ask you to sign a Fiduciary License Agreement (FLA) before the final merge into main.
+  For an explanation why, see [this page](https://fsfe.org/activities/fla/fla.en.html) by the Free Software Foundation Europe.
+  The FLA can be obtained via the Deltares contact person who handles the merging process.
+  Signing the FLA makes sense for code contributions of significant size.
+  For small bug fixes, it's better to send an email with a test case and a description of the recommended code changes than following the formal procedure described above.
+
+### Deltares employees
 - Checkout/Clone the repository locally.
+- Create a JIRA issue ticket at https://issuetracker.deltares.nl describing the bug to be fixed or functionality to be developed.
+  The issue number is required for naming the development branch (see below).
 - Create a branch using the naming convention below.
-  If no issue number is available, create a research branch starting with `research/`.
   The frequency of updating your branch from main is up to personal taste.
   Yet, merge from main as often as possible, and merge back to main as early as possible.
-- Make and test the modifications.
-- Provide a patch-file, or create a pull request:
-  - Our Continuous Integration pipelines will only be triggered if the pull request is created by a Deltares contact person.
+- Implement, test and document the modifications.
+  In case of changes by external contributors, this step will include pulling the changes from the external repository into the local branch and at least a security scan of the changes made by the external contributor.
+- Create a pull request:
+  - Our Continuous Integration pipelines will be triggered automatically by a pull request created by Deltares employees.
    These pipelines consist of (Deltares-internal) TeamCity projects to build the source code (Windows and Linux) and subsequently a set of model simulation testbenches.
    A merge is only possible when all checks succeed.
    The projects will take at least 30 minutes to complete.
-  - You have to assign the Pull request to a core developer for reviewing and testing. When succeeded, the tester/reviewer is allowed to merge into main.
-  - Official binary deliveries are only allowed using Deltares TeamCity server
+  - You have to assign the pull request to a core developer for review.
+    If review and all tests pass, the tester/reviewer is allowed to merge into main (signed Fiduciary License Agreement required in case of external contributor).
+  - Official binary deliveries are only allowed using the Deltares TeamCity server.
 
 ## Branch naming
 For each issue or feature, a separate branch should be created from the main.

src/engines_gpl/dflowfm/packages/dflowfm_kernel/src/dflowfm_data/m_heatfluxes.f90 (2 additions, 1 deletion)

@@ -68,7 +68,8 @@ module m_heatfluxes
    real(kind=dp), dimension(:), allocatable :: qtotmap
 
    ! Secchi depth variables
-   logical :: spatial_secchi_depth_is_available !< Flag to indicate if spatially varying Secchi depth is available
+   logical :: secchi_depth_is_spatially_varying !< Flag to indicate if spatially varying Secchi depth is available
+   logical :: secchi_depth_is_time_varying      !< Flag to indicate if time-varying Secchi depth is available
    real(kind=dp), dimension(:), allocatable, target :: spatial_secchi_depth !< [m] Space-varying Secchi depth {"location": "face", "shape": ["ndx"]}
 
 contains

src/engines_gpl/dflowfm/packages/dflowfm_kernel/src/dflowfm_data/partition.F90 (88 additions, 16 deletions)

@@ -3543,6 +3543,28 @@ subroutine reduce_int1_max(var)
    return
 end subroutine reduce_int1_max
 
+!> for an array of doubles, take maximum over all subdomains (not over the array itself)
+subroutine reduce_double_array_max(N, var)
+#ifdef HAVE_MPI
+   use mpi
+#endif
+
+   implicit none
+
+   integer, intent(in) :: N !< array size
+   real(kind=dp), dimension(N), intent(inout) :: var !< array with values to be reduced to global maximum over all subdomains
+
+   real(kind=dp), dimension(N) :: dum
+
+   integer :: ierror
+
+#ifdef HAVE_MPI
+   call MPI_allreduce(var, dum, N, mpi_double_precision, mpi_max, DFM_COMM_DFMWORLD, ierror)
+   var = dum
+#endif
+   return
+end subroutine reduce_double_array_max
+
 !> for an array over integers, take maximum over all subdomains (not over the array itself)
 subroutine reduce_int_max(N, var)
 #ifdef HAVE_MPI
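The added `reduce_double_array_max` reduces each array element to its maximum across all MPI subdomains, leaving the array length unchanged. A minimal Python sketch of that elementwise max-reduction (no MPI; the per-rank arrays are held as plain lists, and all names here are illustrative, not from the codebase):

```python
# Sketch of an elementwise max-reduction across subdomains, mirroring
# what reduce_double_array_max does via MPI_Allreduce with MPI_MAX.
# In the real code each rank holds one array and MPI combines them in place.

def reduce_array_max(per_rank_arrays):
    """Return the elementwise maximum over all 'subdomain' arrays."""
    n = len(per_rank_arrays[0])
    return [max(arr[i] for arr in per_rank_arrays) for i in range(n)]

# Example: three ranks, each holding a 4-element array.
ranks = [
    [1.0, 5.0, -2.0, 0.0],
    [3.0, 1.0, -1.0, 0.5],
    [2.0, 4.0, -3.0, 0.2],
]
print(reduce_array_max(ranks))  # [3.0, 5.0, -1.0, 0.5]
```

Note that the reduction runs over subdomains, not within each array: position `i` of the result only ever sees position `i` of every rank.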
@@ -3740,8 +3762,8 @@ subroutine reduce_lateral_output()
    real(kind=dp), dimension(:, :), allocatable, save :: lateral_volume_per_layer_buffer
    real(kind=dp), dimension(:, :), allocatable, save :: outgoing_lat_volume_buffer
    real(kind=dp), dimension(:, :, :), allocatable, save :: outgoing_lat_concentration_buffer
-   real(kind=dp), dimension(:), allocatable, save :: cumulative_value_buffer
-   real(kind=dp), dimension(:), allocatable, save :: cumulative_weight_buffer
+   real(kind=dp), dimension(:), allocatable, save :: cumulative_value_buffer
+   real(kind=dp), dimension(:), allocatable, save :: cumulative_weight_buffer
    real(kind=dp), parameter :: dsmall = -huge(1.0_dp)
 
 #ifdef HAVE_MPI
@@ -3764,21 +3786,21 @@ subroutine reduce_lateral_output()
    if (average_waterlevels_per_lateral%is_used) then
       cumulative_value_buffer = average_waterlevels_per_lateral%cumulative_value
       cumulative_weight_buffer = average_waterlevels_per_lateral%cumulative_weight
-      call MPI_reduce(cumulative_value_buffer, average_waterlevels_per_lateral%cumulative_value, average_waterlevels_per_lateral%num_elements, mpi_double_precision, mpi_sum, 0, DFM_COMM_DFMWORLD, ierror)
-      call MPI_reduce(cumulative_weight_buffer, average_waterlevels_per_lateral%cumulative_weight, average_waterlevels_per_lateral%num_elements, mpi_double_precision, mpi_sum, 0, DFM_COMM_DFMWORLD, ierror)
-      if (my_rank == 0) then
-         do i_element = 1, average_waterlevels_per_lateral%num_elements
-            average_waterlevels_per_lateral%values(i_element) = average_waterlevels_per_lateral%cumulative_value(i_element) / &
-                                                                max(average_waterlevels_per_lateral%cumulative_weight(i_element), eps10)
-         end do
-      else
-         ! This is a work-around required to avoid issue in dimr.cpp send() i.e. when reducing negative values
-         average_waterlevels_per_lateral%values = dsmall
-      end if
+      call MPI_reduce(cumulative_value_buffer, average_waterlevels_per_lateral%cumulative_value, average_waterlevels_per_lateral%num_elements, mpi_double_precision, mpi_sum, 0, DFM_COMM_DFMWORLD, ierror)
+      call MPI_reduce(cumulative_weight_buffer, average_waterlevels_per_lateral%cumulative_weight, average_waterlevels_per_lateral%num_elements, mpi_double_precision, mpi_sum, 0, DFM_COMM_DFMWORLD, ierror)
+      if (my_rank == 0) then
+         do i_element = 1, average_waterlevels_per_lateral%num_elements
+            average_waterlevels_per_lateral%values(i_element) = average_waterlevels_per_lateral%cumulative_value(i_element) / &
+                                                                max(average_waterlevels_per_lateral%cumulative_weight(i_element), eps10)
+         end do
+      else
+         ! This is a work-around required to avoid issue in dimr.cpp send() i.e. when reducing negative values
+         average_waterlevels_per_lateral%values = dsmall
+      end if
    end if
 #endif
    return
-end subroutine reduce_lateral_output
+end subroutine reduce_lateral_output
 
 !> Distribute lateral input to all ranks
 subroutine distribute_lateral_input()
@@ -3813,8 +3835,8 @@ subroutine distribute_lateral_input()
    call MPI_bcast(incoming_lat_concentration, num_lateral_layer_constituent, mpi_double_precision, 0, DFM_COMM_DFMWORLD, ierror)
 #endif
    return
-end subroutine distribute_lateral_input
-
+end subroutine distribute_lateral_input
+
 !> reduce outputted values at observation stations
 !! NOTE: It seems that, now that we reduce the statistical output before writing, this routine is
 !! only needed to maintain functionality in unstruc_bmi/get_compound_field
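In `reduce_lateral_output`, the per-rank cumulative water-level sums and weights are first summed across ranks with `MPI_reduce(..., mpi_sum, 0, ...)`, and rank 0 then divides to obtain the weighted average per lateral, guarding the denominator with `eps10`. A Python sketch of that two-step reduction (no MPI; `EPS10` is an illustrative stand-in for the kernel's `eps10`):

```python
# Sketch of the weighted-average reduction in reduce_lateral_output:
# per-rank cumulative (value, weight) sums are summed across ranks,
# then the root rank forms value/max(weight, eps) per lateral element.

EPS10 = 1.0e-10  # assumed guard value, stand-in for eps10

def reduce_weighted_average(values_per_rank, weights_per_rank):
    """Sum per-rank cumulative values and weights, then divide (root-rank step)."""
    n = len(values_per_rank[0])
    total_v = [sum(v[i] for v in values_per_rank) for i in range(n)]
    total_w = [sum(w[i] for w in weights_per_rank) for i in range(n)]
    # eps guard avoids division by zero for laterals with no wet cells anywhere
    return [total_v[i] / max(total_w[i], EPS10) for i in range(n)]

# Two ranks, two laterals; the second lateral has weight only on rank 2.
vals = [[4.0, 0.0], [2.0, 3.0]]
wts  = [[2.0, 0.0], [1.0, 1.5]]
print(reduce_weighted_average(vals, wts))  # [2.0, 2.0]
```

The `eps10` guard matters for laterals whose total weight is zero on every rank; without it the division would fault instead of yielding a (meaningless but finite) value.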
@@ -6389,4 +6411,54 @@ subroutine logical_and_across_partitions(val, allval)
    end if
 end subroutine logical_and_across_partitions
 
+!> Given a list of local flow cell indices, returns a list of local flow cell numbers at their global position in the global union,
+!! and -1 for the cells that lie on the other partitions.
+function reduce_cells(local_cells, ndx) result(global_cells)
+#ifdef HAVE_MPI
+   use mpi
+#endif
+
+   integer, dimension(:), intent(in) :: local_cells !< Local flow cell indices found on this partition
+   integer, intent(in) :: ndx !< number of flow cells (internal + boundary), should match ndx in m_flowgeom
+   integer, dimension(:), allocatable :: global_cells !< Local flow cell indices of the global union
+   integer, dimension(:), allocatable :: global_cellmask, ilocal_s
+   integer :: k, num_cells
+#ifdef HAVE_MPI
+   integer :: mpi_err
+#endif
+
+   allocate (global_cellmask(nglobal_s))
+   global_cellmask = 0
+   ! Mark locally present cells in global cellmask, reduce afterwards
+   global_cellmask(iglobal_s(local_cells)) = 1
+
+#ifdef HAVE_MPI
+   call MPI_Allreduce(MPI_IN_PLACE, global_cellmask, nglobal_s, &
+                      MPI_INTEGER, MPI_MAX, DFM_COMM_DFMWORLD, mpi_err)
+#endif
+
+   num_cells = count(global_cellmask == 1)
+   allocate (global_cells(num_cells))
+
+   ! iglobal_s contains global numbers of local cells, but required are local numbers of global cells, so build an inverse mapping.
+   allocate (ilocal_s(nglobal_s))
+   ilocal_s = -1
+   do k = 1, ndx
+      if (iglobal_s(k) > 0) then
+         ilocal_s(iglobal_s(k)) = k
+      end if
+   end do
+
+   num_cells = 0
+   ! Build global cells from ilocal_s by iterating over global_cellmask.
+   ! Cells that exist globally but not on the current partition will have -1 in ilocal_s and thus -1 in global_cells.
+   do k = 1, nglobal_s
+      if (global_cellmask(k) == 1) then
+         num_cells = num_cells + 1
+         global_cells(num_cells) = ilocal_s(k)
+      end if
+   end do
+
+end function reduce_cells
+
 end module m_partitioninfo
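The new `reduce_cells` works in three steps: each partition marks its cells in a global mask, the masks are combined with an elementwise max (the `MPI_Allreduce` step), and an inverse of the local-to-global numbering `iglobal_s` turns the combined mask back into local cell numbers, with -1 for cells held only by other partitions. A Python sketch of the same logic (no MPI; the dictionaries standing in for `iglobal_s` and the helper names are illustrative):

```python
# Sketch of reduce_cells: mark local cells in a global mask, max-reduce
# the masks over partitions, then map flagged global cells back to local
# numbers via an inverse mapping (-1 where the cell is not local).

def combine_masks(masks):
    """Elementwise max over per-partition masks (stand-in for MPI_Allreduce)."""
    return [max(col) for col in zip(*masks)]

def mark(local_cells, iglobal_s, nglobal):
    """Per-partition mask over global cell numbers (1-based globals)."""
    m = [0] * nglobal
    for k in local_cells:
        m[iglobal_s[k] - 1] = 1
    return m

def local_view_of_union(iglobal_s, global_mask):
    """Local cell numbers at their position in the global union; -1 elsewhere."""
    nglobal = len(global_mask)
    ilocal = [-1] * (nglobal + 1)          # inverse map: global -> local
    for local_k, g in iglobal_s.items():
        ilocal[g] = local_k
    return [ilocal[g + 1] for g, flagged in enumerate(global_mask) if flagged]

# Two partitions over 5 global cells. Partition A holds globals 1,2,3 as
# locals 1,2,3; partition B holds globals 3,4,5 as locals 1,2,3.
iglob_a = {1: 1, 2: 2, 3: 3}
iglob_b = {1: 3, 2: 4, 3: 5}

# A found its locals [1, 2]; B found its local [3] (global cell 5).
mask = combine_masks([mark([1, 2], iglob_a, 5), mark([3], iglob_b, 5)])
print(local_view_of_union(iglob_a, mask))  # [1, 2, -1]
```

Because every rank builds the same combined mask, all partitions agree on the ordering of the union; only the local numbers (or -1) differ per rank.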

src/engines_gpl/dflowfm/packages/dflowfm_kernel/src/dflowfm_data/unstruc_inifields.f90 (10 additions, 7 deletions)

@@ -1575,15 +1575,14 @@ subroutine process_parameter_block(qid, inifilename, target_location_type, time_
    use timespace_parameters, only: NCGRID
    use m_missing, only: dmiss
    use fm_location_types, only: UNC_LOC_S, UNC_LOC_U, UNC_LOC_CN, UNC_LOC_GLOBAL, UNC_LOC_S3D
-   use m_flowparameters, only: jatrt, javiusp, jafrcInternalTides2D, jadiusp, jafrculin, jaCdwusp, ibedlevtyp, jawave, waveforcing
-   use m_flowparameters, only: ja_friction_coefficient_time_dependent
-   use m_flow, only: frcu
-   use m_flow, only: jacftrtfac, cftrtfac, viusp, diusp, DissInternalTidesPerArea, frcInternalTides2D, frculin, Cdwusp
+   use m_flowparameters, only: jatrt, javiusp, jafrcInternalTides2D, jadiusp, jafrculin, jaCdwusp, ibedlevtyp, jawave, &
+                               waveforcing, ja_friction_coefficient_time_dependent
+   use m_flow, only: frcu, jacftrtfac, cftrtfac, viusp, diusp, DissInternalTidesPerArea, frcInternalTides2D, frculin, Cdwusp
    use m_flowgeom, only: ndx, lnx, grounlay, iadv, jagrounlay, ibot
    use m_lateral_helper_fuctions, only: prepare_lateral_mask
    use fm_external_forcings_data, only: success
    use fm_external_forcings_utils, only: split_qid
-   use m_heatfluxes, only: spatial_secchi_depth
+   use m_heatfluxes, only: spatial_secchi_depth, secchi_depth_is_time_varying
    use m_wind, only: wind_drag_type, CD_TYPE_CONST
    use m_fm_icecover, only: ja_ice_area_fraction_read, ja_ice_thickness_read, fm_ice_activate_by_ext_forces
    use m_meteo, only: ec_addtimespacerelation
@@ -1736,6 +1735,10 @@ subroutine process_parameter_block(qid, inifilename, target_location_type, time_
       call realloc(spatial_secchi_depth, ndx, keepExisting=.true., fill=dmiss, stat=ierr)
       target_location_type = UNC_LOC_S
       target_array => spatial_secchi_depth
+      if (filetype == NCGRID) then
+         time_dependent_array = .true.
+         secchi_depth_is_time_varying = .true.
+      end if
    case ('backgroundverticaleddydiffusivitycoefficient')
       target_location_type = UNC_LOC_S
       call realloc(dicoww, ndx, constant_dicoww)
@@ -2008,7 +2011,7 @@ subroutine finish_initialization(qid)
    use m_grw, only: jaintercept2D
    use m_fm_icecover, only: ja_ice_area_fraction_read, ja_ice_thickness_read
 
-   use m_heatfluxes, only: spatial_secchi_depth_is_available, spatial_secchi_depth
+   use m_heatfluxes, only: secchi_depth_is_spatially_varying, spatial_secchi_depth
    use m_physcoef, only: secchi_depth
    use m_meteo, only: ec_addtimespacerelation
    use m_vegetation, only: stemheight, stemheightstd
@@ -2066,7 +2069,7 @@ subroutine finish_initialization(qid)
    case ('sea_ice_thickness')
       ja_ice_thickness_read = 1
    case ('secchidepth')
-      spatial_secchi_depth_is_available = .true.
+      secchi_depth_is_spatially_varying = .true.
      do n = 1, ndx
         if (spatial_secchi_depth(n) == dmiss) then
            spatial_secchi_depth(n) = secchi_depth(1)
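The `finish_initialization` hunk shows the fallback pattern: after reading a spatially varying Secchi depth field, cells the input did not cover still hold the missing value `dmiss` and are filled with the uniform default `secchi_depth`. A small Python sketch of that fill step (`DMISS` is an illustrative stand-in for `dmiss` in `m_missing`):

```python
# Sketch of the dmiss fallback: entries the spatial input left at the
# missing value are replaced by the uniform default Secchi depth.

DMISS = -999.0  # illustrative stand-in for dmiss

def fill_missing_secchi(spatial_secchi, default_secchi):
    """Replace missing-value entries with the uniform default depth."""
    return [default_secchi if v == DMISS else v for v in spatial_secchi]

field = [2.5, DMISS, 3.1, DMISS]
print(fill_missing_secchi(field, 1.8))  # [2.5, 1.8, 3.1, 1.8]
```

This mirrors why the array is reallocated with `fill=dmiss` in `process_parameter_block`: any cell never touched by the input file is guaranteed to carry the sentinel, so the fallback loop can identify it.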
