Conversation

@juancamarotti
Contributor

This PR extends the InterfaceVectorContainer and the corresponding mapping utilities so that an interface vector can now be built not only from Nodes, but also from:

  • Elements
  • Conditions
  • Geometries

The container now holds an InterfaceEntityType enum, and the mapping operations automatically select the correct update routines based on the underlying entity type.

This enables future development of interface-based mappings that operate on arbitrary ModelPart entities (elements, conditions or geometries) in a unified way.
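
As a rough illustration of the enum-driven selection described above, the dispatch could look roughly like the sketch below. The names (InterfaceEntityType members, NumberOfInterfaceEntities) are assumptions for the sketch, not the PR's actual declarations.

// Hedged sketch only; the real InterfaceVectorContainer in the PR may differ.
#include <cstddef>
#include "includes/exception.h"   // KRATOS_ERROR
#include "includes/model_part.h"  // Kratos::ModelPart

namespace Kratos {

enum class InterfaceEntityType { Node, Element, Condition, Geometry };

// Select the entity container to work on based on the stored entity type.
inline std::size_t NumberOfInterfaceEntities(
    const ModelPart& rModelPart,
    const InterfaceEntityType EntityType)
{
    switch (EntityType) {
        case InterfaceEntityType::Node:      return rModelPart.NumberOfNodes();
        case InterfaceEntityType::Element:   return rModelPart.NumberOfElements();
        case InterfaceEntityType::Condition: return rModelPart.NumberOfConditions();
        case InterfaceEntityType::Geometry:  return rModelPart.NumberOfGeometries();
    }
    KRATOS_ERROR << "Unknown InterfaceEntityType" << std::endl;
}

} // namespace Kratos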

FYI @matekelemen @philbucher

This PR is related to Discussion #13927.

@sunethwarna (Member) left a comment

Thanks @juancamarotti. Minor comments only. But I have some curiosity questions:

  1. Why did you go with switch statements rather than making InterfaceVectorContainer a base class with derived versions handling the different containers? (See the sketch after this list.)
  2. Did you test this in MPI?
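
For reference, a purely illustrative sketch (not code from this PR) of the base/derived alternative asked about in question 1, where the entity-specific work would live in virtual overrides instead of a switch:

// Illustrative only: one possible shape of a base class with derived containers.
#include <vector>
#include "includes/model_part.h"  // Kratos::ModelPart

namespace Kratos {

class InterfaceVectorContainerBase
{
public:
    virtual ~InterfaceVectorContainerBase() = default;

    // Each derived container knows how to fill the vector from "its" entities.
    virtual void UpdateSystemVectorFromModelPart(
        std::vector<double>& rVector,
        const ModelPart& rModelPart) const = 0;
};

class NodeInterfaceVectorContainer : public InterfaceVectorContainerBase
{
public:
    void UpdateSystemVectorFromModelPart(
        std::vector<double>& rVector,
        const ModelPart& rModelPart) const override
    {
        // ... loop over rModelPart.Nodes() ...
    }
};

// ElementInterfaceVectorContainer, ConditionInterfaceVectorContainer and
// GeometryInterfaceVectorContainer would be analogous.

} // namespace Kratos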

if (!rModelPart.GetCommunicator().GetDataCommunicator().IsDefinedOnThisRank())
    return;

const auto& r_local_mesh = rModelPart.GetCommunicator().LocalMesh();

Member:

It is nice that you are working on a LocalMesh, but I am curious, when do you do the synchronization to update the values on the GhostNodes?

Member:

I think it happens after mapping, but please double check
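
For reference, a hedged sketch of what such a post-mapping synchronization step typically looks like in Kratos, assuming the historical nodal database is used; where exactly the mapper calls this should be verified in the code.

#include "containers/variable.h"  // Kratos::Variable
#include "includes/model_part.h"  // Kratos::ModelPart, Communicator

// Sketch only: after values have been written on the locally owned nodes,
// the ghost copies on the other ranks are brought up to date via the Communicator.
void SynchronizeAfterMapping(Kratos::ModelPart& rModelPart,
                             const Kratos::Variable<double>& rVariable)
{
    rModelPart.GetCommunicator().SynchronizeVariable(rVariable);
}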

{
constexpr bool in_parallel = false; // accessing the trilinos vectors is not threadsafe in the default configuration!
MapperUtilities::UpdateSystemVectorFromModelPart((*mpInterfaceVector)[0], mrModelPart, rVariable, rMappingOptions, in_parallel);
constexpr bool in_parallel = false; // accessing the Trilinos vectors is not threadsafe in the default configuration!

Member:

I thought this is fixed now... hmm...

Member:

If it is, can you please add a link to the docs?
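
For context, a minimal sketch (illustrative only, not the actual MapperUtilities implementation) of how an in_parallel flag like the one above can gate whether the fill loop runs sequentially or through IndexPartition:

#include <cstddef>
#include <vector>
#include "utilities/parallel_utilities.h"  // Kratos::IndexPartition

// Sketch only: write one value per entity into a pre-sized buffer. With
// InParallel == false the loop stays sequential, e.g. because the underlying
// (Trilinos) vector is not safe to access from multiple threads.
template <class TValueGetter>
void FillBuffer(std::vector<double>& rBuffer, const bool InParallel, TValueGetter&& rGetValue)
{
    if (InParallel) {
        Kratos::IndexPartition<std::size_t>(rBuffer.size()).for_each([&](std::size_t i){
            rBuffer[i] = rGetValue(i);
        });
    } else {
        for (std::size_t i = 0; i < rBuffer.size(); ++i) {
            rBuffer[i] = rGetValue(i);
        }
    }
}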

@philbucher (Member) left a comment

LGTM, minor comment


IndexPartition<std::size_t>(n_entities, num_threads).for_each([&](const std::size_t i){
    const EntityType& r_entity = *(it_begin + i);
    const double value = r_entity.GetValue(rVariable);

Member:

if running in parallel, use `SetValue`
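
For reference, a hedged sketch of the write direction of such a loop using SetValue, as suggested above. rVector here is an assumed buffer holding one value per entity; whether this matches the actual code path should be checked.

// Sketch only, reusing the names from the snippet above (n_entities, num_threads,
// it_begin, rVariable); rVector is an assumed per-entity buffer.
IndexPartition<std::size_t>(n_entities, num_threads).for_each([&](const std::size_t i){
    auto& r_entity = *(it_begin + i);
    r_entity.SetValue(rVariable, rVector[i]);  // SetValue instead of assigning through GetValue
});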

