
Changes for more reasonable partitioning (#813)

Status: Open. Wants to merge 1 commit into master.
Conversation

@bska (Member) commented Dec 16, 2024

  • option to not remove anything from the partitioning graph
  • added the possibility of setting the overlap
  • changed the export list: needs testing, in particular in parallel (if it is intended to work there)

@bska (Member, Author) commented Dec 16, 2024

jenkins build this please

@@ -1219,10 +1219,10 @@ namespace Dune
 /// \param overlapLayers The number of layers of cells of the overlap region (default: 1).
 /// \param partitionMethod The method used to partition the grid, one of Dune::PartitionMethod
 /// \warning May only be called once.
-bool loadBalance(int overlapLayers=1, int partitionMethod = Dune::PartitionMethod::zoltan, double imbalanceTol = 1.1)
+bool loadBalance(int overlapLayers=1, int partitionMethod = Dune::PartitionMethod::zoltan, double imbalanceTol = 1.1, bool addCorners = true)
Member:
Maybe this is an opportunity to get rid of the default arguments?

Member:

Yes. But probably that can be done separately.

@@ -391,7 +391,8 @@ void addOverlapLayer(const CpGrid& grid, int index, const CpGrid::Codim<0>::Enti
 {
     // Note: multiple adds for same process are possible
     exportList.emplace_back(nb_index, owner, AttributeSet::copy);
-    exportList.emplace_back(index, cell_part[nb_index], AttributeSet::copy);
+    // not needed; or check if
+    //exportList.emplace_back(index, cell_part[nb_index], AttributeSet::copy);
Member:

Why is this not needed? Should it not be symmetrical?

Member:

I do not understand this fully, but this will be done for all processes so this will be added when the other partition is processed anyway.

Member:

If one has already added overlap, one may add a copy for another partition on a cell which is already an owner, which causes an error. The overlap code has probably been used and tested too little. In some tests the parallel index sets seem to have a problem.

Member:

> which make an error.

What do you mean by this?
If I recall correctly: if a cell was added as owner, then all additional occurrences as copy will be neglected.

If I am wrong then please add a failing test case demonstrating this first.

The idea of the index sets used is that if we have an index, then we know where else that index is present. We might be breaking this guarantee here...
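The behavior described above ("if a cell was added as owner, additional occurrences as copy are neglected") can be illustrated with a minimal, self-contained sketch. This is not the actual dune-common index-set code; `Attribute` and `mergeExportEntries` are hypothetical stand-ins for illustration only.

```cpp
#include <map>
#include <utility>
#include <vector>

// Hypothetical stand-in for Dune's AttributeSet values.
enum class Attribute { owner, copy };

// Sketch: merge duplicate (index, attribute) export entries,
// letting 'owner' win over 'copy', as the discussion suggests
// the parallel index sets do.
std::map<int, Attribute>
mergeExportEntries(const std::vector<std::pair<int, Attribute>>& entries)
{
    std::map<int, Attribute> merged;
    for (const auto& [index, attr] : entries) {
        auto [pos, inserted] = merged.emplace(index, attr);
        if (!inserted && attr == Attribute::owner) {
            pos->second = Attribute::owner; // owner overrides an earlier copy
        }
    }
    return merged;
}
```

Under this model, duplicate `copy` adds for an index that is already `owner` would indeed be harmless, which is the crux of the disagreement above.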

@@ -286,7 +286,7 @@ void setDefaultZoltanParameters(Zoltan_Struct* zz) {
     Zoltan_Set_Param(zz, "CHECK_GRAPH", "2");
     Zoltan_Set_Param(zz, "EDGE_WEIGHT_DIM", "0");
     Zoltan_Set_Param(zz, "OBJ_WEIGHT_DIM", "0");
-    Zoltan_Set_Param(zz, "PHG_EDGE_SIZE_THRESHOLD", ".35"); /* 0-remove all, 1-remove none */
+    Zoltan_Set_Param(zz, "PHG_EDGE_SIZE_THRESHOLD", "1"); /* 0-remove all, 1-remove none */
Member:

What are the observations that favor this change? Have you observed problems with too many of the graph edges being cut? I must admit I do not know how to interpret the threshold: will 0.35 cut 35% of the edges?

Member:

The effect of setting this to 1 is that no (hyper-)edges will be omitted from the graph, even if the number of vertices on the edge is a large fraction of the total number of vertices in the graph (the cutoff used to be 35%). I believe this mechanism is intended to detect all-to-all-ish connections that would be a problem for the partitioning algorithm, since every partition would be penalized for cutting such an edge.
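As I read the Zoltan documentation, `PHG_EDGE_SIZE_THRESHOLD` means: discard any hyperedge containing more than `threshold * numVertices` vertices before partitioning. A minimal sketch of that filtering rule (an illustrative re-implementation, not Zoltan's actual code; `filterHyperedges` is a hypothetical name):

```cpp
#include <cstddef>
#include <vector>

// Sketch of the PHG_EDGE_SIZE_THRESHOLD rule: hyperedges whose size
// exceeds threshold * numVertices are dropped from the graph.
// threshold = 0 removes all edges, threshold = 1 removes none.
std::vector<std::vector<int>>
filterHyperedges(const std::vector<std::vector<int>>& edges,
                 std::size_t numVertices, double threshold)
{
    std::vector<std::vector<int>> kept;
    for (const auto& e : edges) {
        if (static_cast<double>(e.size()) <= threshold * static_cast<double>(numVertices)) {
            kept.push_back(e); // edge is small enough, keep it
        }
    }
    return kept;
}
```

On a 10-vertex graph, an edge touching 5 vertices would be dropped at the old `.35` setting (5 > 3.5) but kept at `1`, which matters for small cases where a single edge can easily span more than 35% of the vertices.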

Member:

The problem on small cases or "dead ends", i.e. 1D, is that every second cell ends up on a different partition.
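Why an alternating assignment on a 1D chain is bad can be quantified by the edge cut: the number of chain edges whose endpoints land on different ranks. A minimal illustrative sketch (not OPM code; `edgeCut` is a hypothetical helper):

```cpp
#include <cstddef>
#include <vector>

// Edge cut of a partition of a 1D chain of cells: count the
// neighbor pairs assigned to different ranks. Each cut edge
// implies communication between the two ranks involved.
int edgeCut(const std::vector<int>& cellToRank)
{
    int cut = 0;
    for (std::size_t i = 0; i + 1 < cellToRank.size(); ++i) {
        if (cellToRank[i] != cellToRank[i + 1]) {
            ++cut;
        }
    }
    return cut;
}
```

For six cells on two ranks, the alternating assignment `0,1,0,1,0,1` cuts every one of the five chain edges, while the contiguous assignment `0,0,0,1,1,1` cuts only one; with the connecting edges dropped from the graph, the partitioner has no incentive to prefer the contiguous layout.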

Member:

Are we really optimizing for those cases? Are there performance penalties for other more relevant cases due to this?

@atgeirr (Member) left a comment:

My curiosity was triggered... These changes are likely a good idea but I would like some more information.

@blattms (Member) commented Dec 17, 2024

This has a very brief description. It is a bit hard to understand those bullet points. Please make it more detailed and explain the improvements.
