Suggestion for Redesign 1 #158

Open
srosenbu opened this issue Sep 27, 2024 · 12 comments
Labels
enhancement New feature or request

Comments

@srosenbu
Member

After our discussion, I thought again about what I like about this repo:

  1. I can treat a Simulation as a black-box that just gets parameters and returns sensor-data and plots
  2. Input can probably be serialized into JSON files because all inputs are dicts of ureg.Quantity
  3. Units are checked
  4. I can reuse parts of the code through a separation of the geometry description and the actual FEM description
  5. Adding any sensor you want to the simulation is nice
  6. Reduction of boilerplate FEniCS code, like for the definition of BCs.

What I don't like:

  1. The classes, as they are designed, don't allow me to describe a dynamic simulation and probably a lot of other problems (not flexible enough)
  2. It is unclear what data and methods a class has. For example, experiments sometimes have a boundary condition for temperature, but I don't know that in advance. The "simple cube" currently contains all the bcs to work with all finite element problems in order to be an actual reusable piece of code.
  3. The classes are normal Python classes and class properties can just be added dynamically in the `__init__`. This is, in my opinion, too flexible because it encourages you to "hack your way around the limitations of the design", but then your hack becomes unusable to others. There is of course no perfect design, but I would be more in favor of updating the interfaces from time to time to allow new features, which all new problems must then adhere to.
  4. Plots are defined by the finite-element problem and cannot be changed by the user like sensors
  5. The separation of experiment and FEM problem is not very strict. The description of BCs currently requires knowledge about the function spaces, which should only be part of the FEM description.

What I would want (in addition to what I like):

  1. Separation of the experiment and the geometry to improve reusability of code
  2. Describe BCs without needing to know the FEM problem
  3. Use plots as flexibly as sensors
  4. A subset of all problems should be serializable as JSON files.
  5. Creation of problems from JSON files (Not possible for everything, but for a certain subset of problems, this would be helpful for the https://github.com/BAMresearch/NFDI4IngModelValidationPlatform)

I have some code suggestions on the branch https://github.com/BAMresearch/FenicsXConcrete/tree/sjard_suggestion.
It's all WIP, but I am currently leaning towards making the description of the experiment fully independent of FEniCS, because the FEniCS objects might be different depending on the problem being solved. All classes use pydantic.dataclasses in order to enforce the class variables that are being used and to enable serializability by default.
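As a rough illustration of what such a pydantic-based description could look like (pydantic v2 assumed; the class and field names below are made up for illustration, not code from the branch):

```python
# Hypothetical sketch: a pydantic dataclass with a fixed set of fields,
# serializable to JSON and back without any FEniCS objects involved.
from pydantic import TypeAdapter
from pydantic.dataclasses import dataclass


@dataclass
class UniaxialTensionExperiment:
    length: float            # specimen edge length in metres
    top_displacement: float  # prescribed displacement at the top surface
    mesh_file: str           # path to a .msh file describing the geometry


experiment = UniaxialTensionExperiment(length=0.1, top_displacement=1e-4,
                                       mesh_file="cube.msh")
adapter = TypeAdapter(UniaxialTensionExperiment)
json_data = adapter.dump_json(experiment)      # bytes containing the JSON document
restored = adapter.validate_json(json_data)    # reconstruct the experiment from JSON
```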

To be continued...

@srosenbu srosenbu added the enhancement New feature or request label Sep 27, 2024
@joergfunger
Member

Just wanted to comment on the previous suggestions and comments, primarily to better understand the cons.

What I don't like:

1. The classes, as they are designed, don't allow me to describe a dynamic simulation and probably a lot of other problems (not flexible enough)

What would be features that are general and would have to be added to the general class?

2. It is unclear what data and methods a class has. For example, experiments sometimes have a boundary condition for temperature, but I don't know that in advance. The _"simple cube"_ currently contains all the bcs to work with all finite element problems in order to be an actual reusable piece of code.

Not sure I understand the problem. A mechanics problem does not have temperature boundary conditions, so would you then return NULL when getting those?

3. The classes are normal Python classes and class properties can just be added dynamically in the `__init__`. This is, in my opinion, too flexible because it encourages you to _"hack your way around the limitations of the design"_, but then your hack becomes unusable to others. There is of course no perfect design, but I would be more in favor of updating the interfaces from time to time to allow new features, which all new problems must then adhere to.

IMO, this is just a base class and everyone can derive from that. In the redesign, we might actually think about modularizing the tools (tool agnostic problem definition with experimental data, simulation sensor module with visualization capabilities), such that you do not need to hack the classes.

4. Plots are defined by the finite-element problem and cannot be changed by the user like sensors

That could potentially be solved by the new approach.

5. The separation of experiment and FEM problem is not very strict. The description of BCs currently requires knowledge about the function spaces, which should only be part of the FEM description.

I agree.

What I would want (in addition to what I like):

1. Separation of the experiment and the geometry to improve reusability of code

The question is what we mean by geometry; I guess this is the mesh (e.g. a msh input) and the boundary conditions. For the sensor data (and potentially the definition of sensors), we might place that in the same module as the simulation sensors, such that we create them only once and also activate some experimental sensors as relevant for the FE postprocessing.

2. Describe BCs without needing to know the FEM problem

I agree, the straightforward way is just function definitions. If they are defined as strings, they could be converted to Python functions and then executed, while still being serializable as JSON.
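A minimal sketch of that string-to-function idea (the helper name and dictionary layout are made up; a real implementation would want something safer than `eval`):

```python
import json


def bc_value_from_string(expr: str):
    """Turn a JSON-serializable string such as 'lambda x, t: 0.01 * t' into a callable."""
    # eval with empty builtins limits, but does not eliminate, the usual eval risks
    return eval(expr, {"__builtins__": {}}, {})


bc_spec = {"boundary": "top", "value": "lambda x, t: 0.01 * t"}
restored = json.loads(json.dumps(bc_spec))          # round-trip through JSON
bc_value = bc_value_from_string(restored["value"])
print(bc_value(None, 2.0))                          # 0.02
```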

3. Use plots as flexibly as sensors

Here we would have to agree on an interface, or even see if there are advantages compared to just providing examples and then copy-pasting those. I have the impression the latter might be more user-friendly (and complies better with the flexibility that most people require when doing post-processing).

4. A subset of all problems should be serializable as JSON files.

See above, otherwise I agree.

5. Creation of problems from JSON files (Not possible for everything, but for a certain subset of problems, this would be helpful for the https://github.com/BAMresearch/NFDI4IngModelValidationPlatform)

Would be nice, but I guess this is rather an additional add-on that most people would not use on a regular basis (maybe at the end when writing their paper).


@srosenbu
Member Author

Thank you for your feedback.

  1. The classes, as they are designed, don't allow me to describe a dynamic simulation and probably a lot of other problems (not flexible enough)

What would be features that are general and would have to be added to the general class?

For my case, that would be time-dependent boundary conditions (Neumann and Dirichlet) and initial conditions. I was vague here because I don't know the limitations of the other potential users.

  1. It is unclear what data and methods a class has. For example, experiments sometimes have a boundary condition for temperature, but I don't know that in advance. The "simple cube" currently contains all the bcs to work with all finite element problems in order to be an actual reusable piece of code.

Not sure I understand the problem. A mechanics problem does not have temperature boundary conditions, so would you then return NULL when getting those?

An experiment for a temperature problem is expected to have a method for temperature bcs. But there is no interface that enforces this and therefore I currently need to look into the specific FE problem to see which methods are needed. The BaseExperiment only requires bcs for the displacement. I think we decided against an additional interface for temperature problems. With my other suggestions, we wouldn't need to define a method for the bcs, but could treat them as data input.

IMO, this is just a base class and everyone can derive from that. In the redesign, we might actually think about modularizing the tools (tool agnostic problem definition with experimental data, simulation sensor module with visualization capabilities), such that you do not need to hack the classes.

Base classes can enforce methods. I would also be in favor of enforcing attributes of the class with pydantic, at least for the experiment, the bcs, and the sensors, just because it forces you to think in advance about what you need.
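For illustration, such an enforced interface could look roughly like this (pydantic v2 assumed; all names are placeholders, not a concrete proposal):

```python
from pydantic import BaseModel, ConfigDict


class SensorSpec(BaseModel):
    name: str
    position: list[float]          # sensor location in the reference configuration


class DirichletBCSpec(BaseModel):
    boundary: str                  # name of a tagged boundary, e.g. "top"
    value: float


class ExperimentSpec(BaseModel):
    model_config = ConfigDict(extra="forbid")  # adding undeclared attributes fails loudly
    bcs: list[DirichletBCSpec]
    sensors: list[SensorSpec]
```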

The question is what we mean by geometry; I guess this is the mesh (e.g. a msh input) and the boundary conditions. For the sensor data (and potentially the definition of sensors), we might place that in the same module as the simulation sensors, such that we create them only once and also activate some experimental sensors as relevant for the FE postprocessing.

I think I mean mesh when I say geometry. I mainly want this because currently the definition of the mesh is always linked to a very specific set of bcs. If I want two experiments that have the same mesh but different bcs, I would need to write two classes for that (or have the bcs be parameters that are passed in the `__init__`). I currently have a MeshGenerator in my proposal that has meshtags for boundary conditions and can generate a mesh, but I would leave the definition of the bcs to the Experiment.
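A minimal sketch of that separation in dolfinx (recent API assumed; the function below is illustrative, not the MeshGenerator from the branch): the mesh and its boundary tags are created once, and any experiment can attach its own bcs to the tagged boundaries.

```python
from mpi4py import MPI
import numpy as np
from dolfinx import mesh


def unit_square_with_boundary_tags(n: int = 10):
    """Return a unit-square mesh plus facet tags: 1 = left boundary, 2 = right boundary."""
    domain = mesh.create_unit_square(MPI.COMM_WORLD, n, n)
    fdim = domain.topology.dim - 1
    left = mesh.locate_entities_boundary(domain, fdim, lambda x: np.isclose(x[0], 0.0))
    right = mesh.locate_entities_boundary(domain, fdim, lambda x: np.isclose(x[0], 1.0))
    facets = np.hstack([left, right])
    values = np.hstack([np.full_like(left, 1), np.full_like(right, 2)])
    order = np.argsort(facets)            # meshtags expects sorted entity indices
    tags = mesh.meshtags(domain, fdim, facets[order], values[order])
    return domain, tags
```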

I think we agree on the rest ;)

@joergfunger
Member

Could we do a merge request (without actually merging)? That would allow us to discuss single lines of code rather than abstract text in an issue.

@srosenbu
Member Author

Sure, I'll make a draft.

@joergfunger
Member

In order to advance the discussion, it would be great to get feedback from everyone. Please (@aradermacher, @div-tyg, @danielandresarcones, @ajafarihub, @saifr68) have a look at the suggestions from @srosenbu in order to start a discussion. In particular, we want to build something that is useful and used within all related projects. If this is just interesting for a single person (and that is to be found out in the discussion), there is no need to develop an overarching solution. And if that solution only comprises minimal additional functionality, this would also be fine.

@ajafarihub

I just wanted to point out one observation/challenge and propose my suggestion.

IMO, a tricky part of using/developing the interface is being able to flexibly handle the definition of BCs (boundary conditions) purely in terms of the location/sub-domain where those BCs should be applied. In particular, when we deal with a real mesh that has been constructed, for example, from real CT data or any other meshing tool (and this is, at least for me, a relevant situation to be in), it will be complicated to define/use flexible routines/methods in the interface to identify specific surfaces or collections of nodes where a desired BC should be applied.

My suggestion is to design the interface by relying on: (i) mesh files that essentially store mesh objects, (ii) other files that represent certain desired sub-domains of the mesh in terms of collections of nodes.
For example, for a three-point bending experiment we may have a mesh_3point_2d.msh file storing the mesh and, for instance, a middle_top.yaml file that stores the mesh coordinates (nodes) where the bending load is applied at the top-middle of the beam. This way, the interface will only rely on such files, which define the various sub-domains of interest, and the user should provide these files, pretty much like you would define a problem in ABAQUS, where you have to specify the regions for BCs.
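A sketch of how such files could be consumed (the file names are the placeholders from the example above; meshio and PyYAML are just one possible choice of tools):

```python
import meshio
import numpy as np
import yaml

msh = meshio.read("mesh_3point_2d.msh")              # nodal coordinates and cells
with open("middle_top.yaml") as f:
    region = yaml.safe_load(f)                       # e.g. {"points": [[0.5, 0.1], ...]}

bc_points = np.asarray(region["points"])
# Map each listed coordinate to the index of the closest mesh node.
node_ids = [int(np.argmin(np.linalg.norm(msh.points[:, :2] - p, axis=1)))
            for p in bc_points]
```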

IMO, the above scenario is the most flexible way, which is - I think - pretty much unavoidable if we deal with arbitrary real meshes. Besides, and to make the interface more user-friendly, we can also provide specific methods that return such lists of coordinates for certain sub-domains in certain geometries, especially if the mesh is nominal with known geometric features/dimensions.

@joergfunger
Member

I believe this is exactly what the proposed interface provides: the input of groups/sets of nodes or elements (e.g. to apply Dirichlet boundary conditions on nodes) can be done either using a functional representation (similar to FEniCS) or using e.g. the physical components from the mesh file. So what exactly would you change? The only open issue IMO is the correct definition for applying Neumann bc, since that requires not only elements and nodes, but surfaces.
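For reference, both routes exist in dolfinx today (a recent release assumed); a small self-contained sketch, with the boundary located on the fly where a .msh file's physical groups would normally supply the facet indices:

```python
from mpi4py import MPI
import numpy as np
import dolfinx
from dolfinx import fem, mesh

domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
V = fem.functionspace(domain, ("Lagrange", 1))
fdim = domain.topology.dim - 1
domain.topology.create_connectivity(fdim, domain.topology.dim)

# Route 1: functional representation, similar to legacy FEniCS.
dofs_geom = fem.locate_dofs_geometrical(V, lambda x: np.isclose(x[0], 0.0))

# Route 2: via boundary entities (with a .msh file these indices would come
# from its physical groups instead of locate_entities_boundary).
facets = mesh.locate_entities_boundary(domain, fdim, lambda x: np.isclose(x[0], 0.0))
dofs_topo = fem.locate_dofs_topological(V, fdim, facets)

bc = fem.dirichletbc(dolfinx.default_scalar_type(0.0), dofs_geom, V)
```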

@ajafarihub

ajafarihub commented Oct 17, 2024

Thanks for your reply, @joergfunger .

"I believe this is exactly what the proposed interface provides, the input of groups/sets of nodes or elements (to e.g. apply Dirichlet boundary conditions on nodes) can be done either using a functional representation (similar to fenics) or using e.g. the physical components from the mesh file."

Well, from your conversation above, I got the impression that a flexible way of handling BCs was still quite unclear, although it was clear that the BCs should be specified regardless of the FEM problem. One question is how you would expect those "physical components" to be represented. Are they specific features/components that are stored within the same mesh file? Or are they represented in separate files? My proposal was to go with the second approach and rely on files that store certain collections of nodes for every sub-domain (potentially sub-boundary) where Dirichlet BCs should be applied. The main reason I like this approach is in dealing with real meshes obtained from CT data. These meshes usually do not have regular/smooth surfaces, edges, and facets. In the end, they consist of a set of nodal coordinates and a set of cell connectivities. For such meshes, I believe the most practical way of specifying the domain of BCs is to define/extract sets of coordinates over which any BC should apply. Of course, the specification of such a node collection is challenging in itself; nevertheless, I do not see any better way.

"So what exactly would you change?"

I am not sure what kind of changes the interface would need for the approach that I suggested (since I have not been involved in developing the interface). Or maybe I did not get your question.

"The only open issue IMO is the correct definition for applying Neumann bc, since that requires not only elements and nodes, but surfaces."

I am not aware of how Neumann BCs are treated in dolfinX. From my experience in dolfin, however, I think we can still stay with collections of nodes (as I proposed above), because (at least in dolfin) we can define a sub-domain based on any arbitrary collection of nodes and then use that sub-domain for specifying the integral measure (ds) over which a Neumann BC is applied. But this is just a rough idea and may work differently in dolfinX than in dolfin.
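In dolfinx the ds route works much the same way; a rough sketch (recent API assumed, values arbitrary):

```python
from mpi4py import MPI
import numpy as np
import ufl
import dolfinx
from dolfinx import fem, mesh

domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
V = fem.functionspace(domain, ("Lagrange", 1))
v = ufl.TestFunction(V)

# Tag the right boundary facets with the marker 2.
fdim = domain.topology.dim - 1
right = mesh.locate_entities_boundary(domain, fdim, lambda x: np.isclose(x[0], 1.0))
tags = mesh.meshtags(domain, fdim, np.sort(right), np.full(len(right), 2, dtype=np.int32))

# Restrict the surface measure to the tagged facets.
ds = ufl.Measure("ds", domain=domain, subdomain_data=tags)
traction = fem.Constant(domain, dolfinx.default_scalar_type(1.0))
neumann_term = traction * v * ds(2)   # surface integral only over tag 2
```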

@ajafarihub

ajafarihub commented Oct 21, 2024

A follow-up:
I realized the following big challenge in relying on "node collections" for representing an arbitrary BC domain:
If the function space over which we wish to define a BC is of higher degree than 1, there are some nodes associated with that space that do not belong to the mesh itself (one could call them the intermediate nodes). Therefore, those intermediate nodes cannot belong to any "node collection" extracted from the mesh itself. So, the challenge I see is how to obtain the intermediate nodes (associated with a function space of degree higher than 1) from a "node collection" that only contains a subset of the mesh nodes (note that the mesh is defined beforehand with no regard to any function space). And I do not think this extraction is an easy task.

@pdiercks
Collaborator

A follow-up: I realized the following big challenge in relying on "node collections" for representing an arbitrary BC domain: If the function space over which we wish to define a BC is of higher degree than 1, there are some nodes associated with that space that do not belong to the mesh itself (one could call them the intermediate nodes). Therefore, those intermediate nodes cannot belong to any "node collection" extracted from the mesh itself. So, the challenge I see is how to obtain the intermediate nodes (associated with a function space of degree higher than 1) from a "node collection" that only contains a subset of the mesh nodes (note that the mesh is defined beforehand with no regard to any function space). And I do not think this extraction is an easy task.

Hi Abbas,
this is specific to legacy dolfin, because the geometry interpolation is always of degree 1. In dolfinx you can use an isoparametric formulation, such that the problem of "intermediate nodes" does not occur.

@ajafarihub

ajafarihub commented Oct 22, 2024

Hi @pdiercks , thank you for your reply.

I agree in general. But I think this approach (using higher-order geometry in dolfinX) is optional and not the only way to follow. Often, a linear geometry (mesh) is enough, although we still wish to have higher-degree shape functions for representing specific fields. In such cases, we may need to define BC domains just as usual and directly from geometric features of the domain (like x[0] = 0 or so). So, my suggestion of relying on "node collections" might be flexible for handling more complex geometries, but could be sub-optimal for simple linear geometries.

Furthermore, even if we are going to use a higher-order geometry representation (for meshing), I am wondering about another challenge. It now seems that we should define/generate the mesh already in view of the interpolation degree of a function space that we would naturally determine at a later stage. However, I would actually prefer to have the mesh generation in a separate module and prior stage that is independent of any problem definition. So, I am not sure how to design the interface in this respect.

@pdiercks
Collaborator

pdiercks commented Oct 22, 2024

hi @ajafarihub ,

agree in general. But I think this approach (using higher-order geometry in dolfinX) is optional and not the only way to follow.

I think with your node collection approach you already assume an isoparametric formulation, and as you noticed this does not work when the formulation is not isoparametric (or at least it would require additional information on whether the DOFs associated with the mid-nodes should also be constrained).

So I think if you want to describe the boundary as a set of discrete entities, you should not limit that to topological dimension 0, but also consider edges, facets, and cells. How this information is used to define boundary conditions would depend on the PDE solver, and some kind of conversion step might be necessary.
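A small dolfinx sketch of that point (recent API assumed): describing the boundary through facets also captures the degree-2 DOFs sitting between mesh vertices, which a pure vertex collection misses.

```python
from mpi4py import MPI
import numpy as np
from dolfinx import fem, mesh

domain = mesh.create_unit_square(MPI.COMM_WORLD, 4, 4)
V = fem.functionspace(domain, ("Lagrange", 2))   # has DOFs between mesh vertices
tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)
domain.topology.create_connectivity(0, tdim)

# Describe the left boundary once as facets and once as vertices only.
facets = mesh.locate_entities_boundary(domain, fdim, lambda x: np.isclose(x[0], 0.0))
vertices = mesh.locate_entities_boundary(domain, 0, lambda x: np.isclose(x[0], 0.0))

dofs_via_facets = fem.locate_dofs_topological(V, fdim, facets)
dofs_via_vertices = fem.locate_dofs_topological(V, 0, vertices)
print(len(dofs_via_facets), len(dofs_via_vertices))  # the facet route finds more DOFs
```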
