
Clean up the linear operator document and look for errors #29

Open
jlperla opened this issue Aug 13, 2018 · 12 comments
Comments

@jlperla
Contributor

jlperla commented Aug 13, 2018

I think a great place to start on this is to clean up the https://github.com/JuliaDiffEq/PDERoadmap/blob/master/linear_operators_overview.tex document, fill in any necessary blanks, and remove any unnecessary garbage. Hopefully this is a complete but compact description of all of the linear algebra required to handle the extensions and boundary conditions.

A few notes:

  • Most of the document is written for solving stationary PDEs (i.e. ODEs), which I think is perfectly fine for testing out all of the algebra.
  • The PDF linked from the README.md is manually generated and often out of date.
  • We spent a lot of time getting the notation right in sections 1, 2, and 3.1 (and checking a few things with Chris), so let's try to be extremely careful to apply it consistently throughout. For example, use a tilde when something is a pre-discretized function, L when something is linear, A when it is affine, etc. There may still be mistakes.
  • A crucial application of this framework for us in the September/October timeframe is to solve a jump-diffusion equation. For that reason, we need to be careful with the size of the extension space (e.g. M_bar), which is not always M + 2; see the sketch after this list. This will make the general design of the interface a little trickier.
  • I think there are plenty of mistakes in sections 3.2 to 3.6, as the people writing them stopped working on the project for a few months.
  • I would start with section 3.2 and go through it very carefully. Hopefully it is correct and mostly complete, but it is worth getting right. The other examples in 3.3 to 3.6 are mostly there to test out different boundaries, and can be checked carefully over time.
  • Appendices A.1 and A.2 are something I added to make sure I understood the connection between the Gaussian elimination steps and ensuring the boundary conditions are always fulfilled. They should be checked, but hopefully they help illuminate things.
  • I think it is very important to answer ASAP whether Support affine operators (with affine boundary values)? #28 is possible. If so, then we need to change some of the material in sections 1, 2, and 3.1... it also means we can write examples like 3.2 in affine form rather than just using the linear operator and solving the system ourselves.
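As a concrete (hedged) illustration of the point about the extension size: the notation follows the document ($u \in \mathbb{R}^M$ for the interior, $\bar{u} \in \mathbb{R}^{\bar{M}}$ for the extension), but the specific jump length $k$ below is just an example I am assuming, not something fixed by the document.

```latex
\bar{M} = M + 2
  \quad \text{(diffusion only: one ghost node per side)}, \qquad
\bar{M} = M + 2\max(1, k)
  \quad \text{(adding a jump of $k$ grid points: $\max(1, k)$ ghost nodes per side)}.
```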
@jlperla
Copy link
Contributor Author

jlperla commented Aug 14, 2018

Based on the discussions with @ChrisRackauckas and @dlfivefifty, it sounds like we should try to change the math so that we can apply boundary conditions to the sub-components of the composed operator separately, so that everything ends up square before we compose them.

Let's see how that works and implement it in the document. See the Slack discussion for more details, and it is best to copy material in here sooner rather than later if it is useful. I think it will take you a few days to update the document and get clear on this material up to section 3.1. By the end of the week, I hope to write down a complete set of test cases for differential equations and boundary conditions, though you have a lot of them in this document already. The truly affine ones are missing, though.
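To make the two options in this thread concrete, here is a hedged sketch in the document's notation ($L_i$ the discretized operators, $Q_i$ the extension operators, $B$ the boundary conditions, $M$ and $\bar{M}$ the interior and extended sizes); the exact dimensions are my reading of the discussion, and $b$, $b_B$ are names I am assuming for the discretized right-hand side and boundary values.

```latex
% Square-first: apply an extension to each sub-operator, then compose square blocks
A u = (L_1 Q_1 + L_2 Q_2)\, u,
  \qquad L_i \in \mathbb{R}^{M \times \bar{M}}, \; Q_i \in \mathbb{R}^{\bar{M} \times M}

% Rectangular-first: compose the rectangular operators, then append the
% boundary conditions at the end to make the system square
\begin{bmatrix} L_1 + L_2 \\ B \end{bmatrix} \bar{u}
  = \begin{bmatrix} b \\ b_B \end{bmatrix},
  \qquad B \in \mathbb{R}^{(\bar{M} - M) \times \bar{M}}
```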

@dlfivefifty

That sounds like the opposite of the discussion.... I’m pretty sure both Chris and I agree that the composition of operators should be rectangular and the boundary conditions added at the end to make things square.

@jlperla
Contributor Author

jlperla commented Aug 14, 2018

Are you talking about the discussion in that issue, or also the Julia Slack discussion with @ChrisRackauckas from last night? I think this might summarize it, though I could be wrong:

  • He agreed we couldn't come up with the Q matrix without using the entire composed operator and the boundary conditions, and that the size of Q is determined by the maximum stencil.
  • But given the maximum stencil size, we can pad every individual operator with zeros to bring it up to the maximum rectangular size.
  • Then Chris said that distributing the application of Q across each of the components of the operator, and then composing them as squares, helps.
  • There is also a hope that for a large class of boundary conditions we don't even need to know the full Q to apply that distributed L*Q operator, but I think we need to see some examples to understand how that works.

I could be wrong about any or all of this, but that is my understanding. @ChrisRackauckas will help guide @MSeeker1340 to formalize the math.

My job is to generate the list of ODEs and PDEs I need solved. If I can solve all of them with the framework, with a pretty enough DSL, I am happy with any design.
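A minimal dense-matrix sketch of the padding and distribution described in the bullets above, assuming homogeneous Dirichlet boundaries and toy operators; this is not the DiffEqOperators API, and the names are hypothetical.

```julia
using LinearAlgebra

M = 5
h = 1 / (M + 1)

# Second-difference stencil: rectangular, maps the extended ū ∈ R^(M+2) to R^M.
L1 = zeros(M, M + 2)
for i in 1:M
    L1[i, i:i+2] = [1.0, -2.0, 1.0] ./ h^2
end

# A second operator (a toy zeroth-order term standing in for a jump), padded
# with zero columns so that it acts on the same extended space as L1.
L2 = [zeros(M, 1)  Matrix{Float64}(I, M, M)  zeros(M, 1)]

# Extension operator for homogeneous Dirichlet boundaries: ū = Q*u with zero ghost nodes.
Q = [zeros(1, M); Matrix{Float64}(I, M, M); zeros(1, M)]

# Distributing Q across the components and composing squares agrees with
# applying the single Q to the composed rectangular operator.
@assert (L1 + L2) * Q ≈ L1 * Q + L2 * Q
@assert size(L1 * Q) == (M, M)      # each L_i * Q is square
```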

@ChrisRackauckas
Member

That's pretty much it. If you look at the jump operators, derivative operators, and upwinding operators, then at least for everything except non-zero Neumann (I don't know that case yet) you can see what to do: build Q at the larger size, but since the derivative operators are local they don't use discretizations beyond the boundary (those entries of the operator discretization are all zeros), so you can compose L*Q and see that it's the same as in the case without the jump operator. For this set of operators it's therefore fine to just make square matrices which embed the operator like this.

But that's not necessarily true for the extension with, say, Chebyshev discretizations; I don't know how you'd do jumps with those.

It has to do with the zeros and the distribution property, because it's affine.
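A hedged dense-matrix illustration of that locality point, in the same toy setup as the sketch earlier in the thread (not the package API): an operator whose discretization has all-zero columns over part of the extension cannot see the corresponding rows of Q, so those rows never need to be formed for that component.

```julia
using LinearAlgebra

M = 5
h = 1 / (M + 1)

# Second-difference stencil padded to a larger extension of size M + 4, as if
# composed with a jump operator needing two ghost nodes per side.  Only
# columns 2:M+3 are ever touched; columns 1 and M+4 are identically zero.
L = zeros(M, M + 4)
for i in 1:M
    L[i, i+1:i+3] = [1.0, -2.0, 1.0] ./ h^2
end

# Two extensions R^M -> R^(M+4) that agree except in the outermost ghost rows.
Qa = [zeros(2, M); Matrix{Float64}(I, M, M); zeros(2, M)]
Qb = copy(Qa); Qb[1, :] = rand(M); Qb[end, :] = rand(M)

# The padded derivative operator cannot distinguish the two extensions.
@assert L * Qa ≈ L * Qb
```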

@MSeeker1340
Contributor

MSeeker1340 commented Aug 14, 2018

Seems we have a consensus on the issue then. Personally I don't really see the appeal of enforcing every operator to be square (composing rectangular operators isn't much of an issue), but I agree that having each component operator maintain its own $Q$ is a more versatile approach.

Let me try summarizing things in the simplest possible way:

  1. A simple derivative operator is (conceptually) represented as $LQ$

  2. The $Q$ operator maps the interior points $u$ to the extension $\bar{u}$ (can be affine)

  3. The stencil operator $L$ maps $\bar{u}$ back to $u$

  4. A complex derivative operator is constructed by composing simple derivative operators (whose $Q$s are not necessarily the same).

Implementation-wise 4) is already achievable using the basic operator composition functionality that I added to DiffEqOperators a few weeks ago (at least addition/subtraction/multiplication/scalar multiplication is in place; advanced compositions like tensor product are not available yet), so the focus is on 2) and 3).

Let's make sure we are all on the same page about this and, as Jesse suggests, first update the document (in particular the overview in section 2).
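As a minimal sketch of point 2, this is what an affine extension could look like for a nonzero Dirichlet boundary, with toy dense matrices (the names Qlin, Qaff, and extend are hypothetical, not the package API):

```julia
using LinearAlgebra

# Affine extension ū = Q*u + b for a nonzero Dirichlet boundary u(0) = α,
# u(1) = β, with one ghost node per side (toy example).
M, α, β = 5, 1.0, 2.0
Qlin = [zeros(1, M); Matrix{Float64}(I, M, M); zeros(1, M)]   # linear part of the extension
Qaff = [α; zeros(M); β]                                       # affine part carrying the boundary values
extend(u) = Qlin * u .+ Qaff                                  # ū = Q*u + b

u = rand(M)
ū = extend(u)
@assert ū[1] == α && ū[end] == β    # the ghost nodes hold the boundary values
@assert ū[2:end-1] ≈ u              # the interior is unchanged
```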

@dlfivefifty

Let's make sure we are all on the same page about this

I don't understand it yet but if it works, then sounds good.

@jlperla
Contributor Author

jlperla commented Aug 14, 2018

Seems we have a consensus on the issue then. Personally I don't really see the appeal of enforcing every operator to be square (composing rectangular operators isn't much of an issue), but I agree that having each component operator maintain its own $Q$ is a more versatile approach.

I think @ChrisRackauckas hopes that the subsets of Q for each component of the operator will be more manageable, and that we won't actually need to pad with zeros and apply the full Q in the implementation. Something like figuring out which of the ghost nodes of a particular operator appear in the B boundary conditions... Other rows of B would then not enter the parts of Q required for the multiplication and could be ignored. His hope is that this makes composing the lazy operators easier.

I think that once we carefully go through the math for a few examples, we will see whether that works. But we always have padding with zeros up to the maximal extended domain, and applying the whole Q, as a backup and a numerical check.

@MSeeker1340
Contributor

I think @ChrisRackauckas hopes that the subsets of Q for each component of the operator will be more manageable, and that we won't actually need to pad with zeros and apply the full Q in the implementation.

Fully agree. After all, zero-padding is an expensive and allocating operation for large systems, and especially problematic for >1D arrays.

On the other hand, the L & Q separation provides a good conceptual picture, so I believe the document should reflect this. We can work out the implementation details later. I'm in the midst of updating the first two sections btw.

@dlfivefifty

After all, zero-padding is an expensive and allocating operation for large systems, and especially problematic for >1D arrays.

Just use Vcat for lazy padding from https://github.com/JuliaArrays/LazyArrays.jl (A PR to register v0.0.1 is in the queue so any second now...)
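A minimal sketch of the suggestion, assuming Vcat from LazyArrays.jl and Zeros from FillArrays.jl, with a toy padding pattern (one zero ghost value per side):

```julia
using LazyArrays, FillArrays

u = rand(5)                       # interior values
ū = Vcat(Zeros(1), u, Zeros(1))   # lazily padded extension; u is not copied
length(ū)                         # 7
ū[1], ū[end]                      # (0.0, 0.0) — the ghost values
```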

@MSeeker1340
Contributor

Just use Vcat for lazy padding from https://github.com/JuliaArrays/LazyArrays.jl (A PR to register v0.0.1 is in the queue so any second now...)

Looks neat!

@jlperla
Contributor Author

jlperla commented Aug 14, 2018

On the other hand, the L & Q separation provides a good conceptual picture so I believe the document should reflect this.

Exactly. If we get the math complete it helps us generate tests, and potentially lets us see simplifications when Q is distributed.

I also think doing the crude "padding of everything to the maximum stencil" and then applying the whole Q is a good baseline to start with. The high-level code interface would be identical.

@dlfivefifty

LazyArrays.jl is now merged. Hopefully you'll find it helpful; PRs are definitely welcome ☺️
