Clarify parts of Lenses and Prisms chapter #329

Open
simonsan opened this issue Dec 11, 2022 · 11 comments
Labels
A-functional Area: Content about Functional Programming C-amendment Category: Amendments to existing content

Comments

@simonsan
Collaborator

tmarsh1024 1 hr. ago

I love seeing more discussion and presentations of optics such as lenses and prisms. I do find their treatment in this document, however, more than a little confusing.

A lot of the discussion of lenses is focused on achieving structural typing (accessing a field by name if it exists on the struct without knowing the nominal type in advance). I've never witnessed optics in the context of structural or duck typing before, and it certainly isn't the usually touted benefit. The lenses section goes on to mention that the power in Rust really comes from composition but stops there and doesn't show any examples.

The section on prisms is mistaken in its characterization that prisms represent families of optics. This is no more true than it is true for the lens - both prisms and lenses can be generated from the simpler iso optic. In some representations of optics prisms can be seen as dual to lenses, another way lenses and prisms are peer level.

I didn't understand the visitor example and the discussion of type erasure, which all seemed to me very far removed from anything to do with prisms.

Maybe I'm missing something here, and I admit to not reading too deeply when I got confused, but I would definitely recommend the audience and the explanation be reevaluated. Like I said, I would love for these topics to be more broadly presented!

CC: @jhwgh1968

I asked the person to leave some feedback here as well, so we can improve the article. But we can start discussing the feedback above in the meantime.

@jhwgh1968
Contributor

In general, I would say that I wrote the initial version of the text some time ago, when I had a solid grasp of the material, but that grasp was rather fleeting. The PR was opened much later. My re-write late in its development was not done with the same focus, so it is entirely possible I missed something.

Some more specific initial reactions to the comments above:

A lot of the discussion of lenses is focused on achieving structural typing (accessing a field by name if it exists on the struct without knowing the nominal type in advance). I've never witnessed optics in the context of structural or duck typing before, and it certainly isn't the usually touted benefit.

The "duck typing" is indeed a very reductive example, but I believe it is the most concrete one I could get in my head after the reading I did. It was also the code I could write with the lens-rs crate that made the most sense to me.

All that was my attempt to capture a basic idea from the Scala article:

So far so good, but besides giving us a better syntax and a more functional way to get and set a value in a case class, Lenses don’t seem to provide much.

Where is then the advantage of using a Lens in place of a nested copy function, if every time we have to create a new Lens? Here is the thing: we don’t have to.

It then explains profunctors and composition.

While extremely reductive, I think that my "get customer ID" example technically qualifies as a profunctor? As for composition, that gets to the next point:

The lenses section goes on to mention that the power in rust really comes from composition but stops there and doesn't show any examples.

I did previously have a composition example, but ended up dropping it. It was very complicated, and didn't use lens-rs. Instead I had a bunch of handwaving about higher-kinded types and "if Rust could do this and that, it would look like this" in boilerplate comments because the code wouldn't compile.

I decided that it distracted from what I thought was the real thing to impart: prisms, and how they can help users understand APIs. Perhaps re-writing my previous composition example with lens-rs would be instructive.

The section on prisms is mistaken in its characterization that prisms represent families of optics. This is no more true than it is true for the lens

I wrote it this way because of this comment in the Scala article:

What if we want to manipulate data inside a trait, in general referred to as a sum type or coproduct? Prisms come in handy. They’re like Lenses but for sum types.

Originally, I said something much closer to literally that, but I got PR feedback that "sum types" was confusing, and removed that terminology. I am open to suggestions on improving this. Still, the example of the Serde API, I think, corresponds to a similar use in concept.

The article also comments:

We’ve already seen that one possible use case would be deeply nested objects. Another interesting use case is when you have to work with different representations of essentially the same data.

Perhaps I'm being too literal-minded, but that is pretty much exactly what Serde is about: multiple data formats for one Rust struct.


Reading it again, I can understand why someone much more expert in functional languages would call this a stretch.

But as I tried to say in the section itself, I don't think the idea itself needs to apply to Rust directly very often. Rust's generics and memory model work differently. It's the idea of structuring APIs in a "lens-like" way, with these multiple levels of generics, that I tried to write about.

Perhaps there is a better approach for understanding Serde. But this was what made it finally click for me.

@thomasmarsh

thomasmarsh commented Dec 12, 2022

Thanks for all the clarifications and explanation of your motivations. I'll try to address some of your points.

General thoughts

Lenses are primarily used as functional tools for getting and mutating nested properties of immutable objects. This is what lens-rs focuses on. As a point of comparison, the Swift language provides lenses natively and calls them KeyPaths. If you look at them after using lens-rs, you should see they are broadly similar.

Structural/Duck typing

It seems the Scala article you are referencing is Functional references: Lens and other Optics in Scala by Dorian Davì. I took a look, and that section is simply explaining that if you have a Lens[A,B] and a Lens[B,C], you don't need to define a Lens[A,C], since you can get one for free by composing the two lenses. He is not, however, indicating that you get a lens for free for every type that contains some member of a given name.

I did find this article, Structural Typing in Rust, which does provide structural-typing accessors for attributes and leverages lenses to provide composition, much like your example. However, I think it is worth clarifying: this is using macros and other Rust tricks to provide structural typing in combination with lenses. Lenses, in general, are not tools that enable structural typing on their own - in fact, they are usually strongly defined in terms of nominal types (i.e., the opposite of structural typing).

So, in the case of structural typing, I think that forms its own "rust pattern" and builds on knowledge of lenses. Placing it in an explanation of lenses conflates the two topics.

Composition of Lenses

Composition is the magic feature of lenses. They turn messy and error-prone boilerplate into simple composition operations. The lens-rs crate provides an example of such composition in its README. Given structs:

#[derive(Lenses)]
struct Address {
    street: String,
    city: String,
    postcode: String
}

#[derive(Lenses)]
struct Person {
    name: String,
    age: u8,
    address: Address
}

and some instance:

let p0 = Person {
    name: "Pop Zeus".to_string(),
    age: 58,
    address: Address {
        street: "123 Needmore Rd".to_string(),
        city: "Dayton".to_string(),
        postcode: "99999".to_string()
    }
};

then you can traverse the composition of lenses using the lens! macro, for example to set the deeply nested street property:

let p1 = lens!(Person.address.street).set(p0, "666 Titus Ave".to_string());

In this case, it is the macro's parsing of the Person.address.street tokens that is translated into the get/set calls of the lens to produce the new immutable value based on the original. I.e., it's leveraging the composition of the Lens[Person,Address] and the Lens[Address,String] lenses.

Updating deeply nested structs is a known pain point and has been discussed here, with some potential solutions and comparisons to languages that support lenses: https://internals.rust-lang.org/t/nested-struct-updates/8319
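
To make the pain point concrete, here is a sketch of what the same update looks like without a lens, using plain struct update syntax on hand-written copies of the structs above (no derive, no lens machinery, just std):

#[derive(Clone)]
struct Address {
    street: String,
    city: String,
    postcode: String,
}

struct Person {
    name: String,
    age: u8,
    address: Address,
}

fn main() {
    let p0 = Person {
        name: "Pop Zeus".to_string(),
        age: 58,
        address: Address {
            street: "123 Needmore Rd".to_string(),
            city: "Dayton".to_string(),
            postcode: "99999".to_string(),
        },
    };

    // Without a lens, producing an updated immutable copy means
    // restating every level of the structure by hand.
    let p1 = Person {
        address: Address {
            street: "666 Titus Ave".to_string(),
            ..p0.address.clone()
        },
        ..p0
    };

    assert_eq!(p1.address.street, "666 Titus Ave");
    assert_eq!(p1.address.city, "Dayton");
    assert_eq!(p1.address.postcode, "99999");
    assert_eq!(p1.name, "Pop Zeus");
    assert_eq!(p1.age, 58);
}

Every level of nesting has to be restated by hand, which is exactly the boilerplate the one-line lens! update above collapses.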

In functional languages, composition of functions is trivial. If I have Haskell functions f :: a -> b and g :: b -> c, I can simply write h = g . f, and h is now a function a -> c. This is not so easy in Rust, so function composition is often replaced by macros that help achieve new syntax. This lack of easy functional composition is important to keep in mind when thinking about different representations of optics (e.g., profunctors).
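
As an aside, plain function composition can be written in Rust with a generic helper; it is just wordier than Haskell's dot operator and does not generalize to optics nearly as nicely. A minimal sketch (the compose helper here is hypothetical, not from any crate):

// A generic "compose" helper: given f: A -> B and g: B -> C,
// produce a closure A -> C. The extra type parameters and the
// `move` closure are the boilerplate Haskell's `g . f` avoids.
fn compose<A, B, C>(
    f: impl Fn(A) -> B,
    g: impl Fn(B) -> C,
) -> impl Fn(A) -> C {
    move |a| g(f(a))
}

fn main() {
    let exclaim = |s: String| format!("{}!", s); // String -> String
    let length = |s: String| s.len();            // String -> usize
    let h = compose(exclaim, length);            // String -> usize
    assert_eq!(h("hi".to_string()), 3);
}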

Profunctors and other Representations

It then explains profunctors and composition.

While extremely reductive, I think that my "get customer ID" example technically qualifies as a profunctor?

A getter for the customer ID is technically a profunctor. Since profunctors are just generalized functions, this isn't saying a whole lot. I'm not sure this equivalence is helpful for someone who is trying to learn Rust patterns. Profunctors are very powerful generalizations, but it is important to understand why they are sometimes used (and why they are not always used!) in functional optics. Profunctors allow trivial composition of all possible combinations of optics types. This ability to nicely combine optics is in contrast to "normal form" optics, in which composition must be manually defined. Profunctors do not always produce the most efficient code, however, so there are trade-offs involved, which is why other representations are also used.

A "normal form" for a lens would be:

case class Lens[S, A](
    get: S => A,
    set: (S, A) => S
)

(I'm not a Scala programmer, so I'm just copying the example from that Davì article, but using more consistent parameter names.) This is essentially a struct that contains two functions, a getter and a setter. This is also known as a non-polymorphic lens, because full lenses allow type changing:

case class PolymorphicLens[S, T, A, B](
    get: S => A,
    set: (S, B) => T
)

And then Lens[S,A] == PolymorphicLens[S,S,A,A].

This form is less common outside of languages like Haskell and Scala, but is often useful for implementing the optics library internally.
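
For Rust readers, here is a minimal sketch of that same simple normal form as plain Rust, using boxed closures. The Lens struct and the person_name constructor are illustrative names only, not the representation lens-rs (or any other crate) actually uses:

// Normal-form (simple) lens: a pair of functions focused on an `A`
// inside an `S`. `set` returns a new `S` rather than mutating.
struct Lens<S, A> {
    get: Box<dyn Fn(&S) -> A>,
    set: Box<dyn Fn(S, A) -> S>,
}

struct Person {
    name: String,
    age: u8,
}

fn person_name() -> Lens<Person, String> {
    Lens {
        get: Box::new(|p: &Person| p.name.clone()),
        set: Box::new(|p: Person, name: String| Person { name, ..p }),
    }
}

fn main() {
    let lens = person_name();
    let p0 = Person { name: "Pop Zeus".to_string(), age: 58 };
    assert_eq!((lens.get)(&p0), "Pop Zeus");
    let p1 = (lens.set)(p0, "Someone Else".to_string());
    assert_eq!(p1.name, "Someone Else");
    assert_eq!(p1.age, 58);
}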

Another optic is Prism, which the Davì article presents in a simplified fashion. It should rely on an Either type (Result in Rust):

case class PolymorphicPrism[S, T, A, B](
    embed: A => S,
    extract: S => Either[T, B]
)

(Davì calls embed reverseGet, and he calls extract getOption. Naming is inconsistent across implementations. Note that getOption in Davì's implementation can be derived from extract, but extract is needed for optic composition in pure functional settings.)

A simple prism is then: Prism[S,A] == PolymorphicPrism[S,S,A,A].
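
Again as a Rust-flavoured sketch, with Result standing in for Either; the Prism struct and the example enum are illustrative only:

// Normal-form (simple) prism: `embed` builds an `S` from an `A`,
// `extract` either recovers the `A` or hands the original `S` back.
struct Prism<S, A> {
    embed: Box<dyn Fn(A) -> S>,
    extract: Box<dyn Fn(S) -> Result<A, S>>,
}

// A Rust enum (sum type) to focus on.
enum Shape {
    Circle(f64),
    Square(f64),
}

fn circle_prism() -> Prism<Shape, f64> {
    Prism {
        embed: Box::new(Shape::Circle),
        extract: Box::new(|s| match s {
            Shape::Circle(r) => Ok(r),
            other => Err(other),
        }),
    }
}

fn main() {
    let prism = circle_prism();
    assert!(matches!((prism.extract)(Shape::Circle(2.0)), Ok(r) if r == 2.0));
    assert!((prism.extract)(Shape::Square(1.0)).is_err());
    let s = (prism.embed)(3.0);
    assert!(matches!(s, Shape::Circle(r) if r == 3.0));
}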

It may be worth mentioning an even simpler optic called the Iso (or sometimes called "Adapter"):

case class PolymorphicIso[S, T, A, B](
    to: S => A,
    from: B => T
)

The simple iso is then: Iso[S,A] == PolymorphicIso[S,S,A,A]. A common example is an Iso[Float,Float] which converts between Fahrenheit and Celsius.
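
The temperature example as a Rust sketch of the simple iso, i.e. two plain functions expected to be mutual inverses (names are illustrative only):

// Normal-form (simple) iso: two functions converting back and forth.
struct Iso<S, A> {
    to: Box<dyn Fn(S) -> A>,
    from: Box<dyn Fn(A) -> S>,
}

fn celsius_fahrenheit() -> Iso<f64, f64> {
    Iso {
        to: Box::new(|c| c * 9.0 / 5.0 + 32.0),
        from: Box::new(|f| (f - 32.0) * 5.0 / 9.0),
    }
}

fn main() {
    let iso = celsius_fahrenheit();
    assert_eq!((iso.to)(100.0), 212.0);
    assert_eq!((iso.from)(32.0), 0.0);
}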

These are all normal forms, by which we mean, they are defined in terms of simple pairs of functions. There are additional equivalent representations. The van Laarhoven form is often used in Haskell and Scala (and I have also implemented such in C++). The van Laarhoven form looks very strange and needs some familiarity with CPS (continuation passing style) and higher kinded types to make sense. But it has advantages in terms of composition. There is also the Profunctor representation that requires a new set of terminology, but which provides very general composition abilities.

The composition of two "normal" form lenses, compose(Lens[A,B], Lens[B,C]): Lens[A,C], requires you to write that function manually, returning a new Lens object containing closures that leverage get/set from the two input lenses. Davì provides:

def compose[Outer, Inner, Value](
    outer: Lens[Outer, Inner],
    inner: Lens[Inner, Value]
) = Lens[Outer, Value](
  get = outer.get andThen inner.get,
  set = (obj, value) => outer.set(obj, inner.set(outer.get(obj), value))
)
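
A Rust rendering of that same manual composition, reusing the boxed-closure Lens shape sketched earlier (the Rc is only there so outer.get can be shared by both new closures, and the 'static bounds are what boxing requires; again, illustrative code, not a crate API):

use std::rc::Rc;

// Same normal-form lens shape as the earlier sketch.
struct Lens<S, A> {
    get: Box<dyn Fn(&S) -> A>,
    set: Box<dyn Fn(S, A) -> S>,
}

// Manual composition: Lens<Outer, Inner> and Lens<Inner, Value>
// give a Lens<Outer, Value>, mirroring Davì's `compose`.
fn compose<Outer, Inner, Value>(
    outer: Lens<Outer, Inner>,
    inner: Lens<Inner, Value>,
) -> Lens<Outer, Value>
where
    Outer: 'static,
    Inner: 'static,
    Value: 'static,
{
    // `outer.get` is needed by both new closures, so share it via Rc.
    let outer_get = Rc::new(outer.get);
    let outer_get_for_set = Rc::clone(&outer_get);
    let outer_set = outer.set;
    let inner_get = inner.get;
    let inner_set = inner.set;

    Lens {
        // get = outer.get andThen inner.get
        get: Box::new(move |o: &Outer| inner_get(&(*outer_get)(o))),
        // set = read the inner value, update it, write it back into the outer
        set: Box::new(move |o: Outer, v: Value| {
            let i = (*outer_get_for_set)(&o);
            outer_set(o, inner_set(i, v))
        }),
    }
}

fn main() {
    // Two simple lenses: the first element of the outer pair,
    // and the first element of the inner pair.
    let fst_outer: Lens<((i32, i32), char), (i32, i32)> = Lens {
        get: Box::new(|p: &((i32, i32), char)| p.0),
        set: Box::new(|p: ((i32, i32), char), v: (i32, i32)| (v, p.1)),
    };
    let fst_inner: Lens<(i32, i32), i32> = Lens {
        get: Box::new(|p: &(i32, i32)| p.0),
        set: Box::new(|p: (i32, i32), v: i32| (v, p.1)),
    };

    let nested = compose(fst_outer, fst_inner);
    let value = ((1, 2), 'x');
    assert_eq!((nested.get)(&value), 1);
    assert_eq!((nested.set)(value, 10), ((10, 2), 'x'));
}

The point to notice is that every pair of optic kinds would need its own hand-written compose like this one.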

By contrast, a van Laarhoven representation is defined as a function type. In Haskell this is most succinctly described as a type alias:

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

That says that a Lens is just a function. The benefit of this is that composition of lenses is just function composition (no compose function needs to be written). Prism is defined in a similar spirit, but it needs a bit more than a different functor constraint; in the Haskell lens library it also abstracts over a Choice profunctor:

type Prism s t a b = forall p f. (Choice p, Applicative f) => p a (f b) -> p s (f t)

So, what about the combination of a Lens and a Prism? In the normal form, you would need to write another compose function producing something like compose(Lens[A,B], Prism[B,C]): AffineTraversal[A,C]. It turns out this produces a new type of optic called an AffineTraversal (or sometimes called "Optional") with different properties. So, you'd need to define this new type as well as compositions between AffineTraversals and all other optic types. With the van Laarhoven representation, you get this for free as a result of composing two functions. Profunctor optics also provide trivial composition.
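
For concreteness, a sketch of what the AffineTraversal's normal form could look like in Rust terms: a zero-or-one focus you can try to read, and can write back only when it is present. The types and names here are purely illustrative (a radius inside an enum variant inside a struct):

// Normal-form affine traversal ("Optional"): zero-or-one focus.
struct AffineTraversal<S, A> {
    preview: Box<dyn Fn(&S) -> Option<A>>,
    set: Box<dyn Fn(S, A) -> S>,
}

enum Shape {
    Circle(f64),
    Square(f64),
}

struct Drawing {
    name: String,
    shape: Shape,
}

// The kind of optic you get by composing a Lens<Drawing, Shape>
// with a Prism<Shape, f64>: the radius of the drawing's shape,
// which only exists when the shape happens to be a circle.
fn drawing_radius() -> AffineTraversal<Drawing, f64> {
    AffineTraversal {
        preview: Box::new(|d: &Drawing| match d.shape {
            Shape::Circle(r) => Some(r),
            Shape::Square(_) => None,
        }),
        set: Box::new(|d: Drawing, r: f64| match d.shape {
            Shape::Circle(_) => Drawing { shape: Shape::Circle(r), ..d },
            Shape::Square(_) => d,
        }),
    }
}

fn main() {
    let at = drawing_radius();
    let d = Drawing { name: "demo".to_string(), shape: Shape::Circle(1.0) };
    assert_eq!((at.preview)(&d), Some(1.0));
    let d2 = (at.set)(d, 2.0);
    assert_eq!((at.preview)(&d2), Some(2.0));
    assert_eq!(d2.name, "demo");
}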

Families of Optics

The section on prisms is mistaken in its characterization that prisms represent families of optics. This is no more true than it is true for the lens

I wrote it this way because of this comment in the Scala article:

What if we want to manipulate data inside a trait, in general referred to as a sum type or coproduct? Prisms come in handy. They’re like Lenses but for sum types.

Originally, I said something much closer to literally that, but I got PR feedback that "sum types" was confusing, and removed that terminology. I am open to suggestions on improving this. Still, the example of the Serde API, I think, corresponds to a similar use in concept.

There are two points here: 1) sum types, and 2) families of optics.

Sum types should not be confusing in Rust since they are provided by the language. They are called enums. (A C/C++ enum is not a sum type because it cannot contain associated data.) If the term "sum type" is confusing, then it may be useful just to use the term "enum" instead.
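
For readers who have not met the term, a data-carrying Rust enum is exactly such a sum type (the PaymentMethod type here is a made-up illustration):

// A sum type in Rust: each variant can carry its own associated data,
// unlike a C/C++ enum, which is only a set of named integer constants.
enum PaymentMethod {
    Cash,
    Card { last4: String },
    Transfer(String), // account identifier
}

fn describe(p: &PaymentMethod) -> String {
    match p {
        PaymentMethod::Cash => "cash".to_string(),
        PaymentMethod::Card { last4 } => format!("card ending in {}", last4),
        PaymentMethod::Transfer(account) => format!("transfer to {}", account),
    }
}

fn main() {
    let all = [
        PaymentMethod::Cash,
        PaymentMethod::Card { last4: "1234".to_string() },
        PaymentMethod::Transfer("ACME-0001".to_string()),
    ];
    assert_eq!(describe(&all[0]), "cash");
    assert_eq!(describe(&all[1]), "card ending in 1234");
    assert_eq!(describe(&all[2]), "transfer to ACME-0001");
}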

As to families of optics, we've encountered several optics so far in this discussion: Iso, Lens, Prism, AffineTraversal. These form the following relationships:

graph TD;
    Iso-->Lens;
    Iso-->Prism;
    Lens-->AffineTraversal;
    Prism-->AffineTraversal;

This is a family of optics. The arrows in the graph show the direction in which you get free conversions. An Iso can be freely converted to a Lens or a Prism. A Prism and a Lens can both be freely converted to an AffineTraversal, etc. A more complete list of optic families is in Oleg's Glassery, which includes this larger diagram:

Optics families from Oleg's Glassery

Note that there are many more optics even than listed in the Glassery, and more optics are always being discovered. See, for example, the slide deck Profunctor Optics: A Categorical Update from Román et al. (and the associated paper).

So, "families" of optics refer to these relationships between optics. That is why it is confusing to incorrectly claim that prism represents a family of optics and, by implication, state that a lens does not.

Relationship to Parsing

I took another look and I still don't get the point of the Visitor pattern. The Visitor pattern is an object-oriented way to achieve the effect of sum types (enums). But I think there is a strong connection to serde that can be drawn.

If you look at serde, it is generally trying to solve the problem of uniformly 1) parsing, and 2) building (or "deserializing" and "serializing" if you prefer). If you combine those two operations into one structure, you get something that may look familiar:

case class ParserBuilder[A](
    parse: String => Option[A],
    build: A => String
)

This is essentially equivalent to ParserBuilder[A] == Prism[String,A]. More generally, this is a topic referred to as "Invertible Parsing". The paper Invertible Syntax Descriptions talks about this problem generally and settles on the PartialIso optic for its implementation. Outside of Haskell, you can also see real world implementations of this in Scala in the Parserz library or a particularly advanced production Swift example in Point-Free's swift-parsing.
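
As a hedged Rust sketch of that correspondence (an illustration of the idea, not how serde is implemented): a tiny "parser/builder" for u32 written in the prism shape from earlier, where extract is parsing and embed is printing:

// ParserBuilder[A] as a prism between String and A:
// `extract` is parsing (it may fail and give the input back),
// `embed` is building (printing) and always succeeds.
struct Prism<S, A> {
    embed: Box<dyn Fn(A) -> S>,
    extract: Box<dyn Fn(S) -> Result<A, S>>,
}

fn u32_prism() -> Prism<String, u32> {
    Prism {
        embed: Box::new(|n: u32| n.to_string()),
        extract: Box::new(|s: String| match s.parse::<u32>() {
            Ok(n) => Ok(n),
            Err(_) => Err(s),
        }),
    }
}

fn main() {
    let p = u32_prism();
    assert_eq!((p.extract)("42".to_string()), Ok(42));
    assert_eq!((p.extract)("not a number".to_string()), Err("not a number".to_string()));
    assert_eq!((p.embed)(7), "7".to_string());
}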

What's the point?

That was a big info dump - I hope it's not too much info overload and is actually helpful. The main question that I come back to is: who is the audience of this particular information in this "rust patterns" book?

As I have learned about these topics I have come to the conclusion that these functional and category-theoretical notions are very useful as a framework for understanding very hard problems and turning them into very simple problems. However, I do find that not all languages support these ideas directly. Haskell and Scala, with their support for higher-kinded types, do a great job of encoding these things directly. But if you look at projects in Swift or Rust which leverage this thinking, they do not translate directly into code. Rather, you'll see that Rust often requires macros (a form of code generation). Swift often relies on type erasure, result builders and other tricks to get concise compositional DSLs. Often the names need to change or other compromises need to be made to support adoption (e.g., Swift calls lenses "KeyPaths" and doesn't even implement prisms, though there is some discussion).

The point is, the functional and categorical patterns transcend language and are not really "rust patterns". However, thinking about them can help identify common problems and patterns to solve those problems in Rust. I hope, by way of the above explanations, that you may settle on some useful angle here. I would definitely recommend a clearer separation of theory (functional theory, category theory) from application (patterns in Rust).

Again, hopefully that is helpful. Thanks for the great first pass and good luck!

@jhwgh1968
Contributor

Hi @thomasmarsh, just letting you know I skimmed your text and found it very helpful. I will see what improvements I can make over the next couple of weeks.

@jhwgh1968
Contributor

jhwgh1968 commented Dec 20, 2022

Okay @thomasmarsh, I have read this thoroughly and given it some thought.

My article was put into a section I think of as "functional language concepts useful in Rust." That is, a section where people who have not had exposure to these concepts, and are learning them "by doing" due to Rust's functional "style", can see what's going on with the big picture.

As noted before, reading articles about lenses and currying is what finally let me understand Serde. This realization breathed new life into what was an article with substance (not quite as much as I thought), but no direction.

It was reading these articles that let me realize Serde is using macros and a Visitor as an intermediary type to get around higher-kinded type limitations that Haskell does not have. As a result, I think that should remain the focus -- and also set the primary audience.

Perhaps the best thing to do is turn my article "upside down" in flow. My first draft would be something like:

  1. Serde's API is very difficult to read. How does it work?

  2. Let's look at some functional Optics, which Rust uses conceptually for API design

  3. The Iso: value transform on the same type; trivial Fahrenheit and Celsius example

  4. The Poly Iso: Rust's FromStr trait, which is designed for changing types; very simple "parser" example (a sketch follows after this list)

  5. Notice how traits create one optic impl per type, written manually; that's a lot of typing

  6. Let's consider what Serde wants to achieve. Generically:

    for any type T and format F:
            F.serialize(T) -> String
            F.deserialize(String) -> Option[T]
    
  7. This is called a Prism, which builds upon Isos, so it's "one level higher" in generics

  8. Prisms can be built directly from Isos in Haskell, but Rust does not support higher-kinded types well enough

  9. So Serde created a uniform data model which is the "secret sauce" behind the library

  10. This allows the API to be split with the Visitor pattern as the data model representative:

    for any type T:
        let V: associated visitor type that does data model transformations
        T.serialize(F) -> String
        T.deserialize(F) -> Option[T]
    
    for any visitor type V:
        V.build(F) -> Option[T]
        V.decompose(F, T) -> String
    
    for any format type F:
        F.serialize(V, T) -> String
        F.deserialize(V, String) -> Option[T]
    
  11. Hey look, it's a pair of Poly Isos at the bottom! Rust can do those with traits, as we saw earlier

  12. Macros can generate the T methods and V type for ease of use

  13. Thus Serde "thinks" in functional Optics concepts while impl uses macro glue and traits
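
As a taste of step 4, here is a hedged sketch of what a FromStr-based "Poly Iso" parser example might look like (the Celsius type and its format are purely illustrative):

use std::str::FromStr;

// A toy type with a textual representation.
struct Celsius(f64);

// Implementing FromStr gives the "parse" direction (&str -> Celsius);
// Display (or a to_string method) would give the other direction.
impl FromStr for Celsius {
    type Err = std::num::ParseFloatError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        s.trim_end_matches("°C").trim().parse::<f64>().map(Celsius)
    }
}

fn main() {
    let c: Celsius = "21.5 °C".parse().expect("valid input");
    assert_eq!(c.0, 21.5);
}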

I think that would be more accurate, even if it's not quite as exciting to someone interested in the concepts.

What do you think?

ETA:

The "psudeocode" is conceptual, rather than literal. Serde looks extremely different, because of the way the Visitor "drives" the parsing process, and the way that generics support type associations without requiring values.

But if you strip things down to the core concept, and simplify all that detail away, I am pretty sure this very-simplified model is representative. That is mostly for the explanation in this thread, matching the style of previous examples.

ETA 2: ugh, this is hard and always takes me a long time, but I think I've finally got it

@thomasmarsh

Hi! I just wanted to respond quickly, and will come back with more thoughts. I think this sounds very interesting as a way to build the story you want to tell. I wonder if you had seen this recent video exploring serde's architecture https://www.youtube.com/watch?v=BI_bHCGRgMY. I have not watched the whole thing yet, but it seems to line up closely with your topics. (I'm not sure that it mentions optics at all.)

Another thought that occurs when reading your description: optics are one way of looking at things (and I believe a very useful way), but there are other perspectives (e.g., object-oriented approaches, for one) which can lead to similar implementations without knowledge of optics. This reminds me of how optics are implemented: there are multiple equivalent representations (e.g., normal form or profunctor optics) - none of them is "correct"; they are just different ways of looking at things. My personal instinct would be to exercise caution in claiming that Rust APIs are inspired by optics. They may be, but I don't know that for sure!

@simonsan
Collaborator Author

@jhwgh1968 Thank you for your detailed explanations! 👍🏽 😍

@jhwgh1968
Contributor

Quick update: things have been busy, but I have started on a draft of my new approach described above. Once it's ready for review, I will ping the important people in this thread for early feedback.

@simonsan
Collaborator Author

Quick update: things have been busy, but I have started on a draft of my new approach described above. Once it's ready for review, I will ping the important people in this thread for early feedback.

Take your time and thanks for the update. 🤗

@simonsan simonsan added C-amendment Category: Amendments to existing content A-functional Area: Content about Functional Programming labels Apr 8, 2023
@simonsan simonsan changed the title Feedback on 'Lenses and Prisms' article Clarify parts of Lenses and Prisms chapter Apr 8, 2023
@jhwgh1968
Contributor

Hi @simonsan and @thomasmarsh, I have a draft version!

It's not a PR yet because it's at a much earlier stage. What I'm interested in is:

  • Is the approach and flow good
  • Is it understandable (I suspect I could explain some things better)
  • Is anything I say about the different concepts incorrect
  • Is the "normal form" correct

Please put comments on the commit itself as a code review.

Thanks!

jhwgh1968@636a736

@simonsan
Collaborator Author

@jhwgh1968 Could you maybe just open it as a (draft) PR? It's much easier to get feedback in and to follow up on reviews that have already been made. Thank you 🫂

@jhwgh1968
Contributor

Opened #364
