Cedar Function Macros #61
Conversation
Generally approve of this RFC, but withholding my final approval pending resolution of some of the comments in other reviews
text/0061-functions.md:

```
permit(principal,action,resource) when {
  foo(1, "hello", principal) // Parse error
};
permit(principal,action,resource) when {
  bar(1, "hello", principal) // Parse error
};
```
This implies that the parser is stateful in a way it is not today. In order to determine whether these examples are parse errors, the parser has to refer to information it has already parsed earlier in the file (the definition of `foo`). I'm not sure this is the design we want.
An alternative would be that calling a nonexistent function, or calling with the wrong number of arguments, is not a parse error but an evaluation error, also detected by validation of course.
Edit: With the below stipulation that function declarations do not have to lexically precede use, this is even more problematic.
The argument against making them runtime errors is that doing so reifies functions. Under the current proposal, functions could be fully erased.
Can you not do the checking in two steps? Basically:
- While processing an input file, store all function defs in a global table that logs name, args, and body. For each function application, also consult the table for consistency. If the function is not yet in the table, add it with its arity and "none" for the body. If a def comes after a call, check it for consistency and replace the "none".
- At the end of the file, make sure no functions in the table have "none" for the body.
Doesn't sound hard?
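A minimal sketch of that table in Rust, with an invented `Expr` type standing in for a parsed Cedar expression. This illustrates the scheme described above, not the actual parser code:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Invented stand-in for a parsed Cedar expression.
struct Expr;

struct FnInfo {
    arity: usize,
    body: Option<Expr>, // `None` means "called but not yet defined"
}

#[derive(Default)]
struct FnTable {
    table: HashMap<String, FnInfo>,
}

impl FnTable {
    // Called when the parser sees a definition `def name(p1, ..., pn) { body }`.
    fn on_def(&mut self, name: &str, arity: usize, body: Expr) -> Result<(), String> {
        match self.table.entry(name.to_owned()) {
            Entry::Vacant(v) => {
                v.insert(FnInfo { arity, body: Some(body) });
                Ok(())
            }
            Entry::Occupied(mut o) => {
                let info = o.get_mut();
                if info.body.is_some() {
                    return Err(format!("duplicate definition of `{name}`"));
                }
                if info.arity != arity {
                    return Err(format!(
                        "`{name}` defined with {arity} args but previously called with {}",
                        info.arity
                    ));
                }
                info.body = Some(body); // a call preceded this def; fill it in
                Ok(())
            }
        }
    }

    // Called when the parser sees an application `name(e1, ..., en)`.
    fn on_call(&mut self, name: &str, arity: usize) -> Result<(), String> {
        match self.table.entry(name.to_owned()) {
            Entry::Vacant(v) => {
                // Forward reference: record the arity, leave the body empty.
                v.insert(FnInfo { arity, body: None });
                Ok(())
            }
            Entry::Occupied(o) if o.get().arity != arity => {
                Err(format!("`{name}` expects {} args, got {arity}", o.get().arity))
            }
            Entry::Occupied(_) => Ok(()),
        }
    }

    // Called once the whole file has been parsed.
    fn finish(&self) -> Result<(), String> {
        for (name, info) in &self.table {
            if info.body.is_none() {
                return Err(format!("call to undefined function `{name}`"));
            }
        }
        Ok(())
    }
}
```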
I don't think it's hard; I think Craig's point is that until now we have not had to deal with any kind of use-def relationships in the parser, and this RFC changes that. If we make it a runtime error, our parser stays the way it is.
I think it's worth the change, and I think the RFC doesn't meet our needs if it's a runtime error, but the callout is correct.
text/0061-functions.md:

```
3. Function application with incorrect arity -->

### Naming

Should these really be called `function`s? They are actually `macro`s.
```
I'm not sure I agree that these are macros, in the current design. They are pure functions.
If we want to avoid associations with other languages' functions or macros, we could also pick a third term, like `snippet`, `predicate`, `def`, ....
Fine with `snippet` or `def`.
I would claim these are best communicated as macros, for two reasons:

- Most readers are used to CBV languages. So they inherently assume (having not actually read the docs) that for functions, when you write `f(1+2, principal.id like "foo")`, you will evaluate `1+2` and then `principal.id like "foo"`, and then call the function with the results. They will not imagine that inlining will happen, and that short-circuiting and whatnot can change the results.
- Readers are familiar with macros in C, and perhaps other languages. For macros they know that when you write `f(1+2, principal.id like "foo")`, you are substituting the full expressions `1+2` and `principal.id like "foo"` into the body; you are not evaluating them first.

So I claim that by calling them macros we help users naturally go to the right semantics in their heads. Yes, it's true that what we are really (or equivalently) providing is call-by-name functions, but calling them macros is more illuminating, IMO.
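To make the difference concrete, here is an example using hypothetical `def` syntax (the `first` macro and the `nonexistent` attribute are invented for illustration):

```cedar
// Hypothetical definition syntax, for illustration only.
def first(a, b) { a };

// Macro (call-by-name) semantics: the call expands to just `true`, so the
// second argument, which would error (no such attribute), is never evaluated.
permit(principal, action, resource) when {
  first(true, principal.nonexistent == 1)
};

// Call-by-value semantics would evaluate both arguments before the call,
// so the same policy would raise a missing-attribute error instead.
```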
In sum: I propose changing the whole RFC to be "Cedar Function Macros" and then use "macro" for short, throughout.
Writing up the function vs macro discussion in the RFC...
I also prefer `macro`. To me, `function` suggests that I should be able to nest calls, whereas I can easily accept that a macro shouldn't call another macro. I'm not a fan of `predicate`, since that suggests that the resulting expression must be boolean-typed. No preference on `snippet`, except that I haven't seen it used before, so it feels non-standard.
Also, not a fan of "Cedar Function Macros". I found this terminology confusing on my initial read-through; just pick function or macro.
I'm happy with `macro`. Maybe `inline function` would be another option (though it still may suggest CBV to some readers)?
I left a couple comments, and there are several existing threads to address, but overall I'm in favor of this proposal.
text/0061-functions.md:

```
1. They are extremely heavyweight, requiring modifying the source of the Cedar evaluator. This means that users who want to stay on official versions of Cedar have no choice but to attempt to submit a PR and get it accepted into the mainline. This process does not scale.
   1. For data structures that are relatively standard (ex: SemVer, or OID Users as proposed in [RFC 58](https://github.com/cedar-policy/rfcs/blob/cdisselkoen/standard-library/text/0058-standard-library.md)), it’s hard to know what’s in-demand enough to be included, and how to balance that against avoiding bloat. There’s no way to naturally observe usage because the only way to “install” the extension pre-acceptance is to vend a modified version of Cedar.
   2. Users may have data structures that are totally bespoke to their systems. It makes no sense to include these in the standard Cedar distribution at all, yet users may still want some way to build abstractions.
2. They are too powerful. Extensions are implemented via arbitrary Rust code, which is essential for encoding features that cannot be represented via Cedar expressions (such as IP Addresses), but opens the door for a wide range of bugs/design issues. It’s trivial to design an extension that is difficult to validate and/or logically encode for analysis. Problematically, extension functions can potentially exhibit non-determinism, non-termination, or non-linear performance; interact with the operating system; or violate memory safety. This raises the code review burden when considering an extension function's implementation.
```
This is a great summary of the current state of extension functions. This is why I'm in favor of this RFC: there is currently no way to build abstractions in Cedar outside of (built-in) extension functions.
So although I prefer "macro" to "function" for the feature in this RFC, I'd like to throw another name into the ring: "user-defined extension functions". (Note that this would make the most sense if we used CBV for both types of calls.)
> User-defined extension functions

I prefer not to call them functions because they are not CBV.
I've taken a pass over the whole RFC, bringing the terminology and presentation up to date with all the various discussion threads. I'm approving the current version.
Thank you @aaronjeline and @mwhicks1 for writing this RFC! I appreciate that the current version of the proposal is simpler than the original. In particular, by proposing macros with call-by-name semantics rather than functions with call-by-value semantics, we avoid complicating the language semantics, validation, and analysis.

Despite this, I'm not in favor of adding macros to the language at this time. The pros don’t outweigh the cons (yet) in my opinion, and this is a major addition to the language that we cannot take back. Macros will create a management problem for services, and they will introduce two significant sources of confusion and complexity into policies:
First, when a policy that calls a macro errors at runtime, what source location should be shown as the source of the error? Should it be the location in the macro body, or the location of the call site?
As a pathological example, consider this macro and its application:
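A sketch of the kind of example in question, using hypothetical `def` syntax for the macro definition (reconstructed to match the description below):

```cedar
// Hypothetical definition syntax.
def double(x) { x + x };

permit(principal, action, resource) when {
  double(double(double(double(1)))) == 16
};
```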
This policy expands to an AST that is exponential in the size of the CST: with 4 calls to `double`, we've created a parse tree (and a runtime value) of size 2^4.

These problems aren’t present in Cedar right now because each policy is self-contained, and its meaning and cost are evident from the text. That’s the advantage of the lack of abstraction. There are downsides, of course, as outlined in this RFC. In my view, we have yet to see compelling evidence that abstraction provides enough benefits in a very restricted authorization DSL like Cedar to justify the added complexity. For that reason, I’m voting against accepting the RFC at this time.
Thanks for providing this carefully thought-out critique! I am still optimistic that macros are a net win. Here are some responses.
This seems like a surmountable problem. I presume there is much we can learn from far more interesting macro systems' handling of errors. Off the top of my head:

For run-time evaluation, you don't have to inline the call in advance; you can inline by creating the expression on the fly, while evaluating, basically resulting in a call-by-name function call. The inlined body can include source locations that reflect the macro body in the right places, and the macro args in the others. The error message can include a stack trace, which a user might expect. It should be possible to prove (or test) that doing this is equivalent to in-advance inlining.

For validation, I think you can do a similar thing.

For Lean models, obviously you don't need to track the source locations. You don't even need to do on-the-fly inlining, at least at first, by relying on DRT against the Rust, and PBT within the Rust, to show equivalence. We can prove equivalence in Lean later, if we wish.
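As a sketch of the on-the-fly inlining idea, here is a toy call-by-name evaluator over an invented expression language (not Cedar's real AST; all types here are made up for illustration, and source-location tracking is omitted):

```rust
use std::collections::HashMap;

#[derive(Clone)]
enum Expr {
    Lit(i64),
    Add(Box<Expr>, Box<Expr>),
    Param(String),                // a macro parameter
    MacroCall(String, Vec<Expr>), // a call to a user-defined macro
}

struct MacroDef {
    params: Vec<String>,
    body: Expr,
}

// Substitute *unevaluated* argument expressions for parameters
// (call-by-name): arguments are spliced in as syntax, not as values.
fn substitute(body: &Expr, params: &[String], args: &[Expr]) -> Expr {
    match body {
        Expr::Lit(n) => Expr::Lit(*n),
        Expr::Add(a, b) => Expr::Add(
            Box::new(substitute(a, params, args)),
            Box::new(substitute(b, params, args)),
        ),
        Expr::Param(p) => {
            let i = params.iter().position(|q| q == p).expect("unbound parameter");
            args[i].clone()
        }
        Expr::MacroCall(name, call_args) => Expr::MacroCall(
            name.clone(),
            call_args.iter().map(|e| substitute(e, params, args)).collect(),
        ),
    }
}

fn eval(e: &Expr, macros: &HashMap<String, MacroDef>) -> i64 {
    match e {
        Expr::Lit(n) => *n,
        Expr::Add(a, b) => eval(a, macros) + eval(b, macros),
        Expr::Param(_) => panic!("parameter outside a macro body"),
        // On-the-fly inlining: expand this one call lazily, then keep
        // evaluating, instead of expanding the whole policy up front.
        Expr::MacroCall(name, args) => {
            let def = &macros[name];
            eval(&substitute(&def.body, &def.params, args), macros)
        }
    }
}

fn main() {
    // double(x) := x + x
    let double = MacroDef {
        params: vec!["x".into()],
        body: Expr::Add(
            Box::new(Expr::Param("x".into())),
            Box::new(Expr::Param("x".into())),
        ),
    };
    let macros = HashMap::from([("double".to_string(), double)]);
    // double(double(1)) == 4
    let call = Expr::MacroCall(
        "double".into(),
        vec![Expr::MacroCall("double".into(), vec![Expr::Lit(1)])],
    );
    assert_eq!(eval(&call, &macros), 4);
}
```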
This is a fair point, and I worried about this, too. Here are some mitigating factors that come to mind:
Ultimately, I feel that the readability and reusability benefits of macros are significant; that the increase in complexity to support them is small; and that we are failing to solve the problem posed by today's extension functions if we do not provide them. Yes, we cannot stop users from using them in unwise ways, but that's true in general. We can guide them with linting, documentation, etc. toward good solutions.
Can you say more about how on-the-fly macro expansion would work? The way I imagine it would work means we’d have to break the existing API. In addition to breaking the API, we’d also have to extend the AST (and possibly break the EST), at least in Rust.

Doing macro expansion statically, prior to evaluation, avoids the above problems, but has the issue that constructing useful error messages becomes hard.

As for Lean, formalizing function calls (CBV or CBN) is possible but tricky. For example, it will be non-trivial to prove termination (because the size of the AST won’t be usable as a natural decreasing measure during recursive calls). If we want to formalize this, we’ll definitely want to keep the alternative formalization separate from the core semantics / validation / analysis formalization, because we don’t want to handle on-the-fly inlining in these more complicated components too. So, we would need an additional concrete semantics for Cedar, with macros and CBN semantics, and a proof that this alternative semantics is equivalent to our existing semantics post-expansion.

This all adds a hefty engineering and maintenance burden. My concern continues to be that this burden, as well as the potential confusion / usability / performance issues, is not offset by the upsides of macros for Cedar, at least from what I’ve seen so far. How to weigh these tradeoffs is, of course, subjective, and I realize that mine is a minority opinion :)
Except almost everyone should be using schemas. So... :)
Comments from Dan Wang (who doesn't have a GitHub account), plus my responses:
Abbreviations in Twelf, per the link, look like type abbreviations, not term-level functions. They don't seem to have parameters. In Lean, an "abbreviation" seems to be a non-unicode way of specifying a unicode character. It is call-by-name in the sense that you can really do the calls on the fly, rather than all in advance, as an evaluation strategy.
It's a fair point that even without self-calls there is an opportunity for bad behavior, and thus we might relax that. Our thinking was that we can add internal calls later, if customers demand it. But maybe we should consider it now.
That seems reasonable.
Hi, I did some thinking here with a few folks, just some observations:
In any case, I really liked some of the comments / notes in the RFC such as:
In the Go implementation, I think adding user-made extensions wouldn't be terribly difficult, but the concerns about breaking analyzability / determinism give me pause.
Similar concerns are already present in comments, but:
Thank you @philhassey and @200sc for the comments! My take is along the same lines: I want all the abstraction in a general-purpose language :) but not necessarily in a DSL like Cedar, for the reasons mentioned in the discussion and in your comments.

To answer the last point raised by @200sc: Cedar won't become Turing complete. Analyzability is a key tenet of the design. It is worth noting that macros won't compromise logical analyzability, that is, the ability to mechanically reason about policy behavior by reducing its semantics to a logical formula. But they can hurt understandability as much as they help, as all abstractions do.

In my view, the upsides of abstraction (terseness and expression reuse) are not worth the added complexity for policy writers, readers, services and applications, and implementors of Cedar.
@emina @philhassey @200sc all express this sentiment: no abstractions in Cedar, which is basically a data format; abstractions can be in a general-purpose language. I see the appeal of this position, but I fear that taking it fails to solve the problems that motivated this RFC.

Cedar is not just a data format. It was specifically designed so its policies can be authored, read, and understood by human beings. One reflection of this design is our use of a custom syntax rather than JSON (as with AWS IAM) or XML (as with XACML). Customers specifically tell us they appreciate the greater readability of Cedar’s syntax. Another reflection of this design is keeping Cedar’s mechanisms simple, which aids understanding by humans and analysis tools.

Cedar supports the “policies as code” methodology: authorization policies are written as code, in a language such as Cedar, and kept separate (as much as possible) from the application(s) that use those policies. Doing so makes policies easier to audit and maintain. The more authorization logic is kept in the application code, the less one can understand about the application’s security rules by just looking at the Cedar policies, and the harder it is to evolve those policies, since you need to evolve the application code, too.

Let’s look at the initial SemVer example. If we do nothing, then policies using SemVer are harder to write and understand. If we support macros as proposed, these tasks are made easier. The example I added yesterday is also much easier to read once you include macros. Conversely, it is easier to misunderstand or make mistakes without them.

Users might like to address these difficulties. How should they do so? One idea is to use a non-Cedar macro-supporting language. For example, I could store my Cedar policies in a text file with macro constructions from the C preprocessor:
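For illustration, such a file might look like the following (the file name and the `VERSION_AT_LEAST` macro are invented for this sketch; running it through `cpp -P` would emit plain Cedar):

```c
/* policies.cedar.in -- preprocess with `cpp -P` to produce Cedar. */
#define VERSION_AT_LEAST(v, MAJ, MIN) \
    ((v).major > (MAJ) || ((v).major == (MAJ) && (v).minor >= (MIN)))

permit(principal, action, resource) when {
    VERSION_AT_LEAST(context.apiVersion, 2, 1)
};
```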
This is not too bad. But it has some drawbacks:
I do agree with the concerns about managing macro definitions. Introducing them probably means thinking about a packaging system soon, if not immediately. We already have interest in packaging and sharing schema definitions (see RFC 58), so this would be a natural step. But I don’t feel that we need a package system before introducing macros. It can be a fast-follow.

I note that telling users to use their own templating system essentially punts the management question to them. They, too, might want to manage common abstractions by packages or names. Cedar’s reason for existence is to help users avoid tricky details about managing their authorization posture. Surfacing the need for package/definition management in Cedar means that it becomes easier for users to collaborate on building policy abstractions that others can use.

As for moving Cedar towards a general-purpose language: as @emina says, we want to keep Cedar analyzable, which means there is a firm ceiling on how general-purpose it can get.
Hi @mwhicks1 - just to clarify my statement:
I'm not objecting in general, but I am saying "if we do this, we'll want a few more things". Which relates well to your comment:
I feel the same way about adding in constants, and types with macro-methods, at which point Cedar would be a much richer language. I do think it's reasonable at this point to consider whether that's where we want to end up, though. I'm not sure.
Responding to various comments:
Even worse, it's another step where we need to make sure we have the exact same behavior between the Rust and the Go implementations, since they won't be able to share this code easily. While this is also true of the macro system itself, it's not true of each individual macro. From @200sc:
text/0061-functions.md:

```
Of course, these drawbacks do not necessarily speak against Cedar functions generally, but suggest that for suitably general use-cases (like decimal numbers!), an extension function might be warranted instead.

### Hidden performance costs
```
To me, this (tied with the general principle of avoiding redundant language features) is the most concerning part of this RFC. Since there are reasonable mitigations (e.g., the VS Code plugin can display the expanded policy), I think this is OK if there is a sufficient level of interest in macros (let's say 5 independent users asking for the feature).
So far, I don't think we've reached this level. So I'm not opposed to this RFC, but I'd vote against doing it now.
Adding my two cents: I'm inclined to "no". It puts a huge burden on services built around Cedar. To keep up with Cedar releases, these parties need to schedule a large engineering project for storing these fragments and utilizing them efficiently. If they don't, they fall behind the language treadmill. Or, we end up with a bunch of opt-in features for Cedar such that when someone says they are "Cedar compliant", nobody knows what that means until they read the matrix of opt-in features within the service documentation.

We also have audiences who are already concerned that Cedar may be growing too complex when their customers are trying to achieve simple things. Evolving Cedar toward something that starts looking like a full-fledged programming language is a slippery slope that runs a real risk of turning away these audiences.

I'm sympathetic to the goals, and agree with other comments that audiences could use their favorite standard templating language and preprocess the Cedar themselves. And I wouldn't be averse to vendors supporting this feature on their own timelines, as an extension, or even to Cedar suggesting a common approach to this.

I also note that this discussion began with a request for a "SemVer" extension, which I'm also sympathetic to. Somewhat ironically, I'm wondering if it would have been faster to just implement "SemVer"!
This is a good point, and supports what others are saying about needing a higher bar of requests from users.
I disagree with this for two reasons:
I think if we want to suggest a common approach to this, it should be something like this RFC.
To address this last point: we didn't just implement SemVer because doing so has maintenance costs which don't scale well. Each extension function we add increases the testing, validation, analysis, etc. burden of core Cedar. Instead, we wanted to explore a single mechanism that addresses them in one fell swoop. This is more scalable for the core Cedar team, and gives more agency to Cedar users.

I am still bullish that what's proposed here is the right thing to do. But I also appreciate the words of caution. The right course is to gather more information about uses/needs, and to think about open questions like packaging/management of macro definitions (which still aren't solved by the current extension function mechanism, even if we thought that was the right approach instead).

One thought: could we add Cedar macros as an experimental feature, and let customers use it and report back?
Chiming in here to substantially agree with @aaronjeline and @mwhicks1, particularly their most recent two comments each. I think it would be a big mistake to encourage folks to use "standard" templating languages on top of Cedar instead of building something like this RFC, which can enforce Cedar macro hygiene, syntax checking, etc. To the extent that one of the concerns raised in comments here is that this feature increases the cognitive complexity of Cedar: using a general-purpose templating language on top of Cedar would be even worse in terms of cognitive complexity, and wouldn't integrate with Cedar validation and analysis like the macros in this proposal would.
Can you say what you mean by integration? If an external templating system simply expands whatever the user writes into a Cedar policy, the analysis will handle the expanded policy fine, and so will validation.

It’s worth noting that custom macros don’t provide any advantage for analysis, which would have to inline every invocation of the macro in order to capture its semantics at the invocation point. (That’s how we handle extension functions too: the definition is inlined at the call site, after appropriate sequencing of argument evaluation to account for errors.) In this proposal specifically, macros aren’t helping with validation either, because they aren’t typed and cannot be type checked independently of call sites. We could consider writing a type inference algorithm that infers types for the polymorphic macros, but that’s another big endeavor, source of complexity, and engineering burden.

An advantage that this proposal has over an external templating system is the potential for better error messages, because a custom macro system (like this one) could propagate source location information through expansion. (And maybe this is what integration is referring to in the parent comment?)

Still, I strongly urge caution here, and specifically, not underestimating the burden that this feature imposes on users, services, and language implementers. If I had to take a guess at the implementation and maintenance effort for Cedar alone, I would say we’re looking at an additional ~10K LOC, including the Rust implementation (with proper expansion tracking, error messages, etc.), Lean model and proof (by having a parallel semantics with CBN and proving it equivalent to the core semantics after expansion), and diff testing.

We need to be really sure that this is the right direction for Cedar before proceeding, and not just because of the engineering effort. In my view, this is not the right direction from a language design point of view either. As several other commenters noted, this would push Cedar toward looking more like a general-purpose language. And that’s a slippery slope, which inevitably leads to more complexity and feature creep.
Syntax highlighting, autocomplete, hover tooltips, potential for type annotations, using the Cedar namespace system, using whatever stdlib mechanism we may decide on, checking if a macro is a valid expression, rendering the macro as an EST, runtime errors that track proper source locations, type-checking errors that track proper source locations: anything that requires parsing or thinking about the macro in isolation becomes impossible in the "generic template" system answer.
This feels like a very ill-defined argument. We still have the guardrail of analysis preventing us from expanding too far. This adds no expressive power to the language. Templates did not meet this criticism; were we wrong to miss it there? Action groups likewise. Surely opening up the extension mechanism so users can easily share extensions would hit this bar. I'm not saying it's an incorrect argument; it just feels vague.

I think there are three arguments happening here, and I'll try to distill them.
This is fine. I may personally disagree, but I'm ok with declining for now and seeing if more evidence accumulates. If that evidence ends up accumulating, we could potentially revive this RFC or consider a different mechanism for the same goal. If it doesn't, we won't.
This feels very wrong to me, for all the reasons I've described above. I'll add one more here. If a macro system within Cedar brings it too close to a general-purpose language, why does a recommended macro system outside of Cedar not do that? Again, operationally, you have all the same concerns. Semantically/readability-wise, you have all the same concerns, except with fewer guardrails. Standards-wise, it's a mess. To a person writing/reading policies as per our recommendations, it's equivalent complexity. (It's probably definitionally more complexity, as these macros would be text macros.) If we know people want this feature, and the feature is feasible, then it feels weird to recommend what is essentially a workaround. Is there an existing template system we can point to that we think is equivalently readable?
Obviously I disagree (I wrote the RFC :D ), but there's probably more to talk about here. If I've misstated anyone's position, please correct me.
The more I think about this feature, the more torn on it I become. I see the value it provides in streamlining policies and reducing duplication. On the other hand, introducing brand-new standalone concepts like these macros feels a bit heavy-handed, with implications for integrated services. May I share an idea to get some feedback on it? Extension methods in C# inspired this approach. Basically, we'd allow users to define their own type extensions directly in the schema. The SemVer example could then look like this:
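A sketch of what that might look like; the schema syntax below is entirely invented, for illustration only:

```cedar
// Hypothetical schema syntax: a user-defined type with attached methods.
type SemVer = {
  major: Long,
  minor: Long,
  patch: Long,
} with methods {
  isAtLeast(other: SemVer) =
    self.major > other.major ||
    (self.major == other.major && self.minor > other.minor) ||
    (self.major == other.major && self.minor == other.minor &&
     self.patch >= other.patch),
};
```

A policy could then call something like `context.apiVersion.isAtLeast(...)` as if it were a built-in method, with the method body drawn from the schema.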
I understand that this goes against one of our tenets, that the schema is not supposed to affect the authorization outcome. But I also feel like we've taken incremental steps to relax that stance with schema-based parsing of the request data and schema-based validation of the request.
Thanks @aaronjeline for summarizing the three arguments! That's really helpful. I'm in camps (1) and (3), and below, I'll try to explain why this RFC is triggering my spidey sense, with the full acknowledgement that this opinion is biased (informed? :D) by my own programming experience (Racket, Lean, and a long time ago, Java) and design preferences (minimalist).
I believe this is true. We haven't had any users (that I know of) ask for this feature. We have had many users ask for all the other features that have required this level of engineering effort (e.g., partial evaluation). We have also not considered a feature that departs this much from the original language design, which focuses on simplicity and "what you see is what you get" behavior. This focus is lost with no gain in expressiveness and with minor (in my eyes) gain in convenience.

It is worth noting that this RFC doesn't solve the problem that originally inspired it, SemVer comparison, because the original ask was for the comparison function to provide a similar API to decimals and IP addresses, which requires writing an extension function to parse SemVer strings.

The effects of supporting this feature are wide-ranging and global. This feature is not only an engineering lift for us, but also for all applications and services that use Cedar. We would be forcing them to deal with macro storage, management, distribution, and updates, or risk falling behind the language treadmill, as @D-McAdams's comment explains. Services and applications are our main customers, even more so than individual policy writers, and I would want to see at least 5 services / applications actively advocate for this feature before adding it to the language (borrowing 5 from @andrewmwells-amazon's comment).
I believe this is true as well, and the following is going to stray a bit into subjective preferences and design philosophy, so please bear with me :)

Cedar is not a general-purpose language, and it was designed explicitly to be analyzable and simple. The current proposal is to add a feature to the language that is extremely limited (for good reasons), and not really capable of letting you build real abstractions, which are built through composition. Cedar macros cannot call other macros (for good reasons). They are not real CBV functions (for good reasons), which is what most programmers expect from functions. They are also a very weak syntactic abstraction (for good reasons), because they lack all the features you find in powerful macro systems (such as Racket or Lean). In other words, they land us in the uncanny valley of language design (in my eyes), where Cedar looks sort-of general-purpose without actually being so. This then creates the expectation of even more features you find in a general-purpose language (as noted by @philhassey): global constants, a module system, a packaging system, etc. And that lands us in a state where Cedar is not beautiful (again, in my eyes) either through simplicity or power.

I cannot think of any popular, simple, standalone DSLs that have things like macros, modules, packages, etc. (I can think of some that do have these features but are considered complex, and are well out of bounds of what is analyzable.) This is likely my own blind spot, and I'd be super curious if someone can point me at some DSLs where this has been done well, where it's considered simple and ergonomic, and that we should emulate :)

Cedar is not used like most programming languages. Most programming languages are file-oriented; people write a single conceptual unit (application) that is thousands of lines of interconnected code, and in that world, things like modules and abstractions are absolutely essential. While some people may use Cedar by putting all policies in a single file, that is not the case for a large number of users who interact with Cedar through a service or an application. Here, we see two modes of use: a (relatively!) small number of static policies, or a small number of templates with thousands of instantiations. In both cases, the actual logical unit (the authz model) is not thousands of lines of hand-written Cedar code, and in my view, macros or other heavyweight abstraction mechanisms are not needed to build and maintain this logical unit.

Finally, thank you all for a really great discussion. I've found it immensely helpful in crystallizing my own understanding of my gut reaction :))
Discussion Summary (05/14/2024):
This RFC has generated a lot of discussion but hasn't produced a consensus. Proposing that we track use cases that could be solved by this RFC and restart the discussion if we hit 10 use cases.