RFC: Templates — Reusable packages to share dependencies and configuration #3
Conversation
@zkochan There are a few differences in this RFC from what we discussed previously. I think the differences are helpful, but please feel free to disagree if that's not the case.
text/0003-templates.md (Outdated)

> ## Unresolved Questions and Bikeshedding
>
> - In prior discussions, this feature was referred to as "_Environments_". The initial draft proposes "_Templates_" to make it more clear that this feature is simply a `package.json` authoring mechanism. Templates/environments are not themselves installed.
The proposal here ended up a bit different than the "environments" concept in Bit: https://github.com/teambit/envs
To achieve component isolation, Bit provides the extension with the Capsule APIs to create a separate development environment, detached from the original workspace and codebase. It then runs all operations (build, test, render, etc.) on the isolated environment to provide feedback.
While Bit environments tend to be fully featured and installed development environments, the goals pnpm has here can be accomplished by a simple templating system that only affects the `package.json` file.
I'm not too partial to the existing "Templates" naming. We can rename back to "Environments" or a different name if that makes more sense.
text/0003-templates.md (Outdated)

> A package can reference the template to populate different portions of its definition. The specific feature is determined by `pnpm.templates`: `extends`, `toolkit`, and `catalog`.
>
> ### Extends
I'm actually not sure if we need `extends`. This is definitely the most complicated part, and it's possible `toolkit` and `catalog` are sufficient in the initial version.
What I care mostly about is the possibility to share fields like `overrides`, `patchedDependencies`, `packageExtensions`, and `peerDependencyRules`.
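For readers less familiar with these settings, here is a minimal sketch of the kind of `pnpm` configuration that currently has to be copy-pasted into every repository that needs it (package names, versions, and the patch file are purely illustrative, not from the RFC):

```json
{
  "pnpm": {
    "overrides": {
      "minimist": "^1.2.6"
    },
    "packageExtensions": {
      "react-redux": {
        "peerDependencies": {
          "react-dom": "*"
        }
      }
    },
    "patchedDependencies": {
      "lodash@4.17.21": "patches/lodash@4.17.21.patch"
    },
    "peerDependencyRules": {
      "ignoreMissing": ["@babel/core"]
    }
  }
}
```

A template that could carry these fields would let a single shared package own this block instead of each workspace root duplicating it.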
My concerns are centered mostly around `scripts` and not having to use `mrm` or similar tools to keep those up to date across all packages/projects. That's been the largest motivator for looking at make again, and at tools such as https://moonrepo.dev/. In a basic product monorepo we'd have different base scripts for services, apps, and packages.
text/0003-templates.md (Outdated)

> ### Lockfile
>
> No explicit changes will be made to the `pnpm-lock.yaml` file. Any `importers` entries referencing a template will have their rendered result saved to the lockfile.
If we don't save the template versions to the lockfile, we'll have to fetch all template metadata files in order to verify that the lockfile is up-to-date. Which might be OK. I don't know.
But also, currently we store the integrity checksum of every package. Should we store the integrity checksums of the template packages? What if someone hijacks a template package?
> If we don't save the template versions to the lockfile, we'll have to fetch all template metadata files in order to verify that the lockfile is up-to-date. Which might be OK. I don't know.

I'm also a bit unsure. My thought was that the template will be saved to the pnpm content-addressable store. (I'm realizing I should have elaborated on that in the RFC.) From there, loading the template on subsequent runs becomes a disk block I/O operation, much like reading part of `pnpm-lock.yaml` would.

> But also, currently we store the integrity checksum of every package. Should we store the integrity checksums of the template packages? What if someone hijacks a template package?

This is a great point. We should absolutely store the integrity checksums of the template packages. Thank you for the catch. Will update the RFC.
Commit cb35007 now describes a new `templates` block.

Let's consider the following scenario. I want to share a patch to a package. How would I do it? I guess I can extend the
text/0003-templates.md (Outdated)

> We expect most monorepos to use this feature to keep dependency specifiers consistent between different in-repo packages.
Let's add more details to the section about catalogs. If you remember, we discussed how only versions from dependencies and peerDependencies of the template will be used: https://github.com/orgs/pnpm/discussions/5974#discussioncomment-4783001
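To make that constraint concrete, a minimal sketch of a template (names and versions are illustrative): only its `dependencies` and `peerDependencies` would feed the catalog, so its `devDependencies` would not be resolvable through `catalog:`.

```json
{
  "name": "@example/template",
  "version": "1.0.0",
  "dependencies": {
    "react": "^18.2.0"
  },
  "peerDependencies": {
    "typescript": "^5.0.0"
  },
  "devDependencies": {
    "eslint": "^8.40.0"
  }
}
```

Under that rule, a consumer's `"react": "catalog:"` entry would resolve to `^18.2.0`, while `"eslint": "catalog:"` would not resolve at all, since `eslint` only appears in the template's `devDependencies`.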
Ack! Will do. Agree this is important to solidify into the RFC.
Added some details to catalogs here: 24c69db
I reread the initial discussion linked 2 comments above, and I'm actually a bit curious whether we want to keep the "`peerDependencies` can only catalog `peerDependencies`" restriction. I'm not too opinionated and could go either way.

Just wondering if users might be surprised by the need to copy a specifier from `dependencies` to `peerDependencies` when they try to use it in the latter. How would we explain that restriction to someone asking in the future?
Let's also describe how this would work in a workspace. Will the projects in the workspace "inherit" templates from the root `package.json`?
This is a great question I hadn't thought about. I was hoping pnpm could only fetch the manifest metadata from the registry, but perhaps we do want to fetch the entire package and link it into the CAFS. Will think more about this.
Oh I see. I was imagining projects in the workspace would not "inherit" templates from the root `package.json`. We should think about something like this. I think one challenge will be keeping the semantics between inheriting a
Also, another potentially confusing thing: currently, most of the fields from the "pnpm" section of `package.json` are only honored when set in the root project of a workspace.
Before iterating more, I could use help deciding between 2 different directions the RFC can go in. The `extends` direction is the more powerful one. However, I'm concerned that it might be too powerful and introduce a new form of security vulnerability on the ecosystem. For example, a package might depend on an external toolkit-like package through `extends`. On a team, the problematic workflow might look like this:

Another field a malicious template could add is

I think this leaves a few options:

I personally wouldn't trust any package asking for me to add it to the

This means whenever pnpm supports new
It is not more dangerous than depending on other open source dependencies. It is actually less dangerous because the template dependency will have no dependencies of its own, it will be used only locally, and it will be required only by exact versions. We can list the fields that should be inherited but it won't be completely secure either. For instance, if we list devDependencies as inherited, in a new version the template may include a new devDependency that wasn't present in the past. If we include
How is this different than providing an allowlist for the `extends` field? cc @pnpm/collaborators
I agree templates are safer in some ways. My comment was around the
My security concerns are gone if we inherit nothing by default. 👍
It's very similar. I think the differences are a matter of personal taste rather than technical. I'm comfortable moving forward with the allowlist option, but wanted to avoid that by grouping templates into different purposes:
This is simply a different alternative that avoids having to add an allowlist. It also makes it clearer when creating a template what purpose it's meant for. I could see confusion after users forget to update their allowlist when updating a template. In the grouped templates model, users opt into the types of risky changes they're willing to accept rather than specific fields.
On the one hand, this provides a better user experience for the consumer, who won't have to list the allowed fields. On the other hand, it forces template authors to create more templates. What if there are multiple fields that should be changed by a template to fix an issue? For instance, a single template needs to add a new dev dep and a new script that runs the dev dep.

Actually, now that I've given it a thought, if we choose a generic template, I think it should be possible for the generic template to specify "required" fields. So if you use that template you "must" allow a given list of fields. Otherwise, there's a chance that the template will not work as expected.
The aforementioned security concerns really only matter for registry-based template packages. That doesn't really apply to templates located within workspace packages. One thing I didn't see mentioned (please correct me if wrong): how many templates can a single package contain, and how are they loaded from the package? Consider this ESLint config: https://github.com/shellscape/dot/blob/b850d274cafaecd5df368e37f0d3238bd1ca4f20/.eslintrc.js#L2 It extends my own config from
In that case, since the mentioned use cases are in a workspace scenario, could templates be restricted to only being declared as a workspace package (i.e. a `package.json` can only extend from a template declared in the same workspace)? This would help to mitigate the security concern, I guess.
@kenrick95 you mean limiting templates to local templates only? That would make templates a lot less useful.
That's right. Something that hadn't been stated yet was that the allowlist of fields to inherit from a template isn't necessary in an entirely local workspace. We might want to discuss more, but we could reasonably default that to allow every field in that case.
Open to suggestions, but the current idea would be to have 1 template per package. The template would be the package's `package.json`.

It depends on whether we want to ship additional stuff with the template, like the patch files I mentioned. Overall, we can use the `package.json` from that endpoint, but we'll have to request the full metadata, not the abbreviated one.
It's probably ideal for authors to create more templates if they're intended for different purposes? I would worry about the other direction, where authors create overloaded templates that are used for multiple purposes.

I was hoping we could enumerate the full set of

Nothing stops them from doing that even with generic templates. But the learning curve is smaller, and it's less work for us in designing different template types.
I've got kind of a crazy idea. Doesn't a template with an override basically replace what peer dependencies are for? The only difference is that an override currently doesn't print a warning when it overrides a range with an incompatible range or version. But unlike peer dependencies, an override will never cause the same package to be duplicated twice in the dependency graph. So is it even a better approach?

Edit: actually, it is not even related to templates. It is just related to overrides.
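A small sketch of the idea (version numbers are illustrative): a shared override pins a single version for the entire graph, which is what a peer dependency range is usually trying to achieve.

```json
{
  "pnpm": {
    "overrides": {
      "react": "18.2.0"
    }
  }
}
```

Unlike a `peerDependencies` range declared by each library, the override is applied once to the whole resolution, so the package can never be duplicated; the trade-off, as noted above, is that no warning is printed when the forced version falls outside a dependency's declared range.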
I really like that templates are resolved on-the-fly, unlike peer deps or pnpm hooks, but I worry how this would play with all other tooling. IDEs pick up dependencies to enable features (e.g. mocha), and libraries do too. Could we use a meta-manifest instead? Something like `package.meta.json`, same as the "in-memory" version. Whenever pnpm uses the package manifest it reifies it into a `package.json` with the "on-disk" version? Only the meta file would need to be version-controlled, but there would be no harm in adding `package.json` - in fact it would make it more straightforward for non-pnpm users to use it. Since pnpm would read `package.json` during pack/publish, triggering a reification, it would also just work. And the reified file could also act as a cache. There could also be a CLI command to trigger it on-demand for custom builds, to work out env incompatibilities, etc.
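A rough sketch of how that could look (the `package.meta.json` file name comes from the comment above; the contents, including the `catalog:` entry, are illustrative). The version-controlled meta-manifest keeps the dynamic references:

```json
{
  "name": "@example/app",
  "version": "1.0.0",
  "dependencies": {
    "react": "catalog:"
  }
}
```

and whenever pnpm reads the manifest (install, pack, publish), it would reify it into a plain, fully resolved `package.json` on disk for other tools to read:

```json
{
  "name": "@example/app",
  "version": "1.0.0",
  "dependencies": {
    "react": "18.2.0"
  }
}
```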
Actually my idea doesn't make sense because overrides will not work in a workspace, where different projects may have different versions of the peer dependencies installed.
This was one strong reason the

You're right that dependencies defined through the inheritance/extends-based templates would have difficulty interacting with IDEs that read `package.json`.
There's definitely merit to this idea. A few questions:
I like the idea because it's the academically correct approach that allows full compatibility with existing tooling, but I think we're willing to make some compatibility tradeoffs for

I did want to comment that this is an awesome feature of pnpm. My team is in the middle of migrating a large monorepo with 4 million lines of TypeScript from Webpack 4 to Webpack 5, and this would be impossible to do incrementally without pnpm's peer dependencies resolution.
Right, it's still possible for developers to split templates that are intended for different purposes correctly with the generic option. If our solution shape can provide guardrails that limit developers into best practices out of the box, I'd prefer that. If I can analogize this, I think this is similar to how it's possible to write

I think the choice of multiple limited template options or a singular powerful extends option (generic templates) comes down to whether we're okay with the solution surface area covering less than the initial problem surface area, as opposed to more. If we were to add a solution that covers more than the surface area of the problems (

Given that, my current personal leaning is to remove `extends`. The risk of this direction is if we introduce

Multiple fields that need to be changed to fix an issue is still the case that is hard to account for without a generic extends-based template option. For this specific example, I would bet that this won't be a common problem? I'd expect most developers to template

All that said, I want to recognize that I'm starting to repeat points made in earlier comments. I've been thinking about this for 3 weeks and still prefer multiple limited templates since that's the safer option, but I would want to avoid the discussion going in circles. If you have a strong preference @zkochan, I'm happy to update the RFC for it.
I am OK with your suggestion. If someone else has objections, speak up.

I had a great chat with @zkochan in a Twitter DM. I shared a bit of context on what brought me to this issue and where I was coming from. I plan on spending some time tomorrow morning to write up a bit more about where I think this RFC can be improved. I will also likely join the Discord so that I can ask my questions in a more real-time way, so I don't just drop into a project I haven't really participated in before and disrupt things.
Responding to @wesleytodd #3 (comment), #3 (comment), #3 (comment)
Having read the RFC for parent `package.json`, I would find feedback from someone in that group very insightful. Thanks for dropping by! 👋
I do think we're mostly aligned on the problems. I'm in agreement that some sort of solution here would be important. I similarly work on a "platform team" for my day to day.
I appreciate the thought behind this and I completely welcome your detailed feedback! I similarly would have asked for permission to give feedback before jumping into a different project's RFC discussions page. For those in the future, I think we're fairly open here at PNPM and usually prefer up-front feedback, but acknowledge that's an unknown until you've hopped in. 🙂
I'm extremely worried about introducing new classes of security vulnerabilities to the NPM ecosystem. In earlier drafts of the RFC, we had (simplified explanation) 2 flavors of templates:
The security concerns around the now removed
Yes, I definitely acknowledge that's more complicated than simple inheritance. If I can try to convince folks why the delineation is valuable:
Awesome! 🙂
It sounds like many of us are aligning towards having this RFC be only about catalogs, and moving the other parts that involve inheritance/extends into a different RFC. I'd welcome feedback either way, but just a heads up that this RFC may be refactored significantly tomorrow. Thanks for your thoughts @wesleytodd! 😁
I'm not too opinionated on the exact logistics of how we break things up, but I'm leaning towards these actions:
The primary difference between catalogs in this RFC and workspace-consistent in RFC #1 is that catalogs are "shareable", which I still think is a very cool property with a lot of potential that @zkochan thought of. @zkochan I think the actions above affect you the most as the primary reviewer. Does that sound okay?

### Rationale

The actions above make most sense to me after revisiting the discussion in both RFCs.

@gluxon ok, sounds good
I still believe it is important to write the "static" `package.json` to disk. Tools doing dependency analysis would be the ones I am most concerned about, but I am sure there are tools and workflows that parse other fields and get it wrong in subtle ways if something is missing or not aligned with what the template renders to in the published version. Having to hunt down bugs that were introduced by an opaque setup like this would quickly eat up any time savings you had, and then some. Writing the output to disk raises awareness, because you don't have to run pack or another command to see the potential result, and it avoids the issues with other tools.
For stuff like "scripts", maybe. But for the catalog protocol, I am not so sure.
Like what tools? I don't think I use such tools at the moment; that is why I ask. All the version information would still be available in a "static" state inside the lockfile.

From personal experience: vite-plugin-svelte reads the package dependencies to autogenerate extra Vite config. There are other Vite plugins doing similar things. While most of these work with packages in node_modules, it would fall apart if they did fs reads on `workspace:` packages. And there are a lot of vulnerability scanner/auditor/corporate snakeoil things reading `package.json` directly.
Please don't use

Lastly, & where I think I've seen the most discussion, if a template were ever going to also be a valid
How would you get the version spec of a dependency from a "template"? The current suggestion is to use a new catalog protocol. We can't just extend the `dependencies` field with all the dependencies of the template, as we might not use all of those dependencies. With catalog we can pick just those that the project uses, and we can also put them in the right dependencies field:

```json
{
  "dependencies": {
    "is-odd": "catalog:",
    "is-even": "1.0.0"
  },
  "devDependencies": {
    "is-number": "catalog:"
  }
}
```
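For completeness, a sketch of where those `catalog:` specifiers could resolve from (the shape and versions are illustrative; where exactly the catalog lives is what this RFC and RFC #1 debate): a shared definition that maps names to version specifiers.

```json
{
  "name": "@example/catalog-template",
  "version": "1.0.0",
  "dependencies": {
    "is-odd": "^3.0.1",
    "is-number": "^7.0.0"
  }
}
```

With something like this in place, the `"is-odd": "catalog:"` and `"is-number": "catalog:"` entries above would resolve to `^3.0.1` and `^7.0.0`, while `is-even` keeps its explicitly pinned `1.0.0`.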
Awesome @gluxon! I think one of the main things I was going to recommend for the RFC doc (and really any technical proposal) is to spend a bit more time in the write up on the problem statement. It really helps folks like me, who drop in and read it to understand what you are trying to do and not just make assumptions like I did.
This is awesome to hear and I felt the same from @zkochan in our DM, thanks!
I share your worry. It is actually one of the reasons I fear complexity in these types of things. It is critical for security that end users understand the system enough to make good decisions. When they do not understand it, they are likely to make mistakes which reduce security overall (for example, blowing away package lock files instead of understanding how to fix them). So it was not exactly that I was sure this proposal introduced any clear issues; rather, the fact that it was a bit hard to follow made me worried end users would also not be sure how it worked.
FWIW, the stuff below this is stuff that I would have loved to read in the RFC itself. I can give specific feedback on these points in a way that the original RFC doc really didn't help much with. And honestly, I see in the comment you linked that you did address the "why not extends" but it was not included in the RFC doc. That is my bad for not reading it, but it makes it hard when the comment thread is so long, so maybe adding a section of the doc with links to key thread comments would help?
Seeing that you addressed this, I now understand more about what you are thinking here. I still think that the complexity of the current proposal is too high, but I understand more how we got here. I think most of the above commentary is about the RFC description and not the technical details of the RFC, so I will split that off for now from below which is my more technical responses.
I think these are two separate concerns. We have always had the "extraneous dependency" problem and the reverse "implicit dependency" problem. There are tons of tools to help deal with this. I am not saying this is not a problem, just that letting that problem block a really ergonomic feature might not be in the best interest of the users. I think a reasonable compromise here is to say "if you are extending a package.json you get everything in there" and then maybe you could introduce a new way of explicitly opting out of individual dependencies from there (just brainstorming)

Not only is it not realistic, I think it is not desired. This was something that came up in the discussion in the npm RFC as well, and I think what sticks with me from that is that we need semantics which we all (tool maintainers) stick to with regards to

I am fully in support of @dominikg's comments above. I think finding a middle ground where the upkeep from a project maintainer is minimized but also doesn't affect the currently nice debugging ergonomics of a static `package.json`
I assume this meant things like
This is for sure one good way to augment

You'd load the configuration into the tool. No different than what will have to happen if you "extend" a `package.json`. I'd imagine you could simplify the architecture/API here by just using env vars & walking/loading consumer modules which will contain whatever nuance they want. Example:

```
[pnpm]
prefix="ws-"
version=...
author=...
license=...
deps=...
devDeps=...
```

```js
// Illustrative consumer module: assumes pnpm exposes its parsed config
// to the process (e.g. via the environment) before loading this file.
const { prefix, version, author, license, deps, devDeps } = process.env.pnpm

module.exports = {
  name: `${prefix}foo`,
  version,
  author,
  license,
  "dependencies": {
    "is-odd": deps["is-odd"],
    "is-even": "1.0.0"
  },
  "devDependencies": {
    "is-number": devDeps["is-number"]
  }
}
```

The above is by no means prescriptive & you could change the config & consumer name/format, etc. If you were deadset on using
I think I like the solution suggested by Darcy. It is definitely better than extending everything and then excluding what is not needed, as we might end up excluding dozens of dependencies.

We already have the However, if this will be a js file, I am not sure how the

EDIT: Wait a moment. This now starts to look the same as a tool that I have already created: https://github.com/pnpm/meta-updater. That tool works because it also uses info from `package.json`. So it reads the `package.json`, makes some changes to it, and writes it back.
@zkochan I don't think you can get around the fact that
Following up to #3 (comment), the "Catalogs" concept has now been moved back to #1. I'm appreciative of the recent engagement from @dominikg, @wesleytodd, @styfle, and @darcyclarke. If any of you have time, I would love all of your continued feedback on catalogs over in RFC #1. It seems like we're aligned from the PNPM side that catalogs are the feature we're most excited about. Templates were an attempt to go a bit further and solve some other common PNPM feature requests, but I recognize we have some more thinking to do. To briefly respond to everyone's comments since last time:
Responding to @dominikg. Prior comments: #3 (comment), #3 (comment), #3 (comment), #3 (comment)
To recap, I asked in #3 (comment) if static `package.json` files written to disk are a hard requirement. I think it's worth noting that PNPM would not be the first package manager to introduce a new version specifier protocol.

Regarding dependency analysis tools parsing `package.json`: you're right to be concerned about existing tools breaking. However, maintaining 100% compatibility with all third-party tools would severely limit any ability to improve developer experience. I think in the
Definitely possible! I'm sure the community would be interested in other ideas you have to reduce merge conflicts. Another idea in the past was to break up parts of the
A lockfile merge driver has been explored in the past, but the main problem is that it's a partial solution, since GitHub (I believe rightfully so) doesn't support lockfile merge drivers. This means merge conflicts will continue to slow down developers when they're ready to merge their pull requests.

To give more context, I do believe reducing merge conflicts is an important concern. PNPM has been chosen by many organizations for its ability to scale well to monorepos with thousands of packages. In those situations,

@dominikg If you have the time, I would appreciate any thoughts or alternatives you have around catalogs in #1. 🙂
I've been going back to the drawing board over the last few days. First, I believe what's missing is an explanation of how catalog and toolkit are solving a somewhat uniquely pnpm concern.

### Isolated Mode

I believe pnpm's semi-strict isolated mode is fantastic. It means phantom dependencies, where packages forget to declare their dependencies in their `package.json`, are not possible.

### Declaring Dependencies: Do, or don't?

However, properly declaring dependencies across large monorepos with thousands of packages is actually surprisingly difficult, especially when there are hundreds of humans each working on their own feature. You end up with a dependency graph that's full of duplicates.

Thus, pnpm has two groups of users that this RFC was attempting to make happy:

NOTE: In case it's relevant, I'm personally in the first camp. The bias in my writing might be showing a bit.

### Focus

To recap, the features that are important to continue thinking about are catalogs and toolkits. My theory for why they're so frequently discussed in the pnpm repository is that both address side effects stemming from pnpm's default isolated linker mode.

### Proposal for Next Steps

Looking forward, I propose we take any and all templating out of our solution space. This includes templating in-memory, at publish, and onto the disk. Given that, let's look at each of the problems the template flavors attempted to address and discuss our approach to each.

### Catalog

This has been moved back to #1. Mention of templating

### Authoring

I would personally vote we stop attempting to solve this in pnpm. There are already many existing tools to synchronize these fields by editing `package.json` files directly.

### Scripts

I would also vote we stop attempting to solve this in pnpm. One of my favorite aspects of

Users that may need to share a set of commands across multiple packages already have a simple workaround. It's simple to create a
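One common shape such a workaround can take (a sketch; the package names and the `bin`-based approach are assumptions, not necessarily what this comment had in mind): the shared commands live in one workspace package,

```json
{
  "name": "@example/repo-scripts",
  "version": "1.0.0",
  "bin": {
    "repo-build": "./bin/build.js",
    "repo-test": "./bin/test.js"
  }
}
```

and each package depends on it and calls the exposed commands from its own `scripts`:

```json
{
  "name": "@example/some-app",
  "version": "1.0.0",
  "devDependencies": {
    "@example/repo-scripts": "workspace:*"
  },
  "scripts": {
    "build": "repo-build",
    "test": "repo-test"
  }
}
```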
### Compatibility and Patches

Let's make these explicit remote references rather than templated.

```json
{
  "pnpm": {
    "packageExtensions:remote": [
      "@example/patches@0.1.0"
    ],
    "packageExtensions": {
      "react-redux": {
        "peerDependencies": {
          "react-dom": "*"
        }
      }
    }
  }
}
```

The `@example/patches` package referenced above would look something like:

```json
{
  "name": "@example/patches",
  "version": "0.1.0",
  "exports": {
    "pnpm-packageExtensions-v1": "packageExtensions.json"
  }
}
```

I think I actually prefer this syntax over the existing RFC since it's more explicit.

### Toolkit

This is the hard one, and why my comment goes on a detour about isolated mode. At the heart of this, what we're currently thinking of as toolkits is an isolated mode "escape hatch". In package managers with a flat `node_modules`, subdependencies are already importable by default.

First, I'm hoping we can start a more thorough discussion on the pnpm side for why users want to selectively escape isolated mode for certain packages. It's possible we can learn a bit more. If the use case is valid, I think we need to consider a more formal concept for exposing subdependencies. A similar formal concept for "exposing" parts of a package today would be:

```json
{
  "exports": {
    ".": "./index.js",
    "./submodule.js": "./src/submodule.js"
  }
}
```

You could imagine that if we absolutely had to solve this problem, one (really unpleasant) formalized concept would be:

```json
{
  "dependencies": {
    "something-my-consumers-can-import": "0.0.0"
  },
  "dependenciesExports": [
    "something-my-consumers-can-import"
  ]
}
```

But let's not do that. The above is to illustrate an alternative given no other options. I would be curious to hear from folks that have worked on npm: how would you respond to folks asking for an isolated mode selective escape hatch?
👋 from Turborepo!
The term that I use to describe these goals is conformance. There are two general strategies for this:
After years of experience in the infra space, I feel comfortable asserting that "conformance via single source of configuration" is not going to work outside of trivially-sized projects. Even if it is an option, people will fail to properly extend from the source of truth. 🤦♂️

Given that, these features would implicitly be "shortcut" ways for (human) authors to quickly assert conformance, or for (less-complex) tooling to assert conformance. This RFC is essentially competing with external tool adoption for conformance.

Isolated mode: obviously correct, and weird to conflate with conformance. "Partial isolated mode opt out" should be split from this conversation. I would not attempt to solve that using any of the strategies proposed in this RFC.
I don't think that missing native support in GitHub is an argument against this feature. People using it can do so on their command line with native git and pnpm and push the result.
To piggyback off of this, I came to this thread via https://github.com/orgs/pnpm/discussions/5974. I agree with @nathanhammond. I also think that this RFC might be too ambitious when just having a shared, root-defined package version to be reused would do most of the legwork here.

Root `package.json`:

```json
{
  "sharedPNPMDependencies": {
    "react": "18.12.0"
  }
}
```

Some package's `package.json`:

```json
{
  "dependencies": {
    "react": "workspace"
  }
}
```

Here `"react": "workspace"` would resolve to 18.12.0. This would all resolve to a common pnpm-lock file but allow you to quickly pull out packages. The whole concept of packages of packages and imports/exports seems like way overkill in my opinion.
Responding to @nathanhammond and @jakeleventhal: #3 (comment), #3 (comment) 👋 Hello! I'm a big fan of everything folks at Vercel are working on. Thanks to you and your team for improving open source.
@nathanhammond The "conformance" term is useful nomenclature here. Thanks for bringing it into this conversation. I'm also leaning away from conformance of authoring fields. I think usage of the catalog to declare dependencies consistently might still be a useful conformance tool though, namely because usages of a dependency in the default catalog can be enforced easily in pre-merge CI checks. In that case, people that "fail to properly extend from the source of truth" can be given a nudge towards the right direction. 🙂

Am I understanding the "conformance" idea correctly? Do you see concerns of "conformance via single source of configuration" for #1? RFC #1 (Catalogs) currently doesn't enforce usage of the catalog, but I think it's a useful followup addition.
Agree. We can start a new discussion on this elsewhere.
@jakeleventhal Also agree! I think we're going to do what's essentially the suggestion above for catalogs in #1.
Responding to @dominikg #3 (comment)
I do agree that it's useful to have a merge conflict driver in addition to anything we do here. Apologies for the unclear wording in the earlier comment, but I would do both. Any formatting changes or data deduplication we can do to prevent merge conflicts in the first place is valuable. @zkochan and I have made prior improvements to reduce `pnpm-lock.yaml` merge conflicts.

My earlier argument was that a merge conflict driver doesn't completely replace the need for a

For reference, I think there is a working merge conflict driver available today: https://github.com/pnpm/merge-driver
I suspect you're correct. My gut is the average number of packages in a pnpm monorepo is <10. Similar to the earlier comment, I'd still be interested in helping repositories that host thousands of packages in a way that doesn't make it more difficult for the average case.
I'm appreciating all the feedback. The most contentious points folks have called out are around importing/exporting

Waiting for @zkochan's thoughts around the suggested alternative solutions, which don't involve sharing
Caught up with @zkochan in a DM. We're going to close this RFC. 🎉 Thanks to everyone who helped axe it. 🪓 Your feedback prevented a bad feature from making it onto the Internet. The world is a better place thanks to you. 🙇🏻♂️

Catalogs will continue in #1. All other features we were attempting were lower priority and won't be addressed for now.
@gluxon thanks for working on this RFC. You did a good job both with the content and communicating with the reviewers. We have definitely learned a lot in the process. |
I have created a somewhat related RFC about "config dependencies": #8 These are a new type of dependencies that are installed before everything else in order to allow sharing some configs between projects.
This is a proposal of a templating system for `package.json` files. The RFC does not propose a full templating system where any `package.json` can be referenced to populate any other portions. Templating is limited to more specific functionality requested within the community.

Link to rendered text
Originally proposed by @zkochan in pnpm/pnpm#2713 (comment).
Previous discussions: