
Handling of yaml references/anchors #147

Closed
jdeluyck opened this issue Dec 10, 2018 · 4 comments
Comments

@jdeluyck

I'm trying to use pykwalify to validate YAML files that users deliver to me, before processing them.

Creating the pykwalify schema to validate a document that contains references is a bit of a pain, though: I have to make sure the YAML and any references it uses are defined in a single document.

Would it be possible to extend the syntax to allow loading an arbitrary number of YAML files, which then get concatenated internally into one string? Or how should I handle this?
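For context, anchors and aliases that live in the same document are resolved by the YAML loader itself before any validation happens, so pykwalify only ever sees the expanded data; the pain point described above is anchors and aliases split across files. A minimal PyYAML sketch of in-document resolution (the `defaults`/`service` keys are purely illustrative):

```python
import yaml

doc = """
defaults: &defaults
  retries: 3
  timeout: 30

service:
  <<: *defaults        # merge key pulls in the anchored mapping
  name: api
"""

# By the time safe_load returns, the anchor is gone: the alias has been
# expanded into a plain nested dict that any validator can consume.
data = yaml.safe_load(doc)
```

An alias in one file pointing at an anchor defined in another file is not valid YAML on its own, which is why the two have to end up in one document (or one concatenated string) before loading.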

@Grokzen
Owner

Grokzen commented Dec 10, 2018

Short answer is: no.

But... inside #143 this is kind of implemented: to make the include feature work, I concatenate all sources of the schema, to fix some issues in the implementation.

But I am not sure about that feature. Right now the job of merging and/or preparing the data that you want to validate is up to you, not this lib. There is an option to run multiple different data blobs against the same set of schemas, but not the other way around.

@Grokzen
Owner

Grokzen commented Dec 10, 2018

Another argument against having data merging inside the lib is that there are too many ways to deal with the merge and the entire deep-merge problem. There are too many factors to handle to make a generic or configurable solution that merges multiple data blobs into a single one just the way you, and everyone else, want. Pushing the problem up to the user makes it much easier to customize the merge any way you want before sending the data in for validation.
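The pre-merge the maintainer is pushing to the user's side can be a few lines of code. A hypothetical sketch of one possible policy (nested mappings merge recursively, the later blob wins on scalar conflicts; none of this is pykwalify API):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge ``override`` into ``base``, returning a new dict.

    Scalars and lists in ``override`` replace those in ``base``; nested
    mappings are merged key by key. This is just one of the many possible
    merge policies the comment above alludes to.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


# Fold several loaded YAML blobs into one dict, then hand the single
# merged result to the validator of your choice.
combined = {}
for blob in ({"db": {"host": "localhost"}}, {"db": {"port": 5432}}):
    combined = deep_merge(combined, blob)
# combined == {"db": {"host": "localhost", "port": 5432}}
```

Whether lists concatenate or replace, and which side wins on type conflicts, is exactly the kind of policy question that makes a one-size-fits-all in-library solution hard.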

@jdeluyck
Author

I'll keep an eye on #143. I understand the reasoning, though it would be practical ;)

@Grokzen
Owner

Grokzen commented Dec 11, 2018

I know it might seem like that from the consumer's point of view, but from my point of view it only makes things a lot harder to build and maintain. One could also invoke the Unix philosophy argument about tools and their purpose: this tool's purpose is to take in some data and a schema, and validate the data against the schema. It does not exist to handle the deep-data-merge problem, as that problem is in and of itself really difficult to solve in a generic way.

@Grokzen Grokzen closed this as completed Dec 11, 2018