
Pass down the original input, making it available as second parameter in "custom" function #260

Closed
Lukasz17git opened this issue Nov 19, 2023 · 15 comments
Labels: enhancement (New feature or request), priority (This has priority)

@Lukasz17git

Lukasz17git commented Nov 19, 2023

Suggestion:

Pass the full input being tested down the tree (at call time, for some or all of the available methods, like parse/parseAsync), either as a third parameter to the _parse method or as a new property ("_full_input") on the info object. This would allow us to access the full input as the second parameter of the "custom" function, which I think would provide a good solution for customization and DX.

const schema = object({
   email: string([email()]),
   password: string([minLength(6, 'Minimum 6 characters')]),
   repeatPassword: string([
      minLength(6),
      custom((input, { password }) => input === password, `Passwords don't match`) // <= can write it now like this
   ])
})

Pros

Good DX, granularity, execution order; solves some of the issues with form libraries and their resolvers (works with react-hook-form and valibotResolver). Not being able to compare fields easily, or having a bad DX, may be a deal-breaker for devs using this library, and this suggestion fixes most of the common use cases.

Cons

The second parameter of the "custom" function most likely can't be well typed (it will have the "any" type), because the function runs before the whole original input has been checked against the whole schema, so you may access properties without their existence having been verified beforehand. Also, I don't think there is a way to infer/pass down the whole output-object type at schema definition time; you may have to provide it manually 😕

Branch with this implementation

passing down full input

Additional feature to complement this suggestion (I didn't test or implement this one)

We could also add a "deferedCustom" function, which returns an action with mode = 'defered', so that it doesn't execute during the main "_parse" run. Instead, if the pipe encounters a "deferedCustom" action (action.mode = 'defered'), it skips it until the main "_parse" execution finishes without issues; then it runs again, but only with the "deferedCustom" actions (this guarantees that the input matches the schema structure). An easy implementation/mental model could be:

export function parse(schema, input, info?) {

   // Add _full_input so it's available inside the "custom" and "deferedCustom" functions as the second parameter
   const infoWithOriginalInput = { ...info, _full_input: input }

   // Execute _parse skipping "deferedCustom" actions (action.mode = 'defered')
   const result = schema._parse(input, infoWithOriginalInput);
   if (result.issues) {
      throw new ValiError(result.issues);
   }

   // If everything is okay, now execute only the "deferedCustom" actions (action.mode = 'defered')
   const deferedResult = schema._parse(input, { ...infoWithOriginalInput, _defered_only: true })
   if (deferedResult.issues) {
      throw new ValiError(deferedResult.issues);
   }

   // output should be the same for both, result and deferedResult
   return result.output;
}

And inside the "executePipe" function

// executePipe.ts
 /*...*/
  // Execute any action of pipe
  for (const action of pipe) {
   // skip "deferedCustom" actions on normal _parse execution
   if (!parseInfo._defered_only && action.mode === 'defered') continue
   // skip all actions except the "deferedCustom" ones on the defered _parse execution
   if (parseInfo._defered_only && action.mode !== 'defered') continue
   /*...*/
}
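The two-pass idea above can be sketched as a minimal, self-contained model (hypothetical shapes and names; not valibot's actual internals):

```javascript
// Minimal model of the two-pass "defered" idea (illustrative, not valibot's code).
// A pipe is an array of actions; deferred actions run only after the normal pass succeeds.
function runPipe(pipe, value, fullInput, deferredOnly) {
  const issues = [];
  for (const action of pipe) {
    // Skip "deferedCustom" actions on the normal pass, and everything else on the deferred pass.
    if (!deferredOnly && action.mode === 'defered') continue;
    if (deferredOnly && action.mode !== 'defered') continue;
    if (!action.check(value, fullInput)) issues.push(action.message);
  }
  return issues;
}

function parseWithDeferred(schema, input) {
  // Pass 1: normal actions per field. Pass 2: deferred actions only,
  // now that pass 1 has confirmed the rest of the input is valid.
  for (const deferredOnly of [false, true]) {
    for (const [key, pipe] of Object.entries(schema)) {
      const issues = runPipe(pipe, input[key], input, deferredOnly);
      if (issues.length) throw new Error(issues.join('; '));
    }
  }
  return input;
}
```

For example, a `repeatPassword` field could hold a single deferred action `{ mode: 'defered', check: (v, full) => v === full.password, message: "Passwords don't match" }`, which only runs after every normal check passed.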

PS: I'm new to this library, so I only looked at the parts I use/need; I'm not sure whether these implementations may introduce bugs in the rest of the library, but I think everything should work fine. (All tests also passed in my branch where I implemented these changes.)

@fabian-hiller
Owner

I want to focus on the problem you are trying to solve with this issue in the next few days, so I appreciate any input. I had the same thought. However, type safety is the big problem. Strictly speaking, we would have to type the input as unknown, which makes it pretty useless. I don't think we should go down the any route as a type-safe library.

The most promising approach so far is PR #223, which marks the required keys as dependencies, so that the corresponding pipe function is only executed if the type of those dependencies is guaranteed. However, I have to investigate this first before I can make a decision.

@fabian-hiller fabian-hiller self-assigned this Nov 20, 2023
@fabian-hiller fabian-hiller added enhancement New feature or request priority This has priority labels Nov 20, 2023
@Lukasz17git
Author

Lukasz17git commented Nov 21, 2023

I see; after thinking about it, I agree with you. My current approach would be hard to maintain and error-prone if there are many changes over time in big/complex forms/inputs with a lot of custom validations. I took a look at #223. It's clever, but I'm not really a fan of it: it will look ugly once you start adding more and more fields/validations, and custom validations will be separated from the fields you want to apply them to (in some cases you want that, but in most cases you don't).

How about splitting the parsing into two parts? The first would be type validation only (the same checks as in TypeScript), and the second the methods themselves (like minLength, email, custom). This way it ensures strong typing in the first run, and then proceeds to validate all the fields.

export function parse(schema, input, info?) {

   // Add "_full_input" (so it's available in nested custom methods)
   // Add "_parse_mode" (to indicate the parsing mode)
   info = { ...info, _full_input: input, _parse_mode: 'types' }

   // Execute _parse only checking the types (skipping ALL PIPES)
   const typesResult = schema._parse(input, info);
   if (typesResult.issues) {
      throw new ValiError(typesResult.issues);
   }

   // If everything is good, now parse the pipes (skipping the type checks)
   info._parse_mode = 'pipes'
   const pipesResult = schema._parse(input, info)
   if (pipesResult.issues) {
      throw new ValiError(pipesResult.issues);
   }

   return pipesResult.output;
}
//declaration
const schema = object({
   email: string([email()]),
   password: string([minLength(6, 'Minimum 6 characters')]),
   repeatPassword: string([
      minLength(6),
      custom((input, { password }) => input === password, `Passwords don't match`)
   ])
})
// Execution, under the hood it will be like this:
//first run
object({
   email: string(),
   password: string(),
   repeatPassword: string()
})
// second run only if the typings are correct
object({
   email: [email()],
   password: [minLength(6, 'Minimum 6 characters')],
   repeatPassword: [
      minLength(6),
      custom((input, { password }) => input === password, `Passwords don't match`)
   ]
})
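This two-phase model can be sketched as self-contained JavaScript (illustrative field descriptors, not valibot's API):

```javascript
// Sketch of the two-phase parse: phase 1 checks types only, phase 2 runs pipes
// with the full (now type-safe) input available to every check.
function twoPhaseParse(schema, input) {
  // Phase 1: type validation only (the TypeScript-like structural check).
  const typeIssues = [];
  for (const [key, field] of Object.entries(schema)) {
    if (typeof input[key] !== field.type) typeIssues.push(`${key}: expected ${field.type}`);
  }
  if (typeIssues.length) throw new Error(typeIssues.join('; '));

  // Phase 2: pipe validations (minLength, email, custom, ...), which can now
  // safely read other fields from the full input.
  const pipeIssues = [];
  for (const [key, field] of Object.entries(schema)) {
    for (const check of field.pipe) {
      if (!check.fn(input[key], input)) pipeIssues.push(`${key}: ${check.message}`);
    }
  }
  if (pipeIssues.length) throw new Error(pipeIssues.join('; '));
  return input;
}
```

Note how a cross-field check like `(v, full) => v === full.password` in phase 2 never sees an input whose shape is wrong, which is the type-safety argument behind the split.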

@fabian-hiller
Owner

Your idea is great. I will consider it. However, there are two problems. Since each schema is independent, it can be combined with any other schema. For example, object({ email... }) could be used in another object({ ... }) schema. This means that the second argument of the custom function must be bound to the parent schema and not to the total schema (the input), otherwise we cannot guarantee type safety. I hope you can follow me. If not, I can explain with examples.

Another disadvantage is that if any type is wrong, none of the pipelines will be executed. The only way to avoid this is to know which keys must be valid to execute a pipeline function.

It would probably be an advantage to enable both in the final implementation: your idea and #223. I will probably start researching and testing this weekend.

@Lukasz17git
Author

Lukasz17git commented Nov 23, 2023

I see. How about adding two new things: smartCustom and smartObject ("smartSchemas")?

smartObjects will create new boundaries (by adding a _newBoundary prop to the schema), solving the first problem. (It's better this way, adding new schema declarations, instead of adding a new function that wraps an object schema, like smart(object({...})), so people won't use that stuff in places they don't have to.)

// smartObject.ts
export const smartObject = (objectSchema) => ({...objectSchema, _newBoundary : true})
// object.ts
// _parse method
for (const [key, schema] of cachedEntries) {
    const value = (input as Record<string, unknown>)[key];
    const result = schema._parse(value, _newBoundary ? { ...info, _full_input: input } : info );
// usage
import { smartObject } from 'valibot'
const registerSchema = smartObject({email ... })

Then smartCustom can only be used inside smartObjects, where its argument will be the closest smartObject value (I need to check whether this is possible to implement with the current valibot code/model). Optionally, it takes an array of the required well-typed fields (using the #223 idea) instead of all of them (the default).

smartCustom((full_input_value) => ...., 'error message') // all closest smartObject type must be valid
smartCustom(['pass1', 'pass2'], (full_input_value) => ...., 'error message') // only pass1 and pass2 types must be valid

So the new mental model would be something like this:

  1. Parse types.
  2. Parse the pipes of correct fields (even if there are typing issues elsewhere in the schema), skipping smartCustoms (depending on their requirements).

Still, regarding the second disadvantage, I don't think it's that big of a deal; most of the time your typings are going to be correct, and I can't think of a case where you would care about executing pipes if there is some kind of typing error in the schema.

I'm not sure whether there are more edge cases. The ones I can think of are methods like omit, partial, and pick with smartObjects, and the only "decent" solution is to disallow their usage? (Objects with nested smartObjects would still be able to use them.)

@fabian-hiller
Owner

As soon as I have a little more time in the next few days, I'll get back to you.

@fabian-hiller
Owner

Sorry that it took so long to get back to you. I just added a partialCheck action to the library. Do you think this solves the issue? Here is an example:

import * as v from 'valibot';

const RegisterSchema = v.pipe(
  v.object({
    email: v.pipe(
      v.string(),
      v.nonEmpty('Please enter your email.'),
      v.email('The email address is badly formatted.'),
    ),
    password1: v.pipe(
      v.string(),
      v.nonEmpty('Please enter your password.'),
      v.minLength(8, 'Your password must have 8 characters or more.'),
    ),
    password2: v.string(),
  }),
  v.forward(
    v.partialCheck(
      [['password1'], ['password2']],
      (input) => input.password1 === input.password2,
      'The two passwords do not match.',
    ),
    ['password2'],
  ),
);
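The behavior of partialCheck can be approximated in plain JavaScript (a sketch of the semantics under assumed shapes, not valibot's implementation): the cross-field requirement runs only once every listed path has passed its own validation.

```javascript
// Sketch of partialCheck semantics (illustrative, not valibot's code):
// run the requirement only when none of the dependent paths already has an issue.
function partialCheck(paths, requirement, message) {
  return (input, issuesByPath) => {
    const ready = paths.every((path) => !issuesByPath.has(path.join('.')));
    if (ready && !requirement(input)) return message;
    return null; // no new issue (either the check passed, or a dependency was invalid)
  };
}
```

This is why the typed second argument works: the check is only executed when `password1` and `password2` are guaranteed to match their schemas, so no `any` is needed.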

@Lukasz17git
Author

Hi, I forgot about this, haha, it's been a while. Yes, I think so. Thanks a lot!!! I hope to have the opportunity to use it again soon.

@fabian-hiller
Owner

What exactly is not working? It works as expected for me via the shared link.

@themre

themre commented Sep 23, 2024

Ah, I'm apparently blind. I thought I missed the password2 error, but I see it. I guess it's an issue in react-hook-form together with their resolvers. I opened an issue here: react-hook-form/resolvers#716

@fabian-hiller
Owner

That would be weird because I implemented the resolver. The code is pretty simple. Is your version of Valibot and the resolver up to date?

@themre

themre commented Sep 23, 2024

Yeah, really strange. So versions are:

"react-hook-form": "7.53.0",
"@hookform/resolvers": "3.9.0",
"valibot": "0.42.1"

Here is code sandbox: https://codesandbox.io/p/sandbox/wdxr64?file=%2Fpackage.json%3A16%2C5-18%2C24

@themre

themre commented Sep 23, 2024

Ah, I believe this condition makes it escape early:

const validateAllFieldCriteria =
      !options.shouldUseNativeValidation && options.criteriaMode === 'all';

@themre

themre commented Sep 23, 2024

Yes, if I now have this like so (https://codesandbox.io/p/sandbox/wdxr64?file=%2Fsrc%2FApp.tsx%3A40%2C18):

shouldUseNativeValidation: false,
criteriaMode: "all",

then it works correctly. The only issue is that it doesn't work if I use this:

resolver: valibotResolver(Schema, {
     abortEarly: false,
   }),

maybe this should also be taken into account. So basically:

const validateAllFieldCriteria =
     schemaOptions.abortEarly || (!options.shouldUseNativeValidation && options.criteriaMode === 'all');

@fabian-hiller
Owner

abortEarly: false should not affect the abortPipeEarly configuration, as false is the default. With abortEarly: true the schema aborts the validation after the first issue and with abortPipeEarly: true the schema aborts the validation of a pipe after the first issue. Since the password2 check is in the pipeline of object, it will not be executed if object returns an issue and abortPipeEarly is enabled.
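The difference can be sketched in plain JavaScript (an illustrative model, not valibot's internals): abortEarly stops the whole parse after the first issue anywhere, while abortPipeEarly stops only the current pipe.

```javascript
// Sketch: effect of abortEarly vs abortPipeEarly on collected issues
// (illustrative model, not valibot's implementation).
function validate(schema, input, config = {}) {
  const issues = [];
  for (const [key, pipe] of Object.entries(schema)) {
    for (const check of pipe) {
      if (!check.fn(input[key])) {
        issues.push(`${key}: ${check.message}`);
        if (config.abortEarly) return issues; // stop the whole parse
        if (config.abortPipeEarly) break;     // stop only this field's pipe
      }
    }
  }
  return issues;
}
```

With two failing checks on one field and one on another, the default collects all three issues, abortEarly returns after the first, and abortPipeEarly returns one issue per field.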
