
Test arbitrarily generated opshin validators with hypothesmith #167

Open
juliusfrost opened this issue May 25, 2023 · 5 comments
Labels: bb: medium (Medium issue according to bug bounty categorization), bug bounty (This issue is prized out as part of the Bug Bounty Program), enhancement (New feature or request)

Comments


juliusfrost commented May 25, 2023

Is your feature request related to a problem? Please describe.

So far we have to write manual unit tests for compilation with opshin. Using hypothesis to generate opshin code would help catch compilation bugs at a higher rate and give more assurance about what opshin can do.

Describe the solution you'd like

I found a cool library called hypothesmith that can generate arbitrary Python code.
It would be awesome if we could extend this to test validators with the reduced subset of Python that is valid for opshin.

I have not taken an in-depth look at this library yet, but the next step would be to run some experiments generating opshin code.
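As an illustrative sketch (not the actual integration), the "reduced subset" idea can be approximated with the stdlib alone: parse a candidate snippet with `ast` and accept it only if every node is in a whitelist. The whitelist below is a hypothetical stand-in for the real opshin subset, and the commented `hypothesmith` usage shows roughly how it would plug into a hypothesis test:

```python
import ast

# Hypothetical whitelist approximating an opshin-like Python subset;
# the real subset would have to come from opshin's language reference.
ALLOWED_NODES = (
    ast.Module, ast.FunctionDef, ast.arguments, ast.arg, ast.Return,
    ast.Assign, ast.Name, ast.Load, ast.Store, ast.Constant,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Compare, ast.Eq, ast.If,
)

def in_subset(source: str) -> bool:
    """Return True if `source` parses and uses only whitelisted AST nodes."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return all(isinstance(node, ALLOWED_NODES) for node in ast.walk(tree))

# With hypothesmith + hypothesis this filter would be used roughly as:
#   @given(hypothesmith.from_grammar())
#   def test_compiles(source):
#       assume(in_subset(source))
#       ... compile with opshin and assert it does not crash ...
print(in_subset("def validator(d, r, ctx):\n    return d == r"))  # True
print(in_subset("import os"))  # False: Import is not whitelisted
```

`hypothesmith.from_grammar()` generates syntactically valid Python, so a filter like this mainly narrows generation to what opshin accepts; a custom strategy built from the subset grammar would waste far fewer examples.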

Describe alternatives you've considered
None

Additional context
N/A

Bug bounty: 1000 ADA

@nielstron nielstron added the enhancement New feature or request label May 25, 2023
@nielstron (Contributor)

I strongly support this. We will have to take care when combining this with the new constant propagation, which essentially executes whatever part of the Python program it can, leading to the big disclaimer in the package warning about arbitrary code execution :)

@juliusfrost (Contributor, Author)

> I strongly support this. We will have to take care when combining this with the new constant propagation, which essentially executes whatever part of the Python program it can, leading to the big disclaimer in the package warning about arbitrary code execution :)

Can we turn off constant propagation just for testing?

@nielstron (Contributor)

Yes, definitely. Though that would shrink what we can test, since we would no longer exercise constant folding. This goes in the direction of #168.

@juliusfrost (Contributor, Author)

Perhaps we can limit the scope of operations so that evaluation cannot modify the host system's state, allowing us to safely evaluate certain operations for constant folding. As a start, maybe avoid any IO operations and loops that might cause memory or runtime issues.
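The restriction described above, folding only operations that provably cannot touch host state, could look roughly like this minimal stdlib sketch. It folds only integer arithmetic on literal constants and deliberately leaves calls, attribute access, IO, and loops untouched; it is an illustration of the idea, not opshin's actual constant-propagation pass:

```python
import ast

class SafeConstantFolder(ast.NodeTransformer):
    """Fold only pure integer arithmetic on literal constants.

    Anything that could have side effects on the host (function calls,
    attribute access, IO, loops) is intentionally left unevaluated.
    """

    SAFE_OPS = {
        ast.Add: lambda a, b: a + b,
        ast.Sub: lambda a, b: a - b,
        ast.Mult: lambda a, b: a * b,
    }

    def visit_BinOp(self, node: ast.BinOp) -> ast.AST:
        self.generic_visit(node)  # fold children bottom-up first
        op = self.SAFE_OPS.get(type(node.op))
        if (op is not None
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.left.value, int)
                and isinstance(node.right.value, int)):
            folded = ast.Constant(op(node.left.value, node.right.value))
            return ast.copy_location(folded, node)
        return node  # not provably pure: leave as-is

tree = ast.parse("x = 1 + 2 * 3\ny = open('f').read()")
folded = ast.fix_missing_locations(SafeConstantFolder().visit(tree))
print(ast.unparse(folded))  # x = 7; the open() call is untouched
```

Division is omitted on purpose: even `1 // 0` would raise at fold time, so any operation that can fail or loop would need extra guarding before being whitelisted.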

@nielstron (Contributor)

Yes, ideally that would be the case, but I am not sure how feasible it is :) Anyway, we can start thinking about it once we have hypothesmith running.

@nielstron nielstron added bug bounty This issue is prized out as part of the Bug Bounty Program bb: medium Medium issue according to bug bounty categorization labels Aug 21, 2024