
Learning the adaptivity parameters on the fly #107

Open
Fujikawas opened this issue May 23, 2024 · 0 comments
Labels: new-feature (Adding a new feature)
Fujikawas commented May 23, 2024

After numerical experiments with the implementations in #60 and #103, which require tuning of method parameters that may be case-dependent, we are motivated to find a way to learn the adaptivity parameters, or the parameters of these methods, on the fly with sequential optimization approaches.

Regarding the adaptivity parameters, we want to find the combination of the coarsening constant $C_c$, the refining constant $r_c$, and the history parameter $h$ that minimizes the runtime under a simulation-error constraint. According to Felix, the prediction would benefit greatly from a priori knowledge (a global model) of the relationship between error/runtime and the three parameters. This knowledge would also tell us whether the parameters or the runtime/error, which may span multiple orders of magnitude, need to be mapped to a logarithmic space.
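Stated as a constrained optimization problem (a sketch; the symbols $T$ for runtime, $E$ for simulation error, and the fixed tolerance $\varepsilon$ are introduced here for illustration and are not defined in the issue):

$$\min_{C_c,\, r_c,\, h} \; T(C_c, r_c, h) \quad \text{s.t.} \quad E(C_c, r_c, h) \le \varepsilon$$

If $T$ and $E$ indeed span several orders of magnitude, optimizing $\log T$ and log-scaling the parameters, as suggested above, may make a surrogate model of this objective better behaved.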

For the optimization we could start with classic Gaussian-process-based Bayesian optimization. For the global model we could use the data from the paper and run experiments with the heat-conduction tutorial.
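A minimal sketch of what the Gaussian-process approach could look like offline, using scikit-optimize's `gp_minimize`. Everything specific here is an assumption: `run_simulation` is a hypothetical stand-in for the real coupled simulation, the parameter bounds and error tolerance are placeholders, and the error constraint is folded in as a simple penalty (since `gp_minimize` has no native constraint handling). An on-the-fly variant would instead update the surrogate between coupling time windows.

```python
# Sketch: GP-based Bayesian optimization of the three adaptivity parameters.
# Bounds, tolerance, and run_simulation are placeholders, not validated values.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

ERROR_TOL = 1e-3  # assumed simulation-error tolerance


def run_simulation(coarsening_const, refining_const, history_param):
    # Synthetic stand-in returning (runtime, error); replace with a call that
    # runs the actual coupled simulation and measures both quantities.
    runtime = 1.0 / (coarsening_const * refining_const) + 10.0 * history_param
    error = 1e-4 * (1.0 - refining_const) / history_param
    return runtime, error


def objective(params):
    c_c, r_c, h = params
    runtime, error = run_simulation(c_c, r_c, h)
    # Optimize log-runtime, since runtimes may span orders of magnitude,
    # and penalize violations of the error tolerance.
    penalty = 1e3 * max(0.0, error - ERROR_TOL)
    return float(np.log(runtime) + penalty)


search_space = [
    Real(0.05, 0.95, name="coarsening_const"),
    Real(0.05, 0.95, name="refining_const"),
    Real(0.01, 1.0, prior="log-uniform", name="history_param"),
]

result = gp_minimize(objective, search_space, n_calls=30, random_state=0)
print("best (C_c, r_c, h):", result.x)
print("best objective:", result.fun)
```

Data from runs of the heat-conduction tutorial could seed the GP as initial observations, which is one way the global model mentioned above could enter the loop.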

IshaanDesai added the new-feature (Adding a new feature) label on May 23, 2024