Fix links (#3421)
Summary:

As titled:
* ask_tell.ipynb
  * Links to intro to ae/bo in prerequisites
  * Link to intro to bo in analyze results
* human_in_the_loop.ipynb
  * Link to ask/tell in intro
  * Links to intro to ae/bo in prerequisites
  * Removed setting seed in client (we do this inconsistently; we should decide to do it everywhere or nowhere)
* closed_loop.ipynb
  * Links to intro to ae/bo in prerequisites
  * Links to ask/tell in prerequisites
  * Link to early stopping tutorial in Runner setup
  * Link to ask/tell tutorial in client configuration
  * Link to Scheduler diagram has a different location post website build
* automl.ipynb
  * Links in prereqs
* early_stopping.ipynb
  * Links in prereqs


Still TODO:
* ask_tell.ipynb
  * Link to configuration recipe
* early_stopping.ipynb
  * Link to configuration recipe
  * Link to objectives/constraints recipe
* closed_loop.ipynb
  * Link to storage recipe

Differential Revision: D70252015
mpolson64 authored and facebook-github-bot committed Feb 26, 2025
1 parent 625f8f6 commit 5433e1c
Showing 5 changed files with 60 additions and 45 deletions.
14 changes: 8 additions & 6 deletions tutorials/ask_tell/ask_tell.ipynb
@@ -23,7 +23,9 @@
"\n",
"where\n",
"\n",
"$$\\alpha=(1.0,1.2,3.0,3.2)^T$$\n",
"$$\n",
"\\alpha=(1.0,1.2,3.0,3.2)^T\n",
"$$\n",
"\n",
"$$\n",
"\\mathbf{A}=\\left(\\begin{array}{cccccc}10 & 3 & 17 & 3.50 & 1.7 & 8 \\\\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\\\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\\\ 17 & 8 & 0.05 & 10 & 0.1 & 14\\end{array}\\right)\n",
@@ -44,7 +46,7 @@
"### Prerequisites\n",
"\n",
"* Familiarity with Python and basic programming concepts\n",
"* Understanding of adaptive experimentation and Bayesian optimization (see [Introduction to Adaptive Experimentation](#) and [Introduction to Bayesian Optimization](#))\n"
"* Understanding of [adaptive experimentation](https://ax.dev/docs/intro-to-ae) and [Bayesian optimization](https://ax.dev/docs/intro-to-bo)\n"
]
},
{
@@ -143,7 +145,7 @@
"* `\"task_0 + 0.5 * task_1\"` would direct Ax to maximize the sum of two task scores, downweighting task_1 by a factor of 0.5\n",
"* `\"score, -flops\"` would direct Ax to simultaneously maximize score while minimizing flops\n",
"\n",
"For more information, see the [string parsing recipe](#)."
"For more information on configuring objectives and outcome constraints, see this [recipe](#)."
]
},
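As a quick illustration, here is a minimal sketch of how one of these objective strings is handed to the `Client` (the `configure_optimization` method is the one these tutorials use; the experiment setup it requires is elided, and the import path is an assumption):

```python
from ax import Client  # import path is an assumption of this sketch

client = Client()
# A configure_experiment(...) call defining the search space and the
# task_0/task_1 metrics would precede this in a real notebook.

# Maximize the weighted sum of two hypothetical task scores.
client.configure_optimization(objective="task_0 + 0.5 * task_1")
```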
{
@@ -256,9 +258,9 @@
"After running trials, you can analyze the results.\n",
"Most commonly this means extracting the parameterization from the best performing trial you conducted.\n",
"\n",
"Hartmann6 has a known global minimum of $f(x*) = -3.32237$ at $x* = (0.20169, 0.150011, 0.476874, 0.27332, 0.311652, 0.6573)$.\n",
"Ax is able to identify a point very near to this true optimum *using just 30 evaluations.*\n",
"This is possible due to the sample-efficiency of [Bayesian Optimization](#), the optimization method we use under the hood in Ax."
"Hartmann6 has a known global minimum of $f(x*) = -3.322$ at $x* = (0.201, 0.150, 0.477, 0.273, 0.312, 0.657)$.\n",
"Ax is able to identify a point very near to this true optimum **using just 30 evaluations.**\n",
"This is possible due to the sample-efficiency of [Bayesian optimization](https://ax.dev/docs/intro-to-bo), the optimization method we use under the hood in Ax."
]
},
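A hedged sketch of that extraction step, assuming the `Client`'s `get_best_parameterization` accessor and treating its exact return shape as an assumption:

```python
# Assumed return shape: (parameters, prediction, trial_index, arm_name).
parameters, prediction, trial_index, arm_name = client.get_best_parameterization()
print(f"Best trial {trial_index}: {parameters}")
print(f"Model prediction at that point: {prediction}")
```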
{
4 changes: 2 additions & 2 deletions tutorials/automl/automl.ipynb
@@ -32,8 +32,8 @@
"\n",
"## Prerequisites\n",
"- Familiarity with [scikit-learn](https://scikit-learn.org/) and basic machine learning concepts\n",
"- Understanding of [adaptive experimentation](#) and [Bayesian optimization](#)\n",
"- [Ask-tell Optimization of Python Functions with early stopping](#)"
"- Understanding of [adaptive experimentation](https://ax.dev/docs/intro-to-ae) and [Bayesian optimization](https://ax.dev/docs/intro-to-bo)\n",
"- [Ask-tell Optimization of Python Functions with early stopping](https://ax.dev/docs/tutorials/early_stopping)"
]
},
{
18 changes: 9 additions & 9 deletions tutorials/closed_loop/closed_loop.ipynb
@@ -12,13 +12,13 @@
},
"source": [
"# Closed-loop Optimization with Ax\n",
"Previously, we've demonstrated using Ax for ask-tell optimization, a paradigm in which we \"ask\" Ax for candidate configurations and \"tell\" Ax our observations.\n",
"Previously, we've demonstrated [using Ax for ask-tell optimization](https://ax.dev/docs/tutorials/ask_tell), a paradigm in which we \"ask\" Ax for candidate configurations and \"tell\" Ax our observations.\n",
"This can be effective in many scenerios, and it can be automated through use of flow control statements like `for` and `while` loops.\n",
"However there are some situations where it would be beneficial to allow Ax to orchestrate the entire optimization: deploying trials to external systems, polling their status, and reading reading their results.\n",
"This can be common in a number of real world engineering tasks, including:\n",
"* Large scale machine learning experiments running workloads on high-performance computing clusters\n",
"* A/B tests conducted using an external experimentation platform\n",
"* Materials science optimizations utilizing a self-driving laboratory\n",
"* **Large scale machine learning experiments** running workloads on high-performance computing clusters\n",
"* **A/B tests** conducted using an external experimentation platform\n",
"* **Materials science** optimizations utilizing a self-driving laboratory\n",
"\n",
"Ax's `Client` can orchestrate automated adaptive experiments like this using its method `run_trials`.\n",
"Users create custom classes which implement Ax's `IMetric` and `IRunner` protocols to handle data fetching and trial deployment respectively.\n",
@@ -35,8 +35,8 @@
"* Understand tradeoffs between parallelism and optimization performance\n",
"\n",
"### Prerequisites\n",
"* Understanding of adaptive experimentation and Bayesian optimization (see [Introduction to Adaptive Experimentation](#) and [Introduction to Bayesian Optimization](#))\n",
"* Familiarity with [configuring and conducting experiments in Ax](#)"
"* Understanding of [adaptive experimentation](https://ax.dev/docs/intro-to-ae) and [Bayesian optimization](https://ax.dev/docs/intro-to-bo)\n",
"* Familiarity with [configuring and conducting experiments in Ax](https://ax.dev/docs/tutorials/ask_tell)"
]
},
{
@@ -171,7 +171,7 @@
"In this mock example, we will check to see how many seconds have elapsed since the `run_trial` was called and only report a trial as completed once 5 seconds have elapsed.\n",
"\n",
"Runner's may also optionally implement a `stop_trial` method to terminate a trial's execution before it has completed.\n",
"This is necessary for using [early stopping](#) in closed-loop experimentation, but we will skip this for now."
"This is necessary for using [early stopping](https://ax.dev/docs/tutorials/early_stopping) in closed-loop experimentation, but we will skip this for now."
]
},
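A self-contained sketch of the mock Runner behavior this cell describes; the `run_trial`/`poll_trial` names follow the `IRunner` protocol mentioned above, but the exact signatures and status representation are assumptions:

```python
import time
from typing import Any, Mapping


class MockRunner:  # would implement Ax's IRunner protocol
    def run_trial(
        self, trial_index: int, parameterization: Mapping[str, Any]
    ) -> dict[str, Any]:
        # "Deploy" the trial by recording its start time as trial metadata.
        return {"deployed_at": time.time()}

    def poll_trial(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> str:
        # Report the trial as completed once 5 seconds have elapsed;
        # the real protocol returns a trial-status type, not a string.
        elapsed = time.time() - trial_metadata["deployed_at"]
        return "COMPLETED" if elapsed >= 5 else "RUNNING"
```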
{
@@ -375,7 +375,7 @@
"## Step 3: Initialize the Client and Configure the Experiment\n",
"\n",
"Finally, we can initialize the `Client` and configure the experiment as before.\n",
"This will be familiar to readers of the [Ask-tell optimization with Ax tutorial](#) -- the only difference is we will attach the previously defined Runner and Metric by calling `configure_runner` and `configure_metrics` respectively.\n",
"This will be familiar to readers of the [Ask-tell optimization with Ax tutorial](https://ax.dev/docs/tutorials/ask_tell) -- the only difference is we will attach the previously defined Runner and Metric by calling `configure_runner` and `configure_metrics` respectively.\n",
"\n",
"Note that when initializing `hartmann6_metric` we set `name=hartmann6`, matching the objective we now set in `configure_optimization`. The `configure_metrics` method uses this name to ensure that data fetched by this Metric is used correctly during the experiment.\n",
"Be careful to correctly set the name of the Metric to reflect its use as an objective or outcome constraint."
@@ -460,7 +460,7 @@
"\n",
"Internally, Ax uses a class named `Scheduler` to orchestrate the trial deployment, polling, data fetching, and candidate generation.\n",
"\n",
"![Scheduler state machine](../../docs/assets/scheduler_state_machine.png)\n",
"![Scheduler state machine](../../assets/scheduler_state_machine.png)\n",
"\n",
"The `OrchestrationConfig` provides users with control over various orchestration settings:\n",
"* `parallelism` defines the maximum number of trials that may be run at once. If your external system supports multiple evaluations in parallel, increasing this number can significantly decrease experimentation time. However, it is important to note that as parallelism increases, optimiztion performance often decreases. This is because adaptive experimentation methods rely on previously observed data for candidate generation -- the more tirals that have been observed prior to generation of a new candidate, the more accurate Ax's model will be for generation of that candidate.\n",
61 changes: 37 additions & 24 deletions tutorials/early_stopping/early_stopping.ipynb
@@ -32,8 +32,8 @@
"\n",
"## Prerequisites\n",
"- Familiarity with Python and basic programming concepts\n",
"- Understanding of [adaptive experimentation](#) and [Bayesian optimization](#)\n",
"- [Ask-tell Optimization of Python Functions](#)"
"- Understanding of [adaptive experimentation](https://ax.dev/docs/intro-to-ae) and [Bayesian optimization](https://ax.dev/docs/intro-to-bo)\n",
"- [Ask-tell Optimization of Python Functions](https://ax.dev/docs/tutorials/ask_tell)"
]
},
{
@@ -111,7 +111,7 @@
},
"outputs": [],
"source": [
"client = Client(random_seed=42)"
"client = Client()"
]
},
{
@@ -195,7 +195,7 @@
"* `\"task_0 + 0.5 * task_1\"` would direct Ax to maximize the sum of two task scores, downweighting task_1 by a factor of 0.5\n",
"* `\"score, -flops\"` would direct Ax to simultaneously maximize score while minimizing flops\n",
"\n",
"For more information, see the [string parsing recipe](#)."
"For more information on configuring objectives and outcome constraints, see this [recipe](#)."
]
},
{
@@ -274,30 +274,42 @@
"# Hartmann6 function\n",
"def hartmann6(x1, x2, x3, x4, x5, x6):\n",
" alpha = np.array([1.0, 1.2, 3.0, 3.2])\n",
" A = np.array([\n",
" [10, 3, 17, 3.5, 1.7, 8],\n",
" [0.05, 10, 17, 0.1, 8, 14],\n",
" [3, 3.5, 1.7, 10, 17, 8],\n",
" [17, 8, 0.05, 10, 0.1, 14]\n",
" ])\n",
" P = 10**-4 * np.array([\n",
" [1312, 1696, 5569, 124, 8283, 5886],\n",
" [2329, 4135, 8307, 3736, 1004, 9991],\n",
" [2348, 1451, 3522, 2883, 3047, 6650],\n",
" [4047, 8828, 8732, 5743, 1091, 381]\n",
" ])\n",
" A = np.array(\n",
" [\n",
" [10, 3, 17, 3.5, 1.7, 8],\n",
" [0.05, 10, 17, 0.1, 8, 14],\n",
" [3, 3.5, 1.7, 10, 17, 8],\n",
" [17, 8, 0.05, 10, 0.1, 14],\n",
" ]\n",
" )\n",
" P = 10**-4 * np.array(\n",
" [\n",
" [1312, 1696, 5569, 124, 8283, 5886],\n",
" [2329, 4135, 8307, 3736, 1004, 9991],\n",
" [2348, 1451, 3522, 2883, 3047, 6650],\n",
" [4047, 8828, 8732, 5743, 1091, 381],\n",
" ]\n",
" )\n",
"\n",
" outer = 0.0\n",
" for i in range(4):\n",
" inner = 0.0\n",
" for j, x in enumerate([x1, x2, x3, x4, x5, x6]):\n",
" inner += A[i, j] * (x - P[i, j])**2\n",
" inner += A[i, j] * (x - P[i, j]) ** 2\n",
" outer += alpha[i] * np.exp(-inner)\n",
" return -outer\n",
"\n",
"# Hartmann6 function with additional t term\n",
"\n",
"# Hartmann6 function with additional t term such that\n",
"# hartmann6(X) == hartmann6_curve(X, t=100)\n",
"def hartmann6_curve(x1, x2, x3, x4, x5, x6, t):\n",
" return hartmann6(x1, x2, x3, x4, x5, x6) - np.log2(t)"
" return hartmann6(x1, x2, x3, x4, x5, x6) - np.log2(t / 100)\n",
"\n",
"\n",
"(\n",
" hartmann6(0.1, 0.45, 0.8, 0.25, 0.552, 1.0),\n",
" hartmann6_curve(0.1, 0.45, 0.8, 0.25, 0.552, 1.0, 100),\n",
")"
]
},
{
@@ -327,15 +339,16 @@
" for t in range(1, maximum_progressions + 1):\n",
" raw_data = {\"hartmann6\": hartmann6_curve(t=t, **parameters)}\n",
"\n",
" # On the final reading call complete_trial, else call attach_data\n",
" # On the final reading call complete_trial and break, else call attach_data\n",
" if t == maximum_progressions:\n",
" client.complete_trial(\n",
" trial_index=trial_index, raw_data=raw_data, progression=t\n",
" )\n",
" else:\n",
" client.attach_data(\n",
" trial_index=trial_index, raw_data=raw_data, progression=t\n",
" )\n",
" break\n",
"\n",
" client.attach_data(\n",
" trial_index=trial_index, raw_data=raw_data, progression=t\n",
" )\n",
"\n",
" # If the trial is underperforming, stop it\n",
" if client.should_stop_trial_early(trial_index=trial_index):\n",
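The hunk ends mid-loop; a plausible shape for the early-stopping branch it introduces, with `mark_trial_early_stopped` treated as an assumed method name:

```python
# Hypothetical continuation of the loop above: stop the underperforming
# trial and move on to the next one.
if client.should_stop_trial_early(trial_index=trial_index):
    client.mark_trial_early_stopped(trial_index=trial_index)
    break
```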
8 changes: 4 additions & 4 deletions tutorials/human_in_the_loop/human_in_the_loop.ipynb
@@ -12,7 +12,7 @@
"source": [
"# Ask-tell Optimization in a Human-in-the-loop Setting\n",
"\n",
"Some optimization experiments, like the one described in LINK, can be conducted in a completely automated manner.\n",
"Some optimization experiments, like the one described in this [tutorial](https://ax.dev/docs/tutorials/ask_tell), can be conducted in a completely automated manner.\n",
"Other experiments may require a human in the loop, for instance a scientist manually conducting and evaluating each trial in a lab.\n",
"In this tutorial we demonstrate this ask-tell optimization in a human-in-the-loop setting by imagining the task of maximizing the strength of a 3D printed part using compression testing (i.e., crushing the part) where different print settings will have to be manually tried and evaluated.\n",
"\n",
@@ -35,7 +35,7 @@
"\n",
"### Prerequisites\n",
"- Familiarity with Python and basic programming concepts\n",
"- Understanding of adaptive experimentation and Bayesian optimization"
"- Understanding of [adaptive experimentation](https://ax.dev/docs/intro-to-ae) and [Bayesian optimization](https://ax.dev/docs/intro-to-bo)"
]
},
{
@@ -103,7 +103,7 @@
},
"outputs": [],
"source": [
"client = Client(random_seed=42)"
"client = Client()"
]
},
{
@@ -292,7 +292,7 @@
"We'll make use of Ax's support for parallelism, i.e. suggesting more than one trial at a time -- this can allow us to conduct our experiment much faster!\n",
"If our lab had three identical 3D printers, we could ask Ax for a batch of three trials and evaluate three different infill density, layer height, and infill types at once.\n",
"\n",
"Note that there will always be a tradeoff between \"parallelism\" and optimization performance since the quality of a suggested trial is often proportional to the amount of data Ax has access to, see LINK for a more detailed explanation."
"Note that there will always be a tradeoff between \"parallelism\" and optimization performance since the quality of a suggested trial is often proportional to the amount of data Ax has access to, see this [recipe](#) for a more detailed explanation."
]
},
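A sketch of the batched ask described above, assuming `get_next_trials` returns a mapping from trial index to parameterization:

```python
# Ask for three trials at once -- one per available 3D printer.
trials = client.get_next_trials(max_trials=3)
for trial_index, parameters in trials.items():
    print(f"Print a part for trial {trial_index} with settings {parameters}")
```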
{
