GUI panel for mlos_bench #824

Open
wants to merge 1 commit into main
4 changes: 4 additions & 0 deletions README.md
@@ -25,6 +25,10 @@ MLOS is a project to enable autotuning for systems.
- [Installation](#installation)
- [See Also](#see-also)
- [Examples](#examples)
- [MLOS Viz Panel](#mlos-viz-panel)
- [Usage](#usage)
- [Running the Backend](#running-the-backend)
- [Running the Frontend](#running-the-frontend)
Contributor: There's no content for these entries it looks like. Is that intentional?

Contributor: Forgot to commit something?

Author (@yshady): @bpkroth Check the internal repo; it might be there.

Contributor: @yshady this is effectively a new effort, so I'm not necessarily looking to replicate that.


<!-- /TOC -->

6 changes: 6 additions & 0 deletions mlos_vizpanel/.streamlit/config.toml
@@ -0,0 +1,6 @@
[theme]
primaryColor = '#FF8C00' # Amber for better visibility of sliders and other interactive elements
backgroundColor = '#FFFFFF' # White background
secondaryBackgroundColor = '#ADD8F6' # Light blue for the sidebar
textColor = '#0078D4' # Azure blue text
font = "Segoe UI" # Microsoft's standard font
53 changes: 53 additions & 0 deletions mlos_vizpanel/README.md
Contributor: Might also argue that this is intended to do more than just visualization, correct?
It's also intended to be able to launch new experiments from existing config dirs, so we could call it "mlos_webui" or some such and will probably want to make it pip installable in that case.

Author (@yshady): Correct, my intention was to launch this so multiple people could access the same set of experiments easily. For example, @eujing and I could theoretically be collaborating on the same set of experiments, both monitoring and making sure everything is smooth sailing.

Execution isn't there, but it was worth a shot; I was constrained by time and a lack of testing, really.

Author (@yshady): My goal was basically to turn MLOS into a web app that can be deployed with a login page :)

Author (@yshady): Even benchmarks should be configurable from a GUI; that would be a pretty web app, in my opinion.

Author (@yshady, Aug 11, 2024): Important to note, though, that this PR removes the experiment-launching functionality and focuses solely on visualizations; again, check the internal repo, as it is far more comprehensive.

Contributor: Let's focus on doing this externally for now.
And I understand the original constraints, but we don't have those now, so we can be a little more methodical about what and how we want to design parts of that.
I'm not opposed to either launching or config editing at a high level, though I have opinions about the implementation details and constraints around those, so let's start with a list of needs, wants, and would-be-nices, and then chart a course for us to get there.
I'll start a new Issue to track some of that.

@@ -0,0 +1,53 @@


### MLOS Viz Panel
Contributor: FYI, this doesn't pass a markdownlint check.


3. Set up Azure credentials for OpenAI and Azure Compute:
- Create an `azure_openai_credentials.json` file with the following structure:
```json
{
"azure_endpoint": "<your_azure_endpoint>",
"api_key": "<your_api_key>",
"api_version": "<api_version>"
}
```
- Ensure you have configured Azure credentials for the `ComputeManagementClient` to access VM SKUs (a small verification sketch follows this list).

- Create `global_config_storage.jsonc`
```json
{
"host": "x.mysql.database.azure.com",
"username": "mlos",
"password": "x",
"database": "x"
}
```

- Create `global_config_azure.json`
```json
{
"subscription": "x",
"tenant": "x",
"storageAccountKey": "x"
}
```

4. Set up the necessary configuration files in the `config/` directory as per your environment.
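Before wiring everything together, it can help to sanity-check that the credential files above load the way `backend.py` expects. A minimal sketch (assuming the file names above and the same packages the backend already imports; the SKU listing at the end is just a smoke test):

```python
import json
from pathlib import Path

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from openai import AzureOpenAI

base_dir = Path(__file__).resolve().parent

# Azure OpenAI credentials (structure shown in step 3 above).
with (base_dir / "azure_openai_credentials.json").open() as f:
    creds = json.load(f)
client = AzureOpenAI(
    azure_endpoint=creds["azure_endpoint"],
    api_key=creds["api_key"],
    api_version=creds["api_version"],
)

# Azure Compute access for listing VM SKUs (subscription id from global_config_azure.json).
with (base_dir / "global_config_azure.json").open() as f:
    subscription_id = json.load(f)["subscription"]
compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
print(next(iter(compute_client.resource_skus.list())).name)  # first SKU name, as a smoke test
```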

## Usage

### Running the Backend

1. Navigate to the project directory.
2. Start the FastAPI server:
```bash
uvicorn backend:app --reload
```
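
Once the server is up, the endpoints defined in `backend.py` can be exercised directly; for example (port 8000 is the default used here, and the experiment id is a placeholder):
```bash
curl http://localhost:8000/experiments
curl http://localhost:8000/experiment_results/<experiment_id>
```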

### Running the Frontend

1. Navigate to the project directory.
2. Start the Streamlit application:
```bash
streamlit run frontend.py
```
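
Streamlit listens on port 8501 by default; if the panel needs to be reachable from other machines, the bind address and port can be overridden with the standard Streamlit flags (shown here as one possible setup, not a requirement):
```bash
streamlit run frontend.py --server.address 0.0.0.0 --server.port 8501
```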
124 changes: 124 additions & 0 deletions mlos_vizpanel/backend.py
@@ -0,0 +1,124 @@
from datetime import datetime, timedelta
Contributor: Needs some linting/style/formatting/etc.

Contributor: Module docstring and/or some README.md with details on what this is (a) supposed to do and (b) how it does it.

import time
import schedule
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import os
from openai import AzureOpenAI
from fastapi import FastAPI, Request, WebSocket, WebSocketDisconnect, HTTPException
from pydantic import BaseModel
import pandas as pd
import json
from pathlib import Path
from azure.mgmt.compute import ComputeManagementClient
from azure.identity import DefaultAzureCredential
from mlos_bench.storage import from_config
from copy import deepcopy
import subprocess
import logging
import asyncio
from fastapi.middleware.cors import CORSMiddleware
import re
import json5

app = FastAPI()

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Load global configuration
base_dir = Path(__file__).resolve().parent
global_config_path = base_dir / 'global_config_azure.json'
with global_config_path.open() as f:
    global_config = json.load(f)
subscription_id = global_config['subscription']

# Load the storage config and connect to the storage
storage_config_path = "config/storage/mlos-mysql-db.jsonc"
Contributor: Should probably be configurable.
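One lightweight way to do that (purely a sketch, not part of this PR; the `MLOS_STORAGE_CONFIG` variable name is made up) would be an environment-variable override:

```python
import os

# Hypothetical override; falls back to the path hard-coded above.
storage_config_path = os.environ.get(
    "MLOS_STORAGE_CONFIG", "config/storage/mlos-mysql-db.jsonc"
)
```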

try:
    storage = from_config(config_file=storage_config_path)
except Exception as e:
    raise HTTPException(
        status_code=500, detail=f"Error loading storage configuration: {e}"
    )

@app.get("/experiments")
def get_experiments():
    """Return the ids of all experiments known to the storage backend."""
    return list(storage.experiments.keys())

@app.get("/experiment_results/{experiment_id}")
def get_experiment_results(experiment_id: str):
    """Return the results dataframe for one experiment as a list of records."""
    try:
        exp = storage.experiments[experiment_id]
        return exp.results_df.to_dict(orient="records")
    except KeyError:
        raise HTTPException(status_code=404, detail="Experiment not found")

def count_categorical_values(df: pd.DataFrame) -> str:
    """Summarize the value counts of the categorical columns as a plain-text block."""
    categorical_counts = {}
    for col in df.select_dtypes(include=['object', 'category']).columns:
        counts = df[col].value_counts().to_dict()
        categorical_counts[col] = counts

    count_str = "Categorical Counts:\n"
    for col, counts in categorical_counts.items():
        count_str += f"{col}:\n"
        for value, count in counts.items():
            count_str += f"  {value}: {count}\n"

    return count_str

# Load credentials from the JSON file
with open('azure_openai_credentials.json', 'r') as file:
Contributor: Should be configurable

    credentials = json.load(file)

# Try to create the AzureOpenAI client
try:
    client = AzureOpenAI(
        azure_endpoint=credentials['azure_endpoint'],
        api_key=credentials['api_key'],
        api_version=credentials['api_version']
    )
except Exception as e:
    print("Error creating AzureOpenAI client:", e)

class ExperimentExplanationRequest(BaseModel):
    """Request body for the /get_experiment_explanation endpoint."""
    experiment_id: str

@app.post("/get_experiment_explanation")
def get_experiment_explanation(request: ExperimentExplanationRequest):
    """Ask Azure OpenAI to summarize and explain the results of one experiment."""
    experiment_id = request.experiment_id
    try:
        exp = storage.experiments[experiment_id]
        # Take only the last 10 rows for simplicity
        df = exp.results_df.tail(10)
        experiment_data = df.to_dict(orient='records')

        df_head = exp.results_df.head(10)
        experiment_data_head = df_head.to_dict(orient='records')

        df_des = exp.results_df.describe()
        experiment_data_des = df_des.to_dict(orient='records')

        count_str = count_categorical_values(df)

        prompt = (
            f"Explain the following experiment data: first 10 rows {experiment_data_head}, "
            f"last 10 rows {experiment_data}, descriptive stats {experiment_data_des}, "
            f"and categorical variable counts {count_str}. "
            "Suggest parameters that would complement the config parameters present in the data. "
            "Explain what each parameter does, and which MySQL config parameters would complement "
            "what we have and can boost performance if tuned. Explain which are dangerous to tune, "
            "as they might fail the server, and which are safe to tune. "
            "List all relevant information for each parameter under its name."
        )

        response = client.chat.completions.create(
            model="gpt4o",  # model = "deployment_name"
            messages=[
                {"role": "assistant", "content": prompt}
            ],
            max_tokens=1000
        )

        explanation = response.choices[0].message.content.strip()
        print(explanation)
        return {"explanation": explanation}
    except KeyError:
        raise HTTPException(status_code=404, detail="Experiment not found")

if __name__ == "__main__":
    import uvicorn
    # reload=True requires passing the app as an import string rather than the object.
    uvicorn.run("backend:app", host="0.0.0.0", port=8000, reload=True)
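
For reference, the explanation endpoint expects a JSON body with the experiment id; once the server is running it can be tried with something like this (the experiment id is a placeholder):

```bash
curl -X POST http://localhost:8000/get_experiment_explanation \
     -H "Content-Type: application/json" \
     -d '{"experiment_id": "<experiment_id>"}'
```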
Contributor: Some discussion in the README.md about networking requirements for this should also happen.

Contributor: Also instructions for deploying this in the cloud would be helpful.

Author (@yshady): Yeah, I agree, but there are some restrictions within the team on deployment, so I never got to that.

Contributor: Totally fair, but this doesn't have those same constraints. It does have additional goals though.

Author (@yshady): Agreed; it will also need more reliability and testing of the systems before it is feasible, maybe more long term.
