Error Response: status: 400 Bad Request; headers: {"content-type":
"application/json", "x-request-id": "f96ba139-2229-4a39-8435-7f6b39d640fb",
"content-length": "551", "date": "Wed, 12 Feb 2025 20:15:45 GMT"}; value: Error
{ error_code: Some("InvalidRequest"), message: "The filter expression
\"(silo_id == \"7bd7623a-68ed-4636-8ecb-b59e3b068787\") && (project_id ==
\"9c4152f9-4317-4269-9018-66142964d21c\")\" is not valid, the following errors
were encountered\n > The filter expression refers to identifiers that are not
valid for its input table \"virtual_machine:vcpu_usage\". Invalid identifiers:
[\"silo_id\", \"project_id\"], valid identifiers: [\"datum\", \"instance_id\",
\"start_time\", \"timestamp\"]", request_id:
"f96ba139-2229-4a39-8435-7f6b39d640fb" }
in getting to oxidecomputer/omicron#7532, i'd started with a CLI invocation that clearly did something wrong. in the error there is something about a filter expression using `silo_id` and `project_id`. that wasn't in my command. was it added by the CLI? the SDK? i really wanted to know exactly what HTTP request the CLI made on my behalf. `--debug` did not really help:
i can't really tell why the one random `reqwest::connect` log is there and nothing else. maybe because we're only configuring `env_logger` while the dependencies involved use `tracing`?
so for myself, i'd added `eprintln!("request: {:?}", request);` in the function handling the project-scoped timeseries query, and that got me about what i'd wanted:
- request method
- request path
- query parameters
- headers - unfortunately not all! i stole a cookie from my browser login to auth in my `curl` reproduction.
(the hostname is good! but i already knew i was talking to dogfood.)
this did not get me the query body for `/v1/timeseries/query`, unfortunately. i'd guessed at the right POST body and happened to get it on the first try.
Implementation details
it would be really nice if `--debug` caused the CLI to log at least the request head before getting to `client.client.execute(request)` in the various handlers. i think that gets 90% of the way there without sweating request bodies.
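to make "request head" concrete, here's a minimal sketch of the kind of log line i mean. the function name and the redaction rules are hypothetical, not the CLI's actual internals:

```rust
// Hypothetical helper: format the "head" of an HTTP request (method, URL,
// headers) into one log line, without ever touching the body.
fn format_request_head(method: &str, url: &str, headers: &[(&str, &str)]) -> String {
    let mut out = format!("--> {} {}", method, url);
    for (name, value) in headers {
        // Redact credentials so the line is safe to paste into a bug report.
        let shown = if name.eq_ignore_ascii_case("authorization")
            || name.eq_ignore_ascii_case("cookie")
        {
            "<redacted>"
        } else {
            value
        };
        out.push_str(&format!("\n    {}: {}", name, shown));
    }
    out
}

fn main() {
    let line = format_request_head(
        "POST",
        "https://oxide.sys.r3.oxide-preview.com/v1/timeseries/query?project=will",
        &[("accept", "application/json"), ("cookie", "session=...")],
    );
    println!("{}", line);
}
```

even just this much would have answered "what did the CLI actually send?" without a local patch.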
Anything else you would like to add?
obviously not every CLI operation is a single HTTP request, and even for ones that are, you might not want to log a 100MB POST body. i don't think we have a clever way to know which endpoints we'd expect to have reasonably-loggable POST bodies either. ideally, for operations like timeseries queries with relatively small JSON POST bodies, we could know to log the body for easy reproduction. disk upload in particular is where i'd be concerned this is a footgun if used carelessly.
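one crude heuristic (purely hypothetical, not anything the CLI does today) would be to gate body logging on the declared content type and size, so a small OxQL query gets logged but a disk import chunk never does:

```rust
// Hypothetical heuristic: only log a request body when it is small and
// declared as JSON. Large or binary uploads (e.g. disk import) never match.
// The 1 KiB cap is an arbitrary illustration, not a real limit.
fn should_log_body(content_type: Option<&str>, content_length: Option<u64>) -> bool {
    let is_json = content_type
        .map(|ct| ct.starts_with("application/json"))
        .unwrap_or(false);
    let is_small = content_length.map(|len| len <= 1024).unwrap_or(false);
    is_json && is_small
}

fn main() {
    // A timeseries query: small JSON, worth logging for reproduction.
    assert!(should_log_body(Some("application/json"), Some(200)));
    // A disk import chunk: large and binary, never logged.
    assert!(!should_log_body(Some("application/octet-stream"), Some(100 * 1024 * 1024)));
    println!("heuristic behaves as expected");
}
```

this sidesteps per-endpoint knowledge entirely, at the cost of occasionally skipping a body that would have been useful.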
Currently we don't offer a way to expose the details of the API requests
that the CLI or other SDK consumers make, which makes troubleshooting
difficult.
With the new `middleware` feature available in Progenitor, we can now
inject our own logger using the `reqwest-tracing` crate. This gives us
output like:
$ ./target/debug/oxide --debug disk list --project will | jq .
{
  "timestamp": "2025-02-26T17:29:23.354297Z",
  "level": "DEBUG",
  "fields": {
    "message": "close",
    "time.busy": "16.7ms",
    "time.idle": "365ms"
  },
  "target": "oxide::tracing",
  "span": {
    "host": "oxide.sys.r3.oxide-preview.com",
    "http.request.method": "GET",
    "http.response.content_length": 998,
    "http.response.status_code": 200,
    "oxide.request_id": "c5e7d65e-bcb2-4ade-a817-6f13b681b19b",
    "url": "https://oxide.sys.r3.oxide-preview.com/v1/disks?project=will",
    "name": "Oxide API Request"
  },
  "spans": []
}
We will also log the first KiB of the request body, if present. This
should be enough to capture the details of human-readable requests, e.g. an
OxQL query, but avoid too much noise from something like a disk import.
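the truncation itself can be sketched in a few lines; this is an illustration of the idea (the exact cap, escaping, and truncation marker in the real implementation may differ):

```rust
// Sketch: keep only the first KiB of a request body for logging. Lossy UTF-8
// conversion keeps the log line printable even if the body is binary or the
// cut lands mid-character.
fn body_preview(body: &[u8]) -> String {
    const MAX: usize = 1024; // assumed 1 KiB cap
    let truncated = body.len() > MAX;
    let head = &body[..body.len().min(MAX)];
    let mut s = String::from_utf8_lossy(head).into_owned();
    if truncated {
        s.push_str("... (truncated)");
    }
    s
}

fn main() {
    // A small JSON body (like an OxQL query) survives intact.
    let small = br#"{"query": "get virtual_machine:vcpu_usage"}"#;
    println!("{}", body_preview(small));
    // A large body is cut at the cap and marked as truncated.
    let big = vec![b'x'; 4096];
    println!("{}", body_preview(&big));
}
```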
The `--debug` flag will enable debug logs for both the CLI and any
dependencies, such as `hyper`. To view only CLI logs, set
`RUST_LOG=oxide=debug`.
Closes #1014.