Export Functionality #96
@lordlycastle There you go. Good idea. Actually I'm looking into the export functionality. But I feel like it's too much to include a set of all requests and their timestamps. I'm guessing writing this summary to a file is enough:
I think it should be a time-stamped entry for each attack in a format that can be submitted to RRDTool.
Or InfluxDB... Way more reliable than RRDTool, and can then be grafana'd super easily :-)
@ghowardMSD Okay, you want to plot into external tools, right? Initially the scope of this tool was just plotting in the terminal, so I wasn't considering it. But it looks interesting. Because I'm honestly not very familiar with that area, give me a moment to think about it.
Supporting a proprietary format may not be cool. Almost all tools accept standard formats like CSV (the most universal) or JSON (newer). This is perfectly tabular data, so CSV would be the most intuitive. @nakabonne I believe the raw info would be best, like RAW photos: people can do their own processing with their own tools. I must say processed data is useful too, though; otherwise you need to do manipulations outside the tool to compare multiple export results. One question is what about requests that returned a different code. This info is important because you might wish to filter only successful/failed/429/5XX requests.
Yes, I feel the same way 👍
Collecting raw data is a little tedious but isn't so difficult. By "processed data" you mean something like the request latency, right? If so, I'm guessing processed data alone is enough, like k6's export feature. Also, as you mentioned, the HTTP status codes should be included, and input data like the URL and method as well. What I'm thinking is:
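For example, a per-request CSV along these lines (the column names are just a sketch, not a settled schema):

```csv
start_time,method,url,status,latency_ms
2020-09-01T12:30:45.123Z,GET,http://localhost:8080,200,12
2020-09-01T12:30:45.320Z,GET,http://localhost:8080,429,8
```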
I feel like we need to include only the start_time as a timestamp.
I don't know anything that's better than unix_time.
Maybe we could do ISO format. https://en.wikipedia.org/wiki/ISO_8601 It would make it readable, and it's easily recognised by most tools. It should also parse on all systems regardless of locale and timezone. Date and time in UTC, with a decimal fraction for milliseconds.
Do we need url? Thought that was always fixed for a single run. Yeah, it doesn't matter whether you include latency or stop time. One is enough.
Looks good. I have no objection to the ISO format for now.
We don't need it in the case where you handle only one target at a time. But for those who want to export results for multiple targets and save them into a single time-series DB, I feel like it would be nice to have such input data. I'm still in the middle of thinking about this topic, but I'm wondering whether our CSV should be convertible to InfluxDB's line protocol. InfluxDB is one of the most popular time-series DBs and this format looks relatively versatile. I mean, it's gonna be kind of like:
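A hedged sketch of that conversion in Go (the measurement name `request` and the tag/field names are my assumptions, not a settled mapping). Line protocol is `measurement,tags fields timestamp`:

```go
package main

import "fmt"

// result is a hypothetical per-request record (field names are assumptions).
type result struct {
	method   string
	status   int
	latency  int64 // latency in microseconds
	unixNano int64 // request start time in Unix nanoseconds
}

// lineProtocol renders one record as an InfluxDB line-protocol entry:
//   measurement,tag=value field=value timestamp
// The trailing "i" marks latency as an integer field.
func lineProtocol(r result) string {
	return fmt.Sprintf("request,method=%s,status=%d latency=%di %d",
		r.method, r.status, r.latency, r.unixNano)
}

func main() {
	fmt.Println(lineProtocol(result{"GET", 200, 12345, 1598963445123000000}))
	// → request,method=GET,status=200 latency=12345i 1598963445123000000
}
```

Keeping status as a tag (rather than a field) would make the successful/failed/429/5XX filtering mentioned above cheap on the InfluxDB side.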
Right now, I am not even able to copy the metrics from the GUI. Is there a way to report the summary to a file?
@alwaysastudent Actually, even such a simple summary doesn't exist. I'm looking to support it in the near future, kind of like: #96 (comment)
Currently, I'm in the middle of implementing the time-series storage layer; once that's finished I'm going to support the export functionality, so just a moment.
Has this feature been supported yet?
3 years later, is there any progress? I feel like the tool can only be prod-ready with this feature added, no?
Would love some export functionality of all the data that is collected.
Simplest export would be a CSV file where each line is the latency and start time of each request, in the order they were made.
Eventually we could expand this to include other things about the request, like the response size, start timestamps, the number of requests there were in parallel when it started, etc.
I don't want to export processed data. Since those are fairly basic calculations, I don't think that weight should fall on this tool, as that is not its main focus.