
massive disk usage/throughput , very slow ( disk killer :( ) #2

Open
benchonaut opened this issue Dec 20, 2022 · 0 comments

Comments

@benchonaut

Because you write to the TMP_RESULT file and sqlite writes on every query, a run with ~200 test objects produced a ~10 MB sqlite file (I quit after ~70 hosts) that was being fully read and rewritten all the time (sqlite...). One "cancelled" run therefore caused around 1 GB of disk I/O, which is probably not what you wanted, especially since doing it in RAM would be about 50 times faster.
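
For the sqlite side, a rough sketch (untested, and the table/db names are just placeholders, since I don't know how the script actually calls sqlite): collect the INSERTs in memory and commit them in one transaction instead of hitting the database on every query:

SQL="BEGIN TRANSACTION;"
while foo; do
    # hypothetical insert, one per test result, kept in RAM
    SQL+=$'\n'"INSERT INTO results VALUES ('host', 'value');"
done
SQL+=$'\n'"COMMIT;"
# one sqlite3 invocation and one write burst instead of hundreds
printf '%s\n' "$SQL" | sqlite3 results.db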

You already work with for loops, so maybe use something like this to collect the output in a variable:

TMP_RESULT=""

while foo; do
    # append in memory instead of going back to the disk each time
    TMP_RESULT+=$'\n'"APPEND THIS LINE TO RESULT"
done
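
and then the result only needs to touch the disk once, after the loop (the file path below is just a placeholder):

# single write at the end instead of rewriting the file every iteration
printf '%s\n' "$TMP_RESULT" > /tmp/result.txt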
