Support optional time limit on function generation. Although CHD can complete in bounded time (at least given a uniformly random hashing primitive), most applications really only care about constraining actual processor or wall time consumed. Trying to estimate the cost beforehand is problematic, especially for large sets, because of memory latency issues and dynamic system load.
Applications cannot interrupt generation themselves without leaking the memory we use for our temporary arrays.
Some non-exclusive options:
longjmp from a signal handler triggered by alarm, setitimer, or timer_create. Need to worry about threading issues: pthread_sigmask versus sigprocmask, sigaction installing a process-global handler, etc. Probably best to use only timer_create and set the struct sigevent sigval member to point to the jmp_buf. This is probably the most performant option, at least for moderate to large key sets.
Check a sig_atomic_t flag which is set by a timer_create sigevent handler. Here we set the sigval member to point to the sig_atomic_t flag. A little simpler than the longjmp solution, as we don't have to worry about volatile-qualifying locals modified after setjmp.
Periodically check a system clock (clock, gettimeofday, clock_gettime with CLOCK_MONOTONIC, etc.). Fewer portability problems; none if using clock, but clock measures processor time, which may be cumulative across all threads.
Use getrusage to query CPU time. Definite threading issues. Would only want to use this if RUSAGE_THREAD (Linux) or RUSAGE_LWP (Solaris) is available.
Simply allow the application to specify a callback which we invoke periodically; the return value controls whether generation fails. Portable and simple on our side, but not very convenient for portable applications, which then have to tackle the above issues themselves.
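To make the first option concrete, here is a minimal sketch assuming timer_create with a CLOCK_MONOTONIC timer and SIGEV_SIGNAL delivery: the handler recovers the jmp_buf through sigev_value and siglongjmps out of the generation loop. The function name and the step-counting loop are hypothetical stand-ins for the real generation code.

```c
/* Sketch of the longjmp option. Hypothetical names throughout;
 * the loop body stands in for the real CHD generation work. */
#define _POSIX_C_SOURCE 200809L
#include <setjmp.h>
#include <signal.h>
#include <time.h>

static void on_expire(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* Recover the jmp_buf smuggled through sigev_value. */
    siglongjmp(*(sigjmp_buf *)info->si_value.sival_ptr, 1);
}

/* Returns 0 on success, -1 on error, -2 if the time limit expired. */
int generate_with_longjmp(double limit_seconds, long total_steps)
{
    static sigjmp_buf env;   /* static for simplicity; not reentrant */

    struct sigaction sa = {0};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_expire;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGRTMIN, &sa, NULL) != 0)
        return -1;

    struct sigevent sev = {0};
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;
    sev.sigev_value.sival_ptr = &env;       /* handler finds env here */

    timer_t timer;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timer) != 0)
        return -1;

    int rc = 0;
    if (sigsetjmp(env, 1) == 0) {
        struct itimerspec its = {0};
        its.it_value.tv_sec = (time_t)limit_seconds;
        its.it_value.tv_nsec = (long)((limit_seconds - (time_t)limit_seconds) * 1e9);
        timer_settime(timer, 0, &its, NULL);

        /* The generation loop needs no explicit timeout checks. */
        for (volatile long i = 0; i < total_steps; i++) {
            /* ... one displacement attempt per bucket would go here ... */
        }
    } else {
        rc = -2;   /* timer fired mid-generation */
    }
    timer_delete(timer);
    return rc;
}
```

Note that any locals modified between sigsetjmp and the jump would need volatile qualification; here the jmp_buf is static and rc is only written after the jump, so nothing is clobbered.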
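The flag variant is the same timer_create setup, but sigev_value points at a volatile sig_atomic_t that the handler sets and the generation loop polls. Again, all names are hypothetical sketches, not the library's API.

```c
/* Sketch of the sig_atomic_t flag option; hypothetical names. */
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <time.h>

static void set_flag(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* Recover the flag address smuggled through sigev_value. */
    *(volatile sig_atomic_t *)info->si_value.sival_ptr = 1;
}

/* Returns 0 on success, -1 on error, -2 if the time limit expired. */
int generate_with_flag(double limit_seconds, long total_steps)
{
    volatile sig_atomic_t expired = 0;

    struct sigaction sa = {0};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = set_flag;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGRTMIN, &sa, NULL) != 0)
        return -1;

    struct sigevent sev = {0};
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;
    sev.sigev_value.sival_ptr = (void *)&expired;

    timer_t timer;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timer) != 0)
        return -1;

    struct itimerspec its = {0};
    its.it_value.tv_sec = (time_t)limit_seconds;
    its.it_value.tv_nsec = (long)((limit_seconds - (time_t)limit_seconds) * 1e9);
    timer_settime(timer, 0, &its, NULL);

    int rc = 0;
    for (long i = 0; i < total_steps; i++) {
        /* ... generation work would go here ... */
        if (expired) { rc = -2; break; }   /* cheap polled check */
    }
    timer_delete(timer);
    return rc;
}
```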
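The clock-polling option avoids signals entirely; the sketch below checks clock_gettime with CLOCK_MONOTONIC every few thousand iterations to amortize the call's cost. The function name and check interval are illustrative choices.

```c
/* Sketch of the periodic clock-check option; hypothetical names. */
#define _POSIX_C_SOURCE 200809L
#include <time.h>

static double mono_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Returns 0 on success, -1 if the deadline passes first. */
int generate_with_deadline(double limit_seconds, long total_steps)
{
    const long CHECK_EVERY = 4096;   /* amortize clock reads */
    double deadline = mono_now() + limit_seconds;

    for (long i = 0; i < total_steps; i++) {
        /* ... one unit of generation work would go here ... */
        if (i % CHECK_EVERY == 0 && mono_now() > deadline)
            return -1;
    }
    return 0;
}
```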
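The getrusage option could poll CPU time the same way, using the per-thread scope where available and falling back to the whole process. This is a sketch under that assumption; the fallback gives wrong answers in multithreaded applications, which is exactly the threading issue noted above.

```c
/* Sketch of the getrusage option; hypothetical function names. */
#define _GNU_SOURCE               /* exposes RUSAGE_THREAD on Linux */
#include <sys/resource.h>
#include <sys/time.h>

static double cpu_seconds(void)
{
#if defined(RUSAGE_THREAD)
    int who = RUSAGE_THREAD;      /* Linux: this thread only */
#elif defined(RUSAGE_LWP)
    int who = RUSAGE_LWP;         /* Solaris equivalent */
#else
    int who = RUSAGE_SELF;        /* fallback: whole process */
#endif
    struct rusage ru;
    getrusage(who, &ru);
    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec)
         + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
}

/* Returns 0 on success, -1 once the CPU budget is exhausted. */
int generate_with_cpu_budget(double cpu_limit, long total_steps)
{
    double budget = cpu_seconds() + cpu_limit;
    for (long i = 0; i < total_steps; i++) {
        /* ... one unit of generation work would go here ... */
        if (i % 4096 == 0 && cpu_seconds() > budget)
            return -1;
    }
    return 0;
}
```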
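The callback option might shape the API as below. All names (phf_generate, phf_progress_fn) are hypothetical, not the library's actual interface; the point is that an abort path inside the library can free the temporary arrays, addressing the leak that external interruption would cause.

```c
/* Sketch of the progress-callback option; hypothetical API names. */
#include <stddef.h>
#include <stdlib.h>

typedef int (*phf_progress_fn)(void *udata);

/* Returns 0 on success, -1 on allocation failure,
 * -2 if the application's callback requested an abort. */
int phf_generate(size_t nkeys, phf_progress_fn progress, void *udata)
{
    const size_t CHECK_EVERY = 4096;
    int *scratch = malloc(nkeys * sizeof *scratch);  /* temporary arrays */
    if (!scratch)
        return -1;

    for (size_t i = 0; i < nkeys; i++) {
        /* ... displacement search for bucket i would go here ... */
        scratch[i] = 0;
        if (progress && i % CHECK_EVERY == 0 && progress(udata)) {
            free(scratch);   /* no leak on interruption */
            return -2;
        }
    }
    free(scratch);
    return 0;
}

/* Example callbacks an application might supply. */
static int never_abort(void *u)  { (void)u; return 0; }
static int always_abort(void *u) { (void)u; return 1; }
```

An application could implement its own deadline inside the callback using any of the clock techniques above, which is precisely the burden this option shifts onto portable callers.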