The Duda stack should support an API that allows the server to listen for large data uploads: after Monkey receives the HTTP request headers (on PUT/POST), Duda handles each incoming data frame.
The API should be something like this:
map->static_upload_stream("/upload", int (*callback)(struct duda_request *dr, void *buffer, ssize_t len));
so every POST/PUT request with content length > 0 that arrives on the /upload URI will be handled by the callback function, which receives a buffer and a size. That function will be invoked repeatedly until the upload is complete or the connection is closed for some reason.
Or just wait for the end of the block and parse the whole header block at once.
Add the necessary API hooks (placeholder names):

- `on_headers_ready`
  - Could be used to authenticate the request before the body is sent (fail early), or to set up body streaming
  - Must expose an API to enable the body chunk callback
  - Return value actions:
    - Abort the connection, send an error response
    - Continue, read the request body
- `on_body_chunk_read`
  - Receives body chunks as they are read
  - Return value actions:
    - Abort the connection, send an error response
    - Chunk handled, buffered data can be discarded
    - Chunk handled, continue buffering data

Find all code that expects the whole request to be buffered and work around that.