
perf: Python binding inference performance improvement #426

Open - wants to merge 12 commits into base: main

Conversation

kthui (Contributor) commented Jan 11, 2025:

What does the PR do?

Refactor infer() and async_infer() APIs to handle memory allocation and callbacks internally in C++, and only expose the basic interface for the Python iterator to fetch responses.
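
A rough sketch of the resulting shape (illustrative only; the class, method, and module names below are stand-ins, not the exact ones in tritonserver_pybind.cc):

// Sketch of the idea: expose only a response-fetch entry point to Python
// while memory allocation and callbacks stay internal to C++.
#include <pybind11/pybind11.h>

namespace py = pybind11;

class PyInferenceRequestSketch {
 public:
  // The Python iterator passes in a future; the C++ response-complete
  // callback later sets its result with the next response.
  void GetNextResponse(const py::object& py_future)
  {
    // Real implementation stores the future and completes it from the
    // Triton callback; see the review discussion below.
    (void)py_future;
  }
};

PYBIND11_MODULE(binding_sketch, m)
{
  py::class_<PyInferenceRequestSketch>(m, "InferenceRequestSketch")
      .def(py::init<>())
      .def("get_next_response", &PyInferenceRequestSketch::GetNextResponse);
}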

Checklist

  • PR title reflects the change and is of format <commit_type>: <Title>
  • Changes are described in the pull request.
  • Related issues are referenced.
  • Populated github labels field
  • Added test plan and verified test passes.
  • Verified that the PR passes existing CI.
  • Verified copyright is correct on all changed files.
  • Added succinct git squash message before merging ref.
  • All template sections are filled out.
  • Optional: Additional screenshots for behavior/output changes with before/after.

Commit Type:

Check the conventional commit type box here and add the label to the GitHub PR.

  • build
  • ci
  • docs
  • feat
  • fix
  • perf
  • refactor
  • revert
  • style
  • test

Related PRs:

triton-inference-server/server#7949

Where should the reviewer start?

Start with tritonserver_pybind.cc for the interface change, then move on to _model.py to see how the Python iterator interacts with the interface. Finally, look at _request.py and _response.py to see how they support the Python iterator.

For testing, start with test_binding.py and test_api.py, and then _tensor.py on the DLPack limitation regarding bytes.

Test plan:

The existing L0_python_api test is sufficient to catch any regression from this performance improvement. It has been modified to test against the new interface.

  • CI Pipeline ID: 23077037

Caveats:

Users are no longer able to specify a custom:

  • request release callback
  • response allocator
  • response callback

Currently, only CPU memory output is supported at the binding level, so GPU memory output will involve an extra D2H copy at the backend and an H2D copy at the frontend. This will be resolved as a follow-up.

The test_stop failure will have to be triaged and fixed as a follow-up.

Background

N/A

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

N/A

* fix request tensor lifecycle

* remove prints for benchmarking

* remove one h2h response copy

* add allocator delete

* schedule future.set_result to be called in event loop
@kthui kthui added the "PR: perf" label (a code change that improves performance) on Jan 11, 2025
@kthui kthui self-assigned this Jan 11, 2025
* Use tensor object in output

* Enable infer() to use the improved binding

* Pass the C pointer directly to Python

* Move output device if differ from request setting

* Copyright and pre-commit
@kthui kthui force-pushed the jacky-py-res-callback branch from 300b0b6 to 131078a on January 11, 2025 02:52
@kthui kthui marked this pull request as ready for review January 15, 2025 19:43
@@ -137,6 +137,7 @@ def test_memory_fallback_to_cpu(self, server_options):

tritonserver.default_memory_allocators[tritonserver.MemoryType.GPU] = allocator
Contributor:

should this be removed - or changed in some way to indicate the allocator is internal ....

Contributor Author:

yes, updated the test to indicate the allocator is internal, and it will always use CPU memory regardless of the backend memory preference.

@@ -164,6 +165,7 @@ def test_memory_allocator_exception(self, server_options):
):
pass

@pytest.mark.skip(reason="Skipping test, infer no longer use allocator")
Contributor:

we should keep this but refactor - if user requests output memory type gpu and that is not supported by the internal allocator - we would still want to raise an exception during inference

Contributor Author:

yes, we should raise an exception if the output memory type specified on the request is not supported, but currently the bindings do not accept a requested output memory type, so I think we can skip this test for now and add proper testing after adding support for allocating GPU memory.

Contributor:

hmm, but if we fail because say cupy is not available? Can we still make sure the right error gets propagated?

Contributor Author:

yes, the "test_unsupported_memory_type" is repurposed for testing moving outputs to unsupported memory type

@@ -418,6 +420,9 @@ def test_ready(self, server_options):
server = tritonserver.Server(server_options).start()
assert server.ready()

@pytest.mark.skip(
reason="Skipping test, some request/response object may not be released which may cause server stop to fail"
Contributor:

Can we use xfail instead of skip?

@rmccorm4 - what do you think - as we had spent time on fixing this - how much of an issue is this?

Contributor Author:

yes, switched to xfail.

* Indicate allocator is internal on test_memory_fallback_to_cpu()

* Remove test_memory_allocator_exception() as infer no longer use custom allocators

* Update reason for skipping test_unsupported_memory_type()

* Mark test_stop() with xfail instead of skip
TRITONSERVER_MemoryType* actual_memory_type,
int64_t* actual_memory_type_id)
{
*buffer = malloc(byte_size * sizeof(uint8_t));
Contributor:

are there tritonserver apis we should be using for memory allocation instead of directly calling malloc?

Contributor:

@GuanLuo to double check - should we be using the backend_memory_manager ? Reason I ask is that these / and the server already do support the different types of memory gpu / cpu / cpu pinned - and we would want that kind of support here if we can get it - and we should reuse it ...

Contributor:

What is in server certainly won't work; it is tightly coupled to the server frontend for optimization, which is why the allocator is left for the user to define.

You probably can reuse the one from the backend, but that is kind of strange. Plus, the one in the backend may not be as well crafted as you expect. Please evaluate whether it makes sense to proceed in this way.

Contributor:

Let's make that a TODO as we bring in support for more memory types - we should look to see if we can reuse a common memory allocator in the core - of the ones available, backend_memory_manager seems the closest - but agree it is odd to reuse here directly.

*actual_memory_type = TRITONSERVER_MEMORY_CPU;
*actual_memory_type_id = 0;
// The response allocator needs to be kept alive until the allocated memory
// is released via the release function.
Contributor:

is this true? - question: why not an allocator singleton for cpu - and reuse it for all requests?

kthui (Contributor Author), Jan 17, 2025:

yes, the core will need the allocator opaque object for finding the set release function. The allocator opaque object is basically this class that stores the release function pointer.

yes, updated to use singleton allocator object for all instances of the request wrapper.

int64_t memory_type_id)
{
free(buffer);
// Release ownership of the response allocator.
Contributor:

thinking a singleton could be simpler and avoid alloc / dealloc of the allocator object itself - if possible.

Contributor Author:

yes, updated to use a singleton response allocator.

// Release ownership of the response allocator.
std::unique_ptr<std::shared_ptr<struct TRITONSERVER_ResponseAllocator>>
allocator_ptr(reinterpret_cast<
std::shared_ptr<struct TRITONSERVER_ResponseAllocator>*>(
Contributor:

actually - what do we use the buffer_userp for?

Contributor Author:

it was used to store the shared pointer to the response allocator object and pass it to the release function, so the reference count could be incremented/decremented alongside malloc/free, allowing the allocator to be destructed only after the last allocated response memory is freed.

but it is no longer necessary with the singleton allocator.

if (response_future_.get() != nullptr) {
throw AlreadyExistsError("cannot call GetNextResponse concurrently");
}
response_future_.reset(new py::object(py_future));
Contributor:

not sure I follow the logic here - why do we create a new object based on the passed in object? why not a reference on the original object?

second: how does the swap below work?

kthui (Contributor Author), Jan 17, 2025:

I tried holding a raw pointer to the original object, but the pointer can no longer be dereferenced (segmentation fault) after GetNextResponse() returns, so the only safe way of holding on to the future object after GetNextResponse() returns is to increment the reference count on the future object, which is achieved by "copying" it to a new object.

the reason for using a unique pointer is to have a mechanism for checking whether there is a future object pending to be set with the next response without accessing any Python variable, to avoid holding the GIL for that simple check. It could be achieved with a simple bool flag, but I think it is safer to have one variable than two, to avoid setting one variable but forgetting about the other.

kthui (Contributor Author), Jan 27, 2025:

Experiment: Increment the reference count on the py_future object, and see if the object remains valid after the function returns.

TODO: The new will always allocate on the heap; can we keep the variable on the stack, to avoid a new/delete for every response?

Contributor:

Can we have some kind of small memory leak/growth pytest test that checks memory before, does some inferences, and checks memory after?

Contributor Author:

The implementation is updated to remove the pointer wrapper around response_future_, with more comments clarifying the flow between GetNextResponse() and AddNextResponse(): Avoid extra malloc when tracking response_future_
cc @nnshah1 @rmccorm4

kthui (Contributor Author), Jan 28, 2025:

> Can we have some kind of small memory leak/growth pytest test that checks memory before, does some inferences, and checks memory after?

Do you think something like this makes sense? I think there will always be some small growth, but we can set the acceptable growth on the test case and have the test fail if it grows beyond what we set.

More discussion here: #426 (comment)

rmccorm4 (Contributor), Jan 28, 2025:

Nice find! Looks worth a try to me - I see it's used by quite a few big projects - https://github.com/search?q=pytest-memray&type=code

  • pydantic
  • datadog
  • huggingface
  • ...

I'm curious how much time overhead it adds to run.

Contributor:

> but we can set the acceptable growth on the test case and have the test fail if it grows beyond what we set.

Let's make sure whatever threshold we set isn't tailored to the specific test case or number of iterations, and doesn't get hit if we increase the iterations on the test case or something.

{
py::gil_scoped_acquire gil;
std::unique_ptr<py::object> response_future_local(nullptr);
response_future_.swap(response_future_local);
Contributor:

not sure I follow this - we create a new py object from null, swap it with the stored one (releasing it?) - then set result on the nullptr based object?...

Or ... I guess swap works the opposite way :)

we swap the null ptr for the one stored in the future ....

Contributor Author:

the guarantee here that GetNextResponse() will not be called and receive a new future object (vs. this future object) is that this future object is not yet set with a result. So, as soon as this future object is set with a result, we need to be ready for the next future object from GetNextResponse() that may take its place. Thus, the solution here is to move the current future object from the class variable to a local variable, making the class variable ready for the next future object before the current future object is set with a result.
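
A minimal sketch of that hand-off, assuming response_future_ is a py::object member holding the pending asyncio future. Everything except GetNextResponse()/AddNextResponse()/response_future_ is illustrative; the real code raises AlreadyExistsError rather than std::runtime_error, and the path where a response is already queued is omitted.

#include <stdexcept>
#include <utility>

#include <pybind11/pybind11.h>

namespace py = pybind11;

class RequestWrapperSketch {
 public:
  void GetNextResponse(const py::object& py_future)
  {
    // Called from Python, so the GIL is already held here.
    if (response_future_.ptr() != nullptr) {
      throw std::runtime_error("cannot call GetNextResponse concurrently");
    }
    response_future_ = py_future;  // holds a reference to the future
  }

  // Called from the Triton response-complete callback on a non-Python thread.
  void AddNextResponse(py::object py_response)
  {
    py::gil_scoped_acquire gil;
    // Move the pending future into a local first, so response_future_ is
    // already empty if GetNextResponse() is called again as soon as the
    // result is delivered.
    py::object future_local(std::move(response_future_));
    // Deliver the result on the future's event loop instead of calling
    // set_result directly from this foreign thread.
    py::object loop = future_local.attr("get_loop")();
    loop.attr("call_soon_threadsafe")(
        future_local.attr("set_result"), py_response);
  }

 private:
  py::object response_future_;
};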

inference_request.response_queue,
raise_on_error,
)
self._server.infer_async(request)
Contributor:

how do we pass in the requested memory type to the C++ side?

Contributor Author:

I think we can add a new binding function to the request wrapper that allows setting a requested output memory type to the request wrapper object, then the response allocation function can access the requested memory type from the request wrapper object and allocate the correct memory type.
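
For illustration, a hypothetical sketch of that follow-up. None of these names exist in this PR, and the wiring of userp is simplified; it only shows the idea of the allocation callback honoring a requested output memory type, with CPU allocated via malloc as in this PR and anything else rejected until GPU support lands.

#include <cstdlib>

#include "triton/core/tritonserver.h"

// Hypothetical request wrapper member + setter exposed to Python.
class RequestWrapperSketch {
 public:
  void SetRequestedOutputMemoryType(
      TRITONSERVER_MemoryType type, int64_t type_id)
  {
    requested_memory_type_ = type;
    requested_memory_type_id_ = type_id;
  }
  TRITONSERVER_MemoryType requested_memory_type_{TRITONSERVER_MEMORY_CPU};
  int64_t requested_memory_type_id_{0};
};

// Allocation callback reading the requested type (wiring is illustrative).
TRITONSERVER_Error*
AllocSketchFn(
    TRITONSERVER_ResponseAllocator* allocator, const char* tensor_name,
    size_t byte_size, TRITONSERVER_MemoryType preferred_memory_type,
    int64_t preferred_memory_type_id, void* userp, void** buffer,
    void** buffer_userp, TRITONSERVER_MemoryType* actual_memory_type,
    int64_t* actual_memory_type_id)
{
  auto* request = reinterpret_cast<RequestWrapperSketch*>(userp);
  if (request->requested_memory_type_ != TRITONSERVER_MEMORY_CPU) {
    return TRITONSERVER_ErrorNew(
        TRITONSERVER_ERROR_UNSUPPORTED,
        "only CPU output memory is currently supported");
  }
  *buffer = malloc(byte_size);
  *buffer_userp = nullptr;
  *actual_memory_type = TRITONSERVER_MEMORY_CPU;
  *actual_memory_type_id = 0;
  return nullptr;  // success
}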

Contributor:

let's make sure the stories to follow up on customizing allocation are in place and scheduled. Priority would be to specify the memory type (device id, etc.)

def test_unsupported_memory_type(self, server_options):
# TODO: Revisit this test when GPU memory support is added, i.e. the request
Contributor:

Do we have a ticket for GPU follow-up? Can add DLIS-XXXX here if so

Contributor Author:

Yes, added the number to the code: [comment] Add ticket number to GPU follow-up TODOs

Comment on lines +313 to +316
@pytest.mark.xfail(
run=False,
reason="Some request/response object may not be released which may cause server stop to fail",
)
Contributor:

Can you refresh my memory on the context of this one?

  1. What happens when server.stop() is called? Crash, hang, timeout, etc?
  2. Will it always happen, or only intermittently?
  3. Does letting the server go out of scope naturally without a call to .stop() work nicely? Or is it silently having issues behind the scenes too?

Contributor Author:

> What happens when server.stop() is called?

According to test_api_logs/test_stop.server.log, the server tried to shut down, but it timed out at the end. There is no crash in the server log.

The pytest process itself crashed afterward with a segmentation fault.

> Will it always happen, or only intermittently?

It happens 100% of the time.

> Does letting the server go out of scope naturally without a call to .stop() work nicely?

There is no crash in pytest, but the server shutdown still times out, according to test_stop.server.log.

rmccorm4 (Contributor) left a comment:

Awesome work on this PR @kthui !

Overall changes and concepts to reduce thrashing on python GIL acquisition make sense to me - this was an awesome find.

Left a couple comments on benchmarking, memory checks, tests, etc

parameters={}, output_memory_type=None,
output_memory_allocator=None, response_queue=None,
_serialized_inputs={})
parameters={}, output_memory_type=None, _serialized_inputs={})
nnshah1 (Contributor), Jan 28, 2025:

Todo: for the future - we should find a way to document how to add a custom allocator - memory allocation is an area where customization is probably needed.

@kthui do we have follow on stories / epics identified for that?

"asyncio.Queue must be used for async response iterator"
)

# The 'request' is referencing pointers managed by the 'inference_request', so
Contributor:

this note seems unnecessary here - would we add it instead to the AsyncResponseIterator object? That's where we maintain that relationship, is that correct?

"queue.SimpleQueue must be used for response iterator"
)

# The 'request' is referencing pointers managed by the 'inference_request', so
Contributor:

see above - I think this more logically belongs with the comments for the Response Iterator - or could be omitted

LogMessage(LogLevel.ERROR, message)
# catastrophic failure
raise e from None
DeviceOrMemoryType = (
Contributor:

nit: this may be better defined in _memorybuffer.py and reused

):
result = InferenceResponse(
model,
request.id,
Contributor:

question: why omit this / change this - the idea here is that in case there is a fundamental error (i.e. an empty response, or some other error) - we get at least the request.id as stored in the original request to tie this response to the error - as in line 118 we aren't guaranteed a full response ...

response,
flags: TRITONSERVER_ResponseCompleteFlag,
output_memory_type: Optional[DeviceOrMemoryType] = None,
Contributor:

maybe instead of passing output_memory_type alone - we could pass the entire original request object and also use it to set the response id - in case the response is None?

# TODO: support classification
# values["classification_label"] = response.output_classification_label()

return result


class AsyncResponseIterator:
Contributor:

If I understand correctly - does this remove the ability for someone to set a queue / async queue in addition to the iterators?

raise response.error
return response

def cancel(self) -> None:
Contributor:

aren't we missing cancel in the new iterator?

self._complete = False
self._request = request
self._model = model
self._raise_on_error = raise_on_error
Contributor:

are we missing raise_on_error with new iterator?

raise StopIteration
response = self._queue.get()
self._complete = response.final
if response.error is not None and self._raise_on_error:
Contributor:

this seems missing from the new code?

@@ -868,16 +844,15 @@ class PyInferenceResponse
ThrowIfError(TRITONSERVER_InferenceResponseOutput(
triton_object_, index, &name, &datatype, &shape, &dim_count, &base,
&byte_size, &memory_type, &memory_type_id, &userp));
// The base pointer is deallocated when the response is deallocated.
Contributor:

comment needed? are all pointers deallocated when response is deallocated?

// The request wrapper needs to be kept alive until the release callback is
// invoked, so the Triton request can be deleted after the core/backend is
// done with it.
std::unique_ptr<std::shared_ptr<PyInferenceRequest>> request_ptr(
Contributor:

This seems a little confusing to me,

we are creating a new shared pointer to an existing shared pointer and then creating a unique ptr to it - but then releasing it back and casting to a void pointer ...

why not do

std::shared_ptr<PyInferenceRequest> request_ptr = request;  // increment reference count
void* user_p = reinterpret_cast<void*>(new std::shared_ptr<PyInferenceRequest>(request_ptr)); // convert to void*
TRITONSERVER_InferenceRequestSetReleaseCallback(triton_object_, ReleaseCallback, user_p); 

if we don't want to increase the reference count we can remove the assignment.

I think what is confusing is the mix of unique and shared pointer and new here -

Contributor:

or should we be using make_unique instead of new? maybe not supported for our c++ version.

Contributor:

IIUC, the unique pointer is used here to make sure the resource is cleaned up if an exception is thrown below. But if that is the intention, I don't think the current code does the job, as ownership is released before calling into TRITONSERVER_InferenceRequestSetReleaseCallback, which does not safeguard against the exception.

struct TRITONSERVER_InferenceRequest* request, const uint32_t flags,
void* userp)
{
std::unique_ptr<std::shared_ptr<PyInferenceRequest>> request_ptr(
Contributor:

I almost prefer to be more verbose here,

std::shared_ptr<PyInferenceRequest>* request_ptr = reinterpret_cast<std::shared_ptr<PyInferenceRequest>*>(userp);
delete request_ptr;

Contributor:

@rmccorm4, @GuanLuo for style - I see the general goodness of unique_ptr - but it kind of obscures what's happening here ....

Contributor:

Yea, at this point just reinterpret cast and delete should be fine.

// circular inclusion.
std::shared_ptr<PyInferenceRequest> request;
if (allocator != nullptr) {
ThrowIfError(TRITONSERVER_ResponseAllocatorDelete(allocator));
Contributor:

question - is it safe to call TRITONSERVER_ResponseAllocatorDelete(nullptr) ?

Generally for free / delete that is safe - just question if that is true here as well

{
// The response allocator is shared among all instances of this class, so it
// needs to be initialized once and once only.
// TODO: Why 'numpy.ones(2**27)' gets stuck when the response allocator is
Contributor:

seems very odd like something is interacting between numpy and this allocator - and that doesn't seem to make sense to me ....

auto cr = reinterpret_cast<CallbackResource*>(userp);
cr->release_fn(cr->request, flags, cr->user_object);
delete cr;
ThrowIfError(TRITONSERVER_BufferAttributesSetMemoryType(
Contributor:

do we want to throw if error or return the error?

Contributor:

Should return error as that is what the caller expects

void* buffer_userp, size_t byte_size, TRITONSERVER_MemoryType memory_type,
int64_t memory_type_id)
{
free(buffer);
Contributor:

can we add assert here to ensure memory type and memory type id are cpu and 0?
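
Something like the following (a sketch only, assuming the CPU-only release callback in this PR; the function name is illustrative):

#include <cassert>
#include <cstdlib>

#include "triton/core/tritonserver.h"

// CPU-only release callback with the asserts suggested above: this allocator
// only ever hands out malloc'd CPU memory with memory_type_id 0, so anything
// else indicates a bug.
TRITONSERVER_Error*
ResponseReleaseSketchFn(
    TRITONSERVER_ResponseAllocator* allocator, void* buffer,
    void* buffer_userp, size_t byte_size, TRITONSERVER_MemoryType memory_type,
    int64_t memory_type_id)
{
  assert(memory_type == TRITONSERVER_MEMORY_CPU);
  assert(memory_type_id == 0);
  free(buffer);
  return nullptr;  // success
}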

// Response management
void GetNextResponse(const py::object& py_future)
{
// Called exclusively from Python, so GIL is always held.
Contributor:

question: if we also acquire the GIL here, will it hang, or is it safe to do as a sanity check?

public:
std::unique_ptr<CallbackResource> request_callback_resource_{nullptr};
// There are two cases:
// 1. The backend is faster than the frontend:
Contributor:

Suggested change
// 1. The backend is faster than the frontend:
// 1. Responses have been generated and queued before GetNextResponse() has been called
// 2. GetNextResponse() is called before responses have been generated and queued.

I would remove the notion of "faster" and "slower" - I think that can be confusing here - it's not necessarily about speed of execution (i.e. the application may be purposefully not reading, handling something else, etc.) - faster and slower makes it seem like a rate matching issue - but it's really just whether or not there are items available when GetNextResponse is called ....

Contributor:

also change the description to match the if / else order, so:

  1. GetNextResponse is called before / when no responses have been queued.

if (response_future_.ptr() != nullptr) {
throw AlreadyExistsError("cannot call GetNextResponse concurrently");
}
response_future_ = py_future;
Contributor:

question - does this increment the py_future ref count?

std::pair<std::shared_ptr<PyInferenceResponse>, const uint32_t> py_response(
std::move(managed_ptr), std::move(flags));

// There are two cases:
Contributor:

Suggested change
// There are two cases:
// There are two cases:
// 1. GetNextResponse() hasn't been called and the future is not set.
// 2. GetNextResponse() has been called and future is set.

// 'response_future_'.
{
py::gil_scoped_acquire gil;
py::object response_future_local(std::move(response_future_));
Contributor:

still a little unsure why we need this if GetNextResponse holds the GIL - but I think we can explore that separately (the explicit move and release of the response_future_)

allocator, allocater_user_object));
response_callback_resource_.reset(new PyInferenceResponse::CallbackResource(
response, allocator_callback_resource_.get(), response_user_object));
// The caller is responsible for keeping the request wrapper alive until
Contributor:

this is maintained by the _api? is that correct - so the user doesn't have to worry about it explicitly - double checking

nnshah1 (Contributor) left a comment:

generally looks good to me - had a few questions about removing the queues, cancel and request id from the response iterators / creation -


ThrowIfError(TRITONSERVER_ResponseAllocatorSetBufferAttributesFunction(
allocator_.get(), ResponseAllocatorBufferAttributesFn));
});
}
Contributor:

In this case, I think you can move the allocator initialization and the related callback functions out to a separate class and make it a singleton; then this SetResponseAllocator function simply grabs the Triton allocator pointer from the singleton.
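
A minimal sketch of that shape, assuming the CPU-only alloc/release callbacks already defined in this PR (declared here, definitions omitted); the class name, the alloc/release function names, and the choice to never delete the singleton are illustrative, and error handling is elided:

#include <mutex>

#include "triton/core/tritonserver.h"

// Declarations of the CPU-only callbacks from this PR (definitions omitted).
TRITONSERVER_Error* ResponseAllocatorAllocFn(
    TRITONSERVER_ResponseAllocator*, const char*, size_t,
    TRITONSERVER_MemoryType, int64_t, void*, void**, void**,
    TRITONSERVER_MemoryType*, int64_t*);
TRITONSERVER_Error* ResponseAllocatorReleaseFn(
    TRITONSERVER_ResponseAllocator*, void*, void*, size_t,
    TRITONSERVER_MemoryType, int64_t);
TRITONSERVER_Error* ResponseAllocatorBufferAttributesFn(
    TRITONSERVER_ResponseAllocator*, const char*,
    TRITONSERVER_BufferAttributes*, void*, void*);

// Process-wide response allocator: created once, shared by every request
// wrapper, and never deleted.
class ResponseAllocatorSingletonSketch {
 public:
  static TRITONSERVER_ResponseAllocator* Get()
  {
    static std::once_flag once;
    std::call_once(once, [] {
      TRITONSERVER_ResponseAllocatorNew(
          &allocator_, ResponseAllocatorAllocFn, ResponseAllocatorReleaseFn,
          nullptr /* start_fn */);
      TRITONSERVER_ResponseAllocatorSetBufferAttributesFunction(
          allocator_, ResponseAllocatorBufferAttributesFn);
    });
    return allocator_;
  }

 private:
  static TRITONSERVER_ResponseAllocator* allocator_;
};

TRITONSERVER_ResponseAllocator*
    ResponseAllocatorSingletonSketch::allocator_ = nullptr;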


Labels
PR: perf (a code change that improves performance)

4 participants