- New: Inference callable receives `Request` object with requested output names (see the sketch below). Thanks @catwell.
- Version of Triton Inference Server embedded in wheel: 2.50.0
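A minimal sketch of an undecorated inference callable that uses the requested output names from the entry above to skip computing unneeded outputs. The tensor names are illustrative, and `requested_output_names` is an assumed attribute name; check the PyTriton API reference for the exact spelling.

```python
def infer_fn(requests):
    """Undecorated inference callable: receives a list of Request instances."""
    responses = []
    for request in requests:
        # `requested_output_names` is an assumed attribute name for the feature
        # described above; consult the API reference for the exact name.
        wanted = set(request.requested_output_names)
        outputs = {}
        if "sum" in wanted:
            outputs["sum"] = request["a"] + request["b"]
        if "difference" in wanted:
            outputs["difference"] = request["a"] - request["b"]
        responses.append(outputs)
    return responses
```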
- Version of Triton Inference Server embedded in wheel: 2.49.0
- Version of Triton Inference Server embedded in wheel: 2.48.0
- Version of Triton Inference Server embedded in wheel: 2.47.0
- New: PyTriton workspace cleanup raises a `CleanupWarning` instead of `OSError` on file system failures
- Fix: Set NumPy version upper limit to `<2.0.0`
- Version of Triton Inference Server embedded in wheel: 2.46.0
- New: OpenTelemetry propagation support for tracing
- Version of Triton Inference Server embedded in wheel: 2.45.0
- New: Added the PyTriton Check Tool to perform preliminary checks on the environment where PyTriton is deployed.
- Change: Limited the `tritonclient` package extras to `http` and `grpc` only
- Fix: Pinned the `grpc-tools` version to handle a gRPC issue in `tritonclient`
- Build scripts update
  - Upgraded the CMake version used during build
  - Automatically configure the wheel name based on the `glibc` version
- Version of Triton Inference Server embedded in wheel: 2.44.0
- Fix: Performance improvements
- Version of Triton Inference Server embedded in wheel: 2.44.0
- New: Python 3.12 support
- Version of Triton Inference Server embedded in wheel: 2.44.0
- New: Relaxed wheel dependencies to avoid forced downgrading of protobuf and other packages in the NVIDIA 24.02 docker containers for PyTorch and other frameworks.
- Version of Triton Inference Server embedded in wheel: 2.43.0
- Add: Added a `TritonLifecyclePolicy` parameter to the `Triton` class to control the lifecycle of the Triton Inference Server: the server can be started at the beginning of the context (the default behavior) or at the call of the `run` or `serve` method. A second flag in this parameter indicates whether model configs should be created in the local filesystem or passed to the Triton Inference Server and managed by it (see the sketch below).
- Fix: `ModelManager` does not raise `tritonclient.grpc.InferenceServerException` from the `stop` method when the HTTP endpoint is disabled in the Triton configuration.
- Fix: Methods can be used as the inference callable.
- Version of Triton Inference Server embedded in wheel: 2.42.0
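A sketch of deferring server startup with the lifecycle policy described above. The field and keyword names used here (`launch_triton_on_startup`, `local_model_repository`, `triton_lifecycle_policy`) are assumptions for illustration; the released API may spell them differently.

```python
import numpy as np

from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton, TritonLifecyclePolicy


@batch
def identity_fn(input_1):
    return {"output_1": input_1}


# Hypothetical field names -- check the API reference for the exact spelling.
policy = TritonLifecyclePolicy(launch_triton_on_startup=False, local_model_repository=False)

with Triton(triton_lifecycle_policy=policy) as triton:  # assumed keyword name
    triton.bind(
        model_name="Identity",
        infer_func=identity_fn,
        inputs=[Tensor(name="input_1", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="output_1", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=8),
    )
    triton.serve()  # with deferred startup, the server launches here
```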
- Fix: `ModelClient` does not raise `gevent.exceptions.InvalidThreadUseError` when destroyed in a different thread.
- Version of Triton Inference Server embedded in wheel: 2.42.0
- New: Decoupled models support
- New: `AsyncioDecoupledModelClient`, which works in async frameworks with decoupled Triton models, such as some Large Language Models (see the sketch below).
- Fix: Fixed a bug that prevented getting the log level when the HTTP endpoint was disabled. Thanks @catwell.
- Version of Triton Inference Server embedded in wheel: 2.41.0
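A sketch of streaming partial results with the decoupled client from the entry above, assuming a decoupled model named `Streamer` with a bytes input `prompt` and an output `response`; decoupled models are served over gRPC.

```python
import asyncio

import numpy as np

from pytriton.client import AsyncioDecoupledModelClient


async def stream():
    # Decoupled models may return many responses for a single request,
    # so the client yields them as an async iterator.
    async with AsyncioDecoupledModelClient("grpc://localhost:8001", "Streamer") as client:
        async for partial in client.infer_sample(prompt=np.array([b"Hello"])):
            print(partial["response"])


asyncio.run(stream())
```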
- New: You can create a client from an existing client instance or model configuration to avoid loading the model configuration from the server (see the sketch below).
- New: Introduced a warning system using the `warnings` module.
- Fix: The experimental client for decoupled models prevents sending another request while responses from the previous request are not consumed, and blocks close until the stream is stopped.
- Fix: Leak of `ModelClient` during Triton creation
- Fix: Non-declared project dependencies (removed from use in code or added to package dependencies)
- Fix: Remote model is now unloaded from Triton when `RemoteTriton` is closed.
- Version of Triton Inference Server embedded in wheel: 2.39.0
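A sketch of reusing an already-fetched model configuration, per the entry above. The address and model name are illustrative, and `from_existing_client` is the classmethod name as I understand it; verify it against the client API reference.

```python
import numpy as np

from pytriton.client import ModelClient

# The first client fetches the model configuration from the server.
with ModelClient("localhost:8000", "Identity") as first_client:
    # The second client reuses that configuration instead of asking the server again.
    with ModelClient.from_existing_client(first_client) as second_client:
        result = second_client.infer_sample(input_1=np.array([1.0], dtype=np.float32))
```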
- New: The location of workspaces with temporary Triton model repositories and communication file sockets can be configured with the `$PYTRITON_HOME` environment variable (see the example below)
- Fix: Restored handling of `KeyboardInterrupt` in `triton.serve()`
- Fix: Removed the limit on handling tensors of bytes dtype
- Build scripts update
  - Added support for arm64 platform builds
- Version of Triton Inference Server embedded in wheel: 2.39.0
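A minimal example of redirecting the workspace location via the environment variable named above; the path is illustrative, and the variable must be set before the Triton instance is created.

```python
import os

# Place temporary model repositories and socket files under a custom directory.
os.environ["PYTRITON_HOME"] = "/var/tmp/pytriton"  # illustrative path

from pytriton.triton import Triton  # imported after the variable is set

with Triton() as triton:
    ...  # bind and serve models as usual
```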
- New: Remote Mode - PyTriton can be used to connect to a remote Triton Inference Server
  - Introduced the `RemoteTriton` class, which can connect to a remote Triton Inference Server running on the same machine by passing the Triton URL (see the sketch below).
  - Changed the Triton lifecycle: the Triton Inference Server now starts when the context is entered. This allows models to be loaded dynamically into the running server when the `bind` method is called. It is still possible to create a `Triton` instance without entering the context and bind models before starting the server; in that case the models are lazy-loaded when the `run` or `serve` method is called, as before.
  - In the `RemoteTriton` class, calling the enter or `connect` method connects to the Triton server, so models can be loaded safely while binding inference functions (if `RemoteTriton` is used without a context manager, models are lazy-loaded when the `connect` or `serve` method is called).
- Change: The `@batch` decorator raises a `ValueError` if any of the outputs have a different batch size than expected.
- Fix: gevent resource leak in `FuturesModelClient`
- Version of Triton Inference Server embedded in wheel: 2.36.0
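A sketch of the remote mode described above: binding a model to an already-running Triton on the same machine. The URL and model name are illustrative; note that the `@batch` callable must return outputs whose batch size matches the inputs, or the decorator raises `ValueError`.

```python
import numpy as np

from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import RemoteTriton


@batch
def scale_fn(input_1):
    # Output batch size matches the input batch size, as @batch requires.
    return {"output_1": input_1 * 2.0}


# Connects to a Triton Inference Server already running on this machine.
with RemoteTriton(url="localhost:8000") as triton:
    triton.bind(
        model_name="Scaler",
        infer_func=scale_fn,
        inputs=[Tensor(name="input_1", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="output_1", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=16),
    )
    triton.serve()
```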
- Change: `KeyboardInterrupt` is now handled in `triton.serve()`. PyTriton hosting scripts return an exit code of 0 instead of 130 when they receive a SIGINT signal.
- Fix: Addressed potential instability in shared memory management.
- Version of Triton Inference Server embedded in wheel: 2.36.0
- new: Support for multiple Python versions, 3.8 and newer
- new: Added support for decoupled models, enabling streaming models (alpha state)
- change: Upgraded Triton Inference Server binaries to version 2.36.0. Note that this version of Triton Inference Server requires glibc 2.35 or newer.
- Version of Triton Inference Server embedded in wheel: 2.36.0
- new: Allow executing multiple PyTriton instances in the same process and/or host
- fix: Invalid flags for the Proxy Backend configuration passed to Triton
- Version of Triton Inference Server embedded in wheel: 2.33.0
- new: Introduced the `strict` flag in `Triton.bind`, which enables validation of the data types and shapes of inference callable outputs against the model config (see the sketch below)
- new: `AsyncioModelClient`, which works in FastAPI and other async frameworks
- fix: `FuturesModelClient` does not raise `gevent.exceptions.InvalidThreadUseError`
- fix: Do not throw `TimeoutError` if the server could not be reached during model verification
- Version of Triton Inference Server embedded in wheel: 2.33.0
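A sketch combining both additions above: binding with `strict=True` so outputs are validated against the model config, and calling the model from async code with `AsyncioModelClient`. The model name, tensor names, and address are illustrative.

```python
import asyncio

import numpy as np

from pytriton.client import AsyncioModelClient
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def double_fn(input_1):
    return {"output_1": input_1 * 2.0}


async def main():
    with Triton() as triton:
        triton.bind(
            model_name="Doubler",
            infer_func=double_fn,
            inputs=[Tensor(name="input_1", dtype=np.float32, shape=(-1,))],
            outputs=[Tensor(name="output_1", dtype=np.float32, shape=(-1,))],
            config=ModelConfig(max_batch_size=8),
            strict=True,  # validate output dtypes/shapes against the model config
        )
        triton.run()  # start serving without blocking
        client = AsyncioModelClient("localhost:8000", "Doubler")
        try:
            result = await client.infer_sample(input_1=np.array([1.0], dtype=np.float32))
            print(result["output_1"])
        finally:
            await client.close()


asyncio.run(main())
```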
- Improved verification of the Proxy Backend environment when running under the same Python interpreter
- Fixed `pytriton.__version__` to represent the currently installed version
- Version of Triton Inference Server embedded in wheel: 2.33.0
- Added the `inference_timeout_s` parameter to client classes (see the example below)
- Renamed `PyTritonClientUrlParseError` to `PyTritonClientInvalidUrlError`
- `ModelClient` and `FuturesModelClient` methods raise `PyTritonClientClosedError` when used after the client is closed
- Pinned the tritonclient dependency due to issues with tritonclient >= 2.34 on systems with glibc versions lower than 2.34
- Added a warning after Triton Server setup and teardown when a too-verbose logging level is used, as it may cause a significant performance drop in model inference
- Version of Triton Inference Server embedded in wheel: 2.33.0
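A minimal example of the client-side timeout named above; the address and model name are illustrative.

```python
import numpy as np

from pytriton.client import ModelClient

# Fail the call if the server does not respond within 30 seconds.
with ModelClient("localhost:8000", "Doubler", inference_timeout_s=30.0) as client:
    result = client.infer_sample(input_1=np.array([1.0], dtype=np.float32))
```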
- Fixed handling of the `TritonConfig.cache_directory` option; the directory was always overwritten with the default value (see the example below)
- Fixed the tritonclient dependency; PyTriton needs a tritonclient version that supports HTTP headers and parameters
- Improved shared memory usage to fit the 64 MB limit (the default for Docker and Kubernetes) by reducing the initial size for the PyTriton Proxy Backend
- Version of Triton Inference Server embedded in wheel: 2.33.0
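A minimal example of setting the cache directory whose handling was fixed above; the path is illustrative.

```python
from pytriton.triton import Triton, TritonConfig

# Keep Triton's cache in a custom location instead of the default.
config = TritonConfig(cache_directory="/opt/pytriton/cache")  # illustrative path

with Triton(config=config) as triton:
    ...  # bind and serve models as usual
```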
- Added support for using custom HTTP/gRPC request headers and parameters.
  This change breaks backward compatibility of the inference function signature. The undecorated inference function now accepts a list of `Request` instances instead of a list of dictionaries. The `Request` class contains the input data and the parameters, which combine request parameters and headers. See the documentation for further information.
- Added `FuturesModelClient`, which enables sending inference requests in parallel (see the sketch below).
- Added displaying a documentation link after models are loaded.
- Version of Triton Inference Server embedded in wheel: 2.33.0
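A sketch of parallel requests with `FuturesModelClient` from the entry above; each `infer_sample` call returns a `concurrent.futures.Future`. The address and model name are illustrative.

```python
import numpy as np

from pytriton.client import FuturesModelClient

with FuturesModelClient("localhost:8000", "Doubler") as client:
    # Issue several requests without waiting for earlier ones to finish.
    futures = [
        client.infer_sample(input_1=np.array([float(i)], dtype=np.float32))
        for i in range(4)
    ]
    results = [future.result() for future in futures]  # gather when needed
```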
- Improved the `pytriton.decorators.group_by_values` function (see the sketch below)
  - Modified the function to avoid calling the inference callable on each individual sample when grouping by string/bytes input
  - Added the `pad_fn` argument for easy padding and combining of the inference results
- Fixed Triton binaries search
- Improved workspace management (remove workspace on shutdown)
- Version of external components used during testing:
  - Triton Inference Server: 2.29.0
  - Other component versions depend on the framework used and the Triton Inference Server container versions. Refer to the Triton support matrix for a detailed summary.
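A sketch of `group_by_values` per the entry above; the tensor names are illustrative, and the exact `pad_fn` callback contract is described in the PyTriton documentation.

```python
import numpy as np

from pytriton.decorators import batch, group_by_values


@batch
@group_by_values("lang")  # the callable runs once per distinct "lang" value
def translate_fn(text, lang):
    # Within one call every sample shares the same `lang` value, so grouping by
    # a string/bytes input no longer degrades into per-sample invocations.
    return {"translation": text}
```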
- Added validation of the model name passed to the Triton `bind` method.
- Added monkey-patching of the `InferenceServerClient.__del__` method to prevent unhandled exceptions.
- Version of external components used during testing:
  - Triton Inference Server: 2.29.0
  - Other component versions depend on the framework used and the Triton Inference Server container versions. Refer to the Triton support matrix for a detailed summary.
- Fixed getting the model config in the `fill_optionals` decorator (see the sketch below).
- Version of external components used during testing:
  - Triton Inference Server: 2.29.0
  - Other component versions depend on the framework used and the Triton Inference Server container versions. Refer to the Triton support matrix for a detailed summary.
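A sketch of the `fill_optionals` decorator mentioned above, which needs the model config to resolve optional inputs; the tensor names and default value are illustrative.

```python
import numpy as np

from pytriton.decorators import batch, fill_optionals


@fill_optionals(threshold=np.array([0.5], dtype=np.float32))
@batch
def classify_fn(scores, threshold):
    # Requests that omit the optional "threshold" input receive the default above.
    return {"labels": (scores > threshold).astype(np.int32)}
```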
- Fixed wheel build to support installations on operating systems with glibc version 2.31 or higher.
- Updated the documentation on custom builds of the package.
- Change: The `TritonContext` instance is shared across bound models and contains a `model_configs` dictionary.
- Fixed support for binding multiple models that use methods of the same class.
- Version of external components used during testing:
  - Triton Inference Server: 2.29.0
  - Other component versions depend on the framework used and the Triton Inference Server container versions. Refer to the Triton support matrix for a detailed summary.
- Change: The `@first_value` decorator has been updated with new features (see the sketch below):
  - Renamed from `@first_values` to `@first_value`
  - Added a `strict` flag to toggle checking that the values on a single selected input of the request are all equal. Defaults to True
  - Added a `squeeze_single_values` flag to toggle squeezing of single-value ND arrays to scalars. Defaults to True
- Fix: `@fill_optionals` now supports non-batching models
- Fix: `@first_value` fixed to work with optional inputs
- Fix: `@group_by_values` fixed to work with string inputs
- Fix: `@group_by_values` fixed to work sample-wise
- Version of external components used during testing:
  - Triton Inference Server: 2.29.0
  - Other component versions depend on the framework used and the Triton Inference Server container versions. Refer to the Triton support matrix for a detailed summary.
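A sketch of the updated decorator per the entries above; the tensor names are illustrative, and both flags are shown at their default values.

```python
import numpy as np

from pytriton.decorators import batch, first_value


@batch
@first_value("temperature", strict=True, squeeze_single_values=True)
def generate_fn(tokens, temperature):
    # `temperature` arrives as a scalar taken from the first sample; with
    # strict=True, differing values within the request raise an error.
    return {"output": tokens}
```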
- Initial release of PyTriton
- Version of external components used during testing:
  - Triton Inference Server: 2.29.0
  - Other component versions depend on the framework used and the Triton Inference Server container versions. Refer to the Triton support matrix for a detailed summary.