diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index ef4dce98..c8c62828 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -9,7 +9,7 @@ on: jobs: build-lint-test: strategy: - fail-fast: true + fail-fast: false matrix: # TODO(cretz): Enable Windows (it's slow) # @@ -75,12 +75,12 @@ jobs: working-directory: ./temporalio run: bundle install - - name: Check generated protos + - name: Check generated code unchanged if: ${{ matrix.checkTarget }} - working-directory: ./temporalio run: | - bundle exec rake proto:generate - [[ -z $(git status --porcelain lib/temporalio/api) ]] || (git diff lib/temporalio/api; echo "Protos changed" 1>&2; exit 1) + npx doctoc README.md + cd temporalio && bundle exec rake proto:generate + git diff --exit-code - name: Lint, compile, test Ruby working-directory: ./temporalio diff --git a/README.md b/README.md index c507ad7a..a89a8cbc 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,427 @@ # Temporal Ruby SDK +[![MIT](https://img.shields.io/github/license/temporalio/sdk-ruby.svg?style=for-the-badge)](LICENSE) + +[Temporal](https://temporal.io/) is a distributed, scalable, durable, and highly available orchestration engine used to +execute asynchronous, long-running business logic in a scalable and resilient way. + +"Temporal Ruby SDK" is the framework for authoring workflows and activities using the Ruby programming language. + ⚠️ UNDER ACTIVE DEVELOPMENT -The last tag before this refresh is [v0.1.1](https://github.com/temporalio/sdk-ruby/tree/v0.1.1). Please reference that -tag for the previous code. +This SDK is under active development and has not released a stable version yet. APIs may change in incompatible ways +until the SDK is marked stable. The SDK has undergone a refresh from a previous unstable version. The last tag before +this refresh is [v0.1.1](https://github.com/temporalio/sdk-ruby/tree/v0.1.1). Please reference that tag for the +previous code if needed. 
+
+Notably missing from this SDK:
+
+* Workflow workers
+
+**NOTE: This README is for the current branch and not necessarily what's released on RubyGems.**
+
+---
+
+
+
+**Contents**
+
+- [Quick Start](#quick-start)
+  - [Installation](#installation)
+  - [Implementing an Activity](#implementing-an-activity)
+  - [Running a Workflow](#running-a-workflow)
+- [Usage](#usage)
+  - [Client](#client)
+    - [Cloud Client Using mTLS](#cloud-client-using-mtls)
+    - [Data Conversion](#data-conversion)
+      - [ActiveRecord and ActiveModel](#activerecord-and-activemodel)
+  - [Workers](#workers)
+  - [Workflows](#workflows)
+  - [Activities](#activities)
+    - [Activity Definition](#activity-definition)
+    - [Activity Context](#activity-context)
+    - [Activity Heartbeating and Cancellation](#activity-heartbeating-and-cancellation)
+    - [Activity Worker Shutdown](#activity-worker-shutdown)
+    - [Activity Concurrency and Executors](#activity-concurrency-and-executors)
+    - [Activity Testing](#activity-testing)
+- [Development](#development)
+  - [Build](#build)
+  - [Testing](#testing)
+  - [Code Formatting and Type Checking](#code-formatting-and-type-checking)
+  - [Proto Generation](#proto-generation)
+
+
+
+## Quick Start
+
+### Installation
+
+⚠️ PENDING GEM PUBLISH
+
+**NOTE: Due to [an issue](https://github.com/temporalio/sdk-ruby/issues/162), fibers (and the `async` gem) are only
+supported on Ruby versions 3.3 and newer.**
+
+### Implementing an Activity
+
+Implementing workflows is not yet supported in the Ruby SDK, but implementing activities is.
+
+For example, if you have a `SayHelloWorkflow` workflow in another Temporal language that invokes a `SayHello` activity
+on `my-task-queue` in Ruby, you can have the following Ruby script:
+
+```ruby
+require 'temporalio/activity'
+require 'temporalio/cancellation'
+require 'temporalio/client'
+require 'temporalio/worker'
+
+# Implementation of a simple activity
+class SayHelloActivity < Temporalio::Activity
+  def execute(name)
+    "Hello, #{name}!"
+  end
+end
+
+# Create a client
+client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
+
+# Create a worker with the client and activities
+worker = Temporalio::Worker.new(
+  client:,
+  task_queue: 'my-task-queue',
+  # There are various forms an activity can take, see specific section for details.
+  activities: [SayHelloActivity]
+)
+
+# Run the worker until SIGINT. This can be done in many ways, see specific
+# section for details.
+cancellation, cancel_proc = Temporalio::Cancellation.new
+Signal.trap('INT') { cancel_proc.call }
+worker.run(cancellation:)
+```
+
+This will run the worker until Ctrl+C is pressed.
+
+### Running a Workflow
+
+Assuming that `SayHelloWorkflow` just calls this activity, it can be run like so:
+
+```ruby
+require 'temporalio/client'
+
+# Create a client
+client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
+
+# Run workflow
+result = client.execute_workflow(
+  'SayHelloWorkflow',
+  'Temporal',
+  id: 'my-workflow-id',
+  task_queue: 'my-task-queue'
+)
+puts "Result: #{result}"
+```
+
+This will output:
+
+```
+Result: Hello, Temporal!
+```
+
+## Usage
+
+### Client
+
+A client can be created and used to start a workflow or otherwise interact with Temporal. For example:
+
+```ruby
+require 'temporalio/client'
+
+# Create a client
+client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
+
+# Start a workflow
+handle = client.start_workflow(
+  'SayHelloWorkflow',
+  'Temporal',
+  id: 'my-workflow-id',
+  task_queue: 'my-task-queue'
+)
+
+# Wait for result
+result = handle.result
+puts "Result: #{result}"
+```
+
+Notes about the above code:
+
+* Temporal clients are not explicitly closed.
+* To enable TLS, the `tls` option can be set to `true` or a `Temporalio::Client::Connection::TLSOptions` instance.
+* Instead of `start_workflow` + `result` above, the `execute_workflow` shortcut can be used if the handle is not needed.
+* The `handle` above is a `Temporalio::Client::WorkflowHandle` which has several other operations that can be performed + on a workflow. To get a handle to an existing workflow, use `workflow_handle` on the client. +* Clients are thread safe and are fiber-compatible (but fiber compatibility only supported for Ruby 3.3+ at this time). + +#### Cloud Client Using mTLS + +Assuming a client certificate is present at `my-cert.pem` and a client key is present at `my-key.pem`, this is how to +connect to Temporal Cloud: + +```ruby +require 'temporalio/client' + +# Create a client +client = Temporalio::Client.connect( + 'my-namespace.a1b2c.tmprl.cloud:7233', + 'my-namespace.a1b2c', + tls: Temporalio::Client::Connection::TLSOptions.new( + client_cert: File.read('my-cert.pem'), + client_private_key: File.read('my-key.pem') + )) +``` + +#### Data Conversion + +Data converters are used to convert raw Temporal payloads to/from actual Ruby types. A custom data converter can be set +via the `data_converter` keyword argument when creating a client. Data converters are a combination of payload +converters, payload codecs, and failure converters. Payload converters convert Ruby values to/from serialized bytes. +Payload codecs convert bytes to bytes (e.g. for compression or encryption). Failure converters convert exceptions +to/from serialized failures. + +Data converters are in the `Temporalio::Converters` module. The default data converter uses a default payload converter, +which supports the following types: + +* `nil` +* "bytes" (i.e. `String` with `Encoding::ASCII_8BIT` encoding) +* `Google::Protobuf::MessageExts` instances +* [`JSON` module](https://docs.ruby-lang.org/en/master/JSON.html) for everything else + +This means that normal Ruby objects will use `JSON.generate` when serializing and `JSON.parse` when deserializing (with +`create_additions: true` set by default). So a Ruby object will often appear as a hash when deserialized. 
While +"JSON Additions" are supported, it is not cross-SDK-language compatible since this is a Ruby-specific construct. + +The default payload converter is a collection of "encoding payload converters". On serialize, each encoding converter +will be tried in order until one accepts (default falls through to the JSON one). The encoding converter sets an +`encoding` metadata value which is used to know which converter to use on deserialize. Custom encoding converters can be +created, or even the entire payload converter can be replaced with a different implementation. + +##### ActiveRecord and ActiveModel + +By default, `ActiveRecord` and `ActiveModel` objects do not natively support the `JSON` module. A mixin can be created +to add this support for `ActiveRecord`, for example: + +```ruby +module ActiveRecordJSONSupport + extend ActiveSupport::Concern + include ActiveModel::Serializers::JSON + + included do + def to_json(*args) + hash = as_json + hash[::JSON.create_id] = self.class.name + hash.to_json(*args) + end -**TODO: Usage documentation** + def self.json_create(object) + object.delete(::JSON.create_id) + ret = new + ret.attributes = object + ret + end + end +end +``` + +Similarly, a mixin for `ActiveModel` that adds `attributes` accessors can leverage this same mixin, for example: + +```ruby +module ActiveModelJSONSupport + extend ActiveSupport::Concern + include ActiveRecordJSONSupport + + included do + def attributes=(hash) + hash.each do |key, value| + send("#{key}=", value) + end + end + + def attributes + instance_values + end + end +end +``` + +Now `include ActiveRecordJSONSupport` or `include ActiveModelJSONSupport` will make the models work with Ruby `JSON` +module and therefore Temporal. Of course any other approach to make the models work with the `JSON` module will work as +well. + +### Workers + +Workers host workflows and/or activities. Workflows cannot yet be written in Ruby, but activities can. 
Here's how to run
+an activity worker:
+
+```ruby
+require 'temporalio/client'
+require 'temporalio/worker'
+require 'my_module'
+
+# Create a client
+client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
+
+# Create a worker with the client and activities
+worker = Temporalio::Worker.new(
+  client:,
+  task_queue: 'my-task-queue',
+  # There are various forms an activity can take, see specific section for details.
+  activities: [MyModule::MyActivity]
+)
+
+# Run the worker until block complete
+worker.run do
+  something_that_waits_for_completion
+end
+```
+
+Notes about the above code:
+
+* A worker uses the same client that is used for other Temporal things.
+* This just shows providing an activity class, but there are other forms, see the "Activities" section for details.
+* The worker `run` method accepts an optional `Temporalio::Cancellation` object that can be used to cancel instead of,
+  or in addition to, providing a block that waits for completion.
+* Workers work with threads or fibers (but fiber compatibility is only supported for Ruby 3.3+ at this time).
+  Fiber-based activities (see "Activities" section) only work if the worker is created within a fiber.
+* The `run` method does not return until the worker is shut down. This means even if shutdown is triggered (e.g. via
+  `Cancellation` or block completion), it may not return immediately. Activities that do not complete may hang worker
+  shutdown, see the "Activities" section.
+* Workers can have many more options not shown here (e.g. data converters and interceptors).
+* The `Temporalio::Worker.run_all` class method is available for running multiple workers concurrently.
+
+### Workflows
+
+⚠️ Workflows cannot yet be implemented in Ruby.
+
+### Activities
+
+#### Activity Definition
+
+Activities can be defined in a few different ways. They are usually classes, but manual definitions are supported too.
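For instance, a manual definition can wrap a plain proc. The snippet below is only a sketch: the
`Temporalio::Activity::Definition.new` call in the comments uses hypothetical keyword arguments (check the SDK
documentation for the exact signature), and only the plain-Ruby proc itself is shown executing.

```ruby
# The activity body itself is just a plain Ruby callable.
say_hello = proc { |name| "Hello, #{name}!" }

say_hello.call('Temporal') # => "Hello, Temporal!"

# Hypothetical sketch of wrapping the proc as a manual definition and
# registering it with a worker. The exact Definition.new signature may
# differ; the class form is the recommended approach.
#
#   definition = Temporalio::Activity::Definition.new(name: 'SayHello', &say_hello)
#   worker = Temporalio::Worker.new(client:, task_queue: 'my-task-queue',
#                                   activities: [definition])
```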
+ +Here is a common activity definition: + +```ruby +class FindUserActivity < Temporalio::Activity + def execute(user_id) + User.find(user_id) + end +end +``` + +Activities are defined as classes that extend `Temporalio::Activity` and provide an `execute` method. When this activity +is provided to the worker as a _class_ (e.g. `activities: [FindUserActivity]`), it will be instantiated for +_every attempt_. Many users may prefer using the same instance across activities, for example: + +```ruby +class FindUserActivity < Temporalio::Activity + def initialize(db) + @db = db + end + + def execute(user_id) + @db[:users].first(id: user_id) + end +end +``` + +When this is provided to a worker as an instance of the activity (e.g. `activities: [FindUserActivity.new(my_db)]`) then +the same instance is reused for each activity. + +Some notes about activity definition: + +* Temporal activities are identified by their name (or sometimes referred to as "activity type"). This defaults to the + unqualified class name of the activity, but can be customized by calling the `activity_name` class method. +* Long running activities should heartbeat regularly, see "Activity Heartbeating and Cancellation" later. +* By default every activity attempt is executed in a thread on a thread pool, but fibers are also supported. See + "Activity Concurrency and Executors" section later for more details. +* Technically an activity definition can be created manually via `Temporalio::Activity::Definition.new` that accepts a + proc or a block, but the class form is recommended. + +#### Activity Context + +When running in an activity, the `Temporalio::Activity::Context` is available via +`Temporalio::Activity::Context.current` which is backed by a thread/fiber local. In addition to other more advanced +things, this context provides: + +* `info` - Information about the running activity. +* `heartbeat` - Method to call to issue an activity heartbeat (see "Activity Heartbeating and Cancellation" later). 
+* `cancellation` - Instance of `Temporalio::Cancellation` canceled when an activity is canceled (see
+  "Activity Heartbeating and Cancellation" later).
+* `worker_shutdown_cancellation` - Instance of `Temporalio::Cancellation` canceled when the worker is shutting down
+  (see "Activity Worker Shutdown" later).
+* `logger` - Logger that automatically appends a hash with some activity info to every message.
+
+#### Activity Heartbeating and Cancellation
+
+In order for a non-local activity to be notified of server-side cancellation requests, it must regularly invoke
+`heartbeat` on the `Temporalio::Activity::Context` instance (available via `Temporalio::Activity::Context.current`). It
+is strongly recommended that all but the fastest-executing activities call this method regularly.
+
+In addition to obtaining cancellation information, heartbeats also support detail data that is persisted on the server
+for retrieval during activity retry. If an activity calls `heartbeat(123)` and then fails and is retried,
+`Temporalio::Activity::Context.current.info.heartbeat_details.first` will be `123`.
+
+An activity can be canceled for multiple reasons, some server-side and some worker-side. Server-side cancellation
+reasons include the workflow canceling the activity, the workflow completing, or the activity timing out. On the worker
+side, the activity can be canceled on worker shutdown (see next section). By default, cancellation is relayed two ways:
+by marking the `cancellation` on `Temporalio::Activity::Context` as canceled, and by issuing a `Thread.raise` or
+`Fiber.raise` with a `Temporalio::Error::CanceledError`.
+
+The `raise`-by-default approach was chosen because requiring activities to opt in to cancellation checking would be
+dangerous to the health of the system and to the continued use of worker slots.
But if this behavior is not wanted,
+the `activity_cancel_raise false` class method can be called at the top of the activity, which disables the `raise`
+behavior and just sets the `cancellation` as canceled.
+
+To shield work from cancellation, pass a block with the code to be shielded to the `shield` method on the
+`Temporalio::Cancellation` object. The cancellation will not take effect on either the cancellation object or the
+raise call while the work is shielded (regardless of nesting depth). Once the shielding is complete, the cancellation
+will take effect, including `Thread.raise`/`Fiber.raise` if that remains enabled.
+
+#### Activity Worker Shutdown
+
+An activity can react to worker shutdown specifically; a normal cancellation will also be sent. A worker will not
+complete its shutdown while an activity is in progress.
+
+Upon worker shutdown, the `worker_shutdown_cancellation` cancellation on `Temporalio::Activity::Context` will be
+canceled. Then the worker will wait for a grace period set by the `graceful_shutdown_period` worker option (default 0)
+before issuing actual cancellation to all still-running activities.
+
+Worker shutdown will then wait on all activities to complete. If a long-running activity does not respect cancellation,
+the shutdown may never complete.
+
+#### Activity Concurrency and Executors
+
+By default, activities run in the "thread pool executor" (i.e. `Temporalio::Worker::ActivityExecutor::ThreadPool`).
+This default is shared across all workers and is a naive thread pool that creates threads as needed when none are
+idle/available to handle incoming work. If a thread sits idle long enough, it will be killed.
+
+The maximum number of concurrent activities a worker will run at a time is configured via its `tuner` option. The
+default is `Temporalio::Worker::Tuner.create_fixed`, which defaults to 100 activities at a time for that worker.
When +this value is reached, the worker will stop asking for work from the server until there are slots available again. + +In addition to the thread pool executor, there is also a fiber executor in the default executor set. To use fibers, call +`activity_executor :fiber` class method at the top of the activity class (the default of this value is `:default` which +is the thread pool executor). Activities can only choose the fiber executor if they create and run the worker in a +fiber, but thread pool executor is always available. Currently due to +[an issue](https://github.com/temporalio/sdk-ruby/issues/162), workers can only run in a fiber on Ruby versions 3.3 and +newer. + +Technically the executor can be customized. The `activity_executors` worker option accepts a hash with the key as the +symbol and the value as a `Temporalio::Worker::ActivityExecutor` implementation. Users should usually not need to +customize this. If general code is needed to run around activities, users should use interceptors instead. + +#### Activity Testing + +TODO: https://github.com/temporalio/sdk-ruby/issues/167 ## Development @@ -38,7 +454,19 @@ This project uses `minitest`. To test: Can add options via `TESTOPTS`. E.g. single test: - bundle exec rake test TESTOPTS="--name=test_start_workflows_async" + bundle exec rake test TESTOPTS="--name=test_some_method" + +E.g. all starting with prefix: + + bundle exec rake test TESTOPTS="--name=/^test_some_method_prefix/" + +E.g. all for a class: + + bundle exec rake test TESTOPTS="--name=/SomeClassName/" + +E.g. 
show all test names while executing: + + bundle exec rake test TESTOPTS="--verbose" ### Code Formatting and Type Checking diff --git a/temporalio/.rubocop.yml b/temporalio/.rubocop.yml index 3003a1e5..900c7b1c 100644 --- a/temporalio/.rubocop.yml +++ b/temporalio/.rubocop.yml @@ -9,6 +9,7 @@ AllCops: Exclude: - ext/**/* - lib/temporalio/api/**/* + - lib/temporalio/internal/bridge/api/**/* - target/**/* - tmp/**/* - vendor/**/* @@ -24,6 +25,11 @@ Gemspec/DevelopmentDependencies: Layout/ClassStructure: Enabled: true +# Don't need super for activities +Lint/MissingSuper: + AllowedParentClasses: + - Temporalio::Activity + # The default is too small and triggers simply setting lots of values on a proto Metrics/AbcSize: Max: 200 @@ -52,9 +58,17 @@ Metrics/ModuleLength: Metrics/PerceivedComplexity: Max: 25 +# We want classes to be documented +Style/Documentation: + Enabled: true + Exclude: + - lib/temporalio/internal/**/* + # We want methods to be documented Style/DocumentationMethod: Enabled: true + Exclude: + - lib/temporalio/internal/**/* # Ok to have global vars in tests Style/GlobalVars: diff --git a/temporalio/Cargo.lock b/temporalio/Cargo.lock index d7781be6..6b232759 100644 --- a/temporalio/Cargo.lock +++ b/temporalio/Cargo.lock @@ -178,18 +178,17 @@ checksum = "0c4b4d0bd25bd0b74681c0ad21497610ce1b7c91b1022cd21c80c6fbdd9476b0" [[package]] name = "axum" -version = "0.6.20" +version = "0.7.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3b829e4e32b91e643de6eafe82b1d90675f5874230191a4ffbc1b336dec4d6bf" +checksum = "504e3947307ac8326a5437504c517c4b56716c9d98fac0028c2acc7ca47d70ae" dependencies = [ "async-trait", "axum-core", - "bitflags 1.3.2", "bytes", "futures-util", - "http 0.2.12", - "http-body 0.4.6", - "hyper 0.14.30", + "http", + "http-body", + "http-body-util", "itoa", "matchit", "memchr", @@ -198,25 +197,28 @@ dependencies = [ "pin-project-lite", "rustversion", "serde", - "sync_wrapper 0.1.2", - "tower", + "sync_wrapper 
1.0.1", + "tower 0.5.1", "tower-layer", "tower-service", ] [[package]] name = "axum-core" -version = "0.3.4" +version = "0.4.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "759fa577a247914fd3f7f76d62972792636412fbfd634cd452f6a385a74d2d2c" +checksum = "09f2bd6146b97ae3359fa0cc6d6b376d9539582c7b4220f041a33ec24c226199" dependencies = [ "async-trait", "bytes", "futures-util", - "http 0.2.12", - "http-body 0.4.6", + "http", + "http-body", + "http-body-util", "mime", + "pin-project-lite", "rustversion", + "sync_wrapper 1.0.1", "tower-layer", "tower-service", ] @@ -482,22 +484,22 @@ checksum = "d3fd119d74b830634cea2a0f58bbd0d54540518a14397557951e79340abc28c0" [[package]] name = "console-api" -version = "0.6.0" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fd326812b3fd01da5bb1af7d340d0d555fd3d4b641e7f1dfcf5962a902952787" +checksum = "86ed14aa9c9f927213c6e4f3ef75faaad3406134efe84ba2cb7983431d5f0931" dependencies = [ "futures-core", "prost", "prost-types", - "tonic 0.10.2", + "tonic", "tracing-core", ] [[package]] name = "console-subscriber" -version = "0.2.0" +version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7481d4c57092cd1c19dd541b92bdce883de840df30aa5d03fd48a3935c01842e" +checksum = "e2e3a111a37f3333946ebf9da370ba5c5577b18eb342ec683eb488dd21980302" dependencies = [ "console-api", "crossbeam-channel", @@ -505,13 +507,15 @@ dependencies = [ "futures-task", "hdrhistogram", "humantime", + "hyper-util", + "prost", "prost-types", "serde", "serde_json", "thread_local", "tokio", "tokio-stream", - "tonic 0.10.2", + "tonic", "tracing", "tracing-core", "tracing-subscriber", @@ -523,12 +527,6 @@ version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f7144d30dcf0fafbce74250a3963025d8d52177934239851c917d29f1df280c2" -[[package]] -name = "convert_case" -version = "0.4.0" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "6245d59a3e82a7fc217c5828a6692dbc6dfb63a0c8c90495621f7b9d79704a0e" - [[package]] name = "core-foundation" version = "0.9.4" @@ -721,6 +719,20 @@ dependencies = [ "parking_lot_core", ] +[[package]] +name = "dashmap" +version = "6.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5041cc499144891f3790297212f32a74fb938e5136a14943f338ef9e0ae276cf" +dependencies = [ + "cfg-if", + "crossbeam-utils", + "hashbrown 0.14.5", + "lock_api", + "once_cell", + "parking_lot_core", +] + [[package]] name = "deflate64" version = "0.1.9" @@ -780,15 +792,23 @@ dependencies = [ [[package]] name = "derive_more" -version = "0.99.18" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4a9b99b9cbbe49445b21764dc0625032a89b145a2642e67603e1c936f5458d05" +dependencies = [ + "derive_more-impl", +] + +[[package]] +name = "derive_more-impl" +version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5f33878137e4dafd7fa914ad4e259e18a4e8e532b9617a2d0150262bf53abfce" +checksum = "cb7330aeadfbe296029522e6c40f315320aba36fc43a5b3632f3795348f3bd22" dependencies = [ - "convert_case", "proc-macro2", "quote", - "rustc_version", "syn", + "unicode-xid", ] [[package]] @@ -1090,7 +1110,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "68a7f542ee6b35af73b06abc0dad1c1bae89964e4e253bc4b587b91c9637867b" dependencies = [ "cfg-if", - "dashmap", + "dashmap 5.5.3", "futures", "futures-timer", "no-std-compat", @@ -1103,25 +1123,6 @@ dependencies = [ "spinning_top", ] -[[package]] -name = "h2" -version = "0.3.26" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "81fe527a889e1532da5c525686d96d4c2e74cdd345badf8dfef9f6b39dd5f5e8" -dependencies = [ - "bytes", - "fnv", - "futures-core", - "futures-sink", - "futures-util", - "http 0.2.12", - "indexmap 2.2.6", - "slab", - "tokio", - 
"tokio-util", - "tracing", -] - [[package]] name = "h2" version = "0.4.5" @@ -1133,8 +1134,8 @@ dependencies = [ "fnv", "futures-core", "futures-sink", - "http 1.1.0", - "indexmap 2.2.6", + "http", + "indexmap 2.6.0", "slab", "tokio", "tokio-util", @@ -1167,6 +1168,12 @@ dependencies = [ "allocator-api2", ] +[[package]] +name = "hashbrown" +version = "0.15.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e087f84d4f86bf4b218b927129862374b72199ae7d8657835f1e89000eea4fb" + [[package]] name = "hdrhistogram" version = "7.5.4" @@ -1201,17 +1208,6 @@ dependencies = [ "digest", ] -[[package]] -name = "http" -version = "0.2.12" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "601cbb57e577e2f5ef5be8e7b83f0f63994f25aa94d673e54a92d5c516d101f1" -dependencies = [ - "bytes", - "fnv", - "itoa", -] - [[package]] name = "http" version = "1.1.0" @@ -1223,17 +1219,6 @@ dependencies = [ "itoa", ] -[[package]] -name = "http-body" -version = "0.4.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7ceab25649e9960c0311ea418d17bee82c0dcec1bd053b5f9a66e265a693bed2" -dependencies = [ - "bytes", - "http 0.2.12", - "pin-project-lite", -] - [[package]] name = "http-body" version = "1.0.1" @@ -1241,7 +1226,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1efedce1fb8e6913f23e0c92de8e62cd5b772a67e7b3946df930a62566c93184" dependencies = [ "bytes", - "http 1.1.0", + "http", ] [[package]] @@ -1252,8 +1237,8 @@ checksum = "793429d76616a256bcb62c2a2ec2bed781c8307e797e2598c50010f2bee2544f" dependencies = [ "bytes", "futures-util", - "http 1.1.0", - "http-body 1.0.1", + "http", + "http-body", "pin-project-lite", ] @@ -1275,30 +1260,6 @@ version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9a3a5bfb195931eeb336b2a7b4d761daec841b97f947d34394601737a7bba5e4" -[[package]] -name = "hyper" -version = "0.14.30" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "a152ddd61dfaec7273fe8419ab357f33aee0d914c5f4efbf0d96fa749eea5ec9" -dependencies = [ - "bytes", - "futures-channel", - "futures-core", - "futures-util", - "h2 0.3.26", - "http 0.2.12", - "http-body 0.4.6", - "httparse", - "httpdate", - "itoa", - "pin-project-lite", - "socket2", - "tokio", - "tower-service", - "tracing", - "want", -] - [[package]] name = "hyper" version = "1.4.1" @@ -1308,9 +1269,9 @@ dependencies = [ "bytes", "futures-channel", "futures-util", - "h2 0.4.5", - "http 1.1.0", - "http-body 1.0.1", + "h2", + "http", + "http-body", "httparse", "httpdate", "itoa", @@ -1327,27 +1288,28 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5ee4be2c948921a1a5320b629c4193916ed787a7f7f293fd3f7f5a6c9de74155" dependencies = [ "futures-util", - "http 1.1.0", - "hyper 1.4.1", + "http", + "hyper", "hyper-util", - "rustls 0.23.12", + "rustls", + "rustls-native-certs 0.7.1", "rustls-pki-types", "tokio", - "tokio-rustls 0.26.0", + "tokio-rustls", "tower-service", - "webpki-roots", ] [[package]] name = "hyper-timeout" -version = "0.4.1" +version = "0.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bbb958482e8c7be4bc3cf272a766a2b0bf1a6755e7a6ae777f017a31d11b13b1" +checksum = "3203a961e5c83b6f5498933e78b6b263e208c197b63e9c6c53cc82ffd3f63793" dependencies = [ - "hyper 0.14.30", + "hyper", + "hyper-util", "pin-project-lite", "tokio", - "tokio-io-timeout", + "tower-service", ] [[package]] @@ -1359,13 +1321,13 @@ dependencies = [ "bytes", "futures-channel", "futures-util", - "http 1.1.0", - "http-body 1.0.1", - "hyper 1.4.1", + "http", + "http-body", + "hyper", "pin-project-lite", "socket2", "tokio", - "tower", + "tower 0.4.13", "tower-service", "tracing", ] @@ -1398,12 +1360,12 @@ dependencies = [ [[package]] name = "indexmap" -version = "2.2.6" +version = "2.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"168fb715dda47215e360912c096649d23d58bf392ac62f73919e831745e40f26" +checksum = "707907fe3c25f5424cce2cb7e1cbcafee6bdbe735ca90ef77c29e84591e5b9da" dependencies = [ "equivalent", - "hashbrown 0.14.5", + "hashbrown 0.15.0", ] [[package]] @@ -1658,14 +1620,13 @@ dependencies = [ [[package]] name = "mockall" -version = "0.12.1" +version = "0.13.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "43766c2b5203b10de348ffe19f7e54564b64f3d6018ff7648d1e2d6d3a0f0a48" +checksum = "d4c28b3fb6d753d28c20e826cd46ee611fda1cf3cde03a443a974043247c065a" dependencies = [ "cfg-if", "downcast", "fragile", - "lazy_static", "mockall_derive", "predicates", "predicates-tree", @@ -1673,9 +1634,9 @@ dependencies = [ [[package]] name = "mockall_derive" -version = "0.12.1" +version = "0.13.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "af7cbce79ec385a1d4f54baa90a76401eb15d9cab93685f62e7e9f942aa00ae2" +checksum = "341014e7f530314e9a1fdbc7400b244efea7122662c96bfa248c31da5bfb2020" dependencies = [ "cfg-if", "proc-macro2", @@ -1774,9 +1735,9 @@ checksum = "ff011a302c396a5197692431fc1948019154afc178baf7d8e37367442a4601cf" [[package]] name = "opentelemetry" -version = "0.23.0" +version = "0.24.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1b69a91d4893e713e06f724597ad630f1fa76057a5e1026c0ca67054a9032a76" +checksum = "4c365a63eec4f55b7efeceb724f1336f26a9cf3427b70e59e2cd2a5b947fba96" dependencies = [ "futures-core", "futures-sink", @@ -1788,27 +1749,27 @@ dependencies = [ [[package]] name = "opentelemetry-otlp" -version = "0.16.0" +version = "0.17.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a94c69209c05319cdf7460c6d4c055ed102be242a0a6245835d7bc42c6ec7f54" +checksum = "6b925a602ffb916fb7421276b86756027b37ee708f9dce2dbdcc51739f07e727" dependencies = [ "async-trait", "futures-core", - "http 0.2.12", + "http", "opentelemetry", "opentelemetry-proto", "opentelemetry_sdk", 
"prost", "thiserror", "tokio", - "tonic 0.11.0", + "tonic", ] [[package]] name = "opentelemetry-prometheus" -version = "0.16.0" +version = "0.17.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5e1a24eafe47b693cb938f8505f240dc26c71db60df9aca376b4f857e9653ec7" +checksum = "cc4191ce34aa274621861a7a9d68dbcf618d5b6c66b10081631b61fd81fbc015" dependencies = [ "once_cell", "opentelemetry", @@ -1819,47 +1780,37 @@ dependencies = [ [[package]] name = "opentelemetry-proto" -version = "0.6.0" +version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "984806e6cf27f2b49282e2a05e288f30594f3dbc74eb7a6e99422bc48ed78162" +checksum = "30ee9f20bff9c984511a02f082dc8ede839e4a9bf15cc2487c8d6fea5ad850d9" dependencies = [ "opentelemetry", "opentelemetry_sdk", "prost", - "tonic 0.11.0", + "tonic", ] [[package]] name = "opentelemetry_sdk" -version = "0.23.0" +version = "0.24.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ae312d58eaa90a82d2e627fd86e075cf5230b3f11794e2ed74199ebbe572d4fd" +checksum = "692eac490ec80f24a17828d49b40b60f5aeaccdfe6a503f939713afd22bc28df" dependencies = [ "async-trait", "futures-channel", "futures-executor", "futures-util", "glob", - "lazy_static", "once_cell", "opentelemetry", - "ordered-float", "percent-encoding", "rand", + "serde_json", "thiserror", "tokio", "tokio-stream", ] -[[package]] -name = "ordered-float" -version = "4.2.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "19ff2cf528c6c03d9ed653d6c4ce1dc0582dc4af309790ad92f07c1cd551b0be" -dependencies = [ - "num-traits", -] - [[package]] name = "overload" version = "0.1.1" @@ -1918,7 +1869,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b4c5cc86750666a3ed20bdaf5ca2a0344f9c67674cae0515bec2da16fbaa47db" dependencies = [ "fixedbitset", - "indexmap 2.2.6", + "indexmap 2.6.0", ] [[package]] @@ -2050,6 +2001,15 @@ dependencies = [ "syn", ] 
+[[package]] +name = "proc-macro-crate" +version = "3.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8ecf48c7ca261d60b74ab1a7b20da18bede46776b2e55535cb958eb595c5fa7b" +dependencies = [ + "toml_edit", +] + [[package]] name = "proc-macro2" version = "1.0.86" @@ -2076,9 +2036,9 @@ dependencies = [ [[package]] name = "prost" -version = "0.12.6" +version = "0.13.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "deb1435c188b76130da55f17a466d252ff7b1418b2ad3e037d127b94e3411f29" +checksum = "7b0487d90e047de87f984913713b85c601c05609aad5b0df4b4573fbf69aa13f" dependencies = [ "bytes", "prost-derive", @@ -2086,13 +2046,13 @@ dependencies = [ [[package]] name = "prost-build" -version = "0.12.6" +version = "0.13.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "22505a5c94da8e3b7c2996394d1c933236c4d743e81a410bcca4e6989fc066a4" +checksum = "0c1318b19085f08681016926435853bbf7858f9c082d0999b80550ff5d9abe15" dependencies = [ "bytes", "heck", - "itertools 0.12.1", + "itertools 0.13.0", "log", "multimap", "once_cell", @@ -2107,12 +2067,12 @@ dependencies = [ [[package]] name = "prost-derive" -version = "0.12.6" +version = "0.13.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "81bddcdb20abf9501610992b6759a4c888aef7d1a7247ef75e2404275ac24af1" +checksum = "e9552f850d5f0964a4e4d0bf306459ac29323ddfbae05e35a7c0d35cb0803cc5" dependencies = [ "anyhow", - "itertools 0.12.1", + "itertools 0.13.0", "proc-macro2", "quote", "syn", @@ -2120,18 +2080,18 @@ dependencies = [ [[package]] name = "prost-types" -version = "0.12.6" +version = "0.13.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9091c90b0a32608e984ff2fa4091273cbdd755d54935c51d520887f4a1dbd5b0" +checksum = "4759aa0d3a6232fb8dbdb97b61de2c20047c68aca932c7ed76da9d788508d670" dependencies = [ "prost", ] [[package]] name = "prost-wkt" -version = "0.5.1" +version = "0.6.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "5fb7ec2850c138ebaa7ab682503b5d08c3cb330343e9c94776612928b6ddb53f" +checksum = "a8d84e2bee181b04c2bac339f2bfe818c46a99750488cc6728ce4181d5aa8299" dependencies = [ "chrono", "inventory", @@ -2144,9 +2104,9 @@ dependencies = [ [[package]] name = "prost-wkt-build" -version = "0.5.1" +version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "598b7365952c2ed4e32902de0533653aafbe5ae3da436e8e2335c7d375a1cef3" +checksum = "8a669d5acbe719010c6f62a64e6d7d88fdedc1fe46e419747949ecb6312e9b14" dependencies = [ "heck", "prost", @@ -2157,9 +2117,9 @@ dependencies = [ [[package]] name = "prost-wkt-types" -version = "0.5.1" +version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1a8eadc2381640a49c1fbfb9f4a857794b4e5bf5a2cbc2d858cfdb74f64dcd22" +checksum = "01ef068e9b82e654614b22e6b13699bd545b6c0e2e721736008b00b38aeb4f64" dependencies = [ "chrono", "prost", @@ -2205,7 +2165,7 @@ dependencies = [ "quinn-proto", "quinn-udp", "rustc-hash", - "rustls 0.23.12", + "rustls", "thiserror", "tokio", "tracing", @@ -2221,7 +2181,7 @@ dependencies = [ "rand", "ring", "rustc-hash", - "rustls 0.23.12", + "rustls", "slab", "thiserror", "tinyvec", @@ -2416,10 +2376,10 @@ dependencies = [ "bytes", "futures-core", "futures-util", - "http 1.1.0", - "http-body 1.0.1", + "http", + "http-body", "http-body-util", - "hyper 1.4.1", + "hyper", "hyper-rustls", "hyper-util", "ipnet", @@ -2430,7 +2390,8 @@ dependencies = [ "percent-encoding", "pin-project-lite", "quinn", - "rustls 0.23.12", + "rustls", + "rustls-native-certs 0.7.1", "rustls-pemfile", "rustls-pki-types", "serde", @@ -2438,7 +2399,7 @@ dependencies = [ "serde_urlencoded", "sync_wrapper 1.0.1", "tokio", - "tokio-rustls 0.26.0", + "tokio-rustls", "tokio-util", "tower-service", "url", @@ -2446,7 +2407,6 @@ dependencies = [ "wasm-bindgen-futures", "wasm-streams", "web-sys", - "webpki-roots", "winreg", ] @@ 
-2498,9 +2458,9 @@ dependencies = [ [[package]] name = "rstest" -version = "0.19.0" +version = "0.22.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9d5316d2a1479eeef1ea21e7f9ddc67c191d497abc8fc3ba2467857abbb68330" +checksum = "7b423f0e62bdd61734b67cd21ff50871dfaeb9cc74f869dcd6af974fbcb19936" dependencies = [ "futures", "futures-timer", @@ -2510,12 +2470,13 @@ dependencies = [ [[package]] name = "rstest_macros" -version = "0.19.0" +version = "0.22.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "04a9df72cc1f67020b0d63ad9bfe4a323e459ea7eb68e03bd9824db49f9a4c25" +checksum = "c5e1711e7d14f74b12a58411c542185ef7fb7f2e7f8ee6e2940a883628522b42" dependencies = [ "cfg-if", "glob", + "proc-macro-crate", "proc-macro2", "quote", "regex", @@ -2585,11 +2546,12 @@ dependencies = [ [[package]] name = "rustls" -version = "0.22.4" +version = "0.23.12" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bf4ef73721ac7bcd79b2b315da7779d8fc09718c6b3d2d1b2d94850eb8c18432" +checksum = "c58f8c84392efc0a126acce10fa59ff7b3d2ac06ab451a33f2741989b806b044" dependencies = [ "log", + "once_cell", "ring", "rustls-pki-types", "rustls-webpki", @@ -2598,24 +2560,23 @@ dependencies = [ ] [[package]] -name = "rustls" -version = "0.23.12" +name = "rustls-native-certs" +version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c58f8c84392efc0a126acce10fa59ff7b3d2ac06ab451a33f2741989b806b044" +checksum = "a88d6d420651b496bdd98684116959239430022a115c1240e6c3993be0b15fba" dependencies = [ - "once_cell", - "ring", + "openssl-probe", + "rustls-pemfile", "rustls-pki-types", - "rustls-webpki", - "subtle", - "zeroize", + "schannel", + "security-framework", ] [[package]] name = "rustls-native-certs" -version = "0.7.1" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a88d6d420651b496bdd98684116959239430022a115c1240e6c3993be0b15fba" 
+checksum = "fcaf18a4f2be7326cd874a5fa579fae794320a0f388d365dca7e480e55f83f8a" dependencies = [ "openssl-probe", "rustls-pemfile", @@ -2913,16 +2874,14 @@ checksum = "a7065abeca94b6a8a577f9bd45aa0867a2238b74e8eb67cf10d492bc39351394" [[package]] name = "sysinfo" -version = "0.30.13" +version = "0.31.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0a5b4ddaee55fb2bea2bf0e5000747e5f5c0de765e5a5ff87f4cd106439f4bb3" +checksum = "355dbe4f8799b304b05e1b0f05fc59b2a18d36645cf169607da45bde2f69a1be" dependencies = [ - "cfg-if", "core-foundation-sys", "libc", + "memchr", "ntapi", - "once_cell", - "rayon", "windows", ] @@ -2962,8 +2921,10 @@ dependencies = [ "derive_more", "futures", "futures-retry", - "http 0.2.12", - "hyper 0.14.30", + "http", + "http-body-util", + "hyper", + "hyper-util", "mockall", "once_cell", "opentelemetry", @@ -2974,8 +2935,8 @@ dependencies = [ "temporal-sdk-core-protos", "thiserror", "tokio", - "tonic 0.11.0", - "tower", + "tonic", + "tower 0.5.1", "tracing", "url", "uuid", @@ -3018,7 +2979,7 @@ dependencies = [ "crossbeam-channel", "crossbeam-queue", "crossbeam-utils", - "dashmap", + "dashmap 6.1.0", "derive_builder", "derive_more", "enum-iterator", @@ -3028,7 +2989,7 @@ dependencies = [ "futures-util", "governor", "http-body-util", - "hyper 1.4.1", + "hyper", "hyper-util", "itertools 0.13.0", "lru", @@ -3064,7 +3025,7 @@ dependencies = [ "tokio", "tokio-stream", "tokio-util", - "tonic 0.11.0", + "tonic", "tonic-build", "tracing", "tracing-subscriber", @@ -3085,7 +3046,7 @@ dependencies = [ "serde_json", "temporal-sdk-core-protos", "thiserror", - "tonic 0.11.0", + "tonic", "tracing-core", "url", ] @@ -3105,7 +3066,7 @@ dependencies = [ "serde", "serde_json", "thiserror", - "tonic 0.11.0", + "tonic", "tonic-build", "uuid", ] @@ -3144,6 +3105,7 @@ dependencies = [ name = "temporalio_bridge" version = "0.1.0" dependencies = [ + "futures", "magnus", "parking_lot", "prost", @@ -3153,7 +3115,7 @@ dependencies = [ 
"temporal-sdk-core-api", "temporal-sdk-core-protos", "tokio", - "tonic 0.11.0", + "tonic", "tracing", "url", ] @@ -3266,16 +3228,6 @@ dependencies = [ "windows-sys 0.52.0", ] -[[package]] -name = "tokio-io-timeout" -version = "1.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "30b74022ada614a1b4834de765f9bb43877f910cc8ce4be40e89042c9223a8bf" -dependencies = [ - "pin-project-lite", - "tokio", -] - [[package]] name = "tokio-macros" version = "2.4.0" @@ -3287,33 +3239,22 @@ dependencies = [ "syn", ] -[[package]] -name = "tokio-rustls" -version = "0.25.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "775e0c0f0adb3a2f22a00c4745d728b479985fc15ee7ca6a2608388c5569860f" -dependencies = [ - "rustls 0.22.4", - "rustls-pki-types", - "tokio", -] - [[package]] name = "tokio-rustls" version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c7bc40d0e5a97695bb96e27995cd3a08538541b0a846f65bba7a359f36700d4" dependencies = [ - "rustls 0.23.12", + "rustls", "rustls-pki-types", "tokio", ] [[package]] name = "tokio-stream" -version = "0.1.15" +version = "0.1.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "267ac89e0bec6e691e5813911606935d77c476ff49024f98abcea3e7b15e37af" +checksum = "4f4e6ce100d0eb49a2734f8c0812bcd324cf357d21810932c5df6b96ef2b86f1" dependencies = [ "futures-core", "pin-project-lite", @@ -3347,20 +3288,20 @@ dependencies = [ [[package]] name = "toml_datetime" -version = "0.6.7" +version = "0.6.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f8fb9f64314842840f1d940ac544da178732128f1c78c21772e876579e0da1db" +checksum = "0dd7358ecb8fc2f8d014bf86f6f638ce72ba252a2c3a2572f2a795f1d23efb41" dependencies = [ "serde", ] [[package]] name = "toml_edit" -version = "0.22.17" +version = "0.22.22" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"8d9f8729f5aea9562aac1cc0441f5d6de3cff1ee0c5d67293eeca5eb36ee7c16" +checksum = "4ae48d6208a266e853d946088ed816055e556cc6028c5e8e2b84d9fa5dd7c7f5" dependencies = [ - "indexmap 2.2.6", + "indexmap 2.6.0", "serde", "serde_spanned", "toml_datetime", @@ -3369,57 +3310,32 @@ dependencies = [ [[package]] name = "tonic" -version = "0.10.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d560933a0de61cf715926b9cac824d4c883c2c43142f787595e48280c40a1d0e" -dependencies = [ - "async-stream", - "async-trait", - "axum", - "base64 0.21.7", - "bytes", - "h2 0.3.26", - "http 0.2.12", - "http-body 0.4.6", - "hyper 0.14.30", - "hyper-timeout", - "percent-encoding", - "pin-project", - "prost", - "tokio", - "tokio-stream", - "tower", - "tower-layer", - "tower-service", - "tracing", -] - -[[package]] -name = "tonic" -version = "0.11.0" +version = "0.12.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "76c4eb7a4e9ef9d4763600161f12f5070b92a578e1b634db88a6887844c91a13" +checksum = "877c5b330756d856ffcc4553ab34a5684481ade925ecc54bcd1bf02b1d0d4d52" dependencies = [ "async-stream", "async-trait", "axum", - "base64 0.21.7", + "base64 0.22.1", "bytes", - "h2 0.3.26", - "http 0.2.12", - "http-body 0.4.6", - "hyper 0.14.30", + "h2", + "http", + "http-body", + "http-body-util", + "hyper", "hyper-timeout", + "hyper-util", "percent-encoding", "pin-project", "prost", - "rustls-native-certs", + "rustls-native-certs 0.8.0", "rustls-pemfile", - "rustls-pki-types", + "socket2", "tokio", - "tokio-rustls 0.25.0", + "tokio-rustls", "tokio-stream", - "tower", + "tower 0.4.13", "tower-layer", "tower-service", "tracing", @@ -3427,13 +3343,14 @@ dependencies = [ [[package]] name = "tonic-build" -version = "0.11.0" +version = "0.12.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "be4ef6dd70a610078cb4e338a0f79d06bc759ff1b22d2120c2ff02ae264ba9c2" +checksum = 
"9557ce109ea773b399c9b9e5dca39294110b74f1f342cb347a80d1fce8c26a11" dependencies = [ "prettyplease", "proc-macro2", "prost-build", + "prost-types", "quote", "syn", ] @@ -3458,17 +3375,31 @@ dependencies = [ "tracing", ] +[[package]] +name = "tower" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2873938d487c3cfb9aed7546dc9f2711d867c9f90c46b889989a2cb84eba6b4f" +dependencies = [ + "futures-core", + "futures-util", + "pin-project-lite", + "sync_wrapper 0.1.2", + "tower-layer", + "tower-service", +] + [[package]] name = "tower-layer" -version = "0.3.2" +version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c20c8dbed6283a09604c3e69b4b7eeb54e298b8a600d4d5ecb5ad39de609f1d0" +checksum = "121c2a6cda46980bb0fcd1647ffaf6cd3fc79a013de288782836f6df9c48780e" [[package]] name = "tower-service" -version = "0.3.2" +version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b6bc1c9ce2b5135ac7f93c72918fc37feb872bdc6a5533a8b85eb4b86bfdae52" +checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3" [[package]] name = "tracing" @@ -3476,7 +3407,6 @@ version = "0.1.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef" dependencies = [ - "log", "pin-project-lite", "tracing-attributes", "tracing-core", @@ -3611,6 +3541,12 @@ dependencies = [ "tinyvec", ] +[[package]] +name = "unicode-xid" +version = "0.2.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853" + [[package]] name = "untrusted" version = "0.9.0" @@ -3769,15 +3705,6 @@ dependencies = [ "wasm-bindgen", ] -[[package]] -name = "webpki-roots" -version = "0.26.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"bd7c23921eeb1713a4e851530e9b9756e4fb0e89978582942612524cf09f01cd" -dependencies = [ - "rustls-pki-types", -] - [[package]] name = "winapi" version = "0.3.9" @@ -3811,9 +3738,9 @@ checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" [[package]] name = "windows" -version = "0.52.0" +version = "0.57.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e48a53791691ab099e5e2ad123536d0fff50652600abaf43bbf952894110d0be" +checksum = "12342cb4d8e3b046f3d80effd474a7a02447231330ef77d71daa6fbc40681143" dependencies = [ "windows-core", "windows-targets 0.52.6", @@ -3821,9 +3748,43 @@ dependencies = [ [[package]] name = "windows-core" -version = "0.52.0" +version = "0.57.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d2ed2439a290666cd67ecce2b0ffaad89c2a56b976b736e6ece670297897832d" +dependencies = [ + "windows-implement", + "windows-interface", + "windows-result", + "windows-targets 0.52.6", +] + +[[package]] +name = "windows-implement" +version = "0.57.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9107ddc059d5b6fbfbffdfa7a7fe3e22a226def0b2608f72e9d552763d3e1ad7" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "windows-interface" +version = "0.57.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "29bee4b38ea3cde66011baa44dba677c432a78593e202392d1e9070cf2a7fca7" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "windows-result" +version = "0.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "33ab640c8d7e35bf8ba19b884ba838ceb4fba93a4e8c65a9059d08afcfc683d9" +checksum = "5e383302e8ec8515204254685643de10811af0ed97ea37210dc26fb0032647f8" dependencies = [ "windows-targets 0.52.6", ] @@ -3969,9 +3930,9 @@ checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" [[package]] name = "winnow" -version = "0.6.16" +version 
= "0.6.20" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b480ae9340fc261e6be3e95a1ba86d54ae3f9171132a73ce8d4bbaf68339507c" +checksum = "36c1fec1a2bb5866f07c25f68c26e565c4c200aebb96d7e55710c19d3e8ac49b" dependencies = [ "memchr", ] @@ -4053,7 +4014,7 @@ dependencies = [ "displaydoc", "flate2", "hmac", - "indexmap 2.2.6", + "indexmap 2.6.0", "lzma-rs", "memchr", "pbkdf2", diff --git a/temporalio/Cargo.toml b/temporalio/Cargo.toml index 21ba7092..52aaeba4 100644 --- a/temporalio/Cargo.toml +++ b/temporalio/Cargo.toml @@ -13,13 +13,13 @@ license-file = "LICENSE" [workspace.dependencies] derive_builder = "0.20" -derive_more = { version = "0.99", default-features = false, features = ["constructor", "display", "from", "into"] } +derive_more = { version = "1.0", features = ["constructor", "display", "from", "into", "debug"] } once_cell = "1.16" -tonic = "0.11" -tonic-build = "0.11" -opentelemetry = "0.23" -prost = "0.12" -prost-types = "0.12" +tonic = "0.12" +tonic-build = "0.12" +opentelemetry = { version = "0.24", features = ["metrics"] } +prost = "0.13" +prost-types = "0.13" [workspace.lints.rust] unreachable_pub = "warn" \ No newline at end of file diff --git a/temporalio/Rakefile b/temporalio/Rakefile index 0d0f8ecb..6db5af16 100644 --- a/temporalio/Rakefile +++ b/temporalio/Rakefile @@ -38,7 +38,8 @@ module CustomizeYardWarnings # rubocop:disable Style/Documentation super rescue YARD::Parser::UndocumentableError # We ignore if it's an API warning - raise unless statement.last.file.start_with?('lib/temporalio/api/') + raise unless statement.last.file.start_with?('lib/temporalio/api/') || + statement.last.file.start_with?('lib/temporalio/internal/bridge/api/') end end @@ -265,7 +266,7 @@ namespace :proto do # Camel case to snake case rpc = method.name.gsub(/([A-Z])/, '_\1').downcase.delete_prefix('_') file.puts <<~TEXT - "#{rpc}" => rpc_call!(self, block, call, #{trait}, #{rpc}), + "#{rpc}" => rpc_call!(self, callback, call, #{trait}, 
#{rpc}), TEXT end file.puts <<~TEXT @@ -277,14 +278,17 @@ namespace :proto do file.puts <<~TEXT // Generated code. DO NOT EDIT! - use magnus::{block::Proc, value::Opaque, Error, Ruby}; + use magnus::{Error, Ruby}; use temporal_client::{CloudService, OperatorService, WorkflowService}; use super::{error, rpc_call}; - use crate::client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_WORKFLOW}; + use crate::{ + client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_WORKFLOW}, + util::AsyncCallback, + }; impl Client { - pub fn invoke_rpc(&self, service: u8, block: Opaque, call: RpcCall) -> Result<(), Error> { + pub fn invoke_rpc(&self, service: u8, callback: AsyncCallback, call: RpcCall) -> Result<(), Error> { match service { TEXT generate_rust_match_arm( @@ -313,6 +317,32 @@ namespace :proto do TEXT end sh 'cargo', 'fmt', '--', 'ext/src/client_rpc_generated.rs' + + # Generate core protos + FileUtils.rm_rf('lib/temporalio/internal/bridge/api') + # Generate API to temp dir + FileUtils.rm_rf('tmp-proto') + FileUtils.mkdir_p('tmp-proto') + sh 'bundle exec grpc_tools_ruby_protoc ' \ + '--proto_path=ext/sdk-core/sdk-core-protos/protos/api_upstream ' \ + '--proto_path=ext/sdk-core/sdk-core-protos/protos/local ' \ + '--ruby_out=tmp-proto ' \ + "#{Dir.glob('ext/sdk-core/sdk-core-protos/protos/local/**/*.proto').join(' ')}" + # Walk all generated Ruby files and cleanup content and filename + Dir.glob('tmp-proto/temporal/sdk/**/*.rb') do |path| + # Fix up the imports + content = File.read(path) + content.gsub!(%r{^require 'temporal/(.*)_pb'$}, "require 'temporalio/\\1'") + content.gsub!(%r{^require 'temporalio/sdk/core/(.*)'$}, "require 'temporalio/internal/bridge/api/\\1'") + File.write(path, content) + + # Remove _pb from the filename + FileUtils.mv(path, path.sub('_pb', '')) + end + # Move from temp dir and remove temp dir + FileUtils.mkdir_p('lib/temporalio/internal/bridge/api') + FileUtils.cp_r(Dir.glob('tmp-proto/temporal/sdk/core/*'), 
'lib/temporalio/internal/bridge/api') + FileUtils.rm_rf('tmp-proto') end end diff --git a/temporalio/Steepfile b/temporalio/Steepfile index 713f0f63..368288d5 100644 --- a/temporalio/Steepfile +++ b/temporalio/Steepfile @@ -7,7 +7,7 @@ target :lib do check 'lib', 'test' - ignore 'lib/temporalio/api' + ignore 'lib/temporalio/api', 'lib/temporalio/internal/bridge/api' library 'uri' diff --git a/temporalio/ext/Cargo.toml b/temporalio/ext/Cargo.toml index 9b3f79c1..d5c7ddad 100644 --- a/temporalio/ext/Cargo.toml +++ b/temporalio/ext/Cargo.toml @@ -10,15 +10,16 @@ publish = false crate-type = ["cdylib"] [dependencies] +futures = "0.3" magnus = "0.7" parking_lot = "0.12" -prost = "0.12" +prost = "0.13" rb-sys = "0.9" temporal-client = { version = "0.1.0", path = "./sdk-core/client" } temporal-sdk-core = { version = "0.1.0", path = "./sdk-core/core", features = ["ephemeral-server"] } temporal-sdk-core-api = { version = "0.1.0", path = "./sdk-core/core-api" } temporal-sdk-core-protos = { version = "0.1.0", path = "./sdk-core/sdk-core-protos" } tokio = "1.26" -tonic = "0.11" +tonic = "0.12" tracing = "0.1" url = "2.2" diff --git a/temporalio/ext/README.md b/temporalio/ext/README.md index d4c08c1f..993790db 100644 --- a/temporalio/ext/README.md +++ b/temporalio/ext/README.md @@ -25,19 +25,16 @@ So async calls usually looks like this: ``` queue = Queue.new - some_bridge_thing.do_foo { |result| Queue.push(result) } + some_bridge_thing.do_foo(queue) queue.pop ``` * In Rust, `do_foo` spawns some Tokio async thing and returns * Once Tokio async thing is completed, in the Ruby-thread callback, Rust side converts that thing to a Ruby thing and - invokes the block + pushes to the queue This allows Ruby to remain async if in a Fiber, because Ruby `queue.pop` does not block a thread when in a Fiber -context. - -The invocation of a block with a value is quite cheap in Ruby (`rb_proc_call_kw` C call). 
There are no obvious -performance savings by trying to push to a Ruby queue from inside Rust directly. +context. The invocation of a queue push with a value is quite cheap in Ruby. ## Argument Passing diff --git a/temporalio/ext/sdk-core b/temporalio/ext/sdk-core index 5e3b2749..8691fed9 160000 --- a/temporalio/ext/sdk-core +++ b/temporalio/ext/sdk-core @@ -1 +1 @@ -Subproject commit 5e3b2749e2040e5d60695b253b94b21b0075c14a +Subproject commit 8691fed95ffa1ea220ccb29b67acead76c116336 diff --git a/temporalio/ext/src/client.rs b/temporalio/ext/src/client.rs index e9315b43..e840f7a1 100644 --- a/temporalio/ext/src/client.rs +++ b/temporalio/ext/src/client.rs @@ -7,8 +7,8 @@ use temporal_client::{ }; use magnus::{ - block::Proc, class, function, method, prelude::*, scan_args, value::Opaque, DataTypeFunctions, - Error, RString, Ruby, TypedData, Value, + class, function, method, prelude::*, scan_args, DataTypeFunctions, Error, RString, Ruby, + TypedData, Value, }; use tonic::{metadata::MetadataKey, Status}; use url::Url; @@ -16,7 +16,7 @@ use url::Url; use super::{error, id, new_error, ROOT_MOD}; use crate::{ runtime::{Runtime, RuntimeHandle}, - util::Struct, + util::{AsyncCallback, Struct}, ROOT_ERR, }; use std::str::FromStr; @@ -36,7 +36,7 @@ pub fn init(ruby: &Ruby) -> Result<(), Error> { class.const_set("SERVICE_CLOUD", SERVICE_CLOUD)?; class.const_set("SERVICE_TEST", SERVICE_TEST)?; class.const_set("SERVICE_HEALTH", SERVICE_HEALTH)?; - class.define_singleton_method("async_new", function!(Client::async_new, 2))?; + class.define_singleton_method("async_new", function!(Client::async_new, 3))?; class.define_method("async_invoke_rpc", method!(Client::async_invoke_rpc, -1))?; let inner_class = class.define_error("RPCFailure", ruby.get_inner(&ROOT_ERR))?; @@ -52,22 +52,22 @@ type CoreClient = RetryClient #[magnus(class = "Temporalio::Internal::Bridge::Client", free_immediately)] pub struct Client { pub(crate) core: CoreClient, - runtime_handle: RuntimeHandle, + pub(crate) 
runtime_handle: RuntimeHandle, } #[macro_export] macro_rules! rpc_call { - ($client:ident, $block:ident, $call:ident, $trait:tt, $call_name:ident) => {{ + ($client:ident, $callback:ident, $call:ident, $trait:tt, $call_name:ident) => {{ if $call.retry { let mut core_client = $client.core.clone(); let req = $call.into_request()?; - crate::client::rpc_resp($client, $block, async move { + $crate::client::rpc_resp($client, $callback, async move { $trait::$call_name(&mut core_client, req).await }) } else { let mut core_client = $client.core.clone().into_inner(); let req = $call.into_request()?; - crate::client::rpc_resp($client, $block, async move { + $crate::client::rpc_resp($client, $callback, async move { $trait::$call_name(&mut core_client, req).await }) } @@ -75,7 +75,7 @@ macro_rules! rpc_call { } impl Client { - pub fn async_new(ruby: &Ruby, runtime: &Runtime, options: Struct) -> Result<(), Error> { + pub fn async_new(runtime: &Runtime, options: Struct, queue: Value) -> Result<(), Error> { // Build options let mut opts_build = ClientOptionsBuilder::default(); opts_build @@ -153,7 +153,7 @@ impl Client { .map_err(|err| error!("Invalid client options: {}", err))?; // Create client - let block = Opaque::from(ruby.block_proc()?); + let callback = AsyncCallback::from_queue(queue); let core_runtime = runtime.handle.core.clone(); let runtime_handle = runtime.handle.clone(); runtime.handle.spawn( @@ -163,24 +163,20 @@ impl Client { .await?; Ok(core) }, - move |ruby, result: Result| { - let block = ruby.get_inner(block); - let _: Value = match result { - Ok(core) => block.call((Client { - core, - runtime_handle, - },))?, - Err(err) => block.call((new_error!("Failed client connect: {}", err),))?, - }; - Ok(()) + move |_, result: Result| match result { + Ok(core) => callback.push(Client { + core, + runtime_handle, + }), + Err(err) => callback.push(new_error!("Failed client connect: {}", err)), }, ); Ok(()) } pub fn async_invoke_rpc(&self, args: &[Value]) -> Result<(), Error> 
{ - let args = scan_args::scan_args::<(), (), (), (), _, Proc>(args)?; - let (service, rpc, request, retry, metadata, timeout) = scan_args::get_kwargs::< + let args = scan_args::scan_args::<(), (), (), (), _, ()>(args)?; + let (service, rpc, request, retry, metadata, timeout, queue) = scan_args::get_kwargs::< _, ( u8, @@ -189,6 +185,7 @@ impl Client { bool, Option>, Option, + Value, ), (), (), @@ -201,6 +198,7 @@ impl Client { id!("rpc_retry"), id!("rpc_metadata"), id!("rpc_timeout"), + id!("queue"), ], &[], )? @@ -213,8 +211,8 @@ impl Client { timeout, _not_send_sync: PhantomData, }; - let block = Opaque::from(args.block); - self.invoke_rpc(service, block, call) + let callback = AsyncCallback::from_queue(queue); + self.invoke_rpc(service, callback, call) } } @@ -237,7 +235,7 @@ impl RpcFailure { } pub fn details(&self) -> Option { - if self.status.details().len() == 0 { + if self.status.details().is_empty() { None } else { Some(RString::from_slice(self.status.details())) @@ -281,7 +279,7 @@ impl RpcCall<'_> { pub(crate) fn rpc_resp

( client: &Client, - block: Opaque, + callback: AsyncCallback, fut: impl Future, tonic::Status>> + Send + 'static, ) -> Result<(), Error> where @@ -290,15 +288,13 @@ where { client.runtime_handle.spawn( async move { fut.await.map(|msg| msg.get_ref().encode_to_vec()) }, - move |ruby, result| { - let block = ruby.get_inner(block); - let _: Value = match result { + move |_, result| { + match result { // TODO(cretz): Any reasonable way to prevent byte copy that is just going to get decoded into proto // object? - Ok(val) => block.call((RString::from_slice(&val),))?, - Err(status) => block.call((RpcFailure { status },))?, - }; - Ok(()) + Ok(val) => callback.push(RString::from_slice(&val)), + Err(status) => callback.push(RpcFailure { status }), + } }, ); Ok(()) diff --git a/temporalio/ext/src/client_rpc_generated.rs b/temporalio/ext/src/client_rpc_generated.rs index 8641066b..de7a3cab 100644 --- a/temporalio/ext/src/client_rpc_generated.rs +++ b/temporalio/ext/src/client_rpc_generated.rs @@ -1,320 +1,380 @@ // Generated code. DO NOT EDIT! 
-use magnus::{block::Proc, value::Opaque, Error, Ruby}; +use magnus::{Error, Ruby}; use temporal_client::{CloudService, OperatorService, WorkflowService}; use super::{error, rpc_call}; -use crate::client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_WORKFLOW}; +use crate::{ + client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_WORKFLOW}, + util::AsyncCallback, +}; impl Client { - pub fn invoke_rpc(&self, service: u8, block: Opaque, call: RpcCall) -> Result<(), Error> { + pub fn invoke_rpc( + &self, + service: u8, + callback: AsyncCallback, + call: RpcCall, + ) -> Result<(), Error> { match service { SERVICE_WORKFLOW => match call.rpc.as_str() { "count_workflow_executions" => rpc_call!( self, - block, + callback, call, WorkflowService, count_workflow_executions ), - "create_schedule" => rpc_call!(self, block, call, WorkflowService, create_schedule), - "delete_schedule" => rpc_call!(self, block, call, WorkflowService, delete_schedule), + "create_schedule" => { + rpc_call!(self, callback, call, WorkflowService, create_schedule) + } + "delete_schedule" => { + rpc_call!(self, callback, call, WorkflowService, delete_schedule) + } "delete_workflow_execution" => rpc_call!( self, - block, + callback, call, WorkflowService, delete_workflow_execution ), "deprecate_namespace" => { - rpc_call!(self, block, call, WorkflowService, deprecate_namespace) - } - "describe_batch_operation" => { - rpc_call!(self, block, call, WorkflowService, describe_batch_operation) + rpc_call!(self, callback, call, WorkflowService, deprecate_namespace) } + "describe_batch_operation" => rpc_call!( + self, + callback, + call, + WorkflowService, + describe_batch_operation + ), "describe_namespace" => { - rpc_call!(self, block, call, WorkflowService, describe_namespace) + rpc_call!(self, callback, call, WorkflowService, describe_namespace) } "describe_schedule" => { - rpc_call!(self, block, call, WorkflowService, describe_schedule) + rpc_call!(self, callback, call, 
WorkflowService, describe_schedule) } "describe_task_queue" => { - rpc_call!(self, block, call, WorkflowService, describe_task_queue) + rpc_call!(self, callback, call, WorkflowService, describe_task_queue) } "describe_workflow_execution" => rpc_call!( self, - block, + callback, call, WorkflowService, describe_workflow_execution ), - "execute_multi_operation" => { - rpc_call!(self, block, call, WorkflowService, execute_multi_operation) - } + "execute_multi_operation" => rpc_call!( + self, + callback, + call, + WorkflowService, + execute_multi_operation + ), "get_cluster_info" => { - rpc_call!(self, block, call, WorkflowService, get_cluster_info) + rpc_call!(self, callback, call, WorkflowService, get_cluster_info) } "get_search_attributes" => { - rpc_call!(self, block, call, WorkflowService, get_search_attributes) + rpc_call!(self, callback, call, WorkflowService, get_search_attributes) + } + "get_system_info" => { + rpc_call!(self, callback, call, WorkflowService, get_system_info) } - "get_system_info" => rpc_call!(self, block, call, WorkflowService, get_system_info), "get_worker_build_id_compatibility" => rpc_call!( self, - block, + callback, call, WorkflowService, get_worker_build_id_compatibility ), "get_worker_task_reachability" => rpc_call!( self, - block, + callback, call, WorkflowService, get_worker_task_reachability ), "get_worker_versioning_rules" => rpc_call!( self, - block, + callback, call, WorkflowService, get_worker_versioning_rules ), "get_workflow_execution_history" => rpc_call!( self, - block, + callback, call, WorkflowService, get_workflow_execution_history ), "get_workflow_execution_history_reverse" => rpc_call!( self, - block, + callback, call, WorkflowService, get_workflow_execution_history_reverse ), "list_archived_workflow_executions" => rpc_call!( self, - block, + callback, call, WorkflowService, list_archived_workflow_executions ), "list_batch_operations" => { - rpc_call!(self, block, call, WorkflowService, list_batch_operations) + 
rpc_call!(self, callback, call, WorkflowService, list_batch_operations) } "list_closed_workflow_executions" => rpc_call!( self, - block, + callback, call, WorkflowService, list_closed_workflow_executions ), - "list_namespaces" => rpc_call!(self, block, call, WorkflowService, list_namespaces), + "list_namespaces" => { + rpc_call!(self, callback, call, WorkflowService, list_namespaces) + } "list_open_workflow_executions" => rpc_call!( self, - block, + callback, call, WorkflowService, list_open_workflow_executions ), "list_schedule_matching_times" => rpc_call!( self, - block, + callback, call, WorkflowService, list_schedule_matching_times ), - "list_schedules" => rpc_call!(self, block, call, WorkflowService, list_schedules), + "list_schedules" => { + rpc_call!(self, callback, call, WorkflowService, list_schedules) + } "list_task_queue_partitions" => rpc_call!( self, - block, + callback, call, WorkflowService, list_task_queue_partitions ), - "list_workflow_executions" => { - rpc_call!(self, block, call, WorkflowService, list_workflow_executions) - } - "patch_schedule" => rpc_call!(self, block, call, WorkflowService, patch_schedule), - "poll_activity_task_queue" => { - rpc_call!(self, block, call, WorkflowService, poll_activity_task_queue) + "list_workflow_executions" => rpc_call!( + self, + callback, + call, + WorkflowService, + list_workflow_executions + ), + "patch_schedule" => { + rpc_call!(self, callback, call, WorkflowService, patch_schedule) } + "poll_activity_task_queue" => rpc_call!( + self, + callback, + call, + WorkflowService, + poll_activity_task_queue + ), "poll_nexus_task_queue" => { - rpc_call!(self, block, call, WorkflowService, poll_nexus_task_queue) + rpc_call!(self, callback, call, WorkflowService, poll_nexus_task_queue) } "poll_workflow_execution_update" => rpc_call!( self, - block, + callback, call, WorkflowService, poll_workflow_execution_update ), - "poll_workflow_task_queue" => { - rpc_call!(self, block, call, WorkflowService, 
poll_workflow_task_queue) + "poll_workflow_task_queue" => rpc_call!( + self, + callback, + call, + WorkflowService, + poll_workflow_task_queue + ), + "query_workflow" => { + rpc_call!(self, callback, call, WorkflowService, query_workflow) } - "query_workflow" => rpc_call!(self, block, call, WorkflowService, query_workflow), "record_activity_task_heartbeat" => rpc_call!( self, - block, + callback, call, WorkflowService, record_activity_task_heartbeat ), "record_activity_task_heartbeat_by_id" => rpc_call!( self, - block, + callback, call, WorkflowService, record_activity_task_heartbeat_by_id ), "register_namespace" => { - rpc_call!(self, block, call, WorkflowService, register_namespace) + rpc_call!(self, callback, call, WorkflowService, register_namespace) } "request_cancel_workflow_execution" => rpc_call!( self, - block, + callback, call, WorkflowService, request_cancel_workflow_execution ), - "reset_sticky_task_queue" => { - rpc_call!(self, block, call, WorkflowService, reset_sticky_task_queue) - } - "reset_workflow_execution" => { - rpc_call!(self, block, call, WorkflowService, reset_workflow_execution) - } + "reset_sticky_task_queue" => rpc_call!( + self, + callback, + call, + WorkflowService, + reset_sticky_task_queue + ), + "reset_workflow_execution" => rpc_call!( + self, + callback, + call, + WorkflowService, + reset_workflow_execution + ), "respond_activity_task_canceled" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_activity_task_canceled ), "respond_activity_task_canceled_by_id" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_activity_task_canceled_by_id ), "respond_activity_task_completed" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_activity_task_completed ), "respond_activity_task_completed_by_id" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_activity_task_completed_by_id ), "respond_activity_task_failed" => rpc_call!( self, - block, + callback, call, 
WorkflowService, respond_activity_task_failed ), "respond_activity_task_failed_by_id" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_activity_task_failed_by_id ), "respond_nexus_task_completed" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_nexus_task_completed ), "respond_nexus_task_failed" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_nexus_task_failed ), "respond_query_task_completed" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_query_task_completed ), "respond_workflow_task_completed" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_workflow_task_completed ), "respond_workflow_task_failed" => rpc_call!( self, - block, + callback, call, WorkflowService, respond_workflow_task_failed ), - "scan_workflow_executions" => { - rpc_call!(self, block, call, WorkflowService, scan_workflow_executions) - } + "scan_workflow_executions" => rpc_call!( + self, + callback, + call, + WorkflowService, + scan_workflow_executions + ), "signal_with_start_workflow_execution" => rpc_call!( self, - block, + callback, call, WorkflowService, signal_with_start_workflow_execution ), "signal_workflow_execution" => rpc_call!( self, - block, + callback, call, WorkflowService, signal_workflow_execution ), "start_batch_operation" => { - rpc_call!(self, block, call, WorkflowService, start_batch_operation) - } - "start_workflow_execution" => { - rpc_call!(self, block, call, WorkflowService, start_workflow_execution) + rpc_call!(self, callback, call, WorkflowService, start_batch_operation) } + "start_workflow_execution" => rpc_call!( + self, + callback, + call, + WorkflowService, + start_workflow_execution + ), "stop_batch_operation" => { - rpc_call!(self, block, call, WorkflowService, stop_batch_operation) + rpc_call!(self, callback, call, WorkflowService, stop_batch_operation) } "terminate_workflow_execution" => rpc_call!( self, - block, + callback, call, WorkflowService, 
terminate_workflow_execution ), "update_namespace" => { - rpc_call!(self, block, call, WorkflowService, update_namespace) + rpc_call!(self, callback, call, WorkflowService, update_namespace) + } + "update_schedule" => { + rpc_call!(self, callback, call, WorkflowService, update_schedule) } - "update_schedule" => rpc_call!(self, block, call, WorkflowService, update_schedule), "update_worker_build_id_compatibility" => rpc_call!( self, - block, + callback, call, WorkflowService, update_worker_build_id_compatibility ), "update_worker_versioning_rules" => rpc_call!( self, - block, + callback, call, WorkflowService, update_worker_versioning_rules ), "update_workflow_execution" => rpc_call!( self, - block, + callback, call, WorkflowService, update_workflow_execution @@ -324,113 +384,135 @@ impl Client { SERVICE_OPERATOR => match call.rpc.as_str() { "add_or_update_remote_cluster" => rpc_call!( self, - block, + callback, call, OperatorService, add_or_update_remote_cluster ), "add_search_attributes" => { - rpc_call!(self, block, call, OperatorService, add_search_attributes) + rpc_call!(self, callback, call, OperatorService, add_search_attributes) } "create_nexus_endpoint" => { - rpc_call!(self, block, call, OperatorService, create_nexus_endpoint) + rpc_call!(self, callback, call, OperatorService, create_nexus_endpoint) } "delete_namespace" => { - rpc_call!(self, block, call, OperatorService, delete_namespace) + rpc_call!(self, callback, call, OperatorService, delete_namespace) } "delete_nexus_endpoint" => { - rpc_call!(self, block, call, OperatorService, delete_nexus_endpoint) + rpc_call!(self, callback, call, OperatorService, delete_nexus_endpoint) } "get_nexus_endpoint" => { - rpc_call!(self, block, call, OperatorService, get_nexus_endpoint) + rpc_call!(self, callback, call, OperatorService, get_nexus_endpoint) } - "list_clusters" => rpc_call!(self, block, call, OperatorService, list_clusters), + "list_clusters" => rpc_call!(self, callback, call, OperatorService, 
list_clusters), "list_nexus_endpoints" => { - rpc_call!(self, block, call, OperatorService, list_nexus_endpoints) - } - "list_search_attributes" => { - rpc_call!(self, block, call, OperatorService, list_search_attributes) + rpc_call!(self, callback, call, OperatorService, list_nexus_endpoints) } + "list_search_attributes" => rpc_call!( + self, + callback, + call, + OperatorService, + list_search_attributes + ), "remove_remote_cluster" => { - rpc_call!(self, block, call, OperatorService, remove_remote_cluster) - } - "remove_search_attributes" => { - rpc_call!(self, block, call, OperatorService, remove_search_attributes) + rpc_call!(self, callback, call, OperatorService, remove_remote_cluster) } + "remove_search_attributes" => rpc_call!( + self, + callback, + call, + OperatorService, + remove_search_attributes + ), "update_nexus_endpoint" => { - rpc_call!(self, block, call, OperatorService, update_nexus_endpoint) + rpc_call!(self, callback, call, OperatorService, update_nexus_endpoint) } _ => Err(error!("Unknown RPC call {}", call.rpc)), }, SERVICE_CLOUD => match call.rpc.as_str() { "add_namespace_region" => { - rpc_call!(self, block, call, CloudService, add_namespace_region) + rpc_call!(self, callback, call, CloudService, add_namespace_region) + } + "create_api_key" => rpc_call!(self, callback, call, CloudService, create_api_key), + "create_namespace" => { + rpc_call!(self, callback, call, CloudService, create_namespace) } - "create_api_key" => rpc_call!(self, block, call, CloudService, create_api_key), - "create_namespace" => rpc_call!(self, block, call, CloudService, create_namespace), "create_service_account" => { - rpc_call!(self, block, call, CloudService, create_service_account) + rpc_call!(self, callback, call, CloudService, create_service_account) } - "create_user" => rpc_call!(self, block, call, CloudService, create_user), + "create_user" => rpc_call!(self, callback, call, CloudService, create_user), "create_user_group" => { - rpc_call!(self, block, call, 
CloudService, create_user_group) + rpc_call!(self, callback, call, CloudService, create_user_group) + } + "delete_api_key" => rpc_call!(self, callback, call, CloudService, delete_api_key), + "delete_namespace" => { + rpc_call!(self, callback, call, CloudService, delete_namespace) } - "delete_api_key" => rpc_call!(self, block, call, CloudService, delete_api_key), - "delete_namespace" => rpc_call!(self, block, call, CloudService, delete_namespace), "delete_service_account" => { - rpc_call!(self, block, call, CloudService, delete_service_account) + rpc_call!(self, callback, call, CloudService, delete_service_account) } - "delete_user" => rpc_call!(self, block, call, CloudService, delete_user), + "delete_user" => rpc_call!(self, callback, call, CloudService, delete_user), "delete_user_group" => { - rpc_call!(self, block, call, CloudService, delete_user_group) - } - "failover_namespace_region" => { - rpc_call!(self, block, call, CloudService, failover_namespace_region) + rpc_call!(self, callback, call, CloudService, delete_user_group) } - "get_api_key" => rpc_call!(self, block, call, CloudService, get_api_key), - "get_api_keys" => rpc_call!(self, block, call, CloudService, get_api_keys), + "failover_namespace_region" => rpc_call!( + self, + callback, + call, + CloudService, + failover_namespace_region + ), + "get_api_key" => rpc_call!(self, callback, call, CloudService, get_api_key), + "get_api_keys" => rpc_call!(self, callback, call, CloudService, get_api_keys), "get_async_operation" => { - rpc_call!(self, block, call, CloudService, get_async_operation) + rpc_call!(self, callback, call, CloudService, get_async_operation) } - "get_namespace" => rpc_call!(self, block, call, CloudService, get_namespace), - "get_namespaces" => rpc_call!(self, block, call, CloudService, get_namespaces), - "get_region" => rpc_call!(self, block, call, CloudService, get_region), - "get_regions" => rpc_call!(self, block, call, CloudService, get_regions), + "get_namespace" => rpc_call!(self, 
callback, call, CloudService, get_namespace), + "get_namespaces" => rpc_call!(self, callback, call, CloudService, get_namespaces), + "get_region" => rpc_call!(self, callback, call, CloudService, get_region), + "get_regions" => rpc_call!(self, callback, call, CloudService, get_regions), "get_service_account" => { - rpc_call!(self, block, call, CloudService, get_service_account) + rpc_call!(self, callback, call, CloudService, get_service_account) } "get_service_accounts" => { - rpc_call!(self, block, call, CloudService, get_service_accounts) + rpc_call!(self, callback, call, CloudService, get_service_accounts) } - "get_user" => rpc_call!(self, block, call, CloudService, get_user), - "get_user_group" => rpc_call!(self, block, call, CloudService, get_user_group), - "get_user_groups" => rpc_call!(self, block, call, CloudService, get_user_groups), - "get_users" => rpc_call!(self, block, call, CloudService, get_users), + "get_user" => rpc_call!(self, callback, call, CloudService, get_user), + "get_user_group" => rpc_call!(self, callback, call, CloudService, get_user_group), + "get_user_groups" => rpc_call!(self, callback, call, CloudService, get_user_groups), + "get_users" => rpc_call!(self, callback, call, CloudService, get_users), "rename_custom_search_attribute" => rpc_call!( self, - block, + callback, call, CloudService, rename_custom_search_attribute ), "set_user_group_namespace_access" => rpc_call!( self, - block, + callback, call, CloudService, set_user_group_namespace_access ), - "set_user_namespace_access" => { - rpc_call!(self, block, call, CloudService, set_user_namespace_access) + "set_user_namespace_access" => rpc_call!( + self, + callback, + call, + CloudService, + set_user_namespace_access + ), + "update_api_key" => rpc_call!(self, callback, call, CloudService, update_api_key), + "update_namespace" => { + rpc_call!(self, callback, call, CloudService, update_namespace) } - "update_api_key" => rpc_call!(self, block, call, CloudService, update_api_key), - 
"update_namespace" => rpc_call!(self, block, call, CloudService, update_namespace), "update_service_account" => { - rpc_call!(self, block, call, CloudService, update_service_account) + rpc_call!(self, callback, call, CloudService, update_service_account) } - "update_user" => rpc_call!(self, block, call, CloudService, update_user), + "update_user" => rpc_call!(self, callback, call, CloudService, update_user), "update_user_group" => { - rpc_call!(self, block, call, CloudService, update_user_group) + rpc_call!(self, callback, call, CloudService, update_user_group) } _ => Err(error!("Unknown RPC call {}", call.rpc)), }, diff --git a/temporalio/ext/src/lib.rs b/temporalio/ext/src/lib.rs index 9dd7d124..5833b51d 100644 --- a/temporalio/ext/src/lib.rs +++ b/temporalio/ext/src/lib.rs @@ -5,6 +5,7 @@ mod client_rpc_generated; mod runtime; mod testing; mod util; +mod worker; pub static ROOT_MOD: Lazy = Lazy::new(|ruby| { ruby.define_module("Temporalio") @@ -50,6 +51,7 @@ fn init(ruby: &Ruby) -> Result<(), Error> { client::init(ruby)?; runtime::init(ruby)?; testing::init(ruby)?; + worker::init(ruby)?; Ok(()) } diff --git a/temporalio/ext/src/runtime.rs b/temporalio/ext/src/runtime.rs index 4da99c73..63da18ab 100644 --- a/temporalio/ext/src/runtime.rs +++ b/temporalio/ext/src/runtime.rs @@ -38,12 +38,22 @@ pub struct Runtime { #[derive(Clone)] pub(crate) struct RuntimeHandle { pub(crate) core: Arc, - async_command_tx: Sender, + pub(crate) async_command_tx: Sender, } -type Callback = Box Result<(), Error> + Send + 'static>; +#[macro_export] +macro_rules! 
enter_sync {
+    ($runtime:expr) => {
+        if let Some(subscriber) = $runtime.core.telemetry().trace_subscriber() {
+            temporal_sdk_core::telemetry::set_trace_subscriber_for_current_thread(subscriber);
+        }
+        let _guard = $runtime.core.tokio_handle().enter();
+    };
+}
+
+pub(crate) type Callback = Box<dyn FnOnce() -> Result<(), Error> + Send + 'static>;
 
-enum AsyncCommand {
+pub(crate) enum AsyncCommand {
     RunCallback(Callback),
     Shutdown,
 }
@@ -155,6 +165,7 @@ impl Runtime {
     // See the ext/README.md for details on how this works
     pub fn run_command_loop(&self) {
+        enter_sync!(self.handle);
         loop {
             let cmd = without_gvl(
                 || self.async_command_rx.recv(),
diff --git a/temporalio/ext/src/testing.rs b/temporalio/ext/src/testing.rs
index 583d6292..8ca72897 100644
--- a/temporalio/ext/src/testing.rs
+++ b/temporalio/ext/src/testing.rs
@@ -1,6 +1,5 @@
 use magnus::{
-    class, function, method, prelude::*, value::Opaque, DataTypeFunctions, Error, Ruby, TypedData,
-    Value,
+    class, function, method, prelude::*, DataTypeFunctions, Error, Ruby, TypedData, Value,
 };
 use parking_lot::Mutex;
 use temporal_sdk_core::ephemeral_server::{
@@ -10,7 +9,7 @@ use temporal_sdk_core::ephemeral_server::{
 use crate::{
     error, id, new_error,
     runtime::{Runtime, RuntimeHandle},
-    util::Struct,
+    util::{AsyncCallback, Struct},
     ROOT_MOD,
 };
@@ -22,12 +21,12 @@ pub fn init(ruby: &Ruby) -> Result<(), Error> {
     let class = module.define_class("EphemeralServer", class::object())?;
     class.define_singleton_method(
         "async_start_dev_server",
-        function!(EphemeralServer::async_start_dev_server, 2),
+        function!(EphemeralServer::async_start_dev_server, 3),
     )?;
     class.define_method("target", method!(EphemeralServer::target, 0))?;
     class.define_method(
         "async_shutdown",
-        method!(EphemeralServer::async_shutdown, 0),
+        method!(EphemeralServer::async_shutdown, 1),
     )?;
     Ok(())
 }
@@ -45,9 +44,9 @@ pub struct EphemeralServer {
 impl EphemeralServer {
     pub fn async_start_dev_server(
-        ruby: &Ruby,
         runtime: &Runtime,
         options: Struct,
+        queue: Value,
     ) -> Result<(), Error> {
         // Build options
         let mut opts_build = TemporalDevServerConfigBuilder::default();
@@ -85,21 +84,17 @@
             .map_err(|err| error!("Invalid Temporalite config: {}", err))?;
 
         // Start
-        let block = Opaque::from(ruby.block_proc()?);
+        let callback = AsyncCallback::from_queue(queue);
         let runtime_handle = runtime.handle.clone();
         runtime.handle.spawn(
             async move { opts.start_server().await },
-            move |ruby, result| {
-                let block = ruby.get_inner(block);
-                let _: Value = match result {
-                    Ok(core) => block.call((EphemeralServer {
-                        target: core.target.clone(),
-                        core: Mutex::new(Some(core)),
-                        runtime_handle,
-                    },))?,
-                    Err(err) => block.call((new_error!("Failed starting server: {}", err),))?,
-                };
-                Ok(())
+            move |_, result| match result {
+                Ok(core) => callback.push(EphemeralServer {
+                    target: core.target.clone(),
+                    core: Mutex::new(Some(core)),
+                    runtime_handle,
+                }),
+                Err(err) => callback.push(new_error!("Failed starting server: {}", err)),
             },
         );
         Ok(())
@@ -109,21 +104,19 @@ impl EphemeralServer {
         &self.target
     }
 
-    pub fn async_shutdown(&self) -> Result<(), Error> {
-        let ruby = Ruby::get().expect("Not in Ruby thread");
+    pub fn async_shutdown(&self, queue: Value) -> Result<(), Error> {
         if let Some(mut core) = self.core.lock().take() {
-            let block = Opaque::from(ruby.block_proc()?);
+            let callback = AsyncCallback::from_queue(queue);
             self.runtime_handle
-                .spawn(async move { core.shutdown().await }, move |ruby, result| {
-                    let block = ruby.get_inner(block);
-                    let _: Value = match result {
-                        Ok(_) => block.call((ruby.qnil(),))?,
+                .spawn(
+                    async move { core.shutdown().await },
+                    move |ruby, result| match result {
+                        Ok(_) => callback.push(ruby.qnil()),
                         Err(err) => {
-                            block.call((new_error!("Failed shutting down server: {}", err),))?
+                            callback.push(new_error!("Failed shutting down server: {}", err))
                         }
-                    };
-                    Ok(())
-                })
+                    },
+                )
         }
         Ok(())
     }
diff --git a/temporalio/ext/src/util.rs b/temporalio/ext/src/util.rs
index 1f099370..da8e2bc7 100644
--- a/temporalio/ext/src/util.rs
+++ b/temporalio/ext/src/util.rs
@@ -1,11 +1,11 @@
 use std::ffi::c_void;
 
 use magnus::symbol::IntoSymbol;
-use magnus::value::OpaqueId;
-use magnus::Ruby;
+use magnus::value::{BoxValue, OpaqueId, ReprValue};
 use magnus::{Error, RStruct, TryConvert, Value};
+use magnus::{IntoValue, Ruby};
 
-use crate::error;
+use crate::{error, id};
 
 pub(crate) struct Struct {
     field_path: Vec<String>,
@@ -102,3 +102,28 @@ where
         *Box::from_raw(result as _)
     }
 }
+
+pub(crate) struct AsyncCallback {
+    queue: BoxValue<Value>,
+}
+
+// We trust our usage of this across threads. We would use Opaque but we can't
+// box that properly/safely.
+unsafe impl Send for AsyncCallback {}
+unsafe impl Sync for AsyncCallback {}
+
+impl AsyncCallback {
+    pub(crate) fn from_queue(queue: Value) -> Self {
+        Self {
+            queue: BoxValue::new(queue),
+        }
+    }
+
+    pub(crate) fn push<V>(&self, value: V) -> Result<(), Error>
+    where
+        V: IntoValue,
+    {
+        self.queue.funcall(id!("push"), (value,)).map(|_: Value| ())
+    }
+}
diff --git a/temporalio/ext/src/worker.rs b/temporalio/ext/src/worker.rs
new file mode 100644
index 00000000..bdf783ef
--- /dev/null
+++ b/temporalio/ext/src/worker.rs
@@ -0,0 +1,407 @@
+use std::{cell::RefCell, sync::Arc, time::Duration};
+
+use crate::{
+    client::Client,
+    enter_sync, error, id, new_error,
+    runtime::{AsyncCommand, RuntimeHandle},
+    util::{AsyncCallback, Struct},
+    ROOT_MOD,
+};
+use futures::StreamExt;
+use futures::{future, stream};
+use magnus::{
+    class, function, method, prelude::*, typed_data, DataTypeFunctions, Error, IntoValue, RArray,
+    RString, RTypedData, Ruby, TypedData, Value,
+};
+use prost::Message;
+use temporal_sdk_core::{
+    ResourceBasedSlotsOptions, ResourceBasedSlotsOptionsBuilder, ResourceSlotOptions,
+    SlotSupplierOptions, TunerHolder, TunerHolderOptionsBuilder, WorkerConfigBuilder,
+};
+use temporal_sdk_core_api::errors::PollActivityError;
+use temporal_sdk_core_protos::coresdk::{ActivityHeartbeat, ActivityTaskCompletion};
+
+pub fn init(ruby: &Ruby) -> Result<(), Error> {
+    let class = ruby
+        .get_inner(&ROOT_MOD)
+        .define_class("Worker", class::object())?;
+    class.define_singleton_method("new", function!(Worker::new, 2))?;
+    class.define_singleton_method("async_poll_all", function!(Worker::async_poll_all, 2))?;
+    class.define_singleton_method(
+        "async_finalize_all",
+        function!(Worker::async_finalize_all, 2),
+    )?;
+    class.define_method("async_validate", method!(Worker::async_validate, 1))?;
+    class.define_method(
+        "async_complete_activity_task",
+        method!(Worker::async_complete_activity_task, 2),
+    )?;
+    class.define_method(
+        "record_activity_heartbeat",
+        method!(Worker::record_activity_heartbeat, 1),
+    )?;
+    class.define_method("replace_client", method!(Worker::replace_client, 1))?;
+    class.define_method("initiate_shutdown", method!(Worker::initiate_shutdown, 0))?;
+    Ok(())
+}
+
+#[derive(DataTypeFunctions, TypedData)]
+#[magnus(class = "Temporalio::Internal::Bridge::Worker", free_immediately)]
+pub struct Worker {
+    // This needs to be a RefCell of an Option of an Arc because we need to
+    // mutably take it out of the option at finalize time but we don't have
+    // a mutable reference of self at that time.
+    core: RefCell<Option<Arc<temporal_sdk_core::Worker>>>,
+    runtime_handle: RuntimeHandle,
+    activity: bool,
+    _workflow: bool,
+}
+
+enum WorkerType {
+    Activity,
+}
+
+impl Worker {
+    pub fn new(client: &Client, options: Struct) -> Result<Self, Error> {
+        enter_sync!(client.runtime_handle);
+        let activity = options.member::<bool>(id!("activity"))?;
+        let _workflow = options.member::<bool>(id!("workflow"))?;
+        // Build config
+        let config = WorkerConfigBuilder::default()
+            .namespace(options.member::<String>(id!("namespace"))?)
+            .task_queue(options.member::<String>(id!("task_queue"))?)
+            .worker_build_id(options.member::<String>(id!("build_id"))?)
+            .client_identity_override(options.member::<Option<String>>(id!("identity_override"))?)
+            .max_cached_workflows(options.member::<usize>(id!("max_cached_workflows"))?)
+            .max_concurrent_wft_polls(
+                options.member::<usize>(id!("max_concurrent_workflow_task_polls"))?,
+            )
+            .nonsticky_to_sticky_poll_ratio(
+                options.member::<f32>(id!("nonsticky_to_sticky_poll_ratio"))?,
+            )
+            .max_concurrent_at_polls(
+                options.member::<usize>(id!("max_concurrent_activity_task_polls"))?,
+            )
+            .no_remote_activities(options.member::<bool>(id!("no_remote_activities"))?)
+            .sticky_queue_schedule_to_start_timeout(Duration::from_secs_f64(
+                options.member(id!("sticky_queue_schedule_to_start_timeout"))?,
+            ))
+            .max_heartbeat_throttle_interval(Duration::from_secs_f64(
+                options.member(id!("max_heartbeat_throttle_interval"))?,
+            ))
+            .default_heartbeat_throttle_interval(Duration::from_secs_f64(
+                options.member(id!("default_heartbeat_throttle_interval"))?,
+            ))
+            .max_worker_activities_per_second(
+                options.member::<Option<f64>>(id!("max_worker_activities_per_second"))?,
+            )
+            .max_task_queue_activities_per_second(
+                options.member::<Option<f64>>(id!("max_task_queue_activities_per_second"))?,
+            )
+            .graceful_shutdown_period(Duration::from_secs_f64(
+                options.member(id!("graceful_shutdown_period"))?,
+            ))
+            .use_worker_versioning(options.member::<bool>(id!("use_worker_versioning"))?)
+            .tuner(Arc::new(build_tuner(
+                options
+                    .child(id!("tuner"))?
+                    .ok_or_else(|| error!("Missing tuner"))?,
+            )?))
+            // TODO(cretz): workflow_failure_errors
+            // TODO(cretz): workflow_types_to_failure_errors
+            .build()
+            .map_err(|err| error!("Invalid worker options: {}", err))?;
+
+        let worker = temporal_sdk_core::init_worker(
+            &client.runtime_handle.core,
+            config,
+            client.core.clone().into_inner(),
+        )
+        .map_err(|err| error!("Failed creating worker: {}", err))?;
+        Ok(Worker {
+            core: RefCell::new(Some(Arc::new(worker))),
+            runtime_handle: client.runtime_handle.clone(),
+            activity,
+            _workflow,
+        })
+    }
+
+    pub fn async_poll_all(workers: RArray, queue: Value) -> Result<(), Error> {
+        // Get the first runtime handle
+        let runtime = workers
+            .entry::<typed_data::Obj<Worker>>(0)?
+            .runtime_handle
+            .clone();
+
+        // Create stream of poll calls
+        // TODO(cretz): Map for workflow pollers too
+        let worker_streams = workers
+            .into_iter()
+            .enumerate()
+            .filter_map(|(index, worker_val)| {
+                let worker_typed_data = RTypedData::from_value(worker_val).expect("Not typed data");
+                let worker_ref = worker_typed_data.get::<Worker>().expect("Not worker");
+                if worker_ref.activity {
+                    let worker = Some(
+                        worker_ref
+                            .core
+                            .borrow()
+                            .as_ref()
+                            .expect("Unable to borrow")
+                            .clone(),
+                    );
+                    Some(Box::pin(stream::unfold(worker, move |worker| async move {
+                        // We return no worker so the next streamed item closes
+                        // the stream with a None
+                        if let Some(worker) = worker {
+                            let res =
+                                temporal_sdk_core_api::Worker::poll_activity_task(&*worker).await;
+                            let shutdown_next = matches!(res, Err(PollActivityError::ShutDown));
+                            Some((
+                                (index, WorkerType::Activity, res),
+                                // No more worker if shutdown
+                                if shutdown_next { None } else { Some(worker) },
+                            ))
+                        } else {
+                            None
+                        }
+                    })))
+                } else {
+                    None
+                }
+            })
+            .collect::<Vec<_>>();
+        let mut worker_stream = stream::select_all(worker_streams);
+
+        // Continually call the callback with the worker and the result. The result can either be:
+        // * [worker index, :activity/:workflow, bytes] - poll success
+        // * [worker index, :activity/:workflow, error] - poll fail
+        // * [worker index, :activity/:workflow, nil] - worker shutdown
+        // * [nil, nil, nil] - all pollers done
+        let callback = Arc::new(AsyncCallback::from_queue(queue));
+        let complete_callback = callback.clone();
+        let async_command_tx = runtime.async_command_tx.clone();
+        runtime.spawn(
+            async move {
+                // Get next item from the stream
+                while let Some((worker, worker_type, result)) = worker_stream.next().await {
+                    // Encode result and send callback to Ruby
+                    let result = result.map(|v| v.encode_to_vec());
+                    let callback = callback.clone();
+                    let _ = async_command_tx.send(AsyncCommand::RunCallback(Box::new(move || {
+                        // Get Ruby in callback
+                        let ruby = Ruby::get().expect("Ruby not available");
+                        let worker_type = match worker_type {
+                            WorkerType::Activity => id!("activity"),
+                        };
+                        // Call block
+                        let result: Value = match result {
+                            Ok(val) => RString::from_slice(&val).as_value(),
+                            Err(PollActivityError::ShutDown) => ruby.qnil().as_value(),
+                            Err(err) => new_error!("Poll failure: {}", err).as_value(),
+                        };
+                        callback.push(ruby.ary_new_from_values(&[
+                            worker.into_value(),
+                            worker_type.into_value(),
+                            result,
+                        ]))
+                    })));
+                }
+            },
+            move |ruby, _| {
+                // Call with nil, nil, nil to say done
+                complete_callback.push(ruby.ary_new_from_values(&[
+                    ruby.qnil(),
+                    ruby.qnil(),
+                    ruby.qnil(),
+                ]))
+            },
+        );
+        Ok(())
+    }
+
+    pub fn async_finalize_all(workers: RArray, queue: Value) -> Result<(), Error> {
+        // Get the first runtime handle
+        let runtime = workers
+            .entry::<typed_data::Obj<Worker>>(0)?
+            .runtime_handle
+            .clone();
+
+        // Take workers and call finalize on them
+        let mut errs: Vec<String> = Vec::new();
+        let futs = workers
+            .into_iter()
+            .map(|worker_val| {
+                let worker_typed_data = RTypedData::from_value(worker_val).expect("Not typed data");
+                let worker_ref = worker_typed_data.get::<Worker>().expect("Not worker");
+                let worker = worker_ref
+                    .core
+                    .try_borrow_mut()
+                    .map_err(|_| "Worker still in use".to_owned())
+                    .and_then(|mut val| {
+                        Arc::try_unwrap(val.take().unwrap()).map_err(|arc| {
+                            format!("Expected 1 reference but got {}", Arc::strong_count(&arc))
+                        })
+                    })?;
+                Ok(temporal_sdk_core_api::Worker::finalize_shutdown(worker))
+            })
+            .filter_map(|fut_or_err| match fut_or_err {
+                Ok(fut) => Some(fut),
+                Err(err) => {
+                    errs.push(err);
+                    None
+                }
+            })
+            .collect::<Vec<_>>();
+
+        // Spawn the futures
+        let callback = AsyncCallback::from_queue(queue);
+        runtime.spawn(
+            async move {
+                // Run all futures and return errors
+                future::join_all(futs).await;
+                errs
+            },
+            move |ruby, errs| {
+                if errs.is_empty() {
+                    callback.push(ruby.qnil())
+                } else {
+                    callback.push(new_error!(
+                        "{} worker(s) failed to finalize, reasons: {}",
+                        errs.len(),
+                        errs.join(", ")
+                    ))
+                }
+            },
+        );
+        Ok(())
+    }
+
+    pub fn async_validate(&self, queue: Value) -> Result<(), Error> {
+        let callback = AsyncCallback::from_queue(queue);
+        let worker = self.core.borrow().as_ref().unwrap().clone();
+        self.runtime_handle.spawn(
+            async move { temporal_sdk_core_api::Worker::validate(&*worker).await },
+            move |ruby, result| match result {
+                Ok(()) => callback.push(ruby.qnil()),
+                Err(err) => callback.push(new_error!("Failed validating worker: {}", err)),
+            },
+        );
+        Ok(())
+    }
+
+    pub fn async_complete_activity_task(&self, proto: RString, queue: Value) -> Result<(), Error> {
+        let callback = AsyncCallback::from_queue(queue);
+        let worker = self.core.borrow().as_ref().unwrap().clone();
+        let completion = ActivityTaskCompletion::decode(unsafe { proto.as_slice() })
+            .map_err(|err| error!("Invalid proto: {}", err))?;
+        self.runtime_handle.spawn(
+            async move {
+                temporal_sdk_core_api::Worker::complete_activity_task(&*worker, completion).await
+            },
+            move |ruby, result| match result {
+                Ok(()) => callback.push((ruby.qnil(),)),
+                Err(err) => callback.push((new_error!("Completion failure: {}", err),)),
+            },
+        );
+        Ok(())
+    }
+
+    pub fn record_activity_heartbeat(&self, proto: RString) -> Result<(), Error> {
+        enter_sync!(self.runtime_handle);
+        let heartbeat = ActivityHeartbeat::decode(unsafe { proto.as_slice() })
+            .map_err(|err| error!("Invalid proto: {}", err))?;
+        let worker = self.core.borrow().as_ref().unwrap().clone();
+        temporal_sdk_core_api::Worker::record_activity_heartbeat(&*worker, heartbeat);
+        Ok(())
+    }
+
+    pub fn replace_client(&self, client: &Client) -> Result<(), Error> {
+        enter_sync!(self.runtime_handle);
+        let worker = self.core.borrow().as_ref().unwrap().clone();
+        worker.replace_client(client.core.clone().into_inner());
+        Ok(())
+    }
+
+    pub fn initiate_shutdown(&self) -> Result<(), Error> {
+        enter_sync!(self.runtime_handle);
+        let worker = self.core.borrow().as_ref().unwrap().clone();
+        temporal_sdk_core_api::Worker::initiate_shutdown(&*worker);
+        Ok(())
+    }
+}
+
+fn build_tuner(options: Struct) -> Result<TunerHolder, Error> {
+    let (workflow_slot_options, resource_slot_options) = build_tuner_slot_options(
+        options
+            .child(id!("workflow_slot_supplier"))?
+            .ok_or_else(|| error!("Missing workflow slot options"))?,
+        None,
+    )?;
+    let (activity_slot_options, resource_slot_options) = build_tuner_slot_options(
+        options
+            .child(id!("activity_slot_supplier"))?
+            .ok_or_else(|| error!("Missing activity slot options"))?,
+        resource_slot_options,
+    )?;
+    let (local_activity_slot_options, resource_slot_options) = build_tuner_slot_options(
+        options
+            .child(id!("local_activity_slot_supplier"))?
+            .ok_or_else(|| error!("Missing local activity slot options"))?,
+        resource_slot_options,
+    )?;
+
+    let mut opts_build = TunerHolderOptionsBuilder::default();
+    if let Some(resource_slot_options) = resource_slot_options {
+        opts_build.resource_based_options(resource_slot_options);
+    }
+    opts_build
+        .workflow_slot_options(workflow_slot_options)
+        .activity_slot_options(activity_slot_options)
+        .local_activity_slot_options(local_activity_slot_options)
+        .build()
+        .map_err(|err| error!("Failed building tuner options: {}", err))?
+        .build_tuner_holder()
+        .map_err(|err| error!("Failed building tuner options: {}", err))
+}
+
+fn build_tuner_slot_options(
+    options: Struct,
+    prev_slots_options: Option<ResourceBasedSlotsOptions>,
+) -> Result<(SlotSupplierOptions, Option<ResourceBasedSlotsOptions>), Error> {
+    if let Some(slots) = options.member::<Option<usize>>(id!("fixed_size"))? {
+        Ok((SlotSupplierOptions::FixedSize { slots }, prev_slots_options))
+    } else if let Some(resource) = options.child(id!("resource_based"))? {
+        build_tuner_resource_options(resource, prev_slots_options)
+    } else {
+        Err(error!("Slot supplier must be fixed size or resource based"))
+    }
+}
+
+fn build_tuner_resource_options(
+    options: Struct,
+    prev_slots_options: Option<ResourceBasedSlotsOptions>,
+) -> Result<(SlotSupplierOptions, Option<ResourceBasedSlotsOptions>), Error> {
+    let slots_options = ResourceBasedSlotsOptionsBuilder::default()
+        .target_mem_usage(options.member(id!("target_mem_usage"))?)
+        .target_cpu_usage(options.member(id!("target_cpu_usage"))?)
+        .build()
+        .map_err(|err| error!("Failed building resource slot options: {}", err))?;
+    if let Some(prev_slots_options) = prev_slots_options {
+        if slots_options.target_cpu_usage != prev_slots_options.target_cpu_usage
+            || slots_options.target_mem_usage != prev_slots_options.target_mem_usage
+        {
+            return Err(error!(
+                "All resource-based slot suppliers must have the same resource-based tuner options"
+            ));
+        }
+    }
+    Ok((
+        SlotSupplierOptions::ResourceBased(ResourceSlotOptions::new(
+            options.member(id!("min_slots"))?,
+            options.member(id!("max_slots"))?,
+            Duration::from_secs_f64(options.member(id!("ramp_throttle"))?),
+        )),
+        Some(slots_options),
+    ))
+}
diff --git a/temporalio/lib/temporalio/activity.rb b/temporalio/lib/temporalio/activity.rb
new file mode 100644
index 00000000..8abeb35d
--- /dev/null
+++ b/temporalio/lib/temporalio/activity.rb
@@ -0,0 +1,69 @@
+# frozen_string_literal: true
+
+require 'temporalio/activity/complete_async_error'
+require 'temporalio/activity/context'
+require 'temporalio/activity/definition'
+require 'temporalio/activity/info'
+
+module Temporalio
+  # Base class for all activities.
+  #
+  # Activities can be given to a worker as instances of this class, which will call execute on the same instance for
+  # each execution, or given to the worker as the class itself which instantiates the activity for each execution.
+  #
+  # All activities must implement {execute}. Inside execute, {Activity::Context.current} can be used to access the
+  # current context to get information, issue heartbeats, etc.
+  #
+  # By default, the activity is named as its unqualified class name. This can be customized with {activity_name}.
+  #
+  # By default, the activity uses the `:default` executor which is usually the thread-pool based executor. This can be
+  # customized with {activity_executor}.
+  #
+  # By default, upon cancellation {::Thread.raise} or {::Fiber.raise} is called with a {Error::CanceledError}. This can
+  # be disabled by passing `false` to {activity_cancel_raise}.
+  #
+  # See documentation for more detail on activities.
+  class Activity
+    # Override the activity name which is defaulted to the unqualified class name.
+    #
+    # @param name [String, Symbol] Name to use.
+    def self.activity_name(name)
+      raise ArgumentError, 'Activity name must be a symbol or string' if !name.is_a?(Symbol) && !name.is_a?(String)
+
+      @activity_name = name.to_s
+    end
+
+    # Override the activity executor which is defaulted to `:default`.
+    #
+    # @param executor_name [Symbol] Executor to use.
+    def self.activity_executor(executor_name)
+      raise ArgumentError, 'Executor name must be a symbol' unless executor_name.is_a?(Symbol)
+
+      @activity_executor = executor_name
+    end
+
+    # Override whether the activity uses Thread/Fiber raise for cancellation which is defaulted to true.
+    #
+    # @param cancel_raise [Boolean] Whether to raise.
+    def self.activity_cancel_raise(cancel_raise)
+      raise ArgumentError, 'Must be a boolean' unless cancel_raise.is_a?(TrueClass) || cancel_raise.is_a?(FalseClass)
+
+      @activity_cancel_raise = cancel_raise
+    end
+
+    # @!visibility private
+    def self._activity_definition_details
+      {
+        activity_name: @activity_name || name.to_s.split('::').last,
+        activity_executor: @activity_executor || :default,
+        activity_cancel_raise: @activity_cancel_raise.nil? ? true : @activity_cancel_raise
+      }
+    end
+
+    # Implementation of the activity. The arguments should be positional and this should return the value on success or
+    # raise an error on failure.
+ def execute(*args) + raise NotImplementedError, 'Activity did not implement "execute"' + end + end +end diff --git a/temporalio/lib/temporalio/activity/complete_async_error.rb b/temporalio/lib/temporalio/activity/complete_async_error.rb new file mode 100644 index 00000000..dc0267e9 --- /dev/null +++ b/temporalio/lib/temporalio/activity/complete_async_error.rb @@ -0,0 +1,11 @@ +# frozen_string_literal: true + +require 'temporalio/error' + +module Temporalio + class Activity + # Error raised inside an activity to mark that the activity will be completed asynchronously. + class CompleteAsyncError < Error + end + end +end diff --git a/temporalio/lib/temporalio/activity/context.rb b/temporalio/lib/temporalio/activity/context.rb new file mode 100644 index 00000000..81a52762 --- /dev/null +++ b/temporalio/lib/temporalio/activity/context.rb @@ -0,0 +1,112 @@ +# frozen_string_literal: true + +require 'temporalio/error' + +module Temporalio + class Activity + # Context accessible only within an activity. Use {current} to get the current context. Contexts are fiber or thread + # local so may not be available in a newly started thread from an activity and may have to be propagated manually. + class Context + # @return [Context] The current context, or raises an error if not in activity fiber/thread. + def self.current + context = current_or_nil + raise Error, 'Not in activity context' if context.nil? + + context + end + + # @return [Context, nil] The current context or nil if not in activity fiber/thread. + def self.current_or_nil + _current_executor&.activity_context + end + + # @return [Boolean] Whether there is a current context available. + def self.exist? + !current_or_nil.nil? 
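Putting the class-level hooks documented above together, a sketch of a complete activity that overrides its name, pins the executor, and heartbeats through the fiber/thread-local context might look like the following (worker registration is assumed and not shown):

```ruby
require 'temporalio/activity'

# Hypothetical activity demonstrating the optional class-level overrides.
class GreetActivity < Temporalio::Activity
  activity_name 'greet'        # default would be the unqualified class name
  activity_executor :default   # :default is already the default; shown for illustration
  activity_cancel_raise true   # raise Error::CanceledError in this thread/fiber on cancel

  def execute(name)
    # Heartbeats are throttled internally based on the heartbeat timeout,
    # so calling this frequently does not burden the server.
    Temporalio::Activity::Context.current.heartbeat('greeting', name)
    "Hello, #{name}!"
  end
end
```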
+ end + + # @!visibility private + def self._current_executor + if Fiber.current_scheduler + Fiber[:temporal_activity_executor] + else + Thread.current[:temporal_activity_executor] + end + end + + # @!visibility private + def self._current_executor=(executor) + if Fiber.current_scheduler + Fiber[:temporal_activity_executor] = executor + else + Thread.current[:temporal_activity_executor] = executor + end + end + + # @return [Info] Activity info for this activity. + def info + raise NotImplementedError + end + + # Record a heartbeat on the activity. + # + # Heartbeats should be used for all non-immediately-returning, non-local activities and they are required to + # receive cancellation. Heartbeat calls are throttled internally based on the heartbeat timeout of the activity. + # Users do not have to be concerned with burdening the server by calling this too frequently. + # + # @param details [Array] Details to record with the heartbeat. + def heartbeat(*details) + raise NotImplementedError + end + + # @return [Cancellation] Cancellation that is canceled when the activity is canceled. + def cancellation + raise NotImplementedError + end + + # @return [Cancellation] Cancellation that is canceled when the worker is shutting down. On worker shutdown, this + # is canceled, then the `graceful_shutdown_period` is waited (default 0s), then the activity is canceled. + def worker_shutdown_cancellation + raise NotImplementedError + end + + # @return [Converters::PayloadConverter] Payload converter associated with this activity. + def payload_converter + raise NotImplementedError + end + + # @return [ScopedLogger] Logger for this activity. Note, this is a shared logger not created each activity + # invocation. It just has logic to extract current activity details and so is only able to do so on log calls + # made with a current context available. + def logger + raise NotImplementedError + end + + # @return [Definition] Definition for this activity. 
+ def definition + raise NotImplementedError + end + + # @!visibility private + def _scoped_logger_info + return @scoped_logger_info unless @scoped_logger_info.nil? + + curr_info = info + @scoped_logger_info = { + temporal_activity: { + activity_id: curr_info.activity_id, + activity_type: curr_info.activity_type, + attempt: curr_info.attempt, + task_queue: curr_info.task_queue, + workflow_id: curr_info.workflow_id, + workflow_namespace: curr_info.workflow_namespace, + workflow_run_id: curr_info.workflow_run_id, + workflow_type: curr_info.workflow_type + } + }.freeze + end + + # TODO(cretz): metric meter + end + end +end diff --git a/temporalio/lib/temporalio/activity/definition.rb b/temporalio/lib/temporalio/activity/definition.rb new file mode 100644 index 00000000..abccc755 --- /dev/null +++ b/temporalio/lib/temporalio/activity/definition.rb @@ -0,0 +1,77 @@ +# frozen_string_literal: true + +module Temporalio + class Activity + # Definition of an activity. Activities are usually classes/instances that extend {Activity}, but definitions can + # also be manually created with a proc/block. + class Definition + # @return [String, Symbol] Name of the activity. + attr_reader :name + + # @return [Proc] Proc for the activity. + attr_reader :proc + + # @return [Symbol] Name of the executor. Default is `:default`. + attr_reader :executor + + # @return [Boolean] Whether to raise in thread/fiber on cancellation. Default is `true`. + attr_reader :cancel_raise + + # Obtain a definition representing the given activity, which can be a class, instance, or definition. + # + # @param activity [Activity, Class, Definition] Activity to get definition for. + # @return Definition Obtained definition. 
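Per the `Definition` docs above, most users hand a class or instance of `Activity` to the worker, but a definition can also be built manually from a block. A brief sketch (the `MyActivity` class referenced in the comments is hypothetical):

```ruby
require 'temporalio/activity'

# A definition created manually from a block rather than an Activity subclass.
# Handy for tiny activities; exactly one of proc: or a block must be given.
add_def = Temporalio::Activity::Definition.new(name: 'add') { |a, b| a + b }

# Definitions can also be derived from a class or an instance:
# Temporalio::Activity::Definition.from_activity(MyActivity)      # class: new instance per execution
# Temporalio::Activity::Definition.from_activity(MyActivity.new)  # instance: same object reused
```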
+ def self.from_activity(activity) + # Class means create each time, instance means just call, definition + # does nothing special + case activity + when Class + raise ArgumentError, "Class '#{activity}' does not extend Activity" unless activity < Activity + + details = activity._activity_definition_details + new( + name: details[:activity_name], + executor: details[:activity_executor], + cancel_raise: details[:activity_cancel_raise], + # Instantiate and call + proc: proc { |*args| activity.new.execute(*args) } + ) + when Activity + details = activity.class._activity_definition_details + new( + name: details[:activity_name], + executor: details[:activity_executor], + cancel_raise: details[:activity_cancel_raise], + # Just call + proc: proc { |*args| activity.execute(*args) } + ) + when Activity::Definition + activity + else + raise ArgumentError, "#{activity} is not an activity class, instance, or definition" + end + end + + # Manually create activity definition. Most users will use an instance/class of {Activity}. + # + # @param name [String, Symbol] Name of the activity. + # @param proc [Proc, nil] Proc for the activity, or can give block. + # @param executor [Symbol] Name of the executor. + # @param cancel_raise [Boolean] Whether to raise in thread/fiber on cancellation. + # @yield Use this block as the activity. Cannot be present with `proc`. + def initialize(name:, proc: nil, executor: :default, cancel_raise: true, &block) + @name = name + if proc.nil? + raise ArgumentError, 'Must give proc or block' unless block_given? + + proc = block + elsif block_given? 
+ raise ArgumentError, 'Cannot give proc and block' + end + @proc = proc + @executor = executor + @cancel_raise = cancel_raise + end + end + end +end diff --git a/temporalio/lib/temporalio/activity/info.rb b/temporalio/lib/temporalio/activity/info.rb new file mode 100644 index 00000000..64cb6475 --- /dev/null +++ b/temporalio/lib/temporalio/activity/info.rb @@ -0,0 +1,63 @@ +# frozen_string_literal: true + +module Temporalio + class Activity + # Information about an activity. + # + # @!attribute activity_id + # @return [String] ID for the activity. + # @!attribute activity_type + # @return [String] Type name for the activity. + # @!attribute attempt + # @return [Integer] Attempt the activity is on. + # @!attribute current_attempt_scheduled_time + # @return [Time] When the current attempt was scheduled. + # @!attribute heartbeat_details + # @return [Array] Details from the last heartbeat of the last attempt. + # @!attribute heartbeat_timeout + # @return [Float, nil] Heartbeat timeout set by the caller. + # @!attribute local? + # @return [Boolean] Whether the activity is a local activity or not. + # @!attribute schedule_to_close_timeout + # @return [Float, nil] Schedule to close timeout set by the caller. + # @!attribute scheduled_time + # @return [Time] When the activity was scheduled. + # @!attribute start_to_close_timeout + # @return [Float, nil] Start to close timeout set by the caller. + # @!attribute started_time + # @return [Time] When the activity started. + # @!attribute task_queue + # @return [String] Task queue this activity is on. + # @!attribute task_token + # @return [String] Task token uniquely identifying this activity. Note, this is a `ASCII-8BIT` encoded string, not + # a `UTF-8` encoded string nor a valid UTF-8 string. + # @!attribute workflow_id + # @return [String] Workflow ID that started this activity. + # @!attribute workflow_namespace + # @return [String] Namespace this activity is on. 
+ # @!attribute workflow_run_id + # @return [String] Workflow run ID that started this activity. + # @!attribute workflow_type + # @return [String] Workflow type name that started this activity. + Info = Struct.new( + :activity_id, + :activity_type, + :attempt, + :current_attempt_scheduled_time, + :heartbeat_details, + :heartbeat_timeout, + :local?, + :schedule_to_close_timeout, + :scheduled_time, + :start_to_close_timeout, + :started_time, + :task_queue, + :task_token, + :workflow_id, + :workflow_namespace, + :workflow_run_id, + :workflow_type, + keyword_init: true + ) + end +end diff --git a/temporalio/lib/temporalio/cancellation.rb b/temporalio/lib/temporalio/cancellation.rb new file mode 100644 index 00000000..faddc977 --- /dev/null +++ b/temporalio/lib/temporalio/cancellation.rb @@ -0,0 +1,150 @@ +# frozen_string_literal: true + +require 'temporalio/error' + +module Temporalio + # Cancellation representation, often known as a "cancellation token". This is used by clients, activities, and + # workflows to represent cancellation in a thread/fiber-safe way. + class Cancellation + # Create a new cancellation. + # + # This is usually created and destructured into a tuple with the second value being the proc to invoke to cancel. + # For example: `cancel, cancel_proc = Temporalio::Cancellation.new`. This is done via {to_ary} which returns a proc + # to issue the cancellation in the second value of the array. + # + # @param parents [Array] Parent cancellations to link this one to. This cancellation will be canceled + # when any parents are canceled. + def initialize(*parents) + @canceled = false + @canceled_reason = nil + @canceled_mutex = Mutex.new + @canceled_cond_var = nil + @cancel_callbacks = [] + @shield_depth = 0 + @shield_pending_cancel = nil # When pending, set as single-reason array + parents.each { |p| p.add_cancel_callback { on_cancel(reason: p.canceled_reason) } } + end + + # @return [Boolean] Whether this cancellation is canceled. + def canceled? 
+ @canceled_mutex.synchronize { @canceled } + end + + # @return [String, nil] Reason for cancellation. Can be nil if not canceled or no reason provided. + def canceled_reason + @canceled_mutex.synchronize { @canceled_reason } + end + + # @return [Boolean] Whether a cancel is pending but currently shielded. + def pending_canceled? + @canceled_mutex.synchronize { !@shield_pending_cancel.nil? } + end + + # @return [String, nil] Reason for pending cancellation. Can be nil if not pending canceled or no reason provided. + def pending_canceled_reason + @canceled_mutex.synchronize { @shield_pending_cancel&.first } + end + + # Raise an error if this cancellation is canceled. + # + # @param err [Exception] Error to raise. + def check!(err = Error::CanceledError.new('Canceled')) + raise err if canceled? + end + + # @return [Array(Cancellation, Proc)] Self and a proc to call to cancel that accepts an optional string `reason` + # keyword argument. As a general practice, only the creator of the cancellation should be the one controlling its + # cancellation. + def to_ary + [self, proc { |reason: nil| on_cancel(reason:) }] + end + + # Wait on this to be canceled. This is backed by a {::ConditionVariable}. + def wait + @canceled_mutex.synchronize do + break if @canceled + + # Add cond var if not present + if @canceled_cond_var.nil? + @canceled_cond_var = ConditionVariable.new + @cancel_callbacks.push(proc { @canceled_mutex.synchronize { @canceled_cond_var.broadcast } }) + end + + # Wait on it + @canceled_cond_var.wait(@canceled_mutex) + end + end + + # Shield the given block from cancellation. This means any cancellation that occurs while shielded code is running + # will be set as "pending" and will not take effect until after the block completes. If shield calls are nested, the + # cancellation remains "pending" until the last shielded block ends. + # + # @yield Requires a block to run under shield. + # @return [Object] Result of the block. 
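The destructure-into-a-tuple pattern from the `Cancellation` docs above, combined with `shield` and `wait`, can be sketched as follows (the sleep-based trigger is purely illustrative):

```ruby
require 'temporalio/cancellation'

# Create a cancellation and the proc that triggers it. As a general practice,
# only the creator should control cancellation via this proc.
cancellation, cancel_proc = Temporalio::Cancellation.new

Thread.new do
  sleep 0.1
  cancel_proc.call(reason: 'no longer needed')
end

# Work inside `shield` runs to completion; a cancel arriving meanwhile is
# held as "pending" and only takes effect once the shielded block exits.
cancellation.shield do
  # ... critical section that must not be interrupted ...
end

cancellation.wait                  # blocks until canceled (ConditionVariable-backed)
puts cancellation.canceled_reason  # e.g. "no longer needed"
```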
+ def shield + raise ArgumentError, 'Block required' unless block_given? + + @canceled_mutex.synchronize { @shield_depth += 1 } + yield + ensure + callbacks_to_run = @canceled_mutex.synchronize do + @shield_depth -= 1 + if @shield_depth.zero? && @shield_pending_cancel + reason = @shield_pending_cancel.first + @shield_pending_cancel = nil + prepare_cancel(reason:) + end + end + callbacks_to_run&.each(&:call) + end + + # Advanced call to invoke a proc or block on cancel. The callback usually needs to be quick and thread-safe since it + # is called in the canceler's thread. Usually the callback will just be something like pushing on a queue or + # signaling a condition variable. If the cancellation is already canceled, the callback is called inline before + # returning. + # + # @note WARNING: This is advanced API, users should use {wait} or similar. + # + # @param proc [Proc, nil] Proc to invoke, or nil to use block. + # @yield Accepts block if not using `proc`. + def add_cancel_callback(proc = nil, &block) + raise ArgumentError, 'Must provide proc or block' unless proc || block + raise ArgumentError, 'Cannot provide both proc and block' if proc && block + raise ArgumentError, 'Parameter not a proc' if proc && !proc.is_a?(Proc) + + callback_to_run_immediately = @canceled_mutex.synchronize do + callback = proc || block + @cancel_callbacks.push(proc || block) + break nil unless @canceled + + callback + end + callback_to_run_immediately&.call + nil + end + + private + + def on_cancel(reason:) + callbacks_to_run = @canceled_mutex.synchronize do + # If we're shielding, set as pending and return nil + if @shield_depth.positive? 
+ @shield_pending_cancel = [reason] + nil + else + prepare_cancel(reason:) + end + end + callbacks_to_run&.each(&:call) + end + + # Expects to be called inside mutex by caller, returns callbacks to run + def prepare_cancel(reason:) + return nil if @canceled + + @canceled = true + @canceled_reason = reason + @cancel_callbacks.dup + end + end +end diff --git a/temporalio/lib/temporalio/client.rb b/temporalio/lib/temporalio/client.rb index 2851ce87..8a090335 100644 --- a/temporalio/lib/temporalio/client.rb +++ b/temporalio/lib/temporalio/client.rb @@ -1,10 +1,10 @@ # frozen_string_literal: true require 'google/protobuf/well_known_types' +require 'logger' require 'temporalio/api' require 'temporalio/client/async_activity_handle' require 'temporalio/client/connection' -require 'temporalio/client/implementation' require 'temporalio/client/interceptor' require 'temporalio/client/workflow_execution' require 'temporalio/client/workflow_execution_count' @@ -13,6 +13,7 @@ require 'temporalio/common_enums' require 'temporalio/converters' require 'temporalio/error' +require 'temporalio/internal/client/implementation' require 'temporalio/retry_policy' require 'temporalio/runtime' require 'temporalio/search_attributes' @@ -35,6 +36,7 @@ class Client :namespace, :data_converter, :interceptors, + :logger, :default_workflow_query_reject_condition, keyword_init: true ) @@ -53,6 +55,8 @@ class Client # client calls. The earlier interceptors wrap the later ones. Any interceptors that also implement worker # interceptor will be used as worker interceptors too so they should not be given separately when creating a # worker. + # @param logger [Logger] Logger to use for this client and any workers made from this client. Defaults to stdout + # with warn level. Callers setting this logger are responsible for closing it. # @param default_workflow_query_reject_condition [WorkflowQueryRejectCondition, nil] Default rejection # condition for workflow queries if not set during query. 
See {WorkflowHandle.query} for details on the # rejection condition. @@ -79,6 +83,7 @@ def self.connect( tls: false, data_converter: Converters::DataConverter.default, interceptors: [], + logger: Logger.new($stdout, level: Logger::WARN), default_workflow_query_reject_condition: nil, rpc_metadata: {}, rpc_retry: Connection::RPCRetryOptions.new, @@ -104,6 +109,7 @@ def self.connect( namespace:, data_converter:, interceptors:, + logger:, default_workflow_query_reject_condition: ) end @@ -119,10 +125,11 @@ def self.connect( # @param namespace [String] Namespace to use for client calls. # @param data_converter [Converters::DataConverter] Data converter to use for all data conversions to/from payloads. # @param interceptors [Array] Set of interceptors that are chained together to allow intercepting of - # client calls. The earlier interceptors wrap the later ones. - # - # Any interceptors that also implement worker interceptor will be used as worker interceptors too so they should - # not be given separately when creating a worker. + # client calls. The earlier interceptors wrap the later ones. Any interceptors that also implement worker + # interceptor will be used as worker interceptors too so they should not be given separately when creating a + # worker. + # @param logger [Logger] Logger to use for this client and any workers made from this client. Defaults to stdout + # with warn level. Callers setting this logger are responsible for closing it. # @param default_workflow_query_reject_condition [WorkflowQueryRejectCondition, nil] Default rejection condition for # workflow queries if not set during query. See {WorkflowHandle.query} for details on the rejection condition. 
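Given the new `logger:` option introduced above, connecting a client with an explicit logger might look like this sketch (host and namespace values are illustrative; per the docs, callers who set the logger are responsible for closing it):

```ruby
require 'logger'
require 'temporalio/client'

# Connect with an explicit logger; the default is stdout at WARN level.
# Workers made from this client will reuse the same logger.
client = Temporalio::Client.connect(
  'localhost:7233',
  'my-namespace',
  logger: Logger.new($stdout, level: Logger::INFO)
)
```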
# @@ -132,6 +139,7 @@ def initialize( namespace:, data_converter: DataConverter.default, interceptors: [], + logger: Logger.new($stdout, level: Logger::WARN), default_workflow_query_reject_condition: nil ) @options = Options.new( @@ -139,10 +147,11 @@ def initialize( namespace:, data_converter:, interceptors:, + logger:, default_workflow_query_reject_condition: ).freeze # Initialize interceptors - @impl = interceptors.reverse_each.reduce(Implementation.new(self)) do |acc, int| + @impl = interceptors.reverse_each.reduce(Internal::Client::Implementation.new(self)) do |acc, int| int.intercept_client(acc) end end @@ -268,7 +277,7 @@ def start_workflow( # # @return [Object] Successful result of the workflow. # @raise [Error::WorkflowAlreadyStartedError] Workflow already exists. - # @raise [Error::WorkflowFailureError] Workflow failed with +cause+ as the cause. + # @raise [Error::WorkflowFailedError] Workflow failed with +cause+ as the cause. # @raise [Error::RPCError] RPC error from call. def execute_workflow( workflow, diff --git a/temporalio/lib/temporalio/client/async_activity_handle.rb b/temporalio/lib/temporalio/client/async_activity_handle.rb index 19406e0d..3a28fa60 100644 --- a/temporalio/lib/temporalio/client/async_activity_handle.rb +++ b/temporalio/lib/temporalio/client/async_activity_handle.rb @@ -34,7 +34,12 @@ def heartbeat( rpc_metadata: nil, rpc_timeout: nil ) - raise NotImplementedError + @client._impl.heartbeat_async_activity(Interceptor::HeartbeatAsyncActivityInput.new( + task_token_or_id_reference:, + details:, + rpc_metadata:, + rpc_timeout: + )) end # Complete the activity. @@ -47,7 +52,12 @@ def complete( rpc_metadata: nil, rpc_timeout: nil ) - raise NotImplementedError + @client._impl.complete_async_activity(Interceptor::CompleteAsyncActivityInput.new( + task_token_or_id_reference:, + result:, + rpc_metadata:, + rpc_timeout: + )) end # Fail the activity. 
@@ -62,7 +72,13 @@ def fail( rpc_metadata: nil, rpc_timeout: nil ) - raise NotImplementedError + @client._impl.fail_async_activity(Interceptor::FailAsyncActivityInput.new( + task_token_or_id_reference:, + error:, + last_heartbeat_details:, + rpc_metadata:, + rpc_timeout: + )) end # Report the activity as cancelled. @@ -70,12 +86,24 @@ def fail( # @param details [Array] Cancellation details. # @param rpc_metadata [Hash, nil] Headers to include on the RPC call. # @param rpc_timeout [Float, nil] Number of seconds before timeout. + # @raise [AsyncActivityCanceledError] If the activity has been canceled. def report_cancellation( *details, rpc_metadata: nil, rpc_timeout: nil ) - raise NotImplementedError + @client._impl.report_cancellation_async_activity(Interceptor::ReportCancellationAsyncActivityInput.new( + task_token_or_id_reference:, + details:, + rpc_metadata:, + rpc_timeout: + )) + end + + private + + def task_token_or_id_reference + @task_token || @id_reference or raise end end end diff --git a/temporalio/lib/temporalio/client/connection.rb b/temporalio/lib/temporalio/client/connection.rb index f584d9d5..3ada7716 100644 --- a/temporalio/lib/temporalio/client/connection.rb +++ b/temporalio/lib/temporalio/client/connection.rb @@ -4,6 +4,7 @@ require 'temporalio/client/connection/cloud_service' require 'temporalio/client/connection/operator_service' require 'temporalio/client/connection/workflow_service' +require 'temporalio/internal/bridge' require 'temporalio/internal/bridge/client' require 'temporalio/runtime' require 'temporalio/version' @@ -219,6 +220,8 @@ def _core_client private def new_core_client + Internal::Bridge.assert_fiber_compatibility! 
+ options = Internal::Bridge::Client::Options.new( target_host: @options.target_host, client_name: 'temporal-ruby', diff --git a/temporalio/lib/temporalio/client/implementation.rb b/temporalio/lib/temporalio/client/implementation.rb deleted file mode 100644 index 0b5964e0..00000000 --- a/temporalio/lib/temporalio/client/implementation.rb +++ /dev/null @@ -1,389 +0,0 @@ -# frozen_string_literal: true - -require 'google/protobuf/well_known_types' -require 'temporalio/api' -require 'temporalio/client/activity_id_reference' -require 'temporalio/client/async_activity_handle' -require 'temporalio/client/connection' -require 'temporalio/client/interceptor' -require 'temporalio/client/workflow_execution' -require 'temporalio/client/workflow_execution_count' -require 'temporalio/client/workflow_handle' -require 'temporalio/common_enums' -require 'temporalio/converters' -require 'temporalio/error' -require 'temporalio/error/failure' -require 'temporalio/internal/proto_utils' -require 'temporalio/runtime' -require 'temporalio/search_attributes' - -module Temporalio - class Client - # @!visibility private - class Implementation < Interceptor::Outbound - def initialize(client) - super(nil) - @client = client - end - - # @!visibility private - def start_workflow(input) - # TODO(cretz): Signal/update with start - req = Api::WorkflowService::V1::StartWorkflowExecutionRequest.new( - request_id: SecureRandom.uuid, - namespace: @client.namespace, - workflow_type: Api::Common::V1::WorkflowType.new(name: input.workflow.to_s), - workflow_id: input.workflow_id, - task_queue: Api::TaskQueue::V1::TaskQueue.new(name: input.task_queue.to_s), - input: @client.data_converter.to_payloads(input.args), - workflow_execution_timeout: Internal::ProtoUtils.seconds_to_duration(input.execution_timeout), - workflow_run_timeout: Internal::ProtoUtils.seconds_to_duration(input.run_timeout), - workflow_task_timeout: Internal::ProtoUtils.seconds_to_duration(input.task_timeout), - identity: 
@client.connection.identity, - workflow_id_reuse_policy: input.id_reuse_policy, - workflow_id_conflict_policy: input.id_conflict_policy, - retry_policy: input.retry_policy&.to_proto, - cron_schedule: input.cron_schedule, - memo: Internal::ProtoUtils.memo_to_proto(input.memo, @client.data_converter), - search_attributes: input.search_attributes&.to_proto, - workflow_start_delay: Internal::ProtoUtils.seconds_to_duration(input.start_delay), - request_eager_execution: input.request_eager_start, - header: input.headers - ) - - # Send request - begin - resp = @client.workflow_service.start_workflow_execution( - req, - rpc_retry: true, - rpc_metadata: input.rpc_metadata, - rpc_timeout: input.rpc_timeout - ) - rescue Error::RPCError => e - # Unpack and raise already started if that's the error, otherwise default raise - if e.code == Error::RPCError::Code::ALREADY_EXISTS && e.grpc_status.details.first - details = e.grpc_status.details.first.unpack(Api::ErrorDetails::V1::WorkflowExecutionAlreadyStartedFailure) - if details - raise Error::WorkflowAlreadyStartedError.new( - workflow_id: req.workflow_id, - workflow_type: req.workflow_type.name, - run_id: details.run_id - ) - end - end - raise - end - - # Return handle - WorkflowHandle.new( - client: @client, - id: input.workflow_id, - run_id: nil, - result_run_id: resp.run_id, - first_execution_run_id: resp.run_id - ) - end - - # @!visibility private - def list_workflows(input) - Enumerator.new do |yielder| - req = Api::WorkflowService::V1::ListWorkflowExecutionsRequest.new( - namespace: @client.namespace, - query: input.query || '' - ) - loop do - resp = @client.workflow_service.list_workflow_executions( - req, - rpc_retry: true, - rpc_metadata: input.rpc_metadata, - rpc_timeout: input.rpc_timeout - ) - resp.executions.each { |raw_info| yielder << WorkflowExecution.new(raw_info, @client.data_converter) } - break if resp.next_page_token.empty? 
- - req.next_page_token = resp.next_page_token - end - end - end - - # @!visibility private - def count_workflows(input) - resp = @client.workflow_service.count_workflow_executions( - Api::WorkflowService::V1::CountWorkflowExecutionsRequest.new( - namespace: @client.namespace, - query: input.query || '' - ), - rpc_retry: true, - rpc_metadata: input.rpc_metadata, - rpc_timeout: input.rpc_timeout - ) - WorkflowExecutionCount.new( - resp.count, - resp.groups.map do |group| - WorkflowExecutionCount::AggregationGroup.new( - group.count, - group.group_values.map { |payload| SearchAttributes.value_from_payload(payload) } - ) - end - ) - end - - # @!visibility private - def describe_workflow(input) - resp = @client.workflow_service.describe_workflow_execution( - Api::WorkflowService::V1::DescribeWorkflowExecutionRequest.new( - namespace: @client.namespace, - execution: Api::Common::V1::WorkflowExecution.new( - workflow_id: input.workflow_id, - run_id: input.run_id || '' - ) - ), - rpc_retry: true, - rpc_metadata: input.rpc_metadata, - rpc_timeout: input.rpc_timeout - ) - WorkflowExecution::Description.new(resp, @client.data_converter) - end - - # @!visibility private - def fetch_workflow_history_events(input) - Enumerator.new do |yielder| - req = Api::WorkflowService::V1::GetWorkflowExecutionHistoryRequest.new( - namespace: @client.namespace, - execution: Api::Common::V1::WorkflowExecution.new( - workflow_id: input.workflow_id, - run_id: input.run_id || '' - ), - wait_new_event: input.wait_new_event, - history_event_filter_type: input.event_filter_type, - skip_archival: input.skip_archival - ) - loop do - resp = @client.workflow_service.get_workflow_execution_history( - req, - rpc_retry: true, - rpc_metadata: input.rpc_metadata, - rpc_timeout: input.rpc_timeout - ) - resp.history&.events&.each { |event| yielder << event } - break if resp.next_page_token.empty? 
-
-            req.next_page_token = resp.next_page_token
-          end
-        end
-      end
-
-      # @!visibility private
-      def signal_workflow(input)
-        @client.workflow_service.signal_workflow_execution(
-          Api::WorkflowService::V1::SignalWorkflowExecutionRequest.new(
-            namespace: @client.namespace,
-            workflow_execution: Api::Common::V1::WorkflowExecution.new(
-              workflow_id: input.workflow_id,
-              run_id: input.run_id || ''
-            ),
-            signal_name: input.signal,
-            input: @client.data_converter.to_payloads(input.args),
-            header: input.headers,
-            identity: @client.connection.identity,
-            request_id: SecureRandom.uuid
-          ),
-          rpc_retry: true,
-          rpc_metadata: input.rpc_metadata,
-          rpc_timeout: input.rpc_timeout
-        )
-        nil
-      end
-
-      # @!visibility private
-      def query_workflow(input)
-        begin
-          resp = @client.workflow_service.query_workflow(
-            Api::WorkflowService::V1::QueryWorkflowRequest.new(
-              namespace: @client.namespace,
-              execution: Api::Common::V1::WorkflowExecution.new(
-                workflow_id: input.workflow_id,
-                run_id: input.run_id || ''
-              ),
-              query: Api::Query::V1::WorkflowQuery.new(
-                query_type: input.query,
-                query_args: @client.data_converter.to_payloads(input.args),
-                header: input.headers
-              ),
-              query_reject_condition: input.reject_condition || 0
-            ),
-            rpc_retry: true,
-            rpc_metadata: input.rpc_metadata,
-            rpc_timeout: input.rpc_timeout
-          )
-        rescue Error::RPCError => e
-          # If the status is INVALID_ARGUMENT, we can assume it's a query failed
-          # error
-          raise Error::WorkflowQueryFailedError, e.message if e.code == Error::RPCError::Code::INVALID_ARGUMENT
-
-          raise
-        end
-        unless resp.query_rejected.nil?
-          raise Error::WorkflowQueryRejectedError.new(status: Internal::ProtoUtils.enum_to_int(
-            Api::Enums::V1::WorkflowExecutionStatus, resp.query_rejected.status
-          ))
-        end
-
-        results = @client.data_converter.from_payloads(resp.query_result)
-        warn("Expected 0 or 1 query result, got #{results.size}") if results.size > 1
-        results&.first
-      end
-
-      # @!visibility private
-      def start_workflow_update(input)
-        if input.wait_for_stage == WorkflowUpdateWaitStage::ADMITTED
-          raise ArgumentError, 'ADMITTED wait stage not supported'
-        end
-
-        req = Api::WorkflowService::V1::UpdateWorkflowExecutionRequest.new(
-          namespace: @client.namespace,
-          workflow_execution: Api::Common::V1::WorkflowExecution.new(
-            workflow_id: input.workflow_id,
-            run_id: input.run_id || ''
-          ),
-          request: Api::Update::V1::Request.new(
-            meta: Api::Update::V1::Meta.new(
-              update_id: input.update_id,
-              identity: @client.connection.identity
-            ),
-            input: Api::Update::V1::Input.new(
-              name: input.update,
-              args: @client.data_converter.to_payloads(input.args),
-              header: input.headers
-            )
-          ),
-          wait_policy: Api::Update::V1::WaitPolicy.new(
-            lifecycle_stage: input.wait_for_stage
-          )
-        )
-
-        # Repeatedly try to invoke start until the update reaches user-provided
-        # wait stage or is at least ACCEPTED (as of the time of this writing,
-        # the user cannot specify sooner than ACCEPTED)
-        # @type var resp: untyped
-        resp = nil
-        loop do
-          resp = @client.workflow_service.update_workflow_execution(
-            req,
-            rpc_retry: true,
-            rpc_metadata: input.rpc_metadata,
-            rpc_timeout: input.rpc_timeout
-          )
-
-          # We're only done if the response stage is after the requested stage
-          # or the response stage is accepted
-          break if resp.stage >= req.wait_policy.lifecycle_stage || resp.stage >= WorkflowUpdateWaitStage::ACCEPTED
-        rescue Error::RPCError => e
-          # Deadline exceeded or cancel is a special error type
-          if e.code == Error::RPCError::Code::DEADLINE_EXCEEDED || e.code == Error::RPCError::Code::CANCELLED
-            raise Error::WorkflowUpdateRPCTimeoutOrCanceledError
-          end
-
-          raise
-        end
-
-        # If the user wants to wait until completed, we must poll until outcome
-        # if not already there
-        if input.wait_for_stage == WorkflowUpdateWaitStage::COMPLETED && !resp.outcome
-          resp.outcome = @client._impl.poll_workflow_update(PollWorkflowUpdateInput.new(
-            workflow_id: input.workflow_id,
-            run_id: input.run_id,
-            update_id: input.update_id,
-            rpc_metadata: input.rpc_metadata,
-            rpc_timeout: input.rpc_timeout
-          ))
-        end
-
-        WorkflowUpdateHandle.new(
-          client: @client,
-          id: input.update_id,
-          workflow_id: input.workflow_id,
-          workflow_run_id: input.run_id,
-          known_outcome: resp.outcome
-        )
-      end
-
-      # @!visibility private
-      def poll_workflow_update(input)
-        req = Api::WorkflowService::V1::PollWorkflowExecutionUpdateRequest.new(
-          namespace: @client.namespace,
-          update_ref: Api::Update::V1::UpdateRef.new(
-            workflow_execution: Api::Common::V1::WorkflowExecution.new(
-              workflow_id: input.workflow_id,
-              run_id: input.run_id || ''
-            ),
-            update_id: input.update_id
-          ),
-          identity: @client.connection.identity,
-          wait_policy: Api::Update::V1::WaitPolicy.new(
-            lifecycle_stage: WorkflowUpdateWaitStage::COMPLETED
-          )
-        )
-
-        # Continue polling as long as we have no outcome
-        loop do
-          resp = @client.workflow_service.poll_workflow_execution_update(
-            req,
-            rpc_retry: true,
-            rpc_metadata: input.rpc_metadata,
-            rpc_timeout: input.rpc_timeout
-          )
-          return resp.outcome if resp.outcome
-        rescue Error::RPCError => e
-          # Deadline exceeded or cancel is a special error type
-          if e.code == Error::RPCError::Code::DEADLINE_EXCEEDED || e.code == Error::RPCError::Code::CANCELLED
-            raise Error::WorkflowUpdateRPCTimeoutOrCanceledError
-          end
-
-          raise
-        end
-      end
-
-      # @!visibility private
-      def cancel_workflow(input)
-        @client.workflow_service.request_cancel_workflow_execution(
-          Api::WorkflowService::V1::RequestCancelWorkflowExecutionRequest.new(
-            namespace: @client.namespace,
-            workflow_execution: Api::Common::V1::WorkflowExecution.new(
-              workflow_id: input.workflow_id,
-              run_id: input.run_id || ''
-            ),
-            first_execution_run_id: input.first_execution_run_id,
-            identity: @client.connection.identity,
-            request_id: SecureRandom.uuid
-          ),
-          rpc_retry: true,
-          rpc_metadata: input.rpc_metadata,
-          rpc_timeout: input.rpc_timeout
-        )
-        nil
-      end
-
-      # @!visibility private
-      def terminate_workflow(input)
-        @client.workflow_service.terminate_workflow_execution(
-          Api::WorkflowService::V1::TerminateWorkflowExecutionRequest.new(
-            namespace: @client.namespace,
-            workflow_execution: Api::Common::V1::WorkflowExecution.new(
-              workflow_id: input.workflow_id,
-              run_id: input.run_id || ''
-            ),
-            reason: input.reason || '',
-            first_execution_run_id: input.first_execution_run_id,
-            details: @client.data_converter.to_payloads(input.details),
-            identity: @client.connection.identity
-          ),
-          rpc_retry: true,
-          rpc_metadata: input.rpc_metadata,
-          rpc_timeout: input.rpc_timeout
-        )
-        nil
-      end
-    end
-  end
-end
diff --git a/temporalio/lib/temporalio/client/interceptor.rb b/temporalio/lib/temporalio/client/interceptor.rb
index dfec9f21..163a34fe 100644
--- a/temporalio/lib/temporalio/client/interceptor.rb
+++ b/temporalio/lib/temporalio/client/interceptor.rb
@@ -148,6 +148,43 @@ def intercept_client(next_interceptor)
         keyword_init: true
       )
 
+      # Input for {Outbound.heartbeat_async_activity}.
+      HeartbeatAsyncActivityInput = Struct.new(
+        :task_token_or_id_reference,
+        :details,
+        :rpc_metadata,
+        :rpc_timeout,
+        keyword_init: true
+      )
+
+      # Input for {Outbound.complete_async_activity}.
+      CompleteAsyncActivityInput = Struct.new(
+        :task_token_or_id_reference,
+        :result,
+        :rpc_metadata,
+        :rpc_timeout,
+        keyword_init: true
+      )
+
+      # Input for {Outbound.fail_async_activity}.
+      FailAsyncActivityInput = Struct.new(
+        :task_token_or_id_reference,
+        :error,
+        :last_heartbeat_details,
+        :rpc_metadata,
+        :rpc_timeout,
+        keyword_init: true
+      )
+
+      # Input for {Outbound.report_cancellation_async_activity}.
+      ReportCancellationAsyncActivityInput = Struct.new(
+        :task_token_or_id_reference,
+        :details,
+        :rpc_metadata,
+        :rpc_timeout,
+        keyword_init: true
+      )
+
       # Outbound interceptor for intercepting client calls. This should be extended by users needing to intercept client
       # actions.
       class Outbound
@@ -245,6 +282,34 @@ def cancel_workflow(input)
         def terminate_workflow(input)
           next_interceptor.terminate_workflow(input)
         end
+
+        # Called for every {AsyncActivityHandle.heartbeat} call.
+        #
+        # @param input [HeartbeatAsyncActivityInput] Input.
+        def heartbeat_async_activity(input)
+          next_interceptor.heartbeat_async_activity(input)
+        end
+
+        # Called for every {AsyncActivityHandle.complete} call.
+        #
+        # @param input [CompleteAsyncActivityInput] Input.
+        def complete_async_activity(input)
+          next_interceptor.complete_async_activity(input)
+        end
+
+        # Called for every {AsyncActivityHandle.fail} call.
+        #
+        # @param input [FailAsyncActivityInput] Input.
+        def fail_async_activity(input)
+          next_interceptor.fail_async_activity(input)
+        end
+
+        # Called for every {AsyncActivityHandle.report_cancellation} call.
+        #
+        # @param input [ReportCancellationAsyncActivityInput] Input.
+        def report_cancellation_async_activity(input)
+          next_interceptor.report_cancellation_async_activity(input)
+        end
       end
     end
   end
diff --git a/temporalio/lib/temporalio/client/workflow_handle.rb b/temporalio/lib/temporalio/client/workflow_handle.rb
index 22e681e1..1888e919 100644
--- a/temporalio/lib/temporalio/client/workflow_handle.rb
+++ b/temporalio/lib/temporalio/client/workflow_handle.rb
@@ -1,5 +1,6 @@
 # frozen_string_literal: true
 
+require 'securerandom'
 require 'temporalio/api'
 require 'temporalio/client/interceptor'
 require 'temporalio/client/workflow_update_handle'
@@ -68,7 +69,7 @@ def initialize(client:, id:, run_id:, result_run_id:, first_execution_run_id:)
       #
       # @return [Object] Result of the workflow after being converted by the data converter.
       #
-      # @raise [Error::WorkflowFailureError] Workflow failed with +cause+ as the cause.
+      # @raise [Error::WorkflowFailedError] Workflow failed with +cause+ as the cause.
       # @raise [Error::WorkflowContinuedAsNewError] Workflow continued as new and +follow_runs+ is +false+.
       # @raise [Error::RPCError] RPC error from call.
       def result(
@@ -102,16 +103,16 @@ def result(
           hist_run_id = attrs.new_execution_run_id
           next if follow_runs && hist_run_id && !hist_run_id.empty?
-          raise Error::WorkflowFailureError.new, cause: @client.data_converter.from_failure(attrs.failure)
+          raise Error::WorkflowFailedError.new, cause: @client.data_converter.from_failure(attrs.failure)
         when :EVENT_TYPE_WORKFLOW_EXECUTION_CANCELED
           attrs = event.workflow_execution_canceled_event_attributes
-          raise Error::WorkflowFailureError.new, cause: Error::CanceledError.new(
+          raise Error::WorkflowFailedError.new, cause: Error::CanceledError.new(
             'Workflow execution canceled',
             details: @client.data_converter.from_payloads(attrs&.details)
           )
         when :EVENT_TYPE_WORKFLOW_EXECUTION_TERMINATED
           attrs = event.workflow_execution_terminated_event_attributes
-          raise Error::WorkflowFailureError.new, cause: Error::TerminatedError.new(
+          raise Error::WorkflowFailedError.new, cause: Error::TerminatedError.new(
             Internal::ProtoUtils.string_or(attrs.reason, 'Workflow execution terminated'),
             details: @client.data_converter.from_payloads(attrs&.details)
           )
@@ -120,7 +121,7 @@ def result(
           hist_run_id = attrs.new_execution_run_id
           next if follow_runs && hist_run_id && !hist_run_id.empty?
 
-          raise Error::WorkflowFailureError.new, cause: Error::TimeoutError.new(
+          raise Error::WorkflowFailedError.new, cause: Error::TimeoutError.new(
             'Workflow execution timed out',
             type: Api::Enums::V1::TimeoutType::TIMEOUT_TYPE_START_TO_CLOSE,
             last_heartbeat_details: []
@@ -302,7 +303,7 @@ def query(
       # @return [WorkflowUpdateHandle] The update handle.
       #
       # @raise [Error::WorkflowUpdateRPCTimeoutOrCanceledError] This update call timed out or was canceled. This doesn't
-      #   mean the update itself was timed out or cancelled.
+      #   mean the update itself was timed out or canceled.
       # @raise [Error::RPCError] RPC error from call.
       #
       # @note Handles created as a result of {Client.start_workflow} will send updates the latest workflow with the same
@@ -342,7 +343,7 @@ def start_update(
       #
       # @raise [Error::WorkflowUpdateFailedError] If the update failed.
       # @raise [Error::WorkflowUpdateRPCTimeoutOrCanceledError] This update call timed out or was canceled. This doesn't
-      #   mean the update itself was timed out or cancelled.
+      #   mean the update itself was timed out or canceled.
       # @raise [Error::RPCError] RPC error from call.
       #
       # @note Handles created as a result of {Client.start_workflow} will send updates the latest workflow with the same
diff --git a/temporalio/lib/temporalio/error.rb b/temporalio/lib/temporalio/error.rb
index 9209e90d..06079c2f 100644
--- a/temporalio/lib/temporalio/error.rb
+++ b/temporalio/lib/temporalio/error.rb
@@ -33,7 +33,7 @@ def self._with_backtrace_and_cause(err, backtrace:, cause:)
   end
 
   # Error that is returned from when a workflow is unsuccessful.
-  class WorkflowFailureError < Error
+  class WorkflowFailedError < Error
     # @!visibility private
     def initialize
      super('Workflow failed')
@@ -76,7 +76,7 @@ def initialize
     end
   end
 
-  # Error that occurs when update RPC call times out or is cancelled.
+  # Error that occurs when update RPC call times out or is canceled.
   #
   # @note This is not related to any general concept of timing out or cancelling a running update, this is only
   #   related to the client call itself.
@@ -87,6 +87,14 @@ def initialize
     end
   end
 
+  # Error that occurs when an async activity handle tries to heartbeat and the activity is marked as canceled.
+  class AsyncActivityCanceledError < Error
+    # @!visibility private
+    def initialize
+      super('Activity canceled')
+    end
+  end
+
   # Error raised by a client for a general RPC failure.
   class RPCError < Error
     # @return [Code] Status code for the error.
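
The new `Outbound` interceptor methods added above all follow the same delegation pattern: each call forwards its input struct to `next_interceptor`, so a custom interceptor only overrides the calls it cares about. A minimal self-contained sketch of that pattern follows; the class names and the input struct here are illustrative stand-ins, not the SDK's actual classes:

```ruby
# Hypothetical stand-in for the terminal interceptor that performs the real call.
HeartbeatInput = Struct.new(:task_token_or_id_reference, :details, keyword_init: true)

class RootOutboundSketch
  def heartbeat_async_activity(input)
    # In the real SDK this would issue the RPC; here we just return a string.
    "heartbeat sent for #{input.task_token_or_id_reference}"
  end
end

# A custom interceptor wraps the next one and overrides a single call,
# delegating everything after its own logic runs.
class LoggingOutboundSketch
  def initialize(next_interceptor)
    @next_interceptor = next_interceptor
  end

  def heartbeat_async_activity(input)
    puts "about to heartbeat with details #{input.details.inspect}"
    @next_interceptor.heartbeat_async_activity(input)
  end
end

chain = LoggingOutboundSketch.new(RootOutboundSketch.new)
result = chain.heartbeat_async_activity(
  HeartbeatInput.new(task_token_or_id_reference: 'token-1', details: [1, 2])
)
# result == "heartbeat sent for token-1"
```

Because every method on the base `Outbound` class already delegates, an interceptor subclass that overrides nothing is a transparent pass-through.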
diff --git a/temporalio/lib/temporalio/error/failure.rb b/temporalio/lib/temporalio/error/failure.rb
index 1cf35f31..4da19915 100644
--- a/temporalio/lib/temporalio/error/failure.rb
+++ b/temporalio/lib/temporalio/error/failure.rb
@@ -72,7 +72,7 @@ class CanceledError < Failure
       attr_reader :details
 
       # @!visibility private
-      def initialize(message, details:)
+      def initialize(message, details: [])
         super(message)
         @details = details
       end
diff --git a/temporalio/lib/temporalio/internal/bridge.rb b/temporalio/lib/temporalio/internal/bridge.rb
index 8b532299..2becc613 100644
--- a/temporalio/lib/temporalio/internal/bridge.rb
+++ b/temporalio/lib/temporalio/internal/bridge.rb
@@ -2,16 +2,19 @@
 
 module Temporalio
   module Internal
-    # @!visibility private
     module Bridge
-      # @!visibility private
-      def self.async_call
-        queue = Queue.new
-        yield queue
-        result = queue.pop
-        raise result if result.is_a?(Exception)
+      def self.assert_fiber_compatibility!
+        return unless Fiber.current_scheduler && !fibers_supported
 
-        result
+        raise 'Temporal SDK only supports fibers with Ruby 3.3 and newer, ' \
+              'see https://github.com/temporalio/sdk-ruby/issues/162'
+      end
+
+      def self.fibers_supported
+        # We do not allow fibers on < 3.3 due to a bug we still need to dig
+        # into: https://github.com/temporalio/sdk-ruby/issues/162
+        major, minor = RUBY_VERSION.split('.').take(2).map(&:to_i)
+        !major.nil? && major >= 3 && !minor.nil? && minor >= 3
       end
     end
   end
diff --git a/temporalio/lib/temporalio/internal/bridge/api.rb b/temporalio/lib/temporalio/internal/bridge/api.rb
new file mode 100644
index 00000000..cf767bd2
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api.rb
@@ -0,0 +1,3 @@
+# frozen_string_literal: true
+
+require 'temporalio/internal/bridge/api/core_interface'
diff --git a/temporalio/lib/temporalio/internal/bridge/api/activity_result/activity_result.rb b/temporalio/lib/temporalio/internal/bridge/api/activity_result/activity_result.rb
new file mode 100644
index 00000000..4e762ae0
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/activity_result/activity_result.rb
@@ -0,0 +1,34 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/activity_result/activity_result.proto
+
+require 'google/protobuf'
+
+require 'google/protobuf/duration_pb'
+require 'google/protobuf/timestamp_pb'
+require 'temporalio/api/common/v1/message'
+require 'temporalio/api/failure/v1/message'
+
+
+descriptor_data = "\n7temporal/sdk/core/activity_result/activity_result.proto\x12\x17\x63oresdk.activity_result\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a$temporal/api/common/v1/message.proto\x1a%temporal/api/failure/v1/message.proto\"\x95\x02\n\x17\x41\x63tivityExecutionResult\x12\x35\n\tcompleted\x18\x01 \x01(\x0b\x32 .coresdk.activity_result.SuccessH\x00\x12\x32\n\x06\x66\x61iled\x18\x02 \x01(\x0b\x32 .coresdk.activity_result.FailureH\x00\x12:\n\tcancelled\x18\x03 \x01(\x0b\x32%.coresdk.activity_result.CancellationH\x00\x12I\n\x13will_complete_async\x18\x04 \x01(\x0b\x32*.coresdk.activity_result.WillCompleteAsyncH\x00\x42\x08\n\x06status\"\xfc\x01\n\x12\x41\x63tivityResolution\x12\x35\n\tcompleted\x18\x01 \x01(\x0b\x32 .coresdk.activity_result.SuccessH\x00\x12\x32\n\x06\x66\x61iled\x18\x02 \x01(\x0b\x32 .coresdk.activity_result.FailureH\x00\x12:\n\tcancelled\x18\x03 \x01(\x0b\x32%.coresdk.activity_result.CancellationH\x00\x12\x35\n\x07\x62\x61\x63koff\x18\x04 \x01(\x0b\x32\".coresdk.activity_result.DoBackoffH\x00\x42\x08\n\x06status\":\n\x07Success\x12/\n\x06result\x18\x01 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload\"<\n\x07\x46\x61ilure\x12\x31\n\x07\x66\x61ilure\x18\x01 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\"A\n\x0c\x43\x61ncellation\x12\x31\n\x07\x66\x61ilure\x18\x01 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\"\x13\n\x11WillCompleteAsync\"\x8d\x01\n\tDoBackoff\x12\x0f\n\x07\x61ttempt\x18\x01 \x01(\r\x12\x33\n\x10\x62\x61\x63koff_duration\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\x12:\n\x16original_schedule_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.TimestampB4\xea\x02\x31Temporalio::Internal::Bridge::Api::ActivityResultb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module ActivityResult
+          ActivityExecutionResult = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_result.ActivityExecutionResult").msgclass
+          ActivityResolution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_result.ActivityResolution").msgclass
+          Success = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_result.Success").msgclass
+          Failure = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_result.Failure").msgclass
+          Cancellation = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_result.Cancellation").msgclass
+          WillCompleteAsync = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_result.WillCompleteAsync").msgclass
+          DoBackoff = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_result.DoBackoff").msgclass
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/activity_task/activity_task.rb b/temporalio/lib/temporalio/internal/bridge/api/activity_task/activity_task.rb
new file mode 100644
index 00000000..6cf273ce
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/activity_task/activity_task.rb
@@ -0,0 +1,31 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/activity_task/activity_task.proto
+
+require 'google/protobuf'
+
+require 'google/protobuf/duration_pb'
+require 'google/protobuf/timestamp_pb'
+require 'temporalio/api/common/v1/message'
+require 'temporalio/internal/bridge/api/common/common'
+
+
+descriptor_data = "\n3temporal/sdk/core/activity_task/activity_task.proto\x12\x15\x63oresdk.activity_task\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a$temporal/api/common/v1/message.proto\x1a%temporal/sdk/core/common/common.proto\"\x8d\x01\n\x0c\x41\x63tivityTask\x12\x12\n\ntask_token\x18\x01 \x01(\x0c\x12-\n\x05start\x18\x03 \x01(\x0b\x32\x1c.coresdk.activity_task.StartH\x00\x12/\n\x06\x63\x61ncel\x18\x04 \x01(\x0b\x32\x1d.coresdk.activity_task.CancelH\x00\x42\t\n\x07variant\"\xed\x06\n\x05Start\x12\x1a\n\x12workflow_namespace\x18\x01 \x01(\t\x12\x15\n\rworkflow_type\x18\x02 \x01(\t\x12\x45\n\x12workflow_execution\x18\x03 \x01(\x0b\x32).temporal.api.common.v1.WorkflowExecution\x12\x13\n\x0b\x61\x63tivity_id\x18\x04 \x01(\t\x12\x15\n\ractivity_type\x18\x05 \x01(\t\x12\x45\n\rheader_fields\x18\x06 \x03(\x0b\x32..coresdk.activity_task.Start.HeaderFieldsEntry\x12.\n\x05input\x18\x07 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12:\n\x11heartbeat_details\x18\x08 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12\x32\n\x0escheduled_time\x18\t \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x42\n\x1e\x63urrent_attempt_scheduled_time\x18\n \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x30\n\x0cstarted_time\x18\x0b \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x0f\n\x07\x61ttempt\x18\x0c \x01(\r\x12<\n\x19schedule_to_close_timeout\x18\r \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x39\n\x16start_to_close_timeout\x18\x0e \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x34\n\x11heartbeat_timeout\x18\x0f \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x39\n\x0cretry_policy\x18\x10 \x01(\x0b\x32#.temporal.api.common.v1.RetryPolicy\x12\x10\n\x08is_local\x18\x11 \x01(\x08\x1aT\n\x11HeaderFieldsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"E\n\x06\x43\x61ncel\x12;\n\x06reason\x18\x01 \x01(\x0e\x32+.coresdk.activity_task.ActivityCancelReason*X\n\x14\x41\x63tivityCancelReason\x12\r\n\tNOT_FOUND\x10\x00\x12\r\n\tCANCELLED\x10\x01\x12\r\n\tTIMED_OUT\x10\x02\x12\x13\n\x0fWORKER_SHUTDOWN\x10\x03\x42\x32\xea\x02/Temporalio::Internal::Bridge::Api::ActivityTaskb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module ActivityTask
+          ActivityTask = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_task.ActivityTask").msgclass
+          Start = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_task.Start").msgclass
+          Cancel = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_task.Cancel").msgclass
+          ActivityCancelReason = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.activity_task.ActivityCancelReason").enummodule
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/child_workflow/child_workflow.rb b/temporalio/lib/temporalio/internal/bridge/api/child_workflow/child_workflow.rb
new file mode 100644
index 00000000..a74d06c0
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/child_workflow/child_workflow.rb
@@ -0,0 +1,33 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/child_workflow/child_workflow.proto
+
+require 'google/protobuf'
+
+require 'temporalio/api/common/v1/message'
+require 'temporalio/api/failure/v1/message'
+require 'temporalio/internal/bridge/api/common/common'
+
+
+descriptor_data = "\n5temporal/sdk/core/child_workflow/child_workflow.proto\x12\x16\x63oresdk.child_workflow\x1a$temporal/api/common/v1/message.proto\x1a%temporal/api/failure/v1/message.proto\x1a%temporal/sdk/core/common/common.proto\"\xc3\x01\n\x13\x43hildWorkflowResult\x12\x34\n\tcompleted\x18\x01 \x01(\x0b\x32\x1f.coresdk.child_workflow.SuccessH\x00\x12\x31\n\x06\x66\x61iled\x18\x02 \x01(\x0b\x32\x1f.coresdk.child_workflow.FailureH\x00\x12\x39\n\tcancelled\x18\x03 \x01(\x0b\x32$.coresdk.child_workflow.CancellationH\x00\x42\x08\n\x06status\":\n\x07Success\x12/\n\x06result\x18\x01 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload\"<\n\x07\x46\x61ilure\x12\x31\n\x07\x66\x61ilure\x18\x01 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\"A\n\x0c\x43\x61ncellation\x12\x31\n\x07\x66\x61ilure\x18\x01 \x01(\x0b\x32 .temporal.api.failure.v1.Failure*\xa4\x01\n\x11ParentClosePolicy\x12#\n\x1fPARENT_CLOSE_POLICY_UNSPECIFIED\x10\x00\x12!\n\x1dPARENT_CLOSE_POLICY_TERMINATE\x10\x01\x12\x1f\n\x1bPARENT_CLOSE_POLICY_ABANDON\x10\x02\x12&\n\"PARENT_CLOSE_POLICY_REQUEST_CANCEL\x10\x03*\xae\x01\n&StartChildWorkflowExecutionFailedCause\x12;\n7START_CHILD_WORKFLOW_EXECUTION_FAILED_CAUSE_UNSPECIFIED\x10\x00\x12G\nCSTART_CHILD_WORKFLOW_EXECUTION_FAILED_CAUSE_WORKFLOW_ALREADY_EXISTS\x10\x01*~\n\x1d\x43hildWorkflowCancellationType\x12\x0b\n\x07\x41\x42\x41NDON\x10\x00\x12\x0e\n\nTRY_CANCEL\x10\x01\x12\x1f\n\x1bWAIT_CANCELLATION_COMPLETED\x10\x02\x12\x1f\n\x1bWAIT_CANCELLATION_REQUESTED\x10\x03\x42\x33\xea\x02\x30Temporalio::Internal::Bridge::Api::ChildWorkflowb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module ChildWorkflow
+          ChildWorkflowResult = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.child_workflow.ChildWorkflowResult").msgclass
+          Success = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.child_workflow.Success").msgclass
+          Failure = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.child_workflow.Failure").msgclass
+          Cancellation = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.child_workflow.Cancellation").msgclass
+          ParentClosePolicy = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.child_workflow.ParentClosePolicy").enummodule
+          StartChildWorkflowExecutionFailedCause = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.child_workflow.StartChildWorkflowExecutionFailedCause").enummodule
+          ChildWorkflowCancellationType = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.child_workflow.ChildWorkflowCancellationType").enummodule
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/common/common.rb b/temporalio/lib/temporalio/internal/bridge/api/common/common.rb
new file mode 100644
index 00000000..e3b18819
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/common/common.rb
@@ -0,0 +1,26 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/common/common.proto
+
+require 'google/protobuf'
+
+require 'google/protobuf/duration_pb'
+
+
+descriptor_data = "\n%temporal/sdk/core/common/common.proto\x12\x0e\x63oresdk.common\x1a\x1egoogle/protobuf/duration.proto\"U\n\x1bNamespacedWorkflowExecution\x12\x11\n\tnamespace\x18\x01 \x01(\t\x12\x13\n\x0bworkflow_id\x18\x02 \x01(\t\x12\x0e\n\x06run_id\x18\x03 \x01(\t*@\n\x10VersioningIntent\x12\x0f\n\x0bUNSPECIFIED\x10\x00\x12\x0e\n\nCOMPATIBLE\x10\x01\x12\x0b\n\x07\x44\x45\x46\x41ULT\x10\x02\x42,\xea\x02)Temporalio::Internal::Bridge::Api::Commonb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module Common
+          NamespacedWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.common.NamespacedWorkflowExecution").msgclass
+          VersioningIntent = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.common.VersioningIntent").enummodule
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/core_interface.rb b/temporalio/lib/temporalio/internal/bridge/api/core_interface.rb
new file mode 100644
index 00000000..9fe2b8e2
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/core_interface.rb
@@ -0,0 +1,36 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/core_interface.proto
+
+require 'google/protobuf'
+
+require 'google/protobuf/duration_pb'
+require 'google/protobuf/empty_pb'
+require 'google/protobuf/timestamp_pb'
+require 'temporalio/api/common/v1/message'
+require 'temporalio/internal/bridge/api/activity_result/activity_result'
+require 'temporalio/internal/bridge/api/activity_task/activity_task'
+require 'temporalio/internal/bridge/api/common/common'
+require 'temporalio/internal/bridge/api/external_data/external_data'
+require 'temporalio/internal/bridge/api/workflow_activation/workflow_activation'
+require 'temporalio/internal/bridge/api/workflow_commands/workflow_commands'
+require 'temporalio/internal/bridge/api/workflow_completion/workflow_completion'
+
+
+descriptor_data = "\n&temporal/sdk/core/core_interface.proto\x12\x07\x63oresdk\x1a\x1egoogle/protobuf/duration.proto\x1a\x1bgoogle/protobuf/empty.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a$temporal/api/common/v1/message.proto\x1a\x37temporal/sdk/core/activity_result/activity_result.proto\x1a\x33temporal/sdk/core/activity_task/activity_task.proto\x1a%temporal/sdk/core/common/common.proto\x1a\x33temporal/sdk/core/external_data/external_data.proto\x1a?temporal/sdk/core/workflow_activation/workflow_activation.proto\x1a;temporal/sdk/core/workflow_commands/workflow_commands.proto\x1a?temporal/sdk/core/workflow_completion/workflow_completion.proto\"Y\n\x11\x41\x63tivityHeartbeat\x12\x12\n\ntask_token\x18\x01 \x01(\x0c\x12\x30\n\x07\x64\x65tails\x18\x02 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\"n\n\x16\x41\x63tivityTaskCompletion\x12\x12\n\ntask_token\x18\x01 \x01(\x0c\x12@\n\x06result\x18\x02 \x01(\x0b\x32\x30.coresdk.activity_result.ActivityExecutionResultB3\xea\x02\x30Temporalio::Internal::Bridge::Api::CoreInterfaceb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module CoreInterface
+          ActivityHeartbeat = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.ActivityHeartbeat").msgclass
+          ActivityTaskCompletion = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.ActivityTaskCompletion").msgclass
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/external_data/external_data.rb b/temporalio/lib/temporalio/internal/bridge/api/external_data/external_data.rb
new file mode 100644
index 00000000..0e12f284
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/external_data/external_data.rb
@@ -0,0 +1,27 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/external_data/external_data.proto
+
+require 'google/protobuf'
+
+require 'google/protobuf/duration_pb'
+require 'google/protobuf/timestamp_pb'
+
+
+descriptor_data = "\n3temporal/sdk/core/external_data/external_data.proto\x12\x15\x63oresdk.external_data\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xfe\x01\n\x17LocalActivityMarkerData\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12\x0f\n\x07\x61ttempt\x18\x02 \x01(\r\x12\x13\n\x0b\x61\x63tivity_id\x18\x03 \x01(\t\x12\x15\n\ractivity_type\x18\x04 \x01(\t\x12\x31\n\rcomplete_time\x18\x05 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12*\n\x07\x62\x61\x63koff\x18\x06 \x01(\x0b\x32\x19.google.protobuf.Duration\x12:\n\x16original_schedule_time\x18\x07 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\"3\n\x11PatchedMarkerData\x12\n\n\x02id\x18\x01 \x01(\t\x12\x12\n\ndeprecated\x18\x02 \x01(\x08\x42\x32\xea\x02/Temporalio::Internal::Bridge::Api::ExternalDatab\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module ExternalData
+          LocalActivityMarkerData = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.external_data.LocalActivityMarkerData").msgclass
+          PatchedMarkerData = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.external_data.PatchedMarkerData").msgclass
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/workflow_activation/workflow_activation.rb b/temporalio/lib/temporalio/internal/bridge/api/workflow_activation/workflow_activation.rb
new file mode 100644
index 00000000..1e7a378a
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/workflow_activation/workflow_activation.rb
@@ -0,0 +1,52 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/workflow_activation/workflow_activation.proto
+
+require 'google/protobuf'
+
+require 'google/protobuf/timestamp_pb'
+require 'google/protobuf/duration_pb'
+require 'temporalio/api/failure/v1/message'
+require 'temporalio/api/update/v1/message'
+require 'temporalio/api/common/v1/message'
+require 'temporalio/api/enums/v1/workflow'
+require 'temporalio/internal/bridge/api/activity_result/activity_result'
+require 'temporalio/internal/bridge/api/child_workflow/child_workflow'
+require 'temporalio/internal/bridge/api/common/common'
+
+
+descriptor_data = "\n?temporal/sdk/core/workflow_activation/workflow_activation.proto\x12\x1b\x63oresdk.workflow_activation\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1egoogle/protobuf/duration.proto\x1a%temporal/api/failure/v1/message.proto\x1a$temporal/api/update/v1/message.proto\x1a$temporal/api/common/v1/message.proto\x1a$temporal/api/enums/v1/workflow.proto\x1a\x37temporal/sdk/core/activity_result/activity_result.proto\x1a\x35temporal/sdk/core/child_workflow/child_workflow.proto\x1a%temporal/sdk/core/common/common.proto\"\xc7\x02\n\x12WorkflowActivation\x12\x0e\n\x06run_id\x18\x01 \x01(\t\x12-\n\ttimestamp\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x14\n\x0cis_replaying\x18\x03 \x01(\x08\x12\x16\n\x0ehistory_length\x18\x04 \x01(\r\x12@\n\x04jobs\x18\x05 \x03(\x0b\x32\x32.coresdk.workflow_activation.WorkflowActivationJob\x12 \n\x18\x61vailable_internal_flags\x18\x06 \x03(\r\x12\x1a\n\x12history_size_bytes\x18\x07 \x01(\x04\x12!\n\x19\x63ontinue_as_new_suggested\x18\x08 \x01(\x08\x12!\n\x19\x62uild_id_for_current_task\x18\t \x01(\t\"\xa7\t\n\x15WorkflowActivationJob\x12N\n\x13initialize_workflow\x18\x01 \x01(\x0b\x32/.coresdk.workflow_activation.InitializeWorkflowH\x00\x12<\n\nfire_timer\x18\x02 \x01(\x0b\x32&.coresdk.workflow_activation.FireTimerH\x00\x12K\n\x12update_random_seed\x18\x04 \x01(\x0b\x32-.coresdk.workflow_activation.UpdateRandomSeedH\x00\x12\x44\n\x0equery_workflow\x18\x05 \x01(\x0b\x32*.coresdk.workflow_activation.QueryWorkflowH\x00\x12\x46\n\x0f\x63\x61ncel_workflow\x18\x06 \x01(\x0b\x32+.coresdk.workflow_activation.CancelWorkflowH\x00\x12\x46\n\x0fsignal_workflow\x18\x07 \x01(\x0b\x32+.coresdk.workflow_activation.SignalWorkflowH\x00\x12H\n\x10resolve_activity\x18\x08 \x01(\x0b\x32,.coresdk.workflow_activation.ResolveActivityH\x00\x12G\n\x10notify_has_patch\x18\t \x01(\x0b\x32+.coresdk.workflow_activation.NotifyHasPatchH\x00\x12q\n&resolve_child_workflow_execution_start\x18\n \x01(\x0b\x32?.coresdk.workflow_activation.ResolveChildWorkflowExecutionStartH\x00\x12\x66\n resolve_child_workflow_execution\x18\x0b \x01(\x0b\x32:.coresdk.workflow_activation.ResolveChildWorkflowExecutionH\x00\x12\x66\n resolve_signal_external_workflow\x18\x0c \x01(\x0b\x32:.coresdk.workflow_activation.ResolveSignalExternalWorkflowH\x00\x12u\n(resolve_request_cancel_external_workflow\x18\r \x01(\x0b\x32\x41.coresdk.workflow_activation.ResolveRequestCancelExternalWorkflowH\x00\x12:\n\tdo_update\x18\x0e \x01(\x0b\x32%.coresdk.workflow_activation.DoUpdateH\x00\x12I\n\x11remove_from_cache\x18\x32 \x01(\x0b\x32,.coresdk.workflow_activation.RemoveFromCacheH\x00\x42\t\n\x07variant\"\xe3\t\n\x12InitializeWorkflow\x12\x15\n\rworkflow_type\x18\x01 \x01(\t\x12\x13\n\x0bworkflow_id\x18\x02 \x01(\t\x12\x32\n\targuments\x18\x03 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12\x17\n\x0frandomness_seed\x18\x04 \x01(\x04\x12M\n\x07headers\x18\x05 \x03(\x0b\x32<.coresdk.workflow_activation.InitializeWorkflow.HeadersEntry\x12\x10\n\x08identity\x18\x06 \x01(\t\x12I\n\x14parent_workflow_info\x18\x07 \x01(\x0b\x32+.coresdk.common.NamespacedWorkflowExecution\x12=\n\x1aworkflow_execution_timeout\x18\x08 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x37\n\x14workflow_run_timeout\x18\t \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x38\n\x15workflow_task_timeout\x18\n \x01(\x0b\x32\x19.google.protobuf.Duration\x12\'\n\x1f\x63ontinued_from_execution_run_id\x18\x0b \x01(\t\x12J\n\x13\x63ontinued_initiator\x18\x0c \x01(\x0e\x32-.temporal.api.enums.v1.ContinueAsNewInitiator\x12;\n\x11\x63ontinued_failure\x18\r \x01(\x0b\x32 .temporal.api.failure.v1.Failure\x12@\n\x16last_completion_result\x18\x0e \x01(\x0b\x32 .temporal.api.common.v1.Payloads\x12\x1e\n\x16\x66irst_execution_run_id\x18\x0f \x01(\t\x12\x39\n\x0cretry_policy\x18\x10 \x01(\x0b\x32#.temporal.api.common.v1.RetryPolicy\x12\x0f\n\x07\x61ttempt\x18\x11 \x01(\x05\x12\x15\n\rcron_schedule\x18\x12 \x01(\t\x12\x46\n\"workflow_execution_expiration_time\x18\x13 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x45\n\"cron_schedule_to_schedule_interval\x18\x14 \x01(\x0b\x32\x19.google.protobuf.Duration\x12*\n\x04memo\x18\x15 \x01(\x0b\x32\x1c.temporal.api.common.v1.Memo\x12\x43\n\x11search_attributes\x18\x16 \x01(\x0b\x32(.temporal.api.common.v1.SearchAttributes\x12.\n\nstart_time\x18\x17 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"\x18\n\tFireTimer\x12\x0b\n\x03seq\x18\x01 \x01(\r\"m\n\x0fResolveActivity\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12;\n\x06result\x18\x02 \x01(\x0b\x32+.coresdk.activity_result.ActivityResolution\x12\x10\n\x08is_local\x18\x03 \x01(\x08\"\xd1\x02\n\"ResolveChildWorkflowExecutionStart\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12[\n\tsucceeded\x18\x02 \x01(\x0b\x32\x46.coresdk.workflow_activation.ResolveChildWorkflowExecutionStartSuccessH\x00\x12X\n\x06\x66\x61iled\x18\x03 \x01(\x0b\x32\x46.coresdk.workflow_activation.ResolveChildWorkflowExecutionStartFailureH\x00\x12]\n\tcancelled\x18\x04 \x01(\x0b\x32H.coresdk.workflow_activation.ResolveChildWorkflowExecutionStartCancelledH\x00\x42\x08\n\x06status\";\n)ResolveChildWorkflowExecutionStartSuccess\x12\x0e\n\x06run_id\x18\x01 \x01(\t\"\xa6\x01\n)ResolveChildWorkflowExecutionStartFailure\x12\x13\n\x0bworkflow_id\x18\x01 \x01(\t\x12\x15\n\rworkflow_type\x18\x02 \x01(\t\x12M\n\x05\x63\x61use\x18\x03 \x01(\x0e\x32>.coresdk.child_workflow.StartChildWorkflowExecutionFailedCause\"`\n+ResolveChildWorkflowExecutionStartCancelled\x12\x31\n\x07\x66\x61ilure\x18\x01 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\"i\n\x1dResolveChildWorkflowExecution\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12;\n\x06result\x18\x02 \x01(\x0b\x32+.coresdk.child_workflow.ChildWorkflowResult\"+\n\x10UpdateRandomSeed\x12\x17\n\x0frandomness_seed\x18\x01 \x01(\x04\"\x84\x02\n\rQueryWorkflow\x12\x10\n\x08query_id\x18\x01 \x01(\t\x12\x12\n\nquery_type\x18\x02 \x01(\t\x12\x32\n\targuments\x18\x03 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12H\n\x07headers\x18\x05 \x03(\x0b\x32\x37.coresdk.workflow_activation.QueryWorkflow.HeadersEntry\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"B\n\x0e\x43\x61ncelWorkflow\x12\x30\n\x07\x64\x65tails\x18\x01 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\"\x83\x02\n\x0eSignalWorkflow\x12\x13\n\x0bsignal_name\x18\x01 \x01(\t\x12.\n\x05input\x18\x02 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12\x10\n\x08identity\x18\x03 \x01(\t\x12I\n\x07headers\x18\x05 \x03(\x0b\x32\x38.coresdk.workflow_activation.SignalWorkflow.HeadersEntry\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"\"\n\x0eNotifyHasPatch\x12\x10\n\x08patch_id\x18\x01 \x01(\t\"_\n\x1dResolveSignalExternalWorkflow\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12\x31\n\x07\x66\x61ilure\x18\x02 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\"f\n$ResolveRequestCancelExternalWorkflow\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12\x31\n\x07\x66\x61ilure\x18\x02 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\"\xcb\x02\n\x08\x44oUpdate\x12\n\n\x02id\x18\x01 \x01(\t\x12\x1c\n\x14protocol_instance_id\x18\x02 \x01(\t\x12\x0c\n\x04name\x18\x03 \x01(\t\x12.\n\x05input\x18\x04 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12\x43\n\x07headers\x18\x05 \x03(\x0b\x32\x32.coresdk.workflow_activation.DoUpdate.HeadersEntry\x12*\n\x04meta\x18\x06 \x01(\x0b\x32\x1c.temporal.api.update.v1.Meta\x12\x15\n\rrun_validator\x18\x07 \x01(\x08\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"\xc1\x02\n\x0fRemoveFromCache\x12\x0f\n\x07message\x18\x01 \x01(\t\x12K\n\x06reason\x18\x02 \x01(\x0e\x32;.coresdk.workflow_activation.RemoveFromCache.EvictionReason\"\xcf\x01\n\x0e\x45victionReason\x12\x0f\n\x0bUNSPECIFIED\x10\x00\x12\x0e\n\nCACHE_FULL\x10\x01\x12\x0e\n\nCACHE_MISS\x10\x02\x12\x12\n\x0eNONDETERMINISM\x10\x03\x12\r\n\tLANG_FAIL\x10\x04\x12\x12\n\x0eLANG_REQUESTED\x10\x05\x12\x12\n\x0eTASK_NOT_FOUND\x10\x06\x12\x15\n\x11UNHANDLED_COMMAND\x10\x07\x12\t\n\x05\x46\x41TAL\x10\x08\x12\x1f\n\x1bPAGINATION_OR_HISTORY_FETCH\x10\tB8\xea\x02\x35Temporalio::Internal::Bridge::Api::WorkflowActivationb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module WorkflowActivation
+          WorkflowActivation = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.WorkflowActivation").msgclass
+          WorkflowActivationJob = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.WorkflowActivationJob").msgclass
+          InitializeWorkflow = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.InitializeWorkflow").msgclass
+          FireTimer = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.FireTimer").msgclass
+          ResolveActivity = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveActivity").msgclass
+          ResolveChildWorkflowExecutionStart = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveChildWorkflowExecutionStart").msgclass
+          ResolveChildWorkflowExecutionStartSuccess = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveChildWorkflowExecutionStartSuccess").msgclass
+          ResolveChildWorkflowExecutionStartFailure = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveChildWorkflowExecutionStartFailure").msgclass
+          ResolveChildWorkflowExecutionStartCancelled = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveChildWorkflowExecutionStartCancelled").msgclass
+          ResolveChildWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveChildWorkflowExecution").msgclass
+          UpdateRandomSeed = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.UpdateRandomSeed").msgclass
+          QueryWorkflow = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.QueryWorkflow").msgclass
+          CancelWorkflow = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.CancelWorkflow").msgclass
+          SignalWorkflow = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.SignalWorkflow").msgclass
+          NotifyHasPatch = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.NotifyHasPatch").msgclass
+          ResolveSignalExternalWorkflow = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveSignalExternalWorkflow").msgclass
+          ResolveRequestCancelExternalWorkflow = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.ResolveRequestCancelExternalWorkflow").msgclass
+          DoUpdate = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.DoUpdate").msgclass
+          RemoveFromCache = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.RemoveFromCache").msgclass
+          RemoveFromCache::EvictionReason = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_activation.RemoveFromCache.EvictionReason").enummodule
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/workflow_commands/workflow_commands.rb b/temporalio/lib/temporalio/internal/bridge/api/workflow_commands/workflow_commands.rb
new file mode 100644
index 00000000..52dcaa0f
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/workflow_commands/workflow_commands.rb
@@ -0,0 +1,54 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/workflow_commands/workflow_commands.proto + +require 'google/protobuf' + +require 'google/protobuf/duration_pb' +require 'google/protobuf/timestamp_pb' +require 'google/protobuf/empty_pb' +require 'temporalio/api/common/v1/message' +require 'temporalio/api/enums/v1/workflow' +require 'temporalio/api/failure/v1/message' +require 'temporalio/internal/bridge/api/child_workflow/child_workflow' +require 'temporalio/internal/bridge/api/common/common' + + +descriptor_data = "\n;temporal/sdk/core/workflow_commands/workflow_commands.proto\x12\x19\x63oresdk.workflow_commands\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1bgoogle/protobuf/empty.proto\x1a$temporal/api/common/v1/message.proto\x1a$temporal/api/enums/v1/workflow.proto\x1a%temporal/api/failure/v1/message.proto\x1a\x35temporal/sdk/core/child_workflow/child_workflow.proto\x1a%temporal/sdk/core/common/common.proto\"\xf2\r\n\x0fWorkflowCommand\x12<\n\x0bstart_timer\x18\x01 \x01(\x0b\x32%.coresdk.workflow_commands.StartTimerH\x00\x12H\n\x11schedule_activity\x18\x02 \x01(\x0b\x32+.coresdk.workflow_commands.ScheduleActivityH\x00\x12\x42\n\x10respond_to_query\x18\x03 \x01(\x0b\x32&.coresdk.workflow_commands.QueryResultH\x00\x12S\n\x17request_cancel_activity\x18\x04 \x01(\x0b\x32\x30.coresdk.workflow_commands.RequestCancelActivityH\x00\x12>\n\x0c\x63\x61ncel_timer\x18\x05 \x01(\x0b\x32&.coresdk.workflow_commands.CancelTimerH\x00\x12[\n\x1b\x63omplete_workflow_execution\x18\x06 \x01(\x0b\x32\x34.coresdk.workflow_commands.CompleteWorkflowExecutionH\x00\x12S\n\x17\x66\x61il_workflow_execution\x18\x07 \x01(\x0b\x32\x30.coresdk.workflow_commands.FailWorkflowExecutionH\x00\x12g\n\"continue_as_new_workflow_execution\x18\x08 \x01(\x0b\x32\x39.coresdk.workflow_commands.ContinueAsNewWorkflowExecutionH\x00\x12W\n\x19\x63\x61ncel_workflow_execution\x18\t \x01(\x0b\x32\x32.coresdk.workflow_commands.CancelWorkflowExecutionH\x00\x12\x45\n\x10set_patch_marker\x18\n 
\x01(\x0b\x32).coresdk.workflow_commands.SetPatchMarkerH\x00\x12`\n\x1estart_child_workflow_execution\x18\x0b \x01(\x0b\x32\x36.coresdk.workflow_commands.StartChildWorkflowExecutionH\x00\x12\x62\n\x1f\x63\x61ncel_child_workflow_execution\x18\x0c \x01(\x0b\x32\x37.coresdk.workflow_commands.CancelChildWorkflowExecutionH\x00\x12w\n*request_cancel_external_workflow_execution\x18\r \x01(\x0b\x32\x41.coresdk.workflow_commands.RequestCancelExternalWorkflowExecutionH\x00\x12h\n\"signal_external_workflow_execution\x18\x0e \x01(\x0b\x32:.coresdk.workflow_commands.SignalExternalWorkflowExecutionH\x00\x12Q\n\x16\x63\x61ncel_signal_workflow\x18\x0f \x01(\x0b\x32/.coresdk.workflow_commands.CancelSignalWorkflowH\x00\x12S\n\x17schedule_local_activity\x18\x10 \x01(\x0b\x32\x30.coresdk.workflow_commands.ScheduleLocalActivityH\x00\x12^\n\x1drequest_cancel_local_activity\x18\x11 \x01(\x0b\x32\x35.coresdk.workflow_commands.RequestCancelLocalActivityH\x00\x12\x66\n!upsert_workflow_search_attributes\x18\x12 \x01(\x0b\x32\x39.coresdk.workflow_commands.UpsertWorkflowSearchAttributesH\x00\x12Y\n\x1amodify_workflow_properties\x18\x13 \x01(\x0b\x32\x33.coresdk.workflow_commands.ModifyWorkflowPropertiesH\x00\x12\x44\n\x0fupdate_response\x18\x14 \x01(\x0b\x32).coresdk.workflow_commands.UpdateResponseH\x00\x42\t\n\x07variant\"S\n\nStartTimer\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12\x38\n\x15start_to_fire_timeout\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\"\x1a\n\x0b\x43\x61ncelTimer\x12\x0b\n\x03seq\x18\x01 \x01(\r\"\x84\x06\n\x10ScheduleActivity\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12\x13\n\x0b\x61\x63tivity_id\x18\x02 \x01(\t\x12\x15\n\ractivity_type\x18\x03 \x01(\t\x12\x12\n\ntask_queue\x18\x05 \x01(\t\x12I\n\x07headers\x18\x06 \x03(\x0b\x32\x38.coresdk.workflow_commands.ScheduleActivity.HeadersEntry\x12\x32\n\targuments\x18\x07 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12<\n\x19schedule_to_close_timeout\x18\x08 
\x01(\x0b\x32\x19.google.protobuf.Duration\x12<\n\x19schedule_to_start_timeout\x18\t \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x39\n\x16start_to_close_timeout\x18\n \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x34\n\x11heartbeat_timeout\x18\x0b \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x39\n\x0cretry_policy\x18\x0c \x01(\x0b\x32#.temporal.api.common.v1.RetryPolicy\x12N\n\x11\x63\x61ncellation_type\x18\r \x01(\x0e\x32\x33.coresdk.workflow_commands.ActivityCancellationType\x12\x1e\n\x16\x64o_not_eagerly_execute\x18\x0e \x01(\x08\x12;\n\x11versioning_intent\x18\x0f \x01(\x0e\x32 .coresdk.common.VersioningIntent\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"\xee\x05\n\x15ScheduleLocalActivity\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12\x13\n\x0b\x61\x63tivity_id\x18\x02 \x01(\t\x12\x15\n\ractivity_type\x18\x03 \x01(\t\x12\x0f\n\x07\x61ttempt\x18\x04 \x01(\r\x12:\n\x16original_schedule_time\x18\x05 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12N\n\x07headers\x18\x06 \x03(\x0b\x32=.coresdk.workflow_commands.ScheduleLocalActivity.HeadersEntry\x12\x32\n\targuments\x18\x07 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12<\n\x19schedule_to_close_timeout\x18\x08 \x01(\x0b\x32\x19.google.protobuf.Duration\x12<\n\x19schedule_to_start_timeout\x18\t \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x39\n\x16start_to_close_timeout\x18\n \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x39\n\x0cretry_policy\x18\x0b \x01(\x0b\x32#.temporal.api.common.v1.RetryPolicy\x12\x38\n\x15local_retry_threshold\x18\x0c \x01(\x0b\x32\x19.google.protobuf.Duration\x12N\n\x11\x63\x61ncellation_type\x18\r \x01(\x0e\x32\x33.coresdk.workflow_commands.ActivityCancellationType\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"$\n\x15RequestCancelActivity\x12\x0b\n\x03seq\x18\x01 
\x01(\r\")\n\x1aRequestCancelLocalActivity\x12\x0b\n\x03seq\x18\x01 \x01(\r\"\x9c\x01\n\x0bQueryResult\x12\x10\n\x08query_id\x18\x01 \x01(\t\x12<\n\tsucceeded\x18\x02 \x01(\x0b\x32\'.coresdk.workflow_commands.QuerySuccessH\x00\x12\x32\n\x06\x66\x61iled\x18\x03 \x01(\x0b\x32 .temporal.api.failure.v1.FailureH\x00\x42\t\n\x07variant\"A\n\x0cQuerySuccess\x12\x31\n\x08response\x18\x01 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload\"L\n\x19\x43ompleteWorkflowExecution\x12/\n\x06result\x18\x01 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload\"J\n\x15\x46\x61ilWorkflowExecution\x12\x31\n\x07\x66\x61ilure\x18\x01 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\"\xfb\x06\n\x1e\x43ontinueAsNewWorkflowExecution\x12\x15\n\rworkflow_type\x18\x01 \x01(\t\x12\x12\n\ntask_queue\x18\x02 \x01(\t\x12\x32\n\targuments\x18\x03 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12\x37\n\x14workflow_run_timeout\x18\x04 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x38\n\x15workflow_task_timeout\x18\x05 \x01(\x0b\x32\x19.google.protobuf.Duration\x12Q\n\x04memo\x18\x06 \x03(\x0b\x32\x43.coresdk.workflow_commands.ContinueAsNewWorkflowExecution.MemoEntry\x12W\n\x07headers\x18\x07 \x03(\x0b\x32\x46.coresdk.workflow_commands.ContinueAsNewWorkflowExecution.HeadersEntry\x12j\n\x11search_attributes\x18\x08 \x03(\x0b\x32O.coresdk.workflow_commands.ContinueAsNewWorkflowExecution.SearchAttributesEntry\x12\x39\n\x0cretry_policy\x18\t \x01(\x0b\x32#.temporal.api.common.v1.RetryPolicy\x12;\n\x11versioning_intent\x18\n \x01(\x0e\x32 .coresdk.common.VersioningIntent\x1aL\n\tMemoEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\x1aX\n\x15SearchAttributesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 
\x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"\x19\n\x17\x43\x61ncelWorkflowExecution\"6\n\x0eSetPatchMarker\x12\x10\n\x08patch_id\x18\x01 \x01(\t\x12\x12\n\ndeprecated\x18\x02 \x01(\x08\"\xe0\t\n\x1bStartChildWorkflowExecution\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12\x11\n\tnamespace\x18\x02 \x01(\t\x12\x13\n\x0bworkflow_id\x18\x03 \x01(\t\x12\x15\n\rworkflow_type\x18\x04 \x01(\t\x12\x12\n\ntask_queue\x18\x05 \x01(\t\x12.\n\x05input\x18\x06 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12=\n\x1aworkflow_execution_timeout\x18\x07 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x37\n\x14workflow_run_timeout\x18\x08 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x38\n\x15workflow_task_timeout\x18\t \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x46\n\x13parent_close_policy\x18\n \x01(\x0e\x32).coresdk.child_workflow.ParentClosePolicy\x12N\n\x18workflow_id_reuse_policy\x18\x0c \x01(\x0e\x32,.temporal.api.enums.v1.WorkflowIdReusePolicy\x12\x39\n\x0cretry_policy\x18\r \x01(\x0b\x32#.temporal.api.common.v1.RetryPolicy\x12\x15\n\rcron_schedule\x18\x0e \x01(\t\x12T\n\x07headers\x18\x0f \x03(\x0b\x32\x43.coresdk.workflow_commands.StartChildWorkflowExecution.HeadersEntry\x12N\n\x04memo\x18\x10 \x03(\x0b\x32@.coresdk.workflow_commands.StartChildWorkflowExecution.MemoEntry\x12g\n\x11search_attributes\x18\x11 \x03(\x0b\x32L.coresdk.workflow_commands.StartChildWorkflowExecution.SearchAttributesEntry\x12P\n\x11\x63\x61ncellation_type\x18\x12 \x01(\x0e\x32\x35.coresdk.child_workflow.ChildWorkflowCancellationType\x12;\n\x11versioning_intent\x18\x13 \x01(\x0e\x32 .coresdk.common.VersioningIntent\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\x1aL\n\tMemoEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\x1aX\n\x15SearchAttributesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 
\x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\":\n\x1c\x43\x61ncelChildWorkflowExecution\x12\x1a\n\x12\x63hild_workflow_seq\x18\x01 \x01(\r\"\xa7\x01\n&RequestCancelExternalWorkflowExecution\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12I\n\x12workflow_execution\x18\x02 \x01(\x0b\x32+.coresdk.common.NamespacedWorkflowExecutionH\x00\x12\x1b\n\x11\x63hild_workflow_id\x18\x03 \x01(\tH\x00\x42\x08\n\x06target\"\x8f\x03\n\x1fSignalExternalWorkflowExecution\x12\x0b\n\x03seq\x18\x01 \x01(\r\x12I\n\x12workflow_execution\x18\x02 \x01(\x0b\x32+.coresdk.common.NamespacedWorkflowExecutionH\x00\x12\x1b\n\x11\x63hild_workflow_id\x18\x03 \x01(\tH\x00\x12\x13\n\x0bsignal_name\x18\x04 \x01(\t\x12-\n\x04\x61rgs\x18\x05 \x03(\x0b\x32\x1f.temporal.api.common.v1.Payload\x12X\n\x07headers\x18\x06 \x03(\x0b\x32G.coresdk.workflow_commands.SignalExternalWorkflowExecution.HeadersEntry\x1aO\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\x42\x08\n\x06target\"#\n\x14\x43\x61ncelSignalWorkflow\x12\x0b\n\x03seq\x18\x01 \x01(\r\"\xe6\x01\n\x1eUpsertWorkflowSearchAttributes\x12j\n\x11search_attributes\x18\x01 \x03(\x0b\x32O.coresdk.workflow_commands.UpsertWorkflowSearchAttributes.SearchAttributesEntry\x1aX\n\x15SearchAttributesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12.\n\x05value\x18\x02 \x01(\x0b\x32\x1f.temporal.api.common.v1.Payload:\x02\x38\x01\"O\n\x18ModifyWorkflowProperties\x12\x33\n\rupserted_memo\x18\x01 \x01(\x0b\x32\x1c.temporal.api.common.v1.Memo\"\xd2\x01\n\x0eUpdateResponse\x12\x1c\n\x14protocol_instance_id\x18\x01 \x01(\t\x12*\n\x08\x61\x63\x63\x65pted\x18\x02 \x01(\x0b\x32\x16.google.protobuf.EmptyH\x00\x12\x34\n\x08rejected\x18\x03 \x01(\x0b\x32 .temporal.api.failure.v1.FailureH\x00\x12\x34\n\tcompleted\x18\x04 
\x01(\x0b\x32\x1f.temporal.api.common.v1.PayloadH\x00\x42\n\n\x08response*X\n\x18\x41\x63tivityCancellationType\x12\x0e\n\nTRY_CANCEL\x10\x00\x12\x1f\n\x1bWAIT_CANCELLATION_COMPLETED\x10\x01\x12\x0b\n\x07\x41\x42\x41NDON\x10\x02\x42\x36\xea\x02\x33Temporalio::Internal::Bridge::Api::WorkflowCommandsb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module WorkflowCommands
+          WorkflowCommand = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.WorkflowCommand").msgclass
+          StartTimer = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.StartTimer").msgclass
+          CancelTimer = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.CancelTimer").msgclass
+          ScheduleActivity = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.ScheduleActivity").msgclass
+          ScheduleLocalActivity = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.ScheduleLocalActivity").msgclass
+          RequestCancelActivity = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.RequestCancelActivity").msgclass
+          RequestCancelLocalActivity = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.RequestCancelLocalActivity").msgclass
+          QueryResult = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.QueryResult").msgclass
+          QuerySuccess = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.QuerySuccess").msgclass
+          CompleteWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.CompleteWorkflowExecution").msgclass
+          FailWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.FailWorkflowExecution").msgclass
+          ContinueAsNewWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.ContinueAsNewWorkflowExecution").msgclass
+          CancelWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.CancelWorkflowExecution").msgclass
+          SetPatchMarker = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.SetPatchMarker").msgclass
+          StartChildWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.StartChildWorkflowExecution").msgclass
+          CancelChildWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.CancelChildWorkflowExecution").msgclass
+          RequestCancelExternalWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.RequestCancelExternalWorkflowExecution").msgclass
+          SignalExternalWorkflowExecution = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.SignalExternalWorkflowExecution").msgclass
+          CancelSignalWorkflow = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.CancelSignalWorkflow").msgclass
+          UpsertWorkflowSearchAttributes = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.UpsertWorkflowSearchAttributes").msgclass
+          ModifyWorkflowProperties = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.ModifyWorkflowProperties").msgclass
+          UpdateResponse = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.UpdateResponse").msgclass
+          ActivityCancellationType = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_commands.ActivityCancellationType").enummodule
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/api/workflow_completion/workflow_completion.rb b/temporalio/lib/temporalio/internal/bridge/api/workflow_completion/workflow_completion.rb
new file mode 100644
index 00000000..47e05531
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/api/workflow_completion/workflow_completion.rb
@@ -0,0 +1,30 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/sdk/core/workflow_completion/workflow_completion.proto
+
+require 'google/protobuf'
+
+require 'temporalio/api/failure/v1/message'
+require 'temporalio/api/enums/v1/failed_cause'
+require 'temporalio/internal/bridge/api/common/common'
+require 'temporalio/internal/bridge/api/workflow_commands/workflow_commands'
+
+
+descriptor_data = "\n?temporal/sdk/core/workflow_completion/workflow_completion.proto\x12\x1b\x63oresdk.workflow_completion\x1a%temporal/api/failure/v1/message.proto\x1a(temporal/api/enums/v1/failed_cause.proto\x1a%temporal/sdk/core/common/common.proto\x1a;temporal/sdk/core/workflow_commands/workflow_commands.proto\"\xac\x01\n\x1cWorkflowActivationCompletion\x12\x0e\n\x06run_id\x18\x01 \x01(\t\x12:\n\nsuccessful\x18\x02 \x01(\x0b\x32$.coresdk.workflow_completion.SuccessH\x00\x12\x36\n\x06\x66\x61iled\x18\x03 \x01(\x0b\x32$.coresdk.workflow_completion.FailureH\x00\x42\x08\n\x06status\"d\n\x07Success\x12<\n\x08\x63ommands\x18\x01 \x03(\x0b\x32*.coresdk.workflow_commands.WorkflowCommand\x12\x1b\n\x13used_internal_flags\x18\x06 \x03(\r\"\x81\x01\n\x07\x46\x61ilure\x12\x31\n\x07\x66\x61ilure\x18\x01 \x01(\x0b\x32 .temporal.api.failure.v1.Failure\x12\x43\n\x0b\x66orce_cause\x18\x02 \x01(\x0e\x32..temporal.api.enums.v1.WorkflowTaskFailedCauseB8\xea\x02\x35Temporalio::Internal::Bridge::Api::WorkflowCompletionb\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+  module Internal
+    module Bridge
+      module Api
+        module WorkflowCompletion
+          WorkflowActivationCompletion = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_completion.WorkflowActivationCompletion").msgclass
+          Success = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_completion.Success").msgclass
+          Failure = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("coresdk.workflow_completion.Failure").msgclass
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/bridge/client.rb b/temporalio/lib/temporalio/internal/bridge/client.rb
index 1b979ef7..c1a92e7a 100644
--- a/temporalio/lib/temporalio/internal/bridge/client.rb
+++ b/temporalio/lib/temporalio/internal/bridge/client.rb
@@ -6,7 +6,6 @@
 module Temporalio
   module Internal
     module Bridge
-      # @!visibility private
       class Client
         Options = Struct.new(
           :target_host,
@@ -22,7 +21,6 @@ class Client
           keyword_init: true
         )
 
-        # @!visibility private
         TLSOptions = Struct.new(
           :client_cert, # Optional
          :client_private_key, # Optional
@@ -31,7 +29,6 @@ class Client
           keyword_init: true
         )
 
-        # @!visibility private
        RPCRetryOptions = Struct.new(
           :initial_interval,
           :randomization_factor,
@@ -42,14 +39,12 @@ class Client
           keyword_init: true
         )
 
-        # @!visibility private
         KeepAliveOptions = Struct.new(
           :interval,
           :timeout,
           keyword_init: true
         )
 
-        # @!visibility private
         HTTPConnectProxyOptions = Struct.new(
           :target_host,
           :basic_auth_user, # Optional
@@ -57,16 +52,15 @@ class Client
          keyword_init: true
         )
 
-        # @!visibility private
         def self.new(runtime, options)
-          Bridge.async_call do |queue|
-            async_new(runtime, options) do |val|
-              queue.push(val)
-            end
-          end
+          queue = Queue.new
+          async_new(runtime, options, queue)
+          result = queue.pop
+          raise result if result.is_a?(Exception)
+
+          result
         end
 
-        # @!visibility private
         def _invoke_rpc(
           service:,
           rpc:,
@@ -76,19 +70,20 @@ def _invoke_rpc(
           rpc_metadata:,
           rpc_timeout:
         )
-          response_bytes = Bridge.async_call do |queue|
-            async_invoke_rpc(
-              service:,
-              rpc:,
-              request: request.to_proto,
-              rpc_retry:,
-              rpc_metadata:,
-              rpc_timeout:
-            ) do |val|
-              queue.push(val)
-            end
-          end
-          response_class.decode(response_bytes)
+          queue = Queue.new
+          async_invoke_rpc(
+            service:,
+            rpc:,
+            request: request.to_proto,
+            rpc_retry:,
+            rpc_metadata:,
+            rpc_timeout:,
+            queue:
+          )
+          result = queue.pop
+          raise result if result.is_a?(Exception)
+
+          response_class.decode(result)
         end
       end
     end
diff --git a/temporalio/lib/temporalio/internal/bridge/runtime.rb b/temporalio/lib/temporalio/internal/bridge/runtime.rb
index 5814d827..23acbae7 100644
--- a/temporalio/lib/temporalio/internal/bridge/runtime.rb
+++ b/temporalio/lib/temporalio/internal/bridge/runtime.rb
@@ -3,29 +3,24 @@
 module Temporalio
   module Internal
     module Bridge
-      # @!visibility private
       class Runtime
-        # @!visibility private
         Options = Struct.new(
           :telemetry,
           keyword_init: true
         )
 
-        # @!visibility private
         TelemetryOptions = Struct.new(
           :logging, # Optional
           :metrics, # Optional
           keyword_init: true
         )
 
-        # @!visibility private
         LoggingOptions = Struct.new(
           :log_filter,
           :forward_to, # Optional
           keyword_init: true
         )
 
-        # @!visibility private
         MetricsOptions = Struct.new(
           :opentelemetry, # Optional
           :prometheus, # Optional
@@ -36,7 +31,6 @@ class Runtime
           keyword_init: true
         )
 
-        # @!visibility private
         OpenTelemetryMetricsOptions = Struct.new(
           :url,
           :headers, # Optional
@@ -46,7 +40,6 @@ class Runtime
           keyword_init: true
         )
 
-        # @!visibility private
         PrometheusMetricsOptions = Struct.new(
           :bind_address,
           :counters_total_suffix,
diff --git a/temporalio/lib/temporalio/internal/bridge/testing.rb b/temporalio/lib/temporalio/internal/bridge/testing.rb
index 0c78ac1d..00178ee3 100644
--- a/temporalio/lib/temporalio/internal/bridge/testing.rb
+++ b/temporalio/lib/temporalio/internal/bridge/testing.rb
@@ -7,9 +7,7 @@ module Temporalio
   module Internal
     module Bridge
       module Testing
-        # @!visibility private
         class EphemeralServer
-          # @!visibility private
           StartDevServerOptions = Struct.new(
             :existing_path, # Optional
             :sdk_name,
@@ -27,22 +25,20 @@ class EphemeralServer
             keyword_init: true
           )
 
-          # @!visibility private
           def self.start_dev_server(runtime, options)
-            Bridge.async_call do |queue|
-              async_start_dev_server(runtime, options) do |val|
-                queue.push(val)
-              end
-            end
+            queue = Queue.new
+            async_start_dev_server(runtime, options, queue)
+            result = queue.pop
+            raise result if result.is_a?(Exception)
+
+            result
           end
 
-          # @!visibility private
           def shutdown
-            Bridge.async_call do |queue|
-              async_shutdown do |val|
-                queue.push(val)
-              end
-            end
+            queue = Queue.new
+            async_shutdown(queue)
+            result = queue.pop
+            raise result if result.is_a?(Exception)
           end
         end
       end
diff --git a/temporalio/lib/temporalio/internal/bridge/worker.rb b/temporalio/lib/temporalio/internal/bridge/worker.rb
new file mode 100644
index 00000000..f7a8d8aa
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/bridge/worker.rb
@@ -0,0 +1,84 @@
+# frozen_string_literal: true
+
+require 'temporalio/internal/bridge'
+require 'temporalio/temporalio_bridge'
+
+module Temporalio
+  module Internal
+    module Bridge
+      class Worker
+        Options = Struct.new(
+          :activity,
+          :workflow,
+          :namespace,
+          :task_queue,
+          :tuner,
+          :build_id,
+          :identity_override,
+          :max_cached_workflows,
+          :max_concurrent_workflow_task_polls,
+          :nonsticky_to_sticky_poll_ratio,
+          :max_concurrent_activity_task_polls,
+          :no_remote_activities,
+          :sticky_queue_schedule_to_start_timeout,
+          :max_heartbeat_throttle_interval,
+          :default_heartbeat_throttle_interval,
+          :max_worker_activities_per_second,
+          :max_task_queue_activities_per_second,
+          :graceful_shutdown_period,
+          :use_worker_versioning,
+          keyword_init: true
+        )
+
+        TunerOptions = Struct.new(
+          :workflow_slot_supplier,
+          :activity_slot_supplier,
+          :local_activity_slot_supplier,
+          keyword_init: true
+        )
+
+        TunerSlotSupplierOptions = Struct.new(
+          :fixed_size,
+          :resource_based,
+          keyword_init: true
+        )
+
+        TunerResourceBasedSlotSupplierOptions = Struct.new(
+          :target_mem_usage,
+          :target_cpu_usage,
+          :min_slots,
+          :max_slots,
+          :ramp_throttle,
+          keyword_init: true
+        )
+
+        def self.finalize_shutdown_all(workers)
+          queue = Queue.new
+          async_finalize_all(workers, queue)
+          result = queue.pop
+          raise result if result.is_a?(Exception)
+        end
+
+        def validate
+          queue = Queue.new
+          async_validate(queue)
+          result = queue.pop
+          raise result if result.is_a?(Exception)
+        end
+
+        def complete_activity_task(proto)
+          queue = Queue.new
+          async_complete_activity_task(proto.to_proto, queue)
+          result = queue.pop
+          raise result if result.is_a?(Exception)
+        end
+
+        def complete_activity_task_in_background(proto)
+          queue = Queue.new
+          # TODO(cretz): Log error on this somehow?
+          async_complete_activity_task(proto.to_proto, queue)
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/internal/client/implementation.rb b/temporalio/lib/temporalio/internal/client/implementation.rb
new file mode 100644
index 00000000..2c517929
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/client/implementation.rb
@@ -0,0 +1,524 @@
+# frozen_string_literal: true
+
+require 'google/protobuf/well_known_types'
+require 'temporalio/api'
+require 'temporalio/client/activity_id_reference'
+require 'temporalio/client/async_activity_handle'
+require 'temporalio/client/connection'
+require 'temporalio/client/interceptor'
+require 'temporalio/client/workflow_execution'
+require 'temporalio/client/workflow_execution_count'
+require 'temporalio/client/workflow_handle'
+require 'temporalio/common_enums'
+require 'temporalio/converters'
+require 'temporalio/error'
+require 'temporalio/error/failure'
+require 'temporalio/internal/proto_utils'
+require 'temporalio/runtime'
+require 'temporalio/search_attributes'
+
+module Temporalio
+  module Internal
+    module Client
+      class Implementation < Temporalio::Client::Interceptor::Outbound
+        def initialize(client)
+          super(nil)
+          @client = client
+        end
+
+        def start_workflow(input)
+          # TODO(cretz): Signal/update with start
+          req = Api::WorkflowService::V1::StartWorkflowExecutionRequest.new(
+            request_id: SecureRandom.uuid,
+            namespace: @client.namespace,
+            workflow_type: Api::Common::V1::WorkflowType.new(name: input.workflow.to_s),
+            workflow_id: input.workflow_id,
+            task_queue: Api::TaskQueue::V1::TaskQueue.new(name: input.task_queue.to_s),
+            input: @client.data_converter.to_payloads(input.args),
+            workflow_execution_timeout: ProtoUtils.seconds_to_duration(input.execution_timeout),
+            workflow_run_timeout: ProtoUtils.seconds_to_duration(input.run_timeout),
+            workflow_task_timeout: ProtoUtils.seconds_to_duration(input.task_timeout),
+            identity: @client.connection.identity,
+            workflow_id_reuse_policy: input.id_reuse_policy,
+            workflow_id_conflict_policy: input.id_conflict_policy,
+            retry_policy: input.retry_policy&.to_proto,
+            cron_schedule: input.cron_schedule,
+            memo: ProtoUtils.memo_to_proto(input.memo, @client.data_converter),
+            search_attributes: input.search_attributes&.to_proto,
+            workflow_start_delay: ProtoUtils.seconds_to_duration(input.start_delay),
+            request_eager_execution: input.request_eager_start,
+            header: input.headers
+          )
+
+          # Send request
+          begin
+            resp = @client.workflow_service.start_workflow_execution(
+              req,
+              rpc_retry: true,
+              rpc_metadata: input.rpc_metadata,
+              rpc_timeout: input.rpc_timeout
+            )
+          rescue Error::RPCError => e
+            # Unpack and raise already started if that's the error, otherwise default raise
+            if e.code == Error::RPCError::Code::ALREADY_EXISTS && e.grpc_status.details.first
+              details = e.grpc_status.details.first.unpack(
+                Api::ErrorDetails::V1::WorkflowExecutionAlreadyStartedFailure
+              )
+              if details
+                raise Error::WorkflowAlreadyStartedError.new(
+                  workflow_id: req.workflow_id,
+                  workflow_type: req.workflow_type.name,
+                  run_id: details.run_id
+                )
+              end
+            end
+            raise
+          end
+
+          # Return handle
+          Temporalio::Client::WorkflowHandle.new(
+            client: @client,
+            id: input.workflow_id,
+            run_id: nil,
+            result_run_id: resp.run_id,
+            first_execution_run_id: resp.run_id
+          )
+        end
+
+        def list_workflows(input)
+          Enumerator.new do |yielder|
+            req = Api::WorkflowService::V1::ListWorkflowExecutionsRequest.new(
+              namespace: @client.namespace,
+              query: input.query || ''
+            )
+            loop do
+              resp = @client.workflow_service.list_workflow_executions(
+                req,
+                rpc_retry: true,
+                rpc_metadata: input.rpc_metadata,
+                rpc_timeout: input.rpc_timeout
+              )
+              resp.executions.each do |raw_info|
+                yielder << Temporalio::Client::WorkflowExecution.new(raw_info, @client.data_converter)
+              end
+              break if resp.next_page_token.empty?
+
+              req.next_page_token = resp.next_page_token
+            end
+          end
+        end
+
+        def count_workflows(input)
+          resp = @client.workflow_service.count_workflow_executions(
+            Api::WorkflowService::V1::CountWorkflowExecutionsRequest.new(
+              namespace: @client.namespace,
+              query: input.query || ''
+            ),
+            rpc_retry: true,
+            rpc_metadata: input.rpc_metadata,
+            rpc_timeout: input.rpc_timeout
+          )
+          Temporalio::Client::WorkflowExecutionCount.new(
+            resp.count,
+            resp.groups.map do |group|
+              Temporalio::Client::WorkflowExecutionCount::AggregationGroup.new(
+                group.count,
+                group.group_values.map { |payload| SearchAttributes.value_from_payload(payload) }
+              )
+            end
+          )
+        end
+
+        def describe_workflow(input)
+          resp = @client.workflow_service.describe_workflow_execution(
+            Api::WorkflowService::V1::DescribeWorkflowExecutionRequest.new(
+              namespace: @client.namespace,
+              execution: Api::Common::V1::WorkflowExecution.new(
+                workflow_id: input.workflow_id,
+                run_id: input.run_id || ''
+              )
+            ),
+            rpc_retry: true,
+            rpc_metadata: input.rpc_metadata,
+            rpc_timeout: input.rpc_timeout
+          )
+          Temporalio::Client::WorkflowExecution::Description.new(resp, @client.data_converter)
+        end
+
+        def fetch_workflow_history_events(input)
+          Enumerator.new do |yielder|
+            req = Api::WorkflowService::V1::GetWorkflowExecutionHistoryRequest.new(
+              namespace: @client.namespace,
+              execution: Api::Common::V1::WorkflowExecution.new(
+                workflow_id: input.workflow_id,
+                run_id: input.run_id || ''
+              ),
+              wait_new_event: input.wait_new_event,
+              history_event_filter_type:
input.event_filter_type, + skip_archival: input.skip_archival + ) + loop do + resp = @client.workflow_service.get_workflow_execution_history( + req, + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + resp.history&.events&.each { |event| yielder << event } + break if resp.next_page_token.empty? + + req.next_page_token = resp.next_page_token + end + end + end + + def signal_workflow(input) + @client.workflow_service.signal_workflow_execution( + Api::WorkflowService::V1::SignalWorkflowExecutionRequest.new( + namespace: @client.namespace, + workflow_execution: Api::Common::V1::WorkflowExecution.new( + workflow_id: input.workflow_id, + run_id: input.run_id || '' + ), + signal_name: input.signal, + input: @client.data_converter.to_payloads(input.args), + header: input.headers, + identity: @client.connection.identity, + request_id: SecureRandom.uuid + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + nil + end + + def query_workflow(input) + begin + resp = @client.workflow_service.query_workflow( + Api::WorkflowService::V1::QueryWorkflowRequest.new( + namespace: @client.namespace, + execution: Api::Common::V1::WorkflowExecution.new( + workflow_id: input.workflow_id, + run_id: input.run_id || '' + ), + query: Api::Query::V1::WorkflowQuery.new( + query_type: input.query, + query_args: @client.data_converter.to_payloads(input.args), + header: input.headers + ), + query_reject_condition: input.reject_condition || 0 + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + rescue Error::RPCError => e + # If the status is INVALID_ARGUMENT, we can assume it's a query failed + # error + raise Error::WorkflowQueryFailedError, e.message if e.code == Error::RPCError::Code::INVALID_ARGUMENT + + raise + end + unless resp.query_rejected.nil? 
+ raise Error::WorkflowQueryRejectedError.new(status: ProtoUtils.enum_to_int( + Api::Enums::V1::WorkflowExecutionStatus, resp.query_rejected.status + )) + end + + results = @client.data_converter.from_payloads(resp.query_result) + warn("Expected 0 or 1 query result, got #{results.size}") if results.size > 1 + results&.first + end + + def start_workflow_update(input) + if input.wait_for_stage == Temporalio::Client::WorkflowUpdateWaitStage::ADMITTED + raise ArgumentError, 'ADMITTED wait stage not supported' + end + + req = Api::WorkflowService::V1::UpdateWorkflowExecutionRequest.new( + namespace: @client.namespace, + workflow_execution: Api::Common::V1::WorkflowExecution.new( + workflow_id: input.workflow_id, + run_id: input.run_id || '' + ), + request: Api::Update::V1::Request.new( + meta: Api::Update::V1::Meta.new( + update_id: input.update_id, + identity: @client.connection.identity + ), + input: Api::Update::V1::Input.new( + name: input.update, + args: @client.data_converter.to_payloads(input.args), + header: input.headers + ) + ), + wait_policy: Api::Update::V1::WaitPolicy.new( + lifecycle_stage: input.wait_for_stage + ) + ) + + # Repeatedly try to invoke start until the update reaches user-provided + # wait stage or is at least ACCEPTED (as of the time of this writing, + # the user cannot specify sooner than ACCEPTED) + # @type var resp: untyped + resp = nil + loop do + resp = @client.workflow_service.update_workflow_execution( + req, + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + + # We're only done if the response stage is after the requested stage + # or the response stage is accepted + if resp.stage >= req.wait_policy.lifecycle_stage || + resp.stage >= Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED + break + end + rescue Error::RPCError => e + # Deadline exceeded or cancel is a special error type + if e.code == Error::RPCError::Code::DEADLINE_EXCEEDED || e.code == Error::RPCError::Code::CANCELLED + raise 
Error::WorkflowUpdateRPCTimeoutOrCanceledError + end + + raise + end + + # If the user wants to wait until completed, we must poll until outcome + # if not already there + if input.wait_for_stage == Temporalio::Client::WorkflowUpdateWaitStage::COMPLETED && !resp.outcome + resp.outcome = @client._impl.poll_workflow_update( + Temporalio::Client::Interceptor::PollWorkflowUpdateInput.new( + workflow_id: input.workflow_id, + run_id: input.run_id, + update_id: input.update_id, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + ) + end + + Temporalio::Client::WorkflowUpdateHandle.new( + client: @client, + id: input.update_id, + workflow_id: input.workflow_id, + workflow_run_id: input.run_id, + known_outcome: resp.outcome + ) + end + + def poll_workflow_update(input) + req = Api::WorkflowService::V1::PollWorkflowExecutionUpdateRequest.new( + namespace: @client.namespace, + update_ref: Api::Update::V1::UpdateRef.new( + workflow_execution: Api::Common::V1::WorkflowExecution.new( + workflow_id: input.workflow_id, + run_id: input.run_id || '' + ), + update_id: input.update_id + ), + identity: @client.connection.identity, + wait_policy: Api::Update::V1::WaitPolicy.new( + lifecycle_stage: Temporalio::Client::WorkflowUpdateWaitStage::COMPLETED + ) + ) + + # Continue polling as long as we have no outcome + loop do + resp = @client.workflow_service.poll_workflow_execution_update( + req, + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + return resp.outcome if resp.outcome + rescue Error::RPCError => e + # Deadline exceeded or cancel is a special error type + if e.code == Error::RPCError::Code::DEADLINE_EXCEEDED || e.code == Error::RPCError::Code::CANCELLED + raise Error::WorkflowUpdateRPCTimeoutOrCanceledError + end + + raise + end + end + + def cancel_workflow(input) + @client.workflow_service.request_cancel_workflow_execution( + Api::WorkflowService::V1::RequestCancelWorkflowExecutionRequest.new( + namespace: 
@client.namespace, + workflow_execution: Api::Common::V1::WorkflowExecution.new( + workflow_id: input.workflow_id, + run_id: input.run_id || '' + ), + first_execution_run_id: input.first_execution_run_id, + identity: @client.connection.identity, + request_id: SecureRandom.uuid + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + nil + end + + def terminate_workflow(input) + @client.workflow_service.terminate_workflow_execution( + Api::WorkflowService::V1::TerminateWorkflowExecutionRequest.new( + namespace: @client.namespace, + workflow_execution: Api::Common::V1::WorkflowExecution.new( + workflow_id: input.workflow_id, + run_id: input.run_id || '' + ), + reason: input.reason || '', + first_execution_run_id: input.first_execution_run_id, + details: @client.data_converter.to_payloads(input.details), + identity: @client.connection.identity + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + nil + end + + def heartbeat_async_activity(input) + resp = if input.task_token_or_id_reference.is_a?(Temporalio::Client::ActivityIDReference) + @client.workflow_service.record_activity_task_heartbeat_by_id( + Api::WorkflowService::V1::RecordActivityTaskHeartbeatByIdRequest.new( + workflow_id: input.task_token_or_id_reference.workflow_id, + run_id: input.task_token_or_id_reference.run_id, + activity_id: input.task_token_or_id_reference.activity_id, + namespace: @client.namespace, + identity: @client.connection.identity, + details: @client.data_converter.to_payloads(input.details) + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + else + @client.workflow_service.record_activity_task_heartbeat( + Api::WorkflowService::V1::RecordActivityTaskHeartbeatRequest.new( + task_token: input.task_token_or_id_reference, + namespace: @client.namespace, + identity: @client.connection.identity, + details: @client.data_converter.to_payloads(input.details) + ), + 
rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + end + raise Error::AsyncActivityCanceledError if resp.cancel_requested + + nil + end + + def complete_async_activity(input) + if input.task_token_or_id_reference.is_a?(Temporalio::Client::ActivityIDReference) + @client.workflow_service.respond_activity_task_completed_by_id( + Api::WorkflowService::V1::RespondActivityTaskCompletedByIdRequest.new( + workflow_id: input.task_token_or_id_reference.workflow_id, + run_id: input.task_token_or_id_reference.run_id, + activity_id: input.task_token_or_id_reference.activity_id, + namespace: @client.namespace, + identity: @client.connection.identity, + result: @client.data_converter.to_payloads([input.result]) + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + else + @client.workflow_service.respond_activity_task_completed( + Api::WorkflowService::V1::RespondActivityTaskCompletedRequest.new( + task_token: input.task_token_or_id_reference, + namespace: @client.namespace, + identity: @client.connection.identity, + result: @client.data_converter.to_payloads([input.result]) + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + end + nil + end + + def fail_async_activity(input) + if input.task_token_or_id_reference.is_a?(Temporalio::Client::ActivityIDReference) + @client.workflow_service.respond_activity_task_failed_by_id( + Api::WorkflowService::V1::RespondActivityTaskFailedByIdRequest.new( + workflow_id: input.task_token_or_id_reference.workflow_id, + run_id: input.task_token_or_id_reference.run_id, + activity_id: input.task_token_or_id_reference.activity_id, + namespace: @client.namespace, + identity: @client.connection.identity, + failure: @client.data_converter.to_failure(input.error), + last_heartbeat_details: if input.last_heartbeat_details.empty? 
+ nil + else + @client.data_converter.to_payloads(input.last_heartbeat_details) + end + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + else + @client.workflow_service.respond_activity_task_failed( + Api::WorkflowService::V1::RespondActivityTaskFailedRequest.new( + task_token: input.task_token_or_id_reference, + namespace: @client.namespace, + identity: @client.connection.identity, + failure: @client.data_converter.to_failure(input.error), + last_heartbeat_details: if input.last_heartbeat_details.empty? + nil + else + @client.data_converter.to_payloads(input.last_heartbeat_details) + end + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + end + nil + end + + def report_cancellation_async_activity(input) + if input.task_token_or_id_reference.is_a?(Temporalio::Client::ActivityIDReference) + @client.workflow_service.respond_activity_task_canceled_by_id( + Api::WorkflowService::V1::RespondActivityTaskCanceledByIdRequest.new( + workflow_id: input.task_token_or_id_reference.workflow_id, + run_id: input.task_token_or_id_reference.run_id, + activity_id: input.task_token_or_id_reference.activity_id, + namespace: @client.namespace, + identity: @client.connection.identity, + details: @client.data_converter.to_payloads(input.details) + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + else + @client.workflow_service.respond_activity_task_canceled( + Api::WorkflowService::V1::RespondActivityTaskCanceledRequest.new( + task_token: input.task_token_or_id_reference, + namespace: @client.namespace, + identity: @client.connection.identity, + details: @client.data_converter.to_payloads(input.details) + ), + rpc_retry: true, + rpc_metadata: input.rpc_metadata, + rpc_timeout: input.rpc_timeout + ) + end + nil + end + end + end + end +end diff --git a/temporalio/lib/temporalio/internal/proto_utils.rb 
b/temporalio/lib/temporalio/internal/proto_utils.rb index 1e84eff4..893860aa 100644 --- a/temporalio/lib/temporalio/internal/proto_utils.rb +++ b/temporalio/lib/temporalio/internal/proto_utils.rb @@ -4,9 +4,7 @@ module Temporalio module Internal - # @!visibility private module ProtoUtils - # @!visibility private def self.seconds_to_duration(seconds_float) return nil if seconds_float.nil? @@ -15,26 +13,22 @@ def self.seconds_to_duration(seconds_float) Google::Protobuf::Duration.new(seconds:, nanos:) end - # @!visibility private def self.memo_to_proto(hash, converter) return nil if hash.nil? Api::Common::V1::Memo.new(fields: hash.transform_values { |val| converter.to_payload(val) }) end - # @!visibility private def self.memo_from_proto(memo, converter) return nil if memo.nil? memo.fields.each_with_object({}) { |(key, val), h| h[key] = converter.from_payload(val) } # rubocop:disable Style/HashTransformValues end - # @!visibility private def self.string_or(str, default = nil) str && !str.empty? ? str : default end - # @!visibility private def self.enum_to_int(enum_mod, enum_val, zero_means_nil: false) # Per https://protobuf.dev/reference/ruby/ruby-generated/#enum when # enums are read back, they are symbols if they are known or number @@ -43,6 +37,18 @@ def self.enum_to_int(enum_mod, enum_val, zero_means_nil: false) enum_val = nil if zero_means_nil && enum_val.zero? enum_val end + + def self.convert_from_payload_array(converter, payloads) + return [] if payloads.empty? + + converter.from_payloads(Api::Common::V1::Payloads.new(payloads:)) + end + + def self.convert_to_payload_array(converter, values) + return [] if values.empty? 
+ + converter.to_payloads(values).payloads.to_ary + end end end end diff --git a/temporalio/lib/temporalio/internal/worker/activity_worker.rb b/temporalio/lib/temporalio/internal/worker/activity_worker.rb new file mode 100644 index 00000000..7679d7c7 --- /dev/null +++ b/temporalio/lib/temporalio/internal/worker/activity_worker.rb @@ -0,0 +1,348 @@ +# frozen_string_literal: true + +require 'temporalio/activity' +require 'temporalio/activity/definition' +require 'temporalio/cancellation' +require 'temporalio/internal/bridge/api' +require 'temporalio/internal/proto_utils' +require 'temporalio/scoped_logger' +require 'temporalio/worker/interceptor' + +module Temporalio + module Internal + module Worker + class ActivityWorker + LOG_TASKS = false + + attr_reader :worker, :bridge_worker + + def initialize(worker, bridge_worker) + @worker = worker + @bridge_worker = bridge_worker + + # Create shared logger that gives scoped activity details + @scoped_logger = ScopedLogger.new(@worker.options.logger) + @scoped_logger.scoped_values_getter = proc { + Activity::Context.current_or_nil&._scoped_logger_info + } + + # Build up activity hash by name, failing if any fail validation + @activities = worker.options.activities.each_with_object({}) do |act, hash| + # Class means create each time, instance means just call, definition + # does nothing special + defn = Activity::Definition.from_activity(act) + # Confirm name not in use + raise ArgumentError, "Multiple activities named #{defn.name}" if hash.key?(defn.name) + + # Confirm executor is a known executor and let it initialize + executor = worker.options.activity_executors[defn.executor] + raise ArgumentError, "Unknown executor '#{defn.executor}'" if executor.nil? 
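The registration loop above folds `worker.options.activities` into a name-keyed hash, failing fast on duplicate names and unknown executors. A standalone sketch of that fold-with-validation pattern, using a hypothetical `Definition` struct and executor list in place of the SDK types:

```ruby
# Hypothetical stand-in for Temporalio::Activity::Definition.
Definition = Struct.new(:name, :executor, keyword_init: true)

KNOWN_EXECUTORS = %i[default thread_pool].freeze

# Fold definitions into a hash keyed by name, validating as we go.
def build_registry(definitions)
  definitions.each_with_object({}) do |defn, hash|
    raise ArgumentError, "Multiple activities named #{defn.name}" if hash.key?(defn.name)
    raise ArgumentError, "Unknown executor '#{defn.executor}'" unless KNOWN_EXECUTORS.include?(defn.executor)

    hash[defn.name] = defn
  end
end

registry = build_registry([
  Definition.new(name: 'SayHello', executor: :default),
  Definition.new(name: 'SayGoodbye', executor: :thread_pool)
])
```

Validating at worker construction time means a misconfigured activity set raises immediately rather than failing at first poll.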
+ + executor.initialize_activity(defn) + + hash[defn.name] = defn + end + + # Need mutex for the rest of these + @running_activities_mutex = Mutex.new + @running_activities = {} + @running_activities_empty_condvar = ConditionVariable.new + end + + def set_running_activity(task_token, activity) + @running_activities_mutex.synchronize do + @running_activities[task_token] = activity + end + end + + def get_running_activity(task_token) + @running_activities_mutex.synchronize do + @running_activities[task_token] + end + end + + def remove_running_activity(task_token) + @running_activities_mutex.synchronize do + @running_activities.delete(task_token) + @running_activities_empty_condvar.broadcast if @running_activities.empty? + end + end + + def wait_all_complete + @running_activities_mutex.synchronize do + @running_activities_empty_condvar.wait(@running_activities_mutex) until @running_activities.empty? + end + end + + def handle_task(task) + @scoped_logger.debug("Received activity task: #{task}") if LOG_TASKS + if !task.start.nil? + handle_start_task(task.task_token, task.start) + elsif !task.cancel.nil? + handle_cancel_task(task.task_token, task.cancel) + else + raise "Unrecognized activity task: #{task}" + end + end + + def handle_start_task(task_token, start) + set_running_activity(task_token, nil) + + # Find activity definition + defn = @activities[start.activity_type] + if defn.nil? 
+ raise Error::ApplicationError.new( + "Activity #{start.activity_type} for workflow #{start.workflow_execution.workflow_id} " \ + "is not registered on this worker, available activities: #{@activities.keys.sort.join(', ')}", + type: 'NotFoundError' + ) + end + + # Run everything else in the executor + executor = @worker.options.activity_executors[defn.executor] + executor.execute_activity(defn) do + # Set current executor + Activity::Context._current_executor = executor + # Execute with error handling + execute_activity(task_token, defn, start) + ensure + # Unset at the end + Activity::Context._current_executor = nil + end + rescue Exception => e # rubocop:disable Lint/RescueException We are intending to catch everything here + remove_running_activity(task_token) + @scoped_logger.warn("Failed starting activity #{start.activity_type}") + @scoped_logger.warn(e) + + # We need to complete the activity task as failed, but this is on the + # hot path for polling, so we want to complete it in the background + begin + @bridge_worker.complete_activity_task_in_background( + Bridge::Api::CoreInterface::ActivityTaskCompletion.new( + task_token:, + result: Bridge::Api::ActivityResult::ActivityExecutionResult.new( + failed: Bridge::Api::ActivityResult::Failure.new( + # TODO(cretz): If failure conversion does slow failure + # encoding, it can gum up the system + failure: @worker.options.client.data_converter.to_failure(e) + ) + ) + ) + ) + rescue StandardError => e_inner + @scoped_logger.error("Failed building start failure to return for #{start.activity_type}") + @scoped_logger.error(e_inner) + end + end + + def handle_cancel_task(task_token, cancel) + activity = get_running_activity(task_token) + if activity.nil? 
+ @scoped_logger.warn("Cannot find activity to cancel for token #{task_token}") + return + end + activity._server_requested_cancel = true + _, cancel_proc = activity.cancellation + begin + cancel_proc.call(reason: cancel.reason.to_s) + rescue StandardError => e + @scoped_logger.warn("Failed cancelling activity #{activity.info.activity_type} \ + with ID #{activity.info.activity_id}") + @scoped_logger.warn(e) + end + end + + def execute_activity(task_token, defn, start) + # Build info + info = Activity::Info.new( + activity_id: start.activity_id, + activity_type: start.activity_type, + attempt: start.attempt, + current_attempt_scheduled_time: start.current_attempt_scheduled_time.to_time, + heartbeat_details: ProtoUtils.convert_from_payload_array( + @worker.options.client.data_converter, + start.heartbeat_details.to_ary + ), + heartbeat_timeout: start.heartbeat_timeout&.to_f, + local?: start.is_local, + schedule_to_close_timeout: start.schedule_to_close_timeout&.to_f, + scheduled_time: start.scheduled_time.to_time, + start_to_close_timeout: start.start_to_close_timeout&.to_f, + started_time: start.started_time.to_time, + task_queue: @worker.options.task_queue, + task_token:, + workflow_id: start.workflow_execution.workflow_id, + workflow_namespace: start.workflow_namespace, + workflow_run_id: start.workflow_execution.run_id, + workflow_type: start.workflow_type + ).freeze + + # Build input + input = Temporalio::Worker::Interceptor::ExecuteActivityInput.new( + proc: defn.proc, + args: ProtoUtils.convert_from_payload_array( + @worker.options.client.data_converter, + start.input.to_ary + ), + headers: start.header_fields + ) + + # Run + activity = RunningActivity.new( + info:, + cancellation: Cancellation.new, + worker_shutdown_cancellation: @worker._worker_shutdown_cancellation, + payload_converter: @worker.options.client.data_converter.payload_converter, + logger: @scoped_logger, + definition: defn + ) + Activity::Context._current_executor&.activity_context = activity 
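`set_running_activity`, `remove_running_activity`, and `wait_all_complete` above coordinate through a mutex-guarded hash plus a condition variable that is broadcast whenever the hash drains. A minimal sketch of that drain-wait pattern (the `Tracker` class here is hypothetical, not an SDK type):

```ruby
# Tracks in-flight work and lets a caller block until it all completes.
class Tracker
  def initialize
    @mutex = Mutex.new
    @running = {}
    @empty_condvar = ConditionVariable.new
  end

  def add(token, value)
    @mutex.synchronize { @running[token] = value }
  end

  def remove(token)
    @mutex.synchronize do
      @running.delete(token)
      # Wake any waiters once the map is drained
      @empty_condvar.broadcast if @running.empty?
    end
  end

  def wait_all_complete
    @mutex.synchronize do
      @empty_condvar.wait(@mutex) until @running.empty?
    end
  end

  def size
    @mutex.synchronize { @running.size }
  end
end

tracker = Tracker.new
tracker.add(:a, 1)
finisher = Thread.new do
  sleep 0.05
  tracker.remove(:a)
end
tracker.wait_all_complete
finisher.join
```

The `until` loop around `wait` guards against spurious wakeups, which is why the condition is rechecked after every broadcast.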
+ set_running_activity(task_token, activity) + run_activity(activity, input) + rescue Exception => e # rubocop:disable Lint/RescueException We are intending to catch everything here + @scoped_logger.warn("Failed starting or sending completion for activity #{start.activity_type}") + @scoped_logger.warn(e) + # This means that the activity couldn't start or send completion (run + # handles its own errors). + begin + @bridge_worker.complete_activity_task( + Bridge::Api::CoreInterface::ActivityTaskCompletion.new( + task_token:, + result: Bridge::Api::ActivityResult::ActivityExecutionResult.new( + failed: Bridge::Api::ActivityResult::Failure.new( + failure: @worker.options.client.data_converter.to_failure(e) + ) + ) + ) + ) + rescue StandardError => e_inner + @scoped_logger.error("Failed sending failure for activity #{start.activity_type}") + @scoped_logger.error(e_inner) + end + ensure + Activity::Context._current_executor&.activity_context = nil + remove_running_activity(task_token) + end + + def run_activity(activity, input) + result = begin + # Build impl with interceptors + # @type var impl: Temporalio::Worker::Interceptor::ActivityInbound + impl = InboundImplementation.new(self) + impl = @worker._all_interceptors.reverse_each.reduce(impl) do |acc, int| + int.intercept_activity(acc) + end + impl.init(OutboundImplementation.new(self)) + + # Execute + result = impl.execute(input) + + # Success + Bridge::Api::ActivityResult::ActivityExecutionResult.new( + completed: Bridge::Api::ActivityResult::Success.new( + result: @worker.options.client.data_converter.to_payload(result) + ) + ) + rescue Exception => e # rubocop:disable Lint/RescueException We are intending to catch everything here + if e.is_a?(Activity::CompleteAsyncError) + # Wanting to complete async + @scoped_logger.debug('Completing activity asynchronously') + Bridge::Api::ActivityResult::ActivityExecutionResult.new( + will_complete_async: Bridge::Api::ActivityResult::WillCompleteAsync.new + ) + elsif 
e.is_a?(Error::CanceledError) && activity._server_requested_cancel + # Server requested cancel + @scoped_logger.debug('Completing activity as canceled') + Bridge::Api::ActivityResult::ActivityExecutionResult.new( + cancelled: Bridge::Api::ActivityResult::Cancellation.new( + failure: @worker.options.client.data_converter.to_failure(e) + ) + ) + else + # General failure + @scoped_logger.warn('Completing activity as failed') + @scoped_logger.warn(e) + Bridge::Api::ActivityResult::ActivityExecutionResult.new( + failed: Bridge::Api::ActivityResult::Failure.new( + failure: @worker.options.client.data_converter.to_failure(e) + ) + ) + end + end + + @scoped_logger.debug("Sending activity completion: #{result}") if LOG_TASKS + @bridge_worker.complete_activity_task( + Bridge::Api::CoreInterface::ActivityTaskCompletion.new( + task_token: activity.info.task_token, + result: + ) + ) + end + + class RunningActivity < Activity::Context + attr_reader :info, :cancellation, :worker_shutdown_cancellation, :payload_converter, :logger, :definition + attr_accessor :_outbound_impl, :_server_requested_cancel + + def initialize( # rubocop:disable Lint/MissingSuper + info:, + cancellation:, + worker_shutdown_cancellation:, + payload_converter:, + logger:, + definition: + ) + @info = info + @cancellation = cancellation + @worker_shutdown_cancellation = worker_shutdown_cancellation + @payload_converter = payload_converter + @logger = logger + @definition = definition + @_outbound_impl = nil + @_server_requested_cancel = false + end + + def heartbeat(*details) + raise 'Implementation not set yet' if _outbound_impl.nil? 
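`run_activity` above maps raised exceptions onto result variants: `CompleteAsyncError` becomes `will_complete_async`, a `CanceledError` with a server-requested cancel becomes `cancelled`, and everything else becomes `failed`. A condensed sketch of that classification, with hypothetical stand-ins for the SDK's error and result types:

```ruby
# Hypothetical stand-ins for the SDK's error types.
class CompleteAsyncError < StandardError; end
class CanceledError < StandardError; end

Outcome = Struct.new(:kind, :error)

# Classify an activity outcome the way run_activity does.
def classify(error, server_requested_cancel:)
  case error
  when nil
    Outcome.new(:completed, nil)
  when CompleteAsyncError
    # The activity will be completed later via an async activity handle
    Outcome.new(:will_complete_async, nil)
  when CanceledError
    # Only report cancellation when the server actually asked for it;
    # a locally raised cancel is still a failure
    server_requested_cancel ? Outcome.new(:cancelled, error) : Outcome.new(:failed, error)
  else
    Outcome.new(:failed, error)
  end
end
```

The server-requested-cancel check matters: without it, any locally raised cancellation would be reported to the server as a cooperative cancel rather than a failure.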
+ + _outbound_impl.heartbeat(Temporalio::Worker::Interceptor::HeartbeatActivityInput.new(details:)) + end + end + + class InboundImplementation < Temporalio::Worker::Interceptor::ActivityInbound + def initialize(worker) + super(nil) # steep:ignore + @worker = worker + end + + def init(outbound) + context = Activity::Context.current + raise 'Unexpected context type' unless context.is_a?(RunningActivity) + + context._outbound_impl = outbound + end + + def execute(input) + input.proc.call(*input.args) + end + end + + class OutboundImplementation < Temporalio::Worker::Interceptor::ActivityOutbound + def initialize(worker) + super(nil) # steep:ignore + @worker = worker + end + + def heartbeat(input) + @worker.bridge_worker.record_activity_heartbeat( + Bridge::Api::CoreInterface::ActivityHeartbeat.new( + task_token: Activity::Context.current.info.task_token, + details: ProtoUtils.convert_to_payload_array(@worker.worker.options.client.data_converter, + input.details) + ).to_proto + ) + end + end + end + end + end +end diff --git a/temporalio/lib/temporalio/internal/worker/multi_runner.rb b/temporalio/lib/temporalio/internal/worker/multi_runner.rb new file mode 100644 index 00000000..3a162479 --- /dev/null +++ b/temporalio/lib/temporalio/internal/worker/multi_runner.rb @@ -0,0 +1,161 @@ +# frozen_string_literal: true + +require 'singleton' +require 'temporalio/internal/bridge/worker' + +module Temporalio + module Internal + module Worker + class MultiRunner + def initialize(workers:) + @workers = workers + @queue = Queue.new + + @shutdown_initiated_mutex = Mutex.new + @shutdown_initiated = false + + # Start pollers + Bridge::Worker.async_poll_all(workers.map(&:_bridge_worker), @queue) + end + + def apply_thread_or_fiber_block(&) + return unless block_given? 
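`apply_thread_or_fiber_block` runs the caller's block on a thread (or on a fiber when a fiber scheduler is installed) and funnels either a `BlockSuccess` or `BlockFailure` event into the shared queue. A pared-down, thread-only sketch of that funnel, using hypothetical `Success`/`Failure` wrappers:

```ruby
Success = Struct.new(:result)
Failure = Struct.new(:error)

# Run a block on a background thread, funneling the outcome into the queue.
def run_in_background(queue, &block)
  Thread.new do
    queue.push(Success.new(block.call))
  rescue Exception => e # catch everything, as the worker runner intentionally does
    queue.push(Failure.new(e))
  end
end

queue = Queue.new
run_in_background(queue) { 1 + 1 }
event = queue.pop
```

Because `Queue#pop` blocks, the consumer loop needs no extra synchronization: the producer thread and any pollers can all push into the same queue.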
+ + @thread_or_fiber = if Fiber.current_scheduler + Fiber.schedule do + @queue.push(Event::BlockSuccess.new(result: yield)) + rescue InjectEventForTesting => e + @queue.push(e.event) + @queue.push(Event::BlockSuccess.new(result: e)) + rescue Exception => e # rubocop:disable Lint/RescueException Intentionally catch all + @queue.push(Event::BlockFailure.new(error: e)) + end + else + Thread.new do + @queue.push(Event::BlockSuccess.new(result: yield)) + rescue InjectEventForTesting => e + @queue.push(e.event) + @queue.push(Event::BlockSuccess.new(result: e)) + rescue Exception => e # rubocop:disable Lint/RescueException Intentionally catch all + @queue.push(Event::BlockFailure.new(error: e)) + end + end + end + + def raise_in_thread_or_fiber_block(error) + @thread_or_fiber&.raise(error) + end + + # Clarify this is the only thread-safe function here + def initiate_shutdown + should_call = @shutdown_initiated_mutex.synchronize do + break false if @shutdown_initiated + + @shutdown_initiated = true + end + return unless should_call + + @workers.each(&:_initiate_shutdown) + end + + def wait_complete_and_finalize_shutdown + # Wait for them all to complete + @workers.each(&:_wait_all_complete) + + # Finalize them all + Bridge::Worker.finalize_shutdown_all(@workers.map(&:_bridge_worker)) + end + + # Intentionally not an enumerable/enumerator since stop semantics are + # caller determined + def next_event + # Queue value is one of the following: + # * Event - non-poller event + # * [worker index, :activity/:workflow, bytes] - poll success + # * [worker index, :activity/:workflow, error] - poll fail + # * [worker index, :activity/:workflow, nil] - worker shutdown + # * [nil, nil, nil] - all pollers done + result = @queue.pop + if result.is_a?(Event) + result + else + worker_index, worker_type, poll_result = result + if worker_index.nil? || worker_type.nil? 
+ Event::AllPollersShutDown.instance + else + worker = @workers[worker_index] + case poll_result + when nil + Event::PollerShutDown.new(worker:, worker_type:) + when Exception + Event::PollFailure.new(worker:, worker_type:, error: poll_result) + else + Event::PollSuccess.new(worker:, worker_type:, bytes: poll_result) + end + end + end + end + + class Event + class PollSuccess < Event + attr_reader :worker, :worker_type, :bytes + + def initialize(worker:, worker_type:, bytes:) # rubocop:disable Lint/MissingSuper + @worker = worker + @worker_type = worker_type + @bytes = bytes + end + end + + class PollFailure < Event + attr_reader :worker, :worker_type, :error + + def initialize(worker:, worker_type:, error:) # rubocop:disable Lint/MissingSuper + @worker = worker + @worker_type = worker_type + @error = error + end + end + + class PollerShutDown < Event + attr_reader :worker, :worker_type + + def initialize(worker:, worker_type:) # rubocop:disable Lint/MissingSuper + @worker = worker + @worker_type = worker_type + end + end + + class AllPollersShutDown < Event + include Singleton + end + + class BlockSuccess < Event + attr_reader :result + + def initialize(result:) # rubocop:disable Lint/MissingSuper + @result = result + end + end + + class BlockFailure < Event + attr_reader :error + + def initialize(error:) # rubocop:disable Lint/MissingSuper + @error = error + end + end + end + + class InjectEventForTesting < Temporalio::Error + attr_reader :event + + def initialize(event) + super('Injecting event for testing') + @event = event + end + end + end + end + end +end diff --git a/temporalio/lib/temporalio/scoped_logger.rb b/temporalio/lib/temporalio/scoped_logger.rb new file mode 100644 index 00000000..a13582bc --- /dev/null +++ b/temporalio/lib/temporalio/scoped_logger.rb @@ -0,0 +1,96 @@ +# frozen_string_literal: true + +require 'delegate' +require 'logger' + +module Temporalio + # Implementation via delegator to {::Logger} that puts scoped values on the log message 
+  # and appends them to each message.
+  class ScopedLogger < SimpleDelegator
+    # @!attribute scoped_values_getter
+    #   @return [Proc, nil] Proc to call to get scoped values when needed.
+    attr_accessor :scoped_values_getter
+
+    # @!attribute disable_scoped_values
+    #   @return [Boolean] Whether the scoped value appending is disabled.
+    attr_accessor :disable_scoped_values
+
+    # @see Logger.add
+    def add(severity, message = nil, progname = nil)
+      return true if (severity || Logger::UNKNOWN) < level
+      return super if scoped_values_getter.nil? || @disable_scoped_values
+
+      scoped_values = scoped_values_getter.call
+      return super if scoped_values.nil?
+
+      if message.nil?
+        if block_given?
+          message = yield
+        else
+          message = progname
+          progname = nil
+        end
+      end
+      # For exceptions we need to dup and append here; for everything else we
+      # wrap in a LogMessage that appends the values on inspect
+      new_message = if message.is_a?(Exception)
+                      message.exception("#{message.message} #{scoped_values}")
+                    else
+                      LogMessage.new(message, scoped_values)
+                    end
+      super(severity, new_message, progname)
+    end
+    alias log add
+
+    # @see Logger.debug
+    def debug(progname = nil, &)
+      add(Logger::DEBUG, nil, progname, &)
+    end
+
+    # @see Logger.info
+    def info(progname = nil, &)
+      add(Logger::INFO, nil, progname, &)
+    end
+
+    # @see Logger.warn
+    def warn(progname = nil, &)
+      add(Logger::WARN, nil, progname, &)
+    end
+
+    # @see Logger.error
+    def error(progname = nil, &)
+      add(Logger::ERROR, nil, progname, &)
+    end
+
+    # @see Logger.fatal
+    def fatal(progname = nil, &)
+      add(Logger::FATAL, nil, progname, &)
+    end
+
+    # @see Logger.unknown
+    def unknown(progname = nil, &)
+      add(Logger::UNKNOWN, nil, progname, &)
+    end
+
+    # Scoped log message wrapping original log message.
+    class LogMessage
+      # @return [Object] Original log message.
+      attr_reader :message
+
+      # @return [Object] Scoped values.
+      attr_reader :scoped_values
+
+      # @!visibility private
+      def initialize(message, scoped_values)
+        @message = message
+        @scoped_values = scoped_values
+      end
+
+      # @return [String] Message with scoped values appended.
+      def inspect
+        message_str = message.is_a?(String) ? message : message.inspect
+        "#{message_str} #{scoped_values}"
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/testing/workflow_environment.rb b/temporalio/lib/temporalio/testing/workflow_environment.rb
index fa7d81b4..e426fe77 100644
--- a/temporalio/lib/temporalio/testing/workflow_environment.rb
+++ b/temporalio/lib/temporalio/testing/workflow_environment.rb
@@ -24,6 +24,9 @@ class WorkflowEnvironment
       # @param namespace [String] Namespace for the server.
       # @param data_converter [Converters::DataConverter] Data converter for the client.
      # @param interceptors [Array] Interceptors for the client.
+      # @param logger [Logger] Logger for the client.
+      # @param default_workflow_query_reject_condition [WorkflowQueryRejectCondition, nil] Default rejection condition
+      #   for the client.
       # @param ip [String] IP to bind to.
       # @param port [Integer, nil] Port to bind on, or +nil+ for random.
       # @param ui [Boolean] If +true+, also starts the UI.
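The `ScopedLogger` added above is a `SimpleDelegator` around `::Logger` that appends scoped values to each message. A minimal, self-contained sketch of the same idea (`MiniScopedLogger` is a hypothetical stand-in for illustration, not the SDK class):

```ruby
require 'delegate'
require 'logger'
require 'stringio'

# Minimal sketch of the ScopedLogger idea -- NOT the real SDK class.
# A SimpleDelegator around ::Logger that appends scoped values (e.g.
# activity or workflow identifiers) to every message it emits.
class MiniScopedLogger < SimpleDelegator
  attr_accessor :scoped_values_getter

  def info(message)
    values = scoped_values_getter&.call
    message = "#{message} #{values}" unless values.nil?
    super(message)
  end
end

out = StringIO.new
logger = MiniScopedLogger.new(Logger.new(out))
logger.scoped_values_getter = proc { { activity_id: 'act-1', attempt: 2 } }
logger.info('Task started')
puts out.string
```

The real class hooks `add` instead of each severity helper, so one override covers every log call.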
@@ -48,7 +51,8 @@ def self.start_local(
       namespace: 'default',
       data_converter: Converters::DataConverter.default,
       interceptors: [],
-      # TODO(cretz): More client connect options
+      logger: Logger.new($stdout, level: Logger::WARN),
+      default_workflow_query_reject_condition: nil,
       ip: '127.0.0.1',
       port: nil,
       ui: false, # rubocop:disable Naming/MethodParameterName
@@ -84,6 +88,8 @@ def self.start_local(
         namespace,
         data_converter:,
         interceptors:,
+        logger:,
+        default_workflow_query_reject_condition:,
         runtime:
       )
       server = Ephemeral.new(client, core_server)
diff --git a/temporalio/lib/temporalio/worker.rb b/temporalio/lib/temporalio/worker.rb
new file mode 100644
index 00000000..c34fff08
--- /dev/null
+++ b/temporalio/lib/temporalio/worker.rb
@@ -0,0 +1,415 @@
+# frozen_string_literal: true
+
+require 'digest'
+require 'temporalio/activity'
+require 'temporalio/cancellation'
+require 'temporalio/client'
+require 'temporalio/error'
+require 'temporalio/internal/bridge'
+require 'temporalio/internal/bridge/worker'
+require 'temporalio/internal/worker/activity_worker'
+require 'temporalio/internal/worker/multi_runner'
+require 'temporalio/worker/activity_executor'
+require 'temporalio/worker/interceptor'
+require 'temporalio/worker/tuner'
+
+module Temporalio
+  # Worker for processing activities and workflows on a task queue.
+  #
+  # Workers are created for a task queue and the items they can run. Then {run} is used for running a single worker, or
+  # {run_all} is used for a collection of workers. These can wait until a block is complete or a {Cancellation} is
+  # canceled.
+  class Worker
+    # Options as returned from {options} for `**to_h` splat use in {initialize}. See {initialize} for details.
+    Options = Struct.new(
+      :client,
+      :task_queue,
+      :activities,
+      :activity_executors,
+      :tuner,
+      :interceptors,
+      :build_id,
+      :identity,
+      :logger,
+      :max_cached_workflows,
+      :max_concurrent_workflow_task_polls,
+      :nonsticky_to_sticky_poll_ratio,
+      :max_concurrent_activity_task_polls,
+      :no_remote_activities,
+      :sticky_queue_schedule_to_start_timeout,
+      :max_heartbeat_throttle_interval,
+      :default_heartbeat_throttle_interval,
+      :max_activities_per_second,
+      :max_task_queue_activities_per_second,
+      :graceful_shutdown_period,
+      :use_worker_versioning,
+      keyword_init: true
+    )
+
+    # @return [String] Memoized default build ID. This default value is built as a checksum of all of the loaded Ruby
+    #   source files in `$LOADED_FEATURES`. Users may prefer to set the build ID to a better representation of the
+    #   source.
+    def self.default_build_id
+      @default_build_id ||= _load_default_build_id
+    end
+
+    # @!visibility private
+    def self._load_default_build_id
+      # The goal is to get a hash of runtime code, both Temporal's and the
+      # user's. After all options were explored, we have decided to default to
+      # hashing the contents of all required files. This means later/dynamic
+      # requires won't be accounted for because this is memoized. It also means
+      # the tiniest code change will affect this, which is what we want since
+      # this is meant to be a "binary checksum". We have chosen to use MD5 for
+      # speed, similarity with other SDKs, and because security is not a factor.
+      # TODO(cretz): Ensure Temporal bridge library is imported or something is
+      # off
+
+      $LOADED_FEATURES.each_with_object(Digest::MD5.new) do |file, digest|
+        digest.update(File.read(file)) if File.file?(file)
+      end.hexdigest
+    end
+
+    # Run all workers until cancellation or optional block completes. When the cancellation or block is complete, the
+    # workers are shut down. This will return the block result if everything is successful or raise an error if not.
+    # See {run} for details on how worker shutdown works.
+    #
+    # @param workers [Array<Worker>] Workers to run.
+    # @param cancellation [Cancellation] Cancellation that can be canceled to shut down all workers.
+    # @param raise_in_block_on_shutdown [Exception, nil] Exception to {::Thread.raise} or {::Fiber.raise} if a block is
+    #   present and still running on shutdown. If nil, `raise` is not used.
+    # @param wait_block_complete [Boolean] If block given and shutdown caused by something else (e.g. cancellation
+    #   canceled), whether to wait on the block to complete before returning.
+    # @yield Optional block. This will be run in a new background thread or fiber. Workers will shut down upon
+    #   completion of this and, assuming no other failures, return/bubble success/exception of the block.
+    # @return [Object] Return value of the block, or nil if no block is given.
+    def self.run_all(
+      *workers,
+      cancellation: Cancellation.new,
+      raise_in_block_on_shutdown: Error::CanceledError.new('Workers finished'),
+      wait_block_complete: true,
+      &block
+    )
+      # Confirm there is at least one and they are all workers
+      raise ArgumentError, 'At least one worker required' if workers.empty?
+      raise ArgumentError, 'Not all parameters are workers' unless workers.all? { |w| w.is_a?(Worker) }
+
+      Internal::Bridge.assert_fiber_compatibility!
+
+      # Start the multi runner
+      runner = Internal::Worker::MultiRunner.new(workers:)
+
+      # Apply block
+      runner.apply_thread_or_fiber_block(&block)
+
+      # Reuse first worker logger
+      logger = workers.first&.options&.logger or raise # Help steep
+
+      # On cancel, initiate shutdown
+      cancellation.add_cancel_callback do
+        logger.info('Cancel invoked, beginning worker shutdown')
+        runner.initiate_shutdown
+      end
+
+      # Poller loop, run until all pollers shut down
+      first_error = nil
+      block_result = nil
+      loop do
+        event = runner.next_event
+        case event
+        when Internal::Worker::MultiRunner::Event::PollSuccess
+          # Successful poll
+          event.worker._on_poll_bytes(event.worker_type, event.bytes)
+        when Internal::Worker::MultiRunner::Event::PollFailure
+          # Poll failure, this causes shutdown of all workers
+          logger.error('Poll failure (beginning worker shutdown if not already occurring)')
+          logger.error(event.error)
+          first_error ||= event.error
+          runner.initiate_shutdown
+        when Internal::Worker::MultiRunner::Event::PollerShutDown
+          # Individual poller shut down. Nothing to do here until we support
+          # worker status or something.
+        when Internal::Worker::MultiRunner::Event::AllPollersShutDown
+          # This is where we break the loop, no more polling can happen
+          break
+        when Internal::Worker::MultiRunner::Event::BlockSuccess
+          logger.info('Block completed, beginning worker shutdown')
+          block_result = event
+          runner.initiate_shutdown
+        when Internal::Worker::MultiRunner::Event::BlockFailure
+          logger.error('Block failure (beginning worker shutdown)')
+          logger.error(event.error)
+          block_result = event
+          first_error ||= event.error
+          runner.initiate_shutdown
+        else
+          raise "Unexpected event: #{event}"
+        end
+      end
+
+      # Now that all pollers have stopped, let's wait for all to complete
+      begin
+        runner.wait_complete_and_finalize_shutdown
+      rescue StandardError => e
+        logger.warn('Failed waiting and finalizing')
+        logger.warn(e)
+      end
+
+      # If there was a block but not a result yet, we want to raise if that is
+      # wanted, and wait if that is wanted
+      if block_given? && block_result.nil?
+        runner.raise_in_thread_or_fiber_block(raise_in_block_on_shutdown) unless raise_in_block_on_shutdown.nil?
+        if wait_block_complete
+          event = runner.next_event
+          case event
+          when Internal::Worker::MultiRunner::Event::BlockSuccess
+            logger.info('Block completed (after worker shutdown)')
+            block_result = event
+          when Internal::Worker::MultiRunner::Event::BlockFailure
+            logger.error('Block failure (after worker shutdown)')
+            logger.error(event.error)
+            block_result = event
+            first_error ||= event.error
+          else
+            raise "Unexpected event: #{event}"
+          end
+        end
+      end
+
+      # If there was a shutdown-causing error, we raise that
+      if !first_error.nil?
+        raise first_error
+      elsif block_result.is_a?(Internal::Worker::MultiRunner::Event::BlockSuccess)
+        block_result.result
+      end
+    end
+
+    # @return [Options] Frozen options for this worker which has the same attributes as {initialize}.
+    attr_reader :options
+
+    # Create a new worker. At least one activity or workflow must be present.
+    #
+    # @param client [Client] Client for this worker.
+    # @param task_queue [String] Task queue for this worker.
+    # @param activities [Array<Activity, Activity::Definition>] Activities for this worker.
+    # @param activity_executors [Hash<Symbol, ActivityExecutor>] Executors that activities can run within.
+    # @param tuner [Tuner] Tuner that controls the amount of concurrent activities/workflows that run at a time.
+    # @param interceptors [Array<Interceptor>] Interceptors specific to this worker. Note, interceptors set on the
+    #   client that include the {Interceptor} module are automatically included here, so no need to specify them again.
+    # @param build_id [String] Unique identifier for the current runtime. This is best set as a unique value
+    #   representing all code and should change only when code does. This can be something like a git commit hash. If
+    #   unset, default is hash of known Ruby code.
+    # @param identity [String, nil] Override the identity for this worker. If unset, client identity is used.
+    # @param logger [Logger] Logger to use for this worker. If unset, the client's logger is used.
+    # @param max_cached_workflows [Integer] Number of workflows held in cache for use by sticky task queue. If set to
+    #   0, workflow caching and sticky queuing are disabled.
+    # @param max_concurrent_workflow_task_polls [Integer] Maximum number of concurrent poll workflow task requests we
+    #   will perform at a time on this worker's task queue.
+    # @param nonsticky_to_sticky_poll_ratio [Float] `max_concurrent_workflow_task_polls` * this number = the number of
+    #   max pollers that will be allowed for the nonsticky queue when sticky tasks are enabled. If both defaults are
+    #   used, the sticky queue will allow 4 max pollers while the nonsticky queue will allow one. The minimum for
+    #   either poller is 1, so if `max_concurrent_workflow_task_polls` is 1 and sticky queues are enabled, there will
+    #   be 2 concurrent polls.
+    # @param max_concurrent_activity_task_polls [Integer] Maximum number of concurrent poll activity task requests we
+    #   will perform at a time on this worker's task queue.
+    # @param no_remote_activities [Boolean] If true, this worker will only handle workflow tasks and local activities;
+    #   it will not poll for activity tasks.
+    # @param sticky_queue_schedule_to_start_timeout [Float] How long a workflow task is allowed to sit on the sticky
+    #   queue before it is timed out and moved to the non-sticky queue where it may be picked up by any worker.
+    # @param max_heartbeat_throttle_interval [Float] Longest interval for throttling activity heartbeats.
+    # @param default_heartbeat_throttle_interval [Float] Default interval for throttling activity heartbeats in case
+    #   per-activity heartbeat timeout is unset. Otherwise, it's the per-activity heartbeat timeout * 0.8.
+    # @param max_activities_per_second [Float, nil] Limits the number of activities per second that this worker will
+    #   process. The worker will not poll for new activities if by doing so it might receive and execute an activity
+    #   which would cause it to exceed this limit.
+    # @param max_task_queue_activities_per_second [Float, nil] Sets the maximum number of activities per second the
+    #   task queue will dispatch, controlled server-side. Note that this only takes effect upon an activity poll
+    #   request. If multiple workers on the same queue have different values set, they will thrash with the last
+    #   poller winning.
+    # @param graceful_shutdown_period [Float] Amount of time after shutdown is called that activities are given to
+    #   complete before their tasks are canceled.
+    # @param use_worker_versioning [Boolean] If true, the `build_id` argument must be specified, and this worker opts
+    #   into the worker versioning feature. This ensures it only receives workflow tasks for workflows which it claims
+    #   to be compatible with. For more information, see https://docs.temporal.io/workers#worker-versioning.
+    def initialize(
+      client:,
+      task_queue:,
+      activities: [],
+      activity_executors: ActivityExecutor.defaults,
+      tuner: Tuner.create_fixed,
+      interceptors: [],
+      build_id: Worker.default_build_id,
+      identity: nil,
+      logger: client.options.logger,
+      max_cached_workflows: 1000,
+      max_concurrent_workflow_task_polls: 5,
+      nonsticky_to_sticky_poll_ratio: 0.2,
+      max_concurrent_activity_task_polls: 5,
+      no_remote_activities: false,
+      sticky_queue_schedule_to_start_timeout: 10,
+      max_heartbeat_throttle_interval: 60,
+      default_heartbeat_throttle_interval: 30,
+      max_activities_per_second: nil,
+      max_task_queue_activities_per_second: nil,
+      graceful_shutdown_period: 0,
+      use_worker_versioning: false
+    )
+      # TODO(cretz): Remove when workflows come about
+      raise ArgumentError, 'Must have at least one activity' if activities.empty?
+
+      @options = Options.new(
+        client:,
+        task_queue:,
+        activities:,
+        activity_executors:,
+        tuner:,
+        interceptors:,
+        build_id:,
+        identity:,
+        logger:,
+        max_cached_workflows:,
+        max_concurrent_workflow_task_polls:,
+        nonsticky_to_sticky_poll_ratio:,
+        max_concurrent_activity_task_polls:,
+        no_remote_activities:,
+        sticky_queue_schedule_to_start_timeout:,
+        max_heartbeat_throttle_interval:,
+        default_heartbeat_throttle_interval:,
+        max_activities_per_second:,
+        max_task_queue_activities_per_second:,
+        graceful_shutdown_period:,
+        use_worker_versioning:
+      ).freeze
+
+      # Create the bridge worker
+      @bridge_worker = Internal::Bridge::Worker.new(
+        client.connection._core_client,
+        Internal::Bridge::Worker::Options.new(
+          activity: !activities.empty?,
+          workflow: false,
+          namespace: client.namespace,
+          task_queue:,
+          tuner: Internal::Bridge::Worker::TunerOptions.new(
+            workflow_slot_supplier: to_bridge_slot_supplier_options(tuner.workflow_slot_supplier),
+            activity_slot_supplier: to_bridge_slot_supplier_options(tuner.activity_slot_supplier),
+            local_activity_slot_supplier: to_bridge_slot_supplier_options(tuner.local_activity_slot_supplier)
+          ),
+          build_id:,
+          identity_override: identity,
+          max_cached_workflows:,
+          max_concurrent_workflow_task_polls:,
+          nonsticky_to_sticky_poll_ratio:,
+          max_concurrent_activity_task_polls:,
+          no_remote_activities:,
+          sticky_queue_schedule_to_start_timeout:,
+          max_heartbeat_throttle_interval:,
+          default_heartbeat_throttle_interval:,
+          max_worker_activities_per_second: max_activities_per_second,
+          max_task_queue_activities_per_second:,
+          graceful_shutdown_period:,
+          use_worker_versioning:
+        )
+      )
+
+      # Collect interceptors from client and params
+      @all_interceptors = client.options.interceptors.select { |i| i.is_a?(Interceptor) } + interceptors
+
+      # Cancellation for the whole worker
+      @worker_shutdown_cancellation = Cancellation.new
+
+      # Create workers
+      # TODO(cretz): Make conditional when workflows appear
+      @activity_worker = Internal::Worker::ActivityWorker.new(self, @bridge_worker)
+
+      # Validate worker
+      @bridge_worker.validate
+    end
+
+    # @return [String] Task queue set on the worker options.
+    def task_queue
+      @options.task_queue
+    end
+
+    # Run this worker until cancellation or optional block completes. When the cancellation or block is complete, the
+    # worker is shut down. This will return the block result if everything is successful or raise an error if not.
+    #
+    # Upon shutdown (either via cancellation, block completion, or worker fatal error), the worker immediately stops
+    # accepting new work. Then, after an optional grace period, all activities are canceled. This call then waits for
+    # every activity and workflow task to complete before returning.
+    #
+    # @param cancellation [Cancellation] Cancellation that can be canceled to shut down this worker.
+    # @param raise_in_block_on_shutdown [Exception, nil] Exception to {::Thread.raise} or {::Fiber.raise} if a block is
+    #   present and still running on shutdown. If nil, `raise` is not used.
+    # @param wait_block_complete [Boolean] If block given and shutdown caused by something else (e.g.
+    #   cancellation canceled), whether to wait on the block to complete before returning.
+    # @yield Optional block. This will be run in a new background thread or fiber. Worker will shut down upon
+    #   completion of this and, assuming no other failures, return/bubble success/exception of the block.
+    # @return [Object] Return value of the block, or nil if no block is given.
+    def run(
+      cancellation: Cancellation.new,
+      # TODO(cretz): Document that this can be set to nil
+      raise_in_block_on_shutdown: Error::CanceledError.new('Workers finished'),
+      wait_block_complete: true,
+      &block
+    )
+      Worker.run_all(self, cancellation:, raise_in_block_on_shutdown:, wait_block_complete:, &block)
+    end
+
+    # @!visibility private
+    def _worker_shutdown_cancellation
+      @worker_shutdown_cancellation
+    end
+
+    # @!visibility private
+    def _initiate_shutdown
+      _bridge_worker.initiate_shutdown
+      _, cancel_proc = _worker_shutdown_cancellation
+      cancel_proc.call
+    end
+
+    # @!visibility private
+    def _wait_all_complete
+      @activity_worker&.wait_all_complete
+    end
+
+    # @!visibility private
+    def _bridge_worker
+      @bridge_worker
+    end
+
+    # @!visibility private
+    def _all_interceptors
+      @all_interceptors
+    end
+
+    # @!visibility private
+    def _on_poll_bytes(worker_type, bytes)
+      # TODO(cretz): Workflow workers
+      raise "Unrecognized worker type #{worker_type}" unless worker_type == :activity
+
+      @activity_worker.handle_task(Internal::Bridge::Api::ActivityTask::ActivityTask.decode(bytes))
+    end
+
+    private
+
+    def to_bridge_slot_supplier_options(slot_supplier)
+      if slot_supplier.is_a?(Tuner::SlotSupplier::Fixed)
+        Internal::Bridge::Worker::TunerSlotSupplierOptions.new(
+          fixed_size: slot_supplier.slots,
+          resource_based: nil
+        )
+      elsif slot_supplier.is_a?(Tuner::SlotSupplier::ResourceBased)
+        Internal::Bridge::Worker::TunerSlotSupplierOptions.new(
+          fixed_size: nil,
+          resource_based: Internal::Bridge::Worker::TunerResourceBasedSlotSupplierOptions.new(
+            target_mem_usage:
+              slot_supplier.tuner_options.target_memory_usage,
+            target_cpu_usage: slot_supplier.tuner_options.target_cpu_usage,
+            min_slots: slot_supplier.slot_options.min_slots,
+            max_slots: slot_supplier.slot_options.max_slots,
+            ramp_throttle: slot_supplier.slot_options.ramp_throttle
+          )
+        )
+      else
+        raise ArgumentError, 'Tuner slot suppliers must be instances of Fixed or ResourceBased'
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/worker/activity_executor.rb b/temporalio/lib/temporalio/worker/activity_executor.rb
new file mode 100644
index 00000000..ae0666f3
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/activity_executor.rb
@@ -0,0 +1,54 @@
+# frozen_string_literal: true
+
+require 'temporalio/worker/activity_executor/fiber'
+require 'temporalio/worker/activity_executor/thread_pool'
+
+module Temporalio
+  class Worker
+    # Base class to be extended by activity executor implementations. Most users will not use this, but rather stick
+    # with the two defaults of thread pool and fiber executors.
+    class ActivityExecutor
+      # @return [Hash<Symbol, ActivityExecutor>] Default set of executors (immutable).
+      def self.defaults
+        @defaults ||= {
+          default: ThreadPool.default,
+          thread_pool: ThreadPool.default,
+          fiber: Fiber.default
+        }.freeze
+      end
+
+      # Initialize an activity. This is called at worker initialization for every activity that will use this
+      # executor. This allows executor implementations to do eager validation based on the definition. This does not
+      # have to be implemented and the default is a no-op.
+      #
+      # @param defn [Activity::Definition] Activity definition.
+      def initialize_activity(defn)
+        # Default no-op
+      end
+
+      # Execute the given block in the executor. The block is built to never raise and needs no arguments.
+      # Implementers must implement this.
+      #
+      # @param defn [Activity::Definition] Activity definition.
+      # @yield Block to execute.
+      def execute_activity(defn, &)
+        raise NotImplementedError
+      end
+
+      # @return [Activity::Context, nil] Get the current activity context. This is called by users from inside the
+      #   activity. Implementers must implement this.
+      def activity_context
+        raise NotImplementedError
+      end
+
+      # Set the current activity context (or unset if nil). This is called by the system from within the block given
+      # to {execute_activity} with a context before user code is executed and with nil after user code is complete.
+      # Implementers must implement this.
+      #
+      # @param context [Activity::Context, nil] The value to set.
+      def activity_context=(context)
+        raise NotImplementedError
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/worker/activity_executor/fiber.rb b/temporalio/lib/temporalio/worker/activity_executor/fiber.rb
new file mode 100644
index 00000000..ec739977
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/activity_executor/fiber.rb
@@ -0,0 +1,49 @@
+# frozen_string_literal: true
+
+require 'temporalio/error'
+require 'temporalio/worker/activity_executor'
+
+module Temporalio
+  class Worker
+    class ActivityExecutor
+      # Activity executor for scheduling activities as fibers.
+      class Fiber
+        # @return [Fiber] Default/shared Fiber executor instance.
+        def self.default
+          @default ||= new
+        end
+
+        # @see ActivityExecutor.initialize_activity
+        def initialize_activity(defn)
+          # If there is not a current scheduler, we're going to preemptively
+          # fail the registration
+          return unless ::Fiber.current_scheduler.nil?
+
+          raise ArgumentError, "Activity '#{defn.name}' wants a fiber executor but no current fiber scheduler"
+        end
+
+        # @see ActivityExecutor.execute_activity
+        def execute_activity(_defn, &)
+          ::Fiber.schedule(&)
+        end
+
+        # @see ActivityExecutor.activity_context
+        def activity_context
+          ::Fiber[:temporal_activity_context]
+        end
+
+        # @see ActivityExecutor.activity_context=
+        def activity_context=(context)
+          ::Fiber[:temporal_activity_context] = context
+          # If they have opted in to raising on cancel, wire that up
+          return unless context&.definition&.cancel_raise
+
+          fiber = ::Fiber.current
+          context.cancellation.add_cancel_callback do
+            fiber.raise(Error::CanceledError.new('Activity canceled'))
+          end
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/worker/activity_executor/thread_pool.rb b/temporalio/lib/temporalio/worker/activity_executor/thread_pool.rb
new file mode 100644
index 00000000..563301dc
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/activity_executor/thread_pool.rb
@@ -0,0 +1,254 @@
+# frozen_string_literal: true
+
+# Much of this logic is taken from
+# https://github.com/ruby-concurrency/concurrent-ruby/blob/044020f44b36930b863b930f3ee8fa1e9f750469/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb,
+# see MIT license at
+# https://github.com/ruby-concurrency/concurrent-ruby/blob/044020f44b36930b863b930f3ee8fa1e9f750469/LICENSE.txt
+
+module Temporalio
+  class Worker
+    class ActivityExecutor
+      # Activity executor for scheduling activities in their own thread. This implementation is a stripped-down form
+      # of Concurrent Ruby's `CachedThreadPool`.
+      class ThreadPool < ActivityExecutor
+        # @return [ThreadPool] Default/shared thread pool executor instance with unlimited max threads.
+        def self.default
+          @default ||= new
+        end
+
+        # @!visibility private
+        def self._monotonic_time
+          Process.clock_gettime(Process::CLOCK_MONOTONIC)
+        end
+
+        # Create a new thread pool executor that creates threads as needed.
+        #
+        # @param max_threads [Integer, nil] Maximum number of thread workers to create, or nil for unlimited max.
+        # @param idle_timeout [Float] Number of seconds before a thread worker with no work should be stopped. Note,
+        #   the check of whether a thread worker is idle is only done on each new activity.
+        def initialize(max_threads: nil, idle_timeout: 20) # rubocop:disable Lint/MissingSuper
+          @max_threads = max_threads
+          @idle_timeout = idle_timeout
+
+          @mutex = Mutex.new
+          @pool = []
+          @ready = []
+          @queue = []
+          @scheduled_task_count = 0
+          @completed_task_count = 0
+          @largest_length = 0
+          @workers_counter = 0
+          @prune_interval = @idle_timeout / 2
+          @next_prune_time = ThreadPool._monotonic_time + @prune_interval
+        end
+
+        # @see ActivityExecutor.execute_activity
+        def execute_activity(_defn, &block)
+          @mutex.synchronize do
+            locked_assign_worker(&block) || locked_enqueue(&block)
+            @scheduled_task_count += 1
+            locked_prune_pool if @next_prune_time < ThreadPool._monotonic_time
+          end
+        end
+
+        # @see ActivityExecutor.activity_context
+        def activity_context
+          Thread.current[:temporal_activity_context]
+        end
+
+        # @see ActivityExecutor.activity_context=
+        def activity_context=(context)
+          Thread.current[:temporal_activity_context] = context
+          # If they have opted in to raising on cancel, wire that up
+          return unless context&.definition&.cancel_raise
+
+          thread = Thread.current
+          context.cancellation.add_cancel_callback do
+            thread.raise(Error::CanceledError.new('Activity canceled')) if thread[:temporal_activity_context] == context
+          end
+        end
+
+        # @return [Integer] The largest number of threads that have been created in the pool since construction.
+        def largest_length
+          @mutex.synchronize { @largest_length }
+        end
+
+        # @return [Integer] The number of tasks that have been scheduled for execution on the pool since construction.
+        def scheduled_task_count
+          @mutex.synchronize { @scheduled_task_count }
+        end
+
+        # @return [Integer] The number of tasks that have been completed by the pool since construction.
+        def completed_task_count
+          @mutex.synchronize { @completed_task_count }
+        end
+
+        # @return [Integer] The number of threads that are actively executing tasks.
+        def active_count
+          @mutex.synchronize { @pool.length - @ready.length }
+        end
+
+        # @return [Integer] The number of threads currently in the pool.
+        def length
+          @mutex.synchronize { @pool.length }
+        end
+
+        # @return [Integer] The number of tasks in the queue awaiting execution.
+        def queue_length
+          @mutex.synchronize { @queue.length }
+        end
+
+        # Gracefully shut down each thread when it is done with its current task. This should not be called until all
+        # workers using this executor are complete. This does not need to be called at all on program exit (e.g. for
+        # the global default).
+        def shutdown
+          @mutex.synchronize do
+            # Stop all workers
+            @pool.each(&:stop)
+          end
+        end
+
+        # Kill each thread. This should not be called until all workers using this executor are complete. This does
+        # not need to be called at all on program exit (e.g. for the global default).
+        def kill
+          @mutex.synchronize do
+            # Kill all workers
+            @pool.each(&:kill)
+            @pool.clear
+            @ready.clear
+          end
+        end
+
+        # @!visibility private
+        def _remove_busy_worker(worker)
+          @mutex.synchronize { locked_remove_busy_worker(worker) }
+        end
+
+        # @!visibility private
+        def _ready_worker(worker, last_message)
+          @mutex.synchronize { locked_ready_worker(worker, last_message) }
+        end
+
+        # @!visibility private
+        def _worker_died(worker)
+          @mutex.synchronize { locked_worker_died(worker) }
+        end
+
+        # @!visibility private
+        def _worker_task_completed
+          @mutex.synchronize { @completed_task_count += 1 }
+        end
+
+        private
+
+        def locked_assign_worker(&block)
+          # Use a ready (idle) worker if there is one, otherwise try to grow the pool
+          worker, = @ready.pop || locked_add_busy_worker
+          if worker
+            worker << block
+            true
+          else
+            false
+          end
+        end
+
+        def locked_enqueue(&block)
+          @queue << block
+        end
+
+        def locked_add_busy_worker
+          return if @max_threads && @pool.size >= @max_threads
+
+          @workers_counter += 1
+          @pool << (worker = Worker.new(self, @workers_counter))
+          @largest_length = @pool.length if @pool.length > @largest_length
+          worker
+        end
+
+        def locked_prune_pool
+          now = ThreadPool._monotonic_time
+          stopped_workers = 0
+          while !@ready.empty? && (@pool.size - stopped_workers).positive?
+            worker, last_message = @ready.first
+            break unless now - last_message > @idle_timeout
+
+            stopped_workers += 1
+            @ready.shift
+            worker << :stop
+          end
+
+          @next_prune_time = ThreadPool._monotonic_time + @prune_interval
+        end
+
+        def locked_remove_busy_worker(worker)
+          @pool.delete(worker)
+        end
+
+        def locked_ready_worker(worker, last_message)
+          block = @queue.shift
+          if block
+            worker << block
+          else
+            @ready.push([worker, last_message])
+          end
+        end
+
+        def locked_worker_died(worker)
+          locked_remove_busy_worker(worker)
+          replacement_worker = locked_add_busy_worker
+          locked_ready_worker(replacement_worker, ThreadPool._monotonic_time) if replacement_worker
+        end
+
+        # @!visibility private
+        class Worker
+          def initialize(pool, id)
+            @queue = Queue.new
+            @thread = Thread.new(@queue, pool) do |my_queue, my_pool|
+              catch(:stop) do
+                loop do
+                  case block = my_queue.pop
+                  when :stop
+                    my_pool._remove_busy_worker(self)
+                    throw :stop
+                  else
+                    begin
+                      block.call
+                      my_pool._worker_task_completed
+                      my_pool._ready_worker(self, ThreadPool._monotonic_time)
+                    rescue StandardError => e
+                      # Warn but keep this thread worker alive (blocks are built to never raise)
+                      warn("Unexpected activity block error: #{e}")
+                    rescue Exception => e # rubocop:disable Lint/RescueException
+                      warn("Unexpected activity block exception: #{e}")
+                      my_pool._worker_died(self)
+                      throw :stop
+                    end
+                  end
+                end
+              end
+            end
+            @thread.name = "activity-thread-#{id}"
+          end
+
+          # @!visibility private
+          def <<(block)
+            @queue << block
+          end
+
+          # @!visibility private
+          def stop
+            @queue << :stop
+          end
+
+          # @!visibility private
+          def kill
+            @thread.kill
+          end
+        end
+
+        private_constant :Worker
+      end
+    end
+  end
+end
diff --git a/temporalio/lib/temporalio/worker/interceptor.rb b/temporalio/lib/temporalio/worker/interceptor.rb
new file mode 100644
index 00000000..cc0d0f54
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/interceptor.rb
@@ -0,0 +1,88 @@
+# frozen_string_literal: true
+
+module Temporalio
+  class Worker
+    # Mixin for intercepting worker work.
Classes that `include` this may implement their own {intercept_activity} that
+ # returns their own instance of {ActivityInbound}.
+ #
+ # @note Input classes herein may get new required fields added and therefore the constructors of the Input classes
+ # may change in backwards incompatible ways. Users should not try to construct Input classes themselves.
+ module Interceptor
+ # Method called when intercepting an activity. This is called when starting an activity attempt.
+ #
+ # @param next_interceptor [ActivityInbound] Next interceptor in the chain that should be called. This is usually
+ # passed to the {ActivityInbound} constructor.
+ # @return [ActivityInbound] Interceptor to be called for activity calls.
+ def intercept_activity(next_interceptor)
+ next_interceptor
+ end
+
+ # Input for {ActivityInbound.execute}.
+ ExecuteActivityInput = Struct.new(
+ :proc,
+ :args,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {ActivityOutbound.heartbeat}.
+ HeartbeatActivityInput = Struct.new(
+ :details,
+ keyword_init: true
+ )
+
+ # Inbound interceptor for intercepting inbound activity calls. This should be extended by users needing to
+ # intercept activities.
+ class ActivityInbound
+ # @return [ActivityInbound] Next interceptor in the chain.
+ attr_reader :next_interceptor
+
+ # Initialize inbound with the next interceptor in the chain.
+ #
+ # @param next_interceptor [ActivityInbound] Next interceptor in the chain.
+ def initialize(next_interceptor)
+ @next_interceptor = next_interceptor
+ end
+
+ # Initialize the outbound interceptor. This should be extended by users to return their own {ActivityOutbound}
+ # implementation that wraps the parameter here.
+ #
+ # @param outbound [ActivityOutbound] Next outbound interceptor in the chain.
+ # @return [ActivityOutbound] Outbound activity interceptor.
+ def init(outbound)
+ @next_interceptor.init(outbound)
+ end
+
+ # Execute an activity and return result or raise exception. Next interceptor in chain (i.e.
`super`) will + # perform the execution. + # + # @param input [ExecuteActivityInput] Input information. + # @return [Object] Activity result. + def execute(input) + @next_interceptor.execute(input) + end + end + + # Outbound interceptor for intercepting outbound activity calls. This should be extended by users needing to + # intercept activity calls. + class ActivityOutbound + # @return [ActivityInbound] Next interceptor in the chain. + attr_reader :next_interceptor + + # Initialize outbound with the next interceptor in the chain. + # + # @param next_interceptor [ActivityOutbound] Next interceptor in the chain. + def initialize(next_interceptor) + @next_interceptor = next_interceptor + end + + # Issue a heartbeat. + # + # @param input [HeartbeatActivityInput] Input information. + def heartbeat(input) + @next_interceptor.heartbeat(input) + end + end + end + end +end diff --git a/temporalio/lib/temporalio/worker/tuner.rb b/temporalio/lib/temporalio/worker/tuner.rb new file mode 100644 index 00000000..cb495d7a --- /dev/null +++ b/temporalio/lib/temporalio/worker/tuner.rb @@ -0,0 +1,151 @@ +# frozen_string_literal: true + +module Temporalio + class Worker + # Worker tuner that allows for dynamic customization of some aspects of worker configuration. + class Tuner + # Slot supplier used for reserving slots for execution. Currently the only implementations allowed are {Fixed} and + # {ResourceBased}. + class SlotSupplier + # A fixed-size slot supplier that will never issue more than a fixed number of slots. + class Fixed < SlotSupplier + # @return [Integer] The maximum number of slots that can be issued. + attr_reader :slots + + # Create fixed-size slot supplier. + # + # @param slots [Integer] The maximum number of slots that can be issued. + def initialize(slots) # rubocop:disable Lint/MissingSuper + @slots = slots + end + end + + # A slot supplier that will dynamically adjust the number of slots based on resource usage. + # + # @note WARNING: This API is experimental. 
+ class ResourceBased < SlotSupplier
+ attr_reader :tuner_options, :slot_options
+
+ # Create a resource-based slot supplier.
+ #
+ # @param tuner_options [ResourceBasedTunerOptions] General tuner options.
+ # @param slot_options [ResourceBasedSlotOptions] Slot-supplier-specific tuner options.
+ def initialize(tuner_options:, slot_options:) # rubocop:disable Lint/MissingSuper
+ @tuner_options = tuner_options
+ @slot_options = slot_options
+ end
+ end
+ end
+
+ # Options for {create_resource_based} or {SlotSupplier::ResourceBased}.
+ #
+ # @!attribute target_memory_usage
+ # @return [Float] A value between 0 and 1 that represents the target (system) memory usage. It's not recommended
+ # to set this higher than 0.8, since how much memory a workflow may use is not predictable, and you don't want
+ # to encounter OOM errors.
+ # @!attribute target_cpu_usage
+ # @return [Float] A value between 0 and 1 that represents the target (system) CPU usage. This can be set to 1.0
+ # if desired, but it's recommended to leave some headroom for other processes.
+ ResourceBasedTunerOptions = Struct.new(
+ :target_memory_usage,
+ :target_cpu_usage,
+ keyword_init: true
+ )
+
+ # Options for a specific slot type being used with {SlotSupplier::ResourceBased}.
+ #
+ # @!attribute min_slots
+ # @return [Integer, nil] Number of slots that will be issued regardless of any other checks. Defaults to 5 for
+ # workflows and 1 for activities.
+ # @!attribute max_slots
+ # @return [Integer, nil] Maximum number of slots permitted. Defaults to 500.
+ # @!attribute ramp_throttle
+ # @return [Float, nil] Minimum time, in seconds, we will wait (after passing the minimum slots number) between
+ # handing out new slots. Defaults to 0 for workflows and 0.05 for activities.
+ #
+ # This value matters because how many resources a task will use cannot be determined ahead of time, and thus
+ # the system should wait to see how many resources are used before issuing more slots.
+ ResourceBasedSlotOptions = Struct.new( + :min_slots, + :max_slots, + :ramp_throttle, + keyword_init: true + ) + + # Create a fixed-size tuner with the provided number of slots. + # + # @param workflow_slots [Integer] Maximum number of workflow task slots. + # @param activity_slots [Integer] Maximum number of activity slots. + # @param local_activity_slots [Integer] Maximum number of local activity slots. + # @return [Tuner] Created tuner. + def self.create_fixed( + workflow_slots: 100, + activity_slots: 100, + local_activity_slots: 100 + ) + new( + workflow_slot_supplier: SlotSupplier::Fixed.new(workflow_slots), + activity_slot_supplier: SlotSupplier::Fixed.new(activity_slots), + local_activity_slot_supplier: SlotSupplier::Fixed.new(local_activity_slots) + ) + end + + # Create a resource-based tuner with the provided options. + # + # @param target_memory_usage [Float] A value between 0 and 1 that represents the target (system) memory usage. + # It's not recommended to set this higher than 0.8, since how much memory a workflow may use is not predictable, + # and you don't want to encounter OOM errors. + # @param target_cpu_usage [Float] A value between 0 and 1 that represents the target (system) CPU usage. This can + # be set to 1.0 if desired, but it's recommended to leave some headroom for other processes. + # @param workflow_options [ResourceBasedSlotOptions] Resource-based options for workflow slot supplier. + # @param activity_options [ResourceBasedSlotOptions] Resource-based options for activity slot supplier. + # @param local_activity_options [ResourceBasedSlotOptions] Resource-based options for local activity slot + # supplier. + # @return [Tuner] Created tuner. 
+ def self.create_resource_based( + target_memory_usage:, + target_cpu_usage:, + workflow_options: ResourceBasedSlotOptions.new(min_slots: 5, max_slots: 500, ramp_throttle: 0.0), + activity_options: ResourceBasedSlotOptions.new(min_slots: 1, max_slots: 500, ramp_throttle: 0.05), + local_activity_options: ResourceBasedSlotOptions.new(min_slots: 1, max_slots: 500, ramp_throttle: 0.05) + ) + tuner_options = ResourceBasedTunerOptions.new(target_memory_usage:, target_cpu_usage:) + new( + workflow_slot_supplier: SlotSupplier::ResourceBased.new( + tuner_options:, slot_options: workflow_options + ), + activity_slot_supplier: SlotSupplier::ResourceBased.new( + tuner_options:, slot_options: activity_options + ), + local_activity_slot_supplier: SlotSupplier::ResourceBased.new( + tuner_options:, slot_options: local_activity_options + ) + ) + end + + # @return [SlotSupplier] Slot supplier for workflows. + attr_reader :workflow_slot_supplier + + # @return [SlotSupplier] Slot supplier for activities. + attr_reader :activity_slot_supplier + + # @return [SlotSupplier] Slot supplier for local activities. + attr_reader :local_activity_slot_supplier + + # Create a tuner from 3 slot suppliers. + # + # @param workflow_slot_supplier [SlotSupplier] Slot supplier for workflows. + # @param activity_slot_supplier [SlotSupplier] Slot supplier for activities. + # @param local_activity_slot_supplier [SlotSupplier] Slot supplier for local activities. 
+ def initialize( + workflow_slot_supplier:, + activity_slot_supplier:, + local_activity_slot_supplier: + ) + @workflow_slot_supplier = workflow_slot_supplier + @activity_slot_supplier = activity_slot_supplier + @local_activity_slot_supplier = local_activity_slot_supplier + end + end + end +end diff --git a/temporalio/lib/temporalio/workflow_history.rb b/temporalio/lib/temporalio/workflow_history.rb index c60dc20c..521a556c 100644 --- a/temporalio/lib/temporalio/workflow_history.rb +++ b/temporalio/lib/temporalio/workflow_history.rb @@ -10,5 +10,13 @@ class WorkflowHistory def initialize(events) @events = events end + + # @return [String] ID of the workflow, extracted from the first event. + def workflow_id + start = events.first&.workflow_execution_started_event_attributes + raise 'First event not a start event' if start.nil? + + start.workflow_id + end end end diff --git a/temporalio/rbs_collection.lock.yaml b/temporalio/rbs_collection.lock.yaml index 99b3a2d0..ce5a41b1 100644 --- a/temporalio/rbs_collection.lock.yaml +++ b/temporalio/rbs_collection.lock.yaml @@ -141,6 +141,14 @@ gems: version: '0' source: type: stdlib +- name: sqlite3 + version: '2.0' + source: + type: git + name: ruby/gem_rbs_collection + revision: 7ae9e3cf731a9628e0cc39064ed6e2cf51d822da + remote: https://github.com/ruby/gem_rbs_collection.git + repo_dir: gems - name: strscan version: '0' source: diff --git a/temporalio/sig/temporalio/activity.rbs b/temporalio/sig/temporalio/activity.rbs new file mode 100644 index 00000000..63e4ee31 --- /dev/null +++ b/temporalio/sig/temporalio/activity.rbs @@ -0,0 +1,15 @@ +module Temporalio + class Activity + def self.activity_name: (String | Symbol name) -> void + def self.activity_executor: (Symbol executor_name) -> void + def self.activity_cancel_raise: (bool cancel_raise) -> void + + def self._activity_definition_details: -> { + activity_name: String | Symbol, + activity_executor: Symbol, + activity_cancel_raise: bool + } + + def execute: (?) 
-> untyped + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/activity/complete_async_error.rbs b/temporalio/sig/temporalio/activity/complete_async_error.rbs new file mode 100644 index 00000000..c0be3b60 --- /dev/null +++ b/temporalio/sig/temporalio/activity/complete_async_error.rbs @@ -0,0 +1,6 @@ +module Temporalio + class Activity + class CompleteAsyncError < Error + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/activity/context.rbs b/temporalio/sig/temporalio/activity/context.rbs new file mode 100644 index 00000000..cf835774 --- /dev/null +++ b/temporalio/sig/temporalio/activity/context.rbs @@ -0,0 +1,22 @@ +module Temporalio + class Activity + class Context + def self.current: -> Context + def self.current_or_nil: -> Context? + def self.exist?: -> bool + + def self._current_executor: -> Worker::ActivityExecutor? + def self._current_executor=: (Worker::ActivityExecutor? executor) -> void + + def info: -> Info + def heartbeat: (*Object? details) -> void + def cancellation: -> Cancellation + def worker_shutdown_cancellation: -> Cancellation + def payload_converter: -> Converters::PayloadConverter + def logger: -> ScopedLogger + def definition: -> Definition + + def _scoped_logger_info: -> Hash[Symbol, Object] + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/activity/definition.rbs b/temporalio/sig/temporalio/activity/definition.rbs new file mode 100644 index 00000000..b1477454 --- /dev/null +++ b/temporalio/sig/temporalio/activity/definition.rbs @@ -0,0 +1,19 @@ +module Temporalio + class Activity + class Definition + attr_reader name: String | Symbol + attr_reader proc: Proc + attr_reader executor: Symbol + attr_reader cancel_raise: bool + + def self.from_activity: (Activity | singleton(Activity) | Definition activity) -> Definition + + def initialize: ( + name: String | Symbol, + ?proc: Proc?, + ?executor: Symbol, + ?cancel_raise: bool + ) ?{ (?) 
-> untyped } -> void + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/activity/info.rbs b/temporalio/sig/temporalio/activity/info.rbs new file mode 100644 index 00000000..93559bb9 --- /dev/null +++ b/temporalio/sig/temporalio/activity/info.rbs @@ -0,0 +1,43 @@ +module Temporalio + class Activity + class Info + attr_reader activity_id: String + attr_reader activity_type: String + attr_reader attempt: Integer + attr_reader current_attempt_scheduled_time: Time + attr_reader heartbeat_details: Array[Object?] + attr_reader heartbeat_timeout: Float? + attr_reader local?: bool + attr_reader schedule_to_close_timeout: Float? + attr_reader scheduled_time: Time + attr_reader start_to_close_timeout: Float? + attr_reader started_time: Time + attr_reader task_queue: String + attr_reader task_token: String + attr_reader workflow_id: String + attr_reader workflow_namespace: String + attr_reader workflow_run_id: String + attr_reader workflow_type: String + + def initialize: ( + activity_id: String, + activity_type: String, + attempt: Integer, + current_attempt_scheduled_time: Time, + heartbeat_details: Array[Object?], + heartbeat_timeout: Float?, + local?: bool, + schedule_to_close_timeout: Float?, + scheduled_time: Time, + start_to_close_timeout: Float?, + started_time: Time, + task_queue: String, + task_token: String, + workflow_id: String, + workflow_namespace: String, + workflow_run_id: String, + workflow_type: String + ) -> void + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/cancellation.rbs b/temporalio/sig/temporalio/cancellation.rbs new file mode 100644 index 00000000..84a8f18a --- /dev/null +++ b/temporalio/sig/temporalio/cancellation.rbs @@ -0,0 +1,18 @@ +module Temporalio + class Cancellation + def initialize: (*Cancellation parents) -> void + + def canceled?: -> bool + def canceled_reason: -> String? + def pending_canceled?: -> bool + def pending_canceled_reason: -> String? 
+ def check!: (?Exception err) -> void + def to_ary: -> [Cancellation, Proc] + def wait: -> void + def shield: [T] { (?) -> untyped } -> T + def add_cancel_callback: (?Proc proc) ?{ -> untyped } -> void + + private def on_cancel: (reason: Object?) -> void + private def prepare_cancel: (reason: Object?) -> Array[Proc]? + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/client.rbs b/temporalio/sig/temporalio/client.rbs index 5e2c07ce..d3e281c5 100644 --- a/temporalio/sig/temporalio/client.rbs +++ b/temporalio/sig/temporalio/client.rbs @@ -5,6 +5,7 @@ module Temporalio attr_accessor namespace: String attr_accessor data_converter: Converters::DataConverter attr_accessor interceptors: Array[Interceptor] + attr_accessor logger: Logger attr_accessor default_workflow_query_reject_condition: WorkflowQueryRejectCondition::enum? def initialize: ( @@ -12,6 +13,7 @@ module Temporalio namespace: String, data_converter: Converters::DataConverter, interceptors: Array[Interceptor], + logger: Logger, default_workflow_query_reject_condition: WorkflowQueryRejectCondition::enum? ) -> void end @@ -23,6 +25,7 @@ module Temporalio ?tls: bool | Connection::TLSOptions, ?data_converter: Converters::DataConverter, ?interceptors: Array[Interceptor], + ?logger: Logger, ?default_workflow_query_reject_condition: WorkflowQueryRejectCondition::enum?, ?rpc_metadata: Hash[String, String], ?rpc_retry: Connection::RPCRetryOptions, @@ -40,6 +43,7 @@ module Temporalio namespace: String, ?data_converter: Converters::DataConverter, ?interceptors: Array[Interceptor], + ?logger: Logger, ?default_workflow_query_reject_condition: WorkflowQueryRejectCondition::enum? ) -> void @@ -87,7 +91,7 @@ module Temporalio ?request_eager_start: bool, ?rpc_metadata: Hash[String, String]?, ?rpc_timeout: Float? - ) -> Object + ) -> Object? 
def workflow_handle: ( String workflow_id, diff --git a/temporalio/sig/temporalio/client/async_activity_handle.rbs b/temporalio/sig/temporalio/client/async_activity_handle.rbs index db6881ac..2ece2218 100644 --- a/temporalio/sig/temporalio/client/async_activity_handle.rbs +++ b/temporalio/sig/temporalio/client/async_activity_handle.rbs @@ -11,7 +11,7 @@ module Temporalio ) -> void def heartbeat: ( - *Object details, + *Object? details, ?rpc_metadata: Hash[String, String]?, ?rpc_timeout: Float? ) -> void @@ -24,16 +24,18 @@ module Temporalio def fail: ( Exception error, - ?last_heartbeat_details: Array[Object], + ?last_heartbeat_details: Array[Object?], ?rpc_metadata: Hash[String, String]?, ?rpc_timeout: Float? ) -> void def report_cancellation: ( - *Object details, + *Object? details, ?rpc_metadata: Hash[String, String]?, ?rpc_timeout: Float? ) -> void + + private def task_token_or_id_reference: -> (String | ActivityIDReference) end end end \ No newline at end of file diff --git a/temporalio/sig/temporalio/client/interceptor.rbs b/temporalio/sig/temporalio/client/interceptor.rbs index 22050ed7..545f08fe 100644 --- a/temporalio/sig/temporalio/client/interceptor.rbs +++ b/temporalio/sig/temporalio/client/interceptor.rbs @@ -15,7 +15,7 @@ module Temporalio attr_accessor id_conflict_policy: WorkflowIDConflictPolicy::enum attr_accessor retry_policy: RetryPolicy? attr_accessor cron_schedule: String? - attr_accessor memo: Hash[String, Object]? + attr_accessor memo: Hash[String, Object?]? attr_accessor search_attributes: SearchAttributes? attr_accessor start_delay: Float? attr_accessor request_eager_start: bool @@ -35,7 +35,7 @@ module Temporalio id_conflict_policy: WorkflowIDConflictPolicy::enum, retry_policy: RetryPolicy?, cron_schedule: String?, - memo: Hash[String, Object]?, + memo: Hash[String, Object?]?, search_attributes: SearchAttributes?, start_delay: Float?, request_eager_start: bool, @@ -206,7 +206,7 @@ module Temporalio attr_accessor run_id: String? 
attr_accessor first_execution_run_id: String? attr_accessor reason: String? - attr_accessor details: Array[Object] + attr_accessor details: Array[Object?] attr_accessor rpc_metadata: Hash[String, String]? attr_accessor rpc_timeout: Float? @@ -221,6 +221,64 @@ module Temporalio ) -> void end + class HeartbeatAsyncActivityInput + attr_accessor task_token_or_id_reference: String | ActivityIDReference + attr_accessor details: Array[Object?] + attr_accessor rpc_metadata: Hash[String, String]? + attr_accessor rpc_timeout: Float? + + def initialize: ( + task_token_or_id_reference: String | ActivityIDReference, + details: Array[Object?], + rpc_metadata: Hash[String, String]?, + rpc_timeout: Float? + ) -> void + end + + class CompleteAsyncActivityInput + attr_accessor task_token_or_id_reference: String | ActivityIDReference + attr_accessor result: Object? + attr_accessor rpc_metadata: Hash[String, String]? + attr_accessor rpc_timeout: Float? + + def initialize: ( + task_token_or_id_reference: String | ActivityIDReference, + result: Object?, + rpc_metadata: Hash[String, String]?, + rpc_timeout: Float? + ) -> void + end + + class FailAsyncActivityInput + attr_accessor task_token_or_id_reference: String | ActivityIDReference + attr_accessor error: Exception + attr_accessor last_heartbeat_details: Array[Object?] + attr_accessor rpc_metadata: Hash[String, String]? + attr_accessor rpc_timeout: Float? + + def initialize: ( + task_token_or_id_reference: String | ActivityIDReference, + error: Exception, + last_heartbeat_details: Array[Object?], + rpc_metadata: Hash[String, String]?, + rpc_timeout: Float? + ) -> void + end + + class ReportCancellationAsyncActivityInput + attr_accessor task_token_or_id_reference: String | ActivityIDReference + attr_accessor details: Array[Object?] + attr_accessor rpc_metadata: Hash[String, String]? + attr_accessor rpc_timeout: Float? 
+ + def initialize: ( + task_token_or_id_reference: String | ActivityIDReference, + details: Array[Object?], + rpc_metadata: Hash[String, String]?, + rpc_timeout: Float? + ) -> void + end + class Outbound attr_reader next_interceptor: Outbound @@ -247,6 +305,14 @@ module Temporalio def cancel_workflow: (CancelWorkflowInput input) -> void def terminate_workflow: (TerminateWorkflowInput input) -> void + + def heartbeat_async_activity: (HeartbeatAsyncActivityInput input) -> void + + def complete_async_activity: (CompleteAsyncActivityInput input) -> void + + def fail_async_activity: (FailAsyncActivityInput input) -> void + + def report_cancellation_async_activity: (ReportCancellationAsyncActivityInput input) -> void end end end diff --git a/temporalio/sig/temporalio/client/workflow_execution.rbs b/temporalio/sig/temporalio/client/workflow_execution.rbs index b1669424..8047a1c8 100644 --- a/temporalio/sig/temporalio/client/workflow_execution.rbs +++ b/temporalio/sig/temporalio/client/workflow_execution.rbs @@ -9,7 +9,7 @@ module Temporalio def execution_time: -> Time? def history_length: -> Integer def id: -> String - def memo: -> Hash[String, Object] + def memo: -> Hash[String, Object?] def parent_id: -> String? def parent_run_id: -> String? def run_id: -> String diff --git a/temporalio/sig/temporalio/client/workflow_execution_count.rbs b/temporalio/sig/temporalio/client/workflow_execution_count.rbs index 74ddf429..efa75f4e 100644 --- a/temporalio/sig/temporalio/client/workflow_execution_count.rbs +++ b/temporalio/sig/temporalio/client/workflow_execution_count.rbs @@ -8,9 +8,9 @@ module Temporalio class AggregationGroup attr_reader count: Integer - attr_reader group_values: Array[Object] + attr_reader group_values: Array[Object?] - def initialize: (Integer count, Array[Object] group_values) -> void + def initialize: (Integer count, Array[Object?] 
group_values) -> void end end end diff --git a/temporalio/sig/temporalio/client/workflow_handle.rbs b/temporalio/sig/temporalio/client/workflow_handle.rbs index 76277ab9..2e3bbe1b 100644 --- a/temporalio/sig/temporalio/client/workflow_handle.rbs +++ b/temporalio/sig/temporalio/client/workflow_handle.rbs @@ -18,7 +18,7 @@ module Temporalio ?follow_runs: bool, ?rpc_metadata: Hash[String, String]?, ?rpc_timeout: Float? - ) -> Object + ) -> Object? def describe: ( ?rpc_metadata: Hash[String, String]?, diff --git a/temporalio/sig/temporalio/error.rbs b/temporalio/sig/temporalio/error.rbs index cb0effa6..03638035 100644 --- a/temporalio/sig/temporalio/error.rbs +++ b/temporalio/sig/temporalio/error.rbs @@ -8,7 +8,7 @@ module Temporalio cause: Exception? ) -> Exception - class WorkflowFailureError < Error + class WorkflowFailedError < Error def initialize: -> void end diff --git a/temporalio/sig/temporalio/error/failure.rbs b/temporalio/sig/temporalio/error/failure.rbs index 035530aa..35ba7f36 100644 --- a/temporalio/sig/temporalio/error/failure.rbs +++ b/temporalio/sig/temporalio/error/failure.rbs @@ -31,7 +31,7 @@ module Temporalio class CanceledError < Failure attr_reader details: Array[Object?] 
- def initialize: (String message, details: Array[Object?]) -> void + def initialize: (String message, ?details: Array[Object?]) -> void end class TerminatedError < Failure diff --git a/temporalio/sig/temporalio/internal/bridge.rbs b/temporalio/sig/temporalio/internal/bridge.rbs index df44cb00..2d95bd07 100644 --- a/temporalio/sig/temporalio/internal/bridge.rbs +++ b/temporalio/sig/temporalio/internal/bridge.rbs @@ -1,12 +1,8 @@ module Temporalio module Internal module Bridge - interface _ResultQueue[T] - def push: ([T, Exception]) -> void - def pop: () -> [T, Exception] - end - - def self.async_call: [T] { (_ResultQueue[T] queue) -> void } -> T + def self.assert_fiber_compatibility!: -> void + def self.fibers_supported: -> bool # Defined in Rust diff --git a/temporalio/sig/temporalio/internal/bridge/client.rbs b/temporalio/sig/temporalio/internal/bridge/client.rbs index 27575de8..e26961f6 100644 --- a/temporalio/sig/temporalio/internal/bridge/client.rbs +++ b/temporalio/sig/temporalio/internal/bridge/client.rbs @@ -92,7 +92,7 @@ module Temporalio def self.new: (Runtime runtime, Options options) -> Client - def self.async_new: (Runtime runtime, Options options) { ([Client, Exception]) -> void } -> void + def self.async_new: (Runtime runtime, Options options, Queue queue) -> void def async_invoke_rpc: ( service: Integer, @@ -100,8 +100,9 @@ module Temporalio request: String, rpc_retry: bool, rpc_metadata: Hash[String, String]?, - rpc_timeout: Float? 
- ) { ([String, RPCFailure]) -> void } -> void + rpc_timeout: Float?, + queue: Queue + ) -> void class RPCFailure < Error def code: -> Temporalio::Error::RPCError::Code::enum diff --git a/temporalio/sig/temporalio/internal/bridge/testing.rbs b/temporalio/sig/temporalio/internal/bridge/testing.rbs index ca000d99..da264d6d 100644 --- a/temporalio/sig/temporalio/internal/bridge/testing.rbs +++ b/temporalio/sig/temporalio/internal/bridge/testing.rbs @@ -43,12 +43,13 @@ module Temporalio def self.async_start_dev_server: ( Runtime runtime, - StartDevServerOptions options - ) { ([EphemeralServer, Error]) -> void } -> void + StartDevServerOptions options, + Queue queue + ) -> void def target: -> String - def async_shutdown: { ([nil, Error]) -> void } -> void + def async_shutdown: (Queue queue) -> void end end end diff --git a/temporalio/sig/temporalio/internal/bridge/worker.rbs b/temporalio/sig/temporalio/internal/bridge/worker.rbs new file mode 100644 index 00000000..2a543e3e --- /dev/null +++ b/temporalio/sig/temporalio/internal/bridge/worker.rbs @@ -0,0 +1,121 @@ +module Temporalio + module Internal + module Bridge + class Worker + class Options + attr_accessor activity: bool + attr_accessor workflow: bool + attr_accessor namespace: String + attr_accessor task_queue: String + attr_accessor tuner: TunerOptions + attr_accessor build_id: String + attr_accessor identity_override: String? + attr_accessor max_cached_workflows: Integer + attr_accessor max_concurrent_workflow_task_polls: Integer + attr_accessor nonsticky_to_sticky_poll_ratio: Float + attr_accessor max_concurrent_activity_task_polls: Integer + attr_accessor no_remote_activities: bool + attr_accessor sticky_queue_schedule_to_start_timeout: Float + attr_accessor max_heartbeat_throttle_interval: Float + attr_accessor default_heartbeat_throttle_interval: Float + attr_accessor max_worker_activities_per_second: Float? + attr_accessor max_task_queue_activities_per_second: Float? 
+ attr_accessor graceful_shutdown_period: Float + attr_accessor use_worker_versioning: bool + + def initialize: ( + activity: bool, + workflow: bool, + namespace: String, + task_queue: String, + tuner: TunerOptions, + build_id: String, + identity_override: String?, + max_cached_workflows: Integer, + max_concurrent_workflow_task_polls: Integer, + nonsticky_to_sticky_poll_ratio: Float, + max_concurrent_activity_task_polls: Integer, + no_remote_activities: bool, + sticky_queue_schedule_to_start_timeout: Float, + max_heartbeat_throttle_interval: Float, + default_heartbeat_throttle_interval: Float, + max_worker_activities_per_second: Float?, + max_task_queue_activities_per_second: Float?, + graceful_shutdown_period: Float, + use_worker_versioning: bool + ) -> void + end + + class TunerOptions + attr_accessor workflow_slot_supplier: TunerSlotSupplierOptions + attr_accessor activity_slot_supplier: TunerSlotSupplierOptions + attr_accessor local_activity_slot_supplier: TunerSlotSupplierOptions + + def initialize: ( + workflow_slot_supplier: TunerSlotSupplierOptions, + activity_slot_supplier: TunerSlotSupplierOptions, + local_activity_slot_supplier: TunerSlotSupplierOptions + ) -> void + end + + class TunerSlotSupplierOptions + attr_accessor fixed_size: Integer? + attr_accessor resource_based: TunerResourceBasedSlotSupplierOptions? + + def initialize: ( + fixed_size: Integer?, + resource_based: TunerResourceBasedSlotSupplierOptions? + ) -> void + end + + class TunerResourceBasedSlotSupplierOptions + attr_accessor target_mem_usage: Float + attr_accessor target_cpu_usage: Float + attr_accessor min_slots: Integer? + attr_accessor max_slots: Integer? + attr_accessor ramp_throttle: Float? + + def initialize: ( + target_mem_usage: Float, + target_cpu_usage: Float, + min_slots: Integer?, + max_slots: Integer?, + ramp_throttle: Float? 
+ ) -> void + end + + def self.finalize_shutdown_all: (Array[Worker] workers) -> void + + def validate: -> void + + def complete_activity_task: (untyped proto) -> void + + def complete_activity_task_in_background: (untyped proto) -> void + + # Defined in Rust + + def self.new: (Client client, Options options) -> Worker + + def self.async_poll_all: ( + Array[Worker] workers, + Queue queue + ) -> void + + def self.async_finalize_all: ( + Array[Worker] workers, + Queue queue + ) -> void + + def async_validate: (Queue queue) -> void + + def async_complete_activity_task: (String proto, Queue queue) -> void + + def record_activity_heartbeat: (String proto) -> void + + def replace_client: (Client client) -> void + + def initiate_shutdown: -> void + end + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/internal/proto_utils.rbs b/temporalio/sig/temporalio/internal/proto_utils.rbs index a66e6aae..7c7094ca 100644 --- a/temporalio/sig/temporalio/internal/proto_utils.rbs +++ b/temporalio/sig/temporalio/internal/proto_utils.rbs @@ -17,6 +17,16 @@ module Temporalio | (String? str, String default) -> String def self.enum_to_int: (untyped enum_mod, untyped enum_val, ?zero_means_nil: bool) -> Integer + + def self.convert_from_payload_array: ( + Converters::DataConverter | Converters::PayloadConverter converter, + Array[untyped] payloads + ) -> Array[Object?] + + def self.convert_to_payload_array: ( + Converters::DataConverter | Converters::PayloadConverter converter, + Array[Object?] 
values + ) -> Array[untyped] end end end \ No newline at end of file diff --git a/temporalio/sig/temporalio/internal/worker/activity_worker.rbs b/temporalio/sig/temporalio/internal/worker/activity_worker.rbs new file mode 100644 index 00000000..5bf1ede7 --- /dev/null +++ b/temporalio/sig/temporalio/internal/worker/activity_worker.rbs @@ -0,0 +1,52 @@ +module Temporalio + module Internal + module Worker + class ActivityWorker + attr_reader worker: Temporalio::Worker + attr_reader bridge_worker: Bridge::Worker + + def initialize: ( + Temporalio::Worker worker, + Bridge::Worker bridge_worker, + ) -> void + + def set_running_activity: (String task_token, RunningActivity? activity) -> void + def get_running_activity: (String task_token) -> RunningActivity? + def remove_running_activity: (String task_token) -> void + def wait_all_complete: -> void + + def handle_task: (untyped task) -> void + def handle_start_task: (String task_token, untyped start) -> void + def handle_cancel_task: (String task_token, untyped cancel) -> void + + def execute_activity: (String task_token, Activity::Definition defn, untyped start) -> void + def run_activity: ( + RunningActivity activity, + Temporalio::Worker::Interceptor::ExecuteActivityInput input + ) -> void + + class RunningActivity < Activity::Context + attr_accessor _outbound_impl: Temporalio::Worker::Interceptor::ActivityOutbound? 
+ attr_accessor _server_requested_cancel: bool + + def initialize: ( + info: Activity::Info, + cancellation: Cancellation, + worker_shutdown_cancellation: Cancellation, + payload_converter: Converters::PayloadConverter, + logger: ScopedLogger, + definition: Activity::Definition + ) -> void + end + + class InboundImplementation < Temporalio::Worker::Interceptor::ActivityInbound + def initialize: (ActivityWorker worker) -> void + end + + class OutboundImplementation < Temporalio::Worker::Interceptor::ActivityOutbound + def initialize: (ActivityWorker worker) -> void + end + end + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/internal/worker/multi_runner.rbs b/temporalio/sig/temporalio/internal/worker/multi_runner.rbs new file mode 100644 index 00000000..6ce81db9 --- /dev/null +++ b/temporalio/sig/temporalio/internal/worker/multi_runner.rbs @@ -0,0 +1,77 @@ +module Temporalio + module Internal + module Worker + class MultiRunner + def initialize: (workers: Array[Temporalio::Worker]) -> void + + def apply_thread_or_fiber_block: ?{ (?) 
-> untyped } -> void + + def raise_in_thread_or_fiber_block: (Exception error) -> void + + def initiate_shutdown: -> void + + def wait_complete_and_finalize_shutdown: -> void + + def next_event: -> Event + + class Event + class PollSuccess < Event + attr_reader worker: Temporalio::Worker + attr_reader worker_type: Symbol + attr_reader bytes: String + + def initialize: ( + worker: Temporalio::Worker, + worker_type: Symbol, + bytes: String + ) -> void + end + + class PollFailure < Event + attr_reader worker: Temporalio::Worker + attr_reader worker_type: Symbol + attr_reader error: Exception + + def initialize: ( + worker: Temporalio::Worker, + worker_type: Symbol, + error: Exception + ) -> void + end + + class PollerShutDown < Event + attr_reader worker: Temporalio::Worker + attr_reader worker_type: Symbol + + def initialize: ( + worker: Temporalio::Worker, + worker_type: Symbol + ) -> void + end + + class AllPollersShutDown < Event + def self.instance: -> AllPollersShutDown + end + + class BlockSuccess < Event + attr_reader result: Object? + + def initialize: (result: Object?) -> void + end + + class BlockFailure < Event + attr_reader error: Exception + + def initialize: (error: Exception) -> void + end + end + + class InjectEventForTesting < Temporalio::Error + attr_reader event: Event + + def initialize: (Event event) -> void + end + end + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/scoped_logger.rbs b/temporalio/sig/temporalio/scoped_logger.rbs new file mode 100644 index 00000000..bf53b32f --- /dev/null +++ b/temporalio/sig/temporalio/scoped_logger.rbs @@ -0,0 +1,15 @@ +module Temporalio + class ScopedLogger < Logger + attr_accessor scoped_values_getter: Proc? 
+ attr_accessor disable_scoped_values: bool + + def initialize: (Logger) -> void + + class LogMessage + attr_reader message: Object + attr_reader scoped_values: Object + + def initialize: (Object message, Object scoped_values) -> void + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/testing/workflow_environment.rbs b/temporalio/sig/temporalio/testing/workflow_environment.rbs index c88467f9..31627f8b 100644 --- a/temporalio/sig/temporalio/testing/workflow_environment.rbs +++ b/temporalio/sig/temporalio/testing/workflow_environment.rbs @@ -7,6 +7,7 @@ module Temporalio ?namespace: String, ?data_converter: Converters::DataConverter, ?interceptors: Array[Client::Interceptor], + ?logger: Logger, ?ip: String, ?port: Integer?, ?ui: bool, diff --git a/temporalio/sig/temporalio/worker.rbs b/temporalio/sig/temporalio/worker.rbs new file mode 100644 index 00000000..dd21dd3e --- /dev/null +++ b/temporalio/sig/temporalio/worker.rbs @@ -0,0 +1,106 @@ +module Temporalio + class Worker + class Options + attr_accessor client: Client + attr_accessor task_queue: String + attr_accessor activities: Array[Activity | singleton(Activity) | Activity::Definition] + attr_accessor activity_executors: Hash[Symbol, Worker::ActivityExecutor] + attr_accessor tuner: Tuner + attr_accessor interceptors: Array[Interceptor] + attr_accessor build_id: String + attr_accessor identity: String + attr_accessor logger: Logger + attr_accessor max_cached_workflows: Integer + attr_accessor max_concurrent_workflow_task_polls: Integer + attr_accessor nonsticky_to_sticky_poll_ratio: Float + attr_accessor max_concurrent_activity_task_polls: Integer + attr_accessor no_remote_activities: bool + attr_accessor sticky_queue_schedule_to_start_timeout: Float + attr_accessor max_heartbeat_throttle_interval: Float + attr_accessor default_heartbeat_throttle_interval: Float + attr_accessor max_activities_per_second: Float? + attr_accessor max_task_queue_activities_per_second: Float? 
+ attr_accessor graceful_shutdown_period: Float + attr_accessor use_worker_versioning: bool + + def initialize: ( + client: Client, + task_queue: String, + activities: Array[Activity | singleton(Activity) | Activity::Definition], + activity_executors: Hash[Symbol, Worker::ActivityExecutor], + tuner: Tuner, + interceptors: Array[Interceptor], + build_id: String, + identity: String?, + logger: Logger, + max_cached_workflows: Integer, + max_concurrent_workflow_task_polls: Integer, + nonsticky_to_sticky_poll_ratio: Float, + max_concurrent_activity_task_polls: Integer, + no_remote_activities: bool, + sticky_queue_schedule_to_start_timeout: Float, + max_heartbeat_throttle_interval: Float, + default_heartbeat_throttle_interval: Float, + max_activities_per_second: Float?, + max_task_queue_activities_per_second: Float?, + graceful_shutdown_period: Float, + use_worker_versioning: bool + ) -> void + end + + def self.default_build_id: -> String + def self._load_default_build_id: -> String + + def self.run_all: [T] ( + *Worker workers, + ?cancellation: Cancellation, + ?raise_in_block_on_shutdown: Exception?, + ?wait_block_complete: bool + ) ?{ -> T } -> T + + attr_reader options: Options + + def initialize: ( + client: Client, + task_queue: String, + ?activities: Array[Activity | singleton(Activity) | Activity::Definition], + ?activity_executors: Hash[Symbol, Worker::ActivityExecutor], + ?tuner: Tuner, + ?interceptors: Array[Interceptor], + ?build_id: String, + ?identity: String?, + ?logger: Logger, + ?max_cached_workflows: Integer, + ?max_concurrent_workflow_task_polls: Integer, + ?nonsticky_to_sticky_poll_ratio: Float, + ?max_concurrent_activity_task_polls: Integer, + ?no_remote_activities: bool, + ?sticky_queue_schedule_to_start_timeout: Float, + ?max_heartbeat_throttle_interval: Float, + ?default_heartbeat_throttle_interval: Float, + ?max_activities_per_second: Float?, + ?max_task_queue_activities_per_second: Float?, + ?graceful_shutdown_period: Float, + 
?use_worker_versioning: bool + ) -> void + + def task_queue: -> String + + def run: [T] ( + ?cancellation: Cancellation, + ?raise_in_block_on_shutdown: Exception?, + ?wait_block_complete: bool + ) ?{ -> T } -> T + + def _worker_shutdown_cancellation: -> Cancellation + def _initiate_shutdown: -> void + def _wait_all_complete: -> void + def _bridge_worker: -> Internal::Bridge::Worker + def _all_interceptors: -> Array[Interceptor] + def _on_poll_bytes: (Symbol worker_type, String bytes) -> void + + private def to_bridge_slot_supplier_options: ( + Tuner::SlotSupplier slot_supplier + ) -> Internal::Bridge::Worker::TunerSlotSupplierOptions + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/worker/activity_executor.rbs b/temporalio/sig/temporalio/worker/activity_executor.rbs new file mode 100644 index 00000000..af7e03ee --- /dev/null +++ b/temporalio/sig/temporalio/worker/activity_executor.rbs @@ -0,0 +1,12 @@ +module Temporalio + class Worker + class ActivityExecutor + def self.defaults: -> Hash[Symbol, ActivityExecutor] + + def initialize_activity: (Activity::Definition defn) -> void + def execute_activity: (Activity::Definition defn) { -> void } -> void + def activity_context: -> Activity::Context? + def activity_context=: (Activity::Context? 
context) -> void + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/worker/activity_executor/fiber.rbs b/temporalio/sig/temporalio/worker/activity_executor/fiber.rbs new file mode 100644 index 00000000..9d5ccc81 --- /dev/null +++ b/temporalio/sig/temporalio/worker/activity_executor/fiber.rbs @@ -0,0 +1,9 @@ +module Temporalio + class Worker + class ActivityExecutor + class Fiber < ActivityExecutor + def self.default: -> Fiber + end + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/worker/activity_executor/thread_pool.rbs b/temporalio/sig/temporalio/worker/activity_executor/thread_pool.rbs new file mode 100644 index 00000000..ae1cd918 --- /dev/null +++ b/temporalio/sig/temporalio/worker/activity_executor/thread_pool.rbs @@ -0,0 +1,44 @@ +module Temporalio + class Worker + class ActivityExecutor + class ThreadPool < ActivityExecutor + def self.default: -> ThreadPool + + def self._monotonic_time: -> Float + + def initialize: ( + ?max_threads: Integer?, + ?idle_timeout: Float + ) -> void + + def largest_length: -> Integer + def scheduled_task_count: -> Integer + def completed_task_count: -> Integer + def active_count: -> Integer + def length: -> Integer + def queue_length: -> Integer + def shutdown: -> void + def kill: -> void + + def _remove_busy_worker: (Worker worker) -> void + def _ready_worker: (Worker worker, Float last_message) -> void + def _worker_died: (Worker worker) -> void + def _worker_task_completed: -> void + private def locked_assign_worker: { (?) -> untyped } -> void + private def locked_enqueue: { (?) -> untyped } -> void + private def locked_add_busy_worker: -> Worker? 
+ private def locked_prune_pool: -> void + private def locked_remove_busy_worker: (Worker worker) -> void + private def locked_ready_worker: (Worker worker, Float last_message) -> void + private def locked_worker_died: (Worker worker) -> void + + class Worker + def initialize: (ThreadPool pool, Integer id) -> void + def <<: (Proc block) -> void + def stop: -> void + def kill: -> void + end + end + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/worker/interceptor.rbs b/temporalio/sig/temporalio/worker/interceptor.rbs new file mode 100644 index 00000000..03363baf --- /dev/null +++ b/temporalio/sig/temporalio/worker/interceptor.rbs @@ -0,0 +1,43 @@ +module Temporalio + class Worker + module Interceptor + def intercept_activity: (ActivityInbound next_interceptor) -> ActivityInbound + + class ExecuteActivityInput + attr_accessor proc: Proc + attr_accessor args: Array[Object?] + attr_accessor headers: Hash[String, String] + + def initialize: ( + proc: Proc, + args: Array[Object?], + headers: Hash[String, String] + ) -> void + end + + class HeartbeatActivityInput + attr_accessor details: Array[Object?] + + def initialize: (details: Array[Object?]) -> void + end + + class ActivityInbound + attr_reader next_interceptor: ActivityInbound + + def initialize: (ActivityInbound next_interceptor) -> void + + def init: (ActivityOutbound outbound) -> ActivityOutbound + + def execute: (ExecuteActivityInput input) -> Object? 
+ end + + class ActivityOutbound + attr_reader next_interceptor: ActivityOutbound + + def initialize: (ActivityOutbound next_interceptor) -> void + + def heartbeat: (HeartbeatActivityInput input) -> void + end + end + end +end \ No newline at end of file diff --git a/temporalio/sig/temporalio/worker/tuner.rbs b/temporalio/sig/temporalio/worker/tuner.rbs new file mode 100644 index 00000000..22ee1666 --- /dev/null +++ b/temporalio/sig/temporalio/worker/tuner.rbs @@ -0,0 +1,69 @@ +module Temporalio + class Worker + class Tuner + class SlotSupplier + class Fixed < SlotSupplier + attr_reader slots: Integer + + def initialize: (Integer slots) -> void + end + + class ResourceBased < SlotSupplier + attr_reader tuner_options: ResourceBasedTunerOptions + attr_reader slot_options: ResourceBasedSlotOptions + + def initialize: ( + tuner_options: ResourceBasedTunerOptions, + slot_options: ResourceBasedSlotOptions + ) -> void + end + end + + class ResourceBasedTunerOptions + attr_accessor target_memory_usage: Float + attr_accessor target_cpu_usage: Float + + def initialize: ( + target_memory_usage: Float, + target_cpu_usage: Float + ) -> void + end + + class ResourceBasedSlotOptions + attr_accessor min_slots: Integer? + attr_accessor max_slots: Integer? + attr_accessor ramp_throttle: Float? + + def initialize: ( + min_slots: Integer?, + max_slots: Integer?, + ramp_throttle: Float? 
+ ) -> void + end + + def self.create_fixed: ( + ?workflow_slots: Integer, + ?activity_slots: Integer, + ?local_activity_slots: Integer + ) -> Tuner + + def self.create_resource_based: ( + target_memory_usage: Float, + target_cpu_usage: Float, + ?workflow_options: ResourceBasedSlotOptions, + ?activity_options: ResourceBasedSlotOptions, + ?local_activity_options: ResourceBasedSlotOptions + ) -> Tuner + + attr_reader workflow_slot_supplier: SlotSupplier + attr_reader activity_slot_supplier: SlotSupplier + attr_reader local_activity_slot_supplier: SlotSupplier + + def initialize: ( + workflow_slot_supplier: SlotSupplier, + activity_slot_supplier: SlotSupplier, + local_activity_slot_supplier: SlotSupplier + ) -> void + end + end +end \ No newline at end of file diff --git a/temporalio/temporalio.gemspec b/temporalio/temporalio.gemspec index db09160c..a6e567f2 100644 --- a/temporalio/temporalio.gemspec +++ b/temporalio/temporalio.gemspec @@ -34,6 +34,7 @@ Gem::Specification.new do |spec| spec.add_development_dependency 'base64' spec.add_development_dependency 'grpc', '>= 1.65.0.pre2' spec.add_development_dependency 'grpc-tools' + spec.add_development_dependency 'memory_profiler' spec.add_development_dependency 'minitest' spec.add_development_dependency 'rake' spec.add_development_dependency 'rake-compiler' diff --git a/temporalio/test/cancellation_test.rb b/temporalio/test/cancellation_test.rb new file mode 100644 index 00000000..0b8049a0 --- /dev/null +++ b/temporalio/test/cancellation_test.rb @@ -0,0 +1,63 @@ +# frozen_string_literal: true + +require 'temporalio/cancellation' +require 'test' + +class CancellationTest < Test + also_run_all_tests_in_fiber + + def test_simple_cancellation + # Create and confirm uncanceled state + cancel, cancel_proc = Temporalio::Cancellation.new + refute cancel.canceled? + cancel.check! 
+ got_cancel1 = false + cancel.add_cancel_callback { got_cancel1 = true } + refute got_cancel1 + got_cancel_from_wait_queue = Queue.new + run_in_background do + cancel.wait + got_cancel_from_wait_queue.push(true) + end + + # Do the cancel and confirm state + cancel_proc.call + assert cancel.canceled? + assert_raises(Temporalio::Error::CanceledError) { cancel.check! } + assert got_cancel1 + got_cancel2 = false + cancel.add_cancel_callback { got_cancel2 = true } + assert got_cancel2 + assert got_cancel_from_wait_queue.pop + cancel.wait + end + + def test_parent_cancellation + cancel_grandparent, cancel_grandparent_proc = Temporalio::Cancellation.new + cancel_parent = Temporalio::Cancellation.new(cancel_grandparent) + cancel = Temporalio::Cancellation.new(cancel_parent) + + refute cancel_grandparent.canceled? + refute cancel_parent.canceled? + refute cancel.canceled? + + cancel_grandparent_proc.call + assert cancel_grandparent.canceled? + assert cancel_parent.canceled? + assert cancel.canceled? + end + + def test_shielding + # Create a cancellation, shield a couple of levels deep and confirm + cancel, cancel_proc = Temporalio::Cancellation.new + cancel.shield do + cancel.shield do + cancel_proc.call(reason: 'some reason') + refute cancel.canceled? + end + refute cancel.canceled? + end + assert cancel.canceled? 
+ assert_equal 'some reason', cancel.canceled_reason + end +end diff --git a/temporalio/test/client_workflow_test.rb b/temporalio/test/client_workflow_test.rb index 6d8f14f3..ced03840 100644 --- a/temporalio/test/client_workflow_test.rb +++ b/temporalio/test/client_workflow_test.rb @@ -6,7 +6,9 @@ require 'test' class ClientWorkflowTest < Test - def start_simple + also_run_all_tests_in_fiber + + def test_start_simple # Create ephemeral test server env.with_kitchen_sink_worker do |task_queue| # Start 5 workflows @@ -24,16 +26,6 @@ def start_simple end end - def test_start_simple_threaded - start_simple - end - - def test_start_simple_async - Sync do - start_simple - end - end - def test_workflow_exists env.with_kitchen_sink_worker do |task_queue| # Create a workflow that hangs @@ -177,7 +169,7 @@ def test_start_delay def test_failure env.with_kitchen_sink_worker do |task_queue| # Simple error - err = assert_raises(Temporalio::Error::WorkflowFailureError) do + err = assert_raises(Temporalio::Error::WorkflowFailedError) do env.client.execute_workflow( 'kitchen_sink', { actions: [{ error: { message: 'some error', type: 'error-type', details: { foo: 'bar', baz: 123.45 } } }] }, @@ -192,7 +184,7 @@ def test_failure assert_equal [{ 'foo' => 'bar', 'baz' => 123.45 }], err.cause.details # Activity does not exist, for checking causes - err = assert_raises(Temporalio::Error::WorkflowFailureError) do + err = assert_raises(Temporalio::Error::WorkflowFailedError) do env.client.execute_workflow( 'kitchen_sink', { actions: [{ execute_activity: { name: 'does-not-exist' } }] }, @@ -208,7 +200,7 @@ def test_failure def test_retry_policy env.with_kitchen_sink_worker do |task_queue| - err = assert_raises(Temporalio::Error::WorkflowFailureError) do + err = assert_raises(Temporalio::Error::WorkflowFailedError) do env.client.execute_workflow( 'kitchen_sink', { actions: [{ error: { attempt: true } }] }, @@ -231,6 +223,7 @@ def test_list_and_count # Start 5 workflows, 3 that complete and 2 
that don't, and with different # SAs for odd ones vs even + text_val = "test-list-#{SecureRandom.uuid}" env.with_kitchen_sink_worker do |task_queue| handles = 5.times.map do |i| env.client.start_workflow( @@ -244,7 +237,7 @@ def test_list_and_count task_queue:, search_attributes: Temporalio::SearchAttributes.new( { - ATTR_KEY_TEXT => 'test-list', + ATTR_KEY_TEXT => text_val, ATTR_KEY_KEYWORD => i.even? ? 'even' : 'odd' } ) @@ -253,31 +246,31 @@ def test_list_and_count # Make sure all 5 come back in list assert_eventually do - wfs = env.client.list_workflows("`#{ATTR_KEY_TEXT.name}` = 'test-list'").to_a + wfs = env.client.list_workflows("`#{ATTR_KEY_TEXT.name}` = '#{text_val}'").to_a assert_equal 5, wfs.size # Check each item is present too assert_equal handles.map(&:id).sort, wfs.map(&:id).sort # Check the first has search attr - assert_equal 'test-list', wfs.first&.search_attributes&.[](ATTR_KEY_TEXT) + assert_equal text_val, wfs.first&.search_attributes&.[](ATTR_KEY_TEXT) end # Query for just the odd ones and make sure it's two assert_eventually do - wfs = env.client.list_workflows("`#{ATTR_KEY_TEXT.name}` = 'test-list' AND " \ + wfs = env.client.list_workflows("`#{ATTR_KEY_TEXT.name}` = '#{text_val}' AND " \ "`#{ATTR_KEY_KEYWORD.name}` = 'odd'").to_a assert_equal 2, wfs.size end # Normal count assert_eventually do - count = env.client.count_workflows("`#{ATTR_KEY_TEXT.name}` = 'test-list'") + count = env.client.count_workflows("`#{ATTR_KEY_TEXT.name}` = '#{text_val}'") assert_equal 5, count.count assert_empty count.groups end # Count with group by making sure eventually first 3 are complete assert_eventually do - count = env.client.count_workflows("`#{ATTR_KEY_TEXT.name}` = 'test-list' GROUP BY ExecutionStatus") + count = env.client.count_workflows("`#{ATTR_KEY_TEXT.name}` = '#{text_val}' GROUP BY ExecutionStatus") assert_equal 5, count.count groups = count.groups.sort_by(&:count) # 2 running, 3 completed @@ -438,7 +431,7 @@ def test_cancel task_queue: ) 
handle.cancel - err = assert_raises(Temporalio::Error::WorkflowFailureError) do + err = assert_raises(Temporalio::Error::WorkflowFailedError) do handle.result end assert_instance_of Temporalio::Error::CanceledError, err.cause @@ -454,7 +447,7 @@ def test_terminate task_queue: ) handle.terminate('some reason', details: ['some details']) - err = assert_raises(Temporalio::Error::WorkflowFailureError) do + err = assert_raises(Temporalio::Error::WorkflowFailedError) do handle.result end assert_instance_of Temporalio::Error::TerminatedError, err.cause diff --git a/temporalio/test/golangworker/main.go b/temporalio/test/golangworker/main.go index ec105d7d..a971cbe8 100644 --- a/temporalio/test/golangworker/main.go +++ b/temporalio/test/golangworker/main.go @@ -62,6 +62,7 @@ type KitchenSinkAction struct { UpdateHandler *UpdateHandlerAction `json:"update_handler"` Signal *SignalAction `json:"signal"` ExecuteActivity *ExecuteActivityAction `json:"execute_activity"` + Concurrent []*KitchenSinkAction `json:"concurrent"` } type ResultAction struct { @@ -110,6 +111,7 @@ type ExecuteActivityAction struct { StartToCloseTimeoutMS int64 `json:"start_to_close_timeout_ms"` ScheduleToStartTimeoutMS int64 `json:"schedule_to_start_timeout_ms"` CancelAfterMS int64 `json:"cancel_after_ms"` + CancelOnSignal string `json:"cancel_on_signal"` WaitForCancellation bool `json:"wait_for_cancellation"` HeartbeatTimeoutMS int64 `json:"heartbeat_timeout_ms"` RetryMaxAttempts int `json:"retry_max_attempts"` // 0 same as 1 @@ -250,6 +252,14 @@ func handleAction( cancel() }) } + if action.ExecuteActivity.CancelOnSignal != "" { + var cancel workflow.CancelFunc + actCtx, cancel = workflow.WithCancel(actCtx) + workflow.Go(actCtx, func(actCtx workflow.Context) { + workflow.GetSignalChannel(actCtx, action.ExecuteActivity.CancelOnSignal).Receive(actCtx, nil) + cancel() + }) + } args := action.ExecuteActivity.Args if action.ExecuteActivity.IndexAsArg { args = []interface{}{i} @@ -262,6 +272,29 @@ func 
handleAction( } return true, lastResponse, lastErr + case len(action.Concurrent) > 0: + var futs []workflow.Future + for _, action := range action.Concurrent { + action := action + fut, set := workflow.NewFuture(ctx) + workflow.Go(ctx, func(ctx workflow.Context) { + _, ret, err := handleAction(ctx, params, action) + set.Set(ret, err) + }) + futs = append(futs, fut) + } + var lastErr error + var vals []any + for _, fut := range futs { + var val any + if err := fut.Get(ctx, &val); err != nil { + lastErr = err + } else { + vals = append(vals, val) + } + } + return true, vals, lastErr + default: return true, nil, fmt.Errorf("unrecognized action") } diff --git a/temporalio/test/scoped_logger_test.rb b/temporalio/test/scoped_logger_test.rb new file mode 100644 index 00000000..ba962cc8 --- /dev/null +++ b/temporalio/test/scoped_logger_test.rb @@ -0,0 +1,43 @@ +# frozen_string_literal: true + +require 'temporalio/scoped_logger' +require 'test' + +class ScopedLoggerTest < Test + def test_logger_with_values + # Default doesn't change anything + out, = capture_io do + logger = Temporalio::ScopedLogger.new(Logger.new($stdout, level: Logger::INFO)) + logger.info('info1') + logger.error('error1') + logger.debug('debug1') + logger.with_level(Logger::DEBUG) { logger.debug('debug2') } # steep:ignore + logger.error(RuntimeError.new('exception1')) + end + lines = out.split("\n") + assert(lines.one? { |l| l.include?('INFO') && l.end_with?('info1') }) + assert(lines.one? { |l| l.include?('ERROR') && l.end_with?('error1') }) + assert(lines.none? { |l| l.include?('debug1') }) + assert(lines.one? { |l| l.include?('DEBUG') && l.end_with?('debug2') }) + assert(lines.one? 
{ |l| l.include?('ERROR') && l.end_with?('exception1 (RuntimeError)') }) + + # With a getter that returns some values + extra_vals = { some_key: { foo: 'bar', 'baz' => 123 } } + out, = capture_io do + logger = Temporalio::ScopedLogger.new(Logger.new($stdout, level: Logger::INFO)) + logger.scoped_values_getter = proc { extra_vals } + logger.add(Logger::WARN, 'warn1') + logger.info('info1') + logger.error('error1') + logger.debug('debug1') + logger.with_level(Logger::DEBUG) { logger.debug('debug2') } # steep:ignore + logger.error(RuntimeError.new('exception1')) + end + lines = out.split("\n") + assert(lines.one? { |l| l.include?('INFO') && l.end_with?("info1 #{extra_vals.inspect}") }) + assert(lines.one? { |l| l.include?('ERROR') && l.end_with?("error1 #{extra_vals.inspect}") }) + assert(lines.none? { |l| l.include?('debug1') }) + assert(lines.one? { |l| l.include?('DEBUG') && l.end_with?("debug2 #{extra_vals.inspect}") }) + assert(lines.one? { |l| l.include?('ERROR') && l.end_with?("exception1 #{extra_vals.inspect} (RuntimeError)") }) + end +end diff --git a/temporalio/test/sig/test.rbs b/temporalio/test/sig/test.rbs index 54a09f27..e58c8d88 100644 --- a/temporalio/test/sig/test.rbs +++ b/temporalio/test/sig/test.rbs @@ -9,8 +9,12 @@ class Test < Minitest::Test ATTR_KEY_TIME: Temporalio::SearchAttributes::Key ATTR_KEY_KEYWORD_LIST: Temporalio::SearchAttributes::Key + def self.also_run_all_tests_in_fiber: -> void + def env: -> TestEnvironment + def run_in_background: { (?) 
-> untyped } -> (Thread | Fiber) + class TestEnvironment include Singleton diff --git a/temporalio/test/sig/worker_activity_test.rbs b/temporalio/test/sig/worker_activity_test.rbs new file mode 100644 index 00000000..27d13859 --- /dev/null +++ b/temporalio/test/sig/worker_activity_test.rbs @@ -0,0 +1,36 @@ +class WorkerActivityTest < Test + def execute_activity: [T] ( + untyped activity, + *untyped args, + ?retry_max_attempts: Integer, + ?logger: Logger?, + ?heartbeat_timeout: Float?, + ?start_to_close_timeout: Float?, + ?override_name: String?, + ?cancel_on_signal: String?, + ?wait_for_cancellation: bool, + ?cancellation: Temporalio::Cancellation, + ?raise_in_block_on_shutdown: bool, + ?activity_executors: Hash[Symbol, Temporalio::Worker::ActivityExecutor], + ?interceptors: Array[Temporalio::Worker::Interceptor], + ?client: Temporalio::Client + ) ?{ (Temporalio::Client::WorkflowHandle, Temporalio::Worker) -> T } -> T | ( + untyped activity, + *untyped args, + ?retry_max_attempts: Integer, + ?logger: Logger?, + ?heartbeat_timeout: Float?, + ?start_to_close_timeout: Float?, + ?override_name: String?, + ?cancel_on_signal: String?, + ?wait_for_cancellation: bool, + ?cancellation: Temporalio::Cancellation, + ?raise_in_block_on_shutdown: bool, + ?activity_executors: Hash[Symbol, Temporalio::Worker::ActivityExecutor], + ?interceptors: Array[Temporalio::Worker::Interceptor], + ?client: Temporalio::Client + ) -> Object? + + def assert_multi_worker_activities: (?) -> untyped + def assert_single_worker_activities: (?) 
-> untyped +end \ No newline at end of file diff --git a/temporalio/test/test.rb b/temporalio/test/test.rb index b9eb4396..ad96573c 100644 --- a/temporalio/test/test.rb +++ b/temporalio/test/test.rb @@ -1,12 +1,22 @@ # frozen_string_literal: true +require 'async' require 'extra_assertions' +require 'logger' require 'minitest/autorun' require 'securerandom' require 'singleton' +require 'temporalio/internal/bridge' require 'temporalio/testing' require 'timeout' +# require 'memory_profiler' +# MemoryProfiler.start +# Minitest.after_run do +# report = MemoryProfiler.stop +# report.pretty_print +# end + class Test < Minitest::Test include ExtraAssertions @@ -27,17 +37,71 @@ class Test < Minitest::Test Temporalio::SearchAttributes::IndexedValueType::KEYWORD_LIST ) + def self.also_run_all_tests_in_fiber + @also_run_all_tests_in_fiber = true + end + + def self.method_added(method_name) + super + # If we are also running all tests in fiber, define `_in_fiber` equivalent, + # unless we are < 3.3 + unless @also_run_all_tests_in_fiber && + method_name.start_with?('test_') && + !method_name.end_with?('_in_fiber') && + Temporalio::Internal::Bridge.fibers_supported + return + end + + original_method = instance_method(method_name) + define_method("#{method_name}_in_fiber") do + Async do |_task| + original_method.bind(self).call + end + end + end + + def skip_if_fibers_not_supported! + return if Temporalio::Internal::Bridge.fibers_supported + + skip('Fibers not supported in this Ruby version') + end + def env TestEnvironment.instance end + def run_in_background(&) + if Fiber.current_scheduler + Fiber.schedule(&) # steep:ignore + else + Thread.new(&) # steep:ignore + end + end + + def after_teardown + super + return if passed? 
+ + # Dump full cause chain on error + puts 'Full cause chain:' + current = failures.first&.error + while current + puts "Exception: #{current.class} - #{current.message}" + puts 'Backtrace:' + puts current.backtrace.join("\n") + puts '-' * 50 + + current = current.cause + end + end + class TestEnvironment include Singleton attr_reader :server def initialize - @server = Temporalio::Testing::WorkflowEnvironment.start_local + @server = Temporalio::Testing::WorkflowEnvironment.start_local(logger: Logger.new($stdout)) Minitest.after_run do @server.shutdown end diff --git a/temporalio/test/worker/activity_executor/thread_pool_test.rb b/temporalio/test/worker/activity_executor/thread_pool_test.rb new file mode 100644 index 00000000..e2c53cec --- /dev/null +++ b/temporalio/test/worker/activity_executor/thread_pool_test.rb @@ -0,0 +1,111 @@ +# frozen_string_literal: true + +require 'temporalio/activity' +require 'test' + +module Worker + module ActivityExecutor + class ThreadPoolTest < Test + DO_NOTHING_ACTIVITY = Temporalio::Activity::Definition.new(name: 'ignore') do + # Empty + end + + def test_unlimited_max_with_idle + pool = Temporalio::Worker::ActivityExecutor::ThreadPool.new(idle_timeout: 0.3) + + # Start some activities + pending_activity_queues = Queue.new + 20.times do + pool.execute_activity(DO_NOTHING_ACTIVITY) do + queue = Queue.new + pending_activity_queues << queue + queue.pop + end + end + + # Wait for all to be waiting + assert_eventually { assert_equal 20, pending_activity_queues.size } + + # Confirm some values + assert_equal 20, pool.largest_length + assert_equal 20, pool.scheduled_task_count + assert_equal 0, pool.completed_task_count + assert_equal 20, pool.active_count + assert_equal 20, pool.length + assert_equal 0, pool.queue_length + + # Complete 7 of the activities + 7.times { pending_activity_queues.pop << nil } + + # Confirm values have changed + assert_eventually do + assert_equal 20, pool.largest_length + assert_equal 20, 
pool.scheduled_task_count
+          assert_equal 7, pool.completed_task_count
+          assert_equal 13, pool.active_count
+          assert_equal 0, pool.queue_length
+        end
+
+        # Wait twice as long as the idle timeout and send an immediately
+        # completing activity and confirm pool length trimmed down
+        sleep(0.6)
+        pool.execute_activity(DO_NOTHING_ACTIVITY) { nil }
+        assert_eventually do
+          assert pool.length == 13 || pool.length == 14, "Pool length: #{pool.length}"
+        end
+
+        # Finish the rest, shutdown, confirm eventually all done
+        pending_activity_queues.pop << nil until pending_activity_queues.empty?
+        pool.shutdown
+        assert_eventually do
+          assert_equal 20, pool.largest_length
+          assert_equal 21, pool.scheduled_task_count
+          assert_equal 21, pool.completed_task_count
+          assert_equal 0, pool.length
+        end
+      end
+
+      def test_limited_max
+        pool = Temporalio::Worker::ActivityExecutor::ThreadPool.new(max_threads: 7)
+
+        # Start some activities
+        pending_activity_queues = Queue.new
+        20.times do
+          pool.execute_activity(DO_NOTHING_ACTIVITY) do
+            queue = Queue.new
+            pending_activity_queues << queue
+            queue.pop
+          end
+        end
+
+        # Wait for 7 to be waiting
+        assert_eventually { assert_equal 7, pending_activity_queues.size }
+
+        # Confirm some values
+        assert_equal 7, pool.largest_length
+        assert_equal 20, pool.scheduled_task_count
+        assert_equal 0, pool.completed_task_count
+        assert_equal 7, pool.active_count
+        assert_equal 7, pool.length
+        assert_equal 13, pool.queue_length
+
+        # Complete 9 of the activities and confirm some values
+        9.times { pending_activity_queues.pop << nil }
+        assert_eventually do
+          assert_equal 9, pool.completed_task_count
+          assert_equal 7, pool.active_count
+          assert_equal 7, pool.length
+          # Only 4 left because 9 completed and 7 are running
+          assert_equal 4, pool.queue_length
+        end
+
+        # Complete the rest
+        11.times { pending_activity_queues.pop << nil }
+        assert_eventually do
+          assert_equal 20, pool.completed_task_count
+          assert_equal 0, pool.queue_length
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/test/worker_activity_test.rb b/temporalio/test/worker_activity_test.rb
new file mode 100644
index 00000000..02979123
--- /dev/null
+++ b/temporalio/test/worker_activity_test.rb
@@ -0,0 +1,881 @@
+# frozen_string_literal: true
+
+require 'async'
+require 'async/notification'
+require 'base64'
+require 'securerandom'
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'test'
+
+class WorkerActivityTest < Test
+  also_run_all_tests_in_fiber
+
+  class ClassActivity < Temporalio::Activity
+    def execute(name)
+      "Hello, #{name}!"
+    end
+  end
+
+  def test_class
+    assert_equal 'Hello, Class!', execute_activity(ClassActivity, 'Class')
+  end
+
+  class InstanceActivity < Temporalio::Activity
+    def initialize(greeting)
+      @greeting = greeting
+    end
+
+    def execute(name)
+      "#{@greeting}, #{name}!"
+    end
+  end
+
+  def test_instance
+    assert_equal 'Howdy, Instance!', execute_activity(InstanceActivity.new('Howdy'), 'Instance')
+  end
+
+  def test_block
+    activity = Temporalio::Activity::Definition.new(name: 'BlockActivity') { |name| "Greetings, #{name}!" }
+    assert_equal 'Greetings, Block!', execute_activity(activity, 'Block')
+  end
+
+  class FiberActivity < Temporalio::Activity
+    attr_reader :waiting_notification, :result_notification
+
+    activity_executor :fiber
+
+    def initialize
+      @waiting_notification = Async::Notification.new
+      @result_notification = Async::Notification.new
+    end
+
+    def execute
+      @waiting_notification.signal
+      value = @result_notification.wait
+      "Hello, #{value}!"
+    end
+  end
+
+  def test_fiber
+    # Tests are doubly executed in threaded and fiber, so we start a new Async block just in case
+    Async do |_task|
+      activity = FiberActivity.new
+      result = execute_activity(activity) do |handle|
+        # Wait for activity to reach its waiting point
+        activity.waiting_notification.wait
+        # Send signal
+        activity.result_notification.signal 'Fiber'
+        # Wait for result
+        handle.result
+      end
+      flunk('Should have failed') unless Temporalio::Internal::Bridge.fibers_supported
+      assert_equal 'Hello, Fiber!', result
+    rescue StandardError => e
+      raise if Temporalio::Internal::Bridge.fibers_supported
+      raise unless e.message.include?('Ruby 3.3 and newer')
+    end
+  end
+
+  class LoggingActivity < Temporalio::Activity
+    def execute
+      # Log and then raise only on first attempt
+      Temporalio::Activity::Context.current.logger.info('Test log')
+      raise 'Intentional failure' if Temporalio::Activity::Context.current.info.attempt == 1
+
+      'done'
+    end
+  end
+
+  def test_logging
+    out, = capture_io do
+      # New logger each time since stdout is replaced
+      execute_activity(LoggingActivity, retry_max_attempts: 2, logger: Logger.new($stdout))
+    end
+    lines = out.split("\n")
+    assert(lines.one? { |l| l.include?('Test log') && l.include?(':attempt=>1') })
+    assert(lines.one? { |l| l.include?('Test log') && l.include?(':attempt=>2') })
+  end
+
+  class CustomNameActivity < Temporalio::Activity
+    activity_name 'my-activity'
+
+    def execute
+      'done'
+    end
+  end
+
+  def test_custom_name
+    execute_activity(CustomNameActivity) do |handle|
+      assert_equal 'done', handle.result
+      assert(handle.fetch_history.events.one? do |e|
+        e.activity_task_scheduled_event_attributes&.activity_type&.name == 'my-activity'
+      end)
+    end
+  end
+
+  class DuplicateNameActivity1 < Temporalio::Activity
+  end
+
+  class DuplicateNameActivity2 < Temporalio::Activity
+    activity_name :DuplicateNameActivity1
+  end
+
+  def test_duplicate_name
+    error = assert_raises(ArgumentError) do
+      Temporalio::Worker.new(
+        client: env.client,
+        task_queue: "tq-#{SecureRandom.uuid}",
+        activities: [DuplicateNameActivity1, DuplicateNameActivity2]
+      )
+    end
+    assert_equal 'Multiple activities named DuplicateNameActivity1', error.message
+  end
+
+  class UnknownExecutorActivity < Temporalio::Activity
+    activity_executor :some_unknown
+  end
+
+  def test_unknown_executor
+    error = assert_raises(ArgumentError) do
+      Temporalio::Worker.new(
+        client: env.client,
+        task_queue: "tq-#{SecureRandom.uuid}",
+        activities: [UnknownExecutorActivity]
+      )
+    end
+    assert_equal "Unknown executor 'some_unknown'", error.message
+  end
+
+  class NotAnActivity # rubocop:disable Lint/EmptyClass
+  end
+
+  def test_not_an_activity
+    error = assert_raises(ArgumentError) do
+      Temporalio::Worker.new(
+        client: env.client,
+        task_queue: "tq-#{SecureRandom.uuid}",
+        activities: [NotAnActivity]
+      )
+    end
+    assert error.message.end_with?('does not extend Activity')
+  end
+
+  class FailureActivity < Temporalio::Activity
+    def execute(form)
+      case form
+      when 'simple'
+        raise 'simple-error'
+      when 'argument'
+        raise ArgumentError, 'argument-error'
+      when 'application'
+        raise Temporalio::Error::ApplicationError.new(
+          'application-error',
+          { foo: 'bar' },
+          'detail2',
+          type: 'some-error-type',
+          non_retryable: true,
+          next_retry_delay: 1.23
+        )
+      end
+    end
+  end
+
+  def test_failure
+    # Check basic error
+    error = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_activity(FailureActivity, 'simple') }
+    assert_kind_of Temporalio::Error::ActivityError, error.cause
+    assert_equal 'FailureActivity', error.cause.activity_type
+    assert_kind_of Temporalio::Error::ApplicationError, error.cause.cause
+    assert_equal 'simple-error', error.cause.cause.message
+    assert_includes error.cause.cause.backtrace.first, 'worker_activity_test.rb'
+    assert_equal 'RuntimeError', error.cause.cause.type
+
+    # Check that the error type is properly changed
+    error = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_activity(FailureActivity, 'argument') }
+    assert_equal 'ArgumentError', error.cause.cause.type
+
+    # Check that application error details are set
+    error = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_activity(FailureActivity, 'application') }
+    assert_equal 'application-error', error.cause.cause.message
+    assert_equal [{ 'foo' => 'bar' }, 'detail2'], error.cause.cause.details
+    assert_equal 'some-error-type', error.cause.cause.type
+    assert error.cause.cause.non_retryable
+    assert_equal 1.23, error.cause.cause.next_retry_delay
+  end
+
+  class UnimplementedExecuteActivity < Temporalio::Activity
+  end
+
+  def test_unimplemented_execute
+    error = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_activity(UnimplementedExecuteActivity) }
+    assert_equal 'Activity did not implement "execute"', error.cause.cause.message
+  end
+
+  def test_not_found
+    error = assert_raises(Temporalio::Error::WorkflowFailedError) do
+      execute_activity(UnimplementedExecuteActivity, override_name: 'not-found')
+    end
+    assert error.cause.cause.message.end_with?(
+      'is not registered on this worker, available activities: UnimplementedExecuteActivity'
+    )
+  end
+
+  class MultiParamActivity < Temporalio::Activity
+    def execute(arg1, arg2, arg3)
+      "Args: #{arg1}, #{arg2}, #{arg3}"
+    end
+  end
+
+  def test_multi_param
+    assert_equal 'Args: {"foo"=>"bar"}, 123, baz', execute_activity(MultiParamActivity, { foo: 'bar' }, 123, 'baz')
+  end
+
+  class InfoActivity < Temporalio::Activity
+    def execute
+      # Task token is non-utf8 safe string, so we need to base64 it
+      info_hash = Temporalio::Activity::Context.current.info.to_h # steep:ignore
+      info_hash[:task_token] = Base64.encode64(info_hash[:task_token])
+      info_hash
+    end
+  end
+
+  def test_info
+    info_hash = execute_activity(InfoActivity)
+    info = Temporalio::Activity::Info.new(**info_hash) # steep:ignore
+    refute_nil info.activity_id
+    assert_equal 'InfoActivity', info.activity_type
+    assert_equal 1, info.attempt
+    refute_nil info.current_attempt_scheduled_time
+    assert_equal false, info.local?
+    refute_nil info.schedule_to_close_timeout
+    refute_nil info.scheduled_time
+    refute_nil info.current_attempt_scheduled_time
+    refute_nil info.start_to_close_timeout
+    refute_nil info.started_time
+    refute_nil info.task_queue
+    refute_nil info.task_token
+    refute_nil info.workflow_id
+    assert_equal env.client.namespace, info.workflow_namespace
+    refute_nil info.workflow_run_id
+    assert_equal 'kitchen_sink', info.workflow_type
+  end
+
+  class CancellationActivity < Temporalio::Activity
+    attr_reader :canceled
+
+    def initialize(swallow: false)
+      @started = Queue.new
+      @swallow = swallow
+    end
+
+    def execute
+      @started.push(nil)
+      # Heartbeat every 50ms
+      loop do
+        sleep(0.05)
+        Temporalio::Activity::Context.current.heartbeat
+      end
+    rescue Temporalio::Error::CanceledError
+      @canceled = true
+      raise unless @swallow
+
+      'done'
+    end
+
+    def wait_started
+      @started.pop
+    end
+  end
+
+  def test_cancellation_simple
+    act = CancellationActivity.new
+    execute_activity(
+      act,
+      cancel_on_signal: 'cancel-activity',
+      wait_for_cancellation: true,
+      heartbeat_timeout: 0.8
+    ) do |handle|
+      # Wait for it to start
+      act.wait_started
+      # Send activity cancel
+      handle.signal('cancel-activity')
+      # Wait for completion
+      error = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+      assert_kind_of Temporalio::Error::CanceledError, error.cause
+      # Confirm thrown in activity
+      assert act.canceled
+    end
+  end
+
+  def test_cancellation_swallowed
+    act = CancellationActivity.new(swallow: true)
+    execute_activity(
+      act,
+      cancel_on_signal: 'cancel-activity',
+      wait_for_cancellation: true,
+      heartbeat_timeout: 0.8
+    ) do |handle|
+      # Wait for it to start
+      act.wait_started
+      # Send activity cancel
+      handle.signal('cancel-activity')
+      # Wait for completion
+      assert_equal 'done', handle.result
+      # Confirm thrown in activity
+      assert act.canceled
+    end
+  end
+
+  class HeartbeatDetailsActivity < Temporalio::Activity
+    def execute
+      # First attempt sends a heartbeat with details and fails,
+      # next attempt just returns the first attempt's details
+      if Temporalio::Activity::Context.current.info.attempt == 1
+        Temporalio::Activity::Context.current.heartbeat('detail1', 'detail2')
+        raise 'Intentional error'
+      else
+        "details: #{Temporalio::Activity::Context.current.info.heartbeat_details}"
+      end
+    end
+  end
+
+  def test_heartbeat_details
+    assert_equal 'details: ["detail1", "detail2"]',
+                 execute_activity(HeartbeatDetailsActivity, retry_max_attempts: 2, heartbeat_timeout: 0.8)
+  end
+
+  class ShieldingActivity < Temporalio::Activity
+    attr_reader :canceled, :levels_reached
+
+    def initialize
+      @waiting = Queue.new
+      @canceled = false
+      @levels_reached = 0
+    end
+
+    def execute
+      # Do an outer shield and an inner shield and confirm not canceled until
+      # after
+      Temporalio::Activity::Context.current.cancellation.shield do
+        Temporalio::Activity::Context.current.cancellation.shield do
+          @waiting.push(nil)
+          # Heartbeat every 50ms waiting for cancel
+          until Temporalio::Activity::Context.current.cancellation.pending_canceled?
+            sleep(0.05)
+            Temporalio::Activity::Context.current.heartbeat
+          end
+          @levels_reached += 1
+        end
+        @levels_reached += 1
+      end
+    rescue Temporalio::Error::CanceledError
+      @canceled = true
+      raise
+    end
+
+    def wait_until_waiting
+      @waiting.pop
+    end
+  end
+
+  def test_activity_shielding
+    act = ShieldingActivity.new
+    execute_activity(
+      act,
+      cancel_on_signal: 'cancel-activity',
+      wait_for_cancellation: true,
+      heartbeat_timeout: 0.8
+    ) do |handle|
+      # Wait for it to be waiting
+      act.wait_until_waiting
+      # Send activity cancel
+      handle.signal('cancel-activity')
+      # Wait for completion
+      error = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+      assert_kind_of Temporalio::Error::CanceledError, error.cause
+      # Confirm thrown in activity but the proper levels reached
+      assert act.canceled
+      assert_equal 2, act.levels_reached
+    end
+  end
+
+  class NoRaiseCancellationActivity < Temporalio::Activity
+    activity_cancel_raise false
+    attr_reader :canceled
+
+    def initialize
+      @started = Queue.new
+    end
+
+    def execute
+      @started.push(nil)
+      # Heartbeat until cancellation and then heartbeat a few more
+      # ensuring we're not cancelling
+      until Temporalio::Activity::Context.current.cancellation.canceled?
+        sleep(0.05)
+        Temporalio::Activity::Context.current.heartbeat
+      end
+      5.times do
+        sleep(0.05)
+        Temporalio::Activity::Context.current.heartbeat
+      end
+      'got canceled'
+    end
+
+    def wait_started
+      @started.pop
+    end
+  end
+
+  def test_no_raise_cancellation
+    act = NoRaiseCancellationActivity.new
+    execute_activity(
+      act,
+      cancel_on_signal: 'cancel-activity',
+      wait_for_cancellation: true,
+      heartbeat_timeout: 0.8
+    ) do |handle|
+      # Wait for it to start
+      act.wait_started
+      # Send activity cancel
+      handle.signal('cancel-activity')
+      # Wait for completion
+      assert_equal 'got canceled', handle.result
+    end
+  end
+
+  class WorkerShutdownActivity < Temporalio::Activity
+    attr_reader :canceled
+
+    def initialize
+      @started = Queue.new
+      @cancel_received = Queue.new
+      @reraise_cancel = Queue.new
+    end
+
+    def execute
+      @started.push(nil)
+      # Heartbeat every 50ms
+      loop do
+        sleep(0.05)
+        Temporalio::Activity::Context.current.heartbeat
+      end
+    rescue Temporalio::Error::CanceledError
+      raise 'Not canceled' unless Temporalio::Activity::Context.current.worker_shutdown_cancellation.canceled?
+
+      @cancel_received.push(nil)
+      @reraise_cancel.pop
+      raise
+    end
+
+    def wait_started
+      @started.pop
+    end
+
+    def wait_cancel_received
+      @cancel_received.pop
+    end
+
+    def reraise_cancel
+      @reraise_cancel.push(nil)
+    end
+  end
+
+  def test_worker_shutdown
+    act = WorkerShutdownActivity.new
+    # Start the activity, then cancel worker but let block complete
+    worker_cancel, worker_cancel_proc = Temporalio::Cancellation.new
+    workflow_handle = execute_activity(
+      act,
+      wait_for_cancellation: true,
+      cancellation: worker_cancel,
+      raise_in_block_on_shutdown: false
+    ) do |handle|
+      # Wait for it to be started
+      act.wait_started
+      # Do worker cancellation
+      worker_cancel_proc.call
+      act.wait_cancel_received
+      act.reraise_cancel
+      # Wait for workflow result
+      assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+      handle
+    end
+
+    # Check that cancel was due to worker shutdown
+    error = assert_raises(Temporalio::Error::WorkflowFailedError) { workflow_handle.result }
+    assert_kind_of Temporalio::Error::ActivityError, error.cause
+    assert_kind_of Temporalio::Error::ApplicationError, error.cause.cause
+    assert_equal 'WorkerShutdown', error.cause.cause.type
+  end
+
+  class AsyncCompletionActivity < Temporalio::Activity
+    def initialize
+      @task_token = Queue.new
+    end
+
+    def execute
+      @task_token.push(Temporalio::Activity::Context.current.info.task_token)
+      raise Temporalio::Activity::CompleteAsyncError
+    end
+
+    def wait_task_token
+      @task_token.pop
+    end
+  end
+
+  def test_async_completion_success
+    act = AsyncCompletionActivity.new
+    execute_activity(act) do |handle|
+      # Wait for token
+      task_token = act.wait_task_token
+
+      # Send completion and confirm result
+      env.client.async_activity_handle(task_token).complete('some result')
+      assert_equal 'some result', handle.result
+    end
+  end
+
+  def test_async_completion_heartbeat_and_fail
+    act = AsyncCompletionActivity.new
+    execute_activity(act) do |handle|
+      # Wait for token
+      task_token = act.wait_task_token
+
+      # Send heartbeat and confirm details accurate
+      env.client.async_activity_handle(task_token).heartbeat('foo', 'bar')
+      assert_equal %w[foo bar],
+                   env.client.data_converter.from_payloads(
+                     handle.describe.raw_description.pending_activities.first.heartbeat_details
+                   )
+
+      # Send failure and confirm accurate
+      env.client.async_activity_handle(task_token).fail(RuntimeError.new('Oh no'))
+      error = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+      assert_kind_of Temporalio::Error::ActivityError, error.cause
+      assert_kind_of Temporalio::Error::ApplicationError, error.cause.cause
+      assert_equal 'Oh no', error.cause.cause.message
+    end
+  end
+
+  def test_async_completion_cancel
+    act = AsyncCompletionActivity.new
+    execute_activity(act, wait_for_cancellation: true) do |handle|
+      # Wait for token
+      task_token = act.wait_task_token
+
+      # Cancel workflow and confirm activity wants to be canceled
+      handle.cancel
+      assert_eventually do
+        assert_raises(Temporalio::Error::AsyncActivityCanceledError) do
+          env.client.async_activity_handle(task_token).heartbeat
+        end
+      end
+
+      # Send cancel and confirm canceled
+      env.client.async_activity_handle(task_token).report_cancellation
+      error = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+      assert_kind_of Temporalio::Error::CanceledError, error.cause
+    end
+  end
+
+  def test_async_completion_timeout
+    act = AsyncCompletionActivity.new
+    execute_activity(act, start_to_close_timeout: 0.5, wait_for_cancellation: true) do
+      # Wait for token
+      task_token = act.wait_task_token
+
+      # Wait for activity to show not-found
+      assert_eventually do
+        error = assert_raises(Temporalio::Error::RPCError) do
+          env.client.async_activity_handle(task_token).heartbeat
+        end
+        assert_equal Temporalio::Error::RPCError::Code::NOT_FOUND, error.code
+      end
+    end
+  end
+
+  class CustomExecutor < Temporalio::Worker::ActivityExecutor
+    def execute_activity(_defn, &block)
+      Thread.new do
+        Thread.current[:some_local_val] = 'foo'
+        block.call # steep:ignore
+      end
+    end
+
+    def activity_context
+      Thread.current[:temporal_activity_context]
+    end
+
+    def activity_context=(context)
+      Thread.current[:temporal_activity_context] = context
+    end
+  end
+
+  class CustomExecutorActivity < Temporalio::Activity
+    activity_executor :my_executor
+
+    def execute
+      "local val: #{Thread.current[:some_local_val]}"
+    end
+  end
+
+  def test_custom_executor
+    assert_equal 'local val: foo',
+                 execute_activity(CustomExecutorActivity, activity_executors: { my_executor: CustomExecutor.new })
+  end
+
+  class ConcurrentActivity < Temporalio::Activity
+    def initialize
+      @started = Queue.new
+      @continue = Queue.new
+    end
+
+    def execute(num)
+      @started.push(nil)
+      @continue.pop
+      "done: #{num}"
+    end
+
+    def wait_started
+      @started.pop
+    end
+
+    def continue
+      @continue.push(nil)
+    end
+  end
+
+  class ConcurrentFiberActivity < ConcurrentActivity
+    activity_name 'ConcurrentActivity' # steep:ignore
+    activity_executor :fiber # steep:ignore
+  end
+
+  def assert_multi_worker_activities(activities)
+    workers = activities.each_with_index.map do |activity, index|
+      Temporalio::Worker.new(
+        client: env.client,
+        task_queue: "tq-#{index}-#{SecureRandom.uuid}",
+        activities: [activity],
+        build_id: 'ignore'
+      )
+    end
+    Temporalio::Worker.run_all(*workers) do
+      env.with_kitchen_sink_worker do |kitchen_sink_task_queue|
+        # Start workflow w/ concurrent activities
+        handle = env.client.start_workflow(
+          'kitchen_sink',
+          { actions: [{
+            concurrent: workers.each_with_index.map do |worker, index|
+              {
+                execute_activity: {
+                  name: 'ConcurrentActivity',
+                  task_queue: worker.task_queue,
+                  args: [index]
+                }
+              }
+            end
+          }] },
+          id: "wf-#{SecureRandom.uuid}",
+          task_queue: kitchen_sink_task_queue
+        )
+        # Wait for all to be started
+        activities.each(&:wait_started)
+        # Continue all
+        activities.each(&:continue)
+        # Confirm result
+        assert_equal activities.size.times.map { |i| "done: #{i}" }, handle.result
+      end
+    end
+  end
+
+  def test_concurrent_multi_worker_threaded_activities
+    assert_multi_worker_activities(50.times.map { ConcurrentActivity.new })
+  end
+
+  def test_concurrent_multi_worker_fiber_activities
+    skip 'Must be fiber-based worker to do fiber-based activities' if Fiber.current_scheduler.nil?
+    assert_multi_worker_activities(50.times.map { ConcurrentFiberActivity.new })
+  end
+
+  def assert_single_worker_activities(activity, count)
+    worker = Temporalio::Worker.new(
+      client: env.client,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [activity]
+    )
+    worker.run do
+      env.with_kitchen_sink_worker do |kitchen_sink_task_queue|
+        handle = env.client.start_workflow(
+          'kitchen_sink',
+          { actions: [{
+            concurrent: count.times.map do |index|
+              {
+                execute_activity: {
+                  name: 'ConcurrentActivity',
+                  task_queue: worker.task_queue,
+                  args: [index]
+                }
+              }
+            end
+          }] },
+          id: "wf-#{SecureRandom.uuid}",
+          task_queue: kitchen_sink_task_queue
+        )
+        # Wait for all to be started
+        count.times.each { activity.wait_started }
+        # Continue all
+        count.times.each { activity.continue } # rubocop:disable Style/CombinableLoops
+        # Confirm result
+        assert_equal count.times.map { |i| "done: #{i}" }, handle.result
+      end
+    end
+  end
+
+  def test_concurrent_single_worker_threaded_activities
+    assert_single_worker_activities(ConcurrentActivity.new, 50)
+  end
+
+  def test_concurrent_single_worker_fiber_activities
+    skip 'Must be fiber-based worker to do fiber-based activities' if Fiber.current_scheduler.nil?
+    assert_single_worker_activities(ConcurrentFiberActivity.new, 50)
+  end
+
+  class TrackCallsInterceptor
+    include Temporalio::Worker::Interceptor
+    # Also include client interceptor so we can test worker interceptors at a
+    # client level
+    include Temporalio::Client::Interceptor
+
+    attr_accessor :calls
+
+    def initialize
+      @calls = []
+    end
+
+    def intercept_activity(next_interceptor)
+      Inbound.new(self, next_interceptor)
+    end
+
+    class Inbound < Temporalio::Worker::Interceptor::ActivityInbound
+      def initialize(root, next_interceptor)
+        super(next_interceptor)
+        @root = root
+      end
+
+      def init(outbound)
+        @root.calls.push(['activity_init', Temporalio::Activity::Context.current.info.activity_type])
+        super(Outbound.new(@root, outbound))
+      end
+
+      def execute(input)
+        @root.calls.push(['activity_execute', input])
+        super
+      end
+    end
+
+    class Outbound < Temporalio::Worker::Interceptor::ActivityOutbound
+      def initialize(root, next_interceptor)
+        super(next_interceptor)
+        @root = root
+      end
+
+      def heartbeat(input)
+        @root.calls.push(['activity_heartbeat', input])
+        super
+      end
+    end
+  end
+
+  class InterceptorActivity < Temporalio::Activity
+    def execute(name)
+      Temporalio::Activity::Context.current.heartbeat('heartbeat-val')
+      "Hello, #{name}!"
+    end
+  end
+
+  def test_interceptor
+    interceptor = TrackCallsInterceptor.new
+    assert_equal 'Hello, Temporal!', execute_activity(InterceptorActivity, 'Temporal', interceptors: [interceptor])
+    assert_equal 'activity_init', interceptor.calls[0].first
+    assert_equal 'InterceptorActivity', interceptor.calls[0][1]
+    assert_equal 'activity_execute', interceptor.calls[1].first
+    assert_equal ['Temporal'], interceptor.calls[1][1].args
+    assert_equal 'activity_heartbeat', interceptor.calls[2].first
+    assert_equal ['heartbeat-val'], interceptor.calls[2][1].details
+  end
+
+  def test_interceptor_from_client
+    interceptor = TrackCallsInterceptor.new
+    # Create new client with the interceptor set
+    new_options = env.client.options.dup
+    new_options.interceptors = [interceptor]
+    new_client = Temporalio::Client.new(**new_options.to_h) # steep:ignore
+    assert_equal 'Hello, Temporal!', execute_activity(InterceptorActivity, 'Temporal', client: new_client)
+    assert_equal 'activity_init', interceptor.calls[0].first
+    assert_equal 'InterceptorActivity', interceptor.calls[0][1]
+    assert_equal 'activity_execute', interceptor.calls[1].first
+    assert_equal ['Temporal'], interceptor.calls[1][1].args
+    assert_equal 'activity_heartbeat', interceptor.calls[2].first
+    assert_equal ['heartbeat-val'], interceptor.calls[2][1].details
+  end
+
+  # steep:ignore
+  def execute_activity(
+    activity,
+    *args,
+    retry_max_attempts: 1,
+    logger: nil,
+    heartbeat_timeout: nil,
+    start_to_close_timeout: nil,
+    override_name: nil,
+    cancel_on_signal: nil,
+    wait_for_cancellation: false,
+    cancellation: nil,
+    raise_in_block_on_shutdown: true,
+    activity_executors: nil,
+    interceptors: [],
+    client: env.client
+  )
+    activity_defn = Temporalio::Activity::Definition.from_activity(activity)
+    extra_worker_args = {}
+    extra_worker_args[:activity_executors] = activity_executors if activity_executors
+    worker = Temporalio::Worker.new(
+      client:,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [activity],
+      logger: logger || client.options.logger,
+      interceptors:,
+      **extra_worker_args
+    )
+    run_args = {}
+    run_args[:cancellation] = cancellation unless cancellation.nil?
+    run_args[:raise_in_block_on_shutdown] = nil unless raise_in_block_on_shutdown
+    worker.run(**run_args) do
+      env.with_kitchen_sink_worker do |kitchen_sink_task_queue|
+        handle = client.start_workflow(
+          'kitchen_sink',
+          { actions: [{ execute_activity: {
+            name: override_name || activity_defn.name,
+            task_queue: worker.task_queue,
+            args:,
+            retry_max_attempts:,
+            cancel_on_signal:,
+            wait_for_cancellation:,
+            heartbeat_timeout_ms: heartbeat_timeout ? (heartbeat_timeout * 1000).to_i : nil,
+            start_to_close_timeout_ms: start_to_close_timeout ? (start_to_close_timeout * 1000).to_i : nil
+          } }] },
+          id: "wf-#{SecureRandom.uuid}",
+          task_queue: kitchen_sink_task_queue
+        )
+        if block_given?
+          yield handle, worker
+        else
+          handle.result
+        end
+      end
+    end
+  end
+end
diff --git a/temporalio/test/worker_test.rb b/temporalio/test/worker_test.rb
new file mode 100644
index 00000000..54ebc78c
--- /dev/null
+++ b/temporalio/test/worker_test.rb
@@ -0,0 +1,163 @@
+# frozen_string_literal: true
+
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'test'
+
+class WorkerTest < Test
+  also_run_all_tests_in_fiber
+
+  class SimpleActivity < Temporalio::Activity
+    def execute(name)
+      "Hello, #{name}!"
+    end
+  end
+
+  def test_run_with_cancellation
+    worker = Temporalio::Worker.new(
+      client: env.client,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [SimpleActivity]
+    )
+    cancellation, cancel_proc = Temporalio::Cancellation.new
+    done = Queue.new
+    run_in_background do
+      # We will test for Thread.raise if threaded
+      if Fiber.current_scheduler
+        worker.run(cancellation:)
+        done.push(nil)
+      else
+        worker.run(cancellation:) { Queue.new.pop }
+      end
+    rescue StandardError => e
+      done.push(e)
+    end
+    cancel_proc.call
+    err = done.pop
+    assert_nil err if Fiber.current_scheduler
+    assert_equal 'Workers finished', err.message unless Fiber.current_scheduler
+  end
+
+  def test_run_immediately_complete_block
+    worker = Temporalio::Worker.new(
+      client: env.client,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [SimpleActivity]
+    )
+    assert_equal('done', worker.run { 'done' })
+  end
+
+  def test_poll_failure_causes_shutdown
+    worker = Temporalio::Worker.new(
+      client: env.client,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [SimpleActivity]
+    )
+
+    # Run in background
+    done = Queue.new
+    run_in_background do
+      worker.run do
+        # Mimic a poll failure
+        raise Temporalio::Internal::Worker::MultiRunner::InjectEventForTesting.new( # rubocop:disable Style/RaiseArgs
+          Temporalio::Internal::Worker::MultiRunner::Event::PollFailure.new(
+            worker:,
+            worker_type: :activity,
+            error: RuntimeError.new('Intentional error')
+          )
+        )
+      end
+    rescue StandardError => e
+      done.push(e)
+    end
+    err = done.pop
+    assert_kind_of RuntimeError, err
+    assert_equal 'Intentional error', err.message
+  end
+
+  def test_block_failure_causes_shutdown
+    worker = Temporalio::Worker.new(
+      client: env.client,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [SimpleActivity]
+    )
+
+    # Run in background
+    done = Queue.new
+    run_in_background do
+      worker.run { raise 'Intentional error' }
+    rescue StandardError => e
+      done.push(e)
+    end
+    err = done.pop
+    assert_kind_of RuntimeError, err
+    assert_equal 'Intentional error', err.message
+  end
+
+  def test_can_run_with_resource_tuner
+    worker = Temporalio::Worker.new(
+      client: env.client,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [SimpleActivity],
+      tuner: Temporalio::Worker::Tuner.create_resource_based(target_memory_usage: 0.5, target_cpu_usage: 0.5)
+    )
+    worker.run do
+      env.with_kitchen_sink_worker do |kitchen_sink_task_queue|
+        result = env.client.execute_workflow(
+          'kitchen_sink',
+          { actions: [{ execute_activity: { name: 'SimpleActivity',
+                                            task_queue: worker.task_queue,
+                                            args: ['Temporal'] } }] },
+          id: "wf-#{SecureRandom.uuid}",
+          task_queue: kitchen_sink_task_queue
+        )
+        assert_equal 'Hello, Temporal!', result
+      end
+    end
+  end
+
+  def test_can_run_with_composite_tuner
+    resource_tuner_options = Temporalio::Worker::Tuner::ResourceBasedTunerOptions.new(
+      target_memory_usage: 0.5,
+      target_cpu_usage: 0.5
+    )
+    worker = Temporalio::Worker.new(
+      client: env.client,
+      task_queue: "tq-#{SecureRandom.uuid}",
+      activities: [SimpleActivity],
+      tuner: Temporalio::Worker::Tuner.new(
+        workflow_slot_supplier: Temporalio::Worker::Tuner::SlotSupplier::Fixed.new(5),
+        activity_slot_supplier: Temporalio::Worker::Tuner::SlotSupplier::ResourceBased.new(
+          tuner_options: resource_tuner_options,
+          slot_options: Temporalio::Worker::Tuner::ResourceBasedSlotOptions.new(
+            min_slots: 1,
+            max_slots: 20,
+            ramp_throttle: 0.06
+          )
+        ),
+        local_activity_slot_supplier: Temporalio::Worker::Tuner::SlotSupplier::ResourceBased.new(
+          tuner_options: resource_tuner_options,
+          slot_options: Temporalio::Worker::Tuner::ResourceBasedSlotOptions.new(
+            min_slots: 1,
+            max_slots: 5,
+            ramp_throttle: 0.06
+          )
+        )
+      )
+    )
+    worker.run do
+      env.with_kitchen_sink_worker do |kitchen_sink_task_queue|
+        result = env.client.execute_workflow(
+          'kitchen_sink',
+          { actions: [{ execute_activity: { name: 'SimpleActivity',
+                                            task_queue: worker.task_queue,
+                                            args: ['Temporal'] } }] },
+          id: "wf-#{SecureRandom.uuid}",
+          task_queue: kitchen_sink_task_queue
+        )
+        assert_equal 'Hello, Temporal!', result
+      end
+    end
+  end
+end