From 3a83254213455dd237ba8d02d9b9a9e7d14b3c6a Mon Sep 17 00:00:00 2001
From: Courtney Garcia <97773072+courtneyga@users.noreply.github.com>
Date: Wed, 17 Apr 2024 10:56:22 -0500
Subject: [PATCH 01/35] Update insert-functions.md
---
src/connections/functions/insert-functions.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/connections/functions/insert-functions.md b/src/connections/functions/insert-functions.md
index c82cda282f..24f79e556b 100644
--- a/src/connections/functions/insert-functions.md
+++ b/src/connections/functions/insert-functions.md
@@ -47,6 +47,9 @@ Use this page to edit and manage insert functions in your workspace.
You can also use this page to [enable destination insert functions](#enable-the-insert-function) in your workspace.
+> warning "Storage Destination Limit"
+> You can't currently connect a storage destination to an insert function.
+
## Code the destination insert function
Segment invokes a separate part of the function (called a "handler") for each event type that you send to your destination insert function.
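For illustration, a minimal Track handler might look like the following sketch (the `enriched` property is a made-up example, not part of Segment's API):

```javascript
// Minimal sketch of a destination insert function Track handler.
// Segment invokes onTrack once per Track event; the returned event
// is what gets forwarded to the destination.
async function onTrack(event, settings) {
  // Hypothetical enrichment: tag the event before it reaches the destination
  event.properties = { ...event.properties, enriched: true }
  return event
}

onTrack({ type: 'track', event: 'Order Completed', properties: { revenue: 19.99 } }, {})
  .then((result) => console.log(result.properties.enriched)) // logs: true
```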
From 8e266a3788aaab539d51a22e420e0e2707c63056 Mon Sep 17 00:00:00 2001
From: Ashton Huxtable <78318468+ashton-huxtable@users.noreply.github.com>
Date: Tue, 30 Apr 2024 21:05:28 -0600
Subject: [PATCH 02/35] Update to reflect support of email as identifier
---
.../destinations/catalog/braze-cloud-mode-actions/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/connections/destinations/catalog/braze-cloud-mode-actions/index.md b/src/connections/destinations/catalog/braze-cloud-mode-actions/index.md
index 0cd30764e2..f6cafe0e26 100644
--- a/src/connections/destinations/catalog/braze-cloud-mode-actions/index.md
+++ b/src/connections/destinations/catalog/braze-cloud-mode-actions/index.md
@@ -34,7 +34,7 @@ Braze Cloud Mode (Actions) provides the following benefit over Braze Classic:
- **REST Endpoint**: Your Braze REST Endpoint. For more information, see [API Overview](https://www.braze.com/docs/api/basics/){:target="_blank"} in the Braze documentation.
> info ""
-> Braze requires that you include a `userId` or `braze_id` for all calls made in cloud-mode. Segment sends a `braze_id` if the `userId` is missing. When you use a device-mode connection, Braze automatically tracks anonymous activity using the `braze_id` if a `userId` is missing.
+> Braze now supports sending `email` as an identifier. Braze requires that you include `userId`, `braze_id`, or `email` for all calls made in cloud-mode. Segment sends a `braze_id` if the `userId` is missing. When you use a device-mode connection, Braze automatically tracks anonymous activity using the `braze_id` if a `userId` is missing.
{% include components/actions-fields.html settings="true"%}
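The identifier requirement above can be sketched as a hypothetical helper (for illustration only — not Segment's actual mapping code; `external_id` is Braze's name for the user ID):

```javascript
// Hypothetical sketch of the cloud-mode identifier requirement:
// every call needs a userId, braze_id, or email.
function resolveBrazeIdentifier(payload) {
  if (payload.userId) return { external_id: payload.userId }
  if (payload.braze_id) return { braze_id: payload.braze_id }
  if (payload.email) return { email: payload.email }
  throw new Error('Braze requires a userId, braze_id, or email')
}

console.log(resolveBrazeIdentifier({ email: 'jane@example.com' }))
// { email: 'jane@example.com' }
```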
From 9df0ae0c146e2df04c5a3d2f68e3c0c81fc99e43 Mon Sep 17 00:00:00 2001
From: Jazma Foskin <82051355+jfoskin@users.noreply.github.com>
Date: Fri, 31 May 2024 15:42:50 -0400
Subject: [PATCH 03/35] Schema validated against version 1 of Tracking Plan
faq.md
---
src/protocols/faq.md | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/protocols/faq.md b/src/protocols/faq.md
index 314f620723..74f1c53a26 100644
--- a/src/protocols/faq.md
+++ b/src/protocols/faq.md
@@ -154,6 +154,11 @@ Segment's [Schema Controls](docs/connections/sources/schema/destination-data-con
2. **Standard Schema Controls/"JSON Schema Violations"**: Segment checks the names and evaluates the values of properties/traits. This is useful if you've specified a pattern or a list of acceptable values in the [JSON schema](/docs/protocols/tracking-plan/create/#edit-underlying-json-schema) for each Track event listed in the Tracking Plan.
3. **Advanced Blocking Controls/"Common JSON Schema Violations"**: Segment evaluates incoming events thoroughly, including event names, context field names and values, and the names and values of properties/traits, against the [Common JSON schema](/docs/protocols/tracking-plan/create/#common-json-schema) in your Tracking Plan.
+
+### Why am I still seeing unplanned properties in the source Schema when those properties exist in newer versions of the Tracking Plan?
+
+Source Schema validates events only against the oldest event version in the Tracking Plan. For example, if an event has both a version 1 and a version 2, the Schema page checks incoming events against version 1 only.
+
### Do blocked and discarded events count towards my MTU counts?
Blocking events within a [Source Schema](/docs/connections/sources/schema/) or [Tracking Plan](/docs/protocols/tracking-plan/create/) excludes them from API call and MTU calculations, as the events are discarded before they reach the pipeline that Segment uses for calculations.
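The Tracking Plan version behavior described in the FAQ above can be sketched as a small helper (hypothetical, for illustration only):

```javascript
// Hypothetical illustration: Source Schema validates events against
// the oldest event version present in the Tracking Plan.
function versionUsedForValidation(eventVersions) {
  // eventVersions: array of version numbers defined for one event
  return Math.min(...eventVersions)
}

console.log(versionUsedForValidation([1, 2])) // 1
```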
From 64631538c5bb08a70ed00591ee1a4d40dba60486 Mon Sep 17 00:00:00 2001
From: Alan Charles <50601149+alanjcharles@users.noreply.github.com>
Date: Tue, 17 Sep 2024 11:22:50 -0400
Subject: [PATCH 04/35] add web setup guide to auto-inst [netlify-build]
---
Gemfile.lock | 15 ++-
src/connections/auto-instrumentation/index.md | 46 +++++++-
.../{setup.md => kotlin-setup.md} | 58 +---------
.../auto-instrumentation/swift-setup.md | 92 +++++++++++++++
.../auto-instrumentation/web-setup.md | 109 ++++++++++++++++++
5 files changed, 258 insertions(+), 62 deletions(-)
rename src/connections/auto-instrumentation/{setup.md => kotlin-setup.md} (69%)
create mode 100644 src/connections/auto-instrumentation/swift-setup.md
create mode 100644 src/connections/auto-instrumentation/web-setup.md
diff --git a/Gemfile.lock b/Gemfile.lock
index 8f5e6c086c..ba53330b9f 100755
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -45,6 +45,7 @@ GEM
ffi (1.15.5)
filesize (0.2.0)
forwardable-extended (2.6.0)
+ google-protobuf (3.23.2-arm64-darwin)
google-protobuf (3.23.2-x86_64-darwin)
http_parser.rb (0.8.0)
httpclient (2.8.3)
@@ -87,7 +88,9 @@ GEM
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.4.0)
- nokogiri (1.15.2-x86_64-darwin)
+ nokogiri (1.13.10-arm64-darwin)
+ racc (~> 1.4)
+ nokogiri (1.13.10-x86_64-darwin)
racc (~> 1.4)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
@@ -101,10 +104,12 @@ GEM
rb-inotify (0.10.1)
ffi (~> 1.0)
rexml (3.2.5)
- rouge (4.1.2)
+ rouge (3.30.0)
ruby2_keywords (0.0.5)
safe_yaml (1.0.5)
- sass-embedded (1.62.1-x86_64-darwin)
+ sass-embedded (1.58.3-arm64-darwin)
+ google-protobuf (~> 3.21)
+ sass-embedded (1.58.3-x86_64-darwin)
google-protobuf (~> 3.21)
terminal-table (3.0.2)
unicode-display_width (>= 1.1.1, < 3)
@@ -119,7 +124,7 @@ GEM
webrick (1.8.1)
PLATFORMS
- ruby
+ arm64-darwin-23
x86_64-darwin-19
x86_64-darwin-20
@@ -141,4 +146,4 @@ DEPENDENCIES
wdm (~> 0.1.0)
BUNDLED WITH
- 2.2.18
+ 2.4.5
diff --git a/src/connections/auto-instrumentation/index.md b/src/connections/auto-instrumentation/index.md
index e90e23bb9e..01259e427d 100644
--- a/src/connections/auto-instrumentation/index.md
+++ b/src/connections/auto-instrumentation/index.md
@@ -1,6 +1,25 @@
---
title: Auto-Instrumentation
hidden: true
+sources:
+ - name: Android
+ url: /connections/auto-instrumentation/kotlin-setup/
+ logo:
+ url: https://cdn.filepicker.io/api/file/9BoiIqVRFmsAuBbMMy9D
+ mark:
+ url: https://cdn.filepicker.io/api/file/9BoiIqVRFmsAuBbMMy9D
+ - name: Apple
+ url: /connections/auto-instrumentation/swift-setup/
+ logo:
+ url: https://cdn.filepicker.io/api/file/qWgSP5cpS7eeW2voq13u
+ mark:
+ url: https://cdn.filepicker.io/api/file/qWgSP5cpS7eeW2voq13u
+ - name: Web
+ url: /connections/auto-instrumentation/web-setup/
+ logo:
+ url: https://cdn.filepicker.io/api/file/aRgo4XJQZausZxD4gZQq
+ mark:
+ url: https://cdn.filepicker.io/api/file/aRgo4XJQZausZxD4gZQq
---
Auto-Instrumentation simplifies tracking in your websites and apps by eliminating the need for a traditional Segment instrumentation.
@@ -29,10 +48,35 @@ Some Auto-Instrumentation advantages include:
## How it works
-After you [integrate the Analytics SDK and Signals SDK into your application](/docs/connections/auto-instrumentation/setup/), Segment begins to passively monitor user activity like button clicks, page navigation, and network data. Segment captures these events as "signals" and sends them to your Auto-Instrumentation source in real time.
+Once you integrate the Analytics SDK and Signals SDK into your website or application, Segment begins to passively monitor user activity like button clicks, page navigation, and network data. Segment captures these events as "signals" and sends them to your Auto-Instrumentation source in real time.
In Segment, the Auto-Instrumentation source lets you view raw signals. You can then [use this data to create detailed analytics events](/docs/connections/auto-instrumentation/configuration/) based on those signals, enriching your insights into user behavior and application performance.
+## Setup Guides
+
+
+
+
+ {% assign category = "source" %}
+ {% assign resources = page.sources %}
+ {% for resource in resources %}
+
+ {% endfor %}
+
+
+
+
## Privacy
Auto-Instrumentation removes personally identifiable information (PII) from breadcrumbs before they get sent to Segment. No user data is visible to Segment.
diff --git a/src/connections/auto-instrumentation/setup.md b/src/connections/auto-instrumentation/kotlin-setup.md
similarity index 69%
rename from src/connections/auto-instrumentation/setup.md
rename to src/connections/auto-instrumentation/kotlin-setup.md
index 841aefc31a..09b6e62c5e 100644
--- a/src/connections/auto-instrumentation/setup.md
+++ b/src/connections/auto-instrumentation/kotlin-setup.md
@@ -3,7 +3,7 @@ title: Auto-Instrumentation Setup
hidden: true
---
-This guide outlines the steps required to set up the Signals SDK in your applications using Swift or Kotlin.
+This guide outlines the steps required to set up the Signals SDK in your Android OS applications using Kotlin.
You'll learn how to add Auto-Instrumentation sources, integrate dependencies, and ensure that your setup captures and processes data as intended.
@@ -25,61 +25,7 @@ You'll first need to add a source and copy its write key:
## Step 2: Add dependencies and initialization code
-Next, you'll need to add the Signals SDKs to your Swift and Kotlin development environments.
-
-### Swift
-
-Follow these steps to integrate the Signals SDK into your Swift application:
-
-1. Use Swift Package Manager to add the Signals SDK from the following repository:
-
- ```zsh
- https://github.com/segmentio/Signals-swift.git
- ```
-
-2. Add the initialization code:
-
- ```swift
- // Configure Analytics with your settings
- {... ....}
-
- // Set up the Signals SDK configuration
- let config = Signals.Configuration(
- writeKey: "", // Replace with the write key you previously copied
- maximumBufferSize: 100,
- useSwiftUIAutoSignal: true,
- useNetworkAutoSignal: true
- )
-
- // Locate and set the fallback JavaScript file for edge functions
- let fallbackURL = Bundle.main.url(forResource: "MyEdgeFunctions", withExtension: "js")
-
- // Apply the configuration and add the Signals plugin
- Signals.shared.useConfiguration(config)
- Analytics.main.add(plugin: LivePlugins(fallbackFileURL: fallbackURL))
- Analytics.main.add(plugin: Signals.shared)
- ```
-
-Verify that you replaced `` with the actual write key you copied in Step 1.
-
-#### SwiftUI projects
-
-If your app is written in SwiftUI, you'll need to add a `TypeAlias.swift` file to your project that captures interaction and navigation Signals, like in this example:
-
-```swift
-import Foundation
-import Signals
-
-typealias Button = SignalButton
-typealias NavigationStack = SignalNavigationStack
-typealias NavigationLink = SignalNavigationLink
-typealias TextField = SignalTextField
-typealias SecureField = SignalSecureField
-```
-
-### Kotlin
-
-Follow these steps to integrate the Signals SDK into your Kotlin application:
+Next, you'll need to add the Signals SDKs to your Kotlin application.
1. Update your module’s Gradle build file to add the right dependencies:
diff --git a/src/connections/auto-instrumentation/swift-setup.md b/src/connections/auto-instrumentation/swift-setup.md
new file mode 100644
index 0000000000..972681c683
--- /dev/null
+++ b/src/connections/auto-instrumentation/swift-setup.md
@@ -0,0 +1,92 @@
+---
+title: Auto-Instrumentation Setup
+hidden: true
+---
+
+This guide outlines the steps required to set up the Signals SDK in your Apple OS applications using Swift.
+
+You'll learn how to add Auto-Instrumentation sources, integrate dependencies, and ensure that your setup captures and processes data as intended.
+
+> info "Auto-Instrumentation Pilot"
+> Auto-Instrumentation is currently in pilot and is governed by Segment's [First Access and Beta Preview Terms](https://www.twilio.com/en-us/legal/tos){:target="_blank"}. Segment doesn't recommend Auto-Instrumentation for use in a production environment, as Segment is actively iterating on and improving the user experience.
+
+> success "Enable Auto-Instrumentation"
+> To enable Auto-Instrumentation in your Segment workspace, reach out to your dedicated account manager.
+
+## Step 1: Add a source and get its write key
+
+You'll first need to add a source and copy its write key:
+
+1. In your Segment workspace, navigate to **Connections > Auto-Instrumentation** and click **Add source**.
+2. Select a source, give the source a name, and click **Save**.
+3. Return to **Connections > Sources** to view your sources.
+4. In the **My sources** table, find and click the new source you just set up.
+5. In the **Initialize the Client** section, look for and copy the `writeKey` displayed in the code block.
+
+## Step 2: Add dependencies and initialization code
+
+Next, you'll need to add the Signals SDKs to your Swift application.
+
+1. Use Swift Package Manager to add the Signals SDK from the following repository:
+
+ ```zsh
+ https://github.com/segmentio/Signals-swift.git
+ ```
+
+2. Add the initialization code:
+
+ ```swift
+ // Configure Analytics with your settings
+ {... ....}
+
+ // Set up the Signals SDK configuration
+ let config = Signals.Configuration(
+ writeKey: "", // Replace with the write key you previously copied
+ maximumBufferSize: 100,
+ useSwiftUIAutoSignal: true,
+ useNetworkAutoSignal: true
+ )
+
+ // Locate and set the fallback JavaScript file for edge functions
+ let fallbackURL = Bundle.main.url(forResource: "MyEdgeFunctions", withExtension: "js")
+
+ // Apply the configuration and add the Signals plugin
+ Signals.shared.useConfiguration(config)
+ Analytics.main.add(plugin: LivePlugins(fallbackFileURL: fallbackURL))
+ Analytics.main.add(plugin: Signals.shared)
+ ```
+
+Verify that you replaced the empty `writeKey` value with the actual write key you copied in Step 1.
+
+#### SwiftUI projects
+
+If your app is written in SwiftUI, you'll need to add a `TypeAlias.swift` file to your project that captures interaction and navigation Signals, like in this example:
+
+```swift
+import Foundation
+import Signals
+
+typealias Button = SignalButton
+typealias NavigationStack = SignalNavigationStack
+typealias NavigationLink = SignalNavigationLink
+typealias TextField = SignalTextField
+typealias SecureField = SignalSecureField
+```
+
+## Step 3: Verify and deploy events
+
+Next, you'll need to verify signal emission and [create rules](/docs/connections/auto-instrumentation/configuration/#example-rule-implementations) to convert those signals into events:
+
+1. In your Segment workspace, return to **Connections > Auto-Instrumentation** and click on the new source you created.
+2. Verify that signals appear as expected on the dashboard.
+
+ ![Signals successfully appearing in the Segment UI](images/autoinstrumentation_signals.png "Signals successfully appearing in the Segment UI")
+
+3. Click **Create Rules**.
+4. In the Rules Editor, add a rule that converts signal data into an event.
+5. Click **Preview**, then click **Save & Deploy**.
+
+Segment displays `Rule updated successfully` to verify that it saved your rule.
+
+## Next steps
+
+This guide walked you through initial Signals SDK/Auto-Instrumentation setup. Next, read the [Auto-Instrumentation Signals Implementation Guide](/docs/connections/auto-instrumentation/configuration/), which dives deeper into Signals and offers example rules.
diff --git a/src/connections/auto-instrumentation/web-setup.md b/src/connections/auto-instrumentation/web-setup.md
new file mode 100644
index 0000000000..b2329f78e0
--- /dev/null
+++ b/src/connections/auto-instrumentation/web-setup.md
@@ -0,0 +1,109 @@
+---
+title: Auto-Instrumentation Setup
+hidden: true
+---
+
+This guide outlines the steps required to set up the Signals SDK in your JavaScript website.
+
+You'll learn how to add Auto-Instrumentation sources, integrate dependencies, and ensure that your setup captures and processes data as intended.
+
+> info "Auto-Instrumentation Pilot"
+> Auto-Instrumentation is currently in pilot and is governed by Segment's [First Access and Beta Preview Terms](https://www.twilio.com/en-us/legal/tos){:target="_blank"}. Segment doesn't recommend Auto-Instrumentation for use in a production environment, as Segment is actively iterating on and improving the user experience.
+
+> success "Enable Auto-Instrumentation"
+> To enable Auto-Instrumentation in your Segment workspace, reach out to your dedicated account manager.
+
+## Step 1: Add a source and get its write key
+
+You'll first need to add a source and copy its write key:
+
+1. In your Segment workspace, navigate to **Connections > Auto-Instrumentation** and click **Add source**.
+2. Select a source, give the source a name, and click **Save**.
+3. Return to **Connections > Sources** to view your sources.
+4. In the **My sources** table, find and click the new source you just set up.
+5. In the **Initialize the Client** section, look for and copy the `writeKey` displayed in the code block.
+
+## Step 2: Add dependencies and initialization code
+
+Next, you'll need to add the Signals SDKs to your web environment.
+
+Follow these steps to integrate the Signals SDK into your website:
+
+1. Add the Signals SDK to your project:
+
+```bash
+ # npm
+ npm install @segment/analytics-signals
+ # yarn
+ yarn add @segment/analytics-signals
+ # pnpm
+ pnpm install @segment/analytics-signals
+```
+
+2. Add the initialization code:
+
+```ts
+// analytics.js/ts
+import { AnalyticsBrowser } from '@segment/analytics-next'
+import { SignalsPlugin } from '@segment/analytics-signals'
+
+const analytics = new AnalyticsBrowser()
+const signalsPlugin = new SignalsPlugin()
+analytics.register(signalsPlugin)
+
+analytics.load({
+ writeKey: ''
+})
+```
+
+Verify that you replaced the empty `writeKey` value with the actual write key you copied in Step 1.
+
+3. Build and run your app.
+
+## Step 3: Verify and deploy events
+
+Next, you'll need to verify signal emission and [create rules](/docs/connections/auto-instrumentation/configuration/#example-rule-implementations) to convert those signals into events:
+
+1. In your Segment workspace, return to **Connections > Auto-Instrumentation** and click on the new source you created.
+2. Verify that signals appear as expected on the dashboard.
+
+ ![Signals successfully appearing in the Segment UI](images/autoinstrumentation_signals.png "Signals successfully appearing in the Segment UI")
+
+3. Click **Create Rules**.
+4. In the Rules Editor, add a rule that converts signal data into an event.
+5. Click **Preview**, then click **Save & Deploy**.
+
+Segment displays `Rule updated successfully` to verify that it saved your rule.
+
+### Debugging
+#### Enable debug mode
+Values sent to the Signals API are redacted by default. To disable redaction, add the `segment_signals_debug` query parameter to your URL, which sets a local storage key:
+```
+https://my-website.com?segment_signals_debug=true
+```
+To turn redaction back on, set the parameter to `false`:
+```
+https://my-website.com?segment_signals_debug=false
+```
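The debug flag above can be sketched as a small helper that reads the query parameter (a hypothetical illustration; the Signals SDK handles this internally):

```javascript
// Hypothetical sketch of how a flag like segment_signals_debug
// could be parsed from the page URL.
function isSignalsDebugEnabled(href) {
  const params = new URL(href).searchParams
  return params.get('segment_signals_debug') === 'true'
}

console.log(isSignalsDebugEnabled('https://my-website.com?segment_signals_debug=true'))  // true
console.log(isSignalsDebugEnabled('https://my-website.com?segment_signals_debug=false')) // false
```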
+
+### Advanced
+
+#### Listening to signals
+```ts
+const signalsPlugin = new SignalsPlugin()
+signalsPlugin.onSignal((signal) => console.log(signal))
+```
+
+#### Emitting signals
+```ts
+const signalsPlugin = new SignalsPlugin()
+signalsPlugin.addSignal({
+ type: 'userDefined',
+ data: { foo: 'bar' }
+})
+```
+
+## Next steps
+
+This guide walked you through initial Signals SDK/Auto-Instrumentation setup. Next, read the [Auto-Instrumentation Signals Implementation Guide](/docs/connections/auto-instrumentation/configuration/), which dives deeper into Signals and offers example rules.
From 0854956e72a11eab90c62aef3873e617d7723ff4 Mon Sep 17 00:00:00 2001
From: Alan Charles <50601149+alanjcharles@users.noreply.github.com>
Date: Tue, 17 Sep 2024 14:13:08 -0400
Subject: [PATCH 05/35] add config options to each setup guide [netlify-build]
---
.../auto-instrumentation/configuration.md | 36 ++-----------------
.../auto-instrumentation/kotlin-setup.md | 17 ++++++++-
.../auto-instrumentation/swift-setup.md | 22 +++++++++++-
.../auto-instrumentation/web-setup.md | 21 ++++++++++-
4 files changed, 60 insertions(+), 36 deletions(-)
diff --git a/src/connections/auto-instrumentation/configuration.md b/src/connections/auto-instrumentation/configuration.md
index b7ed3975c7..d7fe863e81 100644
--- a/src/connections/auto-instrumentation/configuration.md
+++ b/src/connections/auto-instrumentation/configuration.md
@@ -3,48 +3,18 @@ title: Generate Events from Signals
hidden: true
---
-This guide is a reference to configuring, generating, and using signals in the Signals SDK with Auto-Instrumentation. On this page, you'll find details on:
+This guide details how to use signals, and their associated data, generated in one of the Signals SDKs with the Auto-Instrumentation dashboard in your Segment workspace. On this page, you'll find details on:
-- Setting up and managing signal types in the Signals SDK
- Creating custom rules to capture and translate signals into actionable analytics events
- Example rules that you can use as a basis for further customization
-This guide assumes that you've already added the Signals SDK to your application. If you haven't yet, see the [Auto-Instrumentation Setup](/docs/connections/auto-instrumentation/setup/) guide for initial setup.
+This guide assumes that you've already added the Signals SDK to your application. If you haven't yet, see the [Auto-Instrumentation Setup](/docs/connections/auto-instrumentation/) guide for initial setup.
> info "Auto-Instrumentation Pilot"
> Auto-Instrumentation is currently in pilot and is governed by Segment's [First Access and Beta Preview Terms](https://www.twilio.com/en-us/legal/tos){:target="_blank"}. Segment doesn't recommend Auto-Instrumentation for use in a production environment, as Segment is actively iterating on and improving the user experience.
> success "Enable Auto-Instrumentation"
-> To enable Auto-Instrumentation in your Segment workspace, reach out to your dedicated account manager.
-
-## Signals configuration
-
-Using the Signals Configuration object, you can control the destination, frequency, and types of signals that Segment automatically tracks within your application. The following tables detail the configuration options for both Signals-Swift and Signals-Kotlin.
-
-### Signals-Swift
-
-| `Option` | Required | Value | Description |
-| ---------------------- | -------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| `writeKey` | Yes | String | Source write key |
-| `maximumBufferSize` | No | Integer | The number of signals to be kept for JavaScript inspection. This buffer is first-in, first-out. Default is `1000`. |
-| `relayCount` | No | Integer | Relays signals to Segment every Xth event. Default is `20`. |
-| `relayInterval` | No | TimeInterval | Relays signals to segment every X seconds. Default is `60`. |
-| `broadcasters` | No | `SignalBroadcaster` | An array of broadcasters. These objects forward signal data to their destinations, like `WebhookBroadcaster` or `DebugBroadcaster` writing to the developer console. Default is `SegmentBroadcaster`. |
-| `useUIKitAutoSignal` | No | Bool | Tracks UIKit component interactions automatically. Default is `false`. |
-| `useSwiftUIAutoSignal` | No | Bool | Tracks SwiftUI component interactions automatically. Default is `false`. |
-| `useNetworkAutoSignal` | No | Bool | Tracks network events automatically. Default is `false`. |
-| `allowedNetworkHosts` | No | Array | An array of allowed network hosts. |
-| `blockedNetworkHosts` | No | Array | An array of blocked network hosts. |
-
-
-### Signals-Kotlin
-
-| `Option` | Required | Value | Description |
-| ------------------- | -------- | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `writeKey` | Yes | String | Source write key |
-| `maximumBufferSize` | No | Integer | The number of signals to be kept for JavaScript inspection. This buffer is first-in, first-out. Default is `1000`. |
-| `broadcastInterval` | No | Integer | Broadcasts signals to Segment every X event. Default is `60`. |
-| `broadcasters` | No | `List` | An array of broadcasters. These objects forward signal data to their destinations, like `WebhookBroadcaster` or `DebugBroadcaster` writing to the developer console. Default is `SegmentBroadcaster`. |
+> To enable Auto-Instrumentation in your Segment workspace, reach out to your dedicated account manager.
## Converting signals to events
diff --git a/src/connections/auto-instrumentation/kotlin-setup.md b/src/connections/auto-instrumentation/kotlin-setup.md
index 09b6e62c5e..cc6b3114e1 100644
--- a/src/connections/auto-instrumentation/kotlin-setup.md
+++ b/src/connections/auto-instrumentation/kotlin-setup.md
@@ -44,7 +44,10 @@ Next, you'll need to add the Signals SDKs to your Kotlin application.
}
```
-2. Add the following code to your application to initialize the Signals SDK:
+2. Add the initialization code and configuration options:
+
+> success ""
+> See [configuration options](#configuration-options) for a complete list.
```kotlin
// Configure Analytics with your settings
@@ -89,6 +92,18 @@ Next, you'll need to verify signal emission and [create rules](/docs/connections
Segment displays `Rule updated successfully` to verify that it saved your rule.
+## Configuration Options
+
+Using the Signals Configuration object, you can control the destination, frequency, and types of signals that Segment automatically tracks within your application. The following table details the configuration options for Signals-Kotlin.
+
+| `Option` | Required | Value | Description |
+| ------------------- | -------- | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `writeKey` | Yes | String | Source write key |
+| `maximumBufferSize` | No | Integer | The number of signals to be kept for JavaScript inspection. This buffer is first-in, first-out. Default is `1000`. |
+| `broadcastInterval` | No | Integer | Broadcasts signals to Segment every X events. Default is `60`. |
+| `broadcasters` | No | `List` | An array of broadcasters. These objects forward signal data to their destinations, like `WebhookBroadcaster` or `DebugBroadcaster` writing to the developer console. Default is `SegmentBroadcaster`. |
+
+
## Next steps
This guide walked you through initial Signals SDK/Auto-Instrumentation setup. Next, read the [Auto-Instrumentation Signals Implementation Guide](/docs/connections/auto-instrumentation/configuration/), which dives deeper into Signals and offers example rules.
diff --git a/src/connections/auto-instrumentation/swift-setup.md b/src/connections/auto-instrumentation/swift-setup.md
index 972681c683..b693722ea7 100644
--- a/src/connections/auto-instrumentation/swift-setup.md
+++ b/src/connections/auto-instrumentation/swift-setup.md
@@ -33,7 +33,10 @@ Next, you'll need to add the Signals SDKs to your Swift applicatiion.
https://github.com/segmentio/Signals-swift.git
```
-2. Add the initialization code:
+2. Add the initialization code and configuration options:
+
+> success ""
+> See [configuration options](#configuration-options) for a complete list.
```swift
// Configure Analytics with your settings
@@ -87,6 +90,23 @@ Next, you'll need to verify signal emission and [create rules](/docs/connections
Segment displays `Rule updated successfully` to verify that it saved your rule.
+## Configuration Options
+
+Using the Signals Configuration object, you can control the destination, frequency, and types of signals that Segment automatically tracks within your application. The following table details the configuration options for Signals-Swift.
+
+| `Option` | Required | Value | Description |
+| ---------------------- | -------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `writeKey` | Yes | String | Source write key |
+| `maximumBufferSize` | No | Integer | The number of signals to be kept for JavaScript inspection. This buffer is first-in, first-out. Default is `1000`. |
+| `relayCount` | No | Integer | Relays signals to Segment every Xth event. Default is `20`. |
+| `relayInterval` | No | TimeInterval | Relays signals to Segment every X seconds. Default is `60`. |
+| `broadcasters` | No | `SignalBroadcaster` | An array of broadcasters. These objects forward signal data to their destinations, like `WebhookBroadcaster` or `DebugBroadcaster` writing to the developer console. Default is `SegmentBroadcaster`. |
+| `useUIKitAutoSignal` | No | Bool | Tracks UIKit component interactions automatically. Default is `false`. |
+| `useSwiftUIAutoSignal` | No | Bool | Tracks SwiftUI component interactions automatically. Default is `false`. |
+| `useNetworkAutoSignal` | No | Bool | Tracks network events automatically. Default is `false`. |
+| `allowedNetworkHosts` | No | Array | An array of allowed network hosts. |
+| `blockedNetworkHosts` | No | Array | An array of blocked network hosts. |
+
## Next steps
This guide walked you through initial Signals SDK/Auto-Instrumentation setup. Next, read the [Auto-Instrumentation Signals Implementation Guide](/docs/connections/auto-instrumentation/configuration/), which dives deeper into Signals and offers example rules.
diff --git a/src/connections/auto-instrumentation/web-setup.md b/src/connections/auto-instrumentation/web-setup.md
index b2329f78e0..7e39c40358 100644
--- a/src/connections/auto-instrumentation/web-setup.md
+++ b/src/connections/auto-instrumentation/web-setup.md
@@ -40,7 +40,10 @@ Follow these steps to integrate the Signals SDK into your website:
pnpm install @segment/analytics-signals
```
-2. Add the initialization code:
+2. Add the initialization code and configuration options:
+
+> success ""
+> See [configuration options](#configuration-options) for a complete list.
```ts
// analytics.js/ts
@@ -104,6 +107,22 @@ signalsPlugin.addSignal({
})
```
+## Configuration Options
+
+Using the Signals Configuration object, you can control the destination, frequency, and types of signals that Segment automatically tracks within your application. The following table details the configuration options for the web Signals SDK.
+
+| `Option` | Required | Value | Description |
+| ------------------- | -------- | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `writeKey` | Yes | string | Source write key |
+| `maxBufferSize` | No | number | The number of signals to be kept for JavaScript inspection. This buffer is first-in, first-out. Default is `1000`. |
+| `processSignal` | No | string | Override the default signal processing function from the edge function. If this is set, the edge function is not used. |
+| `enableDebugLogging` | No | boolean | Enable debug logs. |
+| `disableSignalRedaction` | No | boolean | Disable default Signal data redaction. |
+| `apiHost` | No | string | Override the default signals API host. Default is `signals.segment.io/v1`. |
+| `functionHost` | No | string | Override the default edge function host. Default is `cdn.edgefn.segment.com`. |
+| `flushAt` | No | number | The number of signals to flush at once when sending to the signals API. Default is `5`. |
+| `flushInterval` | No | number | The number of milliseconds to wait before flushing signals to the API. Default is `2000`. |
+
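As a sketch, these options are passed to the `SignalsPlugin` constructor when you initialize the SDK. The write key and the option values below are placeholders, not recommendations; adjust them to your setup:

```ts
import { AnalyticsBrowser } from '@segment/analytics-next'
import { SignalsPlugin } from '@segment/analytics-signals'

// Placeholder values; see the table above for each option's meaning.
const signalsPlugin = new SignalsPlugin({
  maxBufferSize: 1000,      // keep up to 1000 signals for inspection
  enableDebugLogging: true, // log signal activity while testing
  flushAt: 5,               // send signals in batches of 5
  flushInterval: 2000,      // or after 2000 ms, whichever comes first
})

const analytics = new AnalyticsBrowser()
analytics.register(signalsPlugin)
analytics.load({ writeKey: '<YOUR_WRITE_KEY>' })
```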
## Next steps
This guide walked you through initial Signals SDK/Auto-Instrumentation setup. Next, read the [Auto-Instrumentation Signals Implementation Guide](/docs/connections/auto-instrumentation/configuration/), which dives deeper into Signals and offers example rules.
From 5fa0154fd595ca6cbedc9b6a043865db9e3f5f2d Mon Sep 17 00:00:00 2001
From: AnnieZhao17
Date: Mon, 23 Sep 2024 12:39:44 -0700
Subject: [PATCH 06/35] Update postgres and redshift instructions for clarity
---
src/connections/aws-privatelink.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/src/connections/aws-privatelink.md b/src/connections/aws-privatelink.md
index 851f5470b8..4cbcd6e687 100644
--- a/src/connections/aws-privatelink.md
+++ b/src/connections/aws-privatelink.md
@@ -43,10 +43,10 @@ If any updates are made to the Availability Zones (AZs) enabled for your NLB, pl
### Configure PrivateLink for RDS Postgres
1. Create a Network Load Balancer VPC endpoint service using the instructions in the [Create a service powered by AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html){:target="_blank”} documentation.
-2. Reach out to your Customer Success Manager (CSM) for more details about Segment's AWS principal.
+2. Reach out to your Customer Success Manager (CSM) for details about Segment's AWS principal.
3. Add the Segment AWS principal as an “Allowed Principal” to consume the Network Load Balancer VPC endpoint service you created in step 1.
-4. Reach out to your CSM and provide them with the Service name for the service that you created above. Segment's engineering team provisions a VPC endpoint for the service in the Segment Edge VPC.
-5. After creating the VPC endpoint, Segment provides you with private DNS so you can update the **Host** in your Segment app settings or create a new Postgres integration.
The following RDS Postgres integrations support PrivateLink:
+4. Reach out to your CSM and provide them with the Service Name for the service that you created above. Segment's engineering team provisions a VPC endpoint for the service in the Segment Edge VPC.
+5. Segment provides you with the VPC endpoint's private DNS name. Use the DNS name as the **Host** setting to update or create new Postgres integrations in the Segment app.
The following RDS Postgres integrations support PrivateLink:
- [RDS Postgres storage destination](/docs/connections/storage/catalog/postgres/)
- [RDS Postgres Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/postgres-setup/)
@@ -61,8 +61,8 @@ If any updates are made to the Availability Zones (AZs) enabled for your NLB, pl
Implement Segment's PrivateLink integration by taking the following steps:
1. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will share information with you about Segment’s Edge account and VPC.
2. After you receive the Edge account ID and VPC ID, [grant cluster access to Segment's Edge account and VPC](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc-console-grantor.html){:target="_blank”}.
-3. Reach back out to your CSM and provide them with the Cluster identifier for your cluster and your AWS account ID.
-4. Segment creates a Redshift managed VPC endpoint within the Segment Redshift subnet on your behalf, which creates a PrivateLink Endpoint URL. Segment then provides you with the internal PrivateLink Endpoint URL.
-5. After Segment provides you with the URL, use it to update or create new Redshift integrations. The following integrations support PrivateLink:
+3. Reach back out to your CSM and provide them with the Cluster Identifier for your cluster and your AWS account ID.
+4. Segment's engineering team creates a Redshift managed VPC endpoint within the Segment Redshift subnet on your behalf, which creates a PrivateLink Endpoint URL. Segment then provides you with the internal PrivateLink Endpoint URL.
+5. Use the provided PrivateLink Endpoint URL as the **Hostname** setting to update or create new Redshift integrations in the Segment app. The following integrations support PrivateLink:
- [Redshift storage destination](/docs/connections/storage/catalog/redshift/)
- [Redshift Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/redshift-setup/)
From cfba11de313e035fe22844f86d06b77894a995f1 Mon Sep 17 00:00:00 2001
From: AnnieZhao17
Date: Mon, 23 Sep 2024 12:47:30 -0700
Subject: [PATCH 07/35] [netlify-build]
From bcf2f32382daaba561b77102cfa2a325dab05cee Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Mon, 23 Sep 2024 16:29:06 -0400
Subject: [PATCH 08/35] init model/mapping alerting sections
---
src/connections/reverse-etl/manage-retl.md | 52 ++++++++++++++++++++--
1 file changed, 48 insertions(+), 4 deletions(-)
diff --git a/src/connections/reverse-etl/manage-retl.md b/src/connections/reverse-etl/manage-retl.md
index b03b681ad5..c0235c81a3 100644
--- a/src/connections/reverse-etl/manage-retl.md
+++ b/src/connections/reverse-etl/manage-retl.md
@@ -53,9 +53,17 @@ To reset a sync:
You can choose to replay syncs. To replay a specific sync, contact [friends@segment.com](mailto:friends@segment.com). Keep in mind that triggering a replay resyncs all records for a given sync.
## Alerting
-You can opt in to receive email, Slack, and in-app alerts about Reverse ETL sync failures and partial successes.
+You can opt in to receive email, Slack, and in-app alerts about Reverse ETL sync failures, spikes in data ingested from your model, and fluctuations in the volume of events successfully delivered from your mapping.
-To subscribe to alerts:
+
+
+The notification channels that you select for one alert will apply to all alerts in your workspace.
+
+> success ""
+> If you opted to receive notifications by email, you can click **View active email addresses** to see the email addresses that are currently signed up to receive notifications.
+
+### Failed or partially successful syncs
+To subscribe to alerts for a failed or partially successful sync:
1. Navigate to **Settings > User Preferences**.
2. Select **Reverse ETL** in the **Activity Notifications** section.
3. Click the Reverse ETL sync status that you'd like to receive notifications for. You can select one or more of the following sync statuses:
@@ -67,5 +75,41 @@ To subscribe to alerts:
- **Enable in-app notifications**: Select this option to see an in-app notification.
5. Click **Create alert**.
-> success ""
-> If you opted to receive notifications by email, you can click **View active email addresses** to see the email addresses that are currently signed up to receive notifications.
+### Model-level volume spike alerts
+
+You can create an alert that notifies you when the volume of events received by your source in the last 24 hours changes beyond a set percentage. For example, if you set a change percentage of 4% and your source received 100 events over the first 24 hours, Segment would notify you the following day if your source ingested fewer than 96 or more than 104 events.
+
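The threshold arithmetic above can be sketched as follows. This is a minimal illustration of the bounds math, not Segment's implementation, and the function name is hypothetical:

```typescript
// Hypothetical helper: compute the bounds outside of which a model-level
// volume spike alert would fire, given a 24-hour event baseline and the
// configured "change in event volume" percentage.
function volumeAlertBounds(baseline: number, changePct: number): [number, number] {
  const delta = baseline * (changePct / 100)
  return [baseline - delta, baseline + delta]
}

// A 4% threshold on a 100-event baseline alerts below 96 or above 104 events.
const [low, high] = volumeAlertBounds(100, 4)
console.log(low, high) // 96 104
```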
+To receive a volume spike alert in a Slack channel, you must first create a Slack webhook. For more information about Slack webhooks, see the [Sending messages using incoming webhooks](https://api.slack.com/messaging/webhooks){:target="_blank”} documentation.
+
+1. Navigate to the model you'd like to create an alert for and select the **Alerts** tab.
+2. Click **Create alert**.
+3. Set a *change in event volume* percentage, or the percentage of change in event volume from your source that would prompt an alert.
+4. Select one or more of the following notification channels:
+ - **Email**: Select this channel to receive emailed alerts at the email address that you use to sign in to Segment.
+ - **Slack notification**: Enter a Webhook URL and a Slack channel name to receive alerts in a Slack channel.
+ - **In-app notifications**: Select this to receive notifications in the Segment app. To view your notifications, select the bell next to your user icon in the Segment app.
+5. Toggle the **Enable alert** setting on and click **Create**.
+
+To edit or disable your alert, navigate to your model's Alerts tab and select the Actions menu.
+
+
+
+### Mapping-level successful delivery rate fluctuations
+
+You can create an alert that notifies you when the rate of events successfully delivered from your mapping over the last 24 hours falls below a percentage that you set. For example, if you set a threshold of 99%, Segment notifies you if your destination had a successful delivery rate of 98% or below.
+
+To receive a successful delivery rate fluctuation alert in a Slack channel, you must first create a Slack webhook. For more information about Slack webhooks, see Slack's [Sending messages using incoming webhooks](https://api.slack.com/messaging/webhooks){:target="_blank”} documentation.
+
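For context on what the webhook itself involves: a Slack incoming webhook is an HTTPS endpoint that accepts a POSTed JSON payload with a `text` field. The sketch below builds such a payload; the URL, function name, and message wording are placeholders, not Segment's actual alert format:

```typescript
// Placeholder webhook URL; yours comes from Slack's incoming-webhook setup.
const webhookUrl = 'https://hooks.slack.com/services/T000/B000/XXXX'

// Build the JSON body a Slack incoming webhook expects (hypothetical message text).
function buildAlertPayload(mapping: string, ratePct: number): string {
  return JSON.stringify({
    text: `Reverse ETL alert: successful delivery rate for ${mapping} fell to ${ratePct}%`,
  })
}

// Delivery is a plain HTTP POST of that body:
// await fetch(webhookUrl, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildAlertPayload('my-mapping', 98),
// })
```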
+To subscribe to alerts for successful delivery fluctuations at the mapping level:
+1. Navigate to your intended mapping and select the **Alerts** tab.
+2. Click **Create alert**.
+3. Set an *alert threshold*, or the percentage of successfully delivered events that would prompt an alert.
+4. Select one or more of the following notification channels:
+ - **Email**: Select this channel to receive emailed alerts at the email address that you use to sign in to Segment.
+ - **Slack notification**: Enter a Webhook URL and a Slack channel name to receive alerts in a Slack channel.
+ - **In-app notifications**: Select this to receive notifications in the Segment app. To view your notifications, select the bell next to your user icon in the Segment app.
+5. Toggle the **Enable alert** setting on and click **Create**.
+
+To edit or disable your alert, navigate to your mapping's Alerts tab and select the Actions menu.
+
+
From 1026bc99db9748c1ea6881737e11276f254224c8 Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Tue, 24 Sep 2024 09:25:52 -0400
Subject: [PATCH 09/35] Update manage-retl.md
---
src/connections/reverse-etl/manage-retl.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/connections/reverse-etl/manage-retl.md b/src/connections/reverse-etl/manage-retl.md
index c0235c81a3..5e739a9f8d 100644
--- a/src/connections/reverse-etl/manage-retl.md
+++ b/src/connections/reverse-etl/manage-retl.md
@@ -59,9 +59,6 @@ You can opt in to receive email, Slack, and in-app alerts about Reverse ETL sync
The notification channels that you select for one alert will apply to all alerts in your workspace.
-> success ""
-> If you opted to receive notifications by email, you can click **View active email addresses** to see the email addresses that are currently signed up to receive notifications.
-
### Failed or partially successful syncs
To subscribe to alerts for a failed or partially successful sync:
1. Navigate to **Settings > User Preferences**.
@@ -75,6 +72,9 @@ To subscribe to alerts for a failed or partially successful sync:
- **Enable in-app notifications**: Select this option to see an in-app notification.
5. Click **Create alert**.
+> success ""
+> If you opted to receive notifications by email, you can click **View active email addresses** to see the email addresses that are currently signed up to receive notifications.
+
### Model-level volume spike alerts
You can create an alert that notifies you when the volume of events received by your source in the last 24 hours changes beyond a set percentage. For example, if you set a change percentage of 4% and your source received 100 events over the first 24 hours, Segment would notify you the following day if your source ingested fewer than 96 or more than 104 events.
From 2af318fcf3acd7e39c2ad5924c8ae43ffa36929f Mon Sep 17 00:00:00 2001
From: AnnieZhao17
Date: Tue, 24 Sep 2024 15:20:36 -0700
Subject: [PATCH 10/35] Add Snowflake instructions [netlify-build]
---
src/connections/aws-privatelink.md | 27 +++++++++++++++++++++++----
1 file changed, 23 insertions(+), 4 deletions(-)
diff --git a/src/connections/aws-privatelink.md b/src/connections/aws-privatelink.md
index 4cbcd6e687..f9c6331e2d 100644
--- a/src/connections/aws-privatelink.md
+++ b/src/connections/aws-privatelink.md
@@ -7,7 +7,7 @@ title: Amazon Web Services PrivateLink
> info ""
> Segment's PrivateLink integration is currently in private beta and is governed by Segment’s [First Access and Beta Preview Terms](https://www.twilio.com/en-us/legal/tos){:target="_blank”}. Only warehouses located in regions `us-east-1`, `us-west-2`, or `eu-west-1` are eligible for PrivateLink. You might incur additional networking costs while using AWS PrivateLink.
-During the Private Beta, you can set up AWS PrivateLink for [Databricks](#databricks), [RDS Postgres](#rds-postgres), and [Redshift](#redshift).
+During the Private Beta, you can set up AWS PrivateLink for [Databricks](#databricks), [RDS Postgres](#rds-postgres), [Redshift](#redshift), and [Snowflake](#snowflake).
## Databricks
@@ -22,7 +22,7 @@ Before you can configure AWS PrivateLink for Databricks, complete the following
- Configure a [security group](https://docs.databricks.com/en/security/network/classic/customer-managed-vpc.html#security-groups){:target="_blank”} with bidirectional access to 0.0.0.0/0 and ports 443, 3306, 6666, 2443, and 8443-8451.
### Configure PrivateLink for Databricks
-To configure PrivateLink for Databricks:
+To implement Segment's PrivateLink integration for Databricks:
1. Follow the instructions in Databricks' [Enable private connectivity using AWS PrivateLink](https://docs.databricks.com/en/security/network/classic/privatelink.html){:target="_blank”} documentation. You must create a [back-end](https://docs.databricks.com/en/security/network/classic/privatelink.html#private-connectivity-overview){:target="_blank”} connection to integrate with Segment's front-end connection.
2. After you've configured a back-end connection for Databricks, request access to Segment's PrivateLink integration by reaching out to your Customer Success Manager (CSM).
3. Your CSM sets up a call with Segment R&D to continue the onboarding process.
@@ -34,7 +34,7 @@ The following Databricks integrations support PrivateLink:
## RDS Postgres
### Prerequisites
-Before you can configure AWS PrivateLink for RDS Postgres, complete the following prerequisites in your Databricks workspace:
+Before you can configure AWS PrivateLink for RDS Postgres, complete the following prerequisites:
- **Set up a Network Load Balancer (NLB) to route traffic to your Postgres database**: Segment recommends creating an NLB that has target group IP address synchronization, using a solution like AWS Lambda.
If any updates are made to the Availability Zones (AZs) enabled for your NLB, please let your CSM know so that Segment can update the AZs of your VPC endpoint.
- **Configure your NLB with one of the following settings**:
@@ -42,6 +42,7 @@ If any updates are made to the Availability Zones (AZs) enabled for your NLB, pl
- If you must enforce inbound rules on PrivateLink traffic, add an inbound rule that allows traffic belonging to Segment's PrivateLink/Edge CIDR: `10.0.0.0/8`
### Configure PrivateLink for RDS Postgres
+To implement Segment's PrivateLink integration for RDS Postgres:
1. Create a Network Load Balancer VPC endpoint service using the instructions in the [Create a service powered by AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html){:target="_blank”} documentation.
2. Reach out to your Customer Success Manager (CSM) for details about Segment's AWS principal.
3. Add the Segment AWS principal as an “Allowed Principal” to consume the Network Load Balancer VPC endpoint service you created in step 1.
@@ -58,7 +59,7 @@ If any updates are made to the Availability Zones (AZs) enabled for your NLB, pl
- **Your cluster is using a port within the ranges 5431-5455 or 8191-8215**: Clusters with cluster relocation enabled [might encounter an error if updated to include a port outside of this range](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html#:~:text=You%20can%20change%20to%20another%20port%20from%20the%20port%20range%20of%205431%2D5455%20or%208191%2D8215.%20(Don%27t%20change%20to%20a%20port%20outside%20the%20ranges.%20It%20results%20in%20an%20error.)){:target="_blank”}.
### Configure PrivateLink for Redshift
-Implement Segment's PrivateLink integration by taking the following steps:
+To implement Segment's PrivateLink integration for Redshift:
1. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will share information with you about Segment’s Edge account and VPC.
2. After you receive the Edge account ID and VPC ID, [grant cluster access to Segment's Edge account and VPC](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc-console-grantor.html){:target="_blank”}.
3. Reach back out to your CSM and provide them with the Cluster Identifier for your cluster and your AWS account ID.
@@ -66,3 +67,21 @@ Implement Segment's PrivateLink integration by taking the following steps:
5. Use the provided PrivateLink Endpoint URL as the **Hostname** setting to update or create new Redshift integrations in the Segment app. The following integrations support PrivateLink:
- [Redshift storage destination](/docs/connections/storage/catalog/redshift/)
- [Redshift Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/redshift-setup/)
+
+## Snowflake
+
+### Prerequisites
+Before you can configure AWS PrivateLink for Snowflake, complete the following prerequisites:
+- Your Snowflake account must be on the Business Critical [Edition](https://docs.snowflake.com/en/user-guide/intro-editions){:target="_blank”} or higher.
+- Your Snowflake account is hosted on the Amazon Web Services (AWS) [cloud platform](https://docs.snowflake.com/en/user-guide/intro-cloud-platforms){:target="_blank”}.
+
+### Configure PrivateLink for Snowflake
+To implement Segment's PrivateLink integration for Snowflake:
+1. Follow Snowflake's PrivateLink documentation to [enable AWS PrivateLink](https://docs.snowflake.com/en/user-guide/admin-security-privatelink#enabling-aws-privatelink){:target="_blank”} for your Snowflake account.
+2. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will provide you with Segment’s AWS Edge account ID.
+3. Create a Snowflake Support Case to authorize PrivateLink connections from Segment's AWS account ID as a third-party vendor to your Snowflake account.
+4. After Snowflake support authorizes Segment, call the [SYSTEM$GET_PRIVATELINK_CONFIG](https://docs.snowflake.com/en/sql-reference/functions/system_get_privatelink_config){:target="_blank”} function while using the Snowflake ACCOUNTADMIN role. Reach back out to your Segment CSM and provide them with the **privatelink-vpce-id** and **privatelink-account-url** values from the function output. Note the **privatelink-account-name** value for later use.
+5. Segment's engineering team creates a VPC endpoint on your behalf. Segment also creates a CNAME record to reroute Segment traffic to use your VPC endpoint. This ensures that Segment connections to your **privatelink-account-name** are made over PrivateLink.
+6. Your CSM notifies you that the setup on Segment's side is complete. Use your **privatelink-account-name** as the **Account** setting to update or create new Snowflake integrations in the Segment app. The following integrations support PrivateLink:
+ - [Snowflake storage destination](/docs/connections/storage/catalog/snowflake/)
+ - [Snowflake Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/snowflake-setup/)
From 2a3c2fdf6e6ec45864a74f3e73183c228cd1dde5 Mon Sep 17 00:00:00 2001
From: AnnieZhao17
Date: Tue, 24 Sep 2024 17:27:06 -0700
Subject: [PATCH 11/35] Update Databricks instructions [netlify-build]
---
src/connections/aws-privatelink.md | 48 ++++++++++++++++++------------
1 file changed, 29 insertions(+), 19 deletions(-)
diff --git a/src/connections/aws-privatelink.md b/src/connections/aws-privatelink.md
index f9c6331e2d..2736f47332 100644
--- a/src/connections/aws-privatelink.md
+++ b/src/connections/aws-privatelink.md
@@ -11,6 +11,10 @@ During the Private Beta, you can set up AWS PrivateLink for [Databricks](#databr
## Databricks
+The following Databricks integrations support PrivateLink:
+- [Databricks storage destination](/docs/connections/storage/catalog/databricks/)
+- [Databricks Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/databricks-setup/)
+
> info "Segment recommends reviewing the Databricks documentation before attempting AWS PrivateLink setup"
> The setup required to configure the Databricks PrivateLink integration requires front-end and back-end PrivateLink configuration. Review the [Databricks documentation on AWS PrivateLink](https://docs.databricks.com/en/security/network/classic/privatelink.html){:target="_blank”} to ensure you have everything required to set up this configuration before continuing.
@@ -24,15 +28,19 @@ Before you can configure AWS PrivateLink for Databricks, complete the following
### Configure PrivateLink for Databricks
To implement Segment's PrivateLink integration for Databricks:
1. Follow the instructions in Databricks' [Enable private connectivity using AWS PrivateLink](https://docs.databricks.com/en/security/network/classic/privatelink.html){:target="_blank”} documentation. You must create a [back-end](https://docs.databricks.com/en/security/network/classic/privatelink.html#private-connectivity-overview){:target="_blank”} connection to integrate with Segment's front-end connection.
-2. After you've configured a back-end connection for Databricks, request access to Segment's PrivateLink integration by reaching out to your Customer Success Manager (CSM).
-3. Your CSM sets up a call with Segment R&D to continue the onboarding process.
-
-The following Databricks integrations support PrivateLink:
- - [Databricks storage destination](/docs/connections/storage/catalog/databricks/)
- - [Databricks Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/databricks-setup/)
+2. After you've configured a back-end connection for Databricks, let your Customer Success Manager (CSM) know that you're interested in PrivateLink.
+3. Segment's engineering team creates a custom VPC endpoint on your behalf. Segment then provides you with the VPC endpoint's ID.
+4. Follow the instructions in Databricks' [Register PrivateLink objects](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-3-register-privatelink-objects){:target="_blank”} documentation. These instructions direct you to register the VPC endpoint in your Databricks account and to create or update your Private Access Setting to include the VPC endpoint.
+5. Configure your Databricks workspace to [use the Private Access Setting object](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-4-create-or-update-your-workspace-with-privatelink-objects){:target="_blank”} from the previous step.
+6. Reach back out to your CSM and provide them with your Databricks Workspace URL. Segment configures their internal DNS to reroute Segment traffic for your Databricks workspace to your VPC endpoint.
+7. Your CSM notifies you that Segment's PrivateLink integration is complete. If you have any existing Segment Databricks integrations that use your Databricks workspace URL, they now use PrivateLink. You can also create new Databricks integrations in the Segment app. All newly created integrations using your Databricks workspace URL will automatically use PrivateLink.
## RDS Postgres
+The following RDS Postgres integrations support PrivateLink:
+- [RDS Postgres storage destination](/docs/connections/storage/catalog/postgres/)
+- [RDS Postgres Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/postgres-setup/)
+
### Prerequisites
Before you can configure AWS PrivateLink for RDS Postgres, complete the following prerequisites:
- **Set up a Network Load Balancer (NLB) to route traffic to your Postgres database**: Segment recommends creating an NLB that has target group IP address synchronization, using a solution like AWS Lambda.
@@ -44,15 +52,17 @@ If any updates are made to the Availability Zones (AZs) enabled for your NLB, pl
### Configure PrivateLink for RDS Postgres
To implement Segment's PrivateLink integration for RDS Postgres:
1. Create a Network Load Balancer VPC endpoint service using the instructions in the [Create a service powered by AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html){:target="_blank”} documentation.
-2. Reach out to your Customer Success Manager (CSM) for details about Segment's AWS principal.
+2. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will share information with you about Segment's AWS principal.
3. Add the Segment AWS principal as an “Allowed Principal” to consume the Network Load Balancer VPC endpoint service you created in step 1.
4. Reach out to your CSM and provide them with the Service Name for the service that you created above. Segment's engineering team provisions a VPC endpoint for the service in the Segment Edge VPC.
-5. Segment provides you with the VPC endpoint's private DNS name. Use the DNS name as the **Host** setting to update or create new Postgres integrations in the Segment app.
The following RDS Postgres integrations support PrivateLink:
- - [RDS Postgres storage destination](/docs/connections/storage/catalog/postgres/)
- - [RDS Postgres Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/postgres-setup/)
+5. Segment provides you with the VPC endpoint's private DNS name. Use the DNS name as the **Host** setting to update or create new Postgres integrations in the Segment app.
## Redshift
+The following Redshift integrations support PrivateLink:
+- [Redshift storage destination](/docs/connections/storage/catalog/redshift/)
+- [Redshift Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/redshift-setup/)
+
### Prerequisites
- **You're using the RA3 node type**: To access Segment's PrivateLink integration, use an RA3 instance.
- **You've enabled cluster relocation**: Cluster relocation migrates your cluster behind a proxy and keeps the cluster endpoint unchanged, even if your cluster needs to be migrated to a new Availability Zone. A consistent cluster endpoint makes it possible for Segment's Edge account and VPC to remain connected to your cluster. To enable cluster relocation, follow the instructions in the AWS [Relocating your cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html){:target="_blank”} documentation.
@@ -64,16 +74,18 @@ To implement Segment's PrivateLink integration for Redshift:
2. After you receive the Edge account ID and VPC ID, [grant cluster access to Segment's Edge account and VPC](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc-console-grantor.html){:target="_blank”}.
3. Reach back out to your CSM and provide them with the Cluster Identifier for your cluster and your AWS account ID.
4. Segment's engineering team creates a Redshift managed VPC endpoint within the Segment Redshift subnet on your behalf, which creates a PrivateLink Endpoint URL. Segment then provides you with the internal PrivateLink Endpoint URL.
-5. Use the provided PrivateLink Endpoint URL as the **Hostname** setting to update or create new Redshift integrations in the Segment app. The following integrations support PrivateLink:
- - [Redshift storage destination](/docs/connections/storage/catalog/redshift/)
- - [Redshift Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/redshift-setup/)
+5. Use the provided PrivateLink Endpoint URL as the **Hostname** setting to update or create new Redshift integrations in the Segment app.
## Snowflake
+The following Snowflake integrations support PrivateLink:
+- [Snowflake storage destination](/docs/connections/storage/catalog/snowflake/)
+- [Snowflake Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/snowflake-setup/)
+
### Prerequisites
Before you can configure AWS PrivateLink for Snowflake, complete the following prerequisites:
-- Your Snowflake account must be on the Business Critical [Edition](https://docs.snowflake.com/en/user-guide/intro-editions){:target="_blank”} or higher.
-- Your Snowflake account is hosted on the Amazon Web Services (AWS) [cloud platform](https://docs.snowflake.com/en/user-guide/intro-cloud-platforms){:target="_blank”}.
+- Your Snowflake account is on the Business Critical [Edition](https://docs.snowflake.com/en/user-guide/intro-editions){:target="_blank”} or higher.
+- Your Snowflake account is hosted on the [AWS cloud platform](https://docs.snowflake.com/en/user-guide/intro-cloud-platforms){:target="_blank”}.
### Configure PrivateLink for Snowflake
To implement Segment's PrivateLink integration for Snowflake:
@@ -81,7 +93,5 @@ To implement Segment's PrivateLink integration for Snowflake:
2. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will provide you with Segment’s AWS Edge account ID.
3. Create a Snowflake Support Case to authorize PrivateLink connections from Segment's AWS account ID as a third-party vendor to your Snowflake account.
4. After Snowflake support authorizes Segment, call the [SYSTEM$GET_PRIVATELINK_CONFIG](https://docs.snowflake.com/en/sql-reference/functions/system_get_privatelink_config){:target="_blank”} function while using the Snowflake ACCOUNTADMIN role. Reach back out to your Segment CSM and provide them with the **privatelink-vpce-id** and **privatelink-account-url** values from the function output. Note the **privatelink-account-name** value for later use.
-5. Segment's engineering team creates a VPC endpoint on your behalf. Segment also creates a CNAME record to reroute Segment traffic to use your VPC endpoint. This ensures that Segment connections to your **privatelink-account-name** are made over PrivateLink.
-6. Your CSM notifies you that the setup on Segment's side is complete. Use your **privatelink-account-name** as the **Account** setting to update or create new Snowflake integrations in the Segment app. The following integrations support PrivateLink:
- - [Snowflake storage destination](/docs/connections/storage/catalog/snowflake/)
- - [Snowflake Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/snowflake-setup/)
+5. Segment's engineering team creates a custom VPC endpoint on your behalf. Segment also creates a CNAME record to reroute Segment traffic to use your VPC endpoint. This ensures that Segment connections to your **privatelink-account-name** are made over PrivateLink.
+6. Your CSM notifies you that the setup on Segment's side is complete. Use your **privatelink-account-name** as the **Account** setting to update or create new Snowflake integrations in the Segment app.
From 093eb22d9482af687692400e5ebe3dc393ce84d4 Mon Sep 17 00:00:00 2001
From: AnnieZhao17
Date: Tue, 24 Sep 2024 17:44:00 -0700
Subject: [PATCH 12/35] Minor adjustment [netlify-build]
---
src/connections/aws-privatelink.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/connections/aws-privatelink.md b/src/connections/aws-privatelink.md
index 2736f47332..da6fce65b9 100644
--- a/src/connections/aws-privatelink.md
+++ b/src/connections/aws-privatelink.md
@@ -33,7 +33,7 @@ To implement Segment's PrivateLink integration for Databricks:
4. Follow the instructions in Databricks' [Register PrivateLink objects](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-3-register-privatelink-objects){:target="_blank”} documentation. It'll instruct you to register the VPC endpoint in your Databricks account and to create or update your Private Access Setting to include the VPC endpoint.
5. Configure your Databricks workspace to [use the Private Access Setting object](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-4-create-or-update-your-workspace-with-privatelink-objects) from the previous step.
6. Reach back out to your CSM and provide them with your Databricks Workspace URL. Segment configures their internal DNS to reroute Segment traffic for your Databricks workspace to your VPC endpoint.
-7. Your CSM notifies you that Segment's PrivateLink integration is complete. If you have any existing Segment Databricks integrations that use your Databricks workspace URL, they now use PrivateLink. You can also create new Databricks integrations in the Segment app. All newly created integrations using your Databricks workspace URL will automatically use PrivateLink.
+7. Your CSM notifies you that Segment's PrivateLink integration is complete. If you have any existing Segment Databricks integrations that use your Databricks workspace URL, they now automatically use PrivateLink. You can also create new Databricks integrations in the Segment app. All newly created integrations using your Databricks workspace URL will automatically use PrivateLink.
## RDS Postgres
@@ -64,6 +64,7 @@ The following Redshift integrations support PrivateLink:
- [Redshift Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/redshift-setup/)
### Prerequisites
+Before you can configure AWS PrivateLink for Redshift, complete the following prerequisites:
- **You're using the RA3 node type**: To access Segment's PrivateLink integration, use an RA3 instance.
- **You've enabled cluster relocation**: Cluster relocation migrates your cluster behind a proxy and keeps the cluster endpoint unchanged, even if your cluster needs to be migrated to a new Availability Zone. A consistent cluster endpoint makes it possible for Segment's Edge account and VPC to remain connected to your cluster. To enable cluster relocation, follow the instructions in the AWS [Relocating your cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html){:target="_blank”} documentation.
- **Your cluster is using a port within the ranges 5431-5455 or 8191-8215**: Clusters with cluster relocation enabled [might encounter an error if updated to include a port outside of this range](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html#:~:text=You%20can%20change%20to%20another%20port%20from%20the%20port%20range%20of%205431%2D5455%20or%208191%2D8215.%20(Don%27t%20change%20to%20a%20port%20outside%20the%20ranges.%20It%20results%20in%20an%20error.)){:target="_blank”}.
From fea718fe8aec906175146e8f6a8e2df6df055abf Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Thu, 26 Sep 2024 09:42:06 -0400
Subject: [PATCH 13/35] [netlify-build]
---
src/connections/reverse-etl/manage-retl.md | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/src/connections/reverse-etl/manage-retl.md b/src/connections/reverse-etl/manage-retl.md
index 5e739a9f8d..d03eef2d37 100644
--- a/src/connections/reverse-etl/manage-retl.md
+++ b/src/connections/reverse-etl/manage-retl.md
@@ -55,8 +55,6 @@ You can choose to replay syncs. To replay a specific sync, contact [friends@segm
## Alerting
You can opt in to receive email, Slack, and in-app alerts about Reverse ETL sync failures, spikes in data ingested from your model, and fluctuations in the volume of events successfully delivered from your mapping.
-
-
The notification channels that you select for one alert will apply to all alerts in your workspace.
### Failed or partially successful syncs
@@ -83,16 +81,14 @@ To receive a volume spike alert in a Slack channel, you must first create a Slac
1. Navigate to the model you'd like to create an alert for and select the **Alerts** tab.
2. Click **Create alert**.
-3. Set a *change in event volume* percentage, or the percentage of change in event volume from your source that would prompt an alert.
+3. Set a *change in event volume* percentage, or the percentage of change in event volume from your source that would prompt an alert.
4. Select one or more of the following notification channels:
- **Email**: Select this channel to receive emailed alerts at the email address that you use to sign in to Segment.
- **Slack notification**: Enter a Webhook URL and a Slack channel name to receive alerts in a Slack channel.
- **In-app notifications**: Select this to receive notifications in the Segment app. To view your notifications, select the bell next to your user icon in the Segment app.
5. Toggle the **Enable alert** setting on and click **Create**.
-To edit or disable your alert, navigate to your model's Alerts tab and select the Actions menu.
-
-
+To edit or disable your alert, navigate to your model's Alerts tab and select the Actions menu for the model you'd like to edit.
### Mapping-level successful delivery rate fluctuations
@@ -110,6 +106,6 @@ To subscribe to alerts for successful delivery fluctuations at the mapping level
- **In-app notifications**: Select this to receive notifications in the Segment app. To view your notifications, select the bell next to your user icon in the Segment app.
5. Toggle the **Enable alert** setting on and click **Create**.
-To edit or disable your alert, navigate to your mapping's Alerts tab and select the Actions menu.
+To edit or disable your alert, navigate to your mapping's Alerts tab and select the Actions menu for the alert you'd like to edit.
From bdd7b6e972aa2ad6451d2cf8953aa7d86a0bb9ef Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Thu, 26 Sep 2024 09:42:59 -0400
Subject: [PATCH 14/35] rm note
---
src/connections/reverse-etl/manage-retl.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/connections/reverse-etl/manage-retl.md b/src/connections/reverse-etl/manage-retl.md
index d03eef2d37..0eced3730e 100644
--- a/src/connections/reverse-etl/manage-retl.md
+++ b/src/connections/reverse-etl/manage-retl.md
@@ -107,5 +107,3 @@ To subscribe to alerts for successful delivery fluctuations at the mapping level
5. Toggle the **Enable alert** setting on and click **Create**.
To edit or disable your alert, navigate to your mapping's Alerts tab and select the Actions menu for the alert you'd like to edit.
-
-
From b5f48d2072de583b1fb45e8900a6ff9b01c2e0c9 Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Thu, 26 Sep 2024 12:57:40 -0400
Subject: [PATCH 15/35] add actions v2 section to salesforce actions docs
---
.../catalog/actions-salesforce/index.md | 27 +++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/src/connections/destinations/catalog/actions-salesforce/index.md b/src/connections/destinations/catalog/actions-salesforce/index.md
index ae3e9b4ff0..6f2476d05a 100644
--- a/src/connections/destinations/catalog/actions-salesforce/index.md
+++ b/src/connections/destinations/catalog/actions-salesforce/index.md
@@ -42,6 +42,33 @@ Before you connect Segment to Salesforce, please ensure you have a Salesforce ac
> _For additional information on these limitations, see the Salesforce [Manage OAuth-Enabled Connected Apps Access to Your Data](https://help.salesforce.com/s/articleView?id=sf.remoteaccess_request_manage.htm&type=5#:~:text=Each%20connected%20app%20allows%20five%20unique%20approvals%20per%20user.){:target="_blank”} documentation._
+## Actions v2
+
+Segment created new Actions v2 to provide you with additional access to features. Segment's Actions v2 support the following features:
+ - **Sync modes**: Control how Segment updates Salesforce by selecting a [sync mode](#sync-modes), or a strategy for updating your downstream data.
+ - **Dynamic dropdowns**: When creating or updating a mapping in the Segment app, the dropdown auto-populates all of the available properties directly from Salesforce.
+ - **Create and modify data**: Use Sync modes to create objects in your downstream destination without having to leave the Segment app.
+
+> warning ""
+> You might need to reauthorize your Salesforce account to use all of the features associated with Actions v2.
+
+The following Actions support the Actions v2 functionality:
+ - [Account v2](#account-v2)
+ - [Custom Object v2](#custom-object-v2)
+ - [Case v2](#case-v2)
+ - [Opportunity v2](#opportunity-v2)
+ - [Lead v2](#lead-v2)
+ - [Contact v2](#contact-v2)
+
+### Sync modes
+Sync modes let you define how Segment should update the data in your destination.
+
+Available sync modes for the Salesforce (Actions) destination include:
+- **Add**: Add a new record when the specified identifier doesn't exist. If it does exist, Segment skips the record.
+- **Update**: Update a record if a match with the specified identifier is found. Segment does nothing if the record doesn't exist.
+- **Upsert**: If a record with the specified identifier is found, it is updated. If not, Segment creates a new record.
+- **Delete**: Remove the record associated with a specified identifier. Not available when using batching.
+
{% include components/actions-fields.html %}
## Configuration options
From f40d412989e03101755d1b7e9c58935816cb77c9 Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Thu, 26 Sep 2024 12:59:30 -0400
Subject: [PATCH 16/35] [netlify-build]
---
src/connections/destinations/catalog/actions-salesforce/index.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/connections/destinations/catalog/actions-salesforce/index.md b/src/connections/destinations/catalog/actions-salesforce/index.md
index 6f2476d05a..783e87c506 100644
--- a/src/connections/destinations/catalog/actions-salesforce/index.md
+++ b/src/connections/destinations/catalog/actions-salesforce/index.md
@@ -201,3 +201,4 @@ For "Bulk Upsert External ID", see [Salesforce’s help documentation](https://h
> warning ""
> The field mapped to Bulk Upsert External Id should **not** be included in the Other Fields mapping. Including it as a custom field will cause an error in Salesforce. Although the Bulk API may return successful responses, the [Bulk Data Load Jobs](https://help.salesforce.com/s/articleView?id=sf.monitoring_async_api_jobs.htm&type=5) page in Salesforce will display error messages for failed operations.
+
From 6e205b7198b98810558611306b0ceebe626e940f Mon Sep 17 00:00:00 2001
From: AnnieZhao17
Date: Thu, 26 Sep 2024 11:48:51 -0700
Subject: [PATCH 17/35] Wording changes from PR reviews [netlify-build]
---
src/connections/aws-privatelink.md | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/src/connections/aws-privatelink.md b/src/connections/aws-privatelink.md
index da6fce65b9..ff90ab189b 100644
--- a/src/connections/aws-privatelink.md
+++ b/src/connections/aws-privatelink.md
@@ -19,21 +19,21 @@ The following Databricks integrations support PrivateLink:
> The setup required to configure the Databricks PrivateLink integration requires front-end and back-end PrivateLink configuration. Review the [Databricks documentation on AWS PrivateLink](https://docs.databricks.com/en/security/network/classic/privatelink.html){:target="_blank”} to ensure you have everything required to set up this configuration before continuing.
### Prerequisites
-Before you can configure AWS PrivateLink for Databricks, complete the following prerequisites in your Databricks workspace:
+Before you can implement AWS PrivateLink for Databricks, complete the following prerequisites in your Databricks workspace:
- Databricks account must be on the [Enterprise pricing tier](https://www.databricks.com/product/pricing/platform-addons){:target="_blank”} and use the [E2 version](https://docs.databricks.com/en/archive/aws/end-of-life-legacy-workspaces.html#e2-architecture){:target="_blank”} of the platform.
- Databricks workspace must use a [Customer-managed VPC](https://docs.databricks.com/en/security/network/classic/customer-managed-vpc.html){:target="_blank”} and [Secure cluster connectivity](https://docs.databricks.com/en/security/network/classic/secure-cluster-connectivity.html){:target="_blank”}.
- Configure your [VPC](https://docs.databricks.com/en/security/network/classic/customer-managed-vpc.html){:target="_blank”} with DNS hostnames and DNS resolution
- Configure a [security group](https://docs.databricks.com/en/security/network/classic/customer-managed-vpc.html#security-groups){:target="_blank”} with bidirectional access to 0.0.0.0/0 and ports 443, 3306, 6666, 2443, and 8443-8451.
-### Configure PrivateLink for Databricks
+### Implement PrivateLink for Databricks
To implement Segment's PrivateLink integration for Databricks:
1. Follow the instructions in Databricks' [Enable private connectivity using AWS PrivateLink](https://docs.databricks.com/en/security/network/classic/privatelink.html){:target="_blank”} documentation. You must create a [back-end](https://docs.databricks.com/en/security/network/classic/privatelink.html#private-connectivity-overview){:target="_blank”} connection to integrate with Segment's front-end connection.
2. After you've configured a back-end connection for Databricks, let your Customer Success Manager (CSM) know that you're interested in PrivateLink.
3. Segment's engineering team creates a custom VPC endpoint on your behalf. Segment then provides you with the VPC endpoint's ID.
-4. Follow the instructions in Databricks' [Register PrivateLink objects](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-3-register-privatelink-objects){:target="_blank”} documentation. It'll instruct you to register the VPC endpoint in your Databricks account and to create or update your Private Access Setting to include the VPC endpoint.
-5. Configure your Databricks workspace to [use the Private Access Setting object](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-4-create-or-update-your-workspace-with-privatelink-objects) from the previous step.
+4. Register the VPC endpoint in your Databricks account and create or update your Private Access Setting to include the VPC endpoint. For more information, see Databricks' [Register PrivateLink objects](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-3-register-privatelink-objects){:target="_blank”} documentation.
+5. Configure your Databricks workspace to [use the Private Access Setting object](https://docs.databricks.com/en/security/network/classic/privatelink.html#step-4-create-or-update-your-workspace-with-privatelink-objects){:target="_blank”} from the previous step.
6. Reach back out to your CSM and provide them with your Databricks Workspace URL. Segment configures their internal DNS to reroute Segment traffic for your Databricks workspace to your VPC endpoint.
-7. Your CSM notifies you that Segment's PrivateLink integration is complete. If you have any existing Segment Databricks integrations that use your Databricks workspace URL, they now automatically use PrivateLink. You can also create new Databricks integrations in the Segment app. All newly created integrations using your Databricks workspace URL will automatically use PrivateLink.
+7. Your CSM notifies you that Segment's PrivateLink integration is complete. If you have any existing Segment Databricks integrations that use your Databricks workspace URL, they now automatically use PrivateLink. Any new Databricks integrations created in the Segment app using your Databricks workspace URL will also automatically use PrivateLink.
## RDS Postgres
@@ -42,14 +42,14 @@ The following RDS Postgres integrations support PrivateLink:
- [RDS Postgres Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/postgres-setup/)
### Prerequisites
-Before you can configure AWS PrivateLink for RDS Postgres, complete the following prerequisites:
+Before you can implement AWS PrivateLink for RDS Postgres, complete the following prerequisites:
- **Set up a Network Load Balancer (NLB) to route traffic to your Postgres database**: Segment recommends creating a NLB that has target group IP address synchronization, using a solution like AWS Lambda.
If any updates are made to the Availability Zones (AZs) enabled for your NLB, please let your CSM know so that Segment can update the AZs of your VPC endpoint.
- **Configure your NLB with one of the following settings**:
- Disable the **Enforce inbound rules on PrivateLink traffic** setting
- If you must enforce inbound rules on PrivateLink traffic, add an inbound rule that allows traffic belonging to Segment's PrivateLink/Edge CIDR: `10.0.0.0/8`
-### Configure PrivateLink for RDS Postgres
+### Implement PrivateLink for RDS Postgres
To implement Segment's PrivateLink integration for RDS Postgres:
1. Create a Network Load Balancer VPC endpoint service using the instructions in the [Create a service powered by AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html){:target="_blank”} documentation.
2. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will share information with you about Segment's AWS principal.
@@ -64,12 +64,12 @@ The following Redshift integrations support PrivateLink:
- [Redshift Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/redshift-setup/)
### Prerequisites
-Before you can configure AWS PrivateLink for Redshift, complete the following prerequisites:
+Before you can implement AWS PrivateLink for Redshift, complete the following prerequisites:
- **You're using the RA3 node type**: To access Segment's PrivateLink integration, use an RA3 instance.
- **You've enabled cluster relocation**: Cluster relocation migrates your cluster behind a proxy and keeps the cluster endpoint unchanged, even if your cluster needs to be migrated to a new Availability Zone. A consistent cluster endpoint makes it possible for Segment's Edge account and VPC to remain connected to your cluster. To enable cluster relocation, follow the instructions in the AWS [Relocating your cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html){:target="_blank”} documentation.
- **Your cluster is using a port within the ranges 5431-5455 or 8191-8215**: Clusters with cluster relocation enabled [might encounter an error if updated to include a port outside of this range](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html#:~:text=You%20can%20change%20to%20another%20port%20from%20the%20port%20range%20of%205431%2D5455%20or%208191%2D8215.%20(Don%27t%20change%20to%20a%20port%20outside%20the%20ranges.%20It%20results%20in%20an%20error.)){:target="_blank”}.
-### Configure PrivateLink for Redshift
+### Implement PrivateLink for Redshift
To implement Segment's PrivateLink integration for Redshift:
1. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will share information with you about Segment’s Edge account and VPC.
2. After you receive the Edge account ID and VPC ID, [grant cluster access to Segment's Edge account and VPC](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc-console-grantor.html){:target="_blank”}.
@@ -84,15 +84,15 @@ The following Snowflake integrations support PrivateLink:
- [Snowflake Reverse ETL source](/docs/connections/reverse-etl/reverse-etl-source-setup-guides/snowflake-setup/)
### Prerequisites
-Before you can configure AWS PrivateLink for Snowflake, complete the following prerequisites:
-- Your Snowflake account is on the Business Critical [Edition](https://docs.snowflake.com/en/user-guide/intro-editions){:target="_blank”} or higher.
+Before you can implement AWS PrivateLink for Snowflake, complete the following prerequisites:
+- Your Snowflake account is on the [Business Critical Edition](https://docs.snowflake.com/en/user-guide/intro-editions){:target="_blank”} or higher.
- Your Snowflake account is hosted on the [AWS cloud platform](https://docs.snowflake.com/en/user-guide/intro-cloud-platforms){:target="_blank”}.
-### Configure PrivateLink for Snowflake
+### Implement PrivateLink for Snowflake
To implement Segment's PrivateLink integration for Snowflake:
1. Follow Snowflake's PrivateLink documentation to [enable AWS PrivateLink](https://docs.snowflake.com/en/user-guide/admin-security-privatelink#enabling-aws-privatelink){:target="_blank”} for your Snowflake account.
2. Let your Customer Success Manager (CSM) know that you're interested in PrivateLink. They will provide you with Segment’s AWS Edge account ID.
3. Create a Snowflake Support Case to authorize PrivateLink connections from Segment's AWS account ID as a third party vendor to your Snowflake account.
-4. After Snowflake support authorizes Segment, call the [SYSTEM$GET_PRIVATELINK_CONFIG](https://docs.snowflake.com/en/sql-reference/functions/system_get_privatelink_config) function while using the Snowflake ACCOUNTADMIN role. Reach back out to your Segment CSM and provide them with the **privatelink-vpce-id** and **privatelink-account-url** values from the function output. Note down for yourself the **privatelink-account-name** value.
+4. After Snowflake support authorizes Segment, call the [SYSTEM$GET_PRIVATELINK_CONFIG](https://docs.snowflake.com/en/sql-reference/functions/system_get_privatelink_config){:target="_blank”} function while using the Snowflake ACCOUNTADMIN role. Reach back out to your Segment CSM and provide them with the **privatelink-vpce-id** and **privatelink-account-url** values from the function output. Make a note of the **privatelink-account-name** value for later use.
5. Segment's engineering team creates a custom VPC endpoint on your behalf. Segment also creates a CNAME record to reroute Segment traffic to use your VPC endpoint. This ensures that Segment connections to your **privatelink-account-name** are made over PrivateLink.
6. Your CSM notifies you that the setup on Segment's side is complete. Use your **privatelink-account-name** as the **Account** setting to update or create new Snowflake integrations in the Segment app.
From a458456c5e5ef643ada2c461b102d5b2d7ce09fd Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Thu, 26 Sep 2024 16:25:45 -0400
Subject: [PATCH 18/35] Update
src/connections/destinations/catalog/actions-salesforce/index.md
Co-authored-by: stayseesong <83784848+stayseesong@users.noreply.github.com>
---
.../destinations/catalog/actions-salesforce/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/connections/destinations/catalog/actions-salesforce/index.md b/src/connections/destinations/catalog/actions-salesforce/index.md
index 783e87c506..0d984a8591 100644
--- a/src/connections/destinations/catalog/actions-salesforce/index.md
+++ b/src/connections/destinations/catalog/actions-salesforce/index.md
@@ -44,7 +44,7 @@ Before you connect Segment to Salesforce, please ensure you have a Salesforce ac
## Actions v2
-Segment created new Actions v2 to provide you with additional access to features. Segment's Actions v2 support the following features:
+Segment's Actions v2 provide you with access to the following features:
- **Sync modes**: Control how Segment updates Salesforce by selecting a [sync mode](#sync-modes), or a strategy for updating your downstream data.
- **Dynamic dropdowns**: When creating or updating a mapping in the Segment app, the dropdown auto-populates all of the available properties directly from Salesforce.
- **Create and modify data**: Use Sync modes to create objects in your downstream destination without having to leave the Segment app.
From 480f2980153868bfc82e2c29b950304e32174234 Mon Sep 17 00:00:00 2001
From: Joe Ayoub
Date: Fri, 27 Sep 2024 09:09:48 +0100
Subject: [PATCH 19/35] spelling correction
---
src/_includes/components/actions-fields.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/_includes/components/actions-fields.html b/src/_includes/components/actions-fields.html
index 1cd9f40c29..830084324d 100644
--- a/src/_includes/components/actions-fields.html
+++ b/src/_includes/components/actions-fields.html
@@ -137,7 +137,7 @@
- For more information, see HubSpot's [Assosciate records](https://knowledge.hubspot.com/records/associate-records){:target="_blank”} documentation.
+ For more information, see HubSpot's [Associate records](https://knowledge.hubspot.com/records/associate-records){:target="_blank”} documentation.
From 21cc0da655b8463c30177eb45e98237d4bf7901e Mon Sep 17 00:00:00 2001
From: forstisabella <92472883+forstisabella@users.noreply.github.com>
Date: Fri, 27 Sep 2024 11:51:33 -0400
Subject: [PATCH 20/35] bump up destination settings one level + rm duplicate
actions
---
src/_includes/components/actions-fields.html | 2 +-
.../destinations/catalog/actions-customerio/index.md | 2 --
2 files changed, 1 insertion(+), 3 deletions(-)
diff --git a/src/_includes/components/actions-fields.html b/src/_includes/components/actions-fields.html
index 830084324d..2223fb96ef 100644
--- a/src/_includes/components/actions-fields.html
+++ b/src/_includes/components/actions-fields.html
@@ -20,7 +20,7 @@
{% if settings.size > 0 %}
-### Destination Settings
+## Destination Settings