
Structural improvements to SpeziLLM #45

Merged · 26 commits · Feb 23, 2024
12 changes: 9 additions & 3 deletions Package.swift
@@ -24,14 +24,18 @@ let package = Package(
        .library(name: "SpeziLLMOpenAI", targets: ["SpeziLLMOpenAI"])
    ],
    dependencies: [
        .package(url: "https://github.com/MacPaw/OpenAI", .upToNextMinor(from: "0.2.5")),
        .package(
            url: "https://github.com/MacPaw/OpenAI",
            revision: "35afc9a6ee127b8f22a85a31aec2036a987478af" // No new release from MacPaw, use commit on main until release is tagged
        ),
.package(url: "https://github.com/StanfordBDHG/llama.cpp", .upToNextMinor(from: "0.1.8")),
.package(url: "https://github.com/StanfordSpezi/Spezi", from: "1.1.0"),
.package(url: "https://github.com/StanfordSpezi/SpeziStorage", from: "1.0.0"),
.package(url: "https://github.com/StanfordSpezi/SpeziOnboarding", from: "1.0.0"),
.package(url: "https://github.com/StanfordSpezi/SpeziSpeech", from: "1.0.0"),
.package(url: "https://github.com/StanfordSpezi/SpeziChat", .upToNextMinor(from: "0.1.4")),
.package(url: "https://github.com/StanfordSpezi/SpeziViews", from: "1.0.0")
.package(url: "https://github.com/StanfordSpezi/SpeziChat", .upToNextMinor(from: "0.1.5")),
.package(url: "https://github.com/StanfordSpezi/SpeziViews", from: "1.0.0"),
.package(url: "https://github.com/groue/Semaphore.git", exact: "0.0.8")
    ],
    targets: [
        .target(
@@ -47,6 +51,7 @@ let package = Package(
            dependencies: [
                .target(name: "SpeziLLM"),
                .product(name: "llama", package: "llama.cpp"),
                .product(name: "Semaphore", package: "Semaphore"),
                .product(name: "Spezi", package: "Spezi")
            ],
            swiftSettings: [
@@ -65,6 +70,7 @@ let package = Package(
            dependencies: [
                .target(name: "SpeziLLM"),
                .product(name: "OpenAI", package: "OpenAI"),
                .product(name: "Semaphore", package: "Semaphore"),
                .product(name: "Spezi", package: "Spezi"),
                .product(name: "SpeziChat", package: "SpeziChat"),
                .product(name: "SpeziSecureStorage", package: "SpeziStorage"),
95 changes: 51 additions & 44 deletions README.md
@@ -91,14 +91,14 @@ The target enables developers to easily execute medium-size Language Models (LLM
#### Setup

You can configure the Spezi Local LLM execution within the typical `SpeziAppDelegate`.
In the example below, the `LLMRunner` from the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) target which is responsible for providing LLM functionality within the Spezi ecosystem is configured with the `LLMLocalRunnerSetupTask` from the [SpeziLLMLocal](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillmlocal) target. This prepares the `LLMRunner` to locally execute Language Models.
In the example below, the `LLMRunner` from the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) target, which is responsible for providing LLM functionality within the Spezi ecosystem, is configured with the `LLMLocalPlatform` from the [SpeziLLMLocal](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillmlocal) target. This prepares the `LLMRunner` to execute Language Models locally.

```
```swift
class TestAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMLocalRunnerSetupTask()
                LLMLocalPlatform()
            }
        }
    }
@@ -107,27 +107,30 @@ class TestAppDelegate: SpeziAppDelegate {

#### Usage

The code example below showcases the interaction with the `LLMLocal` through the the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner), which is injected into the SwiftUI `Environment` via the `Configuration` shown above..
Based on a `String` prompt, the `LLMGenerationTask/generate(prompt:)` method returns an `AsyncThrowingStream` which yields the inferred characters until the generation has completed.
The code example below showcases the interaction with local LLMs through the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner), which is injected into the SwiftUI `Environment` via the `Configuration` shown above.

The `LLMLocalSchema` defines the type and configuration of the to-be-executed `LLMLocalSession`. The schema is transformed into an `LLMLocalSession` by the [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner), which uses the `LLMLocalPlatform`. Inference via `LLMLocalSession/generate()` returns an `AsyncThrowingStream` that yields all generated `String` pieces.

```swift
struct LocalLLMChatView: View {
    @Environment(LLMRunner.self) var runner: LLMRunner

    // The locally executed LLM
    @State var model: LLMLocal = .init(
        modelPath: ...
    )
    @State var responseText: String

    func executePrompt(prompt: String) {
        // Execute the query on the runner, returning a stream of outputs
        let stream = try await runner(with: model).generate(prompt: "Hello LLM!")

        for try await token in stream {
            responseText.append(token)
        }
    }
struct LLMLocalDemoView: View {
    @Environment(LLMRunner.self) var runner: LLMRunner
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                // Instantiate the `LLMLocalSchema` to an `LLMLocalSession` via the `LLMRunner`.
                let llmSession: LLMLocalSession = await runner(
                    with: LLMLocalSchema(
                        modelPath: URL(string: "URL to the local model file")!
                    )
                )

                // `.task` expects a non-throwing closure, so generation errors are handled here.
                do {
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    responseText = error.localizedDescription
                }
            }
    }
}
```
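
Because the stream is consumed inside Swift structured concurrency, generation can be cancelled like any other task. A minimal sketch follows, reusing the `llmSession` from the example above; the surrounding task handling is illustrative and not part of this diff:

```swift
// Sketch: consume the generation stream inside a cancellable task.
// `llmSession` is assumed to be the `LLMLocalSession` created above.
let generationTask = Task {
    do {
        for try await token in try await llmSession.generate() {
            print(token, terminator: "")
        }
    } catch is CancellationError {
        // Thrown when the task is cancelled mid-generation.
        print("\nGeneration cancelled.")
    } catch {
        print("\nGeneration failed: \(error)")
    }
}

// Later, e.g. from a "Stop" button:
generationTask.cancel()
```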

@@ -142,15 +145,15 @@ In addition, `SpeziLLMOpenAI` provides developers with a declarative Domain Spec

#### Setup

In order to use `LLMOpenAI`, the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner) needs to be initialized in the Spezi `Configuration`. Only after, the `LLMRunner` can be used to execute the ``LLMOpenAI``.
In order to use OpenAI LLMs within the Spezi ecosystem, the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner) needs to be initialized in the Spezi `Configuration` with the `LLMOpenAIPlatform`. Only then can the `LLMRunner` be used for inference with OpenAI LLMs.
See the [SpeziLLM documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) for more details.

```swift
class LLMOpenAIAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMOpenAIRunnerSetupTask()
                LLMOpenAIPlatform()
            }
        }
    }
@@ -159,29 +162,33 @@ class LLMOpenAIAppDelegate: SpeziAppDelegate {

#### Usage

The code example below showcases the interaction with the `LLMOpenAI` through the the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner), which is injected into the SwiftUI `Environment` via the `Configuration` shown above.
Based on a `String` prompt, the `LLMGenerationTask/generate(prompt:)` method returns an `AsyncThrowingStream` which yields the inferred characters until the generation has completed.
The code example below showcases the interaction with an OpenAI LLM through the [SpeziLLM](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner), which is injected into the SwiftUI `Environment` via the `Configuration` shown above.

The `LLMOpenAISchema` defines the type and configuration of the to-be-executed `LLMOpenAISession`. The schema is transformed into an `LLMOpenAISession` by the [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner), which uses the `LLMOpenAIPlatform`. Inference via `LLMOpenAISession/generate()` returns an `AsyncThrowingStream` that yields all generated `String` pieces.

```swift
struct LLMOpenAIChatView: View {
struct LLMOpenAIDemoView: View {
    @Environment(LLMRunner.self) var runner: LLMRunner

    @State var model: LLMOpenAI = .init(
        parameters: .init(
            modelType: .gpt3_5Turbo,
            systemPrompt: "You're a helpful assistant that answers questions from users.",
            overwritingToken: "abc123"
        )
    )
    @State var responseText: String

    func executePrompt(prompt: String) {
        // Execute the query on the runner, returning a stream of outputs
        let stream = try await runner(with: model).generate(prompt: "Hello LLM!")

        for try await token in stream {
            responseText.append(token)
        }
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                // Instantiate the `LLMOpenAISchema` to an `LLMOpenAISession` via the `LLMRunner`.
                let llmSession: LLMOpenAISession = await runner(
                    with: LLMOpenAISchema(
                        parameters: .init(
                            modelType: .gpt3_5Turbo,
                            systemPrompt: "You're a helpful assistant that answers questions from users.",
                            overwritingToken: "abc123"
                        )
                    )
                )

                // `.task` expects a non-throwing closure, so generation errors are handled here.
                do {
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    responseText = error.localizedDescription
                }
            }
    }
}
```
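
The declarative Domain Specific Language for OpenAI function calling mentioned in the hunk above is not shown in this diff. Purely as an illustration, a function declaration might be sketched along these lines; the `LLMFunction` protocol and `@Parameter` wrapper are assumed from the SpeziLLMOpenAI documentation, and `WeatherFunction` with all of its details is hypothetical:

```swift
import SpeziLLMOpenAI

// Hypothetical sketch of the SpeziLLMOpenAI function-calling DSL.
// Everything about `WeatherFunction` is illustrative; only the general
// `LLMFunction` / `@Parameter` shape is assumed from the documentation.
struct WeatherFunction: LLMFunction {
    static let name: String = "get_weather"
    static let description: String = "Returns the current weather for a given city."

    @Parameter(description: "The city to fetch the weather for.")
    var city: String

    func execute() async throws -> String? {
        // A real implementation would query a weather service here.
        "It is sunny in \(city)."
    }
}

// The function could then be registered when instantiating the schema:
// LLMOpenAISchema(parameters: .init(modelType: .gpt3_5Turbo)) {
//     WeatherFunction()
// }
```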
29 changes: 29 additions & 0 deletions Sources/SpeziLLM/Helpers/Chat+Init.swift
@@ -0,0 +1,29 @@
//
// This source file is part of the Stanford Spezi open-source project
//
// SPDX-FileCopyrightText: 2022 Stanford University and the project authors (see CONTRIBUTORS.md)
//
// SPDX-License-Identifier: MIT
//

import SpeziChat


extension Chat {
    /// Creates a new `Chat` array with an arbitrary number of system messages.
    ///
    /// - Parameters:
    ///   - systemMessages: `String`s that should be used as system messages.
    public init(systemMessages: String...) {
        self = systemMessages.map { systemMessage in
            .init(role: .system, content: systemMessage)
        }
    }

    /// Resets the `Chat` array, deleting all persisted content.
    @MainActor
    public mutating func reset() {
        self = []
    }

}
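
A minimal usage sketch of the two helpers added above; the message strings are illustrative placeholders:

```swift
import SpeziChat
import SpeziLLM

// Exercises the new `Chat` convenience initializer and `reset()` helper.
@MainActor
func chatHelpersExample() {
    // The variadic initializer creates one `.system` entry per argument.
    var chat = Chat(systemMessages: "You are a helpful assistant.", "Keep answers short.")

    // `Chat` is an array of chat entities, so the usual collection API applies.
    chat.append(.init(role: .user, content: "Hello!"))

    // `reset()` clears all persisted content; it is `@MainActor`-isolated,
    // hence the annotation on this function.
    chat.reset()
    assert(chat.isEmpty)
}
```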
78 changes: 0 additions & 78 deletions Sources/SpeziLLM/LLM.swift

This file was deleted.

49 changes: 38 additions & 11 deletions Sources/SpeziLLM/LLMError.swift
@@ -9,34 +9,61 @@
import Foundation


/// Defines errors that may occur during setting up the runner environment for ``LLM`` generation jobs.
public enum LLMRunnerError: LLMError {
    /// Indicates an error occurred during setup of the LLM generation.
    case setupError
/// Defines universally occurring `Error`s while handling LLMs with SpeziLLM.
public enum LLMDefaultError: LLMError {
    /// Indicates an unknown error during LLM execution.
    case unknown(Error)


    public var errorDescription: String? {
        switch self {
        case .setupError:
            String(localized: LocalizedStringResource("LLM_SETUP_ERROR_DESCRIPTION", bundle: .atURL(from: .module)))
        case .unknown:
            String(localized: LocalizedStringResource("LLM_UNKNOWN_ERROR_DESCRIPTION", bundle: .atURL(from: .module)))
        }
    }

    public var recoverySuggestion: String? {
        switch self {
        case .setupError:
            String(localized: LocalizedStringResource("LLM_SETUP_ERROR_RECOVERY_SUGGESTION", bundle: .atURL(from: .module)))
        case .unknown:
            String(localized: LocalizedStringResource("LLM_UNKNOWN_ERROR_RECOVERY_SUGGESTION", bundle: .atURL(from: .module)))
        }
    }

    public var failureReason: String? {
        switch self {
        case .setupError:
            String(localized: LocalizedStringResource("LLM_SETUP_ERROR_FAILURE_REASON", bundle: .atURL(from: .module)))
        case .unknown:
            String(localized: LocalizedStringResource("LLM_UNKNOWN_ERROR_FAILURE_REASON", bundle: .atURL(from: .module)))
        }
    }


    public static func == (lhs: LLMDefaultError, rhs: LLMDefaultError) -> Bool {
        switch (lhs, rhs) {
        case (.unknown, .unknown): true
        }
    }
}


/// The ``LLMError`` defines a common error protocol which should be used for defining errors within the SpeziLLM ecosystem.
/// Defines a common `Error` protocol which should be used for defining errors within the SpeziLLM ecosystem.
///
/// An example conformance to the ``LLMError`` can be found in the `SpeziLLMLocal` target.
///
/// ```swift
/// public enum LLMLocalError: LLMError {
///     case modelNotFound
///
///     public var errorDescription: String? { "Some example error description" }
///     public var recoverySuggestion: String? { "Some example recovery suggestion" }
///     public var failureReason: String? { "Some example failure reason" }
/// }
/// ```
public protocol LLMError: LocalizedError, Equatable {}


/// Ensure the conformance of the Swift `CancellationError` to ``LLMError``.
extension CancellationError: LLMError {
    public static func == (lhs: CancellationError, rhs: CancellationError) -> Bool {
        true
    }
}
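
A sketch of how these errors might surface at a call site. The stream consumer below is hypothetical and only for illustration; `LLMDefaultError`, `LLMError`, and the `CancellationError` conformance come from this diff:

```swift
import Foundation
import SpeziLLM

// Hypothetical consumer of a generation stream, showing how the error
// types defined above could be matched when consuming tokens.
func consumeGeneration(_ stream: AsyncThrowingStream<String, Error>) async {
    do {
        for try await token in stream {
            print(token, terminator: "")
        }
    } catch is CancellationError {
        // Conforms to `LLMError` via the extension above.
        print("Generation was cancelled.")
    } catch let error as LLMDefaultError {
        // Universal SpeziLLM error, e.g. `.unknown(underlying)`.
        print("LLM error: \(error.errorDescription ?? "unknown")")
    } catch {
        print("Unexpected error: \(error)")
    }
}
```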
19 changes: 0 additions & 19 deletions Sources/SpeziLLM/LLMHostingType.swift

This file was deleted.
