
LangChain4j-Kotlin

Kotlin enhancements for LangChain4j, providing coroutine support and Flow-based streaming capabilities for chat language models.

See the discussion on the LangChain4j project.

ℹ️ This project is a playground for LangChain4j's Kotlin API. If accepted, some of this code may be adopted into the original LangChain4j project and removed from here. Meanwhile, enjoy it here.

Features

See the API docs for more details.

Installation

Maven

Add the following dependencies to your pom.xml:

<dependencies>
    <!-- LangChain4j Kotlin Extensions -->
    <dependency>
        <groupId>me.kpavlov.langchain4j.kotlin</groupId>
        <artifactId>langchain4j-kotlin</artifactId>
        <version>[LATEST_VERSION]</version>
    </dependency>

    <!-- Extra Dependencies -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j</artifactId>
        <version>1.0.0-alpha1</version>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-open-ai</artifactId>
        <version>1.0.0-alpha1</version>
    </dependency>
</dependencies>

Gradle (Kotlin DSL)

Add the following to your build.gradle.kts:

dependencies {
    implementation("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:$LATEST_VERSION")
    implementation("dev.langchain4j:langchain4j-open-ai:1.0.0-alpha1")
}

Quick Start

Basic Chat Request

The extension turns a ChatLanguageModel call into a Kotlin suspending function:

val model: ChatLanguageModel = OpenAiChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

// Synchronous call
val response =
    model.chat(chatRequest {
        messages += systemMessage("You are a helpful assistant")
        messages += userMessage("Hello!")
    })
println(response.aiMessage().text())

// Using coroutines
CoroutineScope(Dispatchers.IO).launch {
    val response =
        model.chatAsync {
            messages += systemMessage("You are a helpful assistant")
            messages += userMessage("Say Hello")
            parameters(OpenAiChatRequestParameters.builder()) {
                temperature = 0.1
                builder.seed(42) // OpenAI-specific parameter
            }
        }
    println(response.aiMessage().text())
}
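
Since chatAsync is a suspending function, it can also be called directly from any suspend context, without creating a scope by hand. A minimal sketch (the askModel wrapper is hypothetical, not part of the library):

// Hypothetical convenience wrapper around the chatAsync extension
suspend fun askModel(model: ChatLanguageModel, question: String): String {
    // chatAsync suspends until the model responds, so no explicit scope is needed
    val response = model.chatAsync {
        messages += systemMessage("You are a helpful assistant")
        messages += userMessage(question)
    }
    return response.aiMessage().text()
}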


Streaming Chat Language Model support

The extension converts a StreamingChatLanguageModel response into a Kotlin asynchronous Flow:

val model: StreamingChatLanguageModel = OpenAiStreamingChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

// Messages to send to the model
val messages = listOf(
    systemMessage("You are a helpful assistant"),
    userMessage("Hello!"),
)

model.generateFlow(messages).collect { reply ->
    when (reply) {
        is Completion ->
            println("Final response: ${reply.response.content().text()}")

        is Token -> println("Received token: ${reply.token}")
        else -> throw IllegalArgumentException("Unsupported event: $reply")
    }
}
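
Because Token events arrive before the final Completion, partial output can be shown to the user while the model is still generating. A minimal sketch that assembles the full reply from the streamed tokens, reusing the model and messages from above:

val replyText = StringBuilder()
model.generateFlow(messages).collect { reply ->
    when (reply) {
        // Append each partial token as it arrives
        is Token -> replyText.append(reply.token)
        // Completion signals that the stream has finished
        is Completion -> println("Assembled reply: $replyText")
        else -> {}
    }
}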

Kotlin Notebook

The Kotlin Notebook environment allows you to:

  • Experiment with LLM features in real-time
  • Test different configurations and scenarios
  • Visualize results directly in the notebook
  • Share reproducible examples with others

You can easily get started with LangChain4j-Kotlin notebooks:

%useLatestDescriptors
%use coroutines

@file:DependsOn("dev.langchain4j:langchain4j:0.36.2")
@file:DependsOn("dev.langchain4j:langchain4j-open-ai:0.36.2")

// add maven dependency
@file:DependsOn("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:0.1.1")
// ... or add project's target/classes to classpath
//@file:DependsOn("../target/classes")

import dev.langchain4j.data.message.SystemMessage.systemMessage
import dev.langchain4j.data.message.UserMessage.userMessage
import dev.langchain4j.model.openai.OpenAiChatModel

import me.kpavlov.langchain4j.kotlin.model.chat.generateAsync

  
val model = OpenAiChatModel.builder()
  .apiKey("demo")
  .modelName("gpt-4o-mini")
  .temperature(0.0)
  .maxTokens(1024)
  .build()

// Invoke from a coroutine
runBlocking {
    val result = model.generateAsync(
        listOf(
            systemMessage("You are a helpful assistant"),
            userMessage("Make a haiku about Kotlin, LangChain4j and LLM"),
        )
    )
    println(result.content().text())
}


Development Setup

Prerequisites

  1. Create a .env file in the root directory and add your API keys:
OPENAI_API_KEY=sk-xxxxx
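
The key can then be read from the environment; a minimal sketch of passing it to the model builder instead of hardcoding it (this assumes the variable is exported to the process, e.g. by your IDE or a dotenv plugin, and is not the project's own test setup):

// Read the key from the environment rather than committing it to source control
val apiKey = System.getenv("OPENAI_API_KEY")
    ?: error("OPENAI_API_KEY is not set")

val model = OpenAiChatModel.builder()
    .apiKey(apiKey)
    .build()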

Building the Project

Using Maven:

mvn clean verify

Using Make:

make build

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Before submitting your changes, run:

make lint

Acknowledgements

License

MIT License