Kotlin enhancements for LangChain4j, providing coroutine support and Flow-based streaming capabilities for chat language models.
See the discussion in the LangChain4j project.
ℹ️ I am verifying my ideas for improving LangChain4j here. If an idea is accepted, the code may be adopted into the original LangChain4j project. If not, you can still enjoy it here.
- ✨ Kotlin Coroutine support for ChatLanguageModels
- 🌊 Kotlin Asynchronous Flow support for StreamingChatLanguageModels
- 💄 External Prompt Templates support. The basic implementation loads both system and user prompt templates from the classpath, while `PromptTemplateSource` provides an extension mechanism for custom template sources.
- 💾 Async Document Processing Extensions support parallel document processing with Kotlin coroutines for efficient I/O operations in LangChain4j (see the sketch below).
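To illustrate the last point, here is a minimal sketch of parallel document parsing with plain Kotlin coroutines and the core LangChain4j `DocumentParser` API. The library ships its own extensions for this; the `parseAll` helper below is illustrative only:

```kotlin
import dev.langchain4j.data.document.Document
import dev.langchain4j.data.document.DocumentParser
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import java.nio.file.Files
import java.nio.file.Path

// Hypothetical helper: parse many documents concurrently on the IO dispatcher.
suspend fun parseAll(parser: DocumentParser, paths: List<Path>): List<Document> =
    coroutineScope {
        paths.map { path ->
            async(Dispatchers.IO) {
                Files.newInputStream(path).use(parser::parse)
            }
        }.awaitAll()
    }
```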
Add the following dependencies to your `pom.xml`:
```xml
<dependencies>
    <!-- LangChain4j Kotlin Extensions -->
    <dependency>
        <groupId>me.kpavlov.langchain4j.kotlin</groupId>
        <artifactId>langchain4j-kotlin</artifactId>
        <version>[LATEST_VERSION]</version>
    </dependency>

    <!-- Extra Dependencies -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j</artifactId>
        <version>0.36.2</version>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-open-ai</artifactId>
        <version>0.36.2</version>
    </dependency>
</dependencies>
```
Add the following to your `build.gradle.kts`:
```kotlin
dependencies {
    implementation("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:$LATEST_VERSION")
    implementation("dev.langchain4j:langchain4j-open-ai:0.36.2")
}
```
The extension converts a `ChatLanguageModel` call into a Kotlin suspending function:
```kotlin
val model: ChatLanguageModel = OpenAiChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

// Synchronous call
val response = model.chat(
    ChatRequest.builder()
        .messages(
            listOf(
                SystemMessage.from("You are a helpful assistant"),
                UserMessage.from("Hello!"),
            ),
        )
        .build(),
)
println(response.aiMessage().text())

// Using coroutines
CoroutineScope(Dispatchers.IO).launch {
    val response = model.chatAsync(
        ChatRequest.builder()
            .messages(
                listOf(
                    SystemMessage.from("You are a helpful assistant"),
                    UserMessage.from("Hello!"),
                ),
            ),
    )
    println(response.aiMessage().text())
}
```
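Since `chatAsync` is a suspending function, it can also be called from any existing suspending context without launching a new scope. A minimal sketch (the `askModel` helper is illustrative, not part of the library):

```kotlin
// Hypothetical helper wrapping the chatAsync call shown above.
suspend fun askModel(model: ChatLanguageModel, question: String): String {
    val response = model.chatAsync(
        ChatRequest.builder()
            .messages(listOf(UserMessage.from(question))),
    )
    return response.aiMessage().text()
}

// e.g. from a main function or a test:
fun main() = runBlocking {
    println(askModel(model, "Hello!"))
}
```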
The extension converts a `StreamingChatLanguageModel` response into a Kotlin asynchronous `Flow`:
```kotlin
val model: StreamingChatLanguageModel = OpenAiStreamingChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

// messages: List<ChatMessage>, as in the example above
model.generateFlow(messages).collect { reply ->
    when (reply) {
        is Completion ->
            println("Final response: ${reply.response.content().text()}")

        is Token -> println("Received token: ${reply.token}")

        else -> throw IllegalArgumentException("Unsupported event: $reply")
    }
}
```
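Because the result is a regular Kotlin `Flow`, the standard `kotlinx.coroutines.flow` operators apply. For example, a minimal sketch (to be run in a suspending context) that accumulates only the streamed tokens into the full reply text:

```kotlin
// Concatenate streamed tokens into a single string using Flow operators.
val fullText = model.generateFlow(messages)
    .filterIsInstance<Token>()
    .fold(StringBuilder()) { acc, reply -> acc.append(reply.token) }
    .toString()
println(fullText)
```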
The Kotlin Notebook environment allows you to:
- Experiment with LLM features in real-time
- Test different configurations and scenarios
- Visualize results directly in the notebook
- Share reproducible examples with others
You can easily get started with LangChain4j-Kotlin notebooks:
```kotlin
%useLatestDescriptors
%use coroutines

@file:DependsOn("dev.langchain4j:langchain4j:0.36.2")
@file:DependsOn("dev.langchain4j:langchain4j-open-ai:0.36.2")

// add maven dependency
@file:DependsOn("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:0.1.1")
// ... or add project's target/classes to classpath
// @file:DependsOn("../target/classes")

import dev.langchain4j.data.message.*
import dev.langchain4j.model.openai.OpenAiChatModel
import me.kpavlov.langchain4j.kotlin.model.chat.generateAsync

val model = OpenAiChatModel.builder()
    .apiKey("demo")
    .modelName("gpt-4o-mini")
    .temperature(0.0)
    .maxTokens(1024)
    .build()

// Invoke the suspending function from a blocking context
runBlocking {
    val result = model.generateAsync(
        listOf(
            SystemMessage.from("You are a helpful assistant"),
            UserMessage.from("Make a haiku about Kotlin, LangChain4j and LLM"),
        )
    )
    println(result.content().text())
}
```
Try this Kotlin Notebook yourself:

- Create a `.env` file in the root directory and add your API keys:

```dotenv
OPENAI_API_KEY=sk-xxxxx
```
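Code and tests can then pick up the key, for example via the process environment (a minimal sketch; the project may instead load the `.env` file through a dotenv helper):

```kotlin
// Read the key from the environment instead of hard-coding it,
// assuming the variable has been exported to the process environment.
val apiKey = System.getenv("OPENAI_API_KEY") ?: error("OPENAI_API_KEY is not set")
```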
Using Maven:

```shell
mvn clean verify
```

Using Make:

```shell
make build
```
Contributions are welcome! Please feel free to submit a Pull Request.
Before submitting your changes, run:

```shell
make lint
```
- LangChain4j - The core library this project enhances
- Training data from Project Gutenberg: