This repo contains libraries and examples of how to use the LLM canister on the IC.

The `ic-llm` crate can be used to deploy Rust agents on the Internet Computer with a few lines of code.
### Example: Prompting

```rust
use ic_llm::Model;

ic_llm::prompt(Model::Llama3_1_8B, "What's the speed of light?").await;
```
### Example: Chatting with multiple messages

```rust
use ic_llm::{ChatMessage, Model, Role};

ic_llm::chat(
    Model::Llama3_1_8B,
    vec![
        ChatMessage {
            role: Role::System,
            content: "You are a helpful assistant".to_string(),
        },
        ChatMessage {
            role: Role::User,
            content: "How big is the sun?".to_string(),
        },
    ],
)
.await;
```
Similarly, the `mo:llm` package can be used to deploy Motoko agents on the Internet Computer with a few lines of code.
### Example: Prompting

```motoko
import LLM "mo:llm";

await LLM.prompt(#Llama3_1_8B, "What's the speed of light?")
```
### Example: Chatting with multiple messages

```motoko
import LLM "mo:llm";

await LLM.chat(#Llama3_1_8B, [
  {
    role = #system_;
    content = "You are a helpful assistant.";
  },
  {
    role = #user;
    content = "How big is the sun?";
  },
])
```
This repo also includes a simple agent that relays whatever messages the user sends to the underlying models, without any modification. It's meant to serve as a boilerplate project for those who want to get started building agents on the IC.
A Rust and a Motoko implementation are provided in the `examples` folder.
Additionally, a live deployment of this agent can be accessed here.
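To make the relay idea concrete, the agent's core can be sketched in a few lines of Rust on top of the `ic-llm` API shown earlier. This is a minimal illustration under stated assumptions, not the exact code in the `examples` folder: the endpoint name `chat`, its signature, and the use of `ic-cdk`'s `#[update]` macro are all assumptions made for this sketch.

```rust
// Sketch of a relay agent canister. Assumes `ic-cdk` and `ic-llm`
// as dependencies; the endpoint name and signature are illustrative.
use ic_cdk::update;
use ic_llm::{ChatMessage, Model};

#[update]
async fn chat(messages: Vec<ChatMessage>) -> String {
    // Forward the caller's messages to the LLM canister unmodified
    // and return the model's reply as-is.
    ic_llm::chat(Model::Llama3_1_8B, messages).await
}
```

Because the agent adds no prompt engineering or message filtering of its own, swapping in a different model or injecting a system message is a one-line change, which is what makes it a useful boilerplate.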