Building a Rust CLI to Chat with Llama 3.2

Build a CLI that chats with advanced large language models like Llama 3.2, and see how Rust and the Ollama library make it easy.

As a developer learning Rust, I wanted to build a practical project to apply my new skills. With the rise of large language models like Meta’s Llama 3.2, I thought it would be fun to create a Rust command line interface (CLI) to interact with the model.

In just a few minutes, I was able to assemble a usable CLI using the Ollama Rust library. This CLI is called “Jarvis” and allows you to chat with Llama 3.2 and run a few basic commands, such as checking the time and date or listing directory contents.

In this article, I will introduce the key components of the Jarvis CLI and explain how to interact with Llama 3.2 or other large language models using Rust. Finally, you will see how Rust’s performance and expressiveness make it an excellent choice for AI applications.

Jarvis CLI Structure

The main components of Jarvis CLI include:

1. JarvisConfig Structure

  • Defines the available commands

  • Methods for validating commands and printing help text

2. Command Handling Logic in main()

  • Parses command line arguments

  • Calls the appropriate function based on the command

3. Functionality for Each Command

  • time – get current time

  • date – get today’s date

  • hello – print a customizable greeting

  • ls – list directory contents

  • chat – interact with Llama 3.2 using the Ollama library

Here is a streamlined version of the code:

use std::env;

// These imports assume the ollama-rs crate as the "Ollama Rust library".
use ollama_rs::generation::completion::request::GenerationRequest;
use ollama_rs::Ollama;

struct JarvisConfig {
    commands: Vec<&'static str>,
}

impl JarvisConfig {
    fn new() -> Self {...}

    fn print_help(&self) {...}

    fn is_valid_command(&self, command: &str) -> bool {...}
}

#[tokio::main]
async fn main() {
    let config = JarvisConfig::new();
    let args: Vec<String> = env::args().collect();

    match args[1].as_str() {
        "time" => {...}
        "date" => {...}
        "hello" => {...}
        "ls" => {...}
        "chat" => {
            let ollama = Ollama::default();
            match ollama
                .generate(GenerationRequest::new(
                    "llama3.2".to_string(),
                    args[2].to_string(),
                ))
                .await
            {
                Ok(res) => println!("{}", res.response),
                Err(e) => println!("Failed to generate response: {}", e),
            }
        }
        _ => {
            println!("Unknown command: {}", args[1]);
            config.print_help();
        }
    }
}
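
The command bodies are elided in the streamlined listing above. As an illustration, minimal sketches of the time and ls handlers might look like this (the helper names are hypothetical, and the time handler assumes the chrono crate as a dependency):

use chrono::Local; // assumed dependency for time formatting
use std::fs;

// "time": print the current local time as HH:mm:ss.
fn print_time() {
    println!(
        "Current time in format (HH:mm:ss): {}",
        Local::now().format("%H:%M:%S")
    );
}

// "ls": print each directory entry as "path: file|directory".
fn list_dir(path: &str) {
    for entry in fs::read_dir(path).expect("failed to read directory") {
        let entry = entry.expect("failed to read entry");
        let kind = if entry.path().is_dir() { "directory" } else { "file" };
        println!("{}: {}", entry.path().display(), kind);
    }
}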

The heart of the Jarvis CLI is the “chat” command, which sends prompts to Llama 3.2 through the Ollama Rust library.

First, add the Ollama dependency to Cargo.toml; after that, the library is very simple to use.
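
For reference, a minimal dependencies section might look like this. This sketch assumes the library is the ollama-rs crate and uses tokio as the async runtime; the version numbers are illustrative:

[dependencies]
ollama-rs = "0.1"
tokio = { version = "1", features = ["full"] }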

Create an Ollama instance with default settings:

let ollama = Ollama::default();

Next, build a generation request with the model name and the user’s prompt:

GenerationRequest::new(
    "llama3.2".to_string(),
    args[2].to_string(),
)

Finally, send the request and print the response, handling any error:

match ollama.generate(...).await {
    Ok(res) => println!("{}", res.response),
    Err(e) => println!("Failed to generate response: {}", e),
}

That’s it! With just a few lines of code, we can send prompts to Llama 3.2 and receive generated responses.

Example Usage

Here are some example interactions with the Jarvis CLI:

$ jarvis hello
Hello, World!

$ jarvis hello Alice
Hello, Alice!

$ jarvis time
Current time in format (HH:mm:ss): 14:30:15

$ jarvis ls /documents
/documents/report.pdf: file
/documents/images: directory

$ jarvis chat "What is the capital of France?"
Paris is the capital and most populous city of France.
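
To try these while developing, you can run the binary through Cargo, passing arguments after --, for example:

$ cargo run -- chat "What is the capital of France?"

Installing it with cargo install --path . makes the jarvis command available directly (assuming the package’s binary target is named jarvis).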

While Python remains the preferred choice for AI/ML, Rust is an attractive alternative when top performance, concurrency, or safety is required. It’s exciting to see Rust being adopted more in this field.

Conclusion

In this article, we learned how to build a Rust CLI that interacts with Llama 3.2 using the Ollama library. With basic Rust knowledge, we can assemble a useful AI-driven tool in just a few minutes. Rust’s unique advantages make it very suitable for AI/ML system development. As the ecosystem matures, I expect we will see more adoption.

I encourage you to try using Rust in your next AI project, whether it’s a simple CLI like this or a more complex system.

Its performance, safety, and expressiveness may surprise you.

Author: The Fish Listening to Music
