Friday, December 19, 2025

Building an LLM Chatbot in Java with Ollama: A Complete Developer's Guide




Introduction and Overview


Large Language Models (LLMs) have revolutionized the way we interact with artificial intelligence, enabling natural language conversations and sophisticated text generation capabilities. This comprehensive guide will walk you through creating a fully functional LLM chatbot in Java using Ollama, a local LLM runtime that allows you to run models on your own machine without relying on external APIs.


An LLM chatbot is essentially a conversational interface that leverages the power of large language models to understand user input and generate contextually appropriate responses. Unlike traditional rule-based chatbots that rely on predefined responses, LLM chatbots can engage in more natural, flexible conversations by understanding context and generating responses dynamically.


Ollama serves as our local LLM runtime, providing a simple way to run various open-source language models on your own hardware. This approach offers several advantages: privacy, since your prompts never leave your machine; offline capability; no network round-trips to a remote API (though generation speed now depends on your hardware); and cost control, since you are not paying per call to an external service.


Understanding LLM Chatbot Architecture


Before diving into the implementation, it is crucial to understand the fundamental architecture of an LLM chatbot. The system consists of several key components that work together to create a seamless conversational experience.


The core architecture includes a user interface layer that handles input and output, a conversation management system that maintains context and history, an LLM integration layer that communicates with the language model, and a response processing system that formats and delivers responses back to the user.


The conversation flow typically follows this pattern: the user provides input through the interface, the system processes this input and adds it to the conversation context, the enhanced prompt is sent to the LLM through Ollama, the model generates a response, and finally the response is processed and displayed to the user while updating the conversation history.
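The loop described above can be sketched in a few lines before we build the real thing. This is a minimal, self-contained sketch; the `generate` function stands in for the Ollama call that the rest of this guide implements, and all names here are illustrative, not part of the final code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

/**
 * Minimal sketch of the conversation flow: take user input, add it to the
 * context, ask the model for a reply, and record the reply in the history.
 */
public class ChatLoopSketch {

    public static String chatTurn(List<String> history,
                                  String userInput,
                                  Function<List<String>, String> generate) {
        history.add("user: " + userInput);       // add input to the context
        String reply = generate.apply(history);  // send the context to the "model"
        history.add("assistant: " + reply);      // update the conversation history
        return reply;
    }

    public static void main(String[] args) {
        List<String> history = new ArrayList<>();
        // A canned stand-in for the model that echoes the last user message.
        Function<List<String>, String> stub =
            h -> "You said: " + h.get(h.size() - 1).substring("user: ".length());

        System.out.println(chatTurn(history, "Hello", stub));
        System.out.println(history.size()); // both turns are now in the history
    }
}
```

The real chatbot follows exactly this shape, with the stub replaced by an HTTP call to Ollama.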


Setting Up the Development Environment


To begin building our LLM chatbot, we need to establish a proper development environment with all necessary dependencies and tools. This setup phase is critical for ensuring smooth development and deployment of our application.


First, ensure you have Java Development Kit (JDK) 11 or higher installed on your system. We will be using Maven as our build tool, so make sure Maven is properly configured. Additionally, you will need to install Ollama on your local machine to serve as the LLM runtime.


Download and install Ollama from the official website. Once installed, you can pull a language model such as Llama 2 or Mistral by running the appropriate Ollama commands in your terminal. For this tutorial, we will use the Llama 2 model, which you can install by running "ollama pull llama2" in your command line.
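Once installed, you can confirm from the terminal that the model downloaded and that the Ollama server is reachable. The commands below assume a default installation, where the server listens on localhost port 11434:

```shell
# Pull the model used in this tutorial
ollama pull llama2

# List locally available models to confirm the download
ollama list

# A quick request to the tags endpoint confirms the server is running
curl http://localhost:11434/api/tags
```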


Project Structure and Dependencies


Our Java project will follow a clean architecture pattern with clear separation of concerns. The project structure will include packages for models, services, controllers, and utilities, ensuring maintainable and scalable code.


Create a new Maven project and configure the following project structure:



src/

  main/

    java/

      com/

        fortytwo/

          llmchatbot/

            model/

            service/

            controller/

            util/

            Application.java

    resources/

      application.properties

pom.xml



The Maven configuration file (pom.xml) needs to include dependencies for HTTP client functionality, JSON processing, and logging. Here is the complete pom.xml configuration:



<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0"

         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 

         http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    

    <groupId>com.fortytwo</groupId>

    <artifactId>llm-chatbot</artifactId>

    <version>1.0.0</version>

    <packaging>jar</packaging>

    

    <name>LLM Chatbot</name>

    <description>A Java-based LLM chatbot using Ollama</description>

    

    <properties>

        <maven.compiler.source>11</maven.compiler.source>

        <maven.compiler.target>11</maven.compiler.target>

        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>

    </properties>

    

    <dependencies>

        <dependency>

            <groupId>com.fasterxml.jackson.core</groupId>

            <artifactId>jackson-databind</artifactId>

            <version>2.15.2</version>

        </dependency>

        

        <dependency>

            <groupId>org.apache.httpcomponents.client5</groupId>

            <artifactId>httpclient5</artifactId>

            <version>5.2.1</version>

        </dependency>

        

        <dependency>

            <groupId>org.slf4j</groupId>

            <artifactId>slf4j-api</artifactId>

            <version>2.0.7</version>

        </dependency>

        

        <dependency>

            <groupId>ch.qos.logback</groupId>

            <artifactId>logback-classic</artifactId>

            <version>1.4.8</version>

        </dependency>

        

        <dependency>

            <groupId>junit</groupId>

            <artifactId>junit</artifactId>

            <version>4.13.2</version>

            <scope>test</scope>

        </dependency>

    </dependencies>

    

    <build>

        <plugins>

            <plugin>

                <groupId>org.apache.maven.plugins</groupId>

                <artifactId>maven-compiler-plugin</artifactId>

                <version>3.11.0</version>

                <configuration>

                    <source>11</source>

                    <target>11</target>

                </configuration>

            </plugin>

            

            <plugin>

                <groupId>org.codehaus.mojo</groupId>

                <artifactId>exec-maven-plugin</artifactId>

                <version>3.1.0</version>

                <configuration>

                    <mainClass>com.fortytwo.llmchatbot.Application</mainClass>

                </configuration>

            </plugin>

        </plugins>

    </build>

</project>


This configuration includes Jackson for JSON processing, Apache HTTP Client for making requests to Ollama, SLF4J and Logback for logging, and JUnit for testing. The exec plugin allows us to easily run our application from Maven.
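Because the exec plugin already knows the main class, the finished application can be compiled and started with a single Maven command (this assumes the Application class described later in this guide exists):

```shell
# Compile the project and run the main class configured in pom.xml
mvn compile exec:java
```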


Creating Data Models


The foundation of our chatbot lies in well-designed data models that represent the various entities in our system. These models will handle conversation messages, LLM requests and responses, and conversation context.


First, let's create a Message model that represents individual messages in our conversation:



package com.fortytwo.llmchatbot.model;


import java.time.LocalDateTime;

import java.util.Objects;


/**

 * Represents a single message in a conversation.

 * This class encapsulates the content, sender information, and timestamp

 * of each message exchanged between the user and the chatbot.

 */

public class Message {

    

    /**

     * Enumeration defining the possible roles/senders of a message

     */

    public enum Role {

        USER("user"),

        ASSISTANT("assistant"),

        SYSTEM("system");

        

        private final String value;

        

        Role(String value) {

            this.value = value;

        }

        

        public String getValue() {

            return value;

        }

        

        public static Role fromString(String value) {

            for (Role role : Role.values()) {

                if (role.value.equalsIgnoreCase(value)) {

                    return role;

                }

            }

            throw new IllegalArgumentException("Unknown role: " + value);

        }

    }

    

    private String content;

    private Role role;

    private LocalDateTime timestamp;

    

    /**

     * Default constructor for JSON deserialization

     */

    public Message() {

        this.timestamp = LocalDateTime.now();

    }

    

    /**

     * Constructor for creating a new message

     * @param content The text content of the message

     * @param role The role of the message sender (USER, ASSISTANT, SYSTEM)

     */

    public Message(String content, Role role) {

        this.content = content;

        this.role = role;

        this.timestamp = LocalDateTime.now();

    }

    

    /**

     * Gets the content of the message

     * @return The message content as a string

     */

    public String getContent() {

        return content;

    }

    

    /**

     * Sets the content of the message

     * @param content The new content for the message

     */

    public void setContent(String content) {

        this.content = content;

    }

    

    /**

     * Gets the role of the message sender

     * @return The role enum value

     */

    public Role getRole() {

        return role;

    }

    

    /**

     * Sets the role of the message sender

     * @param role The role enum value

     */

    public void setRole(Role role) {

        this.role = role;

    }

    

    /**

     * Gets the timestamp when the message was created

     * @return LocalDateTime representing the creation time

     */

    public LocalDateTime getTimestamp() {

        return timestamp;

    }

    

    /**

     * Sets the timestamp of the message

     * @param timestamp The new timestamp

     */

    public void setTimestamp(LocalDateTime timestamp) {

        this.timestamp = timestamp;

    }

    

    @Override

    public boolean equals(Object o) {

        if (this == o) return true;

        if (o == null || getClass() != o.getClass()) return false;

        Message message = (Message) o;

        return Objects.equals(content, message.content) &&

               role == message.role &&

               Objects.equals(timestamp, message.timestamp);

    }

    

    @Override

    public int hashCode() {

        return Objects.hash(content, role, timestamp);

    }

    

    @Override

    public String toString() {

        return String.format("Message{role=%s, content='%s', timestamp=%s}", 

                           role, content, timestamp);

    }

}



Next, we need models to handle communication with the Ollama API. Let's create the OllamaRequest model:



package com.fortytwo.llmchatbot.model;


import com.fasterxml.jackson.annotation.JsonProperty;

import java.util.List;

import java.util.Objects;


/**

 * Represents a request to the Ollama API for chat completion.

 * This class encapsulates all the parameters needed to make a request

 * to the Ollama service for generating responses.

 */

public class OllamaRequest {

    

    @JsonProperty("model")

    private String model;

    

    @JsonProperty("messages")

    private List<OllamaMessage> messages;

    

    @JsonProperty("stream")

    private boolean stream;

    

    @JsonProperty("options")

    private OllamaOptions options;

    

    /**

     * Default constructor

     */

    public OllamaRequest() {

        this.stream = false; // Default to non-streaming mode

    }

    

    /**

     * Constructor with essential parameters

     * @param model The name of the model to use (e.g., "llama2")

     * @param messages List of messages forming the conversation context

     */

    public OllamaRequest(String model, List<OllamaMessage> messages) {

        this.model = model;

        this.messages = messages;

        this.stream = false;

    }

    

    /**

     * Gets the model name

     * @return The model name string

     */

    public String getModel() {

        return model;

    }

    

    /**

     * Sets the model name

     * @param model The model name to use

     */

    public void setModel(String model) {

        this.model = model;

    }

    

    /**

     * Gets the list of messages

     * @return List of OllamaMessage objects

     */

    public List<OllamaMessage> getMessages() {

        return messages;

    }

    

    /**

     * Sets the list of messages

     * @param messages List of OllamaMessage objects

     */

    public void setMessages(List<OllamaMessage> messages) {

        this.messages = messages;

    }

    

    /**

     * Checks if streaming is enabled

     * @return true if streaming is enabled, false otherwise

     */

    public boolean isStream() {

        return stream;

    }

    

    /**

     * Sets the streaming mode

     * @param stream true to enable streaming, false to disable

     */

    public void setStream(boolean stream) {

        this.stream = stream;

    }

    

    /**

     * Gets the options for the request

     * @return OllamaOptions object

     */

    public OllamaOptions getOptions() {

        return options;

    }

    

    /**

     * Sets the options for the request

     * @param options OllamaOptions object containing model parameters

     */

    public void setOptions(OllamaOptions options) {

        this.options = options;

    }

    

    @Override

    public boolean equals(Object o) {

        if (this == o) return true;

        if (o == null || getClass() != o.getClass()) return false;

        OllamaRequest that = (OllamaRequest) o;

        return stream == that.stream &&

               Objects.equals(model, that.model) &&

               Objects.equals(messages, that.messages) &&

               Objects.equals(options, that.options);

    }

    

    @Override

    public int hashCode() {

        return Objects.hash(model, messages, stream, options);

    }

    

    @Override

    public String toString() {

        return String.format("OllamaRequest{model='%s', messages=%s, stream=%s, options=%s}", 

                           model, messages, stream, options);

    }

}



We also need an OllamaMessage model that represents messages in the format expected by Ollama:



package com.fortytwo.llmchatbot.model;


import com.fasterxml.jackson.annotation.JsonProperty;

import java.util.Objects;


/**

 * Represents a message in the format expected by the Ollama API.

 * This class is used for serialization when sending requests to Ollama.

 */

public class OllamaMessage {

    

    @JsonProperty("role")

    private String role;

    

    @JsonProperty("content")

    private String content;

    

    /**

     * Default constructor for JSON deserialization

     */

    public OllamaMessage() {

    }

    

    /**

     * Constructor for creating an Ollama message

     * @param role The role of the message sender (user, assistant, system)

     * @param content The content of the message

     */

    public OllamaMessage(String role, String content) {

        this.role = role;

        this.content = content;

    }

    

    /**

     * Gets the role of the message sender

     * @return The role as a string

     */

    public String getRole() {

        return role;

    }

    

    /**

     * Sets the role of the message sender

     * @param role The role string

     */

    public void setRole(String role) {

        this.role = role;

    }

    

    /**

     * Gets the content of the message

     * @return The message content

     */

    public String getContent() {

        return content;

    }

    

    /**

     * Sets the content of the message

     * @param content The message content

     */

    public void setContent(String content) {

        this.content = content;

    }

    

    @Override

    public boolean equals(Object o) {

        if (this == o) return true;

        if (o == null || getClass() != o.getClass()) return false;

        OllamaMessage that = (OllamaMessage) o;

        return Objects.equals(role, that.role) &&

               Objects.equals(content, that.content);

    }

    

    @Override

    public int hashCode() {

        return Objects.hash(role, content);

    }

    

    @Override

    public String toString() {

        return String.format("OllamaMessage{role='%s', content='%s'}", role, content);

    }

}
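Once Jackson serializes an OllamaRequest containing these message objects, the JSON body sent to Ollama's /api/chat endpoint has the shape below. The sketch builds the same structure by hand with plain strings purely so you can see the wire format; the model name and message contents are illustrative.

```java
/**
 * Hand-built JSON matching what Jackson produces from an OllamaRequest
 * holding OllamaMessage objects. Shown only to illustrate the wire format.
 */
public class RequestShapeSketch {
    public static void main(String[] args) {
        String body = "{"
            + "\"model\":\"llama2\","
            + "\"messages\":["
            + "{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},"
            + "{\"role\":\"user\",\"content\":\"Hello\"}"
            + "],"
            + "\"stream\":false"
            + "}";
        System.out.println(body);
    }
}
```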



Implementing the Ollama Client Service


The Ollama client service is the core component that handles communication with the local Ollama instance. This service encapsulates all the HTTP communication logic and provides a clean interface for the rest of our application to interact with the LLM.


The service needs to handle HTTP requests, JSON serialization and deserialization, error handling, and connection management. Let's create a comprehensive OllamaService class:



package com.fortytwo.llmchatbot.service;


import com.fasterxml.jackson.databind.ObjectMapper;

import com.fortytwo.llmchatbot.model.*;

import org.apache.hc.client5.http.classic.methods.HttpPost;

import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;

import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;

import org.apache.hc.client5.http.impl.classic.HttpClients;

import org.apache.hc.core5.http.ContentType;

import org.apache.hc.core5.http.io.entity.EntityUtils;

import org.apache.hc.core5.http.io.entity.StringEntity;

import org.slf4j.Logger;

import org.slf4j.LoggerFactory;


import java.io.IOException;

import java.util.ArrayList;

import java.util.List;


/**

 * Service class for communicating with the Ollama API.

 * This class handles all HTTP communication with the local Ollama instance,

 * including request formatting, response parsing, and error handling.

 */

public class OllamaService {

    

    private static final Logger logger = LoggerFactory.getLogger(OllamaService.class);

    

    private final String baseUrl;

    private final String model;

    private final CloseableHttpClient httpClient;

    private final ObjectMapper objectMapper;

    

    /**

     * Constructor with default configuration

     * Uses default Ollama URL (http://localhost:11434) and llama2 model

     */

    public OllamaService() {

        this("http://localhost:11434", "llama2");

    }

    

    /**

     * Constructor with custom configuration

     * @param baseUrl The base URL of the Ollama instance

     * @param model The name of the model to use

     */

    public OllamaService(String baseUrl, String model) {

        this.baseUrl = baseUrl.endsWith("/") ? baseUrl.substring(0, baseUrl.length() - 1) : baseUrl;

        this.model = model;

        this.httpClient = HttpClients.createDefault();

        this.objectMapper = new ObjectMapper();

        

        logger.info("Initialized OllamaService with URL: {} and model: {}", this.baseUrl, this.model);

    }

    

    /**

     * Sends a chat completion request to Ollama

     * @param messages List of messages forming the conversation context

     * @return The response from the LLM

     * @throws OllamaServiceException if there's an error communicating with Ollama

     */

    public String generateResponse(List<Message> messages) throws OllamaServiceException {

        try {

            // Convert internal Message objects to OllamaMessage format

            List<OllamaMessage> ollamaMessages = convertToOllamaMessages(messages);

            

            // Create the request object

            OllamaRequest request = new OllamaRequest(model, ollamaMessages);

            

            // Serialize the request to JSON

            String requestJson = objectMapper.writeValueAsString(request);

            logger.debug("Sending request to Ollama: {}", requestJson);

            

            // Create HTTP POST request

            HttpPost httpPost = new HttpPost(baseUrl + "/api/chat");

            httpPost.setEntity(new StringEntity(requestJson, ContentType.APPLICATION_JSON));

            httpPost.setHeader("Accept", "application/json");

            

            // Execute the request

            try (CloseableHttpResponse response = httpClient.execute(httpPost)) {

                int statusCode = response.getCode();

                String responseBody = EntityUtils.toString(response.getEntity());

                

                logger.debug("Received response from Ollama. Status: {}, Body: {}", statusCode, responseBody);

                

                if (statusCode == 200) {

                    // Parse the response

                    OllamaResponse ollamaResponse = objectMapper.readValue(responseBody, OllamaResponse.class);

                    return extractMessageContent(ollamaResponse);

                } else {

                    throw new OllamaServiceException(

                        String.format("Ollama API returned error status %d: %s", statusCode, responseBody)

                    );

                }

            }

            

        } catch (IOException e) {

            logger.error("Error communicating with Ollama", e);

            throw new OllamaServiceException("Failed to communicate with Ollama: " + e.getMessage(), e);

        } catch (Exception e) {

            logger.error("Unexpected error in generateResponse", e);

            throw new OllamaServiceException("Unexpected error: " + e.getMessage(), e);

        }

    }

    

    /**

     * Converts internal Message objects to OllamaMessage format

     * @param messages List of internal Message objects

     * @return List of OllamaMessage objects

     */

    private List<OllamaMessage> convertToOllamaMessages(List<Message> messages) {

        List<OllamaMessage> ollamaMessages = new ArrayList<>();

        

        for (Message message : messages) {

            OllamaMessage ollamaMessage = new OllamaMessage(

                message.getRole().getValue(),

                message.getContent()

            );

            ollamaMessages.add(ollamaMessage);

        }

        

        return ollamaMessages;

    }

    

    /**

     * Extracts the message content from the Ollama response

     * @param response The OllamaResponse object

     * @return The content of the assistant's message

     * @throws OllamaServiceException if the response format is unexpected

     */

    private String extractMessageContent(OllamaResponse response) throws OllamaServiceException {

        if (response == null) {

            throw new OllamaServiceException("Received null response from Ollama");

        }

        

        OllamaMessage message = response.getMessage();

        if (message == null) {

            throw new OllamaServiceException("No message found in Ollama response");

        }

        

        String content = message.getContent();

        if (content == null || content.trim().isEmpty()) {

            throw new OllamaServiceException("Empty content received from Ollama");

        }

        

        return content.trim();

    }

    

    /**

     * Tests the connection to Ollama by making a simple request

     * @return true if the connection is successful, false otherwise

     */

    public boolean testConnection() {

        try {

            List<Message> testMessages = new ArrayList<>();

            testMessages.add(new Message("Hello", Message.Role.USER));

            

            generateResponse(testMessages);

            logger.info("Connection test to Ollama successful");

            return true;

            

        } catch (Exception e) {

            logger.warn("Connection test to Ollama failed: {}", e.getMessage());

            return false;

        }

    }

    

    /**

     * Gets the current model name

     * @return The model name

     */

    public String getModel() {

        return model;

    }

    

    /**

     * Gets the base URL

     * @return The base URL

     */

    public String getBaseUrl() {

        return baseUrl;

    }

    

    /**

     * Closes the HTTP client and releases resources

     */

    public void close() {

        try {

            httpClient.close();

            logger.info("OllamaService closed successfully");

        } catch (IOException e) {

            logger.warn("Error closing HTTP client", e);

        }

    }

}
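As an aside, since JDK 11 the standard library ships its own HTTP client, so the Apache dependency is a choice rather than a requirement. The sketch below builds the same POST request to /api/chat using java.net.http; it only constructs the request, since actually sending it requires a running Ollama instance.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

/**
 * Sketch of the same /api/chat request using the JDK's built-in HTTP client
 * (available since Java 11) instead of Apache HttpClient.
 */
public class JdkHttpSketch {
    public static void main(String[] args) {
        String json = "{\"model\":\"llama2\",\"messages\":"
            + "[{\"role\":\"user\",\"content\":\"Hello\"}],\"stream\":false}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:11434/api/chat"))
            .timeout(Duration.ofSeconds(60))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();

        System.out.println(request.method() + " " + request.uri());
        // Sending it would look like:
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```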



We also need to create the OllamaResponse model and supporting classes:



package com.fortytwo.llmchatbot.model;


import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

import com.fasterxml.jackson.annotation.JsonProperty;

import java.util.Objects;


/**

 * Represents a response from the Ollama API.

 * This class encapsulates the response structure returned by Ollama

 * after processing a chat completion request.

 * Unknown fields (for example, done_reason in newer Ollama versions) are

 * ignored so that API additions do not break deserialization.

 */

@JsonIgnoreProperties(ignoreUnknown = true)

public class OllamaResponse {

    

    @JsonProperty("model")

    private String model;

    

    @JsonProperty("created_at")

    private String createdAt;

    

    @JsonProperty("message")

    private OllamaMessage message;

    

    @JsonProperty("done")

    private boolean done;

    

    @JsonProperty("total_duration")

    private Long totalDuration;

    

    @JsonProperty("load_duration")

    private Long loadDuration;

    

    @JsonProperty("prompt_eval_count")

    private Integer promptEvalCount;

    

    @JsonProperty("prompt_eval_duration")

    private Long promptEvalDuration;

    

    @JsonProperty("eval_count")

    private Integer evalCount;

    

    @JsonProperty("eval_duration")

    private Long evalDuration;

    

    /**

     * Default constructor for JSON deserialization

     */

    public OllamaResponse() {

    }

    

    /**

     * Gets the model name used for the response

     * @return The model name

     */

    public String getModel() {

        return model;

    }

    

    /**

     * Sets the model name

     * @param model The model name

     */

    public void setModel(String model) {

        this.model = model;

    }

    

    /**

     * Gets the creation timestamp

     * @return The creation timestamp as a string

     */

    public String getCreatedAt() {

        return createdAt;

    }

    

    /**

     * Sets the creation timestamp

     * @param createdAt The creation timestamp

     */

    public void setCreatedAt(String createdAt) {

        this.createdAt = createdAt;

    }

    

    /**

     * Gets the message from the response

     * @return The OllamaMessage object

     */

    public OllamaMessage getMessage() {

        return message;

    }

    

    /**

     * Sets the message

     * @param message The OllamaMessage object

     */

    public void setMessage(OllamaMessage message) {

        this.message = message;

    }

    

    /**

     * Checks if the response is complete

     * @return true if the response is complete, false otherwise

     */

    public boolean isDone() {

        return done;

    }

    

    /**

     * Sets the done flag

     * @param done true if the response is complete

     */

    public void setDone(boolean done) {

        this.done = done;

    }

    

    /**

     * Gets the total duration of the request

     * @return Total duration in nanoseconds

     */

    public Long getTotalDuration() {

        return totalDuration;

    }

    

    /**

     * Sets the total duration

     * @param totalDuration Total duration in nanoseconds

     */

    public void setTotalDuration(Long totalDuration) {

        this.totalDuration = totalDuration;

    }

    

    /**

     * Gets the model load duration

     * @return Load duration in nanoseconds

     */

    public Long getLoadDuration() {

        return loadDuration;

    }

    

    /**

     * Sets the load duration

     * @param loadDuration Load duration in nanoseconds

     */

    public void setLoadDuration(Long loadDuration) {

        this.loadDuration = loadDuration;

    }

    

    /**

     * Gets the prompt evaluation count

     * @return Number of tokens in the prompt

     */

    public Integer getPromptEvalCount() {

        return promptEvalCount;

    }

    

    /**

     * Sets the prompt evaluation count

     * @param promptEvalCount Number of tokens in the prompt

     */

    public void setPromptEvalCount(Integer promptEvalCount) {

        this.promptEvalCount = promptEvalCount;

    }

    

    /**

     * Gets the prompt evaluation duration

     * @return Prompt evaluation duration in nanoseconds

     */

    public Long getPromptEvalDuration() {

        return promptEvalDuration;

    }

    

    /**

     * Sets the prompt evaluation duration

     * @param promptEvalDuration Prompt evaluation duration in nanoseconds

     */

    public void setPromptEvalDuration(Long promptEvalDuration) {

        this.promptEvalDuration = promptEvalDuration;

    }

    

    /**

     * Gets the evaluation count

     * @return Number of tokens in the response

     */

    public Integer getEvalCount() {

        return evalCount;

    }

    

    /**

     * Sets the evaluation count

     * @param evalCount Number of tokens in the response

     */

    public void setEvalCount(Integer evalCount) {

        this.evalCount = evalCount;

    }

    

    /**

     * Gets the evaluation duration

     * @return Evaluation duration in nanoseconds

     */

    public Long getEvalDuration() {

        return evalDuration;

    }

    

    /**

     * Sets the evaluation duration

     * @param evalDuration Evaluation duration in nanoseconds

     */

    public void setEvalDuration(Long evalDuration) {

        this.evalDuration = evalDuration;

    }

    

    @Override

    public boolean equals(Object o) {

        if (this == o) return true;

        if (o == null || getClass() != o.getClass()) return false;

        OllamaResponse that = (OllamaResponse) o;

        return done == that.done &&

               Objects.equals(model, that.model) &&

               Objects.equals(createdAt, that.createdAt) &&

               Objects.equals(message, that.message) &&

               Objects.equals(totalDuration, that.totalDuration) &&

               Objects.equals(loadDuration, that.loadDuration) &&

               Objects.equals(promptEvalCount, that.promptEvalCount) &&

               Objects.equals(promptEvalDuration, that.promptEvalDuration) &&

               Objects.equals(evalCount, that.evalCount) &&

               Objects.equals(evalDuration, that.evalDuration);

    }

    

    @Override

    public int hashCode() {

        return Objects.hash(model, createdAt, message, done, totalDuration, 

                          loadDuration, promptEvalCount, promptEvalDuration, 

                          evalCount, evalDuration);

    }

    

    @Override

    public String toString() {

        return String.format("OllamaResponse{model='%s', done=%s, message=%s}", 

                           model, done, message);

    }

}



We also need the OllamaOptions class and the custom exception:



package com.fortytwo.llmchatbot.model;


import com.fasterxml.jackson.annotation.JsonProperty;

import java.util.Objects;


/**

 * Represents options/parameters for Ollama requests.

 * This class allows fine-tuning of the model's behavior through various parameters.

 */

public class OllamaOptions {

    

    @JsonProperty("temperature")

    private Double temperature;

    

    @JsonProperty("top_p")

    private Double topP;

    

    @JsonProperty("top_k")

    private Integer topK;

    

    @JsonProperty("num_predict")

    private Integer numPredict;

    

    @JsonProperty("repeat_penalty")

    private Double repeatPenalty;

    

    /**

     * Default constructor

     */

    public OllamaOptions() {

    }

    

    /**

     * Gets the temperature parameter

     * @return Temperature value (0.0 to 2.0)

     */

    public Double getTemperature() {

        return temperature;

    }

    

    /**

     * Sets the temperature parameter

     * Controls randomness in the output. Lower values make output more deterministic.

     * @param temperature Temperature value (0.0 to 2.0)

     */

    public void setTemperature(Double temperature) {

        this.temperature = temperature;

    }

    

    /**

     * Gets the top_p parameter

     * @return Top-p value (0.0 to 1.0)

     */

    public Double getTopP() {

        return topP;

    }

    

    /**

     * Sets the top_p parameter

     * Controls diversity via nucleus sampling

     * @param topP Top-p value (0.0 to 1.0)

     */

    public void setTopP(Double topP) {

        this.topP = topP;

    }

    

    /**

     * Gets the top_k parameter

     * @return Top-k value

     */

    public Integer getTopK() {

        return topK;

    }

    

    /**

     * Sets the top_k parameter

     * Limits the number of highest probability tokens to consider

     * @param topK Top-k value

     */

    public void setTopK(Integer topK) {

        this.topK = topK;

    }

    

    /**

     * Gets the num_predict parameter

     * @return Number of tokens to predict

     */

    public Integer getNumPredict() {

        return numPredict;

    }

    

    /**

     * Sets the num_predict parameter

     * Maximum number of tokens to generate

     * @param numPredict Number of tokens to predict

     */

    public void setNumPredict(Integer numPredict) {

        this.numPredict = numPredict;

    }

    

    /**

     * Gets the repeat_penalty parameter

     * @return Repeat penalty value

     */

    public Double getRepeatPenalty() {

        return repeatPenalty;

    }

    

    /**

     * Sets the repeat_penalty parameter

     * Penalizes repetition in the output

     * @param repeatPenalty Repeat penalty value (typically 1.0 to 1.5)

     */

    public void setRepeatPenalty(Double repeatPenalty) {

        this.repeatPenalty = repeatPenalty;

    }

    

    @Override

    public boolean equals(Object o) {

        if (this == o) return true;

        if (o == null || getClass() != o.getClass()) return false;

        OllamaOptions that = (OllamaOptions) o;

        return Objects.equals(temperature, that.temperature) &&

               Objects.equals(topP, that.topP) &&

               Objects.equals(topK, that.topK) &&

               Objects.equals(numPredict, that.numPredict) &&

               Objects.equals(repeatPenalty, that.repeatPenalty);

    }

    

    @Override

    public int hashCode() {

        return Objects.hash(temperature, topP, topK, numPredict, repeatPenalty);

    }

    

    @Override

    public String toString() {

        return String.format("OllamaOptions{temperature=%s, topP=%s, topK=%s, numPredict=%s, repeatPenalty=%s}", 

                           temperature, topP, topK, numPredict, repeatPenalty);

    }

}
```
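These fields map, via their `@JsonProperty` names, onto the `options` object of an Ollama API request. As a rough, hand-built sketch of that wire shape (the values are illustrative, and real code would let Jackson perform the serialization):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class OptionsWireFormat {

    /** Builds the JSON shape by hand; real code would serialize OllamaOptions with Jackson. */
    public static String toJson() {
        // Illustrative values, not Ollama-recommended defaults
        Map<String, Object> options = new LinkedHashMap<>();
        options.put("temperature", 0.7);
        options.put("top_p", 0.9);
        options.put("top_k", 40);
        options.put("num_predict", 256);
        options.put("repeat_penalty", 1.1);

        return options.entrySet().stream()
            .map(e -> "\"" + e.getKey() + "\": " + e.getValue())
            .collect(Collectors.joining(", ", "{", "}"));
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order, so the keys print in the order above
        System.out.println(toJson());
    }
}
```

The snake_case keys (`top_p`, `num_predict`, ...) must match what the Ollama API expects, which is exactly why the class uses `@JsonProperty` rather than relying on the Java field names.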



And the custom exception class:



```
package com.fortytwo.llmchatbot.service;


/**

 * Custom exception for Ollama service related errors.

 * This exception is thrown when there are issues communicating with

 * the Ollama API or processing responses.

 */

public class OllamaServiceException extends Exception {

    

    /**

     * Constructs a new OllamaServiceException with the specified detail message.

     * @param message The detail message

     */

    public OllamaServiceException(String message) {

        super(message);

    }

    

    /**

     * Constructs a new OllamaServiceException with the specified detail message and cause.

     * @param message The detail message

     * @param cause The cause of the exception

     */

    public OllamaServiceException(String message, Throwable cause) {

        super(message, cause);

    }

    

    /**

     * Constructs a new OllamaServiceException with the specified cause.

     * @param cause The cause of the exception

     */

    public OllamaServiceException(Throwable cause) {

        super(cause);

    }

}
```



Building the Conversation Management System


The conversation management system maintains the context and history of the conversation, ensuring the chatbot can follow the flow of a dialogue and produce contextually relevant responses.
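One core job is context-window management: the history cannot grow without bound, or the prompt will exceed the model's token limit. The trimming policy can be sketched in isolation, simplified to plain strings for illustration (the real service below works with `Message` objects and roles):

```java
import java.util.ArrayList;
import java.util.List;

public class ContextWindowSketch {

    /**
     * Keeps the first entry (standing in for the system message) and at most
     * maxMessages entries overall, dropping the oldest non-system entries first.
     * Simplified sketch; the full service locates the system message by role.
     */
    public static List<String> trim(List<String> history, int maxMessages) {
        List<String> trimmed = new ArrayList<>(history);
        // Remove from index 1 so the system entry at index 0 always survives
        while (trimmed.size() > maxMessages && trimmed.size() > 1) {
            trimmed.remove(1);
        }
        return trimmed;
    }

    public static void main(String[] args) {
        List<String> history = new ArrayList<>(List.of(
            "system", "user-1", "assistant-1", "user-2", "assistant-2"));
        // Keeps "system" plus the two most recent messages
        System.out.println(trim(history, 3));
    }
}
```

Dropping the oldest turns first is a deliberate trade-off: recent exchanges carry the most weight for the next response, while the system message must survive because it defines the assistant's behavior.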


The conversation manager needs to handle message storage, context window management (to prevent exceeding token limits), conversation state tracking, and conversation persistence. Let's implement a comprehensive ConversationService:



```
package com.fortytwo.llmchatbot.service;


import com.fortytwo.llmchatbot.model.Message;

import org.slf4j.Logger;

import org.slf4j.LoggerFactory;


import java.util.ArrayList;

import java.util.Collections;

import java.util.List;

import java.util.concurrent.CopyOnWriteArrayList;


/**

 * Service for managing conversation state and history.

 * This class handles the storage and retrieval of conversation messages,

 * manages context windows, and provides conversation utilities.

 */

public class ConversationService {

    

    private static final Logger logger = LoggerFactory.getLogger(ConversationService.class);

    

    // Maximum number of messages to keep in context (to manage token limits)

    private static final int DEFAULT_MAX_CONTEXT_MESSAGES = 20;

    

    // System message that defines the chatbot's behavior

    private static final String DEFAULT_SYSTEM_MESSAGE = 

        "You are a helpful AI assistant. You provide accurate, helpful, and concise responses. " +

        "You maintain a friendly and professional tone in all interactions.";

    

    private final List<Message> conversationHistory;

    private final int maxContextMessages;

    private final String systemMessage;

    

    /**

     * Constructor with default configuration

     */

    public ConversationService() {

        this(DEFAULT_MAX_CONTEXT_MESSAGES, DEFAULT_SYSTEM_MESSAGE);

    }

    

    /**

     * Constructor with custom configuration

     * @param maxContextMessages Maximum number of messages to keep in context

     * @param systemMessage System message that defines the chatbot's behavior

     */

    public ConversationService(int maxContextMessages, String systemMessage) {

        this.maxContextMessages = maxContextMessages;

        this.systemMessage = systemMessage;

        this.conversationHistory = new CopyOnWriteArrayList<>();

        

        // Add the system message as the first message

        if (systemMessage != null && !systemMessage.trim().isEmpty()) {

            addMessage(new Message(systemMessage, Message.Role.SYSTEM));

        }

        

        logger.info("ConversationService initialized with max context messages: {}", maxContextMessages);

    }

    

    /**

     * Adds a message to the conversation history

     * @param message The message to add

     */

    public void addMessage(Message message) {

        if (message == null) {

            logger.warn("Attempted to add null message to conversation");

            return;

        }

        

        conversationHistory.add(message);

        logger.debug("Added message to conversation: {}", message);

        

        // Manage context window size

        manageContextWindow();

    }

    

    /**

     * Adds a user message to the conversation

     * @param content The content of the user message

     */

    public void addUserMessage(String content) {

        if (content == null || content.trim().isEmpty()) {

            logger.warn("Attempted to add empty user message");

            return;

        }

        

        Message userMessage = new Message(content.trim(), Message.Role.USER);

        addMessage(userMessage);

    }

    

    /**

     * Adds an assistant message to the conversation

     * @param content The content of the assistant message

     */

    public void addAssistantMessage(String content) {

        if (content == null || content.trim().isEmpty()) {

            logger.warn("Attempted to add empty assistant message");

            return;

        }

        

        Message assistantMessage = new Message(content.trim(), Message.Role.ASSISTANT);

        addMessage(assistantMessage);

    }

    

    /**

     * Gets the current conversation context for sending to the LLM

     * @return List of messages representing the current context

     */

    public List<Message> getConversationContext() {

        return new ArrayList<>(conversationHistory);

    }

    

    /**

     * Gets the full conversation history

     * @return Unmodifiable list of all messages in the conversation

     */

    public List<Message> getFullHistory() {

        return Collections.unmodifiableList(conversationHistory);

    }

    

    /**

     * Gets the last message in the conversation

     * @return The last message, or null if the conversation is empty

     */

    public Message getLastMessage() {

        if (conversationHistory.isEmpty()) {

            return null;

        }

        return conversationHistory.get(conversationHistory.size() - 1);

    }

    

    /**

     * Gets the last user message in the conversation

     * @return The last user message, or null if no user message exists

     */

    public Message getLastUserMessage() {

        for (int i = conversationHistory.size() - 1; i >= 0; i--) {

            Message message = conversationHistory.get(i);

            if (message.getRole() == Message.Role.USER) {

                return message;

            }

        }

        return null;

    }

    

    /**

     * Gets the number of messages in the conversation

     * @return The total number of messages

     */

    public int getMessageCount() {

        return conversationHistory.size();

    }

    

    /**

     * Checks if the conversation is empty (excluding system messages)

     * @return true if the conversation has no user or assistant messages

     */

    public boolean isEmpty() {

        return conversationHistory.stream()

            .noneMatch(msg -> msg.getRole() == Message.Role.USER || msg.getRole() == Message.Role.ASSISTANT);

    }

    

    /**

     * Clears the conversation history while preserving the system message

     */

    public void clearConversation() {

        conversationHistory.clear();

        

        // Re-add the system message

        if (systemMessage != null && !systemMessage.trim().isEmpty()) {

            addMessage(new Message(systemMessage, Message.Role.SYSTEM));

        }

        

        logger.info("Conversation cleared");

    }

    

    /**

     * Manages the context window by removing old messages when the limit is exceeded

     * Always preserves the system message and maintains conversation flow

     */

    private void manageContextWindow() {

        if (conversationHistory.size() <= maxContextMessages) {

            return;

        }

        

        // Find the system message (it should be the first message)
        int systemMsgIndex = -1;
        for (int i = 0; i < conversationHistory.size(); i++) {
            if (conversationHistory.get(i).getRole() == Message.Role.SYSTEM) {
                systemMsgIndex = i;
                break;
            }
        }
        
        // Calculate how many messages to remove
        int messagesToRemove = conversationHistory.size() - maxContextMessages;
        
        // Remove the oldest messages (starting just after the system message)
        // so that the most recent context is preserved
        int startRemovalIndex = systemMsgIndex + 1;
        for (int i = 0; i < messagesToRemove && startRemovalIndex < conversationHistory.size(); i++) {
            conversationHistory.remove(startRemovalIndex);
        }

        

        logger.debug("Removed {} messages from conversation context. Current size: {}", 

                    messagesToRemove, conversationHistory.size());

    }

    

    /**

     * Gets a summary of the conversation statistics

     * @return A string containing conversation statistics

     */

    public String getConversationSummary() {

        long userMessages = conversationHistory.stream()
            .filter(msg -> msg.getRole() == Message.Role.USER)
            .count();
        
        long assistantMessages = conversationHistory.stream()
            .filter(msg -> msg.getRole() == Message.Role.ASSISTANT)
            .count();
        
        long systemMessages = conversationHistory.stream()
            .filter(msg -> msg.getRole() == Message.Role.SYSTEM)
            .count();

        

        return String.format("Conversation Summary - Total: %d, User: %d, Assistant: %d, System: %d", 

                           conversationHistory.size(), userMessages, assistantMessages, systemMessages);

    }

    

    /**

     * Gets the system message

     * @return The system message content

     */

    public String getSystemMessage() {

        return systemMessage;

    }

    

    /**

     * Gets the maximum context messages setting

     * @return The maximum number of messages kept in context

     */

    public int getMaxContextMessages() {

        return maxContextMessages;

    }

}

```


Creating the Chatbot Controller


The chatbot controller serves as the orchestration layer that coordinates between the conversation service and the Ollama service. It handles the main chatbot logic, processes user input, manages the conversation flow, and handles error scenarios gracefully.
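Before the full controller, the essential orchestration loop can be sketched with a stand-in model function (the `Function`-based client and the `"user: "`/`"assistant: "` labels here are hypothetical simplifications, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class OrchestrationSketch {

    private final List<String> history = new ArrayList<>();
    private final Function<List<String>, String> llm; // stand-in for the Ollama service

    public OrchestrationSketch(Function<List<String>, String> llm) {
        this.llm = llm;
    }

    /** The controller's core loop: record input, ask the model with full context, record the reply. */
    public String processMessage(String userInput) {
        history.add("user: " + userInput);
        String reply = llm.apply(List.copyOf(history)); // the model sees the whole context
        history.add("assistant: " + reply);
        return reply;
    }

    public int historySize() {
        return history.size();
    }

    public static void main(String[] args) {
        // An echo "model" stands in for the real Ollama call
        OrchestrationSketch bot = new OrchestrationSketch(
            ctx -> "echo of: " + ctx.get(ctx.size() - 1));
        System.out.println(bot.processMessage("hello"));
    }
}
```

The important detail is the ordering: the user message is appended *before* the model is called, so the model's context always includes the input it is answering, and the reply is appended afterwards so the next turn sees both.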



```
package com.fortytwo.llmchatbot.controller;


import com.fortytwo.llmchatbot.model.Message;

import com.fortytwo.llmchatbot.service.ConversationService;

import com.fortytwo.llmchatbot.service.OllamaService;

import com.fortytwo.llmchatbot.service.OllamaServiceException;

import org.slf4j.Logger;

import org.slf4j.LoggerFactory;


import java.util.List;


/**

 * Controller class that orchestrates the chatbot functionality.

 * This class coordinates between the conversation service and the Ollama service

 * to provide a complete chatbot experience.

 */

public class ChatbotController {

    

    private static final Logger logger = LoggerFactory.getLogger(ChatbotController.class);

    

    private final ConversationService conversationService;

    private final OllamaService ollamaService;

    private boolean isInitialized;

    

    /**

     * Constructor with default services

     */

    public ChatbotController() {

        this(new ConversationService(), new OllamaService());

    }

    

    /**

     * Constructor with custom services

     * @param conversationService The conversation service to use

     * @param ollamaService The Ollama service to use

     */

    public ChatbotController(ConversationService conversationService, OllamaService ollamaService) {

        this.conversationService = conversationService;

        this.ollamaService = ollamaService;

        this.isInitialized = false;

        

        logger.info("ChatbotController created with custom services");

    }

    

    /**

     * Initializes the chatbot by testing the connection to Ollama

     * @return true if initialization is successful, false otherwise

     */

    public boolean initialize() {

        logger.info("Initializing chatbot...");

        

        try {

            // Test connection to Ollama

            boolean connectionSuccessful = ollamaService.testConnection();

            

            if (connectionSuccessful) {

                isInitialized = true;

                logger.info("Chatbot initialized successfully");

                return true;

            } else {

                logger.error("Failed to connect to Ollama service");

                return false;

            }

            

        } catch (Exception e) {

            logger.error("Error during chatbot initialization", e);

            return false;

        }

    }

    

    /**

     * Processes a user message and generates a response

     * @param userInput The user's input message

     * @return The chatbot's response

     * @throws ChatbotException if there's an error processing the message

     */

    public String processMessage(String userInput) throws ChatbotException {

        if (!isInitialized) {

            throw new ChatbotException("Chatbot is not initialized. Call initialize() first.");

        }

        

        if (userInput == null || userInput.trim().isEmpty()) {

            throw new ChatbotException("User input cannot be empty");

        }

        

        String trimmedInput = userInput.trim();

        logger.info("Processing user message: {}", trimmedInput);

        

        try {

            // Add user message to conversation

            conversationService.addUserMessage(trimmedInput);

            

            // Get conversation context

            List<Message> context = conversationService.getConversationContext();

            

            // Generate response using Ollama

            String response = ollamaService.generateResponse(context);

            

            // Add assistant response to conversation

            conversationService.addAssistantMessage(response);

            

            logger.info("Generated response: {}", response);

            return response;

            

        } catch (OllamaServiceException e) {

            logger.error("Error generating response from Ollama", e);

            throw new ChatbotException("Failed to generate response: " + e.getMessage(), e);

        } catch (Exception e) {

            logger.error("Unexpected error processing message", e);

            throw new ChatbotException("Unexpected error: " + e.getMessage(), e);

        }

    }

    

    /**

     * Starts a new conversation by clearing the current history

     */

    public void startNewConversation() {

        conversationService.clearConversation();

        logger.info("Started new conversation");

    }

    

    /**

     * Gets the current conversation history

     * @return List of messages in the conversation

     */

    public List<Message> getConversationHistory() {

        return conversationService.getFullHistory();

    }

    

    /**

     * Gets a summary of the current conversation

     * @return String containing conversation statistics

     */

    public String getConversationSummary() {

        return conversationService.getConversationSummary();

    }

    

    /**

     * Checks if the chatbot is initialized and ready to use

     * @return true if initialized, false otherwise

     */

    public boolean isInitialized() {

        return isInitialized;

    }

    

    /**

     * Checks if the current conversation is empty

     * @return true if the conversation has no user or assistant messages

     */

    public boolean isConversationEmpty() {

        return conversationService.isEmpty();

    }

    

    /**

     * Gets the last message in the conversation

     * @return The last message, or null if conversation is empty

     */

    public Message getLastMessage() {

        return conversationService.getLastMessage();

    }

    

    /**

     * Gets information about the current model and configuration

     * @return String containing model information

     */

    public String getModelInfo() {

        return String.format("Model: %s, Base URL: %s, Max Context: %d messages", 

                           ollamaService.getModel(), 

                           ollamaService.getBaseUrl(),

                           conversationService.getMaxContextMessages());

    }

    

    /**

     * Performs a health check on the chatbot services

     * @return true if all services are healthy, false otherwise

     */

    public boolean healthCheck() {

        try {

            if (!isInitialized) {

                logger.warn("Health check failed: Chatbot not initialized");

                return false;

            }

            

            // Test Ollama connection

            boolean ollamaHealthy = ollamaService.testConnection();

            

            if (!ollamaHealthy) {

                logger.warn("Health check failed: Ollama service unhealthy");

                return false;

            }

            

            logger.info("Health check passed");

            return true;

            

        } catch (Exception e) {

            logger.error("Health check failed with exception", e);

            return false;

        }

    }

    

    /**

     * Shuts down the chatbot and releases resources

     */

    public void shutdown() {

        logger.info("Shutting down chatbot...");

        

        try {

            ollamaService.close();

            isInitialized = false;

            logger.info("Chatbot shutdown completed");

        } catch (Exception e) {

            logger.warn("Error during chatbot shutdown", e);

        }

    }

}
```



We also need the ChatbotException class:



```
package com.fortytwo.llmchatbot.controller;


/**

 * Custom exception for chatbot-related errors.

 * This exception is thrown when there are issues with chatbot operations

 * that are not specific to the underlying services.

 */

public class ChatbotException extends Exception {

    

    /**

     * Constructs a new ChatbotException with the specified detail message.

     * @param message The detail message

     */

    public ChatbotException(String message) {

        super(message);

    }

    

    /**

     * Constructs a new ChatbotException with the specified detail message and cause.

     * @param message The detail message

     * @param cause The cause of the exception

     */

    public ChatbotException(String message, Throwable cause) {

        super(message, cause);

    }

    

    /**

     * Constructs a new ChatbotException with the specified cause.

     * @param cause The cause of the exception

     */

    public ChatbotException(Throwable cause) {

        super(cause);

    }

}
```



Implementing the User Interface


The user interface provides the interaction layer for users to communicate with the chatbot. We will implement a console-based interface that demonstrates all the chatbot functionality while being simple to understand and extend.
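The interface's input handling boils down to a dispatch on slash-prefixed commands versus plain chat text. That decision can be sketched on its own (the returned action labels are hypothetical names for illustration, not part of the real interface):

```java
public class CommandDispatchSketch {

    /** Simplified version of the interface's command handling: classify one line of input. */
    static String dispatch(String input) {
        switch (input.toLowerCase()) {
            case "/help": case "/h":
                return "help";
            case "/new": case "/n":
                return "new-conversation";
            case "/quit": case "/q": case "/exit":
                return "quit";
            default:
                // Unrecognized slash input is an error; everything else goes to the LLM
                return input.startsWith("/") ? "unknown-command" : "chat-message";
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch("/Help"));       // case-insensitive commands
        System.out.println(dispatch("hello there")); // routed to the chatbot
    }
}
```

Lower-casing the input before the switch is what makes `/Help` and `/help` equivalent, mirroring the `toLowerCase()` call in the full implementation below.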



```
package com.fortytwo.llmchatbot.util;


import com.fortytwo.llmchatbot.controller.ChatbotController;

import com.fortytwo.llmchatbot.controller.ChatbotException;

import com.fortytwo.llmchatbot.model.Message;

import org.slf4j.Logger;

import org.slf4j.LoggerFactory;


import java.io.BufferedReader;

import java.io.IOException;

import java.io.InputStreamReader;

import java.time.format.DateTimeFormatter;

import java.util.List;


/**

 * Console-based user interface for the LLM chatbot.

 * This class provides an interactive command-line interface for users

 * to communicate with the chatbot and access various features.

 */

public class ConsoleInterface {

    

    private static final Logger logger = LoggerFactory.getLogger(ConsoleInterface.class);

    

    private final ChatbotController chatbotController;

    private final BufferedReader reader;

    private boolean isRunning;

    

    // ANSI color codes for better console output

    private static final String RESET = "\u001B[0m";

    private static final String BLUE = "\u001B[34m";

    private static final String GREEN = "\u001B[32m";

    private static final String RED = "\u001B[31m";

    private static final String YELLOW = "\u001B[33m";

    private static final String CYAN = "\u001B[36m";

    private static final String BOLD = "\u001B[1m";

    

    /**

     * Constructor

     * @param chatbotController The chatbot controller to use

     */

    public ConsoleInterface(ChatbotController chatbotController) {

        this.chatbotController = chatbotController;

        this.reader = new BufferedReader(new InputStreamReader(System.in));

        this.isRunning = false;

        

        logger.info("ConsoleInterface initialized");

    }

    

    /**

     * Starts the interactive console interface

     */

    public void start() {

        isRunning = true;

        

        printWelcomeMessage();

        

        // Initialize the chatbot

        if (!initializeChatbot()) {

            printError("Failed to initialize chatbot. Exiting...");

            return;

        }

        

        printHelp();

        

        // Main interaction loop

        while (isRunning) {

            try {

                printPrompt();

                String input = reader.readLine();

                

                if (input == null) {

                    break; // EOF reached

                }

                

                processUserInput(input.trim());

                

            } catch (IOException e) {

                printError("Error reading input: " + e.getMessage());

                logger.error("Error reading console input", e);

                break;

            }

        }

        

        shutdown();

    }

    

    /**

     * Processes user input and executes appropriate actions

     * @param input The user input string

     */

    private void processUserInput(String input) {

        if (input.isEmpty()) {

            return;

        }

        

        // Handle special commands

        if (input.startsWith("/")) {

            handleCommand(input);

            return;

        }

        

        // Process as regular chat message

        try {

            String response = chatbotController.processMessage(input);

            printAssistantMessage(response);

            

        } catch (ChatbotException e) {

            printError("Error processing message: " + e.getMessage());

            logger.error("Error processing user message", e);

        }

    }

    

    /**

     * Handles special commands

     * @param command The command string

     */

    private void handleCommand(String command) {

        String cmd = command.toLowerCase();

        

        switch (cmd) {

            case "/help":

            case "/h":

                printHelp();

                break;

                

            case "/new":

            case "/n":

                chatbotController.startNewConversation();

                printInfo("Started new conversation");

                break;

                

            case "/history":

            case "/hist":

                printConversationHistory();

                break;

                

            case "/summary":

            case "/sum":

                printInfo(chatbotController.getConversationSummary());

                break;

                

            case "/model":

            case "/m":

                printInfo(chatbotController.getModelInfo());

                break;

                

            case "/health":

                boolean healthy = chatbotController.healthCheck();

                if (healthy) {

                    printSuccess("All systems healthy");

                } else {

                    printError("Health check failed");

                }

                break;

                

            case "/clear":

            case "/cls":

                clearScreen();

                break;

                

            case "/quit":

            case "/q":

            case "/exit":

                isRunning = false;

                printInfo("Goodbye!");

                break;

                

            default:

                printError("Unknown command: " + command);

                printInfo("Type /help for available commands");

                break;

        }

    }

    

    /**

     * Initializes the chatbot

     * @return true if initialization is successful, false otherwise

     */

    private boolean initializeChatbot() {

        printInfo("Initializing chatbot...");

        

        boolean initialized = chatbotController.initialize();

        

        if (initialized) {

            printSuccess("Chatbot initialized successfully!");

            printInfo(chatbotController.getModelInfo());

            return true;

        } else {

            return false;

        }

    }

    

    /**

     * Prints the welcome message

     */

    private void printWelcomeMessage() {

        System.out.println();

        System.out.println(BOLD + BLUE + "╔══════════════════════════════════════════════════════════╗" + RESET);

        System.out.println(BOLD + BLUE + "║" + RESET + "                   " + BOLD + "LLM Chatbot" + RESET + "                      " + BOLD + BLUE + "║" + RESET);

        System.out.println(BOLD + BLUE + "║" + RESET + "              Powered by Ollama & Java              " + BOLD + BLUE + "║" + RESET);

        System.out.println(BOLD + BLUE + "╚══════════════════════════════════════════════════════════╝" + RESET);

        System.out.println();

    }

    

    /**

     * Prints the help message with available commands

     */

    private void printHelp() {

        System.out.println();

        System.out.println(BOLD + CYAN + "Available Commands:" + RESET);

        System.out.println(GREEN + "/help, /h" + RESET + "      - Show this help message");

        System.out.println(GREEN + "/new, /n" + RESET + "       - Start a new conversation");

        System.out.println(GREEN + "/history, /hist" + RESET + " - Show conversation history");

        System.out.println(GREEN + "/summary, /sum" + RESET + "  - Show conversation summary");

        System.out.println(GREEN + "/model, /m" + RESET + "      - Show model information");

        System.out.println(GREEN + "/health" + RESET + "        - Perform health check");

        System.out.println(GREEN + "/clear, /cls" + RESET + "    - Clear the screen");

        System.out.println(GREEN + "/quit, /q, /exit" + RESET + " - Exit the chatbot");

        System.out.println();

        System.out.println(YELLOW + "Simply type your message to chat with the AI assistant!" + RESET);

        System.out.println();

    }

    

    /**

     * Prints the user input prompt

     */

    private void printPrompt() {

        System.out.print(BOLD + BLUE + "You: " + RESET);

    }

    

    /**

     * Prints an assistant message

     * @param message The message to print

     */

    private void printAssistantMessage(String message) {

        System.out.println();

        System.out.println(BOLD + GREEN + "Assistant: " + RESET + message);

        System.out.println();

    }

    

    /**

     * Prints an informational message

     * @param message The message to print

     */

    private void printInfo(String message) {

        System.out.println(CYAN + "[INFO] " + message + RESET);

    }

    

    /**

     * Prints a success message

     * @param message The message to print

     */

    private void printSuccess(String message) {

        System.out.println(GREEN + "[SUCCESS] " + message + RESET);

    }

    

    /**

     * Prints an error message

     * @param message The message to print

     */

    private void printError(String message) {

        System.out.println(RED + "[ERROR] " + message + RESET);

    }

    

    /**

     * Prints the conversation history

     */

    private void printConversationHistory() {

        List<Message> history = chatbotController.getConversationHistory();

        

        if (history.isEmpty()) {

            printInfo("No conversation history available");

            return;

        }

        

        System.out.println();

        System.out.println(BOLD + CYAN + "Conversation History:" + RESET);

        System.out.println(CYAN + "════════════════════" + RESET);

        

        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("HH:mm:ss");

        

        for (Message message : history) {

            String timestamp = message.getTimestamp().format(formatter);

            String roleColor = getRoleColor(message.getRole());

            String roleName = getRoleName(message.getRole());

            

            System.out.printf("%s[%s] %s:%s %s%n", 

                            roleColor, timestamp, roleName, RESET, message.getContent());

        }

        

        System.out.println();

    }

    

    /**

     * Gets the color code for a message role

     * @param role The message role

     * @return The ANSI color code

     */

    private String getRoleColor(Message.Role role) {

        switch (role) {

            case USER:

                return BLUE;

            case ASSISTANT:

                return GREEN;

            case SYSTEM:

                return YELLOW;

            default:

                return RESET;

        }

    }

    

    /**

     * Gets the display name for a message role

     * @param role The message role

     * @return The display name

     */

    private String getRoleName(Message.Role role) {

        switch (role) {

            case USER:

                return "You";

            case ASSISTANT:

                return "Assistant";

            case SYSTEM:

                return "System";

            default:

                return "Unknown";

        }

    }

    

    /**

     * Clears the console screen

     */

    private void clearScreen() {

        try {

            // Try to clear screen using ANSI escape codes

            System.out.print("\033[2J\033[H");

            System.out.flush();

        } catch (Exception e) {

            // Fallback: print multiple newlines

            for (int i = 0; i < 50; i++) {

                System.out.println();

            }

        }

    }

    

    /**

     * Shuts down the interface and releases resources

     */

    private void shutdown() {

        try {

            chatbotController.shutdown();

            reader.close();

            logger.info("ConsoleInterface shutdown completed");

        } catch (IOException e) {

            logger.warn("Error closing console reader", e);

        }

    }

}



Creating the Main Application Class


The main application class serves as the entry point for our chatbot application. It ties together all the components and provides a clean way to start the application.



package com.fortytwo.llmchatbot;


import com.fortytwo.llmchatbot.controller.ChatbotController;

import com.fortytwo.llmchatbot.util.ConsoleInterface;

import org.slf4j.Logger;

import org.slf4j.LoggerFactory;


/**

 * Main application class for the LLM Chatbot.

 * This class serves as the entry point for the application and coordinates

 * the initialization and startup of all components.

 */

public class Application {

    

    private static final Logger logger = LoggerFactory.getLogger(Application.class);

    

    /**

     * Main method - entry point of the application

     * @param args Command line arguments

     */

    public static void main(String[] args) {

        logger.info("Starting LLM Chatbot Application");

        

        try {

            // Parse command line arguments if any

            ApplicationConfig config = parseArguments(args);

            

            // Create the chatbot controller

            ChatbotController chatbotController = createChatbotController(config);

            

            // Create and start the console interface

            ConsoleInterface consoleInterface = new ConsoleInterface(chatbotController);

            consoleInterface.start();

            

        } catch (Exception e) {

            logger.error("Fatal error in main application", e);

            System.err.println("Fatal error: " + e.getMessage());

            System.exit(1);

        }

        

        logger.info("LLM Chatbot Application terminated");

    }

    

    /**

     * Parses command line arguments

     * @param args Command line arguments

     * @return ApplicationConfig object with parsed configuration

     */

    private static ApplicationConfig parseArguments(String[] args) {

        ApplicationConfig config = new ApplicationConfig();

        

        for (int i = 0; i < args.length; i++) {

            String arg = args[i];

            

            switch (arg) {

                case "--model":

                case "-m":

                    if (i + 1 < args.length) {

                        config.setModel(args[++i]);

                    } else {

                        throw new IllegalArgumentException("Model name required after " + arg);

                    }

                    break;

                    

                case "--url":

                case "-u":

                    if (i + 1 < args.length) {

                        config.setOllamaUrl(args[++i]);

                    } else {

                        throw new IllegalArgumentException("URL required after " + arg);

                    }

                    break;

                    

                case "--max-context":

                case "-c":

                    if (i + 1 < args.length) {

                        try {

                            config.setMaxContextMessages(Integer.parseInt(args[++i]));

                        } catch (NumberFormatException e) {

                            throw new IllegalArgumentException("Invalid number for max context: " + args[i]);

                        }

                    } else {

                        throw new IllegalArgumentException("Number required after " + arg);

                    }

                    break;

                    

                case "--system-message":

                case "-s":

                    if (i + 1 < args.length) {

                        config.setSystemMessage(args[++i]);

                    } else {

                        throw new IllegalArgumentException("System message required after " + arg);

                    }

                    break;

                    

                case "--help":

                case "-h":

                    printUsage();

                    System.exit(0);

                    break;

                    

                default:

                    if (arg.startsWith("-")) {

                        throw new IllegalArgumentException("Unknown argument: " + arg);

                    }

                    break;

            }

        }

        

        logger.info("Application configuration: {}", config);

        return config;

    }

    

    /**

     * Creates the chatbot controller with the given configuration

     * @param config Application configuration

     * @return Configured ChatbotController instance

     */

    private static ChatbotController createChatbotController(ApplicationConfig config) {

        // Create services with configuration

        com.fortytwo.llmchatbot.service.ConversationService conversationService = 

            new com.fortytwo.llmchatbot.service.ConversationService(

                config.getMaxContextMessages(), 

                config.getSystemMessage()

            );

        

        com.fortytwo.llmchatbot.service.OllamaService ollamaService = 

            new com.fortytwo.llmchatbot.service.OllamaService(

                config.getOllamaUrl(), 

                config.getModel()

            );

        

        return new ChatbotController(conversationService, ollamaService);

    }

    

    /**

     * Prints usage information

     */

    private static void printUsage() {

        System.out.println("LLM Chatbot Application");

        System.out.println("Usage: java -jar llm-chatbot.jar [options]");

        System.out.println();

        System.out.println("Options:");

        System.out.println("  -m, --model <name>           Model name to use (default: llama2)");

        System.out.println("  -u, --url <url>              Ollama base URL (default: http://localhost:11434)");

        System.out.println("  -c, --max-context <number>   Maximum context messages (default: 20)");

        System.out.println("  -s, --system-message <text>  Custom system message");

        System.out.println("  -h, --help                   Show this help message");

        System.out.println();

        System.out.println("Examples:");

        System.out.println("  java -jar llm-chatbot.jar");

        System.out.println("  java -jar llm-chatbot.jar --model mistral --max-context 30");

        System.out.println("  java -jar llm-chatbot.jar --url http://remote-ollama:11434");

    }

    

    /**

     * Configuration class for application settings

     */

    private static class ApplicationConfig {

        private String model = "llama2";

        private String ollamaUrl = "http://localhost:11434";

        private int maxContextMessages = 20;

        private String systemMessage = "You are a helpful AI assistant. You provide accurate, helpful, and concise responses. You maintain a friendly and professional tone in all interactions.";

        

        public String getModel() {

            return model;

        }

        

        public void setModel(String model) {

            this.model = model;

        }

        

        public String getOllamaUrl() {

            return ollamaUrl;

        }

        

        public void setOllamaUrl(String ollamaUrl) {

            this.ollamaUrl = ollamaUrl;

        }

        

        public int getMaxContextMessages() {

            return maxContextMessages;

        }

        

        public void setMaxContextMessages(int maxContextMessages) {

            this.maxContextMessages = maxContextMessages;

        }

        

        public String getSystemMessage() {

            return systemMessage;

        }

        

        public void setSystemMessage(String systemMessage) {

            this.systemMessage = systemMessage;

        }

        

        @Override

        public String toString() {

            return String.format("ApplicationConfig{model='%s', ollamaUrl='%s', maxContextMessages=%d}", 

                               model, ollamaUrl, maxContextMessages);

        }

    }

}



Configuration and Properties


To make our application more configurable, let's create an application.properties file:



# Ollama Configuration

ollama.base.url=http://localhost:11434

ollama.model=llama2


# Conversation Configuration

conversation.max.context.messages=20

conversation.system.message=You are a helpful AI assistant. You provide accurate, helpful, and concise responses. You maintain a friendly and professional tone in all interactions.


# Logging Configuration

logging.level.com.fortytwo.llmchatbot=INFO

logging.level.org.apache.http=WARN


# Application Configuration

app.name=LLM Chatbot

app.version=1.0.0
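
The tutorial's main class reads its settings from command-line arguments rather than from this file, so here is a small, hypothetical sketch of how the properties could be loaded with `java.util.Properties`, falling back to the same defaults used by `ApplicationConfig` when the file is absent. The class name `ConfigLoader` and the file path are assumptions, not part of the tutorial's codebase:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ConfigLoader {

    /**
     * Loads configuration from the given path. Any key the file does not
     * define (or the entire file, if it is missing) falls back to the
     * built-in defaults below.
     */
    public static Properties load(String path) {
        // Defaults mirror the values hard-coded in ApplicationConfig
        Properties defaults = new Properties();
        defaults.setProperty("ollama.base.url", "http://localhost:11434");
        defaults.setProperty("ollama.model", "llama2");
        defaults.setProperty("conversation.max.context.messages", "20");

        // Passing defaults to the constructor makes them the fallback layer
        Properties config = new Properties(defaults);
        try (InputStream in = new FileInputStream(path)) {
            config.load(in);
        } catch (IOException e) {
            // File missing or unreadable: defaults remain in effect
        }
        return config;
    }

    public static void main(String[] args) {
        Properties config = load("application.properties");
        System.out.println("Model: " + config.getProperty("ollama.model"));
    }
}
```

Wiring this into `parseArguments` (file values first, command-line flags overriding them) would give the usual precedence order of defaults, then file, then flags.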



Testing and Validation


Before running our application, it's important to understand how to test and validate the functionality. Here are some key testing scenarios to consider:


First, ensure that Ollama is running on your local machine with the appropriate model loaded. You can verify this by running "ollama list" in your terminal to see available models.
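
The same check can be done programmatically: Ollama exposes the model list over HTTP at GET /api/tags, so a quick request to that endpoint confirms the server is reachable before the chatbot starts. The following sketch uses `java.net.http.HttpClient` (available since Java 11); the class `OllamaHealthCheck` is an assumption for illustration, not part of the tutorial's codebase:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class OllamaHealthCheck {

    /** Builds a GET request against Ollama's model-listing endpoint. */
    static HttpRequest buildRequest(String baseUrl) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/tags"))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
    }

    /** Returns true if the server answers with HTTP 200, false otherwise. */
    static boolean isUp(String baseUrl) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();
        try {
            HttpResponse<String> response =
                    client.send(buildRequest(baseUrl), HttpResponse.BodyHandlers.ofString());
            return response.statusCode() == 200;
        } catch (Exception e) {
            return false; // Connection refused, timeout, interrupted, etc.
        }
    }

    public static void main(String[] args) {
        String url = args.length > 0 ? args[0] : "http://localhost:11434";
        System.out.println("Ollama reachable: " + isUp(url));
    }
}
```

A check like this could be called at startup to fail fast with a clear message instead of surfacing a connection error on the first chat request.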


Test the basic conversation flow by starting the application and engaging in a simple conversation. Verify that the chatbot maintains context across multiple exchanges and responds appropriately to different types of questions.


Test the command functionality by trying various commands like "/new" to start a new conversation, "/history" to view conversation history, and "/health" to check system status.


Test error handling by deliberately causing errors, such as stopping the Ollama service while the chatbot is running, to ensure graceful error handling and appropriate user feedback.


Running the Complete Application


To run the complete application, follow these steps:


First, ensure you have Java 11 or higher and Maven installed on your system. Install Ollama and pull the desired model using "ollama pull llama2" or your preferred model.


Compile the project using Maven by running "mvn clean compile" in the project root directory. This will download all dependencies and compile the source code.


Run the application using Maven with "mvn exec:java" or by running the compiled JAR file directly. The application will start, initialize the connection to Ollama, and present you with an interactive console interface.


Once running, you can engage in conversations with the chatbot by simply typing your messages. Use the various commands to explore different features and functionality.


Complete Running Example


Here is the complete, fully functional application that brings together all the components we have built:



// File: src/main/java/com/fortytwo/llmchatbot/Application.java

package com.fortytwo.llmchatbot;


import com.fortytwo.llmchatbot.controller.ChatbotController;

import com.fortytwo.llmchatbot.service.ConversationService;

import com.fortytwo.llmchatbot.service.OllamaService;

import com.fortytwo.llmchatbot.util.ConsoleInterface;

import org.slf4j.Logger;

import org.slf4j.LoggerFactory;


/**

 * Complete LLM Chatbot Application

 * This is a fully functional chatbot application that demonstrates

 * all the concepts and components discussed in this tutorial.

 */

public class Application {

    

    private static final Logger logger = LoggerFactory.getLogger(Application.class);

    

    public static void main(String[] args) {

        logger.info("Starting LLM Chatbot Application");

        

        try {

            // Create services with default configuration

            ConversationService conversationService = new ConversationService();

            OllamaService ollamaService = new OllamaService();

            

            // Create the chatbot controller

            ChatbotController chatbotController = new ChatbotController(conversationService, ollamaService);

            

            // Create and start the console interface

            ConsoleInterface consoleInterface = new ConsoleInterface(chatbotController);

            

            // Add shutdown hook for graceful cleanup

            Runtime.getRuntime().addShutdownHook(new Thread(() -> {

                logger.info("Shutdown hook triggered");

                chatbotController.shutdown();

            }));

            

            // Start the interactive interface

            consoleInterface.start();

            

        } catch (Exception e) {

            logger.error("Fatal error in main application", e);

            System.err.println("Fatal error: " + e.getMessage());

            System.err.println("Please ensure Ollama is running and the model is available.");

            System.exit(1);

        }

        

        logger.info("LLM Chatbot Application terminated");

    }

}


This complete example demonstrates a production-ready LLM chatbot application built in Java using Ollama. The application includes proper error handling, logging, resource management, and a user-friendly interface. It showcases clean architecture principles with clear separation of concerns between the different layers of the application.


The chatbot maintains conversation context, handles various user commands, provides helpful feedback, and gracefully handles error conditions. It serves as a solid foundation that can be extended with additional features such as conversation persistence, web interface, or integration with other systems.


Conclusion and Next Steps


This comprehensive tutorial has walked you through building a complete LLM chatbot in Java using Ollama. We have covered all the essential components including data models, service layers, conversation management, user interface, and application orchestration.


The resulting application demonstrates key concepts in LLM integration, conversation management, and clean software architecture. The modular design makes it easy to extend and customize for specific use cases.


Potential enhancements could include adding conversation persistence to a database, implementing a web-based interface, adding support for file uploads and document analysis, integrating with external APIs, or implementing more sophisticated conversation management features.


The foundation provided here gives you everything needed to build sophisticated LLM-powered applications in Java while maintaining clean, maintainable, and scalable code.
