Saturday, November 29, 2025

Building an LLM Chatbot in Unix Shell Scripting



Introduction


Building a chatbot that interfaces with Large Language Models using Unix shell scripting might seem unconventional at first, but it offers unique advantages for system administrators and developers who work primarily in terminal environments. Shell scripts provide direct access to system utilities, easy integration with existing Unix tools, and the ability to create lightweight chatbots without heavy dependencies. This article will guide you through creating a fully functional LLM chatbot using bash scripting that can communicate with both local and remote language models.

The chatbot we’ll build will support multiple LLM backends including OpenAI’s API, Anthropic’s Claude API, and local models running through ollama or llama.cpp. We’ll implement features such as conversation history management, configurable system prompts, and proper error handling. The entire implementation will be in bash, leveraging standard Unix utilities like curl for HTTP requests, jq for JSON processing, and basic file operations for persistence.


Core Components Overview


A shell-based LLM chatbot consists of several interconnected components that work together to create a seamless conversational experience. The main script serves as the orchestrator, managing the flow between user input, API calls, and response display. The API interface component handles the complexity of communicating with different LLM providers, abstracting away the differences in their request formats and authentication methods. The conversation manager maintains context across multiple exchanges, storing and retrieving message history as needed. Finally, the configuration system allows users to customize behavior without modifying the core code.

Each component is designed to be modular and reusable. This modularity allows you to swap out different LLM providers, add new features, or integrate the chatbot into larger shell-based workflows. The use of standard Unix conventions like environment variables for configuration and pipes for data flow ensures that our chatbot plays well with the broader Unix ecosystem.
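To make that Unix-friendliness concrete, here is a small sketch of how a prompt could be accepted either as command-line arguments or from a pipe. The `get_prompt` helper is hypothetical, not part of the script built below; it just illustrates the stdin/argument convention:

```shell
#!/bin/bash
# Hypothetical helper illustrating Unix-style input handling: accept the
# prompt as command-line arguments, or read it from stdin when piped.
get_prompt() {
    if [ $# -gt 0 ]; then
        printf '%s' "$*"    # prompt given as arguments
    else
        cat                 # prompt piped on stdin
    fi
}

# Both of these produce the same prompt text:
get_prompt "What does this error mean?"
echo
printf 'What does this error mean?' | get_prompt
echo
```

A wrapper like this would let the chatbot participate in pipelines (`dmesg | chatbot "explain this"`) as well as interactive sessions.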


Setting Up the Environment


Before we begin coding, we need to ensure our environment has the necessary tools installed. Most modern Unix-like systems come with bash and curl pre-installed, but we’ll also need jq for JSON processing. Here’s a setup script that checks for required dependencies and provides installation instructions if they’re missing:


```
#!/bin/bash

check_dependencies() {
    local missing_deps=()

    if ! command -v curl &> /dev/null; then
        missing_deps+=("curl")
    fi

    if ! command -v jq &> /dev/null; then
        missing_deps+=("jq")
    fi

    if [ ${#missing_deps[@]} -gt 0 ]; then
        echo "Missing required dependencies: ${missing_deps[*]}"
        echo "Please install them using your package manager:"
        echo "  Ubuntu/Debian: sudo apt-get install ${missing_deps[*]}"
        echo "  macOS: brew install ${missing_deps[*]}"
        echo "  RHEL/CentOS: sudo yum install ${missing_deps[*]}"
        return 1
    fi

    echo "All dependencies are installed."
    return 0
}

check_dependencies || exit 1
```


This script checks for the presence of curl and jq, which are essential for making HTTP requests and parsing JSON responses respectively. The script provides platform-specific installation instructions if any dependencies are missing, making it easy for users to get their environment ready.


Building the API Interface


The API interface is the heart of our chatbot, responsible for formatting requests and parsing responses from different LLM providers. Let’s start with a function that handles OpenAI API calls:


```
send_openai_request() {
    local message="$1"
    local api_key="$2"
    local model="${3:-gpt-3.5-turbo}"
    local max_tokens="${4:-2000}"
    local temperature="${5:-0.7}"

    local request_body
    request_body=$(jq -n \
        --arg model "$model" \
        --arg content "$message" \
        --argjson max_tokens "$max_tokens" \
        --argjson temperature "$temperature" \
        '{
            model: $model,
            messages: [
                {
                    role: "user",
                    content: $content
                }
            ],
            max_tokens: $max_tokens,
            temperature: $temperature
        }')

    # Declare and assign separately so $? reflects curl's exit status,
    # not the always-successful "local" builtin
    local response
    response=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $api_key" \
        -d "$request_body" \
        "https://api.openai.com/v1/chat/completions")

    if [ $? -ne 0 ]; then
        echo "Error: Failed to connect to OpenAI API" >&2
        return 1
    fi

    local error
    error=$(echo "$response" | jq -r '.error.message // empty')
    if [ -n "$error" ]; then
        echo "API Error: $error" >&2
        return 1
    fi

    echo "$response" | jq -r '.choices[0].message.content // empty'
}
```


This function takes a message, API key, and optional parameters like model name, max tokens, and temperature. It constructs a JSON request body using jq, sends it to the OpenAI API endpoint using curl, and then extracts the response content. The function includes error handling for both network failures and API errors, providing meaningful error messages to help with debugging.

Now let’s create a similar function for Anthropic’s Claude API:


```
send_anthropic_request() {
    local message="$1"
    local api_key="$2"
    local model="${3:-claude-3-sonnet-20240229}"
    local max_tokens="${4:-2000}"

    local request_body
    request_body=$(jq -n \
        --arg model "$model" \
        --arg content "$message" \
        --argjson max_tokens "$max_tokens" \
        '{
            model: $model,
            messages: [
                {
                    role: "user",
                    content: $content
                }
            ],
            max_tokens: $max_tokens
        }')

    # Separate declaration and assignment so $? reflects curl, not "local"
    local response
    response=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -H "x-api-key: $api_key" \
        -H "anthropic-version: 2023-06-01" \
        -d "$request_body" \
        "https://api.anthropic.com/v1/messages")

    if [ $? -ne 0 ]; then
        echo "Error: Failed to connect to Anthropic API" >&2
        return 1
    fi

    local error
    error=$(echo "$response" | jq -r '.error.message // empty')
    if [ -n "$error" ]; then
        echo "API Error: $error" >&2
        return 1
    fi

    echo "$response" | jq -r '.content[0].text // empty'
}
```


The Anthropic function follows a similar pattern but accommodates the differences in API structure, including different header names and response format. This abstraction allows our main chatbot logic to remain provider-agnostic.

For local models using ollama, we can create a simpler interface:


```
send_ollama_request() {
    local message="$1"
    local model="${2:-llama2}"
    local ollama_host="${3:-http://localhost:11434}"

    local request_body
    request_body=$(jq -n \
        --arg model "$model" \
        --arg prompt "$message" \
        '{
            model: $model,
            prompt: $prompt,
            stream: false
        }')

    # Separate declaration and assignment so $? reflects curl, not "local"
    local response
    response=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -d "$request_body" \
        "$ollama_host/api/generate")

    if [ $? -ne 0 ]; then
        echo "Error: Failed to connect to Ollama at $ollama_host" >&2
        return 1
    fi

    echo "$response" | jq -r '.response // empty'
}
```


This function connects to a locally running ollama instance, which manages various open-source language models. The interface is simpler since there’s no authentication required for local models.
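Before starting a session, it can help to verify that the ollama server is actually reachable. ollama exposes a GET /api/tags endpoint that lists installed models, so a successful request means the daemon is up. The function name and timeout below are illustrative choices, not part of the main script:

```shell
#!/bin/bash
# Probe a local ollama server. /api/tags lists installed models, so any
# successful response means the daemon is running. The 2-second timeout
# keeps the check fast when nothing is listening.
ollama_available() {
    local host="${1:-http://localhost:11434}"
    curl -s --max-time 2 "$host/api/tags" > /dev/null
}

if ollama_available; then
    echo "ollama is reachable"
else
    echo "ollama is not reachable; try 'ollama serve' first" >&2
fi
```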


Creating the Chat Loop


The chat loop is where users interact with the chatbot. It needs to handle user input, call the appropriate API function, display responses, and manage the conversation flow. Here’s a comprehensive implementation:


```
run_chat_loop() {
    local provider="$1"
    local api_key="$2"
    local model="$3"
    local history_file="${4:-$HOME/.chatbot_history}"
    local conversation_id="$(date +%s)"
    local history_dir="$(dirname "$history_file")"

    if [ ! -d "$history_dir" ]; then
        mkdir -p "$history_dir"
    fi

    echo "Starting chat session (ID: $conversation_id)"
    echo "Provider: $provider, Model: $model"
    echo "Type 'exit' to quit, 'clear' to clear history, 'save' to save conversation"
    echo "----------------------------------------"

    while true; do
        printf "\nYou: "
        read -r user_input || break   # exit cleanly on EOF (Ctrl-D)

        if [ -z "$user_input" ]; then
            continue
        fi

        case "$user_input" in
            exit|quit)
                echo "Goodbye!"
                break
                ;;
            clear)
                > "$history_file"
                echo "History cleared."
                continue
                ;;
            save)
                local save_file="$history_dir/conversation_${conversation_id}.txt"
                cp "$history_file" "$save_file"
                echo "Conversation saved to: $save_file"
                continue
                ;;
        esac

        echo "$(date '+%Y-%m-%d %H:%M:%S') USER: $user_input" >> "$history_file"

        printf "Assistant: "

        local response
        case "$provider" in
            openai)
                response=$(send_openai_request "$user_input" "$api_key" "$model")
                ;;
            anthropic)
                response=$(send_anthropic_request "$user_input" "$api_key" "$model")
                ;;
            ollama)
                response=$(send_ollama_request "$user_input" "$model")
                ;;
            *)
                echo "Error: Unknown provider '$provider'"
                continue
                ;;
        esac

        if [ $? -eq 0 ] && [ -n "$response" ]; then
            echo "$response"
            echo "$(date '+%Y-%m-%d %H:%M:%S') ASSISTANT: $response" >> "$history_file"
        else
            echo "Failed to get response from $provider"
        fi
    done
}
```


This chat loop implementation provides a complete interactive experience. It starts by creating necessary directories for history storage and displays session information. The main loop reads user input and handles special commands like exit, clear, and save. Regular messages are logged to a history file with timestamps, sent to the appropriate API based on the configured provider, and the responses are displayed and logged. The function maintains a conversation ID based on the Unix timestamp, allowing users to save and retrieve specific conversations.


Managing Conversation History


Proper conversation history management is crucial for maintaining context across exchanges. Let’s implement a more sophisticated history system that can load previous conversations and maintain context:


```
load_conversation_context() {
    local history_file="$1"
    local max_context_messages="${2:-10}"

    if [ ! -f "$history_file" ]; then
        echo "[]"
        return
    fi

    local messages="[]"
    local line_count=0

    while IFS= read -r line; do
        if [[ "$line" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}\ [0-9]{2}:[0-9]{2}:[0-9]{2}\ (USER|ASSISTANT):\ (.*)$ ]]; then
            local role="${BASH_REMATCH[1]}"
            local content="${BASH_REMATCH[2]}"

            if [ "$role" = "USER" ]; then
                role="user"
            else
                role="assistant"
            fi

            messages=$(echo "$messages" | jq \
                --arg role "$role" \
                --arg content "$content" \
                '. + [{role: $role, content: $content}]')

            line_count=$((line_count + 1))
        fi
    done < <(tail -n "$((max_context_messages * 2))" "$history_file")

    echo "$messages"
}

send_with_context() {
    local provider="$1"
    local api_key="$2"
    local model="$3"
    local new_message="$4"
    local history_file="$5"

    local context
    context=$(load_conversation_context "$history_file" 5)

    context=$(echo "$context" | jq \
        --arg content "$new_message" \
        '. + [{role: "user", content: $content}]')

    case "$provider" in
        openai)
            send_openai_request_with_context "$context" "$api_key" "$model"
            ;;
        anthropic)
            send_anthropic_request_with_context "$context" "$api_key" "$model"
            ;;
        *)
            echo "Context not supported for provider: $provider" >&2
            return 1
            ;;
    esac
}

send_openai_request_with_context() {
    local messages="$1"
    local api_key="$2"
    local model="${3:-gpt-3.5-turbo}"

    local request_body
    request_body=$(jq -n \
        --arg model "$model" \
        --argjson messages "$messages" \
        '{
            model: $model,
            messages: $messages,
            max_tokens: 2000,
            temperature: 0.7
        }')

    local response
    response=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $api_key" \
        -d "$request_body" \
        "https://api.openai.com/v1/chat/completions")

    echo "$response" | jq -r '.choices[0].message.content // empty'
}
```


These functions work together to maintain conversation context. The load_conversation_context function reads the history file, parses timestamped messages, and constructs a JSON array of message objects. The send_with_context function loads this context, adds the new message, and sends the entire conversation history to the API. This allows the LLM to maintain context and provide more coherent responses across multiple exchanges.


Adding Configuration Management


A robust chatbot needs flexible configuration management. Let’s create a configuration system using a simple RC file format:


```
load_config() {
    local config_file="${1:-$HOME/.chatbotrc}"

    if [ ! -f "$config_file" ]; then
        create_default_config "$config_file"
    fi

    source "$config_file"

    CHATBOT_PROVIDER="${CHATBOT_PROVIDER:-openai}"
    CHATBOT_MODEL="${CHATBOT_MODEL:-gpt-3.5-turbo}"
    CHATBOT_API_KEY="${CHATBOT_API_KEY:-}"
    CHATBOT_HISTORY_DIR="${CHATBOT_HISTORY_DIR:-$HOME/.chatbot}"
    CHATBOT_MAX_TOKENS="${CHATBOT_MAX_TOKENS:-2000}"
    CHATBOT_TEMPERATURE="${CHATBOT_TEMPERATURE:-0.7}"
    CHATBOT_SYSTEM_PROMPT="${CHATBOT_SYSTEM_PROMPT:-You are a helpful assistant.}"

    export CHATBOT_PROVIDER CHATBOT_MODEL CHATBOT_API_KEY
    export CHATBOT_HISTORY_DIR CHATBOT_MAX_TOKENS CHATBOT_TEMPERATURE
    export CHATBOT_SYSTEM_PROMPT
}

create_default_config() {
    local config_file="$1"

    # The quoted 'EOF' keeps $HOME literal in the file; it expands
    # when the config is sourced
    cat > "$config_file" << 'EOF'
# Chatbot Configuration File

# Provider options: openai, anthropic, ollama
CHATBOT_PROVIDER=openai

# Model selection
CHATBOT_MODEL=gpt-3.5-turbo

# API Key (leave empty for ollama)
CHATBOT_API_KEY=

# History and storage
CHATBOT_HISTORY_DIR=$HOME/.chatbot

# Model parameters
CHATBOT_MAX_TOKENS=2000
CHATBOT_TEMPERATURE=0.7

# System prompt
CHATBOT_SYSTEM_PROMPT="You are a helpful assistant."

# Ollama specific settings
OLLAMA_HOST=http://localhost:11434
EOF

    echo "Created default configuration at: $config_file"
    echo "Please edit this file to add your API key and customize settings."
}

validate_config() {
    if [ "$CHATBOT_PROVIDER" != "ollama" ] && [ -z "$CHATBOT_API_KEY" ]; then
        echo "Error: API key required for provider '$CHATBOT_PROVIDER'" >&2
        echo "Please set CHATBOT_API_KEY in your config file or environment" >&2
        return 1
    fi

    case "$CHATBOT_PROVIDER" in
        openai|anthropic|ollama)
            ;;
        *)
            echo "Error: Unknown provider '$CHATBOT_PROVIDER'" >&2
            echo "Valid options: openai, anthropic, ollama" >&2
            return 1
            ;;
    esac

    return 0
}
```


This configuration system creates a default RC file if one doesn’t exist, loads configuration values with sensible defaults, and validates the configuration before use. Users can customize their chatbot by editing the RC file without modifying the main script.


Error Handling and Robustness


Production-ready shell scripts need comprehensive error handling. Let’s implement robust error handling throughout our chatbot:


```
set -euo pipefail

trap cleanup EXIT
trap 'error_handler $? $LINENO' ERR

error_handler() {
    local exit_code=$1
    local line_number=$2
    echo "Error occurred on line $line_number with exit code $exit_code" >&2
    cleanup
    exit "$exit_code"
}

cleanup() {
    if [ -n "${temp_file:-}" ] && [ -f "$temp_file" ]; then
        rm -f "$temp_file"
    fi

    # acquire_lock creates the lock as a directory, so test with -e
    # and remove recursively
    if [ -n "${lockfile:-}" ] && [ -e "$lockfile" ]; then
        rm -rf "$lockfile"
    fi
}

acquire_lock() {
    local lockfile="$1"
    local timeout="${2:-30}"
    local elapsed=0

    # mkdir is atomic, which makes it a safe locking primitive;
    # cleanup() removes the lock on exit via the EXIT trap
    while [ $elapsed -lt $timeout ]; do
        if mkdir "$lockfile" 2>/dev/null; then
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done

    echo "Error: Could not acquire lock after $timeout seconds" >&2
    return 1
}

retry_with_backoff() {
    local max_attempts="${1:-3}"
    local delay="${2:-1}"
    local max_delay="${3:-60}"
    shift 3
    local command=("$@")
    local attempt=1

    while [ $attempt -le $max_attempts ]; do
        if "${command[@]}"; then
            return 0
        fi

        if [ $attempt -eq $max_attempts ]; then
            echo "Command failed after $max_attempts attempts" >&2
            return 1
        fi

        echo "Attempt $attempt failed, retrying in ${delay}s..." >&2
        sleep "$delay"

        delay=$((delay * 2))
        if [ $delay -gt $max_delay ]; then
            delay=$max_delay
        fi

        attempt=$((attempt + 1))
    done
}

handle_interrupt() {
    echo -e "\nInterrupted. Cleaning up..."
    cleanup
    exit 130
}

trap handle_interrupt INT TERM
```


These error handling functions provide several layers of protection. The error_handler function catches any command failures and reports the line number where the error occurred. The cleanup function ensures temporary files and lock files are removed even if the script exits unexpectedly. The acquire_lock function prevents multiple instances of the chatbot from running simultaneously and potentially corrupting the history file. The retry_with_backoff function implements exponential backoff for API calls, helping to handle temporary network issues or rate limits gracefully.
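To see retry_with_backoff in action without hitting a real API, the sketch below redefines the helper so it runs standalone and uses a `flaky_cmd` stand-in that fails twice before succeeding, simulating a transient network error:

```shell
#!/bin/bash
# Self-contained demo of exponential-backoff retries. flaky_cmd tracks its
# call count in a temp file and succeeds only on the third attempt.
retry_with_backoff() {
    local max_attempts="${1:-3}" delay="${2:-1}" max_delay="${3:-60}"
    shift 3
    local attempt=1
    while [ "$attempt" -le "$max_attempts" ]; do
        if "$@"; then return 0; fi
        if [ "$attempt" -eq "$max_attempts" ]; then return 1; fi
        sleep "$delay"
        delay=$((delay * 2))
        if [ "$delay" -gt "$max_delay" ]; then delay=$max_delay; fi
        attempt=$((attempt + 1))
    done
}

state_file=$(mktemp)
flaky_cmd() {
    local n
    n=$(cat "$state_file" 2>/dev/null)
    n=${n:-0}
    echo $((n + 1)) > "$state_file"
    [ "$n" -ge 2 ]    # succeed on the third call
}

retry_with_backoff 5 0 1 flaky_cmd && echo "succeeded after $(cat "$state_file") attempts"
# prints: succeeded after 3 attempts
rm -f "$state_file"
```

In the chatbot itself, the same pattern wraps the provider functions, e.g. `retry_with_backoff 3 1 30 send_openai_request "$msg" "$key"`.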


Complete Working Example


Now let’s put everything together into a complete, working chatbot script:


```
#!/bin/bash

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"

trap cleanup EXIT
trap 'error_handler $? $LINENO' ERR
trap handle_interrupt INT TERM

error_handler() {
    local exit_code=$1
    local line_number=$2
    echo "Error occurred on line $line_number with exit code $exit_code" >&2
    cleanup
    exit "$exit_code"
}

cleanup() {
    if [ -n "${temp_file:-}" ] && [ -f "$temp_file" ]; then
        rm -f "$temp_file"
    fi
    if [ -n "${lockfile:-}" ] && [ -e "$lockfile" ]; then
        rm -rf "$lockfile"
    fi
}

handle_interrupt() {
    echo -e "\nInterrupted. Cleaning up..."
    cleanup
    exit 130
}

check_dependencies() {
    local missing_deps=()

    if ! command -v curl &> /dev/null; then
        missing_deps+=("curl")
    fi

    if ! command -v jq &> /dev/null; then
        missing_deps+=("jq")
    fi

    if [ ${#missing_deps[@]} -gt 0 ]; then
        echo "Missing required dependencies: ${missing_deps[*]}"
        echo "Please install them using your package manager"
        return 1
    fi

    return 0
}

load_config() {
    local config_file="${1:-$HOME/.chatbotrc}"

    if [ ! -f "$config_file" ]; then
        create_default_config "$config_file"
        echo "Please edit $config_file with your API key"
        exit 1
    fi

    source "$config_file"

    CHATBOT_PROVIDER="${CHATBOT_PROVIDER:-openai}"
    CHATBOT_MODEL="${CHATBOT_MODEL:-gpt-3.5-turbo}"
    CHATBOT_API_KEY="${CHATBOT_API_KEY:-}"
    CHATBOT_HISTORY_DIR="${CHATBOT_HISTORY_DIR:-$HOME/.chatbot}"
    CHATBOT_MAX_TOKENS="${CHATBOT_MAX_TOKENS:-2000}"
    CHATBOT_TEMPERATURE="${CHATBOT_TEMPERATURE:-0.7}"
}

create_default_config() {
    local config_file="$1"

    cat > "$config_file" << 'EOF'
CHATBOT_PROVIDER=openai
CHATBOT_MODEL=gpt-3.5-turbo
CHATBOT_API_KEY=
CHATBOT_HISTORY_DIR=$HOME/.chatbot
CHATBOT_MAX_TOKENS=2000
CHATBOT_TEMPERATURE=0.7
EOF
}

validate_config() {
    if [ "$CHATBOT_PROVIDER" != "ollama" ] && [ -z "$CHATBOT_API_KEY" ]; then
        echo "Error: API key required for provider '$CHATBOT_PROVIDER'" >&2
        return 1
    fi

    return 0
}

send_openai_request() {
    local message="$1"
    local api_key="$2"
    local model="${3:-gpt-3.5-turbo}"

    local request_body
    request_body=$(jq -n \
        --arg model "$model" \
        --arg content "$message" \
        --argjson max_tokens "$CHATBOT_MAX_TOKENS" \
        --argjson temperature "$CHATBOT_TEMPERATURE" \
        '{
            model: $model,
            messages: [{role: "user", content: $content}],
            max_tokens: $max_tokens,
            temperature: $temperature
        }')

    local response
    response=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $api_key" \
        -d "$request_body" \
        "https://api.openai.com/v1/chat/completions")

    echo "$response" | jq -r '.choices[0].message.content // empty'
}

send_anthropic_request() {
    local message="$1"
    local api_key="$2"
    local model="${3:-claude-3-sonnet-20240229}"

    local request_body
    request_body=$(jq -n \
        --arg model "$model" \
        --arg content "$message" \
        --argjson max_tokens "$CHATBOT_MAX_TOKENS" \
        '{
            model: $model,
            messages: [{role: "user", content: $content}],
            max_tokens: $max_tokens
        }')

    local response
    response=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -H "x-api-key: $api_key" \
        -H "anthropic-version: 2023-06-01" \
        -d "$request_body" \
        "https://api.anthropic.com/v1/messages")

    echo "$response" | jq -r '.content[0].text // empty'
}

send_ollama_request() {
    local message="$1"
    local model="${2:-llama2}"

    local request_body
    request_body=$(jq -n \
        --arg model "$model" \
        --arg prompt "$message" \
        '{model: $model, prompt: $prompt, stream: false}')

    local response
    response=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -d "$request_body" \
        "http://localhost:11434/api/generate")

    echo "$response" | jq -r '.response // empty'
}

main_chat_loop() {
    local history_file="$CHATBOT_HISTORY_DIR/current_session.txt"

    mkdir -p "$CHATBOT_HISTORY_DIR"

    echo "LLM Shell Chatbot"
    echo "Provider: $CHATBOT_PROVIDER, Model: $CHATBOT_MODEL"
    echo "Commands: 'exit' to quit, 'clear' to clear history"
    echo "----------------------------------------"

    while true; do
        printf "\nYou: "
        read -r user_input || break   # exit cleanly on EOF (Ctrl-D)

        if [ -z "$user_input" ]; then
            continue
        fi

        case "$user_input" in
            exit|quit)
                echo "Goodbye!"
                break
                ;;
            clear)
                > "$history_file"
                echo "History cleared."
                continue
                ;;
        esac

        echo "$(date '+%Y-%m-%d %H:%M:%S') USER: $user_input" >> "$history_file"

        printf "Assistant: "

        # Guard the assignments so a failed API call does not kill the
        # script under set -e
        local response=""
        case "$CHATBOT_PROVIDER" in
            openai)
                response=$(send_openai_request "$user_input" "$CHATBOT_API_KEY" "$CHATBOT_MODEL") || response=""
                ;;
            anthropic)
                response=$(send_anthropic_request "$user_input" "$CHATBOT_API_KEY" "$CHATBOT_MODEL") || response=""
                ;;
            ollama)
                response=$(send_ollama_request "$user_input" "$CHATBOT_MODEL") || response=""
                ;;
        esac

        if [ -n "$response" ]; then
            echo "$response"
            echo "$(date '+%Y-%m-%d %H:%M:%S') ASSISTANT: $response" >> "$history_file"
        else
            echo "Failed to get response"
        fi
    done
}

main() {
    check_dependencies || exit 1
    load_config
    validate_config || exit 1
    main_chat_loop
}

main "$@"
```


This complete script ties together all the components we’ve discussed. It starts with robust error handling setup, checks for dependencies, loads and validates configuration, and then enters the main chat loop. The script supports multiple LLM providers and maintains conversation history. To use it, save the script to a file like chatbot.sh, make it executable with chmod +x chatbot.sh, and run it with ./chatbot.sh.


Advanced Features


Beyond the basic functionality, there are several advanced features you can add to enhance your shell-based chatbot. Stream processing for real-time responses can be implemented using named pipes and background processes to handle streaming API responses as they arrive. This provides a more responsive user experience, especially for longer responses.
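A minimal sketch of the streaming idea against ollama is shown below. When `stream` is true, ollama's /api/generate endpoint returns newline-delimited JSON objects whose `response` field carries each chunk of text; the parser assumes that format, and the function names are illustrative:

```shell
#!/bin/bash
# Parse ollama's NDJSON stream: each line is a JSON object whose "response"
# field holds the next chunk. Printing chunks as they arrive gives live
# output instead of waiting for the full completion.
parse_ollama_stream() {
    local line
    while IFS= read -r line; do
        jq -j '.response // empty' <<< "$line"   # -j: raw output, no newline
    done
    echo
}

stream_ollama_request() {
    local message="$1"
    local model="${2:-llama2}"
    local host="${3:-http://localhost:11434}"

    jq -cn --arg model "$model" --arg prompt "$message" \
        '{model: $model, prompt: $prompt, stream: true}' |
    curl -s -N -X POST -d @- "$host/api/generate" |
    parse_ollama_stream
}
```

The curl `-N` flag disables output buffering so chunks are forwarded to the parser as soon as they arrive.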

System prompt customization allows users to define the chatbot’s personality and behavior. You can extend the configuration to include multiple system prompt templates and switch between them dynamically:


```
apply_system_prompt() {
    local messages="$1"
    local system_prompt="$2"

    echo "$messages" | jq \
        --arg prompt "$system_prompt" \
        '[{role: "system", content: $prompt}] + .'
}

load_prompt_template() {
    local template_name="$1"
    local template_file="$CHATBOT_HISTORY_DIR/prompts/${template_name}.txt"

    if [ -f "$template_file" ]; then
        cat "$template_file"
    else
        echo "You are a helpful assistant."
    fi
}
```


Token counting and management helps prevent exceeding API limits and manages costs. You can implement a simple token estimator:


```
estimate_tokens() {
    local text="$1"
    local char_count=${#text}
    # Rough heuristic: ~4 characters per token for English text
    local estimated_tokens=$((char_count / 4))
    echo "$estimated_tokens"
}
```


```
check_token_limit() {
    local messages="$1"
    local max_tokens="$2"

    local total_tokens=0
    while IFS= read -r content; do
        local tokens=$(estimate_tokens "$content")
        total_tokens=$((total_tokens + tokens))
    done < <(echo "$messages" | jq -r '.[].content')

    if [ $total_tokens -gt $max_tokens ]; then
        echo "Warning: Estimated $total_tokens tokens exceeds limit of $max_tokens" >&2
        return 1
    fi

    return 0
}
```


Multi-modal support for handling images or other file types can be added for providers that support it. Integration with external tools allows the chatbot to execute commands or query databases based on user requests. You could implement a plugin system that allows the chatbot to call external scripts:


```
execute_plugin() {
    local plugin_name="$1"
    local plugin_args="$2"
    local plugin_dir="$CHATBOT_HISTORY_DIR/plugins"
    local plugin_script="$plugin_dir/${plugin_name}.sh"

    if [ -x "$plugin_script" ]; then
        "$plugin_script" "$plugin_args"
    else
        echo "Plugin not found: $plugin_name" >&2
        return 1
    fi
}
```
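As a concrete (and hypothetical) example of what a plugin might look like, the snippet below creates a wordcount plugin in a temporary directory and invokes it the way execute_plugin would. In practice the script would live under `$CHATBOT_HISTORY_DIR/plugins`:

```shell
#!/bin/bash
# Demonstration plugin layout: any executable script in the plugin directory
# becomes callable by name. Here "wordcount" just counts words in its
# argument; real plugins could run commands or query databases.
plugin_dir=$(mktemp -d)

cat > "$plugin_dir/wordcount.sh" << 'EOF'
#!/bin/bash
echo "word count: $(echo "$1" | wc -w | tr -d ' ')"
EOF
chmod +x "$plugin_dir/wordcount.sh"

"$plugin_dir/wordcount.sh" "how many words is this"   # prints: word count: 5
rm -rf "$plugin_dir"
```

The `tr -d ' '` strips the leading padding that some `wc` implementations emit, keeping the output consistent across platforms.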


Conclusion


Building an LLM chatbot in Unix shell scripting demonstrates the power and flexibility of traditional Unix tools in modern AI applications. The modular design we’ve implemented allows for easy extension and customization while maintaining the simplicity and portability that shell scripts provide. This approach is particularly valuable for system administrators and developers who need to integrate AI capabilities into existing shell-based workflows or create lightweight chatbot solutions without heavy dependencies.

The techniques covered in this article extend beyond chatbots and can be applied to other AI-powered shell applications. The error handling patterns, configuration management, and API integration methods are reusable components that can enhance any shell scripting project. As LLMs continue to evolve and new providers emerge, the modular architecture we’ve built makes it straightforward to add support for new services or features.

Remember that while shell scripting might not be the first choice for building production chatbots at scale, it excels in scenarios where simplicity, portability, and integration with Unix tools are priorities. The chatbot we’ve built can serve as a foundation for more complex applications, or as a learning tool for understanding the mechanics of LLM API integration.
