Monday, December 29, 2025

LM Studio — a deep look at the local AI workbench

Introduction


LM Studio is a desktop application and developer platform for running large language models locally on macOS, Windows, and Linux. It wraps high-performance inference engines such as llama.cpp and Apple’s MLX in a friendly GUI and a programmatic server that mirrors the OpenAI API surface, so you can chat with models in the app, serve them to your own tools, or automate them with a CLI and SDKs. The project emphasizes privacy by default, strong developer ergonomics, and support for modern GPUs, all while staying usable for people whose first contact with local LLMs is a download button rather than a build system. The result is a hybrid of a model runner, a small LLM server, and a developer toolkit.


A short history


LM Studio’s origin story traces to the “LLaMA leak” moment when local inference moved from a niche curiosity to a community priority. In a widely read Hacker News comment, the founder identified himself as Yagil (“yags”) and described starting LM Studio in that context to make local models practical for normal users. That is firsthand but informal testimony, and it matches the product’s trajectory.

Pinning dates requires care because the company’s own posts use two anchors. In July 2025, LM Studio announced that the app is now “free for use at work” and, in that same article, referred to “launching LM Studio back in May 2023.” By contrast, the 0.3.0 release post in August 2024 says LM Studio “first shipped in May 2024” and then catalogues the big features that arrived with 0.3.x. The most reasonable reading is that the earliest versions and terms date to 2023, while the widely used, redesigned, and aggressively updated 0.3 line shipped in 2024. Since then the blog shows a rapid cadence through 2025, including speculative decoding, multi-GPU controls, a unified multimodal MLX engine, MCP host support, ROCm improvements, and support for OpenAI’s gpt-oss models.




What LM Studio is in practice


If you never touch a line of code, LM Studio looks like a native chat app that can discover, download, and run local models, then hold multi-turn conversations with them. It can also “chat with documents,” meaning it will either stuff a small file directly into the prompt or switch to retrieval-augmented generation for longer inputs. It also supports structured output, tool calling, thinking displays for reasoning models, and more, all across macOS, Windows, and Linux, with an emphasis on Apple Silicon and NVIDIA GPUs. For many users, the “killer feature” is that LM Studio can act as a local OpenAI-compatible server, so any tool that knows how to talk to the OpenAI API can instead talk to your laptop or workstation.


Under the hood: engines, runtimes, formats, memory, and GPUs


LM Studio separates the app from its inference engines using “LM Runtimes.” These include multiple variants of llama.cpp and an Apple MLX engine. The llama.cpp variants track backends such as CPU-only, CUDA, Vulkan, ROCm, and Metal, while the MLX engine targets Apple Silicon. Runtimes are packaged and updateable independently of the app, and the team added auto-updates so the latest engine improvements arrive without a full reinstall. On Apple platforms, LM Studio ships a dedicated MLX engine and, later, a unified multimodal MLX architecture that layers mlx-vlm “vision add-ons” onto mlx-lm text models for better performance and parity. On disk, GGUF is the lingua franca for llama.cpp models and MLX packages are supported for Apple. 

The server side mirrors OpenAI’s chat completions endpoints and adds an evolving REST API of its own. You can start the server in the GUI or headlessly, and you can enable just-in-time model loading so the process will pull a model into memory the first time a client asks for it. To keep memory from climbing as you switch models, you can set an idle time-to-live per model and let LM Studio auto-evict old ones before loading new ones. The programmatic story continues with a CLI called “lms,” plus official SDKs for Python and for JavaScript/TypeScript that manage connections and expose chat, embeddings, and tool-use features. All of this exists so that editors, IDE agents, and your own scripts can treat LM Studio as a local LLM service. 
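To make that concrete, here is a minimal sketch of an existing OpenAI client pointed at the local server. It assumes the server is already running (started from the app or headlessly), that it listens on LM Studio's default port of 1234, and that the model identifier shown, which is only a placeholder, matches something you have downloaded; with just-in-time loading enabled, the first request will pull that model into memory.

# Minimal sketch: reuse the standard OpenAI Python client against LM Studio's local server.
# Assumptions: the server is running at the default http://localhost:1234/v1 and the
# placeholder model identifier below corresponds to a model you actually have.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint instead of api.openai.com
    api_key="lm-studio",                  # any placeholder string; no real key is needed locally
)

response = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # hypothetical identifier; substitute one from your library
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain speculative decoding in one sentence."},
    ],
)
print(response.choices[0].message.content)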

Performance work shows up in several places. LM Studio added speculative decoding for both llama.cpp and MLX, pairing a small draft model with a larger main model to accelerate generation when token acceptance rates are high. It also introduced a multi-GPU control panel that lets you enable or disable specific GPUs, choose allocation strategies, and, on CUDA today, constrain model weights to dedicated GPU memory for more predictable performance. On Windows and Linux with NVIDIA hardware, the app tracks newer CUDA stacks, and NVIDIA’s own write-up highlighted the automatic jump to CUDA 12.8 for faster loads and inference on RTX systems. On Linux with AMD GPUs, LM Studio has moved ROCm support forward and has continued to tune both ROCm and Vulkan paths. 


Privacy, offline operation, and system requirements


The default privacy posture is simple: once a model is on your machine, the core activities of chatting, chatting with documents, and running the local server do not require the internet, and LM Studio states that nothing you type leaves your device during those operations. Connectivity is required for catalog search, for downloading models or runtimes, and for checking updates. System requirements are straightforward: Apple Silicon on macOS 13.4 or newer; Windows on x64 (with AVX2 required) or ARM; and an AppImage on recent Ubuntu for Linux. The team recommends 16 GB of RAM or more for comfortable use.


Developer surface: APIs, SDKs, MCP, presets, and import


Beyond the OpenAI-compatible endpoints, LM Studio exposes a structured-output mechanism that accepts JSON Schema and returns conforming JSON, so you can push model responses into strongly typed code paths. Tool calling is supported, and starting with 0.3.17 the app can host Model Context Protocol servers as tools, so your local models in LM Studio can safely reach out to resources the way Claude Desktop does. The SDKs make this reachable from Python and Node, while the lms CLI handles service startup, model load, and even import of models you obtained outside the app. Presets, which are JSON descriptors of model settings, can be saved locally and, in newer builds, published to a community hub for sharing and reuse. 
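As a sketch of the structured-output path, the snippet below passes a JSON Schema through the OpenAI-compatible response_format field and parses the conforming reply. The schema, model identifier, and server address are assumptions to adapt to your setup, and how faithfully the output conforms still depends on the model you load.

# Hedged sketch: structured output via the OpenAI-compatible endpoint.
# Assumes the same local server and placeholder model as before.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

book_schema = {
    "name": "book_summary",
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "year": {"type": "integer"},
            "themes": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "year", "themes"],
    },
}

response = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # placeholder identifier
    messages=[{"role": "user", "content": "Describe Frankenstein as structured data."}],
    response_format={"type": "json_schema", "json_schema": book_schema},
)

data = json.loads(response.choices[0].message.content)  # conforming JSON, ready for typed code
print(data["title"], data["year"], ", ".join(data["themes"]))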


Strengths in real use


LM Studio’s first strength is that it makes modern local LLMs accessible without taking away power. A non-developer can discover a model, run it, and ask it questions in a polished chat window; a developer can point an existing OpenAI client at localhost and keep coding. Because the server and engines are local, latency can be low and privacy is straightforward. The second strength is breadth: one application spans three OSes and multiple backends, so a MacBook with an M-series GPU and a Windows tower with an RTX GPU both work, and the Linux story includes ROCm for AMD GPUs. The third strength is performance-aware engineering: speculative decoding, multi-GPU allocation controls, and just-in-time loading with TTL and auto-evict show up where they matter. Finally, the developer surface is cohesive: you get a CLI for automation and official SDKs for both Python and JavaScript, plus a path to structured output and tool use. The pricing and licensing situation became simpler in July 2025 when the team made the app free for work use, eliminating a previous source of friction for teams experimenting with local LLMs. 


Weaknesses and common pitfalls


No serious tool avoids rough edges, and LM Studio is no exception. The core app is not open source, which matters to some organizations and makes community-level debugging of app logic less direct, even though the MLX engine it ships is open source and the SDKs are MIT-licensed. Features sometimes roll out backend by backend, so a CUDA-only control such as the dedicated-memory constraint appeared first on NVIDIA, while parity for other backends followed later or remains in progress. ROCm and Vulkan paths have historically been more temperamental across distros and driver versions than CUDA, and users have reported runtime install glitches or download loops that needed fixes in later builds. The team also calls some surfaces “new” or “in beta,” such as the evolving REST API and, earlier, the Hub for publishing presets, so you should expect changes and occasional breakage there. Finally, speculative decoding is not a magic switch; it speeds things up when draft acceptance is good and can slow things down otherwise, and multi-GPU tuning demands a working understanding of your memory budget and model sizes. All of these downsides are tractable, but they are real considerations when you plan production workflows.


Implementation details worth understanding before you deploy


A few specifics make a practical difference. Model formats matter: GGUF is the standard path for llama.cpp engines across OSes, while MLX packages target Apple Silicon. That means you will sometimes find a given model available in both forms, often with similar names and different file trees. The app’s import and directory conventions let you sideload files you got elsewhere so you are not locked into the in-app catalog. The headless service option makes LM Studio behave like any other long-running system service, which is valuable if you want your editor or agent to connect at login without manual clicks. Memory policy matters in multi-model workflows; just-in-time loading paired with TTL and auto-evict prevents a graveyard of idle models from occupying VRAM and RAM, but you should set realistic TTLs for your usage. Finally, because the app mirrors OpenAI’s routes, you can point standard OpenAI clients at the local server and even enable structured output with JSON Schema, which reduces the amount of fragile response-parsing code in your application. 


The future, as signaled by the team’s own roadmap breadcrumbs


The quickest way to anticipate LM Studio’s direction is to look at what they have shipped lately and the themes they emphasize. The unified MLX engine architecture for multimodal models suggests continued work on vision and possibly audio on Apple Silicon, with a strong focus on parity and performance between text-only and multimodal paths. The MCP host integration indicates a longer-term aim to make the app an orchestrator for safe, permissioned tool use, not just a model runner. The multi-GPU controls, CUDA 12.8 alignment, and ROCm improvements suggest an ongoing investment in squeezing performance out of heterogeneous hardware in a predictable way. The “free for work” licensing change, SDKs, and headless mode point at team use in editors and agents, with LM Studio acting as a local LLM hub behind the scenes. None of that is a promise, but taken together it implies a pragmatic roadmap: keep up with model families and engines, make the developer surface smoother, and reduce operational friction on the machines people actually own. 


Conclusion


LM Studio began life as an answer to a simple question: could running modern language models on your own machine be both powerful and approachable? The answer today is yes, with caveats that largely mirror the state of local inference itself. When it shines, it is because the engines are fast, the controls are clear, the server is compatible with tools you already use, and the privacy story is obvious. When it stumbles, it is usually at the boundaries between models, drivers, and platforms, or at the leading edge where new features reach one backend before another. If you want a practical way to explore local LLMs, script them, and plug them into your workflow without sending data to a third party, LM Studio is a serious option that keeps moving quickly. If any claim above seems uncertain, I have tried to cite the source or to say so explicitly. If you plan to adopt it for a team, test your exact GPU, driver, and model mix, enable the server in headless mode with just-in-time loading and sensible TTLs, and keep an eye on release notes, because this ecosystem evolves fast.


Sunday, December 28, 2025

CREATING A MODERN ELIZA: BUILDING HUMAN-LIKE CHATBOTS WITH LARGE LANGUAGE MODELS




INTRODUCTION


Joseph Weizenbaum's ELIZA, created in 1966 at MIT, was one of the first computer programs to engage in natural language conversation with humans. The original ELIZA used simple pattern matching and substitution rules to simulate a Rogerian psychotherapist, creating an illusion of understanding through clever text manipulation. While groundbreaking for its time, ELIZA's responses were generated through predetermined scripts and keyword recognition rather than genuine comprehension.

Today, Large Language Models (LLMs) offer unprecedented opportunities to create chatbots that can engage in more sophisticated, contextually aware conversations while maintaining the therapeutic and reflective qualities that made ELIZA so compelling. This article explores how to build a modern ELIZA-inspired chatbot that leverages the power of contemporary AI while preserving the essential characteristics that made the original so effective.

The fundamental difference between the original ELIZA and our modern approach lies in the underlying technology. Where Weizenbaum's creation relied on pattern matching against a database of rules, our LLM-based system can understand context, maintain conversation history, and generate responses that demonstrate deeper comprehension of human communication patterns.
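To make the contrast concrete, here is a tiny, purely illustrative sketch of the keyword-and-substitution style the original relied on. The rules are invented for this article and are not Weizenbaum's actual script.

# Illustrative sketch of ELIZA-style pattern matching and substitution.
# The rules below are invented for illustration, not taken from the 1966 script.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def classic_eliza_reply(text: str) -> str:
    """Return a canned reflection from the first matching rule, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(classic_eliza_reply("I am feeling anxious"))  # -> "How long have you been feeling anxious?"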


ARCHITECTURAL FOUNDATIONS


Our modern ELIZA implementation consists of several interconnected components that work together to create a seamless conversational experience. The core architecture follows clean architecture principles, separating concerns and ensuring maintainability.

The primary components include a Conversation Manager that handles session state and context, a Response Generator that interfaces with the LLM, a Memory System that maintains conversation history, and a Personality Engine that ensures consistent therapeutic behavior. Each component operates independently while contributing to the overall conversational experience.

The Conversation Manager serves as the central orchestrator, receiving user input and coordinating between other components to generate appropriate responses. It maintains session state, tracks conversation flow, and ensures that responses align with ELIZA's therapeutic persona.


import json

import uuid

from datetime import datetime

from typing import Dict, List, Optional, Any

from dataclasses import dataclass, asdict

from abc import ABC, abstractmethod


@dataclass

class ConversationTurn:

    """Represents a single turn in the conversation."""

    timestamp: datetime

    user_input: str

    bot_response: str

    emotional_state: Optional[str] = None

    key_topics: List[str] = None

    

    def __post_init__(self):

        if self.key_topics is None:

            self.key_topics = []


class ConversationManager:

    """

    Manages conversation state and coordinates between components.

    Implements clean architecture principles by serving as the main controller.

    """

    

    def __init__(self, session_id: str = None):

        self.session_id = session_id or str(uuid.uuid4())

        self.conversation_history: List[ConversationTurn] = []

        self.session_metadata = {

            'start_time': datetime.now(),

            'turn_count': 0,

            'primary_topics': [],

            'emotional_trajectory': []

        }

    

    def add_turn(self, user_input: str, bot_response: str, 

                 emotional_state: str = None, key_topics: List[str] = None) -> None:

        """Add a new conversation turn to the history."""

        turn = ConversationTurn(

            timestamp=datetime.now(),

            user_input=user_input,

            bot_response=bot_response,

            emotional_state=emotional_state,

            key_topics=key_topics or []

        )

        

        self.conversation_history.append(turn)

        self.session_metadata['turn_count'] += 1

        

        # Update session-level tracking

        if key_topics:

            self.session_metadata['primary_topics'].extend(key_topics)

        if emotional_state:

            self.session_metadata['emotional_trajectory'].append(emotional_state)

    

    def get_recent_context(self, turns: int = 5) -> List[ConversationTurn]:

        """Retrieve recent conversation turns for context."""

        return self.conversation_history[-turns:] if self.conversation_history else []

    

    def get_session_summary(self) -> Dict[str, Any]:

        """Generate a summary of the current session."""

        return {

            'session_id': self.session_id,

            'duration_minutes': (datetime.now() - self.session_metadata['start_time']).total_seconds() / 60,

            'total_turns': self.session_metadata['turn_count'],

            'primary_topics': list(set(self.session_metadata['primary_topics'])),

            'emotional_progression': self.session_metadata['emotional_trajectory']

        }


The Response Generator interfaces with the LLM to produce contextually appropriate responses. Unlike the original ELIZA's rigid pattern matching, this component can understand nuanced input and generate responses that demonstrate genuine comprehension while maintaining the therapeutic stance that characterized the original.


class LLMInterface(ABC):

    """Abstract interface for LLM providers to ensure flexibility."""

    

    @abstractmethod

    def generate_response(self, prompt: str, max_tokens: int = 150) -> str:

        pass

    

    @abstractmethod

    def analyze_sentiment(self, text: str) -> Dict[str, float]:

        pass


class OpenAILLMInterface(LLMInterface):

    """

    Concrete implementation for OpenAI's GPT models.

    Handles API communication and response processing.

    """

    

    def __init__(self, api_key: str, model: str = "gpt-3.5-turbo"):

        self.api_key = api_key

        self.model = model

        # In a real implementation, you would initialize the OpenAI client here

        

    def generate_response(self, prompt: str, max_tokens: int = 150) -> str:

        """

        Generate response using OpenAI's API.

        In production, this would make actual API calls.

        """

        # Simulated response for demonstration

        # A real implementation would call the OpenAI client's chat.completions.create()

        return "I understand you're sharing something important with me. Can you tell me more about how that makes you feel?"

    

    def analyze_sentiment(self, text: str) -> Dict[str, float]:

        """Analyze emotional content of user input."""

        # Simplified sentiment analysis

        # Real implementation would use proper sentiment analysis

        return {

            'positive': 0.3,

            'negative': 0.1,

            'neutral': 0.6,

            'confidence': 0.8

        }


class ResponseGenerator:

    """

    Generates contextually appropriate responses using LLM capabilities.

    Maintains ELIZA's therapeutic persona while leveraging modern AI.

    """

    

    def __init__(self, llm_interface: LLMInterface):

        self.llm = llm_interface

        self.base_persona = self._load_persona_template()

    

    def _load_persona_template(self) -> str:

        """Load the core personality template for ELIZA."""

        return """You are ELIZA, a compassionate and reflective conversational partner inspired by Carl Rogers' person-centered therapy approach. Your responses should:


1. Demonstrate active listening through reflection and paraphrasing

2. Ask open-ended questions that encourage deeper exploration

3. Avoid giving direct advice, instead helping users discover their own insights

4. Show empathy and unconditional positive regard

5. Use therapeutic techniques like clarification and summarization

6. Maintain appropriate boundaries while being genuinely helpful


Remember to be warm, non-judgmental, and focused on understanding the user's perspective."""

    

    def generate_response(self, user_input: str, conversation_context: List[ConversationTurn]) -> Dict[str, Any]:

        """

        Generate a response that maintains ELIZA's therapeutic approach

        while leveraging LLM capabilities for better understanding.

        """

        

        # Analyze user input for emotional content

        sentiment = self.llm.analyze_sentiment(user_input)

        

        # Build context-aware prompt

        context_summary = self._build_context_summary(conversation_context)

        

        prompt = f"""{self.base_persona}


Conversation Context:

{context_summary}


User's latest message: "{user_input}"


Current emotional tone detected: {self._interpret_sentiment(sentiment)}


Please provide a response that acknowledges what the user has shared, reflects their emotional state appropriately, and gently encourages further exploration of their thoughts and feelings."""


        # Generate response using LLM

        response_text = self.llm.generate_response(prompt)

        

        # Extract key topics and emotional indicators

        key_topics = self._extract_key_topics(user_input)

        emotional_state = self._determine_emotional_state(sentiment)

        

        return {

            'response': response_text,

            'emotional_state': emotional_state,

            'key_topics': key_topics,

            'confidence': sentiment.get('confidence', 0.5)

        }

    

    def _build_context_summary(self, context: List[ConversationTurn]) -> str:

        """Create a concise summary of recent conversation for context."""

        if not context:

            return "This is the beginning of our conversation."

        

        summary_parts = []

        for turn in context[-3:]:  # Use last 3 turns for context

            summary_parts.append(f"User said: {turn.user_input[:100]}...")

            summary_parts.append(f"You responded: {turn.bot_response[:100]}...")

        

        return "\n".join(summary_parts)

    

    def _interpret_sentiment(self, sentiment: Dict[str, float]) -> str:

        """Convert sentiment scores to descriptive text."""

        if sentiment['positive'] > 0.6:

            return "positive and upbeat"

        elif sentiment['negative'] > 0.6:

            return "troubled or distressed"

        else:

            return "neutral and contemplative"

    

    def _extract_key_topics(self, text: str) -> List[str]:

        """Extract main topics from user input."""

        # Simplified topic extraction

        # Real implementation would use NLP libraries or LLM-based extraction

        common_topics = ['family', 'work', 'relationships', 'anxiety', 'depression', 'goals', 'fears']

        found_topics = [topic for topic in common_topics if topic.lower() in text.lower()]

        return found_topics

    

    def _determine_emotional_state(self, sentiment: Dict[str, float]) -> str:

        """Determine primary emotional state from sentiment analysis."""

        if sentiment['positive'] > sentiment['negative']:

            return 'positive'

        elif sentiment['negative'] > sentiment['positive']:

            return 'negative'

        else:

            return 'neutral'


MEMORY AND CONTEXT MANAGEMENT


One of the key advantages of our modern approach is the ability to maintain sophisticated memory of past interactions. While the original ELIZA had no memory beyond simple keyword tracking, our system can remember themes, emotional patterns, and important details across sessions.

The Memory System component handles both short-term conversational context and long-term session memory. This allows the chatbot to reference previous discussions, track emotional progression, and maintain continuity that creates a more human-like interaction experience.


from collections import defaultdict

import pickle

from pathlib import Path


class MemorySystem:

    """

    Manages both short-term and long-term memory for the chatbot.

    Enables continuity and personalization across conversations.

    """

    

    def __init__(self, storage_path: str = "eliza_memory"):

        self.storage_path = Path(storage_path)

        self.storage_path.mkdir(exist_ok=True)

        

        # Short-term memory (current session)

        self.working_memory = {

            'key_phrases': [],

            'emotional_patterns': [],

            'important_topics': defaultdict(int),

            'user_preferences': {}

        }

        

        # Long-term memory (persistent across sessions)

        self.long_term_memory = self._load_long_term_memory()

    

    def _load_long_term_memory(self) -> Dict[str, Any]:

        """Load persistent memory from storage."""

        memory_file = self.storage_path / "long_term_memory.pkl"

        if memory_file.exists():

            try:

                with open(memory_file, 'rb') as f:

                    return pickle.load(f)

            except Exception as e:

                print(f"Error loading memory: {e}")

                return self._initialize_long_term_memory()

        return self._initialize_long_term_memory()

    

    def _initialize_long_term_memory(self) -> Dict[str, Any]:

        """Initialize empty long-term memory structure."""

        return {

            'user_profile': {

                'recurring_themes': defaultdict(int),

                'emotional_baseline': 'neutral',

                'communication_style': 'unknown',

                'preferred_topics': []

            },

            'session_history': [],

            'significant_moments': [],

            'therapeutic_progress': {

                'insights_gained': [],

                'coping_strategies_discussed': [],

                'goals_identified': []

            }

        }

    

    def update_working_memory(self, user_input: str, bot_response: str, 

                            emotional_state: str, key_topics: List[str]) -> None:

        """Update short-term memory with current interaction."""

        

        # Track key phrases from user input

        significant_phrases = self._extract_significant_phrases(user_input)

        self.working_memory['key_phrases'].extend(significant_phrases)

        

        # Track emotional patterns

        self.working_memory['emotional_patterns'].append({

            'state': emotional_state,

            'timestamp': datetime.now(),

            'trigger_phrase': user_input[:50]

        })

        

        # Update topic frequency

        for topic in key_topics:

            self.working_memory['important_topics'][topic] += 1

        

        # Detect user preferences from interaction patterns

        self._update_user_preferences(user_input, bot_response)

    

    def _extract_significant_phrases(self, text: str) -> List[str]:

        """Extract emotionally or contextually significant phrases."""

        # Simplified phrase extraction

        # Real implementation would use more sophisticated NLP

        significant_indicators = [

            "I feel", "I think", "I believe", "I'm worried", "I'm excited",

            "I remember", "I hope", "I fear", "I love", "I hate"

        ]

        

        phrases = []

        text_lower = text.lower()

        for indicator in significant_indicators:

            if indicator in text_lower:

                # Extract the phrase following the indicator

                start_idx = text_lower.find(indicator)

                end_idx = min(start_idx + 100, len(text))

                phrase = text[start_idx:end_idx].strip()

                phrases.append(phrase)

        

        return phrases

    

    def _update_user_preferences(self, user_input: str, bot_response: str) -> None:

        """Learn user communication preferences from interactions."""

        

        # Detect if user prefers direct questions vs. open-ended reflection

        if "?" in user_input:

            self.working_memory['user_preferences']['asks_questions'] = True

        

        # Detect preference for emotional vs. analytical discussion

        emotional_words = ['feel', 'emotion', 'heart', 'soul', 'love', 'fear']

        analytical_words = ['think', 'analyze', 'logic', 'reason', 'plan', 'strategy']

        

        emotional_count = sum(1 for word in emotional_words if word in user_input.lower())

        analytical_count = sum(1 for word in analytical_words if word in user_input.lower())

        

        if emotional_count > analytical_count:

            self.working_memory['user_preferences']['communication_style'] = 'emotional'

        elif analytical_count > emotional_count:

            self.working_memory['user_preferences']['communication_style'] = 'analytical'

    

    def get_relevant_context(self, current_topic: str) -> Dict[str, Any]:

        """Retrieve relevant context for the current topic."""

        

        context = {

            'related_past_discussions': [],

            'emotional_history': [],

            'user_insights': []

        }

        

        # Find related past discussions

        for phrase in self.working_memory['key_phrases']:

            if current_topic.lower() in phrase.lower():

                context['related_past_discussions'].append(phrase)

        

        # Get emotional history for this topic

        topic_emotions = [

            pattern for pattern in self.working_memory['emotional_patterns']

            if current_topic.lower() in pattern['trigger_phrase'].lower()

        ]

        context['emotional_history'] = topic_emotions

        

        # Check long-term memory for insights

        if current_topic in self.long_term_memory['therapeutic_progress']['insights_gained']:

            context['user_insights'] = [

                insight for insight in self.long_term_memory['therapeutic_progress']['insights_gained']

                if current_topic.lower() in insight.lower()

            ]

        

        return context

    

    def consolidate_session_memory(self, conversation_manager: ConversationManager) -> None:

        """Move important information from working memory to long-term storage."""

        

        session_summary = conversation_manager.get_session_summary()

        

        # Update long-term user profile

        for topic, frequency in self.working_memory['important_topics'].items():

            self.long_term_memory['user_profile']['recurring_themes'][topic] += frequency

        

        # Store significant emotional patterns

        if self.working_memory['emotional_patterns']:

            dominant_emotion = max(

                set(p['state'] for p in self.working_memory['emotional_patterns']),

                key=lambda x: sum(1 for p in self.working_memory['emotional_patterns'] if p['state'] == x)

            )

            self.long_term_memory['user_profile']['emotional_baseline'] = dominant_emotion

        

        # Save session to history

        self.long_term_memory['session_history'].append({

            'session_summary': session_summary,

            'key_insights': self.working_memory['key_phrases'][:5],  # Top 5 insights

            'emotional_journey': self.working_memory['emotional_patterns']

        })

        

        # Persist to storage

        self._save_long_term_memory()

    

    def _save_long_term_memory(self) -> None:

        """Save long-term memory to persistent storage."""

        memory_file = self.storage_path / "long_term_memory.pkl"

        try:

            with open(memory_file, 'wb') as f:

                pickle.dump(self.long_term_memory, f)

        except Exception as e:

            print(f"Error saving memory: {e}")


PERSONALITY ENGINE AND THERAPEUTIC APPROACH


The Personality Engine ensures that our modern ELIZA maintains the therapeutic qualities that made the original so effective. This component goes beyond simple response generation to implement specific therapeutic techniques and maintain consistent personality traits across all interactions.

The engine incorporates principles from person-centered therapy, including unconditional positive regard, empathetic understanding, and genuineness. It also implements specific conversational techniques such as reflection, clarification, and gentle challenging that are hallmarks of effective therapeutic communication.


from enum import Enum

from typing import Tuple

import re


class TherapeuticTechnique(Enum):

    """Enumeration of therapeutic techniques ELIZA can employ."""

    REFLECTION = "reflection"

    CLARIFICATION = "clarification"

    SUMMARIZATION = "summarization"

    OPEN_ENDED_QUESTION = "open_ended_question"

    EMPATHETIC_RESPONSE = "empathetic_response"

    GENTLE_CHALLENGE = "gentle_challenge"

    NORMALIZATION = "normalization"


class PersonalityEngine:

    """

    Implements ELIZA's therapeutic personality and conversational techniques.

    Ensures consistent application of person-centered therapy principles.

    """

    

    def __init__(self):

        self.core_values = {

            'unconditional_positive_regard': True,

            'empathetic_understanding': True,

            'genuineness': True,

            'non_directive_approach': True

        }

        

        self.technique_patterns = self._initialize_technique_patterns()

        self.response_templates = self._load_response_templates()

    

    def _initialize_technique_patterns(self) -> Dict[TherapeuticTechnique, List[str]]:

        """Initialize patterns for recognizing when to use specific techniques."""

        return {

            TherapeuticTechnique.REFLECTION: [

                r"I feel (.+)",

                r"I am (.+)",

                r"I think (.+)",

                r"It makes me (.+)"

            ],

            TherapeuticTechnique.CLARIFICATION: [

                r"I don't know",

                r"I'm confused",

                r"I'm not sure",

                r"Maybe"

            ],

            TherapeuticTechnique.OPEN_ENDED_QUESTION: [

                r"(.+) happened",

                r"(.+) is difficult",

                r"I want (.+)",

                r"I need (.+)"

            ],

            TherapeuticTechnique.EMPATHETIC_RESPONSE: [

                r"(.+) hurt",

                r"(.+) sad",

                r"(.+) angry",

                r"(.+) scared"

            ]

        }

    

    def _load_response_templates(self) -> Dict[TherapeuticTechnique, List[str]]:

        """Load response templates for each therapeutic technique."""

        return {

            TherapeuticTechnique.REFLECTION: [

                "It sounds like you're feeling {emotion} about {situation}.",

                "You seem to be experiencing {emotion} when {situation}.",

                "I hear that {situation} brings up feelings of {emotion} for you."

            ],

            TherapeuticTechnique.CLARIFICATION: [

                "Can you help me understand what you mean by {unclear_term}?",

                "I'd like to better understand {topic}. Could you tell me more?",

                "When you say {phrase}, what does that look like for you?"

            ],

            TherapeuticTechnique.OPEN_ENDED_QUESTION: [

                "What thoughts come up for you when you think about {topic}?",

                "How does {situation} affect you?",

                "What would it mean to you if {desired_outcome} happened?"

            ],

            TherapeuticTechnique.EMPATHETIC_RESPONSE: [

                "That sounds really {emotion_adjective}. It takes courage to share that.",

                "I can imagine how {emotion_adjective} that must be for you.",

                "It's understandable that you would feel {emotion} in that situation."

            ],

            TherapeuticTechnique.GENTLE_CHALLENGE: [

                "I'm curious about {assumption}. What makes you think that?",

                "You mentioned {belief}. Have you always felt this way?",

                "I wonder if there might be another way to look at {situation}?"

            ],

            TherapeuticTechnique.NORMALIZATION: [

                "Many people struggle with {issue}. You're not alone in feeling this way.",

                "What you're experiencing with {situation} is quite common.",

                "It's natural to feel {emotion} when dealing with {situation}."

            ]

        }

    

    def select_therapeutic_approach(self, user_input: str, emotional_state: str, 

                                  conversation_context: List[ConversationTurn]) -> Tuple[TherapeuticTechnique, Dict[str, str]]:

        """

        Select the most appropriate therapeutic technique based on user input and context.

        Returns the technique and extracted parameters for response generation.

        """

        

        user_input_lower = user_input.lower()

        

        # Check for emotional distress - prioritize empathetic response

        distress_indicators = ['hurt', 'pain', 'sad', 'angry', 'scared', 'anxious', 'depressed']

        if any(indicator in user_input_lower for indicator in distress_indicators):

            emotion = self._extract_emotion(user_input)

            return TherapeuticTechnique.EMPATHETIC_RESPONSE, {'emotion': emotion, 'emotion_adjective': f'{emotion}'}

        

        # Check for confusion or uncertainty - use clarification

        uncertainty_indicators = ['confused', 'not sure', 'don\'t know', 'maybe', 'unclear']

        if any(indicator in user_input_lower for indicator in uncertainty_indicators):

            unclear_term = self._extract_unclear_concept(user_input)

            return TherapeuticTechnique.CLARIFICATION, {'unclear_term': unclear_term, 'topic': unclear_term}

        

        # Check for feeling statements - use reflection

        feeling_patterns = [r"I feel (.+)", r"I am (.+)", r"It makes me (.+)"]

        for pattern in feeling_patterns:

            match = re.search(pattern, user_input, re.IGNORECASE)

            if match:

                feeling_content = match.group(1)

                emotion, situation = self._parse_feeling_statement(feeling_content)

                return TherapeuticTechnique.REFLECTION, {'emotion': emotion, 'situation': situation}

        

        # Check conversation frequency for technique variety

        recent_techniques = self._get_recent_techniques(conversation_context)

        

        # Avoid overusing the same technique

        if len(recent_techniques) >= 2 and len(set(recent_techniques[-2:])) == 1:

            # Switch to a different technique

            if recent_techniques[-1] == TherapeuticTechnique.REFLECTION:

                topic = self._extract_main_topic(user_input)

                return TherapeuticTechnique.OPEN_ENDED_QUESTION, {'topic': topic, 'situation': topic}

            else:

                emotion, situation = self._parse_feeling_statement(user_input)

                return TherapeuticTechnique.REFLECTION, {'emotion': emotion, 'situation': situation}

        

        # Default to open-ended questions to encourage exploration

        topic = self._extract_main_topic(user_input)

        return TherapeuticTechnique.OPEN_ENDED_QUESTION, {'topic': topic, 'situation': topic, 'desired_outcome': 'positive change'}

    

    def _extract_emotion(self, text: str) -> str:

        """Extract emotional words from user input."""

        emotion_words = {

            'sad': ['sad', 'depressed', 'down', 'blue', 'melancholy'],

            'angry': ['angry', 'mad', 'furious', 'irritated', 'annoyed'],

            'anxious': ['anxious', 'worried', 'nervous', 'scared', 'afraid'],

            'happy': ['happy', 'joyful', 'excited', 'pleased', 'content'],

            'confused': ['confused', 'lost', 'uncertain', 'unclear']

        }

        

        text_lower = text.lower()

        for emotion, synonyms in emotion_words.items():

            if any(synonym in text_lower for synonym in synonyms):

                return emotion

        

        return 'uncertain'

    

    def _extract_unclear_concept(self, text: str) -> str:

        """Extract the concept the user is unclear about."""

        # Simple extraction - look for nouns after uncertainty indicators

        uncertainty_phrases = ['not sure about', 'confused about', 'unclear on', 'don\'t understand']

        

        for phrase in uncertainty_phrases:

            if phrase in text.lower():

                start_idx = text.lower().find(phrase) + len(phrase)

                remaining_text = text[start_idx:].strip()

                # Extract first few words as the unclear concept

                words = remaining_text.split()[:3]

                return ' '.join(words) if words else 'that'

        

        return 'what you mentioned'

    

    def _parse_feeling_statement(self, feeling_content: str) -> Tuple[str, str]:

        """Parse a feeling statement to extract emotion and situation."""

        # Simple parsing - split on common connectors

        connectors = [' when ', ' because ', ' about ', ' that ']

        

        emotion = feeling_content

        situation = 'this situation'

        

        for connector in connectors:

            if connector in feeling_content.lower():

                parts = feeling_content.split(connector, 1)

                emotion = parts[0].strip()

                situation = parts[1].strip() if len(parts) > 1 else situation

                break

        

        return emotion, situation

    

    def _extract_main_topic(self, text: str) -> str:

        """Extract the main topic from user input."""

        # Simple topic extraction - look for key nouns

        common_topics = [

            'work', 'job', 'career', 'family', 'relationship', 'friend', 'partner',

            'health', 'money', 'future', 'past', 'decision', 'choice', 'problem'

        ]

        

        text_lower = text.lower()

        for topic in common_topics:

            if topic in text_lower:

                return topic

        

        # Default to extracting first noun-like word

        words = text.split()

        for word in words:

            if len(word) > 3 and word.isalpha():

                return word.lower()

        

        return 'this'

    

    def _get_recent_techniques(self, conversation_context: List[ConversationTurn]) -> List[TherapeuticTechnique]:

        """Extract therapeutic techniques used in recent conversation."""

        # This would typically be stored with each turn

        # For now, we'll return empty list as this is a simplified implementation

        return []

    

    def generate_therapeutic_response(self, technique: TherapeuticTechnique, 

                                    parameters: Dict[str, str]) -> str:

        """Generate a response using the specified therapeutic technique."""

        

        templates = self.response_templates.get(technique, [])

        if not templates:

            return "I'd like to understand more about what you're experiencing."

        

        # Select template based on available parameters

        selected_template = templates[0]  # Simple selection for demonstration

        

        try:

            # Format template with extracted parameters

            response = selected_template.format(**parameters)

            return response

        except KeyError as e:

            # Fallback if parameter is missing

            return f"I hear what you're saying about {parameters.get('topic', 'this situation')}. Can you tell me more?"

    

    def ensure_therapeutic_boundaries(self, response: str) -> str:

        """

        Ensure the response maintains appropriate therapeutic boundaries.

        Removes advice-giving and maintains non-directive approach.

        """

        

        # Remove directive language

        directive_patterns = [

            r"You should (.+)",

            r"You need to (.+)",

            r"You must (.+)",

            r"I recommend (.+)",

            r"My advice is (.+)"

        ]

        

        for pattern in directive_patterns:

            response = re.sub(pattern, r"Have you considered \1?", response, flags=re.IGNORECASE)

        

        # Ensure questions end with question marks

        if response.strip() and not response.strip().endswith(('?', '.', '!')):

            response += '?'

        

        return response


INTEGRATION AND ORCHESTRATION


The main ELIZA class brings together all components to create a cohesive conversational experience. This orchestrator manages the flow between components while maintaining the therapeutic persona and ensuring smooth interactions.


class ModernEliza:

    """

    Main orchestrator class that brings together all components

    to create a modern ELIZA-inspired therapeutic chatbot.

    """

    

    def __init__(self, llm_interface: LLMInterface, storage_path: str = "eliza_sessions"):

        self.llm_interface = llm_interface

        self.storage_path = Path(storage_path)

        self.storage_path.mkdir(exist_ok=True)

        

        # Initialize components

        self.conversation_manager = None

        self.response_generator = ResponseGenerator(llm_interface)

        self.memory_system = MemorySystem(str(self.storage_path / "memory"))

        self.personality_engine = PersonalityEngine()

        

        # Session state

        self.is_session_active = False

        self.greeting_given = False

    

    def start_session(self, session_id: str = None) -> str:

        """

        Start a new conversation session.

        Returns the opening greeting.

        """

        self.conversation_manager = ConversationManager(session_id)

        self.is_session_active = True

        self.greeting_given = True

        

        # Load any relevant long-term memory

        user_profile = self.memory_system.long_term_memory.get('user_profile', {})

        recurring_themes = user_profile.get('recurring_themes', {})

        

        # Personalize greeting if we have history

        if recurring_themes:

            main_theme = max(recurring_themes.items(), key=lambda x: x[1])[0]

            greeting = f"Hello again. I remember we've talked about {main_theme} before. How are you feeling today?"

        else:

            greeting = "Hello. I'm ELIZA, and I'm here to listen and understand. What's on your mind today?"

        

        return greeting

    

    def process_input(self, user_input: str) -> str:

        """

        Process user input and generate an appropriate response.

        This is the main interaction method.

        """

        

        if not self.is_session_active:

            return self.start_session()

        

        # Handle session management commands

        if user_input.lower().strip() in ['goodbye', 'bye', 'exit', 'quit']:

            return self._end_session()

        

        # Get conversation context

        recent_context = self.conversation_manager.get_recent_context()

        

        # Generate initial response using LLM

        llm_response_data = self.response_generator.generate_response(user_input, recent_context)

        

        # Apply therapeutic personality and techniques

        technique, parameters = self.personality_engine.select_therapeutic_approach(

            user_input, 

            llm_response_data['emotional_state'], 

            recent_context

        )

        

        # Generate therapeutic response

        therapeutic_response = self.personality_engine.generate_therapeutic_response(technique, parameters)

        

        # Ensure therapeutic boundaries

        final_response = self.personality_engine.ensure_therapeutic_boundaries(therapeutic_response)

        

        # Update memory systems

        self.memory_system.update_working_memory(

            user_input, 

            final_response, 

            llm_response_data['emotional_state'],

            llm_response_data['key_topics']

        )

        

        # Add to conversation history

        self.conversation_manager.add_turn(

            user_input, 

            final_response,

            llm_response_data['emotional_state'],

            llm_response_data['key_topics']

        )

        

        return final_response

    

    def _end_session(self) -> str:

        """

        End the current session and consolidate memory.

        """

        if not self.is_session_active:

            return "We haven't started our conversation yet. Would you like to begin?"

        

        # Consolidate session memory

        self.memory_system.consolidate_session_memory(self.conversation_manager)

        

        # Generate closing response

        session_summary = self.conversation_manager.get_session_summary()

        primary_topics = session_summary.get('primary_topics', [])

        

        if primary_topics:

            closing = f"Thank you for sharing your thoughts about {', '.join(primary_topics[:2])} with me today. "

        else:

            closing = "Thank you for our conversation today. "

        

        closing += "Remember, you have the strength to work through whatever you're facing. Take care."

        

        # Reset session state

        self.is_session_active = False

        self.greeting_given = False

        self.conversation_manager = None

        

        return closing

    

    def get_session_insights(self) -> Dict[str, Any]:

        """

        Get insights about the current session for analysis or debugging.

        """

        if not self.conversation_manager:

            return {'error': 'No active session'}

        

        session_summary = self.conversation_manager.get_session_summary()

        memory_context = self.memory_system.working_memory

        

        return {

            'session_summary': session_summary,

            'key_topics_frequency': dict(memory_context['important_topics']),

            'emotional_progression': [p['state'] for p in memory_context['emotional_patterns']],

            'significant_phrases': memory_context['key_phrases'][-5:],  # Last 5 phrases

            'user_preferences': memory_context['user_preferences']

        }

    

    def save_session(self, filename: str = None) -> str:

        """

        Save the current session to a file for later analysis.

        """

        if not self.conversation_manager:

            return "No active session to save."

        

        if filename is None:

            filename = f"session_{self.conversation_manager.session_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"

        

        session_data = {

            'session_metadata': self.conversation_manager.get_session_summary(),

            'conversation_history': [asdict(turn) for turn in self.conversation_manager.conversation_history],

            'insights': self.get_session_insights()

        }

        

        filepath = self.storage_path / filename

        try:

            with open(filepath, 'w', encoding='utf-8') as f:

                json.dump(session_data, f, indent=2, default=str)

            return f"Session saved to {filepath}"

        except Exception as e:

            return f"Error saving session: {e}"


ADVANCED FEATURES AND CONSIDERATIONS


Our modern ELIZA implementation includes several advanced features that extend beyond the original's capabilities. These include emotional intelligence, adaptive personality, and sophisticated context awareness that enable more nuanced and helpful interactions.

The emotional intelligence component analyzes not just the words users say, but the emotional undertones and patterns in their communication. This allows ELIZA to respond more appropriately to the user's emotional state and track emotional changes over time.

Adaptive personality means that ELIZA can adjust its communication style based on what works best for each individual user. Some users respond better to direct questions, while others prefer gentle reflection. The system learns these preferences and adapts accordingly.

Context awareness extends beyond simple keyword matching to understand the deeper themes and connections in conversations. This enables ELIZA to make meaningful connections between different parts of the conversation and provide more coherent, helpful responses.

The system also includes safety features to recognize when users might be in crisis and need professional help. While ELIZA can provide supportive conversation, it's important that it recognizes its limitations and can guide users to appropriate resources when necessary.
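A minimal sketch of such a safety check appears below. The keyword list and referral message are placeholders only; a real deployment would use clinically reviewed criteria and region-appropriate resources, not this illustrative list.

from typing import Optional

# Illustrative crisis-detection check. The indicators and the referral text are
# placeholders, not clinically validated content.
CRISIS_INDICATORS = ["hurt myself", "kill myself", "end my life", "suicide", "can't go on"]

CRISIS_RESPONSE = (
    "I'm really glad you told me, and I want you to be safe. "
    "I'm not able to provide crisis support, but a trained counselor can. "
    "Please consider reaching out to a local crisis line or emergency services right now."
)

def check_for_crisis(user_input: str) -> Optional[str]:
    """Return a supportive referral message if the input suggests acute crisis, else None."""
    text = user_input.lower()
    if any(indicator in text for indicator in CRISIS_INDICATORS):
        return CRISIS_RESPONSE
    return None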

Error handling and graceful degradation ensure that the system continues to function even when components fail or when unexpected input is received. This robustness is essential for maintaining the therapeutic relationship even when technical issues arise.
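One way to express that robustness is sketched below, assuming the ModernEliza class defined earlier and the check_for_crisis helper from the previous sketch: a thin wrapper that routes crisis messages to the safety response and falls back to a gentle, in-persona reply when any component raises.

import logging

logger = logging.getLogger("modern_eliza")

# Illustrative fallback text; a production system would tune this to its persona.
FALLBACK_RESPONSE = (
    "I'm having a little trouble gathering my thoughts just now, but I'm still here with you. "
    "Could you say that again, perhaps in a different way?"
)

def safe_process_input(eliza: "ModernEliza", user_input: str) -> str:
    """Process input without ever letting an internal error break the conversation."""
    # Route acute-crisis messages to the safety response before anything else.
    crisis_message = check_for_crisis(user_input)
    if crisis_message is not None:
        return crisis_message
    try:
        return eliza.process_input(user_input)
    except Exception:
        # Log the failure for later debugging, but keep the conversation going.
        logger.exception("Response pipeline failed; returning fallback reply")
        return FALLBACK_RESPONSE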


COMPLETE WORKING EXAMPLE


Here is a complete, runnable implementation that demonstrates all the concepts discussed:



#!/usr/bin/env python3
"""
Modern ELIZA: A therapeutic chatbot inspired by Weizenbaum's ELIZA
but powered by Large Language Models and modern AI techniques.
This implementation demonstrates clean architecture principles,
therapeutic conversation techniques, and sophisticated memory management.
"""
import json
import uuid
import pickle
import re
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional, Any, Tuple
from dataclasses import dataclass, asdict
from abc import ABC, abstractmethod
from collections import defaultdict
from enum import Enum
# ============================================================================
# Data Models and Core Structures
# ============================================================================
@dataclass
class ConversationTurn:
    """Represents a single turn in the conversation with metadata."""
    timestamp: datetime
    user_input: str
    bot_response: str
    emotional_state: Optional[str] = None
    key_topics: List[str] = None
    therapeutic_technique: Optional[str] = None
    
    def __post_init__(self):
        if self.key_topics is None:
            self.key_topics = []
class TherapeuticTechnique(Enum):
    """Therapeutic techniques that ELIZA can employ."""
    REFLECTION = "reflection"
    CLARIFICATION = "clarification"
    SUMMARIZATION = "summarization"
    OPEN_ENDED_QUESTION = "open_ended_question"
    EMPATHETIC_RESPONSE = "empathetic_response"
    GENTLE_CHALLENGE = "gentle_challenge"
    NORMALIZATION = "normalization"
    SUPPORTIVE_STATEMENT = "supportive_statement"
# ============================================================================
# LLM Interface Layer
# ============================================================================
class LLMInterface(ABC):
    """Abstract interface for LLM providers to ensure flexibility."""
    
    @abstractmethod
    def generate_response(self, prompt: str, max_tokens: int = 150) -> str:
        pass
    
    @abstractmethod
    def analyze_sentiment(self, text: str) -> Dict[str, float]:
        pass
class MockLLMInterface(LLMInterface):
    """
    Mock LLM interface for demonstration purposes.
    In production, this would be replaced with actual LLM API calls.
    """
    
    def __init__(self):
        self.response_templates = [
            "I understand that you're sharing something important with me. Can you tell me more about how that makes you feel?",
            "It sounds like this situation is significant for you. What thoughts come up when you think about it?",
            "I hear what you're saying. How long have you been feeling this way?",
            "That's a lot to process. What would it mean to you if things were different?",
            "I can sense this is meaningful to you. What do you think might help you move forward?"
        ]
        self.sentiment_keywords = {
            'positive': ['happy', 'good', 'great', 'wonderful', 'excited', 'love', 'joy'],
            'negative': ['sad', 'bad', 'terrible', 'awful', 'hate', 'angry', 'depressed', 'anxious'],
            'neutral': ['okay', 'fine', 'normal', 'usual', 'regular']
        }
    
    def generate_response(self, prompt: str, max_tokens: int = 150) -> str:
        """Generate a mock response based on prompt analysis."""
        # Simple response selection based on prompt content
        prompt_lower = prompt.lower()
        
        if 'emotional' in prompt_lower or 'feeling' in prompt_lower:
            return "I can hear the emotion in what you're sharing. These feelings are important and valid."
        elif 'question' in prompt_lower or 'confused' in prompt_lower:
            return "It's natural to have questions and feel uncertain sometimes. What would help clarify things for you?"
        elif 'past' in prompt_lower or 'remember' in prompt_lower:
            return "Our past experiences shape us in important ways. How do you think this experience has affected you?"
        else:
            # Return a random template response
            import random
            return random.choice(self.response_templates)
    
    def analyze_sentiment(self, text: str) -> Dict[str, float]:
        """Analyze sentiment using keyword matching."""
        text_lower = text.lower()
        scores = {'positive': 0.0, 'negative': 0.0, 'neutral': 0.0}
        
        for sentiment, keywords in self.sentiment_keywords.items():
            score = sum(1 for keyword in keywords if keyword in text_lower)
            scores[sentiment] = min(score / 10.0, 1.0)  # Normalize to 0-1
        
        # If no sentiment detected, default to neutral
        if all(score == 0 for score in scores.values()):
            scores['neutral'] = 0.5
        
        # Add confidence based on total sentiment detected
        total_sentiment = sum(scores.values())
        confidence = min(total_sentiment, 1.0)
        
        return {**scores, 'confidence': confidence}
# ============================================================================
# Conversation Management
# ============================================================================
class ConversationManager:
    """
    Manages conversation state and coordinates between components.
    Implements clean architecture principles by serving as the main controller.
    """
    
    def __init__(self, session_id: str = None):
        self.session_id = session_id or str(uuid.uuid4())
        self.conversation_history: List[ConversationTurn] = []
        self.session_metadata = {
            'start_time': datetime.now(),
            'turn_count': 0,
            'primary_topics': [],
            'emotional_trajectory': [],
            'techniques_used': []
        }
    
    def add_turn(self, user_input: str, bot_response: str, 
                 emotional_state: str = None, key_topics: List[str] = None,
                 technique: TherapeuticTechnique = None) -> None:
        """Add a new conversation turn to the history."""
        turn = ConversationTurn(
            timestamp=datetime.now(),
            user_input=user_input,
            bot_response=bot_response,
            emotional_state=emotional_state,
            key_topics=key_topics or [],
            therapeutic_technique=technique.value if technique else None
        )
        
        self.conversation_history.append(turn)
        self.session_metadata['turn_count'] += 1
        
        # Update session-level tracking
        if key_topics:
            self.session_metadata['primary_topics'].extend(key_topics)
        if emotional_state:
            self.session_metadata['emotional_trajectory'].append(emotional_state)
        if technique:
            self.session_metadata['techniques_used'].append(technique.value)
    
    def get_recent_context(self, turns: int = 5) -> List[ConversationTurn]:
        """Retrieve recent conversation turns for context."""
        return self.conversation_history[-turns:] if self.conversation_history else []
    
    def get_session_summary(self) -> Dict[str, Any]:
        """Generate a comprehensive summary of the current session."""
        duration = (datetime.now() - self.session_metadata['start_time']).total_seconds() / 60
        
        # Calculate topic frequencies
        topic_freq = defaultdict(int)
        for topic in self.session_metadata['primary_topics']:
            topic_freq[topic] += 1
        
        return {
            'session_id': self.session_id,
            'duration_minutes': round(duration, 2),
            'total_turns': self.session_metadata['turn_count'],
            'primary_topics': dict(topic_freq),
            'emotional_progression': self.session_metadata['emotional_trajectory'],
            'techniques_used': self.session_metadata['techniques_used'],
            'start_time': self.session_metadata['start_time'].isoformat()
        }
# ============================================================================
# Response Generation
# ============================================================================
class ResponseGenerator:
    """
    Generates contextually appropriate responses using LLM capabilities.
    Maintains ELIZA's therapeutic persona while leveraging modern AI.
    """
    
    def __init__(self, llm_interface: LLMInterface):
        self.llm = llm_interface
        self.base_persona = self._load_persona_template()
    
    def _load_persona_template(self) -> str:
        """Load the core personality template for ELIZA."""
        return """You are ELIZA, a compassionate and reflective conversational partner inspired by Carl Rogers' person-centered therapy approach. Your responses should:
1. Demonstrate active listening through reflection and paraphrasing
2. Ask open-ended questions that encourage deeper exploration
3. Avoid giving direct advice, instead helping users discover their own insights
4. Show empathy and unconditional positive regard
5. Use therapeutic techniques like clarification and summarization
6. Maintain appropriate boundaries while being genuinely helpful
7. Be warm, non-judgmental, and focused on understanding the user's perspective
Remember: You are not a replacement for professional therapy, but a supportive conversational partner."""
    
    def generate_response(self, user_input: str, conversation_context: List[ConversationTurn]) -> Dict[str, Any]:
        """
        Generate a response that maintains ELIZA's therapeutic approach
        while leveraging LLM capabilities for better understanding.
        """
        
        # Analyze user input for emotional content
        sentiment = self.llm.analyze_sentiment(user_input)
        
        # Build context-aware prompt
        context_summary = self._build_context_summary(conversation_context)
        
        prompt = f"""{self.base_persona}
Conversation Context:
{context_summary}
User's latest message: "{user_input}"
Current emotional tone detected: {self._interpret_sentiment(sentiment)}
Please provide a response that acknowledges what the user has shared, reflects their emotional state appropriately, and gently encourages further exploration of their thoughts and feelings."""
        # Generate response using LLM
        response_text = self.llm.generate_response(prompt)
        
        # Extract key topics and emotional indicators
        key_topics = self._extract_key_topics(user_input)
        emotional_state = self._determine_emotional_state(sentiment)
        
        return {
            'response': response_text,
            'emotional_state': emotional_state,
            'key_topics': key_topics,
            'confidence': sentiment.get('confidence', 0.5),
            'sentiment_scores': sentiment
        }
    
    def _build_context_summary(self, context: List[ConversationTurn]) -> str:
        """Create a concise summary of recent conversation for context."""
        if not context:
            return "This is the beginning of our conversation."
        
        summary_parts = []
        for turn in context[-3:]:  # Use last 3 turns for context
            summary_parts.append(f"User: {turn.user_input[:80]}...")
            summary_parts.append(f"ELIZA: {turn.bot_response[:80]}...")
        
        return "\n".join(summary_parts)
    
    def _interpret_sentiment(self, sentiment: Dict[str, float]) -> str:
        """Convert sentiment scores to descriptive text."""
        if sentiment['positive'] > 0.6:
            return "positive and upbeat"
        elif sentiment['negative'] > 0.6:
            return "troubled or distressed"
        else:
            return "neutral and contemplative"
    
    def _extract_key_topics(self, text: str) -> List[str]:
        """Extract main topics from user input using keyword matching."""
        topic_keywords = {
            'family': ['family', 'mother', 'father', 'parent', 'sibling', 'brother', 'sister', 'child'],
            'work': ['work', 'job', 'career', 'boss', 'colleague', 'office', 'business'],
            'relationships': ['relationship', 'partner', 'boyfriend', 'girlfriend', 'spouse', 'marriage'],
            'health': ['health', 'sick', 'illness', 'doctor', 'medical', 'pain', 'tired'],
            'emotions': ['feel', 'emotion', 'mood', 'happy', 'sad', 'angry', 'anxious'],
            'future': ['future', 'plan', 'goal', 'dream', 'hope', 'want', 'wish'],
            'past': ['past', 'memory', 'remember', 'childhood', 'history', 'before'],
            'stress': ['stress', 'pressure', 'overwhelmed', 'burden', 'difficult', 'hard']
        }
        
        text_lower = text.lower()
        found_topics = []
        
        for topic, keywords in topic_keywords.items():
            if any(keyword in text_lower for keyword in keywords):
                found_topics.append(topic)
        
        return found_topics
    
    def _determine_emotional_state(self, sentiment: Dict[str, float]) -> str:
        """Determine primary emotional state from sentiment analysis."""
        max_sentiment = max(sentiment.items(), key=lambda x: x[1] if x[0] != 'confidence' else 0)
        return max_sentiment[0]
# ============================================================================
# Memory Management
# ============================================================================
class MemorySystem:
    """
    Manages both short-term and long-term memory for the chatbot.
    Enables continuity and personalization across conversations.
    """
    
    def __init__(self, storage_path: str = "eliza_memory"):
        self.storage_path = Path(storage_path)
        self.storage_path.mkdir(exist_ok=True)
        
        # Short-term memory (current session)
        self.working_memory = {
            'key_phrases': [],
            'emotional_patterns': [],
            'important_topics': defaultdict(int),
            'user_preferences': {},
            'significant_moments': []
        }
        
        # Long-term memory (persistent across sessions)
        self.long_term_memory = self._load_long_term_memory()
    
    def _load_long_term_memory(self) -> Dict[str, Any]:
        """Load persistent memory from storage."""
        memory_file = self.storage_path / "long_term_memory.pkl"
        if memory_file.exists():
            try:
                with open(memory_file, 'rb') as f:
                    return pickle.load(f)
            except Exception as e:
                print(f"Warning: Error loading memory: {e}")
                return self._initialize_long_term_memory()
        return self._initialize_long_term_memory()
    
    def _initialize_long_term_memory(self) -> Dict[str, Any]:
        """Initialize empty long-term memory structure."""
        return {
            'user_profile': {
                'recurring_themes': defaultdict(int),
                'emotional_baseline': 'neutral',
                'communication_style': 'unknown',
                'preferred_topics': [],
                'session_count': 0
            },
            'session_history': [],
            'significant_moments': [],
            'therapeutic_progress': {
                'insights_gained': [],
                'coping_strategies_discussed': [],
                'goals_identified': [],
                'breakthrough_moments': []
            }
        }
    
    def update_working_memory(self, user_input: str, bot_response: str, 
                            emotional_state: str, key_topics: List[str]) -> None:
        """Update short-term memory with current interaction."""
        
        # Track key phrases from user input
        significant_phrases = self._extract_significant_phrases(user_input)
        self.working_memory['key_phrases'].extend(significant_phrases)
        
        # Track emotional patterns
        self.working_memory['emotional_patterns'].append({
            'state': emotional_state,
            'timestamp': datetime.now(),
            'trigger_phrase': user_input[:50],
            'intensity': self._assess_emotional_intensity(user_input)
        })
        
        # Update topic frequency
        for topic in key_topics:
            self.working_memory['important_topics'][topic] += 1
        
        # Detect significant moments
        if self._is_significant_moment(user_input, emotional_state):
            self.working_memory['significant_moments'].append({
                'content': user_input,
                'emotional_state': emotional_state,
                'timestamp': datetime.now(),
                'significance_score': self._calculate_significance_score(user_input, emotional_state)
            })
        
        # Update user preferences
        self._update_user_preferences(user_input, bot_response)
    
    def _extract_significant_phrases(self, text: str) -> List[str]:
        """Extract emotionally or contextually significant phrases."""
        significant_indicators = [
            r"I feel (.{1,50})",
            r"I think (.{1,50})",
            r"I believe (.{1,50})",
            r"I'm worried (.{1,50})",
            r"I'm excited (.{1,50})",
            r"I remember (.{1,50})",
            r"I hope (.{1,50})",
            r"I fear (.{1,50})",
            r"I love (.{1,50})",
            r"I hate (.{1,50})",
            r"I want (.{1,50})",
            r"I need (.{1,50})"
        ]
        
        phrases = []
        for pattern in significant_indicators:
            matches = re.findall(pattern, text, re.IGNORECASE)
            for match in matches:
                phrases.append(f"{pattern.split('(')[0].replace('I ', 'User ')}{match}")
        
        return phrases
    
    def _assess_emotional_intensity(self, text: str) -> float:
        """Assess the emotional intensity of the user's input."""
        intensity_indicators = {
            'high': ['extremely', 'incredibly', 'absolutely', 'completely', 'totally', 'devastated', 'ecstatic'],
            'medium': ['very', 'really', 'quite', 'pretty', 'fairly', 'somewhat'],
            'low': ['a bit', 'slightly', 'kind of', 'sort of', 'maybe']
        }
        
        text_lower = text.lower()
        
        for level, indicators in intensity_indicators.items():
            if any(indicator in text_lower for indicator in indicators):
                return {'high': 0.9, 'medium': 0.6, 'low': 0.3}[level]
        
        return 0.5  # Default medium intensity
    
    def _is_significant_moment(self, text: str, emotional_state: str) -> bool:
        """Determine if this moment is significant enough to remember long-term."""
        significance_indicators = [
            'breakthrough', 'realization', 'understand now', 'finally see',
            'changed my mind', 'different perspective', 'never thought',
            'first time', 'always wanted', 'biggest fear', 'greatest hope'
        ]
        
        text_lower = text.lower()
        has_significance_indicator = any(indicator in text_lower for indicator in significance_indicators)
        is_intense_emotion = emotional_state in ['very_positive', 'very_negative']
        is_long_reflection = len(text.split()) > 30
        
        return has_significance_indicator or is_intense_emotion or is_long_reflection
    
    def _calculate_significance_score(self, text: str, emotional_state: str) -> float:
        """Calculate a significance score for the moment."""
        score = 0.0
        
        # Length factor
        score += min(len(text.split()) / 50.0, 0.3)
        
        # Emotional intensity factor
        if emotional_state in ['positive', 'negative']:
            score += 0.4
        
        # Insight indicators
        insight_words = ['realize', 'understand', 'see', 'know', 'learn', 'discover']
        if any(word in text.lower() for word in insight_words):
            score += 0.3
        
        return min(score, 1.0)
    
    def _update_user_preferences(self, user_input: str, bot_response: str) -> None:
        """Learn user communication preferences from interactions."""
        
        # Detect question preference
        if "?" in user_input:
            self.working_memory['user_preferences']['asks_questions'] = True
        
        # Detect communication style preference
        emotional_words = ['feel', 'emotion', 'heart', 'soul', 'love', 'fear', 'joy', 'pain']
        analytical_words = ['think', 'analyze', 'logic', 'reason', 'plan', 'strategy', 'consider']
        
        emotional_count = sum(1 for word in emotional_words if word in user_input.lower())
        analytical_count = sum(1 for word in analytical_words if word in user_input.lower())
        
        if emotional_count > analytical_count:
            self.working_memory['user_preferences']['communication_style'] = 'emotional'
        elif analytical_count > emotional_count:
            self.working_memory['user_preferences']['communication_style'] = 'analytical'
        
        # Detect response length preference
        if len(user_input.split()) > 20:
            self.working_memory['user_preferences']['prefers_detailed_discussion'] = True
        elif len(user_input.split()) < 5:
            self.working_memory['user_preferences']['prefers_brief_exchanges'] = True
    
    def consolidate_session_memory(self, conversation_manager: ConversationManager) -> None:
        """Move important information from working memory to long-term storage."""
        
        session_summary = conversation_manager.get_session_summary()
        
        # Update long-term user profile
        for topic, frequency in self.working_memory['important_topics'].items():
            self.long_term_memory['user_profile']['recurring_themes'][topic] += frequency
        
        # Update emotional baseline
        if self.working_memory['emotional_patterns']:
            recent_emotions = [p['state'] for p in self.working_memory['emotional_patterns']]
            dominant_emotion = max(set(recent_emotions), key=recent_emotions.count)
            self.long_term_memory['user_profile']['emotional_baseline'] = dominant_emotion
        
        # Store significant moments
        for moment in self.working_memory['significant_moments']:
            if moment['significance_score'] > 0.7:
                self.long_term_memory['significant_moments'].append(moment)
        
        # Update session count
        self.long_term_memory['user_profile']['session_count'] += 1
        
        # Store session summary
        self.long_term_memory['session_history'].append({
            'session_summary': session_summary,
            'key_insights': self.working_memory['key_phrases'][:5],
            'emotional_journey': self.working_memory['emotional_patterns'],
            'significant_moments': self.working_memory['significant_moments']
        })
        
        # Keep only last 10 sessions to manage memory size
        if len(self.long_term_memory['session_history']) > 10:
            self.long_term_memory['session_history'] = self.long_term_memory['session_history'][-10:]
        
        # Persist to storage
        self._save_long_term_memory()
        
        # Clear working memory
        self.working_memory = {
            'key_phrases': [],
            'emotional_patterns': [],
            'important_topics': defaultdict(int),
            'user_preferences': {},
            'significant_moments': []
        }
    
    def _save_long_term_memory(self) -> None:
        """Save long-term memory to persistent storage."""
        memory_file = self.storage_path / "long_term_memory.pkl"
        try:
            with open(memory_file, 'wb') as f:
                pickle.dump(self.long_term_memory, f)
        except Exception as e:
            print(f"Warning: Error saving memory: {e}")
# ============================================================================
# Personality Engine
# ============================================================================
class PersonalityEngine:
    """
    Implements ELIZA's therapeutic personality and conversational techniques.
    Ensures consistent application of person-centered therapy principles.
    """
    
    def __init__(self):
        self.core_values = {
            'unconditional_positive_regard': True,
            'empathetic_understanding': True,
            'genuineness': True,
            'non_directive_approach': True
        }
        
        self.technique_patterns = self._initialize_technique_patterns()
        self.response_templates = self._load_response_templates()
        self.crisis_indicators = self._load_crisis_indicators()
    
    def _initialize_technique_patterns(self) -> Dict[TherapeuticTechnique, List[str]]:
        """Initialize patterns for recognizing when to use specific techniques."""
        return {
            TherapeuticTechnique.REFLECTION: [
                r"I feel (.+)",
                r"I am (.+)",
                r"I think (.+)",
                r"It makes me (.+)",
                r"I'm (.+)"
            ],
            TherapeuticTechnique.CLARIFICATION: [
                r"I don't know",
                r"I'm confused",
                r"I'm not sure",
                r"Maybe",
                r"I guess"
            ],
            TherapeuticTechnique.OPEN_ENDED_QUESTION: [
                r"(.+) happened",
                r"(.+) is difficult",
                r"I want (.+)",
                r"I need (.+)",
                r"I wish (.+)"
            ],
            TherapeuticTechnique.EMPATHETIC_RESPONSE: [
                r"(.+) hurt",
                r"(.+) sad",
                r"(.+) angry",
                r"(.+) scared",
                r"(.+) anxious",
                r"(.+) depressed"
            ],
            TherapeuticTechnique.NORMALIZATION: [
                r"I'm the only one",
                r"Nobody understands",
                r"I'm weird",
                r"Something's wrong with me"
            ]
        }
    
    def _load_response_templates(self) -> Dict[TherapeuticTechnique, List[str]]:
        """Load response templates for each therapeutic technique."""
        return {
            TherapeuticTechnique.REFLECTION: [
                "It sounds like you're feeling {emotion} about {situation}.",
                "You seem to be experiencing {emotion} when {situation}.",
                "I hear that {situation} brings up feelings of {emotion} for you.",
                "So you're feeling {emotion} in relation to {situation}."
            ],
            TherapeuticTechnique.CLARIFICATION: [
                "Can you help me understand what you mean by {unclear_term}?",
                "I'd like to better understand {topic}. Could you tell me more?",
                "When you say {phrase}, what does that look like for you?",
                "Could you elaborate on {concept}?"
            ],
            TherapeuticTechnique.OPEN_ENDED_QUESTION: [
                "What thoughts come up for you when you think about {topic}?",
                "How does {situation} affect you?",
                "What would it mean to you if {desired_outcome} happened?",
                "What's it like for you when {situation} occurs?"
            ],
            TherapeuticTechnique.EMPATHETIC_RESPONSE: [
                "That sounds really {emotion_adjective}. It takes courage to share that.",
                "I can imagine how {emotion_adjective} that must be for you.",
                "It's understandable that you would feel {emotion} in that situation.",
                "That sounds like a {emotion_adjective} experience."
            ],
            TherapeuticTechnique.GENTLE_CHALLENGE: [
                "I'm curious about {assumption}. What makes you think that?",
                "You mentioned {belief}. Have you always felt this way?",
                "I wonder if there might be another way to look at {situation}?",
                "What would it be like if {alternative_view} were true?"
            ],
            TherapeuticTechnique.NORMALIZATION: [
                "Many people struggle with {issue}. You're not alone in feeling this way.",
                "What you're experiencing with {situation} is quite common.",
                "It's natural to feel {emotion} when dealing with {situation}.",
                "You're certainly not the only person who has felt this way about {topic}."
            ],
            TherapeuticTechnique.SUPPORTIVE_STATEMENT: [
                "Thank you for sharing that with me. That took courage.",
                "I appreciate your openness in discussing {topic}.",
                "It's clear that {situation} is important to you.",
                "You're doing important work by exploring these feelings."
            ]
        }
    
    def _load_crisis_indicators(self) -> List[str]:
        """Load indicators that might suggest the user is in crisis."""
        return [
            'want to die', 'kill myself', 'end it all', 'not worth living',
            'hurt myself', 'suicide', 'suicidal', 'end my life',
            'nobody cares', 'better off dead', 'can\'t go on'
        ]
    
    def select_therapeutic_approach(self, user_input: str, emotional_state: str, 
                                  conversation_context: List[ConversationTurn]) -> Tuple[TherapeuticTechnique, Dict[str, str]]:
        """
        Select the most appropriate therapeutic technique based on user input and context.
        Returns the technique and extracted parameters for response generation.
        """
        
        user_input_lower = user_input.lower()
        
        # Check for crisis indicators first
        if self._detect_crisis(user_input):
            return TherapeuticTechnique.SUPPORTIVE_STATEMENT, {
                'topic': 'your wellbeing',
                'situation': 'this difficult time'
            }
        
        # Check for emotional distress - prioritize empathetic response
        distress_indicators = ['hurt', 'pain', 'sad', 'angry', 'scared', 'anxious', 'depressed', 'overwhelmed']
        if any(indicator in user_input_lower for indicator in distress_indicators):
            emotion = self._extract_emotion(user_input)
            emotion_adjective = self._get_emotion_adjective(emotion)
            return TherapeuticTechnique.EMPATHETIC_RESPONSE, {
                'emotion': emotion, 
                'emotion_adjective': emotion_adjective
            }
        
        # Check for normalization needs
        isolation_indicators = ['only one', 'nobody understands', 'weird', 'wrong with me', 'different']
        if any(indicator in user_input_lower for indicator in isolation_indicators):
            issue = self._extract_main_topic(user_input)
            return TherapeuticTechnique.NORMALIZATION, {
                'issue': issue,
                'situation': issue,
                'emotion': emotional_state,
                'topic': issue
            }
        
        # Check for confusion or uncertainty - use clarification
        uncertainty_indicators = ['confused', 'not sure', 'don\'t know', 'maybe', 'unclear', 'don\'t understand']
        if any(indicator in user_input_lower for indicator in uncertainty_indicators):
            unclear_term = self._extract_unclear_concept(user_input)
            return TherapeuticTechnique.CLARIFICATION, {
                'unclear_term': unclear_term, 
                'topic': unclear_term,
                'phrase': unclear_term,
                'concept': unclear_term
            }
        
        # Check for feeling statements - use reflection
        feeling_patterns = [r"I feel (.+)", r"I am (.+)", r"It makes me (.+)", r"I'm (.+)"]
        for pattern in feeling_patterns:
            match = re.search(pattern, user_input, re.IGNORECASE)
            if match:
                feeling_content = match.group(1)
                emotion, situation = self._parse_feeling_statement(feeling_content)
                return TherapeuticTechnique.REFLECTION, {
                    'emotion': emotion, 
                    'situation': situation
                }
        
        # Check conversation history for technique variety
        recent_techniques = self._get_recent_techniques(conversation_context)
        
        # Avoid overusing the same technique
        if len(recent_techniques) >= 2 and len(set(recent_techniques[-2:])) == 1:
            last_technique = recent_techniques[-1]
            if last_technique == TherapeuticTechnique.REFLECTION.value:
                topic = self._extract_main_topic(user_input)
                return TherapeuticTechnique.OPEN_ENDED_QUESTION, {
                    'topic': topic, 
                    'situation': topic,
                    'desired_outcome': 'positive change'
                }
            elif last_technique == TherapeuticTechnique.OPEN_ENDED_QUESTION.value:
                emotion, situation = self._parse_feeling_statement(user_input)
                return TherapeuticTechnique.REFLECTION, {
                    'emotion': emotion, 
                    'situation': situation
                }
        
        # Default to open-ended questions to encourage exploration
        topic = self._extract_main_topic(user_input)
        return TherapeuticTechnique.OPEN_ENDED_QUESTION, {
            'topic': topic, 
            'situation': topic, 
            'desired_outcome': 'understanding'
        }
    
    def _detect_crisis(self, text: str) -> bool:
        """Detect if the user might be in crisis."""
        text_lower = text.lower()
        return any(indicator in text_lower for indicator in self.crisis_indicators)
    
    def _extract_emotion(self, text: str) -> str:
        """Extract emotional words from user input."""
        emotion_words = {
            'sad': ['sad', 'depressed', 'down', 'blue', 'melancholy', 'miserable'],
            'angry': ['angry', 'mad', 'furious', 'irritated', 'annoyed', 'frustrated'],
            'anxious': ['anxious', 'worried', 'nervous', 'scared', 'afraid', 'panicked'],
            'happy': ['happy', 'joyful', 'excited', 'pleased', 'content', 'elated'],
            'confused': ['confused', 'lost', 'uncertain', 'unclear', 'bewildered'],
            'hurt': ['hurt', 'pain', 'wounded', 'injured', 'damaged'],
            'overwhelmed': ['overwhelmed', 'swamped', 'buried', 'drowning']
        }
        
        text_lower = text.lower()
        for emotion, synonyms in emotion_words.items():
            if any(synonym in text_lower for synonym in synonyms):
                return emotion
        
        return 'uncertain'
    
    def _get_emotion_adjective(self, emotion: str) -> str:
        """Convert emotion to appropriate adjective form."""
        adjective_map = {
            'sad': 'difficult',
            'angry': 'frustrating',
            'anxious': 'overwhelming',
            'happy': 'wonderful',
            'confused': 'confusing',
            'hurt': 'painful',
            'overwhelmed': 'overwhelming'
        }
        return adjective_map.get(emotion, 'challenging')
    
    def _extract_unclear_concept(self, text: str) -> str:
        """Extract the concept the user is unclear about."""
        uncertainty_phrases = ['not sure about', 'confused about', 'unclear on', 'don\'t understand']
        
        for phrase in uncertainty_phrases:
            if phrase in text.lower():
                start_idx = text.lower().find(phrase) + len(phrase)
                remaining_text = text[start_idx:].strip()
                words = remaining_text.split()[:3]
                return ' '.join(words) if words else 'that'
        
        # Look for question words
        question_words = ['what', 'why', 'how', 'when', 'where', 'who']
        words = text.lower().split()
        for i, word in enumerate(words):
            if word in question_words and i + 1 < len(words):
                return ' '.join(words[i+1:i+4])
        
        return 'what you mentioned'
    
    def _parse_feeling_statement(self, feeling_content: str) -> Tuple[str, str]:
        """Parse a feeling statement to extract emotion and situation."""
        connectors = [' when ', ' because ', ' about ', ' that ', ' if ', ' since ']
        
        emotion = feeling_content.strip()
        situation = 'this situation'
        
        for connector in connectors:
            if connector in feeling_content.lower():
                parts = feeling_content.split(connector, 1)
                emotion = parts[0].strip()
                situation = parts[1].strip() if len(parts) > 1 else situation
                break
        
        # Clean up emotion and situation
        emotion = emotion.replace('like', '').strip()
        if not situation or situation == emotion:
            situation = 'this situation'
        
        return emotion, situation
    
    def _extract_main_topic(self, text: str) -> str:
        """Extract the main topic from user input."""
        common_topics = [
            'work', 'job', 'career', 'family', 'relationship', 'friend', 'partner',
            'health', 'money', 'future', 'past', 'decision', 'choice', 'problem',
            'school', 'college', 'stress', 'anxiety', 'depression', 'love', 'life'
        ]
        
        text_lower = text.lower()
        for topic in common_topics:
            if topic in text_lower:
                return topic
        
        # Extract first meaningful noun
        words = text.split()
        for word in words:
            if len(word) > 3 and word.isalpha() and word.lower() not in ['this', 'that', 'with', 'have', 'been']:
                return word.lower()
        
        return 'this'
    
    def _get_recent_techniques(self, conversation_context: List[ConversationTurn]) -> List[str]:
        """Extract therapeutic techniques used in recent conversation."""
        return [turn.therapeutic_technique for turn in conversation_context[-3:]
                if turn.therapeutic_technique]
    
    def generate_therapeutic_response(self, technique: TherapeuticTechnique, 
                                    parameters: Dict[str, str]) -> str:
        """Generate a response using the specified therapeutic technique."""
        
        templates = self.response_templates.get(technique, [])
        if not templates:
            return "I'd like to understand more about what you're experiencing."
        
        # Select template (could be made more sophisticated)
        import random
        selected_template = random.choice(templates)
        
        try:
            # Format template with extracted parameters
            response = selected_template.format(**parameters)
            return response
        except KeyError as e:
            # Fallback if parameter is missing
            fallback_responses = {
                TherapeuticTechnique.REFLECTION: f"I hear what you're saying about {parameters.get('situation', 'this')}.",
                TherapeuticTechnique.CLARIFICATION: f"Can you tell me more about {parameters.get('topic', 'that')}?",
                TherapeuticTechnique.OPEN_ENDED_QUESTION: f"What comes up for you when you think about {parameters.get('topic', 'this')}?",
                TherapeuticTechnique.EMPATHETIC_RESPONSE: "That sounds like it's been difficult for you.",
                TherapeuticTechnique.NORMALIZATION: "What you're experiencing is more common than you might think.",
                TherapeuticTechnique.SUPPORTIVE_STATEMENT: "Thank you for sharing that with me."
            }
            return fallback_responses.get(technique, "I'd like to understand more about what you're experiencing.")
    
    def ensure_therapeutic_boundaries(self, response: str) -> str:
        """
        Ensure the response maintains appropriate therapeutic boundaries.
        Removes advice-giving and maintains non-directive approach.
        """
        
        # Remove directive language
        directive_patterns = [
            (r"You should (.+)", r"Have you considered \1?"),
            (r"You need to (.+)", r"What would it be like if you \1?"),
            (r"You must (.+)", r"How would you feel about \1?"),
            (r"I recommend (.+)", r"What are your thoughts about \1?"),
            (r"My advice is (.+)", r"How does \1 sound to you?")
        ]
        
        for pattern, replacement in directive_patterns:
            response = re.sub(pattern, replacement, response, flags=re.IGNORECASE)
        
        # Ensure questions end appropriately
        if response.strip() and not response.strip().endswith(('?', '.', '!')):
            if 'what' in response.lower() or 'how' in response.lower() or 'why' in response.lower():
                response += '?'
            else:
                response += '.'
        
        return response
    
    def generate_crisis_response(self) -> str:
        """Generate an appropriate response for crisis situations."""
        return ("I'm concerned about what you're sharing with me. These feelings are important, "
                "and you deserve support. Have you considered speaking with a mental health "
                "professional or calling a crisis helpline? I'm here to listen, but professional "
                "help might be beneficial for you right now.")
# ============================================================================
# Main ELIZA Class
# ============================================================================
class ModernEliza:
    """
    Main orchestrator class that brings together all components
    to create a modern ELIZA-inspired therapeutic chatbot.
    """
    
    def __init__(self, llm_interface: LLMInterface = None, storage_path: str = "eliza_sessions"):
        self.llm_interface = llm_interface or MockLLMInterface()
        self.storage_path = Path(storage_path)
        self.storage_path.mkdir(exist_ok=True)
        
        # Initialize components
        self.conversation_manager = None
        self.response_generator = ResponseGenerator(self.llm_interface)
        self.memory_system = MemorySystem(str(self.storage_path / "memory"))
        self.personality_engine = PersonalityEngine()
        
        # Session state
        self.is_session_active = False
        self.greeting_given = False
    
    def start_session(self, session_id: str = None) -> str:
        """
        Start a new conversation session.
        Returns the opening greeting.
        """
        self.conversation_manager = ConversationManager(session_id)
        self.is_session_active = True
        self.greeting_given = True
        
        # Load any relevant long-term memory
        user_profile = self.memory_system.long_term_memory.get('user_profile', {})
        recurring_themes = user_profile.get('recurring_themes', {})
        session_count = user_profile.get('session_count', 0)
        
        # Personalize greeting based on history
        if session_count > 0:
            if recurring_themes:
                main_theme = max(recurring_themes.items(), key=lambda x: x[1])[0]
                greeting = f"Hello again. I remember we've talked about {main_theme} before. How are you feeling today?"
            else:
                greeting = "Hello again. It's good to see you back. What's on your mind today?"
        else:
            greeting = "Hello. I'm ELIZA, and I'm here to listen and understand. What would you like to talk about today?"
        
        return greeting
    
    def process_input(self, user_input: str) -> str:
        """
        Process user input and generate an appropriate response.
        This is the main interaction method.
        """
        
        if not self.is_session_active:
            return self.start_session()
        
        # Handle session management commands
        goodbye_commands = ['goodbye', 'bye', 'exit', 'quit', 'end', 'stop']
        if user_input.lower().strip() in goodbye_commands:
            return self._end_session()
        
        # Handle empty or very short input
        if not user_input.strip() or len(user_input.strip()) < 2:
            return "I'm here and listening. What would you like to share?"
        
        # Get conversation context
        recent_context = self.conversation_manager.get_recent_context()
        
        # Check for crisis indicators
        if self.personality_engine._detect_crisis(user_input):
            crisis_response = self.personality_engine.generate_crisis_response()
            self.conversation_manager.add_turn(
                user_input, 
                crisis_response,
                'crisis',
                ['crisis', 'safety'],
                TherapeuticTechnique.SUPPORTIVE_STATEMENT
            )
            return crisis_response
        
        # Generate initial response using LLM
        llm_response_data = self.response_generator.generate_response(user_input, recent_context)
        
        # Apply therapeutic personality and techniques
        technique, parameters = self.personality_engine.select_therapeutic_approach(
            user_input, 
            llm_response_data['emotional_state'], 
            recent_context
        )
        
        # Generate therapeutic response
        therapeutic_response = self.personality_engine.generate_therapeutic_response(technique, parameters)
        
        # Ensure therapeutic boundaries
        final_response = self.personality_engine.ensure_therapeutic_boundaries(therapeutic_response)
        
        # Update memory systems
        self.memory_system.update_working_memory(
            user_input, 
            final_response, 
            llm_response_data['emotional_state'],
            llm_response_data['key_topics']
        )
        
        # Add to conversation history
        self.conversation_manager.add_turn(
            user_input, 
            final_response,
            llm_response_data['emotional_state'],
            llm_response_data['key_topics'],
            technique
        )
        
        return final_response
    
    def _end_session(self) -> str:
        """
        End the current session and consolidate memory.
        """
        if not self.is_session_active:
            return "We haven't started our conversation yet. Would you like to begin?"
        
        # Consolidate session memory
        self.memory_system.consolidate_session_memory(self.conversation_manager)
        
        # Generate personalized closing response
        session_summary = self.conversation_manager.get_session_summary()
        primary_topics = list(session_summary.get('primary_topics', {}).keys())
        
        if primary_topics:
            topics_text = ', '.join(primary_topics[:2])
            if len(primary_topics) > 2:
                topics_text += f", and {len(primary_topics) - 2} other topic{'s' if len(primary_topics) > 3 else ''}"
            closing = f"Thank you for sharing your thoughts about {topics_text} with me today. "
        else:
            closing = "Thank you for our conversation today. "
        
        closing += "Remember, you have the strength and wisdom to work through whatever you're facing. Take care of yourself."
        
        # Reset session state
        self.is_session_active = False
        self.greeting_given = False
        self.conversation_manager = None
        
        return closing
    
    def get_session_insights(self) -> Dict[str, Any]:
        """
        Get insights about the current session for analysis or debugging.
        """
        if not self.conversation_manager:
            return {'error': 'No active session'}
        
        session_summary = self.conversation_manager.get_session_summary()
        memory_context = self.memory_system.working_memory
        
        return {
            'session_summary': session_summary,
            'key_topics_frequency': dict(memory_context['important_topics']),
            'emotional_progression': [p['state'] for p in memory_context['emotional_patterns']],
            'significant_phrases': memory_context['key_phrases'][-5:],
            'user_preferences': memory_context['user_preferences'],
            'significant_moments': len(memory_context['significant_moments']),
            'conversation_length': len(self.conversation_manager.conversation_history)
        }
    
    def save_session(self, filename: str = None) -> str:
        """
        Save the current session to a file for later analysis.
        """
        if not self.conversation_manager:
            return "No active session to save."
        
        if filename is None:
            timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
            filename = f"session_{self.conversation_manager.session_id}_{timestamp}.json"
        
        session_data = {
            'session_metadata': self.conversation_manager.get_session_summary(),
            'conversation_history': [asdict(turn) for turn in self.conversation_manager.conversation_history],
            'insights': self.get_session_insights(),
            'memory_state': {
                'working_memory': {
                    'key_phrases': self.memory_system.working_memory['key_phrases'],
                    'emotional_patterns': [
                        {**pattern, 'timestamp': pattern['timestamp'].isoformat()}
                        for pattern in self.memory_system.working_memory['emotional_patterns']
                    ],
                    'important_topics': dict(self.memory_system.working_memory['important_topics']),
                    'user_preferences': self.memory_system.working_memory['user_preferences']
                }
            }
        }
        
        filepath = self.storage_path / filename
        try:
            with open(filepath, 'w', encoding='utf-8') as f:
                json.dump(session_data, f, indent=2, default=str)
            return f"Session saved to {filepath}"
        except Exception as e:
            return f"Error saving session: {e}"
# ============================================================================
# Command Line Interface
# ============================================================================
def main():
    """
    Main function to run the ELIZA chatbot with a simple command-line interface.
    """
    print("=" * 60)
    print("MODERN ELIZA - Therapeutic Chatbot")
    print("=" * 60)
    print("Type 'quit', 'exit', 'bye', or 'goodbye' to end the session.")
    print("Type 'insights' to see session analysis.")
    print("Type 'save' to save the current session.")
    print("-" * 60)
    
    # Initialize ELIZA with mock LLM interface
    eliza = ModernEliza()
    
    # Start the session
    greeting = eliza.start_session()
    print(f"\nELIZA: {greeting}\n")
    
    try:
        while True:
            # Get user input
            user_input = input("You: ").strip()
            
            # Handle special commands
            if user_input.lower() == 'insights':
                insights = eliza.get_session_insights()
                print("\n" + "=" * 40)
                print("SESSION INSIGHTS")
                print("=" * 40)
                for key, value in insights.items():
                    print(f"{key}: {value}")
                print("=" * 40 + "\n")
                continue
            
            if user_input.lower() == 'save':
                result = eliza.save_session()
                print(f"\n[System: {result}]\n")
                continue
            
            # Process input and get response
            response = eliza.process_input(user_input)
            print(f"\nELIZA: {response}\n")
            
            # Check if session ended
            if not eliza.is_session_active:
                break
                
    except KeyboardInterrupt:
        print("\n\n[Session interrupted by user]")
        if eliza.is_session_active:
            final_message = eliza._end_session()
            print(f"ELIZA: {final_message}")
    
    except Exception as e:
        print(f"\n[Error occurred: {e}]")
        if eliza.is_session_active:
            print("ELIZA: I apologize, but I encountered a technical issue. Thank you for our conversation.")
    
    print("\nGoodbye!")

if __name__ == "__main__":
    main()


CONCLUSION AND FUTURE DIRECTIONS


This modern implementation of ELIZA demonstrates how contemporary AI technologies can enhance the therapeutic conversational experience while preserving the essential qualities that made the original so compelling. By combining Large Language Models with sophisticated memory management, personality engines, and therapeutic techniques, we create a system that can engage in more nuanced and helpful conversations than simple pattern matching would allow.

The architecture presented here follows clean code principles and maintains separation of concerns, making it extensible and maintainable. The modular design allows for easy replacement of components, such as swapping different LLM providers or adding new therapeutic techniques.
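As a sketch of what swapping a provider might look like, the class below implements the same LLMInterface contract defined in the listing above against any OpenAI-compatible chat completions endpoint, such as the local server a runner like LM Studio exposes. The base URL, model name, use of the requests library, and the decision to keep sentiment analysis as a simple keyword heuristic are all assumptions for illustration; a production version would tune the system prompt, error handling, and sentiment analysis.

import requests
from typing import Dict

class OpenAICompatibleLLM(LLMInterface):
    """Sketch: back the LLMInterface with a local OpenAI-compatible server."""
    def __init__(self, base_url: str = "http://localhost:1234/v1",
                 model: str = "your-local-model", api_key: str = "not-needed-locally"):
        # base_url, model, and api_key are placeholders; adjust to your setup
        self.base_url = base_url.rstrip('/')
        self.model = model
        self.api_key = api_key

    def generate_response(self, prompt: str, max_tokens: int = 150) -> str:
        payload = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
            "temperature": 0.7,
        }
        headers = {"Authorization": f"Bearer {self.api_key}"}
        resp = requests.post(f"{self.base_url}/chat/completions",
                             json=payload, headers=headers, timeout=60)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"].strip()

    def analyze_sentiment(self, text: str) -> Dict[str, float]:
        # Placeholder heuristic; a real version might prompt the model or use a classifier
        negative = {'sad', 'angry', 'anxious', 'depressed', 'hurt'}
        positive = {'happy', 'excited', 'grateful', 'hopeful', 'relieved'}
        words = set(text.lower().split())
        pos, neg = len(words & positive), len(words & negative)
        total = max(pos + neg, 1)
        return {'positive': pos / total if pos else 0.0,
                'negative': neg / total if neg else 0.0,
                'neutral': 0.5 if pos == neg == 0 else 0.0,
                'confidence': min((pos + neg) / 3.0, 1.0)}

With something like this in place, the orchestrator could be constructed as ModernEliza(llm_interface=OpenAICompatibleLLM()) without touching any other component.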

Key advantages of this modern approach include contextual understanding that goes beyond keyword matching, persistent memory that enables continuity across sessions, adaptive personality that learns user preferences, sophisticated emotional intelligence that can track and respond to emotional patterns, and robust error handling that maintains the therapeutic relationship even when technical issues arise.
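The continuity point can be seen in a short, illustrative driver: two back-to-back sessions against the same storage path, where the second greeting draws on the profile consolidated at the end of the first. The storage path and inputs below are arbitrary examples.

# Illustrative two-session run showing persistent memory across sessions
eliza = ModernEliza(storage_path="demo_sessions")
print(eliza.start_session())                      # first visit: generic greeting
print(eliza.process_input("I feel anxious about my job lately."))
print(eliza.process_input("goodbye"))             # ends session, consolidates memory to disk

eliza2 = ModernEliza(storage_path="demo_sessions")
print(eliza2.start_session())                     # second visit: greeting references recurring themes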

Future enhancements could include integration with professional therapy frameworks, voice interface capabilities for more natural interaction, multi-modal input processing including text, voice, and potentially physiological data, integration with mental health resources and crisis intervention systems, and advanced analytics for tracking therapeutic progress over time.

The implementation also demonstrates important considerations for AI safety and ethics in therapeutic applications. While this system can provide valuable support and reflection opportunities, it's designed to recognize its limitations and guide users to professional help when appropriate.

This modern ELIZA serves as both a practical therapeutic tool and a demonstration of how classic AI concepts can be enhanced with contemporary technologies to create more effective and helpful systems. The combination of historical therapeutic wisdom with modern AI capabilities creates new possibilities for supportive technology that can genuinely help people explore their thoughts and feelings in a safe, non-judgmental environment.