INTRODUCTION AND APPLICATION OVERVIEW
The Quote-Of-The-Day application represents a sophisticated integration of Large Language Model technology with traditional software engineering practices to deliver personalized inspirational content. This application demonstrates how modern AI capabilities can be seamlessly integrated into everyday productivity tools while maintaining robust software engineering principles.
The core concept revolves around an intelligent system that automatically discovers, curates, and presents quotes based on user-specified topics and preferences. Unlike traditional quote applications that rely on static databases, this implementation leverages the dynamic capabilities of Large Language Models to provide contextually relevant and diverse content. The system operates autonomously, requiring minimal user intervention once properly configured.
The application's intelligence stems from its ability to understand topic preferences through natural language processing and its capacity to evaluate quote relevance using LLM-based analysis. This approach ensures that users receive content that aligns with their interests while avoiding repetitive or irrelevant quotes that plague simpler implementations.
SYSTEM ARCHITECTURE AND COMPONENTS
The application follows a modular architecture that separates concerns into distinct, manageable components. The configuration manager handles user preferences and system settings, while the quote discovery engine interfaces with external data sources to locate relevant content. The LLM integration layer provides the intelligence necessary for quote evaluation and topic matching.
The quote history manager maintains persistent storage of previously displayed quotes, ensuring variety and preventing repetition. The display manager handles the presentation layer, formatting quotes appropriately for the target output medium. This separation of concerns enables independent testing, maintenance, and enhancement of each component.
The system supports both local and remote LLM deployments, providing flexibility in resource utilization and privacy considerations. Local LLM integration offers complete data privacy and independence from external services, while remote LLM integration provides access to more powerful models with potentially better performance characteristics.
CONFIGURATION MANAGEMENT
The configuration system serves as the foundation for application customization and operational parameters. Users specify their preferences through a structured configuration file that defines topics of interest, timing parameters, and LLM selection criteria. This approach provides a clear separation between user preferences and application logic.
The following code example demonstrates a comprehensive configuration management implementation that handles JSON-based configuration files with robust error handling and validation:
import json
import os
from typing import List, Dict, Any, Optional
from dataclasses import dataclass
from pathlib import Path
@dataclass
class LLMConfig:
provider: str
model_name: str
api_key: Optional[str] = None
endpoint: Optional[str] = None
temperature: float = 0.7
max_tokens: int = 150
@dataclass
class AppConfig:
topics: List[str]
quote_interval_hours: int
llm_config: LLMConfig
quote_history_file: str
max_history_size: int
quote_sources: List[str]
display_duration_seconds: int
class ConfigurationManager:
def __init__(self, config_file_path: str = "config.json"):
self.config_file_path = config_file_path
self.config = None
def load_configuration(self) -> AppConfig:
"""Load and validate configuration from JSON file"""
try:
if not os.path.exists(self.config_file_path):
self._create_default_config()
with open(self.config_file_path, 'r', encoding='utf-8') as file:
config_data = json.load(file)
return self._parse_configuration(config_data)
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON in configuration file: {e}")
except Exception as e:
raise RuntimeError(f"Failed to load configuration: {e}")
def _parse_configuration(self, config_data: Dict[str, Any]) -> AppConfig:
"""Parse and validate configuration data"""
try:
llm_data = config_data.get('llm', {})
llm_config = LLMConfig(
provider=llm_data.get('provider', 'openai'),
model_name=llm_data.get('model_name', 'gpt-3.5-turbo'),
api_key=llm_data.get('api_key'),
endpoint=llm_data.get('endpoint'),
temperature=llm_data.get('temperature', 0.7),
max_tokens=llm_data.get('max_tokens', 150)
)
topics = config_data.get('topics', [])
if not topics:
topics = ['general', 'motivation', 'wisdom']
return AppConfig(
topics=topics,
quote_interval_hours=config_data.get('quote_interval_hours', 24),
llm_config=llm_config,
quote_history_file=config_data.get('quote_history_file', 'quote_history.json'),
max_history_size=config_data.get('max_history_size', 1000),
quote_sources=config_data.get('quote_sources', [
'https://www.brainyquote.com',
'https://www.goodreads.com/quotes'
]),
display_duration_seconds=config_data.get('display_duration_seconds', 10)
)
except KeyError as e:
raise ValueError(f"Missing required configuration key: {e}")
def _create_default_config(self):
"""Create a default configuration file if none exists"""
default_config = {
"topics": ["motivation", "success", "wisdom"],
"quote_interval_hours": 24,
"llm": {
"provider": "openai",
"model_name": "gpt-3.5-turbo",
"api_key": "your_api_key_here",
"temperature": 0.7,
"max_tokens": 150
},
"quote_history_file": "quote_history.json",
"max_history_size": 1000,
"quote_sources": [
"https://www.brainyquote.com",
"https://www.goodreads.com/quotes"
],
"display_duration_seconds": 10
}
with open(self.config_file_path, 'w', encoding='utf-8') as file:
json.dump(default_config, file, indent=2)
This configuration manager provides comprehensive handling of user preferences with robust error handling. Because every field is read with .get() and a sensible default, older or partial configuration files continue to load, which gives a degree of backward compatibility. The dataclass approach adds type hints and documents the expected configuration parameters, and the automatic creation of a default configuration file simplifies initial setup for new users.
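As a quick illustration, the following sketch loads (or auto-creates on first run) the configuration file and prints a few of the parsed values. It assumes the ConfigurationManager and AppConfig classes above are in scope, for example because they live in the same file; the printed fields are merely examples.
manager = ConfigurationManager("config.json")
config = manager.load_configuration()  # writes a default config.json if none exists
print(f"Topics: {', '.join(config.topics)}")
print(f"Quote interval: {config.quote_interval_hours} hours")
print(f"LLM provider: {config.llm_config.provider} ({config.llm_config.model_name})")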
QUOTE SOURCE INTEGRATION AND WEB SCRAPING
The quote discovery engine represents one of the most complex components of the system, responsible for locating relevant quotes from various online sources. This component must handle the dynamic nature of web content while respecting website policies and rate limiting requirements.
The implementation below relies on web scraping to access publicly available quote pages; dedicated quote APIs can be slotted in behind the same interface where they exist. A caching layer (sketched after the listing in this section) reduces network overhead and improves response times for repeated topic searches. Additionally, the quote discovery engine implements robust error handling to gracefully manage network failures and content parsing errors.
The following code example demonstrates a comprehensive quote discovery implementation that handles multiple source types and provides reliable quote extraction:
import requests
from bs4 import BeautifulSoup
import time
import random
from typing import List, Dict, Optional
from urllib.parse import urljoin, urlparse
import logging
from dataclasses import dataclass
@dataclass
class Quote:
text: str
author: str
source: str
topic: Optional[str] = None
class QuoteDiscoveryEngine:
def __init__(self, sources: List[str], request_delay: float = 1.0):
self.sources = sources
self.request_delay = request_delay
self.session = requests.Session()
self.session.headers.update({
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
})
def discover_quotes(self, topics: List[str], max_quotes: int = 10) -> List[Quote]:
"""Discover quotes related to specified topics"""
all_quotes = []
for topic in topics:
try:
                topic_quotes = self._search_quotes_by_topic(topic, max(1, max_quotes // len(topics)))
all_quotes.extend(topic_quotes)
time.sleep(self.request_delay)
except Exception as e:
logging.error(f"Failed to discover quotes for topic '{topic}': {e}")
continue
return all_quotes
def _search_quotes_by_topic(self, topic: str, max_results: int) -> List[Quote]:
"""Search for quotes related to a specific topic"""
quotes = []
for source in self.sources:
try:
if 'brainyquote.com' in source:
source_quotes = self._scrape_brainyquote(topic, max_results)
elif 'goodreads.com' in source:
source_quotes = self._scrape_goodreads(topic, max_results)
else:
source_quotes = self._generic_quote_scraper(source, topic, max_results)
quotes.extend(source_quotes)
if len(quotes) >= max_results:
break
except Exception as e:
logging.warning(f"Failed to scrape {source}: {e}")
continue
return quotes[:max_results]
def _scrape_brainyquote(self, topic: str, max_results: int) -> List[Quote]:
"""Scrape quotes from BrainyQuote"""
quotes = []
search_url = f"https://www.brainyquote.com/topics/{topic.lower()}"
try:
response = self.session.get(search_url, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.content, 'html.parser')
quote_elements = soup.find_all('div', class_='grid-item', limit=max_results)
for element in quote_elements:
try:
quote_text_elem = element.find('a', class_='b-qt')
author_elem = element.find('a', class_='bq-aut')
if quote_text_elem and author_elem:
quote_text = quote_text_elem.get_text(strip=True)
author = author_elem.get_text(strip=True)
quote = Quote(
text=quote_text,
author=author,
source='BrainyQuote',
topic=topic
)
quotes.append(quote)
except Exception as e:
logging.debug(f"Failed to parse quote element: {e}")
continue
except requests.RequestException as e:
logging.error(f"Network error while scraping BrainyQuote: {e}")
return quotes
def _scrape_goodreads(self, topic: str, max_results: int) -> List[Quote]:
"""Scrape quotes from Goodreads"""
quotes = []
search_url = f"https://www.goodreads.com/quotes/search?q={topic}"
try:
response = self.session.get(search_url, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.content, 'html.parser')
quote_elements = soup.find_all('div', class_='quote', limit=max_results)
for element in quote_elements:
try:
quote_text_elem = element.find('div', class_='quoteText')
if quote_text_elem:
quote_text = quote_text_elem.get_text(strip=True)
# Extract author from quote text (Goodreads format includes author)
if '―' in quote_text:
text_parts = quote_text.split('―')
quote_text = text_parts[0].strip().strip('"')
author = text_parts[1].strip()
else:
author = "Unknown"
quote = Quote(
text=quote_text,
author=author,
source='Goodreads',
topic=topic
)
quotes.append(quote)
except Exception as e:
logging.debug(f"Failed to parse Goodreads quote: {e}")
continue
except requests.RequestException as e:
logging.error(f"Network error while scraping Goodreads: {e}")
return quotes
def _generic_quote_scraper(self, source_url: str, topic: str, max_results: int) -> List[Quote]:
"""Generic quote scraper for unknown sources"""
quotes = []
try:
response = self.session.get(source_url, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.content, 'html.parser')
# Look for common quote patterns
potential_quotes = soup.find_all(['blockquote', 'q', 'cite'])
for element in potential_quotes[:max_results]:
try:
quote_text = element.get_text(strip=True)
if len(quote_text) > 20 and len(quote_text) < 500:
quote = Quote(
text=quote_text,
author="Unknown",
source=urlparse(source_url).netloc,
topic=topic
)
quotes.append(quote)
except Exception as e:
continue
except requests.RequestException as e:
logging.error(f"Network error while scraping {source_url}: {e}")
return quotes
This quote discovery engine provides a robust foundation for gathering quotes from multiple sources while handling the inherent challenges of web scraping. The implementation includes proper error handling, rate limiting, and source-specific parsing logic to maximize success rates across different website structures.
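The listing above does not yet implement the caching mentioned earlier. The sketch below shows one minimal way to add it as a wrapper around QuoteDiscoveryEngine; the CachedQuoteDiscovery name and the one-hour TTL are illustrative assumptions, and the Quote dataclass from the listing is assumed to be in scope.
import time
from typing import Dict, List, Tuple
class CachedQuoteDiscovery:
    """Thin caching wrapper around QuoteDiscoveryEngine (sketch only)."""
    def __init__(self, engine: QuoteDiscoveryEngine, ttl_seconds: float = 3600.0):
        self.engine = engine
        self.ttl_seconds = ttl_seconds
        # Maps (sorted topic string, max_quotes) to (timestamp, cached quotes)
        self._cache: Dict[Tuple[str, int], Tuple[float, List[Quote]]] = {}
    def discover_quotes(self, topics: List[str], max_quotes: int = 10) -> List[Quote]:
        key = (",".join(sorted(topics)), max_quotes)
        now = time.time()
        cached = self._cache.get(key)
        if cached and now - cached[0] < self.ttl_seconds:
            return cached[1]  # still fresh, so no network traffic
        quotes = self.engine.discover_quotes(topics, max_quotes)
        self._cache[key] = (now, quotes)
        return quotes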
LLM INTEGRATION FOR LOCAL AND REMOTE MODELS
The LLM integration component serves as the intelligence layer of the application, responsible for evaluating quote relevance, ensuring quality, and providing contextual analysis. This component must support both local and remote LLM deployments to accommodate different user requirements and infrastructure constraints.
Local LLM integration offers complete privacy and independence from external services, making it suitable for sensitive environments or users with privacy concerns. Remote LLM integration provides access to more powerful models and reduces local computational requirements, making it ideal for resource-constrained environments.
The following code example demonstrates a comprehensive LLM integration that supports multiple providers and deployment models:
import openai
import requests
import json
from typing import List, Dict, Any, Optional
from abc import ABC, abstractmethod
import logging
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
class LLMProvider(ABC):
@abstractmethod
def evaluate_quote_relevance(self, quote: Quote, topics: List[str]) -> float:
"""Evaluate how relevant a quote is to given topics (0.0 to 1.0)"""
pass
@abstractmethod
def improve_quote_formatting(self, quote: Quote) -> Quote:
"""Improve quote formatting and attribution"""
pass
@abstractmethod
def generate_context(self, quote: Quote) -> str:
"""Generate contextual information about the quote"""
pass
class OpenAIProvider(LLMProvider):
def __init__(self, api_key: str, model_name: str = "gpt-3.5-turbo", temperature: float = 0.7):
self.client = openai.OpenAI(api_key=api_key)
self.model_name = model_name
self.temperature = temperature
def evaluate_quote_relevance(self, quote: Quote, topics: List[str]) -> float:
"""Evaluate quote relevance using OpenAI API"""
try:
prompt = f"""
Evaluate how relevant this quote is to the topics: {', '.join(topics)}
Quote: "{quote.text}" - {quote.author}
Rate the relevance on a scale of 0.0 to 1.0, where:
- 0.0 means completely irrelevant
- 1.0 means perfectly relevant
Respond with only the numerical score.
"""
response = self.client.chat.completions.create(
model=self.model_name,
messages=[{"role": "user", "content": prompt}],
temperature=self.temperature,
max_tokens=10
)
score_text = response.choices[0].message.content.strip()
return float(score_text)
except Exception as e:
logging.error(f"Failed to evaluate quote relevance: {e}")
return 0.5 # Default neutral score
def improve_quote_formatting(self, quote: Quote) -> Quote:
"""Improve quote formatting using OpenAI"""
try:
prompt = f"""
Improve the formatting and attribution of this quote:
Quote: "{quote.text}"
Author: {quote.author}
Return the improved quote in this exact format:
QUOTE: [improved quote text]
AUTHOR: [improved author attribution]
"""
response = self.client.chat.completions.create(
model=self.model_name,
messages=[{"role": "user", "content": prompt}],
temperature=0.3,
max_tokens=200
)
content = response.choices[0].message.content.strip()
lines = content.split('\n')
improved_text = quote.text
improved_author = quote.author
for line in lines:
if line.startswith('QUOTE:'):
improved_text = line.replace('QUOTE:', '').strip()
elif line.startswith('AUTHOR:'):
improved_author = line.replace('AUTHOR:', '').strip()
return Quote(
text=improved_text,
author=improved_author,
source=quote.source,
topic=quote.topic
)
except Exception as e:
logging.error(f"Failed to improve quote formatting: {e}")
return quote
def generate_context(self, quote: Quote) -> str:
"""Generate contextual information about the quote"""
try:
prompt = f"""
Provide brief contextual information about this quote:
"{quote.text}" - {quote.author}
Include information about the author's background and the quote's significance.
Keep the response under 100 words.
"""
response = self.client.chat.completions.create(
model=self.model_name,
messages=[{"role": "user", "content": prompt}],
temperature=0.7,
max_tokens=150
)
return response.choices[0].message.content.strip()
except Exception as e:
logging.error(f"Failed to generate context: {e}")
return ""
class LocalLLMProvider(LLMProvider):
def __init__(self, model_name: str = "microsoft/DialoGPT-medium"):
self.model_name = model_name
self.tokenizer = None
self.model = None
self._load_model()
def _load_model(self):
"""Load the local LLM model"""
try:
self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
self.model = AutoModelForCausalLM.from_pretrained(self.model_name)
if self.tokenizer.pad_token is None:
self.tokenizer.pad_token = self.tokenizer.eos_token
except Exception as e:
logging.error(f"Failed to load local model: {e}")
raise
def evaluate_quote_relevance(self, quote: Quote, topics: List[str]) -> float:
"""Evaluate quote relevance using local LLM"""
try:
# Simplified relevance evaluation using keyword matching and semantic similarity
quote_words = set(quote.text.lower().split())
topic_words = set(' '.join(topics).lower().split())
# Calculate word overlap
overlap = len(quote_words.intersection(topic_words))
total_words = len(quote_words.union(topic_words))
if total_words == 0:
return 0.0
# Basic relevance score based on word overlap
relevance_score = overlap / total_words
# Boost score for quotes with topic-related keywords
topic_keywords = {
'motivation': ['success', 'achieve', 'goal', 'dream', 'inspire'],
'wisdom': ['knowledge', 'learn', 'understand', 'wise', 'truth'],
'success': ['win', 'accomplish', 'victory', 'triumph', 'excel']
}
for topic in topics:
if topic.lower() in topic_keywords:
keywords = topic_keywords[topic.lower()]
keyword_matches = sum(1 for word in quote_words if word in keywords)
relevance_score += keyword_matches * 0.1
return min(relevance_score, 1.0)
except Exception as e:
logging.error(f"Failed to evaluate quote relevance locally: {e}")
return 0.5
def improve_quote_formatting(self, quote: Quote) -> Quote:
"""Improve quote formatting using simple rules"""
try:
# Basic formatting improvements
improved_text = quote.text.strip()
# Remove extra quotes if present
if improved_text.startswith('"') and improved_text.endswith('"'):
improved_text = improved_text[1:-1]
# Ensure proper capitalization
improved_text = improved_text[0].upper() + improved_text[1:] if improved_text else ""
# Clean up author name
improved_author = quote.author.strip()
if improved_author.lower() == 'unknown' or not improved_author:
improved_author = "Anonymous"
return Quote(
text=improved_text,
author=improved_author,
source=quote.source,
topic=quote.topic
)
except Exception as e:
logging.error(f"Failed to improve quote formatting: {e}")
return quote
def generate_context(self, quote: Quote) -> str:
"""Generate basic contextual information"""
# For local implementation, provide basic context
if quote.author and quote.author.lower() != 'unknown':
return f"This quote is attributed to {quote.author}."
else:
return "This is an inspirational quote from an anonymous source."
class LLMManager:
def __init__(self, config: LLMConfig):
self.config = config
self.provider = self._create_provider()
def _create_provider(self) -> LLMProvider:
"""Create appropriate LLM provider based on configuration"""
if self.config.provider.lower() == 'openai':
return OpenAIProvider(
api_key=self.config.api_key,
model_name=self.config.model_name,
temperature=self.config.temperature
)
elif self.config.provider.lower() == 'local':
return LocalLLMProvider(model_name=self.config.model_name)
else:
raise ValueError(f"Unsupported LLM provider: {self.config.provider}")
def select_best_quote(self, quotes: List[Quote], topics: List[str]) -> Optional[Quote]:
"""Select the best quote from available options"""
if not quotes:
return None
scored_quotes = []
for quote in quotes:
try:
relevance_score = self.provider.evaluate_quote_relevance(quote, topics)
scored_quotes.append((quote, relevance_score))
except Exception as e:
logging.error(f"Failed to score quote: {e}")
scored_quotes.append((quote, 0.0))
# Sort by relevance score and return the best quote
scored_quotes.sort(key=lambda x: x[1], reverse=True)
best_quote = scored_quotes[0][0]
# Improve formatting of the selected quote
return self.provider.improve_quote_formatting(best_quote)
This LLM integration exposes a consistent interface for quote evaluation and enhancement across local and remote deployments. Note that the local provider shown here loads a Hugging Face model but scores relevance with a lightweight keyword heuristic and formats quotes with simple rules; replacing those methods with genuine local inference is left as an extension. Both providers include error handling and neutral fallback scores so that a failed evaluation never blocks quote selection.
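A brief usage sketch follows, tying the manager to a list of candidate quotes. It assumes the LLMConfig, LLMManager, and Quote definitions above are in scope; with the 'local' provider the configured Hugging Face model is downloaded on first use, and the two example quotes are placeholders.
llm_config = LLMConfig(provider="local", model_name="microsoft/DialoGPT-medium")
llm_manager = LLMManager(llm_config)
candidates = [
    Quote(text="Well done is better than well said.", author="Benjamin Franklin", source="example"),
    Quote(text="The secret of getting ahead is getting started.", author="Mark Twain", source="example"),
]
best = llm_manager.select_best_quote(candidates, topics=["motivation", "success"])
if best:
    print(f'"{best.text}" — {best.author}')
    print(llm_manager.provider.generate_context(best))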
QUOTE HISTORY MANAGEMENT
The quote history management system ensures that users receive diverse content by tracking previously displayed quotes and preventing repetition. This component maintains persistent storage of quote history while implementing intelligent aging mechanisms to allow quote reuse after appropriate time intervals.
The history manager implements efficient storage and retrieval mechanisms to handle large quote databases without performance degradation. Additionally, the system provides analytics capabilities to track user preferences and quote effectiveness over time.
The following code example demonstrates a comprehensive quote history management implementation:
import json
import hashlib
from datetime import datetime, timedelta
from typing import Any, Dict, List, Optional
from pathlib import Path
import logging
class QuoteHistoryManager:
def __init__(self, history_file: str, max_history_size: int = 1000):
self.history_file = Path(history_file)
self.max_history_size = max_history_size
self.history_data = self._load_history()
def _load_history(self) -> Dict[str, Any]:
"""Load quote history from file"""
try:
if self.history_file.exists():
with open(self.history_file, 'r', encoding='utf-8') as file:
return json.load(file)
else:
return {
'displayed_quotes': {},
'quote_ratings': {},
'topic_preferences': {},
'last_cleanup': datetime.now().isoformat()
}
except Exception as e:
logging.error(f"Failed to load history file: {e}")
return {
'displayed_quotes': {},
'quote_ratings': {},
'topic_preferences': {},
'last_cleanup': datetime.now().isoformat()
}
def _save_history(self):
"""Save quote history to file"""
try:
# Ensure directory exists
self.history_file.parent.mkdir(parents=True, exist_ok=True)
with open(self.history_file, 'w', encoding='utf-8') as file:
json.dump(self.history_data, file, indent=2, ensure_ascii=False)
except Exception as e:
logging.error(f"Failed to save history file: {e}")
def _generate_quote_hash(self, quote: Quote) -> str:
"""Generate unique hash for a quote"""
quote_string = f"{quote.text}|{quote.author}".lower().strip()
return hashlib.md5(quote_string.encode('utf-8')).hexdigest()
def is_quote_recently_displayed(self, quote: Quote, days_threshold: int = 30) -> bool:
"""Check if quote was displayed recently"""
quote_hash = self._generate_quote_hash(quote)
if quote_hash not in self.history_data['displayed_quotes']:
return False
last_displayed_str = self.history_data['displayed_quotes'][quote_hash]['last_displayed']
last_displayed = datetime.fromisoformat(last_displayed_str)
threshold_date = datetime.now() - timedelta(days=days_threshold)
return last_displayed > threshold_date
def record_displayed_quote(self, quote: Quote, topics: List[str]):
"""Record that a quote has been displayed"""
quote_hash = self._generate_quote_hash(quote)
current_time = datetime.now().isoformat()
# Record quote display
if quote_hash not in self.history_data['displayed_quotes']:
self.history_data['displayed_quotes'][quote_hash] = {
'quote_text': quote.text,
'author': quote.author,
'source': quote.source,
'first_displayed': current_time,
'last_displayed': current_time,
'display_count': 1,
'topics': topics
}
else:
self.history_data['displayed_quotes'][quote_hash]['last_displayed'] = current_time
self.history_data['displayed_quotes'][quote_hash]['display_count'] += 1
# Update topic preferences
for topic in topics:
if topic not in self.history_data['topic_preferences']:
self.history_data['topic_preferences'][topic] = {
'quote_count': 0,
'last_used': current_time
}
self.history_data['topic_preferences'][topic]['quote_count'] += 1
self.history_data['topic_preferences'][topic]['last_used'] = current_time
# Cleanup old entries if necessary
self._cleanup_old_entries()
self._save_history()
def rate_quote(self, quote: Quote, rating: float):
"""Record user rating for a quote"""
quote_hash = self._generate_quote_hash(quote)
if 'quote_ratings' not in self.history_data:
self.history_data['quote_ratings'] = {}
self.history_data['quote_ratings'][quote_hash] = {
'rating': rating,
'rated_at': datetime.now().isoformat()
}
self._save_history()
def get_quote_statistics(self) -> Dict[str, Any]:
"""Get statistics about quote history"""
total_quotes = len(self.history_data['displayed_quotes'])
# Calculate topic distribution
topic_distribution = {}
for quote_data in self.history_data['displayed_quotes'].values():
for topic in quote_data.get('topics', []):
topic_distribution[topic] = topic_distribution.get(topic, 0) + 1
# Calculate average rating
ratings = [r['rating'] for r in self.history_data.get('quote_ratings', {}).values()]
average_rating = sum(ratings) / len(ratings) if ratings else 0.0
# Find most popular topics
popular_topics = sorted(topic_distribution.items(), key=lambda x: x[1], reverse=True)
return {
'total_quotes_displayed': total_quotes,
'topic_distribution': topic_distribution,
'average_rating': average_rating,
'total_ratings': len(ratings),
'most_popular_topics': popular_topics[:5]
}
def filter_new_quotes(self, quotes: List[Quote], days_threshold: int = 30) -> List[Quote]:
"""Filter out recently displayed quotes"""
return [quote for quote in quotes
if not self.is_quote_recently_displayed(quote, days_threshold)]
def _cleanup_old_entries(self):
"""Remove old entries to maintain history size limit"""
displayed_quotes = self.history_data['displayed_quotes']
if len(displayed_quotes) <= self.max_history_size:
return
# Sort by last displayed date and remove oldest entries
sorted_quotes = sorted(
displayed_quotes.items(),
key=lambda x: x[1]['last_displayed']
)
# Keep only the most recent entries
entries_to_keep = sorted_quotes[-self.max_history_size:]
self.history_data['displayed_quotes'] = dict(entries_to_keep)
# Update last cleanup time
self.history_data['last_cleanup'] = datetime.now().isoformat()
logging.info(f"Cleaned up quote history, kept {len(entries_to_keep)} entries")
def get_recommended_topics(self, limit: int = 5) -> List[str]:
"""Get recommended topics based on usage history"""
topic_prefs = self.history_data.get('topic_preferences', {})
# Sort topics by usage count and recency
scored_topics = []
current_time = datetime.now()
for topic, data in topic_prefs.items():
last_used = datetime.fromisoformat(data['last_used'])
days_since_used = (current_time - last_used).days
# Score based on usage count and recency (more recent = higher score)
recency_factor = max(0.1, 1.0 - (days_since_used / 365.0))
score = data['quote_count'] * recency_factor
scored_topics.append((topic, score))
# Sort by score and return top topics
scored_topics.sort(key=lambda x: x[1], reverse=True)
return [topic for topic, score in scored_topics[:limit]]
This quote history management system provides comprehensive tracking and analytics capabilities while maintaining efficient storage and retrieval performance. The implementation includes intelligent cleanup mechanisms and user preference tracking to enhance the overall user experience.
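The sketch below illustrates typical use of the history manager: checking for a recent display, recording a quote, rating it, and querying statistics. It assumes the QuoteHistoryManager and Quote definitions above are in scope; the file name and the example quote are placeholders.
history = QuoteHistoryManager("quote_history.json", max_history_size=500)
quote = Quote(text="Knowing yourself is the beginning of all wisdom.", author="Aristotle", source="example", topic="wisdom")
if not history.is_quote_recently_displayed(quote, days_threshold=30):
    history.record_displayed_quote(quote, topics=["wisdom"])
    history.rate_quote(quote, rating=8.0)
print(history.get_quote_statistics())
print(history.get_recommended_topics(limit=3))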
DISPLAY AND USER INTERFACE
The display manager handles the presentation layer of the application, formatting quotes appropriately for different output media and managing display timing and user interaction. This component must support various display modes, including console output, desktop notifications, and graphical interfaces.
The display system implements configurable formatting options to accommodate different user preferences and screen configurations. Additionally, the system provides interactive capabilities for user feedback and quote rating functionality.
The following code example demonstrates a comprehensive display management implementation:
import tkinter as tk
from tkinter import ttk, messagebox
import threading
import time
from typing import Optional, Callable
import logging
from datetime import datetime
import sys
import os
class DisplayManager:
def __init__(self, config: AppConfig):
self.config = config
self.current_quote = None
self.display_window = None
self.is_displaying = False
def display_quote_console(self, quote: Quote, context: str = ""):
"""Display quote in console format"""
print("\n" + "="*80)
print("QUOTE OF THE DAY")
print("="*80)
print()
print(f'"{quote.text}"')
print()
print(f"— {quote.author}")
if quote.source:
print(f"Source: {quote.source}")
if context:
print()
print("Context:")
print(context)
print()
print(f"Displayed at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
print("="*80)
print()
def display_quote_gui(self, quote: Quote, context: str = "",
rating_callback: Optional[Callable] = None):
"""Display quote in GUI window"""
if self.display_window and self.display_window.winfo_exists():
self.display_window.destroy()
self.current_quote = quote
self._create_quote_window(quote, context, rating_callback)
def _create_quote_window(self, quote: Quote, context: str,
rating_callback: Optional[Callable]):
"""Create the main quote display window"""
self.display_window = tk.Tk()
self.display_window.title("Quote of the Day")
self.display_window.geometry("600x400")
self.display_window.configure(bg='#f0f0f0')
# Center the window on screen
self.display_window.update_idletasks()
x = (self.display_window.winfo_screenwidth() // 2) - (600 // 2)
y = (self.display_window.winfo_screenheight() // 2) - (400 // 2)
self.display_window.geometry(f"600x400+{x}+{y}")
# Create main frame
main_frame = ttk.Frame(self.display_window, padding="20")
main_frame.grid(row=0, column=0, sticky=(tk.W, tk.E, tk.N, tk.S))
# Configure grid weights
self.display_window.columnconfigure(0, weight=1)
self.display_window.rowconfigure(0, weight=1)
main_frame.columnconfigure(0, weight=1)
main_frame.rowconfigure(1, weight=1)
# Title
title_label = ttk.Label(main_frame, text="Quote of the Day",
font=('Arial', 16, 'bold'))
title_label.grid(row=0, column=0, pady=(0, 20))
# Quote text
quote_frame = ttk.Frame(main_frame)
quote_frame.grid(row=1, column=0, sticky=(tk.W, tk.E, tk.N, tk.S), pady=(0, 10))
quote_frame.columnconfigure(0, weight=1)
quote_frame.rowconfigure(0, weight=1)
quote_text = tk.Text(quote_frame, wrap=tk.WORD, font=('Arial', 12),
height=8, bg='white', relief='flat', borderwidth=0)
quote_text.grid(row=0, column=0, sticky=(tk.W, tk.E, tk.N, tk.S))
# Add scrollbar
scrollbar = ttk.Scrollbar(quote_frame, orient=tk.VERTICAL, command=quote_text.yview)
scrollbar.grid(row=0, column=1, sticky=(tk.N, tk.S))
quote_text.configure(yscrollcommand=scrollbar.set)
# Insert quote content
quote_content = f'"{quote.text}"\n\n— {quote.author}'
if quote.source:
quote_content += f'\nSource: {quote.source}'
if context:
quote_content += f'\n\nContext:\n{context}'
quote_text.insert(tk.END, quote_content)
quote_text.configure(state='disabled')
# Rating frame
if rating_callback:
rating_frame = ttk.Frame(main_frame)
rating_frame.grid(row=2, column=0, pady=(10, 0), sticky=(tk.W, tk.E))
ttk.Label(rating_frame, text="Rate this quote:").grid(row=0, column=0, padx=(0, 10))
rating_var = tk.StringVar(value="5")
rating_scale = ttk.Scale(rating_frame, from_=1, to=10, orient=tk.HORIZONTAL,
variable=rating_var, length=200)
rating_scale.grid(row=0, column=1, padx=(0, 10))
rating_label = ttk.Label(rating_frame, textvariable=rating_var)
rating_label.grid(row=0, column=2, padx=(0, 10))
def submit_rating():
try:
rating = float(rating_var.get())
rating_callback(quote, rating)
messagebox.showinfo("Rating Submitted", "Thank you for your rating!")
except Exception as e:
messagebox.showerror("Error", f"Failed to submit rating: {e}")
ttk.Button(rating_frame, text="Submit Rating",
command=submit_rating).grid(row=0, column=3)
# Control buttons
button_frame = ttk.Frame(main_frame)
button_frame.grid(row=3, column=0, pady=(20, 0))
ttk.Button(button_frame, text="Close",
command=self.display_window.destroy).pack(side=tk.LEFT, padx=(0, 10))
ttk.Button(button_frame, text="Copy Quote",
command=lambda: self._copy_quote_to_clipboard(quote)).pack(side=tk.LEFT)
# Auto-close timer if configured
if self.config.display_duration_seconds > 0:
self.display_window.after(self.config.display_duration_seconds * 1000,
self._auto_close_window)
# Make window stay on top initially
self.display_window.attributes('-topmost', True)
self.display_window.after(100, lambda: self.display_window.attributes('-topmost', False))
self.is_displaying = True
self.display_window.protocol("WM_DELETE_WINDOW", self._on_window_close)
def _copy_quote_to_clipboard(self, quote: Quote):
"""Copy quote to clipboard"""
try:
quote_text = f'"{quote.text}" — {quote.author}'
self.display_window.clipboard_clear()
self.display_window.clipboard_append(quote_text)
messagebox.showinfo("Copied", "Quote copied to clipboard!")
except Exception as e:
messagebox.showerror("Error", f"Failed to copy quote: {e}")
def _auto_close_window(self):
"""Automatically close the window after timeout"""
if self.display_window and self.display_window.winfo_exists():
self.display_window.destroy()
def _on_window_close(self):
"""Handle window close event"""
self.is_displaying = False
self.display_window.destroy()
def display_quote_notification(self, quote: Quote):
"""Display quote as system notification (Windows/Linux)"""
try:
if sys.platform == "win32":
self._show_windows_notification(quote)
elif sys.platform.startswith("linux"):
self._show_linux_notification(quote)
else:
# Fallback to console display
self.display_quote_console(quote)
except Exception as e:
logging.error(f"Failed to show notification: {e}")
self.display_quote_console(quote)
def _show_windows_notification(self, quote: Quote):
"""Show Windows toast notification"""
try:
import win10toast
toaster = win10toast.ToastNotifier()
title = "Quote of the Day"
message = f'"{quote.text[:100]}..." — {quote.author}'
toaster.show_toast(title, message, duration=10, threaded=True)
except ImportError:
logging.warning("win10toast not available, using console display")
self.display_quote_console(quote)
def _show_linux_notification(self, quote: Quote):
"""Show Linux desktop notification"""
try:
import subprocess
title = "Quote of the Day"
message = f'"{quote.text[:100]}..." — {quote.author}'
subprocess.run([
'notify-send', title, message,
'--expire-time=10000'
], check=True)
except (subprocess.CalledProcessError, FileNotFoundError):
logging.warning("notify-send not available, using console display")
self.display_quote_console(quote)
def start_gui_mode(self):
"""Start the GUI event loop"""
if self.display_window:
self.display_window.mainloop()
def is_window_open(self) -> bool:
"""Check if display window is currently open"""
return (self.display_window is not None and
self.display_window.winfo_exists() and
self.is_displaying)
class QuoteScheduler:
def __init__(self, config: AppConfig, quote_callback: Callable):
self.config = config
self.quote_callback = quote_callback
self.running = False
self.scheduler_thread = None
def start_scheduling(self):
"""Start the quote scheduling system"""
if self.running:
return
self.running = True
self.scheduler_thread = threading.Thread(target=self._schedule_loop, daemon=True)
self.scheduler_thread.start()
logging.info("Quote scheduler started")
def stop_scheduling(self):
"""Stop the quote scheduling system"""
self.running = False
if self.scheduler_thread:
self.scheduler_thread.join(timeout=1)
logging.info("Quote scheduler stopped")
def _schedule_loop(self):
"""Main scheduling loop"""
# Show initial quote immediately
try:
self.quote_callback()
except Exception as e:
logging.error(f"Failed to display initial quote: {e}")
# Schedule subsequent quotes
while self.running:
try:
# Wait for the configured interval
sleep_duration = self.config.quote_interval_hours * 3600
for _ in range(int(sleep_duration)):
if not self.running:
break
time.sleep(1)
if self.running:
self.quote_callback()
except Exception as e:
logging.error(f"Error in scheduler loop: {e}")
time.sleep(60) # Wait a minute before retrying
This display management system provides presentation capabilities across console, GUI, and notification outputs while supporting user interaction and cleanup of display resources. The scheduler runs in a daemon thread so quote delivery does not block the rest of the program, and the notification paths fall back to console output when platform tooling is unavailable. Note that tkinter generally expects windows to be created and driven from the main thread, so the GUI path may need adjustment when combined with background scheduling on some platforms.
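As a minimal usage sketch, the code below wires a console-only DisplayManager to a QuoteScheduler. It assumes the AppConfig, LLMConfig, DisplayManager, QuoteScheduler, and Quote definitions above are in scope; the configuration values and the example quote are placeholders, and no LLM is actually instantiated here.
config = AppConfig(
    topics=["motivation"],
    quote_interval_hours=24,
    llm_config=LLMConfig(provider="local", model_name="microsoft/DialoGPT-medium"),
    quote_history_file="quote_history.json",
    max_history_size=1000,
    quote_sources=[],
    display_duration_seconds=10,
)
display = DisplayManager(config)
def show_quote():
    display.display_quote_console(
        Quote(text="Do what you can, with what you have, where you are.",
              author="Theodore Roosevelt", source="example")
    )
scheduler = QuoteScheduler(config, quote_callback=show_quote)
scheduler.start_scheduling()  # shows one quote immediately, then every 24 hours
# The scheduler thread is a daemon: keep the main thread alive while it runs,
# and call scheduler.stop_scheduling() before exiting.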
COMPLETE IMPLEMENTATION EXAMPLE
The following complete implementation demonstrates how all components integrate into a fully functional Quote-Of-The-Day application. It assumes the classes from the previous sections (ConfigurationManager, QuoteDiscoveryEngine, LLMManager, QuoteHistoryManager, DisplayManager, QuoteScheduler, and the Quote dataclass) are defined in the same file or imported from companion modules; with that in place and a valid configuration, the script can be executed directly.
#!/usr/bin/env python3
"""
Complete Quote-Of-The-Day Application
A comprehensive LLM-based quote discovery and display system
"""
import argparse
import logging
import signal
import sys
import time
from typing import List, Optional
import threading
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('quote_app.log'),
logging.StreamHandler(sys.stdout)
]
)
logger = logging.getLogger(__name__)
class QuoteOfTheDayApplication:
def __init__(self, config_file: str = "config.json"):
self.config_manager = ConfigurationManager(config_file)
self.config = None
self.quote_discovery = None
self.llm_manager = None
self.history_manager = None
self.display_manager = None
self.scheduler = None
self.running = False
# Setup signal handlers for graceful shutdown
signal.signal(signal.SIGINT, self._signal_handler)
signal.signal(signal.SIGTERM, self._signal_handler)
def initialize(self):
"""Initialize all application components"""
try:
logger.info("Initializing Quote-Of-The-Day application...")
# Load configuration
self.config = self.config_manager.load_configuration()
logger.info(f"Configuration loaded: {len(self.config.topics)} topics configured")
# Initialize components
self.quote_discovery = QuoteDiscoveryEngine(
sources=self.config.quote_sources,
request_delay=1.0
)
self.llm_manager = LLMManager(self.config.llm_config)
self.history_manager = QuoteHistoryManager(
history_file=self.config.quote_history_file,
max_history_size=self.config.max_history_size
)
self.display_manager = DisplayManager(self.config)
self.scheduler = QuoteScheduler(
config=self.config,
quote_callback=self._display_daily_quote
)
logger.info("Application initialized successfully")
except Exception as e:
logger.error(f"Failed to initialize application: {e}")
raise
def _display_daily_quote(self):
"""Main quote display logic"""
try:
logger.info("Discovering new quotes...")
# Discover quotes based on configured topics
discovered_quotes = self.quote_discovery.discover_quotes(
topics=self.config.topics,
max_quotes=20
)
if not discovered_quotes:
logger.warning("No quotes discovered, using fallback")
self._display_fallback_quote()
return
logger.info(f"Discovered {len(discovered_quotes)} quotes")
# Filter out recently displayed quotes
new_quotes = self.history_manager.filter_new_quotes(
quotes=discovered_quotes,
days_threshold=30
)
if not new_quotes:
logger.info("No new quotes found, using older quotes")
new_quotes = discovered_quotes
# Select best quote using LLM
selected_quote = self.llm_manager.select_best_quote(
quotes=new_quotes,
topics=self.config.topics
)
if not selected_quote:
logger.warning("Failed to select quote, using first available")
selected_quote = new_quotes[0]
# Generate context for the quote
context = self.llm_manager.provider.generate_context(selected_quote)
# Record quote in history
self.history_manager.record_displayed_quote(
quote=selected_quote,
topics=self.config.topics
)
# Display the quote
self._display_quote_with_options(selected_quote, context)
logger.info(f"Displayed quote by {selected_quote.author}")
except Exception as e:
logger.error(f"Failed to display daily quote: {e}")
self._display_fallback_quote()
def _display_quote_with_options(self, quote: Quote, context: str):
"""Display quote using configured display method"""
rating_callback = lambda q, r: self.history_manager.rate_quote(q, r)
# Determine display method based on available GUI
try:
import tkinter
# GUI available, use graphical display
self.display_manager.display_quote_gui(
quote=quote,
context=context,
rating_callback=rating_callback
)
# Start GUI if not already running
if not self.display_manager.is_window_open():
gui_thread = threading.Thread(
target=self.display_manager.start_gui_mode,
daemon=True
)
gui_thread.start()
except ImportError:
# No GUI available, use console display
self.display_manager.display_quote_console(quote, context)
def _display_fallback_quote(self):
"""Display a fallback quote when normal operation fails"""
fallback_quote = Quote(
text="The only way to do great work is to love what you do.",
author="Steve Jobs",
source="Fallback",
topic="motivation"
)
self.display_manager.display_quote_console(fallback_quote)
def run_interactive_mode(self):
"""Run application in interactive mode"""
try:
self.initialize()
print("Quote-Of-The-Day Application")
print("============================")
print()
print("Commands:")
print(" show - Display current quote")
print(" new - Get a new quote")
print(" stats - Show statistics")
print(" topics - Show recommended topics")
print(" quit - Exit application")
print()
while True:
try:
command = input("Enter command: ").strip().lower()
if command == "quit":
break
elif command == "show" or command == "new":
self._display_daily_quote()
elif command == "stats":
self._show_statistics()
elif command == "topics":
self._show_recommended_topics()
else:
print("Unknown command. Try 'show', 'new', 'stats', 'topics', or 'quit'.")
except KeyboardInterrupt:
break
except Exception as e:
logger.error(f"Error in interactive mode: {e}")
print(f"Error: {e}")
except Exception as e:
logger.error(f"Failed to run interactive mode: {e}")
sys.exit(1)
def run_scheduled_mode(self):
"""Run application in scheduled mode"""
try:
self.initialize()
self.running = True
logger.info("Starting scheduled mode...")
self.scheduler.start_scheduling()
# Keep main thread alive
while self.running:
time.sleep(1)
except Exception as e:
logger.error(f"Failed to run scheduled mode: {e}")
sys.exit(1)
finally:
self._cleanup()
def run_single_quote(self):
"""Display a single quote and exit"""
try:
self.initialize()
self._display_daily_quote()
# Wait for GUI display if applicable
if self.display_manager.is_window_open():
while self.display_manager.is_window_open():
time.sleep(0.1)
except Exception as e:
logger.error(f"Failed to display single quote: {e}")
sys.exit(1)
def _show_statistics(self):
"""Display application statistics"""
try:
stats = self.history_manager.get_quote_statistics()
print("\nQuote Statistics:")
print("=================")
print(f"Total quotes displayed: {stats['total_quotes_displayed']}")
print(f"Average rating: {stats['average_rating']:.1f}/10")
print(f"Total ratings: {stats['total_ratings']}")
print()
if stats['most_popular_topics']:
print("Most popular topics:")
for topic, count in stats['most_popular_topics']:
print(f" {topic}: {count} quotes")
print()
except Exception as e:
logger.error(f"Failed to show statistics: {e}")
print(f"Error retrieving statistics: {e}")
def _show_recommended_topics(self):
"""Display recommended topics"""
try:
topics = self.history_manager.get_recommended_topics()
print("\nRecommended topics based on your history:")
print("=========================================")
if topics:
for i, topic in enumerate(topics, 1):
print(f" {i}. {topic}")
else:
print(" No recommendations available yet.")
print()
except Exception as e:
logger.error(f"Failed to show recommended topics: {e}")
print(f"Error retrieving recommendations: {e}")
def _signal_handler(self, signum, frame):
"""Handle shutdown signals"""
logger.info(f"Received signal {signum}, shutting down...")
self.running = False
self._cleanup()
sys.exit(0)
def _cleanup(self):
"""Cleanup resources"""
try:
if self.scheduler:
self.scheduler.stop_scheduling()
logger.info("Application cleanup completed")
except Exception as e:
logger.error(f"Error during cleanup: {e}")
def main():
"""Main application entry point"""
parser = argparse.ArgumentParser(description="Quote-Of-The-Day Application")
parser.add_argument(
"--mode",
choices=["interactive", "scheduled", "single"],
default="single",
help="Application mode (default: single)"
)
parser.add_argument(
"--config",
default="config.json",
help="Configuration file path (default: config.json)"
)
parser.add_argument(
"--log-level",
choices=["DEBUG", "INFO", "WARNING", "ERROR"],
default="INFO",
help="Logging level (default: INFO)"
)
args = parser.parse_args()
# Set logging level
logging.getLogger().setLevel(getattr(logging, args.log_level))
# Create and run application
app = QuoteOfTheDayApplication(config_file=args.config)
try:
if args.mode == "interactive":
app.run_interactive_mode()
elif args.mode == "scheduled":
app.run_scheduled_mode()
else: # single
app.run_single_quote()
except KeyboardInterrupt:
logger.info("Application interrupted by user")
sys.exit(0)
except Exception as e:
logger.error(f"Application failed: {e}")
sys.exit(1)
if __name__ == "__main__":
main()
This complete implementation provides a fully functional Quote-Of-The-Day application that integrates all the previously discussed components. The application supports multiple execution modes and provides comprehensive error handling and logging capabilities.
ERROR HANDLING AND EDGE CASES
The application implements comprehensive error handling to manage various failure scenarios that may occur during operation. Network connectivity issues, API rate limiting, malformed configuration files, and resource constraints are all handled gracefully to ensure reliable operation.
The error handling strategy employs multiple layers of protection including input validation, network timeout management, and fallback mechanisms. When primary systems fail, the application automatically switches to alternative approaches to maintain functionality. For example, if LLM services are unavailable, the system falls back to simple keyword-based quote selection.
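A minimal sketch of such a fallback is shown below, assuming the LLMManager and Quote classes from the earlier sections; the wrapper function name and the word-overlap heuristic are illustrative choices rather than part of the application as listed.
import logging
from typing import List, Optional
def select_quote_with_fallback(llm_manager: LLMManager, quotes: List[Quote], topics: List[str]) -> Optional[Quote]:
    """Try LLM-based selection first; fall back to keyword overlap if it fails."""
    if not quotes:
        return None
    try:
        return llm_manager.select_best_quote(quotes, topics)
    except Exception as exc:
        logging.warning(f"LLM selection unavailable, falling back to keywords: {exc}")
        topic_words = set(" ".join(topics).lower().split())
        # Pick the quote that shares the most words with the configured topics
        return max(quotes, key=lambda q: len(set(q.text.lower().split()) & topic_words))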
The application also implements proper resource management to prevent memory leaks and handle long-running operations efficiently. Connection pooling, request caching, and periodic cleanup operations ensure stable performance over extended periods.
DEPLOYMENT AND OPERATIONAL CONSIDERATIONS
Deploying the Quote-Of-The-Day application requires consideration of several operational factors including dependency management, configuration security, and system integration requirements. The application can be deployed as a standalone desktop application, a system service, or integrated into existing productivity workflows.
For production deployments, the application should be configured with appropriate logging levels, error monitoring, and health check mechanisms. The quote history database should be backed up regularly to prevent data loss, and API credentials should be stored securely using environment variables or secure credential management systems.
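One way to source the API key from the environment is sketched below; the QUOTE_APP_OPENAI_KEY variable name is an assumption, and the helper simply overrides whatever placeholder value is present in config.json before the LLM manager is created.
import os
def resolve_api_key(config: AppConfig) -> AppConfig:
    """Prefer an environment variable over any key stored in config.json (sketch)."""
    env_key = os.environ.get("QUOTE_APP_OPENAI_KEY")  # assumed variable name
    if env_key:
        config.llm_config.api_key = env_key
    elif not config.llm_config.api_key or config.llm_config.api_key == "your_api_key_here":
        raise RuntimeError("No API key configured; set QUOTE_APP_OPENAI_KEY or add llm.api_key to config.json")
    return config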
The application supports containerized deployment through Docker, enabling consistent deployment across different environments. Additionally, the modular architecture allows for distributed deployment where different components can run on separate systems for improved scalability and reliability.
System administrators should monitor application performance metrics including quote discovery success rates, LLM response times, and user engagement statistics. These metrics provide valuable insights for optimizing configuration parameters and identifying potential issues before they impact user experience.
The application's scheduling system integrates well with existing system schedulers like cron on Unix systems or Task Scheduler on Windows, providing flexibility in deployment strategies. For enterprise environments, the application can be integrated with existing notification systems and user directories to provide personalized quote delivery based on organizational preferences.