INTRODUCTION
Creating an intelligent chatbot that manages Spotify playlists through natural language represents a convergence of modern language models, music streaming APIs, and user interface design. This article explores the complete implementation of such a system, where users can request playlists based on mood, genre, time period, artist, language, or style through conversational interactions. The system not only creates playlists but also modifies existing ones, merges multiple playlists, and intelligently manages both local and cloud-based playlist storage.
The core challenge lies in translating natural language requests into actionable Spotify API calls while maintaining context across conversations. When a user says "create a playlist of upbeat 80s rock songs," the system must understand multiple attributes simultaneously, search Spotify's catalog, and create a coherent playlist. Furthermore, when the user later says "add some Bon Jovi to that playlist," the system must maintain context about which playlist is being referenced.
This implementation supports both remote and local LLM execution, with GPU acceleration across NVIDIA CUDA, AMD ROCm, and Apple Silicon platforms. The architecture separates concerns cleanly, allowing the LLM component to be swapped without affecting the Spotify integration or UI layers.
SYSTEM ARCHITECTURE
The system consists of five primary layers that work together to deliver the complete functionality. The presentation layer handles all user interactions through a web-based interface. The application layer contains the core business logic for playlist management and natural language understanding. The LLM integration layer provides the intelligence for understanding user requests and generating responses. The integration layer manages communication with Spotify's API. Finally, the persistence layer handles local storage of playlist metadata and system state.
The flow of a typical user request begins when the user types a message into the web interface. This message travels to the application layer, which forwards it to the LLM integration layer along with conversation history and available tools. The LLM processes the request and determines which tools to call. If playlist creation is needed, the LLM calls the appropriate tool with parameters extracted from the user's natural language request. The application layer then translates this tool call into Spotify API requests, creates or modifies playlists, and stores metadata locally. Finally, the response flows back through the layers to the user interface.
Here is a high-level view of the component interaction:
+------------------+
|   Web Browser    |
+------------------+
         |
         | HTTP/WebSocket
         v
+------------------+
|    Web Server    |
|    (FastAPI)     |
+------------------+
         |
         v
+------------------+
|   Application    |
|   Controller     |
+------------------+
         |
    +----+-----------------+
    |                      |
    v                      v
+---------------+   +------------------+
|   LLM Layer   |   |   Spotify API    |
| (Local/Remote)|   |   Integration    |
+---------------+   +------------------+
        |                    |
        |                    v
        |           +------------------+
        |           | Playlist Storage |
        |           | (Local + Cloud)  |
        +---------->+------------------+
The separation of concerns allows each component to be developed, tested, and maintained independently. The LLM layer can switch between local models running on various GPU platforms or remote API services without affecting other components. The Spotify integration layer abstracts all API complexity, providing clean interfaces for playlist operations.
LLM INTEGRATION LAYER
The LLM integration layer serves as the intelligence core of the system. It must support both local and remote LLM execution, handle tool calling, and manage conversation context. The design uses a provider pattern to abstract different LLM backends.
The base LLM provider interface defines the contract that all implementations must follow:
class LLMProvider:
"""Base interface for LLM providers supporting both local and remote models"""
def __init__(self, config):
"""
Initialize the provider with configuration
Args:
config: Dictionary containing provider-specific configuration
including model_name, temperature, max_tokens, etc.
"""
self.config = config
self.model_name = config.get('model_name')
self.temperature = config.get('temperature', 0.7)
self.max_tokens = config.get('max_tokens', 2000)
def generate_response(self, messages, tools=None):
"""
Generate a response given conversation history and available tools
Args:
messages: List of message dictionaries with 'role' and 'content'
tools: Optional list of tool definitions in OpenAI function format
Returns:
Dictionary containing 'content' and optional 'tool_calls'
"""
raise NotImplementedError
def supports_tools(self):
"""Return whether this provider supports function/tool calling"""
raise NotImplementedError
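For concreteness, here is what provider configuration might look like. The model names and API key below are placeholders, not recommendations; only the key names matter, since they mirror what `__init__` reads:

```python
# Illustrative configuration dictionaries (all values are placeholders).
local_config = {
    'provider_type': 'local',
    'model_name': 'my-local-model',   # hypothetical model identifier
    'temperature': 0.7,
    'max_tokens': 2000,
}

remote_config = {
    'provider_type': 'remote',
    'model_name': 'gpt-4o-mini',      # example remote model name
    'api_key': 'YOUR_API_KEY',        # placeholder
    'temperature': 0.5,
}

# Missing keys fall back to the defaults used in LLMProvider.__init__:
max_tokens = remote_config.get('max_tokens', 2000)
```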
For local execution, we implement a provider built on Hugging Face transformers with GPU acceleration (llama.cpp is another popular option for local inference). The local provider must detect available GPU hardware and configure acceleration accordingly:
class LocalLLMProvider(LLMProvider):
"""Local LLM provider with GPU acceleration support"""
def __init__(self, config):
super().__init__(config)
self.device = self._detect_gpu()
self.model = self._load_model()
def _detect_gpu(self):
"""
Detect available GPU and return appropriate device configuration
Returns:
String indicating device type: 'cuda', 'rocm', 'mps', or 'cpu'
"""
import torch
# Check for AMD ROCm first: ROCm builds of PyTorch also report
# torch.cuda.is_available() as True, so this check must run before the CUDA one
if getattr(torch.version, 'hip', None) is not None:
return 'rocm'
# Check for NVIDIA CUDA
if torch.cuda.is_available():
return 'cuda'
# Check for Apple Silicon MPS
if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
return 'mps'
# Fall back to CPU
return 'cpu'
def _load_model(self):
"""Load the model with appropriate GPU acceleration"""
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained(self.model_name)
# Configure device mapping based on detected GPU
if self.device == 'cuda':
model = AutoModelForCausalLM.from_pretrained(
self.model_name,
device_map='auto',
torch_dtype=torch.float16
)
elif self.device == 'mps':
model = AutoModelForCausalLM.from_pretrained(
self.model_name,
torch_dtype=torch.float16
)
model = model.to('mps')
elif self.device == 'rocm':
model = AutoModelForCausalLM.from_pretrained(
self.model_name,
device_map='auto',
torch_dtype=torch.float16
)
else:
model = AutoModelForCausalLM.from_pretrained(
self.model_name
)
return {'model': model, 'tokenizer': tokenizer}
The local provider must also implement tool calling support. Since many local models do not natively support function calling, we implement a prompting strategy that instructs the model to output JSON when it needs to call a tool:
def generate_response(self, messages, tools=None):
"""Generate response with tool calling support"""
# Construct prompt with tool definitions
system_prompt = self._build_system_prompt(tools)
formatted_messages = self._format_messages(messages, system_prompt)
# Generate response
inputs = self.model['tokenizer'](
formatted_messages,
return_tensors='pt'
).to(self.model['model'].device)
outputs = self.model['model'].generate(
**inputs,
max_new_tokens=self.max_tokens,
temperature=self.temperature,
do_sample=True
)
response_text = self.model['tokenizer'].decode(
outputs[0][inputs['input_ids'].shape[1]:],
skip_special_tokens=True
)
# Parse for tool calls
tool_calls = self._extract_tool_calls(response_text)
if tool_calls:
return {
'content': None,
'tool_calls': tool_calls
}
else:
return {
'content': response_text,
'tool_calls': None
}
def _build_system_prompt(self, tools):
"""Build system prompt that includes tool definitions"""
base = "You are a helpful assistant for managing Spotify playlists."
if not tools:
return base
# Keep the assistant persona even when tools are attached
tools_desc = base + "\n\nYou have access to the following tools:\n\n"
for tool in tools:
tools_desc += f"Tool: {tool['function']['name']}\n"
tools_desc += f"Description: {tool['function']['description']}\n"
tools_desc += f"Parameters: {tool['function']['parameters']}\n\n"
tools_desc += (
"To use a tool, respond with JSON in this format:\n"
'{"tool": "tool_name", "arguments": {"param1": "value1"}}\n\n'
"Only use tools when necessary. Otherwise, respond naturally."
)
return tools_desc
For remote LLM providers like OpenAI, Anthropic, or others, we implement a simpler wrapper that uses their native APIs:
class RemoteLLMProvider(LLMProvider):
"""Remote LLM provider using API services"""
def __init__(self, config):
super().__init__(config)
self.api_key = config.get('api_key')
self.api_base = config.get('api_base', 'https://api.openai.com/v1')
self.provider_type = config.get('provider_type', 'openai')
def generate_response(self, messages, tools=None):
"""Generate response using remote API"""
import requests
headers = {
'Authorization': f'Bearer {self.api_key}',
'Content-Type': 'application/json'
}
payload = {
'model': self.model_name,
'messages': messages,
'temperature': self.temperature,
'max_tokens': self.max_tokens
}
if tools and self.supports_tools():
payload['tools'] = tools
payload['tool_choice'] = 'auto'
response = requests.post(
f'{self.api_base}/chat/completions',
headers=headers,
json=payload
)
response.raise_for_status()
result = response.json()
message = result['choices'][0]['message']
return {
'content': message.get('content'),
'tool_calls': message.get('tool_calls')
}
def supports_tools(self):
"""Remote providers typically support native tool calling"""
return True
The LLM manager coordinates between different providers and handles the conversation flow:
class LLMManager:
"""Manages LLM interactions and conversation state"""
def __init__(self, config):
"""
Initialize LLM manager with configuration
Args:
config: Dictionary with 'provider_type' ('local' or 'remote')
and provider-specific settings
"""
self.config = config
self.provider = self._create_provider()
self.conversation_history = []
def _create_provider(self):
"""Factory method to create appropriate LLM provider"""
provider_type = self.config.get('provider_type', 'remote')
if provider_type == 'local':
return LocalLLMProvider(self.config)
elif provider_type == 'remote':
return RemoteLLMProvider(self.config)
else:
raise ValueError(f"Unknown provider type: {provider_type}")
def chat(self, user_message, tools=None):
"""
Process a user message and return assistant response
Args:
user_message: String containing user's message
tools: Optional list of available tools
Returns:
Dictionary with 'content' and optional 'tool_calls'
"""
# Add user message to history
self.conversation_history.append({
'role': 'user',
'content': user_message
})
# Generate response
response = self.provider.generate_response(
self.conversation_history,
tools=tools
)
# Add assistant response to history
if response['content']:
self.conversation_history.append({
'role': 'assistant',
'content': response['content']
})
return response
This architecture allows seamless switching between local and remote LLMs through configuration while maintaining consistent behavior across the application.
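One caveat: `chat` records only plain-text turns, so after the application executes a tool, the result never reaches the model. To close the loop, the tool call and its result must also be appended to the history. A sketch of that round trip (the method name is illustrative; the message shape and `tool` role follow OpenAI conventions and are an assumption about the provider's API):

```python
import json

def record_tool_result(self, tool_call, result):
    """Append an executed tool call and its result to the history so
    the next generate_response() call can see the tool's output."""
    self.conversation_history.append({
        'role': 'assistant',
        'content': None,
        'tool_calls': [tool_call],
    })
    self.conversation_history.append({
        'role': 'tool',
        'tool_call_id': tool_call.get('id', ''),
        'content': json.dumps(result),
    })
```

After the tool executor runs, the application would call this before asking the LLM to summarize the outcome for the user.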
SPOTIFY API INTEGRATION
The Spotify integration layer handles all communication with Spotify's Web API. This includes authentication, searching for tracks, creating playlists, modifying playlists, and controlling playback. The Spotify API uses OAuth 2.0 for authentication, requiring users to authorize the application.
The Spotify client wrapper encapsulates all API operations:
class SpotifyClient:
"""Wrapper for Spotify Web API operations"""
def __init__(self, client_id, client_secret, redirect_uri):
"""
Initialize Spotify client with OAuth credentials
Args:
client_id: Spotify application client ID
client_secret: Spotify application client secret
redirect_uri: OAuth redirect URI for authorization flow
"""
self.client_id = client_id
self.client_secret = client_secret
self.redirect_uri = redirect_uri
self.access_token = None
self.refresh_token = None
self.token_expiry = None
def get_authorization_url(self):
"""
Generate Spotify authorization URL for user to grant permissions
Returns:
String URL for authorization
"""
from urllib.parse import urlencode
scopes = [
'playlist-read-private',
'playlist-modify-private',
'playlist-modify-public',
'user-library-read',
'user-read-playback-state',
'user-modify-playback-state'
]
params = {
'client_id': self.client_id,
'response_type': 'code',
'redirect_uri': self.redirect_uri,
'scope': ' '.join(scopes)
}
return f"https://accounts.spotify.com/authorize?{urlencode(params)}"
def exchange_code_for_token(self, code):
"""
Exchange authorization code for access token
Args:
code: Authorization code from OAuth callback
"""
import requests
from datetime import datetime, timedelta
response = requests.post(
'https://accounts.spotify.com/api/token',
data={
'grant_type': 'authorization_code',
'code': code,
'redirect_uri': self.redirect_uri,
'client_id': self.client_id,
'client_secret': self.client_secret
}
)
response.raise_for_status()
token_data = response.json()
self.access_token = token_data['access_token']
self.refresh_token = token_data['refresh_token']
self.token_expiry = datetime.now() + timedelta(
seconds=token_data['expires_in']
)
def _ensure_valid_token(self):
"""Refresh access token if expired"""
import requests
from datetime import datetime, timedelta
if not self.token_expiry or datetime.now() >= self.token_expiry:
response = requests.post(
'https://accounts.spotify.com/api/token',
data={
'grant_type': 'refresh_token',
'refresh_token': self.refresh_token,
'client_id': self.client_id,
'client_secret': self.client_secret
}
)
response.raise_for_status()
token_data = response.json()
self.access_token = token_data['access_token']
self.token_expiry = datetime.now() + timedelta(
seconds=token_data['expires_in']
)
def _make_request(self, method, endpoint, **kwargs):
"""
Make authenticated request to Spotify API
Args:
method: HTTP method (GET, POST, PUT, DELETE)
endpoint: API endpoint path
**kwargs: Additional arguments for requests library
Returns:
Response JSON data
"""
import requests
self._ensure_valid_token()
headers = kwargs.pop('headers', {})
headers['Authorization'] = f'Bearer {self.access_token}'
url = f'https://api.spotify.com/v1{endpoint}'
response = requests.request(
method,
url,
headers=headers,
**kwargs
)
response.raise_for_status()
if response.content:
return response.json()
return None
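Note that `_make_request` raises on any non-2xx status, including HTTP 429 (rate limited). Spotify sends a `Retry-After` header with 429 responses, so a retry loop is straightforward; here is a transport-agnostic sketch, written as a standalone helper rather than a method:

```python
import time

def request_with_retry(send, max_retries=3):
    """Retry a request while it returns HTTP 429, sleeping for the
    number of seconds given in Spotify's Retry-After header.

    send: zero-argument callable returning a response object with
          .status_code and .headers (e.g. a requests.Response).
    """
    for _ in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        time.sleep(int(response.headers.get('Retry-After', 1)))
    return response  # give up and return the last 429
```

`_make_request` could wrap its `requests.request` call with this helper before calling `raise_for_status`.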
The search functionality allows finding tracks based on various criteria. Spotify's search API supports complex queries combining artist, genre, year, and other attributes:
def search_tracks(self, query, limit=20):
"""
Search for tracks matching query
Args:
query: Search query string (can include filters like 'genre:rock year:1980-1989')
limit: Maximum number of results to return
Returns:
List of track dictionaries with id, name, artists, album, etc.
"""
params = {
'q': query,
'type': 'track',
'limit': limit
}
result = self._make_request('GET', '/search', params=params)
tracks = []
for item in result['tracks']['items']:
tracks.append({
'id': item['id'],
'name': item['name'],
'artists': [artist['name'] for artist in item['artists']],
'album': item['album']['name'],
'duration_ms': item['duration_ms'],
'uri': item['uri']
})
return tracks
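Filters compose into a single query string. For the introduction's "upbeat 80s rock" request, assembly might look like this (which terms go into filters versus free text is a design choice, since Spotify has no `mood:` filter):

```python
def build_query(genre=None, artist=None, year_range=None, free_text=None):
    """Assemble a Spotify search query string from optional filters.
    Unfiltered terms (like a mood word) are appended as free text."""
    parts = []
    if genre:
        parts.append(f'genre:{genre}')
    if artist:
        parts.append(f'artist:{artist}')
    if year_range:
        parts.append(f'year:{year_range}')
    if free_text:
        parts.append(free_text)
    return ' '.join(parts)
```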
Creating and managing playlists requires several API operations. First, we create a playlist, then add tracks to it:
def create_playlist(self, user_id, name, description='', public=False):
"""
Create a new playlist for the user
Args:
user_id: Spotify user ID
name: Playlist name
description: Optional playlist description
public: Whether playlist should be public
Returns:
Dictionary with playlist id, name, and other metadata
"""
payload = {
'name': name,
'description': description,
'public': public
}
result = self._make_request(
'POST',
f'/users/{user_id}/playlists',
json=payload
)
return {
'id': result['id'],
'name': result['name'],
'description': result['description'],
'uri': result['uri'],
'external_url': result['external_urls']['spotify']
}
def add_tracks_to_playlist(self, playlist_id, track_uris):
"""
Add tracks to an existing playlist
Args:
playlist_id: Spotify playlist ID
track_uris: List of Spotify track URIs
"""
# Spotify API limits to 100 tracks per request
chunk_size = 100
for i in range(0, len(track_uris), chunk_size):
chunk = track_uris[i:i + chunk_size]
self._make_request(
'POST',
f'/playlists/{playlist_id}/tracks',
json={'uris': chunk}
)
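The chunking arithmetic above can be checked in isolation:

```python
def chunk(items, size=100):
    """Split a list into consecutive slices of at most `size` items,
    mirroring the 100-track-per-request limit enforced above."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```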
def remove_tracks_from_playlist(self, playlist_id, track_uris):
"""
Remove specific tracks from a playlist
Args:
playlist_id: Spotify playlist ID
track_uris: List of Spotify track URIs to remove
"""
tracks = [{'uri': uri} for uri in track_uris]
self._make_request(
'DELETE',
f'/playlists/{playlist_id}/tracks',
json={'tracks': tracks}
)
def get_user_playlists(self, user_id):
"""
Retrieve all playlists for a user
Args:
user_id: Spotify user ID
Returns:
List of playlist dictionaries
"""
playlists = []
offset = 0
limit = 50
while True:
result = self._make_request(
'GET',
f'/users/{user_id}/playlists',
params={'offset': offset, 'limit': limit}
)
for item in result['items']:
playlists.append({
'id': item['id'],
'name': item['name'],
'description': item['description'],
'track_count': item['tracks']['total'],
'uri': item['uri']
})
if not result['next']:
break
offset += limit
return playlists
def get_playlist_tracks(self, playlist_id):
"""
Get all tracks from a playlist
Args:
playlist_id: Spotify playlist ID
Returns:
List of track dictionaries
"""
tracks = []
offset = 0
limit = 100
while True:
result = self._make_request(
'GET',
f'/playlists/{playlist_id}/tracks',
params={'offset': offset, 'limit': limit}
)
for item in result['items']:
if item['track']:
tracks.append({
'id': item['track']['id'],
'name': item['track']['name'],
'artists': [a['name'] for a in item['track']['artists']],
'uri': item['track']['uri']
})
if not result['next']:
break
offset += limit
return tracks
Playback control allows the system to start playing playlists on the user's active Spotify device:
def play_playlist(self, playlist_uri, device_id=None):
"""
Start playing a playlist on user's active device
Args:
playlist_uri: Spotify playlist URI
device_id: Optional specific device ID to play on
"""
payload = {'context_uri': playlist_uri}
params = {}
if device_id:
params['device_id'] = device_id
self._make_request(
'PUT',
'/me/player/play',
params=params,
json=payload
)
def get_current_user(self):
"""
Get current user's profile information
Returns:
Dictionary with user id, display_name, etc.
"""
result = self._make_request('GET', '/me')
return {
'id': result['id'],
'display_name': result['display_name'],
'email': result.get('email')
}
This Spotify client covers all operations the playlist management system needs, handling authentication and token refresh transparently. Note that it does not yet handle rate limiting: production use should retry requests that return HTTP 429, honoring the Retry-After header.
PLAYLIST MANAGEMENT SYSTEM
The playlist management system coordinates between local storage and Spotify cloud storage. It maintains metadata about playlists locally to enable quick lookups and matching against user requests. When a user asks for a playlist, the system first checks if a matching playlist exists locally before creating a new one.
The playlist manager maintains a local database of playlist metadata:
class PlaylistManager:
"""Manages playlists both locally and on Spotify"""
def __init__(self, storage_path, spotify_client):
"""
Initialize playlist manager
Args:
storage_path: Path to local storage directory
spotify_client: Initialized SpotifyClient instance
"""
import os
self.storage_path = storage_path
self.spotify_client = spotify_client
self.metadata_file = os.path.join(storage_path, 'playlists.json')
self.playlists = self._load_metadata()
def _load_metadata(self):
"""Load playlist metadata from local storage"""
import json
import os
if os.path.exists(self.metadata_file):
with open(self.metadata_file, 'r') as f:
return json.load(f)
return {}
def _save_metadata(self):
"""Save playlist metadata to local storage"""
import json
import os
os.makedirs(self.storage_path, exist_ok=True)
with open(self.metadata_file, 'w') as f:
json.dump(self.playlists, f, indent=2)
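`_save_metadata` rewrites playlists.json in place, so a crash mid-write can leave a truncated file. A hedged alternative using the write-to-temp-then-rename pattern (a standalone sketch, not part of the class above):

```python
import json
import os
import tempfile

def save_json_atomic(path, data):
    """Write JSON via a temporary file and os.replace, so a crash
    mid-write never leaves a truncated file at `path`."""
    directory = os.path.dirname(path) or '.'
    os.makedirs(directory, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix='.tmp')
    try:
        with os.fdopen(fd, 'w') as f:
            json.dump(data, f, indent=2)
        os.replace(tmp_path, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```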
When creating a playlist, the system extracts attributes from the user's natural language request and stores them as metadata. This enables intelligent matching later:
def create_playlist(self, name, description, attributes, track_uris):
"""
Create a new playlist with metadata
Args:
name: Playlist name
description: Playlist description
attributes: Dictionary of playlist attributes (genre, mood, era, etc.)
track_uris: List of Spotify track URIs to add
Returns:
Dictionary with playlist information
"""
import uuid
from datetime import datetime
# Get current user
user = self.spotify_client.get_current_user()
# Create playlist on Spotify
spotify_playlist = self.spotify_client.create_playlist(
user['id'],
name,
description=description
)
# Add tracks to playlist
if track_uris:
self.spotify_client.add_tracks_to_playlist(
spotify_playlist['id'],
track_uris
)
# Store metadata locally
playlist_id = str(uuid.uuid4())
self.playlists[playlist_id] = {
'id': playlist_id,
'spotify_id': spotify_playlist['id'],
'name': name,
'description': description,
'attributes': attributes,
'track_count': len(track_uris),
'created_at': datetime.now().isoformat(),
'updated_at': datetime.now().isoformat(),
'uri': spotify_playlist['uri']
}
self._save_metadata()
return self.playlists[playlist_id]
Finding existing playlists that match user requests requires comparing attributes. The system uses a scoring mechanism to determine how well each playlist matches the requested criteria:
def find_matching_playlists(self, attributes):
"""
Find existing playlists that match given attributes
Args:
attributes: Dictionary of desired attributes (genre, mood, era, etc.)
Returns:
List of matching playlists sorted by match score
"""
matches = []
for playlist_id, playlist in self.playlists.items():
score = self._calculate_match_score(
playlist['attributes'],
attributes
)
if score > 0:
matches.append({
'playlist': playlist,
'score': score
})
# Sort by score descending
matches.sort(key=lambda x: x['score'], reverse=True)
return [m['playlist'] for m in matches]
def _calculate_match_score(self, playlist_attrs, requested_attrs):
"""
Calculate how well playlist attributes match requested attributes
Args:
playlist_attrs: Dictionary of playlist attributes
requested_attrs: Dictionary of requested attributes
Returns:
Float score between 0 and 1
"""
if not requested_attrs:
return 0
matching_attrs = 0
total_attrs = len(requested_attrs)
for key, value in requested_attrs.items():
if key in playlist_attrs:
playlist_value = playlist_attrs[key]
# Handle different value types
if isinstance(value, list) and isinstance(playlist_value, list):
# Check for overlap in lists
if set(value) & set(playlist_value):
matching_attrs += 1
elif isinstance(value, str) and isinstance(playlist_value, str):
# Case-insensitive string comparison
if value.lower() == playlist_value.lower():
matching_attrs += 1
elif value == playlist_value:
matching_attrs += 1
return matching_attrs / total_attrs if total_attrs > 0 else 0
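The scoring rule is easy to exercise in isolation. A standalone version of the same logic (duplicated here so it can run outside the class):

```python
def match_score(playlist_attrs, requested_attrs):
    """Fraction of requested attributes the playlist satisfies,
    mirroring PlaylistManager._calculate_match_score."""
    if not requested_attrs:
        return 0
    matching = 0
    for key, value in requested_attrs.items():
        if key not in playlist_attrs:
            continue
        have = playlist_attrs[key]
        if isinstance(value, list) and isinstance(have, list):
            if set(value) & set(have):       # any overlap counts
                matching += 1
        elif isinstance(value, str) and isinstance(have, str):
            if value.lower() == have.lower():  # case-insensitive
                matching += 1
        elif value == have:
            matching += 1
    return matching / len(requested_attrs)
```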
Modifying existing playlists requires updating both Spotify and local metadata:
def add_tracks_to_playlist(self, playlist_id, track_uris):
"""
Add tracks to an existing playlist
Args:
playlist_id: Local playlist ID
track_uris: List of Spotify track URIs to add
"""
from datetime import datetime
if playlist_id not in self.playlists:
raise ValueError(f"Playlist {playlist_id} not found")
playlist = self.playlists[playlist_id]
# Add to Spotify
self.spotify_client.add_tracks_to_playlist(
playlist['spotify_id'],
track_uris
)
# Update metadata
playlist['track_count'] += len(track_uris)
playlist['updated_at'] = datetime.now().isoformat()
self._save_metadata()
def remove_tracks_from_playlist(self, playlist_id, track_uris):
"""
Remove tracks from an existing playlist
Args:
playlist_id: Local playlist ID
track_uris: List of Spotify track URIs to remove
"""
from datetime import datetime
if playlist_id not in self.playlists:
raise ValueError(f"Playlist {playlist_id} not found")
playlist = self.playlists[playlist_id]
# Remove from Spotify
self.spotify_client.remove_tracks_from_playlist(
playlist['spotify_id'],
track_uris
)
# Update metadata
playlist['track_count'] -= len(track_uris)
playlist['updated_at'] = datetime.now().isoformat()
self._save_metadata()
Merging playlists combines tracks from multiple playlists into a new one:
def merge_playlists(self, playlist_ids, new_name, new_description=''):
"""
Merge multiple playlists into a new playlist
Args:
playlist_ids: List of local playlist IDs to merge
new_name: Name for the merged playlist
new_description: Description for the merged playlist
Returns:
Dictionary with new playlist information
"""
all_track_uris = []
merged_attributes = {}
# Collect tracks and merge attributes
for playlist_id in playlist_ids:
if playlist_id not in self.playlists:
continue
playlist = self.playlists[playlist_id]
# Get tracks from Spotify
tracks = self.spotify_client.get_playlist_tracks(
playlist['spotify_id']
)
all_track_uris.extend([t['uri'] for t in tracks])
# Merge attributes
for key, value in playlist['attributes'].items():
if key not in merged_attributes:
merged_attributes[key] = []
if isinstance(value, list):
merged_attributes[key].extend(value)
else:
merged_attributes[key].append(value)
# Remove duplicates
all_track_uris = list(set(all_track_uris))
# Normalize merged attributes
for key in merged_attributes:
merged_attributes[key] = list(set(merged_attributes[key]))
# Create new playlist
return self.create_playlist(
new_name,
new_description,
merged_attributes,
all_track_uris
)
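One subtlety above: `list(set(all_track_uris))` removes duplicates but also scrambles track order. When ordering should survive the merge, `dict.fromkeys` deduplicates while keeping first-seen order:

```python
def dedupe_preserving_order(uris):
    """Drop duplicate URIs while keeping first-seen order
    (dicts preserve insertion order in Python 3.7+)."""
    return list(dict.fromkeys(uris))
```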
The playlist manager also handles synchronization with Spotify to ensure local metadata stays current:
def sync_with_spotify(self):
"""
Synchronize local playlist metadata with Spotify
Updates track counts and detects deleted playlists
"""
user = self.spotify_client.get_current_user()
spotify_playlists = self.spotify_client.get_user_playlists(user['id'])
spotify_ids = {p['id'] for p in spotify_playlists}
# Update existing playlists and remove deleted ones
playlists_to_remove = []
for playlist_id, playlist in self.playlists.items():
if playlist['spotify_id'] not in spotify_ids:
playlists_to_remove.append(playlist_id)
else:
# Update track count
spotify_playlist = next(
p for p in spotify_playlists
if p['id'] == playlist['spotify_id']
)
playlist['track_count'] = spotify_playlist['track_count']
for playlist_id in playlists_to_remove:
del self.playlists[playlist_id]
self._save_metadata()
This playlist management layer provides a robust foundation for the LLM to interact with playlists through high-level operations.
TOOL SYSTEM DESIGN
The tool system bridges the LLM and the playlist management functionality. Tools are defined in a format that the LLM can understand, and tool calls from the LLM are executed by the application layer. Each tool corresponds to a specific operation like creating a playlist, searching for songs, or modifying an existing playlist.
The tool definitions follow the OpenAI function calling format, which is widely supported:
def get_tool_definitions():
"""
Return list of tool definitions for the LLM
Returns:
List of tool definition dictionaries
"""
return [
{
'type': 'function',
'function': {
'name': 'search_songs',
'description': 'Search for songs on Spotify based on criteria like genre, mood, artist, era, language, or style',
'parameters': {
'type': 'object',
'properties': {
'genre': {
'type': 'string',
'description': 'Musical genre (e.g., rock, jazz, hip-hop)'
},
'mood': {
'type': 'string',
'description': 'Mood or feeling (e.g., happy, sad, energetic, calm)'
},
'artist': {
'type': 'string',
'description': 'Artist or band name'
},
'era': {
'type': 'string',
'description': 'Time period (e.g., 1980s, 1990-1995, 2000s)'
},
'language': {
'type': 'string',
'description': 'Language of lyrics'
},
'style': {
'type': 'string',
'description': 'Musical style or subgenre'
},
'limit': {
'type': 'integer',
'description': 'Maximum number of songs to return (default 20)'
}
},
'required': []
}
}
},
{
'type': 'function',
'function': {
'name': 'create_playlist',
'description': 'Create a new playlist with specified songs',
'parameters': {
'type': 'object',
'properties': {
'name': {
'type': 'string',
'description': 'Name for the playlist'
},
'description': {
'type': 'string',
'description': 'Description of the playlist'
},
'track_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of Spotify track IDs to include'
},
'attributes': {
'type': 'object',
'description': 'Attributes describing the playlist (genre, mood, etc.)'
}
},
'required': ['name', 'track_ids']
}
}
},
{
'type': 'function',
'function': {
'name': 'find_playlists',
'description': 'Find existing playlists matching specified criteria',
'parameters': {
'type': 'object',
'properties': {
'attributes': {
'type': 'object',
'description': 'Attributes to match (genre, mood, artist, etc.)'
}
},
'required': ['attributes']
}
}
},
{
'type': 'function',
'function': {
'name': 'add_songs_to_playlist',
'description': 'Add songs to an existing playlist',
'parameters': {
'type': 'object',
'properties': {
'playlist_id': {
'type': 'string',
'description': 'ID of the playlist to modify'
},
'track_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of Spotify track IDs to add'
}
},
'required': ['playlist_id', 'track_ids']
}
}
},
{
'type': 'function',
'function': {
'name': 'remove_songs_from_playlist',
'description': 'Remove songs from an existing playlist',
'parameters': {
'type': 'object',
'properties': {
'playlist_id': {
'type': 'string',
'description': 'ID of the playlist to modify'
},
'track_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of Spotify track IDs to remove'
}
},
'required': ['playlist_id', 'track_ids']
}
}
},
{
'type': 'function',
'function': {
'name': 'merge_playlists',
'description': 'Merge multiple playlists into a new playlist',
'parameters': {
'type': 'object',
'properties': {
'playlist_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of playlist IDs to merge'
},
'name': {
'type': 'string',
'description': 'Name for the merged playlist'
},
'description': {
'type': 'string',
'description': 'Description for the merged playlist'
}
},
'required': ['playlist_ids', 'name']
}
}
},
{
'type': 'function',
'function': {
'name': 'play_playlist',
'description': 'Start playing a playlist on Spotify',
'parameters': {
'type': 'object',
'properties': {
'playlist_id': {
'type': 'string',
'description': 'ID of the playlist to play'
}
},
'required': ['playlist_id']
}
}
}
]
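In the OpenAI function-calling format, the model returns each call's arguments as a JSON string, which must be decoded before dispatching to the executor below. A small adapter, assuming that message shape:

```python
import json

def parse_tool_call(tool_call):
    """Extract (name, arguments) from an OpenAI-style tool call.
    The arguments field arrives as a JSON string and is decoded here."""
    function = tool_call['function']
    arguments = function.get('arguments') or '{}'
    if isinstance(arguments, str):
        arguments = json.loads(arguments)
    return function['name'], arguments
```

Typical usage is `name, args = parse_tool_call(call)` followed by `executor.execute_tool(name, args)`.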
The tool executor receives tool calls from the LLM and executes the corresponding operations:
class ToolExecutor:
"""Executes tool calls from the LLM"""
def __init__(self, playlist_manager, spotify_client):
"""
Initialize tool executor
Args:
playlist_manager: PlaylistManager instance
spotify_client: SpotifyClient instance
"""
self.playlist_manager = playlist_manager
self.spotify_client = spotify_client
def execute_tool(self, tool_name, arguments):
"""
Execute a tool call
Args:
tool_name: Name of the tool to execute
arguments: Dictionary of arguments for the tool
Returns:
Result of the tool execution
"""
if tool_name == 'search_songs':
return self._search_songs(**arguments)
elif tool_name == 'create_playlist':
return self._create_playlist(**arguments)
elif tool_name == 'find_playlists':
return self._find_playlists(**arguments)
elif tool_name == 'add_songs_to_playlist':
return self._add_songs_to_playlist(**arguments)
elif tool_name == 'remove_songs_from_playlist':
return self._remove_songs_from_playlist(**arguments)
elif tool_name == 'merge_playlists':
return self._merge_playlists(**arguments)
elif tool_name == 'play_playlist':
return self._play_playlist(**arguments)
else:
raise ValueError(f"Unknown tool: {tool_name}")
Each tool implementation translates LLM parameters into appropriate API calls:
def _search_songs(self, genre=None, mood=None, artist=None, era=None,
language=None, style=None, limit=20):
"""
Search for songs based on criteria
Returns:
Dictionary with search results
"""
# Build Spotify search query
query_parts = []
if genre:
query_parts.append(f'genre:{genre}')
if artist:
query_parts.append(f'artist:{artist}')
if era:
# Parse era into year range
if '-' in era:
start, end = era.split('-')
query_parts.append(f'year:{start}-{end}')
elif era.endswith('s'):
# Handle decades like "1980s"
decade = era[:-1]
query_parts.append(f'year:{decade}-{int(decade)+9}')
else:
query_parts.append(f'year:{era}')
# For mood, style, and language, add as general search terms
if mood:
query_parts.append(mood)
if style:
query_parts.append(style)
if language:
query_parts.append(language)
query = ' '.join(query_parts) if query_parts else 'popular'
tracks = self.spotify_client.search_tracks(query, limit=limit)
return {
'success': True,
'tracks': tracks,
'count': len(tracks)
}
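To see how a natural-language request turns into a Spotify search string, here is a standalone sketch of the query-building logic above (using a strict `endswith` check for decade strings):

```python
def build_search_query(genre=None, mood=None, artist=None, era=None):
    """Standalone mirror of the query construction in _search_songs."""
    parts = []
    if genre:
        parts.append(f'genre:{genre}')
    if artist:
        parts.append(f'artist:{artist}')
    if era:
        if '-' in era:
            # Explicit range like "1990-1995"
            start, end = era.split('-')
            parts.append(f'year:{start}-{end}')
        elif era.endswith('s'):
            # Decade like "1980s" becomes the range 1980-1989
            decade = era[:-1]
            parts.append(f'year:{decade}-{int(decade) + 9}')
        else:
            parts.append(f'year:{era}')
    # Mood is a free-text term rather than a Spotify field filter
    if mood:
        parts.append(mood)
    return ' '.join(parts) if parts else 'popular'
```

For the article's running example, `build_search_query(genre='rock', mood='upbeat', era='1980s')` yields `'genre:rock year:1980-1989 upbeat'`.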
def _create_playlist(self, name, track_ids, description='', attributes=None):
"""
Create a new playlist
Returns:
Dictionary with playlist information
"""
if attributes is None:
attributes = {}
# Convert track IDs to URIs
track_uris = [f'spotify:track:{tid}' for tid in track_ids]
playlist = self.playlist_manager.create_playlist(
name,
description,
attributes,
track_uris
)
return {
'success': True,
'playlist': playlist
}
def _find_playlists(self, attributes):
"""
Find playlists matching attributes
Returns:
Dictionary with matching playlists
"""
playlists = self.playlist_manager.find_matching_playlists(attributes)
return {
'success': True,
'playlists': playlists,
'count': len(playlists)
}
def _add_songs_to_playlist(self, playlist_id, track_ids):
"""
Add songs to a playlist
Returns:
Dictionary with success status
"""
track_uris = [f'spotify:track:{tid}' for tid in track_ids]
self.playlist_manager.add_tracks_to_playlist(
playlist_id,
track_uris
)
return {
'success': True,
'message': f'Added {len(track_ids)} tracks to playlist'
}
def _remove_songs_from_playlist(self, playlist_id, track_ids):
"""
Remove songs from a playlist
Returns:
Dictionary with success status
"""
track_uris = [f'spotify:track:{tid}' for tid in track_ids]
self.playlist_manager.remove_tracks_from_playlist(
playlist_id,
track_uris
)
return {
'success': True,
'message': f'Removed {len(track_ids)} tracks from playlist'
}
def _merge_playlists(self, playlist_ids, name, description=''):
"""
Merge multiple playlists
Returns:
Dictionary with new playlist information
"""
playlist = self.playlist_manager.merge_playlists(
playlist_ids,
name,
description
)
return {
'success': True,
'playlist': playlist
}
def _play_playlist(self, playlist_id):
"""
Play a playlist on Spotify
Returns:
Dictionary with success status
"""
playlist = self.playlist_manager.playlists.get(playlist_id)
if not playlist:
return {
'success': False,
'error': 'Playlist not found'
}
self.spotify_client.play_playlist(playlist['uri'])
return {
'success': True,
'message': f'Playing playlist: {playlist["name"]}'
}
The application controller orchestrates the conversation flow, handling tool calls and managing context:
import json

class ApplicationController:
"""Main application controller coordinating all components"""
def __init__(self, config):
"""
Initialize application controller
Args:
config: Configuration dictionary
"""
self.config = config
# Initialize components
self.spotify_client = SpotifyClient(
config['spotify']['client_id'],
config['spotify']['client_secret'],
config['spotify']['redirect_uri']
)
self.playlist_manager = PlaylistManager(
config['storage_path'],
self.spotify_client
)
self.llm_manager = LLMManager(config['llm'])
self.tool_executor = ToolExecutor(
self.playlist_manager,
self.spotify_client
)
self.current_context = {}
def process_message(self, user_message):
"""
Process a user message and return response
Args:
user_message: String containing user's message
Returns:
String response to display to user
"""
# Get tool definitions
tools = get_tool_definitions()
# Get LLM response
response = self.llm_manager.chat(user_message, tools=tools)
# Handle tool calls
if response.get('tool_calls'):
tool_results = []
for tool_call in response['tool_calls']:
tool_name = tool_call['function']['name']
arguments = json.loads(tool_call['function']['arguments'])
# Execute tool
result = self.tool_executor.execute_tool(
tool_name,
arguments
)
tool_results.append({
'tool': tool_name,
'result': result
})
# Update context
self._update_context(tool_name, arguments, result)
# Get a final natural-language response from the LLM; tool
# definitions are omitted here so the model summarizes the results
# instead of issuing another round of tool calls
tool_message = self._format_tool_results(tool_results)
final_response = self.llm_manager.chat(tool_message)
return final_response['content']
return response['content']
def _update_context(self, tool_name, arguments, result):
"""Update conversation context based on tool execution"""
if tool_name == 'create_playlist':
self.current_context['last_playlist'] = result['playlist']['id']
elif tool_name == 'find_playlists':
if result['playlists']:
self.current_context['found_playlists'] = [
p['id'] for p in result['playlists']
]
def _format_tool_results(self, tool_results):
"""Format tool results for LLM consumption"""
import json
formatted = "Tool execution results:\n\n"
for tr in tool_results:
formatted += f"Tool: {tr['tool']}\n"
formatted += f"Result: {json.dumps(tr['result'], indent=2)}\n\n"
return formatted
This tool system provides a clean interface between the LLM's understanding and the actual playlist operations.
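To make the data flow concrete, here is a sketch of what the controller receives and decodes: in the OpenAI-style response format the function arguments arrive as a JSON string, not a dict, which is why `process_message` calls `json.loads` before dispatch (the values below are illustrative):

```python
import json

# An LLM response as process_message receives it (illustrative values)
response = {
    'content': None,
    'tool_calls': [
        {
            'function': {
                'name': 'create_playlist',
                # Arguments are a JSON *string*, mirroring the API format
                'arguments': json.dumps({
                    'name': '80s Rock Hits',
                    'track_ids': ['track_id_1', 'track_id_2'],
                })
            }
        }
    ]
}

for tool_call in response['tool_calls']:
    tool_name = tool_call['function']['name']
    # Decode the argument string back into a dict before dispatch
    arguments = json.loads(tool_call['function']['arguments'])
```

The local provider produces the same shape by `json.dumps`-ing the arguments it extracts, so the controller handles both providers identically.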
WEB USER INTERFACE
The web user interface provides an accessible and attractive way for users to interact with the chatbot. The interface uses modern web technologies including HTML5, CSS3, and JavaScript for the frontend, with a FastAPI backend serving the application and handling WebSocket connections for real-time chat.
The backend server uses FastAPI to serve the web interface and handle API requests:
from fastapi import FastAPI, WebSocket, Request
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import RedirectResponse
import json
app = FastAPI(title="Spotify Playlist Chatbot")
# Mount static files and templates
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
# Global application controller instance
controller = None
@app.on_event("startup")
async def startup_event():
"""Initialize application on startup"""
global controller
# Load configuration
with open('config.json', 'r') as f:
config = json.load(f)
controller = ApplicationController(config)
@app.get("/")
async def home(request: Request):
"""Serve the main chat interface"""
return templates.TemplateResponse(
"index.html",
{"request": request}
)
@app.get("/auth/spotify")
async def spotify_auth():
"""Redirect to Spotify authorization"""
auth_url = controller.spotify_client.get_authorization_url()
return RedirectResponse(auth_url)
@app.get("/auth/callback")
async def spotify_callback(code: str):
"""Handle Spotify OAuth callback"""
controller.spotify_client.exchange_code_for_token(code)
return RedirectResponse("/")
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
"""WebSocket endpoint for real-time chat"""
await websocket.accept()
try:
while True:
# Receive message from client
data = await websocket.receive_text()
message_data = json.loads(data)
user_message = message_data.get('message', '')
# Process the message; note this call is synchronous, so a long
# LLM generation blocks the event loop (production code would
# offload it to a worker thread)
response = controller.process_message(user_message)
# Send response back to client
await websocket.send_text(json.dumps({
'type': 'message',
'content': response
}))
except Exception as e:
print(f"WebSocket error: {e}")
finally:
await websocket.close()
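The protocol over this socket is deliberately small: each direction carries a single JSON object. Sketching both frames makes the client code below easier to follow (message text is illustrative):

```python
import json

# Client -> server frame: one JSON object with a 'message' field
outbound = json.dumps({'message': 'Create a playlist of upbeat 80s rock songs'})

# Server -> client frame: a type tag plus the assistant's reply text
inbound = json.dumps({'type': 'message', 'content': 'Created playlist "80s Rock Hits".'})

# The client dispatches on the 'type' field, leaving room for future
# frame types such as progress or error notifications
parsed = json.loads(inbound)
```

Keeping a `type` field on server frames means new frame kinds (typing status, errors, playlist previews) can be added later without breaking existing clients.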
The HTML template provides the structure for the chat interface:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Spotify Playlist Chatbot</title>
<link rel="stylesheet" href="/static/style.css">
</head>
<body>
<div class="container">
<div class="header">
<h1>Spotify Playlist Assistant</h1>
<p class="subtitle">Create and manage playlists with natural language</p>
</div>
<div class="chat-container" id="chatContainer">
<div class="welcome-message">
<h2>Welcome!</h2>
<p>I can help you create, modify, and manage Spotify playlists. Try asking me to:</p>
<ul>
<li>Create a playlist of upbeat 80s rock songs</li>
<li>Add some Beatles songs to my playlist</li>
<li>Find my jazz playlists</li>
<li>Merge my workout playlists</li>
</ul>
</div>
</div>
<div class="input-container">
<input
type="text"
id="messageInput"
placeholder="Type your message here..."
autocomplete="off"
>
<button id="sendButton">Send</button>
</div>
</div>
<script src="/static/app.js"></script>
</body>
</html>
The CSS stylesheet creates an attractive and responsive design:
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
display: flex;
justify-content: center;
align-items: center;
padding: 20px;
}
.container {
width: 100%;
max-width: 800px;
background: white;
border-radius: 20px;
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
display: flex;
flex-direction: column;
height: 90vh;
max-height: 800px;
}
.header {
padding: 30px;
background: linear-gradient(135deg, #1DB954 0%, #1ed760 100%);
color: white;
border-radius: 20px 20px 0 0;
text-align: center;
}
.header h1 {
font-size: 28px;
margin-bottom: 8px;
}
.subtitle {
font-size: 14px;
opacity: 0.9;
}
.chat-container {
flex: 1;
overflow-y: auto;
padding: 20px;
background: #f5f5f5;
}
.welcome-message {
background: white;
padding: 20px;
border-radius: 10px;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
}
.welcome-message h2 {
color: #1DB954;
margin-bottom: 15px;
}
.welcome-message ul {
margin-left: 20px;
margin-top: 10px;
}
.welcome-message li {
margin: 8px 0;
color: #555;
}
.message {
margin: 15px 0;
display: flex;
align-items: flex-start;
}
.message.user {
justify-content: flex-end;
}
.message-content {
max-width: 70%;
padding: 12px 18px;
border-radius: 18px;
line-height: 1.5;
}
.message.user .message-content {
background: #1DB954;
color: white;
border-bottom-right-radius: 4px;
}
.message.assistant .message-content {
background: white;
color: #333;
border-bottom-left-radius: 4px;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.1);
}
.input-container {
padding: 20px;
background: white;
border-radius: 0 0 20px 20px;
display: flex;
gap: 10px;
}
#messageInput {
flex: 1;
padding: 12px 18px;
border: 2px solid #e0e0e0;
border-radius: 25px;
font-size: 15px;
outline: none;
transition: border-color 0.3s;
}
#messageInput:focus {
border-color: #1DB954;
}
#sendButton {
padding: 12px 30px;
background: #1DB954;
color: white;
border: none;
border-radius: 25px;
font-size: 15px;
font-weight: 600;
cursor: pointer;
transition: background 0.3s;
}
#sendButton:hover {
background: #1ed760;
}
#sendButton:active {
transform: scale(0.98);
}
.typing-indicator {
display: flex;
gap: 5px;
padding: 15px;
}
.typing-indicator span {
width: 8px;
height: 8px;
background: #999;
border-radius: 50%;
animation: typing 1.4s infinite;
}
.typing-indicator span:nth-child(2) {
animation-delay: 0.2s;
}
.typing-indicator span:nth-child(3) {
animation-delay: 0.4s;
}
@keyframes typing {
0%, 60%, 100% {
transform: translateY(0);
}
30% {
transform: translateY(-10px);
}
}
The JavaScript handles WebSocket communication and UI updates:
class ChatApp {
constructor() {
this.ws = null;
this.messageInput = document.getElementById('messageInput');
this.sendButton = document.getElementById('sendButton');
this.chatContainer = document.getElementById('chatContainer');
this.init();
}
init() {
// Connect to WebSocket
this.connectWebSocket();
// Event listeners
this.sendButton.addEventListener('click', () => this.sendMessage());
this.messageInput.addEventListener('keypress', (e) => {
if (e.key === 'Enter') {
this.sendMessage();
}
});
}
connectWebSocket() {
const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
const wsUrl = `${protocol}//${window.location.host}/ws`;
this.ws = new WebSocket(wsUrl);
this.ws.onopen = () => {
console.log('WebSocket connected');
};
this.ws.onmessage = (event) => {
const data = JSON.parse(event.data);
this.handleMessage(data);
};
this.ws.onerror = (error) => {
console.error('WebSocket error:', error);
};
this.ws.onclose = () => {
console.log('WebSocket disconnected');
setTimeout(() => this.connectWebSocket(), 3000);
};
}
sendMessage() {
const message = this.messageInput.value.trim();
if (!message) {
return;
}
// Display user message
this.addMessage(message, 'user');
// Send to server
this.ws.send(JSON.stringify({
message: message
}));
// Clear input
this.messageInput.value = '';
// Show typing indicator
this.showTypingIndicator();
}
handleMessage(data) {
// Remove typing indicator
this.removeTypingIndicator();
if (data.type === 'message') {
this.addMessage(data.content, 'assistant');
}
}
addMessage(content, role) {
const messageDiv = document.createElement('div');
messageDiv.className = `message ${role}`;
const contentDiv = document.createElement('div');
contentDiv.className = 'message-content';
contentDiv.textContent = content;
messageDiv.appendChild(contentDiv);
this.chatContainer.appendChild(messageDiv);
// Scroll to bottom
this.chatContainer.scrollTop = this.chatContainer.scrollHeight;
}
showTypingIndicator() {
const indicator = document.createElement('div');
indicator.className = 'message assistant';
indicator.id = 'typingIndicator';
const content = document.createElement('div');
content.className = 'message-content typing-indicator';
content.innerHTML = '<span></span><span></span><span></span>';
indicator.appendChild(content);
this.chatContainer.appendChild(indicator);
this.chatContainer.scrollTop = this.chatContainer.scrollHeight;
}
removeTypingIndicator() {
const indicator = document.getElementById('typingIndicator');
if (indicator) {
indicator.remove();
}
}
}
// Initialize app when DOM is ready
document.addEventListener('DOMContentLoaded', () => {
new ChatApp();
});
Together, the template, stylesheet, and client script provide a responsive chat interface that works in any modern desktop or mobile browser.
CONFIGURATION AND DEPLOYMENT
The system uses a JSON configuration file to manage all settings, allowing easy customization without code changes. The configuration includes LLM settings, Spotify credentials, storage paths, and other parameters:
{
"llm": {
"provider_type": "local",
"model_name": "mistralai/Mistral-7B-Instruct-v0.2",
"temperature": 0.7,
"max_tokens": 2000
},
"spotify": {
"client_id": "your_spotify_client_id",
"client_secret": "your_spotify_client_secret",
"redirect_uri": "http://localhost:8000/auth/callback"
},
"storage_path": "./data/playlists",
"server": {
"host": "0.0.0.0",
"port": 8000
}
}
For remote LLM providers, the configuration would look different:
{
"llm": {
"provider_type": "remote",
"model_name": "gpt-4",
"api_key": "your_openai_api_key",
"api_base": "https://api.openai.com/v1",
"temperature": 0.7,
"max_tokens": 2000
},
"spotify": {
"client_id": "your_spotify_client_id",
"client_secret": "your_spotify_client_secret",
"redirect_uri": "http://localhost:8000/auth/callback"
},
"storage_path": "./data/playlists",
"server": {
"host": "0.0.0.0",
"port": 8000
}
}
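Because a missing section in this file only surfaces as a `KeyError` deep inside startup, it is worth failing fast with a clear message. A small validation sketch (`load_config` and `REQUIRED_KEYS` are hypothetical helpers, not part of the files shown elsewhere):

```python
import json

# Top-level sections every configuration variant above must provide
REQUIRED_KEYS = {'llm', 'spotify', 'storage_path', 'server'}

def load_config(text):
    """Parse config JSON and fail fast on missing top-level sections."""
    config = json.loads(text)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config is missing sections: {sorted(missing)}")
    return config
```

Calling `load_config` at startup turns a cryptic runtime failure into an immediate, actionable error naming the missing sections.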
The main entry point loads the configuration and starts the server:
import uvicorn
import json
if __name__ == "__main__":
# Load configuration
with open('config.json', 'r') as f:
config = json.load(f)
# Start server
uvicorn.run(
"server:app",
host=config['server']['host'],
port=config['server']['port'],
reload=True
)
For deployment, the system requires several dependencies which can be managed through a requirements file:
fastapi==0.104.1
uvicorn==0.24.0
websockets==12.0
requests==2.31.0
torch==2.1.0
transformers==4.35.0
accelerate==0.24.1
jinja2==3.1.2
python-multipart==0.0.6
Installation and setup involves creating a virtual environment, installing dependencies, and configuring Spotify credentials:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Users must register a Spotify application at the Spotify Developer Dashboard to obtain client credentials. The redirect URI must match the one specified in the configuration file.
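The registered redirect URI is carried verbatim in the authorization URL, and Spotify compares it character-for-character against the dashboard value. A sketch of how the client assembles that URL (the credential value here is a placeholder):

```python
from urllib.parse import urlencode, urlparse, parse_qs

params = {
    'client_id': 'example_client_id',  # placeholder credential
    'response_type': 'code',
    'redirect_uri': 'http://localhost:8000/auth/callback',
    'scope': 'playlist-read-private playlist-modify-private',
}
# urlencode percent-encodes the spaces in the scope list, which the
# authorization endpoint accepts
auth_url = f"https://accounts.spotify.com/authorize?{urlencode(params)}"
```

If authorization fails with an "invalid redirect URI" error, the first thing to check is that this parameter matches the dashboard entry exactly, including scheme, port, and trailing path.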
RUNNING EXAMPLE - COMPLETE PRODUCTION CODE
The following section contains the complete, production-ready implementation of the Spotify Playlist Chatbot. This code is fully functional and includes all necessary components without mocks or simulations.
# File: llm_provider.py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import requests
import json
import re
class LLMProvider:
"""Base interface for LLM providers"""
def __init__(self, config):
self.config = config
self.model_name = config.get('model_name')
self.temperature = config.get('temperature', 0.7)
self.max_tokens = config.get('max_tokens', 2000)
def generate_response(self, messages, tools=None):
raise NotImplementedError
def supports_tools(self):
raise NotImplementedError
class LocalLLMProvider(LLMProvider):
"""Local LLM provider with GPU acceleration"""
def __init__(self, config):
super().__init__(config)
self.device = self._detect_gpu()
self.model = self._load_model()
def _detect_gpu(self):
# ROCm builds of PyTorch report availability through the CUDA API,
# so check for HIP before plain CUDA
if getattr(torch.version, 'hip', None) is not None and torch.cuda.is_available():
return 'rocm'
if torch.cuda.is_available():
return 'cuda'
if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
return 'mps'
return 'cpu'
def _load_model(self):
tokenizer = AutoTokenizer.from_pretrained(self.model_name)
if self.device == 'cuda':
model = AutoModelForCausalLM.from_pretrained(
self.model_name,
device_map='auto',
torch_dtype=torch.float16
)
elif self.device == 'mps':
model = AutoModelForCausalLM.from_pretrained(
self.model_name,
torch_dtype=torch.float16
)
model = model.to('mps')
elif self.device == 'rocm':
model = AutoModelForCausalLM.from_pretrained(
self.model_name,
device_map='auto',
torch_dtype=torch.float16
)
else:
model = AutoModelForCausalLM.from_pretrained(self.model_name)
return {'model': model, 'tokenizer': tokenizer}
def generate_response(self, messages, tools=None):
system_prompt = self._build_system_prompt(tools)
formatted_messages = self._format_messages(messages, system_prompt)
inputs = self.model['tokenizer'](
formatted_messages,
return_tensors='pt'
).to(self.model['model'].device)
outputs = self.model['model'].generate(
**inputs,
max_new_tokens=self.max_tokens,
temperature=self.temperature,
do_sample=True,
pad_token_id=self.model['tokenizer'].eos_token_id
)
response_text = self.model['tokenizer'].decode(
outputs[0][inputs['input_ids'].shape[1]:],
skip_special_tokens=True
)
tool_calls = self._extract_tool_calls(response_text)
if tool_calls:
return {'content': None, 'tool_calls': tool_calls}
else:
return {'content': response_text, 'tool_calls': None}
def _build_system_prompt(self, tools):
base_prompt = "You are a helpful assistant for managing Spotify playlists."
if not tools:
return base_prompt
tools_desc = "\n\nYou have access to the following tools:\n\n"
for tool in tools:
func = tool['function']
tools_desc += f"Tool: {func['name']}\n"
tools_desc += f"Description: {func['description']}\n"
tools_desc += f"Parameters: {json.dumps(func['parameters'], indent=2)}\n\n"
tools_desc += (
"To use a tool, respond with JSON in this exact format:\n"
'{"tool": "tool_name", "arguments": {"param1": "value1"}}\n\n'
"Only use tools when necessary. Otherwise, respond naturally in plain text."
)
return base_prompt + tools_desc
def _format_messages(self, messages, system_prompt):
formatted = f"System: {system_prompt}\n\n"
for msg in messages:
role = msg['role'].capitalize()
content = msg['content']
formatted += f"{role}: {content}\n\n"
formatted += "Assistant: "
return formatted
def _extract_tool_calls(self, text):
# The "arguments" value is itself a JSON object, so the pattern must
# allow one level of nested braces
json_pattern = r'\{(?:[^{}]|\{[^{}]*\})*\}'
matches = re.findall(json_pattern, text)
tool_calls = []
for match in matches:
try:
data = json.loads(match)
if 'tool' in data and 'arguments' in data:
tool_calls.append({
'function': {
'name': data['tool'],
'arguments': json.dumps(data['arguments'])
}
})
except json.JSONDecodeError:
continue
return tool_calls if tool_calls else None
def supports_tools(self):
return True
class RemoteLLMProvider(LLMProvider):
"""Remote LLM provider using API services"""
def __init__(self, config):
super().__init__(config)
self.api_key = config.get('api_key')
self.api_base = config.get('api_base', 'https://api.openai.com/v1')
def generate_response(self, messages, tools=None):
headers = {
'Authorization': f'Bearer {self.api_key}',
'Content-Type': 'application/json'
}
payload = {
'model': self.model_name,
'messages': messages,
'temperature': self.temperature,
'max_tokens': self.max_tokens
}
if tools and self.supports_tools():
payload['tools'] = tools
payload['tool_choice'] = 'auto'
response = requests.post(
f'{self.api_base}/chat/completions',
headers=headers,
json=payload,
timeout=60
)
response.raise_for_status()
result = response.json()
message = result['choices'][0]['message']
return {
'content': message.get('content'),
'tool_calls': message.get('tool_calls')
}
def supports_tools(self):
return True
class LLMManager:
"""Manages LLM interactions and conversation state"""
def __init__(self, config):
self.config = config
self.provider = self._create_provider()
self.conversation_history = []
def _create_provider(self):
provider_type = self.config.get('provider_type', 'remote')
if provider_type == 'local':
return LocalLLMProvider(self.config)
elif provider_type == 'remote':
return RemoteLLMProvider(self.config)
else:
raise ValueError(f"Unknown provider type: {provider_type}")
def chat(self, user_message, tools=None):
self.conversation_history.append({
'role': 'user',
'content': user_message
})
response = self.provider.generate_response(
self.conversation_history,
tools=tools
)
if response['content']:
self.conversation_history.append({
'role': 'assistant',
'content': response['content']
})
return response
def reset_conversation(self):
self.conversation_history = []
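The trickiest part of the file above is `_extract_tool_calls`: because the `arguments` value is itself a JSON object, the regex must tolerate one level of nested braces or it will never match a real tool call. A standalone version of the same parsing step, runnable in isolation:

```python
import json
import re

# Match a JSON object allowing one level of nested braces, so the
# embedded "arguments" object is captured along with the outer object
TOOL_CALL_PATTERN = r'\{(?:[^{}]|\{[^{}]*\})*\}'

def extract_tool_calls(text):
    """Pull tool-call JSON objects out of free-form model output."""
    calls = []
    for match in re.findall(TOOL_CALL_PATTERN, text):
        try:
            data = json.loads(match)
        except json.JSONDecodeError:
            continue  # skip brace-delimited spans that are not valid JSON
        if 'tool' in data and 'arguments' in data:
            calls.append(data)
    return calls
```

For example, `extract_tool_calls('Sure! {"tool": "search_songs", "arguments": {"genre": "rock"}}')` returns the single decoded call, while plain prose yields an empty list.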
# File: spotify_client.py
import requests
from datetime import datetime, timedelta
from urllib.parse import urlencode
class SpotifyClient:
"""Wrapper for Spotify Web API operations"""
def __init__(self, client_id, client_secret, redirect_uri):
self.client_id = client_id
self.client_secret = client_secret
self.redirect_uri = redirect_uri
self.access_token = None
self.refresh_token = None
self.token_expiry = None
def get_authorization_url(self):
scopes = [
'playlist-read-private',
'playlist-modify-private',
'playlist-modify-public',
'user-library-read',
'user-read-playback-state',
'user-modify-playback-state'
]
params = {
'client_id': self.client_id,
'response_type': 'code',
'redirect_uri': self.redirect_uri,
'scope': ' '.join(scopes)
}
return f"https://accounts.spotify.com/authorize?{urlencode(params)}"
def exchange_code_for_token(self, code):
response = requests.post(
'https://accounts.spotify.com/api/token',
data={
'grant_type': 'authorization_code',
'code': code,
'redirect_uri': self.redirect_uri,
'client_id': self.client_id,
'client_secret': self.client_secret
},
timeout=30
)
response.raise_for_status()
token_data = response.json()
self.access_token = token_data['access_token']
self.refresh_token = token_data['refresh_token']
self.token_expiry = datetime.now() + timedelta(
seconds=token_data['expires_in']
)
def _ensure_valid_token(self):
if not self.token_expiry or datetime.now() >= self.token_expiry:
if not self.refresh_token:
raise ValueError("No refresh token available")
response = requests.post(
'https://accounts.spotify.com/api/token',
data={
'grant_type': 'refresh_token',
'refresh_token': self.refresh_token,
'client_id': self.client_id,
'client_secret': self.client_secret
},
timeout=30
)
response.raise_for_status()
token_data = response.json()
self.access_token = token_data['access_token']
self.token_expiry = datetime.now() + timedelta(
seconds=token_data['expires_in']
)
def _make_request(self, method, endpoint, **kwargs):
self._ensure_valid_token()
headers = kwargs.pop('headers', {})
headers['Authorization'] = f'Bearer {self.access_token}'
url = f'https://api.spotify.com/v1{endpoint}'
response = requests.request(
method,
url,
headers=headers,
timeout=30,
**kwargs
)
response.raise_for_status()
if response.content:
return response.json()
return None
def search_tracks(self, query, limit=20):
params = {
'q': query,
'type': 'track',
'limit': limit
}
result = self._make_request('GET', '/search', params=params)
tracks = []
for item in result['tracks']['items']:
tracks.append({
'id': item['id'],
'name': item['name'],
'artists': [artist['name'] for artist in item['artists']],
'album': item['album']['name'],
'duration_ms': item['duration_ms'],
'uri': item['uri']
})
return tracks
def create_playlist(self, user_id, name, description='', public=False):
payload = {
'name': name,
'description': description,
'public': public
}
result = self._make_request(
'POST',
f'/users/{user_id}/playlists',
json=payload
)
return {
'id': result['id'],
'name': result['name'],
'description': result['description'],
'uri': result['uri'],
'external_url': result['external_urls']['spotify']
}
def add_tracks_to_playlist(self, playlist_id, track_uris):
chunk_size = 100
for i in range(0, len(track_uris), chunk_size):
chunk = track_uris[i:i + chunk_size]
self._make_request(
'POST',
f'/playlists/{playlist_id}/tracks',
json={'uris': chunk}
)
def remove_tracks_from_playlist(self, playlist_id, track_uris):
tracks = [{'uri': uri} for uri in track_uris]
self._make_request(
'DELETE',
f'/playlists/{playlist_id}/tracks',
json={'tracks': tracks}
)
def get_user_playlists(self, user_id):
playlists = []
offset = 0
limit = 50
while True:
result = self._make_request(
'GET',
f'/users/{user_id}/playlists',
params={'offset': offset, 'limit': limit}
)
for item in result['items']:
playlists.append({
'id': item['id'],
'name': item['name'],
'description': item['description'],
'track_count': item['tracks']['total'],
'uri': item['uri']
})
if not result['next']:
break
offset += limit
return playlists
def get_playlist_tracks(self, playlist_id):
tracks = []
offset = 0
limit = 100
while True:
result = self._make_request(
'GET',
f'/playlists/{playlist_id}/tracks',
params={'offset': offset, 'limit': limit}
)
for item in result['items']:
if item['track']:
tracks.append({
'id': item['track']['id'],
'name': item['track']['name'],
'artists': [a['name'] for a in item['track']['artists']],
'uri': item['track']['uri']
})
if not result['next']:
break
offset += limit
return tracks
def play_playlist(self, playlist_uri, device_id=None):
payload = {'context_uri': playlist_uri}
params = {}
if device_id:
params['device_id'] = device_id
self._make_request(
'PUT',
'/me/player/play',
params=params,
json=payload
)
def get_current_user(self):
result = self._make_request('GET', '/me')
return {
'id': result['id'],
'display_name': result['display_name'],
'email': result.get('email')
}
# File: playlist_manager.py
import json
import os
import uuid
from datetime import datetime
class PlaylistManager:
"""Manages playlists both locally and on Spotify"""
def __init__(self, storage_path, spotify_client):
self.storage_path = storage_path
self.spotify_client = spotify_client
self.metadata_file = os.path.join(storage_path, 'playlists.json')
self.playlists = self._load_metadata()
def _load_metadata(self):
if os.path.exists(self.metadata_file):
with open(self.metadata_file, 'r') as f:
return json.load(f)
return {}
def _save_metadata(self):
os.makedirs(self.storage_path, exist_ok=True)
with open(self.metadata_file, 'w') as f:
json.dump(self.playlists, f, indent=2)
def create_playlist(self, name, description, attributes, track_uris):
user = self.spotify_client.get_current_user()
spotify_playlist = self.spotify_client.create_playlist(
user['id'],
name,
description=description
)
if track_uris:
self.spotify_client.add_tracks_to_playlist(
spotify_playlist['id'],
track_uris
)
playlist_id = str(uuid.uuid4())
self.playlists[playlist_id] = {
'id': playlist_id,
'spotify_id': spotify_playlist['id'],
'name': name,
'description': description,
'attributes': attributes,
'track_count': len(track_uris),
'created_at': datetime.now().isoformat(),
'updated_at': datetime.now().isoformat(),
'uri': spotify_playlist['uri']
}
self._save_metadata()
return self.playlists[playlist_id]
def find_matching_playlists(self, attributes):
matches = []
for playlist_id, playlist in self.playlists.items():
score = self._calculate_match_score(
playlist['attributes'],
attributes
)
if score > 0:
matches.append({
'playlist': playlist,
'score': score
})
matches.sort(key=lambda x: x['score'], reverse=True)
return [m['playlist'] for m in matches]
def _calculate_match_score(self, playlist_attrs, requested_attrs):
if not requested_attrs:
return 0
matching_attrs = 0
total_attrs = len(requested_attrs)
for key, value in requested_attrs.items():
if key in playlist_attrs:
playlist_value = playlist_attrs[key]
if isinstance(value, list) and isinstance(playlist_value, list):
if set(value) & set(playlist_value):
matching_attrs += 1
elif isinstance(value, str) and isinstance(playlist_value, str):
if value.lower() == playlist_value.lower():
matching_attrs += 1
elif value == playlist_value:
matching_attrs += 1
return matching_attrs / total_attrs if total_attrs > 0 else 0
def add_tracks_to_playlist(self, playlist_id, track_uris):
if playlist_id not in self.playlists:
raise ValueError(f"Playlist {playlist_id} not found")
playlist = self.playlists[playlist_id]
self.spotify_client.add_tracks_to_playlist(
playlist['spotify_id'],
track_uris
)
playlist['track_count'] += len(track_uris)
playlist['updated_at'] = datetime.now().isoformat()
self._save_metadata()
def remove_tracks_from_playlist(self, playlist_id, track_uris):
if playlist_id not in self.playlists:
raise ValueError(f"Playlist {playlist_id} not found")
playlist = self.playlists[playlist_id]
self.spotify_client.remove_tracks_from_playlist(
playlist['spotify_id'],
track_uris
)
playlist['track_count'] -= len(track_uris)
playlist['updated_at'] = datetime.now().isoformat()
self._save_metadata()
def merge_playlists(self, playlist_ids, new_name, new_description=''):
all_track_uris = []
merged_attributes = {}
for playlist_id in playlist_ids:
if playlist_id not in self.playlists:
continue
playlist = self.playlists[playlist_id]
tracks = self.spotify_client.get_playlist_tracks(
playlist['spotify_id']
)
all_track_uris.extend([t['uri'] for t in tracks])
for key, value in playlist['attributes'].items():
if key not in merged_attributes:
merged_attributes[key] = []
if isinstance(value, list):
merged_attributes[key].extend(value)
else:
merged_attributes[key].append(value)
# Deduplicate while preserving first-seen track order
all_track_uris = list(dict.fromkeys(all_track_uris))
for key in merged_attributes:
merged_attributes[key] = list(set(merged_attributes[key]))
return self.create_playlist(
new_name,
new_description,
merged_attributes,
all_track_uris
)
def sync_with_spotify(self):
user = self.spotify_client.get_current_user()
spotify_playlists = self.spotify_client.get_user_playlists(user['id'])
spotify_ids = {p['id'] for p in spotify_playlists}
playlists_to_remove = []
for playlist_id, playlist in self.playlists.items():
if playlist['spotify_id'] not in spotify_ids:
playlists_to_remove.append(playlist_id)
else:
spotify_playlist = next(
p for p in spotify_playlists
if p['id'] == playlist['spotify_id']
)
playlist['track_count'] = spotify_playlist['track_count']
for playlist_id in playlists_to_remove:
del self.playlists[playlist_id]
self._save_metadata()
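The attribute-matching heuristic in `_calculate_match_score` is easiest to see with concrete values. A standalone copy of the same logic:

```python
def calculate_match_score(playlist_attrs, requested_attrs):
    """Fraction of requested attributes matched by a playlist (0.0 to 1.0)."""
    if not requested_attrs:
        return 0
    matching = 0
    for key, value in requested_attrs.items():
        if key in playlist_attrs:
            pv = playlist_attrs[key]
            if isinstance(value, list) and isinstance(pv, list):
                # Any overlap between the lists counts as a match
                if set(value) & set(pv):
                    matching += 1
            elif isinstance(value, str) and isinstance(pv, str):
                # String comparison is case-insensitive
                if value.lower() == pv.lower():
                    matching += 1
            elif value == pv:
                matching += 1
    return matching / len(requested_attrs)
```

A playlist tagged `{'genre': 'Rock', 'mood': 'energetic'}` scores 0.5 against a request for `{'genre': 'rock', 'era': '1980s'}`: the genre matches case-insensitively, the era attribute is absent.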
# File: tools.py
import json
def get_tool_definitions():
"""Return list of tool definitions for the LLM"""
return [
{
'type': 'function',
'function': {
'name': 'search_songs',
'description': 'Search for songs on Spotify based on criteria like genre, mood, artist, era, language, or style',
'parameters': {
'type': 'object',
'properties': {
'genre': {
'type': 'string',
'description': 'Musical genre (e.g., rock, jazz, hip-hop)'
},
'mood': {
'type': 'string',
'description': 'Mood or feeling (e.g., happy, sad, energetic, calm)'
},
'artist': {
'type': 'string',
'description': 'Artist or band name'
},
'era': {
'type': 'string',
'description': 'Time period (e.g., 1980s, 1990-1995, 2000s)'
},
'language': {
'type': 'string',
'description': 'Language of lyrics'
},
'style': {
'type': 'string',
'description': 'Musical style or subgenre'
},
'limit': {
'type': 'integer',
'description': 'Maximum number of songs to return (default 20)'
}
},
'required': []
}
}
},
{
'type': 'function',
'function': {
'name': 'create_playlist',
'description': 'Create a new playlist with specified songs',
'parameters': {
'type': 'object',
'properties': {
'name': {
'type': 'string',
'description': 'Name for the playlist'
},
'description': {
'type': 'string',
'description': 'Description of the playlist'
},
'track_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of Spotify track IDs to include'
},
'attributes': {
'type': 'object',
'description': 'Attributes describing the playlist (genre, mood, etc.)'
}
},
'required': ['name', 'track_ids']
}
}
},
{
'type': 'function',
'function': {
'name': 'find_playlists',
'description': 'Find existing playlists matching specified criteria',
'parameters': {
'type': 'object',
'properties': {
'attributes': {
'type': 'object',
'description': 'Attributes to match (genre, mood, artist, etc.)'
}
},
'required': ['attributes']
}
}
},
{
'type': 'function',
'function': {
'name': 'add_songs_to_playlist',
'description': 'Add songs to an existing playlist',
'parameters': {
'type': 'object',
'properties': {
'playlist_id': {
'type': 'string',
'description': 'ID of the playlist to modify'
},
'track_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of Spotify track IDs to add'
}
},
'required': ['playlist_id', 'track_ids']
}
}
},
{
'type': 'function',
'function': {
'name': 'remove_songs_from_playlist',
'description': 'Remove songs from an existing playlist',
'parameters': {
'type': 'object',
'properties': {
'playlist_id': {
'type': 'string',
'description': 'ID of the playlist to modify'
},
'track_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of Spotify track IDs to remove'
}
},
'required': ['playlist_id', 'track_ids']
}
}
},
{
'type': 'function',
'function': {
'name': 'merge_playlists',
'description': 'Merge multiple playlists into a new playlist',
'parameters': {
'type': 'object',
'properties': {
'playlist_ids': {
'type': 'array',
'items': {'type': 'string'},
'description': 'List of playlist IDs to merge'
},
'name': {
'type': 'string',
'description': 'Name for the merged playlist'
},
'description': {
'type': 'string',
'description': 'Description for the merged playlist'
}
},
'required': ['playlist_ids', 'name']
}
}
},
{
'type': 'function',
'function': {
'name': 'play_playlist',
'description': 'Start playing a playlist on Spotify',
'parameters': {
'type': 'object',
'properties': {
'playlist_id': {
'type': 'string',
'description': 'ID of the playlist to play'
}
},
'required': ['playlist_id']
}
}
}
]
class ToolExecutor:
"""Executes tool calls from the LLM"""
def __init__(self, playlist_manager, spotify_client):
self.playlist_manager = playlist_manager
self.spotify_client = spotify_client
def execute_tool(self, tool_name, arguments):
if tool_name == 'search_songs':
return self._search_songs(**arguments)
elif tool_name == 'create_playlist':
return self._create_playlist(**arguments)
elif tool_name == 'find_playlists':
return self._find_playlists(**arguments)
elif tool_name == 'add_songs_to_playlist':
return self._add_songs_to_playlist(**arguments)
elif tool_name == 'remove_songs_from_playlist':
return self._remove_songs_from_playlist(**arguments)
elif tool_name == 'merge_playlists':
return self._merge_playlists(**arguments)
elif tool_name == 'play_playlist':
return self._play_playlist(**arguments)
else:
raise ValueError(f"Unknown tool: {tool_name}")
def _search_songs(self, genre=None, mood=None, artist=None, era=None,
language=None, style=None, limit=20):
query_parts = []
if genre:
query_parts.append(f'genre:{genre}')
if artist:
query_parts.append(f'artist:{artist}')
if era:
if '-' in era:
start, end = era.split('-')
query_parts.append(f'year:{start}-{end}')
elif era.endswith('s'):
decade = era[:-1]
query_parts.append(f'year:{decade}-{int(decade)+9}')
else:
query_parts.append(f'year:{era}')
if mood:
query_parts.append(mood)
if style:
query_parts.append(style)
if language:
query_parts.append(language)
query = ' '.join(query_parts) if query_parts else 'popular'
tracks = self.spotify_client.search_tracks(query, limit=limit)
return {
'success': True,
'tracks': tracks,
'count': len(tracks)
}
def _create_playlist(self, name, track_ids, description='', attributes=None):
if attributes is None:
attributes = {}
track_uris = [f'spotify:track:{tid}' for tid in track_ids]
playlist = self.playlist_manager.create_playlist(
name,
description,
attributes,
track_uris
)
return {
'success': True,
'playlist': playlist
}
def _find_playlists(self, attributes):
playlists = self.playlist_manager.find_matching_playlists(attributes)
return {
'success': True,
'playlists': playlists,
'count': len(playlists)
}
def _add_songs_to_playlist(self, playlist_id, track_ids):
track_uris = [f'spotify:track:{tid}' for tid in track_ids]
self.playlist_manager.add_tracks_to_playlist(
playlist_id,
track_uris
)
return {
'success': True,
'message': f'Added {len(track_ids)} tracks to playlist'
}
def _remove_songs_from_playlist(self, playlist_id, track_ids):
track_uris = [f'spotify:track:{tid}' for tid in track_ids]
self.playlist_manager.remove_tracks_from_playlist(
playlist_id,
track_uris
)
return {
'success': True,
'message': f'Removed {len(track_ids)} tracks from playlist'
}
def _merge_playlists(self, playlist_ids, name, description=''):
playlist = self.playlist_manager.merge_playlists(
playlist_ids,
name,
description
)
return {
'success': True,
'playlist': playlist
}
def _play_playlist(self, playlist_id):
playlist = self.playlist_manager.playlists.get(playlist_id)
if not playlist:
return {
'success': False,
'error': 'Playlist not found'
}
try:
self.spotify_client.play_playlist(playlist['uri'])
return {
'success': True,
'message': f'Playing playlist: {playlist["name"]}'
}
except Exception as e:
return {
'success': False,
'error': str(e)
}
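The era handling in `_search_songs` above assumes a four-digit decade such as "1980s"; a shorthand like "80s" would produce `year:80-89`, which Spotify's `year:` filter will not match. A hedged normalization sketch, where the two-digit case is assumed to mean 19xx:

```python
def era_to_year_filter(era):
    """Translate an era string into a Spotify `year:` search filter.

    Handles explicit ranges ("1990-1995"), full decades ("1980s"),
    and two-digit shorthand ("80s", assumed to mean the 1900s).
    """
    if '-' in era:
        start, end = era.split('-', 1)
        return f'year:{start}-{end}'
    if era.endswith('s'):
        decade = era[:-1]
        if len(decade) == 2:       # "80s" -> "1980"
            decade = '19' + decade
        start = int(decade)
        return f'year:{start}-{start + 9}'
    return f'year:{era}'           # single year, e.g. "2003"
```

Swapping this helper into `_search_songs` in place of the inline era branch would make the tool robust to the way users actually phrase decades.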
# File: controller.py
import json
from llm_provider import LLMManager
from spotify_client import SpotifyClient
from playlist_manager import PlaylistManager
from tools import ToolExecutor, get_tool_definitions
class ApplicationController:
"""Main application controller coordinating all components"""
def __init__(self, config):
self.config = config
self.spotify_client = SpotifyClient(
config['spotify']['client_id'],
config['spotify']['client_secret'],
config['spotify']['redirect_uri']
)
self.playlist_manager = PlaylistManager(
config['storage_path'],
self.spotify_client
)
self.llm_manager = LLMManager(config['llm'])
self.tool_executor = ToolExecutor(
self.playlist_manager,
self.spotify_client
)
self.current_context = {}
def process_message(self, user_message):
tools = get_tool_definitions()
response = self.llm_manager.chat(user_message, tools=tools)
if response.get('tool_calls'):
tool_results = []
for tool_call in response['tool_calls']:
tool_name = tool_call['function']['name']
arguments = json.loads(tool_call['function']['arguments'])
result = self.tool_executor.execute_tool(
tool_name,
arguments
)
tool_results.append({
'tool': tool_name,
'result': result
})
self._update_context(tool_name, arguments, result)
tool_message = self._format_tool_results(tool_results)
final_response = self.llm_manager.chat(
tool_message,
tools=tools
)
return final_response['content']
return response['content']
def _update_context(self, tool_name, arguments, result):
if tool_name == 'create_playlist':
self.current_context['last_playlist'] = result['playlist']['id']
elif tool_name == 'find_playlists':
if result['playlists']:
self.current_context['found_playlists'] = [
p['id'] for p in result['playlists']
]
def _format_tool_results(self, tool_results):
formatted = "Tool execution results:\n\n"
for tr in tool_results:
formatted += f"Tool: {tr['tool']}\n"
formatted += f"Result: {json.dumps(tr['result'], indent=2)}\n\n"
return formatted
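Note that `process_message` performs a single round of tool calls: if the follow-up model response itself requests tools, those calls are never executed. One way to close that gap — sketched here against the `LLMManager`/`ToolExecutor` interfaces used above, with a round cap as a safety valve against a model that keeps requesting tools — is to loop until the model answers in plain text:

```python
import json

def run_tool_loop(llm_chat, execute_tool, user_message, tools, max_rounds=5):
    """Drive the LLM until it stops requesting tools.

    `llm_chat(message, tools=...)` and `execute_tool(name, args)` are
    assumed to match the LLMManager.chat and ToolExecutor.execute_tool
    signatures above.
    """
    message = user_message
    for _ in range(max_rounds):
        response = llm_chat(message, tools=tools)
        calls = response.get('tool_calls')
        if not calls:
            return response['content']   # plain answer: we are done
        results = []
        for call in calls:
            name = call['function']['name']
            args = json.loads(call['function']['arguments'])
            results.append({'tool': name, 'result': execute_tool(name, args)})
        # Feed the results back so the model can decide the next step
        message = "Tool execution results:\n" + json.dumps(results, indent=2)
    return "Stopped after reaching the tool-call limit."
```

The single-round version in the controller is simpler and fine for demos; the loop matters once requests like "find my rock playlists and merge them" require chained tool calls.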
# File: server.py
from fastapi import FastAPI, WebSocket, Request
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import RedirectResponse
import json
import os
from controller import ApplicationController
app = FastAPI(title="Spotify Playlist Chatbot")
# Create necessary directories
os.makedirs("static", exist_ok=True)
os.makedirs("templates", exist_ok=True)
os.makedirs("data/playlists", exist_ok=True)
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
controller = None
@app.on_event("startup")
async def startup_event():
global controller
with open('config.json', 'r') as f:
config = json.load(f)
controller = ApplicationController(config)
@app.get("/")
async def home(request: Request):
return templates.TemplateResponse(
"index.html",
{"request": request}
)
@app.get("/auth/spotify")
async def spotify_auth():
auth_url = controller.spotify_client.get_authorization_url()
return RedirectResponse(auth_url)
@app.get("/auth/callback")
async def spotify_callback(code: str):
controller.spotify_client.exchange_code_for_token(code)
return RedirectResponse("/")
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
await websocket.accept()
try:
while True:
data = await websocket.receive_text()
message_data = json.loads(data)
user_message = message_data.get('message', '')
response = controller.process_message(user_message)
await websocket.send_text(json.dumps({
'type': 'message',
'content': response
}))
except Exception as e:
# A client disconnect also lands here; the socket is already closed
# at that point, so do not call close() again.
print(f"WebSocket error: {e}")
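One caveat with the handler above: `process_message` is synchronous and can take seconds (LLM inference, Spotify HTTP round trips), so calling it directly inside the `async` endpoint blocks the event loop for every connected client. A sketch of moving the call onto a worker thread — `asyncio.to_thread` requires Python 3.9+; on 3.8, `loop.run_in_executor(None, ...)` is the equivalent:

```python
import asyncio

async def handle_message(controller, user_message):
    """Run the synchronous controller off the event loop.

    controller.process_message does blocking work (LLM inference,
    Spotify API calls); running it in a thread keeps the WebSocket
    handler responsive for other clients.
    """
    return await asyncio.to_thread(controller.process_message, user_message)
```

The WebSocket loop would then `await handle_message(controller, user_message)` instead of calling `controller.process_message(user_message)` directly.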
# File: main.py
import uvicorn
import json
if __name__ == "__main__":
with open('config.json', 'r') as f:
config = json.load(f)
uvicorn.run(
"server:app",
host=config['server']['host'],
port=config['server']['port'],
reload=True
)
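Both main.py and the startup hook assume config.json exists and contains every key the components read; a missing key currently surfaces as a `KeyError` deep inside a constructor. A fail-fast validation sketch, checking only the keys this article's code actually uses:

```python
def validate_config(config):
    """Fail fast on an incomplete config.json.

    Raises ValueError naming every missing path so the user can fix
    them all in one pass.
    """
    required = [
        ('llm', 'provider_type'),
        ('spotify', 'client_id'),
        ('spotify', 'client_secret'),
        ('spotify', 'redirect_uri'),
        ('server', 'host'),
        ('server', 'port'),
    ]
    missing = [
        f'{section}.{key}'
        for section, key in required
        if key not in config.get(section, {})
    ]
    if 'storage_path' not in config:
        missing.append('storage_path')
    if missing:
        raise ValueError('config.json is missing: ' + ', '.join(missing))
    return config
```

Calling `validate_config(config)` immediately after `json.load` in both entry points turns a confusing mid-startup traceback into a single actionable error message.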
Taken together, these modules form a fully functional Spotify playlist management chatbot: an LLM (local or remote, with GPU acceleration across multiple platforms) interprets natural language requests, the application layer manages playlists both locally and on Spotify, and a web-based interface handles the conversation.
PROJECT FILE STRUCTURE AND ADDITIONAL FILES
COMPLETE PROJECT FILE STRUCTURE
The project is organized as follows to maintain clean separation of concerns and ease of navigation:
spotify-playlist-chatbot/
|
+-- src/
| +-- __init__.py
| +-- llm_provider.py
| +-- spotify_client.py
| +-- playlist_manager.py
| +-- tools.py
| +-- controller.py
| +-- server.py
|
+-- static/
| +-- style.css
| +-- app.js
|
+-- templates/
| +-- index.html
|
+-- data/
| +-- playlists/
| +-- playlists.json (created automatically)
|
+-- tests/
| +-- __init__.py
| +-- test_spotify_client.py
| +-- test_playlist_manager.py
| +-- test_tools.py
|
+-- scripts/
| +-- setup_venv.sh
| +-- setup_venv.bat
| +-- install.sh
| +-- install.bat
|
+-- config.json.example
+-- config.json (created by user)
+-- requirements.txt
+-- Makefile
+-- README.md
+-- LICENSE
+-- .gitignore
+-- main.py
The source code resides in the src directory to keep the root clean. Static assets for the web interface are in the static directory. Templates for the web pages are in the templates directory. Data storage including playlist metadata is maintained in the data directory. Tests are organized in the tests directory. Installation and setup scripts are in the scripts directory.
REQUIREMENTS.TXT FILE
The requirements file specifies all Python dependencies needed for the project with pinned versions to ensure reproducibility:
fastapi==0.104.1
uvicorn[standard]==0.24.0
websockets==12.0
requests==2.31.0
torch==2.1.0
transformers==4.35.0
accelerate==0.24.1
jinja2==3.1.2
python-multipart==0.0.6
pydantic==2.5.0
pytest==7.4.3
pytest-asyncio==0.21.1
black==23.11.0
flake8==6.1.0
mypy==1.7.1
For users who want to use only remote LLM providers and avoid installing large ML libraries, an alternative minimal requirements file can be provided:
# requirements-minimal.txt
fastapi==0.104.1
uvicorn[standard]==0.24.0
websockets==12.0
requests==2.31.0
jinja2==3.1.2
python-multipart==0.0.6
pydantic==2.5.0
MAKEFILE
The Makefile provides convenient commands for common development and deployment tasks:
.PHONY: help install install-minimal install-dev venv clean test lint format run docker-build docker-run
PYTHON := python3
PIP := pip3
VENV := venv
VENV_BIN := $(VENV)/bin
help:
@echo "Spotify Playlist Chatbot - Available Commands"
@echo "=============================================="
@echo "make install - Install full dependencies including local LLM support"
@echo "make install-minimal - Install minimal dependencies for remote LLM only"
@echo "make install-dev - Install development dependencies"
@echo "make venv - Create virtual environment"
@echo "make clean - Remove generated files and caches"
@echo "make test - Run test suite"
@echo "make lint - Run code linting"
@echo "make format - Format code with black"
@echo "make run - Run the application"
@echo "make docker-build - Build Docker image"
@echo "make docker-run - Run Docker container"
venv:
@echo "Creating virtual environment..."
$(PYTHON) -m venv $(VENV)
@echo "Virtual environment created at $(VENV)/"
@echo "Activate it with: source $(VENV)/bin/activate (Linux/Mac) or $(VENV)\\Scripts\\activate (Windows)"
install: venv
@echo "Installing full dependencies..."
$(VENV_BIN)/pip install --upgrade pip
$(VENV_BIN)/pip install -r requirements.txt
@echo "Installation complete!"
@echo "Copy config.json.example to config.json and configure your settings"
install-minimal: venv
@echo "Installing minimal dependencies..."
$(VENV_BIN)/pip install --upgrade pip
$(VENV_BIN)/pip install -r requirements-minimal.txt
@echo "Minimal installation complete!"
@echo "Copy config.json.example to config.json and configure your settings"
install-dev: install
@echo "Installing development dependencies..."
$(VENV_BIN)/pip install pytest pytest-asyncio black flake8 mypy
@echo "Development dependencies installed!"
clean:
@echo "Cleaning up..."
find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
find . -type f -name "*.pyc" -delete
find . -type f -name "*.pyo" -delete
find . -type f -name "*.coverage" -delete
find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true
find . -type d -name ".pytest_cache" -exec rm -rf {} + 2>/dev/null || true
find . -type d -name ".mypy_cache" -exec rm -rf {} + 2>/dev/null || true
@echo "Cleanup complete!"
test:
@echo "Running tests..."
$(VENV_BIN)/pytest tests/ -v
lint:
@echo "Running linter..."
$(VENV_BIN)/flake8 src/ tests/ --max-line-length=100
$(VENV_BIN)/mypy src/ --ignore-missing-imports
format:
@echo "Formatting code..."
$(VENV_BIN)/black src/ tests/ --line-length=100
run:
@echo "Starting Spotify Playlist Chatbot..."
$(VENV_BIN)/python main.py
docker-build:
@echo "Building Docker image..."
docker build -t spotify-playlist-chatbot:latest .
docker-run:
@echo "Running Docker container..."
docker run -p 8000:8000 -v $(PWD)/data:/app/data -v $(PWD)/config.json:/app/config.json spotify-playlist-chatbot:latest
SETUP SCRIPT FOR LINUX AND MAC
The setup script for Unix-based systems automates the virtual environment creation and dependency installation:
#!/bin/bash
# File: scripts/setup_venv.sh
set -e
echo "=========================================="
echo "Spotify Playlist Chatbot Setup"
echo "=========================================="
echo ""
# Check Python version
PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}')
echo "Detected Python version: $PYTHON_VERSION"
if ! python3 -c "import sys; exit(0 if sys.version_info >= (3, 8) else 1)"; then
echo "Error: Python 3.8 or higher is required"
exit 1
fi
echo "Python version check passed"
echo ""
# Create virtual environment
echo "Creating virtual environment..."
python3 -m venv venv
if [ ! -d "venv" ]; then
echo "Error: Failed to create virtual environment"
exit 1
fi
echo "Virtual environment created successfully"
echo ""
# Activate virtual environment
source venv/bin/activate
# Upgrade pip
echo "Upgrading pip..."
pip install --upgrade pip
echo ""
# Ask user about installation type
echo "Select installation type:"
echo "1) Full installation (includes local LLM support with PyTorch)"
echo "2) Minimal installation (remote LLM only, smaller download)"
read -p "Enter choice [1-2]: " choice
case $choice in
1)
echo ""
echo "Installing full dependencies..."
pip install -r requirements.txt
;;
2)
echo ""
echo "Installing minimal dependencies..."
pip install -r requirements-minimal.txt
;;
*)
echo "Invalid choice. Installing full dependencies..."
pip install -r requirements.txt
;;
esac
echo ""
echo "=========================================="
echo "Installation Complete!"
echo "=========================================="
echo ""
echo "Next steps:"
echo "1. Copy config.json.example to config.json"
echo " cp config.json.example config.json"
echo ""
echo "2. Edit config.json with your Spotify credentials"
echo " - Get credentials from: https://developer.spotify.com/dashboard"
echo ""
echo "3. Activate the virtual environment:"
echo " source venv/bin/activate"
echo ""
echo "4. Run the application:"
echo " python main.py"
echo ""
echo "5. Open your browser to: http://localhost:8000"
echo ""
SETUP SCRIPT FOR WINDOWS
The Windows batch script provides equivalent functionality for Windows users:
@echo off
REM File: scripts/setup_venv.bat
echo ==========================================
echo Spotify Playlist Chatbot Setup
echo ==========================================
echo.
REM Check Python installation
python --version >nul 2>&1
if errorlevel 1 (
echo Error: Python is not installed or not in PATH
echo Please install Python 3.8 or higher from https://www.python.org/
pause
exit /b 1
)
echo Python found
echo.
REM Create virtual environment
echo Creating virtual environment...
python -m venv venv
if not exist "venv\" (
echo Error: Failed to create virtual environment
pause
exit /b 1
)
echo Virtual environment created successfully
echo.
REM Activate virtual environment
call venv\Scripts\activate.bat
REM Upgrade pip
echo Upgrading pip...
python -m pip install --upgrade pip
echo.
REM Ask user about installation type
echo Select installation type:
echo 1) Full installation (includes local LLM support with PyTorch)
echo 2) Minimal installation (remote LLM only, smaller download)
set /p choice="Enter choice [1-2]: "
if "%choice%"=="1" (
echo.
echo Installing full dependencies...
pip install -r requirements.txt
) else if "%choice%"=="2" (
echo.
echo Installing minimal dependencies...
pip install -r requirements-minimal.txt
) else (
echo Invalid choice. Installing full dependencies...
pip install -r requirements.txt
)
echo.
echo ==========================================
echo Installation Complete!
echo ==========================================
echo.
echo Next steps:
echo 1. Copy config.json.example to config.json
echo copy config.json.example config.json
echo.
echo 2. Edit config.json with your Spotify credentials
echo - Get credentials from: https://developer.spotify.com/dashboard
echo.
echo 3. Activate the virtual environment:
echo venv\Scripts\activate.bat
echo.
echo 4. Run the application:
echo python main.py
echo.
echo 5. Open your browser to: http://localhost:8000
echo.
pause
INSTALLATION SCRIPT FOR LINUX
A system-wide installation script for Linux systems:
#!/bin/bash
# File: scripts/install.sh
set -e
echo "Installing Spotify Playlist Chatbot system-wide..."
echo ""
# Check for root privileges
if [ "$EUID" -ne 0 ]; then
echo "This script requires root privileges for system-wide installation"
echo "Please run with sudo: sudo ./scripts/install.sh"
exit 1
fi
# Install system dependencies
echo "Installing system dependencies..."
if command -v apt-get &> /dev/null; then
apt-get update
apt-get install -y python3 python3-pip python3-venv
elif command -v yum &> /dev/null; then
yum install -y python3 python3-pip
elif command -v dnf &> /dev/null; then
dnf install -y python3 python3-pip
else
echo "Unsupported package manager. Please install Python 3.8+ manually."
exit 1
fi
# Create application directory
APP_DIR="/opt/spotify-playlist-chatbot"
echo "Creating application directory at $APP_DIR..."
mkdir -p $APP_DIR
# Copy application files
echo "Copying application files..."
cp -r src static templates data scripts $APP_DIR/
cp requirements.txt requirements-minimal.txt config.json.example main.py $APP_DIR/
# Set permissions
chown -R $SUDO_USER:$SUDO_USER $APP_DIR
# Create systemd service file
echo "Creating systemd service..."
cat > /etc/systemd/system/spotify-playlist-chatbot.service << EOF
[Unit]
Description=Spotify Playlist Chatbot
After=network.target
[Service]
Type=simple
User=$SUDO_USER
WorkingDirectory=$APP_DIR
ExecStart=$APP_DIR/venv/bin/python $APP_DIR/main.py
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
echo ""
echo "Installation complete!"
echo ""
echo "Next steps:"
echo "1. Navigate to $APP_DIR"
echo "2. Run setup: sudo -u $SUDO_USER bash scripts/setup_venv.sh"
echo "3. Configure: cp config.json.example config.json && nano config.json"
echo "4. Enable service: systemctl enable spotify-playlist-chatbot"
echo "5. Start service: systemctl start spotify-playlist-chatbot"
echo ""
README.MD FILE
A comprehensive README file documenting the project:
# Spotify Playlist Chatbot
An intelligent chatbot powered by Large Language Models (LLMs) that creates, manages, and plays Spotify playlists through natural language conversations.
## Overview
The Spotify Playlist Chatbot allows users to interact with their Spotify account using natural language. Simply describe the kind of music you want, and the chatbot will search for matching songs, create playlists, modify existing ones, merge playlists, and even start playback on your Spotify devices.
### Key Features
**Natural Language Understanding**: Describe playlists using mood, genre, artist, era, language, style, or any combination of attributes. The LLM understands complex requests like "Create a playlist of upbeat 80s rock songs with artists like Bon Jovi and Def Leppard."
**Flexible LLM Support**: Choose between local LLM execution with GPU acceleration or remote API services. Local execution supports NVIDIA CUDA, AMD ROCm, and Apple Silicon (MPS) for optimal performance on various hardware platforms.
**Comprehensive Playlist Management**: Create new playlists, add or remove songs from existing playlists, merge multiple playlists, and manage both local metadata and Spotify cloud storage seamlessly.
**Intelligent Playlist Matching**: The system remembers existing playlists and can suggest using them instead of creating duplicates. When you ask for a playlist that matches an existing one, the chatbot will ask whether to use the existing playlist or create a new one.
**Modern Web Interface**: A clean, responsive web-based user interface provides an intuitive chat experience that works across desktop and mobile devices.
**Real-time Interaction**: WebSocket-based communication ensures instant responses and smooth conversation flow without page refreshes.
## Architecture
The application follows a layered architecture with clear separation of concerns:
**Presentation Layer**: Web-based UI using HTML, CSS, and JavaScript with WebSocket communication for real-time chat.
**Application Layer**: FastAPI server handling HTTP requests, WebSocket connections, and OAuth authentication flow.
**Business Logic Layer**: Application controller coordinating between LLM, Spotify API, and playlist management components.
**LLM Integration Layer**: Abstracted provider interface supporting both local models (via Transformers) and remote APIs (OpenAI, Anthropic, etc.) with automatic GPU detection and configuration.
**Integration Layer**: Spotify Web API client handling authentication, search, playlist operations, and playback control.
**Persistence Layer**: Local JSON-based storage for playlist metadata with automatic synchronization to Spotify.
## Installation
### Prerequisites
Python version 3.8 or higher is required. For local LLM support, you need a compatible GPU (NVIDIA, AMD, or Apple Silicon) with appropriate drivers installed. For remote LLM usage, you need an API key from your chosen provider (OpenAI, Anthropic, etc.).
You must have a Spotify account and register an application at the Spotify Developer Dashboard (https://developer.spotify.com/dashboard) to obtain client credentials.
### Quick Start
Clone the repository to your local machine:
git clone https://github.com/yourusername/spotify-playlist-chatbot.git
cd spotify-playlist-chatbot
Run the setup script for your operating system. On Linux or Mac:
bash scripts/setup_venv.sh
On Windows:
scripts\setup_venv.bat
The setup script will create a virtual environment and install dependencies. Choose full installation for local LLM support or minimal installation for remote LLM only.
Copy the example configuration file and edit it with your credentials:
cp config.json.example config.json
nano config.json
Configure your Spotify application credentials and LLM settings in the config.json file.
### Using Makefile
Alternatively, you can use the Makefile for installation:
make install # Full installation with local LLM support
make install-minimal # Minimal installation for remote LLM only
make install-dev # Install with development dependencies
## Configuration
The config.json file contains all application settings. Here is an example configuration for remote LLM usage:
{
"llm": {
"provider_type": "remote",
"model_name": "gpt-4",
"api_key": "your-openai-api-key",
"api_base": "https://api.openai.com/v1",
"temperature": 0.7,
"max_tokens": 2000
},
"spotify": {
"client_id": "your-spotify-client-id",
"client_secret": "your-spotify-client-secret",
"redirect_uri": "http://localhost:8000/auth/callback"
},
"storage_path": "./data/playlists",
"server": {
"host": "0.0.0.0",
"port": 8000
}
}
For local LLM usage, configure it as follows:
{
"llm": {
"provider_type": "local",
"model_name": "mistralai/Mistral-7B-Instruct-v0.2",
"temperature": 0.7,
"max_tokens": 2000
},
"spotify": {
"client_id": "your-spotify-client-id",
"client_secret": "your-spotify-client-secret",
"redirect_uri": "http://localhost:8000/auth/callback"
},
"storage_path": "./data/playlists",
"server": {
"host": "0.0.0.0",
"port": 8000
}
}
### Configuration Parameters
**LLM Configuration**: The provider_type can be "local" or "remote". For local providers, model_name should be a HuggingFace model identifier. For remote providers, it should be the model name from your API provider. The api_key is required only for remote providers. The temperature controls randomness in responses (0.0 to 1.0). The max_tokens limits the response length.
**Spotify Configuration**: The client_id and client_secret are obtained from your Spotify Developer Dashboard application. The redirect_uri must match the URI configured in your Spotify application settings.
**Storage Configuration**: The storage_path specifies where playlist metadata is stored locally.
**Server Configuration**: The host determines which network interfaces the server listens on. Use "0.0.0.0" to accept connections from any interface or "127.0.0.1" for localhost only. The port specifies the TCP port for the web server.
## Usage
### Starting the Application
Activate the virtual environment:
On Linux or Mac:
source venv/bin/activate
On Windows:
venv\Scripts\activate.bat
Start the application:
python main.py
Or using the Makefile:
make run
Open your web browser and navigate to http://localhost:8000.
### First-Time Setup
When you first access the application, you need to authorize it to access your Spotify account. Click the "Connect to Spotify" button or navigate to http://localhost:8000/auth/spotify. You will be redirected to Spotify's authorization page. Log in and grant the requested permissions. After authorization, you will be redirected back to the chatbot interface.
### Interacting with the Chatbot
The chatbot understands natural language requests for playlist operations. Here are some example interactions:
**Creating Playlists**: "Create a playlist of upbeat 80s rock songs" or "Make me a playlist with calm jazz music from the 1950s" or "I want a workout playlist with high-energy EDM tracks."
**Modifying Playlists**: "Add some Beatles songs to my rock playlist" or "Remove the slow songs from my workout playlist" or "Add more artists like Radiohead to that playlist."
**Finding Playlists**: "Show me my jazz playlists" or "Find playlists with 80s music" or "Do I have any workout playlists?"
**Merging Playlists**: "Merge my rock and metal playlists" or "Combine all my workout playlists into one" or "Create a new playlist from my jazz and blues playlists."
**Playing Playlists**: "Play my workout playlist" or "Start playing the jazz playlist you just created" or "Play some 80s rock."
The chatbot maintains conversation context, so you can refer to previously created or mentioned playlists using phrases like "that playlist" or "the one you just made."
### Advanced Features
**Multi-Attribute Queries**: You can specify multiple attributes simultaneously. For example, "Create a playlist of sad acoustic songs from the 2000s by female artists."
**Contextual References**: The chatbot remembers the conversation context. After creating a playlist, you can say "add more songs to it" without specifying which playlist.
**Duplicate Detection**: When you request a playlist that matches an existing one, the chatbot will ask whether you want to use the existing playlist or create a new one.
**Playlist Synchronization**: The system automatically synchronizes with Spotify to keep track counts and detect deleted playlists.
## Development
### Running Tests
Execute the test suite using pytest:
make test
Or directly:
pytest tests/ -v
### Code Quality
Format code using Black:
make format
Run linting with Flake8 and type checking with MyPy:
make lint
### Project Structure
The src directory contains all source code modules. The llm_provider.py module handles LLM integration with support for local and remote providers. The spotify_client.py module wraps the Spotify Web API. The playlist_manager.py module manages playlist metadata and operations. The tools.py module defines available tools for the LLM. The controller.py module coordinates all components. The server.py module implements the FastAPI web server.
The static directory contains CSS and JavaScript for the web interface. The templates directory contains HTML templates. The data directory stores playlist metadata. The tests directory contains unit and integration tests. The scripts directory contains installation and setup scripts.
## Docker Deployment
A Dockerfile is provided for containerized deployment:
make docker-build
make docker-run
Or manually:
docker build -t spotify-playlist-chatbot:latest .
docker run -p 8000:8000 -v $(pwd)/data:/app/data -v $(pwd)/config.json:/app/config.json spotify-playlist-chatbot:latest
## Troubleshooting
**Authentication Issues**: Ensure your Spotify client credentials are correct and the redirect URI in config.json matches exactly what is configured in your Spotify Developer Dashboard application. The redirect URI is case-sensitive.
**GPU Not Detected**: For local LLM execution, verify that you have the correct GPU drivers installed. For NVIDIA GPUs, install CUDA toolkit. For AMD GPUs, install ROCm. For Apple Silicon, ensure you are using macOS 12.3 or later.
**Model Loading Errors**: When using local LLMs, ensure you have sufficient disk space and RAM. Large models may require 16GB or more of RAM. The first run will download the model from HuggingFace, which can take time depending on your internet connection.
**WebSocket Connection Failed**: Check that no firewall is blocking the WebSocket connection. Ensure the server is running and accessible at the configured host and port.
**Playlist Not Found**: The chatbot maintains local metadata. If you delete a playlist directly in Spotify, run the sync operation by restarting the application or manually triggering a sync.
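At its core, the sync operation is a set comparison between locally stored metadata and the playlist IDs Spotify actually reports. A minimal sketch, with the `spotify_id` field name assumed for illustration:

```python
def find_stale_entries(local_playlists, remote_ids):
    """Return local metadata entries whose Spotify playlist no longer exists.

    `local_playlists` is a list of dicts with a 'spotify_id' key (an assumed
    field name); `remote_ids` is the collection of IDs returned by the API.
    """
    remote = set(remote_ids)
    return [p for p in local_playlists if p["spotify_id"] not in remote]
```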
## Contributing
Contributions are welcome! Please follow these guidelines:
Fork the repository and create a feature branch. Write tests for new functionality. Ensure all tests pass and code follows the project style (use Black for formatting). Submit a pull request with a clear description of changes.
## License
This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgments
This project uses the Spotify Web API for music streaming integration. It leverages HuggingFace Transformers for local LLM execution. The web interface is built with FastAPI and modern web technologies.
## Support
For issues, questions, or feature requests, please open an issue on the GitHub repository.
## Changelog
### Version 1.0.0
Initial release with core functionality including natural language playlist creation, modification, and playback. Support for both local and remote LLM providers. GPU acceleration for NVIDIA, AMD, and Apple Silicon. Web-based user interface with real-time chat.
## LICENSE FILE
The MIT License file for the project:
MIT License
Copyright (c) 2024 Spotify Playlist Chatbot Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
## GITIGNORE FILE
A comprehensive gitignore file to exclude unnecessary files from version control:
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# Virtual Environment
venv/
ENV/
env/
.venv
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
# Configuration
config.json
# Data
data/playlists/playlists.json
# Testing
.pytest_cache/
.coverage
htmlcov/
.tox/
.nox/
# Type checking
.mypy_cache/
.dmypy.json
dmypy.json
# Jupyter Notebook
.ipynb_checkpoints
# Logs
*.log
# Model cache
models/
.cache/
## CONFIG.JSON.EXAMPLE FILE
An example configuration file that users can copy and customize:
{
  "llm": {
    "provider_type": "remote",
    "model_name": "gpt-4",
    "api_key": "your-api-key-here",
    "api_base": "https://api.openai.com/v1",
    "temperature": 0.7,
    "max_tokens": 2000
  },
  "spotify": {
    "client_id": "your-spotify-client-id",
    "client_secret": "your-spotify-client-secret",
    "redirect_uri": "http://localhost:8000/auth/callback"
  },
  "storage_path": "./data/playlists",
  "server": {
    "host": "0.0.0.0",
    "port": 8000
  }
}
## DOCKERFILE
A Dockerfile for containerized deployment:
FROM python:3.10-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    && rm -rf /var/lib/apt/lists/*
# Copy requirements first for better caching
COPY requirements-minimal.txt .
RUN pip install --no-cache-dir -r requirements-minimal.txt
# Copy application files
COPY src/ ./src/
COPY static/ ./static/
COPY templates/ ./templates/
COPY main.py .
# Create data directory
RUN mkdir -p data/playlists
# Expose port
EXPOSE 8000
# Run application
CMD ["python", "main.py"]
## DOCKER-COMPOSE.YML FILE
A Docker Compose file for easy deployment with volume management:
version: '3.8'
services:
  spotify-chatbot:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./data:/app/data
      - ./config.json:/app/config.json:ro
    environment:
      - PYTHONUNBUFFERED=1
    restart: unless-stopped
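In containerized deployments, secrets are often injected through environment variables rather than baked into a mounted config.json. A hedged sketch of such an override layer; the `SPOTIFY_CHATBOT_` prefix and the double-underscore nesting convention are assumptions for illustration, not part of the project:

```python
import os

def apply_env_overrides(config: dict, prefix: str = "SPOTIFY_CHATBOT_") -> dict:
    """Overlay matching environment variables onto a loaded config dict.

    A variable such as SPOTIFY_CHATBOT_SPOTIFY__CLIENT_SECRET overrides
    config['spotify']['client_secret']; double underscores separate levels.
    """
    for name, value in os.environ.items():
        if not name.startswith(prefix):
            continue
        path = name[len(prefix):].lower().split("__")
        node = config
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = value
    return config
```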
## UPDATED MAIN.PY FILE
The main entry point needs to be updated to work with the new structure:
# File: main.py
import uvicorn
import json
import sys
import os

# Add src directory to Python path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'src'))


def load_config():
    """Load configuration from config.json"""
    config_path = 'config.json'
    if not os.path.exists(config_path):
        print("Error: config.json not found")
        print("Please copy config.json.example to config.json and configure your settings")
        sys.exit(1)
    try:
        with open(config_path, 'r') as f:
            return json.load(f)
    except json.JSONDecodeError as e:
        print(f"Error: Invalid JSON in config.json: {e}")
        sys.exit(1)


def validate_config(config):
    """Validate configuration has required fields"""
    required_fields = {
        'llm': ['provider_type', 'model_name'],
        'spotify': ['client_id', 'client_secret', 'redirect_uri'],
        'server': ['host', 'port']
    }
    for section, fields in required_fields.items():
        if section not in config:
            print(f"Error: Missing '{section}' section in config.json")
            return False
        for field in fields:
            if field not in config[section]:
                print(f"Error: Missing '{field}' in '{section}' section of config.json")
                return False
    if config['llm']['provider_type'] == 'remote' and 'api_key' not in config['llm']:
        print("Error: 'api_key' required for remote LLM provider")
        return False
    return True


if __name__ == "__main__":
    print("Starting Spotify Playlist Chatbot...")
    print()

    # Load and validate configuration
    config = load_config()
    if not validate_config(config):
        sys.exit(1)

    print(f"LLM Provider: {config['llm']['provider_type']}")
    print(f"Model: {config['llm']['model_name']}")
    print(f"Server: http://{config['server']['host']}:{config['server']['port']}")
    print()
    print("Starting server...")
    print()

    # Start server. app_dir is needed because, with reload=True, uvicorn's
    # reloader subprocess imports "server:app" in a fresh process that does
    # not inherit the sys.path modification above.
    uvicorn.run(
        "server:app",
        host=config['server']['host'],
        port=config['server']['port'],
        app_dir='src',
        reload=True
    )
## REQUIREMENTS-MINIMAL.TXT FILE
Create the minimal requirements file for remote-only usage:
fastapi==0.104.1
uvicorn[standard]==0.24.0
websockets==12.0
requests==2.31.0
jinja2==3.1.2
python-multipart==0.0.6
pydantic==2.5.0
## SRC/__INIT__.PY FILE
An initialization file for the src package:
# File: src/__init__.py
"""
Spotify Playlist Chatbot
An intelligent chatbot for managing Spotify playlists using natural language.
"""
__version__ = "1.0.0"
__author__ = "Spotify Playlist Chatbot Contributors"
__license__ = "MIT"
## TESTS/__INIT__.PY FILE
An initialization file for the tests package:
# File: tests/__init__.py
"""
Test suite for Spotify Playlist Chatbot
"""
## SAMPLE TEST FILE
A sample test file to demonstrate the testing structure:
# File: tests/test_playlist_manager.py
import pytest
import json
import os
import tempfile
from unittest.mock import Mock
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from playlist_manager import PlaylistManager


@pytest.fixture
def mock_spotify_client():
    """Create a mock Spotify client"""
    client = Mock()
    client.get_current_user.return_value = {
        'id': 'test_user',
        'display_name': 'Test User'
    }
    client.create_playlist.return_value = {
        'id': 'spotify_playlist_id',
        'name': 'Test Playlist',
        'description': 'Test Description',
        'uri': 'spotify:playlist:test_id'
    }
    return client


@pytest.fixture
def temp_storage():
    """Create a temporary storage directory"""
    with tempfile.TemporaryDirectory() as tmpdir:
        yield tmpdir


def test_create_playlist(mock_spotify_client, temp_storage):
    """Test playlist creation"""
    manager = PlaylistManager(temp_storage, mock_spotify_client)
    playlist = manager.create_playlist(
        name="Test Playlist",
        description="Test Description",
        attributes={'genre': 'rock', 'mood': 'energetic'},
        track_uris=['spotify:track:1', 'spotify:track:2']
    )
    assert playlist['name'] == "Test Playlist"
    assert playlist['description'] == "Test Description"
    assert playlist['track_count'] == 2
    assert playlist['attributes']['genre'] == 'rock'

    # Verify metadata was saved
    metadata_file = os.path.join(temp_storage, 'playlists.json')
    assert os.path.exists(metadata_file)
    with open(metadata_file, 'r') as f:
        saved_data = json.load(f)
    assert len(saved_data) == 1


def test_find_matching_playlists(mock_spotify_client, temp_storage):
    """Test finding matching playlists"""
    manager = PlaylistManager(temp_storage, mock_spotify_client)

    # Create test playlists
    manager.create_playlist(
        name="Rock Playlist",
        description="Rock music",
        attributes={'genre': 'rock', 'mood': 'energetic'},
        track_uris=['spotify:track:1']
    )
    manager.create_playlist(
        name="Jazz Playlist",
        description="Jazz music",
        attributes={'genre': 'jazz', 'mood': 'calm'},
        track_uris=['spotify:track:2']
    )

    # Find rock playlists
    matches = manager.find_matching_playlists({'genre': 'rock'})
    assert len(matches) == 1
    assert matches[0]['name'] == "Rock Playlist"

    # Find calm playlists
    matches = manager.find_matching_playlists({'mood': 'calm'})
    assert len(matches) == 1
    assert matches[0]['name'] == "Jazz Playlist"
This complete project structure with all supporting files provides everything needed to set up, configure, develop, test, and deploy the Spotify Playlist Chatbot. The Makefile simplifies common tasks, the setup scripts automate installation, the README provides comprehensive documentation, and the test framework ensures code quality.