Thursday, December 18, 2025

LARGE LANGUAGE MODELS IN IOT APPLICATIONS





INTRODUCTION


The convergence of Large Language Models (LLMs) and the Internet of Things (IoT) represents one of the most exciting developments in modern technology. This combination creates intelligent systems that can understand human language, process sensor data, and make autonomous decisions in real-world environments. For beginners, this might seem like a complex topic, but we will break it down into digestible components that anyone can understand.


Large Language Models are artificial intelligence systems trained on vast amounts of text data to understand and generate human-like language. When combined with IoT devices that collect real-world data through sensors, we create systems that can interpret both digital information and physical world conditions through natural language interfaces.


UNDERSTANDING THE FUNDAMENTALS


What Are Large Language Models?


Large Language Models are sophisticated AI systems that have been trained on enormous datasets containing text from books, websites, articles, and other written materials. These models learn patterns in language and can generate coherent, contextually appropriate responses to text inputs. Popular examples include GPT models, Claude, and various open-source alternatives.

The key capabilities that make LLMs valuable for IoT applications include natural language understanding, context retention across conversations, reasoning about complex scenarios, and the ability to translate between different data formats and representations.


What Is the Internet of Things?


The Internet of Things refers to a network of physical devices embedded with sensors, software, and connectivity capabilities that enable them to collect and exchange data. These devices range from simple temperature sensors to complex industrial machinery, smart home appliances, and autonomous vehicles.

IoT systems typically consist of several layers: the device layer containing sensors and actuators, the connectivity layer providing network communication, the data processing layer for analyzing collected information, and the application layer where users interact with the system.


THE CONVERGENCE OPPORTUNITY


When we combine LLMs with IoT systems, we create intelligent platforms that can understand human requests in natural language and translate them into actions within the physical world. Instead of requiring users to learn complex interfaces or programming languages, they can simply describe what they want in plain English.

Consider a smart building scenario where an employee says "The conference room is too warm for our afternoon presentation." An LLM-enhanced IoT system can understand this statement, identify the specific room, determine the appropriate temperature adjustment, and automatically control the HVAC system to create comfortable conditions.


KEY APPLICATION AREAS


Smart Home Automation


In residential environments, LLMs can serve as intelligent interfaces for home automation systems. Rather than programming specific rules or using multiple apps, homeowners can communicate with their homes using natural language. The system can understand complex requests like "When I say goodnight, turn off all lights except the hallway light, set the thermostat to 68 degrees, and arm the security system."

The LLM processes this instruction, breaks it down into individual device commands, and creates an automation routine that executes these actions in the proper sequence. This eliminates the need for technical expertise while providing sophisticated automation capabilities.
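
A minimal sketch of that decomposition step is shown below. It assumes the LLM has already been prompted to return the routine as JSON; the schema, device names, and parameters are illustrative stand-ins, not any particular vendor's API. The code simply parses the routine and dispatches each step in order.

import json

# Example LLM output for a "goodnight" request (illustrative schema and device names).
llm_routine = '''
{
  "routine": "goodnight",
  "steps": [
    {"device": "lights_all", "action": "turn_off", "parameters": {"except": ["hallway"]}},
    {"device": "thermostat", "action": "set_temperature", "parameters": {"target_f": 68}},
    {"device": "security_system", "action": "arm", "parameters": {"mode": "night"}}
  ]
}
'''

def run_routine(routine_json: str, send_command) -> None:
    """Parse an LLM-generated routine and dispatch each step in sequence."""
    routine = json.loads(routine_json)
    for step in routine["steps"]:
        send_command(step["device"], step["action"], step["parameters"])

# send_command would wrap whatever home-automation API is actually in use.
run_routine(llm_routine, lambda device, action, params: print(f"{device}: {action} {params}"))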


Industrial IoT and Predictive Maintenance


Manufacturing facilities generate enormous amounts of sensor data from equipment monitoring systems. LLMs can analyze this data in conjunction with maintenance logs, operator reports, and historical performance data to provide intelligent insights about equipment health and maintenance needs.

When a machine shows unusual vibration patterns, the LLM can correlate this information with similar historical incidents, weather conditions, production schedules, and maintenance records to determine whether immediate attention is required or if the issue can be addressed during the next scheduled maintenance window.
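
The sketch below shows one way that correlation step might be framed as a prompt for the model. The equipment tag, record fields, and threshold logic are assumptions for illustration; a production system would pull them from the plant historian and maintenance database.

from statistics import mean, stdev

def build_maintenance_prompt(vibration_mm_s, past_incidents, schedule_note):
    """Summarize recent vibration data and history into a prompt for an LLM."""
    baseline = vibration_mm_s[:-10]
    recent = vibration_mm_s[-10:]
    z_score = (mean(recent) - mean(baseline)) / stdev(baseline) if stdev(baseline) else 0.0
    history_lines = "\n".join(f"- {item}" for item in past_incidents) or "- none on record"
    return (
        f"Pump P-104 vibration: recent mean {mean(recent):.2f} mm/s, "
        f"baseline {mean(baseline):.2f} mm/s (z = {z_score:.1f}).\n"
        f"Similar past incidents:\n{history_lines}\n"
        f"Production context: {schedule_note}\n"
        "Should maintenance act now or wait for the next scheduled window? "
        "Give a recommendation and brief reasoning."
    )

print(build_maintenance_prompt(
    vibration_mm_s=[2.0, 2.1, 2.2] * 13 + [3.4] * 10,
    past_incidents=["2024-03: bearing wear detected at similar vibration levels"],
    schedule_note="next planned shutdown in 6 days",
))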


Healthcare and Remote Patient Monitoring


Healthcare IoT devices collect continuous streams of patient data including heart rate, blood pressure, glucose levels, and activity patterns. LLMs can analyze this information alongside patient medical histories and clinical guidelines to identify potential health concerns and communicate findings in language that both healthcare providers and patients can understand.

For example, when a patient's glucose monitor shows elevated readings, the LLM can consider factors like meal timing, medication schedules, and recent activity levels to provide contextual explanations and recommendations rather than simply reporting raw numbers.
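
As a rough sketch of that contextual framing, the snippet below merges a glucose reading with meal and medication timing into a single prompt. The field names, values, and phrasing are illustrative assumptions, not clinical guidance.

from datetime import datetime, timedelta

def build_glucose_prompt(reading_mg_dl, last_meal, last_medication, now=None):
    """Combine a glucose reading with recent patient context into an LLM prompt."""
    now = now or datetime.now()
    minutes_since_meal = int((now - last_meal).total_seconds() // 60)
    minutes_since_med = int((now - last_medication).total_seconds() // 60)
    return (
        f"Continuous glucose monitor reading: {reading_mg_dl} mg/dL.\n"
        f"Last recorded meal: {minutes_since_meal} minutes ago.\n"
        f"Last medication dose: {minutes_since_med} minutes ago.\n"
        "Explain in plain language whether this pattern is expected after a meal "
        "and whether the care team should be notified."
    )

now = datetime.now()
print(build_glucose_prompt(
    reading_mg_dl=182,
    last_meal=now - timedelta(minutes=45),
    last_medication=now - timedelta(hours=3),
    now=now,
))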


TECHNICAL ARCHITECTURE PATTERNS


Edge Computing Integration


One effective approach for integrating LLMs with IoT systems involves deploying smaller, specialized language models directly on edge computing devices. This architecture reduces latency, improves privacy, and ensures system functionality even when internet connectivity is limited.

Edge-deployed LLMs can process local sensor data, make immediate decisions for time-critical applications, and communicate with cloud-based systems for more complex analysis when connectivity allows. This hybrid approach balances performance requirements with computational constraints.
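
As a minimal sketch of local inference, the snippet below runs a small placeholder model through the Hugging Face transformers pipeline. The model name is only a stand-in; a real edge deployment would use an instruction-tuned model quantized for the local hardware, with a prompt format matched to that model.

from transformers import pipeline  # assumes the transformers library and a local model are available

# "distilgpt2" is a tiny placeholder model used here only so the example runs offline.
local_llm = pipeline("text-generation", model="distilgpt2")

def interpret_locally(sensor_summary: str, user_request: str) -> str:
    """Produce a quick, fully local interpretation of a request against sensor data."""
    prompt = f"Sensor data: {sensor_summary}\nRequest: {user_request}\nAction:"
    result = local_llm(prompt, max_new_tokens=40, do_sample=False)
    return result[0]["generated_text"][len(prompt):].strip()

print(interpret_locally("conference room temperature 76.5 F", "it feels warm in here"))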


Cloud-Based Processing


For applications that require the full capabilities of large language models, cloud-based architectures provide access to the most powerful AI systems available. IoT devices transmit sensor data to cloud platforms where LLMs can perform sophisticated analysis, reasoning, and decision-making.

This approach works well for applications where slight delays are acceptable and where the complexity of analysis justifies the additional latency. Smart city applications, long-term trend analysis, and complex optimization problems often benefit from cloud-based LLM processing.


Hybrid Architectures


Many practical implementations combine edge and cloud processing to optimize for both performance and capability. Simple, time-critical decisions are handled locally by edge-deployed models, while complex analysis and long-term planning are performed in the cloud.

This architecture allows systems to maintain basic functionality during connectivity outages while leveraging the full power of large language models when network conditions permit.
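
One very small sketch of that split is shown below; the keyword list is an illustrative stand-in for whatever classifier or policy a real system would use to separate time-critical requests from ones that can tolerate cloud latency.

TIME_CRITICAL_KEYWORDS = {"alarm", "leak", "smoke", "emergency", "shutdown"}

def route_request(text: str) -> str:
    """Decide whether a request is handled at the edge or sent to the cloud."""
    if any(word in text.lower() for word in TIME_CRITICAL_KEYWORDS):
        return "edge"   # immediate, deterministic handling on local hardware
    return "cloud"      # richer analysis where extra latency is acceptable

for request in ("smoke detected in lab 2", "summarize last month's energy usage"):
    print(request, "->", route_request(request))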


PRACTICAL IMPLEMENTATION EXAMPLE


Let us examine a practical implementation of an LLM-enhanced IoT system for smart building management. This example demonstrates how natural language interfaces can simplify complex IoT interactions.


import json

import asyncio

from datetime import datetime, timedelta

from typing import Dict, List, Any

from openai import AsyncOpenAI

from dataclasses import dataclass


@dataclass

class SensorReading:

    """Represents a single sensor reading with metadata"""

    sensor_id: str

    sensor_type: str

    value: float

    unit: str

    timestamp: datetime

    location: str


@dataclass

class DeviceCommand:

    """Represents a command to be sent to an IoT device"""

    device_id: str

    device_type: str

    action: str

    parameters: Dict[str, Any]

    location: str


class IoTDeviceManager:

    """Manages communication with IoT devices and sensors"""

    

    def __init__(self):

        self.devices = {

            "hvac_conference_room": {

                "type": "hvac",

                "location": "conference_room",

                "capabilities": ["temperature_control", "humidity_control"]

            },

            "lights_conference_room": {

                "type": "lighting",

                "location": "conference_room", 

                "capabilities": ["brightness", "color_temperature"]

            },

            "temp_sensor_conference_room": {

                "type": "temperature_sensor",

                "location": "conference_room",

                "current_value": 76.5

            }

        }

    

    async def get_sensor_data(self, location: str = None) -> List[SensorReading]:

        """Retrieve current sensor readings from specified location"""

        readings = []

        

        for device_id, device_info in self.devices.items():

            if device_info["type"] == "temperature_sensor":

                if location is None or device_info["location"] == location:

                    reading = SensorReading(

                        sensor_id=device_id,

                        sensor_type="temperature",

                        value=device_info["current_value"],

                        unit="fahrenheit",

                        timestamp=datetime.now(),

                        location=device_info["location"]

                    )

                    readings.append(reading)

        

        return readings

    

    async def execute_command(self, command: DeviceCommand) -> bool:

        """Execute a command on the specified IoT device"""

        device_info = self.devices.get(command.device_id)

        if not device_info:

            return False

        

        print(f"Executing command on {command.device_id}:")

        print(f"  Action: {command.action}")

        print(f"  Parameters: {command.parameters}")

        print(f"  Location: {command.location}")

        

        # Simulate device response

        if command.action == "set_temperature":

            self.devices["temp_sensor_conference_room"]["current_value"] = command.parameters["target_temp"]

        

        return True


class LLMIoTInterface:

    """Main interface that combines LLM processing with IoT device management"""

    

    def __init__(self, api_key: str):

        self.device_manager = IoTDeviceManager()

        self.llm_client = AsyncOpenAI(api_key=api_key)

        

    async def process_natural_language_request(self, user_input: str) -> str:

        """Process user request and execute appropriate IoT actions"""

        

        # Get current sensor data for context

        sensor_data = await self.device_manager.get_sensor_data()

        sensor_context = self._format_sensor_data_for_llm(sensor_data)

        

        # Create system prompt with IoT context

        system_prompt = f"""

        You are an intelligent building management assistant. You can control IoT devices and interpret sensor data.

        

        Available devices and their capabilities:

        - HVAC system in conference room (can adjust temperature and humidity)

        - Lighting system in conference room (can adjust brightness and color temperature)

        

        Current sensor readings:

        {sensor_context}

        

        When users make requests, analyze what they need and provide:

        1. A natural language response explaining what you understand

        2. Specific device commands in JSON format if actions are needed

        

        Format device commands as JSON with this structure:

        {{

            "commands": [

                {{

                    "device_id": "device_identifier",

                    "action": "action_name",

                    "parameters": {{"param_name": "param_value"}}

                }}

            ]

        }}

        """

        

        # Send request to LLM

        response = await self._call_llm(system_prompt, user_input)

        

        # Parse response and execute any device commands

        commands = self._extract_commands_from_response(response)

        

        if commands:

            await self._execute_device_commands(commands)

        

        return response

    

    def _format_sensor_data_for_llm(self, sensor_data: List[SensorReading]) -> str:

        """Format sensor data in a way that's easy for LLM to understand"""

        formatted_data = []

        

        for reading in sensor_data:

            formatted_data.append(

                f"- {reading.location} {reading.sensor_type}: {reading.value} {reading.unit} "

                f"(recorded at {reading.timestamp.strftime('%H:%M:%S')})"

            )

        

        return "\n".join(formatted_data)

    

    async def _call_llm(self, system_prompt: str, user_input: str) -> str:

        """Make API call to language model"""

        try:

            response = await self.llm_client.chat.completions.create(

                model="gpt-3.5-turbo",

                messages=[

                    {"role": "system", "content": system_prompt},

                    {"role": "user", "content": user_input}

                ],

                temperature=0.7,

                max_tokens=500

            )

            

            return response.choices[0].message.content

            

        except Exception as e:

            return f"Error processing request: {str(e)}"

    

    def _extract_commands_from_response(self, llm_response: str) -> List[DeviceCommand]:

        """Extract device commands from LLM response"""

        commands = []

        

        try:

            # Look for JSON in the response

            start_idx = llm_response.find('{')

            end_idx = llm_response.rfind('}') + 1

            

            if start_idx != -1 and end_idx > start_idx:

                json_str = llm_response[start_idx:end_idx]

                command_data = json.loads(json_str)

                

                for cmd in command_data.get("commands", []):

                    command = DeviceCommand(

                        device_id=cmd["device_id"],

                        device_type="",  # Will be filled from device manager

                        action=cmd["action"],

                        parameters=cmd["parameters"],

                        location=""  # Will be filled from device manager

                    )

                    commands.append(command)

        

        except (json.JSONDecodeError, KeyError):

            # No valid commands found in response

            pass

        

        return commands

    

    async def _execute_device_commands(self, commands: List[DeviceCommand]) -> None:

        """Execute all device commands"""

        for command in commands:

            success = await self.device_manager.execute_command(command)

            if success:

                print(f"Successfully executed command for {command.device_id}")

            else:

                print(f"Failed to execute command for {command.device_id}")


# Example usage and testing

async def demonstrate_system():

    """Demonstrate the LLM-IoT system with example interactions"""

    

    # Initialize the system (you would need a real OpenAI API key)

    system = LLMIoTInterface("your-openai-api-key-here")

    

    print("=== Smart Building LLM-IoT System Demo ===\n")

    

    # Example user requests

    test_requests = [

        "The conference room feels too warm. Can you make it more comfortable?",

        "What's the current temperature in the conference room?",

        "Set the conference room temperature to 72 degrees",

        "Make the conference room lighting brighter for a presentation"

    ]

    

    for request in test_requests:

        print(f"User: {request}")

        response = await system.process_natural_language_request(request)

        print(f"System: {response}\n")

        

        # Small delay between requests

        await asyncio.sleep(1)


if __name__ == "__main__":

    # Run the demonstration

    asyncio.run(demonstrate_system())


This implementation demonstrates several key concepts in LLM-IoT integration. The IoTDeviceManager class handles all communication with physical devices, abstracting the complexity of different device protocols and interfaces. The LLMIoTInterface class serves as the bridge between natural language processing and device control.

The system works by first gathering current sensor data to provide context to the language model. When a user makes a request, the LLM analyzes the input along with current system state and determines what actions are needed. The response includes both a natural language explanation for the user and structured commands for device execution.

The code keeps concerns separated in distinct classes: the device manager handles hardware details, while the interface class handles language processing and command extraction. Basic error handling around the LLM call and command parsing keeps the interface responsive even when a request cannot be fulfilled.


BENEFITS AND ADVANTAGES


Simplified User Interfaces


Traditional IoT systems often require users to learn complex interfaces, remember device-specific commands, or navigate through multiple applications. LLM integration eliminates these barriers by allowing users to interact with systems using natural language they already understand.

This simplification is particularly valuable in environments where multiple users need to interact with IoT systems but may have varying levels of technical expertise. A natural language interface ensures that everyone can effectively use the system regardless of their technical background.


Contextual Intelligence


LLMs excel at understanding context and making connections between different pieces of information. In IoT applications, this capability enables systems to make more intelligent decisions by considering multiple factors simultaneously.

For example, when adjusting building climate control, an LLM can consider not just current temperature readings but also weather forecasts, occupancy patterns, energy costs, and user preferences to optimize comfort while minimizing energy consumption.
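
The snippet below sketches how those factors might be packed into a single prompt so the model can weigh them together; the field names and example values are assumptions for illustration.

def build_climate_prompt(indoor_f, forecast_high_f, occupancy, energy_price, preference):
    """Bundle several context signals into one prompt for a climate-control decision."""
    return (
        f"Indoor temperature: {indoor_f} F. Outdoor forecast high: {forecast_high_f} F.\n"
        f"Current occupancy: {occupancy} people. Electricity price: {energy_price} $/kWh.\n"
        f"Occupant preference: {preference}.\n"
        "Recommend an HVAC setpoint for the next two hours that balances comfort "
        "and energy cost, and explain the trade-off briefly."
    )

print(build_climate_prompt(75.5, 88, 6, 0.31, "around 72 F during meetings"))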


Adaptive Learning


LLM-based systems can improve over time when interaction history is fed back into them, whether through fine-tuning, retrieval of past conversations, or preference data included in prompts. In IoT contexts, this makes systems more effective at understanding user preferences and anticipating needs based on historical patterns.

As users interact with the system, it accumulates a record of common requests, preferred settings, and typical usage patterns. Supplying this knowledge to the model enables proactive suggestions and more accurate interpretation of ambiguous requests.


CHALLENGES AND CONSIDERATIONS


Latency and Response Time


One significant challenge in LLM-IoT integration is managing response times. Many IoT applications require immediate responses to sensor inputs or user commands. Cloud-based LLM processing introduces network latency that may be unacceptable for time-critical applications.

Edge computing solutions can address this challenge by deploying smaller language models directly on local hardware. While these models may have reduced capabilities compared to large cloud-based systems, they can handle many common scenarios with minimal latency.


Privacy and Security


IoT systems often collect sensitive information about user behavior, environmental conditions, and operational patterns. When this data is processed by cloud-based LLMs, organizations must carefully consider privacy implications and data security requirements.

Local processing using edge-deployed models can help address privacy concerns by keeping sensitive data within the local environment. However, this approach may limit the sophistication of analysis and decision-making capabilities.


Reliability and Fault Tolerance


IoT systems often operate in environments where consistent internet connectivity cannot be guaranteed. LLM-dependent systems must be designed to maintain basic functionality even when language model services are unavailable.

Hybrid architectures that combine local rule-based systems with cloud-based LLM processing can provide this fault tolerance. Essential functions continue to operate using local logic while enhanced capabilities are available when connectivity permits.
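
The sketch below expresses that fallback in a hedged form: attempt the cloud LLM within a latency budget and degrade to local rule-based logic if the call fails or times out. The timeout value and both callables are placeholders for whatever client and rules a real deployment uses.

import asyncio

async def handle_request(text, cloud_llm, local_rules, cloud_timeout_s: float = 2.0) -> str:
    """Try the cloud LLM within a latency budget; fall back to local rules on failure."""
    try:
        return await asyncio.wait_for(cloud_llm(text), timeout=cloud_timeout_s)
    except (asyncio.TimeoutError, ConnectionError):
        # Cloud unreachable or too slow: degrade to deterministic local logic.
        return local_rules(text)

async def demo():
    async def slow_cloud_llm(text):   # stands in for a real cloud LLM call
        await asyncio.sleep(5)
        return "cloud answer"

    def local_rules(text):            # deterministic edge fallback
        return "turning lights off" if "lights" in text.lower() else "no local rule matched"

    print(await handle_request("turn the lights off", slow_cloud_llm, local_rules))

asyncio.run(demo())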


IMPLEMENTATION BEST PRACTICES


Start with Clear Use Cases


Successful LLM-IoT implementations begin with clearly defined use cases that demonstrate obvious value to users. Rather than trying to create a general-purpose system immediately, focus on specific scenarios where natural language interaction provides clear benefits over existing interfaces.

Examples of strong initial use cases include voice-controlled home automation for accessibility, natural language queries for industrial equipment status, and conversational interfaces for building management systems.


Design for Graceful Degradation


Systems should be architected to provide useful functionality even when some components are unavailable. This might mean implementing local fallback logic for common scenarios or providing alternative interaction methods when LLM services are inaccessible.

Users should always have a way to accomplish essential tasks, even if the enhanced LLM-powered features are temporarily unavailable.


Implement Comprehensive Logging


LLM-IoT systems involve complex interactions between multiple components. Comprehensive logging of user requests, LLM responses, device commands, and system state changes is essential for debugging issues and improving system performance.

This logging also provides valuable data for training and fine-tuning language models to better understand domain-specific terminology and user preferences.
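
A small sketch of that logging pattern is shown below: each interaction is written as one structured JSON record so user requests, model responses, and issued commands can be correlated later. The field names are illustrative.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
interaction_log = logging.getLogger("llm_iot.interactions")

def log_interaction(user_request, llm_response, commands, system_state):
    """Emit one structured record per user interaction for later analysis."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_request": user_request,
        "llm_response": llm_response,
        "commands": commands,          # device commands that were executed
        "system_state": system_state,  # snapshot of relevant sensor values
    }
    interaction_log.info(json.dumps(record))

log_interaction(
    "cool down the conference room",
    "Lowering the setpoint to 70 F.",
    [{"device_id": "hvac_conf_room", "action": "set_temperature", "target_temp": 70}],
    {"conference_room_temperature_f": 76.4},
)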


FUTURE DEVELOPMENTS


Edge AI Advancement


Continued improvements in edge computing hardware and model optimization techniques will enable deployment of increasingly sophisticated language models on local devices. This trend will reduce latency, improve privacy, and increase system reliability.

Specialized hardware designed for AI workloads, such as neural processing units and AI accelerators, will make edge deployment of capable language models more practical and cost-effective.


Multimodal Integration


Future LLM-IoT systems will likely integrate multiple input modalities including voice, text, images, and sensor data. This multimodal approach will enable more natural and intuitive interactions while providing richer context for decision-making.

For example, a user might point to a device while saying "turn that off," and the system would use computer vision to identify the target device and execute the appropriate command.


Federated Learning


Federated learning approaches will enable LLM-IoT systems to improve their capabilities by learning from distributed deployments while maintaining privacy. Individual systems can contribute to model improvement without sharing sensitive local data.

This approach will be particularly valuable for specialized applications where domain-specific knowledge is distributed across multiple installations.


CONCLUSION


The integration of Large Language Models with Internet of Things systems represents a significant advancement in human-computer interaction and automated decision-making. By enabling natural language interfaces for complex IoT environments, these systems make advanced automation accessible to users regardless of their technical expertise.

Successful implementations require careful consideration of architecture choices, performance requirements, and user needs. While challenges exist around latency, privacy, and reliability, established best practices and emerging technologies provide viable solutions for most applications.

As both LLM and IoT technologies continue to evolve, we can expect to see increasingly sophisticated systems that seamlessly blend digital intelligence with physical world interaction. The key to success lies in focusing on clear use cases, designing for reliability, and maintaining a user-centered approach to system development.

Organizations considering LLM-IoT integration should start with well-defined pilot projects that demonstrate clear value, then expand capabilities based on user feedback and operational experience. This incremental approach minimizes risk while building the expertise needed for larger-scale deployments.


COMPLETE WORKING EXAMPLE


#!/usr/bin/env python3

"""

Complete LLM-IoT Smart Building Management System

A fully functional example demonstrating natural language control of IoT devices
"""


import json

import asyncio

import logging

from datetime import datetime, timedelta

from typing import Dict, List, Any, Optional

from dataclasses import dataclass

from enum import Enum

import sqlite3



# Configure logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

logger = logging.getLogger(__name__)


class DeviceType(Enum):

    """Enumeration of supported device types"""

    HVAC = "hvac"

    LIGHTING = "lighting"

    SECURITY = "security"

    SENSOR = "sensor"


class SensorType(Enum):

    """Enumeration of supported sensor types"""

    TEMPERATURE = "temperature"

    HUMIDITY = "humidity"

    OCCUPANCY = "occupancy"

    LIGHT_LEVEL = "light_level"


@dataclass

class SensorReading:

    """Represents a sensor reading with full metadata"""

    sensor_id: str

    sensor_type: SensorType

    value: float

    unit: str

    timestamp: datetime

    location: str

    device_id: str

    

    def to_dict(self) -> Dict[str, Any]:

        """Convert to dictionary for JSON serialization"""

        return {

            'sensor_id': self.sensor_id,

            'sensor_type': self.sensor_type.value,

            'value': self.value,

            'unit': self.unit,

            'timestamp': self.timestamp.isoformat(),

            'location': self.location,

            'device_id': self.device_id

        }


@dataclass

class DeviceCommand:

    """Represents a command to be executed on an IoT device"""

    device_id: str

    device_type: DeviceType

    action: str

    parameters: Dict[str, Any]

    location: str

    timestamp: datetime

    

    def to_dict(self) -> Dict[str, Any]:

        """Convert to dictionary for JSON serialization"""

        return {

            'device_id': self.device_id,

            'device_type': self.device_type.value,

            'action': self.action,

            'parameters': self.parameters,

            'location': self.location,

            'timestamp': self.timestamp.isoformat()

        }


class DatabaseManager:

    """Manages persistent storage for sensor data and device commands"""

    

    def __init__(self, db_path: str = "smart_building.db"):

        self.db_path = db_path

        self.init_database()

    

    def init_database(self):

        """Initialize database tables"""

        conn = sqlite3.connect(self.db_path)

        cursor = conn.cursor()

        

        # Create sensor readings table

        cursor.execute('''

            CREATE TABLE IF NOT EXISTS sensor_readings (

                id INTEGER PRIMARY KEY AUTOINCREMENT,

                sensor_id TEXT NOT NULL,

                sensor_type TEXT NOT NULL,

                value REAL NOT NULL,

                unit TEXT NOT NULL,

                timestamp TEXT NOT NULL,

                location TEXT NOT NULL,

                device_id TEXT NOT NULL

            )

        ''')

        

        # Create device commands table

        cursor.execute('''

            CREATE TABLE IF NOT EXISTS device_commands (

                id INTEGER PRIMARY KEY AUTOINCREMENT,

                device_id TEXT NOT NULL,

                device_type TEXT NOT NULL,

                action TEXT NOT NULL,

                parameters TEXT NOT NULL,

                location TEXT NOT NULL,

                timestamp TEXT NOT NULL,

                executed BOOLEAN DEFAULT FALSE

            )

        ''')

        

        conn.commit()

        conn.close()

    

    def store_sensor_reading(self, reading: SensorReading):

        """Store a sensor reading in the database"""

        conn = sqlite3.connect(self.db_path)

        cursor = conn.cursor()

        

        cursor.execute('''

            INSERT INTO sensor_readings 

            (sensor_id, sensor_type, value, unit, timestamp, location, device_id)

            VALUES (?, ?, ?, ?, ?, ?, ?)

        ''', (

            reading.sensor_id,

            reading.sensor_type.value,

            reading.value,

            reading.unit,

            reading.timestamp.isoformat(),

            reading.location,

            reading.device_id

        ))

        

        conn.commit()

        conn.close()

    

    def store_device_command(self, command: DeviceCommand):

        """Store a device command in the database"""

        conn = sqlite3.connect(self.db_path)

        cursor = conn.cursor()

        

        cursor.execute('''

            INSERT INTO device_commands 

            (device_id, device_type, action, parameters, location, timestamp)

            VALUES (?, ?, ?, ?, ?, ?)

        ''', (

            command.device_id,

            command.device_type.value,

            command.action,

            json.dumps(command.parameters),

            command.location,

            command.timestamp.isoformat()

        ))

        

        conn.commit()

        conn.close()

    

    def get_recent_readings(self, hours: int = 24, location: str = None) -> List[SensorReading]:

        """Retrieve recent sensor readings"""

        conn = sqlite3.connect(self.db_path)

        cursor = conn.cursor()

        

        cutoff_time = datetime.now() - timedelta(hours=hours)

        

        if location:

            cursor.execute('''

                SELECT * FROM sensor_readings 

                WHERE timestamp > ? AND location = ?

                ORDER BY timestamp DESC

            ''', (cutoff_time.isoformat(), location))

        else:

            cursor.execute('''

                SELECT * FROM sensor_readings 

                WHERE timestamp > ?

                ORDER BY timestamp DESC

            ''', (cutoff_time.isoformat(),))

        

        readings = []

        for row in cursor.fetchall():

            reading = SensorReading(

                sensor_id=row[1],

                sensor_type=SensorType(row[2]),

                value=row[3],

                unit=row[4],

                timestamp=datetime.fromisoformat(row[5]),

                location=row[6],

                device_id=row[7]

            )

            readings.append(reading)

        

        conn.close()

        return readings


class IoTDevice:

    """Base class for IoT devices"""

    

    def __init__(self, device_id: str, device_type: DeviceType, location: str):

        self.device_id = device_id

        self.device_type = device_type

        self.location = location

        self.is_online = True

        self.last_communication = datetime.now()

    

    async def execute_command(self, action: str, parameters: Dict[str, Any]) -> bool:

        """Execute a command on this device"""

        raise NotImplementedError("Subclasses must implement execute_command")

    

    def get_status(self) -> Dict[str, Any]:

        """Get current device status"""

        return {

            'device_id': self.device_id,

            'device_type': self.device_type.value,

            'location': self.location,

            'is_online': self.is_online,

            'last_communication': self.last_communication.isoformat()

        }


class HVACDevice(IoTDevice):

    """HVAC system device implementation"""

    

    def __init__(self, device_id: str, location: str):

        super().__init__(device_id, DeviceType.HVAC, location)

        self.target_temperature = 72.0

        self.current_temperature = 72.0

        self.target_humidity = 45.0

        self.current_humidity = 45.0

        self.is_heating = False

        self.is_cooling = False

    

    async def execute_command(self, action: str, parameters: Dict[str, Any]) -> bool:

        """Execute HVAC command"""

        try:

            if action == "set_temperature":

                self.target_temperature = float(parameters.get("target_temp", self.target_temperature))

                logger.info(f"HVAC {self.device_id}: Set target temperature to {self.target_temperature}°F")

                

                # Simulate temperature adjustment

                if self.target_temperature > self.current_temperature:

                    self.is_heating = True

                    self.is_cooling = False

                elif self.target_temperature < self.current_temperature:

                    self.is_heating = False

                    self.is_cooling = True

                else:

                    self.is_heating = False

                    self.is_cooling = False

                

                return True

                

            elif action == "set_humidity":

                self.target_humidity = float(parameters.get("target_humidity", self.target_humidity))

                logger.info(f"HVAC {self.device_id}: Set target humidity to {self.target_humidity}%")

                return True

                

            elif action == "turn_off":

                self.is_heating = False

                self.is_cooling = False

                logger.info(f"HVAC {self.device_id}: System turned off")

                return True

                

        except Exception as e:

            logger.error(f"Error executing HVAC command: {e}")

            return False

        

        return False

    

    def get_status(self) -> Dict[str, Any]:

        """Get HVAC status"""

        status = super().get_status()

        status.update({

            'target_temperature': self.target_temperature,

            'current_temperature': self.current_temperature,

            'target_humidity': self.target_humidity,

            'current_humidity': self.current_humidity,

            'is_heating': self.is_heating,

            'is_cooling': self.is_cooling

        })

        return status


class LightingDevice(IoTDevice):

    """Smart lighting device implementation"""

    

    def __init__(self, device_id: str, location: str):

        super().__init__(device_id, DeviceType.LIGHTING, location)

        self.brightness = 50  # 0-100

        self.color_temperature = 3000  # Kelvin

        self.is_on = True

    

    async def execute_command(self, action: str, parameters: Dict[str, Any]) -> bool:

        """Execute lighting command"""

        try:

            if action == "set_brightness":

                self.brightness = max(0, min(100, int(parameters.get("brightness", self.brightness))))

                logger.info(f"Lighting {self.device_id}: Set brightness to {self.brightness}%")

                return True

                

            elif action == "set_color_temperature":

                self.color_temperature = int(parameters.get("color_temp", self.color_temperature))

                logger.info(f"Lighting {self.device_id}: Set color temperature to {self.color_temperature}K")

                return True

                

            elif action == "turn_on":

                self.is_on = True

                logger.info(f"Lighting {self.device_id}: Turned on")

                return True

                

            elif action == "turn_off":

                self.is_on = False

                logger.info(f"Lighting {self.device_id}: Turned off")

                return True

                

        except Exception as e:

            logger.error(f"Error executing lighting command: {e}")

            return False

        

        return False

    

    def get_status(self) -> Dict[str, Any]:

        """Get lighting status"""

        status = super().get_status()

        status.update({

            'brightness': self.brightness,

            'color_temperature': self.color_temperature,

            'is_on': self.is_on

        })

        return status


class SensorDevice(IoTDevice):

    """Multi-sensor device implementation"""

    

    def __init__(self, device_id: str, location: str):

        super().__init__(device_id, DeviceType.SENSOR, location)

        self.temperature = 72.0

        self.humidity = 45.0

        self.occupancy = False

        self.light_level = 300  # lux

        self.reading_interval = 30  # seconds

        self.is_reading = False

    

    async def execute_command(self, action: str, parameters: Dict[str, Any]) -> bool:

        """Execute sensor command"""

        try:

            if action == "start_reading":

                self.is_reading = True

                self.reading_interval = int(parameters.get("interval", self.reading_interval))

                logger.info(f"Sensor {self.device_id}: Started readings every {self.reading_interval}s")

                return True

                

            elif action == "stop_reading":

                self.is_reading = False

                logger.info(f"Sensor {self.device_id}: Stopped readings")

                return True

                

        except Exception as e:

            logger.error(f"Error executing sensor command: {e}")

            return False

        

        return False

    

    def get_current_readings(self) -> List[SensorReading]:

        """Get current sensor readings"""

        timestamp = datetime.now()

        readings = []

        

        # Simulate some variation in readings

        import random

        temp_variation = random.uniform(-1.0, 1.0)

        humidity_variation = random.uniform(-2.0, 2.0)

        

        readings.append(SensorReading(

            sensor_id=f"{self.device_id}_temp",

            sensor_type=SensorType.TEMPERATURE,

            value=self.temperature + temp_variation,

            unit="fahrenheit",

            timestamp=timestamp,

            location=self.location,

            device_id=self.device_id

        ))

        

        readings.append(SensorReading(

            sensor_id=f"{self.device_id}_humidity",

            sensor_type=SensorType.HUMIDITY,

            value=self.humidity + humidity_variation,

            unit="percent",

            timestamp=timestamp,

            location=self.location,

            device_id=self.device_id

        ))

        

        readings.append(SensorReading(

            sensor_id=f"{self.device_id}_occupancy",

            sensor_type=SensorType.OCCUPANCY,

            value=1.0 if self.occupancy else 0.0,

            unit="boolean",

            timestamp=timestamp,

            location=self.location,

            device_id=self.device_id

        ))

        

        readings.append(SensorReading(

            sensor_id=f"{self.device_id}_light",

            sensor_type=SensorType.LIGHT_LEVEL,

            value=self.light_level,

            unit="lux",

            timestamp=timestamp,

            location=self.location,

            device_id=self.device_id

        ))

        

        return readings


class IoTDeviceManager:

    """Manages all IoT devices and their interactions"""

    

    def __init__(self, db_manager: DatabaseManager):

        self.devices: Dict[str, IoTDevice] = {}

        self.db_manager = db_manager

        self.sensor_reading_task = None

        self.is_running = False

        

        # Initialize default devices

        self._initialize_default_devices()

    

    def _initialize_default_devices(self):

        """Initialize a set of default devices for demonstration"""

        # Conference room devices

        self.add_device(HVACDevice("hvac_conf_room", "conference_room"))

        self.add_device(LightingDevice("lights_conf_room", "conference_room"))

        self.add_device(SensorDevice("sensors_conf_room", "conference_room"))

        

        # Office devices

        self.add_device(HVACDevice("hvac_office_main", "main_office"))

        self.add_device(LightingDevice("lights_office_main", "main_office"))

        self.add_device(SensorDevice("sensors_office_main", "main_office"))

        

        # Lobby devices

        self.add_device(LightingDevice("lights_lobby", "lobby"))

        self.add_device(SensorDevice("sensors_lobby", "lobby"))

    

    def add_device(self, device: IoTDevice):

        """Add a device to the manager"""

        self.devices[device.device_id] = device

        logger.info(f"Added device: {device.device_id} ({device.device_type.value}) in {device.location}")

    

    def get_device(self, device_id: str) -> Optional[IoTDevice]:

        """Get a device by ID"""

        return self.devices.get(device_id)

    

    def get_devices_by_location(self, location: str) -> List[IoTDevice]:

        """Get all devices in a specific location"""

        return [device for device in self.devices.values() if device.location == location]

    

    def get_devices_by_type(self, device_type: DeviceType) -> List[IoTDevice]:

        """Get all devices of a specific type"""

        return [device for device in self.devices.values() if device.device_type == device_type]

    

    async def execute_command(self, command: DeviceCommand) -> bool:

        """Execute a command on the specified device"""

        device = self.get_device(command.device_id)

        if not device:

            logger.error(f"Device not found: {command.device_id}")

            return False

        

        success = await device.execute_command(command.action, command.parameters)

        

        if success:

            # Store command in database

            self.db_manager.store_device_command(command)

            logger.info(f"Command executed successfully on {command.device_id}")

        else:

            logger.error(f"Failed to execute command on {command.device_id}")

        

        return success

    

    async def get_sensor_data(self, location: str = None) -> List[SensorReading]:

        """Get current sensor readings from all sensor devices"""

        readings = []

        

        sensor_devices = self.get_devices_by_type(DeviceType.SENSOR)

        

        for device in sensor_devices:

            if location is None or device.location == location:

                if isinstance(device, SensorDevice):

                    device_readings = device.get_current_readings()

                    readings.extend(device_readings)

                    

                    # Store readings in database

                    for reading in device_readings:

                        self.db_manager.store_sensor_reading(reading)

        

        return readings

    

    def start_sensor_monitoring(self):

        """Start continuous sensor monitoring"""

        if not self.is_running:

            self.is_running = True

            self.sensor_reading_task = asyncio.create_task(self._sensor_monitoring_loop())

            logger.info("Started sensor monitoring")

    

    def stop_sensor_monitoring(self):

        """Stop sensor monitoring"""

        self.is_running = False

        if self.sensor_reading_task:

            self.sensor_reading_task.cancel()

        logger.info("Stopped sensor monitoring")

    

    async def _sensor_monitoring_loop(self):

        """Continuous sensor monitoring loop"""

        while self.is_running:

            try:

                # Get readings from all sensors

                await self.get_sensor_data()

                await asyncio.sleep(30)  # Read every 30 seconds

            except asyncio.CancelledError:

                break

            except Exception as e:

                logger.error(f"Error in sensor monitoring loop: {e}")

                await asyncio.sleep(5)

    

    def get_system_status(self) -> Dict[str, Any]:

        """Get status of all devices"""

        status = {

            'device_count': len(self.devices),

            'devices': {},

            'locations': set(),

            'device_types': {}

        }

        

        for device_id, device in self.devices.items():

            status['devices'][device_id] = device.get_status()

            status['locations'].add(device.location)

            

            device_type = device.device_type.value

            if device_type not in status['device_types']:

                status['device_types'][device_type] = 0

            status['device_types'][device_type] += 1

        

        status['locations'] = list(status['locations'])

        

        return status


class MockLLMProcessor:

    """Mock LLM processor for demonstration purposes"""

    

    def __init__(self):

        self.conversation_history = []

    

    async def process_request(self, user_input: str, system_context: str) -> str:

        """Process user request and return response with commands"""

        # Store conversation

        self.conversation_history.append({

            'timestamp': datetime.now().isoformat(),

            'user_input': user_input,

            'system_context': system_context

        })

        

        # Simple rule-based processing for demonstration

        user_input_lower = user_input.lower()

        

        if "temperature" in user_input_lower and "conference room" in user_input_lower:

            if "warm" in user_input_lower or "hot" in user_input_lower:

                return self._generate_cooling_response()

            elif "cold" in user_input_lower or "cool" in user_input_lower:

                return self._generate_heating_response()

            elif "set" in user_input_lower:

                return self._generate_temperature_set_response(user_input)

        

        elif "lights" in user_input_lower or "lighting" in user_input_lower:

            if "bright" in user_input_lower or "brighter" in user_input_lower:

                return self._generate_brightness_response(80)

            elif "dim" in user_input_lower or "dimmer" in user_input_lower:

                return self._generate_brightness_response(30)

            elif "off" in user_input_lower:

                return self._generate_lights_off_response()

            elif "on" in user_input_lower:

                return self._generate_lights_on_response()

        

        elif "status" in user_input_lower or "current" in user_input_lower:

            return self._generate_status_response()

        

        return "I understand you want to control the building systems, but I need more specific information about what you'd like me to do."

    

    def _generate_cooling_response(self) -> str:

        """Generate response for cooling request"""

        return '''I understand the conference room feels too warm. I'll lower the temperature to make it more comfortable.


{

    "commands": [

        {

            "device_id": "hvac_conf_room",

            "action": "set_temperature",

            "parameters": {"target_temp": 70}

        }

    ]

}'''

    

    def _generate_heating_response(self) -> str:

        """Generate response for heating request"""

        return '''I understand the conference room feels too cold. I'll raise the temperature to make it more comfortable.


{

    "commands": [

        {

            "device_id": "hvac_conf_room", 

            "action": "set_temperature",

            "parameters": {"target_temp": 75}

        }

    ]

}'''

    

    def _generate_temperature_set_response(self, user_input: str) -> str:

        """Generate response for specific temperature setting"""

        # Extract temperature from input

        import re

        temp_match = re.search(r'(\d+)', user_input)

        target_temp = int(temp_match.group(1)) if temp_match else 72

        

        return f'''I'll set the conference room temperature to {target_temp} degrees.


{{

    "commands": [

        {{

            "device_id": "hvac_conf_room",

            "action": "set_temperature", 

            "parameters": {{"target_temp": {target_temp}}}

        }}

    ]

}}'''

    

    def _generate_brightness_response(self, brightness: int) -> str:

        """Generate response for brightness adjustment"""

        return f'''I'll adjust the conference room lighting to {brightness}% brightness.


{{

    "commands": [

        {{

            "device_id": "lights_conf_room",

            "action": "set_brightness",

            "parameters": {{"brightness": {brightness}}}

        }}

    ]

}}'''

    

    def _generate_lights_off_response(self) -> str:

        """Generate response for turning lights off"""

        return '''I'll turn off the conference room lights.


{

    "commands": [

        {

            "device_id": "lights_conf_room",

            "action": "turn_off",

            "parameters": {}

        }

    ]

}'''

    

    def _generate_lights_on_response(self) -> str:

        """Generate response for turning lights on"""

        return '''I'll turn on the conference room lights.


{

    "commands": [

        {

            "device_id": "lights_conf_room",

            "action": "turn_on",

            "parameters": {}

        }

    ]

}'''

    

    def _generate_status_response(self) -> str:

        """Generate response for status request"""

        return "Let me check the current status of all systems. Based on the latest sensor readings, I can provide you with a comprehensive overview of the building conditions."


class SmartBuildingSystem:

    """Main system that integrates LLM processing with IoT device management"""

    

    def __init__(self):

        self.db_manager = DatabaseManager()

        self.device_manager = IoTDeviceManager(self.db_manager)

        self.llm_processor = MockLLMProcessor()

        

        # Start sensor monitoring

        self.device_manager.start_sensor_monitoring()

    

    async def process_user_request(self, user_input: str) -> str:

        """Process a natural language request from the user"""

        try:

            # Get current sensor data for context

            sensor_data = await self.device_manager.get_sensor_data()

            system_status = self.device_manager.get_system_status()

            

            # Create context for LLM

            context = self._create_system_context(sensor_data, system_status)

            

            # Process with LLM

            llm_response = await self.llm_processor.process_request(user_input, context)

            

            # Extract and execute commands

            commands = self._extract_commands_from_response(llm_response)

            

            if commands:

                execution_results = []

                for command in commands:

                    success = await self.device_manager.execute_command(command)

                    execution_results.append(f"{'✓' if success else '✗'} {command.device_id}: {command.action}")

                

                # Add execution status to response

                if execution_results:

                    llm_response += f"\n\nExecution Status:\n" + "\n".join(execution_results)

            

            return llm_response

            

        except Exception as e:

            logger.error(f"Error processing user request: {e}")

            return f"I apologize, but I encountered an error while processing your request: {str(e)}"

    

    def _create_system_context(self, sensor_data: List[SensorReading], system_status: Dict[str, Any]) -> str:

        """Create context string for LLM processing"""

        context_parts = []

        

        # Add sensor data

        if sensor_data:

            context_parts.append("Current Sensor Readings:")

            for reading in sensor_data[-10:]:  # Last 10 readings

                context_parts.append(

                    f"  {reading.location} {reading.sensor_type.value}: {reading.value} {reading.unit}"

                )

        

        # Add device status

        context_parts.append("\nAvailable Devices:")

        for device_id, device_info in system_status['devices'].items():

            context_parts.append(

                f"  {device_id} ({device_info['device_type']}) in {device_info['location']}"

            )

        

        return "\n".join(context_parts)

    

    def _extract_commands_from_response(self, llm_response: str) -> List[DeviceCommand]:

        """Extract device commands from LLM response"""

        commands = []

        

        try:

            # Look for JSON in the response

            start_idx = llm_response.find('{')

            end_idx = llm_response.rfind('}') + 1

            

            if start_idx != -1 and end_idx > start_idx:

                json_str = llm_response[start_idx:end_idx]

                command_data = json.loads(json_str)

                

                for cmd in command_data.get("commands", []):

                    device = self.device_manager.get_device(cmd["device_id"])

                    if device:

                        command = DeviceCommand(

                            device_id=cmd["device_id"],

                            device_type=device.device_type,

                            action=cmd["action"],

                            parameters=cmd["parameters"],

                            location=device.location,

                            timestamp=datetime.now()

                        )

                        commands.append(command)

        

        except (json.JSONDecodeError, KeyError) as e:

            logger.warning(f"Could not extract commands from response: {e}")

        

        return commands

    

    def get_system_overview(self) -> Dict[str, Any]:

        """Get comprehensive system overview"""

        system_status = self.device_manager.get_system_status()

        recent_readings = self.db_manager.get_recent_readings(hours=1)

        

        overview = {

            'system_status': system_status,

            'recent_sensor_data': [reading.to_dict() for reading in recent_readings],

            'timestamp': datetime.now().isoformat()

        }

        

        return overview

    

    async def shutdown(self):

        """Gracefully shutdown the system"""

        logger.info("Shutting down Smart Building System")

        self.device_manager.stop_sensor_monitoring()


async def main():

    """Main demonstration function"""

    print("=" * 80)

    print("                    SMART BUILDING MANAGEMENT SYSTEM")

    print("                     LLM-IoT Integration Demo")

    print("=" * 80)

    

    # Initialize system

    system = SmartBuildingSystem()

    

    # Wait a moment for sensor monitoring to start

    await asyncio.sleep(2)

    

    # Demonstration requests

    demo_requests = [

        "What's the current temperature in the conference room?",

        "The conference room is too warm, can you cool it down?",

        "Set the conference room temperature to 68 degrees",

        "Make the conference room lights brighter for a presentation",

        "Turn off the conference room lights",

        "Turn the conference room lights back on",

        "Show me the status of all systems"

    ]

    

    print("\nProcessing demonstration requests...\n")

    

    for i, request in enumerate(demo_requests, 1):

        print(f"Request {i}: {request}")

        print("-" * 60)

        

        response = await system.process_user_request(request)

        print(f"System Response:\n{response}\n")

        

        # Small delay between requests

        await asyncio.sleep(2)

    

    # Show system overview

    print("=" * 80)

    print("SYSTEM OVERVIEW")

    print("=" * 80)

    

    overview = system.get_system_overview()

    print(f"Total Devices: {overview['system_status']['device_count']}")

    print(f"Locations: {', '.join(overview['system_status']['locations'])}")

    print(f"Recent Sensor Readings: {len(overview['recent_sensor_data'])}")

    

    # Shutdown

    await system.shutdown()

    print("\nDemo completed successfully!")


if __name__ == "__main__":

    asyncio.run(main())


This complete working example demonstrates an end-to-end LLM-IoT integration pattern. The code includes device management, SQLite persistence, continuous sensor monitoring, and a natural language front end; the MockLLMProcessor stands in for a real language model so the demo runs without an API key, and it can be swapped for an actual LLM client. Error handling and logging keep the system stable when individual commands fail.

The example provides a foundation that can be extended with additional device types, a production LLM integration, and richer user interfaces, while keeping device control, persistence, and language processing in clearly separated, testable components.
