Tuesday, February 17, 2026

A MULTI-AGENT LLM APPLICATION FOR DESIGN PATTERN DETECTION AND REFACTORING: AN IMPLEMENTATION DEEP DIVE



INTRODUCTION


The software development landscape is constantly evolving, demanding not only functional code but also code that is maintainable, extensible, and robust. Design patterns provide proven solutions to recurring software design problems, yet their consistent application and identification within existing codebases remain a significant challenge. This article delves into the practical implementation of a multi-agent Large Language Model (LLM) application engineered to address this challenge. This system is designed to intelligently scan code for both the presence of established design patterns and opportunities to introduce them, particularly where boilerplate code could be refactored into more elegant, pattern-based solutions. We will explore the architectural components, the underlying technologies, and the intricate logic that enables this intelligent assistant to detect, propose, and even apply design patterns, thereby significantly enhancing code quality and developer productivity.


THE VISION: AN INTELLIGENT CODE ASSISTANT

The fundamental objective of this application is to transcend traditional static analysis by employing LLMs for semantic code understanding. This intelligent assistant is built to act as a proactive guardian of code quality, continuously analyzing the codebase for structural integrity and adherence to established design principles. The implementation aims to deliver several key benefits: improving code quality by advocating for well-understood patterns, enhancing maintainability through the reduction of complexity and boilerplate, and ultimately mitigating technical debt by proactively suggesting and implementing design improvements. Such a system empowers developers to concentrate on core business logic, offloading the cognitive burden associated with continuous code refinement and architectural oversight.


ARCHITECTING THE MULTI-AGENT LLM SYSTEM

To realize this vision, a multi-agent architecture is employed, where specialized LLM agents collaborate to perform distinct tasks within the overall workflow. Each agent is implemented with a specific area of expertise, fostering modularity, scalability, and improved accuracy in its designated role. The interaction between these agents is orchestrated to ensure a seamless and intelligent process, from the initial ingestion of code to the final output of refactored code.


Here is a conceptual diagram illustrating the interaction between the agents and their primary data flows:


+---------------------+       +---------------------+       +---------------------+
|  ORCHESTRATOR AGENT |------>| CODE PARSER/        |------>| PATTERN RECOGNITION |
|  (Central Control)  |<------| ANALYZER AGENT      |<------| AGENT               |
|  (State Management) |       | (AST, CFG, DFG)     |       | (LLM & Graph Match) |
+---------------------+       +---------------------+       +---------------------+
          |                             ^                             |
          V                             |                             V
+---------------------+       +---------------------+       +---------------------+
|  USER INTERACTION   |<------| REFACTORING PROPOSAL|------>| SEQUENCING AGENT    |
|  AGENT              |       | AGENT               |<------| (Dependency Graph)  |
|  (CLI/API)          |       | (LLM Reasoning)     |       |                     |
+---------------------+       +---------------------+       +---------------------+
          |                                                           |
          +---------------------------+-------------------------------+
                                      V
                        +---------------------+
                        |  CODE TRANSFORMATION|
                        |  AGENT              |
                        |  (AST Manipulation, |
                        |   LLM-assisted Gen) |
                        +---------------------+


Let us explore the implementation details of each constituent agent.


AGENT 1: THE ORCHESTRATOR - GUIDING THE WORKFLOW IMPLEMENTATION

The Orchestrator Agent is implemented as the central control plane of the multi-agent system. Its core implementation involves a state machine and a message-passing mechanism to manage the flow of information and control between the various specialized agents.


Implementation Details:

  • State Management: The Orchestrator maintains the overall state of a code analysis and refactoring session. This state includes the original code, the current intermediate representation (IR), a list of detected patterns, a queue of proposed refactorings, and the user's decisions. This state is typically persisted in a lightweight in-memory database or a simple dictionary structure for short-lived sessions, or a more robust database for long-running analyses.
  • Message Queue Integration: For inter-agent communication, a message queue system (e.g., Redis Pub/Sub, a simple Python `queue` module for local execution, or a more robust solution like RabbitMQ for distributed systems) is employed. The Orchestrator publishes tasks to specific agent queues and subscribes to response queues.
  • Workflow Logic: The Orchestrator's logic is implemented as a series of steps, each triggering a specific agent and waiting for its response. For instance, upon receiving initial code, it sends a "PARSE_CODE" message to the Code Parser Agent, then upon receiving the parsed AST, it sends "DETECT_PATTERNS" to the Pattern Recognition Agent, and so forth.
  • LLM Assistance (Optional but powerful): While the core workflow can be rule-based, for complex decisions like initial triage or dynamic adjustment of the workflow based on early findings, a small LLM component within the Orchestrator can be used. This LLM would be prompted with the current state and agent responses to decide the next best action or to re-prioritize tasks.
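For local execution, the message queue mentioned above can be stubbed out with the standard library's `queue` module. The `MessageBus` class below is a minimal sketch for illustration (its name and its `publish`/`wait_for_response` methods are assumptions, chosen to match the orchestrator snippet that follows); a distributed deployment would swap Redis Pub/Sub or RabbitMQ in behind the same two-method interface.

```python
import queue
from collections import defaultdict


class MessageBus:
    """Minimal in-process stand-in for the inter-agent message queue.

    A sketch under assumptions: named channels backed by queue.Queue,
    with a blocking read in wait_for_response. Not a real broker.
    """

    def __init__(self):
        # One FIFO queue per logical channel, created on first use.
        self._queues = defaultdict(queue.Queue)

    def publish(self, queue_name, message):
        self._queues[queue_name].put(message)

    def wait_for_response(self, queue_name, timeout=None):
        # Blocks until a message arrives; raises queue.Empty on timeout.
        return self._queues[queue_name].get(timeout=timeout)
```

Because every agent sees only `publish` and `wait_for_response`, the transport can be upgraded without touching agent code.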


Example Orchestrator Logic Snippet (simplified Python pseudo-code):


class Orchestrator:

    def __init__(self):

        self.session_state = {} # Stores current code, AST, proposals, etc.

        self.message_bus = MessageBus() # Interface to message queue


    def start_analysis(self, code_input):

        self.session_state['original_code'] = code_input

        self.session_state['current_code'] = code_input # Code might change

        self.session_state['refactoring_proposals'] = []

        self.session_state['applied_patterns'] = []


        # Step 1: Parse and Analyze Code

        self.message_bus.publish('parser_agent_queue', {'task': 'parse', 'code': code_input})

        parsed_data = self.message_bus.wait_for_response('orchestrator_response_queue')

        self.session_state['ast'] = parsed_data['ast']

        self.session_state['cfg'] = parsed_data['cfg']

        # ... store other parsed data


        # Step 2: Detect existing patterns

        self.message_bus.publish('pattern_recognition_queue', {'task': 'detect_existing', 'ast': self.session_state['ast']})

        detected_patterns = self.message_bus.wait_for_response('orchestrator_response_queue')

        self.session_state['detected_patterns'] = detected_patterns['patterns']

        print(f"Detected existing patterns: {self.session_state['detected_patterns']}")


        # Step 3: Propose refactorings

        self.message_bus.publish('refactoring_proposal_queue', {'task': 'propose', 'ast': self.session_state['ast'], 'cfg': self.session_state['cfg']})

        proposals = self.message_bus.wait_for_response('orchestrator_response_queue')

        self.session_state['refactoring_proposals'].extend(proposals['proposals'])


        self._handle_proposals()


    def _handle_proposals(self):

        if not self.session_state['refactoring_proposals']:

            print("No refactoring opportunities found or proposed.")

            return


        # Step 4: User Interaction

        self.message_bus.publish('user_interaction_queue', {'task': 'present_proposals', 'proposals': self.session_state['refactoring_proposals']})

        user_decision = self.message_bus.wait_for_response('orchestrator_response_queue')


        if user_decision['action'] == 'approve':

            approved_proposals = user_decision['approved_proposals']

            # Step 5: Sequence multiple approved patterns

            if len(approved_proposals) > 1:

                self.message_bus.publish('sequencing_agent_queue', {'task': 'sequence', 'proposals': approved_proposals})

                sequenced_proposals = self.message_bus.wait_for_response('orchestrator_response_queue')

                approved_proposals = sequenced_proposals['ordered_proposals']


            for proposal in approved_proposals:

                print(f"Applying pattern: {proposal['pattern_name']}")

                self.message_bus.publish('code_transformation_queue', {

                    'task': 'apply_pattern',

                    'pattern_details': proposal,

                    'current_ast': self.session_state['ast']

                })

                transformed_code_data = self.message_bus.wait_for_response('orchestrator_response_queue')

                self.session_state['current_code'] = transformed_code_data['refactored_code']

                self.session_state['ast'] = transformed_code_data['new_ast'] # Update AST for next step

                self.session_state['applied_patterns'].append(proposal['pattern_name'])

                print(f"Refactored code after '{proposal['pattern_name']}':\n{self.session_state['current_code']}")


                # After each application, re-evaluate for new opportunities or existing patterns

                # This creates an iterative refinement loop

                print("Re-evaluating code after refactoring for new opportunities...")

                self.session_state['refactoring_proposals'] = [] # Clear previous proposals

                self.message_bus.publish('refactoring_proposal_queue', {'task': 'propose', 'ast': self.session_state['ast']})

                new_proposals = self.message_bus.wait_for_response('orchestrator_response_queue')

                self.session_state['refactoring_proposals'].extend(new_proposals['proposals'])

                self._handle_proposals() # Recursively handle new proposals


        else:

            print("User declined refactoring.")


AGENT 2: THE CODE PARSER AND ANALYZER - UNDERSTANDING THE CODE'S DNA IMPLEMENTATION

The Code Parser and Analyzer Agent is implemented using robust static analysis libraries specific to the target programming language. For Python, the built-in `ast` module is invaluable, augmented by custom graph generation logic.


Implementation Details:

  • AST Generation: For Python, the `ast.parse()` function is used to convert source code into an Abstract Syntax Tree. This tree is then traversed to extract detailed information.
  • Symbol Table Construction: During AST traversal, a custom symbol table (a dictionary mapping names to their properties like type, scope, definition location) is built. This involves tracking variable assignments, function/class definitions, and their respective scopes.
  • Control Flow Graph (CFG) Generation: A CFG is constructed by iterating through the AST nodes, identifying control flow constructs (e.g., `if`, `for`, `while`, `def`, `return`). Each basic block (a sequence of statements with one entry and one exit point) becomes a node, and control transfers (jumps, calls) become edges. Libraries like `cfg_builder` (for Python) can assist, or it can be a custom implementation based on AST traversal.
  • Data Flow Analysis (DFA): DFA is implemented by traversing the CFG and tracking definitions and uses of variables. This can involve algorithms like reaching definitions, live variable analysis, or def-use chains. This helps in understanding how data propagates and changes throughout the program.
  • Intermediate Representation (IR): The output of this agent is a rich, language-agnostic IR, typically a JSON or Protocol Buffer serialization of the AST, CFG, DFA results, and symbol table. This standardized IR facilitates consumption by other agents, allowing for a decoupled architecture.


Example Python AST parsing and initial symbol table construction:


import ast


class CodeParserAnalyzer:

    def __init__(self):

        self.symbol_table = {}

        self.ast_tree = None

        self.cfg_graph = None # Placeholder for CFG

        self.dfa_results = None # Placeholder for DFA


    def parse_code(self, code_string):

        """Parses code, builds AST, and populates symbol table."""

        try:

            self.ast_tree = ast.parse(code_string)

            self._build_symbol_table(self.ast_tree)

            self._build_cfg(self.ast_tree) # Call CFG builder

            self._perform_dfa(self.cfg_graph) # Call DFA

            return {

                'ast': self._serialize_ast(self.ast_tree), # Convert AST to serializable format

                'symbol_table': self.symbol_table,

                'cfg': self._serialize_cfg(self.cfg_graph),

                'dfa': self.dfa_results

            }

        except SyntaxError as e:

            print(f"Syntax error in code: {e}")

            return None


    def _build_symbol_table(self, node, scope_name="global"):

        """Recursively builds a symbol table."""

        if isinstance(node, ast.FunctionDef):

            func_name = node.name

            self.symbol_table[func_name] = {'type': 'function', 'scope': scope_name, 'args': [arg.arg for arg in node.args.args]}

            # Recurse into function body with new scope

            for child in node.body:

                self._build_symbol_table(child, func_name)

            return  # children already processed in the new scope; skip generic traversal below

        elif isinstance(node, ast.ClassDef):

            class_name = node.name

            self.symbol_table[class_name] = {'type': 'class', 'scope': scope_name}

            # Recurse into class body with new scope

            for child in node.body:

                self._build_symbol_table(child, class_name)

            return  # children already processed in the class scope; skip generic traversal below

        elif isinstance(node, ast.Assign):

            for target in node.targets:

                if isinstance(target, ast.Name):

                    var_name = target.id

                    self.symbol_table[var_name] = {'type': 'variable', 'scope': scope_name, 'assigned_value': ast.dump(node.value)}

        # Continue for other node types as needed

        for child in ast.iter_child_nodes(node):

            if not isinstance(child, (ast.FunctionDef, ast.ClassDef)): # Avoid re-processing new scopes

                self._build_symbol_table(child, scope_name)


    def _build_cfg(self, ast_tree):

        # Implementation for building CFG (e.g., using networkx or custom graph)

        # This would involve traversing the AST, identifying basic blocks and control transfers.

        # For simplicity, this is a placeholder.

        self.cfg_graph = "CFG representation of the code"


    def _perform_dfa(self, cfg_graph):

        # Implementation for Data Flow Analysis

        # This would involve iterating over the CFG and tracking variable definitions/uses.

        # For simplicity, this is a placeholder.

        self.dfa_results = "DFA results"


    def _serialize_ast(self, ast_tree):

        # Convert AST object to a serializable dictionary/JSON structure

        # This can be done recursively or using ast.dump() for a string representation

        return ast.dump(ast_tree) # For simplicity, using string dump

    

    def _serialize_cfg(self, cfg_graph):

        # Convert CFG object to a serializable dictionary/JSON structure

        return str(cfg_graph) # For simplicity, converting to string
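The `_build_cfg` placeholder above could be fleshed out along these lines. This is a deliberately minimal sketch, not the real agent: statements become nodes keyed by line number, sequential flow and `if` branches become edges, and loops, `try`/`except`, and early exits are ignored for brevity.

```python
import ast


def build_simple_cfg(func_node):
    """Toy intraprocedural CFG: returns a set of (from_line, to_line) edges.

    Illustrative only -- a production implementation would model basic
    blocks, loop back-edges, and exception flow (e.g., in a
    networkx.DiGraph rather than a plain edge set).
    """
    edges = set()

    def walk(stmts, pred):
        for stmt in stmts:
            if pred is not None:
                edges.add((pred, stmt.lineno))
            if isinstance(stmt, ast.If):
                # Both branches hang off the conditional's line.
                walk(stmt.body, stmt.lineno)
                walk(stmt.orelse, stmt.lineno)
            pred = stmt.lineno

    walk(func_node.body, None)
    return edges


source = """def f(x):
    if x > 0:
        y = 1
    else:
        y = 2
    return y
"""
tree = ast.parse(source)
cfg_edges = build_simple_cfg(tree.body[0])
```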



AGENT 3: THE PATTERN RECOGNITION AGENT - UNEARTHING HIDDEN STRUCTURES IMPLEMENTATION

The Pattern Recognition Agent is a hybrid system leveraging both traditional graph matching techniques and the semantic understanding capabilities of LLMs.


Implementation Details:

  • Pattern Knowledge Base (PKB): This is implemented as a structured database (e.g., a graph database like Neo4j or a relational database with complex schema) storing detailed definitions of design patterns. Each pattern entry includes:
    • Structural Signatures: Graph patterns representing class hierarchies, method relationships, and composition structures. For example, a "Strategy" pattern might be defined by an interface (`IDocumentProcessingStrategy`), multiple concrete implementations, and a context class (`DocumentProcessor`) that holds a reference to the interface.
    • Behavioral Signatures: Descriptions of typical method call sequences or data flow patterns associated with the pattern.
    • Semantic Keywords/Descriptions: Natural language descriptions and keywords related to the pattern's intent and application.
  • Graph Matching Engine: This component takes the AST and CFG from the Parser Agent and uses graph matching algorithms (e.g., subgraph isomorphism algorithms like VF2 or custom traversals) to find instances of the structural signatures defined in the PKB.
  • LLM for Semantic Pattern Detection: A fine-tuned LLM (or a powerful general-purpose LLM with specific prompting) is used to analyze code snippets (extracted from the AST) and their associated documentation (comments, docstrings). The LLM is prompted to:

    • Identify code sections that semantically align with the natural language descriptions of patterns in the PKB, even if the structural implementation deviates slightly.
    • Detect behavioral patterns by analyzing method call sequences or data transformations described in the CFG/DFA results, comparing them against behavioral signatures.
    • Infer the intent behind certain code structures, which might indicate a pattern's presence despite non-standard naming or organization.


  • Hybrid Matching Logic: The agent combines the results from graph matching and LLM analysis. A high confidence score from both increases the likelihood of a true positive. Discrepancies are flagged for further analysis or lower confidence.
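The hybrid matching step can be sketched as a merge of the two detection lists keyed by pattern name, boosting confidence when both detectors agree. The function below is an illustrative heuristic, not the real agent's logic; the `boost` and `cap` values and the `agreement` flag are assumptions.

```python
def combine_detections(structural, semantic, boost=0.15, cap=0.99):
    """Merge structural and LLM-based detections (illustrative heuristic).

    When both detectors report the same pattern name, confidence is
    boosted and the match is marked as agreed; single-source hits keep
    their score and are flagged for further review.
    """
    by_name = {d['pattern_name']: dict(d) for d in structural}
    for d in semantic:
        name = d['pattern_name']
        if name in by_name:
            merged = by_name[name]
            merged['confidence'] = min(cap, max(merged['confidence'], d['confidence']) + boost)
            merged['agreement'] = True
        else:
            entry = dict(d)
            entry['agreement'] = False
            by_name[name] = entry
    for d in by_name.values():
        d.setdefault('agreement', False)  # structural-only hits
    return list(by_name.values())
```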


Example Pattern Recognition Logic Snippet (simplified):


import ast


class PatternRecognitionAgent:

    def __init__(self, pattern_knowledge_base, llm_model):

        self.pkb = pattern_knowledge_base # Loaded pattern definitions

        self.llm = llm_model # LLM API or local model


    def detect_patterns(self, parsed_data):

        ast_tree = parsed_data['ast']

        cfg_graph = parsed_data['cfg']

        symbol_table = parsed_data['symbol_table']

        detected_patterns = []


        # Step 1: Graph-based structural matching

        for pattern_name, structural_signature in self.pkb.get_structural_patterns():

            matches = self._find_structural_matches(ast_tree, structural_signature)

            for match in matches:

                detected_patterns.append({

                    'pattern_name': pattern_name,

                    'type': 'structural',

                    'components': match, # e.g., {'Context': 'DocumentProcessor', 'Strategy': 'IDocumentProcessingStrategy'}

                    'confidence': 0.8 # Based on graph match quality

                })


        # Step 2: LLM-based semantic/behavioral matching

        # Extract relevant code snippets (e.g., methods with many conditionals)

        code_snippets_for_llm = self._extract_llm_relevant_snippets(ast_tree, symbol_table)

        for snippet_id, snippet_code in code_snippets_for_llm.items():

            prompt = f"Analyze the following Python code for design patterns. Explain if any pattern is present and identify participating components. Code:\n```python\n{snippet_code}\n```"

            llm_response = self.llm.generate_text(prompt)

            # Parse LLM response to extract patterns and components

            llm_identified_patterns = self._parse_llm_response(llm_response)

            for pattern in llm_identified_patterns:

                detected_patterns.append({

                    'pattern_name': pattern['name'],

                    'type': 'semantic_llm',

                    'components': pattern['components'],

                    'confidence': pattern['confidence'] # LLM can estimate confidence

                })


        return {'patterns': detected_patterns}


    def _find_structural_matches(self, ast_tree, signature):

        # Complex graph matching logic here (e.g., comparing AST nodes to pattern graph)

        # For our running example, it would look for an 'if-elif-else' chain on a type field.

        # This would be a negative match for a pattern, but a positive match for a "code smell".

        matches = []

        # Simplified detection for conditional complexity (anti-pattern)

        for node in ast.walk(ast_tree):

            if isinstance(node, ast.If):

                # Check for a series of if/elif based on a variable's value

                # This requires more detailed AST analysis

                if self._is_conditional_complexity(node):

                    matches.append({'smell': 'Conditional Complexity', 'location': f"line {node.lineno}", 'node': node})

        return matches


    def _is_conditional_complexity(self, if_node):

        # Heuristic: Check if 'if' statement has multiple 'elif' branches

        # and the test condition involves comparing a variable to different values.

        # This is a simplified check.

        if len(if_node.orelse) > 0 and isinstance(if_node.orelse[0], ast.If):

            return True # Indicates an elif chain

        return False


    def _extract_llm_relevant_snippets(self, ast_tree, symbol_table):

        # Extract function bodies, class definitions, or specific code blocks

        # that are likely candidates for pattern detection.

        snippets = {}

        for node in ast.walk(ast_tree):

            if isinstance(node, (ast.FunctionDef, ast.ClassDef)):

                # Get source code for the node

                # This requires access to the original code string

                # For simplicity, we'll use a placeholder.

                snippets[node.name] = f"Code for {node.name}..."

        return snippets


    def _parse_llm_response(self, llm_output):

        # Use another LLM call or regex to parse structured data from LLM's natural language response.

        # This is a critical step to convert free-form text into actionable structured data.

        # Example: "I found the Strategy pattern. Context: DocumentProcessor, Strategy Interface: IDocumentProcessingStrategy."

        # This requires careful prompt engineering for the LLM to output parsable text.

        return [{'name': 'Strategy', 'components': {'Context': 'DocumentProcessor'}, 'confidence': 0.9}] # Simplified


AGENT 4: THE REFACTORING PROPOSAL AGENT - IDENTIFYING OPTIMIZATION OPPORTUNITIES IMPLEMENTATION

This agent's implementation focuses on identifying code smells and then using an LLM to suggest appropriate design patterns, along with a detailed plan for their application.


Implementation Details:

  • Code Smell Detection: This component uses a combination of static analysis rules and LLM-based reasoning.
    • Rule-Based Smells: Hardcoded rules or static analysis tools (e.g., Pylint, SonarQube integrations) detect common anti-patterns like "Long Method," "Large Class," "Feature Envy," or "Switch Statements on Type Code" (which is relevant to our running example).
    • LLM for Semantic Smells: An LLM is prompted with code snippets and their context (from symbol table, CFG) to identify less obvious code smells that might not be caught by simple rules, such as unclear responsibilities or tight coupling that isn't immediately apparent from structural analysis alone.
  • LLM for Pattern Suggestion: When a code smell or a refactoring opportunity is identified, a specialized LLM is invoked. This LLM is prompted with:

    • The identified code smell (e.g., "conditional logic based on type in `DocumentProcessor.process_document`").
    • The relevant code snippet.
    • The desired outcome (e.g., "improve extensibility, reduce conditional complexity").

  The LLM's task is to suggest one or more design patterns that address the smell, explain why they are applicable, and outline the participating components. This requires the LLM to have a strong understanding of design patterns and their problem-solution mappings.

  • Complexity Assessment: This is implemented as a heuristic-based system, potentially augmented by an LLM. Factors considered include:
    • Number of new classes/interfaces to be created.
    • Number of existing classes/methods to be modified.
    • Number of lines of code to be changed.
    • Depth of inheritance hierarchies involved.
    • Number of dependencies affected.
    • An LLM can be prompted to give a qualitative assessment ("low," "medium," "high") based on these metrics.
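A minimal sketch of such a heuristic scorer follows; the metric names, weights, and thresholds are all illustrative assumptions, not tuned values from a real system.

```python
def assess_complexity(metrics):
    """Map raw refactoring metrics to a Low/Medium/High label.

    Weights and thresholds are hand-picked for illustration; an LLM
    could refine or override this rating with a qualitative judgment.
    """
    score = (3.0 * metrics.get('new_classes', 0)
             + 2.0 * metrics.get('modified_classes', 0)
             + 0.1 * metrics.get('changed_lines', 0)
             + 2.0 * metrics.get('inheritance_depth', 0)
             + 1.0 * metrics.get('affected_dependencies', 0))
    if score < 5:
        return 'Low'
    if score < 15:
        return 'Medium'
    return 'High'
```

Missing metrics default to zero, so partial data still yields a (conservative) rating.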


Example Refactoring Proposal Logic Snippet:


import ast


class RefactoringProposalAgent:

    def __init__(self, llm_model):

        self.llm = llm_model


    def propose_refactorings(self, parsed_data, detected_patterns):

        ast_tree = parsed_data['ast']

        symbol_table = parsed_data['symbol_table']

        proposals = []


        # Step 1: Detect code smells (e.g., conditional complexity)

        smells = self._detect_code_smells(ast_tree, symbol_table)


        for smell in smells:

            # Step 2: Use LLM to suggest patterns for the smell

            prompt = f"""

            I've identified a '{smell['name']}' code smell in the following code context:

            {smell['context_code']}


            This smell is located at: {smell['location']}


            Please suggest applicable design patterns to address this code smell.

            For each suggested pattern:

            1. Explain why it is applicable and its benefits.

            2. Identify the specific components (classes, interfaces, methods) that would participate in the pattern implementation based on the provided code context.

            3. Assess the complexity of applying this pattern (Low, Medium, High).


            Output in a structured, parsable format, e.g., JSON.

            """

            llm_response_json = self.llm.generate_structured_output(prompt)

            # Parse LLM's JSON response

            suggested_patterns = self._parse_llm_pattern_suggestions(llm_response_json)


            for pattern_suggestion in suggested_patterns:

                proposals.append({

                    'pattern_name': pattern_suggestion['pattern'],

                    'explanation': pattern_suggestion['explanation'],

                    'participating_components': pattern_suggestion['components'],

                    'complexity': pattern_suggestion['complexity'],

                    'original_smell_location': smell['location']

                })

        return {'proposals': proposals}


    def _detect_code_smells(self, ast_tree, symbol_table):

        smells = []

        # Rule-based detection for conditional complexity in process_document

        for node in ast.walk(ast_tree):

            if isinstance(node, ast.FunctionDef) and node.name == 'process_document':

                # Look for if-elif-else chain on a variable

                for child in node.body:

                    if isinstance(child, ast.If):

                        if self._is_conditional_type_dispatch(child):

                            smells.append({

                                'name': 'Conditional Complexity (Type Dispatch)',

                                'context_code': ast.unparse(node),  # regenerate source from the AST (Python 3.9+)

                                'location': f"Line {child.lineno}",

                                'node': child # Keep reference to AST node

                            })

        return smells


    def _is_conditional_type_dispatch(self, if_node):

        # Heuristic: Check if 'if' statement and its 'elif' branches

        # compare a variable (like file_extension) to multiple literal values.

        # This is a simplified check and would be more robust in a real system.

        if isinstance(if_node.test, ast.Compare) and len(if_node.test.ops) == 1 and isinstance(if_node.test.ops[0], ast.Eq):

            if isinstance(if_node.test.left, ast.Call) and isinstance(if_node.test.left.func, ast.Attribute) and if_node.test.left.func.attr == 'lower':

                if isinstance(if_node.test.comparators[0], ast.Constant) and isinstance(if_node.test.comparators[0].value, str):

                    return True

        return False


    def _parse_llm_pattern_suggestions(self, llm_json_output):

        # Assume LLM returns a JSON string that can be parsed

        import json

        return json.loads(llm_json_output)


AGENT 5: THE USER INTERACTION AGENT - THE HUMAN-AI BRIDGE IMPLEMENTATION

The User Interaction Agent is implemented as a command-line interface (CLI) for simplicity in this article, but in a production environment, it would likely be an IDE plugin or a web-based UI.


Implementation Details:

  • Input/Output Handling: Uses standard input/output streams (`input()`, `print()`) for CLI interaction.
  • LLM for Prompt Generation: An LLM is used to dynamically construct clear, natural language prompts for the user based on the structured refactoring proposals. This ensures the explanations are easy to understand and tailored to the specific context.
  • Response Parsing: The agent is implemented to parse user responses, typically expecting simple "yes" or "no" answers, or selection of options from a list. For more complex interactions, an LLM could interpret free-form text responses.


Example User Interaction Logic Snippet:


class UserInteractionAgent:

    def __init__(self, llm_model):

        self.llm = llm_model


    def present_proposals(self, proposals):

        print("\n--- Refactoring Proposals ---")

        for i, proposal in enumerate(proposals):

            print(f"\nProposal {i+1}: Apply {proposal['pattern_name']} Pattern")

            print(f"  Explanation: {proposal['explanation']}")

            print(f"  Participating Components: {proposal['participating_components']}")

            print(f"  Estimated Complexity: {proposal['complexity']}")


        # Use LLM to generate a clear question

        prompt = f"""

        The user has just been shown {len(proposals)} refactoring proposal(s).

        Please ask the user whether they would like to apply them.

        If yes, ask them to list the numbers of the proposals they want to apply, separated by commas.

        If no, ask them to type 'no'.

        Ensure the question is polite and clear.

        """

        question = self.llm.generate_text(prompt)

        user_input = input(question + "\nYour choice: ")


        approved_indices = []

        if user_input.lower() != 'no':

            try:

                approved_indices = [int(x.strip()) - 1 for x in user_input.split(',')]

            except ValueError:

                print("Invalid input. Please enter numbers separated by commas or 'no'.")

                return {'action': 'decline'}


        approved_proposals = [proposals[i] for i in approved_indices if 0 <= i < len(proposals)]

        return {'action': 'approve' if approved_proposals else 'decline', 'approved_proposals': approved_proposals}


AGENT 6: THE SEQUENCING AGENT - ORCHESTRATING MULTIPLE TRANSFORMATIONS IMPLEMENTATION

The Sequencing Agent is implemented using graph theory algorithms to determine a safe and logical order for applying multiple approved refactorings.


Implementation Details:

  • Dependency Graph Construction: For each approved refactoring proposal, the agent analyzes its `participating_components` and the nature of the pattern. It then constructs a directed graph in which nodes represent refactoring proposals and directed edges represent dependencies (e.g., "Refactoring A must be applied before Refactoring B"); a valid ordering exists only if this graph is acyclic (a DAG).
    • Rule-Based Dependencies: Predefined rules identify common dependencies. For example, creating an interface (part of Strategy) must precede implementing it, and a Factory Method for object creation depends on the existence of the objects it creates.
    • LLM for Complex Dependencies: For more subtle or context-specific dependencies, an LLM can be prompted with the details of two or more proposals to determine if one must logically precede the other.
  • Topological Sort: Once the dependency graph is constructed, a topological sort algorithm (e.g., Kahn's algorithm or a depth-first-search-based variant) is applied to generate a linear ordering of the refactorings. If a cycle is detected (indicating a circular dependency), the agent flags it as an unresolvable conflict and reports it to the Orchestrator.
  • Prioritization Heuristics: In cases where multiple valid sequences exist (i.e., independent refactorings), heuristics are applied. These might include:
    • "Least invasive first": Prioritize refactorings with lower complexity.
    • "Structural before behavioral": Apply patterns that change the class structure before those that change object interactions.
    • "Foundational before specific": Apply core architectural patterns before more localized ones.


Example Sequencing Logic Snippet:


import collections


class SequencingAgent:

    def __init__(self, llm_model):

        self.llm = llm_model


    def sequence_proposals(self, proposals):

        # Build a dependency graph

        dependency_graph = collections.defaultdict(list)

        in_degree = collections.defaultdict(int)


        # Initialize all proposals in graph

        for i, prop in enumerate(proposals):

            in_degree[i] = 0


        # Analyze dependencies (simplified rules)

        for i, prop1 in enumerate(proposals):

            for j, prop2 in enumerate(proposals):

                if i == j:

                    continue

                # Example rule: If prop1 creates an interface that prop2 implements, prop1 -> prop2

                if self._depends_on(prop2, prop1):

                    dependency_graph[i].append(j)

                    in_degree[j] += 1

                # More complex dependency analysis can involve LLM

                # prompt = f"Does applying {prop1['pattern_name']} need to happen before {prop2['pattern_name']} given their components?"

                # llm_decision = self.llm.generate_bool(prompt)

                # if llm_decision:

                #     dependency_graph[i].append(j)

                #     in_degree[j] += 1


        # Perform topological sort (Kahn's algorithm)

        queue = collections.deque([i for i in range(len(proposals)) if in_degree[i] == 0])

        ordered_proposals_indices = []


        while queue:

            node_idx = queue.popleft()

            ordered_proposals_indices.append(node_idx)


            for neighbor_idx in dependency_graph[node_idx]:

                in_degree[neighbor_idx] -= 1

                if in_degree[neighbor_idx] == 0:

                    queue.append(neighbor_idx)


        if len(ordered_proposals_indices) != len(proposals):

            print("Warning: Circular dependency detected or some proposals could not be sequenced.")

            # Handle error or return partial sequence

            return {'ordered_proposals': [proposals[i] for i in ordered_proposals_indices]}


        return {'ordered_proposals': [proposals[i] for i in ordered_proposals_indices]}


    def _depends_on(self, proposal_a, proposal_b):

        # Simplified dependency check: if A uses a component B creates/modifies

        # This would be highly specific to pattern types and components.

        # For Strategy and Factory Method: Factory Method depends on Strategy's concrete classes.

        if proposal_a['pattern_name'] == 'Factory Method' and proposal_b['pattern_name'] == 'Strategy':

            # Check if the Factory Method is creating instances of the Strategy's concrete classes

            # This requires parsing the 'participating_components' more deeply.

            return True # Simplified assumption for this example

        return False
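
The "least invasive first" heuristic described above can be layered directly onto Kahn's algorithm by swapping the FIFO queue for a priority queue. Below is a minimal standalone sketch; it assumes each proposal carries a numeric `complexity` score, and that dependencies are given as explicit `(before, after)` index pairs rather than derived by `_depends_on`:


```python
import heapq


def sequence_with_priority(proposals, edges):
    """Kahn's algorithm with a min-heap: among the refactorings whose
    dependencies are already satisfied, the lowest-complexity one is
    released first.

    proposals: list of dicts, each with a numeric 'complexity' key (assumed).
    edges: (i, j) pairs meaning proposal i must be applied before proposal j.
    """
    n = len(proposals)
    graph = {i: [] for i in range(n)}
    in_degree = [0] * n
    for i, j in edges:
        graph[i].append(j)
        in_degree[j] += 1

    # Min-heap keyed by (complexity, index): "least invasive first".
    heap = [(proposals[i]['complexity'], i) for i in range(n) if in_degree[i] == 0]
    heapq.heapify(heap)

    order = []
    while heap:
        _, idx = heapq.heappop(heap)
        order.append(idx)
        for nxt in graph[idx]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                heapq.heappush(heap, (proposals[nxt]['complexity'], nxt))

    if len(order) != n:
        raise ValueError("Circular dependency between refactoring proposals")
    return order
```


Dependency constraints are still honored exactly; the heap only changes the order among proposals that are independent of each other.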


AGENT 7: THE CODE TRANSFORMATION AGENT - APPLYING DESIGN PATTERNS IMPLEMENTATION

The Code Transformation Agent is the most complex to implement, as it directly modifies the code's AST and regenerates source code. This agent is a hybrid of programmatic AST manipulation and LLM-driven code generation.


Implementation Details:

  • AST Modification Engine: This is a core component that takes the current AST and a refactoring plan (from a proposal) and programmatically modifies the AST. This involves:
    • Node Creation: Creating new `ast.ClassDef`, `ast.FunctionDef`, `ast.Assign`, etc., nodes for new classes, methods, or variables required by the pattern.
    • Node Deletion: Removing outdated nodes (e.g., the `if-elif-else` block).
    • Node Replacement/Modification: Changing method calls, variable references, or class inheritance.
    • Import Management: Automatically adding necessary `import` statements (e.g., `from abc import ABC, abstractmethod`).
  • LLM for Targeted Code Generation: For generating the content of new methods, classes, or specific logic within the refactored code, a powerful LLM is employed. The LLM is prompted with:
    • The pattern being applied (e.g., "Strategy pattern").
    • The specific component it needs to generate (e.g., "PdfProcessingStrategy class").
    • The context of the original code (e.g., the original PDF processing logic from the `if` block).
    The LLM generates the code snippet, which is then parsed into an AST fragment and inserted into the main AST by the AST Modification Engine.
  • Code Regeneration (Pretty-Printing): After AST modification, the `ast.unparse()` function (Python 3.9+) or a custom pretty-printer is used to convert the modified AST back into source code. This is a critical step to ensure the generated code is well-formatted and readable. For older Python versions or more control over formatting, external tools like `black` or `yapf` can be integrated, or a custom AST visitor can be implemented to generate code with specific formatting rules.
  • Error Handling and Validation: After regeneration, the code is immediately parsed again to check for syntax errors. Semantic validation (e.g., type checking, linting) can also be performed to catch potential issues introduced during transformation.
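
The final validation bullet can be sketched in a few lines: regenerated source is parsed and compiled before it is ever written back, so syntactically broken transformations are caught immediately. This is a minimal sketch; a production pipeline would add linting, type checking, and test execution on top:


```python
import ast


def validate_regenerated_code(source: str) -> list:
    """Run cheap post-transformation checks on regenerated source code.

    Returns a list of problem descriptions; an empty list means the
    code passed both the parse and the compile check.
    """
    try:
        tree = ast.parse(source)  # syntax check
    except SyntaxError as exc:
        return [f"SyntaxError: {exc.msg} (line {exc.lineno})"]

    problems = []
    try:
        # Compiling the tree catches a few errors that parsing alone misses.
        compile(tree, "<refactored>", "exec")
    except (SyntaxError, ValueError) as exc:
        problems.append(f"compile failed: {exc}")
    return problems
```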


Example Code Transformation Logic Snippet (simplified for Strategy Pattern):



import ast



class CodeTransformationAgent:

    def __init__(self, llm_model):

        self.llm = llm_model

        self.original_code_string = "" # Store original code to extract snippets


    def apply_pattern(self, pattern_details, current_code_string, original_code_string):

        self.original_code_string = original_code_string

        current_ast = ast.parse(current_code_string) # Parse the current source code into an AST


        if pattern_details['pattern_name'] == 'Strategy':

            return self._apply_strategy_pattern(current_ast, pattern_details)

        elif pattern_details['pattern_name'] == 'Factory Method':

            return self._apply_factory_method_pattern(current_ast, pattern_details)

        # Add other patterns here

        return {'refactored_code': ast.unparse(current_ast), 'new_ast': ast.dump(current_ast)}


    def _apply_strategy_pattern(self, current_ast, pattern_details):

        # 1. Generate the IDocumentProcessingStrategy interface

        interface_code_str = self._generate_strategy_interface()

        interface_ast = ast.parse(interface_code_str).body[0] # Get the ClassDef node

        current_ast.body.insert(0, interface_ast) # Insert at beginning of module


        # 2. Generate concrete strategy classes (Pdf, Txt, Docx)

        # Find the original conditional blocks to extract logic for LLM

        original_processor_node = None

        for node in current_ast.body:

            if isinstance(node, ast.ClassDef) and node.name == 'DocumentProcessor':

                original_processor_node = node

                break


        if original_processor_node:

            original_process_method = None # Avoid a NameError if the method is not found

            for child in original_processor_node.body:

                if isinstance(child, ast.FunctionDef) and child.name == 'process_document':

                    original_process_method = child

                    break


            # Extract specific logic for each type from the original if-elif-else

            pdf_logic = self._extract_logic_from_conditional(original_process_method, ".pdf")

            txt_logic = self._extract_logic_from_conditional(original_process_method, ".txt")

            docx_logic = self._extract_logic_from_conditional(original_process_method, ".docx")


            # LLM generates concrete strategy classes

            pdf_strategy_code = self._generate_concrete_strategy("PdfProcessingStrategy", "IDocumentProcessingStrategy", pdf_logic)

            txt_strategy_code = self._generate_concrete_strategy("TxtProcessingStrategy", "IDocumentProcessingStrategy", txt_logic)

            docx_strategy_code = self._generate_concrete_strategy("DocxProcessingStrategy", "IDocumentProcessingStrategy", docx_logic)

            unsupported_strategy_code = self._generate_concrete_strategy("UnsupportedDocumentStrategy", "IDocumentProcessingStrategy", "print(f'Unsupported document type for: {file_path}'); return 'Unsupported document type.'")


            current_ast.body.append(ast.parse(pdf_strategy_code).body[0])

            current_ast.body.append(ast.parse(txt_strategy_code).body[0])

            current_ast.body.append(ast.parse(docx_strategy_code).body[0])

            current_ast.body.append(ast.parse(unsupported_strategy_code).body[0])



            # 3. Refactor DocumentProcessor to use the strategy

            # Find and modify the DocumentProcessor class

            for node in current_ast.body:

                if isinstance(node, ast.ClassDef) and node.name == 'DocumentProcessor':

                    self._modify_document_processor_for_strategy(node)

                    break

            

            # 4. Add necessary imports (abc module)

            self._add_import_if_missing(current_ast, 'abc', ['ABC', 'abstractmethod'])


        refactored_code = ast.unparse(current_ast)

        return {'refactored_code': refactored_code, 'new_ast': ast.dump(current_ast)}


    def _generate_strategy_interface(self):

        # Use LLM or template to generate interface code

        prompt = """

        Generate Python code for an abstract base class named 'IDocumentProcessingStrategy'

        that inherits from 'ABC' and has one abstract method 'process(self, file_path)'.

        Include a docstring for the class and the method.

        """

        return self.llm.generate_text(prompt)


    def _generate_concrete_strategy(self, class_name, interface_name, processing_logic):

        # Use LLM to generate concrete strategy class code

        prompt = f"""

        Generate Python code for a concrete class named '{class_name}'

        that implements the interface '{interface_name}'.

        The 'process' method should contain the following logic:

        {processing_logic}

        Include a docstring for the class and the method.

        """

        return self.llm.generate_text(prompt)


    def _extract_logic_from_conditional(self, process_method_node, extension):

        # Traverse the AST of the original process_document method

        # to find the specific 'if' or 'elif' block for the given extension

        # and extract its body. This is a complex AST traversal.

        # For simplicity, we'll return a placeholder.

        ext_name = extension.lstrip('.').upper()

        return f"print(f'Processing {ext_name} document: {{file_path}} with {ext_name}-specific logic.')\n        return '{ext_name} processed successfully.'"


    def _modify_document_processor_for_strategy(self, class_node):

        # Remove old process_document method

        class_node.body = [node for node in class_node.body if not (isinstance(node, ast.FunctionDef) and node.name == 'process_document')]


        # Add __init__ and new process_document method

        # The method bodies are written without leading indentation so that
        # ast.parse accepts them at module level; ast.unparse re-indents them
        # correctly once the nodes are appended to the class body.

        init_method_code = """

def __init__(self, strategy):

    self._strategy = strategy

"""

        process_method_code = """

def process_document(self, file_path):

    print(f"DocumentProcessor delegating task for: {file_path}")

    return self._strategy.process(file_path)

"""

        class_node.body.append(ast.parse(init_method_code).body[0])

        class_node.body.append(ast.parse(process_method_code).body[0])


    def _add_import_if_missing(self, ast_tree, module_name, names):

        # Check if import already exists

        for node in ast_tree.body:

            if isinstance(node, ast.ImportFrom) and node.module == module_name:

                for alias in node.names:

                    if alias.name in names:

                        return # Already imported

        # If not, add the import at the top

        import_stmt = ast.ImportFrom(module=module_name, names=[ast.alias(name=name, asname=None) for name in names], level=0)

        ast_tree.body.insert(0, import_stmt)


    def _apply_factory_method_pattern(self, current_ast, pattern_details):

        # This would be a similar process:

        # 1. Generate a new Factory class/method using LLM.

        # 2. Modify client code (where strategies are currently instantiated)

        #    to use the new factory method.

        # This demonstrates the iterative nature of refactoring.

        print("Applying Factory Method pattern...")

        # ... implementation details for Factory Method ...

        return {'refactored_code': ast.unparse(current_ast), 'new_ast': ast.dump(current_ast)}


CONCLUSION

The implementation of a multi-agent LLM application for design pattern detection and refactoring represents a significant advancement in automated code quality assurance and developer productivity. By meticulously dissecting code into its structural and semantic components, identifying both existing patterns and opportunities for improvement, and intelligently guiding the refactoring process, this system empowers developers to maintain cleaner, more robust, and more extensible codebases.


The collaborative intelligence of agents—from the Orchestrator guiding the workflow, to the Code Parser understanding the code's DNA, the Pattern Recognition Agent unearthing hidden structures, the Refactoring Proposal Agent identifying optimization opportunities, the User Interaction Agent bridging human and AI, the Sequencing Agent orchestrating multiple transformations, and finally, the Code Transformation Agent applying design patterns—creates a powerful synergy. This synergy reduces the cognitive load on developers and fosters a culture of continuous code improvement. This intelligent code assistant is a valuable tool, acting as a partner in crafting high-quality software, enabling developers to focus on innovation while the system diligently safeguards the structural integrity and design elegance of their creations.


CHALLENGES IN IMPLEMENTATION

The implementation of this multi-agent LLM application for design pattern detection and refactoring presents several significant challenges that require careful consideration and robust solutions during its construction. These challenges span technical, conceptual, and practical domains, highlighting the complexity inherent in building such an intelligent system.


One of the foremost challenges encountered during development is the ACCURACY OF LLM IN PATTERN IDENTIFICATION. Large Language Models, despite their impressive capabilities, are prone to both false positives and false negatives. A false positive occurs when the LLM incorrectly identifies a design pattern where none exists, or misinterprets a code structure as a pattern. Conversely, a false negative means the LLM fails to detect an actual design pattern or an opportunity for one. Achieving high precision and recall in pattern recognition requires extensive training data, sophisticated prompting techniques, and often hybrid approaches that combine LLM reasoning with more traditional static analysis tools. The nuances of human-written code, including variations in naming conventions, structural deviations, and idiomatic expressions, make this task particularly difficult to perfect.
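
As a concrete example of such a hybrid approach, a cheap AST-based structural check can cross-validate LLM suggestions: if the model proposes a Strategy refactoring for a function, a traditional detector can confirm that the function actually contains a long `if`/`elif` chain. The sketch below is illustrative (the function name and threshold are assumptions, not part of the system described above):


```python
import ast


def find_long_conditional_chains(source: str, min_branches: int = 3):
    """Return names of functions containing an if/elif chain with at least
    `min_branches` branches -- a structural signal that a Strategy
    refactoring *might* apply. An LLM proposal with no matching
    structural signal is treated as a likely false positive.
    """
    hits = []
    for func in ast.walk(ast.parse(source)):
        if not isinstance(func, ast.FunctionDef):
            continue
        for stmt in func.body:
            branches = 0
            node = stmt
            while isinstance(node, ast.If):
                branches += 1
                # An elif appears as a single If node in the orelse list.
                node = node.orelse[0] if len(node.orelse) == 1 else None
            if branches >= min_branches:
                hits.append(func.name)
                break
    return hits
```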


Another substantial hurdle faced is the COMPLEXITY OF ABSTRACT SYNTAX TREE (AST) MANIPULATION AND CODE GENERATION. The Code Transformation Agent's ability to refactor code relies heavily on its capacity to modify the AST and then accurately regenerate source code from it. This process is far from trivial. Preserving comments, maintaining original formatting (indentation, spacing, line breaks), and ensuring semantic correctness throughout the transformation are critical. A single misplaced node or incorrect attribute change in the AST can lead to syntactically invalid or functionally broken code. Furthermore, different programming languages have distinct AST structures and grammar rules, necessitating language-specific parsing and generation logic, or a highly abstract intermediate representation.
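
The comment-preservation problem is easy to demonstrate with Python's own `ast` module, whose parse/unparse round trip discards comments and formatting entirely; this is one reason comment-preserving refactoring tools typically work on a concrete syntax tree (e.g., LibCST) or splice edits into the original source text instead:


```python
import ast

# A naive parse/unparse round trip: ast keeps only the syntax tree,
# so comments and original formatting are lost.
source = (
    "def total(items):  # sum of item prices\n"
    "    return sum(i.price for i in items)\n"
)

round_tripped = ast.unparse(ast.parse(source))

# The comment is gone from the regenerated code.
assert "#" not in round_tripped
assert round_tripped.startswith("def total(items):")
```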


TESTING AND VALIDATION OF REFACTORED CODE also poses a significant challenge during implementation. Even if the Code Transformation Agent successfully regenerates syntactically correct code, there is no inherent guarantee that the refactored code maintains the original program's behavior. Automated unit and integration tests are essential to verify functional equivalence. The system integrates with existing testing frameworks or generates new tests to cover the refactored sections. This adds another layer of complexity, as the LLM needs to understand the test context and generate meaningful assertions.
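
A minimal form of such a behavioral check simply runs both versions against the same inputs and compares outputs. The helper and the two `describe_*` functions below are illustrative, not part of the system; real pipelines would reuse the project's existing test suite and add property-based tests on top:


```python
def assert_functionally_equivalent(original_fn, refactored_fn, cases):
    """Check that both implementations produce the same result for
    every input case; raise AssertionError on the first divergence."""
    for args in cases:
        before = original_fn(*args)
        after = refactored_fn(*args)
        assert before == after, f"divergence on {args!r}: {before!r} != {after!r}"


# Hypothetical example: conditional logic vs. a dict-dispatch refactoring.
def describe_v1(ext):
    if ext == ".pdf":
        return "pdf"
    elif ext == ".txt":
        return "txt"
    return "unsupported"


def describe_v2(ext):
    return {".pdf": "pdf", ".txt": "txt"}.get(ext, "unsupported")


assert_functionally_equivalent(describe_v1, describe_v2,
                               [(".pdf",), (".txt",), (".jpg",)])
```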


HANDLING EDGE CASES AND LANGUAGE-SPECIFIC NUANCES is yet another area of difficulty. Programming languages, especially those with dynamic typing or metaprogramming capabilities, often have constructs that defy straightforward pattern matching or refactoring rules. For example, reflection in Java or decorators in Python can obscure the true intent or structure of the code, making it harder for an LLM to accurately identify patterns or safely apply refactorings. The system is designed to handle these exceptions gracefully, often by flagging them for human review rather than attempting an uncertain automated refactoring.
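
A conservative implementation of this "flag rather than refactor" policy is straightforward in Python: any function carrying a decorator is routed to human review. This is only a sketch; a real system would whitelist known-safe decorators such as `@staticmethod` or `@property` rather than flagging everything:


```python
import ast


def flag_for_human_review(source: str):
    """Return names of functions whose behavior may be altered by
    metaprogramming (here: any decorator), so automated refactoring
    skips them and requests human review instead."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.decorator_list:
            flagged.append(node.name)
    return flagged
```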


Finally, PERFORMANCE CONSIDERATIONS FOR LARGE CODEBASES are paramount. Analyzing, parsing, and transforming millions of lines of code can be computationally intensive. The multi-agent system is designed for efficiency, employing incremental analysis, caching mechanisms, and distributed processing to handle enterprise-scale applications without significant delays. The time taken for each agent to perform its task, especially the LLM-driven ones, is optimized to provide a responsive user experience.
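
The caching mechanism mentioned above can be as simple as keying analysis results by a hash of the file contents, so unchanged files skip re-analysis on incremental runs. A minimal in-memory sketch (a real system would persist the cache and include the model and prompt version in the key):


```python
import hashlib


class AnalysisCache:
    """Content-addressed cache for per-file analysis results: a file is
    re-analyzed only when its text has changed since the last run."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(source: str) -> str:
        return hashlib.sha256(source.encode("utf-8")).hexdigest()

    def get_or_analyze(self, source: str, analyze):
        key = self._key(source)
        if key not in self._store:  # cache miss: run the expensive analysis
            self._store[key] = analyze(source)
        return self._store[key]
```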


These challenges underscore that building such a system requires a deep understanding of compilers, static analysis, software engineering principles, and the capabilities and limitations of large language models.


ADDENDUM: FULL RUNNING EXAMPLE - DOCUMENT PROCESSOR


This addendum provides the complete, runnable Python code for the Document Processor example, demonstrating both the initial state with boilerplate conditional logic and the refactored state applying the Strategy design pattern.


PART 1: INITIAL CODE WITH BOILERPLATE


This version of the `DocumentProcessor` handles different document types using a series of `if-elif-else` statements. This approach is less flexible and harder to maintain as new document types are introduced.


import os


class DocumentProcessor:

    """

    A simple class to process different types of documents based on their extension.

    This version uses conditional logic, which can become cumbersome and

    violates the Open/Closed Principle.

    """


    def process_document(self, file_path):

        """

        Processes a document by determining its type and applying specific logic.

        This method contains boilerplate code for type-specific handling.

        """

        # Extract the file extension to determine the document type.

        file_extension = os.path.splitext(file_path)[1].lower()


        # Conditional logic to handle different document types.

        # This section is the target for refactoring using the Strategy pattern.

        if file_extension == ".pdf":

            print(f"Processing PDF document: {file_path} with PDF-specific logic.")

            # In a real application, this would involve complex PDF parsing libraries

            # and specific business logic for PDF content.

            return "PDF processed successfully."

        elif file_extension == ".txt":

            print(f"Processing TXT document: {file_path} with TXT-specific logic.")

            # This would involve reading text files, potentially performing NLP tasks,

            # or other text-specific manipulations.

            return "TXT processed successfully."

        elif file_extension == ".docx":

            print(f"Processing DOCX document: {file_path} with DOCX-specific logic.")

            # This would involve using libraries like python-docx to extract content,

            # modify documents, or convert formats.

            return "DOCX processed successfully."

        else:

            print(f"Unsupported document type for: {file_path}")

            return "Unsupported document type."


# --- Demonstration of the Initial Document Processor ---

print("--- Initial Document Processor Demonstration ---")

initial_processor = DocumentProcessor()


# Simulate processing various document types

print(initial_processor.process_document("report.pdf"))

print(initial_processor.process_document("notes.txt"))

print(initial_processor.process_document("memo.docx"))

print(initial_processor.process_document("image.jpg")) # Unsupported type


print("\n--------------------------------------------\n")


PART 2: REFACTORED CODE WITH STRATEGY DESIGN PATTERN


This version of the `DocumentProcessor` has been refactored to use the Strategy design pattern. The specific processing logic for each document type is now encapsulated in separate strategy classes, making the `DocumentProcessor` flexible and easily extensible.


import os

from abc import ABC, abstractmethod


# 1. Strategy Interface (Abstract Base Class)

#    This defines the common interface for all concrete document processing strategies.

class IDocumentProcessingStrategy(ABC):

    """

    Abstract base class defining the interface for document processing strategies.

    All concrete strategies must implement the 'process' method.

    This ensures that different processing algorithms can be interchanged.

    """

    @abstractmethod

    def process(self, file_path):

        """

        Processes the document at the given file path.

        This method must be implemented by concrete strategy classes to provide

        type-specific document handling logic.

        """

        pass


# 2. Concrete Strategy Implementations

#    Each class encapsulates a specific algorithm for processing a document type.


class PdfProcessingStrategy(IDocumentProcessingStrategy):

    """

    Concrete strategy for processing PDF documents.

    Encapsulates the specific logic for handling PDF files.

    """

    def process(self, file_path):

        """

        Implements PDF-specific processing logic.

        This would involve using a PDF parsing library (e.g., PyPDF2, pdfminer.six).

        """

        print(f"Processing PDF document: {file_path} with PDF-specific logic.")

        # Placeholder for actual complex PDF parsing and handling

        return "PDF processed successfully."


class TxtProcessingStrategy(IDocumentProcessingStrategy):

    """

    Concrete strategy for processing TXT documents.

    Encapsulates the specific logic for handling TXT files.

    """

    def process(self, file_path):

        """

        Implements TXT-specific processing logic.

        This would involve reading the text file content and performing operations.

        """

        print(f"Processing TXT document: {file_path} with TXT-specific logic.")

        # Placeholder for actual complex TXT reading and manipulation

        return "TXT processed successfully."


class DocxProcessingStrategy(IDocumentProcessingStrategy):

    """

    Concrete strategy for processing DOCX documents.

    Encapsulates the specific logic for handling DOCX files.

    """

    def process(self, file_path):

        """

        Implements DOCX-specific processing logic.

        This would involve using a library like 'python-docx' to interact with DOCX files.

        """

        print(f"Processing DOCX document: {file_path} with DOCX-specific logic.")

        # Placeholder for actual complex DOCX parsing and content extraction

        return "DOCX processed successfully."


class UnsupportedDocumentStrategy(IDocumentProcessingStrategy):

    """

    A concrete strategy to handle unsupported document types gracefully.

    """

    def process(self, file_path):

        """

        Handles unsupported document types.

        """

        print(f"Unsupported document type for: {file_path}")

        return "Unsupported document type."


# 3. Context Class

#    The DocumentProcessor (Context) now holds a reference to a strategy object

#    and delegates the processing task to it. It does not contain type-specific logic.

class DocumentProcessor:

    """

    The context class that uses an IDocumentProcessingStrategy.

    It delegates the actual document processing to the chosen strategy.

    This class is now open for extension (new strategies) but closed for modification

    (no changes needed when adding new document types).

    """

    def __init__(self, strategy: IDocumentProcessingStrategy):

        """

        Initializes the DocumentProcessor with a specific processing strategy.

        The strategy can be changed at runtime if needed.

        """

        if not isinstance(strategy, IDocumentProcessingStrategy):

            raise TypeError("Provided strategy must be an instance of IDocumentProcessingStrategy.")

        self._strategy = strategy


    def set_strategy(self, strategy: IDocumentProcessingStrategy):

        """

        Allows changing the processing strategy at runtime.

        """

        if not isinstance(strategy, IDocumentProcessingStrategy):

            raise TypeError("Provided strategy must be an instance of IDocumentProcessingStrategy.")

        self._strategy = strategy


    def process_document(self, file_path):

        """

        Delegates the document processing to the currently set strategy.

        The DocumentProcessor itself does not know the specific processing details.

        """

        print(f"DocumentProcessor delegating task for: {file_path}")

        return self._strategy.process(file_path)


# 4. Strategy Factory (Optional, but good for managing strategy instantiation)

#    This factory helps in creating the correct strategy based on the file extension.

#    This could be a separate refactoring suggested by the LLM (Factory Method pattern).

class DocumentProcessingStrategyFactory:

    """

    A factory class to create appropriate document processing strategies

    based on the file extension. This abstracts the strategy creation logic.

    """

    def get_strategy(self, file_path) -> IDocumentProcessingStrategy:

        """

        Returns a concrete strategy instance based on the file's extension.

        """

        file_extension = os.path.splitext(file_path)[1].lower()

        if file_extension == ".pdf":

            return PdfProcessingStrategy()

        elif file_extension == ".txt":

            return TxtProcessingStrategy()

        elif file_extension == ".docx":

            return DocxProcessingStrategy()

        else:

            return UnsupportedDocumentStrategy()


# --- Demonstration of the Refactored Document Processor ---

print("--- Refactored Document Processor Demonstration (Strategy Pattern) ---")


strategy_factory = DocumentProcessingStrategyFactory()


# Process PDF using the appropriate strategy

pdf_strategy = strategy_factory.get_strategy("report.pdf")

pdf_processor = DocumentProcessor(pdf_strategy)

print(pdf_processor.process_document("report.pdf"))


# Process TXT using the appropriate strategy

txt_strategy = strategy_factory.get_strategy("notes.txt")

txt_processor = DocumentProcessor(txt_strategy)

print(txt_processor.process_document("notes.txt"))


# Process DOCX using the appropriate strategy

docx_strategy = strategy_factory.get_strategy("memo.docx")

docx_processor = DocumentProcessor(docx_strategy)

print(docx_processor.process_document("memo.docx"))


# Handle an unsupported type

unsupported_strategy = strategy_factory.get_strategy("image.jpg")

unsupported_processor = DocumentProcessor(unsupported_strategy)

print(unsupported_processor.process_document("image.jpg"))


# Demonstrate changing strategy at runtime (less common for this specific example,

# but illustrates flexibility)

print("\n--- Demonstrating runtime strategy change ---")

runtime_processor = DocumentProcessor(TxtProcessingStrategy())

print(runtime_processor.process_document("another_notes.txt"))

runtime_processor.set_strategy(PdfProcessingStrategy())

print(runtime_processor.process_document("another_report.pdf"))


print("\n--------------------------------------------\n")
