Monday, April 06, 2026

LLM-Based Agentic AI Analysis - Architectural Considerations




The increasing complexity of software projects necessitates advanced tools for code and design analysis. Traditional static analysis tools often lack the semantic understanding required to grasp high-level design patterns, inter-module dependencies, or architectural implications. This article explores the development of an LLM-based Agentic AI system designed to analyze project directories and GitHub repositories, providing deep insights into code structure, design principles, and potential areas for improvement. A primary focus will be on overcoming the inherent context memory limitations of large language models through a sophisticated architectural design that leverages context window shifting, Retrieval Augmented Generation (RAG), GraphRAG, fine-tuning, strategic tool usage, multitasking, and intelligent summarization.


Overall Architecture of the Agentic AI System


The proposed Agentic AI system operates as a multi-agent framework orchestrated by a central control unit. This orchestrator delegates tasks to specialized sub-agents, each equipped with specific tools and knowledge access mechanisms. The architecture integrates robust data ingestion pipelines, a dynamic knowledge base, and advanced context management strategies to ensure comprehensive and accurate analysis of even very large codebases. The core idea is to break down complex analysis problems into smaller, manageable tasks that can be handled by focused agents, thereby distributing the cognitive load and information context.


Core Components in Detail


Orchestrator Agent


The Orchestrator Agent serves as the brain of the entire system, responsible for task decomposition, agent coordination, and overall workflow management. When a user initiates an analysis request, such as "Analyze the security vulnerabilities in the `auth` module and suggest mitigations," the Orchestrator first breaks this complex request into smaller, actionable sub-tasks. For example, it might identify the files related to the `auth` module, then delegate a "Code Scanning" task to a dedicated agent, followed by a "Vulnerability Analysis" task. The Orchestrator maintains a global understanding of the project state and the progress of each sub-task, ensuring seamless collaboration among agents and synthesizing their individual outputs into a coherent final report. It is also responsible for managing the overall context flow, deciding when to summarize information, when to retrieve new data, and which agent is best suited for a particular sub-task.
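The delegation flow described above can be sketched in a few lines of Python. The agent names, the hard-coded task decomposition, and the string-based report synthesis are illustrative assumptions for this sketch; a production orchestrator would use an LLM for the decomposition step and richer message passing between agents.

```python
class Orchestrator:
    """Minimal sketch: decompose a request, dispatch to registered agents,
    and synthesize their outputs into one report."""

    def __init__(self):
        self.agents = {}   # maps task type -> handler function
        self.results = []

    def register(self, task_type, handler):
        self.agents[task_type] = handler

    def decompose(self, request):
        # A real system would ask an LLM to decompose the request; this stub
        # hard-codes the plan for an auth-module security analysis.
        return [("code_scan", "auth"), ("vuln_analysis", "auth")]

    def run(self, request):
        for task_type, target in self.decompose(request):
            self.results.append(self.agents[task_type](target))
        return " | ".join(self.results)  # synthesized "final report"


orchestrator = Orchestrator()
orchestrator.register("code_scan", lambda m: f"scanned module '{m}'")
orchestrator.register("vuln_analysis", lambda m: f"analyzed '{m}' for flaws")
report = orchestrator.run("Analyze security vulnerabilities in the auth module")
print(report)  # scanned module 'auth' | analyzed 'auth' for flaws
```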


Code and Design Scanners and Parsers


These are specialized tools and agents designed to ingest raw code and documentation files and transform them into structured, machine-readable formats. For code, this involves parsing source files into Abstract Syntax Trees (ASTs), extracting function definitions, class structures, variable declarations, and dependency relationships. For design documents, it might involve natural language processing to identify architectural components, data flows, and design decisions. These scanners are crucial for building the initial knowledge base upon which all subsequent analysis relies. For instance, a Python parser would generate an AST from a `.py` file, allowing the system to programmatically understand the code's structure rather than treating it as raw text. This structured representation explicitly captures the syntactic and semantic relationships within the code, making it far more amenable to detailed analysis.


Here is a small Python code snippet demonstrating how to parse a Python file into an Abstract Syntax Tree (AST):


import ast

def parse_python_file_to_ast(file_path):
    """
    Parses a Python file and returns its Abstract Syntax Tree (AST).

    This function reads the content of a specified Python file and
    uses Python's built-in 'ast' module to parse it into an AST.
    The AST provides a structured, hierarchical representation of
    the source code, making it easier for automated tools to
    understand and analyze the code's structure and components.

    Parameters:
        file_path (str): The path to the Python file that needs to be parsed.

    Returns:
        ast.Module: The root node of the AST, representing the entire
                    module, or None if the file cannot be found or
                    contains syntax errors.
    """
    try:
        with open(file_path, "r", encoding="utf-8") as file:
            source_code = file.read()
        tree = ast.parse(source_code)
        return tree
    except FileNotFoundError:
        print(f"Error: File not found at {file_path}")
        return None
    except SyntaxError as e:
        print(f"Error: Syntax error in {file_path}: {e}")
        return None

# Example usage (commented out to avoid execution in the article text):
# ast_tree = parse_python_file_to_ast("app.py")
# if ast_tree:
#    print("Successfully parsed app.py into an AST.")
#    # Further processing of the AST can be done here, e.g.,
#    # for node in ast.walk(ast_tree):
#    #     if isinstance(node, ast.FunctionDef):
#    #         print(f"  Found function: {node.name}")


Knowledge Base (Vector Store and Graph Database)


The extracted structured information is stored in a multi-modal knowledge base. This knowledge base typically comprises a vector store for semantic search and a graph database for representing relational information. The vector store, often powered by embeddings generated from code snippets, documentation paragraphs, and design descriptions, enables Retrieval Augmented Generation (RAG) by allowing the system to quickly find semantically similar pieces of information relevant to a given query. The graph database, on the other hand, stores entities like classes, functions, modules, files, and their relationships (e.g., 'calls,' 'imports,' 'inherits from,' 'is documented by'). This graph structure is fundamental for GraphRAG, enabling complex queries about dependencies, architectural patterns, and data flow. The combination of these two storage mechanisms provides both semantic understanding and explicit relational context, which is critical for deep code analysis.
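To make the vector-store half of this concrete, here is a deliberately tiny sketch of semantic retrieval using bag-of-words vectors and cosine similarity in place of learned embeddings. The indexed snippets and file names are invented for illustration; a real system would use an embedding model and a dedicated vector database.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for an embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical indexed snippets keyed by source file
documents = {
    "models.py": "class User definition with username and email columns",
    "app.py": "flask routes for creating users and listing products",
    "utils.py": "uuid generation and notification helper functions",
}
index = {path: embed(text) for path, text in documents.items()}

def retrieve(query, k=1):
    # Rank documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda p: cosine(q, index[p]), reverse=True)
    return ranked[:k]

print(retrieve("definition of User class"))  # ['models.py']
```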


Specialized Agents


Beyond the Orchestrator, a suite of specialized agents performs specific analysis tasks. Examples include a 'Dependency Analyzer Agent' that maps out inter-module and external library dependencies, a 'Design Pattern Identifier Agent' that recognizes common architectural patterns (e.g., Singleton, Factory, Observer) within the codebase, a 'Security Vulnerability Agent' that looks for common security flaws, and a 'Documentation Generator Agent' that can draft or update project documentation based on code analysis. Each specialized agent is equipped with its own set of tools and has access to the shared knowledge base, allowing it to operate autonomously on its assigned sub-tasks. This modularity ensures that each agent can be optimized for its specific domain of expertise, leading to more accurate and efficient analysis.
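As a minimal illustration of one such agent, the sketch below shows a hypothetical 'Dependency Analyzer Agent' that extracts pinned external libraries from `requirements.txt` content. The class name and output format are assumptions made for this example.

```python
class DependencyAnalyzerAgent:
    """Extracts external library names and pinned versions from
    requirements-file text."""

    def analyze(self, requirements_text):
        deps = {}
        for line in requirements_text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            name, _, version = line.partition("==")
            deps[name] = version or "unpinned"
        return deps


agent = DependencyAnalyzerAgent()
reqs = "Flask==2.3.3\nFlask-SQLAlchemy==3.1.1\nrequests==2.31.0"
print(agent.analyze(reqs))
```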


Tooling Layer


The Tooling Layer provides agents with the ability to interact with the external environment and perform specific actions. These tools can range from simple file system operations (reading/writing files) and GitHub API interactions (cloning repositories, fetching pull requests) to more sophisticated static analysis tools (ESLint, SonarQube integrations), dynamic analysis tools, and even custom-built code introspection utilities. Agents invoke these tools based on their current task and the information they need to gather or actions they need to perform. For instance, a 'File Reading Tool' would be used by an agent to access the content of a specific source file, while a 'GitHub API Tool' could fetch repository metadata or commit history. This layer is crucial for providing agents with real-world interaction capabilities beyond their internal reasoning.


Here is a small Python code snippet for a simple File Reading Tool:


import os

class FileReadingTool:
    """
    A tool designed to securely read the content of a file from the local filesystem.

    This tool provides a controlled way for agents to access file contents,
    ensuring that access is restricted to a predefined base directory for
    security and operational integrity.
    """
    def __init__(self, base_path="."):
        """
        Initializes the FileReadingTool with a specified base path.

        The base path defines the root directory from which all file read
        operations are permitted. This prevents agents from accessing files
        outside the intended project scope.

        Parameters:
            base_path (str): The base directory from which files can be read.
                             Defaults to the current working directory.
        """
        self.base_path = os.path.abspath(base_path)

    def read_file(self, relative_file_path):
        """
        Reads the content of a specified file, given its path relative to the
        tool's base path.

        Before reading, it performs a security check to ensure that the
        requested file path does not attempt to access directories outside
        the allowed base path.

        Parameters:
            relative_file_path (str): The path to the file, relative to the
                                      initialized base_path.

        Returns:
            str: The content of the file if successfully read, or an error
                 message string if the file is not found, inaccessible, or
                 attempts to breach the directory boundary.
        """
        full_path = os.path.abspath(os.path.join(self.base_path, relative_file_path))
        # Ensure the resolved path stays inside base_path; a bare
        # startswith() prefix check would also match sibling directories
        # such as 'my_flask_project_evil' next to 'my_flask_project'.
        if os.path.commonpath([self.base_path, full_path]) != self.base_path:
            return f"Error: Access denied. File path {relative_file_path} is outside the allowed directory."
        try:
            with open(full_path, "r", encoding="utf-8") as f:
                content = f.read()
            return content
        except FileNotFoundError:
            return f"Error: File not found at {full_path}"
        except Exception as e:
            return f"Error reading file {full_path}: {e}"

# Example usage (commented out to avoid execution in the article text):
# Assuming 'my_flask_project' directory exists with 'app.py' inside
# file_reader = FileReadingTool(base_path="./my_flask_project")
# content = file_reader.read_file("app.py")
# if not content.startswith("Error"):
#    print("File content successfully read.")
#    # print(content[:200]) # Print first 200 characters
# else:
#    print(content)



Addressing Context Limitations


Context Window Shifting (Sliding Window)


Large files or extensive documentation cannot be fed into an LLM's context window all at once. To overcome this, the system employs a context window shifting or sliding window approach. When analyzing a long document or a large code file, the content is broken down into smaller, overlapping chunks. Each chunk is processed sequentially by the LLM, with relevant summaries or key insights from previous chunks being carried forward into the context of the next chunk. This allows the LLM to maintain a coherent understanding across the entire document without exceeding its token limit. For instance, when analyzing a 10,000-line Python file, the system might process it in 1,000-line segments, generating a summary for each segment and including the previous segment's summary in the prompt for the current one. This iterative processing ensures that no critical information is lost due to context window constraints.
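The segment-by-segment pass can be sketched as follows. The `carried_summary` here is just a line-range label standing in for the LLM-generated summary that would be included in the next chunk's prompt; the chunk and overlap sizes are arbitrary choices for this sketch.

```python
def sliding_window_analyze(lines, chunk_size=1000, overlap=100):
    """Walk a long file in overlapping chunks, carrying a summary forward."""
    summaries = []
    step = chunk_size - overlap
    for start in range(0, len(lines), step):
        chunk = lines[start:start + chunk_size]
        # An LLM call would go here, receiving the previous summary plus the
        # current chunk; we record a line-range label as the stand-in summary.
        carried_summary = f"lines {start}-{start + len(chunk) - 1}"
        summaries.append(carried_summary)
        if start + chunk_size >= len(lines):
            break  # the final chunk reached the end of the file
    return summaries


fake_file = [f"line {i}" for i in range(2500)]
chunks = sliding_window_analyze(fake_file)
print(chunks)
```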


Retrieval Augmented Generation (RAG)


RAG is a cornerstone of this architecture for managing context. Instead of trying to fit all potentially relevant information into the LLM's immediate context, RAG allows the system to dynamically retrieve only the most pertinent information from its knowledge base. When an agent needs to answer a question or perform an analysis, it first formulates a query. This query is then used to search the vector store for semantically similar code snippets, documentation fragments, or design descriptions. The retrieved top-k results are then included in the LLM's prompt alongside the original query, significantly enriching the LLM's understanding without overwhelming its context window. For example, if an agent is analyzing a function `process_order` and needs to understand how `Order` objects are defined, it can query the vector store for 'definition of Order class' and retrieve the relevant `models.py` snippet. This on-demand retrieval mechanism ensures that the LLM always has access to the most relevant, up-to-date information.
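The prompt-assembly step can be sketched minimally as below, with the vector-store lookup stubbed out to return a fixed snippet; the `retrieve_snippets` function and its result are assumptions for illustration.

```python
def retrieve_snippets(query, k=2):
    # Stub: a real implementation would do a vector-store similarity search.
    return ["class User(db.Model): ...  # from models.py"]

def build_rag_prompt(query):
    # Prepend the retrieved context to the user's question for the LLM call.
    context = "\n".join(retrieve_snippets(query))
    return (
        "Use the following retrieved context to answer.\n"
        f"--- context ---\n{context}\n"
        f"--- question ---\n{query}"
    )

prompt = build_rag_prompt("How are Order objects defined?")
print(prompt)
```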


GraphRAG for Relational Understanding


Building upon traditional RAG, GraphRAG leverages the graph database to provide even richer, context-aware retrieval. When a query is made, the system doesn't just retrieve semantically similar text chunks; it also traverses the graph to find related entities and their properties. For example, if an agent is analyzing a function `calculate_total` and needs to understand its dependencies, a GraphRAG query could identify all functions that call `calculate_total`, all classes it interacts with, and relevant documentation linked to these entities. This relational context is then integrated into the LLM's prompt, offering a holistic view that is impossible with simple vector search alone. This approach is particularly powerful for understanding architectural patterns, data flow, and complex dependency structures, as it explicitly models the relationships between different code components.
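The graph traversal can be illustrated with plain adjacency lists standing in for a graph database. The node names, edge types, and the two-hop depth are invented for this sketch.

```python
# Toy code graph: each node maps edge types to neighbor lists.
code_graph = {
    "calculate_total": {"called_by": ["checkout", "refund"],
                        "uses": ["Order", "TaxRule"]},
    "Order": {"documented_by": ["docs/order.md"]},
}

def graph_context(entity, depth=2):
    """Collect entities reachable from `entity` within `depth` hops."""
    related, frontier = set(), [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbors in code_graph.get(node, {}).values():
                for n in neighbors:
                    if n not in related:
                        related.add(n)
                        next_frontier.append(n)
        frontier = next_frontier
    return sorted(related)

related = graph_context("calculate_total")
print(related)
```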


Intelligent Summarization


Throughout the analysis process, various forms of summarization are employed to condense information and reduce context load. After an agent processes a large file or a series of interactions, it can generate a concise summary of its findings, key insights, or identified issues. These summaries are then stored in the knowledge base and can be retrieved by other agents or the Orchestrator when a high-level overview is needed. This hierarchical summarization prevents the LLM from repeatedly processing the same raw data, allowing it to focus on higher-level reasoning. For instance, after a 'Dependency Analyzer Agent' completes its task, it might provide a summary like "Identified 5 external libraries: Flask, SQLAlchemy, Requests. Flask used in `app.py`, SQLAlchemy in `models.py` for ORM operations, Requests in `utils.py` for external API calls." These summaries act as condensed knowledge representations, making subsequent queries more efficient.
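A toy sketch of rolling per-file summaries up into one project-level summary; the `condense` stub truncates text where a real system would ask the LLM to summarize, and the sample summaries are invented.

```python
def condense(text, limit=60):
    # Stand-in for an LLM summarization call: simple truncation.
    return text if len(text) <= limit else text[:limit].rstrip() + "..."

# Hypothetical per-file summaries produced by earlier agent runs
file_summaries = {
    "app.py": "Flask routes for listing and creating users; wires up the database.",
    "models.py": "SQLAlchemy models User and Product.",
    "utils.py": "UUID generation and a mocked notification sender.",
}

# Hierarchical roll-up: condensed file summaries become the project summary.
project_summary = " ".join(
    f"{name}: {condense(text)}" for name, text in file_summaries.items()
)
print(project_summary)
```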


Strategic Tool Use


Tools are not just for data ingestion; they are also a key strategy for managing context. Instead of feeding an entire database schema or a complex API specification into the LLM, an agent can use a 'Database Schema Query Tool' or an 'API Documentation Lookup Tool' to retrieve only the specific piece of information it needs at that moment. This 'just-in-time' retrieval of information via tools significantly reduces the burden on the LLM's context window, allowing it to focus its reasoning capabilities on the task at hand rather than on parsing vast amounts of irrelevant data. For example, if an agent needs to know the parameters of a specific function, it calls a 'Code Introspection Tool' with the function name, rather than having the entire file in context. This approach is highly efficient and ensures that the LLM's valuable context window is used for high-level reasoning.
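The 'Code Introspection Tool' mentioned above can be approximated with Python's built-in `ast` module: given source text and a function name, it returns only that function's parameter names, so the full file never needs to enter the LLM's context. The sample source and function name are illustrative.

```python
import ast

def get_function_params(source_code, function_name):
    """Return the parameter names of one function, or None if not found."""
    tree = ast.parse(source_code)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == function_name:
            return [arg.arg for arg in node.args.args]
    return None

source = "def send_notification(message, urgency='low'):\n    pass\n"
print(get_function_params(source, "send_notification"))  # ['message', 'urgency']
```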


Task Decomposition and Agentic Collaboration


Complex analysis tasks are systematically broken down into smaller, more manageable sub-tasks. Each sub-task is then assigned to a specialized agent that possesses the specific expertise and tools required for that task. This decomposition naturally limits the scope of information each agent needs to process at any given time. For instance, analyzing an entire repository for security vulnerabilities might be broken into: 1. 'Identify authentication-related files' (File Scanner Agent), 2. 'Analyze authentication logic for common flaws' (Security Agent on specific files), 3. 'Check dependency versions for known CVEs' (Dependency Agent + Vulnerability DB Tool). Each agent operates with a focused context, and their findings are later synthesized by the Orchestrator, effectively distributing the context load across multiple processing units. This parallel and distributed approach is fundamental to scaling the system to large codebases.
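Such focused sub-tasks can be run concurrently, as sketched below with a thread pool; the two agent behaviors are stubbed functions invented for this example.

```python
from concurrent.futures import ThreadPoolExecutor

def file_scanner(scope):
    # Stubbed 'File Scanner Agent' behavior
    return f"auth files found in {scope}"

def security_check(scope):
    # Stubbed 'Security Agent' behavior
    return f"no hard-coded secrets in {scope}"

# Each agent sees only its own narrow scope of the project.
subtasks = [(file_scanner, "src/auth"), (security_check, "src/auth")]

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fn, scope) for fn, scope in subtasks]
    findings = [f.result() for f in futures]  # collected in submission order
print(findings)
```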


Fine-tuning for Domain Specificity


While powerful, general-purpose LLMs may not always excel at highly specialized code analysis tasks without additional training. Fine-tuning involves training a pre-trained LLM on a smaller, domain-specific dataset. For this Agentic AI, fine-tuning could involve using a dataset of code snippets paired with security vulnerability descriptions, design pattern examples, or refactoring suggestions. This process adapts the LLM's internal representations and reasoning capabilities to better understand the nuances of code, programming languages, and software engineering principles. The result is an LLM that is more accurate and efficient at tasks like identifying subtle bugs, suggesting idiomatic code improvements, or recognizing complex architectural smells, thereby enhancing the overall performance of the agent system. Fine-tuning helps bridge the gap between general linguistic understanding and specialized technical expertise.
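As an illustration of what one training example in such a dataset might look like, here is a hypothetical JSONL record pairing a vulnerable snippet with its description; the schema is an assumption for this sketch, not the format required by any particular fine-tuning API.

```python
import json

# Hypothetical fine-tuning record: code snippet in, vulnerability label out.
record = {
    "prompt": ("Review this snippet:\n"
               "query = 'SELECT * FROM users WHERE id=' + user_id"),
    "completion": ("SQL injection risk: use parameterized queries "
                   "instead of string concatenation."),
}
line = json.dumps(record)  # one line of the JSONL training file
print(line)
```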


Conclusion


By combining a multi-agent architecture with advanced context management techniques such as context window shifting, RAG, GraphRAG, intelligent summarization, strategic tool use, and task decomposition, this Agentic AI system transcends the limitations of traditional LLM applications. It offers a powerful and scalable solution for deep code and design analysis, enabling developers and architects to gain unprecedented insights into their software projects. This approach paves the way for more intelligent development workflows, automated code reviews, and proactive identification of architectural debt, ultimately leading to higher quality and more maintainable software systems. The continuous evolution of LLM capabilities, coupled with sophisticated agentic designs, promises a future where AI becomes an indispensable partner in every stage of the software development lifecycle.


Addendum: Running Example - A Basic Flask Project Analysis


To illustrate the concepts discussed, let us consider a simple Flask web application project. The goal of our simplified `ProjectAnalyzerAgent` will be to scan this project, identify its Python files, and extract basic structural information such as imports, classes, and functions within each file. This demonstrates the initial data ingestion and parsing steps that form the foundation of the knowledge base.


Project Structure:


The example project, named `my_flask_project`, has the following directory structure:


    my_flask_project/
    ├── app.py
    ├── models.py
    ├── utils.py
    └── requirements.txt


File Contents:


1.  File: `my_flask_project/app.py`


# app.py
# This file defines the main Flask application and its routes.

from flask import Flask, jsonify, request
from models import db, User, Product
from utils import generate_uuid, send_notification

# Initialize the Flask application
app = Flask(__name__)
# Configure the database URI for SQLAlchemy
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///site.db'
# Initialize SQLAlchemy with the Flask app
db.init_app(app)

@app.route('/')
def home():
    """
    Handles the root URL, returning a welcome message.
    """
    return "Welcome to the Flask Project!"

@app.route('/users', methods=['GET'])
def get_users():
    """
    Retrieves all users from the database and returns them as a JSON list.
    """
    users = User.query.all()
    user_list = [{"id": user.id, "name": user.username, "email": user.email} for user in users]
    return jsonify(user_list)

@app.route('/user', methods=['POST'])
def create_user():
    """
    Creates a new user based on JSON data provided in the request body.
    Also sends a notification upon successful user creation.
    """
    data = request.get_json()
    new_user = User(username=data['username'], email=data['email'])
    db.session.add(new_user)
    db.session.commit()
    send_notification(f"New user created: {new_user.username}")
    return jsonify({"message": "User created successfully", "id": new_user.id}), 201

if __name__ == '__main__':
    # Ensure database tables are created within the application context
    with app.app_context():
        db.create_all()
    # Run the Flask development server
    app.run(debug=True)


2.  File: `my_flask_project/models.py`


# models.py
# This file defines the database models using Flask-SQLAlchemy.

from flask_sqlalchemy import SQLAlchemy

# Initialize the SQLAlchemy instance
db = SQLAlchemy()

class User(db.Model):
    """
    Represents a User in the database.
    """
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)

    def __repr__(self):
        """
        Returns a string representation of the User object.
        """
        return '<User %r>' % self.username

class Product(db.Model):
    """
    Represents a Product in the database.
    """
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100), nullable=False)
    price = db.Column(db.Float, nullable=False)

    def __repr__(self):
        """
        Returns a string representation of the Product object.
        """
        return '<Product %r>' % self.name


3.  File: `my_flask_project/utils.py`


# utils.py
# This file contains utility functions used across the application.

import uuid
import requests

def generate_uuid():
    """
    Generates a unique identifier using the UUID4 standard.

    Returns:
        str: A string representation of a universally unique identifier.
    """
    return str(uuid.uuid4())

def send_notification(message):
    """
    Sends a notification to an external service (currently mocked).

    In a real-world scenario, this function would interact with a notification
    API (e.g., a messaging service, email service, or push notification provider)
    to deliver the specified message. The 'requests' library is imported to
    demonstrate where an actual API call would typically occur.

    Parameters:
        message (str): The content of the notification to be sent.
    """
    print(f"Sending notification: {message}")
    # Example of using requests library (mocked API call)
    # try:
    #    response = requests.post("https://api.notifications.com/send", json={"message": message})
    #    response.raise_for_status()
    #    print("Notification sent successfully.")
    # except requests.exceptions.RequestException as e:
    #    print(f"Failed to send notification: {e}")


4.  File: `my_flask_project/requirements.txt`


Flask==2.3.3
Flask-SQLAlchemy==3.1.1
requests==2.31.0



Simplified Project Analyzer Agent:


The following Python code defines the `FileReadingTool` class and the `parse_python_file_to_ast` function (as presented in the article) and then introduces a `ProjectAnalyzerAgent`. This agent uses these tools to scan the `my_flask_project` directory, identify Python files, parse them into ASTs, and extract basic information like imports, class names, and function names. This extracted information represents a foundational layer of the knowledge base.


To run this example:

1.  Create a directory named `my_flask_project`.

2.  Inside `my_flask_project`, create the `app.py`, `models.py`, `utils.py`, and `requirements.txt` files with the content provided above.

3.  Save the following Python code as a separate file (e.g., `analyzer_script.py`) in the *parent directory* of `my_flask_project`.

4.  Execute `python analyzer_script.py` from your terminal.


import os
import ast
import json

# --- FileReadingTool (as described in the article) ---
class FileReadingTool:
    """
    A tool designed to securely read the content of a file from the local filesystem.

    This tool provides a controlled way for agents to access file contents,
    ensuring that access is restricted to a predefined base directory for
    security and operational integrity.
    """
    def __init__(self, base_path="."):
        """
        Initializes the FileReadingTool with a specified base path.

        The base path defines the root directory from which all file read
        operations are permitted. This prevents agents from accessing files
        outside the intended project scope.

        Parameters:
            base_path (str): The base directory from which files can be read.
                             Defaults to the current working directory.
        """
        self.base_path = os.path.abspath(base_path)

    def read_file(self, relative_file_path):
        """
        Reads the content of a specified file, given its path relative to the
        tool's base path.

        Before reading, it performs a security check to ensure that the
        requested file path does not attempt to access directories outside
        the allowed base path.

        Parameters:
            relative_file_path (str): The path to the file, relative to the
                                      initialized base_path.

        Returns:
            str: The content of the file if successfully read, or an error
                 message string if the file is not found, inaccessible, or
                 attempts to breach the directory boundary.
        """
        full_path = os.path.abspath(os.path.join(self.base_path, relative_file_path))
        # Ensure the resolved path stays inside base_path; a bare
        # startswith() prefix check would also match sibling directories
        # such as 'my_flask_project_evil' next to 'my_flask_project'.
        if os.path.commonpath([self.base_path, full_path]) != self.base_path:
            return f"Error: Access denied. File path {relative_file_path} is outside the allowed directory."
        try:
            with open(full_path, "r", encoding="utf-8") as f:
                content = f.read()
            return content
        except FileNotFoundError:
            return f"Error: File not found at {full_path}"
        except Exception as e:
            return f"Error reading file {full_path}: {e}"

# --- parse_python_file_to_ast function (as described in the article) ---
def parse_python_file_to_ast(file_path):
    """
    Parses a Python file and returns its Abstract Syntax Tree (AST).

    This function reads the content of a specified Python file and
    uses Python's built-in 'ast' module to parse it into an AST.
    The AST provides a structured, hierarchical representation of
    the source code, making it easier for automated tools to
    understand and analyze the code's structure and components.

    Parameters:
        file_path (str): The path to the Python file that needs to be parsed.

    Returns:
        ast.Module: The root node of the AST, representing the entire
                    module, or None if the file cannot be found or
                    contains syntax errors.
    """
    try:
        with open(file_path, "r", encoding="utf-8") as file:
            source_code = file.read()
        tree = ast.parse(source_code)
        return tree
    except FileNotFoundError:
        print(f"Error: File not found at {file_path}")
        return None
    except SyntaxError as e:
        print(f"Error: Syntax error in {file_path}: {e}")
        return None

# --- ProjectAnalyzerAgent ---
class ProjectAnalyzerAgent:
    """
    A simplified agent designed to analyze a project directory.

    This agent's primary function is to identify all Python files within
    a given project root, read their contents, and extract basic structural
    information like import statements, class definitions, and function
    definitions using AST parsing. This forms a foundational understanding
    of the project's codebase.
    """
    def __init__(self, project_root_path):
        """
        Initializes the ProjectAnalyzerAgent with the root path of the project.

        Parameters:
            project_root_path (str): The absolute or relative path to the
                                     project directory to be analyzed.
        """
        self.project_root = os.path.abspath(project_root_path)
        self.file_reader = FileReadingTool(base_path=self.project_root)
        self.project_structure = {}  # Dictionary to store parsed information

    def _get_python_files(self):
        """
        Recursively finds all Python files within the project directory.

        This internal helper method walks through the project root and its
        subdirectories to locate all files ending with '.py'.

        Returns:
            list: A list of strings, where each string is the path to a
                  Python file relative to the project root.
        """
        python_files = []
        for root, _, files in os.walk(self.project_root):
            for file in files:
                if file.endswith(".py"):
                    relative_path = os.path.relpath(os.path.join(root, file), self.project_root)
                    python_files.append(relative_path)
        return python_files

    def analyze_project_structure(self):
        """
        Analyzes the entire project to build a basic understanding of its
        Python files and their contents.

        It iterates through all identified Python files, reads their content
        using the FileReadingTool, parses them into an AST, and then
        extracts key structural elements.

        Returns:
            dict: A dictionary containing the parsed project structure,
                  where keys are file paths and values are dictionaries
                  of imports, classes, and functions found in that file.
        """
        print(f"Starting analysis for project: {self.project_root}")
        python_files = self._get_python_files()
        self.project_structure = {"files": {}}

        for py_file in python_files:
            print(f"Analyzing file: {py_file}")
            file_content = self.file_reader.read_file(py_file)
            if file_content.startswith("Error"):
                print(file_content)
                continue

            # Parse the file content into an AST
            ast_tree = parse_python_file_to_ast(os.path.join(self.project_root, py_file))
            if not ast_tree:
                continue

            file_info = {
                "imports": [],
                "classes": [],
                "functions": []
            }

            # Walk the AST to extract imports, class definitions, and function definitions
            for node in ast.walk(ast_tree):
                if isinstance(node, ast.Import):
                    # Handle 'import module' statements
                    for alias in node.names:
                        file_info["imports"].append(alias.name)
                elif isinstance(node, ast.ImportFrom):
                    # Handle 'from module import name' statements
                    module = node.module if node.module else ""
                    for alias in node.names:
                        file_info["imports"].append(f"{module}.{alias.name}" if module else alias.name)
                elif isinstance(node, ast.ClassDef):
                    # Extract class names
                    file_info["classes"].append(node.name)
                elif isinstance(node, ast.FunctionDef):
                    # Extract function names
                    file_info["functions"].append(node.name)

            self.project_structure["files"][py_file] = file_info
        print("Project analysis complete.")
        return self.project_structure

    def get_project_summary(self):
        """
        Generates a high-level, human-readable summary of the analyzed project structure.

        This method provides a concise overview of the findings from the
        `analyze_project_structure` method, listing files and their key
        extracted components.

        Returns:
            str: A formatted string containing the project summary.
        """
        summary = "Project Summary:\n"
        summary += f"Root Path: {self.project_root}\n"
        summary += f"Total Python files found: {len(self.project_structure['files'])}\n\n"

        for file_path, info in self.project_structure["files"].items():
            summary += f"  File: {file_path}\n"
            summary += f"    Imports: {', '.join(info['imports']) if info['imports'] else 'None'}\n"
            summary += f"    Classes: {', '.join(info['classes']) if info['classes'] else 'None'}\n"
            summary += f"    Functions: {', '.join(info['functions']) if info['functions'] else 'None'}\n\n"
        return summary

# --- Main execution block for the running example ---
if __name__ == '__main__':
    # Define the path to the example project relative to where this script is run
    project_path = "./my_flask_project"

    # Create an instance of the ProjectAnalyzerAgent
    analyzer = ProjectAnalyzerAgent(project_path)

    # Run the analysis
    analyzed_data = analyzer.analyze_project_structure()

    # Print the detailed analysis data in JSON format for easy inspection
    print("\nDetailed Analysis Data (JSON format):")
    print(json.dumps(analyzed_data, indent=2))

    # Print a human-readable summary of the project
    print("\n" + analyzer.get_project_summary())
