Short Description in a Nutshell
The Go Performance Optimization LLM Agent uses GPT-4 to analyze Go codebases, propose performance optimizations with rationales and confidence scores, and interactively apply approved changes with automatic backups.
Introduction
The landscape of software development is experiencing a transformative shift with the integration of Large Language Models (LLMs) into development workflows. While traditional static analysis tools have served developers well, they often fall short in understanding the nuanced context of complex codebases and providing intelligent, contextual optimization suggestions.
Enter the Go Performance Optimization LLM Agent - a groundbreaking tool that harnesses the power of advanced language models like GPT-4 to analyze, understand, and optimize Go codebases with human-like intelligence. This revolutionary agent doesn't just identify patterns; it comprehends code intent, architectural decisions, and performance implications to deliver sophisticated optimization recommendations that rival those of senior performance engineers.
The Evolution Beyond Traditional Static Analysis
Limitations of Conventional Approaches
Traditional performance optimization tools typically rely on:
- Rule-based pattern matching that misses context-dependent optimizations
- Predefined heuristics that can't adapt to unique codebase characteristics
- Isolated analysis that doesn't consider broader architectural implications
- Generic suggestions that may not fit specific use cases
These approaches often result in:
- False positives that waste developer time
- Missed optimization opportunities in complex scenarios
- Generic advice that doesn't account for business logic
- Limited understanding of trade-offs and implications
The LLM Advantage
Large Language Models bring unprecedented capabilities to code analysis:
- Contextual Understanding: LLMs can comprehend the purpose and intent behind code
- Cross-file Analysis: Understanding relationships and dependencies across entire codebases
- Adaptive Intelligence: Learning from patterns and adapting suggestions to specific contexts
- Natural Language Explanations: Providing detailed rationales in human-readable form
- Code Generation: Creating optimized implementations, not just suggestions
Architecture: Where AI Meets Software Engineering
The Go Performance Optimization LLM Agent is built on a sophisticated architecture that seamlessly integrates AI capabilities with robust software engineering practices:
+-------------------+      +--------------------+      +-------------------+
|    File System    |      |     Go Parser      |      |    LLM Client     |
|      Handler      |------|       (AST)        |------|      (GPT-4)      |
+-------------------+      +--------------------+      +-------------------+
          |                                                      |
          |                +--------------------+                |
          +----------------|       Backup       |                |
                           |      Manager       |                |
                           +--------------------+                |
                                     |                           |
+-------------------+      +--------------------+      +-------------------+
|    Interactive    |      |      LLM Code      |      |        LLM        |
|        CLI        |------|     Generator      |------|      Analyzer     |
+-------------------+      +--------------------+      +-------------------+
Core LLM Integration Components
1. LLM Client (internal/llm/client.go)
- Manages communication with OpenAI's GPT-4 API
- Handles prompt engineering and response parsing
- Implements retry logic and error handling
- Supports multiple LLM providers through interface abstraction
2. LLM Performance Analyzer (internal/analyzer/llm_analyzer.go)
- Constructs sophisticated prompts for code analysis
- Processes LLM responses into actionable optimization suggestions
- Filters recommendations based on configuration rules
- Maintains context across multiple files and functions
3. LLM Code Generator (internal/generator/llm_generator.go)
- Generates optimized code implementations using LLM
- Validates generated code for syntax and logic correctness
- Ensures generated code maintains original functionality
- Applies production-ready coding standards
LLM-Powered Optimization Techniques
1. Contextual Caching Intelligence
Unlike traditional tools that simply detect repeated function calls, the LLM agent understands:
Context-Aware Detection:
// The LLM recognizes this pattern and understands the business context
func (s *UserService) GetUserProfile(userID string) (*Profile, error) {
// LLM identifies: expensive database query called frequently
user, err := s.db.Query("SELECT * FROM users WHERE id = ?", userID)
if err != nil {
return nil, err
}
// LLM understands: complex computation that could benefit from caching
profile := s.buildComplexProfile(user)
return profile, nil
}
LLM-Generated Optimization:
// LLM generates sophisticated caching with TTL and invalidation
// CacheEntry pairs a cached profile with the time it was stored
type CacheEntry struct {
	Profile   *Profile
	Timestamp time.Time
}

type UserService struct {
	db    Database
	cache *sync.Map
	ttl   time.Duration
}
func (s *UserService) GetUserProfile(userID string) (*Profile, error) {
// LLM-generated cache key strategy
cacheKey := fmt.Sprintf("user_profile:%s", userID)
// Check cache with TTL validation
if cached, ok := s.cache.Load(cacheKey); ok {
if entry := cached.(*CacheEntry); time.Since(entry.Timestamp) < s.ttl {
return entry.Profile, nil
}
s.cache.Delete(cacheKey) // Expired entry cleanup
}
// Original logic with caching
user, err := s.db.Query("SELECT * FROM users WHERE id = ?", userID)
if err != nil {
return nil, err
}
profile := s.buildComplexProfile(user)
// Store in cache with metadata
s.cache.Store(cacheKey, &CacheEntry{
Profile: profile,
Timestamp: time.Now(),
})
return profile, nil
}
2. Intelligent Concurrency Optimization
The LLM agent performs sophisticated analysis to identify safe parallelization opportunities:
Advanced Dependency Analysis:
// LLM analyzes data dependencies and side effects
func ProcessOrders(orders []Order) []Result {
var results []Result
for _, order := range orders {
// LLM identifies: independent operations suitable for concurrency
validated := validateOrder(order) // No shared state
enriched := enrichOrderData(validated) // External API call
processed := processPayment(enriched) // Independent transaction
results = append(results, processed)
}
return results
}
LLM-Generated Concurrent Implementation:
// LLM generates production-ready concurrent processing
func ProcessOrders(orders []Order) []Result {
numWorkers := min(runtime.NumCPU(), len(orders))
orderChan := make(chan Order, len(orders))
resultChan := make(chan Result, len(orders))
// Worker pool pattern generated by LLM
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for order := range orderChan {
// LLM preserves original logic in concurrent context
validated := validateOrder(order)
enriched := enrichOrderData(validated)
processed := processPayment(enriched)
resultChan <- processed
}
}()
}
// Send work to workers
go func() {
defer close(orderChan)
for _, order := range orders {
orderChan <- order
}
}()
// Collect results (completion order, not input order)
go func() {
wg.Wait()
close(resultChan)
}()
results := make([]Result, 0, len(orders))
for result := range resultChan {
results = append(results, result)
}
return results
}
3. Memory Optimization with Deep Understanding
The LLM agent comprehends memory usage patterns and generates optimizations that consider the entire application context:
Before: Memory-Inefficient Pattern
func AggregateData(datasets []Dataset) Summary {
var allData []DataPoint
// LLM identifies: repeated allocations and memory growth
for _, dataset := range datasets {
for _, point := range dataset.Points {
// Multiple append operations causing reallocations
allData = append(allData, transformPoint(point))
}
}
return calculateSummary(allData)
}
LLM-Generated Memory-Optimized Version:
func AggregateData(datasets []Dataset) Summary {
// LLM calculates optimal pre-allocation size
totalPoints := 0
for _, dataset := range datasets {
totalPoints += len(dataset.Points)
}
// Pre-allocate with exact capacity to avoid reallocations
allData := make([]DataPoint, 0, totalPoints)
// With capacity pre-allocated, the appends below never trigger reallocation
for _, dataset := range datasets {
	for _, point := range dataset.Points {
		allData = append(allData, transformPoint(point))
	}
}
return calculateSummary(allData)
}
The LLM Analysis Process: Deep Code Understanding
Prompt Engineering for Code Analysis
The agent uses sophisticated prompt engineering to guide the LLM's analysis:
func (c *OpenAIClient) buildAnalysisPrompt(userPrompt string, context *CodebaseContext) string {
prompt := fmt.Sprintf(`
You are an expert Go performance engineer analyzing a production codebase.
ANALYSIS CONTEXT:
- Codebase size: %d files
- Dependencies: %v
- Performance focus: %s
ANALYSIS REQUIREMENTS:
1. Identify performance bottlenecks with high confidence
2. Consider the broader architectural context
3. Prioritize optimizations by impact vs. complexity
4. Ensure optimizations maintain code readability
5. Account for Go runtime characteristics and GC behavior
For each optimization, provide:
- Specific line numbers and code snippets
- Detailed technical rationale
- Performance impact estimation
- Implementation complexity assessment
- Potential risks or trade-offs
CODEBASE TO ANALYZE:
%s
`, len(context.Files), context.Dependencies, userPrompt, formatCodebase(context))
return prompt
}
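The formatCodebase helper referenced above is not shown in the article. A minimal sketch of what it might look like, under the assumption that it simply concatenates each file's contents beneath a header (the same layout the client.go prompt builder produces inline):
// Hypothetical helper (not from the original source): concatenate each file
// under a header so the LLM can see file boundaries.
// Requires "fmt" and "strings" from the standard library.
func formatCodebase(context *CodebaseContext) string {
	var b strings.Builder
	for path, content := range context.Files {
		fmt.Fprintf(&b, "=== FILE: %s ===\n%s\n\n", path, content)
	}
	return b.String()
}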
Structured LLM Response Processing
The agent processes LLM responses into actionable optimization suggestions:
{
"optimizations": [
{
"type": "concurrency",
"file_path": "internal/processor/batch.go",
"line_start": 45,
"line_end": 62,
"description": "Parallelize independent batch processing operations",
"rationale": "The current sequential processing of batches creates a bottleneck. Each batch operation is independent and involves I/O operations that can benefit from concurrent execution. The current implementation processes 1000 items sequentially, taking ~5 seconds. Parallel processing could reduce this to ~1.2 seconds on a 4-core system.",
"original_code": "for _, batch := range batches {\n result := processBatch(batch)\n results = append(results, result)\n}",
"optimized_code": "// Concurrent batch processing with worker pool\nvar wg sync.WaitGroup\nresultChan := make(chan BatchResult, len(batches))\n\nfor _, batch := range batches {\n wg.Add(1)\n go func(b Batch) {\n defer wg.Done()\n result := processBatch(b)\n resultChan <- result\n }(batch)\n}\n\ngo func() {\n wg.Wait()\n close(resultChan)\n}()\n\nfor result := range resultChan {\n results = append(results, result)\n}",
"estimated_impact": "High - 75% performance improvement",
"confidence": 0.92
}
],
"summary": "Identified 3 high-impact optimizations focusing on concurrency and memory allocation patterns. The codebase shows good structure but has several opportunities for performance improvements in data processing pipelines.",
"confidence": 0.89
}
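This JSON maps one-to-one onto the AnalysisResponse and OptimizationSuggestion structs defined later in internal/llm/client.go, so decoding it is a single json.Unmarshal call. The fragment below is an illustration that mirrors what parseAnalysisResponse does with the message content:
// Decode the LLM's JSON reply into the typed response used by the analyzer.
var analysis llm.AnalysisResponse
if err := json.Unmarshal([]byte(content), &analysis); err != nil {
	return nil, fmt.Errorf("failed to parse analysis response: %w", err)
}
for _, opt := range analysis.Optimizations {
	fmt.Printf("%s (%s:%d-%d) confidence=%.2f\n",
		opt.Description, opt.FilePath, opt.LineStart, opt.LineEnd, opt.Confidence)
}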
Interactive Intelligence: Human-AI Collaboration
Enhanced User Experience
The LLM agent provides an interactive experience that educates while optimizing:
LLM OPTIMIZATION 1 of 3
Type: concurrency
File: internal/processor/batch.go (lines 45-62)
Description: Parallelize independent batch processing operations
LLM Rationale: The current sequential processing creates a bottleneck. Analysis
shows each batch operation is independent with I/O operations that benefit from
concurrent execution. Performance testing indicates 75% improvement potential.
Estimated Impact: High - 75% performance improvement
LLM Confidence: 0.92
Original Code:
for _, batch := range batches {
result := processBatch(batch)
results = append(results, result)
}
LLM-Optimized Code:
// Concurrent batch processing with worker pool
var wg sync.WaitGroup
resultChan := make(chan BatchResult, len(batches))
for _, batch := range batches {
wg.Add(1)
go func(b Batch) {
defer wg.Done()
result := processBatch(b)
resultChan <- result
}(batch)
}
go func() {
wg.Wait()
close(resultChan)
}()
for result := range resultChan {
results = append(results, result)
}
Apply this LLM optimization? (y/n/s=skip all/e=explain): e
LLM EXPLANATION:
This optimization transforms sequential batch processing into concurrent execution
using a worker pool pattern. Here's why it improves performance:
1. Parallelization Benefits: Each batch operation is independent and involves
I/O operations (database queries, API calls). Running them concurrently
utilizes multiple CPU cores and overlaps I/O wait times.
2. Worker Pool Pattern: Instead of creating unlimited goroutines, this uses
a controlled approach that prevents resource exhaustion while maximizing
throughput.
3. Performance Impact: Testing shows 75% improvement because:
- I/O operations run in parallel instead of sequentially
- CPU utilization increases from ~25% to ~90%
- Total processing time reduces from 5s to 1.2s
4. Trade-offs:
- Slightly increased memory usage for goroutines and channels
- Results may arrive in different order (if order matters, additional
synchronization needed)
- More complex error handling requirements
5. When to Use: Ideal for independent operations with I/O components.
Avoid if operations have dependencies or shared state modifications.
Apply this LLM optimization? (y/n/s=skip all): y
Production-Ready LLM Integration
Robust Error Handling and Validation
func (ocg *OpenAICodeGenerator) GenerateOptimizedCode(opt analyzer.LLMOptimization) ([]byte, error) {
// Multi-layered validation approach
// 1. Pre-generation validation
if err := ocg.validateOptimizationRequest(opt); err != nil {
return nil, fmt.Errorf("invalid optimization request: %w", err)
}
// 2. LLM code generation with retry logic
prompt := ocg.buildCodeGenerationPrompt(opt)
var response *llm.CodeGenerationResponse
var err error
for attempt := 1; attempt <= 3; attempt++ {
	response, err = ocg.llmClient.GenerateOptimizedCode(prompt, opt.OriginalCode)
if err == nil {
break
}
ocg.logger.Printf("LLM generation attempt %d failed: %v", attempt, err)
if attempt < 3 {
time.Sleep(time.Duration(attempt) * time.Second) // Linear backoff between retries
}
}
if err != nil {
return nil, fmt.Errorf("LLM code generation failed after 3 attempts: %w", err)
}
// 3. Post-generation validation
if err := ocg.validateGeneratedCode(response.OptimizedCode); err != nil {
return nil, fmt.Errorf("generated code validation failed: %w", err)
}
// 4. Syntax and compilation check
if err := ocg.validateGoSyntax(response.OptimizedCode); err != nil {
return nil, fmt.Errorf("generated code has syntax errors: %w", err)
}
return []byte(response.OptimizedCode), nil
}
Configuration and Customization
{
"llm_config": {
"provider": "openai",
"model": "gpt-4",
"max_tokens": 4000,
"temperature": 0.1,
"timeout_seconds": 60,
"retry_attempts": 3
},
"analysis_rules": {
"enable_caching": true,
"enable_concurrency": true,
"enable_memory_optimization": true,
"enable_algorithm_optimization": true,
"confidence_threshold": 0.8,
"max_optimizations_per_file": 5
},
"safety_settings": {
"require_backup": true,
"validate_generated_code": true,
"max_file_size_mb": 10,
"excluded_patterns": ["*_test.go", "vendor/*"]
}
}
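The internal/config package itself is not included in the article's source listing. Below is a hedged sketch of how this JSON could map onto Go types, inferred from the fields the rest of the code references (cfg.BackupDir, cfg.AnalysisRules, rules.MaxConcurrencyLevel); any field or default beyond those references is an assumption:
// internal/config/config.go (illustrative sketch, not from the original post)
package config

import (
	"encoding/json"
	"os"
)

type LLMConfig struct {
	Provider       string  `json:"provider"`
	Model          string  `json:"model"`
	MaxTokens      int     `json:"max_tokens"`
	Temperature    float64 `json:"temperature"`
	TimeoutSeconds int     `json:"timeout_seconds"`
	RetryAttempts  int     `json:"retry_attempts"`
}

type AnalysisRules struct {
	EnableCaching               bool    `json:"enable_caching"`
	EnableConcurrency           bool    `json:"enable_concurrency"`
	EnableMemoryOptimization    bool    `json:"enable_memory_optimization"`
	EnableAlgorithmOptimization bool    `json:"enable_algorithm_optimization"`
	ConfidenceThreshold         float64 `json:"confidence_threshold"`
	MaxOptimizationsPerFile     int     `json:"max_optimizations_per_file"`
	MaxConcurrencyLevel         int     `json:"max_concurrency_level"` // referenced by the analyzer, absent from the sample JSON
}

type SafetySettings struct {
	RequireBackup         bool     `json:"require_backup"`
	ValidateGeneratedCode bool     `json:"validate_generated_code"`
	MaxFileSizeMB         int      `json:"max_file_size_mb"`
	ExcludedPatterns      []string `json:"excluded_patterns"`
}

type Config struct {
	LLM            LLMConfig      `json:"llm_config"`
	AnalysisRules  AnalysisRules  `json:"analysis_rules"`
	SafetySettings SafetySettings `json:"safety_settings"`
	BackupDir      string         `json:"backup_dir"` // used by main.go, assumed to live here
}

// Load reads the JSON configuration; an empty path falls back to defaults.
func Load(path string) (*Config, error) {
	cfg := &Config{BackupDir: ".go-optimizer-backups"}
	if path == "" {
		return cfg, nil
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	if err := json.Unmarshal(data, cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}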
Future Horizons: The Evolution of AI-Powered Development
Advanced LLM Capabilities
Multi-Model Ensemble:
type EnsembleLLMClient struct {
models []LLMProvider
voting VotingStrategy
}
// Combine insights from multiple LLMs for higher accuracy
func (e *EnsembleLLMClient) AnalyzeCode(prompt string, context *CodebaseContext) (*AnalysisResponse, error) {
responses := make([]*AnalysisResponse, len(e.models))
// Get analysis from multiple models
for i, model := range e.models {
resp, err := model.AnalyzeCode(prompt, context)
if err != nil {
continue
}
responses[i] = resp
}
// Combine responses using voting strategy
return e.voting.CombineResponses(responses), nil
}
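The VotingStrategy type above is left undefined. One possible interpretation, kept deliberately simple; the interface shape and the highest-confidence rule are assumptions rather than part of the project:
// A minimal VotingStrategy sketch: pick the response with the highest
// self-reported confidence. Failed models leave nil entries and are skipped.
type VotingStrategy interface {
	CombineResponses(responses []*AnalysisResponse) *AnalysisResponse
}

type HighestConfidenceVoting struct{}

func (HighestConfidenceVoting) CombineResponses(responses []*AnalysisResponse) *AnalysisResponse {
	var best *AnalysisResponse
	for _, r := range responses {
		if r == nil {
			continue
		}
		if best == nil || r.Confidence > best.Confidence {
			best = r
		}
	}
	return best
}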
Continuous Learning Integration (a sketch of one possible feedback store follows this list):
- Performance impact tracking for optimization suggestions
- Feedback loops to improve future recommendations
- Codebase-specific pattern learning
- Integration with monitoring systems for real-world validation
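A hedged sketch of what such a feedback loop could look like in practice; every name here is hypothetical, since the article describes the idea without showing an implementation:
// Hypothetical feedback store: record the real-world outcome of each applied
// optimization so future analysis prompts can mention what actually paid off.
// Requires "fmt", "strings", "sync", and "time" from the standard library.
type OptimizationOutcome struct {
	OptimizationType string // e.g. "concurrency", "caching"
	FilePath         string
	Applied          time.Time
	BaselineNsPerOp  float64 // benchmark before the change
	OptimizedNsPerOp float64 // benchmark after the change
	Accepted         bool    // did the developer keep the change?
}

type FeedbackStore struct {
	mu       sync.Mutex
	outcomes []OptimizationOutcome
}

func (fs *FeedbackStore) Record(o OptimizationOutcome) {
	fs.mu.Lock()
	defer fs.mu.Unlock()
	fs.outcomes = append(fs.outcomes, o)
}

// Summarize produces a short text block that can be appended to future
// analysis prompts so the LLM sees which optimization types worked.
func (fs *FeedbackStore) Summarize() string {
	fs.mu.Lock()
	defer fs.mu.Unlock()
	var b strings.Builder
	for _, o := range fs.outcomes {
		improvement := 0.0
		if o.BaselineNsPerOp > 0 {
			improvement = 100 * (o.BaselineNsPerOp - o.OptimizedNsPerOp) / o.BaselineNsPerOp
		}
		fmt.Fprintf(&b, "- %s in %s: %.1f%% faster, accepted=%v\n",
			o.OptimizationType, o.FilePath, improvement, o.Accepted)
	}
	return b.String()
}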
Integration Ecosystem
CI/CD Pipeline Integration:
# .github/workflows/performance-optimization.yml
name: LLM Performance Analysis
on: [pull_request]
jobs:
  optimize:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run LLM Performance Analysis
        uses: go-optimizer/action@v1
        with:
          api-key: ${{ secrets.OPENAI_API_KEY }}
          config-file: .go-optimizer.json
          create-pr: true
IDE Integration:
- Real-time optimization suggestions as developers code
- Inline performance hints and explanations
- Automated refactoring suggestions
- Performance impact predictions
Best Practices for LLM-Powered Optimization
Effective Prompt Engineering
1. Context-Rich Prompts:
prompt := fmt.Sprintf(`
Analyze this Go microservice for performance optimizations:
SERVICE CONTEXT:
- Purpose: %s
- Expected QPS: %d
- Current bottlenecks: %s
- Performance requirements: %s
CODEBASE:
%s
Focus on optimizations that:
1. Improve response times under high load
2. Reduce memory allocations
3. Enhance concurrent processing capabilities
4. Maintain code readability and testability
`, serviceContext.Purpose, serviceContext.QPS, serviceContext.Bottlenecks,
serviceContext.Requirements, codebase)
2. Iterative Refinement (see the sketch after this list):
- Start with broad analysis, then focus on specific areas
- Use LLM feedback to refine subsequent prompts
- Combine multiple analysis passes for comprehensive coverage
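A minimal sketch of such a two-pass refinement loop, reusing the Client interface from internal/llm; the loop itself is illustrative rather than code from the project:
// First pass surveys the codebase broadly; the second pass focuses the prompt
// on the areas the LLM flagged. Requires "fmt" plus the internal/llm package.
func refineAnalysis(client llm.Client, ctx *llm.CodebaseContext) (*llm.AnalysisResponse, error) {
	// Pass 1: broad survey of the whole codebase.
	broad, err := client.AnalyzeCode("Survey this codebase for its top performance hot spots.", ctx)
	if err != nil {
		return nil, err
	}
	// Pass 2: focus a follow-up prompt on the areas flagged in pass 1.
	focus := "Focus only on these areas and propose concrete optimizations:\n"
	for _, opt := range broad.Optimizations {
		focus += fmt.Sprintf("- %s (%s)\n", opt.Description, opt.FilePath)
	}
	return client.AnalyzeCode(focus, ctx)
}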
Validation and Safety
1. Multi-Layer Validation:
type ValidationPipeline struct {
validators []CodeValidator
}
func (vp *ValidationPipeline) ValidateOptimization(code string) error {
for _, validator := range vp.validators {
if err := validator.Validate(code); err != nil {
return fmt.Errorf("validation failed at %s: %w",
validator.Name(), err)
}
}
return nil
}
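One concrete validator for such a pipeline could lean on the standard go/parser package. The sketch below assumes the CodeValidator interface implied by the pipeline code (Name() string, Validate(string) error); it is an illustration, not the project's actual validator:
// Syntax validator sketch using go/parser; requires "fmt", "go/parser",
// "go/token", and "strings" from the standard library.
type SyntaxValidator struct{}

func (SyntaxValidator) Name() string { return "go-syntax" }

func (SyntaxValidator) Validate(code string) error {
	// Wrap bare snippets in a package clause so go/parser accepts them.
	src := code
	if !strings.HasPrefix(strings.TrimSpace(src), "package ") {
		src = "package snippet\n\n" + code
	}
	fset := token.NewFileSet()
	if _, err := parser.ParseFile(fset, "generated.go", src, parser.AllErrors); err != nil {
		return fmt.Errorf("syntax check failed: %w", err)
	}
	return nil
}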
2. Gradual Rollout Strategy (a minimal flag-gate example follows this list):
- Test optimizations in development environments first
- Use feature flags for gradual production deployment
- Monitor performance metrics closely
- Maintain rollback capabilities
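For example, an LLM-generated concurrent path can sit behind a flag while the original stays the default; flags.Enabled and both helper functions below are hypothetical stand-ins for whatever flag system and function names a project actually uses:
// Hypothetical flag gate: roll the optimized path out gradually and keep an
// instant rollback to the original implementation.
func ProcessOrdersGated(orders []Order) []Result {
	if flags.Enabled("llm-concurrent-order-processing") {
		return processOrdersConcurrent(orders) // LLM-optimized worker-pool version
	}
	return processOrdersSequential(orders) // original, well-tested version
}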
Conclusion: The Dawn of Intelligent Development
The Go Performance Optimization LLM Agent represents a fundamental shift in how we approach code optimization. By harnessing the power of Large Language Models, we've created a tool that doesn't just analyze code—it understands it, learns from it, and improves it with human-like intelligence.
Key Innovations
1. Contextual Intelligence: Unlike traditional tools that apply rigid rules, the LLM agent understands the broader context of code, making intelligent decisions based on business logic, architectural patterns, and performance requirements.
2. Adaptive Learning: The agent learns from each codebase, adapting its suggestions to specific patterns and requirements, becoming more effective over time.
3. Educational Value: Beyond optimization, the agent serves as a mentor, explaining the reasoning behind each suggestion and teaching developers advanced performance techniques.
4. Production Ready: Built with enterprise-grade reliability, comprehensive error handling, and safety mechanisms that ensure code quality and system stability.
Transformative Impact
The integration of LLM technology into performance optimization workflows offers unprecedented benefits:
- Democratization of Expertise: Advanced optimization techniques become accessible to developers of all skill levels
- Accelerated Development: Automatic identification and implementation of optimizations dramatically reduces time-to-performance
- Continuous Improvement: Ongoing analysis ensures codebases maintain optimal performance as they evolve
- Knowledge Transfer: Detailed explanations help teams build internal optimization expertise
The Future Landscape
As LLM technology continues to advance, we can expect even more sophisticated capabilities:
- Real-time Optimization: IDE integration providing instant performance feedback as code is written
- Predictive Analysis: Anticipating performance issues before they manifest in production
- Automated Benchmarking: Generating and running performance tests to validate optimizations
- Cross-Language Optimization: Extending intelligent optimization to entire technology stacks
The Go Performance Optimization LLM Agent is not just a tool—it's a glimpse into the future of software development, where artificial intelligence and human creativity combine to create more efficient, maintainable, and performant software systems.
In this new era of AI-assisted development, the question isn't whether to adopt LLM-powered tools, but how quickly we can integrate them into our workflows to unlock their transformative potential. The future of performance optimization is here, and it's powered by the intelligence of large language models working in harmony with human expertise.
The Go Performance Optimization LLM Agent represents the cutting edge of AI-powered development tools, combining the analytical power of large language models with production-ready software engineering practices. As we continue to push the boundaries of what's possible with AI-assisted development, tools like this will become essential components of modern software engineering workflows.
Source Code (provided as-is, without warranty or liability)
main.go
// main.go
package main
import (
"flag"
"fmt"
"log"
"os"
"github.com/go-performance-optimizer/internal/analyzer"
"github.com/go-performance-optimizer/internal/backup"
"github.com/go-performance-optimizer/internal/config"
"github.com/go-performance-optimizer/internal/filesystem"
"github.com/go-performance-optimizer/internal/generator"
"github.com/go-performance-optimizer/internal/llm"
"github.com/go-performance-optimizer/internal/parser"
"github.com/go-performance-optimizer/internal/ui"
)
func main() {
var (
path = flag.String("path", "", "Path to Go file, directory, or Git repository")
configPath = flag.String("config", "", "Path to configuration file (optional)")
verbose = flag.Bool("verbose", false, "Enable verbose logging")
llmModel = flag.String("model", "gpt-4", "LLM model to use for analysis")
apiKey = flag.String("api-key", "", "API key for LLM service")
)
flag.Parse()
if *path == "" {
fmt.Println("Usage: go-optimizer -path <file|directory|git-repo> -api-key <key> [-config <config-file>] [-verbose] [-model <model>]")
os.Exit(1)
}
if *apiKey == "" {
fmt.Println("Error: API key is required for LLM service")
os.Exit(1)
}
// Initialize logger: verbose output goes to stdout, otherwise only to stderr
logger := log.New(os.Stderr, "[GO-OPTIMIZER] ", log.LstdFlags)
if *verbose {
	logger.SetOutput(os.Stdout)
}
// Load configuration
cfg, err := config.Load(*configPath)
if err != nil {
logger.Fatalf("Failed to load configuration: %v", err)
}
// Initialize LLM client
llmClient, err := llm.NewClient(*llmModel, *apiKey, logger)
if err != nil {
logger.Fatalf("Failed to initialize LLM client: %v", err)
}
// Initialize components
fsHandler := filesystem.NewHandler(logger)
backupManager := backup.NewManager(cfg.BackupDir, logger)
goParser := parser.NewGoParser(logger)
llmAnalyzer := analyzer.NewLLMPerformanceAnalyzer(llmClient, cfg.AnalysisRules, logger)
codeGenerator := generator.NewLLMCodeGenerator(llmClient, logger)
userInterface := ui.NewCLI(logger)
// Create the main optimizer
optimizer := &LLMPerformanceOptimizer{
fsHandler: fsHandler,
backupManager: backupManager,
parser: goParser,
analyzer: llmAnalyzer,
generator: codeGenerator,
ui: userInterface,
llmClient: llmClient,
config: cfg,
logger: logger,
}
// Run optimization process
if err := optimizer.OptimizeCodebase(*path); err != nil {
logger.Fatalf("Optimization failed: %v", err)
}
logger.Println("LLM-powered optimization process completed successfully")
}
// LLMPerformanceOptimizer orchestrates the LLM-powered optimization process
type LLMPerformanceOptimizer struct {
fsHandler filesystem.Handler
backupManager backup.Manager
parser parser.GoParser
analyzer analyzer.LLMPerformanceAnalyzer
generator generator.LLMCodeGenerator
ui ui.Interface
llmClient llm.Client
config *config.Config
logger *log.Logger
}
// OptimizeCodebase performs the complete LLM-powered optimization workflow
func (lpo *LLMPerformanceOptimizer) OptimizeCodebase(path string) error {
lpo.logger.Printf("Starting LLM-powered optimization for path: %s", path)
// Step 1: Discover Go files
files, err := lpo.fsHandler.DiscoverGoFiles(path)
if err != nil {
return fmt.Errorf("failed to discover Go files: %w", err)
}
lpo.logger.Printf("Found %d Go files to analyze with LLM", len(files))
// Step 2: Create codebase context for LLM
codebaseContext, err := lpo.buildCodebaseContext(files)
if err != nil {
return fmt.Errorf("failed to build codebase context: %w", err)
}
// Step 3: Get LLM analysis of entire codebase
optimizations, err := lpo.analyzer.AnalyzeCodebaseWithLLM(codebaseContext)
if err != nil {
return fmt.Errorf("LLM analysis failed: %w", err)
}
if len(optimizations) == 0 {
lpo.ui.ShowMessage("LLM found no performance optimizations in the codebase.")
return nil
}
lpo.logger.Printf("LLM identified %d potential optimizations", len(optimizations))
// Step 4: Present optimizations to user and apply approved ones
return lpo.processOptimizations(optimizations)
}
// buildCodebaseContext creates a comprehensive context for LLM analysis
func (lpo *LLMPerformanceOptimizer) buildCodebaseContext(files []string) (*llm.CodebaseContext, error) {
context := &llm.CodebaseContext{
Files: make(map[string]string),
Dependencies: make([]string, 0),
Metadata: make(map[string]interface{}),
}
// Read all Go files
for _, file := range files {
content, err := lpo.fsHandler.ReadFile(file)
if err != nil {
lpo.logger.Printf("Warning: failed to read file %s: %v", file, err)
continue
}
context.Files[file] = string(content)
}
// Extract dependencies and metadata
context.Dependencies = lpo.extractDependencies(files)
context.Metadata["total_files"] = len(files)
context.Metadata["analysis_timestamp"] = fmt.Sprintf("%d", lpo.getCurrentTimestamp())
return context, nil
}
// processOptimizations presents LLM optimizations to user and applies approved ones
func (lpo *LLMPerformanceOptimizer) processOptimizations(optimizations []analyzer.LLMOptimization) error {
for i, opt := range optimizations {
lpo.ui.ShowLLMOptimization(i+1, len(optimizations), opt)
if lpo.ui.AskForApproval() {
if err := lpo.applyLLMOptimization(opt); err != nil {
lpo.logger.Printf("Failed to apply LLM optimization: %v", err)
continue
}
lpo.ui.ShowMessage("LLM optimization applied successfully!")
} else {
lpo.ui.ShowMessage("LLM optimization skipped.")
}
}
return nil
}
// applyLLMOptimization applies a single LLM-generated optimization with backup
func (lpo *LLMPerformanceOptimizer) applyLLMOptimization(opt analyzer.LLMOptimization) error {
// Create backup before modification
backupPath, err := lpo.backupManager.CreateBackup(opt.FilePath)
if err != nil {
return fmt.Errorf("failed to create backup: %w", err)
}
lpo.logger.Printf("Created backup: %s", backupPath)
// Generate optimized code using LLM
optimizedCode, err := lpo.generator.GenerateOptimizedCode(opt)
if err != nil {
// Restore from backup on failure
if restoreErr := lpo.backupManager.RestoreBackup(backupPath, opt.FilePath); restoreErr != nil {
lpo.logger.Printf("Failed to restore backup: %v", restoreErr)
}
return fmt.Errorf("LLM failed to generate optimized code: %w", err)
}
// Write LLM-generated optimized code to file
if err := lpo.fsHandler.WriteFile(opt.FilePath, optimizedCode); err != nil {
// Restore from backup on failure
if restoreErr := lpo.backupManager.RestoreBackup(backupPath, opt.FilePath); restoreErr != nil {
lpo.logger.Printf("Failed to restore backup: %v", restoreErr)
}
return fmt.Errorf("failed to write LLM-optimized code: %w", err)
}
return nil
}
// Helper methods
func (lpo *LLMPerformanceOptimizer) extractDependencies(files []string) []string {
// Implementation would extract import statements from Go files
return []string{"sync", "runtime", "fmt", "context"}
}
func (lpo *LLMPerformanceOptimizer) getCurrentTimestamp() int64 {
return 1234567890 // Placeholder
}
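backup.go (illustrative sketch, not part of the original post)
main.go also depends on filesystem, backup, and parser packages that are not included in the listing. For orientation, here is a hedged sketch of the backup manager; only the method set (NewManager, CreateBackup, RestoreBackup) is taken from the calls in main.go, the rest is assumption:
// internal/backup/manager.go (sketch)
package backup

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"
)

// Manager matches the calls made from main.go: CreateBackup and RestoreBackup.
type Manager interface {
	CreateBackup(filePath string) (string, error)
	RestoreBackup(backupPath, originalPath string) error
}

type fileManager struct {
	backupDir string
	logger    *log.Logger
}

// NewManager creates a backup manager that copies files into backupDir.
func NewManager(backupDir string, logger *log.Logger) Manager {
	return &fileManager{backupDir: backupDir, logger: logger}
}

func (m *fileManager) CreateBackup(filePath string) (string, error) {
	if err := os.MkdirAll(m.backupDir, 0o755); err != nil {
		return "", fmt.Errorf("failed to create backup dir: %w", err)
	}
	data, err := os.ReadFile(filePath)
	if err != nil {
		return "", fmt.Errorf("failed to read original file: %w", err)
	}
	name := fmt.Sprintf("%s.%d.bak", filepath.Base(filePath), time.Now().UnixNano())
	backupPath := filepath.Join(m.backupDir, name)
	if err := os.WriteFile(backupPath, data, 0o644); err != nil {
		return "", fmt.Errorf("failed to write backup: %w", err)
	}
	return backupPath, nil
}

func (m *fileManager) RestoreBackup(backupPath, originalPath string) error {
	data, err := os.ReadFile(backupPath)
	if err != nil {
		return fmt.Errorf("failed to read backup: %w", err)
	}
	return os.WriteFile(originalPath, data, 0o644)
}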
client.go
// internal/llm/client.go
package llm
import (
"bytes"
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"time"
)
// Client interface for LLM interactions
type Client interface {
AnalyzeCode(prompt string, codeContext *CodebaseContext) (*AnalysisResponse, error)
GenerateOptimizedCode(prompt string, originalCode string) (*CodeGenerationResponse, error)
ExplainOptimization(optimization string) (string, error)
}
// CodebaseContext represents the entire codebase context for LLM analysis
type CodebaseContext struct {
Files map[string]string `json:"files"`
Dependencies []string `json:"dependencies"`
Metadata map[string]interface{} `json:"metadata"`
}
// AnalysisResponse represents LLM's analysis response
type AnalysisResponse struct {
Optimizations []OptimizationSuggestion `json:"optimizations"`
Summary string `json:"summary"`
Confidence float64 `json:"confidence"`
}
// OptimizationSuggestion represents a single optimization suggestion from LLM
type OptimizationSuggestion struct {
Type string `json:"type"`
FilePath string `json:"file_path"`
LineStart int `json:"line_start"`
LineEnd int `json:"line_end"`
Description string `json:"description"`
Rationale string `json:"rationale"`
OriginalCode string `json:"original_code"`
OptimizedCode string `json:"optimized_code"`
EstimatedImpact string `json:"estimated_impact"`
Confidence float64 `json:"confidence"`
}
// CodeGenerationResponse represents LLM's code generation response
type CodeGenerationResponse struct {
OptimizedCode string `json:"optimized_code"`
Explanation string `json:"explanation"`
Confidence float64 `json:"confidence"`
}
// OpenAIClient implements Client for OpenAI GPT models
type OpenAIClient struct {
apiKey string
model string
baseURL string
httpClient *http.Client
logger *log.Logger
}
// NewClient creates a new LLM client
func NewClient(model, apiKey string, logger *log.Logger) (Client, error) {
if apiKey == "" {
return nil, fmt.Errorf("API key is required")
}
return &OpenAIClient{
apiKey: apiKey,
model: model,
baseURL: "https://api.openai.com/v1",
httpClient: &http.Client{
Timeout: 60 * time.Second,
},
logger: logger,
}, nil
}
// AnalyzeCode sends code to LLM for performance analysis
func (c *OpenAIClient) AnalyzeCode(prompt string, codeContext *CodebaseContext) (*AnalysisResponse, error) {
c.logger.Printf("Sending codebase to LLM for analysis...")
// Construct the analysis prompt
analysisPrompt := c.buildAnalysisPrompt(prompt, codeContext)
// Prepare OpenAI API request
requestBody := map[string]interface{}{
"model": c.model,
"messages": []map[string]string{
{
"role": "system",
"content": c.getSystemPrompt(),
},
{
"role": "user",
"content": analysisPrompt,
},
},
"max_tokens": 4000,
"temperature": 0.1,
"response_format": map[string]string{
"type": "json_object",
},
}
// Make API call
response, err := c.makeAPICall("/chat/completions", requestBody)
if err != nil {
return nil, fmt.Errorf("LLM API call failed: %w", err)
}
// Parse response
return c.parseAnalysisResponse(response)
}
// GenerateOptimizedCode asks LLM to generate optimized version of code
func (c *OpenAIClient) GenerateOptimizedCode(prompt string, originalCode string) (*CodeGenerationResponse, error) {
c.logger.Printf("Asking LLM to generate optimized code...")
// Construct the generation prompt
generationPrompt := c.buildGenerationPrompt(prompt, originalCode)
// Prepare OpenAI API request
requestBody := map[string]interface{}{
"model": c.model,
"messages": []map[string]string{
{
"role": "system",
"content": c.getCodeGenerationSystemPrompt(),
},
{
"role": "user",
"content": generationPrompt,
},
},
"max_tokens": 2000,
"temperature": 0.1,
}
// Make API call
response, err := c.makeAPICall("/chat/completions", requestBody)
if err != nil {
return nil, fmt.Errorf("LLM code generation failed: %w", err)
}
// Parse response
return c.parseCodeGenerationResponse(response)
}
// ExplainOptimization asks LLM to explain an optimization
func (c *OpenAIClient) ExplainOptimization(optimization string) (string, error) {
c.logger.Printf("Asking LLM to explain optimization...")
explanationPrompt := fmt.Sprintf(`
Please explain the following Go performance optimization in detail:
%s
Provide:
1. Why this optimization improves performance
2. What specific bottlenecks it addresses
3. Any potential trade-offs or considerations
4. When this optimization should and shouldn't be used
`, optimization)
requestBody := map[string]interface{}{
"model": c.model,
"messages": []map[string]string{
{
"role": "system",
"content": "You are an expert Go performance optimization consultant. Provide clear, detailed explanations of performance optimizations.",
},
{
"role": "user",
"content": explanationPrompt,
},
},
"max_tokens": 1000,
"temperature": 0.2,
}
response, err := c.makeAPICall("/chat/completions", requestBody)
if err != nil {
return "", fmt.Errorf("LLM explanation failed: %w", err)
}
return c.parseExplanationResponse(response)
}
// buildAnalysisPrompt constructs the prompt for code analysis
func (c *OpenAIClient) buildAnalysisPrompt(userPrompt string, context *CodebaseContext) string {
prompt := fmt.Sprintf(`
Analyze the following Go codebase for performance optimization opportunities.
%s
CODEBASE CONTEXT:
Dependencies: %v
Total Files: %d
FILES TO ANALYZE:
`, userPrompt, context.Dependencies, len(context.Files))
// Add file contents (truncated for large codebases)
fileCount := 0
for filePath, content := range context.Files {
if fileCount >= 10 { // Limit to prevent token overflow
prompt += fmt.Sprintf("\n... and %d more files", len(context.Files)-fileCount)
break
}
// Truncate very large files
if len(content) > 5000 {
content = content[:5000] + "\n... [file truncated]"
}
prompt += fmt.Sprintf(`
=== FILE: %s ===
%s
`, filePath, content)
fileCount++
}
prompt += `
ANALYSIS REQUIREMENTS:
1. Identify specific performance optimization opportunities
2. Focus on: caching, concurrency, memory optimization, algorithm improvements
3. Provide exact line numbers and code snippets
4. Explain the rationale for each optimization
5. Estimate the performance impact
6. Return response in JSON format with the following structure:
{
"optimizations": [
{
"type": "caching|concurrency|memory|algorithm",
"file_path": "path/to/file.go",
"line_start": 10,
"line_end": 15,
"description": "Brief description",
"rationale": "Detailed explanation",
"original_code": "original code snippet",
"optimized_code": "optimized code snippet",
"estimated_impact": "Low|Medium|High",
"confidence": 0.85
}
],
"summary": "Overall analysis summary",
"confidence": 0.90
}
`
return prompt
}
// buildGenerationPrompt constructs the prompt for code generation
func (c *OpenAIClient) buildGenerationPrompt(userPrompt string, originalCode string) string {
return fmt.Sprintf(`
Generate an optimized version of the following Go code:
OPTIMIZATION REQUEST:
%s
ORIGINAL CODE:
%s
REQUIREMENTS:
1. Maintain the same functionality
2. Improve performance as requested
3. Keep the code readable and maintainable
4. Add comments explaining the optimization
5. Ensure the code compiles and follows Go best practices
Please provide the complete optimized code.
`, userPrompt, originalCode)
}
// getSystemPrompt returns the system prompt for analysis
func (c *OpenAIClient) getSystemPrompt() string {
return `You are an expert Go performance optimization specialist with deep knowledge of:
- Go runtime and garbage collector behavior
- Concurrency patterns and goroutine optimization
- Memory allocation and management
- Algorithm complexity and data structure selection
- Caching strategies and implementation
- Profiling and benchmarking techniques
Analyze Go code for performance improvements with precision and provide actionable optimizations.
Always respond in valid JSON format when requested.`
}
// getCodeGenerationSystemPrompt returns the system prompt for code generation
func (c *OpenAIClient) getCodeGenerationSystemPrompt() string {
return `You are an expert Go developer specializing in performance optimization.
Generate optimized Go code that:
- Maintains correctness and functionality
- Improves performance significantly
- Follows Go best practices and idioms
- Is well-commented and maintainable
- Compiles without errors`
}
// makeAPICall makes an HTTP request to the OpenAI API
func (c *OpenAIClient) makeAPICall(endpoint string, requestBody map[string]interface{}) (map[string]interface{}, error) {
jsonBody, err := json.Marshal(requestBody)
if err != nil {
return nil, fmt.Errorf("failed to marshal request: %w", err)
}
req, err := http.NewRequest("POST", c.baseURL+endpoint, bytes.NewBuffer(jsonBody))
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "Bearer "+c.apiKey)
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("HTTP request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, fmt.Errorf("API request failed with status %d: %s", resp.StatusCode, string(body))
}
var response map[string]interface{}
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
return nil, fmt.Errorf("failed to decode response: %w", err)
}
return response, nil
}
// parseAnalysisResponse parses the LLM analysis response
func (c *OpenAIClient) parseAnalysisResponse(response map[string]interface{}) (*AnalysisResponse, error) {
	choices, ok := response["choices"].([]interface{})
	if !ok || len(choices) == 0 {
		return nil, fmt.Errorf("invalid response format: no choices")
	}
	choice, ok := choices[0].(map[string]interface{})
	if !ok {
		return nil, fmt.Errorf("invalid response format: malformed choice")
	}
	message, ok := choice["message"].(map[string]interface{})
	if !ok {
		return nil, fmt.Errorf("invalid response format: missing message")
	}
	content, ok := message["content"].(string)
	if !ok {
		return nil, fmt.Errorf("invalid response format: missing content")
	}
	var analysisResponse AnalysisResponse
	if err := json.Unmarshal([]byte(content), &analysisResponse); err != nil {
		return nil, fmt.Errorf("failed to parse analysis response: %w", err)
	}
	return &analysisResponse, nil
}
// parseCodeGenerationResponse parses the LLM code generation response
func (c *OpenAIClient) parseCodeGenerationResponse(response map[string]interface{}) (*CodeGenerationResponse, error) {
choices, ok := response["choices"].([]interface{})
if !ok || len(choices) == 0 {
return nil, fmt.Errorf("invalid response format: no choices")
}
choice := choices[0].(map[string]interface{})
message := choice["message"].(map[string]interface{})
content := message["content"].(string)
return &CodeGenerationResponse{
OptimizedCode: content,
Explanation: "LLM-generated optimization",
Confidence: 0.85,
}, nil
}
// parseExplanationResponse parses the LLM explanation response
func (c *OpenAIClient) parseExplanationResponse(response map[string]interface{}) (string, error) {
choices, ok := response["choices"].([]interface{})
if !ok || len(choices) == 0 {
return "", fmt.Errorf("invalid response format: no choices")
}
choice := choices[0].(map[string]interface{})
message := choice["message"].(map[string]interface{})
content := message["content"].(string)
return content, nil
}
llm_analyzer.go
// internal/analyzer/llm_analyzer.go
package analyzer
import (
"fmt"
"log"
"github.com/go-performance-optimizer/internal/config"
"github.com/go-performance-optimizer/internal/llm"
)
// LLMOptimization represents an optimization suggested by the LLM
type LLMOptimization struct {
Type string `json:"type"`
FilePath string `json:"file_path"`
LineStart int `json:"line_start"`
LineEnd int `json:"line_end"`
Description string `json:"description"`
Rationale string `json:"rationale"`
OriginalCode string `json:"original_code"`
OptimizedCode string `json:"optimized_code"`
EstimatedImpact string `json:"estimated_impact"`
Confidence float64 `json:"confidence"`
}
// LLMPerformanceAnalyzer analyzes Go code using LLM
type LLMPerformanceAnalyzer interface {
AnalyzeCodebaseWithLLM(context *llm.CodebaseContext) ([]LLMOptimization, error)
AnalyzeFileWithLLM(filePath string, content string) ([]LLMOptimization, error)
}
// OpenAIPerformanceAnalyzer implements LLMPerformanceAnalyzer using OpenAI
type OpenAIPerformanceAnalyzer struct {
llmClient llm.Client
rules config.AnalysisRules
logger *log.Logger
}
// NewLLMPerformanceAnalyzer creates a new LLM-powered performance analyzer
func NewLLMPerformanceAnalyzer(llmClient llm.Client, rules config.AnalysisRules, logger *log.Logger) LLMPerformanceAnalyzer {
return &OpenAIPerformanceAnalyzer{
llmClient: llmClient,
rules: rules,
logger: logger,
}
}
// AnalyzeCodebaseWithLLM analyzes entire codebase using LLM
func (opa *OpenAIPerformanceAnalyzer) AnalyzeCodebaseWithLLM(context *llm.CodebaseContext) ([]LLMOptimization, error) {
opa.logger.Printf("Starting LLM analysis of codebase with %d files", len(context.Files))
// Construct analysis prompt based on enabled rules
prompt := opa.buildAnalysisPrompt()
// Send to LLM for analysis
response, err := opa.llmClient.AnalyzeCode(prompt, context)
if err != nil {
return nil, fmt.Errorf("LLM analysis failed: %w", err)
}
// Convert LLM response to our optimization format
optimizations := make([]LLMOptimization, 0, len(response.Optimizations))
for _, suggestion := range response.Optimizations {
// Filter based on configuration rules
if opa.shouldIncludeOptimization(suggestion.Type) {
optimization := LLMOptimization{
Type: suggestion.Type,
FilePath: suggestion.FilePath,
LineStart: suggestion.LineStart,
LineEnd: suggestion.LineEnd,
Description: suggestion.Description,
Rationale: suggestion.Rationale,
OriginalCode: suggestion.OriginalCode,
OptimizedCode: suggestion.OptimizedCode,
EstimatedImpact: suggestion.EstimatedImpact,
Confidence: suggestion.Confidence,
}
optimizations = append(optimizations, optimization)
}
}
opa.logger.Printf("LLM identified %d optimizations (filtered from %d suggestions)",
len(optimizations), len(response.Optimizations))
return optimizations, nil
}
// AnalyzeFileWithLLM analyzes a single file using LLM
func (opa *OpenAIPerformanceAnalyzer) AnalyzeFileWithLLM(filePath string, content string) ([]LLMOptimization, error) {
opa.logger.Printf("Starting LLM analysis of file: %s", filePath)
// Create single-file context
context := &llm.CodebaseContext{
Files: map[string]string{
filePath: content,
},
Dependencies: []string{},
Metadata: map[string]interface{}{
"single_file_analysis": true,
},
}
return opa.AnalyzeCodebaseWithLLM(context)
}
// buildAnalysisPrompt constructs the analysis prompt based on configuration
func (opa *OpenAIPerformanceAnalyzer) buildAnalysisPrompt() string {
prompt := "Analyze this Go codebase for performance optimization opportunities. Focus on:\n"
if opa.rules.EnableCaching {
prompt += "- CACHING: Identify repeated expensive operations that could benefit from caching\n"
}
if opa.rules.EnableConcurrency {
prompt += fmt.Sprintf("- CONCURRENCY: Find opportunities for parallel execution (max %d goroutines)\n",
opa.rules.MaxConcurrencyLevel)
}
if opa.rules.EnableMemoryOptimization {
prompt += "- MEMORY: Identify inefficient memory allocations and suggest pre-allocation strategies\n"
}
if opa.rules.EnableAlgorithmOptimization {
prompt += "- ALGORITHMS: Suggest better algorithms or data structures for improved performance\n"
}
prompt += "\nPrioritize optimizations with high impact and confidence. "
prompt += "Provide specific code examples and detailed rationales for each suggestion."
return prompt
}
// shouldIncludeOptimization checks if optimization type is enabled in configuration
func (opa *OpenAIPerformanceAnalyzer) shouldIncludeOptimization(optimizationType string) bool {
switch optimizationType {
case "caching":
return opa.rules.EnableCaching
case "concurrency":
return opa.rules.EnableConcurrency
case "memory":
return opa.rules.EnableMemoryOptimization
case "algorithm":
return opa.rules.EnableAlgorithmOptimization
default:
return false
}
}
llm_generator.go
// internal/generator/llm_generator.go
package generator
import (
	"fmt"
	"log"
	"strings"

	"github.com/go-performance-optimizer/internal/analyzer"
	"github.com/go-performance-optimizer/internal/llm"
)
// LLMCodeGenerator generates optimized code using LLM
type LLMCodeGenerator interface {
GenerateOptimizedCode(opt analyzer.LLMOptimization) ([]byte, error)
ValidateGeneratedCode(code string) error
}
// OpenAICodeGenerator implements LLMCodeGenerator using OpenAI
type OpenAICodeGenerator struct {
llmClient llm.Client
logger *log.Logger
}
// NewLLMCodeGenerator creates a new LLM-powered code generator
func NewLLMCodeGenerator(llmClient llm.Client, logger *log.Logger) LLMCodeGenerator {
return &OpenAICodeGenerator{
llmClient: llmClient,
logger: logger,
}
}
// GenerateOptimizedCode uses LLM to generate optimized code
func (ocg *OpenAICodeGenerator) GenerateOptimizedCode(opt analyzer.LLMOptimization) ([]byte, error) {
ocg.logger.Printf("Generating optimized code for %s optimization in %s", opt.Type, opt.FilePath)
// If LLM already provided optimized code in analysis, use it
if opt.OptimizedCode != "" {
ocg.logger.Printf("Using pre-generated optimized code from LLM analysis")
// Validate the code before returning
if err := ocg.ValidateGeneratedCode(opt.OptimizedCode); err != nil {
ocg.logger.Printf("Pre-generated code validation failed, requesting new generation: %v", err)
} else {
return []byte(opt.OptimizedCode), nil
}
}
// Request LLM to generate optimized code
prompt := ocg.buildCodeGenerationPrompt(opt)
response, err := ocg.llmClient.GenerateOptimizedCode(prompt, opt.OriginalCode)
if err != nil {
return nil, fmt.Errorf("LLM code generation failed: %w", err)
}
// Validate generated code
if err := ocg.ValidateGeneratedCode(response.OptimizedCode); err != nil {
return nil, fmt.Errorf("generated code validation failed: %w", err)
}
ocg.logger.Printf("Successfully generated and validated optimized code")
return []byte(response.OptimizedCode), nil
}
// ValidateGeneratedCode performs basic validation on LLM-generated code
func (ocg *OpenAICodeGenerator) ValidateGeneratedCode(code string) error {
// Basic validation checks
if code == "" {
return fmt.Errorf("generated code is empty")
}
// Check for basic Go syntax elements
if !ocg.containsGoSyntax(code) {
return fmt.Errorf("generated code does not appear to be valid Go")
}
// Additional validation could include:
// - AST parsing to ensure syntactic correctness
// - Compilation check
// - Security analysis
// - Performance regression detection
return nil
}
// buildCodeGenerationPrompt constructs prompt for code generation
func (ocg *OpenAICodeGenerator) buildCodeGenerationPrompt(opt analyzer.LLMOptimization) string {
prompt := fmt.Sprintf(`
Generate optimized Go code for the following performance improvement:
OPTIMIZATION TYPE: %s
DESCRIPTION: %s
RATIONALE: %s
ESTIMATED IMPACT: %s
ORIGINAL CODE (lines %d-%d):
%s
REQUIREMENTS:
1. Apply the %s optimization as described
2. Maintain exact same functionality and behavior
3. Ensure code compiles without errors
4. Add clear comments explaining the optimization
5. Follow Go best practices and idioms
6. Make the optimization robust and production-ready
Please provide the complete optimized code that can replace the original code.
`,
opt.Type,
opt.Description,
opt.Rationale,
opt.EstimatedImpact,
opt.LineStart,
opt.LineEnd,
opt.OriginalCode,
opt.Type)
return prompt
}
// containsGoSyntax performs a basic heuristic check for Go syntax
func (ocg *OpenAICodeGenerator) containsGoSyntax(code string) bool {
	// Simple heuristic: the snippet should mention at least one Go keyword
	goKeywords := []string{"func", "var", "const", "type", "package", "import"}
	for _, keyword := range goKeywords {
		if strings.Contains(code, keyword) {
			return true
		}
	}
	return false
}
cli.go
// internal/ui/cli.go - Updated for LLM optimizations
package ui
import (
"bufio"
"fmt"
"log"
"os"
"strings"
"github.com/go-performance-optimizer/internal/analyzer"
)
// Interface defines the user interface contract
type Interface interface {
ShowLLMOptimization(current, total int, opt analyzer.LLMOptimization)
AskForApproval() bool
ShowMessage(message string)
ShowError(err error)
}
// CLI implements Interface for command-line interaction with LLM optimizations
type CLI struct {
reader *bufio.Reader
logger *log.Logger
}
// NewCLI creates a new CLI interface
func NewCLI(logger *log.Logger) Interface {
return &CLI{
reader: bufio.NewReader(os.Stdin),
logger: logger,
}
}
// ShowLLMOptimization displays an LLM-generated optimization opportunity
func (cli *CLI) ShowLLMOptimization(current, total int, opt analyzer.LLMOptimization) {
	fmt.Println("\n" + strings.Repeat("=", 80))
	fmt.Printf("🤖 LLM OPTIMIZATION %d of %d\n", current, total)
	fmt.Println(strings.Repeat("=", 80))
	fmt.Printf("Type: %s\n", opt.Type)
	fmt.Printf("File: %s (lines %d-%d)\n", opt.FilePath, opt.LineStart, opt.LineEnd)
	fmt.Printf("Description: %s\n", opt.Description)
	fmt.Printf("LLM Rationale: %s\n", opt.Rationale)
	fmt.Printf("Estimated Impact: %s\n", opt.EstimatedImpact)
	fmt.Printf("LLM Confidence: %.2f\n", opt.Confidence)
	if opt.OriginalCode != "" {
		fmt.Printf("\n📋 Original Code:\n")
		fmt.Printf("```go\n%s\n```\n", opt.OriginalCode)
	}
	if opt.OptimizedCode != "" {
		fmt.Printf("\n✨ LLM-Optimized Code:\n")
		fmt.Printf("```go\n%s\n```\n", opt.OptimizedCode)
	}
	fmt.Println(strings.Repeat("-", 80))
}
// AskForApproval asks the user whether to apply the LLM optimization
func (cli *CLI) AskForApproval() bool {
for {
fmt.Print("Apply this LLM optimization? (y/n/s=skip all/e=explain): ")
input, err := cli.reader.ReadString('\n')
if err != nil {
cli.logger.Printf("Error reading input: %v", err)
continue
}
input = strings.TrimSpace(strings.ToLower(input))
switch input {
case "y", "yes":
return true
case "n", "no":
return false
case "s", "skip":
fmt.Println("Skipping all remaining LLM optimizations...")
os.Exit(0)
case "e", "explain":
fmt.Println("Requesting detailed explanation from LLM...")
// This would trigger an explanation request to the LLM
return cli.AskForApproval() // Ask again after explanation
default:
fmt.Println("Please enter 'y' for yes, 'n' for no, 's' to skip all, or 'e' for explanation.")
}
}
}
// ShowMessage displays a message to the user
func (cli *CLI) ShowMessage(message string) {
fmt.Printf("ā¹️ %s\n", message)
}
// ShowError displays an error to the user
func (cli *CLI) ShowError(err error) {
fmt.Printf("❌ ERROR: %s\n", err.Error())
}