The intersection of natural language processing and additive manufacturing represents one of the most exciting frontiers in democratizing 3D design and fabrication. This article explores the development of systems that can transform human language descriptions into printable 3D objects, specifically optimized for Fused Deposition Modeling (FDM) printers. We will examine the technical architecture, implementation strategies, and practical considerations for building such systems.
Note: at the end of this article you'll find a complete application that was generated by Claude Opus 4.
Understanding the Natural Language to 3D Pipeline
The fundamental challenge lies in bridging the semantic gap between human language and geometric representation. When a user describes "a coffee mug with a comfortable handle," the system must interpret not only the basic geometry but also the implicit requirements for FDM printing, such as proper wall thickness, support structures, and printability constraints.
The core workflow begins with natural language processing to extract geometric intent, followed by 3D model generation that respects FDM printing limitations, real-time visualization for user feedback, and iterative refinement through additional prompts. The final step involves STL file generation and storage when the user expresses satisfaction with the result.
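To make this workflow concrete, here is a minimal sketch of how the stages might be chained into a single loop. Every function in it is a hypothetical stand-in for the components described later in the article, not an existing API.

```python
# Hypothetical skeleton of the prompt -> generate -> preview -> refine loop.
def extract_geometric_intent(prompt: str) -> dict:
    """NLP stage: the real system would call an LLM here."""
    return {"description": prompt}

def generate_parametric_model(intent: dict, previous_code: str = "") -> str:
    """Geometry stage: would emit OpenSCAD; here it returns a placeholder."""
    return f"// model for: {intent['description']}"

def render_preview(code: str) -> None:
    """Visualization stage: would tessellate the script and show it in a viewer."""
    print(f"previewing:\n{code}")

def run_session(prompts: list) -> str:
    """Feed each prompt through the pipeline and return the final script."""
    code = ""
    for prompt in prompts:
        intent = extract_geometric_intent(prompt)
        code = generate_parametric_model(intent, previous_code=code)
        render_preview(code)
    return code  # on user approval, the real system hands this to the STL exporter

run_session(["a coffee mug with a comfortable handle", "make the handle thicker"])
```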
FDM Printing Requirements and Design Constraints
Fused Deposition Modeling imposes specific geometric constraints that must be embedded into the generation process. Unlike other manufacturing methods, FDM requires consideration of layer adhesion, overhang angles, bridge distances, and support material requirements. The system must understand that features smaller than the nozzle diameter cannot be reliably printed, that overhangs exceeding 45 degrees typically require support structures, and that thin walls below 0.8mm may not print successfully on most consumer FDM printers.
These constraints must be integrated into the model generation process rather than applied as post-processing corrections. The LLM must be trained or prompted to understand these physical limitations and incorporate them into its geometric reasoning. For instance, when generating a hollow sphere, the system should automatically include drainage or vent holes so that support material cannot become sealed inside, even if the user doesn't explicitly request them.
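As a minimal illustration of making these rules explicit rather than implicit, the sketch below encodes the thresholds mentioned above as plain checks. The values mirror the text and common consumer-FDM defaults; they are not universal limits.

```python
# Assumed consumer-FDM defaults, matching the figures quoted in the text.
NOZZLE_DIAMETER_MM = 0.4
MAX_OVERHANG_DEG = 45.0
MIN_WALL_MM = 0.8

def check_feature(feature_size_mm: float, wall_mm: float, overhang_deg: float) -> list:
    """Return human-readable warnings for a single design feature."""
    warnings = []
    if feature_size_mm < NOZZLE_DIAMETER_MM:
        warnings.append(f"feature of {feature_size_mm}mm is below the nozzle diameter")
    if wall_mm < MIN_WALL_MM:
        warnings.append(f"wall of {wall_mm}mm may not print reliably")
    if overhang_deg > MAX_OVERHANG_DEG:
        warnings.append(f"overhang of {overhang_deg} degrees will likely need supports")
    return warnings

print(check_feature(feature_size_mm=0.3, wall_mm=1.2, overhang_deg=60))
```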
Large Language Model Selection and Architecture
The choice of LLM significantly impacts system performance and capabilities. Models with strong spatial reasoning abilities and code generation capabilities prove most effective for this application. GPT-4 and Claude demonstrate superior performance in understanding geometric relationships and generating parametric code, while specialized models like Code Llama excel at producing clean, executable geometry generation scripts.
The system architecture must accommodate both local and remote LLM deployment scenarios. Local deployment offers privacy, reduced latency, and independence from internet connectivity, but requires significant computational resources. Remote deployment provides access to more powerful models and regular updates but introduces latency and dependency on external services.
A hybrid approach often proves optimal, where a lightweight local model handles simple requests and basic refinements, while complex geometric reasoning tasks are delegated to more powerful remote models. This architecture requires careful orchestration to maintain conversation context and ensure seamless user experience across model transitions.
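A rough sketch of such a router is shown below. The keyword heuristic and the two generate callbacks are placeholders for whichever local and remote backends the system actually uses; a production router would classify requests with the local model itself rather than with keywords.

```python
# Illustrative prompt router for a hybrid local/remote deployment.
SIMPLE_KEYWORDS = ("thicker", "thinner", "taller", "shorter", "wider", "scale")

def is_simple_refinement(prompt: str) -> bool:
    """Crude heuristic: short prompts that tweak an existing dimension."""
    return len(prompt.split()) < 12 and any(k in prompt.lower() for k in SIMPLE_KEYWORDS)

def route_prompt(prompt: str, history: list, local_generate, remote_generate) -> str:
    """Send small tweaks to the local model, everything else to the remote API."""
    context = "\n".join(history[-4:])  # shared recent turns keep both backends in sync
    full_prompt = f"{context}\n{prompt}"
    if is_simple_refinement(prompt):
        return local_generate(full_prompt)
    return remote_generate(full_prompt)
```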
Core System Components and Technical Implementation
The system comprises several interconnected components working in concert to transform natural language into 3D geometry. The natural language processing component extracts geometric intent, dimensional requirements, and functional specifications from user prompts. This component must handle ambiguous descriptions, implicit requirements, and technical terminology while maintaining context across multiple refinement iterations.
The geometry generation engine translates processed language into parametric 3D models. Rather than directly generating mesh data, the system typically produces parametric scripts using libraries like OpenSCAD, FreeCAD Python API, or custom geometry kernels. This approach enables easier modification and refinement based on subsequent user prompts.
Here's an example of how the system might generate a parametric coffee mug:
The following code demonstrates the parametric generation approach using OpenSCAD syntax. The system generates this code based on natural language input, allowing for easy modification when users request changes to dimensions or features.
```openscad
module coffee_mug(height=100, outer_diameter=80, wall_thickness=3, handle_width=15) {
difference() {
// Outer cylinder for mug body
cylinder(h=height, d=outer_diameter, $fn=64);
// Inner cavity with proper wall thickness
translate([0, 0, wall_thickness])
cylinder(h=height, d=outer_diameter-2*wall_thickness, $fn=64);
}
// Handle generation with FDM-friendly geometry
translate([outer_diameter/2, 0, height*0.3]) {
difference() {
// Outer handle shape
rotate_extrude(angle=180, $fn=32)
translate([handle_width, 0])
circle(d=8, $fn=16);
// Inner handle cutout
rotate_extrude(angle=180, $fn=32)
translate([handle_width, 0])
circle(d=4, $fn=16);
}
}
}
// Generate the mug with default parameters
coffee_mug();
```
This parametric approach allows the system to modify specific aspects of the design when users request changes. If a user says "make the handle thicker," the system can adjust the handle_width parameter and regenerate the model without reconstructing the entire geometry from scratch.
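A sketch of that parameter-level refinement might look like the following. The keyword-to-parameter mapping is purely illustrative and assumes the coffee_mug module defined above.

```python
import re

def apply_refinement(params: dict, request: str) -> dict:
    """Map a refinement request onto the existing parameter set."""
    updated = dict(params)
    if "handle" in request and "thicker" in request:
        updated["handle_width"] = params["handle_width"] * 1.3
    match = re.search(r"(\d+)\s*mm tall", request)
    if match:  # explicit dimensions override relative adjustments
        updated["height"] = float(match.group(1))
    return updated

def render_call(params: dict) -> str:
    """Emit the OpenSCAD invocation for the coffee_mug module shown above."""
    args = ", ".join(f"{k}={v}" for k, v in params.items())
    return f"coffee_mug({args});"

params = {"height": 100, "outer_diameter": 80, "wall_thickness": 3, "handle_width": 15}
print(render_call(apply_refinement(params, "make the handle thicker")))
```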
STL File Generation and Mesh Processing
The conversion from parametric representation to STL format requires careful consideration of mesh quality and FDM printing requirements. The tessellation process must balance file size with surface quality, ensuring that curved surfaces are smooth enough for aesthetic appeal while maintaining reasonable polygon counts for processing efficiency.
The mesh generation process incorporates FDM-specific optimizations such as ensuring manifold geometry, proper normal orientation, and elimination of non-printable features. The system automatically validates generated meshes for common issues like inverted normals, non-manifold edges, and intersecting geometry that could cause slicing problems.
Here's an implementation example showing how the system processes parametric geometry into FDM-optimized STL files:
This code example demonstrates the mesh generation and validation pipeline. The system takes parametric geometry and converts it to a high-quality mesh suitable for FDM printing, including automatic validation and repair of common mesh issues.
```python
import trimesh
import numpy as np
from scipy.spatial.distance import cdist
class FDMOptimizedMeshGenerator:
def __init__(self, min_feature_size=0.4, max_overhang_angle=45):
self.min_feature_size = min_feature_size
self.max_overhang_angle = max_overhang_angle
def generate_mesh_from_parametric(self, parametric_function, resolution=0.1):
"""Convert parametric geometry to FDM-optimized mesh"""
# Generate initial mesh from parametric representation
mesh = self._parametric_to_mesh(parametric_function, resolution)
# Apply FDM-specific optimizations
mesh = self._remove_small_features(mesh)
mesh = self._validate_overhangs(mesh)
mesh = self._ensure_manifold_geometry(mesh)
return mesh
def _remove_small_features(self, mesh):
"""Remove features smaller than minimum printable size"""
# Identify edges shorter than minimum feature size
edge_lengths = np.linalg.norm(
mesh.vertices[mesh.edges[:, 0]] - mesh.vertices[mesh.edges[:, 1]],
axis=1
)
small_edges = edge_lengths < self.min_feature_size
if np.any(small_edges):
# Merge vertices connected by small edges
mesh = self._merge_close_vertices(mesh, self.min_feature_size)
return mesh
def _validate_overhangs(self, mesh):
"""Check for overhangs exceeding printable angles"""
face_normals = mesh.face_normals
vertical_component = np.abs(face_normals[:, 2])
# Calculate overhang angles
overhang_angles = np.arccos(vertical_component) * 180 / np.pi
problematic_faces = overhang_angles > self.max_overhang_angle
if np.any(problematic_faces):
# Log warning about potential support requirements
print(f"Warning: {np.sum(problematic_faces)} faces exceed maximum overhang angle")
return mesh
def save_stl(self, mesh, filename):
"""Save mesh as STL file with proper formatting"""
mesh.export(filename, file_type='stl_ascii')
print(f"STL file saved: {filename}")
```
Real-Time 3D Visualization and User Interface
The visualization component provides immediate feedback to users, allowing them to inspect generated models before committing to printing. Modern web-based solutions using Three.js or WebGL enable cross-platform compatibility without requiring specialized software installation. The visualization system must handle STL file loading, real-time rotation and scaling, and basic measurement tools for dimensional verification.
The interface design significantly impacts user experience and adoption. Successful implementations integrate the chat interface with the 3D viewer, allowing users to point to specific model features while describing desired changes. This spatial context enhances the LLM's understanding of user intent and reduces ambiguity in refinement requests.
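One simple way to bridge the Python backend and a browser viewer is to flatten the mesh into a JSON payload that a Three.js BufferGeometry can consume. The payload shape below is an assumption for illustration; only the trimesh attributes used are standard.

```python
import json
import trimesh

def mesh_to_viewer_payload(mesh: trimesh.Trimesh) -> str:
    """Serialize a mesh for a web viewer as flat vertex and index arrays."""
    payload = {
        "vertices": mesh.vertices.ravel().tolist(),  # flat [x0, y0, z0, x1, ...]
        "faces": mesh.faces.ravel().tolist(),        # flat triangle indices
        "bounds": mesh.bounds.tolist(),              # used for initial camera framing
    }
    return json.dumps(payload)

# Example: serialize a 10 mm cube.
print(len(mesh_to_viewer_payload(trimesh.creation.box(extents=[10, 10, 10]))))
```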
Iterative Refinement and Context Management
The refinement process represents one of the most technically challenging aspects of the system. The LLM must maintain context about the current model state, understand spatial relationships, and apply modifications without breaking existing geometry. This requires sophisticated prompt engineering and potentially fine-tuning on geometry-specific datasets.
Context management becomes critical when users make multiple sequential modifications. The system must track which parameters have been adjusted, understand dependencies between geometric features, and predict how changes might affect printability. For example, if a user increases the wall thickness of a container, the system should automatically adjust internal dimensions to maintain the overall size or warn about potential conflicts.
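The sketch below shows the kind of bookkeeping this implies: the system records which parameters the user pinned explicitly so that derived values can be recomputed without overriding stated intent. The single dependency rule is illustrative.

```python
class ParameterContext:
    """Tracks user-pinned parameters and recomputes derived ones."""

    def __init__(self, params: dict):
        self.params = dict(params)
        self.user_pinned = set()  # names the user stated explicitly

    def set_user(self, name: str, value: float) -> None:
        self.params[name] = value
        self.user_pinned.add(name)
        self._propagate(name)

    def _propagate(self, changed: str) -> None:
        # Example rule: keep the outer size fixed when wall thickness changes,
        # unless the user has pinned the inner diameter themselves.
        if changed == "wall_thickness" and "inner_diameter" not in self.user_pinned:
            self.params["inner_diameter"] = (
                self.params["outer_diameter"] - 2 * self.params["wall_thickness"]
            )

ctx = ParameterContext({"outer_diameter": 80, "wall_thickness": 3, "inner_diameter": 74})
ctx.set_user("wall_thickness", 5)
print(ctx.params["inner_diameter"])  # 70: outer diameter preserved, cavity shrinks
```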
Tools and MCP Server Integration
Model Context Protocol (MCP) servers enable the LLM to access specialized tools for geometry processing, mesh analysis, and FDM simulation. These tools extend the LLM's capabilities beyond text generation to include computational geometry operations, structural analysis, and printability assessment.
Common tool integrations include mesh repair utilities for fixing non-manifold geometry, slicing simulation tools for predicting print behavior, and material property databases for optimizing designs based on filament characteristics. The MCP architecture allows for modular tool integration, enabling system customization for specific use cases or printer configurations.
Here's an example of how MCP tools might be integrated for mesh validation:
This example shows how the system can leverage MCP tools to perform complex geometric analysis that would be difficult for the LLM to handle directly. The tool integration allows for sophisticated validation and optimization while maintaining the conversational interface.
```python
import trimesh

class GeometryAnalysisMCP:
def __init__(self):
self.tools = {
'mesh_validator': self._validate_mesh,
'printability_checker': self._check_printability,
'support_generator': self._generate_supports
}
def _validate_mesh(self, mesh_data):
"""Comprehensive mesh validation for FDM printing"""
mesh = trimesh.load(mesh_data)
validation_results = {
'is_manifold': mesh.is_watertight,
'has_inverted_normals': self._check_normal_orientation(mesh),
'min_wall_thickness': self._calculate_min_wall_thickness(mesh),
'volume': mesh.volume,
'surface_area': mesh.area
}
return validation_results
def _check_printability(self, mesh_data, printer_config):
"""Analyze mesh for FDM printability issues"""
mesh = trimesh.load(mesh_data)
# Check for overhangs
overhangs = self._detect_overhangs(mesh, printer_config['max_overhang_angle'])
# Check for bridges
bridges = self._detect_bridges(mesh, printer_config['max_bridge_distance'])
# Check minimum feature size
small_features = self._detect_small_features(mesh, printer_config['nozzle_diameter'])
return {
'overhangs': overhangs,
'bridges': bridges,
'small_features': small_features,
'needs_supports': len(overhangs) > 0,
'printability_score': self._calculate_printability_score(overhangs, bridges, small_features)
}
def _generate_supports(self, mesh_data, printer_config):
"""Generate support structures for overhanging geometry"""
# Implementation would generate support geometry
# based on overhang analysis and printer capabilities
pass
```
Local versus Remote LLM Implementation Strategies
The choice between local and remote LLM deployment involves trade-offs between performance, privacy, cost, and maintenance overhead. Local deployment using models like Llama 2, Code Llama, or quantized versions of larger models provides privacy and independence but requires significant hardware resources and ongoing model management.
Remote deployment leverages cloud-based APIs from providers like OpenAI, Anthropic, or Google, offering access to state-of-the-art models without local infrastructure requirements. However, this approach introduces latency, ongoing costs, and potential privacy concerns when processing proprietary designs.
As noted earlier, a hybrid architecture often provides the best balance, using local models for basic operations and remote models for complex reasoning tasks. Careful orchestration is still required to maintain conversation context across model boundaries and keep the user experience consistent.
Implementation Challenges and Limitations
Several fundamental limitations constrain the effectiveness of current natural language to 3D printing systems. The semantic gap between human language and precise geometric specification remains substantial, particularly for complex mechanical assemblies or objects with intricate internal structures. Users often lack the vocabulary to precisely describe geometric relationships, leading to ambiguous or incomplete specifications.
Current LLMs, while impressive in their language understanding capabilities, lack true spatial reasoning abilities. They cannot visualize the objects they're describing or understand complex geometric constraints in the way human designers do. This limitation becomes apparent when generating objects with moving parts, precise tolerances, or complex assembly requirements.
The FDM printing process itself imposes constraints that are difficult to communicate through natural language alone. Users may not understand concepts like support requirements, layer adhesion, or minimum wall thickness, leading to requests for geometrically valid but unprintable objects. The system must balance user intent with physical manufacturing constraints, sometimes requiring compromise or user education.
Quality Control and Validation
Ensuring the quality and printability of generated models requires comprehensive validation at multiple stages of the pipeline. Geometric validation checks for manifold surfaces, proper normal orientation, and absence of self-intersections. Dimensional validation ensures that features meet minimum size requirements for the target printer and material combination.
Printability validation goes beyond geometric correctness to assess whether the object can be successfully manufactured using FDM processes. This includes overhang analysis, bridge detection, support requirement assessment, and estimation of print time and material usage. Advanced systems might integrate slicing simulation to predict potential printing issues before file generation.
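A compact sketch of chaining these stages into a single pre-export gate is shown below, using trimesh for the geometric checks. The thresholds, the bounding-box proxy for wall thickness, and the report format are assumptions for illustration.

```python
import numpy as np
import trimesh

def validate_for_export(mesh: trimesh.Trimesh, min_wall_mm: float = 0.8,
                        max_overhang_deg: float = 45.0) -> dict:
    """Run geometric, dimensional, and printability checks before STL export."""
    report = {
        "geometric": bool(mesh.is_watertight),
        # Crude dimensional proxy: smallest bounding-box extent vs. minimum wall.
        "dimensional": float(min(mesh.extents)) >= min_wall_mm,
    }
    # Printability: fraction of faces steeper than the overhang limit.
    vertical = np.abs(mesh.face_normals[:, 2])
    angles = np.degrees(np.arccos(np.clip(vertical, 0.0, 1.0)))
    report["overhang_fraction"] = float((angles > max_overhang_deg).mean())
    report["ready"] = report["geometric"] and report["dimensional"]
    return report

print(validate_for_export(trimesh.creation.icosphere(radius=15)))
```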
Future Directions and Emerging Technologies
The field continues to evolve rapidly with advances in both language models and 3D printing technology. Multimodal models that can process both text and images promise to enable more intuitive design specification, allowing users to provide reference images alongside textual descriptions. Integration with computer vision systems could enable iterative refinement based on visual feedback from printed prototypes.
Advances in neural implicit representations and differentiable rendering may enable more sophisticated geometry generation techniques that better understand physical constraints and manufacturing processes. These approaches could potentially generate objects that are not only geometrically correct but also optimized for specific printing conditions and material properties.
The integration of physics simulation into the generation process represents another promising direction. By understanding how objects will behave under real-world conditions, systems could generate designs that are not only printable but also functional for their intended purpose. This capability would be particularly valuable for mechanical parts, tools, and functional prototypes.
Conclusion
Creating 3D printable objects from natural language prompts represents a significant technical challenge that requires expertise in natural language processing, computational geometry, and additive manufacturing. While current systems demonstrate impressive capabilities in generating simple to moderately complex objects, significant limitations remain in handling precise specifications, complex assemblies, and advanced manufacturing constraints.
The most successful implementations combine powerful language models with specialized geometric tools, comprehensive validation systems, and intuitive user interfaces that enable iterative refinement. As the technology continues to mature, we can expect to see more sophisticated systems that better understand both human intent and manufacturing reality, ultimately democratizing access to custom 3D printed objects for users without traditional CAD expertise.
The key to successful implementation lies in understanding the limitations of current technology while building systems that maximize utility within those constraints. By focusing on clear use cases, providing appropriate user guidance, and maintaining realistic expectations about system capabilities, developers can create valuable tools that bridge the gap between human creativity and digital fabrication.
COMPLETE SYSTEM IMPLEMENTATION
All of the following code was generated by Claude Opus 4:
Below is a working system that transforms natural language prompts into 3D-printable objects. The implementation includes VLM integration for concept-image generation and all of the components discussed in the article.
Project Structure and Dependencies
First, let me show you the complete project structure and required dependencies for building this system:
```
3d_nlp_printer/
├── requirements.txt
├── config.py
├── main.py
├── llm_interface/
│ ├── __init__.py
│ ├── local_llm.py
│ ├── remote_llm.py
│ └── vlm_integration.py
├── geometry_engine/
│ ├── __init__.py
│ ├── parametric_generator.py
│ ├── mesh_processor.py
│ └── stl_exporter.py
├── mcp_tools/
│ ├── __init__.py
│ ├── geometry_validator.py
│ └── printability_checker.py
├── visualization/
│ ├── __init__.py
│ ├── web_viewer.py
│ └── static/
│ ├── viewer.html
│ ├── viewer.js
│ └── viewer.css
├── storage/
│ ├── __init__.py
│ └── file_manager.py
└── output/
└── stl_files/
```
The requirements.txt file contains all necessary dependencies for the complete system implementation:
```
# Core dependencies
numpy==1.24.3
scipy==1.10.1
trimesh==3.21.7
openscad-python==0.5.0
pillow==10.0.0
requests==2.31.0
# LLM and AI dependencies
openai==1.3.0
anthropic==0.7.0
transformers==4.35.0
torch==2.1.0
diffusers==0.21.4
accelerate==0.24.0
# Web interface dependencies
flask==2.3.3
flask-socketio==5.3.6
flask-cors==4.0.0
# 3D processing dependencies
pymeshlab==2022.2.post4
open3d==0.17.0
cadquery==2.3.1
# Image processing for VLM
opencv-python==4.8.1.78
matplotlib==3.7.2
# MCP and tool integration
pydantic==2.4.2
fastapi==0.104.1
uvicorn==0.24.0
```
Configuration Management
The configuration system manages LLM endpoints, printing parameters, and system settings:
```python
# config.py
import os
from dataclasses import dataclass
from typing import Dict, Any, Optional
@dataclass
class LLMConfig:
"""Configuration for Large Language Model settings"""
openai_api_key: Optional[str] = None
anthropic_api_key: Optional[str] = None
local_model_path: Optional[str] = None
max_tokens: int = 4000
temperature: float = 0.7
use_local: bool = False
@dataclass
class PrinterConfig:
"""FDM printer specifications and constraints"""
nozzle_diameter: float = 0.4
layer_height: float = 0.2
max_overhang_angle: float = 45.0
max_bridge_distance: float = 10.0
min_wall_thickness: float = 0.8
build_volume: tuple = (220, 220, 250) # X, Y, Z in mm
filament_diameter: float = 1.75
@dataclass
class SystemConfig:
"""Overall system configuration"""
output_directory: str = "./output/stl_files"
temp_directory: str = "./temp"
web_port: int = 5000
enable_vlm: bool = True
max_iterations: int = 10
class ConfigManager:
"""Centralized configuration management"""
def __init__(self):
self.llm = LLMConfig(
openai_api_key=os.getenv('OPENAI_API_KEY'),
anthropic_api_key=os.getenv('ANTHROPIC_API_KEY'),
local_model_path=os.getenv('LOCAL_MODEL_PATH'),
use_local=os.getenv('USE_LOCAL_LLM', 'false').lower() == 'true'
)
self.printer = PrinterConfig()
self.system = SystemConfig()
# Ensure directories exist
os.makedirs(self.system.output_directory, exist_ok=True)
os.makedirs(self.system.temp_directory, exist_ok=True)
def get_printer_constraints_prompt(self) -> str:
"""Generate prompt text describing printer constraints"""
return f"""
FDM Printer Constraints:
- Nozzle diameter: {self.printer.nozzle_diameter}mm
- Minimum wall thickness: {self.printer.min_wall_thickness}mm
- Maximum overhang angle: {self.printer.max_overhang_angle} degrees
- Maximum bridge distance: {self.printer.max_bridge_distance}mm
- Build volume: {self.printer.build_volume[0]}x{self.printer.build_volume[1]}x{self.printer.build_volume[2]}mm
Design Requirements:
- All features must be larger than nozzle diameter
- Overhangs exceeding {self.printer.max_overhang_angle} degrees need supports
- Walls thinner than {self.printer.min_wall_thickness}mm may not print reliably
- Include drainage holes for hollow objects
- Ensure manifold geometry for successful slicing
"""
# Global configuration instance
config = ConfigManager()
```
LLM Interface Implementation
The LLM interface supports both local and remote models with seamless switching:
```python
# llm_interface/local_llm.py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from typing import List, Dict, Any, Optional
import logging
class LocalLLMInterface:
"""Interface for local LLM deployment using Transformers"""
def __init__(self, model_path: str, device: str = "auto"):
self.model_path = model_path
self.device = device
self.tokenizer = None
self.model = None
self.pipeline = None
self.logger = logging.getLogger(__name__)
self._initialize_model()
def _initialize_model(self):
"""Initialize the local model and tokenizer"""
try:
self.logger.info(f"Loading local model from {self.model_path}")
# Load tokenizer
self.tokenizer = AutoTokenizer.from_pretrained(
self.model_path,
trust_remote_code=True
)
# Load model with appropriate device mapping
self.model = AutoModelForCausalLM.from_pretrained(
self.model_path,
torch_dtype=torch.float16,
device_map=self.device,
trust_remote_code=True
)
# Create text generation pipeline
self.pipeline = pipeline(
"text-generation",
model=self.model,
tokenizer=self.tokenizer,
torch_dtype=torch.float16,
device_map=self.device
)
self.logger.info("Local model loaded successfully")
except Exception as e:
self.logger.error(f"Failed to load local model: {e}")
raise
def generate_response(self, prompt: str, max_tokens: int = 2000,
temperature: float = 0.7) -> str:
"""Generate response using local model"""
try:
# Prepare the prompt with system context
full_prompt = self._prepare_prompt(prompt)
# Generate response
outputs = self.pipeline(
full_prompt,
max_new_tokens=max_tokens,
temperature=temperature,
do_sample=True,
pad_token_id=self.tokenizer.eos_token_id
)
# Extract generated text
generated_text = outputs[0]['generated_text']
response = generated_text[len(full_prompt):].strip()
return response
except Exception as e:
self.logger.error(f"Error generating response: {e}")
return f"Error: {str(e)}"
def _prepare_prompt(self, user_prompt: str) -> str:
"""Prepare prompt with system context for 3D modeling"""
system_prompt = """You are an expert 3D modeling assistant specialized in creating FDM-printable objects from natural language descriptions. You understand geometric constraints, manufacturing limitations, and can generate parametric code for 3D objects.
When generating 3D models:
1. Always consider FDM printing constraints
2. Generate parametric OpenSCAD code when possible
3. Include proper wall thickness and support considerations
4. Explain your design decisions
5. Suggest improvements for printability
"""
return f"{system_prompt}\nUser: {user_prompt}\nAssistant:"
# llm_interface/remote_llm.py
import openai
import anthropic
from typing import Dict, Any, Optional
import logging
class RemoteLLMInterface:
"""Interface for remote LLM services (OpenAI, Anthropic)"""
def __init__(self, openai_key: Optional[str] = None,
anthropic_key: Optional[str] = None):
self.logger = logging.getLogger(__name__)
# Initialize clients
self.openai_client = None
self.anthropic_client = None
if openai_key:
self.openai_client = openai.OpenAI(api_key=openai_key)
if anthropic_key:
self.anthropic_client = anthropic.Anthropic(api_key=anthropic_key)
def generate_response(self, prompt: str, model: str = "gpt-4",
max_tokens: int = 2000, temperature: float = 0.7) -> str:
"""Generate response using specified remote model"""
try:
if model.startswith("gpt") and self.openai_client:
return self._generate_openai_response(prompt, model, max_tokens, temperature)
elif model.startswith("claude") and self.anthropic_client:
return self._generate_anthropic_response(prompt, model, max_tokens, temperature)
else:
raise ValueError(f"Model {model} not available or not configured")
except Exception as e:
self.logger.error(f"Error generating response with {model}: {e}")
return f"Error: {str(e)}"
def _generate_openai_response(self, prompt: str, model: str,
max_tokens: int, temperature: float) -> str:
"""Generate response using OpenAI API"""
system_prompt = self._get_system_prompt()
response = self.openai_client.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt}
],
max_tokens=max_tokens,
temperature=temperature
)
return response.choices[0].message.content
def _generate_anthropic_response(self, prompt: str, model: str,
max_tokens: int, temperature: float) -> str:
"""Generate response using Anthropic API"""
system_prompt = self._get_system_prompt()
response = self.anthropic_client.messages.create(
model=model,
max_tokens=max_tokens,
temperature=temperature,
system=system_prompt,
messages=[
{"role": "user", "content": prompt}
]
)
return response.content[0].text
def _get_system_prompt(self) -> str:
"""Get system prompt for 3D modeling tasks"""
return """You are an expert 3D modeling assistant specialized in creating FDM-printable objects from natural language descriptions. You have deep knowledge of:
1. Geometric modeling and parametric design
2. FDM printing constraints and limitations
3. OpenSCAD and other CAD programming languages
4. Material properties and manufacturing considerations
When creating 3D models:
- Always generate parametric code that can be easily modified
- Consider FDM printing constraints (overhangs, supports, wall thickness)
- Provide clear explanations of design decisions
- Suggest optimizations for printability and functionality
- Include proper error handling and validation
Generate clean, well-commented code that produces manifold, printable geometry."""
# llm_interface/vlm_integration.py
import torch
from diffusers import StableDiffusionPipeline, DiffusionPipeline
from PIL import Image
import numpy as np
import trimesh
from typing import Optional, Tuple
import logging
class VLMIntegration:
"""Vision-Language Model integration for 3D generation"""
def __init__(self, device: str = "auto"):
self.device = device if device != "auto" else ("cuda" if torch.cuda.is_available() else "cpu")
self.logger = logging.getLogger(__name__)
# Initialize image generation pipeline
self.image_pipeline = None
self.mesh_pipeline = None
self._initialize_pipelines()
def _initialize_pipelines(self):
"""Initialize VLM pipelines for image and 3D generation"""
try:
# Initialize Stable Diffusion for concept visualization
self.image_pipeline = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
torch_dtype=torch.float16 if self.device == "cuda" else torch.float32
).to(self.device)
self.logger.info("VLM pipelines initialized successfully")
except Exception as e:
self.logger.error(f"Failed to initialize VLM pipelines: {e}")
self.image_pipeline = None
def generate_concept_image(self, prompt: str, guidance_scale: float = 7.5) -> Optional[Image.Image]:
"""Generate concept image from text prompt"""
if not self.image_pipeline:
self.logger.warning("Image pipeline not available")
return None
try:
# Enhance prompt for 3D object visualization
enhanced_prompt = f"3D render of {prompt}, white background, product photography, high quality, detailed"
image = self.image_pipeline(
enhanced_prompt,
guidance_scale=guidance_scale,
num_inference_steps=50
).images[0]
return image
except Exception as e:
self.logger.error(f"Error generating concept image: {e}")
return None
def image_to_3d_guidance(self, image: Image.Image, base_prompt: str) -> str:
"""Analyze image to provide 3D modeling guidance"""
# This is a simplified implementation
# In practice, you would use more sophisticated image analysis
# Convert image to numpy array for analysis
img_array = np.array(image)
# Basic shape analysis
height, width = img_array.shape[:2]
aspect_ratio = width / height
# Generate guidance based on image analysis
guidance = f"""
Based on the concept image analysis:
- Aspect ratio: {aspect_ratio:.2f} (width/height)
- Consider the overall proportions when generating geometry
- The image suggests a {self._classify_shape(aspect_ratio)} form factor
Enhanced prompt for 3D generation: {base_prompt}
"""
return guidance
def _classify_shape(self, aspect_ratio: float) -> str:
"""Classify basic shape category from aspect ratio"""
if aspect_ratio > 1.5:
return "elongated/horizontal"
elif aspect_ratio < 0.67:
return "tall/vertical"
else:
return "balanced/square"
```
Geometry Engine Implementation
The geometry engine handles parametric model generation and mesh processing:
```python
# geometry_engine/parametric_generator.py
import numpy as np
import trimesh
from typing import Dict, Any, List, Optional, Tuple
import tempfile
import subprocess
import os
import logging
class ParametricGenerator:
"""Generates 3D geometry from parametric descriptions"""
def __init__(self, temp_dir: str = None):
self.temp_dir = temp_dir or tempfile.gettempdir()
self.logger = logging.getLogger(__name__)
# Template library for common shapes
self.shape_templates = {
'cylinder': self._cylinder_template,
'box': self._box_template,
'sphere': self._sphere_template,
'cone': self._cone_template,
'torus': self._torus_template
}
def generate_from_code(self, openscad_code: str, parameters: Dict[str, Any] = None) -> Optional[trimesh.Trimesh]:
"""Generate mesh from OpenSCAD code"""
try:
# Create temporary files
scad_file = os.path.join(self.temp_dir, f"temp_{os.getpid()}.scad")
stl_file = os.path.join(self.temp_dir, f"temp_{os.getpid()}.stl")
# Apply parameters to code if provided
if parameters:
openscad_code = self._apply_parameters(openscad_code, parameters)
# Write OpenSCAD code to file
with open(scad_file, 'w') as f:
f.write(openscad_code)
# Execute OpenSCAD to generate STL
cmd = ['openscad', '-o', stl_file, scad_file]
result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
if result.returncode != 0:
self.logger.error(f"OpenSCAD error: {result.stderr}")
return None
# Load generated STL
if os.path.exists(stl_file):
mesh = trimesh.load(stl_file)
# Cleanup temporary files
os.remove(scad_file)
os.remove(stl_file)
return mesh
except Exception as e:
self.logger.error(f"Error generating mesh from code: {e}")
return None
def generate_primitive(self, shape_type: str, parameters: Dict[str, Any]) -> Optional[trimesh.Trimesh]:
"""Generate basic primitive shapes"""
if shape_type not in self.shape_templates:
self.logger.error(f"Unknown shape type: {shape_type}")
return None
try:
return self.shape_templates[shape_type](parameters)
except Exception as e:
self.logger.error(f"Error generating {shape_type}: {e}")
return None
def _apply_parameters(self, code: str, parameters: Dict[str, Any]) -> str:
"""Apply parameter values to OpenSCAD code"""
for param, value in parameters.items():
# Replace parameter assignments in code
code = code.replace(f"{param}=", f"{param}={value}; //")
return code
def _cylinder_template(self, params: Dict[str, Any]) -> trimesh.Trimesh:
"""Generate cylinder primitive"""
height = params.get('height', 10)
radius = params.get('radius', 5)
segments = params.get('segments', 32)
return trimesh.creation.cylinder(
radius=radius,
height=height,
sections=segments
)
def _box_template(self, params: Dict[str, Any]) -> trimesh.Trimesh:
"""Generate box primitive"""
width = params.get('width', 10)
depth = params.get('depth', 10)
height = params.get('height', 10)
return trimesh.creation.box(
extents=[width, depth, height]
)
def _sphere_template(self, params: Dict[str, Any]) -> trimesh.Trimesh:
"""Generate sphere primitive"""
radius = params.get('radius', 5)
subdivisions = params.get('subdivisions', 3)
return trimesh.creation.icosphere(
radius=radius,
subdivisions=subdivisions
)
def _cone_template(self, params: Dict[str, Any]) -> trimesh.Trimesh:
"""Generate cone primitive"""
radius = params.get('radius', 5)
height = params.get('height', 10)
segments = params.get('segments', 32)
return trimesh.creation.cone(
radius=radius,
height=height,
sections=segments
)
def _torus_template(self, params: Dict[str, Any]) -> trimesh.Trimesh:
"""Generate torus primitive"""
major_radius = params.get('major_radius', 10)
minor_radius = params.get('minor_radius', 2)
major_segments = params.get('major_segments', 32)
minor_segments = params.get('minor_segments', 16)
return trimesh.creation.torus(
major_radius=major_radius,
minor_radius=minor_radius,
major_sections=major_segments,
minor_sections=minor_segments
)
# geometry_engine/mesh_processor.py
import trimesh
import numpy as np
from typing import Optional, Tuple, List, Dict, Any
import logging
class MeshProcessor:
"""Advanced mesh processing for FDM optimization"""
def __init__(self, min_feature_size: float = 0.4, max_overhang_angle: float = 45):
self.min_feature_size = min_feature_size
self.max_overhang_angle = max_overhang_angle
self.logger = logging.getLogger(__name__)
def optimize_for_fdm(self, mesh: trimesh.Trimesh) -> trimesh.Trimesh:
"""Comprehensive FDM optimization pipeline"""
try:
# Step 1: Basic mesh validation and repair
mesh = self._validate_and_repair(mesh)
# Step 2: Remove small features
mesh = self._remove_small_features(mesh)
# Step 3: Optimize wall thickness
mesh = self._optimize_wall_thickness(mesh)
# Step 4: Add drainage holes if needed
mesh = self._add_drainage_holes(mesh)
# Step 5: Final validation
if not mesh.is_watertight:
self.logger.warning("Mesh is not watertight after processing")
return mesh
except Exception as e:
self.logger.error(f"Error optimizing mesh for FDM: {e}")
return mesh
def _validate_and_repair(self, mesh: trimesh.Trimesh) -> trimesh.Trimesh:
"""Validate and repair basic mesh issues"""
        # Remove duplicate faces and unreferenced vertices
        mesh.remove_duplicate_faces()
mesh.remove_unreferenced_vertices()
# Fix normals
mesh.fix_normals()
# Fill holes if small enough
mesh.fill_holes()
return mesh
def _remove_small_features(self, mesh: trimesh.Trimesh) -> trimesh.Trimesh:
"""Remove features smaller than minimum printable size"""
# Calculate edge lengths
edges = mesh.edges_unique
edge_vectors = mesh.vertices[edges[:, 1]] - mesh.vertices[edges[:, 0]]
edge_lengths = np.linalg.norm(edge_vectors, axis=1)
# Identify small edges
small_edges = edge_lengths < self.min_feature_size
if np.any(small_edges):
self.logger.info(f"Removing {np.sum(small_edges)} small features")
# Simplify mesh to remove small features
            mesh = mesh.simplify_quadric_decimation(face_count=int(len(mesh.faces) * 0.95))
return mesh
def _optimize_wall_thickness(self, mesh: trimesh.Trimesh) -> trimesh.Trimesh:
"""Ensure adequate wall thickness for FDM printing"""
# This is a simplified implementation
# In practice, you would use more sophisticated wall thickness analysis
# Check if mesh is hollow
if mesh.is_watertight and mesh.volume > 0:
# For hollow objects, ensure wall thickness
bounds = mesh.bounds
size = bounds[1] - bounds[0]
# If object is large enough, consider making it hollow with proper walls
if np.min(size) > self.min_feature_size * 10:
self.logger.info("Object suitable for hollow printing")
return mesh
def _add_drainage_holes(self, mesh: trimesh.Trimesh) -> trimesh.Trimesh:
"""Add drainage holes for hollow objects"""
# Check if object needs drainage holes
if mesh.is_watertight and mesh.volume > 0:
bounds = mesh.bounds
size = bounds[1] - bounds[0]
# Add drainage holes for larger hollow objects
if np.min(size) > 20: # Objects larger than 20mm
self.logger.info("Adding drainage holes for hollow object")
# Implementation would add small holes at appropriate locations
return mesh
def analyze_overhangs(self, mesh: trimesh.Trimesh) -> Dict[str, Any]:
"""Analyze overhang angles and support requirements"""
face_normals = mesh.face_normals
# Calculate angle with vertical (Z-axis)
vertical = np.array([0, 0, 1])
angles = np.arccos(np.abs(np.dot(face_normals, vertical)))
angles_degrees = np.degrees(angles)
# Identify overhangs
overhang_faces = angles_degrees > self.max_overhang_angle
overhang_percentage = np.sum(overhang_faces) / len(angles_degrees) * 100
return {
'has_overhangs': np.any(overhang_faces),
'overhang_face_count': np.sum(overhang_faces),
'overhang_percentage': overhang_percentage,
'max_overhang_angle': np.max(angles_degrees),
'needs_supports': overhang_percentage > 5 # Threshold for support recommendation
}
def calculate_print_stats(self, mesh: trimesh.Trimesh, layer_height: float = 0.2) -> Dict[str, Any]:
"""Calculate printing statistics"""
bounds = mesh.bounds
size = bounds[1] - bounds[0]
# Estimate print time (very rough approximation)
estimated_layers = size[2] / layer_height
estimated_print_time_hours = estimated_layers * 0.05 # 3 minutes per layer average
return {
'volume_mm3': mesh.volume,
'surface_area_mm2': mesh.area,
'bounding_box_mm': size.tolist(),
'estimated_layers': int(estimated_layers),
'estimated_print_time_hours': round(estimated_print_time_hours, 1),
'is_manifold': mesh.is_watertight
}
# geometry_engine/stl_exporter.py
import trimesh
import os
from typing import Optional, Dict, Any
import logging
class STLExporter:
"""Handles STL file export with proper formatting"""
def __init__(self, output_directory: str):
self.output_directory = output_directory
self.logger = logging.getLogger(__name__)
# Ensure output directory exists
os.makedirs(output_directory, exist_ok=True)
def export_mesh(self, mesh: trimesh.Trimesh, filename: str,
format_type: str = 'binary') -> Optional[str]:
"""Export mesh to STL file"""
try:
# Ensure filename has .stl extension
if not filename.endswith('.stl'):
filename += '.stl'
# Full file path
filepath = os.path.join(self.output_directory, filename)
# Export based on format type
if format_type == 'binary':
mesh.export(filepath, file_type='stl')
else:
mesh.export(filepath, file_type='stl_ascii')
self.logger.info(f"STL exported successfully: {filepath}")
return filepath
except Exception as e:
self.logger.error(f"Error exporting STL: {e}")
return None
def validate_stl(self, filepath: str) -> bool:
"""Validate exported STL file"""
try:
# Try to load the exported file
mesh = trimesh.load(filepath)
# Basic validation checks
if len(mesh.vertices) == 0 or len(mesh.faces) == 0:
return False
# Check for manifold geometry
if not mesh.is_watertight:
self.logger.warning(f"STL file {filepath} is not watertight")
return True
except Exception as e:
self.logger.error(f"STL validation failed: {e}")
return False
def get_file_info(self, filepath: str) -> Optional[Dict[str, Any]]:
"""Get information about exported STL file"""
try:
if not os.path.exists(filepath):
return None
# File size
file_size = os.path.getsize(filepath)
# Load mesh for analysis
mesh = trimesh.load(filepath)
return {
'filepath': filepath,
'file_size_bytes': file_size,
'file_size_mb': round(file_size / (1024 * 1024), 2),
'vertex_count': len(mesh.vertices),
'face_count': len(mesh.faces),
'volume': mesh.volume,
'surface_area': mesh.area,
'is_manifold': mesh.is_watertight
}
except Exception as e:
self.logger.error(f"Error getting file info: {e}")
return None
```
MCP Tools Implementation
The MCP tools provide specialized geometry processing capabilities:
```python
# mcp_tools/geometry_validator.py
import trimesh
import numpy as np
from typing import Dict, Any, List, Tuple
import logging
class GeometryValidator:
"""Comprehensive geometry validation for 3D printing"""
def __init__(self):
self.logger = logging.getLogger(__name__)
def validate_mesh(self, mesh: trimesh.Trimesh, printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Comprehensive mesh validation"""
results = {
'is_valid': True,
'errors': [],
'warnings': [],
'metrics': {}
}
try:
# Basic mesh validation
basic_validation = self._validate_basic_geometry(mesh)
results.update(basic_validation)
# FDM-specific validation
fdm_validation = self._validate_fdm_constraints(mesh, printer_config)
results['fdm_analysis'] = fdm_validation
# Calculate metrics
results['metrics'] = self._calculate_metrics(mesh)
# Overall validity
results['is_valid'] = len(results['errors']) == 0
except Exception as e:
results['errors'].append(f"Validation error: {str(e)}")
results['is_valid'] = False
return results
def _validate_basic_geometry(self, mesh: trimesh.Trimesh) -> Dict[str, Any]:
"""Validate basic geometric properties"""
errors = []
warnings = []
# Check if mesh is empty
if len(mesh.vertices) == 0 or len(mesh.faces) == 0:
errors.append("Mesh is empty")
return {'errors': errors, 'warnings': warnings}
# Check manifold geometry
if not mesh.is_watertight:
errors.append("Mesh is not watertight (not manifold)")
# Check for degenerate faces
face_areas = mesh.area_faces
degenerate_faces = np.sum(face_areas < 1e-10)
if degenerate_faces > 0:
warnings.append(f"{degenerate_faces} degenerate faces found")
# Check for duplicate vertices
if mesh.vertices.shape[0] != len(np.unique(mesh.vertices, axis=0)):
warnings.append("Duplicate vertices found")
# Check normal consistency
try:
mesh.fix_normals()
except:
warnings.append("Normal orientation issues detected")
return {'errors': errors, 'warnings': warnings}
def _validate_fdm_constraints(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Validate FDM-specific constraints"""
analysis = {
'overhangs': self._analyze_overhangs(mesh, printer_config),
'wall_thickness': self._analyze_wall_thickness(mesh, printer_config),
'feature_size': self._analyze_feature_size(mesh, printer_config),
'build_volume': self._check_build_volume(mesh, printer_config)
}
return analysis
def _analyze_overhangs(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Analyze overhang angles"""
max_angle = printer_config.get('max_overhang_angle', 45)
# Calculate face angles with vertical
face_normals = mesh.face_normals
vertical = np.array([0, 0, 1])
# Calculate angles
dot_products = np.abs(np.dot(face_normals, vertical))
angles = np.arccos(np.clip(dot_products, 0, 1))
angles_degrees = np.degrees(angles)
# Find overhangs
overhang_faces = angles_degrees > max_angle
overhang_count = np.sum(overhang_faces)
overhang_percentage = overhang_count / len(angles_degrees) * 100
return {
'has_overhangs': overhang_count > 0,
'overhang_face_count': int(overhang_count),
'overhang_percentage': round(overhang_percentage, 2),
'max_overhang_angle': round(np.max(angles_degrees), 2),
'needs_supports': overhang_percentage > 5
}
def _analyze_wall_thickness(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Analyze wall thickness"""
min_thickness = printer_config.get('min_wall_thickness', 0.8)
# This is a simplified analysis
# In practice, you would use more sophisticated algorithms
bounds = mesh.bounds
size = bounds[1] - bounds[0]
min_dimension = np.min(size)
return {
'min_dimension': round(min_dimension, 2),
'meets_minimum': min_dimension >= min_thickness,
'recommended_thickness': min_thickness
}
def _analyze_feature_size(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Analyze minimum feature sizes"""
nozzle_diameter = printer_config.get('nozzle_diameter', 0.4)
min_feature = nozzle_diameter * 2 # Rule of thumb
# Calculate edge lengths
edges = mesh.edges_unique
edge_vectors = mesh.vertices[edges[:, 1]] - mesh.vertices[edges[:, 0]]
edge_lengths = np.linalg.norm(edge_vectors, axis=1)
small_features = edge_lengths < min_feature
small_feature_count = np.sum(small_features)
return {
'min_edge_length': round(np.min(edge_lengths), 3),
'small_feature_count': int(small_feature_count),
'min_recommended_feature': min_feature,
'has_small_features': small_feature_count > 0
}
def _check_build_volume(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Check if object fits in build volume"""
build_volume = printer_config.get('build_volume', (200, 200, 200))
bounds = mesh.bounds
size = bounds[1] - bounds[0]
fits_x = size[0] <= build_volume[0]
fits_y = size[1] <= build_volume[1]
fits_z = size[2] <= build_volume[2]
fits_overall = fits_x and fits_y and fits_z
return {
'object_size': [round(s, 2) for s in size],
'build_volume': build_volume,
'fits_x': fits_x,
'fits_y': fits_y,
'fits_z': fits_z,
'fits_overall': fits_overall
}
def _calculate_metrics(self, mesh: trimesh.Trimesh) -> Dict[str, Any]:
"""Calculate useful metrics"""
bounds = mesh.bounds
size = bounds[1] - bounds[0]
return {
'volume_mm3': round(mesh.volume, 2),
'surface_area_mm2': round(mesh.area, 2),
'bounding_box_mm': [round(s, 2) for s in size],
'vertex_count': len(mesh.vertices),
'face_count': len(mesh.faces),
'is_manifold': mesh.is_watertight
}
# mcp_tools/printability_checker.py
import trimesh
import numpy as np
from typing import Dict, Any, List
import logging
class PrintabilityChecker:
"""Advanced printability analysis for FDM"""
def __init__(self):
self.logger = logging.getLogger(__name__)
def analyze_printability(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Comprehensive printability analysis"""
analysis = {
'overall_score': 0,
'issues': [],
'recommendations': [],
'support_analysis': {},
'orientation_analysis': {},
'material_analysis': {}
}
try:
# Analyze different aspects
support_analysis = self._analyze_support_requirements(mesh, printer_config)
orientation_analysis = self._analyze_optimal_orientation(mesh, printer_config)
material_analysis = self._analyze_material_requirements(mesh, printer_config)
analysis['support_analysis'] = support_analysis
analysis['orientation_analysis'] = orientation_analysis
analysis['material_analysis'] = material_analysis
# Calculate overall score
analysis['overall_score'] = self._calculate_overall_score(
support_analysis, orientation_analysis, material_analysis
)
# Generate recommendations
analysis['recommendations'] = self._generate_recommendations(
support_analysis, orientation_analysis, material_analysis
)
except Exception as e:
self.logger.error(f"Error in printability analysis: {e}")
analysis['issues'].append(f"Analysis error: {str(e)}")
return analysis
def _analyze_support_requirements(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Analyze support structure requirements"""
max_overhang = printer_config.get('max_overhang_angle', 45)
# Calculate face angles
face_normals = mesh.face_normals
vertical = np.array([0, 0, 1])
angles = np.arccos(np.abs(np.dot(face_normals, vertical)))
angles_degrees = np.degrees(angles)
# Identify faces needing support
needs_support = angles_degrees > max_overhang
support_area = np.sum(mesh.area_faces[needs_support])
total_area = mesh.area
support_percentage = (support_area / total_area) * 100 if total_area > 0 else 0
return {
'needs_supports': np.any(needs_support),
'support_percentage': round(support_percentage, 2),
'support_area_mm2': round(support_area, 2),
'max_overhang_found': round(np.max(angles_degrees), 2),
'support_complexity': 'low' if support_percentage < 10 else 'medium' if support_percentage < 30 else 'high'
}
def _analyze_optimal_orientation(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Analyze optimal print orientation"""
# Try different orientations and score them
orientations = [
            ('current', np.eye(4)),  # 4x4 identity; apply_transform expects homogeneous transforms
('rotated_x_90', self._rotation_matrix_x(90)),
('rotated_y_90', self._rotation_matrix_y(90)),
('rotated_z_90', self._rotation_matrix_z(90))
]
best_orientation = None
best_score = -1
orientation_scores = {}
for name, rotation_matrix in orientations:
# Apply rotation
rotated_mesh = mesh.copy()
rotated_mesh.apply_transform(rotation_matrix)
# Score this orientation
score = self._score_orientation(rotated_mesh, printer_config)
orientation_scores[name] = score
if score > best_score:
best_score = score
best_orientation = name
return {
'current_score': orientation_scores.get('current', 0),
'best_orientation': best_orientation,
'best_score': best_score,
'all_scores': orientation_scores,
'improvement_possible': best_score > orientation_scores.get('current', 0)
}
def _score_orientation(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> float:
"""Score a specific orientation for printability"""
score = 100 # Start with perfect score
# Penalize overhangs
face_normals = mesh.face_normals
vertical = np.array([0, 0, 1])
angles = np.arccos(np.abs(np.dot(face_normals, vertical)))
angles_degrees = np.degrees(angles)
max_overhang = printer_config.get('max_overhang_angle', 45)
overhang_faces = angles_degrees > max_overhang
overhang_penalty = np.sum(overhang_faces) / len(angles_degrees) * 50
score -= overhang_penalty
# Penalize tall objects (more layers = longer print time)
bounds = mesh.bounds
height = bounds[1][2] - bounds[0][2]
height_penalty = min(height / 100, 20) # Cap penalty at 20 points
score -= height_penalty
# Bonus for stable base
base_area = self._calculate_base_area(mesh)
total_area = mesh.area
base_ratio = base_area / total_area if total_area > 0 else 0
stability_bonus = base_ratio * 10
score += stability_bonus
return max(0, score) # Ensure non-negative score
def _calculate_base_area(self, mesh: trimesh.Trimesh) -> float:
"""Calculate area of base touching build plate"""
# Find faces at minimum Z
min_z = mesh.bounds[0][2]
tolerance = 0.1
base_faces = []
for i, face in enumerate(mesh.faces):
face_vertices = mesh.vertices[face]
if np.all(face_vertices[:, 2] <= min_z + tolerance):
base_faces.append(i)
if not base_faces:
return 0
return np.sum(mesh.area_faces[base_faces])
def _analyze_material_requirements(self, mesh: trimesh.Trimesh,
printer_config: Dict[str, Any]) -> Dict[str, Any]:
"""Analyze material and printing requirements"""
volume = mesh.volume
surface_area = mesh.area
# Estimate material usage (including infill)
infill_percentage = 20 # Assume 20% infill
material_volume = volume * (infill_percentage / 100)
# Estimate print time (very rough)
layer_height = printer_config.get('layer_height', 0.2)
bounds = mesh.bounds
height = bounds[1][2] - bounds[0][2]
estimated_layers = height / layer_height
estimated_time_hours = estimated_layers * 0.05 # 3 minutes per layer
return {
'volume_mm3': round(volume, 2),
'estimated_material_mm3': round(material_volume, 2),
'surface_area_mm2': round(surface_area, 2),
'estimated_layers': int(estimated_layers),
'estimated_print_time_hours': round(estimated_time_hours, 1),
'complexity': 'low' if volume < 1000 else 'medium' if volume < 10000 else 'high'
}
def _calculate_overall_score(self, support_analysis: Dict,
orientation_analysis: Dict,
material_analysis: Dict) -> float:
"""Calculate overall printability score"""
score = 100
# Support penalty
if support_analysis.get('needs_supports', False):
support_penalty = support_analysis.get('support_percentage', 0) * 0.5
score -= support_penalty
# Orientation bonus
if orientation_analysis.get('improvement_possible', False):
score -= 10 # Penalty for suboptimal orientation
# Complexity penalty
complexity = material_analysis.get('complexity', 'low')
if complexity == 'medium':
score -= 5
elif complexity == 'high':
score -= 15
return max(0, min(100, score))
def _generate_recommendations(self, support_analysis: Dict,
orientation_analysis: Dict,
material_analysis: Dict) -> List[str]:
"""Generate printability recommendations"""
recommendations = []
# Support recommendations
if support_analysis.get('needs_supports', False):
support_complexity = support_analysis.get('support_complexity', 'low')
if support_complexity == 'high':
recommendations.append("Consider redesigning to reduce support requirements")
else:
recommendations.append("Enable support structures in your slicer")
# Orientation recommendations
if orientation_analysis.get('improvement_possible', False):
best_orientation = orientation_analysis.get('best_orientation', 'current')
recommendations.append(f"Consider rotating to '{best_orientation}' orientation for better printability")
# Material recommendations
complexity = material_analysis.get('complexity', 'low')
if complexity == 'high':
recommendations.append("Large/complex object - consider splitting into multiple parts")
print_time = material_analysis.get('estimated_print_time_hours', 0)
if print_time > 10:
recommendations.append("Long print time - ensure good bed adhesion and stable environment")
return recommendations
    def _rotation_matrix_x(self, angle_degrees: float) -> np.ndarray:
        """Create a 4x4 homogeneous rotation matrix around the X axis
        (trimesh.apply_transform expects 4x4 transforms)"""
        angle = np.radians(angle_degrees)
        matrix = np.eye(4)
        matrix[:3, :3] = [
            [1, 0, 0],
            [0, np.cos(angle), -np.sin(angle)],
            [0, np.sin(angle), np.cos(angle)]
        ]
        return matrix
    def _rotation_matrix_y(self, angle_degrees: float) -> np.ndarray:
        """Create a 4x4 homogeneous rotation matrix around the Y axis"""
        angle = np.radians(angle_degrees)
        matrix = np.eye(4)
        matrix[:3, :3] = [
            [np.cos(angle), 0, np.sin(angle)],
            [0, 1, 0],
            [-np.sin(angle), 0, np.cos(angle)]
        ]
        return matrix
    def _rotation_matrix_z(self, angle_degrees: float) -> np.ndarray:
        """Create a 4x4 homogeneous rotation matrix around the Z axis"""
        angle = np.radians(angle_degrees)
        matrix = np.eye(4)
        matrix[:3, :3] = [
            [np.cos(angle), -np.sin(angle), 0],
            [np.sin(angle), np.cos(angle), 0],
            [0, 0, 1]
        ]
        return matrix
```
Web Visualization Interface
The web interface provides real-time 3D visualization and user interaction:
```python
# visualization/web_viewer.py
from flask import Flask, render_template, request, jsonify, send_file
from flask_socketio import SocketIO, emit
from flask_cors import CORS
from werkzeug.utils import secure_filename
import json
import os
import logging
from typing import Dict, Any
class WebViewer:
"""Web-based 3D viewer and chat interface"""
def __init__(self, config, llm_interface, geometry_engine, mcp_tools):
self.app = Flask(__name__)
self.app.config['SECRET_KEY'] = 'your-secret-key-here'
self.socketio = SocketIO(self.app, cors_allowed_origins="*")
CORS(self.app)
self.config = config
self.llm_interface = llm_interface
self.geometry_engine = geometry_engine
self.mcp_tools = mcp_tools
self.logger = logging.getLogger(__name__)
# Current session state
self.current_mesh = None
self.conversation_history = []
self._setup_routes()
self._setup_socketio_events()
def _setup_routes(self):
"""Setup Flask routes"""
@self.app.route('/')
def index():
return render_template('viewer.html')
@self.app.route('/api/upload', methods=['POST'])
def upload_file():
if 'file' not in request.files:
return jsonify({'error': 'No file provided'}), 400
file = request.files['file']
if file.filename == '':
return jsonify({'error': 'No file selected'}), 400
if file and file.filename.endswith('.stl'):
# Save uploaded file
filename = secure_filename(file.filename)
filepath = os.path.join(self.config.system.temp_directory, filename)
file.save(filepath)
# Load mesh
try:
import trimesh
self.current_mesh = trimesh.load(filepath)
return jsonify({'success': True, 'message': 'File uploaded successfully'})
except Exception as e:
return jsonify({'error': f'Failed to load STL: {str(e)}'}), 400
return jsonify({'error': 'Invalid file type'}), 400
@self.app.route('/api/download/<filename>')
def download_file(filename):
filepath = os.path.join(self.config.system.output_directory, filename)
if os.path.exists(filepath):
return send_file(filepath, as_attachment=True)
return jsonify({'error': 'File not found'}), 404
def _setup_socketio_events(self):
"""Setup SocketIO event handlers"""
@self.socketio.on('connect')
def handle_connect():
self.logger.info('Client connected')
emit('status', {'message': 'Connected to 3D NLP Printer'})
@self.socketio.on('disconnect')
def handle_disconnect():
self.logger.info('Client disconnected')
@self.socketio.on('chat_message')
def handle_chat_message(data):
user_message = data.get('message', '')
self.logger.info(f"Received message: {user_message}")
# Add to conversation history
self.conversation_history.append({'role': 'user', 'content': user_message})
# Process message
response = self._process_user_message(user_message)
# Add response to history
self.conversation_history.append({'role': 'assistant', 'content': response['text']})
# Send response
emit('chat_response', response)
@self.socketio.on('request_mesh_data')
def handle_mesh_request():
if self.current_mesh:
mesh_data = self._prepare_mesh_for_viewer(self.current_mesh)
emit('mesh_data', mesh_data)
else:
emit('mesh_data', None)
@self.socketio.on('save_stl')
def handle_save_stl(data):
filename = data.get('filename', 'generated_model.stl')
if self.current_mesh:
success = self._save_current_mesh(filename)
if success:
emit('save_complete', {'filename': filename, 'success': True})
else:
emit('save_complete', {'filename': filename, 'success': False})
def _process_user_message(self, message: str) -> Dict[str, Any]:
"""Process user message and generate response"""
try:
# Check for special commands
if message.lower().startswith('save'):
return self._handle_save_command(message)
elif message.lower().startswith('analyze'):
return self._handle_analyze_command()
elif message.lower().startswith('generate image'):
return self._handle_image_generation(message)
# Regular 3D modeling request
return self._handle_modeling_request(message)
except Exception as e:
self.logger.error(f"Error processing message: {e}")
return {
'text': f"Sorry, I encountered an error: {str(e)}",
'type': 'error'
}
def _handle_modeling_request(self, message: str) -> Dict[str, Any]:
"""Handle 3D modeling requests"""
# Build context from conversation history
context = self._build_context()
# Add printer constraints to prompt
constraints_prompt = self.config.get_printer_constraints_prompt()
full_prompt = f"{constraints_prompt}\n\nConversation context:\n{context}\n\nUser request: {message}"
        # Generate response using LLM (prefer local when configured and actually initialized)
        if self.config.llm.use_local and self.llm_interface.local_llm:
            llm_response = self.llm_interface.local_llm.generate_response(full_prompt)
        elif self.llm_interface.remote_llm:
            llm_response = self.llm_interface.remote_llm.generate_response(full_prompt, model="gpt-4")
        else:
            raise RuntimeError("No LLM interface is available to handle this request")
# Extract code if present
openscad_code = self._extract_openscad_code(llm_response)
response = {
'text': llm_response,
'type': 'modeling'
}
# Generate 3D model if code was provided
if openscad_code:
mesh = self.geometry_engine.parametric_generator.generate_from_code(openscad_code)
if mesh:
# Optimize for FDM
optimized_mesh = self.geometry_engine.mesh_processor.optimize_for_fdm(mesh)
self.current_mesh = optimized_mesh
# Analyze the generated mesh
validation = self.mcp_tools.geometry_validator.validate_mesh(
optimized_mesh,
self.config.printer.__dict__
)
response['mesh_generated'] = True
response['validation'] = validation
response['mesh_data'] = self._prepare_mesh_for_viewer(optimized_mesh)
else:
response['text'] += "\n\nNote: Could not generate 3D model from the provided code."
return response
def _handle_save_command(self, message: str) -> Dict[str, Any]:
"""Handle save commands"""
if not self.current_mesh:
return {
'text': "No model to save. Please generate a model first.",
'type': 'error'
}
# Extract filename from message or use default
filename = self._extract_filename(message) or 'generated_model.stl'
# Save the mesh
success = self._save_current_mesh(filename)
if success:
return {
'text': f"Model saved successfully as {filename}",
'type': 'success',
'saved_file': filename
}
else:
return {
'text': "Failed to save the model. Please try again.",
'type': 'error'
}
def _handle_analyze_command(self) -> Dict[str, Any]:
"""Handle analysis commands"""
if not self.current_mesh:
return {
'text': "No model to analyze. Please generate a model first.",
'type': 'error'
}
# Perform comprehensive analysis
validation = self.mcp_tools.geometry_validator.validate_mesh(
self.current_mesh,
self.config.printer.__dict__
)
printability = self.mcp_tools.printability_checker.analyze_printability(
self.current_mesh,
self.config.printer.__dict__
)
# Format analysis results
analysis_text = self._format_analysis_results(validation, printability)
return {
'text': analysis_text,
'type': 'analysis',
'validation': validation,
'printability': printability
}
def _handle_image_generation(self, message: str) -> Dict[str, Any]:
"""Handle image generation requests"""
if not hasattr(self.llm_interface, 'vlm') or not self.llm_interface.vlm:
return {
'text': "Image generation not available. VLM not configured.",
'type': 'error'
}
# Extract prompt for image generation
prompt = message.replace('generate image', '').strip()
if not prompt:
prompt = "3D object concept"
# Generate concept image
image = self.llm_interface.vlm.generate_concept_image(prompt)
if image:
# Save image temporarily
image_path = os.path.join(self.config.system.temp_directory, 'concept.png')
image.save(image_path)
# Generate guidance for 3D modeling
guidance = self.llm_interface.vlm.image_to_3d_guidance(image, prompt)
return {
'text': f"Generated concept image for: {prompt}\n\n{guidance}",
'type': 'image_generation',
'image_path': image_path
}
else:
return {
'text': "Failed to generate concept image. Please try again.",
'type': 'error'
}
def _build_context(self) -> str:
"""Build context from conversation history"""
context_messages = []
for msg in self.conversation_history[-6:]: # Last 6 messages
role = msg['role']
content = msg['content'][:200] # Truncate long messages
context_messages.append(f"{role}: {content}")
return "\n".join(context_messages)
def _extract_openscad_code(self, text: str) -> str:
"""Extract OpenSCAD code from LLM response"""
# Look for code blocks
import re
# Try to find code blocks with openscad or scad language identifier
patterns = [
r'```(?:openscad|scad)\n(.*?)\n```',
r'```\n(.*?)\n```',
r'```(.*?)```'
]
for pattern in patterns:
matches = re.findall(pattern, text, re.DOTALL)
if matches:
code = matches[0].strip()
# Basic validation - check if it looks like OpenSCAD
if any(keyword in code for keyword in ['module', 'cylinder', 'cube', 'sphere', 'difference', 'union']):
return code
return ""
    def _extract_filename(self, message: str) -> Optional[str]:
"""Extract filename from save command"""
import re
# Look for filename patterns
patterns = [
r'save as ([^\s]+\.stl)',
r'save ([^\s]+\.stl)',
r'filename ([^\s]+\.stl)'
]
for pattern in patterns:
match = re.search(pattern, message.lower())
if match:
return match.group(1)
return None
def _save_current_mesh(self, filename: str) -> bool:
"""Save current mesh to STL file"""
if not self.current_mesh:
return False
try:
filepath = self.geometry_engine.stl_exporter.export_mesh(
self.current_mesh,
filename
)
return filepath is not None
except Exception as e:
self.logger.error(f"Error saving mesh: {e}")
return False
    def _prepare_mesh_for_viewer(self, mesh) -> Optional[Dict[str, Any]]:
"""Prepare mesh data for web viewer"""
try:
# Convert to format suitable for Three.js
vertices = mesh.vertices.flatten().tolist()
faces = mesh.faces.flatten().tolist()
# Calculate normals if not available
if hasattr(mesh, 'vertex_normals'):
normals = mesh.vertex_normals.flatten().tolist()
else:
normals = []
return {
'vertices': vertices,
'faces': faces,
'normals': normals,
'vertex_count': len(mesh.vertices),
'face_count': len(mesh.faces)
}
except Exception as e:
self.logger.error(f"Error preparing mesh data: {e}")
return None
def _format_analysis_results(self, validation: Dict, printability: Dict) -> str:
"""Format analysis results for display"""
text = "=== MESH ANALYSIS ===\n\n"
# Basic validation
text += "GEOMETRY VALIDATION:\n"
text += f"- Manifold: {'✓' if validation.get('is_valid', False) else '✗'}\n"
text += f"- Vertex count: {validation.get('metrics', {}).get('vertex_count', 'N/A')}\n"
text += f"- Face count: {validation.get('metrics', {}).get('face_count', 'N/A')}\n"
text += f"- Volume: {validation.get('metrics', {}).get('volume_mm3', 'N/A')} mm³\n\n"
# FDM analysis
fdm_analysis = validation.get('fdm_analysis', {})
if fdm_analysis:
text += "FDM PRINTABILITY:\n"
overhangs = fdm_analysis.get('overhangs', {})
if overhangs.get('has_overhangs', False):
text += f"- Overhangs: ⚠️ {overhangs.get('overhang_percentage', 0):.1f}% of surface\n"
else:
text += "- Overhangs: ✓ No problematic overhangs\n"
build_volume = fdm_analysis.get('build_volume', {})
if build_volume.get('fits_overall', True):
text += "- Build volume: ✓ Fits in printer\n"
else:
text += "- Build volume: ✗ Too large for printer\n"
# Printability score
overall_score = printability.get('overall_score', 0)
text += f"\nOVERALL PRINTABILITY SCORE: {overall_score:.0f}/100\n"
# Recommendations
recommendations = printability.get('recommendations', [])
if recommendations:
text += "\nRECOMMENDATIONS:\n"
for i, rec in enumerate(recommendations, 1):
text += f"{i}. {rec}\n"
return text
def run(self, host='localhost', port=None, debug=False):
"""Run the web server"""
port = port or self.config.system.web_port
self.logger.info(f"Starting web server on {host}:{port}")
self.socketio.run(self.app, host=host, port=port, debug=debug)
```
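For quick end-to-end testing without a browser, the SocketIO contract defined above (`chat_message`, `chat_response`, `save_stl`, `save_complete`) can be exercised from a small headless client. The sketch below is illustrative rather than part of the application; it assumes the optional `python-socketio` client package and a server already running on localhost:5000.
```python
# pip install "python-socketio[client]"  (assumed optional test dependency)
import socketio

sio = socketio.Client()

@sio.on('status')
def on_status(data):
    print('status:', data.get('message'))

@sio.on('chat_response')
def on_chat_response(data):
    print('assistant:', data.get('text', '')[:200])
    if data.get('mesh_generated'):
        # Ask the server to persist the generated mesh as an STL file
        sio.emit('save_stl', {'filename': 'demo_model.stl'})

@sio.on('save_complete')
def on_save_complete(data):
    print('save result:', data)
    sio.disconnect()

sio.connect('http://localhost:5000')
sio.emit('chat_message', {'message': 'a hexagonal pencil holder, 80 mm tall'})
sio.wait()
```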
HTML Template for Web Interface
```html
<!-- visualization/templates/viewer.html (Flask's render_template() looks in the templates/ folder by default) -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>3D NLP Printer</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.0.1/socket.io.js"></script>
<link rel="stylesheet" href="{{ url_for('static', filename='viewer.css') }}">
</head>
<body>
<div id="app">
<div id="sidebar">
<h1>3D NLP Printer</h1>
<div id="chat-container">
<div id="chat-messages"></div>
<div id="chat-input-container">
<input type="text" id="chat-input" placeholder="Describe the 3D object you want to create...">
<button id="send-button">Send</button>
</div>
</div>
<div id="controls">
<h3>Controls</h3>
<button id="analyze-button">Analyze Model</button>
<button id="save-button">Save STL</button>
<input type="file" id="file-input" accept=".stl" style="display: none;">
<button id="upload-button">Upload STL</button>
</div>
<div id="status">
<h3>Status</h3>
<div id="status-content">Ready</div>
</div>
</div>
<div id="viewer-container">
<canvas id="viewer-canvas"></canvas>
<div id="viewer-info">
<div id="mesh-info"></div>
</div>
</div>
</div>
<script src="{{ url_for('static', filename='viewer.js') }}"></script>
</body>
</html>
```
JavaScript for 3D Viewer
```javascript
// visualization/static/viewer.js
class NLP3DViewer {
constructor() {
this.socket = io();
this.scene = null;
this.camera = null;
this.renderer = null;
this.controls = null;
this.currentMesh = null;
this.initializeViewer();
this.setupSocketEvents();
this.setupUIEvents();
}
initializeViewer() {
// Initialize Three.js scene
this.scene = new THREE.Scene();
this.scene.background = new THREE.Color(0xf0f0f0);
// Setup camera
const container = document.getElementById('viewer-container');
const width = container.clientWidth;
const height = container.clientHeight;
this.camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
this.camera.position.set(50, 50, 50);
this.camera.lookAt(0, 0, 0);
// Setup renderer
const canvas = document.getElementById('viewer-canvas');
this.renderer = new THREE.WebGLRenderer({ canvas: canvas, antialias: true });
this.renderer.setSize(width, height);
this.renderer.shadowMap.enabled = true;
this.renderer.shadowMap.type = THREE.PCFSoftShadowMap;
// Setup controls
this.controls = new THREE.OrbitControls(this.camera, this.renderer.domElement);
this.controls.enableDamping = true;
this.controls.dampingFactor = 0.05;
// Add lights
this.setupLighting();
// Add grid
this.addGrid();
// Start render loop
this.animate();
// Handle window resize
window.addEventListener('resize', () => this.handleResize());
}
setupLighting() {
// Ambient light
const ambientLight = new THREE.AmbientLight(0x404040, 0.4);
this.scene.add(ambientLight);
// Directional light
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
directionalLight.position.set(50, 50, 50);
directionalLight.castShadow = true;
directionalLight.shadow.mapSize.width = 2048;
directionalLight.shadow.mapSize.height = 2048;
this.scene.add(directionalLight);
// Point light
const pointLight = new THREE.PointLight(0xffffff, 0.3);
pointLight.position.set(-50, 50, -50);
this.scene.add(pointLight);
}
addGrid() {
// Build plate grid
const gridHelper = new THREE.GridHelper(200, 20, 0x888888, 0xcccccc);
this.scene.add(gridHelper);
// Axes helper
const axesHelper = new THREE.AxesHelper(25);
this.scene.add(axesHelper);
}
setupSocketEvents() {
this.socket.on('connect', () => {
this.updateStatus('Connected to server');
});
this.socket.on('disconnect', () => {
this.updateStatus('Disconnected from server');
});
this.socket.on('chat_response', (data) => {
this.addChatMessage('assistant', data.text);
if (data.mesh_generated && data.mesh_data) {
this.loadMeshData(data.mesh_data);
}
if (data.validation) {
this.displayValidationResults(data.validation);
}
});
this.socket.on('mesh_data', (data) => {
if (data) {
this.loadMeshData(data);
}
});
this.socket.on('save_complete', (data) => {
if (data.success) {
this.updateStatus(`Saved: ${data.filename}`);
this.addChatMessage('system', `Model saved as ${data.filename}`);
} else {
this.updateStatus('Save failed');
this.addChatMessage('system', 'Failed to save model');
}
});
}
setupUIEvents() {
// Chat input
const chatInput = document.getElementById('chat-input');
const sendButton = document.getElementById('send-button');
const sendMessage = () => {
const message = chatInput.value.trim();
if (message) {
this.addChatMessage('user', message);
this.socket.emit('chat_message', { message: message });
chatInput.value = '';
}
};
sendButton.addEventListener('click', sendMessage);
chatInput.addEventListener('keypress', (e) => {
if (e.key === 'Enter') {
sendMessage();
}
});
// Control buttons
document.getElementById('analyze-button').addEventListener('click', () => {
this.socket.emit('chat_message', { message: 'analyze' });
});
document.getElementById('save-button').addEventListener('click', () => {
const filename = prompt('Enter filename:', 'model.stl');
if (filename) {
this.socket.emit('save_stl', { filename: filename });
}
});
document.getElementById('upload-button').addEventListener('click', () => {
document.getElementById('file-input').click();
});
document.getElementById('file-input').addEventListener('change', (e) => {
const file = e.target.files[0];
if (file) {
this.uploadFile(file);
}
});
}
addChatMessage(sender, message) {
const chatMessages = document.getElementById('chat-messages');
const messageDiv = document.createElement('div');
messageDiv.className = `chat-message ${sender}`;
const senderSpan = document.createElement('span');
senderSpan.className = 'sender';
senderSpan.textContent = sender.charAt(0).toUpperCase() + sender.slice(1) + ': ';
const messageSpan = document.createElement('span');
messageSpan.className = 'message';
messageSpan.textContent = message;
messageDiv.appendChild(senderSpan);
messageDiv.appendChild(messageSpan);
chatMessages.appendChild(messageDiv);
// Scroll to bottom
chatMessages.scrollTop = chatMessages.scrollHeight;
}
loadMeshData(meshData) {
// Remove existing mesh
if (this.currentMesh) {
this.scene.remove(this.currentMesh);
}
// Create geometry from mesh data
const geometry = new THREE.BufferGeometry();
// Set vertices
const vertices = new Float32Array(meshData.vertices);
geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3));
// Set faces (indices)
const indices = new Uint32Array(meshData.faces);
geometry.setIndex(new THREE.BufferAttribute(indices, 1));
// Set normals if available
if (meshData.normals && meshData.normals.length > 0) {
const normals = new Float32Array(meshData.normals);
geometry.setAttribute('normal', new THREE.BufferAttribute(normals, 3));
} else {
geometry.computeVertexNormals();
}
// Create material
const material = new THREE.MeshLambertMaterial({
color: 0x4CAF50,
side: THREE.DoubleSide
});
// Create mesh
this.currentMesh = new THREE.Mesh(geometry, material);
this.currentMesh.castShadow = true;
this.currentMesh.receiveShadow = true;
// Add to scene
this.scene.add(this.currentMesh);
// Center camera on object
this.centerCameraOnObject();
// Update mesh info
this.updateMeshInfo(meshData);
}
centerCameraOnObject() {
if (!this.currentMesh) return;
// Calculate bounding box
const box = new THREE.Box3().setFromObject(this.currentMesh);
const center = box.getCenter(new THREE.Vector3());
const size = box.getSize(new THREE.Vector3());
// Position camera
const maxDim = Math.max(size.x, size.y, size.z);
const distance = maxDim * 2;
this.camera.position.set(
center.x + distance,
center.y + distance,
center.z + distance
);
this.controls.target.copy(center);
this.controls.update();
}
updateMeshInfo(meshData) {
const meshInfo = document.getElementById('mesh-info');
meshInfo.innerHTML = `
<h4>Mesh Information</h4>
<p>Vertices: ${meshData.vertex_count}</p>
<p>Faces: ${meshData.face_count}</p>
`;
}
displayValidationResults(validation) {
const statusContent = document.getElementById('status-content');
let statusText = '';
if (validation.is_valid) {
statusText += '✓ Valid geometry\n';
} else {
statusText += '✗ Geometry issues found\n';
}
if (validation.fdm_analysis) {
const overhangs = validation.fdm_analysis.overhangs;
if (overhangs && overhangs.has_overhangs) {
statusText += `⚠️ ${overhangs.overhang_percentage}% overhangs\n`;
}
}
statusContent.textContent = statusText;
}
updateStatus(message) {
const statusContent = document.getElementById('status-content');
statusContent.textContent = message;
}
uploadFile(file) {
const formData = new FormData();
formData.append('file', file);
fetch('/api/upload', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(data => {
if (data.success) {
this.updateStatus('File uploaded successfully');
this.socket.emit('request_mesh_data');
} else {
this.updateStatus(`Upload failed: ${data.error}`);
}
})
.catch(error => {
this.updateStatus(`Upload error: ${error}`);
});
}
animate() {
requestAnimationFrame(() => this.animate());
this.controls.update();
this.renderer.render(this.scene, this.camera);
}
handleResize() {
const container = document.getElementById('viewer-container');
const width = container.clientWidth;
const height = container.clientHeight;
this.camera.aspect = width / height;
this.camera.updateProjectionMatrix();
this.renderer.setSize(width, height);
}
}
// Initialize the application when DOM is loaded
document.addEventListener('DOMContentLoaded', () => {
new NLP3DViewer();
});
```
CSS Styles
```css
/* visualization/static/viewer.css */
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
background-color: #f5f5f5;
height: 100vh;
overflow: hidden;
}
#app {
display: flex;
height: 100vh;
}
#sidebar {
width: 400px;
background-color: #2c3e50;
color: white;
display: flex;
flex-direction: column;
padding: 20px;
}
#sidebar h1 {
margin-bottom: 20px;
text-align: center;
color: #ecf0f1;
}
#chat-container {
flex: 1;
display: flex;
flex-direction: column;
margin-bottom: 20px;
}
#chat-messages {
flex: 1;
overflow-y: auto;
background-color: #34495e;
border-radius: 8px;
padding: 15px;
margin-bottom: 10px;
}
.chat-message {
margin-bottom: 15px;
padding: 10px;
border-radius: 6px;
word-wrap: break-word;
}
.chat-message.user {
background-color: #3498db;
margin-left: 20px;
}
.chat-message.assistant {
background-color: #27ae60;
margin-right: 20px;
}
.chat-message.system {
background-color: #f39c12;
text-align: center;
}
.sender {
font-weight: bold;
margin-right: 5px;
}
#chat-input-container {
display: flex;
gap: 10px;
}
#chat-input {
flex: 1;
padding: 10px;
border: none;
border-radius: 4px;
font-size: 14px;
}
#send-button {
padding: 10px 20px;
background-color: #3498db;
color: white;
border: none;
border-radius: 4px;
cursor: pointer;
font-size: 14px;
}
#send-button:hover {
background-color: #2980b9;
}
#controls {
margin-bottom: 20px;
}
#controls h3 {
margin-bottom: 10px;
color: #ecf0f1;
}
#controls button {
width: 100%;
padding: 10px;
margin-bottom: 10px;
background-color: #34495e;
color: white;
border: none;
border-radius: 4px;
cursor: pointer;
font-size: 14px;
}
#controls button:hover {
background-color: #4a6741;
}
#status h3 {
margin-bottom: 10px;
color: #ecf0f1;
}
#status-content {
background-color: #34495e;
padding: 10px;
border-radius: 4px;
font-size: 12px;
white-space: pre-line;
}
#viewer-container {
flex: 1;
position: relative;
background-color: #ecf0f1;
}
#viewer-canvas {
width: 100%;
height: 100%;
display: block;
}
#viewer-info {
position: absolute;
top: 10px;
right: 10px;
background-color: rgba(255, 255, 255, 0.9);
padding: 15px;
border-radius: 8px;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
}
#mesh-info h4 {
margin-bottom: 10px;
color: #2c3e50;
}
#mesh-info p {
margin-bottom: 5px;
font-size: 14px;
color: #34495e;
}
/* Responsive design */
@media (max-width: 768px) {
#app {
flex-direction: column;
}
#sidebar {
width: 100%;
height: 50vh;
}
#viewer-container {
height: 50vh;
}
}
```
Main Application Entry Point
```python
# main.py
import logging
import sys
import os
from config import config
from llm_interface.local_llm import LocalLLMInterface
from llm_interface.remote_llm import RemoteLLMInterface
from llm_interface.vlm_integration import VLMIntegration
from geometry_engine.parametric_generator import ParametricGenerator
from geometry_engine.mesh_processor import MeshProcessor
from geometry_engine.stl_exporter import STLExporter
from mcp_tools.geometry_validator import GeometryValidator
from mcp_tools.printability_checker import PrintabilityChecker
from visualization.web_viewer import WebViewer
from storage.file_manager import FileManager
class NLP3DPrinterSystem:
"""Main system orchestrator"""
def __init__(self):
self.setup_logging()
self.logger = logging.getLogger(__name__)
# Initialize components
self.llm_interface = self._initialize_llm_interface()
self.geometry_engine = self._initialize_geometry_engine()
self.mcp_tools = self._initialize_mcp_tools()
self.file_manager = FileManager(config.system.output_directory)
# Initialize web interface
self.web_viewer = WebViewer(
config=config,
llm_interface=self.llm_interface,
geometry_engine=self.geometry_engine,
mcp_tools=self.mcp_tools
)
def setup_logging(self):
"""Configure logging for the system"""
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('nlp_3d_printer.log'),
logging.StreamHandler(sys.stdout)
]
)
def _initialize_llm_interface(self):
"""Initialize LLM interfaces"""
class LLMInterface:
def __init__(self):
self.local_llm = None
self.remote_llm = None
self.vlm = None
interface = LLMInterface()
# Initialize local LLM if configured
if config.llm.use_local and config.llm.local_model_path:
try:
interface.local_llm = LocalLLMInterface(config.llm.local_model_path)
logging.info("Local LLM initialized successfully")
except Exception as e:
logging.error(f"Failed to initialize local LLM: {e}")
# Initialize remote LLM if API keys available
if config.llm.openai_api_key or config.llm.anthropic_api_key:
interface.remote_llm = RemoteLLMInterface(
openai_key=config.llm.openai_api_key,
anthropic_key=config.llm.anthropic_api_key
)
logging.info("Remote LLM interface initialized")
# Initialize VLM if enabled
if config.system.enable_vlm:
try:
interface.vlm = VLMIntegration()
logging.info("VLM integration initialized")
except Exception as e:
logging.error(f"Failed to initialize VLM: {e}")
return interface
def _initialize_geometry_engine(self):
"""Initialize geometry processing components"""
class GeometryEngine:
def __init__(self):
self.parametric_generator = ParametricGenerator(config.system.temp_directory)
self.mesh_processor = MeshProcessor(
min_feature_size=config.printer.min_wall_thickness,
max_overhang_angle=config.printer.max_overhang_angle
)
self.stl_exporter = STLExporter(config.system.output_directory)
return GeometryEngine()
def _initialize_mcp_tools(self):
"""Initialize MCP tools"""
class MCPTools:
def __init__(self):
self.geometry_validator = GeometryValidator()
self.printability_checker = PrintabilityChecker()
return MCPTools()
def run(self, host='localhost', port=None, debug=False):
"""Run the complete system"""
self.logger.info("Starting NLP 3D Printer System")
# Validate configuration
if not self._validate_configuration():
self.logger.error("Configuration validation failed")
return False
# Start web interface
try:
self.web_viewer.run(host=host, port=port, debug=debug)
except KeyboardInterrupt:
self.logger.info("System shutdown requested")
except Exception as e:
self.logger.error(f"System error: {e}")
return False
return True
def _validate_configuration(self):
"""Validate system configuration"""
# Check if at least one LLM interface is available
has_local = config.llm.use_local and config.llm.local_model_path
has_remote = config.llm.openai_api_key or config.llm.anthropic_api_key
if not (has_local or has_remote):
self.logger.error("No LLM interface configured")
return False
# Check if output directory is writable
try:
test_file = os.path.join(config.system.output_directory, 'test.txt')
with open(test_file, 'w') as f:
f.write('test')
os.remove(test_file)
except Exception as e:
self.logger.error(f"Output directory not writable: {e}")
return False
return True
# storage/file_manager.py
import os
import json
import shutil
from datetime import datetime
from typing import Dict, List, Any, Optional
import logging
class FileManager:
"""Manages file storage and organization"""
def __init__(self, base_directory: str):
self.base_directory = base_directory
self.projects_dir = os.path.join(base_directory, 'projects')
self.exports_dir = os.path.join(base_directory, 'exports')
self.metadata_file = os.path.join(base_directory, 'metadata.json')
self.logger = logging.getLogger(__name__)
# Ensure directories exist
os.makedirs(self.projects_dir, exist_ok=True)
os.makedirs(self.exports_dir, exist_ok=True)
# Load or create metadata
self.metadata = self._load_metadata()
def save_project(self, project_name: str, stl_path: str,
conversation_history: List[Dict],
                     generation_parameters: Dict) -> Optional[str]:
"""Save a complete project with metadata"""
try:
# Create project directory
project_dir = os.path.join(self.projects_dir, project_name)
os.makedirs(project_dir, exist_ok=True)
# Copy STL file
stl_filename = f"{project_name}.stl"
project_stl_path = os.path.join(project_dir, stl_filename)
shutil.copy2(stl_path, project_stl_path)
# Save project metadata
project_metadata = {
'name': project_name,
'created': datetime.now().isoformat(),
'stl_file': stl_filename,
'conversation_history': conversation_history,
'generation_parameters': generation_parameters
}
metadata_path = os.path.join(project_dir, 'project.json')
with open(metadata_path, 'w') as f:
json.dump(project_metadata, f, indent=2)
# Update global metadata
self.metadata['projects'][project_name] = {
'path': project_dir,
'created': project_metadata['created'],
'last_modified': datetime.now().isoformat()
}
self._save_metadata()
self.logger.info(f"Project saved: {project_name}")
return project_dir
except Exception as e:
self.logger.error(f"Error saving project: {e}")
return None
def load_project(self, project_name: str) -> Optional[Dict[str, Any]]:
"""Load a project with all its data"""
try:
if project_name not in self.metadata['projects']:
return None
project_dir = self.metadata['projects'][project_name]['path']
metadata_path = os.path.join(project_dir, 'project.json')
if not os.path.exists(metadata_path):
return None
with open(metadata_path, 'r') as f:
project_data = json.load(f)
# Add full STL path
project_data['stl_path'] = os.path.join(project_dir, project_data['stl_file'])
return project_data
except Exception as e:
self.logger.error(f"Error loading project: {e}")
return None
def list_projects(self) -> List[Dict[str, Any]]:
"""List all saved projects"""
projects = []
for name, info in self.metadata['projects'].items():
projects.append({
'name': name,
'created': info['created'],
'last_modified': info['last_modified'],
'path': info['path']
})
# Sort by last modified
projects.sort(key=lambda x: x['last_modified'], reverse=True)
return projects
def export_stl(self, source_path: str, filename: str) -> Optional[str]:
"""Export STL file to exports directory"""
try:
export_path = os.path.join(self.exports_dir, filename)
shutil.copy2(source_path, export_path)
# Update metadata
export_info = {
'filename': filename,
'exported': datetime.now().isoformat(),
'source': source_path
}
if 'exports' not in self.metadata:
self.metadata['exports'] = []
self.metadata['exports'].append(export_info)
self._save_metadata()
self.logger.info(f"STL exported: {filename}")
return export_path
except Exception as e:
self.logger.error(f"Error exporting STL: {e}")
return None
def _load_metadata(self) -> Dict[str, Any]:
"""Load metadata from file"""
if os.path.exists(self.metadata_file):
try:
with open(self.metadata_file, 'r') as f:
return json.load(f)
except Exception as e:
self.logger.error(f"Error loading metadata: {e}")
# Return default metadata structure
return {
'projects': {},
'exports': [],
'created': datetime.now().isoformat()
}
def _save_metadata(self):
"""Save metadata to file"""
try:
with open(self.metadata_file, 'w') as f:
json.dump(self.metadata, f, indent=2)
except Exception as e:
self.logger.error(f"Error saving metadata: {e}")
# main.py (continued): command-line entry point
if __name__ == "__main__":
    system = NLP3DPrinterSystem()
# Parse command line arguments
import argparse
parser = argparse.ArgumentParser(description='NLP 3D Printer System')
parser.add_argument('--host', default='localhost', help='Host to bind to')
parser.add_argument('--port', type=int, default=5000, help='Port to bind to')
parser.add_argument('--debug', action='store_true', help='Enable debug mode')
args = parser.parse_args()
# Run the system
success = system.run(host=args.host, port=args.port, debug=args.debug)
sys.exit(0 if success else 1)
```
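The FileManager above is what ties generated STL files to the conversation that produced them. A short usage sketch follows; the directory, project name, and parameters are illustrative assumptions, and the STL at `stl_path` is presumed to exist already.
```python
from storage.file_manager import FileManager

manager = FileManager(base_directory='output')  # hypothetical output directory

# Persist a finished model together with the prompts that produced it
project_dir = manager.save_project(
    project_name='pencil_holder_v1',
    stl_path='output/exports/pencil_holder_v1.stl',  # assumed to exist
    conversation_history=[
        {'role': 'user', 'content': 'a hexagonal pencil holder, 80 mm tall'},
    ],
    generation_parameters={'wall_thickness_mm': 1.2, 'nozzle_diameter_mm': 0.4},
)

if project_dir:
    project = manager.load_project('pencil_holder_v1')
    print('STL stored at:', project['stl_path'])
    for entry in manager.list_projects():
        print(entry['name'], entry['last_modified'])
```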
Usage Instructions
To run this complete system, follow the steps below; a short programmatic launch sketch appears after the list:
1. Install dependencies: `pip install -r requirements.txt`
2. Set environment variables for API keys (required only if using remote LLMs)
3. Configure local model path if using local LLM
4. Run: `python main.py --host 0.0.0.0 --port 5000`
5. Open browser to `http://localhost:5000`
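As mentioned above, the system can also be launched programmatically instead of via the CLI, which is convenient for embedding it in tests or a larger service. A minimal sketch, assuming configuration and API keys are already in place:
```python
from main import NLP3DPrinterSystem

system = NLP3DPrinterSystem()
# Equivalent to: python main.py --host 0.0.0.0 --port 5000
system.run(host='0.0.0.0', port=5000, debug=False)
```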
The system provides a complete pipeline from natural language input to 3D printable STL files, with real-time visualization, iterative refinement, comprehensive validation, and file management. The VLM integration allows for concept image generation to guide 3D modeling, while the MCP tools provide sophisticated geometry analysis and printability assessment.