INTRODUCTION
Software architecture has long struggled with a fundamental tension. On one side, we have enterprise systems that demand flexibility, scalability, and rapid evolution to meet changing business needs. On the other side, embedded systems require direct hardware access, real-time performance, and resource efficiency. Traditional architectural patterns force us to choose between these worlds or maintain separate architectural approaches for different system types.
This article introduces Capability-Centric Architecture (CCA), a novel architectural pattern that resolves this tension. CCA extends and synthesizes concepts from Domain-Driven Design, Hexagonal Architecture, and Clean Architecture while introducing new mechanisms that make it equally applicable to a microcontroller reading sensor data and a cloud-native enterprise platform processing billions of transactions.
The pattern emerged from analyzing why existing architectures fail when systems need to evolve, integrate new technologies like AI and containerization, or span the embedded-to-enterprise spectrum. Rather than treating these as separate problems, CCA provides a unified conceptual framework with built-in mechanisms for managing complexity, dependencies, and change.
THE FUNDAMENTAL PROBLEMS WITH EXISTING ARCHITECTURAL APPROACHES
Before presenting the new pattern, we must understand why existing approaches fall short. Consider a typical layered architecture applied to an industrial control system. The presentation layer displays sensor readings, the business logic layer processes control algorithms, the data access layer manages persistence, and somewhere we need hardware access for reading sensors and controlling actuators.
The immediate problem becomes apparent. Where does the hardware access layer fit? If we place it below the data access layer, we create an awkward dependency structure. If we make it a separate concern, we violate the layered principle. More critically, the rigid layering makes it nearly impossible to optimize critical paths. When a sensor interrupt occurs, the signal must traverse multiple layers before reaching the control algorithm, introducing unacceptable latency.
Hexagonal Architecture attempts to solve this through ports and adapters. The core domain logic sits in the center, and adapters connect to external systems through defined ports. This works well for enterprise systems where database adapters and API adapters make sense. However, for embedded systems, treating a hardware timer as just another adapter obscures the fundamental difference between a replaceable external service and a hardware component that defines the system's real-time capabilities.
Consider this typical hexagonal approach to embedded systems:
// Port definition
public interface SensorPort {
    SensorReading read();
}

// Domain logic
public class TemperatureController {
    private final SensorPort sensor;

    public TemperatureController(SensorPort sensor) {
        this.sensor = sensor;
    }

    public void regulate() {
        SensorReading reading = sensor.read();
        // Control logic here
    }
}

// Hardware adapter
public class HardwareSensorAdapter implements SensorPort {
    private static final int SENSOR_REGISTER = 0x40001000;

    public SensorReading read() {
        // Direct memory access
        int rawValue = readRegister(SENSOR_REGISTER);
        return new SensorReading(convertToTemperature(rawValue));
    }

    private native int readRegister(int address);
}
This looks clean, but it hides critical problems. The abstraction prevents the controller from accessing sensor metadata available in adjacent hardware registers. It forces all sensor access through a method call, preventing the use of DMA or interrupt-driven reading. It makes testing harder because we cannot easily inject timing behavior. Most critically, it treats hardware as just another replaceable component when hardware capabilities fundamentally shape what the system can do.
Clean Architecture faces similar issues. Its concentric circles with dependencies pointing inward work beautifully for business applications. The entities layer contains business rules, the use cases layer contains application-specific rules, and outer layers handle UI and infrastructure. But embedded systems do not fit this model. Hardware is not infrastructure that can be abstracted away. It is the foundation upon which capabilities are built.
Enterprise systems face different but equally challenging problems. As systems grow, bounded contexts proliferate, and dependencies between them become tangled. Teams attempt to enforce layering or hexagonal boundaries, but practical pressures create backdoors and shortcuts. A customer service needs data from the inventory service, which needs pricing from the catalog service, which needs customer segments from the customer service. The circular dependency is obvious, but the business need is real.
Modern technologies exacerbate these problems. AI models are not simple components that fit into a layer or adapter. They have their own infrastructure needs, training pipelines, versioning requirements, and inference characteristics. Big Data processing does not align with traditional request-response patterns. Infrastructure as Code blurs the line between application architecture and deployment architecture. Kubernetes and containerization change how we think about deployment units and scaling boundaries.
CORE CONCEPTS OF CAPABILITY-CENTRIC ARCHITECTURE
Capability-Centric Architecture introduces several interconnected concepts that work together to address these challenges. At the foundation is the recognition that systems, whether embedded or enterprise, are built from capabilities. A capability is a cohesive set of functionality that delivers value, either to users or to other capabilities.
This sounds similar to bounded contexts from Domain-Driven Design, and indeed there is significant overlap. However, capabilities extend the concept in important ways. A bounded context focuses on domain modeling and linguistic boundaries. A capability encompasses the domain model but also includes the technical mechanisms needed to deliver that capability, the quality attributes it must satisfy, and the evolution strategy for that capability.
Each capability is structured as a Capability Nucleus. The nucleus contains three concentric regions, but unlike Clean Architecture's circles, these regions have different permeability rules and serve different purposes.
The innermost region is the Essence. This contains the pure domain logic or algorithmic core that defines what the capability does. For a temperature control capability, the essence contains the control algorithm. For a payment processing capability, the essence contains the business rules for validating and executing payments. The essence has no dependencies on anything outside itself except for capability contracts, which we will discuss shortly.
The middle region is the Realization. This contains the mechanisms needed to make the essence work in the real world. For embedded systems, this includes hardware access, interrupt handlers, and DMA controllers. For enterprise systems, this includes database access, message queue integration, and API implementations. The realization depends on the essence and on external technical infrastructure.
The outer region is the Adaptation. This provides the interfaces through which other capabilities interact with this capability and through which this capability interacts with external systems. Unlike traditional adapters, adaptations are bidirectional and can have different thicknesses depending on the capability's needs.
Here is a simple example showing the structure:
// ESSENCE - Pure domain logic
public class TemperatureControlEssence {
    private final ControlParameters parameters;

    public TemperatureControlEssence(ControlParameters parameters) {
        this.parameters = parameters;
    }

    /**
     * Calculates the PID control output based on current temperature.
     * Depends only on the supplied parameters; it touches no hardware
     * and no external infrastructure, so it can be tested in isolation.
     *
     * @param currentTemp The current temperature reading
     * @param targetTemp The desired temperature
     * @return Control output value between 0.0 and 1.0
     */
    public double calculateControl(double currentTemp, double targetTemp) {
        double error = targetTemp - currentTemp;
        double proportional = parameters.getKp() * error;
        double integral = parameters.getKi() * parameters.getAccumulatedError();
        double derivative = parameters.getKd() * (error - parameters.getLastError());
        // Update the controller state held in the parameters object so the
        // integral and derivative terms are correct on the next invocation
        // (accumulateError/setLastError are assumed mutators on ControlParameters)
        parameters.accumulateError(error);
        parameters.setLastError(error);
        double output = proportional + integral + derivative;
        return clamp(output, 0.0, 1.0);
    }

    private double clamp(double value, double min, double max) {
        return Math.max(min, Math.min(max, value));
    }
}
// REALIZATION - Hardware integration for embedded system
public class TemperatureControlRealization {
    // Illustrative shared-memory address for the user-set target temperature
    private static final int TARGET_TEMP_ADDRESS = 0x20000100;

    private final TemperatureControlEssence essence;
    private final HardwareTimer timer;
    private final ADCController adc;
    private final PWMController pwm;

    public TemperatureControlRealization(
            TemperatureControlEssence essence,
            HardwareTimer timer,
            ADCController adc,
            PWMController pwm
    ) {
        this.essence = essence;
        this.timer = timer;
        this.adc = adc;
        this.pwm = pwm;
    }

    /**
     * Initializes hardware resources and sets up interrupt-driven control loop.
     * This realization uses direct hardware access for real-time performance.
     */
    public void initialize() {
        // Configure ADC for temperature sensor on channel 0
        adc.configure(0, ADCResolution.BITS_12, ADCSampleTime.CYCLES_15);
        // Configure PWM for heater control on channel 1
        pwm.configure(1, PWMFrequency.KHZ_10, PWMMode.EDGE_ALIGNED);
        // Set up timer interrupt for 100Hz control loop
        timer.setPeriod(10); // 10ms = 100Hz
        timer.setInterruptHandler(this::controlLoopInterrupt);
        timer.start();
    }

    /**
     * Interrupt handler called at 100Hz by hardware timer.
     * Reads sensor, calculates control output, updates actuator.
     * Must complete within 10ms to maintain real-time behavior.
     */
    private void controlLoopInterrupt() {
        // Read current temperature from ADC
        int rawADC = adc.readChannel(0);
        double currentTemp = convertADCToTemperature(rawADC);
        // Get target temperature from shared memory
        double targetTemp = getTargetTemperature();
        // Calculate control output using essence
        double controlOutput = essence.calculateControl(currentTemp, targetTemp);
        // Update PWM duty cycle
        pwm.setDutyCycle(1, controlOutput);
    }

    private double convertADCToTemperature(int adcValue) {
        // Convert 12-bit ADC value to temperature
        // Assumes linear sensor: 0°C = 0, 100°C = 4095
        return (adcValue / 4095.0) * 100.0;
    }

    private double getTargetTemperature() {
        // Read from shared memory location set by user interface
        return SharedMemory.readDouble(TARGET_TEMP_ADDRESS);
    }
}
// ADAPTATION - Interface for other capabilities
public class TemperatureControlAdaptation {
    private final TemperatureControlRealization realization;

    public TemperatureControlAdaptation(TemperatureControlRealization realization) {
        this.realization = realization;
    }

    /**
     * Provides a high-level interface for other capabilities to interact
     * with temperature control without knowing implementation details.
     */
    public void start() {
        realization.initialize();
    }

    public void stop() {
        // Cleanup and shutdown
    }

    public TemperatureStatus getStatus() {
        // Return current status information
        return new TemperatureStatus(/* ... */);
    }
}
Notice how the essence contains pure logic that could be tested without any hardware. The realization handles all hardware interaction, including direct register access and interrupt handling. The adaptation provides a clean interface for other capabilities. This structure allows us to optimize the realization for embedded constraints while keeping the essence portable and testable.
For enterprise systems, the same structure applies but with different realization mechanisms:
// ESSENCE - Same pure domain logic
public class PaymentProcessingEssence {
    /**
     * Validates a payment request according to business rules.
     * Pure function with no side effects.
     *
     * @param request The payment request to validate
     * @return Validation result with any errors
     */
    public ValidationResult validatePayment(PaymentRequest request) {
        ValidationResult result = new ValidationResult();
        if (request.getAmount() <= 0) {
            result.addError("Amount must be positive");
        }
        if (request.getAmount() > 10000 && !request.hasManagerApproval()) {
            result.addError("Payments over 10000 require manager approval");
        }
        if (!isValidAccountNumber(request.getAccountNumber())) {
            result.addError("Invalid account number format");
        }
        return result;
    }

    /**
     * Calculates the processing fee for a payment.
     * Business logic extracted from infrastructure concerns.
     *
     * @param amount The payment amount
     * @param paymentMethod The method of payment
     * @return The calculated fee
     */
    public double calculateFee(double amount, PaymentMethod paymentMethod) {
        double baseRate = paymentMethod.getBaseRate();
        double percentageFee = amount * baseRate;
        double minimumFee = paymentMethod.getMinimumFee();
        return Math.max(percentageFee, minimumFee);
    }

    private boolean isValidAccountNumber(String accountNumber) {
        // Simplified format check; a production system would add a
        // checksum validation such as the Luhn algorithm
        return accountNumber.matches("\\d{10,16}");
    }
}
// REALIZATION - Enterprise infrastructure integration
public class PaymentProcessingRealization {
    private final PaymentProcessingEssence essence;
    private final DatabaseConnection database;
    private final MessageQueue eventQueue;
    private final PaymentGateway gateway;
    private final AuditLogger auditLogger;

    public PaymentProcessingRealization(
            PaymentProcessingEssence essence,
            DatabaseConnection database,
            MessageQueue eventQueue,
            PaymentGateway gateway,
            AuditLogger auditLogger
    ) {
        this.essence = essence;
        this.database = database;
        this.eventQueue = eventQueue;
        this.gateway = gateway;
        this.auditLogger = auditLogger;
    }

    /**
     * Processes a payment request with full infrastructure integration.
     * Coordinates database transactions, external gateway calls, and event publishing.
     *
     * @param request The payment request to process
     * @return The result of the payment processing
     */
    public PaymentResult processPayment(PaymentRequest request) {
        // Validate using essence
        ValidationResult validation = essence.validatePayment(request);
        if (!validation.isValid()) {
            auditLogger.logValidationFailure(request, validation);
            return PaymentResult.validationFailed(validation);
        }
        // Calculate fee using essence
        double fee = essence.calculateFee(request.getAmount(), request.getPaymentMethod());
        // Begin database transaction
        Transaction transaction = database.beginTransaction();
        try {
            // Record payment attempt
            PaymentRecord record = new PaymentRecord(request, fee);
            database.insert(record);
            // Call external payment gateway
            GatewayResponse gatewayResponse = gateway.processPayment(
                request.getAccountNumber(),
                request.getAmount() + fee
            );
            if (gatewayResponse.isSuccessful()) {
                // Update payment record
                record.setStatus(PaymentStatus.COMPLETED);
                record.setGatewayTransactionId(gatewayResponse.getTransactionId());
                database.update(record);
                // Commit transaction
                transaction.commit();
                // Publish success event asynchronously
                eventQueue.publish(new PaymentCompletedEvent(record));
                auditLogger.logPaymentSuccess(record);
                return PaymentResult.success(record);
            } else {
                // Update payment record with failure
                record.setStatus(PaymentStatus.FAILED);
                record.setFailureReason(gatewayResponse.getErrorMessage());
                database.update(record);
                transaction.commit();
                auditLogger.logPaymentFailure(record, gatewayResponse);
                return PaymentResult.failed(gatewayResponse.getErrorMessage());
            }
        } catch (Exception e) {
            transaction.rollback();
            auditLogger.logException(request, e);
            throw new PaymentProcessingException("Payment processing failed", e);
        }
    }
}
// ADAPTATION - REST API and message-based interfaces
public class PaymentProcessingAdaptation {
    private final PaymentProcessingRealization realization;

    public PaymentProcessingAdaptation(PaymentProcessingRealization realization) {
        this.realization = realization;
    }

    /**
     * REST API endpoint for synchronous payment processing.
     * Adapts HTTP request/response to capability interface.
     */
    public HttpResponse handlePaymentRequest(HttpRequest httpRequest) {
        try {
            PaymentRequest request = parsePaymentRequest(httpRequest);
            PaymentResult result = realization.processPayment(request);
            if (result.isSuccessful()) {
                return HttpResponse.ok(serializePaymentResult(result));
            } else {
                return HttpResponse.badRequest(result.getErrorMessage());
            }
        } catch (PaymentProcessingException e) {
            return HttpResponse.internalServerError(e.getMessage());
        }
    }

    /**
     * Message queue consumer for asynchronous payment processing.
     * Adapts message format to capability interface.
     */
    public void handlePaymentMessage(Message message) {
        PaymentRequest request = deserializePaymentRequest(message.getBody());
        PaymentResult result = realization.processPayment(request);
        // Send result to reply queue if specified
        if (message.hasReplyTo()) {
            Message reply = new Message(serializePaymentResult(result));
            message.getReplyTo().send(reply);
        }
    }

    private PaymentRequest parsePaymentRequest(HttpRequest request) {
        // Parse JSON or XML from HTTP request body
        return new PaymentRequest(/* ... */);
    }

    private PaymentRequest deserializePaymentRequest(byte[] messageBody) {
        // Deserialize from message format
        return new PaymentRequest(/* ... */);
    }

    private String serializePaymentResult(PaymentResult result) {
        // Serialize to JSON or XML
        return "{ ... }";
    }
}
The key insight is that both the embedded temperature control and the enterprise payment processing follow the same structural pattern. The essence contains pure domain logic. The realization integrates with infrastructure, whether that is hardware registers or databases and message queues. The adaptation provides interfaces for external interaction.
This brings us to the second core concept: Capability Contracts. Capabilities interact with each other through contracts rather than direct dependencies. A contract defines what a capability provides and what it requires, but not how those provisions and requirements are fulfilled.
A capability contract consists of three parts. The provision declares what the capability offers to others. The requirement declares what the capability needs from others. The protocol defines the interaction patterns supported.
Here is an example contract:
/**
 * Contract for temperature monitoring capability.
 * Defines what this capability provides and requires.
 */
public interface TemperatureMonitoringContract {
    /**
     * PROVISION: This capability provides temperature readings.
     * Other capabilities can subscribe to receive updates.
     */
    public interface Provision {
        /**
         * Subscribe to temperature updates.
         *
         * @param subscriber The subscriber to receive updates
         * @param updateRate The desired update rate in Hz
         */
        void subscribeToTemperature(TemperatureSubscriber subscriber, int updateRate);

        /**
         * Get the current temperature reading.
         *
         * @return Current temperature in Celsius
         */
        double getCurrentTemperature();

        /**
         * Get historical temperature data.
         *
         * @param startTime Start of time range
         * @param endTime End of time range
         * @return Temperature readings in the specified range
         */
        TemperatureHistory getHistory(Timestamp startTime, Timestamp endTime);
    }

    /**
     * REQUIREMENT: This capability requires calibration data.
     * Must be provided by a calibration management capability.
     */
    public interface Requirement {
        /**
         * Get calibration parameters for the temperature sensor.
         *
         * @param sensorId The identifier of the sensor
         * @return Calibration parameters
         */
        CalibrationParameters getCalibration(String sensorId);

        /**
         * Notify when calibration parameters change.
         *
         * @param listener The listener to notify of changes
         */
        void onCalibrationChange(CalibrationChangeListener listener);
    }

    /**
     * PROTOCOL: Defines interaction patterns and quality attributes.
     */
    public interface Protocol {
        /**
         * Maximum latency for getCurrentTemperature() call.
         */
        int MAX_LATENCY_MS = 10;

        /**
         * Minimum update rate supported for subscriptions.
         */
        int MIN_UPDATE_RATE_HZ = 1;

        /**
         * Maximum update rate supported for subscriptions.
         */
        int MAX_UPDATE_RATE_HZ = 1000;

        /**
         * Interaction pattern: Subscribe-notify is asynchronous.
         * Subscribers receive updates via callback without blocking the provider.
         */
        enum InteractionPattern {
            SYNCHRONOUS_QUERY,      // getCurrentTemperature()
            ASYNCHRONOUS_SUBSCRIBE, // subscribeToTemperature()
            BATCH_QUERY             // getHistory()
        }
    }
}
Contracts enable capabilities to evolve independently. As long as a capability continues to fulfill its contract, its internal implementation can change without affecting other capabilities. This is similar to interface-based programming, but contracts are richer. They include quality attributes, interaction patterns, and both provisions and requirements.
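To make the decoupling concrete, here is a minimal, self-contained sketch of a provider and a subscriber interacting purely through a provision interface. The interfaces are trimmed-down stand-ins for the temperature monitoring contract above, and all class names (such as SimulatedTemperatureProvider) are illustrative, not part of the pattern itself.

```java
import java.util.ArrayList;
import java.util.List;

// Trimmed-down stand-ins for the contract's Provision side
interface TemperatureSubscriber {
    void onTemperatureUpdate(double celsius);
}

interface TemperatureProvision {
    void subscribeToTemperature(TemperatureSubscriber subscriber, int updateRateHz);
    double getCurrentTemperature();
}

// A provider can be swapped (hardware-backed, simulated, remote) without
// touching consumers, because consumers see only the provision interface.
class SimulatedTemperatureProvider implements TemperatureProvision {
    private final List<TemperatureSubscriber> subscribers = new ArrayList<>();
    private double current = 21.5;

    public void subscribeToTemperature(TemperatureSubscriber s, int updateRateHz) {
        subscribers.add(s);
    }

    public double getCurrentTemperature() {
        return current;
    }

    // Push a new reading to all subscribers (in a hardware-backed provider
    // this would be driven by a timer or interrupt instead)
    void publish(double celsius) {
        current = celsius;
        for (TemperatureSubscriber s : subscribers) {
            s.onTemperatureUpdate(celsius);
        }
    }
}
```

A consumer written against TemperatureProvision never learns whether readings come from an ADC register or a simulation, which is exactly the independence the contract is meant to guarantee.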
The third core concept is Efficiency Gradients. This addresses the fundamental challenge of supporting both embedded and enterprise systems. In embedded systems, some operations must execute with minimal overhead, direct hardware access, and predictable timing. In enterprise systems, operations can tolerate more abstraction in exchange for flexibility and maintainability.
An efficiency gradient allows different parts of the system to operate at different levels of abstraction and optimization. Critical paths can be implemented with direct hardware access and minimal indirection. Less critical paths can use higher-level abstractions and more flexible implementations.
Consider a data acquisition system that reads multiple sensors. The sensor reading itself must be fast and deterministic. The data processing can be more flexible. The data storage can be even more abstract:
/**
 * Demonstrates efficiency gradients in a data acquisition capability.
 * Different operations execute at different abstraction levels.
 */
public class DataAcquisitionCapability {
    // CRITICAL PATH: Direct hardware access for sensor reading
    // This executes in interrupt context with minimal overhead
    private static final int SENSOR_BASE_ADDRESS = 0x40000000;
    private static final int SENSOR_COUNT = 8;

    // Ring buffer shared between the interrupt handler and the processing
    // task (declarations added for completeness; the size is illustrative)
    private static final int BUFFER_SIZE = 1024;
    private final int[] sensorBuffer = new int[BUFFER_SIZE];
    private volatile int bufferWriteIndex;
    private volatile int bufferReadIndex;

    /**
     * Interrupt handler for sensor data ready signal.
     * Executes at highest efficiency gradient level.
     * Direct memory access, no abstraction, predictable timing.
     */
    public void sensorInterruptHandler() {
        // Read all sensors in a tight loop
        // No allocations, no locks, no abstractions beyond the raw register read
        for (int i = 0; i < SENSOR_COUNT; i++) {
            int address = SENSOR_BASE_ADDRESS + (i * 4);
            int rawValue = readRegisterDirect(address);
            // Store in lock-free ring buffer for processing
            sensorBuffer[bufferWriteIndex] = rawValue;
            bufferWriteIndex = (bufferWriteIndex + 1) % BUFFER_SIZE;
        }
    }

    // MEDIUM PATH: Structured processing with some abstraction
    // This executes in normal task context with moderate overhead
    private final SensorCalibration calibration;
    private final DataValidator validator;

    /**
     * Processes raw sensor data from buffer.
     * Executes at medium efficiency gradient level.
     * Uses objects and methods but avoids expensive operations.
     */
    public void processSensorData() {
        while (bufferReadIndex != bufferWriteIndex) {
            int rawValue = sensorBuffer[bufferReadIndex];
            bufferReadIndex = (bufferReadIndex + 1) % BUFFER_SIZE;
            // Apply calibration using object-oriented design
            double calibratedValue = calibration.apply(rawValue);
            // Validate the reading
            if (validator.isValid(calibratedValue)) {
                // Create structured data object
                SensorReading reading = new SensorReading(
                    System.currentTimeMillis(),
                    calibratedValue
                );
                // Pass to storage at lower efficiency gradient
                storageQueue.enqueue(reading);
            }
        }
    }

    // FLEXIBLE PATH: High-level abstraction for storage
    // This executes asynchronously with full abstraction
    private final DataStorageService storage;
    private final AnalyticsEngine analytics;
    private final ReadingQueue storageQueue; // assumed queue type with enqueue()/dequeue()

    /**
     * Stores processed sensor readings.
     * Executes at lowest efficiency gradient level.
     * Full abstraction, database transactions, network calls allowed.
     */
    public void storeSensorReadings() {
        List<SensorReading> batch = new ArrayList<>();
        // Collect batch of readings
        SensorReading reading;
        while ((reading = storageQueue.dequeue()) != null) {
            batch.add(reading);
            // Process in batches of 100 for efficiency
            if (batch.size() >= 100) {
                processBatch(batch);
                batch.clear();
            }
        }
        // Process remaining readings
        if (!batch.isEmpty()) {
            processBatch(batch);
        }
    }

    private void processBatch(List<SensorReading> batch) {
        // Store in database with transaction
        storage.beginTransaction();
        try {
            for (SensorReading reading : batch) {
                storage.insert(reading);
            }
            storage.commit();
            // Trigger analytics asynchronously
            analytics.processNewData(batch);
        } catch (Exception e) {
            storage.rollback();
            // Handle error
        }
    }

    private native int readRegisterDirect(int address);
}
The efficiency gradient concept allows the same capability to operate efficiently at the hardware level while providing flexible, maintainable interfaces at higher levels. This is crucial for embedded systems that must balance real-time constraints with software engineering best practices.
The fourth core concept is Evolution Envelopes. An evolution envelope defines how a capability can change over time while maintaining compatibility with other capabilities. It specifies what can change, what must remain stable, and how changes are communicated.
Every capability has an evolution envelope that includes version information, deprecation policies, and migration paths. When a capability needs to change its contract, the evolution envelope guides how that change is introduced without breaking dependent capabilities.
/**
 * Evolution envelope for a capability.
 * Manages versioning and compatibility.
 */
public class EvolutionEnvelope {
    private final String capabilityName;
    private final Version currentVersion;
    private final List<Version> supportedVersions;
    private final DeprecationPolicy deprecationPolicy;
    private final MigrationGuide migrationGuide;

    public EvolutionEnvelope(
            String capabilityName,
            Version currentVersion,
            List<Version> supportedVersions,
            DeprecationPolicy deprecationPolicy,
            MigrationGuide migrationGuide
    ) {
        this.capabilityName = capabilityName;
        this.currentVersion = currentVersion;
        this.supportedVersions = supportedVersions;
        this.deprecationPolicy = deprecationPolicy;
        this.migrationGuide = migrationGuide;
    }

    /**
     * Checks if a capability version is compatible with this capability.
     *
     * @param requiredVersion The version required by a dependent capability
     * @return true if compatible, false otherwise
     */
    public boolean isCompatible(Version requiredVersion) {
        // Semantic versioning: major.minor.patch
        // Same major version = compatible
        // Higher minor version = backward compatible
        // Patch version = always compatible
        if (requiredVersion.getMajor() != currentVersion.getMajor()) {
            // Different major version - check if we support it
            return supportedVersions.contains(requiredVersion);
        }
        // Same major version - check minor version
        return currentVersion.getMinor() >= requiredVersion.getMinor();
    }

    /**
     * Gets the migration path from an old version to the current version.
     *
     * @param fromVersion The version to migrate from
     * @return Migration steps to reach current version
     */
    public List<MigrationStep> getMigrationPath(Version fromVersion) {
        return migrationGuide.getPath(fromVersion, currentVersion);
    }

    /**
     * Checks if a feature is deprecated.
     *
     * @param featureName The name of the feature
     * @return Deprecation information if deprecated, null otherwise
     */
    public DeprecationInfo getDeprecationInfo(String featureName) {
        return deprecationPolicy.getDeprecationInfo(featureName);
    }
}
Evolution envelopes make architectural evolution explicit and manageable. Instead of hoping that changes do not break anything, we have a formal mechanism for introducing changes, maintaining backward compatibility, and eventually retiring old versions.
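The compatibility rule encoded in isCompatible above can be shown with a minimal, runnable sketch. The Version record here is an assumed stand-in for the Version type used by EvolutionEnvelope, and CompatibilityCheck simply restates the same-major, greater-or-equal-minor rule.

```java
// Minimal stand-in for the Version type used by EvolutionEnvelope
record Version(int major, int minor, int patch) {}

class CompatibilityCheck {
    // Mirrors EvolutionEnvelope.isCompatible for the common case:
    // same major version is compatible when the provider's minor version
    // is at least the consumer's required minor version; patch versions
    // never affect compatibility.
    static boolean isCompatible(Version provider, Version required) {
        if (provider.major() != required.major()) {
            // Different major version: incompatible unless the provider
            // explicitly lists the old version as supported (omitted here)
            return false;
        }
        return provider.minor() >= required.minor();
    }
}
```

So a provider at 2.3.0 satisfies a consumer that requires 2.1.x, while a provider at 2.0.0 does not satisfy a consumer that requires 2.1.0, and a jump to 3.0.0 breaks all 2.x consumers unless the old major version remains in the supported list.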
DETAILED ARCHITECTURE DESCRIPTION
Now that we have introduced the core concepts, let us examine how they fit together into a complete architectural pattern. A system built with Capability-Centric Architecture consists of multiple capabilities, each structured as a capability nucleus with an evolution envelope. Capabilities interact through contracts, use efficiency gradients to balance performance and abstraction, and evolve according to their evolution envelopes.
The overall system structure looks like this conceptually:
System
|
+-- Capability A (Nucleus + Envelope)
|   |
|   +-- Essence (pure logic)
|   +-- Realization (infrastructure integration)
|   +-- Adaptation (external interfaces)
|   +-- Contract (provisions, requirements, protocols)
|   +-- Evolution Envelope (versioning, migration)
|
+-- Capability B (Nucleus + Envelope)
|   |
|   +-- Essence
|   +-- Realization
|   +-- Adaptation
|   +-- Contract
|   +-- Evolution Envelope
|
+-- Capability C (Nucleus + Envelope)
    |
    +-- Essence
    +-- Realization
    +-- Adaptation
    +-- Contract
    +-- Evolution Envelope
Capabilities are connected through their contracts. When Capability A requires something that Capability B provides, we establish a contract binding. The binding is managed by a Capability Registry that tracks all capabilities, their contracts, and their bindings.
/**
* Registry that manages capabilities and their interactions.
* Central coordination point for the architecture.
*/
public class CapabilityRegistry {
private final Map<String, CapabilityDescriptor> capabilities;
private final Map<String, List<ContractBinding>> bindings;
private final DependencyResolver resolver;
public CapabilityRegistry() {
this.capabilities = new ConcurrentHashMap<>();
this.bindings = new ConcurrentHashMap<>();
this.resolver = new DependencyResolver();
}
/**
* Registers a capability with the system.
*
* @param descriptor Description of the capability including its contract
*/
public void registerCapability(CapabilityDescriptor descriptor) {
// Validate the capability descriptor
validateDescriptor(descriptor);
// Check for contract compatibility with existing capabilities
checkContractCompatibility(descriptor);
// Register the capability
capabilities.put(descriptor.getName(), descriptor);
// Resolve any pending bindings
resolvePendingBindings(descriptor);
}
/**
* Binds a capability's requirement to another capability's provision.
*
* @param consumer The capability that requires something
* @param provider The capability that provides it
* @param contractType The type of contract being bound
*/
public void bindCapabilities(
String consumer,
String provider,
Class<?> contractType
) {
CapabilityDescriptor consumerDesc = capabilities.get(consumer);
CapabilityDescriptor providerDesc = capabilities.get(provider);
if (consumerDesc == null || providerDesc == null) {
throw new IllegalArgumentException("Both capabilities must be registered");
}
// Verify that provider actually provides this contract
if (!providerDesc.provides(contractType)) {
throw new IllegalArgumentException(
provider + " does not provide " + contractType.getName()
);
}
// Verify that consumer actually requires this contract
if (!consumerDesc.requires(contractType)) {
throw new IllegalArgumentException(
consumer + " does not require " + contractType.getName()
);
}
// Check for circular dependencies
if (resolver.wouldCreateCycle(consumer, provider)) {
throw new CircularDependencyException(
"Binding would create circular dependency: " +
resolver.describeCycle(consumer, provider)
);
}
// Create the binding
ContractBinding binding = new ContractBinding(
consumerDesc,
providerDesc,
contractType
);
bindings.computeIfAbsent(consumer, k -> new ArrayList<>()).add(binding);
// Update dependency graph
resolver.addDependency(consumer, provider);
}
/**
* Gets the initialization order for capabilities.
* Uses topological sort to ensure dependencies are initialized first.
*
* @return List of capability names in initialization order
*/
public List<String> getInitializationOrder() {
return resolver.topologicalSort();
}
private void validateDescriptor(CapabilityDescriptor descriptor) {
if (descriptor.getName() == null || descriptor.getName().isEmpty()) {
throw new IllegalArgumentException("Capability must have a name");
}
if (descriptor.getContract() == null) {
throw new IllegalArgumentException("Capability must have a contract");
}
if (descriptor.getEvolutionEnvelope() == null) {
throw new IllegalArgumentException("Capability must have an evolution envelope");
}
}
private void checkContractCompatibility(CapabilityDescriptor descriptor) {
// Check if any existing capability provides the same contract
for (CapabilityDescriptor existing : capabilities.values()) {
if (existing.provides(descriptor.getContract().getClass())) {
// Multiple providers for same contract - ensure they are compatible
if (!areContractsCompatible(existing.getContract(), descriptor.getContract())) {
throw new ContractConflictException(
"Incompatible contracts: " + existing.getName() +
" and " + descriptor.getName()
);
}
}
}
}
private boolean areContractsCompatible(Object contract1, Object contract2) {
// Simplified: treat contracts as compatible when they share the same class.
// A full implementation would also compare their evolution envelopes.
return contract1.getClass().equals(contract2.getClass());
}
private void resolvePendingBindings(CapabilityDescriptor descriptor) {
// Check if any capabilities were waiting for this capability
// and bind them now that it is available
for (CapabilityDescriptor waiting : capabilities.values()) {
for (Class<?> required : waiting.getRequiredContracts()) {
if (descriptor.provides(required)) {
bindCapabilities(waiting.getName(), descriptor.getName(), required);
}
}
}
}
}
The capability registry prevents circular dependencies by checking the dependency graph before creating bindings. This is one of the key mechanisms for avoiding architectural antipatterns. If a binding would create a cycle, the registry rejects it and forces the architect to restructure the capabilities.
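The cycle check itself reduces to a reachability query on the dependency graph: binding a consumer to a provider would create a cycle exactly when the consumer is already reachable from the provider. A minimal, self-contained sketch of such a check (the class and method names here are illustrative stand-ins, not the article's actual resolver):

```java
import java.util.*;

/** Minimal dependency graph with the cycle check the registry runs before binding. */
class DependencyGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    void addDependency(String consumer, String provider) {
        edges.computeIfAbsent(consumer, k -> new HashSet<>()).add(provider);
    }

    /**
     * Binding consumer -> provider creates a cycle exactly when the consumer
     * is already reachable from the provider through existing dependencies.
     */
    boolean wouldCreateCycle(String consumer, String provider) {
        Deque<String> toVisit = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        toVisit.push(provider);
        while (!toVisit.isEmpty()) {
            String node = toVisit.pop();
            if (node.equals(consumer)) {
                return true; // provider already depends (transitively) on consumer
            }
            if (visited.add(node)) {
                toVisit.addAll(edges.getOrDefault(node, Set.of()));
            }
        }
        return false;
    }
}
```

In the registry, this is the check that runs inside bindCapabilities before the ContractBinding is created.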
Let us examine how this prevents a common antipattern. Suppose we have three capabilities: Customer Management, Order Processing, and Inventory Management. Order Processing needs customer information, so it requires the Customer Management contract. Order Processing also needs to check inventory, so it requires the Inventory Management contract. So far, this is fine.
Now suppose Inventory Management wants to track which customers order which products most frequently for demand forecasting. A naive implementation might have Inventory Management require the Customer Management contract. This creates a potential cycle if Customer Management later needs something from Inventory Management.
The capability registry would reject this binding. Instead, we must restructure. One solution is to introduce a new capability, Customer Analytics, that depends on both Customer Management and Inventory Management. Customer Analytics provides demand forecasting without creating a cycle:
Customer Management --> Customer Analytics <-- Inventory Management
                                |
                                v
                        Demand Forecasting
This forced restructuring leads to better architecture. Customer Analytics is a cohesive capability with a clear purpose. It can evolve independently and be reused by other capabilities that need customer analytics.
Another key mechanism is the Capability Lifecycle Manager. This component manages the lifecycle of capabilities from initialization through operation to shutdown. It uses the initialization order from the capability registry to start capabilities in the correct sequence.
/**
* Manages the lifecycle of all capabilities in the system.
*/
public class CapabilityLifecycleManager {
private final CapabilityRegistry registry;
private final Map<String, CapabilityInstance> instances;
private final ExecutorService executor;
public CapabilityLifecycleManager(CapabilityRegistry registry) {
this.registry = registry;
this.instances = new ConcurrentHashMap<>();
this.executor = Executors.newCachedThreadPool();
}
/**
* Initializes all capabilities in dependency order.
* Ensures that required capabilities are initialized before dependent capabilities.
*/
public void initializeAll() {
List<String> initOrder = registry.getInitializationOrder();
for (String capabilityName : initOrder) {
initializeCapability(capabilityName);
}
}
/**
* Initializes a single capability.
*
* @param capabilityName The name of the capability to initialize
*/
public void initializeCapability(String capabilityName) {
CapabilityDescriptor descriptor = registry.getCapability(capabilityName);
// Create instance
CapabilityInstance instance = createInstance(descriptor);
// Inject dependencies
injectDependencies(instance, capabilityName);
// Initialize
instance.initialize();
// Store instance
instances.put(capabilityName, instance);
// Start if auto-start is enabled
if (descriptor.isAutoStart()) {
instance.start();
}
}
/**
* Shuts down all capabilities in reverse dependency order.
* Ensures that dependent capabilities are shut down before required capabilities.
*/
public void shutdownAll() {
List<String> initOrder = registry.getInitializationOrder();
// Shutdown in reverse order
for (int i = initOrder.size() - 1; i >= 0; i--) {
String capabilityName = initOrder.get(i);
shutdownCapability(capabilityName);
}
executor.shutdown();
}
/**
* Shuts down a single capability.
*
* @param capabilityName The name of the capability to shut down
*/
public void shutdownCapability(String capabilityName) {
CapabilityInstance instance = instances.get(capabilityName);
if (instance != null) {
instance.stop();
instance.cleanup();
instances.remove(capabilityName);
}
}
private CapabilityInstance createInstance(CapabilityDescriptor descriptor) {
// Use reflection or factory to create instance
try {
Class<?> capabilityClass = Class.forName(descriptor.getImplementationClass());
return (CapabilityInstance) capabilityClass.getDeclaredConstructor().newInstance();
} catch (Exception e) {
throw new CapabilityInstantiationException(
"Failed to create instance of " + descriptor.getName(),
e
);
}
}
private void injectDependencies(CapabilityInstance instance, String capabilityName) {
List<ContractBinding> capabilityBindings = registry.getBindings(capabilityName);
for (ContractBinding binding : capabilityBindings) {
// Get the provider instance
CapabilityInstance provider = instances.get(binding.getProviderName());
if (provider == null) {
throw new DependencyNotAvailableException(
capabilityName + " requires " + binding.getProviderName() +
" but it is not initialized"
);
}
// Get the contract implementation from provider
Object contractImpl = provider.getContractImplementation(binding.getContractType());
// Inject into consumer
instance.injectDependency(binding.getContractType(), contractImpl);
}
}
}
The lifecycle manager ensures that capabilities are initialized in the correct order and that dependencies are properly injected. This eliminates a common source of initialization bugs where components try to use dependencies that are not yet ready.
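The initialization order the registry hands to the lifecycle manager is a topological sort of the dependency graph: every provider appears before the capabilities that require it. A self-contained sketch using Kahn's algorithm (names and the map-based representation are illustrative):

```java
import java.util.*;

/** Computes an initialization order where every provider precedes its consumers. */
class InitOrder {
    /** requires maps each capability to the set of capabilities it depends on. */
    static List<String> topologicalSort(Map<String, Set<String>> requires) {
        Map<String, Integer> pending = new HashMap<>();       // unmet dependencies per capability
        Map<String, List<String>> consumers = new HashMap<>(); // provider -> dependents
        for (Map.Entry<String, Set<String>> e : requires.entrySet()) {
            pending.put(e.getKey(), e.getValue().size());
            for (String provider : e.getValue()) {
                pending.putIfAbsent(provider, 0);
                consumers.computeIfAbsent(provider, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : pending.entrySet()) {
            if (e.getValue() == 0) ready.add(e.getKey()); // no dependencies: can start now
        }
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String capability = ready.poll();
            order.add(capability);
            // Initializing this capability unblocks its dependents
            for (String dependent : consumers.getOrDefault(capability, List.of())) {
                if (pending.merge(dependent, -1, Integer::sum) == 0) ready.add(dependent);
            }
        }
        if (order.size() != pending.size()) {
            throw new IllegalStateException("Circular dependency detected");
        }
        return order;
    }
}
```

With the Customer Analytics restructuring from earlier, this yields Customer Management and Inventory Management before Order Processing and Customer Analytics, which is exactly the order initializeAll needs.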
APPLICATION TO EMBEDDED SYSTEMS
Embedded systems present unique challenges that traditional enterprise architectures cannot address. These systems often have hard real-time constraints, limited resources, and direct hardware dependencies. Capability-Centric Architecture handles these challenges through efficiency gradients and careful structuring of the capability nucleus.
Consider a complete embedded system example: an industrial motor controller. This system must read encoder position, execute a control algorithm, and drive motor phases, all within microsecond timing constraints. It must also communicate with a supervisory system, log diagnostic data, and support firmware updates.
We structure this as multiple capabilities with different efficiency gradients:
/**
* Motor Control Capability - Critical real-time capability
* Uses highest efficiency gradient for control loop
*/
public class MotorControlCapability implements CapabilityInstance {
// ESSENCE: Pure control algorithm
private final MotorControlEssence essence;
// REALIZATION: Hardware integration
private static final int ENCODER_REGISTER = 0x40001000;
private static final int PWM_BASE_REGISTER = 0x40002000;
private static final int TIMER_REGISTER = 0x40003000;
private static final int MAX_POSITION = 1_000_000; // illustrative encoder range limit
private volatile int currentPosition;
private volatile int targetPosition;
private volatile boolean controlEnabled;
public MotorControlCapability() {
// Create essence with control parameters
ControlParameters params = new ControlParameters(
1.0, // Proportional gain
0.1, // Integral gain
0.01 // Derivative gain
);
this.essence = new MotorControlEssence(params);
}
@Override
public void initialize() {
// Configure hardware
configureEncoder();
configurePWM();
configureTimer();
}
@Override
public void start() {
controlEnabled = true;
startTimer();
}
@Override
public void stop() {
controlEnabled = false;
stopTimer();
disablePWM();
}
/**
* Timer interrupt handler - executes every 100 microseconds.
* This is the critical real-time path with highest efficiency gradient.
* Must complete within 100 microseconds to maintain control stability.
*/
public void timerInterruptHandler() {
if (!controlEnabled) {
return;
}
// Read encoder position directly from hardware register
// No function call overhead, no abstraction
currentPosition = readRegisterDirect(ENCODER_REGISTER);
// Calculate control output using essence
// This is the only function call in the critical path
int controlOutput = essence.calculateControlOutput(
currentPosition,
targetPosition
);
// Write PWM values directly to hardware registers
// Three phase motor requires three PWM channels
writeRegisterDirect(PWM_BASE_REGISTER + 0, controlOutput);
writeRegisterDirect(PWM_BASE_REGISTER + 4, calculatePhaseB(controlOutput));
writeRegisterDirect(PWM_BASE_REGISTER + 8, calculatePhaseC(controlOutput));
}
/**
* Sets the target position for the motor.
* This is called from lower priority tasks, not from interrupt context.
* Uses medium efficiency gradient.
*/
public void setTargetPosition(int position) {
// Validate position is within safe range
if (position < 0 || position > MAX_POSITION) {
throw new IllegalArgumentException("Position out of range");
}
// Atomic write to volatile variable
// Interrupt handler will see new value on next iteration
targetPosition = position;
}
/**
* Gets current motor status.
* Uses low efficiency gradient - can allocate objects and use abstractions.
*/
public MotorStatus getStatus() {
// Create status object with current state
return new MotorStatus(
currentPosition,
targetPosition,
controlEnabled,
essence.getControlState()
);
}
// Contract implementation
@Override
public Object getContractImplementation(Class<?> contractType) {
if (contractType == MotorControlContract.class) {
return new MotorControlContractImpl(this);
}
return null;
}
@Override
public void injectDependency(Class<?> contractType, Object implementation) {
// This capability has no dependencies
}
// Hardware configuration methods
private void configureEncoder() {
// Configure encoder interface
// Set resolution, filtering, etc.
}
private void configurePWM() {
// Configure PWM generators
// Set frequency, dead time, etc.
}
private void configureTimer() {
// Configure timer for 10kHz interrupt (100 microsecond period)
writeRegisterDirect(TIMER_REGISTER, 10000);
}
private void startTimer() {
// Enable timer interrupt
}
private void stopTimer() {
// Disable timer interrupt
}
private void disablePWM() {
// Turn off all PWM outputs
writeRegisterDirect(PWM_BASE_REGISTER + 0, 0);
writeRegisterDirect(PWM_BASE_REGISTER + 4, 0);
writeRegisterDirect(PWM_BASE_REGISTER + 8, 0);
}
private int calculatePhaseB(int phaseA) {
// Calculate phase B PWM value based on phase A
// For three-phase motor control
return phaseA; // Simplified
}
private int calculatePhaseC(int phaseA) {
// Calculate phase C PWM value based on phase A
return phaseA; // Simplified
}
private native int readRegisterDirect(int address);
private native void writeRegisterDirect(int address, int value);
}
/**
* Diagnostic Logging Capability - Non-critical capability
* Uses low efficiency gradient with full abstraction
*/
public class DiagnosticLoggingCapability implements CapabilityInstance {
private final Queue<DiagnosticEvent> eventQueue;
private final FileSystem fileSystem;
private Thread loggingThread; // assigned in start(), so it cannot be final
private volatile boolean running;
// Requires motor control contract to monitor motor status
private MotorControlContract motorControl;
public DiagnosticLoggingCapability() {
this.eventQueue = new ConcurrentLinkedQueue<>();
this.fileSystem = new FileSystem();
}
@Override
public void initialize() {
// Create logging directory if it does not exist
fileSystem.createDirectory("/logs");
}
@Override
public void start() {
running = true;
// Start background thread for logging
loggingThread = new Thread(this::loggingThreadMain);
loggingThread.setPriority(Thread.MIN_PRIORITY);
loggingThread.start();
// Start periodic status monitoring
startStatusMonitoring();
}
@Override
public void stop() {
running = false;
loggingThread.interrupt();
}
/**
* Logs a diagnostic event.
* Can be called from any context, including interrupt handlers.
* Uses lock-free queue to avoid blocking.
*/
public void logEvent(DiagnosticEvent event) {
eventQueue.offer(event);
}
/**
* Background thread that writes events to file system.
* Runs at low priority and uses full abstraction.
*/
private void loggingThreadMain() {
while (running) {
try {
// Process events in batches for efficiency
List<DiagnosticEvent> batch = new ArrayList<>();
DiagnosticEvent event;
// Check the batch size before polling so a polled event is never dropped
while (batch.size() < 100 && (event = eventQueue.poll()) != null) {
batch.add(event);
}
if (!batch.isEmpty()) {
writeEventBatch(batch);
}
// Sleep if queue is empty
if (eventQueue.isEmpty()) {
Thread.sleep(100);
}
} catch (InterruptedException e) {
break;
} catch (Exception e) {
// Log error but continue running
System.err.println("Logging error: " + e.getMessage());
}
}
}
private void writeEventBatch(List<DiagnosticEvent> batch) {
// Open log file
String filename = "/logs/diagnostic_" + getCurrentDate() + ".log";
try (FileWriter writer = fileSystem.openForAppend(filename)) {
for (DiagnosticEvent event : batch) {
writer.write(formatEvent(event));
writer.write("\n");
}
} catch (IOException e) {
// Report the failure but keep the logging thread alive
System.err.println("Failed to write log batch: " + e.getMessage());
}
}
/**
* Periodically monitors motor status and logs it.
* Uses the motor control contract to get status.
*/
private void startStatusMonitoring() {
Timer timer = new Timer();
timer.scheduleAtFixedRate(new TimerTask() {
@Override
public void run() {
if (motorControl != null) {
MotorStatus status = motorControl.getStatus();
logEvent(new DiagnosticEvent(
DiagnosticEvent.Type.STATUS,
"Motor status: " + status.toString()
));
}
}
}, 0, 1000); // Log status every second
}
private String formatEvent(DiagnosticEvent event) {
return String.format("[%s] %s: %s",
event.getTimestamp(),
event.getType(),
event.getMessage()
);
}
private String getCurrentDate() {
// Current date in ISO-8601 (YYYY-MM-DD) format
return java.time.LocalDate.now().toString();
}
@Override
public Object getContractImplementation(Class<?> contractType) {
if (contractType == DiagnosticLoggingContract.class) {
return new DiagnosticLoggingContractImpl(this);
}
return null;
}
@Override
public void injectDependency(Class<?> contractType, Object implementation) {
if (contractType == MotorControlContract.class) {
this.motorControl = (MotorControlContract) implementation;
}
}
}
Notice how the motor control capability uses direct hardware access in the interrupt handler for maximum efficiency, while the diagnostic logging capability uses high-level abstractions like threads, queues, and file I/O. Both are capabilities in the same system, but they operate at different efficiency gradients appropriate to their requirements.
This demonstrates a key advantage of Capability-Centric Architecture for embedded systems. We can use bare-metal programming where necessary for real-time performance while using higher-level abstractions for non-critical functionality. The architecture does not force us to choose one approach for the entire system.
Another important aspect for embedded systems is resource management. Embedded systems often have limited memory and must carefully manage resource allocation. Capability-Centric Architecture supports this through explicit resource contracts:
/**
* Resource contract for capabilities that need memory allocation.
*/
public interface ResourceContract {
/**
* Declares the memory requirements of this capability.
*
* @return Memory requirements in bytes
*/
MemoryRequirements getMemoryRequirements();
/**
* Declares the CPU requirements of this capability.
*
* @return CPU requirements as percentage of total CPU time
*/
CPURequirements getCPURequirements();
/**
* Allocates resources for this capability.
* Called during initialization.
*
* @param allocator The resource allocator
*/
void allocateResources(ResourceAllocator allocator);
/**
* Releases resources used by this capability.
* Called during shutdown.
*/
void releaseResources();
}
Capabilities declare their resource requirements through this contract, and the system can verify that sufficient resources are available before initializing capabilities. This prevents runtime resource exhaustion and makes resource usage explicit and manageable.
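The admission check can be sketched as summing declared requirements against a fixed device budget before any capability is initialized. The types below are simplified stand-ins for the article's ResourceContract machinery, and the byte counts are invented for illustration:

```java
/** Rejects capabilities whose declared memory would exceed the device budget. */
class ResourceBudget {
    private final long totalMemoryBytes;
    private long reservedBytes = 0;

    ResourceBudget(long totalMemoryBytes) {
        this.totalMemoryBytes = totalMemoryBytes;
    }

    /** Reserves the declared requirement if it fits; returns false otherwise. */
    boolean tryReserve(String capabilityName, long requiredBytes) {
        if (reservedBytes + requiredBytes > totalMemoryBytes) {
            return false; // would exhaust memory -> refuse to initialize this capability
        }
        reservedBytes += requiredBytes;
        return true;
    }

    long availableBytes() {
        return totalMemoryBytes - reservedBytes;
    }
}
```

Running this check at initialization time turns a runtime out-of-memory failure into an explicit, diagnosable configuration error.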
APPLICATION TO ENTERPRISE SYSTEMS
Enterprise systems have different challenges than embedded systems. They must scale to handle varying loads, integrate with numerous external systems, support multiple deployment models, and evolve rapidly to meet changing business needs. Capability-Centric Architecture addresses these challenges through its contract-based interaction model and evolution envelopes.
Consider an e-commerce platform built with Capability-Centric Architecture. The platform consists of multiple capabilities: Product Catalog, Shopping Cart, Order Processing, Payment Processing, Inventory Management, Customer Management, and Shipping Integration. Each capability is independently deployable and scalable.
The Product Catalog capability provides product information to other capabilities:
/**
* Product Catalog Capability for enterprise e-commerce system.
* Demonstrates enterprise-focused capability structure.
*/
public class ProductCatalogCapability implements CapabilityInstance {
// ESSENCE: Pure product catalog logic
private final ProductCatalogEssence essence;
// REALIZATION: Enterprise infrastructure integration
private final DatabaseConnectionPool database;
private final CacheManager cache;
private final SearchEngine searchEngine;
private final MessageBroker messageBroker;
// Dependencies injected through contracts
private PricingContract pricingService;
private InventoryContract inventoryService;
public ProductCatalogCapability(
DatabaseConnectionPool database,
CacheManager cache,
SearchEngine searchEngine,
MessageBroker messageBroker
) {
this.essence = new ProductCatalogEssence();
this.database = database;
this.cache = cache;
this.searchEngine = searchEngine;
this.messageBroker = messageBroker;
}
@Override
public void initialize() {
// Initialize database schema
initializeSchema();
// Warm up cache with popular products
warmUpCache();
// Initialize search index
initializeSearchIndex();
// Subscribe to inventory change events
subscribeToInventoryChanges();
}
@Override
public void start() {
// Start background tasks
startCacheRefreshTask();
startSearchIndexUpdateTask();
}
@Override
public void stop() {
// Stop background tasks
// Close connections
}
/**
* Gets product information by product ID.
* Uses caching for performance.
*
* @param productId The ID of the product
* @return Product information
*/
public Product getProduct(String productId) {
// Try cache first
Product product = cache.get("product:" + productId, Product.class);
if (product != null) {
return enrichProduct(product);
}
// Cache miss - query database
try (Connection conn = database.getConnection()) {
product = queryProduct(conn, productId);
if (product != null) {
// Store in cache
cache.put("product:" + productId, product, 3600); // 1 hour TTL
return enrichProduct(product);
}
} catch (SQLException e) {
throw new ProductCatalogException("Failed to retrieve product", e);
}
return null;
}
/**
* Searches for products matching criteria.
* Uses search engine for performance.
*
* @param criteria Search criteria
* @return List of matching products
*/
public List<Product> searchProducts(SearchCriteria criteria) {
// Use search engine for full-text search
SearchResults results = searchEngine.search(
"products",
buildSearchQuery(criteria)
);
// Convert search results to product objects
List<Product> products = new ArrayList<>();
for (SearchResult result : results.getResults()) {
String productId = result.getId();
Product product = getProduct(productId);
if (product != null) {
products.add(product);
}
}
return products;
}
/**
* Creates a new product.
* Uses database transaction and publishes event.
*
* @param product The product to create
* @return The created product with generated ID
*/
public Product createProduct(Product product) {
// Validate product using essence
ValidationResult validation = essence.validateProduct(product);
if (!validation.isValid()) {
throw new ValidationException(validation.getErrors());
}
try (Connection conn = database.getConnection()) {
conn.setAutoCommit(false);
try {
// Insert product into database
String productId = insertProduct(conn, product);
product.setId(productId);
// Commit transaction
conn.commit();
// Invalidate cache
cache.invalidate("product:" + productId);
// Update search index asynchronously
searchEngine.indexDocument("products", productId, product);
// Publish product created event
publishProductCreatedEvent(product);
return product;
} catch (Exception e) {
conn.rollback();
throw e;
}
} catch (SQLException e) {
throw new ProductCatalogException("Failed to create product", e);
}
}
/**
* Enriches product with data from other capabilities.
* Uses injected contracts to get pricing and inventory.
*/
private Product enrichProduct(Product product) {
// Get current price from pricing service
if (pricingService != null) {
Price price = pricingService.getPrice(product.getId());
product.setCurrentPrice(price);
}
// Get inventory status from inventory service
if (inventoryService != null) {
InventoryStatus status = inventoryService.getStatus(product.getId());
product.setInventoryStatus(status);
}
return product;
}
private void initializeSchema() {
// Create database tables if they do not exist
}
private void warmUpCache() {
// Load popular products into cache
}
private void initializeSearchIndex() {
// Create search index if it does not exist
}
private void subscribeToInventoryChanges() {
// Subscribe to inventory change events to update product availability
messageBroker.subscribe("inventory.changed", this::handleInventoryChange);
}
private void handleInventoryChange(Message message) {
// Update product availability when inventory changes
String productId = message.getProperty("productId");
cache.invalidate("product:" + productId);
}
private void startCacheRefreshTask() {
// Start background task to refresh cache periodically
}
private void startSearchIndexUpdateTask() {
// Start background task to update search index
}
private Product queryProduct(Connection conn, String productId) throws SQLException {
// Query product from database
return null; // Simplified
}
private String insertProduct(Connection conn, Product product) throws SQLException {
// Insert product into database and return generated ID
return "PROD-" + System.currentTimeMillis(); // Simplified
}
private String buildSearchQuery(SearchCriteria criteria) {
// Build search engine query from criteria
return ""; // Simplified
}
private void publishProductCreatedEvent(Product product) {
// Publish event to message broker
Message message = new Message("product.created");
message.setProperty("productId", product.getId());
message.setBody(serializeProduct(product));
messageBroker.publish(message);
}
private byte[] serializeProduct(Product product) {
// Serialize product to JSON or other format
return new byte[0]; // Simplified
}
@Override
public Object getContractImplementation(Class<?> contractType) {
if (contractType == ProductCatalogContract.class) {
return new ProductCatalogContractImpl(this);
}
return null;
}
@Override
public void injectDependency(Class<?> contractType, Object implementation) {
if (contractType == PricingContract.class) {
this.pricingService = (PricingContract) implementation;
} else if (contractType == InventoryContract.class) {
this.inventoryService = (InventoryContract) implementation;
}
}
}
The Product Catalog capability demonstrates several enterprise patterns. It uses caching for performance, a search engine for full-text search, database transactions for consistency, and message-based events for loose coupling. However, all of these infrastructure concerns are in the realization layer. The essence contains pure product catalog logic that can be tested independently.
The capability interacts with Pricing and Inventory capabilities through contracts. This allows each capability to evolve independently. For example, the Pricing capability could change from a simple database lookup to a complex dynamic pricing algorithm involving machine learning, and the Product Catalog capability would not need to change as long as the contract remains stable.
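That stability guarantee is easy to illustrate: as long as the contract interface itself is unchanged, the provider behind it can be replaced freely. A minimal sketch with a hypothetical PricingContract; the flat price and demand multiplier are invented for illustration:

```java
/** The stable contract the Product Catalog depends on. */
interface PricingContract {
    long getPriceCents(String productId);
}

/** Version 1: a simple fixed lookup. */
class StaticPricing implements PricingContract {
    public long getPriceCents(String productId) {
        return 1999; // flat catalog price
    }
}

/** Version 2: demand-adjusted pricing; the contract is untouched. */
class DynamicPricing implements PricingContract {
    private final double demandMultiplier;
    DynamicPricing(double demandMultiplier) { this.demandMultiplier = demandMultiplier; }
    public long getPriceCents(String productId) {
        return Math.round(1999 * demandMultiplier);
    }
}

/** Consumer: compiled once against the contract, works with either provider. */
class CatalogView {
    private final PricingContract pricing;
    CatalogView(PricingContract pricing) { this.pricing = pricing; }
    long displayPrice(String productId) { return pricing.getPriceCents(productId); }
}
```

Swapping StaticPricing for DynamicPricing requires no change to CatalogView, which is the whole point of routing inter-capability interaction through contracts.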
For enterprise systems, deployment flexibility is crucial. Capability-Centric Architecture supports multiple deployment models through deployment descriptors:
/**
* Deployment descriptor for a capability.
* Specifies how the capability should be deployed.
*/
public class DeploymentDescriptor {
private final String capabilityName;
private final DeploymentMode mode;
private final ScalingPolicy scalingPolicy;
private final ResourceLimits resourceLimits;
private final HealthCheck healthCheck;
public enum DeploymentMode {
EMBEDDED, // Deploy in same process as other capabilities
STANDALONE, // Deploy in separate process
CONTAINERIZED, // Deploy in container (Docker, etc.)
SERVERLESS // Deploy as serverless function
}
/**
* Creates a deployment descriptor.
*
* @param capabilityName Name of the capability
* @param mode Deployment mode
* @param scalingPolicy How to scale this capability
* @param resourceLimits Resource limits for this capability
* @param healthCheck Health check configuration
*/
public DeploymentDescriptor(
String capabilityName,
DeploymentMode mode,
ScalingPolicy scalingPolicy,
ResourceLimits resourceLimits,
HealthCheck healthCheck
) {
this.capabilityName = capabilityName;
this.mode = mode;
this.scalingPolicy = scalingPolicy;
this.resourceLimits = resourceLimits;
this.healthCheck = healthCheck;
}
public String getCapabilityName() {
return capabilityName;
}
public DeploymentMode getMode() {
return mode;
}
public ScalingPolicy getScalingPolicy() {
return scalingPolicy;
}
public ResourceLimits getResourceLimits() {
return resourceLimits;
}
public HealthCheck getHealthCheck() {
return healthCheck;
}
}
A capability can be deployed in different modes depending on requirements. During development, all capabilities might run in a single process for easy debugging. In production, high-traffic capabilities might be deployed in containers with auto-scaling, while low-traffic capabilities might run as serverless functions to reduce costs.
The deployment mode is independent of the capability implementation. The same Product Catalog capability code can run embedded in a monolith, in a standalone microservice, in a Docker container, or as a serverless function. The adaptation layer handles the differences in how the capability is accessed.
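Concretely, the same capability can be described twice with different descriptors, one per environment. The sketch below uses simplified record stand-ins for ScalingPolicy, ResourceLimits, and HealthCheck, and the numbers are illustrative:

```java
// Simplified stand-ins for the collaborator types referenced by DeploymentDescriptor.
record ScalingPolicy(int minInstances, int maxInstances) {}
record ResourceLimits(int cpuMillis, int memoryMb) {}
record HealthCheck(String path, int intervalSeconds) {}

enum DeploymentMode { EMBEDDED, STANDALONE, CONTAINERIZED, SERVERLESS }

record DeploymentDescriptor(String capabilityName, DeploymentMode mode,
                            ScalingPolicy scaling, ResourceLimits limits,
                            HealthCheck health) {

    /** Development profile: everything in one process, minimal ceremony. */
    static DeploymentDescriptor devProfile(String name) {
        return new DeploymentDescriptor(name, DeploymentMode.EMBEDDED,
            new ScalingPolicy(1, 1), new ResourceLimits(500, 256),
            new HealthCheck("/health", 30));
    }

    /** Production profile: containerized with auto-scaling for high traffic. */
    static DeploymentDescriptor prodProfile(String name) {
        return new DeploymentDescriptor(name, DeploymentMode.CONTAINERIZED,
            new ScalingPolicy(2, 20), new ResourceLimits(2000, 1024),
            new HealthCheck("/health", 10));
    }
}
```

Only the descriptor changes between environments; the Product Catalog code deployed under either profile is identical.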
SUPPORTING MODERN TECHNOLOGIES
Capability-Centric Architecture is designed to integrate modern technologies such as AI, big data, cloud computing, and containerization. These technologies are not afterthoughts; they are supported through the core architectural mechanisms.
For AI integration, we treat AI models as specialized capabilities with specific contracts. An AI model capability provides predictions or classifications through its contract and requires training data and model management through other contracts:
/**
* AI Model Capability for product recommendations.
* Demonstrates integration of machine learning into the architecture.
*/
public class ProductRecommendationAICapability implements CapabilityInstance {
// ESSENCE: Recommendation logic (model-agnostic)
private final RecommendationEssence essence;
// REALIZATION: ML infrastructure integration
private final ModelRegistry modelRegistry;
private final FeatureStore featureStore;
private final InferenceEngine inferenceEngine;
private final ModelTrainingPipeline trainingPipeline;
// Dependencies
private ProductCatalogContract productCatalog;
private CustomerBehaviorContract customerBehavior;
private volatile MLModel currentModel;
public ProductRecommendationAICapability(
ModelRegistry modelRegistry,
FeatureStore featureStore,
InferenceEngine inferenceEngine,
ModelTrainingPipeline trainingPipeline
) {
this.essence = new RecommendationEssence();
this.modelRegistry = modelRegistry;
this.featureStore = featureStore;
this.inferenceEngine = inferenceEngine;
this.trainingPipeline = trainingPipeline;
}
@Override
public void initialize() {
// Load the current production model
currentModel = modelRegistry.getProductionModel("product-recommendation");
// Initialize inference engine with model
inferenceEngine.loadModel(currentModel);
// Start model monitoring
startModelMonitoring();
}
@Override
public void start() {
// Start background model training if enabled
if (trainingPipeline.isEnabled()) {
startModelTraining();
}
}
@Override
public void stop() {
// Stop training pipeline
// Unload model
}
/**
* Gets product recommendations for a customer.
*
* @param customerId The customer ID
* @param context Additional context for recommendations
* @return List of recommended products
*/
public List<ProductRecommendation> getRecommendations(
String customerId,
RecommendationContext context
) {
// Extract features for the customer
Features features = extractFeatures(customerId, context);
// Run inference using current model
InferenceResult result = inferenceEngine.predict(features);
// Convert model output to product recommendations
List<ProductRecommendation> recommendations =
convertToRecommendations(result);
// Apply business rules using essence
recommendations = essence.applyBusinessRules(
recommendations,
customerId,
context
);
// Enrich with product details
enrichRecommendations(recommendations);
return recommendations;
}
/**
* Triggers retraining of the recommendation model.
* Uses latest customer behavior data.
*/
public void retrainModel() {
// Get training data from feature store
TrainingData data = featureStore.getTrainingData(
"product-recommendation",
Timestamp.now().minusDays(30),
Timestamp.now()
);
// Train new model
trainingPipeline.submitTrainingJob(
"product-recommendation",
data,
new TrainingCallback() {
@Override
public void onTrainingComplete(MLModel newModel) {
handleNewModel(newModel);
}
@Override
public void onTrainingFailed(Exception e) {
// Log error and alert
}
}
);
}
private Features extractFeatures(String customerId, RecommendationContext context) {
Features features = new Features();
// Get customer behavior features
if (customerBehavior != null) {
CustomerProfile profile = customerBehavior.getProfile(customerId);
features.add("customer_age", profile.getAge());
features.add("customer_segment", profile.getSegment());
features.add("purchase_history", profile.getPurchaseHistory());
}
// Add context features
features.add("time_of_day", context.getTimeOfDay());
features.add("day_of_week", context.getDayOfWeek());
features.add("current_page", context.getCurrentPage());
return features;
}
private List<ProductRecommendation> convertToRecommendations(InferenceResult result) {
// Convert model output to recommendations
List<ProductRecommendation> recommendations = new ArrayList<>();
float[] scores = result.getScores();
String[] productIds = result.getProductIds();
for (int i = 0; i < scores.length && i < 10; i++) {
recommendations.add(new ProductRecommendation(
productIds[i],
scores[i]
));
}
return recommendations;
}
private void enrichRecommendations(List<ProductRecommendation> recommendations) {
// Get product details from catalog
if (productCatalog != null) {
for (ProductRecommendation rec : recommendations) {
Product product = productCatalog.getProduct(rec.getProductId());
rec.setProductDetails(product);
}
}
}
private void handleNewModel(MLModel newModel) {
// Validate new model performance
ModelMetrics metrics = validateModel(newModel);
if (metrics.meetsQualityThreshold()) {
// Register new model
modelRegistry.registerModel(newModel, "product-recommendation");
// Promote to production
modelRegistry.promoteToProduction(newModel.getId());
// Load new model into inference engine
inferenceEngine.loadModel(newModel);
// Update current model reference
currentModel = newModel;
} else {
// Model does not meet quality threshold - keep current model
// Alert data science team
}
}
private ModelMetrics validateModel(MLModel model) {
// Run validation on hold-out dataset
return new ModelMetrics(); // Simplified
}
private void startModelMonitoring() {
// Monitor model performance and data drift
// Trigger retraining if performance degrades
}
private void startModelTraining() {
// Schedule periodic model retraining.
// Note: retain the Timer reference in a field so stop() can cancel it.
Timer timer = new Timer("model-retraining", true); // daemon thread
timer.scheduleAtFixedRate(new TimerTask() {
@Override
public void run() {
retrainModel();
}
}, 0, 24L * 60 * 60 * 1000); // Retrain daily
}
@Override
public Object getContractImplementation(Class<?> contractType) {
if (contractType == RecommendationContract.class) {
return new RecommendationContractImpl(this);
}
return null;
}
@Override
public void injectDependency(Class<?> contractType, Object implementation) {
if (contractType == ProductCatalogContract.class) {
this.productCatalog = (ProductCatalogContract) implementation;
} else if (contractType == CustomerBehaviorContract.class) {
this.customerBehavior = (CustomerBehaviorContract) implementation;
}
}
}
The AI capability follows the same structure as other capabilities. The essence contains business logic for applying rules to recommendations. The realization handles ML infrastructure including model registry, feature store, and inference engine. The adaptation provides a simple interface for getting recommendations.
This structure allows the ML technology to evolve independently. We could replace the inference engine with a different one, change the model architecture, or even switch from one ML framework to another, all without affecting capabilities that use recommendations.
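To make the swap concrete, here is a minimal, hypothetical sketch (the interface and class names are illustrative, not taken from the capability code above) of how a contract-facing facade insulates callers from an inference-engine replacement:

```java
// Hypothetical sketch: two interchangeable inference engines behind one
// contract-facing facade. The caller's code never changes when the engine does.
public class EngineSwapSketch {
    interface InferenceEngine { float[] score(float[] features); }

    static final class EngineA implements InferenceEngine {
        public float[] score(float[] f) { return new float[]{0.9f, 0.1f}; }
    }
    static final class EngineB implements InferenceEngine {
        public float[] score(float[] f) { return new float[]{0.2f, 0.8f}; }
    }

    // The capability's contract surface: callers see recommendations, not engines.
    static final class RecommendationFacade {
        private InferenceEngine engine;
        RecommendationFacade(InferenceEngine engine) { this.engine = engine; }
        void swapEngine(InferenceEngine replacement) { this.engine = replacement; }
        int topProduct(float[] features) {
            float[] scores = engine.score(features);
            int best = 0;
            for (int i = 1; i < scores.length; i++) {
                if (scores[i] > scores[best]) best = i;
            }
            return best;
        }
    }

    public static void main(String[] args) {
        RecommendationFacade facade = new RecommendationFacade(new EngineA());
        System.out.println(facade.topProduct(new float[]{1f})); // 0
        facade.swapEngine(new EngineB()); // replace the ML technology...
        System.out.println(facade.topProduct(new float[]{1f})); // 1, caller unchanged
    }
}
```

The consumer calls `topProduct` both times; only the realization behind the facade changed.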
For Big Data integration, we treat data processing pipelines as capabilities:
/**
* Customer Analytics Capability using Big Data processing.
* Demonstrates integration of data analytics into the architecture.
*/
public class CustomerAnalyticsCapability implements CapabilityInstance {
// ESSENCE: Analytics logic
private final CustomerAnalyticsEssence essence;
// REALIZATION: Big Data infrastructure
private final DataLake dataLake;
private final SparkCluster sparkCluster;
private final DataWarehouse warehouse;
public CustomerAnalyticsCapability(
DataLake dataLake,
SparkCluster sparkCluster,
DataWarehouse warehouse
) {
this.essence = new CustomerAnalyticsEssence();
this.dataLake = dataLake;
this.sparkCluster = sparkCluster;
this.warehouse = warehouse;
}
@Override
public void initialize() {
// Initialize data lake connections
// Create warehouse tables if needed
}
@Override
public void start() {
// Start scheduled analytics jobs
scheduleAnalyticsJobs();
}
@Override
public void stop() {
// Stop scheduled jobs
}
/**
* Calculates customer lifetime value for all customers.
* Uses Spark for distributed processing of large datasets.
*/
public void calculateCustomerLifetimeValue() {
// Create Spark job
SparkJob job = sparkCluster.createJob("customer-ltv");
// Read customer data from data lake
Dataset customers = job.read(dataLake.getPath("/customers"));
// Read transaction history from data lake
Dataset transactions = job.read(dataLake.getPath("/transactions"));
// Join datasets
Dataset joined = customers.join(transactions, "customer_id");
// Calculate LTV using essence logic
Dataset ltv = job.map(joined, row -> {
CustomerData customer = parseCustomer(row);
List<Transaction> txns = parseTransactions(row);
// Use essence to calculate LTV
double ltvValue = essence.calculateLifetimeValue(customer, txns);
return new LTVResult(customer.getId(), ltvValue);
});
// Aggregate by customer segment
Dataset aggregated = job.groupBy(ltv, "segment")
.agg("ltv", "avg", "min", "max");
// Write results to warehouse
job.write(aggregated, warehouse.getTable("customer_ltv"));
// Execute job
job.execute();
}
/**
* Identifies customer segments based on behavior patterns.
* Uses machine learning clustering on big data.
*/
public void identifyCustomerSegments() {
// Create Spark ML job
SparkMLJob job = sparkCluster.createMLJob("customer-segmentation");
// Read customer features from data lake
Dataset features = job.read(dataLake.getPath("/customer-features"));
// Apply clustering algorithm
ClusteringModel model = job.kmeans(features, 5); // 5 segments
// Predict segments for all customers
Dataset segments = model.predict(features);
// Write segments to warehouse
job.write(segments, warehouse.getTable("customer_segments"));
// Execute job
job.execute();
}
private void scheduleAnalyticsJobs() {
// Schedule LTV calculation weekly
// Schedule segmentation monthly
}
private CustomerData parseCustomer(Row row) {
return new CustomerData(); // Simplified
}
private List<Transaction> parseTransactions(Row row) {
return new ArrayList<>(); // Simplified
}
@Override
public Object getContractImplementation(Class<?> contractType) {
if (contractType == CustomerAnalyticsContract.class) {
return new CustomerAnalyticsContractImpl(this);
}
return null;
}
@Override
public void injectDependency(Class<?> contractType, Object implementation) {
// No dependencies
}
}
The analytics capability uses Spark for distributed data processing but encapsulates all Spark-specific code in the realization layer. The essence contains the actual analytics algorithms. This separation allows us to change the big data technology without affecting the analytics logic.
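The `calculateLifetimeValue` call in the Spark job is never shown. A minimal hypothetical sketch of what such pure essence logic might look like (the `Txn` record and the discounting model are illustrative assumptions, not part of the article's codebase), with no Spark or storage dependency:

```java
import java.util.List;

public class CustomerLtvEssenceSketch {
    // Illustrative transaction: amount spent and how many years ago
    record Txn(double amount, int yearsAgo) {}

    /** Discounted sum of spend: older transactions count for less. */
    static double calculateLifetimeValue(List<Txn> txns, double discount) {
        double ltv = 0.0;
        for (Txn t : txns) {
            ltv += t.amount() * Math.pow(discount, t.yearsAgo());
        }
        return ltv;
    }

    public static void main(String[] args) {
        List<Txn> txns = List.of(new Txn(100.0, 0), new Txn(100.0, 1));
        // 100 + 100 * 0.9 = 190.0
        System.out.println(calculateLifetimeValue(txns, 0.9));
    }
}
```

Because the method touches only plain values, it can run inside a Spark closure, a unit test, or a single-node batch job unchanged.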
For cloud and containerization support, we use deployment descriptors and infrastructure capabilities:
/**
* Infrastructure capability for Kubernetes deployment.
* Manages deployment of other capabilities to Kubernetes cluster.
*/
public class KubernetesDeploymentCapability implements CapabilityInstance {
private final KubernetesClient k8sClient;
private final ContainerRegistry containerRegistry;
private final DeploymentPlanner planner;
public KubernetesDeploymentCapability(
KubernetesClient k8sClient,
ContainerRegistry containerRegistry,
DeploymentPlanner planner
) {
this.k8sClient = k8sClient;
this.containerRegistry = containerRegistry;
this.planner = planner;
}
@Override
public void initialize() {
// Connect to Kubernetes cluster
k8sClient.connect();
}
@Override
public void start() {
// Ready to deploy capabilities
}
@Override
public void stop() {
// Disconnect from cluster
}
/**
* Deploys a capability to Kubernetes.
*
* @param descriptor Deployment descriptor for the capability
*/
public void deployCapability(DeploymentDescriptor descriptor) {
// Get container image for capability
String imageName = containerRegistry.getImageName(
descriptor.getCapabilityName(),
descriptor.getVersion()
);
// Create deployment plan
DeploymentPlan plan = planner.createPlan(descriptor);
// Create Kubernetes deployment
Deployment deployment = new Deployment();
deployment.setName(descriptor.getCapabilityName());
deployment.setReplicas(plan.getInitialReplicas());
deployment.setImage(imageName);
deployment.setResourceLimits(descriptor.getResourceLimits());
// Apply deployment to cluster
k8sClient.applyDeployment(deployment);
// Create service for capability
Service service = new Service();
service.setName(descriptor.getCapabilityName());
service.setPort(plan.getServicePort());
service.setTargetPort(plan.getContainerPort());
k8sClient.applyService(service);
// Configure auto-scaling
if (descriptor.getScalingPolicy().isAutoScalingEnabled()) {
HorizontalPodAutoscaler hpa = new HorizontalPodAutoscaler();
hpa.setName(descriptor.getCapabilityName());
hpa.setMinReplicas(descriptor.getScalingPolicy().getMinReplicas());
hpa.setMaxReplicas(descriptor.getScalingPolicy().getMaxReplicas());
hpa.setTargetCPUUtilization(
descriptor.getScalingPolicy().getTargetCPUUtilization()
);
k8sClient.applyHPA(hpa);
}
}
/**
* Updates a deployed capability with a new version.
* Performs rolling update with zero downtime.
*
* @param capabilityName Name of the capability to update
* @param newVersion New version to deploy
*/
public void updateCapability(String capabilityName, String newVersion) {
// Get new container image
String imageName = containerRegistry.getImageName(capabilityName, newVersion);
// Update deployment with new image
k8sClient.updateDeploymentImage(capabilityName, imageName);
// Kubernetes performs rolling update automatically
// Monitor rollout status
waitForRollout(capabilityName);
}
private void waitForRollout(String deploymentName) {
// Poll until the rollout completes
while (!k8sClient.isRolloutComplete(deploymentName)) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
Thread.currentThread().interrupt(); // preserve the interrupt status
break;
}
}
}
@Override
public Object getContractImplementation(Class<?> contractType) {
if (contractType == DeploymentContract.class) {
return new DeploymentContractImpl(this);
}
return null;
}
@Override
public void injectDependency(Class<?> contractType, Object implementation) {
// No dependencies
}
}
The Kubernetes deployment capability handles all container orchestration concerns. Other capabilities do not need to know they are running in Kubernetes. They simply implement their contracts and can be deployed to any environment.
IMPLEMENTATION GUIDELINES
Implementing a system with Capability-Centric Architecture requires following specific guidelines to achieve the benefits of the pattern. The most important guideline is to identify capabilities based on cohesive functionality rather than technical layers or organizational structure.
A capability should represent a complete unit of functionality that delivers value. It should have a clear purpose that can be stated in a single sentence. For example, Product Catalog manages product information. Payment Processing handles payment transactions. Motor Control regulates motor speed and position. Each of these is a cohesive capability.
Avoid creating capabilities based on technical layers. Do not create a Database Access capability or a User Interface capability. These are technical concerns that should be part of the realization layer of domain capabilities. Similarly, avoid creating capabilities based on organizational boundaries. The fact that different teams work on different parts of the system does not mean those parts should be separate capabilities.
The second guideline is to define clear contracts for each capability. A contract should specify what the capability provides, what it requires, and the protocols for interaction. Contracts should be stable but not rigid. Use semantic versioning to manage contract evolution. Minor version changes add new features while maintaining backward compatibility. Major version changes can break compatibility but should be rare and well-planned.
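A compatibility check along these lines can be sketched in a few lines of Java. This is an illustrative assumption about how a registry might compare versions, not part of the pattern's specification:

```java
// Hypothetical sketch of semantic-version compatibility checking for contracts.
public class VersionCheck {
    record Version(int major, int minor, int patch) {
        /** A provided contract satisfies a consumer's requirement when the
         *  majors match and the provider is at least as new in minor/patch. */
        boolean satisfies(Version required) {
            if (major != required.major) return false; // major change breaks compatibility
            if (minor != required.minor) return minor > required.minor;
            return patch >= required.patch;
        }
    }

    public static void main(String[] args) {
        Version provided = new Version(1, 4, 2);
        System.out.println(provided.satisfies(new Version(1, 3, 0))); // true: newer minor
        System.out.println(provided.satisfies(new Version(2, 0, 0))); // false: major mismatch
        System.out.println(provided.satisfies(new Version(1, 4, 5))); // false: older patch
    }
}
```

A capability registry could run such a check at wiring time and refuse to inject an incompatible implementation.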
When defining contracts, focus on what rather than how. A contract specifies what functionality is provided, not how it is implemented. This allows the implementation to evolve without affecting consumers of the contract.
The third guideline is to use efficiency gradients appropriately. Not every operation needs to be optimized to the maximum. Identify the critical paths in your system and optimize those. Use higher-level abstractions for non-critical paths to improve maintainability.
For embedded systems, the critical path is usually the real-time control loop or interrupt handler. These should use direct hardware access and minimal abstraction. Background tasks like logging, diagnostics, and communication can use higher-level abstractions.
For enterprise systems, the critical path is usually the request handling path for high-traffic operations. These should be optimized for performance. Administrative operations, batch processing, and analytics can use more flexible but potentially slower implementations.
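One way to picture an efficiency gradient in code is the same contract served at two points on the gradient: a bare implementation for the hot path and a decorated one, with bookkeeping the hot path does not pay for, everywhere else. The names below are illustrative assumptions:

```java
import java.util.Map;

public class GradientSketch {
    interface PriceLookup { long priceCents(String sku); }

    // Hot path: a plain map-backed lookup with no extra indirection.
    static final class DirectLookup implements PriceLookup {
        private final Map<String, Long> prices;
        DirectLookup(Map<String, Long> prices) { this.prices = prices; }
        public long priceCents(String sku) { return prices.getOrDefault(sku, -1L); }
    }

    // Non-critical path: the same contract, wrapped with instrumentation.
    static final class InstrumentedLookup implements PriceLookup {
        private final PriceLookup inner;
        long calls = 0;
        InstrumentedLookup(PriceLookup inner) { this.inner = inner; }
        public long priceCents(String sku) {
            calls++; // metrics cost the hot path never pays
            return inner.priceCents(sku);
        }
    }

    public static void main(String[] args) {
        Map<String, Long> prices = Map.of("sku-1", 1999L);
        PriceLookup hot = new DirectLookup(prices);
        InstrumentedLookup admin = new InstrumentedLookup(hot);
        System.out.println(hot.priceCents("sku-1"));   // 1999
        System.out.println(admin.priceCents("sku-1")); // 1999
        System.out.println(admin.calls);               // 1
    }
}
```

Both implementations satisfy the same contract, so consumers can sit anywhere on the gradient without code changes.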
The fourth guideline is to manage dependencies carefully. Every dependency should be through a contract, not a direct reference to another capability's implementation. This allows capabilities to be tested in isolation and deployed independently.
Use the capability registry to detect circular dependencies early. If a circular dependency is detected, restructure the capabilities. Often this means extracting a new capability that both original capabilities depend on, breaking the cycle.
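The registry's cycle check can be as simple as a depth-first search over the declared dependency graph. A sketch under the assumption that dependencies are represented as a name-to-names map (the representation is illustrative):

```java
import java.util.*;

// Hypothetical sketch of a registry's circular-dependency detection.
public class DependencyCycleCheck {
    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> visiting = new HashSet<>(); // capabilities on the current DFS path
        Set<String> done = new HashSet<>();     // capabilities fully explored
        for (String cap : deps.keySet()) {
            if (visit(cap, deps, visiting, done)) return true;
        }
        return false;
    }

    static boolean visit(String cap, Map<String, List<String>> deps,
                         Set<String> visiting, Set<String> done) {
        if (done.contains(cap)) return false;
        if (!visiting.add(cap)) return true; // already on the current path: cycle
        for (String dep : deps.getOrDefault(cap, List.of())) {
            if (visit(dep, deps, visiting, done)) return true;
        }
        visiting.remove(cap);
        done.add(cap);
        return false;
    }

    public static void main(String[] args) {
        // Orders -> Payments -> Notifications is acyclic...
        Map<String, List<String>> ok = Map.of(
            "Orders", List.of("Payments"),
            "Payments", List.of("Notifications"));
        System.out.println(hasCycle(ok)); // false
        // ...but adding Notifications -> Orders closes a cycle.
        Map<String, List<String>> bad = Map.of(
            "Orders", List.of("Payments"),
            "Payments", List.of("Notifications"),
            "Notifications", List.of("Orders"));
        System.out.println(hasCycle(bad)); // true
    }
}
```

Running this check when capabilities register, before any dependency injection happens, surfaces cycles at startup rather than at runtime.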
The fifth guideline is to plan for evolution from the start. Every capability should have an evolution envelope that specifies its versioning strategy and deprecation policy. When you need to make a breaking change to a contract, introduce it as a new major version and maintain the old version for a transition period.
Document migration paths from old versions to new versions. Provide tools to help consumers migrate. Communicate changes clearly and give consumers adequate time to adapt.
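An evolution envelope can answer "is contract version N still served, and until when?" during a major-version transition. The sketch below is a hypothetical illustration; the sunset dates and map-based representation are invented for the example:

```java
import java.time.LocalDate;
import java.util.Map;

// Hypothetical sketch: an envelope tracking which contract majors are served.
public class EvolutionEnvelopeSketch {
    // supported major version -> date after which it is withdrawn
    // (dates are illustrative; LocalDate.MAX marks the current version)
    static final Map<Integer, LocalDate> SUNSET = Map.of(
        1, LocalDate.of(2026, 6, 30), // v1 kept for a transition period
        2, LocalDate.MAX);            // v2 is current

    static boolean isServed(int major, LocalDate today) {
        LocalDate sunset = SUNSET.get(major);
        return sunset != null && today.isBefore(sunset);
    }

    public static void main(String[] args) {
        System.out.println(isServed(1, LocalDate.of(2026, 1, 1))); // true: transition window
        System.out.println(isServed(1, LocalDate.of(2026, 7, 1))); // false: withdrawn
        System.out.println(isServed(3, LocalDate.of(2026, 1, 1))); // false: never published
    }
}
```

Consumers can query the envelope before binding to a contract version, turning the deprecation policy into something machine-checkable rather than documentation alone.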
Here is an example of a complete capability implementation following these guidelines:
/**
* Complete example of a capability following implementation guidelines.
* This is a Notification capability for sending alerts and messages.
*/
// STEP 1: Define the contract
/**
* Contract for notification services.
* Provides ability to send notifications through various channels.
*/
public interface NotificationContract {
/**
* Sends a notification to a recipient.
*
* @param notification The notification to send
* @return Result indicating success or failure
*/
NotificationResult sendNotification(Notification notification);
/**
* Sends a batch of notifications.
* More efficient than sending individually.
*
* @param notifications List of notifications to send
* @return Results for each notification
*/
List<NotificationResult> sendBatch(List<Notification> notifications);
/**
* Gets the delivery status of a notification.
*
* @param notificationId ID of the notification
* @return Current delivery status
*/
DeliveryStatus getStatus(String notificationId);
}
// STEP 2: Define the essence
/**
* Pure notification logic without infrastructure dependencies.
*/
public class NotificationEssence {
/**
* Validates a notification before sending.
*
* @param notification The notification to validate
* @return Validation result
*/
public ValidationResult validateNotification(Notification notification) {
ValidationResult result = new ValidationResult();
if (notification.getRecipient() == null ||
notification.getRecipient().isEmpty()) {
result.addError("Recipient is required");
}
if (notification.getMessage() == null ||
notification.getMessage().isEmpty()) {
result.addError("Message is required");
}
if (notification.getChannel() == null) {
result.addError("Channel is required");
}
// Validate message length based on channel
// (guard against null: the earlier check records an error but does not return)
if (notification.getChannel() == NotificationChannel.SMS &&
notification.getMessage() != null &&
notification.getMessage().length() > 160) {
result.addError("SMS messages cannot exceed 160 characters");
}
return result;
}
/**
* Determines the priority of a notification.
*
* @param notification The notification
* @return Priority level
*/
public Priority determinePriority(Notification notification) {
// Business logic for determining priority
if (notification.getType() == NotificationType.ALERT) {
return Priority.HIGH;
} else if (notification.getType() == NotificationType.REMINDER) {
return Priority.MEDIUM;
} else {
return Priority.LOW;
}
}
/**
* Selects the best channel for a notification.
*
* @param notification The notification
* @param availableChannels Channels that are currently available
* @return Selected channel
*/
public NotificationChannel selectChannel(
Notification notification,
Set<NotificationChannel> availableChannels
) {
// Prefer user's preferred channel if available
if (notification.getPreferredChannel() != null &&
availableChannels.contains(notification.getPreferredChannel())) {
return notification.getPreferredChannel();
}
// Fall back to email if available
if (availableChannels.contains(NotificationChannel.EMAIL)) {
return NotificationChannel.EMAIL;
}
// Use any available channel; fail clearly if none are available
if (availableChannels.isEmpty()) {
throw new IllegalStateException("No notification channels available");
}
return availableChannels.iterator().next();
}
}
// STEP 3: Implement the realization
/**
* Notification realization with infrastructure integration.
*/
public class NotificationRealization {
private final NotificationEssence essence;
private final EmailService emailService;
private final SMSService smsService;
private final PushNotificationService pushService;
private final NotificationQueue queue;
private final NotificationRepository repository;
public NotificationRealization(
NotificationEssence essence,
EmailService emailService,
SMSService smsService,
PushNotificationService pushService,
NotificationQueue queue,
NotificationRepository repository
) {
this.essence = essence;
this.emailService = emailService;
this.smsService = smsService;
this.pushService = pushService;
this.queue = queue;
this.repository = repository;
}
/**
* Sends a notification using appropriate infrastructure.
*
* @param notification The notification to send
* @return Result of the send operation
*/
public NotificationResult send(Notification notification) {
// Validate using essence
ValidationResult validation = essence.validateNotification(notification);
if (!validation.isValid()) {
return NotificationResult.validationFailed(validation);
}
// Determine priority using essence
Priority priority = essence.determinePriority(notification);
// Store notification in repository
String notificationId = repository.save(notification);
notification.setId(notificationId);
try {
// Send through appropriate channel
switch (notification.getChannel()) {
case EMAIL:
emailService.send(
notification.getRecipient(),
notification.getSubject(),
notification.getMessage()
);
break;
case SMS:
smsService.send(
notification.getRecipient(),
notification.getMessage()
);
break;
case PUSH:
pushService.send(
notification.getRecipient(),
notification.getMessage()
);
break;
default:
throw new UnsupportedChannelException(
"Channel not supported: " + notification.getChannel()
);
}
// Update status
repository.updateStatus(notificationId, DeliveryStatus.SENT);
return NotificationResult.success(notificationId);
} catch (Exception e) {
// Update status
repository.updateStatus(notificationId, DeliveryStatus.FAILED);
// Queue for retry if appropriate
if (shouldRetry(notification, priority)) {
queue.enqueue(notification, priority);
}
return NotificationResult.failed(e.getMessage());
}
}
/**
* Sends a batch of notifications efficiently.
*
* @param notifications List of notifications
* @return Results for each notification
*/
public List<NotificationResult> sendBatch(List<Notification> notifications) {
List<NotificationResult> results = new ArrayList<>();
// Group notifications by channel for efficient batch sending
Map<NotificationChannel, List<Notification>> byChannel =
groupByChannel(notifications);
for (Map.Entry<NotificationChannel, List<Notification>> entry :
byChannel.entrySet()) {
NotificationChannel channel = entry.getKey();
List<Notification> channelNotifications = entry.getValue();
// Send batch through channel-specific service
List<NotificationResult> channelResults =
sendBatchThroughChannel(channel, channelNotifications);
results.addAll(channelResults);
}
return results;
}
private Map<NotificationChannel, List<Notification>> groupByChannel(
List<Notification> notifications
) {
Map<NotificationChannel, List<Notification>> grouped = new HashMap<>();
for (Notification notification : notifications) {
grouped.computeIfAbsent(
notification.getChannel(),
k -> new ArrayList<>()
).add(notification);
}
return grouped;
}
private List<NotificationResult> sendBatchThroughChannel(
NotificationChannel channel,
List<Notification> notifications
) {
// Simplified: send individually; a real implementation would use the
// channel's bulk API to reduce round trips
List<NotificationResult> results = new ArrayList<>();
for (Notification notification : notifications) {
results.add(send(notification));
}
return results;
}
private boolean shouldRetry(Notification notification, Priority priority) {
// Retry high priority notifications
return priority == Priority.HIGH;
}
}
// STEP 4: Implement the adaptation
/**
* Adaptation layer providing external interfaces.
*/
public class NotificationAdaptation {
private final NotificationRealization realization;
public NotificationAdaptation(NotificationRealization realization) {
this.realization = realization;
}
/**
* REST API endpoint for sending notifications.
*/
public HttpResponse handleNotificationRequest(HttpRequest request) {
try {
Notification notification = parseNotification(request);
NotificationResult result = realization.send(notification);
if (result.isSuccessful()) {
return HttpResponse.ok(serializeResult(result));
} else {
return HttpResponse.badRequest(result.getErrorMessage());
}
} catch (Exception e) {
return HttpResponse.internalServerError(e.getMessage());
}
}
/**
* Message queue consumer for asynchronous notifications.
*/
public void handleNotificationMessage(Message message) {
Notification notification = deserializeNotification(message.getBody());
realization.send(notification);
}
private Notification parseNotification(HttpRequest request) {
return new Notification(); // Simplified
}
private Notification deserializeNotification(byte[] body) {
return new Notification(); // Simplified
}
private String serializeResult(NotificationResult result) {
return "{}"; // Simplified
}
}
// STEP 5: Implement the complete capability
/**
* Complete notification capability implementation.
*/
public class NotificationCapability implements CapabilityInstance {
private final NotificationEssence essence;
private final NotificationRealization realization;
private final NotificationAdaptation adaptation;
private final EvolutionEnvelope evolutionEnvelope;
public NotificationCapability(
EmailService emailService,
SMSService smsService,
PushNotificationService pushService,
NotificationQueue queue,
NotificationRepository repository
) {
this.essence = new NotificationEssence();
this.realization = new NotificationRealization(
essence,
emailService,
smsService,
pushService,
queue,
repository
);
this.adaptation = new NotificationAdaptation(realization);
this.evolutionEnvelope = createEvolutionEnvelope();
}
@Override
public void initialize() {
// Initialize infrastructure services
}
@Override
public void start() {
// Start message queue consumers
// Start retry processor
}
@Override
public void stop() {
// Stop consumers
// Flush queues
}
@Override
public Object getContractImplementation(Class<?> contractType) {
if (contractType == NotificationContract.class) {
return new NotificationContractImpl(realization);
}
return null;
}
@Override
public void injectDependency(Class<?> contractType, Object implementation) {
// This capability has no dependencies
}
public EvolutionEnvelope getEvolutionEnvelope() {
return evolutionEnvelope;
}
private EvolutionEnvelope createEvolutionEnvelope() {
return new EvolutionEnvelope(
"NotificationCapability",
new Version(1, 0, 0),
Arrays.asList(new Version(1, 0, 0)),
new DeprecationPolicy(),
new MigrationGuide()
);
}
}
This complete example demonstrates all the key aspects of implementing a capability. The contract defines what the capability provides. The essence contains pure business logic. The realization integrates with infrastructure. The adaptation provides external interfaces. The capability ties everything together and manages the lifecycle.
TESTING STRATEGIES
Testing is a crucial aspect of any architecture, and Capability-Centric Architecture provides several advantages for testing. The separation of essence, realization, and adaptation allows different testing strategies for different parts of the capability.
The essence can be tested with pure unit tests that require no infrastructure. Since the essence has no external dependencies, tests are fast, deterministic, and easy to write:
/**
* Unit tests for notification essence.
* No infrastructure required - pure logic testing.
*/
public class NotificationEssenceTest {
private NotificationEssence essence;
@Before
public void setUp() {
essence = new NotificationEssence();
}
@Test
public void testValidateNotification_ValidNotification_ReturnsValid() {
// Arrange
Notification notification = new Notification();
notification.setRecipient("user@example.com");
notification.setMessage("Test message");
notification.setChannel(NotificationChannel.EMAIL);
// Act
ValidationResult result = essence.validateNotification(notification);
// Assert
assertTrue(result.isValid());
assertEquals(0, result.getErrors().size());
}
@Test
public void testValidateNotification_MissingRecipient_ReturnsInvalid() {
// Arrange
Notification notification = new Notification();
notification.setMessage("Test message");
notification.setChannel(NotificationChannel.EMAIL);
// Act
ValidationResult result = essence.validateNotification(notification);
// Assert
assertFalse(result.isValid());
assertTrue(result.getErrors().contains("Recipient is required"));
}
@Test
public void testValidateNotification_SMSTooLong_ReturnsInvalid() {
// Arrange
Notification notification = new Notification();
notification.setRecipient("+1234567890");
notification.setMessage("A".repeat(161)); // 161 characters
notification.setChannel(NotificationChannel.SMS);
// Act
ValidationResult result = essence.validateNotification(notification);
// Assert
assertFalse(result.isValid());
assertTrue(result.getErrors().contains(
"SMS messages cannot exceed 160 characters"
));
}
@Test
public void testDeterminePriority_AlertType_ReturnsHigh() {
// Arrange
Notification notification = new Notification();
notification.setType(NotificationType.ALERT);
// Act
Priority priority = essence.determinePriority(notification);
// Assert
assertEquals(Priority.HIGH, priority);
}
@Test
public void testSelectChannel_PreferredAvailable_ReturnsPreferred() {
// Arrange
Notification notification = new Notification();
notification.setPreferredChannel(NotificationChannel.SMS);
Set<NotificationChannel> available = new HashSet<>();
available.add(NotificationChannel.EMAIL);
available.add(NotificationChannel.SMS);
// Act
NotificationChannel selected = essence.selectChannel(notification, available);
// Assert
assertEquals(NotificationChannel.SMS, selected);
}
@Test
public void testSelectChannel_PreferredNotAvailable_FallsBackToEmail() {
// Arrange
Notification notification = new Notification();
notification.setPreferredChannel(NotificationChannel.PUSH);
Set<NotificationChannel> available = new HashSet<>();
available.add(NotificationChannel.EMAIL);
available.add(NotificationChannel.SMS);
// Act
NotificationChannel selected = essence.selectChannel(notification, available);
// Assert
assertEquals(NotificationChannel.EMAIL, selected);
}
}
The realization requires integration tests that verify interaction with infrastructure. These tests use test doubles or test infrastructure:
/**
* Integration tests for notification realization.
* Uses mock services to test infrastructure integration.
*/
public class NotificationRealizationTest {
private NotificationEssence essence;
private EmailService emailService;
private SMSService smsService;
private PushNotificationService pushService;
private NotificationQueue queue;
private NotificationRepository repository;
private NotificationRealization realization;
@Before
public void setUp() {
essence = new NotificationEssence();
emailService = mock(EmailService.class);
smsService = mock(SMSService.class);
pushService = mock(PushNotificationService.class);
queue = mock(NotificationQueue.class);
repository = mock(NotificationRepository.class);
realization = new NotificationRealization(
essence,
emailService,
smsService,
pushService,
queue,
repository
);
}
@Test
public void testSend_ValidEmailNotification_SendsEmail() {
// Arrange
Notification notification = new Notification();
notification.setRecipient("user@example.com");
notification.setSubject("Test");
notification.setMessage("Test message");
notification.setChannel(NotificationChannel.EMAIL);
when(repository.save(any(Notification.class))).thenReturn("notif-123");
// Act
NotificationResult result = realization.send(notification);
// Assert
assertTrue(result.isSuccessful());
verify(emailService).send("user@example.com", "Test", "Test message");
verify(repository).updateStatus("notif-123", DeliveryStatus.SENT);
}
@Test
public void testSend_EmailServiceFails_ReturnsFailure() {
// Arrange
Notification notification = new Notification();
notification.setRecipient("user@example.com");
notification.setMessage("Test message");
notification.setChannel(NotificationChannel.EMAIL);
when(repository.save(any(Notification.class))).thenReturn("notif-123");
doThrow(new RuntimeException("Email service unavailable"))
.when(emailService).send(anyString(), anyString(), anyString());
// Act
NotificationResult result = realization.send(notification);
// Assert
assertFalse(result.isSuccessful());
verify(repository).updateStatus("notif-123", DeliveryStatus.FAILED);
}
@Test
public void testSend_InvalidNotification_ReturnsValidationFailure() {
// Arrange
Notification notification = new Notification();
// Missing required fields
// Act
NotificationResult result = realization.send(notification);
// Assert
assertFalse(result.isSuccessful());
verify(emailService, never()).send(anyString(), anyString(), anyString());
}
}
For embedded systems, testing strategies must account for hardware dependencies. The efficiency gradient structure helps here by isolating hardware access:
/**
* Tests for embedded motor control capability.
* Demonstrates testing strategies for hardware-dependent code.
*/
public class MotorControlCapabilityTest {
@Test
public void testEssence_CalculateControlOutput_CorrectPIDCalculation() {
// Test the essence independently of hardware
ControlParameters params = new ControlParameters(1.0, 0.1, 0.01);
MotorControlEssence essence = new MotorControlEssence(params);
int currentPosition = 100;
int targetPosition = 150;
int output = essence.calculateControlOutput(currentPosition, targetPosition);
// Verify PID calculation
assertTrue(output > 0); // Should increase to reach target
}
@Test
public void testRealization_SetTargetPosition_UpdatesTarget() {
// Test realization with mock hardware
HardwareTimer mockTimer = mock(HardwareTimer.class);
ADCController mockADC = mock(ADCController.class);
PWMController mockPWM = mock(PWMController.class);
MotorControlEssence essence = new MotorControlEssence(
new ControlParameters(1.0, 0.1, 0.01)
);
MotorControlRealization realization = new MotorControlRealization(
essence,
mockTimer,
mockADC,
mockPWM
);
realization.setTargetPosition(200);
// Verify the target was set; a real implementation would expose the
// target through a query method so the test can assert on it
}
@Test
public void testHardwareIntegration_ControlLoop_UpdatesPWM() {
// Hardware-in-the-loop test
// Requires actual hardware or hardware simulator
// This test would run on target hardware or in a simulator
// that accurately models hardware timing and behavior
}
}
Contract-based testing verifies that capabilities correctly implement their contracts:
/**
 * Contract tests verify that capability implementations fulfill their contracts.
 */
public class NotificationContractTest {
    private NotificationContract contract;

    @Before
    public void setUp() {
        // Create actual capability instance
        NotificationCapability capability = createTestCapability();
        contract = (NotificationContract) capability.getContractImplementation(
            NotificationContract.class
        );
    }

    @Test
    public void testContract_SendNotification_ReturnsResult() {
        // Verify contract is implemented correctly
        Notification notification = createValidNotification();
        NotificationResult result = contract.sendNotification(notification);
        assertNotNull(result);
    }

    @Test
    public void testContract_SendBatch_ReturnsResultsForAll() {
        List<Notification> notifications = Arrays.asList(
            createValidNotification(),
            createValidNotification(),
            createValidNotification()
        );
        List<NotificationResult> results = contract.sendBatch(notifications);
        assertEquals(notifications.size(), results.size());
    }

    @Test
    public void testContract_GetStatus_ReturnsStatus() {
        Notification notification = createValidNotification();
        NotificationResult result = contract.sendNotification(notification);
        DeliveryStatus status = contract.getStatus(result.getNotificationId());
        assertNotNull(status);
    }

    private NotificationCapability createTestCapability() {
        // Create capability with test infrastructure
        return new NotificationCapability(
            new TestEmailService(),
            new TestSMSService(),
            new TestPushService(),
            new TestNotificationQueue(),
            new TestNotificationRepository()
        );
    }

    private Notification createValidNotification() {
        Notification notification = new Notification();
        notification.setRecipient("test@example.com");
        notification.setMessage("Test message");
        notification.setChannel(NotificationChannel.EMAIL);
        return notification;
    }
}
These testing strategies provide comprehensive coverage while keeping tests fast and maintainable. The essence tests are pure unit tests that run in milliseconds. The realization tests use mocks to verify infrastructure integration. Contract tests ensure that capabilities fulfill their promises. Hardware-in-the-loop tests verify embedded system behavior on actual hardware.
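To make the speed claim concrete, the sketch below shows the shape of a pure essence test: a simplified, hypothetical PID essence exercised directly, with no mocks and no infrastructure. This is an illustrative stand-in, not the full MotorControlEssence from earlier; all class and method names here are assumptions.

```java
// Hypothetical, simplified essence: pure computation, no mocks, no I/O.
// Illustrative stand-in, not the MotorControlEssence shown earlier.
class PidEssence {
    private final double kp, ki, kd;
    private double integral, lastError;

    PidEssence(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    // A pure function of state and input: trivially testable in isolation.
    double update(double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}

public class EssenceTestSketch {
    public static void main(String[] args) {
        // At the setpoint with no history, the output should be zero.
        PidEssence atTarget = new PidEssence(1.0, 0.1, 0.01);
        if (atTarget.update(100.0, 100.0, 0.01) != 0.0) {
            throw new AssertionError("expected zero output at setpoint");
        }

        // Below the setpoint, the controller should push upward.
        PidEssence belowTarget = new PidEssence(1.0, 0.1, 0.01);
        if (belowTarget.update(100.0, 90.0, 0.01) <= 0.0) {
            throw new AssertionError("expected positive output below setpoint");
        }

        System.out.println("essence tests passed");
    }
}
```

Because the essence touches no timers, ADCs, or PWM channels, tests like these need no setup beyond construction and run in milliseconds.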
CONCLUSION
Capability-Centric Architecture provides a unified architectural pattern that works equally well for embedded and enterprise systems. By organizing systems around capabilities structured as nuclei with essence, realization, and adaptation layers, we achieve separation of concerns that enables independent evolution, testing, and deployment.
The pattern addresses fundamental architectural challenges that have plagued software development for decades. Circular dependencies are prevented through contract-based interaction and dependency graph management. Technology dependencies are isolated in the realization layer, allowing technologies to be replaced without affecting business logic. Quality attributes are explicitly addressed through contracts and efficiency gradients.
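One way to picture the dependency-graph side of this is a registry that refuses any dependency registration that would close a cycle. The sketch below is a hypothetical illustration of that check; the graph structure, capability names, and method names are assumptions, not part of the pattern's specification.

```java
import java.util.*;

// Hypothetical dependency registry: capabilities declare which other
// capabilities' contracts they consume, and any registration that would
// introduce a cycle is rejected up front.
class CapabilityGraph {
    private final Map<String, Set<String>> deps = new HashMap<>();

    // Returns false (leaving the graph unchanged) if adding the edge
    // from -> to would create a cycle.
    boolean addDependency(String from, String to) {
        if (reachable(to, from)) return false; // 'to' already depends on 'from'
        deps.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        return true;
    }

    // Depth-first search: is 'target' reachable from 'start'?
    private boolean reachable(String start, String target) {
        Deque<String> stack = new ArrayDeque<>(List.of(start));
        Set<String> seen = new HashSet<>();
        while (!stack.isEmpty()) {
            String node = stack.pop();
            if (node.equals(target)) return true;
            if (seen.add(node)) stack.addAll(deps.getOrDefault(node, Set.of()));
        }
        return false;
    }
}

public class DependencyCheckSketch {
    public static void main(String[] args) {
        CapabilityGraph graph = new CapabilityGraph();
        System.out.println(graph.addDependency("Notification", "Audit"));
        System.out.println(graph.addDependency("Audit", "Persistence"));
        // Would close the cycle Notification -> Audit -> Persistence -> Notification:
        System.out.println(graph.addDependency("Persistence", "Notification"));
    }
}
```

The first two registrations succeed; the third is rejected because it would make the dependency graph cyclic.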
For embedded systems, efficiency gradients allow critical paths to use direct hardware access while non-critical paths use higher-level abstractions. This balances real-time performance requirements with software engineering best practices. Resource contracts make resource usage explicit and manageable.
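The key point of an efficiency gradient is that both paths share the same essence. The hypothetical sketch below shows one low-overhead path and one managed path over a single sensor essence; it is illustrative only (a real embedded critical path would not be written in Java), and the conversion formula, valid range, and class names are assumptions.

```java
// Hypothetical essence: pure conversion from raw ADC counts to a
// physical value (formula chosen for illustration only).
class SensorEssence {
    double toCelsius(int raw) {
        return raw * 0.25 - 40.0;
    }
}

// Critical path: no validation, no bookkeeping, no allocation per call.
// Suitable for a tight control loop where latency dominates.
class FastPath {
    private final SensorEssence essence = new SensorEssence();

    double read(int raw) {
        return essence.toCelsius(raw);
    }
}

// Non-critical path: the same essence behind validation and bookkeeping,
// acceptable where latency is not at a premium.
class ManagedPath {
    private final SensorEssence essence = new SensorEssence();
    private long samples;

    double read(int raw) {
        if (raw < 0 || raw > 4095) {
            throw new IllegalArgumentException("raw reading out of range: " + raw);
        }
        samples++;
        return essence.toCelsius(raw);
    }
}

public class GradientSketch {
    public static void main(String[] args) {
        FastPath fast = new FastPath();
        ManagedPath managed = new ManagedPath();
        // Both paths produce the same result from the same essence.
        System.out.println(fast.read(160));
        System.out.println(managed.read(160));
    }
}
```

Because the business logic lives in the essence, choosing a point on the gradient changes overhead, not behavior.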
For enterprise systems, contract-based interaction enables independent deployment and scaling of capabilities. Evolution envelopes provide a formal mechanism for managing change over time. Support for modern technologies like AI, Big Data, and containerization is built into the architecture rather than bolted on afterward.
The architecture is practical to implement. Capabilities follow a clear structure that developers can understand and apply consistently. Testing strategies leverage the separation of essence, realization, and adaptation to provide comprehensive coverage with fast, maintainable tests. Deployment flexibility allows the same capability code to run in different environments, from embedded devices to cloud platforms.
Capability-Centric Architecture represents an evolution of architectural thinking. It synthesizes the best ideas from Domain-Driven Design, Hexagonal Architecture, and Clean Architecture while adding new mechanisms designed specifically to support both embedded and enterprise systems in the modern technological landscape. It is not a replacement for these patterns but an extension that makes them applicable to a broader range of systems and challenges.
The pattern has been presented here with detailed examples and explanations to enable architects and developers to apply it to their own systems. While the examples use Java-like syntax for clarity, the concepts apply to any programming language and technology stack. The key is to follow the core principles: organize around capabilities, separate essence from realization, interact through contracts, use efficiency gradients appropriately, and plan for evolution from the start.
By following these principles, teams can build systems that are easier to understand, test, deploy, and evolve over time, whether those systems are controlling industrial machinery, processing billions of transactions, or anything in between.