Monday, May 26, 2025

BRAIN-INSPIRED SPIKING NEURAL NETWORKS: A COMPREHENSIVE GUIDE TO BIOLOGICAL COMPUTATION

For the full source code, scroll to the end of the article!

This blog post comprises the following parts:

  • Part 1: Article on Spiking Neural Networks
  • Part 2: Explanation of an implementation of Spiking Neural Networks
  • Part 3: Mathematical Foundations
  • Part 4: Python source code of the example implementation


PART 1: SPIKING NEURAL NETWORKS

INTRODUCTION AND MOTIVATION 

Traditional artificial neural networks, despite their remarkable success in machine learning applications, operate in ways that are fundamentally different from biological brains. Current deep learning systems use continuous valued activations that are propagated through the network in synchronized waves, with learning achieved through global error backpropagation algorithms. In contrast, biological neural networks communicate through discrete electrical pulses called action potentials or spikes, process information asynchronously, and learn through local synaptic plasticity rules. 

The human brain contains approximately 86 billion neurons connected by trillions of synapses, yet it consumes only about 20 watts of power while performing incredibly complex cognitive tasks. This remarkable efficiency stems from several key biological principles that are absent in traditional artificial neural networks. Real neurons fire spikes only when necessary, creating sparse activity patterns that minimize energy consumption. Learning occurs through local rules that modify synaptic strengths based on the timing of pre and post-synaptic spikes, without requiring global error signals. 

The brain processes information asynchronously, with different neural circuits operating on different timescales and exhibiting complex temporal dynamics. This fundamental mismatch between artificial and biological neural computation has motivated researchers to develop more brain-like artificial neural networks. Spiking neural networks represent the third generation of neural network models, following perceptrons and sigmoidal neural networks. These networks attempt to capture the temporal dynamics and energy efficiency of biological neural computation while maintaining the computational power needed for practical applications. 

BIOLOGICAL FOUNDATIONS 

Understanding the biological basis of neural computation is essential for developing brain-inspired artificial systems. Biological neurons are complex electrochemical systems that maintain electrical potentials across their cell membranes. At rest, neurons maintain a membrane potential of approximately negative 70 millivolts relative to the extracellular environment. When the neuron receives sufficient excitatory input, the membrane potential depolarizes toward a threshold value of around negative 55 millivolts. When the threshold is reached, the neuron generates an action potential, a stereotypical electrical pulse that propagates along the axon to other neurons. This spike is an all-or-nothing event that lasts approximately one millisecond and reaches a peak of about positive 40 millivolts before rapidly returning to the resting potential. 

After generating a spike, neurons enter a refractory period during which they cannot fire another action potential, lasting several milliseconds. The integration of synaptic inputs occurs through complex dendritic processing. Excitatory synapses cause small depolarizations called excitatory postsynaptic potentials, while inhibitory synapses cause hyperpolarizations called inhibitory postsynaptic potentials. These potentials sum spatially across the dendritic tree and temporally over time constants determined by the membrane properties of the neuron. 

 Synaptic plasticity, the ability of synapses to change their strength based on activity, is fundamental to learning and memory in biological systems. The most well-studied form of synaptic plasticity is spike-timing dependent plasticity (STDP), discovered through detailed electrophysiological studies. STDP operates on a simple principle: if a presynaptic neuron fires shortly before a postsynaptic neuron, the synapse strengthens through long-term potentiation. Conversely, if the postsynaptic neuron fires before the presynaptic neuron, the synapse weakens through long-term depression. This learning rule makes intuitive sense from a causal perspective. If neuron A consistently fires before neuron B, it suggests that A may be causally contributing to B's activation, and therefore the connection should be strengthened. 

The precise timing dependence of STDP creates a narrow temporal window for learning, typically spanning plus or minus 20 milliseconds around the spike time. Biological neural networks also exhibit homeostatic mechanisms that maintain stable activity levels despite ongoing plastic changes. These mechanisms include intrinsic plasticity, where individual neurons adjust their excitability to maintain target firing rates, and metaplasticity, where the capacity for further plastic changes depends on the recent history of synaptic modifications. 

FUNDAMENTAL DIFFERENCES FROM TRADITIONAL NEURAL NETWORKS 

The differences between biological and artificial neural networks are profound and have important implications for computational capabilities and efficiency. Traditional artificial neural networks operate in a fundamentally synchronous manner, where all neurons in a layer compute their outputs simultaneously before passing information to the next layer. This synchronous operation requires precise timing and coordination across all network components. 

In contrast, biological neural networks operate asynchronously, with different neurons firing at different times based on their individual dynamics and input patterns. This asynchronous operation allows for more flexible temporal processing and can lead to more robust computation in the presence of noise and variability. 

The representation of information also differs dramatically between the two paradigms. Artificial neural networks typically encode information in the magnitude of continuous-valued activations, while biological networks encode information in the precise timing of discrete spikes. This temporal coding allows biological networks to process information with microsecond precision and enables sophisticated temporal pattern recognition capabilities. 

 Energy consumption represents another crucial difference. Traditional artificial neural networks require energy for every computation, regardless of whether the result contributes meaningful information to the network's output. Biological neurons consume energy primarily when they generate spikes, leading to sparse activity patterns where only a small fraction of neurons are active at any given time. This event-driven computation can be orders of magnitude more energy-efficient than traditional approaches. 

Learning mechanisms also differ fundamentally. Artificial neural networks typically rely on backpropagation, a global optimization algorithm that requires knowledge of the desired output and the ability to propagate error signals backward through the entire network. While mathematically elegant and computationally effective, backpropagation is biologically implausible because real neurons cannot send information backward along their axons, and the brain has no mechanism for computing global error gradients. Biological learning relies instead on local plasticity rules like STDP that depend only on information available at individual synapses. These local rules can achieve sophisticated learning without requiring global coordination or error signals. The challenge for artificial systems is to harness the power of local learning rules while maintaining the computational capabilities needed for complex tasks. 

 CORE CONCEPTS OF SPIKING NEURAL NETWORKS 

Spiking neural networks attempt to bridge the gap between biological realism and computational utility by incorporating key features of biological neural computation. The fundamental computational unit in these networks is the spiking neuron model, which simulates the temporal dynamics of biological neurons while remaining computationally tractable. 

 The most widely used spiking neuron model is the leaky integrate-and-fire neuron, which captures the essential dynamics of biological neurons while maintaining mathematical simplicity. In this model, the neuron's membrane potential evolves according to a differential equation that includes a leak term representing the passive electrical properties of the cell membrane and input terms representing synaptic currents. The leak term causes the membrane potential to decay exponentially toward the resting potential with a time constant that determines how long the neuron can integrate inputs. Typical biological time constants range from 10 to 50 milliseconds, allowing neurons to integrate inputs over behaviorally relevant timescales. When synaptic inputs arrive, they cause transient changes in the membrane potential that sum linearly if they arrive close in time. The spiking threshold determines when the neuron generates an action potential. In the basic integrate-and-fire model, a spike is generated whenever the membrane potential crosses a fixed threshold value. 

After spiking, the membrane potential is reset to a hyperpolarized value, and the neuron may enter a refractory period during which it cannot spike again. More sophisticated neuron models include additional biological features such as spike frequency adaptation, where repeated spiking causes a gradual increase in the effective threshold or a hyperpolarizing adaptation current. This mechanism prevents neurons from firing at unrealistically high rates and contributes to the sparse activity patterns observed in biological networks. 
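
In equation form, the membrane potential V of the leaky integrate-and-fire model evolves as

\tau_m \frac{dV}{dt} = -(V - V_{rest}) + R \, I(t)

where \tau_m is the membrane time constant, R the membrane resistance, and I(t) the total synaptic and external current. Whenever V reaches the threshold V_{th}, a spike is emitted, V is reset to V_{reset}, and integration pauses for the refractory period.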

Synaptic transmission in spiking neural networks involves realistic delays and filtering of spike trains. When a presynaptic neuron generates a spike, the signal must propagate along the axon and across the synaptic cleft before influencing the postsynaptic neuron. These delays, typically ranging from one to several milliseconds, introduce important temporal dynamics that can be exploited for computation. 

The postsynaptic response to a spike is typically modeled as a stereotypical waveform called a postsynaptic potential. These potentials have characteristic rise and decay times that filter the incoming spike train and determine how effectively spikes can sum temporally. The strength of the synaptic response is determined by the synaptic weight, which can be modified through learning rules.

Spike-timing dependent plasticity provides the foundation for learning in spiking neural networks. The STDP learning rule modifies synaptic weights based on the relative timing of pre and postsynaptic spikes. If the presynaptic spike precedes the postsynaptic spike by a small interval (typically less than 20 milliseconds), the synaptic weight increases according to an exponentially decaying function of the time interval. If the postsynaptic spike precedes the presynaptic spike, the weight decreases according to a similar exponential function. This learning rule allows synapses to strengthen when they contribute to postsynaptic firing and weaken when they are not causally related to output spikes. The temporal precision of STDP enables learning of complex temporal patterns and sequences that would be difficult to capture with rate-based learning rules.
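
In its most common mathematical form, the weight change for a spike pair separated by \Delta t = t_{post} - t_{pre} is

\Delta w = A_+ \, e^{-\Delta t / \tau_+} \text{ for } \Delta t > 0, \qquad \Delta w = -A_- \, e^{\Delta t / \tau_-} \text{ for } \Delta t < 0

where A_+ and A_- set the maximum potentiation and depression and the time constants \tau_+ and \tau_- (on the order of 20 milliseconds) define the width of the learning window.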

 NETWORK ARCHITECTURE AND CONNECTIVITY 

The architecture of brain-inspired spiking neural networks differs significantly from the regular, layered structures typical of traditional neural networks. Biological neural networks exhibit complex connectivity patterns that include feedforward connections, recurrent connections within and between layers, and long-range feedback connections that span multiple processing stages. 

Feedforward connections form the basic information processing pathway, carrying sensory information from input layers through intermediate processing stages to output layers. However, unlike traditional neural networks where every neuron in one layer connects to every neuron in the next layer, biological networks exhibit sparse connectivity patterns where each neuron connects to only a small fraction of potential postsynaptic targets. This sparse connectivity serves several important functions. It reduces the metabolic cost of maintaining synapses, limits the computational load on individual neurons, and creates modular processing units that can specialize for different computational tasks. The specific pattern of connectivity is not random but follows organizing principles that optimize information processing capabilities. 

Recurrent connections within layers create dynamic feedback loops that can sustain activity patterns, implement working memory functions, and generate complex temporal dynamics. These lateral connections often include both excitatory connections that can amplify and spread activity and inhibitory connections that provide competition and normalization. 

Inhibitory neurons play crucial roles in network function despite typically comprising only 10-20 percent of the total neuron population. These neurons provide precise temporal control of network activity, implement winner-take-all competition, and prevent runaway excitation that could lead to pathological activity patterns like epileptic seizures. 

Feedback connections from higher processing levels to lower levels enable top-down modulation of sensory processing, prediction of expected inputs, and attention mechanisms that selectively enhance relevant information. These connections create loops that allow the network to iteratively refine its processing and integrate information across multiple spatial and temporal scales. The developmental organization of these connectivity patterns follows principles of activity-dependent refinement, where initial broadly connected networks are pruned based on correlated activity patterns. Neurons that fire together strengthen their connections, while uncorrelated connections are eliminated through synaptic competition mechanisms.  

TEMPORAL DYNAMICS AND INFORMATION PROCESSING

One of the most important advantages of spiking neural networks is their ability to process temporal information with high precision. Traditional neural networks typically process static input patterns and must rely on external mechanisms like sliding windows or recurrent architectures to handle temporal sequences. 

In contrast, spiking networks naturally process temporal information through their intrinsic dynamics. The temporal precision of spike timing allows these networks to distinguish between input patterns that differ only in their temporal structure. For example, the same set of input neurons firing in different orders can produce completely different network responses, enabling sophisticated sequence recognition and generation capabilities. 

Temporal information can be encoded in multiple ways within spiking neural networks. Rate coding represents information in the average firing rate of neurons over time windows of tens to hundreds of milliseconds. This coding scheme is robust to noise and can represent continuous variables through the frequency of spike generation. 

Temporal coding represents information in the precise timing of individual spikes relative to reference signals or other spikes in the network. This coding scheme can achieve much higher information transmission rates than rate coding but requires more precise temporal processing mechanisms. 

Population coding distributes information across groups of neurons with similar but slightly different response properties. By combining the responses of multiple neurons, the network can represent information with higher precision and reliability than any individual neuron could achieve.

Synchronization dynamics play important roles in temporal information processing. Groups of neurons can synchronize their firing to create coherent oscillations at various frequencies, from slow delta waves (1-4 Hz) to fast gamma oscillations (30-100 Hz). These oscillations can bind distributed neural activity into coherent perceptual or cognitive states and provide temporal reference signals for precise computation. The temporal dynamics of spiking networks also enable sophisticated forms of computation that are difficult to achieve with traditional approaches. Coincidence detection allows networks to identify precisely timed input patterns, delay-line processing can implement temporal pattern recognition, and integration over multiple timescales can combine fast sensory processing with slower cognitive functions. 

 LEARNING MECHANISMS AND PLASTICITY 

Learning in brain-inspired spiking neural networks relies primarily on local plasticity rules that modify synaptic strengths based on the activity patterns of connected neurons. The most fundamental of these rules is spike-timing dependent plasticity, which provides a biologically plausible mechanism for unsupervised learning of temporal patterns. 

The STDP learning rule operates through biochemical mechanisms involving calcium signaling in biological synapses. When presynaptic and postsynaptic activity occur in close temporal proximity, calcium levels at the synapse reach threshold values that trigger molecular cascades leading to synaptic modification. The precise timing dependence arises from the different calcium dynamics associated with presynaptic neurotransmitter release and postsynaptic action potential generation. 

Beyond basic STDP, biological synapses exhibit numerous additional forms of plasticity that enhance learning capabilities. Metaplasticity refers to the plasticity of plasticity itself, where the capacity for synaptic modification depends on the recent history of synaptic activity. Synapses that have undergone recent potentiation may be less likely to be further strengthened, preventing unlimited growth of synaptic weights. Homeostatic plasticity mechanisms maintain stable network activity levels despite ongoing synaptic modifications. 

Synaptic scaling adjusts all synapses on a neuron proportionally to maintain target activity levels, while intrinsic plasticity modifies the excitability of individual neurons. These mechanisms prevent the network from becoming either hyperactive or silent as learning progresses. Heterosynaptic plasticity allows inactive synapses to be modified based on the activity of other synapses on the same neuron. This form of plasticity can implement competitive learning mechanisms where strong synapses become stronger at the expense of weaker synapses, leading to selective strengthening of the most effective input pathways. 

Neuromodulation provides another layer of learning control through the action of neurotransmitters like dopamine, serotonin, and acetylcholine. These neuromodulators can gate synaptic plasticity, determine the sign and magnitude of synaptic changes, and coordinate learning across large populations of neurons based on behavioral context and reward signals. The combination of these multiple plasticity mechanisms allows spiking neural networks to implement sophisticated learning algorithms that can adapt to changing environments, learn from limited data, and maintain stable performance over long periods. The local nature of these learning rules makes them particularly suitable for implementation in neuromorphic hardware systems. 

ENERGY EFFICIENCY AND SPARSE COMPUTATION 

One of the most compelling advantages of brain-inspired spiking neural networks is their potential for dramatically improved energy efficiency compared to traditional neural networks. This efficiency stems from the event-driven nature of spike-based computation, where energy is consumed only when neurons generate spikes rather than continuously during all computational operations. 

In traditional neural networks, every neuron computes its activation function for every input pattern, regardless of whether that computation contributes meaningful information to the network's output. This leads to dense computation where the majority of operations involve small-magnitude values that could be safely ignored without significantly affecting network performance. 

Biological neural networks achieve efficiency through sparse activity patterns where only a small percentage of neurons are active at any given time. In the mammalian cortex, typical firing rates range from 1-10 Hz, meaning that each neuron spikes only once every 100-1000 milliseconds. With billions of neurons, this sparse firing creates activity patterns where less than 1% of neurons are active in any brief time window. This sparsity arises naturally from the energy cost of spike generation in biological systems.

Each action potential requires substantial metabolic energy to restore ionic gradients across the cell membrane, creating evolutionary pressure for neurons to fire only when their contribution is valuable for network function. Spiking neural networks can exploit similar sparsity principles by implementing neurons that fire only when they receive sufficient input to cross their firing thresholds. Neurons receiving weak or conflicting inputs remain silent, eliminating unnecessary computations. This event-driven approach can reduce computational requirements by orders of magnitude compared to traditional approaches. 

The energy efficiency of spiking networks is further enhanced by the asynchronous nature of spike-based computation. Rather than requiring all neurons to compute simultaneously, spiking networks process information as events occur, allowing computational resources to be allocated dynamically based on network activity patterns. Neuromorphic hardware platforms designed specifically for spiking neural networks can achieve even greater efficiency gains by implementing analog or mixed-signal circuits that directly simulate neural dynamics. These platforms consume energy only during spike events and can operate at extremely low power levels suitable for battery-powered or energy-harvesting applications. The sparse computation paradigm also offers advantages for memory access patterns and data movement, which often dominate energy consumption in digital implementations. By processing only active neurons and synapses, spiking networks can dramatically reduce memory bandwidth requirements and improve cache efficiency. 

 IMPLEMENTATION ARCHITECTURE 

 The Python implementation provided represents a comprehensive framework for simulating brain-inspired spiking neural networks with biologically realistic features. The architecture follows object-oriented design principles with distinct classes representing different components of the neural system.  
The SpikingNeuron class implements the leaky integrate-and-fire neuron model with numerous biological features including variable firing thresholds, refractory periods, spike frequency adaptation, and homeostatic regulation mechanisms. Each neuron maintains its own internal state including membrane potential, adaptation currents, and spike history, allowing for heterogeneous populations with diverse response properties. The membrane dynamics are updated using Euler integration of the differential equation governing membrane potential evolution. The leak current drives the membrane potential toward the resting value with a time constant that determines integration properties. Input currents from synaptic connections and external sources are added to the membrane potential, while adaptation currents provide negative feedback that reduces firing rates during sustained activation. Noise plays an important role in the neuron implementation, providing the stochastic variability that is essential for realistic neural dynamics. 

Background noise currents are added to each neuron to simulate the constant bombardment of synaptic inputs that real neurons experience, while variability in neural parameters creates heterogeneous populations with diverse response properties. 

The SpikingSynapse class implements synaptic transmission with realistic delays and spike-timing dependent plasticity. Each synapse maintains a buffer of delayed spikes that are delivered to the postsynaptic neuron after appropriate propagation delays. 

The STDP learning rule monitors the relative timing of pre and postsynaptic spikes and adjusts synaptic weights according to the exponential temporal windows characteristic of biological plasticity. Synaptic weights are constrained within realistic bounds and include mechanisms for both homosynaptic plasticity (based on local spike timing) and heterosynaptic plasticity (based on network-wide activity patterns). 
The implementation includes metaplasticity features where the capacity for weight changes depends on recent modification history. 
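
As a rough, self-contained sketch of the delay-buffer idea (the class and attribute names here are illustrative stand-ins, not necessarily those used in the full source at the end of the article):

from collections import deque

class DelayedSynapseSketch:
    """Minimal delay-line synapse: spikes are queued and delivered after a fixed delay."""
    def __init__(self, weight=0.5, delay_ms=2.0):
        self.weight = weight
        self.delay_ms = delay_ms
        self.spike_buffer = deque()          # (delivery_time, amplitude) pairs in transit

    def transmit_spike(self, t_now):
        self.spike_buffer.append((t_now + self.delay_ms, self.weight))

    def process_delayed_spikes(self, t_now):
        delivered = 0.0
        while self.spike_buffer and self.spike_buffer[0][0] <= t_now:
            delivered += self.spike_buffer.popleft()[1]   # current injected into the postsynaptic neuron
        return delivered

syn = DelayedSynapseSketch()
syn.transmit_spike(t_now=10.0)
print(syn.process_delayed_spikes(t_now=11.0))  # 0.0, the spike is still in transit
print(syn.process_delayed_spikes(t_now=12.0))  # 0.5, delivered after the 2 ms delay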

The BrainInspiredNetwork class orchestrates the complete network simulation, managing populations of neurons and synapses while providing high-level interfaces for experimental protocols. The network construction follows neurobiologically inspired principles with sparse connectivity patterns, recurrent connections, and inhibitory populations that provide computational functions like normalization and competition. 

The simulation engine updates all network components in the correct order, processing synaptic transmission first to deliver delayed spikes, then updating neuron states to integrate inputs and generate new spikes, and finally applying plasticity rules to modify synaptic weights. This event-driven approach ensures proper temporal ordering of neural events while maintaining computational efficiency.
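
The ordering can be illustrated with a deliberately tiny, self-contained toy example; the parameter values and two-neuron structure below are illustrative only and much simpler than the actual BrainInspiredNetwork code:

import numpy as np
from collections import deque

# Toy two-neuron chain used only to illustrate the update order of one time step:
# (1) deliver delayed spikes, (2) integrate and fire, (3) apply plasticity.
v = np.array([-70.0, -70.0])                     # membrane potentials (mV)
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0
tau_m, dt = 20.0, 1.0                            # ms
weight, delay = 6.0, 2.0                         # neuron 0 -> neuron 1 connection
in_transit = deque()                             # (delivery_time, target, amplitude)

def simulate_step(t, external_drive):
    # 1. Synaptic transmission: deliver spikes whose propagation delay has elapsed.
    currents = np.array([external_drive, 0.0])
    while in_transit and in_transit[0][0] <= t:
        _, target, amplitude = in_transit.popleft()
        currents[target] += amplitude
    # 2. Neuron updates: Euler integration of leak plus input, then threshold and reset.
    v[:] += (-(v - v_rest) / tau_m + currents) * dt
    spiked = v >= v_thresh
    v[spiked] = v_reset
    if spiked[0]:
        in_transit.append((t + delay, 1, weight))
    # 3. Plasticity would be applied here using the recorded spike times (omitted in this toy).
    return spiked

for t in range(100):
    fired = simulate_step(t, external_drive=1.5)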

ADVANCED FEATURES AND CAPABILITIES 

 The implementation includes numerous advanced features that extend beyond basic spiking neural network functionality to provide sophisticated tools for research and application development. The real-time simulation capabilities allow users to interact with running networks, adjusting parameters and monitoring activity patterns as they evolve. 

The RealTimeSimulator class provides interactive simulation capabilities with user control over simulation speed, input patterns, and monitoring parameters. Users can pause simulations to examine detailed network states, inject specific input patterns to test network responses, and observe learning dynamics as they unfold over time. 

Pattern generation capabilities include multiple input paradigms designed to test different aspects of network function. Poisson spike trains provide realistic random input that simulates the irregular firing patterns typical of biological neurons. Periodic patterns test the network's ability to entrain to regular stimuli and develop synchronized responses. Custom pattern generators allow users to create complex temporal sequences for specific experimental protocols. 
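
For example, a Poisson input generator and a periodic generator might look roughly like this (rates and amplitudes are arbitrary illustrative values, not those used in the full source):

import numpy as np

def poisson_input(n_inputs=10, rate_hz=20.0, dt_ms=1.0, amplitude=1.5):
    """Return one time step of Poisson input currents for n_inputs channels."""
    p_spike = rate_hz * dt_ms / 1000.0                  # spike probability per time step
    return amplitude * (np.random.rand(n_inputs) < p_spike)

def periodic_input(t_ms, n_inputs=10, period_ms=50.0, amplitude=2.0):
    """Drive all channels together at a fixed period to test entrainment."""
    active = (t_ms % period_ms) < 1.0
    return np.full(n_inputs, amplitude if active else 0.0)

print(poisson_input())           # sparse random currents
print(periodic_input(t_ms=100))  # all channels driven at the start of each cycle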

The NetworkAnalyzer class provides comprehensive tools for understanding network structure and dynamics. Connectivity analysis reveals important topological properties like small-world structure, hub neurons with high connectivity, and modular organization. These structural features strongly influence network dynamics and computational capabilities. Spike pattern analysis tools identify important temporal features like burst patterns, synchronization dynamics, and oscillatory activity. These analyses are crucial for understanding how temporal coding schemes emerge from network dynamics and how they contribute to information processing capabilities. The performance benchmarking system allows systematic evaluation of computational efficiency across different network sizes and parameter settings. These benchmarks are essential for optimizing implementations and comparing different algorithmic approaches. Visualization capabilities provide multiple perspectives on network structure and dynamics through integrated matplotlib plotting functions. Static visualizations show network connectivity patterns and final analysis results, while animation capabilities can display temporal evolution of activity patterns and learning dynamics. 

 EXPERIMENTAL PROTOCOLS AND DEMONSTRATIONS 

The implementation includes several demonstration protocols that showcase different aspects of brain-inspired neural computation. These protocols serve both as examples of how to use the system and as validation of key biological principles.

The basic demonstration protocol creates a multi-layer network with realistic connectivity patterns and applies Poisson input stimulation to generate natural activity patterns. This protocol demonstrates the emergence of sparse activity patterns, temporal integration properties, and energy-efficient computation through event-driven dynamics.

The learning demonstration applies structured temporal patterns while enabling STDP plasticity to show how networks can adapt their connectivity to better process repeated input patterns. This protocol reveals how local learning rules can lead to global improvements in network function without requiring supervised training signals.

The pattern recognition demonstration tests the network's ability to learn and distinguish between different temporal input sequences. This capability is crucial for applications in speech recognition, motion detection, and other temporal pattern processing tasks.

The real-time interaction demonstration shows how users can manipulate running networks to explore parameter sensitivity and test hypotheses about network function. This interactive capability is valuable for educational purposes and for developing intuition about complex network dynamics.

Performance demonstrations compare the computational efficiency of spiking networks with traditional approaches, highlighting the potential advantages of event-driven computation for energy-constrained applications like mobile devices and embedded systems.

 RUNNING THE PROGRAM 

To execute the brain-inspired spiking neural network implementation, users need a Python environment with several standard scientific computing libraries installed. The required dependencies include NumPy for numerical computations, Matplotlib for visualization, and standard Python libraries for threading, queuing, and file operations.

The main program can be executed directly by running the Python file, which will automatically execute both the basic demonstration and the comprehensive demonstration protocols. The basic demonstration creates a moderate-sized network and runs several experiments showing fundamental network capabilities like temporal dynamics, STDP learning, and pattern processing. The comprehensive demonstration provides a more thorough exploration of network capabilities including interactive simulation, advanced analysis tools, and performance benchmarking. This demonstration generates detailed visualizations and saves network states for further analysis.

Users can customize the demonstrations by modifying network parameters like layer sizes, connectivity probabilities, and learning rates. Different input patterns can be tested by creating custom pattern generator functions that return arrays of input currents as functions of simulation time.

The interactive simulation mode allows real-time control over network dynamics through command-line input. Users can pause simulations, examine current statistics, and modify input patterns while the network is running. This interactive capability is particularly valuable for exploring parameter sensitivity and developing intuition about network behavior.

Output files include comprehensive visualizations showing network structure, activity patterns, learning dynamics, and performance statistics. Network states can be saved in JSON format for later analysis or continued simulation from specific checkpoints.

Advanced users can extend the implementation by creating custom neuron models, plasticity rules, or analysis tools. The modular object-oriented design makes it straightforward to add new features while maintaining compatibility with existing functionality.

SCIENTIFIC AND PRACTICAL APPLICATIONS 

 Brain-inspired spiking neural networks have numerous potential applications across scientific research and practical engineering domains. In computational neuroscience, these networks provide valuable tools for testing hypotheses about biological neural computation and exploring the principles underlying brain function. The temporal processing capabilities of spiking networks make them particularly well-suited for applications involving time-series data, sequential pattern recognition, and real-time sensory processing. Speech recognition systems can exploit the precise temporal coding capabilities to distinguish between phonemes based on their temporal structure rather than just spectral content. Motor control applications can benefit from the natural integration of sensory feedback and motor commands that emerges from recurrent spiking network architectures. The ability to process multiple timescales simultaneously allows these networks to coordinate fast reflexive responses with slower voluntary movements. Robotics applications can exploit the energy efficiency and real-time processing capabilities of spiking networks for autonomous navigation, object recognition, and adaptive behavior. The event-driven computation paradigm is particularly valuable for battery-powered mobile robots that must operate for extended periods without recharging. Neuromorphic engineering aims to implement brain-inspired computation in specialized hardware platforms that can achieve the energy efficiency and processing capabilities of biological neural systems. Spiking neural networks provide the algorithmic foundation for these neuromorphic chips, which represent a promising alternative to traditional von Neumann computing architectures. Medical applications include brain-computer interfaces that must process neural signals in real-time with minimal power consumption, neural prosthetics that replace damaged brain circuits, and diagnostic tools that can identify pathological activity patterns in neural recordings. The implementation provided serves as a foundation for exploring all of these application domains while maintaining the biological realism necessary for advancing our understanding of neural computation principles. 

 FUTURE DIRECTIONS AND RESEARCH OPPORTUNITIES 

 The field of brain-inspired spiking neural networks continues to evolve rapidly with numerous opportunities for advancing both theoretical understanding and practical applications. Current research directions include developing more sophisticated neuron models that capture additional biological features like dendritic computation, multiple neurotransmitter systems, and complex spike generation mechanisms. Learning algorithms remain an active area of research with investigations into how multiple plasticity mechanisms interact to produce adaptive behavior, how neuromodulation can guide learning based on behavioral context, and how local learning rules can solve global optimization problems. Network architecture research explores how connectivity patterns influence computational capabilities, how developmental processes can automatically construct efficient network topologies, and how hierarchical organization emerges from local interaction rules. Hardware implementation represents a crucial frontier for achieving the energy efficiency and processing speed promised by brain-inspired computation. Neuromorphic chips, memristive devices, and analog computing platforms offer potential pathways to implementations that match or exceed biological efficiency. Hybrid systems that combine spiking neural networks with traditional machine learning approaches may provide practical solutions that leverage the strengths of both paradigms while mitigating their respective limitations. The theoretical understanding of spiking neural network computation continues to develop through mathematical analysis of network dynamics, information-theoretic approaches to temporal coding, and systematic comparisons with biological neural systems. Educational applications of spiking neural network simulations can provide valuable tools for teaching computational neuroscience, demonstrating biological principles, and developing intuition about complex neural dynamics. The implementation provided (see below) offers a solid foundation for exploring all of these research directions while remaining accessible to researchers and students at various levels of expertise. By combining biological realism with computational tractability, brain-inspired spiking neural networks represent a promising approach to developing more efficient, adaptive, and intelligent artificial systems. 

PART 2: DETAILED PYTHON CODE EXPLANATION (FOR CODE SEE BELOW): BRAIN-INSPIRED SPIKING NEURAL NETWORK

IMPORTS AND DEPENDENCIES

The implementation begins with importing essential Python libraries that provide the mathematical, visualization, and data structure capabilities needed for the neural network simulation.

NumPy provides the fundamental array operations and mathematical functions used throughout the implementation. All membrane potentials, synaptic weights, and temporal dynamics are computed using NumPy arrays for efficiency. The random number generation capabilities are used extensively for creating realistic biological variability in neural parameters and generating stochastic noise.

Matplotlib handles all visualization tasks, creating comprehensive plots that show network structure, spike patterns, activity evolution, and learning dynamics. The animation capabilities allow real-time visualization of network activity as it evolves during simulation.

The collections module provides the deque data structure, which is used extensively for maintaining spike histories and activity traces with automatic size limiting. This prevents memory usage from growing unbounded during long simulations while maintaining efficient access to recent events.

Threading and queue modules enable the interactive simulation capabilities, allowing user input to control running simulations without blocking the main computation thread. Time provides performance timing capabilities for benchmarking different network configurations.

The typing module annotations improve code readability and enable better development tool support by clearly specifying expected data types for function parameters and return values.
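
A plausible import block along these lines is shown below; the exact set of imports in the full source (Part 4) may differ slightly:

import json
import queue
import threading
import time
from collections import deque
from typing import Callable, Dict, List, Optional, Tuple

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation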

 SPIKING NEURON CLASS IMPLEMENTATION 

The SpikingNeuron class represents the core computational unit of the brain-inspired network, implementing a sophisticated model that captures essential features of biological neural dynamics while remaining computationally efficient. The constructor initializes all neural parameters with biologically realistic values and appropriate variability.

The membrane potential starts with small random variations around the resting potential, simulating the natural variability found in real neurons. The firing threshold includes random variation between neurons, reflecting the diversity of excitability found in biological neural populations. The resting potential is set to a negative value representing the typical resting state of biological neurons. The reset potential is more negative than the resting potential, implementing the hyperpolarization that follows action potential generation in real neurons.

Time constants control the temporal dynamics of neural integration. The membrane time constant determines how quickly the neuron forgets past inputs, with longer time constants allowing integration over longer periods. The adaptation time constant controls spike frequency adaptation, while the refractory time constant determines the minimum interval between spikes.

The spike timing infrastructure maintains detailed records of when the neuron has fired, enabling precise implementation of spike-timing dependent plasticity. The deque data structure automatically limits memory usage while providing efficient access to recent spike times.

Adaptation mechanisms implement spike frequency adaptation, a crucial biological feature where neurons reduce their firing rate during sustained stimulation. The adaptation current increases with each spike and decays exponentially, creating a negative feedback system that prevents unrealistic firing rates.

Energy tracking capabilities monitor the metabolic cost of neural activity, providing insights into the efficiency advantages of sparse neural coding. Background activity parameters introduce realistic spontaneous firing that occurs even without external stimulation.

The update method implements the core neural dynamics using numerical integration of the leaky integrate-and-fire differential equation. The method first checks whether the neuron is in its refractory period, during which it cannot generate new spikes regardless of input strength.

The leak current calculation implements the passive membrane properties that cause the membrane potential to decay toward the resting value. This leak current is proportional to the difference between current membrane potential and resting potential, divided by the membrane time constant.

Noise injection adds realistic stochastic variability that is essential for proper neural function. The noise amplitude is carefully calibrated to provide sufficient variability without overwhelming the deterministic dynamics. Background spontaneous activity occasionally provides random excitatory input, simulating the constant bombardment of synaptic inputs that real neurons experience.

The total current integration combines external inputs, leak current, noise, and adaptation current to compute the net influence on membrane potential. The membrane potential is updated using Euler integration, which provides adequate accuracy for the time steps used while maintaining computational efficiency.

Spike generation occurs when the membrane potential crosses the firing threshold. The spike generation process involves recording the spike time, resetting the membrane potential, entering the refractory period, and updating adaptation and energy consumption tracking.

Homeostatic mechanisms adjust the firing threshold based on recent activity levels, implementing biological mechanisms that maintain stable neural function despite ongoing plastic changes. If the neuron fires too frequently, the threshold increases to reduce future firing probability. If firing is too infrequent, the threshold decreases to increase excitability.
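
The following condensed sketch illustrates this update logic; the names and constants are illustrative rather than copied from the full source:

from collections import deque
import numpy as np

class LIFNeuronSketch:
    """Condensed illustration of the update logic described above: refractory check,
    leak, noise, adaptation, Euler integration, spike/reset, and a homeostatic threshold."""

    def __init__(self):
        self.v = -70.0 + np.random.uniform(-2.0, 2.0)        # membrane potential (mV), per-neuron variability
        self.threshold = -55.0 + np.random.uniform(-1.0, 1.0)
        self.v_rest, self.v_reset = -70.0, -75.0
        self.tau_m, self.tau_adapt = 20.0, 100.0              # time constants (ms)
        self.refractory_until = -1.0
        self.adaptation = 0.0
        self.spike_times = deque(maxlen=100)                   # bounded spike history

    def update(self, t, input_current, dt=1.0):
        if t < self.refractory_until:                          # absolute refractory period
            return False
        leak = -(self.v - self.v_rest) / self.tau_m            # passive decay toward rest
        noise = np.random.normal(0.0, 0.5)                     # stochastic background drive
        self.adaptation *= np.exp(-dt / self.tau_adapt)        # adaptation current decays
        self.v += (leak + input_current + noise - self.adaptation) * dt
        if self.v >= self.threshold:
            self.spike_times.append(t)
            self.v = self.v_reset                              # hyperpolarized reset
            self.refractory_until = t + 3.0                    # ms
            self.adaptation += 0.5                             # spike-frequency adaptation
            self.threshold += 0.05                             # homeostasis: slightly less excitable
            return True
        self.threshold += (-55.0 - self.threshold) * 0.001 * dt   # threshold drifts back to baseline
        return False

neuron = LIFNeuronSketch()
n_spikes = sum(neuron.update(t, input_current=1.2) for t in range(300))
print(n_spikes, "spikes in 300 ms of constant drive")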

 SPIKING SYNAPSE CLASS IMPLEMENTATION 

The SpikingSynapse class implements realistic synaptic transmission with spike-timing dependent plasticity, providing the communication and learning mechanisms that enable network-level computation. The constructor establishes the connection between presynaptic and postsynaptic neurons and initializes synaptic parameters. The initial weight includes realistic variability and is constrained within biologically plausible bounds. The synaptic delay represents the time required for action potential propagation and synaptic transmission.

The spike buffer maintains a queue of spikes that are in transit between neurons, implementing the realistic delays that characterize biological synaptic transmission. This delay is crucial for proper temporal dynamics and enables sophisticated temporal coding schemes.

STDP parameters control the spike-timing dependent plasticity learning rule. The learning rate determines how quickly synaptic weights change in response to correlated activity. The STDP window defines the temporal range over which spike timing affects plasticity. The decay constants for long-term potentiation and depression control the precise shape of the STDP learning curve.

The transmit spike method handles the initiation of synaptic transmission when the presynaptic neuron fires. Rather than immediately affecting the postsynaptic neuron, spikes are added to the delay buffer with their scheduled delivery time.

The process delayed spikes method manages the delivery of spikes that have completed their synaptic delay. When spikes are ready for delivery, they generate postsynaptic currents that are added to the postsynaptic neuron's input. The amplitude of these currents includes realistic variability that simulates the stochastic nature of synaptic transmission.

The STDP weight update mechanism implements the core learning algorithm that enables these networks to adapt without global supervision. The method examines recent spike times from both presynaptic and postsynaptic neurons to identify temporal correlations that should modify synaptic strength. For each pair of correlated spikes, the algorithm calculates the time difference and applies the appropriate weight change based on the STDP learning rule. Positive time differences (presynaptic before postsynaptic) lead to weight increases through long-term potentiation. Negative time differences lead to weight decreases through long-term depression.

The exponential decay functions implement the biological mechanisms underlying STDP, where the magnitude of weight changes decreases exponentially with increasing time intervals between spikes. This creates a narrow temporal window for learning that emphasizes precise timing relationships.

Metaplasticity mechanisms modify the learning rate based on current synaptic strength, implementing biological phenomena where strong synapses are less plastic than weak synapses. This prevents unlimited weight growth and creates competition between synapses.

Heterosynaptic plasticity provides network-wide regulation of synaptic strengths based on overall activity levels. When network activity is high, all synapses are slightly weakened to prevent runaway excitation. When activity is low, synapses are slightly strengthened to maintain adequate network responsiveness.

Weight bounds enforcement ensures that synaptic weights remain within realistic ranges. The implementation uses soft bounds that become increasingly resistant to further changes as weights approach their limits, rather than hard clipping that could disrupt learning dynamics.
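
A simplified sketch of such a pair-based STDP update with soft bounds might look like this (the parameter values are assumptions for illustration, not those of the full source):

import numpy as np

def stdp_update(weight, t_pre_spikes, t_post_spikes,
                lr=0.01, window=20.0, tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Sketch of an STDP weight update with soft bounds and weight-dependent plasticity."""
    dw = 0.0
    for t_pre in t_pre_spikes:
        for t_post in t_post_spikes:
            dt = t_post - t_pre
            if 0 < dt <= window:              # causal pair: potentiation
                dw += lr * np.exp(-dt / tau_plus) * (w_max - weight)    # soft upper bound
            elif -window <= dt < 0:           # anti-causal pair: depression
                dw -= lr * np.exp(dt / tau_minus) * (weight - w_min)    # soft lower bound
    return float(np.clip(weight + dw, w_min, w_max))

# A tightly timed causal pairing strengthens the synapse less if it is already strong.
print(stdp_update(0.5, t_pre_spikes=[10.0], t_post_spikes=[15.0]))
print(stdp_update(0.95, t_pre_spikes=[10.0], t_post_spikes=[15.0]))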

BRAIN INSPIRED NETWORK CLASS IMPLEMENTATION 

The BrainInspiredNetwork class orchestrates the complete neural network simulation, managing populations of neurons and synapses while providing high-level interfaces for experimental protocols. The constructor accepts parameters that define the network architecture, including the number of neurons in each layer and the probability of connections between neurons. These parameters allow users to create networks ranging from small experimental systems to large-scale models with thousands of neurons.

The network building process follows neurobiologically inspired principles rather than the regular connectivity patterns typical of artificial neural networks. The build network method coordinates the creation of neurons and the establishment of synaptic connections according to realistic organizational principles.

Neuron creation organizes neurons into layers with appropriate spatial positioning for visualization. Each layer can have different numbers of neurons and specialized properties appropriate for their functional role. Input layer neurons typically have lower firing thresholds and higher noise levels to respond readily to external stimulation. Output layer neurons may have higher thresholds and longer integration time constants to provide stable output signals.

Synaptic connection establishment implements multiple connectivity patterns that reflect different aspects of biological neural organization. Feedforward connections carry information from input layers toward output layers, implementing the basic information processing pathway. The connection probability parameter controls the sparsity of connectivity, with typical values around 0.6-0.7 creating realistic sparse connectivity patterns. This sparsity reduces computational requirements while maintaining network functionality, reflecting the metabolic constraints that shape biological neural organization.

Recurrent connections within layers create dynamic feedback loops that can sustain activity patterns and implement working memory functions. These connections typically have lower weights than feedforward connections to prevent instability while providing the recurrent dynamics necessary for temporal processing.

Feedback connections from higher layers to lower layers enable top-down modulation and prediction capabilities. These connections allow the network to integrate information across multiple processing stages and implement attention mechanisms that selectively enhance relevant inputs.

Inhibitory connections provide crucial computational functions including normalization, competition, and temporal precision. The implementation creates inhibitory connections using negative weights, simplifying the model while preserving essential inhibitory functions.

The simulate step method implements the core simulation algorithm that advances the network state by one time step. The method carefully orders operations to ensure proper temporal sequencing of neural events. Synaptic transmission is processed first to deliver any spikes that have completed their propagation delays. This ensures that spikes generated in previous time steps can influence neural activity before new spikes are generated. Neuron state updates integrate all synaptic inputs and update membrane potentials according to the neural dynamics equations. Neurons that cross their firing thresholds generate new spikes that will influence other neurons in subsequent time steps.

Plasticity updates modify synaptic weights based on recent activity patterns, implementing the learning mechanisms that allow the network to adapt its connectivity. The frequency of plasticity updates can be controlled to balance learning speed with computational efficiency. Network activity monitoring tracks global activity patterns and computes statistics that characterize network behavior. These measurements provide insights into network dynamics and enable detection of pathological activity patterns.

The simulate method provides a high-level interface for running extended simulations with comprehensive data collection. The method accepts input pattern generators, simulation duration, and recording parameters to configure experimental protocols. Input pattern integration allows users to specify complex temporal input patterns through function objects that return input currents as functions of simulation time. This flexible approach enables testing with realistic sensory inputs, artificial test patterns, or interactive user-controlled stimulation.

Data recording capabilities capture detailed information about network activity, including individual spike times, population activity levels, synaptic weight evolution, and energy consumption patterns. The recording interval can be adjusted to balance data completeness with memory requirements.

Performance monitoring tracks simulation speed and provides progress updates during long simulations. This information helps users optimize their experimental protocols and estimate completion times for extensive parameter studies.
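
For example, an input pattern can be written as a plain function of simulation time; how it is passed to the network depends on the actual simulate interface in Part 4, so the commented call below is only a hypothetical usage:

import numpy as np

def oscillatory_input(t_ms, n_inputs=20, base=0.5, amplitude=1.0, freq_hz=8.0):
    """Input currents for all input neurons as a function of simulation time (ms)."""
    drive = base + amplitude * np.sin(2.0 * np.pi * freq_hz * t_ms / 1000.0)
    jitter = 0.1 * np.random.randn(n_inputs)          # per-neuron variability
    return np.maximum(drive + jitter, 0.0)            # keep input currents non-negative

# Hypothetical call; the real signature is defined in the Part 4 source:
# results = network.simulate(duration=2000.0, input_pattern=oscillatory_input)
print(oscillatory_input(0.0)[:5])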

REAL-TIME SIMULATOR CLASS IMPLEMENTATION 

 The RealTimeSimulator class extends the basic simulation capabilities with interactive features that allow real-time control and monitoring of network dynamics. This class demonstrates advanced software engineering techniques for creating responsive scientific simulation tools. The interactive simulation infrastructure uses Python threading to separate user input handling from the main simulation loop. This prevents user interaction from blocking the simulation while enabling real-time control over simulation parameters. The command queue mechanism allows the user interface thread to communicate with the simulation thread through a thread-safe queue data structure. Users can enter commands that pause the simulation, display current statistics, or modify simulation parameters without interrupting the computational flow. Dynamic input pattern generation creates complex temporal stimulation patterns that change over time, demonstrating the network's ability to process non-stationary inputs. The implementation includes oscillatory patterns, burst patterns, and random walk patterns that test different aspects of temporal processing. Real-time data monitoring maintains buffers of recent activity data that can be analyzed and displayed while the simulation is running. This capability is essential for identifying interesting network behaviors and adjusting experimental parameters interactively. The progress reporting system provides regular updates on simulation status, including progress percentage, simulation speed, and current activity levels. This information helps users monitor long simulations and identify when interesting dynamics are occurring. Performance optimization techniques ensure that the interactive features do not significantly impact simulation speed. The implementation carefully manages data structure sizes and update frequencies to maintain responsive interaction while preserving computational efficiency. 
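
The core of such an interactive loop can be sketched with the standard threading and queue modules; the command names and loop structure below are illustrative, not the exact RealTimeSimulator code:

import queue
import threading
import time

command_queue = queue.Queue()      # thread-safe channel from the user to the simulation

def input_thread():
    """Read user commands without blocking the simulation loop."""
    while True:
        try:
            cmd = input("command (pause/quit): ").strip()
        except EOFError:
            return
        command_queue.put(cmd)
        if cmd == "quit":
            return

def simulation_loop(n_steps=1000):
    paused = False
    for step in range(n_steps):
        try:
            cmd = command_queue.get_nowait()   # non-blocking check for user input
            if cmd == "pause":
                paused = not paused
            elif cmd == "quit":
                return
        except queue.Empty:
            pass
        if not paused:
            time.sleep(0.001)                  # stand-in for one network update step

threading.Thread(target=input_thread, daemon=True).start()
simulation_loop()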

 NETWORK ANALYZER CLASS IMPLEMENTATION 

The NetworkAnalyzer class provides comprehensive tools for understanding network structure and dynamics through sophisticated analysis algorithms. These tools are essential for extracting meaningful insights from complex neural network simulations.

Connectivity pattern analysis examines the topological properties of network connections to identify organizational principles and computational motifs. The analysis computes connectivity density, identifies hub neurons with unusually high connectivity, and characterizes the distribution of connection strengths. Layer-wise connectivity analysis reveals how information flows between different processing stages in the network. By examining connection patterns between each pair of layers, this analysis can identify the dominant information processing pathways and potential bottlenecks.

Weight distribution analysis characterizes the statistical properties of synaptic strengths throughout the network. This information reveals whether learning has created specialized pathways with very strong connections or maintained more distributed processing with moderate connection strengths. Hub neuron identification finds neurons that are highly connected to other neurons, either as sources of output connections or targets of input connections. These hub neurons often play crucial roles in network dynamics and may be particularly important for network function.

Spike pattern analysis examines the temporal structure of neural activity to identify important dynamical features like synchronization, oscillations, and burst patterns. These temporal patterns are crucial for understanding how the network processes information and generates outputs. Inter-spike interval analysis characterizes the regularity of neural firing patterns by examining the distribution of times between consecutive spikes. Regular firing patterns indicate highly driven neurons, while irregular patterns suggest more complex integration of multiple inputs.

Synchronization analysis measures the tendency of different neurons to fire at similar times, indicating coordinated network activity. High synchronization can indicate effective information integration, while excessive synchronization may indicate pathological activity patterns. Burst detection algorithms identify periods of unusually high network activity that may represent important computational events or pathological seizure-like activity. The analysis examines population firing rates to identify periods that exceed statistical thresholds.

Performance benchmarking capabilities test network computational efficiency across different sizes and parameter settings. These benchmarks help users optimize their simulations and compare different algorithmic approaches.
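
Two of these measures, inter-spike interval regularity and population synchrony, can be sketched in a few lines; the synchrony measure here is a simple Fano-factor-style statistic chosen for illustration, not necessarily the one used in the full source:

import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals: ~1 for Poisson-like, <1 for regular firing."""
    isis = np.diff(np.sort(np.asarray(spike_times)))
    if len(isis) < 2 or isis.mean() == 0:
        return float("nan")
    return float(isis.std() / isis.mean())

def synchrony_index(spike_trains, bin_ms=5.0, duration_ms=1000.0):
    """Crude population synchrony: variance of the binned population spike count
    divided by its mean (higher values indicate more synchronized firing)."""
    bins = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    counts = sum(np.histogram(train, bins=bins)[0] for train in spike_trains)
    return float(counts.var() / counts.mean()) if counts.mean() > 0 else 0.0

regular = [np.arange(0, 1000, 50) + i for i in range(10)]               # ten nearly synchronous trains
random_trains = [np.sort(np.random.uniform(0, 1000, 20)) for _ in range(10)]
print(isi_cv(regular[0]), isi_cv(random_trains[0]))
print(synchrony_index(regular), synchrony_index(random_trains))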

 DEMONSTRATION FUNCTIONS IMPLEMENTATION 

 The demonstration functions showcase the capabilities of the brain-inspired neural network through carefully designed experimental protocols that highlight different aspects of biological neural computation. The basic demonstration creates a moderate-sized network and runs several standard tests including Poisson input stimulation, periodic pattern stimulation, and learning demonstrations. Each test is designed to highlight specific capabilities like temporal integration, pattern recognition, or synaptic plasticity. Poisson input stimulation provides realistic random input patterns that simulate the irregular firing typical of biological sensory neurons. This stimulation tests the network's ability to integrate noisy inputs and generate stable output patterns despite input variability. Periodic pattern stimulation tests the network's ability to entrain to regular input rhythms and develop synchronized responses. This capability is important for processing rhythmic sensory inputs like speech or motor patterns. Learning demonstrations enable synaptic plasticity and apply structured input patterns to show how the network can adapt its connectivity to better process repeated patterns. These demonstrations reveal how local learning rules can lead to global improvements in network function. The comprehensive demonstration provides a more thorough exploration of network capabilities including interactive simulation, advanced analysis, and performance benchmarking. This demonstration generates extensive documentation and visualizations that characterize network behavior in detail. Pattern recognition tests evaluate the network's ability to distinguish between different temporal input sequences, demonstrating the sophisticated pattern processing capabilities that emerge from spike-timing dependent plasticity. Real-time interaction demonstrations show how users can manipulate running networks to test hypotheses and explore parameter sensitivity. These interactive capabilities are valuable for educational purposes and research applications. Performance comparisons evaluate computational efficiency relative to traditional neural network approaches, highlighting the potential advantages of event-driven computation for energy-constrained applications. 

 EXECUTION FLOW AND PROGRAM STRUCTURE 

 The main execution flow demonstrates proper usage of all network components through a series of increasingly sophisticated examples. The program structure follows best practices for scientific computing applications with clear separation of concerns and modular design. The initialization phase creates network objects with appropriate parameters and configures simulation settings. Error checking ensures that invalid parameter combinations are detected early and that users receive helpful feedback. The simulation execution phase runs the configured experiments while monitoring progress and collecting comprehensive data. The implementation includes checkpointing capabilities that allow simulations to be resumed if interrupted. Data analysis and visualization phases process simulation results to extract meaningful insights and create publication-quality figures. The analysis tools automatically detect important features in the data and highlight them in the visualizations. File output capabilities save both raw simulation data and processed results in formats suitable for further analysis or sharing with collaborators. The JSON format preserves full precision while remaining human-readable. The modular design enables users to easily customize experiments by modifying individual components without affecting the overall system architecture. This flexibility supports both educational use and advanced research applications. 

 ERROR HANDLING AND ROBUSTNESS 

The implementation includes comprehensive error handling to ensure robust operation under diverse conditions. Parameter validation checks ensure that network configurations are physically reasonable and computationally feasible. Numerical stability mechanisms prevent common problems like division by zero, overflow conditions, and invalid mathematical operations. The integration algorithms include adaptive step size control to maintain accuracy while preventing instability. Memory management features prevent excessive memory usage during long simulations by automatically pruning old data and limiting buffer sizes. This allows the system to run indefinitely without memory leaks. Performance monitoring helps users identify computational bottlenecks and optimize their experimental protocols. The timing information enables informed decisions about parameter trade-offs between accuracy and speed.  

EXTENSIBILITY AND CUSTOMIZATION 

 The object-oriented design enables straightforward extension with new neuron models, plasticity rules, and analysis tools. The modular architecture ensures that new components can be added without affecting existing functionality. Custom neuron models can be implemented by subclassing the SpikingNeuron class and overriding the update method with alternative dynamics equations. This allows researchers to test different biological hypotheses within the same simulation framework. Alternative plasticity rules can be implemented by modifying the STDP update methods or adding new learning mechanisms. The local nature of these learning rules makes it straightforward to implement diverse biological learning phenomena. New analysis tools can be added to the NetworkAnalyzer class to extract additional insights from simulation data. The comprehensive data recording ensures that all necessary information is available for post-hoc analysis. 
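
As a concrete illustration of this extension point, the sketch below subclasses the SpikingNeuron class from Part 4 and swaps in quadratic integrate-and-fire dynamics while reusing the parent's spike-generation and refractory machinery. It is meant only as a template for custom models, not as a tuned or validated neuron model.

class QuadraticIFNeuron(SpikingNeuron):
    """Example extension: quadratic integrate-and-fire membrane dynamics."""

    def update(self, dt: float, current_time: float, external_current: float = 0.0) -> bool:
        # Respect the refractory period exactly as the parent class does
        if self.is_refractory:
            self.refractory_timer -= dt
            if self.refractory_timer <= 0:
                self.is_refractory = False
            else:
                return False

        # Quadratic nonlinearity instead of the linear leak term
        drive = ((self.membrane_potential - self.resting_potential) *
                 (self.membrane_potential - self.threshold))
        total_current = (external_current + self.input_current +
                         drive / self.tau_membrane - self.adaptation_current)
        self.membrane_potential += dt * total_current

        self.adaptation_current *= np.exp(-dt / self.tau_adaptation)
        self.input_current = 0.0

        if self.membrane_potential >= self.threshold:
            return self._generate_spike(current_time)
        return False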

 COMPUTATIONAL OPTIMIZATION 

The implementation includes several optimization techniques that improve computational efficiency without sacrificing biological realism. Vectorized operations using NumPy arrays accelerate mathematical computations across neuron populations. Event-driven computation processes only active neurons and synapses, dramatically reducing computational requirements compared to traditional approaches that update all network components regardless of activity levels. Efficient data structures like deques provide optimal performance for the queue operations that are central to spike-based computation. Memory pre-allocation reduces garbage collection overhead during intensive simulations. Adaptive update scheduling allows different network components to be updated at different frequencies based on their temporal dynamics, improving efficiency while maintaining accuracy. 

VALIDATION AND TESTING 

 The implementation includes built-in validation mechanisms that verify correct operation of all network components. Statistical tests ensure that random number generation produces appropriate distributions and that numerical integration maintains accuracy. Biological realism checks verify that neural firing rates, synaptic weights, and other network properties remain within realistic ranges throughout simulation. These checks help identify parameter settings that lead to unrealistic behavior. Performance regression testing ensures that optimization changes do not inadvertently reduce computational efficiency or introduce bugs. Benchmark results are compared against baseline measurements to detect performance degradation. Unit tests for individual components verify that each class behaves correctly in isolation before integration into the complete network system. These tests are essential for maintaining code quality as the system evolves. This comprehensive implementation provides a solid foundation for research into brain-inspired computation while demonstrating software engineering best practices for scientific computing applications. The combination of biological realism, computational efficiency, and user-friendly interfaces makes it suitable for both educational and research purposes. 
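
The test functions below are illustrative sketches of such unit tests, written in pytest style, assuming the SpikingNeuron and SpikingSynapse classes from Part 4 are importable; they are not part of the original source.

import numpy as np

def test_neuron_spikes_with_strong_input():
    """A SpikingNeuron driven by a strong constant current should spike at least once."""
    np.random.seed(0)                       # Make the stochastic components reproducible
    neuron = SpikingNeuron(neuron_id=0, layer=0)
    spiked = False
    for step in range(200):
        spiked |= neuron.update(dt=1.0, current_time=float(step), external_current=1.0)
    assert spiked
    assert 0.3 <= neuron.threshold <= 0.8   # Homeostatic adjustment keeps the threshold bounded

def test_synapse_weight_stays_in_bounds():
    """STDP updates must never push the weight outside [min_weight, max_weight]."""
    pre = SpikingNeuron(0, 0)
    post = SpikingNeuron(1, 1)
    syn = SpikingSynapse(pre, post, initial_weight=0.5)
    pre.spike_times.extend([10.0, 30.0, 50.0])
    post.spike_times.extend([12.0, 32.0, 52.0])
    syn.update_weight_stdp(current_time=60.0)
    assert syn.min_weight <= syn.weight <= syn.max_weight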





PART 3: MATHEMATICAL FORMULATIONS FOR BRAIN-INSPIRED SPIKING NEURAL NETWORKS

NEURON DYNAMICS EQUATIONS


The fundamental equation governing spiking neuron dynamics is the leaky integrate-and-fire differential equation. This equation describes how the membrane potential V(t) evolves over time based on input currents and passive membrane properties.

The basic leaky integrate-and-fire equation is:

τ_m * dV/dt = -(V(t) - V_rest) + R_m * I_total(t)

Where τ_m represents the membrane time constant that determines how quickly the neuron integrates inputs, V(t) is the membrane potential at time t, V_rest is the resting potential toward which the membrane decays, R_m is the membrane resistance, and I_total(t) is the total input current.

The membrane time constant τ_m is related to the membrane capacitance C_m and resistance R_m through the relationship:

τ_m = R_m * C_m

In the implementation, the total input current I_total(t) includes multiple components:

I_total(t) = I_syn(t) + I_ext(t) + I_noise(t) + I_bg(t) - I_adapt(t)

Where I_syn(t) represents synaptic input currents from other neurons, I_ext(t) is external stimulation current, I_noise(t) is background noise current, I_bg(t) is spontaneous background activity, and I_adapt(t) is the adaptation current that implements spike frequency adaptation.

SPIKE GENERATION AND RESET MECHANICS


Spike generation occurs when the membrane potential crosses a firing threshold. The spike condition is:

if V(t) ≥ θ(t) then spike = True

Where θ(t) is the firing threshold, which can be dynamic and change based on the neuron's recent activity history.

Upon spike generation, the membrane potential undergoes a reset process:

V(t+dt) = V_reset

Where V_reset is the reset potential, typically more hyperpolarized than the resting potential.

The neuron then enters a refractory period during which it cannot fire again. The refractory condition is:

if (t - t_last_spike) < τ_ref then V(t) = V_reset

Where t_last_spike is the time of the most recent spike and τ_ref is the refractory period duration.

SPIKE FREQUENCY ADAPTATION DYNAMICS


Spike frequency adaptation is implemented through an adaptation current that builds up with each spike and decays exponentially. The adaptation current evolution follows:

dI_adapt/dt = -I_adapt(t)/τ_adapt + g_adapt * Σ δ(t - t_spike)

Where τ_adapt is the adaptation time constant, g_adapt is the adaptation conductance strength, and the sum is over all spike times t_spike. The delta function δ(t - t_spike) represents instantaneous increases in adaptation current at each spike time.

In discrete time implementation, this becomes:

I_adapt(t+dt) = I_adapt(t) * exp(-dt/τ_adapt) + g_adapt * spike_event(t)

Where spike_event(t) equals 1 if the neuron spikes at time t and 0 otherwise.
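
Taken together, the membrane equation, the reset and refractory rules, and the discrete adaptation update fit in a few lines. The fragment below is a minimal, self-contained sketch of one forward-Euler update step using the symbols defined above; the parameter values are illustrative placeholders, not the ones used in the Part 4 implementation, and the membrane resistance is folded into the input current.

import numpy as np

def lif_step(V, I_adapt, t_since_spike, I_total, dt=0.1,
             tau_m=20.0, V_rest=-70.0, V_reset=-75.0, theta=-55.0,
             tau_ref=2.0, tau_adapt=100.0, g_adapt=0.5):
    """One forward-Euler step of the adaptive leaky integrate-and-fire neuron."""
    spiked = False
    if t_since_spike >= tau_ref:                      # Only integrate outside the refractory period
        dV = (-(V - V_rest) + I_total - I_adapt) / tau_m
        V = V + dt * dV                               # Membrane equation with R_m folded into I_total
        if V >= theta:                                # Threshold crossing -> spike and reset
            spiked = True
            V = V_reset
            t_since_spike = 0.0
            I_adapt += g_adapt                        # Spike-triggered increase in adaptation current
    I_adapt *= np.exp(-dt / tau_adapt)                # Exponential decay of adaptation between spikes
    t_since_spike += dt
    return V, I_adapt, t_since_spike, spiked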

SYNAPTIC TRANSMISSION EQUATIONS


Synaptic transmission involves the conversion of presynaptic spikes into postsynaptic currents with realistic delays and filtering. The basic synaptic current equation is:

I_syn(t) = Σ_j w_j * Σ_k PSP(t - t_j^k - d_j)

Where w_j is the weight of synapse j, t_j^k is the k-th spike time from presynaptic neuron j, d_j is the synaptic delay, and PSP(t) is the postsynaptic potential waveform.

The postsynaptic potential waveform is typically modeled as an exponential decay:

PSP(t) = (A/τ_syn) * exp(-t/τ_syn) * H(t)

Where A is the amplitude scaling factor, τ_syn is the synaptic time constant, and H(t) is the Heaviside step function ensuring causality.

For more realistic synaptic dynamics, a double exponential can be used:

PSP(t) = A * (exp(-t/τ_decay) - exp(-t/τ_rise)) * H(t) / (τ_decay - τ_rise)

Where τ_rise is the rise time constant and τ_decay is the decay time constant.
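
For reference, both PSP waveforms can be evaluated directly from the definitions above; the sketch below is a small illustration in which the Heaviside factor is implemented as a simple comparison and t is an array of times relative to spike arrival.

import numpy as np

def psp_single_exp(t, A=1.0, tau_syn=5.0):
    """Single-exponential PSP: (A / tau_syn) * exp(-t / tau_syn) for t >= 0, zero otherwise."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    mask = t >= 0
    out[mask] = (A / tau_syn) * np.exp(-t[mask] / tau_syn)
    return out

def psp_double_exp(t, A=1.0, tau_rise=1.0, tau_decay=5.0):
    """Double-exponential PSP with separate rise and decay time constants."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    mask = t >= 0
    out[mask] = A * (np.exp(-t[mask] / tau_decay) - np.exp(-t[mask] / tau_rise)) / (tau_decay - tau_rise)
    return out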

SPIKE-TIMING DEPENDENT PLASTICITY FORMULATIONS


The STDP learning rule modifies synaptic weights based on the relative timing of presynaptic and postsynaptic spikes. For each pair of pre and post spikes, the weight change is:

Δw = η * F(Δt)

Where η is the learning rate and F(Δt) is the STDP function that depends on the time difference Δt = t_post - t_pre.

The classical STDP function is defined as:

F(Δt) = A_+ * exp(-Δt/τ_+) for Δt > 0 (LTP)
F(Δt) = -A_- * exp(Δt/τ_-) for Δt < 0 (LTD)

Where A_+ and A_- are the amplitudes of long-term potentiation and depression, τ_+ and τ_- are the corresponding time constants, typically τ_+ ≈ τ_- ≈ 20 ms.

The total weight change for a synapse over a time window T is:

Δw_total = Σ_i Σ_j η * F(t_post^j - t_pre^i)

Where the sums are over all presynaptic spikes i and postsynaptic spikes j within the learning window.
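
As a concrete illustration, the total pair-based weight change over a learning window can be computed directly from two lists of spike times. The sketch below uses the classical exponential STDP kernel; the amplitudes and the time constants τ_+ = τ_- = 20 ms are illustrative values, not parameters taken from the implementation.

import numpy as np

def stdp_weight_change(pre_spikes, post_spikes, eta=0.01,
                       A_plus=0.1, A_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Total pair-based STDP weight change, Δw = η * Σ F(t_post - t_pre)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:                                  # Pre before post -> potentiation
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:                                # Post before pre -> depression
                dw -= A_minus * np.exp(dt / tau_minus)
    return eta * dw

# Example: a pre spike 5 ms before a post spike produces a small positive Δw
print(stdp_weight_change([10.0], [15.0]))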

WEIGHT DYNAMICS AND BOUNDS


Synaptic weights are constrained within realistic bounds through soft limiting functions. The weight update equation with bounds is:

w_new = w_old + Δw * g(w_old)

Where g(w) is a scaling function that reduces plasticity near the bounds:

g(w) = exp(-(w - w_mid)²/σ²) for soft Gaussian bounds

Or for hard bounds with soft transitions:

g(w) = (w_max - w) * (w - w_min) / ((w_max - w_min)/4)²

Where w_min and w_max are the minimum and maximum allowed weights, and w_mid = (w_max + w_min)/2.

HOMEOSTATIC THRESHOLD ADAPTATION


The firing threshold adapts based on recent activity to maintain target firing rates. The threshold dynamics follow:

dθ/dt = (ρ_actual(t) - ρ_target) / τ_homeostatic

Where ρ_target is the target firing rate, ρ_actual(t) is the current firing rate computed over a sliding window, and τ_homeostatic is the homeostatic time constant. The threshold therefore rises when the neuron fires above its target rate, making it harder to spike, and falls when it fires below it.

The actual firing rate is computed as:

ρ_actual(t) = N_spikes(t, t-T_window) / T_window

Where N_spikes(t, t-T_window) is the number of spikes in the time window [t-T_window, t].
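
A discrete version of this rule is straightforward: count the spikes in the sliding window, compare the resulting rate with the target, and nudge the threshold accordingly. The sketch below is illustrative only (times in milliseconds, rates in Hz); the Part 4 code uses a simpler incremental adjustment inside _generate_spike.

def homeostatic_threshold_update(theta, spike_times, current_time,
                                 rho_target=10.0, tau_homeostatic=1000.0,
                                 window=100.0, dt=1.0):
    """Move the firing threshold toward the value that yields the target firing rate."""
    n_recent = sum(1 for t in spike_times if current_time - t <= window)
    rho_actual = n_recent / (window / 1000.0)            # Spikes per second over the sliding window
    theta += dt * (rho_actual - rho_target) / tau_homeostatic
    return theta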

METAPLASTICITY EQUATIONS


Metaplasticity modifies the learning rate based on recent synaptic activity. The effective learning rate becomes:

η_eff = η_base * M(w, activity_history)

Where M(w, activity_history) is the metaplasticity modulation function. A common form is:

M(w, history) = exp(-w/w_meta) * (1 + α * recent_changes)

Where w_meta is the metaplasticity threshold weight, α is the history dependence strength, and recent_changes quantifies recent weight modifications.

HETEROSYNAPTIC PLASTICITY


Heterosynaptic plasticity scales all synapses based on postsynaptic activity. The scaling rule is:

dw_i/dt = -ε * w_i * (ρ_post - ρ_target)

Where ε is the heterosynaptic learning rate, w_i is the weight of synapse i, ρ_post is the postsynaptic firing rate, and ρ_target is the target rate.

ENERGY CONSUMPTION CALCULATIONS


Energy consumption in spiking neurons is primarily associated with spike generation and maintenance of resting potential. The total energy cost is:

E_total(t) = E_spike * N_spikes(t) + E_leak * t + E_syn * N_transmissions(t)

Where E_spike is the energy cost per spike, N_spikes(t) is the cumulative number of spikes, E_leak is the leak energy cost per unit time, and E_syn is the energy cost per synaptic transmission.

The energy efficiency relative to continuous computation is:

Efficiency = N_total_neurons / N_active_neurons(t)

Where N_total_neurons is the total number of neurons and N_active_neurons(t) is the number of neurons that spiked in time window t.

NETWORK ACTIVITY MEASURES


Population firing rate is computed as:

ρ_pop(t) = (1/N) * Σ_i N_spikes_i(t, t+Δt) / Δt

Where N is the total number of neurons and N_spikes_i(t, t+Δt) is the number of spikes from neuron i in time interval Δt.

Network synchrony is measured using the population variance:

Synchrony(t) = Var[{N_spikes_i(t)}] / Mean[{N_spikes_i(t)}]

Where the variance and mean are computed over all neurons i.
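
Both measures reduce to simple array operations once per-neuron spike counts are available. The sketch below assumes spike_counts holds one entry per neuron for the analysis window of length window_ms.

import numpy as np

def population_rate_and_synchrony(spike_counts, window_ms):
    """Population firing rate (Hz) and variance-to-mean synchrony index over one analysis window."""
    counts = np.asarray(spike_counts, dtype=float)     # One spike count per neuron
    rate_hz = counts.mean() / (window_ms / 1000.0)
    synchrony = counts.var() / counts.mean() if counts.mean() > 0 else 0.0
    return rate_hz, synchrony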

CONNECTIVITY ANALYSIS EQUATIONS


The connectivity density of the network is:

ρ_conn = N_synapses / (N_neurons * (N_neurons - 1))

For directed graphs, or ρ_conn = 2 * N_synapses / (N_neurons * (N_neurons - 1)) for undirected graphs.

The clustering coefficient for neuron i is:

C_i = 2 * E_i / (k_i * (k_i - 1))

Where E_i is the number of connections between neighbors of neuron i, and k_i is the degree of neuron i.

The average path length is computed as:

L = (1/(N*(N-1))) * Σ_i Σ_j d_ij

Where d_ij is the shortest path length between neurons i and j.
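
Given a binary adjacency matrix, the clustering coefficient is a few lines of NumPy; the sketch below treats connections as undirected, which is one common convention, and uses the fact that the diagonal of A³ counts closed walks of length three.

import numpy as np

def clustering_coefficients(adj):
    """Per-node clustering coefficient C_i = 2*E_i / (k_i*(k_i-1)) for a 0/1 adjacency matrix."""
    A = (np.asarray(adj) > 0).astype(int)
    A = np.maximum(A, A.T)                 # Symmetrize: treat connections as undirected
    np.fill_diagonal(A, 0)
    k = A.sum(axis=1)                      # Node degrees
    triangles = np.diagonal(A @ A @ A) / 2.0   # Triangles through each node
    with np.errstate(divide='ignore', invalid='ignore'):
        C = np.where(k > 1, 2.0 * triangles / (k * (k - 1)), 0.0)
    return C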

SPIKE TRAIN ANALYSIS METRICS


The coefficient of variation of inter-spike intervals is:

CV_ISI = σ_ISI / μ_ISI

Where σ_ISI and μ_ISI are the standard deviation and mean of inter-spike intervals.

The Fano factor measures variability in spike counts:

FF = Var[spike_count] / Mean[spike_count]

Computed over multiple time windows of the same duration.

Cross-correlation between spike trains i and j is:

C_ij(τ) = <s_i(t) * s_j(t+τ)> / sqrt(<s_i²> * <s_j²>)

Where s_i(t) is the spike train of neuron i and <> denotes time averaging.
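
The first two statistics follow directly from a list of spike times; the sketch below is a minimal illustration, with the Fano factor computed over consecutive windows of equal duration.

import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of inter-spike intervals; near 1 for Poisson firing, near 0 for regular firing."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if len(isis) < 2 or isis.mean() == 0:
        return 0.0
    return isis.std() / isis.mean()

def fano_factor(spike_times, t_end, window):
    """Variance-to-mean ratio of spike counts in consecutive windows of equal duration."""
    edges = np.arange(0.0, t_end + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var() / counts.mean() if counts.mean() > 0 else 0.0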

BURST DETECTION ALGORITHMS


Burst detection uses threshold-based identification. A burst occurs when:

ρ_pop(t) > μ_ρ + κ * σ_ρ

Where μ_ρ and σ_ρ are the mean and standard deviation of population firing rate, and κ is the threshold parameter (typically κ = 2-3).

Burst duration is measured as:

T_burst = t_end - t_start

Where t_start and t_end are the times when the population rate crosses above and below the threshold.
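
Applied to a population rate trace, this threshold rule becomes a short loop over samples; the sketch below returns (t_start, t_end) pairs in the same time units as the supplied sample times.

import numpy as np

def detect_bursts(pop_rate, times, kappa=2.5):
    """Find intervals where the population rate exceeds mean + kappa * std."""
    pop_rate = np.asarray(pop_rate, dtype=float)
    threshold = pop_rate.mean() + kappa * pop_rate.std()
    above = pop_rate > threshold
    bursts, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = times[i]                       # Rate crosses above threshold
        elif not flag and start is not None:
            bursts.append((start, times[i]))       # Rate falls back below threshold
            start = None
    if start is not None:                          # Burst still ongoing at the end of the trace
        bursts.append((start, times[-1]))
    return bursts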

LEARNING CONVERGENCE METRICS


Weight convergence is measured by the rate of change:

dW/dt = (1/N_syn) * Σ_i |dw_i/dt|

Where the sum is over all N_syn synapses.

Learning stability is quantified by weight variance:

Stability(t) = 1 / (1 + Var[{w_i(t)}])

Where higher values indicate more stable weight distributions.

INFORMATION THEORETIC MEASURES


Mutual information between input and output spike trains:

I(X;Y) = Σ_x Σ_y P(x,y) * log₂(P(x,y) / (P(x)*P(y)))

Where X and Y are discretized spike patterns.

Entropy of spike patterns:

H(X) = -Σ_x P(x) * log₂(P(x))

Transfer entropy from neuron j to neuron i:

TE_j→i = Σ P(x_i^(n+1), x_i^n, x_j^n) * log₂( P(x_i^(n+1) | x_i^n, x_j^n) / P(x_i^(n+1) | x_i^n) )

NUMERICAL INTEGRATION SCHEMES


For discrete time simulation, the Euler method is used:

V(t+dt) = V(t) + dt * dV/dt

Where dV/dt is computed from the differential equation.

For improved accuracy, the Runge-Kutta method can be used:

k₁ = dt * f(t, V)
k₂ = dt * f(t + dt/2, V + k₁/2)
k₃ = dt * f(t + dt/2, V + k₂/2)
k₄ = dt * f(t + dt, V + k₃)

V(t+dt) = V(t) + (k₁ + 2k₂ + 2k₃ + k₄)/6
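
For the membrane equation this scheme is easy to write down explicitly; the sketch below advances V by one RK4 step for a generic right-hand side f(t, V), with illustrative parameter values in the usage example.

def rk4_step(f, t, V, dt):
    """One fourth-order Runge-Kutta step for dV/dt = f(t, V)."""
    k1 = dt * f(t, V)
    k2 = dt * f(t + dt / 2.0, V + k1 / 2.0)
    k3 = dt * f(t + dt / 2.0, V + k2 / 2.0)
    k4 = dt * f(t + dt, V + k3)
    return V + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Example: leaky membrane relaxing toward rest under a constant input current
tau_m, V_rest, I = 20.0, -70.0, 10.0
f = lambda t, V: (-(V - V_rest) + I) / tau_m
V_next = rk4_step(f, t=0.0, V=-70.0, dt=0.1)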

STOCHASTIC COMPONENTS


Gaussian white noise is added to membrane dynamics:

dV/dt = ... + σ_noise * ξ(t)

Where ξ(t) is white noise with <ξ(t)> = 0 and <ξ(t)ξ(t')> = δ(t-t').

Poisson spike trains for background activity:

P(spike in dt) = λ * dt

Where λ is the Poisson rate parameter.
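
Both stochastic components are one-liners in discrete time. The sketch below draws a Bernoulli spike with probability λ·dt and a Gaussian noise increment scaled by sqrt(dt), which is the standard Euler-Maruyama discretization of white noise; the numerical values are illustrative.

import numpy as np

dt = 1.0            # Time step in ms
lam = 0.02          # Poisson rate in spikes per ms (20 Hz)
sigma_noise = 0.5

spike = np.random.random() < lam * dt                               # P(spike in dt) = lambda * dt
noise_increment = sigma_noise * np.sqrt(dt) * np.random.normal()    # sigma * sqrt(dt) * N(0, 1)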

OPTIMIZATION OBJECTIVES


For supervised learning applications, the objective function can be:

L = Σ_t (y_target(t) - y_output(t))²

Where y_target and y_output are target and actual output spike patterns.

For unsupervised learning, information maximization:

max I(input; output) subject to metabolic constraints

PARAMETER ESTIMATION


Maximum likelihood estimation for STDP parameters:

L(A_+, A_-, τ_+, τ_-) = Π_pairs P(Δw_observed | Δt, parameters)

Where the product is over all observed spike pairs and weight changes.

Bayesian parameter inference:

P(θ|data) ∝ P(data|θ) * P(θ)

Where θ represents model parameters and P(θ) is the prior distribution.

PERFORMANCE METRICS


Computational efficiency:

Efficiency = Operations_spiking / Operations_traditional

Memory efficiency:

Memory_ratio = Connections_active / Connections_total

Speed performance:

Speed = Simulation_time / Real_time

STABILITY ANALYSIS


Lyapunov stability for network dynamics:

dE/dt ≤ 0

Where E is a Lyapunov function such as total network energy.

Fixed point analysis:

f(x*) = 0

Where f represents the network dynamics dx/dt = f(x) and x* are the fixed points at which the dynamics are stationary.

LINEAR STABILITY


The Jacobian matrix for small perturbations:

J_ij = ∂f_i/∂x_j|_{x=x*}

Stability requires all eigenvalues of J to have negative real parts.

BIFURCATION ANALYSIS


Critical parameters where dynamics change qualitatively:

det(J - λI) = 0

Where λ are the eigenvalues of the Jacobian; a bifurcation occurs when an eigenvalue's real part crosses zero, which for a real eigenvalue reduces to det(J) = 0.

FREQUENCY DOMAIN ANALYSIS


Power spectral density of spike trains:

S(f) = |FT{spike_train}|²

Where FT denotes Fourier transform.

Coherence between signals:

Coh(f) = |S_xy(f)|² / (S_xx(f) * S_yy(f))

Where S_xy is the cross-power spectral density.

ADAPTIVE ALGORITHMS


Gradient descent for parameter optimization:

θ(t+1) = θ(t) - α * ∇L(θ(t))

Where α is the learning rate and ∇L is the gradient of the loss function.

Momentum-based updates:

v(t+1) = γ * v(t) + α * ∇L(θ(t))
θ(t+1) = θ(t) - v(t+1)

Where γ is the momentum coefficient.

REGULARIZATION TERMS


L1 regularization for sparse connectivity:

L_total = L_data + λ₁ * Σ_i |w_i|

L2 regularization for weight decay:

L_total = L_data + λ₂ * Σ_i w_i²

Entropy regularization for diverse activity:

L_total = L_data + λ_H * H(activity_distribution)

These mathematical formulations provide the complete theoretical foundation for the brain-inspired spiking neural network implementation, capturing both the biological realism and computational requirements necessary for effective neuromorphic computation.





PART 4: FULL PYTHON SOURCE CODE 


import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from collections import deque
import time
import random
from typing import List, Tuple, Optional, Dict, Any
import pickle
import json

class SpikingNeuron:
    """
    Biologically-inspired spiking neuron implementing leaky integrate-and-fire dynamics
    with spike frequency adaptation, refractory periods, and variable thresholds.
    """
    
    def __init__(self, neuron_id: int, layer: int, x: float = 0.0, y: float = 0.0):
        self.neuron_id = neuron_id
        self.layer = layer
        self.x = x
        self.y = y
        
        # Membrane dynamics parameters
        self.membrane_potential = np.random.uniform(-0.1, 0.1)
        self.resting_potential = -0.07  # mV (scaled)
        self.threshold = np.random.uniform(0.45, 0.55)  # Variable threshold
        self.reset_potential = -0.2
        
        # Time constants (in simulation time steps)
        self.tau_membrane = np.random.uniform(15, 25)  # Membrane time constant
        self.tau_adaptation = 100.0  # Adaptation time constant
        self.tau_refractory = 5.0  # Refractory period
        
        # Spike timing and history
        self.last_spike_time = -np.inf
        self.spike_times = deque(maxlen=1000)  # Store recent spike times
        self.is_refractory = False
        self.refractory_timer = 0.0
        
        # Adaptation mechanisms
        self.adaptation_current = 0.0
        self.adaptation_increment = 0.05
        
        # Input current and noise
        self.input_current = 0.0
        self.noise_amplitude = 0.02
        self.background_rate = 0.001  # Background spontaneous activity
        
        # Energy and activity tracking
        self.total_spikes = 0
        self.energy_consumed = 0.0
        self.activity_trace = deque(maxlen=1000)
        
        # Plasticity-related variables
        self.homeostatic_target = 10.0  # Target firing rate (Hz)
        self.homeostatic_gain = 0.001
        
    def update(self, dt: float, current_time: float, external_current: float = 0.0) -> bool:
        """
        Update neuron state for one time step using leaky integrate-and-fire dynamics.
        
        Args:
            dt: Time step size
            current_time: Current simulation time
            external_current: External input current
            
        Returns:
            bool: True if neuron spiked, False otherwise
        """
        # Handle refractory period
        if self.is_refractory:
            self.refractory_timer -= dt
            if self.refractory_timer <= 0:
                self.is_refractory = False
            else:
                return False
        
        # Calculate leak current (towards resting potential)
        leak_current = -(self.membrane_potential - self.resting_potential) / self.tau_membrane
        
        # Add noise (important for biological realism)
        noise_current = np.random.normal(0, self.noise_amplitude)
        
        # Background spontaneous activity
        if np.random.random() < self.background_rate * dt:
            noise_current += 0.1
        
        # Total current
        total_current = (external_current + self.input_current + leak_current + 
                        noise_current - self.adaptation_current)
        
        # Update membrane potential
        self.membrane_potential += dt * total_current
        
        # Update adaptation current (spike frequency adaptation)
        self.adaptation_current *= np.exp(-dt / self.tau_adaptation)
        
        # Check for spike
        if self.membrane_potential >= self.threshold:
            return self._generate_spike(current_time)
        
        # Reset input current (synaptic inputs are transient)
        self.input_current = 0.0
        
        # Update activity trace for homeostatic plasticity
        self.activity_trace.append(1.0 if len(self.spike_times) > 0 and 
                                  current_time - self.spike_times[-1] < 10 else 0.0)
        
        return False
    
    def _generate_spike(self, current_time: float) -> bool:
        """Generate a spike and update neuron state."""
        # Record spike
        self.spike_times.append(current_time)
        self.last_spike_time = current_time
        self.total_spikes += 1
        
        # Reset membrane potential
        self.membrane_potential = self.reset_potential
        
        # Enter refractory period
        self.is_refractory = True
        self.refractory_timer = self.tau_refractory
        
        # Increase adaptation current (reduces future spiking)
        self.adaptation_current += self.adaptation_increment
        
        # Energy consumption
        self.energy_consumed += 1.0
        
        # Homeostatic threshold adjustment
        recent_rate = self._calculate_recent_firing_rate(current_time)
        if recent_rate > self.homeostatic_target:
            self.threshold += self.homeostatic_gain
        elif recent_rate < self.homeostatic_target * 0.5:
            self.threshold -= self.homeostatic_gain
        
        # Keep threshold in reasonable bounds
        self.threshold = np.clip(self.threshold, 0.3, 0.8)
        
        return True
    
    def add_input_current(self, current: float):
        """Add synaptic input current."""
        self.input_current += current
    
    def _calculate_recent_firing_rate(self, current_time: float, window: float = 100.0) -> float:
        """Calculate firing rate over recent time window."""
        recent_spikes = [t for t in self.spike_times if current_time - t <= window]
        return len(recent_spikes) / (window / 1000.0)  # Convert to Hz
    
    def get_spike_times_in_window(self, current_time: float, window: float) -> List[float]:
        """Get spike times within a time window."""
        return [t for t in self.spike_times if current_time - t <= window and current_time - t >= 0]
    
    def reset(self):
        """Reset neuron to initial state."""
        self.membrane_potential = np.random.uniform(-0.1, 0.1)
        self.last_spike_time = -np.inf
        self.spike_times.clear()
        self.is_refractory = False
        self.refractory_timer = 0.0
        self.adaptation_current = 0.0
        self.input_current = 0.0
        self.total_spikes = 0
        self.energy_consumed = 0.0
        self.activity_trace.clear()


class SpikingSynapse:
    """
    Synapse with spike-timing dependent plasticity (STDP) and realistic delays.
    """
    
    def __init__(self, pre_neuron: SpikingNeuron, post_neuron: SpikingNeuron, 
                 initial_weight: Optional[float] = None):
        self.pre_neuron = pre_neuron
        self.post_neuron = post_neuron
        
        # Synaptic weight
        if initial_weight is None:
            self.weight = np.random.uniform(0.1, 0.8)
        else:
            self.weight = initial_weight
            
        self.min_weight = 0.01
        self.max_weight = 1.0
        
        # Synaptic delay (realistic conduction delays)
        self.delay = np.random.uniform(1.0, 5.0)  # 1-5 time steps
        
        # Spike buffer for delayed transmission
        self.spike_buffer = deque()
        
        # STDP parameters
        self.learning_rate = 0.01
        self.stdp_window = 20.0  # Time window for STDP
        self.ltp_decay = 10.0  # Long-term potentiation decay constant
        self.ltd_decay = 10.0  # Long-term depression decay constant
        self.ltp_amplitude = 0.1
        self.ltd_amplitude = 0.12  # Slightly stronger depression (realistic)
        
        # Plasticity modulation
        self.metaplasticity_threshold = 0.5
        self.heterosynaptic_strength = 0.02
        
        # Statistics
        self.total_transmissions = 0
        self.weight_changes = deque(maxlen=1000)
        self.last_update_time = 0
        # Most recent presynaptic spike time already queued for delayed delivery
        self.last_transmitted_spike = -np.inf
        
    def transmit_spike(self, current_time: float):
        """Queue any new presynaptic spike for delayed delivery."""
        # Presynaptic spikes are registered during the neuron update, which runs
        # after this method within each simulation step, so comparing
        # last_spike_time against current_time would miss them.  Instead, buffer
        # every spike that has not been queued yet.
        last_spike = self.pre_neuron.last_spike_time
        if np.isfinite(last_spike) and last_spike > self.last_transmitted_spike:
            self.last_transmitted_spike = last_spike
            delivery_time = last_spike + self.delay
            self.spike_buffer.append(delivery_time)
    
    def process_delayed_spikes(self, current_time: float):
        """Process spikes that have completed their synaptic delay."""
        delivered_spikes = []
        
        # Check for spikes ready for delivery
        while self.spike_buffer and self.spike_buffer[0] <= current_time:
            delivery_time = self.spike_buffer.popleft()
            delivered_spikes.append(delivery_time)
            
            # Deliver synaptic current
            synaptic_current = self.weight * self._calculate_psp_amplitude()
            self.post_neuron.add_input_current(synaptic_current)
            self.total_transmissions += 1
        
        return delivered_spikes
    
    def _calculate_psp_amplitude(self) -> float:
        """Calculate post-synaptic potential amplitude with realistic dynamics."""
        # Realistic PSP with some variability
        base_amplitude = 0.5
        variability = np.random.normal(1.0, 0.1)  # 10% variability
        return base_amplitude * variability
    
    def update_weight_stdp(self, current_time: float):
        """Update synaptic weight using spike-timing dependent plasticity."""
        if current_time - self.last_update_time < 1.0:  # Limit update frequency
            return
            
        self.last_update_time = current_time
        
        # Get recent spike times
        pre_spikes = self.pre_neuron.get_spike_times_in_window(current_time, self.stdp_window)
        post_spikes = self.post_neuron.get_spike_times_in_window(current_time, self.stdp_window)
        
        if not pre_spikes or not post_spikes:
            return
        
        weight_change = 0.0
        
        # Calculate STDP for all spike pairs
        for pre_time in pre_spikes:
            for post_time in post_spikes:
                dt = post_time - pre_time
                
                if abs(dt) < self.stdp_window:
                    if dt > 0:  # Pre before post -> LTP
                        change = self.ltp_amplitude * np.exp(-dt / self.ltp_decay)
                    else:  # Post before pre -> LTD
                        change = -self.ltd_amplitude * np.exp(dt / self.ltd_decay)
                    
                    # Metaplasticity: modify learning based on current weight
                    if self.weight > self.metaplasticity_threshold:
                        change *= 0.5  # Reduce plasticity for strong synapses
                    
                    weight_change += change
        
        # Apply learning rate and update weight
        final_change = self.learning_rate * weight_change
        self.weight += final_change
        
        # Keep the weight inside its allowed bounds
        self.weight = np.clip(self.weight, self.min_weight, self.max_weight)
        
        # Record weight change for analysis
        if abs(final_change) > 1e-6:
            self.weight_changes.append((current_time, final_change))
    
    def apply_heterosynaptic_plasticity(self, network_activity: float):
        """Apply heterosynaptic plasticity based on network activity."""
        if network_activity > 0.8:  # High network activity
            self.weight *= (1.0 - self.heterosynaptic_strength)
        elif network_activity < 0.2:  # Low network activity
            self.weight *= (1.0 + self.heterosynaptic_strength)
        
        self.weight = np.clip(self.weight, self.min_weight, self.max_weight)


class BrainInspiredNetwork:
    """
    Complete brain-inspired spiking neural network with neuromorphic features.
    """
    
    def __init__(self, layer_sizes: List[int], connectivity_probability: float = 0.7,
                 recurrent_probability: float = 0.1):
        self.layer_sizes = layer_sizes
        self.num_layers = len(layer_sizes)
        self.connectivity_probability = connectivity_probability
        self.recurrent_probability = recurrent_probability
        
        # Network components
        self.neurons: List[SpikingNeuron] = []
        self.synapses: List[SpikingSynapse] = []
        self.layer_indices: List[Tuple[int, int]] = []  # Start and end indices for each layer
        
        # Simulation state
        self.current_time = 0.0
        self.dt = 1.0  # Time step
        self.total_spikes = 0
        self.network_activity_history = deque(maxlen=1000)
        
        # Learning parameters
        self.global_learning_rate = 0.01
        self.plasticity_enabled = True
        self.homeostasis_enabled = True
        
        # Statistics
        self.simulation_stats = {
            'total_simulation_time': 0.0,
            'total_spikes': 0,
            'average_firing_rates': [],
            'energy_consumption': 0.0,
            'weight_statistics': {}
        }
        
        self._build_network()
    
    def _build_network(self):
        """Build the complete network architecture."""
        print("Building brain-inspired network...")
        
        # Create neurons
        self._create_neurons()
        
        # Create synaptic connections
        self._create_synapses()
        
        print(f"Network created: {len(self.neurons)} neurons, {len(self.synapses)} synapses")
        print(f"Connectivity: {len(self.synapses) / (len(self.neurons) * len(self.neurons)) * 100:.1f}%")
    
    def _create_neurons(self):
        """Create neurons organized in layers."""
        neuron_id = 0
        
        for layer_idx, layer_size in enumerate(self.layer_sizes):
            layer_start = len(self.neurons)
            
            for i in range(layer_size):
                # Position neurons for visualization
                x = layer_idx * 100
                y = i * (400 / max(layer_size - 1, 1))
                
                neuron = SpikingNeuron(neuron_id, layer_idx, x, y)
                
                # Adjust parameters based on layer type
                if layer_idx == 0:  # Input layer
                    neuron.threshold *= 0.8  # More sensitive
                    neuron.noise_amplitude *= 1.5
                elif layer_idx == len(self.layer_sizes) - 1:  # Output layer
                    neuron.threshold *= 1.2  # Less sensitive
                    neuron.tau_membrane *= 1.5  # Longer integration
                
                self.neurons.append(neuron)
                neuron_id += 1
            
            layer_end = len(self.neurons)
            self.layer_indices.append((layer_start, layer_end))
    
    def _create_synapses(self):
        """Create synaptic connections with brain-like topology."""
        # Feedforward connections
        for layer_idx in range(self.num_layers - 1):
            self._create_layer_connections(layer_idx, layer_idx + 1, 
                                         self.connectivity_probability, feedforward=True)
        
        # Recurrent connections within layers
        for layer_idx in range(1, self.num_layers):  # Skip input layer
            self._create_recurrent_connections(layer_idx, self.recurrent_probability * 0.3)
        
        # Feedback connections (higher to lower layers)
        for layer_idx in range(self.num_layers - 1, 0, -1):
            if layer_idx > 1:  # Skip direct output to input feedback
                self._create_layer_connections(layer_idx, layer_idx - 1, 
                                             self.recurrent_probability, feedforward=False)
        
        # Lateral inhibitory connections
        self._create_inhibitory_connections()
    
    def _create_layer_connections(self, pre_layer: int, post_layer: int, 
                                probability: float, feedforward: bool = True):
        """Create connections between two layers."""
        pre_start, pre_end = self.layer_indices[pre_layer]
        post_start, post_end = self.layer_indices[post_layer]
        
        for pre_idx in range(pre_start, pre_end):
            for post_idx in range(post_start, post_end):
                if np.random.random() < probability:
                    weight_scale = 1.0 if feedforward else 0.3  # Weaker feedback
                    initial_weight = np.random.uniform(0.1, 0.6) * weight_scale
                    
                    synapse = SpikingSynapse(self.neurons[pre_idx], 
                                           self.neurons[post_idx], initial_weight)
                    self.synapses.append(synapse)
    
    def _create_recurrent_connections(self, layer_idx: int, probability: float):
        """Create recurrent connections within a layer."""
        layer_start, layer_end = self.layer_indices[layer_idx]
        
        for i in range(layer_start, layer_end):
            for j in range(layer_start, layer_end):
                if i != j and np.random.random() < probability:
                    initial_weight = np.random.uniform(0.05, 0.2)  # Weak recurrent weights
                    synapse = SpikingSynapse(self.neurons[i], self.neurons[j], initial_weight)
                    self.synapses.append(synapse)
    
    def _create_inhibitory_connections(self):
        """Create lateral inhibitory connections (simplified)."""
        for layer_idx in range(1, self.num_layers):
            layer_start, layer_end = self.layer_indices[layer_idx]
            layer_size = layer_end - layer_start
            
            # Create inhibitory interneurons (simplified as negative weights)
            if layer_size > 2:
                for i in range(layer_start, layer_end):
                    # Each neuron inhibits a few random neighbors
                    num_inhibitory = min(3, layer_size - 1)
                    targets = np.random.choice(range(layer_start, layer_end), 
                                             num_inhibitory, replace=False)
                    targets = [t for t in targets if t != i]
                    
                    for target in targets:
                        # Inhibitory synapse (negative weight)
                        initial_weight = -np.random.uniform(0.1, 0.3)
                        synapse = SpikingSynapse(self.neurons[i], self.neurons[target], 
                                               initial_weight)
                        synapse.min_weight = -0.5
                        synapse.max_weight = 0.0
                        self.synapses.append(synapse)
    
    def simulate_step(self, input_currents: Optional[np.ndarray] = None) -> Dict[str, Any]:
        """Simulate one time step of the network."""
        step_spikes = 0
        step_stats = {}
        
        # Apply input currents to input layer
        if input_currents is not None:
            input_start, input_end = self.layer_indices[0]
            for i, current in enumerate(input_currents):
                if i < (input_end - input_start):
                    self.neurons[input_start + i].add_input_current(current)
        
        # Process synaptic transmission first
        for synapse in self.synapses:
            synapse.transmit_spike(self.current_time)
            synapse.process_delayed_spikes(self.current_time)
        
        # Update all neurons
        spiking_neurons = []
        for neuron in self.neurons:
            if neuron.update(self.dt, self.current_time):
                step_spikes += 1
                spiking_neurons.append(neuron.neuron_id)
        
        # Update synaptic weights with STDP
        if self.plasticity_enabled:
            for synapse in self.synapses:
                synapse.update_weight_stdp(self.current_time)
        
        # Calculate network activity
        network_activity = step_spikes / len(self.neurons)
        self.network_activity_history.append(network_activity)
        
        # Apply heterosynaptic plasticity
        if len(self.network_activity_history) > 10:
            avg_activity = np.mean(list(self.network_activity_history)[-10:])
            for synapse in self.synapses:
                synapse.apply_heterosynaptic_plasticity(avg_activity)
        
        # Update statistics
        self.total_spikes += step_spikes
        self.current_time += self.dt
        
        step_stats = {
            'time': self.current_time,
            'spikes': step_spikes,
            'spiking_neurons': spiking_neurons,
            'network_activity': network_activity,
            'total_energy': sum(n.energy_consumed for n in self.neurons)
        }
        
        return step_stats
    
    def simulate(self, duration: float, input_pattern: Optional[callable] = None,
                record_interval: int = 10) -> Dict[str, Any]:
        """
        Run complete simulation for specified duration.
        
        Args:
            duration: Simulation time
            input_pattern: Function that returns input currents given time
            record_interval: How often to record detailed statistics
            
        Returns:
            Dictionary containing simulation results
        """
        print(f"Starting simulation for {duration} time steps...")
        
        simulation_data = {
            'time_points': [],
            'spike_counts': [],
            'network_activity': [],
            'energy_consumption': [],
            'firing_rates_by_layer': {i: [] for i in range(self.num_layers)},
            'weight_evolution': [],
            'neuron_spike_times': {i: [] for i in range(len(self.neurons))}
        }
        
        start_time = time.time()
        steps = int(duration / self.dt)
        
        for step in range(steps):
            # Generate input pattern
            input_currents = None
            if input_pattern:
                input_currents = input_pattern(self.current_time)
            
            # Simulate one step
            step_stats = self.simulate_step(input_currents)
            
            # Record spike times for detailed analysis
            for neuron_id in step_stats['spiking_neurons']:
                simulation_data['neuron_spike_times'][neuron_id].append(self.current_time)
            
            # Record statistics at regular intervals
            if step % record_interval == 0:
                simulation_data['time_points'].append(self.current_time)
                simulation_data['spike_counts'].append(step_stats['spikes'])
                simulation_data['network_activity'].append(step_stats['network_activity'])
                simulation_data['energy_consumption'].append(step_stats['total_energy'])
                
                # Calculate firing rates by layer
                for layer_idx in range(self.num_layers):
                    layer_start, layer_end = self.layer_indices[layer_idx]
                    layer_spikes = sum(1 for nid in step_stats['spiking_neurons'] 
                                     if layer_start <= nid < layer_end)
                    layer_size = layer_end - layer_start
                    firing_rate = (layer_spikes / layer_size) * (1000.0 / self.dt)  # Hz
                    simulation_data['firing_rates_by_layer'][layer_idx].append(firing_rate)
                
                # Sample weight evolution
                if len(self.synapses) > 0:
                    weights = [s.weight for s in self.synapses[:100]]  # Sample first 100
                    simulation_data['weight_evolution'].append({
                        'mean': np.mean(weights),
                        'std': np.std(weights),
                        'min': np.min(weights),
                        'max': np.max(weights)
                    })
            
            # Progress reporting
            if step % max(1, steps // 10) == 0:
                progress = (step / steps) * 100
                print(f"Simulation progress: {progress:.1f}%")
        
        elapsed_time = time.time() - start_time
        print(f"Simulation completed in {elapsed_time:.2f} seconds")
        
        # Final statistics
        simulation_data['summary'] = self._calculate_summary_statistics(simulation_data)
        
        return simulation_data
    
    def _calculate_summary_statistics(self, simulation_data: Dict) -> Dict:
        """Calculate summary statistics from simulation data."""
        summary = {}
        
        if simulation_data['time_points']:
            summary['simulation_duration'] = max(simulation_data['time_points'])
            summary['average_network_activity'] = np.mean(simulation_data['network_activity'])
            summary['total_energy'] = max(simulation_data['energy_consumption'])
            
            # Firing rate statistics
            summary['firing_rates_by_layer'] = {}
            for layer_idx in range(self.num_layers):
                rates = simulation_data['firing_rates_by_layer'][layer_idx]
                if rates:
                    summary['firing_rates_by_layer'][layer_idx] = {
                        'mean': np.mean(rates),
                        'std': np.std(rates),
                        'max': np.max(rates)
                    }
            
            # Weight statistics
            if simulation_data['weight_evolution']:
                final_weights = simulation_data['weight_evolution'][-1]
                summary['final_weight_stats'] = final_weights
            
            # Spike statistics
            total_spikes_per_neuron = []
            for neuron_id, spike_times in simulation_data['neuron_spike_times'].items():
                total_spikes_per_neuron.append(len(spike_times))
            
            if total_spikes_per_neuron:
                summary['spike_statistics'] = {
                    'mean_spikes_per_neuron': np.mean(total_spikes_per_neuron),
                    'std_spikes_per_neuron': np.std(total_spikes_per_neuron),
                    'total_network_spikes': sum(total_spikes_per_neuron)
                }
        
        return summary
    
    def create_input_pattern_poisson(self, rates: List[float]) -> callable:
        """Create Poisson input pattern for input layer."""
        def input_pattern(t):
            input_start, input_end = self.layer_indices[0]
            input_size = input_end - input_start
            currents = np.zeros(input_size)
            
            for i in range(min(len(rates), input_size)):
                # Poisson process
                if np.random.random() < rates[i] * self.dt / 1000.0:
                    currents[i] = np.random.uniform(0.5, 1.5)
            
            return currents
        
        return input_pattern
    
    def create_input_pattern_periodic(self, frequencies: List[float], 
                                    amplitudes: List[float]) -> callable:
        """Create periodic input pattern."""
        def input_pattern(t):
            input_start, input_end = self.layer_indices[0]
            input_size = input_end - input_start
            currents = np.zeros(input_size)
            
            for i in range(min(len(frequencies), input_size)):
                phase = 2 * np.pi * frequencies[i] * t / 1000.0
                amplitude = amplitudes[i] if i < len(amplitudes) else amplitudes[0]
                currents[i] = amplitude * (np.sin(phase) + 1) / 2
            
            return currents
        
        return input_pattern
    
    def visualize_network(self, simulation_data: Optional[Dict] = None, 
                         save_path: Optional[str] = None):
        """Create comprehensive visualization of the network."""
        fig = plt.figure(figsize=(16, 12))
        
        if simulation_data:
            # Create subplot layout
            gs = fig.add_gridspec(3, 3, hspace=0.3, wspace=0.3)
            
            # Network structure
            ax1 = fig.add_subplot(gs[0, 0])
            self._plot_network_structure(ax1)
            
            # Spike raster plot
            ax2 = fig.add_subplot(gs[0, 1:])
            self._plot_spike_raster(ax2, simulation_data)
            
            # Network activity over time
            ax3 = fig.add_subplot(gs[1, 0])
            self._plot_network_activity(ax3, simulation_data)
            
            # Firing rates by layer
            ax4 = fig.add_subplot(gs[1, 1])
            self._plot_firing_rates(ax4, simulation_data)
            
            # Weight evolution
            ax5 = fig.add_subplot(gs[1, 2])
            self._plot_weight_evolution(ax5, simulation_data)
            
            # Energy consumption
            ax6 = fig.add_subplot(gs[2, 0])
            self._plot_energy_consumption(ax6, simulation_data)
            
            # Weight distribution
            ax7 = fig.add_subplot(gs[2, 1])
            self._plot_weight_distribution(ax7)
            
            # Summary statistics
            ax8 = fig.add_subplot(gs[2, 2])
            self._plot_summary_stats(ax8, simulation_data)
            
        else:
            # Just plot network structure
            ax = fig.add_subplot(111)
            self._plot_network_structure(ax)
        
        plt.suptitle('Brain-Inspired Spiking Neural Network Analysis', fontsize=16)
        
        if save_path:
            plt.savefig(save_path, dpi=300, bbox_inches='tight')
        
        plt.show()
    
    def _plot_network_structure(self, ax):
        """Plot network structure."""
        # Plot neurons
        for layer_idx in range(self.num_layers):
            layer_start, layer_end = self.layer_indices[layer_idx]
            layer_neurons = self.neurons[layer_start:layer_end]
            
            x_coords = [n.x for n in layer_neurons]
            y_coords = [n.y for n in layer_neurons]
            
            colors = ['red', 'green', 'blue', 'orange', 'purple']
            color = colors[layer_idx % len(colors)]
            
            ax.scatter(x_coords, y_coords, c=color, s=50, alpha=0.7, 
                      label=f'Layer {layer_idx}')
        
        # Plot connections (sample)
        sample_synapses = self.synapses[::max(1, len(self.synapses) // 100)]
        for synapse in sample_synapses:
            x1, y1 = synapse.pre_neuron.x, synapse.pre_neuron.y
            x2, y2 = synapse.post_neuron.x, synapse.post_neuron.y
            
            alpha = min(0.5, abs(synapse.weight))
            color = 'blue' if synapse.weight > 0 else 'red'
            
            ax.plot([x1, x2], [y1, y2], color=color, alpha=alpha, linewidth=0.5)
        
        ax.set_title('Network Structure')
        ax.legend()
        ax.set_aspect('equal')
    
    def _plot_spike_raster(self, ax, simulation_data):
        """Plot spike raster."""
        for neuron_id, spike_times in simulation_data['neuron_spike_times'].items():
            if spike_times:
                y_coords = [neuron_id] * len(spike_times)
                ax.scatter(spike_times, y_coords, s=1, alpha=0.7)
        
        ax.set_xlabel('Time')
        ax.set_ylabel('Neuron ID')
        ax.set_title('Spike Raster Plot')
    
    def _plot_network_activity(self, ax, simulation_data):
        """Plot network activity over time."""
        ax.plot(simulation_data['time_points'], simulation_data['network_activity'])
        ax.set_xlabel('Time')
        ax.set_ylabel('Network Activity')
        ax.set_title('Network Activity')
    
    def _plot_firing_rates(self, ax, simulation_data):
        """Plot firing rates by layer."""
        for layer_idx in range(self.num_layers):
            rates = simulation_data['firing_rates_by_layer'][layer_idx]
            if rates:
                ax.plot(simulation_data['time_points'], rates, 
                       label=f'Layer {layer_idx}')
        
        ax.set_xlabel('Time')
        ax.set_ylabel('Firing Rate (Hz)')
        ax.set_title('Firing Rates by Layer')
        ax.legend()
    
    def _plot_weight_evolution(self, ax, simulation_data):
        """Plot weight evolution."""
        if simulation_data['weight_evolution']:
            times = simulation_data['time_points']
            means = [w['mean'] for w in simulation_data['weight_evolution']]
            stds = [w['std'] for w in simulation_data['weight_evolution']]
            
            ax.plot(times, means, label='Mean')
            ax.fill_between(times, 
                           np.array(means) - np.array(stds),
                           np.array(means) + np.array(stds),
                           alpha=0.3)
        
        ax.set_xlabel('Time')
        ax.set_ylabel('Weight')
        ax.set_title('Weight Evolution')
        ax.legend()
    
    def _plot_energy_consumption(self, ax, simulation_data):
        """Plot energy consumption."""
        ax.plot(simulation_data['time_points'], simulation_data['energy_consumption'])
        ax.set_xlabel('Time')
        ax.set_ylabel('Total Energy')
        ax.set_title('Energy Consumption')
    
    def _plot_weight_distribution(self, ax):
        """Plot current weight distribution."""
        weights = [s.weight for s in self.synapses]
        ax.hist(weights, bins=50, alpha=0.7)
        ax.set_xlabel('Weight')
        ax.set_ylabel('Count')
        ax.set_title('Weight Distribution')
    
    def _plot_summary_stats(self, ax, simulation_data):
        """Plot summary statistics."""
        summary = simulation_data.get('summary', {})
        
        stats_text = []
        if 'average_network_activity' in summary:
            stats_text.append(f"Avg Activity: {summary['average_network_activity']:.3f}")
        if 'total_energy' in summary:
            stats_text.append(f"Total Energy: {summary['total_energy']:.0f}")
        if 'spike_statistics' in summary:
            spike_stats = summary['spike_statistics']
            stats_text.append(f"Total Spikes: {spike_stats['total_network_spikes']}")
            stats_text.append(f"Avg Spikes/Neuron: {spike_stats['mean_spikes_per_neuron']:.1f}")
        
        ax.text(0.1, 0.9, '\n'.join(stats_text), transform=ax.transAxes, 
               fontsize=10, verticalalignment='top',
               bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.8))
        ax.set_title('Summary Statistics')
        ax.axis('off')
    
    def save_network(self, filepath: str):
        """Save network state to file."""
        network_data = {
            'layer_sizes': self.layer_sizes,
            'connectivity_probability': self.connectivity_probability,
            'recurrent_probability': self.recurrent_probability,
            'current_time': self.current_time,
            'neuron_states': [],
            'synapse_weights': []
        }
        
        # Save neuron states
        for neuron in self.neurons:
            neuron_state = {
                'neuron_id': neuron.neuron_id,
                'layer': neuron.layer,
                'membrane_potential': neuron.membrane_potential,
                'threshold': neuron.threshold,
                'total_spikes': neuron.total_spikes,
                'energy_consumed': neuron.energy_consumed
            }
            network_data['neuron_states'].append(neuron_state)
        
        # Save synapse weights
        for i, synapse in enumerate(self.synapses):
            synapse_data = {
                'pre_neuron_id': synapse.pre_neuron.neuron_id,
                'post_neuron_id': synapse.post_neuron.neuron_id,
                'weight': synapse.weight,
                'delay': synapse.delay
            }
            network_data['synapse_weights'].append(synapse_data)
        
        with open(filepath, 'w') as f:
            json.dump(network_data, f, indent=2)
        
        print(f"Network saved to {filepath}")
    
    def reset_network(self):
        """Reset network to initial state."""
        self.current_time = 0.0
        self.total_spikes = 0
        self.network_activity_history.clear()
        
        for neuron in self.neurons:
            neuron.reset()
        
        for synapse in self.synapses:
            synapse.spike_buffer.clear()
            synapse.weight_changes.clear()


def demonstration_experiment():
    """
    Comprehensive demonstration of the brain-inspired neural network.
    """
    print("=== BRAIN-INSPIRED SPIKING NEURAL NETWORK DEMONSTRATION ===")
    print()
    
    # Create network
    layer_sizes = [8, 12, 10, 6]  # Input, Hidden1, Hidden2, Output
    network = BrainInspiredNetwork(layer_sizes, connectivity_probability=0.6)
    
    print("Network Architecture:")
    for i, size in enumerate(layer_sizes):
        layer_type = ['Input', 'Hidden', 'Hidden', 'Output'][min(i, 3)]
        print(f"  Layer {i} ({layer_type}): {size} neurons")
    print()
    
    # Experiment 1: Poisson input
    print("EXPERIMENT 1: Poisson Input Stimulation")
    print("Applying random Poisson inputs to demonstrate spiking behavior...")
    
    poisson_rates = [50, 30, 70, 40, 60, 35, 45, 55]  # Hz for each input neuron
    input_pattern = network.create_input_pattern_poisson(poisson_rates)
    
    results1 = network.simulate(duration=1000, input_pattern=input_pattern, record_interval=5)
    
    print("Results:")
    summary = results1['summary']
    print(f"  Total spikes: {summary['spike_statistics']['total_network_spikes']}")
    print(f"  Average network activity: {summary['average_network_activity']:.4f}")
    print(f"  Energy consumption: {summary['total_energy']:.0f}")
    print()
    
    # Experiment 2: Periodic input with learning
    print("EXPERIMENT 2: Periodic Input with STDP Learning")
    print("Testing learning capabilities with periodic stimulation...")
    
    network.reset_network()
    network.plasticity_enabled = True
    
    frequencies = [5, 8, 3, 12, 7, 15, 4, 10]  # Hz
    amplitudes = [1.0] * len(frequencies)
    periodic_pattern = network.create_input_pattern_periodic(frequencies, amplitudes)
    
    results2 = network.simulate(duration=2000, input_pattern=periodic_pattern, record_interval=10)
    
    print("Results after learning:")
    summary2 = results2['summary']
    print(f"  Total spikes: {summary2['spike_statistics']['total_network_spikes']}")
    print(f"  Final average weight: {summary2['final_weight_stats']['mean']:.4f}")
    print(f"  Weight standard deviation: {summary2['final_weight_stats']['std']:.4f}")
    print()
    
    # Experiment 3: Network response analysis
    print("EXPERIMENT 3: Response Characterization")
    print("Analyzing layer-specific responses and adaptation...")
    
    print("Firing rates by layer (final 500ms):")
    for layer_idx in range(network.num_layers):
        if layer_idx in summary2['firing_rates_by_layer']:
            layer_stats = summary2['firing_rates_by_layer'][layer_idx]
            print(f"  Layer {layer_idx}: {layer_stats['mean']:.2f} ± {layer_stats['std']:.2f} Hz")
    print()
    
    # Visualize results
    print("Generating comprehensive visualizations...")
    network.visualize_network(results2, save_path='brain_network_analysis.png')
    
    # Save network state
    network.save_network('trained_brain_network.json')
    
    print("DEMONSTRATION COMPLETE")
    print()
    print("Key Brain-Like Features Demonstrated:")
    print("1. Spiking dynamics with realistic membrane potentials")
    print("2. STDP learning without global error signals")
    print("3. Sparse, asynchronous activity patterns")
    print("4. Energy-efficient computation (spikes only when needed)")
    print("5. Adaptive thresholds and homeostatic regulation")
    print("6. Recurrent and feedback connections")
    print("7. Synaptic delays and realistic temporal dynamics")
    print()
    print("This represents a significant advance over traditional ANNs in")
    print("biological plausibility and energy efficiency!")


class RealTimeSimulator:
    """
    Enhanced real-time simulation engine with interactive capabilities and performance optimization.
    """
    
    def __init__(self, network: BrainInspiredNetwork):
        self.network = network
        self.is_running = False
        self.simulation_speed = 1.0
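        # Bounded deques act as rolling buffers: old samples are discarded
        # automatically, so memory use stays constant during long runs.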
        self.real_time_data = {
            'spike_trains': deque(maxlen=10000),
            'activity_buffer': deque(maxlen=1000),
            'weight_samples': deque(maxlen=1000),
            'energy_buffer': deque(maxlen=1000)
        }
        
    def run_interactive_simulation(self, duration: float = 5000, 
                                 update_callback: Optional[callable] = None):
        """Run simulation with real-time updates and user interaction."""
        print("Starting interactive simulation...")
        print("Commands during simulation:")
        print("  - Press Enter to pause/resume")
        print("  - Type 'stats' for current statistics")
        print("  - Type 'quit' to stop simulation")
        print()
        
        import threading
        import queue
        
        # Command queue for user interaction
        command_queue = queue.Queue()
        
        def input_worker():
            # Read user commands on a background thread and hand them to the
            # simulation loop via the queue.
            while self.is_running:
                try:
                    cmd = input().strip().lower()
                    command_queue.put(cmd)
                except (EOFError, KeyboardInterrupt):
                    break
        
        # Start the input thread (daemon, so it does not block interpreter exit)
        self.is_running = True
        input_thread = threading.Thread(target=input_worker, daemon=True)
        input_thread.start()
        
        steps = int(duration / self.network.dt)
        start_time = time.time()
        
        for step in range(steps):
            if not self.is_running:
                break
                
            # Process user commands
            try:
                while True:
                    cmd = command_queue.get_nowait()
                    if cmd == 'quit':
                        self.is_running = False
                        break
                    elif cmd == 'stats':
                        self._print_real_time_stats()
                    elif cmd == '':  # Enter key
                        input("Simulation paused. Press Enter to continue...")
            except queue.Empty:
                pass
            
            # Simulate step with dynamic input
            input_current = self._generate_dynamic_input(self.network.current_time)
            step_stats = self.network.simulate_step(input_current)
            
            # Update real-time data
            self._update_real_time_data(step_stats)
            
            # Call user callback if provided
            if update_callback and step % 50 == 0:
                update_callback(step_stats, self.real_time_data)
            
            # Progress update
            if step % max(steps // 20, 1) == 0:
                elapsed = time.time() - start_time
                progress = (step / steps) * 100
                rate = step / elapsed if elapsed > 0 else 0
                print(f"Progress: {progress:.1f}% | Rate: {rate:.0f} steps/sec | "
                      f"Spikes: {step_stats['spikes']} | Activity: {step_stats['network_activity']:.3f}")
        
        self.is_running = False
        print("Interactive simulation completed!")
        return self.real_time_data
    
    def _generate_dynamic_input(self, t: float) -> np.ndarray:
        """Generate dynamic input patterns that change over time."""
        input_start, input_end = self.network.layer_indices[0]
        input_size = input_end - input_start
        
        # Create complex temporal patterns
        patterns = []
        
        # Pattern 1: Oscillatory input (freq1 in cycles/ms; 0.01 -> 10 Hz)
        freq1 = 0.01
        pattern1 = np.sin(2 * np.pi * freq1 * t) * 0.5 + 0.5
        
        # Pattern 2: Burst pattern
        burst_period = 500
        burst_phase = (t % burst_period) / burst_period
        pattern2 = 1.0 if burst_phase < 0.2 else 0.2
        
        # Pattern 3: Random walk
        if not hasattr(self, '_random_walk'):
            self._random_walk = np.random.uniform(0, 1, input_size)
        
        self._random_walk += np.random.normal(0, 0.02, input_size)
        self._random_walk = np.clip(self._random_walk, 0, 1)
        
        # Combine patterns
        currents = np.zeros(input_size)
        for i in range(input_size):
            if i < input_size // 3:
                currents[i] = pattern1 * np.random.uniform(0.5, 1.5)
            elif i < 2 * input_size // 3:
                currents[i] = pattern2 * np.random.uniform(0.3, 1.2)
            else:
                currents[i] = self._random_walk[i] * np.random.uniform(0.4, 1.0)
        
        return currents
    
    def _update_real_time_data(self, step_stats: Dict):
        """Update real-time monitoring data."""
        self.real_time_data['spike_trains'].append({
            'time': step_stats['time'],
            'neurons': step_stats['spiking_neurons']
        })
        
        self.real_time_data['activity_buffer'].append(step_stats['network_activity'])
        self.real_time_data['energy_buffer'].append(step_stats['total_energy'])
        
        # Sample synaptic weights roughly every 100 ms of simulated time
        if len(self.real_time_data['weight_samples']) == 0 or \
           step_stats['time'] % 100 < self.network.dt:
            weights = [s.weight for s in self.network.synapses[::10]]  # Sample every 10th
            self.real_time_data['weight_samples'].append({
                'time': step_stats['time'],
                'weights': weights
            })
    
    def _print_real_time_stats(self):
        """Print current simulation statistics."""
        if self.real_time_data['activity_buffer']:
            recent_activity = list(self.real_time_data['activity_buffer'])[-100:]
            avg_activity = np.mean(recent_activity)
            
            recent_energy = list(self.real_time_data['energy_buffer'])[-10:]
            if len(recent_energy) > 1:
                energy_rate = (recent_energy[-1] - recent_energy[0]) / (len(recent_energy) - 1)
            else:
                energy_rate = 0
            
            print(f"\n--- REAL-TIME STATS (t={self.network.current_time:.0f}) ---")
            print(f"Average activity (last 100 steps): {avg_activity:.4f}")
            print(f"Energy consumption rate: {energy_rate:.2f}/step")
            print(f"Total network spikes: {self.network.total_spikes}")
            
            if self.real_time_data['weight_samples']:
                last_weights = self.real_time_data['weight_samples'][-1]['weights']
                print(f"Current weight stats: mean={np.mean(last_weights):.3f}, "
                      f"std={np.std(last_weights):.3f}")
            print("--- END STATS ---\n")


class NetworkAnalyzer:
    """
    Advanced analysis tools for understanding network behavior and performance.
    """
    
    def __init__(self, network: BrainInspiredNetwork):
        self.network = network
    
    def analyze_connectivity_patterns(self) -> Dict:
        """Analyze network connectivity and identify patterns."""
        analysis = {}
        
        # Basic connectivity statistics
        total_possible = len(self.network.neurons) ** 2
        actual_connections = len(self.network.synapses)
        analysis['connectivity_density'] = actual_connections / total_possible
        
        # Layer-wise connectivity
        layer_connectivity = {}
        for pre_layer in range(self.network.num_layers):
            for post_layer in range(self.network.num_layers):
                connections = 0
                pre_start, pre_end = self.network.layer_indices[pre_layer]
                post_start, post_end = self.network.layer_indices[post_layer]
                
                for synapse in self.network.synapses:
                    pre_id = synapse.pre_neuron.neuron_id
                    post_id = synapse.post_neuron.neuron_id
                    
                    if (pre_start <= pre_id < pre_end and 
                        post_start <= post_id < post_end):
                        connections += 1
                
                layer_connectivity[f"{pre_layer}->{post_layer}"] = connections
        
        analysis['layer_connectivity'] = layer_connectivity
        
        # Weight distribution analysis
        weights = [s.weight for s in self.network.synapses]
        analysis['weight_stats'] = {
            'mean': np.mean(weights),
            'std': np.std(weights),
            'min': np.min(weights),
            'max': np.max(weights),
            'negative_fraction': sum(1 for w in weights if w < 0) / len(weights)
        }
        
        # Identify hub neurons (highly connected)
        in_degrees = {}
        out_degrees = {}
        for neuron in self.network.neurons:
            in_degrees[neuron.neuron_id] = 0
            out_degrees[neuron.neuron_id] = 0
        
        for synapse in self.network.synapses:
            out_degrees[synapse.pre_neuron.neuron_id] += 1
            in_degrees[synapse.post_neuron.neuron_id] += 1
        
        # Find top hubs
        sorted_out = sorted(out_degrees.items(), key=lambda x: x[1], reverse=True)
        sorted_in = sorted(in_degrees.items(), key=lambda x: x[1], reverse=True)
        
        analysis['hub_neurons'] = {
            'top_output_hubs': sorted_out[:5],
            'top_input_hubs': sorted_in[:5],
            'avg_out_degree': np.mean(list(out_degrees.values())),
            'avg_in_degree': np.mean(list(in_degrees.values()))
        }
        
        return analysis
    
    def analyze_spike_patterns(self, simulation_data: Dict) -> Dict:
        """Analyze temporal spike patterns and synchronization."""
        analysis = {}
        
        # Inter-spike interval analysis
        all_isis = []
        for neuron_id, spike_times in simulation_data['neuron_spike_times'].items():
            if len(spike_times) > 1:
                isis = np.diff(spike_times)
                all_isis.extend(isis)
        
        if all_isis:
            analysis['isi_stats'] = {
                'mean': np.mean(all_isis),
                'std': np.std(all_isis),
                'cv': np.std(all_isis) / np.mean(all_isis) if np.mean(all_isis) > 0 else 0
            }
        
        # Synchronization analysis
        time_windows = np.arange(0, max(simulation_data['time_points']), 50)
        synchrony_measures = []
        
        for window_start in time_windows:
            window_end = window_start + 50
            window_spikes = []
            
            for spike_times in simulation_data['neuron_spike_times'].values():
                window_count = sum(1 for t in spike_times 
                                 if window_start <= t < window_end)
                window_spikes.append(window_count)
            
            if sum(window_spikes) > 0:
                # Synchrony proxy: variance-to-mean ratio (Fano factor) of
                # per-neuron spike counts within the window
                synchrony = np.var(window_spikes) / (np.mean(window_spikes) + 1e-6)
                synchrony_measures.append(synchrony)
        
        if synchrony_measures:
            analysis['synchrony_stats'] = {
                'mean_synchrony': np.mean(synchrony_measures),
                'max_synchrony': np.max(synchrony_measures),
                'synchrony_variability': np.std(synchrony_measures)
            }
        
        # Burst detection
        burst_analysis = self._detect_network_bursts(simulation_data)
        analysis['burst_stats'] = burst_analysis
        
        return analysis
    
    def _detect_network_bursts(self, simulation_data: Dict) -> Dict:
        """Detect and analyze network-wide burst events."""
        # Calculate population firing rate in time bins
        bin_size = 10  # ms
        max_time = max(simulation_data['time_points'])
        time_bins = np.arange(0, max_time, bin_size)
        
        population_rates = []
        for i, bin_start in enumerate(time_bins[:-1]):
            bin_end = time_bins[i + 1]
            total_spikes = 0
            
            for spike_times in simulation_data['neuron_spike_times'].values():
                bin_spikes = sum(1 for t in spike_times 
                               if bin_start <= t < bin_end)
                total_spikes += bin_spikes
            
            rate = total_spikes / (len(self.network.neurons) * bin_size / 1000.0)
            population_rates.append(rate)
        
        # Identify bursts as periods where the population rate exceeds
        # mean + 2 standard deviations of the population rate
        if population_rates:
            threshold = np.mean(population_rates) + 2 * np.std(population_rates)
            burst_periods = []
            in_burst = False
            burst_start = 0
            
            for i, rate in enumerate(population_rates):
                if rate > threshold and not in_burst:
                    in_burst = True
                    burst_start = i
                elif rate <= threshold and in_burst:
                    in_burst = False
                    burst_periods.append((burst_start * bin_size, i * bin_size))
            
            return {
                'num_bursts': len(burst_periods),
                'burst_periods': burst_periods,
                'mean_burst_duration': np.mean([end - start for start, end in burst_periods]) if burst_periods else 0,
                'burst_rate_threshold': threshold
            }
        
        return {'num_bursts': 0}
    
    def performance_benchmark(self, network_sizes: Optional[List[int]] = None) -> Dict:
        """Benchmark network performance for different sizes."""
        if network_sizes is None:
            network_sizes = [50, 100, 200, 500]
        
        benchmark_results = {}
        
        for size in network_sizes:
            print(f"Benchmarking network size: {size} neurons...")
            
            # Create test network
            layer_config = [size // 4, size // 2, size // 4]
            test_network = BrainInspiredNetwork(layer_config)
            
            # Simple input pattern
            def simple_input(t):
                input_size = test_network.layer_indices[0][1] - test_network.layer_indices[0][0]
                return np.random.uniform(0, 1, input_size) * 0.5
            
            # Benchmark simulation
            start_time = time.time()
            test_results = test_network.simulate(duration=1000, input_pattern=simple_input, 
                                               record_interval=50)
            elapsed_time = time.time() - start_time
            
            # Calculate performance metrics (each run simulates 1000 ms)
            simulated_steps = int(1000 / test_network.dt)
            steps_per_second = simulated_steps / elapsed_time
            spikes_per_second = test_results['summary']['spike_statistics']['total_network_spikes'] / elapsed_time
            
            benchmark_results[size] = {
                'simulation_time': elapsed_time,
                'steps_per_second': steps_per_second,
                'spikes_per_second': spikes_per_second,
                'neurons': len(test_network.neurons),
                'synapses': len(test_network.synapses),
                'memory_efficiency': len(test_network.synapses) / (len(test_network.neurons) ** 2)  # fraction of possible connections actually used
            }
            
            print(f"  Completed in {elapsed_time:.2f}s ({steps_per_second:.0f} steps/s)")
        
        return benchmark_results
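
# Example usage of NetworkAnalyzer (a minimal sketch; `results` would be the
# dictionary returned by BrainInspiredNetwork.simulate):
#     analyzer = NetworkAnalyzer(network)
#     connectivity = analyzer.analyze_connectivity_patterns()
#     spike_patterns = analyzer.analyze_spike_patterns(results)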


def comprehensive_demonstration():
    """
    Complete demonstration showcasing all features of the brain-inspired network.
    """
    print("=" * 80)
    print("COMPREHENSIVE BRAIN-INSPIRED NEURAL NETWORK DEMONSTRATION")
    print("=" * 80)
    print()
    
    # 1. Basic Network Creation and Analysis
    print("1. NETWORK CREATION AND STRUCTURAL ANALYSIS")
    print("-" * 50)
    
    network = BrainInspiredNetwork([10, 16, 12, 8], connectivity_probability=0.65)
    analyzer = NetworkAnalyzer(network)
    
    connectivity_analysis = analyzer.analyze_connectivity_patterns()
    print(f"Network connectivity density: {connectivity_analysis['connectivity_density']:.3f}")
    print(f"Average weight: {connectivity_analysis['weight_stats']['mean']:.3f}")
    print(f"Hub neurons (top output): {connectivity_analysis['hub_neurons']['top_output_hubs'][:3]}")
    print()
    
    # 2. Real-time Interactive Simulation
    print("2. REAL-TIME SIMULATION WITH DYNAMIC PATTERNS")
    print("-" * 50)
    
    simulator = RealTimeSimulator(network)
    
    def progress_callback(step_stats, real_time_data):
        """Example callback for real-time monitoring.

        Intended for RealTimeSimulator.run_interactive_simulation; the
        shortened demo below calls network.simulate directly instead.
        """
        if len(real_time_data['activity_buffer']) > 0:
            recent_activity = list(real_time_data['activity_buffer'])[-10:]
            if len(recent_activity) > 5:
                trend = "↗" if recent_activity[-1] > recent_activity[-5] else "↘"
                print(f"    Activity trend {trend} Current: {step_stats['network_activity']:.4f}")
    
    # Run a shorter, non-interactive simulation for the demo
    print("Running 2000 ms simulation with dynamic input patterns...")
    
    # Simulate without interactive input for demo
    start_time = time.time()
    results = network.simulate(duration=2000, 
                              input_pattern=simulator._generate_dynamic_input,
                              record_interval=20)
    sim_time = time.time() - start_time
    print(f"Simulation completed in {sim_time:.2f} seconds")
    print()
    
    # 3. Advanced Pattern Analysis
    print("3. SPIKE PATTERN AND SYNCHRONIZATION ANALYSIS")
    print("-" * 50)
    
    spike_analysis = analyzer.analyze_spike_patterns(results)
    
    if 'isi_stats' in spike_analysis:
        print(f"Inter-spike interval CV: {spike_analysis['isi_stats']['cv']:.3f}")
        print(f"(CV < 1.0 indicates regular firing, CV > 1.0 indicates irregular firing)")
    
    if 'synchrony_stats' in spike_analysis:
        print(f"Network synchrony measure: {spike_analysis['synchrony_stats']['mean_synchrony']:.3f}")
    
    if 'burst_stats' in spike_analysis:
        print(f"Detected network bursts: {spike_analysis['burst_stats']['num_bursts']}")
        if spike_analysis['burst_stats']['num_bursts'] > 0:
            print(f"Average burst duration: {spike_analysis['burst_stats']['mean_burst_duration']:.1f} ms")
    print()
    
    # 4. Learning and Adaptation Demonstration
    print("4. LEARNING AND SYNAPTIC PLASTICITY")
    print("-" * 50)
    
    # Reset and train with specific pattern
    network.reset_network()
    network.plasticity_enabled = True
    
    def learning_pattern(t):
        """Structured input for learning demonstration."""
        period = 300
        phase = (t % period) / period
        input_size = network.layer_indices[0][1] - network.layer_indices[0][0]
        
        pattern = np.zeros(input_size)
        if phase < 0.3:  # Pattern A
            pattern[:input_size//2] = 1.2
        elif phase < 0.6:  # Pattern B  
            pattern[input_size//2:] = 1.2
        else:  # Rest period: leave the pattern at zero
            pass
        
        return pattern
    
    print("Training network with structured temporal patterns...")
    learning_results = network.simulate(duration=1500, input_pattern=learning_pattern, 
                                      record_interval=25)
    
    # Compare weights before and after learning
    final_weights = [s.weight for s in network.synapses]
    print(f"Weight adaptation - Final mean: {np.mean(final_weights):.3f}, "
          f"std: {np.std(final_weights):.3f}")
    
    # Check for learning-induced changes
    if 'weight_evolution' in learning_results and len(learning_results['weight_evolution']) > 1:
        initial_mean = learning_results['weight_evolution'][0]['mean']
        final_mean = learning_results['weight_evolution'][-1]['mean']
        change = ((final_mean - initial_mean) / initial_mean) * 100
        print(f"Weight change during learning: {change:+.1f}%")
    print()
    
    # 5. Performance Benchmarking
    print("5. PERFORMANCE BENCHMARKING")
    print("-" * 50)
    
    print("Running performance tests on different network sizes...")
    benchmark_results = analyzer.performance_benchmark([50, 100, 200])
    
    print("Benchmark Results:")
    for size, metrics in benchmark_results.items():
        print(f"  {size} neurons: {metrics['steps_per_second']:.0f} steps/sec, "
              f"{metrics['memory_efficiency']:.3f} connectivity")
    print()
    
    # 6. Comprehensive Visualization
    print("6. GENERATING COMPREHENSIVE VISUALIZATIONS")
    print("-" * 50)
    
    print("Creating detailed network analysis plots...")
    network.visualize_network(learning_results, save_path='comprehensive_analysis.png')
    print("Visualization saved as 'comprehensive_analysis.png'")
    print()
    
    # 7. Network State Persistence
    print("7. NETWORK STATE PERSISTENCE")
    print("-" * 50)
    
    network.save_network('trained_comprehensive_network.json')
    print("Trained network state saved to 'trained_comprehensive_network.json'")
    print()
    
    # Final Summary
    print("=" * 80)
    print("DEMONSTRATION SUMMARY")
    print("=" * 80)
    print()
    print("Successfully demonstrated:")
    print("✓ Brain-inspired spiking neural network architecture")
    print("✓ Real-time simulation with dynamic input patterns") 
    print("✓ STDP learning and synaptic plasticity")
    print("✓ Spike pattern analysis and synchronization detection")
    print("✓ Performance benchmarking and optimization")
    print("✓ Comprehensive visualization and analysis tools")
    print("✓ Network state persistence and reproducibility")
    print()
    print("Key Biological Features Implemented:")
    print("• Discrete spike-based communication")
    print("• Local learning rules (STDP)")
    print("• Realistic temporal dynamics")
    print("• Energy-efficient sparse computation")
    print("• Homeostatic regulation")
    print("• Asynchronous processing")
    print("• Noise-enhanced robustness")
    print()
    print("This implementation provides a complete foundation for")
    print("developing more brain-like artificial intelligence systems!")


if __name__ == "__main__":
    # Run the original demonstration
    demonstration_experiment()
    
    print("\n" + "="*60)
    print("RUNNING COMPREHENSIVE ENHANCED DEMONSTRATION")
    print("="*60)
    
    # Run the comprehensive demonstration
    comprehensive_demonstration()
    
    print("\n" + "="*60)
    print("ALL DEMONSTRATIONS COMPLETED SUCCESSFULLY!")
    print("="*60)
    print()
    print("The brain-inspired neural network is now ready for:")
    print("• Research into biological neural computation")
    print("• Development of energy-efficient AI systems") 
    print("• Neuromorphic computing applications")
    print("• Continuous learning and adaptation studies")
    print("• Temporal pattern recognition tasks")
    print("• Real-time processing applications")
    print()
    print("Files generated:")
    print("- brain_network_analysis.png (basic demonstration)")
    print("- comprehensive_analysis.png (full analysis)")
    print("- trained_brain_network.json (basic trained state)")
    print("- trained_comprehensive_network.json (comprehensive trained state)")
