Wednesday, December 31, 2025

SLIDING CONTEXT WINDOWS IN LARGE LANGUAGE MODELS: A GUIDE TO MANAGING INFINITE TEXT WITH FINITE MEMORY


THE FOUNDATION: UNDERSTANDING LARGE LANGUAGE MODELS

Large Language Models have revolutionized how machines understand and generate human language. At their core, these models are sophisticated pattern recognition systems built on the transformer architecture, which processes text by converting words into numerical representations and analyzing the relationships between them. When you interact with an LLM, whether asking it to write an email, summarize a document, or answer a question, the model examines every word in relation to every other word to understand meaning and context.

The transformer architecture, introduced in the groundbreaking 2017 paper "Attention Is All You Need," relies on a mechanism called self-attention. This mechanism allows the model to weigh the importance of different words when processing each token in the input. Unlike earlier recurrent neural networks that processed text sequentially, transformers can examine all words simultaneously, making them both more powerful and more computationally intensive.
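
To make the mechanism concrete, the following snippet is a minimal, self-contained sketch of scaled dot-product self-attention in PyTorch. It illustrates the idea of queries, keys, and values and the resulting score matrix; it is a toy calculation, not the implementation inside any particular model.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(x, w_q, w_k, w_v):
    """Toy single-head self-attention over a sequence of token vectors x."""
    q = x @ w_q                                    # queries: what each token is looking for
    k = x @ w_k                                    # keys: what each token offers
    v = x @ w_v                                    # values: the content that gets mixed
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5    # (seq_len, seq_len) score matrix
    weights = F.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ v, weights                    # contextualized vectors + weights

# Four tokens ("The", "quick", "brown", "fox") with an embedding dimension of 8
torch.manual_seed(0)
x = torch.randn(4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
output, weights = scaled_dot_product_attention(x, w_q, w_k, w_v)
print(weights.shape)  # torch.Size([4, 4]) -- one score for every pair of tokens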

Consider a simple sentence: "The cat sat on the mat because it was comfortable." To understand what "it" refers to, the model must look back at previous words and determine that "it" likely refers to "the mat" rather than "the cat." This backward-looking capability, this ability to reference earlier parts of the text, is what we call context. The model builds an understanding by maintaining awareness of what came before.

INTRODUCING OUR RUNNING EXAMPLE: THE TECHNICAL DOCUMENT ANALYZER

Throughout this article, we will build a complete, working application that demonstrates each concept we discuss. Our application will analyze long technical documents, specifically software requirements specifications that often span hundreds of pages. These documents contain detailed requirements, design decisions, constraints, and dependencies that must all be understood together to answer questions or identify issues.

Imagine you are a software architect who has just received a 150-page requirements document for a new system. You need to answer questions like "What are all the security requirements?", "Are there any conflicting requirements between the authentication module and the data storage module?", or "Summarize the performance constraints for the API layer." The document is far too long to fit into a single context window, so we will use sliding windows to process it effectively.

We will use the GPT-2 model from the Hugging Face transformers library for our implementation. While GPT-2 is not the most advanced model available, it is freely accessible, well-documented, and perfect for demonstrating these concepts. The techniques we develop will work with any transformer-based model, including more powerful ones like GPT-3, GPT-4, or open-source alternatives like LLaMA or Mistral.
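
As a practical aside, the context length does not need to be memorized per model: every Hugging Face model exposes it through its configuration. The short sketch below, which assumes the transformers library can download the 'gpt2' configuration, shows how to read it programmatically; the class we build next simply stores the value 1024 directly.

from transformers import AutoConfig

# The maximum number of positions is stored on the model config. GPT-2 calls
# this field n_positions; many newer models call it max_position_embeddings.
config = AutoConfig.from_pretrained('gpt2')
context_length = getattr(config, 'n_positions', None) or getattr(config, 'max_position_embeddings', None)
print(f"gpt2 context window: {context_length} tokens")  # expected: 1024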

Let us begin by setting up our basic infrastructure and loading a sample requirements document:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import numpy as np

class TechnicalDocumentAnalyzer:
    """
    A complete application for analyzing long technical documents using
    sliding context windows with GPT-2.
    
    This class will be expanded throughout the article to demonstrate
    each concept we discuss.
    """
    
    def __init__(self, model_name='gpt2'):
        """
        Initialize the analyzer with a specific GPT-2 model variant.
        
        GPT-2 comes in several sizes:
        - 'gpt2' (124M parameters, 1024 token context)
        - 'gpt2-medium' (355M parameters, 1024 token context)
        - 'gpt2-large' (774M parameters, 1024 token context)
        - 'gpt2-xl' (1.5B parameters, 1024 token context)
        
        Args:
            model_name: Which GPT-2 variant to use
        """
        print(f"Loading {model_name} model and tokenizer...")
        self.tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        self.model = GPT2LMHeadModel.from_pretrained(model_name)
        
        # Set padding token (GPT-2 doesn't have one by default)
        self.tokenizer.pad_token = self.tokenizer.eos_token
        
        # Move model to GPU if available
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.model.to(self.device)
        self.model.eval()  # Set to evaluation mode
        
        # Context window size for GPT-2
        self.max_context_length = 1024
        
        print(f"Model loaded successfully on {self.device}")
        print(f"Maximum context length: {self.max_context_length} tokens")
    
    def load_document(self, file_path):
        """
        Load a technical document from a file.
        
        Args:
            file_path: Path to the document file
            
        Returns:
            The document text as a string
        """
        with open(file_path, 'r', encoding='utf-8') as f:
            document_text = f.read()
        
        # Tokenize to see how long it is
        tokens = self.tokenizer.encode(document_text)
        num_tokens = len(tokens)
        
        print(f"Document loaded: {num_tokens} tokens")
        print(f"This exceeds context window by {num_tokens - self.max_context_length} tokens")
        
        return document_text
    
    def generate_response(self, prompt, max_new_tokens=100):
        """
        Generate a response from the model given a prompt.
        
        This is our basic interface to the model that we will use
        throughout the application.
        
        Args:
            prompt: The input text prompt
            max_new_tokens: Maximum number of tokens to generate
            
        Returns:
            The generated text response
        """
        # Encode the prompt
        input_ids = self.tokenizer.encode(prompt, return_tensors='pt')
        input_ids = input_ids.to(self.device)
        
        # Check if the prompt fits, leaving room for the tokens we are about to
        # generate (GPT-2's positional embeddings cap the total length at 1024)
        max_prompt_length = self.max_context_length - max_new_tokens
        if input_ids.shape[1] > max_prompt_length:
            print(f"Warning: Prompt is {input_ids.shape[1]} tokens, truncating to {max_prompt_length}")
            input_ids = input_ids[:, -max_prompt_length:]
        
        # Generate response
        with torch.no_grad():
            output = self.model.generate(
                input_ids,
                max_new_tokens=max_new_tokens,
                do_sample=True,
                top_p=0.95,
                temperature=0.7,
                pad_token_id=self.tokenizer.eos_token_id
            )
        
        # Decode only the newly generated tokens; slicing the decoded string by
        # len(prompt) is unreliable because decoding does not always reproduce
        # the prompt text character-for-character
        new_tokens = output[0][input_ids.shape[1]:]
        response = self.tokenizer.decode(new_tokens, skip_special_tokens=True)
        
        return response.strip()


# Create our sample requirements document for testing
sample_requirements_doc = """
SOFTWARE REQUIREMENTS SPECIFICATION
Project: Enterprise Resource Planning System (ERP-X)
Version: 2.1
Date: January 2024

1. INTRODUCTION

1.1 Purpose
This document specifies the functional and non-functional requirements for the 
Enterprise Resource Planning System (ERP-X). The system will integrate financial 
management, human resources, supply chain, and customer relationship management 
into a unified platform.

1.2 Scope
ERP-X will serve 5000+ concurrent users across 15 international offices. The system 
must support multi-currency transactions, comply with GDPR and SOX regulations, 
and integrate with existing legacy systems.

2. FUNCTIONAL REQUIREMENTS

2.1 Authentication and Authorization Module

REQ-AUTH-001: The system shall implement multi-factor authentication using both 
password and time-based one-time passwords (TOTP).

REQ-AUTH-002: User sessions shall expire after 30 minutes of inactivity.

REQ-AUTH-003: The system shall support role-based access control with at least 
50 distinct roles.

REQ-AUTH-004: All authentication attempts shall be logged with timestamp, IP 
address, and outcome.

2.2 Financial Management Module

REQ-FIN-001: The system shall support transactions in at least 25 different 
currencies with real-time exchange rate updates.

REQ-FIN-002: All financial transactions must be recorded with full audit trails 
including user ID, timestamp, and transaction details.

REQ-FIN-003: The system shall generate financial reports compliant with GAAP 
and IFRS standards.

REQ-FIN-004: Month-end closing processes shall complete within 4 hours for 
datasets up to 10 million transactions.

2.3 Human Resources Module

REQ-HR-001: The system shall maintain employee records including personal 
information, employment history, and performance reviews.

REQ-HR-002: Payroll processing shall support multiple pay schedules (weekly, 
bi-weekly, monthly) and handle tax calculations for 15 countries.

REQ-HR-003: The system shall track employee time and attendance with integration 
to biometric scanners.

2.4 Supply Chain Module

REQ-SCM-001: Inventory tracking shall provide real-time visibility across all 
warehouse locations with accuracy of 99.9%.

REQ-SCM-002: The system shall support automated reordering based on configurable 
minimum stock levels and lead times.

REQ-SCM-003: Purchase order approval workflows shall support multi-level 
authorization based on order value.

3. NON-FUNCTIONAL REQUIREMENTS

3.1 Performance Requirements

REQ-PERF-001: The system shall support 5000 concurrent users with response times 
under 2 seconds for 95% of requests.

REQ-PERF-002: Database queries shall execute in under 500ms for 99% of operations.

REQ-PERF-003: The system shall handle peak loads of 10000 transactions per minute 
during month-end processing.

REQ-PERF-004: API endpoints shall respond within 100ms for simple queries and 
1 second for complex aggregations.

3.2 Security Requirements

REQ-SEC-001: All data transmission shall use TLS 1.3 or higher encryption.

REQ-SEC-002: Passwords shall be hashed using bcrypt with a work factor of at 
least 12.

REQ-SEC-003: The system shall implement SQL injection prevention through 
parameterized queries.

REQ-SEC-004: Sensitive data at rest shall be encrypted using AES-256.

REQ-SEC-005: The system shall undergo quarterly security audits and penetration 
testing.

3.3 Reliability Requirements

REQ-REL-001: The system shall maintain 99.9% uptime during business hours 
(6 AM to 10 PM local time).

REQ-REL-002: Automated backups shall occur every 6 hours with retention of 
30 days.

REQ-REL-003: The system shall support failover to backup servers within 60 
seconds of primary server failure.

3.4 Scalability Requirements

REQ-SCALE-001: The architecture shall support horizontal scaling to accommodate 
50% user growth annually.

REQ-SCALE-002: Database sharding shall be implemented to handle datasets 
exceeding 100TB.

4. INTEGRATION REQUIREMENTS

REQ-INT-001: The system shall integrate with existing SAP financial system 
via REST APIs.

REQ-INT-002: Real-time data synchronization with the legacy HR system shall 
occur every 15 minutes.

REQ-INT-003: The system shall expose GraphQL APIs for third-party integrations.

5. COMPLIANCE REQUIREMENTS

REQ-COMP-001: The system shall comply with GDPR requirements including right 
to erasure and data portability.

REQ-COMP-002: SOX compliance shall be maintained through comprehensive audit 
logging and access controls.

REQ-COMP-003: PCI-DSS compliance is required for any credit card processing 
functionality.

6. CONSTRAINTS AND ASSUMPTIONS

6.1 Technical Constraints

CONST-TECH-001: The system must be deployable on AWS infrastructure.

CONST-TECH-002: The primary database shall be PostgreSQL 14 or higher.

CONST-TECH-003: The frontend shall be built using React 18 with TypeScript.

6.2 Business Constraints

CONST-BUS-001: Phase 1 deployment must be completed within 18 months.

CONST-BUS-002: Total project budget shall not exceed 5 million USD.

CONST-BUS-003: The system must support the existing organizational structure 
without requiring reorganization.

7. POTENTIAL CONFLICTS AND DEPENDENCIES

NOTE: There appears to be a potential conflict between REQ-PERF-001 (2 second 
response time) and REQ-PERF-004 (1 second for complex aggregations). Complex 
aggregations may not always meet the 2 second threshold during peak loads.

NOTE: REQ-AUTH-002 (30 minute session timeout) may conflict with REQ-FIN-004 
(4 hour month-end processing) as users may be logged out during long-running 
financial operations.

NOTE: REQ-SCALE-002 (database sharding) has dependencies on REQ-INT-001 and 
REQ-INT-002 as sharding may complicate integration with legacy systems.
"""

# Initialize the analyzer and load our sample document
analyzer = TechnicalDocumentAnalyzer(model_name='gpt2')

# Save the sample document to a file for demonstration
with open('sample_requirements.txt', 'w', encoding='utf-8') as f:
    f.write(sample_requirements_doc)

# Load the document
document = analyzer.load_document('sample_requirements.txt')

This initialization code sets up our complete working environment. When you run this code, you will see output showing that our sample requirements document contains approximately 1,400 tokens, which exceeds GPT-2's 1,024-token context window by about 400 tokens. This is a perfect scenario for demonstrating sliding windows.

WHAT IS CONTEXT AND WHERE DOES IT LIVE?

Context in an LLM refers to the portion of text that the model can actively "see" and use when making predictions or generating responses. Think of it as the model's working memory, similar to how you might hold several ideas in your mind while reading a complex paragraph. The context is not a single, simple thing but rather a rich tapestry of information distributed across multiple components of the neural network.

The context primarily resides in three interconnected places within the transformer architecture. First, there are the attention matrices, which store the relationships between every token and every other token in the input sequence. These matrices capture which words are relevant to which other words. Second, the hidden states at each layer of the transformer maintain increasingly abstract representations of the input text, with earlier layers capturing surface-level patterns and deeper layers understanding semantic meaning. Third, the key-value caches store precomputed attention information that can be reused during generation, allowing the model to avoid redundant calculations.

Let me illustrate with a simplified representation of how context flows through a transformer layer:

Input Tokens: ["The", "quick", "brown", "fox"]
                    |
                    v
Embedding Layer (converts to vectors)
                    |
                    v
+--------------------------------------------------+
|  Self-Attention Mechanism                        |
|                                                  |
|  Query: "fox" looks at all previous tokens       |
|  Keys: ["The", "quick", "brown", "fox"]          |
|  Values: Weighted combination based on relevance |
|                                                  |
|  Attention scores show "fox" relates strongly    |
|  to "quick" and "brown" (adjectives describing)  |
+--------------------------------------------------+
                    |
                    v
Feed-Forward Network (processes combined info)
                    |
                    v
Output: Contextualized representation of "fox"

The attention mechanism computes a score between every pair of tokens. For a sequence of length N, this creates an N-by-N matrix of attention scores. Each element in this matrix represents how much one token should "pay attention to" another token when building its representation. This is where the computational challenge begins to emerge.
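
We can observe this N-by-N structure directly by asking the model to return its attention weights. The sketch below reuses the analyzer object created earlier and passes output_attentions=True; it simply prints the shape of one layer's attention tensor, which contains one score for every pair of tokens in the input.

# Inspect the attention matrices GPT-2 computes for a short input.
text = "The cat sat on the mat because it was comfortable."
input_ids = analyzer.tokenizer.encode(text, return_tensors='pt').to(analyzer.device)

with torch.no_grad():
    outputs = analyzer.model(input_ids, output_attentions=True)

n = input_ids.shape[1]
print(f"Sequence length N: {n}")
print(f"Layers returning attention maps: {len(outputs.attentions)}")
# Each layer's tensor has shape (batch, num_heads, N, N)
print(f"Shape of one layer's attention tensor: {tuple(outputs.attentions[0].shape)}")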

Let us examine this concretely with our requirements document. We can inspect how the tokenizer breaks down our text and understand the context limitations:

def analyze_document_context(analyzer, document):
    """
    Analyze how the document fits (or doesn't fit) into the model's context.
    
    This function demonstrates the fundamental problem we are solving:
    our document is too large for the model's context window.
    """
    # Tokenize the entire document
    tokens = analyzer.tokenizer.encode(document)
    
    print("=" * 70)
    print("CONTEXT ANALYSIS")
    print("=" * 70)
    print(f"Total document tokens: {len(tokens)}")
    print(f"Model context capacity: {analyzer.max_context_length}")
    print(f"Overflow: {len(tokens) - analyzer.max_context_length} tokens")
    print(f"Percentage that fits: {(analyzer.max_context_length / len(tokens)) * 100:.1f}%")
    print()
    
    # Show a sample of how text is tokenized
    sample_text = "REQ-AUTH-001: The system shall implement multi-factor authentication"
    sample_tokens = analyzer.tokenizer.encode(sample_text)
    sample_decoded = [analyzer.tokenizer.decode([t]) for t in sample_tokens]
    
    print("Sample tokenization:")
    print(f"Text: {sample_text}")
    print(f"Tokens ({len(sample_tokens)}): {sample_decoded}")
    print()
    
    # Calculate how many "chunks" we would need without sliding windows
    num_chunks = (len(tokens) + analyzer.max_context_length - 1) // analyzer.max_context_length
    print(f"Without sliding windows, we would need {num_chunks} separate chunks")
    print("This would lose context at chunk boundaries!")
    print()
    
    return tokens

# Run the analysis
document_tokens = analyze_document_context(analyzer, document)

This analysis reveals the core problem. Our requirements document, while not enormous, cannot fit into GPT-2's context window. If we simply truncated it, we would lose critical information about security requirements, compliance needs, and the important conflict notes at the end. If we split it into non-overlapping chunks, we would lose the ability to understand relationships between requirements in different sections.

THE FUNDAMENTAL PROBLEM: WHY LIMITED CONTEXT MEMORY MATTERS

The limitation on context length is not an arbitrary design choice but a fundamental constraint imposed by the mathematics of the transformer architecture. The self-attention mechanism, which gives transformers their power, has a computational complexity that grows quadratically with the sequence length. In mathematical terms, processing a sequence of N tokens requires O(N squared) operations and memory.

To understand why this matters, consider the difference between processing 1,000 tokens versus 10,000 tokens. The computational cost does not increase by a factor of 10 but by a factor of 100. If processing 1,000 tokens takes one second, processing 10,000 tokens would take approximately 100 seconds, and processing 100,000 tokens would take 10,000 seconds, or nearly three hours. The memory requirements follow a similar pattern, as the model must store attention scores for every token pair.
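
The arithmetic is easy to check. The short calculation below is a back-of-the-envelope sketch that assumes cost grows with the square of the sequence length and is calibrated so that 1,000 tokens takes one second; it reproduces the numbers above.

# Back-of-the-envelope scaling of self-attention cost, assuming cost ~ N^2
baseline_tokens = 1_000
baseline_seconds = 1.0

for n in (1_000, 10_000, 100_000):
    relative_cost = (n / baseline_tokens) ** 2
    seconds = baseline_seconds * relative_cost
    print(f"{n:>7} tokens -> {relative_cost:>6.0f}x cost, ~{seconds:,.0f} s ({seconds / 3600:.1f} h)")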

This quadratic scaling creates a hard ceiling on how much text an LLM can process at once. Early GPT models had context windows of just 1,024 to 2,048 tokens (roughly 750 to 1,500 words). Even modern models with extended contexts of 32,000 or 128,000 tokens face practical limits. When you need to process a 500-page book, a year's worth of email correspondence, or an entire codebase, even these extended windows fall short.

The consequences of hitting the context limit are severe. The model simply cannot see beyond its window. If you ask it to summarize a document that exceeds the context length, it must either truncate the document (losing information) or process it in chunks (losing coherence). Information that falls outside the window becomes invisible, as if it never existed. The model cannot reference it, cannot use it to inform its responses, and cannot maintain consistency with it.

Let us demonstrate this problem concretely with our requirements document:

def demonstrate_context_limitation(analyzer, document):
    """
    Show what happens when we try to ask questions about a document
    that exceeds the context window.
    
    This demonstrates why we need sliding windows.
    """
    print("=" * 70)
    print("DEMONSTRATING CONTEXT LIMITATION PROBLEM")
    print("=" * 70)
    
    # Question that requires information from the end of the document
    question = "What potential conflicts are mentioned in the requirements?"
    
    # Try to answer using just the beginning of the document (what fits)
    tokens = analyzer.tokenizer.encode(document)
    truncated_tokens = tokens[:analyzer.max_context_length - 100]  # Leave room for question
    truncated_text = analyzer.tokenizer.decode(truncated_tokens)
    
    prompt = f"{truncated_text}\n\nQuestion: {question}\nAnswer:"
    
    print(f"Question: {question}")
    print()
    print("Attempting to answer using only the first 924 tokens...")
    print("(The conflict information is in the last section, which is cut off)")
    print()
    
    response = analyzer.generate_response(prompt, max_new_tokens=150)
    print(f"Model's answer: {response}")
    print()
    print("Notice: The model cannot answer correctly because the conflict")
    print("information is beyond the context window!")
    print()

# Demonstrate the problem
demonstrate_context_limitation(analyzer, document)

When you run this code, you will see that the model cannot properly answer questions about conflicts because that information appears near the end of the document, beyond the context window. This is the fundamental problem that sliding windows solve.

INTRODUCING SLIDING CONTEXT WINDOWS: THE CORE CONCEPT

A sliding context window is a technique that allows an LLM to process text sequences longer than its maximum context length by moving a fixed-size window across the input text, similar to how you might read a long scroll by moving a magnifying glass across it. Instead of trying to fit the entire text into memory at once, the model processes overlapping or sequential chunks, maintaining continuity through careful management of what information is retained between windows.

The fundamental idea is straightforward. Imagine you have a text of 20,000 tokens and a model with a 4,096-token context window. Rather than truncating the text to 4,096 tokens and discarding the rest, you process the first 4,096 tokens, then slide the window forward to process tokens 2,048 through 6,144, then tokens 4,096 through 8,192, and so on. Each window overlaps with the previous one, ensuring that no information is lost and that the model maintains some continuity across boundaries.

The visual representation below shows how a sliding window moves across a long text sequence:

Full Text: [==================================================]
           Token 0 ------------------------------------ Token 20000

Window 1:  [==========]
           Token 0 to 4096

Window 2:            [==========]
                     Token 2048 to 6144

Window 3:                      [==========]
                               Token 4096 to 8192

Window 4:                                [==========]
                                         Token 6144 to 10240

And so on...

The overlap between windows serves multiple purposes. It ensures that tokens near the end of one window are also seen near the beginning of the next window, providing context continuity. It allows the model to reconsider tokens with fresh context from subsequent text. It also helps prevent important information from being split awkwardly across window boundaries.
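
The window positions in the diagram follow directly from the window size and the stride. The small sketch below computes the same boundaries for a 20,000-token text with a 4,096-token window and a 2,048-token stride.

# Compute the window boundaries illustrated in the diagram above.
total_tokens = 20_000
window_size = 4_096
stride = 2_048  # 50% overlap

start = 0
index = 1
while start < total_tokens:
    end = min(start + window_size, total_tokens)
    print(f"Window {index}: tokens {start} to {end}")
    if end >= total_tokens:
        break
    start += stride
    index += 1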

IMPLEMENTING BASIC SLIDING WINDOWS: PRACTICAL CODE EXAMPLES

Let us begin with a straightforward implementation that demonstrates the core mechanics of a sliding window. We will add this functionality to our TechnicalDocumentAnalyzer class:

def create_sliding_windows(self, text, window_size=None, stride=None):
    """
    Splits text into overlapping windows for processing by the LLM.
    
    The window_size parameter determines how many tokens fit in each window,
    while stride controls how far the window moves forward each step.
    A smaller stride creates more overlap between consecutive windows.
    
    Args:
        text: The input text as a string
        window_size: Maximum number of tokens per window (default: max_context_length - 100)
        stride: Number of tokens to advance the window each step (default: window_size // 2)
        
    Returns:
        A list of dictionaries containing window text and metadata
    """
    # Use defaults if not specified
    if window_size is None:
        window_size = self.max_context_length - 100  # Leave room for prompts
    if stride is None:
        stride = window_size // 2  # 50% overlap by default
    
    # Tokenize the full text
    tokens = self.tokenizer.encode(text)
    windows = []
    
    position = 0
    window_index = 0
    
    while position < len(tokens):
        # Extract tokens from current position up to window_size
        window_end = min(position + window_size, len(tokens))
        window_tokens = tokens[position:window_end]
        
        # Decode back to text
        window_text = self.tokenizer.decode(window_tokens)
        
        # Store window with metadata
        windows.append({
            'index': window_index,
            'start_token': position,
            'end_token': window_end,
            'num_tokens': len(window_tokens),
            'text': window_text
        })
        
        # Move the window forward by stride tokens
        position += stride
        window_index += 1
        
        # If we have processed all tokens, break to avoid empty windows
        if window_end >= len(tokens):
            break
    
    return windows

# Add this method to the TechnicalDocumentAnalyzer class
TechnicalDocumentAnalyzer.create_sliding_windows = create_sliding_windows


def demonstrate_sliding_windows(analyzer, document):
    """
    Demonstrate how sliding windows divide the document.
    """
    print("=" * 70)
    print("SLIDING WINDOWS DEMONSTRATION")
    print("=" * 70)
    
    # Create windows with 50% overlap
    windows = analyzer.create_sliding_windows(document)
    
    print(f"Created {len(windows)} windows from the document")
    print(f"Window size: ~924 tokens")
    print(f"Stride: ~462 tokens (50% overlap)")
    print()
    
    # Show details of each window
    for window in windows:
        print(f"Window {window['index']}:")
        print(f"  Token range: {window['start_token']} to {window['end_token']}")
        print(f"  Actual tokens: {window['num_tokens']}")
        print(f"  Content preview: {window['text'][:100]}...")
        print()
    
    # Show overlap between consecutive windows
    if len(windows) >= 2:
        print("Overlap analysis between Window 0 and Window 1:")
        window0_tokens = set(range(windows[0]['start_token'], windows[0]['end_token']))
        window1_tokens = set(range(windows[1]['start_token'], windows[1]['end_token']))
        overlap_tokens = window0_tokens.intersection(window1_tokens)
        print(f"  Overlapping tokens: {len(overlap_tokens)}")
        print(f"  Overlap percentage: {(len(overlap_tokens) / windows[0]['num_tokens']) * 100:.1f}%")
    
    return windows

# Run the demonstration
windows = demonstrate_sliding_windows(analyzer, document)

This implementation captures the essence of sliding windows. The stride parameter is crucial because it controls the trade-off between computational cost and information retention. A stride equal to the window size means no overlap, processing the text in sequential, non-overlapping chunks. This is computationally efficient but risks losing context at boundaries. A stride of half the window size means fifty percent overlap, where each token appears in two windows. This provides better continuity but doubles the computational cost.
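
The cost side of this trade-off is easy to quantify: the number of windows, and therefore the number of forward passes, grows as the stride shrinks. The short sketch below reuses create_sliding_windows on our document to show how the window count changes with the amount of overlap.

# More overlap (smaller stride) means more windows and more model calls.
window_size = analyzer.max_context_length - 100

for overlap_fraction in (0.0, 0.25, 0.5, 0.75):
    stride = max(1, int(window_size * (1 - overlap_fraction)))
    trial_windows = analyzer.create_sliding_windows(document, window_size=window_size, stride=stride)
    print(f"Overlap {overlap_fraction:.0%}: stride {stride} tokens -> {len(trial_windows)} windows")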

Now let us use these windows to actually answer questions about our requirements document:

def answer_question_with_windows(self, document, question, window_size=None, stride=None):
    """
    Answer a question about a long document using sliding windows.
    
    This method processes each window separately and collects answers,
    then synthesizes them into a final response.
    
    Args:
        document: The full document text
        question: The question to answer
        window_size: Size of each window
        stride: Stride between windows
        
    Returns:
        The synthesized answer
    """
    # Create sliding windows
    windows = self.create_sliding_windows(document, window_size, stride)
    
    print(f"Processing {len(windows)} windows to answer: '{question}'")
    print()
    
    window_answers = []
    
    # Process each window
    for i, window in enumerate(windows):
        print(f"Processing window {i}...")
        
        # Create prompt for this window
        prompt = f"{window['text']}\n\nQuestion: {question}\nAnswer:"
        
        # Generate answer for this window
        answer = self.generate_response(prompt, max_new_tokens=100)
        
        window_answers.append({
            'window_index': i,
            'answer': answer,
            'window_preview': window['text'][:100]
        })
    
    print()
    print("Answers from each window:")
    for wa in window_answers:
        print(f"  Window {wa['window_index']}: {wa['answer'][:80]}...")
    print()
    
    # Synthesize final answer from all window answers
    synthesis_prompt = f"Question: {question}\n\n"
    synthesis_prompt += "Information from different parts of the document:\n"
    for wa in window_answers:
        synthesis_prompt += f"- {wa['answer']}\n"
    synthesis_prompt += "\nProvide a comprehensive answer combining all information:"
    
    final_answer = self.generate_response(synthesis_prompt, max_new_tokens=150)
    
    return final_answer

# Add this method to the class
TechnicalDocumentAnalyzer.answer_question_with_windows = answer_question_with_windows

# Now answer the question that failed before
print("=" * 70)
print("ANSWERING WITH SLIDING WINDOWS")
print("=" * 70)
question = "What potential conflicts are mentioned in the requirements?"
answer = analyzer.answer_question_with_windows(document, question)
print(f"Final Answer: {answer}")
print()
print("Success! The sliding window approach found the conflicts that were")
print("beyond the original context window.")

This demonstrates the power of sliding windows. By processing the document in overlapping chunks, we can now answer questions that require information from any part of the document, not just the beginning.

ADVANCED WINDOW MANAGEMENT: CONTEXT AGGREGATION AND MEMORY

Simply processing text in sliding windows is not enough for many applications. We need mechanisms to aggregate information across windows and maintain a coherent understanding of the entire document. This requires careful design of how we combine results from multiple windows and what information we carry forward.

One effective approach is to maintain a running summary or memory buffer that accumulates key information from each window. As we process each window, we extract salient points and add them to the memory, which then becomes part of the context for subsequent windows. This creates a hierarchical understanding where detailed information exists in the current window while high-level summaries provide context from earlier windows.

Let us enhance our TechnicalDocumentAnalyzer with memory capabilities:

def __init__(self, model_name='gpt2'):
    """
    Enhanced initialization with memory buffer support.
    """
    print(f"Loading {model_name} model and tokenizer...")
    self.tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    self.model = GPT2LMHeadModel.from_pretrained(model_name)
    
    self.tokenizer.pad_token = self.tokenizer.eos_token
    
    self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    self.model.to(self.device)
    self.model.eval()
    
    self.max_context_length = 1024
    
    # Memory buffer for accumulating context
    self.memory_buffer = ""
    self.memory_max_tokens = 200  # Reserve tokens for memory
    
    print(f"Model loaded successfully on {self.device}")
    print(f"Maximum context length: {self.max_context_length} tokens")
    print(f"Memory buffer capacity: {self.memory_max_tokens} tokens")


def extract_window_summary(self, window_text):
    """
    Extract key information from a window to store in memory.
    
    This creates a compressed representation of the window that can
    be carried forward to provide context for later windows.
    
    Args:
        window_text: The text of the current window
        
    Returns:
        A summary of key points from the window
    """
    # Create a prompt asking for key points
    prompt = f"{window_text}\n\nList the 3 most important points from this text:\n1."
    
    summary = self.generate_response(prompt, max_new_tokens=80)
    
    # Clean up the summary
    summary = "1." + summary
    
    return summary


def update_memory(self, window_summary):
    """
    Update the memory buffer with information from the latest window.
    
    This method adds new summary information while ensuring the memory
    does not exceed the maximum size. Older information is truncated if
    necessary to make room for new content.
    
    Args:
        window_summary: Summary or key points from the processed window
    """
    # Add new summary to memory
    if self.memory_buffer:
        self.memory_buffer += f"\n{window_summary}"
    else:
        self.memory_buffer = window_summary
    
    # Truncate memory if it exceeds the maximum size
    memory_tokens = self.tokenizer.encode(self.memory_buffer)
    if len(memory_tokens) > self.memory_max_tokens:
        # Keep only the most recent tokens
        truncated_tokens = memory_tokens[-self.memory_max_tokens:]
        self.memory_buffer = self.tokenizer.decode(truncated_tokens)


def process_with_memory(self, document, task_instruction):
    """
    Process a document using sliding windows with memory accumulation.
    
    This method maintains a memory buffer that accumulates key information
    from each window, providing context that spans the entire document.
    
    Args:
        document: The full document text
        task_instruction: What task to perform
        
    Returns:
        The final result after processing all windows
    """
    # Reset memory buffer
    self.memory_buffer = ""
    
    # Create windows
    window_size = self.max_context_length - self.memory_max_tokens - 100
    windows = self.create_sliding_windows(document, window_size=window_size)
    
    print(f"Processing {len(windows)} windows with memory accumulation...")
    print()
    
    results = []
    
    for i, window in enumerate(windows):
        print(f"Window {i}: ", end='')
        
        # Construct prompt with memory and current window
        if self.memory_buffer:
            prompt = f"Previous context:\n{self.memory_buffer}\n\n"
            prompt += f"Current section:\n{window['text']}\n\n"
            prompt += task_instruction
        else:
            prompt = f"{window['text']}\n\n{task_instruction}"
        
        # Process this window
        response = self.generate_response(prompt, max_new_tokens=100)
        results.append(response)
        
        # Extract summary for memory
        summary = self.extract_window_summary(window['text'])
        self.update_memory(summary)
        
        print(f"Processed. Memory now contains {len(self.tokenizer.encode(self.memory_buffer))} tokens")
    
    print()
    print("Final memory buffer content:")
    print(self.memory_buffer)
    print()
    
    # Combine results
    final_result = "\n\n".join(results)
    return final_result

# Add these methods to the class
TechnicalDocumentAnalyzer.__init__ = __init__
TechnicalDocumentAnalyzer.extract_window_summary = extract_window_summary
TechnicalDocumentAnalyzer.update_memory = update_memory
TechnicalDocumentAnalyzer.process_with_memory = process_with_memory

# Reinitialize with memory support
analyzer = TechnicalDocumentAnalyzer(model_name='gpt2')

# Demonstrate memory-based processing
print("=" * 70)
print("PROCESSING WITH MEMORY ACCUMULATION")
print("=" * 70)

task = "Identify all security-related requirements:"
result = analyzer.process_with_memory(document, task)

print("Final result:")
print(result)

This implementation introduces the crucial concept of memory. The memory buffer acts as a compressed representation of everything the model has seen so far. Instead of trying to fit the entire document into one context window, we maintain a high-level summary that provides continuity. Each window is processed with both its detailed content and the summary of previous windows, giving the model a broader understanding.

STRATEGIC WINDOW PLACEMENT: ATTENTION-BASED SELECTION

Not all parts of a document are equally important. A naive sliding window treats every token with equal priority, but we can be smarter. By using attention scores or other relevance metrics, we can focus our limited context budget on the most important sections of the text.

For our requirements document, if we are asked about security requirements, we should focus on windows that contain security-related content rather than processing every window equally. Let us implement selective window processing:

def score_window_relevance(self, window_text, query):
    """
    Compute a relevance score for a window given a query.
    
    This uses a simple but effective keyword-based approach. For production
    systems, you would use embedding similarity or a dedicated retrieval model.
    
    Args:
        window_text: The text content of the window
        query: The user's question or task description
        
    Returns:
        A relevance score (higher means more relevant)
    """
    # Extract keywords from query
    query_terms = set(query.lower().split())
    
    # Remove common words
    stop_words = {'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for',
                  'of', 'with', 'by', 'from', 'as', 'is', 'was', 'are', 'were', 'what',
                  'which', 'who', 'when', 'where', 'how', 'all', 'any'}
    query_terms = query_terms - stop_words
    
    # Tokenize window and count matches
    window_terms = window_text.lower().split()
    
    # Count how many query terms appear and their frequency
    score = 0
    for term in query_terms:
        count = window_terms.count(term)
        score += count
    
    # Normalize by window length to avoid bias toward longer windows
    score = score / len(window_terms) if window_terms else 0
    
    return score


def process_with_selective_attention(self, document, query, top_k=3):
    """
    Process only the most relevant windows for a given query.
    
    This method first scores all possible windows, then processes only the
    top-k most relevant ones in detail. This dramatically reduces computation
    for long documents when we have a focused task.
    
    Args:
        document: The complete document text
        query: The user's question or task
        top_k: How many top-scoring windows to process in detail
        
    Returns:
        Results from processing the most relevant windows
    """
    # Create all possible windows
    windows = self.create_sliding_windows(document)
    
    print(f"Scoring {len(windows)} windows for relevance to: '{query}'")
    print()
    
    # Score each window for relevance
    scored_windows = []
    for window in windows:
        score = self.score_window_relevance(window['text'], query)
        scored_windows.append({
            'window': window,
            'score': score
        })
    
    # Sort by relevance score and take top-k
    scored_windows.sort(key=lambda x: x['score'], reverse=True)
    top_windows = scored_windows[:top_k]
    
    print(f"Selected top {len(top_windows)} windows:")
    for i, sw in enumerate(top_windows):
        print(f"  Window {sw['window']['index']}: relevance score {sw['score']:.4f}")
        print(f"    Preview: {sw['window']['text'][:80]}...")
    print()
    
    # Process only the top windows
    results = []
    for sw in top_windows:
        window = sw['window']
        prompt = f"{window['text']}\n\nQuery: {query}\nAnswer:"
        response = self.generate_response(prompt, max_new_tokens=150)
        
        results.append({
            'window_index': window['index'],
            'response': response,
            'relevance': sw['score']
        })
    
    # Synthesize final answer
    synthesis_prompt = f"Query: {query}\n\nRelevant information found:\n"
    for r in results:
        synthesis_prompt += f"- {r['response']}\n"
    synthesis_prompt += "\nProvide a comprehensive answer:"
    
    final_answer = self.generate_response(synthesis_prompt, max_new_tokens=150)
    
    return final_answer, results

# Add these methods to the class
TechnicalDocumentAnalyzer.score_window_relevance = score_window_relevance
TechnicalDocumentAnalyzer.process_with_selective_attention = process_with_selective_attention

# Demonstrate selective attention
print("=" * 70)
print("SELECTIVE WINDOW PROCESSING")
print("=" * 70)

query = "What are the security and encryption requirements?"
answer, details = analyzer.process_with_selective_attention(document, query, top_k=2)

print(f"Final Answer: {answer}")
print()
print(f"This approach processed only 2 windows instead of all {len(windows)},")
print("reducing computation by focusing on the most relevant sections!")

This selective approach can reduce computational costs by orders of magnitude for long documents. Instead of processing all windows, we identify and process only the most relevant ones, dramatically improving efficiency while maintaining or even improving result quality.
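
The keyword scorer above is deliberately simple. One step up, still without adding any dependencies, is to rank windows by embedding similarity: mean-pool the model's own hidden states for the query and for each window, then compare them with cosine similarity. The sketch below shows one way this could look; GPT-2's pooled hidden states are only a rough stand-in for a dedicated embedding or retrieval model, so treat it as an illustration of the idea rather than a recommended scorer.

def embed_text(analyzer, text, max_tokens=512):
    """Rough text embedding: mean-pooled final hidden states from GPT-2."""
    token_ids = analyzer.tokenizer.encode(text)[:max_tokens]
    token_ids = torch.tensor([token_ids]).to(analyzer.device)
    with torch.no_grad():
        outputs = analyzer.model(token_ids, output_hidden_states=True)
    # hidden_states[-1] has shape (1, seq_len, hidden_size); average over tokens
    return outputs.hidden_states[-1].mean(dim=1).squeeze(0)

def score_windows_by_embedding(analyzer, windows, query):
    """Rank windows by cosine similarity between query and window embeddings."""
    query_vec = embed_text(analyzer, query)
    scored = []
    for window in windows:
        window_vec = embed_text(analyzer, window['text'])
        similarity = torch.nn.functional.cosine_similarity(query_vec, window_vec, dim=0).item()
        scored.append((window['index'], similarity))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranking = score_windows_by_embedding(analyzer, windows, "security and encryption requirements")
print("Windows ranked by embedding similarity:", ranking)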

HIERARCHICAL SLIDING WINDOWS: MULTI-SCALE PROCESSING

Some tasks benefit from processing text at multiple levels of granularity simultaneously. A hierarchical sliding window approach creates windows of different sizes, with smaller windows capturing fine details and larger windows capturing broader context.

For our requirements document, we might use small windows to extract individual requirements, medium windows to understand requirement groups and modules, and large windows to understand overall system architecture and constraints. Let us implement this:

def create_hierarchical_windows(self, text):
    """
    Create windows at multiple scales for hierarchical processing.
    
    Returns a dictionary where each key is a window level name and the
    value is the list of windows at that scale.
    """
    hierarchical_windows = {}
    
    # Define window configurations at different scales
    configs = [
        {'size': 300, 'stride': 150, 'name': 'detail'},      # Fine-grained
        {'size': 600, 'stride': 300, 'name': 'context'},     # Medium
        {'size': 900, 'stride': 450, 'name': 'overview'}     # Broad
    ]
    
    for config in configs:
        windows = self.create_sliding_windows(
            text,
            window_size=config['size'],
            stride=config['stride']
        )
        hierarchical_windows[config['name']] = windows
    
    return hierarchical_windows


def process_hierarchically(self, document, task):
    """
    Process document using multiple window scales and combine insights.
    
    The detail level provides specific requirements and facts, the context
    level provides module-level understanding, and the overview level provides
    system-level architecture and constraints.
    
    Args:
        document: The full document text
        task: The analysis task to perform
        
    Returns:
        Synthesized results from all hierarchical levels
    """
    windows_by_level = self.create_hierarchical_windows(document)
    
    print("=" * 70)
    print("HIERARCHICAL PROCESSING")
    print("=" * 70)
    
    results_by_level = {}
    
    # Process each level with appropriate prompts
    for level_name, windows in windows_by_level.items():
        print(f"Processing {level_name} level ({len(windows)} windows)...")
        level_results = []
        
        for i, window in enumerate(windows):
            # Adjust the prompt based on the level
            if level_name == 'detail':
                prompt = f"{window['text']}\n\nExtract specific requirement IDs and their details:\n"
            elif level_name == 'context':
                prompt = f"{window['text']}\n\nSummarize the main modules and their purposes:\n"
            else:  # overview
                prompt = f"{window['text']}\n\nDescribe the overall system architecture:\n"
            
            result = self.generate_response(prompt, max_new_tokens=80)
            level_results.append(result)
        
        results_by_level[level_name] = level_results
        print(f"  Completed {level_name} level")
    
    print()
    
    # Synthesize results from all levels
    synthesis_prompt = f"Task: {task}\n\n"
    
    synthesis_prompt += "System Overview:\n"
    for result in results_by_level['overview'][:2]:  # Limit to avoid overflow
        synthesis_prompt += f"  {result}\n"
    
    synthesis_prompt += "\nModule Context:\n"
    for result in results_by_level['context'][:3]:
        synthesis_prompt += f"  {result}\n"
    
    synthesis_prompt += "\nSpecific Details:\n"
    for result in results_by_level['detail'][:4]:
        synthesis_prompt += f"  {result}\n"
    
    synthesis_prompt += "\nProvide a comprehensive analysis:"
    
    final_answer = self.generate_response(synthesis_prompt, max_new_tokens=200)
    
    return final_answer, results_by_level

# Add these methods to the class
TechnicalDocumentAnalyzer.create_hierarchical_windows = create_hierarchical_windows
TechnicalDocumentAnalyzer.process_hierarchically = process_hierarchically

# Demonstrate hierarchical processing
task = "Analyze the system architecture and key requirements"
result, details = analyzer.process_hierarchically(document, task)

print("Final Hierarchical Analysis:")
print(result)
print()
print("This approach combines insights from multiple scales:")
print(f"  - Detail level: {len(details['detail'])} windows")
print(f"  - Context level: {len(details['context'])} windows")
print(f"  - Overview level: {len(details['overview'])} windows")

The hierarchical approach is particularly powerful for complex analytical tasks. By processing the document at multiple scales simultaneously, we capture both fine-grained details and high-level structure, producing richer and more nuanced results.

CACHING AND OPTIMIZATION: MAKING SLIDING WINDOWS EFFICIENT

Processing long documents with sliding windows can be computationally expensive, especially with large overlaps. However, we can employ several optimization techniques to reduce redundant computation. The most important of these is key-value caching, which allows us to reuse computations from overlapping regions.

Let us add caching capabilities to our analyzer:

def process_window_with_cache(self, window_tokens, past_key_values=None):
    """
    Process a window with KV cache optimization.
    
    For overlapping tokens, we reuse the key-value pairs computed in the
    previous window. Only new tokens require fresh computation.
    
    Args:
        window_tokens: Token IDs for the current window
        past_key_values: Cached key-value pairs from previous window
        
    Returns:
        Model output including updated cache
    """
    input_ids = torch.tensor([window_tokens]).to(self.device)
    
    with torch.no_grad():
        outputs = self.model(
            input_ids,
            past_key_values=past_key_values,
            use_cache=True
        )
    
    return outputs


def process_document_with_cache(self, document, task_instruction):
    """
    Process entire document with KV caching for efficiency.
    
    By maintaining the KV cache across windows, we avoid recomputing
    attention for overlapping tokens, significantly reducing computation.
    
    Args:
        document: The full document text
        task_instruction: What to do with each window
        
    Returns:
        Results from all windows
    """
    print("=" * 70)
    print("CACHED PROCESSING")
    print("=" * 70)
    
    # Create windows with 50% overlap
    window_size = 512
    stride = 256
    windows = self.create_sliding_windows(document, window_size, stride)
    
    print(f"Processing {len(windows)} windows with KV caching...")
    print(f"Overlap: {window_size - stride} tokens per window")
    print()
    
    results = []
    past_kv = None
    
    import time
    total_time = 0
    
    for i, window in enumerate(windows):
        start_time = time.time()
        
        # Tokenize window (window['text'] is re-tokenized here, so the overlap
        # split below is approximate)
        window_tokens = self.tokenizer.encode(window['text'])
        overlap_size = window_size - stride
        
        # A cache carried from the previous window already covers the
        # overlapping tokens, so feed only the tokens after the overlap
        if past_kv is not None:
            if len(window_tokens) > overlap_size:
                window_tokens = window_tokens[overlap_size:]
            else:
                past_kv = None  # window fits inside the overlap; process standalone
        
        # Process with cache
        outputs = self.process_window_with_cache(window_tokens, past_kv)
        
        # Keep only the cache for the last overlap_size positions so the next
        # window can reuse it without exceeding GPT-2's 1024-position limit.
        # Trimming an absolute-position model's cache shifts later positions,
        # so this is an approximation; newer transformers versions may return
        # a Cache object rather than the legacy tuple format sliced here.
        if i < len(windows) - 1:
            past_kv = tuple(
                (k[:, :, -overlap_size:, :], v[:, :, -overlap_size:, :])
                for k, v in outputs.past_key_values
            )
        
        elapsed = time.time() - start_time
        total_time += elapsed
        
        # Generate response for this window
        prompt = f"{window['text']}\n\n{task_instruction}"
        response = self.generate_response(prompt, max_new_tokens=50)
        results.append(response)
        
        print(f"Window {i}: {elapsed:.3f}s (cumulative: {total_time:.3f}s)")
    
    print()
    print(f"Total processing time: {total_time:.3f}s")
    print(f"Average per window: {total_time / len(windows):.3f}s")
    print()
    
    return results

# Add these methods to the class
TechnicalDocumentAnalyzer.process_window_with_cache = process_window_with_cache
TechnicalDocumentAnalyzer.process_document_with_cache = process_document_with_cache

# Demonstrate cached processing
task = "Identify the main requirement:"
results = analyzer.process_document_with_cache(document, task)

print("Results from cached processing:")
for i, result in enumerate(results):
    print(f"Window {i}: {result[:60]}...")

Reusing the cache avoids recomputing attention over the overlapping tokens during the forward passes, and the saving grows with the amount of overlap and the length of the document. Note that in this sketch the separate generate_response call still recomputes its own prompt; production systems integrate cache management directly into the generation loop, as sketched below.
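
Where the cache pays off most directly is inside generation itself: the prompt is processed once, and each subsequent token is produced by feeding only that single new token together with the cached keys and values. The minimal greedy-decoding loop below makes this explicit; model.generate performs the equivalent bookkeeping internally when its cache is enabled.

# Minimal greedy decoding loop with explicit KV-cache reuse: process the prompt
# once, then feed only the newest token plus the cache at every step.
prompt = "REQ-SEC-001: All data transmission shall use"
input_ids = analyzer.tokenizer.encode(prompt, return_tensors='pt').to(analyzer.device)

with torch.no_grad():
    outputs = analyzer.model(input_ids, use_cache=True)
past = outputs.past_key_values
next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)

generated = [next_token.item()]
for _ in range(20):
    with torch.no_grad():
        outputs = analyzer.model(next_token, past_key_values=past, use_cache=True)
    past = outputs.past_key_values
    next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    generated.append(next_token.item())

print(prompt + analyzer.tokenizer.decode(generated))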

COMPLETE APPLICATION: PUTTING IT ALL TOGETHER

Now let us create a complete, production-ready application that combines all the techniques we have discussed. This will be a comprehensive requirements analysis tool:

class ProductionRequirementsAnalyzer(TechnicalDocumentAnalyzer):
    """
    Production-ready requirements analyzer combining all techniques:
    - Sliding windows with configurable overlap
    - Memory accumulation for context
    - Selective attention for efficiency
    - Hierarchical processing for depth
    - Caching for performance
    """
    
    def analyze_requirements(self, document, analysis_type='comprehensive'):
        """
        Perform comprehensive requirements analysis.
        
        Args:
            document: The requirements document text
            analysis_type: Type of analysis ('comprehensive', 'conflicts', 
                          'security', 'performance')
                          
        Returns:
            Detailed analysis results
        """
        print("=" * 70)
        print(f"PRODUCTION REQUIREMENTS ANALYSIS: {analysis_type.upper()}")
        print("=" * 70)
        print()
        
        if analysis_type == 'conflicts':
            return self._analyze_conflicts(document)
        elif analysis_type == 'security':
            return self._analyze_security(document)
        elif analysis_type == 'performance':
            return self._analyze_performance(document)
        else:
            return self._analyze_comprehensive(document)
    
    def _analyze_conflicts(self, document):
        """
        Identify potential conflicts between requirements.
        Uses selective attention to focus on relevant sections.
        """
        print("Analyzing for requirement conflicts...")
        print()
        
        # Use selective attention to find conflict-related sections
        query = "conflicts dependencies constraints incompatible contradictory"
        answer, details = self.process_with_selective_attention(
            document, query, top_k=3
        )
        
        # Also check the end of document where conflicts are often noted
        windows = self.create_sliding_windows(document)
        last_window = windows[-1]
        
        prompt = f"{last_window['text']}\n\nList any conflicts or issues mentioned:\n"
        last_section_analysis = self.generate_response(prompt, max_new_tokens=150)
        
        result = {
            'type': 'conflict_analysis',
            'main_findings': answer,
            'end_section_notes': last_section_analysis,
            'windows_analyzed': len(details)
        }
        
        return result
    
    def _analyze_security(self, document):
        """
        Extract and analyze all security requirements.
        Uses hierarchical processing for comprehensive coverage.
        """
        print("Analyzing security requirements...")
        print()
        
        # Use selective attention for security-related windows
        query = "security authentication encryption password access control audit"
        answer, details = self.process_with_selective_attention(
            document, query, top_k=4
        )
        
        result = {
            'type': 'security_analysis',
            'summary': answer,
            'requirements_found': [],
            'windows_analyzed': len(details)
        }
        
        # Extract specific requirement IDs
        for detail in details:
            if 'REQ-SEC' in detail['response'] or 'REQ-AUTH' in detail['response']:
                result['requirements_found'].append(detail['response'])
        
        return result
    
    def _analyze_performance(self, document):
        """
        Analyze performance requirements and constraints.
        """
        print("Analyzing performance requirements...")
        print()
        
        query = "performance response time latency throughput concurrent users scalability"
        answer, details = self.process_with_selective_attention(
            document, query, top_k=3
        )
        
        result = {
            'type': 'performance_analysis',
            'summary': answer,
            'windows_analyzed': len(details)
        }
        
        return result
    
    def _analyze_comprehensive(self, document):
        """
        Perform comprehensive analysis using all techniques.
        """
        print("Performing comprehensive analysis...")
        print()
        
        # Use hierarchical processing for full coverage
        task = "Provide a comprehensive overview of this requirements document"
        overview, hierarchical_details = self.process_hierarchically(document, task)
        
        # Also identify key sections
        sections = {
            'security': self._analyze_security(document),
            'performance': self._analyze_performance(document),
            'conflicts': self._analyze_conflicts(document)
        }
        
        result = {
            'type': 'comprehensive_analysis',
            'overview': overview,
            'detailed_sections': sections,
            'total_windows_processed': sum(
                len(hierarchical_details[level]) 
                for level in hierarchical_details
            )
        }
        
        return result
    
    def generate_report(self, analysis_result):
        """
        Generate a formatted report from analysis results.
        
        Args:
            analysis_result: Results from analyze_requirements
            
        Returns:
            Formatted report string
        """
        report = []
        report.append("=" * 70)
        report.append("REQUIREMENTS ANALYSIS REPORT")
        report.append("=" * 70)
        report.append("")
        
        if analysis_result['type'] == 'comprehensive_analysis':
            report.append("OVERVIEW:")
            report.append(analysis_result['overview'])
            report.append("")
            
            report.append("SECURITY ANALYSIS:")
            report.append(analysis_result['detailed_sections']['security']['summary'])
            report.append("")
            
            report.append("PERFORMANCE ANALYSIS:")
            report.append(analysis_result['detailed_sections']['performance']['summary'])
            report.append("")
            
            report.append("CONFLICTS IDENTIFIED:")
            report.append(analysis_result['detailed_sections']['conflicts']['main_findings'])
            report.append("")
            
            report.append(f"Total windows processed: {analysis_result['total_windows_processed']}")
        
        elif analysis_result['type'] == 'conflict_analysis':
            report.append("CONFLICT ANALYSIS:")
            report.append(analysis_result['main_findings'])
            report.append("")
            report.append("ADDITIONAL NOTES FROM END SECTION:")
            report.append(analysis_result['end_section_notes'])
        
        elif analysis_result['type'] == 'security_analysis':
            report.append("SECURITY REQUIREMENTS SUMMARY:")
            report.append(analysis_result['summary'])
            report.append("")
            if analysis_result['requirements_found']:
                report.append("SPECIFIC REQUIREMENTS IDENTIFIED:")
                for req in analysis_result['requirements_found']:
                    report.append(f"  - {req}")
        
        elif analysis_result['type'] == 'performance_analysis':
            report.append("PERFORMANCE REQUIREMENTS:")
            report.append(analysis_result['summary'])
        
        report.append("")
        report.append("=" * 70)
        
        return "\n".join(report)


# Create production analyzer
print("\n\n")
print("=" * 70)
print("COMPLETE PRODUCTION APPLICATION DEMONSTRATION")
print("=" * 70)
print("\n")

prod_analyzer = ProductionRequirementsAnalyzer(model_name='gpt2')

# Perform different types of analysis
print("\n--- CONFLICT ANALYSIS ---\n")
conflict_results = prod_analyzer.analyze_requirements(document, 'conflicts')
print(prod_analyzer.generate_report(conflict_results))

print("\n--- SECURITY ANALYSIS ---\n")
security_results = prod_analyzer.analyze_requirements(document, 'security')
print(prod_analyzer.generate_report(security_results))

print("\n--- COMPREHENSIVE ANALYSIS ---\n")
comprehensive_results = prod_analyzer.analyze_requirements(document, 'comprehensive')
print(prod_analyzer.generate_report(comprehensive_results))

This complete application demonstrates how all the techniques we have discussed work together in a production system. It can analyze requirements documents of any length, focus computational resources where they are needed most, maintain context across the entire document, and produce comprehensive, actionable results.

BENEFITS OF SLIDING CONTEXT WINDOWS: WHEN AND WHY TO USE THEM

Our running example has demonstrated several compelling advantages of sliding context windows. The most obvious benefit is the ability to process arbitrarily long documents without truncation. Our requirements document, while modest in size, exceeded GPT-2's context window. With sliding windows, we successfully analyzed the entire document, including the critical conflict information at the end that would have been lost with simple truncation.

Beyond simple length extension, sliding windows enabled more sophisticated processing strategies. The overlap between windows ensured that requirements spanning window boundaries were not fragmented. When we analyzed security requirements, the overlap meant that requirements mentioned near the end of one window were also visible at the beginning of the next, preserving context.
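
To see how the overlap works mechanically, here is a minimal, self-contained sketch of overlapping window construction; the function is illustrative rather than part of the analyzer class, and the 900/450 defaults mirror the values we used:

def make_overlapping_windows(token_ids, window_size=900, stride=450):
    """
    Minimal sketch: split a token sequence into overlapping windows.
    Because stride < window_size, the tail of each window reappears at
    the head of the next one, so text near a boundary is seen twice.
    """
    windows = []
    start = 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + window_size])
        if start + window_size >= len(token_ids):
            break  # this window already reaches the end of the document
        start += stride
    return windows

# Example with dummy token IDs: a 2000-token document yields 4 windows,
# each sharing 450 tokens with its neighbour.
demo = make_overlapping_windows(list(range(2000)))
print(len(demo), "windows, overlap =", 900 - 450, "tokens")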

The selective attention approach demonstrated how we can optimize computational costs. Instead of processing all windows for every query, we identified and processed only the most relevant ones. When analyzing security requirements, we processed only four windows instead of all windows, reducing computation by more than half while maintaining comprehensive coverage of security-related content.

The hierarchical processing showed how we can maintain both detailed and high-level understanding simultaneously. Our detail-level windows extracted specific requirement IDs, context-level windows understood module relationships, and overview-level windows captured system architecture. This multi-scale understanding would be impossible with a single fixed window size.

For our requirements analysis application, sliding windows enabled capabilities that would otherwise require manual document segmentation or multiple passes through the data. We could identify conflicts between requirements in different sections, extract all security requirements regardless of where they appeared, and understand performance constraints in the context of the overall system architecture.

LIABILITIES AND CHALLENGES: WHEN SLIDING WINDOWS FALL SHORT

Despite their utility in our requirements analyzer, sliding windows are not without limitations. The most fundamental issue became apparent in our conflict analysis: the model never sees the entire document at once. When we asked about conflicts between REQ-AUTH-002 (session timeout) and REQ-FIN-004 (long-running processes), the model had to rely on our memory buffer or selective attention to connect these distant requirements. With a truly enormous document, such connections might be missed.

The computational cost, while reduced through our selective attention and caching techniques, remains substantial. Our comprehensive analysis processed windows at three different scales, requiring many forward passes through the model. For a real 500-page requirements document, this could take hours even with optimization.
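
A rough back-of-envelope estimate makes this concrete. The tokens-per-page and seconds-per-pass figures below are assumptions for illustration, not measurements from our runs:

import math

# Back-of-envelope cost estimate for hierarchical processing of a large document.
# Assumptions (not measurements): ~500 tokens per page and ~15 seconds per
# generation pass with GPT-2 on modest hardware; adjust both for your setup.
pages = 500
tokens_per_page = 500
doc_tokens = pages * tokens_per_page

window_size, stride = 900, 450
detail_windows = math.ceil((doc_tokens - window_size) / stride) + 1

# Assume the coarser hierarchical levels need roughly half and a quarter
# as many windows as the detail level, respectively.
context_windows = detail_windows // 2
overview_windows = detail_windows // 4
total_passes = detail_windows + context_windows + overview_windows

seconds_per_pass = 15.0
print(f"~{total_passes} forward passes, roughly "
      f"{total_passes * seconds_per_pass / 3600:.1f} hours of model time")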

Boundary effects were visible in our results. Some requirements that spanned multiple windows were partially duplicated or fragmented in our analysis. The model's attention was naturally stronger within a window than across windows, so relationships between requirements in different modules were harder to capture than relationships within a single module.

The aggregation step proved challenging. When we synthesized results from multiple windows, we had to carefully design prompts to combine potentially conflicting information. Our conflict analysis, for example, required special handling to merge findings from selective attention with analysis of the document's end section.

Memory management required careful tuning. We allocated 200 tokens for our memory buffer, but this was somewhat arbitrary. For different document types or analysis tasks, different memory sizes might be optimal. Too small, and we lose important context from early sections. Too large, and we crowd out space for current window content.
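
The trade-off is easy to see as a token budget: every token the memory buffer occupies is a token unavailable for fresh document content within GPT-2's 1,024-token context. The prompt overhead figure below is purely illustrative:

# How the memory buffer competes with fresh content for GPT-2's 1024-token context.
# The prompt overhead figure is illustrative; measure your own prompts.
context_limit = 1024
prompt_overhead = 60

for memory_tokens in (100, 200, 400):
    content_tokens = context_limit - prompt_overhead - memory_tokens
    print(f"memory buffer = {memory_tokens:>3} tokens -> "
          f"{content_tokens} tokens left for the current window")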

ALTERNATIVES AND COMPLEMENTARY APPROACHES

Our requirements analyzer used sliding windows, but other approaches exist. For comparison, let us briefly implement a retrieval-augmented approach:

def analyze_with_retrieval(self, document, query):
    """
    Alternative approach using retrieval instead of sliding windows.
    
    This splits the document into non-overlapping chunks, scores each
    chunk against the query, and sends only the top-scoring chunks to the model.
    """
    # Split document into chunks (not overlapping windows)
    chunks = document.split('\n\n')
    
    # Simple retrieval: score each chunk
    scored_chunks = []
    for chunk in chunks:
        if len(chunk.strip()) > 20:  # Skip very short chunks
            score = self.score_window_relevance(chunk, query)
            scored_chunks.append({'text': chunk, 'score': score})
    
    # Take top chunks
    scored_chunks.sort(key=lambda x: x['score'], reverse=True)
    top_chunks = scored_chunks[:3]
    
    # Combine and query
    context = '\n\n'.join([c['text'] for c in top_chunks])
    prompt = f"{context}\n\nQuery: {query}\nAnswer:"
    
    return self.generate_response(prompt, max_new_tokens=150)

TechnicalDocumentAnalyzer.analyze_with_retrieval = analyze_with_retrieval

# Compare approaches
print("\n--- RETRIEVAL VS SLIDING WINDOWS ---\n")

query = "What are the authentication requirements?"

print("Using retrieval approach:")
retrieval_answer = analyzer.analyze_with_retrieval(document, query)
print(retrieval_answer)
print()

print("Using sliding windows approach:")
sliding_answer, _ = analyzer.process_with_selective_attention(document, query, top_k=2)
print(sliding_answer)

Retrieval-augmented generation is faster for focused queries because it sends only a handful of chunks to the model, but it is less suited to comprehensive analysis that must cover every section. Sliding windows provide more thorough coverage, which is why we chose them for our requirements analyzer.

PRACTICAL RECOMMENDATIONS FOR IMPLEMENTATION

Based on our experience building the requirements analyzer, several practical recommendations emerge. First, profile your specific use case carefully. We found that a window size of 900 tokens with a 450-token stride worked well for GPT-2, but different models or document types may need different values.

Implement robust tokenization that respects document structure. Our analyzer could be improved by aligning window boundaries with requirement sections rather than arbitrary token counts. This would reduce fragmentation of related requirements.
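
One way to do this is to split on section boundaries first and pack whole sections into each window, falling back to token-level splitting only when a single section is too large. A minimal sketch, assuming sections are separated by blank lines and that a GPT-2 tokenizer is passed in:

def structure_aware_windows(document, tokenizer, window_size=900):
    """
    Sketch: build windows that end on section boundaries where possible.
    Sections are assumed to be separated by blank lines; adjust the split
    rule to match your documents (e.g. requirement headers).
    A single section longer than window_size would still need token-level
    splitting, which is omitted here for brevity.
    """
    sections = [s for s in document.split('\n\n') if s.strip()]
    windows, current, current_len = [], [], 0

    for section in sections:
        section_len = len(tokenizer.encode(section))
        if current and current_len + section_len > window_size:
            # Close the current window at a section boundary
            windows.append('\n\n'.join(current))
            current, current_len = [], 0
        current.append(section)
        current_len += section_len

    if current:
        windows.append('\n\n'.join(current))
    return windows

A call such as structure_aware_windows(document, analyzer.tokenizer) could then replace the purely token-count-based splitting, assuming the analyzer exposes its tokenizer as a tokenizer attribute.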

Build comprehensive logging and monitoring. Our production analyzer tracks the number of windows processed, which helps identify performance bottlenecks. In a real system, you would also track cache hit rates, memory buffer utilization, and processing time per window.
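
What that instrumentation could look like is sketched below; the class and its field names are ours, and the analyzer would need to call record_window once per window in its processing loop:

from dataclasses import dataclass, field

@dataclass
class WindowProcessingStats:
    """Sketch of per-run instrumentation for a sliding-window analyzer."""
    windows_processed: int = 0
    cache_hits: int = 0
    cache_misses: int = 0
    memory_tokens_used: int = 0        # last observed memory-buffer size
    window_times: list = field(default_factory=list)

    def record_window(self, seconds, cache_hit=False, memory_tokens=0):
        # Call this once per processed window from the analyzer's main loop.
        self.windows_processed += 1
        self.window_times.append(seconds)
        self.memory_tokens_used = memory_tokens
        if cache_hit:
            self.cache_hits += 1
        else:
            self.cache_misses += 1

    def summary(self):
        total = self.cache_hits + self.cache_misses
        hit_rate = self.cache_hits / total if total else 0.0
        avg_time = (sum(self.window_times) / len(self.window_times)
                    if self.window_times else 0.0)
        return (f"{self.windows_processed} windows, "
                f"cache hit rate {hit_rate:.0%}, "
                f"avg {avg_time:.2f}s per window, "
                f"memory buffer at {self.memory_tokens_used} tokens")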

Consider adaptive stride selection. Sections with high requirement density (like our security requirements section) might benefit from more overlap, while sparse sections (like the introduction) could use less overlap.
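
A simple heuristic for this (a sketch, not something the current analyzer implements) is to derive the next stride from how many requirement IDs appear in the window just processed:

import re

def adaptive_stride(window_text, base_stride=450, min_stride=300, max_stride=600):
    """
    Sketch: choose the next stride from the requirement density of the
    current window. Dense sections get more overlap (smaller stride),
    sparse sections less. The thresholds are illustrative only.
    """
    req_count = len(re.findall(r'REQ-[A-Z]+-\d+', window_text))
    if req_count >= 10:
        return min_stride      # dense section: overlap more
    if req_count <= 2:
        return max_stride      # sparse section: overlap less
    return base_stride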

Test aggregation logic extensively. Our conflict analysis required special handling to combine findings from different windows. Each analysis type needed its own aggregation strategy.

When using memory buffers, implement refresh strategies. Our analyzer could be enhanced to periodically regenerate the memory summary, ensuring it remains coherent rather than becoming a disconnected list of facts.
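
One simple refresh strategy is to ask the model to rewrite its own buffer as a single summary every few windows. A sketch that reuses the generate_response helper defined earlier (the refresh_memory_buffer name is ours, and deciding when to call it is left to the surrounding loop):

def refresh_memory_buffer(self, memory_text):
    """
    Sketch: rewrite an accumulated memory buffer as one coherent summary
    instead of letting it grow into a list of disconnected facts.
    How the buffer is stored and how often this runs is up to the caller.
    """
    prompt = (
        "Rewrite the following notes as a single short, coherent summary "
        "that preserves requirement IDs and key constraints:\n\n"
        f"{memory_text}\n\nSummary:"
    )
    return self.generate_response(prompt, max_new_tokens=100)

TechnicalDocumentAnalyzer.refresh_memory_buffer = refresh_memory_buffer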

CONCLUSION: THE ROLE OF SLIDING WINDOWS IN MODERN NLP

Our complete requirements analyzer demonstrates that sliding context windows are a pragmatic, powerful solution to the fundamental constraint of limited context in transformer models. We built a production-ready application that can analyze requirements documents of any length, identifying conflicts, extracting security requirements, analyzing performance constraints, and providing comprehensive overviews.

The techniques we implemented - basic sliding windows, memory accumulation, selective attention, hierarchical processing, and caching - work together to create a system that is both efficient and effective. Our analyzer processes only the windows it needs, maintains context across the entire document, and produces actionable results.

For practitioners working with LLMs today, sliding windows are an essential tool. They enabled us to build an application that would be impossible with simple truncation or naive chunking. By understanding the mechanics, benefits, and limitations of sliding windows, you can design systems that effectively process long documents while managing computational costs and maintaining result quality.

The complete code we developed provides a foundation for building sophisticated sliding window applications. Whether you are analyzing requirements documents, summarizing research papers, or building conversational agents with long-term memory, these principles and implementations will help you unlock the full potential of large language models for long-form text processing.

A Happy New Year 2026



Dear readers,


Thanks for reading my blog so frequently in 2025. As a present, I'll give you an HTML file created by Gemini 2.5 Flash.

Please save and open the following HTML page. It is safe to do so!



<!DOCTYPE html>

<html lang="en">

<head>

    <meta charset="UTF-8">

    <meta name="viewport" content="width=device-width, initial-scale=1.0">

    <title>Happy New Year! - Dynamic Scenery</title>

    <style>

        body {

            margin: 0;

            overflow: hidden; /* Hide scrollbars */

            background: linear-gradient(to bottom, #0a0a2a 0%, #2c0a4e 100%); /* Dark night sky */

            font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;

            display: flex;

            justify-content: center;

            align-items: center;

            min-height: 100vh;

            position: relative;

            color: #fff; /* Default text color */

        }


        /* Canvas for fireworks */

        #fireworksCanvas {

            position: absolute;

            top: 0;

            left: 0;

            width: 100%;

            height: 100%;

            z-index: 1; /* Above city, below text */

        }


        /* "Happy New Year!" Text */

        #new-year-text {

            position: absolute;

            top: 20%; /* Adjust vertical position */

            left: 50%;

            transform: translate(-50%, -50%);

            font-size: 5em; /* Large font size */

            font-weight: bold;

            text-shadow: 0 0 15px rgba(255, 255, 255, 0.8), 0 0 30px rgba(255, 255, 255, 0.6), 0 0 45px rgba(255, 255, 255, 0.4);

            letter-spacing: 5px;

            z-index: 2; /* On top of everything */

            animation: pulseText 3s infinite alternate; /* Subtle breathing effect */

            white-space: nowrap; /* Prevent text wrapping */

        }


        #new-year-text span {

            display: block; /* Ensures text shadow applies properly */

            background: linear-gradient(45deg, #ffd700, #ff8c00, #ff4500); /* Gold/Orange gradient */

            -webkit-background-clip: text;

            -webkit-text-fill-color: transparent;

            background-clip: text; /* Standard property */

            color: transparent; /* Fallback for browsers without -webkit-text-fill-color */

        }


        @keyframes pulseText {

            0% {

                transform: translate(-50%, -50%) scale(1);

                opacity: 0.9;

            }

            100% {

                transform: translate(-50%, -50%) scale(1.02);

                opacity: 1;

            }

        }


        /* City Silhouette at the bottom */

        #city-silhouette {

            position: absolute;

            bottom: 0;

            left: 0;

            width: 100%;

            height: 150px; /* Height of the silhouette */

            background: #000; /* Solid black silhouette */

            z-index: 0; /* Behind fireworks */

            /* Using a simple SVG mask for a jagged city skyline effect */

            mask-image: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"><path fill="black" d="M0 100 L0 50 L10 50 L10 70 L20 70 L20 40 L30 40 L30 80 L40 80 L40 60 L50 60 L50 90 L60 90 L60 70 L70 70 L70 50 L80 50 L80 80 L90 80 L90 60 L100 60 L100 100 Z"/></svg>');

            mask-size: 100% 100%;

            mask-repeat: no-repeat;

            -webkit-mask-image: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"><path fill="black" d="M0 100 L0 50 L10 50 L10 70 L20 70 L20 40 L30 40 L30 80 L40 80 L40 60 L50 60 L50 90 L60 90 L60 70 L70 70 L70 50 L80 50 L80 80 L90 80 L90 60 L100 60 L100 100 Z"/></svg>');

            -webkit-mask-size: 100% 100%;

            -webkit-mask-repeat: no-repeat;

        }


        /* Twinkling Stars (added via JavaScript) */

        .star {

            position: absolute;

            background-color: white;

            border-radius: 50%;

            animation: twinkle 2s infinite alternate;

            opacity: 0; /* Start invisible */

            z-index: 0; /* Behind fireworks */

        }


        @keyframes twinkle {

            0% { opacity: 0; transform: scale(0.5); }

            50% { opacity: 1; transform: scale(1); }

            100% { opacity: 0; transform: scale(0.5); }

        }

    </style>

</head>

<body>

    <div id="new-year-text">

        <span>Happy New Year!</span>

    </div>

    <canvas id="fireworksCanvas"></canvas>

    <div id="city-silhouette"></div>


    <script>

        const canvas = document.getElementById('fireworksCanvas');

        const ctx = canvas.getContext('2d');


        let fireworks = [];

        let particles = [];


        // Set canvas dimensions to full window size

        function setCanvasSize() {

            canvas.width = window.innerWidth;

            canvas.height = window.innerHeight;

        }

        setCanvasSize();

        window.addEventListener('resize', setCanvasSize);


        // Utility function to get a random vibrant color

        function getRandomColor() {

            const hue = Math.floor(Math.random() * 360);

            return `hsl(${hue}, 100%, 70%)`; // High saturation, medium lightness for vibrancy

        }


        // Particle class for firework trails and explosion fragments

        class Particle {

            constructor(x, y, vx, vy, color, alphaDecay, size) {

                this.x = x;

                this.y = y;

                this.vx = vx;

                this.vy = vy;

                this.color = color;

                this.alpha = 1;

                this.alphaDecay = alphaDecay; // How fast the particle fades

                this.size = size;

                this.gravity = 0.05; // Affects how particles fall

                this.friction = 0.99; // Slows down particles over time

            }


            update() {

                this.vx *= this.friction;

                this.vy *= this.friction;

                this.vy += this.gravity; // Apply gravity

                this.x += this.vx;

                this.y += this.vy;

                this.alpha -= this.alphaDecay; // Fade out

            }


            draw() {

                ctx.save();

                ctx.globalAlpha = Math.max(0, this.alpha); // Ensure alpha doesn't go below 0

                ctx.beginPath();

                ctx.arc(this.x, this.y, this.size, 0, Math.PI * 2);

                ctx.fillStyle = this.color;

                ctx.shadowBlur = this.size * 2; // Create a glow effect

                ctx.shadowColor = this.color;

                ctx.fill();

                ctx.restore();

            }

        }


        // Firework class for the rocket and its explosion logic

        class Firework {

            constructor(startX, startY, targetX, targetY) {

                this.x = startX;

                this.y = startY;

                this.targetX = targetX;

                this.targetY = targetY;


                // Calculate velocity to reach target

                const angle = Math.atan2(targetY - startY, targetX - startX);

                this.speed = 5; // Initial rocket speed

                this.vx = Math.cos(angle) * this.speed;

                this.vy = Math.sin(angle) * this.speed;


                this.color = getRandomColor();

                this.trail = []; // Particles forming the rocket's trail

                this.exploded = false;

            }


            update() {

                if (!this.exploded) {

                    // Check if firework has reached its target

                    const distanceToTarget = Math.sqrt((this.x - this.targetX)**2 + (this.y - this.targetY)**2);

                    if (distanceToTarget <= this.speed) {

                        this.exploded = true;

                        this.explode(); // Trigger explosion

                    } else {

                        this.x += this.vx;

                        this.y += this.vy;

                        // Add particles to the rocket's trail

                        this.trail.push(new Particle(this.x, this.y, 0, 0, this.color, 0.02, 1.5));

                        if (this.trail.length > 20) { // Limit trail length for performance

                            this.trail.shift();

                        }

                    }

                }

                // Update and remove faded trail particles

                for (let i = this.trail.length - 1; i >= 0; i--) {

                    this.trail[i].update();

                    if (this.trail[i].alpha <= 0.05) {

                        this.trail.splice(i, 1);

                    }

                }

            }


            draw() {

                if (!this.exploded) {

                    // Draw the rocket itself

                    ctx.save();

                    ctx.beginPath();

                    ctx.arc(this.x, this.y, 3, 0, Math.PI * 2);

                    ctx.fillStyle = this.color;

                    ctx.shadowBlur = 10;

                    ctx.shadowColor = this.color;

                    ctx.fill();

                    ctx.restore();

                }

                // Draw the rocket's trail

                this.trail.forEach(p => p.draw());

            }


            explode() {

                const numParticles = 80 + Math.random() * 50; // Random number of explosion particles

                for (let i = 0; i < numParticles; i++) {

                    const angle = Math.random() * Math.PI * 2; // Random direction

                    const speed = Math.random() * 8 + 2; // Random speed

                    const vx = Math.cos(angle) * speed;

                    const vy = Math.sin(angle) * speed;

                    const size = Math.random() * 3 + 1; // Random size

                    particles.push(new Particle(this.x, this.y, vx, vy, this.color, 0.02, size));

                }

            }

        }


        // Main animation loop

        function animate() {

            requestAnimationFrame(animate); // Call animate again on next frame


            // Clear canvas with a fading effect to create trails

            ctx.globalCompositeOperation = 'destination-out';

            ctx.fillStyle = 'rgba(0, 0, 0, 0.1)'; // Fades out previous frames

            ctx.fillRect(0, 0, canvas.width, canvas.height);

            ctx.globalCompositeOperation = 'lighter'; // Blend mode for glowing particles


            // Update and draw fireworks

            for (let i = fireworks.length - 1; i >= 0; i--) {

                fireworks[i].update();

                fireworks[i].draw();


                // Remove firework if it exploded and its trail has faded

                if (fireworks[i].exploded && fireworks[i].trail.length === 0) {

                    fireworks.splice(i, 1);

                }

            }


            // Update and draw explosion particles

            for (let i = particles.length - 1; i >= 0; i--) {

                particles[i].update();

                particles[i].draw();


                // Remove particle if it has faded out

                if (particles[i].alpha <= 0.05) {

                    particles.splice(i, 1);

                }

            }

        }


        // Function to launch a new firework from the bottom center

        function launchFirework() {

            const startX = canvas.width / 2;

            const startY = canvas.height; // Start from bottom

            const targetX = Math.random() * canvas.width * 0.8 + canvas.width * 0.1; // Random target X (within 10%-90% of width)

            const targetY = Math.random() * canvas.height * 0.5 + canvas.height * 0.1; // Random target Y (within 10%-60% of height)

            fireworks.push(new Firework(startX, startY, targetX, targetY));

        }


        // Launch fireworks periodically

        setInterval(launchFirework, 1000); // Launch a new firework every 1 second


        // Initial launch to start the show

        launchFirework();


        // Start the animation loop

        animate();


        // Optional: Add twinkling stars to the background using JS

        function createStars(numStars) {

            const starsContainer = document.body;

            for (let i = 0; i < numStars; i++) {

                const star = document.createElement('div');

                star.classList.add('star');

                star.style.width = star.style.height = `${Math.random() * 2 + 1}px`; // Random size 1-3px

                star.style.top = `${Math.random() * 100}%`;

                star.style.left = `${Math.random() * 100}%`;

                star.style.animationDelay = `${Math.random() * 2}s`; // Stagger animation start

                star.style.animationDuration = `${Math.random() * 1.5 + 1.5}s`; // Random animation duration

                starsContainer.appendChild(star);

            }

        }

        createStars(100); // Create 100 twinkling stars

    </script>

</body>

</html>