April 7, 2024 · 8 min read

Query Context Injection: How Attanix Improves LLM Prompt Construction

LLMs · Prompt Engineering · Context · Tutorial

Effective prompt engineering is crucial for getting the best results from large language models. However, traditional approaches often struggle with maintaining context and relevance. This guide shows how Attanix's query context injection can transform your prompt construction process.

The Challenge of Context in LLM Prompts

Traditional prompt engineering faces several limitations:

  1. Context Loss: Important background information falls out of the prompt as conversations grow
  2. Relevance Issues: Unrelated information dilutes the prompt and wastes tokens
  3. Static Context: Prompts don't adapt as the situation changes
  4. Memory Gaps: Previous interactions aren't carried forward effectively

How Attanix Enhances Prompt Construction

Attanix's query context injection provides:

  1. Dynamic Context: Automatically injects relevant information
  2. Salience-Based Selection: Prioritizes the most important context
  3. Temporal Awareness: Considers timing and recency
  4. Relationship Mapping: Maintains connections between concepts

Implementation Guide

Here's how to implement query context injection:

from attanix import MemorySystem
from attanix.prompt import ContextInjector

# Initialize Attanix and context injector
memory = MemorySystem()
injector = ContextInjector(memory)

# Basic context injection
async def build_prompt(query):
    # Retrieve up to 500 tokens of context, weighted toward recent memories
    context = await injector.get_context(
        query=query,
        max_tokens=500,
        recency_weight=0.7
    )

    # Assemble the final prompt (a flush-left f-string avoids the leading
    # whitespace an indented triple-quoted string would inject)
    return f"Context: {context}\n\nQuestion: {query}"

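To exercise build_prompt end to end, run it from an async entry point. This is a minimal usage sketch; the resulting string is what you would hand to your LLM client of choice.

import asyncio

async def main():
    prompt = await build_prompt("How does salience-based selection work?")
    print(prompt)  # pass this string to your LLM client

asyncio.run(main())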
Advanced Context Injection Patterns

  1. Multi-Source Context
async def get_multi_source_context(query):
    # Get context from different sources
    conversation_context = await injector.get_conversation_context()
    document_context = await injector.get_document_context()
    user_context = await injector.get_user_context()
    
    # Combine and prioritize
    combined = await injector.combine_contexts(
        contexts=[conversation_context, document_context, user_context],
        weights=[0.4, 0.3, 0.3]
    )
    
    return combined
  2. Dynamic Context Adjustment
async def adjust_context(query, initial_context):
    # Analyze query complexity
    complexity = await injector.analyze_complexity(query)
    
    # Adjust context based on complexity
    if complexity > 0.8:
        # Add more detailed context
        additional_context = await injector.get_detailed_context()
        return await injector.merge_contexts(initial_context, additional_context)
    else:
        # Simplify context
        return await injector.simplify_context(initial_context)
  3. Context Validation
async def validate_context(context, query):
    # Check relevance
    relevance = await injector.check_relevance(context, query)
    if relevance < 0.6:
        # Get more relevant context
        return await injector.get_alternative_context(query)
    
    # Check coherence
    coherence = await injector.check_coherence(context)
    if coherence < 0.7:
        # Improve context structure
        return await injector.restructure_context(context)
    
    return context
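
These three patterns compose naturally: gather context from multiple sources, adjust it to the query's complexity, then validate it before use. The pipeline below is one way to wire together the functions defined above; it is a sketch, not a prescribed Attanix workflow. Note that the three get_*_context calls in get_multi_source_context are independent, so they could also run concurrently via asyncio.gather.

async def build_validated_prompt(query):
    # 1. Pull context from conversation, documents, and the user profile
    context = await get_multi_source_context(query)

    # 2. Expand or simplify it to match the query's complexity
    context = await adjust_context(query, context)

    # 3. Verify relevance and coherence before spending tokens on it
    context = await validate_context(context, query)

    return f"Context: {context}\n\nQuestion: {query}"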

Best Practices

  1. Context Selection

    • Balance relevance and diversity
    • Consider temporal factors
    • Account for user preferences
    • Monitor context quality
  2. Token Management

    • Set appropriate token limits
    • Implement smart truncation (see the sketch after this list)
    • Use compression when needed
    • Monitor token usage
  3. Performance Optimization

    • Cache frequent contexts
    • Batch context retrieval
    • Implement lazy loading
    • Use parallel processing
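
To make the token-management advice concrete, here is a minimal, library-agnostic sketch of smart truncation: candidate snippets are scored by a blend of relevance and recency, then greedily packed under a token budget. The ContextSnippet shape, the scoring weights, and the whitespace token estimate are illustrative assumptions, not Attanix APIs.

from dataclasses import dataclass

@dataclass
class ContextSnippet:
    text: str
    relevance: float   # 0..1, e.g. from a similarity search
    recency: float     # 0..1, where 1.0 is the most recent

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer such as tiktoken;
    # whitespace splitting gives a rough approximation
    return len(text.split())

def pack_context(snippets, max_tokens=500, recency_weight=0.3):
    # Blend relevance and recency into a single priority score
    scored = sorted(
        snippets,
        key=lambda s: (1 - recency_weight) * s.relevance + recency_weight * s.recency,
        reverse=True,
    )
    # Greedily keep the highest-priority snippets that fit the budget
    chosen, used = [], 0
    for snippet in scored:
        cost = estimate_tokens(snippet.text)
        if used + cost <= max_tokens:
            chosen.append(snippet.text)
            used += cost
    return "\n".join(chosen)

For caching, a small TTL cache (for example, aiocache in async code) placed in front of the retrieval call covers the most frequent queries without letting stale context linger indefinitely.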

Real-World Examples

One practical application is a customer-support assistant that grounds each reply in the user's recent conversation history and the most relevant documentation.
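
The loop below sketches how context injection might slot into such an assistant. The llm_complete function is a hypothetical stand-in for whatever completion API you use, and the memory.store call is an assumption about Attanix's write API; build_prompt is the helper defined earlier.

async def support_chat_turn(user_message):
    # Build a context-enriched prompt for this turn
    prompt = await build_prompt(user_message)

    # Placeholder for your LLM client's completion call
    reply = await llm_complete(prompt)

    # Store the exchange so future turns can retrieve it as context
    # (store() is assumed here; consult the Attanix docs for the actual API)
    await memory.store(f"User: {user_message}\nAssistant: {reply}")

    return reply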

Next Steps

Ready to enhance your LLM prompts with better context? Check out our documentation or try our context injection guide.
