action.py

Action Processor Documentation

Overview

The ActionProcessor module provides semantic analysis and action-inference capabilities, combining natural language processing (via SpaCy) with lightweight neural-network techniques such as weighted features and ReLU activation. It estimates the probability that a given text describes an action through contextual understanding rather than simple keyword matching.

Table of Contents

  • Installation

  • Core Components

  • Implementation Details

  • Usage Guide

  • API Reference

  • Performance Considerations

  • Examples

  • Best Practices

  • Contributing

Installation

Requirements

pip install spacy
pip install scikit-learn
pip install numpy
pip install scipy
python -m spacy download en_core_web_sm

Dependencies

import numpy as np
from typing import List, Dict, Any, Tuple
from sklearn.feature_extraction.text import TfidfVectorizer
import spacy
from scipy.special import softmax

Core Components

1. Semantic Analysis

  • SpaCy-based linguistic processing

  • Verb pattern recognition

  • Object relationship mapping

  • Contextual relevance scoring (see the sketch below)
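
A minimal sketch of this extraction step, using the en_core_web_sm model from the installation step. The helper name extract_semantics is illustrative, not part of the module's API:

import spacy

nlp = spacy.load("en_core_web_sm")

def extract_semantics(text: str):
    """Illustrative helper: collect verbs and the objects they act on."""
    doc = nlp(text)
    verbs = [token.lemma_ for token in doc if token.pos_ == "VERB"]
    # Direct and prepositional objects approximate the action's targets
    objects = [token.text for token in doc if token.dep_ in ("dobj", "pobj")]
    return verbs, objects

verbs, objects = extract_semantics("Deploy the application to production")
# e.g. verbs == ['deploy'], objects == ['application', 'production']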

2. Neural Processing

  • ReLU activation function

  • Feature weight management

  • Probability normalization

  • Confidence scoring (sketched below)
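
These building blocks are standard; a minimal sketch using the NumPy and SciPy imports listed under Dependencies:

import numpy as np
from scipy.special import softmax

def relu(x: np.ndarray) -> np.ndarray:
    """ReLU activation: zero out negative feature responses."""
    return np.maximum(0.0, x)

raw_scores = np.array([0.8, -0.2, 0.4])   # one raw score per feature
activated = relu(raw_scores)              # -> [0.8, 0.0, 0.4]
normalized = softmax(activated)           # normalized scores summing to 1.0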

3. Feature Engineering

features = {
    'verb_score': float,      # Verb presence and position
    'object_score': float,    # Object relationship strength
    'context_score': float    # Contextual relevance
}

Implementation Details

Feature Extraction

semantic_features = {
    'verb_strength': 0.4,    # Primary action indicator
    'object_impact': 0.3,    # Action target importance
    'context_score': 0.3     # Contextual relevance
}

Processing Pipeline

  1. Text Input → SpaCy Processing

  2. Feature Extraction → Weight Application

  3. ReLU Activation → Probability Calculation

  4. Semantic Analysis → Result Generation (steps 2 and 3 are condensed in the sketch below)
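
A condensed sketch of steps 2 and 3, assuming the semantic_features weights above; for simplicity the feature scores here are keyed by the same names as the weights. The function name action_probability is illustrative:

def action_probability(features: dict, weights: dict) -> float:
    """Weighted feature sum, passed through ReLU and clipped to [0, 1]."""
    raw = sum(features[name] * weights[name] for name in weights)
    activated = max(0.0, raw)     # ReLU activation
    return min(activated, 1.0)    # probability in [0, 1]

weights = {'verb_strength': 0.4, 'object_impact': 0.3, 'context_score': 0.3}
features = {'verb_strength': 0.9, 'object_impact': 0.7, 'context_score': 0.5}
action_probability(features, weights)   # -> 0.72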

Output Structure

{
    'action_probability': float,        # 0.0 to 1.0
    'is_action': bool,                 # threshold at 0.5
    'semantic_analysis': {
        'features': Dict[str, float],  # extracted features
        'key_verbs': List[str],        # identified verbs
        'key_objects': List[str],      # affected objects
        'confidence': float            # 0-100 score
    }
}

Usage Guide

Basic Usage

# Initialize processor
processor = ActionProcessor()

# Process single text
result = processor.process_text("Deploy the application to production")
print(f"Action Probability: {result['action_probability']}")

# Process batch of texts
texts = ["Configure the database", "The system is running"]
results = processor.batch_process(texts)

Weight Adjustment

# Training data format
training_data = [
    ("Implement new feature", True),
    ("System status report", False)
]

# Update weights
processor.update_weights(training_data)

API Reference

ActionProcessor Class

Constructor

def __init__(self)

Initializes processor with default weights and loads SpaCy model.
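
A minimal sketch of what initialization plausibly does, based on the defaults documented in Implementation Details (the exact internals may differ):

import spacy

class ActionProcessor:
    def __init__(self):
        # Load the small English model installed during setup
        self.nlp = spacy.load("en_core_web_sm")
        # Default feature weights from Implementation Details
        self.weights = {
            'verb_strength': 0.4,
            'object_impact': 0.3,
            'context_score': 0.3,
        }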

Core Methods

process_text(text: str) → Dict[str, Any]

Processes single text input.

  • Input: Text string

  • Output: Complete analysis dictionary

  • Performance: O(n) where n is text length

batch_process(texts: List[str]) → List[Dict[str, Any]]

Processes multiple texts.

  • Input: List of text strings

  • Output: List of analysis dictionaries

  • Performance: O(n·m) where n is the number of texts and m is the average text length (see the batching sketch below)
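
One way to keep batch scaling linear is SpaCy's nlp.pipe, which streams documents through the model in chunks. This is a hedged sketch; the real method may simply loop over process_text, and _analyze is a hypothetical internal helper:

def batch_process(self, texts: List[str]) -> List[Dict[str, Any]]:
    """Illustrative batching: parse all texts in one streamed pass."""
    results = []
    for doc in self.nlp.pipe(texts, batch_size=64):
        results.append(self._analyze(doc))   # _analyze is hypothetical
    return results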

update_weights(training_data: List[Tuple[str, bool]]) → None

Updates feature weights based on training examples.

  • Input: List of (text, is_action) pairs

  • Effect: Modifies internal weights

  • Learning Rate: 0.1 (an update sketch follows)
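
The source does not specify the update rule, so the perceptron-style form below is an assumption consistent with the documented learning rate of 0.1 and the Output Structure above:

LEARNING_RATE = 0.1   # documented default

def update_weights(self, training_data: List[Tuple[str, bool]]) -> None:
    for text, is_action in training_data:
        result = self.process_text(text)
        # Error: gap between the label and the predicted probability
        error = (1.0 if is_action else 0.0) - result['action_probability']
        features = result['semantic_analysis']['features']
        for name in self.weights:
            self.weights[name] += LEARNING_RATE * error * features.get(name, 0.0)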

Performance Considerations

Memory Usage

  • SpaCy model loading (~100MB)

  • Batch processing memory scaling

  • Feature vector size

Processing Speed

  • Single text: ~10ms

  • Batch processing: Linear scaling

  • Weight updates: O(n) for n training examples

Optimization Tips

  1. Batch process when possible

  2. Reuse processor instance

  3. Limit text length if needed

  4. Monitor memory usage

Examples

Detailed Analysis

processor = ActionProcessor()

text = "Deploy the new version to production servers"
result = processor.process_text(text)

print(f"""
Action Probability: {result['action_probability']:.2f}
Is Action: {result['is_action']}
Key Verbs: {', '.join(result['semantic_analysis']['key_verbs'])}
Confidence: {result['semantic_analysis']['confidence']}%
""")

Batch Processing

texts = [
    "Configure the database settings",
    "The system is online",
    "Update user permissions"
]

results = processor.batch_process(texts)
for text, result in zip(texts, results):
    print(f"{text}: {'Action' if result['is_action'] else 'Non-action'}")

Best Practices

Text Preparation

  1. Clean input text

  2. Remove unnecessary whitespace

  3. Handle special characters

  4. Normalize case if needed (see the helper sketch below)
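
These steps fit naturally into a small helper. A sketch; clean_text is illustrative and not part of the module:

import re

def clean_text(text: str, lowercase: bool = False) -> str:
    """Normalize whitespace and strip control characters before processing."""
    text = re.sub(r"[\x00-\x1f\x7f]", " ", text)   # drop control characters
    text = re.sub(r"\s+", " ", text).strip()       # collapse whitespace
    return text.lower() if lowercase else text

clean_text("  Deploy\tthe   app\n")   # -> 'Deploy the app'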

Processing

  1. Use batch processing for multiple texts

  2. Monitor confidence scores

  3. Validate results

  4. Handle edge cases (see the wrapper sketch below)
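
For example, a thin wrapper can enforce these checks. The function and the min_confidence default are hypothetical, not part of the API:

def safe_process(processor, text: str, min_confidence: float = 60.0):
    if not text or not text.strip():
        return None                  # edge case: empty or whitespace-only input
    result = processor.process_text(text)
    if result['semantic_analysis']['confidence'] < min_confidence:
        return None                  # low-confidence result; flag for review
    return result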

Weight Management

  1. Start with default weights

  2. Use representative training data

  3. Monitor weight changes

  4. Validate after updates (a monitoring sketch follows)
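
A sketch of monitoring weight drift across an update, assuming the processor exposes its weights as a dict attribute (not confirmed by the source):

before = dict(processor.weights)   # snapshot before training
processor.update_weights(training_data)
for name, old in before.items():
    new = processor.weights[name]
    print(f"{name}: {old:.3f} -> {new:.3f} (delta {new - old:+.3f})")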

Contributing

Guidelines for extending functionality:

  1. Maintain semantic focus

  2. Document feature additions

  3. Follow typing conventions

  4. Add test cases
