# Testing
Effective testing and debugging are essential for building reliable applications. This guide covers strategies for testing and debugging complex flows, and monitoring applications in production.
## Testing Approaches
BrainyFlow supports multiple testing approaches to ensure your applications work correctly:
### Unit Testing (Nodes)
Individual nodes can be tested in isolation to verify their behavior:
```python
import unittest
from unittest.mock import AsyncMock, patch

from brainyflow import Node
# from my_nodes import SummarizeNode  # The node under test (hypothetical import path)

class TestSummarizeNode(unittest.IsolatedAsyncioTestCase):
    async def test_summarize_node(self):
        # Create the node
        summarize_node = SummarizeNode()

        # Create the initial shared store
        memory = {"text": "This is a long text that needs to be summarized."}

        # Mock the LLM call
        with patch('utils.call_llm', new_callable=AsyncMock) as mock_llm:
            mock_llm.return_value = "Short summary."

            # Run the node
            await summarize_node.run(memory)

            # Verify the node called the LLM with the right prompt
            mock_llm.assert_called_once()
            call_args = mock_llm.call_args[0][0]
            self.assertIn("summarize", call_args.lower())

            # Verify the result was stored correctly in the shared store
            self.assertEqual(memory["summary"], "Short summary.")

if __name__ == "__main__":
    # IsolatedAsyncioTestCase lets the standard unittest runner execute async tests
    unittest.main()
```
### Integration Testing (Flows)
Test complete flows to verify that nodes work together correctly:
```python
import unittest
from unittest.mock import AsyncMock, patch

from brainyflow import Flow
# from my_flows import create_qa_flow  # Flow factory under test (hypothetical import path)

class TestQuestionAnsweringFlow(unittest.IsolatedAsyncioTestCase):
    async def test_qa_flow(self):
        # Create the flow
        qa_flow = create_qa_flow()

        # Create the initial shared store
        memory = {"question": "What is the capital of France?"}

        # Mock all LLM calls
        with patch('utils.call_llm', new_callable=AsyncMock) as mock_llm:
            # Configure the mock to return different values for different prompts
            def mock_llm_side_effect(prompt):
                if "search" in prompt.lower():
                    return "Paris is the capital of France."
                elif "answer" in prompt.lower():
                    return "The capital of France is Paris."
                return "Unexpected prompt"

            mock_llm.side_effect = mock_llm_side_effect

            # Run the flow
            await qa_flow.run(memory)

            # Verify the final answer
            self.assertEqual(memory["answer"], "The capital of France is Paris.")

            # Verify the LLM was called the expected number of times
            self.assertEqual(mock_llm.call_count, 2)

if __name__ == '__main__':
    unittest.main()
```
### Testing Retry Logic
To test retry behavior:
1. **Simulate Transient Failures**: Make the mock function fail a few times before succeeding.
2. **Check Retry Count**: Verify that retries happened the expected number of times (e.g., by checking `node.cur_retry` inside the mock or tracking calls).
3. **Test Backoff**: If using `wait`, mock `asyncio.sleep` (Python) or `setTimeout` (TypeScript) to verify delays without actually waiting.
```python
import asyncio
from unittest.mock import AsyncMock, patch

from brainyflow import Node

# Mock function that fails twice, then succeeds
call_count_retry = 0

async def mock_fails_then_succeeds(*args, **kwargs):
    global call_count_retry
    call_count_retry += 1
    print(f"Mock called (Attempt {call_count_retry})")  # For debugging the test
    if call_count_retry <= 2:
        raise ValueError("Temporary network failure")
    return "Success on third try"

async def some_external_call(data):
    """Stand-in for the real external call; patched in the test below."""
    raise NotImplementedError

class NodeWithRetry(Node):
    def __init__(self):
        super().__init__(max_retries=3, wait=0.1)  # Up to 3 attempts (1 initial + 2 retries)

    async def exec(self, prep_res):
        # This method calls the function we will mock
        return await some_external_call(prep_res)

    async def post(self, memory, prep_res, exec_res):
        memory.result = exec_res  # Store the final result in the shared store

async def test_retry_logic():
    global call_count_retry
    call_count_retry = 0  # Reset counter for the test

    node = NodeWithRetry()
    memory = {}

    # Patch the external call made within node.exec (assumes this test lives in __main__).
    # Also patch asyncio.sleep to verify backoff without actually waiting.
    with patch('__main__.some_external_call', new=AsyncMock(side_effect=mock_fails_then_succeeds)), \
         patch('asyncio.sleep', new=AsyncMock()) as mock_sleep:
        await node.run(memory)

    # Assertions
    assert call_count_retry == 3  # Called 3 times (1 initial + 2 retries)
    assert memory["result"] == "Success on third try"  # Check final result
    assert mock_sleep.call_count == 2  # sleep was called between retries

# asyncio.run(test_retry_logic())
```
### Test Fixtures and Helpers
Creating helper functions can make tests more readable and maintainable.
```python
# Example helpers (can be placed in a conftest.py for pytest or a base class for unittest)
from brainyflow import Node

def create_default_test_memory() -> dict:
    """Creates a standard dictionary for test memory."""
    return {"input": "test data", "config": {"setting": "value"}}

async def run_node_with_memory(node: Node, initial_memory: dict | None = None) -> dict:
    """Runs a node with provided or default initial memory."""
    memory_obj = initial_memory if initial_memory is not None else create_default_test_memory()
    # node.run mutates the shared store dictionary in place
    await node.run(memory_obj)
    return memory_obj

def assert_memory_contains(memory: dict, expected_data: dict):
    """Asserts that the memory dictionary contains the expected key-value pairs."""
    for key, value in expected_data.items():
        assert key in memory, f"Memory missing key: {key}"
        assert memory[key] == value, (
            f"Memory value mismatch for key '{key}': expected {value}, got {memory[key]}"
        )

# Example usage in a test
# async def test_my_node_output():
#     node = MyProcessingNode()
#     final_memory = await run_node_with_memory(node)
#     assert_memory_contains(final_memory, {"output": "processed data", "status": "completed"})
```
## Common Testing Patterns

### 1. Input Validation Testing
Test that nodes properly handle invalid or unexpected inputs.
```python
# Requires: pip install pytest pytest-asyncio
import pytest

# from my_nodes import MyNodeThatValidates  # Your node under test

@pytest.mark.parametrize("invalid_input", [None, "", {}, [], {"wrong_key": 1}])
@pytest.mark.asyncio
async def test_node_handles_invalid_input(invalid_input):
    """Tests if the node handles various invalid inputs gracefully."""
    node = MyNodeThatValidates()  # Node that should validate memory["input_data"]
    memory = {"input_data": invalid_input}  # Pass invalid data

    # Expect the node to run without unhandled exceptions
    # and potentially set an error state or default output
    await node.run(memory)

    # Example assertions: check for an error flag or a specific state
    assert memory.get("error_message") is not None or memory.get("status") == "validation_failed"
    # Or assert that a default value was set
    # assert memory.get("output") == "default_value"
```
### 2. Flow Path Testing
Test that flows follow the expected paths based on node triggers.
```python
import asyncio

from brainyflow import Flow, Node

async def test_flow_follows_correct_path():
    """Tests if the flow executes nodes in the expected sequence."""
    visited_nodes_log = []

    # Define simple tracking nodes
    class SimpleTrackingNode(Node):
        def __init__(self, name: str, trigger_action: str = "default"):
            super().__init__()
            self._node_name = name
            self._trigger_action = trigger_action

        async def exec(self, prep_res):
            # No real work, just track the visit
            visited_nodes_log.append(self._node_name)
            return f"Processed by {self._node_name}"  # Return something for post

        async def post(self, memory, prep_res, exec_res):
            # Trigger the specified action
            self.trigger(self._trigger_action)

    # Create nodes for a simple path: A -> B, with C left unreached
    node_a = SimpleTrackingNode("A", trigger_action="next_step")
    node_b = SimpleTrackingNode("B", trigger_action="finish")
    node_c = SimpleTrackingNode("C")  # This node shouldn't be reached

    # Connect nodes based on actions
    node_a.on("next_step", node_b)
    node_b.on("other_path", node_c)  # C is only reachable via 'other_path', which B never triggers

    # Create and run the flow
    flow = Flow(start=node_a)
    await flow.run({})  # Pass an empty shared store

    # Verify the execution path
    assert visited_nodes_log == ["A", "B"], f"Expected A->B, but got: {visited_nodes_log}"

# asyncio.run(test_flow_follows_correct_path())
```
## Best Practices

### Testing Best Practices

1. **Test Each Node Individually**: Verify that each node performs its specific task correctly.
2. **Test Flows as Integration Tests**: Ensure nodes work together as expected.
3. **Mock External Dependencies**: Use mocks for LLMs, APIs, and databases to ensure consistent testing.
4. **Test Error Handling**: Explicitly test how your application handles failures (see the sketch after this list).
5. **Automate Tests**: Include BrainyFlow tests in your CI/CD pipeline.
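
For example, error handling can be tested by making the mocked dependency fail on every attempt and asserting that the node degrades gracefully. This is a minimal sketch, not a definitive recipe: it assumes a hypothetical `ResilientNode` and `fetch_data` helper, and that BrainyFlow's `exec_fallback` hook runs after retries are exhausted with its return value taking the place of `exec`'s result.

```python
import asyncio
from unittest.mock import AsyncMock, patch

from brainyflow import Node

async def fetch_data(data):
    """Hypothetical external call; patched in the test."""
    raise NotImplementedError

class ResilientNode(Node):
    """Hypothetical node that degrades gracefully when its dependency keeps failing."""
    def __init__(self):
        super().__init__(max_retries=2)

    async def exec(self, prep_res):
        return await fetch_data(prep_res)  # The call we will mock

    async def exec_fallback(self, prep_res, exc):
        # Called after all retries are exhausted; return a safe default instead of raising
        return {"error": str(exc)}

    async def post(self, memory, prep_res, exec_res):
        memory.result = exec_res

async def test_error_handling():
    memory = {}
    # Make the dependency fail on every attempt (assumes this test lives in __main__)
    with patch('__main__.fetch_data', new=AsyncMock(side_effect=ConnectionError("API down"))):
        await ResilientNode().run(memory)
    # The fallback result, not an exception, should end up in the shared store
    assert memory["result"] == {"error": "API down"}

# asyncio.run(test_error_handling())
```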
### Debugging Best Practices

1. **Start Simple**: Begin with a minimal flow and add complexity incrementally.
2. **Visualize Your Flow**: Generate flow diagrams to understand the structure.
3. **Isolate Issues**: Test individual nodes to narrow down problems.
4. **Check Shared Store**: Verify that data is correctly passed between nodes (see the sketch after this list).
5. **Monitor Actions**: Ensure nodes are triggering the expected actions.
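
One way to check the shared store mid-flow is to splice in a pass-through node that logs the store between steps. A minimal sketch, assuming a hypothetical `DebugLogNode` (the printed representation of the memory proxy may vary; access specific keys if the full dump is unhelpful):

```python
from brainyflow import Node

class DebugLogNode(Node):
    """Hypothetical pass-through node that logs the shared store between steps."""
    def __init__(self, label: str):
        super().__init__()
        self._label = label

    async def prep(self, memory):
        # Dump the current shared-store contents for inspection
        print(f"[{self._label}] memory: {memory}")

    async def post(self, memory, prep_res, exec_res):
        self.trigger("default")  # Forward to the next node unchanged

# Usage: splice between two nodes while debugging, then remove afterwards
# debug = DebugLogNode("after A")
# node_a >> debug
# debug >> node_b
```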
### Monitoring Best Practices

1. **Monitor Node Performance**: Track execution time for each node (see the sketch after this list).
2. **Watch for Bottlenecks**: Identify nodes that take longer than expected.
3. **Track Error Rates**: Monitor how often nodes and flows fail.
4. **Set Up Alerts**: Configure alerts for critical failures.
5. **Log Judiciously**: Log important events without overwhelming storage.
6. **Implement Distributed Tracing**: Use tracing for complex, distributed applications.
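
As a starting point for performance monitoring, node runs can be timed from the outside without touching node internals. A minimal sketch, assuming a hypothetical `run_timed` helper that wraps `node.run` with `time.perf_counter`:

```python
import time

from brainyflow import Node

async def run_timed(node: Node, memory: dict, metrics: dict) -> dict:
    """Hypothetical helper: runs a node and records its wall-clock duration."""
    start = time.perf_counter()
    await node.run(memory)
    elapsed = time.perf_counter() - start
    # Accumulate durations per node class for later analysis or alerting
    metrics.setdefault(type(node).__name__, []).append(elapsed)
    return memory

# Usage:
# metrics: dict[str, list[float]] = {}
# await run_timed(SummarizeNode(), {"text": "..."}, metrics)
# print(metrics)  # e.g. {"SummarizeNode": [0.42]}
```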
By applying these testing, debugging, and monitoring practices, you can ensure your BrainyFlow applications are reliable and maintainable.