# Overview
BrainyFlow is built around a simple yet powerful abstraction: the nested directed graph with shared store. This mental model separates data flow from computation, making complex LLM applications more maintainable and easier to reason about.
## Core Philosophy
BrainyFlow follows these fundamental principles:
- **Modularity & Composability**: Build complex systems from simple, reusable components that are easy to build, test, and maintain
- **Explicitness**: Make data dependencies between steps clear and traceable
- **Separation of Concerns**: Data storage (shared store) remains separate from computation logic (nodes)
- **Minimalism**: The framework provides only essential abstractions, avoiding vendor-specific implementations while supporting various high-level AI design paradigms (agents, workflows, map-reduce, etc.)
- **Resilience**: Handle failures gracefully with retries and fallbacks
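The resilience principle can be illustrated with a small, framework-free sketch (the helper name `run_with_retries` is hypothetical, not part of the BrainyFlow API): retry a flaky step a few times, then fall back to a safe default instead of crashing the whole flow.

```python
import time

# Hypothetical sketch of retries + graceful fallback (not the BrainyFlow API).
def run_with_retries(task, retries=3, wait=0.0, fallback=None):
    for attempt in range(retries):
        try:
            return task()
        except Exception:
            if attempt == retries - 1:
                return fallback  # graceful fallback after the final retry
            time.sleep(wait)     # optional backoff between attempts

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds — simulates a transient LLM/API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky))  # -> ok
```

A node framework typically wraps the `exec` step in logic like this, so individual failures stay contained.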
## The Graph + Shared Store Pattern
The fundamental pattern in BrainyFlow combines two key elements:
- **Computation Graph**: A directed graph where nodes represent discrete units of work and edges represent the flow of control
- **Shared Store**: A global data structure that enables communication between nodes
This pattern offers several advantages:
- Clear visualization of application logic
- Easy identification of bottlenecks
- Simple debugging of individual components
- Natural parallelization opportunities
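The pattern can be sketched in plain Python (this is an illustration of the mental model, not the BrainyFlow API): nodes are units of work, edges define execution order, and all communication goes through one shared dictionary.

```python
# A two-node directed graph communicating through a shared store.
shared = {"question": "What is 2 + 2?"}

def retrieve(store):
    # Node 1: read input from the store, write prepared context back.
    store["context"] = f"User asked: {store['question']}"

def answer(store):
    # Node 2: compute a result from the context (an LLM call in a real app).
    store["answer"] = "4" if "2 + 2" in store["context"] else "unknown"

# Edges encode control flow: run the nodes in graph order over the store.
for node in (retrieve, answer):
    node(shared)

print(shared["answer"])  # -> 4
```

Because each node only touches the shared store, any node can be inspected, tested, or swapped out in isolation.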
## Key Components
BrainyFlow's architecture is based on these fundamental building blocks:

| Component | Purpose | Key Features |
| --- | --- | --- |
| Node | The basic unit of work | Clear lifecycle (`prep` → `exec` → `post`), fault tolerance, graceful fallbacks |
| Flow | Connects nodes together | Action-based transitions, branching, looping, nesting |
| Communication | Enables data sharing | Shared Store (global), Params (node-specific) |
| Batch | Handles multiple items | Sequential or parallel processing, nested batching |
| Parallel | Manages concurrency | Rate limiting, concurrency control |
## How They Work Together
**Nodes** perform individual tasks with a clear lifecycle:

- `prep`: Read from shared store and prepare data
- `exec`: Execute computation (often LLM calls)
- `post`: Process results and write to shared store
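A minimal sketch of the three-phase lifecycle, using the method names from the docs (this is a hand-rolled class for illustration, not the actual BrainyFlow base class):

```python
class SummarizeNode:
    def prep(self, shared):
        # prep: read from the shared store and prepare data.
        return shared["document"]

    def exec(self, prep_res):
        # exec: the computation itself (an LLM call in a real app;
        # stubbed here as simple truncation).
        return prep_res[:20] + "..."

    def post(self, shared, prep_res, exec_res):
        # post: process results and write them back to the shared store.
        shared["summary"] = exec_res

    def run(self, shared):
        prep_res = self.prep(shared)
        exec_res = self.exec(prep_res)
        self.post(shared, prep_res, exec_res)

shared = {"document": "BrainyFlow separates data flow from computation."}
SummarizeNode().run(shared)
print(shared["summary"])  # -> BrainyFlow separates...
```

Keeping reads (`prep`) and writes (`post`) out of `exec` is what lets the framework retry the computation safely without corrupting the store.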
**Flows** orchestrate nodes by:

- Starting with a designated node
- Following action-based transitions between nodes
- Supporting branching, looping, and nested flows
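Action-based orchestration can be sketched as a transition table keyed by (node, action) pairs — a hypothetical stand-in for the real Flow class, shown here to make branching and looping concrete:

```python
def check(shared):
    # Each node returns an action string that selects the next edge.
    shared["tries"] = shared.get("tries", 0) + 1
    return "done" if shared["tries"] >= 2 else "retry"

def finish(shared):
    shared["status"] = "complete"
    return None  # no action: the flow ends here

transitions = {
    (check, "retry"): check,   # looping: route back to the same node
    (check, "done"): finish,   # branching: a different action, a different node
}

# Start at the designated node and follow transitions until none match.
node, shared = check, {}
while node is not None:
    action = node(shared)
    node = transitions.get((node, action))

print(shared["status"], shared["tries"])  # -> complete 2
```

Nesting falls out naturally: a whole flow like this can itself be wrapped as a single node inside a larger transition table.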
**Communication** happens through:

- **Shared Store**: A global dictionary accessible to all nodes
- **Params**: Node-specific configuration passed down from parent flows
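The two channels can be contrasted in a short sketch (function and key names are hypothetical): the shared store is global state that every node reads and writes, while params are per-node configuration fixed by the parent flow.

```python
def translate(shared, params):
    # Shared store: global data visible to every node.
    text = shared["text"]
    # Params: node-specific configuration supplied by the parent flow.
    shared[params["out_key"]] = f"[{params['lang']}] {text}"

shared = {"text": "hello"}
# The same node logic, configured differently per instance via params.
translate(shared, {"lang": "es", "out_key": "spanish"})
translate(shared, {"lang": "fr", "out_key": "french"})
print(shared["spanish"])  # -> [es] hello
```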
**Batch Processing** enables:

- Processing multiple items sequentially or in parallel
- Handling large datasets efficiently
- Supporting nested batch operations
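Sequential versus parallel batch processing can be sketched with the standard library (the `run_batch` helper is hypothetical, not the BrainyFlow batch API):

```python
from concurrent.futures import ThreadPoolExecutor

def summarize_one(chunk):
    return chunk.upper()  # stand-in for a per-item LLM call

def run_batch(items, fn, parallel=False):
    if parallel:
        # Parallel: overlap I/O-bound calls; map() still preserves item order.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(fn, items))
    # Sequential: one item at a time, in order.
    return [fn(item) for item in items]

shared = {"chunks": ["a", "b", "c"]}
shared["results"] = run_batch(shared["chunks"], summarize_one, parallel=True)
print(shared["results"])  # -> ['A', 'B', 'C']
```

Nested batching is the same idea applied recursively: each item handed to `fn` can itself be a batch processed by another `run_batch` call.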
## Getting Started
If you're new to BrainyFlow, we recommend exploring these core abstractions in the order they appear above: start with nodes, then flows, then communication, before moving on to batch and parallel processing.