Overview
BrainyFlow is built around a simple yet powerful abstraction: the nested directed graph with shared store. This mental model separates data flow from computation, making complex LLM applications more maintainable and easier to reason about.
BrainyFlow follows these fundamental principles:
- Modularity & Composability: Build complex systems from simple, reusable components that are easy to test and maintain
- Explicitness: Make data dependencies between steps clear and traceable
- Separation of Concerns: Data storage (the shared store) remains separate from computation logic (nodes)
- Minimalism: The framework provides only essential abstractions, avoiding vendor-specific implementations while supporting various high-level AI design paradigms (agents, workflows, map-reduce, etc.)
- Resilience: Handle failures gracefully with retries and fallbacks
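The resilience principle can be illustrated with a small, framework-free sketch: retry a flaky step a fixed number of times, then fall back to a safe default. The function and variable names here (`run_with_retries`, `max_retries`, `flaky_llm_call`) are illustrative only, not BrainyFlow's API.

```python
def run_with_retries(task, fallback, max_retries=3):
    """Call task() up to max_retries times; on repeated failure,
    return fallback(last_error) instead of raising."""
    last_error = None
    for _ in range(max_retries):
        try:
            return task()
        except Exception as exc:
            last_error = exc
    return fallback(last_error)


# Example: a hypothetical call that fails twice before succeeding.
attempts = {"count": 0}

def flaky_llm_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = run_with_retries(flaky_llm_call, fallback=lambda e: "default")
print(result)  # succeeds on the third attempt
```

The same shape applies whether the failure is a rate-limited LLM call or a flaky network request: the caller sees either a successful result or a deliberate fallback, never an unhandled crash.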
The fundamental pattern in BrainyFlow combines two key elements:
- Computation Graph: A directed graph where nodes represent discrete units of work and edges represent the flow of control.
- Shared Memory Object: A state management store that enables communication between nodes, separating `global` and `local` state.
This pattern offers several advantages:
- Clear visualization of application logic
- Easy identification of bottlenecks
- Simple debugging of individual components
- Natural parallelization opportunities
BrainyFlow's architecture is based on these fundamental building blocks:
| Abstraction | Role | Key Characteristics |
| --- | --- | --- |
| Node | The basic unit of work | Clear lifecycle (`prep` → `exec` → `post`), fault tolerance (retries), graceful fallbacks |
| Flow | Connects nodes together | Action-based transitions, branching, looping (with cycle detection), nesting, sequential/parallel execution |
| Memory | Manages state accessible during flow execution | Shared `global` store, forkable `local` store, cloning for isolation |
Nodes perform individual tasks with a clear lifecycle:
- `prep`: Read from the shared store and prepare data
- `exec`: Execute computation (often LLM calls); cannot access the shared store directly
- `post`: Process results, write to the shared store, and trigger next actions
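A minimal, framework-free sketch of this lifecycle is shown below. The class and method names mirror the concepts above, but this is a simplified model, not BrainyFlow's actual API; in particular, `post` here simply returns an action name, where the real framework uses explicit trigger calls.

```python
class SummarizeNode:
    def prep(self, memory):
        # Read from the shared store and prepare the input.
        return memory["document"]

    def exec(self, prep_res):
        # Pure computation: no access to the shared store here.
        # (A real node would typically call an LLM at this step.)
        return prep_res[:10] + "..."

    def post(self, memory, prep_res, exec_res):
        # Write results back and decide what happens next.
        memory["summary"] = exec_res
        return "default"  # the action that selects the next node


memory = {"document": "A long document about directed graphs."}
node = SummarizeNode()
prep_res = node.prep(memory)
exec_res = node.exec(prep_res)
action = node.post(memory, prep_res, exec_res)
print(memory["summary"])  # -> "A long doc..."
```

Keeping `exec` cut off from the store is what makes it safe to retry: re-running a pure computation cannot corrupt shared state.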
Flows orchestrate nodes by:
- Starting with a designated `start` node.
- Following action-based transitions (driven by `trigger` calls in `post`) between nodes.
- Supporting branching, looping, and nested flows.
- Executing triggered branches sequentially (`Flow`) or concurrently (`ParallelFlow`).
- Supporting nested batch operations.
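Action-based orchestration can be sketched without the framework: a tiny runner walks a graph of nodes, using the action each node returns to pick the next one. All class names and the `run_flow` helper below are illustrative assumptions, not BrainyFlow's implementation.

```python
class CheckLength:
    def run(self, memory):
        # Branch on the data: emit one of two action names.
        return "long" if len(memory["text"]) > 20 else "short"

class Truncate:
    def run(self, memory):
        memory["text"] = memory["text"][:20]
        return None  # no action: this branch of the flow ends

class Keep:
    def run(self, memory):
        return None

def run_flow(start, transitions, memory):
    """Follow action-based transitions until a node triggers nothing."""
    node = start
    while node is not None:
        action = node.run(memory)
        node = transitions.get((node, action))


check, truncate, keep = CheckLength(), Truncate(), Keep()
transitions = {
    (check, "long"): truncate,   # branching: action name picks the edge
    (check, "short"): keep,
}
memory = {"text": "This sentence is definitely long."}
run_flow(check, transitions, memory)
print(memory["text"])  # truncated to 20 characters
```

Looping falls out of the same mechanism: a transition that points back to an earlier node creates a cycle, which is why cycle detection matters in a real flow engine.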
Communication happens through the `memory` instance provided to each node's lifecycle methods (`prep` and `post`):
- Global Store: A shared object accessible throughout the flow. Nodes typically write results here.
- Local Store: An isolated object specific to a node and its downstream path, typically populated via `forkingData` in `trigger` calls.
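One way to picture the global/local split is a layered mapping: each branch reads through to the shared global store but carries its own forked local data. This sketch uses Python's `ChainMap` and a hypothetical `fork` helper to model the idea behind `forkingData`; it is an analogy, not how BrainyFlow implements its memory.

```python
from collections import ChainMap

global_store = {"model": "gpt-x"}  # shared by every node (hypothetical key)

def fork(local_data):
    """Give a branch a view that layers local data over the global store."""
    return ChainMap(local_data, global_store)


branch_a = fork({"item": "apple"})
branch_b = fork({"item": "banana"})

print(branch_a["item"], branch_a["model"])  # local value + global value
print(branch_b["item"], branch_b["model"])  # same global, different local
```

Each branch sees its own `item` while both resolve `model` from the shared layer, which is exactly the isolation property the local store provides for parallel downstream paths.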
If you're new to BrainyFlow, we recommend exploring these core abstractions in the following order:
- Node - Understand the basic building block
- Flow - Learn how to connect nodes together
- Memory - See how nodes share data
Once you understand these core abstractions, you'll be ready to implement various design patterns to solve real-world problems.