This guide helps you migrate from older versions of BrainyFlow to the latest version. It covers breaking changes and provides examples for upgrading your code.
Migrating to v1.0
Version 1.0 includes several major architectural improvements that require code updates:
Key Changes
Memory Management: Changed from dictionary-based shared to object-based memory
Explicit Triggers: Flow control now requires explicit trigger() calls
Node Lifecycle: Minor adjustments to method signatures
Flow Configuration: The Flow constructor now accepts an options parameter (e.g., max_visits)
Removal of params: The setParams approach has been removed
Batch Processing: Batch node classes have been removed in favor of flow-based patterns
Memory Management Changes
# Before (v0.2)
class MyNode(Node):
    async def prep(self, shared):
        return shared["input_text"]

    async def post(self, shared, prep_res, exec_res):
        shared["result"] = exec_res
        return "default"  # Action name as return value
Flow Configuration Changes
# Before (v0.2)
flow = Flow(start=start_node)
# After (v1.0)
# With default options
flow = Flow(start=start_node)
# With custom options
flow = Flow(start=start_node, options={"max_visits": 10})
// Before (v0.2)
const flow = new Flow(startNode)
// After (v1.0)
// With default options
const flow = new Flow(startNode)
// With custom options
const flow = new Flow(startNode, { maxVisits: 10 })
Removal of params and setParams
In v1.0, setParams has been removed in favor of direct property access on the streamlined memory object.
Replace params usage with local memory (populated via forkingData when triggering) and remove setParams calls from your code.
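As an illustrative sketch (the node and parameter names here are hypothetical), a value formerly set as a param is now passed as forking data when triggering, so it lands in the successor's local memory:

# Before (v0.2) - per-node params
processor_node.set_params({"language": "Spanish"})

# After (v1.0) - pass the value when triggering, inside the predecessor's post()
self.trigger("default", {"language": "Spanish"})

# ...and read it from local memory in the successor's prep()
language = memory.language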
Batch Processing Changes (*BatchNode and *BatchFlow Removal)
In v1.0, dedicated batch processing classes like BatchNode, ParallelBatchNode, BatchFlow, and ParallelBatchFlow have been removed from the core library.
The core concept of batching (processing multiple items, often in parallel) is now achieved using a more fundamental pattern built on standard Nodes and Flows:
Fan-Out Trigger Node: A standard Node (let's call it TriggerNode) is responsible for initiating the processing for each item in a batch.
In its prep method, it typically reads the list of items from memory.
In its post method, it iterates through the items and calls this.trigger("process_one", forkingData={...}) (self.trigger(...) in Python) for each item.
The forkingData argument is crucial: it passes the specific item (and potentially its index or other context) to the local memory of the successor node instance created for that trigger. This isolates the data for each parallel branch.
Processor Node: Another standard Node (let's call it ProcessorNode) handles the actual processing of a single item.
It's connected to the TriggerNode via the "process_one" action (e.g., triggerNode.on("process_one", processorNode)).
Its prep method reads the specific item data from its local memory (e.g., memory.item, memory.index), which was populated by the forkingData from the TriggerNode.
Its exec method contains the logic previously found in exec_one. It performs the computation for the single item.
Its post method takes the result and typically stores it back into the global memory, often in a list or dictionary indexed by the item's original index to handle potential out-of-order completion in parallel scenarios.
Flow Orchestration:
To process items sequentially, use a standard Flow containing the TriggerNode and ProcessorNode. The flow will execute the branch triggered for item 1 completely before starting the branch for item 2, and so on.
To process items concurrently, use a ParallelFlow. This flow will execute all the branches triggered by TriggerNode in parallel (using Promise.all or asyncio.gather); see the sketch below.
This approach simplifies the core library by handling batching as an orchestration pattern rather than requiring specialized node types.
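A minimal sketch of the two orchestration choices, assuming trigger_node and processor_node are already wired together as described above:

from brainyflow import Flow, ParallelFlow

sequential_flow = Flow(start=trigger_node)        # branches run one at a time
parallel_flow = ParallelFlow(start=trigger_node)  # all triggered branches run concurrently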
Example: Translating Text into Multiple Languages
Let's adapt the TranslateTextNode example provided earlier. Before, it might have been a BatchNode. Now, we split it into a TriggerTranslationsNode and a TranslateOneLanguageNode.
# Before (v0.2) - Conceptual BatchNode
class TranslateTextBatchNode(BatchNode):
    async def prep(self, shared):
        text = shared.get("text", "(No text provided)")
        languages = shared.get("languages", ["Chinese", "Spanish", "Japanese"])
        # BatchNode prep would return items for exec
        return [(text, lang) for lang in languages]

    async def exec(self, item):
        text, lang = item
        # Assume translate_text exists
        return await translate_text(text, lang)

    async def post(self, shared, prep_res, exec_results):
        # BatchNode post might aggregate results
        shared["translations"] = exec_results
        return "default"
# After (v1.0) - Using Flow Patterns with ParallelFlow
from brainyflow import Node, Memory, ParallelFlow
# 1. Trigger Node (Fans out work)
class TriggerTranslationsNode(Node):
    async def prep(self, memory: Memory):
        text = getattr(memory, "text", "(No text provided)")
        languages = getattr(memory, "languages", ["Chinese", "Spanish", "Japanese"])
        return [{"text": text, "language": lang} for lang in languages]

    async def post(self, memory: Memory, prep_res, exec_res):
        # Pre-allocate the results list so each branch writes to its own slot
        memory.translations = [None] * len(prep_res)
        for index, data in enumerate(prep_res):
            self.trigger("default", data | {"index": index})
# 2. Processor Node (Handles one language)
class TranslateOneLanguageNode(Node):
    async def prep(self, memory: Memory):
        # Read data passed via forkingData from local memory
        return {
            "text": memory.text,
            "language": memory.language,
            "index": memory.index,
        }

    async def exec(self, item):
        # Assume translate_text exists
        return await translate_text(item["text"], item["language"])

    async def post(self, memory: Memory, prep_res, exec_res):
        # Store the result in the global list at the item's original index
        memory.translations[prep_res["index"]] = exec_res
        self.trigger("default")
# 3. Flow Setup
trigger_node = TriggerTranslationsNode()
processor_node = TranslateOneLanguageNode()
trigger_node >> processor_node  # connected via the "default" action

flow = ParallelFlow(start=trigger_node)  # translate all languages concurrently
Aggregation (Optional): If you need to combine the results after all items are processed (like a Reduce step), the TriggerNode can fire an additional, final trigger (e.g., this.trigger("aggregate")) after the loop. Alternatively, the ProcessorNode can maintain a countdown counter in global memory and trigger the aggregation step only when the counter reaches zero.
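A sketch of the counter-based variant (the names remaining and aggregate are illustrative), assuming the trigger node also sets memory.remaining = len(prep_res) before its trigger loop:

class TranslateOneLanguageNode(Node):
    # prep and exec as in the example above

    async def post(self, memory: Memory, prep_res, exec_res):
        memory.translations[prep_res["index"]] = exec_res
        memory.remaining -= 1  # counts down from the item count set by the trigger node
        if memory.remaining == 0:
            self.trigger("aggregate")  # last branch to finish fires the Reduce step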