# Migrating from Older Versions
This guide helps you update your BrainyFlow applications when migrating from older versions to newer ones. We strive for backward compatibility, but major refactors sometimes introduce breaking changes for significant improvements.
## General Advice

- **Start Small:** Migrate one part of your application at a time.
- **Consult the Changelog:** Check the release notes on the repository for the specific version you are upgrading to. They list breaking changes, new features, and bug fixes.
- **Review Core Abstraction Docs:** Changes often revolve around the core `Node`, `Flow`, or `Memory` components. Re-reading their documentation can clarify new behaviors or APIs.
## Migrating to v2.0

The most significant recent changes revolve around `Memory` management and `Flow` execution results.
### Memory Management (`Memory` object / `createMemory` factory)

- **Explicit Creation:**
  - Python: `Memory(global_store={...}, local_store={...})`
  - TypeScript: `createMemory(globalStore, localStore)`
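The relationship between the two stores can be pictured with a plain `ChainMap`: reads consult the local store first and fall back to the global one. This is a conceptual sketch only, not BrainyFlow's actual implementation:

```python
from collections import ChainMap

global_store = {"text": "hello", "languages": ["Spanish"]}
local_store = {"text": "override for this branch"}

# Conceptual model: local values shadow global ones on read
memory_view = ChainMap(local_store, global_store)

print(memory_view["text"])       # override for this branch
print(memory_view["languages"])  # ['Spanish']
```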
### Flow Execution (`Flow.run()`)

- **Return Value:** `Flow.run()` now returns a structured `ExecutionTree` object instead of a simple dictionary. This `ExecutionTree` provides a detailed trace of node execution order, triggered actions, and nested results.
- **`maxVisits` Default:** The default `maxVisits` for cycle detection in `Flow` has been increased (e.g., from 5 to 15).
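To give a feel for working with a tree-shaped trace, here is a sketch that walks a nested structure recursively. The dictionary shape shown (`node`, `action`, `children`) is hypothetical, chosen only for illustration; consult the v2.0 release notes for the real `ExecutionTree` fields:

```python
# Hypothetical trace shape - the real ExecutionTree structure may differ.
def walk(tree: dict, depth: int = 0) -> list:
    # Render one line per executed node, indented by nesting depth
    lines = [f"{'  ' * depth}{tree['node']} -> {tree.get('action')}"]
    for child in tree.get("children", []):
        lines.extend(walk(child, depth + 1))
    return lines

trace = {
    "node": "StartNode", "action": "default",
    "children": [{"node": "EndNode", "action": None, "children": []}],
}
print("\n".join(walk(trace)))
```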
### Error Handling (`NodeError`)

- **Python:** `NodeError` is now a `typing.Protocol`, promoting structural typing. You'd typically catch the specific underlying error and then check `isinstance(err, NodeError)` if you need to access `err.retry_count`.
- **TypeScript:** `NodeError` remains an `Error` subtype with an optional `retryCount`.
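A minimal sketch of how structural typing with `typing.Protocol` behaves. The `NodeError` protocol and `MyFailure` class below are illustrative stand-ins, not BrainyFlow's actual definitions:

```python
from typing import Optional, Protocol, runtime_checkable

@runtime_checkable
class NodeError(Protocol):
    # Structural contract: anything carrying retry_count matches
    retry_count: int

class MyFailure(Exception):
    def __init__(self, message: str, retry_count: int):
        super().__init__(message)
        self.retry_count = retry_count

def handle(err: Exception) -> Optional[int]:
    # isinstance on a runtime_checkable Protocol checks attribute presence,
    # not inheritance - no shared base class is required
    if isinstance(err, NodeError):
        return err.retry_count
    return None

print(handle(MyFailure("boom", retry_count=3)))  # 3
print(handle(ValueError("plain")))               # None
```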
## Migrating to v1.0

Version 1.0 includes several major architectural improvements that require code updates:

### Key Changes

- **Memory Management:** Changed from the dictionary-based `shared` store to the object-based `memory`
- **Explicit Triggers:** Flow control now requires explicit `trigger()` calls
- **Node Lifecycle:** Minor adjustments to method signatures
- **Flow Configuration:** Added options for configuration
- **Removal of `params`:** The `setParams` approach has been removed
- **Batch Processing:** Batch node classes have been removed in favor of flow-based patterns
### Memory Management Changes

```python
# Before (v0.2)
class MyNode(Node):
    async def prep(self, shared):
        return shared["input_text"]

    async def post(self, shared, prep_res, exec_res):
        shared["result"] = exec_res
        return "default"  # Action name as return value

# After (v1.0)
class MyNode(Node):
    async def prep(self, memory):
        return memory.input_text  # Property access syntax

    async def post(self, memory, prep_res, exec_res):
        memory.result = exec_res  # Property assignment syntax
        self.trigger("default")   # Explicit trigger call
```
### Explicit Triggers

```python
# Before (v0.2)
async def post(self, shared, prep_res, exec_res):
    if exec_res > 10:
        shared["status"] = "high"
        return "high_value"
    else:
        shared["status"] = "low"
        return "low_value"

# After (v1.0)
async def post(self, memory, prep_res, exec_res):
    if exec_res > 10:
        memory.status = "high"
        self.trigger("high_value")
    else:
        memory.status = "low"
        self.trigger("low_value")
```
### Flow Configuration

```python
# Before (v0.2)
flow = Flow(start=start_node)

# After (v1.0)
# With default options
flow = Flow(start=start_node)

# With custom options
flow = Flow(start=start_node, options={"max_visits": 10})
```
### Removal of `params` and `setParams`

In v1.0, `setParams` has been removed in favor of direct property access through the streamlined memory management. Replace `params` with local memory (e.g., data passed via `forkingData`) and remove `setParams` from your code.
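The shape of this migration can be sketched as follows. `OldNode` and the toy `trigger` function are illustrative stand-ins, not BrainyFlow APIs:

```python
# Before (v0.2): per-node configuration via params (illustrative stand-in)
class OldNode:
    def set_params(self, params):  # this style is removed in v1.0
        self.params = params

    def run(self):
        return f"processing {self.params['filename']}"

# After (v1.0): the same data travels as forkingData into local memory.
# This toy function only models the idea; in a real flow, forking_data
# seeds the *local* store of the successor node instance.
def trigger(action, forking_data):
    return {"action": action, "local": forking_data}

old = OldNode()
old.set_params({"filename": "report.txt"})
print(old.run())  # processing report.txt

branch = trigger("default", {"filename": "report.txt"})
print(branch["local"]["filename"])  # report.txt
```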
### Batch Processing Changes (`*BatchNode` and `*BatchFlow` Removal)

In v1.0, dedicated batch processing classes like `BatchNode`, `ParallelBatchNode`, `BatchFlow`, and `ParallelBatchFlow` have been removed from the core library.

The core concept of batching (processing multiple items, often in parallel) is now achieved using a more fundamental pattern built on standard `Node`s and `Flow`s:
- **Fan-Out Trigger Node:** A standard `Node` (let's call it `TriggerNode`) is responsible for initiating the processing for each item in a batch.
  - In its `prep` method, it typically reads the list of items from memory.
  - In its `post` method, it iterates through the items and calls `this.trigger("process_one", forkingData={...})` for each item.
  - The `forkingData` argument is crucial: it passes the specific item (and potentially its index or other context) to the local memory of the successor node instance created for that trigger. This isolates the data for each parallel branch.
- **Processor Node:** Another standard `Node` (let's call it `ProcessorNode`) handles the actual processing of a single item.
  - It's connected to the `TriggerNode` via the `"process_one"` action (e.g., `triggerNode.on("process_one", processorNode)`).
  - Its `prep` method reads the specific item data from its local memory (e.g., `memory.item`, `memory.index`), which was populated by the `forkingData` from the `TriggerNode`.
  - Its `exec` method contains the logic previously found in `exec_one`. It performs the computation for the single item.
  - Its `post` method takes the result and typically stores it back into the global memory, often in a list or dictionary indexed by the item's original index to handle potential out-of-order completion in parallel scenarios.
- **Flow Orchestration:**
  - To process items sequentially, use a standard `Flow` containing the `TriggerNode` and `ProcessorNode`. The flow will execute the branch triggered for item 1 completely before starting the branch for item 2, and so on.
  - To process items concurrently, use a `ParallelFlow`. This flow will execute all the branches triggered by `TriggerNode` in parallel (using `Promise.all` or `asyncio.gather`).
- **Aggregation (Optional):** If you need to combine the results after all items are processed (like a Reduce step), the `TriggerNode` can fire an additional, final trigger (e.g., `this.trigger("aggregate")`) after the loop. Alternatively, the `ProcessorNode` can maintain a counter in global memory and trigger the aggregation step only when the counter reaches zero (see the MapReduce pattern).
This approach simplifies the core library by handling batching as an orchestration pattern rather than requiring specialized node types.
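The orchestration a `ParallelFlow` performs is conceptually similar to fanning out one task per item and gathering the results, which can be sketched in plain `asyncio`. The `process_one` helper here is a hypothetical stand-in for `ProcessorNode`'s `exec`:

```python
import asyncio

async def process_one(item: str, index: int) -> tuple:
    # Simulate per-item work that finishes in arbitrary order
    await asyncio.sleep(0.01 * (3 - index))
    return index, item.upper()

async def fan_out(items: list) -> list:
    # "Trigger" one branch per item, then run them concurrently
    pairs = await asyncio.gather(
        *(process_one(it, i) for i, it in enumerate(items))
    )
    # Place each result by its original index, mirroring how ProcessorNode
    # writes into an index-keyed structure in global memory
    results = [""] * len(items)
    for index, value in pairs:
        results[index] = value
    return results

print(asyncio.run(fan_out(["a", "b", "c"])))  # ['A', 'B', 'C']
```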
### Example: Translating Text into Multiple Languages

Let's adapt the `TranslateTextNode` example provided earlier. Before, it might have been a `BatchNode`. Now, we split it into a `TriggerTranslationsNode` and a `TranslateOneLanguageNode`.
```python
# Before (v0.2) - Conceptual BatchNode
class TranslateTextBatchNode(BatchNode):
    async def prep(self, shared):
        text = shared.get("text", "(No text provided)")
        languages = shared.get("languages", ["Chinese", "Spanish", "Japanese"])
        # BatchNode prep would return items for exec
        return [(text, lang) for lang in languages]

    async def exec(self, item):
        text, lang = item
        # Assume translate_text exists
        return await translate_text(text, lang)

    async def post(self, shared, prep_res, exec_results):
        # BatchNode post might aggregate results
        shared["translations"] = exec_results
        return "default"
```
```python
# After (v1.0) - Using Flow Patterns with ParallelFlow
from brainyflow import Node, Memory, ParallelFlow

# 1. Trigger Node (Fans out work)
class TriggerTranslationsNode(Node):
    async def prep(self, memory: Memory):
        text = memory.text if hasattr(memory, 'text') else "(No text provided)"
        languages = memory.languages if hasattr(memory, 'languages') else ["Chinese", "Spanish", "Japanese"]
        return [{"text": text, "language": lang} for lang in languages]

    async def post(self, memory: Memory, prep_res, exec_res):
        memory.translations = [None] * len(prep_res)  # Pre-allocate result slots
        for index, item in enumerate(prep_res):
            self.trigger("default", item | {"index": index})

# 2. Processor Node (Handles one language)
class TranslateOneLanguageNode(Node):
    async def prep(self, memory: Memory):
        # Read data passed via forkingData from local memory
        return {
            "text": memory.text,
            "language": memory.language,
            "index": memory.index,
        }

    async def exec(self, item):
        # Assume translate_text exists
        return await translate_text(item["text"], item["language"])

    async def post(self, memory: Memory, prep_res, exec_res):
        # Store result in the global list at the correct index
        memory.translations[prep_res["index"]] = exec_res
        self.trigger("default")

# 3. Flow Setup
trigger_node = TriggerTranslationsNode()
processor_node = TranslateOneLanguageNode()
trigger_node >> processor_node
flow = ParallelFlow(start=trigger_node)  # Run the fanned-out branches concurrently
```
## Need Help?

If you encounter issues during migration, you can:

- Check the documentation for detailed explanations
- Look at the examples for reference implementations
- File an issue on GitHub
Always consult the specific release notes for the version you are migrating to for the most accurate and detailed list of changes.
Happy migrating!