Node
In Caskada, a Node is the fundamental building block of any application. It represents a discrete, self-contained unit of work within a larger flow. Nodes are designed to be reusable, testable, and fault-tolerant.
Node Lifecycle

Every node follows a clear, three-phase lifecycle when executed: prep → exec → post. This separation of concerns ensures clean data handling, computation, and state management.
prep(memory):
Purpose: Prepare the node for execution. This is where the node reads necessary input data from the Memory object (which includes both global and local state).
Output: Returns a prep_res (preparation result) that will be passed directly to the exec method. This ensures exec is pure and does not directly access shared memory.
Best Practice: Keep prep focused on data retrieval and initial validation. Avoid heavy computation or side effects here.
exec(prep_res):
Purpose: Execute the core business logic or computation of the node. This method receives only the prep_res from the prep method.
Output: Returns an exec_res (execution result) that will be passed to the post method.
Key Principle: exec should be a pure function (or as close as possible). It should not directly access the Memory object or perform side effects. This makes exec highly testable and retryable.
Fault Tolerance: This is the phase where retries are applied if configured.
post(memory, prep_res, exec_res):
Purpose: Post-process the results of exec, update the Memory object, and determine the next steps in the flow.
Input: Receives the Memory object, prep_res, and exec_res.
Key Actions:
Write results back to the global Memory store.
Call self.trigger("action_name", forking_data={...}) (Python) or this.trigger("action_name", {...}) (TypeScript) to specify which action was completed and to pass any branch-specific data to the local store of successor nodes.
A node can trigger multiple actions, leading to parallel execution if the flow is a ParallelFlow.
Creating Custom Nodes
To create a custom node, extend the Node class and implement the lifecycle methods:
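For illustration, here is a minimal sketch of a custom node. The `Node` stand-in below is ours, written only so the example runs on its own; the real Caskada base class additionally handles retries, async execution, and flow wiring.

```python
# Minimal stand-in for the Node base class -- an assumption made so this
# sketch is self-contained; the real Caskada Node does much more.
class Node:
    def __init__(self):
        self.triggers = []  # records (action_name, forking_data) pairs

    def trigger(self, action_name, forking_data=None):
        self.triggers.append((action_name, forking_data or {}))

    def run(self, memory):
        # prep -> exec -> post, as described above
        prep_res = self.prep(memory)
        exec_res = self.exec(prep_res)
        self.post(memory, prep_res, exec_res)
        return exec_res


class SummarizeNode(Node):
    def prep(self, memory):
        # Read inputs from shared memory -- the only phase that reads it.
        return memory["text"]

    def exec(self, text):
        # Pure computation: no memory access, no side effects.
        return text.upper()

    def post(self, memory, prep_res, exec_res):
        # Write results back and signal which action completed.
        memory["summary"] = exec_res
        self.trigger("default")


memory = {"text": "hello"}
SummarizeNode().run(memory)
# memory["summary"] is now "HELLO"
```

Note how each phase touches only what the lifecycle allows: `prep` reads memory, `exec` sees only `prep_res`, and `post` writes and triggers.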
All step definitions are optional. For example, you can implement only prep and post if you just need to alter data without external computation, or skip post if the node does not write any data to memory.
Error Handling
Nodes include built-in retry capabilities for handling transient failures in exec() calls.
The node constructor accepts options that control this behavior:
id (string, optional): A unique identifier for the node. If not provided, a UUID is generated.
maxRetries (number): Maximum number of attempts for exec() (default: 1, meaning no retry).
wait (number): Seconds to wait between retry attempts (default: 0). The wait parameter is especially helpful when you encounter rate limits or quota errors from your LLM provider and need to back off.
During retries, you can access the current retry count (0-based) via self.cur_retry (Python) or this.curRetry (TypeScript).
To handle failures gracefully after all retry attempts for exec() are exhausted, override the execFallback method.
By default, execFallback just re-raises the exception. You can override it to return a fallback result instead, which becomes the exec_res passed to post(), allowing the flow to potentially continue. The error object passed to execFallback will be an instance of NodeError and will include a retryCount property indicating the number of retries performed.
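The retry-and-fallback mechanism can be sketched roughly as follows. This is not Caskada's implementation: the loop and the Python-style names (`exec_runner`, `max_retries`, `exec_fallback`) are illustrative stand-ins for the behavior the text describes.

```python
import time


class RetryingNode:
    """Illustrative stand-in for the retry mechanism described above."""

    def __init__(self, max_retries=1, wait=0):
        self.max_retries = max_retries  # total attempts; 1 means no retry
        self.wait = wait                # seconds to sleep between attempts
        self.cur_retry = 0              # 0-based attempt counter

    def exec(self, prep_res):
        raise NotImplementedError

    def exec_fallback(self, prep_res, error):
        # Default behavior: re-raise. Override to return a fallback
        # exec_res instead, letting the flow continue into post().
        raise error

    def exec_runner(self, prep_res):
        for self.cur_retry in range(self.max_retries):
            try:
                return self.exec(prep_res)
            except Exception as error:
                if self.cur_retry == self.max_retries - 1:
                    return self.exec_fallback(prep_res, error)
                if self.wait > 0:
                    time.sleep(self.wait)


class FlakyNode(RetryingNode):
    """Fails twice, then succeeds -- survives with max_retries=3."""

    def __init__(self):
        super().__init__(max_retries=3)
        self.attempts = 0

    def exec(self, prep_res):
        self.attempts += 1
        if self.attempts < 3:
            raise RuntimeError("transient failure")
        return "ok"


result = FlakyNode().exec_runner(None)
# result == "ok" after two failed attempts
```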
Node Transitions
Nodes define how the flow progresses by triggering actions. These actions are then used by the Flow to determine the next node(s) to execute.
Call this method within the post method of your node.
action_name: A string identifying the action that just completed (e.g., "success", "error", "data_ready"). This name corresponds to the transitions defined in the Flow (e.g., node.on('action_name', nextNode)).
forking_data (optional): A dictionary (Python) or object (TypeScript) whose key-value pairs will be deeply cloned and merged into the local store (memory.local) of the memory instance passed to the next node(s) triggered by this action. This allows passing specific data down a particular branch without polluting the global store.
A node can call trigger multiple times in its post method, leading to multiple successor branches being executed (sequentially in Flow, concurrently in ParallelFlow).
trigger() can only be called inside the post() method. Calling it elsewhere will result in errors.
The running Flow uses the action_name triggered to look up the successor nodes, which are defined using .on() or .next() (as seen in the next section below).
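That lookup can be sketched as follows. The `successors` dict and the work-item shape below are our own assumptions for illustration, not Caskada's internal representation.

```python
import copy


class Node:
    """Bare stand-in: just a name and an action -> successors mapping."""

    def __init__(self, name):
        self.name = name
        self.successors = {}  # action_name -> list of next nodes


def next_work_items(node, triggered):
    """triggered: (action_name, forking_data) pairs recorded in post()."""
    items = []
    for action_name, forking_data in triggered:
        for successor in node.successors.get(action_name, []):
            # Each branch gets its own deep clone of the forking data,
            # so successors cannot mutate each other's local stores.
            items.append((successor, copy.deepcopy(forking_data)))
    return items


router, ok_node, err_node = Node("router"), Node("ok"), Node("err")
router.successors = {"success": [ok_node], "error": [err_node]}

items = next_work_items(router, [("success", {"order_id": 42})])
# items == [(ok_node, {"order_id": 42})]
```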
Defining Connections (on, next)
While trigger determines which path to take during execution, you define the possible paths before execution by using either .next() or .on(), as shown below:
You can define transitions with syntax sugar:
Basic default transition:
node_a >> node_b
This means if node_a triggers the default action, go to node_b.
Named action transition:
node_a - "action_name" >> node_b
This means if node_a triggers "action_name", go to node_b.
Note that node_a >> node_b is equivalent to node_a - "default" >> node_b.
Basic default transition:
node_a.next(node_b)
This means if node_a triggers "default", go to node_b.
Named action transition:
node_a.on('action_name', node_b) or node_a.next(node_b, 'action_name')
This means if node_a triggers "action_name", go to node_b.
Note that node_a.next(node_b) is equivalent to both node_a.next(node_b, 'default') and node_a.on('default', node_b). Both methods return the successor node (node_b in this case), allowing for chaining.
To summarize:
node.on(actionName, successorNode): Connects successorNode to be executed when node triggers actionName.
node.next(successorNode, actionName = DEFAULT_ACTION): A convenience method, equivalent to node.on(actionName, successorNode).
These methods are typically called when constructing your Flow. See the Flow documentation for detailed examples of graph construction.
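In Python, the `>>` sugar can be built with operator overloading. The sketch below is our own minimal reconstruction of both styles, assuming each node keeps a plain `successors` mapping; it is not Caskada's actual implementation.

```python
DEFAULT_ACTION = "default"


class Node:
    def __init__(self, name):
        self.name = name
        self.successors = {}  # action_name -> list of successor nodes

    def on(self, action_name, successor):
        self.successors.setdefault(action_name, []).append(successor)
        return successor  # returning the successor allows chaining

    def next(self, successor, action_name=DEFAULT_ACTION):
        return self.on(action_name, successor)

    def __rshift__(self, successor):
        # node_a >> node_b  ==  node_a.next(node_b)
        return self.next(successor)

    def __sub__(self, action_name):
        # node_a - "action_name" >> node_b: "-" binds tighter than ">>",
        # so this returns an intermediate object awaiting the successor.
        return _Transition(self, action_name)


class _Transition:
    def __init__(self, source, action_name):
        self.source, self.action_name = source, action_name

    def __rshift__(self, successor):
        return self.source.on(self.action_name, successor)


a, b, c = Node("a"), Node("b"), Node("c")
a >> b             # default transition
a - "error" >> c   # named transition
```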
Example: Conditional Branching
A common pattern is a "router" node that determines the next step based on some condition (e.g., language detection, data validation result).
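A sketch of such a router follows, again with a minimal `Node` stand-in so it runs on its own; the validation rule and action names are invented for illustration.

```python
# Minimal stand-in for the Node base class (assumption for this sketch).
class Node:
    def __init__(self):
        self.triggers = []

    def trigger(self, action_name, forking_data=None):
        self.triggers.append((action_name, forking_data or {}))


class ValidateOrderNode(Node):
    def prep(self, memory):
        return memory["order"]

    def exec(self, order):
        # Core check: an order must contain at least one item.
        return bool(order.get("items"))

    def post(self, memory, prep_res, exec_res):
        memory["valid"] = exec_res
        # Route to a different successor depending on the result.
        if exec_res:
            self.trigger("valid")
        else:
            self.trigger("invalid", {"reason": "empty order"})


memory = {"order": {"items": []}}
node = ValidateOrderNode()
node.post(memory, None, node.exec(node.prep(memory)))
# node.triggers == [("invalid", {"reason": "empty order"})]
```

The flow would then connect the "valid" and "invalid" actions to different successor nodes.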
Example: Multiple Triggers (Fan-Out / Batch Processing)
A single node can call this.trigger() multiple times within its post method to initiate multiple downstream paths simultaneously. Each triggered path receives its own cloned memory instance, potentially populated with unique local data via the forkingData argument.
This "fan-out" capability is the core pattern used for batch processing (processing multiple items, often in parallel).
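A fan-out `post` can be sketched like this; the `Node` stand-in and the "process_item" action name are our own illustrative assumptions.

```python
# Minimal stand-in for the Node base class (assumption for this sketch).
class Node:
    def __init__(self):
        self.triggers = []

    def trigger(self, action_name, forking_data=None):
        self.triggers.append((action_name, forking_data or {}))


class FanOutNode(Node):
    def prep(self, memory):
        return memory["items"]

    def post(self, memory, prep_res, exec_res):
        # One trigger per item: each branch later receives its own
        # cloned memory with "item" merged into the local store.
        for item in prep_res:
            self.trigger("process_item", {"item": item})


memory = {"items": ["a", "b", "c"]}
node = FanOutNode()
node.post(memory, node.prep(memory), None)
# Three "process_item" branches, each with its own forking data.
```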
For a detailed explanation and examples of implementing batch processing using this fan-out pattern with Flow or ParallelFlow, please see the Flow documentation.
Running Individual Nodes
Nodes have a run(memory, propagate?) method, which executes the node's full lifecycle (prep -> execRunner (which handles exec and execFallback) -> post). This method is primarily intended for testing or debugging individual nodes in isolation; in production code you should always use Flow.run(memory) instead.
Do NOT use node.run() to execute a workflow.
node.run() executes only the single node it's called on. It does not look up or execute any successor nodes defined via .on() or .next().
Always use Flow.run(memory) or ParallelFlow.run(memory) to execute a complete graph workflow. Using node.run() directly will lead to incomplete execution if you expect the flow to continue.
The node.run() method can, however, return information about triggered actions if the propagate argument is set to true. This is used internally by the Flow execution mechanism.
Best Practices
Single Responsibility: Each node should do one thing well. Avoid monolithic nodes that handle too many responsibilities!
Pure exec: Keep the exec method free of side effects and direct memory access. All inputs should come from prep_res, and all outputs should go to exec_res.
Clear prep and post: Use prep for input gathering and post for output handling and triggering.
Respect the lifecycle: Read in prep, compute in exec, write and trigger in post. No exceptions allowed!
Use forkingData: Pass branch-specific data via trigger's forkingData argument to populate the local store for successors, keeping the global store clean.
Type Safety: For better developer experience, define the expected structure of memory stores, actions, and results.
Error Handling: Leverage the built-in retry logic (maxRetries, wait) and execFallback for resilience.