# LLM Wrapper
> BrainyFlow does **not** provide built-in utilities such as LLM wrappers. Instead, we offer examples that you can implement yourself. This approach gives you more flexibility and control over your project's dependencies and functionality.

For a batteries-included client, you are better off checking out a library like litellm (Python). Here's a simple example of how you might implement your own wrapper:
## Basic Implementation
```python
# utils/call_llm.py
import os

from openai import AsyncOpenAI


async def call_llm(prompt: str, model: str = "gpt-4o", temperature: float = 0.7) -> str:
    """Simple async wrapper for calling OpenAI's chat completions API."""
    client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    response = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content
```
## Why Implement Your Own?
BrainyFlow intentionally doesn't include vendor-specific APIs for several reasons:
- **API volatility:** external APIs change frequently
- **Flexibility:** you may want to switch providers or use fine-tuned models
- **Optimizations:** custom implementations allow for caching, batching, and other optimizations
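The flexibility point follows from the wrapper's signature acting as a seam: any async function with the same shape can be dropped in without touching your nodes. A minimal sketch, where `call_llm_local` is a hypothetical stand-in for an alternative provider (not part of BrainyFlow or any real SDK):

```python
import asyncio


async def call_llm_local(prompt: str, model: str = "local-model", temperature: float = 0.7) -> str:
    """Hypothetical stand-in: a real version would call e.g. a local inference server."""
    # Same signature as call_llm, so callers don't change when you swap providers.
    return f"[{model}] response to: {prompt}"


reply = asyncio.run(call_llm_local("hello"))
print(reply)
```

Because the node code only depends on this signature, switching providers is a one-line import change rather than a refactor.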
## Integration with BrainyFlow
Here's how to use your LLM wrapper in a BrainyFlow node:
```python
from brainyflow import Node
from utils.call_llm import call_llm


class LLMNode(Node):
    async def prep(self, memory):
        return memory.prompt

    async def exec(self, prompt):
        return await call_llm(prompt)

    async def post(self, memory, prep_res, exec_res):
        memory.response = exec_res
        self.trigger('default')
```
## Additional Considerations
- Add error handling and retries for API failures
- Consider implementing caching for repeated queries
- For production systems, add rate limiting to avoid quota issues
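The first two points above can be sketched as thin decorators around any `call_llm`-style async function. This is a minimal illustration, not a production implementation: the names `cached` and `with_retries` are made up for this example, the cache is an unbounded in-memory dict, and the backoff has no jitter. A stubbed, flaky LLM stands in for the real API:

```python
import asyncio


def cached(fn):
    """Memoize an async LLM call on (prompt, model) so repeated queries skip the API."""
    cache = {}

    async def wrapper(prompt, model="gpt-4o"):
        key = (prompt, model)
        if key not in cache:
            cache[key] = await fn(prompt, model)
        return cache[key]

    return wrapper


async def with_retries(fn, prompt, attempts=3, base_delay=1.0):
    """Retry an async call with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return await fn(prompt)
        except Exception:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)


# Stub that fails on its first invocation, then succeeds.
calls = {"n": 0}


async def flaky_llm(prompt, model="gpt-4o"):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient API error")
    return f"echo: {prompt}"


async def main():
    safe_llm = cached(flaky_llm)
    first = await with_retries(safe_llm, "hello", base_delay=0.01)
    second = await with_retries(safe_llm, "hello", base_delay=0.01)  # cache hit
    return first, second


first, second = asyncio.run(main())
print(first, calls["n"])  # the stub is invoked only twice despite the failure
```

In a real wrapper you would apply the same decorators to `call_llm`, narrow the `except` clause to the provider's transient error types, and bound the cache (e.g. with an LRU).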
Remember that this is just a starting point. You can extend this implementation based on your specific needs.