LLM Wrapper
Caskada does NOT provide built-in utilities. Instead, we offer examples that you can implement yourself. This approach gives you more flexibility and control over your project's dependencies and functionality.

Caskada doesn't provide built-in LLM wrappers; you are better off checking out libraries like litellm (Python). Here's a simple example of how you might implement your own wrapper:
Basic Implementation
```python
# utils/call_llm.py
import os

from openai import OpenAI


def call_llm(prompt, model="gpt-4o", temperature=0.7):
    """Simple wrapper for calling OpenAI's API."""
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content
```

```typescript
// utils/callLLM.ts
import OpenAI from 'openai'

export async function callLLM(prompt: string, model: string = 'gpt-4o', temperature: number = 0.7): Promise<string> {
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
  const response = await openai.chat.completions.create({
    model,
    messages: [{ role: 'user', content: prompt }],
    temperature,
  })
  return response.choices[0]?.message?.content || ''
}
```
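With the OPENAI_API_KEY environment variable set, you can smoke-test the Python wrapper directly (the prompt here is just a placeholder):

```python
# Quick manual check of the wrapper (assumes OPENAI_API_KEY is exported)
if __name__ == "__main__":
    print(call_llm("In one sentence, what is a workflow engine?"))
```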
Why Implement Your Own?
Caskada intentionally doesn't include vendor-specific APIs for several reasons:
- API Volatility: External APIs change frequently
- Flexibility: You may want to switch providers or use fine-tuned models
- Optimizations: Custom implementations allow for caching, batching, and other optimizations
Integration with Caskada
Here's how to use your LLM wrapper in a Caskada node:
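The exact Node API depends on your Caskada version, so treat the following as a minimal sketch: it assumes a Node base class with prep/exec/post lifecycle hooks, and the import path, SummarizeNode name, and shared-store keys are all illustrative. Check Caskada's Node documentation for the real signatures.

```python
# nodes/summarize.py - illustrative sketch; adjust to Caskada's actual Node API
from caskada import Node  # assumed import path

from utils.call_llm import call_llm


class SummarizeNode(Node):
    def prep(self, shared):
        # Pull the input text out of the shared store
        return shared["text"]

    def exec(self, text):
        # All LLM access goes through the wrapper, so swapping
        # providers later only touches utils/call_llm.py
        return call_llm(f"Summarize this text in one paragraph:\n{text}")

    def post(self, shared, prep_res, exec_res):
        # Store the summary for downstream nodes
        shared["summary"] = exec_res
```

Keeping the wrapper behind a single function means nodes never import the OpenAI SDK directly, which makes provider swaps and testing with a stubbed call_llm much easier.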
Additional Considerations
- Add error handling for API failures (sketched below)
- Consider implementing caching for repeated queries (also sketched below)
- For production systems, add rate limiting to avoid quota issues
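Here is a minimal sketch of the first two points, reusing the OpenAI client from the basic example. The retry count, backoff schedule, and cache size are arbitrary placeholders, `_call_openai` is an illustrative helper name, and rate limiting is left out:

```python
# utils/call_llm_robust.py - a hedged sketch, not a production implementation
import os
import time
from functools import lru_cache

from openai import OpenAI

MAX_RETRIES = 3  # arbitrary; tune for your workload


def _call_openai(prompt, model, temperature):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content


@lru_cache(maxsize=1024)  # naive cache: repeated identical prompts are free
def call_llm(prompt, model="gpt-4o", temperature=0.7):
    # Retry transient API failures with exponential backoff
    for attempt in range(MAX_RETRIES):
        try:
            return _call_openai(prompt, model, temperature)
        except Exception:
            if attempt == MAX_RETRIES - 1:
                raise
            time.sleep(2**attempt)  # 1s, 2s, ... between attempts
```

Note that caching nondeterministic output changes behavior: with a temperature above zero, a cached call replays the first completion every time, so cache only where repeatability is acceptable. For rate limiting, a token bucket in front of the API call is a common pattern.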
Remember that this is just a starting point. You can extend this implementation based on your specific needs.