LLM Wrapper
Check out dedicated LLM wrapper libraries. Here, we provide some minimal example implementations:
OpenAI
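A minimal sketch using the official `openai` Python SDK. The model name here is an assumption; substitute whichever model you use.

```python
import os

def call_llm(prompt):
    # Import inside the function so the dependency is only needed when called.
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    r = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; replace with your own
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content
```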
Store the API key in an environment variable like OPENAI_API_KEY for security.
Claude (Anthropic)
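A minimal sketch using the `anthropic` SDK's Messages API. The model name and `max_tokens` value are assumptions.

```python
import os

def call_llm(prompt):
    from anthropic import Anthropic
    client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    r = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model name
        max_tokens=1024,  # Anthropic requires an explicit token limit
        messages=[{"role": "user", "content": prompt}],
    )
    return r.content[0].text
```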
Google (Generative AI Studio / PaLM API)
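A minimal sketch using the `google-generativeai` package; the model name is an assumption.

```python
import os

def call_llm(prompt):
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name
    return model.generate_content(prompt).text
```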
Azure (Azure OpenAI)
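A minimal sketch using the `AzureOpenAI` client from the `openai` SDK. The endpoint, API version, and deployment name are hypothetical placeholders; Azure routes by your deployment name rather than a raw model name.

```python
import os

def call_llm(prompt):
    from openai import AzureOpenAI
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # assumed API version
    )
    r = client.chat.completions.create(
        model="my-deployment",  # your Azure deployment name (hypothetical)
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content
```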
Ollama (Local LLM)
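A minimal sketch using the `ollama` Python client against a locally running Ollama server; the model name assumes you have already pulled it.

```python
def call_llm(prompt):
    from ollama import chat
    r = chat(
        model="llama3.1",  # any model you have pulled locally (assumed)
        messages=[{"role": "user", "content": prompt}],
    )
    return r["message"]["content"]
```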
Feel free to enhance your `call_llm` function as needed. Here are examples:
Handle chat history:
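One way to handle history is to accept the full message list instead of a single prompt, so the caller appends each turn. A sketch using the `openai` SDK (model name assumed):

```python
import os

def call_llm(messages):
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    r = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        # Pass the whole history: [{"role": "user"/"assistant", "content": ...}, ...]
        messages=messages,
    )
    return r.choices[0].message.content
```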
Add in-memory caching:
Caching conflicts with Node retries: a retry would just replay the same cached (possibly bad) result. To address this, use the cache only on the first attempt and bypass it on retries.
Enable logging:
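A sketch wrapping the call with the standard `logging` module so every prompt and response is recorded; the echo body is a hypothetical placeholder for your provider call:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm")

def call_llm(prompt):
    logger.info("Prompt: %s", prompt)
    # Hypothetical placeholder; substitute your real provider call.
    response = f"echo: {prompt}"
    logger.info("Response: %s", response)
    return response
```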