@mariozechner/pi-ai
Pi-AI provides a unified Python API for interacting with multiple Large Language Models, with automatic model discovery and provider configuration. It lets developers switch between AI models and providers with minimal code changes, speeding up development and improving application resilience.
4 Steps
1. Install Pi-AI: Add the `pi-ai` library to your Python project.
2. Initialize LLMClient with Provider Credentials: Create an `LLMClient` instance, providing API keys for your desired LLM providers (e.g., OpenAI, Anthropic). The client will automatically discover available models.
3. Make a Chat Completion Request: Use the client to make a chat completion request to a specific model, such as OpenAI's `gpt-4o`. The API structure is consistent across providers.
4. Switch to a Different LLM: Leverage Pi-AI's unified API to switch to another model, such as Anthropic's `claude-3-opus-20240229`, using the exact same `client.chat.completions.create` method.
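The provider-switching pattern the steps describe can be sketched as follows. This is a minimal, runnable illustration, not the library's verified API: `FakeClient` stands in for `pi_ai.LLMClient` so the snippet runs without API keys, and the OpenAI-style `chat.completions.create` call shape is an assumption taken from the steps above.

```python
from types import SimpleNamespace


class FakeClient:
    """Stand-in for pi_ai.LLMClient (assumed OpenAI-style surface).

    In real usage you would construct the actual client with your
    provider API keys instead of this stub.
    """

    def __init__(self):
        def create(model, messages):
            # Echo the last user message, tagged with the model name,
            # in an OpenAI-shaped response object.
            reply = f"[{model}] echo: {messages[-1]['content']}"
            return SimpleNamespace(
                choices=[SimpleNamespace(
                    message=SimpleNamespace(content=reply))])

        self.chat = SimpleNamespace(
            completions=SimpleNamespace(create=create))


def ask(client, model: str, prompt: str) -> str:
    # The same call works for every provider; only `model` changes.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


client = FakeClient()
print(ask(client, "gpt-4o", "Hello"))
print(ask(client, "claude-3-opus-20240229", "Hello"))
```

Because the request shape is identical across providers, switching from `gpt-4o` to `claude-3-opus-20240229` is a one-string change rather than a rewrite against a second SDK.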