# LLM Provider Configuration

## Supported LLM Providers

ASSERT LLM TOOLS currently supports two major LLM providers for metrics that require language model capabilities.
### OpenAI

```python
from assert_llm_tools.llm.config import LLMConfig

config = LLMConfig(
    provider="openai",
    model_id="gpt-4",  # or "gpt-3.5-turbo"
    api_key="your-openai-api-key",
    # Proxy configuration (optional)
    proxy_url="http://your-proxy-server:port",  # Used for both HTTP and HTTPS
    # Or use protocol-specific proxies:
    # http_proxy="http://your-http-proxy:port",
    # https_proxy="http://your-https-proxy:port",
)
```
#### Available Models

- `gpt-4`
- `gpt-3.5-turbo`
#### Installation

```bash
pip install "assert_llm_tools[openai]"
```
### Amazon Bedrock

```python
from assert_llm_tools.llm.config import LLMConfig

config = LLMConfig(
    provider="bedrock",
    model_id="anthropic.claude-v2",
    region="us-east-1",
    api_key="your-aws-access-key",
    api_secret="your-aws-secret-key",
    # Proxy configuration (optional)
    proxy_url="http://your-proxy-server:port",  # Used for both HTTP and HTTPS
    # Or use protocol-specific proxies:
    # http_proxy="http://your-http-proxy:port",
    # https_proxy="http://your-https-proxy:port",
)
```
#### Available Models

- `anthropic.claude-v2`
- `anthropic.claude-v1`
- `anthropic.claude-instant-v1`
- `anthropic.claude-3-haiku-20240307-v1:0`
- `anthropic.claude-3-sonnet-20240229-v1:0`
- `anthropic.claude-3-opus-20240229-v1:0`
- `amazon.nova-lite-v1:0`
- `amazon.nova-micro-v1:0`
- `amazon.nova-pro-v1:0`
#### Installation

```bash
pip install "assert_llm_tools[bedrock]"
```
## Authentication

### OpenAI

- Create an account at OpenAI
- Generate an API key in your account settings
- Use the API key in your configuration

### Amazon Bedrock

- Set up an AWS account
- Create IAM credentials with Bedrock access
- Note your access key and secret key
- Configure your region based on Bedrock availability
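Rather than hardcoding credentials in source files, you can read them from environment variables and pass the values into `LLMConfig`. The sketch below is illustrative: `require_env` is a hypothetical helper, and the variable names (`OPENAI_API_KEY`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`) follow common OpenAI/AWS conventions rather than anything mandated by ASSERT LLM TOOLS.

```python
import os


def require_env(name: str) -> str:
    """Fetch a required credential from the environment, failing fast if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Conventional variable names; adjust to your own deployment.
# openai_key = require_env("OPENAI_API_KEY")
# aws_key = require_env("AWS_ACCESS_KEY_ID")
# aws_secret = require_env("AWS_SECRET_ACCESS_KEY")
```

The returned values can then be supplied as `api_key` / `api_secret` in the configuration examples above.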
## Proxy Configuration

ASSERT LLM TOOLS supports configuring proxies for API requests to LLM providers. This is useful in corporate environments or when you need to route your API traffic through specific networking infrastructure.

### Basic Proxy Configuration

Use the `proxy_url` parameter to set a single proxy for all HTTP and HTTPS requests:
```python
config = LLMConfig(
    provider="bedrock",
    model_id="anthropic.claude-v2",
    region="us-east-1",
    # Set the same proxy for both HTTP and HTTPS
    proxy_url="http://your-proxy-server:port",
)
```
### Protocol-Specific Proxies

For environments that require different proxies for HTTP and HTTPS:

```python
config = LLMConfig(
    provider="openai",
    model_id="gpt-4",
    api_key="your-openai-api-key",
    # Set different proxies for HTTP and HTTPS
    http_proxy="http://your-http-proxy:port",
    https_proxy="http://your-https-proxy:port",
)
```
### Priority Order

If both general and protocol-specific proxies are provided, the protocol-specific ones take precedence:

1. `http_proxy` and `https_proxy`, if defined
2. `proxy_url`, as a fallback for any protocol not specifically defined
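The precedence rule can be illustrated with a small resolver (an illustrative sketch of the behavior described above, not the library's internal code; `resolve_proxies` is a hypothetical helper):

```python
def resolve_proxies(proxy_url=None, http_proxy=None, https_proxy=None):
    """Protocol-specific proxies win; proxy_url fills any protocol left unset."""
    return {
        "http": http_proxy or proxy_url,
        "https": https_proxy or proxy_url,
    }


# https_proxy overrides proxy_url for HTTPS traffic;
# HTTP traffic falls back to the general proxy_url.
proxies = resolve_proxies(
    proxy_url="http://general-proxy:8080",
    https_proxy="http://secure-proxy:8443",
)
# proxies == {"http": "http://general-proxy:8080",
#             "https": "http://secure-proxy:8443"}
```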
## Best Practices
- 🔒 Never commit API keys to version control
- 🔄 Rotate your API keys regularly
- ⚡ Use environment variables for sensitive credentials
- 📝 Keep track of API usage and costs
- 🚀 Test with smaller models before using larger ones
- 🔐 When using proxies, ensure your proxy configuration complies with your organization's security policies