Understanding the Playground
The playground is a lightweight, standalone app for testing AI endpoints without authentication, user management, or admin complexity. It is ideal for local development and testing environments.
Core Components
Playground Endpoints
Main workspace for creating and managing your AI agents and API endpoints with different configurations.
Endpoint Details
Comprehensive testing interface with configuration, knowledge management, and debug panels.
Settings
Global configuration for themes, providers, models, and artifact management with S3 integration.
Playground Endpoints
Your main workspace for creating and managing AI agents and API endpoints. This is where you'll spend most of your time building and testing different configurations.
Endpoint Management Features
Browse & Organize
View your complete endpoint collection with search and filtering capabilities.
Favorites System
Pin important endpoints to the top for quick access during development.
Sample Library
Access pre-configured endpoint examples and templates to get started quickly.
Creating New Endpoints
1. Choose Endpoint Type - Select between AI Agent (conversational) or API Endpoint (REST testing)
2. Select SDK Framework - Pick the framework that matches your backend architecture
3. Configure Settings - Set up authentication, models, and basic parameters
4. Test & Iterate - Use the endpoint details page to refine and test your configuration
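The steps above reduce to a small set of choices. A minimal sketch of what such an endpoint configuration might look like (the field names and value sets here are illustrative assumptions, not the playground's actual schema):

```python
# Hypothetical endpoint configuration builder -- field names are
# illustrative, not the playground's actual schema.
VALID_TYPES = {"ai_agent", "api_endpoint"}
VALID_SDKS = {"strands", "ai_sdk", "openai"}

def make_endpoint_config(name, endpoint_type, sdk,
                         model=None, system_prompt=None):
    """Build and validate a new endpoint configuration."""
    if endpoint_type not in VALID_TYPES:
        raise ValueError(f"unknown endpoint type: {endpoint_type}")
    if sdk not in VALID_SDKS:
        raise ValueError(f"unknown SDK framework: {sdk}")
    return {
        "name": name,
        "type": endpoint_type,             # conversational agent or REST endpoint
        "sdk": sdk,                        # backend framework
        "model": model or "gpt-4o-mini",   # the OpenAI SDK default per these docs
        "system_prompt": system_prompt or "",
        "favorite": False,                 # pinned endpoints sort to the top
    }

config = make_endpoint_config("weather-bot", "ai_agent", "strands",
                              system_prompt="You answer weather questions.")
```

Validating the type and SDK up front mirrors the creation flow: the two choices in steps 1 and 2 constrain everything configured afterwards.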
Endpoint Types
AI Agent - Perfect for conversational AI testing
- Chat configurations and memory management
- Knowledge base integration
- Debug tools and metrics
- Ephemeral chat UI for immediate testing
- Support for attachments and structured responses
API Endpoint - Ideal for REST API development and testing
- Local Postman-style testing interface
- Debug headers and response inspection
- Custom payload configuration
- Response validation and formatting
- Perfect for non-conversational APIs
Available SDK Frameworks
Strands SDK
AWS Strands Agent SDK with Python backends - Full-featured agent development
AI SDK
Vercel AI SDK with TypeScript - Modern streaming AI applications
OpenAI SDK
Direct OpenAI integration - Straightforward GPT access with minimal setup
SDK Framework Details
Strands SDK
Best for: Complex business logic, specialized agents, AWS integration
Architecture & Features:
- Python backend with FastAPI
- Full AWS Strands Agent SDK integration
- Advanced memory and session management
- Custom tool integration
- Bedrock model support
- Comprehensive observability
Use Cases:
- Domain-specific agents (gym management, weather, RAG)
- Complex business workflows
- Multi-tool agent systems
- Production-ready agent deployments
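The multi-tool agent systems listed above follow a common pattern: the model picks a tool, and the backend routes the call to its implementation. A stdlib-only sketch of that dispatch pattern (the tool names and routing are illustrative stubs; the real Strands Agent SDK provides its own agent and tool abstractions on top of the Python/FastAPI backend):

```python
# Illustrative multi-tool dispatch -- the Strands Agent SDK supplies its own
# Agent/tool abstractions; this only sketches the routing pattern.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"   # stub: a real tool would call a weather API

def book_class(name: str) -> str:
    return f"Booked {name}"     # stub: e.g. a gym-management tool

TOOLS = {"get_weather": get_weather, "book_class": book_class}

def invoke_tool(tool_name: str, **kwargs) -> str:
    """Route a model-selected tool call to its implementation."""
    tool = TOOLS.get(tool_name)
    if tool is None:
        raise KeyError(f"agent has no tool named {tool_name!r}")
    return tool(**kwargs)
```

The Debug & Monitoring tab's "Agent Tool Execution" view surfaces exactly these tool calls and their results.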
AI SDK
Best for: Modern web applications, streaming responses, multi-provider support
Architecture & Features:
- TypeScript client and server integration
- Multiple AI providers (OpenAI, Anthropic, Google)
- Streaming response support
- Environment variable configuration
- Real-time updates
Use Cases:
- Web application integration
- Multi-provider testing
- Streaming chat interfaces
- Rapid prototyping
OpenAI SDK
Best for: Simple GPT integration, quick testing, minimal setup
Architecture & Features:
- Direct OpenAI API integration
- Minimal configuration required
- GPT model focus
- Environment variable support
- Streaming enabled
Use Cases:
- Quick GPT testing
- Simple conversational interfaces
- Proof of concepts
- Educational projects
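For quick GPT testing like the cases above, the request shape is the main thing to get right. A hedged sketch using only the standard library: it builds a Chat Completions payload and only sends it when the `OPENAI_API_KEY` environment variable (the same one the playground reads) is present. The payload structure follows the public OpenAI Chat Completions API; the helper names are my own:

```python
import json
import os
import urllib.request

def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Shape a Chat Completions payload like the one the playground sends."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # the playground enables streaming by default
    }

def send(payload: dict):
    """POST the payload to OpenAI, or skip the call when no key is set."""
    key = os.environ.get("OPENAI_API_KEY")  # env var the playground uses
    if not key:
        return None  # no key configured; skip the network call
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {key}",
                 "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

In practice you would use the official `openai` client instead; the raw payload is shown so the "minimal configuration" claim is concrete.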
Endpoint Details
Your comprehensive testing and configuration interface. This is where you'll configure, test, and debug individual endpoints with precise control over every aspect.
Interface Layout
The endpoint details page uses a dual-panel layout for maximum efficiency:
Configuration Panel
Three-tab configuration interface for agent settings, knowledge management, and debugging.
Chat Panel
Live testing interface with chat UI, response viewing, and real-time debugging.
Configuration Panel
Agent Configuration Tab
Configuration options vary significantly by SDK type. Choose the right framework for your needs.
Full Configuration Control (Strands SDK)
- Endpoint Path: Configure the API endpoint path for agent invocation
- Request Payload: Customize the JSON payload structure and parameters
- System Instructions: Define agent behavior and personality
- Memory Management: Enable/disable conversation memory and persistence
- Model Selection: Choose from available AWS Bedrock models
- Tool Configuration: Access to agent metadata and custom tools
Provider-Based Configuration (AI SDK)
- Provider Selection: Choose between OpenAI, Anthropic, or Google (edit endpoint only)
- Model Configuration: Select specific models for your chosen provider
- API Key Management: Automatic environment variable detection
- Real-time Updates: Configuration changes apply immediately
- Streaming Settings: Control streaming behavior and response formatting
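The automatic API key detection above amounts to checking which provider keys are set. A small sketch of that check (the `OPENAI_API_KEY` name appears in these docs; the Anthropic and Google variable names are common conventions and assumed here, not confirmed by the playground):

```python
import os

# Env-var name per provider. OPENAI_API_KEY is documented; the other two
# are conventional names and an assumption of this sketch.
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_GENERATIVE_AI_API_KEY",
}

def available_providers(env=os.environ):
    """Return providers whose API key is present in the environment."""
    return [provider for provider, var in PROVIDER_KEYS.items() if env.get(var)]
```

This is why provider selection can be offered only for keys that are actually configured: the check is a cheap environment lookup, with no network call needed.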
Simplified Setup (OpenAI SDK)
- Model Default: Automatically configured for GPT-4o-Mini
- API Key: Uses OPENAI_API_KEY environment variable
- Minimal Configuration: Focus on testing rather than setup
- Streaming Enabled: Built-in streaming response support
- Edit-Only Settings: Core configuration available when editing endpoint
Knowledge Management Tab
Advanced Knowledge Integration
- Knowledge Bases: View and select configured Bedrock knowledge bases
- Default Artifacts: Set up files to be attached to every request automatically
- Agent Tools: Access tools and features loaded from agent metadata
- Metadata Refresh: Update agent capabilities from source
- Custom Tool Configuration: Configure domain-specific tools and integrations
Artifact Management
- Default Attachments: Configure files to be included with requests
- File Upload: Upload documents for context and reference
- Attachment Preview: View and manage attached files
- Context Management: Control how attachments are processed and used
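Default attachments ride along with every request, which usually means encoding file contents into the payload. A sketch of that packaging step (the attachment entry format here is a hypothetical illustration; the playground's actual schema may differ):

```python
import base64

def attach_bytes(filename: str, data: bytes) -> dict:
    """Package raw file bytes as a base64 attachment entry.
    (Illustrative format -- the playground's actual schema may differ.)"""
    return {
        "filename": filename,
        "content_b64": base64.b64encode(data).decode("ascii"),
    }

def with_default_artifacts(payload: dict, artifacts: list) -> dict:
    """Return a copy of a request payload with the configured default
    artifacts attached, so they are included with every request."""
    out = dict(payload)
    out["attachments"] = list(artifacts)
    return out
```

Base64 keeps binary files (images, PDFs) safe inside a JSON payload, at the cost of roughly a third more bytes on the wire.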
Debug & Monitoring Tab
Strands SDK Only: Advanced debugging and metrics are available exclusively for Strands SDK endpoints.
Available for Strands SDK:
- Invocation Metrics: Real-time performance data and response times
- Error Logging: Detailed error messages and stack traces
- Token Usage: Monitor token consumption and costs
- Request/Response Logging: Full conversation history and debugging
- Agent Tool Execution: Monitor tool calls and their results
For AI SDK & OpenAI SDK:
- Basic Logging: Simple request/response logging
- Error Messages: Basic error information and status codes
Testing Panel
Chat Interface
Interactive chat UI for testing conversational endpoints with real-time responses.
Response Analysis
Detailed response inspection, formatting, and debugging information.
Quick Actions
Refresh metadata, reset conversations, and access additional tools.
Advanced Features
Metadata Management
- Refresh Meta Button - Updates Strands agent metadata from source repositories
- Tool Discovery - Automatically detects available tools and capabilities
- Configuration Sync - Ensures playground settings match agent definitions
Best Practices
Pro Tips for Effective Testing
- Start Simple: Begin with basic queries to verify connectivity
- Test Edge Cases: Try unusual inputs to test robustness
- Monitor Performance: Watch response times and token usage
- Use Debug Mode: Enable detailed logging for troubleshooting
- Save Configurations: Export successful configurations for reuse
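"Save Configurations" above is simple to do by hand if you keep configs as JSON. A minimal round-trip sketch (the sample field names are illustrative, not the playground's export format):

```python
import json
import pathlib
import tempfile

def export_config(config: dict, path: str) -> None:
    """Save a working endpoint configuration as JSON for later reuse."""
    pathlib.Path(path).write_text(json.dumps(config, indent=2))

def import_config(path: str) -> dict:
    """Load a previously exported configuration."""
    return json.loads(pathlib.Path(path).read_text())

# Round-trip a sample configuration through a temp file.
sample = {"name": "weather-bot", "sdk": "strands", "model": "gpt-4o-mini"}
path = str(pathlib.Path(tempfile.gettempdir()) / "playground-config.json")
export_config(sample, path)
restored = import_config(path)
```

Keeping exports under version control also gives you a history of which configurations worked.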
Settings
Global configuration center for customizing your playground experience and managing shared resources.
Core Settings
Appearance
Dark/light mode toggle and UI customization options.
Providers & Models
Configure available AI providers and model selections.
Artifacts
S3 integration for file storage and knowledge base management.
Artifact Management
Artifacts are files you upload for use as attachments or knowledge base items. They're stored securely and can be reused across multiple endpoints.
Do not upload sensitive or private data.
S3 Integration Features
File Operations:
- Upload files and create organized folder structures
- Support for various file types (documents, images, data files)
- Automatic file validation and processing
- Secure storage with access control
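The file validation and organized folder structure above boil down to two checks on the client side: an allowed file type and a well-formed object key. A stdlib-only sketch (the allow-list is an assumption for illustration; the actual upload goes through the playground's S3 integration):

```python
import pathlib

# Extensions this sketch accepts -- the playground's real allow-list
# may differ; this set is an assumption for illustration.
ALLOWED = {".pdf", ".txt", ".md", ".csv", ".json", ".png", ".jpg"}

def object_key(folder: str, filename: str) -> str:
    """Validate a file's type and build its S3 object key inside a folder."""
    ext = pathlib.PurePosixPath(filename).suffix.lower()
    if ext not in ALLOWED:
        raise ValueError(f"unsupported file type: {ext or filename}")
    return f"{folder.strip('/')}/{filename}"
```

Folder-shaped keys like `kb/weather/rain.pdf` are what lets a folder later be turned into a knowledge base as a unit.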
Knowledge Base Creation:
- Transform folder structures into searchable knowledge bases
- Integration with AWS Bedrock for vector embeddings
- Manual sync control for knowledge base updates
- Query and retrieval testing interface
Orbit Credentials Integration:
- Dedicated folder location when using Orbit's AWS credentials
- Automatic configuration and permissions setup
- Shared access to Orbit's curated knowledge sources
- Seamless integration with sample agents
The playground provides a powerful yet simple interface for testing AI endpoints and agents without production environment complexity. Start with the samples, then build your own custom configurations!