Installation
Quick Start
Configuration
ChatSDKConfig Interface
Configuration Options
- baseUrl: The base URL of your API endpoint
- projectId: Your unique project identifier
- apiKey & apiSecret: For server-side authentication
- accessToken: For client-side authentication (web apps)
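The options above suggest a configuration shape roughly like the following sketch. The field names come from the list; everything else (the placeholder values, the optionality of the auth fields) is an assumption for illustration.

```typescript
// Hypothetical sketch of ChatSDKConfig based on the options listed above.
interface ChatSDKConfig {
  baseUrl: string;      // base URL of your API endpoint
  projectId: string;    // unique project identifier
  apiKey?: string;      // server-side authentication
  apiSecret?: string;   // server-side authentication
  accessToken?: string; // client-side authentication (web apps)
}

// Example server-side configuration (placeholder values).
const serverConfig: ChatSDKConfig = {
  baseUrl: "https://api.example.com",
  projectId: "my-project",
  apiKey: "your-api-key",
  apiSecret: "your-api-secret",
};
```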
Authentication
The ChatSDK supports two authentication methods:
- API Key Authentication (Server-side)
- Access Token Authentication (Client-side)
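The two modes can be modeled as a discriminated union. Note that the header names below are assumptions for illustration only; the SDK handles the actual wire format internally.

```typescript
// Sketch: selecting between server-side and client-side authentication.
type AuthMode =
  | { kind: "apiKey"; apiKey: string; apiSecret: string } // server-side
  | { kind: "accessToken"; accessToken: string };         // client-side (web apps)

// Hypothetical header construction; real header names may differ.
function authHeaders(auth: AuthMode): Record<string, string> {
  if (auth.kind === "apiKey") {
    return { "X-API-Key": auth.apiKey, "X-API-Secret": auth.apiSecret };
  }
  return { Authorization: `Bearer ${auth.accessToken}` };
}
```

Keep `apiKey`/`apiSecret` on the server only; ship short-lived access tokens to the browser instead.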
Core Methods
Chat Management
createChat(name?, documentKeys?)
Creates a new chat conversation.
listChats(cursor?, limit?)
Retrieves a paginated list of chats in the project.
getChatHistory(chatId)
Retrieves a chat with its complete message history.
deleteChat(chatId)
Deletes a chat and all of its messages permanently.
updateChatName(chatId, newName)
Updates the display name of an existing chat.
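The chat-management surface described above can be sketched with a small in-memory store. This is a stub for illustration only: the real SDK methods are asynchronous and call the API, and the return shapes here are assumptions.

```typescript
interface Chat {
  id: string;
  name: string;
  documentKeys: string[];
  messages: string[];
}

// In-memory stand-in for the SDK's chat-management methods.
class MockChatStore {
  private chats = new Map<string, Chat>();
  private nextId = 1;

  createChat(name = "Untitled", documentKeys: string[] = []): Chat {
    const chat: Chat = { id: String(this.nextId++), name, documentKeys, messages: [] };
    this.chats.set(chat.id, chat);
    return chat;
  }

  listChats(cursor = 0, limit = 20): { chats: Chat[]; nextCursor: number } {
    const all = [...this.chats.values()];
    return { chats: all.slice(cursor, cursor + limit), nextCursor: cursor + limit };
  }

  getChatHistory(chatId: string): Chat | undefined {
    return this.chats.get(chatId);
  }

  deleteChat(chatId: string): boolean {
    return this.chats.delete(chatId); // removes the chat and all its messages
  }

  updateChatName(chatId: string, newName: string): void {
    const chat = this.chats.get(chatId);
    if (chat) chat.name = newName;
  }
}
```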
Message Handling
sendMessage(message, options?)
Sends a message and returns the complete AI response.
sendFeedback(messageId, chatId, feedback)
Provides feedback on an AI response.
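The two message-handling calls might look like the sketch below. The option and field names (`chatId`, `model`, `messageId`, the `"up" | "down"` feedback values) are assumptions based on the descriptions above; the bodies are stubs rather than real API calls.

```typescript
interface SendOptions {
  chatId?: string;
  model?: string;
}

interface AIResponse {
  messageId: string;
  chatId: string;
  text: string;
}

const feedbackLog: { messageId: string; chatId: string; feedback: string }[] = [];

// Stub: a real implementation would POST to the API and await the reply.
function sendMessage(message: string, options: SendOptions = {}): AIResponse {
  return { messageId: "m1", chatId: options.chatId ?? "c1", text: `echo: ${message}` };
}

// Stub: records feedback locally instead of sending it to the API.
function sendFeedback(messageId: string, chatId: string, feedback: "up" | "down"): void {
  feedbackLog.push({ messageId, chatId, feedback });
}
```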
Streaming Support
sendMessageStream(message, options)
Sends a message and streams the AI response in real time.
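A streaming call delivers the response incrementally through callbacks. The callback names (`onChunk`, `onComplete`, `onError`) follow the example descriptions later in this document; their exact signatures are assumptions, and the body below simulates streaming rather than calling the API.

```typescript
interface StreamCallbacks {
  onChunk: (text: string) => void;
  onComplete?: (fullText: string) => void;
  onError?: (err: Error) => void;
}

// Stub: emits the reply word by word to mimic a streamed response.
function sendMessageStream(message: string, callbacks: StreamCallbacks): void {
  const reply = `echo: ${message}`;
  let full = "";
  for (const word of reply.split(" ")) {
    const chunk = full ? ` ${word}` : word; // re-insert the separating space
    full += chunk;
    callbacks.onChunk(chunk);
  }
  callbacks.onComplete?.(full);
}
```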
Error Handling
The SDK throws APIError objects for API-related failures:
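A plausible shape for APIError, together with the recommended catch pattern, is sketched below. Only the class name comes from the document; the `status` and `code` fields are assumptions.

```typescript
// Hypothetical error shape; the real APIError may carry different fields.
class APIError extends Error {
  constructor(message: string, public status: number, public code?: string) {
    super(message);
    this.name = "APIError";
  }
}

// Distinguish API failures from unexpected errors.
function describeFailure(err: unknown): string {
  if (err instanceof APIError) {
    return `API failure ${err.status}: ${err.message}`;
  }
  throw err; // not an API error; let it propagate
}
```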
Examples
Basic Chat Application
In this example, you will learn how to build a basic chat application using the Odin AI SDK with simple, non-streaming message exchanges. You start by creating a SimpleChatApp class that initializes the ChatSDK with your API credentials pulled from environment variables (base URL, project ID, API key, and secret). The class tracks your current chat session with currentChatId and provides three main methods: startNewChat() creates a new chat conversation with a custom name, sendMessage() sends a message to the AI and returns the complete response (automatically creating a new chat if one doesn’t exist), and getChatList() retrieves all your existing chat conversations. Unlike streaming implementations, this approach waits for the full AI response before displaying it, which makes it well suited to simpler use cases where you don’t need real-time token-by-token updates. The example uses the GPT-4o-mini model and includes error handling throughout, with console logging to help you track the conversation flow, giving you a straightforward foundation for building basic chatbot functionality without the complexity of streaming callbacks.
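The flow described above can be condensed into the following sketch. The ChatSDK is replaced by a tiny in-memory stub so the control flow is visible end to end; the stub's behavior, and the exact method signatures, are assumptions rather than the real SDK.

```typescript
// Stand-in for ChatSDK; the real client is asynchronous and calls the API.
class StubSDK {
  private counter = 0;
  private chats: { id: string; name: string }[] = [];

  createChat(name: string) {
    const chat = { id: `chat-${++this.counter}`, name };
    this.chats.push(chat);
    return chat;
  }

  sendMessage(message: string, options: { chatId: string; model: string }) {
    return { text: `(${options.model}) reply to: ${message}` };
  }

  listChats() {
    return this.chats;
  }
}

class SimpleChatApp {
  private currentChatId: string | null = null;

  constructor(private sdk: StubSDK) {}

  startNewChat(name: string): string {
    this.currentChatId = this.sdk.createChat(name).id;
    return this.currentChatId;
  }

  sendMessage(message: string): string {
    // Create a chat automatically if one doesn't exist yet.
    if (!this.currentChatId) this.startNewChat("New conversation");
    return this.sdk.sendMessage(message, {
      chatId: this.currentChatId!,
      model: "gpt-4o-mini",
    }).text;
  }

  getChatList() {
    return this.sdk.listChats();
  }
}
```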
Streaming Chat with Real-time Updates
In this example, you will learn how to build a streaming chat interface that connects to the Odin AI API and displays AI responses in real time. You start by creating a StreamingChat class that initializes the ChatSDK with your API credentials and a reference to an HTML element where messages will appear. When you send a message using sendStreamingMessage(), you set up callback handlers that process the streaming response as it arrives: the onChunk callback appends each piece of text to your UI immediately (creating that characteristic “typing” effect), while onMessageObject lets you handle rich content like images. You also implement error handling with onError, update your chat title with onChatNameUpdate, and use onComplete to add thumbs up/down feedback buttons once the message finishes streaming. This example shows you how to display images dynamically, collect user feedback on AI responses, and configure the chat to use GPT-4o with Knowledge Base integration, giving you everything you need to create a ChatGPT-like interface with real-time streaming responses and interactive features.
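The callback wiring described above can be sketched without a DOM by appending chunks to a transcript string instead of an HTML element. The callback names follow the description; their exact signatures, and the message-object shape, are assumptions.

```typescript
interface StreamingHandlers {
  onChunk: (text: string) => void;
  onMessageObject?: (obj: { type: string; url?: string }) => void; // e.g. images
  onChatNameUpdate?: (name: string) => void;
  onComplete?: () => void;
  onError?: (err: Error) => void;
}

class StreamingChat {
  transcript = "";
  chatName = "";
  done = false;

  // In a real app these handlers would update the HTML element instead.
  handlers(): StreamingHandlers {
    return {
      onChunk: (text) => { this.transcript += text; }, // "typing" effect
      onMessageObject: (obj) => {
        if (obj.type === "image" && obj.url) this.transcript += `[image: ${obj.url}]`;
      },
      onChatNameUpdate: (name) => { this.chatName = name; },
      onComplete: () => { this.done = true; }, // add feedback buttons here
      onError: (err) => { console.error("stream failed:", err.message); },
    };
  }
}
```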
Knowledge Base Integration
In this example, you will discover how to build a Knowledge Base-powered chat application that can answer questions based on specific documents you upload to your Knowledge Base. The KnowledgeBasedChat class initializes the ChatSDK with your API credentials from environment variables, then uses two key methods to interact with your documents: createDocumentChat() sets up a new chat session linked to specific documents by passing an array of documentKeys (unique identifiers for your uploaded documents), and askDocumentQuestion() sends questions to the AI with Knowledge Base features enabled. When you ask questions, you configure the chat with useKnowledgebase: true and agentType: 'document_agent' so the AI retrieves relevant information from your documents, while formatInstructions tells the AI to include citations for its sources. The response includes a sources property that shows you which documents the AI referenced when generating its answer, making this ideal for building document Q&A systems, research assistants, or any application where you need AI responses grounded in specific content rather than general knowledge. This enables full transparency into how the AI arrived at its answers.
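The option names `useKnowledgebase`, `agentType`, `formatInstructions`, `documentKeys`, and the `sources` response property come from the description above; everything else in this sketch (the types, the stub bodies, the document keys) is an assumption for illustration.

```typescript
interface KBOptions {
  chatId: string;
  useKnowledgebase: boolean;
  agentType: "document_agent";
  formatInstructions?: string;
}

// Stub: a real call would create the chat via the API.
function createDocumentChat(documentKeys: string[]) {
  return { chatId: "chat-1", documentKeys };
}

// Stub: a real call would return the AI answer plus the cited documents.
function askDocumentQuestion(question: string, options: KBOptions) {
  return {
    text: options.useKnowledgebase ? `answer to: ${question}` : "",
    sources: ["doc-key-1"], // documents the AI referenced in its answer
  };
}

const chat = createDocumentChat(["doc-key-1"]);
const response = askDocumentQuestion("What does section 2 say?", {
  chatId: chat.chatId,
  useKnowledgebase: true,
  agentType: "document_agent",
  formatInstructions: "Include citations for your sources.",
});
```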
Best Practices
Error Handling
Always implement proper error handling for all SDK methods.
Streaming for Better UX
Use streaming for longer responses to provide real-time feedback.
Optimize API Calls
- Reuse chat IDs instead of creating new chats for each message
- Use pagination for chat lists
- Implement caching for frequently accessed data
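The chat-ID-reuse practice above can be sketched as follows: create one chat per session and reuse its ID for every subsequent message. The `sdk` object here is a stub that only counts creations; the real SDK call is asynchronous.

```typescript
// Stub SDK that counts how many chats get created.
const sdk = {
  created: 0,
  createChat() {
    this.created++;
    return { id: `chat-${this.created}` };
  },
};

let sessionChatId: string | null = null;

// Create a chat once, then reuse the same ID for the whole session.
function chatIdForSession(client: typeof sdk): string {
  if (!sessionChatId) sessionChatId = client.createChat().id;
  return sessionChatId;
}
```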

