A Go client library for the Tama API, providing easy access to Neural and Sensory provisioning endpoints.
```bash
go get github.com/upmaru/tama-go
```

The Tama Go client uses the OAuth2 client credentials flow for authentication. You'll need:
- Client ID: Your OAuth2 client identifier
- Client Secret: Your OAuth2 client secret
The client automatically handles token acquisition and refresh using the `/auth/tokens` endpoint with:
- Grant type: `client_credentials`
- Scope: `provision.all`
- Authentication: HTTP Basic Auth with base64-encoded `client_id:client_secret`
For testing purposes, you can skip OAuth2 token fetching by setting `SkipTokenFetch: true` in the config:

```go
config := tama.Config{
    BaseURL:        "https://api.tama.io",
    ClientID:       "test-client-id",
    ClientSecret:   "test-client-secret",
    SkipTokenFetch: true, // Skip token fetching for tests
}
```

This prevents the client from making actual HTTP requests to obtain tokens during initialization, which is useful for unit tests with mock servers.
```go
package main

import (
    "fmt"
    "time"

    tama "github.com/upmaru/tama-go"
    "github.com/upmaru/tama-go/neural"
    "github.com/upmaru/tama-go/perception"
    "github.com/upmaru/tama-go/sensory"
)

func main() {
    // Initialize the client with OAuth2 credentials
    config := tama.Config{
        BaseURL:      "https://api.tama.io",
        ClientID:     "your-client-id",
        ClientSecret: "your-client-secret",
        Timeout:      30 * time.Second,
    }

    client, err := tama.NewClient(config)
    if err != nil {
        panic(err)
    }

    // Create a neural space
    space, err := client.Neural.CreateSpace(neural.CreateSpaceRequest{
        Space: neural.SpaceRequestData{
            Name: "My Neural Space",
            Type: "root",
        },
    })
    if err != nil {
        panic(err)
    }
    fmt.Printf("Created space: ID=%s, Name=%s, Type=%s, State=%s\n",
        space.ID, space.Name, space.Type, space.ProvisionState)

    // Create a source in the space
    source, err := client.Sensory.CreateSource(space.ID, sensory.CreateSourceRequest{
        Source: sensory.SourceRequestData{
            Name:     "AI Model Source",
            Type:     "model",
            Endpoint: "https://api.example.com/v1",
            Credential: sensory.SourceCredential{
                APIKey: "source-api-key",
            },
        },
    })
    if err != nil {
        panic(err)
    }
    fmt.Printf("Created source: ID=%s, Name=%s, Endpoint=%s, SpaceID=%s, State=%s\n",
        source.ID, source.Name, source.Endpoint, source.SpaceID, source.ProvisionState)

    // Create a limit for the source
    limit, err := client.Sensory.CreateLimit(source.ID, sensory.CreateLimitRequest{
        Limit: sensory.LimitRequestData{
            ScaleUnit:  "minutes",
            ScaleCount: 1,
            Count:      100,
        },
    })
    if err != nil {
        panic(err)
    }
    fmt.Printf("Created limit: ID=%s, SourceID=%s, Count=%d, State=%s\n",
        limit.ID, limit.SourceID, limit.Count, limit.ProvisionState)

    // Create a perception chain
    chain, err := client.Perception.CreateChain(space.ID, perception.CreateChainRequest{
        Chain: perception.ChainRequestData{
            Name: "AI Processing Chain",
        },
    })
    if err != nil {
        panic(err)
    }
    fmt.Printf("Created chain: ID=%s, Name=%s, SpaceID=%s, State=%s\n",
        chain.ID, chain.Name, chain.SpaceID, chain.ProvisionState)

    // Create a thought in the chain
    thought, err := client.Perception.CreateThought(chain.ID, perception.CreateThoughtRequest{
        Thought: perception.ThoughtRequestData{
            Relation:      "description",
            OutputClassID: "class-123",
            Module: perception.Module{
                Reference: "tama/agentic/generate",
                Parameters: map[string]any{
                    "temperature": 0.7,
                    "max_tokens":  150,
                    "model":       "gpt-4",
                },
            },
        },
    })
    if err != nil {
        panic(err)
    }
    fmt.Printf("Created thought: ID=%s, ChainID=%s, Relation=%s, State=%s\n",
        thought.ID, thought.ChainID, thought.Relation, thought.ProvisionState)
}
```

The client library is organized into the following packages:
- `client.go` - Main client configuration and initialization
- `neural.go` - Neural service wrapper that uses the neural package
- `sensory.go` - Sensory service wrapper that uses the sensory package
- `perception.go` - Perception service wrapper that uses the perception package
- `types.go` - Shared types and documentation
`neural/` package:
- `service.go` - Service definition and neural-related types
- `space.go` - Space operations (GET, POST, PATCH, PUT, DELETE)
- `processor.go` - Processor operations (GET, POST, PATCH, PUT, DELETE)
- `class.go` - Class operations (GET, POST, PATCH, PUT, DELETE)
- `corpus.go` - Corpus operations (GET, POST, PATCH, PUT, DELETE)
- `bridge.go` - Bridge operations (GET, POST, PATCH, PUT, DELETE)
`sensory/` package:
- `service.go` - Service definition and sensory-related types
- `source.go` - Source operations (GET, POST, PATCH, PUT, DELETE)
- `model.go` - Model operations (GET, POST, PATCH, PUT, DELETE)
- `limit.go` - Limit operations (GET, POST, PATCH, PUT, DELETE)
`perception/` package:
- `service.go` - Service definition and perception-related types
- `chain.go` - Chain operations (GET, POST, PATCH, PUT, DELETE)
- `thought.go` - Thought operations (GET, POST, PATCH, DELETE)
- `path.go` - Path operations (GET, POST, PATCH, PUT, DELETE)
- `context.go` - Context operations (GET, POST, PATCH, PUT, DELETE)
- `example/` - Working examples demonstrating all features
This modular structure separates concerns into different packages, making the codebase easier to navigate, maintain, and extend. Each service package encapsulates its related functionality with its own types and operations.
Detailed API documentation is available in the `docs/` directory:
- Client Configuration - How to configure and initialize the Tama client
- Neural Service - Space, Processor, Class, Corpus, and Bridge operations
- Memory Service - Prompt operations and memory management
- Sensory Service - Source, Model, Limit, Specification, and Identity operations
- Perception Service - Chain, Thought, Path, and Context operations
For a complete overview, see the documentation index.
The client provides comprehensive coverage of the Tama API endpoints, organized by resource type:
Spaces:
- `GET /provision/neural/spaces/:id` - Get space by ID
- `POST /provision/neural/spaces` - Create new space
- `PATCH /provision/neural/spaces/:id` - Update space
- `PUT /provision/neural/spaces/:id` - Replace space
- `DELETE /provision/neural/spaces/:id` - Delete space
Processors:
- `GET /provision/neural/spaces/:space_id/models/:model_id/processor` - Get processor
- `POST /provision/neural/spaces/:space_id/models/:model_id/processor` - Create processor
- `PATCH /provision/neural/spaces/:space_id/models/:model_id/processor` - Update processor
- `PUT /provision/neural/spaces/:space_id/models/:model_id/processor` - Replace processor
- `DELETE /provision/neural/spaces/:space_id/models/:model_id/processor` - Delete processor
Classes:
- `GET /provision/neural/classes/:id` - Get class by ID
- `POST /provision/neural/spaces/:space_id/classes` - Create class in space
- `PATCH /provision/neural/classes/:id` - Update class
- `PUT /provision/neural/classes/:id` - Replace class
- `DELETE /provision/neural/classes/:id` - Delete class
Corpora:
- `GET /provision/neural/corpora/:id` - Get corpus by ID
- `POST /provision/neural/classes/:class_id/corpora` - Create corpus in class
- `PATCH /provision/neural/corpora/:id` - Update corpus
- `PUT /provision/neural/corpora/:id` - Replace corpus
- `DELETE /provision/neural/corpora/:id` - Delete corpus
Bridges:
- `GET /provision/neural/bridges/:id` - Get bridge by ID
- `POST /provision/neural/spaces/:space_id/bridges` - Create bridge in space
- `PATCH /provision/neural/bridges/:id` - Update bridge
- `PUT /provision/neural/bridges/:id` - Replace bridge
- `DELETE /provision/neural/bridges/:id` - Delete bridge
Sources:
- `GET /provision/sensory/sources/:id` - Get source by ID
- `POST /provision/sensory/spaces/:space_id/sources` - Create source in space
- `PATCH /provision/sensory/sources/:id` - Update source
- `PUT /provision/sensory/sources/:id` - Replace source
- `DELETE /provision/sensory/sources/:id` - Delete source
Models:
- `GET /provision/sensory/models/:id` - Get model by ID
- `POST /provision/sensory/sources/:source_id/models` - Create model for source
- `PATCH /provision/sensory/models/:id` - Update model
- `PUT /provision/sensory/models/:id` - Replace model
- `DELETE /provision/sensory/models/:id` - Delete model
Limits:
- `GET /provision/sensory/limits/:id` - Get limit by ID
- `POST /provision/sensory/sources/:source_id/limits` - Create limit for source
- `PATCH /provision/sensory/limits/:id` - Update limit
- `PUT /provision/sensory/limits/:id` - Replace limit
- `DELETE /provision/sensory/limits/:id` - Delete limit
Note: Limits are associated with sources via the `source_id` field and track resource usage counts with current state.
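Read together, the three numeric fields describe a rate window: `Count` requests per `ScaleCount` `ScaleUnit`. This reading is inferred from the field names rather than stated by the API, so treat the helper below as an illustration:

```go
package main

import "fmt"

// describeLimit renders a limit in plain words. The "requests per window"
// reading of Count/ScaleCount/ScaleUnit is an assumption based on the
// field names, not an official definition.
func describeLimit(count, scaleCount int, scaleUnit string) string {
	return fmt.Sprintf("%d requests per %d %s", count, scaleCount, scaleUnit)
}

func main() {
	// The limit created in the quick-start example above.
	fmt.Println(describeLimit(100, 1, "minutes")) // 100 requests per 1 minutes
}
```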
Chains:
- `GET /provision/perception/chains/:id` - Get chain by ID
- `POST /provision/perception/spaces/:space_id/chains` - Create chain in space
- `PATCH /provision/perception/chains/:id` - Update chain
- `PUT /provision/perception/chains/:id` - Replace chain
- `DELETE /provision/perception/chains/:id` - Delete chain
Thoughts:
- `GET /provision/perception/thoughts/:id` - Get thought by ID
- `POST /provision/perception/chains/:chain_id/thoughts` - Create thought in chain
- `PATCH /provision/perception/thoughts/:id` - Update thought
- `DELETE /provision/perception/thoughts/:id` - Delete thought
Note: Thoughts are associated with chains and contain module configurations for AI processing operations.
Paths:
- `GET /provision/perception/paths/:id` - Get path by ID
- `POST /provision/perception/thoughts/:thought_id/paths` - Create path in thought
- `PATCH /provision/perception/paths/:id` - Update path
- `PUT /provision/perception/paths/:id` - Replace path
- `DELETE /provision/perception/paths/:id` - Delete path
Note: Paths are associated with thoughts and define target classes with configurable parameters.
Contexts:
- `GET /provision/perception/contexts/:id` - Get context by ID
- `POST /provision/perception/thoughts/:thought_id/contexts` - Create context in thought
- `PATCH /provision/perception/contexts/:id` - Update context
- `PUT /provision/perception/contexts/:id` - Replace context
- `DELETE /provision/perception/contexts/:id` - Delete context
Note: Contexts are associated with thoughts and contain prompt IDs with layer information for neural processing operations.
Space operations:

```go
import "github.com/upmaru/tama-go/neural"

// Create a space
space, err := client.Neural.CreateSpace(neural.CreateSpaceRequest{
    Space: neural.SpaceRequestData{
        Name: "Production Space",
        Type: "root",
    },
})
// space will have ID, Name, Slug, Type, and ProvisionState populated

// Get a space
space, err := client.Neural.GetSpace("space-123")

// Update a space (partial update)
space, err := client.Neural.UpdateSpace("space-123", neural.UpdateSpaceRequest{
    Space: neural.UpdateSpaceData{
        Name: "Updated Production Space",
        Type: "component",
    },
})
// ProvisionState cannot be updated via the API - it's managed server-side

// Replace a space (full replacement)
space, err := client.Neural.ReplaceSpace("space-123", neural.UpdateSpaceRequest{
    Space: neural.UpdateSpaceData{
        Name: "New Production Space",
        Type: "root",
    },
})

// Delete a space
err := client.Neural.DeleteSpace("space-123")
```

Processor operations:

```go
import "github.com/upmaru/tama-go/neural"

// Create a processor
processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "completion",
        Configuration: map[string]any{
            "temperature": 0.8,
            "tool_choice": "required",
            "role_mappings": []map[string]any{
                {"from": "user", "to": "human"},
                {"from": "assistant", "to": "ai"},
            },
        },
    },
})
// processor will have ID, SpaceID, ModelID, Type, Configuration, and ProvisionState populated

// Get a processor
processor, err := client.Neural.GetProcessor("space-123", "model-123")

// Update a processor (partial update)
processor, err := client.Neural.UpdateProcessor("space-123", "model-123", neural.UpdateProcessorRequest{
    Processor: neural.UpdateProcessorData{
        Type: "embedding",
        Configuration: map[string]any{
            "max_tokens": 512,
            "templates": []map[string]any{
                {"type": "query", "content": "Query: {text}"},
            },
        },
    },
})
// ProvisionState cannot be updated via the API - it's managed server-side

// Replace a processor (full replacement)
processor, err := client.Neural.ReplaceProcessor("space-123", "model-123", neural.UpdateProcessorRequest{
    Processor: neural.UpdateProcessorData{
        Type: "reranking",
        Configuration: map[string]any{
            "top_n": 3,
        },
    },
})

// Delete a processor
err := client.Neural.DeleteProcessor("space-123", "model-123")
```

Class operations:

```go
import "github.com/upmaru/tama-go/neural"

// Create a class
class, err := client.Neural.CreateClass("space-123", neural.CreateClassRequest{
    Class: neural.ClassRequestData{
        Schema: map[string]any{
            "title":       "user-profile",
            "description": "User profile information",
            "type":        "object",
            "properties": map[string]any{
                "name": map[string]any{
                    "type":        "string",
                    "description": "User's full name",
                },
                "email": map[string]any{
                    "type":        "string",
                    "description": "User's email address",
                },
                "age": map[string]any{
                    "type":        "integer",
                    "description": "User's age",
                },
            },
            "required": []string{"name", "email"},
        },
    },
})

// Get a class
class, err := client.Neural.GetClass("class-123")

// Update a class (partial update)
class, err := client.Neural.UpdateClass("class-123", neural.UpdateClassRequest{
    Class: neural.UpdateClassData{
        Schema: map[string]any{
            "title": "updated-user-profile",
            "properties": map[string]any{
                "name": map[string]any{
                    "type": "string",
                },
                "phone": map[string]any{
                    "type": "string",
                },
            },
        },
    },
})

// Replace a class (full replacement)
class, err := client.Neural.ReplaceClass("class-123", neural.UpdateClassRequest{
    Class: neural.UpdateClassData{
        Schema: map[string]any{
            "title": "new-user-profile",
            "type":  "object",
            "properties": map[string]any{
                "username": map[string]any{
                    "type": "string",
                },
            },
        },
    },
})

// Delete a class
err := client.Neural.DeleteClass("class-123")
```

Corpus operations:

```go
import "github.com/upmaru/tama-go/neural"

// Create a corpus
corpus, err := client.Neural.CreateCorpus("class-123", neural.CreateCorpusRequest{
    Corpus: neural.CorpusRequestData{
        Main:     true,
        Name:     "Primary Training Corpus",
        Template: "training-template-v1",
    },
})

// Get a corpus
corpus, err := client.Neural.GetCorpus("corpus-123")

// Update a corpus (partial update)
main := false
corpus, err := client.Neural.UpdateCorpus("corpus-123", neural.UpdateCorpusRequest{
    Corpus: neural.UpdateCorpusData{
        Main:     &main,
        Name:     "Updated Training Corpus",
        Template: "training-template-v2",
    },
})

// Replace a corpus (full replacement)
mainFlag := true
corpus, err := client.Neural.ReplaceCorpus("corpus-123", neural.UpdateCorpusRequest{
    Corpus: neural.UpdateCorpusData{
        Main:     &mainFlag,
        Name:     "New Training Corpus",
        Template: "training-template-v3",
    },
})

// Delete a corpus
err := client.Neural.DeleteCorpus("corpus-123")
```

Bridge operations:

```go
import "github.com/upmaru/tama-go/neural"

// Create a bridge
bridge, err := client.Neural.CreateBridge("space-123", neural.CreateBridgeRequest{
    Bridge: neural.BridgeRequestData{
        TargetSpaceID: "space-456",
    },
})

// Get a bridge
bridge, err := client.Neural.GetBridge("bridge-123")

// Update a bridge (partial update)
bridge, err := client.Neural.UpdateBridge("bridge-123", neural.UpdateBridgeRequest{
    Bridge: neural.UpdateBridgeData{
        TargetSpaceID: "space-789",
    },
})

// Replace a bridge (full replacement)
bridge, err := client.Neural.ReplaceBridge("bridge-123", neural.UpdateBridgeRequest{
    Bridge: neural.UpdateBridgeData{
        TargetSpaceID: "space-999",
    },
})

// Delete a bridge
err := client.Neural.DeleteBridge("bridge-123")
```

Bridge fields:
- ID (string): Unique identifier for the bridge (read-only)
- SpaceID (string): ID of the source space (read-only, set from creation endpoint)
- TargetSpaceID (string): ID of the target space that this bridge connects to (required)
- ProvisionState (string): Current provisioning status (read-only)
Corpus fields:
- Main (boolean): Indicates if this is the primary corpus for the class
- Name (string): Human-readable name for the corpus
- Template (string): Template identifier used for processing the corpus data
- Slug (string): Auto-generated URL-friendly identifier (read-only)
- ProvisionState (string): Current provisioning status (read-only)
Processors support three types: "completion", "embedding", and "reranking". Each type has its own configuration schema:
For text completion and chat completion tasks:
```go
processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "completion",
        Configuration: map[string]any{
            "temperature": 0.8,        // decimal, default: 0.8
            "tool_choice": "required", // enum: "required", "auto", "any"; default: "required"
            "role_mappings": []map[string]any{
                {
                    "from": "user",
                    "to":   "human",
                },
                {
                    "from": "assistant",
                    "to":   "ai",
                },
            },
        },
    },
})
```

For text embedding and vector generation:
```go
processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "embedding",
        Configuration: map[string]any{
            "max_tokens": 512, // integer, default: 512
            "templates": []map[string]any{
                {
                    "type":    "query",
                    "content": "Query: {text}",
                },
                {
                    "type":    "document",
                    "content": "Document: {text}",
                },
            },
        },
    },
})
```

For document reranking and relevance scoring:
```go
processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "reranking",
        Configuration: map[string]any{
            "top_n": 3, // integer, default: 3
        },
    },
})
```

Completion Configuration:
- `temperature` (decimal): Controls randomness in generation, default: 0.8
- `tool_choice` (string): Tool selection strategy - "required", "auto", or "any", default: "required"
- `role_mappings` (array): Maps input roles to model-specific roles
  - `from` (string): Input role name
  - `to` (string): Model role name
Embedding Configuration:
- `max_tokens` (integer): Maximum tokens to process, default: 512
- `templates` (array): Text templates for different embedding types
  - `type` (string): Template type - "query" or "document"
  - `content` (string): Template string with {text} placeholder
Reranking Configuration:
- `top_n` (integer): Number of top results to return, default: 3
Source operations:

```go
import "github.com/upmaru/tama-go/sensory"

// Create a source in a space
source, err := client.Sensory.CreateSource("space-123", sensory.CreateSourceRequest{
    Source: sensory.SourceRequestData{
        Name:     "Mistral Source",
        Type:     "model",
        Endpoint: "https://api.mistral.ai/v1",
        Credential: sensory.SourceCredential{
            APIKey: "your-api-key",
        },
    },
})

// Get a source
source, err := client.Sensory.GetSource("source-123")

// Update a source
source, err := client.Sensory.UpdateSource("source-123", sensory.UpdateSourceRequest{
    Source: sensory.UpdateSourceData{
        Name:     "Updated Mistral Source",
        Endpoint: "https://api.mistral.ai/v2",
        Credential: &sensory.SourceCredential{
            APIKey: "your-updated-api-key",
        },
    },
})

// Delete a source
err := client.Sensory.DeleteSource("source-123")
```

Model operations:

```go
import "github.com/upmaru/tama-go/sensory"

// Create a model for a source
model, err := client.Sensory.CreateModel("source-123", sensory.CreateModelRequest{
    Model: sensory.ModelRequestData{
        Identifier: "mistral-small-latest",
        Path:       "/chat/completions",
        Parameters: map[string]any{
            "reasoning_effort": "low",
            "temperature":      1.0,
            "max_tokens":       2000,
            "stream":           true,
            "stop":             []string{"\n", "###"},
            "config": map[string]any{
                "timeout":      30,
                "enable_cache": true,
            },
        },
    },
})

// Get a model
model, err := client.Sensory.GetModel("model-123")

// Update a model
model, err := client.Sensory.UpdateModel("model-123", sensory.UpdateModelRequest{
    Model: sensory.UpdateModelData{
        Identifier: "mistral-large-latest",
        Path:       "/chat/completions",
        Parameters: map[string]any{
            "temperature": 0.8,
            "max_tokens":  1500,
        },
    },
})

// Delete a model
err := client.Sensory.DeleteModel("model-123")
```

The `Parameters` field in models accepts any valid JSON values, allowing flexible configuration:
```go
Parameters: map[string]any{
    // String values
    "reasoning_effort": "low",

    // Numeric values
    "temperature":       0.8,
    "max_tokens":        1500,
    "frequency_penalty": 0.1,

    // Boolean values
    "stream": true,

    // Array values
    "stop": []string{"\n", "###", "END"},

    // Object values
    "config": map[string]any{
        "timeout":      30,
        "enable_cache": true,
        "retries":      3,
    },
}
```

Limit operations:

```go
import "github.com/upmaru/tama-go/sensory"

// Create a limit for a source
limit, err := client.Sensory.CreateLimit("source-123", sensory.CreateLimitRequest{
    Limit: sensory.LimitRequestData{
        ScaleUnit:  "seconds",
        ScaleCount: 1,
        Count:      32,
    },
})

// Get a limit
limit, err := client.Sensory.GetLimit("limit-123")

// Update a limit
limit, err := client.Sensory.UpdateLimit("limit-123", sensory.UpdateLimitRequest{
    Limit: sensory.UpdateLimitData{
        ScaleUnit:      "minutes",
        ScaleCount:     5,
        Count:          100,
        ProvisionState: "active",
    },
})

// Delete a limit
err := client.Sensory.DeleteLimit("limit-123")
```

Chain operations:

```go
// Create a chain
chain, err := client.Perception.CreateChain("space-123", perception.CreateChainRequest{
    Chain: perception.ChainRequestData{
        Name: "Processing Chain",
    },
})

// Get a chain
chain, err := client.Perception.GetChain("chain-123")

// Update a chain
chain, err := client.Perception.UpdateChain("chain-123", perception.UpdateChainRequest{
    Chain: perception.UpdateChainData{
        Name: "Updated Processing Chain",
    },
})

// Delete a chain
err := client.Perception.DeleteChain("chain-123")
```

Thought operations:

```go
// Create a thought
thought, err := client.Perception.CreateThought("chain-123", perception.CreateThoughtRequest{
    Thought: perception.ThoughtRequestData{
        Relation:      "description",
        OutputClassID: "class-123",
        Module: perception.Module{
            Reference: "tama/agentic/generate",
            Parameters: map[string]any{
                "temperature": 0.7,
                "max_tokens":  150,
                "model":       "gpt-4",
            },
        },
    },
})

// Get a thought
thought, err := client.Perception.GetThought("thought-123")

// Update a thought
thought, err := client.Perception.UpdateThought("thought-123", perception.UpdateThoughtRequest{
    Thought: perception.UpdateThoughtData{
        Relation:      "analysis",
        OutputClassID: "class-456",
        Module: perception.Module{
            Reference: "tama/agentic/analyze",
            Parameters: map[string]any{
                "depth":       3,
                "focus_areas": []string{"sentiment", "intent", "entities"},
            },
        },
    },
})

// Delete a thought
err := client.Perception.DeleteThought("thought-123")
```

Path operations:

```go
// Create a path
path, err := client.Perception.CreatePath("thought-123", perception.CreatePathRequest{
    Path: perception.PathRequestData{
        TargetClassID: "class-456",
        Parameters: map[string]any{
            "threshold":     0.8,
            "max_results":   10,
            "output_format": "json",
        },
    },
})

// Get a path
path, err := client.Perception.GetPath("path-123")

// Update a path
path, err := client.Perception.UpdatePath("path-123", perception.UpdatePathRequest{
    Path: perception.UpdatePathData{
        TargetClassID: "class-789",
        Parameters: map[string]any{
            "threshold":   0.9,
            "max_results": 5,
        },
    },
})

// Replace a path
path, err := client.Perception.ReplacePath("path-123", perception.UpdatePathRequest{
    Path: perception.UpdatePathData{
        TargetClassID: "class-101",
        Parameters: map[string]any{
            "mode":       "strict",
            "validation": true,
        },
    },
})

// Delete a path
err := client.Perception.DeletePath("path-123")
```

Context operations:

```go
// Create a context
context, err := client.Perception.CreateContext("thought-123", perception.CreateContextRequest{
    Context: perception.ContextRequestData{
        PromptID: "prompt-456",
        Layer:    2,
    },
})

// Get a context
context, err := client.Perception.GetContext("context-123")

// Update a context
context, err := client.Perception.UpdateContext("context-123", perception.UpdateContextRequest{
    Context: perception.UpdateContextData{
        PromptID: "prompt-789",
        Layer:    5,
    },
})

// Replace a context
context, err := client.Perception.ReplaceContext("context-123", perception.UpdateContextRequest{
    Context: perception.UpdateContextData{
        PromptID: "prompt-101",
        Layer:    1,
    },
})

// Delete a context
err := client.Perception.DeleteContext("context-123")
```

Thoughts contain module configurations that define AI processing operations:
Generation module (`tama/agentic/generate`):

```go
Module: perception.Module{
    Reference: "tama/agentic/generate",
    Parameters: map[string]any{
        "temperature": 0.7,
        "max_tokens":  150,
        "model":       "gpt-4",
        "prompt":      "Generate a summary of the input text",
    },
}
```

Analysis module (`tama/agentic/analyze`):

```go
Module: perception.Module{
    Reference: "tama/agentic/analyze",
    Parameters: map[string]any{
        "depth":         3,
        "focus_areas":   []string{"sentiment", "intent", "entities"},
        "output_format": "structured",
    },
}
```

Preprocessing module (`tama/agentic/preprocess`):

```go
Module: perception.Module{
    Reference: "tama/agentic/preprocess",
    Parameters: map[string]any{
        "clean_text":  true,
        "normalize":   true,
        "remove_html": true,
    },
}
```

Validation module (`tama/agentic/validate`):

```go
Module: perception.Module{
    Reference: "tama/agentic/validate",
    Parameters: map[string]any{
        "strict_mode":     false,
        "schema_version":  "v2",
        "required_fields": []string{"input", "output"},
    },
}
```

Client configuration:

```go
config := tama.Config{
    BaseURL: "https://api.tama.io", // Required: API base URL
    APIKey:  "your-api-key",        // Required: API authentication key
    Timeout: 30 * time.Second,      // Optional: Request timeout (default: 30s)
}

client, err := tama.NewClient(config)
```

The client supports API key authentication. Set your API key in the config:
```go
client.SetAPIKey("your-new-api-key")
```

Enable debug mode to see HTTP request/response details:
```go
client.SetDebug(true)
```

The client provides structured error handling with service-specific error types:
Neural errors:

```go
import "github.com/upmaru/tama-go/neural"

space, err := client.Neural.GetSpace("invalid-id")
if err != nil {
    if apiErr, ok := err.(*neural.Error); ok {
        fmt.Printf("Neural API Error %d\n", apiErr.StatusCode)
    } else {
        fmt.Printf("Client Error: %v\n", err)
    }
}
```

Sensory errors:

```go
import "github.com/upmaru/tama-go/sensory"

source, err := client.Sensory.GetSource("invalid-id")
if err != nil {
    if apiErr, ok := err.(*sensory.Error); ok {
        fmt.Printf("Sensory API Error %d\n", apiErr.StatusCode)
    } else {
        fmt.Printf("Client Error: %v\n", err)
    }
}
```

Perception errors:

```go
import "github.com/upmaru/tama-go/perception"

chain, err := client.Perception.GetChain("invalid-id")
if err != nil {
    if apiErr, ok := err.(*perception.Error); ok {
        fmt.Printf("Perception API Error %d\n", apiErr.StatusCode)
    } else {
        fmt.Printf("Client Error: %v\n", err)
    }
}
```

- neural.Space: Neural space resource with configuration, type, and current state
- neural.Processor: Neural processor resource with type-specific configuration
- neural.Class: Neural class resource with schema definition and metadata
- neural.Corpus: Neural corpus resource with main flag, name, template, and state
- neural.Bridge: Neural bridge resource connecting two spaces with target space ID and state
- neural.CreateSpaceRequest: For creating new spaces
- neural.UpdateSpaceRequest: For updating existing spaces
- neural.CreateProcessorRequest: For creating new processors
- neural.UpdateProcessorRequest: For updating existing processors
- neural.CreateClassRequest: For creating new classes
- neural.UpdateClassRequest: For updating existing classes
- neural.CreateCorpusRequest: For creating new corpora
- neural.UpdateCorpusRequest: For updating existing corpora
- neural.CreateBridgeRequest: For creating new bridges
- neural.UpdateBridgeRequest: For updating existing bridges
- neural.SpaceRequestData: Space data in create requests
- neural.UpdateSpaceData: Space data in update requests
- neural.ProcessorRequestData: Processor data in create requests
- neural.UpdateProcessorData: Processor data in update requests
- neural.ClassRequestData: Class data in create requests
- neural.UpdateClassData: Class data in update requests
- neural.CorpusRequestData: Corpus data in create requests
- neural.UpdateCorpusData: Corpus data in update requests
- neural.BridgeRequestData: Bridge data in create requests
- neural.UpdateBridgeData: Bridge data in update requests
- neural.SpaceResponse: API response wrapper for space operations
- neural.ProcessorResponse: API response wrapper for processor operations
- neural.ClassResponse: API response wrapper for class operations
- neural.CorpusResponse: API response wrapper for corpus operations
- neural.BridgeResponse: API response wrapper for bridge operations
- neural.Error: Neural service specific error type
- sensory.Source: Sensory data source with type and connection details
- sensory.Model: Machine learning model with identifier, path, and configurable parameters
- sensory.Limit: Resource limits with counts, scale units, current state, and source association
- sensory.CreateSourceRequest: For creating new sources
- sensory.UpdateSourceRequest: For updating existing sources
- sensory.CreateModelRequest: For creating new models
- sensory.UpdateModelRequest: For updating existing models
- sensory.CreateLimitRequest: For creating new limits
- sensory.UpdateLimitRequest: For updating existing limits
- sensory.Error: Sensory service specific error type
- perception.Chain: Perception chain resource with name, slug, and current state
- perception.Thought: Thought resource with module configuration, relation, and index
- perception.Path: Path resource with target class ID, parameters, and current state
- perception.Context: Context resource with prompt ID, layer, and current state
- perception.Module: Module configuration with reference and parameters
- perception.CreateChainRequest: For creating new chains
- perception.UpdateChainRequest: For updating existing chains
- perception.CreateThoughtRequest: For creating new thoughts
- perception.UpdateThoughtRequest: For updating existing thoughts
- perception.CreatePathRequest: For creating new paths
- perception.UpdatePathRequest: For updating existing paths
- perception.CreateContextRequest: For creating new contexts
- perception.UpdateContextRequest: For updating existing contexts
- perception.ChainRequestData: Chain data in create requests
- perception.UpdateChainData: Chain data in update requests
- perception.ThoughtRequestData: Thought data in create requests (includes optional OutputClassID)
- perception.UpdateThoughtData: Thought data in update requests (includes optional OutputClassID)
- perception.PathRequestData: Path data in create requests
- perception.UpdatePathData: Path data in update requests
- perception.ContextRequestData: Context data in create requests
- perception.UpdateContextData: Context data in update requests
- perception.ChainResponse: API response wrapper for chain operations
- perception.ThoughtResponse: API response wrapper for thought operations
- perception.PathResponse: API response wrapper for path operations
- perception.ContextResponse: API response wrapper for context operations
- perception.Error: Perception service specific error type
See the `example/main.go` file for a complete working example demonstrating all client features.
- Go 1.23 or later
- Active Tama API credentials
- go-resty/resty - HTTP client library
Run the test suite:
```bash
go test -v
```

Run integration tests (requires API credentials):
```bash
export TAMA_BASE_URL="https://api.tama.io"
export TAMA_API_KEY="your-api-key"
go test -tags=integration -v
```

This project is licensed under the MIT License.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
For issues and questions:
- Create an issue on GitHub
- Check the API documentation
- Review the examples in this repository