Tama Go Client Library

A Go client library for the Tama API, providing easy access to the Neural, Sensory, and Perception provisioning endpoints.

Installation

go get github.com/upmaru/tama-go

Authentication

The Tama Go client uses the OAuth2 client credentials flow for authentication. You'll need:

  • Client ID: Your OAuth2 client identifier
  • Client Secret: Your OAuth2 client secret

The client automatically handles token acquisition and refresh using the /auth/tokens endpoint with:

  • Grant type: client_credentials
  • Scope: provision.all
  • Authentication: HTTP Basic Auth with a base64-encoded client_id:client_secret
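
Under the hood this is a standard OAuth2 client credentials exchange. The sketch below only illustrates the kind of request the library issues on your behalf; the form-encoded body and the printed token payload are assumptions based on common OAuth2 conventions, and you normally never need to write this yourself:

package main

import (
    "fmt"
    "io"
    "net/http"
    "net/url"
    "strings"
)

func main() {
    // Grant request parameters (body encoding assumed; the client handles this internally).
    form := url.Values{}
    form.Set("grant_type", "client_credentials")
    form.Set("scope", "provision.all")

    req, err := http.NewRequest(http.MethodPost,
        "https://api.tama.io/auth/tokens", strings.NewReader(form.Encode()))
    if err != nil {
        panic(err)
    }
    // HTTP Basic Auth sends base64(client_id:client_secret).
    req.SetBasicAuth("your-client-id", "your-client-secret")
    req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(body)) // token payload, e.g. access token and expiry
}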

Testing

For testing purposes, you can skip OAuth2 token fetching by setting SkipTokenFetch: true in the config:

config := tama.Config{
    BaseURL:        "https://api.tama.io",
    ClientID:       "test-client-id",
    ClientSecret:   "test-client-secret",
    SkipTokenFetch: true, // Skip token fetching for tests
}

This prevents the client from making actual HTTP requests to obtain tokens during initialization, which is useful for unit tests with mock servers.
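
For example, a unit test can point the client at a local httptest server. The JSON envelope returned by the handler below is only a guess at the API's response shape, so adjust it to match the real payloads (or to the neural.SpaceResponse type) in your own tests:

package tama_test

import (
    "net/http"
    "net/http/httptest"
    "testing"

    tama "github.com/upmaru/tama-go"
)

func TestGetSpaceAgainstMockServer(t *testing.T) {
    // Hypothetical response body; align it with the real API envelope.
    server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        _, _ = w.Write([]byte(`{"data":{"id":"space-123","name":"Test Space","type":"root","provision_state":"active"}}`))
    }))
    defer server.Close()

    client, err := tama.NewClient(tama.Config{
        BaseURL:        server.URL,
        ClientID:       "test-client-id",
        ClientSecret:   "test-client-secret",
        SkipTokenFetch: true, // no token request is made during initialization
    })
    if err != nil {
        t.Fatalf("NewClient: %v", err)
    }

    space, err := client.Neural.GetSpace("space-123")
    if err != nil {
        t.Fatalf("GetSpace: %v", err)
    }
    if space.ID != "space-123" {
        t.Errorf("unexpected space ID: %q", space.ID)
    }
}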

Quick Start

package main

import (
    "fmt"
    "time"
    tama "github.com/upmaru/tama-go"
    "github.com/upmaru/tama-go/neural"
    "github.com/upmaru/tama-go/perception"
    "github.com/upmaru/tama-go/sensory"
)

func main() {
    // Initialize the client with OAuth2 credentials
    config := tama.Config{
        BaseURL:      "https://api.tama.io",
        ClientID:     "your-client-id",
        ClientSecret: "your-client-secret",
        Timeout:      30 * time.Second,
    }
    
    client, err := tama.NewClient(config)
    if err != nil {
        panic(err)
    }
    
    // Create a neural space
    space, err := client.Neural.CreateSpace(neural.CreateSpaceRequest{
        Space: neural.SpaceRequestData{
            Name: "My Neural Space",
            Type: "root",
        },
    })
    if err != nil {
        panic(err)
    }
    
    fmt.Printf("Created space: ID=%s, Name=%s, Type=%s, State=%s\n", 
        space.ID, space.Name, space.Type, space.ProvisionState)
    
    // Create a source in the space
    source, err := client.Sensory.CreateSource(space.ID, sensory.CreateSourceRequest{
        Source: sensory.SourceRequestData{
            Name:     "AI Model Source",
            Type:     "model",
            Endpoint: "https://api.example.com/v1",
            Credential: sensory.SourceCredential{
                APIKey: "source-api-key",
            },
        },
    })
    if err != nil {
        panic(err)
    }
    
    fmt.Printf("Created source: ID=%s, Name=%s, Endpoint=%s, SpaceID=%s, State=%s\n", 
        source.ID, source.Name, source.Endpoint, source.SpaceID, source.ProvisionState)
    
    // Create a limit for the source
    limit, err := client.Sensory.CreateLimit(source.ID, sensory.CreateLimitRequest{
        Limit: sensory.LimitRequestData{
            ScaleUnit:  "minutes",
            ScaleCount: 1,
            Count:      100,
        },
    })
    if err != nil {
        panic(err)
    }
    
    fmt.Printf("Created limit: ID=%s, SourceID=%s, Count=%d, State=%s\n", 
        limit.ID, limit.SourceID, limit.Count, limit.ProvisionState)
    
    // Create a perception chain
    chain, err := client.Perception.CreateChain(space.ID, perception.CreateChainRequest{
        Chain: perception.ChainRequestData{
            Name: "AI Processing Chain",
        },
    })
    if err != nil {
        panic(err)
    }
    
    fmt.Printf("Created chain: ID=%s, Name=%s, SpaceID=%s, State=%s\n", 
        chain.ID, chain.Name, chain.SpaceID, chain.ProvisionState)
    
    // Create a thought in the chain
    thought, err := client.Perception.CreateThought(chain.ID, perception.CreateThoughtRequest{
        Thought: perception.ThoughtRequestData{
            Relation:      "description",
            OutputClassID: "class-123",
            Module: perception.Module{
                Reference: "tama/agentic/generate",
                Parameters: map[string]any{
                    "temperature": 0.7,
                    "max_tokens":  150,
                    "model":       "gpt-4",
                },
            },
        },
    })
    if err != nil {
        panic(err)
    }
    
    fmt.Printf("Created thought: ID=%s, ChainID=%s, Relation=%s, State=%s\n", 
        thought.ID, thought.ChainID, thought.Relation, thought.ProvisionState)
}

Project Structure

The client library is organized into the following packages:

Main Package

  • client.go - Main client configuration and initialization
  • neural.go - Neural service wrapper that uses the neural package
  • sensory.go - Sensory service wrapper that uses the sensory package
  • perception.go - Perception service wrapper that uses the perception package
  • types.go - Shared types and documentation

Neural Package (neural/)

  • service.go - Service definition and neural-related types
  • space.go - Space operations (GET, POST, PATCH, PUT, DELETE)
  • processor.go - Processor operations (GET, POST, PATCH, PUT, DELETE)
  • class.go - Class operations (GET, POST, PATCH, PUT, DELETE)
  • corpus.go - Corpus operations (GET, POST, PATCH, PUT, DELETE)
  • bridge.go - Bridge operations (GET, POST, PATCH, PUT, DELETE)

Sensory Package (sensory/)

  • service.go - Service definition and sensory-related types
  • source.go - Source operations (GET, POST, PATCH, PUT, DELETE)
  • model.go - Model operations (GET, POST, PATCH, PUT, DELETE)
  • limit.go - Limit operations (GET, POST, PATCH, PUT, DELETE)

Perception Package (perception/)

  • service.go - Service definition and perception-related types
  • chain.go - Chain operations (GET, POST, PATCH, PUT, DELETE)
  • thought.go - Thought operations (GET, POST, PATCH, DELETE)
  • path.go - Path operations (GET, POST, PATCH, PUT, DELETE)
  • context.go - Context operations (GET, POST, PATCH, PUT, DELETE)

Examples

  • example/ - Working examples demonstrating all features

This modular structure separates concerns into different packages, making the codebase easier to navigate, maintain, and extend. Each service package encapsulates its related functionality with its own types and operations.

Documentation

Detailed API documentation is available in the docs/ directory. For a complete overview, see the documentation index.

API Coverage

The client provides comprehensive coverage of the Tama API endpoints, organized by resource type:

Neural Resources (/provision/neural)

Spaces

  • GET /provision/neural/spaces/:id - Get space by ID
  • POST /provision/neural/spaces - Create new space
  • PATCH /provision/neural/spaces/:id - Update space
  • PUT /provision/neural/spaces/:id - Replace space
  • DELETE /provision/neural/spaces/:id - Delete space

Processors

  • GET /provision/neural/spaces/:space_id/models/:model_id/processor - Get processor
  • POST /provision/neural/spaces/:space_id/models/:model_id/processor - Create processor
  • PATCH /provision/neural/spaces/:space_id/models/:model_id/processor - Update processor
  • PUT /provision/neural/spaces/:space_id/models/:model_id/processor - Replace processor
  • DELETE /provision/neural/spaces/:space_id/models/:model_id/processor - Delete processor

Classes

  • GET /provision/neural/classes/:id - Get class by ID
  • POST /provision/neural/spaces/:space_id/classes - Create class in space
  • PATCH /provision/neural/classes/:id - Update class
  • PUT /provision/neural/classes/:id - Replace class
  • DELETE /provision/neural/classes/:id - Delete class

Corpora

  • GET /provision/neural/corpora/:id - Get corpus by ID
  • POST /provision/neural/classes/:class_id/corpora - Create corpus in class
  • PATCH /provision/neural/corpora/:id - Update corpus
  • PUT /provision/neural/corpora/:id - Replace corpus
  • DELETE /provision/neural/corpora/:id - Delete corpus

Bridges

  • GET /provision/neural/bridges/:id - Get bridge by ID
  • POST /provision/neural/spaces/:space_id/bridges - Create bridge in space
  • PATCH /provision/neural/bridges/:id - Update bridge
  • PUT /provision/neural/bridges/:id - Replace bridge
  • DELETE /provision/neural/bridges/:id - Delete bridge

Sensory Resources (/provision/sensory)

Sources

  • GET /provision/sensory/sources/:id - Get source by ID
  • POST /provision/sensory/spaces/:space_id/sources - Create source in space
  • PATCH /provision/sensory/sources/:id - Update source
  • PUT /provision/sensory/sources/:id - Replace source
  • DELETE /provision/sensory/sources/:id - Delete source

Models

  • GET /provision/sensory/models/:id - Get model by ID
  • POST /provision/sensory/sources/:source_id/models - Create model for source
  • PATCH /provision/sensory/models/:id - Update model
  • PUT /provision/sensory/models/:id - Replace model
  • DELETE /provision/sensory/models/:id - Delete model

Limits

  • GET /provision/sensory/limits/:id - Get limit by ID
  • POST /provision/sensory/sources/:source_id/limits - Create limit for source
  • PATCH /provision/sensory/limits/:id - Update limit
  • PUT /provision/sensory/limits/:id - Replace limit
  • DELETE /provision/sensory/limits/:id - Delete limit

Note: Limits are associated with sources via the source_id field and track resource usage counts with current state.

Perception Resources (/provision/perception)

Chains

  • GET /provision/perception/chains/:id - Get chain by ID
  • POST /provision/perception/spaces/:space_id/chains - Create chain in space
  • PATCH /provision/perception/chains/:id - Update chain
  • PUT /provision/perception/chains/:id - Replace chain
  • DELETE /provision/perception/chains/:id - Delete chain

Thoughts

  • GET /provision/perception/thoughts/:id - Get thought by ID
  • POST /provision/perception/chains/:chain_id/thoughts - Create thought in chain
  • PATCH /provision/perception/thoughts/:id - Update thought
  • DELETE /provision/perception/thoughts/:id - Delete thought

Note: Thoughts are associated with chains and contain module configurations for AI processing operations.

Paths

  • GET /provision/perception/paths/:id - Get path by ID
  • POST /provision/perception/thoughts/:thought_id/paths - Create path in thought
  • PATCH /provision/perception/paths/:id - Update path
  • PUT /provision/perception/paths/:id - Replace path
  • DELETE /provision/perception/paths/:id - Delete path

Note: Paths are associated with thoughts and define target classes with configurable parameters.

Contexts

  • GET /provision/perception/contexts/:id - Get context by ID
  • POST /provision/perception/thoughts/:thought_id/contexts - Create context in thought
  • PATCH /provision/perception/contexts/:id - Update context
  • PUT /provision/perception/contexts/:id - Replace context
  • DELETE /provision/perception/contexts/:id - Delete context

Note: Contexts are associated with thoughts and contain prompt IDs with layer information for neural processing operations.

Usage Examples

Neural Service - Spaces

import "github.com/upmaru/tama-go/neural"

// Create a space
space, err := client.Neural.CreateSpace(neural.CreateSpaceRequest{
    Space: neural.SpaceRequestData{
        Name: "Production Space",
        Type: "root",
    },
})
// space will have ID, Name, Slug, Type, and ProvisionState populated

// Get a space
space, err := client.Neural.GetSpace("space-123")

// Update a space (partial update)
space, err := client.Neural.UpdateSpace("space-123", neural.UpdateSpaceRequest{
    Space: neural.UpdateSpaceData{
        Name: "Updated Production Space",
        Type: "component",
    },
})
// ProvisionState cannot be updated via API - it's managed server-side

// Replace a space (full replacement)
space, err := client.Neural.ReplaceSpace("space-123", neural.UpdateSpaceRequest{
    Space: neural.UpdateSpaceData{
        Name: "New Production Space",
        Type: "root",
    },
})

// Delete a space
err := client.Neural.DeleteSpace("space-123")

Neural Service - Processors

import "github.com/upmaru/tama-go/neural"

// Create a processor
processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "completion",
        Configuration: map[string]any{
            "temperature":  0.8,
            "tool_choice": "required",
            "role_mappings": []map[string]any{
                {"from": "user", "to": "human"},
                {"from": "assistant", "to": "ai"},
            },
        },
    },
})
// processor will have ID, SpaceID, ModelID, Type, Configuration, and ProvisionState populated

// Get a processor
processor, err := client.Neural.GetProcessor("space-123", "model-123")

// Update a processor (partial update)
processor, err := client.Neural.UpdateProcessor("space-123", "model-123", neural.UpdateProcessorRequest{
    Processor: neural.UpdateProcessorData{
        Type: "embedding",
        Configuration: map[string]any{
            "max_tokens": 512,
            "templates": []map[string]any{
                {"type": "query", "content": "Query: {text}"},
            },
        },
    },
})
// ProvisionState cannot be updated via API - it's managed server-side

// Replace a processor (full replacement)
processor, err := client.Neural.ReplaceProcessor("space-123", "model-123", neural.UpdateProcessorRequest{
    Processor: neural.UpdateProcessorData{
        Type: "reranking",
        Configuration: map[string]any{
            "top_n": 3,
        },
    },
})

// Delete a processor
err := client.Neural.DeleteProcessor("space-123", "model-123")

Neural Service - Classes

import "github.com/upmaru/tama-go/neural"

// Create a class
class, err := client.Neural.CreateClass("space-123", neural.CreateClassRequest{
    Class: neural.ClassRequestData{
        Schema: map[string]any{
            "title":       "user-profile",
            "description": "User profile information",
            "type":        "object",
            "properties": map[string]any{
                "name": map[string]any{
                    "type":        "string",
                    "description": "User's full name",
                },
                "email": map[string]any{
                    "type":        "string",
                    "description": "User's email address",
                },
                "age": map[string]any{
                    "type":        "integer",
                    "description": "User's age",
                },
            },
            "required": []string{"name", "email"},
        },
    },
})

// Get a class
class, err := client.Neural.GetClass("class-123")

// Update a class (partial update)
class, err := client.Neural.UpdateClass("class-123", neural.UpdateClassRequest{
    Class: neural.UpdateClassData{
        Schema: map[string]any{
            "title": "updated-user-profile",
            "properties": map[string]any{
                "name": map[string]any{
                    "type": "string",
                },
                "phone": map[string]any{
                    "type": "string",
                },
            },
        },
    },
})

// Replace a class (full replacement)
class, err := client.Neural.ReplaceClass("class-123", neural.UpdateClassRequest{
    Class: neural.UpdateClassData{
        Schema: map[string]any{
            "title": "new-user-profile",
            "type":  "object",
            "properties": map[string]any{
                "username": map[string]any{
                    "type": "string",
                },
            },
        },
    },
})

// Delete a class
err := client.Neural.DeleteClass("class-123")

Neural Service - Corpora

import "github.com/upmaru/tama-go/neural"

// Create a corpus
corpus, err := client.Neural.CreateCorpus("class-123", neural.CreateCorpusRequest{
    Corpus: neural.CorpusRequestData{
        Main:     true,
        Name:     "Primary Training Corpus",
        Template: "training-template-v1",
    },
})

// Get a corpus
corpus, err := client.Neural.GetCorpus("corpus-123")

// Update a corpus (partial update)
main := false
corpus, err := client.Neural.UpdateCorpus("corpus-123", neural.UpdateCorpusRequest{
    Corpus: neural.UpdateCorpusData{
        Main:     &main,
        Name:     "Updated Training Corpus",
        Template: "training-template-v2",
    },
})

// Replace a corpus (full replacement)
mainFlag := true
corpus, err := client.Neural.ReplaceCorpus("corpus-123", neural.UpdateCorpusRequest{
    Corpus: neural.UpdateCorpusData{
        Main:     &mainFlag,
        Name:     "New Training Corpus",
        Template: "training-template-v3",
    },
})

// Delete a corpus
err := client.Neural.DeleteCorpus("corpus-123")

Neural Service - Bridges

import "github.com/upmaru/tama-go/neural"

// Create a bridge
bridge, err := client.Neural.CreateBridge("space-123", neural.CreateBridgeRequest{
    Bridge: neural.BridgeRequestData{
        TargetSpaceID: "space-456",
    },
})

// Get a bridge
bridge, err := client.Neural.GetBridge("bridge-123")

// Update a bridge (partial update)
bridge, err := client.Neural.UpdateBridge("bridge-123", neural.UpdateBridgeRequest{
    Bridge: neural.UpdateBridgeData{
        TargetSpaceID: "space-789",
    },
})

// Replace a bridge (full replacement)
bridge, err := client.Neural.ReplaceBridge("bridge-123", neural.UpdateBridgeRequest{
    Bridge: neural.UpdateBridgeData{
        TargetSpaceID: "space-999",
    },
})

// Delete a bridge
err := client.Neural.DeleteBridge("bridge-123")

Bridge Fields

  • ID (string): Unique identifier for the bridge (read-only)
  • SpaceID (string): ID of the source space (read-only, set from creation endpoint)
  • TargetSpaceID (string): ID of the target space that this bridge connects to (required)
  • ProvisionState (string): Current provisioning status (read-only)

Corpus Fields

  • Main (boolean): Indicates if this is the primary corpus for the class
  • Name (string): Human-readable name for the corpus
  • Template (string): Template identifier used for processing the corpus data
  • Slug (string): Auto-generated URL-friendly identifier (read-only)
  • ProvisionState (string): Current provisioning status (read-only)

Processor Types and Configuration

Processors support three types: "completion", "embedding", and "reranking". Each type has its own configuration schema:

Completion Type

For text completion and chat completion tasks:

processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "completion",
        Configuration: map[string]any{
            "temperature":  0.8,  // decimal, default: 0.8
            "tool_choice": "required", // enum: "required", "auto", "any", default: "required"
            "role_mappings": []map[string]any{
                {
                    "from": "user",
                    "to":   "human",
                },
                {
                    "from": "assistant", 
                    "to":   "ai",
                },
            },
        },
    },
})

Embedding Type

For text embedding and vector generation:

processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "embedding",
        Configuration: map[string]any{
            "max_tokens": 512, // integer, default: 512
            "templates": []map[string]any{
                {
                    "type":    "query",
                    "content": "Query: {text}",
                },
                {
                    "type":    "document",
                    "content": "Document: {text}",
                },
            },
        },
    },
})

Reranking Type

For document reranking and relevance scoring:

processor, err := client.Neural.CreateProcessor("space-123", "model-123", neural.CreateProcessorRequest{
    Processor: neural.ProcessorRequestData{
        Type: "reranking",
        Configuration: map[string]any{
            "top_n": 3, // integer, default: 3
        },
    },
})

Configuration Field Details

Completion Configuration:

  • temperature (decimal): Controls randomness in generation, default: 0.8
  • tool_choice (string): Tool selection strategy - "required", "auto", or "any", default: "required"
  • role_mappings (array): Maps input roles to model-specific roles
    • from (string): Input role name
    • to (string): Model role name

Embedding Configuration:

  • max_tokens (integer): Maximum tokens to process, default: 512
  • templates (array): Text templates for different embedding types
    • type (string): Template type - "query" or "document"
    • content (string): Template string with {text} placeholder

Reranking Configuration:

  • top_n (integer): Number of top results to return, default: 3

Sensory Service - Sources

import "github.com/upmaru/tama-go/sensory"

// Create a source in a space
source, err := client.Sensory.CreateSource("space-123", sensory.CreateSourceRequest{
    Source: sensory.SourceRequestData{
        Name: "Mistral Source",
        Type: "model",
        Endpoint: "https://api.mistral.ai/v1",
        Credential: sensory.SourceCredential{
            APIKey: "your-api-key",
        },
    },
})

// Get a source
source, err := client.Sensory.GetSource("source-123")

// Update a source
source, err := client.Sensory.UpdateSource("source-123", sensory.UpdateSourceRequest{
    Source: sensory.UpdateSourceData{
        Name: "Updated Mistral Source",
        Endpoint: "https://api.mistral.ai/v2",
        Credential: &sensory.SourceCredential{
            APIKey: "your-updated-api-key",
        },
    },
})

// Delete a source
err := client.Sensory.DeleteSource("source-123")

Sensory Service - Models

import "github.com/upmaru/tama-go/sensory"

// Create a model for a source
model, err := client.Sensory.CreateModel("source-123", sensory.CreateModelRequest{
    Model: sensory.ModelRequestData{
        Identifier: "mistral-small-latest",
        Path:       "/chat/completions",
        Parameters: map[string]any{
            "reasoning_effort": "low",
            "temperature":      1.0,
            "max_tokens":       2000,
            "stream":           true,
            "stop":             []string{"\n", "###"},
            "config": map[string]any{
                "timeout":      30,
                "enable_cache": true,
            },
        },
    },
})

// Get a model
model, err := client.Sensory.GetModel("model-123")

// Update a model
model, err := client.Sensory.UpdateModel("model-123", sensory.UpdateModelRequest{
    Model: sensory.UpdateModelData{
        Identifier: "mistral-large-latest",
        Path:       "/chat/completions",
        Parameters: map[string]any{
            "temperature": 0.8,
            "max_tokens":  1500,
        },
    },
})

// Delete a model
err := client.Sensory.DeleteModel("model-123")

Model Parameters

The Parameters field in models accepts any valid JSON values, allowing flexible configuration:

Parameters: map[string]any{
    // String values
    "reasoning_effort": "low",
    
    // Numeric values
    "temperature":       0.8,
    "max_tokens":        1500,
    "frequency_penalty": 0.1,
    
    // Boolean values
    "stream": true,
    
    // Array values
    "stop": []string{"\n", "###", "END"},
    
    // Object values
    "config": map[string]any{
        "timeout":      30,
        "enable_cache": true,
        "retries":      3,
    },
}

Sensory Service - Limits

import "github.com/upmaru/tama-go/sensory"

// Create a limit for a source
limit, err := client.Sensory.CreateLimit("source-123", sensory.CreateLimitRequest{
    Limit: sensory.LimitRequestData{
        ScaleUnit:  "seconds",
        ScaleCount: 1,
        Count:      32,
    },
})

// Get a limit
limit, err := client.Sensory.GetLimit("limit-123")

// Update a limit
limit, err := client.Sensory.UpdateLimit("limit-123", sensory.UpdateLimitRequest{
    Limit: sensory.UpdateLimitData{
        ScaleUnit:      "minutes",
        ScaleCount:     5,
        Count:          100,
        ProvisionState: "active",
    },
})

// Delete a limit
err := client.Sensory.DeleteLimit("limit-123")

Perception Service - Chains

import "github.com/upmaru/tama-go/perception"

// Create a chain
chain, err := client.Perception.CreateChain("space-123", perception.CreateChainRequest{
    Chain: perception.ChainRequestData{
        Name: "Processing Chain",
    },
})

// Get a chain
chain, err := client.Perception.GetChain("chain-123")

// Update a chain
chain, err := client.Perception.UpdateChain("chain-123", perception.UpdateChainRequest{
    Chain: perception.UpdateChainData{
        Name: "Updated Processing Chain",
    },
})

// Delete a chain
err := client.Perception.DeleteChain("chain-123")

Perception Service - Thoughts

import "github.com/upmaru/tama-go/perception"

// Create a thought
thought, err := client.Perception.CreateThought("chain-123", perception.CreateThoughtRequest{
    Thought: perception.ThoughtRequestData{
        Relation:      "description",
        OutputClassID: "class-123",
        Module: perception.Module{
            Reference: "tama/agentic/generate",
            Parameters: map[string]any{
                "temperature": 0.7,
                "max_tokens":  150,
                "model":       "gpt-4",
            },
        },
    },
})

// Get a thought
thought, err := client.Perception.GetThought("thought-123")

// Update a thought
thought, err := client.Perception.UpdateThought("thought-123", perception.UpdateThoughtRequest{
    Thought: perception.UpdateThoughtData{
        Relation:      "analysis",
        OutputClassID: "class-456",
        Module: perception.Module{
            Reference: "tama/agentic/analyze",
            Parameters: map[string]any{
                "depth":       3,
                "focus_areas": []string{"sentiment", "intent", "entities"},
            },
        },
    },
})

// Delete a thought
err := client.Perception.DeleteThought("thought-123")

Perception Service - Paths

import "github.com/upmaru/tama-go/perception"

// Create a path
path, err := client.Perception.CreatePath("thought-123", perception.CreatePathRequest{
    Path: perception.PathRequestData{
        TargetClassID: "class-456",
        Parameters: map[string]any{
            "threshold":    0.8,
            "max_results":  10,
            "output_format": "json",
        },
    },
})

// Get a path
path, err := client.Perception.GetPath("path-123")

// Update a path
path, err := client.Perception.UpdatePath("path-123", perception.UpdatePathRequest{
    Path: perception.UpdatePathData{
        TargetClassID: "class-789",
        Parameters: map[string]any{
            "threshold":   0.9,
            "max_results": 5,
        },
    },
})

// Replace a path
path, err := client.Perception.ReplacePath("path-123", perception.UpdatePathRequest{
    Path: perception.UpdatePathData{
        TargetClassID: "class-101",
        Parameters: map[string]any{
            "mode": "strict",
            "validation": true,
        },
    },
})

// Delete a path
err := client.Perception.DeletePath("path-123")

Perception Service - Contexts

import "github.com/upmaru/tama-go/perception"

// Create a context
context, err := client.Perception.CreateContext("thought-123", perception.CreateContextRequest{
    Context: perception.ContextRequestData{
        PromptID: "prompt-456",
        Layer:    2,
    },
})

// Get a context
context, err := client.Perception.GetContext("context-123")

// Update a context
context, err := client.Perception.UpdateContext("context-123", perception.UpdateContextRequest{
    Context: perception.UpdateContextData{
        PromptID: "prompt-789",
        Layer:    5,
    },
})

// Replace a context
context, err := client.Perception.ReplaceContext("context-123", perception.UpdateContextRequest{
    Context: perception.UpdateContextData{
        PromptID: "prompt-101",
        Layer:    1,
    },
})

// Delete a context
err := client.Perception.DeleteContext("context-123")

Thought Module Configuration

Thoughts contain module configurations that define AI processing operations:

Generate Module

Module: perception.Module{
    Reference: "tama/agentic/generate",
    Parameters: map[string]any{
        "temperature": 0.7,
        "max_tokens":  150,
        "model":       "gpt-4",
        "prompt":      "Generate a summary of the input text",
    },
}

Analyze Module

Module: perception.Module{
    Reference: "tama/agentic/analyze",
    Parameters: map[string]any{
        "depth":       3,
        "focus_areas": []string{"sentiment", "intent", "entities"},
        "output_format": "structured",
    },
}

Preprocess Module

Module: perception.Module{
    Reference: "tama/agentic/preprocess",
    Parameters: map[string]any{
        "clean_text":  true,
        "normalize":   true,
        "remove_html": true,
    },
}

Validate Module

Module: perception.Module{
    Reference: "tama/agentic/validate",
    Parameters: map[string]any{
        "strict_mode":    false,
        "schema_version": "v2",
        "required_fields": []string{"input", "output"},
    },
}

Configuration

Client Configuration

config := tama.Config{
    BaseURL:      "https://api.tama.io",  // Required: API base URL
    ClientID:     "your-client-id",       // Required: OAuth2 client identifier
    ClientSecret: "your-client-secret",   // Required: OAuth2 client secret
    Timeout:      30 * time.Second,       // Optional: Request timeout (default: 30s)
}

client, err := tama.NewClient(config)
if err != nil {
    panic(err)
}

Authentication

Authentication uses the OAuth2 client credentials flow described above. If you need to replace the API key at runtime, use:

client.SetAPIKey("your-new-api-key")

Debug Mode

Enable debug mode to see HTTP request/response details:

client.SetDebug(true)

Error Handling

The client provides structured error handling with service-specific error types:

Neural Service Errors

import "github.com/upmaru/tama-go/neural"

space, err := client.Neural.GetSpace("invalid-id")
if err != nil {
    if apiErr, ok := err.(*neural.Error); ok {
        fmt.Printf("Neural API Error %d\n", apiErr.StatusCode)
    } else {
        fmt.Printf("Client Error: %v\n", err)
    }
}

Sensory Service Errors

import "github.com/upmaru/tama-go/sensory"

source, err := client.Sensory.GetSource("invalid-id")
if err != nil {
    if apiErr, ok := err.(*sensory.Error); ok {
        fmt.Printf("Sensory API Error %d\n", apiErr.StatusCode)
    } else {
        fmt.Printf("Client Error: %v\n", err)
    }
}

Perception Service Errors

import "github.com/upmaru/tama-go/perception"

chain, err := client.Perception.GetChain("invalid-id")
if err != nil {
    if apiErr, ok := err.(*perception.Error); ok {
        fmt.Printf("Perception API Error %d\n", apiErr.StatusCode)
    } else {
        fmt.Printf("Client Error: %v\n", err)
    }
}
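
If one code path calls several services, a type switch keeps the handling compact. The sketch below relies only on the StatusCode field shown above; other fields on the service error types are not assumed, and the helper name is arbitrary:

import (
    "fmt"

    "github.com/upmaru/tama-go/neural"
    "github.com/upmaru/tama-go/perception"
    "github.com/upmaru/tama-go/sensory"
)

// describeAPIError distinguishes service API errors from client-side errors.
func describeAPIError(err error) string {
    switch e := err.(type) {
    case *neural.Error:
        return fmt.Sprintf("neural API error (status %d)", e.StatusCode)
    case *sensory.Error:
        return fmt.Sprintf("sensory API error (status %d)", e.StatusCode)
    case *perception.Error:
        return fmt.Sprintf("perception API error (status %d)", e.StatusCode)
    default:
        return fmt.Sprintf("client error: %v", err)
    }
}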

Data Types

Neural Package Types

  • neural.Space: Neural space resource with configuration, type, and current state
  • neural.Processor: Neural processor resource with type-specific configuration
  • neural.Class: Neural class resource with schema definition and metadata
  • neural.Corpus: Neural corpus resource with main flag, name, template, and state
  • neural.Bridge: Neural bridge resource connecting two spaces with target space ID and state
  • neural.CreateSpaceRequest: For creating new spaces
  • neural.UpdateSpaceRequest: For updating existing spaces
  • neural.CreateProcessorRequest: For creating new processors
  • neural.UpdateProcessorRequest: For updating existing processors
  • neural.CreateClassRequest: For creating new classes
  • neural.UpdateClassRequest: For updating existing classes
  • neural.CreateCorpusRequest: For creating new corpora
  • neural.UpdateCorpusRequest: For updating existing corpora
  • neural.CreateBridgeRequest: For creating new bridges
  • neural.UpdateBridgeRequest: For updating existing bridges
  • neural.SpaceRequestData: Space data in create requests
  • neural.UpdateSpaceData: Space data in update requests
  • neural.ProcessorRequestData: Processor data in create requests
  • neural.UpdateProcessorData: Processor data in update requests
  • neural.ClassRequestData: Class data in create requests
  • neural.UpdateClassData: Class data in update requests
  • neural.CorpusRequestData: Corpus data in create requests
  • neural.UpdateCorpusData: Corpus data in update requests
  • neural.BridgeRequestData: Bridge data in create requests
  • neural.UpdateBridgeData: Bridge data in update requests
  • neural.SpaceResponse: API response wrapper for space operations
  • neural.ProcessorResponse: API response wrapper for processor operations
  • neural.ClassResponse: API response wrapper for class operations
  • neural.CorpusResponse: API response wrapper for corpus operations
  • neural.BridgeResponse: API response wrapper for bridge operations
  • neural.Error: Neural service specific error type

Sensory Package Types

  • sensory.Source: Sensory data source with type and connection details
  • sensory.Model: Machine learning model with identifier, path, and configurable parameters
  • sensory.Limit: Resource limits with counts, scale units, current state, and source association
  • sensory.CreateSourceRequest: For creating new sources
  • sensory.UpdateSourceRequest: For updating existing sources
  • sensory.CreateModelRequest: For creating new models
  • sensory.UpdateModelRequest: For updating existing models
  • sensory.CreateLimitRequest: For creating new limits
  • sensory.UpdateLimitRequest: For updating existing limits
  • sensory.Error: Sensory service specific error type

Perception Package Types

  • perception.Chain: Perception chain resource with name, slug, and current state
  • perception.Thought: Thought resource with module configuration, relation, and index
  • perception.Path: Path resource with target class ID, parameters, and current state
  • perception.Context: Context resource with prompt ID, layer, and current state
  • perception.Module: Module configuration with reference and parameters
  • perception.CreateChainRequest: For creating new chains
  • perception.UpdateChainRequest: For updating existing chains
  • perception.CreateThoughtRequest: For creating new thoughts
  • perception.UpdateThoughtRequest: For updating existing thoughts
  • perception.CreatePathRequest: For creating new paths
  • perception.UpdatePathRequest: For updating existing paths
  • perception.CreateContextRequest: For creating new contexts
  • perception.UpdateContextRequest: For updating existing contexts
  • perception.ChainRequestData: Chain data in create requests
  • perception.UpdateChainData: Chain data in update requests
  • perception.ThoughtRequestData: Thought data in create requests (includes optional OutputClassID)
  • perception.UpdateThoughtData: Thought data in update requests (includes optional OutputClassID)
  • perception.PathRequestData: Path data in create requests
  • perception.UpdatePathData: Path data in update requests
  • perception.ContextRequestData: Context data in create requests
  • perception.UpdateContextData: Context data in update requests
  • perception.ChainResponse: API response wrapper for chain operations
  • perception.ThoughtResponse: API response wrapper for thought operations
  • perception.PathResponse: API response wrapper for path operations
  • perception.ContextResponse: API response wrapper for context operations
  • perception.Error: Perception service specific error type

Examples

See the example/main.go file for a complete working example demonstrating all client features.

Requirements

  • Go 1.23 or later
  • Active Tama API credentials

Dependencies

Testing

Run the test suite:

go test -v

Run integration tests (requires API credentials):

export TAMA_BASE_URL="https://api.tama.io"
export TAMA_API_KEY="your-api-key"
go test -tags=integration -v

License

This project is licensed under the MIT License.

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Support

For issues and questions:

  • Create an issue on GitHub
  • Check the API documentation
  • Review the examples in this repository
