Pinecone TypeScript SDK

The official Pinecone TypeScript SDK for building vector search applications with AI/ML.

Pinecone is a vector database that makes it easy to add vector search to production applications. Use Pinecone to store, search, and manage high-dimensional vectors for applications like semantic search, recommendation systems, and RAG (Retrieval-Augmented Generation).

Features

  • Vector Operations: Store, query, and manage high-dimensional vectors with metadata filtering
  • Serverless & Pod Indexes: Choose between serverless (auto-scaling) or pod-based (dedicated) indexes
  • Integrated Inference: Built-in embedding and reranking models for end-to-end search workflows
  • Pinecone Assistant: AI assistants powered by vector database capabilities
  • Type Safety: Full TypeScript support with generic type parameters for metadata (see the sketch after this list)
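
For example, metadata filtering and typed metadata can be used together when querying. The sketch below is illustrative only: the MovieMetadata type and the host placeholder are made up for this example, and the metadata type parameter on index() is assumed from the Type Safety feature above; $eq and $gte are standard Pinecone metadata filter operators.

import { Pinecone } from '@pinecone-database/pinecone';

// Placeholder metadata shape for this sketch (not part of the SDK)
type MovieMetadata = {
  genre: 'drama' | 'action' | 'comedy';
  year: number;
};

const pc = new Pinecone({ apiKey: 'YOUR_API_KEY' });

// The optional type parameter types record metadata on upsert and query results
const index = pc.index<MovieMetadata>({ host: 'YOUR_INDEX_HOST' });

// Query with a metadata filter alongside the query vector
const results = await index.query({
  vector: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8], // length must match the index dimension
  topK: 3,
  includeMetadata: true,
  filter: { genre: { $eq: 'drama' }, year: { $gte: 2020 } },
});

// results.matches[0]?.metadata is typed as MovieMetadata | undefined
console.log(results);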


Documentation

Upgrading the SDK

Note

For notes on changes between major versions, see the migration guides:

Prerequisites

  • The Pinecone TypeScript SDK is compatible with TypeScript >=5.2.0 and Node.js >=20.0.0.
  • Before you can use the Pinecone SDK, you must sign up for an account and find your API key in the Pinecone console dashboard at https://app.pinecone.io.

Note for TypeScript users: This SDK uses Node.js built-in modules in its type definitions. If you're using TypeScript, ensure you have @types/node installed in your project:

npm install --save-dev @types/node

Installation

npm install @pinecone-database/pinecone

Productionizing

The Pinecone TypeScript SDK is intended for server-side use only. Using the SDK within a browser context can expose your API key(s). If you have deployed the SDK to production in a browser, please rotate your API keys.
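
As a minimal sketch of keeping the key server-side, read it from an environment variable rather than hard-coding it or bundling it into browser code (the SDK also picks up PINECONE_API_KEY automatically when constructed with no arguments, as shown in the quickstart below):

import { Pinecone } from '@pinecone-database/pinecone';

// Keep the API key in server-side configuration (e.g. process.env),
// never in code shipped to a browser.
const apiKey = process.env.PINECONE_API_KEY;
if (!apiKey) {
  throw new Error('PINECONE_API_KEY is not set');
}

const pc = new Pinecone({ apiKey });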

Quickstart

Bringing your own vectors to Pinecone

This example shows how to create an index, add vectors with embeddings you've generated, and query them. This approach gives you full control over your embedding model and vector generation process.

import { Pinecone } from '@pinecone-database/pinecone';

// 1. Instantiate the Pinecone client
// Option A: Pass API key directly
const pc = new Pinecone({ apiKey: 'YOUR_API_KEY' });

// Option B: Use environment variable (PINECONE_API_KEY)
// const pc = new Pinecone();

// 2. Create a serverless index
const indexModel = await pc.createIndex({
  name: 'example-index',
  dimension: 1536,
  metric: 'cosine',
  spec: {
    serverless: {
      cloud: 'aws',
      region: 'us-east-1',
    },
  },
});

// 3. Target the index
const index = pc.index({ host: indexModel.host });

// 4. Upsert vectors with metadata
await index.upsert({
  records: [
    {
      id: 'vec1',
      values: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8], // truncated for brevity; length must match the index dimension (1536)
      metadata: { genre: 'drama', year: 2020 },
    },
    {
      id: 'vec2',
      values: [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
      metadata: { genre: 'action', year: 2021 },
    },
  ],
});

// 5. Query the index
const queryResponse = await index.query({
  vector: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8], // ... query vector
  topK: 3,
  includeMetadata: true,
});

console.log(queryResponse);

Using integrated inference

This example demonstrates using Pinecone's integrated inference capabilities. You provide raw text data, and Pinecone handles embedding generation and optional reranking automatically. This is ideal when you want to focus on your data and let Pinecone handle the ML complexity.

import { Pinecone } from '@pinecone-database/pinecone';

// 1. Instantiate the Pinecone client
const pc = new Pinecone({ apiKey: 'YOUR_API_KEY' });

// 2. Create an index configured for use with a particular embedding model
const indexModel = await pc.createIndexForModel({
  name: 'example-index',
  cloud: 'aws',
  region: 'us-east-1',
  embed: {
    model: 'multilingual-e5-large',
    fieldMap: { text: 'chunk_text' },
  },
  waitUntilReady: true,
});

// 3. Target the index
const index = pc.index({ host: indexModel.host });

// 4. Upsert records with raw text data
// Pinecone will automatically generate embeddings using the configured model
await index.upsertRecords({
  records: [
    {
      id: 'rec1',
      chunk_text:
        "Apple's first product, the Apple I, was released in 1976 and was hand-built by co-founder Steve Wozniak.",
      category: 'product',
    },
    {
      id: 'rec2',
      chunk_text:
        'Apples are a great source of dietary fiber, which supports digestion and helps maintain a healthy gut.',
      category: 'nutrition',
    },
    {
      id: 'rec3',
      chunk_text:
        'Apples originated in Central Asia and have been cultivated for thousands of years, with over 7,500 varieties available today.',
      category: 'cultivation',
    },
    {
      id: 'rec4',
      chunk_text:
        'In 2001, Apple released the iPod, which transformed the music industry by making portable music widely accessible.',
      category: 'product',
    },
  ],
});

// 5. Search for similar records using text queries
// Pinecone handles embedding the query and optionally reranking results
const searchResponse = await index.searchRecords({
  query: {
    inputs: { text: 'Apple corporation' },
    topK: 3,
  },
  rerank: {
    model: 'bge-reranker-v2-m3',
    topN: 2,
    rankFields: ['chunk_text'],
  },
});

console.log(searchResponse);

Pinecone Assistant

The Pinecone Assistant API enables you to create and manage AI assistants powered by Pinecone's vector database capabilities. These assistants can be customized with specific instructions and metadata, and can interact with files and engage in chat conversations.

import { Pinecone } from '@pinecone-database/pinecone';
const pc = new Pinecone();

// Create an assistant
const assistant = await pc.createAssistant({
  name: 'product-assistant',
  instructions: 'You are a helpful product recommendation assistant.',
});

// Target the assistant for data operations
const myAssistant = pc.assistant({ name: 'product-assistant' });

// Upload a file
await myAssistant.uploadFile({
  path: 'product-catalog.txt',
  metadata: { source: 'catalog' },
});

// Chat with the assistant
const response = await myAssistant.chat({
  messages: [
    {
      role: 'user',
      content: 'What products do you recommend for outdoor activities?',
    },
  ],
});

console.log(response.message.content);

For more information on Pinecone Assistant, see the Pinecone Assistant documentation.

More information on usage

Detailed information on specific ways of using the SDK is covered in these guides:

Index Management:

Data Operations:

Inference:

Assistant:

TypeScript Features:

Additional Resources:

  • FAQ - Frequently asked questions and troubleshooting

Issues & Bugs

If you notice bugs or have feedback, please file an issue.

You can also get help in the Pinecone Community Forum.

Contributing

If you'd like to make a contribution or get set up locally to develop the Pinecone TypeScript SDK, please see our contributing guide.
