25 changes: 24 additions & 1 deletion .eslintrc.json
@@ -1,3 +1,26 @@
{
    "extends": "../../.eslintrc"
    "env": {
        "node": true,
        "es2021": true,
        "mocha": true
    },
    "extends": [
        "eslint:recommended",
        "plugin:@typescript-eslint/recommended"
    ],
    "parser": "@typescript-eslint/parser",
    "parserOptions": {
        "ecmaVersion": "latest",
        "sourceType": "module",
        "project": ["./tsconfig.json", "./tsconfig.test.json"]
    },
    "plugins": [
        "@typescript-eslint"
    ],
    "rules": {
        "indent": ["error", 4],
        "linebreak-style": ["error", "unix"],
        "quotes": ["error", "double"],
        "semi": ["error", "always"]
    }
}
3 changes: 3 additions & 0 deletions .gitignore
@@ -108,3 +108,6 @@ dist

# TernJS port file
.tern-port

# temp
temp/
41 changes: 39 additions & 2 deletions api-extractor.json
@@ -1,4 +1,41 @@
{
    "extends": "../../api-extractor.json",
    "mainEntryPointFilePath": "./lib/index.d.ts"
    "$schema": "https://developer.microsoft.com/json-schemas/api-extractor/v7/api-extractor.schema.json",
    "mainEntryPointFilePath": "<projectFolder>/lib/index.d.ts",
    "bundledPackages": [],
    "compiler": {
        "tsconfigFilePath": "<projectFolder>/tsconfig.json"
    },
    "dtsRollup": {
        "enabled": true,
        "untrimmedFilePath": "<projectFolder>/lib/index.d.ts"
    },
    "docModel": {
        "enabled": true,
        "apiJsonFilePath": "<projectFolder>/lib/api.json"
    },
    "tsdocMetadata": {
        "enabled": true,
        "tsdocMetadataFilePath": "<projectFolder>/lib/tsdoc-metadata.json"
    },
    "apiReport": {
        "enabled": true,
        "reportFolder": "<projectFolder>/lib/api-report"
    },
    "messages": {
        "compilerMessageReporting": {
            "default": {
                "logLevel": "warning"
            }
        },
        "extractorMessageReporting": {
            "default": {
                "logLevel": "warning"
            }
        },
        "tsdocMessageReporting": {
            "default": {
                "logLevel": "warning"
            }
        }
    }
}
1 change: 1 addition & 0 deletions docs/.nojekyll
@@ -0,0 +1 @@
TypeDoc added this file to prevent GitHub Pages from using Jekyll. You can turn off this behavior by setting the `githubPages` option to false.
102 changes: 102 additions & 0 deletions docs/README.md
@@ -0,0 +1,102 @@
Vectra / [Exports](modules.md)

# Vectra

Vectra is a local vector database for Node.js with features similar to [Pinecone](https://www.pinecone.io/) or [Qdrant](https://qdrant.tech/) but built using local files. Each Vectra index is a folder on disk. There's an `index.json` file in the folder that contains all the vectors for the index along with any indexed metadata. When you create an index you can specify which metadata properties to index and only those fields will be stored in the `index.json` file. All of the other metadata for an item will be stored on disk in a separate file keyed by a GUID.

When querying Vectra you'll be able to use the same subset of [Mongo DB query operators](https://www.mongodb.com/docs/manual/reference/operator/query/) that Pinecone supports, and the results will be returned sorted by similarity. Every item in the index is first filtered by metadata and then ranked for similarity. Even though every item is evaluated, it's all in memory, so queries should be nearly instantaneous: likely 1ms to 2ms for even a rather large index, and smaller indexes should be <1ms.
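For example, a metadata-filtered query might look like the sketch below. This assumes `queryItems` accepts a Mongo-style filter as an optional third argument, and it reuses the `index` and `getVector` helpers from the Usage section below, so check the API reference for the exact signature:

```typescript
// Hypothetical filtered query: items are first narrowed by the
// Mongo-style metadata filter, then ranked by similarity.
const vector = await getVector('fruit');
const results = await index.queryItems(vector, 3, {
    text: { $in: ['apple', 'oranges'] },
});
```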

Keep in mind that your entire Vectra index is loaded into memory, so it's not well suited for scenarios like long-term chat bot memory. Use a real vector DB for that. Vectra is intended for scenarios where you have a small corpus of mostly static data that you'd like to include in your prompt. Infinite few-shot examples would be a great use case for Vectra, or even just a single document you want to ask questions over.

Pinecone style namespaces aren't directly supported but you could easily mimic them by creating a separate Vectra index (and folder) for each namespace.
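A minimal sketch of that pattern, assuming one folder per namespace under a shared parent directory (the `namespaceIndex` helper is hypothetical, not part of the library):

```typescript
import path from 'path';
import { LocalIndex } from 'vectra';

// Each "namespace" gets its own folder, and therefore its own independent index.
function namespaceIndex(namespace: string): LocalIndex {
    return new LocalIndex(path.join(__dirname, 'indexes', namespace));
}

const userMemories = namespaceIndex('user-123');
const productDocs = namespaceIndex('product-docs');
```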

## Other Language Bindings

This repo contains the TypeScript/JavaScript binding for Vectra but other language bindings are being created. Since Vectra is file based, any language binding can be used to read or write a Vectra index. That means you can build a Vectra index using JS and then read it using Python.

- [vectra-py](https://github.com/BMS-geodev/vectra-py) - Python version of Vectra.

## Installation

```bash
$ npm install vectra
```

## Usage

First create an instance of `LocalIndex` with the path to the folder where you want your items stored:

```typescript
import path from 'path';
import { LocalIndex } from 'vectra';

const index = new LocalIndex(path.join(__dirname, '..', 'index'));
```

Next, from inside an async function, create your index:

```typescript
if (!(await index.isIndexCreated())) {
    await index.createIndex();
}
```

Add some items to your index:

```typescript
import { OpenAI } from 'openai';

const openai = new OpenAI({
    apiKey: '<YOUR_KEY>',
});

async function getVector(text: string) {
    const response = await openai.embeddings.create({
        model: 'text-embedding-ada-002',
        input: text,
    });
    return response.data[0].embedding;
}

async function addItem(text: string) {
    await index.insertItem({
        vector: await getVector(text),
        metadata: { text },
    });
}

// Add items
await addItem('apple');
await addItem('oranges');
await addItem('red');
await addItem('blue');
```

Then query for items:

```typescript
async function query(text: string) {
    const vector = await getVector(text);
    const results = await index.queryItems(vector, 3);
    if (results.length > 0) {
        for (const result of results) {
            console.log(`[${result.score}] ${result.item.metadata.text}`);
        }
    } else {
        console.log(`No results found.`);
    }
}

await query('green');
/*
[0.9036569942401076] blue
[0.8758153664568566] red
[0.8323828606103998] apple
*/

await query('banana');
/*
[0.9033128691220631] apple
[0.8493374123092652] oranges
[0.8415324469533297] blue
*/
```
50 changes: 50 additions & 0 deletions docs/classes/FileFetcher.md
@@ -0,0 +1,50 @@
[Vectra](../README.md) / [Exports](../modules.md) / FileFetcher

# Class: FileFetcher

Fetches text content from local files.

## Implements

- [`TextFetcher`](../interfaces/TextFetcher.md)

## Table of contents

### Constructors

- [constructor](FileFetcher.md#constructor)

### Methods

- [fetch](FileFetcher.md#fetch)

## Constructors

### constructor

• **new FileFetcher**()

## Methods

### fetch

▸ **fetch**(`uri`, `onDocument`): `Promise`\<`boolean`\>

#### Parameters

| Name | Type |
| :------ | :------ |
| `uri` | `string` |
| `onDocument` | (`uri`: `string`, `text`: `string`, `docType?`: `string`) => `Promise`\<`boolean`\> |

#### Returns

`Promise`\<`boolean`\>

#### Implementation of

[TextFetcher](../interfaces/TextFetcher.md).[fetch](../interfaces/TextFetcher.md#fetch)

#### Defined in

[FileFetcher.ts:10](https://github.com/bartonmalow/vectra/blob/418123d/src/FileFetcher.ts#L10)
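Based on the signature documented above, a minimal usage sketch (assuming `FileFetcher` is exported from the package root) might look like:

```typescript
import { FileFetcher } from 'vectra';

const fetcher = new FileFetcher();

// onDocument receives each document's uri, text, and optional docType;
// returning true tells the fetcher to keep going.
await fetcher.fetch('./docs/README.md', async (uri, text, docType) => {
    console.log(`Fetched ${uri}: ${text.length} chars (${docType ?? 'unknown'})`);
    return true;
});
```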
74 changes: 74 additions & 0 deletions docs/classes/GPT3Tokenizer.md
@@ -0,0 +1,74 @@
[Vectra](../README.md) / [Exports](../modules.md) / GPT3Tokenizer

# Class: GPT3Tokenizer

Tokenizer that uses the GPT-3 tokenizer.

## Implements

- [`Tokenizer`](../interfaces/Tokenizer.md)

## Table of contents

### Constructors

- [constructor](GPT3Tokenizer.md#constructor)

### Methods

- [decode](GPT3Tokenizer.md#decode)
- [encode](GPT3Tokenizer.md#encode)

## Constructors

### constructor

• **new GPT3Tokenizer**()

## Methods

### decode

▸ **decode**(`tokens`): `string`

#### Parameters

| Name | Type |
| :------ | :------ |
| `tokens` | `number`[] |

#### Returns

`string`

#### Implementation of

[Tokenizer](../interfaces/Tokenizer.md).[decode](../interfaces/Tokenizer.md#decode)

#### Defined in

[GPT3Tokenizer.ts:9](https://github.com/bartonmalow/vectra/blob/418123d/src/GPT3Tokenizer.ts#L9)

___

### encode

▸ **encode**(`text`): `number`[]

#### Parameters

| Name | Type |
| :------ | :------ |
| `text` | `string` |

#### Returns

`number`[]

#### Implementation of

[Tokenizer](../interfaces/Tokenizer.md).[encode](../interfaces/Tokenizer.md#encode)

#### Defined in

[GPT3Tokenizer.ts:13](https://github.com/bartonmalow/vectra/blob/418123d/src/GPT3Tokenizer.ts#L13)
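Based on the `encode`/`decode` signatures documented above, a minimal usage sketch (assuming `GPT3Tokenizer` is exported from the package root) might look like:

```typescript
import { GPT3Tokenizer } from 'vectra';

const tokenizer = new GPT3Tokenizer();

// encode() maps text to an array of token ids; decode() reverses it.
const tokens = tokenizer.encode('Hello world');
console.log(tokens.length);
console.log(tokenizer.decode(tokens)); // should round-trip to the original text
```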