
AI Assistant Portfolio

Overview

This is a personal portfolio website built with Next.js 15 that includes an AI-powered chatbot. The chatbot answers questions about experiences, projects, and skills by retrieving relevant information from the site content (retrieval-augmented generation, or RAG); it uses LangChain for LLM interfacing, AstraDB as the vector database, and a responsive UI built with TailwindCSS. Although this was implemented as a portfolio, the chatbot will work for any kind of website (blogs, documentation, etc.) if configured properly.
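At a high level, the chatbot embeds the visitor's question, pulls the most similar page chunks from the vector store, and has the LLM answer from that context. Here is a minimal sketch of that flow, assuming LangChain's Ollama and AstraDB integrations; the names are illustrative (answerQuestion is not this project's actual code):

import { ChatOllama, OllamaEmbeddings } from "@langchain/ollama";
import { AstraDBVectorStore } from "@langchain/community/vectorstores/astradb";

async function answerQuestion(question: string): Promise<string> {
  // Connect to the existing collection of page-content embeddings.
  const store = await AstraDBVectorStore.fromExistingIndex(
    new OllamaEmbeddings({ model: "mxbai-embed-large" }),
    {
      token: process.env.ASTRA_DB_APPLICATION_TOKEN!,
      endpoint: process.env.ASTRA_DB_ENDPOINT!,
      collection: process.env.ASTRA_DB_COLLECTION!,
    },
  );
  // Retrieve the chunks most similar to the question.
  const docs = await store.similaritySearch(question, 4);
  const context = docs.map((d) => d.pageContent).join("\n\n");
  // Ask the local model to answer grounded in that context.
  const llm = new ChatOllama({ model: "llama3.2" });
  const res = await llm.invoke(
    `Answer the question using only this context:\n\n${context}\n\nQuestion: ${question}`,
  );
  return res.content as string;
}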

Prerequisites

  • npm
  • Node.js
  • An AstraDB account
  • Ollama installed locally

Getting Started

Installation

Clone the repository and install dependencies.

git clone https://github.com/quagrain/ai-assistant-portfolio.git
cd ai-assistant-portfolio/
npm install

Environment Setup

Create a .env.local file in the root directory with the required environment variables. You can use the .env.example file as a reference.
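For reference, the AstraDB variables described in the next section look roughly like this (all values are placeholders; see .env.example for the authoritative list):

ASTRA_DB_ENDPOINT=https://<database-id>-<region>.apps.astra.datastax.com
ASTRA_DB_APPLICATION_TOKEN=AstraCS:<your-token>
ASTRA_DB_COLLECTION=<your-collection-name>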

AstraDB Setup

This implementation uses AstraDB to store the text embeddings generated by the embedding model. You can sign up for a free tier.

  1. Create an AstraDB account at DataStax (astra.datastax.com).
  2. Create a Serverless Vector database (any region, provider, and database name will do).
  3. Paste the API Endpoint into the .env.local file as ASTRA_DB_ENDPOINT.
  4. Generate an Application Token and paste it into the .env.local file as ASTRA_DB_APPLICATION_TOKEN.
  5. Create an empty collection and use its name as the value of ASTRA_DB_COLLECTION in the .env.local file.

Note: If the collection name in your environment variable differs from the one you created, a new collection will be created using the name in your .env file.
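To sanity-check the credentials before building, one option is a small script using the @datastax/astra-db-ts client (a sketch with a hypothetical file name; this project may wire up the connection differently):

// check-astra.ts (hypothetical) — run with: npx tsx check-astra.ts
import { DataAPIClient } from "@datastax/astra-db-ts";

const client = new DataAPIClient(process.env.ASTRA_DB_APPLICATION_TOKEN!);
const db = client.db(process.env.ASTRA_DB_ENDPOINT!);

// Listing collections confirms the token and endpoint work, and shows
// whether the name in ASTRA_DB_COLLECTION already exists.
console.log(await db.listCollections());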

Ollama Setup

Install Ollama from ollama.com and start the service:

ollama serve

On macOS and Windows, you can open the Ollama application instead if the ollama serve command fails. For this implementation, pull the following models (in a terminal window separate from the one running ollama serve):

ollama pull llama3.2
ollama pull mxbai-embed-large
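To confirm both models were pulled successfully, list what Ollama has available:

ollama list

By default, the Ollama server listens on http://localhost:11434.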

Building and Running

npm run build && npm run start
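The site is then served at http://localhost:3000, the Next.js default. During development you can use the dev server instead, assuming the project keeps the standard Next.js scripts:

npm run dev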
