This is a portfolio website that includes a chatbot that answers visitors' questions using content from your pages. Although it was implemented as a portfolio, the chatbot will work for any kind of website (blogs, documentation sites, etc.) if configured properly. To run it locally, you will need:
- npm
- Node.js
- An AstraDB account
- Ollama installed locally
Clone the repository and install dependencies:

```bash
git clone https://github.com/quagrain/ai-assistant-portfolio.git
cd ai-assistant-portfolio/
npm install
```
Create a `.env.local` file in the root directory with the required environment variables. You can use the `.env.example` file as a reference.
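As a rough sketch, the AstraDB portion of `.env.local` might look like the following once the setup steps below are complete. Every value here is a placeholder, and the repository's `.env.example` remains the authoritative list of variables:

```bash
# Placeholders only -- copy the real values from your AstraDB dashboard
ASTRA_DB_ENDPOINT=https://<your-db-id>-<your-region>.apps.astra.datastax.com
ASTRA_DB_APPLICATION_TOKEN=AstraCS:<your-application-token>
ASTRA_DB_COLLECTION=<your-collection-name>
```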
This implementation uses AstraDB to store the text embeddings generated by the embedding model. You can sign up for a free tier:
- Create an AstraDB account at DataStax.
- Create a Serverless Vector database (select any region, provider, and database name).
- Paste the API Endpoint into the `.env.local` file as `ASTRA_DB_ENDPOINT`.
- Generate an Application Token and paste it into the `.env.local` file as `ASTRA_DB_APPLICATION_TOKEN`.
- Create an empty collection and use the collection name as the value of `ASTRA_DB_COLLECTION` in the `.env.local` file.
Note: if the value of `ASTRA_DB_COLLECTION` does not match the collection you created, a new collection will be created using the name in your `.env.local` file.
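Once the collection exists, the app can read and write it through the Data API. Below is a minimal sketch, assuming the `@datastax/astra-db-ts` client; the helper names (`storeChunk`, `similarChunks`) are illustrative, not the repository's actual code:

```ts
import { DataAPIClient } from "@datastax/astra-db-ts";

// Connect using the credentials from .env.local.
const client = new DataAPIClient(process.env.ASTRA_DB_APPLICATION_TOKEN!);
const db = client.db(process.env.ASTRA_DB_ENDPOINT!);
const collection = db.collection(process.env.ASTRA_DB_COLLECTION!);

// Store one chunk of page text together with its embedding vector.
export async function storeChunk(text: string, embedding: number[]) {
  await collection.insertOne({ text, $vector: embedding });
}

// Return the text of the chunks most similar to a query embedding.
export async function similarChunks(queryEmbedding: number[], limit = 5) {
  const docs = await collection
    .find({}, { sort: { $vector: queryEmbedding }, limit })
    .toArray();
  return docs.map((doc) => doc.text as string);
}
```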
Install Ollama from ollama.com and start the service:

```bash
ollama serve
```
macOS and Windows users can alternatively open the Ollama application if the `ollama serve` command fails.
For this implementation, pull the chat and embedding models (in a different terminal window from `ollama serve`):

```bash
ollama pull llama3.2
ollama pull mxbai-embed-large
```
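With both models pulled, the chatbot can talk to Ollama's local REST API: `mxbai-embed-large` turns text into embedding vectors and `llama3.2` generates the answers. A rough TypeScript sketch of those two calls, assuming Ollama's default port (11434) and its standard `/api/embeddings` and `/api/chat` endpoints; the function names are illustrative, not the repository's actual code:

```ts
const OLLAMA_URL = "http://localhost:11434";

// Embed a piece of text with mxbai-embed-large.
async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA_URL}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: text }),
  });
  const data = await res.json();
  return data.embedding;
}

// Answer a question with llama3.2, grounded in retrieved page content.
async function answer(question: string, context: string): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      stream: false,
      messages: [
        { role: "system", content: `Answer using only this site content:\n${context}` },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.message.content;
}
```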
Finally, build and start the app:

```bash
npm run build && npm run start
```