Releases: TannerMidd/minimal-chat
v5.1.4 PWA Compatibility Updates and UI Adjustments
Full Changelog: v5.1.3...v5.1.4
- Fixed issues causing PWA (Progressive Web Application) compatibility problems
- Fixed issue with site icons not being found
- Updated AI message bubble labels to indicate what model selection is being used (GPT, Claude, Custom Model)
- Fixed bug causing the send button not to work
- Updated the conversation dialog to close when starting a new conversation in mobile mode
- Fixed several layout sizing issues across different screen sizes
v5.1.3 Conversation Panel Improvements
Full Changelog: v5.1.2...v5.1.3
- Added a delete-current-conversation option to the conversations panel
- Added an open-settings option to the conversations panel
- Updated some border colors to align with the site theme
- Removed remaining header icons while in desktop mode
v5.1.2 Message Label Bugfixes & More
- Fixed various bugs that prevented the correct loading message bubble from being displayed
- Disabled auto scroll to bottom on message regeneration streams
- Selected conversation color bugfix
- Fixed text overflowing out of message bubbles while streaming responses back
- Made the regenerate response icon background transparent to fix visual bug while it is spinning
- Updated message regeneration to not auto scroll to the bottom by @Fingerthief in #55
Full Changelog: v5.1.1...v5.1.2
v5.1.1 Regenerate Previous Message Responses
Regenerate Previous Responses with Ease
Users can now regenerate the response of any previous message in the conversation history, allowing for greater flexibility and experimentation.
This feature enables you to refine model settings, such as temperature or max tokens, and regenerate a previous response that didn't meet expectations. You can also seamlessly switch to a different model, such as GPT-4-Turbo or Llama 3, and regenerate the response to compare outputs between models.
Full Changelog: v5.1.0...v5.1.1
- Added message regeneration ability. by @Fingerthief in #54
v5.1.0 Formatting and Code Syntax Highlighting Improvements
Full Changelog: v5.0.9...v5.1.0
- Updated custom models to query themselves to generate titles for new conversations, like the other models do
- Further improved the general formatting of message bubble contents.
- Greatly improved code syntax highlighting
- Fixed an issue that caused markdown/code sections to bug out and display raw values for stray lines every once in a while
- Finally removed all CDN references and installed the libraries as proper packages:
- toastify-js
- pwacompat
- highlight.js
- markdown-it
- Cleaned up the styles code in the message-item component
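Having a model title its own conversations can be done with a simple self-query. The sketch below illustrates the idea; the function name, the `sendToModel` helper, and the prompt wording are assumptions for illustration, not the app's actual code.

```javascript
// Sketch: ask the currently selected model to generate a short title
// for a new conversation by sending it the first user message.
// `sendToModel` is a hypothetical function that sends one prompt to the
// active model and resolves with its text response.
async function generateConversationTitle(sendToModel, firstUserMessage) {
  const prompt =
    "Summarize the following message as a short conversation title " +
    "(five words or fewer, no quotes):\n\n" + firstUserMessage;
  const title = await sendToModel(prompt);
  return title.trim();
}
```

Any model selection that can answer a chat message can produce titles this way, which is presumably how the custom-model path was brought in line with the other models.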
v5.0.9 UI Design and Other Improvements
Full Changelog: v5.0.8...v5.0.9
- Added the ability to abort a streaming response by clicking the new stop icon shown while a response is streaming back
- Further improvements to the general site design, aligning colors and styles throughout most of the app; some work is left to do
- Fixed a bug where the newly designed conversations panel wouldn't update the conversation selection in the panel on new conversations
- Updates to the message bubble text formatting for improved readability
- Cleaned up the gpt-api-access lib file a bit
- Renamed the local-model-api-access lib to open-ai-api-standard-access to be representative of what it actually is
- Fixed bugs in the retry logic in a few areas
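The stop-streaming behavior described above maps naturally onto the standard AbortController API. This is a minimal sketch under that assumption; the function names and wiring are illustrative, not the app's real implementation.

```javascript
// Sketch: abort an in-flight streaming fetch with AbortController.
// All names here are illustrative, not taken from the app's code.
const controller = new AbortController();

async function streamResponse(url, requestBody) {
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(requestBody),
    signal: controller.signal, // aborting the controller cancels this request
  });
  // ...read response.body chunks here and append them to the message bubble...
  return response;
}

// Wired to the stop icon's click handler shown while streaming:
function onStopClick() {
  controller.abort(); // rejects the pending fetch with an AbortError
}
```

A fresh controller is needed per request, since an aborted AbortController cannot be reused.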
v5.0.8 - Further Design Adjustments and Bugfixes
Full Changelog: v5.0.7...v5.0.8
- Greatly improved the readability of the message text by adding in some general text formatting/styling
- Replaced fonts site-wide with Roboto
- Reduced stroke width of header area icons for a cleaner look
- API Key field values are hidden (input type password) by default now
- Adjusted initial conversations panel width
- Fixed longstanding bug that caused text in code blocks to overflow out of message bubbles on smaller screens at times
v5.0.7 Site Design Updates and Vision Support for Custom Models
Full Changelog: v5.0.6...v5.0.7
- Added Vision request support for the Open AI Response Formatted API model selection.
- This means essentially any local model or custom API Endpoint (like OpenRouter) that supports vision will now work.
- Site Design updates for both the color palette and layout
- On desktop sized screens the conversations now show on the left side as a panel that is resizable.
- Double click the resize bar to quickly collapse or show the panel while in desktop mode.
- Phone sized screens will see the same behavior as previous builds to maintain the best use of screen space.
- Theme color palette changed to be more cohesive throughout the application
- Removed various borders to create a more cohesive feel
- Created a unified parsing function in the utils lib for OpenAI formatted stream responses.
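A unified parser for OpenAI-formatted streams typically walks the Server-Sent Events lines (`data: {...}` terminated by `data: [DONE]`) and collects the content deltas. The sketch below shows that shape; the function name is an illustrative assumption, not the app's actual utils function.

```javascript
// Sketch: parse OpenAI-formatted streaming chunks into text.
// OpenAI-style streams send SSE lines of the form `data: {...json...}`
// and signal completion with `data: [DONE]`.
function parseOpenAiStreamChunk(chunkText) {
  const tokens = [];
  for (const line of chunkText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blanks and keep-alives
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;            // end-of-stream sentinel
    try {
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) tokens.push(delta);
    } catch {
      // JSON split across network chunks; a real implementation buffers
      // the partial line and retries on the next chunk
    }
  }
  return tokens.join("");
}
```

Because LM Studio, OpenRouter, and similar endpoints all emit this same format, one parser can serve every model selection.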
v5.0.6 Generalized API Support Added & Settings Panel Updates
Full Changelog: v5.0.5.2...v5.0.6
- Removed the Local and HuggingFace-specific implementations and consolidated them into a general library used for interacting with any model/server that returns responses in the OpenAI Response Format
- Examples:
- LM Studio
- Hugging Face Inference Endpoints
- Created a new settings config section, "Open AI Format Model Config", with the following settings:
- API Endpoint
- Model - The expected model value for the service being used.
- API Key
- Max Tokens - Defaults to 3000
- Updated the Settings panel to only show the currently selected model's relevant config options. This massively cleans up the settings panel, which was a growing list of values
Version 5.0.5 - General Bugfixes
- Fixed an issue where the stored user-selected model from the last session was not loaded on initial app load, which caused all kinds of chaos
- Removed a bad self reference in the retry logic of GPT conversation generation that was breaking the retry logic
Full Changelog: v5.0.5.1...v5.0.5.2