Releases: jupyterlab/jupyter-ai
v2.31.4
v2.31.3
Bugs fixed
Documentation improvements
Contributors to this release
v2.31.2
Bugs fixed
- Add `default_completions_model` trait #1303 (@srdas)
- Pass `model_parameters` trait to embedding & completion models #1298 (@srdas)
Contributors to this release
v2.31.1
Enhancements made
Bugs fixed
- Migrate old config schemas, fix v2.31.0 regression #1294 (@dlqqq)
- Remove error log emitted when FAISS file is absent #1287 (@srdas)
Contributors to this release
v2.31.0
This release notably:
- Allows any Ollama embedding model to be used (the model ID must now be entered by the user),
- Adds a custom OpenAI provider for using any model served over an OpenAI-compatible API,
- Allows embedding model fields to be specified, and
- Fixes the Jupyter AI settings, which previously used a single dictionary for chat, embedding, and completion model fields. These fields are now stored separately in the Jupyter AI settings file (see the sketch below).
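To confirm the new layout on your machine, the snippet below is a minimal sketch that prints the top-level keys of the Jupyter AI settings file. The `jupyter_ai/config.json` location under the Jupyter data directory is an assumption based on the default setup; adjust it if your installation stores settings elsewhere.

```python
# Minimal sketch: list the top-level keys of the Jupyter AI settings file to
# see the separated chat / embedding / completion sections. The file location
# is an assumption (default Jupyter data directory); adjust as needed.
import json
from pathlib import Path

from jupyter_core.paths import jupyter_data_dir  # provided by jupyter_core

config_path = Path(jupyter_data_dir()) / "jupyter_ai" / "config.json"
if config_path.exists():
    settings = json.loads(config_path.read_text())
    print("Jupyter AI settings keys:", sorted(settings))
else:
    print("No Jupyter AI settings file found at", config_path)
```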
Running `pip install -U jupyter_ai` now also updates `jupyter_ai_magics` automatically; previously the two packages had to be upgraded separately.
Special thanks to @srdas for his contributions to this release!
Enhancements made
Bugs fixed
- Ensure magics package version is consistent in future releases #1280 (@dlqqq)
- Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
Maintenance and upkeep improvements
Contributors to this release
v2.30.0
This release notably allows developers to override or disable Jupyter AI's chat handlers and slash commands via the entry points API. See the new section in the developer documentation for more info; a rough sketch is also included below.
Special thanks to @Darshan808 and @krassowski for their contributions to this release!
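As an illustration of the new extension point, here is a minimal sketch of a custom slash command handler. The `BaseChatHandler` / `SlashCommandRoutingType` API and the `jupyter_ai.chat_handlers` entry point group name are assumptions based on the v2.x codebase; the developer documentation section referenced above is the authoritative reference.

```python
# Hedged sketch of a custom slash command handler for Jupyter AI. The base
# class API and the entry point group name are assumptions; consult the
# developer documentation for the exact interface.
from jupyter_ai.chat_handlers.base import BaseChatHandler, SlashCommandRoutingType
from jupyter_ai.models import HumanChatMessage


class PingChatHandler(BaseChatHandler):
    """Replies with a fixed message when a user sends /ping in the chat."""

    id = "ping"
    name = "Ping"
    help = "Replies with 'pong' to verify the chat backend is responsive."
    routing_type = SlashCommandRoutingType(slash_id="ping")

    async def process_message(self, message: HumanChatMessage):
        # Send a static reply; a real handler could invoke a language model here.
        self.reply("pong", message)


# A package would then register (or override) the handler with an entry point,
# e.g. in pyproject.toml (group name assumed):
#
#   [project.entry-points."jupyter_ai.chat_handlers"]
#   ping = "my_package.handlers:PingChatHandler"
```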
Enhancements made
- Make Native Chat Handlers Overridable via Entry Points #1249 (@Darshan808)
- Allow chat handlers to be initialized in any order #1268 (@Darshan808)
- Refactor Chat Handlers to Simplify Initialization #1257 (@Darshan808)
Bugs fixed
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Fix: Enable up and down arrow keys in chat input #1254 (@keerthi-swarna)
Maintenance and upkeep improvements
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Remove the dependency on `jupyterlab` #1234 (@jtpio)
Documentation improvements
- Add information about ollama - document it as an available provider and provide clearer troubleshooting help. #1235 (@fperez)
- Add documentation for vLLM usage #1232 (@srdas)
Contributors to this release
(GitHub contributors page for this release)
@Darshan808 | @dlqqq | @gogakoreli | @keerthi-swarna | @krassowski | @meeseeksmachine | @paulrutter | @srdas
v2.29.1
Enhancements made
- Show error icon near cursor on inline completion errors #1197 (@Darshan808)
Bugs fixed
- Enforce path imports for MUI icons, upgrade to ESLint v8 #1225 (@krassowski)
- Fixes duplicate api key being passed in `openrouter.py` #1216 (@srdas)
Maintenance and upkeep improvements
Documentation improvements
- Update documentation for setting API keys without revealing them #1224 (@srdas)
- Typo in comment #1217 (@Carreau)
- Docs: Update installation steps to work in bash & zsh #1211 (@srdas)
Contributors to this release
v2.29.0
This release notably upgrades to LangChain v0.3 and Pydantic v2. You can now use the latest LangChain & Pydantic APIs available in the same environment as Jupyter AI! 🎉
Note that just running `pip install -U jupyter-ai` may not upgrade LangChain partner packages like `langchain-aws` or `langchain-openai`, as these are listed as optional dependencies. Therefore, to upgrade all LangChain packages in your environment along with Jupyter AI, we strongly recommend running:
`pip install -U "jupyter-ai[all]"`
Enhancements made
Bugs fixed
Documentation improvements
Contributors to this release
v2.28.5
Bugs fixed
- Fix specifying empty list in provider and model allow/denylists #1185 (@MaicoTimmerman)
Documentation improvements
- Update documentation to add usage of Openrouter #1193 (@srdas)
- Fix dev install steps in contributor docs #1188 (@srdas)
Contributors to this release
v3.0.0a0
Hope you all have had a wonderful holiday season! Santa and I present to you the first pre-release of v3, the next major version of Jupyter AI. 🎁
Rapid summary of what's new: In v3, all responsibility for managing the chat is now delegated to Jupyter Chat, a new project built with Jupyter AI components and a custom chat backend. By using Jupyter Chat, Jupyter AI now supports multiple chats, and automatically saves them as files on disk. This migration has already allowed us to greatly simplify our codebase, and will provide a fantastic foundation to build new features for users in v3. ❤️
- Thank you @brichet for leading development on Jupyter Chat!
- For more details, please see the full PR history of the v3-dev branch.
This pre-release is being published quickly to get feedback from contributors & stakeholders. v3 is still a work-in-progress, and we will absolutely build more features & fix more issues on top of this before the v3.0.0 official release. This is just the first pre-release of many more to come. 💪
Known issues
There are already a few issues I've noticed, which I will call out below to help save you all some time:
- Opening the Jupyter AI settings is not obvious.
- From JupyterLab's top bar menu, click "Settings" => "AI Settings" (near the bottom) to open the Jupyter AI settings.
- You have to select a chat model and provide API keys for that chat model, otherwise the chat fails silently.
- You may have to wait a minute or two after starting the server before the chat responds to new inputs.
- This bug happens rarely, seemingly at random. I am monitoring this issue until I can reproduce it consistently.
- Pressing `Ctrl + C` in the terminal sometimes does not stop the server. This is a known issue with `jupyter_collaboration`.
  - Workaround from the terminal: Press `Ctrl + Z` to suspend the server, then run `kill -9 %1` from the same terminal.