Releases: jupyterlab/jupyter-ai
v2.26.0
This release notably adds a "Stop streaming" button, which replaces the "Send" button while a reply is streaming and the chat input is empty. While Jupyternaut is streaming a reply, the user can click "Stop streaming" to interrupt Jupyternaut and stop it from streaming further. Thank you @krassowski for contributing this feature! 🎉
Enhancements made
- Support Quarto Markdown in `/learn` #1047 (@dlqqq)
- Update requirements contributors doc #1045 (@JasonWeill)
- Remove `clear_message_ids` from `RootChatHandler` #1042 (@michaelchia)
- Migrate streaming logic to `BaseChatHandler` #1039 (@dlqqq)
- Unify message clearing & broadcast logic #1038 (@dlqqq)
- Learn from JSON files #1024 (@jlsajfj)
- Allow users to stop message streaming #1022 (@krassowski)
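As a quick sketch of the expanded `/learn` coverage above (the directory path and question are hypothetical; `/learn` and `/ask` are the extension's standard slash commands), indexing a folder that contains Quarto Markdown and then querying it looks like:

```
/learn docs/            # now also indexes .qmd Quarto Markdown files
/ask How is the site navigation configured in the learned docs?
```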
Bugs fixed
- Always use `username` from `IdentityProvider` #1034 (@krassowski)
Maintenance and upkeep improvements
- Support `jupyter-collaboration` v3 #1035 (@krassowski)
- Test Python 3.9 and 3.12 on CI, test minimum dependencies #1029 (@krassowski)
Documentation improvements
- Update requirements contributors doc #1045 (@JasonWeill)
Contributors to this release
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @jlsajfj | @krassowski | @michaelchia | @pre-commit-ci
v2.25.0
Enhancements made
- Export context hooks from NPM package entry point #1020 (@dlqqq)
- Add support for optional telemetry plugin #1018 (@dlqqq)
- Add back history and reset subcommand in magics #997 (@akaihola)
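For the restored magics subcommands above, a minimal IPython sketch (the `reset` subcommand name comes from the PR title; the model alias and prompt are assumptions):

```
%load_ext jupyter_ai_magics

%%ai chatgpt
Summarize the dataset loaded in the previous cell.

# later, clear the accumulated conversation history:
%ai reset
```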
Contributors to this release
(GitHub contributors page for this release)
@akaihola | @dlqqq | @jtpio | @pre-commit-ci
v2.24.1
Enhancements made
- Make path argument required on /learn #1012 (@andrewfulton9)
v2.24.0
This release notably introduces a new context command, `@file:<file-path>`, to the chat UI, which includes the contents of the target file with your prompt when it is sent. This allows you to ask questions like:
- What does `@file:src/components/ActionButton.tsx` do?
- Can you refactor `@file:src/index.ts` to use async/await syntax?
- How do I add an optional dependency to `@file:pyproject.toml`?
The context command feature also includes an autocomplete menu UI to help navigate your filesystem with fewer keystrokes.
Thank you @michaelchia for developing this feature!
Enhancements made
- Migrate to `ChatOllama` base class in Ollama provider #1015 (@srdas)
- Add `metadata` field to agent messages #1013 (@dlqqq)
- Add OpenRouter support #996 (@akaihola)
- Framework for adding context to LLM prompt #993 (@michaelchia)
- Adds unix shell-style wildcard matching to `/learn` #989 (@andrewfulton9)
Bugs fixed
- Run mypy on CI, fix or ignore typing issues #987 (@krassowski)
Contributors to this release
(GitHub contributors page for this release)
@akaihola | @andrewfulton9 | @dlqqq | @ellisonbg | @hockeymomonow | @krassowski | @michaelchia | @srdas
v2.23.0
Enhancements made
- Allow unlimited LLM memory through traitlets configuration #986 (@krassowski)
- Allow to disable automatic inline completions #981 (@krassowski)
- Add ability to delete messages + start new chat session #951 (@michaelchia)
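The unlimited-memory option above is exposed through traitlets. A sketch of a Jupyter server config fragment, assuming the traitlet is `AiExtension.default_max_chat_history` (verify the exact name with `jupyter server --help-all`):

```python
# e.g. in jupyter_server_config.py -- the traitlet name here is an assumption
c.AiExtension.default_max_chat_history = None  # None = keep unlimited chat history
```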
Bugs fixed
- Fix `RunnableWithMessageHistory` import #980 (@krassowski)
- Fix sort messages #975 (@michaelchia)
Contributors to this release
(GitHub contributors page for this release)
@dlqqq | @krassowski | @michaelchia | @srdas
v2.22.0
Enhancements made
- Add 'Generative AI' submenu #971 (@dlqqq)
- Add Gemini 1.5 to the list of chat options #964 (@trducng)
- Allow configuring a default model for cell magics (and line error magic) #962 (@krassowski)
- Make chat memory size traitlet configurable + /clear to reset memory #943 (@michaelchia)
Contributors to this release
(GitHub contributors page for this release)
@dlqqq | @krassowski | @michaelchia | @pre-commit-ci | @srdas | @trducng
v2.21.0
Enhancements made
- Add optional configurable message footer #942 (@dlqqq)
- Add support for Azure OpenAI Embeddings to Jupyter AI #940 (@gsrikant7)
- Make help message template configurable #938 (@dlqqq)
- Add latest Bedrock models (Titan, Llama 3.1 405b, Mistral Large 2, Jamba Instruct) #923 (@gabrielkoo)
- Add support for custom/provisioned models in Bedrock #922 (@dlqqq)
- Settings section improvement #918 (@andrewfulton9)
Bugs fixed
- Bind reject method to promise, improve typing #949 (@krassowski)
- Fix sending empty input with Enter #946 (@michaelchia)
- Fix saving chat settings #935 (@dlqqq)
Documentation improvements
- Add documentation on how to use Amazon Bedrock #936 (@srdas)
- Update copyright template #925 (@srdas)
Contributors to this release
(GitHub contributors page for this release)
@andrewfulton9 | @dlqqq | @gabrielkoo | @gsrikant7 | @krassowski | @michaelchia | @srdas
v2.20.0
Enhancements made
- Respect selected persona in chat input placeholder #916 (@dlqqq)
- Migrate to `langchain-aws` for AWS providers #909 (@dlqqq)
- Added new Bedrock Llama 3.1 models and gpt-4o-mini #908 (@srdas)
- Rework selection inclusion; new Send button UX #905 (@dlqqq)
Contributors to this release
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @srdas
v2.19.1
Enhancements made
- Allow overriding the Ollama base URL #904 (@jtpio)
- Make magic aliases user-customizable #901 (@krassowski)
Bugs fixed
- Trim leading whitespace when processing #900 (@krassowski)
- Fix python<3.10 compatibility #899 (@michaelchia)
Documentation improvements
- Add notebooks to the documentation #906 (@andrewfulton9)
- Update docs to reflect Python 3.12 support #898 (@dlqqq)
Contributors to this release
(GitHub contributors page for this release)
@andrewfulton9 | @dlqqq | @jtpio | @krassowski | @michaelchia | @pre-commit-ci
v2.19.0
This is a significant release that implements LLM response streaming in Jupyter AI along with several other enhancements & fixes listed below. Special thanks to @krassowski for his generous contributions this release!
Enhancements made
- Upgrade to `langchain~=0.2.0` and `langchain_community~=0.2.0` #897 (@dlqqq)
- Rework selection replacement #895 (@dlqqq)
- Ensure all slash commands support `-h`/`--help` #878 (@krassowski)
- Add keyboard shortcut command to focus chat input #876 (@krassowski)
- Implement LLM response streaming #859 (@dlqqq)
- Add Ollama #646 (@jtpio)
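Per #878 above, every slash command should now accept a help flag; the exact usage text varies by command, e.g.:

```
/learn -h
/export --help
```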
Bugs fixed
- Fix streaming in `HuggingFaceHub` provider #894 (@krassowski)
- Fix removal of pending messages on error #888 (@krassowski)
- Ensuring restricted access to the `/learn` index directory #887 (@krassowski)
- Make preferred-dir the default read/write directory for slash commands #881 (@andrewfulton9)
- Fix prefix removal when streaming inline completions #879 (@krassowski)
- Limit chat input height to 20 lines #877 (@krassowski)
- Do not redefine `refreshCompleterState` on each render #875 (@krassowski)
- Remove unused toolbars/menus from schema #873 (@krassowski)
- Fix plugin ID format #872 (@krassowski)
- Address error on `/learn` after change of embedding model #870 (@srdas)
- Fix pending message overlapping text #857 (@michaelchia)
- Fixes error when allowed or blocked model list is passed in config #855 (@3coins)
- Fixed `/export` for timestamp, agent name #854 (@srdas)
Maintenance and upkeep improvements
- Update to `actions/checkout@v4` #893 (@jtpio)
- Upload `jupyter-releaser` built distributions #892 (@jtpio)
- Updated integration tests workflow #890 (@krassowski)
Contributors to this release
(GitHub contributors page for this release)
@3coins | @andrewfulton9 | @brichet | @dannongruver | @dlqqq | @JasonWeill | @jtpio | @krassowski | @lalanikarim | @michaelchia | @pedrogutobjj | @srdas