<!--- generated changelog --->

## [2025-11-10]

### llama-index-core [0.14.8]

- Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" ([#20098](https://github.com/run-llama/llama_index/pull/20098))
- Add buffer to image, audio, video and document blocks ([#20153](https://github.com/run-llama/llama_index/pull/20153))
- fix(agent): Handle multi-block ChatMessage in ReActAgent ([#20196](https://github.com/run-llama/llama_index/pull/20196))
- Fix issue #20209 ([#20214](https://github.com/run-llama/llama_index/pull/20214))
- Preserve Exception in ToolOutput ([#20231](https://github.com/run-llama/llama_index/pull/20231))
- Fix a spurious Pydantic warning ([#20235](https://github.com/run-llama/llama_index/pull/20235))

### llama-index-embeddings-nvidia [0.4.2]

- docs: Edit pass and update example model ([#20198](https://github.com/run-llama/llama_index/pull/20198))

### llama-index-embeddings-ollama [0.8.4]

- Added a test case (no library code changes) that checks embeddings over an actual connection to an Ollama server, after first verifying the server is reachable ([#20230](https://github.com/run-llama/llama_index/pull/20230))

### llama-index-llms-anthropic [0.10.2]

- feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming ([#20206](https://github.com/run-llama/llama_index/pull/20206))
- chore: remove unsupported models ([#20211](https://github.com/run-llama/llama_index/pull/20211))

### llama-index-llms-bedrock-converse [0.11.1]

- feat: integrate Bedrock Converse with tool call block ([#20099](https://github.com/run-llama/llama_index/pull/20099))
- feat: Update model name extraction to include 'jp' region prefix and … ([#20233](https://github.com/run-llama/llama_index/pull/20233))

### llama-index-llms-google-genai [0.7.3]

- feat: Google GenAI integration with tool block ([#20096](https://github.com/run-llama/llama_index/pull/20096))
- fix: non-streaming Gemini tool calling ([#20207](https://github.com/run-llama/llama_index/pull/20207))
- Add token usage information in GoogleGenAI chat additional_kwargs ([#20219](https://github.com/run-llama/llama_index/pull/20219))
- Fix bug in Google GenAI stream_complete ([#20220](https://github.com/run-llama/llama_index/pull/20220))

### llama-index-llms-nvidia [0.4.4]

- docs: Edit pass and code example updates ([#20200](https://github.com/run-llama/llama_index/pull/20200))

### llama-index-llms-openai [0.6.8]

- FixV2: Correct DocumentBlock type for OpenAI from 'input_file' to 'file' ([#20203](https://github.com/run-llama/llama_index/pull/20203))
- OpenAI v2 SDK support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-llms-upstage [0.6.5]

- OpenAI v2 SDK support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-packs-streamlit-chatbot [0.5.2]

- OpenAI v2 SDK support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-packs-voyage-query-engine [0.5.2]

- OpenAI v2 SDK support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-postprocessor-nvidia-rerank [0.5.1]

- docs: Edit pass ([#20199](https://github.com/run-llama/llama_index/pull/20199))

### llama-index-readers-web [0.5.6]

- feat: Add ScrapyWebReader integration ([#20212](https://github.com/run-llama/llama_index/pull/20212))
- Update Scrapy dependency to 2.13.3 ([#20228](https://github.com/run-llama/llama_index/pull/20228))

### llama-index-readers-whisper [0.3.0]

- OpenAI v2 SDK support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-storage-kvstore-postgres [0.4.3]

- fix: Ensure schema creation only occurs if it doesn't already exist ([#20225](https://github.com/run-llama/llama_index/pull/20225))

### llama-index-tools-brightdata [0.2.1]

- docs: add API key claim instructions ([#20204](https://github.com/run-llama/llama_index/pull/20204))

### llama-index-tools-mcp [0.4.3]

- Added a test case for issue #19211; no code change ([#20201](https://github.com/run-llama/llama_index/pull/20201))

### llama-index-utils-oracleai [0.3.1]

- Update llama-index-core dependency to 0.12.45 ([#20227](https://github.com/run-llama/llama_index/pull/20227))

### llama-index-vector-stores-lancedb [0.4.2]

- fix: FTS index recreation bug on every LanceDB query ([#20213](https://github.com/run-llama/llama_index/pull/20213))

## [2025-10-30]

### llama-index-core [0.14.7]