Commit bc52c85

Release 0.14.8 (#20236)

Parent: 1a960be

4 files changed: +154, -3 lines

CHANGELOG.md

Lines changed: 90 additions & 0 deletions
@@ -2,6 +2,96 @@
<!--- generated changelog --->

## [2025-11-10]

### llama-index-core [0.14.8]

- Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" ([#20098](https://github.com/run-llama/llama_index/pull/20098))
- Add buffer to image, audio, video and document blocks ([#20153](https://github.com/run-llama/llama_index/pull/20153))
- fix(agent): Handle multi-block ChatMessage in ReActAgent ([#20196](https://github.com/run-llama/llama_index/pull/20196))
- Fix/20209 ([#20214](https://github.com/run-llama/llama_index/pull/20214))
- Preserve Exception in ToolOutput ([#20231](https://github.com/run-llama/llama_index/pull/20231))
- fix weird pydantic warning ([#20235](https://github.com/run-llama/llama_index/pull/20235))
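The ReActOutputParser fix (#20098) resolves an ordering ambiguity: a final answer whose text happens to contain the literal string "Action:" could be misread as a tool call. The following is an illustrative sketch only, with hypothetical names, not the library's actual parser code:

```python
# Illustrative sketch (NOT llama_index's real ReActOutputParser): if a parser
# looks for "Action:" before checking "Answer:", an answer that merely
# mentions "Action:" is treated as a tool call and the agent gets stuck.
def classify_step(output: str) -> str:
    answer_pos = output.find("Answer:")
    action_pos = output.find("Action:")
    # Treat the output as a final answer when "Answer:" comes first,
    # even if the answer body contains the literal "Action:".
    if answer_pos != -1 and (action_pos == -1 or answer_pos < action_pos):
        return "answer"
    if action_pos != -1:
        return "action"
    return "unknown"

print(classify_step("Answer: first run the Action: step manually"))  # answer
print(classify_step("Thought: need data\nAction: search"))           # action
```

Checking positions rather than mere presence is the essence of the fix as described by the PR title.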
### llama-index-embeddings-nvidia [0.4.2]

- docs: Edit pass and update example model ([#20198](https://github.com/run-llama/llama_index/pull/20198))

### llama-index-embeddings-ollama [0.8.4]

- Added a test case (no code change) that checks embeddings over a live connection to an Ollama server, run only after verifying the server exists ([#20230](https://github.com/run-llama/llama_index/pull/20230))

### llama-index-llms-anthropic [0.10.2]

- feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming ([#20206](https://github.com/run-llama/llama_index/pull/20206))
- chore: remove unsupported models ([#20211](https://github.com/run-llama/llama_index/pull/20211))

### llama-index-llms-bedrock-converse [0.11.1]

- feat: integrate bedrock converse with tool call block ([#20099](https://github.com/run-llama/llama_index/pull/20099))
- feat: Update model name extraction to include 'jp' region prefix and … ([#20233](https://github.com/run-llama/llama_index/pull/20233))

### llama-index-llms-google-genai [0.7.3]

- feat: google genai integration with tool block ([#20096](https://github.com/run-llama/llama_index/pull/20096))
- fix: non-streaming gemini tool calling ([#20207](https://github.com/run-llama/llama_index/pull/20207))
- Add token usage information in GoogleGenAI chat additional_kwargs ([#20219](https://github.com/run-llama/llama_index/pull/20219))
- bug fix google genai stream_complete ([#20220](https://github.com/run-llama/llama_index/pull/20220))

### llama-index-llms-nvidia [0.4.4]

- docs: Edit pass and code example updates ([#20200](https://github.com/run-llama/llama_index/pull/20200))

### llama-index-llms-openai [0.6.8]

- FixV2: Correct DocumentBlock type for OpenAI from 'input_file' to 'file' ([#20203](https://github.com/run-llama/llama_index/pull/20203))
- OpenAI v2 sdk support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-llms-upstage [0.6.5]

- OpenAI v2 sdk support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-packs-streamlit-chatbot [0.5.2]

- OpenAI v2 sdk support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-packs-voyage-query-engine [0.5.2]

- OpenAI v2 sdk support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-postprocessor-nvidia-rerank [0.5.1]

- docs: Edit pass ([#20199](https://github.com/run-llama/llama_index/pull/20199))

### llama-index-readers-web [0.5.6]

- feat: Add ScrapyWebReader Integration ([#20212](https://github.com/run-llama/llama_index/pull/20212))
- Update Scrapy dependency to 2.13.3 ([#20228](https://github.com/run-llama/llama_index/pull/20228))

### llama-index-readers-whisper [0.3.0]

- OpenAI v2 sdk support ([#20234](https://github.com/run-llama/llama_index/pull/20234))

### llama-index-storage-kvstore-postgres [0.4.3]

- fix: Ensure schema creation only occurs if it doesn't already exist ([#20225](https://github.com/run-llama/llama_index/pull/20225))

### llama-index-tools-brightdata [0.2.1]

- docs: add api key claim instructions ([#20204](https://github.com/run-llama/llama_index/pull/20204))

### llama-index-tools-mcp [0.4.3]

- Added test case for issue 19211; no code change ([#20201](https://github.com/run-llama/llama_index/pull/20201))

### llama-index-utils-oracleai [0.3.1]

- Update llama-index-core dependency to 0.12.45 ([#20227](https://github.com/run-llama/llama_index/pull/20227))

### llama-index-vector-stores-lancedb [0.4.2]

- fix: FTS index recreation bug on every LanceDB query ([#20213](https://github.com/run-llama/llama_index/pull/20213))

## [2025-10-30]

### llama-index-core [0.14.7]

docs/src/content/docs/framework/CHANGELOG.md

Lines changed: 61 additions & 0 deletions
@@ -4,6 +4,67 @@ title: ChangeLog
<!--- generated changelog --->

## [2025-10-30]

### llama-index-core [0.14.7]

- Feat/serpex tool integration ([#20141](https://github.com/run-llama/llama_index/pull/20141))
- Fix outdated error message about setting LLM ([#20157](https://github.com/run-llama/llama_index/pull/20157))
- Fixing some recently failing tests ([#20165](https://github.com/run-llama/llama_index/pull/20165))
- Fix: update lock to latest workflow and fix issues ([#20173](https://github.com/run-llama/llama_index/pull/20173))
- fix: ensure full docstring is used in FunctionTool ([#20175](https://github.com/run-llama/llama_index/pull/20175))
- fix api docs build ([#20180](https://github.com/run-llama/llama_index/pull/20180))

### llama-index-embeddings-voyageai [0.5.0]

- Updating the VoyageAI integration ([#20073](https://github.com/run-llama/llama_index/pull/20073))

### llama-index-llms-anthropic [0.10.0]

- feat: integrate anthropic with tool call block ([#20100](https://github.com/run-llama/llama_index/pull/20100))

### llama-index-llms-bedrock-converse [0.10.7]

- feat: Add support for Bedrock Guardrails streamProcessingMode ([#20150](https://github.com/run-llama/llama_index/pull/20150))
- bedrock structured output optional force ([#20158](https://github.com/run-llama/llama_index/pull/20158))

### llama-index-llms-fireworks [0.4.5]

- Update FireworksAI models ([#20169](https://github.com/run-llama/llama_index/pull/20169))

### llama-index-llms-mistralai [0.9.0]

- feat: mistralai integration with tool call block ([#20103](https://github.com/run-llama/llama_index/pull/20103))

### llama-index-llms-ollama [0.9.0]

- feat: integrate ollama with tool call block ([#20097](https://github.com/run-llama/llama_index/pull/20097))

### llama-index-llms-openai [0.6.6]

- Allow setting temp of gpt-5-chat ([#20156](https://github.com/run-llama/llama_index/pull/20156))

### llama-index-readers-confluence [0.5.0]

- feat(confluence): make SVG processing optional to fix pycairo install… ([#20115](https://github.com/run-llama/llama_index/pull/20115))

### llama-index-readers-github [0.9.0]

- Add GitHub App authentication support ([#20106](https://github.com/run-llama/llama_index/pull/20106))

### llama-index-retrievers-bedrock [0.5.1]

- Fixing some recently failing tests ([#20165](https://github.com/run-llama/llama_index/pull/20165))

### llama-index-tools-serpex [0.1.0]

- Feat/serpex tool integration ([#20141](https://github.com/run-llama/llama_index/pull/20141))
- add missing toml info ([#20186](https://github.com/run-llama/llama_index/pull/20186))

### llama-index-vector-stores-couchbase [0.6.0]

- Add Hyperscale and Composite Vector Indexes support for Couchbase vector-store ([#20170](https://github.com/run-llama/llama_index/pull/20170))

## [2025-10-26]

### llama-index-core [0.14.6]

llama-index-core/pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ dev = [
 
 [project]
 name = "llama-index-core"
-version = "0.14.7"
+version = "0.14.8"
 description = "Interface between LLMs and your data"
 authors = [{name = "Jerry Liu", email = "[email protected]"}]
 requires-python = ">=3.9,<4.0"

pyproject.toml

Lines changed: 2 additions & 2 deletions
@@ -40,7 +40,7 @@ classifiers = [
 ]
 dependencies = [
 "llama-index-cli>=0.5.0,<0.6 ; python_version > '3.9'",
-"llama-index-core>=0.14.7,<0.15.0",
+"llama-index-core>=0.14.8,<0.15.0",
 "llama-index-embeddings-openai>=0.5.0,<0.6",
 "llama-index-indices-managed-llama-cloud>=0.4.0",
 "llama-index-llms-openai>=0.6.0,<0.7",
@@ -70,7 +70,7 @@ maintainers = [
 name = "llama-index"
 readme = "README.md"
 requires-python = ">=3.9,<4.0"
-version = "0.14.7"
+version = "0.14.8"
 
 [project.scripts]
 llamaindex-cli = "llama_index.cli.command_line:main"
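The tightened pin above can be sanity-checked with the third-party `packaging` library; this is a sketch illustrating the constraint's semantics, not part of the commit:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The new llama-index-core constraint from the diff above.
spec = SpecifierSet(">=0.14.8,<0.15.0")

# 0.14.8 satisfies the pin; the previous 0.14.7 no longer does,
# and the next minor release (0.15.0) stays excluded.
print(Version("0.14.8") in spec)  # True
print(Version("0.14.7") in spec)  # False
print(Version("0.15.0") in spec)  # False
```

The upper bound matters here because 0.x minor bumps in this ecosystem routinely carry breaking changes.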
