forked from postgres/postgres
16->17 #2
Open
pmp-p wants to merge 3,076 commits into REL_16_STABLE from REL_17_STABLE
Conversation
The last section of pg_createsubscriber used the terms "publication-name", "replication-slot-name", and "subscription-name". These terms are not defined on the page, which was confusing, and the intention is clearly to refer to the values one would give to the options --publication, --subscription and --replication-slot. Let's simplify the documentation by mentioning the option switches, instead of these terms. Reported-by: Christophe Courtois Author: Shubham Khanna Reviewed-by: Vignesh C, Peter Smith Discussion: https://postgr.es/m/[email protected] Backpatch-through: 17
Commit da952b4 unintentionally reverted the SQL-visible part of commit 14a9101, which breaks queries joining pg_wait_events with pg_stat_activity. Remove the suffix again. Backpatch to 17. Reported-by: Christophe Courtois <[email protected]> Author: Bertrand Drouvot <[email protected]> Discussion: https://postgr.es/m/18728-450924477056a339%40postgresql.org Discussion: https://postgr.es/m/[email protected]
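For reference, a sketch of the kind of join these two views are meant to support and that the column-name change broke:

    SELECT a.pid, a.wait_event_type, a.wait_event, w.description
      FROM pg_stat_activity a
      JOIN pg_wait_events w
        ON a.wait_event_type = w.type AND a.wait_event = w.name
     WHERE a.wait_event IS NOT NULL;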
The validate_sync_standby_slots subroutine requires an LWLock, so it cannot run in processes without PGPROC; skip it there to avoid a crash. This replaces the current test for ReplicationSlotCtl being not null, which appears to be a solution for the same problem but less general. I also rewrote a related comment that mentioned ReplicationSlotCtl in StandbySlotsHaveCaughtup. This code came in with commit bf279dd; backpatch to 17. Reported-by: Gabriele Bartolini <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Reviewed-by: Zhijie Hou <[email protected]> Discussion: https://postgr.es/m/[email protected]
parallel_vacuum_reset_dead_items used a local variable to hold a pointer from the passed vacrel, purely as a shorthand. This pointer was later freed and a new allocation was made and stored to the struct. Then the local pointer was mistakenly referenced again. This apparently happened not to break anything since the freed chunk would have been put on the context's freelist, so it was accidentally the same pointer anyway, in which case the DSA handle was correctly updated. The minimal fix is to change two places so they access dead_items through the vacrel. This coding style is a maintenance hazard, so while at it get rid of most other similar usages, which were inconsistently used anyway. Analysis and patch by Vallimaharajan G, with further defensive coding by me Backpatch to v17, when TidStore came in Discussion: https://postgr.es/m/[email protected]
The corresponding read-only server settings were removed in PG16. See commit b0f6c43. Author: Pierre Giraud <[email protected]> Discussion: https://www.postgresql.org/message-id/flat/a75a2fb0-f4b3-4c0c-be3d-7a62d266d760%40dalibo.com
These format codes produce or consume strings of digits, so they should be labeled with is_digit = true, but they were not. This has effect in only one place, where is_next_separator() is checked to see if the preceding format code should slurp up all the available digits. Thus, with a format such as '...SSFF3' with remaining input '12345', the 'SS' code would consume all five digits (and then complain about seconds being out of range) when it should eat only two digits. Per report from Nick Davies. This bug goes back to d589f94 where the FFn codes were introduced, so back-patch to v13. Discussion: https://postgr.es/m/AM8PR08MB6356AC979252CFEA78B56678B6312@AM8PR08MB6356.eurprd08.prod.outlook.com
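Based on the report described above, a query of this shape exercises the behavior; before the fix SS consumed all five digits and failed with seconds out of range, while with FF3 marked as a digit code SS takes only two:

    SELECT to_timestamp('12345', 'SSFF3');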
Yoran Heling reported a case where a data type could be dropped while references to its OID remain behind in pg_amproc. This causes getObjectDescription to fail, which blocks dropping the operator family (since our DROP code likes to construct descriptions of everything it's dropping). The proper fix for this requires adding more pg_depend entries. But to allow DROP to go through with already-corrupt catalogs, tweak getObjectDescription to print "???" for the type instead of failing when it processes such an entry. I changed the logic for pg_amop similarly, for consistency, although it is not known that the problem can manifest in pg_amop. Per report from Yoran Heling. Back-patch to all supported branches (although the problem may be unreachable in v13). Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021
Usually an entry in pg_amop or pg_amproc does not need a dependency on its amoplefttype/amoprighttype/amproclefttype/amprocrighttype types, because there is an indirect dependency via the argument types of its referenced operator or procedure, or via the opclass it belongs to. However, for some support procedures in some index AMs, the argument types of the support procedure might not mention the column data type at all. Also, the amop/amproc entry might be treated as "loose" in the opfamily, in which case it lacks a dependency on any particular opclass; or it might be a cross-type entry having a reference to a datatype that is not its opclass' opcintype. The upshot of all this is that there are cases where a datatype can be dropped while leaving behind amop/amproc entries that mention it, because there is no path in pg_depend showing that those entries depend on that type. Such entries are harmless in normal activity, because they won't get used, but they cause problems for maintenance actions such as dropping the operator family. They also cause pg_dump to produce bogus output. The previous commit put a band-aid on the DROP OPERATOR FAMILY failure, but a real fix is needed. To fix, add pg_depend entries showing that a pg_amop/pg_amproc entry depends on its lefttype/righttype. To avoid bloating pg_depend too much, skip this if the referenced operator or function has that type as an input type. (I did not bother with considering the possible indirect dependency via the opclass' opcintype; at least in the reported case, that wouldn't help anyway.) Probably, the reason this has escaped notice for so long is that add-on datatypes and relevant opclasses/opfamilies are usually packaged as extensions nowadays, so that there's no way to drop a type without dropping the referencing opclasses/opfamilies too. Still, in the absence of pg_depend entries there's nothing that constrains DROP EXTENSION to drop the opfamily entries before the datatype, so it seems possible for a DROP failure to occur anyway. The specific case that was reported doesn't fail in v13, because v13 prefers to attach the support procedure to the opclass not the opfamily. But it's surely possible to construct other edge cases that do fail in v13, so patch that too. Per report from Yoran Heling. Back-patch to all supported branches. Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021
Quals on monotonic window functions allow evaluation of the top-level WindowAgg node to be short-circuited, since the WindowAgg run condition can mean there's no need to evaluate subsequent window function results in the same partition once the run condition becomes false. However, in some cases it was possible that the executor would use stale results from the previous invocation of the window function. A fix for this was partially done by a5832722, but that commit only fixed the issue for non-top-level WindowAgg nodes. I mistakenly thought that the top-level WindowAgg didn't have this issue, but Jayesh's example case clearly shows that's incorrect. At the time, I also thought that this only affected 32-bit systems as all window functions which then supported run conditions returned BIGINT, however, that's wrong as ExecProject is still called and that could cause evaluation of any other window function belonging to the same WindowAgg node, one of which may return a byref type. The only queries affected by this are WindowAggs with a "Run Condition" which contains at least one window function with a byref result type, such as lead() or lag() on a byref column. The window clause must also contain a PARTITION BY clause (without a PARTITION BY, execution of the WindowAgg stops immediately when the run condition becomes false and there's no risk of using the stale results). Reported-by: Jayesh Dehankar Discussion: https://postgr.es/m/[email protected] Backpatch-through: 15
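An illustrative query of the affected shape, with a hypothetical table and columns: the monotonic row_number() supplies the run condition, lead() over a text column supplies the byref result, and a PARTITION BY is present:

    SELECT *
      FROM (SELECT region,
                   amount,
                   row_number() OVER w AS rn,
                   lead(note) OVER w   AS next_note
              FROM sales
            WINDOW w AS (PARTITION BY region ORDER BY amount)) AS s
     WHERE rn <= 3;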
818119a has introduced the "generation" concept in pgstats entries, incremented a counter when a pgstats entry is reinitialized, but it did not count on the fact that backends still holding local references to such entries need to be refreshed if the cache age is outdated. The previous logic only updated local references when an entry was dropped, but it needs also to consider entries that are reinitialized. This matters for replication slot stats (as well as custom pgstats kinds in 18~), where concurrent drops and creates of a slot could cause incorrect stats to be locally referenced. This would lead to an assertion failure at shutdown when writing out the stats file, as the backend holding an outdated local reference, despite being the last process holding a reference to the stats entry, would not be able to drop that entry during its shutdown sequence. The checkpointer was then complaining about such an entry late in the shutdown sequence, after the shutdown checkpoint is finished with the control file updated, causing the stats file to not be generated. In non-assert builds, the entry would just be skipped with the stats file written. Note that only logical replication slots use statistics. A test case based on TAP is added to test_decoding, where a persistent connection peeking at a slot's data is kept with concurrent drops and creates of the same slot. This is based on the isolation test case that Anton sent. As it requires a node shutdown with a check to make sure that the stats file is written with this specific sequence of events, TAP is used instead. Reported-by: Anton A. Melnikov Reviewed-by: Bertrand Drouvot Discussion: https://postgr.es/m/[email protected] Backpatch-through: 15
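A rough sketch of the sequence the new TAP test is built around (the slot name is illustrative):

    -- session 1: keep a connection around, peeking at the slot's changes
    SELECT pg_create_logical_replication_slot('regression_slot', 'test_decoding');
    SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);
    -- session 2: concurrently drop and re-create a slot with the same name
    SELECT pg_drop_replication_slot('regression_slot');
    SELECT pg_create_logical_replication_slot('regression_slot', 'test_decoding');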
pgstat_write_statsfile() discards any entries marked as dropped from being written to the stats file at shutdown, and also included an assertion based on the same condition. The intention of the assertion is to track that no pgstats entries should be left around as terminating backends should drop any entries they still hold references on before the stats file is written by the checkpointer, and it is not worth taking down the server in this case if there is a bug making that possible. Let's improve the comment of this area to document clearly what's intended. Based on a discussion with Bertrand Drouvot and Anton A. Melnikov. Author: Bertrand Drouvot Discussion: https://postgr.es/m/[email protected] Backpatch-through: 15
Our parallel-mode code only works when we are executing a query in full, so ExecutePlan must disable parallel mode when it is asked to do partial execution. The previous logic for this involved passing down a flag (variously named execute_once or run_once) from callers of ExecutorRun or PortalRun. This is overcomplicated, and unsurprisingly some of the callers didn't get it right, since it requires keeping state that not all of them have handy; not to mention that the requirements for it were undocumented. That led to assertion failures in some corner cases. The only state we really need for this is the existing QueryDesc.already_executed flag, so let's just put all the responsibility in ExecutePlan. (It could have been done in ExecutorRun too, leading to a slightly shorter patch -- but if there's ever more than one caller of ExecutePlan, it seems better to have this logic in the subroutine than the callers.) This makes those ExecutorRun/PortalRun parameters unnecessary. In master it seems okay to just remove them, returning the API for those functions to what it was before parallelism. Such an API break is clearly not okay in stable branches, but for them we can just leave the parameters in place after documenting that they do nothing. Per report from Yugo Nagata, who also reviewed and tested this patch. Back-patch to all supported branches. Discussion: https://postgr.es/m/[email protected]
Follow-up commit to a9d58bf. Backpatch down to v16 where this was added in order to keep the code consistent for future backpatches. Author: Tofig Aliev <[email protected]> Reviewed-by: Daniel Gustafsson <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 16
It looks like the example case was once modified to increase the number of rows but the EXPLAIN ANALYZE output wasn't updated to reflect that. Also adjust the text which discusses the index sizes. With the example table size, the bloom index isn't quite 8 times more space efficient than the btree indexes. Discussion: https://postgr.es/m/CAApHDvovx8kQ0=HTt85gFDAwmTJHpCgiSvRmQZ_6u_g-vQYM_w@mail.gmail.com Backpatch-through: 13, all supported versions
When #include'ing radixtree.h with RT_SHMEM, compiler errors could be raised due to missing declarations of some types and functions. This commit also removes the inclusion of postgres.h since it's against our usual convention. Backpatch to v17, where radixtree.h was introduced. Reviewed-by: Heikki Linnakangas, Álvaro Herrera Discussion: https://postgr.es/m/CAD21AoCU9YH%2Bb9Rr8YRw7UjmB%3D1zh8GKQkWNiuN9mVhMvkyrRg%40mail.gmail.com Backpatch-through: 17
This routine documented that "iterations" would use a default value if set to 0 by the caller. However, the iteration count should always be set by the caller to a value strictly greater than 0, as documented by an assertion. Oversight in b577743, which made the iteration count of SCRAM configurable. Author: Matheus Alcantara Discussion: https://postgr.es/m/[email protected] Backpatch-through: 16
The GUC assign and check hooks used "assign_timezone_abbreviations", which was incorrect. Issue noticed while browsing this area of the code, introduced in 0a20ff5. Reviewed-by: Tom Lane Discussion: https://postgr.es/m/[email protected] Backpatch-through: 16
Since commit 97550c0, these failed with "PANIC: proc_exit() called in child process" due to uninitialized or stale MyProcPid. That was reachable if close() failed in ClosePostmasterPorts() or setlocale(category, "C") failed, both unlikely. Back-patch to v13 (all supported versions). Discussion: https://postgr.es/m/[email protected]
On failure, the pg_upgrade log files are automatically appended to the test log file, but the information reported was inconsistent. A header, with the log file name, was reported with note(), while the log contents and a footer used print(), making it harder to diagnose failures when these are split into console output and test log file because the pg_upgrade log file path in the header may not be included in the test log file. The output is now consolidated so that the header uses print() rather than note(). An extra note() is added to inform that the contents of a pg_upgrade log file are appended to the test log file. The diffs from the regression test suite and dump files all use print() to show their contents on failure. Author: Joel Jacobson Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/[email protected] Backpatch-through: 15
This reverts commit 562bee0. We received a report from the field about this change in behavior, so it seems best to revert this commit and to add proper multibyte-aware truncation as a follow-up exercise. Fixes bug #18711. Reported-by: Adam Rauch Reviewed-by: Tom Lane, Bertrand Drouvot, Bruce Momjian, Thomas Munro Discussion: https://postgr.es/m/18711-7503ee3e449d2c47%40postgresql.org Backpatch-through: 17
If an owned sequence is considered interesting, force its owning table to be marked interesting too. This ensures, in particular, that we'll fetch the owning table's column names so we have the data needed for ALTER TABLE ... ADD GENERATED. Previously there were edge cases where pg_dump could get SIGSEGV due to not having filled in the column names. (The known case is where the owning table has been made part of an extension while its identity sequence is not a member; but there may be others.) Also, if it's an identity sequence, force its dumped-components mask to exactly match the owning table: dump definition only if we're dumping the table's definition, dump data only if we're dumping the table's data, etc. This generalizes the code introduced in commit b965f26 that set the sequence's dump mask to NONE if the owning table's mask is NONE. That's insufficient to prevent failures, because for example the table's mask might only request dumping ACLs, which would lead us to still emit ALTER TABLE ADD GENERATED even though we didn't create the table. It seems better to treat an identity sequence as though it were an inseparable aspect of the table, matching the treatment used in the backend's dependency logic. Perhaps this policy needs additional refinement, but let's wait to see some field use-cases before changing it further. While here, add a comment in pg_dump.h warning against writing tests like "if (dobj->dump == DUMP_COMPONENT_NONE)", which was a bug in this case. There is one other example in getPublicationNamespaces, which if it's not a bug is at least remarkably unclear and under-documented. Changing that requires a separate discussion, however. Per report from Artur Zakirov. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAKNkYnwXFBf136=u9UqUxFUVagevLQJ=zGd5BsLhCsatDvQsKQ@mail.gmail.com
The @extschema:name@ feature added by 72a5b1f allows us to make earthdistance's references to the cube extension fully search-path-secure, so long as all those references are resolved at extension installation time not runtime. To do that, we must convert earthdistance's SQL functions to the new SQL-standard style; but we wanted to do that anyway. The functions can be updated in our customary style by running CREATE OR REPLACE FUNCTION in an extension update script. However, there's still problems in the "CREATE DOMAIN earth" command: its references to cube functions could be captured by hostile objects in earthdistance's installation schema, if that's not where the cube extension is. Worse, the reference to the cube type itself as the domain's base could be captured, and that's not something we could fix after-the-fact in the update script. What I've done about that is to change the "CREATE DOMAIN earth" command in the base script earthdistance--1.1.sql. Ordinarily, changing a released extension script is forbidden; but I think it's okay here since the results of successful (non-trojaned) script execution will be identical to before. A good deal of care is still needed to make the extension's scripts proof against search-path-based attacks. We have to make sure that all the function and operator invocations have exact argument-type matches, to forestall attacks based on supplying a better match. Fortunately earthdistance isn't very big, so I've just gone through it and inspected each call to be sure of that. The only actual code changes needed were to spell all floating-point constants in the style '-1'::float8, rather than depending on runtime type conversions and/or negations. (I'm not sure that the shortcuts previously used were attackable, but removing run-time effort is a good thing anyway.) I believe that this fixes earthdistance enough that we could mark it trusted and remove the warnings about it that were added by 7eeb1d9; but I've not done that here. The primary reason for dealing with this now is that we've received reports of pg_upgrade failing for databases that use earthdistance functions in contexts like generated columns. That's a consequence of 2af07e2 having restricted the search_path used while evaluating such expressions. The only way to fix that is to make the earthdistance functions independent of run-time search_path. This patch is very much nicer than the alternative of attaching "SET search_path" clauses to earthdistance's functions: it is more secure and doesn't create a run-time penalty. Therefore, I've chosen to back-patch this to v16 where @extschema:name@ was added. It won't help unless users update to 16.7 and issue "ALTER EXTENSION earthdistance UPDATE" before upgrading to 17, but at least there's now a way to deal with the problem without manual intervention in the dump/restore process. Tom Lane and Ronan Dunklau Discussion: https://postgr.es/m/3316564.aeNJFYEL58@aivenlaptop Discussion: https://postgr.es/m/[email protected]
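In practice, that means running the extension update described above on 16.7 or later, before a pg_upgrade to 17:

    ALTER EXTENSION earthdistance UPDATE;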
The documentation in wal.sgml explains that old WAL files cannot be removed or recycled until they are archived (when WAL archiving is used) or replicated (when using replication slots). However, it did not mention that, similarly, old WAL files are also kept until they are summarized if WAL summarization is enabled. This commit adds that clarification to the documentation. Back-patch to v17 where WAL summarization was added. Author: Fujii Masao Reviewed-by: Michael Paquier Discussion: https://postgr.es/m/[email protected]
An \if command appearing within a false (not-to-be-executed) \if branch was incorrectly treated the same as \elif. This could allow statements within the inner \if to be executed when they should not be. Also the missing inner \if stack entry would result in an assertion failure (in assert-enabled builds) when the final \endif is reached. Report and patch by Michail Nikolaev. Back-patch to all supported branches. Discussion: https://postgr.es/m/CANtu0oiA1ke=SP6tauhNqkUdv5QFsJtS1p=aOOf_iU+EhyKkjQ@mail.gmail.com
The test failed if you ran the regression tests with TEMP_CONFIG with recovery_min_apply_delay = '500ms'. Fix the race condition by waiting for the transaction to be applied in the replica, like in a few other tests. The failing test was introduced in commit cbfbda7. Backpatch to all supported versions like that commit (except v12, which is no longer supported). Reported-by: Alexander Lakhin Discussion: https://www.postgresql.org/message-id/[email protected]
Two places in the documentation suggest B-tree is the only index access method allowing parallel builds. Commit b437571 added parallel builds for BRIN too, but failed to update the docs. So fix that, and backpatch to 17, where parallel BRIN builds were introduced. Author: Egor Rogov Backpatch-through: 17 Discussion: https://postgr.es/m/[email protected]
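For example, assuming default planner behavior and a hypothetical table, a BRIN build can now use parallel workers much like a btree build:

    SET max_parallel_maintenance_workers = 4;
    CREATE INDEX accounts_created_brin ON accounts USING brin (created_at);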
The test was creating both the dumps to compare from the same database on the same node, so it would never detect any mismatches when comparing the logical dumps of the two servers. Fixing this issue has revealed that there is a difference in the dumps: the tablespaces paths are different. This commit uses compare_text() with a custom comparison function to erase the difference (slightly tweaked to be able to work with WIN32 and non-WIN32 paths). This way, the non-relevant parts of the tablespace path are ignored from the check with the basic structure of the query string still compared. Author: Dagfinn Ilmari Mannsåker Discussion: https://postgr.es/m/[email protected] Backpatch-through: 17
Commit 84db9a0 has added the incorrect link to 'initial data synchronization'. It was a subsection of Row Filter and didn't provide the required information. Author: Peter Smith Reviewed-by: Vignesh C, Pavel Luzanov Backpatch-through: 17, where it was introduced Discussion: https://postgr.es/m/CAHut+PtnA4DB_pcv4TDr4NjUSM1=P2N_cuZx5DX09k7LVmaqUA@mail.gmail.com
Commit b437571 allowed parallel builds for BRIN, but left behind two comments claiming only btree indexes support parallel builds. Reported by Egor Rogov, along with similar issues in SGML docs. Backpatch to 17, where parallel builds for BRIN were introduced. Reported-by: Egor Rogov Backpatch-through: 17 Discussion: https://postgr.es/m/[email protected]
Commit dae761a modified brin_page_items() to return the new "empty" flag for each BRIN range. But the new output parameter was added in the middle, which may cause crashes when using the new binary with old function definition. The ideal solution would be to introduce API versioning similar to what pg_stat_statements does, but it's too late for that as PG17 was already released (so we can't introduce a new extension version). We could do something similar in brin_page_items() by checking the number of output columns (and ignoring the new flag), but it doesn't seem very nice. Instead, simply error out and suggest updating the extension to the latest version. pageinspect is a superuser-only extension, and there's not much reason to run an older version. Moreover, there's a precedent for this approach in 691e8b2. Reported by Ľuboslav Špilák, investigation and patch by me. Backpatch to 17, same as dae761a. Reported-by: Ľuboslav Špilák Reviewed-by: Michael Paquier, Hayato Kuroda, Peter Geoghegan Backpatch-through: 17 Discussion: https://postgr.es/m/VI1PR02MB63331C3D90E2104FD12399D38A5D2@VI1PR02MB6333.eurprd02.prod.outlook.com Discussion: https://postgr.es/m/flat/[email protected]
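With the new binary in place, the expectation is simply to refresh the extension before calling the function again (the index name and block number below are placeholders):

    ALTER EXTENSION pageinspect UPDATE;
    SELECT * FROM brin_page_items(get_raw_page('my_brin_idx', 2), 'my_brin_idx');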
If a GENERATED column is declared to have a domain data type where the domain's constraints disallow null values, INSERT commands failed because we built a targetlist that included coercing a null constant to the domain's type. The failure occurred even when the generated value would have been perfectly OK. This is adjacent to the issues fixed in 0da39aa, but we didn't notice for lack of testing a domain with such a constraint. We aren't going to use the result of the targetlist entry for the generated column --- ExecComputeStoredGenerated will overwrite it. So it's not really necessary that it have the exact datatype of the generated column. This patch fixes the problem by changing the targetlist entry to be a null Const of the domain's base type, which should be sufficiently legal. (We do have to tweak ExecCheckPlanOutput to accept the situation, though.) This has been broken since we implemented generated columns. However, this patch only applies easily as far back as v14, partly because I (tgl) only carried 0da39aa back that far, but mostly because v14 significantly refactored the handling of INSERT/UPDATE targetlists. Given the lack of field complaints and the short remaining support lifetime of v13, I judge the cost-benefit ratio not good for devising a version that would work in v13. Reported-by: jian he <[email protected]> Author: jian he <[email protected]> Reviewed-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/CACJufxG59tip2+9h=rEv-ykOFjt0cbsPVchhi0RTij8bABBA0Q@mail.gmail.com Backpatch-through: 14
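A minimal sketch of the reported shape, with hypothetical names: the domain rejects NULL, yet the generated value itself is perfectly valid:

    CREATE DOMAIN positive_int AS int NOT NULL CHECK (VALUE > 0);
    CREATE TABLE items (
        qty   int,
        total positive_int GENERATED ALWAYS AS (qty * 2) STORED
    );
    INSERT INTO items (qty) VALUES (5);  -- failed before the fix even though 10 is valid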
We'd try to drop the partitions of a partitioned index separately, which is disallowed by the backend, leading to an error during restore. While the error is harmless, it causes problems if you try to use --single-transaction mode. Fortunately, there seems no need to do a DROP at all, since the partition will go away silently when we drop either the parent index or the partition's table. So just make the DROP conditional on not being a partition. Reported-by: jian he <[email protected]> Author: jian he <[email protected]> Reviewed-by: Pavel Stehule <[email protected]> Reviewed-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/CACJufxF0QSdkjFKF4di-JGWN6CSdQYEAhGPmQJJCdkSZtd=oLg@mail.gmail.com Backpatch-through: 13
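For illustration (index names below are the defaults PostgreSQL would generate), the backend behavior that makes a separate DROP unnecessary:

    CREATE TABLE measurements (city int, v int) PARTITION BY LIST (city);
    CREATE TABLE measurements_1 PARTITION OF measurements FOR VALUES IN (1);
    CREATE INDEX measurements_v_idx ON measurements (v);
    DROP INDEX measurements_1_v_idx;  -- rejected: a partition of an index cannot be dropped on its own
    DROP INDEX measurements_v_idx;    -- the partition's index goes away along with its parent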
Backpatch to 17, where the line was added. Reported by Noboru Saito while he was working on translating the file into Japanese. Discussion: https://postgr.es/m/20250417.203047.1321297410457834775.ishii%40postgresql.org Reported-by: Noboru Saito <[email protected]> Reviewed-by: Daniel Gustafsson <[email protected]> Backpatch-through: 17
The original intent in heap_page_items() was to return nulls, not throw an error or crash, if an item was sufficiently corrupt that we couldn't safely extract data from it. However, commit d6061f8 utterly missed that memo, and not only put in an un-length-checked copy of the tuple's data section, but also managed to break the check on sane nulls-bitmap length. Either mistake could possibly lead to a SIGSEGV crash if the tuple is corrupt. Bug: #18896 Reported-by: Dmitry Kovalenko <[email protected]> Author: Dmitry Kovalenko <[email protected]> Reviewed-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 13
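For context, the function in question is the pageinspect entry point sketched below; the intent described above is that a sufficiently corrupt item yields NULLs here rather than a crash (table name is a placeholder):

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT lp, lp_flags, t_xmin, t_infomask2, t_data
      FROM heap_page_items(get_raw_page('mytable', 0));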
1349d27 added support so that aggregate functions with an ORDER BY or DISTINCT clause could make use of presorted inputs to avoid an implicit sort within nodeAgg.c. That commit failed to consider that a FILTER clause may exist that filters rows before the aggregate function arguments are evaluated. That can be problematic if an aggregate argument contains an expression which could error out during evaluation. It's perfectly valid to want to have a FILTER clause which eliminates such values, and with the pre-sorted path added in 1349d27, it was possible that the planner would produce a plan with a Sort node above the Aggregate to perform the sort on the aggregate's arguments long before the Aggregate node would filter out the non-matching values. Here we fix this by inspecting ORDER BY / DISTINCT aggregate functions which have a FILTER clause to see if the aggregate's arguments are anything more complex than a Var or a Const. Evaluating these isn't going to cause an error. If we find any non-Var, non-Const parameters then the planner will now opt to perform the sort in the Aggregate node for these aggregates, i.e. disable the presorted aggregate optimization. An alternative fix would have been to completely disallow the presorted optimization for Aggrefs with any FILTER clause, but that wasn't done as that could cause large performance regressions for queries that see significant gains from 1349d27 due to presorted results coming in from an Index Scan. Backpatch to 16, where 1349d27 was introduced Author: David Rowley <[email protected]> Reported-by: Kaimeh <[email protected]> Diagnosed-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/CAK-%2BJz9J%3DQ06-M7cDJoPNeYbz5EZDqkjQbJnmRyQyzkbRGsYkA%40mail.gmail.com Backpatch-through: 16
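A sketch of the kind of query affected, with hypothetical data: the FILTER is intended to keep 1/x from ever being evaluated at x = 0, but the sort on the aggregate's argument could evaluate it for that row before the FILTER removes it:

    CREATE TABLE vals (x int);
    INSERT INTO vals VALUES (0), (1), (2);
    SELECT sum(DISTINCT 1 / x) FILTER (WHERE x <> 0) FROM vals;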
Commit 7102070 fixed a similar bug, but it missed the case of database-wide ANALYZE ("use_own_xacts" mode). Commit a07e03f changed consequences from silent discard of a pg_class stats (relpages et al.) update to ERROR "tuple to be updated was already modified". Losing a relpages update of an ON COMMIT DELETE ROWS table was negligible, but a COMMIT-time error isn't negligible. Back-patch to v13 (all supported versions). Reported-by: Richard Guo <[email protected]> Reported-by: Robins Tharakan <[email protected]> Discussion: https://postgr.es/m/CAMbWs4-XwMKMKJ_GT=p3_-_=j9rQSEs1FbDFUnW9zHuKPsPNEQ@mail.gmail.com Backpatch-through: 13
v14 commit 1f95181 and its v13 equivalent caused timing-dependent failures in archive recovery, at restartpoints. The symptom was "invalid magic number 0000 in log segment X, offset 0", "unexpected pageaddr X in log segment Y, offset 0" [X < Y], or an assertion failure. Commit 3635a0a and predecessors back-patched v15 changes to fix that. This test reproduces the problem probabilistically, typically in less than 1000 iterations of the test. Hence, buildfarm and CI runs would have surfaced enough failures to get attention within a day. Reported-by: Arun Thirupathi <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 13
Author: Noboru Saito <[email protected]> Discussion: https://postgr.es/m/CAAM3qnJtv5YbjpwDfVOYN2gZ9zGSLFM1UGJgptSXmwfifOZJFQ@mail.gmail.com Backpatch-through: 17
This injection point was named "AtEOXact_Inval-with-transInvalInfo", not respecting the implied naming convention that injection points should use lower-case characters, with terms separated by dashes. All the other points defined in the tree follow this style, so let's be more consistent. Author: Hayato Kuroda <[email protected]> Reviewed-by: Aleksander Alekseev <[email protected]> Discussion: https://postgr.es/m/OSCPR01MB14966E14C1378DEE51FB7B7C5F5B32@OSCPR01MB14966.jpnprd01.prod.outlook.com Backpatch-through: 17
The previous text was a little clumsy. Here we improve that. Author: David Rowley <[email protected]> Reported-by: Noboru Saito <[email protected]> Reviewed-by: David G. Johnston <[email protected]> Discussion: https://postgr.es/m/CAAM3qnJtv5YbjpwDfVOYN2gZ9zGSLFM1UGJgptSXmwfifOZJFQ@mail.gmail.com Backpatch-through: 13
All the injection points used in the tree have relied on an implied rule: their names should be made of lower-case characters, with dashes between the words used. This commit adds a light mention about that in the docs, encouraging the practice. Author: Hayato Kuroda <[email protected]> Reviewed-by: Aleksander Alekseev <[email protected]> Discussion: https://postgr.es/m/OSCPR01MB14966E14C1378DEE51FB7B7C5F5B32@OSCPR01MB14966.jpnprd01.prod.outlook.com Backpatch-through: 17
This assertion, based on pending_since (a timestamp used to prevent stats reports from being too frequent, or set when a partial flush happens), is reached when it is found that no data can be flushed but a previous call of pgstat_report_stat() determined that some stats data has been found as in need of a flush. So pending_since is set when some stats data is pending (in non-force mode) or if report attempts are too frequent, and reset to 0 once all stats have been flushed. Since 5cbbe70, WAL senders have begun to report their stats on a periodic basis for IO stats in v16~ and backend stats on HEAD, creating some friction with the concurrent pgstat_report_stat() calls that can happen in the context of a WAL sender (shutdown callback doing a final report or backend-related code paths). This problem is the cause of spurious failures in the TAP tests. In theory, this assertion can be also reached in v15, even if that's very unlikely. For example, a process, say a background worker, could do periodic and direct stats flushes with concurrent calls of pgstat_report_stat() that could cause conflicting values of pending_since. This can be done with WAL or SLRU stats flushes using pgstat_flush_wal() or pgstat_slru_flush(). HEAD makes this situation easier to happen with custom cumulative stats. This commit removes the assertion altogether, per discussion, as it is more useful to keep the state of things as they are for the WAL sender. The assertion could use a special state based on for example am_walsender, but I doubt that this would be meaningful in the long run based on the other arguments raised while discussing this issue. Reported-by: Tom Lane <[email protected]> Reported-by: Andres Freund <[email protected]> Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/dwrkeszz6czvtkxzr5mqlciy652zau5qqnm3cp5f3p2po74ppk@omg4g3cc6dgq Backpatch-through: 15
Commit 3f28b2f tried to ensure that the replication origin shouldn't be advanced in case of an ERROR in the apply worker, so that it can request the same data again after restart. However, it is possible that an ERROR was caught and handled by a (say PL/pgSQL) function, and the apply worker continues to apply further changes, in which case, we shouldn't reset the replication origin. Ensure to reset the origin only when the apply worker exits after an ERROR. Commit 3f28b2f added new function geterrlevel, which we removed in HEAD as part of this commit, but kept it in backbranches to avoid breaking any applications. A separate case can be made to have such a function even for HEAD. Reported-by: Shawn McCoy <[email protected]> Author: Hayato Kuroda <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Reviewed-by: vignesh C <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Backpatch-through: 16, where it was introduced Discussion: https://postgr.es/m/CALsgZNCGARa2mcYNVTSj9uoPcJo-tPuWUGECReKpNgTpo31_Pw@mail.gmail.com
One place in hash_create() used DynaHashAlloc() as a convenient shorthand for MemoryContextAlloc(). That was fine when it was written, but it stopped being fine when 9c911ec changed DynaHashAlloc() to use MCXT_ALLOC_NO_OOM (mea culpa). Change the code to call plain MemoryContextAlloc() as intended. I think that this bug may be unreachable in practice, since we now always create AllocSets with some space already allocated, so that an OOM failure here for a non-shared hash table should be impossible (with a hash table name of reasonable length anyway). And there aren't enough shared hash tables to make a crash for one of those probable. Nonetheless it's clearly not operating as designed, so back-patch to v16 where 9c911ec came in. Reported-by: Maksim Korotkov <[email protected]> Author: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 16
Author: Shlok Kyal <[email protected]> Backpatch-through: 13 Discussion: https://postgr.es/m/CANhcyEXsObdjkjxEnq10aJumDpa5J6aiPzgTh_w4KCWRYHLw6Q@mail.gmail.com
During logical decoding, we advance catalog_xmin of logical slots too early in fast_forward mode, resulting in required catalog data being removed by vacuum. This mode is normally used to advance the slot without processing the changes, but we still can't let the slot's xmin advance to an incorrect value. Commit f49a80c fixed a similar issue where the logical slot's catalog_xmin was getting advanced prematurely during non-fast-forward mode. During xl_running_xacts processing, instead of directly advancing the slot's xmin to the oldest running xid in the record, it allowed the xmin to be held back for snapshots that can be used for not-yet-replayed transactions, as those might consider older txns as running too. However, it missed the fact that the same problem can happen during fast_forward mode decoding, as we won't build a base snapshot in that mode, and the future call to get_changes from the same slot can miss seeing the required catalog changes leading to incorrect results. This commit allows building the base snapshot even in fast_forward mode to prevent the early advancement of xmin. Reported-by: Amit Kapila <[email protected]> Author: Zhijie Hou <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Reviewed-by: shveta malik <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Backpatch-through: 13 Discussion: https://postgr.es/m/CAA4eK1LqWncUOqKijiafe+Ypt1gQAQRjctKLMY953J79xDBgAg@mail.gmail.com Discussion: https://postgr.es/m/OS0PR01MB57163087F86621D44D9A72BF94BB2@OS0PR01MB5716.jpnprd01.prod.outlook.com
DST law changes in Chile: there is a new time zone America/Coyhaique for Chile's Aysén Region, to account for it changing to UTC-03 year-round and thus diverging from America/Santiago. Historical corrections for Iran. Backpatch-through: 13
Add a documentation warning to ts_headline() pointing out that, when working with untrusted input documents, the output is not guaranteed to be safe for direct inclusion in web pages. This is because, while it does remove some XML tags from the input, it doesn't remove all HTML markup, and so the result may be unsafe (e.g., it might permit XSS attacks). To guard against that, all HTML markup should be removed from the input, making it plain text, or the output should be passed through an HTML sanitizer. In addition, document precisely what the default text search parser recognises as valid XML tags, since that's what determines which XML tags ts_headline() will remove. Reported-by: Richard Neill <[email protected]> Author: Dean Rasheed <[email protected]> Reviewed-by: Noah Misch <[email protected]> Backpatch-through: 13
SQL "SET search_path = 'pg_catalog, pg_temp'" is silently equivalent to "SET search_path = pg_temp, pg_catalog, "pg_catalog, pg_temp"" instead of the intended "SET search_path = pg_catalog, pg_temp". (The intent was a two-element search path. With the single quotes, it instead specifies one element with a comma and a space in the middle of the element.) In addition to the SET statement, this affects SET clauses of CREATE FUNCTION, ALTER ROLE, and ALTER DATABASE. It does not affect the set_config() SQL function. Though the documentation did not show an insecure command, remove single quotes that could entice a reader to write an insecure command. Back-patch to v13 (all supported versions). Reported-by: Sven Klemm <[email protected]> Author: Sven Klemm <[email protected]> Backpatch-through: 13
As usual, the release notes for other branches will be made by cutting these down, but put them up for community review first.
It's --auto-features not --auto_features. Reported-by: Egor Chindyaskin <[email protected]> Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/[email protected] Backpatch-through: 16
For self-referencing foreign keys in partitioned tables, we weren't handling creation of pg_constraint rows during CREATE TABLE PARTITION AS as well as ALTER TABLE ATTACH PARTITION. This is an old bug -- mostly, we broke this in 614a406 while trying to fix it (so 12.13, 13.9, 14.6 and 15.0 and up all behave incorrectly). This commit reverts part of that with additional fixes for full correctness, and installs more tests to verify the parts we broke, not just the catalog contents but also the user-visible behavior. Backpatch to all live branches. In branches 13 and 14, commit 46a8c27 changed the behavior during DETACH to drop a FK constraint rather than trying to repair it, because the complete fix of repairing catalog constraints was problematic due to lack of previous fixes. For this reason, the test behavior in those branches is a bit different. However, as best as I can tell, the fix works correctly there. In release notes we have to recommend that all self-referencing foreign keys on partitioned tables be recreated if partitions have been created or attached after the FK was created, keeping in mind that violating rows might already be present on the referencing side. Reported-by: Guillaume Lelarge <[email protected]> Reported-by: Matthew Gabeler-Lee <[email protected]> Reported-by: Luca Vallisa <[email protected]> Discussion: https://postgr.es/m/CAECtzeWHCA+6tTcm2Oh2+g7fURUJpLZb-=pRXgeWJ-Pi+VU=_w@mail.gmail.com Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/CAAT=myvsiF-Attja5DcWoUWh21R12R-sfXECY2-3ynt8kaOqjw@mail.gmail.com
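The recommended recreation would look something like the following, with hypothetical table and constraint names (keeping in mind that violating rows might already be present on the referencing side):

    ALTER TABLE org_units
        DROP CONSTRAINT org_units_parent_fk,
        ADD CONSTRAINT org_units_parent_fk
            FOREIGN KEY (parent_id) REFERENCES org_units (id);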
Also adjust the phrasing in the comments. Author: Etsuro Fujita <[email protected]> Author: Heikki Linnakangas <[email protected]> Reviewed-by: Tender Wang <[email protected]> Reviewed-by: Gurjeet Singh <[email protected]> Reviewed-by: Michael Paquier <[email protected]> Discussion: https://postgr.es/m/CAPmGK17%3DPHSDZ%2B0G6jcj12buyyE1bQQc3sbp1Wxri7tODT-SDw%40mail.gmail.com Backpatch-through: 15
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: ff466b83eb6fbf9434ff087426546e3dc988135d
Start the file with static functions not specific to pe_test_vectors tests. This way, new tests can use them without disrupting the file's layout. Change report_result() PQExpBuffer arguments to plain strings. Back-patch to v13 (all supported versions), for the next commit. Reviewed-by: Masahiko Sawada <[email protected]> Backpatch-through: 13 Security: CVE-2025-4207
With GB18030 as source encoding, applications could crash the server via SQL functions convert() or convert_from(). Applications themselves could crash after passing unterminated GB18030 input to libpq functions PQescapeLiteral(), PQescapeIdentifier(), PQescapeStringConn(), or PQescapeString(). Extension code could crash by passing unterminated GB18030 input to jsonapi.h functions. All those functions have been intended to handle untrusted, unterminated input safely. A crash required allocating the input such that the last byte of the allocation was the last byte of a virtual memory page. Some malloc() implementations take measures against that, making the SIGSEGV hard to reach. Back-patch to v13 (all supported versions). Author: Noah Misch <[email protected]> Author: Andres Freund <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Backpatch-through: 13 Security: CVE-2025-4207
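For reference, the server-side entry points mentioned above; ASCII bytes are valid GB18030, so this particular input is harmless:

    SELECT convert_from('\x414243'::bytea, 'GB18030');        -- 'ABC'
    SELECT convert('\x414243'::bytea, 'GB18030', 'UTF8');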