Releases: dolthub/dolt
1.55.3
Merged PRs
dolt
- 9393: Fix 483 - rm already covered bats
Fixes #483
Removed an extra bats test.
- 9388: Ensure error checking is performed when generating statistics template
- 9384: Bh/copilot test
- 9383: go: clone: Fix dolt clone to work even when dolt startup logic detects that TMPDIR is not renamable to the local directory and it creates a .dolt/tmp directory.
- 9382: contribution test
Copy of #9349
- 9379: Fix 8853 warn local user persists
Fixes: #8853
On sql-server restart, `__dolt_local_user__` is dropped if it was persisted. If `__dolt_local_user__` is found persisted on restart, a warning is logged and the user is dropped.
- 9376: 9374 support mysql user mysql password env vars
Fixes: #9374
Add support for MYSQL_USER/MYSQL_PASSWORD environment variables.
- 9371: Fix 483 unable to checkout table(s)/branch(es) with same name
Fixes: #483
Local/remote refs take priority in Do What I Mean (DWIM) scenarios where `dolt` has to interpret ambiguous input in `checkout`. If a table and a tracking branch share the same name but no local branch exists, an error will occur. `--` can now be used to clear up ambiguity.
case 1: `dolt checkout <ref> -- [<tables>]`
`<ref>` must be a single valid tree; everything after `--` must be a table.
case 2: `dolt checkout -- [<tables>]`
This is case 1 but without `<ref>`.
case 3: `dolt checkout <something> [--]`
(a) If `<something>` is a commit, switch to that branch.
(b) If `<something>` isn't a commit, and either `--` is present or it isn't a table, no -t or -b is given, and there's a tracking branch of that name on exactly one remote (or on the specified remote), then this acts as shorthand to fork a local branch from that remote-tracking branch.
(c) Otherwise, if `--` is present, treat it like case 1.
(d) Otherwise, if it's a ref, treat it like case 1; if a table, treat it like case 2. If neither, fail.
case 4: `dolt checkout <something> <tables>`
The first argument must not be ambiguous. If a `<ref>`, follow case 1; if a table, follow case 2. Otherwise, fail.
- 9369: Dolt Stash CLI Migration
Part 2 of #7623
Migrates the `dolt stash` CLI command to use the `dolt_stash` stored procedure.
- 9361: Implemented `dolt_history_dolt_schemas` and `dolt_diff_dolt_schemas`
Fixes: #9360
- 9349: feat(import): validate primary keys early
NOTE: currently this is just checking primary keys, but the related issue also mentioned in #1083 references issues with number of args. I'm wondering if I should split that into two separate PRs, or if it's fine to do both - currently I only have the PK validation in place? Personally I would argue to separate them since they are slightly different, albeit similar. But happy to take either path.
Add early validation to check if specified primary keys exist in the import file's schema before processing rows. This prevents users from waiting for large files to be processed only to discover that their primary key column names are invalid.
Changes:
- Add `validatePrimaryKeysAgainstSchema` function to check primary key existence against the file schema
- Integrate validation into `newImportDataReader` for create operations
- Provide helpful error messages listing available columns when primary keys are not found
- Add unit tests covering various validation scenarios
- Add BATS integration tests for CSV, PSV, and large file scenarios
The validation only runs for create operations when primary keys are explicitly specified and no schema file is provided. This ensures fast failure while maintaining backward compatibility.
Before: Users waited minutes for large files to process before seeing "provided primary key not found" errors
After: Users get immediate feedback with helpful column suggestions
Closes: #1083
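The early-validation idea can be sketched in Python (a minimal illustration under stated assumptions, not dolt's actual Go implementation; the function name simply mirrors the PR's `validatePrimaryKeysAgainstSchema`):

```python
import csv
import io

def validate_primary_keys_against_schema(pk_columns, file_schema):
    """Fail fast if any requested primary key column is absent from the
    file's header, listing the available columns in the error."""
    missing = [pk for pk in pk_columns if pk not in file_schema]
    if missing:
        raise ValueError(
            f"primary key(s) {missing} not found; "
            f"available columns: {file_schema}"
        )

# Validate against just the CSV header, before reading any data rows.
data = io.StringIO("id,name,age\n1,alice,30\n2,bob,41\n")
header = next(csv.reader(data))
validate_primary_keys_against_schema(["id"], header)        # ok
try:
    validate_primary_keys_against_schema(["uuid"], header)  # fails fast
except ValueError as e:
    print(e)
```

The point is that only the header line is read before the check runs, which is what turns a minutes-long failure into an immediate one.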
- 9339: Utility method to list all system tables in a schema
Needed for doltgres, to put generated system tables in pg_catalog tables.
go-mysql-server
- 3043: fix missed case for nullable enums
This PR handles an edge case missed in dolthub/go-mysql-server#2985: we were incorrectly making nullable enum columns take the first enum value.
- 3040: backtick column names in check constraints
MySQL backticks column names within a check expression, preventing parser errors when round-tripping tables, so we should do the same.
fixes: #9343
- 3038: Fix schema for call to `hash.HashOf()` in `HashLookups`
The incorrect schema was used in the `hash.HashOf()` call. `n.Schema()` is the schema of the entire JoinNode; we just needed the schema of the key.
Test bump: dolthub/ld#20634
- 3037: Apply foreign key constraints to `UPDATE JOIN`
Modified UpdateJoin to be able to apply foreign key constraints.
Part of #9356
- 3036: Cache `REGEX`
We should be able to avoid recompiling the regex if the pattern and flags are static.
Addresses: #9366
Other reasons for potential slowness:
- REGEXP is more correct; it takes collations into account
- Variable-length TextStorage requires unwrapping
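The caching idea from PR 3036 can be sketched as follows (hypothetical Python illustration; GMS's actual implementation is in Go and is collation-aware):

```python
import re

_regex_cache = {}

def cached_regexp(pattern, flags=0):
    """Return a compiled regex, recompiling only on a cache miss.
    Safe only when pattern and flags are static for the expression."""
    key = (pattern, flags)
    compiled = _regex_cache.get(key)
    if compiled is None:
        compiled = re.compile(pattern, flags)
        _regex_cache[key] = compiled
    return compiled

# The second call reuses the compiled object instead of recompiling.
a = cached_regexp(r"^dolt_", re.IGNORECASE)
b = cached_regexp(r"^dolt_", re.IGNORECASE)
assert a is b
```

Keying on (pattern, flags) is what makes the optimization valid only for static patterns: a pattern computed per-row would defeat the cache.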
- 3034: Allow `drop trigger ...` when trigger is invalid
We should ignore parser errors when trying to drop triggers that are invalid.
fixes: #9359
- 3027: make `sql.HashOf()` collation aware
This PR adds type/collation information to `HashOf`. Additionally, it refactors `HashOf` to avoid import cycles and has `groupingKey` use the function.
fix for: #9049
doltgres fix: dolthub/doltgresql#1548
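The core requirement behind PR 3027 — values that compare equal under a collation must hash equal — can be illustrated in Python (a hypothetical sketch; GMS's real `HashOf` operates on `sql.Type` values in Go):

```python
import hashlib

def hash_of(value, case_insensitive=True):
    """Hash a string after normalizing it the way the collation compares it,
    so 'ABC' and 'abc' collide under a case-insensitive collation."""
    normalized = value.casefold() if case_insensitive else value
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Under a case-insensitive collation these must hash identically; otherwise
# hash-based IN (SUBQUERY) lookups silently miss matching rows (#9049).
assert hash_of("Dolt") == hash_of("dolt")
assert hash_of("Dolt", case_insensitive=False) != hash_of("dolt", case_insensitive=False)
```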
Closed Issues
- 9360: add `dolt_history_dolt_schemas` and `dolt_diff_dolt_schemas`
- 483: Unable to checkout table if branch name and table name are identical
- 4205: Preview conflicts without running `dolt merge`
- 1083: validate all import args before reading import rows
- 9343: Check constraint with column named `order` breaks inserts, updates, other operations
- 8853: Warn if Dolt can't create `__dolt_local_user__` account on startup
- 9374: Support MYSQL_USER and MYSQL_PASSWORD environment variables in Dolt Docker image
- 9049: `IN (SUBQUERY)` expressions ignore collation
- 9368: `dolt reset --hard` does not reset `dolt_schemas`
- 9366: REGEXP performance regression
- 3039: server package usage issues
Performance
Read Tests | MySQL | Dolt | Multiple |
---|---|---|---|
covering_index_scan | 1.96 | 0.68 | 0.35 |
groupby_scan | 13.22 | 20.0 | 1.51 |
index_join | 1.5 | 2.48 | 1.65 |
index_join_scan | 1.44 | 1.47 | 1.02 |
index_scan | 34.33 | 30.26 | 0.88 |
oltp_point_select | 0.2 | 0.28 | 1.4 |
oltp_read_only | 3.75 | 5.37 | 1.43 |
select_random_points | 0.35 | 0.61 | 1.74 |
select_random_ranges | 0.38 | 0.64 | 1.68 |
table_scan | 34.95 | 32.53 | 0.93 |
types_table_scan | 75.82 | 125.52 | 1.66 |
reads_mean_multiplier | 1.3 |
Write Tests | MySQL | Dolt | Multiple |
---|---|---|---|
oltp_delete_insert | 8.43 | 6.55 | 0.78 |
oltp_insert | 4.18 | 3.25 | 0.78 |
oltp_read_write | 9.06 | 11.87 | 1.31 |
oltp_update_index | 4.18 | 3.3 | 0.79 |
oltp_update_non_index | 4.18 | 3.25 | 0.78 |
oltp_write_only | 5.28 | 6.55 | 1.24 |
types_delete_insert | 8.43 | 6.91 | 0.82 |
writes_mean_multiplier | 0.93 |
TPC-C TPS Tests | MySQL | Dolt | Multiple |
---|---|---|---|
tpcc-scale-factor-1 | 94.02 | 38.19 | 2.46 |
tpcc_tps_multiplier | 2.46 |
Overall Mean Multiple | 1.56 |
1.55.2
Merged PRs
dolt
go-mysql-server
- 3034: Allow `drop trigger ...` when trigger is invalid
We should ignore parser errors when trying to drop triggers that are invalid.
fixes: #9359
- 3032: Generalize types in `CASE`, `IF`, and `IFNULL`
Fixes #9321
Generalize types in `CASE`, `IF`, and `IFNULL` as described by the MySQL docs.
- 3031: Fix 7998 json escape double quotes
Key writes now use the same recursive call to `writeMarshalledValue()` as value writes in `json_encode.go`.
Fixes: #7998
- 3030: Distinguish nils in firstBuffer
Fixes #9035
In `AggregationBuffer.firstBuffer`, the buffer should not update if the first row has already been written. However, there was no way of distinguishing between a `nil` because the buffer is empty and a `nil` because the value in that column of the first row is `nil`.
When you have two rows `(NULL, 1)` and `(1, NULL)`, the first row gets written into `firstBuffer.val` as `{nil, 1}`. Because there's no way to tell whether the first value is supposed to be `nil`, it ends up getting overwritten by the `1` in the second row. Because the second value is not `nil`, `Update` skips it. As a result, we end up with a row `{1, 1}` that matches neither of the original rows and evaluates to `TRUE` when grouped by `c0 = c1`, even though the original rows evaluate to `NULL`.
This isn't an issue when the rows are `(NULL, 1), (NULL, 1)` or `(1, NULL), (1, NULL)` because the `nil` value remains a `nil` value when `Update` is called on the second row.
I fixed this by adding a `writtenNil` flag to `firstBuffer` to indicate that the `nil` value is meant to be there and should not be overwritten.
- 3029: Fix 9347 panic case statements info schema
`information_schema` updated to respect enum type. `Eval()` in `enum.go` is more flexible with strings now.
Fixes: #9347
- 3028: add mysql dialect tag to new `group by` test
The new `group by` test added in dolthub/go-mysql-server#3025 is not Postgres/Doltgres compatible.
- 3024: Adding skipped tests for `UPDATE ... JOIN` bugs
- 2994: fix NewServer call
this is a followup to dolthub/go-mysql-server#2989
Closed Issues
- 9359: Error when trying to drop trigger
- 9347: Panic with case statements in information_schema.tables access
- 9321: Using an IFNULL with TINYINT param restricts the output as TINYINT
- 9355: dolt version 1.56.0 published on dockerhub but latest released version is 1.55.1
Performance
Read Tests | MySQL | Dolt | Multiple |
---|---|---|---|
covering_index_scan | 1.96 | 0.68 | 0.35 |
groupby_scan | 13.22 | 18.61 | 1.41 |
index_join | 1.5 | 2.57 | 1.71 |
index_join_scan | 1.47 | 1.44 | 0.98 |
index_scan | 34.33 | 30.81 | 0.9 |
oltp_point_select | 0.2 | 0.29 | 1.45 |
oltp_read_only | 3.75 | 5.37 | 1.43 |
select_random_points | 0.35 | 0.61 | 1.74 |
select_random_ranges | 0.39 | 0.64 | 1.64 |
table_scan | 34.95 | 32.53 | 0.93 |
types_table_scan | 75.82 | 130.13 | 1.72 |
reads_mean_multiplier | 1.3 |
Write Tests | MySQL | Dolt | Multiple |
---|---|---|---|
oltp_delete_insert | 8.43 | 6.67 | 0.79 |
oltp_insert | 4.18 | 3.25 | 0.78 |
oltp_read_write | 9.06 | 11.87 | 1.31 |
oltp_update_index | 4.18 | 3.36 | 0.8 |
oltp_update_non_index | 4.18 | 3.25 | 0.78 |
oltp_write_only | 5.28 | 6.55 | 1.24 |
types_delete_insert | 8.43 | 6.91 | 0.82 |
writes_mean_multiplier | 0.93 |
TPC-C TPS Tests | MySQL | Dolt | Multiple |
---|---|---|---|
tpcc-scale-factor-1 | 94.38 | 39.0 | 2.42 |
tpcc_tps_multiplier | 2.42 |
Overall Mean Multiple | 1.55 |
1.55.1
Merged PRs
dolt
- 9346: /docker/docker-entrypoint.sh: support env vars for creating a default database
- 9344: Nathan/commit time
(Temporarily) fixes issue #8477. Commits created in the shell via `\commit` now have the correct time.
- 9336: Fix 4160 dolt commit amend flag for initial commit
`BuildNewCommit()` now checks `opts.Amend` before implicitly appending parents to the commit.
Fixes: #4160
- 9271: go: sqle/statspro: Make ANALYZE TABLE faster by circumventing the rate limiter on stats work.
When a user is requesting the analysis, we should not rate limit resource utilization in the same way we do when collecting stats in the background.
Also simplifies the implementation of rate limiting a bit.
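The distinction drawn above — background stats collection is throttled, while an explicit `ANALYZE TABLE` skips the throttle — can be sketched in Python (a hypothetical illustration; the real code lives in dolt's Go `sqle/statspro` package, and `RateLimiter`/`run_stats_job` are invented names):

```python
import time

class RateLimiter:
    """Minimal interval-based limiter for background stats work."""
    def __init__(self, min_interval_s):
        self.min_interval_s = min_interval_s
        self.last = 0.0

    def wait(self):
        now = time.monotonic()
        sleep_for = self.last + self.min_interval_s - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last = time.monotonic()

def run_stats_job(chunks, limiter, user_requested=False):
    """Process stats chunks; only background runs pay the rate limit."""
    processed = 0
    for _ in chunks:
        if not user_requested:
            limiter.wait()   # throttle background collection
        processed += 1       # user-requested ANALYZE runs at full speed
    return processed

limiter = RateLimiter(min_interval_s=0.01)
print(run_stats_job(range(5), limiter, user_requested=True))  # → 5, without sleeping
```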
go-mysql-server
- 3028: add mysql dialect tag to new `group by` test
The new `group by` test added in dolthub/go-mysql-server#3025 is not Postgres/Doltgres compatible.
- 3025: Convert if.Eval result to correct type
if.Eval().Type() should evaluate to if.Type(), instead of whatever type the chosen subexpression is.
https://dev.mysql.com/doc/refman/8.4/en/flow-control-functions.html#function_if
Fixes #9320
- 3023: `like` match when collation is unspecified
For some reason the `LikeMatcher` returns false with no error on unspecified collation, so this PR fixes that.
Fixes: #9337
- 3021: Added test for `insert...select...returning`
Fixes #9248
- 3020: Support OCT() function and fix CONV() mishandling of negative floats and empty string N
MySQL Docs: https://dev.mysql.com/doc/refman/8.4/en/string-functions.html#function_oct
vitess
- 421: fix up constants for server cursor types and add support for the `PARAMETER_COUNT_AVAILABLE` cursor
We were missing the `PARAMETER_COUNT_AVAILABLE` server cursor type when handling a `COM_STMT_EXECUTE`.
Oddly, it seems to behave as if there were no cursor.
MySQL Docs: https://dev.mysql.com/doc/dev/mysql-server/9.3.0/mysql__com_8h.html#a3e5e9e744ff6f7b989a604fd669977da
Fixes: #9313
- 420: Make `returning` a reserved keyword
Part of #9248
Making `returning` a reserved keyword allows for a `returning` clause in all types of insert statements.
Closed Issues
- 9035: NULL in GROUP BY column is interpreted as 1
- 9347: Panic with case statements in information_schema.tables access
- 7998: Casting json to string doesn't escape double quotes in keys/values
- 4160: dolt commit --amend returns error for initial commit
- 9313: Incompatibility of streaming prepared statement responses with github.com/go-mysql-org/go-mysql
- 9345: Support `MYSQL_DATABASE` and `DOLT_DATABASE` environment variables in Dolt Docker image
- 9320: Group-By with `IF` function fails to distinguish the integer "1" and the boolean value "true"
- 9307: UNKNOWN_CODE_PLEASE_REPORT: No authentication methods available for authentication.
- 9325: Support `OCT()` function
- 9324: Panic when pushing to remote
Performance
Read Tests | MySQL | Dolt | Multiple |
---|---|---|---|
covering_index_scan | 1.86 | 0.68 | 0.37 |
groupby_scan | 13.46 | 18.61 | 1.38 |
index_join | 1.47 | 2.48 | 1.69 |
index_join_scan | 1.44 | 1.44 | 1.0 |
index_scan | 34.33 | 31.37 | 0.91 |
oltp_point_select | 0.2 | 0.28 | 1.4 |
oltp_read_only | 3.75 | 5.37 | 1.43 |
select_random_points | 0.35 | 0.61 | 1.74 |
select_random_ranges | 0.38 | 0.64 | 1.68 |
table_scan | 34.95 | 32.53 | 0.93 |
types_table_scan | 74.46 | 132.49 | 1.78 |
reads_mean_multiplier | 1.3 |
Write Tests | MySQL | Dolt | Multiple |
---|---|---|---|
oltp_delete_insert | 8.28 | 6.67 | 0.81 |
oltp_insert | 4.18 | 3.25 | 0.78 |
oltp_read_write | 9.06 | 11.87 | 1.31 |
oltp_update_index | 4.25 | 3.36 | 0.79 |
oltp_update_non_index | 4.18 | 3.25 | 0.78 |
oltp_write_only | 5.28 | 6.55 | 1.24 |
types_delete_insert | 8.43 | 7.04 | 0.84 |
writes_mean_multiplier | 0.94 |
TPC-C TPS Tests | MySQL | Dolt | Multiple |
---|---|---|---|
tpcc-scale-factor-1 | 94.91 | 38.85 | 2.44 |
tpcc_tps_multiplier | 2.44 |
Overall Mean Multiple | 1.56 |
1.55.0
Backwards Incompatible Changes
There has been a change in the way stashes are stored, so you may not be able to see stashes created before this version with `dolt stash list`. To update your stash references, run the following commands on the command line:
`dolt sql -q "call dolt_stash('pop', 'stashes')"`
to put your stashed changes back in the working set. Then run either of the following:
`dolt stash` or `dolt sql -q "call dolt_stash('push', 'dolt-cli')"`
Additionally, `returning` was made a reserved word to support `insert...returning` statements. Any tables, views, or columns named `returning` will no longer work in certain SQL statements without being quoted, e.g.:
SELECT `returning` from mytable;
Per Dolt’s versioning policy, this is a minor version bump because these changes may impact existing applications. Please reach out to us on GitHub or Discord if you have questions or need help with any of these changes.
Merged PRs
dolt
- 9329: go: store/datas/database_common: store/nbs/store.go: Fix some issues when pushing to a dolt sql-server that is itself running GC.
A push to a remote works by uploading the missing content and then adding references to it in the remote datastore. If the remote is running a GC during the push, it is possible for the newly added data to be collected and no longer be available when the references are added.
This should cause a transient failure which is safe to retry. There were a couple of bugs which could instead cause a panic. This makes some changes to safeguard against those cases.
- 9317: go/store/nbs: archive_chunk_source.go: Make sure we skip getRecords marked found when fetching chunks.
This avoids delivering the chunk more than once, which parts of the application layer are not expecting.
- 9310: tidy up errors
This PR fixes index coll to use pointer receivers, and consolidates some errors to use the ones defined in GMS.
companion pr: dolthub/go-mysql-server#3006
- 9309: go/store/nbs: Fix a bug where GetChunkLocations{,WithPaths} can return the same chunk more than once if it is in newgen and oldgen.
- 9308: dolt_stash stored procedure
This PR implements issue #7623. It includes the following changes:
- `dolt_stashes` system table to view current stashes. Currently read-only.
- `dolt_stash()` stored procedure to use in the server context. Must be given a specific command (push, pop, clear, or drop) and a stash name.
- 9301: Use 256 stripes in node cache
- 9270: Add `dolt_preview_merge_conflicts` table function
This table function shows which rows are conflicting for a table between two branches. It will error if there are schema conflicts.
- 9233: Add a `dolt_preview_merge_conflicts_summary` table function
This table function shows which tables are conflicting between two branches and how many data and/or schema conflicts there are for each table.
go-mysql-server
- 3021: Added test for `insert...select...returning`
Fixes #9248
- 3018: fix text storage for `left` and `instr` functions
fixes: #9322
- 3017: Support `*` in `returning` clauses
part of #9248
- 3016: Minor updates to support `UPDATE ... FROM` in Doltgres
Minor changes to support `UPDATE ... FROM` in Doltgres (dolthub/doltgresql#1536)
While working on this, I noticed a few issues with the `UPDATE ... JOIN` support in GMS:
- There was duplicated logic for `UPDATE ... JOIN` in `PlanBuilder.buildUpdate()` and in the analyzer rule `modifyUpdateExprsForJoin`. Not all `UPDATE ... JOIN` statements can be handled in PlanBuilder, because some statements, like `IN SUBQUERY`, are transformed into a join during analysis, so we need logic in the analyzer to identify `UPDATE ... JOIN` plans after that transformation. To reduce the duplication, I removed the logic from PlanBuilder and now we rely on the `modifyUpdateExprsForJoin` rule to mark all `UPDATE ... JOIN` plans.
- Callers use `plan.GetUpdatable()` to get a reference to the table being updated, but `UPDATE ... JOIN` can update multiple tables. The `GetUpdatable()` APIs should be refactored to model this, otherwise we cannot accurately track multiple tables being modified and apply all the correct triggers/constraints.
- Currently `UPDATE ... JOIN` never honors foreign key constraints, due to limitations in the concrete type `UpdateJoin.GetUpdatable()` returns. The `updatableJoinTable` type does not implement `sql.ForeignKeyTable`, so GMS is unable to load any foreign key constraints from it. This also needs to be handled in a future PR.
- 3015: server var tests
- 3014: Simplify implementation of `hasSingleOutput`
As part of adding support for `IS NULL` and `IS NOT NULL` implementations that can match Postgres's behavior for records, I've been digging through the references to `expression.IsNull` in GMS so that we can have a separate implementation for Doltgres that GMS can still analyze correctly.
One reference to `expression.IsNull` is in the `hasSingleOutput` function, which is used to determine if an expression result has a single row or more than one row. The only expression implementation I was able to find that actually returns multiple rows is `plan.Subquery`, so I simplified this function to remove the reference to `expression.IsNull`.
- 3013: handle insert returning for server context
- 3012: Handle `insert...returning...` queries
part of #9248
`insert...returning *` currently doesn't work. Similar to dolthub/doltgresql#1432
- 3010: Add query time to logger
This is a proposed fix for Dolt issue: #8909
I'm not certain if we want to just add this field or replace connect time. I think this implementation is safer.
- 3006: implement `if [not] exists` logic for DDL around views and indexes
This PR adds support for the queries:
`create view if not exists ...`
`create index if not exists ...`
`alter table ... add index if not exists ...`
`drop index if exists ...`
fixes: #9293
companion pr: dolthub/vitess#417
vitess
- 420: Make `returning` a reserved keyword
Part of #9248
Making `returning` a reserved keyword allows for a `returning` clause in all types of insert statements.
- 419: Add grammar support for `insert...returning...` (most cases)
Part of #9248
`Insert...returning...` statements work with `insert_data_alias` and `SET assignment_list` cases. The following examples (from the MariaDB documentation) work:
INSERT INTO t2 (id) VALUES (2),(3) RETURNING id,t;
INSERT INTO t1 SET id1=1, animal1='Bear' RETURNING f(id1), UPPER(animal1);
I wasn't able to get `insert...returning...` to work with `insert_data_select` cases. The `RETURNING` clause is not supported for the following and would lead to a syntax error:
INSERT INTO archive (id, data) SELECT id, data FROM live_data WHERE flag = 'old' RETURNING id;
When I had added a `RETURNING` clause there, a grammar conflict arose because the parser was not able to distinguish between a column alias and the `RETURNING` keyword. This could be resolved by making `RETURNING` a reserved keyword, but we decided against that since `RETURNING` is a non-reserved keyword in MySQL (doc).
I ran into a similar conflict with `insert_data_alias` cases because the parser was not able to distinguish between a table alias and the `RETURNING` keyword. This was resolved by adding `RETURNING` to `non_reserved_keyword2`. This means `RETURNING` can't be used as a table alias and may cause some customer issues, but the likelihood is very low.
Also removed TestSingle from parse_test.go (we already have TestSingleSQL)
- 417: add `if not exists` option to `create view`
syntax for: #9293
Introduced in MySQL 9.1: https://dev.mysql.com/doc/refman/9.1/en/create-view.html - 416: New flavor of injected expression
Closed Issues
- 9337: Can't use LIKE together with JSON_TABLE
- 9332: remotesapi through reverse proxy (caddy) error
- 9248: support returning clause in inserts like mariadb ?
- 9322: Using LEFT on TEXT column fails with "invalid type: *val.TextStorage"
- [9090](https://github.com/dolthub/dolt/issu...
1.54.2
Merged PRs
dolt
- 9306: go: store/datas/pull: clone.go: Improve robustness of Clone for certain remoteapi implementations when the remote Conjoins.
The clone code works by listing the remote table files and downloading them into the local table file store. When the remote is a remoteapi implementation, like a DoltHub repository, this results in listing the remote table files and using URLs to fetch each of them.
The URLs returned from these APIs can expire and they need to be refreshed. This refresh can happen in two ways:
1. There is explicit support in the TableFileSource representation returned by the API to include a mechanism to refresh it. DoltHub uses this, and the Dolt client will make use of that support to refresh expired URLs.
2. The heavy-handed approach is to list the table files again and use the newly returned URLs.
The Clone code has explicit support for doing (2), and it is necessary for remoteapi implementations with expiring URLs but without explicit RefreshTableFileUrl support. dolt itself, when running a remote as part of sql-server for example, does not implement RefreshTableFileUrl support, and so the re-list support is still necessary.
This PR changes the Clone implementation so that, on a retry, it makes all the newly returned table file sources available for the next try, but it keeps the old sources around if they no longer come back from ListTableFiles. In this way, we get strictly more robust behavior than before.
The downside is that, when the remote file is actually gone, the Clone code will continue attempting to download it until it reaches a terminal download failure. This change in behavior is not as disruptive as the current behavior, and so we make this new trade off for now.
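The retry bookkeeping described above can be sketched in Python (a hypothetical illustration of the merge strategy only; the real implementation is Go code in store/datas/pull, and `refresh_sources` is an invented name):

```python
def refresh_sources(known, relisted):
    """Merge newly listed table-file sources into the known set, keeping
    old entries that no longer appear in the listing (they may still be
    downloadable, e.g. mid-conjoin), per the strategy described above."""
    merged = dict(known)     # file_id -> url
    merged.update(relisted)  # newer URLs win for re-listed files
    return merged

known = {"table_a": "https://remote/old-url-a", "table_b": "https://remote/old-url-b"}
# After a conjoin, the remote re-lists one refreshed URL and one new file:
relisted = {"table_b": "https://remote/new-url-b", "table_c": "https://remote/url-c"}
merged = refresh_sources(known, relisted)
# table_a is kept even though it was not re-listed.
assert set(merged) == {"table_a", "table_b", "table_c"}
assert merged["table_b"] == "https://remote/new-url-b"
```

Keeping the stale entries is exactly the trade-off the PR describes: strictly more retry candidates, at the cost of retrying genuinely gone files until a terminal failure.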
- 9299: Improved chunk tracking during clone
Decrement the chunk counter when a file completes. Add a completion status message. Fix a potential race that @reltuk identified in review.
- 9298: Last easy unskip according to Claude
- 9296: Check for invalid ancestor merge spec
fixes #9294
- 9295: Various unskipped bats tests identified and fixed by Claude Code.
- 9289: Fix load data tabs test
- 9288: The where clause on a dolt diff command is always validated
Even if there is an empty diff.
- 9287: Fix skip test for another load data issue
- 9122: implement `dolt_branch_status()`
This PR implements `dolt_branch_status(<base_branch>, <feature_branch1>, <feature_branch2>, ...)`.
The function compares `base_branch` against each of the feature branches and outputs a table that lists how many commits each is ahead of and behind the base branch.
Additionally, this PR contains some cleanup/refactoring.
addresses: #9090
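Ahead/behind counts like those reported by `dolt_branch_status()` can be computed by comparing the ancestor sets of two branch heads. A hypothetical Python sketch over a toy commit DAG (Dolt's real implementation walks its commit graph in Go):

```python
def ancestors(head, parents):
    """All commits reachable from head (inclusive) in a parents-map DAG."""
    seen, stack = set(), [head]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, []))
    return seen

def branch_status(base, feature, parents):
    """Commits ahead/behind, like one row of dolt_branch_status()."""
    base_set = ancestors(base, parents)
    feat_set = ancestors(feature, parents)
    return len(feat_set - base_set), len(base_set - feat_set)  # (ahead, behind)

# Toy history: A -> B -> C (base), and A -> B -> D -> E (feature)
parents = {"B": ["A"], "C": ["B"], "D": ["B"], "E": ["D"]}
print(branch_status("C", "E", parents))  # → (2, 1): 2 ahead, 1 behind
```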
go-mysql-server
- 3010: Add query time to logger
This is a proposed fix for Dolt issue: #8909
I'm not certain if we want to just add this field or replace connect time. I think this implementation is safer. - 3009: New skipped tests for group_concat
ORDER BY clauses in aggregates like `group_concat` do not work correctly with subqueries. Lots of varied tests demonstrating this defect.
- 3004: Add support for `IS NULL` expressions with records
The implementation in GMS for `IS NULL` checks if the value is `NULL` or not. For record and composite types in Postgres, `IS NULL` needs to check whether each value in the record or composite type is `NULL` to determine if the whole value is `NULL`. See dolthub/doltgresql#1520 for more details.
This change creates a new shared type, named `TupleValue`, in GMS that is used for record values (and eventually, composite type values). I also considered giving Doltgres its own custom `IsNull` implementation, but the analyzer does specific checks and optimizations when it looks for `*expression.IsNull` instances, which would stop working correctly.
- 3002: fix table functions in stored procedures
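PR 3004's record semantics can be illustrated in Python: for a record value, Postgres's `IS NULL` is true only when every field is NULL (a hypothetical sketch using tuples; GMS models this with its Go `TupleValue` type):

```python
def is_null(value):
    """Postgres-style IS NULL: a record is NULL only if all fields are."""
    if isinstance(value, tuple):
        return all(is_null(v) for v in value)
    return value is None

assert is_null(None)
assert is_null((None, None))   # ROW(NULL, NULL) IS NULL -> true
assert not is_null((None, 1))  # mixed record is not NULL
assert not is_null((1, 2))
```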
- 3001: Support for injected expressions with ORDER BY clauses
Depends on 416
vitess
- 416: New flavor of injected expression
- 415: Fix implementation for `REVOKE ALL...`
This PR fixes `REVOKE ALL [PRIVILEGES], GRANT OPTION ...` so that it parses to the same thing as `REVOKE ALL PRIVILEGES`, as they are equivalent in MySQL. The `GRANT` privilege is already part of all privileges; specifying it is just for clarity.
Additionally, this PR adds syntax support for the `IF EXISTS` option and the `IGNORE UNKNOWN USER` option that are part of `REVOKE` statements.
MySQL Docs: https://dev.mysql.com/doc/refman/8.4/en/revoke.html
Syntax for: #9228
Closed Issues
- 8909: Dolt start time of queries in logs is all pinned to the connection open start time
- 7984: Implement `dolt_help` set of system tables
- 9240: Clone Status reports more chunks being downloaded than it says exist
- 9294: HEAD^0 causes panic
Performance
Read Tests | MySQL | Dolt | Multiple |
---|---|---|---|
covering_index_scan | 1.86 | 0.67 | 0.36 |
groupby_scan | 13.46 | 18.61 | 1.38 |
index_join | 1.44 | 2.43 | 1.69 |
index_join_scan | 1.42 | 1.47 | 1.04 |
index_scan | 34.33 | 30.26 | 0.88 |
oltp_point_select | 0.18 | 0.26 | 1.44 |
oltp_read_only | 3.43 | 5.28 | 1.54 |
select_random_points | 0.33 | 0.6 | 1.82 |
select_random_ranges | 0.36 | 0.62 | 1.72 |
table_scan | 34.33 | 32.53 | 0.95 |
types_table_scan | 75.82 | 123.28 | 1.63 |
reads_mean_multiplier | 1.31 |
Write Tests | MySQL | Dolt | Multiple |
---|---|---|---|
oltp_delete_insert | 8.9 | 6.32 | 0.71 |
oltp_insert | 4.1 | 3.07 | 0.75 |
oltp_read_write | 8.74 | 11.65 | 1.33 |
oltp_update_index | 4.18 | 3.19 | 0.76 |
oltp_update_non_index | 4.18 | 3.07 | 0.73 |
oltp_write_only | 5.67 | 6.32 | 1.11 |
types_delete_insert | 8.28 | 6.67 | 0.81 |
writes_mean_multiplier | 0.89 |
TPC-C TPS Tests | MySQL | Dolt | Multiple |
---|---|---|---|
tpcc-scale-factor-1 | 97.48 | 39.64 | 2.46 |
tpcc_tps_multiplier | 2.46 |
Overall Mean Multiple | 1.55 |
1.54.1
Merged PRs
dolt
- 9286: Fix skip test for dolt diff -r sql and views
- 9285: Fixed a load data test for mismatched columns
- 9278: Two additional unskipped bats tests: one that said flakey, one in git status
One says it was skipped because it was flaky in CI. Let's see if it's now fine.
- 9276: Unskip keyless add/drop column test
This is now implemented.
- 9275: Fix merge test to properly test for a constraint violation
Identified and fixed by Claude.
- 9274: Skipped bats test for dolt status that now works.
Identified by Claude.
- 9273: Fix input validation of primary keys
- 9272: Added a new constant for string columns created during schema import.
Length is 200 instead of 1023.
Fixes: #8972
- 9269: manifest safety checks
A recent user report described a manifest landing that was missing its format. This will at least prevent the generation of such a file in the future.
This flag really makes no sense any more
This is a Claude change. $0.80, two prompts.
- 9264: Unskip test for export quotes
Companion GMS PR: dolthub/go-mysql-server#2995
- 9256: remote cleanup
When removing remote definitions, we now also remove the remote references from the branch specifications.
Fixes: #9169
go-mysql-server
- 3002: fix table functions in stored procedures
- 3000: make update join case insensitive
fixes: #9268
- 2997: Properly escape special characters in comments
Mostly fixes #8509 (does not handle Control+Z)
- 2996: Revert "Merge pull request #2894 from dolthub/max/nil-proc-db"
This reverts commit ce4535ec2da6600a552279a5abf46eb652b7eec0, reversing changes made to f38274720545e418e94e16b664302d96098d515e. - 2995: Fixed enum escaping issue
This fixes a skipped bats test in table export.
Fixes Dolt issue: #2197
- 2993: Added undeclared variable error message
Fixes #8539
- 2894: [planbuilder] nil database doesn't panic building procedure call
Closed Issues
- 9268: Update with subselect in WHERE clause silently fails to update
- 8509: `SHOW CREATE TABLE` doesn't properly escape quotes in comments
- 8972: varchar types inferred on schema import are set to size 1023; 18 occurrences overflow row size limit
- 2197: SQL export for set types with embedded single quotes broken
- 8697: Support MariaDB to Dolt replication
- 9169: Dolt maintains reference to removed remote
1.54.0
This release adds the `commit_order` column to the `dolt_log` system table and `dolt_log()` table function. This is a backwards incompatible change because `select *`-style queries against that table or table function will return rows of a different size and require the user to update their parsing code. Per Dolt's versioning policy, this requires a minor version bump.
Merged PRs
dolt
- 9262: Fix replace count test
Claude insists that this behavior matches MySQL. It consulted the docs and ran tests against MySQL.
I can confirm the tested behavior matches MariaDB, which I have installed:
MariaDB [(none)]> CREATE DATABASE test_replace;
ERROR 1007 (HY000): Can't create database 'test_replace'; database exists
MariaDB [(none)]> DROP DATABASE IF EXISTS test_replace;
Query OK, 1 row affected (0.057 sec)
MariaDB [(none)]> USE test_replace;
ERROR 1049 (42000): Unknown database 'test_replace'
MariaDB [(none)]> CREATE DATABASE test_replace;
Query OK, 1 row affected (0.003 sec)
MariaDB [(none)]> USE test_replace;
Database changed
MariaDB [test_replace]> CREATE TABLE test(pk BIGINT PRIMARY KEY, v BIGINT);
Query OK, 0 rows affected (0.030 sec)
MariaDB [test_replace]> REPLACE INTO test VALUES (1, 1);
Query OK, 1 row affected (0.001 sec)
MariaDB [test_replace]> REPLACE INTO test VALUES (1, 2);
Query OK, 2 rows affected (0.001 sec)
- 9261: Unskip no test symbols in binary test
- 9260: Ignore CLAUDE.md for now
- 9254: Add support for MariaDB's GTID format
Fixes: #9241
Fixes: #8697
Testing for this is currently manual. To add automated tests for MariaDB integration, we'll need some larger changes to the binlog replication testing framework to support launching a MariaDB server. We should do that work eventually, especially if we hear more customer requests for MariaDB replication integration. - 9251: Allow dumping wrapper values to SQL
This prevents an issue where commands like `dolt dump` would fail when trying to export a table as SQL statements if the table has a `BLOB` column, since it would fail to unwrap the text wrapper value to access the underlying bytes.
- 9250: Implement commit_order in dolt_log system table and table function
This PR implements: #9145. This is implemented using Claude Code.
A customer asked for the commit order in the `dolt_log` system table. We have that information as we generate the commit log in the height variable. This exposes that variable as a column populated for each commit.
```
$ dolt sql -q "select * from dolt_log limit 3"
+----------------------------------+--------------------+-----------------------------------+---------------------+--------------------------------------+--------------+
| commit_hash                      | committer          | email                             | date                | message                              | commit_order |
+----------------------------------+--------------------+-----------------------------------+---------------------+--------------------------------------+--------------+
| iifq0hvp5fis9ld2lnhorqr5l697ph38 | post-no-preference | [email protected]                 | 2025-05-23 06:34:26 | volatility_history 2025-05-22 update | 1427         |
| 62n14k63lu5jcc3a3qr18q8ku19e8g8q | post-no-preference | [email protected]                 | 2025-05-23 06:34:15 | option_chain 2025-05-22 update       | 1426         |
| 2paehofana33lbbbjnqa18v3nlsvbgbk | post-no-preference | [email protected]                 | 2025-05-22 06:34:38 | volatility_history 2025-05-21 update | 1425         |
+----------------------------------+--------------------+-----------------------------------+---------------------+--------------------------------------+--------------+
```
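As a further illustration, the new column can be used to order or filter history topologically. This is a hedged sketch, not from the PR: the query shapes and the `'main'` branch argument are assumptions based on the description above.

```shell
# Most recent commits by topological order rather than commit date.
dolt sql -q "select commit_hash, message, commit_order from dolt_log order by commit_order desc limit 5"

# The dolt_log() table function exposes the same column.
dolt sql -q "select commit_hash, commit_order from dolt_log('main') where commit_order > 1400"
```

Ordering by `commit_order` avoids relying on commit timestamps, which can be out of order across merged branches.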
- 9247: Made dolt diff --stat -r sql an error
Fixes #7817
From Claude:
⏺ Perfect! The changes are working correctly. Let me summarize what I've implemented:
Summary: I successfully implemented the fix for GitHub issue #7817. The issue was that `dolt diff --stat -r sql` was throwing a generic "unsupported" error, but it should return a more specific error message explaining that SQL format diffs are only for schema or data changes.
Changes Made:
1. Updated the error message in /Users/timsehn/dolthub/git/dolt/go/cmd/dolt/commands/diff_output.go: changed from "diff stats are not supported for sql output" to "SQL format diffs only rendered for schema or data changes"
2. Updated the test in /Users/timsehn/dolthub/git/dolt/integration-tests/bats/sql-diff.bats: updated the expected error message in the test case to match the new error message
Reasoning: The --stat flag shows statistics about diffs (like number of rows changed), which cannot be meaningfully represented as SQL statements. SQL format should only be used for actual schema or data changes that can be expressed as executable SQL commands. The new error message clearly communicates this limitation to users.
The implementation correctly returns an error when someone tries to use `dolt diff --stat -r sql`, and the tests confirm that: (1) the specific error case now returns the improved error message, and (2) normal SQL diff functionality continues to work as expected.
/cost: Total cost: $3.08. Total duration (API): 11m 9.9s. Total duration (wall): 25m 15.4s. Total code changes: 2 lines added, 3 lines removed. Token usage by model: claude-3-5-haiku: 79.9k input, 1.9k output, 0 cache read, 0 cache write; claude-sonnet: 23.2k input, 15.3k output, 4.1m cache read, 396.2k cache write.
- 9246: --all option for dolt log
Issue #8200
Adds a `--all` option for dolt log. You can also specify tables after the all option, with `dolt log --all table1 table2`.
I still want to work on:
- The code's a bit messy still. I think a helper function might be nice, especially to set the existingTables map.
- Maybe make it work with `--not` to exclude branches?
- I'm sort of wondering if the code handling the generation of the "query" we make could be improved. It might not be worth it, but for example if you have a table and branch with the same name, we reinsert that value into the `params` slice. We also reuse a bunch of code for the code handling the first non-revision argument.
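A quick usage sketch of the new flag (the table names are placeholders, not from the PR):

```shell
# Show commits reachable from all branches, not just the checked-out one.
dolt log --all

# Combine --all with table filters to restrict the history shown.
dolt log --all table1 table2
```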
- 9244: update maven for ORM tests
- 9237: Sql shell warnings
Fix issue #8875
This PR adds support for warnings in the SQL shell. There is now both a summary and a detailed list of warnings.
Running `select 1/0;` will produce, after the table: `Warning (Code 1365): Division by 0`. You can disable/enable the trailing list with \w and \W, respectively.
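A sketch of the new shell behavior described above. The prompt and output shown in comments are illustrative, not captured from a real session:

```shell
dolt sql
# > select 1/0;
#   ... result table ...
#   Warning (Code 1365): Division by 0
# > \w              # disable the trailing warning list
# > \W              # re-enable it
# > show warnings;  # standard MySQL statement for the detailed list
```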
- 9235: Log errors on a binlog connection
When a SQL error is sent from the replication source to the replica, the connection is closed out, but the error wasn't getting logged. This made it difficult to debug why the replication connection was erroring out.
go-mysql-server
- 2993: Added undeclared variable error message
Fixes #8539
- 2992: Hacky extension point for aggregate function determination
Also refactors building aggregates
- 2989: fix: Fix create new server in example.
I found that the parameters did not match when creating the server in the example, so I fixed it.
- 2988: Implement `REVOKE ALL PRIVILEGES, GRANT OPTION ...`
This PR adds tests for `REVOKE [IF EXISTS] ALL [PRIVILEGES], GRANT OPTION ...` and implements the `IGNORE UNKNOWN USER` option.
There are changes to `REVOKE PROXY`, but it seems like that is unsupported, so there are no tests.
Companion PR: dolthub/vitess#415
- 2986: add returning clause case for assign exec index rule
Test for it is added in doltgresql PR
- 2985: fix default value for non null enum columns
In MySQL non null enum columns default to the very first enum instead of throwing a missing default error.
fixes: #8617
- 2984: Add lock_warnings system variable.
Adds a session variable for locking warnings, allowing you to run background queries without clearing the warnings from the last user-generated command.
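A minimal sketch of how the new variable might be used, assuming it is a boolean session variable as the description implies; the exact semantics and the background query shown are assumptions:

```shell
dolt sql -q "
SET SESSION lock_warnings = 1;                     -- pin the current warning list
SELECT count(*) FROM information_schema.tables;    -- background query; warnings not cleared
SET SESSION lock_warnings = 0;                     -- resume normal warning replacement
SHOW WARNINGS;"
```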
vitess
- 415: Fix implementation for `REVOKE ALL ...`
This PR fixes `REVOKE ALL [PRIVILEGES], GRANT OPTION ...` so that it parses to the same thing as `REVOKE ALL PRIVILEGES`, as they are equivalent in MySQL. The `GRANT` privilege is already part of all privileges; specifying it is just for clarity.
Additionally, this PR adds syntax support for the `IF EXISTS` option and the `IGNORE UNKNOWN USER` option that are part of `REVOKE` statements.
MySQL Docs: https://dev.mysql.com/doc/refman/8.4/en/...
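To make the vitess/GMS changes above concrete, a hedged sketch of the now-supported statements. The user names are hypothetical; the syntax follows the MySQL 8.4 `REVOKE` grammar linked above:

```shell
# REVOKE ALL now parses identically to REVOKE ALL PRIVILEGES.
dolt sql -q "REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'app_user'@'%'"

# New options: tolerate a missing user instead of erroring.
dolt sql -q "REVOKE IF EXISTS ALL PRIVILEGES, GRANT OPTION FROM 'maybe_missing'@'%' IGNORE UNKNOWN USER"
```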
1.53.6
Merged PRs
dolt
- 9236: go: sqle/statspro: prollyStats: FlushFromMem: Fix long-held lock when flushing stats to disk.
For sql-server instances with many databases, many branches, or many tables and indexes, flushing stats could cause queries to block until stats were finished flushing. StatsController attempted to rate-limit how much I/O bandwidth would be used to flush stats, but this is currently counter-productive because all queries are blocked from analyzing successfully until stats are flushed.
This changes it back to flushing stats to disk as quickly as possible when we need to do it. Computing stats is already rate-limited, so for large flushes this should not happen too often.
We will continue to improve things here so that stats writes have less impact on server resource utilization and do not block user-facing work as they proceed.
- 9229: When performing a three-way merge, use a PatchBuffer to build the new primary index instead of a MutableMap.
MutableMaps are designed for caching point modifications to a table. But during merge, the new primary index is computed sequentially. There's no benefit to using a MutableMap here.
MutableMaps are built on top of the `ApplyMutations` function, which takes a sequential stream of modifications (called a PatchBuffer) and applies them to a chunker. This has the added benefit of being parallelizable: the patches are produced in one goroutine and consumed in another.
Instead of using the MutableMap, we can extract the underlying PatchBuffer and use it directly. This should be both more performant and more correct, as it avoids a failure case with schema merges where the MutableMap flushes changes to disk and writes a chunk containing rows with different schemas.
- 9215: go/store/nbs: table_index.go: Fix uint32 overflow in findPrefix when working with large table files.
go-mysql-server
- 2985: fix default value for non null enum columns
In MySQL non null enum columns default to the very first enum instead of throwing a missing default error.
fixes: #8617
- 2983: fix explain plan on server engine
fixes: #8882
- 2982: Bug fix: use ExtendedType serialization if present
The grouping key wasn't getting created correctly for DoltgresTypes, since they implement `sql.StringType` but aren't supported in GMS' `types.ConvertToString` function. Tests for grouping with DoltgresTypes added in Doltgres.
- 2979: fix indexing for `GROUP BY`s and `WINDOW`s in `INSERT` and `REPLACE` statements in `TRIGGERS`
Using aggregation and window functions inside a select statement inside an insert source inside a trigger was causing problems. For example, a trigger defined like so:
```sql
create trigger trig before insert on t1 for each row begin
  insert into t2 select max(id), first_value(id) over (partition by id order by id), ... from t3;
end;
```
The issue involved the `Projections` over the `Group By`s. The scope for the `group by`s already contained the trigger's columns and are indexed uniquely, so we shouldn't include the trigger/parent scope.
Closed Issues
- 8617: INSERT to ENUM column with NOT NULL errors
- 8882: Error in `DESCRIBE PLAN SELECT COUNT(*) FROM table` on running server
- 5317: dolt FixedWidthTableWriter panic when calling stored proc
- 5530: `PREPARE`/`EXECUTE` in stored procedures is buggy
- 7068: first `call` of a stored procedure that uses session user variable always returns NULL
- 9213: Connection concurrency issues with DDL
Performance
Read Tests | MySQL | Dolt | Multiple |
---|---|---|---|
covering_index_scan | 1.86 | 0.67 | 0.36 |
groupby_scan | 13.46 | 17.95 | 1.33 |
index_join | 1.47 | 2.35 | 1.6 |
index_join_scan | 1.42 | 1.44 | 1.01 |
index_scan | 34.33 | 30.81 | 0.9 |
oltp_point_select | 0.18 | 0.26 | 1.44 |
oltp_read_only | 3.43 | 5.18 | 1.51 |
select_random_points | 0.33 | 0.6 | 1.82 |
select_random_ranges | 0.36 | 0.62 | 1.72 |
table_scan | 34.95 | 32.53 | 0.93 |
types_table_scan | 75.82 | 127.81 | 1.69 |
reads_mean_multiplier | 1.3 |
Write Tests | MySQL | Dolt | Multiple |
---|---|---|---|
oltp_delete_insert | 8.9 | 6.32 | 0.71 |
oltp_insert | 4.1 | 3.07 | 0.75 |
oltp_read_write | 8.9 | 11.45 | 1.29 |
oltp_update_index | 4.18 | 3.19 | 0.76 |
oltp_update_non_index | 4.18 | 3.07 | 0.73 |
oltp_write_only | 5.67 | 6.32 | 1.11 |
types_delete_insert | 8.28 | 6.67 | 0.81 |
writes_mean_multiplier | 0.88 |
TPC-C TPS Tests | MySQL | Dolt | Multiple |
---|---|---|---|
tpcc-scale-factor-1 | 98.1 | 40.36 | 2.43 |
tpcc_tps_multiplier | 2.43 |
Overall Mean Multiple | 1.54 |
---|
1.53.5
Merged PRs
dolt
- 9204: feat(import,csv,psv): Add support for importing CSV and PSV files without header rows
Summary
- Add --no-header flag to treat the first row in CSV/PSV files as data instead of column names
- Add --columns option to specify column names when importing files without headers
- Fix nil pointer panic when importing from stdin with --create-table
In short, this feature makes Dolt more compatible with MySQL/SQLite workflows and provides users with more flexibility when importing data.
Problem
Previously, Dolt always expected the first row of CSV/PSV files to contain column names. This differs from MySQL and SQLite which support importing files where the first row contains data. Users migrating from these systems or working with headerless data files couldn't import them without modifying their files.
Additionally, when users attempted to import data from stdin using --create-table, they would encounter a nil pointer panic instead of receiving an error message.
Solution
The implementation adds:
- A new --no-header flag that treats the first row as data instead of column headers
- A complementary --columns option to specify column names when headers aren't present
- Proper validation to ensure correct flag combinations
- Comprehensive error handling for stdin imports with clear error messages
- Integration tests for both CSV and PSV files
Testing
- Added integration tests for both CSV and PSV files that verify:
- Importing files with --no-header and --columns options
- Error cases when required options are missing
- Original behavior is maintained when not using --no-header
- Behavior of --columns with and without --no-header
- Edge cases like stdin imports
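A hedged sketch of the new flags in use. The file, table, and column names are hypothetical, and the exact argument format for --columns may differ:

```shell
# data.csv's first row is data (e.g. "1,alice"), not column names.
dolt table import -c --no-header --columns "id,name" people data.csv

# Without --no-header, the first row is still consumed as column names (original behavior).
dolt table import -c people other.csv
```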
go-mysql-server
- 2981: Unwrap wrapper values used in JSON aggregation functions, and un-skip accidentally-skipped tests for this behavior.
We were accidentally skipping most of the tests in `TestJsonScripts`. An error in the test harness meant that skipping one test in this suite would also skip all additional tests.
A few of the skipped tests were for JSON aggregation functions. The recent "Adaptive Encoding / Wrapper Values" optimization wasn't working properly with these functions because the wrapped values provided to these functions weren't being unwrapped before being inserted into JSON documents. These tests would have caught that issue, but didn't because they were disabled.
This PR fixes the issue and also re-enables the test. - 2979: fix indexing for
GROUP BY
s andWINDOW
s inINSERT
andREPLACE
statements inTRIGGERS
Using aggregation and window functions inside a select statement inside an insert source inside a trigger was causing problems. For example, a trigger defined like so:The issue involved thecreate trigger trig before insert on t1 for each row begin insert into t2 select max(id), first_value(id) over (partition by id order by id), ... from t3; end;
Projections
over theGroup By
s. The scope for thegroup by
s already contained the trigger's columns and are indexed uniquely, so we shouldn't include the trigger/parent scope.
Closed Issues
- 9222: JSON_OBJECT error on `longtext` columns - "unsupported type: *val.TextStorage"
- 7831: Allow importing CSVs without column names.
Performance
Read Tests | MySQL | Dolt | Multiple |
---|---|---|---|
covering_index_scan | 2.0 | 0.65 | 0.32 |
groupby_scan | 13.46 | 17.95 | 1.33 |
index_join | 1.47 | 2.39 | 1.63 |
index_join_scan | 1.42 | 1.5 | 1.06 |
index_scan | 34.33 | 30.26 | 0.88 |
oltp_point_select | 0.18 | 0.26 | 1.44 |
oltp_read_only | 3.43 | 5.28 | 1.54 |
select_random_points | 0.33 | 0.59 | 1.79 |
select_random_ranges | 0.37 | 0.61 | 1.65 |
table_scan | 34.33 | 32.53 | 0.95 |
types_table_scan | 75.82 | 125.52 | 1.66 |
reads_mean_multiplier | 1.3 |
Write Tests | MySQL | Dolt | Multiple |
---|---|---|---|
oltp_delete_insert | 8.9 | 6.32 | 0.71 |
oltp_insert | 4.1 | 3.07 | 0.75 |
oltp_read_write | 8.74 | 11.45 | 1.31 |
oltp_update_index | 4.18 | 3.19 | 0.76 |
oltp_update_non_index | 4.18 | 3.07 | 0.73 |
oltp_write_only | 5.67 | 6.32 | 1.11 |
types_delete_insert | 8.28 | 6.67 | 0.81 |
writes_mean_multiplier | 0.88 |
TPC-C TPS Tests | MySQL | Dolt | Multiple |
---|---|---|---|
tpcc-scale-factor-1 | 97.61 | 39.19 | 2.49 |
tpcc_tps_multiplier | 2.49 |
Overall Mean Multiple | 1.56 |
---|
1.53.4
Merged PRs
dolt
- 9220: Bump undici from 5.28.5 to 5.29.0 in /.github/actions/ses-email-action
Bumps undici from 5.28.5 to 5.29.0.
Release notes
Sourced from undici's releases.
v5.29.0
What's Changed
- Fix tests in v5.x for Node 20 by @mcollina in nodejs/undici#4104
- Removed clients with unrecoverable errors from the Pool nodejs/undici#4088
Full Changelog: nodejs/undici@v5.28.5...v5.29.0
Commits
- 9528f68 Bumped v5.29.0
- f1d75a4 increase timeout for redirect test
- 2d31ed6 remove fuzzing tests
- 6b36d49 fix redirect test in Node v16
- 648dd8f more fix for the wpt runner on Windows
- a0516ba don't use internal header state for cookies (#3295)
- 87ce4af fix test/client for node 20
- c2c8fd5 fix: accept v20 SSL specific error for alpn selection in http/2
- 82200bd [v6.x] fix wpts on windows (#4093)
- 47546fa test: fix windows wpt (#4050)
- Additional commits viewable in compare view
- 9219: Adding `dolt_update_column_tag()` stored procedure
Adds the `dolt_update_column_tag()` stored procedure to update a column's tag. This is an advanced operation that most users will not need to use. This blog explains more about column tags and column tag conflicts.
Example usage: `CALL dolt_update_column_tag('myTable', 'col1', 4321);`
Follow ups:
- Add docs for the new `dolt_schema_update_tag()` stored procedure
- The `dolt schema update-tag` CLI command should be updated to use this stored procedure.
go-mysql-server
- 2981: Unwrap wrapper values used in JSON aggregation functions, and un-skip accidentally-skipped tests for this behavior.
We were accidentally skipping most of the tests in `TestJsonScripts`. An error in the test harness meant that skipping one test in this suite would also skip all additional tests.
A few of the skipped tests were for JSON aggregation functions. The recent "Adaptive Encoding / Wrapper Values" optimization wasn't working properly with these functions because the wrapped values provided to these functions weren't being unwrapped before being inserted into JSON documents. These tests would have caught that issue, but didn't because they were disabled.
This PR fixes the issue and also re-enables the test. - 2980: Unwrap inputs to JSON_OBJECT.
The inputs to the JSON_OBJECT function are expected to be strings for the keys, and the expected types for the values. So we need to check whether the inputs are wrapper values and unwrap them.
The safest thing to do is unwrap the value when the document is created, so this PR does that. In theory, allowing wrapped values to be used as document values and unwrapping them at the use site could improve performance for certain queries that create JSON objects in memory but then only read some of the fields. But that's likely an infrequent enough situation that it's not worth the added complexity.
- 2978: Bug fix: ensure an active transaction is set before preparing a query
Also adds support for running transaction tests with prepared statements.
Fixes: #9213