Release notes
The PR from earlier in the week encoded stats bucket bound rows with prolly formatting. I neglected to encode MCVs (most common values) in the same way; they are also rows and are subject to the same bugs.
GMS PR: Support information_schema views/tables hooks for doltgres dolthub/go-mysql-server#2678
Changes: --branch option
Fixes: Dolt clone on tag name results in multiple issues dolthub/dolt#8377

dolt_clean and dolt_checkout for doltgres

We previously used commas as serialization boundaries for multi-field stats tuples (bucket bounds). That worked well for numeric values, but it doesn't work well for strings containing commas. This change uses the prolly serialization code to round-trip tuples more safely.
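The hazard of comma-delimited tuples can be shown with a minimal sketch (illustrative only, not Dolt's actual stats code): two distinct tuples collide once a string field contains a comma, so the encoding cannot be decoded unambiguously.

```go
package main

import (
	"fmt"
	"strings"
)

// naiveEncode joins tuple fields with commas, the scheme the
// notes above describe moving away from.
func naiveEncode(fields []string) string {
	return strings.Join(fields, ",")
}

func main() {
	a := naiveEncode([]string{"x,y", "z"})
	b := naiveEncode([]string{"x", "y,z"})
	// Two different tuples produce the same encoded string,
	// so the round trip is lossy for strings with commas.
	fmt.Println(a, b, a == b) // x,y,z x,y,z true
}
```

A length-prefixed or escaped encoding (like the prolly tuple format) avoids this because field boundaries no longer depend on the field contents.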
We still use commas to separate MCV counts, which are integers, and newlines (\n) for index types. If types can contain newlines at some point, we would want to switch that to prolly encoding as well.

io.Copy into an io.PipeWriter will block until all the bytes have been delivered or the reader is closed. reliable/http StreamingResponse was constructed to only cancel the request context on Close(), not also close the Reader. The Reader should also be closed to ensure all finalization can still happen if the Write to the PipeWriter is blocked when the context is canceled.
…olation table output for same
Bug fix for schema name being lost when adding a unique index to a table through Doltgres.
Fixes: Panic creating unique index dolthub/doltgresql#725
go-mysql-server
pruneTables
validate_password_strength()
MySQL Docs: https://dev.mysql.com/doc/refman/8.4/en/encryption-functions.html#function_validate-password-strength
compress(), uncompress(), and uncompressed_length()
MySQL Docs:
The library we are using is compress/zlib, which is slightly different from the MySQL zlib implementation. As a result, the compressed data is similar but not byte-for-byte equivalent. However, this library is still able to uncompress any MySQL-compressed data. There is czlib, but it is not actively maintained and might require cgo.

The STR_TO_DATE function cannot parse "%Y%m%d". I mentioned it in issue #2666.
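The compress/zlib round trip described above can be sketched as follows. This is a simplified illustration, not the go-mysql-server implementation; details such as MySQL's 4-byte uncompressed-length prefix are omitted.

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io"
)

// compressZlib deflates data with Go's compress/zlib. The exact
// bytes may differ from MySQL's zlib output, but the stream is
// still a valid zlib stream.
func compressZlib(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := zlib.NewWriter(&buf)
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// uncompressZlib inflates any zlib stream, including streams
// produced by other zlib implementations.
func uncompressZlib(data []byte) ([]byte, error) {
	r, err := zlib.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

func main() {
	in := []byte("hello, world")
	c, _ := compressZlib(in)
	out, _ := uncompressZlib(c)
	fmt.Println(bytes.Equal(in, out)) // true
}
```

Because inflation only depends on the zlib stream format, uncompressZlib can also decode data compressed by MySQL's own zlib, which is the compatibility property the notes rely on.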
When using the server engine, we returned an error for indexes defined over non-unique not null columns. This meant that filters over these columns would incorrectly return an error when there were duplicate entries. Oddly, this only happens with the server engine and not when using dolt sql-shell directly. The bug stems from a missing check when gathering functional dependencies for equalities.
Related: dolt_diff with commit_hash filter errors with "result max1Row iterator returned more than one row" dolthub/dolt#8365

This expands the index interfaces to make it possible to have vector indexes, and demonstrates it with a proof-of-concept in-memory index. It's a rough implementation with some shortcomings: for instance, it doesn't currently handle tables that consist of multiple partitions.
However, this showcases how to use the GMS interfaces to add a vector index.
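As a rough illustration of what such a proof-of-concept might look like, here is a minimal brute-force in-memory vector index in plain Go. The type and method names are hypothetical and this is not the actual go-mysql-server interface; a real index would plug into the GMS index interfaces mentioned above.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// memVectorIndex is a hypothetical brute-force vector index:
// it stores one vector per row key and answers nearest-neighbor
// queries by scanning every stored vector.
type memVectorIndex struct {
	keys    []int64     // row identifiers
	vectors [][]float64 // one vector per row
}

func (idx *memVectorIndex) Add(key int64, v []float64) {
	idx.keys = append(idx.keys, key)
	idx.vectors = append(idx.vectors, v)
}

// Nearest returns up to k row keys ordered by Euclidean
// distance from q.
func (idx *memVectorIndex) Nearest(q []float64, k int) []int64 {
	type cand struct {
		key  int64
		dist float64
	}
	cands := make([]cand, len(idx.keys))
	for i, v := range idx.vectors {
		var d float64
		for j := range v {
			diff := v[j] - q[j]
			d += diff * diff
		}
		cands[i] = cand{idx.keys[i], math.Sqrt(d)}
	}
	sort.Slice(cands, func(a, b int) bool { return cands[a].dist < cands[b].dist })
	if k > len(cands) {
		k = len(cands)
	}
	out := make([]int64, k)
	for i := range out {
		out[i] = cands[i].key
	}
	return out
}

func main() {
	idx := &memVectorIndex{}
	idx.Add(1, []float64{0, 0})
	idx.Add(2, []float64{1, 1})
	idx.Add(3, []float64{5, 5})
	fmt.Println(idx.Nearest([]float64{0.9, 0.9}, 2)) // [2 1]
}
```

A brute-force scan like this is fine for a proof of concept but is O(rows) per lookup; production vector indexes use structures such as HNSW graphs to avoid scanning everything.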
vitess
Supports the following syntax:
create table t (v blob, vector index(v))
create vector index vec on t(v)
alter table t add vector index vec on t(v)
Closed Issues