Stamp IvorySQL 2.2 #196

Merged

caryhuang merged 132 commits into IvorySQL:IVORY_REL_2_STABLE from wangjie-star:IVORY_REL_2_STABLE_2_2 on Mar 22, 2023

Conversation

@wangjie-star
Contributor

No description provided.

tglsfdc and others added 30 commits on March 14, 2023 at 00:11
Commit fede154 introduced FILTER by jamming it into the existing
example introducing HAVING, which seems pedagogically poor to me;
and it added no information about what the keyword actually does.
Not to mention that the claimed output didn't match the sample
data being used in this running example.

Revert that and instead make an independent example using FILTER.
To help drive home the point that it's a per-aggregate filter,
we need to use two aggregates not just one; for consistency
expand all the examples in this segment to do that.

Also adjust the example using WHERE ... LIKE so that it'd produce
nonempty output with this sample data, and show that output.

Back-patch, as the previous patch was.  (Sadly, v10 is now out
of scope.)

Discussion: https://postgr.es/m/166794307526.652.9073408178177444190@wrigleys.postgresql.org
Add a little to the header comments for these functions to make it
clearer what guarantees about commit behavior are provided to callers.
(See commit f929441 for context.)

Although this is only a comment change, it's really documentation
aimed at authors of extensions, so it seems appropriate to back-patch.

Yugo Nagata and Tom Lane, per further discussion of bug #17434.

Discussion: https://postgr.es/m/17434-d9f7a064ce2a88a3@postgresql.org
Replace the stopgap fix I made in 0e758ae89 with a cleaner one.

The real problem with 4ab5dae is that it contorted this function's
logic substantially, by introducing a third code path that required
different behavior in the function's main loop.  That seems quite
unnecessary on closer inspection: the new IsBinaryUpgrade case can
just share the behavior of the other immediate-unlink cases.  Hence,
revert 4ab5dae and most of 0e758ae89 (keeping the latter's
save/restore errno fix), and add IsBinaryUpgrade to the set of
conditions tested to choose immediate unlink.

Also fix some additional places with sloppy handling of errno,
to ensure we have an invariant that we always continue processing
after any non-ENOENT failure of do_truncate.  I doubt that that's
fixing any bug of field importance, so I don't feel it necessary to
back-patch; but we might as well get it right while we're here.

Also improve the comments, which had drifted a bit from what the
code actually does, and neglected to mention some important
considerations.

Back-patch to v15, not because this is fixing any bug but because
it doesn't seem like a good idea for v15's mdunlinkfork logic to be
significantly different from both v14 and v16.

Discussion: https://postgr.es/m/3797575.1667924888@sss.pgh.pa.us
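
A rough standalone C sketch of the errno discipline described above: each
failure is reported with its own errno, the errno value is preserved across
the logging call, and the loop keeps processing the remaining segments after
any non-ENOENT failure.  The helper names are made up; this is not the actual
mdunlinkfork()/do_truncate() code.

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Truncate one numbered segment; report failures but keep going. */
  static int
  truncate_segment(const char *path)
  {
      if (truncate(path, 0) < 0)
      {
          int     save_errno = errno;     /* keep errno across the log call */

          if (save_errno != ENOENT)
              fprintf(stderr, "could not truncate \"%s\": %s\n",
                      path, strerror(save_errno));
          errno = save_errno;
          return -1;
      }
      return 0;
  }

  /* Process every segment even if an earlier one fails. */
  static void
  truncate_all_segments(const char *const *paths, int npaths)
  {
      for (int i = 0; i < npaths; i++)
          (void) truncate_segment(paths[i]);
  }

  int
  main(void)
  {
      const char *const paths[] = {"/tmp/seg.0", "/tmp/seg.1", "/tmp/seg.2"};

      truncate_all_segments(paths, 3);
      return 0;
  }
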
sync_handler was not mentioned in the comment block of the function.

Oversight in dee663f.

Author: Aleksander Alekseev
Discussion: https://postgr.es/m/CAJ7c6TPUd9BwNY47TtMxaijLHSbyHNdhu=kvbGnvO_bi+oC6_Q@mail.gmail.com
Backpatch-through: 14
The comments atop seem to indicate that we always accumulate invalidation
messages in a top-level transaction, which is neither required nor matches
the code.

Author: Amit Kapila
Reviewed by: Masahiko Sawada
Backpatch-through: 14, where it was introduced in commit c55040c
Discussion: https://postgr.es/m/CAA4eK1LxGgnUroPz8STb6OfjVU1yaHoSA+T63URwmGCLdMJ0LA@mail.gmail.com
In commit 450ee7012 I supposed that all platforms we now care about have
snprintf(), since that's required by C99.  Turns out that Microsoft did
not get around to adding that until VS2015.  We've dropped support for
VS2013 as of HEAD (cf 6203583), but not in the back branches, so add
a hack for this in the back branches only.

There's no easy shortcut to an exact emulation of standard snprintf
in VS2013, but fortunately we don't need one: this code was just fine
with using sprintf before 450ee7012, so we can make it do so again
on that platform (and any others where the problem might crop up).

Per bug #17681 from Daisuke Higuchi.  Back-patch to v12, like the
previous patch.

Discussion: https://postgr.es/m/17681-485ba2ec13e7f392@postgresql.org
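
A minimal standalone sketch of the sort of fallback described above, assuming
a version test on _MSC_VER (1900 corresponds to VS2015); the format_into()
macro is hypothetical and not the actual back-branch patch.

  #include <stdio.h>

  /*
   * Old MSVC lacks a C99-conforming snprintf().  When the buffer is known to
   * be large enough, falling back to sprintf() is acceptable there.
   */
  #if defined(_MSC_VER) && _MSC_VER < 1900
  #define format_into(buf, size, fmt, ...) sprintf((buf), (fmt), __VA_ARGS__)
  #else
  #define format_into(buf, size, fmt, ...) snprintf((buf), (size), (fmt), __VA_ARGS__)
  #endif

  int
  main(void)
  {
      char    buf[64];

      format_into(buf, sizeof(buf), "%d-%d", 2023, 3);
      puts(buf);
      return 0;
  }
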
The stanza "SET STORAGE may need to add a TOAST table" does not
test what it's supposed to, and hasn't done so since we added
the ability to store constant column default values as metadata.
We need to use a non-constant default to get the expected table
rewrite to actually happen.

Fix that, and add the missing checks that would have exposed the
problem to begin with.

Noted while reviewing a patch that made changes in this test case.
Back-patch to v11 where the problem came in.
The original report was concerned with a possible inconsistency
between the heap and the visibility map, which I was unable to
confirm. The concern has been retracted.

However, there did seem to be a torn page hazard when using
checksums. By not setting the heap page LSN during redo, the
protections of minRecoveryPoint were bypassed. Fixed, along with a
misleading comment.

It may have been impossible to hit this problem in practice, because
it would require a page tear between the checksum and the flags, so I
am marking this as a theoretical risk. But, as discussed, it did
violate expectations about the page LSN, so it may have other
consequences.

Backpatch to all supported versions.

Reported-by: Konstantin Knizhnik
Reviewed-by: Konstantin Knizhnik
Discussion: https://postgr.es/m/fed17dac-8cb8-4f5b-d462-1bb4908c029e@garret.ru
Backpatch-through: 11
…t. Test files should now ignore has_wal_read_bug() so long as
wait_for_catchup() is their only known way of reaching the bug.  That's at
least five files today, a number expected to grow over time.  This commit
removes skip logic from three.  By doing so, systems having the bug regain
the ability to catch other kinds of defects via those three tests.  The
other two, 002_databases.pl and 031_recovery_conflict.pl, have been
unprotected.  Back-patch to v15, where done_testing() first became our
standard.

Discussion: https://postgr.es/m/20221030031639.GA3082137@rfd.leadboat.com
Currently this only allows for one argument, which must be present, and
always returns a single string. With this change the following now all
work:

  $all_config = $node->config_data;
  %config_map = ($node->config_data);
  $incdir = $node->config_data('--include-dir');
  ($incdir, $sharedir) = $node->config_data(
      qw(--include-dir --share-dir));

Backpatch to release 15 where this was introduced.

Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net

Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
The current code looks for the sample file in the source directory, but
it seems better to test against the installed sample file.

Backpatch to release 15 where the test was introduced.

Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net

Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
This column is new in v16, but it was listed in the v15 docs too
via a back-patching fumble.

Per report from Peter Gigowski; diagnosis by Julien Rouhaud.

Discussion: https://postgr.es/m/CAM7cJ6XY_PAmx0kGn6U307EKZ+qXDFEBH27WP87-_ygetnBuxQ@mail.gmail.com
During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a
cleanup lock on the new bucket page after acquiring an exclusive lock on
it and raising a PANIC error on failure. However, it is quite possible
that checkpointer can acquire the pin on the same page before acquiring a
lock on it, and then the replay will lead to an error. So instead, directly
acquire the cleanup lock on the new bucket page during
XLOG_HASH_SPLIT_ALLOCATE_PAGE replay operation.

Reported-by: Andres Freund
Author: Robert Haas
Reviewed-By: Amit Kapila, Andres Freund, Vignesh C
Backpatch-through: 11
Discussion: https://postgr.es/m/20220810022617.fvjkjiauaykwrbse@awork3.anarazel.de
UPDATE was listed twice and DELETE was omitted; replace one UPDATE
with DELETE instead.

Backpatch through v15 where MERGE was added.

Author: Myo Wai Thant <myo.waithant@fujitsu.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/OSAPR01MB43247E46931E9E9CFC4AA0F29A079@OSAPR01MB4324.jpnprd01.prod.outlook.com
Backpatch-through: 15
This comment references a struct that disappeared before MERGE was
merged ... and ExecDelete is not called by the committed MERGE anyway.
Revert to the original wording.

Backpatch to 15
This restores compatibility with the not-yet-released successor of
version 20220807.0.  Back-patch to 9.4, which introduced this code.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20221117061805.GA4020280@rfd.leadboat.com
Reporting tuples for which nothing is done is useless and goes against
the documented behavior, so don't do it.

Backpatch to 15.

Reported by: Luca Ferrari
Discussion: https://postgr.es/m/CAKoxK+42MmACUh6s8XzASQKizbzrtOGA6G1UjzCP75NcXHsiNw@mail.gmail.com
Version strings with unequal numbers of parts were being compared
incorrectly. We cure this by treating a missing part in the shorter
version as 0.

per complaint from Jehan-Guillaume de Rorthais, but the fix is mine, not
his.

Discussion: https://postgr.es/m/20220628225325.53d97b8d@karst

Backpatch to release 14 where this code was introduced.
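
A standalone C sketch of the comparison rule, purely illustrative and not the
actual PostgreSQL::Version Perl code: a missing part in the shorter version
string is treated as 0, so "9.6" compares equal to "9.6.0" and less than
"9.6.3".

  #include <stdio.h>
  #include <stdlib.h>

  /* Consume one dot-separated numeric part; an exhausted string yields 0. */
  static long
  next_part(const char **s)
  {
      char   *end;
      long    val;

      if (**s == '\0')
          return 0;
      val = strtol(*s, &end, 10);
      *s = end;
      if (**s == '.')
          (*s)++;
      return val;
  }

  static int
  compare_versions(const char *a, const char *b)
  {
      while (*a || *b)
      {
          long    pa = next_part(&a);
          long    pb = next_part(&b);

          if (pa != pb)
              return (pa < pb) ? -1 : 1;
      }
      return 0;
  }

  int
  main(void)
  {
      printf("%d\n", compare_versions("9.6", "9.6.0"));  /* 0  */
      printf("%d\n", compare_versions("9.6", "9.6.3"));  /* -1 */
      printf("%d\n", compare_versions("15.1", "14.6"));  /* 1  */
      return 0;
  }
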
basics.source is supposed to be pretty closely in step with
the examples in chapter 2 of the tutorial, but I forgot to
update it in commit f05a5e000.  Fix that, and adjust a couple
of other discrepancies that had crept in over time.

(I notice that advanced.source is nowhere near being in sync
with chapter 3, but I lack the ambition to do something
about that right now.)
The test output varies when debug_discard_caches is enabled,
because that causes extra executions of recomputeNamespacePath.
Maybe putting a hook in that was a bad idea, but as a stopgap,
just turn off debug_discard_caches in this test.

Per buildfarm (now that we have debug_discard_caches coverage
again).  Back-patch to v15 where this module was added.

Discussion: https://postgr.es/m/2267406.1668804934@sss.pgh.pa.us
ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next,
because that does "the right thing" with SHMQueueInsertBefore(). While that
largely works, it's certainly not correct, and it's unnecessary - we can just
use a SHM_QUEUE* to point to the insertion point.

Noticed when testing a 32-bit build of postgres with undefined behavior
sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't
sufficiently aligned (required since 46d6e5f, ensured indirectly, via
ShmemAllocRaw() guaranteeing cacheline alignment).

For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently
we should replace all the use of PROC_QUEUE and SHM_QUEUE with ilist.h, but
that's a larger change that we don't want to backpatch.

Backpatch to all supported versions - it's useful to be able to run postgres
under UBSan.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221117014230.op5kmgypdv2dtqsf@awork3.anarazel.de
Backpatch: 11-
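
A standalone C sketch of the underlying idea: keep the insertion point as a
pointer to the embedded link type rather than pretending the queue head is an
element of the node type.  All names here are invented; this is not the
actual PROC_QUEUE/SHM_QUEUE code.

  #include <stdio.h>

  typedef struct list_link
  {
      struct list_link *prev;
      struct list_link *next;
  } list_link;

  typedef struct waiter
  {
      list_link   links;          /* embedded queue link (first member) */
      int         priority;
  } waiter;

  /* Insert 'node' immediately before the link 'pos'. */
  static void
  insert_before(list_link *pos, list_link *node)
  {
      node->prev = pos->prev;
      node->next = pos;
      pos->prev->next = node;
      pos->prev = node;
  }

  int
  main(void)
  {
      list_link   head = {&head, &head};  /* empty circular queue; not a waiter */
      waiter      w1 = {{NULL, NULL}, 1};
      waiter      w2 = {{NULL, NULL}, 2};

      /*
       * The insertion point is a list_link *.  Pointing a waiter * at 'head'
       * would "work" by accident but is undefined behavior, because the head
       * is not embedded in a waiter.
       */
      list_link  *insert_pos = &head;

      insert_before(insert_pos, &w1.links);
      insert_before(&w1.links, &w2.links);    /* w2 ends up ahead of w1 */

      for (list_link *l = head.next; l != &head; l = l->next)
          printf("priority %d\n", ((waiter *) l)->priority);
      return 0;
  }
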
pageinspect has occasionally failed on slow buildfarm members,
with symptoms indicating that the expected effects of VACUUM
FREEZE didn't happen.  This is presumably because a background
transaction such as auto-analyze was holding back global xmin.

We can work around that by using a temp table in the test.
Since commit a7212be, that will use an up-to-date cutoff xmin
regardless of other processes.  And pageinspect itself shouldn't
really care whether the table is temp.

Back-patch to v14.  There would be no point in older branches
without back-patching a7212be, which seems like more trouble
than the problem is worth.

Discussion: https://postgr.es/m/2892135.1668976646@sss.pgh.pa.us
These functions have been marked parallel safe, but the buildfarm's
response to commit e2933a6e1 exposed the flaw in that thinking:
if you try to use them on a temporary table, and they run inside
a parallel worker, they'll fail with "cannot access temporary tables
during a parallel operation".

Fix that by marking them parallel restricted instead.  Maybe someday
we'll have a better answer and can reverse this decision.

Back-patch to v15.  To go back further, we'd have to devise variant
versions of pre-1.10 pageinspect versions.  Given the lack of field
complaints, it doesn't seem worth the trouble.  We'll just deem
this case unsupported pre-v15.  (If anyone does complain, it might
be good enough to update the markings manually in their DBs.)

Discussion: https://postgr.es/m/E1ox94a-000EHu-VH@gemulon.postgresql.org
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function.  Add some commentary about that
while it's fresh in mind.  Also remove some unused macros in
wparser_def.c.

While at it, I noticed that when commit 78e73e8 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.

Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
The Hunspell project moved from Sourceforge to Github sometime
in 2016, so update our links to match the new URL.  Backpatch
the doc changes to all supported versions.

Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
Once a logical slot has acquired a catalog_xmin, it doesn't let go of
it, even when invalidated by exceeding the max_slot_wal_keep_size, which
means that dead catalog tuples are not removed by vacuum anymore since
the point when the slot is invalidated, until the slot is dropped.  This could be
catastrophic if catalog churn is high.

Change the computation of Xmin to ignore invalidated slots,
to prevent dead rows from accumulating.

Backpatch to 13, where slot invalidation appeared.

Author: Sirisha Chamarthi <sirichamarthi22@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAKrAKeUEDeqquN9vwzNeG-CN8wuVsfRYbeOUV9qKO_RHok=j+g@mail.gmail.com
This was trying to exercise an ERROR we don't actually have.

Backpatch to 15.

Reported by Teja Mupparti <Tejeswar.Mupparti@microsoft.com>
Discussion: https://postgr.es/m/SN6PR2101MB1040BDAF740EA4389484E92BF0079@SN6PR2101MB1040.namprd21.prod.outlook.com
Currently there is a race condition where if concurrent TAP tests both
test that they can open a port they will assume that it is free and use
it, causing one of them to fail. To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.

Ports are reserved in a directory set by the environment setting
PG_TEST_PORT_DIR, or if that doesn't exist a subdirectory of the top
build directory as set by Makefile.global, or its own
tmp_check directory.

The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.

Per complaint from Andres Freund

Backpatched from 9b4eafcaf4 to all live branches

Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
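
A rough standalone C sketch of the reservation idea: take a non-blocking
exclusive lock on a file named after the port and hold it for the life of the
test.  The real implementation lives in the Perl TAP infrastructure and
differs in detail; reserve_port() and the paths here are made up.

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/file.h>
  #include <unistd.h>

  /*
   * Returns an open fd whose exclusive lock marks the port as reserved, or -1
   * if another live process already holds the reservation.  The lock (and the
   * reservation) goes away when the fd is closed or the process exits.
   */
  static int
  reserve_port(const char *dir, int port)
  {
      char    path[1024];
      int     fd;

      snprintf(path, sizeof(path), "%s/%d.lock", dir, port);
      fd = open(path, O_RDWR | O_CREAT, 0644);
      if (fd < 0)
          return -1;
      if (flock(fd, LOCK_EX | LOCK_NB) < 0)
      {
          close(fd);              /* someone else has reserved this port */
          return -1;
      }
      return fd;
  }

  int
  main(void)
  {
      int     fd = reserve_port("/tmp", 54321);

      printf(fd >= 0 ? "port reserved\n" : "port already reserved\n");
      return 0;
  }
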
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930f, fccebe4).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples.  While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits.  It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.

Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages.  Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.

Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.

Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me).  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
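
A standalone C sketch of the "bound the work, then fall back" pattern, with a
hypothetical per-page probe standing in for the index and heap visits; this
is not the actual get_actual_variable_range() code.

  #include <stdbool.h>
  #include <stdio.h>

  #define MAX_PAGES_VISITED 100

  /* Hypothetical per-page probe: sets *value if a live extremal tuple is found. */
  typedef bool (*probe_fn) (int pageno, int *value);

  static int
  actual_or_cached_minimum(probe_fn probe, int npages, int cached_min)
  {
      int     value;

      for (int pageno = 0; pageno < npages; pageno++)
      {
          if (pageno >= MAX_PAGES_VISITED)
              return cached_min;  /* give up; use the pg_statistic value */
          if (probe(pageno, &value))
              return value;       /* found a live extremal tuple */
      }
      return cached_min;
  }

  /* Pretend the first live tuple is on page 150, beyond the limit. */
  static bool
  dummy_probe(int pageno, int *value)
  {
      if (pageno == 150)
      {
          *value = 42;
          return true;
      }
      return false;
  }

  int
  main(void)
  {
      /* Prints 7: the limit is hit, so the cached estimate is used. */
      printf("estimate = %d\n", actual_or_cached_minimum(dummy_probe, 1000, 7));
      return 0;
  }
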
per gripe from Andres Freund and Tom Lane

Backpatch to all live branches.
hlinnaka and others added 29 commits on March 15, 2023 at 14:08
Commit c4649cc removed the "shared" and "ntapes" arguments, but the
comment still talked about "shared". It also talked about "a shared
file handle", which was technically correct because even before commit
c4649cc, the "shared file handle" referred to the "fileset"
argument, not "shared". But it was very confusing. Improve the
comment.

Also add a comment on what the "preallocate" argument does.

Backpatch to v15, just to make backpatching other patches easier in
the future.

Discussion: https://www.postgresql.org/message-id/af989685-91d5-aad4-8f60-1d066b5ec309@enterprisedb.com
Reviewed-by: Peter Eisentraut
The test in question was meant to be testing Memoize to ensure it worked
correctly when the inner side of the join contained lateral vars; however,
nothing in the lateral subquery stopped it from being pulled up into the
main query, so the planner did that, and that meant no more lateral vars.

Here we add a simple ORDER BY to stop the planner from being able to
pullup the lateral subquery.

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_LHJaN4L-tXpKMiPFnsCJWU1P8Xh59o0W7AA6UN99=cQ@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
b762fed64 recently changed this test to prevent subquery pullup to allow
us to test Memoize with lateral_vars.  As pointed out by Tom Lane, OFFSET
0 is our standard way of preventing subquery pullups, so do it that way
instead.

Discussion: https://postgr.es/m/2144818.1674517061@sss.pgh.pa.us
Backpatch-through: 14, same as b762fed64
When libpqrcv_connect (also known as walrcv_connect()) failed, it leaked the
libpq connection. In most paths that's fairly harmless, as the calling process
will exit soon after. But e.g. CREATE SUBSCRIPTION could lead to a somewhat
longer lived leak.

Fix by releasing resources, including the libpq connection, on error.

Add a test exercising the error code path. To make it reliable and safe, the
test tries to connect to port=-1, which happens to fail during connection
establishment, rather than during connection string parsing.

Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20230121011237.q52apbvlarfv6jm6@awork3.anarazel.de
Backpatch: 11-
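
A minimal libpq sketch of the release-on-failure pattern described above
(build with -lpq).  connect_or_cleanup() is a made-up helper, not the actual
libpqrcv_connect() code.

  #include <stdio.h>
  #include <libpq-fe.h>

  static PGconn *
  connect_or_cleanup(const char *conninfo)
  {
      PGconn *conn = PQconnectdb(conninfo);

      if (conn == NULL)
          return NULL;
      if (PQstatus(conn) != CONNECTION_OK)
      {
          fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
          PQfinish(conn);         /* free the PGconn instead of leaking it */
          return NULL;
      }
      return conn;
  }

  int
  main(void)
  {
      /* port=-1 fails during connection establishment, as in the new test. */
      PGconn *conn = connect_or_cleanup("host=localhost port=-1");

      if (conn)
          PQfinish(conn);
      return 0;
  }
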
The drop database command waits for the logical replication sync worker to
accept ProcSignalBarrier and the worker's slot creation waits for the drop
database to finish which leads to a deadlock. This happens because the
tablesync worker holds interrupts while creating a slot.

We prevent cancel/die interrupts while creating a slot in the table sync
worker because it is possible that before the server finishes this
command, a concurrent drop subscription happens which would complete
without removing this slot and that leads to the slot existing until the
end of walsender. However, the slot will eventually get dropped at the
walsender exit time, so there is no danger of the dangling slot.

This patch reallows cancel/die interrupts while creating a slot and
modifies the test to wait for slots to become zero to prevent finding an
ephemeral slot.

The reported hang doesn't happen in PG14 as the drop database starts to
wait for ProcSignalBarrier with PG15 (commits 4eb2176 and e2f65f4)
but it is good to backpatch this till PG14 as it is not a good idea to
prevent interrupts during a network call that could block indefinitely.

Reported-by: Lakshmi Narayanan Sreethar
Diagnosed-by: Andres Freund
Author: Hou Zhijie
Reviewed-by: Vignesh C, Amit Kapila
Backpatch-through: 14, where it was introduced in commit 6b67d72
Discussion: https://postgr.es/m/CA+kvmZELXQ4ZD3U=XCXuG3KvFgkuPoN1QrEj8c-rMRodrLOnsg@mail.gmail.com
network_ops is an opclass family of SpGiST, and the opclass able to
work on the inet type is named inet_ops.

Oversight in 7a1cd52, that reworked the design of the table listing all
the operators available.

Reported-by: Laurence Parry
Reviewed-by: Tom Lane, David G. Johnston
Discussion: https://postgr.es/m/167458110639.2667300.14741268666497110766@wrigleys.postgresql.org
Backpatch-through: 14
If the final chunk of an oversized tuple being written out to disk was
exactly 32760 bytes, it would be corrupted due to a fencepost bug.

Bug #17619.  Back-patch to 11 where the code arrived.

While testing that (see test module in archives), I (tmunro) noticed
that the per-participant page counter was not initialized to zero as it
should have been; that wasn't a live bug when it was written since DSM
memory was originally always zeroed, but since 14
min_dynamic_shared_memory might be configured and it supplies non-zeroed
memory, so that is also fixed here.

Author: Dmitry Astapov <dastapov@gmail.com>
Discussion: https://postgr.es/m/17619-0de62ceda812b8b5%40postgresql.org
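
A standalone C sketch of chunked writes with the boundary handled so that a
final chunk of exactly the maximum size is still written whole; the chunk
size, helper name, and use of stdio are illustrative, not the actual shared
tuplestore code.

  #include <stdio.h>
  #include <string.h>

  #define CHUNK_SIZE 32760

  /* Write 'len' bytes in chunks of at most CHUNK_SIZE each. */
  static void
  write_chunked(FILE *f, const char *data, size_t len)
  {
      size_t  offset = 0;

      while (offset < len)
      {
          /*
           * Only cap the chunk when *more* than CHUNK_SIZE remains: a
           * remainder of exactly CHUNK_SIZE is a complete final chunk and
           * must not be mishandled at the boundary.
           */
          size_t  thischunk = (len - offset > CHUNK_SIZE) ? CHUNK_SIZE
                                                          : (len - offset);

          fwrite(data + offset, 1, thischunk, f);
          offset += thischunk;
      }
  }

  int
  main(void)
  {
      static char buf[CHUNK_SIZE * 2];    /* final chunk is exactly CHUNK_SIZE */
      FILE       *f = tmpfile();

      if (f == NULL)
          return 1;
      memset(buf, 'x', sizeof(buf));
      write_chunked(f, buf, sizeof(buf));
      printf("wrote %ld bytes\n", ftell(f));      /* 65520 */
      fclose(f);
      return 0;
  }
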
This fixes a bug that, under some circumstances, would cause MERGE to
fail to properly recompute expressions for GENERATED STORED columns.

Formerly, ExecInitModifyTable() did not call ExecInitStoredGenerated()
for a MERGE command, which meant that the generated expressions
information was not computed until later, when the first merge action
was executed. However, if the first merge action to execute was an
UPDATE, then ExecInitStoredGenerated() could decide to skip some
generated columns, if the columns on which they depended were not
updated, which was a problem if the MERGE also contained an INSERT
action, for which no generated columns should be skipped.

So fix by having ExecInitModifyTable() call ExecInitStoredGenerated()
for MERGE, and assume that it isn't safe to skip any generated columns
in a MERGE. Possibly that could be relaxed, by allowing some generated
columns to be skipped for a MERGE without an INSERT action, but it's
not clear that it's worth the effort.

Noticed while investigating bug #17759. Back-patch to v15, where MERGE
was added.

Dean Rasheed, reviewed by Tom Lane.

Discussion:
  https://postgr.es/m/17759-e76d9bece1b5421c%40postgresql.org
  https://postgr.es/m/CAEZATCXb_ezoMCcL0tzKwRGA1x0oeE%3DawTaysRfTPq%2B3wNJn8g%40mail.gmail.com
This test has been added as of 857ee8e that has introduced the SQL
function txid_status(), with the purpose of checking that a transaction
ID still in-progress during a crash is correctly marked as aborted after
recovery finishes.

This test is unstable, and some configuration scenarios make it easier
to reproduce (wal_level=minimal, wal_compression=on) because the WAL
holding the information about the in-progress transaction ID may not
have made it to disk yet, hence a post-crash recovery may cause the same
XID to be reused, triggering a test failure.

We have discussed a few approaches, like making this function force a
WAL flush to make it reliable across crashes, but we don't want to pay a
performance penalty in some scenarios, as well.  The test could have
been tweaked to enforce a checkpoint but that actually breaks the
promise of the test to rely on a stable result of txid_status() after
a crash.

This issue has been reported a few times across the past years, with an
original report from Kyotaro Horiguchi.  The buildfarm machines tanager,
hachi and gokiburi enable wal_compression, and fail on this test
periodically.

Discussion: https://postgr.es/m/3163112.1674762209@sss.pgh.pa.us
Discussion: https://postgr.es/m/20210305.115011.558061052471425531.horikyota.ntt@gmail.com
Backpatch-through: 11
This was only mentioned in the description of the text/label, which
are marked as being in quotes in the synopsis, which can cause
confusion (as witnessed on IRC).

Also separate the literal and NULL cases in the parameter list, per
suggestion from Tom Lane.

Also add an example of dropping a security label.

Dagfinn Ilmari Mannsåker, with some tweaks by me

Discussion: https://postgr.es/m/87sffqk4zp.fsf@wibble.ilmari.org
DST law changes in Greenland and Mexico.  Notably, a new timezone
America/Ciudad_Juarez has been split off from America/Ojinaga.

Historical corrections for northern Canada, Colombia, and Singapore.
An early release of AF_UNIX in Windows apparently supported Linux-style
"abstract" Unix sockets, but they do not seem to work in current Windows
versions and there is no mention of any of this in the Winsock
documentation.  Remove the mention of Windows from the documentation.

Back-patch to 14, where commit c9f0624 landed.

Discussion: https://postgr.es/m/CA%2BhUKGKrYbSZhrk4NGfoQGT_3LQS5pC5KNE1g0tvE_pPBZ7uew%40mail.gmail.com
Back-patch to 15, where in-tree CI began.

Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/1441145.1675300332%40sss.pgh.pa.us
So far we have used containers for testing windows on cirrus-ci. Unfortunately
they come with substantial overhead: First, the container images are pulled
onto the host on-demand. Due to the large size of windows containers, that
ends up taking nearly 4 minutes. Secondly, IO is slow, leading to CI runs
taking long.

Thus switch to windows VMs, improving windows CI times by well over 2x.

Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/211afb88-6df6-b74d-f1b7-84b5f21ad875@gmail.com
Backpatch: 15-, where CI was added
Breaking <phrase> over two lines is not handled by psql's
create_help.pl.  (It creates faulty \help output.)

Undo the formatting change introduced by
9bdad1b to fix this for now.
The prior coding of int64_div_fast_to_numeric() had a number of bugs
that would cause it to fail under different circumstances, such as
with log10val2 <= 0, or log10val2 a multiple of 4, or in the "slow"
numeric path with log10val2 >= 10.

None of those could be triggered by any of our current code, which
only uses log10val2 = 3 or 6. However, they made it a hazard for any
future code that might use it. Also, since this is exported by
numeric.c, users writing their own C code might choose to use it.

Therefore fix, and back-patch to v14, where it was introduced.

Dean Rasheed, reviewed by Tom Lane.

Discussion: https://postgr.es/m/CAEZATCW8gXgW0tgPxPgHDPhVX71%2BSWFRkhnXy%2BTfGDsKLepu2g%40mail.gmail.com
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
pqsecure_open_gss() includes a code path handling error messages with
v2-style protocol messages coming from the server.  The client-side
buffer holding the error message does not force a NULL-termination, with
the data of the server getting copied to the errorMessage of the
connection.  Hence, it would be possible for a server to send an
unterminated string and copy arbitrary bytes in the buffer receiving the
error message in the client, opening the door to a crash or even data
exposure.

As at this stage of the authentication process the exchange has not been
completed yet, this could be abused by an attacker without Kerberos
credentials.  Clients that have a valid kerberos cache are vulnerable as
libpq opportunistically requests for it except if gssencmode is
disabled.

Author: Jacob Champion
Backpatch-through: 12
Security: CVE-2022-41862
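
A standalone C sketch of the defensive pattern implied by the fix: copy at
most one byte less than the buffer size from the wire and force a trailing
NUL before treating the data as a C string.  The names here are invented;
this is not the actual pqsecure_open_gss() change.

  #include <stdio.h>
  #include <string.h>

  #define ERRBUF_SIZE 256

  static void
  report_server_error(const char *wiredata, size_t wirelen)
  {
      char    errbuf[ERRBUF_SIZE];
      size_t  n = (wirelen < ERRBUF_SIZE - 1) ? wirelen : ERRBUF_SIZE - 1;

      memcpy(errbuf, wiredata, n);
      errbuf[n] = '\0';           /* never trust the peer to terminate it */
      fprintf(stderr, "server error: %s\n", errbuf);
  }

  int
  main(void)
  {
      const char  raw[] = {'o', 'o', 'p', 's'};   /* no terminating NUL */

      report_server_error(raw, sizeof(raw));
      return 0;
  }
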
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3748d8972214a3d1e316cffc19824cd948e9e2d8
In standby mode, we don't actually report progress of recovery,
but up until now, startup_progress_timeout_handler() nevertheless
got called every log_startup_progress_interval seconds. That's
an unnecessary expense, so avoid it.

Report by Thomas Munro. Patch by Bharath Rupireddy, reviewed by
Simon Riggs, Thomas Munro, and me. Back-patch to v15, where
the problem was introduced.

Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKCHSffAj8zZJKJvNX7ygnQFxVD6wm1d-2j3fVw%2BMafPQ%40mail.gmail.com
This reverts commit 98e7234. I
forgot that we're about to wrap a release, and this fix isn't
critical enough to justify committing it right before we wrap
a release.

Discussion: http://postgr.es/m/2676424.1675700113@sss.pgh.pa.us
caryhuang merged commit c07f4a5 into IvorySQL:IVORY_REL_2_STABLE on Mar 22, 2023