Commit 76c23bb

Reduce overhead of cache-clobber testing in LookupOpclassInfo().

Commit 03ffc4d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled.  It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.

There does seem to be value in this behavior when trying to test the
opclass-cache loading logic itself, but for other purposes the cost
is excessive.  Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.

While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries
(leading to later core dumps) in the unlikely case that we suffer
OOM while trying to allocate space for a new entry.  (That seems to
be my oversight in 03ffc4d.)  Also, in >= v13, stop allocating one
array entry too many.  That's evidently left over from sloppy
reversion in 851b14b.

Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.

Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us

1 parent 84dc6a4 · commit 76c23bb
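The threshold scheme described in the message (clobber the opclass cache only when debug_invalidate_system_caches_always is at least 3) amounts to a simple level check. A minimal stand-alone sketch of that gating, with a hypothetical helper name that is not part of PostgreSQL:

```c
#include <stdbool.h>

/*
 * Sketch of the gating added by this commit: in branches with the
 * debug_invalidate_system_caches_always GUC, forcing a reload of the
 * opclass cache on every lookup is reserved for level 3 and above,
 * since lower levels already cover ordinary clobber testing at far
 * lower cost.  The helper name is invented for illustration.
 */
bool
should_clobber_opclass_cache(int invalidate_level)
{
    return invalidate_level >= 3;
}
```

In older branches, which lack the GUC, the same decision is made at compile time via `#if defined(CLOBBER_CACHE_RECURSIVELY)`.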

File tree: 1 file changed (+23, -20 lines)

‎src/backend/utils/cache/relcache.c

```diff
@@ -1597,15 +1597,15 @@ LookupOpclassInfo(Oid operatorClassOid,
 		/* First time through: initialize the opclass cache */
 		HASHCTL		ctl;
 
+		/* Also make sure CacheMemoryContext exists */
+		if (!CacheMemoryContext)
+			CreateCacheMemoryContext();
+
 		MemSet(&ctl, 0, sizeof(ctl));
 		ctl.keysize = sizeof(Oid);
 		ctl.entrysize = sizeof(OpClassCacheEnt);
 		OpClassCache = hash_create("Operator class cache", 64,
 								   &ctl, HASH_ELEM | HASH_BLOBS);
-
-		/* Also make sure CacheMemoryContext exists */
-		if (!CacheMemoryContext)
-			CreateCacheMemoryContext();
 	}
 
 	opcentry = (OpClassCacheEnt *) hash_search(OpClassCache,
@@ -1614,39 +1614,42 @@ LookupOpclassInfo(Oid operatorClassOid,
 
 	if (!found)
 	{
-		/* Need to allocate memory for new entry */
+		/* Initialize new entry */
 		opcentry->valid = false;	/* until known OK */
 		opcentry->numSupport = numSupport;
-
-		if (numSupport > 0)
-			opcentry->supportProcs = (RegProcedure *)
-				MemoryContextAllocZero(CacheMemoryContext,
-									   numSupport * sizeof(RegProcedure));
-		else
-			opcentry->supportProcs = NULL;
+		opcentry->supportProcs = NULL;	/* filled below */
 	}
 	else
 	{
 		Assert(numSupport == opcentry->numSupport);
 	}
 
 	/*
-	 * When testing for cache-flush hazards, we intentionally disable the
-	 * operator class cache and force reloading of the info on each call. This
-	 * is helpful because we want to test the case where a cache flush occurs
-	 * while we are loading the info, and it's very hard to provoke that if
-	 * this happens only once per opclass per backend.
+	 * When aggressively testing cache-flush hazards, we disable the operator
+	 * class cache and force reloading of the info on each call. This models
+	 * no real-world behavior, since the cache entries are never invalidated
+	 * otherwise. However it can be helpful for detecting bugs in the cache
+	 * loading logic itself, such as reliance on a non-nailed index. Given
+	 * the limited use-case and the fact that this adds a great deal of
+	 * expense, we enable it only in CLOBBER_CACHE_RECURSIVELY mode.
 	 */
-#if defined(CLOBBER_CACHE_ALWAYS)
+#if defined(CLOBBER_CACHE_RECURSIVELY)
 	opcentry->valid = false;
 #endif
 
 	if (opcentry->valid)
 		return opcentry;
 
 	/*
-	 * Need to fill in new entry.
-	 *
+	 * Need to fill in new entry. First allocate space, unless we already did
+	 * so in some previous attempt.
+	 */
+	if (opcentry->supportProcs == NULL && numSupport > 0)
+		opcentry->supportProcs = (RegProcedure *)
+			MemoryContextAllocZero(CacheMemoryContext,
+								   numSupport * sizeof(RegProcedure));
+
+	/*
 	 * To avoid infinite recursion during startup, force heap scans if we're
 	 * looking up info for the opclasses used by the indexes we would like to
 	 * reference here.
```
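The re-ordering in the second hunk illustrates a general OOM-safety pattern: put the new entry into a self-consistent "invalid, nothing allocated" state before attempting any allocation that can fail, and allocate lazily on the fill path so a failed attempt leaves a sane entry that a later call can retry. A minimal stand-alone sketch of that pattern, using plain calloc in place of MemoryContextAllocZero and invented type and function names (not actual PostgreSQL code):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-in for the cache entry type; names are hypothetical. */
typedef unsigned int RegProcedure;

typedef struct OpClassCacheEnt
{
    bool          valid;        /* is the entry fully loaded? */
    int           numSupport;
    RegProcedure *supportProcs; /* NULL until successfully allocated */
} OpClassCacheEnt;

/*
 * Pattern from the patch: first initialize the entry to a safe state
 * (invalid, no dangling pointers), and only then attempt allocation.
 * If the allocation fails, the entry is still self-consistent, so a
 * retry of the lookup can pick up where we left off instead of
 * crashing on a half-built entry.
 */
bool
fill_opclass_entry(OpClassCacheEnt *entry, int numSupport)
{
    entry->valid = false;           /* until known OK */
    entry->numSupport = numSupport;
    entry->supportProcs = NULL;     /* filled below */

    /* Allocate lazily, unless a previous attempt already succeeded. */
    if (entry->supportProcs == NULL && numSupport > 0)
    {
        entry->supportProcs = calloc((size_t) numSupport,
                                     sizeof(RegProcedure));
        if (entry->supportProcs == NULL)
            return false;           /* entry stays sane; caller may retry */
    }

    entry->valid = true;
    return true;
}
```

The same idea explains why the patch moved the CreateCacheMemoryContext() call ahead of hash_create() in the first hunk: ensure the prerequisites exist before creating anything that would be left dangling on failure.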
