{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":9454675,"defaultBranch":"fb-mysql-8.0.32","name":"mysql-5.6","ownerLogin":"facebook","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2013-04-15T17:54:43.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/69631?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1717043595.0","currentOid":""},"activityList":{"items":[{"before":"b3f7bf76e068f256e66753043f92f2cbb19f9b40","after":"9d0a754dc9973af0508b3ba260fc337190a3218f","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-23T21:14:11.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add admit and more yields to clone\n\nSummary:\nClone is the largest source of CPU scheduler stalls now.\n- Add yield to the clone chunk loop\n- Add admission to the clone command, so that we could exclude it using the filter if needed.\n\nDifferential Revision: D60061292\n\nfbshipit-source-id: 16415f5d3a68513f823eb2cc19987a441f9ecdf5","shortMessageHtmlLink":"Add admit and more yields to clone"}},{"before":"5475d4fe4bce18d6d1dfa9544021bf65fd6a12dd","after":"b3f7bf76e068f256e66753043f92f2cbb19f9b40","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-23T01:46:38.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Return only the client stats subset of the buffer\n\nSummary: Usage of the variable 'len' causes us to return 1.3KB with every client request, which helps with nothing and increases network usage. Since we know the real size of the buffer, return accordingly.\n\nDifferential Revision: D60023390\n\nfbshipit-source-id: 2972d0f26b97877ea05739d112609ee45f5177ba","shortMessageHtmlLink":"Return only the client stats subset of the buffer"}},{"before":"b8c0125715ee6a0ef97779ac6bec326776e5ec16","after":"5475d4fe4bce18d6d1dfa9544021bf65fd6a12dd","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-18T06:18:05.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Admit statements to admission control\n\nSummary:\nCurrently, there are queries like create/drop table, which enter the scheduler in the sql prepare phase, and stall the scheduler, without giving us the flexibility of excluding them from admission control and thus, the scheduler.\n\nSo, this change allows to admit such queries to scheduler first and then, decide whether it needs to be filtered out providing the flexibility of excluding some known expensive operations like create/drop table which cause stalls later on.\n\nthe scheduler checks: thd->lex->sql_command, which should be create/drop table.\n\nHave not added a generic change to execute command yet. We will observe the effects of this change first.\n\nDifferential Revision: D59736374\n\nfbshipit-source-id: 6c640249ed47fc31c6e3c8ab81bc4362ab8fcff1","shortMessageHtmlLink":"Admit statements to admission control"}},{"before":"8309dea1baa552ba0583cf48bd45549c5948bc40","after":"b8c0125715ee6a0ef97779ac6bec326776e5ec16","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-18T00:21:05.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fixing missed notification in wait_for_all_committing_trxs_to_finish()\n\nSummary:\nThere were two problems with the existing code that caused\nmissed notification such that `wait_for_all_committing_trxs_to_finish()`\nwaited forever:\n1. In group commit the follower threads were incrementing committing\n trxs without holding LOCK_log which meant that the counter could\n increment after going down to 0 and since the counter doesn't use\n mutex (it's atomic) there can be a race such that the predicate check\n while waiting fails and we never notify after so we're stuck. Another\n problem with this was that the counter can go to 0 if only the leader\n thread decrements it before the followers have the chance to\n increment it. This will cause us to wake up before all trxs are\n actually finished.\n2. Since we don't hold a mutex while inc and dec the counter there can\n be a race such that the waiting thread fails the predicate on a\n non-zero counter, then before we go into waiting, the counter is dec\n to 0 and we notify. But since the notification happened before we\n went into waiting we miss it completely and wait forever.\n\nTo fix #1 we make sure that we increment the counter only while holding\nLOCK_log. To fix #2 we take a mutex before notifying when the counter\ngoes to 0, this way the notification will be either entirely before or\nafter the wait.\n\nOut of paranoia also added a gvar to skip waiting altogether and\nconverted the wait() to a wait_for().\n\nDifferential Revision: D59882151\n\nfbshipit-source-id: d60dc70a7d10356274c715b76545a84c14aae5d3","shortMessageHtmlLink":"Fixing missed notification in wait_for_all_committing_trxs_to_finish()"}},{"before":"b800ce1181da2c915b871c4fb6f0ef7748a89f25","after":"8309dea1baa552ba0583cf48bd45549c5948bc40","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-18T00:08:37.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix potential deadlock with LOCK_index between BinlogWrapper and rli_relay_log_raft_reset\n\nSummary:\nThere's a potential deadlock where plugin's BinlogWrapper::IsReadyForAppend acquires smutex -> LOCK_index but in D57198631, we introduced order of LOCK_index->smutex in rli_relay_log_raft_reset()\n\nTo fix this, we can unlock the LOCK_index much earlier.\n\nDifferential Revision: D59862336\n\nfbshipit-source-id: 1d51c171d4ec30560fbf58a67168e05303c3f6b8","shortMessageHtmlLink":"Fix potential deadlock with LOCK_index between BinlogWrapper and rli_…"}},{"before":"ba2e9ab6866561300711c9dbdf3735bb3586bfad","after":"b800ce1181da2c915b871c4fb6f0ef7748a89f25","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-16T15:21:25.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Support additional errors being returned by main.mysqlpump_bugs\n\nSummary:\nAllow handling additional error codes for the test case\nattempting to trigger crashes.\n\nDifferential Revision: D59806782\n\nfbshipit-source-id: 20220f2c9bac568c328658b8e7cfc26c190a59f7","shortMessageHtmlLink":"Support additional errors being returned by main.mysqlpump_bugs"}},{"before":"0a0b812c60c8cbce3c5dfc51825d6e43bc7ed3f7","after":"ba2e9ab6866561300711c9dbdf3735bb3586bfad","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-15T16:53:07.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"add OFF switch to plan capture\n\nSummary: As titled\n\nDifferential Revision: D59707956\n\nfbshipit-source-id: 7cfbd297361a71e10f3b7090812a55ed50e46291","shortMessageHtmlLink":"add OFF switch to plan capture"}},{"before":"b70e8feef296bd31cd71e8b2689fe86092ba43ff","after":"0a0b812c60c8cbce3c5dfc51825d6e43bc7ed3f7","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-12T18:07:19.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Disabling read free replication when the server is operating in idempotent mode\n\nSummary:\nWhen operating in idempotent mode on the replica, it's\nimportant the we read the correct version of the row that exists in the\nengine, we can't just rely on the before image of the row in the binlog.\nThis is because the before image and the current version of the row in\nthe engine might differ when we're running idempotent recovery and if we\njust use the before image it might lead to inconsistencies in the PK and\nother SK when we do a blind write during read free replication. To solve\nthis we disable read free replication when we're executing in idempotent\nmode.\n\nDifferential Revision: D59478062\n\nfbshipit-source-id: ba1efd5407df2f65a22df6e43578819e97736536","shortMessageHtmlLink":"Disabling read free replication when the server is operating in idemp…"}},{"before":"fb8bf5e253e8ccaf7abf9094f50ef80f74c60ae3","after":"b70e8feef296bd31cd71e8b2689fe86092ba43ff","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-10T17:51:57.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix for dd version check test after version bump\n\nSummary: After the addition of DB name to the memory_events_thread schema, the DD version was updated. However, the DD version test also needed a fix. Fixing that.\n\nDifferential Revision: D59569740\n\nfbshipit-source-id: fed6f3d4de5385ad50bca2aaab4701c9c13c6db5","shortMessageHtmlLink":"Fix for dd version check test after version bump"}},{"before":"22f7af38feb4279527c979845d820e149cd2c4c0","after":"fb8bf5e253e8ccaf7abf9094f50ef80f74c60ae3","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-10T17:07:40.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Adding DB transaction ID to metadata event\n\nSummary:\nA DB transaction ID is a db_name:id pair for each database\nchanged by the transaction, the id is a 8 byte unsigned int that\nincreases sequentially. DBTID is carried by the metadata event, we also\nwrite a previous dbtids at the top of every binlog file. DBTIDs are\nrecovered from the last binlog on restart just like GTIDs.\n\nGlobal variable `dbtids` can be used to get the current global dbtids and also\nto forcefully update ids for any set of databases e.g. `SET\nGLOBAL.dbtids = 'db1:10,db2:10'` forcefully sets the id to 10 for both\ndb1 and db2 without touching any other database. DBTIDs can also be\noptionally printed in mysqldump's output for backup/restore purposes.\n\nTo keep the implementation simple we don't maintain holes in the DBTID seq,\nwe always keep the max id for each database in the global map. This\nworks fine in most general cases because we have DB level commit\nordering. In cases where DB level commit ordering is not enforced we\nexpect the update stream to be eventually consistent i.e. eventually\ndbtids will reflect the data correctly.\n\nAlso removed some dead code from metadata event class.\n\nDifferential Revision: D53558839\n\nfbshipit-source-id: 9f6bd38d540f28067332bded20a6261b0c7059b0","shortMessageHtmlLink":"Adding DB transaction ID to metadata event"}},{"before":"3f70a9c1f40460c0cebfba7a77901cf64cbfd7f4","after":"22f7af38feb4279527c979845d820e149cd2c4c0","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-10T16:11:49.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"fix x.performance_schema with thread pool plugin\n\nSummary:\n- x.performance_schema constantly failing with result diff\n\n```\n SELECT * FROM performance_schema.global_status\n WHERE variable_name LIKE '%_LOST' AND variable_value > 0;\n VARIABLE_NAME\tVARIABLE_VALUE\n+Performance_schema_rwlock_classes_lost\t1\n```\n\nbump performance_schema_max_rwlock_classes in the test config to fix it.\n\n- add rocksdb.performance_schema test with similar check to verify that myrocks with or without thread-pool does not lose any instruments.\n\nsee also\n- https://dev.mysql.com/doc/mysql-em-plugin/en/myoem-metric-performanceschema-activity-category.html\n- https://dev.mysql.com/doc/refman/8.4/en/performance-schema-system-variables.html#sysvar_performance_schema_max_rwlock_classes\n\nDifferential Revision: D59500874\n\nfbshipit-source-id: ec90aab1db522f7f08a777dd6f8ca11d6a89b1b5","shortMessageHtmlLink":"fix x.performance_schema with thread pool plugin"}},{"before":"86e9696e7fc0a08376f9c9f369ac17c274f28bc4","after":"3f70a9c1f40460c0cebfba7a77901cf64cbfd7f4","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-09T02:53:52.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"use replace_numeric_round to round off floating point numbers for vector distance\n\nSummary: Those tests are failing in fbcode because of floating point number precision difference, see https://www.internalfb.com/intern/testinfra/diagnostics/13229323943783091.844425085149812.1718861969/. Thus, replace_numeric_round to round off floating point numbers for vector distance\n\nDifferential Revision: D58966151\n\nfbshipit-source-id: 37a31e230f1875cbd966f6284e633befeee9212a","shortMessageHtmlLink":"use replace_numeric_round to round off floating point numbers for vec…"}},{"before":"8cd648095afa81621814a5d669c4fef0318f4ca9","after":"86e9696e7fc0a08376f9c9f369ac17c274f28bc4","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-09T02:04:57.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add DB name to mem_thread table\n\nSummary: We want to be able to track memory consumed per DB at a point in time. Exposing this via the mem_summary_by_thread_event_name table.\n\nDifferential Revision: D59207315\n\nfbshipit-source-id: 9ce4c4a71d2b1b88aee35c8a0d10f77d54ae52ee","shortMessageHtmlLink":"Add DB name to mem_thread table"}},{"before":"45c65deb36ef647d74b313609f25458ca6479657","after":"8cd648095afa81621814a5d669c4fef0318f4ca9","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-08T21:20:51.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Ignore flush cache mem root allocations from memory accounting\n\nSummary:\nFor background, the psi threads have a pool amongst themselves and are assigned to connection THDs on receiving new connections.\n\nThe way tracking works there is a tie up between connection thread and psi thread, so it's expected that when a thread is invoking any mem root(query arena) allocation, there is a corresponding psi thread which is used as the owner thread for a particular memory block and is supposed to remain so till the lifecycle.\n\nHere's the connection thread invocation code:\n\nhttps://www.internalfb.com/code/mysql-5.6/[687c7b62000f8ffa49afb86109fc72c9f8d92f46]/sql/conn_handler/connection_handler_per_thread.cc?lines=246\n\nWhile we are in the lifecycle of this thread, and are managing our own data structures, the alloc/free is well accounted. But when the connection thread becomes responsible for flushing other accumulated thread commits in the queue for ordered commit.\n\nhttps://www.internalfb.com/code/mysql-5.6/[687c7b62000f8ffa49afb86109fc72c9f8d92f46]/sql/binlog.cc?lines=11119\n\n THD *first_seen = fetch_and_process_flush_stage_queue();\n DBUG_EXECUTE_IF(\"crash_after_flush_engine_log\", DBUG_SUICIDE(););\n CONDITIONAL_SYNC_POINT_FOR_TIMESTAMP(\"before_write_binlog\");\n assign_automatic_gtids_to_flush_group(first_seen);\n\n ulonglong thd_count = 0;\n /* Flush thread caches to binary log. */\n for (THD *head = first_seen; head; head = head->next_to_commit) {\n Thd_backup_and_restore switch_thd(current_thd, head);\n head->commit_consensus_error = 0;\n std::pair result\n\nthe pfs thread responsible for the tracking will be different from the thread whose mem root is undergoing allocations. Later on, when the thread is undergoing mem root cleanup, it tries to clean up the block allocated earlier during flush by some other thread and causes an unaccounted memory.\n\n======\n\nThe exact memory block in question is:\n\nhttps://www.internalfb.com/code/mysql-5.6/[fb-mysql-8.0.32%3A687c7b62000f8ffa49afb86109fc72c9f8d92f46]/sql/sql_class.h?lines=3012\n\n if (!m_trans_fixed_log_path)\n m_trans_fixed_log_path = (char *)main_mem_root.Alloc(FN_REFLEN + 1);\n\nThis is roughly 500bytes of memory and depending on the workload, will be memory un-accounted multiple times. For a sample case, that's roughly: 16 * 500 bytes => 8KB/minute\n\n======\n\nDebugging logs(a combination of time + mem_root + allocation block tracking + stack trace):\n\nnew_block: 0x7f640c2c1820 actual: 0x7f640c2c1830\n/opt/mysql/dev/libexec/mysqld(my_print_stacktrace(unsigned char const*, unsigned long, void*, void*)+0x3d) [0x46a054d]\n/opt/mysql/dev/libexec/mysqld(pfs_memory_alloc_vc(unsigned int, unsigned long, PSI_thread**, void*, void*)+0x2b5) [0x20e2385]\n/opt/mysql/dev/libexec/mysqld(MEM_ROOT::AllocSlow(unsigned long)+0xc0) [0x6470498]\n/opt/mysql/dev/libexec/mysqld() [0x1f47e1d]\n/opt/mysql/dev/libexec/mysqld(MYSQL_BIN_LOG::flush_thread_caches(THD*)+0x80a) [0x6316a1c]\n/opt/mysql/dev/libexec/mysqld(MYSQL_BIN_LOG::process_flush_stage_queue(unsigned long long*, bool*, THD**)+0x3a8) [0x6315a32]\n/opt/mysql/dev/libexec/mysqld(MYSQL_BIN_LOG::ordered_commit(THD*, bool, bool)+0x501) [0x630db41]\n/opt/mysql/dev/libexec/mysqld(MYSQL_BIN_LOG::commit(THD*, bool)+0x1534) [0x62ea2c6]\n/opt/mysql/dev/libexec/mysqld(ha_commit_trans(THD*, bool, bool)+0x431) [0x62e56b1]\n/opt/mysql/dev/libexec/mysqld(mysql_execute_command(THD*, bool, unsigned long long*)+0x1366) [0x64a3c3e]\n/opt/mysql/dev/libexec/mysqld(dispatch_sql_command(THD*, Parser_state*, unsigned long long*)+0x1517) [0x6238d97]\n/opt/mysql/dev/libexec/mysqld(dispatch_command(THD*, COM_DATA const*, enum_server_command)+0x2671) [0x640a531]\n/opt/mysql/dev/libexec/mysqld(do_command(THD*)+0x3bd) [0x6412945]\n/opt/mysql/dev/lib64/mysql/plugin/mysql_thread_pool.so(mysql::thread_pool::TpConnHandler::processEvent(void*)+0x37) [0x7f667dd8a887]\n/opt/mysql/dev/lib64/mysql/plugin/mysql_thread_pool.so(mysql::thread_pool::TpWorkerPool::processWorker(void*)+0x611) [0x7f667ddac8c1]\n/usr/local/fbcode/platform010/lib/libc.so.6(+0x9ac28) [0x7f68b109ac28]\n/usr/local/fbcode/platform010/lib/libc.so.6(+0x12cfbb) [0x7f68b112cfbb]\npost-counted thread instr pfs_memory_alloc_vc : 2147483657 pfs_thread: 0x7f63b7672100 owner_thd: 0x7f645633b000 mem_root: 0x7f645633e230 cur_ptr: 0x7f641a450020\nAllocBlock 0x7f641a450020 mem_root 0x7f6471145230\nnew_block: 0x7f641a450020 actual: 0x7f641a450030\n\nset_trans_pos allow: 0x7f641a450030\n\n===\n\nDuring free:\n\nFreeBlocks: 0x7f641a450020 time_now: 1719176454471058346\nuncounted thread instr pfs_memory_free_vc : 2147483657 owner ptr: 0x7f63b7672100 pfs_thread: 0x7f6393d425c0 calling thd: 0x7f645633b000 pthd: 0x7f6471142000 time_now: 1719176454471058346 got_owner_time: 1719176454420663319 o_mem_root: 0x7f645633e230 p_mem_root: 0x7f6471145230\n\nThrough correlation between memory alloc and free tracking, we see that the mem_root alloc for a thread was performed by a different thread.\n\n=======\n\nConsidering this is fleeting memory, ignoring it.\n\nDifferential Revision: D58937400\n\nfbshipit-source-id: c1013e96cd9be8fdca7eb13cd234b6d998bd61a5","shortMessageHtmlLink":"Ignore flush cache mem root allocations from memory accounting"}},{"before":"5a46f54efc058e4d580dfa8b5620b8a599eb4ecd","after":"45c65deb36ef647d74b313609f25458ca6479657","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-08T21:07:08.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fixing tests\n\nSummary:\nFixing some tests to run with or without the mysqlx plugin.\nAdd searching for rocks_sst_dump.\nAllow more error codes to be handled by mysqlpump_bugs.\n\nDifferential Revision: D59489453\n\nfbshipit-source-id: 42e2cbce6c8e1895752c2c842b682cda9e511fb3","shortMessageHtmlLink":"Fixing tests"}},{"before":"6783460b2860982242ae591f08d632585ec5e5ef","after":"5a46f54efc058e4d580dfa8b5620b8a599eb4ecd","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-08T20:57:34.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"consolidate cleanup logic in bypass_select and rpc_open_table\n\nSummary: bypass_select and rpc_open_table have duplicated cleanup logic. creates a context object to consolidate them\n\nDifferential Revision: D59352443\n\nfbshipit-source-id: d6b903b7f69d30e2b92670fca89fec9a3b5e8015","shortMessageHtmlLink":"consolidate cleanup logic in bypass_select and rpc_open_table"}},{"before":"fd6609eb37cd06f4ae4d405b1d6e9c662e7f8301","after":"6783460b2860982242ae591f08d632585ec5e5ef","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-05T21:10:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"MyRocks: skip row lock during DDSE upgrade (#1472)\n\nSummary:\nThere are some instance contain huge number of DBs and each DB contains a lot tables/columns(such as 500K).\nDuring DDSE upgrade, It use single transaction to insert select statement to copy data from old SE to new SE. if new DDSE is rocksdb, it may hit max row lock error.\n\nThere are 3 places during DD upgrade that may involve a lot row locks:\n\n- Insert select statement: This involves copy data from one storage engine to\nanother storage engine. This only happens for DD SE upgrade, and with\nthis change, no row locks are hold.\n- Drop DD tables: when dropping old DD tables, the SQL layer reads one rows\nand deletes it. This happens for both DD SE upgrade and DD version\nupgrade. The required max number of row lock is the max number of rows\namong these DD tables.\n- SYSTEM table update: there are many SQL scripts involving insertion of data\ninto system tables. This happens for both DD SE upgrade and DD version\nupgrade. Currently, some tables involve 15-20K rows locks during\ninsertion. Consider to use bulk load in future if more row lock required\n\nThe change is to skip row lock during DDSE upgrade for Insert select statement. Future change may required to skip other two places(Drop DD tables and SYSTEM table update)\n\nPull Request resolved: https://github.com/facebook/mysql-5.6/pull/1472\n\nDifferential Revision: D59384318\n\nfbshipit-source-id: 40db41d753cbfda0526c4055695ec07a11f904f9","shortMessageHtmlLink":"MyRocks: skip row lock during DDSE upgrade (#1472)"}},{"before":"4182e45786fb2a86aa57283272f0c627db7ebba3","after":"fd6609eb37cd06f4ae4d405b1d6e9c662e7f8301","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-05T16:52:42.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"add rocksdb_enable_autoinc_compat_mode\n\nSummary:\ncurently myrocks always generate interleave auto ids even for simple inserts. while innnodb will generate consecutive auto ids for simple inserts even for innodb_autoinc_lock_mode == 2.\n\nThe change is to add a rocksdb_enable_autoinc_compat_mode variable, which enable allocate bunch of ids per call, similar to INNODB innodb_autoinc_lock_mode == 2 behavior. When enabled, Customer can use get_last_id() for simple inserts. one downside is that it may allocate more ids than necessary(same behavior as innodb_autoinc_lock_mode == 2).\n\n| Scenarios | INNODB innodb_autoinc_lock_mode == 2| Current MyRocks | MyRocks + compat\n| -- | -- | -- | -- |\n| Simple Inserts | allocate consecutive ids | allocate 1 id per call | allocate consecutive ids |\n| Mixed inserts | always allocate \"number of row\" ids | only allocate id when necessary | always allocate \"number of row\" ids |\n| insert on duplicate | always allocate \"number of row\" ids | only allocate id when necessary | always allocate \"number of row\" ids |\n| bulk insert | allocate 1,2,4,8 per call | allocate 1 id per call | allocate 1,2,4,8 per call\n|\n\nPrevious in 5.6 due to https://github.com/facebook/mysql-5.6/issues/263, We start to allocate one id per call. now even with add rocksdb_enable_autoinc_compat_mode == true, It will allocate consecutive ids and the behavior will be same as innodb_autoinc_lock_mode == 2(instead of behavior as described in https://github.com/facebook/mysql-5.6/issues/263).\n\nUpstream:\nin MySQL 8.0.32, the default autoinc locking mode for INNODB is innodb_autoinc_lock_mode == 1\nin MySQL 8.0.36, the default autoinc locking mode for INNODB is innodb_autoinc_lock_mode == 2(interleaved)\n\nDifferential Revision: D58918732\n\nfbshipit-source-id: e8da526f59411c87b9cfc16b1845e73166d36f0e","shortMessageHtmlLink":"add rocksdb_enable_autoinc_compat_mode"}},{"before":"66aa6bb5888704b6bf1856916c4e9abb7fdc7790","after":"4182e45786fb2a86aa57283272f0c627db7ebba3","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-05T15:13:16.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add a sysvar to control max_trash_db_ratio with high default\n\nSummary:\nThis diff adds a sysvar rocksdb_max_trash_db_ratio_pct sysvar\nto set SstFileManager option max_trash_db_ratio. Default max_trash_db_ratio\nis only 0.25, but for most MyRocks use cases the default is too low. To\nmake slow sst file removals work almost all cases, it is necessary to\nset very high value. This diff sets default as 100B percent, which means\neven 1KB small database does not remove trash files immediately, until\ntrash size reaches 1TB. File deletion speed is controlled by\nan existing sysvar rocksdb_sst_mgr_rate_bytes_per_sec.\n\nDifferential Revision: D59368234\n\nfbshipit-source-id: 24f965f877c84508852da70166e2f2291e1a4c75","shortMessageHtmlLink":"Add a sysvar to control max_trash_db_ratio with high default"}},{"before":"4fa9d0554e516473e96901371c8d5536aea6c04a","after":"66aa6bb5888704b6bf1856916c4e9abb7fdc7790","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-03T22:43:03.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"make rpl_raft.rpl_raft_fail_raft_cache_copy test more stable\n\nSummary:\n- in raft_leadership.inc when `SET @global.rpl_raft_new_leader_uuid= '$master_raft_uuid'` fails, the test will enter a deadloop. in this commit, retry `SET @global.rpl_raft_new_leader_uuid` when it fails.\n- change sys var slave_parallel_workers to the new name replica_parallel_workers in combination file\n- add `replica_parallel_workers=1` setting in test combination `sts` to make it explicit. the default value is 4\n- fix some existing tests that depends on multiple replication workers\n\nDifferential Revision: D59331688\n\nfbshipit-source-id: a1689b8e6b0762fb6212346845009241cadd977e","shortMessageHtmlLink":"make rpl_raft.rpl_raft_fail_raft_cache_copy test more stable"}},{"before":"76704b954fb5a4586c78f2e4487c11f6c09530a0","after":"4fa9d0554e516473e96901371c8d5536aea6c04a","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-03T22:23:28.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"add ICP for reverse REF scans\n\nSummary:\nFor reverse REF scans (triggered via ORDER BY .. DESC), ICP is not supported today\nfor any storage engine. This seems to be a historic limitation.\n\nIt seems that some storage engines may not have had a good implementation of reverse\nREF scans in conjunction with Index Condition Pushdown feature. On encountering a series\nof ICP misses, the reverse iterator may end up scanning all the way till the start of the index,\nand only then hit EOF. This will of course lead to degraded query performance.\n\nI did some analysis on MyRocks, and ICP is not causing any problems. This is because of the way iterator bounds are set for REF iterators. Also did some testing both INNODB and ROCKSDB based production-like setups, and the original queries that were slow (due to lack of ICP on reverse REF scans) are working well now, and query latencies went down many times as expected.\n\nCurrently this \"feature\" is guarded behind the following session variable, which is ON by default.\nTo turn it off, if any query latency regressions are seen on production setups, do the following:\n\n SET SESSION icp_with_desc_ref_scans = OFF;\n\nDifferential Revision: D58398325\n\nfbshipit-source-id: 0d7b18d0cfcf74de9add74f29a585e9e3dbc8872","shortMessageHtmlLink":"add ICP for reverse REF scans"}},{"before":"e00da805defec14ead5d1331ae6bff6973983420","after":"76704b954fb5a4586c78f2e4487c11f6c09530a0","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-03T19:55:42.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix strict ordering warnings and C++20 errors\n\nSummary:\nFixing a couple of strict ordering warnings.\nAlso port part of\nhttps://github.com/mysql/mysql-server/commit/138489f44fbfb which\nresolves some C++20 errors.\n\nDifferential Revision: D59329351\n\nfbshipit-source-id: 05d0cb82c89e082b7dde6ef7da7aed56b160e9e8","shortMessageHtmlLink":"Fix strict ordering warnings and C++20 errors"}},{"before":"a505f3273d3aad8fa8d804fd1d0dd4466969baab","after":"e00da805defec14ead5d1331ae6bff6973983420","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-03T00:11:53.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"update faiss submodule\n\nSummary: update-submodule: faiss\n\nDifferential Revision: D59011997\n\nfbshipit-source-id: 08216b050bf7512df089eee5d0a1ae293207598b","shortMessageHtmlLink":"update faiss submodule"}},{"before":"0016763f4e5ede23956eee96661a8550e07fb185","after":"a505f3273d3aad8fa8d804fd1d0dd4466969baab","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-01T22:55:27.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix some log grepping RPL tests\n\nSummary:\nWe reduced the log level of binlog dump to debug, but some mtr tests are\ndoing log grepping for it, Turn logging to debug then back in these\ntests.\n\nDifferential Revision: D59184686\n\nfbshipit-source-id: 1e84c2fb02dac3a819bfff8b7397ceb47b323657","shortMessageHtmlLink":"Fix some log grepping RPL tests"}},{"before":"e78a5bb969b328fd0927c4b870755dfc9aa109fb","after":"0016763f4e5ede23956eee96661a8550e07fb185","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-01T19:13:58.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Remove spaces from rqg\n\nSummary: Convert the remaining spaces in the rqg filenames.\n\nDifferential Revision: D59238060\n\nfbshipit-source-id: bf8ab8c5ef38eb7b33bc2f0fa429ceb3b6bdb27b","shortMessageHtmlLink":"Remove spaces from rqg"}},{"before":"ae53954a2b6c09984c607f593f57e3bc5a555c41","after":"e78a5bb969b328fd0927c4b870755dfc9aa109fb","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-01T17:23:19.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"record mysqld--help-notwin\n\nSummary: squash 0120eb4ab14 and D59151906\n\nDifferential Revision: D59229777\n\nfbshipit-source-id: c8373feb5b4b7c81f5710114fb1fca3af8dadad9","shortMessageHtmlLink":"record mysqld--help-notwin"}},{"before":"0e6dc784a4ae79171ca082459ae865b4f87488f5","after":"ae53954a2b6c09984c607f593f57e3bc5a555c41","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-07-01T17:14:26.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Remove spaces from rqg filenames\n\nSummary: Rename the filenames to remove spaces\n\nDifferential Revision: D59232416\n\nfbshipit-source-id: b822ccd0e697c442c0a7f6d35a2bfd16fdcbd561","shortMessageHtmlLink":"Remove spaces from rqg filenames"}},{"before":"2751e509fbd4f3d2505a7ca0225842fd2d0832fe","after":"0e6dc784a4ae79171ca082459ae865b4f87488f5","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-06-28T22:46:49.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Remove no-op sysvar rocksdb-error-on-suboptimal-collation (#1468)\n\nSummary: Pull Request resolved: https://github.com/facebook/mysql-5.6/pull/1468\n\nDifferential Revision: D59011205\n\nfbshipit-source-id: 40be3da26970f77bb92de55d1bc0b08b227943e1","shortMessageHtmlLink":"Remove no-op sysvar rocksdb-error-on-suboptimal-collation (#1468)"}},{"before":"77934f3953961392365a236c8f2d3de7e2d96eba","after":"2751e509fbd4f3d2505a7ca0225842fd2d0832fe","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-06-28T10:09:40.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Remove rocksdb_fault_injection test\n\nSummary:\nRevert D27477107 since RocksDB doesn't support this as an external\nfeature.\n\nDifferential Revision: D59151906\n\nfbshipit-source-id: 0120eb4ab14bfff161f8561c6fee5062435aa497","shortMessageHtmlLink":"Remove rocksdb_fault_injection test"}},{"before":"1d087c27f3753af0138a816650b5af1035e8acb3","after":"77934f3953961392365a236c8f2d3de7e2d96eba","ref":"refs/heads/fb-mysql-8.0.32","pushedAt":"2024-06-27T04:04:27.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"prevent vector search from reverting to table scan\n\nSummary: When SELECT .. LIMIT is provided, sometimes vector search reverts to table scan depending on index cardinality stats. These stats for MyRocks change in strange ways sometimes, especially when table sizes are small. This fix is to ensure we always treat vector index as many times faster than table scan under all query conditions.\n\nDifferential Revision: D59082428\n\nfbshipit-source-id: 5eb8b35f49418bac423e1998466b48365d914052","shortMessageHtmlLink":"prevent vector search from reverting to table scan"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEh0xSvwA","startCursor":null,"endCursor":null}},"title":"Activity · facebook/mysql-5.6"}