For several years I've been making automated daily database backups with a procedure that iterates over the existing databases and runs:
mysqldump --user=${mysql_username} --password=${mysql_password} $db --single-transaction --events -R >> $normal_output_filename
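A per-database loop of this kind can be sketched as follows (the credentials, the backup path, and the information_schema/performance_schema filter are illustrative assumptions, not the exact script):

```shell
#!/bin/sh
# Sketch of the daily backup loop; the credentials and path shown
# here are placeholder values, not the real ones.
mysql_username="backup_user"
mysql_password="secret"
backup_dir="/var/backups/mysql"

backup_all_dbs() {
    normal_output_filename="${backup_dir}/all-databases-$(date +%F).sql"
    : > "$normal_output_filename"   # start the day's dump file fresh
    # Iterate over existing databases, skipping the virtual schemas.
    for db in $(mysql --user="${mysql_username}" --password="${mysql_password}" \
                      --batch --skip-column-names -e 'SHOW DATABASES' \
                | grep -Ev '^(information_schema|performance_schema)$'); do
        mysqldump --user="${mysql_username}" --password="${mysql_password}" \
            "$db" --single-transaction --events -R >> "$normal_output_filename"
    done
}
```

A cron entry running such a script shortly after 03:00 produces one appended dump file per day.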
Recently I moved from a dedicated server (CentOS 6, Apache 2.2, PHP 5.6, MySQL 5.0, as far as I recall) to a VPS (CentOS 7, Apache 2.4, PHP 7.2/5.6, MariaDB 5.5).
Since then, from time to time SOME database accesses are slow and eventually fail with an "execution time exceeded" error.
I have a cron job that makes a daily backup of all databases after 03:00.
From ps aux | grep mysql I get
root 15840 0.0 0.0 126772 3456 ? SN 03:09 0:00 mysqldump --user=uuu --password=x xxxxxx information_schema --single-transaction --events -R
which stays on hold for several hours.
Once I only noticed the problem after six days: mysqldump had been on hold the whole time and no new database backups had been performed.
show status like '%conn%';
produces no output; it just stays on hold.
mysqladmin -uuser1 -p*** processlist
(user1 is a superuser) lists almost 8,000 lines of Killed processes, like:
| 671958 | user1 | localhost | database1 | Killed | 3 | |                  | 0.000 |
| 671959 | user1 | localhost | database1 | Killed | 3 | |                  | 0.000 |
| 671961 | user1 | localhost | database1 | Killed | 2 | |                  | 0.000 |
| 671962 | user1 | localhost | database1 | Killed | 2 | |                  | 0.000 |
| 671963 | user1 | localhost | database2 | Killed | 2 | |                  | 0.000 |
| 671964 | user2 | localhost | database3 | Killed | 1 | |                  | 0.000 |
| 671965 | user1 | localhost |           | Killed | 1 | |                  | 0.000 |
| 671966 | user1 | localhost |           | Query  | 0 | | show processlist | 0.000 |
+--------+-------+-----------+-----------+--------+---+-+------------------+-------+
I haven't restarted the MySQL server yet. I can see that some websites whose pages make several database accesses still load fast, while the Horde and Roundcube webmail clients hit the timeout and return error 500.
I don't understand why the process list suddenly starts filling with Killed processes (it may run fine for days before it happens), and I don't know where they come from.
UPDATE 1:
VPS at Contabo, 200GB SSD disk. 61.93 GiB used / 134.78 GiB free / 196.71 GiB total
Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz, 4 cores
CentOS Linux 7.7.1908
Linux 3.10.0-1062.9.1.el7.x86_64 on x86_64
At this time: CPU load averages 0.88 (1 min) 1.03 (5 mins) 0.95 (15 mins)
8GB RAM - At this time: 1.81 GiB used / 2.21 GiB cached / 7.63 GiB total
At this time: Uptime 2 days, 17 hours
UPDATE 2:
Added thread_handling=pool-of-threads to my.cnf
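For reference, the relevant my.cnf fragment would look like this (the [mysqld] section placement is the usual convention; all other settings omitted):

```ini
[mysqld]
# MariaDB's thread pool, replacing one-thread-per-connection scheduling
thread_handling = pool-of-threads
```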
The following does not directly answer the question you are asking, but it points out some very low settings and the use of MyISAM. I don't know whether switching to InnoDB and/or increasing some of the settings would help.
Do be aware that dumping MyISAM tables essentially blocks users from doing database work. (On the other hand, perhaps your data set is rather small and the activity is rather low.)
Observations:
The More Important Issues:
You should move from MyISAM to InnoDB; see Conversion from MyISAM to InnoDB
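One way to find the MyISAM tables and generate the conversion statements is sketched below (the credentials are placeholders, the function name is just a label for this sketch, and you should review per-table caveats in the conversion guide before applying the output):

```shell
# Emit one "ALTER TABLE ... ENGINE=InnoDB" statement per MyISAM table,
# skipping the system schemas. Review, then pipe the output into mysql.
list_myisam_alters() {
    mysql --user="$1" --password="$2" --batch --skip-column-names -e "
        SELECT CONCAT('ALTER TABLE \`', table_schema, '\`.\`', table_name,
                      '\` ENGINE=InnoDB;')
        FROM information_schema.tables
        WHERE engine = 'MyISAM'
          AND table_schema NOT IN
              ('mysql', 'information_schema', 'performance_schema');"
}
```

The generated statements can be inspected first, then fed back into the mysql client to perform the conversion table by table.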
See if you can raise the following (more discussion below):
OPTIMIZE TABLE is an infrequent task; you are doing it much too often.

Details and other observations:
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((16M / 0.20 + 128M / 0.70)) / 8192M = 3.2% -- Most of available ram should be made available for caching. -- http://mysql.rjweb.org/doc.php/memory
( open_files_limit ) = 760 -- ulimit -n -- To allow more files, change ulimit or /etc/security/limits.conf or in sysctl.conf (kern.maxfiles & kern.maxfilesperproc) or something else (OS dependent)
( table_open_cache ) = 64 -- Number of table descriptors to cache -- Several hundred is usually good.
( innodb_buffer_pool_size ) = 128M -- InnoDB Data + Index cache -- 128M (an old default) is woefully small.
( Innodb_buffer_pool_pages_free / Innodb_buffer_pool_pages_total ) = 5,578 / 8191 = 68.1% -- Pct of buffer_pool currently not in use -- innodb_buffer_pool_size (now 134217728) is bigger than necessary?
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks. -- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( join_buffer_size ) = 131,072 / 8192M = 0.00% -- 0-N per thread. May speed up JOINs (better to fix queries/indexes) (all engines) Used for index scan, range index scan, full table scan, each full JOIN, etc. -- If large, decrease join_buffer_size (now 131072) to avoid memory pressure. Suggest less than 1% of RAM. If small, increase to 0.01% of RAM to improve some queries.
( innodb_buffer_pool_populate ) = OFF = 0 -- NUMA control
( query_prealloc_size ) = 8,192 / 8192M = 0.00% -- For parsing. Pct of RAM
( query_alloc_block_size ) = 8,192 / 8192M = 0.00% -- For parsing. Pct of RAM
( character_set_server ) = character_set_server = latin1 -- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON -- local_infile (now ON) = ON is a potential security issue
( Key_writes / Key_write_requests ) = 5,804 / 9232 = 62.9% -- key_buffer effectiveness for writes -- If you have enough RAM, it would be worthwhile to increase key_buffer_size (now 16777216).
( Created_tmp_disk_tables / Created_tmp_tables ) = 13,250 / 18108 = 73.2% -- Percent of temp tables that spilled to disk -- Maybe increase tmp_table_size (now 16777216) and max_heap_table_size (now 16777216); improve indexes; avoid blobs, etc.
( (Com_insert + Com_update + Com_delete + Com_replace) / Com_commit ) = (68440 + 1927 + 425 + 0) / 0 = INF -- Statements per Commit (assuming all InnoDB) -- Low: Might help to group queries together in transactions; High: long transactions strain various things.
( Select_scan ) = 165,862 / 75935 = 2.2 /sec -- full table scans -- Add indexes / optimize queries (unless they are tiny tables)
( Com_optimize ) = 464 / 75935 = 22 /HR -- How often OPTIMIZE TABLE is performed. -- OPTIMIZE TABLE is rarely useful, certainly not at high frequency.
( binlog_format ) = binlog_format = STATEMENT -- STATEMENT/ROW/MIXED. -- ROW is preferred by 5.7 (10.3)
( expire_logs_days ) = 0 -- How soon to automatically purge binlog (after this many days) -- Too large (or zero) = consumes disk space; too small = need to respond quickly to network/machine crash. (Not relevant if log_bin (now OFF) = OFF)
( innodb_autoinc_lock_mode ) = 1 -- Galera: desires 2 -- 2 = "interleaved"; 1 = "consecutive" is typical; 0 = "traditional". -- Galera desires 2; 2 requires BINLOG_FORMAT=ROW or MIXED
( log_slow_queries ) = log_slow_queries = OFF -- Whether to log slow queries. (Before 5.1.29, 5.6.1)
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query. -- Suggest 2
( back_log ) = 50 -- (Autosized as of 5.6.6; based on max_connections) -- Raising to min(150, max_connections (now 151)) may help when doing lots of connections.
( Com_change_db / Connections ) = 1,278,567 / 363881 = 3.51 -- Database switches per connection -- (minor) Consider using "db.table" syntax
( Com_change_db ) = 1,278,567 / 75935 = 17 /sec -- Probably comes from USE statements. -- Consider connecting with DB, using db.tbl syntax, eliminating spurious USE statements, etc.
( Threads_running / thread_cache_size ) = 1 / 0 = INF -- Threads: current / cached (Not relevant when using thread pooling) -- Optimize queries

You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0. There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
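Pulling the suggestions above together, an illustrative my.cnf fragment might look like this (the exact numbers are assumptions to be tuned for an 8 GB box running mostly MariaDB, not prescriptions):

```ini
[mysqld]
# After converting tables to InnoDB, give the buffer pool most of the RAM
innodb_buffer_pool_size = 4G
key_buffer_size         = 50M   # small, since little MyISAM would remain
table_open_cache        = 500   # "several hundred is usually good"
open_files_limit        = 5000  # raise the current 760 limit

# Turn the Query Cache fully off (both settings, per the note above)
query_cache_type        = OFF
query_cache_size        = 0

# Log slow queries so they can be found and optimized
slow_query_log          = ON
long_query_time         = 2
```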
Abnormally small:
Abnormally large:
Abnormal strings: