WooCommerce Database Meltdown — From 376-Second Queries to Under 1 Second
I got the alert at 9:47am on a Tuesday. A client's WooCommerce store — roughly 300 orders per day — was completely unresponsive. The server wasn't just slow. It was drowning.
The Symptom
I SSH'd into the server, and top told me everything I needed to know at a glance. MariaDB was consuming 1479% CPU and 21.4GB of RAM on a server with only 32GB total. Every PHP-FPM worker was stuck waiting on a database query that would never finish in time.
I ran SHOW FULL PROCESSLIST in MariaDB and saw 77 active queries, most of them identical:
UPDATE wp_options SET option_value = 'a:437:{...}'
WHERE option_name = '_transient_wc_term_counts';
The oldest query had been running for 376 seconds. Six minutes for a single UPDATE. Dozens more were queued behind it, all fighting over the same row.
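If you want to spot long-runners without eyeballing SHOW FULL PROCESSLIST by hand, the same data is queryable through information_schema (these column names are standard on both MySQL and MariaDB; the 60-second threshold is just an illustrative cutoff):

```sql
-- Find non-idle queries that have been running longer than 60 seconds
SELECT id, time, LEFT(info, 80) AS query
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;
```

Sorting by time DESC puts the oldest offender at the top, which is usually the one everything else is queued behind.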
The Investigation
I checked the InnoDB status to understand what was happening at the storage engine level:
SHOW ENGINE INNODB STATUS\G
Between the processlist and the engine status, the picture was clear: multiple sessions were all trying to UPDATE the exact same row in wp_options at once. And because the table was still using MyISAM (the default on older WordPress installs), every UPDATE locked the entire table rather than just that single row.
The rollback rate told the rest of the story:
SHOW GLOBAL STATUS LIKE 'Com_rollback';
SHOW GLOBAL STATUS LIKE 'Com_commit';
Roughly 27% of all transactions were rolling back. More than a quarter of the database's work was being thrown away.
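The ratio itself is trivial arithmetic once you have the two counters. A quick sketch, using made-up counter values for illustration (on a live server you would pull the real numbers from the SHOW GLOBAL STATUS output above):

```shell
# Hypothetical counter values for illustration; in practice, parse them
# from: mysql -N -e "SHOW GLOBAL STATUS LIKE 'Com_%'"
com_rollback=27000
com_commit=73000

total=$(( com_rollback + com_commit ))
pct=$(( 100 * com_rollback / total ))   # integer percentage of rollbacks
echo "rollback rate: ${pct}%"
```

Anything sustained above a few percent is a sign that transactions are fighting each other, not doing useful work.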
The Root Cause
WooCommerce stores term counts (product category counts, tag counts, attribute counts) in a WordPress transient called _transient_wc_term_counts. Every time a product is viewed, purchased, or updated, WooCommerce recalculates these counts and writes them back to wp_options.
On a busy store, this means dozens of PHP workers all trying to UPDATE the same single row at the same time. With MyISAM's table-level locking, each UPDATE has to wait for every other UPDATE to finish. The queries pile up. Workers pile up. RAM fills. MariaDB starts swapping. Everything grinds to a halt.
This is a known race condition in WooCommerce, but it only becomes catastrophic at scale — and the default WordPress database configuration does nothing to prevent it.
The Fix
I applied four changes, in order of urgency.
1. Redis Object Cache
The most impactful fix was eliminating the transient writes from the database entirely. Transients are meant to be temporary cached values — they belong in memory, not in a MySQL table.
# Install Redis
sudo apt install redis-server
# Configure Redis for WordPress
sudo systemctl enable redis-server
sudo systemctl start redis-server
Then I installed the Redis Object Cache plugin and activated it via WP-CLI:
wp plugin install redis-cache --activate
wp redis enable
This single change removed thousands of wp_options writes per hour.
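One rough way to confirm the object cache is actually intercepting transients is to watch the transient rows left in wp_options; with a working object cache, new transients go to Redis and this count should stop growing:

```sql
-- Should plateau once the object cache is handling transients
SELECT COUNT(*) AS transient_rows
FROM wp_options
WHERE option_name LIKE '_transient_%';
```

Existing rows stick around until they expire or are cleaned up, so the number plateauing (rather than hitting zero) is the signal to look for.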
2. InnoDB Table Conversion
The wp_options table was still using MyISAM, which only supports table-level locking. InnoDB supports row-level locking, meaning concurrent writes to different rows no longer block each other.
ALTER TABLE wp_options ENGINE=InnoDB;
ALTER TABLE wp_postmeta ENGINE=InnoDB;
ALTER TABLE wp_usermeta ENGINE=InnoDB;
I converted all MyISAM tables while I was at it. There is no good reason to run MyISAM on a modern WordPress site.
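Rather than converting blindly, it's worth listing which tables are still on MyISAM first; information_schema makes this a one-liner on both MySQL and MariaDB:

```sql
-- List every remaining MyISAM table in the current database
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND engine = 'MyISAM';
```

An empty result set after the conversions means you're done.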
3. Missing Index on wp_options
WordPress ships with an index on option_name, but the autoload column — which WordPress queries on every single page load — had no index:
ALTER TABLE wp_options ADD INDEX autoload_idx (autoload);
This sped up the core SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes' query that runs on every request.
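You can verify the optimizer is actually using the new index with EXPLAIN. One caveat: if the vast majority of rows have autoload = 'yes', the optimizer may still prefer a full scan, so the index helps most on bloated wp_options tables where autoloaded rows are a minority:

```sql
EXPLAIN SELECT option_name, option_value
FROM wp_options
WHERE autoload = 'yes';
-- When the index is used, the 'key' column shows autoload_idx
-- instead of NULL (NULL here means a full table scan).
```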
4. PHP-FPM Worker Tuning
The server had pm.max_children = 50. Each worker was consuming around 90MB, so PHP-FPM alone could eat 4.5GB, and 50 concurrent workers was far more load than the struggling database could absorb. When queries slowed down, workers piled up waiting, each holding onto its memory allocation, and that combined pressure is what pushed MariaDB into swap.
I tuned it down to a sustainable level:
; /etc/php/8.2/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 10
pm.max_requests = 500
The formula I use: (Available RAM - MySQL allocation - OS overhead) / average worker memory = max_children. For this server: (32GB - 16GB - 2GB) / 90MB = ~155 maximum, but I set it to 20 because you want headroom, not a knife-edge.
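The headroom math is easy to reproduce. A sketch of the formula above, using this server's numbers (decimal GB-to-MB conversion to keep the arithmetic round, which is why it lands near 155 rather than the binary-GB figure):

```shell
# (Available RAM - MySQL allocation - OS overhead) / avg worker memory
total_gb=32; mysql_gb=16; os_gb=2
worker_mb=90

avail_mb=$(( (total_gb - mysql_gb - os_gb) * 1000 ))
ceiling=$(( avail_mb / worker_mb ))   # theoretical max_children
echo "theoretical ceiling: ${ceiling} workers"
```

The point of the formula is the ceiling, not the setting: pick something well below it so a traffic spike degrades gracefully instead of tipping the box into swap.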
The Results
After applying all four changes:
- Query times dropped from 376 seconds to under 1 second — a 99.7% improvement
- MariaDB CPU usage dropped from 1479% to under 20%
- Transaction rollback rate dropped from 27% to under 1%
- The site went from completely unresponsive to handling traffic comfortably
The store has been stable ever since. The owner didn't even know there was a problem with their database configuration until everything fell over — which is exactly why proactive database monitoring matters.
If you're running a busy WooCommerce store without Redis object caching, you're running on borrowed time. The _transient_wc_term_counts race condition is just one of several ways wp_options can become a bottleneck. I cover the full list of WooCommerce-specific database issues as part of my maintenance plans.
Need help with something similar? Check out my maintenance plans.
