How I Cleaned a Cryptominer with 14 Persistence Mechanisms
This one started with a simple complaint: "the server feels slow." It ended with me cataloguing 14 separate persistence mechanisms planted by a cryptominer that had been running undetected for weeks.
The Initial Symptom
The client ran several WordPress sites on a dedicated server. Performance had been degrading gradually, but the real alarm came when I logged in and saw the CPU sitting at 400% usage across all cores. The strange part — ps aux and top showed nothing out of the ordinary. The processes consuming the CPU didn't appear in standard process listings.
That's when I knew this wasn't a misconfigured plugin. Something was actively hiding from us.
The Investigation
Standard tools were compromised, so I reached for atop — a process monitor that reads directly from /proc rather than relying on the same libraries that ps uses:
atop -a 1
There they were. Multiple processes with names like snap-bubblewrap and agrofoia were consuming massive CPU. They'd been filtered out of ps output by a process-hiding shared library (a libprocesshider variant) preloaded via /etc/ld.so.preload.
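Before trusting any single tool on a box like this, a quick cross-check helps. This is a sketch, assuming a standard Linux /proc layout: count processes straight from /proc, count them again through ps (which goes through libc calls a preloaded library can hook), and inspect the preload file itself, which should normally be absent or empty.

```shell
# Count PIDs directly from /proc -- this bypasses library-level hiding.
proc_count=$(ls -d /proc/[0-9]* | wc -l)

# Count them via ps, which a preloaded readdir hook can filter.
ps_count=$(ps -e --no-headers | wc -l)

echo "/proc sees $proc_count processes, ps sees $ps_count"

# On a clean system this file is usually missing or empty.
cat /etc/ld.so.preload 2>/dev/null || true
```

A large gap between the two counts is a strong hint that something is filtering process listings.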
I also used execsnoop from the BCC toolkit to watch what was being launched in real time:
execsnoop-bpfcc
This caught the miner restarting itself every time I killed it. Something was watching the process and relaunching it within seconds.
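To find the watchdog doing the relaunching, you can read a process's parent PID straight out of /proc instead of trusting ps. The parent_of helper below is mine, not from the incident tooling; it just parses /proc/PID/status.

```shell
# Print the parent PID of a process by reading /proc directly,
# bypassing any tampered process-listing tools.
parent_of() {
  awk '/^PPid:/ {print $2}' "/proc/$1/status"
}

# Example: the parent of the current shell.
parent_of $$
```

Run against a freshly respawned miner PID, this points at the watchdog process that needs to die first.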
The 14 Persistence Mechanisms
I spent the next several hours tracing every way this miner had embedded itself into the system. The final count was 14 distinct persistence mechanisms:
Cron-based persistence (4 mechanisms):
- Cron job in /etc/cron.d/ downloading and executing a script from a remote C2 server
- User-level crontab entry for root
- Cron job in /var/spool/cron/crontabs/
- Script in /etc/cron.hourly/ disguised as a log rotation job
Service-based persistence (3 mechanisms):
- Systemd service unit (/etc/systemd/system/snap-bubblewrap.service) configured to restart on failure
- Init.d script (/etc/init.d/linux-update) starting the miner binary on boot
- Systemd timer running every 5 minutes to check whether the miner was alive
Binary locations (4 mechanisms):
- /media/yum-zig/ — primary miner binary
- /srv/yum-haskell/ — backup copy
- /usr/bin/snap-bubblewrap — disguised as a Snap package component
- /bin/agrofoia — fallback binary with a randomised name
Filesystem protection (2 mechanisms):
- Immutable attribute set on the miner binaries using chattr +i — preventing deletion even by root
- Modified /etc/ld.so.preload to load a process-hiding shared library
Remote update (1 mechanism):
- A shell script that downloaded a fresh copy of the miner from a C2 server every hour, restoring any files that had been cleaned
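Sweeping the cron locations above can be scripted. sweep_cron is a hypothetical helper (the curl/wget/URL pattern is an assumption about what C2 fetchers typically look like, not taken from this incident's files):

```shell
# Hypothetical helper: print cron files that fetch anything over the network.
sweep_cron() {
  for f in "$@"; do
    [ -f "$f" ] || continue
    # -l prints just the filename on a match; "|| true" keeps a no-match
    # result from aborting the script under set -e.
    grep -l -E 'curl|wget|https?://' "$f" || true
  done
}

# On the compromised box this covered the abused locations:
sweep_cron /etc/cron.d/* /etc/cron.hourly/* /var/spool/cron/crontabs/* 2>/dev/null || true
```

Anything it prints deserves a manual read before deletion — legitimate jobs do occasionally curl things.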
I checked for immutable attributes using:
lsattr /usr/bin/snap-bubblewrap
----i--------e-- /usr/bin/snap-bubblewrap
The i flag means the file cannot be deleted, renamed, or modified — even by root — until the flag is removed.
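Once one immutable file turns up, it's worth sweeping for others. The immutable_only filter below is my own helper; it just selects lsattr lines whose flag field contains the i bit (the directory list is an assumption — adjust for your layout):

```shell
# Hypothetical helper: keep only lsattr lines with the immutable flag set.
# lsattr prints the attribute string first, then the path.
immutable_only() {
  awk '$1 ~ /i/ {print $2}'
}

# Sweep system binary directories; errors from unsupported
# filesystems are suppressed.
find /usr/bin /bin /sbin -xdev -type f -print0 2>/dev/null \
  | xargs -0 -r lsattr 2>/dev/null \
  | immutable_only
```

On a clean server this should print nothing — almost no legitimate tooling sets +i on binaries.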
The Entry Point
Forensic analysis of /var/log/auth.log revealed the initial compromise: a user account called admin1 with a weak password and an authorised SSH key that didn't belong to anyone on the team.
grep "admin1" /var/log/auth.log
grep "Accepted" /var/log/auth.log | grep -v known_users   # "known_users" stands in for a pattern matching your legitimate accounts
The admin1 account had been created months earlier — likely through an exposed control panel or a compromised WordPress admin account that had server-level access it shouldn't have had.
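A quick /etc/passwd audit catches accounts like this, assuming a standard Linux passwd layout: look for any extra UID-0 entries, and list every account that still has a real login shell.

```shell
# Any account with UID 0 besides root is an immediate red flag.
awk -F: '$3 == 0 && $1 != "root" {print "extra UID-0 account:", $1}' /etc/passwd

# List accounts with an interactive shell -- each one should be
# recognisable as a team member or a known service.
awk -F: '$7 ~ /sh$/ {print $1, "->", $7}' /etc/passwd
```

Here, admin1 would have shown up in the second listing with a bash shell nobody could account for.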
The Cleanup
I worked through the removal methodically, starting with the persistence mechanisms to prevent respawning:
# 1. Kill the remote update mechanism first
systemctl stop snap-bubblewrap.service
systemctl disable snap-bubblewrap.service
rm /etc/systemd/system/snap-bubblewrap.service
# 2. Remove all cron entries
rm /etc/cron.d/yum-update
crontab -r -u root
rm /etc/cron.hourly/logrotate-helper
# 3. Remove init.d script and systemd timer
rm /etc/init.d/linux-update
rm /etc/systemd/system/miner-check.timer
systemctl daemon-reload
# 4. Remove immutable flags, then delete binaries
chattr -i /media/yum-zig/miner
chattr -i /srv/yum-haskell/miner
chattr -i /usr/bin/snap-bubblewrap
chattr -i /bin/agrofoia
rm -rf /media/yum-zig/ /srv/yum-haskell/
rm /usr/bin/snap-bubblewrap /bin/agrofoia
# 5. Remove the process-hiding library
rm /etc/ld.so.preload
# (or edit it to remove the malicious entry)
# 6. Remove the backdoor account
userdel -r admin1
# 7. Audit authorized_keys files and delete any unknown entries by hand
find /home /root -name "authorized_keys" -exec cat {} \;
After each step, I verified the miner didn't restart using atop and execsnoop. Only after all 14 mechanisms were removed did the CPU finally drop back to normal.
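That per-step verification can be wrapped in a small watcher. check_respawn is a hypothetical helper (the process-name pattern comes from this incident; the polling logic is mine):

```shell
# Poll for the miner reappearing: pattern, number of checks, seconds between.
check_respawn() {
  pattern=$1; attempts=$2; delay=$3
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if pgrep -f "$pattern" > /dev/null; then
      echo "respawned"
      return 1
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "clean"
}

# Watch for a few seconds after each removal step.
check_respawn 'snap-bubblewrap|agrofoia' 3 1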
Post-Incident Hardening
Once the server was clean, I implemented several hardening measures:
- Disabled password authentication for SSH — key-only access
- Restricted SSH to specific IP ranges via firewall rules
- Removed unnecessary user accounts and audited sudo access
- Installed and configured rkhunter for rootkit detection
- Set up file integrity monitoring on critical system directories
- Configured automated security updates for the OS
- Moved all WordPress admin access behind Cloudflare Access
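The SSH changes in the first bullet come down to a few sshd_config directives. These values are illustrative (deploy is a stand-in username, and the IP restriction itself lived in the firewall, not here):

```
# /etc/ssh/sshd_config -- illustrative hardened values
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
AllowUsers deploy
```

Validate with sshd -t before reloading the service — a syntax error here can lock you out of the box you just cleaned.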
The entire cleanup took about 6 hours. The attacker had clearly used an automated toolkit — the naming conventions and directory structures matched known cryptominer campaigns targeting Linux servers with weak SSH credentials.
This is the kind of incident that reinforces why server-level security matters just as much as WordPress-level security. A strong WordPress password means nothing if someone can SSH into your server with admin1:password123. And proper server management includes hardening the operating system, not just the application running on top of it.
Need help with something similar? Check out my maintenance plans.
