I work as a system administrator for a company, monitoring around 20 servers running open source applications. One of the applications we use is JBoss. The JBoss version we were running was an old one (4.0), as per a client requirement; we have since upgraded JBoss to the latest version after this incident. Besides this, we use Nagios for application and infrastructure monitoring.
The alarm that a server had been compromised was raised on a Monday morning, when we saw continuous Nagios high-load alerts from the server running the old version of JBoss. The alerts had actually started coming in on Saturday morning.
I immediately logged in to the affected machine; the first commands I ran were w and top.
Checking System Load and Identifying the Top Contributor
The top command showed high load on the system, with several perl processes running as the 'jboss' user at around 100% CPU utilization.
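For reference, the checks were along these lines (the exact flags here are illustrative, not copied from the incident):
# w
# top -c -u jboss
# ps -u jboss -o pid,pcpu,pmem,cmd --sort=-pcpu | head
top -c shows the full command line of each process, and the ps listing sorts the jboss user's processes by CPU usage, which makes rogue processes stand out quickly.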
These perl processes were unknown to me, and no such processes are supposed to run as the jboss user. I started checking memory and disk utilization, and they looked normal. Going further in the investigation, I looked at network bandwidth usage on the host. We use MRTG and Cacti for monitoring bandwidth, and MRTG was showing link utilization of more than 100% on the host's Ethernet interface. Interestingly, it was the outgoing traffic that was beyond 100% utilization, so I suspected that our machine was probably being used as a zombie to attack other machines on the Internet.
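Interface traffic can also be cross-checked on the host itself, independently of MRTG and Cacti; something along these lines works (illustrative commands, assuming the interface is eth0):
# ip -s link show eth0
# sar -n DEV 1 5
ip -s link prints the cumulative RX/TX byte counters for the interface, and sar -n DEV samples per-interface throughput every second, which quickly confirms whether the outbound traffic really is saturating the link.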
Identifying Open Files Used by These Processes
I proceeded further and identified the files being used by these perl processes, using lsof and strace to collect this information.
#cd /var/tmp
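The inspection itself was roughly of this form, run against the suspicious PIDs (the flags shown here are illustrative):
# lsof -p 16965
# strace -f -p 16965 -e trace=network
lsof lists every file the process has open, including the binary it was started from and any files it has already deleted but still holds open, while strace attached with -e trace=network prints the connect() calls as they are made.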
From one of the strace outputs, a lot of connect() calls to different IP addresses on the Internet could be seen.
We could also see a cron job set for the jboss user.
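The jboss user's crontab can be listed with (illustrative commands):
# crontab -u jboss -l
# cat /var/spool/cron/jboss
The first prints the crontab as cron sees it, and the spool file under /var/spool/cron shows the same entries as stored on disk.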
Preventive Measures:
1. Killed the rogue processes
# kill -9 16965 19058
2. Deleted the unknown binaries.
3. Removed executable permission from the /var and /var/tmp directories (a mount-based sketch of this hardening follows the list).
4. Performed similar hardening for the /dev/shm directory.
5. Replaced the login shell for the jboss user
#usermod -s /bin/false jboss
6. Disabled the home directory for the jboss user
#usermod -d /home/jboss jboss
7. Disabled cron for the jboss user
7.1 Added a jboss entry in /etc/cron.deny
7.2 touch /var/spool/cron/jboss.disabled
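For the executable-permission hardening in steps 3 and 4, the usual approach on Linux is to mount the world-writable locations with the noexec and nosuid options. The commands below are a sketch of that idea, assuming /var/tmp is a separate mount point (if it is not, a bind mount of the directory onto itself is needed first); they are not the exact entries from the incident:
# mount -o remount,noexec,nosuid /var/tmp
# mount -o remount,noexec,nosuid /dev/shm
The matching noexec,nosuid options should also be added to the corresponding lines in /etc/fstab so the change survives a reboot. Note that noexec only blocks direct execution of files dropped in these directories; a script explicitly invoked through perl or sh is not stopped, so this is one layer of hardening rather than a complete fix.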
These steps provided a sigh of relief for our team; we were able to prevent our server from being compromised any further.
The best advice for a compromised system is to remove the machine from the network and reinstall it from scratch after completing all forensic analysis.
We followed that advice and, after proper testing of our applications on JBoss 6, migrated to the latest version.