1. Monitoring memory
- vmstat
The following is the output of running vmstat:
fengxi@ubuntu:~/bash$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 167516 35040 354276 0 0 469 28 101 373 4 5 89 1 0
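For scripting, vmstat can also be sampled periodically; a minimal sketch that pulls out the free-memory column (the 4th field, matching the header layout above):

```shell
#!/bin/bash
# Sample vmstat every 5 seconds, 3 times, and print the "free" column.
# NR > 2 skips the two header lines; $4 is "free" in the layout shown above.
vmstat 5 3 | awk 'NR > 2 { print "free memory (kB): " $4 }'
```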
- cat /proc/meminfo
fengxi@ubuntu:~/bash$ cat /proc/meminfo
MemTotal: 1023924 kB
MemFree: 167276 kB
MemAvailable: 554148 kB
Buffers: 35408 kB
Cached: 354304 kB
SwapCached: 0 kB
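Individual fields of /proc/meminfo are easy to extract with awk; a minimal sketch for MemAvailable:

```shell
#!/bin/bash
# Print the MemAvailable value (in kB) from /proc/meminfo.
awk '/^MemAvailable:/ { print $2 }' /proc/meminfo
```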
- top
top - 06:59:42 up 19 min, 2 users, load average: 0.00, 0.04, 0.14
Tasks: 226 total, 1 running, 225 sleeping, 0 stopped, 0 zombie
%Cpu(s): 4.8 us, 4.1 sy, 0.0 ni, 91.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 1023924 total, 843696 used, 180228 free, 35440 buffers
KiB Swap: 1046524 total, 0 used, 1046524 free. 354448 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
947 root 20 0 158948 40128 19720 S 5.3 3.9 0:23.42 Xorg
1775 fengxi 20 0 249984 85572 60384 S 3.3 8.4 0:06.78 compiz
2180 fengxi 20 0 116208 33152 25724 S 1.3 3.2 0:04.40 gnome-termi+
1791 fengxi 20 0 30508 7424 6880 S 0.3 0.7 0:00.38 ibus-engine+
Pressing M sorts processes by memory usage in descending order; pressing P sorts them by CPU usage in descending order.
- ps
fengxi@ubuntu:~/bash$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.1 0.4 23944 4780 ? Ss 06:40 0:02 /sbin/init auto
root 2 0.0 0.0 0 0 ? S 06:40 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 06:40 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< 06:40 0:00 [kworker/0:0H]
The third column of this output is the CPU percentage, and the fourth column is the memory percentage. The following script kills the five processes consuming the most memory:
#!/bin/bash
# Collect the PIDs of the five processes with the highest %MEM (column 4).
pids=$(ps aux | sort -k4nr | head -5 | awk '{print $2}')
for pid in $pids
do
    kill -9 $pid
done
Here the sort options mean: -k4 sorts on the fourth column, -n compares values numerically, and -r (short for reverse) flips sort's default ascending order into descending order. (There is also -g, a more general numeric sort that additionally handles scientific notation, but -n suffices here.)
- free
fengxi@ubuntu:~/bash$ free
total used free shared buffers cached
Mem: 1023924 936728 87196 6356 76040 402712
-/+ buffers/cache: 457976 565948
Swap: 1046524 6940 1039584
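The used/total ratio from free can be computed in one pipeline; a sketch assuming the column layout above (total in column 2, used in column 3 of the Mem: line):

```shell
#!/bin/bash
# Report memory usage as a percentage, reading the "Mem:" line of free.
free | awk '/^Mem:/ { printf "memory used: %.1f%%\n", $3 / $2 * 100 }'
```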
2. Monitoring CPU
- vmstat
- top
- ps
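The same three tools from section 1 apply to CPU monitoring. As one sketch, overall CPU usage can be derived from vmstat's idle ("id") column, which is the 15th field in the header layout shown in section 1:

```shell
#!/bin/bash
# Derive CPU usage from vmstat: 100 minus the idle ("id") column.
# "vmstat 1 2" takes one measured sample; END processes the last line.
vmstat 1 2 | awk 'END { print "cpu usage: " 100 - $15 "%" }'
```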
3. Monitoring disk
- df
fengxi@ubuntu:~/bash$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 484M 0 484M 0% /dev
tmpfs 100M 5.6M 95M 6% /run
/dev/sda1 19G 5.8G 12G 33% /
tmpfs 500M 156K 500M 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 500M 0 500M 0% /sys/fs/cgroup
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 100M 44K 100M 1% /run/user/1000
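A common use of df in scripts is alerting when a filesystem passes a usage threshold; a minimal sketch (the 90% limit is an arbitrary example value):

```shell
#!/bin/bash
# Warn about any filesystem whose Use% column exceeds the threshold.
threshold=90
df -h | awk -v limit="$threshold" \
    'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 > limit) print "WARNING: " $6 " is " $5 " full" }'
```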