> A real-time log output service in production once ran into this problem: right after deployment the Java process used very little CPU, but on the hour the CPU spiked straight to `100%` with no sign of ever coming back down, and the only remedy was to kill the process. Later, with the help of `jstack` and `top`, we tracked down the offending thread and pinpointed the exact line of code at fault.

## Troubleshooting Walkthrough

1. Use `jps` to find the program's `pid`:

```
[root@logs bin]# ./jps -l -m | grep logdir2
22169 galaxy-log-online-0.1-SNAPSHOT-all.jar 3002 /logdir2
```

2. Note the process ID: `22169`.

3. Find the thread eating the CPU:

```
top -H -p 22169

top - 19:03:22 up 156 days,  5:57,  4 users,  load average: 1.00, 2.84, 4.25
Threads:  15 total,   0 running,  15 sleeping,   0 stopped,   0 zombie
%Cpu(s): 99.4 us, 12.6 sy,  0.0 ni, 62.6 id,  4.8 wa,  0.0 hi,  2.6 si,  0.0 st
KiB Mem :  8010456 total,   206760 free,  1079668 used,  6724028 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  6561460 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
22184 root      20   0 4543356  74148  12960 S 80.0 40.9   0:19.96 java
22169 root      20   0 4543356  74148  12960 S  0.0  0.9   0:00.00 java
22170 root      20   0 4543356  74148  12960 S  0.0  0.9   0:00.35 java
22171 root      20   0 4543356  74148  12960 S  0.0  0.9   0:00.08 java
22172 root      20   0 4543356  74148  12960 S  0.0  0.9   0:00.09 java
...
```

Thread `22184` is the culprit.

4. Convert the thread ID to hexadecimal:

```
[root@logs bin]# printf "%x" 22184
56a8
```

5. Use `jstack` to locate the thread:

```
[root@logs bin]# ./jstack 22169 | grep 56a8
"Thread-1" #9 prio=5 os_prio=0 tid=0x00007fe428230800 nid=0x56a8 waiting on condition [0x00007fe4121a5000]
```

6. Use your `3D` naked eye to read the code the thread is running:

```
"Thread-1" #9 prio=5 os_prio=0 tid=0x00007fe428230800 nid=0x56a8 waiting on condition [0x00007fe4121a5000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at java.lang.Thread.sleep(Unknown Source)
        at java.util.concurrent.TimeUnit.sleep(Unknown Source)
        at com.dounine.tool.http.sql.LogsRequest$4.run(LogsRequest.java:152)
        at java.lang.Thread.run(Unknown Source)
```

Starting the search from line 152 of `LogsRequest.java`, we found an infinite loop inside...

## FIX It

Then head home happy.
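The post doesn't show the actual code at `LogsRequest.java:152`, so the sketch below is only a hypothetical reconstruction of this class of bug, not the real fix (`HourlyRollExample` and `rollLogFile` are made-up names): a hand-rolled polling loop that is supposed to do some work once per hour, but spins flat out for the entire minute its hour condition holds, which matches the "CPU pins at 100% exactly on the hour" symptom. The fixed variant sleeps until the next full hour instead.

```
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.concurrent.TimeUnit;

public class HourlyRollExample {

    // BUGGY (hypothetical): once the clock reads minute 0, the condition stays
    // true for the whole minute, so the loop spins at full speed and pins one
    // core -- the CPU spikes to 100% right on the hour. A jstack taken at any
    // other time would still catch the thread in TimeUnit.sleep, just like the
    // trace above.
    static void buggyLoop() throws InterruptedException {
        while (true) {
            if (LocalDateTime.now().getMinute() == 0) {
                rollLogFile(); // runs millions of times during minute 0
            } else {
                TimeUnit.SECONDS.sleep(1); // only sleeps OUTSIDE the hot minute
            }
        }
    }

    // FIXED: compute how long until the next full hour and sleep exactly that
    // long, so the thread wakes once per hour and does the work once.
    static void fixedLoop() throws InterruptedException {
        while (true) {
            LocalDateTime nextHour = LocalDateTime.now()
                    .truncatedTo(ChronoUnit.HOURS)
                    .plusHours(1);
            long millis = Duration.between(LocalDateTime.now(), nextHour).toMillis();
            TimeUnit.MILLISECONDS.sleep(Math.max(millis, 1));
            rollLogFile(); // once per hour
        }
    }

    // Hypothetical stand-in for whatever hourly work LogsRequest did.
    static void rollLogFile() { /* ... */ }

    public static void main(String[] args) throws InterruptedException {
        fixedLoop();
    }
}
```

In real code a `ScheduledExecutorService` with `scheduleAtFixedRate`, or a cron-style scheduler, is usually cleaner than any hand-written sleep loop, since the scheduling math lives in the library instead of your hot path.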