This quick reference cheat sheet provides a list of common commands for working with Linux system logs.
Reference: linux-logs
File | Description
---|---
/var/log/messages | RedHat/CentOS system log
/var/log/syslog | Debian/Ubuntu system log
/var/log/secure | RedHat/CentOS security log
/var/log/auth.log | Debian/Ubuntu security (authentication) log
/var/log/cron | Cron (scheduled task) log
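For example, to follow a system log in real time (the path assumes a Debian/Ubuntu system; use /var/log/messages on RedHat/CentOS):
$ tail -f /var/log/syslog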
File | Description
---|---
/var/log/boot.log | Boot log
/var/log/kern.log | Kernel log
/var/log/dmesg | Device driver messages
/var/log/mail.log | Mail server log
/var/log/daemon.log | Background daemon log
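As a sketch, kernel messages can be searched for problems (the path and the "error" pattern are only examples; adjust them to your distribution):
$ grep -i error /var/log/kern.log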
File | Description
---|---
/var/log/lastlog | Most recent login of each user
/var/log/faillog | Failed login attempts per user
/var/log/btmp | Records all failed login attempts
/var/log/utmp | Tracks who is currently logged in
/var/log/wtmp | Records every login and logout
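These files are stored in a binary format, so they are normally read with dedicated commands rather than cat (output details vary by distribution):
$ who       # current logins, from /var/log/utmp
$ last      # login/logout history, from /var/log/wtmp
$ lastb     # failed logins, from /var/log/btmp (usually needs root)
$ lastlog   # most recent login per user, from /var/log/lastlog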
GREP allows you to search for patterns in files.
-v: Invert matches
-i: Case insensitive
-E: Extended regex
-c: Count matching lines
$ grep <pattern> file.log
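A typical combination, assuming an auth log like /var/log/auth.log and a "Failed password" message format:
$ grep -ic "failed password" /var/log/auth.log    # count failed logins, case-insensitive
$ grep -v "session opened" /var/log/auth.log      # everything except matching lines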
CUT is used to parse fields from delimited logs
-d: Specify the field delimiter
-f: Select the field number(s)
-c: Select by character position
$ cut -d ":" -f 2 file.log
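For example, extracting the username (first field) from colon-delimited data such as /etc/passwd (any ":"-separated log works the same way):
$ cut -d ":" -f 1 /etc/passwd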
SORT is used to sort a file
# -r: Reverse order
$ sort file.log
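Options can be combined; for example, a reverse numeric sort on an assumed file.log:
$ sort -rn file.log    # -n: numeric sort, -r: largest first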
UNIQ is used to extract unique occurrences
# -c: Count number of duplicates
$ uniq -c file.log
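Note that uniq only collapses adjacent duplicate lines, so sort the input first. A common idiom for ranking repeated log lines (file.log is an assumed input):
$ sort file.log | uniq -c | sort -rn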
AWK is used to manipulate data
# Print first column with separator ":"
$ awk -F : '{print $1}' file.log
# Sum the values in the first column
$ awk '{a+=$1} END {print a}' file.log
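As another sketch, awk can aggregate per key; here it counts how many times each value in the first column appears (file.log is an assumed input):
$ awk '{count[$1]++} END {for (k in count) print count[k], k}' file.log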
SED (Stream Editor) is used to replace strings in a file.
# s: Substitute (search and replace)
# g: Global (replace all matches on a line)
# d: Delete matching lines
$ sed 's/regex/replace/g' file.log
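For example, replacing a string and deleting matching lines (file.log and the patterns are only placeholders; add -i to edit the file in place):
$ sed 's/error/ERROR/g' file.log    # replace every occurrence on every line
$ sed '/debug/d' file.log           # drop lines containing "debug"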
DIFF compares two files and shows the differences.
$ diff 1.log 2.log
# How to read the output?
# a: Add       #: Line numbers
# c: Change    <: Line from file 1
# d: Delete    >: Line from file 2
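For a more readable report, the unified format is often preferred (same 1.log and 2.log as above):
$ diff -u 1.log 2.log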