Pmlogger logs Rocky 9.3

Hey All,

In an effort to clean up my system drive and create a backup I can restore from in case of failure, I’m trying to find and eliminate large, unnecessary files and directories.

I came across /var/log/pcp/pmlogger on my machine and noticed it’s taking up hundreds of MB. My IT admin, also known as Google, tells me these files are used for performance monitoring. As I’m not using them, can they be deleted, and does anyone know how to shut off or limit their creation? Otherwise I guess I can just make a cron job to dump them every day.
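For the record, here’s roughly the cron idea, plus the blunt option Google keeps suggesting (both untested sketches, assuming the stock pcp systemd units on Rocky 9):

Cull archives older than a day via an entry in /etc/cron.d (rough, since the current archive may still be in use):
0 3 * * * root find /var/log/pcp/pmlogger -type f -mtime +1 -delete

Or stop PCP logging entirely:
sudo systemctl disable --now pmlogger pmcd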

Thanks in advance for any info you may have to share!
David

Just to play devil’s advocate: hundreds of MB is actually not that much and probably only takes a few seconds to copy on an internal network. I’d keep them.

Could always mount /var/log to another location if the log sizes are an issue or even script a solution to move them somewhere as they grow.
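Something like this would work, for instance (an untested sketch; /bigdisk is just a placeholder for wherever you have room):

sudo systemctl stop pmlogger
sudo mv /var/log/pcp/pmlogger /bigdisk/pmlogger
sudo ln -s /bigdisk/pmlogger /var/log/pcp/pmlogger
sudo systemctl start pmlogger

A bind mount in /etc/fstab would do the same job if you’d rather avoid the symlink.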

However, whenever I’m looking for space to reclaim, I limit my results to the T and G variety and leave the Ms to mellow.

Good luck

Might as well add some options:

Find all the G’s and T’s under the current path:
du -h . --max-depth=1 2>/dev/null | awk '$1 ~ /[GT]/'

Find all files larger than 10GB across the filesystem:
find / -xdev -type f -size +10G -exec du -hs {} \;

Find the 10 largest files across the filesystem:
find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head -n 10

Caveat: I haven’t used these in a few years, so minor tweaking may be necessary.

Yeah, I got desperate hunting down 500+ MB files because I can’t for the life of me figure out what’s gobbling up 1.1T on my system drive…


Filesystem                           Size  Used Avail Use% Mounted on
devtmpfs                             4.0M     0  4.0M   0% /dev
tmpfs                                126G  4.0K  126G   1% /dev/shm
tmpfs                                 51G   20M   51G   1% /run
/dev/sdb3                            1.8T  1.1T  750G  60% /

du -sh /opt is showing 26G and I think that’s my largest directory when I check each one.

I feel like I should be at a couple hundred GB at most used space.

find / -type f -size +100M -exec du -h {} + 2>/dev/null | sort -rh > /var/tmp/large_files_report.txt && echo "Report of files larger than 100M saved to /var/tmp/large_files_report.txt"

or

du -h / 2>/dev/null | grep -E '^[0-9.]+[MGT]' > /var/tmp/storage_usage_report.txt && echo "Storage usage report saved to /var/tmp/storage_usage_report.txt"
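And if the du totals still don’t add up to what df reports, staying on one filesystem with -x can expose files hiding underneath a mount point:

sudo du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 15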

You may have a LucidLink cache on your system.

Yeah, I actually contacted LucidLink support regarding the cache… I set the cache in the GUI, but I’m launching the daemon via a systemd service at boot, so I want to make sure that I know what’s going where.

Thanks, Phil!

I found the issue. It involved a folder one directory above my ZFS mount point, so some tmp archives I thought were being written to my zpool were in fact written to my system drive. After removing those files I’m down to 61 GB, which makes a lot more sense.

Since ZFS doesn’t show any kind of symlink in ls -la or through lsblk, I kinda missed it. Weird that I was able to track it down through /hosts/192.168.1.xxx/mnt, as this path was showing the contents of /mnt on the system drive, but the zpool mountpoint showed up empty through this path…
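For anyone who lands here with the same symptom: a bind mount of / lets you peek underneath an active mount point without unmounting anything (paths are illustrative):

sudo mkdir /tmp/rootview
sudo mount --bind / /tmp/rootview
ls -la /tmp/rootview/mnt
sudo umount /tmp/rootview

A plain bind mount doesn’t carry submounts along, so /tmp/rootview/mnt shows what’s actually sitting on the system drive under the zpool mountpoint.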

@Dwklink - you’re running Rocky on ZFS?
Or mounting pools onto an xfs or ext2/3/4 root?
ZFS as a boot drive is pretty badass, for all the obvious reasons.

Haha, hell no. I can barely handle Rocky on ext4. I have a RAIDZ for my framestore.

@Dwklink - you rebel you!
