Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » n8n

Starting a temporary instance of N8N accessible via IPv4 over the local network

n8n by default binds to localhost, and even when you bind it to listen on all interfaces, by default it still listens only on IPv6. You may also want to tell it not to use the secure cookie if you don't want to set up an SSL certificate or sign up for their cloud service. If you do this, make sure the instance is only reachable on the local network; you don't want to open your instance to the entire world.

N8N_SECURE_COOKIE=false N8N_HOST=0.0.0.0 N8N_LISTEN_ADDRESS=0.0.0.0 npx n8n

Remember to open port 5678 on the machine's firewall if necessary.
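If the machine runs ufw, a sketch of a firewall rule that opens the port to the local subnet only (the subnet here is a placeholder; substitute your actual LAN range):

```
# Example ufw rule: allow n8n's default port only from the local subnet.
sudo ufw allow from 192.168.1.0/24 to any port 5678 proto tcp
```

Other firewalls (firewalld, plain nftables/iptables) have equivalent source-restricted rules.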

Workshop » Reference Section » Grimoires » IT » Platforms » Web » WordPress » Yoast

CURL and Browser get different page versions (301 redirects, content changed, etc., due to Yoast URL parameter stripping only for non-logged-in users)

I had a truly maddening problem where my /embed/ functionality was sometimes returning full pages to CURL commands and online services like redirect-checker.org, etc.

It turned out, it was Yoast's URL parameter stripping. I had failed to update Yoast's settings with some new parameters I was using, and what Yoast never tells you is that when you're logged in, it lets everything work fine, and only strips parameters for non-logged-in users. It just lets you proceed on your way thinking everything is working fine, until you can't figure out why curl -I https://mysite.com/blah?param1&param2 is getting a 301 redirect while https://mysite.com/blah?param1&param2 is loading fine in your browser. This is especially fun on sites like mine where things like /embed/ and ?embed get rewritten back and forth to each other internally.

I lost several hours to this.

Yoast's docs say there's a way of registering parameters but,…

Workshop » Reference Section » Grimoires » IT » Applications » Web Browsers » CSS

Determining which script changed an element's attribute

So, I had an issue where quite a while ago I added some JavaScript that would open a [code]details[/code] disclosure element if it contained a named anchor that was included in the page's URL. For instance, if you loaded the URL [code]https://thisdomain.com/somepage.html#blahblahblah[/code], and the page had [code][/code] hidden inside a closed [code]details[/code] element, the script would open that element by setting the "open" attribute on the details element, and scroll to reveal the anchor.

The problem was, I needed to make some changes to how that code functioned, and I couldn't find where I had added the script that did that.

Long story short: I temporarily added this script to the head of the page, and then reloaded it with an #anchor added to the URL, in this case [code]https://michaelkupietz.com/literally-hundreds-capsule-reviews/#puzzlehead[/code]:

[code] // Override the open property setter to catch when…
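The snippet above is cut off, so here is a minimal sketch of the idea (not my original code): wrap the open property's setter so that console.trace logs a stack trace identifying whichever script flips it. In a browser you would take the descriptor from HTMLDetailsElement.prototype; the stand-in class below just makes the sketch runnable anywhere.

```javascript
// Stand-in for HTMLDetailsElement so the sketch runs outside a browser.
class FakeDetails {
  #isOpen = false;
  get open() { return this.#isOpen; }
  set open(v) { this.#isOpen = Boolean(v); }
}

const proto = FakeDetails.prototype;
const desc = Object.getOwnPropertyDescriptor(proto, 'open');

// Wrap the setter: log a stack trace whenever any code sets .open,
// then delegate to the original setter so behavior is unchanged.
Object.defineProperty(proto, 'open', {
  configurable: true,
  get: desc.get,
  set(value) {
    console.trace('open set to', value); // the trace identifies the caller
    desc.set.call(this, value);
  },
});

const d = new FakeDetails();
d.open = true; // triggers the trace
```

One caveat: code that calls setAttribute('open', '') bypasses the property setter entirely; wrapping Element.prototype.setAttribute the same way catches that path.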
Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » SQL

SQL query to list all WordPress post revisions for archiving

Here's the SQL query to get all post revisions, which I run prior to cleaning them out of the database (clearing old revisions seems to make the database much faster):

SELECT p.*
FROM [posts table name] p
WHERE ((p.post_type = 'post' OR p.post_type = 'page') -- Include posts/pages
  AND (p.post_date BETWEEN '2020-01-01' AND '2029-07-01')) -- Adjust date range
  OR (p.post_type = 'revision' AND p.post_parent IN (
    SELECT ID FROM [posts table name]
    WHERE post_date BETWEEN '2024-01-01' AND '2024-07-01'
  ));

To get just a count of revisions, change SELECT p.* to SELECT count(*).
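Note that the full query above also matches posts and pages in the date range, so if you want a count of only the revisions, a variant restricted to the revision branch of the WHERE clause looks like this (same placeholder table name):

```sql
SELECT COUNT(*)
FROM [posts table name] p
WHERE p.post_type = 'revision'
  AND p.post_parent IN (
    SELECT ID FROM [posts table name]
    WHERE post_date BETWEEN '2024-01-01' AND '2024-07-01'
  );
```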

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » rsync & rsnapshot

Speeding up rsnapshot (rsync) backups by removing wildcard paths from exclude

I removed a bunch of wildcard paths from rsnapshot.conf's exclude, and suddenly tonight my backup ran in a few minutes instead of taking over a day like it usually does.

Interesting: I've been looking off and on for at least the better part of a year for ways to lighten the load of rsnapshot's under-the-hood rsync backup commands, which reliably took up about half my CPU power almost continuously, and I never found this tip before. As you can see in the diff, plenty of wildcard paths were removed, plus a few other changes.

Here's a diff, rsnapshot.conf before changes (<) vs after (>):
< verbose 1
---
> verbose 4
120c120
< loglevel 2
---
> loglevel 4
143a144,146
> rsync_short_args -Wa
> #-W is transfer whole files without prescan, recommended for performance by https://serverfault.com/questions/639458/rsync-taking-100-of-cpu-and-hours-to-complete
> #NOTE: if you set the above short…

Workshop » Reference Section » Grimoires » IT » Applications » Web Browsers » CSS

Position “fixed” elements still scrolling, not fixed to page (also, if z-index not working properly)

I had an interesting problem where I set an image's CSS rules to position: fixed and it still scrolled with the page. Here's what I discovered:

In CSS, position: fixed means fixed with regard to the element's containing block, which is normally the viewport, but not necessarily. Adding a transform, will-change, filter, or certain other properties (list provided below) to an element makes that element the containing block for its fixed-position descendants. If an ancestor element does this, any descendant of it with position: fixed will stay fixed with regard to that ancestor, and if the ancestor scrolls with the page, the descendant will scroll too.

Ditto for the CSS property z-index. A higher z-index only wins against elements in the same stacking context. An element lower down on the page that creates a new stacking context can contain elements with lower z-index values that nonetheless appear in front visually, because the two sets of elements are never compared within the same stacking context.
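A minimal illustration of the trap (class names here are made up):

```css
/* The transform makes .card the containing block for fixed descendants,
   so .badge pins to .card and scrolls with it instead of pinning to
   the viewport. Delete the transform and .badge sticks to the page. */
.card  { transform: translateZ(0); }
.badge { position: fixed; top: 0; right: 0; }
```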

Josh Comeau's site…

Workshop » Reference Section » Grimoires » IT » Troubleshooting log » Web Server

Performance troubleshooting & settings changes 2025aug29

Following several days of frequent freezes, I tried changing the following settings:

updated in :

[opcache] original settings
;recommended by https://vpsfix.com/14433/virtualmin-post-installation-configuration-and-server-optimization-guide/
opcache.enable=
opcache.memory_consumption=
opcache.interned_strings_buffer=
opcache.max_accelerated_files=
opcache.validate_timestamps=
opcache.revalidate_freq=
opcache.save_comments=
;end recommendation

to

[opcache]
;recommended by https://vpsfix.com/14433/virtualmin-post-installation-configuration-and-server-optimization-guide/
opcache.enable=
opcache.memory_consumption=
opcache.interned_strings_buffer=
opcache.max_accelerated_files=
opcache.validate_timestamps=
opcache.revalidate_freq=
opcache.save_comments=

-
added var_dump(opcache_get_status()) to php status page to be able to monitor opcache usage

-
changed warning logs from E_ALL & ~E_DEPRECATED & ~E_STRICT to
----
noticed contained a LOT of processes being stopped for tracing
turned off request_slowlog_timeout by setting to 0s in
had been 4s
---
I had turned on LiteSpeed at 1:45 AM EST, Aug 26. Seems like more problems since then.

None of the above seem to help, still getting freezes maybe every 30 minutes. Next…

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » File management

Directory and File Locations particular to this server

PHP config -
Includes:
- opcache settings
- error warnings

PHP slow log setup is in

PHP log -

PHP error and slow logs by pool are in

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » rsync & rsnapshot

Resolving an rsnapshot error: “rsync: --delete does not work without --recursive (-r) or --dirs (-d). rsync error: syntax or usage error (code 1) at main.c(1795)”

I discovered rsnapshot hadn't run in a few days. Checking /etc/rsnapshot.log, I found every recent day had this:

rsync: --delete does not work without --recursive (-r) or --dirs (-d). rsync error: syntax or usage error (code 1) at main.c(1795) [client=3.2.7]

A few days ago I had added the line rsync_short_args -W to /etc/rsnapshot.conf in an effort to get rsync to run without putting such a load on my system. Removing this and running rsnapshot -v hourly from the command line shows that without it, the first line of the rsync command was /usr/bin/rsync -ax --delete --numeric-ids --relative --delete-excluded \, but with it, the first line was /usr/bin/rsync -Wx --delete --numeric-ids --relative --delete-excluded \.

Changing the line rsync_short_args -W to rsync_short_args -Wa, with the a flag explicitly included, solved the problem. Apparently rsync_short_args replaces rsnapshot's default short args (the default is -a) rather than adding to them.

Also: remember, when you run an…

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » rsync & rsnapshot

Post-migration steps when migrating or restoring Linux from backup

These are intentionally vague, broad steps, here just as a reminder to myself; best to look up specific instructions for each of these steps at restore time for the particular system you're restoring to.

A.) Backups should include all user data. Depending on who you ask, that's either:
1.) The entire filesystem except /dev/*, /proc/*, /sys/*, /tmp/*, /run/*, /mnt/*, /media/*, /lost+found (which can be pulled from a complete filesystem backup with rsync -avhP --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt/olddrive/ /mnt/netdrive/)
2.) /home, /etc (except /etc/passwd and /etc/group; these have useful information to back up but may conflict if written to a new install), /usr/local, /opt, /root, /var (excluding /var/tmp, /var/run, /var/lock, and /var/spool, except you DO want /var/spool/cron/crontabs/)

B.) After copying all the above to the new or restored disk, you need to update /etc/fstab with the new disk UUIDs.

C.) Install GRUB Bootloader.

D.) If you're using LUKS encryption, set that…

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » rsync & rsnapshot

Fastest way to delete a large, deep directory in Linux

Per numerous references around the web, to delete /path/to/directory-to-delete/:

cd /path/to/
mkdir empty_dir
rsync -a --delete empty_dir/ directory-to-delete/
rm -r empty_dir
rm -r directory-to-delete

Disclaimer: this is for my own reference, not recommended for your use. Use it at your own risk. If I am wrong—and I may be—these commands can do tremendous damage to your system.
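Same disclaimer applies to this throwaway demonstration of the trick under /tmp, assuming rsync is installed (all paths here are scratch examples, not real data):

```shell
# Build a small nested tree, then empty it with the rsync trick.
demo=/tmp/rsync_delete_demo
rm -rf "$demo" && mkdir -p "$demo/directory-to-delete/a/b/c"
touch "$demo/directory-to-delete/a/b/c/file.txt"

cd "$demo"
mkdir empty_dir
rsync -a --delete empty_dir/ directory-to-delete/  # mirror emptiness into the target
rm -r empty_dir
rm -r directory-to-delete
```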

Workshop » Reference Section » Grimoires » IT » Troubleshooting log » VMWare

VMware VM unreachable via IP after reboot, even from host machine

My VMware VM lost network connectivity after a reboot. Even the host machine could not access any service on it. HTTP/HTTPS requests got 523 errors.

I powered down the VM, changed the networking to NAT, powered it back up, shut it down again, changed the networking back to Autodetect, booted it again, and everything seemed fine.

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » rsync & rsnapshot

Reducing rsnapshot or rsync resource usage

I've had sporadic problems with clearing the WP cache causing the server to return 520 errors for a few minutes. Usually other sites on the same server are fine; it's specific to this vhost. Logging in via SSH and checking with htop, rsync is usually hogging most of the CPU. Restarting the FPM and then restarting Apache restores the website.

According to https://www.claudiokuenzler.com/blog/361/rsnapshot-backup-reduce-high-load-with-ionice, the big bottleneck with rsync, which rsnapshot runs on, is I/O, not CPU, and rsync can actually tie up I/O such that a web server won't respond to HTTP requests. This can be solved by making the rsnapshot command in crontab ionice -c 3 [rsnapshot command] instead of just the rsnapshot command, which tells rsync to touch the disk only when it is otherwise idle. So I did. In fact, I made it nice -n 19 ionice -c 3 [rsnapshot command] although…
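So the crontab entry ends up looking something like this (the schedule and binary path are examples, not my actual ones):

```
# /etc/cron.d/rsnapshot sketch: lowest CPU priority plus idle-only disk I/O
0 */4 * * * root nice -n 19 ionice -c 3 /usr/bin/rsnapshot hourly
```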

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » Git

Force Git to overwrite local changes if there is a branch conflict

Use code you find here at your own risk! I am not responsible if you damage your data or system by following any instructions you find here.

  1. Navigate to your plugin's root directory:

    Bash

    cd /home/kupietzc/public_html/kartscode/wp-content/plugins/ktwp-draggable-elements

  2. Fetch the latest changes from GitHub:
    Bash

    git fetch origin

  3. Perform a hard reset to match GitHub's main branch (assuming main is your branch):

    Bash
    git reset --hard origin/main

    WARNING: This command is destructive. It will discard all local changes to tracked files and make your local repository exactly match your GitHub repository. Ensure you have a backup of any local modifications you wish to preserve that are NOT on GitHub before running this.

  4. Clean up any untracked files or directories (remnants from manual copying):
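The command for that step was cut off above; git clean is the usual tool for it, sketched here against a throwaway repo (in the real plugin directory you'd just run the two git clean lines, and run the -nd dry run first to preview what will be deleted):

```shell
# Throwaway repo to demonstrate git clean safely.
repo=/tmp/git_clean_demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
touch tracked.txt && git add tracked.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm init

# Simulate remnants from manual copying: untracked file and directory.
mkdir leftovers && touch leftovers/old_copy.php stray.txt

git clean -nd   # dry run: lists what would be removed
git clean -fd   # actually removes untracked files and directories
```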
Workshop » Reference Section » Grimoires » IT » Applications » Web Browsers » CSS

Getting Web Browsers Not To Blur Images on Retina Screens

Unfortunately this must be set per site, but on Retina screens on macOS, many browsers blur small images, such as 88x31 buttons.

You can overcome this, at least for the images on your site, by adding this CSS to your site:

img, div {
image-rendering: optimizeSpeed;
image-rendering: -moz-crisp-edges;
image-rendering: -webkit-optimize-contrast;
image-rendering: optimize-contrast;
image-rendering: pixelated;
-ms-interpolation-mode: nearest-neighbor;
}

You should add any element that might have a CSS background-image property to that selector. In this case I have added div because I have many divs with background images on this site.

This tip is from https://stackoverflow.com/questions/31908444/fix-for-blurry-images-on-browsers-used-by-a-mac-retina

Workshop » Reference Section » Grimoires » IT » Platforms » MacOS » Apps

How to fix if the “Save” button is grayed out in Photoshop CC 2017 save and export dialogs

I don't know if this affects other versions of Photoshop, but on MacOS Photoshop CC 2017 frequently starts unexpectedly graying out all save buttons when you have made changes to your file and should be able to save.

The secret is to resize and move around the dialog. Drag the lower right corner to make it bigger and smaller a few times, and try dragging the whole dialog to the upper left corner of the screen and making it small.

This fixes it for me.

Workshop » Reference Section » Grimoires » IT » Troubleshooting log » WordPress

Deactivating, deleting, and completely removing a plugin that WordPress won’t let you deactivate

I installed the WordPress plugin LWS Optimize, which turned out to be unusably broken (which is the reason I'm not linking to it) and made my site unusable. To make matters worse, when I tried to deactivate it, it told me it deactivated... and was still active. I went in through FTP and deleted the plugin folder entirely, and then WordPress said it had been deactivated because it couldn't be found... and it still showed as present and activated in the plugin list.

So I added this to my theme's functions.php file:

add_action('admin_init', function() {
    $active_plugins = get_option('active_plugins');
    $plugin_to_remove = 'lws-optimize/lws-optimize.php';

    if (($key = array_search($plugin_to_remove, $active_plugins)) !== false) {
        unset($active_plugins[$key]);
        update_option('active_plugins', array_values($active_plugins));
    }
});

I then reloaded an admin page and removed that. That deactivated the plugin in the plugins list, but then when I hit the "delete" link, it said it…

Workshop » Reference Section » Grimoires » IT » Troubleshooting log

Website returns 503 server errors, but no errors in logs

Had a weird one today. One website of the several on this server suddenly started returning 503 (service unavailable) errors. There was nothing in the PHP error log or Apache error log. All server configs were already thoroughly optimized for performance. Other websites on the same server were functioning normally.

I didn't notice this at the time, but my uptime monitor didn't report an outage. When I used redirect-checker.com to check the status code, it returned 200, which should have been a clue, also.

Next time, before doing all sorts of arcane troubleshooting:
1. Try with a different browser
2. Is there a CDN? Try bypassing it.
3. Are you using a VPN? Try selecting a different endpoint (VPN server) if it will let you, or turning it off.

I use the NordVPN plugin in Firefox, and quic.cloud is my…

Workshop » Reference Section » Grimoires » IT » Applications » FileMaker Pro

Get names of all input fields in a FileMaker Pro table

ExecuteSQL ( "SELECT FieldName FROM FileMaker_Fields WHERE TableName='[TABLE NAME]' AND FieldClass='Normal'",",","¶")

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » PHP

How to monitor RAM for tuning pm.max_children

How to monitor RAM usage:

  1. free -h:

    • This command shows your system's total, used, and free memory in a human-readable format.
    • Key metrics:
      • total: Total RAM.
      • used: RAM currently in use.
      • free: Unused RAM.
      • buff/cache: RAM used for file system buffers and page cache. This is good; Linux uses free RAM for this and frees it when applications need it.
      • available: The most important metric. This estimates how much memory is available for starting new applications without swapping.
    • Run it before and after: Run free -h before you increase max_children and then after your server has been running for a while under typical load with the new settings. Compare the available memory.
  2. htop (recommended if installed):

    • htop (you might need to sudo…
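Once you have an available-memory figure and an average worker size, the pm.max_children ballpark is just a division. A sketch with example numbers (placeholders, not measurements from any real server):

```shell
# Example: RAM you can spare for PHP divided by average worker RSS.
AVAILABLE_MB=4096   # e.g. the "available" column from `free -m`, minus headroom
AVG_CHILD_MB=64     # e.g. average RSS per php-fpm child process
echo "pm.max_children ballpark: $(( AVAILABLE_MB / AVG_CHILD_MB ))"
```

Leave headroom for MySQL, the OS page cache, and whatever else the box runs; this is a starting point to refine under real load, not a final value.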
Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » cron

Add sar logging for CPU, RAM, and disk I/O

Add or change /etc/cron.d/sysstat to this. This creates a cron job that writes the file /tmp/outage_resource_log.txt, keeping minute-by-minute stats, sometimes useful in troubleshooting slowdowns. However, it's not a great way to do things; it creates a small, constant resource drag, so disable it when you're done troubleshooting.

# The first element of the path is a directory where the debian-sa1
# script is located
PATH=/usr/lib/sysstat:/usr/sbin:/usr/sbin:/usr/bin:/sbin:/bin

# Activity reports every 10 minutes everyday
#ORIGINAL DEFAULT WAS 5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
#uncomment above line and comment out /tmp/outage_resource_log.txt lines to restore original functionality
* * * * * root date +"%Y-%m-%d %H:%M:%S" >> /tmp/outage_resource_log.txt
* * * * * root sar -u 1 1 >> /tmp/outage_resource_log.txt 2>&1
* * * * * root sar -r 1 1 >> /tmp/outage_resource_log.txt 2>&1
* * *…

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » Apache

Add /fpm-status page to Apache virtual host

Add this to virtual host file in /etc/apache2/sites-available/, right below DocumentRoot, in both :80 and :443 sections



<Location "/fpm-status">
    SetHandler "proxy:unix:/var/php-fpm/170027027353667.sock|fcgi://127.0.0.1"
    Require all granted
</Location>

May need in /etc/php/8.2/fpm/pool.d/www.conf, not sure:
pm.status_path = /fpm-status

May need at very start of .htaccess to prevent wordpress from intercepting the URL, not sure:
RewriteRule ^fpm-status$ - [L]

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » Apache

View last 200 lines of all access logs on an Apache server

find [path/to/access/logs/folder] -name "*_access_log" -exec sh -c 'tail -200 "$1" | grep -v "HetrixTools\|ok\.txt\|canary" | sed "s/$/ [$(basename "$1" _access_log)]/"' _ {} \; | sort -k4,4

The grep -v "HetrixTools\|ok\.txt\|canary" filters out hits from my uptime monitor.

Workshop » Reference Section » Grimoires » IT » Platforms » Linux » Packages » fail2ban

How to check Fail2ban log

Command to check fail2ban's log is sudo tail -f /var/log/fail2ban.log

Linux

Linux PHP tuning utilities & commands

1. See memory consumed by php-fpm8.2 (change this to match different PHP version if necessary)

ps --no-headers -o "rss,cmd" -C php-fpm8.2 | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'

Linux

Linux SQL Tuning Utilities

1. tuning-primer.sh

Run directly from GitHub:
curl -L https://raw.githubusercontent.com/BMDan/tuning-primer.sh/main/tuning-primer.sh | bash

2. MySQLTuner.pl

wget http://mysqltuner.pl/ -O mysqltuner.pl
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/basic_passwords.txt -O basic_passwords.txt
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/vulnerabilities.csv -O vulnerabilities.csv
perl mysqltuner.pl --host 127.0.0.1 --user [user] --pass [pass]

Remember to quote any punctuation or BASH tokens in the password.

Linux

Debian CPU spike process logger

This was a system service that ran whenever the CPU spiked and logged what was running. Mostly it showed that php-fpm was what spiked the CPU. Fascinating.

The daemon unit was at /etc/systemd/system/mk-cpu-watcher.service

[Unit]
Description=CPU Usage Watcher
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/mk-cpu-watcher.sh
Restart=always

[Install]
WantedBy=multi-user.target

The script it ran was at /usr/local/bin/mk-cpu-watcher.sh


#!/bin/bash

LOG_FILE="/var/log/mk-cpu-spikes.log"
THRESHOLD=98 # Can set to threshold just below 100 to catch near-maximum usage

while true; do
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2+$4}' | cut -d. -f1)

if [ "$CPU_USAGE" -ge "$THRESHOLD" ]; then
echo "=== CPU spike detected at $(date) - Usage: ${CPU_USAGE}% ===" >> "$LOG_FILE"
echo "Top processes:" >> "$LOG_FILE"
ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%cpu | head -11 >> "$LOG_FILE"
echo -e "\n" >> "$LOG_FILE"
fi

sleep 5 #…