TECHNICUS

Hardware


DIGITIZE OLD PHOTOS AND VIDEOS

August 25, 2023 by nicholas

Here is a list of hardware and software that I use to digitize old home movies,
tapes, and family pictures:


HARDWARE

35mm film scanner: Pacific Image PowerFilm Plus 35mm Film Scanner

Document / picture scanner: Brother ADS-2700W

Flatbed scanner: Epson DS-50000 Large-Format Document Scanner

Audiocassette player with a 3.5mm output jack

Laptop / computer with 3.5mm input jack (headphone/microphone)

VHS player

USB VHS to Digital Converter

Soft tip silicone air blower


SOFTWARE

Video capture: OBS Studio

Photo editing: GIMP

Audio capture: Audacity



Digitizing these precious memories makes them available for future generations.
They are much more useful to everyone online than they ever were sitting in a
box.

Tags: hardware, scanning
CLI


REPLACE UNAVAIL DISK IN ZFS

August 24, 2023 by nicholas

I had an issue where I removed a drive in my ZFS array and replaced it with a
new drive, to which the OS gave the same device name (/dev/sdd). I had a hard
time getting ZFS to replace the drive until I discovered the -g flag for zpool
status (thanks to this Stack Exchange post.)

That did the trick! Simply running zpool status -g showed the GUID of each
device, which I could then pass to zpool replace:

sudo zpool replace Poolname 12922644002107879117 /dev/sdd

Success!
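
For reference, the whole sequence looks roughly like this. It's a minimal sketch, assuming the pool is named Poolname and the replacement disk came up as /dev/sdd:

# show vdevs by GUID instead of device name
sudo zpool status -g Poolname

# replace the old vdev (referenced by GUID) with the new disk
sudo zpool replace Poolname 12922644002107879117 /dev/sdd

# watch resilver progress
sudo zpool status Poolname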

Tags: linux, ZFS, zpool
CLI


FIX MAKEMKV NOT COMPILING IN ARCH

August 4, 2023 by nicholas

I’ve had my Arch Linux desktop system for several years now. Over that time,
cruft has built up. It bit me today when I tried to install makemkv. No matter
what I tried I could not get it to compile. Configure constantly failed at this
step:

checking whether LIBAVCODEC_VERSION_MAJOR is declared... yes
checking LIBAVCODEC_VERSION_MAJOR... 52
...
configure: error: The libavcodec library is too old. Please get a recent one from http://www.ffmpeg.org


I had to systematically delete anything containing ffmpeg, then re-install
ffmpeg, in order to finally get it to work.

Get a list of installed packages containing ffmpeg:

yay -Ss ffmpeg | grep Installed

Remove ffmpeg-containing packages:

yay -R chromaprint-fftw grilo-plugins gst-plugins-bad cheese gnome-music gnome-video-effects totem ffmpeg-compat-54 ffmpeg-compat-57 ffmpeg0.10 ffmpeg4.4 vlc libavutil-52 faudio

Re-install ffmpeg and install makemkv:

yay -S ffmpeg makemkv

My “nuke all ffmpeg from orbit” approach worked. After I did so, makemkv
compiled!

Tags: Arch Linux, ffmpeg, makemkv, Pacman, yay
CLI, Web


FIX CRON OUTPUT NOT BEING SENT VIA E-MAIL

August 3, 2023 by nicholas

I had an issue where cron jobs were producing output on stdout, yet mail of the
output was never delivered. Everything looked fine in cron.log:

Aug  3 21:21:01 mail CROND[10426]: (nicholas) CMD (echo "test")
Aug  3 21:21:01 mail CROND[10424]: (nicholas) CMDOUT (test)

yet no e-mail was sent. I finally found out how to fix this in a roundabout way.
I came across this article on cpanel.net about how to silence cron e-mails. I
then thought I'd try the reverse of one suggestion and add a MAILTO= variable at
the top of my cron file. It worked! Example crontab:

MAILTO="youremail@address.com"
0 * * * * /home/nicholas/queue-check.sh

This came about due to my Zimbra box not sending system e-mails. In addition to
the above, I had to configure zimbra as a sendmail alternative per this Zimbra
wiki post:
https://wiki.zimbra.com/wiki/How_to_%22fix%22_system%27s_sendmail_to_use_that_of_zimbra

Tags: CentOS 7, cron, E-mail, linux, Zimbra
Networking, Virtualization


FIX NO INTERNET IN KVM/QEMU VMS AFTER INSTALLING DOCKER

July 31, 2023 by nicholas

I ran into a frustrating issue where my KVM VMs would lose network connectivity
if I installed docker on my Arch Linux system. After some digging I finally
discovered the cause (thanks to anteru.net):

> It turns out, docker adds a bunch of iptables rules by default which prevent
> communication. These will interfere with an already existing bridge, and
> suddenly your VMs will report no network.

There are two ways to fix this. I went with the route of telling docker to NOT
mess with iptables on startup. Less secure, but my system is not directly
connected to the internet. I created /etc/docker/daemon.json and added this to
it:

{
    "iptables" : false
}

Then I restarted my machine. This did the trick!
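
For what it's worth, a full machine reboot may not be strictly necessary: restarting the Docker service should also pick up the daemon.json change, though I have not verified whether the old iptables rules get cleaned up without a reboot:

sudo systemctl restart docker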

Tags: docker, KVM, Networking, NetworkManager
CLI, Hardware, Virtualization


PROXMOX CEPH STORAGE CONFIGURATION

April 22, 2023 by nicholas

These are my notes for migrating my VM storage from NFS mount to Ceph hosted on
Proxmox. I ran into a lot of bumps, but after getting proper server-grade SSDs,
things have been humming smoothly long enough that it’s time to publish.


A NOTE ON SSDS

I had a significant amount of trouble getting ceph to work with consumer-grade
SSDs. This is because ceph does a cache writeback call for each transaction –
much like NFS. On my ZFS array, I could disable this, but not so for ceph. The
result is very slow performance. It wasn’t until I got some Intel DC S3700
drives that ceph became reliable and fast. More details here.


INITIAL INSTALL

I used the Proxmox GUI to install ceph on each node by going to <host> / Ceph.
Then I used the GUI to create a monitor, manager, and OSD on each host. Lastly,
I used the GUI to create a ceph storage target in Datacenter config.


SMALL CLUSTER (3 NODES)

My Proxmox cluster is small (3 nodes.) I discovered I didn’t have enough space
for 3 replicas (the default ceph configuration), so I had to drop my pool
size/min down to 2/1 despite warnings not to do so, since a 3-node cluster is a
special case:

https://forum.proxmox.com/threads/ceph-pool-size-is-2-1-really-a-bad-idea.68939/#post-440755

More discussion:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/UB44GH4Z2NJUV52ZTHKO4TGYEX3DZ4CB/

I have not had any problems with this configuration and it provides the space I
need.
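
If you need to make the same change from the CLI rather than the GUI, it looks roughly like this (a sketch, assuming the pool is named <POOL_NAME>):

ceph osd pool set <POOL_NAME> size 2
ceph osd pool set <POOL_NAME> min_size 1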


CEPH POOL SIZE

In my early testing, I discovered that if I removed a disk from the pool, the
size of the pool increased! After doing some reading in the Red Hat
documentation, I learned the basics of why this happened.

Size = number of copies of the data in the pool

Minsize = minimum number of copies before pool operation is suspended

I didn’t have enough space for 3 copies of the data. When I removed a disk, the
pool dropped down to the minsize setting (2 copies), which I did have enough
room for. The pool rebalanced to reflect this and it resulted in more space.


CONFIGURE ALERTING

It turns out that alerting for problems with ceph OSDs and monitors does not
come out of the box. You must configure it. Thanks to this thread and the ceph
documentation for how to do so. I did this on each proxmox node.

apt install ceph-mgr-dashboard
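
Depending on your Ceph release, the alerts manager module may also need to be enabled before the settings below take effect. This step was not in my original notes; it comes from the ceph alerts documentation:

ceph mgr module enable alerts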

ceph config set mgr mgr/alerts/smtp_host <MAIL_HOST>
ceph config set mgr mgr/alerts/smtp_ssl false
ceph config set mgr mgr/alerts/smtp_port 25
ceph config set mgr mgr/alerts/smtp_destination <DEST_EMAIL>
ceph config set mgr mgr/alerts/smtp_sender <SENDER_EMAIL>
ceph config set mgr mgr/alerts/smtp_from_name 'Proxmox Ceph Cluster'

Test this by telling ceph to send its alerts:

ceph alerts send


MOVE VM DISKS TO CEPH STORAGE

I ended up writing a simple for loop to move all my existing Proxmox VM disks
onto my new ceph cluster. None of my VMs had more than 3 scsi devices. If your
VMs have more than that you’ll have to tweak this rudimentary command:

for vm in $(qm list | awk '{print $1}'|grep -v VMID); do qm move-disk $vm scsi0 <CEPH_POOL_NAME>; qm move-disk $vm scsi1 <CEPH_POOL_NAME>; qm move-disk $vm scsi2 <CEPH_POOL_NAME>; done
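
Here is a slightly more general sketch that loops over extra scsi slots instead of hard-coding three move-disk calls. The slot list is arbitrary; adjust it to match your largest VM:

# move every scsi disk of every VM to the ceph pool
for vm in $(qm list | awk '{print $1}' | grep -v VMID); do
  for slot in scsi0 scsi1 scsi2 scsi3 scsi4 scsi5; do
    qm move-disk $vm $slot <CEPH_POOL_NAME>
  done
done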


RENAME STORAGE

I tried to edit /etc/pve/storage.cfg to change the name I gave my ceph cluster
in Proxmox. That didn't work (there was a question mark next to the storage
after renaming it) so I just removed and re-added it instead.


MAINTENANCE


BEGIN MAINTENANCE:

Ceph constantly tries to keep itself in balance. If you take a node down and it
stays down for too long, ceph will begin to rebalance the data among the
remaining nodes. If you’re doing short term maintenance, you can control this
behavior to avoid unnecessary rebalance traffic.

ceph osd set nobackfill
ceph osd set norebalance

Reboot / perform OSD maintenance.


AFTER MAINTENANCE IS COMPLETED:

ceph osd unset nobackfill
ceph osd unset norebalance


PERFORMANCE BENCHMARK

I did a lot of performance checking when I first started to try and track down
why the pool was so slow. In the end it was my consumer-grade SSDs. I’ll keep
this section here for future reference.

Redhat article on ceph performance benchmarking

Ceph wiki on benchmarking

rados bench -p SSD 10 write --no-cleanup
rados bench -p SSD 10 seq
rados bench -p SSD 10 seq
rados bench -p SSD 10 rand
rbd create image01 --size 1024 --pool SSD
rbd map image01 --pool SSD --name client.admin
mkfs.ext4 /dev/rbd/SSD/image01  
mkdir /mnt/ceph-block-device
mount /dev/rbd/SSD/image01 /mnt/ceph-block-device/
rbd bench --io-type write image01 --pool=SSD
pveperf /mnt/ceph-block-device/
rados -p SSD cleanup

Undo:

 umount /mnt/ceph-block-device  
 rbd unmap image01 --pool SSD
 rbd rm image01 --pool SSD



MTU 9000 WARNING

I read that it was recommended to set the network MTU to 9000 (jumbo frames).
When I did this I experienced weird behavior and connection timeouts – ceph
ground to a halt, complaining about slow OSDs and mons. It was too much hassle
for me to troubleshoot, so I went back to the standard 1500 MTU.


DATACENTER SETTINGS

I discovered you can have a node automatically migrate its VMs off when you
issue the reboot command, via the migrate shutdown policy.
https://pve.proxmox.com/wiki/High_Availability

Proxmox GUI / Datacenter / Options / HA Settings


SPECIFY SSD OR HDD FOR POOLS

I have not done this yet but here’s a link I found that explains how to do it:
https://stackoverflow.com/questions/58060333/ceph-how-to-place-a-pool-on-specific-osd


HELPFUL COMMANDS

Determine IPs of OSDs:

ceph osd dump

Remove monitor from failed node:

ceph mon remove <host>

The monitor also needs to be removed from /etc/ceph/ceph.conf.


CONFIGURE BACKUP

I had been using ZFS snapshots and ZFS send to backup my VM disks before the
move to ceph. While ceph has snapshot capability, it is slow and takes up extra
space in the pool. My solution was to spin up a Proxmox Backup Server and
regularly back up to that instead.

Proxmox Backup Server can be installed on an existing PVE server if you desire:

https://pbs.proxmox.com/docs/installation.html

Configure the apt repository as follows:

# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription

# security updates
deb http://security.debian.org/debian-security bullseye-security main contrib

# apt-get update
# apt-get install proxmox-backup

I had to add a regular user and give admin permissions on PBS side, then add the
host on the proxmox side using those credentials.

Configure automated backup in PVE via Datacenter tab / Backup.

Remember to configure automated verify jobs (scrubs).

Make sure to add an e-mail address for proxmox backup user for alerts.

Edit which account & e-mail is used, and how often notified, at the Datastore
level.


SYNC JOBS

I wanted to synchronize my Proxmox Backup repository to a non-PBS server (simply
host the files.) I accomplished this by doing the following:

 * Add 127.0.0.1 as a Remote host (Configuration / Remotes.) Copy the PBS server
   fingerprint from Certificates / Fingerprint.
 * Create the remote datastore in /etc/fstab manually (I used SSHFS to back up
   to a Synology over SSH; an example entry is sketched after this list.)
 * Add a datastore in PBS pointing to the manual fstab mount, then add the sync
   job there.
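
An SSHFS fstab entry for this looks roughly like the following. Treat it as a sketch: the user, host, and paths are placeholders, and the option list is just what a typical SSHFS mount needs:

# /etc/fstab - SSHFS mount used as a PBS datastore target
backupuser@synology:/volume1/pbs-sync /mnt/pbs-remote fuse.sshfs defaults,_netdev,allow_other,reconnect,IdentityFile=/root/.ssh/id_rsa 0 0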


IMPORT PBS DATASTORE (IN CASE OF TOTAL CRASH)

I wanted to know how to import the data into a fresh instance of PBS. This is
the procedure:

Edit /etc/proxmox-backup/datastore.cfg and add the datastore config manually.
Copy from an existing datastore config for syntax.
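
For reference, a datastore entry in that file looks something like the following sketch. The name and path are placeholders; as noted above, copy the exact syntax from an existing datastore config:

datastore: mydatastore
    path /mnt/datastore/mydatastore
    comment restored after rebuild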


SPACE STILL BEING TAKEN UP AFTER DELETING BACKUPS

PBS uses access time to determine if something has been touched. It waits 24
hours after the last touch. Garbage collection manually updates atime, but it is
still recommended to keep atime enabled on the dataset PBS is using. Sources:

https://forum.proxmox.com/threads/zpool-atime-turned-off-effect-on-garbage-collection.76590/

https://pbs.proxmox.com/docs/backup-client.html#garbage-collection


TROUBLESHOOTING


REALLY SLOW VM IOPS DURING DEGRADE / REBUILD

This also ended up being due to having consumer-grade SSDs in my ceph pools. I’m
keeping my notes for what I did to troubleshoot in case they’re useful.

https://forum.proxmox.com/threads/ceph-high-i-o-wait-on-osd-add-remove.20271/

Small cluster. Lower backfill activity so recovery doesn’t cause slowdown:

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 3

Verify setting was applied: https://www.suse.com/support/kb/doc/?id=000019693

ceph-conf --show-config|egrep "osd_max_backfills|osd_recovery_max_active"
ceph config dump | grep osd

Ramp up backfill performance:

ceph tell osd.* injectargs --osd_max_backfills=2 --osd_recovery_max_active=8 # 2x Increase
ceph tell osd.* injectargs --osd_max_backfills=3 --osd_recovery_max_active=12 # 3x Increase
ceph tell osd.* injectargs --osd_max_backfills=4 --osd_recovery_max_active=16 # 4x Increase
ceph tell osd.* injectargs --osd_max_backfills=1 --osd_recovery_max_active=3 # Back to Defaults

The above didn’t help; it turns out consumer SSDs are very bad:

https://yourcmc.ru/wiki/Ceph_performance#General_benchmarking_principles

https://blog.cypressxt.net/hello-ceph-and-samsung-850-evo/

I bought some Intel DC S3700 on ebay for $75 a piece. It fixed all my
latency/speed issues.


DEAD MON DESPITE BEING REMOVED FROM CLI


I had a situation where a monitor showed up as dead in proxmox, but I was unable
to delete it. I followed this procedure:

rm /etc/systemd/system/ceph-mon.target.wants/ceph-mon@<nodename>.service

Dead PVE node procedure:

Remove the node from /etc/ceph/ceph.conf, remove /var/lib/ceph/mon/ceph-<node>,
and remove /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve2.service

https://forum.proxmox.com/threads/ceph-cant-remove-monitor-with-unknown-status.63613/

Adding it back through the GUI brought me back to the same problem.

Bring node back manually

https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/

 ceph auth get mon. -o /tmp/key
 ceph mon getmap -o /tmp/map
 ceph-mon -i <node_name> --mkfs --monmap /tmp/map --keyring /tmp/key
 ceph-mon -i <node_name> --public-addr <node_ip>:6789
 ceph mon enable-msgr2
 vi /etc/pve/ceph.conf

In the end the most surefire way to fix this problem was to re-image the
affected host.


CLEAR HEALTH_WARNING IN GUI

In my testing I had tried pulling disks at random, then putting them back in.
This recovered well, but I had this message:

HEALTH_WARN 1 daemons have recently crashed

To clear it I had to drop to the CLI and run this command:

ceph crash archive-all

Thanks to the Proxmox Forums for the fix.
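
If you want to see what actually crashed before clearing the warning, the related commands are standard ceph CLI (these weren't part of my original notes):

ceph crash ls
ceph crash info <crash_id>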


POOL CLEANUP

I noticed I would get rbd error: rbd: listing images failed: (2) No such file or
directory (500) when trying to look at which disks were on my Ceph pool. I fixed
this by removing the offending images as per this post.

I then ran another rbd ls -l <POOL_NAME> command to see what was left and
noticed several items without anything in the LOCK column. I discovered these
were artifacts from failed disk migrations I tried early on – wasted space. I
removed them one by one with the following command:

rbd rm <VM_FILE_NAME> -p <POOL_NAME>

Be careful to verify they're not disks that are in use by VMs which are powered
off – non-running VMs will also show no lock.


DISK ERRORS

I had a disk fail, but then I pulled out the wrong disk. I kept getting these
errors:

Warning: Error fsyncing/closing
/dev/mapper/ceph--fc741b6c--499d--482e--9ea4--583652b541cc-osd--block--843cf28a--9be1--4286--a29c--b9c6848d33ba:
Input/output error


I was unable to remove it from the GUI. After a while I realized the problem – I
was on the wrong node. I needed to be on the node that has the disks when
creating an OSD in the Proxmox GUI.

Steps to determine which disk is assigned to an OSD, from ceph docs:

ceph-volume lvm list


====== osd.2 =======

 [block]       /dev/ceph-680265f2-0b3c-4426-b2a8-acf2774d82e0/osd-block-2096f339-0572-4e1d-bf20-52335af9b374

     block device              /dev/ceph-680265f2-0b3c-4426-b2a8-acf2774d82e0/osd-block-2096f339-0572-4e1d-bf20-52335af9b374
     block uuid                tcnwFr-G33o-ybue-n0mP-cDpe-sp9y-d0gvYS
     cephx lockbox secret       
     cluster fsid              65f26da0-fca0-4419-ba15-20269a5a363f
     cluster name              ceph
     crush device class        ssd
     encrypted                 0
     osd fsid                  2096f339-0572-4e1d-bf20-52335af9b374
     osd id                    2
     osdspec affinity           
     type                      block
     vdo                       0
     devices                   /dev/sde


Tags: backup, Ceph, Debian, linux, ProxMox, sshfs, storage
CLI, Networking


FORCE DNS REFRESH FOR PING IN CENTOS7

March 30, 2023 by nicholas

I came across an issue where I updated /etc/resolv.conf but name resolution
wasn't working in CentOS 7. nslookup & host both returned results, but ping did
not. After some digging, I found this post which mentioned the name service
cache daemon (nscd). After restarting it, DNS worked properly! So if you update
DNS, be sure to also restart nscd:

sudo systemctl restart nscd

Tags: CentOS 7, linux, nscd
CLI


CONVERT TIF TO JPG WITH IMAGEMAGICK

March 3, 2023 by nicholas

My new project is digitizing film negatives. Following advice found on the
DataHoarder subreddit, I’m scanning these files in the highest possible quality
in uncompressed TIF files. These TIF files are too big for regular consumption,
thus the need to convert to JPG.

ImageMagick is amazing, and does the job nicely. Make sure you have the
imagemagick package installed, and it’s as simple as using the convert command.

This is my simple script for converting all TIF files to JPG, and outputting
them to the same directory:

for file in *.tif; do echo converting "$file" to "${file%.*}.jpg"; convert "$file" "${file%.*}.jpg"; done


It uses bash substitution to remove the TIF extension in the resulting JPG file.
It works beautifully!

Update 4/14/2023:

I have re-worked this a bit to handle multiple directories. It involves setting
the Internal Field Separator (IFS) to a newline instead of the default
whitespace and using the find command. The multi-directory command is below:

IFS=$'\n'; for file in $(find . -name "*.tif"); do echo converting "$file" to "${file%.*}.jpg"; convert "$file" "${file%.*}.jpg"; done; unset IFS




Tags: BASH, convert, ImageMagick, JPG, scanning, TIF
CLI, Networking


RESTART WIREGUARD INTERFACE IN OPENWRT

February 1, 2023 by nicholas

One annoying issue with wireguard in OpenWRT is the fact that it won’t re-check
DNS on connection failure. In the event that my public IP changes (dynamic IP)
the OpenWRT wireguard client doesn’t ever get the memo, even when DNS is
updated.

I discovered here that you can tell OpenWRT via the command line to stop and
start the wireguard interface. This forces a new DNS check and then the tunnel
builds successfully. The command:

ubus call network.interface.wg0 down && ubus call network.interface.wg0 up

Success! Throw this into a cron job and you have an automated failsafe to ensure
a reconnect after IP change.
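
Here is a sketch of what that cron job could look like: a small script that pings an address on the far side of the tunnel and bounces the interface only if it is unreachable. The interface name wg0 matches the command above, but the remote test IP 10.0.0.1 and the script path are placeholders:

#!/bin/sh
# /root/wg-watchdog.sh - bounce wg0 if the far side of the tunnel stops answering
if ! ping -c 3 -W 5 10.0.0.1 > /dev/null 2>&1; then
    ubus call network.interface.wg0 down && ubus call network.interface.wg0 up
fi

And a crontab entry to run it every five minutes:

*/5 * * * * /root/wg-watchdog.sh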

Tags: OpenWRT, wireguard
Networking


WIREGUARD ONE-WAY TRAFFIC ON USG PRO 4 AFTER DUAL WAN SETUP

October 18, 2022 by nicholas

I have a site-to-site VPN between my Ubiquiti USG Pro 4 and an OpenWRT device
over wireguard. It worked great until I got a secondary WAN connection as a
failover, since my primary cable connection has been flaky lately.

When you introduce dual-WAN on Ubiquiti devices you have to manually configure
everything, since the GUI assumes only one WAN connection. I configured my
manual DNAT (port forwards) for each interface successfully, but struggled to
figure out why my Wireguard VPN between my two sites suddenly only went one way
(the remote side could ping all hosts on the local side, but not vice versa.)

After some troubleshooting I realized the firewall itself could ping the remote
subnet just fine, it just wasn’t allowing local hosts to do so. I couldn’t find
anything in firewall logs. Eventually I came across this very helpful page from
hackad.nu that helped me to solve my problem.

The solution was to add a Firewall Modify rule specifically for the eth0
interface (which all my LAN traffic is routed through) to allow the source
address of the subnets I want to traverse the VPN, then apply that modifier to
the LAN_IN firewall rule for that interface. I had to do it for any VLANs I
wanted to be able to use the Wireguard tunnel as well (vifs of eth0, VLAN 50 in
my case).

Here are the relevant config.gateway.json sections, namely “firewall” and
“interfaces”:

{
    "firewall": {
        "modify": {
            "Wireguard": {
                "rule": {
                    "10": {
                        "action": "modify",
                        "description": "Allow Wireguard traffic",
                        "modify": {
                            "table": "10"
                        },
                        "source": {
                            "address": "10.1.0.0/16"
                        }
                    }
                }
            }
        }
    },
    "interfaces": {
        "ethernet": {
            "eth0": {
                "firewall": {
                    "in": {
                        "ipv6-name": "LANv6_IN",
                        "modify": "Wireguard",
                        "name": "LAN_IN"
                    }
                },
                "vif": {
                    "50": {
                        "firewall": {
                            "in": {
                                "ipv6-name": "LANv6_IN",
                                "modify": "Wireguard",
                                "name": "LAN_IN"
                            }
                        }
                    }
                }
            }
        }
    }
}

This did the trick! Wireguard is working both directions again, this time with
my dual WAN connections.

Tags: Dual WAN, failover, firewall, JSON, site to site, Ubiquiti, USG, VPN, wireguard




WELCOME

This blog is meant as a dumping ground for my technical musings. It is mostly
for my own sake, but I am making it public on the off chance that it might be
useful to someone else.

If you find the content on this site useful, please donate to contribute to
server costs. Thank you!



