pull/38/head
DoTheEvo 10 months ago
parent eb2d88d99f
commit f88dd2cf06

@ -241,6 +241,7 @@ rather than just **docker** tutorials.
[Beginners speedrun to selfhosting something in docker](beginners-speedrun-selfhosting/)
* [Good stuff](https://adamtheautomator.com/docker-compose-tutorial/)
* [https://devopswithdocker.com/getting-started](https://devopswithdocker.com/getting-started)
* [This](https://youtu.be/DM65_JyGxCo) one is pretty good. That entire channel
has good stuff.

@ -0,0 +1,64 @@
[global]
ioengine=windowsaio
#ioengine=libaio
filesize=1g
filename=.fio-diskmark
direct=1 #use O_DIRECT IO (negates buffered)
time_based #keep running until runtime/timeout is met
runtime=30 #stop workload when this amount of time has passed
loops=1 #number of times to run the job
#refill_buffers #always writes new random data in the buffer
#randrepeat=0 #do not use repeatable random IO pattern
thread #use threads instead of processes
stonewall #insert a hard barrier between this job and previous
[Seq-Read-Q32T1]
iodepth=32
numjobs=1
bs=1m
rw=read
[Seq-Write-Q32T1]
iodepth=32
numjobs=1
bs=1m
rw=write
[Rand-Read-4K-Q8T8]
iodepth=8
numjobs=8
openfiles=8
bs=4k
rw=randread
[Rand-Write-4K-Q8T8]
iodepth=8
numjobs=8
openfiles=8
bs=4k
rw=randwrite
[Rand-Read-4K-Q32T1]
iodepth=32
numjobs=1
bs=4k
rw=randread
[Rand-Write-4K-Q32T1]
iodepth=32
numjobs=1
bs=4k
rw=randwrite
[Rand-Read-4K-Q1T1]
iodepth=1
numjobs=1
bs=4k
rw=randread
[Rand-Write-4K-Q1T1]
iodepth=1
numjobs=1
bs=4k
rw=randwrite
unlink=1
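To use the preset above, save it to a file and point fio at it. A minimal sketch, where the file name `diskmark.fio` is an assumption (any name works); note the preset defaults to the windowsaio engine, so on linux swap the commented `ioengine` lines in `[global]` first.

```shell
# run the whole preset; the stonewall barriers make fio execute
# the jobs one after another ("diskmark.fio" is an assumed name)
run_diskmark() {
    fio "${1:-diskmark.fio}"
}
# run_diskmark                      # preset in the current directory
# run_diskmark /path/to/preset.fio  # or point it somewhere else
```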

@ -0,0 +1,25 @@
# Fio or KDiskMark
###### guide-by-example
# Purpose & Overview
Benchmark disks and NAS performance.
* [Github](https://github.com/axboe/fio)
* [Official documentation](https://fio.readthedocs.io/en/latest/index.html)
* [KDiskMark](https://github.com/JonMagon/KDiskMark) - a GUI frontend built on fio
Fio is a command line tool. Extremely rich in every aspect.<br>
This repo aims to have just one simple preset that tells the most about a disk.
# Install on
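The package is simply named `fio` in the usual repos. A sketch that just prints the right command for the distro at hand:

```shell
# pick the install command for this machine; package name "fio"
# is the same in the Arch, Debian/Ubuntu and Fedora repos
fio_install_cmd() {
    if   command -v pacman >/dev/null 2>&1; then echo "sudo pacman -S fio"
    elif command -v apt    >/dev/null 2>&1; then echo "sudo apt install fio"
    elif command -v dnf    >/dev/null 2>&1; then echo "sudo dnf install fio"
    else echo "build from source: https://github.com/axboe/fio"
    fi
}
fio_install_cmd
```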
# Useful links
https://www.youtube.com/watch?v=mBhXUYh-76o
https://arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/
https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LX7xSAG

@ -0,0 +1,317 @@
#!/bin/bash
#############################################################################################################
#Changelog #
#############################################################################################################
#Added prompts for user input to configure script instead of relying on hardcoded settings.
#Added a lot of error checking
#The script is now optionally compatible with dash (this is the reason for there being a sed command at the end of every echo -e instance, dash liked to print the -e part when I was testing.)
#Vastly improved compatibility across distributions
#Special thanks to everyone who contributed here: https://gist.github.com/i3v/99f8ef6c757a5b8e9046b8a47f3a9d5b
#Also extra special thanks to BAGELreflex on github for this: https://gist.github.com/BAGELreflex/c04e7a25d64e989cbd9376a9134b8f6d it made a huge difference to this improved version.
#Added optimizations for 512k and 4k tests (they now use QSIZE instead of SIZE, it makes these tests a lot faster and doesn't affect accuracy much, assuming SIZE is appropriately configured for your drive.)
#Added option to not use legacy (512k and Q1T1 Seq R/W tests) to save time when testing.
#Ensured the script can now run fine without df installed. Some information may be missing but worst case scenario it'll just look ugly.
#Added a save results option that imitates the saved results from crystaldiskmark; the formatting is a little wonky but it checks out. Great for comparing results between operating systems.
#Reconfigured results to use MegaBytes instead of MebiBytes (This is what crystaldiskmark uses so results should now be marginally closer).
#Sequential read/write results (512k, q1t1 seq and q32t1 seq) will now appear as soon as they're finished and can be viewed while the 4k tests are running.
#Note: The legacy test option defaults to no if nothing is selected, the result saving defaults to yes. It's easy to change if you don't like this.
#Observation: When testing, I observed that the read results seemed mostly consistent with the results I got from crystaldiskmark on windows, however there's something off with the write results.
#Sorry for the messy code :)
#############################################################################################################
#User input requests and error checking #
#############################################################################################################
if [ ! -f /usr/bin/fio ]; then #Dependency check
echo -e "\033[1;31mError: This script requires fio to run, please make sure it is installed." | sed 's:-e::g'
exit 1
fi
if [ -f /usr/bin/df ]; then #Dependency check
nodf=0
else
nodf=1
echo -e "\033[1;31mWarning: df is not installed, this script relies on df to display certain information, some information may be missing." | sed 's:-e::g'
fi
if [ "$(ps -ocmd= | tail -1)" = "bash" ]; then
echo "What drive do you want to test? (Default: $HOME on /dev/$(df $HOME | grep /dev | cut -d/ -f3 | cut -d" " -f1) )"
echo -e "\033[0;33mOnly directory paths (e.g. /home/user/) are valid targets.\033[0;00m"
read -e TARGET
else #no autocomplete available for dash.
echo "What drive do you want to test? (Default: $HOME on /dev/$(df $HOME | grep /dev | cut -d/ -f3 | cut -d" " -f1) )"
echo -e "\033[0;33mOnly directory paths (e.g. /home/user/) are valid targets. Use bash if you want autocomplete.\033[0;00m" | sed 's:-e::g'
read TARGET
fi
echo "
How many times to run the test? (Default: 5)"
read LOOPS
echo "How large should each test be in MiB? (Default: 1024)"
echo -e "\033[0;33mOnly multiples of 32 are permitted!\033[0;00m" | sed 's:-e::g'
read SIZE
echo "Do you want to write only zeroes to your test files to imitate dd benchmarks? (Default: 0)"
echo -e "\033[0;33mEnabling this setting may drastically alter your results, not recommended unless you know what you're doing.\033[0;00m" | sed 's:-e::g'
read WRITEZERO
echo "Would you like to include legacy tests (512kb & Q1T1 Sequential Read/Write)? [Y/N]"
read LEGACY
if [ -z "$TARGET" ]; then
TARGET=$HOME
elif [ -d "$TARGET" ]; then
:
else
echo -e "\033[1;31mError: $TARGET is not a valid path." | sed 's:-e::g'
exit 1
fi
if [ -z "$LOOPS" ]; then
LOOPS=5
elif [ "$LOOPS" -eq "$LOOPS" ] 2>/dev/null; then
:
else
echo -e "\033[1;31mError: $LOOPS is not a valid number, please use a number to declare how many times to loop tests." | sed 's:-e::g'
exit 1
fi
if [ -z "$SIZE" ]; then
SIZE=1024
elif [ "$SIZE" -eq "$SIZE" ] 2>/dev/null && [ $((SIZE % 32)) -eq 0 ]; then
:
else
echo -e "\033[1;31mError: The test size must be an integer set to a multiple of 32. Please write a multiple of 32 for the size setting (Optimal settings: 1024, 2048, 4096, 8192, 16384)." | sed 's:-e::g'
exit 1
fi
if [ -z "$WRITEZERO" ]; then
WRITEZERO=0
elif [ "$WRITEZERO" -eq 1 ] 2>/dev/null || [ "$WRITEZERO" -eq 0 ] 2>/dev/null; then
:
else
echo -e "\033[1;31mError: WRITEZERO only accepts 0 or 1, $WRITEZERO is not a valid argument." | sed 's:-e::g'
exit 1
fi
if [ "$LEGACY" = "Y" ] || [ "$LEGACY" = "y" ]; then
:
else
LEGACY=no
fi
if [ "$nodf" = 1 ]; then
echo "
Settings are as follows:
Target Directory: $TARGET
Size Of Test: $SIZE MiB
Number Of Loops: $LOOPS
Write Zeroes: $WRITEZERO
Legacy Tests: $LEGACY
"
echo "Are you sure these are correct? [Y/N]"
read REPLY
if [ "$REPLY" = "Y" ] || [ "$REPLY" = "y" ]; then
REPLY=""
else
echo ""
exit
fi
else
DRIVE=$(df $TARGET | grep /dev | cut -d/ -f3 | cut -d" " -f1 | rev | cut -c 2- | rev)
if [ "$(echo $DRIVE | cut -c -4)" = "nvme" ]; then #NVMe Compatibility, partitions are named like nvme0n1p1 so strip one more character
DRIVE=$(df $TARGET | grep /dev | cut -d/ -f3 | cut -d" " -f1 | rev | cut -c 3- | rev)
fi
DRIVEMODEL=$(cat /sys/block/$DRIVE/device/model | sed 's/ *$//g')
DRIVESIZE=$(($(cat /sys/block/$DRIVE/size)*512/1024/1024/1024))GB
DRIVEPERCENT=$(df -h $TARGET | cut -d ' ' -f11 | tail -n 1)
DRIVEUSED=$(df -h $TARGET | cut -d ' ' -f6 | tail -n 1)
echo "
Settings are as follows:
Target Directory: $TARGET
Target Drive: $DRIVE
Size Of Test: $SIZE MiB
Number Of Loops: $LOOPS
Write Zeroes: $WRITEZERO
Legacy Tests: $LEGACY
"
echo "Are you sure these are correct? [Y/N]"
read REPLY
if [ "$REPLY" = "Y" ] || [ "$REPLY" = "y" ]; then
REPLY=""
else
echo ""
exit
fi
fi
#############################################################################################################
#Setting the last Variables And Running Sequential R/W Benchmarks #
#############################################################################################################
QSIZE=$(($SIZE / 32)) #Size of Q32Seq tests
SIZE=${SIZE}m
QSIZE=${QSIZE}m
if [ "$nodf" = 1 ]; then
echo "
Running Benchmark, please wait...
"
else
echo "
Running Benchmark on: /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE), please wait...
"
fi
if [ "$LEGACY" = "Y" ] || [ "$LEGACY" = "y" ]; then
fio --loops=$LOOPS --size=$SIZE --filename="$TARGET/.fiomark.tmp" --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json \
--name=Bufread --loops=1 --bs=$SIZE --iodepth=1 --numjobs=1 --rw=readwrite \
--name=Seqread --bs=$SIZE --iodepth=1 --numjobs=1 --rw=read \
--name=Seqwrite --bs=$SIZE --iodepth=1 --numjobs=1 --rw=write \
--name=SeqQ32T1read --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=read \
--name=SeqQ32T1write --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=write \
> "$TARGET/.fiomark.txt"
fio --loops=$LOOPS --size=$QSIZE --filename="$TARGET/.fiomark-512k.tmp" --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json \
--name=512kread --bs=512k --iodepth=1 --numjobs=1 --rw=read \
--name=512kwrite --bs=512k --iodepth=1 --numjobs=1 --rw=write \
> "$TARGET/.fiomark-512k.txt"
SEQR="$(($(cat "$TARGET/.fiomark.txt" | grep -A15 '"name" : "Seqread"' | grep bw | grep -v '_' | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark.txt" | grep -A15 '"name" : "Seqread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
SEQW="$(($(cat "$TARGET/.fiomark.txt" | grep -A80 '"name" : "Seqwrite"' | grep bw | grep -v '_' | sed 2\!d | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark.txt" | grep -A80 '"name" : "Seqwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
F12KR="$(($(cat "$TARGET/.fiomark-512k.txt" | grep -A15 '"name" : "512kread"' | grep bw | grep -v '_' | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark-512k.txt" | grep -A15 '"name" : "512kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
F12KW="$(($(cat "$TARGET/.fiomark-512k.txt" | grep -A80 '"name" : "512kwrite"' | grep bw | grep -v '_' | sed 2\!d | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark-512k.txt" | grep -A80 '"name" : "512kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
SEQ32R="$(($(cat "$TARGET/.fiomark.txt" | grep -A15 '"name" : "SeqQ32T1read"' | grep bw | grep -v '_' | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark.txt" | grep -A15 '"name" : "SeqQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
SEQ32W="$(($(cat "$TARGET/.fiomark.txt" | grep -A80 '"name" : "SeqQ32T1write"' | grep bw | grep -v '_' | sed 2\!d | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark.txt" | grep -A80 '"name" : "SeqQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
echo -e "
Results:
\033[0;33m
Sequential Read: $SEQR
Sequential Write: $SEQW
\033[0;32m
512KB Read: $F12KR
512KB Write: $F12KW
\033[1;36m
Sequential Q32T1 Read: $SEQ32R
Sequential Q32T1 Write: $SEQ32W" | sed 's:-e::g'
else
fio --loops=$LOOPS --size=$SIZE --filename="$TARGET/.fiomark.tmp" --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json \
--name=Bufread --loops=1 --bs=$SIZE --iodepth=1 --numjobs=1 --rw=readwrite \
--name=SeqQ32T1read --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=read \
--name=SeqQ32T1write --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=write \
> "$TARGET/.fiomark.txt"
SEQ32R="$(($(cat "$TARGET/.fiomark.txt" | grep -A15 '"name" : "SeqQ32T1read"' | grep bw | grep -v '_' | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark.txt" | grep -A15 '"name" : "SeqQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
SEQ32W="$(($(cat "$TARGET/.fiomark.txt" | grep -A80 '"name" : "SeqQ32T1write"' | grep bw | grep -v '_' | sed 2\!d | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark.txt" | grep -A80 '"name" : "SeqQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
echo -e "
Results:
\033[1;36m
Sequential Q32T1 Read: $SEQ32R
Sequential Q32T1 Write: $SEQ32W" | sed 's:-e::g'
fi
#############################################################################################################
#4KiB Tests & Results #
#############################################################################################################
fio --loops=$LOOPS --size=$QSIZE --filename="$TARGET/.fiomark-4k.tmp" --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json \
--name=4kread --bs=4k --iodepth=1 --numjobs=1 --rw=randread \
--name=4kwrite --bs=4k --iodepth=1 --numjobs=1 --rw=randwrite \
--name=4kQ32T1read --bs=4k --iodepth=32 --numjobs=1 --rw=randread \
--name=4kQ32T1write --bs=4k --iodepth=32 --numjobs=1 --rw=randwrite \
--name=4kQ8T8read --bs=4k --iodepth=8 --numjobs=8 --rw=randread \
--name=4kQ8T8write --bs=4k --iodepth=8 --numjobs=8 --rw=randwrite \
> "$TARGET/.fiomark-4k.txt"
FKR="$(($(cat "$TARGET/.fiomark-4k.txt" | grep -A15 '"name" : "4kread"' | grep bw | grep -v '_' | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark-4k.txt" | grep -A15 '"name" : "4kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
FKW="$(($(cat "$TARGET/.fiomark-4k.txt" | grep -A80 '"name" : "4kwrite"' | grep bw | grep -v '_' | sed 2\!d | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark-4k.txt" | grep -A80 '"name" : "4kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
FK32R="$(($(cat "$TARGET/.fiomark-4k.txt" | grep -A15 '"name" : "4kQ32T1read"' | grep bw | grep -v '_' | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark-4k.txt" | grep -A15 '"name" : "4kQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
FK32W="$(($(cat "$TARGET/.fiomark-4k.txt" | grep -A80 '"name" : "4kQ32T1write"' | grep bw | grep -v '_' | sed 2\!d | cut -d: -f2 | sed s:,::g)/1000))MB/s [ $(cat "$TARGET/.fiomark-4k.txt" | grep -A80 '"name" : "4kQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g') IOPS]"
FK8R="$(($(cat "$TARGET/.fiomark-4k.txt" | grep -A15 '"name" : "4kQ8T8read"' | grep bw | grep -v '_' | sed 's/ "bw" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1000))MB/s [ $(cat "$TARGET/.fiomark-4k.txt" | grep -A15 '"name" : "4kQ8T8read"' | grep iops | sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1) IOPS]"
FK8W="$(($(cat "$TARGET/.fiomark-4k.txt" | grep -A80 '"name" : "4kQ8T8write"' | grep bw | sed 's/ "bw" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1000))MB/s [ $(cat "$TARGET/.fiomark-4k.txt" | grep -A80 '"name" : "4kQ8T8write"' | grep '"iops" '| sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1) IOPS]"
echo -e "\033[1;35m
4KB Q8T8 Read: $FK8R
4KB Q8T8 Write: $FK8W
\033[1;33m
4KB Q32T1 Read: $FK32R
4KB Q32T1 Write: $FK32W
\033[0;36m
4KB Read: $FKR
4KB Write: $FKW
\033[0m
" | sed 's:-e::g'
echo "Would you like to save these results? [Y/N]"
read REPLY
if [ "$REPLY" = "N" ] || [ "$REPLY" = "n" ]; then
REPLY=""
else
DRIVESIZE=$(df -h $TARGET | cut -d ' ' -f3 | tail -n 1)
echo "
Saving at $HOME/$DRIVE$(date +%F%I%M%S).txt
"
if [ "$LEGACY" = "Y" ] || [ "$LEGACY" = "y" ]; then
echo "-----------------------------------------------------------------------
Flexible I/O Tester - $(fio --version) (C) axboe
Fio Github : https://github.com/axboe/fio
Script Source : https://unix.stackexchange.com/a/480191/72554
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s
* KB = 1000 bytes, KiB = 1024 bytes
Legacy Seq Read (Q= 1,T= 1) : $SEQR
Legacy Seq Write (Q= 1,T= 1) : $SEQW
512KiB Seq Read (Q= 1,T= 1) : $F12KR
512KiB Seq Write (Q= 1,T= 1) : $F12KW
Sequential Read (Q= 32,T= 1) : $SEQ32R
Sequential Write (Q= 32,T= 1) : $SEQ32W
Random Read 4KiB (Q= 8,T= 8) : $FK8R
Random Write 4KiB (Q= 8,T= 8) : $FK8W
Random Read 4KiB (Q= 32,T= 1) : $FK32R
Random Write 4KiB (Q= 32,T= 1) : $FK32W
Random Read 4KiB (Q= 1,T= 1) : $FKR
Random Write 4KiB (Q= 1,T= 1) : $FKW
Test : $(echo $SIZE | rev | cut -c 2- | rev) MiB [$DRIVEMODEL, $DRIVE $DRIVEPERCENT ($(echo $DRIVEUSED | rev | cut -c 2- | rev)/$(echo $DRIVESIZE | rev | cut -c 2- | rev) GiB)] (x$LOOPS) [Interval=0 sec]
Date : $(date +%F | sed 's:-:/:g') $(date +%T)
OS : $(uname -srm)
" > "$HOME/$DRIVE$(date +%F%I%M%S).txt"
else
echo "-----------------------------------------------------------------------
Flexible I/O Tester - $(fio --version) (C) axboe
Fio Github : https://github.com/axboe/fio
Script Source : https://unix.stackexchange.com/a/480191/72554
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s
* KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : $SEQ32R
Sequential Write (Q= 32,T= 1) : $SEQ32W
Random Read 4KiB (Q= 8,T= 8) : $FK8R
Random Write 4KiB (Q= 8,T= 8) : $FK8W
Random Read 4KiB (Q= 32,T= 1) : $FK32R
Random Write 4KiB (Q= 32,T= 1) : $FK32W
Random Read 4KiB (Q= 1,T= 1) : $FKR
Random Write 4KiB (Q= 1,T= 1) : $FKW
Test : $(echo $SIZE | rev | cut -c 2- | rev) MiB [$DRIVEMODEL, $DRIVE $DRIVEPERCENT ($(echo $DRIVEUSED | rev | cut -c 2- | rev)/$(echo $DRIVESIZE | rev | cut -c 2- | rev) GiB)] (x$LOOPS) [Interval=0 sec]
Date : $(date +%F | sed 's:-:/:g') $(date +%T)
OS : $(uname -srm)
" > "$HOME/$DRIVE$(date +%F%I%M%S).txt"
fi
fi
rm "$TARGET/.fiomark.txt" "$TARGET/.fiomark-512k.txt" "$TARGET/.fiomark-4k.txt" 2>/dev/null
rm "$TARGET/.fiomark.tmp" "$TARGET/.fiomark-512k.tmp" "$TARGET/.fiomark-4k.tmp" 2>/dev/null
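A note on the arithmetic in the grep pipelines above: fio reports bandwidth in KiB/s, and the script integer-divides by 1000 and labels the result MB/s, which is a close approximation (exact MB/s would be KiB/s × 1024 / 1,000,000). A sketch of that conversion:

```shell
# fio reports bandwidth in KiB/s; the script approximates MB/s
# with the same integer division by 1000 as the $((.../1000)) above
kib_to_mb() {
    echo $(( $1 / 1000 ))
}
kib_to_mb 523456    # prints 523
```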

@ -0,0 +1,30 @@
[global]
bs=128K
iodepth=256
direct=1
ioengine=libaio
group_reporting
time_based
name=seq
log_avg_msec=1000
bwavgtime=1000
filename=/dev/nvme0n1 #WARNING: raw device, the write job below wipes its contents
#size=100G
[rd_qd_256_128k_1w]
stonewall
bs=128k
iodepth=256
numjobs=1
rw=read
runtime=60
write_bw_log=seq_read_bw.log
[wr_qd_256_128k_1w]
stonewall
bs=128k
iodepth=256
numjobs=1
rw=write
runtime=60
write_bw_log=seq_write_bw.log
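To run the job file above, a sketch; `nvme-seq.fio` as the file name is an assumption. The two jobs write per-second bandwidth logs which can optionally be graphed.

```shell
# run the sequential nvme job file above
# WARNING: the write job overwrites /dev/nvme0n1 directly
run_seq_bench() {
    fio "${1:-nvme-seq.fio}"
    # fio_generate_plots "seq-bench"  # optional; ships with fio, needs gnuplot
}
```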

@ -102,7 +102,7 @@ if planning serious use.
A script will be periodically executing cli version of kopia to connect to a repository,
execute backup, and disconnect.<br>
Systemd-timers are used to schedule execution of the script.
The repository is created on a network share, also mounted on boot using systemd.
### Install Kopia
@ -248,13 +248,13 @@ WantedBy=timers.target
### Troubleshooting
To see logs of last Kopia runs done by systemd
* `sudo journalctl -xru kopia-home-etc.service`
![journaclt_output](https://i.imgur.com/46XIFFO.png)
<details>
<summary><h3>Mounting network storage using systemd</h3></summary>

@ -1,169 +0,0 @@
# Ofelia in docker
###### guide-by-example
# Purpose
Scheduling jobs that will be run in docker containers.
* [Github](https://github.com/mcuadros/ofelia)
* [DockerHub image used](https://hub.docker.com/r/mcuadros/ofelia/)
Ofelia is a simple scheduler for docker containers, replacing cron.</br>
Written in Go, its binary runs in the background as a daemon,
executing scheduled tasks as set in a simple config file.
# Files and directory structure
```
/home/
└── ~/
└── docker/
└── ofelia/
├── docker-compose.yml
└── config.ini
```
* `docker-compose.yml` - a docker compose file, telling docker how to run the container
* `config.ini` - ofelia's configuration file bind mounted in to the container
All files need to be provided.
# docker-compose
`docker-compose.yml`
```yml
version: "3"
services:
ofelia:
image: mcuadros/ofelia
container_name: ofelia
hostname: ofelia
restart: unless-stopped
volumes:
- ./config.ini:/etc/ofelia/config.ini:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
```
# Config
config.ini contains the scheduled jobs.</br>
There are several [types](https://github.com/mcuadros/ofelia/blob/master/docs/jobs.md),
but covered here is just the most common one: *job-exec*,</br>
which executes a command inside an already running container.
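A *job-exec* entry effectively amounts to ofelia running a plain `docker exec` on schedule; a sketch using the example container and command from the config below:

```shell
# roughly what ofelia does for a job-exec entry;
# container name and command are the config.ini examples
run_job_exec() {
    docker exec phpipam_phpipam-web_1 touch /tmp/bla
}
```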
`config.ini`
```ini
[job-exec "test1"]
schedule = @every 5m
container = phpipam_phpipam-web_1
command = touch /tmp/bla
[job-exec "test2"]
schedule = @every 1h
container = phpipam_phpipam-mariadb_1
command = touch /tmp/bla
[job-exec "test3"]
schedule = @every 1h30m10s
container = phpipam_phpipam-cron_1
command = touch /tmp/bla
```
# Logging
![logo](https://i.imgur.com/5SgWE0I.png)
The container's log shows which jobs are active and when they were executed.
But Ofelia also has several built-in
[logging options.](https://github.com/mcuadros/ofelia#logging)
#### email
Unfortunately, the `[global]` section, where email settings would be set once
and then just enabled per job, does not seem to work.
So either the settings go in to `[global]` and apply to every single job,
or every job that requires email logging has to have all the email settings
written out in full.
`config.ini`
```ini
[job-exec "test1"]
schedule = @every 5m
container = phpipam_phpipam-web_1
command = touch /tmp/zla
smtp-user = apikey
smtp-password = SG.***************************
smtp-host = smtp.sendgrid.net
smtp-port = 465
mail-only-on-error = false
email-to = whoever@example.com
email-from = test@example.com
[job-exec "test2"]
schedule = @every 2m
container = phpipam_phpipam-mariadb_1
command = touch /tmp/zla
```
#### save-folder
Defining a `save-folder` path for a job saves the result
of every execution in to a file.</br>
The path used should be bind mounted from the host,
for persistence of the data and easier access.
`docker-compose.yml`
```yml
version: "3"
services:
ofelia:
image: mcuadros/ofelia
container_name: ofelia
hostname: ofelia
restart: unless-stopped
volumes:
- ./config.ini:/etc/ofelia/config.ini:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./logs:/tmp/logs
```
`config.ini`
```ini
[job-exec "test1"]
schedule = @every 5m
container = nginx
command = touch /tmp/example
save-folder = /tmp/logs
```
# Update
[Watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower)
updates the image automatically.
Manual image update:
- `docker-compose pull`</br>
- `docker-compose up -d`</br>
- `docker image prune`
# Backup and restore
#### Backup
Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
that makes daily snapshot of the entire directory.
#### Restore
* down the ofelia container `docker-compose down`</br>
* delete the entire ofelia directory</br>
* from the backup copy back the ofelia directory</br>
* start the container `docker-compose up -d`

@ -0,0 +1,145 @@
# Port Forwarding
###### guide-by-example
You want to selfhost stuff.<br>
You know little and want to start somewhere, FAST!
# Requirements
* A **spare PC** that will be the server.<br>
Can be a **virtual machine**... VirtualBox, Hyper-V.
* **Google**.<br>
If the guide says do X, and the steps seem insufficient,
you google that shit and add the word **youtube**.
# Install a linux on the server
![endeavouros_logo](https://i.imgur.com/DSMmaj8.png)
[Some video.](https://www.youtube.com/watch?v=SyBuNZxzy_Y)
* **download linux iso**. For noobs I picked [EndeavourOS \(2GB\)](https://github.com/endeavouros-team/ISO/releases/download/1-EndeavourOS-ISO-releases-archive/EndeavourOS_Cassini_Nova-03-2023_R1.iso)
* why that linux and not xxx? Under the hood it's Arch Linux.
* **make bootable usb** from the iso, recommended tool is [ventoy](https://www.ventoy.net/en/doc_start.html)
* download; run; select usb; click install; exit; copy iso on to it
* **boot from the usb**, on newer machines you may need to disable secure boot in the bios
* **click through the installation**
* pick online installer when offered
* during install, there can be a step called `Desktop` - pick `No Desktop`<br>
or whatever, does not really matter
* when picking disk layout choose wipe everything
* username, let's say you pick `noob`
* done
# Basic setup of the linux server
![ssh](https://i.imgur.com/ElFrBog.png)
**SSH** - a tiny application that allows you to execute commands
from your comfy windows PC on the damn server
* log in to the server and be in terminal
* ssh is installed by default, but disabled
* to check status - `systemctl status sshd`
* to **enable it** `sudo systemctl enable --now sshd`
* `ip a` or `ip r` - show [somewhere in there](https://www.cyberciti.biz/faq/linux-ip-command-examples-usage-syntax/#3)
what IP address the server got assigned<br>
let's say you got `192.168.1.8`,
nope, I am not explaining IP addresses
* done
*arrow up key in terminal will cycle through old commands in history*
# Remote connect to the server
![mobasterm_logo](https://i.imgur.com/aBL85Tr.png)
* **install** [mobaXterm](https://mobaxterm.mobatek.net/) on your windows machine
* use it to **connect** to the server using its ip address and username
* [have a pic](https://i.imgur.com/lhRGt1p.png)<br>
* done
# Install docker
![docker_logo](https://i.imgur.com/6SS5lFj.png)
**Docker** - a thing that makes hosting super easy, people prepared *recipes*,
you copy paste them, maybe edit a bit, run them
* **install docker-compose** - `sudo pacman -S docker-compose`
* **enable docker service** - `sudo systemctl enable --now docker`
* add your user to docker group so you don't need to sudo all the time<br>
`sudo gpasswd -a noob docker`
* log out, log back in
* done
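A quick sanity check that the daemon runs and the group membership took effect; `hello-world` is docker's own test image:

```shell
# talks to the daemon without sudo, then runs the test image
verify_docker() {
    docker version >/dev/null && docker run --rm hello-world
}
```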
# Using docker
Well, it's time to learn how to create and edit files and copy paste shit
in to them, IN LINUX!<br>
Honestly could be annoying as fuck at first, but mobaXterm should make it easier
with the right mouse click paste.<br>
Nano editor is relatively simple and everywhere so that will be used.
* be in your home directory, the command `cd` will always get you there
* create directory `mkdir docker`
* go in to it `cd docker`
* create directory `mkdir nginx`
* go in to it `cd nginx`
* Oh look at you being all hacker in terminal, following simple directions
* create empty docker-compose.yml file `nano docker-compose.yml`
* paste in to it this *recipe*, spacing matters
```
services:
nginx:
image: nginx:latest
container_name: nginx
hostname: nginx
ports:
- "80:80"
```
* save using `ctrl+s`; exit `ctrl+x`
* run command `sudo docker compose up -d`<br>
will say the container started
* on your windows machine go to your browser<br>
in address bar put the ip of your server `192.168.1.8` bam<br>
![nging_welcome](https://i.imgur.com/Iv0B6bN.png)
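A couple of checks that can be run on the server itself; a sketch:

```shell
# is the container up, and does it answer locally
check_nginx() {
    docker ps --filter name=nginx             # should list the nginx container
    curl -s http://localhost | grep -i nginx  # should match the welcome page
}
```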
# understanding what you just did
* on the linux server a docker container is running, it's a webserver and it is
accessible to others.<br>
Most selfhosted stuff is just a webserver with some database.
* if this part is done that means that shit like hosting own netflix(jellyfin),
or google drive/calendar/photos(nextcloud), or own password manager(vaultwarden)
or own minecraft server(minecraft server) is just one `docker-compose.yml` away.
* you could almost abandon terminal at this point, just start googling portainer
and you can be doing this shit through a webpage. I don't use it, but I heard
it got good.
# understanding what you did not get done
* this shit is on your own local network, not accessible from the outside.
Can't call grandma and tell her to write `192.168.1.8` in to her browser
to see your awesome nginx welcome page running.
She tells you what a dumb fuck you are, you do not have a public IP and ports
forwarded.<br>
To get that working is a bit challenging, probably deserves its own page,
not really a speedrun but thorough steps, as shit goes sideways fast and people
can dick around for hours trying the wrong shit.
* everything here is just a basic setup that breaks easily,
the server got a dynamic IP, turn it off for a weekend and it might get
a different ip assigned next time it starts. The container is not set to start on boot,...
* you don't understand how this shit works, fixing not working stuff be hard,
but now you can start to consume all the guides and tutorials on
docker compose and try stuff...
## Links
* https://www.reddit.com/r/HomeNetworking/comments/i7ijiz/a_guide_to_port_forwarding/


@ -1,13 +0,0 @@
# Prometheus+Grafana in docker
###### guide-by-example
![logo](https://i.imgur.com/q41QfyI.png)
---
---
Moved [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/prometheus_grafana_loki)
---
---

@ -0,0 +1,329 @@
# Proxmox
###### guide-by-example
![logo](https://i.imgur.com/vFZQ8og.png)
# Purpose
Type 1 hypervisor hosting virtual machines, running straight on metal,
managed through web GUI.
Proxmox uses QEMU as the virtual machine emulator, with the KVM kernel module.
Written in Perl, JavaScript for the web GUI, and C for the filesystem.
![gui-pic](https://i.imgur.com/rO5oo0Y.png)
# Installation
Download iso, ventoy usb, boot it, click through.
# Basic settings
* post install scripts, main one includes option to disable subscription nagscreen<br>
[https://tteck.github.io/Proxmox/](https://tteck.github.io/Proxmox/)
* one way to go for single disk storage:
  * `lvremove /dev/pve/data`
  * `lvresize -l +100%FREE /dev/pve/root`
  * `resize2fs /dev/mapper/pve-root`
* afterwards Datacenter > Storage > local > Edit > Content<br>
enable everything
* remove LVM-thin remnant
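The three commands above as one sequence; a sketch, and it destroys the local-lvm thin pool, so only for a fresh install:

```shell
# fold local-lvm in to root on a fresh single-disk install
merge_local_lvm() {
    lvremove -y /dev/pve/data &&
    lvresize -l +100%FREE /dev/pve/root &&
    resize2fs /dev/mapper/pve-root
}
```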
### New Virtual Machine notes
* [qemu-guest-agent](https://pve.proxmox.com/wiki/Qemu-guest-agent)
- install, on archlinux it's a static service, so no enabling<br>
to test if it works, ssh into the hypervisor and `qm agent 100 ping`
### Interface
* Right top corner > user name
* Auto-refresh=60
* Settings, turn off everything - statistics,
recent only, welcome message, visual effects
### NTP time sync
* Host > Manage > System > Time & date > Edit NTP Settings
* Use Network Time Protocol (enable NTP client)
* Start and stop with host
* `pool.ntp.org`
* Host > Manage > Services > search for ntpd > Start
### Hostname and domain
* ssh in
* `esxcli system hostname set --host esxi-2023`
* if domain on network<br>
`esxcli system hostname set --domain example.local`
### Network
Should just work, but if there is more complex setup,
like if a VM serves as a firewall...<br>
Be sure you ssh in and try to `ping google.com` to see if network and DNS work.
To check and set the default gateway
* `esxcfg-route`
* `esxcfg-route 10.65.26.25`
To [change DNS server](https://blog.techygeekshome.info/2021/04/vmware-esxi-esxcli-commands-to-update-host-dns-servers/)
* `esxcli network ip dns server list`
* `esxcli network ip dns server add --server=8.8.8.8`
* `esxcli network ip dns server remove --server=1.1.1.1`
* `esxcli network ip dns server list`
To disable ipv6
* `esxcli network ip set --ipv6-enabled=false`
# Logs
[Documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-832A2618-6B11-4A28-9672-93296DA931D0.html)
Host > Monitor > Logs
The ones worth knowing about
* shell.log - history of shell commands when SSH'd in
* syslog.log - general info of what's happening on the system
* vmkernel.log - activities of esxi and VMs
Will update with some actual use, when I use logs.
Logs from systems in VMs are in >Virtual Machines > Name-of-VM > Monitor > Logs
![logs-pic](https://i.imgur.com/fEz3Igv.png)
# Backups using ghettoVCB
* [github](https://github.com/lamw/ghettoVCB)
* [documentation](https://communities.vmware.com/t5/VI-VMware-ESX-3-5-Documents/ghettoVCB-sh-Free-alternative-for-backing-up-VM-s-for-ESX-i-3-5/ta-p/2773570)
The script makes a snapshot of a VM, copies the "old" vmdk and other files
to a backup location, then deletes the snapshot.<br>
With this approach every backup is a full backup, which takes up a lot of space.
Some form of deduplication might be a solution.

VMs that have any existing snapshot won't get backed up.
Files that are backed up:
* vmdk - virtual disk file, every virtual disk has a separate file.
  In the webgui datastore browser only one vmdk file is seen per disk,
  but on the filesystem there's `blabla.vmdk` and `blabla-flat.vmdk`.
  The `flat` one is where the data actually is, the other one is a descriptor
  file.
* nvram - bios settings of a VM
* vmx - virtual machine settings, can be edited
### Backup storage locations
* Local disk datastore
* NFS share<br>
  For an NFS share on TrueNAS Scale set:
    * Maproot User -> root
    * Maproot Group -> nogroup
Note the exact path from webgui of your datastore for backups.<br>
Looks like this `/vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090`
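If the backup target is an NFS share, mounting it as a datastore can also be done from the shell; the server IP and share path below are example values:

```shell
# mount an NFS share from the NAS as a datastore named Backups
esxcli storage nfs add --host=10.0.40.5 --share=/mnt/tank/esxi_backups --volume-name=Backups
# list mounted NFS datastores to confirm
esxcli storage nfs list
```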
### Install
* [Download](https://github.com/lamw/ghettoVCB/archive/refs/heads/master.zip)
the repo files on a PC from which you will upload them to esxi
* create a directory on a datastore where the script and configs will reside<br>
`/vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/ghetto_script`
* upload the files, there should be 6 of them; the `build` directory and `readme.md` can be skipped
* ssh in to esxi
* cd in to the datastore ghetto directory
* make all the shell files executable<br>
`chmod +x ./*.sh`
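The install steps above, done in one ssh session (datastore path as in the example):

```shell
# directory on the datastore where the script and configs will live
DS=/vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090
mkdir -p $DS/ghetto_script
# upload the repo files here via the webgui datastore browser or scp, then:
cd $DS/ghetto_script
chmod +x ./*.sh
```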
### Config and preparation
You need to know the basics of editing files with the ancient `vi`.
* cd in to the datastore ghetto directory<br>
  `cp ./ghettoVCB.conf ./ghetto_1.conf`
* Edit only this copy, for starters setting where backups are copied<br>
`vi ./ghetto_1.conf`<br>
`VM_BACKUP_VOLUME=/vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/Backups`
* Create a file that will contain the list of VMs to back up<br>
`touch ./vms_to_backup_list`<br>
`vi ./vms_to_backup_list`<br>
```
OPNsense
Arch-Docker-Host
```
* Create a shell script that runs the ghetto script with this config for the listed VMs<br>
`touch ./ghetto_run.sh`<br>
`vi ./ghetto_run.sh`<br>
```
#!/bin/sh
GHETTO_DIR=/vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/ghetto_script
$GHETTO_DIR/ghettoVCB.sh \
-g $GHETTO_DIR/ghetto_1.conf \
-f $GHETTO_DIR/vms_to_backup_list \
&> /dev/null
```
Make the script executable<br>
`chmod +x ./ghetto_run.sh`
* For my use case, where the TrueNAS VM can't be snapshotted while running
  because of a passed-through PCIe HBA card, there needs to be another config
* Make new config copy<br>
`cp ./ghetto_1.conf ./ghetto_2.conf`
* Edit the config, setting it to shut down VMs before backup.<br>
`vi ./ghetto_2.conf`<br>
`POWER_VM_DOWN_BEFORE_BACKUP=1`
* Edit the run script, adding another execution for the specific VM using ghetto_2.conf<br>
`vi ./ghetto_run.sh`<br>
```
#!/bin/sh
GHETTO_DIR=/vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/ghetto_script
$GHETTO_DIR/ghettoVCB.sh \
-g $GHETTO_DIR/ghetto_1.conf \
-f $GHETTO_DIR/vms_to_backup_list \
&> /dev/null
$GHETTO_DIR/ghettoVCB.sh \
-g $GHETTO_DIR/ghetto_2.conf \
-m TrueNAS_scale \
&> /dev/null
```
To execute it manually one time:
* `./ghetto_run.sh`
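Before the first real run it can be worth doing a dry run; ghettoVCB's documented `-d dryrun` debug level only prints what would happen without backing anything up:

```shell
# dry run - validates the config and VM list without touching any VM
./ghettoVCB.sh -g ./ghetto_1.conf -f ./vms_to_backup_list -d dryrun
```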
### Scheduled runs
See "Cronjob FAQ" in
[documentation](https://communities.vmware.com/t5/VI-VMware-ESX-3-5-Documents/ghettoVCB-sh-Free-alternative-for-backing-up-VM-s-for-ESX-i-3-5/ta-p/2773570#Cronjob-FAQ:)
To execute it periodically, cron is used. But there's an issue of the cronjob
being lost on esxi restart, which requires a few extra steps to solve.
* Make a backup of root's crontab<br>
`cp /var/spool/cron/crontabs/root /vmfs/volumes/datastore1/ghetto_script/root_cron.backup`
* Edit root's crontab to execute the run script at 4:00,<br>
  adding the following line at the end in [cron format](https://crontab.guru/)<br>
`vi /var/spool/cron/crontabs/root`
```
0 4 * * * /vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/ghetto_script/ghetto_run.sh
```
To save a read-only file in vi use `:wq!`
* restart cron service<br>
`kill $(cat /var/run/crond.pid)`<br>
`crond`
To make the cronjob permanent
* Make a backup of the local.sh file<br>
`cp /etc/rc.local.d/local.sh /vmfs/volumes/datastore1/ghetto_script/local.sh.backup`
* Edit the `/etc/rc.local.d/local.sh` file, adding the following lines at the end,
  but before the `exit 0` line.
  Replace the part in quotes in the echo line with your cronjob line.<br>
`vi /etc/rc.local.d/local.sh`<br>
```
/bin/kill $(cat /var/run/crond.pid) > /dev/null 2>&1
/bin/echo "0 4 * * * /vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/ghetto_script/ghetto_run.sh" >> /var/spool/cron/crontabs/root
/bin/crond
```
The ESXi host must have secure boot disabled for local.sh to execute.
* Run the esxi config backup for the change to be saved<br>
`/sbin/auto-backup.sh`
* Restart host, check if the cronjob is still there and if cron is running,
and check if the date is correct<br>
`vi /var/spool/cron/crontabs/root`<br>
`ps | grep crond | grep -v grep`<br>
`date`
Logs about backups are in `/tmp`
### Restore from backup
[Documentation](https://communities.vmware.com/t5/VI-VMware-ESX-3-5-Documents/Ghetto-Tech-Preview-ghettoVCB-restore-sh-Restoring-VM-s-backed/ta-p/2792996)
* In the webgui create the full path where the VM will be restored
* The restore-config template file is in the ghetto_script directory on the datastore,<br>
  named `ghettoVCB-restore_vm_restore_configuration_template`.<br>
  Make a copy of it<br>
  `cp ./ghettoVCB-restore_vm_restore_configuration_template ./vms_to_restore_list`<br>
* Edit this file, adding a new line in which, separated by `;`, are:
* path to the backup, the directory has date in name
* path where to restore this backup
* disk type - 1=thick | 2=2gbsparse | 3=thin | 4=eagerzeroedthick<br>
* optional - new name of the VM<br>
`vi ./vms_to_restore_list`
```
"/vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/Backups/OPNsense/OPNsense-2023-04-16_04-00-00;/vmfs/volumes/6378107d-b71bee00-873d-b42e99f40944/OPNsense_restored;3;OPNsense-restored"
```
* Execute the restore script with the config given as a parameter.<br>
`./ghettoVCB-restore.sh -c ./vms_to_restore_list`
* Register the restored VM.<br>
If it's in the same location as the original was, it should just go through.
If the location is different, then esxi asks if it was moved or copied.
* Copied - You are planning to use both VMs at the same time,
selecting this option generates new UUID for the VM, new MAC address,
maybe some other hardware identifiers as well.
* Moved - All old settings are kept, for restoring backups this is usually
the correct choice.
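Registering the restored VM can also be done from the shell instead of the webgui; the vmx path below is an example matching the restore above:

```shell
# register a VM from its vmx file; prints the new VM id on success
vim-cmd solo/registervm /vmfs/volumes/6378107d-b71bee00-873d-b42e99f40944/OPNsense_restored/OPNsense.vmx
```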
# Switching from Thick to Thin disks
A minor complication is that a vmdk is actually two files.<br>
A small plain `.vmdk` that holds some info, and the `-flat.vmdk` with the actual
gigabytes of the disk's data. The webgui hides this fact.
* have backups
* shut down the VM
* unregister the VM
* ssh in
* navigate to where its vmdk files are in the datastore<br>
`cd /vmfs/volumes/6187f7e1-c584077c-d7f6-3c4937073090/linux/`
* execute the command that converts the vmdk<br>
`vmkfstools -i "./linux.vmdk" -d thin "./linux-thin.vmdk"`
* punch zeroes in the image file, so that blocks containing only zeroes are deallocated.<br>
`vmkfstools --punchzero "./linux-thin.vmdk"`
* remove or move both original files<br>
`rm linux.vmdk`<br>
`rm linux-flat.vmdk`
* In the webgui navigate to the datastore.
  Use the `move` command to rename the thin version to the original name.<br>
  This updates the values in `linux.vmdk` to point to the correct `flat.vmdk`
* register the VM back to esxi gui.
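If preferred, the webgui rename step can be done in the same ssh session; `vmkfstools -E` renames the descriptor and the flat file together and fixes the references between them:

```shell
# rename the thin copy back to the original name, descriptor and flat file in one go
vmkfstools -E "./linux-thin.vmdk" "./linux.vmdk"
```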
# Disk space reclamation
If you run VMs with thin disks, the idea is that they use only as much space
as is needed. But if you copy a 50GB file to a VM and then delete it, the vmdk
does not always seamlessly shrink by 50GB too.

Correctly functioning reclamation can save time and space for backups.
* Modern windows should just work, did just one test with win10.
* Linux machines need `fstrim` run, which marks blocks as empty.
* A Unix machine, like OPNsense based on FreeBSD, needed to be booted from an ISO
  so that the partition is not mounted, and then run<br>
  `fsck_ufs -Ey /dev/da0p3`<br>
  Afterwards it needed one more run of `vmkfstools --punchzero "./OPNsense.vmdk"`<br>
  And it still uses roughly twice as much space as it should.
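For the Linux case, a sketch of what to run inside the guest; whether the trim actually reaches the vmdk depends on the virtual hardware version and the disk being thin:

```shell
# inside the Linux guest - discard unused blocks on all mounted filesystems
sudo fstrim -av
# or enable the periodic systemd timer so it happens weekly
sudo systemctl enable --now fstrim.timer
```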
# Links
* https://www.altaro.com/vmware/ghettovcb-backup-vms/
* https://www.youtube.com/watch?v=ySMitWnNxp4
* https://forums.unraid.net/topic/30507-guide-scheduled-backup-your-esxi-vms-to-unraid-with-ghettovcb/
* https://blog.kingj.net/2016/07/03/how-to/backing-up-vmware-esxi-vms-with-ghettovcb/
* https://sudonull.com/post/95754-Backing-up-ESXi-virtual-machines-with-ghettoVCB-scripts#esxi-3