Yet Another Benchmark Script (YABS) - Linux Benchmarking Script using fio, iperf, & Geekbench

You could add a second IP-check site as a fallback via bash's || operator, like this:

IPV4_CHECK=$(curl -s -4 -m 4 icanhazip.com 2> /dev/null || curl -s -4 -m 4 ifconfig.co 2> /dev/null)

That way, if the first curl request fails, it will fall back to the second curl request.

But since you're not actually using the IP data it returns, and you're just checking whether the IPV4_CHECK variable is non-blank, the domain it checks could really be anything, even Google or Wikipedia, etc.
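
To illustrate, a minimal sketch of that idea (the probe domains are arbitrary placeholders; only the non-empty check matters, not the IP returned):

# Try the first lookup host, fall back to the second if it fails.
IPV4_CHECK=$(curl -s -4 -m 4 icanhazip.com 2> /dev/null || curl -s -4 -m 4 ifconfig.co 2> /dev/null)
# Only the fact that something came back matters here.
if [[ -n "$IPV4_CHECK" ]]; then
    echo "IPv4 connectivity detected"
fi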

2 Likes

I like that! Will probably implement something along those lines, thanks!

Switched up the IPv4/v6 check a bit. Let me know if you hit that error again. Thanks for the bug report!

1 Like

:+1: Will do. The check worked later on the same node I tried last, so dunno what caused the glitch … :slight_smile:
Thanks for a great script/tool! :smiley:

1 Like

I’m curious why the network speed test fails from @seriesn’s network

but succeeds from other providers.

Try disabling the firewall.
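
For example, one of the following, depending on the distro (generic commands just to rule the firewall out temporarily, nothing YABS-specific; re-enable afterwards):

# Debian/Ubuntu hosts using ufw
sudo ufw disable
# RHEL/CentOS hosts using firewalld
sudo systemctl stop firewalld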

3 Likes

LOL. Sorry. It’s working now.

Insane disk speed BTW. How? Or is it an error from YABS?

8 core in Miami:

4 core in Los Angeles:

3 Likes

Magic

5 Likes

My guess is high-end enterprise NVMe SSDs (Intel Optane, Samsung PM1735, or equivalent) in RAID10 (or RAID0…) but it’s peculiar that the write speed is higher than the read speed! Usually reads are much faster.

1 Like

Hm, managed to still get the error (added set -x for debugging): :thinking:

Running GB4 benchmark test... *cue elevator music*+ curl -s https://cdn.geekbench.com/Geekbench-4.4.4-Linux.tar.gz
+ tar xz --strip-components=1 -C ./2020-12-18T04_27_23-05_00/geekbench_4
+ [[ x64 == *\x\8\6* ]]
+ test -f geekbench.license
++ ./2020-12-18T04_27_23-05_00/geekbench_4/geekbench4 --upload
++ grep https://browser
+ GEEKBENCH_TEST=
+ [[ 4 == *5* ]]
+ '[' -z '' ']'
+ [[ ! -z true ]]
+ echo -e '\r\033[0KGeekbench releases can only be downloaded over IPv4. FTP the Geekbench files and run manually.'
Geekbench releases can only be downloaded over IPv4. FTP the Geekbench files and run manually.
+ [[ '' == *True* ]]
+ echo -e

But I don’t get it, as it works if I just run the check against icanhazip or Google manually:

$ IPV4_CHECK=$((ping -4 -c 1 -W 4 ipv4.google.com >/dev/null 2>&1 && echo true) || curl -s -4 -m 4 icanhazip.com 2> /dev/null)
$ echo $IPV4_CHECK
true

Looks to me like it didn’t find the URL in the Geekbench output to get the uploaded results. Run the Geekbench 4 test manually and see if the output has that URL in it or shows some kind of error. Throw it all in a pastebin so I can take a look.
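
Something along these lines should show whether the URL is there (run from wherever you extracted the Geekbench files; the output file name is just a placeholder):

# Run the extracted Geekbench 4 binary manually, keep the full output,
# then check whether the results URL the script greps for is present.
./geekbench4 --upload | tee gb4_output.txt
grep 'https://browser' gb4_output.txt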

As soon as I downloaded and unpacked the GB tar.gz files manually in the same dir, yabs.sh ran without issue. (The server has been reinstalled in the meantime, but I can try to reproduce, or even give you shell access.) :smiley:

Did you change the script to use the GB executable that you extracted? If not, since it doesn’t detect whether GB is already installed/available locally, it will always download, extract, and use that copy. So if that’s the case, it leads me to believe that sometimes the POST to GB to upload the results after the test run completes is failing, and thus you’re not getting the URL for your results. Is it only GB4 that this happens on?

Also, looking at your debug output made me realize that the if [[ ! -z "$IPV4_CHECK" ]]; then test is actually wrong. I need to remove the not (“!”) from it, since it’s giving you the wrong error message; you do, in fact, have IPv4 connectivity. I also need to add another catch-all that is more generic (i.e. “Geekbench $VERSION test failed. Run manually to determine cause.” or something to that effect).
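
For the sake of discussion, a rough sketch of what that corrected branch could look like (variable names are taken from the debug trace above; the exact structure in yabs.sh may differ):

# If no results URL was captured, report the missing-IPv4 problem only when
# IPV4_CHECK is actually empty; otherwise fall back to a generic failure message.
if [[ -z "$GEEKBENCH_TEST" ]]; then
    if [[ -z "$IPV4_CHECK" ]]; then
        echo -e "\r\033[0KGeekbench releases can only be downloaded over IPv4. FTP the Geekbench files and run manually."
    else
        echo -e "\r\033[0KGeekbench $VERSION test failed. Run manually to determine cause."
    fi
fi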

I also wondered about that ! there … :sweat_smile:
No, I didn’t modify the script to make it detect locally downloaded Geekbench files. I had the error for both v4 and v5 of Geekbench … :man_shrugging:

1 Like

Is it just me, or do some Clouvider machines have a lot of issues connecting? It always takes me ages to run a test because they time out. It’s bizarre.
And can someone explain to me how IOPS works? I don’t understand what I should read into it. I’ve truly never understood what it means; I know it’s some kind of reference figure, but what does it define besides disk read/write speed? For example, I saw AWS has some storage volumes with more than 1000 IOPS, but is that good or bad? If I use NVMe drives they show a lot of I/O, surely because it’s faster to write tiny files, but there are so many different IOPS numbers in YABS that I don’t know where to point my head about I/O. (And yes, with that you saw all the ways I write it: IOPS, I/O, IO’s :joy:)

For example, this is on some really old mechanical disks I have. It really doesn’t look like a lot, but are the IOPS figures somehow good? (I mean apart from the read/write speeds) →

# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2020-12-29                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Thu Jan  7 20:22:48 CET 2021

Basic System Information:
---------------------------------
Processor  : Intel(R) Xeon(R) CPU E5-2687W v2 @ 3.40GHz
CPU cores  : 32 @ 1200.481 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM        : 110.1 GiB
Swap       : 8.0 GiB
Disk       : 7.2 TiB

fio Disk Speed Tests (Mixed R/W 50/50):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 3.47 MB/s      (869) | 45.53 MB/s     (711)
Write      | 3.51 MB/s      (877) | 45.79 MB/s     (715)
Total      | 6.98 MB/s     (1.7k) | 91.32 MB/s    (1.4k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 90.51 MB/s     (176) | 144.41 MB/s    (141)
Write      | 95.32 MB/s     (186) | 154.02 MB/s    (150)
Total      | 185.83 MB/s    (362) | 298.44 MB/s    (291)

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed     
                |                           |                 |                
Clouvider       | London, UK (10G)          | busy            | busy           
Online.net      | Paris, FR (10G)           | 1.49 Gbits/sec  | 3.02 Gbits/sec 
WorldStream     | The Netherlands (10G)     | 1.96 Gbits/sec  | 2.53 Gbits/sec 
Biznet          | Jakarta, Indonesia (1G)   | 139 Mbits/sec   | 43.5 Mbits/sec 
Clouvider       | NYC, NY, US (10G)         | 289 Mbits/sec   | 1.19 Gbits/sec 
Velocity Online | Tallahassee, FL, US (10G) | 193 Mbits/sec   | 1.64 Gbits/sec 
Clouvider       | Los Angeles, CA, US (10G) | busy            | busy           
Iveloz Telecom  | Sao Paulo, BR (2G)        | 132 Mbits/sec   | 639 Mbits/sec  

Geekbench 5 Benchmark Test:
---------------------------------
Test            | Value                         
                |                               
Single Core     | 793                           
Multi Core      | 10222                         
Full Test       | https://browser.geekbench.com/v5/cpu/5759462

I noticed the busy Clouvider servers as well. Maybe we should tag Dom @Clouvider to make him aware; maybe he blocked certain IP ranges in general, or maybe there’s just a lot of testing going on with all the people that got new toys during BF and the holidays…

For IOPS and the fio tests:

To determine the speed and performance of a disk, you need to consider two main things: the highest possible throughput/bandwidth you can achieve during a constant data stream, which would be measured in MB/s or GB/s, and the largest number of operations per second possible, literally IOPS.

Both may be limited by different things. For the bandwidth that’s usually the bus/port/protocol, i.e. SATA 1/2/3 vs. PCIe NVMe and such, or, in case you have network storage attached, it obviously could depend on the transfer rate of that connection, etc.
For IOPS it mainly depends on the controller inside your SSD/NVMe and its generation; hard disks fall behind heavily in that regard because of the time head positioning takes, so mechanical limits, etc.

With all that said, you might now have another look at the fio numbers. In general you can roughly assume blocksize * IOPS = bandwidth. Like in your example, 4k blocksize * 1.7k IOPS = ~6.9 MB/s, or 1M blocksize * 291 IOPS = ~291 MB/s.
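
As a quick sanity check, here is that arithmetic spelled out in shell (the numbers are simply taken from the fio output above):

# 4k blocksize * 1700 IOPS -> KB/s (roughly 6.9 MB/s)
echo "$((4 * 1700)) KB/s"
# 1M (1024 KB) blocksize * 291 IOPS -> KB/s (roughly 291 MB/s)
echo "$((1024 * 291)) KB/s"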

So, to be able to measure the aforementioned highest possible bandwidth, you want to use a large blocksize like 1 MB, as even a smaller amount of I/O should then be sufficient to reach the limit of the transfer rate. Vice versa, to measure the best possible IOPS result, you need a small blocksize like 4k to be able to issue as many operations as possible without being limited by the bandwidth.
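
If you want to play with this outside of YABS, something like the following shows the two extremes (this is a generic fio invocation I’m sketching, not the exact command the script runs; the test file name and sizes are placeholders):

# Peak-bandwidth oriented run: large 1M blocks, mixed 50/50 read/write
fio --name=bw_test --filename=./fio_testfile --size=2G --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=50 --bs=1M --iodepth=64 --runtime=30 --time_based --group_reporting
# Peak-IOPS oriented run: small 4k blocks, otherwise identical settings
fio --name=iops_test --filename=./fio_testfile --size=2G --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=50 --bs=4k --iodepth=64 --runtime=30 --time_based --group_reporting
# Clean up the test file afterwards
rm -f ./fio_testfile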

Now keep in mind that there are things like file caches, RAID caches, ZFS ARC caches and so on, and carefully try to interpret the numbers you are seeing…

It says 7.2 TiB of disk, so this is for sure HDD. A common HDD is capable of something in the range of 100-180 MB/s transfer-wise and also 100-180 IOPS max.
Based on the max ~291 MB/s in the 1M section for combined read/write, I’d guess you are running some RAID, which usually helps, and based on the 1.7k IOPS in the 4k section, I’d think it is somehow cached, either via a hardware RAID buffer, or maybe you use ZFS, which does some good caching on these jobs?

In general, higher IOPS is usually more beneficial for common hosting workloads, as most accesses go to rather small files or parts of them instead of constantly reading/writing large files. SSDs, and nowadays NVMe, are therefore especially important for providers, because they a) can be used for strong marketing and b) allow a higher density of customers on the same node, since you won’t hit the I/O limits that easily.

The 64k and 512k numbers, by the way, are more or less additional data points to see whether the whole thing scales fairly linearly or runs into one or the other limit earlier, which on a VPS could be an artificial limit…

Sorry for the wall of text, but one last point about AWS and Azure and the IOPS ratings of their different storage tiers: if you now think about the relation between IOPS and bandwidth, it also becomes clear why artificial limits, especially on IOPS, make sense there. At large scale, one can calculate quite well how many different workloads can be put on the same storage array so that they average out across the system while still guaranteeing a certain level of resources.

Hope this helps to get your mind spinning and thinking about it. Also, please keep in mind that this write-up is in my own words and based on my own understanding, and might not be fully complete or absolutely correct. Everyone feel free to jump in and add info or correct me if I missed something :wink:

3 Likes

Thanks for this awesome explanation! I really value it.
It’s really interesting; so basically, everything depends on the block size too. I’m gonna read more about it, but you really shed some light on it for me.

1 Like