Arrhenius upphandling lagring / arrh-storage-benchmark / Commits

Commit f558c1e8, authored 6 months ago by Sebastian Thorarensen
Put all tests in one script
Parent: 091247f4
Showing 4 changed files, with 85 additions and 92 deletions:

    README                          +1   −22
    flash.batch (deleted)           +0   −35
    hdd.batch (deleted)             +0   −35
    storage-benchmark.sbatch (new)  +84  −0
README  +1 −22  (view file @ f558c1e8)
Removed:

    Running the benchmark
    ---------------------
    1. Make sure 'elbencho' is in PATH.
    2. sbatch -N <1 up to whatever> hdd.batch <test directory on filesystem>
       sbatch -N <1 up to whatever> flash.batch <test directory on filesystem>

    TODOs
    -----
    - Define the metadata tests.
    - Add flags to make elbencho use whatever entropy we want for
      compression, as suggested by hx.
    - We agreed on making block size for the streaming HDD test choosable
      by the vendor. Make it an argument instead of hard-coding 1M.
    - Discuss if it is OK to hard-code thread count. Right now it is
      hard-coded to 16.

Added:

    Write instructions here, with and without Slurm!
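The removed batch scripts below accept exactly one directory argument and otherwise print a usage message and exit. A stand-alone sketch of that check (the `check_args` function name and the `/mnt/testfs` path are hypothetical, mirroring the scripts' `usage`/`$# -ne 1` logic):

```shell
# Mirrors the batch scripts' argument handling: exactly one DIRECTORY is required.
check_args() {
        if [ "$#" -ne 1 ]
        then
                echo "Usage: sbatch [-N NUMNODES] hdd.batch DIRECTORY" >&2
                return 2
        fi
        echo "benchmarking in: $1"
}

check_args /mnt/testfs               # accepted: prints "benchmarking in: /mnt/testfs"
check_args || echo "rejected ($?)"   # no argument: usage goes to stderr, status 2
```

In the real scripts the rejection happens via `exit 2` rather than `return 2`, so a bad invocation aborts the job before any service is started.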
flash.batch  (deleted, mode 100644 → 0)  +0 −35  (view file @ 091247f4)
#!/bin/sh

#SBATCH -J arrh-storage-benchmark-flash

usage()
{
        echo >&2 "Usage: sbatch [-N NUMNODES] flash.batch DIRECTORY"
        echo >&2 "Run benchmark creating, writing, and reading files in DIRECTORY"
        exit 2
}

if [ $# -ne 1 ]
then
        usage
fi

NUMTHREADS=16

srun --ntasks-per-node=1 --cpus-per-task="$NUMTHREADS" \
        elbencho --service --foreground > /dev/null &
sleep 5  # wait for services to start

#
# Random 4K write/read
#
echo 'flash.batch: Random 4K write/read'
elbencho --hosts "$(scontrol show hostnames | tr '\n' ',')" "$1" \
        -w -r -t "$NUMTHREADS" -s 4G -b 4K -n 0 -F --rand --rotatehosts=1

#
# Metadata (creat/stat/unlink)
#
echo 'flash.batch: Metadata'
# to be written

elbencho --hosts "$(scontrol show hostnames | tr '\n' ',')" \
        --quit
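Both batch scripts build elbencho's `--hosts` argument by flattening the one-hostname-per-line output of `scontrol show hostnames` into a comma-separated string. A minimal sketch of that transformation, using a `printf`-based stand-in for `scontrol` (the node names are made up):

```shell
# Stand-in for `scontrol show hostnames`, which prints one hostname per line.
hostnames() {
        printf 'node1\nnode2\nnode3\n'
}

# Same pipeline as in the scripts: every newline becomes a comma.
HOSTS=$(hostnames | tr '\n' ',')
echo "$HOSTS"   # node1,node2,node3,
```

Note that because `tr` converts the final newline too, the list keeps a trailing comma; the scripts pass it to elbencho as-is.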
hdd.batch  (deleted, mode 100644 → 0)  +0 −35  (view file @ 091247f4)
#!/bin/sh

#SBATCH -J arrh-storage-benchmark-hdd

usage()
{
        echo >&2 "Usage: sbatch [-N NUMNODES] hdd.batch DIRECTORY"
        echo >&2 "Run benchmark creating, writing, and reading files in DIRECTORY"
        exit 2
}

if [ $# -ne 1 ]
then
        usage
fi

NUMTHREADS=16

srun --ntasks-per-node=1 --cpus-per-task="$NUMTHREADS" \
        elbencho --service --foreground > /dev/null &
sleep 5  # wait for services to start

#
# Sequential write/read
#
echo 'hdd.batch: Sequential write/read'
elbencho --hosts "$(scontrol show hostnames | tr '\n' ',')" "$1" \
        -w -r -t "$NUMTHREADS" -s 4G -b 1M -n 0 -F --rotatehosts=1

#
# Metadata (creat/stat/unlink)
#
echo 'hdd.batch: Metadata'
# to be written

elbencho --hosts "$(scontrol show hostnames | tr '\n' ',')" \
        --quit
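In hdd.batch, `-t "$NUMTHREADS"` with `-s 4G` means each node's 16 threads each write and then read one 4 GiB file. A quick sketch of the resulting per-node data volume (plain arithmetic, using the hard-coded values from the script):

```shell
# Values hard-coded in hdd.batch
NUMTHREADS=16
FILE_SIZE_GIB=4   # -s 4G: one 4 GiB file per thread

# Total data touched per node in each phase (write, then read)
PER_NODE_GIB=$((NUMTHREADS * FILE_SIZE_GIB))
echo "$PER_NODE_GIB GiB per node"   # 64 GiB per node
```

This is the figure the README TODO about the hard-coded thread count effectively pins down: changing `NUMTHREADS` scales the per-node volume proportionally.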
storage-benchmark.sbatch  (new file, mode 100644)  +84 −0  (view file @ f558c1e8)
#!/bin/sh

#SBATCH -J arrh-storage-benchmark

usage()
{
        echo "Usage: sbatch -N <number of nodes> --cpus-per-task=<threads per node> storage-benchmark.sbatch [ stream <blocksize> | iops | meta ] <directory>"
        exit 2
}

info()
{
        echo "storage-benchmark.sbatch:" "$@"
}

##
## Argument handling
##

MODE=$1
if [ "$MODE" = stream ]
then
        BLOCKSIZE=$2
        shift
elif [ "$MODE" = iops ] || [ "$MODE" = meta ]
then
        :
else
        usage
fi

DIRECTORY=$2
if [ -z "$DIRECTORY" ]
then
        usage
fi

NNODES=$SLURM_NNODES
THREADS=$SLURM_CPUS_PER_TASK
HOSTS=$(scontrol show hostnames | tr '\n' ',')

info "Mode: $MODE"
if [ "$BLOCKSIZE" ]
then
        info "Block size: $BLOCKSIZE"
fi
info "Number of nodes: $NNODES"
info "Threads per node: $THREADS"
elbencho --version

##
## The benchmark
##

info "Starting service on all nodes"
srun --ntasks-per-node=1 elbencho --service --foreground > /dev/null &
sleep 5  # wait for services to start

info "Starting storage benchmark"
echo

if [ "$MODE" = stream ]
then
        # 1024 GiB per node
        SIZE=$((1 * 1024 * 1024 * 1024 / THREADS))
        elbencho --hosts "$HOSTS" --rotatehosts=1 -t "$THREADS" \
                -w -r -s "$SIZE"K -b "$BLOCKSIZE" -n 0 -F "$DIRECTORY"
elif [ "$MODE" = iops ]
then
        # 128 GiB per node
        SIZE=$((128 * 1024 * 1024 / THREADS))
        elbencho --hosts "$HOSTS" --rotatehosts=1 -t "$THREADS" \
                -w -r -s "$SIZE"K -b 4K -n 0 -F --rand "$DIRECTORY"
elif [ "$MODE" = meta ]
then
        # 10M files per node
        FILES=$((10000000 / THREADS))
        elbencho --hosts "$HOSTS" --rotatehosts=1 -t "$THREADS" \
                -d -w --stat -F -N "$FILES" -D "$DIRECTORY"
fi

echo
info "Benchmark done"
elbencho --hosts "$HOSTS" --quit
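The new script divides a fixed per-node workload evenly across the threads given by `--cpus-per-task`. A sketch of the three `$(( ... ))` computations, assuming 16 threads per node as in the old scripts (any other `THREADS` value works the same way):

```shell
# Per-thread work as computed in storage-benchmark.sbatch,
# assuming --cpus-per-task=16 (i.e. THREADS=16).
THREADS=16

STREAM_SIZE=$((1 * 1024 * 1024 * 1024 / THREADS))   # 1024 GiB per node, in KiB
IOPS_SIZE=$((128 * 1024 * 1024 / THREADS))          # 128 GiB per node, in KiB
META_FILES=$((10000000 / THREADS))                  # 10M files per node

echo "stream: ${STREAM_SIZE}K per thread"     # 67108864K (= 64 GiB per thread)
echo "iops:   ${IOPS_SIZE}K per thread"       # 8388608K (= 8 GiB per thread)
echo "meta:   $META_FILES files per thread"   # 625000 files per thread
```

Since the per-node totals are constants, raising the thread count shrinks each thread's file size (or file count) rather than growing the overall workload; note the sizes are passed to elbencho in KiB via `-s "$SIZE"K`.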