Spectrum Network Tool

Introduction

IBM Spectrum Scale™, based on technology from IBM General Parallel File System (hereinafter referred to as IBM Spectrum Scale or GPFS™), is a high-performance software-defined file management solution that simplifies data management, scales to petabytes of data and billions of files, and delivers high-performance access to data from multiple servers.

The IBM Spectrum Scale clustered file system provides a petabyte-scale global namespace that can be accessed simultaneously from multiple nodes and can be deployed in multiple configurations (e.g. NSD client-server, SAN). The file system can be accessed using multiple protocols (e.g. the native NSD protocol, NFS, SMB/CIFS, Object). IBM Spectrum Scale stability and performance are highly dependent on the underlying networking infrastructure. To assess the stability and performance of the underlying network, IBM Spectrum Scale provides tools such as mmnetverify [1] and nsdperf [4].

The IBM Spectrum Scale nsdperf tool is useful for assessing cluster network performance. This blog provides an overview of the nsdperf tool and its usage.

Throughout this document, all references to “GPFS” refer to the “IBM Spectrum Scale” product.

nsdperf overview

The mmnetverify tool can be used to assess network health and verify common network issues in a Spectrum Scale cluster setup, as detailed in the mmnetverify blog [2]. However, the mmnetverify tool cannot be used to assess the aggregate parallel network bandwidth between multiple client and server nodes. Furthermore, mmnetverify does not yet support network bandwidth assessment using the RDMA protocol.
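
Before running nsdperf, a quick health baseline can be taken with mmnetverify itself. The commands below are an illustrative sketch only; the set of supported check operations and their defaults vary by Spectrum Scale release, so consult the mmnetverify documentation [1] for the exact syntax in your environment:

# Basic reachability checks across the cluster nodes
mmnetverify ping -N all

# Point-to-point (not aggregate) data transfer checks
mmnetverify data-large -N all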

The nsdperf tool enables you to define a set of nodes as clients and servers and run a coordinated network test that simulates GPFS Network Shared Disk (NSD) protocol traffic. All network communication is done using TCP socket connections or RDMA verbs (InfiniBand/iWARP). The tool is standalone and does not use the GPFS daemon, so it is a good way to test network I/O without involving disk I/O.

Existing network performance programs, such as iperf [3], are good at measuring throughput between a pair of nodes. However, using these programs on a large number of cluster nodes requires considerable effort to coordinate startup and to gather results from all nodes. Also, a traffic pattern with many point-to-point streams may give very different results from the GPFS NSD pattern of clients sending messages round-robin to the servers.
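
For reference, a typical pairwise iperf measurement looks like the sketch below (the flags shown are the common iperf/iperf3 ones; the host name is a placeholder). Every client/server pairing has to be started and its results collected by hand, which is what makes many-to-many testing with iperf tedious on a large cluster:

# On one node, start an iperf server
iperf -s

# On another node, run a 30-second test with 4 parallel TCP streams to that server
iperf -c <server_hostname> -P 4 -t 30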

Therefore, even if iperf is producing good throughput numbers while GPFS file I/O is slow, the problem might still be the network rather than GPFS. The nsdperf program can be used for effective network performance assessment pertaining to the IBM Spectrum Scale network topology. When the Spectrum Scale software is installed on a node, the nsdperf source is installed in the /usr/lpp/mmfs/samples/net directory.

It is highly recommended to perform a cluster network performance assessment using the nsdperf tool prior to Spectrum Scale cluster deployment, to ensure that the underlying network meets the expected performance requirements. Furthermore, in the event of production performance issues, it is recommended to quiesce file system I/O (when permissible) and verify, using the nsdperf tool, that the underlying network performance is optimal.

To complement the nsdperf tool (to aid with Spectrum Scale cluster network performance assessment), the IBM Spectrum Scale gpfsperf benchmark [5] can be used to measure end-to-end file system performance (from a Spectrum Scale node) for several common file access patterns. The gpfsperf benchmark can be run on a single node as well as across multiple nodes. There are two independent ways to achieve parallelism in the gpfsperf program: more than one instance of the program can be run on multiple nodes, using the Message Passing Interface (MPI) to synchronize their execution, or a single instance of the program can execute several threads in parallel on a single node. These two techniques can also be combined. When the Spectrum Scale software is installed on a node, the gpfsperf source is installed in the /usr/lpp/mmfs/samples/perf directory. Detailed instructions to build and execute the gpfsperf benchmark are provided in the README file in the /usr/lpp/mmfs/samples/perf directory.
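
As a rough sketch only (the exact make targets and command-line options are described in the README and may differ by release, so treat everything below as illustrative assumptions rather than documented syntax), building and running gpfsperf on a single node could look like this:

# Build the non-MPI version of gpfsperf from the shipped samples
cd /usr/lpp/mmfs/samples/perf
make gpfsperf

# Illustrative run: sequential create/write of a test file on a GPFS file system assumed
# to be mounted at /gpfs/fs0, using a 4 MiB record size and 8 threads
./gpfsperf create seq /gpfs/fs0/gpfsperf.tmp -n 8g -r 4m -th 8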

Building nsdperf

Detailed instructions to build nsdperf are provided in the README file in the /usr/lpp/mmfs/samples/net directory. This section provides a high-level build procedure.

On GNU/Linux or on Windows systems running Cygwin/MinGW:
g++ -O2 -o nsdperf -lpthread -lrt nsdperf.C

To build with RDMA support (GNU/Linux only):
g++ -O2 -DRDMA -o nsdperf-ib -lpthread -lrt -libverbs -lrdmacm nsdperf.C

The nsdperf binary built with RDMA support may be saved under a different naming scheme (e.g., with an -ib suffix) to denote the RDMA capability. The nsdperf binary built with RDMA support can also be used to assess TCP/IP network bandwidth in addition to RDMA network bandwidth.

NOTE: Since nsdperf (in server mode) needs to be launched across multiple nodes, the nsdperf binary needs to be present on all participating nodes at the same path/location. To achieve this, following are some recommendations:
• Build this tool on a single node (of the same CPU architecture as the rest of the cluster, e.g. x86_64 or ppc64) and copy the nsdperf binary to a global shared namespace (accessible via NFS or GPFS) so that nsdperf is accessible from a common path (a sketch of this approach follows the list).
• Alternatively, the nsdperf binary may be built on all the nodes in the /usr/lpp/mmfs/samples/net directory using a parallel shell such as mmdsh (e.g., mmdsh -N all "cd /usr/lpp/mmfs/samples/net; g++ -O2 -DRDMA -o nsdperf-ib -lpthread -lrt -libverbs -lrdmacm nsdperf.C").
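
A minimal sketch of the first approach, assuming a shared GPFS path /gpfs/fs0/tools (a hypothetical location; substitute any directory that is mounted on all participating nodes):

# Copy the freshly built binary to a globally visible directory
cp /usr/lpp/mmfs/samples/net/nsdperf-ib /gpfs/fs0/tools/

# Confirm that every node sees the same binary at the same path
mmdsh -N all "md5sum /gpfs/fs0/tools/nsdperf-ib"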

nsdperf Usage

The nsdperf command-line options are as follows; they are detailed in the README file in the /usr/lpp/mmfs/samples/net directory (on a node with Spectrum Scale installed).

Usage: nsdperf-ib [-d] [-h] [-i FNAME] [-p PORT] [-r RDMAPORTS] [-t NRCV] [-s] [-w NWORKERS] [-6] [CMD...]

Options:
-d Include debug output
-h Print help message
-i FNAME Read commands from file FNAME
-p PORT TCP port to use (default 6668)
-r RDMAPORTS RDMA devices and ports to use (default is all active ports)
-t NRCV Number of receiver threads (default nCPUs, min 2)
-s Act as a server
-w NWORKERS Number of message worker threads (default 32)
-6 Use IPv6 rather than IPv4

Generally, the most often used nsdperf command-line option is “-s”, which launches nsdperf in server mode. nsdperf in server mode needs to be run on all the cluster nodes that will be involved in the nsdperf testing (i.e., the nodes between which NSD client-to-server network bandwidth is to be assessed). For example:

mmdsh -N <participating_nodes> '<complete_path_to>nsdperf -s </dev/null > /dev/null 2>&1 &'

After the nsdperf servers are running, the network bandwidth assessment between NSD clients and servers can be performed by running nsdperf, without “-s”, from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution), and entering nsdperf commands.

<complete_path_to>nsdperf

The “test” command sends a message to all the client nodes to begin write and read network performance testing (detailed in the following sections) against the server nodes. The size of the messages can be specified using the nsdperf “buffsize” parameter. It is good to start with “buffsize NBYTES” equal to the GPFS file-system block size to assess the network bandwidth capability, because when sequential I/O is performed on the GPFS file system, the NSD clients transmit I/O to the NSD servers in units of the file-system block size.
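
For example, the file-system block size can be read with mmlsfs and then matched by the nsdperf buffer size (the file system name fs0 below is a placeholder):

# Show the block size, in bytes, of file system fs0
mmlsfs fs0 -B

# Inside the nsdperf prompt, set the test buffer size to that block size (here 4 MiB)
nsdperf-ib> buffsize 4194304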

Throughput numbers are reported in MB/sec, where 1 MB is 1,000,000 bytes. The CPU busy time during the test period is also reported (currently only supported on Linux and AIX systems); this is detailed in the nsdperf README file in the /usr/lpp/mmfs/samples/net directory (on a node with Spectrum Scale installed). The numbers reported are the average percentage of non-idle time across all client nodes and across all server nodes.

The available nsdperf test types are write, read, nwrite, swrite, sread, rw.

write
Clients write round-robin to all servers. Each client tester thread is in a loop, writing a data buffer to one server, waiting for a reply, and then moving on to the next server.

read
Clients read round-robin from all servers. Each client thread sends a request to a server, waits for the data buffer, and then moves on to the next server.

nwrite
This is the same as the write test, except that it uses a GPFS NSD style of writing, with a four-way handshake. The client tester thread first sends a 16-byte NSD write request to the server. The server receives the request, and sends back a read request for the data. The client replies to this with a data buffer. When the server has received the data, it replies to the original NSD write request, and the client gets this and moves on to the next server.

swrite
Each client tester thread writes repeatedly to a single server, rather than sending data round-robin to all servers. To get useful results, the “threads” command should be used to make the number of tester threads be an even multiple of the number of server nodes.

sread
Each tester thread reads from only one server.

rw
This is a bi-directional test, where half of the client tester threads run the read test and half of them do the write test.

At a minimum, the network bandwidth assessment should be performed using the write, read and nwrite tests. The nwrite test is particularly pertinent when the Spectrum Scale cluster is deployed over an InfiniBand network and the GPFS verbsRdma configuration parameter is enabled.
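
When the same assessment has to be repeated (for example, before and after a network change), the whole sequence can also be scripted with the “-i” option instead of being typed interactively. The sketch below is an assumed example: the node names and the file name nsdperf.cmd are placeholders, nsdperf in server mode is expected to be already running on the listed nodes, and if the nsdperf build at hand does not accept a list of test names on one line, the tests can be issued as separate “test” commands:

# Contents of nsdperf.cmd: declare servers/clients, run the minimum test set, then clean up
server nodeA nodeB
client nodeC nodeD
ttime 30
test write read nwrite
killall
quit

# Feed the command file to nsdperf from the administrative node
/opt/benchmarks/nsdperf-ib -i nsdperf.cmd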

Special considerations for Spectrum Scale clusters with Infiniband

The nsdperf command-line option “-r” needs to be given the same value as the GPFS verbsPorts parameter (mmlsconfig | grep verbsPorts). The format of the RDMAPORTS argument of the “-r” option is a comma- or space-separated list of device names and port numbers separated by a colon or slash (e.g. “mlx5_0/1,mlx5_1/1”). When multiple ports are specified, RDMA connections will be established for each port and outbound messages will be sent round-robin through the connections. If a port number is not specified, then all active ports on the device will be used.

After the nsdperf servers are running, on an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution) run nsdperf with the same “-r” value used for nsdperf in server mode (-s).

• For example, if GPFS verbsPorts is set to “mlx5_0/1”, then nsdperf in server mode (-s) needs to have RDMAPORTS (-r) set to “mlx5_0/1”, similar to below:

mmdsh -N <participating_nodes> '<complete_path_to>nsdperf-ib -s -r mlx5_0/1 </dev/null > /dev/null 2>&1 &'

The nsdperf administrative command (without the -s option) needs to have RDMAPORTS (-r) set to “mlx5_0/1”, similar to below:

<complete_path_to>nsdperf-ib -r mlx5_0/1

nsdperf Examples

The following section provides nsdperf examples to assess the network bandwidth for multiple client-server scenarios over TCP/IP as well as the RDMA protocol. In the examples below, the clients and servers are interconnected using FDR InfiniBand (FDR-IB), with 1 x FDR-IB link per node. The ib0 suffix in the node name denotes the IP address corresponding to the IP over InfiniBand (IPoIB) interface.

Comments are inlined in the nsdperf examples, denoted by “#” at the start of the line. The following sections assume that the nsdperf (nsdperf-ib) binary is installed on all the nodes in the /opt/benchmarks/ directory.

Single Client and Single Server (with detailed comments)
In the example below, the network bandwidth between nodes c71f1c7p1ib0 and c71f1c9p1ib0 over the TCP/IP and RDMA network is assessed.
nsdperf in server mode is started on both the client node and the server node:

mmdsh -N c71f1c7p1ib0,c71f1c9p1ib0 "/opt/benchmarks/nsdperf-ib -s </dev/null >/dev/null 2>&1 &"

Then, execute nsdperf from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution):
# /opt/benchmarks/nsdperf-ib

# Designate the nodes as clients using “client” parameter
nsdperf-ib> client c71f1c7p1ib0
Connected to c71f1c7p1ib0

# Designate the nodes as servers using “server” parameter
nsdperf-ib> server c71f1c9p1ib0
Connected to c71f1c9p1ib0

# Set the run time to 30 seconds for the tests using “ttime” parameter
nsdperf-ib> ttime 30
Test time set to 30 seconds

# Perform the desired nsdperf network tests using “test” parameter.

# TCP/IP network mode – Use “status” command to verify client node connectivity to the server
# node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: no

clients:
c71f1c7p1ib0 (10.168.117.199) -> c71f1c9p1ib0
servers:
c71f1c9p1ib0 (10.168.117.205)

# Perform performance tests from clients to servers.
# The “test” command sends a message to all client nodes to begin network performance testing
# to the server nodes. By default, write and read tests are performed.

nsdperf-ib> test
1-1 write 3170 MB/sec (756 msg/sec), cli 2% srv 3%, time 30, buff 4194304
1-1 read 3060 MB/sec (728 msg/sec), cli 3% srv 2%, time 30, buff 4194304

# Based on the results, TCP/IP bandwidth is limited by 1 x IPoIB link (1-1) between the client
# and server [refer to APPENDIX A].
# By default, each client node uses four tester threads. Each of these threads will independently
# send and receive messages to the server node. The thread counts may be scaled using “threads”
# parameter
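# As a rough cross-check, the NetPipe run in APPENDIX A peaks at about 35,000 Mbps, i.e.
# roughly 4,400 MB/sec for a single TCP stream over this IPoIB link, so the ~3,170 MB/sec
# measured here is consistent with being bound by the single IPoIB link rather than by nsdperf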

# Enable RDMA for sending data blocks using “rdma” parameter
nsdperf-ib> rdma on
RDMA is now on

# Perform the desired nsdperf network tests using “test”. Default is write and read tests

# RDMA network mode – Use “status” command to verify client node RDMA connectivity to
# the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: yes

clients:
c71f1c7p1ib0 (10.168.117.199) -> c71f1c9p1ib0
mlx5_0:1 10a2:1f00:032d:1de4
servers:
c71f1c9p1ib0 (10.168.117.205)
mlx5_0:1 40a1:1f00:032d:1de4

# Perform RDMA performance tests using “test”. Default is write and read tests
nsdperf-ib> test
1-1 write 6450 MB/sec (1540 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
1-1 read 6450 MB/sec (1540 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Based on results, RDMA bandwidth is limited by 1 x FDR-IB link (1-1) between the client and
# server [refer to APPENDIX B].
# By default, each client node uses four tester threads. Each of these threads will independently
# send and receive messages to the server node. The thread counts may be scaled using “threads”
# parameter
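# As a rough cross-check, FDR InfiniBand is 4 lanes x 14.0625 Gbps = 56.25 Gbps raw; with
# 64b/66b encoding the usable data rate is about 54.5 Gbps, i.e. roughly 6,800 MB/sec, so the
# ~6,450 MB/sec above (and the ~6,226 MB/sec ib_write_bw result in APPENDIX B) are both
# close to single-link line rate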

# Shut down nsdperf (in server mode) on all client and server nodes using “killall” command
nsdperf-ib> killall

# Exit from program using “quit” command
nsdperf-ib> quit

Multiple Clients and Multiple Servers

In the example below, the network bandwidth between multiple client nodes c71f1c9p1ib0, c71f1c10p1ib0 and multiple server nodes c71f1c7p1ib0, c71f1c8p1ib0 over the TCP/IP and RDMA network is assessed.

nsdperf in server mode is started on the client nodes as well as the server nodes:
mmdsh -N c71f1c7p1ib0,c71f1c8p1ib0,c71f1c9p1ib0,c71f1c10p1ib0 “/opt/benchmarks/nsdperf-ib -s </dev/null >/dev/null 2>&1 &”
Then, execute nsdperf from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution):
# /opt/benchmarks/nsdperf-ib

# Designate the nodes as servers using the “server” parameter
nsdperf-ib> server c71f1c7p1ib0 c71f1c8p1ib0
Connected to c71f1c7p1ib0
Connected to c71f1c8p1ib0

# Designate the nodes as clients using the “client” parameter
nsdperf-ib> client c71f1c9p1ib0 c71f1c10p1ib0
Connected to c71f1c9p1ib0
Connected to c71f1c10p1ib0

# Set run time to 30 seconds for the tests using the “ttime” parameter
nsdperf-ib> ttime 30
Test time set to 30 seconds

# Perform the desired nsdperf network tests using the “test” parameter.

# TCP/IP network mode – Use “status” command to verify client node connectivity to the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: no

clients:
c71f1c9p1ib0 (10.168.117.205) -> c71f1c7p1ib0 c71f1c8p1ib0
c71f1c10p1ib0 (10.168.117.208) -> c71f1c7p1ib0 c71f1c8p1ib0
servers:
c71f1c7p1ib0 (10.168.117.199)
c71f1c8p1ib0 (10.168.117.202)

# Perform performance tests from clients to servers.
nsdperf-ib> test
2-2 write 8720 MB/sec (2080 msg/sec), cli 3% srv 5%, time 30, buff 4194304
2-2 read 10200 MB/sec (2440 msg/sec), cli 5% srv 3%, time 30, buff 4194304

# Based on the results, TCP/IP bandwidth is limited by 2 x IPoIB link (2-2) between the clients
# and servers

# Enable RDMA for sending data blocks using “rdma” parameter
nsdperf-ib> rdma on
RDMA is now on

# Perform the desired nsdperf network tests using “test”. Default is write and read tests

# RDMA network mode – Use “status” command to verify client node RDMA connectivity to
# the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: yes

clients:
c71f1c9p1ib0 (10.168.117.205) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 40a1:1f00:032d:1de4
c71f1c10p1ib0 (10.168.117.208) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 e0a5:1f00:032d:1de4
servers:
c71f1c7p1ib0 (10.168.117.199)
mlx5_0:1 10a2:1f00:032d:1de4
c71f1c8p1ib0 (10.168.117.202)
mlx5_0:1 e0a1:1f00:032d:1de4

# Perform RDMA performance tests using “test”. Default is write and read tests
nsdperf-ib> test
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
2-2 read 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Based on the results, RDMA bandwidth is limited by 2 x FDR-IB link (2-2) between the clients
# and servers
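# As a rough cross-check, 12,900 MB/sec is about 2 x the 6,450 MB/sec measured in the
# single client/server RDMA test earlier, i.e. both FDR-IB links are being driven close to
# single-link line rate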

# Shut down nsdperf (in server mode) on all client and server nodes using “killall” command
nsdperf-ib> killall

# Exit from program using “quit” command
nsdperf-ib> quit

Supplemental nsdperf tests and commands

This section details supplemental nsdperf tests (e.g. nwrite) and commands (e.g. buffsize, hist).

In the example below, the network bandwidth between multiple client nodes c71f1c9p1ib0, c71f1c10p1ib0 and multiple server nodes c71f1c7p1ib0, c71f1c8p1ib0 over the RDMA network is assessed.

nsdperf in server mode is started on the client nodes as well as the server nodes:
mmdsh -N c71f1c7p1ib0,c71f1c8p1ib0,c71f1c9p1ib0,c71f1c10p1ib0 “/opt/benchmarks/nsdperf-ib -s </dev/null >/dev/null 2>&1 &”

Then, execute nsdperf from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution):

# /opt/benchmarks/nsdperf-ib
nsdperf-ib> server c71f1c7p1ib0 c71f1c8p1ib0
Connected to c71f1c7p1ib0
Connected to c71f1c8p1ib0
nsdperf-ib> client c71f1c9p1ib0 c71f1c10p1ib0
Connected to c71f1c9p1ib0
Connected to c71f1c10p1ib0

# Set the run time to 30 seconds for the tests
nsdperf-ib> ttime 30
Test time set to 30 seconds

# Enable RDMA for sending data blocks
nsdperf-ib> rdma on
RDMA is now on

# Perform the desired nsdperf network tests using the “test” parameter.

# RDMA network mode – Use “status” command to verify client node RDMA connectivity to
# the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: yes

clients:
c71f1c9p1ib0 (10.168.117.205) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 40a1:1f00:032d:1de4
c71f1c10p1ib0 (10.168.117.208) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 e0a5:1f00:032d:1de4
servers:
c71f1c7p1ib0 (10.168.117.199)
mlx5_0:1 10a2:1f00:032d:1de4
c71f1c8p1ib0 (10.168.117.202)
mlx5_0:1 e0a1:1f00:032d:1de4

# Perform the desired nsdperf network tests using the “test” parameter.
nsdperf-ib> test
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
2-2 read 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Perform individual network tests (e.g. nwrite)
nsdperf-ib> test write
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
nsdperf-ib> test read
2-2 read 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
nsdperf-ib> test nwrite
2-2 nwrite 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Based on the results, RDMA bandwidth is limited by 2 x FDR-IB link (2-2) between the clients
# and servers

# The hist parameter can be turned “on” to print the network response time histograms
nsdperf-ib> hist on
Histogram printing is now on

nsdperf-ib> test write
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

c71f1c9p1ib0 block transmit times (average 2.598 msec, median 3 msec)
msec nevents
1 2
2 1211
3 35724
4 2

c71f1c10p1ib0 block transmit times (average 2.598 msec, median 3 msec)
msec nevents
2 263
3 36674
4 1

# Based on the response time histogram, each of the clients has a similar average and median
# response time. This can be useful to isolate any slow-performing clients.

# Set the buffsize to 1 byte to assess the network latency for small messages
nsdperf-ib> buffsize 1
Buffer size set to 1 bytes

nsdperf-ib> test write
2-2 write 1.27 MB/sec (74800 msg/sec), cli 4% srv 4%, time 30, buff 1, RDMA

c71f1c9p1ib0 block transmit times (average 0.1124 msec, median 0 msec)
msec nevents
0 850036
1 21
2 4
3 5

c71f1c10p1ib0 block transmit times (average 0.1012 msec, median 0 msec)
msec nevents
0 944850
1 9

# Based on the response time histogram, each of the clients has a similar average and median
# response time. This can be useful to isolate any slow-performing clients.
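# Rough sanity check (assuming the default of 4 tester threads per client shown in the status
# output): 2 clients x 4 threads = 8 messages outstanding, and 8 / 74,800 msg/sec ≈ 0.107 msec
# per message, which matches the ~0.10-0.11 msec averages reported above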

# Shut down nsdperf (in server mode) on all client and server nodes
nsdperf-ib> killall

# Exit from program
nsdperf-ib> quit

Summary

IBM Spectrum Scale is a complete software-defined storage solution that delivers simplicity, scalability, and high-speed access to data, and supports advanced storage management features such as compression, tiering, replication, and encryption. The Spectrum Scale nsdperf tool enables effective assessment of the network bandwidth between the NSD client and server nodes, pertaining to the IBM Spectrum Scale network topology, over TCP/IP as well as over the RDMA network.

References

[1] mmnetverify command:
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.2/com.ibm.spectrum.scale.v4r22.doc/bl1adm_mmnetverify.htm
[2] mmnetverify blog:
https://developer.ibm.com/storage/2017/02/24/diagnosing-network-problems-ibm-spectrum-scale-mmnetverify/
[3] iperf:
https://en.wikipedia.org/wiki/Iperf
[4] nsdperf:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/nsdperf%20README
[5] gpfsperf:
http://www-01.ibm.com/support/docview.wss?uid=isg15readmebbb63bf9mples_perf
[6] Netpipe network performance tool:
http://bitspjoule.org/netpipe/
[7] Infiniband Verbs Performance Tests:
https://github.com/lsgunth/perftest

APPENDIX A – TCP/IP Bandwidth using NetPipe network performance tool [6]

# TCP/IP bandwidth between single client and server over 1 x IPoIB link over FDR-IB

# ./NPtcp -h c71f1c7p1ib0

Now starting the main loop
0: 1 bytes 4750 times –> 0.38 Mbps in 20.31 usec
1: 2 bytes 4924 times –> 0.75 Mbps in 20.30 usec
2: 3 bytes 4926 times –> 1.13 Mbps in 20.22 usec
3: 4 bytes 3296 times –> 1.51 Mbps in 20.21 usec
.
.
38: 512 bytes 2350 times –> 183.77 Mbps in 21.26 usec
39: 515 bytes 2361 times –> 184.35 Mbps in 21.31 usec
40: 765 bytes 2368 times –> 270.09 Mbps in 21.61 usec
41: 768 bytes 3085 times –> 274.40 Mbps in 21.35 usec
42: 771 bytes 3128 times –> 271.06 Mbps in 21.70 usec
43: 1021 bytes 1553 times –> 357.98 Mbps in 21.76 usec
44: 1024 bytes 2295 times –> 357.89 Mbps in 21.83 usec
.
.
62: 8192 bytes 2039 times –> 2544.68 Mbps in 24.56 usec
63: 8195 bytes 2036 times –> 2552.65 Mbps in 24.49 usec
64: 12285 bytes 2042 times –> 3467.64 Mbps in 27.03 usec
65: 12288 bytes 2466 times –> 3160.44 Mbps in 29.66 usec
66: 12291 bytes 2247 times –> 3176.01 Mbps in 29.53 usec
67: 16381 bytes 1129 times –> 4074.60 Mbps in 30.67 usec
68: 16384 bytes 1630 times –> 3950.37 Mbps in 31.64 usec
69: 16387 bytes 1580 times –> 3937.13 Mbps in 31.75 usec
70: 24573 bytes 1575 times –> 5122.64 Mbps in 36.60 usec
71: 24576 bytes 1821 times –> 5855.86 Mbps in 32.02 usec
72: 24579 bytes 2082 times –> 5837.66 Mbps in 32.12 usec
73: 32765 bytes 1038 times –> 6921.58 Mbps in 36.12 usec
74: 32768 bytes 1384 times –> 6922.15 Mbps in 36.12 usec
.
.
92: 262144 bytes 510 times –> 20407.76 Mbps in 98.00 usec
93: 262147 bytes 510 times –> 20393.54 Mbps in 98.07 usec
94: 393213 bytes 509 times –> 22948.76 Mbps in 130.73 usec
95: 393216 bytes 509 times –> 22862.71 Mbps in 131.22 usec
96: 393219 bytes 508 times –> 22942.16 Mbps in 130.76 usec
97: 524285 bytes 254 times –> 25609.46 Mbps in 156.19 usec
98: 524288 bytes 320 times –> 26861.95 Mbps in 148.91 usec
99: 524291 bytes 335 times –> 26679.58 Mbps in 149.93 usec
100: 786429 bytes 333 times –> 29743.08 Mbps in 201.73 usec
101: 786432 bytes 330 times –> 29771.74 Mbps in 201.53 usec
102: 786435 bytes 330 times –> 29770.78 Mbps in 201.54 usec
103: 1048573 bytes 165 times –> 31021.22 Mbps in 257.89 usec
104: 1048576 bytes 193 times –> 31296.62 Mbps in 255.62 usec
105: 1048579 bytes 195 times –> 31623.55 Mbps in 252.98 usec
106: 1572861 bytes 197 times –> 33723.61 Mbps in 355.83 usec
107: 1572864 bytes 187 times –> 33690.15 Mbps in 356.19 usec
108: 1572867 bytes 187 times –> 33794.73 Mbps in 355.09 usec
109: 2097149 bytes 93 times –> 34186.09 Mbps in 468.03 usec
110: 2097152 bytes 106 times –> 34317.35 Mbps in 466.24 usec
111: 2097155 bytes 107 times –> 34429.04 Mbps in 464.72 usec
112: 3145725 bytes 107 times –> 34026.96 Mbps in 705.32 usec
113: 3145728 bytes 94 times –> 32615.07 Mbps in 735.86 usec
114: 3145731 bytes 90 times –> 32587.29 Mbps in 736.48 usec
115: 4194301 bytes 45 times –> 34507.09 Mbps in 927.34 usec
116: 4194304 bytes 53 times –> 34796.86 Mbps in 919.62 usec
117: 4194307 bytes 54 times –> 34831.03 Mbps in 918.72 usec
118: 6291453 bytes 54 times –> 35168.87 Mbps in 1364.84 usec
119: 6291456 bytes 48 times –> 34783.13 Mbps in 1379.98 usec
120: 6291459 bytes 48 times –> 34932.40 Mbps in 1374.08 usec
121: 8388605 bytes 24 times –> 34648.09 Mbps in 1847.14 usec
122: 8388608 bytes 27 times –> 33725.32 Mbps in 1897.68 usec
123: 8388611 bytes 26 times –> 33529.14 Mbps in 1908.79 usec

APPENDIX B – Infiniband Bandwidth using Infiniband Verbs Performance Tests [7]

# RDMA bandwidth between single client and server over 1 x FDR-IB link

# ib_write_bw -a c71f1c7p1ib0
—————————————————————————————
RDMA_Write BW Test
Dual-port : OFF Device : mlx5_0
Number of qps : 1 Transport type : IB
Connection type : RC Using SRQ : OFF
TX depth : 128
CQ Moderation : 100
Mtu : 4096[B] Link type : IB
Max inline data : 0[B] rdma_cm QPs : OFF
Data ex. method : Ethernet
—————————————————————————————
local address: LID 0x2a QPN 0x110d1 PSN 0x990306 RKey 0x0066f3 VAddr 0x003fff94800000
remote address: LID 0x07 QPN 0x019a PSN 0x13641c RKey 0x0058c1 VAddr 0x003fff7c800000
—————————————————————————————
#bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps]
2 5000 8.21 8.17 4.281720
4 5000 16.55 16.51 4.327333
8 5000 33.10 33.00 4.325431
16 5000 66.21 66.00 4.325506
32 5000 132.41 131.92 4.322621
64 5000 264.83 264.01 4.325560
128 5000 529.66 527.80 4.323771
256 5000 1059.31 1052.54 4.311206
512 5000 2118.63 2105.57 4.312206
1024 5000 4237.26 4206.14 4.307092
2048 5000 6097.54 6093.93 3.120093
4096 5000 6211.19 6195.85 1.586137
8192 5000 6220.81 6211.62 0.795088
16384 5000 6220.81 6219.45 0.398045
32768 5000 6223.22 6222.82 0.199130
65536 5000 6225.62 6224.76 0.099596
131072 5000 6225.59 6225.56 0.049804
262144 5000 6226.28 6226.24 0.024905
524288 5000 6226.05 6226.03 0.012452
1048576 5000 6226.43 6226.41 0.006226
2097152 5000 6226.14 6226.14 0.003113
4194304 5000 6226.47 6226.46 0.001557
8388608 5000 6226.23 6226.22 0.000778
—————————————————————————————

Using TCP/IP Tools And Utilities
WiFi Spectrum Analyzers and Cable Testers

Wireless network tools:

There are many types of network management utilities available for wireless home networks, ranging from hardware to software, and even passive tools that let you inspect the wireless frequency spectrum of your Wi-Fi environment.

Wireless network tools have become a necessity over the last decade, as network hardware and end-user devices have evolved into sophisticated equipment requiring more in-depth knowledge.

When building a home network it’s important to construct a wireless network diagram; this allows you to visualize the topology of your network and helps determine where troubles are coming from and what type of network tools are needed to solve the issue.

We’re going to talk about hardware, software and wireless limitations that will help you in understanding what type of wireless network tools to apply to a given situation.

WHNME will show you how to use these network tools correctly when trying to diagnose and repair your wireless home network.

Wireless Network Tools

Getting started:

We’re going to start out discussing TCP/IP tools and network commands; this will allow you to quickly pinpoint problems and rule out the unnecessary repairs that less experienced technicians make when trying to resolve network-related issues, and to create a wireless network diagram.

Next we’ll be covering how to use a PC-based spectrum analyzer, and how it is helpful in visualizing the wireless side of a home network, giving you the ability to sterilize your Wi-Fi environment so that your wireless-enabled devices work as advertised.

Last, we’ll be explaining how to use hardware tools such as cable crimpers and network line testers; these tools are very important for tracing down problems related to signal attenuation between network hardware connected via Ethernet cable.

TCP/IP tools:

The TCP/IP protocol suite contains a wide variety of free network tools and utilities that can be run from a command line at your PC or laptop, and these should be the first thing a network administrator reaches for upon receiving word that there are reported problems on the network.

Wireless network tools like TCP/IP include network troubleshooting commands which provide you the ability to view and test connectivity between hardware devices, display the path an Ethernet packet takes to a destination host system, release and request new IP addresses from a DHCP server, and even display or modify the local routing table.

We’ve put together a cheat sheet of useful TCP/IP Commands in a workbook format that will help you interpret the information being returned after network commands are issued, to determine what should be the next line of action in resolving a network issue.

Download and practice these free network tools and commands on a daily basis, and soon you will become more familiar with how wireless network tools communicate and be able to locate the source of a network-related issue before ever getting out of your chair.

Wi-Spy spectrum analyzer:

The biggest problem that affects most wireless home networks is RF interference from other household electronics and appliances. If consumers could only see the radio frequency spectrum through plain sight, they would have an entirely new perspective when designing and building a wireless home network.

Laptop based spectrum analyzers like Wi-Spy are astounding wireless network tools which give Wi-Fi users a way to view the 2.4GHz and 5GHz frequency bands as if they were tangible objects.

This provides a way to see all RF activity affecting your network and locate all the electronic devices in your Wi-Fi area that are driving your wireless network bonkers.

If you’ve ever heard what a dial-up modem sounds like when it’s connecting to a remote modem, performing its modulation handshake, you know the sound of electronic interference.

Think about how it would feel to tolerate such an annoying, distracting sound over a phone line while making calls to your friends or family, and you can begin to relate to how your wireless router senses the Wi-Fi environment when you refuse to replace an existing cordless phone or microwave that operates on the same 2.4GHz or 5GHz frequency band.

Crimps and cable testers:

Cat6 cable testers and cable crimpers are very important wireless network tools, because even though home networks are thought of as wireless, there will always be a need for Ethernet cables between network hardware.

Anytime an electronic signal passes from one medium to another there is a slight attenuation loss in signal quality, and Ethernet cable testers are a great way to check the condition of the signal as it passes through cable segments.

With wireless home networks in just about every household around the planet, many people choose to keep costs low by crimping their own Ethernet cables, which is also a great way to create custom lengths for better wire management and signal strength.

Using a cable crimper paired with a Cat5 cable tester is a great combination, as long as the maker of the cable has a little training on how to use them properly. We have created a tips and tricks guide to Making Ethernet Network Cables.

That’s it! Even if you’ve used these wireless network tools before, you might want to refresh yourself on some of the techniques that experienced network administrators use when performing repairs on wireless networks.