Stephen Odogwu

Network Performance

I have been exploring computer networking for a while now, all in a bid to become a more well-rounded programmer. One topic that has fascinated me is network performance.

When we talk about performance in software, the goal is almost always to find ways to monitor, measure and optimize it.

What is Network Performance?

Network performance is the quality of service a network delivers, taking into account the speed and efficiency of data transfer.

There are a few metrics we take into account when measuring network performance, and understanding them helps us maximize it.

Measures of Network Performance

There are three fundamental metrics for measuring network performance, namely:

  • Bandwidth
  • Throughput
  • Latency

We will now take a look at the meaning of each of these metrics.

Bandwidth

Bandwidth can be likened to the frequency of something occurring; as we know, frequency is a measure of occurrences per unit time. In networking, bandwidth is the maximum number of bits that can be transmitted per second. In other words, the bandwidth of a device or link is its capacity to transmit data per unit time.

Some of the units used to quantify bandwidth are as follows:

| Description | Unit | Abbreviation |
| --- | --- | --- |
| Bits transferred per second | Bits per second | bps |
| Thousand bits transferred per second | Kilobits per second | Kbps |
| Million bits transferred per second | Megabits per second | Mbps |
| Billion bits transferred per second | Gigabits per second | Gbps |
| Trillion bits transferred per second | Terabits per second | Tbps |
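
To make the unit relationships concrete, here is a small illustrative Python sketch (the function name is my own, not from any particular library) that converts a raw bit rate into the most readable unit from the table above:

```python
# Illustrative helper: convert a raw bits-per-second value
# into a human-readable bandwidth string.
def format_bandwidth(bits_per_second: float) -> str:
    units = [("Tbps", 1e12), ("Gbps", 1e9), ("Mbps", 1e6), ("Kbps", 1e3)]
    for name, factor in units:
        if bits_per_second >= factor:
            return f"{bits_per_second / factor:.2f} {name}"
    return f"{bits_per_second:.0f} bps"

print(format_bandwidth(15_000_000))     # 15.00 Mbps
print(format_bandwidth(2_500_000_000))  # 2.50 Gbps
```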

Throughput

Throughput is the amount of data actually transferred over a connection in a given period of time, taking into account any delays that occur along the way.

Throughput is very important for measuring network performance because it reflects reality: it accounts for the different factors that come into play during transmission.

Let us now see an analogy:

Suppose there is a factory with the capacity to produce 5000 bowls of ice cream per day if its machine runs non-stop. From time to time, however, the machine breaks down, the power goes out, or the machine is switched off to cool. Because of this, the factory only produces 2000 bowls per day.

The breakdown:
The bandwidth = 5000 bowls/day
The throughput = 2000 bowls/day

So the capacity is 5000 bowls per day, but in reality the output is 2000 bowls per day.
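
As a quick sanity check on the analogy, the ratio of throughput to bandwidth (often called utilization) can be computed directly. This tiny sketch just uses the analogy's numbers:

```python
# Toy numbers from the ice-cream analogy: capacity (bandwidth) vs.
# what is actually delivered (throughput).
bandwidth = 5000   # bowls/day the factory could produce at full capacity
throughput = 2000  # bowls/day actually produced after downtime

utilization = throughput / bandwidth
print(f"Utilization: {utilization:.0%}")  # 40%
```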

Throughput can be measured in bits per second (bps), data packets per second (pps) or data packets per time slot. We will now take another example to illustrate throughput.

Example:
A network with a bandwidth of 15 Mbps can pass an average of 6000 frames per minute, with each frame carrying an average of 5000 bits. What is the throughput of this network?

From the above question we can deduce that:
1 frame carries an average of 5000 bits
The number of bits for 6000 frames will be:
6000 * 5000 = 30,000,000 bits
So we have 30,000,000 bits passing through the network in 1 minute
As we know:
1 minute = 60 seconds
Throughput = 30,000,000 / 60
Throughput = 500,000 bps
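
The same calculation in code, reproducing the worked example above (the variable names are just illustrative):

```python
# Reproduce the worked example: 6000 frames per minute,
# 5000 bits per frame, over a 60-second window.
frames_per_minute = 6000
bits_per_frame = 5000
seconds = 60

total_bits = frames_per_minute * bits_per_frame   # 30,000,000 bits
throughput_bps = total_bits / seconds             # 500,000 bps
print(f"Throughput: {throughput_bps:,.0f} bps "
      f"({throughput_bps / 1e6:.1f} Mbps)")
```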

Factors Affecting Throughput
Several factors affect the throughput of a network. Let us take a look at them.

  • Bandwidth: Higher bandwidth allows for greater throughput, because more data can be transmitted in a given period of time.

  • Latency: The time it takes data to reach its destination. Higher latency reduces throughput (a rough illustration follows this list).

  • Congestion: Congestion reduces throughput because it causes delays and packet loss.

  • Equipment performance: The performance of network equipment, such as routers, switches and network interface cards, affects throughput. Higher-performance equipment can generally handle larger volumes of traffic more efficiently, leading to higher throughput.
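
To see why higher latency reduces throughput, here is a rough back-of-the-envelope sketch using the window/round-trip-time bound that applies to window-based protocols such as TCP. The 64 KB window size is just an example assumption, not something from this article:

```python
# Back-of-the-envelope: for a window-based protocol, throughput is
# bounded by roughly window_size / round_trip_time, so higher latency
# means lower maximum throughput.
window_bits = 64 * 1024 * 8  # a 64 KB window, expressed in bits

for rtt_ms in (10, 50, 200):
    max_throughput_bps = window_bits / (rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most {max_throughput_bps / 1e6:.1f} Mbps")
```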

Latency

Latency is the time it takes a packet of data to travel across the network. This definition is often stated loosely, so it helps to break latency down into two types: one-way latency and round-trip latency.

  • One-way latency: The time it takes a packet of data to travel from source to destination, for example from client to server.

  • Round-trip latency: The time it takes a packet of data to travel from source to destination and back again, for example from client to server and back to the client. The sketch below approximates this with a simple timed connection.
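
A rough sketch of measuring round-trip latency: time how long a TCP connection takes to open. This only captures the TCP handshake, so it is an approximation of the true round-trip time, and the host and port are just examples:

```python
# Rough estimate of round-trip latency: time a TCP connection setup.
import socket
import time

def estimate_rtt(host: str = "example.com", port: int = 80) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # the handshake completing is what we are timing
    return (time.perf_counter() - start) * 1000  # milliseconds

print(f"Approximate round-trip latency: {estimate_rtt():.1f} ms")
```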

Conclusion

Performance monitoring is crucial when dealing with software and hardware. Paying attention to performance can help maximize the output of our services by giving us signals on where and how to optimize them.
