
Bandwidth is one of those technical terms that has been co-opted by the professional masses. For example: “I don’t have the bandwidth to take another project on at the moment.” Translation: I’m too busy.

Of course, if you tell an IT pro you don’t have enough bandwidth, they might think something else first: What’s wrong with the network?

Network bandwidth is the unseen hero of the digital organization – which is to say, virtually every organization. Simply put, bandwidth is a measure of how much data can travel over a network connection in a fixed period of time, such as one second. So, in an age where data is treated like gold, network bandwidth is critical to ensuring data can flow when and where it’s needed.

In this article, we'll take a closer look at network bandwidth – what it really is, and – critically – how to improve it if necessary.

What is Network Bandwidth?

First, let’s get a no-nonsense definition of network bandwidth to work with. Here’s one courtesy of Derek Ashmore, Application Transformation Principal at Asperitas:

“Network bandwidth is the maximum amount of data that can be transferred over a network connection within a specific period,” Ashmore says.

Ashmore notes that bandwidth is typically measured in bits or bytes per second. (A refresher: there are eight bits in a byte.)
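The bits-versus-bytes distinction trips people up constantly, because link speeds are quoted in bits per second while file sizes are quoted in bytes. A quick sketch of the conversion:

```python
# Convert a link speed quoted in megabits per second (Mbps)
# to an approximate transfer rate in megabytes per second (MB/s).
# There are eight bits in a byte, so divide by 8.

def mbps_to_mbytes_per_sec(mbps: float) -> float:
    """Return the equivalent rate in megabytes per second."""
    return mbps / 8

# A "100 Mbps" connection moves at most 12.5 MB of data per second.
print(mbps_to_mbytes_per_sec(100))  # 12.5
```

This is why a "100 megabit" connection never downloads a 100 MB file in one second: real-world overhead aside, the theoretical best is eight seconds.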

With a straightforward definition in hand, it becomes pretty easy for even non-techies to understand why bandwidth is important: If you don’t have enough of it, your applications and human users probably won’t be able to perform optimally in the digital age. That’s not good.

I asked Ashmore to share some actionable insights for improving network bandwidth. Let’s get right to it.

How to Improve Network Bandwidth

There’s a keyword in Ashmore’s definition of bandwidth above: maximum. Put another way, network bandwidth can be thought of as potential. 

In that sense, improving bandwidth – the maximum amount of data that could travel across a network connection in a given time frame – is actually kind of simple. You just need to be willing to pay for more. In the parlance of IT pros, this is sometimes referred to as “buying a bigger pipe.”

But Ashmore points out that upgrading your connectivity is only part of improving overall network performance. A related but different metric – network throughput – measures the actual amount of data that travels over a network connection in a specific time period. (Throughput is usually measured in bits per second.)

Network bandwidth is a fundamental component of network throughput – the latter can’t exceed the former, for starters – but Ashmore points out that it is only one component. Moreover, increasing bandwidth doesn’t automatically improve throughput.

So when we talk about improving network bandwidth, we really mean improving network throughput – the actual amount of data your network can handle effectively.

Ashmore shared a five-step framework for doing just that.


Step 1: Identify Bandwidth Bottlenecks

The first step in improving network bandwidth and throughput is to identify any issues that are lowering speed and performance. You can’t solve problems if you don’t know they exist, after all.

Implementing strong network monitoring practices and tools is vital here. (Tracking bandwidth usage is an important network monitoring metric, too.) Ashmore recommends monitoring for – and then mitigating – bottlenecks such as:

  • Overloaded routers and switches.
  • Latency due to long distances or inefficient routes.
  • Network Interface Card (NIC) limitations and congestion.
  • Underpowered servers. (Ashmore notes that this is not really a network limitation, per se.)
  • Poorly configured network settings.
  • Security devices such as firewalls getting overloaded.
  • Excessive broadcast or multicast traffic flooding the network.

For example, a manufacturing organization might detect latency between its centralized network and a critical application on a distant production floor. Bringing the network connection (and potentially other infrastructure such as compute and storage) closer to that application could lower latency and improve throughput. (This is one reason why edge computing architectures have increased in popularity.)

Step 2: Upgrade Network Hardware & Software

Like most other forms of computing, a network fundamentally relies on hardware to run. Even a virtualized network that’s managed largely via software still needs switches, routers, load balancers, and other infrastructure.

When that hardware gets old, it might run into performance issues. (The same principle applies broadly to software, especially if that software is no longer getting regular updates or upgrades.)

You can “increase network throughput by upgrading your network connection and network hardware,” Ashmore says.

On the connection front, that might come down to an upgrade – buying a bigger pipe – to increase that maximum potential throughput.

Ashmore also points to link aggregation technology as another strategy here. In this approach, you combine multiple network connections into a single logical network connection. This increases the available bandwidth between endpoints on the network and also improves fault tolerance and resiliency, since traffic on a failed connection can automatically switch over to another link in the group.
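The arithmetic of link aggregation is worth seeing in miniature. This toy model (not a real bonding driver) shows both benefits – summed capacity and graceful degradation when a member link fails:

```python
# Illustrative model of link aggregation: several physical links combine
# into one logical link whose capacity is the sum of the healthy members.
# Each link is a (speed_mbps, is_up) pair.

def aggregate_bandwidth_mbps(links: list) -> int:
    """Total capacity of a link group, counting only links that are up."""
    return sum(speed for speed, up in links if up)

# Four 1 Gbps links bonded together: 4 Gbps of logical capacity...
links = [(1000, True), (1000, True), (1000, True), (1000, True)]
print(aggregate_bandwidth_mbps(links))  # 4000

# ...and losing one member degrades capacity instead of dropping the link.
links[2] = (1000, False)
print(aggregate_bandwidth_mbps(links))  # 3000
```

In production this is handled by protocols such as LACP (IEEE 802.3ad) on switches and by bonding/teaming drivers on servers; the sketch above only captures the capacity math.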

Step 3: Reduce Distance

Latency is the scourge of network administrators and users everywhere, so it deserves its own step even though we touched on it in step one.

Here’s the basic principle: Keep the network request and the network response as close together as possible to minimize latency. Ashmore notes that latency’s detrimental effect on network bandwidth is a major reason why content delivery networks (CDNs) such as Akamai or Cloudflare are widely used.

“You can’t change the speed of light, but you can change the distance between the requester and the responder,” Ashmore says.

Here’s an example: 

“I can have 10 Gbps bandwidth between a user in Chicago and a site from which they download data,” Ashmore says. “If that site is in the Chicagoland area, I can get 6-8 Gbps through that connection, but I will probably get less than 1 Gbps if the download site is in India or Australia.”

Shrink the physical distance between request and response as much as possible.
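Ashmore's "speed of light" point can be made concrete with back-of-envelope math. Light in fiber travels at roughly two-thirds the speed of light in a vacuum, about 200,000 km/s, so the best-case round-trip time grows linearly with distance. The distances below are rough approximations:

```python
# You can't change the speed of light, but you can change the distance.
# Light in fiber covers roughly 200,000 km/s, i.e. about 200 km per
# millisecond, so round-trip time (RTT) scales linearly with distance.

FIBER_KM_PER_MS = 200  # ~200,000 km/s

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring routing and queuing."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(min_rtt_ms(50))     # Chicago to a local site (~50 km): 0.5 ms
print(min_rtt_ms(12000))  # Chicago to India (~12,000 km): 120.0 ms
```

Real RTTs are always worse than this floor, because routes are rarely straight lines and every hop adds queuing delay. That gap between a sub-millisecond local RTT and a 100+ ms intercontinental one is exactly why Ashmore's Chicago download example sees throughput collapse over distance.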

Step 4: Consider Using a Dedicated Connection

If you’re not already, Ashmore recommends using a dedicated connection – which means a physical link that only your organization uses – between your network and ISP or cloud provider instead of a virtual private network (VPN) connection.

VPNs are useful technologies and have become popular for cost, security, and other reasons. But a dedicated connection will likely beat a VPN in terms of throughput and other metrics.

“Even if a VPN connection has the same bandwidth as a dedicated connection, the dedicated connection will consistently outperform it,” Ashmore says. He notes three common reasons why this is the case:

  1. VPN connections are usually shared network connections, meaning you’re essentially competing for bandwidth with other users.
  2. Dedicated connections have better network routing paths.
  3. Dedicated connections have lower network jitter, which is essentially unwanted variance in packet arrival times. Minimizing jitter improves network throughput.
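Jitter is easy to quantify once you have packet arrival timestamps: look at the gaps between consecutive arrivals and measure how much they deviate from the average gap. A simple sketch using standard deviation (the timestamps are made up; real tools such as RTP receivers use a smoothed estimator):

```python
# Jitter is unwanted variance in packet arrival times. One simple metric:
# the standard deviation of the gaps between consecutive arrivals.

from statistics import pstdev

def jitter_ms(arrival_times_ms: list) -> float:
    """Population std deviation of inter-arrival gaps, in milliseconds."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return pstdev(gaps)

# Steady arrivals every 20 ms: zero jitter.
print(jitter_ms([0, 20, 40, 60, 80]))  # 0.0

# Irregular arrivals: noticeable jitter, which hurts throughput.
print(round(jitter_ms([0, 18, 45, 51, 90]), 1))
```

Lower jitter means protocols like TCP can keep their send windows full instead of stalling on out-of-order or late packets, which is part of why dedicated connections sustain higher throughput.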

Step 5: Reduce (or Segment) Other Traffic on Your Network

The old analogy comparing an IT network to an interstate highway still works: It’s a road that was designed for higher-speed driving, but if everyone wants to travel at the same time, you’re still going to have traffic jams.

The same is true for network bandwidth and throughput: Too many users (or requests) at the same time can slow things down.

“Other users on a network connection can significantly affect overall network performance,” Ashmore says. 

Creating different network connections for discrete user groups is one solution. Here’s a good basic example: Separate internal network access for employees and their business applications from external internet access for customers and your customer-facing applications.

“This ensures that employees doing network-intensive work (such as heavy downloads) do not affect their customers [and vice versa],” Ashmore says.

Tools to Help

When it comes to improving network bandwidth, the right tools can make all the difference. Network monitoring software is the first line of defense against performance bottlenecks. These tools provide real-time visibility into bandwidth usage, allowing you to track and analyze network traffic patterns, pinpoint bottlenecks, and quickly identify underperforming devices or connections.

Tools like SolarWinds, PRTG, and Nagios help monitor network health, alert you to critical issues like overloaded routers or high latency, and provide the data you need to optimize bandwidth allocation. These insights are crucial for proactively managing network performance before small issues escalate into major disruptions.

Beyond monitoring, tools that support network optimization—like traffic shaping software or WAN optimization solutions—can help enhance throughput. For example, using a combination of Quality of Service (QoS) settings and link aggregation can prioritize critical traffic and combine multiple network paths to increase available bandwidth for high-demand applications. In high-latency environments, deploying tools like CDNs (Content Delivery Networks) or using techniques such as data compression can reduce the impact of distance on data transmission speeds.
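One common building block behind traffic shaping is the token bucket: packets are admitted only when enough "tokens" (bytes of allowance) have accumulated, which smooths bursts down to a configured rate. Here's a toy model of the idea, not a real QoS engine:

```python
# Traffic shaping in miniature: a token-bucket limiter admits packets only
# when enough tokens (bytes of allowance) have accumulated, smoothing
# bursty traffic to a configured sustained rate.

class TokenBucket:
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec      # refill rate
        self.capacity = burst_bytes         # maximum burst size
        self.tokens = burst_bytes           # bucket starts full
        self.last = 0.0                     # time of last check

    def allow(self, packet_bytes: int, now: float) -> bool:
        """Admit the packet if the bucket holds enough tokens."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# 1,000 bytes/sec sustained, with a 1,500-byte burst allowance.
tb = TokenBucket(rate_bytes_per_sec=1000, burst_bytes=1500)
print(tb.allow(1500, now=0.0))  # True  -- initial burst fits
print(tb.allow(1500, now=0.5))  # False -- only 500 tokens have refilled
print(tb.allow(1500, now=2.0))  # True  -- bucket refilled to capacity
```

Linux's `tc` with the token bucket filter (tbf) applies this same principle at the kernel level, and QoS features on routers and switches build on variations of it.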

These tools ensure that your network runs as efficiently as possible, helping you squeeze every bit of performance out of your infrastructure.

5 More Network Bandwidth Tips (& Mistakes to Avoid)

Finally, Ashmore shared a mix of “do’s” and “don’ts” to keep in mind when working toward improving network bandwidth and optimizing throughput.

1. Understand your bandwidth needs. Like other computing requirements, it’s quite possible to overprovision or underprovision resources. Either end of that spectrum is problematic and can lead to unnecessary costs, poorer performance and reliability, and other issues. 

“Avoid underestimating bandwidth needs, but do not overestimate either,” Ashmore says. “Have a plan for how you will expand your current network configuration.”

2. Understand how traffic needs to be routed. Network architecture is a foundation for optimal performance, so take the time to ensure you fully understand its layout and the demands your users and applications will be placing on the network.

3. Design for redundancy and failover. You can buy the biggest pipe possible, but it won’t matter if connections routinely fail. Some of the strategies above, such as using a dedicated connection, can help ensure the network is highly available and resilient when incidents do occur. 

“Users expect 100% network uptime,” Ashmore says.

4. Closer is always better. Proximity matters – often a great deal – to performance (and its evil twin, latency). As Ashmore noted above, keep the requester and responder as close as possible.

5. Design with security in mind. Remember, the bad guys like bandwidth and throughput, too. This means thinking through – and then closely monitoring – network traffic ingress (or inflows) and egress (outflows) to the internet, potential choke points (such as firewalls), and how network segmentation will be implemented.

Final Thoughts

The network is the backbone of virtually any organization today. Ensuring optimal network bandwidth – and optimal network throughput – is a must to ensure critical applications are not only available but highly performant. Don’t take it for granted.

Subscribe to The CTO Club newsletter for more industry news and discussions.

Kevin Casey

Kevin Casey is an award-winning technology and business writer with deep expertise in digital media. He covers all things IT, with a particular interest in cloud computing, software development, security, careers, leadership, and culture. Kevin's stories have been mentioned in The New York Times, The Wall Street Journal, CIO Journal, and other publications. His InformationWeek.com story on ageism in the tech industry, "Are You Too Old For IT?," won an Azbee Award from the American Society of Business Publication Editors (ASBPE), and he's a former Community Choice honoree in the Small Business Influencer Awards. In the corporate world, he's worked for startups and Fortune 500 firms – as well as with their partners and customers – to develop content driven by business goals and customer needs. He can turn almost any subject matter into stories that connect with their intended audience, and has done so for companies like Red Hat, Verizon, New Relic, Puppet Labs, Intuit, American Express, HPE, Dell, and others. Kevin teaches writing at Duke University, where he is a Lecturing Fellow in the nationally recognized Thompson Writing Program.