What is Latency? Ways to Improve Network Latency

Latency affects network performance and security. Learn how to manage latency issues to ensure efficient and secure communications.

Author: SentinelOne
Updated: July 21, 2025

Latency is a critical factor affecting network performance and user experience. This guide explores the concept of latency, its causes, and its impact on application performance. Learn about the different types of latency, including network, processing, and queuing latency, and how they influence overall system efficiency.

Discover strategies for measuring and reducing latency to enhance user satisfaction and operational efficiency. Understanding latency is essential for optimizing network performance and ensuring seamless digital interactions.


Network Latency Definition and Measurement

Network latency is the time it takes data to pass from one point in a network to another. However, the details of how exactly this term is defined and how it is measured vary depending on the context.

Below are several latency definitions that can be useful in different situations. Here a client is defined as the computer requesting data, while a host is defined as the computer or server responding to the request:

  • Latency – The time it takes for data to pass from one point in a network to another (e.g., from a client to the host)
  • Round Trip Time (RTT) – The time it takes for a response to reach the client device after a request is sent to the host. This is nominally twice the latency, since the data must travel the path in both directions.
  • Ping – Measures the time it takes an echo request message to travel to a remote system and return with a response, typically in ms. Ping is also the name of a software utility — available for nearly any operating system — used to measure this statistic. While similar to RTT, ping refers specifically to an operation performed by a ping program. (A minimal RTT-measurement sketch follows this list.)
  • Lag – An informal term that often relates to, or includes, network latency, but can broadly be used for any system delay, such as the time it takes between a physical keystroke and a character appearing on screen.
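
As a concrete illustration of the RTT and ping definitions above, here is a minimal Python sketch (an addition, not part of the original definitions) that approximates round-trip latency by timing a TCP handshake. The target host and port are placeholder assumptions, and the measurement includes a little client-side processing overhead on top of pure network latency.

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the time (in ms) to complete a TCP handshake with host:port.

    This approximates round-trip latency: connect() does not return until
    the handshake packets have traveled to the host and back.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; only the elapsed time matters here
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    # Example target; substitute any host you are allowed to probe.
    print(f"RTT to example.com: {tcp_connect_rtt('example.com'):.1f} ms")
```

Unlike the ping utility, this times a TCP handshake rather than an ICMP echo, so it works even on networks that block ICMP, though the two numbers will differ slightly.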

What Causes Latency?

Network latency is caused by many factors, from distance to transmission medium to how many “hops” it takes to go from client to host and back. Equipment on the client and host side also adds latency. Consider the following factors that can contribute to latency:

  • Distance – Digital transmission takes place at speeds far beyond everyday human experience, but it is not instantaneous. As elaborated upon below, a trip halfway around the world (and back) adds roughly a tenth of a second to network response times.
  • Transmission Medium – Land-based fiber optic lines are the fastest connections available in terms of latency, while copper electrical connections tend to be somewhat slower.
  • Hops – As network data travels from point A to point B, it is generally routed (via “hops”) through different transmission lines. Each hop’s switching hardware adds a small delay, increasing latency. All things being equal, fewer hops mean lower latency. (A hop-counting sketch follows this list.)
  • Computing Hardware – The time it takes the client and host to process and send data, plus the efficiency of switching devices used to guide data to its destination, contribute to network lag.
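
To make the hop count tangible, the sketch below (an illustrative addition, not from the original article) shells out to the platform's trace tool (traceroute on Linux and macOS, tracert on Windows) and counts the numbered hop lines. It assumes the tool is installed, and the parsing is best-effort since output formats vary by version.

```python
import platform
import re
import subprocess

def count_hops(host: str) -> int:
    """Roughly count router hops to host using the system trace utility.

    Best-effort parsing: counts output lines that begin with a hop number.
    """
    if platform.system() == "Windows":
        cmd = ["tracert", "-d", host]      # -d: skip reverse DNS lookups
    else:
        cmd = ["traceroute", "-n", host]   # -n: numeric output, no DNS
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
    hop_lines = [line for line in result.stdout.splitlines()
                 if re.match(r"\s*\d+\s", line)]
    return len(hop_lines)

if __name__ == "__main__":
    print("Hops to example.com:", count_hops("example.com"))
```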

How Latency Affects Cybersecurity

Cyberattacks happen at the speed of the Internet. If system latency keeps defensive tools from reacting quickly enough, threats have more time and opportunity to do damage. In a broader sense, an enterprise’s institutional latency (i.e., how quickly systems and personnel can process the right information) affects the timeliness of strategic decisions and of threat mitigation that requires human intervention. Latency, and thus reaction time, can affect vulnerability to threats including:

  • Ransomware
  • Malware
  • Distributed-denial-of-service (DDoS)
  • Attacks on specific applications
  • Phishing campaigns
  • SQL injections
  • Advanced persistent threats (APTs)

How Distance and Transmission Medium Affect Minimum Latency (Example)

Consider that the speed of light in a vacuum is roughly 300,000 kilometers per second. Based on our current understanding of physics, this is the limit of how fast information can travel. Fiber optic cables, however, reduce the speed of light to “only” around 200,000 kilometers per second. Electrical cables are also very fast, but transmission tends to be slower than via fiber optic lines.

Suppose we are in New York City and would like to get data from a server in Tokyo, roughly 11,000 kilometers away. Dividing 11,000 kilometers (km) by 200,000 km/s works out to 0.055 s, or 55 ms. Doubling this to account for the return data’s journey gives 0.110 s, or 110 ms. That is therefore the absolute minimum time in which the data could make a full loop, and it doesn’t account for other (potentially significant) delays due to processing and switching.

If, however, we’re accessing data from Rochester, NY, roughly 400 km away, 400 km divided by 200,000 km/s is just 2 ms, or 4 ms for data to return. While an impressive reduction, this is only part of the story. Switching equipment and other factors will add significant lag time to transmission and should be minimized as much as possible.

How Can We Improve Network Latency (and Perceived Latency)?

Strict network latency, the time it takes data to reach a host from a client machine (or vice versa), isn’t always directly related to the network performance the end user perceives. Much depends on the application and even on what someone is doing within it.

For example, a 500 ms delay between interactions may matter little when passively streaming a movie, since one interacts with it relatively infrequently. On the other hand, the same half-second delay would be extremely annoying for videoconferencing and would make gaming largely unusable.

Below are several concepts that can be used to help improve latency, with thoughts on how they apply in different situations:

  • Physical Proximity – While you can’t always be close to your communication counterpart, all things being equal, the closer the better. Proximity minimizes delays due to distance as well as hops. For gaming, this may mean largely playing with people in your general area; for real-time control applications, it may mean choosing a cloud provider in the same town. For server applications, which must interact with a wide range of clients, it can mean strategically (and redundantly) positioning data storage in different regions. (See the region-selection sketch after this list.)
  • Updated Hardware – Whether acting as a client or host, the speed at which your device can respond to inputs and push data to the network will affect overall system latency. Keep your devices current and updated, including routers and switches, to avoid unneeded hardware latency.
  • Internal Network Connections – Computing devices are typically connected through an internal network (whether wireless or wired) before transmitting data on the open Internet. This internal network adds its own latency and should be considered for improvement. Wired (Ethernet) connections typically outperform wireless connections in terms of latency, but require more physical setup and impair portability.
  • External Network (Internet) Connection – While Internet connections are largely advertised in terms of maximum bandwidth, a better connection may improve actual, as well as perceived, latency. An upgrade rarely results in worse performance, but be sure to read the fine print and/or ask other technically minded users about their experiences.
  • Targeted Data Delivery – If the elements users interact with first (e.g., the content at the top of a webpage) are loaded first, the page can feel more responsive than it would if every element loaded at the same rate. Similarly, networks may prioritize videoconferencing and other time-sensitive traffic over less urgent tasks. While implementation details vary greatly by application, how available resources are allocated should be considered whenever possible.
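
To illustrate the physical-proximity point above, here is a hedged sketch (the regional hostnames are hypothetical) that reuses the TCP-handshake timing from earlier to probe a few candidate endpoints and prefer whichever answers fastest. Real services publish their own per-region endpoints and often handle this selection automatically via DNS or anycast routing.

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a TCP handshake to host:port, in milliseconds (see earlier sketch)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

# Hypothetical per-region endpoints for the same service.
CANDIDATES = ["us-east.example.com", "eu-west.example.com", "ap-northeast.example.com"]

def pick_closest(hosts):
    """Return the reachable host with the lowest measured handshake time."""
    measured = {}
    for host in hosts:
        try:
            measured[host] = tcp_connect_rtt(host)
        except OSError:
            pass  # skip unreachable candidates
    if not measured:
        raise RuntimeError("no candidate endpoint was reachable")
    return min(measured, key=measured.get)

if __name__ == "__main__":
    print("Lowest-latency endpoint:", pick_closest(CANDIDATES))
```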

Latency | An Important Challenge In Our Connected Age

As more and more devices become connected, having a fast, reliable, and low-latency connection is of paramount importance.

What counts as good latency? As low as possible. While latency can never be fully eliminated, keeping it low, while monitoring for problems and even opportunities to improve, should be a priority. This applies on both a business and a personal level, and is especially important for always-on IoT devices.


Conclusion

Latency in computer networking is the time it takes for data to travel from one place to another. Lower latency, typically measured in milliseconds, is better. Latency is affected by Internet connections, hardware, and the physical distance between nodes. It can also have a significant effect on cybersecurity: delays between an attack, its detection, and the reaction to it mean more time for malicious actors to damage a system.

FAQs

Is latency the same as bandwidth?

Bandwidth, the amount of data that can be transmitted over a period of time, often expressed in Mbps (megabits per second), is not the same as latency, which is typically expressed in ms. From a user-experience standpoint, however, a webpage that takes a long time to begin loading because of latency feels much like a webpage that takes a long time to load because of limited bandwidth.
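
A quick, illustrative calculation (the numbers are invented for the example) shows why the two feel similar in some situations and very different in others: the time to fetch a resource is roughly one round trip to request it plus the transfer time dictated by bandwidth.

```python
def fetch_time_ms(size_megabits: float, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Rough fetch time: one round trip to request, plus bandwidth-bound transfer."""
    transfer_ms = (size_megabits / bandwidth_mbps) * 1000
    return rtt_ms + transfer_ms

# Large download on a 100 Mbps link: latency barely matters.
print(fetch_time_ms(800, 100, rtt_ms=1))    # ~8001 ms
print(fetch_time_ms(800, 100, rtt_ms=100))  # ~8100 ms

# Tiny API call on the same link: latency dominates the experience.
print(fetch_time_ms(0.1, 100, rtt_ms=1))    # ~2 ms
print(fetch_time_ms(0.1, 100, rtt_ms=100))  # ~101 ms
```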

What is high latency?

High latency refers to any situation where the latency of a computing system is unacceptably long. What counts as high varies with context, but a lower value (typically in ms) is better than a higher one. For instance, if 50 ms of latency is acceptable or even good in a certain situation, 200 ms would most likely need to be improved.

Is low latency better than high latency?

Low latency (i.e., a small amount of time required for data to pass from one point in a network to another) is better than high latency in nearly any networking application.
