
Techie Broadband Terminology

If you are reading this, you are not here for surface level definitions.


You want to understand how your connection behaves under real conditions. You want clarity on what actually impacts performance, control and reliability.


This is a complete, structured and practical breakdown of Techie Broadband terminology. Every term is grouped, expanded and explained in a way that reflects how it behaves in real environments.


Core Network Performance

This section covers the raw mechanics of how your connection behaves under load, focusing on latency, jitter, packet loss, throughput, bandwidth and speed balance. It is where performance is either real or just advertised. For techies, this is the foundation because it defines responsiveness, consistency and how systems behave when pushed. Whether you are gaming, trading, developing or transferring data, this layer determines if your connection feels instant and reliable or slow and unpredictable.

Latency

Latency is the total round trip time it takes for a packet of data to travel from your device to a destination server and back. It is measured in milliseconds and directly impacts responsiveness across everything you do. Low latency means faster reactions in games, quicker API responses, faster SSH sessions and more accurate trade execution. What matters most is not just low latency, but stable latency that does not spike under load or during peak times.

Jitter

Jitter measures how much latency varies over time. Even if your average latency looks low, high jitter introduces inconsistency that disrupts real time systems. It causes voice calls to distort, streams to stutter and game movement to feel unpredictable. For tech setups, jitter is often more damaging than slightly higher latency because it introduces instability into systems that rely on timing precision.
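
Jitter is straightforward to measure yourself. The sketch below is a minimal example using only the Python standard library: it samples round trip times with TCP handshakes (a practical stand-in for ICMP ping, which needs raw socket privileges) and reports the average change between consecutive samples. The target host and port are placeholders.

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a TCP handshake as a rough stand-in for ICMP ping."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

samples = [tcp_rtt_ms("example.com") for _ in range(10)]
avg = statistics.mean(samples)
# Jitter here is the mean absolute difference between consecutive samples,
# similar in spirit to the RFC 3550 interarrival jitter estimate.
jitter = statistics.mean(abs(a - b) for a, b in zip(samples, samples[1:]))
print(f"avg latency: {avg:.1f} ms, jitter: {jitter:.1f} ms")
```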

Packet Loss

Packet loss occurs when data packets fail to reach their destination. This forces retransmissions or results in lost information entirely. In real world usage, even small levels of packet loss can break streams, corrupt file transfers, interrupt SSH sessions and cause IoT signals to fail. Reliable broadband should maintain near zero packet loss during sustained activity.

Throughput

Throughput is the actual usable data transfer rate you experience during real usage. It differs from advertised speed because it reflects real conditions such as congestion, routing efficiency and protocol overhead. High throughput ensures fast file transfers, smooth cloud sync and efficient data pipelines. What matters most is consistency, not just peak bursts.

Bandwidth

Bandwidth represents the maximum theoretical capacity of your connection. It defines how much data can be transmitted at once but does not guarantee performance. A high bandwidth connection with poor routing or congestion will still perform poorly. Bandwidth is the ceiling, while throughput is what you actually experience.

Upload Speed

Upload speed determines how quickly data leaves your network. It is critical for content creators, developers pushing code, cloud backups, IoT telemetry and live streaming. Poor upload performance creates bottlenecks that delay workflows and disrupt real time systems. Stable upload speed matters more than peak figures.

Download Speed

Download speed controls how quickly data is received by your device. It impacts software installs, dataset downloads and streaming. While important, it is often less critical than upload speed and latency in technical workflows. Consistency during peak times is more valuable than high headline numbers.

Symmetrical Speeds

Symmetrical speeds mean your upload and download speeds are equal. This is essential for modern workflows where data moves in both directions, such as cloud computing, development, remote work and streaming. Asymmetric connections often create hidden bottlenecks in advanced setups.

Peak Time Performance

Peak time performance refers to how your connection behaves when network demand is highest. Many providers degrade during these periods due to congestion. A strong connection maintains consistent latency, throughput and stability even when overall usage increases.

RTT (Round Trip Time)

RTT is the exact measured time it takes for a packet to travel from your device to a destination and back again, and it is the precise value that latency tools like ping report. While latency is often used as a general term, RTT is the actual metric behind it. For techies, RTT matters because it gives a clear and measurable view of responsiveness across different endpoints, helping identify whether delays are local, routing related or server side.

TTFB (Time To First Byte)

Time to First Byte measures how long it takes from initiating a request to receiving the first byte of data from a server. It combines latency, server processing time and routing efficiency into a single observable metric. For developers, traders and anyone interacting with APIs or web services, TTFB reveals how quickly systems begin responding, which is often more important than total transfer time.
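
A minimal way to observe TTFB yourself, assuming only the Python standard library; note the figure below includes TCP and TLS setup, because the connection is opened lazily with the first request, and the host is a placeholder.

```python
import http.client
import time

def ttfb_ms(host: str, path: str = "/") -> float:
    """Time from issuing a request to reading the first byte of the reply."""
    conn = http.client.HTTPSConnection(host, timeout=5)
    try:
        start = time.perf_counter()
        conn.request("GET", path)      # connection setup happens here too
        resp = conn.getresponse()      # returns once headers have arrived
        resp.read(1)                   # first byte of the body
        return (time.perf_counter() - start) * 1000
    finally:
        conn.close()

print(f"TTFB: {ttfb_ms('example.com'):.1f} ms")
```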

Bufferbloat

Bufferbloat is excessive packet queuing inside network devices that causes latency to spike under load. When buffers are too large or poorly managed, packets are delayed instead of dropped, leading to hundreds of milliseconds of added delay during uploads or downloads. For techies, bufferbloat is one of the most common causes of a connection that feels fine when idle but becomes unusable when busy.

Goodput

Goodput is the portion of throughput that represents actual useful data, excluding protocol overhead, retransmissions and packet headers. It reflects what your applications truly receive and use. A connection can show high throughput but lower goodput if inefficiencies or packet loss are present. For data heavy workflows, goodput is the real measure of efficiency.
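
A quick back-of-envelope sketch of the gap between line rate and goodput, assuming IPv4 and TCP without options and ignoring Ethernet framing, loss and retransmissions; real figures will be lower.

```python
# Rough goodput ceiling for a TCP transfer over a 1500 byte MTU link.
LINE_RATE_MBPS = 1000          # assumed line rate
MTU = 1500                     # IP packet size per frame
OVERHEAD = 20 + 20             # IPv4 header + TCP header, no options
payload_fraction = (MTU - OVERHEAD) / MTU
print(f"best-case goodput: {LINE_RATE_MBPS * payload_fraction:.0f} Mbps "
      f"({payload_fraction:.1%} of line rate, before loss or retransmits)")
# best-case goodput: 973 Mbps (97.3% of line rate, before loss or retransmits)
```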

Network Congestion

Network congestion occurs when demand exceeds available capacity, causing queues to build and performance to degrade. It leads to increased latency, jitter and packet loss as traffic competes for limited resources. Congestion can happen locally, within your ISP or across transit and peering links, and it is one of the primary reasons performance drops during peak time.

Burst Speed

Burst speed refers to short periods where a connection exceeds its sustained throughput capacity, often used in speed tests or initial transfers. While it can make a connection appear fast, it does not reflect real world performance over time. For techies, sustained throughput matters far more than temporary bursts, especially for large transfers or continuous workloads.

Sustained Throughput

Sustained throughput is the consistent data transfer rate maintained over time during real usage. Unlike burst speed, it reflects how your connection performs during long downloads, uploads or data streams. For developers, content creators and data engineers, sustained throughput is what determines whether workflows complete efficiently without slowdown.
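
A simple way to measure it, assuming only the Python standard library: average the rate over an entire large download rather than the opening seconds. The URL is a placeholder for any large test file.

```python
import time
import urllib.request

def sustained_throughput_mbps(url: str, chunk_size: int = 1 << 16) -> float:
    """Average transfer rate over the whole download, not the opening burst."""
    total = 0
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=60) as resp:
        while chunk := resp.read(chunk_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total * 8 / 1_000_000) / elapsed

print(f"{sustained_throughput_mbps('https://example.com/100MB.bin'):.1f} Mbps")
```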

Queue Depth

Queue depth refers to how many packets are waiting in buffers at any given time within network devices. Higher queue depth increases delay and contributes directly to bufferbloat. Monitoring queue depth helps identify whether a network is handling load efficiently or building up latency due to poor queue management.

Latency Under Load

Latency under load measures how much latency increases when your connection is actively being used. A strong connection maintains similar latency whether idle or busy, while a weak one shows significant spikes. This is one of the most important real world indicators of connection quality, especially for gaming, streaming and remote access.
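
A rough way to test this yourself, assuming the Python standard library and a placeholder URL for a large download: saturate the connection in a background thread while probing round trip times, then compare the idle and loaded figures. A well managed line shows a small gap; a bufferbloated one shows hundreds of milliseconds.

```python
import socket
import statistics
import threading
import time
import urllib.request

def rtt_ms(host: str = "example.com", port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=2):
        pass
    return (time.perf_counter() - start) * 1000

def saturate(url: str) -> None:
    # Pull a large file to keep the downlink busy while we probe.
    with urllib.request.urlopen(url, timeout=60) as resp:
        while resp.read(1 << 16):
            pass

idle = statistics.median(rtt_ms() for _ in range(5))
thread = threading.Thread(target=saturate,
                          args=("https://example.com/100MB.bin",), daemon=True)
thread.start()
time.sleep(1)                          # let the transfer ramp up
loaded = statistics.median(rtt_ms() for _ in range(5))
print(f"idle: {idle:.0f} ms, under load: {loaded:.0f} ms")
```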

Microbursts

Microbursts are extremely short spikes in traffic that can temporarily exceed network capacity, causing brief congestion, packet loss or jitter. They are often invisible in average metrics but can disrupt real time systems. For high performance environments, microbursts explain sudden inconsistencies that standard monitoring might miss.

Line Rate

Line rate is the maximum speed at which data can be transmitted over a physical connection, such as a fibre link or network interface. It represents the raw capability of the link before overheads are applied. Actual throughput and goodput will always be lower due to protocol and network conditions.

Oversubscription Ratio

Oversubscription ratio describes how many users share a given amount of network capacity within an ISP or network segment. Higher ratios increase the likelihood of congestion and degraded performance during peak times. For techies, this explains why some connections slow down when others are active.


Network Behaviour and Diagnostics

This section focuses on how your traffic moves and how you analyse it. It includes routing, traceroute, ping, congestion and traffic handling. This is where visibility matters. It allows you to see beyond speed tests and understand why performance changes, where delays occur and how your data flows across networks. For engineers and advanced users, this is the layer that explains problems rather than just exposing them.

Routing

Routing defines the path your data takes across networks to reach its destination. Poor routing introduces unnecessary hops, increasing latency and instability. Efficient routing keeps paths short and predictable, which is essential for gaming, trading and cloud workloads.

Traceroute

Traceroute is a diagnostic tool that shows each hop your data takes to reach a destination. It helps identify where latency increases or where packets are delayed. Network engineers use traceroute to diagnose routing inefficiencies and pinpoint bottlenecks.
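
The mechanism is simple enough to sketch: send probes with an increasing TTL and record which router reports each expiry. The minimal version below assumes a Unix-like system and root privileges for the raw ICMP socket; production tools handle many edge cases this sketch ignores.

```python
import socket

def traceroute(dest: str, max_hops: int = 30, port: int = 33434) -> None:
    """Classic UDP traceroute; needs root for the raw ICMP socket (Unix)."""
    dest_ip = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv.settimeout(2.0)
        recv.bind(("", port))
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send.sendto(b"", (dest_ip, port))   # probe expires at hop `ttl`
        try:
            _, addr = recv.recvfrom(512)    # ICMP time-exceeded reply
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_ip:
            break

traceroute("example.com")
```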

Ping

Ping is a simple command that measures latency and packet loss between your device and a target server. While basic, it is useful for quick checks of responsiveness. On its own, though, a single run reveals little about jitter or sustained performance issues, which only show up across repeated samples over time.

Network Congestion

Network congestion occurs when demand exceeds available bandwidth. This leads to slower speeds, increased latency and unstable connections. Congestion often appears during peak times or in networks with poor capacity planning.

Traffic Shaping

Traffic shaping is the prioritisation or limitation of certain types of network traffic. It can improve performance for specific applications or restrict others. When hidden, it creates unpredictable behaviour and reduces control.

Throttling

Throttling is the intentional slowing of certain types of traffic, often targeting streaming, VPN or high usage patterns. It creates sudden performance drops that are not related to your connection capacity.

Clean Network Behaviour

Clean network behaviour means your traffic is not altered, prioritised or restricted without your control. What you measure is what you get. This is essential for accurate testing, development and secure environments.

Path MTU

Path MTU refers to the maximum packet size that can travel across a network path without fragmentation. If packets exceed this size, they must be fragmented or dropped, which can introduce latency, packet loss or failed connections. For techies, incorrect MTU settings are a common hidden cause of inconsistent performance, especially when using VPNs or tunnelling protocols.
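
The arithmetic behind the problem is worth seeing once. The sketch below assumes IPv4 with plain TCP and uses the commonly cited 60 byte WireGuard encapsulation figure; your tunnel's overhead may differ.

```python
# How a tunnel eats into usable payload per packet.
MTU = 1500
IP_TCP_HEADERS = 40            # IPv4 (20) + TCP (20), no options
TUNNEL_OVERHEAD = 60           # assumed WireGuard/IPv4 figure; check yours

plain_mss = MTU - IP_TCP_HEADERS
tunnel_mss = MTU - TUNNEL_OVERHEAD - IP_TCP_HEADERS
print(f"TCP payload per packet: {plain_mss} bytes bare, "
      f"{tunnel_mss} bytes inside the tunnel")
# Packets built for the bare MTU no longer fit once encapsulated, which is
# why a too-high MTU on a VPN interface causes fragmentation or black holes.
```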

MTU Black Hole

An MTU black hole occurs when packets that exceed the allowed size are silently dropped instead of being fragmented or rejected properly. This results in connections that partially load, stall or fail entirely without obvious packet loss in basic tests. It is a classic issue in misconfigured networks and often difficult to diagnose without deeper analysis.

Packet Fragmentation

Packet fragmentation happens when data packets are split into smaller pieces to fit across network paths with lower MTU limits. While necessary in some cases, fragmentation increases overhead and can reduce performance. Excessive fragmentation can lead to inefficiencies, retransmissions and instability across applications.

Hop Count

Hop count is the number of network devices or routers a packet passes through on its journey to a destination. Each hop introduces potential delay and variation. Lower hop counts generally indicate more efficient routing paths, while higher counts often signal indirect or poorly optimised routes.

ICMP (Internet Control Message Protocol)

ICMP is the protocol used for diagnostic tools such as ping and traceroute. It allows devices to send error messages and operational information across networks. While essential for diagnostics, some networks limit or deprioritise ICMP traffic, which can make results appear worse or hide real issues.

TTL (Time To Live)

TTL defines how many hops a packet can take before being discarded. Each router reduces the TTL value by one, and when it reaches zero, the packet is dropped. This prevents infinite routing loops and is also used by traceroute to map network paths. TTL behaviour can reveal routing loops and inefficiencies.

Asymmetric Routing

Asymmetric routing occurs when the path your data takes to a destination differs from the path it takes on the return journey. This can introduce inconsistencies in latency, jitter and packet handling. It is common in complex networks and can make diagnostics more challenging because performance differs in each direction.

Route Flapping

Route flapping is when network routes change rapidly between different paths due to instability. This creates inconsistent latency, packet loss and jitter as traffic is constantly redirected. For techies, route flapping is a clear sign of upstream instability or poor network management.

BGP (Border Gateway Protocol)

BGP is the protocol that controls how routing decisions are made between large networks on the internet. It determines which paths traffic takes across ISPs, transit providers and peering connections. Poor BGP configuration or suboptimal path selection can lead to inefficient routing, increased latency and instability.

Anycast Routing

Anycast is a routing method where multiple servers share the same IP address, and traffic is directed to the nearest or best performing location. It is widely used by DNS providers and content networks like Cloudflare. While powerful, anycast behaviour depends heavily on routing quality and can produce unexpected paths.

Traffic Prioritisation (QoS)

Quality of Service (QoS) is a method of prioritising certain types of traffic over others. For example, voice or gaming traffic may be prioritised over bulk downloads. Proper QoS improves real time performance, but poorly configured QoS can introduce bottlenecks or unintended restrictions.

DPI (Deep Packet Inspection)

Deep Packet Inspection analyses the contents of packets rather than just their headers. It is often used for traffic management, security or throttling. While useful in controlled environments, DPI can introduce latency, reduce privacy and interfere with certain types of traffic.

Latency Spikes

Latency spikes are sudden increases in response time that occur intermittently rather than consistently. They are often caused by congestion, routing changes or bufferbloat. For techies, spikes are more disruptive than consistently higher latency because they break timing dependent systems.

Packet Reordering

Packet reordering occurs when packets arrive at their destination in a different order than they were sent. This can happen due to multiple routing paths or network congestion. It forces systems to reorder data before processing, introducing delay and reducing efficiency.

Network Path Stability

Network path stability refers to how consistently your traffic follows the same route over time. Stable paths result in predictable latency and performance, while unstable paths introduce variation, jitter and unexpected behaviour across applications.


Addressing and Connectivity

This section explains how devices connect, identify themselves and communicate across networks. It covers IP addressing, NAT, static and dynamic assignments and port access. These concepts are essential for anyone running services, building environments or managing remote access. It is the difference between simply connecting to the internet and being able to control how systems interact with it.

IP Address

An IP address is the unique identifier assigned to a device on a network, allowing systems to locate and communicate with each other across local and global connections, forming the foundation of all network communication and data exchange.

Static IP

A static IP address remains fixed and does not change over time, making it essential for hosting services, enabling remote access, running VPN endpoints and supporting any setup that requires a consistent and directly reachable network location.

Dynamic IP

A dynamic IP address is assigned automatically by the network and can change over time, making it suitable for general use but less reliable for hosting services or remote access, where consistent addressing is required for stable connectivity.

NAT (Network Address Translation)

Network Address Translation allows multiple devices within a local network to share a single public IP address by translating internal private addresses into one external address, improving address efficiency but limiting direct inbound connections and adding complexity to advanced network configurations.

Port Forwarding

Port forwarding allows external traffic to reach specific devices or services within your network. It is required for hosting applications, game servers and remote access systems.

Public IP Address

A public IP address is an address that is directly reachable from the internet. It allows inbound and outbound communication without intermediary translation layers beyond standard routing. For techies, a true public IP is essential for hosting services, remote access and maintaining full control over how traffic reaches your network.

Private IP Address

A private IP address is used within local networks and is not directly accessible from the internet. The reserved ranges are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. These addresses rely on NAT to communicate externally. They are efficient for internal networking but limit direct inbound connectivity.

CGNAT (Carrier Grade NAT)

CGNAT is an ISP level extension of NAT where multiple customers share a single public IPv4 address. It places your connection behind an additional translation layer that you do not control. This removes the ability to accept inbound connections directly and prevents standard port forwarding, making it a major limitation for hosting and remote access.

IPv4

IPv4 is the original Internet Protocol using a 32 bit address space. Due to limited available addresses, it relies heavily on NAT and CGNAT to support modern demand. While still widely used, it introduces complexity and limits direct connectivity.

IPv6

IPv6 is the modern Internet Protocol using a 128 bit address space, allowing every device to have a globally routable address. It removes the need for NAT, enabling direct connectivity and simpler network design. For techies, IPv6 restores control and scalability.

Dual Stack

Dual stack refers to running both IPv4 and IPv6 simultaneously on a network. Devices can communicate over either protocol depending on availability and destination support. This is the most common real world setup and allows compatibility with both modern and legacy systems.

Port

A port is a logical endpoint used to identify specific services or applications on a device. For example, web traffic typically uses port 80 or 443. Ports allow multiple services to run on a single IP address and are critical for routing traffic correctly within a system.

Open Ports

Open ports are ports that accept inbound connections from external sources. They allow services such as web servers, game servers or remote access tools to be reachable. While necessary for hosting, open ports must be controlled carefully to avoid unintended exposure.

Firewall

A firewall controls which traffic is allowed into or out of a network based on defined rules. It replaces the false sense of security provided by NAT by explicitly managing access. For techies, firewalls are essential for securing systems while maintaining full connectivity.

Subnet

A subnet is a segmented portion of a network that groups devices together under a specific address range. Subnetting allows better organisation, isolation and control of network traffic. It is widely used in both home labs and enterprise environments.

Subnet Mask

A subnet mask defines which portion of an IP address represents the network and which part represents the device. It determines how traffic is routed within and outside a subnet. Understanding subnet masks is key to designing and troubleshooting network structures.
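
The standard library's ipaddress module makes the network and host split concrete; the addresses below are arbitrary examples.

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)                                    # 255.255.255.0
print(net.num_addresses)                              # 256 (254 usable hosts)
print(ipaddress.ip_address("192.168.1.42") in net)    # True: same subnet
print(ipaddress.ip_address("192.168.2.42") in net)    # False: via the gateway
```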

Gateway

A gateway is the device that routes traffic from your local network to external networks, typically your router. It acts as the exit point for all outbound communication. If the gateway is misconfigured or overloaded, connectivity and performance are immediately affected.

DHCP (Dynamic Host Configuration Protocol)

DHCP is the protocol that automatically assigns IP addresses to devices on a network. It simplifies connectivity by removing the need for manual configuration. However, for advanced setups, static assignments or reservations are often preferred for consistency.

DHCP Reservation

A DHCP reservation assigns a fixed IP address to a specific device based on its MAC address while still using DHCP. It combines the flexibility of dynamic addressing with the stability of static IPs, making it useful for devices that need predictable addressing without manual setup.

MAC Address

A MAC address is a unique hardware identifier assigned to a network interface. It is used within local networks to identify devices at the data link layer. MAC addresses are often used for DHCP reservations, access control and network tracking.

DNS (Domain Name System)

DNS translates human readable domain names into IP addresses. Without it, you would need to remember numerical addresses for every service. DNS performance and reliability directly impact how quickly connections are established.
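
You can time your resolver directly with the standard library; repeat runs mostly measure the local cache, so the first lookup of an uncached name is the interesting one. The domain is a placeholder.

```python
import socket
import time

def resolve_ms(name: str) -> float:
    """Time one stub-resolver lookup (cache state dominates repeat runs)."""
    start = time.perf_counter()
    infos = socket.getaddrinfo(name, None)
    elapsed = (time.perf_counter() - start) * 1000
    addrs = sorted({info[4][0] for info in infos})
    print(f"{name}: {addrs} in {elapsed:.1f} ms")
    return elapsed

resolve_ms("example.com")
```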

DDNS (Dynamic DNS)

Dynamic DNS maps a changing IP address to a fixed domain name. It allows remote access to systems even when using a dynamic IP. For techies without a static IP, DDNS is a common workaround for maintaining accessibility.

Reverse DNS

Reverse DNS maps an IP address back to a domain name. It is often used in email systems, logging and security checks. Proper reverse DNS configuration improves trust and compatibility in certain network services.

Loopback Address

The loopback address, typically 127.0.0.1 in IPv4, allows a device to communicate with itself. It is used for local testing and internal services. While simple, it is essential for development and diagnostics.

Hairpin NAT (NAT Loopback)

Hairpin NAT allows devices within the same network to access a service using the public IP address instead of the internal address. Without it, internal access to hosted services can fail. It is important for consistent testing and access behaviour.

Port Range

A port range defines a group of ports rather than a single one, often used for applications that require multiple connections such as VoIP, gaming or FTP. Managing port ranges correctly is essential for ensuring full functionality of certain services.


Security and Privacy

This section focuses on protecting data, controlling traffic and maintaining visibility. It includes VPNs, DNS control, encryption protocols, firewalls and segmentation. For privacy focused users and cybersecurity professionals, this is where control is defined. A strong setup ensures your data moves securely without interference while still allowing full flexibility over how your network is configured and monitored.

VPN

A VPN creates an encrypted tunnel between your device and another network, protecting your data from interception, enabling secure remote access and allowing traffic to be routed through different locations for privacy, control and flexibility.

WireGuard

WireGuard is a modern VPN protocol designed for speed and simplicity, using efficient encryption and minimal overhead to deliver high performance connections that are widely used in setups where low latency and stability matter.

OpenVPN

OpenVPN is a mature and highly configurable VPN protocol that supports a wide range of encryption methods and network setups, making it a reliable choice for secure environments where flexibility and compatibility are required.

DNS

DNS translates human readable domain names into IP addresses, allowing devices to locate services across the internet, and its speed and reliability directly affect how quickly connections are established.

DNS over HTTPS

DNS over HTTPS encrypts DNS queries within standard HTTPS traffic, preventing interception or manipulation while improving privacy and maintaining compatibility with modern web infrastructure.
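
For a concrete feel, the sketch below queries Cloudflare's public resolver through its JSON API over HTTPS; other providers expose similar endpoints, and RFC 8484 defines the binary wire format most clients actually use.

```python
import json
import urllib.request

# An A-record lookup for example.com via DNS over HTTPS.
url = "https://cloudflare-dns.com/dns-query?name=example.com&type=A"
req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
with urllib.request.urlopen(req, timeout=5) as resp:
    answer = json.load(resp)
for record in answer.get("Answer", []):
    print(record["name"], record["data"])
```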

DNS over TLS

DNS over TLS encrypts DNS queries at the transport layer, ensuring secure name resolution by protecting requests from being intercepted or altered during transmission.

Custom DNS

Custom DNS allows you to select your own DNS provider instead of relying on default settings, giving you greater control over performance, filtering, privacy and how domain requests are resolved.

Firewall

A firewall controls incoming and outgoing network traffic based on defined rules, acting as a barrier that protects systems from unauthorised access while allowing legitimate communication.

VLAN

A VLAN separates network traffic into isolated segments within the same physical network, improving security and allowing more precise control over how devices communicate with each other.

Network Segmentation

Network segmentation divides a network into multiple controlled zones, reducing risk by limiting access between systems and improving performance by managing traffic more efficiently.

Packet Inspection

Packet inspection analyses network traffic at various levels to detect threats, monitor behaviour or enforce policies, making it a key component in both security and traffic management.

Encryption

Encryption is the process of converting data into a secure format that can only be read by authorised parties. It protects data in transit and at rest, ensuring that even if traffic is intercepted, it cannot be understood. For techies, strong encryption is the baseline for secure communication across all systems.

TLS (Transport Layer Security)

TLS is the standard protocol used to encrypt data between clients and servers, most commonly seen in HTTPS connections. It ensures confidentiality and integrity of data during transmission. Modern internet security depends heavily on properly implemented TLS across all services.

HTTPS

HTTPS is the secure version of HTTP, using TLS to encrypt communication between your device and web servers. It prevents interception, tampering and man in the middle attacks. For any secure environment, HTTPS is non-negotiable.

End to End Encryption

End to end encryption ensures that data is encrypted on the sender’s device and only decrypted on the recipient’s device. No intermediate system can access the content. It is essential for secure messaging, sensitive communications and privacy focused applications.

Zero Trust Networking

Zero trust is a security model where no device or connection is trusted by default, even within the local network. Every request must be verified and authenticated. For techies, this approach replaces traditional perimeter based security with continuous validation and control.

IDS (Intrusion Detection System)

An IDS monitors network traffic for suspicious activity and potential threats. It does not block traffic but alerts you to anomalies. It is used to detect attacks, unusual behaviour and security breaches in real time.

IPS (Intrusion Prevention System)

An IPS builds on IDS by actively blocking detected threats. It can stop malicious traffic, prevent exploits and enforce security policies automatically. For advanced setups, IPS provides proactive defence rather than just visibility.

NAT Traversal

NAT traversal refers to techniques that allow devices behind NAT or CGNAT to establish connections with external systems. It is used in peer to peer applications, VoIP and VPNs. While useful, it introduces complexity and can affect reliability.

Port Scanning

Port scanning is the process of checking which ports on a device are open and accessible. It is used for both legitimate diagnostics and malicious reconnaissance. Understanding port exposure is critical for securing hosted services.
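
A basic TCP connect scan fits in a few lines of standard library Python. Only point it at hosts you own or administer; the loopback address below keeps the example safely local.

```python
import socket

def scan(host: str, ports: range) -> list[int]:
    """TCP connect scan: a port is open if the handshake completes."""
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        if sock.connect_ex((host, port)) == 0:   # 0 means connected
            open_ports.append(port)
        sock.close()
    return open_ports

print(scan("127.0.0.1", range(1, 1025)))
```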

DDoS (Distributed Denial of Service)

A DDoS attack overwhelms a network or service with excessive traffic, making it unavailable to legitimate users. Protection mechanisms include filtering, rate limiting and upstream mitigation. For exposed systems, DDoS resilience is essential.

Rate Limiting

Rate limiting controls how many requests a system will accept over a period of time. It protects against abuse, brute force attacks and traffic spikes. It is commonly used in APIs, login systems and public facing services.
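
The token bucket is the classic implementation: tokens refill at a steady rate and each request spends one, so short bursts are absorbed while the long run average is capped. A minimal single-threaded sketch:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)    # 5 req/s, burst of 10
print([bucket.allow() for _ in range(12)])   # the last two are rejected
```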

ACL (Access Control List)

ACLs define rules that allow or deny traffic based on IP addresses, ports or protocols. They provide granular control over who can access specific resources. ACLs are a core component of network security and segmentation.

Authentication

Authentication verifies the identity of a user or device before granting access. It ensures that only authorised entities can connect to systems. Strong authentication is essential for securing remote access, APIs and internal services.

MFA (Multi Factor Authentication)

MFA adds an additional layer of security by requiring more than one form of verification, such as a password and a code or hardware key. It significantly reduces the risk of unauthorised access, even if credentials are compromised.

PKI (Public Key Infrastructure)

PKI is the framework used to manage digital certificates and encryption keys. It underpins secure communication protocols such as TLS. For techies, PKI is fundamental to building trusted and secure systems.

CA (Certificate Authority)

A Certificate Authority issues digital certificates that verify the identity of websites and services. These certificates enable secure HTTPS connections. Trust in online communication depends on recognised and properly managed CAs.

DNSSEC (Domain Name System Security Extensions)

DNSSEC adds cryptographic validation to DNS responses, ensuring that the results have not been tampered with. It protects against DNS spoofing and man in the middle attacks at the DNS level.

Traffic Filtering

Traffic filtering blocks or allows data based on defined criteria such as IP, protocol or content type. It is used to enforce security policies, restrict access and prevent malicious traffic from reaching systems.

DPI (Deep Packet Inspection)

Deep Packet Inspection analyses packet contents beyond headers to identify traffic types, enforce policies or detect threats. While powerful, it can impact privacy and performance depending on how it is implemented.

Data Leakage

Data leakage refers to the unintended exposure or transmission of sensitive information outside a secure environment. It can occur through misconfiguration, insecure protocols or compromised systems. Preventing leakage is critical in secure network design.


Development and Infrastructure

This section covers the tools and environments used to build, deploy and manage systems. It includes SSH, APIs, containers, virtual machines, cloud platforms and automation workflows. For developers, engineers and DevOps, this is where network performance directly impacts productivity. Every delay, interruption or inconsistency slows development cycles and affects how systems are tested and deployed.

SSH

SSH provides secure remote access to systems by creating an encrypted connection between devices, allowing developers, system administrators and engineers to manage servers, run commands and transfer data safely over a network.

API

An API allows different systems to communicate with each other by sending and receiving structured requests and responses, forming the foundation of modern applications, integrations and cloud based services.

Virtual Machine

A virtual machine is a fully isolated operating system running on top of physical hardware through virtualisation, requiring stable performance and resources to operate effectively alongside other systems.

Container

A container is a lightweight environment that isolates applications and their dependencies, enabling consistent development, testing and deployment across different systems without the overhead of full virtual machines.

Docker

Docker is a platform used to build, manage and run containers, streamlining development and deployment workflows by ensuring applications behave consistently across environments.

Home Lab

A home lab is a personal setup used to experiment with systems, networks and services, providing a controlled environment for testing, learning and running self managed infrastructure.

Self Hosted Services

Self hosted services run on infrastructure you control rather than external providers, requiring stable connectivity, proper configuration and open access to ensure reliability and accessibility.

Cloud Platforms

Cloud platforms such as AWS, Azure and Google Cloud provide scalable computing, storage and networking resources, with performance heavily influenced by latency, routing and overall network stability.

Edge Computing

Edge computing processes data closer to where it is generated rather than relying on centralised systems, reducing latency and improving responsiveness for time sensitive applications.

Deployment

Deployment is the process of releasing code or systems into a live environment, requiring reliable connectivity and consistent performance to ensure updates are delivered and applied without failure.

Continuous Integration

Continuous integration automates the process of building and testing code as changes are made, relying on stable connections to ensure pipelines run consistently and without interruption.

Continuous Deployment

Continuous deployment automates the release of updates into production environments, requiring consistent performance, uptime and reliable connectivity to maintain system stability.

Kubernetes

Kubernetes is an orchestration platform used to manage containers at scale. It handles deployment, scaling, networking and failover of containerised applications. For techies, Kubernetes introduces additional network complexity where latency, routing and service discovery directly impact system reliability.

Orchestration

Orchestration refers to the automated management of infrastructure and services, including containers, virtual machines and workflows. It ensures systems are deployed, scaled and maintained without manual intervention. Network stability is critical, as orchestration systems rely on constant communication between nodes.

IaC (Infrastructure as Code)

Infrastructure as Code allows you to define and manage infrastructure using code rather than manual configuration. Tools like Terraform and Ansible rely on consistent connectivity to provision and maintain environments. Any instability in the network can break deployments or leave systems in inconsistent states.

Terraform

Terraform is a widely used Infrastructure as Code tool that automates the provisioning of cloud and network resources. It interacts with APIs to build infrastructure, making latency, API responsiveness and connection stability key to reliable execution.

Ansible

Ansible is an automation tool used for configuration management and deployment. It relies on SSH and network connectivity to push changes across systems. High latency or packet loss can slow or interrupt execution across multiple nodes.

Load Balancer

A load balancer distributes traffic across multiple servers or services to improve performance and reliability. It ensures no single system becomes a bottleneck. Network latency and routing quality directly affect how efficiently traffic is distributed.

Reverse Proxy

A reverse proxy sits in front of backend services and routes incoming traffic to the correct destination. It is commonly used for security, load balancing and SSL termination. Its effectiveness depends on low latency and stable connectivity between layers.

Service Discovery

Service discovery allows systems to automatically find and communicate with each other within dynamic environments such as containers or microservices. It relies on DNS, APIs or internal registries. Poor network performance can break communication between services.

Microservices

Microservices architecture splits applications into smaller independent services that communicate over the network. This increases reliance on low latency, stable routing and minimal packet loss, as every interaction depends on network performance.

Git

Git is a version control system used to manage code changes across distributed environments. Operations such as cloning, pulling and pushing repositories depend on consistent throughput and low latency, especially for large projects.

Repository Hosting

Repository hosting platforms such as GitHub, GitLab and Bitbucket store and manage code remotely. Their performance depends on routing, latency and throughput, especially during large transfers or CI workflows.

Build Pipeline

A build pipeline automates the process of compiling, testing and packaging code. It often pulls dependencies, runs tests and pushes artefacts across networks. Any instability introduces delays or failures in the pipeline.

Artefact Storage

Artefact storage holds built files such as binaries, container images or packages. Systems like Docker registries or package repositories rely on consistent throughput and low packet loss for reliable transfers.

CDN (Content Delivery Network)

A CDN distributes content across multiple locations to reduce latency and improve delivery speed. It relies on efficient routing and peering to ensure users connect to the nearest node. For developers, CDN performance directly affects application responsiveness.

Object Storage

Object storage systems such as AWS S3 provide scalable storage accessed over the network. Performance depends on latency, throughput and routing quality, especially for large data transfers and backups.

Block Storage

Block storage provides raw storage volumes that can be attached to systems like virtual machines. It behaves like a physical disk but is accessed over the network, making latency and consistency critical for performance.

NFS (Network File System)

NFS allows systems to access files over a network as if they were local. It is commonly used in shared environments. High latency or packet loss can severely degrade performance and reliability.

SMB (Server Message Block)

SMB is a protocol used for file sharing across networks, commonly in Windows environments. Like NFS, it depends heavily on stable latency and throughput for acceptable performance.

Webhooks

Webhooks allow systems to send real time data to other services when events occur. They rely on inbound connectivity and stable routing. Any delay or packet loss can disrupt event driven workflows.
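
A receiver can be very small. The sketch below uses only the Python standard library, listens on an assumed port 8080 and skips the signature verification any production endpoint should add.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print("received event:", event)
        # Acknowledge quickly; senders typically retry on non-2xx or timeout.
        self.send_response(204)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```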

API Rate Limits

API rate limits restrict how many requests can be made within a certain time. They protect services from overload but can impact automation and integrations. Efficient network behaviour helps maximise usable throughput within these limits.

Dev Environment

A development environment is where code is written and tested before deployment. It often relies on remote services, containers and cloud resources. Network performance directly affects build speed, testing and debugging.

Staging Environment

A staging environment replicates production systems for testing before release. It requires production like network behaviour, including latency and routing, to ensure accurate validation of performance.

Production Environment

The production environment is where live systems operate and serve users. Network performance here directly affects uptime, responsiveness and user experience. Any instability has immediate real world impact.


Data and Workflows

This section focuses on how data moves across systems and how workflows depend on stable connectivity. It includes pipelines, syncing, repositories and continuous integration processes. For data engineers, researchers and developers, this is critical because performance must remain consistent over time, not just in short bursts. Any instability here disrupts entire workflows rather than individual tasks.

Data Pipeline

A data pipeline is the continuous flow of data between systems for processing, storage or analysis, relying on stable throughput, low latency and high reliability to ensure data moves consistently without interruption or loss.

Cloud Sync

Cloud sync keeps files and data consistent across multiple devices by continuously uploading and downloading changes, requiring stable connectivity and balanced performance to avoid delays, conflicts or incomplete updates.

Version Control

Version control tracks and manages changes to code over time, allowing multiple users to collaborate, with reliable connectivity ensuring updates, commits and merges are synchronised without errors or delays.

Repository

A repository stores code, files and project history in a central location, typically accessed through platforms such as GitHub or GitLab, and depends on stable network performance for efficient access, cloning and updates.

Data Transfer

Data transfer refers to the movement of data between systems, whether across local networks or the internet. It includes uploads, downloads and bidirectional flows. For techies, the key factors are sustained throughput, low packet loss and stable latency, as any inconsistency slows or corrupts transfers.

Data Integrity

Data integrity ensures that data arrives exactly as it was sent, without corruption or loss. It is maintained through checksums, hashing and validation processes. In unstable networks, packet loss or retransmissions can impact integrity, making this critical for backups, datasets and deployments.

Checksum

A checksum is a calculated value used to verify the integrity of data during transfer. After data is received, the checksum is recalculated and compared to ensure nothing has changed. It is widely used in file transfers, downloads and data pipelines to detect corruption.
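
Verifying one in Python takes a few lines; the file name and expected value below are placeholders for whatever the publisher provides.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large files never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

expected = "..."   # value published alongside the download
actual = sha256_of("dataset.tar.gz")
print("intact" if actual == expected else "corrupted or tampered")
```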

Data Replication

Data replication involves copying data across multiple systems or locations to ensure availability and redundancy. It requires consistent network performance to keep replicas in sync. High latency or packet loss can delay replication and create inconsistencies.

Batch Processing

Batch processing handles large volumes of data in scheduled chunks rather than real time. It relies on sustained throughput and stable connectivity over extended periods. Network instability can delay completion or cause failures in long running jobs.

Stream Processing

Stream processing handles data in real time as it is generated. It requires low latency, minimal jitter and near zero packet loss to maintain continuous flow. Any disruption immediately impacts downstream systems.

ETL (Extract, Transform, Load)

ETL is a process where data is extracted from sources, transformed into a usable format and loaded into a destination system. It is a core part of data engineering workflows and depends heavily on consistent network performance for reliable execution.

Data Ingestion

Data ingestion is the process of collecting and importing data into a system for processing or storage. It can be real time or batch based. Reliable connectivity ensures that ingestion pipelines do not drop or delay incoming data.

Data Egress

Data egress refers to data leaving a network or system, often to external services or users. It is critical in cloud environments where large datasets are transferred out. Throughput, cost and stability all play a role in how efficiently egress is handled.

Sync Conflicts

Sync conflicts occur when multiple versions of the same data are modified simultaneously across systems. They often arise in distributed environments with inconsistent connectivity. Resolving conflicts requires version control logic and reliable syncing behaviour.

FTP (File Transfer Protocol) & SFTP (SSH File Transfer Protocol)

FTP and SFTP are protocols used to transfer files between systems. Despite the similar name, SFTP is not FTP with encryption bolted on; it is a separate protocol that runs over SSH, giving encrypted transfers and authentication. Their performance depends on throughput, latency and connection stability, especially for large files.

Rsync

Rsync is a tool used to synchronise files between systems efficiently by only transferring changes. It reduces bandwidth usage but still depends on stable connectivity to complete operations reliably.

Data Compression

Data compression reduces the size of data before transfer to improve efficiency. It can increase effective throughput but adds processing overhead. For large transfers, compression can significantly improve performance over limited bandwidth.

Data Chunking

Data chunking splits large data into smaller pieces for transfer. This improves reliability, as smaller chunks can be retransmitted individually if needed. It is commonly used in uploads, streaming and distributed systems.
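
A minimal sketch of the idea, with a stand-in uploader so it runs as-is; the chunk size and file name are arbitrary.

```python
def upload_chunk(index: int, chunk: bytes) -> None:
    print(f"chunk {index}: {len(chunk)} bytes")   # stand-in for a real upload

def iter_chunks(path: str, chunk_size: int = 4 * 1024 * 1024):
    """Yield (index, bytes) pieces; each can be sent and retried on its own."""
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(chunk_size):
            yield index, chunk
            index += 1

for index, chunk in iter_chunks("backup.tar"):
    upload_chunk(index, chunk)   # a failure here costs one chunk, not the file
```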

Retry Logic

Retry logic is the automatic reattempting of failed operations such as API calls or data transfers. It is essential in unstable networks to ensure workflows complete successfully despite intermittent failures.
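
The usual pattern is exponential backoff with random jitter, so that many clients do not retry in lockstep. A minimal sketch, assuming the failures worth retrying surface as OSError:

```python
import random
import time

def with_retries(operation, attempts: int = 5, base_delay: float = 0.5):
    """Retry with exponential backoff plus jitter to spread out retries."""
    for attempt in range(attempts):
        try:
            return operation()
        except OSError:                  # network-style failures
            if attempt == attempts - 1:
                raise                    # out of attempts; surface the error
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# usage: with_retries(lambda: urllib.request.urlopen(url).read())
```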

Idempotency

Idempotency ensures that repeating the same operation produces the same result without duplication or error. It is critical in distributed systems where retries may occur due to network instability.
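
A common implementation is an idempotency key checked before applying an operation. The sketch below keeps keys in memory for clarity; a real system would persist them.

```python
processed: set[str] = set()

def apply_once(key: str, operation) -> None:
    """Run `operation` at most once per key, so retries cannot double-apply."""
    if key in processed:
        return                      # duplicate delivery; safely ignored
    operation()
    processed.add(key)

apply_once("order-1234-charge", lambda: print("charged"))
apply_once("order-1234-charge", lambda: print("charged"))   # no second charge
```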

Queue System

A queue system manages tasks or data in an ordered sequence, allowing systems to process workloads asynchronously. It helps smooth out spikes in demand but relies on stable connectivity between producers and consumers.

Message Broker

A message broker handles communication between systems by passing messages reliably, often used in event driven architectures. Examples include Kafka and RabbitMQ. Network performance directly affects message delivery and system responsiveness.

Data Backfill

Data backfill is the process of reprocessing or filling in missing data after a failure or gap in a pipeline. It often involves large data transfers and depends on stable throughput to complete efficiently.

Workflow Orchestration

Workflow orchestration coordinates multiple tasks or processes across systems to ensure they run in the correct order. It relies on consistent connectivity between components, as failures in communication can break entire workflows.


Real Time Systems and Media

This section covers environments where timing must be exact and consistent. It includes streaming, bitrate, frame delivery, buffering and real time data. For gamers, streamers and traders, this is where performance is felt instantly. Small fluctuations create visible issues, making consistency more important than peak speed.

Bitrate

Bitrate is the amount of data transmitted per second in a stream, directly affecting quality, with higher bitrate delivering better visual or audio output but requiring stable upload performance to maintain consistency.
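
A quick feasibility check, using an assumed sustained upload figure and a rule-of-thumb 60 percent budget so bitrate spikes and other traffic do not starve the encoder:

```python
upload_mbps = 20          # measured sustained upload, assumed figure
video_kbps = 6000         # encoder target
audio_kbps = 160
required_mbps = (video_kbps + audio_kbps) / 1000
budget_mbps = upload_mbps * 0.6      # rule of thumb, not a specification
print(f"need {required_mbps:.1f} Mbps, budget {budget_mbps:.1f} Mbps: "
      f"{'ok' if required_mbps <= budget_mbps else 'reduce bitrate'}")
```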

Frame Drop

Frame drop occurs when video frames fail to be delivered during streaming, usually caused by unstable upload, packet loss or network congestion, resulting in visible stutter and reduced playback quality.

Buffering

Buffering is the delay that occurs while data is preloaded before playback continues, typically caused by inconsistent throughput or congestion, and is a clear sign that the connection cannot sustain the required data flow.

Real Time Data

Real time data refers to information that is transmitted and updated instantly as events occur, requiring low latency, minimal jitter and reliable delivery to ensure accuracy across systems such as trading, IoT and live applications.

Slippage

Slippage is the difference between the expected and actual execution price in trading, often caused by latency, network delays or timing inconsistencies that affect how quickly orders are processed.

Market Data Feed

Market data feeds deliver live financial information such as prices and order updates, requiring low latency, stable routing and consistent connectivity to ensure data arrives accurately and without delay.

Latency Sensitivity

Latency sensitivity refers to how dependent a system is on low delay to function correctly. Real time applications such as gaming, trading and voice communication require extremely low latency to remain responsive. Even small increases can cause noticeable degradation in performance and user experience.

Jitter Buffer

A jitter buffer temporarily stores incoming packets to smooth out timing variations caused by jitter. It is commonly used in voice and video systems. While it improves stability, it introduces additional delay, creating a trade off between smooth playback and latency.

Tick Rate

Tick rate is how often a system updates its state per second, commonly used in gaming and trading systems. A higher tick rate requires more frequent data updates and places greater demand on low latency and stable packet delivery.

Input Lag

Input lag is the delay between a user action and the system response. It is influenced by latency, processing time and network conditions. In real time environments, input lag directly impacts responsiveness and accuracy.

Frame Timing

Frame timing refers to the consistency of time between delivered frames in a video or rendering system. Even if frame rate is high, inconsistent timing creates visible stutter and poor experience.

Adaptive Bitrate Streaming

Adaptive bitrate streaming automatically adjusts video quality based on current network conditions. It attempts to maintain playback by lowering bitrate during instability. While useful, frequent changes indicate inconsistent throughput.

Packet Timing

Packet timing refers to how consistently packets arrive at their destination. Real time systems depend on predictable timing. Variations lead to jitter, delays and degraded performance.

Latency Spikes

Latency spikes are sudden increases in delay that occur intermittently. They are more disruptive than consistently higher latency because they break timing dependent systems such as voice, gaming and trading.

RTP (Real-time Transport Protocol)

RTP is a protocol used to deliver audio and video over networks in real time. It prioritises timely delivery over perfect accuracy, meaning lost packets are not retransmitted. This makes low packet loss and stable latency critical.

WebRTC

WebRTC is a technology that enables real time communication directly between browsers and devices. It is used for video calls, streaming and peer to peer communication. Its performance depends heavily on latency, jitter and NAT traversal.

Keyframe Interval

The keyframe interval defines how often a full video frame is sent in a stream. Shorter intervals improve recovery from packet loss but increase bandwidth usage. Longer intervals are more efficient but can worsen visible issues during instability.

Encoding Latency

Encoding latency is the time taken to compress and prepare data for transmission, especially in video and audio streaming. It adds to total delay and must be minimised in real time systems.

Decoding Latency

Decoding latency is the time required to process incoming data into usable output, such as video playback. Combined with encoding and network delay, it contributes to overall system responsiveness.

Synchronisation Drift

Synchronisation drift occurs when audio, video or data streams fall out of sync over time. It is often caused by inconsistent packet timing or jitter. In real time systems, drift leads to noticeable misalignment and degraded experience.

Live Stream Latency

Live stream latency is the total delay between capturing content and delivering it to viewers. It includes encoding, network transmission and buffering. Lower latency is critical for interactive streams and real time engagement.

Packet Loss Concealment

Packet loss concealment is a technique used to mask the effects of lost packets in audio or video streams. It attempts to fill gaps using interpolation or previous data. While it reduces disruption, it cannot fully replace missing information.

QoE (Quality of Experience)

Quality of Experience measures how users perceive performance in real time systems. It combines factors such as latency, jitter, buffering and visual quality. For techies, QoE is the real world outcome of all underlying network metrics.


Research and Remote Access

This section focuses on long running tasks, remote systems and continuous access to resources. It includes remote desktop, high performance computing, large data transfers and persistent sessions. For researchers, academics and remote professionals, this is where stability matters most. Interruptions here do not just slow work, they break it entirely.

High Performance Computing

High performance computing provides access to powerful remote systems designed to handle complex calculations and large scale workloads, requiring stable connectivity and low latency to ensure tasks run efficiently without interruption.

Remote Desktop

Remote desktop allows you to access and control another computer over a network as if you were physically present, with performance heavily dependent on low latency and minimal jitter to maintain responsiveness and usability.

Large Dataset Transfer

Large dataset transfer involves moving significant volumes of data between systems, requiring consistent throughput, low packet loss and stable connections to ensure transfers complete reliably without corruption or delay.

Over The Air Updates

Over the air updates enable devices to receive software updates remotely without physical access, relying on reliable connectivity and stable data transfer to ensure updates are delivered and applied without failure.

Remote Access

Remote access is the ability to connect to and control systems from a different network or location. It underpins workflows such as server management, development and research. Stable latency and reliable connectivity are essential to maintain consistent control and avoid session drops.

Persistent Connection

A persistent connection remains active over time without needing to reconnect for each request. It is critical for long running tasks, remote sessions and continuous data streams. Network instability can break persistence, forcing reconnections and disrupting workflows.

Session Timeout

Session timeout defines how long a connection remains active without activity before it is closed. In unstable networks, unintended timeouts can occur due to dropped packets or latency spikes, interrupting remote work and requiring reauthentication.

SSH Session

An SSH session is an active remote connection to a system using the SSH protocol. It allows command line access and control. Packet loss, latency spikes or disconnections can interrupt sessions and terminate running processes if not managed properly.

​

Mosh (Mobile Shell)

​

Mosh is an alternative to SSH designed for unstable networks. It maintains sessions even when IP addresses change or connections drop temporarily. It is particularly useful for mobile or inconsistent connections where traditional SSH would fail.

​

Remote File Access

​

Remote file access allows users to open, edit and transfer files on remote systems as if they were local. It depends heavily on latency and throughput, with poor performance causing delays, timeouts or failed operations.

​

Network Drive

​

A network drive is a remote storage location mounted as if it were a local drive. It is commonly used in research and enterprise environments. Performance depends on consistent latency and throughput, especially for large files.

​

Distributed Computing

​

Distributed computing spreads workloads across multiple systems or nodes. It relies on constant communication between nodes, making low latency and stable connectivity critical for synchronisation and performance.

​

Cluster Computing

​

Cluster computing groups multiple machines to work together as a single system. It is widely used in research and HPC environments. Network performance directly impacts task distribution, synchronisation and overall efficiency.

​

Job Scheduling

​

Job scheduling manages how tasks are queued and executed across systems, especially in HPC and research environments. It depends on reliable communication between scheduler and compute nodes. Network issues can delay or fail job execution.

​

Checkpointing

​

Checkpointing saves the current state of a process so it can resume from that point if interrupted. It is critical for long running computations and research workloads. Reliable data transfer ensures checkpoints are stored correctly without corruption.
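
A minimal sketch of checkpointing in Python using only the standard library; the file name and workload are placeholders. Writing to a temporary file and renaming it makes the save atomic, so an interruption mid-write cannot corrupt the last good checkpoint.

```python
import os
import pickle

CHECKPOINT = "progress.pkl"  # placeholder path

state = {"step": 0, "total": 0}
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        state = pickle.load(f)  # resume from the last saved state

for step in range(state["step"], 1_000_000):
    state["total"] += step
    state["step"] = step + 1
    if step % 10_000 == 0:
        # Write to a temp file, then rename: os.replace is atomic, so a
        # crash mid-write never corrupts the existing checkpoint.
        with open(CHECKPOINT + ".tmp", "wb") as f:
            pickle.dump(state, f)
        os.replace(CHECKPOINT + ".tmp", CHECKPOINT)
```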

​

Session Persistence

​

Session persistence ensures that a user remains connected to the same system or node across requests. It is important for maintaining state in remote environments. Without it, sessions can reset or lose context during instability.

​

Remote Execution

​

Remote execution allows commands or scripts to run on systems located elsewhere. It is commonly used in automation, research and cloud workflows. It depends on stable connectivity to ensure commands are delivered and results returned correctly.

​

Data Staging

​

Data staging involves preparing and transferring data to a system before processing begins. It is commonly used in research and HPC workflows. Consistent throughput is required to ensure large datasets are available when needed.

​

WAN (Wide Area Network)

​

A WAN connects systems across large geographic areas, such as between offices, data centres or cloud regions. Remote access performance is heavily influenced by WAN latency, routing and congestion.

​

Latency Tolerance

​

Latency tolerance refers to how much delay a remote system or workflow can handle before performance is affected. Some tasks such as file transfers can tolerate higher latency, while interactive sessions like remote desktop cannot.

​

Keep Alive

​

Keep alive mechanisms send periodic signals to maintain an active connection and prevent timeouts. They are essential for long running sessions and remote access, especially on networks that may drop idle connections.
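
A sketch of enabling OS-level TCP keep-alive on a socket in Python. The TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT names shown are Linux-specific; other platforms expose equivalents under different constants.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)     # enable keep-alive
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before the OS drops it
sock.connect(("example.com", 443))
```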

​

Reconnection Handling

​

Reconnection handling defines how systems recover from dropped connections. Strong implementations allow sessions or tasks to resume without failure, while weak ones require manual restart. This is critical in unstable network conditions.
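
A minimal sketch of one common pattern, exponential backoff with jitter; connect here is a hypothetical callable standing in for whatever session or link you re-establish.

```python
import random
import time

def connect_with_retry(connect, max_attempts=8):
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except OSError as exc:
            if attempt == max_attempts:
                raise
            # Jitter spreads retries out so many clients do not
            # hammer a recovering server in lockstep
            sleep_for = delay + random.uniform(0, delay)
            print(f"attempt {attempt} failed ({exc}), retrying in {sleep_for:.1f}s")
            time.sleep(sleep_for)
            delay = min(delay * 2, 60)  # cap the backoff
```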


IoT and Communication Protocols

This section explains how connected devices communicate across networks using lightweight and specialised protocols. It includes MQTT, CoAP, HTTP and secure transport layers. For IoT developers and engineers, this is where reliability and low latency are essential because systems depend on accurate, real time data exchange between devices and platforms.

​

MQTT (Message Queuing Telemetry Transport)

​

MQTT is a lightweight messaging protocol designed for IoT devices. It uses a publish subscribe model to enable efficient communication with low bandwidth usage, and it relies on low latency and reliable delivery to keep data flowing consistently between devices and systems.
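
A hedged sketch using the third-party paho-mqtt library (version 1.x callback style); the broker address and topic are placeholders. Subscribing inside on_connect means the subscription is restored automatically after any reconnect.

```python
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("sensors/temperature")  # re-subscribe on every (re)connect

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)
client.loop_forever()  # handles reconnects and keep-alive pings
```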

​

CoAP (Constrained Application Protocol)

​

CoAP is a lightweight protocol designed for constrained devices with limited power and resources. It runs over UDP and follows a request response model similar to HTTP, keeping overhead low while maintaining simplicity in low power IoT environments.

​

HTTP (Hypertext Transfer Protocol)

​

HTTP is the standard protocol used for web communication. It allows clients and servers to exchange requests and responses, forming the foundation of how data is transferred across the web.

​

HTTPS (Hypertext Transfer Protocol Secure)

​

HTTPS is the secure version of HTTP, using encryption to protect data in transit, ensuring that communication between devices and servers remains private and cannot be easily intercepted or altered.

​

TLS (Transport Layer Security)

​

TLS is the encryption protocol used to secure communications across networks, providing confidentiality and integrity for data by encrypting connections between systems and preventing unauthorised access.

​

AMQP (Advanced Message Queuing Protocol)

​

AMQP is a messaging protocol designed for reliable communication between systems. It supports message queuing, routing and guaranteed delivery. In IoT and distributed systems, it is used where reliability and structured messaging are more important than minimal overhead.

​

WebSockets

​

WebSockets provide a persistent, bidirectional communication channel between client and server. Unlike HTTP, which is request based, WebSockets allow real time data exchange with minimal delay. They are widely used in dashboards, IoT control systems and live data applications.
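
A minimal sketch using the third-party websockets package; the URL and the subscribe message are placeholders. After the initial handshake, either side can send at any time over the same connection.

```python
import asyncio
import websockets

async def main():
    async with websockets.connect("wss://example.com/live") as ws:
        await ws.send("subscribe:metrics")  # client pushes a message up
        async for message in ws:            # server pushes down at any time
            print("received:", message)

asyncio.run(main())
```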

​

gRPC

​

gRPC is a high performance remote procedure call framework built on HTTP/2. It uses a binary data format (Protocol Buffers) and supports real time streaming between services. It is commonly used in microservices and IoT backends where efficiency and low latency are critical.

​

UDP (User Datagram Protocol)

​

UDP is a connectionless protocol that sends data without guaranteeing delivery. It prioritises speed and low latency over reliability. Many IoT and real time systems use UDP where occasional data loss is acceptable but delay is not.
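
A minimal sketch of fire-and-forget UDP telemetry using Python's standard socket module; the host, port and payload are placeholders. Note there is no connection setup and no acknowledgement.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b'{"sensor": 7, "temp": 21.4}', ("telemetry.example.com", 9000))
# No acknowledgement is expected; if the datagram is lost, it is simply gone.
sock.close()
```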

​

TCP (Transmission Control Protocol)

​

TCP is a connection oriented protocol that ensures reliable delivery of data through retransmissions and ordering. It is used where accuracy and completeness are critical, but it introduces additional latency compared to UDP.

​

QoS Levels (MQTT)

​

MQTT supports three Quality of Service levels that define how messages are delivered: QoS 0 (at most once), QoS 1 (at least once) and QoS 2 (exactly once). Higher QoS improves reliability but increases overhead and latency.
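
A sketch of the three levels with paho-mqtt; the broker address, topics and payloads are placeholders. The qos parameter on each publish selects the delivery guarantee for that message.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)
client.loop_start()  # background thread handles the acknowledgement handshakes

client.publish("sensors/temp", "21.4", qos=0)          # at most once: fire and forget
client.publish("alerts/smoke", "on", qos=1)            # at least once: may duplicate
info = client.publish("billing/event", "tick", qos=2)  # exactly once: most overhead
info.wait_for_publish()                                # block until the QoS 2 flow completes

client.loop_stop()
client.disconnect()
```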

​

Publish Subscribe Model

​

The publish subscribe model allows devices to send messages to a broker without needing to know the recipient. Subscribers receive messages based on topics. This model is widely used in IoT because it decouples systems and improves scalability.

​

Broker

​

A broker is the central component in messaging systems such as MQTT or AMQP. It receives messages from publishers and distributes them to subscribers. Broker performance and reliability directly impact the entire communication system.

​

Topic

​

A topic is a structured label used to organise messages in publish subscribe systems. Devices publish data to specific topics, and subscribers listen to those topics. Clear topic design is essential for scalable and efficient communication.

​

Keep Alive (MQTT)

​

Keep alive in MQTT ensures that a connection between a device and broker remains active by sending periodic signals. If these signals stop, the connection is assumed lost. It is critical for maintaining stable device communication.

​

Last Will Message

​

The last will message in MQTT is a predefined message sent by the broker if a device disconnects unexpectedly. It allows systems to detect failures and respond immediately, improving reliability in distributed environments.
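
A hedged paho-mqtt sketch; the client id, topics and broker address are placeholders. The will must be registered before connecting, and pairing it with a retained "online" message means subscribers always see the device's current status.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-42")

# Registered before connecting: the broker publishes this on the device's
# behalf if the connection dies without a clean disconnect.
client.will_set("devices/sensor-42/status", payload="offline", qos=1, retain=True)

client.connect("broker.example.com", 1883, keepalive=30)
client.loop_start()
client.publish("devices/sensor-42/status", "online", qos=1, retain=True)
```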

​

Edge Device

​

An edge device is a physical device that collects, processes or transmits data at the edge of the network. These devices often operate under limited resources and depend on efficient, low latency communication protocols.

​

Gateway

​

A gateway connects IoT devices to broader networks or cloud platforms. It aggregates data, manages communication and often translates between protocols. Gateway performance directly affects latency and reliability.

​

Device Provisioning

​

Device provisioning is the process of securely registering and configuring devices within a network. It ensures that devices can authenticate, communicate and operate correctly. Reliable connectivity is essential during provisioning.

​

Telemetry

​

Telemetry is the continuous transmission of data from devices to a central system. It is used for monitoring, analytics and control. Stable connectivity ensures accurate and timely data collection.

​

Command and Control

​

Command and control refers to sending instructions from a central system to devices. It requires low latency and reliable delivery to ensure actions are executed correctly and on time.

​

Lightweight Protocol

​

Lightweight protocols are designed for devices with limited processing power, memory and bandwidth. They minimise overhead while maintaining functionality. MQTT and CoAP are examples used in constrained environments.

​

Connection State

​

Connection state refers to whether a communication session is active, inactive or disrupted. In IoT systems, maintaining connection state is critical for ensuring devices remain reachable and responsive.

​

Heartbeat Signal

​

A heartbeat signal is a periodic message sent by a device to confirm it is still active. It helps detect failures or disconnections quickly. Missing heartbeats indicate potential issues in connectivity or device health.

​

Message Queue

​

A message queue stores messages temporarily until they are processed by a system. It helps manage load and ensures data is not lost during spikes or interruptions. In IoT, queues improve reliability and scalability.
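
A minimal sketch with Python's standard library queue: the producer can burst while the consumer drains at its own pace, and nothing is lost in between.

```python
import queue
import threading

q = queue.Queue(maxsize=100)

def consumer():
    while True:
        msg = q.get()          # blocks until a message is available
        print("processed", msg)
        q.task_done()

threading.Thread(target=consumer, daemon=True).start()

for i in range(10):            # a burst of messages arrives at once
    q.put({"reading": i})
q.join()                       # wait until the queue is fully drained
```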


Authentication and Access

This section focuses on how users and systems verify identity and maintain secure access. It includes methods like two factor authentication and secure session handling. For all advanced setups, this layer ensures that access remains protected without introducing friction or instability into the connection, balancing security with usability.

​

Two Factor Authentication

​

Two factor authentication adds a verification step during login by requiring a second form of identification alongside a password, such as a code or device confirmation. It significantly improves security by reducing the risk of unauthorised access even if credentials are compromised.
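
A minimal sketch of the time-based one-time password (TOTP) mechanism behind most authenticator apps, using the third-party pyotp library; the enrolment flow around it is omitted.

```python
import pyotp

# The secret is shared once at enrolment, usually via a QR code
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()         # 6-digit code that rotates every 30 seconds
print(totp.verify(code))  # True while the code is inside its validity window
```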

​

Authentication

​

Authentication is the process of verifying the identity of a user, device or system before granting access. It ensures that only authorised entities can connect to resources. In technical environments, authentication underpins everything from SSH access to API communication.

​

Authorisation

​

Authorisation determines what an authenticated user or system is allowed to do. It defines permissions such as read, write or execute access. Strong authorisation ensures that even valid users cannot access or modify resources beyond their intended scope.

​

Access Control

​

Access control is the framework used to manage who can access systems and resources. It combines authentication and authorisation with defined policies. For techies, it is essential for securing infrastructure while maintaining operational flexibility.

​

RBAC (Role Based Access Control)

​

RBAC assigns permissions based on roles rather than individual users. For example, developers, admins and users each have defined access levels. This simplifies management and reduces the risk of misconfigured permissions in complex systems.
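
A minimal sketch of the idea in Python; the roles and actions are placeholders. Permissions attach to roles, and a user is checked through their role rather than individually.

```python
ROLE_PERMISSIONS = {
    "admin":     {"read", "write", "deploy"},
    "developer": {"read", "write"},
    "viewer":    {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles get an empty permission set, so access is denied by default
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("developer", "deploy"))  # False
print(is_allowed("admin", "deploy"))      # True
```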

​

ABAC (Attribute Based Access Control)

​

ABAC grants access based on attributes such as user identity, location, device or time of request. It provides more granular and dynamic control compared to role based models. It is commonly used in advanced and cloud environments.

​

SSO (Single Sign On)

​

SSO allows users to access multiple systems with one set of credentials. It reduces login friction while maintaining security through centralised authentication. For large environments, SSO simplifies identity management and improves usability.

​

Identity Provider (IdP)

​

An identity provider is a system that manages user identities and authentication. It verifies credentials and issues tokens for access to services. Examples include systems used in SSO and cloud authentication setups.

​

OAuth

​

OAuth is an authorisation framework that allows applications to access resources on behalf of a user without exposing credentials. It is widely used in APIs and integrations. It enables secure delegated access between systems.

​

OpenID Connect (OIDC)

​

OIDC is an identity layer built on top of OAuth that provides authentication in addition to authorisation. It allows systems to verify user identity and obtain profile information securely.

​

API Key

​

An API key is a simple authentication method used to identify and authorise requests to an API. While easy to use, it must be protected carefully as it often provides direct access to services.

​

Token Based Authentication

​

Token based authentication uses issued tokens instead of credentials for ongoing access. After initial login, a token is used for subsequent requests. This improves security and reduces repeated credential exposure.
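
A sketch of the flow with the requests library; the URLs, credentials and the access_token field name are placeholders and will differ per API.

```python
import requests

# Log in once with credentials and receive a token in return
resp = requests.post(
    "https://api.example.com/login",
    json={"username": "dev", "password": "..."},
    timeout=10,
)
token = resp.json()["access_token"]  # field name is an assumption

# Every later request carries the token instead of the credentials
headers = {"Authorization": f"Bearer {token}"}
projects = requests.get("https://api.example.com/projects",
                        headers=headers, timeout=10)
```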

​

JWT (JSON Web Token)

​

A JWT is a compact token format used to securely transmit authentication and authorisation data between systems. It is widely used in APIs and web applications for session management.
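
A minimal sketch of issuing and verifying a JWT with a shared secret, using the third-party PyJWT package; the claims and secret are placeholders.

```python
import jwt  # the PyJWT package

SECRET = "change-me"  # placeholder signing key

# Issue: claims are signed, not encrypted, so anyone can read them
# but only the key holder can produce a valid signature
token = jwt.encode({"sub": "user-123", "role": "developer"},
                   SECRET, algorithm="HS256")

# Verify: decode() raises if the signature or algorithm does not match
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])  # "user-123"
```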

​

Session

​

A session represents an active authenticated interaction between a user and a system. It maintains state across multiple requests. Stable network behaviour ensures sessions remain active without unexpected termination.

​

Session Management

​

Session management controls how sessions are created, maintained and terminated. It includes timeout handling, renewal and invalidation. Poor session management can lead to security risks or disrupted access.

​

Session Token

​

A session token is a temporary credential used to maintain an authenticated session. It replaces the need to send login details repeatedly. Protecting session tokens is critical for preventing unauthorised access.

​

Session Expiry

​

Session expiry defines how long a session remains valid before requiring reauthentication. It balances security and usability. In unstable networks, premature expiry can disrupt workflows.

​

Authentication Timeout

​

Authentication timeout defines how long a system waits for a login process to complete. Network latency or instability can cause timeouts, especially in remote or multi step authentication flows.

​

Credential

​

A credential is any piece of information used to verify identity, such as a password, key or token. Strong credential management is essential for securing access across systems.

​

Password Policy

​

A password policy defines rules for creating and managing passwords, including complexity, length and rotation. It reduces the risk of compromised accounts but must be balanced with usability.

​

SSH Key Authentication

​

SSH key authentication uses cryptographic key pairs instead of passwords for secure access. It provides stronger security and is widely used in development and infrastructure environments.

​

Public Key and Private Key

​

Public and private keys are used in cryptographic systems for authentication and encryption. The public key is shared, while the private key remains secure. Together, they enable secure communication and access control.

​

Access Token

​

An access token is a credential that grants permission to access a resource for a limited time. It is commonly used in APIs and OAuth based systems to control access without exposing credentials.

​

Refresh Token

​

A refresh token is used to obtain a new access token without requiring the user to reauthenticate. It allows long lived sessions while maintaining security through short lived access tokens.

​

Privilege Escalation

​

Privilege escalation occurs when a user or system gains higher access rights than intended. It is a critical security risk and must be prevented through strict access control and monitoring.

​

Least Privilege

​

Least privilege is a security principle where users and systems are given only the access they need and nothing more. It reduces risk and limits the impact of compromised accounts.


Why This Techie Broadband Terminology Page Matters

Most broadband explanations stop at speed.

​

This goes deeper.

​

If you understand these terms, you understand how your connection behaves under real conditions. You can diagnose problems, optimise setups and avoid limitations.

​

That is what Techie Broadband is built for.

​

Not just faster connections.

​

Better behaved ones.
