Why Office Wi-Fi Breaks and How to Fix It

You can buy the fastest internet circuit in town and still end up with choppy Teams calls, buffering screenshares and irate guests. That’s because the number one cause of “slow internet” in London offices isn’t the ISP—it’s an under-engineered wireless LAN. Buildings in the capital mix dense concrete and steel with glass partitions, foil-backed walls, multiple tenants and a jungle of neighbouring SSIDs. If your Wi-Fi wasn’t designed for that reality, it will creak the moment the office fills up.
This guide cuts through the noise with a practical, engineer-led approach you can use to either stabilise what you’ve got or scope a clean rebuild. It’s vendor-neutral, London-specific and written from the perspective of people who have to make networks behave in the real world.
Symptom vs cause: what “slow Wi-Fi” really means
What you notice
- Video and voice wobble mid-meeting
- Uploads stall, downloads spike then crawl
- Guest Wi-Fi works until Friday at 4pm, then dies
- Handheld scanners and conferencing bars drop their connection when people move
What’s usually going on underneath
- Airtime exhaustion: Too many clients crammed onto too few radios, often on over-wide channels that collide with neighbours
- Sticky clients: Devices cling to distant APs at 6–12 Mbps because minimum data rates are too low
- Co-channel interference (CCI): Loud APs blasting through walls so multiple rooms fight for the same channel
- Noisy spectrum: Microwaves, rogue extenders, wireless mics and lift motors making clean signals rare
- Wired bottlenecks: Starved PoE, under-spec switching or messy cabinets that generate “Wi-Fi problems” which aren’t Wi-Fi at all
Fixing symptoms on a bad design is like changing tyres on a bent rim. You need a measurable target and a build that meets it.
Define success in numbers (not brand names)
Set outcomes first; everything else hangs off them:
- Coverage: ≥ -67 dBm at the seating plane in work areas; ≥ -65 dBm where real-time apps are common
- SNR: ≥ 25 dB sustained in busy hours
- Capacity: Design for concurrent devices (1.5–2× seats in meeting rooms to account for phones and tablets)
- Latency & jitter: Target <50 ms latency and <30 ms jitter under real meeting loads
- Roaming: Sub-150 ms hand-off between adjacent cells for voice and scanners
- Security: WPA3-Enterprise/802.1X where estates allow; guest isolation and least-privilege ACLs as standard
Write these into your brief and acceptance tests. If a supplier can’t show how they’ll hit them, that’s your red flag.
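Those targets are concrete enough to script. Below is a minimal sketch of how a survey result could be checked against them; the sample data, field names and `seat_passes`/`room_capacity` helpers are illustrative, not part of any standard tool.

```python
# Hypothetical acceptance check: compare per-seat survey samples against
# the numeric targets above. Field names and sample values are made up.

SEAT_TARGETS = {
    "rssi_dbm": -67,  # >= -67 dBm at the seating plane (-65 for real-time areas)
    "snr_db": 25,     # >= 25 dB sustained in busy hours
}

def seat_passes(sample: dict, realtime_area: bool = False) -> bool:
    """Return True if one seat-level measurement meets the brief."""
    rssi_floor = -65 if realtime_area else SEAT_TARGETS["rssi_dbm"]
    return (sample["rssi_dbm"] >= rssi_floor
            and sample["snr_db"] >= SEAT_TARGETS["snr_db"])

def room_capacity(seats: int, factor: float = 1.75) -> int:
    """Concurrent-device budget: 1.5-2x seats to cover phones and tablets."""
    return round(seats * factor)

samples = [
    {"seat": "B1", "rssi_dbm": -61, "snr_db": 31},
    {"seat": "B2", "rssi_dbm": -70, "snr_db": 22},  # misses both targets
]
failures = [s["seat"] for s in samples if not seat_passes(s)]
print(failures)            # seats that need a re-survey or a design change
print(room_capacity(12))   # 12-seat boardroom -> plan for ~21 devices
```

A checklist like this is also a ready-made acceptance test: run it against the post-install survey export and the pass/fail list writes itself.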
Survey and design: evidence beats guesswork
- Predictive model on scaled plans: Use floor plans with accurate materials (concrete cores, foil-backed plasterboard, glass). Model AP placement, antenna patterns and channel widths. The heatmap is a hypothesis, not the truth.
- On-site RF survey: Measure noise floors and neighbour occupancy at busy times. Catalogue interferers: consumer extenders in neighbouring suites, wireless mics, microwave ovens and HDMI senders.
- Capacity planning where people actually sit: Boardrooms, training spaces, reception, canteens. Design for those peaks, not empty corridors.
- Roaming pathing: Overlap cells to enable quick hand-offs, but cap transmit power to avoid bloated cells that cause CCI.
Deliverables you should insist on: design heatmaps (coverage/SNR/data rate), an interference log, a capacity plan per high-density zone and a PoE budget tied to cabinet locations.
Cabling and power: Wi-Fi stands on wired shoulders
- Horizontal runs: Default to Cat6A on new AP drops (multi-gig and PoE++ headroom)
- Backbone: Fibre between cabinets; multi-gig at the edge where APs justify it
- PoE budget: Keep at least 20–30% headroom per switch; a starved budget throttles radios, and throttled radios masquerade as “Wi-Fi issues”
- Cabinet hygiene: Right-length patching, labelled ports, blanking panels for airflow and proper A/B power distribution
- VLAN & ACL basics: Corporate, voice/AV/IoT and guest separated; guest isolated; inter-VLAN access on a strict allow-list
The cleanest WLANs we see are built on tidy racks with sensible power and documented patches. When racks are chaos, wireless takes the blame for wired faults.
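The PoE headroom rule is simple arithmetic worth automating. A minimal sketch follows; the per-AP draw figures and switch budgets are assumed example values, not vendor specifications.

```python
# Illustrative PoE budget check: does each switch keep 20-30% headroom
# after powering its APs? All wattages below are assumed examples.

AP_DRAW_W = {"wifi6_ap": 25.5, "wifi6e_ap": 30.0}  # worst-case draw per model

def poe_headroom(switch_budget_w: float, aps: list[str]) -> float:
    """Fraction of the switch's PoE budget left after the listed APs."""
    used = sum(AP_DRAW_W[model] for model in aps)
    return (switch_budget_w - used) / switch_budget_w

cab_a = ["wifi6_ap"] * 8 + ["wifi6e_ap"] * 4  # 12 APs hung off one cabinet
for budget_w in (370.0, 480.0):
    h = poe_headroom(budget_w, cab_a)
    verdict = "OK" if h >= 0.20 else "upgrade the PSU before blaming the WLAN"
    print(f"{budget_w:.0f} W budget: {h:.1%} headroom -> {verdict}")
```

Run against real cabinet inventories, a check like this catches the “mystery Wi-Fi fault” that is actually a switch quietly dropping APs to lower power classes.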
Configuration that favours reliability over gimmicks
- Lean SSID strategy: Corporate (802.1X), Guest (isolated) and, if needed, Voice—avoid “one SSID per purpose” bloat
- Channel widths with intent: 20/40 MHz on 5 GHz in dense floors; reserve 80 MHz for sparse areas
- Minimum data rates: Lift them so clients leave unhealthy cells; disable legacy 1–6 Mbps rates where feasible
- Roaming aids: 802.11k/v help clients find neighbours; test 802.11r with your device estate before enabling broadly
- QoS end-to-end: Map DSCP/WMM so collaboration traffic keeps priority on both wired and wireless
Do the boring things well and you’ll outperform flashier builds that ship with every toggle on.
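The configuration points above can be expressed as a simple lint pass over an AP inventory export. This is a hypothetical sketch: the field names, thresholds and `lint_ap` helper are assumptions for illustration, not any controller's real API.

```python
# Hypothetical "config lint" over an AP inventory, flagging the items above:
# over-wide channels on dense floors, legacy minimum rates and SSID bloat.

DENSE_MAX_WIDTH_MHZ = 40   # 20/40 MHz on dense 5 GHz floors
MIN_BASIC_RATE_MBPS = 12   # lift minimums so clients leave unhealthy cells
MAX_SSIDS = 3              # corporate, guest and (optionally) voice

def lint_ap(ap: dict) -> list[str]:
    """Return a list of human-readable issues for one AP record."""
    issues = []
    if ap["dense"] and ap["width_mhz"] > DENSE_MAX_WIDTH_MHZ:
        issues.append(f"{ap['name']}: {ap['width_mhz']} MHz channel on a dense floor")
    if ap["min_rate_mbps"] < MIN_BASIC_RATE_MBPS:
        issues.append(f"{ap['name']}: minimum rate {ap['min_rate_mbps']} Mbps too low")
    if len(ap["ssids"]) > MAX_SSIDS:
        issues.append(f"{ap['name']}: {len(ap['ssids'])} SSIDs (SSID bloat)")
    return issues

ap = {"name": "AP-2F-03", "dense": True, "width_mhz": 80,
      "min_rate_mbps": 6, "ssids": ["Corp", "Guest", "Voice", "IoT", "Legacy"]}
for issue in lint_ap(ap):
    print(issue)
```

The point is not the script itself but the habit: every “boring” rule in the list above is checkable, so check it on every change rather than rediscovering it during an outage.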
The mid-project reality check (don’t skip it)
Pilot one flagship room and one busy open area with the final AP model, channel plan and minimum data rates. Run a real load: simultaneous video calls, screen shares and live guests. Walk a call through corridors and adjacent rooms. Only after these pass should you roll the template across the estate.
If you prefer an engineer-led, end-to-end route—from survey and capacity model to installation and post-install validation—look at business-grade Wi-Fi installation in London for a sense of scope, deliverables and how acceptance is proven with evidence rather than hunches.
London-specific wrinkles to plan for
- Multi-tenant RF congestion: Your neighbour’s channel plan is not under your control. Use narrower channels, TX-power discipline and better SNR targets rather than trying to shout louder.
- Listed or architecturally sensitive interiors: Low-profile mounts, paintable housings (where permitted) and neat trunking matter; so does the method statement your landlord will ask for.
- Exposed ceilings and glass galore: Model multipath and shadowing from lighting tracks and signage; don’t put APs where metalwork blocks the main lobe.
- Hybrid work patterns: Peaks aren’t 9–5 any more; expect mid-week surges and Friday lulls. Validate at your busy hours, not a generic timetable.
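The “better SNR targets rather than shouting louder” point is just subtraction: SNR is the received signal minus the noise floor, both in dBm. A quick illustration with assumed values:

```python
# SNR = received signal (dBm) minus noise floor (dBm). Values are assumed
# examples showing why the same RSSI can pass on one floor and fail on another.

def snr_db(rssi_dbm: float, noise_floor_dbm: float) -> float:
    return rssi_dbm - noise_floor_dbm

# The same -62 dBm signal on two floors:
print(snr_db(-62, -95))  # quiet floor: 33 dB, comfortably above a 25 dB target
print(snr_db(-62, -80))  # congested multi-tenant floor: 18 dB, below target
```

This is why raising transmit power rarely rescues a congested floor: it lifts your neighbours' noise floor as much as your own signal, while narrower channels and power discipline improve the ratio itself.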
Three common failure stories (and how to fix them)
1) The “great on paper” boardroom
Heatmap is green; calls still jitter. Root cause: 80 MHz channels in a dense floor, AP above a metal light trough and low minimum data rates. Fix: move/aim AP for clean line-of-sight to the seating plane, drop to 20/40 MHz, raise minimums, verify end-to-end QoS and retest under load.
2) The co-working hotspot that dies at 11:00
Root cause: Too many SSIDs, everyone’s phone idling on two of them, and guests bursting bandwidth. Fix: slim to two SSIDs, enable client isolation on guest, rate-limit guest traffic, adjust TX power to shrink bloated cells and add a second AP at lower power.
3) The warehouse corner of doom
Root cause: Cross-aisle omni coverage into a metal canyon; scanners cling to the corridor AP. Fix: directional down-aisle antennas shaped to the pick face, power discipline, 20 MHz channels, raised minimum data rates and walk-tests at handheld height.
Acceptance testing: sign off with numbers, not smiles
A credible handover pack should include:
- Pre- vs post-install heatmaps at the seating plane (coverage & SNR)
- Active tests: Per-seat throughput distributions and median/p95 latency/jitter during real call loads
- Roaming walk-tests: Packet captures or controller logs proving fast transitions between cells
- Spectrum snapshots: Evidence of interferers and the channel plan you chose to mitigate them
- Config artefacts: Backups of controller and switch configs, QoS mappings, VLAN/ACL diagrams, PoE budget and cable test results
If you don’t measure it, you can’t prove it—or improve it next quarter.
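The median/p95 figures in the handover pack are straightforward to compute from raw walk-test samples. A minimal sketch, with made-up RTT values and a simple interarrival jitter estimate (mean absolute delta between consecutive round trips):

```python
# Minimal sketch of the "active tests" arithmetic: median and p95 latency
# plus jitter from walk-test ping samples. The RTT values below are made up.
import statistics

def p95(values: list[float]) -> float:
    """95th percentile via statistics.quantiles (inclusive method)."""
    return statistics.quantiles(values, n=20, method="inclusive")[-1]

def jitter(rtts_ms: list[float]) -> float:
    """Mean absolute delta between consecutive RTTs: a simple jitter estimate."""
    deltas = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return statistics.mean(deltas)

rtts = [18, 21, 19, 24, 22, 48, 20, 19, 23, 21]  # ms, captured under call load
print(f"median {statistics.median(rtts):.1f} ms, "
      f"p95 {p95(rtts):.1f} ms, jitter {jitter(rtts):.1f} ms")
```

Note how the single 48 ms outlier barely moves the median but dominates p95 and jitter; that is exactly why the acceptance targets ask for percentiles under real load, not averages on an empty floor.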
Operations that keep Wi-Fi good past day one
- Monitoring that matters: Alert on client failure reasons (DHCP, RADIUS, PSK), retransmit rates, noise floors and DFS events
- Firmware cadence: Quarterly reviews, staged rollouts and lab checks against your conferencing bars and laptops
- Change control that’s lightweight: A shared, simple process for SSID tweaks, VLAN moves and AP relocations as space usage evolves
- Quarterly tune-ups: Re-survey boardrooms and congestion hotspots; trim TX power and channels as headcount and layouts change
- Spares & documentation: Like-for-like APs and injectors on the shelf, plus labelled floor plans and port maps so first-line staff can triage quickly
A two-week stabilisation plan (start Monday)
Days 1–2: Inventory AP models/firmware, switches and PoE headroom; list SSIDs, channels and minimum data rates.
Days 3–4: Cut SSIDs to the minimum, set 20/40 MHz on 5 GHz, raise minimum data rates, cap TX power.
Days 5–6: Tidy cabinets, confirm PoE headroom, fix DHCP scope pressure and verify DNS performance.
Days 7–8: Pilot a flagship room and open zone with the intended plan; run live call loads and adjust.
Days 9–10: Lock in QoS mappings, enable 802.11k/v where the estate supports it, document everything.
Days 11–12: Post-install heatmaps, spectrum snapshots and a roaming walk-test; tune and retest.
Days 13–14: Roll template to remaining rooms/floors in small waves with back-out plans.
Bottom line
In London, “good Wi-Fi” isn’t a brand or a checkbox—it’s a measured outcome. Get the RF right, stand it on solid cabling and power, configure for reliability over gimmicks, and prove performance under real load. Do that and you’ll stop firefighting symptoms and start delivering a network nobody has to think about—which is the highest compliment any WLAN can get.
