This post summarizes a conversation on the Jukebox Podcast (WP Tavern) between Nathan Wrigley and Saumya Majumder, lead software engineer at BigScoots. They discuss a recent major Cloudflare outage, why such outages happen, and the hosting and performance strategies BigScoots uses—especially with Cloudflare Enterprise.
Who Saumya Majumder Is
Saumya leads high-performance WordPress engineering at BigScoots, focusing on Cloudflare-powered architectures and edge-enabled solutions. His work includes custom cache engines, migration tools, worker-based automation, and edge computing. He supports enterprise customers and internal WordPress initiatives, building scalable solutions that are developer-friendly and production-ready.
Why Cloudflare Outages Happen
Saumya stresses that Cloudflare is far more than a simple CDN: it’s a complex platform with many interdependent systems. Large-scale outages typically stem from a rare edge-case failure in a huge distributed system. That single failure can cascade across Points of Presence (PoPs) and multiple control planes. Fixing the root cause is often straightforward; propagating and stabilizing the fix across a global network is the hard part, and recovery itself can trigger traffic bursts that need additional mitigation.
Outages are not unique to Cloudflare; they can and do occur at other major providers (AWS, GCP, Azure). The costs—financial and reputational—are real because of SLAs. Saumya notes Cloudflare generally responds transparently with detailed postmortems and works to remove fragile dependencies (for example, past incidents tied to third-party KV storage).
How BigScoots Mitigated Customer Impact
During the outage, BigScoots used the Cloudflare API to turn off Cloudflare proxying for affected domains, so traffic flowed directly to BigScoots’ origin servers. Because the Cloudflare API remained reachable, they automated rapid failovers to bypass the proxy layer until Cloudflare stabilized. That approach requires origin hosting to be independent of Cloudflare-hosted compute—if a site runs entirely on Cloudflare Workers/platform, disabling the proxy via API wouldn’t restore origin access.
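The failover described above boils down to un-proxying DNS records through Cloudflare's API. As a minimal sketch, the function below builds (but does not send) the PATCH request that sets a record's `proxied` flag to false; the zone ID, record ID, and token handling are placeholders, and a production tool would iterate over affected records and handle errors and rate limits.

```python
import json

CF_API = "https://api.cloudflare.com/client/v4"

def build_unproxy_request(zone_id: str, record_id: str, api_token: str) -> dict:
    """Build the Cloudflare API request that 'grey-clouds' a DNS record,
    so traffic flows directly to the origin instead of through the proxy."""
    return {
        "method": "PATCH",
        "url": f"{CF_API}/zones/{zone_id}/dns_records/{record_id}",
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        # Only the proxied flag changes; name/content/TTL stay as-is.
        "body": json.dumps({"proxied": False}),
    }
```

Re-enabling the proxy after the incident is the same call with `"proxied": True`, which is what makes this approach easy to automate in both directions.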
CDN-Level Page Caching vs. Server-Side Caching
Saumya described earlier work on CDN-level page caching, including the Super Page Cache for Cloudflare plugin. Traditional server-side caching saves generated HTML on the origin server, which still forces distant clients to request that origin. CDN-level page caching pushes HTML to PoPs so users get content from a nearby edge location, cutting latency and reducing origin load dramatically.
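For the CDN to cache generated HTML safely, the origin has to tell shared caches which responses are storable. The sketch below is an illustrative decision function (not BigScoots' actual logic): logged-in and cart traffic bypasses the shared cache, while anonymous page views get an `s-maxage` directive that applies to the CDN but not the browser.

```python
def edge_cache_headers(path: str, has_login_cookie: bool) -> dict:
    """Pick Cache-Control headers for a generated HTML response.
    Personalized responses must never be stored in a shared edge cache."""
    if has_login_cookie or path.startswith(("/wp-admin", "/cart")):
        return {"Cache-Control": "private, no-store"}
    # s-maxage governs shared caches (the CDN PoP); max-age governs browsers.
    # Keeping max-age at 0 lets the edge serve stale-free copies it controls.
    return {"Cache-Control": "public, max-age=0, s-maxage=3600"}
```

The key distinction is `s-maxage` versus `max-age`: letting the PoP hold HTML for an hour while browsers revalidate keeps purges effective, since clearing the edge cache immediately affects all visitors.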
Cloudflare Enterprise and Tiered Caching
On Cloudflare Enterprise, tiered caching is key to achieving very high cache hit ratios. PoPs are organized into lower and upper tiers inside Cloudflare’s private network. A lower-tier PoP that doesn’t have an asset first checks upstream tiers within the Cloudflare backbone; if another tier has it, the content is served without touching the public Internet or the origin. Only when no cached copy exists does Cloudflare fetch from the origin. This reduces origin requests and improves global performance.
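The tiered lookup described above can be sketched as a simple two-level cascade. This toy model (plain dictionaries standing in for PoP caches) shows the essential property: the origin is contacted only when neither tier has the asset, and an upper-tier hit back-fills the lower tier for subsequent requests.

```python
def tiered_lookup(key, lower: dict, upper: dict, origin_fetch):
    """Resolve a request through a two-tier cache: lower-tier PoP first,
    then the upper tier over the private backbone, then the origin."""
    if key in lower:
        return lower[key], "lower-tier hit"
    if key in upper:
        lower[key] = upper[key]      # back-fill the edge on the way out
        return upper[key], "upper-tier hit"
    body = origin_fetch(key)         # only now does the origin see traffic
    upper[key] = body
    lower[key] = body
    return body, "origin fetch"
```

With many lower-tier PoPs sharing one upper tier, a single origin fetch can satisfy cache misses across the whole network, which is why tiered caching drives origin request counts down so sharply.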
Private Interconnect Between BigScoots and Cloudflare
BigScoots operates its own data centers and has built a private interconnect to Cloudflare (CNI, Cloudflare Network Interconnect). This fiber connection avoids the public Internet when Cloudflare fetches content from BigScoots' origin, reducing latency and variability. Most hosts don't run their own data centers and can't make such physical connections, so this setup is a competitive advantage for consistent, faster origin pulls.
BigScoots Cache Plugin and Control Features
BigScoots developed a proprietary BigScoots Cache plugin to orchestrate Cloudflare page caching and provide fine-grained control:
– Intelligent cache purging: an update clears not only the changed page but also related content (taxonomy archives, author archives, linked pages).
– Hooks and APIs: filters, actions, and a REST API let developers purge or adjust cache behavior programmatically—useful for custom apps and e-commerce.
– Portal controls: customers can toggle login protection, bot blocking, image optimization, Rocket Loader, and more. Settings include country/continent blocking or challenges, proprietary hardening rules, and WAF/bot management.
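The "intelligent purging" idea above can be illustrated with a small sketch. This is not BigScoots' actual implementation: the URL patterns assume default WordPress permalink structures, and a real purge hook would also cover feeds, pagination, and any pages linking to the post.

```python
def related_purge_urls(site: str, post_slug: str, author: str, categories: list) -> list:
    """When a post changes, purge not just its own URL but every cached
    listing page that displays it: home, author archive, category archives."""
    urls = [
        f"{site}/",                       # home page lists recent posts
        f"{site}/{post_slug}/",           # the post itself
        f"{site}/author/{author}/",       # author archive
    ]
    urls += [f"{site}/category/{c}/" for c in categories]
    return urls
```

A plugin hook or REST endpoint would feed this list into the CDN's purge API, so an editor's save button invalidates every stale copy in one pass rather than waiting for TTLs to expire.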
Managed Service and Customer Targeting
Although the platform exposes advanced developer features, BigScoots positions these options to be accessible to all customers. Documentation explains hooks and APIs, while managed support teams implement rules and custom snippets for customers who prefer not to manage technical details. BigScoots provides onboarding, zero-downtime migrations, performance optimization packages, and engineering services for custom code work.
Key Takeaways
– Major provider outages are inevitable; the engineering challenge is minimizing customer impact and learning from incidents.
– Transparency and thorough post-incident analysis matter; Cloudflare’s public postmortems help the ecosystem improve.
– Keeping origin hosting separate from a CDN/proxy enables failover options (for example, disabling proxying via API) during CDN outages.
– CDN-level page caching and tiered caching architectures can dramatically reduce latency and origin pressure when configured correctly.
– Private interconnects between host and CDN improve speed and consistency but require data center infrastructure.
– Fine-grained controls, developer APIs, and managed services let hosts tailor caching and security to diverse WordPress needs.
Conclusion
The discussion reinforces that global-scale internet services rest on complex systems that will sometimes fail. Architectures that combine edge caching, tiered cache logic, private interconnects, automation, and managed expertise can improve performance and reduce visible impact during incidents. BigScoots’ tight integration with Cloudflare Enterprise—paired with automation and managed support—illustrates one approach to building faster, more resilient WordPress hosting.
For the original episode notes and links, see WP Tavern’s podcast coverage.
