This article is currently an experimental machine translation and may contain errors. If anything is unclear, please refer to the original Chinese version. I am continuously working to improve the translation.
UptimeFlare is an open-source project I created earlier, serving a similar purpose to uptime-kuma or UptimeRobot — monitoring the online status of websites/servers and displaying a status page. The key difference is that UptimeFlare is fully serverless, deployable directly on Cloudflare Workers.
Moreover, UptimeFlare has a unique advantage over self-hosted solutions: it leverages Cloudflare’s globally distributed edge network, allowing users to customize the geographical location from which monitoring checks are initiated. In other words, it functions like synthetic monitoring, testing connectivity and latency from various global regions.
This article primarily explains how I force a Worker to run in a specific region. The method isn't covered in Cloudflare's official documentation, nor is it widely discussed; it's a technique I pieced together from online discussions, experimentation, and trial and error, and I'm recording it here for future reference.
The English Wiki in the UptimeFlare project provides a detailed setup guide, so this article will focus more on the underlying principles and reasoning.
Anycast
First, we know that one of Cloudflare’s key features is its use of Anycast in its CDN infrastructure.
Cloudflare announces its IP address ranges across multiple edge nodes (Edge). When clients access a Cloudflare-protected domain, their traffic is routed by their ISP to the nearest edge node. That edge node then fetches the content from the origin server, caches it, and serves it — effectively acting as a CDN. This minimizes latency as much as possible (in some regions, achieving an astonishing 1ms ping).
As shown in the diagram below, suppose www.example.com has an origin server at 5.6.7.8, and the site owner sets up a Proxied A record pointing to 5.6.7.8. When users from the US and UK try to access www.example.com, their DNS queries won’t return the origin IP directly. Instead, they’ll receive an IP address belonging to Cloudflare’s edge network, such as 1.2.3.4.
Client1 in the US will be routed to Edge1 (US), and Client2 in the UK to Edge2 (UK). Both edges then fetch content from the origin server. Note that unlike CDNs that use DNS-based geo-routing, Edge1 and Edge2 may share the same public IP (1.2.3.4) despite being physically different servers — this is the essence of Anycast.
```
www.example.com    A RECORD    5.6.7.8    (Proxied)
```
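If you want to observe this behaviour yourself, one option is to resolve a proxied hostname through Cloudflare's public DNS-over-HTTPS JSON API and check that the answers are edge IPs rather than the origin address. The snippet below uses the hypothetical www.example.com record from the diagram; substitute a domain you actually proxy through Cloudflare.

```js
// Query Cloudflare's 1.1.1.1 DNS-over-HTTPS JSON API for the (hypothetical)
// proxied hostname. For a proxied record the answers are anycast edge IPs,
// never the origin address 5.6.7.8. Runs in browsers, Workers, or Node 18+.
const resp = await fetch(
  "https://cloudflare-dns.com/dns-query?name=www.example.com&type=A",
  { headers: { accept: "application/dns-json" } }
);
const { Answer } = await resp.json();
console.log(Answer?.map((record) => record.data)); // e.g. Cloudflare edge IPs
```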
Workers
Cloudflare Workers are small pieces of code written and uploaded by users to execute custom logic directly on Cloudflare’s edge servers. They run on the V8 JavaScript engine and do not include Node.js APIs.
Workers are ideal for lightweight backend logic or small-scale projects. They offer significantly lower latency and less maintenance overhead than self-hosted solutions, and Cloudflare provides a generous free tier (100,000 requests per day). The main downside is that your code becomes tightly coupled to the Cloudflare platform, as few other providers offer a comparable service.
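As a point of reference, here is what a minimal Worker looks like (a trivial example of my own, not tied to UptimeFlare). It answers directly at the edge and reports which Cloudflare data center handled the request via the request.cf.colo field.

```js
export default {
  // The fetch handler runs on the edge node that received the request.
  async fetch(request) {
    const colo = request.cf?.colo ?? "unknown"; // IATA code of the handling data center
    return new Response(`Hello from Cloudflare colo ${colo}!`);
  },
};
```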
Forcing Workers to Run in Specific Locations
After all that background, we finally arrive at the main topic.
In the above scenario, a user in the US accessing www.example.com will always be handled by the nearest edge node (Edge1 in the US). Due to how IP routing works (managed by ISPs), users cannot influence this routing — there’s no way to force the request to be processed by a different edge node, say in the UK.
This isn’t usually a problem, but in certain cases — like with UptimeFlare, where we want to monitor latency and connectivity from various global regions — we need a way to control where the check originates.
In threads on the Cloudflare Community forum [1] [2], users discovered a workaround: each edge node has not only its public Anycast IP but also a private, internal IP address. These internal IPs belong to AS13335, fall within the 8.0.0.0/8 range, and are not used for Anycast. Tools like nmap (using ping scan) can help identify valid internal IPs. (Let’s say Edge1’s internal IP is 8.1.2.3, and Edge2’s is 8.4.5.6.)
These internal IPs aren’t directly exposed to the public internet. While they respond to ping, they don’t serve HTTP/HTTPS traffic, so we can’t access them directly.
The solution is to create an A record — for example, uk.example.com pointing to 8.4.5.6. However, we still can’t access the UK edge directly using uk.example.com because incoming requests are still routed to the nearest edge (e.g., Edge1 in the US). If that edge has a Worker associated with the domain, it will handle the request locally and ignore the A record’s IP.
The final workaround is to write a piece of code in the Worker that uses fetch to make an internal request to uk.example.com from Edge1. Since this request originates within Cloudflare’s network, it will be routed to the actual server at 8.4.5.6 — Edge2 in the UK. The actual flow becomes: Client1(US) => Edge1(US,1.2.3.4) => Edge2(UK,8.4.5.6).
```
www.example.com    A RECORD    5.6.7.8    (Proxied)
uk.example.com     A RECORD    8.4.5.6    (Proxied)

Client1 (US) => Edge1 (US, 1.2.3.4) => Edge2 (UK, 8.4.5.6)
```
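To make the mechanism concrete, here is a minimal sketch of the relay idea as described above (not UptimeFlare's actual code). It assumes a single Worker is routed on both www.example.com and uk.example.com, and that https://target.example.org/ is a hypothetical site being monitored.

```js
export default {
  async fetch(request) {
    const url = new URL(request.url);

    if (url.hostname === "uk.example.com") {
      // This branch runs once the request has reached Edge2 (UK) via its
      // internal IP: perform the actual check from the desired region.
      const started = Date.now();
      const resp = await fetch("https://target.example.org/", { method: "HEAD" });
      return new Response(
        JSON.stringify({
          colo: request.cf?.colo, // which Cloudflare data center executed the check
          status: resp.status,
          latencyMs: Date.now() - started,
        }),
        { headers: { "content-type": "application/json" } }
      );
    }

    // Any other hostname is handled by the edge closest to the client (Edge1).
    // Fetching uk.example.com from here stays inside Cloudflare's network and
    // is routed to the internal IP 8.4.5.6, i.e. Edge2, which runs the branch above.
    return fetch("https://uk.example.com/");
  },
};
```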
In UptimeFlare, Cron Triggers are executed on a randomly selected, relatively idle edge node. However, this doesn't prevent us from using the method above to redirect the actual monitoring request to the desired geographical location.
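For completeness, a hedged sketch of how a Cron Trigger can be combined with the relay: Cloudflare decides which colo runs the scheduled handler, but the outgoing fetch is still steered to the UK edge through the uk.example.com record from the example above.

```js
export default {
  // Cloudflare runs scheduled() on an edge node of its own choosing...
  async scheduled(event, env, ctx) {
    // ...but this subrequest is still routed to Edge2 (UK) via uk.example.com,
    // so the actual check originates from the desired region.
    ctx.waitUntil(fetch("https://uk.example.com/"));
  },
};
```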
This article is licensed under the CC BY-NC-SA 4.0 license.
Author: lyc8503, Article link: https://blog.lyc8503.net/en/post/cloudflare-worker-region/
If this article was helpful or interesting to you, consider buying me a coffee ¬_¬
Feel free to comment in English below o/