BDFZ Signed Downloads — Architecture & Runbook
Not sure how long 阿里雲盤 (Aliyun Drive) can survive (famous last words — Dr. Jane Goodall), so a few hours later this new download project was done.
Will humanity ever see a day without the Wall? No hope whatsoever.
Goal: Host large files on Cloudflare R2 and serve download-only links via a Cloudflare Worker at `dl.bdfz.net`, with expiring HMAC signatures, Range/HEAD resume, anti-hotlinking, and basic analytics (Cloudflare Analytics Engine, "AE").
The local CLI (`dlput`) uploads files, signs URLs (default 7 days), verifies them, and emits a fresh bucket-wide link list that always matches what actually exists in R2 (so web-deleted objects never linger in the list).
1) Architecture (overview)
- Storage: Cloudflare R2 (S3-compatible)
- Gateway: Cloudflare Worker on `dl.bdfz.net/dl/*`
  - Verifies `?exp=<unix>&sig=<hex>` with HMAC-SHA256 over `key:exp`
  - Enforces download-only (`Content-Disposition: attachment`)
  - Supports HEAD and GET with Range (resume)
  - Anti-hotlink: allows `bdfz.net` and subdomains (configurable)
  - Optional `/renew` to mint a new link within a grace window
  - Optional AE logging (dataset `dl_events`) and a Durable Object (DO) rate meter
- Domains
  - `dl.bdfz.net` → Worker (signed downloads)
  - `media.bdfz.net` → not for direct access (blocked via WAF)
- Local CLI: `dlput` (`~/bin/r2dl.sh`): upload → sign → verify → emit fresh link index
  - The bucket-wide list is rebuilt from live keys on each run (no stale links)
Security invariants
- Default TTL 7 days, clamped by the Worker's `MAX_TTL_SECONDS=604800`.
- Rotating `HMAC_KEY` invalidates all old links by design (optionally tolerated via `HMAC_KEY_PREV`).
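For reference, a link can be signed by hand with the same `key:exp` scheme the Worker verifies — a minimal sketch with a placeholder key, assuming `HMAC_KEY` is exported as in §3 (this mirrors what `dlput` does in §4):

KEY="uploads/2025/01/example.bin"   # hypothetical object key
EXP=$(( $(date +%s) + 604800 ))     # now + 7 days
SIG=$(printf "%s:%s" "$KEY" "$EXP" \
  | openssl dgst -sha256 -mac HMAC -macopt key:"$HMAC_KEY" -binary | xxd -p -c 256)
ENC=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1]))' "$KEY")
echo "https://dl.bdfz.net/dl/${ENC}?exp=${EXP}&sig=${SIG}"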
2) Prerequisites
- macOS with Homebrew
- Cloudflare account & R2 bucket
- DNS: `dl.bdfz.net` routed to the Worker (via the Cloudflare Dashboard)
- (Optional) AE token for Analytics queries
Homebrew packages: `awscli`, `rclone`, `jq`, `openssl`, `python@3` (macOS usually has `file` and `curl` already)
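If you prefer, they can be installed in one shot (setup.sh below does the same, idempotently):

brew install awscli rclone jq openssl python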
3) One-time environment & tooling (setup.sh)
This script installs deps, sets your shell env, and creates the `dlput` alias. Replace the placeholders (`<>`) with your values before running.
#!/usr/bin/env bash
set -Eeuo pipefail
echo "== BDFZ setup =="
# 1) Homebrew deps (idempotent)
brew list awscli >/dev/null 2>&1 || brew install awscli
brew list rclone >/dev/null 2>&1 || brew install rclone
brew list jq >/dev/null 2>&1 || brew install jq
brew list openssl@3 >/dev/null 2>&1 || brew install openssl
brew list python >/dev/null 2>&1 || brew install python
# 2) Ensure ~/bin exists and is on PATH
mkdir -p "$HOME/bin"
if ! echo ":$PATH:" | grep -q ":$HOME/bin:"; then
echo 'export PATH="$HOME/bin:$PATH"' >> "$HOME/.zshrc"
fi
# 3) Append environment (edit placeholders first!)
cat >> "$HOME/.zshrc" <<'ZRC'
# --- BDFZ Signed Downloads (env) ---
export CF_ACCOUNT_ID="<YOUR_CF_ACCOUNT_ID>"
export R2_BUCKET="<YOUR_R2_BUCKET>"
export R2_ACCESS_KEY_ID="<YOUR_R2_ACCESS_KEY_ID>"
export R2_SECRET_ACCESS_KEY="<YOUR_R2_SECRET_ACCESS_KEY>"
export HMAC_KEY="<your-HMAC-secret-used-by-Worker>"
export HOST_DL="dl.bdfz.net"
# Optional: Analytics Engine token (Account → Analytics:Read on your account)
# export CF_API_TOKEN="<YOUR_CF_API_TOKEN>"
# S3 compat
export AWS_EC2_METADATA_DISABLED=true
export AWS_REGION=auto
export AWS_DEFAULT_REGION=auto
export R2_ENDPOINT="https://${CF_ACCOUNT_ID}.r2.cloudflarestorage.com"
# TTL clamp (must match Worker)
export MAX_TTL_SECONDS=604800
# Optionally limit the bucket-wide link list to certain prefixes:
# export LIST_PREFIXES="uploads/ video/"
# Helpful alias
alias dlput="$HOME/bin/r2dl.sh"
# Quick env checker
check_env() {
for v in CF_ACCOUNT_ID R2_BUCKET R2_ACCESS_KEY_ID R2_SECRET_ACCESS_KEY R2_ENDPOINT HMAC_KEY HOST_DL AWS_REGION MAX_TTL_SECONDS CF_API_TOKEN; do
val="${(P)v}"
if [ -n "$val" ]; then
# mask long secrets
show="$val"
if [ ${#show} -gt 20 ]; then show="${show:0:6}..${show: -6}"; fi
printf " %-24s = %s\n" "$v" "$show"
else
printf " %-24s = <MISSING>\n" "$v"
fi
done
}
ZRC
# 4) Make dlput resolvable
chmod 755 "$HOME/bin"
echo "Done. Open a new terminal (or run: exec zsh), then run: check_env"
Run it:
chmod +x ./setup.sh
./setup.sh
exec zsh
check_env
4) Local CLI (r2dl.sh) — upload, sign, verify, analytics, fresh list
Put this file at `~/bin/r2dl.sh` and `chmod +x ~/bin/r2dl.sh`. It uses your current env vars from `~/.zshrc`.
#!/usr/bin/env bash
set -Eeuo pipefail
# ==== Required environment (set these in ~/.zshrc) ====
: "${CF_ACCOUNT_ID:?set CF_ACCOUNT_ID}"
: "${R2_BUCKET:?set R2_BUCKET}"
: "${R2_ACCESS_KEY_ID:?set R2_ACCESS_KEY_ID}"
: "${R2_SECRET_ACCESS_KEY:?set R2_SECRET_ACCESS_KEY}"
: "${HMAC_KEY:?set HMAC_KEY}" # must match Worker secret
HOST_DL="${HOST_DL:-dl.bdfz.net}"
# S3 compatibility variables
export AWS_ACCESS_KEY_ID="$R2_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="$R2_SECRET_ACCESS_KEY"
export AWS_EC2_METADATA_DISABLED=true
export AWS_REGION=auto AWS_DEFAULT_REGION=auto
R2_ENDPOINT="${R2_ENDPOINT:-https://${CF_ACCOUNT_ID}.r2.cloudflarestorage.com}"
need(){ command -v "$1" >/dev/null 2>&1 || { echo "Missing dep: $1" >&2; exit 2; }; }
need aws; need rclone; need openssl; need python3; need jq; need file; need curl
usage(){ echo "Usage: dlput <local-file> [R2 key] [TTL seconds (default 604800=7d)]"; exit 1; }
# ---- Analytics Engine helpers ----
ae_post_sql() {
# requires CF_API_TOKEN in env
local sql="$1"
curl -sS -X POST \
"https://api.cloudflare.com/client/v4/accounts/$CF_ACCOUNT_ID/analytics_engine/sql" \
-H "Authorization: Bearer $CF_API_TOKEN" \
-H "Content-Type: text/plain" \
-H "Accept: application/json" \
--data-binary "$sql"
}
ae_json_ok() { jq -e '.meta and (.data|type)' >/dev/null 2>&1; }
SRC="${1:-}"; [ -n "$SRC" ] || usage
KEY="${2:-}"
TTL="${3:-604800}"
# TTL clamp to Worker limit (defaults to 7d)
MAX_TTL="${MAX_TTL_SECONDS:-604800}"
if [ "${TTL:-0}" -gt "${MAX_TTL:-604800}" ] 2>/dev/null; then
echo "TTL($TTL) > MAX_TTL_SECONDS($MAX_TTL), using $MAX_TTL"
TTL="$MAX_TTL"
fi
# Default R2 key under uploads/YYYY/MM/
if [ -z "$KEY" ]; then
bn="$(basename "$SRC")"; y=$(date +%Y); m=$(date +%m)
KEY="uploads/${y}/${m}/${bn}"
fi
MIME="$(file --mime-type -b "$SRC" 2>/dev/null || echo application/octet-stream)"
# rclone remote (idempotent)
rclone config create r2 s3 provider Cloudflare \
access_key_id "$R2_ACCESS_KEY_ID" secret_access_key "$R2_SECRET_ACCESS_KEY" \
endpoint "$R2_ENDPOINT" >/dev/null 2>&1 || true
echo "Uploading → r2:${R2_BUCKET}/${KEY}"
rclone copyto "$SRC" "r2:${R2_BUCKET}/${KEY}" \
--progress --s3-chunk-size 64M --s3-upload-concurrency 8 \
--header-upload="Content-Type: ${MIME}" \
--header-upload='Cache-Control: public, max-age=31536000, immutable' \
--header-upload="Content-Disposition: attachment; filename=\"$(basename "$KEY")\""
echo "Verify R2 object..."
aws --endpoint-url "$R2_ENDPOINT" s3api head-object \
--bucket "$R2_BUCKET" --key "$KEY" >/tmp/r2_head.json
SIZE=$(jq -r '.ContentLength' /tmp/r2_head.json); ETAG=$(jq -r '.ETag' /tmp/r2_head.json)
echo "OK size=${SIZE} etag=${ETAG}"
now=$(date +%s); EXP=$(( now + TTL ))
SIG=$(printf "%s:%s" "$KEY" "$EXP" | openssl dgst -sha256 -mac HMAC -macopt key:"$HMAC_KEY" -binary | xxd -p -c 256)
ENC_KEY=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1]))' "$KEY")
URL="https://${HOST_DL}/dl/${ENC_KEY}?exp=${EXP}&sig=${SIG}"
echo "Signed URL:"
echo "$URL"
echo "HEAD check..."
curl -sI "$URL" | sed -n '1,15p' || true
echo "Resume test (expect 1024):"
curl -sSL -r 0-1023 "$URL" | wc -c || true
# ===== Analytics: per-file tail (last 7 days) =====
if [ -n "${CF_API_TOKEN:-}" ]; then
echo "AE tail (last 7 days, up to 20 rows)…"
META_JSON="$(ae_post_sql 'SELECT * FROM dl_events LIMIT 0 FORMAT JSON' || true)"
if echo "$META_JSON" | ae_json_ok; then
TS_COL="$(echo "$META_JSON" | jq -r '.meta[].name | select(.=="ts" or .=="timestamp")' | head -n1)"
KEY_COL="$(echo "$META_JSON" | jq -r '.meta[].name | select(.=="key" or .=="object_key" or .=="r2_key")' | head -n1)"
KEY_ESC=$(printf "%s" "$KEY" | sed "s/'/''/g")
COND=()
[ -n "$KEY_COL" ] && COND+=("\`$KEY_COL\`='${KEY_ESC}'")
[ -n "$TS_COL" ] && COND+=("\`$TS_COL\` > now() - INTERVAL 7 DAY")
# join the conditions with AND (note: ${arr[*]} with a multi-char IFS joins on the first char only)
WHERE=""
if [ "${#COND[@]}" -gt 0 ]; then
WHERE="WHERE ${COND[0]}"
i=1; while [ "$i" -lt "${#COND[@]}" ]; do WHERE="$WHERE AND ${COND[$i]}"; i=$((i+1)); done
fi
SQL1="SELECT * FROM dl_events ${WHERE}"
[ -n "$TS_COL" ] && SQL1="$SQL1 ORDER BY \`$TS_COL\` DESC"
SQL1="$SQL1 LIMIT 20 FORMAT JSON"
RESP_AE="$(ae_post_sql "$SQL1" || true)"
if echo "$RESP_AE" | jq -e '.' >/dev/null 2>&1; then echo "$RESP_AE" | jq .
else echo "(AE non-JSON; first 200B:)"; printf "%s" "$RESP_AE" | head -c 200; echo; fi
else
echo "(AE meta unavailable; check token/dataset)"; printf "%s" "$META_JSON" | head -c 200; echo
fi
else
echo "(Skip AE: no CF_API_TOKEN)"
fi
# ===== Bucket-wide fresh 7-day link list (only existing objects) =====
OUT="${OUT:-$HOME/Downloads/r2_links_$(date +%Y%m%d-%H%M%S).txt}"
LATEST="$HOME/Downloads/r2_links_latest.txt"
: > "$OUT"
echo "Building fresh 7-day links from live bucket…"
BASE_NOW="${now:-$(date +%s)}"
TMP_KEYS="$(mktemp)"
if [ -n "${LIST_PREFIXES:-}" ]; then
for P in ${LIST_PREFIXES}; do
CONT=""
while :; do
if [ -n "$CONT" ]; then
RESP=$(aws --endpoint-url "$R2_ENDPOINT" s3api list-objects-v2 --bucket "$R2_BUCKET" --prefix "$P" --continuation-token "$CONT" || true)
else
RESP=$(aws --endpoint-url "$R2_ENDPOINT" s3api list-objects-v2 --bucket "$R2_BUCKET" --prefix "$P" || true)
fi
echo "$RESP" | jq -r '.Contents[].Key' >>"$TMP_KEYS"
TRUNC=$(echo "$RESP" | jq -r '.IsTruncated')
[ "$TRUNC" = "true" ] || break
CONT=$(echo "$RESP" | jq -r '.NextContinuationToken // empty')
done
done
else
CONT=""
while :; do
if [ -n "$CONT" ]; then
RESP=$(aws --endpoint-url "$R2_ENDPOINT" s3api list-objects-v2 --bucket "$R2_BUCKET" --continuation-token "$CONT" || true)
else
RESP=$(aws --endpoint-url "$R2_ENDPOINT" s3api list-objects-v2 --bucket "$R2_BUCKET" || true)
fi
echo "$RESP" | jq -r '.Contents[].Key' >>"$TMP_KEYS"
TRUNC=$(echo "$RESP" | jq -r '.IsTruncated')
[ "$TRUNC" = "true" ] || break
CONT=$(echo "$RESP" | jq -r '.NextContinuationToken // empty')
done
fi
sort -u "$TMP_KEYS" | sed '/^$/d' > "${TMP_KEYS}.uniq" || true
if [ ! -s "${TMP_KEYS}.uniq" ]; then
echo "# (empty bucket) nothing to sign" >>"$OUT"
echo "Links file: $OUT"; ln -sf "$OUT" "$LATEST"; echo "(preview: empty)"
else
while IFS= read -r K; do
EXP_ALL=$(( BASE_NOW + TTL ))
SIG_ALL=$(printf "%s:%s" "$K" "$EXP_ALL" \
| openssl dgst -sha256 -mac HMAC -macopt key:"$HMAC_KEY" -binary \
| xxd -p -c 256)
ENC_ALL=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1]))' "$K")
printf "https://%s/dl/%s?exp=%d&sig=%s\n" "$HOST_DL" "$ENC_ALL" "$EXP_ALL" "$SIG_ALL" >> "$OUT"
done < "${TMP_KEYS}.uniq"
echo "Links file: $OUT"
ln -sf "$OUT" "$LATEST"
echo "(latest symlink → $LATEST)"
echo "(preview top 20)"
head -n 20 "$OUT" | sed 's/^/ /'
fi
rm -f "$TMP_KEYS" "${TMP_KEYS}.uniq" 2>/dev/null || true
# ===== AE TOP 10 (last 7 days) =====
if [ -n "${CF_API_TOKEN:-}" ]; then
echo "AE TOP 10 (last 7 days)…"
META_JSON="$(ae_post_sql 'SELECT * FROM dl_events LIMIT 0 FORMAT JSON' || true)"
if echo "$META_JSON" | ae_json_ok; then
KEY_COL="$(echo "$META_JSON" | jq -r '.meta[].name | select(.=="key" or .=="object_key" or .=="r2_key")' | head -n1)"
TS_COL="$(echo "$META_JSON" | jq -r '.meta[].name | select(.=="ts" or .=="timestamp")' | head -n1)"
if [ -n "$KEY_COL" ]; then
WHERE=""; [ -n "$TS_COL" ] && WHERE="WHERE \\\`$TS_COL\\\` > now() - INTERVAL 7 DAY"
SQL_TOP="SELECT \\\`$KEY_COL\\\` AS key, COUNT(*) AS downloads, SUM(CASTOrNull(bytes, 'UInt64')) AS total_bytes FROM dl_events ${WHERE} GROUP BY \\\`$KEY_COL\\\` ORDER BY downloads DESC LIMIT 10 FORMAT JSON"
RESP_TOP="$(ae_post_sql "$SQL_TOP" || true)"
if echo "$RESP_TOP" | jq -e '.meta and .data' >/dev/null 2>&1; then
echo "$RESP_TOP" \
| jq -r '["downloads","MB","key"], (.data[] | [ (.downloads|tostring), (((.total_bytes//0)/1048576)|round|tostring), .key ]) | @tsv' \
| column -t
else
echo "(AE TOP non-JSON; first 200B:)"; printf "%s" "$RESP_TOP" | head -c 200; echo
fi
else
echo "(AE TOP skipped: no key column in dataset)"
fi
else
echo "(AE meta unavailable; TOP skipped)"
fi
else
echo "(Skip AE TOP 10: no CF_API_TOKEN)"
fi
command -v pbcopy >/dev/null 2>&1 && printf "%s" "$URL" | pbcopy && echo "(copied to clipboard)"
Usage examples
# simplest: default 7d TTL, default key under uploads/YYYY/MM/
dlput "/Users/you/Movies/Big File.mkv"
# custom key (keeps original filename for disposition)
dlput "/path/file.mp4" "video/MyMovie-2010.mp4"
# custom TTL (clamped to 7d if over limit)
dlput "/path/archive.zip" "" 432000
Each run emits a new link-list file, `~/Downloads/r2_links_YYYYmmdd-HHMMSS.txt`, and updates `~/Downloads/r2_links_latest.txt`.
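To fish one link back out of the latest index (the filename here is just the example from above):

grep 'MyMovie-2010' ~/Downloads/r2_links_latest.txt
# macOS: copy the first match straight to the clipboard
grep 'MyMovie-2010' ~/Downloads/r2_links_latest.txt | head -n1 | pbcopy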
5) Worker (gateway) — code & bindings (Dashboard)
What it does:
- `GET`/`HEAD` `/dl/<url-encoded key>?exp=&sig=` — validates HMAC & expiry, serves from R2 with Range/HEAD support and a download disposition.
- Optional `/renew` to mint a new link in place (within a grace window).
- Anti-hotlink (allows `bdfz.net` & subdomains).
- Optional AE logging and a Durable Object rate meter (per IP, per key, per minute).
5.1 Source (paste into Workers & Pages → Create → Worker → Quick edit)
// Durable Object: per-IP per-key metering
export class Meter {
constructor(state, env) { this.state = state; this.env = env; }
async fetch(req) {
const url = new URL(req.url);
const op = url.pathname.split('/').pop();
const key = url.searchParams.get('k') || '';
const ip = url.searchParams.get('ip') || '';
const bytes = Number(url.searchParams.get('b') || 0);
const now = Date.now();
const minute = Math.floor(now/60000);
const perMinKey = `m:${key}|${ip}|${minute}`;
const perDayKey = `d:${key}|${ip}|${new Date(now).toISOString().slice(0,10)}`;
const maxPerMin = Number(this.env.MAX_REQ_PER_MIN || 120);
if (op === 'tick') {
let c = (await this.state.storage.get(perMinKey)) || 0;
c += 1;
await this.state.storage.put(perMinKey, c, { expirationTtl: 150 });
let rec = (await this.state.storage.get(perDayKey)) || { c:0, b:0, first: now, last: now };
rec.c += 1; rec.b += bytes; rec.last = now;
await this.state.storage.put(perDayKey, rec, { expirationTtl: 60*60*24*35 });
const allow = c <= maxPerMin;
return new Response(JSON.stringify({ allow, c }), { headers: { 'content-type': 'application/json' } });
}
if (op === 'get') {
const day = url.searchParams.get('day') || new Date(now).toISOString().slice(0,10);
const rec = (await this.state.storage.get(`d:${key}|${ip}|${day}`)) || { c:0, b:0, first:0, last:0 };
return new Response(JSON.stringify(rec), { headers: { 'content-type': 'application/json' } });
}
return new Response('not-found', { status: 404 });
}
}
function basename(p){ return p.split('/').pop(); }
function clientIP(req){ return req.headers.get('cf-connecting-ip') || req.headers.get('x-forwarded-for') || '' }
async function hmacHex(secret, data){
const k = await crypto.subtle.importKey('raw', new TextEncoder().encode(secret), {name:'HMAC', hash:'SHA-256'}, false, ['sign']);
const s = await crypto.subtle.sign('HMAC', k, new TextEncoder().encode(data));
return [...new Uint8Array(s)].map(b=>b.toString(16).padStart(2,'0')).join('');
}
function refererAllowed(env, ref){
if(!ref) return true;
const list=(env.ALLOWED_REFERRERS||'bdfz.net,*.bdfz.net').split(',').map(s=>s.trim()).filter(Boolean);
try{ const h=new URL(ref).hostname; return list.some(p=>p==='*'||h===p||(p.startsWith('*.')&&h.endsWith(p.slice(1)))); }catch{ return false }
}
function commonHeaders(meta, fn){
const h = new Headers();
meta.writeHttpMetadata?.(h);
if (meta.httpEtag) h.set('etag', meta.httpEtag);
h.set('Content-Disposition', `attachment; filename="${fn}"`);
h.set('Cache-Control','public, max-age=31536000, immutable');
h.set('Accept-Ranges','bytes');
return h;
}
async function writeEvent(env, f){
try{ await env.dl_events.writeDataPoint({ blobs:[f.key,f.ip,f.ua,f.ref,f.ray], doubles:[f.status,f.bytes] }); }catch{}
}
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url);
const path = url.pathname;
if (path.startsWith('/renew')) return renew(request, env);
if (!path.startsWith('/dl/')) return new Response('not-found', { status: 404 });
// Anti-hotlink: allow empty or bdfz.net/*
const ref = request.headers.get('referer')||'';
if (!refererAllowed(env, ref)) return new Response('forbidden', { status: 403 });
const encKey = path.slice('/dl/'.length);
const key = decodeURIComponent(encKey);
const exp = Number(url.searchParams.get('exp')||0);
const sig = (url.searchParams.get('sig')||'').toLowerCase();
if (!key || !exp || !sig) return new Response('bad-request', { status: 400 });
const now = Math.floor(Date.now()/1000);
const maxTtl = Number(env.MAX_TTL_SECONDS||604800);
if (exp > now + maxTtl) return new Response('bad-exp', { status: 400 });
if (exp < now) return new Response('expired', { status: 403 });
// HMAC verify (supports PREV during rotation)
const expect = await hmacHex(env.HMAC_KEY, `${key}:${exp}`);
if (expect !== sig) {
if (!env.HMAC_KEY_PREV) return new Response('bad-sig', { status: 403 });
const prev = await hmacHex(env.HMAC_KEY_PREV, `${key}:${exp}`);
if (prev !== sig) return new Response('bad-sig', { status: 403 });
}
// HEAD fast path
if (request.method === 'HEAD') {
const head = await env.BUCKET.head(key);
if (!head) return new Response('not-found', { status: 404 });
return new Response(null, { status: 200, headers: commonHeaders(head, basename(key)) });
}
if (request.method !== 'GET') return new Response('method-not-allowed', { status: 405 });
// Per-IP per-key metering
const ip = clientIP(request);
if (env.METER) {
const meter = env.METER.get(env.METER.idFromName(`${key}|${ip}`));
const r = await (await meter.fetch(`https://meter/tick?k=${encodeURIComponent(key)}&ip=${encodeURIComponent(ip)}&b=0`)).json();
if (!r.allow) return new Response('rate-limited', { status: 429 });
}
// Range/conditional fetch
const range = request.headers.get('Range');
const obj = await env.BUCKET.get(key, { range: request.headers, onlyIf: request.headers });
if (!obj) return new Response('not-found', { status: 404 });
const headers = commonHeaders(obj, basename(key));
const status = range ? 206 : 200;
if (status === 206 && obj.range) {
headers.set('Content-Range', `bytes ${obj.range.offset}-${obj.range.offset+obj.range.length-1}/${obj.size}`);
headers.set('Content-Length', String(obj.range.length));
} else {
headers.set('Content-Length', String(obj.size));
}
// AE event (best-effort)
if (env.dl_events) {
ctx.waitUntil(writeEvent(env, {
key, ip,
ua: request.headers.get('user-agent')||'',
ref, ray: request.headers.get('cf-ray')||'',
status, bytes: obj.size||0
}));
}
return new Response(('body' in obj) ? obj.body : null, { status, headers });
}
}
async function renew(request, env){
const u = new URL(request.url);
const key = u.searchParams.get('k')||'';
const expOld = Number(u.searchParams.get('exp')||0);
const ttl = Number(u.searchParams.get('ttl')||86400);
const sig = u.searchParams.get('sig')||'';
const now = Math.floor(Date.now()/1000);
if(!key || !expOld || !sig) return new Response('bad-request',{status:400});
const expect = await hmacHex(env.REFRESH_KEY||env.HMAC_KEY, `${key}:${expOld}`);
if(expect !== sig) return new Response('bad-sig',{status:403});
if(expOld + Number(env.GRACE_RENEW_SECONDS||1209600) < now) return new Response('too-late',{status:403});
const expNew = Math.min(now+ttl, now+Number(env.MAX_TTL_SECONDS||604800));
const newSig = await hmacHex(env.HMAC_KEY, `${key}:${expNew}`);
const urlNew = `${u.origin}/dl/${encodeURIComponent(key)}?exp=${expNew}&sig=${newSig}`;
return new Response(JSON.stringify({ url: urlNew, exp: expNew }), { headers: { 'content-type': 'application/json' } });
}
5.2 Bindings (Dashboard)
- R2 bucket: binding `BUCKET` → your R2 bucket
- Durable Object (class): name `Meter`; binding name `METER`
- Analytics Engine dataset: binding name `dl_events`, dataset `dl_events`
- Variables:
  - `ALLOWED_REFERRERS` = `bdfz.net,*.bdfz.net`
  - `MAX_TTL_SECONDS` = `604800`
  - `GRACE_RENEW_SECONDS` = `1209600`
  - `MAX_REQ_PER_MIN` = `120`
- Secrets:
  - `HMAC_KEY` (signing)
  - `HMAC_KEY_PREV` (optional, for the rotation window)
  - `REFRESH_KEY` (optional; defaults to `HMAC_KEY` if missing)
Route: `dl.bdfz.net/dl/*` → this Worker (add a `dl.bdfz.net/renew*` route as well if you use `/renew`; the `/dl/*` pattern doesn't match it)
Deploy: Save & Deploy in the Dashboard
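If you manage the Worker with the Wrangler CLI instead of the Dashboard quick-edit, the secrets can also be set from the terminal (a sketch; the R2/DO/AE bindings and variables still need to be declared in wrangler.toml or the Dashboard):

npx wrangler secret put HMAC_KEY
npx wrangler secret put HMAC_KEY_PREV   # optional (rotation window)
npx wrangler secret put REFRESH_KEY     # optional (/renew)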
Acceptance checklist
# HEAD
curl -I "https://dl.bdfz.net/dl/<enc-key>?exp=...&sig=..." | sed -n '1,20p'
# Range
curl -sSL -r 0-1023 "https://dl.bdfz.net/dl/<enc-key>?exp=...&sig=..." | wc -c # expect 1024
# Renew (optional)
# GET /renew?k=<raw key>&exp=<old exp>&sig=<hmac(key:expOld)>&ttl=<sec>
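A concrete `/renew` call can be assembled from the old link's key and exp — a sketch with placeholder values, assuming `/renew` is routed to the Worker as noted in §5.2 (the signature covers `key:expOld` and uses `REFRESH_KEY`, or `HMAC_KEY` when no `REFRESH_KEY` is set):

KEY="uploads/2025/01/example.bin"   # hypothetical key from the old link
EXP_OLD=1766000000                  # the exp= value from the old link
SIG=$(printf "%s:%s" "$KEY" "$EXP_OLD" \
  | openssl dgst -sha256 -mac HMAC -macopt key:"$HMAC_KEY" -binary | xxd -p -c 256)
ENC=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1]))' "$KEY")
curl -s "https://dl.bdfz.net/renew?k=${ENC}&exp=${EXP_OLD}&sig=${SIG}&ttl=86400" | jq .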
6) WAF: block direct media.bdfz.net (GET/HEAD)
(Idempotent — creates the rule, or updates it if one with the same description already exists; requires a token with Zone:Rulesets write permission.)
set -euo pipefail
: "${CF_API_TOKEN:?Need CF_API_TOKEN}"
ZONE_NAME="bdfz.net"
ACTION="block"
EXPR='http.host eq "media.bdfz.net" and http.request.method in {"GET" "HEAD"}'
DESC="block direct media.bdfz.net GET/HEAD"
ZONE_ID="$(curl -s -H "Authorization: Bearer $CF_API_TOKEN" -H "Accept: application/json" \
"https://api.cloudflare.com/client/v4/zones?name=${ZONE_NAME}" | jq -r '.result[0].id')"
RULESET_ID="$(curl -s -H "Authorization: Bearer $CF_API_TOKEN" -H "Accept: application/json" \
"https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/phases/http_request_firewall_custom/entrypoint" | jq -r '.result.id')"
EXISTING_ID="$(curl -s -H "Authorization: Bearer $CF_API_TOKEN" -H "Accept: application/json" \
"https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID" \
| jq -r --arg d "$DESC" '.result.rules[] | select(.description==$d) | .id' | head -n1)"
if [ -n "${EXISTING_ID:-}" ] && [ "$EXISTING_ID" != "null" ]; then
curl -s -X PATCH -H "Authorization: Bearer $CF_API_TOKEN" \
-H "Content-Type: application/json" -H "Accept: application/json" \
"https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID/rules/$EXISTING_ID" \
--data "$(jq -n --arg a "$ACTION" --arg e "$EXPR" --arg d "$DESC" \
'{action:$a, expression:$e, description:$d, enabled:true}')" | jq .
else
curl -s -X POST -H "Authorization: Bearer $CF_API_TOKEN" \
-H "Content-Type: application/json" -H "Accept: application/json" \
"https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID/rules" \
--data "$(jq -n --arg a "$ACTION" --arg e "$EXPR" --arg d "$DESC" \
'{action:$a, expression:$e, description:$d, enabled:true}')" | jq .
fi
(Variant to allow only `bdfz.net` referers: set `EXPR='(http.host eq "media.bdfz.net") and not ( http.referer contains "bdfz.net" or http.referer eq "" )'` and `ACTION="managed_challenge"`.)
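Quick check once the rule is live (hypothetical key; a direct request should now be blocked):

curl -sI "https://media.bdfz.net/uploads/2025/01/example.bin" | head -n 1   # expect a 403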
7) Troubleshooting (greatest hits)
- 403 bad-sig — TTL exceeded or the HMAC was rotated. Re-generate the link. (The link list is always freshly signed with the current HMAC.)
- 404 not-found — Wrong key, or the object was deleted/moved. The bucket-wide list is always built from live keys.
- jq: parse error on AE — You were parsing non-JSON. Always use `Content-Type: text/plain` + `--data-binary` and `Accept: application/json`. Avoid `DESCRIBE`; use `system.columns` or `SELECT * … LIMIT 0 FORMAT JSON`.
- /dev/nul typo — It's `/dev/null`.
- Shell `pipe>` prompt — Don't leave a trailing `|`; it makes the next line part of the SQL body.
8) Key operational routines
- Rotate HMAC (see the key-generation sketch after this list)
  - Add `HMAC_KEY_PREV` (old) and keep `HMAC_KEY` (new) for a short window.
  - Rebuild link lists (`dlput` emits new files).
  - Remove `HMAC_KEY_PREV` after the window.
- Limit link blast radius — keep `MAX_TTL_SECONDS=604800` in the Worker; `r2dl.sh` clamps to it.
- Periodic link index — use `~/Downloads/r2_links_latest.txt` as the canonical "current" index.
as the canonical “current” index. - Analytics
Token needs Account → Analytics: Read on your account. Good starters:
SHOW TABLES
SELECT name,type FROM system.columns WHERE table='dl_events' ORDER BY position FORMAT JSON
- Per-key tail & TOP10 queries (see §4 script)
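Generating a fresh secret for the rotation is just a matter of producing a long random string (sketch; the Dashboard secret names are the ones listed in §5.2):

openssl rand -hex 32
# paste the output as the Worker's new HMAC_KEY, move the old value to HMAC_KEY_PREV,
# and update HMAC_KEY in ~/.zshrc so dlput signs with the new key

The starter queries can be sent to the same Analytics Engine SQL endpoint the §4 script uses (sketch; requires CF_API_TOKEN):

curl -sS -X POST \
  "https://api.cloudflare.com/client/v4/accounts/$CF_ACCOUNT_ID/analytics_engine/sql" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: text/plain" -H "Accept: application/json" \
  --data-binary "SELECT name,type FROM system.columns WHERE table='dl_events' ORDER BY position FORMAT JSON" | jq .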
9) Files in this manual
- `setup.sh` — one-time Homebrew + env + alias (`dlput`)
- `r2dl.sh` — upload/sign/verify/analytics & fresh bucket-wide link list
- Worker source (Dashboard quick-edit) with DO meter & AE logging
- WAF one-liner to block direct `media.bdfz.net` downloads