Data Flow
This document describes how data flows through the Linkr system, from hotspot telemetry to coverage maps to rewards distribution.
Data Types
Linkr processes several categories of data:
| Category | Examples | Volume | Latency Requirements |
|---|---|---|---|
| Telemetry | Heartbeats, performance metrics | High | Near-real-time |
| Coverage | Location, signal estimates | Medium | Minutes |
| Sessions | Connection events, usage | High | Real-time |
| Rewards | Challenges, proofs, calculations | Medium | Eventual |
| Configuration | Hotspot settings, user preferences | Low | Best-effort |
Telemetry Flow
Hotspots continuously report telemetry data to the Linkr Network.
Ingestion Path
Hotspot Telemetry Processor Data Stores
│ │ │
│ POST /telemetry │ │
│ {heartbeat + metrics} │ │
├─────────────────────────────►│ │
│ │ │
│ │ 1. Validate payload │
│ │ 2. Normalize timestamps │
│ │ 3. Enrich with hotspot ID │
│ │ │
│ │ Write to time-series DB │
│ ├─────────────────────────────►│
│ │ │
│ │ Update status in cache │
│ ├─────────────────────────────►│
│ │ │
│ 200 OK │ │
│◄─────────────────────────────┤ │
Telemetry Payload
Hotspots send telemetry every 60 seconds:
```json
{
  "hotspot_id": "hs_7xk2m9p4",
  "timestamp": "2024-01-15T14:32:00Z",
  "sequence": 847293,
  "metrics": {
    "uptime_seconds": 86400,
    "active_connections": 3,
    "total_connections_24h": 47,
    "bandwidth": {
      "upload_mbps": 12.4,
      "download_mbps": 45.2
    },
    "latency_ms": 12,
    "cpu_percent": 23,
    "memory_percent": 41
  }
}
```
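The `POST /telemetry` call from the ingestion diagram can be exercised with a small client. This is a minimal sketch, assuming a hypothetical base URL (`https://api.linkr.example`) and bearer-token authentication; only the endpoint path and the payload shape come from this document.

```python
# Minimal hotspot-side sketch of the POST /telemetry call. The host name and
# bearer-token auth are assumptions; the path and payload shape come from the
# ingestion diagram and example payload above.
import requests

def send_telemetry(payload: dict, token: str) -> bool:
    resp = requests.post(
        "https://api.linkr.example/telemetry",  # assumed host; path from the ingestion diagram
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    # The Telemetry Processor acknowledges an accepted report with 200 OK.
    return resp.status_code == 200
```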
Telemetry Processing
The Telemetry Processor performs several operations:
- Validation: Check that the payload is well-formed and from a registered hotspot
- Deduplication: Detect and discard duplicate reports (using sequence numbers)
- Normalization: Convert all timestamps to UTC, standardize units
- Enrichment: Add metadata (operator ID, location, etc.)
- Storage: Write to the time-series database
- Alerting: Trigger alerts if metrics exceed thresholds
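The sketch below illustrates the validation, deduplication, normalization, and enrichment steps listed above. It assumes an in-memory registry and a per-hotspot sequence high-water mark; function and field names beyond those in the example payload are hypothetical.

```python
# Illustrative sketch of the Telemetry Processor steps: validation,
# sequence-number deduplication, UTC normalization, and enrichment.
from datetime import datetime, timezone

REQUIRED_FIELDS = {"hotspot_id", "timestamp", "sequence", "metrics"}
last_seen_sequence: dict[str, int] = {}  # per-hotspot high-water mark

def process_report(report: dict, registry: dict) -> dict | None:
    # Validation: well-formed payload from a registered hotspot
    if not REQUIRED_FIELDS <= report.keys() or report["hotspot_id"] not in registry:
        return None
    hotspot_id = report["hotspot_id"]

    # Deduplication: discard reports whose sequence number was already seen
    if report["sequence"] <= last_seen_sequence.get(hotspot_id, -1):
        return None
    last_seen_sequence[hotspot_id] = report["sequence"]

    # Normalization: convert the timestamp to UTC
    ts = datetime.fromisoformat(report["timestamp"].replace("Z", "+00:00"))
    report["timestamp"] = ts.astimezone(timezone.utc).isoformat()

    # Enrichment: attach operator and location metadata from the registry
    report["operator_id"] = registry[hotspot_id]["operator_id"]
    report["location"] = registry[hotspot_id]["location"]
    return report  # ready to write to the time-series DB
```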
Telemetry is retained for 90 days at full resolution. Older data is aggregated to hourly and daily summaries.
Coverage Data Flow
Coverage information flows from hotspot registrations and telemetry to the global map.
Coverage Update Path
Hotspot Registry Coverage Engine Map Service
│ │ │
│ Hotspot status change │ │
├─────────────────────────►│ │
│ │ │
│ │ 1. Fetch hotspot data │
│ │ 2. Calculate coverage │
│ │ 3. Update coverage index │
│ │ │
│ │ Regenerate affected tiles │
│ ├───────────────────────────►│
│ │ │
│ │ │ Invalidate CDN
│ │ │ cache
Coverage Calculation
The Coverage Engine computes coverage using:
- Static factors:
  - Device type and antenna specifications
  - Indoor vs outdoor placement
  - Reported coverage radius
- Dynamic factors:
  - Current online status
  - Recent performance metrics
  - User-reported signal quality
- Environmental factors:
  - Known obstacles (buildings, terrain)
  - Interference from nearby hotspots
  - Historical signal propagation data
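A toy illustration of how these factors could be combined into an effective coverage radius. The weights, field names, and formula are assumptions for illustration, not the Coverage Engine's actual model.

```python
# Hypothetical blend of static, dynamic, and environmental factors into an
# effective coverage radius. All coefficients here are illustrative assumptions.
def effective_radius_m(hotspot: dict) -> float:
    radius = hotspot["reported_radius_m"]            # static: reported coverage radius
    if hotspot["placement"] == "indoor":             # static: indoor placement attenuates
        radius *= 0.6
    if not hotspot["online"]:                        # dynamic: offline hotspots cover nothing
        return 0.0
    radius *= min(1.0, hotspot["recent_quality"])    # dynamic: recent performance, 0..1
    radius *= 1.0 - hotspot["obstruction_factor"]    # environmental: obstacles, 0..1
    return radius

print(effective_radius_m({
    "reported_radius_m": 150, "placement": "outdoor",
    "online": True, "recent_quality": 0.9, "obstruction_factor": 0.2,
}))  # -> 108.0
```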
Coverage Data Model
```json
{
  "hotspot_id": "hs_7xk2m9p4",
  "center": {
    "lat": 40.7128,
    "lng": -74.0060
  },
  "coverage_radius_m": 150,
  "signal_quality": {
    "0-50m": "excellent",
    "50-100m": "good",
    "100-150m": "fair"
  },
  "last_updated": "2024-01-15T14:35:00Z",
  "status": "online"
}
```
Session Data Flow
User sessions generate real-time events that track connectivity.
Session Lifecycle
User App Session Manager Hotspot
│ │ │
│ 1. Request session token │ │
├─────────────────────────────►│ │
│ │ │
│ │ 2. Create session │
│ │ record │
│ │ │
│ 3. Receive token │ │
│◄─────────────────────────────┤ │
│ │ 4. Notify hotspot │
│ ├───────────────────────►│
│ │ │
│ 5. Connect using token │ │
├──────────────────────────────────────────────────────►│
│ │ │
│ │ 6. Session started │
│ │◄───────────────────────┤
│ │ │
│ ◄─── Active Session ───► │
│ │ │
│ │ 7. Periodic updates │
│ │◄───────────────────────┤
│ │ │
│ 8. Disconnect │ │
├──────────────────────────────────────────────────────►│
│ │ │
│ │ 9. Session ended │
│ │◄───────────────────────┤
│ │ │
│ │ 10. Finalize record │
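A minimal sketch of the Session Manager's part of this lifecycle (steps 2, 3, and 10), assuming an in-memory session store and URL-safe random tokens; the identifiers and record structure are illustrative.

```python
# Sketch of session creation and finalization by the Session Manager.
# The in-memory store and token format are assumptions.
import secrets
from datetime import datetime, timezone

sessions: dict[str, dict] = {}

def create_session(user_id: str, hotspot_id: str) -> str:
    token = secrets.token_urlsafe(16)
    sessions[token] = {
        "user_id": user_id,
        "hotspot_id": hotspot_id,
        "started_at": datetime.now(timezone.utc).isoformat(),
        "ended_at": None,
    }
    # In the real flow the Session Manager also notifies the hotspot (step 4).
    return token  # returned to the user app (step 3)

def finalize_session(token: str, bytes_up: int, bytes_down: int) -> dict:
    record = sessions[token]
    record["ended_at"] = datetime.now(timezone.utc).isoformat()
    record["bytes_transferred"] = {"upload": bytes_up, "download": bytes_down}
    return record  # persisted as the session record shown below
```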
Session Record
```json
{
  "session_id": "sess_m8k2p9x4",
  "user_id": "usr_a3f2b7c9",
  "hotspot_id": "hs_7xk2m9p4",
  "started_at": "2024-01-15T14:30:00Z",
  "ended_at": "2024-01-15T15:45:00Z",
  "duration_seconds": 4500,
  "bytes_transferred": {
    "upload": 15728640,
    "download": 524288000
  },
  "quality_score": 0.92
}
```
Rewards Data Flow
The rewards flow involves challenges, proofs, and settlement.
Challenge-Response Flow
Rewards Engine Hotspot
│ │
│ 1. Generate challenge │
│ (random nonce + timestamp) │
│ │
│ POST /challenge │
├─────────────────────────────────────────────────────►│
│ │
│ │ 2. Sign response
│ │ (nonce + hotspot key)
│ │
│ 3. Signed response │
│◄─────────────────────────────────────────────────────┤
│ │
│ 4. Verify signature │
│ 5. Record proof │
│ │
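The exchange above can be sketched end to end. The choice of Ed25519 (via the `cryptography` package) and the exact challenge layout are assumptions; the document only specifies a random nonce plus timestamp signed with the hotspot's key.

```python
# Sketch of the challenge-response exchange. Signature scheme and message
# layout are assumptions for illustration.
import os, time
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Hotspot side: key pair provisioned at registration (assumption)
hotspot_key = ed25519.Ed25519PrivateKey.generate()
hotspot_pub = hotspot_key.public_key()

# 1. Rewards Engine generates a challenge (random nonce + timestamp)
challenge = os.urandom(32) + int(time.time()).to_bytes(8, "big")

# 2-3. Hotspot signs the challenge and returns the signature
signature = hotspot_key.sign(challenge)

# 4-5. Rewards Engine verifies the signature and records the proof
try:
    hotspot_pub.verify(signature, challenge)
    proof_valid = True
except InvalidSignature:
    proof_valid = False
print(proof_valid)  # True
```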
Rewards Calculation Flow
Telemetry Data Session Data Challenge Proofs
│ │ │
└────────────────┼────────────────┘
│
▼
┌────────────────┐
│ Rewards Engine │
│ (aggregation) │
└───────┬────────┘
│
┌───────────┼───────────┐
│ │ │
▼ ▼ ▼
Availability Performance Utilization
Score Score Score
│ │ │
└───────────┼───────────┘
│
▼
┌────────────────┐
│ Calculate │
│ Final Reward │
└───────┬────────┘
│
▼
┌────────────────┐
│ Credit to │
│ Operator Acct │
└────────────────┘
Rewards Aggregation
Rewards are aggregated over epochs (configurable time periods):
| Metric | Aggregation Method |
|---|---|
| Availability | % of successful challenges |
| Performance | Weighted average of metrics |
| Utilization | Sum of session-minutes served |
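A toy aggregation following this table: availability as the fraction of successful challenges in the epoch, performance as a weighted average of normalized metrics, and utilization as total session-minutes served. The metric names and weights are assumptions.

```python
# Hypothetical per-epoch score aggregation; weights and metric names are assumed.
def epoch_scores(challenges: list[bool], perf_metrics: dict[str, float],
                 session_minutes: list[float]) -> dict[str, float]:
    # Availability: % of successful challenges in the epoch
    availability = sum(challenges) / len(challenges) if challenges else 0.0
    # Performance: weighted average of normalized metrics (each in 0..1)
    perf_weights = {"latency": 0.4, "bandwidth": 0.4, "uptime": 0.2}
    performance = sum(perf_weights[k] * perf_metrics[k] for k in perf_weights)
    # Utilization: sum of session-minutes served
    utilization = sum(session_minutes)
    return {"availability": availability, "performance": performance,
            "utilization": utilization}

scores = epoch_scores(
    challenges=[True, True, False, True],
    perf_metrics={"latency": 0.9, "bandwidth": 0.8, "uptime": 1.0},
    session_minutes=[75.0, 30.0, 12.5],
)
print(scores)  # availability 0.75, performance ≈ 0.88, utilization 117.5
```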
Data Retention
| Data Type | Retention Period | Storage |
|---|---|---|
| Telemetry (raw) | 90 days | Time-series DB |
| Telemetry (aggregated) | 2 years | Time-series DB |
| Session records | 1 year | Primary DB |
| Challenge proofs | 6 months | Primary DB |
| Rewards history | Indefinite | Primary DB |
| Coverage snapshots | 30 days | Object storage |
Real-time vs Batch Processing
Some data flows through the system in real time; other data is processed in batches:
Real-time
- Hotspot status updates (online/offline)
- Session start/end events
- Coverage map updates for status changes
Near-real-time (< 5 minutes)
- Telemetry ingestion and processing
- Challenge-response verification
- Performance tier updates
Batch (hourly/daily)
- Rewards calculation and settlement
- Coverage recalculation (full)
- Analytics aggregation
- Data archival
Next Steps