Production-ready agent fleet management system for monitoring Linux servers with secure mTLS authentication, real-time metrics collection, and zero-configuration deployment.
Node Pulse uses a push-based approach where agents actively send metrics to the dashboard, unlike traditional pull-based systems (e.g., Prometheus) that scrape metrics from targets. This provides significant advantages:
- Firewall-Friendly: Agents can push metrics through firewalls, NAT, and network restrictions without requiring inbound ports to be exposed. This makes it ideal for:
  - Agents behind corporate firewalls
  - Servers with strict security policies
  - Cloud instances without public IPs
  - Edge devices with dynamic IPs
- Built-in Reliability: Each agent has a local buffer that stores metrics when the dashboard is unreachable (see the retry sketch after this list), ensuring:
  - No data loss during network outages or dashboard maintenance
  - Automatic retry with exponential backoff
  - Up to 48 hours of buffered metrics (configurable)
- Simplified Network Configuration: No need to:
  - Open inbound firewall rules on monitored servers
  - Configure service discovery mechanisms
  - Maintain allowlists of scraper IPs
  - Set up VPN tunnels for monitoring access
- Real-time Data: Metrics arrive as soon as they're collected (15-second default interval), providing:
  - Immediate visibility into system state
  - Faster incident detection and response
  - No scrape-interval delays
- Scalability: The dashboard scales independently of the number of agents:
  - Valkey Streams buffer incoming metrics during traffic spikes
  - Multiple digest workers process metrics in parallel
  - Horizontal scaling based on stream lag
  - No scrape scheduling or intervals to manage
- Efficient Data Model: Agent-side parsing with simplified metrics:
  - 98.32% bandwidth reduction (61 KB → 1 KB per scrape)
  - 99.8% fewer database rows (1100+ rows → 1 row per scrape)
  - 10-30x faster queries via direct column access
  - Parsing load distributed across agents instead of a central server
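In practice the push loop boils down to "try to POST, back off, and fall back to the local buffer." The sketch below illustrates that pattern in Go; it is a minimal illustration, not the agent's actual code, and the function names, WAL path, and retry limits are hypothetical.

package agent

import (
    "bytes"
    "fmt"
    "net/http"
    "os"
    "time"
)

// pushWithBackoff tries to deliver one metrics payload, retrying with
// exponential backoff. On repeated failure the payload is appended to a
// local WAL file so it can be replayed later (illustrative sketch only).
func pushWithBackoff(endpoint string, payload []byte) error {
    backoff := time.Second
    for attempt := 0; attempt < 5; attempt++ {
        resp, err := http.Post(endpoint, "application/json", bytes.NewReader(payload))
        if err == nil && resp.StatusCode < 300 {
            resp.Body.Close()
            return nil // delivered
        }
        if err == nil {
            resp.Body.Close()
        }
        time.Sleep(backoff)
        backoff *= 2 // exponential backoff between attempts
    }
    // Dashboard unreachable: buffer locally (up to the configured retention).
    return walAppend("/var/lib/nodepulse/buffer.wal", payload) // hypothetical path
}

// walAppend appends one payload to a simple append-only buffer file.
func walAppend(path string, payload []byte) error {
    f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
    if err != nil {
        return err
    }
    defer f.Close()
    _, err = fmt.Fprintf(f, "%s\n", payload)
    return err
}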
Node Pulse Admiral uses a push-based metrics pipeline with Docker Compose orchestration:
node_exporter (:9100) and process_exporter (:9256)
│
│ scrapes (HTTP)
▼
Node Pulse Agent - parses & extracts 39 metrics, WAL buffer
Node Pulse Agent - parses & extracts top N processes, WAL buffer
│
│ pushes JSON via HTTPS POST
▼
Submarines Ingest (:8080) - validates & publishes
│
│ streams to Valkey
▼
Valkey Streams (:6379) - message buffer & backpressure
│
│ consumes (batch of 100)
▼
Submarines Digest (worker) - batch insert
│
│ INSERT query
▼
PostgreSQL (:5432) - admiral.metrics + admiral.process_snapshots
Submarines (Go) - Metrics Pipeline
- Ingest (:8080) - Receives metrics from agents, publishes to Valkey Stream (~5ms response)
- Digest (worker) - Consumes from stream, batch writes to PostgreSQL
- Status (:8081) - Public status pages and health badges (read-only)
- Deployer (worker) - Executes Ansible playbooks for agent deployment
- SSH WS (:6001) - WebSocket terminal for server access
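The Ingest → Valkey hop is what keeps the response time low: the handler only appends to a stream and returns, leaving the slower PostgreSQL write to the digest worker. Below is a rough sketch of that hop, assuming the go-redis client (Valkey is Redis-compatible); the real service runs under Gin and also validates and authenticates each request, which is omitted here.

package ingest

import (
    "context"
    "io"
    "net/http"

    "github.com/redis/go-redis/v9"
)

// publishMetrics accepts a metrics POST and enqueues the raw JSON body onto a
// Valkey stream; a separate digest worker consumes it and batch-inserts into
// PostgreSQL. Illustrative only.
func publishMetrics(rdb *redis.Client) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "bad request", http.StatusBadRequest)
            return
        }
        // XADD is cheap, so the handler can answer in a few milliseconds while
        // the database write happens asynchronously in the digest worker.
        err = rdb.XAdd(context.Background(), &redis.XAddArgs{
            Stream: "nodepulse:metrics:stream",
            Values: map[string]interface{}{"payload": body},
        }).Err()
        if err != nil {
            http.Error(w, "stream unavailable", http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusAccepted)
    }
}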
Flagship (Laravel + React) - Management UI
- Web dashboard with real-time charts
- Server management and configuration
- User authentication (Laravel Fortify)
- API endpoints for metrics and processes
- Served by Nginx (:8090) + PHP-FPM (:9000) inside the container
- Exposed via Caddy reverse proxy in production
Data Layer
- PostgreSQL 18 - Admiral schema (metrics, servers, users, alerts)
- Valkey - Message streams, caching, sessions (Redis-compatible)
Reverse Proxy & Web Server
- Caddy - Edge reverse proxy, TLS termination, automatic HTTPS, routes traffic between services
- Nginx - Application server for Flagship (serves static files, proxies PHP requests to PHP-FPM)
- Linux server (Ubuntu 22.04+ recommended)
- Docker Engine 24.0+
- Docker Compose v2.20+
- Root/sudo access
- Minimum 2GB RAM, 2 CPU cores
Download and deploy the latest release:
# Download latest release
curl -LO https://github.com/node-pulse/admiral/releases/latest/download/node-pulse-admiral-latest.tar.gz
# Verify checksum (optional but recommended)
curl -LO https://github.com/node-pulse/admiral/releases/latest/download/node-pulse-admiral-latest.tar.gz.sha256
sha256sum -c node-pulse-admiral-latest.tar.gz.sha256
# Extract
tar xzf node-pulse-admiral-latest.tar.gz
# Enter the extracted directory (e.g., node-pulse-admiral-0.8.7/)
cd node-pulse-admiral-*
# Run interactive deployment
sudo ./deploy.sh

The deployment script will:
- Guide you through configuration
- Set up mTLS certificates automatically
- Pull pre-built Docker images
- Create initial admin user
- Start all services
Or use the Makefile:
# Quick deployment
make deploy
# Or step-by-step
make env-check # Validate configuration
make pull # Pull latest images
make mtls-setup # Bootstrap mTLS
make up # Start services

For development or manual setup from source:
- Clone the repository:
  git clone https://github.com/node-pulse/admiral.git
  cd admiral
- Copy environment file:
  cp .env.example .env
  # Edit .env with your settings
- Start services (development mode):
  docker compose -f compose.development.yml up -d
This starts all services:
- PostgreSQL (port 5432)
- Valkey (port 6379)
- Submarines Ingest (port 8080)
- Submarines Status (port 8081)
- Submarines SSH WS (port 6001)
- Submarines Digest (background worker)
- Submarines Deployer (background worker)
- Flagship (port 9000 + Vite HMR on 5173)
- Caddy (port 8000)
Or (production mode, with mTLS):
sudo ./scripts/deploy.sh
- Check service status:
  docker compose -f compose.development.yml ps
- View logs:
  docker compose -f compose.development.yml logs -f
  # Or a specific service
  docker compose -f compose.development.yml logs -f submarines-ingest
  docker compose -f compose.development.yml logs -f flagship
Once all services are running (development mode):
- Caddy Reverse Proxy: http://localhost:8000 (routes to Flagship)
- Flagship (Admin Dashboard): http://localhost:9000 (direct access)
- Vite Dev Server (HMR): http://localhost:5173
- Submarines Ingest: http://localhost:8080 (metrics endpoint)
- Submarines Status: http://localhost:8081 (public status pages)
- Submarines SSH WS: http://localhost:6001 (WebSocket terminal)
- PostgreSQL: localhost:5432
- Valkey: localhost:6379
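As a quick sanity check, the snippet below simply dials each of the development ports listed above and reports whether something is listening. It is a convenience sketch, not part of the project.

package main

import (
    "fmt"
    "net"
    "time"
)

// Dial each development port listed above and report whether a service answers.
func main() {
    services := map[string]string{
        "Caddy":             "localhost:8000",
        "Flagship":          "localhost:9000",
        "Vite HMR":          "localhost:5173",
        "Submarines Ingest": "localhost:8080",
        "Submarines Status": "localhost:8081",
        "SSH WS":            "localhost:6001",
        "PostgreSQL":        "localhost:5432",
        "Valkey":            "localhost:6379",
    }
    for name, addr := range services {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Printf("%-18s %-15s DOWN (%v)\n", name, addr, err)
            continue
        }
        conn.Close()
        fmt.Printf("%-18s %-15s up\n", name, addr)
    }
}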
The PostgreSQL database uses a single admiral schema for all application data (shared by Submarines and Flagship):
- servers: Server/agent registry
- metrics: Simplified metrics - 39 essential metrics per row (98% bandwidth reduction vs raw Prometheus)
  - CPU: 6 fields (raw counter values)
  - Memory: 7 fields (bytes)
  - Swap: 3 fields (bytes)
  - Disk: 8 fields (bytes and I/O counters)
  - Network: 8 fields (counters for primary interface)
  - System: 3 load average fields
  - Processes: 3 fields
  - Uptime: 1 field
- alerts: Alert records
- alert_rules: Alert rule configurations
- users: User accounts (Laravel Fortify authentication)
- sessions: User sessions
- ssh_sessions: SSH session audit logs
- private_keys: SSH private keys for server access
- settings: Application settings
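To make the one-row-per-scrape model concrete, the sketch below shows how a few of the 39 columns in admiral.metrics could map onto a Go struct, using field names from the request format documented further down; the actual column names and types in the schema may differ.

package admiral

import "time"

// Metric mirrors a subset of the admiral.metrics columns: one row per scrape,
// with each simplified field stored as its own column so queries can filter
// and aggregate without unpacking JSONB. Only a few of the 39 fields are shown.
type Metric struct {
    ServerID             string    // reference to admiral.servers
    Timestamp            time.Time
    CPUIdleSeconds       float64   // raw counter from node_exporter
    CPUIOWaitSeconds     float64
    CPUSystemSeconds     float64
    CPUUserSeconds       float64
    CPUStealSeconds      float64
    CPUCores             int
    MemoryTotalBytes     uint64
    MemoryAvailableBytes uint64
    // ... remaining swap, disk, network, load, process, and uptime columns
}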
Node Pulse uses agent-side parsing for efficient metrics collection:
- node_exporter runs on each server (localhost:9100)
- Node Pulse Agent scrapes node_exporter locally
- Agent parses Prometheus metrics and extracts 39 essential fields
- Agent sends compact JSON (~1KB) to Submarines
- Submarines writes to PostgreSQL (1 row per scrape)
Benefits:
- 98.32% bandwidth reduction (61KB → 1KB)
- 99.8% fewer database rows (1100+ rows → 1 row)
- 10-30x faster queries (direct column access vs JSONB)
- Distributed parsing load (offloaded to agents)
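Conceptually, the agent-side parsing step is a scan over node_exporter's text exposition that keeps only the samples the agent cares about and discards the rest of the ~61 KB payload. A minimal sketch of that idea follows; the agent's real parser also matches on labels (e.g. mode="idle") and maps samples to the 39 fields, so this is illustrative only.

package agent

import (
    "bufio"
    "strconv"
    "strings"
)

// extractMetrics scans Prometheus text exposition format and keeps only the
// samples named in `wanted`, e.g. node_memory_MemTotal_bytes.
func extractMetrics(exposition string, wanted map[string]bool) map[string]float64 {
    out := make(map[string]float64)
    sc := bufio.NewScanner(strings.NewReader(exposition))
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        if line == "" || strings.HasPrefix(line, "#") {
            continue // skip blank lines and HELP/TYPE comments
        }
        fields := strings.Fields(line)
        if len(fields) < 2 {
            continue
        }
        // Strip any label set, e.g. node_cpu_seconds_total{cpu="0",mode="idle"}.
        name := fields[0]
        if i := strings.IndexByte(name, '{'); i >= 0 {
            name = name[:i]
        }
        if !wanted[name] {
            continue
        }
        if v, err := strconv.ParseFloat(fields[1], 64); err == nil {
            out[name] += v // sums across label sets; the real agent also filters on labels
        }
    }
    return out
}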
Primary Endpoint:
POST http://your-domain/metrics/prometheus
Accepts simplified metrics format with 39 essential fields (agent-side parsed from Prometheus exporters).
Request Format:
{
"node_exporter": [
{
"timestamp": "2025-10-30T12:00:00Z",
"cpu_idle_seconds": 7184190.53,
"cpu_iowait_seconds": 295.19,
"cpu_system_seconds": 2979.08,
"cpu_user_seconds": 7293.29,
"cpu_steal_seconds": 260.7,
"cpu_cores": 4,
"memory_total_bytes": 8326443008,
"memory_available_bytes": 7920050176,
... (39 total fields)
}
],
"process_exporter": [
{
"timestamp": "2025-10-30T12:00:00Z",
"name": "nginx",
"num_procs": 4,
"cpu_seconds_total": 1234.56,
"memory_bytes": 104857600
}
]
}

Other endpoints:
- GET /api/servers - List all servers
- GET /api/servers/:id/metrics - Get metrics for a specific server
- GET /api/processes/top - Get top N processes by CPU or memory
- GET /health - Health check endpoint
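Pushing a payload to the primary endpoint is an ordinary HTTPS POST of the JSON document shown above. A minimal client sketch follows (trimmed payload; in production the agent also presents its mTLS client certificate, which is omitted here).

package main

import (
    "bytes"
    "fmt"
    "net/http"
    "time"
)

func main() {
    // Trimmed example payload in the request format shown above.
    payload := []byte(`{
      "node_exporter": [{
        "timestamp": "2025-10-30T12:00:00Z",
        "cpu_idle_seconds": 7184190.53,
        "cpu_cores": 4,
        "memory_total_bytes": 8326443008
      }],
      "process_exporter": []
    }`)

    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Post(
        "http://your-domain/metrics/prometheus", // primary endpoint from above
        "application/json",
        bytes.NewReader(payload),
    )
    if err != nil {
        fmt.Println("push failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("ingest responded:", resp.Status)
}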
The recommended deployment uses node_exporter + Node Pulse Agent with agent-side parsing:
node_exporter (localhost:9100) → Agent parses locally → Sends 39 metrics (1KB JSON) → Submarines
Update your Node Pulse agent configuration (/etc/nodepulse/nodepulse.yml):
# Prometheus scraper configuration
scrapers:
  prometheus:
    enabled: true
    endpoints:
      - url: "http://127.0.0.1:9100/metrics"
        name: "node_exporter"
        interval: 15s

# Server configuration
server:
  endpoint: "https://your-dashboard-domain/metrics/prometheus"
  format: "prometheus" # Sends parsed JSON in Prometheus format
  timeout: 10s

# Agent behavior
agent:
  server_id: "auto-generated-uuid"
  interval: 15s # How often to scrape and push

# Buffering (Write-Ahead Log for reliability)
buffer:
  enabled: true
  retention_hours: 48
  max_size_mb: 100

# Logging
logging:
  level: "info"
  file: "/var/log/nodepulse/nodepulse.log"
  max_size_mb: 50
  max_backups: 3
  max_age_days: 7

Use the included Ansible playbooks to deploy both node_exporter and the agent:
# 1. Deploy node_exporter (must be deployed first)
ansible-playbook ansible/playbooks/prometheus/deploy-node-exporter.yml -i inventory.yml
# 2. Deploy Node Pulse Agent
# Production (with mTLS):
ansible-playbook ansible/playbooks/nodepulse/deploy-agent-mtls.yml -i inventory.yml
# Development (no mTLS):
ansible-playbook ansible/playbooks/nodepulse/deploy-agent-no-mtls.yml -i inventory.yml

See ansible/playbooks/nodepulse/QUICK_START.md for detailed deployment instructions.
cd submarines
go mod download
# Run ingest server (receives agent metrics)
go run cmd/ingest/main.go
# Run digest worker (consumes from Valkey Stream, writes to PostgreSQL)
go run cmd/digest/main.go

cd flagship
composer install
npm install
# Run development server (all services)
composer dev
# Or run individually
php artisan serve # Laravel web server
npm run dev # Vite dev server
php artisan queue:listen # Queue worker
php artisan pail # Log viewer
# Other commands
php artisan migrate # Run migrations
php artisan test # Run tests

Flagship uses Laravel 12 with Inertia.js for a modern SPA experience:
- Backend: Laravel for API, authentication, and business logic
- Frontend: React 19 with TypeScript
- Routing: Server-side routing via Inertia.js (no client-side router needed)
- UI Components: Radix UI + Tailwind CSS
- Authentication: Laravel Fortify with CAPTCHA support
- Create a controller in flagship/app/Http/Controllers/:
<?php
namespace App\Http\Controllers;
use Inertia\Inertia;
class ExampleController extends Controller
{
public function index()
{
return Inertia::render('example', [
'data' => [...],
]);
}
}

- Create a React component in flagship/resources/js/pages/:
// resources/js/pages/example.tsx
export default function Example({ data }) {
return <div>Your page content</div>;
}

- Add a route in flagship/routes/web.php:
Route::get('/example', [ExampleController::class, 'index']);

# Stop all services (development)
docker compose -f compose.development.yml down
# Stop and remove volumes (WARNING: This deletes all data)
docker compose -f compose.development.yml down -v

# Rebuild and restart a specific service (development)
docker compose -f compose.development.yml up -d --build submarines-ingest
docker compose -f compose.development.yml up -d --build submarines-digest
docker compose -f compose.development.yml up -d --build flagship
# Rebuild all services
docker compose -f compose.development.yml up -d --build

Check logs for specific services:
docker compose -f compose.development.yml logs submarines-ingest
docker compose -f compose.development.yml logs submarines-digest
docker compose -f compose.development.yml logs submarines-deployer
docker compose -f compose.development.yml logs flagship
docker compose -f compose.development.yml logs postgres
docker compose -f compose.development.yml logs valkey

- Ensure PostgreSQL is healthy:
  docker compose -f compose.development.yml ps postgres
- Check database logs:
  docker compose -f compose.development.yml logs postgres
- Verify connection from Submarines:
  docker compose -f compose.development.yml exec submarines-ingest sh
  # Inside the container: check that postgres is reachable on port 5432
docker compose -f compose.development.yml logs valkey
docker compose -f compose.development.yml exec valkey valkey-cli --raw incr ping

- Check Laravel application logs:
  docker compose -f compose.development.yml logs flagship
  # Or check the mounted logs
  tail -f logs/flagship/laravel.log
- Access the Laravel container:
  docker compose -f compose.development.yml exec flagship bash
  php artisan about # Show Laravel environment info
- Check the frontend build:
  # Inside the flagship container
  npm run build # Production build
  npm run dev   # Development with hot reload
- Check if the Vite dev server is running:
  docker compose -f compose.development.yml logs flagship | grep vite
- Ensure the Submarines API is accessible:
  curl http://localhost:8080/health
Check stream lag (how many messages are waiting to be processed):
docker compose -f compose.development.yml exec valkey valkey-cli \
--raw XPENDING nodepulse:metrics:stream nodepulse-workers

If lag is high, consider scaling digest workers.
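The same check can be automated. The sketch below uses the go-redis client against the stream and consumer-group names from the command above; the threshold is an arbitrary example, not a project default.

package main

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

// Report how far the digest workers are behind: entries still pending in the
// consumer group plus the overall stream length.
func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // Valkey is Redis-compatible

    pending, err := rdb.XPending(ctx, "nodepulse:metrics:stream", "nodepulse-workers").Result()
    if err != nil {
        fmt.Println("XPENDING failed:", err)
        return
    }
    length, _ := rdb.XLen(ctx, "nodepulse:metrics:stream").Result()

    fmt.Printf("stream length: %d, pending in group: %d\n", length, pending.Count)
    if pending.Count > 1000 { // example threshold only
        fmt.Println("lag is high: consider adding digest workers")
    }
}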
For production:
- Update all secrets in .env
- Use the production target in Dockerfiles
- Set GIN_MODE=release for Submarines
- Set APP_ENV=production and APP_DEBUG=false for Flagship
- Run php artisan optimize for Laravel optimization
- Build frontend assets with npm run build
- Configure proper domains in .env (ADMIN_DOMAIN, INGEST_DOMAIN, etc.)
- Use Caddyfile.prod for automatic HTTPS via Let's Encrypt
- Set up a proper backup strategy for PostgreSQL
- Configure monitoring and alerting
- Scale digest workers based on Valkey Stream lag
- Configure Laravel authentication and authorization
- Set up proper session and cache drivers
For issues and questions, please open an issue on GitHub.