# Digitalia Full Node — Administrator Guide

## Purpose
This guide is for infrastructure operators and SRE teams responsible for deploying and operating a Digitalia Full Node in production and staging environments.
Use this guide together with:
- docs/IT_OPERATIONS_MANUAL.md
- JFOS_FULL_NODE_PRODUCTION_OPS_RUNBOOK.md
- JFOS_DIGITALIA_FULL_NODE_ARCHITECTURE.md
## 1. Full Node Overview
A Digitalia Full Node maintains current chain state, serves JSON-RPC and WebSocket APIs, and participates in peer-to-peer network propagation.
Primary runtime ports:
| Port | Protocol | Purpose |
|---|---|---|
| 8545 | TCP | JSON-RPC HTTP |
| 8546 | TCP | JSON-RPC WebSocket |
| 8547 | TCP | GraphQL |
| 30303 | TCP/UDP | P2P |
Health probes:
- /health/live
- /health/ready
Core smoke RPC methods:
- web3_clientVersion
- net_listening
- net_peerCount
- dvm_blockNumber
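The `dvm_blockNumber` result comes back as a 0x-prefixed hex quantity, so smoke-check tooling usually needs to decode it. A minimal sketch of extracting and converting the value, using a hypothetical example response body rather than data from a live node:

```shell
# Hypothetical example response body (not from a live node):
resp='{"jsonrpc":"2.0","id":1,"result":"0x1b4"}'

# Extract the hex "result" field, then convert it to decimal.
hex=$(printf '%s' "$resp" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')
printf '%d\n' "$hex"
```

Against a live node, `resp` would be the body returned by a `dvm_blockNumber` POST to port 8545.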
## 2. Prerequisites
| Requirement | Recommended baseline |
|---|---|
| CPU | 8+ vCPU |
| Memory | 16+ GB RAM |
| Storage | Fast SSD/NVMe, persistent volume for chain data |
| Network | Stable low-latency internet, open 30303 TCP/UDP |
| Container runtime | Docker or Kubernetes-compatible runtime |
Operational requirements:
- Persistent storage mounted at data path
- Monitoring stack (Prometheus/Grafana) for production
- Alerting integrated with on-call workflow
## 3. Deployment

### 3.1 Docker deployment
```bash
docker run -d \
  --name digitalia-full-node \
  --restart unless-stopped \
  -p 8545:8545 -p 8546:8546 -p 8547:8547 \
  -p 30303:30303/tcp -p 30303:30303/udp \
  -v "$PWD/data:/opt/digitalia/data" \
  cosimousa/digitalia-full-node:latest \
  --network=mainnet --sync-mode=full --data-path=/opt/digitalia/data
```
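Before putting a freshly started container into service, it helps to poll the readiness endpoint until it succeeds rather than assuming the node is immediately usable. A minimal sketch of such a wait loop; the retry count, delay, and the probe command in the commented example are illustrative assumptions:

```shell
# Sketch of a readiness wait loop. Retry count, delay, and the probe
# command are illustrative assumptions, not fixed product behavior.
wait_for() {
  cmd=$1; retries=${2:-30}; delay=${3:-2}
  i=0
  while [ "$i" -lt "$retries" ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      echo "ready after $i attempts"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "not ready after $retries attempts" >&2
  return 1
}

# Example against a live node:
# wait_for 'curl -sf http://localhost:8545/health/ready' 60 5
```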
### 3.2 Kubernetes deployment
```bash
kubectl apply -f k8s/configmaps.yaml
kubectl apply -f k8s/pvc.yaml
kubectl apply -f k8s/rbac.yaml
kubectl apply -f k8s/services.yaml
kubectl apply -f k8s/deployments.yaml
kubectl rollout status deployment/digitalia-dvm -n digitalia --timeout=600s
```
### 3.3 Warm restart

Kubernetes:

```bash
kubectl rollout restart deployment/digitalia-dvm -n digitalia
kubectl rollout status deployment/digitalia-dvm -n digitalia --timeout=600s
```

Docker:

```bash
docker restart digitalia-full-node
```
## 4. Day-2 Operations

### 4.1 Health validation
```bash
curl -sf http://localhost:8545/health/live
curl -sf http://localhost:8545/health/ready
curl -sf -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"net_listening","params":[],"id":1}'
curl -sf -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"dvm_blockNumber","params":[],"id":1}'
```
### 4.2 Log inspection

Kubernetes:

```bash
kubectl logs -n digitalia deploy/digitalia-dvm -f
```

Docker:

```bash
docker logs -f digitalia-full-node
```
### 4.3 Routine checks
Weekly:
- Health probes return HTTP 200.
- `net_listening` returns `true`.
- Peer count remains stable.
- Block height advances continuously.

Monthly:
- Refresh the image and review CVEs.
- Review disk growth on the node data volume.
- Run a rollback drill in a non-production environment.
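The weekly items above can be scripted as a simple check runner that prints PASS/FAIL per item. A minimal sketch; the `check` helper, its labels, and the demo check are illustrative, and the commented lines show where the real probe commands from section 4.1 would go:

```shell
# Minimal check-runner sketch: each check pairs a label with a command
# and prints PASS or FAIL. Labels and the demo check are illustrative.
fail=0
check() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $label"
  else
    echo "FAIL: $label"
    fail=1
  fi
}

# Real checks would look like:
# check "liveness probe"  curl -sf http://localhost:8545/health/live
# check "readiness probe" curl -sf http://localhost:8545/health/ready
check "shell sanity" true
echo "done (fail=$fail)"
```

A non-zero `fail` at the end is a convenient exit signal for cron or CI wrappers.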
### 4.4 RevenueRouter deployment record gate (Mainnet)

Administrators must enforce deployment-record validation for revenue-sharing policy readiness:
- Pre-launch accepted mode: `digitalia-contracts/deployments/revenue-router.digitalia-mainnet.planned.json`
  - Command: `npm run validate:mainnet-planned`
- Post-launch required mode: `digitalia-contracts/deployments/revenue-router.digitalia-mainnet.json`
  - Command: `npm run validate:mainnet-deployment`
Always run these policy gates during operations checks:

```bash
cd digitalia-contracts
npm run validate:revenue-policy
npm run validate:service-pricing
npm run validate:revenue-production
```
If strict deployment validation fails after launch, treat it as a production readiness blocker.
## 5. Monitoring and SLO
Suggested SLO targets:
| Metric | Target |
|---|---|
| JSON-RPC availability | >= 99.5% |
| Block lag vs network head | <= 5 blocks |
| P2P peer count | >= 3 |
Suggested alert triggers:
- Peer count is 0 for 5 minutes.
- Block height unchanged for 10 minutes.
- Pod restarts exceed 2 in 10 minutes.
- Memory pressure above the safe threshold.
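The 99.5% availability target translates into a concrete downtime budget that on-call teams can reason about. A quick back-of-the-envelope computation over a 30-day window:

```shell
# Downtime budget implied by the 99.5% availability target over 30 days.
awk 'BEGIN {
  slo = 99.5
  minutes = 30 * 24 * 60               # 43200 minutes in the window
  budget = minutes * (100 - slo) / 100
  printf "allowed downtime: %.0f minutes per 30 days\n", budget
}'
```

That is, roughly 3.6 hours of total JSON-RPC unavailability per 30-day window before the target is breached.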
## 6. Incident Playbooks

### 6.1 Peer count drops to zero
- Check pod/container status.
- Verify P2P port exposure and firewall rules.
- Review peer discovery logs.
- Restart node to trigger peer rediscovery.
### 6.2 Sync stalled
- Compare block height over time.
- Query dvm_syncing and net_peerCount.
- Check data volume health and free space.
- If persistent, restore from known-good snapshot.
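When comparing block height over time, the lag against the network head can be computed directly from two `dvm_blockNumber`-style hex heights. A minimal sketch with hypothetical example values (a real script would fetch `network_head` from a trusted reference endpoint):

```shell
# Hypothetical hex heights: local head vs a reference endpoint's head.
local_head=0x2a5f10
network_head=0x2a5f17

# Shell arithmetic accepts 0x-prefixed hex directly.
lag=$(( network_head - local_head ))
echo "block lag: $lag"
if [ "$lag" -gt 5 ]; then
  echo "ALERT: lag exceeds the 5-block SLO target"
fi
```

With the example values above, the lag is 7 blocks, which would trip the SLO threshold from section 5.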
### 6.3 Crash loop or OOM
- Inspect deployment/pod events.
- Increase memory requests/limits with matching JVM settings.
- Verify data directory permissions and integrity.
### 6.4 RPC latency spike
- Check node resource saturation.
- Verify external request volume and abusive clients.
- Apply rate-limiting or isolate public RPC traffic.
## 7. Security Baseline
- Expose RPC interfaces only as needed.
- Restrict admin surfaces to trusted networks.
- Run regular vulnerability scans on node images.
- Document and maintain a backup/snapshot strategy for chain data.
- Enforce least-privilege access to deployment credentials.
## 8. Rollback
Kubernetes:

```bash
kubectl rollout undo deployment/digitalia-dvm -n digitalia
kubectl rollout status deployment/digitalia-dvm -n digitalia --timeout=600s
```

Docker:

```bash
docker stop digitalia-full-node && docker rm digitalia-full-node
# Re-run the docker run command from section 3.1 with the previous image tag.
```
After rollback:
- Re-run health probes.
- Confirm RPC smoke checks.
- Confirm block progression.
## 9. Documentation Map
- docs/FULL_NODE_ADMINISTRATOR_GUIDE.md (this file)
- docs/FULL_NODE_USER_MANUAL.md
- JFOS_FULL_NODE_PRODUCTION_OPS_RUNBOOK.md
- docs/IT_OPERATIONS_MANUAL.md
- JFOS_DIGITALIA_FULL_NODE_ARCHITECTURE.md