In our previous blog, we explored how we integrated data collection and predictive modeling into our API. Since then, we’ve taken our monitoring system to the next level with a dynamic agent-based approach. Here’s what we’ve accomplished:
The Problem We Solved
As we scaled up, we realized that each server’s monitoring requirements could differ. Some might use `dnf`, others `yum`, or even `pkg` to track package updates. Managing this diversity with static scripts was inefficient. We needed a dynamic solution where:
- Each server would fetch its configuration from the API.
- The API would dictate what checks each agent should perform and how to perform them.
The Dynamic Agent: `agent.sh`
We revamped our approach by introducing `agent.sh`. This script interacts with the API to fetch specific monitoring commands and executes them accordingly. Here’s how it works:
- The agent queries the `/api/agent/config` endpoint.
- The API responds with a list of checks tailored to the server, including:
  - The check name (e.g., `os_updates`).
  - The command to execute (e.g., `dnf check-update | wc -l`).
- The agent executes the commands and sends the results back to the API for storage and analysis.
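The first step can be exercised on its own from a shell. This sketch uses the same placeholder endpoint and API key as the examples in this post, and prints each check the API would hand to the agent:

```shell
# Fetch the agent's check list and print each check as "name: command".
# API_ENDPOINT and API_KEY are the placeholder values used throughout this post.
API_ENDPOINT="http://127.0.0.1:5000/api/agent/config"
API_KEY="YOUR_API_KEY"

curl -s -H "X-API-Key: $API_KEY" "$API_ENDPOINT" |
  jq -r '.checks[] | "\(.name): \(.command)"'
```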
The Code in Action
Example API Configuration Response:
```json
{
  "checks": [
    {
      "name": "os_updates",
      "command": "/usr/bin/dnf check-update | wc -l"
    }
  ]
}
```
The Agent Script (`agent.sh`):
```bash
#!/bin/bash

# API endpoint and API key
API_ENDPOINT="http://127.0.0.1:5000/api/agent/config"
API_KEY="YOUR_API_KEY"

# Fetch the configuration
CONFIG=$(curl -s -H "X-API-Key: $API_KEY" "$API_ENDPOINT")

# Parse and execute each check
echo "$CONFIG" | jq -c '.checks[]' | while read -r CHECK; do
  NAME=$(echo "$CHECK" | jq -r '.name')
  COMMAND=$(echo "$CHECK" | jq -r '.command')
  RESULT=$(bash -c "$COMMAND" 2>/dev/null)

  # Send the result back to the API
  curl -s -X POST http://127.0.0.1:5002/api/submit \
    -H "Content-Type: application/json" \
    -H "X-API-Key: $API_KEY" \
    -d "{\"field\": \"$NAME\", \"value\": $RESULT}"
done
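One fragile spot in the script is the hand-built JSON payload: if a check ever emits multi-line or non-numeric output, the interpolated string breaks the request body. A hedged hardening sketch (the `NAME`/`RESULT` variables stand in for the ones in `agent.sh`) is to let jq construct the payload, so the value is always valid JSON:

```shell
# Hypothetical hardening sketch: build the submission payload with jq
# so that arbitrary command output cannot break the JSON body.
NAME="os_updates"
RESULT="42"   # stand-in for a check's output

# --arg passes values as safely escaped JSON strings; tonumber converts
# the value when it is numeric, and the catch keeps it as a string otherwise.
PAYLOAD=$(jq -nc --arg field "$NAME" --arg value "$RESULT" \
  '{field: $field, value: ($value | try tonumber catch $value)}')

echo "$PAYLOAD"
```

The resulting `$PAYLOAD` could then be posted with `curl ... -d "$PAYLOAD"` in place of the inline string.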
Benefits of the Dynamic Agent Approach
- Centralized Control: All monitoring configurations are stored in the API’s database, making updates seamless.
- Dynamic Adjustments: We can add or modify checks without manually updating scripts on individual servers.
- Scalability: As we onboard more servers, the API dynamically provides the necessary configurations.
- Flexibility: Different servers can have tailored checks depending on their role, OS, or requirements.
What’s Next?
We’re already brainstorming the next steps:
- Extending the Agent: Supporting more complex checks, like disk usage or CPU metrics.
- Webhook Alerts: Automatically triggering actions or notifications when thresholds are breached.
- Dashboard Integration: Visualizing the data collected by our API in real-time.
- Modular Design: Rewriting the script in a modular fashion to keep everything manageable.
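As a rough sketch of that modular direction (names and layout here are hypothetical, not the final design), each check could become a small function and a dispatcher could map check names onto them:

```shell
#!/bin/bash
# Hypothetical modular sketch: one function per check, plus a dispatcher
# that resolves a check name like "disk_usage" to the function check_disk_usage.

check_os_updates() { dnf check-update 2>/dev/null | wc -l; }
check_disk_usage() { df -P / | awk 'NR==2 {print $5}' | tr -d '%'; }

run_check() {
  local name="$1"
  if declare -F "check_${name}" >/dev/null; then
    "check_${name}"
  else
    echo "unknown check: ${name}" >&2
    return 1
  fi
}
```

New checks would then be added by defining a new `check_*` function, without touching the dispatch logic.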
This upgrade brings us closer to a fully autonomous, predictive monitoring system that adapts as our infrastructure evolves. Stay tuned for more updates as we continue to refine and expand our system!