
Scanning Your Network for SNMP-Enabled Servers

Part 1 of Our Network Monitoring Series

As our networks grow and demand increases, understanding the performance and state of our infrastructure becomes essential. In this series, we’ll walk through setting up a network monitoring system using SNMP (Simple Network Management Protocol) that collects, processes, and visualizes critical data.

In this first part, we’ll focus on creating a Python script that discovers and scans an entire subnet, identifying servers that respond to SNMP requests. This script not only lists active servers but also prepares them for further monitoring, setting the stage for more advanced data collection in upcoming posts. By the end of this series, we aim to have a comprehensive system for network health monitoring.


Step 1: Setting Up the Subnet Scanner

Our Python-based SNMP scanner discovers all servers within a given subnet that respond to SNMP requests. Each server that responds is added to a list of active devices for continuous monitoring. Later scans can then skip addresses that never answered and focus on the known-responsive nodes, which keeps the scan fast as the list of monitored servers grows.

The basic steps in the scanner:

  • Define the Subnet: Set up the target subnet for scanning (a short sketch of this step follows the list).
  • Check SNMP Responses: Attempt an SNMP request on each device within the subnet.
  • List Active Servers: Save responding servers for ongoing monitoring, while ignoring non-responding servers.
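
Before diving into the full script, here is a minimal sketch of the first step, using Python's standard ipaddress module to enumerate every usable host address in the subnet. The scanner further down uses the same calls:

import ipaddress

# Enumerate every usable host address in the target /24
subnet = ipaddress.ip_network("192.168.10.0/24")
for ip in subnet.hosts():    # 192.168.10.1 through 192.168.10.254
    print(ip)                # each ip is an ipaddress.IPv4Address object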

Python Script Overview

Our scanner script, written in Python, leverages the pysnmp library. Here’s a high-level view of what this script does:

  1. Subnet Configuration – We set the target subnet (e.g., 192.168.10.0/24) and SNMP community string (default is public).
  2. SNMP GET Requests – Each IP in the subnet receives a quick SNMP GET for a well-known OID (Object Identifier), sysDescr (1.3.6.1.2.1.1.1.0), to confirm connectivity (see the sketch after this list).
  3. Efficient Rescanning – Responsive servers are saved in a list, allowing subsequent scans to target only these known responsive nodes, reducing load and time.
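
To make step 2 concrete, here is a minimal sketch of a single SNMP GET against one host, using the same pysnmp calls as the full script. The OID 1.3.6.1.2.1.1.1.0 is sysDescr.0, the device's textual description; the target IP at the bottom is just an example:

from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def get_sysdescr(ip, community="public"):
    # Send one SNMP GET for sysDescr.0 and return its value, or None on any error/timeout
    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community),
        UdpTransportTarget((ip, 161), timeout=2, retries=0),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
    ))
    if error_indication or error_status:
        return None
    return str(var_binds[0][1])    # var_binds is a list of (OID, value) pairs

if __name__ == "__main__":
    print(get_sysdescr("192.168.10.5"))    # example host; prints None if unreachable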

import sys
import ipaddress
import json
import subprocess
from pathlib import Path

from pysnmp.hlapi import *

# Configure the target subnet and SNMP settings
SUBNET = "192.168.10.0/24"
COMMUNITY = "public"
SNMP_OID = '1.3.6.1.2.1.1.1.0'
RESPONDING_SERVERS_FILE = "responding_servers.json"

# Load the previous list of responsive servers
def load_responding_servers():
    if Path(RESPONDING_SERVERS_FILE).is_file():
        with open(RESPONDING_SERVERS_FILE, "r") as file:
            return set(json.load(file))
    return set()

# Save the list of responsive servers
def save_responding_servers(servers):
    with open(RESPONDING_SERVERS_FILE, "w") as file:
        json.dump(list(servers), file)

# Check if a server responds to an SNMP GET request
def check_snmp_response(ip, community):
    iterator = getCmd(
        SnmpEngine(),
        CommunityData(community),
        UdpTransportTarget((str(ip), 161), timeout=2, retries=0),
        ContextData(),
        ObjectType(ObjectIdentity(SNMP_OID))
    )
    errorIndication, errorStatus, errorIndex, varBinds = next(iterator)
    return not (errorIndication or errorStatus)

# Call our main SNMP-InfluxDB script for each responsive server
def run_snmp_influx_script(ip, community):
    subprocess.run([sys.executable, "store_snmp.py", str(ip), community])

# Main loop for scanning the subnet
def main():
    subnet = ipaddress.ip_network(SUBNET)
    previous_responding_servers = load_responding_servers()
    current_responding_servers = set()

    servers_to_check = previous_responding_servers if previous_responding_servers else subnet.hosts()

    for ip in servers_to_check:
        if check_snmp_response(ip, COMMUNITY):
            run_snmp_influx_script(ip, COMMUNITY)
            current_responding_servers.add(str(ip))

    save_responding_servers(current_responding_servers)

if __name__ == "__main__":
    main()
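
The scanner hands each responsive IP off to a second script, store_snmp.py, which will do the actual metric collection and InfluxDB writes in the next post. If you want to run the scanner before that part exists, a throwaway placeholder like the one below keeps the scan from failing; its two-argument interface is simply an assumption matching what the subprocess call above passes:

# store_snmp.py -- temporary placeholder until the next post
import sys

def main():
    if len(sys.argv) != 3:
        print("usage: store_snmp.py <ip> <community>", file=sys.stderr)
        sys.exit(1)
    ip, community = sys.argv[1], sys.argv[2]
    # The real version will collect SNMP metrics and write them to InfluxDB
    print(f"Would collect SNMP metrics from {ip} (community '{community}')")

if __name__ == "__main__":
    main()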

What’s Next?

With our responsive servers identified, we’re set to begin collecting valuable metrics in the next post. Upcoming articles will guide you through:

  • Connecting to InfluxDB: Setting up data storage for all SNMP-based metrics.
  • Gathering Performance Metrics: Collecting key performance indicators (KPIs) like CPU usage, memory, and network activity.
  • Visualizing Data: Using tools like Grafana to create visual dashboards and alerts.

By following this series, you’ll build a powerful, efficient monitoring system tailored to your infrastructure.

Stay tuned for the next post, where we’ll dive into data collection and storage in InfluxDB!