It's slightly more complicated than that (MRTG is used to store the data, and the time is logged when the check runs), but that's the gist of it, yes. The performance data of a typical bandwidth check returns something like this:
Where the values of in and out come from the respective OIDs, and the time is recorded when the data is inserted into MRTG's back-end database (much like a SQL NOW()).
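The in/out pair follows the standard Nagios `label=value;warn;crit` perfdata convention, so it can be pulled apart mechanically. A minimal parsing sketch, using a hypothetical sample line in that format:

```python
import re

# Hypothetical sample perfdata line in the Nagios label=value;warn;crit format.
perfdata = "in=18.402285Mb/s;50.00;80.00 out=1.041520Mb/s;50.00;80.00"

# Extract each metric's value and its warning/critical thresholds.
metrics = {}
for label, value, warn, crit in re.findall(
        r"(\w+)=([\d.]+)Mb/s;([\d.]+);([\d.]+)", perfdata):
    metrics[label] = {
        "value": float(value),
        "warn": float(warn),
        "crit": float(crit),
    }

print(metrics["in"]["value"], metrics["out"]["value"])
```

Graphing front-ends do essentially this parse before pairing each sample with its insertion timestamp.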
Are there incremented counters (packet counts or data quantities) that are divided over time to obtain a bit rate, or is the information received via SNMP directly a bit rate?
Using RRD file: /var/lib/mrtg/192.168.5.42_1.rrd
Input warning level (Mb/s): 50.00
Output warning level (Mb/s): 50.00
Input critical level (Mb/s): 80.00
Output critical level (Mb/s): 80.00
Fetching data with command: rrdtool fetch /var/lib/mrtg/192.168.5.42_1.rrd AVERAGE -s-10minutes | grep -vi "nan"
RRD File Data:
ds0 ds1
1490043900: 1.7957144235e+06 1.1482139460e+05
1490044200: 2.3002856646e+06 1.3019000204e+05
Raw Input Traffic Value (b/s): 18402285.316800
Raw Output Traffic Value (b/s): 1041520.016320
Decimal Input Traffic Value (b/s): 18402285
Decimal Output Traffic Value (b/s): 1041520
Traffic IN scalar: 1000000
Traffic OUT scalar: 1000000
OK - Current BW in: 18.40Mbps Out: 1.04Mbps|in=18.402285Mb/s;50.00;80.00 out=1.041520Mb/s;50.00;80.00
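From the fetch output above, the raw b/s figures appear to be the last non-NaN ds0/ds1 samples (octets per second) multiplied by 8, then divided by the 1,000,000 scalar to get Mb/s. A sketch reproducing that arithmetic, assuming this interpretation of the debug output:

```python
# Last non-NaN row from the rrdtool fetch output above (octets/s).
ds0, ds1 = 2.3002856646e+06, 1.3019000204e+05

# Octets/s -> bits/s (matches the "Raw ... Traffic Value" lines).
raw_in_bps = ds0 * 8
raw_out_bps = ds1 * 8

# Truncate to the "Decimal ... Traffic Value" integers, then scale
# by the 1,000,000 "scalar" to get the Mb/s shown in the status line.
in_mbps = int(raw_in_bps) / 1_000_000
out_mbps = int(raw_out_bps) / 1_000_000

print(f"OK - Current BW in: {in_mbps:.2f}Mbps Out: {out_mbps:.2f}Mbps")
```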
So it's basically taking the average of the last 10 minutes' worth of traffic and checking whether that value exceeds the provided thresholds. The information received from SNMP is the bit rate at that moment as returned by the SNMP agent on the device, but for our wizard's checks we average the last 10 minutes. Historically this is a better metric to use, since it more accurately represents sustained, legitimate traffic spikes rather than intermittent blips.
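The threshold comparison itself follows the usual Nagios severity ordering (critical checked before warning). A minimal sketch of that final step, with a hypothetical helper name:

```python
def traffic_status(in_mbps, out_mbps, warn=50.0, crit=80.0):
    """Hypothetical helper: map averaged Mb/s values to a Nagios state."""
    if in_mbps >= crit or out_mbps >= crit:
        return 2, "CRITICAL"   # exit code 2
    if in_mbps >= warn or out_mbps >= warn:
        return 1, "WARNING"    # exit code 1
    return 0, "OK"             # exit code 0

# Using the averaged values from the example output above:
code, label = traffic_status(18.402285, 1.041520)
print(code, label)
```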