Time Series Analysis made Easy with Influx, Chronograf and Python

Shelwyn Corte
7 min read · Aug 28, 2024


Time series data, characterized by its sequential nature and timestamp association, is ubiquitous in various domains, from IoT and finance to healthcare and telecommunications. Effectively managing and analyzing such data is crucial for extracting valuable insights. InfluxDB, a high-performance time series database, and Chronograf, its powerful visualization tool, offer a comprehensive solution for handling time series data with ease. Together, they enable efficient data ingestion, storage, and querying, as well as intuitive visualization and exploration.

In this tutorial, we’ll take a look at how to write and visualize time series data using InfluxDB and Chronograf. We’ll start by setting up a local InfluxDB instance and the Chronograf UI. Next, we’ll create a Python application to simulate real-time data and ingest it into InfluxDB. Finally, we’ll harness the power of Chronograf to visualize the stored data, uncovering valuable insights and trends.
(We will be doing this project on a Linux system.)

Let’s start by checking the system distribution and architecture:

$ cat /etc/os-release
This prints your distribution details (for example, Ubuntu or Debian).
$ dpkg --print-architecture
This prints your architecture (for example, amd64 or arm64).

Now that you have the system architecture and distribution, let’s proceed with the InfluxDB and Chronograf installation.

SECTION 1 (Install InfluxDB):

Follow the appropriate commands below based on your operating system:

# Ubuntu/Debian AMD64
curl -LO https://download.influxdata.com/influxdb/releases/influxdb2_2.7.10-1_amd64.deb
sudo dpkg -i influxdb2_2.7.10-1_amd64.deb
# Ubuntu/Debian ARM64
curl -LO https://download.influxdata.com/influxdb/releases/influxdb2_2.7.10-1_arm64.deb
sudo dpkg -i influxdb2_2.7.10-1_arm64.deb
# Red Hat/CentOS/Fedora x86-64 (x64, AMD64)
curl -LO https://download.influxdata.com/influxdb/releases/influxdb2-2.7.10-1.x86_64.rpm
sudo yum localinstall influxdb2-2.7.10-1.x86_64.rpm
# Red Hat/CentOS/Fedora AArch64 (ARMv8-A)
curl -LO https://download.influxdata.com/influxdb/releases/influxdb2-2.7.10-1.aarch64.rpm
sudo yum localinstall influxdb2-2.7.10-1.aarch64.rpm

Start the InfluxDB service:

$ sudo service influxdb start

The installed package creates a service file at /lib/systemd/system/influxdb.service so that InfluxDB starts as a background service on boot. Let’s reboot the system once and then verify that the service is running correctly. After the reboot, enter the command below:

$ sudo service influxdb status
The service should be reported as active (running).

If you get an error saying the service is masked, run the command below:

$ sudo systemctl unmask influxdb.service

If you didn’t get any errors, we should be good with InfluxDB. Let’s now proceed to create a user, an organization, and a bucket.

Note: Please make sure the service is running.

Now, open up a browser and navigate to http://localhost:8086/

Click “Get Started”

Create a new user, an organization, and a bucket (the bucket is where all your data will be saved).

Hit Continue and you should be ready to go. Make sure you copy the API token and save it somewhere safe; it is shown only once.

You can now close the InfluxDB UI.
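
Optionally, you can confirm from Python that the server is reachable and that the new token can see the bucket. This is a minimal sketch, assuming the GalaxyORG organization and GalaxyBucket bucket used later in this tutorial and that the client library is installed (pip install influxdb-client); substitute your own values.

from influxdb_client import InfluxDBClient

# Connection details created in the UI above; replace with your own values
client = InfluxDBClient(url="http://localhost:8086", token="<your token here>", org="GalaxyORG")

# ping() only checks that the server is reachable
print("Server reachable:", client.ping())

# Looking up the bucket exercises the token; None means a wrong name or insufficient permissions
bucket = client.buckets_api().find_bucket_by_name("GalaxyBucket")
print("Bucket found:", bucket is not None)

client.close()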

SECTION 2 (Install Chronograf):

Follow the appropriate commands below based on your operating system:

# Ubuntu/Debian AMD64
# SHA256: 2831d4afb0585a04c17388bf0c9da9001241f6bb3770a1f3cc76c674b74eb59c
wget https://dl.influxdata.com/chronograf/releases/chronograf_1.10.5_amd64.deb
sudo dpkg -i chronograf_1.10.5_amd64.deb
# Red Hat/CentOS/Fedora x86-64 (x64, AMD64)
# SHA256: 89b294fca62bc878d0c2e86030faaa3f3d615c90b7d9b9081810064dbb3431b7
wget https://dl.influxdata.com/chronograf/releases/chronograf-1.10.5.x86_64.rpm
sudo yum localinstall chronograf-1.10.5.x86_64.rpm

Start Chronograf

$ chronograf
Chronograf starts and logs that it is serving its UI on port 8888.

Now open up a browser and navigate to http://localhost:8888. You will get a screen where we need to configure the connection details and credentials.

Click “Get Started”

Toggle the InfluxDB v2 Auth to fill in the details.

Give the connection a name and fill in the details we created earlier while configuring InfluxDB. To manage data storage and prevent excessive accumulation, we’ll set a retention policy that limits data retention to a maximum of 7 days. Click Add Connection.
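
If you’d rather manage retention in code than through the UI, the bucket’s retention rules can also be updated with the Python client. This is only a minimal sketch, assuming the GalaxyBucket/GalaxyORG names used later in this tutorial and a recent influxdb-client release that exposes buckets_api().update_bucket(); adjust it to your own setup.

from influxdb_client import InfluxDBClient, BucketRetentionRules

# Connection details from the InfluxDB setup; replace with your own values
client = InfluxDBClient(url="http://localhost:8086", token="<your token here>", org="GalaxyORG")
buckets_api = client.buckets_api()

# Look up the bucket created in the InfluxDB UI
bucket = buckets_api.find_bucket_by_name("GalaxyBucket")

# Expire data older than 7 days (7 * 24 * 3600 seconds)
bucket.retention_rules = [BucketRetentionRules(type="expire", every_seconds=7 * 24 * 3600)]
buckets_api.update_bucket(bucket=bucket)

client.close()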

From the list of Dashboards, select InfluxDB, Click Next

Since we have not installed Kapacitor, let’s skip the Kapacitor connection configuration.

You should be all good at this point. Click on View All Connections and you should see your new connection listed.

Let’s pause here for now; we will get back to creating dashboards once we have some data in InfluxDB.

SECTION 3 (Python Simulator):

Let us now create a Python script to simulate real-time data.

Create a new file called main.py, install the client library if needed (pip install influxdb-client), and add the code below:

# Import necessary libraries
import random
import time

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Specify the InfluxDB connection parameters
token = "<your token here>"
org = "GalaxyORG"
url = "http://localhost:8086"

# Create an InfluxDB client
client = InfluxDBClient(url=url, token=token, org=org)

# Specify the bucket to write data to
bucket = "GalaxyBucket"

# Get the write API object
write_api = client.write_api(write_options=SYNCHRONOUS)

# Infinite loop to generate and write random data points
while True:
    # Generate a random brightness value between 1.00 and 10.00
    random_int = random.randint(1 * 10**2, 10 * 10**2)
    random_int = random_int / 10**2

    # Create a data point with the specified measurement, tag, and field
    point = (
        Point("galaxy_brightness")
        .tag("galaxy_name", "Andromeda")
        .field("brightness_magnitude", random_int)
    )

    # Write the data point to InfluxDB via the write API
    write_api.write(bucket=bucket, org=org, record=point)

    # Print the generated brightness value
    print(random_int)

    # Sleep for 3 seconds before generating the next data point
    time.sleep(3)

Save and run the code. This code will add data points to InfluxDB every 3 seconds.

Here is what the example simulates:

  • point: Each point is written to the galaxy_brightness measurement, indicating that it’s a reading of galaxy brightness.
  • tag: The galaxy_name tag identifies the galaxy being observed, in this case Andromeda.
  • field: The field is named brightness_magnitude, representing the brightness of the galaxy as measured in magnitudes. This value is likely to fluctuate over time due to various factors, such as changes in the galaxy's intrinsic brightness or variations in interstellar dust.
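
Before heading back to Chronograf, you can sanity-check the writes with a quick Flux query from Python. This is a minimal sketch using the same connection details as the simulator; the 30-minute window is just an example range.

from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="<your token here>", org="GalaxyORG")

# Flux query: pull the last 30 minutes of brightness readings
query = '''
from(bucket: "GalaxyBucket")
  |> range(start: -30m)
  |> filter(fn: (r) => r._measurement == "galaxy_brightness")
  |> filter(fn: (r) => r._field == "brightness_magnitude")
'''

tables = client.query_api().query(query)
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())

client.close()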

Let’s get back to SECTION 2 and create a visualization dashboard.

Launch your browser and navigate to localhost:8888

Click on Dashboards

Click Create Dashboard

Name your dashboard and click on Add Data

Give the graph a name and select the database

Select the Measurements & Tags and the Fields, and you should get a quick preview of the graph.

Save the graph

And you will see it appear on the dashboard.

You can also change the graph type in the Configure Graph section

Let’s change it to a Bar Graph.

Explore the versatility of Chronograf’s dashboard customization options. Add as many widgets as needed to visualize your data effectively. Tailor the dashboard to your preferences by adjusting the timezone to local time and configuring the data display range to focus on specific timeframes, such as the last 30 minutes or 5 minutes.

All your carefully crafted dashboards will be accessible under the Dashboards tab. Share the dashboard link with colleagues or stakeholders to enable real-time collaboration and data visibility.

Written by Shelwyn Corte
Automation Engineer | https://shelwyn.in/ | Currently in Bengaluru