Category Archives: General

Capturing WinCC Unified Traces to Elasticsearch

In industrial automation, logging and monitoring are crucial for maintaining system health and troubleshooting issues. Siemens WinCC Unified provides built-in tracing capabilities. In this post I will show how to capture these traces to Elasticsearch for seamless log collection, storage, and visualization.

Step 1: Capturing WinCC Unified Traces

WinCC Unified provides a trace tool that simplifies the process of collecting traces. The tool allows logs to be written to files, which can then be read by Logstash (a tool to process log files).

In this example we will write the log files to the C:\Tools\logstash-siemens\logs directory.

"C:\Program Files\Siemens\Automation\WinCCUnified\bin\RTILtraceTool.exe" -mode logger -path C:\Tools\logstash-siemens\logs

Step 2: Collecting Logs with Logstash

Create a Logstash configuration file (e.g., C:\Tools\logstash-siemens\logstash.conf) with the following setup:

input {
  file {
    path => "C:/Tools/logstash-siemens/logs/*.log"  # Use forward slashes for Windows paths
    start_position => "beginning"
    sincedb_path => "C:/Tools/logstash-siemens/sincedb"  # Save the reading state
    codec => plain {
      charset => "UTF-8"
    }
  }
}

filter {
  # Drop empty lines
  if [message] =~ /^\s*$/ {
    drop { }
  }

  # Add a custom field to identify the log source
  mutate {
    add_field => { "Source" => "WinCC Unified" }
  }

  # Use dissect to parse the pipe-separated trace format (see the sample line after this config)
  dissect {
    mapping => {
      "message" => "%{#}|%{Host}|%{System}|%{Application}|%{Subsystem}|%{Module}|%{Severity}|%{Flags}|%{Timestamp}|%{Process/Thread}|%{Message}"
    }
    remove_field => ["message"]
  }

  # Remove leading and trailing spaces
  mutate {
    strip => ["#", "Host", "System", "Application", "Subsystem", "Module", "Severity", "Flags", "Timestamp", "Process/Thread"]
  }

  # Convert the Timestamp field to @timestamp (ensure the pattern matches your log format)
  date {
    match => ["Timestamp", "yyyy.MM.dd HH:mm:ss.SSS"]
    target => "@timestamp"
    timezone => "UTC"
    locale => "en"  # Add locale to avoid parsing issues due to different formats or locales
  }
}

output {
  # stdout {
  #   codec => json_lines
  # }

  # Elasticsearch output
  elasticsearch {
     hosts => ["http://linux0:9200"] # Change it to your Elasticsearch host
     index => "wincc-traces-%{+YYYY.MM}"
     # user => "elastic"
     # password => "elastic"
  }
}
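
For reference, the dissect pattern above expects pipe-separated trace lines. Here is a hypothetical example line (all field values invented for illustration; the strip filter removes any padding around the separators):

1|DESKTOP-1|System1|HmiRuntime|Trace|Logger|Info|0|2025.03.01 10:15:30.123|4711/4712|Runtime started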

Now start Logstash to collect the log files. First, download Logstash (https://www.elastic.co/downloads/logstash) and extract it to C:\Tools.

Then, run the following command to start Logstash using the specified configuration file:

C:\Tools\logstash-8.17.3\bin\logstash.bat -f C:\Tools\logstash-siemens\logstash.conf

Step 3: Forwarding Traces from WinCC Unified Panels


For WinCC Unified Panels, trace forwarding can be enabled, allowing traces to be captured with the WinCC Unified trace tool on a PC. The traces are then also written to files on that PC (by the tool you started in Step 1).

"C:\Program Files\Siemens\Automation\WinCCUnified\bin\RTILtraceTool.exe" -mode receiver -host -tcp

Step 4: Visualizing Logs in Kibana

Once logs are stored in Elasticsearch, Kibana provides a powerful interface to explore and analyze them.

  1. Open Kibana and navigate to Stack Management > Data Views (called Index Patterns in older versions).
  2. Create a new data view matching wincc-traces-*.
  3. Use Discover to explore logs and apply filters (see the example below).
  4. Create dashboards and visualizations to monitor system health and performance.
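
For example, a hypothetical Discover filter in KQL that narrows the view to error traces from this source (the actual Severity values depend on what your traces contain):

Source : "WinCC Unified" and Severity : "ERROR"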

MQTT Bandwidth Efficiency: The Role of Topic Alias in MQTT 5, and Why I Only Got It Working with EMQX

Recently, I conducted a test to analyze the bandwidth usage of MQTT, and one feature stood out as particularly impactful: the Topic Alias feature in MQTT 5. This feature can significantly reduce the overhead associated with long topic names, which is especially relevant in UNS (Unified Namespace) implementations using ISA-95-style topics, where topic names tend to be lengthy and sometimes consume more bytes than the payload itself.

💵 It can make an impact if you are being charged based on the amount of data transferred (cloud).

🚨 The Importance of Topic Alias

The Topic Alias feature allows a client to map a topic name to a short integer alias, reducing the amount of data transmitted. For example, a 60-byte ISA-95 topic name is replaced by a two-byte alias on every publish after the first, which can drastically lower bandwidth usage for messages with long topic names.

Using Topic Alias during publishing is straightforward but requires the topic-to-alias mapping logic to be implemented in the client program (which is not difficult to do).
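
As an illustration, here is a minimal publisher sketch using the Python paho-mqtt client (v1.6-style constructor; broker host, topic, and alias number are placeholders, and the broker must grant at least one alias in its CONNACK):

import time
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="alias-pub", protocol=mqtt.MQTTv5)
client.connect("broker.example.com", 1883)
client.loop_start()
time.sleep(1)  # crude wait for the CONNACK in this sketch

props = Properties(PacketTypes.PUBLISH)
props.TopicAlias = 1  # assumes the broker allows at least one alias

# The first publish carries the full topic name and registers the alias
client.publish("enterprise/site1/area2/line3/cell4/temperature",
               b'{"value": 21.5}', qos=1, properties=props)

# Subsequent publishes send a zero-length topic plus the two-byte alias
client.publish("", b'{"value": 21.6}', qos=1, properties=props)

client.loop_stop()
client.disconnect()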

On the subscriber side, the implementation should ideally be seamless. In theory, a subscriber needs only to set the maximum allowed alias number during the connection phase. This should make the feature easy to adopt for receiving applications.
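
The corresponding subscriber sketch only needs the TopicAliasMaximum property in the CONNECT (again paho-mqtt; host and topic filter are placeholders), since the client library resolves aliases back to topic names transparently:

import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)  # the alias is already resolved here

client = mqtt.Client(client_id="alias-sub", protocol=mqtt.MQTTv5)
client.on_message = on_message

# Announce in CONNECT that we accept up to 10 topic aliases from the broker
props = Properties(PacketTypes.CONNECT)
props.TopicAliasMaximum = 10
client.connect("broker.example.com", 1883, properties=props)

client.subscribe("enterprise/#", qos=1)
client.loop_forever()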

👉 During my tests, I discovered something surprising: EMQX was the only broker (of the ones I tested) that supported Topic Alias for subscriptions(!) out of the box. With the others I was unable to enable this functionality.

👉 Note: most articles about Topic Alias focus primarily on its use during publishing, not on subscriptions. My focus was on subscriptions.

Bachelor's Thesis from 2014: Optimizing the use of photovoltaic systems in single-family homes

Abstract

Within this thesis we implement a prototype of a smart home process control system for increasing the consumption of self-produced photovoltaic energy. To increase this self-consumption, excess energy is used to automatically charge a thermal storage.

Due to the increasing popularity of small energy-producing plants (mainly photovoltaic systems), the allowances for energy fed into the public electricity grid decrease. The income from this kind of electricity is lower than the price of purchased electricity, so by increasing the self-consumption of energy, the profitability of a photovoltaic system can be significantly improved.

Based on the findings of Thesis 1, „Eigenverbrauchsoptimierung von Photovoltaikstrom in Einfamilienhäusern“ (Vogler, 2014), the components to build such an integrated control system are implemented. The goal is to build an automated control system that enforces power consumption when enough self-produced electricity is available.

Due to long product cycles, companies in the process-control business are comparatively innovation-averse and expensive. But with systems such as Arduino or Raspberry Pi, which allow an easy entry into embedded programming and control technology, an intelligent control system for self-consumption of photovoltaic electricity via thermal storage can be implemented at low cost.

Our system consists of loosely coupled, separate components. Through the use of service interfaces, the components are highly distributed, which follows the current IT trend of "IoT – Internet of Things".

Thesis 1: Optimizing self-consumption of photovoltaic electricity in single-family homes

Thesis 2: Optimizing the use of photovoltaic systems in single-family homes

From OPC UA & MQTT/SparkplugB to Snowflake with Frankenstein Automation-Gateway

In this example we take SparkplugB messages from an MQTT broker, decode them, and write them to Snowflake. We also take some OPC UA nodes and write them to the same table.

Create a database and a schema for your destination table.

CREATE DATABASE IF NOT EXISTS scada; -- database name assumed to match the gateway config below
USE DATABASE scada;
CREATE OR REPLACE SCHEMA scada;

Create the table for the incoming data:

CREATE TABLE IF NOT EXISTS scada.gateway (
  system character varying(1000) NOT NULL,
  address character varying(1000) NOT NULL,
  sourcetime timestamp with time zone NOT NULL,
  servertime timestamp with time zone NOT NULL,
  numericvalue float,
  stringvalue text,
  status character varying(30),
  CONSTRAINT gateway_pk PRIMARY KEY (system, address, sourcetime)
);

Generate a private key:

> openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out snowflake.p8 -nocrypt

Generate a public key:

> openssl rsa -in snowflake.p8 -pubout -out snowflake.pub

Set the public key to your user:

> ALTER USER xxxxxx SET RSA_PUBLIC_KEY='MIIBIjANBgkqh…';

Replace MIIBIjANBgkqh… with your public key from the snowflake.pub file (without the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines).
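
To avoid copying the key by hand, here is a small Python sketch that builds the statement from snowflake.pub (the user name is a placeholder, as above):

# Strip the -----BEGIN/END PUBLIC KEY----- lines and join the base64 body
with open("snowflake.pub") as f:
    key = "".join(line.strip() for line in f if "PUBLIC KEY" not in line)
print(f"ALTER USER xxxxxx SET RSA_PUBLIC_KEY='{key}';")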

Details about creating keys can be found in the Snowflake documentation.

Prepare the Gateway

Add a Snowflake logger section to the gateway's config.yml:

Drivers:
  OpcUa:
    - Id: "test1"
      Enabled: true
      LogLevel: INFO
      EndpointUrl: "opc.tcp://test.monstermq.com:4840/server"
      UpdateEndpointUrl: true
      SecurityPolicy: None

  Mqtt:
    - Id: "test2"
      Enabled: true
      LogLevel: INFO
      Host: test.monstermq.com
      Port: 1883
      Format: SparkplugB

Loggers:
  Snowflake:
    - Id: "snowflake"
      Enabled: true
      LogLevel: INFO
      PrivateKeyFile: "snowflake.p8"
      Account: xx00000
      Url: https://xx00000.eu-central-1.snowflakecomputing.com:443
      User: xxxxxx
      Role: accountadmin
      Scheme: https
      Port: 443
      Database: SCADA
      Schema: SCADA
      Table: GATEWAY
      Logging:
        - Topic: opc/test1/path/Objects/Mqtt/#
        - Topic: mqtt/test2/path/spBv1.0/vogler/DDATA/+/#

You can find the Url in the Snowflake web console by going to Admin/Accounts and hovering over the “Locator” column.

Start the Gateway

> git checkout snowflake
> cd automation-gateway/source/app  
> ../gradlew run


Note: Using gradlew to start the gateway is not recommended for production. Instead, consider using a Docker image or the files from build/distribution for a more robust setup.

MonsterMQ with Grafana

Because MonsterMQ can store topic values directly in PostgreSQL/Timescale, you can instantly create dashboards with Grafana! 📊

Here’s a simple example:

👉 Check out the live dashboard

It’s super easy to get started. Use the publicly available MonsterMQ broker at test.monstermq.com on port 1883. Just publish a JSON string to any topic under "Test", like "Test/Sensor1", with a payload like {"value": 1}, and you’ll see the value reflected in the Grafana dashboard in real time.
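
For example, with the Python paho-mqtt helper:

import paho.mqtt.publish as publish

# Publish a test value to the public MonsterMQ broker described above
publish.single("Test/Sensor1", '{"value": 1}',
               hostname="test.monstermq.com", port=1883)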

I’m currently publishing temperature sensor values from my home automation to the public broker using automation-gateway.com, just so you can see some values in the dashboard.

So, if you need a broker to store your IoT data directly into TimescaleDB without the need for any additional components, consider using MonsterMQ. It’s free and available at MonsterMQ.com.

Here is the example docker-compose.yml file of the publicly available test.monstermq.com broker:

services:
  timescale:
    image: timescale/timescaledb:latest-pg16
    restart: unless-stopped
    ports:
      - "5432:5432"
    volumes:
      - /data/timescale:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: system
      POSTGRES_PASSWORD: xxx
  monstermq:
    image: rocworks/monstermq:latest
    restart: unless-stopped
    ports:
      - 1883:1883
    volumes:
      - ./config.yaml:/app/config.yaml
    command: ["+cluster", "-log INFO"]
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    environment:
      PGADMIN_DEFAULT_EMAIL: andreas.vogler@rocworks.at
      PGADMIN_DEFAULT_PASSWORD: xxx
    volumes:
      - /data/pgadmin:/var/lib/pgadmin/storage
    ports:
      - "8080:80"
  grafana:
    image: grafana/grafana
    restart: unless-stopped
    ports:
      - 80:3000

Here is the MonsterMQ config.yaml file:

Port: 1883
SSL: false
WS: true
TCP: true

SessionStoreType: POSTGRES
RetainedStoreType: POSTGRES

SparkplugMetricExpansion:
  Enabled: true

ArchiveGroups:
  - Name: "All"
    Enabled: true
    TopicFilter: [ "#" ]
    RetainedOnly: false
    LastValType: POSTGRES
    ArchiveType: NONE
  - Name: "Test"
    Enabled: true
    TopicFilter: [ "Test/#" ]
    RetainedOnly: false
    LastValType: NONE
    ArchiveType: POSTGRES

Postgres:
  Url: jdbc:postgresql://timescale:5432/monster
  User: system
  Pass: xxx

Give it a try and let me know what you think!

#iot #mqtt #monstermq #timescale #postgresql #grafana

QuestDB: My time series data’s new best friend? 📈

My first tests with QuestDB on 10 years of home automation data (1.4 billion rows) are promising.

👉 Fast ingestion of parquet files (~1 hour on an old Intel NUC i5)

I have stored my data in parquet files, one per month, and imported them with a simple Python script. The import on my really old Intel NUC i5 took only about one hour. I’ve never been able to do this so quickly with any other database.
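
The import script itself is not part of this post, but a minimal sketch of the idea with the QuestDB Python client looks roughly like this (the connection string, table name, and timestamp column are assumptions):

import glob
import pandas as pd
from questdb.ingress import Sender

# Stream one parquet file per month into QuestDB via the ILP/HTTP endpoint
with Sender.from_conf("http::addr=localhost:9000;") as sender:
    for path in sorted(glob.glob("data/*.parquet")):
        df = pd.read_parquet(path)
        # 'ts' is assumed to be the designated timestamp column
        sender.dataframe(df, table_name="home_automation", at="ts")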

🤔 Btw.: I think storing data in #parquet files, or another open table format like Apache Iceberg, is one of the best choices for keeping data, because it’s independent of a database engine.

👉 Familiar SQL syntax. QuestDB has a powerful SQL engine: I converted some Postgres SQL statements to QuestDB without big issues or changes. And I love SQL 💚

👉 Great query response times. The query I tried was not representative, but the speed was still impressive.

👉 By using ZFS with compression, the disk space used can be reduced considerably.

Do you want to log your OPC UA data to QuestDB? I added this option to automation-gateway.com seven days ago.

Node.JS for WinCC OA? And what about Java? GraalVM? Polyglot?

🥳 Last weekend I found some time to try out an upcoming feature in WinCC Open Architecture 3.20. With the Node.js integration you can write your business logic in JavaScript with native connectivity to WinCC OA. You can take full advantage of the Node.js ecosystem.

🧐 But I am a Java enthusiast and I love the JVM ecosystem. Have you ever heard about GraalVM? It is an advanced JDK written in Java, and it has a Node.js runtime, which gives you the power of Node.js plus the power of polyglot programming: you can mix JavaScript with Java.

👍 And it turned out that the GraalVM Node.js runtime also works with WinCC OA! It took me some time to figure out how the polyglot interoperability works, but now I have a first draft of a Java library which makes it easy to use Java and OA in the Node.js environment.

🤩 I can now use Java to develop great solutions with WinCC OA.

WinCC OA & Node-Red Integration

It is very easy to get data from WinCC Open Architecture to Node-RED.

Add a new user to WinCC OA under System Management / Permission / User Administration.

We will use “node” as username.

Add Config Entry

[wssServer]
httpsPort = 8449
resourceName = "/websocket"

Start the Control Manager with "wss.ctl -user <username>:". Note the trailing ":"!

wss.ctl -user node:

Node-RED: Install the palette "node-red-contrib-winccoa"

You can now add a node. In this example we will use the dpQuery node with "SELECT '_online.._value' FROM 'Meter_Input_WattAct.'" as the query, so we just query the online value of one tag.

You have to configure the server by clicking on the pencil button. It points to the WebSocket Control Manager started earlier; set the username and password we added in the previous steps.

Embed Grafana in WinCC Unified

In this scenario we will host Grafana behind the IIS of WinCC Unified, so that it is served from the same origin and we do not run into a CORS (Cross-Origin Request Blocked) problem.

To allow Grafana to be embedded in another application, set allow_embedding = true in the Grafana configuration file.

To host Grafana over the IIS the following settings must be made:

Add a URL Rewrite rule (inside the <rewrite>/<rules> section of <system.webServer>) to your IIS configuration file. Change "desktop-khlb071" to the computer Grafana is running on, then restart the website with the IIS Manager.

The IIS configuration file can be found at C:\Program Files\Siemens\Automation\WinCCUnified\SimaticUA\web.config.

                <rule name="grafana" enabled="true" stopProcessing="false">
                    <match url="grafana(/)?(.*)" ignoreCase="true" />
                    <action type="Rewrite" url="http://desktop-khlb071:3000/{R:0}" appendQueryString="true" logRewrittenUrl="false" />
                </rule>      

Change the following configuration of Grafana (defaults.ini). Set the domain to the computer name where Grafana is running; it must be the same name that you use in the IIS configuration file!

# The public facing domain name used to access grafana from a browser
domain = desktop-khlb071

# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
enforce_domain = false

# The full public facing url
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana

# Serve Grafana from subpath specified in `root_url` setting. By default it is set to `false` for compatibility reasons.
serve_from_sub_path = true

# set to true if you want to allow browsers to render Grafana in a <frame>, <iframe>, <embed> or <object>. default is false.
allow_embedding = true