With GraphQL for Unity you can execute GraphQL queries in a Unity way with GameObjects. But the asset also lets you execute queries in Unity from C# code.
Here is a simple example:
public GraphQL Connection; // link this in the Inspector to your GraphQL GameObject

public void ScriptQuery()
{
    // GraphQL field arguments must be named; "token" here is an example argument name
    var query = "query($Token: String!) { doit(token: $Token) { result } }";
    var args = new JObject // JObject comes from Newtonsoft.Json.Linq
    {
        { "Token", "123" }
    };
    Connection.ExecuteQuery(query, args, (result) =>
    {
        Debug.Log(result.Result.ToString());
    });
}
Link the Connection variable to your GraphQL GameObject where the connection is set up.
Note: the result callback is called asynchronously and is not executed in the game loop, so do not access scene objects directly from it.
Here is a bit more complex example of how the args variable could look. It just shows how you can create a JSON object in C# in a convenient way.
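A sketch (the field names below are made up for illustration, not part of the asset's API) of a nested args object built with JObject and JArray from Newtonsoft.Json.Linq:

```csharp
// Hypothetical example: nested objects and arrays as query variables
var args = new JObject
{
    { "Token", "123" },
    { "Filter", new JObject
        {
            { "Names", new JArray { "Pump1", "Pump2" } }, // array of strings
            { "MaxResults", 10 }                          // numeric value
        }
    }
};
```

Serialized, this yields the JSON `{"Token":"123","Filter":{"Names":["Pump1","Pump2"],"MaxResults":10}}`.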
At startup it can also write the OPC UA node structure into the graph database, so that the basic model of the OPC UA server is mirrored to the graph database. For that you have to add the “Schemas” section in the config file (see an example configuration file below). There you can choose which RootNodes (and all sub nodes) of your OPC UA systems should be mirrored to the graph database.
Once you have the (simplified) OPC UA information model in the graph database, you can add on top of that your own knowledge graph data and create relations to OPC UA nodes of your machines to enrich the semantic data of the OPC UA model.
With that model you can leverage the power of your Knowledge Graphs in combination with live data from your machines and use Cypher queries to get the knowledge out of the graph.
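As a sketch, a Cypher query combining your own knowledge-graph data with the mirrored OPC UA nodes could look like the following (the node labels, relationship types and properties are invented for illustration; your actual schema will differ):

```
// Hypothetical: list the OPC UA variables of all machines located in Hall 1
MATCH (h:Hall {name: 'Hall 1'})<-[:LOCATED_IN]-(m:Machine)-[:HAS_NODE]->(n:OpcUaNode)
RETURN m.name, n.nodeId
```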
Here we see an example of the OPC UA server of the SCADA system WinCC Open Architecture. The first level of nodes below the “Objects” node represents the Datapoint-Types (e.g. PUMP1), followed by the Datapoint-Instances (e.g. PumpNr), and below that we see the datapoint elements (e.g. value => speed). A datapoint element is an OPC UA variable where we also see the current value from the SCADA system.
Example Gateway configuration file:
Database:
  Logger:
    - Id: neo4j
      Enabled: true
      Type: Neo4j
      Url: bolt://nuc1.rocworks.local:7687
      Username: "neo4j"
      Password: "manager"
      Schemas:
        - System: opc1 # Replicate node structure to the graph database
          RootNodes:
            - "ns=2;s=Demo" # This node and everything below this node
        - System: winccoa1 # Replicate the nodes starting from "i=85" (Objects) node
      WriteParameters:
        BlockSize: 1000
      Logging:
        - Topic: opc/opc1/path/Objects/Demo/SimulationMass/SimulationMass_Float/+
        - Topic: opc/opc1/path/Objects/Demo/SimulationMass/SimulationMass_Double/+
        - Topic: opc/opc1/path/Objects/Demo/SimulationMass/SimulationMass_Int16/+
        - Topic: opc/winccoa1/path/Objects/PUMP1/#
        - Topic: opc/winccoa1/path/Objects/ExampleDP_Int/#
If your Docker containers do not use all your CPUs, it may be that limits are set in /etc/systemd/system/docker.slice. To apply changed settings I had to reboot my machine (just restarting Docker did not change the behaviour).
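For reference, such a limit in /etc/systemd/system/docker.slice could look like the snippet below (the value is just an example, meaning two full CPUs worth of time); raising or removing CPUQuota lifts the cap:

```ini
[Slice]
# Example: limit everything in the docker slice to 2 CPUs worth of time
CPUQuota=200%
```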
#ModBus data from the Robot can now be used in #Unity for visualisation and also to control the Robot from Unity …
The Unity package GraphQL for OPC UA is now not only for OPC UA anymore; it can also handle the other connection types supported by the Automation Gateway – like the Plc option, which is based on Apache PLC4X.
Here is a simple HTML page which fetches data from the OPC UA Automation Gateway “Frankenstein”. It uses HTTP and simple GraphQL queries to fetch the data from the Automation Gateway and displays it with Google Gauges. It is very simple and periodically polls the data. GraphQL can also handle subscriptions, but then you need to set up a WebSocket connection.
<html>
  <head>
    <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
    <script type="text/javascript">
      google.charts.load('current', {'packages':['gauge']});
      google.charts.setOnLoadCallback(drawChart);

      var data = null;
      var options = null;
      var chart = null;

      function drawChart() {
        data = google.visualization.arrayToDataTable([
          ['Label', 'Value'],
          ['Tank 1', 0],
          ['Tank 2', 0],
          ['Tank 3', 0],
        ]);
        options = {
          width: 1000, height: 400,
          redFrom: 90, redTo: 100,
          yellowFrom: 75, yellowTo: 90,
          minorTicks: 5
        };
        chart = new google.visualization.Gauge(document.getElementById('chart_div'));
        chart.draw(data, options);
      }

      function refresh() {
        const request = new XMLHttpRequest();
        const url = 'http://localhost:4000/graphql';
        request.open("POST", url, true);
        request.setRequestHeader("Content-Type", "application/json");
        // Register the handler before sending the request
        request.onreadystatechange = function() {
          if (this.readyState == 4 /* DONE */ && this.status == 200) {
            const result = JSON.parse(request.responseText).data;
            const x = result.Systems;
            data.setValue(0, 1, x.unified1.HmiRuntime.HMI_RT_5.Tags.Tank1_Level.Value.Value);
            data.setValue(1, 1, x.unified1.HmiRuntime.HMI_RT_5.Tags.Tank2_Level.Value.Value);
            data.setValue(2, 1, x.unified1.HmiRuntime.HMI_RT_5.Tags.Tank3_Level.Value.Value);
            chart.draw(data, options);
          }
        };
        const request_data = {
          "query": `{
            Systems {
              unified1 {
                HmiRuntime {
                  HMI_RT_5 {
                    Tags {
                      Tank1_Level { Value { Value } }
                      Tank2_Level { Value { Value } }
                      Tank3_Level { Value { Value } }
                    }
                  }
                }
              }
            }
          }`
        };
        request.send(JSON.stringify(request_data));
      }
      setInterval(refresh, 250);
    </script>
  </head>
  <body>
    <div id="chart_div" style="width: 400px; height: 120px;"></div>
    <!--<button name="refresh" onclick="refresh()">Refresh</button>-->
  </body>
</html>
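The nested response shape the page unpacks can also be seen in isolation. A minimal sketch with made-up sample values of what the gateway returns for the query above, plus the extraction logic:

```javascript
// Sample of the JSON shape returned by the gateway for the query above
// (the numeric values here are invented for illustration)
const response = {
  data: {
    Systems: {
      unified1: {
        HmiRuntime: {
          HMI_RT_5: {
            Tags: {
              Tank1_Level: { Value: { Value: 42.5 } },
              Tank2_Level: { Value: { Value: 17.0 } },
              Tank3_Level: { Value: { Value: 88.1 } }
            }
          }
        }
      }
    }
  }
};

// Pull the three tank levels out of the nested structure
function tankLevels(resp) {
  const tags = resp.data.Systems.unified1.HmiRuntime.HMI_RT_5.Tags;
  return [1, 2, 3].map(n => tags[`Tank${n}_Level`].Value.Value);
}

console.log(tankLevels(response)); // [ 42.5, 17, 88.1 ]
```

Each level of the GraphQL query reappears as one level of nesting in the JSON, which is why the page drills down through Systems → unified1 → HmiRuntime → HMI_RT_5 → Tags.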
Really like Crate.io … based on Elasticsearch, but with a #SQL interface and optimised for time series. Now also added to Frankenstein for #opcua tag logging…
Added #JDBC as logging option to the Open-Source Automation-Gateway Frankenstein. Values from #OPCUA servers can now also be logged to relational databases – #sql is still so great and powerful! Tested with #postgresql, #mysql and #mssqlserver … fetching history values via the integrated #graphql server is also included…
You have to add the JDBC driver to your classpath and set the appropriate JDBC URL in the Frankenstein configuration file – see an example below. PostgreSQL, MySQL and Microsoft SQL Server JDBC drivers are already included in the build.gradle file (see lib-jdbc/build.gradle), and appropriate SQL statements are implemented for those relational databases. If you use another JDBC driver, you can add it to the lib-jdbc/build.gradle file as a runtime-only dependency, and you may have to specify SQL statements for insert and select in the configuration file.
You can specify the table name in the config file with the option “SqlTableName”; if you do not specify the table name, “events” will be used as the default name.
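A sketch of a JDBC logger section using this option, following the logger format shown earlier (the Id, URL and credentials are example values, not from an actual configuration):

```yaml
Database:
  Logger:
    - Id: postgres
      Enabled: true
      Type: JDBC
      Url: jdbc:postgresql://localhost:5432/scada   # example JDBC URL
      Username: "postgres"
      Password: "secret"
      SqlTableName: scada_events   # optional; defaults to "events"
```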
Create a table with this structure. For PostgreSQL, MySQL and Microsoft SQL Server the table will be created on startup automatically.
CREATE TABLE IF NOT EXISTS public.events
(
    sys character varying(30) NOT NULL,
    nodeid character varying(30) NOT NULL,
    sourcetime timestamp without time zone NOT NULL,
    servertime timestamp without time zone NOT NULL,
    numericvalue numeric,
    stringvalue text,
    status character varying(30),
    CONSTRAINT pk_events PRIMARY KEY (sys, nodeid, sourcetime)
)
TABLESPACE ts_scada;
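Once values are being logged, history can of course also be fetched with plain SQL. For example (system id and node id are placeholders; the LIMIT syntax shown is PostgreSQL/MySQL):

```sql
-- Last 10 logged values of one OPC UA node (identifiers are examples)
SELECT sourcetime, numericvalue, status
  FROM public.events
 WHERE sys = 'opc1'
   AND nodeid = 'ns=2;s=Demo'
 ORDER BY sourcetime DESC
 LIMIT 10;
```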
In this article we use the Frankenstein Automation Gateway to subscribe to a publicly available OPC UA server (milo.digitalpetri.com) and log tag values to Apache Kafka. Additionally we show how you can create a stream in Apache Kafka based on the OPC UA values coming from the Milo OPC UA server and query that stream with KSQL.
Setup Apache Kafka
We have used the all-in-one Docker Compose file from Confluent to quickly set up Apache Kafka and KSQL. Make sure that you set the resolvable hostname or IP address of your server in the docker-compose.yml file; otherwise Kafka clients cannot connect to the broker.
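In the Confluent Compose file this is done via the advertised listeners of the broker. A sketch of the relevant excerpt (the hostname is an example; the real file contains more listeners and settings):

```yaml
broker:
  environment:
    # Clients connect to this address, so it must be resolvable from outside the container
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://your-hostname.example.com:9092
```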
Install Java 11 (for example Amazon Corretto) and Gradle for Frankenstein. Unzip Gradle to a folder and set your PATH variable to point to the bin directory of Gradle.
Then clone the source of Frankenstein and compile it with Gradle:
git clone https://github.com/vogler75/automation-gateway.git
cd automation-gateway/source/app
gradle build
There is an example config-milo-kafka.yaml file in the automation-gateway/source/app directory, which you can use by setting the environment variable GATEWAY_CONFIG:
export GATEWAY_CONFIG=config-milo-kafka.yaml
In this config file we use a public Eclipse Milo OPC UA server. The Id of this connection is “milo“.
Here is the configuration of the Kafka logger, where you can configure which OPC UA tags should be published to Kafka. In this case we use an OPC UA browse path with a wildcard to include all variables below one node.
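As a sketch, modeled on the logger format shown earlier (the broker address and topic path are example values; the config-milo-kafka.yaml file in the repository is authoritative):

```yaml
Database:
  Logger:
    - Id: kafka
      Enabled: true
      Type: Kafka
      Servers: your-kafka-host:9092   # example broker address
      Logging:
        - Topic: opc/milo/path/Objects/Dynamic/+   # all variables below this node (example path)
```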