Location & Sensor Data Integration Guide

Integrating with the Quuppa system allows for the transfer of real-time location and sensor data to your application. This guide describes the standard methods for establishing data pipelines from the Quuppa Positioning Engine (QPE).

1. Start by simulating

When beginning your integration, use the Quuppa Software Suite to model your environment and data flow.

Important:
Always simulate your expected use cases before starting the technical implementation to ensure your data pipeline meets your operational requirements.

The Quuppa Site Planner (QSP) tool allows you to create simulation paths that mimic the expected behaviour of tags on-site. You can run these paths on the Quuppa Positioning Engine to:

  • Validate your application logic by testing your software with deterministic data.

  • Optimise your application logic to refine how your software processes location data.

  • Optimise your data traffic and delivery pipeline performance.

  • Perform load testing on your Positioning Engine server, data pipeline and other infrastructure related to data delivery.

You can run the simulation on top of an existing site plan to get a realistic experience, or you can create a sandbox environment to assist the development process.

2. Push data to your application or data platform

The recommended integration approach for location and sensor data is to push it to your application or data platform. This method provides several advantages over polling:

  • Efficient data delivery: data is delivered only when meaningful changes occur.

  • Stream filtering: you can pre-filter the stream using specific criteria.

  • Predictable delivery: you can deliver data at defined intervals and batch sizes.

3. Setting up your data stream (Output Target)

To deliver data from the Quuppa Positioning Engine (QPE) to your application, you must configure an Output Target. The Output Target defines the destination, the format, and the specific conditions that trigger a data transmission. By optimising these settings, you ensure that your application receives only the necessary data, which reduces network load and processing requirements.

You can build your integration pipeline by defining your data flow in three steps:

Step 1: Create a data template (Output Format).

Step 2: Define the logic rules for data collection (Output Target).

Step 3: Establish the delivery pipe for the data stream (Destination and Frequency).

3.1 Define what RTLS information you want to receive in your use cases

The first step is to create an Output Format, which acts as the template for the data your application will receive. To decide what the template should contain, consider the following question:

What information do you need for your application?

  • Typical message types (see Examples section for detailed instructions):

    • Location Update Event

    • Zone Change Event

    • Button Event

    • Movement Event

  • Apply transformations if needed for your own application:

    • Date format: Unix or ISO Date

    • Measurement rounding: number of decimals

    • Coordinate types: local or georeferenced

    • Custom keys: map JSON object keys to your own terminology

(Create Output format and set Transformations if needed.)
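To illustrate the effect of the custom-key transformation, the sketch below renames output keys to application-specific terminology. The key names and the mapping are examples only, not fixed Quuppa identifiers, and in practice the mapping is configured in the Output Format rather than in client code:

```python
def remap_keys(message: dict, key_map: dict) -> dict:
    """Return a copy of the message with keys renamed per key_map;
    keys not listed in the map are kept as-is."""
    return {key_map.get(key, key): value for key, value in message.items()}

# Hypothetical mapping: both the default names and the targets are examples.
KEY_MAP = {"tagId": "assetId", "tagName": "assetName"}

msg = {"tagId": "012345678901", "tagName": "TestName", "location": [0.5, 2.5, 1.2]}
print(remap_keys(msg, KEY_MAP))
# → {'assetId': '012345678901', 'assetName': 'TestName', 'location': [0.5, 2.5, 1.2]}
```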

3.2 Define in which cases you want the data to be collected

Next, configure an Output Target to establish the specific logic rules and filters for your data stream. Frame your requirements by answering this question:

In what kinds of situations do you want to collect/receive this data?

  • Typical stream types (pre-configured examples):

    • Location update event

    • Zone change events

    • Button press/release events

    • Movement start/stop events

(Create Output target, set Trigger mode, On data change and Filter parameters.)

3.3 Define how often and where you expect to receive the data

Finally, define the delivery parameters to determine where and how often the Positioning Engine pushes the data. To decide on the best approach, consider the following question:

Do you want to consume the data immediately or store it for later use?

Typically, there are two options:

  • Receive immediately: low-latency interaction for event-driven applications or map views updating in real-time.

  • Collect data and receive later: collecting for later use in analytics applications, reporting tools, or data lakes.

Choose the destination and the method, such as MQTT or UDP, to push the data. For testing purposes, you can also push the stream to a local File to easily verify what content it produces.

(Set Max Batch Size and Send Interval parameters for your output target.)
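For a quick end-to-end check during development, a minimal receiver can be sketched in a few lines. The sketch below assumes a UDP destination delivering one JSON payload per datagram; the port number is an arbitrary example, not a Quuppa default:

```python
import json
import socket

PORT = 7777  # arbitrary example; match the port configured in your Output Target

def parse_datagram(payload: bytes):
    """Decode one UDP datagram carrying a JSON message or batch."""
    return json.loads(payload.decode("utf-8"))

def serve(port: int = PORT) -> None:
    """Listen for pushed messages and print them as they arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        payload, _addr = sock.recvfrom(65535)
        print(parse_datagram(payload))
```

If you choose MQTT as the method instead, the same JSON parsing applies to each received message payload.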

4. Examples for creating different types of output streams

The following examples demonstrate how to configure typical location or sensor-based event streams.

4.1 Location Update Events

In this example, we configure a message format and a stream that:

  • Collects an event on a 1-second interval.

  • Triggers only IF the coordinate of a tag changes.

  • Passes collected events to the stream either immediately (Option 1) or in batches (Option 2).

Step 1: Create Output Format for Location Update Event (The Template)

Name the format LocationUpdateMessage and choose the relevant fields using the Quuppa Output Format Syntax:

  • $(version.1): API format version (every format must start with version information).

  • $(location.ts.iso): timestamp of the coordinate change.

  • $(tag.id): ID of the tag.

  • $(tag.name): name of the tag.

  • $(coordinateSystem.id): ID of the coordinate system (Building or Floor).

  • $(coordinateSystem.name): name of the coordinate system (Building or Floor).

  • $(location.;.2): new location coordinate for the tag, as an [x;y;z] list, where the separator character is a semicolon (;) and results are provided with two decimals.

  • $(output.ts.iso): timestamp of when the message was collected.

Note:
In this example, we apply an .iso transformation to all timestamps to provide them in ISO 8601 format (e.g., 2025-12-09T12:22:26.459Z) instead of Unix format (e.g., 1765282754376).
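Both representations carry the same instant; for example, on the consuming side a Unix millisecond timestamp can be rendered as ISO 8601 with a few lines of Python (a sketch for the receiver, unrelated to the QPE transformation itself):

```python
from datetime import datetime, timedelta, timezone

def unix_ms_to_iso(ms: int) -> str:
    """Render a Unix timestamp in milliseconds as an ISO 8601 UTC string."""
    # Integer timedelta arithmetic avoids floating-point rounding of the millisecond part.
    dt = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=ms)
    return dt.isoformat(timespec="milliseconds").replace("+00:00", "Z")

print(unix_ms_to_iso(1765282754376))  # → 2025-12-09T12:19:14.376Z
```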

Example of full format definition:

$(version.1),$(location.ts.iso),$(tag.id),$(tag.name),$(coordinateSystem.id),$(coordinateSystem.name),$(location.;.2),$(output.ts.iso)

Message output to stream in JSON format:

{
 "locationTS": "2025-12-09T12:22:26.459Z",
 "tagId": "012345678901",
 "tagName": "TestName",
 "coordinateSystemName": "CoordSysName",
 "coordinateSystemId": "CoordSys001",
 "location": [0.50, 2.50, 1.20],
 "outputTS": "2025-12-09T12:22:26.554Z"
}
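On the receiving side, a message in this format can be consumed directly as JSON. A minimal sketch using the field names from the example above (consumer-side illustration, not part of the QPE configuration):

```python
import json

def parse_location_update(raw: str):
    """Return (tag_id, (x, y, z)) from a LocationUpdateMessage JSON string.

    Field names follow the example message above."""
    message = json.loads(raw)
    x, y, z = message["location"]
    return message["tagId"], (x, y, z)

sample = '''{
 "locationTS": "2025-12-09T12:22:26.459Z",
 "tagId": "012345678901",
 "tagName": "TestName",
 "coordinateSystemName": "CoordSysName",
 "coordinateSystemId": "CoordSys001",
 "location": [0.50, 2.50, 1.20],
 "outputTS": "2025-12-09T12:22:26.554Z"
}'''
print(parse_location_update(sample))  # → ('012345678901', (0.5, 2.5, 1.2))
```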

Step 2: Create Output Target for Location Update Event (The Logic Rules)

Name the target LocationUpdateEvent. In the Notes section, you can describe the behaviour of the target. Select LocationUpdateMessage as the format for the target and set the type to JSON.

  • Trigger Mode: defines the frequency with which the data stream is checked for changes. Choose Interval and set the rate to 1 second. The system checks the stream every 1 second.

  • On Data Change: defines which specific changes are monitored. Set $(location) as the value so the system checks if the location coordinate has changed since the previous 1-second interval check.

  • Filter by: choose RadiusBelow2m. The system will filter out any location updates whose accuracy radius is greater than 2 metres.
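The combination of an Interval trigger and an On Data Change condition behaves like a periodic check that forwards only changed values. The sketch below models these semantics in isolation; it is an illustration, not how the QPE is implemented:

```python
def changed_only(samples):
    """Yield only the samples that differ from the previous one.

    Models the On Data Change rule applied at each interval check:
    each element of `samples` stands for one 1-second snapshot of a
    tag's location, and only changes are emitted to the stream."""
    previous = object()  # sentinel so the first sample always counts as a change
    for sample in samples:
        if sample != previous:
            yield sample
        previous = sample

# Three interval checks, of which the middle one repeats the coordinate:
snapshots = [(0.50, 2.50, 1.20), (0.50, 2.50, 1.20), (0.62, 2.50, 1.20)]
print(list(changed_only(snapshots)))  # only two events are emitted
```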

Step 3: Define the delivery parameters (The Pipe)

Configure the receiving parameters for the target based on your application needs:

  • Option 1 (Receive Immediately): set Max Batch Size to 1 (default) and Send Interval to -1 (default). The system will produce a separate message to the stream for each tag. The message is sent immediately when the event occurs.

  • Option 2 (Collect and receive later): set Max Batch Size to 100 and Send Interval to 60 seconds. The system will collect up to 100 events to be sent to the stream in a single message. The batched message is sent when it reaches 100 events OR when 60 seconds have elapsed since the previous send.

Select File as the target destination and save. You can test this stream by running a simulation or using a live deployment. The stream will write results to a File in the defined folder.
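Both delivery options follow the same rule: flush when the batch fills up or when the send interval elapses, whichever comes first. A sketch of that rule (an illustration of the Max Batch Size / Send Interval semantics, not QPE code):

```python
import time

class Batcher:
    """Sketch of the delivery rule for an Output Target: flush the batch
    when it reaches max_size OR when max_interval_s has elapsed since
    the last send."""

    def __init__(self, send, max_size=100, max_interval_s=60.0, clock=time.monotonic):
        self.send = send                      # callable receiving one batch (a list of events)
        self.max_size = max_size              # Max Batch Size
        self.max_interval_s = max_interval_s  # Send Interval
        self.clock = clock                    # injectable for testing
        self.events = []
        self.last_send = clock()

    def add(self, event):
        """Queue one event, flushing if either limit is reached."""
        self.events.append(event)
        if len(self.events) >= self.max_size or self.clock() - self.last_send >= self.max_interval_s:
            self.flush()

    def flush(self):
        """Send the queued events as a single batched message."""
        if self.events:
            self.send(list(self.events))
            self.events.clear()
        self.last_send = self.clock()
```

With max_size set to 1, the rule degenerates to Option 1: every event is sent immediately as its own message.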

Summary of the created target:

  • Collects location coordinate changes on a 1-second granularity.

  • Filters out results where the accuracy is not below a 2m radius.

  • Sends the results into a stream immediately (Option 1) or in batches of 100 events every 60 seconds (Option 2).

  • Stores the stream in a file.

4.2 Zone Change Events

In this example, we configure a message format and a stream that collects an event whenever a tag moves from one zone to another.

This configuration:

  • Collects an event on any new location update.

  • Triggers only IF the zone of a tag changes (tag enters or exits a zone or geofence).

  • Passes collected events to the stream either immediately (Option 1) or in batches (Option 2).

Step 1: Create the Output Format for Zone Change Event (The Template)

Name the format ZoneChangeMessage and choose the relevant fields for the use case, using the Quuppa Output Format Syntax:

  • $(version.1): API format version (every format must start with version information).

  • $(lastPosition.zonePreviousTransitionTS): timestamp of the zone change.

  • $(tag.id): ID of the tag.

  • $(tag.name): name of the tag.

  • $(lastPosition.zoneIds.;): new (current) zone ID for the tag.

  • $(lastPosition.zoneNames.;): new (current) zone name for the tag.

  • $(lastPosition.zonePreviousTransitionFromIds): previous zone ID for the tag.

  • $(lastPosition.zonePreviousTransitionFromNames): previous zone names for the tag.

  • $(lastPosition.zonePreviousStateDuration): duration spent in previous zone.

  • $(coordinateSystem.id): ID of the coordinate system (Building or Floor).

  • $(coordinateSystem.name): name of the coordinate system (Building or Floor).

  • $(output.ts): timestamp of when the message was collected to stream.

Example of full format definition:

$(version.1),$(lastPosition.zonePreviousTransitionTS),$(tag.id),$(tag.name),$(lastPosition.zoneIds.;),$(lastPosition.zoneNames.;),$(lastPosition.zonePreviousTransitionFromNames),$(lastPosition.zonePreviousTransitionFromIds),$(lastPosition.zonePreviousStateDuration),$(coordinateSystem.id),$(coordinateSystem.name),$(output.ts)

Message output to stream in JSON format:

{
 "lastPositionZonePreviousTransitionTS": null,
 "tagId": "012345678901",
 "tagName": "TestName",
 "lastPositionZoneIds": null,
 "lastPositionZoneNames": null,
 "lastPositionZonePreviousTransitionFromIds": null,
 "lastPositionZonePreviousTransitionFromNames": null,
 "lastPositionZonePreviousStateDuration": null,
 "coordinateSystemId": "CoordSys001",
 "coordinateSystemName": "CoordSysName",
 "outputTS": 1765281491569
}
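Note that in this sample output the zone fields are null, which a consumer should be prepared for, e.g. when a tag has not yet entered any zone. A sketch of a consumer that tolerates the nulls, using the field names from the example above:

```python
import json

def describe_zone_change(raw: str) -> str:
    """Summarise a ZoneChangeMessage for logging.

    The zone fields may be null (e.g. when the tag has not yet entered
    any zone), so a placeholder is substituted. Field names follow the
    example message above."""
    message = json.loads(raw)
    current = message.get("lastPositionZoneNames") or "no zone"
    previous = message.get("lastPositionZonePreviousTransitionFromNames") or "no zone"
    return f"Tag {message['tagId']}: {previous} -> {current}"

sample = ('{"tagId": "012345678901", "lastPositionZoneNames": null, '
          '"lastPositionZonePreviousTransitionFromNames": null}')
print(describe_zone_change(sample))  # → Tag 012345678901: no zone -> no zone
```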

Step 2: Create the Output Target for Zone Change Event (The Logic Rules)

Name the target Zone Change Event. In the Notes section, you can describe the behaviour of the target. Select ZoneChangeMessage as the format for the target and set the type to JSON.

  • Trigger Mode: defines the frequency with which the data stream is checked for changes. Choose LocationUpdate. The system will now check the raw stream every time a new location is calculated.

  • On Data Change: defines which changes are monitored in the raw stream. Set $(location.zoneNames) as the value. The system will now check whether the zone has changed since the previous check.

  • Filter by: no selection is needed for this use case.

Step 3: Configure the receiving parameters for the target

Configure the delivery method for your zone change data based on your application needs:

  • Option 1 (Receive Immediately): set Max Batch Size to 1 (default) and Send Interval to -1 (default). The system will produce a separate message to the stream for each tag, which is sent immediately when the event occurs.

  • Option 2 (Collect and Receive Later): set Max Batch Size to 100 and Send Interval to 60 seconds. The system will collect up to 100 events to be sent in a single message when the size limit OR the 60-second time limit is reached.

Select File as the target destination and save. You can test the stream by running a simulation or using it on a live deployment. The stream will write results to a File in the defined folder.

Summary of the created target:

  • Data collection: monitors for zone transitions whenever a new location is calculated.

  • Trigger logic: only records an event when the tag enters or exits a zone.

  • Immediate delivery (Option 1): sends a separate message for each tag as soon as the event occurs.

  • Batched delivery (Option 2): sends up to 100 events in a single message OR every 60 seconds.

  • Storage: the output is saved directly to a file.