Our Technology Solutions Director, Phil Bolus, looks at the four steps to create value and identify opportunities from your data analytics.
From Building Energy Management Systems (BEMS) to utility meters, thermostats, appliances and even individual sensors – smart devices of all types are now capable of communicating valuable data about their environment and operation. Access to this data opens up new opportunities for businesses to improve operation of their systems, reduce costs, and validate investments in energy savings measures and system upgrades.
It’s fairly straightforward to get access to the data, but the real key is to create value from it. Companies are beginning to understand that the Internet of Things (IoT) can drastically change the way offerings are developed, sold and consumed. Technology advancements in areas such as communications, power and computational efficiencies, as well as cloud, big data and analytics capabilities, have made the IoT accessible to most organisations.
There are four basic steps to this analytics process:

1. Retrieving your data
2. Managing your data
3. Analysing your data
4. Presenting your findings

Let’s look at these steps and how best to achieve each one.
1. Retrieving Your Data
The first step is to “retrieve your data”. Building data is typically in a number of different formats – data from automation systems, utility data, facility asset information, and data that may be in Excel, CSV or other formats. A good analytics package accepts this multi-structured data into a database which is designed specifically for processing large volumes of “time-series” data. Because of these dissimilar data streams it is important to design a number of “data connectors” that allow the ingress of all of these into the analytics package.
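As a minimal sketch of what one such “data connector” might do, the snippet below parses a utility CSV export into uniform time-series records ready for ingestion. The column names, point identifier and record shape are assumptions for illustration, not any particular product’s schema.

```python
import csv
import io
from datetime import datetime, timezone

def csv_connector(csv_text, point_id):
    """Parse a utility CSV export into uniform (timestamp, point, value) records."""
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        records.append({
            # Normalise timestamps to UTC so streams from different sources align.
            "ts": datetime.fromisoformat(row["timestamp"]).replace(tzinfo=timezone.utc),
            "point": point_id,
            "value": float(row["kwh"]),
        })
    return records

sample = "timestamp,kwh\n2024-01-01T00:00:00,12.5\n2024-01-01T00:30:00,11.8\n"
rows = csv_connector(sample, "meter/main")
print(len(rows), rows[0]["value"])  # 2 12.5
```

A real analytics package would have one such connector per source format (BACnet, Modbus, utility feeds, spreadsheets), all emitting the same record shape into the time-series database.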
Cloud-based or “at the edge”
When retrieving data, a decision has to be made on whether to store the data for analysis locally or in the cloud. Most systems retrieve the data to a central database and analyse it after retrieval, which in some cases may happen only once per day.
There are several things to consider when making this decision:
You can’t achieve the required speed of retrieval if latency is introduced by sending data and application calls between centralised systems and remote devices. In large multi-site organisations with thousands of sensors connected to the IoT, latency is exacerbated as sensors constantly drop on- and offline.
Centralised storage can be costly both in terms of transport (bandwidth) and storage. As the number of connected IoT devices expands, these costs have to be considered carefully and the correct model adopted.
Security is the number one concern when implementing any data-driven solution, and traditionally IoT solutions have been designed as closed-loop networks with no exposure to the Internet. Isolating a system in this way avoids the security risk, but it also prevents the system from taking advantage of valuable external data feeds and from tapping into more powerful remote processing to supplement local analytics capabilities.
Bear in mind that many connected devices that collect and transfer data to a centralised repository lack the capability to defend against cyber-attacks and have only limited security models built in.
If every single IoT device were linked across the Internet to a centralised cloud, it would expose an incredibly large attack surface through which hackers could gain access to critical data and applications. Even more troubling, such a compromise could be used to send malicious control commands back to the devices. One effective solution is to consolidate multiple sensor connections into a secure aggregation point behind a firewall. Centralising data behind the firewall helps reduce the overall attack surface.
2. Managing Your Data
To tag or not to tag!
At the risk of stating the obvious, when you “tag” something you give it a name that everyone or every system recognises. In order to make sense of the data you are receiving it is important to ensure that all “assets” are identified along with all their interoperable devices.
If we are going to bridge the gap between energy and BEMS then it is important that you create a structure that allows the database to correlate changes between the two. Tagging is a crucial element of setting up a site for data analytics; if the tagging is wrong then the results will be wrong!
Tags need to be consistent across a whole portfolio; whether that is a single site or multiple sites. There are industry standards such as Project Haystack that can be utilised to provide these references from a common source. At SSE Enterprise Energy Solutions we automatically tag most of the data through an advanced application program that recognises labels and assigns a standard tag from a centrally held library. It is also possible to manually assign a tag in cases where there is no standard.
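The automatic tagging described above can be sketched as a simple lookup from raw BEMS labels to standard tag sets. The labels and tags below are illustrative Haystack-style examples, not the actual contents of any tag library.

```python
# Hypothetical label library: raw BEMS labels mapped to standard tag sets.
TAG_LIBRARY = {
    "SAT": {"air", "temp", "discharge", "sensor"},
    "RAT": {"air", "temp", "return", "sensor"},
    "ZN-T": {"air", "temp", "zone", "sensor"},
}

def auto_tag(raw_label):
    """Return standard tags for a recognised label, or None to flag manual tagging."""
    key = raw_label.strip().upper()
    return TAG_LIBRARY.get(key)

print(auto_tag("sat"))  # recognised: supply-air temperature tags
print(auto_tag("XYZ"))  # None -> falls back to manual tagging
```

Anything the library doesn’t recognise returns `None`, mirroring the manual-assignment fallback for labels with no standard equivalent.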
In most BEMS, controllers are modelled as an object known as a device. Generally speaking, all control points exist underneath one of these devices. However, in a modern analytics structure there is no one-to-one relationship between an equipment record and a device. Any device can have its points belong to more than one equipment record, and any equipment record can have points from more than one device. This flexibility allows us to get away completely from the network-centric view of the world found in most BEMS workstations: you can create representations of your data that reflect the real-world equipment on your site, rather than the layout of your controller network.
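The many-to-many relationship between devices and equipment records can be sketched as below; the point, controller and air-handling-unit names are invented for illustration.

```python
# Each point references both its controller (device) and the real-world
# equipment it belongs to, so one device can serve several equipment
# records and one equipment record can span several devices.
points = [
    {"id": "p1", "device": "ctrl-1", "equip": "ahu-1", "tags": {"fan", "cmd"}},
    {"id": "p2", "device": "ctrl-1", "equip": "ahu-2", "tags": {"fan", "cmd"}},
    {"id": "p3", "device": "ctrl-2", "equip": "ahu-1", "tags": {"temp", "sensor"}},
]

def points_for_equip(equip_id):
    """Equipment-centric view: all points for one real-world asset."""
    return [p["id"] for p in points if p["equip"] == equip_id]

print(points_for_equip("ahu-1"))  # ['p1', 'p3'] -- spans two controllers
```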
Once the tags are applied and stored then they will be instrumental when we apply “rules” around the operation of the assets, but more on that later.
3. Analysing Your Data
Analysis is accomplished with a data analysis “engine” that provides a comprehensive set of functions for manipulating and analysing data. The system installer/supporter writes rules, which are then processed by the analysis engine.
These rules are powerful tools for automating the procedure of finding anomalies in plant operation, energy consumption, asset lifecycles etc. By using rules and applying them to tagged points you can create hugely powerful solutions that only need to be written once but applied many times. You can create new rules based upon new observations or ideas at any time.
Depending on the use case, the data sources may be set up to feed data directly into an edge aggregation and analytics device or directly to the cloud. In the case of an intelligent gateway, some data can be processed with local analytic software in real or near-real time to generate data-driven actions and insights.
Additionally, data may simply be passed through to the next tier, such as another gateway, a datacentre or the cloud. Several gateways can be deployed: some have a single source feeding them data, whilst others have several sources streaming data to them in a variety of protocols. Consider an intelligent gateway inside a rooftop HVAC unit that collects hundreds of data points a second. A customer’s central monitoring station may require only a few key points to be sent every day. Meanwhile, the gateway can analyse every piece of collected data in real time to optimise performance or sense an impending failure, then trigger events to alert repair crews or safely shut the unit down. Once in the centralised system, the subset of data from the HVAC unit can be used for batch analytics and longer-term energy efficiency planning. Transferring only the most important information greatly reduces the amount of data sent across the network while still providing insight and business value to the end user.
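The gateway pattern above can be sketched as a local summary step: the full-rate data stays on the edge device, a local rule runs against all of it, and only a compact daily summary is forwarded centrally. The readings and alert threshold are invented for illustration.

```python
def summarise(samples):
    """Reduce full-rate edge data to the compact summary sent to the centre."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(sum(samples) / len(samples), 2),
    }

# In practice this list would hold hundreds of readings per second.
supply_temps = [18.2, 18.4, 19.0, 24.9, 18.3]
summary = summarise(supply_temps)

# A local rule runs against the full data before anything leaves the site.
alert = summary["max"] > 22.0
print(summary, alert)
```

Only `summary` (four numbers) crosses the network; the spike that sets `alert` is caught locally, in real time.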
4. Presenting Your Findings
The final element in an IoT system is the ability for customers and employees to gain insight from the data. To gain deeper contextual insight, IoT data can be blended with internal and third-party data sources. For example, IoT data can be integrated with CRM or ERP system data, as well as social media or weather data. In addition, organisations can provide rich analytic-based applications for customers to view and interact with their data.
Many analytics engines present data in a complex format, and this can be confusing to the end user. By interfacing with the analytics package’s API, a more user-friendly picture can be displayed using dashboards and mobile devices.
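As a hypothetical sketch of that interfacing step, the snippet below reshapes the kind of nested response an analytics API might return into the flat series a dashboard chart widget expects. The response structure and field names are assumptions, not any real product’s API.

```python
# Hypothetical analytics API response (shape assumed for illustration).
api_response = {
    "point": "meter/main",
    "rows": [
        {"ts": "2024-01-01", "v": 310.0},
        {"ts": "2024-01-02", "v": 295.5},
    ],
}

def to_dashboard_series(resp):
    """Flatten a nested API response into x/y arrays for a chart widget."""
    return {
        "label": resp["point"],
        "x": [r["ts"] for r in resp["rows"]],
        "y": [r["v"] for r in resp["rows"]],
    }

series = to_dashboard_series(api_response)
print(series["label"], series["y"])  # meter/main [310.0, 295.5]
```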
In summary, there is a wealth of untapped information about buildings hidden within data that has previously only been examined when an incident has occurred. We now have the ability to check this data against pre-written “rules” and automatically change parameters; generate, filter and then announce events; and present data in a meaningful manner to the building owner/operator. Deep analysis of BEMS data is here, and it looks likely to stay for a long time to come.