Presented By:
Kerry Williams
K-BIK Power Pty Ltd
TechCon 2018
Abstract
In recent years there has been a significant uptake in on-line condition monitoring of electrical network assets. This paper explores the impact of that change, from years past to the present, and looks at what the future might hold.
During the 1990’s, condition monitoring was in its infancy and was mostly done to check whether the routine time-based maintenance needed extra tasks added. In the early 2000’s came greater access to better technology, so the ability to monitor assets escalated, and it is now regarded as a mandatory requirement within any electrical network business. As technology has developed, the ability to add more devices to assets has also increased. We can now monitor almost anything on any asset anywhere and get terabytes of data in nanoseconds. Herein lies the new problem: what do we do with all that data, how do we store it, analyze it, and then actually use it to make decisions?
The power utilities in Australia operate under a regulatory environment and are under increasing pressures to reduce costs and increase reliability of power supply. The Government drive toward renewables has changed the face of the networks and building large power stations is all but off the agenda. In September of 2016 South Australia experienced the worst blackout in Australian history and now all utilities are being required to look at their exposure to similar events.
It has been realized that many utilities have expertise in condition monitoring, but that expertise is held by a few key engineers and the data analysis is done on spreadsheets these people keep on their hard drives. The organizations rely totally on those few people and their knowledge of the assets to make decisions that are critical to keeping the network operational. It has also come to light that these people get their data from field staff using a semi-automated inspection regime, and from on-line devices that send data to stand-alone systems. This delays the acquisition of data that could be vital in preventing an asset failure.
Their challenge now is how to move from well-developed spreadsheets, managed by key individuals, to an organization-wide, technology-based system that does all the analysis in real time.
This paper explores how the Australian utilities are grappling with this issue, how they are changing the way they need to work, and how they intend to meet this challenge head-on to future-proof their networks.
Introduction
The evolution of substation asset condition monitoring has been rapid and has really come of age in the last 20 years. The fact that we have the technologies to monitor assets on-line has helped with the speed of evolution. The business drivers for doing monitoring are different and not necessarily directly related to the advances in technologies. The power utilities of today tend to be more privately owned, although in Australia there is still significant Government ownership, even if only in part. This privatization has seen a change in business objectives and strategies and a clear move away from a cost centre model to a shareholder-driven profit centre.
The change has brought about the need to reduce costly maintenance tasks by implementing lower cost condition-based maintenance methods. Where in the past it was acceptable to have lengthy outages and large teams of maintenance staff doing routine maintenance, the game has changed to lower costs, fewer staff and fewer outages of shorter duration. The shareholders have not invested in a business to lose money and the need to reduce costs is ever increasing. In the past 10 years, the pressure for cost reductions has intensified as consumers use solar power to reduce their costs and move away from the traditional power system.
We have seen a huge shift in technology since the 1980’s when the telephone first started to go mobile. Stand-alone desktop computers were becoming commonplace in offices and homes, and M2M (machine-to-machine) technologies were introduced with modems. Then, 20 years ago, consumers started connecting to the Internet to research web pages, communicate via email and speed up the use of data analytics. Eventually this technology became available from mobile phones, and as recently as 10 years ago iPhone and Android phones were widely launched with social media, Twitter, and music and video streaming. Since about 2010 consumers have changed the way they use energy: they own PV and battery storage systems and rely heavily on artificial intelligence (AI) technology for information. Now, autonomous transmission of data from “things” can be stored in internet or “cloud” databases.
With the Internet of Things (IoT) the next revolution has started, and we are stepping into cyber-physical systems (CPS) where data from connected and mutually interacting “things” controls machines and systems using AI and AR (augmented reality) systems. This fast-evolving trend is being recognized as having the ability to provide services such as security and comfort in a “good way” through driverless cars, buses, trains, service robots, automation and much more.
So how is this changing technology able to help a business cut costs, increase profits and maintain a more reliable power system network, and what does the future hold for the Asset Manager of a power system? The availability of these technologies is changing the way that utilities extract condition monitoring data from assets and turn it into near real-time decision-making information.
Gathering Data from Assets
For many years field technicians gathered data from assets by visiting a site, inspecting assets, taking samples, and performing tests. This information was transferred to spreadsheets and analyzed over a number of days or weeks to help the asset manager understand the condition of the asset. The timeline between obtaining the data and turning it into information was so long that an asset at high risk could be missed and fail before action decisions could be made.
More recently, online devices have helped reduce site visits by extracting the data on-line and sending it to a database where an analyst can download it, reformat it, run some algorithms and use expert knowledge to decide on the asset condition. This online data has sped up the process, but it can still be days before any formal knowledge of the asset is known. Additionally, the data to be stored needs to be filtered, or the volume of data available could be so large that it is nearly impossible to analyze. Where a device can extract data every second or faster, that data needs to be “compressed” so a year of data takes only a few gigabytes, or even less, of server space. This has been an issue for some time and it is known that some devices produce terabytes of data daily. These types of devices are not well liked as they are seen as producing so much data that it becomes too cumbersome to keep working with without specialist software.
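As an illustration of how that compression might be done, the minimal sketch below (in Python, assuming a hypothetical 1 Hz temperature sensor and using pandas for the aggregation) reduces a day of raw one-second readings to hourly minimum, mean and maximum values. The sensor name, interval and statistics kept are placeholders that would be chosen to suit the business need.

```python
import numpy as np
import pandas as pd

# Illustrative only: simulate one day of 1 Hz readings from a single sensor
# (e.g. transformer top-oil temperature). A real device would stream this data.
idx = pd.date_range("2018-01-01", periods=24 * 3600, freq="s")
raw = pd.DataFrame({"top_oil_temp_c": 55 + 5 * np.random.rand(len(idx))}, index=idx)

# "Compress" by keeping only hourly summary statistics instead of every sample.
# 86,400 rows per day shrink to 24 rows while trend, peaks and troughs remain.
hourly = raw["top_oil_temp_c"].resample("1h").agg(["min", "mean", "max"])

print(f"raw rows: {len(raw)}, stored rows: {len(hourly)}")
print(hourly.head())
```

On this basis a year of 1 Hz readings from one sensor shrinks from roughly 31.5 million rows to under 9,000 summary rows, which is a far more manageable volume for long-term storage and trending.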
This data volume problem will only get worse as data acquisition technology continues to evolve and improve. According to a recent report by the technology giant Ericsson (1), around 29 billion connected devices are forecast to be in operation by 2022, of which around 18 billion will be related to the IoT. The rapid increase of connected things is having a profound impact on the way governments are developing their infrastructure. The fact that they can monitor all kinds of “things” has allowed governments to gather huge volumes of data to analyze almost everything. Electronic device and software manufacturers can monitor almost everything their customers do and then, by using AI, start predicting what the customer needs before they realize they need it.

If we apply some of this thinking to utilities and their asset management needs, then we open a whole new world of possible forms of data gathering that can provide information in real time. The data, and then the information, can be used for short- and long-term planning or regulatory submissions. It is difficult for a regulator to refute accurate data that has been gathered over time and provides a true reflection of the asset condition and risk.
The challenge though is understanding what “things” to put onto your asset and how much is too little or too much. It really boils down to 5 simple questions:
- What do I need to monitor?
- Why do I need to monitor it?
- What data do I need to deliver the information I want?
- What will I do with that information?
- What is the real benefit to my business?
You will notice that none of these questions is about which device you need. The device only matters after the questions are answered, because the answers will point you to the specific devices and systems needed. We should never let the device drive what we need to do in our networks.
Figure 2 shows diagrammatically what the asset manager needs to consider when looking to get condition monitoring data from any asset. Fundamentally, it is deciding what is the right data needed to eventually make a decision on the asset. That decision may range from doing nothing through to replacing the asset or running it to failure. Regardless of the decision, it is imperative that the right data is obtained, and to do that both a bottom-up and a top-down approach need to be taken.
The top-down approach starts with the decisions from the 5 questions above. Once these are answered and the answers entered into the Figure 1 diagram, a clear direction is set for monitoring an asset. Decisions can now be made about the devices needed to deliver the data.

The bottom-up approach, where you start from a device and the data you know it can produce, occasionally works, but it is harder to realize the business benefits and to fit the device data outputs to the business systems. Again, letting a device dictate your needs is not a good fit. It is a bit like buying a high-performance sports sedan for a work vehicle when your job entails carting logs to the mill.
Choosing the right tools and devices or “things” needed to provide the data is only part of the journey. The area that utilities often struggle with is the vehicle for bringing that data from the field to the asset manager in the form of clear information. Simply put, data is not information until it is presented in a way that gives a user a view that can be used efficiently and effectively to decide something. Therefore, the data needs to go through a series of steps from the device on the asset to the person making the decision. These steps are critical in the process as they define how the data is changed into information, the speed at which the information is received, the volume of information received and, finally, how the information can be used.
Condition Monitoring System Architecture
Today there is a shift toward Cloud-based data storage and visualization. Cloud systems allow users to access data almost anywhere at any time, so everyone from the company Chief Executive to the field staff and customers can have access to information. The systems can be very simple, allowing open access, or extremely secure so that only specific people with appropriate access rights can use the data.
The modern utility has many stakeholders. These can be the end users of power, generators, other transmission or distribution utilities, regulators, system operators and more. Each of these stakeholders has some level of influence over the way the utility operates, particularly when it comes to taking an outage for maintenance. The utility needs to negotiate with some, if not all, of the stakeholders to obtain an outage. The outage needs to be planned as much as 2 years in advance and take into consideration system security and reliability. When negotiating the outage, the utility talks to the affected stakeholders, arranges a suitable window and discusses contingency plans for continued supply or restoration should another unplanned event occur.
In today’s competitive markets many generators, particularly solar and wind generators, do not want an outage when there is an opportunity to generate and increase revenue. Therefore, the utility must be able to clearly articulate and justify when an outage is required, its duration, what is being maintained and what the system contingency plans are.
When there is accurate information on the asset condition and all stakeholders can see the risks of not maintaining it, the negotiations become a little easier. When the condition monitoring data is combined with system reliability and security modeling, the argument can become even more compelling. Therefore, it is vital to have up-to-date, accurate information on the asset condition. This brings us back to the need to fit the devices that give the right data and a system that can accept the data and then turn it into useful information.
For a number of years now we have seen the continual roll-out of smart grid systems that interact with smart city projects. Taking it down further, the smart building is a reality, and all these “smart” projects appear to be reasonably mature in their concepts and architectures. They tend to be seen as a common essential item in any new development, which makes us wonder why many utilities have not applied this type of technology to the vital assets that hold the smart building networks together. The smart grid (Figure 3) as we know it today ideally has all manner of infrastructure items connected back to main control centers. Ideally, they all “talk” to each other and interact so that, as load shifts in the network throughout the day and night, the network adjusts to suit the demand. This includes such changes in demand as the intermittent use of electric vehicles and fluctuating power supplies from renewables.
The true smart grid has the artificial intelligence to adjust itself to cope with any given event, provided it is designed correctly from the outset, and this is a huge challenge. With a mix of very old assets and new, fast-evolving technologies it is difficult to create a totally smart grid. The capital cost to roll out a major smart technology data acquisition system with devices across an entire network would be prohibitive, and by the time it was rolled out it would be out of date. So where does this leave us and how does it apply to individual assets or a fleet of assets?

Earlier there were 5 questions to answer when implementing any condition monitoring system. As part of answering those questions, it is imperative that there is a strategic direction for the condition monitoring architecture. Any organization that simply adds devices a good salesman convinces them are the only thing they will ever need, without considering their business needs, objectives and strategies, will end up with numerous devices that do not interact and need a great deal of specialist input to get any value from them. Therefore, answering the 5 questions allows the end user to make an informed decision that takes into consideration the long-term direction and whether the device can support it.
When a device has been selected most engineers are initially very happy, as they have a new piece of technology that will help them. This is often true, but in many cases the end user has not considered what tools and techniques are needed to turn that data into information. This is where many of those specialist engineers start to develop their own spreadsheets for analysis. The issue is that they procured the device but did not pay a lot of attention to how the data it delivers can be turned into useful information. They also find they are continually spending time and money to maintain the condition monitoring system to ensure the device is calibrated and the data validated.
Most utilities have a large mainframe system such as SAP, Maximo, Ellipse, Primavera and so on. These systems were often procured in the late 1990’s or early 2000’s and focused on the commercial side of the business, the logistics of plant movement and storing data. They are very powerful and a necessary tool for any utility. What they do not do well are the analytics that engineers need to pass the data through to get the information. This is where the engineers have adapted by developing their own bolt-on systems that help them do that last step.
A device in the field gathers and transmits condition data to a data warehouse, but unless it goes through an analytic process it simply remains data and is very poor at supporting decision making. With many different devices in service, engineers have generally created a separate spreadsheet for each device type. In doing this they analyze only one set of data at a time and need to build another layer of analytics spreadsheets that take each of the sub-sheets and put them into a usable format where the data from all devices can be compared on one screen. The process map starts to look a lot like the organizational tree, as seen in Figure 4 below, and they need as many people to do the work as there are steps in the process.
With dozens of devices in the field, the work involved in analyzing the data can overwhelm a department, and so more often than not they all but stop adding new devices. The way to overcome this is to do much of the analytics within the intelligent devices themselves or in a cloud-based system that supports the use of complex mathematics. By sending the data directly from an intelligent device to the cloud for the analytics to be performed, the time to get information can be substantially reduced and the amount of data that needs to be stored is lessened. The output can be sent to the mainframe database or historian and used to feed the dashboard the asset manager uses to make decisions.
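The sketch below illustrates that idea of pushing analytics toward the device. It is a minimal Python illustration only: the cloud endpoint URL, the payload fields and the 75 °C alarm limit are hypothetical assumptions, and a real system would use the utility's own IoT platform with proper error handling and authentication.

```python
import json
import statistics
import urllib.request

# Hypothetical cloud endpoint; a real deployment would use the utility's own
# IoT platform (MQTT broker, REST API, etc.) with authentication.
CLOUD_ENDPOINT = "https://example.com/api/asset-condition"

def summarize_and_publish(asset_id: str, samples: list[float]) -> None:
    """Analyze a batch of raw readings on the device and send only the result."""
    summary = {
        "asset_id": asset_id,
        "n_samples": len(samples),
        "mean": statistics.mean(samples),
        "max": max(samples),
        # Flag abnormal behaviour locally so the cloud receives an event,
        # not a stream of raw data. The 75.0 degC limit is a placeholder.
        "alarm": max(samples) > 75.0,
    }
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as exc:
        # The endpoint above is hypothetical, so this branch will fire in practice.
        print(f"publish failed: {exc}")

# Example: one minute of 1 Hz temperature readings analyzed at the edge.
summarize_and_publish("TX-0042", [61.2, 61.4, 61.3] * 20)
```

The design point is that only the summary and any alarm flag leave the device, which keeps the stored volume small while still delivering near real-time information.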

In the more traditional method, as shown in Figure 4 above, the data is collected by the field staff and uploaded into the mainframe database. When available, the engineers extract the data, load it into spreadsheets and perform the analysis. The timeline can be days or weeks, but by performing this on-line and in real time the information is provided almost immediately. When a fault is developing in an asset, this real-time information is vital in assessing when it is appropriate to intervene and rectify the cause of the problem. Additionally, when outages are taken on a network, system security and reliability can be monitored by way of the changes in asset condition and performance.
The process in Figure 4 has evolved with the new technologies available and now looks more like that shown in Figure 5. This process allows captured asset data to be stored in the mainframe or historian and utilizes the existing asset data to support the analytics. The process is quite simple and has three (3) basic steps, sketched in code after the list below.
- Get the Data: Collecting the data from the sensors (or things) along with existing and historical data from the mainframe system or other data sources.
- Analyze and Integrate: The data from the connected devices and other data sources is analyzed in real time using cloud-based systems integrated with the enterprise data applications and IoT platforms.
- Visualize and Act: This step allows for the visualization of the asset condition in a dashboard so that the output can provide information for asset managers to take action.
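The Python sketch below shows one way the three steps might hang together. The field names, limits and health scoring are placeholder assumptions for illustration rather than recommended practice; a real implementation would query the IoT platform, historian and work management system instead of returning hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class AssetAssessment:
    asset_id: str
    health_index: float  # 0.0 (poor) .. 1.0 (as new)
    action: str

def get_data(asset_id: str) -> dict:
    """Step 1 - Get the Data: live sensor readings plus history from the mainframe/historian."""
    # Placeholder values; a real system would query the IoT platform and historian.
    return {"asset_id": asset_id, "moisture_ppm": 18.0, "acidity_mg_koh_g": 0.12}

def analyze_and_integrate(data: dict) -> AssetAssessment:
    """Step 2 - Analyze and Integrate: the logic that used to live in an engineer's spreadsheet."""
    # Simplistic scoring for illustration only; real limits come from standards
    # and the business's own historical data.
    score = 1.0
    if data["moisture_ppm"] > 15:
        score -= 0.3
    if data["acidity_mg_koh_g"] > 0.1:
        score -= 0.2
    action = "schedule oil treatment" if score < 0.6 else "no action required"
    return AssetAssessment(data["asset_id"], round(score, 2), action)

def visualize_and_act(assessment: AssetAssessment) -> None:
    """Step 3 - Visualize and Act: push the result to the asset manager's dashboard or work system."""
    print(f"{assessment.asset_id}: health={assessment.health_index}, action={assessment.action}")

visualize_and_act(analyze_and_integrate(get_data("TX-0042")))
```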
It is generally known that most asset engineers spend 80-85% of their time and effort on the first step, trying to dig out data (data mining) from multiple sources and systems. They spend around 10-15% on analyzing what they have dug out and, at best, 5-10% on the actions that really count. Most businesses and engineers would like to turn this upside down!
It should be noted that all three steps are closely linked and overlap considerably; however, the whole aim is to have a more balanced and seamless way of taking the data from the field and getting the right decisions made as quickly as possible. It is also imperative that a historical database of appropriate information is kept. Data that does not fit the business needs or has no impact on the asset condition or performance can be compressed and stored separately for a time when it may be needed, or disposed of.
Automated Condition Based Maintenance Systems
It is now possible to have the decision-making process automated and allow predetermined condition limits to trigger a set of actions associated with the assessed asset. This is as close to automated condition-based maintenance and smart grids as possible. The asset manager needs to set acceptable asset conditions for a range of maintenance criteria and fault conditions. The criteria can then be uploaded into the cloud-based system so that when a specific set of conditions is met the actions are then initiated.
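A minimal sketch of such a rule set is given below. The condition limits, reading names and triggered actions are placeholders invented for illustration, not recommended values; a real system would draw them from the utility's maintenance standards and upload them to the cloud analytics platform.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MaintenanceRule:
    name: str
    applies: Callable[[dict], bool]  # condition limit test
    action: str                      # action initiated when the limit is met

# Hypothetical criteria an asset manager might upload to the cloud system.
RULES = [
    MaintenanceRule("hydrogen high",
                    lambda d: d["h2_ppm"] > 100,
                    "raise DGA investigation work order"),
    MaintenanceRule("top-oil temperature high",
                    lambda d: d["top_oil_temp_c"] > 95,
                    "alarm control room and check cooling"),
    MaintenanceRule("tap-changer operations",
                    lambda d: d["tap_ops_since_service"] > 50_000,
                    "schedule tap-changer overhaul"),
]

def evaluate(asset_id: str, readings: dict) -> list[str]:
    """Return the actions triggered by the latest set of readings."""
    return [f"{asset_id}: {rule.action}" for rule in RULES if rule.applies(readings)]

print(evaluate("TX-0042", {"h2_ppm": 140, "top_oil_temp_c": 82, "tap_ops_since_service": 12_000}))
```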
There are many examples of the use of fuzzy logic to assess a given set of data and determine the asset condition. An example is the work by A. Abu-Siada et al. of Curtin University in Western Australia (2) on using fuzzy logic to determine the correct DGA diagnosis of transformer oil. This type of fuzzy logic is ideal for cloud-based analytics where large volumes of data can be processed in milliseconds. On-line oil monitoring devices can send real-time data to the cloud, where the trending is automated and action triggers are sent for any abnormal events.
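To illustrate the general idea (and not the method of the cited paper), the toy sketch below applies trapezoidal fuzzy membership functions to a single DGA ratio (CH4/H2) and reports the degree of membership in three illustrative fault classes. The breakpoints are invented placeholders, not values from the cited work or any standard.

```python
def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def diagnose(ch4_h2_ratio: float) -> dict:
    """Fuzzy membership of one DGA ratio in three illustrative fault classes.

    Breakpoints are placeholder values loosely in the spirit of ratio methods,
    used only to show how fuzzy sets replace hard thresholds.
    """
    return {
        "partial discharge": trapezoid(ch4_h2_ratio, -1.0, -1.0, 0.05, 0.15),
        "normal ageing":     trapezoid(ch4_h2_ratio, 0.05, 0.2, 0.9, 1.2),
        "thermal fault":     trapezoid(ch4_h2_ratio, 0.9, 1.5, 10.0, 10.0),
    }

memberships = diagnose(0.08)
print(memberships)
print("most likely:", max(memberships, key=memberships.get))
```

The benefit of the fuzzy approach over hard limits is that a reading near a boundary produces partial membership in two classes, which reflects diagnostic uncertainty rather than flipping abruptly between verdicts.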
To move to a more automated condition monitoring system, the asset manager should start with their known systems and the proven proprietary products immediately available to their business. By starting with known devices and data, the system can be built and verified against the business’s historical data. When the system is operational and proven on a small scale, it can then be scaled up in a sustainable way.
As mentioned earlier, fitting intelligent devices across an entire network would be cost prohibitive. Starting small, with a strategy to increase their use over time, will allow a utility to integrate the devices and the software needed to support them. The change is more of a journey than a single significant business change. Part of the journey is to extract those spreadsheets from the expert engineers and have the complex algorithms put into a cloud analytics database. This will be one of the greatest challenges, as most expert engineers are reluctant to completely let go of what has been their main purpose. Therefore, the business strategy needs to incorporate a change management component that helps the expert engineering teams understand how they can refocus their efforts on better devices and the analytical tools needed to support those devices. Additionally, they are still needed to review data sets to ensure the limits set are in accordance with best practice and the business objectives. This way the cost reductions can be made as the systems evolve and get smarter and more accurate.
There is a likelihood that another type of engineering role may emerge: that of a “data integrity manager”, a person responsible for deciding which data should be stored long term and for ensuring the data assembled for the engineering analyst comes from a single source of truth and that the values are accurate recordings from the devices. This role would be an integral part of the team that decides which assets to monitor and would probably help relieve the Asset Manager’s headache.
Conclusion: What the Future Holds
It goes without saying that there are endless possibilities for the future. What we saw in the 1980’s as visionary and probably impossible to achieve is now here. The speed of technological change was expected to slow, but instead we see it continuing to grow at rates that are near impossible for any one person to comprehend.
What has been described in this paper is likely out of date by the time it is published, yet the assets we try to hide behind big fences around substations have not changed a great deal in the last 40 years. What we are now doing is managing those assets better. We are learning how to extract more accurate data from them and make the right decisions in a time frame that allows us to maximise the asset life.
If, as the author, I could look into my crystal ball and see what was coming, I would say: “There will be robots with wireless devices that can go to a location and assess an asset without touching it. The full condition and performance data will be extracted and combined with the life history, and analyzed in a chip-sized super-computer that provides the end user with all manner of asset information in a hologram. All this will be done in nano-seconds and the end user will be informed of what action was taken, if any was needed.”
Whether this becomes reality or not is yet to be seen, but what is real is that by not embracing these technologies and what they can do to support our businesses, we run the risk of having networks that simply do not deliver what our customers will be demanding.
References
1. Ericsson, “Ericsson Mobility Report: Internet of Things Forecast”, June 2017.
2. A. Abu-Siada, S. Hmood and S. Islam, “A New Fuzzy Logic Approach for Consistent Interpretation of Dissolved Gas-in-Oil Analysis”, IEEE Transactions on Dielectrics and Electrical Insulation, vol. 20, no. 6, pp. 2343-2349, December 2013.