
Unlocking scalable predictive analytics with next-gen AI models

The need for AI-based predictive solutions for electromechanical assets is on the rise. However, building large-scale, AI-dependent online systems is by no means an easy feat. Successfully creating useful predictive analytics pipelines requires experience, and some of the biggest challenges only surface once a solution is used in daily operations!

In this post, we introduce several challenges that our AI models solve by design. These solutions are the result of many years of tackling such challenges on a daily basis. Sit tight!

Unified AI modelling

Unlike standard industry practice, our approach is to build a single large-scale AI model (see Figure 1) for all wind turbines within a wind farm, for instance. The same approach also applies to solar panels and industrial electromechanical machinery.

Figure 1 - We build a unified AI model that learns the normal behaviour of all internal components of a collection of assets, e.g. wind turbines.

Our models are based on the Transformer architecture, which has taken the AI field by storm and now powers state-of-the-art solutions across many different research fields. We have been continuously iterating on our AI model architecture over the past years to overcome the challenges presented in this post, and we recently discovered that it is similar to the architectures used by large AI powerhouses such as Uber and Google.

It’s reassuring that we are headed in the right direction when the tech giants are deploying solutions similar to the ones we developed internally at Jungle! 😄

Solved challenges

Better generalisation

Learning the normal behaviour of all internal components of all wind turbines becomes a much easier task with a unified model, and it also prevents us from getting trapped in a suboptimal usage of the data. The figure below shows an example: turbine 2 has never seen sensor B reach high values, while turbine 1 has. Our single large-scale model can leverage the experience of turbine 1 to better model turbine 2, which will then better follow its own normal dynamics in the future.
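The benefit of pooling data across assets can be illustrated with a toy sketch. This is not Jungle's actual model — just a minimal nearest-neighbour regressor on made-up data, showing why a per-turbine model fails in an operating regime it has never seen while a pooled model does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy relation: sensor B = 2 * sensor A on both turbines, but
# turbine 2 has never experienced high values of sensor A.
a1 = rng.uniform(0.0, 10.0, 200)   # turbine 1: full operating range
a2 = rng.uniform(0.0, 4.0, 200)    # turbine 2: limited range
b1 = 2.0 * a1 + rng.normal(0.0, 0.1, 200)
b2 = 2.0 * a2 + rng.normal(0.0, 0.1, 200)

def knn_predict(a_train, b_train, a_query, k=5):
    """Toy k-nearest-neighbour regressor: average the B values of the
    k training points whose sensor-A reading is closest to the query."""
    idx = np.argsort(np.abs(a_train - a_query))[:k]
    return b_train[idx].mean()

# Turbine 2 enters a high-A regime for the first time (a = 9, true b ~ 18).
solo = knn_predict(a2, b2, 9.0)                        # per-turbine model
unified = knn_predict(np.concatenate([a1, a2]),
                      np.concatenate([b1, b2]), 9.0)   # pooled model

print(f"per-turbine: {solo:.1f}, unified: {unified:.1f}")
```

The per-turbine model can only echo the highest values it has seen (around 8), while the pooled model, having "lived through" turbine 1's history, lands near the true value of 18.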

Figure 2 - Leveraging data cross-assets allows AI models to better generalise to all dynamics that assets experience.

Our AI model can learn in even larger contexts, such as entire portfolios that contain different wind turbine types and makers. In fact, our modelling approach does not require all wind turbines to share the exact same sensors or electrical topology, e.g. DFIG versus asynchronous generators with fully-rated converters (exemplified in Figure 3). That’s true scalability, huh?

A single Jungle AI model can be used to model the normal behaviour of all (heterogeneous) assets of a GW-sized portfolio.

Figure 3 - Our AI model can learn the normal behaviour of heterogeneous and large scale wind portfolios.

Logistic nightmare

Using a single model frees us from the logistic nightmare of training and maintaining many different AI models. Let’s say we want to cover a modest-sized wind farm with 20 turbines and ten years of historical data. Supposing that each turbine has 100 sensors to monitor and that each sensor undergoes a change every two years (our experience tells us it can be much more frequent than that...), we would require ten thousand ML models.
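The arithmetic behind that count is worth making explicit — one model per sensor, retrained after every sensor change over the ten-year history:

```python
# Back-of-the-envelope model count for the per-sensor approach,
# using the numbers from the example above.
turbines = 20                # modest-sized wind farm
sensors_per_turbine = 100    # sensors to monitor per turbine
years_of_data = 10
years_between_changes = 2    # each sensor changes roughly every two years

retrainings_per_sensor = years_of_data // years_between_changes  # 5
total_models = turbines * sensors_per_turbine * retrainings_per_sensor
print(total_models)  # 10000
```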

Figure 4 - If you are not careful, you will end up with many AI models to babysit!

A team of data scientists would need to train and maintain those ten thousand models just to cover a single wind farm. Imagine what would be required to cover an entire large-scale renewables portfolio!

Robust AI model architecture

Missing sensor data challenges

Sensor data availability is another big roadblock that our advanced AI models solve. If an input sensor suddenly becomes permanently unavailable (e.g. the ambient temperature sensor, as shown in Figure 5), any AI model that required it as an input would be rendered useless. The same happens when new sensors become available: models that were not trained with them cannot automatically leverage the new sensors’ data.

Figure 5 - Our AI model can handle sensors missing at its input.

Our models adjust their prediction confidence bands according to the input sensor measurements, as shown in Figure 6. For example, if only the generator power is used to predict the generator bearing temperature, we would see larger confidence bands than when the ambient temperature is also fed in. This extra sensor allows the AI model to better understand the context in which the wind turbine is operating and, therefore, to produce narrower bands.

Figure 6 - Our AI models automatically adjust their probabilistic confidence bands according to the sensor data used to make predictions.

Having narrow and well-calibrated probabilistic prediction bands leads to fewer false-positive alarms and to detecting deviations from normal behaviour at much earlier stages!
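The link between band width and detection time can be sketched with a toy example (invented numbers, a simple mean-plus-sigma band rather than Jungle's actual probabilistic machinery): a slowly drifting bearing temperature crosses a narrow band many timesteps before it crosses a wide one.

```python
import numpy as np

def check_alarms(measured, pred_mean, pred_std, n_sigmas=3.0):
    """Flag timesteps where the measured value leaves the model's
    confidence band [mean - n*std, mean + n*std]."""
    lower = pred_mean - n_sigmas * pred_std
    upper = pred_mean + n_sigmas * pred_std
    return (measured < lower) | (measured > upper)

# Toy bearing-temperature trace: normal until a slow drift starts at t=70.
t = np.arange(100)
pred_mean = np.full(100, 60.0)                            # model expects ~60 °C
measured = 60.0 + np.where(t < 70, 0.0, 0.2 * (t - 70))   # 0.2 °C drift per step

wide = check_alarms(measured, pred_mean, pred_std=np.full(100, 1.5))
narrow = check_alarms(measured, pred_mean, pred_std=np.full(100, 0.5))

first_narrow = int(np.argmax(narrow))   # first flagged timestep
first_wide = int(np.argmax(wide))
print(first_narrow, first_wide)  # 78 93: the narrow band alarms 15 steps earlier
```

Calibration matters just as much as width, of course: a band that is narrow but wrong would simply trade missed detections for false positives.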

Sensor availability challenges also increase the need to train even more AI models to cover all the different combinations of available sensors. We have discussed the impact of sensor availability on AI-based predictive solutions in greater detail in this blog post.

Sensor sampling alignment

Our AI models not only handle entirely missing sensors, as described above, but also allow sensor measurements to be taken at different sampling rates. These challenges arise when working with data sources such as OPC, CMS and on-change industry databases, which hold much more data and value to be unlocked than standard ODBC databases filled with 10-minute statistics.

Figure 7 - Multivariate asynchronously sampled data does not represent a challenge for our AI models.

This frees us from cumbersome and error-prone sensor time-alignment and missing-data imputation strategies (for more information, please check this blog post). Our model predictions are based purely on actual measured data, not on artificially generated data that bends the measurements to fit the model!
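One way to achieve this — a sketch of the general idea, not Jungle's actual pipeline — is to keep every measurement as a raw (timestamp, sensor, value) event instead of resampling everything onto a common grid:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One raw measurement event, as emitted by an on-change or OPC
    source: no resampling, no imputation."""
    timestamp: float   # seconds since some epoch
    sensor: str
    value: float

# Sensors reporting at their own, irregular rates (invented values):
stream = [
    Reading(0.0,  "ambient_temp", 21.5),
    Reading(0.0,  "gen_power",    1.74),
    Reading(0.3,  "rotor_speed", 12.10),
    Reading(7.5,  "gen_power",    1.81),   # power updates every few seconds
    Reading(60.0, "ambient_temp", 21.6),   # temperature updates rarely
]

# Instead of forcing these onto a 10-minute grid and imputing the gaps,
# each reading becomes one input token; the timestamp tells the model
# *when* each measurement was actually taken.
tokens = sorted((r.timestamp, r.sensor, r.value) for r in stream)
print(len(tokens))  # 5 tokens, one per real measurement
```

A timestamp-aware model (e.g. a Transformer with time encodings) can then consume this token stream directly, so no synthetic values ever enter the pipeline.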

Our models adapt to real-world data and not the other way around. By not tampering with sensor data, we can have higher confidence in the prediction of our models.

Dynamic model inputs and outputs

The architecture of our models allows us to dynamically use the available sensors as inputs to model other sensors, while those same sensors can also be modelled themselves. In short, our model lets us easily interchange sensors between inputs and targets.

An example of this is shown in the figure below. On the left side, we predict the bearing temperature using the ambient temperature and the generator power. On the right side, we instead predict the generator power using the ambient temperature and the wind speed.

Figure 8 - Our AI models allow dynamic sensors at their input, meaning that input sensors can also be used as targets. We can create multiple problem formulations within the same AI model.

In the extreme case, we can have wind turbine components that do not share any sensors between them. For example, part of the model can learn the normal behaviour of the internal components while another part performs power forecasting using data from weather forecast providers (see more about our power forecasting solutions here).
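A common training recipe for interchangeable inputs and targets is masked modelling: per training example, a random subset of sensors is hidden and used as the prediction target. This sketch shows the data-side of that recipe under assumed sensor names; it is an illustration of the general technique, not Jungle's published method:

```python
import numpy as np

rng = np.random.default_rng(0)

SENSORS = ["ambient_temp", "gen_power", "wind_speed", "bearing_temp"]

def make_training_example(readings: dict, n_targets: int = 1):
    """Randomly pick sensors to act as targets for this example and
    hide their values from the input; the rest stay visible as context.
    Over many examples every sensor plays both roles, so a single model
    covers every problem formulation without leaking target values."""
    targets = list(rng.choice(SENSORS, size=n_targets, replace=False))
    inputs = {s: v for s, v in readings.items() if s not in targets}
    labels = {s: readings[s] for s in targets}
    return inputs, labels

readings = {"ambient_temp": 21.5, "gen_power": 1.8,
            "wind_speed": 8.3, "bearing_temp": 61.2}

inputs, labels = make_training_example(readings)
print(sorted(inputs), sorted(labels))
```

Because the masked sensor never appears on the input side of its own example, there is no information leakage, which is exactly the constraint that breaks standard fixed-input models.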

Solved challenges

Constrained problem formulations (logistic nightmare)

Most common AI models have a very constrained architecture. Sensors used as inputs must always be present at the input and cannot be targets within the same model; otherwise, we would leak information to the model by feeding it a sensor whose normal operation we also want it to learn (see Figure 9).

Figure 9 - Standard AI models cannot have multiple problem formulations. The users need to hard-code the inputs and targets of the model.

A standard AI model is therefore unable to support multiple problem formulations and, consequently, to model all sensors of a wind farm. For example, the case shown in Figure 9 would require two different models: one to model internal temperatures and pressures, and another to model the generator power.

Take-aways

In summary, in this post we went over a few of the main advantages of our unified AI model:

  • Better generalisation capabilities, since it learns the normal behaviour of all sensors of all wind turbines in the farm in a unified way (and can also scale to larger portfolios).
  • A single model that is easier to train (faster deployment for new wind farms) and maintain (happier ML engineers).
  • Robust to common data challenges such as disappearing sensors and sensors with different sampling rates.

These advantages translate directly into better and more accurate predictions for our users, which in turn lowers the number of false-positive and false-negative alarms that our customers see.

In the next post, we will introduce several other key challenges that our AI models have helped us solve. Among other things, we will show how our models can learn from all historical sensor behaviour changes and perform continual learning, i.e. keep on learning even after training! Stay tuned ;)

Silvio Rodrigues

CTO & Co-founder
