Article also visible on medium
We have been testing our ongoing research & development in the field of applied machine learning with our launching customers and partners. This has been a vigorous process over several years, as everyone involved is pioneering. We have vastly improved our capabilities, but more importantly, we have discovered the true problems their industries face.
From the beginning, we had strong ingredients:
But our goal was never to keep on doing what we did. After we closed our first round of investment in Singapore, we took a couple of days off somewhere deep in Southeast Asia and sketched out what we then called Liana. Liana was never one thing; it was the whole.
And as with the best of things, it started with a what if…
What if we could create the infrastructure, tools, UI/UX, and fundamental engine technology, and let these companies apply the knowledge they have been cultivating for decades themselves?
In other words: how can we create something that scales, that doesn’t require us to do PoC after PoC with interesting but singularly applicable results? How do we allow our team to build on a core set of technologies that power all our customers? How do we make sure every single line of code serves the whole?
The answer is product (our CEO, Arnoud, would argue it’s a mindset, and I don’t disagree). A product built with a mindset where we don’t build yet another ephemerally applicable black box. We don’t tie our customers to something we thought of (with all our biases and engineering naivety). Instead, we build something that allows our customers to create value from day one and serves as a powerful companion in their day-to-day operations & maintenance struggles. These guys and gals are the warriors who keep our economy running, and they need help.
We couldn’t see a clear way to do this in the beginning, however. Our research & development didn’t yet solve a key problem we saw in large-scale multivariate time series prediction. Everyone was stuck in a race for accuracy to predict x or y, but we kept feeling that was a flawed approach. Feature engineering seemed like a way to torture the data until it confessed. It would also never scale: even if done automatically, who verifies that the right things were done?
We felt deep inside it was possible, though; that's what got us up every morning. In the end, it came down to a single call, early one Christmas a couple of years ago, with our key partner and investor Innogy, where we saw the light.
Instead of trying to predict the problem we know, what if we modelled the inverse? Model what is normal and compare that to the system so we see everything that is not normal by definition?
This is an oversimplification, but it illustrates the point. We have a lot of secret sauce that empowers this, but essentially we try to make the models tell us whether they can understand what is happening in one sensor given all the other sensors (plus auxiliary information). If they can, you can put prediction and reality side by side, and that is where the really interesting questions begin 😉
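To make the idea concrete, here is a minimal sketch of that normality-modelling loop: predict each sensor from its siblings on healthy history, then flag moments where reality diverges from the prediction. Everything here (the simulated data, the least-squares model, the threshold) is an illustrative assumption, not the actual system.

```python
# Hypothetical sketch: model "normal" for one sensor from all the others,
# then compare prediction and reality. Data, model, and threshold are
# illustrative stand-ins for a far richer production pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Simulated telemetry: 500 timestamps x 4 sensors driven by one shared
# underlying process, plus independent measurement noise.
t = np.linspace(0, 10, 500)
base = np.sin(t)
X = np.column_stack([base + 0.05 * rng.standard_normal(500) for _ in range(4)])
X[450:, 2] += 1.5  # inject a fault into sensor 2 near the end

def fit_normality_model(X_healthy, target):
    """Least-squares model predicting sensor `target` from all others."""
    others = np.delete(X_healthy, target, axis=1)
    A = np.column_stack([others, np.ones(len(X_healthy))])  # intercept term
    coef, *_ = np.linalg.lstsq(A, X_healthy[:, target], rcond=None)
    return coef

def residuals(X, target, coef):
    """Absolute gap between the sensor and what 'normal' predicts."""
    others = np.delete(X, target, axis=1)
    A = np.column_stack([others, np.ones(len(X))])
    return np.abs(X[:, target] - A @ coef)

coef = fit_normality_model(X[:400], target=2)       # train on healthy history
res = residuals(X, target=2, coef=coef)             # score everything
threshold = res[:400].mean() + 6 * res[:400].std()  # calibrated on healthy data
flag_rate = (res[450:] > threshold).mean()          # share of fault window flagged
```

The key design choice is that nothing here is trained to predict a known failure; the model only learns what normal co-behaviour looks like, so anything it cannot explain surfaces by definition.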
Using this approach, and the approaches layered on top of it to get to actionable insights, you start to see developing and deviating behaviour that you otherwise definitely wouldn’t have seen. We have been able to flag countless failure modes far before any sophisticated condition monitoring system could (let alone threshold-based monitoring, which is singularly focused).
You can treat power output as yet another sensor ;) If you explain power output as a proxy of what exactly is happening in the turbine components and the farm as a whole (as well as sibling assets), you get some very interesting insights into underperformance and losses (as well as their root causes, intentional or not).
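The same trick applied to power output can be sketched in a few lines: if a model of "normal" output explains power from the rest of the telemetry, a persistently negative residual is underperformance. Here a toy cubic power curve on wind speed stands in for the learned model; all numbers are illustrative assumptions.

```python
# Hypothetical illustration: power output treated as "just another sensor".
# A toy power curve (cubic in wind speed) plays the role of the normality
# model; a sustained negative residual signals underperformance.
import numpy as np

rng = np.random.default_rng(1)
wind = rng.uniform(4, 12, 300)                 # wind speed, m/s
power = 0.6 * wind**3 + rng.normal(0, 5, 300)  # kW, healthy behaviour
power[200:] *= 0.85                            # 15% derating (e.g. blade fouling)

expected = 0.6 * wind**3                       # model of "normal" output
residual = power - expected                    # negative = producing less than normal
underperf = residual < -3 * 5                  # 3 sigma below the noise floor

healthy_rate = underperf[:200].mean()          # should stay near zero
fault_rate = underperf[200:].mean()            # should light up after derating
```

Because the residual is conditioned on what the asset should be producing right now, it separates genuine losses from ordinary low-wind periods, which a raw power threshold cannot do.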
This changed everything. We had something that not only delivered jaw-dropping results but something we could keep working on and it would only get better. Something that scales.
Fast forward to now: we have survived a pandemic (although there might be more to come, so let's not cheer too early), and we have transformed all the learnings of deploying this incredible technology into something that we know scales, and released it. It has users, and we are already working on a whole new batch of features based on what we have learned in deployment.
This web application is the window to our data ingress pipeline and normality modelling infrastructure, which keeps a watchful eye on complex, continuously running assets. The visuals here are early prototypes, focused on an incredible factory run by very smart people in the Netherlands.
It allows anyone on the factory floor and in the back office to identify abnormality before it becomes a problem. It helps them focus their attention where it is needed most and lets them dissect the problem with state-of-the-art data science worked into modern UI/UX. The system records all these learnings and interactions to make sure the issue is raised intelligently the next time a similar problem occurs. And it enables deep collaboration through a case-by-case paradigm driven by the factory floor.
In the end, it’s finally the face of this enormous but beautiful jungle our talented team has been building behind the scenes: our Liana.
In the coming weeks, we will be
If you are interested, our team would be over the moon to share the current product and our roadmap with you. We can discuss how we can create sustainable and repeatable value in your organisation through the untapped power of artificial intelligence.
A very heartfelt thank you to everyone who has worked on this with us from our partners/investors to our customers. Most importantly, however, our team and those that have worked with us in the past to build this.
A small footnote: we need a cool name for our product, something jungly (you wouldn’t believe the encyclopedia of rare species, forests and plants we have in our GitLab ❤️ repositories) → firstname.lastname@example.org