There are many areas where the Internet of Things (IoT) has already become reality: intelligent street lights, smart meters and self-monitoring farm fields, to name a few. Devices can decide when to power up, when to buy energy because it is cheap, and when to start watering a field. These are decisions based on data, not just pre-programmed activation.
And there are more: wearables that predict an upcoming illness, or electronics that contact customer service themselves when there is a problem.
These use cases all have one thing in common: data. Data volumes keep hitting one all-time high after another, which raises the question of who should analyze these massive amounts of data. Since the beginning of the IoT era, it has no longer been possible to monitor IT operations manually. As IoT becomes a standard part of the technology stack, availability checks and monitoring will have to be automated.
Cloud technology and IoT are taking over the world almost simultaneously. The result is exploding amounts of data from the extensive networking of devices, combined with the high rate of change of today's hyper-dynamic cloud-based applications.
McKinsey expects IoT to deliver worldwide economic value of $11 trillion annually by 2025. 90% of that total value will benefit users – consumers or companies that use IoT applications – through lower prices or time savings, for example. At the same time, the Internet of Things will soften the boundaries between technology companies and traditional businesses, enabling new, data-driven business models.
Let’s stay with the example of watering a field. Monitoring here – as elsewhere – is not concerned with evaluating the sensor data itself. That is the job of the back-end system, and in some cases of edge processing on site. But what happens if communication fails? Or if there is a problem in the back-end system, perhaps because a faulty update was deployed? The system would not start watering the field, and as a consequence the crops would die.
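One way to soften that failure mode is an edge-side watchdog: if the back end goes silent, the device falls back to a safe local schedule instead of doing nothing. The sketch below is purely illustrative; the class name, the five-minute timeout and the `decide` policy are assumptions, not the API of any particular product.

```python
import time

HEARTBEAT_TIMEOUT = 300  # seconds; hypothetical threshold for "back end is gone"

class IrrigationWatchdog:
    """Edge-side fallback: if the back end stops sending commands or
    keep-alives, switch to a conservative local watering schedule
    rather than letting the crops dry out."""

    def __init__(self, timeout=HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Called whenever a command or keep-alive arrives from the back end.
        self.last_heartbeat = time.monotonic()

    def backend_alive(self):
        return (time.monotonic() - self.last_heartbeat) < self.timeout

    def decide(self):
        # Normal path: obey the back end. Degraded path: local schedule.
        return "follow_backend" if self.backend_alive() else "local_schedule"
```

The key design point is that the fallback decision is made locally, so a broken network link or a faulty back-end deployment cannot silently stop the watering.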
Where is the problem?
Often it is hard to tell where the cause of a problem lies. It can be a challenge to figure out why one device is working and another is not. With IoT, these questions and problems multiply. It therefore becomes necessary to detect and analyze an IoT topology automatically, without any manual configuration, to understand the impact of a failure, and to resolve issues affecting business-critical systems quickly, proactively and in real-time. A system failure in a smart home probably won’t cause a life-endangering situation; one in a self-driving car may. That is why problems must be detected immediately and either fixed or handled by switching to a back-up system.
One consequence of the huge amounts of data is that IoT devices must monitor themselves, and that availability monitoring, as mentioned before, becomes a central aspect of IoT. Even now, professional providers need standardized solutions rather than self-developed tools that recognize only a small subset of all dependencies.
AI to the rescue
Using artificial intelligence (AI) and machine learning, even the most complex systems can be monitored seamlessly. AI-based monitoring solutions need to understand the whole system. This applies to related back-end systems as well as to connected systems such as databases, middleware, applications and front-end apps, in addition to the edge infrastructure of IoT devices.
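As a toy illustration of what a learned baseline buys you, the sketch below flags metric readings that deviate strongly from a rolling mean. Real AI-based monitoring builds far richer models automatically; the window size and threshold here are arbitrary assumptions.

```python
import statistics

def anomalies(values, window=20, threshold=3.0):
    """Return the indices of points that lie more than `threshold`
    standard deviations from the rolling mean of the previous `window`
    points. A toy stand-in for the baselines that AI-driven monitoring
    tools learn automatically from metric streams."""
    flagged = []
    for i in range(window, len(values)):
        hist = values[i - window:i]
        mean = statistics.fmean(hist)
        stdev = statistics.stdev(hist)
        # Skip perfectly flat history (stdev == 0) to avoid division issues.
        if stdev and abs(values[i] - mean) > threshold * stdev:
            flagged.append(i)
    return flagged
```

The point of automating this per metric is scale: no human can hand-tune static thresholds for millions of device and service metrics, but a baseline like this can be computed for each one continuously.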
End-to-end application performance monitoring (APM) therefore becomes even more important. Businesses need smart solutions to avoid downtime and performance problems. Sustainable companies need to analyze in real-time whether their systems are running smoothly and quickly, what their users are doing and experiencing right now, and how edge devices are behaving in the Internet of Things.