Edge computing, a popular buzzword in IT, is critical in the IoT cloud context. To provide a smooth, low-latency experience for IoT users, we have to shift computing power closer to them by organizing “edge locations” where IoT processing power is concentrated.
Only essential data should be sent to the central cloud. For example, when building an ML-based app for detecting car accidents, you don’t want to transfer all video files across the world to the central cloud where the processing cluster is placed.
Instead, you can put an ML algorithm on the edge locations, process video there, and send JSON-based events to the cloud with reports.
This is the essence of edge computing, and it suits IoT well.
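To make the idea concrete, here is a minimal sketch of an edge-side service for the car-accident example above: the ML model runs locally on the video frames, and only a compact JSON report crosses the network when something is detected. The endpoint URL, the detection stub, and the confidence threshold are all hypothetical placeholders, not a real API.

```python
import json
import urllib.request

# Hypothetical cloud ingestion endpoint -- replace with your real one.
CLOUD_ENDPOINT = "https://cloud.example.com/events"

def detect_accident(frame):
    """Stub for the ML model running at the edge location.

    In a real deployment this would run inference on a video frame
    and return an accident-probability score.
    """
    return 0.97  # placeholder score for illustration

def process_frame(frame, camera_id):
    """Run detection locally; build a compact event only when needed."""
    score = detect_accident(frame)
    if score < 0.9:  # illustrative threshold
        return None  # nothing interesting -- no traffic leaves the edge
    # Only a small JSON report goes upstream, not the raw video.
    return {
        "camera_id": camera_id,
        "event": "accident_detected",
        "confidence": round(score, 2),
    }

def send_event(event):
    """POST the JSON event to the central cloud."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

event = process_frame(b"<raw-frame-bytes>", camera_id="cam-42")
print(event)
```

The key point is in `process_frame`: the heavy input (video) stays at the edge, and the cloud only ever sees a few hundred bytes of JSON per event.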
Edge computing brings additional complexity to infrastructure but offers several advantages:
Cost savings. Because you don’t have to send all the data to the cloud, you save bandwidth on the connection link.
Improved application responsiveness. The computing power is now located closer to IoT devices, which reduces latency.
Better security. The distributed architecture of the IoT cloud, with several edge locations, is much harder to compromise: a breach at one edge location does not expose the whole system.
Better reliability. Because the edge locations are physically separated, a failure in one does not take down the others, so overall system reliability is higher.
Of course, a distributed cloud with many edge locations is harder to maintain, especially if you combine multiple cloud providers with on-premises data centers. Such a system requires a solid DevOps architecture with centralized monitoring and alerting, and a more skilled DevOps/SRE team.