Cloud Computing vs. Fog vs. Edge Computing. What are they? How do they work?

Cloud computing is the most widely used approach to IoT data management. Alongside cloud computing, fog and edge computing have been widely adopted to increase the speed and efficiency of data processing and to bring intelligence closer to the Internet of Things (IoT) devices that collect data (sensors) and act on it (actuators). In this post, I will briefly explain each of these three computing paradigms, examine how they are designed, and explore examples of their real-world applications.



Cloud Computing

Cloud computing refers to the delivery of computing services, including storage, processing power, and software, over the internet. It involves accessing and utilizing resources remotely through a network of servers hosted by a cloud service provider. Users can leverage these resources without the need for local servers or personal devices. Cloud computing provides scalability, cost-effectiveness, flexibility, and accessibility.

The working mechanism of cloud computing involves three key service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).


Infrastructure as a Service (IaaS)

This is the basic building block of cloud computing, providing virtualized computing resources like virtual machines, storage, and networking. Users have control over the operating systems and applications running on the infrastructure, while the cloud provider manages the underlying hardware.

Example: Amazon Web Services (AWS) Elastic Compute Cloud (EC2) offers virtual servers that can be configured and managed by users. They have control over the operating system, applications, and networking, while AWS manages the physical infrastructure.
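To make the IaaS idea concrete, here is a minimal sketch of provisioning an EC2 virtual server with boto3, AWS's Python SDK. The AMI ID and region below are placeholders for illustration, not real values:

```python
# Sketch: launching an EC2 virtual server via the RunInstances API.
# The AMI ID below is a placeholder; substitute a real image ID.

def build_run_instances_params(ami_id, instance_type="t2.micro", count=1):
    """Assemble the keyword arguments for EC2's RunInstances API call."""
    return {
        "ImageId": ami_id,              # which OS image to boot
        "InstanceType": instance_type,  # hardware profile (vCPU/RAM)
        "MinCount": count,
        "MaxCount": count,
    }

params = build_run_instances_params("ami-0123456789abcdef0")

# With AWS credentials configured, the actual launch would be:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="eu-west-1")
#   response = ec2.run_instances(**params)
```

The user chooses the operating system image and instance size; everything below that line (physical servers, power, networking hardware) is AWS's responsibility.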


Platform as a Service (PaaS)

PaaS provides a higher level of abstraction by offering a platform where developers can build, deploy, and manage applications without worrying about the underlying infrastructure. It includes development frameworks, runtime environments, and other tools necessary for application development.

Example: Google Cloud Platform's App Engine allows developers to focus on writing code and deploying applications without managing the underlying infrastructure. It provides an environment for building and scaling web applications easily.
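As a sketch of what "just writing code" means here: App Engine's Python standard environment serves WSGI applications, so the developer's entire deliverable can be as small as the callable below (real apps typically use a framework such as Flask on top of WSGI, plus a short app.yaml deployment file):

```python
# Minimal WSGI application of the kind App Engine's Python runtime can
# serve. No server, OS, or scaling configuration appears in the code;
# the platform handles all of that.

def app(environ, start_response):
    """Respond with a plain-text greeting to every request."""
    body = b"Hello from the platform!"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

The platform scales instances of this app up and down with traffic; the developer never touches the underlying machines.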


Software as a Service (SaaS)

SaaS delivers ready-to-use software applications over the internet. Users can access these applications through web browsers or APIs without the need for installation or maintenance. The cloud provider manages the entire software stack, including infrastructure, platform, and application.

Example: Salesforce.com provides a customer relationship management (CRM) solution as a cloud-based SaaS application. Users can access and use the CRM software through a web browser without having to install or maintain it locally.



Fog Computing

Fog computing is an extension of cloud computing that brings computing capabilities closer to the edge of the network, typically within the local network or on the edge devices themselves. The goal of fog computing is to enable faster processing and real-time analysis of data by reducing the distance between the data source and the computing resources. It involves deploying computing, storage, and networking resources at the network edge, closer to where the data is generated. Fog computing is particularly useful in scenarios that require low-latency processing, real-time analytics, and where bandwidth is limited or unreliable.

The three key aspects of how fog computing works are: Proximity to Edge Devices, Distributed Architecture, and Heterogeneous Devices and Networks.


Proximity to Edge Devices

Fog computing deploys computing resources, such as servers or edge devices, at the network edge or within the local network. This proximity reduces the distance between the data source and the computing resources, resulting in lower latency and faster response times. It enables real-time analysis and decision-making without relying heavily on cloud-based processing.


Distributed Architecture

Fog computing employs a distributed architecture where the workload is distributed between the cloud and the fog nodes at the network edge. The fog nodes can perform data processing, analytics, and other computational tasks, offloading the cloud infrastructure and reducing the amount of data that needs to be transmitted to the cloud. The distribution of workload optimizes resource utilization and improves overall system performance.
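The workload split described above can be sketched in a few lines. This is an illustrative simulation, not a real fog framework; `fog_summarise` and `cloud_store` are hypothetical names:

```python
# Sketch of the fog/cloud workload split: a fog node summarises raw
# sensor readings locally and forwards only the compact summary to the
# cloud, instead of shipping every raw sample across the WAN.

def fog_summarise(readings):
    """Runs at the network edge: reduce many raw samples to one summary."""
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }

def cloud_store(summary):
    """Runs in the cloud: only the small summary ever leaves the site."""
    return f"stored summary of {summary['count']} readings"

raw = [21.0, 21.5, 22.1, 35.7, 21.2]   # e.g. temperature samples
summary = fog_summarise(raw)
receipt = cloud_store(summary)
```

Five raw samples become one three-field summary; at realistic sampling rates the same pattern cuts upstream traffic by orders of magnitude.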


Heterogeneous Devices and Networks

Fog computing accommodates a variety of edge devices with different computational capabilities, ranging from sensors and actuators to smartphones and industrial machinery. It leverages the connectivity of these devices and uses local networks, such as Wi-Fi or LAN, to establish communication between edge devices and fog nodes. This allows for efficient and localized data processing and analysis.


Examples of fog computing applications


Smart Cities

Fog computing can enhance various aspects of smart city applications. For instance, in traffic management systems, fog nodes placed at intersections can process data from traffic cameras and sensors in real-time to optimize traffic flow. Similarly, in smart waste management, fog nodes located within garbage bins can monitor the fill level and optimize waste collection routes, reducing operational costs and environmental impact.
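A fog node at an intersection might run a rule as simple as the one below: extend the green phase on the approach with the longest detected queue. The thresholds and function names are illustrative, not taken from any real traffic system:

```python
# Hypothetical fog-node rule for a traffic intersection: the approach
# with the longest camera-detected queue gets the next (longer) green.

def pick_green_phase(queues):
    """queues maps approach name -> vehicles detected by camera/sensor."""
    return max(queues, key=queues.get)

def green_seconds(queue_length, base=20, per_car=2, cap=60):
    """Longer queues earn a longer green phase, capped at `cap` seconds."""
    return min(base + per_car * queue_length, cap)

queues = {"north": 4, "south": 12, "east": 3, "west": 7}
phase = pick_green_phase(queues)         # -> "south"
duration = green_seconds(queues[phase])  # -> 44 seconds
```

Because the decision is made at the intersection itself, it can react within a single signal cycle without waiting on a cloud round trip.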


Healthcare

Fog computing can be utilized in healthcare settings to enable real-time monitoring and analysis of patient data. For instance, in remote patient monitoring, fog nodes placed at the patient's location can process vital signs and sensor data, allowing healthcare professionals to receive real-time alerts and make prompt decisions. This improves patient care and reduces the need for constant cloud connectivity.
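A minimal sketch of such fog-side screening, with illustrative (not clinical) normal ranges, might look like this:

```python
# Fog-side vital-sign screening: check each reading against a normal
# range locally and raise alerts immediately, rather than waiting on a
# cloud round trip. Ranges are illustrative examples only.

NORMAL_RANGES = {
    "heart_rate": (50, 110),   # beats per minute
    "spo2": (92, 100),         # blood oxygen saturation, %
}

def screen_vitals(sample):
    """Return the names of any vitals outside their normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = sample.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts

sample = {"heart_rate": 128, "spo2": 96}
alerts = screen_vitals(sample)   # -> ["heart_rate"]
```

Only the alerts (and perhaps periodic summaries) need to reach the cloud; the continuous raw stream stays local.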


Smart Factories

Consider a smart factory that generates a massive amount of sensor data in real-time. Instead of sending all the data to the cloud for analysis, fog computing can be employed to process and analyze the data at the factory's local network edge. This allows for immediate decision-making based on real-time insights, reducing the time and bandwidth required to send the data to the cloud.
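The bandwidth saving is easy to quantify with back-of-the-envelope numbers. The message sizes below are assumptions chosen for the example, not measurements:

```python
# Rough illustration of the bandwidth saving from summarising sensor
# data at the factory's network edge instead of uploading every sample.

SAMPLE_BYTES = 64      # assumed size of one raw sensor message
SUMMARY_BYTES = 256    # assumed size of one aggregated summary

samples_per_second = 1000   # a 1 kHz sensor
window_seconds = 60         # summarise once per minute

raw = samples_per_second * window_seconds * SAMPLE_BYTES  # every sample uploaded
summarised = SUMMARY_BYTES                                # one summary per window
reduction = raw / summarised                              # -> 15000x less traffic
```

Under these assumptions, local aggregation cuts upstream traffic by a factor of fifteen thousand for a single sensor, and a factory has thousands of them.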

In conclusion, fog computing brings computational resources closer to the network edge, enabling real-time processing and low-latency analytics while reducing reliance on the cloud. It is particularly valuable in scenarios that require immediate decision-making, low latency, and efficient utilization of network resources.


Data routed through an edge device or a nearby server incurs lower latency and a smaller energy footprint than data sent to a distant cloud data center.


Edge Computing

Edge computing takes the concept of fog computing a step further by pushing computing resources and data processing capabilities even closer to the edge devices themselves. In edge computing, the processing and analysis of data occur directly on the edge devices or on nearby servers, rather than relying on distant cloud servers or a centralized data center. Edge computing aims to minimize latency, reduce bandwidth usage, and enable real-time, low-latency applications and services.


The three key aspects of how edge computing works are: Localized Processing, Distributed Architecture, and Data Filtering and Analysis.


Localized Processing

In edge computing, computing resources, such as servers, gateways, or edge devices, are deployed directly at the edge of the network, where data is generated. This allows for local processing and analysis of data, reducing the need to transmit large amounts of data to centralized cloud servers. By processing data closer to its source, edge computing minimizes latency and enhances real-time decision-making capabilities.


Distributed Architecture

Edge computing employs a distributed architecture that distributes computing workloads between the edge devices and cloud servers. The edge devices perform local processing and handle time-sensitive tasks, while the cloud servers handle more resource-intensive tasks or provide long-term storage and analytics. This distribution optimizes resource usage, reduces network congestion, and improves overall system performance.


Data Filtering and Analysis

Edge computing involves filtering and analyzing data locally to extract valuable insights and take immediate action. Edge devices can apply real-time analytics, machine learning algorithms, or predefined rules to process the data generated by sensors, IoT devices, or other edge sources. This localized analysis enables faster response times, efficient resource utilization, and reduces the amount of data that needs to be transmitted to the cloud.
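One common filtering pattern this describes is a "deadband" filter: the edge device forwards a reading only when it differs from the last forwarded value by more than a threshold. The function name and threshold below are illustrative:

```python
# Deadband filtering at the edge: suppress readings that barely change,
# forwarding only the first value and subsequent significant changes.

def deadband_filter(readings, threshold=0.5):
    """Keep the first reading, then only changes larger than threshold."""
    forwarded = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > threshold:
            forwarded.append(value)
            last = value
    return forwarded

readings = [20.0, 20.1, 20.2, 21.5, 21.6, 25.0]
sent = deadband_filter(readings)   # -> [20.0, 21.5, 25.0]
```

Here six samples shrink to three transmissions, and stable periods, which dominate most sensor streams, generate almost no upstream traffic at all.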


Examples of edge computing applications


Industrial Internet of Things (IoT)

In industrial settings, edge computing can be used to monitor and control manufacturing processes. Edge devices placed on factory equipment can collect sensor data, perform real-time analysis, and make immediate adjustments or trigger alerts based on predefined thresholds. This reduces latency, improves operational efficiency, and enables real-time anomaly detection in manufacturing processes.
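A simple version of such edge-side anomaly detection flags a sample that deviates from the recent rolling mean by more than a few standard deviations. The window size and threshold below are assumed parameters, not from a real deployment:

```python
# Illustrative edge-side anomaly check for a machine sensor: flag a
# sample when it deviates from the rolling mean of recent samples by
# more than `sigmas` rolling standard deviations.

import statistics
from collections import deque

def make_detector(window=5, sigmas=3.0):
    history = deque(maxlen=window)
    def check(value):
        """Return True if value is anomalous relative to recent history."""
        anomalous = False
        if len(history) >= window:
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history)
            anomalous = stdev > 0 and abs(value - mean) > sigmas * stdev
        history.append(value)
        return anomalous
    return check

check = make_detector()
results = [check(v) for v in [10.0, 10.2, 9.9, 10.1, 10.0, 14.0]]
# steady readings pass; the sudden jump to 14.0 is flagged
```

Because the check runs on the machine itself, an alert can trigger a shutdown or adjustment in milliseconds, long before the data would have reached a cloud pipeline.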


Autonomous Vehicles

Edge computing plays a crucial role in autonomous vehicles by enabling real-time decision-making. The sensors on the vehicle capture massive amounts of data, including image recognition, LiDAR data, and radar information. Edge devices within the vehicle or located at the roadside process this data locally, allowing the vehicle to make immediate decisions for tasks like object detection, collision avoidance, and navigation, without relying solely on cloud connectivity.


Retail and Personalized Experiences

In retail environments, edge computing can enhance customer experiences and enable personalized services. For instance, smart shelves equipped with edge devices can analyze customer interactions and inventory data locally to optimize product placement, detect stockouts, or trigger personalized offers in real-time. This localized processing enables faster response times, improves customer satisfaction, and reduces the reliance on cloud resources.


Edge computing brings computational capabilities closer to the edge of the network, enabling real-time processing, low-latency decision-making, and reducing dependence on centralized cloud resources. It empowers applications that require immediate responsiveness, data filtering, and localized analysis, while minimizing latency and bandwidth constraints.


Ultimately, while cloud computing provides scalable and centralized computing resources over the internet, fog computing and edge computing bring computation closer to the data source, enabling faster processing, real-time analytics, and low-latency applications. Each paradigm offers unique benefits depending on the specific requirements of the application or use case.


SL. Simon Enriko is a Master's student in Royal Holloway, University of London's AI & Cyber Security postgraduate program. He writes and researches on AI, cyber security, and cloud technology.
