MCS-227: Cloud Computing and IoT
Study Material

Complete PYQs Guide (2022-2025)

Complete Questions for Exam Preparation


IMPORTANCE LEGEND:

  • 🔥 Most Important: High frequency (3+ times) OR critical foundational topics essential for the course.

  • ⭐ Very Important: Medium frequency (2 times) OR key operational concepts/extensions.

  • 📌 Important: Standard frequency (1 time) OR specific/niche topics.


UNIT 1: CLOUD COMPUTING - AN INTRODUCTION

Q1. 🔥 Define Cloud Computing.

[ June 2025 Q1(a), June 2022 Q1(a) | Frequency: 2 ]

Answer

Cloud Computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

It represents a paradigm shift from traditional computing by leveraging the internet to deliver computing services remotely. Instead of owning physical infrastructure, users access a shared pool of resources on a pay-per-use basis.

Key Components:

  • Service Models: SaaS, PaaS, IaaS.

  • Deployment Models: Public, Private, Hybrid, Community.

  • Essential Characteristics: On-demand self-service, Broad network access, Resource pooling, Rapid elasticity, Measured service.


Q2. 🔥 What are the key characteristics that define cloud computing?

[ Dec 2024 Q1(a) | Frequency: 1 ]

Answer

According to NIST, Cloud Computing is defined by five essential characteristics:

  1. On-Demand Self-Service:

    • A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  2. Broad Network Access:

    • Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  3. Resource Pooling:

    • The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
    • Location Independence: The customer generally has no control or knowledge over the exact location of the provided resources but may specify location at a higher level of abstraction (e.g., country, state, or datacenter).
  4. Rapid Elasticity:

    • Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  5. Measured Service:

    • Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer.
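The "measured service" characteristic can be sketched in a few lines of Python. The meter names and unit rates below are illustrative assumptions, not any provider's real pricing:

```python
# Illustrative meter rates (assumed values, not real provider pricing).
RATES = {"storage_gb_month": 0.02, "cpu_hours": 0.05, "egress_gb": 0.09}

def compute_bill(usage):
    """Sum metered usage multiplied by unit rate -- the 'measured service'
    idea: every resource is metered, then billed per unit consumed."""
    return round(sum(RATES[meter] * qty for meter, qty in usage.items()), 2)

# 500 GB stored + 720 CPU-hours + 100 GB egress:
# 500*0.02 + 720*0.05 + 100*0.09 = 10 + 36 + 9 = 55.0
print(compute_bill({"storage_gb_month": 500, "cpu_hours": 720, "egress_gb": 100}))
```

This per-meter transparency is also what enables the pay-per-use billing that both provider and consumer can monitor and verify.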

Q3. ⭐ How do these key characteristics differentiate cloud services from the traditional computing model?

[ Dec 2024 Q1(a) | Frequency: 1 ]

Answer

The transition from Traditional Computing to Cloud Computing represents a shift from capital-intensive, static infrastructure to flexible, service-oriented consumption.

[Diagram omitted: Traditional vs. Cloud Model]

Comparison Table:

| Feature | Cloud Computing | Traditional Computing |
|---|---|---|
| Cost Efficiency | Cost-effective (OpEx): shared resources reduce costs; pay-per-use model. | Expensive (CapEx): requires substantial upfront investment in hardware and maintenance. |
| Resource Management | Scalable: offers virtually unlimited storage and processing power on demand. | Fixed: limited storage and processing power dependent on physical hardware. |
| Accessibility | Ubiquitous: data is accessible 24/7 from anywhere via the Internet. | Restricted: data access is limited to the specific device or local network. |
| User Experience | User-friendly: self-service portals allow instant access without IT intervention. | Complex: requires manual installation, configuration, and IT support for access. |
| Software Updates | Automatic: software (SaaS) is updated centrally by the provider. | Manual: users must purchase and install updates individually. |
| Reliability | High: data is backed up across multiple servers/locations. | Variable: dependent on local backup strategies; higher risk of data loss on drive failure. |

Q4. 📌 Explain the evolution of cloud computing.

[ Dec 2023 Q3(c) | Frequency: 1 ]

Answer

Cloud computing is the result of the convergence of several technologies over decades. Its evolution can be traced through the following phases:

[Diagram omitted: Evolution Timeline]

  1. Mainframe Computing (1950s): Large, powerful computers used by major organizations. Introduced Time Sharing to allow multiple users to access the central CPU simultaneously.

  2. Distributed Systems: Breaking a task into parts run on multiple networked computers.

  3. Cluster Computing (1980s): Interconnected computers (nodes) working together as a single system (Single System Image) to improve processing speed and availability. Tightly coupled via LAN.

  4. Grid Computing (1990s): Connecting geographically dispersed, heterogeneous resources to solve large-scale problems (e.g., scientific simulations). Loosely coupled.

  5. Virtualization (1970s/2000s): The core enabler. Allows creating a virtual layer over hardware to run multiple OS instances on a single physical machine, maximizing utilization.

  6. Web 2.0: The shift to interactive, dynamic web applications that facilitated SaaS.

  7. Utility Computing: The business model of offering computing resources as a metered service (like electricity), which became the financial foundation of the Cloud.


Q5-Q11. ⭐ Compare and contrast Cluster, Grid, and Cloud Computing.

[ June 2023 Q3(a), June 2022 Q2(a) | Frequency: 2 ]

Answer

These three paradigms represent the progression of distributed computing.

| Feature | Cluster Computing | Grid Computing | Cloud Computing |
|---|---|---|---|
| Characteristics | Tightly coupled systems; Single System Image (SSI); centralized management. | Loosely coupled; decentralized; heterogeneous resources; distributed job management. | Dynamic infrastructure; service-centric; self-service; multi-tenant; consumption-based billing. |
| Physical Structure | Computers are co-located (local), connected via high-speed LAN. | Computers are geographically dispersed, connected via WAN/Internet. | Computers are located in data centers (centralized or distributed), accessed via the Internet. |
| Hardware | Homogeneous: nodes usually have identical hardware and OS. | Heterogeneous: nodes run different OSs and have varied hardware. | Abstracted: runs on commodity hardware but appears uniform via virtualization. |
| Resources | Managed as a single system by a centralized resource manager. | Each node is autonomous with its own resource manager. | Resources are pooled and virtualized; dynamically assigned on demand. |
| Applications | High Performance Computing (HPC), scientific simulation, industrial automation. | High Throughput Computing (HTC), collaborative research, drug discovery. | Web hosting, business apps, Big Data, storage, general purpose. |
| Networking | Dedicated, high-bandwidth, low-latency network. | Mostly Internet; higher latency and lower bandwidth. | Internet for access; high-speed internal data center networks. |
| Scalability | Limited (hundreds of nodes); hard to scale dynamically. | High (thousands of nodes); complex to manage. | Massive (seemingly infinite); automated and rapid scaling. |

Q12. 📌 Write short note on Grid computing vs. Cloud computing.

[ June 2024 Q5(d) | Frequency: 1 ]

Answer

Grid Computing:

  • Concept: A "Virtual Supercomputer" constructed by linking geographically dispersed computers to solve massive computational problems.

  • Goal: To maximize the utilization of idle resources across organizations.

  • Usage: Primarily for batch processing and scientific research (e.g., SETI@home).

  • Ownership: Resources are often owned by different organizations (federation).

Cloud Computing:

  • Concept: Delivery of computing services (servers, storage, databases) over the internet.

  • Goal: To provide on-demand resources with a pay-as-you-go model.

  • Usage: General-purpose business applications, web hosting, and storage.

  • Ownership: Resources are usually owned by a single provider (e.g., AWS) and rented to users.

Key Difference: Grid is about collaboration and solving a specific large task, whereas Cloud is about service delivery and resource provisioning for individual needs.


Q13. ⭐ What are the primary advantages that an organization can derive from adopting cloud computing?

[ June 2024 Q1(c) | Frequency: 1 ]

Answer

Organizations adopting cloud computing realize significant strategic and operational benefits:

  1. Reduced IT Costs: Eliminates the need for capital expenditure (CapEx) on hardware, cooling, and power. Shifts to an Operational Expenditure (OpEx) model.

  2. Scalability: Resources can be scaled up or down instantly based on business demands (e.g., handling seasonal traffic spikes).

  3. Accessibility: Employees can access data and applications from anywhere, anytime, on any device, fostering remote work and collaboration.

  4. Business Continuity: Cloud providers offer robust data backup and disaster recovery solutions, ensuring data safety against local failures.

  5. Automatic Updates: The burden of software patching and hardware maintenance is shifted to the cloud provider.

  6. Agility: New applications and services can be deployed in minutes rather than weeks, accelerating time-to-market.
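The CapEx-to-OpEx shift in point 1 can be made concrete with a small break-even calculation. All cost figures below are made-up illustrative numbers, not real vendor pricing:

```python
# Assumed, illustrative costs -- not real vendor pricing.
CAPEX_SERVER = 12000.0   # upfront purchase of one on-prem server (CapEx)
ONPREM_MONTHLY = 200.0   # ongoing power, cooling, maintenance (OpEx)
CLOUD_MONTHLY = 450.0    # pay-as-you-go rent for an equivalent cloud VM

def breakeven_months(capex, onprem_pm, cloud_pm):
    """Months after which cumulative on-prem cost falls below cloud cost.
    Returns None when the cloud is cheaper per month even ignoring CapEx."""
    if cloud_pm <= onprem_pm:
        return None
    return capex / (cloud_pm - onprem_pm)

# 12000 / (450 - 200) = 48 months before owning beats renting.
print(breakeven_months(CAPEX_SERVER, ONPREM_MONTHLY, CLOUD_MONTHLY))
```

For workloads retired before the break-even point, the OpEx model wins outright, which is why short-lived or uncertain projects tend to favor the cloud.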


Q15. ⭐ Explain/Mention any five applications of cloud computing.

[ Dec 2023 Q3(a), Dec 2022 Q2(c) | Frequency: 2 ]

Answer

Cloud computing has revolutionized various sectors through diverse applications:

  1. Online Data Storage:

    • Allows users to store files, images, and videos in the cloud rather than on local drives.
    • Examples: Google Drive, Dropbox, OneDrive.
  2. Backup and Recovery:

    • Provides secure, off-site storage for data backup. In case of local hardware failure or disaster, data can be easily restored.
    • More reliable and cheaper than maintaining physical backup tapes.
  3. Big Data Analytics:

    • Offers the massive processing power and storage required to analyze vast datasets (Big Data).
    • Organizations can derive insights without building their own supercomputing clusters.
  4. E-Commerce Applications:

    • Enables e-commerce platforms to handle fluctuating traffic (e.g., during sales) through dynamic scaling.
    • Manages customer data, product catalogs, and transactions securely.
  5. Education (E-Learning):

    • Powers Learning Management Systems (LMS), virtual classrooms, and student portals.
    • Enables distance learning and collaboration among researchers (e.g., Google Classroom, Coursera).

Q16. ⭐ Write short note on Challenges in cloud computing.

[ June 2023 Q5(b), June 2022 Q5(c) | Frequency: 2 ]

Answer

Despite its benefits, cloud computing presents several challenges that organizations must address:

[Diagram omitted: Cloud Challenges]

  1. Data Security and Privacy: Entrusting sensitive data to a third party raises concerns about data breaches, unauthorized access, and lack of visibility into where data is physically stored.

  2. Interoperability (Vendor Lock-in): Moving applications or data from one cloud provider to another is often difficult due to proprietary APIs and data formats.

  3. Network Dependence: Cloud services rely entirely on internet connectivity. High latency or network outages can disrupt business operations.

  4. Cost Management: While pay-as-you-go is flexible, hidden costs and lack of monitoring can lead to unexpected bills ("Bill Shock").

  5. Compliance: Meeting regulatory requirements (like GDPR or HIPAA) can be complex when data resides in shared infrastructure across different jurisdictions.


Q17. 📌 Write a short note on utility computing.

[ Dec 2023 Q3(b) | Frequency: 1 ]

Answer

Utility Computing is a service provisioning model where a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate.

  • Concept: It treats computing resources (computation, storage, bandwidth) like traditional public utilities (electricity, water, gas).

  • Business Model: It is the financial model behind Cloud Computing. While "Cloud" describes the technology/architecture, "Utility Computing" describes the metering and billing aspect.

  • Key Benefit: It transforms fixed IT costs into variable costs. Users plug into the "computing grid" and pay only for the units they consume, optimizing resource utilization and cost.

UNIT 2: CLOUD DEPLOYMENT MODELS, SERVICE MODELS AND CLOUD ARCHITECTURE

Q18. 🔥 List the four categories of cloud deployment models.

[ June 2025 Q1(a), June 2022 Q1(a) | Frequency: 2 ]

Answer

The four primary categories of cloud deployment models, as defined by NIST, are:

  1. Public Cloud

  2. Private Cloud

  3. Community Cloud

  4. Hybrid Cloud


Q19. 🔥 Explain the four categories of cloud deployment models.

[ June 2025 Q1(a), June 2022 Q1(a) | Frequency: 2 ]

Answer

Cloud deployment models define the specific environment in which the cloud infrastructure is deployed, determining who has access to it and who owns it.

1. Public Cloud:

  • Definition: The cloud infrastructure is provisioned for open use by the general public. It is owned, managed, and operated by a business, academic, or government organization (the Cloud Service Provider).

  • Access: Accessed via the public internet.

  • Characteristics: Multi-tenancy, economies of scale, pay-as-you-go pricing.

  • Example: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP).

2. Private Cloud:

  • Definition: The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units).

  • Access: Accessed via a private network (Intranet/VPN).

  • Management: It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.

  • Characteristics: High security, customization, greater control over resources.

  • Example: VMware vSphere, OpenStack deployed internally.

3. Community Cloud:

  • Definition: The infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations).

  • Ownership: It may be owned by one or more of the organizations in the community, a third party, or some combination.

  • Example: A cloud shared by several banks for financial transactions or government agencies for data sharing.

4. Hybrid Cloud:

  • Definition: The infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.

  • Usage: Often used for "Cloud Bursting" (moving overflow traffic from private to public cloud).

  • Example: An organization keeping sensitive data on a private cloud while using a public cloud for high-volume data processing.

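The "cloud bursting" pattern mentioned under Hybrid Cloud can be sketched as a simple placement loop. The capacity figure and job names are illustrative assumptions:

```python
# Private-cloud capacity in concurrent jobs (illustrative assumption).
PRIVATE_CAPACITY = 3

def place_jobs(jobs):
    """Fill the private cloud first; overflow ('burst') to the public cloud."""
    placements = {}
    running_private = 0
    for job in jobs:
        if running_private < PRIVATE_CAPACITY:
            placements[job] = "private"
            running_private += 1
        else:
            placements[job] = "public"  # burst: overflow traffic goes public
    return placements

print(place_jobs(["j1", "j2", "j3", "j4", "j5"]))
# first three jobs stay private; j4 and j5 burst to the public cloud
```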

Q20-Q22. ⭐ What are the main characteristics of Public, Private, and Hybrid cloud deployment models?

[ June 2024 Q1(b) | Frequency: 3 (Combined) ]

Answer

Characteristics of Public Cloud:

  • Scalability: Unlimited resources are available on-demand.

  • Cost-Effectiveness: No capital expenditure (CapEx); operates on an operational expenditure (OpEx) model.

  • Reliability: Vast network of servers ensures no single point of failure.

  • Location Independence: Services are available wherever internet access is present.

Characteristics of Private Cloud:

  • Security & Privacy: Dedicated resources ensure high data isolation and security.

  • Control: The organization has complete control over the infrastructure and hardware.

  • Customization: Hardware and software can be tailored to specific business needs.

  • Compliance: Easier to meet strict regulatory requirements (e.g., HIPAA, GDPR).

Characteristics of Hybrid Cloud:

  • Flexibility: Ability to switch between public and private clouds based on sensitivity and load.

  • Cost Optimization: Run steady workloads on private cloud and burst to public cloud during peaks.

  • Security: Sensitive data remains behind the firewall (Private), while less sensitive data resides on the Public cloud.

  • Interoperability: Requires robust connectivity and API integration between environments.


Q23-Q25. 📌 What are the advantages of Public, Private, and Hybrid cloud deployment models?

[ June 2024 Q1(b) | Frequency: 3 (Combined) ]

Answer
| Deployment Model | Advantages |
|---|---|
| Public Cloud | 1. Low Cost: no hardware purchase or maintenance costs. 2. No Maintenance: the service provider handles updates and maintenance. 3. Speed: rapid deployment of resources. 4. Scalability: near-infinite scalability to handle traffic spikes. |
| Private Cloud | 1. Enhanced Security: isolated network environment reduces external threats. 2. Compliance: full control aids in meeting industry-specific regulations. 3. Performance: dedicated resources prevent the "noisy neighbor" effect. 4. Control: complete oversight of the hardware and software stack. |
| Hybrid Cloud | 1. Best of Both Worlds: balances security (private) with scalability (public). 2. Business Continuity: the public cloud can serve as a failover/backup for the private cloud. 3. Agility: quickly innovate using public cloud services while maintaining legacy on-prem systems. 4. Cost Management: optimizes spend by matching workloads to the cheapest appropriate environment. |

Q26. 📌 How does an organization decide which deployment model is most suitable for its specific business requirements?

[ June 2024 Q1(b) | Frequency: 1 ]

Answer

An organization selects a deployment model based on a trade-off analysis of several key factors:

  1. Security and Privacy:

    • If data is highly sensitive (e.g., financial, health records) → Private Cloud.
    • If data is non-sensitive or public → Public Cloud.
  2. Cost and Budget:

    • If CapEx is limited and variable cost is preferred → Public Cloud.
    • If the organization already owns data centers and wants to utilize them → Private Cloud.
  3. Workload Characteristics:

    • Predictable, steady workloads → Private Cloud (often cheaper long-term).
    • Unpredictable, bursty workloads → Public Cloud (elasticity).
  4. Compliance and Regulations:

    • Strict data sovereignty laws → Private or Hybrid Cloud.
  5. Technical Expertise:

    • Limited IT staff → Public Cloud (managed services).
    • Strong IT team capable of managing infrastructure → Private Cloud.
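The decision factors above can be condensed into a rule-of-thumb function. These rules are a simplified sketch of the trade-off analysis, not a complete selection method:

```python
def choose_model(sensitive_data, strict_compliance, bursty_workload, limited_it_staff):
    """Rule-of-thumb mapping of business requirements to a deployment model."""
    if sensitive_data or strict_compliance:
        # Sensitive core stays private; a bursty load on top suggests hybrid.
        return "Hybrid" if bursty_workload else "Private"
    if limited_it_staff or bursty_workload:
        return "Public"   # managed services / elasticity favor the public cloud
    return "Public"       # default when nothing forces dedicated hardware

print(choose_model(sensitive_data=True, strict_compliance=True,
                   bursty_workload=True, limited_it_staff=False))   # Hybrid
print(choose_model(sensitive_data=False, strict_compliance=False,
                   bursty_workload=True, limited_it_staff=True))    # Public
```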

Q30-Q31. 🔥 What is a Cloud Computing Service Delivery Model? Explain any four.

[ June 2025 Q5(a), June 2023 Q1(a) | Frequency: 2 ]

Answer

Definition: A Cloud Computing Service Delivery Model defines the specific type of service being offered by the cloud provider to the consumer. It categorizes services based on the abstraction level of the resources provided (infrastructure vs. platform vs. software).

The Four Main Service Delivery Models:

  1. Infrastructure as a Service (IaaS):

    • Description: Provides fundamental computing resources like virtual servers, storage, and networking over the internet. The consumer manages the OS, middleware, and applications; the provider manages the physical hardware.
    • User: Network Architects, System Admins.
    • Example: AWS EC2, Google Compute Engine.
  2. Platform as a Service (PaaS):

    • Description: Provides a development and deployment environment (runtime, libraries, tools) for developers to build applications without worrying about the underlying infrastructure.
    • User: Developers.
    • Example: Google App Engine, Heroku, Microsoft Azure App Service.
  3. Software as a Service (SaaS):

    • Description: Delivers fully functional software applications over the internet. The consumer accesses the application via a web browser, eliminating the need for installation or maintenance.
    • User: End Users.
    • Example: Gmail, Salesforce, Dropbox, Microsoft 365.
  4. Anything as a Service (XaaS):

    • Description: A collective term for the delivery of anything as a service. It recognizes the vast variety of services and applications emerging over the internet.
    • Includes: Database as a Service (DBaaS), Storage as a Service (STaaS), Security as a Service (SECaaS), etc.

Q32. 🔥 Differentiate between Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

[ December 2024 Q3(a) | Frequency: 1 ]

Answer

| Feature | IaaS | PaaS | SaaS |
|---|---|---|---|
| Full Form | Infrastructure as a Service | Platform as a Service | Software as a Service |
| Focus | Provides raw computing resources. | Provides a framework for developing apps. | Provides a complete software solution. |
| User Control | High: user controls OS, apps, and middleware. | Medium: user controls apps and data only. | Low: user only configures app settings. |
| Target Audience | System administrators. | Developers. | End users. |
| Technical Knowledge | Requires high technical skill to manage. | Requires coding/development skills. | Requires minimal technical skill. |
| Provider Manages | Virtualization, servers, storage, networking. | IaaS layer + OS, middleware, runtime. | Everything (hardware + software stack). |
| Examples | AWS EC2, Rackspace. | Google App Engine, OpenShift. | Gmail, Zoom, Canva. |

Q33-Q34. ⭐ Discuss Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) with examples.

[ December 2022 Q4(a) | Frequency: 2 ]

Answer

Infrastructure as a Service (IaaS): IaaS is the most flexible category of cloud services. It aims to replace the physical data center. Instead of buying hardware, users rent IT infrastructure components like servers, storage, and networking on a pay-as-you-go basis.

  • Key Capability: The consumer is able to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.

  • Example: Amazon EC2 (Elastic Compute Cloud). A user can spin up a virtual machine (instance), select the operating system (Windows/Linux), configure the storage, and install a database or web server manually. The user is responsible for patching the OS and securing the data.

Platform as a Service (PaaS): PaaS removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications.

  • Key Capability: The consumer is able to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider.

  • Example: Google App Engine. A developer writes code in Python or Node.js and uploads it. Google handles the server provisioning, scaling, load balancing, and OS updates. The developer focuses solely on the code logic.


Q35-Q38. ⭐ Explain the Client Access, Internet Connectivity, Cloud Service Management, and Physical Resources layers of Cloud Architecture.

[ December 2023 Q1(a) | Frequency: 1 (Split into 4 parts) ]

Answer

Cloud architecture is generally conceptualized in four distinct layers:

  1. Client Access Layer (Front End):

    • This is the visible interface where end-users interact with the cloud.
    • It includes the hardware (devices) and software (browsers, thin clients, mobile apps) used to access cloud services.
    • Function: Enables users to initiate requests and view results.
  2. Internet Connectivity Layer:

    • This layer acts as the bridge between the client and the cloud provider.
    • It encompasses the public internet, VPNs, and dedicated network connections.
    • Function: Facilitates the transmission of data and service requests securely and reliably between the user and the cloud.
  3. Cloud Service Management Layer (Middle End):

    • This is the "brain" of the cloud architecture. It manages the resources and services.
    • It handles tasks like Resource Provisioning (allocating VMs), Service Level Agreement (SLA) Management, Billing/Metering, Security, and Load Balancing.
    • Function: Ensures the cloud operates efficiently and services are delivered according to contracts.
  4. Physical Resources Layer (Back End):

    • This layer consists of the actual physical hardware located in data centers.
    • It includes physical Servers (CPUs, RAM), Storage (Hard drives, SSDs), Network devices (Routers, Switches), and cooling/power systems.
    • Function: Provides the raw computing power required to run the virtualization layer and all upper layers.
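The four layers can be pictured as a pipeline that every request traverses in order. The handler behavior below is a toy illustration (assumed field names and VM name), not a real cloud stack:

```python
def client_layer(req):        # Front end: the user issues a request
    req["source"] = "browser"; return req

def connectivity_layer(req):  # Transport over the internet or a VPN
    req["transport"] = "https"; return req

def management_layer(req):    # Provisioning, SLA, metering decisions
    req["vm_assigned"] = "vm-01"; return req   # assumed VM name

def physical_layer(req):      # Actual data-center hardware does the work
    req["status"] = "executed"; return req

def handle(request):
    """Pass a request down through all four layers, top to bottom."""
    for layer in (client_layer, connectivity_layer, management_layer, physical_layer):
        request = layer(request)
    return request

print(handle({"action": "start_vm"})["status"])  # executed
```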

Q39. 🔥 Explain the four levels in a cloud architecture with the help of a block diagram.

[ June 2022 Q3(b) | Frequency: 1 ]

Answer

Cloud architecture is a hierarchical structure that defines the components and relationships required for cloud computing. It consists of four main levels:

  1. User/Client Layer: The entry point for the consumer (browsers, mobile devices).

  2. Cloud Management Layer: Controls the infrastructure, manages security, billing, and resource allocation.

  3. Virtualization Layer: Abstracts physical hardware into virtual resources (VMs, containers).

  4. Physical Resource Layer: The actual hardware (Servers, Storage, Network) in the data center.

[Block diagram omitted]

Q40. 📌 Write short note on Hierarchical structure of cloud.

[ June 2023 Q5(c) | Frequency: 1 ]

Answer

The hierarchical structure of the cloud, often referred to as Cloud Anatomy, organizes the components of cloud computing into layers that build upon one another to deliver services.

  1. Application Layer: The top layer where SaaS applications reside (e.g., CRM, Email). Users interact directly with this layer.

  2. Platform Layer: Provides the environment for developing and deploying applications (PaaS). It sits on top of the infrastructure.

  3. Virtualization Layer: The core technology enabling cloud. It partitions physical hardware into isolated virtual machines.

  4. Infrastructure Layer: The physical assets (IaaS context) like servers and storage arrays.

This hierarchy ensures modularity; changes in the physical layer do not necessarily disrupt the application layer due to the abstraction provided by virtualization.


Q41-Q42. 📌 Discuss Public Inter Cloud Networking and Private Intra Cloud Networking.

[ December 2022 Q4(b) | Frequency: 2 ]

Answer

1. Public Inter-Cloud Networking:

  • Definition: Connectivity between a cloud consumer (public) and a cloud provider, or between two different public cloud providers, over the public Internet.

  • Characteristics:

    • Relies on standard internet protocols (IP).
    • Cost-effective as it uses existing internet infrastructure.
    • Subject to variable latency and security risks (requires VPNs or HTTPS for security).
  • Use Case: A user accessing Google Drive from their home laptop.

2. Private Intra-Cloud Networking:

  • Definition: Connectivity within a private cloud environment, typically inside a specific organization’s data center.

  • Characteristics:

    • Connects servers, storage, and management nodes within the private infrastructure.
    • High bandwidth, low latency, and high security.
    • Isolated from the public internet.
  • Use Case: A corporate database server replicating data to a backup server within the same corporate data center.


Q43. 📌 Write short note on Multihoming and its types.

[ June 2022 Q5(a) | Frequency: 1 ]

Answer

Multihoming is a networking practice in which a computer, network, or cloud service is connected to more than one network connection (e.g., two different Internet Service Providers, or ISPs) simultaneously. This is done to increase reliability and performance.

Types of Multihoming:

  1. Classic Multihoming: A network is connected to multiple ISPs. If one ISP fails, traffic is automatically routed through the other. It uses BGP (Border Gateway Protocol) for routing.

  2. Host Multihoming: A single host (server) has multiple network interfaces (NICs), each connected to a different network.

  3. Multiple Address Multihoming: The host is assigned multiple IP addresses, potentially from different providers, to ensure reachability even if one path goes down.

Benefits:

  • Redundancy/Fault Tolerance: Prevents network outages.

  • Load Balancing: Distributes traffic across connections.
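Classic multihoming failover can be sketched as priority-based link selection. The link names and the health flag are illustrative; real deployments rely on BGP route withdrawal rather than an explicit up/down check:

```python
# Two ISP links with a priority order (illustrative names and values).
LINKS = [
    {"name": "isp_a", "up": True, "priority": 1},   # primary link
    {"name": "isp_b", "up": True, "priority": 2},   # backup link
]

def select_link(links):
    """Return the highest-priority link that is currently up."""
    alive = [l for l in links if l["up"]]
    if not alive:
        raise RuntimeError("all links down")
    return min(alive, key=lambda l: l["priority"])["name"]

print(select_link(LINKS))    # isp_a while the primary is healthy
LINKS[0]["up"] = False       # primary ISP fails
print(select_link(LINKS))    # traffic automatically shifts to isp_b
```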

UNIT 3: RESOURCE VIRTUALIZATION

Q44-Q45. 🔥 Define Virtualization and explain its underlying abstraction.

[ June 2023 Q2(b), June 2022 Q4(a) | Frequency: 2 ]

Answer

Definition: Virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. It involves creating a virtual version of a device or resource, such as a server, storage device, network, or an operating system, where the framework divides the resource into one or more execution environments.

Underlying Abstraction: The core abstraction of virtualization is the Hypervisor (or Virtual Machine Monitor - VMM).

  • Abstraction Layer: It inserts a layer of software between the physical hardware and the operating system(s).

  • Decoupling: It decouples the software (operating system and applications) from the underlying hardware.

  • Resource Mapping: The virtualization layer maps the physical resources (CPU, RAM, I/O) to virtual resources, allowing multiple "Guest" operating systems to run concurrently on a single "Host" physical machine, each believing it has exclusive access to the hardware.

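The resource-mapping role described above can be modeled as simple capacity accounting: physical CPU and RAM are carved into virtual allocations, and a new guest is admitted only while free capacity remains. Sizes here are illustrative assumptions:

```python
class Hypervisor:
    """Toy model of a VMM mapping physical resources to guest VMs."""

    def __init__(self, cpus, ram_gb):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.guests = {}

    def create_vm(self, name, vcpus, ram_gb):
        """Admit a guest only if free physical capacity remains."""
        if vcpus > self.free_cpus or ram_gb > self.free_ram:
            return False                     # host capacity exhausted
        self.free_cpus -= vcpus
        self.free_ram -= ram_gb
        self.guests[name] = (vcpus, ram_gb)  # guest sees these as its own hardware
        return True

host = Hypervisor(cpus=8, ram_gb=32)
print(host.create_vm("web", 4, 16))   # True
print(host.create_vm("db", 4, 16))    # True
print(host.create_vm("extra", 1, 4))  # False: nothing left to map
```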

Q46-Q47. ⭐ Mention important characteristics and features of Virtualization.

[ June 2023 Q2(b), June 2022 Q4(a) | Frequency: 2 ]

Answer

Virtualization offers four key characteristics/features that define its utility in cloud computing:

  1. Partitioning:

    • Many applications and operating systems can be supported within a single physical system.
    • System resources (CPU, memory, I/O) are partitioned between VMs.
  2. Isolation:

    • Each Virtual Machine (VM) is isolated from its host and other VMs.
    • If one VM crashes, it does not affect the others.
    • Data is not leaked between VMs, enhancing security.
  3. Encapsulation:

    • A VM is essentially a software container. The entire state of a VM (OS, apps, data) is encapsulated into a set of files.
    • This makes it easy to copy, move, and save VMs.
  4. Hardware Independence:

    • VMs are completely independent of the underlying physical hardware.
    • A VM created on one server can be migrated to another server with different physical components without modification.

Q48. 🔥 What is Machine or Server Level Virtualization?

[ December 2024 Q1(b), December 2023 Q4(a) | Frequency: 2 ]

Answer

Machine/Server Level Virtualization involves partitioning a physical server into smaller virtual servers. It is the process of running multiple independent virtual operating systems on a single physical computer.

  • Mechanism: It relies on a thin layer of software known as a Hypervisor (e.g., VMware ESXi, Microsoft Hyper-V, Xen) installed directly on the hardware or on top of a host OS.

  • Function: The hypervisor intercepts commands sent from the guest operating systems to the hardware and translates them, managing access to the physical CPU, memory, and peripherals.

  • Goal: To maximize resource utilization by ensuring that the physical server's capacity is fully used, rather than sitting idle.


Q49-Q51. 🔥 Differentiate between Full Virtualization, Para-Virtualization, and Hardware-Assisted Virtualization.

[ December 2024 Q1(b), December 2023 Q4(a-i, a-ii) | Frequency: 2 ]

Answer

These are different techniques used by hypervisors to manage guest operating systems.

| Feature | Full Virtualization | Para-Virtualization | Hardware-Assisted Virtualization |
|---|---|---|---|
| Guest OS Awareness | Unaware: the Guest OS does not know it is virtualized. | Aware: the Guest OS knows it is running in a virtual environment. | Unaware: uses hardware extensions to support an unmodified OS. |
| OS Modification | Unmodified: runs a standard OS (e.g., Windows on Linux). | Modified: requires kernel modification to communicate with the hypervisor. | Unmodified: relies on CPU features (Intel VT-x, AMD-V). |
| Technique | Uses Binary Translation to trap and emulate sensitive instructions. | Uses Hypercalls (API calls) to talk directly to the hypervisor. | The CPU provides a new mode (Root Mode) for the hypervisor, eliminating binary-translation overhead. |
| Performance | Slower (binary-translation overhead). | Faster (near-native performance due to direct communication). | High performance (hardware handles the heavy lifting). |
| Compatibility | High (runs any OS). | Low (requires a modified OS; hard for proprietary OSs like Windows). | High (standard in modern CPUs). |

Diagram:

plantuml diagram

Q52-Q54. ⭐ Define Type-1 and Type-2 Hypervisors and highlight their differences.

[ June 2024 Q1(a) | Frequency: 1 ]

Answer

1. Type-1 Hypervisor (Bare Metal):

  • Definition: Runs directly on the system's physical hardware without an underlying host operating system. It acts as a lightweight OS itself.

  • Examples: VMware ESXi, Microsoft Hyper-V, Citrix XenServer.

  • Efficiency: High efficiency and security because there is no intermediate OS layer.

2. Type-2 Hypervisor (Hosted):

  • Definition: Runs as a software application on top of a conventional operating system (Host OS).

  • Examples: VMware Workstation, Oracle VirtualBox, Parallels Desktop.

  • Efficiency: Lower performance due to overhead from the Host OS managing hardware access.

Key Differences:

| Parameter | Type-1 (Bare Metal) | Type-2 (Hosted) |
|---|---|---|
| Location | Directly on hardware. | On top of a Host OS. |
| Performance | High (native speed). | Lower (latency via the Host OS). |
| Scalability | Enterprise-level scalability. | Limited scalability. |
| Management | Complex (requires a management console). | Easy (standard app interface). |
| Use Case | Data centers, cloud infrastructure. | Testing, development, personal desktops. |

Diagram:

plantuml diagram

Q55-Q56. 📌 Explain use cases of Type-1 and Type-2 hypervisors and their resource utilization contribution.

[ June 2024 Q1(a) | Frequency: 1 ]

Answer

Use Cases:

  • Type-1 (Bare Metal):

    • Enterprise Data Centers: Hosting hundreds of servers for a company.
    • Cloud Service Providers: Building IaaS platforms (e.g., AWS EC2 underlying tech).
    • High-Performance Computing: Applications requiring direct hardware access.
  • Type-2 (Hosted):

    • Software Development: Testing apps on different OSs (e.g., testing Linux code on Windows).
    • Legacy Applications: Running old apps compatible only with older OS versions.
    • Education: Students learning different operating systems without dual-booting.

Contribution to Resource Utilization:

  • Type-1: Maximizes resource utilization by eliminating the "Host OS tax." It allocates hardware resources (CPU/RAM) directly to VMs with strict policies, ensuring critical workloads get priority.

  • Type-2: Offers flexible utilization for temporary workloads. It allows a user to utilize unused capacity of a personal computer to run an additional OS, but is less efficient for heavy, continuous workloads due to context switching overhead.


Q57. ⭐ Explain Desktop Level Virtualization.

[ June 2025 Q3(b), December 2022 Q5(b) | Frequency: 2 ]

Answer

Desktop Virtualization (often referred to as Virtual Desktop Infrastructure or VDI) isolates the desktop environment and associated application software from the physical client device that is used to access it.

Key Concepts:

  1. Centralization: The user's desktop environment (OS, icons, wallpaper, files) runs on a centralized server in a data center, not on the local PC.

  2. Remote Access: Users access their desktop via a remote display protocol (like RDP or PCoIP) over a network (LAN/WAN).

  3. Client Flexibility: The client device can be a traditional PC, a thin client, a tablet, or a smartphone.

Benefits:

  • Mobility: Users can access their personal desktop from anywhere.

  • Security: Data remains on the central server; if a laptop is lost, no data is lost.

  • Management: Administrators can patch and update thousands of desktops simultaneously from the server.

Diagram:

graphviz diagram

Q58. ⭐ Define Resource Pooling in cloud environment.

[ June 2022 Q3(a) | Frequency: 1 ]

Answer

Resource Pooling is a fundamental characteristic of cloud computing where the provider's computing resources (storage, processing, memory, and network bandwidth) are pooled to serve multiple consumers using a multi-tenant model.

Key Mechanism:

  • Physical and virtual resources are dynamically assigned and reassigned according to consumer demand.

  • Location Independence: The customer generally has no control or knowledge over the exact location of the provided resources (e.g., which specific hard drive their data is on), but may specify location at a higher level of abstraction (e.g., country or region).

Significance: It allows cloud providers to achieve economies of scale and high efficiency. When one user is not using their allocated resources, those resources can be instantly reallocated to another user who needs them, without manual intervention.

UNIT 4: RESOURCE POOLING, SHARING AND PROVISIONING

Q59-Q68. ⭐ Explain the Resource Pooling Architecture and the structure/significance of Server, Storage, and Network Pools.

[ June 2024 Q4(a), June 2022 Q3(a)(i-iii) | Frequency: 2 (Combined) ]

Answer

Resource Pooling Architecture: Resource pooling is the aggregation of physical resources (compute, storage, network) into a shared pool from which virtual resources are dynamically provisioned to consumers. The architecture creates an abstraction layer where physical resources are grouped, managed, and served as virtual services.

Diagram:

plantuml diagram

1. Server Pools:

  • Structure: Server pools consist of multiple physical servers comprising CPUs and memory (RAM). Virtual machines (VMs) are configured over these physical servers.

  • Mechanism: Hypervisors abstract CPU cycles and RAM from physical hardware to create virtual CPUs (vCPUs) and virtual memory. These are grouped into pools.

  • Significance:

    • High Availability: If a physical server fails, VMs can migrate to another server within the pool without downtime (Live Migration).
    • Load Balancing: Workloads are distributed across the pool to prevent overloading or bottlenecks on specific nodes.

2. Storage Pools:

  • Structure: Storage pools aggregate physical disks (HDD/SSD) from various arrays (SAN/NAS) into a unified logical storage entity. They are typically categorized into:

    • File-based: Shared access (e.g., NFS).
    • Block-based: Low latency access for OS/Databases (e.g., iSCSI).
    • Object-based: Unstructured data (e.g., S3).
  • Significance:

    • Abstraction: Users see a single volume regardless of the underlying physical disk complexity.
    • Efficiency: Enables Thin Provisioning (allocating storage on-demand rather than upfront).

3. Network Pools:

  • Structure: Network pools consist of virtualized networking devices (virtual switches, routers, ports, and VLANs) created from physical network interface cards (NICs) and switches.

  • Significance:

    • Isolation: Allows creation of isolated virtual networks (VLANs) for different tenants over the same physical cable.
    • Connectivity: Facilitates dynamic bandwidth allocation and interconnection between VMs and storage pools.

Q69-Q71. 🔥 Define Resource Sharing and explain Single Tenancy vs. Multi-Tenancy implementation.

[ June 2025 Q2(a), June 2024 Q4(b), June 2022 Q1(b) | Frequency: 3 ]

Answer

Resource Sharing: Resource sharing in cloud computing is the mechanism of allocating shared computing resources (CPU, memory, storage) among multiple consumers. It relies on virtualization to allow multiple applications or users to utilize the same physical hardware simultaneously while maintaining logical isolation.

1. Single Tenancy:

  • Definition: An architecture where a single instance of a software application and its supporting infrastructure serves only one customer (tenant).

  • Implementation:

    • The customer gets a dedicated database and application instance.
    • There is no sharing of resources at the application layer with other customers.
  • Pros: High security, customization, and performance reliability.

  • Cons: Higher cost and lower resource utilization efficiency.

2. Multi-Tenancy:

  • Definition: An architecture where a single instance of a software application runs on a server and serves multiple customers (tenants).

  • Implementation:

    • Shared Database, Shared Schema: All tenants share the same database tables; rows are distinguished by a Tenant ID. (Highest efficiency, lowest isolation).
    • Shared Database, Separate Schema: Tenants share the database but have individual tables/schemas.
  • Pros: Cost-effective (economies of scale), efficient maintenance (updates applied once for everyone).

  • Cons: "Noisy neighbor" effect, complex security implementation.

Diagram:

blockdiag diagram
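The "Shared Database, Shared Schema" model described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical in-memory ORDERS table; in practice the same pattern is enforced by the application or ORM layer on every SQL query.

```python
# Sketch of Shared Database, Shared Schema multi-tenancy: every row
# carries a tenant_id, and each query is scoped to exactly one tenant.
# Table name, fields, and tenant names are illustrative.

ORDERS = [
    {"tenant_id": "acme",   "order_id": 1, "amount": 250},
    {"tenant_id": "acme",   "order_id": 2, "amount": 120},
    {"tenant_id": "globex", "order_id": 3, "amount": 900},
]

def orders_for_tenant(tenant_id):
    """Logical isolation: a tenant can only ever see its own rows."""
    return [row for row in ORDERS if row["tenant_id"] == tenant_id]

print(orders_for_tenant("acme"))    # both of acme's rows, nothing else
print(orders_for_tenant("globex"))  # only globex's single row
```

The design trade-off is visible here: all tenants share one table (highest efficiency), so isolation rests entirely on the `tenant_id` filter being applied correctly on every access.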

Q72. 📌 Explain tenancy at different levels of cloud services.

[ June 2024 Q4(b) | Frequency: 1 ]

Answer

Tenancy models vary depending on the service layer (IaaS, PaaS, SaaS):

  1. IaaS Level (Infrastructure):

    • Multi-tenancy: Occurs at the hypervisor level. Multiple VMs from different customers run on the same physical server.
    • Isolation: Achieved via hardware virtualization (CPU/RAM isolation) and VLANs.
  2. PaaS Level (Platform):

    • Multi-tenancy: Occurs at the runtime environment level. Multiple developers run their applications within the same operating system or container engine (e.g., Kubernetes).
    • Isolation: Achieved via process isolation and sandboxing.
  3. SaaS Level (Software):

    • Multi-tenancy: Occurs at the application and database level. A single code base serves thousands of users.
    • Isolation: Achieved via logical separation in the database (Tenant IDs) and access control lists (ACLs).

Q73-Q74. ⭐ Differentiate between Static and Dynamic Resource Provisioning and how they address allocation challenges.

[ December 2024 Q3(b) | Frequency: 1 ]

Answer

Resource provisioning is the process of selecting, deploying, and running software (e.g., DBMS) and hardware resources (e.g., CPU, storage) for an application.

| Feature | Static Provisioning | Dynamic Provisioning |
|---|---|---|
| Definition | Resources are allocated in advance based on peak-load estimates. | Resources are allocated/de-allocated on demand during runtime. |
| Predictability | Suitable for predictable, steady workloads. | Suitable for unpredictable, fluctuating workloads. |
| Contract | Usually fixed-price contracts. | Pay-per-use or utility-based pricing. |
| Scalability | Risk of over-provisioning (wasted resources during low traffic) or under-provisioning (system crashes during high traffic). | Elasticity: scales up/down automatically to match demand. |
| Setup Time | No runtime overhead (resources are ready). | Slight runtime latency during allocation. |

Addressing Challenges:

  • Static: Addresses the challenge of guaranteed performance for critical, steady applications by reserving capacity. It avoids the latency of spinning up new instances.

  • Dynamic: Addresses the challenge of cost optimization and bursty traffic. It prevents paying for idle resources (over-provisioning) and prevents service denial during spikes (under-provisioning).


Q75-Q80. 📌 Explain Static and Dynamic Provisioning approaches with their advantages and disadvantages.

[ December 2022 Q1(a) | Frequency: 1 (Split) ]

Answer

1. Static Approach:

  • Concept: Resources (VMs, Storage) are assigned to an application before execution starts and remain fixed throughout the lifecycle.

  • Advantages:

    • Simplicity: Easier to implement and manage.
    • Performance: No overhead of monitoring and dynamically adjusting resources.
    • Predictability: Costs are known upfront.
  • Disadvantages:

    • Wastage: Leads to low resource utilization (resources sit idle during off-peak hours).
    • Inflexibility: Cannot handle unexpected traffic surges, leading to service degradation.

2. Dynamic Approach:

  • Concept: Resources are provisioned in real-time based on the current workload. The system monitors load and triggers scaling events.

  • Advantages:

    • Cost Efficiency: Users only pay for what they use.
    • Availability: High availability is maintained even during sudden load spikes.
    • Green Computing: Reduces energy consumption by powering down unused nodes.
  • Disadvantages:

    • Complexity: Requires sophisticated monitoring and auto-scaling algorithms.
    • Latency: There is a "spin-up" time involved in creating new VM instances.

Q82-Q84. ⭐ Define VM Sizing and discuss methods to perform it.

[ June 2025 Q1(b), December 2022 Q2(b) | Frequency: 2 ]

Answer

Definition: Virtual Machine (VM) Sizing is the process of determining the appropriate amount of computing resources (CPU cores, RAM, Disk I/O, Network Bandwidth) required for a VM to run a specific workload efficiently without over-provisioning (wasting money) or under-provisioning (hurting performance).

Ways to do VM Sizing:

  1. Rule-based / Fixed Sizing (Static):

    • Method: Selection is based on predefined templates or "T-shirt sizes" (e.g., Small, Medium, Large instances offered by AWS/Azure).
    • Process: The administrator estimates the peak requirement of the application and selects a fixed size that exceeds that peak.
    • Pros/Cons: Simple but often leads to wastage if the peak is rarely hit.
  2. Analytical / Model-based Sizing (Dynamic):

    • Method: Uses mathematical models or historical data analysis to predict resource needs.
    • Process: The system continuously monitors performance metrics (CPU usage, memory pressure). If utilization exceeds a threshold (e.g., 80%), it recommends resizing or automatically adjusts the allocation (Vertical Scaling).
    • Pros/Cons: Highly efficient and cost-effective but requires complex monitoring tools.
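The analytical sizing method can be illustrated with a small sketch. The utilization samples, thresholds, and doubling/halving resize policy are assumptions for illustration (the 80% threshold mirrors the example above); real tools use richer models over longer histories.

```python
# Sketch of analytical (model-based) VM sizing: average recent CPU
# utilization and recommend a vCPU count. Thresholds and the
# double/halve policy are illustrative assumptions.

def recommend_size(cpu_samples, current_vcpus, high=0.80, low=0.30):
    """Return a recommended vCPU count from recent utilization samples."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:                       # sustained pressure: size up
        return current_vcpus * 2
    if avg < low and current_vcpus > 1:  # sustained idleness: size down
        return max(1, current_vcpus // 2)
    return current_vcpus                 # already right-sized

print(recommend_size([0.91, 0.85, 0.88], 4))  # 8 (over the 80% threshold)
print(recommend_size([0.10, 0.15, 0.12], 4))  # 2 (under the 30% threshold)
```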

Q85-Q86. ⭐ What is Scaling/Scalability in Cloud Computing?

[ June 2024 Q3(b), December 2022 Q2(a) | Frequency: 2 ]

Answer

Definition: Scalability in cloud computing is the ability of a system to handle growing amounts of work by adding resources to the system. It ensures that the application performance remains consistent regardless of the number of users or the volume of data.

Types of Scaling:

  1. Horizontal Scaling (Scale Out/In): Adding more nodes (VMs) to a system (e.g., increasing from 2 web servers to 5).

  2. Vertical Scaling (Scale Up/Down): Increasing the power of an existing node (e.g., upgrading RAM from 4GB to 8GB).

Significance:

  • Ensures Quality of Service (QoS) and Service Level Agreements (SLAs) are met.

  • Fundamental to the "pay-as-you-go" elasticity of the cloud.

UNIT 5: SCALING

Q87-Q88. 🔥 Describe Proactive and Reactive Scaling Strategies.

[ June 2024 Q3(b), June 2023 Q1(b)(i)(ii), Dec 2022 Q2(a)(i)(ii) | Frequency: 3 ]

Answer

Scaling strategies determine when and how scaling actions are triggered in a cloud environment.

1. Reactive Scaling Strategy:

  • Definition: This strategy reacts to system changes in real-time. Scaling actions are triggered when specific monitored metrics (like CPU utilization, memory usage, or network throughput) breach predefined thresholds.

  • Mechanism:

    • Scale Out: If CPU usage > 80% for 5 minutes, add 2 VM instances.
    • Scale In: If CPU usage < 30% for 10 minutes, remove 1 VM instance.
  • Pros: Responds to actual demand; simple to implement; cost-effective for unpredictable workloads.

  • Cons: Lag time. Since it reacts after the threshold is crossed, there is a delay while new resources boot up, potentially causing temporary performance degradation.

2. Proactive (Predictive) Scaling Strategy:

  • Definition: This strategy anticipates future demand based on historical data, patterns, and analytics. It schedules scaling actions before the load actually hits.

  • Mechanism: Analyzes traffic trends (e.g., higher traffic on Black Friday, lower on weekends) and schedules resources to be ready at specific times.

  • Pros: Eliminates latency/lag; ensures resources are ready exactly when needed; improves user experience during known peaks.

  • Cons: Relies on accurate predictions. If predictions are wrong, it leads to either resource wastage (over-provisioning) or service crashing (under-provisioning).

Diagram:

graphviz diagram

Q89-Q93. 📌 Compare Proactive and Reactive scaling (Suitability, Working, Cost, Implementation) and explain Combinational Scaling.

[ June 2023 Q1(b) | Frequency: 1 ]

Answer

Combinational Scaling Strategy: This is a hybrid approach that utilizes both reactive and proactive methods. It uses predictive algorithms to handle expected base loads and traffic patterns while keeping reactive rules in place to handle sudden, unforeseen spikes that the prediction model missed. This offers the reliability of proactive scaling with the safety net of reactive scaling.

Comparison:

| Parameter | Proactive Scaling | Reactive Scaling |
|---|---|---|
| Working | Anticipatory; scales based on predicted future load using historical data/analytics. | Responsive; scales when current real-time metrics breach thresholds. |
| Suitability | Best for predictable workloads (e.g., batch jobs, e-commerce sales, 9-to-5 apps). | Best for unpredictable, bursty, or fluctuating workloads (e.g., viral news sites). |
| Implementation | Complex; requires ML models, historical-data analysis, and scheduling tools. | Simple; requires monitoring rules (e.g., CloudWatch alarms) and triggers. |
| Cost | Risk of over-provisioning if predictions are too aggressive (higher cost). | Highly cost-efficient, as you pay only when demand actually exists. |
| Latency | Low/zero latency (resources are pre-warmed). | Higher latency (time taken to boot new instances after demand rises). |

Q94. ⭐ What is Auto Scaling in cloud?

[ June 2022 Q2(b) | Frequency: 1 ]

Answer

Auto Scaling is a cloud computing feature that automatically adjusts the amount of computational resources (usually the number of active servers or virtual machines) in a server farm based on the load on the farm.

Key Functions:

  1. Scale Out (Expand): Automatically adds instances when traffic increases to maintain performance.

  2. Scale In (Shrink): Automatically terminates instances when traffic drops to save costs.

  3. Health Check: Replaces unhealthy instances automatically.

Benefits:

  • Cost Optimization: Pay only for what you use.

  • Fault Tolerance: Detects and replaces failed instances.

  • Availability: Ensures the application always has enough capacity to handle the current request volume.

Diagram:

blockdiag diagram

Q95-Q97. ⭐ Write and explain the Fixed Amount Auto Scaling Algorithm.

[ June 2025 Q4(a), June 2022 Q2(b) | Frequency: 2 ]

Answer

Explanation: The Fixed Amount Auto Scaling Algorithm (or Step Scaling) is a straightforward strategy in which capacity is increased or decreased by a constant, predefined number of instances (N) whenever a specific threshold is breached. Unlike percentage-based scaling (where the increment grows with the current fleet size), it grows and shrinks linearly.

Algorithm Logic:

  1. Monitor a specific metric $M$ (e.g., Average CPU Utilization).

  2. Define Thresholds:

    • Upper Threshold ($T_{high}$): Point to scale out.
    • Lower Threshold ($T_{low}$): Point to scale in.
  3. Define Step Size ($S$): The fixed number of VMs to add/remove (e.g., 2 VMs).

  4. Cool-down Period: A wait time after a scaling action before another can begin (to prevent oscillation).

Pseudocode Algorithm:

LOOP continuously:
    1. GET current_metric_value (M)
    2. GET current_instance_count (C)

    3. IF (scaling_activity_in_progress OR in_cooldown):
           CONTINUE (Skip loop)

    4. IF (M > T_high):
           // Scale Out Condition
           New_Count = C + S
           IF (New_Count > Max_Capacity):
               New_Count = Max_Capacity
           EXECUTE provision_instances(New_Count)
           START cooldown_timer

    5. ELSE IF (M < T_low):
           // Scale In Condition
           New_Count = C - S
           IF (New_Count < Min_Capacity):
               New_Count = Min_Capacity
           EXECUTE terminate_instances(New_Count)
           START cooldown_timer

    6. ELSE:
           Do Nothing (System stable)

    7. WAIT for polling_interval
END LOOP

Key Characteristics:

  • Predictable: You always know exactly how many servers will be added.

  • Simple: Easy to configure and understand.

  • Limitation: May react too slowly to massive, sudden spikes compared to proportional scaling.
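The pseudocode above can be rendered as runnable Python for a single decision step (the surrounding monitoring loop and cool-down timer are omitted). Folding provision/terminate into one returned target count is a design choice for clarity; the function and parameter names are illustrative, not a real cloud API.

```python
# One iteration of the fixed-amount (step) scaling check from the
# pseudocode above: compare the metric M against T_high / T_low and
# move the instance count by a fixed step S, clamped to capacity limits.

def scaling_decision(metric, count, t_high, t_low, step,
                     min_cap, max_cap, in_cooldown=False):
    """Return the new target instance count after one scaling check."""
    if in_cooldown:
        return count                      # skip while a prior action settles
    if metric > t_high:                   # scale-out condition
        return min(count + step, max_cap)
    if metric < t_low:                    # scale-in condition
        return max(count - step, min_cap)
    return count                          # system stable: do nothing

# CPU at 85% with thresholds 80/30 and step 2: 4 instances become 6.
print(scaling_decision(metric=85, count=4, t_high=80, t_low=30,
                       step=2, min_cap=2, max_cap=10))  # 6
```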


Q98-Q99. 📌 How does auto-scaling contribute to optimizing resource utilization in Vertical vs. Horizontal scaling?

[ Dec 2024 Q4(a) | Frequency: 1 ]

Answer

1. Context of Horizontal Scaling (Scale Out/In):

  • Optimization: Auto-scaling optimizes utilization by matching the count of resources to the demand.

  • Mechanism: When load increases, it adds identical commodity VMs. When load drops, it terminates them.

  • Contribution: This ensures that the system is never running 10 servers when only 2 are needed (preventing idle waste) and never running 2 when 10 are needed (preventing performance degradation). It is the most common form of cloud auto-scaling.

2. Context of Vertical Scaling (Scale Up/Down):

  • Optimization: Auto-scaling optimizes utilization by matching the size (capacity) of a single resource to the demand.

  • Mechanism: If a database server's RAM utilization reaches 90%, the system reboots it into a larger instance type (e.g., moving from 8 GB to 16 GB of RAM).

  • Contribution: It ensures a single workload has exactly the power it needs without paying for a massive server permanently. However, it typically involves downtime (restart required) and has a hard hardware limit, making it less flexible for "optimization" compared to horizontal scaling.


Q100-Q102. 📌 Write short notes on Scaling Strategies and Horizontal vs. Vertical Scaling.

[ Dec 2024 Q5(ii), June 2024 Q5(a), June 2022 Q5(b) | Frequency: 3 ]

Answer

Horizontal Scaling (Scaling Out):

  • Concept: Adding more units of resources (more servers, more VMs) to the pool.

  • Analogy: Adding more lanes to a highway to reduce traffic.

  • Pros: Infinite scaling (theoretically), no downtime (dynamic addition), redundancy/high availability.

  • Cons: Requires applications to be distributed/stateless; increased management complexity.

Vertical Scaling (Scaling Up):

  • Concept: Adding more power to an existing resource (more CPU, RAM, Disk) to a single node.

  • Analogy: Upgrading a car's engine to a more powerful one so it can go faster.

  • Pros: Simple management (no change in architecture/code); good for monolithic apps and databases.

  • Cons: Hardware limits (ceiling), creates a single point of failure, usually requires downtime/reboot to resize.

Diagram:

plantuml diagram

UNIT 6: LOAD BALANCING

Q103-Q105. 🔥 Define Load Balancing and discuss its importance and functionality in cloud computing.

[ June 2025 Q5(b), December 2023 Q1(b), June 2023 Q3(b), December 2022 Q1(b) | Frequency: 4 ]

Answer

Definition: Load balancing is the process of efficiently distributing incoming network traffic and computing workloads across multiple servers, networks, or resources (such as a server farm or cloud instances). It acts as a traffic manager, sitting between client devices and backend servers, ensuring that no single server bears too much demand.

Importance (Why it is Imperative): In cloud computing, balancing the load is imperative to:

  1. Minimize Response Time: Ensures applications respond quickly to user requests by preventing server congestion.

  2. Maximize Throughput: Optimizes the use of available resources to process more transactions in less time.

  3. High Availability: If one server fails, the load balancer redirects traffic to the remaining online servers, preventing downtime.

  4. Avoid Overloading: Prevents any single resource from being overwhelmed, which could lead to system crashes or degradation.

Functionality:

  • Health Monitoring: Continuously checks the health of backend servers. If a server fails a health check, it is removed from the pool.

  • Traffic Distribution: Uses algorithms (like Round Robin or Least Connection) to route requests.

  • Session Persistence: Ensures a user's session remains on the same server if required.

Diagram:

plantuml diagram

Q106. ⭐ Explain Weighted Round Robin algorithm with reference to load balancing.

[ June 2025 Q5(b), December 2022 Q1(b-ii) | Frequency: 2 ]

Answer

Weighted Round Robin (WRR) is a static load balancing algorithm designed to handle servers with different processing capabilities.

Concept:

  • In a standard Round Robin, requests are distributed sequentially (A $\rightarrow$ B $\rightarrow$ C $\rightarrow$ A). This assumes all servers are equal.

  • In Weighted Round Robin, each server is assigned a weight (an integer value) indicating its processing capacity.

  • Servers with higher weights receive more connections than those with lower weights.

Working Mechanism:

  1. Administrators assign weights based on CPU or RAM specifications (e.g., Server A = 5, Server B = 1).

  2. The Load Balancer cycles through the servers but assigns requests proportional to these weights.

  3. If Server A has a weight of 5 and Server B has a weight of 1, Server A will receive 5 requests for every 1 request sent to Server B.

Suitability: It is ideal for heterogeneous environments where the server pool consists of machines with varying physical specifications.

Diagram:

graphviz diagram
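The 5:1 example above can be sketched by expanding each server into as many slots as its weight and cycling through them. This is a deliberate simplification: production balancers (e.g., NGINX) use a "smooth" WRR that interleaves picks rather than emitting them in runs, but the proportion of requests per server is the same.

```python
from itertools import cycle

# Sketch of Weighted Round Robin: each server gets `weight` slots in the
# rotation, so higher-capacity servers receive proportionally more requests.
# Server names and weights mirror the example above (A=5, B=1).

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs. Yields names in proportion."""
    slots = [name for name, weight in servers for _ in range(weight)]
    return cycle(slots)

balancer = weighted_round_robin([("A", 5), ("B", 1)])
first_cycle = [next(balancer) for _ in range(6)]
print(first_cycle)  # A is chosen 5 times for every 1 time B is chosen
```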

Q107. 📌 Explain Static Algorithm approach with reference to load balancing.

[ December 2022 Q1(b-i) | Frequency: 1 ]

Answer

Static Load Balancing Algorithms distribute workloads based on predefined rules or prior knowledge of the system's properties, without taking into account the current state (load) of the servers.

Key Characteristics:

  1. No Monitoring: They do not monitor the real-time CPU or memory usage of the nodes.

  2. Predictability: The distribution pattern is deterministic.

  3. Speed: They are faster to execute because there is no overhead of gathering system state information.

Examples:

  • Round Robin: Distributes requests sequentially.

  • Weighted Round Robin: Distributes based on assigned capacity weights.

  • IP Hash: Uses the client's IP address to determine which server receives the request.

Limitation: Static algorithms can lead to load imbalances if tasks vary significantly in execution time, as a server might pile up long-running requests while others sit idle.
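The IP Hash variant listed above can be sketched as follows; the hash choice and backend addresses are illustrative. Because the mapping depends only on the client's IP, no server state is ever consulted, which is precisely what makes the algorithm static.

```python
import hashlib

# Sketch of the IP Hash static algorithm: hash the client IP and map it
# to one backend. The same client therefore always reaches the same
# server (a free form of session persistence).

def pick_server(client_ip, servers):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

print(pick_server("203.0.113.7", backends))
print(pick_server("203.0.113.7", backends))  # identical to the line above
```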


Q108-Q109. 📌 Explain Network Load Balancer and Application Load Balancer along with their features.

[ June 2023 Q3(b-i, ii) | Frequency: 1 (Split) ]

Answer

1. Network Load Balancer (NLB) - Layer 4:

  • Operation: Operates at the Transport Layer (Layer 4) of the OSI model (TCP/UDP).

  • Function: Routes traffic based on IP protocol data (Source IP, Destination IP, TCP/UDP ports). It does not inspect the content of the packet.

  • Features:

    • Ultra-low Latency: Extremely fast, as decisions are based only on packet headers.
    • High Throughput: Capable of handling millions of requests per second.
    • Static IP Support: Can provide a static IP address for the application.

2. Application Load Balancer (ALB) - Layer 7:

  • Operation: Operates at the Application Layer (Layer 7) of the OSI model (HTTP/HTTPS).

  • Function: Makes routing decisions based on the actual content of the request (HTTP headers, cookies, URL path).

  • Features:

    • Path-based Routing: Can route /images to one server group and /api to another.
    • SSL Termination: Decrypts SSL traffic at the load balancer, relieving backend servers of this computational burden.
    • Content-based Routing: Can route based on user-agent (mobile vs desktop) or cookies.

Diagram:

plantuml diagram

Q110-Q111. 📌 Explain briefly Hardware-based and Virtual Load Balancers.

[ December 2023 Q1(b) | Frequency: 1 ]

Answer

1. Hardware-based Load Balancer:

  • Definition: A dedicated physical appliance (hardware box) designed specifically for load balancing tasks.

  • Characteristics:

    • Contains proprietary software running on specialized processors (ASICs) optimized for network traffic.
    • High performance and throughput.
    • Expensive to purchase and maintain (CapEx).
    • Examples: F5 Big-IP, Citrix ADC (NetScaler).

2. Virtual Load Balancer (Software/VLB):

  • Definition: A software-based load balancer that runs on a virtual machine or commodity hardware.

  • Characteristics:

    • Decouples the software from specific hardware, offering flexibility.
    • Cost-effective (OpEx model usually) and easier to scale.
    • Involves management effort comparable to hardware appliances, but supports automation and native cloud integration.
    • Examples: HAProxy, NGINX, AWS ELB.

UNIT 7: SECURITY ISSUES IN CLOUD COMPUTING

Q112-Q114. 📌 What is cloud security? Explain threats and information security methods.

[ June 2023 Q4(b) | Frequency: 1 ]

Answer

Cloud Security Definition: Cloud security refers to the set of policies, technologies, applications, and controls utilized to protect virtualized IP, data, applications, services, and the associated infrastructure of cloud computing. It involves protecting data privacy and safety across online infrastructure.

Common Threats:

  1. Data Breaches: Unauthorized access to sensitive data caused by weak authentication or vulnerabilities.

  2. Data Loss: Permanent loss of data due to malicious deletion, corruption, or disaster.

  3. Account Hijacking: Attackers stealing user credentials (via phishing or spyware) to manipulate data or transactions.

  4. Insecure APIs: Weaknesses in the interfaces (APIs) provided by cloud providers for provisioning and management.

  5. Malicious Insiders: Employees or administrators abusing their authorized access.

  6. Denial of Service (DoS): Making cloud services unavailable to legitimate users by flooding the network.

Information Security Methods:

  1. Confidentiality: Ensuring data is accessible only to authorized users (Encryption, Access Control Lists).

  2. Integrity: Ensuring data is not altered by unauthorized parties (Hash functions, Digital Signatures).

  3. Availability: Ensuring data/services are available when needed (Redundancy, DDoS protection).

  4. Accountability: Logging actions to trace responsible parties (Audit trails).
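Of these methods, Integrity is easy to demonstrate with hash functions: a stored digest is compared against a freshly recomputed one, and any alteration changes the digest. The data values below are illustrative.

```python
import hashlib

# Sketch of integrity checking with SHA-256: recompute the digest and
# compare it with the stored one to detect unauthorized modification.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"invoice-total: 100"
stored = digest(original)          # digest recorded at write time

tampered = b"invoice-total: 900"   # an unauthorized alteration

print(digest(original) == stored)  # True: data unchanged
print(digest(tampered) == stored)  # False: alteration detected
```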


Q115-Q116. 📌 Explain Identity Management and Access Control mechanisms in cloud security.

[ December 2023 Q2(b)(i, ii) | Frequency: 1 ]

Answer

These are core components of IAM (Identity and Access Management).

1. Identity Management (IdM):

  • Definition: The framework of policies and technologies for ensuring that the right people (identities) have the appropriate access.

  • Key Function: It deals with identifying individuals in a system (authentication) and managing their roles.

  • Components:

    • Single Sign-On (SSO): Allows users to log in once and access multiple applications.
    • Multi-Factor Authentication (MFA): Requires more than one verification method (e.g., password + OTP).
    • Directory Services: Storing identity data securely.

2. Access Control Mechanism:

  • Definition: The process of restricting access to specific resources based on the identity of the user.

  • Key Function: It answers the question, "What is this user allowed to do?" (Authorization).

  • Approaches:

    • RBAC (Role-Based Access Control): Access based on job title/role (e.g., "Manager" can approve, "Clerk" can only view).
    • ABAC (Attribute-Based Access Control): Access based on attributes (e.g., User location, Time of day).
    • Least Privilege Principle: Users are granted the minimum level of access required to perform their job.
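A minimal sketch of the RBAC check described above; the role table and permission names are hypothetical:

```python
# Hypothetical role-to-permission table; real IAM systems externalize this
# into directory services or policy engines.
ROLE_PERMISSIONS = {
    "manager": {"view", "approve"},
    "clerk": {"view"},
}

def is_allowed(role: str, action: str) -> bool:
    """Least privilege: deny anything not explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("manager", "approve")
assert not is_allowed("clerk", "approve")   # clerks may only view
assert not is_allowed("guest", "view")      # unknown roles get nothing
```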

Diagram: (PlantUML figure omitted — user → Identity Management (authentication) → Access Control (authorization) → resource)

Q117-Q118. 📌 What are the benefits of SECaaS and how does it enhance cloud security?

[ December 2024 Q1(c) | Frequency: 1 ]

Answer

Security as a Service (SECaaS): SECaaS is a cloud delivery model where a third-party provider integrates security services into a corporate infrastructure on a subscription basis.

Benefits:

  1. Cost Savings: Eliminates the need for expensive on-premise security hardware and specialized in-house security experts. Operates on OpEx rather than CapEx.

  2. Continuous Updates: The provider manages virus definitions and threat intelligence, ensuring the latest protection is always active (e.g., Anti-virus updates).

  3. Faster Provisioning: Security services can be enabled instantly through a web dashboard without installing local hardware.

  4. Expertise: Grants access to specialized security experts who monitor threats 24/7.

Enhancement of Cloud Security:

  • Unified Management: Provides a centralized dashboard to manage security policies across hybrid environments (cloud and on-prem).

  • Scalability: Security capabilities scale automatically with the cloud infrastructure (e.g., scanning more traffic as the application grows).

  • Proactive Monitoring: SECaaS providers often use advanced analytics and AI to detect anomalies and mitigate threats (like DDoS) before they reach the customer's network.

  • Compliance: Helps organizations meet regulatory requirements (GDPR, HIPAA) by providing compliant security frameworks and reporting.

UNIT 8: INTERNET OF THINGS

Q119-Q120. 🔥 Define Internet of Things (IoT) and its characteristics.

[ June 2022 Q1(c) | Frequency: 1 ]

Answer

Definition: The Internet of Things (IoT) describes the network of physical objects—"things"—that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet. It extends internet connectivity beyond standard devices (desktops, laptops) to any range of non-internet-enabled physical devices and everyday objects.

Characteristics of IoT:

  1. Connectivity: Devices are interconnected through the internet or local networks (Wi-Fi, Bluetooth) to exchange data.

  2. Sensing: Devices are equipped with sensors to collect real-time data from the environment (e.g., temperature, motion).

  3. Data Processing: Capability to process data locally (Edge) or transmit it to central systems (Cloud) for analysis.

  4. Automation: Facilitates remote control and automated actions without direct human intervention.

  5. Scalability: Designed to handle a massive influx of devices and data streams.

  6. Interoperability: Diverse devices from different manufacturers communicate via standard protocols.

  7. Unique Identity: Each IoT device has a unique identifier (IP address) to be addressable in the network.

  8. Energy Efficiency: Many IoT devices are designed to operate on low power for extended periods.


Q121-Q123. 📌 Explain Industrial IoT, Infrastructure IoT, and Internet of Military Things (IoMT) categories.

[ June 2022 Q1(c) | Frequency: 1 ]

Answer

1. Industrial IoT (IIoT):

  • Focus: Augmenting existing industrial systems to increase productivity and efficiency.

  • Application: Used in large-scale factories, manufacturing plants, agriculture, and logistics.

  • Function: Enables predictive maintenance, real-time monitoring of machinery, and supply chain optimization. It is a key enabler of Industry 4.0.

2. Infrastructure IoT:

  • Focus: Monitoring and controlling the operations of urban and rural infrastructures.

  • Application: Bridges, railway tracks, wind farms, and smart city grids.

  • Function: Sensors boost efficiency and maintenance planning by monitoring structural health and usage patterns, leading to cost savings and improved safety.

3. Internet of Military Things (IoMT):

  • Focus: Enhancing military operations, situational awareness, and risk assessment.

  • Application: Connecting ships, planes, tanks, drones, and soldiers (Battlefield IoT).

  • Function: Creates an interconnected combat system for real-time data sharing, improving response times and strategic decision-making on the battlefield.


Q124-Q125. ⭐ List and explain any ten IoT technologies.

[ June 2025 Q2(b), Dec 2023 Q1(c) | Frequency: 2 ]

Answer

The baseline technologies that make IoT possible include:

  1. Sensors & Actuators: The fundamental hardware. Sensors gather data (input), and actuators perform physical actions (output) based on instructions.

  2. Connectivity Protocols: Technologies like Wi-Fi, Bluetooth, Zigbee, NFC, and Cellular (5G/LTE) that facilitate data transfer.

  3. Cloud Computing: Provides the massive storage and processing power required to handle the data generated by billions of devices.

  4. Edge Computing: Processes data locally on the device or network edge to reduce latency and bandwidth usage.

  5. Machine Learning & AI: Analyzing vast IoT data to derive insights, recognize patterns, and enable predictive maintenance.

  6. IoT Processors: Specialized low-power, high-efficiency chips (microcontrollers) designed specifically for IoT devices.

  7. IoT Operating Systems: Lightweight OSs (e.g., TinyOS, Contiki) designed for devices with limited memory and power.

  8. Low-Power Wide-Area Networks (LPWAN): Networks like LoRaWAN and Sigfox designed for long-range communication with minimal power consumption.

  9. IoT Security: Encryption, authentication, and secure boot technologies to protect devices and data from cyber threats.

  10. Event Stream Processing: Technologies to analyze high-rate data streams in real-time (e.g., thousands of events per second) for immediate decision-making.
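Event stream processing (item 10) can be sketched as a sliding-window computation over incoming readings; the `SlidingAverage` class and the sample values are illustrative, not a specific product's API:

```python
from collections import deque

class SlidingAverage:
    """Keep a rolling average over the last `size` events, as a stream
    processor would when flagging anomalies in real time."""
    def __init__(self, size: int):
        self.window = deque(maxlen=size)

    def push(self, value: float) -> float:
        self.window.append(value)  # oldest event drops off automatically
        return sum(self.window) / len(self.window)

avg = SlidingAverage(size=3)
readings = [10.0, 20.0, 30.0, 100.0]
means = [avg.push(r) for r in readings]
# The last mean reflects only the 3 most recent events: (20+30+100)/3 = 50
assert means == [10.0, 15.0, 20.0, 50.0]
```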


Q126. 📌 Explain all the four components which support IoT system with the help of a sample block diagram.

[ June 2023 Q1(c) | Frequency: 1 ]

Answer

An IoT system generally comprises four distinct components that facilitate the flow of data from the physical world to actionable insights.

1. Sensors/Devices:

  • These connect to the physical world to collect data (sensing) or perform actions (actuating).

  • Examples: Temperature sensors, GPS, cameras.

2. Connectivity (Network):

  • The medium through which data is transmitted from the sensors to the processing system.

  • Technologies: Wi-Fi, Bluetooth, Cellular (4G/5G), LoRaWAN, Gateways.

3. Data Processing (Analytics):

  • The software infrastructure (often cloud-based) that ingests, stores, and analyzes the raw data.

  • Functions: Filtering, aggregation, machine learning inference.

4. User Interface (Action):

  • The point of interaction for the end-user to view data or control the system.

  • Examples: Mobile apps, web dashboards, alerts, or automatic triggers for actuators.
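The four-component flow above can be sketched as a simple Python pipeline; every function name and the 30 °C alert threshold are made up for illustration:

```python
def sense() -> dict:
    # 1. Sensor: a hypothetical fixed reading stands in for hardware input.
    return {"sensor": "temp-01", "celsius": 31.5}

def transmit(reading: dict) -> dict:
    # 2. Connectivity: in a real system this hop is Wi-Fi/LoRaWAN/cellular.
    return dict(reading)  # pass through unchanged

def process(reading: dict) -> dict:
    # 3. Data processing: derive an insight from the raw value.
    reading["alert"] = reading["celsius"] > 30.0
    return reading

def display(reading: dict) -> str:
    # 4. User interface: render the insight for the end user.
    return f"{reading['sensor']}: {'ALERT' if reading['alert'] else 'OK'}"

assert display(process(transmit(sense()))) == "temp-01: ALERT"
```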

Diagram: (PlantUML figure omitted — block diagram: Sensors/Devices → Connectivity → Data Processing → User Interface)

Q127-Q128. 🔥 Define a Sensor and explain its characteristics.

[ June 2025 Q3(a), June 2023 Q4(a), Dec 2022 Q3(a) | Frequency: 3 ]

Answer

Definition: A Sensor is a device that detects and responds to some type of input from the physical environment. The specific input could be light, heat, motion, moisture, pressure, or any one of a great number of other environmental phenomena. The output is a signal (generally electrical) that is converted to a human-readable display or transmitted for processing.

Characteristics of a Sensor:

  1. Sensitivity: The ratio of the change in output signal to the change in input physical quantity. High sensitivity indicates the sensor detects even minute changes.

  2. Resolution: The smallest detectable incremental change of input parameter that can be detected in the output signal.

  3. Linearity: The degree to which the sensor's output is directly proportional to the input over its specific range.

  4. Range: The minimum and maximum values of the physical variable that the sensor can measure.

  5. Drift: The deviation in sensor reading over time when the input remains constant (e.g., due to aging or temperature changes).

  6. Repeatability: The ability of the sensor to produce the same output for the same input under identical conditions.

  7. Response Time: The time it takes for the sensor to respond to a step change in the input variable.
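Two of these characteristics lend themselves to a quick numeric sketch; the thermocouple figures are illustrative, not taken from a datasheet:

```python
def sensitivity(out_change: float, in_change: float) -> float:
    """Sensitivity = change in output / change in input (e.g., mV per degC)."""
    return out_change / in_change

# A sensor whose output rises 50 mV as temperature rises 100 degC:
assert sensitivity(50.0, 100.0) == 0.5  # mV per degC

def quantized(value: float, resolution: float) -> float:
    """Resolution: smallest input step the sensor can distinguish."""
    return round(value / resolution) * resolution

# With 0.5 degC resolution, 23.1 degC and 23.2 degC read identically:
assert quantized(23.1, 0.5) == 23.0
assert quantized(23.2, 0.5) == 23.0
```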


Q129-Q133. 📌 Classification of Sensors and explanations of specific types.

[ June 2025 Q3(a), June 2023 Q4(a), Dec 2022 Q3(a) | Frequency: 3 ]

Answer

Sensors are classified based on the physical parameter they measure.

1. Temperature Sensors:

  • Function: Detect heat/cold and convert it into an electrical signal.

  • Use Cases: Industrial machine monitoring (overheating), agriculture (soil temp), smart thermostats.

2. Pressure Sensors:

  • Function: Measure force per unit area applied by a fluid (liquid or gas) or solid.

  • Use Cases: Monitoring tire pressure (TPMS), water tank levels, atmospheric pressure for weather forecasting.

3. Motion Sensors:

  • Function: Detect physical movement in a specific area.

  • Use Cases: Security systems (intruder detection), automated lighting, automatic doors.

  • Types: Passive Infrared (PIR), Ultrasonic, Microwave.

4. Proximity Sensors:

  • Function: Detect the presence of nearby objects without any physical contact.

  • Use Cases: Parking assist systems, conveyor belts (counting objects), smartphone screen off during calls.


Q134-Q137. 📌 Explain Image, Chemical, Acceleration, and Proximity Sensors in IoT.

[ Dec 2023 Q4(b) | Frequency: 1 ]

Answer

1. Image Sensors:

  • Description: Converts an optical image into an electronic signal.

  • Features: Captures visual data for digital processing.

  • Applications: Facial recognition systems, automated quality control (detecting defects on assembly lines), license plate readers.

2. Chemical Sensors:

  • Description: Detects the presence of specific chemical substances or changes in chemical composition.

  • Features: Can detect volatile compounds or liquid contaminants.

  • Applications: Industrial process control, environmental monitoring (detecting leaks), water quality testing.

3. Acceleration Sensors (Accelerometers):

  • Description: Measures proper acceleration (rate of change of velocity) or tilt.

  • Features: Can detect static (gravity) or dynamic (vibration/movement) forces.

  • Applications: Smartphone screen rotation, fall detection in elderly care wearables, vehicle crash detection.

4. Proximity Sensors:

  • Description: Detects the presence/absence of an object within a specific distance.

  • Features: Often uses electromagnetic fields or infrared light.

  • Applications: Retail (customer engagement when near a shelf), car reversing sensors.


Q138-Q141. ⭐ Define Actuator and explain its types.

[ June 2025 Q4(b), Dec 2023 Q5(a), Dec 2022 Q5(c) | Frequency: 3 ]

Answer

Definition: An Actuator is a machine component or mechanism that is responsible for moving and controlling a system. It acts as the "hands" of the IoT system, taking electrical control signals (from the processing layer) and converting them into physical action (movement, force, sound, etc.).

Types of Actuators:

  1. Electric Actuators:

    • Mechanism: Converts electrical energy into mechanical torque (rotary or linear).
    • Examples: Electric motors (DC, Servo, Stepper), Solenoids.
    • Use: Robotics, opening electric locks, controlling fans.
  2. Hydraulic Actuators:

    • Mechanism: Uses pressurized fluid (oil) to generate force.
    • Characteristics: High power and force generation.
    • Use: Heavy construction machinery (excavators), industrial presses.
  3. Pneumatic Actuators:

    • Mechanism: Uses compressed air to generate motion.
    • Characteristics: Fast, clean, and safe (no spark hazard).
    • Use: Bus doors, automated assembly lines, pneumatic brakes.
  4. Thermal/Magnetic Actuators:

    • Mechanism: Uses thermal energy (Shape Memory Alloys) or magnetic fields to produce movement.
    • Use: Precision instruments, specialized valves.

Q142-Q143. 📌 Explain Arduino and Raspberry Pi as computing components.

[ Dec 2022 Q3(b) | Frequency: 1 ]

Answer

1. Arduino:

  • Type: Microcontroller-based development board.

  • Focus: Hardware interaction, real-time control, and analog/digital I/O.

  • OS: None (Runs firmware/sketches directly).

  • Use Cases: Simple IoT projects, reading sensors, controlling motors, robotics. It is ideal for tasks requiring low power and real-time response.

2. Raspberry Pi:

  • Type: Single-Board Computer (SBC).

  • Focus: Computational tasks, software applications, and multimedia.

  • OS: Runs full Operating Systems (Linux/Raspbian).

  • Connectivity: Has built-in Ethernet, Wi-Fi, Bluetooth, and USB ports.

  • Use Cases: IoT Gateways, media servers, complex data processing, running Python scripts, hosting web servers.

Comparison Diagram: (PlantUML figure omitted — Arduino: microcontroller running firmware directly vs. Raspberry Pi: single-board computer running a full Linux OS)

UNIT 9: IoT NETWORKING AND CONNECTIVITY TECHNOLOGIES

Q144. 📌 Write short note on Technologies used for M2M Communication.

[ December 2024 Q5(iv) | Frequency: 1 ]

Answer

Machine-to-Machine (M2M) communication utilizes several key technologies to enable devices to exchange data without human intervention. The primary technologies include:

  1. Wireless Sensor Networks (WSNs):

    • These are networks of spatially distributed autonomous sensors to monitor physical or environmental conditions (temperature, sound, etc.) and pass data through the network to a main location.
    • Used extensively in industrial automation and environmental monitoring.
  2. Cellular Networks:

    • Leverages existing mobile network infrastructure (2G, 3G, 4G, 5G).
    • Provides wide area coverage, making it suitable for remote monitoring and asset tracking.
  3. Wi-Fi:

    • A high-speed wireless technology effective for devices in close proximity.
    • Widely used in home automation and office environments due to simplicity and high data transfer rates.
  4. Bluetooth:

    • A short-range wireless technology.
    • Favored for wearable devices and personal home automation due to low power consumption and adaptability.

Q145. 📌 Explain IPv6 (6LoWPAN) communication protocol with reference to IoT devices.

[ June 2022 Q4(b)(i) | Frequency: 1 ]

Answer

6LoWPAN (IPv6 over Low Power Wireless Personal Area Network) is the standard protocol designed to bring the Internet Protocol (IP) to the smallest of IoT devices.

  • Definition: It is a standard that allows IPv6 packets to be transmitted over IEEE 802.15.4 wireless networks (which are typically low power and low data rate).

  • Purpose: It allows small, limited-processing, and low-power IoT devices to have direct connectivity with IP-based servers on the Internet.

  • Key Feature: Unlike other LPWANs (like LoRaWAN) where devices talk to a gateway, in 6LoWPAN networks, host (end) nodes can communicate with other host nodes directly.

  • Significance: It enables the vision of "every device having an IP address" by utilizing the vast address space of IPv6 efficiently over constrained networks.


Q146-Q149. ⭐ Explain MQTT Communication Protocol for IoT.

[ June 2022 Q4(b)(ii), December 2023 Q5(b) | Frequency: 2 ]

Answer

MQTT (Message Queuing Telemetry Transport) is a lightweight messaging protocol designed for battery-powered devices and low bandwidth networks.

  • Architecture: It works on a Publish-Subscribe model rather than a client-server model.

  • Components:

    1. Publisher: The device (e.g., a temperature sensor) that generates data.
    2. Subscriber: The application or device that receives the data.
    3. Broker: The central server that receives messages from publishers, filters them, and distributes them to subscribers based on topics.
  • Working: Subscribers "subscribe" to a specific Topic. When a Publisher sends data to that topic, the Broker ensures all subscribers receive it.

  • Usage: Ideal for unreliable networks and devices with limited processing power.
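The publish-subscribe flow can be sketched with a toy in-memory broker; a real deployment would use an MQTT client library (e.g., Eclipse Paho) against a broker such as Mosquitto, and the topic names here are invented:

```python
from collections import defaultdict

class Broker:
    """Toy in-memory broker illustrating MQTT's publish-subscribe model."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, payload: str):
        # The broker, not the publisher, fans the message out by topic.
        for cb in self.subscribers[topic]:
            cb(payload)

received = []
broker = Broker()
broker.subscribe("home/temperature", received.append)
broker.publish("home/temperature", "21.5")
broker.publish("home/humidity", "40")   # no subscriber; message is dropped
assert received == ["21.5"]
```

Note that publisher and subscriber never address each other directly — decoupling them through the broker is what makes the model suitable for unreliable, constrained networks.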

Diagram: (PlantUML figure omitted — Publisher → Broker (topic filtering) → Subscribers)

Q147, Q150. ⭐ Explain CoAP Communication Protocol for IoT.

[ June 2022 Q4(b)(iii), December 2023 Q5(b) | Frequency: 2 ]

Answer

CoAP (Constrained Application Protocol) is a web transfer protocol designed to translate the HTTP model for use with restrictive devices and network environments.

  • Protocol Basis: It uses UDP (User Datagram Protocol) for establishing communication, which reduces overhead compared to TCP.

  • Function: It allows low-power sensors to interact with RESTful services (like HTTP GET, POST, PUT, DELETE) but with much lower bandwidth usage.

  • Key Feature: It supports multicast, allowing data to be transmitted to multiple hosts simultaneously using low bandwidth.

  • Usage: Best suited for M2M communication where devices need to be controlled over the internet using standard web methods but have limited power.


Q148, Q151. 📌 Explain XMPP Communication Protocol for IoT.

[ June 2022 Q4(b)(iv), December 2023 Q5(b) | Frequency: 2 ]

Answer

XMPP (Extensible Messaging and Presence Protocol) is a communication protocol based on XML (Extensible Markup Language).

  • Function: It enables the real-time exchange of extensible data between network entities.

  • Nature: It is an open standard, meaning anyone can implement these services without proprietary restrictions.

  • Usage: Originally used for instant messaging (chat), it has been adapted for IoT to support M2M communication across a variety of networks. It is useful for applications requiring near-real-time data exchange like video calls or multi-party chat in an IoT context.


Q152, Q156. 📌 Explain Zigbee IoT Connectivity Technology.

[ December 2023 Q2(a)(i), December 2022 Q5(d) | Frequency: 2 ]

Answer

Zigbee is a wireless technology based on the IEEE 802.15.4 standard, specifically designed to address the needs of low-power and low-cost IoT devices.

  • Range: It is used for short-range communication only.

  • Network Topology: It creates low data rate wireless ad-hoc networks (often mesh networks).

  • Features:

    • Resistant to unauthorized reading and communication errors.
    • Provides low throughput (speed).
    • Easy to install and supports a large number of nodes connected together.
  • Application: Home automation (smart bulbs, switches), industrial control.


Q153. 📌 Explain Z-wave IoT Connectivity Technology.

[ December 2023 Q2(a)(ii) | Frequency: 1 ]

Answer

Z-Wave is a wireless communications protocol used primarily for home automation.

  • Mechanism: It uses low-powered radio frequency communication.

  • Key Feature: It is interoperable, allowing different smart devices to connect and be controlled over the internet.

  • Performance: Supports data rates of up to 100kbps and includes encryption and multi-channel support.

  • Usage: Ideal for connecting smart devices (lights, locks, sensors) in a home environment while consuming very low power.


Q154, Q158. 📌 Explain RFID IoT Connectivity Technology.

[ December 2023 Q2(a)(iii), December 2022 Q5(d) | Frequency: 2 ]

Answer

RFID (Radio Frequency Identification) uses electromagnetic fields to automatically identify and track tags attached to objects.

  • Components: The system consists of a Reading Device (Reader) and RFID Tags.

  • RFID Tag: An electronic device consisting of a small chip and an antenna. It can carry data up to 2000 bytes.

  • Working: The tag stores data (identification info) and is attached to an object. The reader tracks the presence of the tag when the object passes near it.

  • Application: Inventory management, asset tracking, supply chain visibility.


Q155, Q157. 📌 Explain NFC IoT Connectivity Technology.

[ December 2023 Q2(a)(iv), December 2022 Q5(d) | Frequency: 2 ]

Answer

NFC (Near Field Communication) is a protocol used for very short-distance communication between devices.

  • Basis: It is based on RFID technology but has a much lower transmission range (approx. 10 cm).

  • Features:

    • Allows contactless transmission of data.
    • Has a shorter setup time than Bluetooth.
    • Provides better security due to the requirement of close physical proximity.
  • Application: Contactless payments (mobile wallets), identification of documents, pairing devices.


UNIT 10: IoT APPLICATION DEVELOPMENT

Q159. 📌 Discuss the challenges in IoT Application Development.

[ June 2025 Q1(d) | Frequency: 1 ]

Answer

Developing IoT applications is complex due to the specific characteristics of the ecosystem. Key challenges include:

  1. Deep Heterogeneity: IoT involves interactions among heterogeneous devices (different manufacturers, capabilities, protocols) and networks. Ensuring interoperability and portability across this diversity is difficult.

  2. Inherently Distributed: Components are distributed across cloud, fog, and edge layers. A centralized development methodology often fails here.

  3. Data Management: Handling huge volumes of data generated at different speeds and in various forms. Detecting invalid or corrupted data (due to sensor failure) is a major challenge.

  4. Application Maintenance: Debugging and updating code on millions of distributed, resource-constrained devices remotely poses security and bandwidth challenges.

  5. Humans in the Loop: Modeling complex human behaviors for human-centric applications (e.g., elderly care) is difficult.

  6. Application Inter-dependency: When multiple applications share the same sensors (e.g., HVAC and Security sharing a motion sensor), conflicts in decision-making can occur.


Q160-Q161. 📌 Discuss popular open-source platforms for IoT application development.

[ December 2024 Q1(d) | Frequency: 1 ]

Answer

Open-source platforms allow developers to build IoT solutions without vendor lock-in.

  1. Kaa:

    • Focus: End-to-end IoT platform for the enterprise.
    • Features: Unlimited connected devices, cross-device interoperability, real-time monitoring, remote provisioning.
    • Best For: Fast, scalable business applications.
  2. Macchina.io:

    • Focus: Web-enabled toolkit for IoT gateways.
    • Features: Supports JavaScript and C++; suited to automotive telematics, V2X, and industrial edge computing.
    • Connectivity: Supports Tinkerforge, XBee, and smart sensors.
  3. Zetta:

    • Focus: Server-oriented, API-first platform built on Node.js.
    • Features: Turns any device into an API; creates geo-distributed networks; optimized for data-intensive real-time streaming.
    • Best For: Streaming apps connecting Linux/Arduino boards.
  4. DeviceHive:

    • Focus: Feature-rich data platform.
    • Features: Supports Docker/Kubernetes deployment; runs batch analytics and ML on device data; provides Java, Python, and Go libraries.
    • Deployment: Public, private, or hybrid cloud.
  5. Google Cloud:

    • Focus: Scalability and Big Data.
    • Features: Leverages Google's global network and Big Data tools (BigQuery); infrastructure, computing power, and storage are managed by Google.

Q162, Q163, Q165. ⭐ What are the effective strategies and countermeasures to mitigate security issues in IoT environments?

[ December 2024 Q2(a), June 2024 Q2(b) | Frequency: 2 ]

Answer

To secure an IoT system, countermeasures must address specific security attributes:

1. Authentication & Authorization:

  • Strategy: Use Identity and Access Management (IAM).

  • Action: Configure strong security credentials for devices, remove default passwords, and ensure strict identification of users and devices.

2. Confidentiality:

  • Strategy: Encryption.

  • Action: Implement appropriate encryption mechanisms for data both in storage and during transmission. This ensures only authorized users can access the data, even if intercepted.

3. Data Integrity:

  • Strategy: Hashing.

  • Action: Use hashing techniques to ensure data has not been tampered with during transit.

4. Non-Repudiation:

  • Strategy: Digital Signatures.

  • Action: Use digital signatures to assure the authenticity of the origin source of the data.

5. Additional Measures:

  • Regular Updates: Establish a secure update mechanism to patch vulnerabilities (firmware updates).

  • Physical Hardening: Protect devices from physical tampering.
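The hashing and signature measures above can be sketched with Python's standard `hmac` module; the key and message are illustrative, and note the caveat in the comments — a shared-key MAC gives integrity and sender authentication, while true non-repudiation requires asymmetric digital signatures:

```python
import hashlib
import hmac

SHARED_KEY = b"device-provisioned-secret"  # hypothetical per-device key

def tag_for(message: bytes) -> str:
    """Keyed hash (HMAC-SHA256): proves the data is unaltered and that the
    sender holds the key. Non-repudiation would need asymmetric signatures,
    since here both parties share the same key."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(tag_for(message), tag)

msg = b"valve=open"
tag = tag_for(msg)
assert verify(msg, tag)                 # untampered, from a key holder
assert not verify(b"valve=shut", tag)   # altered in transit -> rejected
```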


Q164, Q166-Q168. 📌 Write short notes on IoT Security Challenges and Threats.

[ December 2024 Q5(i), June 2024 Q2(b), June 2024 Q5(c), December 2023 Q5(c) | Frequency: 4 ]

Answer

IoT security challenges arise because IoT devices are diverse, deployed on a massive scale, and often physically accessible. Threats occur at three layers:

1. Perception Layer (Sensors) Threats:

  • Hardware Jamming: Attackers damage the node by replacing hardware parts.

  • Forged Nodes: Inserting a malicious node into the network to gain control.

  • Brute Force: Since sensors have weak computational power, they are vulnerable to brute force attacks on their access control.

2. Gateway Layer (Network) Threats:

  • Denial of Service (DoS): Flooding the gateway to stop services.

  • Man-in-the-Middle (MITM): Intercepting the communication channel between nodes to steal sensitive information.

  • Session Hijacking: Attackers hijack the session to gain network access.

3. Cloud Layer Threats:

  • Data Security: Risk of data breaches while processing or storing massive IoT data on the cloud.

  • Application Attacks: Attackers manipulating application layer protocols (e.g., via web services) to access the IoT network.

  • Virtual Machine Attacks: Breaches in the cloud VMs hosting the IoT backend.

UNIT 11: FOG COMPUTING AND EDGE COMPUTING

Q169. 🔥 What is Fog Computing?

[ June 2025 Q3(b), June 2023 Q1(d) | Frequency: 2 ]

Answer

Definition: Fog Computing is a decentralized computing infrastructure in which data, compute, storage, and applications are distributed in the most logical, efficient place between the data source and the cloud.

Origin: Introduced by Cisco in 2014, it extends cloud computing to the edge of an enterprise's network. It acts as a bridge between end-user devices (IoT nodes) and centralized cloud data centers.

Key Concept: Just as "fog" is a cloud close to the ground, Fog Computing brings the advantages of the cloud closer to where data is generated.


Q170. 📌 Explain the working of Fog Computing along with a use case.

[ June 2023 Q1(d) | Frequency: 1 ]

Answer

Working Principle:

  1. Ingestion: IoT devices (sensors) generate massive amounts of data.

  2. Local Processing (Fog Layer): Instead of sending everything to the cloud, data is routed to local Fog Nodes (routers, gateways).

  3. Filtration: The Fog Node analyzes the data in real-time.

    • Time-sensitive data (e.g., emergency alerts) is processed locally for immediate action.
    • Non-critical data (e.g., historical logs) is aggregated and sent to the Cloud.
  4. Cloud Storage: The centralized cloud receives only relevant summary data for long-term storage and deep analytics.

Use Case: Smart Traffic Control:

  • Scenario: Traffic lights equipped with sensors detect an ambulance approaching.

  • Fog Action: The local Fog Node processes this data instantly and turns the light green to let the ambulance pass.

  • Cloud Action: Traffic density data is sent to the cloud to analyze long-term traffic patterns for city planning.
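The filtration step above can be sketched as a function a fog node might run; the event fields and the summary format are assumptions for illustration:

```python
def fog_node(events):
    """Split a stream into locally handled (time-sensitive) actions and a
    compact summary forwarded to the cloud, as described in the steps above."""
    local_actions, forwarded = [], 0
    for event in events:
        if event["urgent"]:
            # Time-sensitive: act at the fog layer for low latency.
            local_actions.append(f"act-now:{event['type']}")
        else:
            # Non-critical: aggregate; only a count travels upstream.
            forwarded += 1
    return local_actions, {"non_urgent_events": forwarded}

events = [
    {"type": "ambulance-approaching", "urgent": True},
    {"type": "traffic-count", "urgent": False},
    {"type": "traffic-count", "urgent": False},
]
actions, summary = fog_node(events)
assert actions == ["act-now:ambulance-approaching"]
assert summary == {"non_urgent_events": 2}
```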


Q171. 📌 Mention advantages of Fog Computing.

[ June 2023 Q1(d) | Frequency: 1 ]

Answer
  1. Low Latency: Processing data closer to the source reduces the round-trip time required to send data to the cloud, essential for real-time applications.

  2. Bandwidth Optimization: Only filtered/aggregated data is sent to the cloud, significantly reducing the bandwidth required for transmission.

  3. Enhanced Security: Sensitive data can be processed locally without traversing the public internet.

  4. Offline Operations: Fog nodes can operate autonomously, allowing systems to function even if the internet connection to the cloud is lost.


Q172-Q176. 📌 What are the key challenges in Fog Computing?

[ June 2024 Q2(a) | Frequency: 1 ]

Answer

Fog computing faces several implementation challenges:

  1. Security and Privacy: Fog nodes are physically located at the edge (e.g., street corners), making them vulnerable to physical tampering and "Man-in-the-Middle" attacks. Trust and authentication between decentralized nodes are difficult to maintain.

  2. Complexity: Managing a distributed network of heterogeneous devices (different hardware/OS) is far more complex than managing a centralized cloud data center.

  3. Power Consumption: Fog nodes (like gateways) are often battery-powered or resource-constrained. High computational loads can drain power quickly.

  4. Interoperability: Diverse devices from different vendors must communicate seamlessly. Lack of standardized protocols makes integration difficult.

  5. Data Management: Ensuring consistency when data is distributed across multiple fog nodes is challenging.


Q177. ⭐ Write short note on Applications of Fog Computing.

[ Dec 2023 Q5(d), Dec 2022 Q5(a) | Frequency: 2 ]

Answer
  1. Smart Cities: Managing traffic lights, waste management, and street lighting by processing sensor data locally to react to environmental changes immediately.

  2. Smart Grids: Monitoring energy load and switching power sources (e.g., solar to grid) in real-time based on demand, preventing outages.

  3. Connected Vehicles: Vehicle-to-Vehicle (V2V) communication where cars exchange speed and braking data to prevent accidents. Latency here is critical.

  4. Video Surveillance: Analyzing video feeds locally to detect intruders or anomalies rather than streaming terabytes of footage to the cloud.


Q178-Q181. 🔥 Define Edge Computing and explain its working.

[ June 2025 Q1(c), Dec 2023 Q1(d), June 2024 Q1(d), June 2023 Q2(a), Dec 2022 Q1(d) | Frequency: 5 ]

Answer

Definition: Edge Computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data (IoT devices, sensors), so that data is processed near where it is generated rather than in a distant data center.

Working: Edge computing pushes the "intelligence" (processing capability) from the cloud to the Edge Devices themselves (e.g., smart cameras, industrial robots) or servers located on the premises.

  • Data is processed on the device where it is generated.

  • Only essential insights or anomalies are sent to the cloud.

  • This allows for near-zero latency responses.

Diagram: (PlantUML figure omitted — data processed on the edge device itself, with only insights/anomalies forwarded to the cloud)

Q182-Q183. ⭐ How does Edge Computing differ from Cloud and Fog Computing?

[ June 2024 Q1(d) | Frequency: 1 ]

Answer
| Feature | Cloud Computing | Fog Computing | Edge Computing |
| --- | --- | --- | --- |
| Location | Centralized data centers (remote) | LAN/WAN (intermediate nodes like routers/gateways) | At the source (embedded in the device itself) |
| Latency | High (round trip to the internet required) | Medium (hop to a local gateway required) | Low/real-time (processed on-site) |
| Processing Power | Unlimited (massive servers) | Moderate (gateways/routers) | Limited (embedded processors) |
| Data Scope | Global/long-term analytics | Regional/local aggregation | Single device/immediate action |
  • Edge vs. Fog: Fog is infrastructure-centric (processing happens in the network nodes), while Edge is thing-centric (processing happens on the device/sensor side).

Q190-Q191. 📌 Draw a block diagram of Cloud-Fog-Edge collaboration and explain the layers.

[ Dec 2022 Q1(d) | Frequency: 1 ]

Answer

In a collaborative architecture, the three layers work together to balance load and capabilities.

  1. Edge Layer (Device Edge):

    • Role: Sensing and Actuation.
    • Action: Collects data and performs immediate, simple actions (e.g., a thermostat turning off heat).
    • Offloads: Sends raw data to the Fog Layer if processing capability is insufficient.
  2. Fog Layer (Local Edge):

    • Role: Aggregation and Near-Real-time Processing.
    • Action: Filters noise, compresses data, and handles local coordination between multiple edge devices.
    • Offloads: Sends aggregated summaries to the Cloud.
  3. Cloud Layer:

    • Role: Big Data and Long-term Storage.
    • Action: Trains machine learning models using massive historical datasets and pushes updated models back down to Fog/Edge nodes.
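The three layers above can be sketched as a simple pipeline, with one function per layer. (All names, the thermostat setpoint, and the sample values are hypothetical placeholders showing where each responsibility lives, not a real architecture API.)

```python
# Sketch of Cloud-Fog-Edge collaboration; every function is a placeholder.

def edge_layer(sensor_value, setpoint=25.0):
    """Edge: sense and actuate immediately (e.g., thermostat logic)."""
    action = "heat_off" if sensor_value >= setpoint else "heat_on"
    return {"value": sensor_value, "action": action}

def fog_layer(edge_reports):
    """Fog: aggregate readings from many edge devices into a summary."""
    values = [r["value"] for r in edge_reports]
    return {"count": len(values), "avg": sum(values) / len(values)}

def cloud_layer(summary):
    """Cloud: long-term storage and analytics on aggregated data."""
    return f"stored summary of {summary['count']} devices (avg={summary['avg']:.1f})"

reports = [edge_layer(v) for v in (24.1, 26.3, 25.8)]  # device edge
summary = fog_layer(reports)                            # local edge (fog)
print(cloud_layer(summary))                             # central cloud
```

The offloading direction matches the description above: raw values stay at the edge, the fog forwards only an aggregate, and the cloud sees one compact summary per reporting interval.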


UNIT 12: IoT CASE STUDIES

Q192-Q193. 📌 What are the key features of a Smart Grid? How do they contribute to efficiency?

[ Dec 2024 Q2(b) | Frequency: 1 ]

Answer

Smart Grid Definition: A Smart Grid is an electrical grid that incorporates a variety of operational and energy measures, including smart meters, smart appliances, renewable energy resources, and energy-efficient resources.

Key Features:

  1. Load Handling: Automatically advises consumers to change usage patterns during peak load times to balance the grid.

  2. Demand Response Support: Helps consumers reduce bills by using low-priority devices (like washing machines) when electricity rates are lower.

  3. Decentralization of Power Generation: Facilitates the integration of distributed renewable energy sources like residential solar panels into the main grid.
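The Demand Response feature above can be sketched as a simple scheduling decision: a low-priority appliance runs only when the tariff falls within a budget. (The hourly rates and the budget value are invented illustrative data.)

```python
# Sketch of demand-response scheduling; rates per kWh are made-up data.

hourly_rate = {0: 3.0, 6: 5.5, 12: 8.0, 18: 9.5, 22: 4.0}  # hour -> price

def cheapest_start_hour(rates):
    """Pick the hour with the lowest electricity rate."""
    return min(rates, key=rates.get)

def should_run_now(hour, rates, budget=5.0):
    """Smart appliance decision: run only if the current rate fits the budget."""
    return rates.get(hour, float("inf")) <= budget

print(cheapest_start_hour(hourly_rate))  # off-peak hour with the lowest tariff
print(should_run_now(18, hourly_rate))   # peak hour: defer the washing machine
print(should_run_now(0, hourly_rate))    # off-peak: start the load now
```

Shifting flexible loads to low-rate hours is exactly the load-balancing behaviour that reduces both consumer bills and peak demand on the grid.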

Contribution to Efficiency:

  • Smarter Energy Use: Automatically adjusts supply based on time of day and real-time demand, and detects equipment failure instantly.

  • Resilience: Detects energy spikes and reroutes power to prevent outages or speed up recovery (Self-healing).

  • Two-way Communication: Allows real-time data exchange between the utility and the consumer, optimizing distribution.


Q194-Q198. 📌 Discuss IoT in Smart Transportation (Applications, Challenges, Efficiency, Safety).

[ June 2024 Q3(a) | Frequency: 1 ]

Answer

Applications:

  1. Efficient Traffic Management: Uses CCTV and sensors to monitor traffic volume and adjust traffic lights automatically to reduce congestion.

  2. Automated Toll and Ticketing: RFID tags allow vehicles to pass through tolls without stopping, reducing queues and fuel wastage.

  3. Connected Cars (Self-Driving): Vehicles use LiDAR, radar, and GPS to sense the environment and navigate without human intervention.

  4. Fleet Management: Real-time tracking of vehicle location and driver behavior (speeding, idling) to optimize routes and fuel consumption.
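Application 1 above (sensor-driven traffic management) can be sketched as adaptive signal timing: the green phase scales with the queue length reported by roadside sensors, clamped to safe limits. (The vehicle counts, limits, and the seconds-per-vehicle factor are illustrative assumptions.)

```python
# Sketch of adaptive traffic-light timing driven by sensor counts.
# All constants are invented for illustration.

MIN_GREEN, MAX_GREEN = 10, 60  # seconds: safety floor and fairness ceiling

def green_time(vehicle_count, seconds_per_vehicle=2):
    """Scale the green-light duration with the queue length from sensors."""
    duration = vehicle_count * seconds_per_vehicle
    return max(MIN_GREEN, min(MAX_GREEN, duration))

for lane, count in {"north": 4, "east": 25, "south": 50}.items():
    print(f"{lane}: {count} vehicles -> {green_time(count)}s green")
```

Clamping between a minimum and maximum keeps a nearly empty lane from being skipped entirely while preventing one congested lane from starving the others.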

Contribution to Safety:

  • V2X Communication: Vehicles communicate with infrastructure (V2I) and other vehicles (V2V) to receive warnings about accidents, weather, or speed limits, reducing collisions.

  • Emergency Response: Automatic SOS calls (eCall) are triggered in case of an accident, providing exact location details to first responders.

Contribution to Sustainability:

  • Reduced Idling: Optimized traffic flows mean cars spend less time idling at red lights, lowering $CO_2$ emissions.

  • Smart Parking: Apps guide drivers directly to open spots, eliminating the fuel burned while "cruising" for parking.