a. Cloud management encompasses not only cloud-based but also on-premises resource management.
b. Cloud resource management necessitates new technologies.
c. On-premises resource management allows suppliers to leverage well-known network management technologies.
d. All of these

2. How many categories must be checked for cloud computing as a whole?
a. 1
b. 2
c. 4
d. None of these

3. Combining various non-virtualized resources into virtualized resources based on the requirements is called
a. Resource Bundling
b. Resource Adaptation
c. Resource Brokering
d. Resource Prediction

4. Resource pricing will be done based on cloud resource ________
a. Assignment
b. Utilization
c. Adaptation
d. Prediction

5. Storage as a Service is provided by cloud computing using __________
a. Storage utilities (STaaS)
b. Communication utility (NaaS)
c. Power utility
d. Security utility

Answers
1-d, 2-c, 3-a, 4-b, 5-a

9.8 REFERENCES

Reference books
Parikh, Swapnil M., Patel, Narendra M., & Prajapati, Harshadkumar B. Resource Management in Cloud Computing: Classification and Taxonomy.

Websites:
https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/n-tier/n-tier-sql-server
https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration
UNIT 10 - VIRTUALIZATION

STRUCTURE
10.0 Learning Objectives
10.1 Introduction
10.2 Concept of Virtualization
10.3 Taxonomy of Virtualization Techniques
10.4 Hypervisor
10.5 Pros and Cons of Virtualization
10.6 Virtualization in Cloud Computing Defined
10.7 Virtual Machine Provisioning and Lifecycle
10.8 Load Balancing
10.9 Introduction to Cloud Computing Architecture
10.10 On-demand Computing
10.11 Summary
10.12 Keywords
10.13 Learning Activity
10.14 Unit End Questions
10.15 References

10.0 LEARNING OBJECTIVES
After studying this unit students will be able to:
Evaluate the concept of virtualization and its taxonomy of techniques
Analyze the pros and cons of virtualization
Evaluate the VM provisioning life cycle
Analyze the concepts of on-demand computing and load balancing

10.1 INTRODUCTION
Virtualization is one of the most important aspects of cloud computing, particularly for infrastructure-based services. Virtualization allows users to create a safe, configurable, and isolated execution environment for running applications, even when they are untrusted, without compromising the applications of other users. The foundation of this technology is the ability of a computer program—or a combination of software and hardware—to emulate an execution environment separate from the one that hosts such programs. We can, for example, run a Windows OS on top of a virtual machine that is itself running on Linux. Virtualization is an effective way to build elastically scalable systems that can deliver additional functionality at low cost. As a result, virtualization is frequently employed to provide on-demand, configurable computing environments. This chapter covers the fundamentals of virtualization, its evolution, and the models and technologies used in cloud computing environments.

10.2 CONCEPT OF VIRTUALIZATION
Virtualization is a broad term that encompasses a variety of technologies and concepts aimed at providing an abstract environment for running programs, whether virtual hardware or an operating system. Virtualization is commonly used interchangeably with hardware virtualization, which is critical for delivering Infrastructure-as-a-Service (IaaS) solutions in cloud computing. Virtualization has been investigated and used intermittently since its beginnings, but there has been a constant and growing tendency to exploit this technology in recent years. The convergence of several phenomena has recently reignited interest in virtualization technologies:

Improved computational power and performance: Today's average desktop PC is powerful enough to handle nearly all of a user's computing needs, with excess capacity that is rarely used. Almost every one of these PCs has enough resources to run a virtual machine manager or to run a virtual machine with acceptable performance. The same consideration extends to the high-end market, where supercomputers can provide massive processing capability, allowing hundreds or thousands of virtual machines to run simultaneously.

Underutilized hardware and system resources: Underutilization of hardware and software occurs as a result of (1) improved performance and computational capacity and (2) limited or intermittent resource use. Today's computers have become so powerful that most applications and systems use only a fraction of their total capacity. Furthermore, many computers in an enterprise's IT infrastructure are only partially utilized, despite the fact that they are kept running without interruption 24 hours a day, 7 days a week, 365 days a year. Desktop PCs that are primarily used for office automation tasks by administrative workers, for example, are busy during working hours and are left unattended overnight. Using these capacities for other purposes after hours could help the IT infrastructure run more efficiently. To deliver such a service in a transparent manner, it would be necessary to construct a completely separate environment, which can be accomplished through virtualization.
Lack of space: The constant need for more capacity, whether in terms of storage or computational power, causes data centers to expand rapidly. Google and Microsoft expand their infrastructures by constructing data centers the size of football fields that can house thousands of nodes. Although this is feasible for IT giants, most businesses cannot afford to build a new data center to meet increased resource demand. This situation, combined with underutilization of resources, has prompted the spread of a strategy known as server consolidation, which relies heavily on virtualization technology.

Greening initiatives: Companies have recently become more interested in finding ways to lower their energy use and carbon footprint. Data centers are among the largest power consumers, and they contribute significantly to a company's environmental impact. Maintaining a data center operation requires not just keeping servers running but also keeping them cool, which consumes a significant amount of energy. A data center's carbon footprint is heavily influenced by its cooling infrastructure. As a result, lowering the number of servers via server consolidation will significantly reduce the data center's cooling and power consumption. Virtualization technology can be used to consolidate servers in a cost-effective manner.

Increasing administrative costs: Power and cooling costs have now surpassed the cost of the IT equipment itself. Furthermore, the increased demand for additional capacity, which translates into more servers in a data center, has resulted in a large increase in administrative costs. Computers, especially servers, do not run on their own; they require the care and attention of system administrators. Hardware monitoring, defective hardware replacement, server setup and updates, server resource monitoring, and backups are all common system management duties. These are time-consuming tasks, and the more servers that must be handled, the greater the administrative costs. Virtualization can help minimize the number of servers necessary for a given workload, lowering administrative personnel costs.

10.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES
Virtualization covers a variety of emulation techniques used in different areas of computing. Classifying these techniques helps us better understand their characteristics and applications (see Figure 10.1). The first classification discriminates by the service or entity that is being emulated. Virtualization is most commonly used to emulate execution environments, storage, or networks. Of these, execution virtualization is the oldest, most widely used, and most developed, so it merits further examination and categorization. By looking at the type of host that these execution virtualization techniques require, we can divide them into two broad types. Process-level techniques are built on top of an existing operating system, which has full control of the hardware. System-level techniques are implemented directly on hardware and do not require — or require only a minimal amount of — operating system support. Within these two categories we can list several techniques that offer the guest a different type of virtual computing
environment: bare hardware, operating system resources, low-level programming languages, and application libraries.

Execution virtualization
Execution virtualization includes all techniques that aim to emulate an execution environment separate from the one hosting the virtualization layer. All of these techniques concentrate on providing support for the execution of programs, whether these are the operating system, a binary specification of a program compiled against an abstract machine model, or an application. Therefore, execution virtualization can be implemented directly on top of the hardware by the operating system, an application, or libraries dynamically or statically linked to an application image.

Figure 10.1 Taxonomy of virtualization

Machine reference model
Virtualizing an execution environment at different levels of the computing stack requires a reference model that defines the interfaces between the levels of abstraction, which hide implementation details. Virtualization techniques actually replace one of these layers and intercept the calls that are directed toward it. Therefore, a clear separation between layers simplifies their implementation, which then only requires the emulation of the interfaces and proper interaction with the underlying layer. Modern computing systems can be represented by the reference model shown in Figure 10.2. At the bottom layer, the model of the hardware is expressed in terms of the Instruction Set Architecture (ISA), which defines the instruction set of the processor, registers, memory, and interrupt management.

Figure 10.2 Machine Reference Model

ISA is the interface between hardware and software, and it is important to the operating system (OS) developer (System ISA) and to developers of applications that directly manage the underlying hardware (User ISA). The application binary interface (ABI) is the layer that separates the operating system from the applications and libraries managed by the OS. The ABI defines a format for executable programs and covers details such as low-level data types, alignment, and calling conventions, and it is also where system calls are defined. This interface allows the portability of applications and libraries across operating systems that implement the same ABI. The highest level of abstraction is represented by the application programming interface (API), which interfaces applications to libraries and/or the underlying operating system. For any operation performed at the application-level API, the ABI and ISA are responsible for making it happen: the high-level abstraction is converted into machine-level instructions that carry out the actual operations supported by the processor, and machine-level resources, such as processor registers and main memory, are used to perform the operation at the hardware level of the central processing unit (CPU). This layered approach simplifies the development and implementation of computing systems and simplifies the implementation of multitasking and the coexistence of multiple operating contexts.
Indeed, such a layered approach not only requires a limited knowledge of the entire computing stack, it also provides a way to implement a simple security model for managing and accessing shared resources. For this purpose, the instruction set exposed by the hardware has been divided into different security classes that define who can operate with them. The first distinction is between privileged and nonprivileged instructions. Nonprivileged instructions are those that do not access shared resources and can be used without interfering with other processes; this category includes, for example, all the floating-point, fixed-point, and arithmetic instructions. Privileged instructions are those that are executed under specific restrictions and are typically used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the privileged state. For example, behavior-sensitive instructions are those that operate on the I/O, whereas control-sensitive instructions alter the state of the CPU registers. Some architectures feature more than one class of privileged instructions and provide finer control over how these instructions can be accessed. A possible implementation, for example, features a hierarchy of privileges (see Figure 10.3).

Figure 10.3 Privilege hierarchy

Ring 0, Ring 1, Ring 2, and Ring 3 are ring-based security levels, with Ring 0 being the most privileged and Ring 3 the least privileged. Ring 0 is used by the kernel of the operating system, Rings 1 and 2 are used by OS-level services, and Ring 3 is used by the user. Recent systems support only two levels: Ring 0 for supervisor mode and Ring 3 for user mode.

10.4 VIRTUALIZATION AT THE INFRASTRUCTURE LEVEL

Hardware-level virtualization
Virtualization at the hardware level is a technique that provides an abstract execution environment, in terms of computer hardware, on top of which a virtual machine can run. In this model the guest is represented by the operating system, the host by the physical computer hardware, the virtual machine by its emulation, and the virtual machine manager by the hypervisor (see Figure 10.4). The hypervisor is a program, or a combination of software and hardware, that allows the underlying physical hardware to be abstracted. Because it exposes an ISA to virtual machines, which is a representation of a system's hardware interface, hardware-level virtualization is also known as system virtualization. This distinguishes it from process virtual machines, which expose an ABI to virtual machines.

Figure 10.4 Hardware Level Virtualization

The hypervisor, or virtual machine manager (VMM), is the key component of hardware virtualization. It establishes a hardware environment in which guests can run their own operating systems.

Hardware virtualization techniques

Hardware-assisted virtualization: This is a scenario in which the hardware provides support for building a virtual machine manager capable of running a guest operating system in complete isolation. The IBM System/370 was the first system to use this technology.
Current examples of hardware-assisted virtualization are the extensions to the x86-64-bit architecture introduced with Intel VT (previously known as Vanderpool) and AMD-V (previously known as Pacifica). These extensions, which differ between the two vendors, are designed to reduce the performance penalty associated with hypervisors that emulate x86 hardware. Before hardware-assisted virtualization became available, software emulation of x86 hardware was extremely inefficient in terms of performance. The reason is that the x86 architecture did not meet the formal requirements introduced by Popek and Goldberg, so early products relied on binary translation to trap certain sensitive instructions and provide an emulated version of them. This technique was used in products such as VMware Virtual Platform, introduced in 1999 by VMware, a pioneer of x86 virtualization. After 2006, Intel and AMD introduced processor extensions that were adopted by a variety of virtualization technologies, including KVM, VirtualBox, Xen, VMware, Hyper-V, Sun xVM, Parallels, and many others.

Full virtualization refers to the ability to run a program, most commonly an operating system, directly on top of a virtual machine without any modification, as if it were running on raw hardware. To make this possible, virtual machine managers must provide a complete emulation of the entire underlying hardware. The main advantage of full virtualization is complete isolation, which improves security, allows easy emulation of different architectures, and lets multiple systems coexist on the same platform. While full virtualization is a desirable goal for many virtualization solutions, it poses important challenges in terms of performance and technical implementation. A key challenge is the interception of privileged instructions, such as I/O instructions: because they change the state of the resources exposed by the host, they must be contained within the virtual machine manager. A simple solution to achieve full virtualization is to provide a virtual environment for all instructions, thus imposing some performance penalty. An efficient implementation of full virtualization uses a combination of hardware and software, preventing potentially harmful instructions from being executed directly on the host. This is what hardware-assisted virtualization makes possible.

Paravirtualization is a non-transparent virtualization approach that allows implementing thin virtual machine managers. Because paravirtualization exposes a software interface to the virtual machine that is slightly different from that of the host, guests need to be modified. The aim of paravirtualization is to allow performance-critical operations to be executed directly on the host, avoiding the performance losses that would otherwise occur with managed execution. This simplifies the implementation of virtual machine managers, which merely have to transfer the execution of these difficult-to-virtualize operations to the host. To take advantage of this opportunity, guest operating systems need to be modified and explicitly ported by remapping performance-critical operations through the virtual machine software interface. This is only possible when the source code of the operating system is available, which is why paravirtualization has mostly been explored in the open-source and academic communities.
While this technique was originally implemented in the IBM VM operating system family, the term paravirtualization was introduced in the literature by the Denali project at the University of Washington. Xen has leveraged this technique to provide virtualization solutions for operating systems, such as Linux, that have been explicitly ported to run on Xen hypervisors. Operating systems that cannot be ported can still take advantage of paravirtualization by using ad hoc device drivers that remap the execution of critical instructions to the paravirtualization APIs exposed by the hypervisor. This is how Xen allows Windows-based operating systems to run on x86 architectures. Paravirtualization is also used by VMware, Parallels, and several solutions for embedded and real-time environments such as TRANGO, Wind River, and XtratuM.

Partial virtualization provides only a partial emulation of the underlying hardware, so the guest operating system cannot run in complete isolation. Partial virtualization allows many applications to run transparently, but it does not support all the features of the operating system, as full virtualization does. An example of partial virtualization is address space virtualization, used in time-sharing systems: it allows multiple applications or users to run concurrently in separate memory spaces while still sharing the same hardware resources (disk, processor, and network). Historically, partial virtualization was an important milestone on the path to full virtualization, and it was first implemented on the IBM M44/44X. Address space virtualization is a common feature of modern operating systems.

Programming language-level virtualization
Virtualization at the programming language level is mainly used to support application deployment, managed execution, and portability across platforms and operating systems. It consists of a virtual machine that executes the byte code of a program generated during the compilation process. This technique was originally adopted by compilers to produce a binary format representing the machine code of an abstract architecture, whose characteristics differ from one implementation to the next. In general, these virtual machines simplify the underlying hardware instruction set and provide a set of high-level instructions that map some of the features of the languages designed for them. At runtime, the byte code can be either interpreted or compiled on the fly—jitted—against the underlying hardware instruction set. The main advantage of programming-level virtual machines, also called process virtual machines, is the ability to provide a uniform execution environment across different platforms. Programs compiled into byte code can be run on any operating system or platform for which a virtual machine able to execute that code is available. From the standpoint of the development life cycle, this simplifies development and deployment by eliminating the need to deliver several versions of the same code. The virtual
machine implementation for many platforms still is a costly operation, but it is done only once and not for each application. Process virtual machines also give you more control over program execution because they don't provide you direct access to a memory. Another benefit of managed programming languages is security; by filtering I/O activities, the process virtual machine may readily support application sandboxing. Both Java and.NET, for example, provide a foundation for pluggable security rules and code access security frameworks. All of these benefits come at a cost: performance. When compared to languages compiled against real architecture, virtual machine programming languages often perform worse. This performance gap is shrinking, and the tremendous computing power available on typical CPUs is making it even smaller. Storage Virtualization: A virtual storage system manages a group of servers known as storage virtualization. The servers seem more like worker drones in a hive in that they are unaware of where their data is stored. It allows you to manage and use storage from numerous sources as if it were a single repository. Despite modifications, breakdowns, and variances in the underlying equipment, storage virtualization software ensures smooth operations, constant performance, and a continuing array of sophisticated functions. Server virtualization is a type of virtualization that involves the masking of server resources. By altering the identity number and processors, the central-server (physical server) is split into several virtual servers. As a result, each machine can run its own os in isolation. Where each sub-server is aware of the primary server's identity. By deploying main server resources into a semi resource, it improves speed while also lowering running costs. It aids virtual migration, reduces energy usage, and lowers infrastructure costs, among other things. Desktop virtualization allows users' operating systems to be stored remotely on a server inside the data center. It allows users to virtually access their desktop from any machine, at any location. Virtual desktops are required for users who want to run operating systems apart from Windows Server. User mobility, portability, and easy management of software installation, updates, and patches are the main advantages of desktop virtualization. Virtualization of Networks: The capacity to manage several virtual networks, each with its own data and management plan. It coexists on top of a single physical network. It can be handled by individuals who may or may not be aware of each other's identities. Within days or even weeks, network virtualization allows you to construct and provision virtual networks, including logical switches, routers, firewalls, load balancers, VPNs, and workload security. 162 CU IDOL SELF LEARNING MATERIAL (SLM)
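Before moving on to CPU virtualization, it can help to see how the hardware-assisted virtualization support discussed earlier in this section shows up on a real machine. The short Python sketch below is only an illustration, not part of any virtualization product: it assumes a Linux host, where Intel VT-x is advertised through the vmx CPU flag and AMD-V through the svm flag in /proc/cpuinfo.

def hardware_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    """Report which hardware-assisted virtualization extension the CPU advertises, if any."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.lower().startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags:
                        return "Intel VT-x (vmx flag present)"
                    if "svm" in flags:
                        return "AMD-V (svm flag present)"
                    return None
    except OSError:
        return None  # not a Linux host, or /proc is unavailable
    return None

if __name__ == "__main__":
    print(hardware_virtualization_support() or
          "No hardware-assisted virtualization extension advertised")

If neither flag is present (or the extensions are disabled in the firmware), a hypervisor on that host has to fall back on slower techniques such as binary translation or paravirtualization.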
10.5 CPU VIRTUALIZATION
CPU virtualization emphasizes the use of a virtual machine to run applications and instructions, giving the impression of working on a physical desktop. All of the operations are managed by an emulator, which directs the software to follow its instructions. CPU virtualization, however, is not just emulation. An emulator works in the same way as a real computer: it behaves like a physical machine, replicating the same data and producing the same results. The emulation function provides excellent portability and allows you to work on a single platform while acting as though you were working on several systems. With CPU virtualization, all virtual machines behave like physical machines, and the hosting resources are distributed among them as if each had several virtual CPUs. When the hosting services receive requests, the physical resources are shared among the virtual machines: each virtual machine is allotted a portion of the single CPU, so that a single processor can act, for example, as a dual processor.

Virtualization of applications: Application virtualization allows a user to access a server-based application from a remote location. The server keeps all of the application's personal data and other characteristics, yet the application can still be run on the local workstation over the Internet. An example is a user who has to run two different versions of the same software. Hosted applications and packaged applications are examples of technologies that leverage application virtualization.

10.4 A DISCUSSION ON HYPERVISORS
A hypervisor is a type of virtualization software that divides and allocates resources across multiple pieces of hardware in cloud hosting. A virtualization hypervisor is a program that enables segmentation, isolation, or abstraction. The hypervisor is a hardware virtualization technique that enables multiple guest operating systems (OS) to run simultaneously on a single host system. A hypervisor is also known as a virtual machine manager (VMM). Hypervisors come in a variety of shapes and sizes.

TYPE-1 Hypervisor: This hypervisor runs directly on top of the underlying host hardware. It is also known as a "Bare Metal Hypervisor" or "Native Hypervisor." It does not require any host operating system to be installed and has direct access to the hardware resources. VMware ESXi, Citrix XenServer, and Microsoft Hyper-V are examples of Type 1 hypervisors.

Type-1 Hypervisor Benefits and Drawbacks:
Pros: Because they have direct access to the physical hardware resources, such hypervisors are incredibly efficient (like CPU, Memory, Network, Physical storage). This increases security because no third-party resources are available, making it impossible for an attacker to compromise anything. Cons: One issue with Type-1 hypervisors is that they often require a dedicated separate computer to run them and to train multiple VMs as well as control host hardware resources. Hypervisor type 2: The underlying host system runs a Host operating system. It's also referred to as a \"Hosted Hypervisor.\" Hypervisors of this type do not run directly on the underlying hardware, but rather as an application in a Host system (physical machine). Software that is installed on a computer's operating system. The operating system is asked to perform hardware calls by the hypervisor. VMware Player or Parallels Desktop are examples of Type 2 hypervisors. Endpoints such as PCs are frequently encountered with hosted hypervisors. Engineers and security analysts will benefit greatly from the type-2 hypervisor (for checking malware, or malicious source code and newly developed applications). Type-2 Hypervisor Benefits and Drawbacks: Pros: Hypervisors of this type provide rapid and easy access to a guest operating system while the host machine is functioning. These hypervisors frequently include additional functionality that are beneficial to guest machines. These technologies make it easier for the host and guest machines to work together. Cons: Because these hypervisors do not have direct access to physical hardware resources, their efficiency lags behind that of type-1 hypervisors, and there are potential security risks as well. If an attacker gains access to the host operating system, he can also gain access to the guest operating system. Selecting the Most Appropriate Hypervisor: Because there is no intermediary layer, Type 1 hypervisors provide far superior performance than Type 2 hypervisors, making them the obvious choice for mission-critical applications and workloads. That isn't to say that hosted hypervisors aren't useful; they're considerably easier to set up, so they're a fantastic choice if you need to quickly deploy a test environment, for example. Comparing performance metrics is one of the greatest ways to figure out which hypervisor is right for you. These include CPU overhead, the maximum amount of host and visitor memory, and virtual processor support. Before selecting a good hypervisor, consider the following factors: 1. Recognize your requirements: The data center exists to serve the corporation and its applications (and your job). You (and your IT coworkers) have personal requirements in addition to the needs of your organization. 164 CU IDOL SELF LEARNING MATERIAL (SLM)
The following are the requirements for a virtualization hypervisor:
a. Flexibility
b. Convenience
c. Accessibility
d. Dependability
e. Effectiveness
f. Consistent support

2. Hypervisor cost: For many purchasers, the most difficult aspect of selecting a hypervisor is striking the right balance between price and capability. While many entry-level solutions are free or nearly free, the costs at the other end of the market can be startling. Licensing frameworks differ as well, so make sure you know exactly what you are getting for your money.

3. Virtual machine performance: Virtual machines should match or outperform their physical equivalents, at least with respect to the applications running on each server. Everything above and beyond this criterion is profit.

4. Ecosystem: It is easy to overlook the role a hypervisor's ecosystem plays in determining whether a solution is cost-effective in the long run – that is, the availability of documentation, support, training, third-party developers and consultancies, and so on.

5. Put it to the test: You can get started with your current desktop or laptop. To create a good virtual learning and testing environment, you can use VMware Workstation or VMware Fusion to run VMware vSphere or Microsoft Hyper-V.

Hypervisor Reference Model:
To simulate the underlying hardware, there are three main modules that coordinate their activity:

Dispatcher:
The dispatcher acts as the monitor's entry point and reroutes the instructions issued by the virtual machine instance to one of the other two modules.

Allocator:
The allocator is in charge of deciding which system resources are to be made available to the virtual machine instance. The allocator is invoked by the dispatcher whenever a virtual machine tries to execute an instruction that results in changing the machine resources associated with the virtual machine.

Interpreter:
The interpreter module consists of interpreter routines, which are executed whenever the virtual machine executes a privileged instruction.

Figure 10.5 Hypervisor Reference Model

10.5 PROS AND CONS OF VIRTUALIZATION
Virtualization has grown in popularity and use, particularly in cloud computing. The fundamental reason for its widespread popularity is the removal of the technological constraints that previously prevented virtualization from being a feasible and effective option. Performance was the most significant impediment. Thanks to the widespread proliferation of Internet connections and advances in computing technology, virtualization has become an attractive way to deliver on-demand IT services and infrastructure. Despite its renewed popularity, this technology has both advantages and disadvantages.

Advantages:
Virtualization's most important benefits are probably managed execution and isolation. These two qualities allow secure and controllable computing environments to be built using approaches that support the creation of virtualized execution environments. A virtual execution environment can be configured as a sandbox, preventing any potentially harmful operations from crossing the borders of the virtual host. Furthermore, because the virtual host is managed by a program, allocating and partitioning resources among different guests is simpler. This enables fine-tuning of resources, which is very important in a server consolidation scenario and is also necessary for effective quality of service.

Another benefit of virtualization, especially for execution virtualization techniques, is portability. In most cases, virtual machine instances are represented by one or more files that can easily be moved between physical machines. They also tend to be self-contained, since they do not depend on anything other than the virtual machine manager in order to run. Portability and self-containment simplify their administration. Java programs are "compiled once and run everywhere," requiring only the installation of the Java virtual machine on the host. The same can be said for virtualization at the hardware level: it is possible to build our own operating system within a virtual machine instance and carry it with us everywhere, as though it were our own laptop. In a server consolidation scenario, this concept also enables migration techniques. Portability and self-containment also help reduce maintenance costs, since the number of hosts is expected to be lower than the number of virtual machine instances. Because the guest program is executed in a virtual environment, there is very limited opportunity for it to damage the underlying hardware. Moreover, fewer virtual machine administrators are expected with respect to the number of virtual machine instances managed.

Finally, virtualization allows more efficient use of resources. Multiple systems can securely coexist and share the resources of the underlying host without interfering with one another. This is a prerequisite for server consolidation, which allows the number of active physical resources to be adjusted dynamically according to the current demand on the system, thereby saving energy and reducing environmental impact.

Disadvantages:
Virtualization also has drawbacks. The most evident is a performance decrease of the guest systems as a result of the intermediation performed by the virtualization layer.

Performance degradation: Because of the abstraction layer that virtualization introduces between the guest and the host, the guest may experience increased latency. When hardware virtualization is realized through a program that is installed or executed on top of the host operating system, a major source of performance degradation is the fact that the virtual machine manager is executed and scheduled together with other applications, thus sharing the host's resources with them.

Inefficiency and a poor user experience: Virtualization can sometimes lead to inefficient use of the host. In particular, some of the host's specific features may not be exposed by the abstraction layer and therefore become inaccessible. In the case of hardware virtualization, this can happen with device drivers: the virtual machine may only provide a default graphics card that maps only a subset of the features available on the host. In programming-level virtual machines, some features of the underlying operating system may become inaccessible unless specific libraries are used.

Security holes and new threats:
Virtualization opens the door to a new and unexpected form of phishing. The ability to emulate a host in a completely transparent manner paved the way for malicious programs designed to extract sensitive information from the guest.

10.6 VIRTUALIZATION IN CLOUD COMPUTING DEFINED
Virtualization plays a significant role in cloud computing because it provides the right degree of flexibility, security, isolation, and manageability that is needed for delivering IT services on demand. Virtualization technologies are mainly used to provide configurable computing and storage environments. Network virtualization is less common and, in most cases, is a complementary feature required in building virtual computing systems. Execution virtualization techniques are particularly important for building virtual computing environments. Hardware virtualization and programming language virtualization are the two approaches most used in cloud computing systems. Hardware virtualization is a key enabler of Infrastructure-as-a-Service (IaaS) solutions, whereas programming language virtualization is a technique used in Platform-as-a-Service (PaaS) solutions. In both cases, the ability to provide a configurable and sandboxed environment constituted an attractive business opportunity for organizations with large computing infrastructures capable of sustaining and processing huge workloads. Furthermore, virtualization provides isolation and better control, which simplifies the leasing of services and their accountability on the vendor's side.

10.7 VIRTUAL MACHINE PROVISIONING AND LIFECYCLE
The cycle begins with a request to the IT department, as illustrated in Figure 10.6, explaining the need to create a new server for a particular service. The IT administration processes this request, examines the servers' resource pool, matches the available resources to the requirements, and starts provisioning the required virtual machine. Once the virtual machine has been deployed and started, it is ready to provide the required service according to an SLA, or for a period of time after which the virtual machine is released; if the resources are no longer required, they are freed and returned to the pool.
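To make the stages of this cycle concrete, here is a minimal Python sketch of the life cycle described above. The state names and transitions are illustrative only — they are not taken from any particular provisioning tool.

from enum import Enum, auto

class VMState(Enum):
    REQUESTED = auto()     # request submitted to the IT department
    PROVISIONED = auto()   # resources matched and the virtual machine created
    RUNNING = auto()       # deployed, started, and serving under its SLA
    RELEASED = auto()      # SLA or time limit expired; resources returned to the pool

# Allowed transitions in the simple life cycle described above
TRANSITIONS = {
    VMState.REQUESTED: {VMState.PROVISIONED},
    VMState.PROVISIONED: {VMState.RUNNING},
    VMState.RUNNING: {VMState.RELEASED},
    VMState.RELEASED: set(),
}

def advance(current: VMState, target: VMState) -> VMState:
    """Move the virtual machine to the next state, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target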
Figure 10.6 Virtual Machine Provisioning Life Cycle

VM Provisioning Process
Provisioning a virtual machine or server can be described as shown in Figure 10.7. The following are the stages involved in deploying a virtual server:
i) First, you must choose a server from among the eligible servers (physical servers with sufficient capacity), as well as the right OS template for the virtual machine to be provisioned.
ii) Second, you must install the necessary software (the operating system chosen in the previous step, together with device drivers, middleware, and the applications required for the service).
iii) Third, you must customize and configure the machine (for example, IP address and gateway) in order to set up the network and storage resources.
iv) At this point, the virtual server is ready to start with its newly installed software.
Usually, these are the steps that must be completed by an IT or data centre specialist in order to provision a virtual machine.
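Read as a workflow, steps i)–iv) can be strung together in a few lines of code. The Python sketch below is purely illustrative: the in-memory host list, the template table, and every field name are hypothetical placeholders standing in for whatever virtualization-management API is actually in use.

# Hypothetical stand-ins for a real resource pool and template catalogue.
HOSTS = [{"name": "host-01", "free_cpu": 16, "free_ram_gb": 64}]
TEMPLATES = {"rhel8": "rhel8-template", "win2019": "win2019-template"}

def select_host(min_cpu, min_ram_gb):
    # Step i: pick a physical server with enough spare capacity
    for host in HOSTS:
        if host["free_cpu"] >= min_cpu and host["free_ram_gb"] >= min_ram_gb:
            return host
    raise RuntimeError("no eligible host available")

def provision_vm(request):
    host = select_host(request["cpu"], request["ram_gb"])
    # Step ii: install the chosen OS template plus drivers, middleware, applications
    vm = {"host": host["name"],
          "image": TEMPLATES[request["os"]],
          "apps": request.get("apps", [])}
    # Step iii: configure network and storage resources
    vm.update(ip=request["ip"], gateway=request["gateway"], disk_gb=request["disk_gb"])
    # Step iv: the virtual server is ready to start serving
    vm["state"] = "running"
    return vm

print(provision_vm({"cpu": 4, "ram_gb": 8, "os": "rhel8",
                    "ip": "10.0.0.12", "gateway": "10.0.0.1", "disk_gb": 100}))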
Figure 10.7 Deploying a Virtual Server

To summarize, server provisioning is the process of defining a server's configuration based on the organization's requirements, covering both hardware and software components (processor, memory, storage, network, operating system, applications, and so on). Virtual machines are often provisioned by manually installing the operating system, using a prepackaged VM template, cloning an existing VM, or importing a physical or virtual server from another hosting platform. Physical servers can also be virtualized and provisioned using P2V (physical-to-virtual) tools and techniques (e.g., virt-p2v). A template can be created from a virtual machine obtained by virtualizing a physical server, or by building a new virtual server within the virtual environment. Most virtualization management solutions (VMware, XenServer, and others) make it simple for data centre administrators to perform such operations. Provisioning from a template is a time-saving option that reduces the time it takes to create a new virtual machine. Administrators can create different templates for different purposes. For example, you may develop a Windows 2003 Server template for the finance department and a Red Hat Linux template for the engineering department. This allows the administrator to create a properly configured virtual server on request. With this convenience and flexibility comes the challenge of virtual machine sprawl, in which virtual machines are provisioned so quickly that documenting and managing their life cycles becomes difficult.

10.8 LOAD BALANCING
Cloud load balancing is the process of distributing workloads and computing resources in a cloud computing environment. It allows companies to manage workloads and application demands by distributing resources across multiple computers, networks, and servers. Cloud load balancing manages the flow of workload traffic and demands that arrive over the Internet. Internet traffic is growing rapidly, and as a result the load on servers is increasing quickly, leading to server overload, particularly for popular web servers. There are two basic solutions to the problem of server overloading:

The first is a single-server solution, in which the server is upgraded to a more powerful one. However, the upgraded server may soon become overloaded again, requiring yet another upgrade; moreover, the upgrade process is time-consuming and costly. The second is a multi-server solution, which consists of building a scalable system on a cluster of servers. A server-cluster approach is therefore a much more cost-effective and scalable way to build network services.

Almost any form of service, such as FTP, SMTP, and POP/IMAP, benefits from load balancing. Load balancing also improves reliability by introducing redundancy. The balancing service is provided by a hardware device or a software application. Server load balancing allows cloud-based server farms to achieve better performance and reliability. Load balancers fall into two categories: software-based and hardware-based.

Software-based load balancers run on standard hardware (desktops, PCs) and standard operating systems. Hardware-based load balancers are dedicated boxes that contain Application-Specific Integrated Circuits (ASICs) tailored for this specific function. ASICs allow high-speed forwarding of network traffic and are commonly used for transport-level load balancing, since hardware-based load balancing is faster than software-based load balancing.

10.9 INTRODUCTION TO CLOUD COMPUTING ARCHITECTURE
The term architecture comes from the construction industry, where it refers to the art or practice of designing and constructing buildings. While it is commonly used to refer to an art form, it also describes how functionality is delivered through the application of common principles. In information technology, if one of the components along the access chain fails, the cloud implementation fails.

i) Multi-user or multi-tenant application design: Whereas most discussion of cloud deployment focuses on the infrastructure, multi-tenant architecture refers to the client application being designed for multi-user access. With a monolithic application design, multiple instances would have to be created, making a cloud implementation less than ideal. The multi-tenancy feature spreads costs across a large number of users and improves resource sharing, resulting in lower costs, higher peak-time capacity utilization, and greater efficiency.

ii) Pay as you go: This refers to the capability to track (or meter) the consumption of a cloud-based application in order to create a billing system.
iii) Another essential characteristic is the ability to reuse resources such as hardware and software through virtualization.

Layers of cloud architecture implementation
The implementation of a cloud architecture is made up of several layers. At the top sits the browser, which runs on both mobile and desktop devices and allows users to access cloud-based applications. The next two layers, cloud services and applications, are the ones cloud users interact with. These applications and services run on software platforms (e.g., Oracle, SAP, .NET, etc.) that make up the next tier of the cloud architecture. Further down is the infrastructure layer, which includes servers, databases, storage, and CPU. Figure 10.8 depicts these layers.

Figure 10.8 Cloud computing layers

10.10 ON-DEMAND COMPUTING
On-demand computing is a business computing model in which users can access computing resources on an "as needed" basis. Cloud hosting providers offering on-demand computing aim to provide their clients with access to computing resources as they are needed, rather than all at once.

Advantages: The on-demand computing model was created to address the common problem businesses face in meeting unpredictable, changing computing demands efficiently. Today's businesses must be agile, with the capacity to scale resources quickly and effectively in response to constantly changing market demands. Keeping enough resources on hand to satisfy peak requirements can be costly, because an enterprise's demand for computing resources can vary drastically from one period to the next. On-demand computing, in contrast, allows businesses to save money by retaining minimal computing resources until they need to expand them, while paying only for what they use.

10.11 SUMMARY
Virtualization is a broad umbrella term that encompasses a wide range of technologies and concepts. The partitioning of your hard drive into separate pieces is a good everyday illustration of how virtualization works: although you may have only one hard disc, your system perceives it as two, three, or more distinct and independent portions. In a similar vein, this technology has been around for quite some time. It began as the capacity to run different operating systems on a single hardware configuration and has since evolved into a critical component of testing and cloud-based computing.

The Virtual Machine Monitor, often known as the virtual machine manager, is a cloud computing tool that incorporates the essential fundamentals of virtualization in a single package. It is used to separate the physical hardware from its emulated counterparts. This covers the CPU, memory, I/O, and network traffic, among other things. The virtualized hardware is used to run a secondary operating system that would normally communicate with the hardware directly. The guest operating system is often unaware that it is running on virtualized hardware. Because most secondary operating systems and applications do not require the full use of the underlying hardware, the technology continues to work even when its performance is not equivalent to that of the operating system running on the "real hardware." By reducing the reliance on a particular hardware platform, additional flexibility, control, and isolation can be achieved. The ability to create the illusion of a specific environment, whether it is a runtime environment, a storage facility, a network connection, or even a remote desktop, using some kind of emulation or abstraction layer is at the heart of all forms of virtualization. All of these notions are crucial in the development of cloud computing services and infrastructure, in which hardware, IT infrastructure, software,
and services were delivered on demand over the Internet or a network connection in general. 10.12KEYWORDS ISA- An instruction set architecture is indeed an abstract model of a computer in computer science. It's also known as computer architecture or architecture. An implementation is a realization of an ISA, including a central processing unit. ABI - A binary interface among two binary programmer modules is known as an application binary interface. One of these modules is frequently a library and operating system facility, while the other is a user-run software. VMM - The hypervisor, or virtual machine manager, is a key component of hardware virtualization (VMM). Since it runs natively on hardware, this sort of hypervisor is also known as a native virtual machine. Partial virtualization - Partial virtualization is used when whole operating systems cannot execute in a virtual machine but some or several applications may. On-demand computing - On-demand computing is a business-level technology approach in which a customer can buy cloud services as and when they are needed. For example, if a customer needs more servers for the duration of a project, they can do that and then return to the old level after the project is through. 10.13LEARNING ACTIVITY 1. To make provisioning of virtual machines much faster and less error prone than provisioning of physical machines, as an IT administrator what are all the action should you perform? 2. “Virtualization has its limitations, despite its widespread acceptance today.” Comment on it. 10.14UNIT END QUESTIONS 174 A. Descriptive Questions Short Question 1. What is virtualization and what are advantages of doing so? 2. What distinguishes virtualized environments from other types of settings? CU IDOL SELF LEARNING MATERIAL (SLM)
3. Discuss virtualization classification and taxonomy at various levels. 4. Discuss the execution virtualization machine reference model. 5. What are the different types of hardware virtualization techniques? Long Question 1. Identify and discuss several types of virtualizations. 2. In the context of cloud computing, what are the advantages of virtualization? 3. What are some of the drawbacks of virtualization? 4. Discuss Hyper-architecture. V's Discuss how it can be applied to cloud computing. 5. What is load balancing and how does it work? B. Multiple Choice Questions 1. Point out the wrong statement. a. Abstraction provides cloud computing's main benefit: shared, ubiquitous access. b. When a request is made, virtualization gives a logical name to a physical resource and then offers a link to that physical resource. c. Most cloud computing programmes pool their resources, which can be assigned to users on demand. d. All of these 2. Which one of the following virtualization techniques is also a feature of cloud computing? a. Storage b. Application c. CPU d. All of these 3. The technology used to distribute service requests to resources is called to as _____________ a. load performing b. load scheduling c. load balancing d.All of these 4. Point out the correct statement. 175 CU IDOL SELF LEARNING MATERIAL (SLM)
a. A client can use a cloud service from anywhere. b. A cloud has several application instances and routes requests to the appropriate instance based on criteria. c. Computers could be partitioned into a collection of virtual machines, each with its own task. d.All of these 5. Which of the below software could be used to implement load balancing? a. Apache mod_balancer b. Apache mod_proxy_balancer c. F6’s BigIP d.All of these 6. Which of the below network resources could be load balanced? a. Connections via intelligent switches b. DNS c. Storage resources d. All of these 7. A ______ is a combination load balancer as well as application server which is a server placed among a firewall or router. a. ABC b. ACD c. ADC d.All of these 8. Which of the below should be comein the question mark for the following figure? a. Abstraction 176 b. Virtualization c. Mobility Pattern d. All of these CU IDOL SELF LEARNING MATERIAL (SLM)
9. Which Hypervisor typehas been shown in the below figure? a. Type 1 b. Type 2 c. Type 3 d. All of these 10. Point out the wrong statement. a. By translating a logical address to a physical address, load balancing virtualizes systems and resources. b. On different hosts, many instances for various Google applications were operating. c. Hardware virtualization is used by Google. d. All of these Answers 1-c, 2-d, 3-c, 4-d, 5-b, 6-d, 7-c, 8-b, 9-a, 10-c 10.15REFERENCES Reference Book: Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, “Mastering Cloud Computing” Kailash Jayaswal, Jagannath Kallakuruchi, Donald J. Houde, Dr. Devan Shah, “Cloud Computing: Black Book Cloud Computing: Principles and Paradigms, Editors: Rajkumar Buyya, James Broberg, Andrzej M. Goscinski, Wile, 2011. Websites: 177 CU IDOL SELF LEARNING MATERIAL (SLM)
https://www.geeksforgeeks.org/ https://www.atlantic.net/ 178 CU IDOL SELF LEARNING MATERIAL (SLM)
UNIT 11 -DATA MANAGEMENT STRUCTURE 11.0Learning Objective 11.1Introduction 11.2 Challenges with storing data 11.3 Data Center 11.4 Storage of data and Database 11.5Data Security in cloud 11.6 Data Privacy in cloud 11.7Summary 11.8Keywords 11.9Learning Activity 11.10 UnitEnd Questions 11.11 References 11.0LEARNINGOBJECTIVES After studying this unit students will be able to: Evaluate Microsoft azure Analyze difference between ARM and classic portal Evaluate architecture of azure and its components Create, configure, deploy and monitor the website 11.1 INTRODUCTION The velocity as well as volume of data collected at the start of the last decade plainly shown that current data management capacity of \"institutions\" is insufficient. Cloud-based data management, in turn, is assisting in the realization of the potential of large-scale data management systems by allowing for optimal resource scaling. Inside a cloud-based data management situation, organizations rent storage and processing resources to run data management applications rather of investing significant capital in infrastructure in-house. One of most important study areas within cloud computing is data management. Several cloud-based data management systems now are operational, including Google's BigTable, 179 CU IDOL SELF LEARNING MATERIAL (SLM)
Facebook's Cassandra and Hive, Streamy's HBase, Yahoo!'s PNUTS, and many others. Cloud computing has had a significant impact on data management research, and it continues to play an important role.

11.2 CHALLENGES WITH STORING DATA
Challenges: Storing data in the cloud is not an easy undertaking. In addition to its flexibility and convenience, customers encounter a number of obstacles. Customers need to be able to:
Provision additional storage space as needed.
Know and restrict the physical location of the stored data.
Verify how data was deleted.
Have access to a documented process for disposing of data storage hardware.
Have administrator-level access control over their data.

11.3 DATA CENTER
A data centre, at its most basic level, is a physical facility where businesses keep their mission-critical data and applications. Its design is based on a network of computing and storage resources that share a variety of applications and hardware in order to deliver them. In today's world, data exists and is connected across multiple data centres, the edge, and public and private clouds. All of these diverse sites, both on premises and in the cloud, should be able to connect to the data centre. The public cloud, too, is made up of data centres: whenever applications are hosted in the cloud, they use the data centre resources of the cloud provider. Data centres are involved in virtually every aspect of Internet activity; for example:
Email and file sharing are handled by the data centre.
Productivity applications rely on the data centre.
Customer relationship management (CRM) and other services are provided by the data centre.

A data center's components
Because these components store and handle business-essential data and applications, their protection is critical in data centre architecture. Routers, switches, firewalls, storage devices, servers, and application delivery controllers are just a few examples of the numerous components. Together, they define the data centre's design.
When they are joined in a meaningful way, they provide a variety of services. It could be: The network's infrastructure: This connects physical or virtualized servers, data centre facilities, memory, and external connection to end-user locations. The data center's storage infrastructure: The contemporary data center's lifeblood is data. Storage systems are used to keep this important commodity. These storage systems are massive in size. Resources for computing: The engines of a data centre are applications. These servers provide the greatest power to the application during execution by providing processor, memory, local storage, as well as network access. The quantity of space available for IT equipment is referred to as a facility. Because they have 24hr access to data, data centres are among the most energy-intensive buildings on the planet. In order to keep equipment within required temperature/humidity values, both architecture & environmental control are stressed. Data centre operations are frequently safeguarded so needed to shield the performance & integrity of a data center's main components, as well as the data, which is extremely valuable. Additionally, application resiliency and availability are ensured by automatic failover & load balancing in order to maintain data center application efficiency. Data centers in the cloud: In an off-premises data center, data and applications are hosted by a cloud services provider including such Amazon Web Services (AWS), Microsoft (Azure), IBM Cloud, or the other public cloud provider. Building and operating hybrid cloud data centers, renting space at colocation facilities (colos), using pooled compute power, or employing public cloud-based services are all options available to businesses. As a result, applications were no longer restricted to a single location. The data center has grown in scale and complexity in this multi-cloud era, with the purpose of providing the best user experience. Traditional Datacenter differ from Cloud data center Regardless of size or industry, every company needs a data centre. A Data Center is a physical facility that firms utilize to store their data and other applications that are critical to their operations. While a Data Center is commonly assumed to be one thing, in actuality, it is 181 CU IDOL SELF LEARNING MATERIAL (SLM)
typically made up of a variety of technological devices, ranging from routers and security devices to storage systems and application delivery controllers, depending on what needs to be kept. A data centre requires a substantial amount of infrastructure to keep all of this hardware and software up to date and working; ventilation and cooling systems, uninterruptible power supplies, backup power, and other equipment may be included in these facilities.

A cloud data centre is not the same as a traditional data centre; apart from the fact that they both store data, they are completely separate computing systems. A cloud data centre is not physically located in a company's office; everything is accessed online. When you put data on cloud servers, it is split and duplicated across multiple locations to enable secure storage, and in the event of a breakdown the cloud services provider ensures that a backup of your backup is available.

Cost:

With a typical data centre, you will need to purchase a variety of items, including server and networking hardware. This is not the only drawback: users will also have to replace this gear as it ages and becomes obsolete, and pay staff to oversee the operation of the equipment on top of the cost of purchasing it. Because you are essentially using someone else's technology and infrastructure when you host data on cloud servers, you save much of the money that would have been spent building a data centre of your own. Cloud hosting also takes care of a variety of maintenance-related issues, allowing you to make better use of your resources.

Accessibility:

A traditional data centre gives you more freedom in terms of equipment selection, allowing you to know precisely which software and hardware you are working with. Because nobody but you is in the equation, you may make modifications as needed, so later adaptations are easier. Accessibility can become a problem with cloud hosting: your remote data becomes inaccessible if you lose your Internet connection, which could be an issue for some. In reality, however, periods with no Internet connectivity are few and far between, so this should not be a major issue. Furthermore, if there is an issue on the backend, you may need to contact your cloud services provider, but this, too, should be remedied quickly.

Security:

Traditional data centres must be protected in the traditional manner: security personnel must be hired to guarantee that your data is safe. One benefit is that you have complete control over the data and equipment, making it somewhat safer, and only those you trust will have access to your systems. Cloud hosting, at least in theory, is riskier because your data can be reached by
anybody with an internet connection. In actuality, many cloud service providers go to great lengths to protect the security of your data: they employ skilled personnel to ensure that all necessary safeguards are in place, so that customer data is always protected.

Scalability:

Creating your own infrastructure from the ground up requires a significant amount of financial and human resources, and you will be responsible for your own maintenance and administration, among other things, which means it will take a long time to get off the ground. It is expensive to set up a typical data centre, and if you later want to expand it, you will have to pay still more. With cloud hosting, by contrast, there are no upfront expenditures on equipment, which results in cost savings that can be used to scale up later. Cloud service providers offer a variety of flexible plans to meet your needs: you can purchase additional storage as needed, and they can also reduce the capacity you have if you no longer need it.

11.4 STORAGE OF DATA AND DATABASE

Cloud storage is a service which allows users to save data on an offsite storage system that is controlled by a third party and accessed via a web services API.

Storage devices:

Storage devices can be roughly classified into two types:
i) Block storage devices
ii) File storage devices

Block storage devices: clients can access raw storage using block storage devices. The raw storage is partitioned to create volumes.

File storage devices: file storage devices provide clients with storage in the form of files, maintaining their own file system. This storage is typically provided by Network Attached Storage (NAS).

Types of cloud storage:

Cloud storage can be broadly classified into two categories:
i) Unmanaged cloud storage
ii) Managed cloud storage

Unmanaged cloud storage:
The term "unmanaged cloud storage" refers to storage that has been pre-configured for the customer. The consumer cannot format it, install their own file system, or change the properties of the drive.

Managed cloud storage:

Managed cloud storage provides on-demand online storage capacity. The user sees a managed cloud storage system as a raw disk that they can partition and format.

Putting together a cloud storage system:

A cloud storage system stores multiple copies of data on multiple servers in multiple locations. If one of these systems fails, all that is required is to update the reference to the object's storage location. The cloud provider can use storage virtualization technologies such as StorageGRID to aggregate storage assets into cloud storage systems. StorageGRID builds a virtualization layer that collects data from various storage devices and consolidates it under a single control system; it can also handle data from CIFS and NFS file systems over the Internet. The picture below illustrates how StorageGRID virtualizes storage into storage clouds:

Figure 11.1 StorageGRID virtualizes storage into storage clouds

Virtual storage containers

High-performance cloud storage systems are provided via virtual storage containers. Virtual storage containers create logical unit numbers (LUNs) for devices, files, and other objects. A virtual storage container establishing a cloud storage domain is depicted in the diagram below:
Figure 11.2 Virtual storage container

A cloud database is a database service that is built and accessed via the internet. It performs many of the same tasks as a traditional database, but with the extra benefit of cloud computing flexibility. To implement the database, users deploy software on a cloud infrastructure.

Features to look for:

A cloud-based database service that may be accessed from anywhere.
Allows business customers to host databases without having to purchase dedicated hardware.
Can be managed by the user, or offered as a paid service managed by the supplier.
Supports both relational databases (including MySQL and PostgreSQL) and non-relational databases (including MongoDB and Apache CouchDB).
Accessed via a web interface or an API provided by the vendor.

Accessibility: users can use a vendor's API or web interface to access cloud databases from nearly anywhere.

Scalability: to meet changing needs, cloud databases can grow their storage capacity in real time, and organizations pay only for the services that they actually use.

Disaster recovery: data is kept safe through copies on remote servers in the event of a natural disaster, mechanical failure, or power loss.
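Because a managed cloud database is typically reached through an ordinary database driver pointed at the vendor's endpoint, a small example may make the idea concrete. The following is a minimal sketch, assuming a vendor-managed PostgreSQL instance; the hostname, database name, and credentials are placeholders invented for the illustration, and only the psycopg2 calls themselves are standard.

```python
# Minimal sketch: connecting to a vendor-managed PostgreSQL database.
# The endpoint, database name, and credentials are placeholders that a real
# cloud provider would supply; only the psycopg2 usage is standard.
import psycopg2

conn = psycopg2.connect(
    host="example-db.abc123.cloudprovider.example",  # placeholder endpoint
    port=5432,
    dbname="appdb",
    user="app_user",
    password="app_password",
    sslmode="require",  # encrypt traffic between the client and the managed database
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")  # simple round trip to verify connectivity
    print(cur.fetchone()[0])

conn.close()
```

The same pattern applies to other managed engines; only the driver and the endpoint details supplied by the provider change.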
The top 7 cloud databases for the year 2020:

1 - Amazon Web Services (AWS): Amazon has risen to the top of the database-as-a-service (DBaaS) market.
2 - Oracle Database
3 - Microsoft Azure
4 - Google Cloud Platform
5 - IBM DB2
6 - MongoDB Atlas
7 - OpenStack

11.5 DATA SECURITY IN CLOUD

Given the growing number of companies considering shifting data to the cloud and the vital nature of the applications involved, cloud security is critical. A data owner has no control over where data is stored in the cloud, which poses a significant security risk, and the virtualization paradigm used in cloud computing raises security risks of its own. One of the primary security challenges is that users are uninformed about cloud security: cloud customers may believe that because their software and data are in competent hands, they no longer have to worry about security. When companies use public clouds, they lose control of their important data and services, and data is deemed far less trustworthy than on organizational intranets once other actors obtain control of it. Even though existing solutions provide data storage services with high dependability, accessibility and availability assurances, as well as geographic independence, their adoption can be difficult. Security risks are underappreciated as more firms shift their data to cloud-based storage services. An organization's vital data, services, and infrastructure should remain under its control; non-essential elements can be outsourced to third-party vendors, while the data, services, and infrastructure needed to access them are retained in-house.

Cloud security issues:

Cloud companies provide a level of security for their users' data. However, it is often insufficient, because data confidentiality is frequently jeopardized. There are many different forms of attack, ranging from password guessing and man-in-the-middle attacks to insider attacks, shoulder surfing, and phishing. The following list covers the main security issues that exist in the cloud:

Misuse: when diverse firms use the same storage to hold sensitive data, there is a risk of data misuse. To avoid this danger, data repositories must be secured; one way to do this is to apply authentication and access control restrictions to cloud data.
Locality: in the cloud, data is frequently scattered over a number of regions, making it difficult to pinpoint the specific site where it is stored. As data moves from one nation to another, the rules regulating data storage change, bringing compliance difficulties such as data privacy laws into play, all of which are relevant to cloud data storage. As a cloud service provider, you must inform your customers about your data storage policies and the specific location of your data storage servers.

Integrity: the system must be set up in such a way that security and access limits are enforced; in other words, only authorized individuals should have access to the data. To avoid data loss inside a cloud environment, data integrity must be maintained at all times. Aside from restricting access, permission to make modifications to the data should be confined to specific people, so that there is no later problem of universal access.

Access: in the long run, the security regulations governing data access and management are critical. Access should be granted selectively, so that everyone has only the access they need to the sections of the data kept in the data store. A great deal of system and data security can be enforced by regulating and restricting access, providing optimal protection for the stored data.

Confidentiality: the cloud may contain a great deal of sensitive information. Extra layers of protection must be applied to this data to limit the risk of breaches, including phishing attacks; this can be done by both the service provider and the enterprise. Data confidentiality should be a top priority for sensitive material as a precaution.

Breaches: cloud breaches are not uncommon. Attackers can use security flaws in the cloud to steal data that businesses would otherwise consider secret. A breach might also be an inside attack, so companies must pay special attention to vetting personnel in order to minimize undesired data breaches.

Data storage and virtual availability: data is saved and made virtually available to companies, but service providers still have to store the information on physical infrastructure, which leaves the data vulnerable to physical attacks.

These are among the security concerns that come with working in the cloud. Yet they are not insurmountable obstacles, especially given the current state of technology, and there is a great deal of attention on ensuring that stored data is as secure as possible in order to comply with laws and regulations as well as the organization's internal compliance policies.

11.6 DATA PRIVACY IN CLOUD

The challenge of safeguarding confidential information is not a new one; there has been a great deal of research in the field of statistical databases. Information technology is raising worries about invasions of privacy and potential threats to personal information privacy.
Many authors offer a variety of data privacy recommendations. To avoid privacy problems, some of them recommend using categorical data; merging categorical variables can also help to reduce the number of records that are identifiable. Another important point is that when data is handed to analysts for classification, disclosure of the confidential data that identifies records should be minimized; this can be done either automatically or manually. Various techniques have been proposed for data privacy. Data perturbation is one strategy that organizations can employ to prevent the leaking of secret information while still allowing analysts to mine the data; data swapping is another method of protecting the privacy of categorical data. Data privacy also has a legislative or governmental component: in the United States, for example, the Patriot Act empowers the government to seek access to data held on any computer.

Figure 11.3 Data Privacy in Cloud Computing

The following are some of the privacy concerns in cloud computing:

Data protection: in a cloud computing environment, data security is critical, and encryption technology is generally the best solution for data at rest and data transmitted over the network. Besides safeguarding data with encryption software, hard drive manufacturers now offer self-encrypting drives that provide automated encryption. For data in transit, SSL/TLS encryption secures online communications while also authenticating the website and/or business, ensuring data integrity and ensuring that users' information is not tampered with during transmission.

User control: this can be a legal concern as well as one highlighted by customers. Because a SaaS environment gives the service provider control over its customers' data, data visibility is constrained. Because users have little control over the cloud, there is a risk of data being stolen or abused. Transparency is also lacking, for instance about where data is stored, who maintains it, and how it is used. Moreover, data exposure can occur during data
transfer, as many nations have enacted laws that allow them to access data they deem suspicious.

Employee education and training: in many positions that entail managing information, basic employee training should include a thorough grasp of when cloud providers should and should not be used. Without such training, people may not grasp the consequences of their privacy decisions. Unauthorized data usage can range from targeted advertising to data resale in the cloud, and the service provider can profit from this secondary use of data. Client-provider agreements must be precise about unlawful usage in order to increase confidence and reduce security concerns.

Loss of legal protection: storing data in the cloud may result in a loss of legal privacy protection. Complying with all of the legislation that touches cloud computing, such as Canada's privacy act and health-data requirements, can be difficult, and other policies, such as the Patriot Act in the United States, can actually force data to be shared with third parties. Different laws safeguard (or, in some situations, impinge on) the privacy of users in different locations. In terms of localization, data stored in the cloud is, at best, exceedingly ambiguous; in the worst case, the ambiguous and rapid flow of data across borders could render privacy rules unenforceable.

11.7 SUMMARY

Cloud technology is a modern and growing technology for the next generation of IT applications, and privacy and security concerns are the biggest roadblocks to its rapid rise. Compared to traditional infrastructure, cloud computing offers a number of distinct advantages; the task is no longer to understand the benefits that the cloud provides to the user, but rather to understand the issue of data security when data is hosted by a third-party service provider. This chapter discusses data security in the context of cloud computing technologies. First, we classified data security concerns into three categories: cloud characteristics, the data life cycle, and data security properties. We then identified the most common data security solutions that have been implemented for each category in this classification. Data security in the cloud cannot be reduced to a set of technical security solutions; it also depends on the level of trust that the user has in his or her provider, as well as on legal considerations. IP rights protection and management is a critical issue in the cloud computing context: the transfer, storage, and processing of data all rely on solutions for protecting the intellectual property rights attached to that data if cloud computing is to function properly. Who actually owns the data in the cloud is not a
straightforward question, and this is a problem. Under European law, data entrusted to a third party remains the property of the customer, and the law bans the provider from releasing the data to anybody else; in Anglo-Saxon countries, on the other hand, the supplier is considered the owner of the information. As a result, cloud security necessitates a thorough re-examination of existing security procedures. As soon as data is stored in a cloud, responsibility for its management and control is handed to the service provider, and the concept of data being controlled by a third party is still not widely accepted, particularly among large corporations. Reduced data storage and computational costs are a must for any firm, and data and information analysis is always one of the most critical tasks for decision-making in any organization. Consequently, organizations will not move their information and data to the cloud unless the cloud providers have established confidence. Researchers have offered a number of strategies aimed at achieving the maximum level of data protection inside the cloud; nevertheless, there are still numerous gaps that can be addressed by improving the effectiveness of these strategies, and further effort is needed in this field to make cloud computing acceptable to cloud service users. This chapter examined several data security and privacy strategies, with a focus on data storage and use in the cloud, for data protection in cloud computing settings, with the goal of fostering trust between cloud service providers and customers.

11.8 KEYWORDS

CRM - Customer relationship management (CRM) is the process through which a company or other organization manages its interactions with customers, usually by analyzing large amounts of data.

AWS - Amazon Web Services is an Amazon company that offers metered, pay-as-you-go cloud computing platforms and APIs to individuals, businesses, and governments.

NFS & CIFS - NFS (Network File System) and CIFS (Common Internet File System) are protocols for viewing and accessing data stored on a remote computing device, such as a server or a PC. Most modern storage systems employ CIFS, which is a variant of the Server Message Block (SMB) protocol.

Data privacy - Data privacy, often known as information privacy, is a subset of data security that deals with the proper handling of data.

Data integrity - Data integrity is a vital part of the design, implementation, and use of any system that stores, processes, or retrieves data, because it ensures data reliability and accuracy throughout its life cycle.
11.9 LEARNING ACTIVITY

1. It is said that cloud computing can save money. What are your thoughts on the matter? What open-source cloud computing platform databases can you think of?

2. Explain how a cloud environment's security architecture is designed and how such measures might be implemented in a typical banking scenario.

11.10 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. How does a traditional data center differ from a cloud data center?
2. Briefly describe the storage of cloud data.
3. What is meant by data privacy in the cloud? Explain briefly.
4. What are the security issues faced by cloud computing? Explain briefly.
5. What is the difference between integrity and accessibility?

Long Questions
1. List the challenges of storing data in the cloud.
2. What is a data center? Give an example.
3. What are the types of storage devices?
4. Write short notes on virtual storage containers.
5. List the security issues in the cloud.

B. Multiple Choice Questions

1. CRM stands for
a. Customer Relationship Management
b. Custom Relationship Management
c. Customer Ration Management
d. Customer Relationship Migration

2. NAS stands for
a. Network Attached Storage
b. Network Attack Storage
c. Network Attached Shipment
d. Network Attached Store

3. Virtual storage containers create ___________ for devices, files, and other objects.
a. logical unit numbers
b. physical unit numbers
c. storage numbers
d. service numbers

4. LUN stands for
a. logical unit numbers
b. logical unit normal
c. logical unique numbers
d. local unit numbers

5. Which of the following is a challenging issue in cloud computing?
a. Auditing
b. Data integrity
c. e-Discovery for legal compliance
d. All of these

Answers
1-a, 2-a, 3-a, 4-a, 5-d

11.11 REFERENCES

Reference Book:
Data Security in Cloud Computing - Shucheng Yu, Wenjing Lou, and Kui Ren
Data Management in Cloud Computing - https://www.researchgate.net/publication/323185562

Websites:
https://www.javatpoint.com/what-is-a-data-centre
https://journals.sagepub.com/
UNIT 12 - TRAFFIC MANAGER

STRUCTURE
12.0 Learning Objectives
12.1 Introduction
12.2 Traffic Manager
12.3 Benefits of Traffic Manager
12.4 Managing traffic between datacenters
12.5 Summary
12.6 Keywords
12.7 Learning Activity
12.8 Unit End Questions
12.9 References

12.0 LEARNING OBJECTIVES

After studying this unit students will be able to:
Evaluate Traffic Manager
Analyze benefits of Traffic Manager
Manage traffic between datacenters

12.1 INTRODUCTION

Cloud Computing (CC) is the act of performing processing on someone else's computer, and CC services are available from a variety of vendors. A constant internet connection is the basic requirement that must be met in order to access CC services. Because everything is done online, traffic across the internet must be managed efficiently in order to reduce transmission delays and provide better service to clients; at no point should the network be overburdened. As a result, traffic management has become a critical factor in improving the performance of a CC network.

12.2 TRAFFIC MANAGER

Customer traffic can be directed and distributed over many places, such as multiple cloud services or Azure web apps, using Traffic Manager. Using the geographic routing method, Traffic Manager can also help a business meet its geofencing needs.
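Traffic Manager works at the DNS level: it answers name lookups with the address of whichever endpoint its routing method selects, and clients then connect to that endpoint directly. As a rough illustration of the idea (not Azure's actual implementation), the toy Python sketch below chooses an endpoint using either a geographic preference or a weighted draw; the endpoint names, regions, and weights are invented for the example.

```python
# Toy illustration of DNS-level traffic routing (not the Azure implementation).
# Endpoint names, regions, and weights are invented for the example.
import random

ENDPOINTS = [
    {"name": "web-westeurope",     "region": "europe",   "weight": 3, "healthy": True},
    {"name": "web-eastus",         "region": "americas", "weight": 1, "healthy": True},
    {"name": "web-southeastasia",  "region": "asia",     "weight": 1, "healthy": False},
]

def pick_geographic(client_region):
    """Prefer a healthy endpoint in the client's region, else any healthy one."""
    healthy = [e for e in ENDPOINTS if e["healthy"]]
    local = [e for e in healthy if e["region"] == client_region]
    pool = local or healthy
    return pool[0]["name"] if pool else None

def pick_weighted():
    """Spread clients across healthy endpoints in proportion to their weights."""
    healthy = [e for e in ENDPOINTS if e["healthy"]]
    return random.choices(healthy, weights=[e["weight"] for e in healthy])[0]["name"]

print(pick_geographic("europe"))   # e.g. web-westeurope
print(pick_weighted())             # weighted choice among healthy endpoints
```

In a real deployment the health flags would come from endpoint probes, and the chosen name would be returned to the client as a DNS answer rather than printed.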
In today's cloud context, data traffic analysis throughout the data centre is more important than ever. Many online applications rely on traffic analysis to optimize the handling of client data, and network operators must have a basic understanding of the traffic in order to manage data efficiently and avoid congestion. The traffic analysis and management tactics currently used by Internet Service Providers (ISPs) do not carry over easily to online data centers. The density of links in a data center is substantially higher than in ISP and enterprise networks, which poses the most severe problem for existing solutions. Most available techniques can calculate traffic among only a limited number of end hosts, whereas even a modest data center can have thousands of servers. Existing techniques also frequently assume flow patterns that are appropriate for the internet and organizational networks, but data center applications, such as MapReduce, fundamentally alter the traffic flow pattern. In addition, an application's use of the network, storage, and compute resources is more tightly coupled than in other scenarios. There is currently very little work on traffic analysis and management in this setting; some of the existing approaches are discussed below.

Virtual Private Network (VPN):

The Internet/VPN is the simplest way for a company to access a CC application. Only a small amount of data synchronization between the cloud and the local data centers is required. Using a VPN has a minor impact on the enterprise network, but it might raise technical issues that must be resolved, in both technical and contractual terms, with the CC provider. A VPN is difficult to set up and maintain, and it is also expensive. The main disadvantages of using a VPN to the cloud for traffic management are that cloud VPNs are difficult to scale, local VPN clients will not work in the great majority of scenarios, and records such as how much the connection was used and who accessed your server are difficult to keep. The VPN is, on the other hand, the most straightforward method for connecting to the cloud.

Microsoft Azure Traffic Manager:

Microsoft Azure Traffic Manager is an Azure cloud service that allows your website to run in multiple data centers across the world. Traffic Manager can be used to govern the distribution of traffic to an organization's predefined destinations. It applies an intelligent routing policy to DNS queries for the domain names of your Internet resources. A Traffic Manager configuration improves the availability of critical applications as well as the responsiveness of high-performance applications, and it allows upgrades and maintenance to be carried out without downtime. Distributing traffic in this way improves the performance of large, complex systems.

Model for Cloud Network Management (CNM):

The CNM model uses a group of agents that report updates on their performance to cloud administrators. Every agent maintains a set of objects called the Management Information Base (MIB) that holds performance and other associated
information. To provide better QoS for cloud services, the network manager should be aware of the current status of the cluster head, its CPU, storage, and network consumption, the number of virtual machine instances assigned, and so on. Every message in the CNM model is acknowledged, with the sole objective of ensuring proper service. The CNM paradigm combines centralized and decentralized management in a single package; it is a hybrid of the two approaches. The limitations of SNMP show how the CNM model improves on it: it eliminates or reduces several of the earlier model's flaws and offers a higher level of security than SNMP. The CNM model also enhances network performance because it uses fewer packets, which reduces jitter. In summary:

i) Network traffic is reduced.
ii) There are no polling concerns.
iii) Communication is kept secure.
iv) Virtualization aids faster recovery in the event of failures.
v) It offers a higher level of security.

A network is always analyzed in light of characteristics such as performance, security, speed, and scalability. The performance of a cloud network should be analyzed from the perspective of both service providers and clients.

Service provider's point of view: the service provider is mainly concerned with the performance of the cloud infrastructure. The storage, VMs, and network traffic must all be monitored by the cloud provider.

Client's point of view: clients tend to judge performance by how applications behave, for example network speed, remote data availability, and reliability.

Having studied the preceding tools, dynamic traffic management is clearly effective and a sensible option for traffic-management concerns in cloud computing networks, as opposed to hardware-based solutions.

Traffic Management (VTM) Services from VeriSign:

Dynamic Traffic Management (DTM) allows network traffic to be managed on the basis of real-time data, and offers virtually limitless ways of managing and customizing an organization's traffic. Failover, geo-location, weighted load balancing, and DTM are some of the traffic management features offered by VeriSign, and the Lua scripting language is available only through the VTM. VeriSign provides these dynamic traffic management services via the cloud: global organizations can easily monitor their network traffic, traffic patterns can be examined, and because the system raises alerts, application downtime can be avoided in the vast majority of cases. It is a product that monitors traffic on the network.
Low cost: compared to hardware solutions, it offers lower operating costs, and because it is delivered as an all-inclusive service it allows for easy customization. Being a software package, it can be deployed easily, and its improved accessibility and performance make it suitable for simple web-based services.

Speed: because it has very low latency, data can be transferred to its destination considerably faster. The traffic across a cloud network is not constant; it fluctuates. For example, traffic to and from the cloud via VPN may vary at different times. When there is little traffic, this causes no problems; however, if the traffic increases, the network becomes noticeably congested, and the chances of application downtime due to bottlenecks are considerable. Downtime is not acceptable in a cloud computing environment: the applications should be able to run smoothly at all times. As a result, traffic management is becoming increasingly important. The system should be able to scale up or down in response to changing requirements, and when traffic is low, hardware-based solutions become prohibitively expensive.

12.3 BENEFITS OF TRAFFIC MANAGER

The user benefits from the Traffic Manager in a number of ways:

Increased application performance: you can improve the efficiency of the application by making pages load faster and providing a better user experience. This applies to users who are served by the hosted service nearest to them.

High availability: Traffic Manager can enhance application availability by automatically redirecting customer traffic when one of the application instances fails.

No downtime for upgrades or maintenance: once you have configured the Traffic Manager, you do not need downtime for application maintenance, patching, or the deployment of entirely new packages.

Quick setup: configuring Azure Traffic Manager in the Windows Azure portal is straightforward. If you already have a Windows Azure application (cloud service, Azure website), you can quickly configure the Traffic Manager through a simple process (setting the routing policy).

12.4 MANAGING TRAFFIC BETWEEN DATA CENTERS

Applications are closely connected with traffic parameters such as packet sizes, flow size distributions, and flow inter-arrival times. Furthermore, the localization of traffic to racks depends greatly on how applications are deployed across the network and how those applications interact with one another; there can be a significant number of flow arrivals per server (thousands per second) with many concurrent flows. Finally, due to possibly uneven traffic
distribution across the network (flows may have to compete for bandwidth) and the use of techniques such as connection pooling, which results in long-lived connections that are not always transmitting, the distributions of flow sizes and durations may be significantly different. The objectives of datacenter traffic control are listed in Table 12.1.

Table 12.1 Datacentre traffic control objectives

Minimizing Flow Completion Times (FCT): Distributed applications benefit from faster completion times because they eliminate communication delays and improve end-to-end responsiveness.

Minimizing Deadline Miss Rate or Lateness: It is critical to meet certain deadline criteria for time-sensitive applications. In some applications only transactions that finish before their deadlines are valuable, in which case the deadline miss rate is the appropriate performance metric. Lateness (i.e., by how much we miss deadlines) is the preferred metric for applications where transactions completed beyond their deadlines are still beneficial or even critical.

Maximizing Utilization: It is desirable to employ the available resources as fully as feasible in order to maximize performance.

Fairness: While respecting classes of service and service level agreements, resources should be distributed evenly across tenants and users.

To enforce traffic management, a certain level of coordination between network nodes is required. In general, traffic control can be fully distributed or wholly centralized. The main approaches, distributed, centralized, and hybrid, are discussed here.

Distributed

Most congestion management techniques work in a distributed fashion because it is more dependable and scalable. A distributed scheme may be built at the end-hosts, at the switches, or at both. Designs that require no changes to basic network operations, and no additional features at the switches such as custom priority queues, in-network rate negotiation and allocation, sophisticated calculations in switches, or per-flow state, are frequently favored. Because each server manages its own traffic, end-host schemes are usually more scalable. For example, because incast congestion occurs at the receiving end, the incast
problem, which is a frequent traffic control concern, can be effectively addressed using end-host based techniques. Server based Flow Scheduling (SFS), pHost, NDP, and ExpressPass are a few examples. SFS uses ACKs to manage the flow of data towards receivers and avoid congestion: the transmitter examines the flows it is delivering and serves the higher-priority flows first, while the receiver manages reception by deciding when to send ACKs. pHost employs a pull-based strategy in which the receiver controls the reception schedule according to some policy (preemptive or non-preemptive, fair sharing, etc.). A receiver obtains a Request to Send (RTS) from each source; it is then aware of all the hosts that want to send data to it and can issue tokens that allow them to do so. NDP limits the aggregate transmission rate of all incast senders by keeping a PULL queue at the receiver that is loaded with additional PULL requests when fresh packets arrive from a sender (a PULL request contains a counter which determines the number of packets its associated sender is allowed to send). PULL requests are then issued to senders in a paced manner to ensure that the total incoming rate at the receiver does not exceed the per-interface line rate. ExpressPass controls the flow of credit packets at switches and end-hosts in response to network constraints, reducing network congestion (a sender is allowed to send a new data packet only when it receives a credit packet). A toy sketch of this receiver-driven credit idea appears at the end of this subsection.

Because of the ability to regulate flows from various servers and the availability of information about flows from multiple end-hosts, shifting more control into the network may allow better resource management. Flows from many hosts, for example, may pass through a ToR switch, providing it with additional information with which to optimize scheduling or allocation.

Advantages: Increased scalability and dependability. The entire system can be implemented at the end-hosts, and network components (switches, routers, etc.) can also be used to improve performance. End-to-end techniques can be simple to implement, adapting the transmission rate based on implicit network feedback (e.g., loss, latency). The network can also provide explicit feedback (for example, the occupancy of network queues) to help senders make more precise control decisions, potentially improving transmission rate management. For instance, Explicit Congestion Notification (ECN) marking indicates high queue occupancy to end-hosts, and trimming packet payloads when network queues are overloaded (rather than discarding the packets entirely) can help receivers acquire a complete picture of the transmitted packets.

Drawbacks: Typically, only a localized view of network status and flow attributes is available, which limits schemes to locally optimal decisions. Every end-point may, by default, attempt to attain maximum throughput for its own flows (locally optimal), whereas if some end-points reduced their transmission rate in favor of other end-points with more important traffic, the result could be greater network-wide utility (globally optimal). Due to the lack of coordination and the limited view of network status and attributes, it may also be more difficult to implement new network-wide policies (e.g., rate limits in a multi-tenant cloud environment) with distributed management.
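To make the receiver-driven idea behind pHost, NDP, and ExpressPass more concrete, the toy sketch below simulates a receiver that hands out a fixed number of credits per scheduling tick, so the aggregate rate at which incast senders may transmit never exceeds the receiver's line rate. The flow sizes, the credit budget, and the round-robin credit policy are all invented for the illustration; none of this is taken from the actual protocols.

```python
# Toy sketch of receiver-driven transport in the spirit of pHost/NDP/ExpressPass:
# senders may only transmit when the receiver has issued them a credit, which
# caps the aggregate arrival rate at the receiver. All values are made up.
from collections import deque

flows = {"sender_a": 5, "sender_b": 3, "sender_c": 4}   # packets left to send
pending = deque(flows)             # senders that have announced data (like an RTS)
CREDITS_PER_TICK = 2               # receiver line rate, in packets per tick

tick = 0
while pending:
    tick += 1
    for _ in range(CREDITS_PER_TICK):
        if not pending:
            break
        sender = pending.popleft()     # round-robin credit issuing
        flows[sender] -= 1             # the sender transmits one packet per credit
        print(f"tick {tick}: credit -> {sender}, {flows[sender]} packets left")
        if flows[sender] > 0:
            pending.append(sender)     # sender still has data, keep it in the queue
```

Running the sketch shows that no more than two packets ever arrive per tick, regardless of how many senders are competing, which is exactly the property the real credit-based schemes enforce at line rate.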
Centralized

In centralized schemes, a central unit controls transmissions throughout the network to avoid congestion. The central unit has a global view of the network topology and resources, as well as switch state information and end-host demands; flow sizes, deadlines, and priorities, along with switch queue occupancies and link capacities, are among them. The scheduler can proactively assign resources spatially and temporally (different time slots and different paths) and organize transmissions in a way that maximizes performance while minimizing congestion. This entity can also formulate the scheduling task as an optimization problem with resource constraints, whose solution can be approximated using fast heuristics to improve speed even further. The efficiency of a scheduler in a big network is determined by its computing capacity and its communication latency to the end-hosts.

A totally centralized method has a number of drawbacks. Because all network transmissions rely on it, the central controller may be a single point of failure; end-hosts have to fall back temporarily to a basic distributed mechanism if it fails. Upon flow initiation there is a scheduling overhead, the time taken for the scheduler to receive a request, process it, and allocate a transmission slot. Because most datacenter flows are short, this scheduling overhead must be minimal. Furthermore, due to the processing overhead of requests and the emergence of a congestion hotspot around the controller under large numbers of flow arrivals, this strategy may only scale to moderately sized datacenters. Temporary congestion of the controller due to bursts of flow arrivals can cause scheduling delays. It may, however, be possible to use conventional scaling strategies, such as a hierarchical design, for centralized control of bigger networks.

Advantages: A wider view of network topology and flow attributes can improve performance. This information can include the utilization and health of the various network links, as well as the size, deadline, and priority of flows. With this information it is possible to carefully steer traffic across a range of paths according to network capacity, while allowing flows to transmit according to their priorities to maximize utility. Central management also improves the flexibility and ease of maintaining network policies: new routing or scheduling approaches, for example, can be deployed considerably more quickly simply by updating the central fabric. Where stringent resource management is required for guaranteed SLAs, centralized systems also make admission control easier.

Drawbacks: In big networks, a central controller and management fabric might be a point of failure or a network hotspot. There is also the latency and processing overhead of gathering network status and flow information from a large number of end-hosts (the controller must receive and process incoming messages quickly and act accordingly), as well as the overhead of network resource allocation and scheduling (if central rate allocation is used). Finally, in big networks, network updates could be inconsistent. For example, some updates may not be implemented