
MCA643 CU-MCA-Cloud Computing

Published by kuljeet.singh, 2021-01-04 06:28:16

Description: MCA643 CU-MCA-Cloud Computing


• Dynamically change application settings without the need to redeploy or restart an application
• Control feature availability in real time

6.4 USE APP CONFIGURATION

The easiest way to add an App Configuration store to your application is through a client library provided by Microsoft. The following methods are available to connect with your application, depending on your chosen language and framework.

Key groupings

App Configuration provides two options for organizing keys:
• Key prefixes
• Labels

You can use either one or both options to group your keys. Key prefixes are the beginning parts of keys. You can logically group a set of keys by using the same prefix in their names. Prefixes can contain multiple components connected by a delimiter, such as /, much like a URL path, to form a namespace. Such hierarchies are useful when you are storing keys for many applications, component services, and environments in one App Configuration store.

100 CU IDOL SELF LEARNING MATERIAL (SLM)
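As a rough illustration of prefix-based grouping, the following Python sketch splits hypothetical key names on a / delimiter to recover their namespaces. The key names and the helper function are invented for illustration; this is not the App Configuration client library.

```python
# Sketch: grouping configuration keys by hierarchical prefix.
# The key names below are hypothetical examples, not real settings.
from collections import defaultdict

keys = [
    "AppA/Service1/RequestTimeout",
    "AppA/Service2/RequestTimeout",
    "AppB/Service1/ConnectionLimit",
]

def group_by_prefix(keys, delimiter="/"):
    """Group keys by their first path component, URL-path style."""
    groups = defaultdict(list)
    for key in keys:
        prefix, _, rest = key.partition(delimiter)
        groups[prefix].append(rest)
    return dict(groups)

print(group_by_prefix(keys))
# {'AppA': ['Service1/RequestTimeout', 'Service2/RequestTimeout'],
#  'AppB': ['Service1/ConnectionLimit']}
```

The same idea extends to deeper hierarchies: each additional delimiter level narrows the namespace further.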

An important point to keep in mind is that keys are what your application code references to retrieve the values of the corresponding settings. Keys shouldn't change; otherwise, you will have to modify your code every time that happens. Labels are an attribute on keys. They're used to create variants of a key. For example, you can assign labels to multiple versions of a key. A version might be an iteration, an environment, or some other contextual information. Your application can request a different set of key values by specifying another label. As a result, all key references remain unchanged in your code.

Key-value compositions

App Configuration treats all keys stored with it as independent entities. App Configuration doesn't attempt to infer any relationship between keys or to inherit key values based on their hierarchy. You can aggregate multiple sets of keys, however, by using labels coupled with proper configuration stacking in your application code. Let's look at an example. Suppose you have a setting named Asset1, whose value may vary based on the development environment. You create a key named "Asset1" with an empty label and with a label named "Development". In the first label, you put the default value for Asset1, and you put a specific value for "Development" in the latter. In your code, you first retrieve the key values without any labels, and then you retrieve the same set of key values a second time with the "Development" label. When you retrieve the values the second time, the previous values of the keys are overwritten. The .NET Core configuration system allows you to "stack" multiple sets of configuration data on top of each other. If a key exists in more than one set, the last set that contains it is used.
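The stacking behaviour described above can be sketched in a few lines of Python, using the hypothetical Asset1 example from the text. This is a conceptual sketch of last-set-wins merging, not the App Configuration client library.

```python
# Sketch: "stacking" key-value sets the way a layered configuration system does.
# Each set is applied in order; the last set that contains a key wins.
def stack(*config_sets):
    merged = {}
    for config in config_sets:
        merged.update(config)   # later sets overwrite earlier ones
    return merged

no_label = {"Asset1": "default-value", "Asset2": "shared"}
development = {"Asset1": "dev-value"}   # variant under the "Development" label

effective = stack(no_label, development)
print(effective["Asset1"])  # dev-value: the labelled set overwrote the default
print(effective["Asset2"])  # shared: untouched keys keep their default value
```

Retrieving the unlabelled set first and the "Development" set second gives exactly this overwrite order, which is why key references in code never need to change.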
With a modern programming framework, such as .NET Core, you get this stacking capability for free if you use a native configuration provider to access App Configuration. The following code snippet shows how you can implement stacking in a .NET Core application:

C#
// Augment the ConfigurationBuilder with Azure App Configuration
// Pull the connection string from an environment variable
configBuilder.AddAzureAppConfiguration(options => {
    options.Connect(configuration["connection_string"])
           .Select(KeyFilter.Any, LabelFilter.Null)
           .Select(KeyFilter.Any, "Development");
});

The article "Use labels to enable different configurations for different environments" provides a complete example.

App Configuration bootstrap

To access an App Configuration store, you can use its connection string, which is available in the Azure portal. Because connection strings contain credential information, they're considered secrets. These secrets need to be stored in Azure Key Vault, and your code must authenticate to Key Vault to retrieve them. A better option is to use the managed identities feature in Azure Active Directory. With managed identities, you need only the App Configuration endpoint URL to bootstrap access to your App Configuration store. You can embed the URL in your application code (for example, in the appsettings.json file).

App or function access to App Configuration

You can provide access to App Configuration for web apps or functions by using any of the following methods:
• Through the Azure portal, enter the connection string to your App Configuration store in the Application settings of App Service.
• Store the connection string to your App Configuration store in Key Vault and reference it from App Service.
• Use Azure managed identities to access the App Configuration store.

For more information, see the Azure App Configuration documentation.

• Push configuration from App Configuration to App Service. App Configuration provides an export function (in the Azure portal and the Azure CLI) that sends data directly into App Service. With this method, you don't need to change the application code at all.

Reduce requests made to App Configuration

Excessive requests to App Configuration can result in throttling or overage charges. To reduce the number of requests made:
• Increase the refresh timeout, especially if your configuration values don't change often. Specify a new refresh timeout using the SetCacheExpiration method.
• Watch a single sentinel key, rather than watching individual keys. Refresh all configuration only if the sentinel key changes.
• Use Azure Event Grid to receive notifications when configuration changes, rather than constantly polling for any changes.

Importing configuration data into App Configuration

App Configuration offers the option to bulk import your configuration settings from your current configuration files using either the Azure portal or the CLI. You can also use the same options to export values from App Configuration, for example between related stores. If you'd like to set up an ongoing sync with your GitHub repo, you can use the App Configuration GitHub Action so that you can continue using your existing source control practices while getting the benefits of App Configuration.

Multi-region deployment in App Configuration

App Configuration is a regional service. For applications with different configurations per region, storing these configurations in one instance can create a single point of failure. Deploying one App Configuration instance per region across multiple regions may be a better option. It can help with regional disaster recovery, performance, and security siloing. Configuring by region also improves latency and uses separate throttling quotas, since throttling is per instance. To apply disaster recovery mitigation, you can use multiple configuration stores.
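The first two request-reduction techniques can be sketched together: a client-side cache that re-checks the store only after a refresh timeout expires, and refreshes everything only when a single sentinel key changes. All names here are hypothetical, and fetch_remote stands in for the actual round trip to the configuration store.

```python
# Sketch: client-side caching with a refresh timeout and a sentinel key.
import time

class CachedConfig:
    def __init__(self, fetch_remote, refresh_timeout=30.0, sentinel="sentinel"):
        self._fetch = fetch_remote
        self._timeout = refresh_timeout
        self._sentinel = sentinel
        self._cache = fetch_remote()
        self._last_check = time.monotonic()

    def get(self, key):
        if time.monotonic() - self._last_check >= self._timeout:
            self._last_check = time.monotonic()
            fresh = self._fetch()
            # Refresh everything only if the sentinel key changed.
            if fresh.get(self._sentinel) != self._cache.get(self._sentinel):
                self._cache = fresh
        return self._cache[key]

store = {"sentinel": "v1", "greeting": "hello"}
config = CachedConfig(lambda: dict(store), refresh_timeout=0.0)
store["greeting"] = "hi"              # remote change without bumping sentinel
print(config.get("greeting"))          # hello: cached, sentinel unchanged
store["sentinel"] = "v2"               # bump the sentinel to signal a change
print(config.get("greeting"))          # hi: sentinel changed, cache refreshed
```

A longer refresh_timeout trades configuration freshness for fewer requests, which is exactly the tuning knob SetCacheExpiration exposes in the real client library.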

6.5 DIAGNOSTICS

Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources, including virtual machines.

Note: Azure Diagnostics extension is one of the agents available to collect monitoring data from the guest operating system of compute resources.

Primary scenarios

The primary scenarios addressed by the diagnostics extension are:
• Collect guest metrics into Azure Monitor Metrics.
• Send guest logs and metrics to Azure Storage for archiving.
• Send guest logs and metrics to Azure Event Hubs to send them outside of Azure.

Comparison to Log Analytics agent

The Log Analytics agent in Azure Monitor can also be used to collect monitoring data from the guest operating system of virtual machines. You may choose to use either or both depending on your requirements. The key differences to consider are:
• Azure Diagnostics extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.
• Azure Diagnostics extension sends data to Azure Storage, Azure Monitor Metrics (Windows only), and Event Hubs. The Log Analytics agent collects data to Azure Monitor Logs.

• The Log Analytics agent is required for solutions, Azure Monitor for VMs, and other services such as Azure Security Centre.

Costs

There is no cost for Azure Diagnostics extension, but you may incur charges for the data ingested.

Data collected

The following tables list the data that can be collected by the Windows and Linux diagnostics extensions.

Table 6.1 Windows diagnostics extension (WAD)

WINDOWS DIAGNOSTICS EXTENSION (WAD)
Data Source - Description
Windows Event logs - Events from the Windows event log.
Performance counters - Numerical values measuring performance of different aspects of the operating system and workloads.
IIS logs - Usage information for IIS web sites running on the guest operating system.
Application logs - Trace messages written by your application.
.NET EventSource logs - Code writing events using the .NET EventSource class.
Manifest-based ETW logs - Event Tracing for Windows events generated by any process.
Crash dumps (logs) - Information about the state of the process if an application crashes.
File-based logs - Logs created by your application or service.
Agent diagnostic logs - Information about Azure Diagnostics itself.

Table 6.2 Linux diagnostics extension (LAD)

LINUX DIAGNOSTICS EXTENSION (LAD)
Data Source - Description
Syslog - Events sent to the Linux event logging system.
Performance counters - Numerical values measuring performance of different aspects of the operating system and workloads.
Log files - Entries sent to a file-based log.

Data destinations

The Azure Diagnostics extension for both Windows and Linux always collects data into an Azure Storage account. Configure one or more data sinks to send data to other additional destinations. The following sections list the sinks available for the Windows and Linux diagnostics extensions.

Table 6.3 Windows diagnostics extension (WAD)

WINDOWS DIAGNOSTICS EXTENSION (WAD)
Destination - Description
Azure Monitor Metrics - Collect performance data to Azure Monitor Metrics.

Event hubs - Use Azure Event Hubs to send data outside of Azure.
Azure Storage blobs - Write data to blobs in Azure Storage in addition to tables.
Application Insights - Collect data from applications running in your VM into Application Insights, to integrate with other application monitoring.

You can also collect WAD data from storage into a Log Analytics workspace to analyse it with Azure Monitor Logs, although the Log Analytics agent is typically used for this functionality. It can send data directly to a Log Analytics workspace and supports solutions and insights that provide additional functionality.

Table 6.4 Linux diagnostics extension (LAD)

LINUX DIAGNOSTICS EXTENSION (LAD)
Destination - Description
Event hubs - Use Azure Event Hubs to send data outside of Azure.
Azure Storage blobs - Write data to blobs in Azure Storage in addition to tables.
Azure Monitor Metrics - Install the Telegraf agent in addition to LAD.

LAD writes data to tables in Azure Storage. It supports the sinks listed in the table above.

Installation and configuration

The diagnostics extension is implemented as a virtual machine extension in Azure, so it supports the same installation options: Resource Manager templates, PowerShell, and the CLI.

6.6 MONITORING AND DEPLOYMENT OF WEB APPS.

Azure platform as a service (PaaS) offerings manage compute resources for you and affect how you monitor deployments. Azure includes multiple monitoring services, each of which performs a specific role. Together, these services deliver a comprehensive solution for collecting, analysing, and acting on telemetry from your applications and the Azure resources they consume. This scenario addresses the monitoring services you can use and describes a dataflow model for use with multiple data sources. When it comes to monitoring, many tools and services work with Azure deployments. In this scenario, we choose readily available services precisely because they are easy to consume.

Relevant use cases

Other relevant use cases include:
• Instrumenting a web application for monitoring telemetry.
• Collecting front-end and back-end telemetry for an application deployed on Azure.
• Monitoring metrics and quotas associated with services on Azure.

Architecture

This scenario uses a managed Azure environment to host an application and data tier. The data flows through the scenario as follows:
1. A user interacts with the application.
2. The browser and app service emit telemetry.

3. Application Insights collects and analyses application health, performance, and usage data.
4. Developers and administrators can review health, performance, and usage information.
5. Azure SQL Database emits telemetry.
6. Azure Monitor collects and analyses infrastructure metrics and quotas.
7. Log Analytics collects and analyses logs and metrics.
8. Developers and administrators can review health, performance, and usage information.

Components

• Azure App Service is a PaaS service for building and hosting apps in managed virtual machines. The underlying compute infrastructure on which your apps run is managed for you. App Service provides monitoring of resource usage quotas and app metrics, logging of diagnostic information, and alerts based on metrics. Even better, you can use Application Insights to create availability tests for testing your application from different regions.
• Application Insights is an extensible Application Performance Management (APM) service for developers that supports multiple platforms. It monitors the application, detects application anomalies such as poor performance and failures, and sends telemetry to the Azure portal. Application Insights can also be used for logging, distributed tracing, and custom application metrics.
• Azure Monitor provides base-level infrastructure metrics and logs for most services in Azure. You can interact with the metrics in several ways, including charting them in the Azure portal, accessing them through the REST API, or querying them using PowerShell or the CLI. Azure Monitor also offers its data directly to Log Analytics and other services, where you can query and combine it with data from other sources on premises or in the cloud.

• Log Analytics helps correlate the usage and performance data collected by Application Insights with configuration and performance data across the Azure resources that support the app. This scenario uses the Azure Log Analytics agent to push SQL Server audit logs into Log Analytics. You can write queries and view data in the Log Analytics blade of the Azure portal.

DevOps considerations

Monitoring

A recommended practice is adding Application Insights to your code during development using the Application Insights SDKs, customizing it per application. These open-source SDKs are available for most application frameworks. To enrich and control the data you collect, incorporate the use of the SDKs both for testing and production deployments into your development process. The main requirement is for the app to have a direct or indirect line of sight to the Application Insights ingestion endpoint, hosted at an Internet-facing address. You can then add telemetry or enrich an existing telemetry collection.

Runtime monitoring is another easy way to get started. The telemetry that is collected must be controlled through configuration files. For example, you can include runtime methods that enable tools such as Application Insights Status Monitor to deploy the SDKs into the correct folder and add the right configurations to begin monitoring.

Like Application Insights, Log Analytics provides tools for analysing data across sources, creating complex queries, and sending proactive alerts on specified conditions. You can also view telemetry in the Azure portal. Log Analytics adds value to existing monitoring services such as Azure Monitor and can also monitor on-premises environments. Both Application Insights and Log Analytics use the Azure Log Analytics query language. You can also use cross-resource queries to analyse the telemetry gathered by Application Insights and Log Analytics in a single query. Azure Monitor, Application Insights, and Log Analytics all send alerts.
For example, Azure Monitor alerts on platform-level metrics such as CPU utilization, while Application Insights alerts on application-level metrics such as server response time. Azure Monitor alerts on new events in the Azure Activity Log, while Log Analytics can issue alerts about metrics or event data for the services configured to use it. Unified alerts in Azure Monitor is a new, unified alerting experience in Azure that uses a different taxonomy.

Alternatives

This article describes conveniently available monitoring options with popular features, but you have many choices, including the option to create your own logging mechanisms. A recommended practice is to add monitoring services as you build out tiers in a solution. Here are some possible extensions and alternatives:
• Consolidate Azure Monitor and Application Insights metrics in Grafana using the Azure Monitor data source for Grafana.
• Datadog features a connector for Azure Monitor.
• Automate monitoring functions using Azure Automation.
• Add communication with ITSM solutions.
• Extend Log Analytics with a management solution.

For more information, see Monitoring for DevOps in the Azure Well-Architected Framework.

Scalability and availability considerations

This scenario focuses on PaaS solutions for monitoring, in large part because they conveniently handle availability and scalability for you and are backed by service-level agreements (SLAs). For example, App Service provides a guaranteed SLA for its availability. Application Insights has limits on how many requests can be processed per second. If you exceed the request limit, you may experience message throttling. To prevent throttling, implement filtering or sampling to reduce the data rate. High availability considerations for the app you run, however, are the developer's responsibility. For information about scale, for example, see the Scalability considerations section in the basic web application reference architecture. After an app is deployed, you can set up tests to monitor its availability using Application Insights.
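One common way to implement the sampling mentioned above is a deterministic fixed-rate sampler, sketched below in Python. This is a conceptual illustration, not the Application Insights SDK; the hashing scheme and names are assumptions. Hashing an operation ID (rather than drawing a random number per event) keeps all events belonging to one operation together.

```python
# Sketch: fixed-rate sampling to keep telemetry under an ingestion limit.
# A deterministic hash-based decision keeps all events of one operation
# together; names and the hash choice are illustrative assumptions.
import zlib

def sampled_in(operation_id: str, sampling_percentage: float) -> bool:
    """Keep roughly sampling_percentage% of operations, deterministically."""
    bucket = zlib.crc32(operation_id.encode()) % 100
    return bucket < sampling_percentage

events = [f"op-{i}" for i in range(1000)]
kept = [e for e in events if sampled_in(e, 25.0)]
print(f"kept {len(kept)} of {len(events)} operations")
```

Because the decision is a pure function of the operation ID, every component that sees the same operation makes the same keep-or-drop choice, so sampled traces stay complete end to end.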

Security considerations

Sensitive information and compliance requirements affect data collection, retention, and storage. Learn more about how Application Insights and Log Analytics handle telemetry. The following security considerations may also apply:
• Develop a plan to handle personal information if developers are allowed to collect their own data or enrich existing telemetry.
• Consider data retention. For example, Application Insights retains telemetry data for 90 days. Archive data you want access to for longer periods using Microsoft Power BI, Continuous Export, or the REST API. Storage rates apply.
• Limit access to Azure resources to control access to data and who can view telemetry from a specific application.
• Consider whether to control read/write access in application code to prevent users from adding version or tag markers that limit data ingestion from the application. With Application Insights, there is no control over individual data items once they are sent to a resource, so if a user has access to any data, they have access to all data in an individual resource.
• Add governance mechanisms to enforce policy or cost controls over Azure resources if needed. For example, use Log Analytics for security-related monitoring such as policies and role-based access control, or use Azure Policy to create, assign, and manage policy definitions.
• To monitor potential security issues and get a central view of the security state of your Azure resources, consider using Azure Security Centre.

Cost considerations

Monitoring charges can add up quickly. Consider pricing up front, understand what you are monitoring, and check the associated fees for each service. Azure Monitor provides basic metrics at no cost, while monitoring costs for Application Insights and Log Analytics are based on the amount of data ingested and the number of tests you run.
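Ingestion-based pricing of this kind (a monthly free allowance, then a per-GB rate, as quoted for Log Analytics below) can be turned into a simple back-of-the-envelope estimate. The per-GB price in this sketch is a hypothetical placeholder; use the pricing calculator for the actual rate in your region.

```python
# Sketch: estimating a monthly ingestion bill under free-tier-then-per-GB pricing.
FREE_TIER_GB = 5.0     # Log Analytics offers the first 5 GB/month free (per the text)
PRICE_PER_GB = 2.30    # hypothetical pay-as-you-go rate, USD/GB -- a placeholder

def monthly_cost(ingested_gb: float) -> float:
    """Bill only the gigabytes above the free allowance."""
    billable = max(0.0, ingested_gb - FREE_TIER_GB)
    return round(billable * PRICE_PER_GB, 2)

print(monthly_cost(3.0))    # 0.0   -> fully inside the free tier
print(monthly_cost(50.0))   # 103.5 -> 45 billable GB at the assumed rate
```

Estimates like this make it easy to see how quickly sampling or filtering (which reduce ingested GB) pays off.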

To help you get started, use the pricing calculator to estimate costs. Change the various pricing options to match your expected deployment. Telemetry from Application Insights is sent to the Azure portal during debugging and after you have published your app. For testing purposes and to avoid charges, a limited volume of telemetry is instrumented. After deployment, you can watch a Live Metrics Stream of performance indicators. This data is not stored (you are viewing real-time metrics), but the telemetry can be collected and analysed later. There is no charge for Live Stream data. Log Analytics is billed per gigabyte (GB) of data ingested into the service. The first 5 GB of data ingested into the Azure Log Analytics service every month is offered free, and the data is retained at no charge for the first 31 days in your Log Analytics workspace.

6.7 SUMMARY

• Microsoft Azure is Microsoft's cloud computing platform, providing a wide variety of services you can use without purchasing and provisioning your own hardware. Azure enables the rapid development of solutions and provides the resources to accomplish tasks that may not be feasible in an on-premises environment. Azure's compute, storage, network, and application services allow you to focus on building great solutions without the need to worry about how the physical infrastructure is assembled.
• Cloud computing provides a modern alternative to the traditional on-premises datacenter. A public cloud vendor is completely responsible for hardware purchase and maintenance and provides a wide variety of platform services that you can use. You lease whatever hardware and software services you require on an as-needed basis, thereby converting what had been a capital expense for hardware purchase into an operational expense. It also allows you to lease access to hardware and software resources that would be too expensive to purchase.
Although you are limited to the hardware provided by the cloud vendor, you only have to pay for it when you use it.
• The easiest way to add an App Configuration store to your application is through a client library provided by Microsoft. The following methods are available to connect with your application, depending on your chosen language and framework.
• Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources, including virtual machines.

6.8 KEY WORDS/ABBREVIATIONS

• Azure tenant: A dedicated and trusted instance of Azure AD that's automatically created when your organization signs up for a Microsoft cloud service subscription, such as Microsoft Azure, Microsoft Intune, or Office 365. An Azure tenant represents a single organization.
• Single tenant: Azure tenants that access other services in a dedicated environment are considered single tenant.
• Multi-tenant: Azure tenants that access other services in a shared environment, across multiple organizations, are considered multi-tenant.
• Azure AD directory: Each Azure tenant has a dedicated and trusted Azure AD directory. The Azure AD directory includes the tenant's users, groups, and apps and is used to perform identity and access management functions for tenant resources.
• Custom domain: Every new Azure AD directory comes with an initial domain name, domainname.onmicrosoft.com. In addition to that initial name, you can also add your organization's domain names, which include the names you use to do business and your users use to access your organization's resources, to the list.

6.9 LEARNING ACTIVITY

1. Give the diagrammatic view of Azure deployment for web apps.
___________________________________________________________________________
___________________________________________________________________________

2. Draw the detailed steps of monitoring in Azure cloud computing.
___________________________________________________________________________
___________________________________________________________________________

6.10 UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)

A. Descriptive Questions
1. Explain with a diagram the configuration management of Azure.
2. Discuss what the diagnostics attributes of Azure are.
3. Explain the monitoring and deployment of web apps that can be implemented through Microsoft Azure.
4. How does Azure provide support for different web applications? Explain.

B. Multiple Choice Questions
1. Which of the following elements is a non-relational storage system for large-scale storage?
a) Compute
b) Application
c) Storage
d) None of the mentioned

2. Azure Storage plays the same role in Azure that ______ plays in Amazon Web Services.
a) S3
b) EC2
c) EC3
d) All of the mentioned

3. Which of the following elements in Azure stands for management service?
a) config
b) application
c) virtual machines
d) None of the mentioned

4. A particular set of endpoints and its associated Access Control rules for an application is referred to as the _______________
a) service namespace

b) service rules
c) service agents
d) All of the mentioned

5. Which of the following was formerly called Microsoft .NET Services?
a) AppFabric
b) PHP
c) WCF
d) All of the mentioned

Answers
1. c  2. a  3. a  4. a  5. a

6.11 REFERENCES

• Buyya Rajkumar, Broberg James, Goscinski A.M. (Editors). (2011). Cloud Computing: Principles and Paradigms. New Jersey: John Wiley & Sons Inc.
• Microsoft documentation: https://docs.microsoft.com/en-us/azure/
• https://channel9.msdn.com/Azure
• "SQL Data Warehouse | Microsoft Azure". azure.microsoft.com. Retrieved May 23, 2019.
• "Introduction to Azure Data Factory". microsoft.com. Retrieved August 16, 2018.
• "HDInsight | Cloud Hadoop". azure.microsoft.com. Retrieved July 22, 2014.
• "Sanitization". docs.particular.net. Retrieved November 21, 2018.
• Seth Manheim. "Overview of Azure Service Bus fundamentals". docs.microsoft.com. Retrieved December 12, 2017.
• "Event Hubs". azure.microsoft.com. Retrieved November 21, 2018.
• "Azure CDN Coverage by Metro | Microsoft Azure". azure.microsoft.com. Retrieved September 14, 2020.
• eamonoreilly. "Azure Automation Overview". azure.microsoft.com. Retrieved September 6, 2018.
• "Why Cortana Intelligence?". Microsoft.
• "What is the Azure Face API?". Microsoft. July 2, 2019. Retrieved November 29, 2019.

UNIT 7: RESOURCE MANAGEMENT

Structure
7.0. Learning Objectives
7.1. Introduction
7.2. Resource Management
7.3. Scope of Cloud Computing Resource Management
7.4. Provision of Resource Allocation in Cloud Computing
7.5. Summary
7.6. Key Words/Abbreviations
7.7. Learning Activity
7.8. Unit End Questions (MCQ and Descriptive)
7.9. References

7.0 LEARNING OBJECTIVES

At the end of the unit the learner will be able to understand and have knowledge of the following aspects of resource management in Azure:
• Resource management in Azure
• Scope of resource management in cloud computing
• Provision of resource allocation

7.1 INTRODUCTION

Cloud computing has become a new-era technology that has vast potential in enterprises and markets. By using this technology, a cloud user can access applications and associated data from anywhere. It has many applications; for example, firms are able to rent resources from the cloud for storage and other processing functions so that infrastructure cost can be reduced considerably. For managing a large volume of virtual machine requests, cloud providers require an efficient resource scheduling algorithm. Here we summarize different resource management methods and their impacts on cloud systems. We try to analyze the resource allocation methods based on various metrics, and this analysis points out that some of the methods are more efficient than others in some aspects. Therefore, the usability of each of the methods varies according to its application area.

A cloud is characterized by elasticity, which allows a dynamic change in the number of resources based on the varying demand from a customer, as well as a pay-as-you-go option, both of which can lead to substantial savings for customers. Appropriate management of resources in clouds is essential for effectively harnessing the power of the underlying distributed resources and infrastructure. The problems range from handling resource heterogeneity and allocating resources to user requests efficiently, to effectively scheduling the requests that are mapped to a given resource, as well as handling uncertainties associated with the workload and the system. As a consumer or user of a cloud, one should be aware of how, and by what means, the cloud resources are allocated to user requirements, and how the applications are executed in a cloud environment. As a researcher, one can recognize the opportunities to dig further and carry on with more innovations to contribute better solutions to the present problems.

Resource management in cloud computing means the efficient use of heterogeneous and geographically distributed resources to serve consumer requests for cloud service provisioning. Since the resources are spread across multiple organizations with different usage policies, managing them is truly a big challenge.

7.2 RESOURCE MANAGEMENT

We consider resource management as the process of allocating computing, storage, networking and, indirectly, energy resources to a set of applications, in a way that seeks to jointly meet the performance objectives of the infrastructure providers, the users of the cloud resources, and the applications.
The objectives of the cloud users tend to focus on application performance. The conceptual framework provides a high-level view of the functional components of cloud resource management systems and all their interactions. This field is classified into eight functional areas, or resource management activities, which are as follows:
• Global planning of virtualized resources
• Resource demand identification
• Resource utilization estimation
• Resource pricing and profit maximization
• Local scheduling of cloud resources
• Application scaling and provisioning
• Workload management
• Cloud management systems

Cloud computing has emerged as a business necessity, animated by the idea of simply using the infrastructure without managing it. Although initially this idea was present only in the academic area, recently it was carried into industry by companies like Microsoft, Amazon, Google, Yahoo! and Salesforce.com. This makes it possible for new start-ups to enter the market more easily, since the cost of infrastructure is greatly reduced. There are various kinds of problems in managing cloud systems in a static manner, as the number of servers becomes huge and the dependencies between servers become complicated. Cloud computing providers deliver common online business applications that are accessed from servers through web browsers.

7.2.1 Algorithm of resource allocation

Assume that the managed system is in a state in which its current resource principals meet a prescribed QoS specification using the resources already acquired (call it a "healthy" state). A control algorithm for resource management attempts to keep the system in a healthy state, using the three means of action outlined in 12.1.3: dynamic resource provisioning, admission control, and service degradation. These may be controlled through feed-forward, feedback, or a mixed approach. In addition, the problem of bringing the system into an initial healthy state should also be solved. Several common questions arise in the design of a control algorithm. We examine them in turn.

1) System States and Metrics
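A system-state metric and the feedback loop acting on it can be sketched together as a threshold rule: the system is "healthy" while measured utilization sits between two bounds, and the controller provisions or releases resources when it drifts outside them. The thresholds and step sizes below are illustrative assumptions, not values from the text.

```python
# Sketch: one iteration of a threshold-based feedback controller that keeps
# the system "healthy" by provisioning or releasing servers. The bounds
# (0.3, 0.8) and the one-server step are illustrative assumptions.
def control_step(servers: int, utilization: float,
                 high: float = 0.8, low: float = 0.3,
                 min_servers: int = 1) -> int:
    """Scale out when too busy, scale in when idle, else hold steady."""
    if utilization > high:                        # unhealthy: provision more
        return servers + 1
    if utilization < low and servers > min_servers:
        return servers - 1                        # over-provisioned: release
    return servers                                # healthy: leave allocation alone

servers = 2
for observed in [0.95, 0.85, 0.5, 0.2]:   # measured utilization over time
    servers = control_step(servers, observed)
print(servers)   # 3: scaled out twice, held steady, then scaled in once
```

This is the feedback flavour of control: each decision reacts to a measured metric, whereas a feed-forward controller would act on a predicted workload instead.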

The first question is how to define a healthy state. In other words, what metric is used to assess the state of the system? QoS is specified by Service Level Objectives (SLOs), which are the technical expression of an SLA. SLOs involve high-level performance factors, such as global response time or global throughput. While these factors are related to client satisfaction, they cannot be directly used to characterize the state of a system, for which resource allocation indicators are more relevant. These indicators are more easily measured, and can be used for capacity planning and for controlling the system during operation. The problem of SLA decomposition for performance QoS [Chen et al. 2007] is to derive low-level resource occupation factors from Service Level Objectives. Note that equivalent versions of this problem exist for other aspects of QoS, such as availability and security. These are discussed in Chapters 10 and 13, respectively. Here we only consider the performance aspects of QoS. To illustrate this issue, consider a 3-tier implementation of an Internet service, for which SLOs are expressed as a maximum mean response time R and a minimum throughput T. For a given system infrastructure, the problem is to map these requirements onto threshold values for resource occupation at the different tiers: (R, T) → (h_http-cpu, h_http-mem, h_app-cpu, h_app-mem, h_db-cpu, h_db-mem) where h_*-cpu and h_*-mem are the occupation rates of CPU and memory, respectively, for the three tiers: HTTP, Application, and Database. Two main approaches have been proposed to solve the SLA decomposition problem. Using a model of the system to derive low-level resource occupation thresholds from high-level SLOs. Using statistical analysis to infer relevant system-related metrics from user-perceived performance factors. The model-based approach relies on the ability to build a complete and accurate model of the system.
This is a difficult task, due to the complexity of real systems, and to the widely varying load conditions. However, progress is being made; the most promising approach

seems to be based on queueing network models. [Doyle et al. 2003] uses a simple queueing model to represent a static content Web service. [Chen et al. 2007] use a more elaborate queueing network to describe a multi-tier service with dynamic web content. This model is similar to that of [Urgaonkar et al. 2007], also described in Section 12.5.5. The statistical analysis approach is fairly independent of specific domain knowledge, and may thus apply to a wide range of systems and operating environments. An example of this approach is [Cohen et al. 2004]. The objective is to correlate system-level metrics and threshold values with high-level performance factors such as expressed in SLOs. To that end, they use Tree-Augmented Naive Bayesian Networks, or TANs [Friedman et al. 1997], a statistical tool for classification and data correlation. Experiments with a 3-tier e-commerce system have shown that a small number of system-level metrics (3 to 8) can predict SLO violations accurately, and that combinations of such metrics are significantly more predictive than individual metrics (a similar conclusion was derived from the experiments described in 12.5.4). The method is useful for prediction, but its practical use for closed loop control has not been validated. 2) Predictive vs Reactive Algorithms Are decisions based on prediction or on reaction to real-time measurement through sensors? In the predictive approach, the algorithm tries to assess whether the decision will keep the system in a healthy state. This prediction may be based on estimated upper limits of resource consumption (using a model, as described above), or on the prior observation of a typical class of workload. Both approaches to prediction are useful for estimating mean values and medium-term evolution, but do not help in the case of load peaks. Thus a promising path seems to be to design algorithms that combine prediction with reaction and thus implement a mixed feed-forward-feedback control scheme.
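As a toy illustration of the model-based decomposition idea above, the sketch below uses a simple M/M/1 queueing formula to turn a global response-time SLO into per-tier CPU-occupation thresholds. The equal split of the response budget, the tier names, and the service rates are assumptions made for the example, not part of the methods cited in the text.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def utilization_threshold(response_budget, service_rate):
    """Highest occupation rate rho at which an M/M/1 tier still meets
    its share of the response-time SLO: rho = 1 - 1/(mu * r)."""
    return max(0.0, 1.0 - 1.0 / (service_rate * response_budget))

def decompose_slo(max_response, min_throughput, service_rates):
    """Split the global response budget equally across tiers and derive
    a per-tier utilization threshold (toy model only)."""
    budget = max_response / len(service_rates)
    thresholds = {}
    for tier, mu in service_rates.items():
        rho = utilization_threshold(budget, mu)
        # The threshold must also leave room for the required throughput T.
        if min_throughput / mu > rho:
            raise ValueError(f"tier {tier} cannot meet both SLOs")
        thresholds[tier] = rho
    return thresholds

print(decompose_slo(
    max_response=0.3,           # R: 300 ms end-to-end
    min_throughput=50,          # T: 50 requests/s
    service_rates={"http": 400, "app": 200, "db": 100}))  # mu per tier
```

Note how the slowest tier (the database, with the lowest service rate) receives the tightest occupation threshold, which matches the intuition that it is the bottleneck for the end-to-end SLO.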
3) Decision Instants What are the decision instants? The decisions may be made periodically (with a predefined period), or may be linked to significant events in the system, such as the arrival or the termination of a request, or depending on measurements (e.g., if some load factor exceeds a pre-set threshold). These approaches are usually combined. 4) Heuristics and Strategies

While the design of a resource management algorithm depends on the specific features of the controlled system and of its load, a few heuristic principles apply to all situations. Allocate resources in proportion to the needs. As discussed above, the needs may be estimated by various methods. Techniques for proportional allocation are discussed in 12.3.2. In the absence of other information, attempt to equally balance resource occupation. An example illustrating this principle (load balancing algorithms) is presented in 12.3.2. Shed load to avoid thrashing. Experience shows that the best way of dealing with a peak load is to keep only a fraction of the load that can be serviced within the SLA, and to reject the rest. This is the main objective of admission control (see 12.5 for detailed examples). Recall (12.1.3) that three forms of resource management algorithms may be used, in isolation or combined. The basis for their decisions is outlined below. In the case of resource provisioning, the decision is to allocate a resource to a principal (acting on behalf of a request or set of requests), either from an available resource pool, or by taking it from another principal. The decision is guided by the "proportional allocation" principle, possibly complemented by an estimate of the effect of the action on the measured QoS. In the case of admission control, a request goes through an admission filter. The filtering decision (admit or reject) is based on an estimate of whether admitting the request would keep the system in a healthy state. There may be a single filter at the receiving end, or multiple filters, e.g., at the entry of each tier in a multi-tiered system. A rejected request may be kept by the system to be resubmitted later, or to be granted when admission criteria are met again, e.g., after resources have been released.
Alternatively, in an interactive system such as a web server, the request may be discarded and a rejection message may be sent to the requester, with possibly an estimate of a future acceptance time. In the case of service degradation, the decision is to lower the service for some requests, with the goal of maintaining the system in a healthy state with respect to other requests. There are two aspects to the decision: how to select the "victim" requests, and by what amount to degrade the service.
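The admission filter just described can be sketched as follows. The capacity unit and the 0.9 healthy-state threshold are illustrative assumptions, not values from the source:

```python
class AdmissionFilter:
    """Admit a request only if the estimated utilization after admission
    stays below a healthy-state threshold; otherwise reject (shed load)."""
    def __init__(self, capacity, threshold=0.9):
        self.capacity = capacity      # e.g. CPU-seconds/s the tier can serve
        self.threshold = threshold    # healthy-state utilization bound
        self.load = 0.0               # currently admitted demand

    def try_admit(self, demand):
        projected = (self.load + demand) / self.capacity
        if projected <= self.threshold:
            self.load += demand
            return True
        return False                  # caller may queue for later resubmission
                                      # or send a rejection message

    def release(self, demand):
        """Free capacity when a request terminates."""
        self.load = max(0.0, self.load - demand)

gate = AdmissionFilter(capacity=100.0)
print(gate.try_admit(60.0))   # True: projected utilization 0.6
print(gate.try_admit(40.0))   # False: 1.0 would exceed the 0.9 threshold
```

In a multi-tiered system, one such filter could sit at the entry of each tier, each with its own capacity estimate.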

In all cases, the decision involves an estimate of its impact on the state of the system. As explained in the discussion on system states, this estimate involves correlating resource utilization thresholds with SLOs, through the use of a model and/or statistical analysis. 7.3 SCOPE OF CLOUD COMPUTING RESOURCE MANAGEMENT Business applications hosted in the cloud are probably the most promising cloud service and the most interesting topic for computer science education, because they give businesses the option to pay as they go while providing the big benefit of the latest technology advancement [6]. Resource management decisions by the Cloud Service Provider and Cloud Service User need accurate estimations of the state of the physical and virtual resources which are required to deliver the applications hosted by the cloud. The functional elements of Resource Utilization Estimation provide state estimation for compute, network, storage and power resources. They also provide input into cloud monitoring and resource scheduling processes. The functional elements are mapped to the Cloud Provider and Cloud User roles in line with an IaaS cloud offering. The cloud service provider is responsible for overseeing the utilization of compute, networking, storage and power resources, and for controlling this utilization via global and local scheduling processes.

Figure 7.1 Scope of Cloud Computing Resource Management As shown in Figure 7.1, arrows represent the principal information flows between functional elements. The diagram shows the responsibilities of the actors in an IaaS environment. The partitioning is different in the case of PaaS and SaaS environments. The framework is depicted from the IaaS perspective. However, it is applicable to the PaaS and SaaS perspectives - the functional elements remain the same, but the responsibility for supplying more of them rests with the Cloud Provider, whereas in the case of PaaS, the role of Cloud User is split into a Platform Provider along with an Application Provider. The degree of resource allocation responsibility falling on each varies depending on the scope of the provided platform. In the case of SaaS, the Platform and Application Provider are basically the same organization, which is also the Cloud Provider. All resource management responsibilities then fall on that organization. Resource Management and Virtualization One of the most important technologies is the use of virtualization. It is a way to abstract the hardware and system resources from an operating system. In computing, virtualization means to create a virtual version of a device or a resource, such as a server, storage device, network or even an operating system, where the framework divides the resource into one or more execution environments. One of the most basic concepts of virtualization technology employed in a cloud environment is resource consolidation and management. Hypervisors or Virtual Machine Monitors are used to perform virtualization within a cloud environment across a large set of servers. These monitors lie in between the hardware and the operating systems. The figure mentioned below illustrates one of the key advantages of cloud computing, which allows for a consolidation of resources within any data centre.
Within a cluster environment, multiple operating systems are managed so that a number of standalone physical machines can be combined into a virtualized environment. The entire process requires fewer physical resources than ever before. Thousands of physical machines along with megawatts of power are required for the deployment of large clouds, which brings forth the necessity of developing an efficient cloud computing system that utilizes the strengths of the cloud while minimizing its energy footprint. Cloud Operations Management System

7.4 PROVISION OF RESOURCE ALLOCATION IN CLOUD COMPUTING. The strategies of resource allocation can be defined as the mechanisms for obtaining guaranteed VM and/or physical resource allocation for the cloud users with minimal resource contention, avoiding over-provisioning and under-provisioning conditions and other problems. This requires knowing the amount and types of resources required by the applications in order to satisfy the users' tasks; the time of allocation of the resources and their sequence also matter in the resource allocation mechanism. Resource allocation can be defined as efficiently distributing the resources among multiple users as per their demands for a given period of time. However, resource allocation has proven to be a bit complicated in cloud computing. Therefore, there is a need to increase the computing capability for allocating the resources. The main aim of smartly allocating the resources is to gain financial profits in the market. This technique also supports the objectives of cloud computing, i.e., pay per use, because the client need not pay for the resources that he has not used. Dynamic resource allocation speeds up workflow execution and allows the users to differentiate among the different policies available. A resource allocation strategy must avoid the following issues: 1) Over provisioning: This means that an application receives a greater number of resources than actually demanded. 2) Under provisioning: This means that an application receives a smaller amount of resources than actually demanded. 3) Contention of resources: This means that different applications try to access a single resource at the same time. 4) Scarcity: This means there is a lack of resources. Shown below is Table 1, which explains the required input for both the service provider and the client. Due to limited resources, various restrictions and increasing demands from the users, there is a need to efficiently allocate the resources to fulfil cloud requirements.
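The first two issues in the list above can be sketched as a simple classification of an allocation against the demanded amount (a toy illustration, not from the source):

```python
def provisioning_status(allocated, demanded):
    """Classify an allocation against demand, per the issue list above."""
    if allocated > demanded:
        return "over-provisioning"   # client pays for unused resources
    if allocated < demanded:
        return "under-provisioning"  # application may miss its SLA
    return "matched"

print(provisioning_status(8, 4))   # over-provisioning
print(provisioning_status(2, 4))   # under-provisioning
print(provisioning_status(4, 4))   # matched
```

A real allocator would apply such a check continuously and trigger scaling actions, rather than classify a single snapshot.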
The demand and supply of resources may not always match, hence there arises a need for different strategies that allocate the resources smartly. Given below are a few strategies that address the issue of resource allocation in cloud computing. 1) Rule Based Resource Allocation: To reduce the maintenance cost of resources, the resource allocation algorithm negotiates between multiple users to provide safe access to the resources across a network. Any failure in efficient negotiation may lead to the failure of the whole cloud system. In RBRA, the distribution of resources is dynamic and the utilization of

resources is at its peak. The resources are allocated based upon priority, and hence a queue is formed: if a resource R is being used by a user X, another user Y demands it, and a user Z demands it at another time instant, then this algorithm creates a queue, giving priority to Y and then Z. This means that after X has used the resource, Y will use it and then Z will use it. This increases the performance of the whole system. The priority may also be decided on the basis of task size. If the criticality of a task is least, it is assigned the last place. After the resources are allocated, the execution of tasks takes place and results are given to the client. 2) Optimized Resource Scheduling: This algorithm is based on Infrastructure as a Service (IaaS). To provide the best results, cloud computing makes use of virtual machines, and in this algorithm a virtual machine is distributed among many users so as to maximize resource usage. An improved genetic algorithm is used here to allocate the resources in the finest possible way. 3) Fair Resource Allocation for Congestion Control: Whenever resources are being allocated to any user or any service, there are chances of congestion over the network. Congestion is a big problem as it degrades the overall performance, and hence it must be controlled. FRA allows fair use of resources among different users because the need for resources may vary from user to user. In this technique, whenever a user demands a particular service, a particular bandwidth is selected and allocated to the client for a particular period of time. Once no resources are left, all new requests from customers are rejected. 4) Federated Computing and Network System: In this model, both computing resources and network resources are mixed together. Therefore, for a combination of resources, FCNS is required.
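The rule-based queueing behaviour described in strategy 1 (including its task-size priority variant) can be sketched as follows. The single-resource model and the smaller-task-first rule are simplifying assumptions for illustration only:

```python
import heapq
import itertools

class RuleBasedAllocator:
    """Toy version of the rule-based scheme above: one resource, waiting
    users queued by priority (smaller task size = higher priority here),
    FIFO among equal priorities."""
    def __init__(self):
        self.holder = None
        self.queue = []
        self.counter = itertools.count()  # tie-breaker preserves arrival order

    def request(self, user, task_size):
        if self.holder is None:
            self.holder = user            # resource is free: grant immediately
        else:
            heapq.heappush(self.queue, (task_size, next(self.counter), user))

    def release(self):
        """Current holder finishes; hand the resource to the next user."""
        if self.queue:
            _, _, user = heapq.heappop(self.queue)
            self.holder = user
        else:
            self.holder = None
        return self.holder

alloc = RuleBasedAllocator()
alloc.request("X", task_size=5)   # X gets the resource immediately
alloc.request("Y", task_size=2)   # Y and Z wait in the priority queue
alloc.request("Z", task_size=9)
print(alloc.release())   # Y (smallest task) goes next
print(alloc.release())   # then Z
```

This reproduces the X → Y → Z hand-off from the text while showing how a priority rule (task size) would reorder a larger queue.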
The synchronization of resources, whether compute or network, is altogether presented to FCNS, which makes use of wavelength division multiplexing and offers the best data transfer with the least traffic over the network. 7.5 SUMMARY  The cloud computing technology enables all its resources as a single point of access to the customer and is implemented on a pay-per-usage basis. Even though there are many undisputed advantages in using cloud computing, one of the major concerns is to understand how the user / customer requests are executed with proper allocation of resources to each such request. Unless the allocation and management of resources

is done efficiently in order to maximize the system utilization and overall performance, governing the cloud environment for multiple customers becomes more difficult.  The swiftly increasing demand for computation in business processes, file transfer under various protocols, and data centres has forced the development of an emerging technology catering to computational needs and highly manageable, secure storage. To fulfil these technological needs, cloud computing is the best answer, introducing various kinds of service platforms in a high-computation environment. Cloud computing is the most recent paradigm promising to turn the vision of "computing utilities" into reality. The term "cloud computing" is relatively new, and there is no universal agreement on its definition. In this paper, we go through different areas of research and novelty in the cloud computing domain and its usefulness in the genre of management. Even though cloud computing provides many distinguished features, it still has certain shortcomings, along with a comparatively high cost for both private and public clouds. It is a way of congregating masses of information and resources stored in personal computers and other gadgets and putting them on the public cloud for serving users. Resource management in a cloud environment is a hard problem, due to the scale of modern data centres and their interdependencies, along with the range of objectives of the different actors in a cloud ecosystem.  Cloud computing is turning out to be one of the most explosively expanding technologies in the computing industry in this era. It authorizes the users to transfer their data and computation to a remote location with minimal impact on system performance. With the evolution of virtualization technology, cloud computing has emerged to be distributed systematically and strategically on a full basis.
The idea of cloud computing has not only restored the field of distributed systems but also fundamentally changed how business utilizes computing today. Resource management in cloud computing is in fact a difficult problem, due to the scale of modern data centres, the variety of resource types and their interdependencies, the unpredictability of load, and the range of objectives of the different actors in a cloud ecosystem.  It is a fact that the research and analysis of cloud computing is still in its initial period, yet apparent impacts may be brought by cloud computing. As the prevalence of cloud computing continues to rise, the need for power saving mechanisms within the cloud also increases. While a number of cloud terminologies are discussed in this paper,

there is a need for amendments in cloud infrastructure in both the academic and commercial sectors, where management of different segments will happen in a quick span of time, and it is believed that green computing will be one of the major segments of the coming generation of cloud computing. Its use in the management sectors in the modern era not only improves the utilization rate of resources to address the imbalance in development between regions, but also makes more extensive use of cloud computing in our work life. Consequently, cloud services must be designed under the assumption that they will experience frequent and unpredictable failures. Services must recover from failures autonomously, and this implies that cloud computing platforms must offer standard, simple and fast recovery procedures. To sum up, we can further conclude that research and development related to cloud computing technology plays a vital role in the future of resource management and internet technology. Based on ongoing research efforts and the continuing advancement of computing technology, we conclude that this technology is poised to have a major impact on scientific research as well as management planning. 7.6 KEY WORDS/ABBREVIATIONS  Cloud Management Platform (CMP) – A cloud management platform (CMP) is a product that gives the user integrated management of public, private, and hybrid cloud environments.  Cloud Marketplace – A cloud marketplace is an online marketplace, operated by a cloud service provider (CSP), where customers can browse and subscribe to software applications and developer services that are built on, integrate with, or supplement the CSP's main offering. Amazon's AWS Marketplace and Microsoft's Azure store are examples of cloud marketplaces.  Cloud Migration – Cloud migration is the process of transferring all of or a piece of a company's data, applications, and services from on-premise to the cloud.
 Cloud Native – Applications developed specifically for cloud platforms.  Cloud Washing – Cloud washing is a deceptive marketing technique used to rebrand old products by connecting them to the cloud, or at least to the term cloud.

7.7 LEARNING ACTIVITY 1. How are resources managed in Azure cloud computing? ___________________________________________________________________________ ___________________________________________________________________ ________ 2. Draw a comparative study between various resource allocation techniques. ___________________________________________________________________________ ___________________________________________________________________ ________ 7.8 UNIT END QUESTIONS (MCQ AND DESCRIPTIVE) A. Descriptive Questions 1. Explain resource management. 2. Describe how Azure helps with resource management. 3. Discuss the provision of resource allocation in cloud computing. 4. State the various techniques to implement resource allocation. 5. Explain the various methods to manage resources. B. Multiple Choice Questions 1. Which of the following is used to negotiate the exchange of information between a client and the service? a) Compute Bus b) Application Bus c) Storage Bus d) Service Bus 2. Which of the following can be used to create distributed systems based on SOA? a) Passive Directory Federation Services b) Application Directory Federation Services c) Active Directory Federation Services d) None of the mentioned

3. SQL Azure is a cloud-based relational database service that is based on ____________ a) Oracle b) SQL Server c) MySQL d) All of the mentioned 4. Which of the following was formerly called SQL Server Data Service? a) AppFabric b) SQL Azure c) WCF d) All of the mentioned 5. Azure data is replicated ________ times for data protection and writes are checked for consistency. a) one b) two c) three d) all of the mentioned 6. SQL Azure Database looks like and behaves like a local database with a few exceptions like _____________ a) CLR b) CDN c) WCF d) All of the mentioned Answer 1. d 2. d 3. b 4. b 5. c 6. a

7.9 REFERENCES  Jayaswal K., Kallakuruchi J., Houde D.J., Shah D. (2014). Cloud Computing: Black Book. New Delhi: Dreamtech Press.  Buyya Rajkumar, Broberg James, Goscinski A.M. (Editors). (2011). Cloud Computing: Principles and Paradigms. New Jersey: John Wiley & Sons Inc.  Microsoft Documents: https://docs.microsoft.com/en-us/azure/  https://channel9.msdn.com/Azure  R. Buyya, S. Pandey, and C. Vecchiola, "Cloudbus toolkit for market-oriented cloud computing," In Proceedings of the 1st International Conference on Cloud Computing (CloudCom '09), volume 5931 of LNCS, pages 24–44. Springer, Germany, December 2009.  VMware, "Understanding Full Virtualization, Paravirtualization, and Hardware Assist," VMware, Tech. Rep., 2007. [Online]. Available: http://www.vmware.com/files/pdf/VMware paravirtualization.pdf  R. Buyya, A. Beloglazov, J. Abawajy, "Energy-efficient management of data centre resources for cloud computing: a vision, architectural elements, and open challenges," In Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA '10), Las Vegas, USA, 2010.  Martin Randles, David Lamb, A. Taleb-Bendiab, "A Comparative Study into Distributed Load Balancing Algorithms for Cloud Computing," IEEE 24th International Conference on Advanced Information Networking and Applications Workshops (WAINA), pp. 551-556, 20-23 April 2010.  D. Minarolli and B. Freisleben, "Utility-based resource allocation for virtual machines in Cloud computing," In Proceedings of the 2011 IEEE Symposium on Computers and Communications (ISCC '11), pp. 410-417, June 28 2011 - July 1 2011.

UNIT 8: VIRTUALIZATION Structure 8.0. Learning Objectives 8.1. Introduction 8.2. Concept of Virtualization 8.3. Characteristics of Virtualization 8.4. Taxonomy of Virtualization Techniques 8.5. Pros and Cons of Virtualization 8.6. Virtual Machine Provisioning and Lifecycle 8.7. Provisioning Virtual Machines 8.8. Load Balancing 8.9. Summary 8.10. Key Words/Abbreviations 8.11. Learning Activity 8.12. Unit End Questions (MCQ and Descriptive) 8.13. References 8.0 LEARNING OBJECTIVES At the end of the unit, the learner will be able to learn and have knowledge of the following aspects of virtualization:  Learning of virtualization  Pros and cons of virtualization  Introduction to load balancing  Life cycle of a virtual machine 8.1 INTRODUCTION When you 'virtualize,' you are splitting a physical hard drive into multiple, smaller parts. That way, you can run multiple operating systems (OS) off the same computer. You have probably seen people run Windows on macOS as a guest OS - that is an example of virtualization.

Cloud computing is simply virtualization on an epic scale. You are now taking many virtual machines and making them run many different environments for many thousands of users across the globe. Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is the process of creating a virtual version of something, such as hardware. It was initially developed during the mainframe era. It involves using specialized software to create a virtual or software-created version of a computing resource rather than the actual version of the same resource. With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of the hardware. 8.2 CONCEPT OF VIRTUALIZATION One of the most cost-effective, hardware-reducing, and energy-saving techniques used by cloud providers is virtualization. Virtualization makes it possible to share a single physical instance of a resource or an application among multiple customers and organizations at one time. It does this by assigning a logical name to a physical storage and providing a pointer to that physical resource on demand. The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing. Moreover, virtualization technologies provide a virtual environment not only for executing applications but also for storage, memory, and networking.

Figure 8.1 Concept of Virtualization The machine on which the virtual machine is going to be built is known as the Host Machine, and that virtual machine is referred to as a Guest Machine. 8.2.1. Types 1. Full Virtualization: Full virtualization was introduced by IBM in the year 1966. It was the first software solution for server virtualization and uses binary translation and a direct execution technique. In full virtualization, the guest OS is completely isolated by the virtual machine from the virtualization layer and hardware. Microsoft and Parallels systems are examples of full virtualization.

Fig 8.2 Full Virtualization 2. Paravirtualization: Paravirtualization is the category of CPU virtualization which uses hypercalls for operations to handle instructions at compile time. In paravirtualization, the guest OS is not completely isolated, but it is partially isolated by the virtual machine from the virtualization layer and hardware. VMware and Xen are some examples of paravirtualization.

Fig 8.3 Paravirtualization 8.3 CHARACTERISTICS OF VIRTUALIZATION 1. Increased Security – The ability to control the execution of guest programs in a completely transparent manner opens new possibilities for delivering a secure, controlled execution environment. All the operations of the guest programs are generally performed against the virtual machine, which then translates and applies them to the host programs. A virtual machine manager can control and filter the activity of the guest programs, thus preventing some harmful operations from being performed. Resources exposed by the host can then be hidden or simply protected from the guest. Increased security is a requirement when dealing with untrusted code.

 Example-1: Untrusted code can be analysed in a Cuckoo Sandbox environment. The term sandbox identifies an isolated execution environment where instructions can be filtered and blocked before being translated and executed in the real execution environment.  Example-2: The expression sandboxed version of the Java Virtual Machine (JVM) refers to a particular configuration of the JVM where, by means of a security policy, instructions that are considered potentially harmful can be blocked. 2. Managed Execution In particular, sharing, aggregation, emulation, and isolation are the most relevant features. Figure 8.4 Managed Execution 3. Sharing – Virtualization allows the creation of separate computing environments within the same host. This basic feature is used to reduce the number of active servers and limit power consumption. 4. Aggregation – Not only is it possible to share physical resources among several guests, but virtualization also allows aggregation, which is the opposite process. A group of separate hosts can be tied together and represented to guests as a single virtual host. This functionality is implemented

with cluster management software, which harnesses the physical resources of a homogeneous group of machines and represents them as a single resource. 5. Emulation Guest programs are executed within an environment that is controlled by the virtualization layer, which ultimately is a program. Also, a completely different environment with respect to the host can be emulated, thus allowing the execution of guest programs requiring specific characteristics that are not present in the physical host. 6. Isolation Virtualization allows providing guests—whether they are operating systems, applications, or other entities—with a completely separate environment, in which they are executed. The guest program performs its activity by interacting with an abstraction layer, which provides access to the underlying resources. The virtual machine can filter the activity of the guest and prevent harmful operations against the host. Besides these characteristics, another important capability enabled by virtualization is performance tuning. This feature is a reality at present, given the considerable advances in hardware and software supporting virtualization. It becomes easier to control the performance of the guest by finely tuning the properties of the resources exposed through the virtual environment. This capability provides a means to effectively implement a quality-of-service (QoS) infrastructure. 7. Portability The concept of portability applies in different ways according to the specific type of virtualization considered. 1. In the case of a hardware virtualization solution, the guest is packaged into a virtual image that, in most cases, can be safely moved and executed on top of different virtual machines. 2. In the case of programming-level virtualization, as implemented by the JVM or the .NET runtime, the binary code representing application components (jars or

assemblies) can run without any recompilation on any implementation of the corresponding virtual machine.
8.4 TAXONOMY OF VIRTUALIZATION TECHNIQUES
There are many techniques for fully virtualizing hardware resources and satisfying the virtualization requirements (i.e., equivalence, resource control, and efficiency) as originally presented by Popek and Goldberg. These techniques were created to improve performance and to address the flexibility problem in Type 1 architectures.
Popek and Goldberg classified the instructions to be executed in a virtual machine into three groups: privileged, control sensitive, and behavior sensitive instructions. While not all control sensitive instructions are necessarily privileged (e.g., on x86), Goldberg's Theorem 1 mandates that all control sensitive instructions must be treated as privileged (i.e., trapped) in order to have effective VMMs.
Depending on the virtualization technique used, hypervisors can be designed to be either tightly or loosely coupled with the guest operating system. The performance of tightly coupled hypervisors (i.e., OS-assisted hypervisors) is higher than that of loosely coupled hypervisors (i.e., hypervisors based on binary translation). On the other hand, tightly coupled hypervisors require the guest operating systems to be explicitly modified, which is not always possible. One of the Cloud infrastructure design challenges is to have hypervisors that are loosely coupled but with adequate performance. Having hypervisors that are operating-system agnostic increases system modularity, flexibility, and maintainability, and permits upgrading or changing the operating systems on the fly.
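The trap-and-emulate behaviour mandated by Goldberg's Theorem 1 can be sketched with a toy model. The instruction names and handler below are purely illustrative assumptions, not drawn from any real instruction set; a real VMM traps actual machine instructions in hardware.

```python
# Toy trap-and-emulate model of the Popek and Goldberg classification.
# Instruction names here are hypothetical, for illustration only.

PRIVILEGED = {"load_cr3", "halt"}     # trap when executed outside kernel mode
CONTROL_SENSITIVE = {"set_timer"}     # must also trap, per Theorem 1

class ToyVMM:
    def __init__(self):
        self.trap_log = []

    def execute(self, instruction):
        # Privileged and control-sensitive instructions are trapped and
        # emulated by the VMM; innocuous user-level instructions run directly.
        if instruction in PRIVILEGED or instruction in CONTROL_SENSITIVE:
            self.trap_log.append(instruction)
            return f"emulated:{instruction}"
        return f"direct:{instruction}"

vmm = ToyVMM()
results = [vmm.execute(i) for i in ("add", "load_cr3", "set_timer")]
# "add" executes directly; the other two trap to the VMM for emulation.
```

The point of the sketch is the dispatch rule, not the instruction set: an effective VMM lets harmless instructions run at native speed and intercepts only the sensitive ones.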
The following are the main virtualization techniques currently in use:
Binary translation and native execution:
This technique uses a combination of binary translation for handling privileged and sensitive instructions, and direct execution techniques for user-level instructions. This technique is very efficient both in terms of performance and in terms of compatibility with the guest OS, which does not need to know that it is virtualized. However, building binary translation support for such a system is very difficult and introduces significant virtualization overhead.
OS-assisted virtualization (paravirtualization):

In this technique, the guest OS is modified to be virtualization-aware (allowing it to communicate with the hypervisor through hypercalls, so as to handle privileged and sensitive instructions). Because modifying the guest OS to enable paravirtualization is straightforward, paravirtualization can significantly reduce the virtualization overhead. However, paravirtualization has poor compatibility; it does not support operating systems that cannot be modified (e.g., Windows). Moreover, the overhead introduced by the hypercalls can affect performance under heavy workloads. Besides the added overhead, the modification made to the guest OS, to make it compatible with the hypervisor, can affect the system's maintainability.
Hardware-assisted virtualization:
As an alternative to binary translation, and in an attempt to improve performance and compatibility, hardware vendors (e.g., Intel and AMD) started supporting virtualization at the hardware level. In hardware-assisted virtualization (e.g., Intel VT-x, AMD-V), privileged and sensitive calls are set to automatically trap to the hypervisor. This eliminates the need for binary translation or paravirtualization. Moreover, since the trapping is done at the hardware level, it significantly improves performance.
Network Virtualization
Network virtualization in cloud computing is a technique of combining the available resources in a network by splitting the available bandwidth into different channels, each being separate and distinguishable. They can either be allotted to a particular server or device or be kept entirely unassigned, all in real time. The idea is that the technology disguises the true complexity of the network by separating it into parts that are easy to manage, much like a partitioned hard drive makes it easier for you to manage files.
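The channel-splitting idea behind network virtualization can be sketched in a few lines. The class and its interface below are an illustrative assumption, not a real SDN or cloud networking API: a fixed bandwidth pool is divided into equal channels that can be allotted to servers or left unassigned in real time.

```python
# Minimal sketch of network virtualization's channel model (hypothetical API).

class VirtualNetwork:
    def __init__(self, total_mbps, channels):
        # Split the available bandwidth evenly into separate channels.
        self.per_channel = total_mbps // channels
        self.assignments = {c: None for c in range(channels)}

    def assign(self, channel, server):
        # Allot a channel to a particular server or device.
        if self.assignments[channel] is not None:
            raise ValueError("channel already allotted")
        self.assignments[channel] = server

    def unassigned(self):
        # Channels kept unassigned remain available in real time.
        return [c for c, s in self.assignments.items() if s is None]

net = VirtualNetwork(total_mbps=1000, channels=4)  # four 250 Mbps channels
net.assign(0, "web-server")
net.assign(1, "db-server")
```

As in the hard-drive analogy above, the consumer of a channel sees a simple, fixed-capacity resource and never the underlying physical topology.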
Storage Virtualization
This technique gives the user the ability to pool the hardware storage space from several interconnected devices into a single simulated storage device that is managed from one command console. This storage technique is often used in storage area networks. Storage virtualization in the cloud is mostly used for backup, archiving, and recovery of data by hiding the real and physically complex storage

design. Administrators can implement it with software applications or by using hybrid hardware and software appliances.
Server Virtualization
This technique is the masking of server resources. It simulates physical servers by changing their identity, numbers, processors, and operating systems. This spares the user from continuously managing complex server resources. It also makes a lot of resources available for sharing and utilization, while maintaining the capacity to expand them when required.
Data Virtualization
This kind of cloud computing virtualization technique abstracts the technical details usually involved in data management, such as location, performance, or format, in favor of broader access and additional resiliency that are directly related to business needs.
Desktop Virtualization
Compared to other types of virtualization in cloud computing, this model lets you emulate a workstation load rather than a server. This allows the user to access the desktop remotely. Since the workstation essentially runs on a data centre server, access to it can be both safer and more portable.
Application Virtualization
Application virtualization in cloud computing abstracts the application layer, separating it from the operating system. This way the application can run in an encapsulated form without being dependent on the operating system beneath. In addition to providing a layer of isolation, an application created for one operating system can run on a completely different operating system.
8.5 PROS AND CONS OF VIRTUALIZATION
Benefits of Virtualization in Cloud Computing
i. Security

During the process of virtualization, security is one of the important concerns. Security can be provided with the help of firewalls, which help to prevent unauthorized access and keep the data confidential. Moreover, with the help of firewalls and security measures, the data can be protected from harmful viruses, malware, and other cyber threats. Encryption with secure protocols also protects the data from other threats. So, the customer can virtualize all the data stores and create a backup on a server on which the data can be stored.
ii. Flexible operations
With the help of a virtual network, the work of IT professionals is becoming more efficient and agile. The network switches implemented today are very easy to use, flexible, and save time. With the help of virtualization in cloud computing, technical problems in physical systems can be solved. It eliminates the problem of recovering data from crashed or corrupted devices and hence saves time.
iii. Economical
Virtualization in cloud computing saves the cost of physical systems such as hardware and servers. It stores all the data in the virtual server, which is quite economical. It reduces wastage and decreases the electricity bills along with the maintenance cost. Due to this, a business can run multiple operating systems and applications on a particular server.
iv. Eliminates the risk of system failure
While performing some task, there are chances that the system might crash at the wrong time. This failure can cause damage to the company, but virtualization helps you to perform the same task on multiple devices at the same time. The data can be stored in the cloud and retrieved anytime with the help of any device. Moreover, two servers work side by side, which makes the data accessible every time. Even if one server crashes, the customer can access the data with the help of the second server.
v.
Flexible transfer of data
The data can be transferred to the virtual server and retrieved anytime. The customers or the cloud provider do not have to waste time searching hard drives to find data. With the help of virtualization, it is very easy to locate the required data and transfer it to the allotted

authorities. This transfer of data has no limit, and data can be transferred over a long distance at the minimum charge possible. Additional storage can also be provided, and the cost will be as low as possible.
Cons of Virtualization
Although you cannot find many disadvantages of virtualization, we will discuss a few prominent ones as follows:
Extra Costs
You may have to invest in virtualization software, and possibly additional hardware might be required to make the virtualization possible. This depends on your existing network. Many businesses have sufficient capacity to accommodate virtualization without requiring much cash. If you have an infrastructure that is more than five years old, you have to consider an initial renewal budget.
Software Licensing
This is becoming less of a problem as more software vendors adapt to the increased adoption of virtualization. However, it is important to check with your vendors to understand how they view software use in a virtualized environment.
Learn the new Infrastructure
Implementing and managing a virtualized environment will require IT staff with expertise in virtualization. On the user side, a typical virtual environment will operate similarly to the non-virtual environment. There are some applications that do not adapt well to the virtualized environment.
8.6 VIRTUAL MACHINE PROVISIONING AND LIFECYCLE
When a virtual machine or cloud instance is provisioned, it goes through multiple phases. First, the request must be made. The request includes ownership information, tags, virtual hardware requirements, the operating system, and any customization of the request. Second, the request must go through an approval phase, either automatic or manual. Finally, the request is executed. This part of provisioning consists of pre-processing and post-processing. Pre-processing acquires IP addresses for the user, creates CMDB instances, and creates the virtual machine or instance based on information in the request.
Post-processing activates the

CMDB instance and emails the user. The steps for provisioning may be modified at any time using CloudForms Management Engine.
Figure 8.5 Virtual Machine Provisioning and Lifecycle
8.7 PROVISIONING VIRTUAL MACHINES
There are three types of provisioning requests available in CloudForms Management Engine:
1. Provision a new virtual machine from a template
You can provision virtual machines through various methods. One method is to provision a virtual machine directly from a template stored on a provider.
IMPORTANT: To provision a virtual machine, you must have the "Automation Engine" role enabled.
To Provision a Virtual Machine from a Template:
1. Navigate to Infrastructure → Virtual Machines.
2. Click (Lifecycle), and then (Provision VMs).
3. Select a template from the list presented.
4. Click Continue.
5. On the Request tab, enter information about this provisioning request.

Figure 8.6 Request information
In Request Information, type in at least a First Name and Last Name and an email address. This email is used to send the requester status emails during the provisioning process for items such as auto-approval, quota, provision complete, retirement, request pending approval, and request denied. The other information is optional. If the CloudForms Management Engine server is configured to use LDAP, you can use the Look Up button to populate the other fields based on the email address.
NOTE: Parameters with a * next to the label are required to submit the provisioning request. To change the required parameters, see Customizing Provisioning Dialogs.
1. Click the Purpose tab to select the appropriate tags for the provisioned virtual machines.
2. Click the Catalog tab to select the template to provision from. This tab is context sensitive based on provider.

i. For templates on VMware providers:
Figure 8.7 templates on VMware
For Provision Type, select VMware or PXE.
i. If VMware is selected, select Linked Clone to create a linked clone of the virtual machine instead of a full clone. Since a snapshot is required to create a linked clone, this box is only enabled if a snapshot is present. Select the snapshot you want to use for the linked clone.
ii. If PXE is selected, select a PXE Server and Image to use for provisioning.
ii. Under Count, select the number of virtual machines to create in this request.
iii. Use Naming to specify a virtual machine name and virtual machine description. When provisioning multiple virtual machines, a number will be appended to the virtual machine name.
3. For templates on Red Hat providers:
i. Select the Name of a template to use.
ii. For Provision Type, select either ISO, PXE, or Native Clone. You must select Native Clone in order to use a Cloud-Init template.

i. If Native Clone is selected, select Linked Clone to create a linked clone of the virtual machine instead of a full clone. This is equivalent to Thin Template Provisioning in Red Hat Enterprise Virtualization. Since a snapshot is required to create a linked clone, this box is only enabled if a snapshot is present. Select the snapshot to use for the linked clone.
ii. If ISO is selected, select an ISO Image to use for provisioning.
iii. If PXE is selected, select a PXE Server and Image to use for provisioning.
iii. Under Count, select the number of virtual machines you want to create in this request.
iv. Use Naming to specify a VM Name and VM Description. When provisioning multiple virtual machines, a number will be appended to the VM Name.
4. Click the Environment tab to decide where you want the new virtual machines to reside.
i. If provisioning from a template on VMware, you can either let CloudForms Management Engine decide for you by checking Choose Automatically, or select a specific cluster, resource pool, folder, host, and datastore.
ii. If provisioning from a template on Red Hat, you can either let CloudForms Management Engine decide for you by checking Choose Automatically, or select a datacenter, cluster, host, and datastore.

5. Click the Hardware tab to set hardware options.
Figure 8.8 Hardware tab to set hardware
i. In VM Hardware, set the number of CPUs, the amount of memory, and the disk format: thin, pre-allocated/thick, or same as the provisioning template (default).
ii. For VMware provisioning, set the VM Limits of CPU and memory the virtual machine can use.
iii. For VMware provisioning, set the VM Reservation amount of CPU and memory.
6. Click Network to set the vLan adapter. Additional networking settings that are internal to the operating system appear on

the Customize tab.
Figure 8.9 Customize tab
i. In Network Adapter Information, select the vLan.
7. Click Customize to customize the operating system of the new virtual machine. These options vary based on the operating system of the template.
Figure 8.10 Customize to customize
8. For Windows provisioning:
i. To use a customer specification from the Provider, click Specification. To select an appropriate template, choose from the list in the custom specification area. The values that are honoured by CloudForms Management Engine are displayed.
NOTE: Any values within the specification that do not show within the CloudForms Management Engine console's request dialogs are not used by CloudForms Management Engine. For example, for Windows operating systems, if you have any run-once values within the specification, they are not used in creating the new virtual machines. Currently, for a Windows operating system, CloudForms Management Engine honours the unattended interface, identification, workgroup information, user data, Windows options, and server license. If more than one network card is specified, only the first is used.
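The request assembled by the dialogs above can be pictured as a simple data structure. The sketch below is illustrative only: the function and field names are hypothetical and do not correspond to the actual CloudForms Management Engine API. It shows one way the documented behaviour, where a Count greater than one appends a number to each VM Name, could work.

```python
# Hypothetical provisioning-request builder; field names are assumptions,
# not the real CloudForms Management Engine API.

def build_request(template, vm_name, count=1, tags=None):
    if count > 1:
        # When provisioning multiple VMs, a number is appended to the name.
        names = [f"{vm_name}{n:03d}" for n in range(1, count + 1)]
    else:
        names = [vm_name]
    return {"template": template, "vm_names": names, "tags": tags or {}}

req = build_request("rhel-template", "webvm", count=3,
                    tags={"environment": "development"})
# req["vm_names"] → ["webvm001", "webvm002", "webvm003"]
```

Capturing the dialog fields (template, count, naming, tags) as one request object mirrors how the request is then passed through the approval and execution phases described in section 8.6.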

