
CU-MCA-SEM-II-Cloud Computing-Second Draft


2. CSP stands for
a. Cloud Server Provider
b. Client Server Provider
c. Cloud Service Provider
d. Client Service Provider

3. Which of the following parameters are commonly specified in Service Level Agreements?
a. Warranty
b. Accountability
c. Reliability
d. All of these

4. If a client needs access to a specific level of resources, he or she must _______ to the service.
a. Subscribe
b. Own
c. Rent
d. Buy

5. Who is the person or entity that buys, leases, or rents cloud services?
a. Cloud customer
b. CSP
c. Cloud Access Security Broker
d. Regulators

6. Which of the following services need to be negotiated in Service Level Agreements?
a. Logging
b. Auditing

c. Regulatory compliance
d. All of these

Answers
1-a, 2-c, 3-d, 4-a, 5-a, 6-d

6.9 REFERENCES

Reference books
• Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, "Mastering Cloud Computing"
• Kailash Jayaswal, Jagannath Kallakuruchi, Donald J. Houde, Dr. Devan Shah, "Cloud Computing: Black Book"
• Cloud Computing: Principles and Paradigms, Editors: Rajkumar Buyya, James Broberg, Andrzej M. Goscinski, Wiley, 2011.
• Cloud_computing_for_energy_management_in_smart_gri.pdf

Websites:
• https://go4hosting.com/blog/cloud-hosting/role-of-cloud-service-providers-in-enterprise-data-management/

UNIT 7 - SCALABILITY

STRUCTURE
7.0 Learning Objectives
7.1 Introduction
7.2 Scalability
7.3 Scale Up and Scale Down Services
7.4 Cloud Economics
7.5 Adopt Services Using Amazon
7.6 Google App Engine
7.7 Microsoft
7.8 Summary
7.9 Keywords
7.10 Learning Activity
7.11 Unit End Questions
7.12 References

7.0 LEARNING OBJECTIVES

After studying this unit, students will be able to:
• Evaluate the scalability of the cloud
• Analyse scale-up and scale-down services
• Understand how Amazon, Google App Engine, and Microsoft provide services that can be adopted

7.1 INTRODUCTION

There are several reasons to shift to the cloud, yet scalability is among the most prevalent. Scalability refers to the ability to readily add or withdraw compute or storage resources. In the "old days" of on-premise data centres, scaling was extremely expensive, slow, and difficult to manage. Scaling up back then required purchasing new server hardware and disc arrays. Even after the purchase was approved within budget and the order was placed, the equipment could take months to arrive. Meanwhile, some of the company's most highly compensated engineers would spend hours opening cardboard boxes containing servers and storage, plugging them in, and connecting them to the system.

The key to having enough resources without wasting the cloud budget is to manage scaling appropriately. One of the most appealing aspects of migrating to the cloud is the ability to auto-scale, which, when used appropriately, can ensure that you pay only for the resources you actually need.

7.2 SCALABILITY

Scalability is defined as "a system's ability to accommodate a problem as the size of that problem grows (a growing number of objects, increasing volumes of work, and/or being amenable to enlargement)". For example, a system copes with a greater workload by improving throughput through the introduction of additional software and hardware resources (Schlossnagle, 2007; Bondi, 2000). The capacity to scale up a system may be influenced by its architecture and by the data structures, algorithms, and communication protocols used to implement the system's components. Bondi (2000) provides a characterisation of distinct types of scalability; here we summarise some pertinent examples:

Load scalability: a system's ability to make optimal use of the available resources at various workload levels (i.e., avoiding excessive delay, inefficient consumption, or contention). A faulty use of parallelism, incorrect shared-resource scheduling, or excessive overheads can all hinder load scalability. For example, a web server has strong load scalability if its performance remains acceptable even when the number of threads executing HTTP requests is raised during a workload peak.

Space scalability: the capacity of a system to keep its consumption of resources (such as memory or bandwidth) within acceptable limits as the workload increases. A virtual memory mechanism, for example, allows an operating system to scale gracefully by swapping unused virtual memory pages from physical memory to disc, preventing physical memory exhaustion. Another example is a Web 2.0 service, such as a social network, growing from thousands to millions of user accounts.

Structural scalability: the system's implementation or standards do not impede growth in the number of managed objects, or at least will not do so within a reasonable time frame. The size of a data type, for example, may limit the number of elements that can be represented (using a 16-bit integer as an entity identifier only allows 65,536 entities to be represented).

Ideally, a system's scaling capabilities should cover both the short and the long term, with short-term reactivity to adapt to both high and low rates of incoming work. Scaling down is just as important as scaling up, since it has a direct impact on the business's long-term viability by lowering the cost of wasted resources as the workload falls and preventing over-provisioning. The characteristics that increase or decrease scalability can be difficult to pinpoint and are often specific to the target system. Actions taken to strengthen one of these abilities can sometimes harm others. For example, using compression to improve space scalability (i.e., reducing bandwidth by compressing messages) has an influence on load scalability (i.e., increasing processor usage while compressing messages).

7.3 SCALE UP AND SCALE DOWN SERVICES

Scaling actions can be categorised as follows:

Vertical scaling: increasing the processing power (processors, memory, bandwidth, and so on) of the system's equipment. This is how applications are deployed on massive shared-memory servers.

Horizontal scaling: adding more of the same software and hardware. In a typical two-layer service, extra front-end nodes are added (or released) as the number of customers and the workload rises (or falls). This is how applications are deployed on distributed servers.

Scalability must be kept in mind when designing a system's architecture. Although a quick time-to-market, rapid prototyping, or targeting a limited number of users may call for swift development, the solution's architecture should still consider scalability: the system's user base might grow from hundreds to thousands, if not millions, and its complexity might rise as well. Taking this into account from the start reduces the risk of having to re-implement the system after a scaling failure.

The Cloud is a computational paradigm that, among other things, attempts to simplify the provisioning of services by giving service providers the illusion of boundless capacity and automatic scaling. Cloud computing can aid in the development of scalable applications by automating the service-provisioning process of IaaS (Infrastructure as a Service) clouds (lowering management costs and maximising resource utilisation) and by supplying PaaS (Platform as a Service) frameworks (with scalable execution environments, service building blocks, and APIs) for creating cloud-aware applications.

Manual vs scheduled vs automatic scaling

In a cloud environment, there are three basic ways to scale: manually, on a schedule, and automatically.

Manual scaling is exactly what it sounds like: scaling up and out, or down and in, requires the intervention of an engineer. Because vertical and horizontal scaling can be achieved with the click of a button in the cloud, the actual scaling isn't complicated. Manual scaling, however, cannot account for all of the minute-by-minute swings in demand encountered by a typical application, because it requires the attention of a team member. It can also lead to human error: someone may forget to scale back down, resulting in additional expense.

Scaling on a schedule: Scheduled scaling eliminates some of the drawbacks of manual scaling. Users can scale out to ten instances from 5 p.m. to 10 p.m., go back to two instances from 10 p.m. to 7 a.m., and then back out to five instances during the day, based on their regular demand curve. This makes it easy to adjust provisioning to real usage without relying on a team member to make daily modifications.

Automatic scaling (also known as autoscaling) is the process of automatically scaling your compute, database, and storage resources based on established rules. You can scale up, down, out, or in when metrics such as CPU, memory, and network utilisation rise above or fall below a given threshold. Autoscaling ensures that the application is always available, and that adequate resources are constantly provided to prevent performance issues or outages, without having to pay for significantly more resources than are actually used.
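As a concrete illustration of the scheduled and automatic approaches, the sketch below expresses both kinds of rule using Amazon's Auto Scaling API through the boto3 Python SDK. The group name, schedule times, and CPU target are hypothetical values chosen only to mirror the scenario described above; Azure and Google offer equivalent scheduled and metric-based rules.

```python
# Illustrative sketch: scheduled and automatic scaling rules via the AWS Auto
# Scaling API (boto3). The group name, sizes, schedule, and CPU target are
# made-up values that mirror the demand curve described in the text.
import boto3

autoscaling = boto3.client("autoscaling")

# Scheduled scaling: ten instances for the 5 p.m. - 10 p.m. peak, two overnight.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier",
    ScheduledActionName="evening-peak",
    Recurrence="0 17 * * *",          # every day at 17:00 (cron syntax, UTC)
    MinSize=2, MaxSize=10, DesiredCapacity=10,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier",
    ScheduledActionName="overnight",
    Recurrence="0 22 * * *",          # every day at 22:00, scale back in
    MinSize=2, MaxSize=10, DesiredCapacity=2,
)

# Automatic scaling: keep average CPU around 60%; the service adds or removes
# instances whenever the metric drifts away from the target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```

The scheduled rules handle the predictable daily curve, while the target-tracking policy covers unexpected swings in between; combining both is a common pattern.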

7.4 CLOUD ECONOMICS

The key drivers for cloud computing are economies of scale and the ease with which software can be delivered and managed. The most significant financial benefit is the cloud providers' pay-as-you-go strategy. In particular, cloud computing enables:
• Lowering the capital expenditure for IT infrastructure
• Minimising the depreciation and life-cycle costs of IT capital assets
• Replacing software licensing with subscriptions
• Lowering IT resource maintenance and administrative expenditure

A capital cost is the price paid for an asset that is used in the production of goods or the provision of services. Capital costs are expenses paid in full up front that are expected to contribute to long-term profit generation. Because businesses rely on IT infrastructure and software to run their operations, these are considered capital assets. It no longer matters whether a company's primary business is IT-related; the company will almost certainly have an IT department that automates many of its processes, such as payroll, customer relationship management, enterprise resource planning, product tracking, and inventory management. As a result, IT resources are a capital expense for any business. Capital costs should be kept low, since they introduce expenses that must generate profit over time; moreover, because they are associated with tangible items, they are subject to depreciation, which affects the enterprise's profit because such costs are directly subtracted from revenue. For IT capital expenses, depreciation is reflected in the loss of value of hardware over time and the ageing of software products, which need to be replaced as new functionality is required.

Prior to the widespread adoption of cloud computing, the cost of IT equipment and software was a substantial burden for medium and large businesses. Many businesses operate a small or medium datacentre, which comes with a variety of maintenance, electricity, and cooling expenditures. Managing an IT department as well as an IT help centre incurs additional operational costs. Furthermore, the procurement of potentially expensive software triggers additional costs. These costs are considerably lowered or even eliminated with cloud computing, depending on how it is adopted.

One of the benefits of the cloud computing paradigm is that it converts capital expenditure associated with the purchase of software and hardware into operational expenditure associated with renting infrastructure and paying software subscriptions. Such costs can be managed according to the needs of the business and the company's success. Administrative and maintenance expenditures are also reduced with cloud computing: there is little or no requirement for administrative personnel to administer the cloud infrastructure, and the cost of IT support personnel is lowered at the same time. As for depreciation charges, they simply vanish for the business, because there are no IT capital assets to depreciate over time when all IT needs are met by the cloud.

The cost savings that cloud computing can bring to a company depend on the precise situation in which cloud services are used and how they help the company make money. A small business can rely entirely on the cloud for many elements, including:
• IT infrastructure
• Software development
• CRM and ERP

Since there are no initial IT assets, capital costs can be fully eliminated in this situation. The situation is quite different for businesses that already have a significant number of IT assets. In this case, cloud computing, particularly IaaS-based solutions, can help manage unforeseen capital expenditure resulting from the enterprise's short-term needs. Thanks to cloud computing, such costs can be translated into operating costs that last only as long as they are needed. IT infrastructure leasing, for example, enables more efficient peak-load management without incurring capital costs. When the increased load no longer warrants the use of additional resources, they can be released and the related expenses eliminated. Because many businesses already have IT infrastructure, this is the most commonly used cloud computing approach. Another alternative is a gradual shift to cloud-based solutions as capital IT systems depreciate and require replacement. Between these two examples, there are plenty of other scenarios in which cloud computing can assist businesses in earning money.

Another significant benefit is the reduction of some IT-related indirect costs, such as software licensing and support, as well as carbon emissions.

An enterprise uses cloud computing software applications on a subscription basis, so there is no licensing fee, since the software that provides the service remains the provider's property. Using IaaS solutions allows for datacentre consolidation, which can result in a smaller carbon footprint in the long run. Carbon emissions are taxable in some countries, such as Australia, so businesses can save money by reducing or fully eliminating them.

When it comes to cloud computing pricing models, we can differentiate three pricing strategies used by cloud providers:

Tiered pricing. In this model, cloud services are divided into tiers, each of which provides a predetermined compute specification and service-level agreement (SLA) at a set fee per unit of time. Amazon uses this technique to price its EC2 service, which offers a variety of server configurations in terms of processing capability (CPU type and speed, memory) with varying hourly rates.

Per-unit pricing. This approach is better suited to scenarios in which the cloud provider's revenue is determined by the total amount of certain services consumed, such as data transfer or memory allocation. Customers can set up their systems more efficiently in this case, based on the application requirements. GoGrid, for example, uses this methodology to charge clients for servers hosted in the GoGrid cloud based on RAM/hour units.

Subscription-based pricing. In this model, typically employed by SaaS providers, customers pay a periodic subscription fee for the use of software or for component services that are integrated into existing applications.

All of these pricing strategies follow a pay-as-you-go model, which provides a more flexible solution for delivering IT services on demand. This is what allows IT capital expenses to be converted into operational costs: the cost of purchasing hardware becomes the cost of leasing it, and the cost of purchasing software becomes a subscription fee paid for using it.

Clouds are classified into three categories based on the services and resources they provide: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In IaaS clouds, basic computational resources (e.g., storage, servers) are made available as services through the Internet. PaaS clouds make it simple to develop and deploy scalable applications in a variety of environments. SaaS clouds enable whole end-user applications to be deployed, maintained, and delivered as a service via the Internet, typically through a browser. SaaS clouds exclusively support the applications of their providers on their infrastructure.
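To make the pay-as-you-go arithmetic concrete, the short sketch below estimates a monthly bill under the three pricing strategies described above. Every rate, server count, and usage figure is invented purely for illustration and does not reflect any real provider's prices.

```python
# Hypothetical monthly-cost estimates for the three pricing strategies
# described above. All rates and usage figures are invented for illustration.

HOURS_PER_MONTH = 730

# Tiered pricing: a fixed hourly rate per server tier (e.g., small/medium/large).
tier_rate_per_hour = {"small": 0.05, "medium": 0.10, "large": 0.20}
tiered_cost = 3 * tier_rate_per_hour["medium"] * HOURS_PER_MONTH   # 3 medium servers

# Per-unit pricing: pay for the units actually consumed (RAM-hours, GB transferred).
ram_gb_hours = 3 * 4 * HOURS_PER_MONTH              # 3 servers x 4 GB RAM, always on
per_unit_cost = ram_gb_hours * 0.01 + 500 * 0.08    # RAM-hours plus 500 GB of transfer

# Subscription pricing: a flat periodic fee per user of a SaaS application.
subscription_cost = 25 * 12.0                       # 25 users at a flat monthly fee

print(f"Tiered:       ${tiered_cost:8.2f} / month")
print(f"Per-unit:     ${per_unit_cost:8.2f} / month")
print(f"Subscription: ${subscription_cost:8.2f} / month")
```

Because every figure is metered, any of these bills shrinks automatically when the servers, units, or seats are released, which is exactly how capital expenses turn into operating costs.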

7.5 ADOPT SERVICES USING AMAZON

The three core cloud categories are well represented by four well-known clouds: EC2, Azure, App Engine, and Salesforce.

Amazon Elastic Compute Cloud (EC2)

EC2, an IaaS cloud, provides clients with "elastic" access to physical resources for the creation of virtual servers. Clients either host the programs they want to run on the virtual servers or host their own services, which they can then access through the Internet. When demand for a virtual machine's services grows, a copy (instance) of the virtual machine can be created to distribute the load across instances.

The first issue with EC2 is its low abstraction level. According to the tutorials, clients must create a virtual machine, install software on it, transfer the virtual machine to EC2, and then start it using a command-line tool. Although EC2 provides a range of ready-made virtual machines for customers to use, it is still the clients' responsibility to guarantee that their own software is loaded and configured correctly.

Amazon's scalability features, Auto Scaling and Elastic Load Balancing, were only recently introduced. Prior to the launch of these services, EC2 clients were required either to change their EC2 services or to install additional management software inside their EC2 virtual servers. While Auto Scaling and Elastic Load Balancing lessen the amount of customisation required for applications hosted on EC2, both technologies are non-trivial to use and need client participation. In both cases, the EC2 client must have a reserve of virtual servers and then enable Auto Scaling and Elastic Load Balancing to employ the virtual servers based on demand. Finally, EC2 does not allow alternative providers to publish their services, nor does it allow users to discover and select services within EC2. According to the EC2 documentation, network multicasting (a critical component of discovery) is not permitted, making service discovery and selection in EC2 problematic. When services are hosted inside virtual machines on EC2, clients must manually publish them to a discovery service outside of EC2.

7.6 GOOGLE APP ENGINE

Google App Engine is a PaaS cloud that provides clients with a complete web service environment, including all necessary hardware, operating systems, and supporting software. As a result, clients only need to worry about installing or creating their own services, while App Engine manages them on Google's servers.
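To give a sense of how little code a PaaS such as App Engine asks of the developer, the sketch below is a minimal Python web service written with the Flask framework; the platform supplies the servers, operating system, and scaling. The route and message are purely illustrative, and a real App Engine deployment would also include a small app.yaml descriptor declaring the runtime, which is omitted here.

```python
# A minimal, illustrative Python web service of the kind a PaaS such as
# App Engine can host: the developer writes only the request handlers, while
# the platform provides the servers, operating system, and scaling. The route
# and message are hypothetical.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # The platform routes incoming HTTP requests to this handler and may run
    # many copies of the application in parallel as traffic grows.
    return "Hello from a PaaS-hosted service!"

if __name__ == "__main__":
    # Local testing only; in production the PaaS front end invokes the app.
    app.run(host="127.0.0.1", port=8080)
```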

However, the languages that can be used to construct services on App Engine are limited. At the time of writing, App Engine supports the Java and Python programming languages. If an App Engine client is unfamiliar with either of the supported languages, he or she must first learn one before developing services. Existing applications cannot simply be uploaded to App Engine; only services created entirely in Java or Python are supported. Furthermore, App Engine does not facilitate the publication of services built by other parties, nor does it provide discovery or selection services. Clients must publish their services on discovery services outside of App Engine after creating and hosting them. An examination of the App Engine code pages at the time of writing showed no matches when the keyword "discovery" was used as a search phrase.

7.7 MICROSOFT

Microsoft's Azure, another PaaS cloud, lets clients create services with developer libraries that employ Azure's communication, compute, and storage resources, or simply upload finished services. Azure also offers a discovery function within the cloud to help with service-based development, known as the .NET Service Bus. Services hosted in the .NET Service Bus are published once and can be found even if they move regularly. When a service is created or started, it uses a URI to publish itself to the Bus and then waits for requests from clients. While it is useful that the service can migrate and remain available as long as the client uses the URI, there is no mention of how the client obtains the URI. Furthermore, it appears that only the URI can be published to the Bus, without any other information such as state or quality of service (QoS).

7.8 SUMMARY

• Scalability has taken a tortuous path from mainframes to distributed systems, back to mainframes, and finally back to a "centralised" cloud that is distributed and diverse, yet seen as a single entity by the edge devices that contact the cloud through standardised interfaces to perform services. Following the prevailing trends in system design and implementation, scaling capabilities have likewise meandered between horizontal and vertical scalability. This chapter has discussed, in a very concise manner, the primary scalability characteristics supplied by some of the most prominent centralised and distributed systems available today.

• At the various cloud tiers, we have highlighted some of the most prominent examples of cloud-enabled scalability. It is worth noting that this scalability is provided to the end user (either a service provider or a service consumer) in a transparent manner.

7.9 KEYWORDS

• Vertical scaling - Vertical scaling can effectively resize a server without requiring any code changes. It is the ability to add resources to existing hardware or software to boost its capacity.
• Horizontal scaling - Horizontal scaling in the cloud refers to adding more servers to fulfil demand, usually by spreading workloads amongst servers to decrease the number of requests each server receives.
• EC2 - Amazon Elastic Compute Cloud (Amazon EC2) is a cloud computing service that offers secure, scalable compute capacity. It is intended to make web-scale cloud computing more accessible to programmers.
• URI - A uniform resource identifier (URI) is a string of characters that can be used to identify names and resources on the Internet. The URI identifies the method by which resources are accessed, the machines on which they are stored, and the names of the resources on each machine.
• QoS - Quality of service (QoS) is the description or measurement of a service's overall performance, for example that of a telephony or computer network or of a cloud computing service, with a focus on how users perceive that performance.

7.10 LEARNING ACTIVITY

1. Suppose you scale your cloud infrastructure; how does the scaling policy evolve?
2. There are a variety of companies that provide various applications and services. What role do these services/applications play in a user's business? Describe the financial and operational advantages.

7.11 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. How can the scalability of a cloud service be improved?
2. What is meant by scalability?
3. What is an EC2 instance?
4. What is Microsoft Azure?

5. What are the platforms used for large-scale cloud computing?

Long Questions
1. Describe the fundamental features of the economic and business model behind cloud computing.
2. How does cloud computing help to reduce the time to market for applications and to cut down capital expenses?
3. How is cloud scalability achieved?
4. Explain cloud economics in detail.
5. Briefly explain Google App Engine.

B. Multiple Choice Questions

1. Which of the following statements is correct about the cloud?
a. None of the listed
b. The cloud gets scaled when demand increases
c. No scalability in the cloud
d. Disaster recovery can't be set up in the cloud

2. Identify the fundamentals of cloud computing
a. Innovation
b. Time value of money
c. Scalability
d. All of these

3. Identify the characteristic of cloud computing
a. Scalability
b. Reliability
c. No elasticity
d. No scalability

4. ___________ is the process of automatically scaling your compute, database, and storage resources based on established rules
a. Autoscaling
b. Scheduled scaling

c. Manual scaling
d. Vertical scaling

5. On distributed servers, applications are deployed based on __________ scaling.
a. Vertical scaling
b. Horizontal scaling
c. Manual scaling
d. Autoscaling

Answers
1-b, 2-d, 3-a, 4-a, 5-b

7.12 REFERENCES

Reference books
• Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, "Mastering Cloud Computing"
• Kailash Jayaswal, Jagannath Kallakuruchi, Donald J. Houde, Dr. Devan Shah, "Cloud Computing: Black Book"
• Cloud Computing: Principles and Paradigms, Editors: Rajkumar Buyya, James Broberg, Andrzej M. Goscinski, Wiley, 2011.
• Cloud_computing_for_energy_management_in_smart_gri.pdf

Websites:
• https://cloudcheckr.com/cloud-cost-management/cloud-vs-data-center-what-is-scalability-in-cloud-computing/

UNIT 8 - MICROSOFT AZURE

STRUCTURE
8.0 Learning Objectives
8.1 Introduction
8.2 Architecture
8.3 Difference between ARM and Classic Portal
8.4 Creating and Configuring Websites
8.5 Azure Diagnostics of a Website
8.6 Summary
8.7 Keywords
8.8 Learning Activity
8.9 Unit End Questions
8.10 References

8.0 LEARNING OBJECTIVES

After studying this unit, students will be able to:
• Evaluate Microsoft Azure
• Analyse the difference between ARM and the classic portal
• Evaluate the architecture of Azure and its components
• Create, configure, deploy, and monitor a website

8.1 INTRODUCTION

Microsoft Azure is the company's public cloud computing platform, formerly known as Windows Azure. It offers compute, analytics, storage, and networking, as well as other cloud services. Azure is a public cloud computing platform that offers Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) solutions for services including analytics, virtual computing, storage, networking, and more. The Microsoft Azure architecture is built on a huge collection of servers and networking hardware, which in turn hosts a complex set of applications that manage the software and virtualised hardware on these servers. Azure's power comes from this intricate orchestration.

8.2 ARCHITECTURE

Azure Integration Services is a set of services for connecting apps and data in Azure. Two of those services are used in this architecture: Logic Apps for orchestrating workflows and API Management for creating catalogues of APIs. This design is suitable for basic integration cases where the workflow is triggered by synchronous calls from backend services. A more sophisticated architecture that uses queues and events builds on this basic one.

The following elements make up the architecture:

Backend systems: The right-hand side of the diagram depicts the numerous backend systems that the company has implemented or relies on. These include SaaS systems, Azure services, and web services that expose REST or SOAP endpoints.

Azure Logic Apps: Logic Apps is a serverless platform for integrating apps, data, and services in enterprise workflows. The logic apps in this architecture are triggered by HTTP requests. Workflows can also be nested for more complex orchestration. Logic Apps uses connectors to connect to commonly used services; hundreds of connectors are available, and you can also develop your own.

API Management: a managed service that allows you to publish catalogues of HTTP APIs to promote reuse and discoverability. API Management is made up of two parts: the API gateway and the developer portal.

Figure 8.1 Azure Architecture

The API gateway receives HTTP requests and routes them to the backend.

Developer portal: A developer portal is available for each Azure API Management instance. This portal provides documentation and code samples that your developers use to call the APIs. Developers can also test APIs in the developer portal.

In this architecture, composite APIs are created simply by importing logic apps as APIs. Existing web services can also be imported, either from OpenAPI (Swagger) specifications or as SOAP APIs from WSDL specifications.

The API gateway helps to decouple front-end clients from the backend. For example, it can rewrite URLs or transform requests before they reach the backend. It also addresses many cross-cutting concerns, such as authentication, cross-origin resource sharing (CORS) support, and response caching.

Azure DNS is a DNS domain hosting service provided by Microsoft. Azure DNS uses Microsoft Azure infrastructure to perform name resolution. If you host your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing that you use for your other Azure services. To use a custom domain name such as contoso.com, create DNS records that map that custom domain to the IP address. Read "Configure a custom domain name" in the API Management documentation for additional details.

Azure Active Directory (Azure AD): Use Azure AD to authenticate clients that call the API gateway. Azure AD supports the OpenID Connect (OIDC) protocol. The API gateway validates an access token obtained from Azure AD in order to authorise the request. Azure AD can also secure access to the developer portal when the Standard and Premium tiers of API Management are used.

8.3 DIFFERENCE BETWEEN AZURE RESOURCE MANAGER (ARM) AND CLASSIC PORTAL

In this technology-driven world, businesses are focused on increasing the effectiveness of shared resources rather than on the products that differentiate their initiatives and solutions. In this endeavour they continually develop and deploy technology that supports their objectives and ambitions. Companies such as Amazon.com have made significant investments in computing infrastructure in order to cut expenses and make the most of their costly existing technology. As more disruptive technologies emerged, cloud computing became a possibility. Cloud computing is a model for providing ubiquitous, on-demand access to a shared pool of configurable computing resources over the Internet.

Microsoft Azure, on the other hand, is a cloud platform that allows developers to create, deploy, and manage commercial applications. It is a ground-breaking solution that is both a PaaS and an IaaS offering. Data storage, analytics, networking, hybrid integration, identity and access management, Internet of Things, DevOps, migration, and many more Azure cloud services are available. This Microsoft cloud platform has been on the market for about seven years and has seen substantial improvements during that time. One such upgrade is the development of a new deployment model known as Azure Resource Manager (ARM). With the introduction of the new deployment approach, a slew of questions and misunderstandings arose. Questions such as "Should I use the ARM portal or the Classic portal?", "If I have already deployed using Classic, should I move to ARM?", and "What is the difference between the ARM and Classic architectures?" are typical.

Figure 8.2 Azure Portal

Classic Azure Portal

This portal's main feature is that it is used to create and configure resources that only work with the classic (Service Management) deployment model. A cloud service, which serves as a logical container for virtual machines, determines the virtual machine's network properties. In classic Azure, this means that a VM must be contained within a logical unit called a cloud service. It also means that several VMs can be included under a single cloud service.

All VMs under a single cloud service, on the other hand, share a single VIP (virtual IP address) to provide VM availability and load balancing. Furthermore, in this approach, cloud services support virtual networks but do not enforce them. There are also some other characteristics of classic Azure, including:

i) ASM's API set is an XML-driven REST API.
ii) Azure PowerShell can be used to set up security features such as Network Security Groups on VMs.

ARM portal

There is no dedicated support for cloud services in ARM; instead, ARM provides numerous additional resource types to achieve analogous functionality. A user can create and configure any resources within it. The ARM portal provides a logical container called a resource group that simplifies and streamlines all Azure resource-related processes. Most importantly, compared to the classic portal, deleting resources is simple with ARM. In addition, private portals can be created in an on-premises datacentre. Aside from these, ARM has a number of other advantages, including:

• Unlike classic Azure, ARM allows fine-grained access control using RBAC across all resources inside a resource group.
• On ARM, JSON-based templates can be used for deployment.
• The resources on the ARM portal can be categorised and properly organised within an Azure subscription.
• Because resources are grouped in ARM, it is also easier to delete them than in classic Azure.
• JSON templates can be created to configure a complete deployment pattern.

Both models are currently accessible to subscribers, and it is important to understand the differences between them. Many functions are still available only in the old portal; however, Microsoft is quickly adding new features to ARM.

Figure 8.3 Features of ARM
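The bullet points above mention JSON-based templates. The sketch below shows roughly how such a template can be deployed into a resource group from Python; it assumes the azure-identity and azure-mgmt-resource packages, the resource-group name, location, and the (deliberately empty) template are hypothetical, and exact SDK method names can differ between versions.

```python
# Rough sketch of an ARM template deployment from Python. Assumes the
# azure-identity and azure-mgmt-resource packages; names, location, and the
# trivial template are hypothetical, and method names may vary slightly
# between SDK versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"   # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# A resource group is the ARM container into which resources are deployed.
client.resource_groups.create_or_update("demo-rg", {"location": "eastus"})

# A minimal JSON template (expressed here as a Python dict); real templates
# declare virtual machines, storage accounts, networks, and so on.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [],   # empty template: deploys nothing, but exercises the flow
}

deployment = client.deployments.begin_create_or_update(
    "demo-rg",
    "demo-deployment",
    {"properties": {"mode": "Incremental", "template": template}},
).result()
print(deployment.properties.provisioning_state)
```

Because the template and the resource group travel together, deleting "demo-rg" later removes everything the deployment created, which is the grouping benefit listed above.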

8.4 CREATING AND CONFIGURING AZURE WEBSITES

Azure Websites is a managed cloud service that enables you to quickly deploy a web application and make it available to your clients on the Internet. The VMs on which the website runs are handled for you, so you don't have to worry about them. .NET, Java, PHP, Node.js, and Python are among the supported languages. In addition to building your own website, there are various web applications available to use as a starting point, including WordPress, Umbraco, Joomla!, and Drupal.

• You can use continuous deployment with Team Foundation Server (TFS), Git, or GitHub to deploy a new version of the website every time you commit a change.
• You can increase the number of instances on demand, and you can also use autoscaling to have Azure scale the site in and out for you based on specified performance metrics such as throughput or CPU percentage.
• If your website has several instances, you can use load balancing to get the most out of your resources.
• For diagnostics, you can collect performance statistics, application logging, web server logging, IIS logs, and IIS Failed Request logs.
• If you're using Microsoft Visual Studio, you can even remotely debug your application while it's running in the cloud.

In summary, Azure Websites has a number of capabilities that make it simple to deploy, administer, and troubleshoot a web application.

Creating a new website

Let's start by creating a new website. We'll publish content to the website later in this section. To get started, go to the Microsoft Azure Preview Portal (portal.azure.com). You'll need an Azure account at this point. If you don't already have one, you can sign up for a free trial at azure.microsoft.com.

Using the portal

After logging into the portal, select Website from the large +NEW icon in the lower-left corner of the screen, as shown in Figure 8.4.

Figure 8.4 Add a new website in the Azure Preview Portal

You should now see something similar to Figure 8.5, with the fields ready to be filled in.

Figure 8.5 Create a new website

The URL has to be distinct from all other Azure Websites entries. If it is accepted, a green square with a smiley face will appear. Remember that whatever prefix you enter here will be combined with .azurewebsites.net to form the website's URL.

SUBSCRIPTION displays the name of the subscription associated with the Microsoft account with which you are logged in. If you use the same Microsoft account to manage several subscriptions, click SUBSCRIPTION and choose the subscription you want to use.

LOCATION refers to the region of the datacentre in which the website will be hosted. Choose the LOCATION that is most convenient for you. For RESOURCE GROUP, accept the default.

WEB HOSTING PLAN specifies how the website's resources are allocated, such as the number of cores and memory, the quantity of local storage, and services such as autoscaling and backups. When you pick WEB HOSTING PLAN from the drop-down menu, the panel shown in Figure 8.6 appears. You can give a new web hosting plan a name and then select the plan you want. Not all of the plans are visible on that screen; you can see all of them by scrolling down below the OK button (beyond what is visible here) and clicking the BROWSE ALL PRICING TIERS option. Under that option there is an Or Use Existing checkbox that tells Azure not to create a new web hosting plan. Choose the free tier, or Use Existing if the free tier is your default.

Figure 8.6 Web hosting plan selection

Use the defaults for the remaining fields, check the Add to Startboard box, and click Create at the bottom of the new website screen (Figure 8.7).

Figure 8.7 Create the website and add it to the Startboard

Azure will create your new website, pin it to your portal's Startboard for easy access, and display the website and its properties, as shown in Figure 8.8.

Figure 8.8 Website options

You can see all of the options by clicking the three dots to the right of SWAP:

i) ADD adds a new website.
ii) BROWSE takes you to your website in a browser. If you haven't published anything yet, it displays a default page directing you to various deployment tools.

iii) START/STOP starts and stops the website.
iv) SWAP swaps deployment environments. If you have a production and a staging environment, for example, you can publish your website to staging and test it there. When you're happy with it, use the SWAP option to promote it to production, and then delete the staging environment, which now holds the previous production version.
v) FTP/Git credentials: these are the credentials for accessing FTP and Git.
vi) GET PUBLISH PROFILE downloads the information Visual Studio needs to publish the website.
vii) WEB HOSTING PLAN allows you to modify the size, instance count, and other parameters of the host on which the website runs.
viii) RESTART restarts your website.
ix) DELETE deletes the website from your account.
x) RESET PUBLISH PROFILE invalidates the old publishing credentials and generates new ones.

Configure and scale a website

Let's look at the configuration and scaling options for a website in the Azure Management Portal (manage.windowsazure.com). (Not all of the features are available in the Azure Preview Portal yet.) Log in to the portal, select WEBSITES in the left column, and then click one of your websites.

Configuration

Click the CONFIGURE option at the top of the screen to access the website's configuration settings; see Figure 8.9.

Figure 8.9 Configuration settings for the website

This is the top of the CONFIGURE page. As you can see, you can set the versions of .NET, PHP, Java, or Python here. As we scroll further down the page, we see more general website settings, as illustrated in Figure 8.10.

PLATFORM (32-BIT/64-BIT): This is set to 32-bit when you create a Free website. You might wish to change this to 64-bit once you've changed your website to Standard.

WEB SOCKETS (ON/OFF): If you activate this, you can use real-time request-pattern applications, such as chat, that communicate through web sockets.

ALWAYS ON (ON/OFF): When this option is selected, Azure pings your site on a frequent basis to keep it in a warm, running state. This ensures that the website is constantly responsive and that the process or app domain does not page out due to a lack of external HTTP requests.

EDIT IN VISUAL STUDIO ONLINE (PREVIEW): If you enable this, a link to the editor will appear in the DASHBOARD tab's quick glance area. This allows you to edit your website using Visual Studio Online while it is live. If you do live editing and have Deployment from Source Control set up, any changes you make will be overwritten when someone checks in a change.

Figure 8.10 More general website options

As shown in Figure 8.11, there are options for uploading certificates, managing domains, and managing your Secure Sockets Layer (SSL) bindings.

Certificates: An SSL certificate can be uploaded here. End customers can access your site using HTTPS if you bind your SSL certificate to your custom domain name.

Domain names: Instead of mywebsiteatcontoso.azurewebsites.net, you can use a custom domain such as mywebsite.contoso.com.

SSL Bindings: This is where the SSL certificate is linked to the custom domain name.

The next section, shown in Figure 8.12, is where you configure application diagnostics. All of the settings are enabled in order to show as much of the image as possible.

Figure 8.11 Manage certificates, domain names, and SSL bindings for the website

Application Logging (File System) (On/Off): If this is enabled, the web application's logging is written to the file system. You can access the logs by FTPing into the website. It remains enabled for 12 hours and then disables itself, because of the limited amount of disc space available. The logging levels are Error, Warning, Information, and Verbose.

Application Logging (Table Storage) (On/Off): If this is enabled, the web application's logging is written to Azure Tables. The logging levels are Error, Warning, Information, and Verbose. If you choose this option, you'll be asked to specify the storage account and table (see Figure 8.13). These logs are never automatically erased.

Application Logging (Blob Storage) (On/Off): If this is enabled, the logs are written to Azure Blob storage, with each hour's logs stored in a separate blob. You can select a retention time in days for these logs; if you leave it blank, the logs will never be automatically erased. If you choose this option, you'll be asked to provide the storage account and container information (Figure 8.14).

Figure 8.12 Configure application diagnostics for the website

Figure 8.13 Configuring Table storage for application diagnostics

Figure 8.14 Configuring Blob storage for application diagnostics

The next part is used to set up site diagnostics (see Figure 8.15).

Web Server Logging (Off/Storage/File System): This specifies whether the web server (IIS) logs should be written to Azure storage or to the local file system. If you choose STORAGE or FILE SYSTEM, you can set the retention time. For FILE SYSTEM, you can also set the QUOTA, the maximum amount of disc space the logs can take up, which must be between 25 and 100 MB.

Detailed Error Messages (On/Off): This controls whether summary or detailed error messages are written.

Figure 8.15 Configure site diagnostics

Failed Request Tracing (On/Off): Indicates whether or not IIS failed-request logs should be written.

You can set up remote debugging in the next section (see Figure 8.16). If you enable this and publish a debug version of your website, you can use Visual Studio to attach a debugger and debug your website while it's running in Azure.

Figure 8.16 Configuring remote debugging

As illustrated in Figure 8.17, in the following section you can specify up to two endpoints to be monitored. By configuring this, you can monitor the availability of HTTP or HTTPS endpoints from up to three different locations, chosen from Chicago (IL), Amsterdam, Singapore, San Jose (CA), San Antonio (TX), Ashburn (VA), Hong Kong, and Dublin. This can help you locate latency around the world if you have a globally used application.

Figure 8.17 Configuring endpoint monitoring
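Endpoint monitoring is essentially a timed HTTP request repeated from several locations. The fragment below imitates what a single probe location does, using the Python requests library; the URLs are placeholders for whatever endpoints you would configure.

```python
# Illustrative stand-in for an endpoint-monitoring probe: time an HTTPS
# request to each configured endpoint and record whether it succeeded.
# The URLs are placeholders; a real monitor repeats this from several regions.
import time
import requests

ENDPOINTS = [
    "https://mywebsite.azurewebsites.net/",
    "https://mywebsite.azurewebsites.net/health",
]

for url in ENDPOINTS:
    start = time.perf_counter()
    try:
        response = requests.get(url, timeout=30)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{url}: HTTP {response.status_code} in {elapsed_ms:.0f} ms")
    except requests.RequestException as exc:
        print(f"{url}: FAILED ({exc})")
```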

Scaling

A Free website cannot be scaled; it is limited to one instance. A Basic website can be scaled manually up to three instances. For autoscaling, which allows up to ten instances, you must use a Standard website. Because not all of the features have been moved to the Azure Preview Portal yet, let's look at the possibilities using the Azure Management Portal (manage.windowsazure.com).

To begin, we must ensure that the web hosting plan is STANDARD. Log in to the Azure Management Portal (manage.windowsazure.com), select WEBSITES in the left column, and choose the website you want to configure for autoscaling. At the top of the screen, select SCALE. Figure 8.18 is an example of what you should see.

Figure 8.18 Web hosting plan

To modify your plan, select STANDARD and then click SAVE at the bottom of the screen. On this screen, you can also alter the instance size and the number of instances. More information about the web hosting plan is available in the Azure Preview Portal than in the Azure Management Portal. To learn more about those options, go to portal.azure.com, choose your website, and then click WEB HOSTING PLAN in the top actions. If you do that now, make sure to return to the Azure Management Portal to continue.

Now that we have a Standard website, the scale options are available. To begin, you can scale on a set schedule. The entry page shown in Figure 8.19 will appear when you click Set Up Schedule Times.

Figure 8.19 Scaling by schedule

Deploying and monitoring websites

Options for creating websites

There are several ways to build a website and publish its content to Azure Websites.

Notepad or an HTML editor

This is a somewhat limited method of creating a website, but if you're just getting started with web development and want to make a simple HTML page, you can do so with Notepad, your favourite text editor, or HTML-editing software. After you've finished, you can FTP the files to the website. To FTP your files, log in to the Azure Management Portal, select WEBSITES, and then select your website. If you haven't set up your deployment credentials yet, select Reset Your Deployment Credentials in the quick glance column and provide a username and password when prompted. These credentials are used for Git and FTP access. The FTP HOST NAME and the DEPLOYMENT / FTP USER are also listed in the quick glance column. Those two pieces of information, along with the password, are used to access your website and upload your files when you FTP in.

WebMatrix

This is a cloud-connected, free, lightweight web development tool that allows you to create, publish, and administer your websites. It is available for download at http://www.microsoft.com/web/webmatrix/. The following are some of the app's features:

• Seamless connection to Azure Websites.
• PHP, Node.js, ASP.NET, HTML5, CSS3, and jQuery are all supported.

• Allows you to create many of the websites in the Azure Management Portal's Website Gallery and the Azure Preview Portal's Website Marketplace. Umbraco, WordPress, Joomla!, and Drupal are some of the websites available.
• Allows you to manage databases in SQL Server, SQL CE, or MySQL.
• It's compatible with both Git and TFS.
• Allows you to work on websites locally or remotely, using FTP or WebDeploy.

You can sign in with the Microsoft account that you use for Azure, develop a new web application using one of the available templates, and publish it. Your new web application will be visible when you log into either of the Azure portals. You can make changes, verify the results in a local browser, and then republish the page. Only the updated files are deployed when you republish.

Visual Studio

Visual Studio is a complete development environment that allows you to construct ASP.NET MVC apps, .NET client apps, Windows Communication Foundation (WCF) services, Web API services, and Cloud Services using languages including C#, C++, VB, F#, and XAML.

Publishing a website from Visual Studio

In Visual Studio, open one of your web applications. If you don't already have a web application, create one by going to FILE > NEW PROJECT, selecting ASP.NET Web Application, specifying the solution folder, and then selecting MVC Application. This creates a simple MVC application that can be used right away; you can customise it to make it your own later. Let's publish the web app to the Azure website we created earlier in this chapter.

1. Launch Visual Studio and open your web application. Right-click the website and select Publish Web Site from the context menu. The Publish Web dialogue box will be displayed.

2. Select Windows Azure Web Sites from the drop-down menu. You'll be prompted to sign in to your Microsoft Azure account.
3. After logging in, you'll be asked to choose which website you want to deploy to. Select your website from the drop-down menu and click OK. Visual Studio retrieves the publishing settings from Azure and shows the connection information.
4. Verify the connection by clicking Validate Connection.

5. Click Next and select the Debug or Release configuration on the following screen. Accept the defaults on that screen and proceed to the last step by clicking Next.
6. On the final screen you can view the files that will be published. To publish the website, click Publish. After all of the files have been deployed, the website opens in the browser. When you make modifications to your website and repeat the publishing process, only the items that have been added or changed are posted.

Monitoring websites

Many metrics can be set up to track a website's performance. Log in to the Azure Management Portal (manage.windowsazure.com), click WEBSITES, and then click your website. When it brings up the Quick Start page or the DASHBOARD, select MONITOR from the settings at the top of the screen. At first, only the default metrics will be visible: CPU Time, Data In, Data Out, Http Server Errors, and Requests. If you set up Endpoint Monitoring on the CONFIGURE screen, you'll also see the response times here (Figure 8.18).

Figure 8.18 Monitoring a website

The previously defined endpoints for Hong Kong, Dublin, and San Jose (CA) can be seen. You can add as many metrics to the list as you need, but only six metrics can be displayed on the chart at a time. You can add metrics by clicking +ADD METRICS at the bottom of the screen. There are a few more options available. As selected in the upper-right corner, the time frame displayed is 1 HOUR. There is no y-axis in this graph, because each metric has its own y-axis scale, charted to make the most of the available space. New Relic and AppDynamics are two other monitoring tools offered through the Azure Management Portal's Azure Store. These can be chosen and configured in the Developer Analytics section of the CONFIGURE page.

8.5 AZURE DIAGNOSTICS

Microsoft Azure is one of the most popular cloud computing platforms on the market today, thanks to its business-friendly features and capabilities. Enterprises use its IaaS and PaaS services to take advantage of a variety of digital solutions, such as virtual computing, analytics, and storage.

While businesses choose Azure cloud solutions for a range of reasons, including flexibility, scalability, and security, Azure Diagnostics is a lesser-known but equally significant one.

What is Azure Diagnostics, and how does it work?

One of the most important advantages of Azure cloud services is scalability. Enterprises can scale applications and run them on many Azure VMs as the number of users increases. The Azure diagnostics extension makes it easier for Azure customers to keep track of the health of the application instances running on each of these virtual machines. Users can gather diagnostic information from large numbers of VM instances using the Azure diagnostics extension. Once a user has configured the diagnostics extension on a virtual machine, a central record of all diagnostic data from all virtual machines is available in a single storage account. In the event of a bug, Azure administrators can check timestamps and pinpoint the precise affected VM in the VM cluster. Microsoft Azure initially published the extension only for Cloud Services; however, it is now also available for Azure Virtual Machines and Infrastructure as a Service (IaaS). In this chapter, we'll go through five reasons to enable the Azure diagnostics extension.

1. Simple to set up: Windows Azure Diagnostics configuration is simple and takes just a few clicks. The configuration is done via a graphical user interface, and the latest SDK has made the entire process graphical. In fact, there are many customisation options for aligning diagnostic metrics with customer demands and objectives.

2. Simple to view: Diagnostic logs can be viewed in a variety of places in Azure, such as a single storage account, event hubs, or a Log Analytics workspace. These features are available across all CDN endpoints, regardless of the user's Azure pricing tier. Additionally, Azure allows for customisable log viewing: diagnostic logs can be exported to a variety of viewing destinations, including Excel graphs and an Azure Log Analytics workspace. The configurable viewing and visualisation allow for convenient real-time VM health monitoring. A screenshot of CDN core metrics is shown below.

3. Only Errors: Azure Diagnostics allows users to filter log entries in advance. By default, the extension transfers every log entry to the storage destination. However, users can narrow this down by using filters such as "Errors Only" to track, monitor, and analyse "error" and "critical" log entries.

However, users should be aware that if the "Errors Only" filter is applied, no performance counter data will be sent.

4. Azure Monitor:

Figure 8.20 Azure Monitor

Azure Monitor provides a variety of functions for analysing the health of cloud and on-premises applications. It alerts you to issues that are hurting the performance of the application and other resources. Furthermore, the use of various performance counters makes diagnosis simple, clear, and decision-centric. By combining Azure diagnostics with compute agents, users can enrich the information collected for performance monitoring and react in real time to crucial scenarios such as threat alerts, auto-scaling, and so on.

5. Individualised strategy: Last but not least, Azure provides the custom WAD (Windows Azure Diagnostics) extension for Windows users who want more flexibility and control. Users can create custom performance metrics with up to ten dimensions. The custom WAD extension gives the user complete control over factors such as the data captured from cloud resources, the data storage procedure, and the metrics sent to Azure Monitor.
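The same metrics that Azure Monitor charts in the portal can also be pulled programmatically. The sketch below assumes the azure-identity and azure-monitor-query Python packages; the resource ID and metric name are placeholders, and method names have shifted between SDK versions, so treat it as an outline rather than a drop-in script.

```python
# Hedged sketch: reading a performance metric from Azure Monitor in Python.
# Assumes the azure-identity and azure-monitor-query packages; the resource ID
# and metric name are placeholders, and method names may differ between
# SDK versions.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/demo-rg"
    "/providers/Microsoft.Compute/virtualMachines/demo-vm"
)

result = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],      # a common VM performance counter
    timespan=timedelta(hours=1),          # the last hour
    granularity=timedelta(minutes=5),     # one data point every five minutes
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```

A script such as this could feed the alerting or auto-scaling reactions mentioned above, but the cost considerations discussed next still apply to every metric you choose to collect.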

Things users should think about when trying to cut costs:

Monitoring and analysing data in cloud resources can help a company get the most out of its cloud budget. Before using diagnostic extensions, users must consider all cost implications. Here are some pointers that users should keep in mind:

i) Identify and collect only business- or application-critical data.
ii) Choose performance counters carefully and economically.
iii) Clean up diagnostic data on a regular basis.

Cloud computing has become essential to enterprise mobility, productivity, and success. It is a tool that assists businesses in replacing their ageing IT infrastructure with an effective and scalable digital ecosystem.

8.6 SUMMARY

• This chapter analysed the definition of Microsoft Azure and its architecture, and the concepts behind that architecture were explored in detail. Azure's fundamental purpose is to replace or complement your existing on-premises infrastructure. It also provides a wide variety of other services that improve the working of numerous divisions within your firm and assist you in resolving crucial business difficulties.

 Subscription- A subscription is a contract in which products, services, or shares are sold on a regular basis rather than on an individual basis. Customers must sign a contract or agree to certain terms and conditions in order to subscribe to a service.  ARM - In Azure, Azure Resource Manager (ARM) is the native infrastructure as code (IaC) platform. It allows you to centralise Azure resource management, deployment, and security. 8.8LEARNING ACTIVITY 1. Create, configure, deploy and monitor your own app in Microsoft Azure 2. Is it possible to create a VM in a Virtual Network using Microsoft Azure Resource Manager? 8.9 UNIT END QUESTIONS A. Descriptive Questions Short Questions 1. What is Azure active directory? 2. How do you publish websites using webmatrix? 3. How Azure Diagnostics works? 4. How do you Monitor the websites? List out the steps. 5. How do you deploy website using notepad? Long Questions 1. Evaluate Microsoft azure architecture in detail. 2. Compare ARM and classic portal. 3. How can we create new website in Azure? 4. List out the advantages of Monitoring and analyzing data in the cloud resources 5. Create steps to publish websites from visual studio. B. Multiple choice Questions 1. Which of the following tool will be useful when publishing the content to Azure Websites? 139 CU IDOL SELF LEARNING MATERIAL (SLM)

a. HTML
b. Visual Studio
c. Webmatrix
d. All of these
2. CDN stands for
a. Content delivery network
b. Content developed network
c. Content deploy network
d. Content destruct network
3. WAD stands for
a. Windows Azure Delivery
b. Windows Azure Diagnostics
c. Windows Azure Deployment
d. Windows Azure Development
4. While publishing a website from Visual Studio, how do you verify the connection?
a. By clicking Validate connection
b. By clicking New connection
c. By clicking Publish
d. By clicking OK
5. If web application logs are to be written to Azure Tables, which of the following options should be enabled?
a. Application Logging (Blob Storage) (On/Off)
b. Application Logging (Table Storage) (On/Off)
c. Application Logging (File System) (On/Off)
d. None of these
140 CU IDOL SELF LEARNING MATERIAL (SLM)

Answers
1-d, 2-a, 3-b, 4-a, 5-b.
8.10 REFERENCES
Reference books
 Fundamentals of Azure by Michael Collier and Robin Shahan.
Websites:
 https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/n-tier/n-tier-sql-server
 https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration
141 CU IDOL SELF LEARNING MATERIAL (SLM)

UNIT 9 - RESOURCE MANAGEMENT
STRUCTURE
9.0 Learning Objective
9.1 Introduction
9.2 Resource Management
9.3 Provision of resource allocation in cloud computing
9.4 Summary
9.5 Keywords
9.6 Learning Activity
9.7 Unit End Questions
9.8 References
9.0 LEARNING OBJECTIVES
After studying this unit students will be able to:
 Evaluate resource management
 Analyze the provision of resource allocation in cloud computing
9.1 INTRODUCTION
Cloud computing is an emerging era of remote, Internet-based computing in which users may quickly access their resources via the Internet from any computer. Cloud computing is delivered as a utility since it is accessible to cloud users on demand. It is a straightforward pay-per-use consumer-provider paradigm built on a large pool of shared resources. As a result, just as in any other computing paradigm, resource management is a major concern in cloud computing. Because resources are finite, it is extremely difficult for cloud providers to deliver all requested resources, and from the providers' perspective cloud resources need to be allocated fairly and efficiently. Accordingly, this chapter presents a step-by-step overview of the resource management model for cloud computing.
9.2 RESOURCE MANAGEMENT
Classification of cloud resources
Cloud computing is a platform that allows cloud users / cloud consumers to rent resources as a service through the Internet. As a result, we can say that cloud computing provides computing as a utility, because it is available to cloud users on demand.
142 CU IDOL SELF LEARNING MATERIAL (SLM)

A resource in cloud computing is any service that may be utilized by cloud users / cloud consumers. Many academics have classified resources into physical and logical resources, or into hardware and software resources. Cloud providers manage numerous resources in cloud computing. Because cloud computing is a utility-based computing model, this study categorizes cloud resources according to their utility. Table 9.1 depicts the categorization of resources in cloud computing in further detail.
Table 9.1 Classification of Cloud resources
Fast computation
  Physical: Processor, memory, hosts / workstation
  Logical: Operating systems, algorithms, application web proxy, softwares like Hadoop
Storage
  Physical: Hard drive, flash drive, database servers
Communication
  Physical: Intermediate devices, hosts, sensors, physical communication link
  Logical: APIs, bandwidth, delay, protocols, virtual communication link
Power / energy
  Physical: Cooling devices, UPS
Security
  Logical: Trust, authentication, integrity, privacy, availability
1. Fast Computation Utility: In a cloud computing context, this sort of resource provides fast computational capability; this fast computation utility enables Computation as a Service (CaaS). Processing power, memory size, and efficient algorithms are all examples of the fast computation utility.
2. Data Storage Utility: Rather than storing data on a local storage device, users can store it on a remote storage device. Thousands of hard discs, flash drives, database servers, and other
143 CU IDOL SELF LEARNING MATERIAL (SLM)

storage devices make up the storage utility. Because computer systems are prone to failure over time, data redundancy is essential. Because of the cloud's time-varying service paradigm, storage utilities must include characteristics such as cloud elasticity. Through the storage utility, cloud computing provides Storage as a Service (StaaS).
3. Communication Utility: This utility is also known as the network utility, and it enables Network as a Service (NaaS). The communication utility is inextricably linked to the fast computation and storage utilities. Physical resources (intermediate devices, hosts, sensors, physical communication links) and logical resources (bandwidth, latency, protocols, virtual communication links) make up the communication utility. Every service in cloud computing is delivered over the high-speed Internet, so from a network standpoint bandwidth and delay are the most critical factors.
4. Power / Energy Utility: A great deal of research is being done on energy-saving strategies in cloud computing. Using power-aware strategies, energy costs can be lowered drastically. The power consumption of cloud computing is extremely high due to the thousands of data servers. These resources are centered on cooling devices and UPS systems, and they can be regarded as secondary resources.
5. Security Utility: In every computing environment, security is a major concern. As cloud users, we want cloud services that are highly dependable, trustworthy, safe, and secure.
9.3 PROVISION OF RESOURCE ALLOCATION IN CLOUD COMPUTING
The purpose of resource management in cloud computing is to provide high resource availability, resource sharing, fulfilment of the time-variant service model, and efficient and reliable resource usage. Resource management, in the context of cloud computing, is a procedure that effectively and efficiently manages the aforementioned resources while also delivering QoS assurances to cloud users. The first phase is the initial resource assignment, in which resources are requested for the first time by an application (on behalf of cloud users). Figure 9.3 depicts the sequential stages that must be completed in this phase.
144 CU IDOL SELF LEARNING MATERIAL (SLM)

Figure 9.2 Taxonomy of resource management in Cloud
Figure 9.3 Resource Assignment
1. Request Identification: The first and most important step in resource assignment is to identify the request. In this step, the cloud provider identifies the resources requested by the application.
2. Resource Gathering / Resource Formation: After the request is identified in step 1, resources are gathered or formed. This stage determines which resources are available, and it may also include the creation of bespoke resources.
145 CU IDOL SELF LEARNING MATERIAL (SLM)

3. Resource Brokering: This phase entails negotiating resource availability with cloud customers to ensure that resources are available when needed.
4. Resource Discovery: In this step, the available resources are logically grouped based on the needs of cloud users.
5. Resource Selection: This stage involves selecting the best resources from among those discovered to meet the needs of cloud users.
6. Resource Mapping: In this step, cloud providers map virtual resources to physical resources (such as nodes, links, and so on).
7. Resource Allocation: In this step, resources are allocated and distributed to cloud users. The main purpose is to meet the needs of cloud users while also generating revenue for cloud providers. A simplified sketch of these assignment steps is given below.
146 CU IDOL SELF LEARNING MATERIAL (SLM)
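To make these assignment steps concrete, the sketch below walks a single request through discovery, selection, and allocation using a simple first-fit policy. The class names, capacities, and the first-fit rule itself are illustrative assumptions for teaching purposes, not the implementation of any particular cloud provider.

```python
# Illustrative sketch of the resource assignment phase: a request is matched
# against a pool of available resources using a simple first-fit policy.
# Names, capacities, and the policy are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpu_cores: int
    memory_gb: int
    allocated: bool = False

@dataclass
class Request:
    user: str
    cpu_cores: int
    memory_gb: int

def discover(pool, request):
    """Resource discovery: group the resources that can satisfy the request."""
    return [r for r in pool
            if not r.allocated
            and r.cpu_cores >= request.cpu_cores
            and r.memory_gb >= request.memory_gb]

def select_and_allocate(pool, request):
    """Resource selection + allocation: first-fit over the discovered set."""
    candidates = discover(pool, request)
    if not candidates:
        return None                    # brokering / renegotiation would happen here
    chosen = candidates[0]             # first-fit; best-fit or other policies are possible
    chosen.allocated = True            # map and allocate the resource to the user
    return chosen

pool = [Resource("host-1", 4, 16), Resource("host-2", 8, 32)]
req = Request(user="cloud-user-1", cpu_cores=6, memory_gb=24)
allocated = select_and_allocate(pool, req)
print(allocated.name if allocated else "request rejected")   # -> host-2
```

A real provider would add brokering (negotiating availability and SLAs) before allocation and pricing after it, but the control flow follows the same identify, discover, select, map, and allocate order described above.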

Following these steps, resource optimization is carried out periodically for two types of resources: non-virtualized resources and virtualized resources. Non-virtualized resources are physical resources that are used directly, without a virtualization layer.
1. Non-virtualized resources (refer Figure 9.4)
(a) Resource Monitoring: The first and most important stage in periodic resource optimization is resource monitoring. The various non-virtualized cloud resources are monitored to analyse resource usage, and the process also keeps track of the free resources that will be available in the future. The most difficult aspect of cloud resource monitoring is determining and defining the metrics and parameters.
(b) Resource Modeling / Resource Prediction: This stage forecasts the non-virtualized resources that cloud users' applications will use. Because cloud resources are not homogeneous in nature, this is one of the more difficult steps, and this non-uniformity makes it extremely challenging to predict resource requirements for peak and non-peak periods.
Figure 9.4 Resource optimization for non-virtualized Resources
(c) Resource Brokering: In this step, non-virtualized resources are negotiated with cloud users to ensure that they are available as needed.
(d) Resource Adaptation: Non-virtualized cloud resources can be scaled up or down to meet the needs of cloud customers. From the standpoint of cloud providers, this action may increase costs.
(e) Resource Reallocation: In this step, resources are reallocated / redistributed to cloud users. The main purpose is to meet the needs of cloud users while also generating revenue for cloud providers.
(f) Resource Pricing: From the standpoint of both cloud providers and cloud consumers, this is one of the most crucial steps. Pricing is done based on cloud resource utilization.
2. Virtualized resources (refer Figure 9.5)
(a) Resource Monitoring: The first and most important stage in periodic resource optimization is resource monitoring. The various virtualized cloud resources are monitored to analyse resource usage, and the process also keeps track of the free resources that will be available in the future. The most difficult aspect of cloud resource monitoring is determining and defining the metrics and parameters.
147 CU IDOL SELF LEARNING MATERIAL (SLM)

Figure 9.5 Resource optimization for virtualized Resources
(b) Resource Prediction / Modeling: This stage predicts the virtualized resources that cloud consumers' applications will require. Because resources are not uniform in nature, this is one of the most difficult steps, and this non-uniformity makes it extremely challenging to predict resource requirements for peak and non-peak periods.
(c) Resource Brokering: In this step, virtualized resources are negotiated with cloud consumers to ensure that they are available as needed.
(d) Resource Adaptation: Virtualized cloud resources can be scaled up or down to meet the needs of cloud customers. From the standpoint of cloud providers, this action may increase costs.
(e) Resource Bundling: Various non-virtualized resources can be combined into virtualized resources based on the requirements.
(f) Resource Fragmentation: To release non-virtualized resources, virtualized resources must be fragmented. After this stage, the freed non-virtualized resources can again be bundled into virtualized resources as part of resource bundling.
(g) Resource Reallocation: In this step, resources are reallocated / redistributed to cloud users. The main purpose is to meet the needs of cloud users while also generating revenue for cloud providers.
(h) Resource Pricing: From the standpoint of both cloud providers and cloud consumers, this is one of the most crucial steps. Pricing is done based on cloud resource utilization. A simplified sketch of this periodic optimization loop is given below.
148 CU IDOL SELF LEARNING MATERIAL (SLM)
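To tie the optimization steps together, the following sketch runs one iteration of a monitoring, prediction, adaptation, and pricing loop for virtualized resources. The thresholds, the naive averaging forecast, and the per-VM-hour rate are illustrative assumptions; real providers use far more sophisticated forecasting and SLA-aware policies.

```python
# Illustrative sketch of one periodic resource-optimization cycle for
# virtualized resources: monitor -> predict -> adapt (scale) -> price.
# Thresholds, the averaging forecast, and the price rate are assumptions.

def monitor(utilization_history):
    """Resource monitoring: return the latest observed utilization (0.0-1.0)."""
    return utilization_history[-1]

def predict(utilization_history):
    """Resource prediction: naive forecast = mean of the recent observations."""
    recent = utilization_history[-3:]
    return sum(recent) / len(recent)

def adapt(vm_count, forecast, low=0.30, high=0.75):
    """Resource adaptation: scale the pool of VMs up or down."""
    if forecast > high:
        return vm_count + 1              # scale up to avoid overload / SLA violations
    if forecast < low and vm_count > 1:
        return vm_count - 1              # scale down to cut the cost of idle resources
    return vm_count

def price(vm_count, hours, rate_per_vm_hour=0.10):
    """Resource pricing: charge for the resources actually held."""
    return round(vm_count * hours * rate_per_vm_hour, 2)

history = [0.60, 0.80, 0.90]             # utilization reported by monitoring
vms = 2
current = monitor(history)               # 0.90, the latest observation
forecast = predict(history)              # ~0.77, above the scale-up threshold
vms = adapt(vms, forecast)               # -> 3 VMs after scaling up
print(current, round(forecast, 2), vms, price(vms, hours=1))   # 0.9 0.77 3 0.3
```

Bundling and fragmentation would appear in a fuller version as the steps that create and release the virtual machines themselves; here the VM count simply stands in for them.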

9.4 SUMMARY
 Cloud computing makes it possible to use cloud resources as a utility. Cloud computing is an Internet-based technology that provides dynamic and flexible resource allocation for reliable, assured services in a pay-as-you-go paradigm. It is a type of computing that is becoming increasingly popular, with many different facets and a large number of shared resources.
 The Resource Management System (RMS) is the key component of network computing systems, and it has several functions. Because of the sheer size of current data centres, resource management in cloud infrastructure is a complicated challenge. The unique feature of cloud computing is that any number of cloud services can be accessed at the same time by any number of users. A significant challenge is determining how many consumers to support on a single server, and where to place user applications at any given time. Cloud resource management requires complex policies and decisions for multi-objective optimization, which can be difficult to achieve. Because of the sheer size of the cloud infrastructure, and the unpredictable interactions of the system with a huge number of users, effective resource management is exceedingly difficult.
 At this scale it is impossible to have accurate global state information, and the large user population makes it nearly impossible to predict the type and intensity of the system workload. After presenting an overview of policies and mechanisms for cloud resource management, this chapter discussed energy efficiency and cloud resource utilisation, along with the implications of application scaling for resource management. The chapter began by classifying cloud resources and then presented a taxonomy of cloud resource management. Resource management in cloud computing was presented as a sequential process involving multiple strategies. The chapter also stressed that effective cloud resource management must meet criteria such as resource efficiency, cost reduction for cloud providers, and energy / power reduction.
9.5 KEYWORDS
 API - An application programming interface (API) is a computing interface that specifies how multiple software programmes, or hybrid hardware-software intermediaries, interact.
 StaaS - Storage as a Service (StaaS) is a managed service in which the customer is given access to a data storage platform by the provider. Customers access individual storage services via standard system interface protocols or application programming interfaces (APIs).
149 CU IDOL SELF LEARNING MATERIAL (SLM)

 NaaS - Network as a Service (NaaS) delivers network capabilities such as Software Defined Networking, programmable networking, and API-based operation, and supports WAN services, transport, hybrid cloud, multi-cloud, Private Network Interconnect, and Internet Exchanges.
 CaaS - Computation as a Service (CaaS) is a service model in which computing capability (processing power, memory, and efficient algorithms) is delivered to cloud consumers on demand as a utility.
9.6 LEARNING ACTIVITY
1. With so many cloud apps, how do you keep up with provisioning and deprovisioning?
2. Explain in detail how to set up a private cloud for an academic university using any one of the cloud environments.
9.7 UNIT END QUESTIONS
A. Descriptive Questions
Short Questions
1. What are the goals of cloud computing resource management?
2. What is cloud resource management, and how does it work?
3. What does the term "resources" signify in the context of cloud computing?
4. Define fast computation utility resources.
5. How would you assign resources?
Long Questions
1. What is the need for cloud resource management?
2. How does cloud computing handle resource management?
3. How can cloud resource management be improved?
4. Classify the cloud resources in detail.
5. Describe in detail resource optimization for virtualized and non-virtualized resources.
B. Multiple choice Questions
1. Point out the correct statement:
150 CU IDOL SELF LEARNING MATERIAL (SLM)

