

CU-MCA-SEM-II-Cloud Computing-Second Draft

Published by Teamlease Edtech Ltd (Amita Chitroda), 2023-07-17 07:23:29




Figure 3.4 Community cloud scenario

Figure 3.4 depicts a general view of the community cloud usage scenario, together with a reference architecture. Consumers of a given community cloud belong to a well-defined group with similar concerns or needs; these can be government agencies, businesses, or even ordinary users, but they all share the same requirements when interacting with the cloud. This distinguishes community clouds from public clouds, which serve a large number of customers with varying demands. Community clouds are also distinct from private clouds, which typically deliver services within the organization that owns the cloud. From an architectural standpoint, a community cloud is often built over several administrative domains. This implies that a variety of entities, including government agencies, commercial businesses, research institutions, and perhaps even public virtual infrastructure providers, all contribute resources to the cloud infrastructure. The following are some of the sectors in which community clouds can be useful:

• The media business. Companies in the media business are searching for low-cost, flexible, and simple solutions to improve the efficiency of content production. Most media projects involve a large number of collaborators. The production of digital content, in particular, is the result of a collaborative process that includes large-scale data movement, compute-intensive rendering tasks, and complex workflow executions. Community clouds can create a shared environment in which services promote business-to-business collaboration while also providing the bandwidth, CPU, and storage needed to support media production effectively. 51 CU IDOL SELF LEARNING MATERIAL (SLM)

• The healthcare sector. There are a variety of scenarios in which community clouds can be useful in healthcare. In particular, community clouds can provide a global platform for sharing information and expertise without exposing sensitive data stored in private infrastructure. The inherently hybrid deployment model of community clouds readily allows patient-related data to be kept in a private cloud while shared infrastructure is leveraged for non-critical services and for automating processes within hospitals.

• The energy sector and other critical industries. In these sectors, community clouds can bundle a complete set of solutions covering the management, deployment, and orchestration of services and operations. Because these sectors involve many suppliers, vendors, and organizations, a community cloud offers the infrastructure needed to ensure a level playing field.

• The government sector. Legal and political constraints may limit the use of public clouds for government services. Furthermore, governmental processes involve several institutions and organizations and aim at providing strategic solutions at the local, national, and international levels. They include business-to-government, citizen-to-government, and possibly business-to-business processes; invoice approval, infrastructure planning, and public hearings are a few examples. A community cloud can provide the distributed environment on which to build a communication platform for such tasks.

• Scientific research. Community clouds in the form of science clouds are an interesting example. Here, scientific computing is the shared interest motivating different organizations to share a large distributed infrastructure.
The term "community cloud" can also identify a more specific type of cloud that arose from concerns about vendor lock-in and aspires to combine the ideas of digital ecosystems with cloud computing. Such a community cloud is created by aggregating the spare resources of user machines, with an architecture that allows each user to be, at the same time, a consumer, a producer, or a coordinator of the services offered by the cloud. The advantages of these community clouds include the following:

• Openness. By removing the dependency on cloud vendors, community clouds are open platforms in which fair competition between different solutions can take place.

• Community. Because the infrastructure is based on a collective that provides resources and services, the system is more scalable: it can grow simply by expanding its user base.

• Graceful failures. Since no single provider or vendor controls the infrastructure, there is no single point of failure.

• Control and convenience. Since the cloud is shared and controlled by the community, which makes all decisions through a democratic process, there is no conflict between convenience and control.

• Environmental sustainability. Because it exploits underutilized resources, the community cloud is claimed to have a smaller carbon footprint. Moreover, such clouds grow and shrink in a symbiotic relationship with the community's demand, which in turn sustains them.

This is an alternative vision of a community cloud, emphasizing the social aspect of clouds formed by aggregating the resources of community members. The idea of a heterogeneous infrastructure built to serve the needs of a community of people is also reflected in the earlier definition, but there the focus is on the commonality of interests that brings the cloud's users together as a community. In both cases, the sense of community is essential.

3.7 COMPARE CLOUD COMPUTING WITH TRADITIONAL CLIENT/SERVER ARCHITECTURE

Technically, there is not much difference. In a client/server design, a user logs on to a server and verifies their identity using credentials stored on the server rather than on the local computer, before gaining access to the operating system. Cloud access normally occurs after the user has signed in to the computer or device with locally stored credentials, without requiring further manually supplied credentials. Both approaches let users store important data away from their own computers. Some argue that cloud storage is even more transparent to users, which is certainly correct.

Figure 3.5 Traditional vs cloud

Client/server architectures are typically used in businesses that require central administration and control of user computers and computer access, such as centrally maintained user credentials, operating system upgrades, and application updates. Although cloud storage can be seen as a transparent sub-function of the client/server architecture, the opposite is not true: a client/server architecture is not automatically a sub-function of cloud storage, though we may reasonably expect the latter to become the dominant paradigm sooner rather than later. No one can know completely how safe the cloud is, or whether access to customer data is genuinely secure.

The major distinction between cloud computing and traditional networking or hosting is the implementation, in one word: virtualization. Virtualization enables large-scale scalability, allowing customers to access almost unlimited resources. In a traditional networking system, the server is fixed in hardware, so if you want to scale out to more users than the present hardware can handle, you have to spend more money on upgrades, and there will still be a limit. With a cloud computing architecture, however, several servers are in place from the outset, and virtualization is used to present just the resources that a given user needs, allowing the system to scale from modest personal resource demands to large corporate resource requirements. A cloud provider can easily increase resources, and the customer is only charged for what they use. In traditional networking, even if you only need a small amount of resource, you must pay for everything: hardware, installation, and maintenance, or simply rent it for a fixed monthly fee.

To summarize, cloud architecture is, or may be, a type of client/server architecture in which the user is cleverly shielded from the client/server details of its implementation. This all depends on who owns which cloud and which cloud we are discussing.
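The pay-for-what-you-use difference described above can be illustrated with a small calculation. All figures below (server price, hourly instance rate, and the demand profile) are made-up numbers chosen only for illustration:

```python
import math

# Hypothetical cost comparison: fixed on-premises capacity vs. elastic cloud
# capacity. Every price and the demand profile are illustrative assumptions.

def fixed_cost(peak_demand, capacity_per_server=100, server_price=5000):
    """Traditional setup: buy enough servers up front to cover peak demand."""
    servers_needed = math.ceil(peak_demand / capacity_per_server)
    return servers_needed * server_price

def elastic_cost(hourly_demand, capacity_per_instance=100, price_per_hour=0.5):
    """Cloud setup: pay per instance-hour, scaling to each hour's demand."""
    total = 0.0
    for demand in hourly_demand:
        instances = math.ceil(demand / capacity_per_instance)
        total += instances * price_per_hour
    return total

# Demand (requests/second) for each hour of one day: quiet most of the day,
# with a short evening peak.
demand = [50] * 20 + [900] * 4

print(fixed_cost(max(demand)))  # hardware sized for the peak, paid up front
print(elastic_cost(demand))     # pay only for the instance-hours actually used
```

With these toy numbers, the fixed setup must buy nine servers to survive four peak hours, while the elastic setup pays for nine instances only during those hours, which is exactly the "idle versus allocated infrastructure" distinction the text draws.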
Expect client/server designs in the near future to look much more like the cloud than the networks of the past, but it will still be the same thing: user data is stored remotely but may be changed locally and accessed by the user regardless of the platform they use.

3.8 PROS AND CONS OF CLOUD COMPUTING

Business benefits of cloud computing

Building applications on the cloud has a number of clear commercial advantages. Listed below are a handful of them:

Almost no upfront infrastructure investment: If you need to build a large-scale system, real estate, physical security, hardware (racks, servers, routers, backup power supplies), hardware management (power management, cooling), and operations staff can be prohibitively expensive. Because of the significant initial expense, the project would

usually require several rounds of management approval before it could begin. With utility-style cloud computing, there are no fixed or startup costs.

Just-in-time infrastructure: In the past, if your application grew in popularity but your systems or infrastructure could not keep up, you became a victim of your own success; conversely, if you invested heavily and the application did not take off, you became a victim of your own failure. When you deploy applications in the cloud with just-in-time self-provisioning, you do not have to worry about pre-purchasing capacity for large-scale systems. Because you scale only as you grow and pay only for what you use, this increases agility, reduces risk, and lowers operating costs. System administrators typically worry about acquiring hardware (when they run out of capacity) and about infrastructure utilization (when they have excess and idle capacity). With the cloud, they can allocate resources to projects more effectively and efficiently by letting applications request and relinquish resources on demand.

Utility-style pricing: With utility-style pricing, you are charged only for the infrastructure you have actually used, not for allocated but idle infrastructure. This gives cost-cutting a whole new meaning. When you deploy an optimization patch to your cloud application, you can see immediate cost reductions (often as soon as the next month's bill). If a caching layer cuts your data requests by 70%, for example, the savings start immediately and show up in your next bill. Furthermore, if you build platforms on top of the cloud, you can offer your clients the same flexible, usage-based pricing model.

Reduced time to market: One of the most effective ways to speed up processing is parallelization.
With cloud architectures, if a compute-intensive or data-intensive job that can be executed in parallel takes 500 hours on a single machine, the same job can be spawned across 500 instances and completed in one hour. Having an elastic infrastructure provides the ability to exploit parallelization in such a cost-effective manner, which reduces time to market.

Technical benefits of cloud computing

The following are some of the technological advantages of cloud computing:

 Automation ("scriptable infrastructure"): Using programmable (API-driven) infrastructure, you can create repeatable build and deployment processes.

 Auto-scaling: You can scale your applications up and down to match unexpected demand without human intervention. Auto-scaling promotes automation and increases productivity. With careful planning and awareness of your traffic patterns, scale your application up or down to match expected demand and keep your costs low.
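The auto-scaling idea just described can be sketched as a simple control loop. The monitored metric (average CPU utilization), the thresholds, and the instance limits below are hypothetical values chosen only for illustration:

```python
# Toy auto-scaling policy: add an instance when average CPU is high,
# remove one when it is low. Thresholds and limits are illustrative.

def desired_instances(current, avg_cpu, scale_out_at=70.0, scale_in_at=30.0,
                      min_instances=1, max_instances=10):
    """Return the new instance count after one evaluation of the policy."""
    if avg_cpu > scale_out_at:
        current += 1          # demand is high: scale out
    elif avg_cpu < scale_in_at:
        current -= 1          # capacity is idle: scale in
    return max(min_instances, min(max_instances, current))

# Simulate a day: load ramps up, stays high, then falls off.
count = 1
for cpu in [20, 50, 80, 90, 85, 60, 25, 10]:
    count = desired_instances(count, cpu)
    print(cpu, count)
```

A real cloud auto-scaler works the same way in outline: a monitoring service feeds metrics into a policy that adjusts the fleet size between configured bounds, with no human intervention.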

 More efficient development life cycle: Development and test environments can easily be cloned from production systems, and staging environments can easily be promoted to production.

 Improved testability: You never run out of test hardware. Inject and automate testing at every stage of the development process. You can spin up an "instant test lab" with preconfigured environments for just the duration of the testing phase.

 Disaster recovery and business continuity: Keeping a fleet of DR servers and data storage in the cloud is less expensive. You can take full advantage of the cloud's geo-distribution and quickly replicate the environment in another location.

 Overflow traffic handling: By routing excess traffic to the cloud, you can build an overflow-proof application with only a few clicks and smart load-balancing strategies.

Challenges of cloud computing

The new cloud computing paradigm offers several benefits and advantages over prior computing paradigms, and many businesses are adopting it. However, researchers and practitioners in the field are still grappling with a number of issues (Leavitt, 2009). Here is a quick rundown of the main challenges.

Performance: The major performance concern is with transaction-intensive and other data-intensive applications, for which cloud computing may not be sufficient. Furthermore, users located far from cloud providers may experience significant latency and delay.

Security and privacy: Companies are still concerned about security when using cloud computing. Clients worry about their vulnerability to attack when critical information and IT resources sit on the other side of the firewall. The usual security answer presupposes that cloud computing providers strictly follow security policies.
Control: Some IT departments are concerned because cloud computing providers have full control over the platforms. Cloud computing providers generally do not design platforms for specific companies and their business practices.

Bandwidth cost: With cloud computing, companies can save money on hardware and software, but they may incur higher network bandwidth charges. Bandwidth cost may be low for smaller Internet-based applications that are not data-intensive, but it can grow significantly for data-intensive applications.

Reliability: Cloud computing does not always guarantee round-the-clock availability; there have been cases in which cloud computing services were down for several hours. In the future we can expect more cloud computing providers, richer services, established standards, and best practices. HP Labs, Intel, and Yahoo have developed the Cloud Research Test Bed, with locations in Asia, Europe, and North America, with the goal of producing breakthroughs such as cloud computing-specific CPUs. IBM has unveiled the Research Computing Cloud, which is a set of computing resources that can be used on demand from anywhere in the world to support business processes.

3.9 KEY TECHNOLOGIES

This section discusses the essential technologies that underpin cloud computing: virtualization, Web services and service-oriented architecture, service flows and workflows, and Web 2.0 and mashups.

Virtualization: The ability to virtualize and share resources among different applications with the goal of better server utilization is a benefit of cloud computing. Virtualized infrastructure is the foundation for the majority of high-performance clouds. Virtualization has been used in data centers for some years as a successful IT strategy for consolidating servers. More broadly used to pool infrastructure resources, virtualization can also provide the essential building blocks for a cloud environment, enhancing agility and flexibility. The major focus of virtualization today is still on servers, but virtualizing storage and networks is becoming common practice. According to a Gartner poll of 505 data center managers throughout the world, planned or ongoing virtualization of infrastructure workloads was expected to rise from over 60% in 2012 to over 90% in 2014. As a result of this continued expansion, cloud computing has become a natural next step for many businesses.
Virtualization is the first practical step in building a cloud infrastructure and the basis for an agile, scalable cloud. Virtualization abstracts and isolates the underlying hardware into virtual machines (VMs), each with its own runtime environment, with numerous VMs sharing a single hosting environment for compute, storage, and networking resources. These virtualized resources are essential for managing data, transferring it into and out of the cloud, and executing high-utilization, high-availability applications.
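The resource-partitioning idea behind virtualization can be sketched with a toy model. The class, method names, and capacities below are invented for illustration; a real hypervisor enforces this isolation in hardware and software rather than in a Python dictionary:

```python
# Toy model of how a hypervisor partitions a host's physical resources
# into isolated virtual machines. Names and capacities are illustrative.

class Host:
    def __init__(self, cpus, ram_gb):
        self.cpus, self.ram_gb = cpus, ram_gb
        self.vms = {}  # name -> (cpus, ram_gb)

    def free_cpus(self):
        return self.cpus - sum(cpu for cpu, _ in self.vms.values())

    def free_ram(self):
        return self.ram_gb - sum(ram for _, ram in self.vms.values())

    def create_vm(self, name, cpus, ram_gb):
        """Allocate an isolated slice of the host, or refuse if oversubscribed."""
        if cpus > self.free_cpus() or ram_gb > self.free_ram():
            return False
        self.vms[name] = (cpus, ram_gb)
        return True

    def destroy_vm(self, name):
        """Release the VM's resources back to the shared pool."""
        self.vms.pop(name, None)

host = Host(cpus=16, ram_gb=64)
print(host.create_vm("web", 4, 8))     # fits in the free pool
print(host.create_vm("db", 8, 32))     # fits
print(host.create_vm("batch", 8, 16))  # refused: only 4 CPUs remain
host.destroy_vm("web")
print(host.create_vm("batch", 8, 16))  # succeeds after resources are released
```

The model captures the properties the text lists: VMs share pooled resources, each gets an isolated allocation, and capacity freed by one VM is immediately available to another, which is what enables rapid provisioning and high utilization.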

Figure 3.6 Virtualization

A hypervisor (software, firmware, or hardware that creates and runs virtual machines) manages virtualization on a host server. The virtual machines are called guest machines. The hypervisor presents a virtual operating platform to each guest operating system, and multiple instances of guest operating systems can share a single host server. Virtualization also allows for resource sharing, VM isolation, and load balancing, which are all important features for cloud computing. These qualities provide scalability, high utilization of pooled resources, rapid provisioning, workload isolation, and increased uptime in a cloud environment.

The trend in virtualization today has shifted from cost reduction through data center consolidation toward increased flexibility and agility, with virtualization used pervasively for faster service launch and dynamic workload distribution. Pervasive virtualization is a strategic methodology for migrating legacy applications into the cloud to meet stated objectives, or as time and budget allow. Better service quality, increased availability and business continuity, quicker resource deployment, and lower energy usage are just a few of the advantages. Figure 3.6 depicts the role of virtualization. Virtual machine technologies such as VMware and Xen, as well as virtual networks such as VPNs, are examples of virtualization technologies. Virtual machines provide on-demand virtualized IT infrastructure, while virtual networks enable users to access cloud services through a customised network environment.

Web services and service-oriented architecture

Although Web services and service-oriented architecture (SOA) are not new concepts, they serve as a foundation for cloud computing. Web services are the most common type of cloud service, with WSDL, SOAP, and UDDI as the industry standards.
Inside clouds, a service-oriented architecture organises and maintains Web services (Vouk, 2008). A SOA can also comprise a collection of cloud services that may be used on a variety of distributed platforms.
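To make the SOAP standard just mentioned concrete, the sketch below builds a minimal SOAP 1.1 request envelope by hand. The service namespace, operation name (`GetTemperature`), and parameter are hypothetical; in practice, a service's WSDL document defines the actual operations and types:

```python
# Building a minimal SOAP 1.1 request envelope. The service namespace and
# operation below are invented for illustration; a real Web service's WSDL
# would define the actual names.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/weather"  # hypothetical service namespace

def build_request(city):
    """Wrap one operation call (GetTemperature) in a SOAP Envelope/Body."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetTemperature")
    ET.SubElement(op, f"{{{SVC_NS}}}City").text = city
    return ET.tostring(envelope, encoding="unicode")

print(build_request("Chennai"))
# In a real exchange, this XML would be POSTed to the service endpoint with
# a Content-Type of text/xml and a SOAPAction header, and the response would
# come back wrapped in the same Envelope/Body structure.
```

The Envelope/Body nesting is the part fixed by the SOAP specification; everything inside the Body is service-specific, which is what WSDL describes and UDDI registries advertise.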

Service flows and workflows

The term "service flow and workflow" refers to an integrated view of service-based activities provided in clouds. Workflows have emerged as one of the most significant research areas in the database and information systems fields (Vouk, 2008).

Web 2.0 and mashups

Figure 3.7 Web 2.0 and mashup

Web 2.0 is a new concept that aims to enhance user creativity, information sharing, and collaboration through the use of web technology and design (Wang et al., 2008). A mashup, on the other hand, is a web application that combines data from more than one source into a single integrated tool. Both technologies are very beneficial in cloud computing. Figure 3.7 depicts a cloud computing architecture (Hutchinson and Ward, 2009) in which an application reuses numerous components. The components in this architecture are highly dynamic, run on a SaaS basis, and leverage SOA. Closer to the user, the components are smaller and more reusable. The components in the center include mashup servers and portals that aggregate and extend services. Data from one service (such as addresses in a database) can be combined with mapping data (from Yahoo or Google maps) to create aggregated views of the data.
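The address-plus-map example above can be sketched as a tiny mashup. Both data sets below are hard-coded assumptions standing in for two real services (a customer database and a geocoding API such as a mapping provider's Web service):

```python
# Toy mashup: combine address records from one "service" with coordinates
# from a second "service" into one aggregated view. In a real mashup the
# second source would be a live mapping/geocoding API; here both data sets
# are hard-coded for illustration.

customers = [  # what a customer-database service might return
    {"name": "Asha", "address": "12 Mount Road, Chennai"},
    {"name": "Ravi", "address": "4 MG Road, Bengaluru"},
]

geocodes = {   # what a geocoding service might return per address
    "12 Mount Road, Chennai": (13.06, 80.26),
    "4 MG Road, Bengaluru": (12.97, 77.61),
}

def mashup(records, coords):
    """Annotate each record with its map coordinates, when known."""
    merged = []
    for rec in records:
        merged.append({**rec, "lat_lon": coords.get(rec["address"])})
    return merged

for row in mashup(customers, geocodes):
    print(row["name"], row["lat_lon"])
```

The value of the mashup is exactly this join: neither source alone can answer "where are my customers on a map," but the aggregated view can.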

3.10 SUMMARY

 The main objective of this chapter is to give a clear understanding of the types of clouds and the capabilities current businesses need. Cloud computing is a new computing paradigm that provides a massive quantity of compute and storage resources to a large number of people at the same time. The resources are available to individuals (such as scientists) and businesses (such as start-up companies) who are willing to pay only for what they use.

 After reviewing the variety of technologies that contributed to the development of cloud computing, this introduction concludes that the new paradigm is the outcome of an evolution rather than a revolution.

 Cloud computing services aim to provide compute, storage, network, software, or a combination of these, as a service, in a scalable and cost-effective manner. The terms "infrastructure," "platform," and "software" refer to the different levels of abstraction that cloud computing services can provide, ranging from "raw" virtual servers to elaborate hosted applications.

 The sector has seen a noticeable increase in popularity as well as apparent success.

 To ensure the long-term viability of cloud computing, however, as explored in this chapter, both industry and academia must work together to address the important obstacles and risks that have arisen. Visible trends in the field include the emergence of standards; the creation of value-added services by augmenting, combining, and brokering existing compute, storage, and software services; and the availability of more providers at all levels, all of which contribute to increased competition and innovation. In this regard, there are many opportunities for practitioners interested in developing cloud computing solutions. The brief explanation of each type of cloud also helps us make the right decisions when designing for and choosing among business needs.
Although the cloud has been adopted by many companies for its many benefits, it also brings significant challenges. This chapter explores both the advantages and disadvantages of using the cloud in the emerging world. The comparison between the traditional client/server architecture and the cloud structure clarifies why we choose or decline the cloud in many situations. The key technologies of the cloud, such as virtualization, Web services, service-oriented architecture, service flows and workflows, and Web 2.0 and mashups, have been explained thoroughly in this chapter. We will discuss distributed and grid computing in the next chapter.

3.11 KEYWORDS

 NIST - The National Institute of Standards and Technology is a non-regulatory agency of the United States Department of Commerce that conducts research in the physical sciences.

 SOA - Service-oriented architecture is an architectural approach that encourages the use of services. It is used in software design, where application components provide services to other components over a network using a communication protocol.

 WSDL - The Web Services Description Language is an XML-based interface description language for specifying a web service's capabilities.

 SOAP - SOAP is a messaging protocol specification for exchanging structured data in computer networks when implementing web services.

 UDDI - The Universal Description, Discovery, and Integration registry is an XML-based registry that allows businesses all over the world to advertise themselves on the Internet.

 Hypervisor - A hypervisor is a type of emulator that creates and runs virtual machines using computer software, firmware, or hardware.

3.12 LEARNING ACTIVITY

1. Suppose you are a university administrator. What kind of cloud (public, private, community, or hybrid) would you prefer?

2. As a university administrator, describe the role of cloud computing in e-learning.

3.13 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions

1. List out the types of cloud.
2. What is meant by public cloud?
3. Differentiate between private and public clouds.
4. What is the use of OpenNebula?

5. List out the advantages of community cloud.

Long Questions

1. How is the private cloud implemented?
2. Detail the hybrid cloud.
3. Describe in detail the community cloud sectors.
4. Differentiate traditional computing from cloud computing.
5. What are the advantages of and challenges faced in cloud computing?
6. What are the key enabling technologies of cloud computing?

B. Multiple Choice Questions

1. Which of the following clouds is available to the general public?
a. Private cloud
b. Community cloud
c. Hybrid cloud
d. Public cloud

2. Resources and services are leased for the duration of the project and then released. This is called
a. Cloud bursting
b. Cloud organizing
c. Cloud gathering
d. Cloud testing

3. The ___________ cloud is more useful in the healthcare sector.
a. Public
b. Community
c. Hybrid
d. Private

4. SOA stands for
a. Service Oriented Architecture

b. Service Oral Architecture
c. Service Oriented All
d. Security Oriented Architecture

5. Which of the following deployment models is used in cloud computing?
a. public
b. private
c. hybrid
d. All of these

6. An organisation that sells cloud services owns which of the following clouds?
a. Public
b. Community
c. Hybrid
d. Private

Answers

1-d, 2-a, 3-b, 4-a, 5-d, 6-a.

3.14 REFERENCES

Reference books

 Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, "Mastering Cloud Computing"
 Kailash Jayaswal, Jagannath Kallakuruchi, Donald J. Houde, Dr. Devan Shah, "Cloud Computing: Black Book"
 Rajkumar Buyya, James Broberg, Andrzej M. Goscinski (Eds.), "Cloud Computing: Principles and Paradigms", Wiley, 2011.
 Cloud_computing_for_energy_management_in_smart_gri.pdf

Websites:

 https://timesofcloud.com/cloud-tutorial/characteristics-of-cloud-computing-as-per-nist/

 https://www.dummies.com/programming/cloud-computing/hybrid-cloud/how-bpaas-works-in-the-real-world-of-cloud-computing/
 https://cyfuture.com/blog/cloud-based-bpaas-all-you-need-to-know/

UNIT 4 – CLOUD COMPUTING

STRUCTURE

4.0 Learning Objectives
4.1 Introduction
4.2 Distributed Systems
4.3 Mainframe Computing
4.4 Cluster Computing
4.5 Grid Computing
4.6 Web 2.0
4.7 Service Oriented Computing
4.8 Summary
4.9 Keywords
4.10 Learning Activity
4.11 Unit End Questions
4.12 References

4.0 LEARNING OBJECTIVES

After studying this unit, students will be able to:

 Analyze the evolution of distributed computing technologies
 Evaluate Web 2.0 services
 Outline service-oriented computing

4.1 INTRODUCTION

Renting computing services by utilizing large distributed computing facilities is a long-standing idea. It dates back to the early 1950s, when mainframe computers were popular. Technology has progressed and improved since then, and this process has created a number of favorable conditions for the realization of cloud computing. The evolution of the distributed computing technologies that inspired cloud computing is depicted in Figure 4.1. As part of this historical evolution, we examine five core technologies that played a key role in the realization of cloud computing: distributed systems, virtualization, Web 2.0, service

orientation, and utility computing. The following sections elaborate each of these with examples.

4.2 DISTRIBUTED SYSTEMS

Figure 4.1 Evolution of distributed computing technologies (1950-2010)

Clouds are large distributed computing facilities that make their services available to third parties on demand. As a starting point, we use Tanenbaum et al.'s characterization of a distributed system: a distributed system is a collection of independent computers that appears to its users as a single coherent system. This is a general definition that encompasses a wide range of computer systems, and it highlights two key characteristics of a distributed system: it is made up of several independent components, and these components are perceived as a single entity by users. This is especially true in cloud computing, where clouds hide the complex architecture they rely on and provide consumers with a simple interface. The main goal of distributed systems is to share resources and make better use of them. This is particularly true in cloud computing, where the concept is taken to its logical conclusion and users lease resources (infrastructure, runtime environments, and services). Indeed, one of the driving forces behind cloud computing has been the availability of the enormous computing facilities of IT giants such as Amazon and Google, which found that delivering their computing capacity as a service allowed them to better utilize their infrastructure. Heterogeneity, transparency, flexibility, openness, concurrency, continuous availability, and independent failures are common characteristics of distributed systems. Clouds have

several of these characteristics, particularly scalability, concurrency, and continuous availability. Cloud computing is the result of three important milestones: mainframe computing, cluster computing, and grid computing.

4.3 MAINFRAME COMPUTING

Mainframes were the first large-scale computing facilities to make use of multiple processing units. They were powerful, highly reliable computers designed to handle vast amounts of data and perform massive input/output (I/O) operations. They were used mostly by large organizations for bulk data processing tasks such as online transactions, enterprise resource planning, and other operations involving the processing of significant amounts of data. Even though mainframes cannot be considered distributed systems, they offered large computational power: multiple processors were combined into a single unit and presented to users as a single entity. One of the most attractive features of mainframes was their ability to be highly reliable, "always on" systems capable of tolerating failures transparently. There was no need to shut down the system to replace failed components; the system would continue to function normally. Mainframes were mostly used for batch processing. Although their popularity and deployments have declined in recent years, evolved versions of these systems are still used for transaction processing (such as online shopping). Banking, airline ticketing, supermarkets, telecommunications, and government services are just a few examples.

4.4 CLUSTER COMPUTING

Cluster computing started as a low-cost alternative to mainframes and supercomputers. As the technological advances that produced larger and more efficient mainframes and supercomputers also made cheap commodity machines widely available, the availability of low-cost commodity machines increased.
These machines could then be connected by a high-bandwidth network and managed as a single system by special software tools. Starting in the 1980s, clusters became the standard technology for parallel and high-performance computing. Because they were built on commodity hardware, they were cheaper than mainframes and made high-performance computing accessible to a wide range of organizations, including universities and small research institutes. Cluster technology contributed to the evolution of several tools and frameworks, such as Condor, Parallel Virtual Machine (PVM), and Message Passing Interface (MPI).
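Frameworks such as PVM and MPI give cluster programs a standard way to scatter work across the machines of a cluster and gather the partial results back. As a rough single-machine analogy — worker threads standing in for cluster nodes, since a real cluster would move the chunks over the network — the scatter/compute/gather pattern can be sketched as follows:

```python
from concurrent.futures import ThreadPoolExecutor

def scatter(data, n_workers):
    """Split the input into one chunk per worker node."""
    chunk = (len(data) + n_workers - 1) // n_workers
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def cluster_sum(data, n_workers=4):
    """Scatter chunks to workers, compute partial sums, gather the results."""
    chunks = scatter(list(data), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(sum, chunks))  # each "node" sums its chunk
    return sum(partials)                        # gather/reduce step
```

In an MPI program the same pattern would typically be expressed with collective operations such as `MPI_Scatter` and `MPI_Reduce`.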

Clusters had the advantage of leveraging the computational power of commodity machines to solve problems that were previously tractable only on expensive supercomputers. Furthermore, if more computational power was required, clusters could easily be expanded.

4.5 GRID COMPUTING

Grid computing emerged as an extension of cluster computing in the early 1990s. Analogous to the electrical power grid, grid computing offered a new approach to accessing large computational power, huge storage facilities, and a variety of services. Users can "consume" resources in the same way they consume other utilities such as power, gas, and water. Grids initially developed as aggregations of geographically dispersed clusters connected over the Internet. These clusters belonged to different organizations, and arrangements were made among them to share computational power. A computing grid, unlike a "big cluster," was a dynamic aggregation of heterogeneous computing nodes on a nationwide or even worldwide scale. Several developments made the diffusion of computing grids possible: (a) clusters became widely available resources; (b) they were often underutilized; (c) new problems required computational power beyond the capability of single clusters; and (d) improvements in networking and the diffusion of the Internet made possible long-distance, high-bandwidth connectivity. All of these factors contributed to the development of computing grids, which currently serve a large number of consumers all around the world. Grid computing is frequently seen as the forerunner of cloud computing; in fact, cloud computing incorporates elements of all three of these technologies. Computing clouds are deployed in massive datacenters hosted by a single organization that provides services to others. Clouds, like mainframes, are characterized by essentially unlimited capacity, tolerance of failures, and the fact that they are always on.
As in clusters, the computing nodes that form the infrastructure of computing clouds are, in many cases, commodity machines. The services of a cloud vendor are consumed on a pay-per-use basis, and clouds fully realize the utility vision introduced by grid computing.

4.6 WEB 2.0

The Web is the principal channel through which cloud computing services are delivered. Today, the Web encompasses a set of technologies and services that enable interactive information sharing, collaboration, user-centered design, and application composition. This evolution has transformed the Web into a rich platform for application development, known as Web 2.0. This term describes a new way for developers to design and distribute apps

and services over the Internet, as well as a new experience for the users of these applications and services. Web 2.0 brings interactivity and flexibility to Web pages, enhancing the user experience by making the functions commonly found in desktop applications available through the Web. These capabilities are obtained by combining a set of standards and technologies, including XML, Asynchronous JavaScript and XML (AJAX), and Web Services. These technologies make it possible to build applications that leverage the contributions of users, who now become content providers. Furthermore, the capillary diffusion of the Internet opens new opportunities and markets for the Web, whose services can now be accessed through a variety of devices: mobile phones, car dashboards, TV sets, and others. These new scenarios require increased dynamism in applications, which is another fundamental feature of this technology. Web 2.0 applications are extremely dynamic: they improve continuously, and new updates and features are integrated at a constant rate, following the usage trends of the community. There is no need to deploy new software releases on the client side; users take advantage of new software features simply by interacting with cloud applications. Lightweight deployment and programming approaches are critical to effectively supporting such dynamism. Loose coupling is another important trait: new applications can be "synthesized" simply by composing and integrating existing services, thereby adding value. In this way it becomes easier to follow the interests of users. Finally, Web 2.0 applications aim to leverage the "long tail" of Internet users by making themselves available to everyone in terms of media accessibility and cost. Examples of Web 2.0 applications are Google Docs, Google Maps, Flickr, Facebook, Twitter, YouTube, del.icio.us, Blogger, and Wikipedia. Social networking Websites, in particular, take the biggest advantage of Web 2.0.
These sites would not have been possible without AJAX, Really Simple Syndication (RSS), and other techniques that make the user experience highly dynamic and interactive. Furthermore, community Websites harness the collective intelligence of their communities, which provides content to the applications themselves: Flickr provides advanced services for storing digital pictures and videos, Facebook is a social networking site that leverages user activity to provide content, and Blogger, like any other blogging platform, allows users to maintain an online journal. The idea of the Web as a platform for enabling and enhancing interaction was first conceived by Darcy DiNucci in 1999, and it began to take concrete shape in 2004. Today the Web is a mature platform for serving the needs of cloud computing, which heavily leverages Web 2.0. Rich Internet applications (RIAs), together with the applications and frameworks that support them, are essential in making cloud services accessible to users. Web 2.0 applications have undoubtedly made people more accustomed to using the Internet in their daily lives, and have paved the way

for the acceptance and usage of cloud computing as a paradigm in which IT infrastructure is delivered through a Web interface.

4.7 SERVICE-ORIENTED COMPUTING

Service orientation is the core reference paradigm of cloud computing systems. This approach adopts the concept of services as the main building block for developing application software. Service-oriented computing (SOC) supports the development of rapid, low-cost, flexible, interoperable, and evolvable systems and applications. A service is a platform-agnostic abstraction that can perform any function, from a simple function to a complex business process. Virtually any piece of code that performs a task can be turned into a service and exposed through a network-accessible protocol. A service is supposed to be loosely coupled, reusable, programming-language independent, and location transparent. Loose coupling allows services to serve different scenarios more easily and makes them more reusable. Independence from a specific platform increases a service's accessibility: a wider range of clients can be served, including those that look up services in global registries and consume them in a location-transparent manner. A service-oriented architecture (SOA) is a logical way of organizing software systems to provide end users, or other entities distributed over the network, with services through published and discoverable interfaces. Service-oriented computing introduced and diffused two important concepts that are also fundamental to cloud computing: quality of service (QoS) and Software-as-a-Service (SaaS). Quality of service identifies a set of functional and nonfunctional attributes that can be used to evaluate the behavior of a service from different perspectives. These include performance metrics such as response time, and security attributes such as transactional integrity, reliability, scalability, and availability.
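At its core, a service invocation is an exchange of platform-neutral messages against a published contract. A minimal sketch in Python illustrates the idea; the `getQuote` method and the XML message shape are invented for illustration and do not follow any real Web-service standard:

```python
import xml.etree.ElementTree as ET

def make_request(method, params):
    """Encode a service invocation as a platform-neutral XML message."""
    root = ET.Element("request", method=method)
    for name, value in params.items():
        ET.SubElement(root, "param", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def service_endpoint(xml_message):
    """A toy quote service: parse the XML request, return an XML reply."""
    req = ET.fromstring(xml_message)
    if req.get("method") == "getQuote":
        symbol = req.find("param[@name='symbol']").text
        reply = ET.Element("response")
        reply.text = f"quote:{symbol}"
        return ET.tostring(reply, encoding="unicode")
    raise ValueError("unknown method")
```

Because both sides agree only on the message format, the client never sees the service's internal implementation — the same loose coupling that SOAP and WSDL later standardized for the Web.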
QoS requirements are established between the client and the provider via a service-level agreement (SLA) that identifies the minimum value (or an acceptable range) for the QoS attributes that must be honored during the service invocation. The concept of Software-as-a-Service introduces a new delivery model for applications. The term originates in the world of application service providers (ASPs), which deliver software services-based solutions across a wide area network from a central datacenter, making them available on a subscription or rental basis. The ASP is responsible for maintaining the infrastructure and making the application available, freeing the customer from costly maintenance and complex upgrades. This software delivery model is possible because economies of scale are reached by means of multitenancy. With service-oriented computing, loosely coupled software components, rather than entire systems, can be exposed and priced individually,

completing the SaaS approach. This makes it possible to deliver complex business processes and transactions as services, as well as to compose applications on the fly and reuse services from anywhere and by anyone. Web services (WS) are one of the most popular expressions of service orientation. They introduce the concepts of SOC into the World Wide Web by making it consumable by applications, not just humans. Web services are software components that expose functionality accessible through a method-invocation pattern over the Hypertext Transfer Protocol (HTTP). The interface of a Web service can be inferred programmatically from metadata expressed in the Web Service Description Language (WSDL), an XML language that defines the characteristics of the service and all the methods it exposes, together with their parameters, descriptions, and return types. Interaction with Web services happens through the Simple Object Access Protocol (SOAP), an XML language that defines how to invoke a Web service method and collect the result. Using SOAP and WSDL over HTTP, Web services become platform independent and accessible across the World Wide Web. The standards and specifications concerning Web services are controlled by the World Wide Web Consortium (W3C). Among the most popular architectures for implementing Web services are ASP.NET and Axis. SOC's key contribution to the realization of cloud computing is the construction of systems composed of distributed services that can be combined. Web services technologies have provided the tools needed to make such compositions simple and easy to integrate into the mainstream World Wide Web (WWW) environment.

4.8 SUMMARY

 This chapter describes the evolution of cloud computing from the earliest computing technologies, such as mainframe computing, cluster computing, and grid computing. The concepts of service-oriented computing and Web 2.0 have been introduced and discussed in detail in this chapter.
 Although cloud computing solves several difficulties, such as mass storage, computation, and resource sharing, it still has many problems to solve in its current state of development. As an evolution of grid computing, cloud computing offers the ability to provision resources on demand. A grid, however, may or may not be located in a cloud, depending on the sort of users who are interacting with it. To users who work as system administrators and integrators, the maintenance of items in the cloud matters: they perform server and application upgrades, installations, and virtualization. Customers who simply consume the service do not care how the system operates. Cloud computing is an Internet-based solution in which shared resources are provided, much like electricity distributed through the electrical grid. Computers in the cloud are configured to work together, and the various applications take advantage of the combined computing power as if they were running on a single system.

4.9 KEYWORDS

 Web service - A service offered by one electronic device to another over the World Wide Web, or a server process running on a computer that listens for requests on a given network port and serves Web content.
 SOC - Service-oriented computing, a paradigm that brings together software applications to form a network of services in order to realize rapid, low-cost, cross-organizational distributed business processes.
 RIA - Rich Internet applications are Web-based programs with some of the characteristics of graphical desktop applications. Built with robust development tools, RIAs can run faster and be more engaging.
 Web - The World Wide Web, also referred to as the Web, is an information system in which documents and other web resources are identified by Uniform Resource Locators, may be interlinked by hyperlinks, and are accessible over the Internet.
 PVM - Parallel Virtual Machine, a software tool that allows a network of computers to be used as a single parallel machine.
 MPI - The Message Passing Interface is a standardized, portable message-passing standard for parallel computing architectures.

4.10 LEARNING ACTIVITY

1. If grid computing is the available option, what kinds of companies would prefer it? Give some examples.
2. Give some examples of applications that take advantage of cluster computing.

4.11 UNIT END QUESTIONS

A. Descriptive Questions
Short Questions
1. What is the most significant change brought about by Web 2.0?
2. Give some examples of Web 2.0 applications.
3. Describe the key features of service orientation.
4. Give a brief description of a distributed system.

5. How are mainframe computers used?
Long Questions
1. What are the major distributed computing technologies that led to cloud computing?
2. Write a short description of SOC.
3. Write brief notes on cluster computing and its advantages.
4. Differentiate between grid computing and cloud computing.
5. How can businesses benefit from grid computing?
B. Multiple Choice Questions
1. The first large-scale computing facility in the world was the
a. Cluster
b. Grid
c. Cloud
d. Mainframe
2. Which of the following was the standard technology in parallel and high-performance computing in the 1980s?
a. Cluster
b. Grid
c. Cloud
d. Mainframe
3. PVM stands for
a. Parallel Virtual Machine
b. Parallel Visual Machine
c. Parallel Vital Machine
d. Portal Vital Machine
4. _________ emerged as an extension of cluster computing in the early 1990s.
a. Cluster

b. Grid
c. Cloud
d. Mainframe
5. Web 2.0 applications strive to tap into the Internet's ______ of users.
a. long tail
b. short tail
c. medium tail
d. small tail
Answers
1-d, 2-a, 3-a, 4-b, 5-a.

4.12 REFERENCES

Reference books
 Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, "Mastering Cloud Computing"
 Kailash Jayaswal, Jagannath Kallakuruchi, Donald J. Houde, Dr. Devan Shah, "Cloud Computing: Black Book"
 Rajkumar Buyya, James Broberg, Andrzej M. Goscinski (Eds.), "Cloud Computing: Principles and Paradigms", Wiley, 2011.
 Cloud_computing_for_energy_management_in_smart_gri.pdf
Websites:
 https://www.guru99.com/iot-tutorial.html
 https://pluto-men.com/insights/everything-you-need-to-know-about-web-of-things/
 https://webofthings.org/2017/04/08/what-is-the-web-of-things/

UNIT 5 – CLOUD COMPUTING ARCHITECTURE

STRUCTURE
5.0 Learning Objectives
5.1 Introduction
5.2 Cloud Computing Architecture
5.3 Distributed Application Design
5.4 Automated Resource Management
5.5 Virtualization
5.6 Distributed Processing Design
5.7 Summary
5.8 Keywords
5.9 Learning Activity
5.10 Unit End Questions
5.11 References

5.0 LEARNING OBJECTIVES

After studying this unit, students will be able to:
 Evaluate cloud computing architecture
 Analyze distributed application design
 Describe automated resource management and its uses
 Outline virtualization
 Analyze distributed processing design

5.1 INTRODUCTION

Consumers use electricity without understanding how it is generated. The adoption of this idea in the Information Technology (IT) sector is known as cloud computing. Cloud computing is thus a business model in which IT services are made available to consumers. The main goal of this model is to make providing computing services as simple as providing water, electricity, gas, or telephone service: consumers use a resource or service when they need it, in the amount they need, and pay according to their usage. Cloud computing is, in fact, a new way of consuming

information. It is a computing model in which resources (software and hardware) are hosted, managed, and administered in big datacenters and delivered over the Internet as a service. Cloud computing thus provides an IT architecture that is both adaptable and dynamic. It is the IT industry's answer to a new necessity. In fact, cloud computing can be considered an evolution of the concept of "grid computing." The most significant distinction between them lies in the management style. In grid computing, the user must handle the entire system (servers, network elements, operating systems, software, and so on). In cloud computing, however, the technology is provided as a service, so the user interacts only with what he requires and is unconcerned with other services or issues. This means that cloud computing can be used in a utility fashion: it is based on the utility computing paradigm, so most people can use it without needing to understand how the system works or manage anything. Grid computing, by contrast, is primarily aimed at scientific researchers with a strong background in computer science.

5.2 CLOUD COMPUTING ARCHITECTURE

In this section we go through cloud computing's composition model, business model, and deployment model.

Cloud computing composition model

Cloud computing is organized in four layers: the hardware layer, the infrastructure layer, the platform layer, and finally the application layer. This layered architecture makes it simple to develop the system: any layer can be updated or changed without requiring knowledge of, or changes to, the other layers. This layered separation is comparable to the OSI model of network protocols. With such an architecture, deploying new software or upgrading existing software is straightforward, and the introduction of a new hardware device has no impact on other system components.
The layered architecture is made up of the following elements:

The hardware layer collects all of the cloud computing system's hardware components. It deals with the physical servers, network components, power, and control systems. Supervision of the datacenter belongs to this tier; a cloud computing provider usually operates a number of datacenters.

The infrastructure layer represents the virtualization layer. It creates the virtual resources that the upper layers will use. Xen, KVM, and VMware are the most widely used virtualization technologies.

The platform layer hosts the operating systems and application frameworks. It is deployed on the virtual machines created in the layer below.

The application layer is the uppermost level of the hierarchy. It contains all of the cloud applications and represents the front office of the cloud computing system. This layered design

is the foundation of cloud computing's business model: in the architectural model, every offer in the business model corresponds to one or two layers.

Cloud computing business model

Cloud computing, as we saw in the introduction, is a response to a new necessity in the field of information technology. In this model everything is a service (XaaS): SaaS (Software as a Service), PaaS (Platform as a Service), HaaS (Hardware as a Service), DaaS ([Development, Database, Desktop] as a Service), IaaS (Infrastructure as a Service), and so on. IaaS, PaaS, and SaaS are the three most popular services in this approach (figure 5.1).

IaaS (Infrastructure as a Service) allows customers to access IT infrastructure (processing power, networks, storage, etc.) directly. These resources are supplied using virtualization technologies: physical resources are aggregated or partitioned to adapt to customer demand, with as many virtual machines created as are required. In this service the provider manages only the resources; it is up to the customers to choose the operating system and software.

PaaS (Platform as a Service) provides software resources such as operating systems and development frameworks. In this type of service the customer is responsible only for developing and managing his own application; the service provider supplies all of the tools needed to execute it.

SaaS (Software as a Service) refers to the provision of on-demand applications over the Internet. The service provider manages and controls the entire system, from the hardware layer to the final application. The customer simply uses the program when he has a need, with nothing to maintain or develop in order to fulfil that need.
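The split of management responsibility among the three service models can be summarized in a small lookup table. This is an illustrative sketch following the layer descriptions in this section, not a formal standard:

```python
# Who manages each architectural layer under IaaS / PaaS / SaaS.
RESPONSIBILITY = {
    "IaaS": {"hardware": "provider", "infrastructure": "provider",
             "platform": "customer", "application": "customer"},
    "PaaS": {"hardware": "provider", "infrastructure": "provider",
             "platform": "provider", "application": "customer"},
    "SaaS": {"hardware": "provider", "infrastructure": "provider",
             "platform": "provider", "application": "provider"},
}

def customer_managed(model):
    """Return the layers the customer still has to manage under a model."""
    return [layer for layer, who in RESPONSIBILITY[model].items()
            if who == "customer"]
```

For example, under SaaS the customer manages nothing, while under IaaS the customer is still responsible for the platform and application layers.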

Figure 5.1 Cloud computing business model

Cloud computing deployment model

Figure 5.2 Deployment model

The deployment model comprises four categories defined by the cloud community:

Public cloud infrastructure: The provider of this type of system offers a collection of resources (hardware and/or software) to the general public as a service. There are numerous advantages to using public clouds, including the absence of an upfront capital investment in infrastructure. In exchange, the user has less control over the system, which reduces efficiency in a variety of business scenarios.

Private cloud infrastructure: A private cloud is used by a single organization. It can be managed by the organization itself or by a third party. The user has much more control over the system in a private cloud, so this type is favored in business, particularly when integrating cloud technologies for the first time.

Hybrid cloud infrastructure: This system combines the benefits of the public and private deployment models. It adapts to the needs of the business, using the private cloud for core functions and the public cloud when those needs grow. Hybrid clouds can thus be used to optimize users' resources based on their current activity (figure 5.2).

Community cloud infrastructure: This system supports a group of users who share a common function or goal. It can be founded by a single organization or by a group of organizations that share common goals, policies, and security concerns, and it may be managed by the constituent organization(s).

5.3 DISTRIBUTED APPLICATION DESIGN

A well-designed distributed application should have the following features:
i) It addresses the business problem it was created to tackle.
ii) Security concerns are addressed from the start.
iii) It is available and resilient, and can be deployed in high-availability datacenters with redundant power supplies.
iv) It is manageable, allowing operators to deploy, monitor, and troubleshoot the application as needed for any particular issue.
v) It performs well and is suited to routine workloads.
vi) It scales to meet expected demand, supporting a large number of activities or users with little effort.
vii) It can be used in a variety of scenarios and deployment strategies.

Services

A service-based approach to designing applications has evolved as firms strive to integrate existing systems across departmental and organizational boundaries.
Traditional components are quite similar to services, except those services contain their own business logic & data and are not strictly part of your system. Rather, your application uses them to deliver a specific service, including such credit card authorization. All inbound messages were forwarded to a service interface through which services disclose their services. A contract is defined as the set of messages which must be exchanged with both the service in order for it to accomplish a given business job. A service interface resembles a COM interface or any other interface created by a C# class. 79 CU IDOL SELF LEARNING MATERIAL (SLM)

For example, to validate the customer's credit card and authorize the sale, an online shopping application would use an external credit-card authorization service. After a successful authorization, the application could then use another external service to arrange delivery of the goods. Both the credit-card authorization service and the delivery-of-goods service play a part in the overall business process. Unlike conventional components, these services live outside the application's trust boundary and manage their own data. This necessitates two design considerations:
i) You must ensure secure authentication between the application and the service.
ii) You should communicate using a message-based technique.
If your application uses a service, you only need to understand the business functionality the service offers and the details of the contract you must follow in order to communicate with it properly. The service's internal implementation has no bearing on your design. Internally, services typically comprise the same logic, business, and data access components as any other application. Furthermore, services expose their capabilities through a service contract, which takes care of the semantics of exposing the underlying business logic. Frequently, your application will not interface directly with services but rather through service agents that engage with the service on your behalf.

Message-based communication

Finally, keep in mind that applications and the services they integrate with may be developed on separate platforms, by different teams in different businesses, and may be updated and maintained independently. As a result, it is crucial to implement communication between them with as little coupling as possible. To this end, message-based mechanisms must be used for communication between applications and the services they consume, in order to ensure high levels of stability and scalability.
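Message-based, loosely coupled interaction can be sketched with two queues: the application drops a request message on one queue and later matches the reply on another by a correlation identifier. The message fields here are hypothetical, chosen only to illustrate the pattern:

```python
import queue
import uuid

requests, replies = queue.Queue(), queue.Queue()

def send_request(payload):
    """Fire-and-forget: enqueue a message tagged with a correlation id."""
    correlation_id = str(uuid.uuid4())
    requests.put({"id": correlation_id, "body": payload})
    return correlation_id

def service_worker():
    """The remote service: consume one request, produce one reply."""
    msg = requests.get()
    replies.put({"id": msg["id"], "body": f"processed:{msg['body']}"})

def collect_reply(correlation_id):
    """Match an incoming reply to the request we sent earlier."""
    reply = replies.get(timeout=1)
    assert reply["id"] == correlation_id, "correlation mismatch"
    return reply["body"]
```

Because the only shared artifact is the message format, either side can be rewritten, redeployed, or replaced without the other noticing — the loose coupling the section describes.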
While message-based communication can be performed synchronously, there is far more to be gained from calling service interfaces asynchronously. Asynchronous invocation of service interfaces allows you to build highly scalable, available, and durable solutions, and provides a more loosely coupled approach to distributed application development. The advantages of asynchronous design are not free, however: you will have to consider issues such as message correlation, optimistic data concurrency management, and the unavailability of external services, among other things.

Components and Tiers

The logical division of distributed applications into presentation, business, and data access tiers has become widely accepted. Components that perform similar functions should be placed in the same layer, which is often organized in a stacked manner as seen below:

Figure 5.4 Components and Tiers

In this case, a component uses the services of other components in its own layer, and in the layers above and below it. This partitioned view of components and layers can also accommodate services. A service-based application can be thought of as a collection of interconnected services that communicate with one another through message-based communication, and these services can be viewed as additional components of the overall solution. However, and this is an essential point, each service is itself made up of a distinct set of components, just like any other application, and these components can be logically divided into presentation, business, and data services. The following diagram illustrates this concept:

Figure 5.5 Components and Levels

Take note of the following key features of this diagram: an application or service is made up of one or more layers, and a layer is simply a grouping of logically related components. A layer thus helps to differentiate between the different kinds of tasks performed by the components, making it easier to understand how the whole system operates. By identifying the generic kinds of components that exist in most solutions, you can construct a conceptual map of an application or service and then use it as a blueprint for your design. The following diagram depicts the Microsoft-recommended conceptual view of an application or service:

Figure 5.6 Service Conceptual view

A distributed application must often span multiple tiers and even organizational boundaries, in which case it must define its own policies for application security, operational management, and communications. These policies specify the application's deployment environment as well as how the services and application tiers communicate.

5.4 AUTOMATED RESOURCE MANAGEMENT

The majority of commercial applications are deployed in a mix of physical, virtual, and cloud computing environments. By their very nature, cloud environments are extremely dynamic: cloud platforms provision IT resources to applications dynamically, perform load balancing based on resource consumption levels, and conduct dynamic power management to reduce power costs. IT managers need to ensure that sufficient server capacity is available to handle these dynamic situations. If done manually, however, this process is time consuming and error prone. IT administrators can use tools such as ManageEngine Applications Manager to automate the provisioning of cloud resources based on threshold breaches.
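Threshold-breach automation of this kind boils down to a small control loop: observe a load metric, compare it against thresholds, and emit a scaling action. The sketch below is generic pseudologic — the function names, thresholds, and action labels are invented for illustration and are not the ManageEngine or EC2 API:

```python
def scale_action(active_sessions, high_water=100, low_water=20):
    """Decide what to do with the instance pool for the observed load."""
    if active_sessions > high_water:
        return "start_instance"   # threshold breach: add capacity
    if active_sessions < low_water:
        return "stop_instance"    # idle: reclaim dormant capacity
    return "no_op"                # load within the acceptable band

def monitor_pass(samples):
    """One pass of the monitor: map each load sample to an action."""
    return [scale_action(s) for s in samples]
```

A real deployment would feed `scale_action` from the monitoring agent and translate its result into calls that start or stop cloud instances.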

Figure 5.7 Automated Resource management

Whenever the number of sessions in a Tomcat server or Oracle application server exceeds the configured threshold, Applications Manager can automatically start, stop, or restart Amazon EC2 instances. With such automation capabilities you can greatly improve operational efficiency by reducing manual involvement, and optimize resource consumption by reusing dormant cloud resources. You can also ensure that your business-critical applications and data running in the cloud never run out of resources. This feature becomes even more beneficial when combined with Applications Manager's support for monitoring application servers, servers, databases, and more across physical, virtual, and cloud infrastructures.

5.5 VIRTUALIZATION

Virtualization is the primary enabler of cloud computing. Virtualization is the partitioning of a single physical server into multiple logical servers. Once a physical server is divided, each logical server behaves like a physical server and can run its own operating system and applications. Many well-known firms, such as VMware and Microsoft, provide virtualization services: instead of using your personal computer for storage and computation, you can use their virtual servers. They are fast, cost-effective, and

time-saving. Virtualization is particularly useful for software developers because it allows them to build code that works in a variety of environments and, more crucially, to test that code.

Virtualization is mostly utilized for three purposes: network virtualization, server virtualization, and storage virtualization.

Network virtualization is a technique for combining network resources by dividing the available bandwidth into channels, each independent of the others, which may be assigned to a particular device or computer in real time.

Storage virtualization is the process of combining physical storage from numerous network storage devices into what appears to be a single storage device that can be managed from a single location. Storage virtualization is a widespread practice in storage area networks (SANs).

Server virtualization is the practice of concealing server resources such as processors, RAM, and operating systems from server users. The goal of server virtualization is to increase resource sharing while lowering the computational cost and complexity for users.

Virtualization is the key to unlocking the cloud system; it decouples the software from the hardware, which is why it is so vital for the cloud. Virtual memory, for example, allows PCs to borrow memory from the hard disk. Hard disks usually have far more space than RAM. Although virtual disks are slower than real memory, they can be used in place of real memory if properly managed. Similarly, software exists that can simulate a whole computer, meaning that one computer can perform the functions of twenty computers.

5.6 DISTRIBUTED PROCESSING DESIGN

A vast volume of data flows into a distributed processing system from multiple different sources. This data flow process is called data ingestion. Once the data streams in, the system design has multiple layers that divide the complete processing into several sections.

Figure 5.8 Distributed Processing Design

Data Collection & Preparation Layer: This layer is responsible for gathering data from various external sources and preparing it for processing by the system. Incoming data has no uniform structure; it is raw, unstructured, or semi-structured in nature. It could be a blob of text, video, audio, or image data, as well as tax return forms, insurance papers, and medical bills, among other things. The data preparation layer's job is to convert data into a standard format and classify it according to the business logic that will be performed by the system. The layer is smart enough to accomplish all of this without the need for human intervention.

Data Security Layer: Data is vulnerable to unauthorized access when it is moved. The data security layer's job is to guarantee that data transport is secure by monitoring it and implementing security protocols, encryption, and other measures.

Data Storage Layer: Once the data has been ingested, it must be stored. This can be done in a variety of ways. When analytics is performed on streaming data in real time, in-memory distributed caches are used to store and manage the data. On the other hand, if the data is handled in a traditional manner, such as batch processing, distributed databases designed to handle large amounts of data are utilised to store the information.

Data Processing Layer: This layer contains the real business logic and is in charge of data processing. The layer applies business logic to the data in order to extract useful

information. For this, machine learning, predictive, descriptive, and decision modelling are commonly utilised.

Data Visualization Layer: All of the extracted information is transferred to the data visualization layer, which often consists of browser-based dashboards that display the data as graphs, charts, and infographics, among other things. Kibana is a good example of a data visualization tool that has gained a lot of traction in the industry.

Technologies Involved in Distributed Data Processing:

MapReduce is a programming model for handling distributed data processing across multiple machines in a cluster: it assigns jobs to the machines, performs the work in parallel, and coordinates all communication and data transfer between the components of the system. The Map part of the programming model involves sorting data based on a parameter, while the Reduce part involves summarising the sorted data.

Apache Hadoop is the most widely used open-source implementation of the MapReduce programming model. All of the industry's big players use the framework to manage large amounts of information in their systems; Twitter uses it for analytics, and Facebook uses it to store large amounts of data.

Apache Spark is an open-source cluster computing framework. It has excellent batch and real-time stream processing capabilities. It can deal with a variety of data sources and allows for parallel job execution in a cluster. Spark includes a cluster manager as well as a distributed data storage system. The cluster manager facilitates communication across nodes in a cluster, while distributed storage makes it easier to store large amounts of data. Spark works well with distributed data stores such as Cassandra, HDFS, Amazon S3, and others.

Apache Storm is a framework for distributed stream processing.
It is mostly utilised in the industry for processing huge quantities of streaming data. It has a variety of applications, including real-time analytics, machine learning, and distributed remote procedure calls, among others.

Apache Kafka is an open-source distributed stream processing and messaging platform. It was created at LinkedIn and is written in Java and Scala. Kafka's storage layer is a distributed, scalable pub/sub message queue. It functions as a messaging system, allowing applications to read and publish data streams. In the industry, Kafka is used to build real-time functionality such as alerting platforms, handling enormous data streams, monitoring website activity and metrics, messaging, and log aggregation.

Hadoop is favoured for batch data processing, whereas Spark, Kafka, and Storm are preferred for real-time streaming data processing.
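The Map and Reduce phases described above can be sketched on a single machine in a few lines of Python; frameworks such as Hadoop run the same pattern in parallel across a cluster of machines. This word-count example is illustrative only:

```python
from collections import defaultdict

# Single-machine sketch of the MapReduce word-count pattern (illustrative).

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word seen
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group the emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: summarise each group into a single count
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["to be or not to be"])))
print(counts)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In a real cluster, the map and reduce functions run on different machines and the shuffle step moves data over the network between them.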

5.7 SUMMARY

• In this chapter, the concept of cloud computing architecture was examined through three different models: the composition model, the deployment model, and the business model.

• A cloud computing architecture is characterised by the fact that all applications are controlled, managed, and serviced by a cloud server. As part of the cloud arrangement, data can be backed up and preserved in a remote location. A well-integrated cloud system has the potential to generate virtually unlimited efficiencies and opportunities. Cloud computing architecture is straightforward; it identifies all of the components and subcomponents that make up the system. There is little doubt that cloud computing is here to stay for the foreseeable future. It has a significant impact on every aspect of our lives today, providing numerous advantages in terms of flexibility, storage, sharing, and maintenance, as well as a variety of other factors.

• Cloud-based applications and services such as Google Docs, Skype, and Netflix are accessible over a conventional internet connection or a virtual network. The majority of organisations are moving their operations to the cloud because they demand large amounts of storage, which cloud platforms can provide. Because a cloud computing architecture gives greater bandwidth to its users, data stored in the cloud can be accessed from anywhere in the world at any time, regardless of location. In part, this is due to the design, which allows it to share resources not only with client-side consumers, but also with open-source communities such as Microsoft and Red Hat.

• The design of distributed applications was discussed with diagrams, and the concepts of automated resource management and virtualization were introduced. Finally, the design of distributed data processing and the technologies involved were explained.
5.8 KEYWORDS

• SAN - Storage virtualization (also known as software-defined storage or virtual SAN) is the process of combining numerous physical storage arrays or SANs into a single virtual storage device.

• Execution runtime - The final step of a computer program's life cycle is runtime, often known as execution time, when the code is executed as machine code on the computer's central processing unit (CPU). In other terms, "runtime" refers to a program's execution phase.

• Thin-client - In computer networking, a thin client is a machine that has been optimized for establishing a remote connection with a server-based computing environment.

• HaaS - Hardware as a service (HaaS) is a computer hardware solution in cloud computing in which a company delegates the duty of replacing, updating, and maintaining its computer equipment to a third party. That third party, the HaaS provider, provides all of the technology required to run the business.

• XaaS - "Anything as a service" (XaaS) refers to a broad category of cloud computing and remote access services. It acknowledges the large number of products, tools, and technologies that are now available to users on demand over the internet.

5.9 LEARNING ACTIVITY

1. Provide some examples of distributed systems.
2. Suppose you are in an AI organization: what kind of technology would you use for your data processing?

5.10 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. What is the cloud computing deployment model made up of?
2. What design considerations must be followed when designing a distributed application?
3. What is ARM?
4. What are the types of virtualization?
5. What is the use of Apache Storm?

Long Questions
1. Describe in detail the layered structure of cloud computing architecture.
2. Explain the cloud computing business model.
3. Discuss briefly on virtualization.
4. Write short notes on distributed processing design.

5. Detail the technologies which are involved in distributed data processing.

B. Multiple Choice Questions

1. A distributed scalable pub/sub message queue is used in
a. Apache Kafka
b. Apache Storm
c. Apache Spark
d. Apache Hadoop

2. Which of the following programming models is used for handling distributed data processing among multiple machines in a cluster, assigning jobs to multiple machines and doing work in parallel?
a. MapReduce
b. Apache Kafka
c. Apache Storm
d. Apache Spark

3. Which of the following types of virtualization conceals server resources such as processors, RAM, and operating systems from server users?
a. Server virtualization
b. Storage virtualization
c. Network virtualization
d. Desktop virtualization

4. _____________ infrastructure provides support for a group of people who have a common function or goal.
a. Community cloud infrastructure
b. Hybrid cloud infrastructure
c. Private cloud infrastructure
d. Public cloud infrastructure

5. Which of the following is an infrastructure that is used by only a single organization?

a. Community cloud infrastructure
b. Hybrid cloud infrastructure
c. Private cloud infrastructure
d. Public cloud infrastructure

Answers
1-a, 2-a, 3-a, 4-a, 5-c

5.11 REFERENCES

Reference books
• Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, “Mastering Cloud Computing”
• Kailash Jayaswal, Jagannath Kallakuruchi, Donald J. Houde, Dr. Devan Shah, “Cloud Computing: Black Book”
• Cloud Computing: Principles and Paradigms, Editors: Rajkumar Buyya, James Broberg, Andrzej M. Goscinski, Wiley, 2011.
• Cloud_computing_for_energy_management_in_smart_gri.pdf

Websites
• https://www.guru99.com/iot-tutorial.html

UNIT 6 - CLOUD SERVICE MANAGEMENT

STRUCTURE
6.0 Learning Objectives
6.1 Introduction
6.2 Service Level Agreement (SLA)
6.3 Service Provider
6.4 Role of Service Provider in Cloud Computing
6.5 Summary
6.6 Keywords
6.7 Learning Activity
6.8 Unit End Questions
6.9 References

6.0 LEARNING OBJECTIVES

After studying this unit, students will be able to:
• Evaluate SLAs and their agreements
• Analyse examples of companies with defined SLAs and their features
• Describe CSPs and their roles and responsibilities

6.1 INTRODUCTION

In hybrid clouds, service management and automation are also crucial. As cloud services evolve, it is likely that networking services for cloud applications will be provided via application-oriented abstraction-layer APIs rather than specialized networking technologies. Through service management or network self-adjustment, network resources can be modified and provisioned in a much more automated and optimal manner within this network design paradigm. These changes can be made in two ways: by operator-initiated provisioning via service management systems to exercise direct control over network services, or through "smart" networking technologies that can adapt services in an autonomic or self-adjusting manner. Furthermore, network service management and smart networking technologies must be tightly integrated with overall cloud service delivery management, so that changes in network resources required by upper layers of the cloud "stack" can be carried out by network service management, as well as through self-adaptations, in an automated manner.

Many of these "smart" networking solutions aim to improve cloud deployment resiliency in terms of availability, performance, and workload mobility. Application delivery networking services, for example, optimize information flow and provide application acceleration by classifying and prioritizing applications, content, or user access; virtual switching technology abstracts the switching fabric and enables virtual machine mobility. As these "smart" networking technologies evolve, their capabilities will expand beyond those of a single cloud to include "intra-cloud" and "inter-cloud" capabilities. The hybrid cloud will achieve unprecedented levels of global interconnectivity as it matures, allowing for real-time or near-real-time information access, application-to-application integration, and collaboration.

6.2 SERVICE LEVEL AGREEMENT IN CLOUD COMPUTING

A Service Level Agreement (SLA) is a contract between a cloud service provider and a client that guarantees a certain level of performance. Previously, Service Level Agreements in cloud computing were negotiated between the client and the service provider. With the emergence of major utility-like cloud computing providers, most Service Level Agreements are now standardized unless a client is a significant consumer of cloud services. Different levels of service level agreement are defined, as seen below:

• Customer-based SLA, based on the needs of the customer
• Service-based SLA
• Multi-level SLA

Most Service Level Agreements are not enforceable as contracts; they are usually agreements more akin to an Operating Level Agreement (OLA) and may not be subject to legal restrictions. Before signing a large agreement with a cloud service provider, it is advisable to have an attorney check the documentation.
The following are some of the parameters that are commonly specified in Service Level Agreements:

i) Availability of the service (uptime)
ii) Latency, or the response time
iii) Reliability of service components
iv) Accountability of each party
v) Warranties

In any event, when a cloud service provider fails to satisfy the stated minimums, the provider is obligated to pay an agreed-upon penalty to the cloud service consumer. In this sense, Service Level Agreements are similar to insurance policies: the company must pay according to the terms if a disaster occurs.
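The uptime parameter translates directly into a concrete downtime budget. As an illustration (assuming a 30-day billing month), the following sketch converts an availability target into the downtime a provider may accrue before breaching the SLA:

```python
# Downtime budget implied by an SLA availability target,
# assuming a 30-day (43,200-minute) billing month.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(availability_percent: float) -> float:
    """Minutes of downtime a provider may accrue without breaching the SLA."""
    return MINUTES_PER_MONTH * (1 - availability_percent / 100)

# Typical published targets (e.g., 99.95% for compute, 99.9% for a database):
print(round(allowed_downtime_minutes(99.95), 1))  # 21.6 minutes per month
print(round(allowed_downtime_minutes(99.9), 1))   # 43.2 minutes per month
```

This is why each extra "nine" of availability is expensive: the allowed downtime shrinks by roughly a factor of ten.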

Microsoft makes the Service Level Agreements for the Windows Azure Platform components public, which is standard practice among cloud service providers. Individual Service Level Agreements exist for each component. Two important Service Level Agreements (SLAs) are detailed below:

SLA for Windows Azure: There are separate SLAs for compute and storage in Windows Azure. When a client deploys two or more role instances in different fault and upgrade domains, there is a guarantee that the client's internet-facing roles will have external connectivity at least 99.95 percent of the time. Furthermore, all of the client's role instances are monitored, and 99.9 percent of the time it is detected when a role instance's process does not start and run properly.

SLA for SQL Azure: SQL Azure clients will have connectivity between the SQL Azure database and the internet gateway. SQL Azure will maintain a "Monthly Availability" of 99.9% within a month. The monthly availability for a given tenant database is the proportion of the time the database was available to clients to the overall time in the month. Time is measured in minute increments over a 30-day monthly cycle. Availability is always calculated for the entire month. A period of time is reported as unavailable if the customer's attempts to connect to the database are blocked by the SQL Azure gateway.

Service Level Agreements are based on the usage model. Cloud providers frequently charge a premium for pay-per-use resources and employ standard Service Level Agreements only for that purpose. Clients can also subscribe at different levels that guarantee access to a specific amount of purchased resources. The Service Level Agreements that come with a subscription often include a variety of terms and conditions. If a client needs access to a specific level of resources, he or she must subscribe to such a service.
Under peak traffic conditions, a usage model may not be able to provide that degree of access.

6.3 SERVICE PROVIDER

Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), network services, business applications, mobile applications, and cloud infrastructure are some of the services offered by cloud service providers (CSPs). These services are hosted in a data centre by the cloud service provider, and customers can access them via an Internet connection. The following are some example companies that provide cloud services:

Amazon Web Services (AWS)

Amazon's AWS (Amazon Web Services) is a secure cloud services platform. To help a company expand, it provides services such as database storage, processing power, content distribution, Relational Database, Simple Email, Simple Queue, and other features.

AWS features include:

AWS offers a number of useful tools for developing enterprise applications that are scalable and cost-effective. The following are some key aspects of AWS:

• AWS is scalable, because it can scale computing resources up or down based on the needs of the enterprise.
• AWS is cost-effective, since it operates on a pay-as-you-go basis.
• It comes with a variety of storage options.
• It provides security services such as infrastructure security, data encryption, monitoring and logging, identity and access management, penetration testing, and DDoS protection.
• It is capable of effectively managing and securing Windows workloads.

Microsoft Azure

Windows Azure is another name for Microsoft Azure. It supports a wide range of operating systems, databases, programming languages, and frameworks, allowing IT professionals to quickly develop, deploy, and manage applications over a global network. It also allows users to organize their utilities into separate groups. Microsoft Azure has the following features:

• Microsoft Azure is a scalable, adaptable, and cost-effective cloud computing platform.
• It enables developers to administer applications and websites easily.
• It manages each resource independently.
• Its IaaS infrastructure enables us to launch a general-purpose virtual machine on a variety of platforms, including Windows and Linux.
• It has a Content Delivery Network (CDN) that delivers images, videos, audio, and applications.

Google Cloud Platform

Google Cloud Platform is a product of Google. It is made up of many physical resources, such as computers, hard disks, and virtual machines. It also helps simplify the migration process. Google Cloud has the following features:

• Google BigQuery, Google Cloud Dataproc, Google Cloud Datalab, and Google Cloud Pub/Sub are just a few of the big data services available on Google Cloud.

• Google Virtual Private Cloud (VPC), Content Delivery Network, Google Cloud Load Balancing, Google Cloud Interconnect, and Google Cloud DNS are some of the networking services it offers.
• It provides a number of scalable and high-performance options.
• Messaging, Data Warehouse, Database, Compute, Storage, Data Processing, and Machine Learning (ML) are some of the serverless services offered by GCP.
• With Boost Mode, it provides a free Cloud Shell environment.

IBM Cloud

IBM Cloud is an open, dependable, and fast platform. It is made up of a variety of powerful data and AI tools. It provides Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). You can use an internet connection to access its services, such as computational power, cloud data and analytics, cloud use cases, and storage networking.

IBM Cloud's features are:
• It boosts operational efficiency.
• Its agility and speed boost consumer satisfaction.
• Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are all available.
• It provides our IT environment with a variety of cloud communications options.

6.4 ROLE OF SERVICE PROVIDER IN CLOUD COMPUTING

Cloud Service Provider (CSP)

The CSP is the provider of cloud services. The CSP will own and operate the data centre, employ the workforce, own and manage the assets (hardware and software), monitor service provision and security, and provide administrative support for the customer and their data and processing needs. Examples include Amazon Web Services (AWS), Rackspace, and Microsoft's Azure.

Roles of Cloud Service Provider

Cloud use is increasing at a rapid pace in both small and large businesses. As a result, cloud hosting firms now offer a wide range of cloud services and cloud delivery methods.
Cloud Service Providers' Role: Improved accessibility & security: Cloud adoption not only improves business operations and IT infrastructure efficiency, but it also lowers the expenses for running, upgrading, 96 CU IDOL SELF LEARNING MATERIAL (SLM)

&maintaining on-site IT facilities. In the cloud, all business-critical data is protected with additional security. In reality, the data is transferred to a number of remote data centre facilities owned and controlled by third-party service providers, rather than being stored in the cloud. These facilities are comprised of climate-controlled rooms that store enterprise- grade servers providing seamless protection and simple access to provide business continuity in the case of a catastrophic incident affecting your company's main office. Cloud data centres are meant to contain a large number of computers that will be used to store data under strict security constraints. The agreement aims to provide continuous connectivity across massive networks containing millions of devices. Cloud computing is used by both end users and cloud hosting organizations to improve the quality of their services. Understanding the role of the cloud in business: To fully comprehend the causes for rising cloud adoption within enterprise settings, we must first gain a thorough understanding of the cloud's features that improve business processes. Cloud services are designed to relieve your IT personnel of the time-consuming and tedious chore for maintaining, repairing, including upgrading hardware such as servers. After transferring workloads onto cloud data centers, organizations’ on-site IT infrastructure will be leaner. In the vast majority of circumstances, separate room for servers as well as other IT equipment will not be required. The immediate advantage of cloud computing is that it reduces capital expenditure because enterprises do not have to spend money on expensive hardware. The reduction in hardware costs is accompanied by the elimination of web server maintenance and repair expenditures. The upfront expenditures of purchasing cost-intensive software and hardware have decreased significantly. 
Performance with the assurance of security: Cloud hosting outperforms a physical server in terms of performance. That is because experienced web hosting service providers, unlike small and medium-sized businesses, can afford enterprise-grade cloud servers. Cloud hosting companies invest large amounts of raw and human resources to ensure the security of their clients' digital assets. Firewalls, anti-malware, and anti-virus deployments are among the steps these companies use to strengthen their defences. Furthermore, the host data centres are fortified with fortress-like protection to safeguard both physical and networking assets.

Greater affordability: Cloud hosting service providers help businesses lower their capital and operating costs without sacrificing performance by providing top-of-the-line computing resources to consumers at affordable prices. Cloud providers invest large sums of money to provide consumers with world-class resources at affordable prices. Regardless of the time of day or day of the week, their efficient staff are well qualified to handle basic chores as well as technical issues.

Demand-driven resource provisioning: Users of cloud services are given access to the most appropriate quantity of resources based on their needs. This not only ensures resource availability, but also helps organizations optimize resources for lower operational expenses. Using cloud-based infrastructure, users can also access a number of resources, such as applications or platforms, through any internet-enabled device from any place. These services are available 24 hours a day, 7 days a week, to help businesses enhance their efficiency. Employees may access various files and folders using a variety of devices, including smartphones, tablets, and computers, without having to visit the office. Because cloud-based solutions are intrinsically adaptable and accessible, businesses can easily keep their staff well connected with one another for increased productivity.

Maintenance-free: On-site IT infrastructures use a lot of resources and must be upgraded and maintained on a regular basis. Cloud service providers, on the other hand, are fully responsible for the performance of servers, bandwidth, networks, and software applications. This includes regular security patches and upgrades for operating systems and other business-critical applications. This type of infrastructure management necessitates the availability of large teams of software professionals 24 hours a day, 365 days a year. In the absence of any on-premise facility, the majority of firms that use the cloud are motivated by the need to have continuously available, versatile, secure, and well-managed IT infrastructure.

Cloud Customer

The person or entity who buys, leases, or rents cloud services.

Cloud Access Security Broker

A third-party business that acts as an intermediary between CSPs and cloud customers, providing independent identity and access management (IAM) services. This can take the shape of single sign-on, certificate management, or cryptographic key escrow, among other things.
Regulators The bodies in charge of ensuring that organizations follow the regulatory framework on which they are liable. Government agencies, certification bodies, and contracting parties are examples of these.The Health Insurance Portability and Accountability Act (HIPAA) and the Graham-Leach-Bliley Act are two examples of regulations (GLBA). The Payment Card Industry Data Security Standards (PCI-DSS), ISO, the Sarbanes–Oxley Act (SOX), and so on. Cloud computing reseller A company that buys cloud server hosting and computing services from a provider and afterwards resells them to one's own consumers. 98 CU IDOL SELF LEARNING MATERIAL (SLM)

6.5SUMMARY  This unit dealt with how Service level agreement (SLA) are standardized in the cloud computing environment and discussed different degrees of service level agreement. We go into great detail about service level agreements (SLAs) for cloud computing services, including the differences between web service SLAs and cloud service SLAs. Aside from that, we explored the needs for cloud service level agreements (SLAs), presented metrics for SLAs in clouds, and lastly provided a comparison between the existing main cloud service providers' SLAs. As previously stated, there is a very real need for a comprehensive way to handling service level agreements (SLAs) in cloud computing. One of the most noteworthy observations we noticed from the perspective of clouds was the lack of established standards. This is especially important when we are attempting to implement monitoring through a variety of clouds. It is possible to handle numerous different types of cloud interfaces by utilising middleware, but a universal set of metrics aimed at monitoring various cloud services has not yet been developed. There are genuine attempts to standardise a model for service level agreements (SLAs) in the cloud, and we underline the importance of such initiatives. And also, some of the common parameters of SLA have been defined and discussed in detail. The concept of Service Provider has been explored and real time example of service providers such as AWS, IBM, and Google cloud platform has been shown with its features. Finally,the roles and responsibilities of Cloud service provider has been elaborated in the IT infrastructure. 6.6KEYWORDS  SLA - A service-level agreement is a contract between a client and a service provider. The service provider and the service user agree on specific characteristics of the service, such as quality, availability, and responsibility. 
 OLA -An agreement at the operational level the interdependent relationships that underpin a service-level agreement are defined by an operational-level agreement. The agreement outlines each internal support group's responsibilities toward other support groups, as well as the procedure and timeline for delivering services.  CSP -A cloud service provider, or CSP, is a corporation that provides some aspect of cloud computing to other organisations or individuals; often, when searching the internet, a cloud service is characterised as infrastructure as a service (IaaS), software as a service (SaaS), or platform as a service (PaaS).  AWS - Amazon Web Services is an Amazon company that offers metered pay-as- you-go cloud computing platforms with APIs to people, businesses, and governments. 99 CU IDOL SELF LEARNING MATERIAL (SLM)

• VPC - A virtual private cloud (VPC) is a logical partition of a service provider's multi-tenant public cloud architecture that supports private cloud computing.

6.7 LEARNING ACTIVITY

1. Find out the top 5 cloud service provider companies in the world.
2. Suppose you are a client of an IT organization; how would you define an SLA?

6.8 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. What is an SLA?
2. Define the different levels of SLA in cloud computing.
3. Describe the SLA for Azure.
4. What is a CSP?
5. Write short notes on the IBM Cloud provider.

Long Questions
1. How are service level agreements standardized in cloud computing? Elaborate.
2. List the cloud service providers in the world and describe any one of them in detail.
3. Describe the role of the service provider in the cloud.
4. Who is a CASB?
5. List the IBM Cloud features.

B. Multiple Choice Questions

1. SLA stands for
a. Service Level Agreement
b. Service Loan Agreement
c. Service Like Agreement
d. Service Level Approval

