
CU-MCA-SEM-IV-Web Application Development-Second Draft

Published by Teamlease Edtech Ltd (Amita Chitroda), 2022-11-11 08:15:56


With help from his colleague and fellow hypertext enthusiast Robert Cailliau, he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" as a "web" of "hypertext documents" to be viewed by "browsers" using a client-server architecture. At this point HTML and HTTP had already been in development for about two months, and the first web server was about a month from completing its first successful test. This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, WebDAV, blogs, Web 2.0, and RSS/Atom. Their proposal was modelled after the SGML reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high-energy-physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world's first web server and to write the first web browser in 1990.
By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser (which was a web editor as well) and the first web server. The first website, which described the project itself, was published on 20 December 1990. The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee gave him what he says is the oldest known web page during a visit to UNC in 1991.

3.2 COMMON LANGUAGE SPECIFICATION (CLS)

The Common Language Specification is a document that specifies how computer programs can be turned into Common Intermediate Language (CIL) code. When several languages use the same bytecode, different parts of a program can be written in different languages. Microsoft uses a Common Language Specification for its .NET Framework. To fully interact with other objects regardless of the language they were written in, objects must expose to callers only those features that are common to all the languages they must exchange information with. It has long been Microsoft's goal to unite its different languages under one umbrella, and the CLS is one step towards that. Microsoft has defined the CLS as a set of guidelines for a language to follow so that it can communicate with other .NET languages in a seamless manner. Most of the members defined by types in the .NET Framework class library can work
51 CU IDOL SELF LEARNING MATERIAL (SLM)

with CLS. However, some types in the class library have one or more members that are not able to work with the CLS. These members allow support for language features that are not in the CLS. The CLS was designed to be large enough to include the language constructs that are commonly needed by developers, yet small enough that most languages can support it. Any language construct that makes it impossible to quickly verify the type safety of code was excluded from the CLS, so that all languages that work with the CLS can produce verifiable code if they choose to do so. The Common Language Specification (CLS) is a fundamental set of language features supported by the Common Language Runtime (CLR) of the .NET Framework. The CLS is a part of the specifications of the .NET Framework. The CLS was designed to support the language constructs commonly used by developers and to produce verifiable code, which allows all CLS-compliant languages to ensure the type safety of code. The CLS includes features common to many object-oriented programming languages. It forms a subset of the functionality of the common type system (CTS) and has stricter rules than those defined in the CTS. The CLS defines the base rules necessary for any language targeting the common language infrastructure to interoperate with other CLS-compliant languages. For example, a method with a parameter of "unsigned int" type in an object written in C# is not CLS-compliant, because some languages, like VB.NET, do not support that type. The CLS represents the guidelines to the compiler of a language that targets the .NET Framework. CLS-compliant code is code exposed and expressed in CLS form.
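The unsigned-int example above can be made concrete. The sketch below uses Python only to model the rule; in real .NET the check is performed by the compiler when a CLS-compliance attribute is applied, and the type names here are an invented, simplified stand-in for the CTS.

```python
# Hypothetical sketch: model the CLS as an approved subset of CTS types.
# In real .NET, compilers perform this check for CLS-marked assemblies.

CTS_TYPES = {"int32", "uint32", "int64", "uint64", "string", "bool"}
CLS_TYPES = {"int32", "int64", "string", "bool"}  # no unsigned types

def is_cls_compliant(signature):
    """Return True if every parameter type in the signature is CLS-approved."""
    return all(t in CLS_TYPES for t in signature)

# A C# method taking 'uint' (uint32) compiles fine, but it is not
# CLS-compliant, so a VB.NET caller could not consume it portably.
print(is_cls_compliant(["int32", "string"]))   # True
print(is_cls_compliant(["uint32"]))            # False
```

The point of the sketch is only that the CLS is a subset check over signatures, not a property of any single language.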
Even though the various .NET languages differ in their syntactic rules, their compilers generate the same Common Intermediate Language instructions, which are executed by the CLR. Hence, the CLS allows flexibility in using non-compliant types in the internal implementation of components that have CLS-compliant requirements. Thus, the CLS acts as a tool for integrating different languages under one umbrella in a seamless manner. The Common Language Infrastructure (CLI) is an international standard that is the basis for creating execution and development environments in which languages and libraries work together seamlessly. The CLI specifies a virtual execution system that insulates CLI-compliant programs from the underlying operating system. Where virtual execution systems are developed for different operating systems, programs written in CLI-compliant languages can be run on these different systems without recompiling or, worse, rewriting. Programming with CLI-compliant languages ultimately gives the programmer a simple but rich development model, allowing development in multiple languages, promoting code reuse across languages, and removing most of the plumbing required in traditional programming. The CLI makes it possible for modules to be self-registering, to run in remote processes, to handle versioning, to deal with errors through exception handling, and more. This book, by amplifying the standard, provides a blueprint for creating the infrastructure for this simpler

programming model across languages and across platforms. Because the theme of the CLI is broad reach, it also includes provisions for running modules compiled by existing languages into "native code": machine code targeted at a specific system. This is called unmanaged code, as opposed to the managed code that is CLI-compliant. This book also describes what is required of languages to be CLI-compliant, and what library developers need to do to ensure that their libraries are accessible to any programmer writing in any CLI-compliant language. In addition, it provides the guidelines for implementing a virtual execution system, which insulates executables from the underlying operating system. This chapter is an overview of the CLI and attempts to provide a key to understanding the standard. In addition, throughout the specification, annotations explain many of the details, either clarifying what is written in the specification or explaining the origins of some of its elements. Each programming language that complies with the CLI uses a subset of the Common Type System that is appropriate for that language. Language-based tools communicate with each other and with the Virtual Execution System (VES) using metadata to define and reference the types used to construct the application. When a constructor is called to create an instance of an object, the VES uses the metadata to create instances of types and to provide data type information to other parts of the infrastructure. Languages and programming environments that do target the CLI (there are currently more than 20, and the list is growing) produce what is called managed code and managed data. The key to these is metadata: information associated with the code and data that describes the data, identifies the locations of references to objects, and gives the VES enough information to handle most of the overhead associated with older programming models.
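The idea that a runtime can construct instances from metadata alone can be sketched in a few lines. Python stands in for the VES here, and names such as `TYPE_METADATA` and the field layouts are invented for illustration; the real CLI stores metadata in the assembly itself.

```python
# Hypothetical sketch: a tiny "VES" that builds instances from type metadata,
# the way the CLI's Virtual Execution System uses metadata to create objects.

TYPE_METADATA = {
    "Point": {"fields": ["x", "y"]},
    "Person": {"fields": ["name", "age"]},
}

def create_instance(type_name, *args):
    """Use metadata, not hard-coded classes, to lay out a new object."""
    meta = TYPE_METADATA[type_name]
    if len(args) != len(meta["fields"]):
        raise TypeError(f"{type_name} expects {len(meta['fields'])} fields")
    return dict(zip(meta["fields"], args))

p = create_instance("Point", 3, 4)
print(p)  # {'x': 3, 'y': 4}
```

Because the layout comes from a data table rather than compiled-in classes, any tool that can read the table can create, inspect, or proxy instances, which is exactly the role metadata plays for managed code.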
This overhead includes handling exceptions and security and providing information to tools that can ensure memory safety. It may also include running on remote systems by creating proxies for the programmer, as well as managing object lifetime (called garbage collection). Data types are more than just the contents of the bits that the data occupy. They are also the methods that can be used to manipulate them. In value-oriented programming, "type" usually means data representation. In object-oriented programming, it usually refers to behaviour rather than to representation. The CTS combines these notions, so "type" means both things: two entities have the same type if and only if they have both compatible representations and compatible behaviours. Thus, in the CTS, if a type is derived from a base type, instances of the derived type may be substituted for instances of the base type, because both the representation and the behaviour should be compatible. The idea of the Common Type System is that compatible types allow language interoperation. If you can read the contract provided by any type and use its operations, you can build data structures and use your control structures to manipulate them. The CTS presents a set of rules for types. If you follow those rules, you can define as many types as you like; in effect, the types are extensible, but the type system is not. For example, you can define any object or value type you like, provided it follows the rules, but you cannot, for example, define a CTS-compliant type that uses multiple inheritance, which is outside of the type system. The Common Type System was designed for broad reach:

for object-oriented, procedural, and functional languages, generally in that order. It provides a rich set of types and operations. Although many languages have types that they have found useful that are not in the CTS, the advantages of language integration usually outweigh the disadvantages. Out of 20 languages that carefully investigated the CTS, at the time of this writing 15 have chosen to implement it. The Common Language Specification is a subset of the Common Type System. It is a set of types that may be used in external calls in code that is intended to be portable. All the standardized framework libraries (described in Partition IV, including the Base Class Library, XML Library, Network Library, Reflection Library, and Extended Numerics Library) are intended to be used on any system running a compliant VES, and in any CLS-compliant language. Therefore, the framework follows the CLS rules, and all (well, almost all) the types it defines are CLS-compliant to ensure the broadest possible use. In the few cases in which types or methods are not CLS-compliant, they are labelled as such (that's one of the CLS rules), and they are intended for use by compilers and language runtimes rather than for direct use by programmers. Custom attributes were designed as part of the CLI to allow extensibility without requiring languages to continue to add new keywords. Custom attributes include markers for CLS compliance, security, debugging, and many language- and tool-specific attributes. Some custom attributes are defined by the CLI, but they can also be defined by a compiler or by the tools that use the attribute. Languages that identify as distinct types what the VES sees simply as 32-bit integers (for example, C++, which distinguishes int from long even when both are 32-bit) would create a custom attribute that identifies the different types to the compiler. Custom attributes are essential to tools. In a programming environment with designers, you might create a new button object.
A custom attribute would tell the designer that this object is a button, and at runtime the designer would list it as one of the available buttons. If the VES that you're using includes a proxy generator, you could make an object available externally to a Web service by putting in a custom attribute telling the proxy generator to create a proxy for the object, and another attribute telling the Web service that the proxy should be included. Metadata stores custom attributes, making them readily available to any tool. Assemblies are the unit of deployment for one or more modules. They are not a packaging mechanism and are not intended to be an "application," although an application will consist of one or more assemblies. An assembly is defined by a manifest, which is metadata that lists all the files included and directly referenced in the assembly, what types are exported and imported by the assembly, versioning information, and security permissions that apply to the whole assembly. Although the compilers capture the versioning and security information, it is the implementation of the VES that allows users to set policies that determine which versions are to be used and how security is implemented. An assembly has a security boundary that grants the entire assembly some level of security permission, e.g., the entire assembly may write to a certain part of the disk, and methods can demand proof that everyone in the call chain has permission to perform a given operation. The notion of "application" is not part of this standard. It is up to the implementer to determine how applications relate to assemblies.

The standard does, however, encompass the idea of an application domain. A process may have more than one application domain. Assemblies are loaded into application domains. Information about the available classes, the associated code, and the static variables is housed in that application domain. The execution model is that compilers generate the Common Intermediate Language. How and when the CIL is compiled to machine code is not specified as part of the standard; those determinations rest with the implementation of the VES. The most frequently used model is just-in-time compilers that generate native code as it is needed. Install-time compilers are another option, and it is also possible to implement an interpreter, rather than a compiler, for the CIL.

3.3 TYPES OF JIT COMPILERS

In the .NET Framework, all the Microsoft .NET languages use a Common Language Runtime, which solves the problem of installing separate runtimes for each of the programming languages. When the Microsoft .NET Common Language Runtime is installed on a computer, it can run any language that is Microsoft .NET compatible. Before the Microsoft Intermediate Language (MSIL) can be executed, it must be converted by a .NET Framework Just-In-Time compiler to native code, which is CPU-specific code that runs on the same computer architecture as the JIT compiler. A Web Service or WebForms file must be compiled to run within the CLR. Compilation can be implicit or explicit. Although you could explicitly call the appropriate compiler to compile your Web Service or WebForms files, it is easier to allow the file to compile implicitly. Implicit compilation occurs when you request the .ASMX file via HTTP-SOAP, HTTP-GET, or HTTP-POST. The parser determines whether a current version of the assembly resides in memory or on disk. If it cannot use an existing version, the parser makes the appropriate call to the respective compiler. When the Web Service is implicitly compiled, it is compiled twice.
On the first pass, it is compiled into IL. On the second pass, the Web Service (now an assembly in IL) is compiled into machine language. This process is called Just-In-Time (JIT) compilation because it does not occur until the assembly is on the target machine. The reason you do not compile it ahead of time is so that the specific JIT for your OS and processor type can be used. As a result, the assembly is compiled into the fastest possible machine language code, optimized and enhanced for your specific configuration. It also enables you to compile once and then run on any number of operating systems. Before MSIL can be executed, it must be converted by the .NET Framework just-in-time compiler to native code, which is CPU-specific code that runs on the same computer architecture as the JIT compiler. Rather than using time and memory to convert all the MSIL in a portable executable file to native code, the JIT compiler converts the MSIL as it is needed during execution and stores the resulting native code, so it is accessible for subsequent calls.
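That convert-on-demand behaviour can be modelled in a few lines. This is a Python toy, not the actual CLR; `compile_to_native` and the IL strings are stand-ins.

```python
# Toy model of JIT compile-on-demand: a method body is "compiled" the first
# time it is called, and the cached native form is reused on later calls.

native_cache = {}      # method name -> "compiled" code
compile_count = {}     # how many times each method was actually compiled

def compile_to_native(name, il_body):
    compile_count[name] = compile_count.get(name, 0) + 1
    return f"native({il_body})"           # stand-in for real code generation

def call(name, il_body):
    if name not in native_cache:          # first call: compile and cache
        native_cache[name] = compile_to_native(name, il_body)
    return native_cache[name]             # later calls: reuse cached code

call("Main", "IL_0001")
call("Main", "IL_0001")
call("Main", "IL_0001")
print(compile_count["Main"])  # 1 -- compiled once despite three calls
```

Methods that are never called are never compiled, which is why this approach saves the time and memory of converting the whole portable executable file up front.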

The runtime supplies another mode of compilation called install-time code generation. The install-time code generation mode converts MSIL to native code just as the regular JIT compiler does, but it converts larger units of code at a time, storing the resulting native code for use when the assembly is subsequently loaded and executed. As part of compiling MSIL to native code, the code must pass a verification process unless an administrator has established a security policy that allows the code to bypass verification. Verification examines MSIL and metadata to find out whether the code can be determined to be type-safe, which means that it is known to access only the memory locations it is authorized to access. In computing, just-in-time compilation (also dynamic translation or run-time compilation) is a way of executing computer code that involves compilation during execution of a program (at run time) rather than before execution. Most often, this consists of source code or, more commonly, bytecode translation to machine code, which is then executed directly. A system implementing a JIT compiler typically continuously analyses the code being executed and identifies parts of the code where the speedup gained from compilation or recompilation would outweigh the overhead of compiling that code. JIT compilation is a combination of the two traditional approaches to translation to machine code (ahead-of-time compilation (AOT) and interpretation) and combines some advantages and drawbacks of both. Roughly, JIT compilation combines the speed of compiled code with the flexibility of interpretation, with the overhead of an interpreter and the additional overhead of compiling and linking.
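The "compile only where it pays off" heuristic can be sketched as a call counter with a threshold. This is a deliberate simplification: real JITs use much richer profiles, and the threshold value below is arbitrary.

```python
# Simplified hot-spot heuristic: interpret a method until its call count
# crosses a threshold, then "compile" it so later calls run the fast path.

HOT_THRESHOLD = 3   # arbitrary illustrative value
calls = {}
compiled = set()

def execute(name):
    calls[name] = calls.get(name, 0) + 1
    if name in compiled:
        return "native"                   # fast path: already compiled
    if calls[name] >= HOT_THRESHOLD:      # hot enough to repay compile cost
        compiled.add(name)
        return "native"
    return "interpreted"                  # cold code stays interpreted

results = [execute("loop_body") for _ in range(5)]
print(results)  # ['interpreted', 'interpreted', 'native', 'native', 'native']
```

Cold code never pays the compilation overhead, while hot code eventually runs at compiled speed, which is the trade-off the paragraph above describes.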
JIT compilation is a form of dynamic compilation and allows adaptive optimization such as dynamic recompilation and microarchitecture-specific speedups. Interpretation and JIT compilation are particularly suited for dynamic programming languages, as the runtime system can handle late-bound data types and enforce security guarantees. The earliest published JIT compiler is generally attributed to work on LISP by John McCarthy in 1960. In his seminal paper "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", he mentions functions that are translated during runtime, thereby sparing the need to save the compiler output to punch cards. Another early example was by Ken Thompson, who in 1968 gave one of the first applications of regular expressions, here for pattern matching in the text editor QED. For speed, Thompson implemented regular expression matching by JIT to IBM 7094 code on the Compatible Time-Sharing System. An influential technique for deriving compiled code from interpretation was pioneered by James G. Mitchell in 1970, which he implemented for the experimental language LC². Smalltalk pioneered new aspects of JIT

compilation. For example, translation to machine code was done on demand, and the result was cached for later use. When memory became scarce, the system would delete some of this code and regenerate it when it was needed again. Sun's Self language improved these techniques extensively and was at one point the fastest Smalltalk system in the world, achieving up to half the speed of optimized C but with a fully object-oriented language. Self was abandoned by Sun, but the research went into the Java language. The term "just-in-time compilation" was borrowed from the manufacturing term "just in time" and popularized by Java, with James Gosling using the term from 1993. Currently, JIT is used by most implementations of the Java Virtual Machine, as HotSpot builds on, and extensively uses, this research base. The HP project Dynamo was an experimental JIT compiler where the 'bytecode' format and the machine code format were the same; the system turned PA-6000 machine code into PA-8000 machine code. Counterintuitively, this resulted in speedups, in some cases of 30%, since doing this permitted optimization at the machine-code level: for example, inlining code for better cache usage, optimizing calls to dynamic libraries, and many other run-time optimizations which conventional compilers are not able to attempt.

Normal JIT

It compiles only those methods that are called at run time and, after compilation, stores them into a memory cache called JITTED. If a method is called again, the compiled method is supplied from the memory cache for execution.

Figure 3.1: Normal JIT

ECONO JIT

It also compiles only those methods that are called at run time, but once the execution of those methods takes place, they are removed from memory.

Figure 3.2: ECONO JIT

PRE JIT

It compiles the entire code in a single cycle, i.e., it compiles the entire MSIL code into native code in a single go.

Figure 3.3: Pre JIT
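The three behaviours can be contrasted in one sketch. This is a Python toy model: "compiling" is simulated, and only the cache policies matter.

```python
# Toy contrast of the three JIT flavours described above:
#   Normal JIT - compile on first call, keep in cache (JITTED)
#   Econo JIT  - compile on call, discard after the method finishes
#   Pre JIT    - compile everything up front, before any call

methods = {"Main": "IL1", "Helper": "IL2"}

def run(flavour):
    cache, compiles = {}, 0
    if flavour == "pre":                      # Pre JIT: single up-front pass
        for name, il in methods.items():
            cache[name] = f"native({il})"
            compiles += 1
    for name in ["Main", "Main", "Helper"]:   # simulated call sequence
        if name not in cache:
            cache[name] = f"native({methods[name]})"
            compiles += 1
        if flavour == "econo":                # Econo JIT: evict after use
            del cache[name]
    return compiles

print(run("normal"))  # 2 -- each method compiled once, then cached
print(run("econo"))   # 3 -- Main recompiled because it was evicted
print(run("pre"))     # 2 -- everything compiled before execution
```

Econo JIT trades recompilation work for a smaller memory footprint, while Pre JIT trades startup time for no compilation at run time; Normal JIT sits in between.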

3.4 SECURITY MANAGER

Many national governments have been utilizing information and communication technology to improve public services, for effective communication and interaction with their constituents, and in administrative organizations. Sustainable computing services are driving sustainability beyond simply energy use and product considerations, and deal with the loss of control by individuals, businesses, and governments. Sustainable computing services can be defined as effective and reliable processes for delivering sustainable IT services. Sustainable computing services consider managing performance and doing what is necessary to keep the service operating smoothly, including ensuring constant security, providing systems recovery planning, and keeping versions current. Essentially, sustainable computing provides secure computing services to users. Security can be considered the dividing line between non-sustainable and sustainable computing services. Any system is considered unsustainable if it cannot protect data or ensure a required computing quality. Information security has been regarded as a serious issue, especially in e-government contexts. Organizations attempting to protect information must consider controls on internal stakeholders; many information security breaches are due to poor user compliance with information security protocol. The violation of information security harms private organizations by causing financial losses and reputation damage. In the public sector, the violation of information security can lead to serious, complex financial, political, and economic losses; reputation damage; and the loss of public trust in e-government and in government organizations that adopt e-government methods. Thus, to prevent cybercrime, it is natural for e-governments to seek advanced security management processes and continuous information security innovation.
Many governments have attempted to overcome the barriers to information systems security (ISS) within their organizations for sustainable computing. ISS has become an important focus of e-governments since the late 1990s, and ISS issues have received increasing attention. ISS is defined as secure systems and policies for protecting an organization's information resources from disclosure to unauthorized persons who attempt to access those resources. Burney argued that the combination of information security management and information programs could improve the effectiveness of ISS, and described the critical duties of information security managers in establishing information security programs. Burney argued that information security managers can act as information mediators between the general management department and the technical department. Rohmeyer investigated the major constructs of information security manager skills and information security program maturity within organizational information security. Rohmeyer argued that the effectiveness of the organization can be improved by skilled information security managers. As mentioned in previous studies, the role of information security managers is central to directing e-government ISS efforts and to encouraging employees to comply with ISS policies. Research has concluded that to maintain information security within an organization, information security managers must specify appropriate information security policies and motivate their

employees to follow them. Although information security managers cannot directly enforce ISS policy compliance, they must constantly encourage and motivate all members to comply with information security policies, including monitoring and warning those organizational members who violate security policy. Sometimes, information security managers must also persuade top managers to invest in people, budgets, and technological security controls for ISS. The leadership of the information security manager could encourage employee compliance with ISS and improve the alertness of top managers, thus improving the effectiveness of ISS. The leadership of the information security manager could improve the effectiveness of ISS in e-governments. This study attempts to evaluate the factors that affect ISS effectiveness from the perspective of information security manager leadership for secure sustainable computing. Accordingly, this study attempts to elucidate the interplay of information security manager leadership and ISS effectiveness. Using advanced information and communications infrastructure, the Korean government has actively promoted e-government to improve national competitiveness. The Korean e-government model has been found to be one of the most successful models. To ensure public trust and confidence in e-government, the Korean government has invested budgets, human resources, and legislative attention in developing, implementing, and maintaining advanced ISS, and has treated continuous ISS innovation as necessary. However, individual Korean government agencies' information security systems still have problems, including a lack of manpower, budget limitations, and ill-defined roles and scopes regarding information security. In the development of e-governments worldwide, information security manager leadership issues could have theoretical and practical implications for the development of other countries' e-governments.
ISS research has adopted four perspectives, namely, functionalist, radical humanist, radical structuralist, and interpretive. The functionalist perspective has been a major research theme in information security research. Hu et al. and Knapp et al. argued the importance of both top management's role and information security policy when investigating ISS using social role theory. According to this theory, managers are expected to model the role behaviour expected by the organization to achieve goals and outcomes in most daily activities. Regarding social role theory, most researchers have focused on the role of top management. Hu et al. and Knapp et al. showed the importance of top management in influencing employee behaviour, resulting in compliance with information security policies. Many studies have suggested the need for formal research on the relationship between leadership and ISS effectiveness; however, studies of these and related areas are limited to a small number of academic studies. Interest groups, such as the Computer Security Institute, and industry publications, such as Information Security Magazine and CSO Magazine, have conducted various surveys. Rohmeyer investigated the major constructs of information security effectiveness, information security manager skills, and information security program maturity within organizational information security. High effectiveness in information security management has been shown to be positively related to the leadership and

qualifications of information security managers. Rohmeyer argued that organizations that hire skilled information security managers are expected to be more effective at information security. A skilled information security manager is one with higher skills and qualifications; the required skills of an information security manager can be summarized as technical, administrative, bureaucratic, and technocratic. Many studies have described the role of information security managers in establishing information security programs. Burney, and Kim and Choi, have described the essential roles and responsibilities of ISS managers. Burney stressed the leadership activities of information security managers in establishing information security measures. Burney also described the important role of information security managers as information mediators between the technical and general management departments. Wylder described the roles of the information security manager in establishing the information security program. When the information security program reaches maturity, the security manager's skills are required. Luftman described the role of the information security manager from the perspective of IT governance. Information security managers are involved in making decisions and obtaining IT resources in the context of information security tasks. This study adopts a human behaviour approach and an institutional approach to improving ISS effectiveness. Unlike previous studies, which focus on technological controls for ISS effectiveness, Chaudhary et al. proposed a human behaviour and institutional approach as a development framework for enterprise ISS. The framework consists of four main pillars, namely, security policy, security awareness, access control, and top-level management support. This study also considers information security policy as an important part of information security in an organization and investigates its mediating effect on ISS effectiveness.
The purpose of transformational leadership by an information security manager is to improve employee awareness of information security in organizations. Top-level management support and corporate governance are also essential factors in supporting the activities of the information security manager, information security policy, and ISS effectiveness. In the information security realm, deterrents are defined as administrative tools that can include information security policies describing the secure use of information systems. The controls of administrative deterrents have been validated as effective in reducing IS and computer abuses such as software piracy and violations of information security policies. Researchers have also regarded information security policies as deterrent measures and noted that the effectiveness of information security policy can be maintained when computer abuse incidents and their seriousness are monitored and reported. The theory of general deterrence states that policy can prevent potential abusive acts by presenting the threat of sanctions and unpleasant consequences. 3.5 VS.NET AND C# C# is pronounced "see sharp". C# is an object-oriented programming language and part of the .NET family from Microsoft. C# is very similar to C++ and Java. C# is developed by Microsoft and works only on the Windows platform. The .NET Framework (pronounced "dot

net") is a software framework that runs primarily on Microsoft Windows. It includes a large library and supports several programming languages, which allows language interoperability (each language can use code written in other languages). The .NET library is available to all the programming languages that .NET supports. Programs written for the .NET Framework execute in a software environment known as the Common Language Runtime (CLR), an application virtual machine that provides important services such as security, memory management, and exception handling. The class library and the CLR together constitute the .NET Framework. Object-oriented programming is a programming language model organized around "objects" rather than "actions", and around data rather than logic. Historically, a program has been viewed as a logical procedure that takes input data, processes it, and produces output data. The first step in OOP is to identify all the objects you want to manipulate and how they relate to each other, an exercise often known as data modelling. Once you've identified an object, you generalize it as a class of objects and define the kind of data it contains and any logic sequences that can manipulate it. Each distinct logic sequence is known as a method. A real instance of a class is called an "object" or an "instance of a class". The object or class instance is what you run in the computer. Its methods provide computer instructions, and the class object characteristics provide relevant data. You communicate with objects - and they communicate with each other. Rich Functionality out of the box: The MS.Net framework provides a rich set of libraries to achieve basic, intermediate, and advanced functionalities, so you rarely need to turn to third-party libraries. The SDK also ships more than one framework version with the IDE, which gives backward and forward compatibility.
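The OOP terms used above (class, method, object as an instance of a class) can be shown in a short example. The sketch below is in Java rather than C#, since the two are closely similar; the `Account` class and its fields are illustrative names, not taken from the text:

```java
// A minimal Java sketch of the OOP concepts described above: a class
// generalizes related objects, each distinct logic sequence is a method,
// and a real instance of the class is an object.
class Account {
    private double balance; // the kind of data the class contains

    Account(double openingBalance) {
        this.balance = openingBalance;
    }

    // a method: a logic sequence that manipulates the object's data
    void deposit(double amount) {
        balance += amount;
    }

    double getBalance() {
        return balance;
    }
}

public class Main {
    public static void main(String[] args) {
        Account acct = new Account(100.0); // an object: an instance of the class
        acct.deposit(50.0);                // you communicate with objects via their methods
        System.out.println(acct.getBalance());
    }
}
```

Running the program prints the updated balance, 150.0: the class definition is the blueprint, while `acct` is the object that actually runs in the computer.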
Easy Development of Web Applications: Before ASP.Net, Classic ASP was used to develop web applications, but Classic ASP had a drawback: it was quite a cumbersome job to design a web application using raw HTML tags. ASP.Net provides various server-side and client-side controls in a toolbox to drag and drop onto pages. Classic ASP pages are interpreted, whereas ASP.Net pages are compiled rather than interpreted, which improves the performance of web pages. OOP Support: MS.Net supports more than 60 framework-compatible languages, and all are object-oriented. This approach gives the programmer code reusability, robustness, and security, making the programmer's task more comfortable. Multi-Language Support: Before Visual Studio .Net 2008 there was a single framework with an IDE, but from VS.Net 2008 onwards there has been more than one framework with a single IDE; this is known as "Multi-Target Support". e) Multi-Device Support: Applications built on the .Net framework can execute on desktop, tablet, handheld, and notebook devices. These devices need only the MS.Net framework version on which the application was built. f) Ease of Deployment: After development, we finally need to deploy the application to release it to customers. MS.Net gives us easy deployment processes such as Copy and the Setup wizard; little more is needed to deploy applications. g) Security: Windows is often criticized for security, but in .Net there are many ways to implement security. The framework provides libraries such as System.Security, which includes many facilities like cryptography, Windows policy, etc. h) Automatic Memory Management: MS.Net does not require manual memory management; the CLR collects garbage on behalf of the programmer, and it

means the CLR releases the memory of an object when it goes out of scope. We can also force collection using the System.GC class. i) No More DLL Hell: VB 6.0 always required DLL registration while working with ActiveX controls. In .Net, the CLR normally does not require any type of registration; .Net does not require the API viewer either. j) Strong XML Support: XML (Extensible Mark-up Language) comes from SGML (Standard Generalized Mark-up Language). XML is at the core of MS.Net, as XML takes essentially three forms on the .Net platform. A runtime is an environment in which programs are executed; the CLR is the runtime provided for .Net applications. For example, to execute a program written in VB6 the machine must have a VB runtime installed, and Java programs require the JVM; since different languages require different runtimes, the developer's life becomes more difficult. To avoid such problems, .NET introduced a single Common Language Runtime that all .NET languages share. Each assembly you build can be either an executable application or a DLL containing a set of types for use by an executable application. Of course, the CLR is responsible for managing the execution of code contained within these assemblies. This means that the .NET Framework must be installed on the host machine. Microsoft has created a redistribution package that you can freely ship to install the .NET Framework on your customers' machines. Some versions of Windows ship with the .NET Framework already installed. 3.6 SUMMARY  The relationship between information security managers and other employees, regardless of their position in the organization, can be treated as the relationship between leaders and followers. To urge employees within the organization to maintain information security, information security managers should persuade, inspire, and motivate their employees. Information security managers do not have any direct controls by which to order, monitor, or punish other employees.
To effectively lead other employees in complying with information security policies, information security managers should display leadership via the implementation of information security policy. Here, the authors review the related components of leadership.  In the past 100 years, leadership has been defined in terms of the behaviours, traits, role relationships, interaction patterns, and occupations of someone in an administrative position. There is a fundamental and highly controversial issue in the field of leadership, namely, "what we do know and what we should know about leadership and leaders". A wide variety of views on leadership involve the question of whether to judge leadership as a transmission process or a specialized role. Burns and Bass suggested the need to shift the emphasis of leadership studies from mainly examining transactional models grounded on "how leaders and followers make an exchange with each other" to models that might expand transactional leadership and were labelled charismatic, transformational, inspirational, and visionary.

 Both transactional and transformational leadership are originally embedded in the dyadic paradigm, the theory of which retains the relationship of the leader-subordinate dyad, as described above. Unlike traditional leadership models that describe leader behaviour in terms of providing direction, support, reinforcement behaviours, goals, and leader-follower exchange relationships or indeed being based on "economic cost-benefit assumptions", new leadership models highlight "symbolic leader behaviour; visionary, inspirational messages; feelings; ideological and moral values; individualized attention; and intellectual stimulation". Emerging from these studies, transformational leadership theories have been the most frequently researched theories over the past 20 years.  Transformational leadership has been redefined as the mutual commitment to the objectives and mission/vision of the work unit. The theory of transformational leadership indicates that such leaders have reinforced their higher-order values and elevated followers' aspirations such that the followers can identify their mission/vision, work more effectively and efficiently, and work to do their part beyond base expectations and mere transactions. Transformational leadership appeals to the moral values of followers to raise their consciousness about ethical issues and mobilize their energy and resources to reform institutions. Judge and Piccolo state that transformational leadership is positively related to leadership effectiveness and to several significant organizational outcomes across many different types of organizations, levels of analyses, situations, and cultures using a series of meta-analytic studies.  Many researchers have studied different processes using transformational leadership effects that are eventually realized in the form of performance outcomes.
These processes involve follower formation of identification; satisfaction; commitment; perceived fairness; job characteristics such as identity, significance, variety, feedback, and autonomy; trust in the leader; and how followers feel about themselves and their group in terms of cohesion, potency, and efficacy. New theories of transformational leadership are more concerned with goal attainment in pragmatic task objectives by followers, groups, and organizations than with the moral elevation of followers. Jansen and Crossan state that it is necessary to consider interactions between leaders and followers, rather than the leaders' unreciprocated behaviours. Kahai and colleagues showed that transformational leadership reduces the incidence of social loafing.  Transformational leadership not only reduces the impact of counterproductive behaviours but also improves the performance of individuals and groups, because transformational leaders can gather followers committed to collective goals, rather than simply to satisfying the followers' personal goals. Social role theory is a perspective in sociology in which socially defined categories have distinct

expectations associated with them; correct leader behaviours are required to achieve organizational goals and outcomes in most daily activities.  Kark et al. suggested that transformational leadership has an impact on both social identification within the work unit and personal identification with the leader. Leadership research on social identity formation has also focused heavily on what constitutes prototypicality, which has shown that followers can be closer to those leaders who are exemplars of the groups the followers want to join or to which they already belong. Lord and Brown presented a model that studies two specific ways in which leaders can influence the way followers choose to behave, in terms of the motivations they have regulated through actions and behaviours. The idea of a working self-concept brings up issues of identity.  Transformational leadership by information security managers is expected to improve ISS effectiveness. Although ISS managers can inspire employees to comply with ISS, information security managers do not have direct means of influencing employees. Information security policy is an important mediator of influence among employees. Researchers have suggested that transformational leadership behaviours include four components: inspirational motivation, idealized influence, individualized consideration, and intellectual stimulation. The first two components are similar to the concept of "charisma". Inspirational motivation includes the demonstration of enthusiasm and optimism, presentation and creation of symbols and emotional arguments, and an attractive vision of the future. 3.7 KEYWORDS  First Contentful Paint - First Contentful Paint reports the time when the browser first rendered any text, image (including background images), non-white canvas or SVG. This includes text with pending web fonts. This is the first time users could start consuming page content.
 First Paint - First Paint reports the time when the browser first rendered after navigation. This excludes the default background paint but includes non-default background paint. This is the first key moment developers care about in page load: when the browser has started to render the page.  FLIP - FLIP is a technique to set up high-performance animations using CSS transforms. To avoid janky animations, start and end positions are evaluated during the setup so that the animation doesn't have to do any expensive calculations.  The Flexible Box - the Flexible Box Layout Module is an API that provides tools to make web content responsive. Flexbox provides an efficient way to lay out, align, and distribute space among items in a container, even when their size is unknown and/or

dynamic. The API allows the rapid creation of complex, flexible layouts, and features that have historically proved difficult with CSS.  HTTPS - HTTPS is a protocol in which the connection between the server and client is encrypted, helping to protect users' information and prevent tampering. APIs such as the Service Worker, Google Maps API, and File API must be served over HTTPS. HTTPS is implemented using the Transport Layer Security protocol. Although TLS supersedes Secure Sockets Layer, it is often referred to as SSL. 3.8 LEARNING ACTIVITY 1. Create a session on Common Language Specification (CLS). ___________________________________________________________________________ ___________________________________________________________________________ 2. Create a survey on the Security Manager. ___________________________________________________________________________ ___________________________________________________________________________ 3.9 UNIT END QUESTIONS A. Descriptive Questions Short Questions 1. Write the full form of CLS. 2. Who is a Security Manager? 3. What is .NET? 4. Define VS.NET and C#. 5. Define the term compilers. Long Questions 1. Explain the concept of the Common Language Specification (CLS). 2. Discuss the types of JIT compilers. 3. Describe the features of the Security Manager. 4. Illustrate the advantages of the Security Manager. 5. Examine the concept of VS.NET. B. Multiple Choice Questions 1. In a URL, what is the full name of the file where the information is located?

a. Path b. Protocol c. Host d. None of these 2. Which is the model on which the www is based? a. Local-server b. Client-server c. 3-tier d. None of these 3. Which of the following statements is incorrect regarding multimedia on the web? a. The MPEG, AIFF and WAV are cross-platform formats b. The MPEG, AU and MIDI are cross-platform formats c. The SND format has a relatively low fidelity d. VRML can be used to model and display 3D interactive graphics 4. What is the full form of URL? a. Uniform Resource Library b. Uniform Resource Locator c. United Resource Library d. United Resource Locators 5. What is the .net domain used for? a. Educational institutions b. Internet infrastructure and service providers c. International organizations d. None of these Answers 1-a, 2-b, 3-a, 4-b, 5-b 3.10 REFERENCES Book References

 Cox, B.J. Object-Oriented Programming. Addison-Wesley, Reading, Mass., 1986.  Gamma, E., Helm, R., Johnson, R., and Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, Mass., 1995.  Helm, R., Holland, I.M., and Gangopadhyay, D. Contracts: Specifying behavioral compositions in object-oriented systems.  Krueger, C.W. Software reuse. ACM Computing Surveys 24, 2 (June 1992).  Lubars, M.D. and Harandi, M.T. Knowledge-based software design using design schemas.  Roberts, D. and Johnson, R. Evolving frameworks: A pattern language for developing frameworks. In D. Riehle, F. Buschmann, and R.C. Martin, Eds., Pattern Languages of Program Design 3, Addison-Wesley, Reading, Mass., 1997. E-References  file:///C:/Users/Sony/Downloads/Leadership_of_Information_Security_Manager_on_the_.pdf  https://www.researchgate.net/publication/305042873_Leadership_of_Information_Security_Manager_on_the_Effectiveness_of_Information_Systems_Security_for_Secure_Sustainable_Computing/link/577fb54b08ae5f367d36fe4c/download  http://findnerd.com/list/view/Different-types-of-JIT-Compiler/4569/

UNIT 4: INTRODUCTION TO PROJECT AND SOLUTION IN STUDIO STRUCTURE 4.0 Learning Objectives 4.1 Introduction 4.2 Command Line Arguments 4.2.1 ARGS[] Array 4.3 Global 4.4 Stack and Heap Memory 4.5 Reference Type and Value Type 4.6 Boxing and Un-boxing 4.7 Pass by Value and by Reference and Out Parameter 4.8 Array Lists & Hash Tables 4.9 Summary 4.10 Keywords 4.11 Learning Activity 4.12 Unit End Questions 4.13 References 4.0 LEARNING OBJECTIVES After studying this unit, you will be able to:  Describe the concept of Stack and Heap Memory.  Illustrate Boxing and Un-boxing.  Explain Pass by Value and by Reference and the Out Parameter. 4.1 INTRODUCTION Realization of these objectives requires systematic planning and careful implementation. To this effect, the application of knowledge, skill, tools, and techniques in the project environment refers to project management. Project management in recent years has proliferated, reaching new heights of sophistication. It has emerged as a distinct area of management practices to

meet the challenges of the new economic environment, the globalization process, rapid technological advancement, and the quality concerns of the stakeholders. A project in general refers to a new endeavour with a specific objective, and projects vary so widely that it is very difficult to define them precisely. Some of the commonly quoted definitions are as follows. A project is a temporary endeavour undertaken to create a unique product, service, or result. A project is a unique process, consisting of a set of coordinated and controlled activities with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost, and resources. It is evident that any change in any one of the dimensions would affect the others. For example, if the scope is enlarged, the project would require more time for completion and the cost would also go up. If time is reduced, the scope and cost would also need to be reduced. Similarly, any change in cost would be reflected in scope and time. Successful completion of the project requires accomplishment of specified goals within scheduled time and budget. In recent years a fourth dimension, stakeholder satisfaction, has been added to the project. However, another school of management argues that this dimension is an inherent part of the scope of the project, which defines the specifications to which the project is required to be implemented. Thus, the performance of a project is measured by the degree to which these three parameters (scope, time, and cost) are achieved. Every project, from conception to completion, passes through the various phases of a life cycle, analogous to the life cycle of living beings. There is no universal consensus on the number of phases in a project cycle. An understanding of the life cycle is important to successful completion of the project, as it facilitates understanding of the logical sequence of events in the continuum of progress from start to finish.
A typical project consists of four phases: Conceptualization, Planning, Execution, and Termination. (Figure 1 illustrates the project performance dimensions: scope, time, and cost.) Each phase is marked by one or more deliverables such as a concept note, feasibility report, implementation plan, HRD plan, resource allocation plan, evaluation report, etc. There is no standard classification of projects. However, considering project goals, these can be classified into two broad groups, industrial and developmental. Each of these groups can be further classified considering the nature of work (repetitive, non-repetitive), completion time (long term, short term, etc.), cost (large, small, etc.), level of risk (high, low, no-risk), and mode of operation. Industrial projects, also referred to as commercial projects, are undertaken to provide goods or services for meeting the growing needs of the customers and providing attractive returns to the investors/stakeholders. Following this background, these projects are further grouped into two categories, i.e., demand based and resource/supply based. The demand-based projects are designed to satisfy the customers' felt as well as latent needs, such as complex fertilizers, agro-processing infrastructure, etc. The resource/supply-based projects are those which take advantage of the available resources like land, water, agricultural produce, raw material, minerals, and even human resources. Projects

triggered by successful R&D are also considered supply based. Examples of resource-based projects include food product units, metallurgical industries, oil refineries, etc. Examples of projects based on (skilled) human resource availability include projects in the IT sector, clinical research projects in bio-services, and others. Project management is a distinct area of management that helps in handling projects. It has three key features to distinguish it from other forms of management: a project manager, the project team, and the project management system. The project management system comprises the organization structure, information processing and decision-making, and the procedures that facilitate integration of the horizontal and vertical elements of the project organization. The project management system focuses on integrated planning and control. A project in the economic sense directly or indirectly adds to the economy of the nation. However, an introspection of project performance clearly indicates that the situation is far from satisfactory. Most of the major and critical projects in the public sector, especially in crucial sectors like irrigation, agriculture, and infrastructure, are plagued by tremendous time and cost overruns. Even in the private sector the performance is not all that satisfactory, as is evident from the growing sickness in industry and the rapid increase in non-performing assets (NPAs) of banks and financial institutions. The reasons for time and cost overruns are several, and they can be broadly classified under technical, financial, procedural, and managerial. Most of these problems mainly stem from inadequate project formulation and haphazard implementation. Project identification is an important step in project formulation. Projects are conceived with the objective of meeting market demand, exploiting natural resources, or creating wealth.
The project ideas for developmental projects come mainly from the national planning process, whereas industrial projects usually stem from identification of commercial prospects and profit potential. As projects are a means to achieving certain objectives, there may be several alternative projects that will meet these objectives. It is important to indicate all the other alternatives considered, with justification in favour of the specific project proposed for consideration. Sectoral studies, opportunity studies, support studies, and project identification essentially focus on screening the many project ideas that come up, based on available information and data and on expert opinions, and coming up with a limited number of promising project options. A pre-feasibility study should be viewed as an intermediate stage between a project opportunity study and a detailed feasibility study, the difference being primarily the extent of detail of the information obtained. It is the process of gathering facts and opinions pertaining to the project. This information is then vetted for the purpose of tentatively determining whether the project idea is worth pursuing further. The pre-feasibility study lays stress on assessing market potential, magnitude of investment, technical feasibility, financial analysis, risk analysis, etc. The breadth and depth of pre-feasibility depend upon the time available and

the confidence of the decision maker. Pre-feasibility studies help in preparing a project profile for presentation to various stakeholders, including funding agencies, to solicit their support for the project. They also throw light on aspects of the project that are critical in nature and necessitate further investigation through functional support studies. Support studies are carried out before commissioning a pre-feasibility or feasibility study for projects requiring large-scale investments. These studies also form an integral part of the feasibility studies. They cover one or more critical aspects of the project in detail. The contents of a support study vary depending on the nature of the study and the project contemplated. Since it relates to a vital aspect of the project, the conclusions should be clear enough to give a direction to the subsequent stage of project preparation. The feasibility study forms the backbone of project formulation and presents a balanced picture incorporating all aspects of possible concern. The study investigates practicalities, ways of achieving objectives, strategy options, and methodology, and predicts the likely outcome, risk, and the consequences of each course of action. It becomes the foundation on which project definition and rationale will be based, so that its quality is reflected in subsequent project activity. A well-conducted study provides a sound base for decisions, clarification of objectives, logical planning, minimal risk, and a successful cost-effective project. Assessing the feasibility of a proposal requires understanding of the STEEP factors: Social, Technological, Ecological, Economic, and Political. In recent years, market analysis has undergone a paradigm shift. The demand forecast and projection of the demand-supply gap for products/services can no longer be based on extrapolation of past trends using statistical tools and techniques. One must look at multiple parameters that influence the market.
Demand projections are to be made keeping in view all possible developments. Review of the projects executed over the years suggests that many projects have failed not because of technological and financial problems but mainly because the projects ignored customer requirements and market forces. In market analysis several factors need to be considered, covering product specifications, pricing, channels of distribution, trade practices, threat of substitutes, domestic and international competition, opportunities for exports, etc. It should aim at providing analysis of the future market scenario so that the decision on project investment can be taken in an objective manner, keeping in view the market risk and uncertainty. 4.2 COMMAND LINE ARGUMENTS Arguments to the main function are called command-line arguments.  A command-line argument is the information that follows the name of the program on the command line of the operating system.  Command-line arguments are used to pass information into a program when the program is executed.

 E.g.: When we write a program to append two files, the file names are supplied when the program starts executing rather than being specified as constants. A Java application can accept any number of arguments from the command line. Command-line arguments allow the user to affect the operation of an application. The user enters command-line arguments when invoking the application and specifies them after the name of the class to run. Command-line arguments facilitate the use of your program in batch files and give a professional appearance to your program. 4.2.1 Args[] Array In C, argv is a pointer to an array of character pointers. Each character pointer in the argv array corresponds to a string containing a command-line argument: argv[0] points to the name of the program, argv[1] points to the first argument, and argv[2] points to the second argument. Each command-line argument is a string. If you want to pass numerical information to your program, your program should convert the corresponding argument into its numerical equivalent. Each command-line argument must be separated by spaces or tabs; commas, semicolons, and the like are not valid argument separators. Figure 4.1: Array
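The points above can be demonstrated with a small Java program. Every element of the `args` array arrives as a `String`, so numeric input must be converted explicitly (here with `Integer.parseInt`). Note that unlike C's `argv`, Java's `args` does not include the program name itself. The class and argument values below are illustrative:

```java
// Prints each command-line argument and converts the first one to a number.
public class Main {
    public static void main(String[] args) {
        // If invoked without arguments (e.g. from an IDE), fall back to a
        // demo argument list so the example still produces output.
        if (args.length == 0) {
            args = new String[] { "21", "hello" };
        }
        // args[0] is the first argument after the class name on the command line
        for (int i = 0; i < args.length; i++) {
            System.out.println("args[" + i + "] = " + args[i]);
        }
        // Each argument is a String; convert it explicitly for numeric use.
        int n = Integer.parseInt(args[0]);
        System.out.println("first argument doubled: " + (n * 2));
    }
}
```

Invoked as `java Main 21 hello`, the program prints both arguments and then `first argument doubled: 42`; passing a non-numeric first argument would throw a `NumberFormatException` at the `parseInt` call.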

4.3 GLOBAL Global is a British media company formed in 2007. It is the owner of the largest commercial radio company in Europe, having expanded through a number of acquisitions, including Chrysalis Radio, GCap Media and GMG Radio. Global owns and operates seven core radio brands, all employing a national network strategy. Global also owns and operates one of the leading out-of-home advertising companies in the UK through its Outdoor division. Global was founded by Ashley Tabor-King in 2007, with financial backing from his father Michael Tabor, and purchased Chrysalis Radio, through which Global took control of the radio brands Heart, Galaxy, LBC and The Arrow. A year later, on 31 October 2008, Global Radio officially took control of all of GCap Media and its brands. The GCap Media name was dropped at this time. The GCap purchase gave Global the network of FM stations which GCap had operated as The One Network (many of which are now part of the Heart or Capital networks), plus Classic FM, Radio X, Choice FM, Gold and Chill. Following the acquisition of GCap Media, Global was required to sell off several stations in the Midlands. The stations were bought by Orion Media, headed by Phil Riley, former Chief Executive of Chrysalis Radio. Heritage local radio stations in areas not already served by Heart FM were gradually rebranded and incorporated into a larger Heart network that covers most of southern England and parts of North Wales; the stations which would become Heart in the North were acquired later. The remaining stations briefly formed The Hit Music Network before being merged with the Galaxy network and Capital London into the Capital network. On 25 June 2012, Global acquired GMG Radio for a sum thought to be between £50 and £70 million; it continued to be run separately while a regulatory review was conducted.
In May 2013, the Competition Commission ruled that Global would be required to sell seven stations across the network. The company initially offered to dispose of three stations, Real XS in Manchester and Scotland, and Gold in the East Midlands, to try to prevent the sale of the seven stations mentioned in the ruling. When this failed, Global Radio launched an appeal against the decision. The appeal was based on three grounds: Real and Smooth as alternatives to the Greater Manchester stations; reliance on "significant adverse effects" in the North-West; and Global's remedy proposal (see above). The appeal was rejected on all grounds, and the company had to sell the seven stations it was ordered to in the original judgement. Global said it was disappointed with the decision and was considering it further.

On 6 February 2014, it was announced that several stations would be sold to the Irish broadcaster Communicorp, with programming generally to be supplied by Global under contract. The deal involved control of Smooth Radio in the North East, the North West, and the West Midlands, of Capital in South Wales and Scotland, of Real Radio in North Wales and Yorkshire, and of Real XS in Manchester. Most stayed under their current brands, though the Real stations would be renamed Heart and carry the Heart network off-peak programming as provided by Global. Global retained control of all other stations, re-launching the existing Heart North West and Wales as Capital to allow Real North Wales to take on the Heart affiliation. Real XS in Paisley was retained by Global and joined the XFM network; the future branding and direction of Real XS in Manchester was unclear at the time. Most of the Gold stations switched to taking the Smooth London/Network output, with the exception that, in areas where Smooth is available on FM (London, Manchester and the East Midlands), a reduced Gold oldies service would remain, run by Global and taking programmes from London as before. It was announced in June 2015 that Darren Singer would be appointed as Global chief financial officer. In February 2017, Global changed its company name from 'This is Global Limited' to 'Global Media & Entertainment Limited'. It also changed all its social media handles from 'THISISGLOBAL' to 'global' and its web domain to global.com. Global also combined the three sub-companies, Global Radio, Global Entertainment and Global Television, into just 'Global'. On 1 March 2018, Global launched a brand-new awards show called The Global Awards, celebrating the stars of music, news & entertainment across genres in the UK and from around the world. It took place at London's Eventim Apollo. 
In September 2018, Global announced the double acquisition of two key outdoor companies, Primesight and Outdoor Plus, creating Global's Outdoor Division. The acquisitions were rumoured to be worth several hundred million pounds. On 19 September 2018, rival commercial radio group Bauer announced that they were pulling out of the biggest networked commercial radio chart show, The Official Vodafone Big Top 40, produced by Global's Capital. The move led to Global discontinuing the Sunday evening show for all stations outside of their own Heart & Capital networks, on which the show continues to air. On 26 February 2019, Global Radio announced plans to replace the regional breakfast shows on Capital and Heart with a single national breakfast show for each network, whilst Smooth kept its regional breakfast shows and instead took its drivetime show national. Capital's new breakfast show launched in April with Roman Kemp, Heart Breakfast with Jamie Theakston and Amanda Holden launched in June, and Smooth Drivetime with Angie Grieves launched in September. 

In September 2019, it was announced that Quidem, the owners of Banbury Sound, Rugby FM and Touch FM, had entered into a brand-licensing agreement with Global Radio. This change would see the Quidem stations rebrand under the Global brands. At the beginning of October, Ofcom opened a consultation following Quidem's request for its six stations to make significant changes to their formats. Gold is a network of stations principally dedicated to music from the 1950s to the 1980s; there are two different variants of the station: England & Scotland, and Wales. Many of these were the AM sister stations to heritage CHR stations which are now Heart or Capital stations, though Gold Manchester was originally a standalone station, Fortune 1458 and Late AM, before becoming part of the Big-AM and later Capital Gold networks. On DAB, Gold is available in some areas which do not have Gold on AM; in these areas Gold UK is carried, though it may carry local branding on the label. Global chose to close some unviable AM relays of Gold but has continued to serve these areas on DAB. In the West Midlands, after the divestiture of some radio holdings to Orion Media, the Gold brand continued as a franchise; however, in late 2012 these stations were rebranded as Free Radio 80s and no longer carried Gold network programming. Most Gold stations on AM/local DAB transferred to receive their network programming provision from Smooth Radio on 24 March 2014; local news/travel and advertising drop-ins into the network programming feed continue as previously provided under Gold, and the former Gold stations in Wales continue to offer a four-hour local show as Smooth Wales. Three Gold areas where Smooth is already provided on FM (London, Manchester and the East Midlands) retain a reduced Gold service on AM/local DAB, with most presented shows ceasing. Several areas gained or regained Gold as a DAB service in September 2015 in space vacated by XFM, following XFM's move from local to national transmission as Radio X. 
4.4 STACK AND HEAP MEMORY

In a programming language implementation that uses garbage collection, all procedure activation records can be allocated on the heap. This is convenient for higher-order languages whose "closures" can have indefinite extent, and it is even more convenient for languages with first-class continuations. One might think that it would be expensive to allocate, at every procedure call, heap storage that becomes garbage on return. But not necessarily. To allocate a stack frame, the program must add a constant to the stack pointer. This takes one instruction. It is also necessary to check for stack overflow; but since overflow is so rare, this can usually be done at no cost using an inaccessible virtual memory page. Heap overflow must also be checked. As explained by Appel and Li, this should not be done by a virtual memory fault: operating-system fault handling is too expensive, heap overflow is unrelated to locality of reference, and the

technique is almost impossible on machines without precise interrupts. Thus, a comparison and a conditional branch are required; by keeping the free-space pointer and the limit pointer in registers, this takes about two instructions. When a stack frame is popped, the frame pointer must be set back to the caller's frame. Some implementations of stack frames have put a copy of the frame pointer in each frame, and this is fetched back upon function return. But for contiguous stack frames of known size, this is clearly unnecessary; the stack pointer itself can be used as the frame pointer, and the pop can just be a subtraction from the stack pointer. This is the common modern practice. Efficient heap allocation uses a free-space pointer and a free-space limit, which should be kept in registers. However, the cost of reserving these registers should not be charged to heap allocation of frames, because we are assuming that the implementation in question already has garbage collection for other purposes. A language with higher-order functions needs closures to hold the free variables of functions that have been created but not yet called. If one function's free variables overlap with another's, then one closure might point to another. There are two kinds of objects: activation records, whose lifetimes have last-in first-out behaviour; and higher-order function closures, which have indefinite extent. The former can be stack allocated, but the latter must be allocated on a garbage-collected heap. Furthermore, stack frames may point at heap closures, but heap closures may never point at stack frames; otherwise there will be dangling pointers. This means that if the compiler wants to build a closure containing free variables which are available in a stack frame, the variables must be copied into the closure; the closure cannot just point to the stack frame. But if all activation records are heap-allocated, then closures may point at them. 
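The "variables must be copied into the closure" rule described above can be seen directly in Java, where a lambda copies the (effectively final) locals it captures into a heap-allocated closure object, precisely because the enclosing stack frame is gone by the time the closure runs. This is an illustrative sketch; the method and variable names are not from the original text.

```java
import java.util.function.IntSupplier;

public class ClosureCapture {
    // Returns a closure that outlives the stack frame of makeCounterBase().
    static IntSupplier makeCounterBase() {
        int base = 40; // local variable, stored in this frame
        // The closure cannot point at this stack frame; instead 'base'
        // is copied into the heap-allocated closure object.
        return () -> base + 2;
    }

    public static void main(String[] args) {
        IntSupplier s = makeCounterBase();  // the frame is popped here
        System.out.println(s.getAsInt());   // the copied value is still available
    }
}
```

If the closure could hold a pointer into the popped frame instead of a copy, it would be a dangling pointer — exactly the hazard the text describes.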
This flexibility allows the closure analysis phase of a good compiler to choose much better representations for closures, with more sharing and less copying. The restriction that heaps cannot point to stacks must be counted as a "cost" of using stack-allocated frames. To quantify this cost, we measured two versions of the Standard ML of New Jersey compiler outfitted with our recently improved closure-representation analysis phase. The difference in execution time between the two versions is attributable only to the slightly more cumbersome representations that are imposed by the "closures cannot point to frames" restriction. The frames themselves are not much bigger, but the closures are: since they can't point to the frames, data from frames must be copied into the closures. Some programs suffer more from this than others, but on average the difference is quite significant: about 3.4 extra instructions are executed per frame creation because of this restriction. Perhaps our lambda-lifting algorithm is better tuned for heaps than it is for stacks, and this "copying vs. sharing" cost is overstated; it is difficult to tell. It is possible to allow dead variables in frames and closures, if the garbage collector knows they are dead. This can be accomplished using special descriptors, which would reduce the "copying and sharing" penalty for stack frames. For example, in the Chalmers Lazy ML compiler or the Gallium compiler, associated with each return address is a descriptor telling

which variables in the caller's frame are live after the return. But this is not sufficient; heap closures still cannot point to stack frames. A fully flexible system must be able to let the stack frame point to a heap closure that contains several variables, some of which may die before the frame itself. The return-address descriptor would need to indicate not only which variables in the frame are dead, but which live variables point to records in which some of the fields are dead. This is complicated to implement, and we do not know of anyone who has done it.

4.5 REFERENCE TYPE AND VALUE TYPE

This isn't C++, in which you define all types as value types and can create references to them. This isn't Java, in which everything is a reference type. You must decide how all instances of your type will behave when you create it. It's an important decision to get right the first time. You must live with the consequences of your decision because changing later can cause quite a bit of code to break in subtle ways. It's a simple matter of choosing the struct or class keyword when you create the type, but it's much more work to update all the clients using your type if you change it later. It's not as simple as preferring one over the other. The right choice depends on how you expect to use the new type. Value types are not polymorphic. They are better suited to storing the data that your application manipulates. Reference types can be polymorphic and should be used to define the behaviour of your application. Consider the expected responsibilities of your new type, and from those responsibilities, decide which kind of type to create: structs store data; classes define behaviour. The distinction between value types and reference types was added to .NET and C# because of common problems that occurred in C++ and Java. In C++, all parameters and return values were passed by value. Passing by value is very efficient, but it suffers from one problem: partial copying (sometimes called slicing the object). 
If you use a derived object where a base object is expected, only the base portion of the object gets copied. You have effectively lost all knowledge that a derived object was ever there. Even calls to virtual functions are sent to the base class version. The Java language responded by removing value types from the language. All user-defined types are reference types. In the Java language, all parameters and return values are passed by reference. This strategy has the advantage of being consistent, but it's a drain on performance. Let's face it, some types are not polymorphic; they were not intended to be. Java programmers pay a heap allocation and an eventual garbage collection for every variable. They also pay an extra time cost to dereference every variable. All variables are references. In C#, you declare whether a new type should be a value type or a reference type using the struct or class keywords. Value types should be small, lightweight types. Reference types form your class hierarchy. This section examines different uses for a type so that you understand all the distinctions between value types and reference types. Now, v is a copy of the original _myData. As a reference type, two objects are created on the heap. You don't have the problem of exposing internal data. Instead, you've created an extra

object on the heap. If v is a local variable, it quickly becomes garbage, and Clone forces you to use runtime type checking. All in all, it's inefficient. Types that are used to export data through public methods and properties should be value types. But that's not to say that every type returned from a public member should be a value type. There was an assumption in the earlier code snippet that MyData stores values. Its responsibility is to store those values. How many objects are created? How big are they? It depends. If MyType is a value type, you've made one allocation. The size of that allocation is twice the size of MyType. However, if MyType is a reference type, you've made three allocations: one for the C object, which is 8 bytes (assuming 32-bit pointers), and two more, one for each of the MyType objects contained in a C object. The difference results because value types are stored inline in an object, whereas reference types are not. Each variable of a reference type holds a reference, and the storage requires extra allocation. To drive this point home, consider this allocation: MyType[] arr = new MyType[100]; If MyType is a value type, one allocation of 100 times the size of a MyType object occurs. However, if MyType is a reference type, one allocation just occurred. Every element of the array is null. When you initialize each element in the array, you will have performed 101 allocations, and 101 allocations take more time than one allocation. What was a one-time bump in pay to add a bonus just became a permanent raise. Where a copy by value had been used, a reference is now in place. The compiler happily makes the changes for you. The CEO is probably happy, too. The CFO, on the other hand, will report the bug. You just can't change your mind about value and reference types after the fact: it changes behaviour. This problem occurred because the Employee type no longer follows the guidelines for a value type. 
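The array discussion above is phrased in C# terms, but Java exhibits the same contrast between inline value storage and reference storage: an int[] holds its 100 values in a single allocation, while an Integer[] is a single allocation of 100 references, all null, with each element needing its own object. The array names below are illustrative.

```java
public class ArrayAllocation {
    public static void main(String[] args) {
        // Value storage: one allocation holding 100 ints inline.
        int[] values = new int[100];        // every element is already 0

        // Reference storage: one allocation of 100 references, all null.
        Integer[] refs = new Integer[100];  // no Integer objects exist yet
        System.out.println(refs[0]);        // prints "null"

        // Populating it performs up to 100 further allocations
        // (small values may come from the shared Integer cache).
        for (int i = 0; i < refs.length; i++) {
            refs[i] = i; // autoboxing allocates (or reuses) an Integer
        }
        System.out.println(values[0] + " " + refs[99]); // 0 99
    }
}
```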
In addition to storing the data elements that define an employee, you've added responsibilities in this example: paying the employee. Responsibilities are the domain of class types. Classes can define polymorphic implementations of common responsibilities easily; structs cannot, and should be limited to storing values. The documentation for .NET recommends that you consider the size of a type as a determining factor between value types and reference types. A much better factor is the use of the type. Types that are simple structures or data carriers are excellent candidates for value types. It's true that value types are more efficient in terms of memory management: there is less heap fragmentation, less garbage, and less indirection. More important, value types are copied when they are returned from methods or properties. There is no danger of exposing references to internal structures. But you pay in terms of features. Value types have very limited support for common object-oriented techniques. You cannot create object hierarchies of value types. You should consider all value types as though they were sealed. You can create value types that implement interfaces, but that requires boxing, which Item 17 shows causes performance degradation. Think of value types as storage containers, not objects in the OO sense. Build low-level data storage types as value types. Build the behaviour of your application using reference types. You get the safety of copying data that gets exported from your class

objects. You get the memory usage benefits that come with stack-based and inline value storage, and you can utilize standard object-oriented techniques to create the logic of your application. When in doubt about the expected use, use a reference type.

4.6 BOXING AND UNBOXING

The wrapper classes provide a mechanism to "wrap" primitive values in an object so that the primitives can be included in activities reserved for objects, like being added to Collections. There is a wrapper class for every primitive in Java. The wrapper class for int is Integer, the class for float is Float, and so on. Wrapper classes also provide many utility functions for primitives, like Integer.parseInt(). Use case: write a method which can accept anything. You can write the method as public void method(Object obj) { }. Now, since all classes are children of the Object class, their objects can be passed. However, primitive types are not children of Object and hence cannot be passed. We can wrap the primitives inside the corresponding wrapper classes and then pass them to the function. Boxing and un-boxing are among the most important concepts you get asked about in interviews. The idea is easy to understand, and simply refers to the allocation of a value type on the heap rather than the stack. Implicit conversion of a value type to a reference type is known as boxing. In the boxing process, a value type is allocated on the heap rather than the stack. Explicit conversion of the same reference type back to a value type is known as un-boxing. In the un-boxing process, the boxed value type is unboxed from the heap and assigned to a value type which is allocated on the stack.
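The boxing and un-boxing mechanics just described can be shown concretely in Java, where conversion between int and Integer is automatic (autoboxing). A minimal sketch:

```java
public class BoxingDemo {
    public static void main(String[] args) {
        int primitive = 42;

        // Boxing: the int value is wrapped in a heap-allocated Integer object.
        Integer boxed = primitive;   // implicit (autoboxing)
        Object anything = primitive; // also boxes: a primitive cannot be an Object

        // Un-boxing: the value is extracted back into a plain int.
        int unboxed = boxed;         // implicit (auto-unboxing)

        // A wrapper utility function, as mentioned above:
        int parsed = Integer.parseInt("123");

        System.out.println(unboxed + " " + parsed);      // 42 123
        System.out.println(anything instanceof Integer); // true
    }
}
```

This is why a method declared as public void method(Object obj) can in practice accept an int argument: the compiler boxes it into an Integer first.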

Figure 4.2: Boxing and Un-boxing

4.7 PASS BY VALUE AND BY REFERENCE AND OUT PARAMETER

When a function is called, the arguments of the function can be passed by value or passed by reference. The callee is the function being called, and the caller is the function that calls it. The values that are passed in the function call are called the actual parameters. The values received by the function are called the formal parameters. Pass by value means that a copy of the actual parameter's value is made in memory, i.e., the caller and callee have two independent variables with the same value. If the callee modifies the parameter value, the effect is not visible to the caller. Overview:
 Passes an argument by value.
 The callee does not have any access to the underlying element in the calling code.

 A copy of the data is sent to the callee.
 Changes made to the passed variable do not affect the actual value.
Pass by reference (also called pass by address) means to pass the reference of an argument in the calling function to the corresponding formal parameter of the called function, so that a copy of the address of the actual parameter is made in memory, i.e., the caller and the callee use the same variable for the parameter. If the callee modifies the parameter variable, the effect is visible to the caller's variable. Overview:
 Passes an argument by reference.
 The callee gets a direct reference to the programming element in the calling code.
 The memory address of the stored data is passed.
 Changes to the value influence the original data.

4.8 ARRAY LISTS & HASH TABLES

The function that converts a value to an index is called a hash function. A hash function is a function which, when given a key, generates an index in the array. A hash function that returns a unique hash number is called a universal hash function. It is extremely difficult to assign unique hash numbers to objects in practice, unless the number of objects to be processed is known. A hash function has the following properties: it always returns a number for an object; two equal objects will always have the same number; two unequal objects will not always have different numbers. So far, we have used many sorts of variables, but it has always been true that each variable stores one value at a time: one String or one boolean. The Java ArrayList class can store a group of many objects. This capability will greatly expand what our programs can do. Java has a whole suite of "Collection" classes that can store groups of objects in various ways. The ArrayList is the most famous and commonly used type of collection, and it is the one we will use the most. An ArrayList is an object that can store a group of other objects for us and allow us to manipulate those objects one by one. 
For example, we could use an ArrayList to store all the String names of the pizza toppings offered by a restaurant, or we could store all the URLs that make up a user's favourite sites. The Java collection classes, including ArrayList, have one major constraint: they can only store pointers to objects, not primitives. So, an ArrayList can store pointers to String objects or Colour objects, but an ArrayList cannot store a collection of primitives like int or double. This objects-only constraint stems from fundamental aspects of the way Java works, but as a practical matter it is not much of a problem. First, we will look at a small ArrayList example to see roughly how it works, and then we will look at the real syntax. 
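The small example the text refers to does not survive in this copy; the following is a minimal reconstruction (topping names illustrative) with three elements, using the generic ArrayList&lt;String&gt; form of the API:

```java
import java.util.ArrayList;

public class ToppingsExample {
    public static void main(String[] args) {
        // An ArrayList stores a group of object pointers for us.
        ArrayList<String> toppings = new ArrayList<>();
        toppings.add("cheese"); // stored at index 0
        toppings.add("olives"); // stored at index 1
        toppings.add("pepper"); // stored at index 2

        System.out.println(toppings.size()); // 3
        System.out.println(toppings.get(0)); // cheese
        System.out.println(toppings.get(2)); // pepper
    }
}
```

Note that add() stores a pointer to each String; nothing is copied into the list.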

How can we identify each of the three elements in the ArrayList? The ArrayList gives each element an "index number". The first element added is called number 0, the next added is number 1, the next is number 2, and so on. The index numbers identify the individual elements, so we can do things with them. This "zero-based indexing" scheme is extremely common in Computer Science, so it's worth getting accustomed to it. (Indeed, it is the same numbering scheme used to identify individual chars within a String.) The ArrayList responds to two methods that allow us to look at the elements individually. The size() method returns the current number of elements in the ArrayList as an int. The get() method takes an int index argument and returns the pointer at that index number. To add an object to an ArrayList, we pass a pointer to the object we want to add. This does not copy the object being stored. There is just one object, and we have stored a pointer to it in the ArrayList. Indeed, copying objects is very rare in Java. Usually, we have a few objects, and we copy pointers to those objects around. The prototype of add() is public void add(Object element); The type "Object" means that the argument can be any pointer type: String, or Colour, or DRect. We will study the Object type in more detail soon when we look at Java's "inheritance" features. For now, Object works as a generic pointer type that effectively means "any type of pointer". ArrayList and the other standard Java collection classes support many convenient utility methods to operate on their elements. We mention these now for completeness, although our code will mostly use the basic methods: add(), size(), and get(). For the methods below that do comparisons, the ArrayList always uses the equals() method to determine if two objects are the same. The equals() method works correctly for the standard Java classes like String, Colour, and so on. 
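The comparison methods just mentioned rely on equals(), and hash tables additionally rely on the hash-function properties listed at the start of this section: two equal objects always get the same hash number, while unequal objects may still collide. A small sketch of how Java's hashCode()/equals() contract makes a Hashtable work:

```java
import java.util.Hashtable;

public class HashDemo {
    public static void main(String[] args) {
        String a = new String("pizza"); // two distinct objects...
        String b = new String("pizza"); // ...with equal contents

        // Two equal objects always have the same hash number.
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true

        // That property lets the hash table map the key to the same
        // bucket index regardless of which equal object is used.
        Hashtable<String, Integer> price = new Hashtable<>();
        price.put(a, 12);
        System.out.println(price.get(b)); // 12
    }
}
```

If equal objects could return different hash numbers, the lookup with b would land in the wrong bucket and fail, which is exactly why the contract above is required.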
The basic methods add(), size() and get() run very fast, no matter how big the collection is. In contrast, some of the methods below must search over the whole collection, and so are potentially much slower.

4.9 SUMMARY

 Web application security is a stack of attack surfaces and defensive mitigating solutions. It is not enough to protect web applications with only one technique, or at only one layer of the stack. Vulnerabilities in the platform, or in protocols such as TCP or HTTP, are just as devastating to the security and availability of applications as attacks against the application itself.
 A full stack of mitigating solutions is necessary to realise a positive web application security posture. It is important to note that a comprehensive approach requires collaboration across network, security, operations, and development teams, as each has a role to play in protecting applications and their critical data.

 The Web took the world by storm, and as a result developed rapidly in many directions. However, it still exhibits many aspects of its early development, such as its visual and computer-screen orientation. But the Web is still developing rapidly: there are now more browsers on mobile telephones than on desktops, and there is a vast diversity in types of devices, types and orientations of screens, sizes (in number of pixels), and resolutions of screens.
 This diversity is impossible to address just by keeping a list of all the possible devices, or even a list of the most-used ones, and producing different sites for them, since the complexity would be unmanageable, and because once sites started turning away browsers and devices they didn't know, the browser makers responded by disguising themselves to such sites as other browsers.
 On top of this diversity there is also the diversity required for accessibility. Although providing access for the visually impaired is an important reason for accessibility, we are all visually impaired at one time or another. When displaying an application on a projector screen at a conference or meeting, the whole audience will typically be visually impaired in comparison to someone sitting behind a computer screen. The existence of separate so-called "Ten-foot Interfaces" (for people controlling their computers by remote control from an armchair ten feet away) demonstrates that the original applications are not designed for accessibility. Furthermore, Google (and all other search engines) is blind and sees only what a blind user sees of a page; as the webmaster of a large bank has remarked, "we have noticed that improving accessibility increases our Google rating".
 The success of the Web has turned the browser into a central application area for the user, and you can spend most of your day working with applications in the browser, reading mail, shopping, searching your own disk drive. 
The advent of applications such as Google Maps and Gmail has focussed minds on delivering applications via the web, not least because it eliminates the problems involved with versioning: everyone always has the most recent version of your application. Since Web-based applications have benefits for both user and provider, we can only expect to see more of them in the future.
 But this approach comes at a cost. Google Maps is of the order of 200K of JavaScript code. Such applications are only writable by programming experts, and producing an application is not possible for the sort of people who often produce web pages for their own use.
 The Web Interfaces landscape is in turmoil now. Microsoft has announced a new mark-up language and vector graphics language for the next version of Windows; probably as a response, Adobe has acquired Macromedia and therefore Flash; W3C have standards for applications in the form of XForms, XHTML and SVG and are

working on 'compound documents'; and other browser manufacturers are calling for their own version of HTML.
 This talk discusses the requirements for Web Applications, and the underpinnings necessary to make Web Applications follow in the same spirit that engendered the Web in the first place.

4.10 KEYWORDS

 Fields - If you build it, they will come: the most basic of the building blocks for data collection. These are the storage units that your website visitors use to enter their names, email addresses, notes, etc. If you're asking for first name, last name, email address, city, and zip code across five different entry boxes, that's five fields.
 Framework - A suite of programs used in website or software development. This lays the groundwork for the type of programming language used for your site or app development.
 Front End - The part of the website or app that the user sees. If the back end of your website is everything behind-the-scenes, this is what happens onstage.
 Graphical User Interface - The image of how a website is laid out and meant to be interacted with. In website design, this is how everything will ideally look in layout (your mileage may vary when you move into development given the number of different browsers and versions).
 Meta Tag - Additional information on web pages or elements, such as the way a piece of content should display in Google search results, the photo credit for an image, or the main keywords associated with a plug-in. This is huge for SEO. We recommend the Yoast plug-in on WordPress for adding all the necessary meta information to set up your site for SEO success.

4.11 LEARNING ACTIVITY

1. Create a session on Boxing and Un-boxing.
___________________________________________________________________________
___________________________________________________________________________
2. Create a survey on Stack and Heap Memory.
___________________________________________________________________________
___________________________________________________________________________

4.12 UNIT END QUESTIONS

A. Descriptive Questions
Short Questions
1. Define ARGS.
2. Define Array.
3. What is boxing?
4. What is un-boxing?
5. What do you mean by hash tables?
Long Questions
1. Explain Stack and Heap Memory.
2. Discuss the Reference Type and Value Type.
3. Describe Pass by Value, Pass by Reference and the Out Parameter.
4. Illustrate the use of Array Lists & Hash Tables.
5. Examine the concept of Command Line Arguments.
B. Multiple Choice Questions
1. What are cookies?
a. Cookies are text files stored on the client computer and they are kept for various information tracking purposes
b. Cookies are binary files stored on the server computer and they are kept for various information tracking purposes
c. Cookies are binary files stored on the client computer and they are kept for data storage purposes
d. None of these
2. Who developed Internet Relay Chat?
a. Jarkko Oikarinen
b. Tim Berners-Lee
c. Robert Cailliau
d. None of these
3. Which of the following is not an example of a search engine?
a. Google

b. Gmail
c. Yahoo
d. AltaVista
4. Which term refers to inserting spurious data or information into an organization's system to disrupt or overload services?
a. Interruption
b. Interception
c. Modification
d. Fabrication
5. Which was the first search engine on the Internet?
a. Google
b. Archie
c. AltaVista
d. WAIS
Answers
1-a, 2-a, 3-b, 4-d, 5-b

4.13 REFERENCES
Book References
 Prosser, David (15 September 2010). "The Business On Ashley Tabor OBE, Founder and Global Group CEO, Global Radio".
 "Global/GMG final report". Competition Commission. 21 May 2013.
 Martin, Roy (8 March 2013). "Global Radio offers to sell XS & Gold EM". Radio Today. Archived from the original on 12 October 2013.
 Martin, Roy (15 November 2013). "Global disappointed at appeal dismissal". Radio Today. Archived from the original on 14 February 2014. Retrieved 6 February 2014.
 Martin, Roy (6 February 2014). "Communicorp buys 8 Global stations". Radio Today. Archived from the original on 9 February 2014.
E-References

 https://social.msdn.microsoft.com/Forums/en-US/c9c12a84-7c47-45d3-84b8-0ccd317b48f9/how-to-specify-windows-installer-condition-that-must-be-satisfy-in-order-for-the-selected-item-to-be?forum=csharplanguage
 https://www.w3.org/2005/Talks/09-steven-interact/
 https://www.educative.io/edpresso/pass-by-value-vs-pass-by-reference

UNIT 5: GENERIC COLLECTIONS, .NET ASSEMBLY STRUCTURE

5.0 Learning Objectives
5.1 Introduction
5.2 Classification of Assembly
5.2.1 Single-File
5.2.2 Multi-File
5.3 Creating and Using Managed DLLs
5.4 Private Assembly and Shared Assembly
5.5 The Global Assembly Cache
5.6 Property Procedures
5.7 Summary
5.8 Keywords
5.9 Learning Activity
5.10 Unit End Questions
5.11 References

5.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
• Describe the concept of an assembly.
• Illustrate the classification of assemblies.
• Explain creating and using managed DLLs.

5.1 INTRODUCTION

The language used to command a computer architecture consists of instructions, and the vocabulary of that language is called the instruction set. The only way computers can represent information is with high or low electric signals, i.e., transistors (electric switches) being turned on or off. Being limited to those two alternatives, we represent information in computers using bits (binary digits), which can take one of two values: 0 or 1. So instructions will be stored in and read by computers as sequences of bits. This is called machine language. To make sure we don't need to read and write programs using bits, every

instruction will also have a "natural language" equivalent, called the assembly language notation. For example, in C we can use the expression c = a + b; or, in assembly language, we can use add c, a, b, and these instructions will be represented by a sequence of bits. Since every bit can only be 0 or 1, with a group of n bits we can generate 2^n different combinations of bits. For example, we can make 2^8 combinations with one byte, 2^16 with one halfword, and 2^32 with one word. Please note that we are not making any statements, so far, on what each of these 2^n combinations represents: it could represent a number, a character, an instruction, a sample from a digitized CD-quality audio signal, etc. In this chapter, we will discuss how a sequence of 32 bits can represent a machine instruction. In the next chapter, we will see how a sequence of 32 bits can represent numbers. In a high-level programming language such as C, we can declare as many variables as we want. In a low-level programming language such as MIPS R2000, the operands of our operations have to be tied to physical locations where information can be stored. We cannot use locations in the main physical memory for this, as that would delay the CPU significantly (indeed, if the CPU had to access main memory for every operand in every instruction, the propagation delay of electric signals on the connection between the CPU and the memory chip would slow things down significantly). Therefore, the MIPS architecture provides for 32 special storage locations, built directly into the CPU, each of them able to store 32 bits of information (1 word), called "registers". A small number of registers that can be accessed easily and quickly allows the CPU to execute instructions very fast. Therefore, each of the three operands of a MIPS R2000 instruction is restricted to one of the 32 registers. For instance, each of the operands of the add and sub instructions needs to be associated with one of the 32 registers.
Each time an add or sub instruction is executed, the CPU will access the registers specified as operands for the instruction (without accessing the main memory). The instruction add $1, $2, $3 means "add the value stored in the register named $2 and the value stored in the register named $3, and then store the result in the register named $1." The notation $x refers to the name of a register and, by convention, always starts with a $ sign. In this text, if we use the name of a register without the $ sign, we refer to its content (what is stored in the register); for example, x refers to the content of $x. Large, complex data structures, such as arrays, won't fit in the 32 registers that are available on the CPU and need to be stored in the main physical memory (implemented on a different chip than the CPU and capable of storing a lot more information). To perform, e.g., arithmetic operations on elements of arrays, the elements of the array first need to be loaded into registers. Inversely, the results of the computation might need to be stored back in memory, where the array resides. An instruction such as lw $r1, 100($r2) again performs one operation and has 3 operands. The first operand refers to the register the memory content will be loaded into. The register specified by the third operand, the base register, contains a memory address. The actual memory address the CPU accesses is computed as the sum of "the 32-bit word stored in the base register ($r2 in this case)" and "the offset" (100 in this case). Overall, the above instruction

will make the CPU load the value stored at memory address 100 + r2 into register $r1. It also needs to be pointed out that an lw instruction will not only load MEM[100 + r2] into a register, but also the content of the 3 subsequent memory cells, at once. The 4 bytes from the 4 memory cells fit nicely in a register that is one word long. The register corresponding to each operand is encoded using 5 bits, which is exactly what we need to represent one of the 32 = 2^5 registers. The SHAMT field will be discussed later, when shift instructions are introduced. The FUNCT field can be used to select a specific variant of the operation specified in the OP field (e.g., OP can specify a shift instruction and FUNCT then indicates whether it is a right shift or a left shift). Using the previous instruction format would only provide 5 bits to represent the third operand, a constant, in an addi or lw instruction. Therefore, an instruction like, e.g., addi $r1, $r2, 256, where the constant value of 256 cannot be represented using only 5 bits (since 11111 in binary is 31), could not be represented in machine language. Similarly, the offset in an lw instruction would be limited. To solve this problem without having to increase the length of instructions beyond 32 bits, MIPS provides for a small number of different instruction formats. The instruction format we described before is called the R(register)-type instruction format (where all 3 operands are registers). Instructions like lw and addi are represented using the I(immediate)-type instruction format (where one operand is a 16-bit constant, allowing much larger constants and offsets). Essentially, each program is an array of instructions, where each instruction is represented as a 32-bit word. Just like data (integer numbers, characters, etc. represented by a sequence of bits), instructions can be stored in memory.
This stored-program concept was introduced by von Neumann, and architectures that store both data and programs in memory are called von Neumann architectures. In a Harvard architecture, programs and data are stored in separate memory chips (e.g., in DSPs). In a non-Harvard architecture, programs and data are stored on one and the same memory chip, which gives rise to the picture below: in one and the same memory, data, C source code, a C compiler program, compiled machine code, the OS program, etc. are all simultaneously present, in many different sections of the memory. Decision-making instructions compare the first two operands and behave differently based on the result of the comparison. In the case of BEQ, the contents of the two registers, $r1 and $r2 in this case, will be compared and, if they are equal, the CPU will continue with the instruction at the memory address specified by the third operand, label. If the compared values are not equal, the CPU will continue with the next instruction. In machine language, the third operand will describe the actual memory address. In assembly language, the programmer is relieved of the burden of having to specify the address explicitly and, instead, a symbolic reference to the actual address is used, called a "label" (this also addresses the problem that the actual memory address of an instruction is only known exactly when the OS allocates physical space in memory to a program). In the case of BNE, if the

compared values are not equal, the CPU will continue execution at label; otherwise, it will continue with the next instruction.

Figure 5.1: .NET Assembly

5.2 CLASSIFICATION OF ASSEMBLY

The integration of design and assembly planning has been considered a crucial method for achieving assembly-oriented product development, to reduce manufacturing cost and enhance production efficiency and product quality. An important foundation for developing algorithms for the integration of design with assembly planning is the development of an assembly features classification. Feature-based assembly modelling and planning not only improves the link between design and downstream applications; the designer's task can also be made somewhat easier. An assembly feature is the elementary connection feature containing mating relations between the components. Assembly features are divided into connection features and handling features. Handling features represent handling information and connection features represent connections between components. The latter is regarded as an association between two form features on different parts. Form features are specific configurations on surfaces, edges, or corners of a part and are defined for different aspects. One of the aspects is robot assembly. A robot assembly process is defined and simulated using form features defined on the parts of the product to be assembled. The connection features and their mating

relations can be used to obtain the Connection Graph and/or Component-Mating Graph to efficiently represent all feasible and complete disassembly sequences with correct precedence relations. In some connections, the two connecting features are joined by a physical connector. A connector provides constraints on its jointed components to ensure that these components perform the required functions. Several definitions of assembly features provided by various researchers are given below. DeFazio defined an assembly feature as any geometric or non-geometric attribute of a discrete part whose presence or dimensions are relevant to the product's or part's function, manufacture, engineering analysis, and use. The first appropriate use of the term assembly feature was by Sodhi and Turner; before that, the term was sometimes used, but only to specify elementary relations. Sodhi and Turner used assembly features for the specification of relations between components on a higher abstraction level. In their opinion, assembly features served as a higher-level interface, capturing assembly relations at the functional level and removing from the designer the burden of identifying the underlying elementary relations. Shah and Tadepalli defined an assembly feature as an association between two form features on different parts. Shah and Mantyla defined assembly features as groupings of various feature types that define assembly relations, such as mating conditions, relative position and orientation of parts, various kinds of fits, and kinematic relations. Deneux defined an assembly feature as a generic solution referring to two groups of parts that need to be related by a relationship to solve a design problem. Bronsvoort and Van Holland defined assembly features as "features with significance for assembly processes", subdivided into connection features and handling features.
Raymond Chun Wai Sung introduced the notion of assembly features composed of three adjacency relationships: Contact Adjacency, Internal Spatial Adjacency, and External Spatial Adjacency. Contact Adjacency is similar to the connection features used by Bronsvoort and Van Holland. Internal Spatial Adjacency is similar to the handling features used by Bronsvoort and Van Holland. External Spatial Adjacency shows spatially opposing faces separated by empty space. N. Shyamsundar and Rajit Gadh defined an assembly feature as a property of an assembly unit with respect to another component, which provides assembly-related information relevant to the design, manufacture, or function of the product assembly. Chan and Tan defined an assembly feature as the elementary connection feature containing mating relations between the components. Zha and Du defined assembly features as particular form features that affect assembly operations, which are defined by connectors. An assembly feature is also defined as an element that specifies the relationships between a pair of assembled components. According to Kamal Youcef-Toumi, assembly features are the carriers of constraint between parts, and their shape determines which degrees of freedom are constrained and which remain free. The literature review shows that most of the published work on assembly features focuses on two things: the connection between features, and the assembly and/or mating relations between the connecting features of the mating parts. Therefore, using the concept of connecting features, a definition of assembly feature with its mathematical representation is provided in the next

section. Owing to the importance of assembly and/or mating relations between a pair of assembled parts, a description of the various types of mating relations existing in most assembled parts is provided afterward.

5.2.1 Single-File

Now that we have seen how the program counter is used to determine which instruction will be executed next, let's take a closer look at how branching instructions are implemented. Conditional branching instructions are I-type instructions. That instruction type provides 16 bits to encode the branch address. Since this is too few to encode an actual, full memory address, MIPS R2000 encodes the word address of the instruction to branch to, relative to the instruction that follows right after the branch instruction. Indeed, since instructions are 1 word long, we can specify the branch address as a number of words before or after the instruction following the branch instruction. This allows jumping up to 2^15 words away, which is usually enough. Going back to the 6 steps required to make use of procedures, we still need to handle the case of extra arguments, or more than 2 values to be returned (more than what fits in the 2 available registers). Extra storage space can be acquired by the procedure by moving the content of some of the registers temporarily to memory. Extra arguments or return values can also be accommodated using memory. The section of memory that is used for all of this is called the stack. The stack is a data structure that is organized as a last-in-first-out queue: what was added last to the stack, i.e., pushed last onto the stack, will be the first item to be removed from it, i.e., popped off the stack. The stack is implemented using physical memory and located in the upper end of it. By historical precedent, the stack grows from the higher to the lower memory addresses.
One register is dedicated to keeping track of the next available memory address for the stack, so that we know where to place new data onto the stack (for a push) or find the most recently added item (for a pop): the stack pointer register, $sp. One stack entry consists of one word, so $sp gets adjusted by 1 word for each push or pop operation, keeping in mind that the stack grows from the higher to the lower memory addresses.

5.2.2 Multi-File

Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1993 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images, and other information needed to display it. PDF was standardized as ISO 32000 in 2008. The latest edition, ISO 32000-2:2020, was published in December 2020.

PDF files may contain a variety of content besides flat text and graphics, including logical structuring elements; interactive elements such as annotations and form fields; layers; rich media (including video content); three-dimensional objects using U3D or PRC; and various other data formats. The PDF specification also provides for encryption and digital signatures, file attachments, and metadata to enable workflows requiring these features. Objects may be either direct or indirect. Indirect objects are numbered with an object number and a generation number and defined between the obj and endobj keywords if residing in the document root. Beginning with PDF version 1.5, indirect objects (except streams) may also be located in special streams known as object streams. This technique enables non-stream objects to have standard stream filters applied to them, reduces the size of files that have large numbers of small indirect objects, and is especially useful for Tagged PDF. Object streams do not support specifying an object's generation number (other than 0). An index table, also called the cross-reference table, is located near the end of the file and gives the byte offset of each indirect object from the start of the file. This design allows for efficient random access to the objects in the file, and also allows small changes to be made without rewriting the entire file. Before PDF version 1.5, the table would always be in a special ASCII format, be marked with the xref keyword, and follow the main body composed of indirect objects. Version 1.5 introduced optional cross-reference streams, which have the form of a standard stream object, possibly with filters applied. Such a stream may be used instead of the ASCII cross-reference table and contains the offsets and other information in binary format.
The format is flexible in that it allows for integer-width specification (using the /W array), so that, for example, a document not exceeding 64 KiB in size may dedicate only 2 bytes for object offsets. As in PostScript, vector graphics in PDF are constructed with paths. Paths are usually composed of lines and cubic Bézier curves, but can also be constructed from the outlines of text. Unlike PostScript, PDF does not allow a single path to mix text outlines with lines and curves. Paths can be stroked, filled, filled then stroked, or used for clipping. Strokes and fills can use any colour set in the graphics state, including patterns. PDF supports several types of patterns. The simplest is the tiling pattern, in which a piece of artwork is specified to be drawn repeatedly. This may be a coloured tiling pattern, with the colours specified in the pattern object, or an uncoloured tiling pattern, which defers colour specification to the time the pattern is drawn. Beginning with PDF 1.3 there is also a shading pattern, which draws continuously varying colours. There are seven types of shading patterns, of which the simplest are the axial shading and the radial shading. The original imaging model of PDF was, like PostScript's, opaque: each object drawn on the page completely replaced anything previously marked in the same location. In PDF 1.4 the imaging model was extended to allow transparency. When transparency is used, new objects interact with previously marked objects to produce blending effects. The addition of transparency to PDF was done by means of new extensions that were designed to be ignored

in products written to the PDF 1.3 and earlier specifications. As a result, files that use a small amount of transparency might display acceptably in older viewers, but files making extensive use of transparency could be displayed incorrectly by an older viewer without warning. The transparency extensions are based on the key concepts of transparency groups, blending modes, shape, and alpha. The model is closely aligned with the features of Adobe Illustrator version 9. The blend modes were based on those used by Adobe Photoshop at the time. When the PDF 1.4 specification was published, the formulas for calculating blend modes were kept secret by Adobe. They have since been published. The concept of a transparency group in the PDF specification is independent of existing notions of "group" or "layer" in applications such as Adobe Illustrator. Those groupings reflect logical relationships among objects that are meaningful when editing those objects, but they are not part of the imaging model.

5.3 CREATING AND USING MANAGED DLLS

Basically, a DLL is a file consisting of global data, compiled functions, and resources that becomes part of your process. It is compiled to load at a preferred base address, and if there is no conflict with other DLLs, the file gets mapped to the same virtual address in your process. The DLL has various exported functions, and the client program (the program that loaded the DLL in the first place) imports those functions. Windows matches up the imports and exports when it loads the DLL. Win32 DLLs allow exported global variables as well as functions. In Win32, each process gets its own copy of the DLL's read/write global variables. If you want to share memory among processes, you must either use a memory-mapped file or declare a shared data section, as described in Jeffrey Richter's Advanced Windows (Microsoft Press, 1997). Whenever your DLL requests heap memory, that memory is allocated from the client process's heap.
A DLL contains a table of exported functions. These functions are identified to the outside world by their symbolic names and by integers called ordinal numbers. The function table also contains the addresses of the functions within the DLL. When the client program first loads the DLL, it doesn't know the addresses of the functions it needs to call, but it does know the symbols or ordinals. The dynamic linking process then builds a table that connects the client's calls to the function addresses in the DLL. If you edit and rebuild the DLL, you don't need to rebuild your client program unless you have changed function names or parameter sequences. In a simple world, you'd have one EXE file that imports functions from one or more DLLs. In the real world, many DLLs call functions inside other DLLs. Thus, a particular DLL can have both exports and imports. This is not a problem, because the dynamic linkage process can handle cross-dependencies. By default, the compiler uses the __cdecl argument-passing convention, which means that the calling program pops the parameters off the stack. Some client languages might require

the __stdcall convention, which replaces the Pascal calling convention, and which means that the called function pops the stack. Therefore, you might have to use the __stdcall modifier in your DLL export declaration. Just having import declarations isn't enough to make a client link to a DLL. The client's project must specify the import library (LIB) to the linker, and the client program must contain a call to at least one of the DLL's imported functions. That call statement must be in an executable path in the program. The preceding section primarily describes implicit linking, which is what you as a C++ programmer will probably be using for your DLLs. When you build a DLL, the linker produces a companion import LIB file, which contains every exported symbol of the DLL and (optionally) its ordinals, but no code. The LIB file is a surrogate for the DLL that is added to the client program's project. When you build (statically link) the client, the imported symbols are matched to the exported symbols in the LIB file, and those symbols are bound into the EXE file. The LIB file also contains the DLL filename, which gets stored inside the EXE file. When the client is loaded, Windows finds and loads the DLL and then dynamically links it by symbol or by ordinal. Explicit linking is more appropriate for interpreted languages such as Microsoft Visual Basic, but you can use it from C++ if you need to. In Win16, the more efficient ordinal linkage was the preferred linkage option. In Win32, the symbolic linkage efficiency was improved, and Microsoft now recommends symbolic over ordinal linkage. The DLL version of the MFC library, however, uses ordinal linkage. A typical MFC program might link to hundreds of functions in the MFC DLL. Ordinal linkage permits that program's EXE file to be smaller, because it does not have to contain the long symbolic names of its imports.
If you build your own DLL with ordinal linkage, you must specify the ordinals in the project's DEF file, which doesn't have too many other uses in the Win32 environment. If your exports are C++ functions, you must use decorated names in the DEF file (or declare your functions with extern "C"). Each DLL in a process is identified by a unique 32-bit HINSTANCE value. In addition, the process itself has an HINSTANCE value. All these instance handles are valid only within a particular process, and they represent the starting virtual address of the DLL or EXE. In Win32, the HINSTANCE and HMODULE values are the same, and the types can be used interchangeably. The process (EXE) instance handle is almost always 0x00400000, and the handle for a DLL loaded at the default base address is 0x10000000. If your program uses several DLLs, each will have a different HINSTANCE value, either because the DLLs had different base addresses specified at build time or because the loader copied and relocated the DLL code. Instance handles are particularly important for loading resources. The Win32 FindResource() function takes an HINSTANCE parameter. EXEs and DLLs can each have their own resources. If you want a resource from the DLL, you specify the DLL's instance handle. If you want a resource from the EXE file, you specify the EXE's instance handle. How do you get an instance handle? If you want the EXE's handle, you call the Win32 GetModuleHandle() function with a NULL parameter. If you want the DLL's handle, you

call the Win32 GetModuleHandle() function with the DLL name as a parameter. Later you'll see that the MFC library has its own method of loading resources by searching various modules in sequence. We've been looking at Win32 DLLs that have a DllMain() function and some exported functions. Now we'll move into the world of the MFC application framework, which adds its own support layer on top of the Win32 basics. AppWizard lets you build two kinds of DLLs with MFC library support: extension DLLs and regular DLLs. You must understand the differences between these two types before you decide which one is best for your needs. Of course, Visual C++ lets you build a pure Win32 DLL without the MFC library, just as it lets you build a Windows program without the MFC library. This is an MFC-oriented book, however, so we'll ignore the Win32 option here. An extension DLL supports a C++ interface. In other words, the DLL can export whole classes, and the client can construct objects of those classes or derive classes from them. An extension DLL dynamically links to the code in the DLL version of the MFC library. Therefore, an extension DLL requires that your client program be dynamically linked to the MFC library (the AppWizard default) and that both the client program and the extension DLL be synchronized to the same version of the MFC DLLs. Extension DLLs are quite small; you can build a simple extension DLL with a size of 10 KB, which loads quickly. A big restriction of a regular DLL is that it can export only C-style functions. It can't export C++ classes, member functions, or overloaded functions, because every C++ compiler has its own method of decorating names. You can, however, use C++ classes (and MFC library classes, in particular) inside your regular DLL. When you build an MFC regular DLL, you can choose to statically link or dynamically link to the MFC library.
If you choose static linking, your DLL will include a copy of all the MFC library code it needs and will thus be self-contained. A typical Release-build, statically linked regular DLL is about 144 KB in size. If you choose dynamic linking, the size drops to about 17 KB, but you'll have to ensure that the proper MFC DLLs are present on the target machine. That's no problem if the client program is already dynamically linked to the same version of the MFC library.

5.4 PRIVATE ASSEMBLY AND SHARED ASSEMBLY

An instanced assembly feature is defined as "the identification of an assembly feature that can be described by a set of attributes that represent the characteristics of the assembly feature." An attempt is made to establish all possible instances and sub-instances of assembly features. For example, as shown in Fig. 2, the four instances, namely round pin-hole, threaded pin-hole, conical pin-hole, and rectangular pin-hole, are sub-instances of the instance pin-hole assembly feature. The instance pin-hole assembly feature is itself a sub-instance of the fit assembly feature, which is a particular type of assembly feature. The four instances of the instance pin-hole assembly feature can themselves be further instantiated. For example, the sub-instance round pin-hole of the instance pin-hole assembly feature can be further instantiated

into sub-instances of round pin-through hole and round pin-blind hole. The pin-hole feature instance contains the properties and operations required to define the pin-hole assembly feature. Similarly, instances are provided for other types of assembly features, i.e., plane-plane, pin-slot, and rib-slot assembly features. The assembly feature definition and its mathematical representation are provided using the concept of assembly intents, which not only specifies the information of assembly and/or mating relations between the connecting form features, but associates the connecting form features with other assembly-specific information as well, for example, assembly operations and assembly degrees of freedom. Assembly features are classified into against and fit; single and multiple; soft, hard, and composite; and functioning and interlocking assembly features. Against and fit assembly features are the assembly features that depend upon the assembly or mating relationships between the connecting form features. Single and multiple assembly features depend upon the number of connecting form features (on two mating parts) that connect simultaneously when the two parts mate with each other. Soft and hard assembly features are defined keeping in view the type of attachment and/or connection between the two connecting features. A composite assembly feature is a type of hard assembly feature and gives compound assembly features when the form features are related with the help of a connector. Depending upon the degrees of freedom (DOF), assembly features are divided into functioning assembly features and interlocking assembly features. Possible instances of the different assembly features widely used in assembly are provided. The list of assembly features and their instances can be further extended. For example, by combining the already existing single assembly features, more multiple assembly features can be obtained.
Important assembly-specific information that can be obtained from the assembly features is also provided. This information can be very useful for assembly and/or disassembly process planning. The assembly features classification and instantiation can be of great significance in the integration of CAD and assembly planning.

5.5 THE GLOBAL ASSEMBLY CACHE

The .NET Framework global assembly cache is a code cache. The global assembly cache is automatically installed on each computer that has the .NET Framework common language runtime installed. Any application that is installed on the computer can access the global assembly cache. The global assembly cache stores assemblies that are designated to be shared by several applications on the computer. Component assemblies are typically stored in the C:\WINNT\Assembly folder. To install an assembly DLL file in the .NET Framework global assembly cache, you can use the .NET Framework SDK Global Assembly Cache tool. You can also use the Global Assembly Cache tool to verify that the assembly is installed in the global assembly cache. To accomplish this task, you must have Administrator rights on the computer where the shared assembly is installed. What's more, you must install the .NET Framework SDK. An assembly is a fundamental part of programming with the .NET

Framework. An assembly is a reusable, self-describing building block of a .NET Framework common language runtime application. An assembly contains one or more code components that the common language runtime executes. All types and all resources in the same assembly form an individual version of the unit. The assembly manifest describes the version dependencies that you specify for any dependent assemblies. By using an assembly, you can specify version rules between different software components, and you can have those rules enforced at run time. An assembly supports side-by-side execution. An assembly must have a strong name to be installed in the global assembly cache. A strong name is a globally unique identity that can't be spoofed by someone else. By using a strong name, you prevent components that have the same name from conflicting with each other or from being used incorrectly by a calling application. Assembly signing associates a strong name together with an assembly. Assembly signing is also named strong-name signing. 5.6 PROPERTY PROCEDURES A property procedure is a series of Visual Basic statements that manipulate a custom property on a module, class, or structure. Property procedures are also known as property accessors. Visual Basic provides for the following property procedures:  A Get procedure returns the value of a property. It is called when you access the property in an expression.  A Set procedure sets a property to a value, including an object reference. It is called when you assign a value to the property.  You usually define property procedures in pairs, using the Get and Set statements, but you can define either procedure alone if the property is read-only (Get Statement) or write-only (Set Statement).  You can omit the Get and Set procedure when using an auto-implemented property. For more information, see Auto-Implemented Properties.  You can define properties in classes, structures, and modules. 
- Properties are Public by default, which means you can call them from anywhere in your application that can access the property's container.
- For a comparison of properties and variables, see Differences Between Properties and Variables in Visual Basic.

Declaration syntax

A property itself is defined by a block of code enclosed within the Property statement and the End Property statement. Inside this block, each property procedure appears as an internal block enclosed within a declaration statement (Get or Set) and the matching End declaration.
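The declaration syntax described above can be illustrated with a short sketch; the class, field, and property names here are invented for illustration:

```vb
Public Class Employee
    Private _name As String   ' backing field for the Name property

    ' A property with explicit Get and Set procedures.
    Public Property Name As String
        Get
            Return _name
        End Get
        Set(value As String)
            _name = value
        End Set
    End Property

    ' A read-only property defines only a Get procedure.
    Public ReadOnly Property Greeting As String
        Get
            Return "Hello, " & _name
        End Get
    End Property

    ' An auto-implemented property: the compiler supplies the
    ' Get and Set procedures and the backing field automatically.
    Public Property Department As String
End Class
```

Assigning `emp.Name = "Ada"` invokes the Set procedure, while reading `emp.Greeting` in an expression invokes its Get procedure.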

