
be a goal although, with the need to start a system and the relay in place, there are those who do mix and match voltages on the contact terminals of a single relay.

8.2 MDI APPLICATIONS

The Multiple Document Interface (MDI) is a Microsoft Windows specification for managing multiple documents within a single graphical application. An MDI application allows several documents to be open simultaneously, with only one document active at a particular time. MDI applications can be built using Win32 or the Microsoft Foundation Classes (MFC). Programs developed using Win32 are faster than those using MFC; however, Win32 applications are harder to implement and more error-prone. It should be mentioned that learning how to use MFC properly to build MDI applications is not simple either, and MFC performance is typically worse than that of Win32 applications. This chapter presents a method to simplify the development of MDI applications using object-oriented programming (OOP). It then shows that this method generates compact code that is easier to read and maintain than other methods, and demonstrates that the proposed method allows rapid development of MDI applications without sacrificing application performance.

In general, a window is divided into two sections, the client area and the non-client area, as shown in Figure 1. The non-client area includes the title bar and the surrounding borders of the window. The client area, on the other hand, completely covers the interior delimited by the title bar and the window borders. A typical Windows application is responsible for drawing the client area, while the operating system is responsible for drawing and managing the non-client area. Each window has a message queue to store its messages while it is busy, and a function to process those messages. Plainly, a queue is a line of elements waiting for a service: when a customer goes to the bank, he enters a queue (line) and waits to be served. In a Windows application, once a message is processed it is removed from the queue and the next message may be processed. When the message queue is empty, the window object temporarily sleeps if no further processing is required. The function that processes Windows messages is commonly known as the window procedure, and it determines the window's behaviour. A Microsoft Windows class is a generic blueprint of a window object, and a Windows class must be registered before a window of that class can be created. Specifically, the class-registration process requires several parameters that establish how the window operates; the most important of these parameters is the window procedure. Typically, the window procedure must respond to the messages of interest for that window and leave Windows to process the remaining messages using the Application Programming Interface (API) ::DefWindowProc. Note the use of the two colons before the function name to indicate that the function is declared inside the global namespace. An application may register one or more Windows classes or use a pre-registered Windows class.
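The discussion above is phrased in Win32/C++ terms, but the same message-processing model is visible from C#, which this course otherwise uses. As a point of comparison, here is a minimal, illustrative WinForms sketch: every Control already owns a window procedure, and overriding WndProc lets code inspect selected messages before delegating the rest to default processing, just as a raw window procedure forwards unhandled messages to ::DefWindowProc. The form name and the message chosen are arbitrary.

using System;
using System.Windows.Forms;

public class EchoForm : Form
{
    private const int WM_LBUTTONDOWN = 0x0201; // standard Windows message identifier

    // Called for every message delivered to this window's queue.
    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_LBUTTONDOWN)
        {
            Console.WriteLine("Left mouse button pressed");
        }
        base.WndProc(ref m); // default processing, the WinForms analogue of ::DefWindowProc
    }
}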

In general, Multiple Document Interface applications have some similarities with typical single-document-interface applications; however, there are some differences, addressed next (see the Microsoft Developer Network). The area inside the main window, also known as the workspace, is the area where the document windows live. These document windows are better known as MDI children and can be moved inside the application workspace by the user. An MDI application clips its MDI child windows to its workspace, preventing users from moving MDI child windows outside the frame window. MDI child windows do not have a toolbar or a menu; for this reason, they have only a title bar, as can be seen. The operating system, Microsoft Windows, allows several applications to share the mouse and the keyboard using the active-application concept: there is only one active application, and this application receives mouse and keyboard input. Similarly, MDI extends this concept to document windows; as a result, only one MDI child window can be active inside an MDI application. The active child window has a highlighted title bar that allows the user to identify it easily. Naturally, the active MDI child can be manipulated using the application menu and the application toolbar. The menu and toolbar of an MDI application must be synchronized with the state of the active child and of the application itself. When an MDI application has no children, its menu and toolbar may offer only a subset of viable options; usually the only available options are those for document creation. MDI applications may support different kinds of documents; for example, a graph application may be able to create and manipulate pie charts as well as bar charts. For this type of application, every time a document window becomes active an appropriate menu and toolbar are displayed at the top of the main window. Several commercial software products use MDI, e.g., Microsoft Excel and CorelDRAW. Moreover, several researchers have used MDI for simulation purposes. An MDI application has a special menu called the "Window" menu, which is located right before the "Help" menu, as shown in Figure 2. The "Window" menu provides layout and activation options to operate the MDI children and maintains a list of open documents for quick access and handling. In particular, the layout and activation options operate through a special window called the client window, which is introduced next.

Inside the frame window there is a child window, called the client window, that is responsible for controlling the position and size of the document windows. The client window appears invisible to most users because it fills the interior of the frame window and has a dark gray colour. The pre-registered class MDICLIENT is used to create the client window, and its window procedure encapsulates most of the required MDI functionality. Because the client window is created using a pre-registered class, it is not necessary to provide a window procedure for this window. Even though the frame window receives the command messages through its menu, the client window and the active MDI child are responsible for processing most of these messages. Finally, it is important not to confuse the client window with the client area of a window: the client window applies only to MDI applications, while the client area is not a window but a specific region of one.
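In the C# WinForms setting that section 8.3's title refers to, the frame window, workspace, and "Window" menu described above are exposed directly by the framework. The following is a minimal sketch; the form and menu names are illustrative, not taken from the text.

using System.Windows.Forms;

public class FrameForm : Form
{
    public FrameForm()
    {
        IsMdiContainer = true; // the form's client area becomes the MDI workspace

        var menu = new MenuStrip();
        var windowMenu = new ToolStripMenuItem("Window");
        menu.Items.Add(windowMenu);
        menu.MdiWindowListItem = windowMenu; // framework maintains the open-document list here
        MainMenuStrip = menu;
        Controls.Add(menu);
    }
}

Setting MdiWindowListItem reproduces the behaviour described above: the "Window" menu automatically lists the open documents and lets the user activate one.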

In order to create its main window, a typical Windows application starts by registering a Windows class. An MDI application, on the other hand, typically starts by registering two Windows classes: one for the frame window and another for the MDI children. Technically, this implies that a normal (non-MDI) Windows application requires at least one window procedure, while an MDI application requires at least two. As mentioned before, the frame window has a child, classically known as the client window. Thus, while the frame window is processing the message WM_CREATE, it must proceed to create the client window by filling in a CLIENTCREATESTRUCT structure and calling ::CreateWindow with the pre-registered class MDICLIENT. Once these two windows have been created successfully, the MDI application is ready for user operation. Typically, the user will open an existing document or create a new one; in both cases the frame window receives the message WM_COMMAND and must respond by creating a new MDI child object. As will be shown later, the proposed method reduces application development time by hiding most of the MDI implementation details.
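Continuing the WinForms comparison, the flow just described, in which the frame window responds to a menu command by creating a new MDI child, collapses to a short event handler. This is a hypothetical sketch; the class, handler, and counter names are invented.

using System;
using System.Windows.Forms;

public class DocumentFrame : Form
{
    private int documentCount;

    public DocumentFrame()
    {
        IsMdiContainer = true;
        var menu = new MenuStrip();
        var fileNew = new ToolStripMenuItem("New", null, OnFileNew);
        menu.Items.Add(new ToolStripMenuItem("File", null, fileNew));
        MainMenuStrip = menu;
        Controls.Add(menu);
    }

    // Plays the role of the frame window's WM_COMMAND handling described above.
    private void OnFileNew(object sender, EventArgs e)
    {
        var child = new Form                      // a real application would use its own document Form class
        {
            Text = "Document " + (++documentCount),
            MdiParent = this                      // clips the child to the frame's workspace
        };
        child.Show();
    }
}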

8.3 MDI PARENT AND MDI CHILD FORMS

Microsoft Corporation develops its products using the Executive, a collection of APIs rarely known to most programmers. Because Microsoft does not publicly document the Executive, programmers must instead use an Executive subset called Win32. Because the Win32 APIs were designed when computer memory and speed were limited, they are extremely efficient on today's computers. Unfortunately, the set of Win32 APIs was planned when most programming was done in plain C, and most application developers at the time did not want to move to C++. Thus, Win32 has several shortcomings where OOP is required, two of which are worth mentioning. First, storing per-window data is difficult and requires either global variables or a custom data structure. Second, the window procedure, which is responsible for message processing, must be a global function or a static member function of a class; this means the window procedure has no context information. Consequently, code written using Win32 contains a considerable number of global variables and is difficult to read and maintain, not to mention prone to errors.

To simplify application development and provide a measure of OOP, Microsoft created a set of classes better known as MFC. To replace the window procedure, MFC introduced the concept of the message map, which is responsible for calling the appropriate function in response to a specific Windows message. Unfortunately, the message map increases message-processing time, makes the object description confusing, and must itself be learned by programmers. Additionally, an application deployed using MFC carries overhead that does not exist in a Win32 application. On the bright side, Microsoft Visual Studio provides more than a few wizards to simplify application development, including a specific wizard for deploying MDI applications using MFC. This wizard creates a complex project to start working from. Once the wizard has completed, the programmer must add and remove code; this can be simple if the programmer knows how to do it, but adding or removing code with no previous MFC knowledge may break the application easily.

As mentioned previously, there are two major shortcomings of using Win32; we now describe how they can be eliminated. To simplify MDI application development, we propose the creation of a C++ class to represent a rectangular object that has a size, a position, and a window procedure. This window procedure could be implemented as a member function of a class; however, the APIs ::RegisterClass and ::RegisterClassEx accept only static functions for message processing. Consequently, a static window procedure cannot access member variables or call member functions that are not static. This issue can be solved using the API ::SetWindowLongPtr in combination with ::GetWindowLongPtr to store and retrieve context information (Yuan, 2000). To simplify application development, Petzold (1999) suggested using a custom data structure to store window data; however, this traditional approach requires a lot of housekeeping, and the resulting code is difficult to read and maintain. Alternatively, we propose storing the this pointer of a C++ class, using the flag GWLP_USERDATA during the calls to ::SetWindowLongPtr and ::GetWindowLongPtr, instead of storing a custom data structure. We will show that this approach results in clear, clean code and has the advantage that it can be implemented in a base class, making its use completely transparent to the programmer. This approach is motivated by the method proposed by Yuan (2000).
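The this-pointer storage just proposed can be mirrored from managed code as well. The sketch below, offered only as an illustration, stores a GCHandle to an arbitrary object in a window's GWLP_USERDATA slot via P/Invoke and retrieves it later, which is essentially what the chapter does with a C++ this pointer. It assumes a 64-bit process, where user32.dll exports SetWindowLongPtr directly.

using System;
using System.Runtime.InteropServices;

static class WindowUserData
{
    private const int GWLP_USERDATA = -21;

    [DllImport("user32.dll")]
    private static extern IntPtr SetWindowLongPtr(IntPtr hWnd, int nIndex, IntPtr dwNewLong);

    [DllImport("user32.dll")]
    private static extern IntPtr GetWindowLongPtr(IntPtr hWnd, int nIndex);

    // Attach a managed object to the window, as the chapter attaches a C++ this pointer.
    public static void Attach(IntPtr hwnd, object state)
    {
        GCHandle handle = GCHandle.Alloc(state); // keeps the object alive; free it when the window is destroyed
        SetWindowLongPtr(hwnd, GWLP_USERDATA, GCHandle.ToIntPtr(handle));
    }

    // Recover the object inside a static window procedure.
    public static object Retrieve(IntPtr hwnd)
    {
        IntPtr ptr = GetWindowLongPtr(hwnd, GWLP_USERDATA);
        return ptr == IntPtr.Zero ? null : GCHandle.FromIntPtr(ptr).Target;
    }
}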

To simplify the notation, assume that a data structure MsgInfo, shown in the UML diagram of Figure 3, has been defined. This structure stores Windows message information, namely a window handle and the two context parameters wParam and lParam, of types WPARAM and LPARAM respectively. Having provided the background on MDI technology, we proceed to describe the proposed method in detail. The UML diagram in Figure 4 describes the proposed classes: Window, MdiChild and MdiFrame. Observe that an appropriate namespace, called Win, has been defined; it enables us to re-use names already defined inside the global namespace and avoids name clashes. In this diagram, any on-screen object is represented by the base class Window. This class is abstract owing to the virtual member function Window::GetClassName and the protected constructor of the class; thus a Window object can be created only by class derivation (note that abstract classes and abstract methods are denoted in a UML diagram using italics). As established previously, Microsoft Windows requires registering a Windows class before creating an object of that class. Therefore, the member function Window::GetClassName is called once during class registration, and then repeatedly for each object that is created. Its implementation is simple: it only returns the class name that is used for class registration. Specifically, a Windows class name is a text string that helps the operating system identify the class. This can be verified using Spy++, the standard tool provided by Microsoft Visual Studio, which uses the Windows class name to find and spy on windows for debugging purposes. The member function Window::GetClassName is called by the member functions MdiFrame::RegisterClass and MdiChild::RegisterClass to register one MDI frame class and one MDI child class, respectively.

The implementation of the Win::Window class is straightforward. The public static variable Window::hInstance comes in handy for storing the application instance and is used for window creation and resource loading (note that static variables and functions are shown underlined, following UML notation). The member functions Create, Destroy, Update and Show are simple wrappers for the standard APIs ::CreateWindow, ::DestroyWindow, ::UpdateWindow and ::ShowWindow, respectively. It is worth noting that ::CreateWindowEx can be used instead of the more traditional API ::CreateWindow; the only difference between the two is that ::CreateWindowEx supports additional window styles, called extended window styles. One of the most interesting aspects of the Win::Window class is the implementation of the public operator HWND, which conveniently allows a Win::Window object to be used wherever a window handle is required. This comes in handy because many Windows APIs take a window handle. The implementation of this operator is just a return statement that yields the window handle stored during window creation; thus there is no need to write wrappers for a large number of Windows APIs.
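The conversion operator described above translates naturally to C#. A wrapper can declare an implicit conversion to IntPtr (the managed representation of an HWND) so the object is accepted wherever a raw handle is expected; the class below is illustrative only.

using System;

public class WindowWrapper
{
    public IntPtr Handle { get; protected set; }

    // Lets a WindowWrapper be passed directly to any API declared to take a
    // window handle, mirroring Win::Window's operator HWND.
    public static implicit operator IntPtr(WindowWrapper w)
    {
        return w.Handle;
    }
}

With this in place, a call such as ShowWindow(wrapper, SW_SHOW) accepts the wrapper directly, which is exactly the convenience the text attributes to the operator.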

8.4 MANAGE MENUS

We propose the Win::MdiFrame class to ease the development of MDI applications. This class is depicted in Figure 4 and can be used to create the application window. Observe that the Win::MdiFrame class is abstract because it is derived directly from Win::Window and does not implement GetClassName. Additionally, Win::MdiFrame contains two abstract methods, MdiFrame::OnCommand and MdiFrame::GetFirstChildID; these are easy to implement and are described explicitly in the Results section. This class includes several helper functions that conveniently provide most of the functionality of a typical MDI application. For example, MdiFrame::SendMessageToActive sends a message to the active MDI child by internally calling MdiFrame::GetActiveWindow and then sending the respective message using the familiar API ::SendMessage. Another handy function is MdiFrame::GetActiveWindow, which obtains the handle of the active child by sending WM_MDIGETACTIVE to the client window and then validating the returned window handle. Even though most of the member functions of Win::MdiFrame are simple, together they provide nearly all the support required by an MDI application.

The operation of the Win::MdiFrame class is described next. The MdiFrame::CreateFrame function creates a local CREATESTRUCT variable, sets the lpCreateParams value of this structure to the this pointer of the current object, and internally calls ::CreateWindow (the function Window::Create can also be called). Remember that the standard API ::CreateWindow accepts a user-defined value that can be passed to the window procedure via the parameter lpCreateParams. In this specific case, the this pointer must be sent as the user-defined value so that the new window object is able to store it using the API ::SetWindowLongPtr. The step-by-step code to store and retrieve the pointer of a C++ class inside the window object is discussed in detail now. The function MdiFrame::RegisterClass registers the static member function MdiFrame::GWndProc for message processing. The letter G at the beginning of the function name denotes, in this case, "Generic": Win::MdiFrame has a common window procedure that is called for all MdiFrame objects. The documentation of the WM_NCCREATE message specifically indicates that it is possible to transfer user-defined information via the parameter lpCreateParams of the CREATESTRUCT variable that is passed during the initial call to ::CreateWindow or ::CreateWindowEx. While the message WM_NCCREATE is being processed, the generic window procedure validates the parameter lParam to check that it corresponds to a variable of type CREATESTRUCT. If it is valid, lParam is transformed by casting it to a CREATESTRUCT pointer; peculiarly, its lpCreateParams is another CREATESTRUCT structure, and the lpCreateParams value of this last CREATESTRUCT is the user-defined value: in this case, the this pointer of a C++ class. Once the pointer has been validated, the function proceeds to store it using the API ::SetWindowLongPtr with the flag GWLP_USERDATA, as shown. Subsequent calls to the generic window procedure result in a call to ::GetWindowLongPtr to retrieve the original this pointer and invoke the specific window procedure. The proposed implementation completely hides the static nature of the generic window procedure, and programmers need only implement the non-static function MdiFrame::WndProc. For this to work properly, it is very important not to forget to set the parameter lpCreateParams to the this pointer during the preceding call to ::CreateWindow. In practice this does not represent a problem, because the function MdiFrame::CreateFrame does it automatically. Finally, it is imperative to mention that whenever an error occurs during the execution of the generic window procedure, MdiFrame::GWndProc, the right thing to do is to call ::DefWindowProc for default processing instead of just doing nothing. The code shown in Figure 5 can be used in a debug build of the application; in a release build it is not mandatory to check the validity of the pointers received inside the CREATESTRUCT structure, and all calls to ::IsBadReadPtr can safely be removed, resulting in very compact code.

Finally, we address some of the actions that must occur during the execution of the virtual function MdiFrame::OnCreate, which is called while the message WM_CREATE is processed. This function is declared protected and is responsible for creating the client window, using the pre-registered class MDICLIENT, when calling the API ::CreateWindow. Once the client window has been created successfully, the function stores the client window handle in the variable MdiFrame::hWndClient. Because MdiFrame::OnCreate is declared virtual, it can be overridden to perform other initialization actions and then call the base-class function; this can be useful for toolbar or rebar creation. As can be seen from the figure, the Win::MdiChild class is much simpler than the Win::MdiFrame class.
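It is worth noting, as an aside, that the .NET Framework ships this very bookkeeping: System.Windows.Forms.NativeWindow associates a managed object with an HWND and routes that window's messages to an overridable WndProc, hiding the static window procedure in much the same way the generic MdiFrame::GWndProc hides it behind MdiFrame::WndProc. A minimal illustrative use:

using System;
using System.Windows.Forms;

public class SubclassingWindow : NativeWindow
{
    private const int WM_SETTEXT = 0x000C;

    public SubclassingWindow(IntPtr existingHandle)
    {
        AssignHandle(existingHandle); // start receiving this window's messages
    }

    // Per-object message handling; no static procedure is visible to the caller.
    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_SETTEXT)
        {
            Console.WriteLine("Window title is changing");
        }
        base.WndProc(ref m); // default processing, like ::DefWindowProc
    }
}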

There are, however, some special considerations to take into account, because objects of this class can be created dynamically; that is, a user can create as many MDI child windows as desired at run time. To support dynamic object creation adequately, we propose the function MdiChild::CreateChild, shown in Figure 4. MdiChild::CreateChild fills in an MDICREATESTRUCT so that the message WM_MDICREATE can be sent to the client window. The important thing to remember is to store the this pointer of the class in the lParam member of the MDICREATESTRUCT, so that it can be retrieved and stored while the message WM_NCCREATE is processed. This function is quite similar to MdiFrame::GWndProc; however, there are some evident differences. First, the lpCreateParams value is not a CREATESTRUCT but an MDICREATESTRUCT. Second, instead of calling ::DefWindowProc for default processing, we must call ::DefMDIChildProc. Third, object destruction occurs dynamically during the processing of the message WM_DESTROY. It is important to mention that the main frame object must create a new MdiChild object using the operator new and then call the function MdiChild::CreateChild. Finally, it can be seen from Figure 4 that the message WM_MDIACTIVATE plays an important role in MDI application development; this message is explained in detail next.

A user may have several documents open simultaneously, and each time the user clicks an inactive window, the frame window sends a WM_MDIACTIVATE message to the client window, which in turn sends a WM_MDIACTIVATE message both to the window becoming active and to the window becoming inactive. The active window receives this message as a request to become inactive, while the inactive window receives it as a notification that it is becoming active. An MDI child window may prevent itself from losing activation by processing the WM_NCACTIVATE message, as indicated in the Software Development Kit, better known as the SDK (see MSDN, 2005). Because users typically switch among different open documents, the WM_MDIACTIVATE message is strongly related to menu and toolbar activation, as explained next. Usually, an MDI application has at least two menus: one that is displayed before any document window has been created or opened, and another that is displayed after the creation of the first MDI child. Both menus should be created right after the frame and child classes have been registered. One menu must be set as soon as the frame window is created, while the other should be set once the client has at least one child. Because one of the menus is attached to the frame window when it is destroyed, it is not necessary to destroy the initial menu; the other menu, however, needs to be destroyed explicitly. Users expect the toolbar and the menu to enable only those options that apply to the current state of the application; that is why the WM_MDIACTIVATE message is so important. Each time a document window receives the WM_MDIACTIVATE message, the lParam parameter carries the handle of the window that is becoming active. This is the perfect moment for menu and toolbar synchronization; for applications that handle more than one type of document, it is the opportunity to switch to a different menu and/or toolbar. To set the menu, a WM_MDISETMENU message must be sent to the client window (note that the API ::SetMenu cannot be used for MDI applications, as clearly indicated in the SDK). To manage menu synchronization, we suggest the classes described in the UML diagram of Figure 7.
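Before describing those classes, it is worth noting, for comparison, that WinForms surfaces the WM_MDIACTIVATE traffic described above as the MdiChildActivate event, which is the natural place for menu and toolbar synchronization in C#. A hypothetical sketch (names invented):

using System.Windows.Forms;

public class SyncFrame : Form
{
    private readonly ToolStripMenuItem saveItem = new ToolStripMenuItem("Save");

    public SyncFrame()
    {
        IsMdiContainer = true;
        MdiChildActivate += delegate
        {
            // Enable document commands only while some child is active.
            saveItem.Enabled = ActiveMdiChild != null;
        };
    }
}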

At the top of the diagram, the base Win::Menu class is described. A Win::Menu object is a generic Windows menu that has an ID and some menu items that can be selected, enabled, or checked. The Win::MdiMenu class, at the bottom of Figure 7, represents a menu for MDI applications. To create a Win::MdiMenu, a Win::MdiFrame object is required, as can be seen from the constructor prototype. The second parameter of the constructor is an integer offset specifying the position of the "Window" menu described previously (see Figure 2). The key function of this class is MdiMenu::Set, which sets a menu. A typical MDI application creates two Win::MdiMenu objects: one for when no MDI child exists and another for when there is at least one MDI child. As can be seen from Figure 6, menu activation is managed directly by Win::MdiChild::GWndProc when the message WM_MDIACTIVATE is processed: the function compares the current window handle with the value of the parameter lParam; if they are equal, the child menu is set using MdiMenu::Set, otherwise the default menu is set.

8.5 SUMMARY

• Once the Win::MdiFrame and Win::MdiChild classes have been defined, developing an MDI application is straightforward. To illustrate this, consider Figures 8 and 9, which show the deployment of Multigraph, a graph-editing application built with MDI technology. As can be seen from Figure 8, the Multigraph class is derived directly from the abstract class Win::MdiFrame.

• The application must be able to create an object of type Multigraph; thus this class must be non-abstract and must implement GetClassName, OnCommand and GetFirstChildID. Implementing these functions is easy: GetClassName simply returns the text string "Multigraph", GetFirstChildID returns the ID of the first MDI child, and OnCommand must respond to the application commands.

• Note that the proposed method basically requires the implementation of two custom classes, while MFC requires the implementation of five. If each class is stored in a pair of files, the proposed method requires four files; in contrast, an MDI application deployed using MFC requires ten files. On the other hand, even though a Win32 application may be implemented using only two files, its structure makes the code difficult to read and maintain. Thus the proposed method is simpler than existing methods, and its structure makes the development, creation and maintenance of MDI applications easy.

• To evaluate the proposed method properly, the Multigraph application was deployed using both the proposed method and MFC. All experiments were performed on a computer running Microsoft Windows XP with an Intel Pentium 4 CPU at 3.2 GHz and 1.00 GB of RAM.

First, we measured the time the client window requires to create a fixed number of MDI children, by running the program 100 times and averaging the creation time of each experiment. The mean value obtained from these 100 measurements is shown in Figure 10. As can be seen from this figure, MFC takes approximately twice as long to create the same number of children as the proposed method. This extra time may be due to the natural overhead of an MFC application.

• An MDI application is a special kind of application that allows managing several documents at the same time. MDI offers a common platform for deploying commercial applications or performing research analysis in a shared environment. An MDI application can be deployed using MFC or Win32. While applications deployed using Win32 are much faster and more efficient than applications deployed with MFC, they usually contain several global variables and are prone to errors. On the other hand, Microsoft Visual Studio provides a set of MFC wizards to simplify MDI application development.

• However, MFC adds overhead that, as a rule, degrades application performance. We propose a method to simplify the development of MDI applications, and we showed that our method allows creating clean code that is easy to read and maintain. The proposed method requires the derivation of only two custom classes, while MFC requires five. We also showed that our method offers better performance than applications deployed using MFC; specifically, the proposed method is generally twice as fast as an MFC application.

8.6 KEYWORDS

• Response, Animation, Idle, Load - RAIL is a user-centric performance model. Every web app has these four distinct aspects in its life cycle, and performance fits into them in different ways.

• Responsive Design - Responsive web design is an approach to web design aimed at making web pages visually appealing and performant on any form factor. In addition, responsive web design tasks include offering the same content for a single website across a variety of devices.

• Service Worker - Service workers essentially act as proxy servers that sit between web applications, the browser, and the network. They are intended to enable the creation of effective offline experiences, intercepting network requests and taking appropriate action based on whether the network is available and whether updated assets reside on the server. They also allow access to push notifications and background-sync APIs.

• Shadow DOM - Shadow DOM introduces scoped CSS and DOM to the web platform. It lets developers write encapsulated UI components that can be used in any application.

• Single Page App - A single-page app (SPA) is a user-friendly app that behaves more like a desktop app. Typically, SPA content is rendered dynamically using JavaScript rather than by opening a new page. As a result, single-page applications typically load only data rather than pre-rendered HTML, which decreases the data transferred on the wire.

8.7 LEARNING ACTIVITY

1. Create a session on MDI Applications.
___________________________________________________________________________
___________________________________________________________________________

2. Create a survey on MDI Parent and MDI Child Forms.
___________________________________________________________________________
___________________________________________________________________________

8.8 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. What are Buttons?
2. What are Text Boxes?
3. Define the Combo Box control.
4. Define Radio Buttons.
5. What is the meaning of Tab Control?

Long Questions
1. Explain the advantages of the Panel control.
2. Elaborate on the concept of Radio Button Lists.
3. Discuss the Tab Control.
4. Illustrate the scope of the Menu Strip control.
5. Examine the limitations of the Progress Bar.

B. Multiple Choice Questions

1. When an instance method declaration includes the abstract modifier, what is the method said to be?
a. Abstract method
b. Instance method

c. Sealed method
d. Expression method

2. Which concept implies that a user can control access to a class, method, or variable?
a. Data hiding
b. Encapsulation
c. Information Hiding
d. Polymorphism

3. What is the nature of inheritance?
a. Commutative
b. Associative
c. Transitive
d. Iterative

4. What is the point at which an exception is thrown called?
a. Default point
b. Invoking point
c. Calling point
d. Throw point

5. Select the right option for the statement: in C#, having unreachable code is always an:
a. Method
b. Function
c. Error
d. Iterative

Answers
1-a, 2-c, 3-c, 4-d, 5-c

8.9 REFERENCES

E-References

• https://www.researchgate.net/publication/237034581_An_OOP_Approach_to_Simplify_MDI_Application_Development/link/00463530cedc04acb1000000/download
• https://en.wikipedia.org/wiki/Text_box

UNIT 9: ADO.NET

STRUCTURE
9.0 Learning Objectives
9.1 Introduction
9.2 Overview of ADO.NET
9.3 Connection Object
9.4 Command Object
9.5 Data Readers
9.5.1 Data Sets & Data Adapters
9.6 Execute Non-Query
9.7 Execute Scalar
9.8 Execute Reader
9.9 Data Grid View Control
9.10 Stored Procedures
9.11 Summary
9.12 Keywords
9.13 Learning Activity
9.14 Unit End Questions
9.15 References

9.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
• Describe the concept of the Command object.
• Illustrate the concept of the Connection object.
• Explain the concept of Execute Non-Query.

9.1 INTRODUCTION

ADO.NET is a data access technology from the Microsoft .NET Framework that provides communication between relational and non-relational systems through a common set of components.

ADO.NET is a set of computer software components that programmers can use to access data and data services from a database. It is part of the base class library that is included with the Microsoft .NET Framework. It is commonly used by programmers to access and modify data stored in relational database systems, though it can also access data in non-relational data sources. ADO.NET is sometimes considered an evolution of ActiveX Data Objects (ADO) technology, but it was changed so extensively that it can be considered an entirely new product.

ADO.NET is conceptually divided into consumers and data providers. The consumers are the applications that need access to the data, and the providers are the software components that implement the interface and thereby provide the data to the consumer. Functionality exists in the Visual Studio IDE to create specialized subclasses of the DataSet classes for a particular database schema, allowing convenient access to each field in the schema through strongly typed properties. This helps catch more programming errors at compile time and enhances the IDE's IntelliSense feature.

Figure 9.1: ADO.NET

A provider is a software component that interacts with a data source. ADO.NET data providers are analogous to ODBC drivers, JDBC drivers, and OLE DB providers. ADO.NET providers can be created to access data stores as simple as a text file or spreadsheet, and as complex as Oracle Database, Microsoft SQL Server, MySQL, PostgreSQL, SQLite, IBM DB2, Sybase ASE, and many others. They can also provide access to hierarchical data stores such as e-mail systems. However, because different data-store technologies have different capabilities, not every ADO.NET provider can implement every interface available in the ADO.NET standard. Microsoft describes the availability of an interface as "provider-specific," since it may not be applicable depending on the data-store technology involved. Providers may also augment the capabilities of a data store; these capabilities are known as "services" in Microsoft parlance.

Entity Framework (EF) is an open-source object-relational mapping (ORM) framework for ADO.NET, part of the .NET Framework. It is a set of technologies in ADO.NET that supports the development of data-oriented software applications. Architects and developers of data-oriented applications have typically struggled with the need to achieve two very different objectives. The Entity Framework enables developers to work with data in the form of domain-specific objects and properties, such as customers and customer addresses, without having to concern themselves with the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code than in traditional applications.

ADO.NET is a large set of .NET classes that enable us to retrieve and manipulate data, and update data sources, in many ways.

• ADO.NET is the latest in a long line of data-access technologies released by Microsoft (ODBC, DAO, RDO, OLE DB, ADO).
• ADO.NET differs somewhat from the previous technologies, however, in that it comes as part of a platform called the .NET Framework.
• Just as .NET includes a library of classes for managing rich client UI (Windows Forms) and for handling HTTP requests (ASP.NET), .NET includes a library for connecting to a wide range of databases. That library is named ADO.NET.

Figure 9.2: Adobe Symbols

9.2 OVERVIEW OF ADO.NET

ADO.NET provides consistent access to data sources such as SQL Server and XML, and to data sources exposed through OLE DB and ODBC. Data-sharing consumer applications can use ADO.NET to connect to these data sources and retrieve, handle, and update the data that they contain. ADO.NET separates data access from data manipulation into discrete components that can be used separately or in tandem. ADO.NET includes .NET Framework data providers for connecting to a database, executing commands, and retrieving results. Those results are either processed directly, placed in an ADO.NET DataSet object in order to be exposed to the user in an ad hoc manner, combined with data from multiple sources, or passed between tiers. The DataSet object can also be used independently of a .NET Framework data provider to manage data local to the application or sourced from XML.
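The pattern just described (connect, execute a command, consume the results) is conventionally written as follows in C#. This is a minimal illustrative console sketch; the connection string, table, and column names are placeholders, not taken from the text.

using System;
using System.Data.SqlClient;

class AdoNetDemo
{
    static void Main()
    {
        // Placeholder connection string; adjust server and database names.
        const string connStr = @"Server=.\SQLEXPRESS;Database=Northwind;Integrated Security=true";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT CategoryID, CategoryName FROM Categories", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader()) // forward-only stream of rows
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}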

The ADO.NET classes are found in System.Data.dll and are integrated with the XML classes found in System.Xml.dll. For sample code that connects to a database, retrieves data from it, and then displays that data in a console window, see ADO.NET Code Examples. ADO.NET provides functionality to developers who write managed code similar to the functionality that ActiveX Data Objects (ADO) provides to native component object model (COM) developers. We recommend that you use ADO.NET, not ADO, for accessing data in your .NET applications. ADO.NET provides the most direct method of data access within the .NET Framework. For a higher-level abstraction that allows applications to work against a conceptual model instead of the underlying storage model, see the ADO.NET Entity Framework.

Privacy statement: the System.Data.dll, System.Data.Design.dll, System.Data.OracleClient.dll, System.Data.SqlXml.dll, System.Data.Linq.dll, System.Data.SqlServerCe.dll, and System.Data.DataSetExtensions.dll assemblies do not distinguish between a user's private data and non-private data. These assemblies do not collect, store, or transport any user's private data; however, third-party applications might collect, store, or transport a user's private data using these assemblies.

As an integral part of the .NET Framework, ADO.NET shares many of its features, such as multi-language support, garbage collection, just-in-time compilation, object-oriented design, and dynamic caching, and is far more than an upgrade of previous versions of ADO. ADO.NET is set to become a core component of any data-driven .NET application or Web Service, and understanding its power is essential to anyone wishing to use .NET data support to maximum effect. ADO.NET is a model used by .NET applications to communicate with a database for retrieving, accessing, and updating data, as shown in the following figure. The main data providers are:

• SQL Server: used to work specifically with Microsoft SQL Server. It lives in the System.Data.SqlClient namespace.
• OLE DB: used to work with OLE DB providers. The System.Data.dll assembly implements the OLE DB .NET Framework data provider in the System.Data.OleDb namespace.
• ODBC: to use this type of provider, you must use an ODBC driver. The System.Data.ODBC.dll assembly implements the ODBC .NET Framework data provider. This assembly is not part of the Visual Studio .NET installation.

• Oracle: The System.Data.OracleClient.dll assembly implements the Oracle .NET Framework data provider in the System.Data.OracleClient namespace. The Oracle client software must be installed on the system before you can use the provider to connect to an Oracle data source.

9.3 CONNECTION OBJECT

Given a set of consecutive slices produced by a non-invasive examining device, one expects to be able to reconstruct the original 3D object, whether it is a human organ or the channelling of underground petroleum resources. The slices, however, identify a set of curves, which must be properly connected to give rise to a coherent representation of the object. This analysis is made by a correspondence algorithm within 3D reconstruction software. This paper presents β-connection, a simple and flexible algorithm for the correspondence problem. β-connection relies on the well-known heuristic approach of proximal curves. Tests have shown that β-connection grows linearly with the size of the raw data and can be fine-tuned by a user-defined parameter to produce a 3D model. The heuristic, advantages, and limitations of β-connection are also shown in detail.

The increasingly popular use of non-invasive measurement devices, such as Magnetic Resonance Imaging (MRI) and Computerized Tomography (CT), has made it possible to visualize a sequence of planar sections of three-dimensional objects. This fact has motivated the development of many three-dimensional object-reconstruction techniques. Three-dimensional reconstruction has become a very interesting and important research technique, since it builds a 3D model of the object being analyzed using two-dimensional images. 3D reconstruction can be executed to obtain various information about the original model, basically for two reasons. It can be observed that this division leads to two possible approaches regarding the application of 3D reconstruction techniques: one aimed at object visualization, and the other at accurately representing the object. For the former objective, the reconstruction process can be done in a simpler way, without the need to accurately represent, for instance, the saddle point on a branching. On the other hand, if the reconstruction does not consider the accuracy of the consequent branching and surfacing, any measurements taken from the model can be misleading. Reconstruction techniques aimed at the visualization of structures are useful in applications such as the identification of an increasing density of capillaries, a congenital malformation, tumours, or even undesired/unexpected connections. These applications take advantage of the power such techniques have to make visible a formation whose visualization would otherwise be invasive. For these techniques, the flexibility and the rules of correspondence identification are of great importance, along with the fact that a fast response is needed. This paper presents an algorithm to identify the correspondence between curves in consecutive slices, aimed at object visualization and focused on flexibility and efficiency. The text first identifies the problem and then shows related approaches.

After that, heuristic solutions are thoroughly discussed, so that in the following section the β-connection algorithm can be presented. Implementation details, results, analysis, and the conclusion finish the text. In one of the cases, each curve on a plane relates only to the single nearest curve on the next plane; in another, one of the curves of the inferior slice relates to two curves of the superior slice, while the other is connected only with the nearest one; and in the last illustration, only one of the curves of the inferior slice relates to the curves of the superior plane.

The deformable-models approach uses geometry, physics, and the theory of approximation for reconstruction. The geometry is used to represent the shape of the object, the physics imposes constraints on how the shape can vary in space and time, and the theory of approximation provides mechanisms and techniques to approximate the reconstructed models to the original measured data. In this method, deformations are applied to an initial model until the final object is reached. McInerney and Terzopoulos presented a reconstruction work applied to medicine that uses deformable models and proved to be efficient: starting from a sphere, it promotes deformations and approximations until the desired model is achieved. One advantage of this technique is that the image-segmentation process, where a polygonal representation of the curves of the original image is obtained, is part of the reconstruction process. The authors asserted that deformable models overcome many of the limitations of low-level image-processing techniques, providing compact and analytical representations of the object's shape. However, the reconstruction process is not an isolated process, and it can be said that reconstruction techniques based on deformable models use more image-processing concepts than geometric-modelling ones.

The implicit approach uses an implicit function to interpolate the curves and generate the object, in such a way that the object surface lies on the zero set of this function, that is, on f(x, y) = 0. This function is determined from the interpolation of the functions of each parallel planar section containing the curves to be connected. Peixoto and Gattass describe the implicit approaches through two steps: the definition of the functions that represent the curves' slices, called field functions, and the interpolation of these functions to form the implicit function that will represent the final object's surface. For this approach, the matrix-based (also raster-based) representation of curves is more adequate, since there is a natural correspondence between the matrix representation and the implicit function; i.e., a curve represented as a matrix can be defined as the set of points (x, y) of the slice such that f(x, y) represents each field function used to generate the implicit function. The correspondence-definition step does not have much flexibility in the implicit approach, because the correspondences are automatically defined by the function; the result is a unique interpolation solution for a given initial set of curves, not all the alternatives, and the determination of connections in this type of approach is considered one of its major problems. Implicit approaches deal with the reconstruction problem in an automatic way, but they do not generate all the possible models from a set of curves. Approaches that use heuristics deal with the correspondence criteria with more flexibility.
According to Peixoto and Gattass, the correspondence decision can be taken by computing, in some form, the distances between curves.

The heuristics used in the work of Barequet and Sharir decide on the correspondence of the curves based on an XY projection of two consecutive planes. The heuristic is the following: if there is an intersection of the projected areas of the curves, they are connected; otherwise, they are not. The work of Treece and colleagues is based on the calculation of the distance between regions of the curves. For each plane containing curves, a set of discs is created. Internal discs are used to represent internal regions of each curve and are considered to loosely represent the shape of a curve. For each disc, its centre, called the centroid, is calculated; it is used to compute the distance between each pair of discs of two consecutive planes. The correspondence calculation is based on the distance of each pair of discs. The heuristic defined in this algorithm is the following: the regions on two consecutive planes are connected if the distance between the related discs is smaller than the radius of both discs. Then, for every two consecutive planes, the distances between the centroids of each pair of discs are compared. In this way, the distance necessary to connect two discs may vary without user control. Also, the correspondence does not consider the whole area of the curve, but only each region represented by a disc. Another technique that uses heuristics is proposed by Cuadros-Vargas. It is a volumetric reconstruction strategy, called β-connection, that has the flexibility to produce a family of objects constructed from the same set of planar sections, making it possible to obtain multiple options for the final object. To solve the correspondence problem, the algorithm computes the smallest distance between every two curves in terms of tetrahedra. It then takes a user-defined integer parameter, called β, to apply the heuristic: if the distance between any two curves is less than the value of the parameter defined by the user, then these curves are connected.

9.4 COMMAND OBJECT

This pattern revisits the Command and Command Processor patterns. The reason for this revisiting is that we think the Command and Command Processor patterns do not really capture the essence of what the Command pattern is. We think that Command is basically a way to emulate the concept of closures in object-oriented languages that do not natively have this feature. Suppose you are building complex communication software. The individual sub-sequences of the protocol are similar, but not quite identical. The sub-sequences contain logic that operates on some form of state, also called context. For the various sub-sequences, adapted context information is necessary. For maintainability, and to keep the footprint small (the communication software is to be used in an embedded device), you want to reuse repeating sub-sequences. For example, in the figure above, process step "B" is involved in protocol I as well as in protocol II. The state on which a sub-sequence depends is similar in both cases. The isolation of reusable sub-sequences is the result of an in-depth analysis of the communication protocol. How can you encapsulate the protocol sub-sequences as a reusable software building block?
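One possible shape for such a building block, sketched under the assumptions of the scenario above (all type names are invented): each protocol step becomes a command object that remembers the state its creator supplies and acts on a shared context, so protocols I and II can reuse step "B" by composing command lists. The variation points this raises are discussed next.

using System.Collections.Generic;

interface IProtocolContext
{
    void Send(byte[] frame); // the shared state/context the steps operate on
}

interface ICommand
{
    void Execute(IProtocolContext context);
}

class SendHandshake : ICommand
{
    private readonly byte version; // creator-determined state, remembered as a field

    public SendHandshake(byte version)
    {
        this.version = version;
    }

    public void Execute(IProtocolContext context)
    {
        context.Send(new byte[] { 0x01, version });
    }
}

class Protocol
{
    private readonly List<ICommand> steps = new List<ICommand>();

    public void Add(ICommand step) { steps.Add(step); }

    public void Run(IProtocolContext context)
    {
        foreach (var step in steps) step.Execute(context); // reusable sub-sequences
    }
}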

A command may need to access state that is determined by its creator. In this case, the command object must remember the state the creator determined: for each parameter the execute operation accesses, the command class needs an attribute. The creator passes the parameters to the constructor, which assigns the respective values to the attributes; they can then be accessed during execution. A second variation point is how the command accesses the execution context. In some cases access may be implicit (for example, when the context is global); in others, access must be given explicitly. In that case the execute operation must take the respective parameters, and the executor must pass these values to the execute operation.

Returning to the correspondence heuristic: beginning with the curves situated in parallel sections, in the first solution the value of the parameter is less than all the distances between the curves of two consecutive planes, and no connection occurs; in the second solution the value of the parameter is 3, so all the curves of two consecutive planes whose mutual distances are less than or equal to 3 are connected; and when the value of the parameter is greater than the distance between any curves of two consecutive planes, all of the curves of those consecutive planes are connected. An important feature of the work of Treece and colleagues is that the regions represented by the discs will only connect with the closest regions of each consecutive plane if this relation is reciprocal; this allows regions to be left without connection. These authors also emphasize that the traditional branching and correspondence problems are combined by determining "region correspondence". For Barequet and Sharir it is not necessary for two curves to overlap in order to connect them. According to Cuadros-Vargas, the β-connection reconstruction technique offers more flexibility in the choice of the connected components, since from the same set of planar sections it is possible to obtain different shapes of objects, which is difficult with other algorithms in the literature. Comparing the heuristic approaches presented, one notices that all existing algorithms employ some form of distance calculation between curves: it can be a very detailed and time-consuming point-by-point comparison of the curves in search of the smallest edges; it can be indirect, through enclosed or enclosing discs; or it can be indirect through the overlap of the projections that occurs when the curves are next to each other. Ingenious calculations that consider the distance in units of volume have also been tried, producing a tiling with great flexibility of results at the cost of greater computational demand. This paper presents another solution to the calculation of the curves' proximity, with the same flexibility as before but keeping the correspondence stage totally isolated from the other stages. The minimum and maximum distances in the matrix have the purpose of informing the user of the interval of distances among all the curves. The centroids are calculated as the centre of each curve's bounding box. The distances are calculated with the curves projected onto the same plane, that is, disregarding the height between planes, which simplifies the distance calculation.
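To make the heuristic concrete, the following is an illustrative sketch of the proximity test described above: centroids are taken as the centres of the curves' bounding boxes, the curves of two consecutive slices are projected onto the same plane (the height between slices is ignored), and a pair is connected whenever the centroid distance does not exceed the user-defined parameter β. All type and member names are invented.

using System;
using System.Collections.Generic;

class Curve
{
    // Bounding box of the curve in slice coordinates.
    public float Left, Top, Width, Height;

    public float CentroidX { get { return Left + Width / 2; } }
    public float CentroidY { get { return Top + Height / 2; } }
}

static class Correspondence
{
    // Returns the pairs of curves on two consecutive slices that should be connected.
    public static List<Tuple<Curve, Curve>> Connect(IList<Curve> lower, IList<Curve> upper, float beta)
    {
        var pairs = new List<Tuple<Curve, Curve>>();
        foreach (var a in lower)
        {
            foreach (var b in upper)
            {
                double dx = a.CentroidX - b.CentroidX;
                double dy = a.CentroidY - b.CentroidY; // height between slices is deliberately ignored
                if (Math.Sqrt(dx * dx + dy * dy) <= beta)
                {
                    pairs.Add(Tuple.Create(a, b));
                }
            }
        }
        return pairs;
    }
}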

9.5 DATA READERS

In ADO.NET, a data reader is a broad category of objects used to read data sequentially from a data source. Data readers provide a very efficient way to access data and can be thought of as the firehose cursor from classic ASP, except that no server-side cursor is used. A data reader parses a Tabular Data Stream from Microsoft SQL Server, or uses other methods to retrieve data from other sources. A data reader is usually accompanied by a command object that contains the query, optionally any parameters, and the connection object to run the query on. When using a DataReader to retrieve data, the developer can choose to read field values in a strongly typed manner or in a weakly typed manner, returning them as System.Object. Both approaches have their pros and cons.

Using the strongly typed retrieval methods can be more cumbersome, especially without specific knowledge of the underlying data. Numeric values in the database can translate to several .NET types: Int16, Int32, Int64, Float, Decimal, or Currency. Trying to retrieve a value using the wrong type results in an exception being thrown, which stops the code from running further and slows the application down. This is also true when you use the right type but encounter a DBNull value. The benefit of this retrieval method is that data validation is performed sooner, improving the probability that data correction is possible. Weakly typed data retrieval allows for quick code writing and allows the data to be used in some fashion when the developer does not know beforehand what types will be returned. Further, with some effort, the programmer can extract the value into a variable of the proper type by using the GetFieldType or GetDataTypeName methods of the DataReader.

9.5.1 Data Sets & Data Adapters

In ADO.NET, a data adapter functions as a bridge between a data source and a disconnected data class, such as a DataSet. At the simplest level it specifies the SQL commands that provide elementary CRUD functionality. At a more advanced level it offers all the functions required to create strongly typed DataSets, including DataRelations. Data adapters are an integral part of ADO.NET managed providers, which are the set of objects used to communicate between a data source and a dataset. Adapters are used to exchange data between a data source and a dataset. In many applications, this means reading data from a database into a dataset and then writing changed data from the dataset back to the database. However, a data adapter can move data between any source and a dataset; for example, there could be an adapter that moves data between a Microsoft Exchange server and a dataset. Sometimes the data you work with is primarily read-only, and you rarely need to make changes to the underlying data source; some situations also call for caching data in memory to minimize the number of database calls for data that does not change.
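A minimal illustrative sketch of that disconnected pattern, which the next paragraphs describe in prose (connection string, table, and column names are placeholders): the adapter fills a DataSet table, edits happen in memory, and changes are written back in one batch.

using System.Data;
using System.Data.SqlClient;

static class SupplierCache
{
    public static DataSet Load(string connectionString)
    {
        var adapter = new SqlDataAdapter("SELECT * FROM Suppliers", connectionString);
        var builder = new SqlCommandBuilder(adapter); // derives INSERT/UPDATE/DELETE commands from the SELECT

        var ds = new DataSet();
        adapter.Fill(ds, "Suppliers");      // opens and closes the connection automatically

        ds.Tables["Suppliers"].Rows[0]["CompanyName"] = "Contoso"; // edit while disconnected
        adapter.Update(ds, "Suppliers");    // persists pending changes in a single batch
        return ds;
    }
}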

9.5.1 Data Sets & Data Adapters

In ADO.NET, a Data Adapter functions as a bridge between a data source and a disconnected data class, such as a DataSet. At the simplest level it specifies the SQL commands that provide elementary CRUD functionality. At a more advanced level it offers all the functions required to create Strongly Typed DataSets, including DataRelations. Data adapters are an integral part of ADO.NET managed providers, which are the set of objects used to communicate between a data source and a dataset.

Adapters are used to exchange data between a data source and a dataset. In many applications, this means reading data from a database into a dataset, and then writing changed data from the dataset back to the database. However, a data adapter can move data between any source and a dataset. For example, there could be an adapter that moves data between a Microsoft Exchange server and a dataset.

Sometimes the data you work with is primarily read-only and you rarely need to make changes to the underlying data source; some situations also call for caching data in memory to minimize the number of database calls for data that does not change. The data adapter makes it easy for you to accomplish these things by helping to manage data in a disconnected mode. The data adapter fills a DataSet object when reading the data and writes in a single batch when persisting changes back to the database. A data adapter contains a reference to the connection object and opens and closes the connection automatically when reading from or writing to the database. Additionally, the data adapter contains command object references for the SELECT, INSERT, UPDATE, and DELETE operations on the data. You will have a data adapter defined for each table in a DataSet, and it will take care of all communication with the database for you. All you need to do is tell the data adapter when to load from or write to the database.

Some automatic work happens for us, thanks to the Data Adapter and DataSet classes as designed by Microsoft. The classes have figured out for us that, since the Data Adapter has a table named Suppliers, we should also have a table named Suppliers in the DataSet; so the DataSet automatically creates a Data Table within itself, named Suppliers.
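A minimal sketch of this fill-then-update cycle, using the Suppliers table mentioned above (the connection string and the use of SqlCommandBuilder to generate the INSERT/UPDATE/DELETE commands are assumptions for illustration, not the text's own code):

using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        string connStr = "Server=.;Database=Northwind;Integrated Security=true"; // illustrative
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Suppliers", conn);
            // Derives the INSERT/UPDATE/DELETE commands from the SELECT statement.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Suppliers");   // opens and closes the connection automatically

            // Work with the data in disconnected mode.
            ds.Tables["Suppliers"].Rows[0]["CompanyName"] = "Acme Ltd";

            adapter.Update(ds, "Suppliers"); // writes all pending changes back in one batch
        }
    }
}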

9.6 EXECUTE NON-QUERY

ExecuteReader: ExecuteReader is used for getting the query results as a DataReader object. It is a read-only, forward-only retrieval of records, and it uses a SELECT command to read through the table from the first record to the last.

ExecuteNonQuery: ExecuteNonQuery is used for executing queries that do not return any data. It is used to execute SQL statements like UPDATE, INSERT, DELETE, etc. ExecuteNonQuery executes the command and returns the number of rows affected.

Figure 9.3: Adobe Program

Although ExecuteNonQuery does not return any rows, it populates any output parameters or return values mapped to parameters with data. ExecuteNonQuery saves the changes in the given XML document to the table or view that is specified in the XmlSaveProperties property. The return value is the number of rows that are processed in the XML document. Also, each row in the XML document could affect multiple rows in the database, but the return value is still the number of rows in the XML document.
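A short, illustrative sketch of ExecuteNonQuery, together with its sibling ExecuteScalar (the subject of the next section); the connection string, table, and parameter values are hypothetical:

using System;
using System.Data.SqlClient;

class NonQueryDemo
{
    static void Main()
    {
        string connStr = "Server=.;Database=Northwind;Integrated Security=true"; // illustrative
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // ExecuteNonQuery: runs an UPDATE and returns the number of rows affected.
            SqlCommand update = new SqlCommand(
                "UPDATE Suppliers SET Country = @country WHERE SupplierID = @id", conn);
            update.Parameters.AddWithValue("@country", "India");
            update.Parameters.AddWithValue("@id", 1);
            int rows = update.ExecuteNonQuery();
            Console.WriteLine("{0} row(s) affected", rows);

            // ExecuteScalar: returns the first column of the first row of the result set.
            SqlCommand count = new SqlCommand("SELECT COUNT(*) FROM Suppliers", conn);
            int total = (int)count.ExecuteScalar();
            Console.WriteLine("{0} suppliers", total);
        }
    }
}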

9.7 EXECUTE SCALAR

The exponential growth in fabrication technology and the continuous improvements in transistor density have allowed tens of billions of transistors to be integrated onto one single chip. This thesis proposes simple matrix processor architectures that exploit this huge number of transistors to improve the performance of data-parallel applications. Nowadays, data-parallel applications, which include scientific and engineering, multimedia, network, security, etc., are growing in importance and demanding increased performance from hardware. This thesis describes three microarchitectures for matrix processors: the simple matrix processor (SMP), the simple super-matrix processor (SSMP), and the multithreaded simple super-matrix processor (ThrSSMP). In SMP/SSMP/ThrSSMP, the well-known 5-stage pipeline (baseline scalar processor) is extended with a matrix register file and a matrix control unit in the decode stage. A unified execution datapath is used for processing scalar/vector/matrix data. Vector/matrix data are loaded directly from the L2 cache; however, scalar data are loaded from the data cache in the memory access stage.

On SMP, data-level parallelism (DLP) is exploited by fetching a single scalar/vector/matrix instruction from the instruction cache, decoding it, executing it on a single execution unit, and finally writing back a single result into the scalar/matrix registers. To further improve performance, instruction-level parallelism (ILP) is exploited in SSMP by processing multiple operations in parallel. Four scalar/vector/matrix instructions are fetched from the instruction cache. The fetched instructions are decoded, and their dependencies are checked. Up to four independent scalar instructions can be issued in-order to the parallel execution units. However, vector/matrix instructions iterate the issuing of four vector/matrix operations without checking, since a vector/matrix instruction groups multiple independent operations. On ThrSSMP, thread-level parallelism (TLP) is exploited in addition to DLP and ILP by processing two threads in parallel on unified hardware. Two scalar/vector/matrix instructions are fetched from each thread. The fetched instructions are decoded, the dependencies of each thread's instructions are checked, and up to two independent scalar instructions per thread can be issued in-order to the unified parallel execution units. In the case of vector/matrix instructions, a single instruction is issued individually in round-robin fashion to be executed on the four execution units.

This thesis explains in detail the implementation of our proposed designs for simple matrix processors (SMP, SSMP, and ThrSSMP) using VHDL, targeting the FPGA Virtex-6 XC6VLX550T-2FF1760 device. Moreover, the performances of SMP/SSMP/ThrSSMP are evaluated on some vector/matrix kernels from the basic linear algebra subprograms (BLAS). Our results show that on SMP, the hardware complexity is 2.79 times higher than the baseline. However, speedups over the baseline are 2.01, 3.05, 4.08, 4.21, 3.46, 4.80, 4.57, 5.16, and 7.44 on applying Givens rotation, scalar times vector plus another, vector addition, vector scaling, setting up Givens rotation, dot product, matrix-vector multiplication, Euclidean norm, and matrix-matrix multiplication, respectively. On the same kernels, SSMP gives higher speedups over the baseline (4.32, 4.92, 5.48, 5.92, 6.91, 7.10, 7.38, 8.41, and 18.23, respectively) with a higher hardware complexity of 3.77. On ThrSSMP, the hardware complexity is 5.68 times higher than the baseline; however, higher speedups are achieved on the vector/matrix kernels, of 4.9, 6.09, 6.98, 8.2, 8.25, 8.72, 9.36, 11.84, and 21.57, respectively. In conclusion, the average speedups are 4.31, 7.63, and 9.55 and the speedups over complexities are 1.54, 2.02, and 1.68 on SMP, SSMP, and ThrSSMP, respectively.

Modern processors have taken advantage of the exponential improvements in the density and speed of circuits in semiconductor chips to deliver exponentially increasing performance. As fabrication technology progresses, an ever-growing number of transistors can be integrated on a single chip. The number of transistors that can be put on a chip has risen at the same pace as the frequency, but without any levelling off. In 1965, Gordon Moore, one of the founders of Intel, predicted that the number of transistors on a given piece of silicon would double every two years; this prediction has remained essentially true and is now known as Moore's law. Intel's 4004 was a 4-bit processor consisting of approximately 2,300 transistors. As of 2011, the highest transistor count in a commercially available CPU was over 2.5 billion transistors, in Intel's 10-core Xeon Westmere-EX. Nowadays, more than 4 billion transistors are in Intel's 15-core Xeon Ivy Bridge-EX. Thus, the exponential growth in fabrication technology has allowed the design of faster, larger, and increasingly complicated processors. In this decade, it is expected that tens of billions of transistors can be integrated onto one single chip.

On the applications side, data-parallel applications are growing in importance and demanding increased performance from hardware. These applications include scientific and engineering, DSP, multimedia, network, security, etc. Contemporary computer applications are multimedia-rich, involving significant amounts of video and audio compression, 2-D image processing, 3-D graphics, speech recognition, and signal processing. Moreover, with the proliferation of the World Wide Web and the internet, future workloads are believed to be even more multimedia dominant. These applications run on a variety of systems ranging from the low-power personal mobile computing environment to the high-performance desktop, workstation, and server environment. For example, it is very common for desktop computers to run video editing, image processing applications, and 3-D games in addition to basic productivity applications. Thus, with evolving standards and changing consumer needs, future general-purpose processors require good multimedia processing capabilities. These data-parallel applications are computationally intensive, with significant data parallelism: they have a relatively small set of operations that must be repeatedly performed over a large volume of data. To satisfy the demanded performance, specialized hardware is commonplace in data-parallel applications.

Figure 9.4: Applying Moore's law on microprocessors from 1971 to 2014

With continuing improvements in transistor density, computer architects should propose new processor architectures that use this huge number of transistors to improve the performance of data-parallel applications. This thesis proposes simple matrix processor architectures to improve the performance of data-parallel applications. Since the fundamental data structures for a wide variety of data-parallel applications are scalar, vector, and matrix, our proposed processor architectures are based on a three-level instruction set architecture to be executed on zero-, one-, and two-dimensional arrays of data. This instruction set is used to express a great amount of fine-grain parallelism (up to three-dimensional data-level parallelism: 3D DLP) to the hardware, instead of extracting it dynamically with complicated logic (superscalar processors) or statically with sophisticated compilers (very long instruction word processors). The organization of this chapter is as follows.

Nowadays, billions of transistors can fit in a single die. A major challenge facing computer designers today is determining how to translate the increasing number of transistors per chip into a correspondingly large increase in computing performance. Therefore, it is necessary to find new processor architectures that use this huge transistor budget efficiently and meet the requirements of future applications. Traditionally, additional transistors have been used to improve processor performance by exploiting parallelism. Parallelism allows the hardware to accelerate applications by executing multiple, independent operations concurrently. There are three major forms of parallelism: instruction-level parallelism, thread-level parallelism, and data-level parallelism, which are not mutually exclusive.

Superscalar architectures have used the increasing chip resources to dynamically extract and dispatch more independent instructions in the same clock cycle. Very long instruction word (VLIW) processors, by contrast, rely on software technology to find and exploit ILP statically at compile time. VLIW architectures have increased the number of decoders and execution datapaths to process more parallel scalar instructions, explicitly packed by the compiler into a very long instruction word. Note that superscalar and VLIW microprocessors use a scalar ISA, which cannot express parallelism to the hardware; performance can be improved only by processing more scalar instructions concurrently, extracted dynamically by complex hardware or statically by sophisticated compiler techniques. As a result of the low inherent ILP of commercial workloads, these techniques have reached a point of rapidly diminishing returns. Thus, the traditional sources of performance improvements have all been flattening.

Instead of pursuing more ILP, architects are increasingly focusing on TLP implemented with single-chip multiprocessors. The number of processor cores continues to increase in proportion to increases in available transistors as silicon processes improve according to Moore's law. Patterson and Hennessy estimate that the number of cores per chip is likely to double every 18–24 months henceforth. Moreover, they showed that moving to a simpler core design results in modestly lower clock frequencies but has enormous benefits in power consumption and chip area. A many-core design would still be an order of magnitude more power- and area-efficient in terms of sustained performance, even if the simpler core offered only one-third the computational efficiency of the more complex out-of-order cores.

On the other hand, the cheapest and most prevalent form of parallelism available in many applications is DLP. DLP needs only to fetch and decode a single vector instruction to describe a whole array of parallel operations. This reduces control logic complexity, while the regular nature of vector instructions allows compact parallel datapath structures. Thus, major vendors of general-purpose microprocessors have announced multimedia extensions to their architectures to enhance performance by exploiting DLP. For example: MAX and MAX-2 were added to HP's PA-RISC; MDMX to MIPS; MVI to Compaq's Alpha; MMX, SSE, SSE2, SSE3, SSE4, and more recently AVX to Intel's IA-32; VIS to Sun's UltraSPARC; AltiVec to Motorola's PowerPC; and 3DNow! to AMD processors.
Such techniques have been implemented not only in commercial general-purpose processors, but also in DSP processors such as the TMS320C64xx series from Texas Instruments and the TigerSHARC processor from Analog Devices.

The key idea in these extensions is the exploitation of subword parallelism in a single-instruction, multiple-data (SIMD) fashion. Four, eight, or sixteen data elements of 32-, 16-, or 8-bit width can be operated on simultaneously in a single register. However, these multimedia extensions have limited vector instruction sets.

9.8 EXECUTE READER

Vector instruction sets have many fundamental advantages and deserve serious consideration for implementation on microprocessors. A vector ISA packages multiple homogeneous, independent operations into a single short instruction, which results in compact, expressive, and scalable code. See for more details about the advantages of vector ISA over scalar or VLIW ISAs. This thesis proposes simple matrix processor architectures to extend the advantages of the vector ISA by adding a matrix instruction set. The semantic content of the vector and matrix instructions already includes the notion of parallel operations. The proposed matrix processors exploit ILP, DLP, and TLP to improve the performance of data-parallel applications. In our proposed matrix processors, ILP is exploited by processing multiple scalar instructions in parallel; DLP is exploited using vector/matrix ISAs; and TLP is exploited by fetching, decoding, and executing multiple instructions from multiple threads simultaneously. Note that exploiting ILP, DLP, and TLP together leads to a simpler and higher-performance processor architecture. Moreover, the proposed simple matrix processor architectures use a multi-level ISA to provide a flexible and high-level interface between hardware and software. The parallelism found in data-parallel applications can be explicitly expressed directly to the hardware in the following sets of instructions: scalar instructions (0-D DLP), vector-scalar instructions (1-D DLP), vector-vector instructions (1-D DLP), matrix-scalar instructions (2-D DLP), matrix-vector instructions (2-D DLP), and matrix instructions (2-D or 3-D DLP).

The complete designs of our proposed simple matrix processor architectures are implemented using VHDL (Very High Speed Integrated Circuit Hardware Description Language), targeting the Xilinx Virtex-6 FPGA. An FPGA is a semiconductor device based around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. A single CLB comprises two slices, each containing four 6-input look-up tables (LUTs) and eight flip-flops, for a total of eight 6-input LUTs and sixteen FFs per CLB. Our proposed processor is implemented on the Virtex-6 XC6VLX550T-2FF1760 device, which has 42,960 configurable logic blocks (85,920 slices: 343,680 6-input LUTs and 687,360 FFs). By properly loading the LUTs, FPGAs can be reprogrammed to meet desired application or functionality requirements after manufacturing. This is the main feature that distinguishes FPGAs from application-specific integrated circuits (ASICs), which are custom manufactured for specific design tasks. Moreover, FPGAs are growing fast, with cost reductions compared to ASIC designs. While FPGAs used to be selected for lower speed/complexity/volume designs in the past, today's FPGAs easily push the 500 MHz performance barrier.

With unprecedented logic density increases and a host of other features, such as embedded processors, DSP blocks, clocking, and high-speed serial at ever lower price points, FPGAs are a compelling proposition for almost any type of design [39]. As FPGAs have grown in capacity, they have become capable of implementing complete embedded systems. To augment their area efficiency and speed for certain operations, FPGA vendors have included dedicated circuits for better implementing certain operations that are typical in an embedded system. These dedicated circuits presently include flip-flops, random access memory (RAM), multiply-accumulate logic, and microprocessor cores. Today there are many different manufacturers of FPGA devices, including Actel, Altera, Atmel, Cypress, Lucent, and Xilinx.

Among hardware programming languages, VHDL was developed in the early 1980s and quickly gained acceptance not only for description and documentation but also for design entry, simulation, and synthesis of large ASICs [36]. VHDL was originally sponsored by the U.S. Department of Defense and later transferred to the IEEE (Institute of Electrical and Electronics Engineers). The language is formally defined by IEEE Standard 1076. In 1987, the IEEE started to develop the first standard for VHDL (referred to as VHDL 87), and the second revision of this standard was completed in 1993 (referred to as VHDL 93). VHDL can be thought of as a "programming language" for hardware. It is intended for describing and modelling a digital system at various levels and is therefore an extremely complex language. Nowadays, most of the companies that manufacture ASICs have developed their own VHDL editors based on these standards to provide users with an easy-to-use tool to customize their ASICs. For example, Altera has Quartus II and MAX+PLUS II, while Xilinx has Xilinx ISE and Vivado. In this thesis, Synthesis and Place & Route are both performed using Xilinx ISE 14.5.

9.9 DATA GRID VIEW CONTROL

Previous chapters showed many detailed examples of data binding to simple bound controls and list bound controls. However, one of the most common ways of presenting data is in tabular form. Users can quickly scan and understand large amounts of data visually when it is presented in a table. In addition, users can interact with that data in several ways, including scrolling through the data, sorting the data based on columns, editing the data directly in the grid, and selecting columns, rows, or cells.

In .NET 1.0, the DataGrid control was the primary Windows Forms control for presenting tabular data. Even though that control had a lot of capability and could present basic tabular data well, it was difficult to customize many aspects of the control. Additionally, the DataGrid control didn't expose enough information to the programmer about the user's interactions with the grid and about changes occurring in the grid due to programmatic modifications of the data or formatting. Due to these factors and the many new features that customers requested, the Windows Client team at Microsoft decided to introduce a replacement control for the DataGrid in .NET 2.0. That new control, the DataGridView control, is the focus of this chapter.

The DataGridView control is a very powerful, flexible, and yet easy-to-use control for presenting tabular data. It is far more capable than the DataGrid control and is easier to customize and interact with. You can let the grid do all the work of presenting data in tabular form by setting the data-binding properties on the control appropriately. You can also take explicit control of presenting data in the grid through the new features of unbound columns and virtual mode. Unbound columns let you formulate the contents of the cells as the cells are being added to the grid. Virtual mode gives you a higher degree of control by allowing you to wait until a cell is being displayed to provide the value it will contain. You can make the grid act like a spreadsheet, so that the focus for interaction and presentation is at the cell level instead of at the row or column level. You can control the formatting and layout of the grid with fine-grained precision simply by setting a few properties on the control. Finally, you can plug in several predefined column and cell control types, or provide your own custom controls, and you can even mix different control types within different cells in the same row or column.

You can see that the grid picks up the visual styles of Windows XP, much as many of the Windows Forms controls in .NET 2.0 do. The grid is composed of columns and rows, and the intersection of a column and a row is a cell. The cell is the basic unit of presentation within the grid and is highly customizable in appearance and behaviour through the properties and events exposed by the grid. There are header cells for the rows and columns that can be used to maintain the context of the data presented in the grid. These header cells can contain graphical glyphs to indicate different modes or functions of the grid, such as sorting, editing, new rows, and selection. The grid can contain cells of many different types and can even mix different cell types in the same column if the grid isn't data bound.
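A small, illustrative sketch of the virtual mode mentioned above (not from the original text): the grid declares one unbound column and a large logical row count, and cell values are supplied on demand through the CellValueNeeded event.

using System;
using System.Windows.Forms;

class VirtualGridForm : Form
{
    public VirtualGridForm()
    {
        DataGridView grid = new DataGridView();
        grid.Dock = DockStyle.Fill;
        grid.VirtualMode = true;              // cell values are supplied on demand
        grid.Columns.Add("Value", "Value");   // one unbound column
        grid.RowCount = 1000000;              // rows exist only logically

        // CellValueNeeded fires only for cells that are about to be displayed.
        grid.CellValueNeeded += (sender, e) =>
        {
            e.Value = "Row " + e.RowIndex;
        };

        Controls.Add(grid);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new VirtualGridForm());
    }
}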
For basic data-binding scenarios, the DataGridView functions exactly like the DataGrid control did in .NET 1.0, except that the combination of DataSource and DataMember must resolve to a collection of data items, such as a DataTable or object collection. Specifically, they need to resolve to an object that implements the IList interface. The DataGrid could be bound to a collection of collections, such as a DataSet, and if so, the DataGrid presented hierarchical navigation controls to move through the collections of data. However, this capability was rarely used, partly because the navigation controls that were presented inside the DataGrid were a little unintuitive and could leave the user disoriented. As a result, the Windows Client team that developed the DataGridView control decided not to support hierarchical navigation within the control. The DataGridView is designed to present a single collection of data at a time. You can still achieve intuitive hierarchical navigation through data, but you will usually use more than one control to do so, adopting a master-details approach as discussed in previous chapters. The DataSource property can be set to any collection of objects that implements one of four interfaces: IList, IListSource, IBindingList, or IBindingListView. If the data source is itself a collection of data collections, such as a data set or an implementer of IListSource, then the DataMember property must identify which data collection within that source to bind to.

If the DataSource property is set to an implementer of IList (from which both IBindingList and IBindingListView derive), then the DataMember property can be null (the default value). The BindingSource class itself implements IBindingListView (as well as several other data-binding related interfaces), so you can bind a grid, through a binding source, to any kind of collection that a binding source can work with, which includes simple collections that only implement IEnumerable.

Any time the DataSource and/or DataMember properties are set, the grid will iterate through the items found in the data collection and will refresh the data-bound columns of the grid. If the grid is bound to a binding source, any change to the underlying data source to which the binding source is bound also results in the data-bound columns in the grid being refreshed. This happens because of events raised from the binding source to any bound controls whenever its underlying collection changes. Like most properties on the DataGridView control, the DataSource and DataMember properties fire the DataSourceChanged and DataMemberChanged events, respectively, whenever they are set. This lets you hook up code that responds to the data binding that has changed on the grid. You can also react to the DataBindingComplete event, since that will fire after the data source or data member has changed and data binding has been updated. However, if you are trying to monitor changes in the data source, you usually are better off monitoring the corresponding events on the BindingSource component rather than subscribing to the events on the grid itself. This is especially true if the code you are using to handle the event affects other controls on the form. Because you should always bind your controls to a binding source instead of the data source itself, if possible, the binding source is the best place to monitor changes in the data source.
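A minimal sketch of the recommended grid-to-binding-source arrangement (connection string and Suppliers table illustrative, as before):

using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

class GridBindingForm : Form
{
    public GridBindingForm()
    {
        DataGridView grid = new DataGridView { Dock = DockStyle.Fill };
        BindingSource source = new BindingSource();

        DataTable suppliers = new DataTable("Suppliers");
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT * FROM Suppliers",
            "Server=.;Database=Northwind;Integrated Security=true")) // illustrative
        {
            adapter.Fill(suppliers);
        }

        // Bind the grid to the binding source, not to the table directly;
        // the binding source raises change events that keep the grid in sync.
        source.DataSource = suppliers;
        grid.DataSource = source;

        Controls.Add(grid);
    }

    [System.STAThread]
    static void Main() { Application.Run(new GridBindingForm()); }
}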

9.10 STORED PROCEDURES

If you're anything like me, you don't easily pick up on development techniques just by hearing about them. When I first installed my MS SQL server on my computer, it opened a whole new world of features that I had never used. Among these were Stored Procedures. This article is designed to tell you how to begin writing stored procedures. I am using Microsoft SQL Server 7.0, but these examples should work in any SQL version.

Writing stored procedures doesn't have to be hard. When I first dove into the technology, I went to every newsgroup, web board, and IRC channel that I knew looking for answers. Through all the complicated examples I found in web tutorials, it was a week before I finally got a working stored procedure. I'll stop rambling now and show you what I mean. Normally, you would call a database with a query like:

Select column1, column2 From Table1

To make this into a stored procedure, you simply execute this code:

CREATE PROCEDURE sp_myStoredProcedure
AS
Select column1, column2 From Table1
Go

That's it; now all you must do to get the recordset returned to you is execute the stored procedure. You can simply call it by name, like this:

sp_myStoredProcedure

Note: You can name a stored procedure anything you want, provided that a stored procedure with that name doesn't already exist. Names do not need to be prefixed with sp_, but that is something I choose to do just as a naming convention. It is also somewhat of a standard in the business world to use it, but SQL Server does not require it.

Now, I realize you aren't gaining much in this example. I tried to make it simple to make it easy to understand. In part II of this article, we'll look at how it can be useful; for now, let's look at how you can call a stored procedure with parameters. Let's say that we want to expand on our previous query and add a WHERE clause. So, we would have:

Select column1, column2 From Table1 Where column1 = 0

Well, I know we could hard-code the Where column1 = 0 into the previous stored procedure. But wouldn't it be neat if the number that 0 represents could be passed in as an input parameter? That way it wouldn't have to be 0; it could be 1, 2, 3, 4, etc., and you wouldn't have to change the stored procedure. Let's start out by deleting the stored procedure we already created. Don't worry; we'll recreate it with the added feature of an input parameter. There isn't a way to simply overwrite a stored procedure by re-running CREATE (although T-SQL does provide ALTER PROCEDURE for modifying one in place); here we will drop the current one and re-create it with the changes. We will drop it like this:

DROP PROCEDURE sp_myStoredProcedure

Now we can recreate it with the input parameter built in:

CREATE PROCEDURE sp_myStoredProcedure
@myInput int
AS
Select column1, column2 From Table1 Where column1 = @myInput
Go

OK, why don't we pause here, and I'll explain in more detail what is going on. First off, the parameter: you can have as many parameters as you want, or none. Parameters are set when the stored procedure is called, and the stored procedure receives them into variables. @myInput is a variable. All variables in a stored procedure are preceded by the @ symbol. Names preceded by @@ are global variables. Other than that, you can name a variable anything you want. When you declare a variable, you must specify its datatype. In this case the datatype is Int (integer). Now, before I forget, here's how to call the stored procedure with a parameter:

sp_myStoredProcedure 0

If you want more than one parameter, you separate them with commas in both the stored procedure and the procedure call.
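The examples above are T-SQL, executed directly against the server. From ADO.NET, the same procedure can be invoked through a Command object by setting CommandType to StoredProcedure; the following C# sketch is illustrative (the connection string and database name are hypothetical):

using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcDemo
{
    static void Main()
    {
        string connStr = "Server=.;Database=MyDb;Integrated Security=true"; // illustrative
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("sp_myStoredProcedure", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;  // procedure name, not SQL text
            cmd.Parameters.AddWithValue("@myInput", 0);     // maps to the input parameter

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
            }
        }
    }
}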

9.11 SUMMARY

 Three microarchitectures are described for simple matrix processors: SMP, SSMP, and ThrSSMP. In SMP/SSMP/ThrSSMP, the well-known 5-stage pipeline is extended with a matrix register file and a matrix control unit in the decode stage. Moreover, the execution datapath is used not only for processing scalar data but also for processing vector/matrix data. SMP/SSMP/ThrSSMP has a three-level ISA, where the matrix register file stores vector/matrix data and the scalar register file stores scalar data.

 The first design is called SMP, where only one execution unit is used for performing arithmetic/logic operations on scalar/vector/matrix data. Scalar/vector/matrix instructions are fetched from the instruction cache, decoded, and executed on the single unified execution datapath. Finally, a single result is written back to the scalar/matrix registers. The second design is called SSMP, where four execution units are used in parallel for performing arithmetic/logic operations on scalar/vector/matrix data.

 Four scalar/vector/matrix instructions are fetched from the instruction cache. The fetched instructions are decoded, and their dependencies are checked. Up to four independent scalar instructions can be issued in-order to the parallel execution units. However, vector/matrix instructions iterate the issuing of four vector/matrix operations without checking.

 The third design is called ThrSSMP (multithreaded simple super-matrix processor), where two threads are fetched from the instruction cache simultaneously. Two scalar/vector/matrix instructions are fetched from each thread.

 The fetched instructions are decoded, the dependencies of each thread's instructions are checked, and up to two independent scalar instructions per thread can be issued in-order to the parallel execution units. In the case of vector/matrix instructions, a single instruction is issued individually in round-robin fashion to be executed on the four execution units. This thesis implements our proposed designs for simple matrix processor architectures using the VHDL hardware programming language, targeting the Virtex-6 XC6VLX550T-2FF1760 FPGA device.

 As discussed in the previous chapter, data-parallel applications are growing in importance and demanding higher performance from hardware. Thus, many research directions have emphasized accelerating data-parallel applications. This chapter reviews some of these research efforts, which are related to our proposed simple matrix processor architectures. Since our proposed processor modifies the well-known five-stage pipeline and exploits ILP/DLP/TLP, the concepts of pipelining, superscalar execution, vector processing, and multithreading will be described before presenting the related work.

 This chapter provides the background and the related work, and it is organized as follows. Section 2.1 reviews various forms of computer parallelism for executing multiple operations per clock cycle. Moreover, the pipelining technique, superscalar processors, vector processing, and the multithreading technique are discussed in detail to show the advantages and disadvantages of each for processing data-parallel applications. Finally, Section 2.2 presents some selected recent related work based on vector and matrix architectures.

 Pipelining is an implementation technique in which multiple instructions are overlapped in execution. It takes advantage of the parallelism that exists among the actions needed to execute an instruction. Today, pipelining is the key implementation technique used to make fast CPUs. The throughput of an instruction pipeline is determined by how often an instruction exits the pipeline. Because the pipeline stages are hooked together, all the stages must be ready to proceed at the same time. The time required for moving an instruction one step down the pipeline is known as the processor cycle. Because all stages proceed at the same time, the length of a processor cycle is determined by the time required for the slowest pipe stage.

9.12 KEYWORDS

 Backup Policy: an organization's procedures and rules for ensuring that adequate numbers and types of backups are made, including suitably frequent testing of the process for restoring the original production system from the backup copies.
 Backup Rotation Scheme: a method for effectively backing up data where multiple media are systematically moved from storage to usage in the backup process and back to storage. There are several different schemes. Each takes a different approach to balance the need for a long retention period with frequently backing up changes. Some schemes are more complicated than others.
 Backup Site: a place where business can continue after a data loss event. Such a site may have ready access to the backups or possibly even a continuously updated mirror.
 Backup Software: computer software applications that are used for performing the backing up of data, i.e., the systematic generation of backup copies.
 Backup Window: the period during which a system is available to perform a backup procedure. Backup procedures can have detrimental effects on system and network performance, sometimes requiring the primary use of the system to be suspended. These effects can be mitigated by arranging a backup window with the users or owners of the system.

9.13 LEARNING ACTIVITY

1. Create a session on an Overview of ADO.NET.
___________________________________________________________________________
___________________________________________________________________________
2. Create a survey on Data Readers.
___________________________________________________________________________
___________________________________________________________________________

9.14 UNIT END QUESTIONS

A. Descriptive Questions
Short Questions
1. What is a Connection Object?
2. What is a Command Object?

3. Write the meaning of Execute Scalar.
4. Write the meaning of Execute Reader.
5. How is the DataGridView control used?
Long Questions
1. Explain the concept of a Connection Object.
2. Elaborate on Data Sets & Data Adapters.
3. Illustrate the criticism of ExecuteNonQuery.
4. Examine the criticism of Execute Scalar.
5. Discuss Stored Procedures.
B. Multiple Choice Questions
1. What is the reason that C# does not support multiple inheritance?
a. Method collision
b. Name collision
c. Function collision
d. Interface collision
2. Which is a set of devices through which a user communicates with a system using an interactive set of commands?
a. Console
b. System
c. Keyboard
d. Monitor
3. Select the right option for the statement: the exponential formatting character ('E' or 'e') converts a given value to a string in the form of
a. m.dddd E+xxx
b. m.dddd
c. E+xxx
d. None of these
4. What are the Graphical User Interface (GUI) components created for web-based interactions?
a. Web forms
b. Window Forms

c. Application Forms
d. None of these
5. Which technology, together with a programming language such as C#, is used to create a Web-based application in Microsoft Visual Studio?
a. JAVA
b. J#
c. VB.NET
d. ASP.NET
Answers
1-b, 2-a, 3-a, 4-b, 5-d

9.15 REFERENCES

E-References
 https://www.researchgate.net/publication/283345112_Design_Implementation_and_Performance_Evaluation_of_a_Simple_Processor_for_Executing_Scalar_Vector_and_Matrix_Instructions
 http://ptgmedia.pearsoncmg.com/images/032126892x/samplechapter/noyes_ch06.pdf

 https://www.dougv.com/2006/12/the-difference-between-data-adapters-data-sets-and-data-readers-in-plain-english/

UNIT 10: N-TIER LAYERED ARCHITECTURE APPLICATION STRUCTURE

10.0 Learning Objectives
10.1 Introduction
10.2 Understanding Tier and Layer
10.3 Dividing
10.4 Application into Multiple Layers
10.5 XML Reading and Writing XML
10.6 Important Classes in the System
10.7 XML, Namespace
10.8 Read and Write
10.9 XML Nodes and Attributes
10.10 Summary
10.11 Keywords
10.12 Learning Activity
10.13 Unit End Questions
10.14 References

10.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:

 Describe the understanding of tiers and layers.
 Illustrate the important classes in the system.
 Explain XML nodes and attributes.

10.1 INTRODUCTION

The choice of a proper system architecture is always a serious challenge in the software development process, and large web applications are not an exception. This choice has a great influence on the system's maintainability and scalability. The purpose of this paper is to propose an architectural framework that tries to respond to the architectural challenges of web applications.

The proposed XWA framework is based on two well-known architectural frameworks, MVC and PCMEF, but takes web application specificity into consideration. XWA is not only a theoretical phenomenon but has a practical implementation (in the form of the eInformatyka portal). Some guidelines are also presented for software developers interested in web application development generally and in the Cocoon web application framework specifically. Section 2 discusses in detail MVC, the classic architectural framework, and PCMEF, an interesting layered architecture. In section 3, both are confronted.

The beginnings of MVC (Model-View-Controller) date back to the late seventies and the SmallTalk-80 language. The MVC paradigm quickly became the core idea behind user interfaces in most object-oriented languages, including SmallTalk. The MVC paradigm is also a superb example of the separation of concerns idea. As shown in Figure 1, system classes are separated into three groups: model, view, and controller. The semantics of each MVC element and the rules of communication inside the triad are discussed below.

The Model consists of the static and dynamic parts of an application domain. The most important part of the model is the application logic, with the contained business logic. In other words, the model specifies the application data and behaviour. The designer, while working on the model organization, should ensure that it is independent of the chosen presentation and user action processing technology (the model does not know anything about the view and controller). Change notification is the only connection originating from the model. It is usually implemented using events or the Observer design pattern.

The View is responsible for the graphical or textual presentation of the model. The view implementation is strongly coupled with the model, because it should be aware of the specificity of the presented data or operation results. Figure 1 illustrates this connection: the view collects the model state (state query) every time it is notified about a change. On the other hand, the model is not coupled with the presentation technology, so the view can be reimplemented or even exchanged without any changes to the model implementation.

The Controller's main responsibility is to react to user actions (e.g., mouse button clicks) and map them to model operations (state changes) or view changes. The controller, in conjunction with the view, takes care of the look and feel of the application. Unfortunately, the controller semantics is commonly misinterpreted. It is sometimes mistakenly regarded as the element responsible for the behaviour of the application, while at the same time the model is regarded as an element which contains data only. This interpretation leads to a problematic tight coupling between the presentation layer and the application logic: changes in the first one require changes in the second one, and vice versa. This is the main reason why the controller should not contain the application logic but only references to it.

The value of MVC is based on two essential rules. The first one is the separation of the model and presentation, which allows exchanging the user interface. The second rule is the separation of the view and controller. A typical example is providing two controllers (editable and not editable) for one view. It must be noted that the classic MVC form is often degenerated.

This deformation is usually manifested in a tightly coupled view and controller. For instance, some SmallTalk versions and the Swing library on the Java platform have this kind of design. In Swing, the MVC framework is replaced by the Model-Delegate architecture. The specificity of web applications means that the classic form of the MVC architectural framework has to be re-examined.

PCMEF is a layered architectural framework, shown in Figure 2. It consists of four layers: presentation, control, domain, and foundation. The domain layer consists of two packages: entity and mediator. The responsibility of the presentation layer is the application presentation. It is usually composed of classes based on graphical user interface components. For example, in the Java language, Swing or SWT components would be used. The MVC equivalent of the presentation layer is the view strongly connected with the controller. A similar architecture has been used in the Swing library. The control layer is responsible for processing user requests from the higher layer. It contains the main application logic, computation algorithms, or even user session maintenance. The entity package, in the domain layer, contains business objects. Usually, they are persistent and stored in some external data source. The mediator package, in the domain layer, mediates between the control and entity packages and the foundation layer. Its introduction eliminates the entity package's dependence on the foundation package. As a result, it removes the need to modify business objects when the persistence technology is changed. It also makes it possible to separate the construction of queries to persistent data from the application logic included in the control layer. The foundation layer is responsible for communication with data sources, such as databases, document repositories, web services, or file systems.

The web architecture design facets discussed before concerned the HTTP protocol. It is important to note that behind an HTML presentation there is often complex business logic, and communication with external data sources or web services. A web application, to meet all requirements of maintainability and scalability, must have a well-defined internal structure. Unfortunately, the MVC framework, discussed before, does not give any hints about this issue. This can give rise to a network structure of object dependences between different packages. Such structures can grow exponentially and are difficult to control or maintain. A verified solution to this problem is the introduction of a hierarchical structure. This is a recommended architecture in enterprise systems based on the J2EE platform. A good example of a hierarchical architecture is also the PCMEF framework, discussed before. It has well-defined semantics for each layer and, at the same time, is generic enough to be technology independent. A naturally arising question is whether the PCMEF framework takes advantage of MVC's basic principles: the separation of the model and presentation, and the separation of the view and controller. The introduction of the presentation layer is probably motivated by the first MVC rule. Unfortunately, the second MVC rule was not expressed in PCMEF. As in the Swing library, the view and controller are strongly coupled. In the case of desktop applications, this rule is often omitted for practical reasons.

But in web applications, separation of the view and controller is always treated as a good practice and should be expressed in the architectural design.

10.2 UNDERSTANDING TIER AND LAYER

With the development of the mobile internet and computer software, the requirements for the portability, encapsulation, and expansibility of computer software systems are increasingly high. The traditional three-tier architecture is no longer adequate because of its limitations in the current application environment: consider the problems of platform migration, changes in demand, and improving the efficiency and effectiveness of maintenance. In recent years, the direction of exploration in software system development has increasingly turned to procedural frameworks and design patterns. In this paper, we propose a four-tier architecture, which introduces a new layer, a data service layer, into the traditional three-tier architecture. We describe the advantages of the four-tier architecture from a structural standpoint and apply it to the design and development of a TV shopping integrated audio management platform.

The three-tier architecture can disperse concerns, loosen coupling, reuse logic, and define standards, and it is currently widely used. Every layer of the three-tier architecture can be extended, updated, deleted, and replaced individually. The architecture can not only reduce the dependence among layers and the costs of construction and maintenance effectively, but is also beneficial to standardization. Upgrading application-level and database-level configuration to server-level configuration can provide strong scalability and fault tolerance. In addition, the biggest advantage of the three-tier architecture is its security. The three-tier architecture hierarchically manages data and programs, keeping data control and application logic independent, so access to information can be controlled more tightly.

However, the three-tier architecture also has obvious shortcomings. First, it is not conducive to function expansion: a modification cascades from the top to the bottom. For example, if you need to add a function in the presentation layer, you may have to add corresponding code in the business logic layer and data access layer so that the design still meets the demands of the hierarchical structure. Secondly, system migration is inconvenient. When migrating the system, if there are differences between the target platform environment and the existing system environment, the system cannot work properly, and the cost is too high. Last, but not least, code reusability is poor. When the system is developed again or integrated with others, if the development language used is different, substantially all three layers need to be re-developed.

In recent years, mobile Internet technology has developed rapidly. As a result, tablet computers, smart phones, and other mobile devices have become universal, and the range of their applications is increasingly wide. Therefore, it is necessary to develop more and more cross-platform application systems to meet the demands of diverse user terminals. The system needs to have good platform portability and the ability to support mobile terminals.

In this case, in the face of platform migration, changes in demand, and mixed database issues, the shortcomings of the three-tier architecture (bad code reusability and applicability, and the high cost of system maintenance and platform porting) are particularly prominent. In view of this, we propose a four-tier architecture as the overall architecture of the system. It introduces a new layer, a data service layer, into the traditional three-tier architecture. In this paper, we discuss the features of the four-tier architecture with a data services layer, and then we verify them through the development of a TV shopping integrated audio management platform based on the four-tier architecture.

As can be seen from the diagram, the layers can be divided into two types according to their location. One type comprises the data access layer, business logic layer, and data services layer, which reside on the server side, together with the business entity model and the generic class library. The other is the presentation layer, located on the client. The characteristics of the four-tier architecture are as follows.

Presentation Layer (PL): It is the outermost layer; in popular terms, it is the interface shown to users, what users see when using the system. Its functions include receiving input data, interpreting users' instructions, sending requests to the data services layer, and displaying the data obtained from the data services layer to users in a way they can understand. It is closest to the users and provides an interactive operation interface.

Data service layer (DSL): It is located between the presentation layer and the business logic layer (BLL). As an isolation layer, it separates the business logic from the client to guarantee the security of information. According to the needs of each module, the data services layer encapsulates the business logic at a high level, keeping operational activities confidential. For large software systems, cross-platform distributed computing and communication between server farms are essential, which is why the service layer is established. The main function of the DSL is to pass data processed by the BLL up to its immediate upper layer (the presentation layer), or to transfer data submitted by the PL down to the BLL, according to the specified model definitions.

Business logic layer (BLL): It is located between the DSL and the data access layer (DAL), playing a connecting role in the data exchange. The business logic layer is responsible for the various business operations of the system and the completion of the corresponding functions, that is, the issue-specific operations and the business-logic processing of data. The layer's concerns are focused primarily on the business rules, business processes, and business needs related to the system, meaning that it relates to the domain the system addresses. Very often, it is also known as the domain layer.

Data access layer (DAL): It is the innermost layer, which implements the persistence logic. This layer is responsible for access to the database; it can access a database system, binary files, text documents, or XML documents. Operations on the data include finding, adding, deleting, modifying, etc. This level works independently, without relying on other layers. In accordance with upper-layer requests, the DAL extracts the appropriate data from the database and passes the data to the upper layer. The DAL also performs CRUD operations on the data in the database in accordance with the instructions of the upper layers.
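To make the division of responsibilities concrete, here is a minimal, illustrative C# sketch of the four layers. All type and member names (IOrderRepository, OrderLogic, OrderService) are invented for illustration; they are not taken from the platform described above.

using System.Collections.Generic;

// Data access layer (DAL): persistence only, no business rules.
interface IOrderRepository
{
    List<string> FindOrdersByCustomer(int customerId);
}

// Business logic layer (BLL): domain rules; delegates persistence to the DAL.
class OrderLogic
{
    private readonly IOrderRepository repository;
    public OrderLogic(IOrderRepository repository) { this.repository = repository; }

    public List<string> GetOpenOrders(int customerId)
    {
        // Business rules (filtering, validation) would live here.
        return repository.FindOrdersByCustomer(customerId);
    }
}

// Data service layer (DSL): network-facing facade that shields the BLL from clients.
class OrderService
{
    private readonly OrderLogic logic;
    public OrderService(OrderLogic logic) { this.logic = logic; }

    public List<string> GetOpenOrders(int customerId)
    {
        // Authentication, transport concerns, and DTO mapping belong here.
        return logic.GetOpenOrders(customerId);
    }
}

// The presentation layer (a PC or Android client) talks only to OrderService.

Note how each layer depends only on the layer directly beneath it, which is exactly the loose coupling the four-tier structure is meant to guarantee.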

10.3 DIVIDING

The design of the data access layer used the ideas of object-oriented programming and was mainly based on the factory pattern. We abstracted a database access module and designed a class called DBHelper to access the database, so that the system can face a variety of databases. The business logic layer contained all the core businesses and applied rules of the TV shopping integrated audio management platform. According to the requests made by the PL, the BLL issued the corresponding requests to the DAL and returned the results to the PL in a certain format. To ensure loose coupling between layers, calls to the DAL were made through an interface that has nothing to do with the specific data access logic. If the concrete realization of the data access layer needs to be modified, and the modification does not affect the interface definition, the BLL is not affected in any way.

Data services were deployed on a network server. The DSL, as the buffer zone between the PL and BLL, implemented the function of screening the logical business away from the clients, thus avoiding unnecessary risks. A service was made up of an address, a contract, and a binding. Each service had a unique address, which contains two important elements: the service location and the transport protocol (or transport scheme) used for the transmission. The contract has nothing to do with the platform; it is the standard way of describing the functions of a service. The binding groups the data communication features together, encapsulating options such as the transport protocol, message encoding, communication mode, reliability, security, transaction propagation, and interoperability, making them consistent. Each service must be hosted in a hosting process. We developed two types of clients: a PC client running on the Windows platform and a mobile terminal running on the Android platform. They were located at the presentation layer of the system.

10.4 APPLICATION INTO MULTIPLE LAYERS

In software engineering, multitier architecture (often referred to as n-tier architecture) or multilayer architecture is a client–server architecture in which presentation, application processing, and data management functions are physically separated. The most widespread use of multitier architecture is the three-tier architecture. N-tier application architecture provides a model by which developers can create flexible and reusable applications. By segregating an application into tiers, developers acquire the option of modifying or adding a specific layer, instead of reworking the entire application. A three-tier architecture is typically composed of a presentation tier, a logic tier, and a data tier.

While the concepts of layer and tier are often used interchangeably, one common point of view is that there is indeed a difference. This view holds that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure. For example, a three-layer solution could easily be deployed on a single tier, such as a personal workstation.
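The text above mentions a factory-based DBHelper class in the DAL. The following is only a hedged sketch of what such a helper might look like using ADO.NET's provider factories; the class body is an assumption for illustration, not the authors' actual implementation:

using System.Data;
using System.Data.Common;

// A minimal factory-based helper in the spirit of the DBHelper described above.
static class DBHelper
{
    // DbProviderFactories lets the same code target different databases,
    // which matches the stated goal of "facing a variety of databases".
    public static DataTable Query(string providerName, string connStr, string sql)
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
        using (DbConnection conn = factory.CreateConnection())
        {
            conn.ConnectionString = connStr;
            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = sql;
                conn.Open();
                DataTable table = new DataTable();
                table.Load(cmd.ExecuteReader());
                return table;
            }
        }
    }
}

// Usage (the provider name selects the concrete ADO.NET provider):
// DataTable t = DBHelper.Query("System.Data.SqlClient", "...", "SELECT * FROM Orders");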

The book Domain Driven Design describes some common uses for the above four layers, although its primary focus is the domain layer. If the application architecture has no explicit distinction between the business layer and the presentation layer (i.e., the presentation layer is considered part of the business layer), then a traditional client-server (two-tier) model has been implemented. The more usual convention is that the application layer (or service layer) is considered a sublayer of the business layer, typically encapsulating the API definition surfacing the supported business functionality. The application/business layers can, in fact, be further subdivided to emphasize additional sublayers of distinct responsibility. For example, if the model–view–presenter pattern is used, the presenter sublayer might be used as an additional layer between the user interface layer and the business/application layer.

Some also identify a separate layer called the business infrastructure layer, located between the business layer and the infrastructure layer. It is also sometimes called the "low-level business layer" or the "business services layer". This layer is very general and can be used in several application tiers. The infrastructure layer can be partitioned into different levels (high-level or low-level technical services). Developers often focus on the persistence capabilities of the infrastructure layer and therefore only talk about the persistence layer or the data access layer (instead of an infrastructure layer or technical services layer). In other words, the other kinds of technical services are not always explicitly thought of as part of a particular layer.

A layer is on top of another because it depends on it. Every layer can exist without the layers above it and requires the layers below it to function. Another common view is that layers do not always strictly depend only on the adjacent layer below. For example, in a relaxed layered system (as opposed to a strict layered system) a layer can also depend on all the layers below it.

Three-tier architecture is a client-server software architecture pattern in which the user interface (presentation), functional process logic ("business rules"), computer data storage, and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan at Open Environment Corporation (OEC), a tools company he founded in Cambridge, Massachusetts. Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently in response to changes in requirements or technology. For example, a change of operating system in the presentation tier would only affect the user interface code. Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface; the functional process logic may consist of one or more separate modules running on a workstation or application server; and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multitiered itself (in which case the overall architecture is called an "n-tier architecture").

10.5 XML READING AND WRITING XML

This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with the other tiers, putting results out to the browser/client tier and to all other tiers in the network. In simple terms, it is the layer that users can access directly (such as a web page, or an operating system's GUI). The end-to-end traceability of data flows through n-tier systems is a challenging task which becomes more important as systems increase in complexity. The Application Response Measurement standard defines concepts and APIs for measuring performance and correlating transactions between tiers. Generally, the term "tiers" is used to describe the physical distribution of components of a system on separate servers, computers, or networks (processing nodes); a three-tier architecture will then have three processing nodes. The term "layers" refers to a logical grouping of components which may or may not be physically located on one processing node.

The XWA architectural framework is a variation of the solutions discussed before, adapted to the specific nature of web applications. XWA combines the advantages of earlier frameworks, introducing a separation of the view and controller (according to MVC) and organizing model classes into a hierarchical structure (according to PCMEF). The proposal is shown in Figure 3. XWA is a layered architecture with a clearly separated MVC triad. It consists of six packages arranged into a four-level hierarchy where higher layers depend on lower ones. The semantics of each package are described below.

The View package is responsible for the presentation of the application. It is an exact equivalent of the MVC view element. In web applications the View package consists of files describing the appearance of web pages. Depending on the technology used, these could be HTML page templates or JSP pages in the Model 2 architecture. The most interesting solution, however, seems to be the use of XML-based technologies. This introduces a new type of contract between layers based on XML. For example, the contract between the view and the application logic is specified by an XML document schema (expressed in DTD, XML Schema, or Relax NG schema definition languages).

The Controller package is responsible for processing users' actions. It calls logic included in lower layers. The main responsibility of the Controller is to separate HTTP protocol specifics from the application logic. It is responsible for controlling application flow within a single interaction, and the sequence of interactions in the case of more complex applications. XWA suggests using the continuations controller for these purposes, which is discussed in the next section. Another crucial responsibility of the Controller is to control the application view. The Controller may be realized by the Front Controller or Application Controller design patterns.

The Service package is responsible for providing application services. It centralizes application logic contained in many business objects which require access to data sources or web services (e.g., e-mail sending). XWA suggests using the Application Service or Service Layer design patterns in this package. The Business Objects package contains business objects which form the application domain model using the Business Object or Domain Model design patterns. The Mediator package isolates Business Objects and Service package classes from the implementation of access to data sources, persistence mechanisms, and web services. Classes from this package realize the Data Access Object pattern.
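In the spirit of this section's title, the following sketch shows how application logic might write such an XML view document with .NET's XmlWriter class; an XSL style sheet from the View package could then transform it into HTML or PDF. The document shape (a hypothetical article list) is invented purely for illustration.

using System.Xml;

class ViewDocumentWriter
{
    static void Main()
    {
        var settings = new XmlWriterSettings { Indent = true };

        using (XmlWriter writer = XmlWriter.Create("articles.xml", settings))
        {
            writer.WriteStartDocument();
            writer.WriteStartElement("articles");            // root element of the view contract

            writer.WriteStartElement("article");
            writer.WriteAttributeString("id", "42");
            writer.WriteElementString("title", "Layered Architectures");
            writer.WriteElementString("author", "A. Author");
            writer.WriteEndElement();                        // </article>

            writer.WriteEndElement();                        // </articles>
            writer.WriteEndDocument();
        }
    }
}

Reading such a document back is symmetrical: an XmlReader (or the XmlDocument class shown later in this unit) walks the same nodes and attributes, which is why a schema-defined contract between the layers is enough for them to interoperate.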

10.6 IMPORTANT CLASSES IN THE SYSTEM

In object-oriented programming, a class is an extensible program-code template for creating objects, providing initial values for state (member variables) and implementations of behaviour (member functions or methods). In many languages, the class name is used as the name for the class (the template itself), the name for the default constructor of the class (a subroutine that creates objects), and as the type of objects generated by instantiating the class; these distinct concepts are easily conflated. One could argue, though, that this conflation is a feature inherent in such languages, a consequence of their polymorphic nature, and part of why these languages are so powerful, dynamic, and adaptable compared to languages without polymorphism. Thus, they can model dynamic systems (i.e., the real world, machine learning, AI) more easily.

When an object is created by a constructor of the class, the resulting object is called an instance of the class, and the member variables specific to the object are called instance variables, in contrast with the class variables shared across the class. In some languages, classes are only a compile-time feature (new classes cannot be declared at run time), while in other languages classes are first-class citizens and are generally themselves objects (typically of type Class or similar). In these languages, a class that creates classes is called a metaclass.

In casual use, people often refer to the "class" of an object, but narrowly speaking objects have a type: the interface, namely the types of member variables, the signatures of member functions (methods), and the properties these satisfy. At the same time, a class has an implementation (specifically the implementation of the methods) and can create objects of a given type, with a given implementation. In the terms of type theory, a class is an implementation (a concrete data structure and a collection of subroutines) while a type is an interface. Different (concrete) classes can produce objects of the same (abstract) type (depending on the type system); for example, the type Stack might be implemented by two classes, SmallStack (fast for small stacks, but scaling poorly) and ScalableStack (scaling well, but with high overhead for small stacks). Similarly, a given class may have several different constructors.
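The Stack example above can be sketched in C# as follows. The interface plays the role of the type, and the two classes are alternative implementations of it; all names and sizes here are hypothetical.

using System.Collections.Generic;

// The *type*: what any stack of integers can do.
public interface IStack
{
    void Push(int value);
    int Pop();
}

// One *class* implementing the type: a fixed array, fast for small stacks.
public class SmallStack : IStack
{
    private readonly int[] _items = new int[16];  // never grows, so it scales poorly
    private int _count;

    public void Push(int value) { _items[_count++] = value; }
    public int Pop() { return _items[--_count]; }
}

// Another *class* implementing the same type: grows as needed.
public class ScalableStack : IStack
{
    private readonly List<int> _items = new List<int>();  // more overhead when small

    public void Push(int value) { _items.Add(value); }

    public int Pop()
    {
        int value = _items[_items.Count - 1];
        _items.RemoveAt(_items.Count - 1);
        return value;
    }
}

Code written against IStack works unchanged with objects of either class, which is precisely what it means for different (concrete) classes to produce objects of the same (abstract) type.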

Class types generally represent nouns, such as a person, place, or thing, or something nominalised, and a class represents an implementation of these. For example, a Banana type might represent the properties and functionality of bananas in general, while the ABCBanana and XYZBanana classes would represent ways of producing bananas (say, banana suppliers, or data structures and functions to represent and draw bananas in a video game). The ABCBanana class could then produce bananas: instances of the ABCBanana class would be objects of type Banana. Often only a single implementation of a type is given, in which case the class name is often identical with the name of the type.

10.7 XML, NAMESPACE

XML namespaces are used for providing uniquely named elements and attributes in an XML document. They are defined in a W3C recommendation. An XML instance may contain element or attribute names from more than one XML vocabulary. If each vocabulary is given a namespace, the ambiguity between identically named elements or attributes can be resolved. A simple example would be to consider an XML instance that contained references to a customer and an ordered product. Both the customer element and the product element could have a child element named id. References to the id element would therefore be ambiguous; placing them in different namespaces would remove the ambiguity.

A namespace name is a Uniform Resource Identifier (URI). Typically, the URI chosen for the namespace of a given XML vocabulary describes a resource under the control of the author or organization defining the vocabulary, such as a URL for the author's web server. However, the namespace specification neither requires nor suggests that the namespace URI be used to retrieve information; it is simply treated by an XML parser as a string. For example, the document at http://www.w3.org/1999/xhtml itself does not contain any code; it simply describes the XHTML namespace to human readers. Using a URI (such as "http://www.w3.org/1999/xhtml") to identify a namespace, rather than a simple string, reduces the probability of different namespaces using duplicate identifiers.
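The customer/product ambiguity described above can be resolved in code as follows. This is a minimal C# sketch using .NET's XmlDocument and XmlNamespaceManager; the namespace URIs and element names are invented for the example.

using System;
using System.Xml;

class NamespaceDemo
{
    static void Main()
    {
        // Two elements both named "id", disambiguated by their namespaces.
        const string xml =
            "<order xmlns:c='http://example.com/customer'" +
            "       xmlns:p='http://example.com/product'>" +
            "  <c:id>C-100</c:id>" +
            "  <p:id>P-200</p:id>" +
            "</order>";

        var document = new XmlDocument();
        document.LoadXml(xml);

        // Bind prefixes to namespace names for use in XPath queries.
        var namespaces = new XmlNamespaceManager(document.NameTable);
        namespaces.AddNamespace("c", "http://example.com/customer");
        namespaces.AddNamespace("p", "http://example.com/product");

        Console.WriteLine(document.SelectSingleNode("//c:id", namespaces).InnerText); // C-100
        Console.WriteLine(document.SelectSingleNode("//p:id", namespaces).InnerText); // P-200
    }
}

Note that the prefixes used in the query ("c" and "p") do not have to match the prefixes used in the document; only the namespace names (the URIs) must agree.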

10.8 READ AND WRITE

Although the term namespace URI is widespread, the W3C Recommendation refers to it as the namespace name. The specification is not entirely prescriptive about the precise rules for namespace names, and many XML parsers allow any character string to be used. In version 1.1 of the recommendation, the namespace name becomes an Internationalized Resource Identifier (IRI), which licenses the use of non-ASCII characters that in practice were already accepted by nearly all XML software. The term namespace URI persists, however, not only in popular usage but also in many other specifications from the W3C and elsewhere. Following publication of the Namespaces recommendation, there was an intensive debate about how a relative URI should be handled: some argued that it should simply be treated as a character string, while others argued that it should be turned into an absolute URI by resolving it against the base URI of the document. The result of the debate was a ruling from the W3C that relative URIs were deprecated.

The use of URIs taking the form of URLs in the http scheme (such as http://www.w3.org/1999/xhtml) is common, despite the absence of any formal relationship with the HTTP protocol. The Namespaces specification does not say what should happen if such a URL is dereferenced (that is, if software attempts to retrieve a document from this location). One convention adopted by some users is to place an RDDL document at the location. In general, however, users should assume that the namespace URI is simply a name, not the address of a document on the Web.

10.9 XML NODES AND ATTRIBUTES

Technologies based on the Java language, XML, and the Apache Cocoon framework form the technological core of the e-Informatyka portal. A combination of these technologies offers a wide range of possibilities, but there is a lack of documents describing guidelines and good practices for designing Apache Cocoon based applications; the authors try to fill this gap. From the beginning, the project has been based on the Apache Cocoon publication framework, but at the end of 2003 the system architecture was significantly redesigned. The XWA architectural framework supplies the crucial elements of the contemporary e-Informatyka architecture. The core idea behind the Cocoon framework is a pipe architecture concept, and therefore mapping it to the layered architecture is a challenge. The system architecture and organization of Cocoon components proposed by the authors is a result of their experiences. It also takes into consideration some suggestions of experts and consultants from the enterprise community. The architecture of the e-Informatyka portal is shown in Figure 4. Implementation details concerning each package are described in the next section.

XML-based technologies are used intensively in the presentation layer of the e-Informatyka portal. The View package includes XML documents describing the logical content of the portal and XSL style sheets which describe transformations of the XML documents (e.g., into HTML pages or PDF documents). The main responsibility of the Controller package is to control the flow of interactions between the user and the system. The Controller calls actions from the application logic and activates the generation of the view. The Controller maps user requests to application logic calls or to a view change. This mapping is usually based on the requested URL; it can also consider the user profile or user agent type, which may be useful for personalizing the web application. The Cocoon sitemap provides the Controller's functionality. The sitemap is an XML document which contains the mapping of user requests to the proper pipelines. Pipelines specify the view generation process by describing a sequence of XML document transformations (a schematic example follows at the end of this section). During this process, system actions (from the Service package) can be called. The reuse of pipeline elements is very simple; for instance, a transformer which inserts article content into the portal layout can also be used for inserting another document. More details about Cocoon's mechanisms and components can be found in the Cocoon documentation.

In the case of more complex web applications, which require the implementation of long interaction sequences between the user and the system, there is a need to store the state of the interaction (the current position in the interaction sequence). The continuations idea, and the introduction of a continuations controller, can greatly simplify the implementation of that task and accelerate development. Programmers do not have to deal with low-level HTTP protocol mechanisms.
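The following schematic sitemap fragment suggests what such a mapping of requests to pipelines can look like. The match pattern, file names, and style sheet are invented, and details of the map: element vocabulary vary between Cocoon versions, so treat this as a sketch rather than a working configuration.

<map:sitemap xmlns:map="http://apache.org/cocoon/sitemap/1.0">
  <map:pipelines>
    <map:pipeline>
      <!-- A request such as article/layers is matched here... -->
      <map:match pattern="article/*">
        <!-- ...its XML content is generated... -->
        <map:generate src="content/{1}.xml"/>
        <!-- ...transformed by an XSL style sheet from the View package... -->
        <map:transform src="styles/article2html.xsl"/>
        <!-- ...and serialized to HTML for the browser. -->
        <map:serialize type="html"/>
      </map:match>
    </map:pipeline>
  </map:pipelines>
</map:sitemap>

Replacing the transformer or the serializer changes the output format (for example, to PDF) without touching the content generation step, which is the reuse property mentioned above.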

10.10 SUMMARY

 The Avalon component container is the core technology in Apache Cocoon, and the architectural design of the e-Informatyka portal also uses it. All application logic from the Service package is based on component interfaces. Classes from this package can be divided into three groups: generators, actions, and jobs. Generators are components that produce an XML document in the form of SAX events. Their main responsibility is providing data from business objects to the presentation layer (e.g., generating a list of recently uploaded articles).
 Generators hide the details of the business objects' implementation and express business data in XML. Actions are components that implement application logic. They centralize calls to business logic, data source access, or data persistence mechanisms concerning a single use case. The article upload action is a good example: it would consist of calls that add information about the new article to the database and store the article contents in an external repository. It must be emphasized that the action does not contain the implementation of those operations.
 The details are in the lower layers. Actions just group operations into transactions and could be seen as an implementation of the Application Service pattern, but with fine-grained interfaces, which is a consequence of Apache Cocoon's specific design. Jobs are like actions; the difference lies in the activation method. Jobs are invoked not by a user request but by a built-in scheduler. A good example is a job responsible for removing user accounts which were not activated within a specific time frame. The Business Objects package consists of classes which implement the Business Object design pattern.
 In the case of the e-Informatyka portal, these classes represent mostly static aspects of the system, but in larger systems a behavioural aspect may appear. The Mediator package isolates access to a wide range of external data sources. Classes from this package usually implement the Data Access Object design pattern. A typical example is the persistence mechanism, which in the case of the e-Informatyka portal is implemented by the Hibernate object/relational mapping system. Another example is the access to the content repository, which is implemented by the Subversion versioning system.
 The DataGridView control gives you explicit control over whether users can edit, delete, or add rows in the grid. After the grid has been populated with data, the user can interact with the data presented in the grid in several ways, as discussed earlier. By default, those interactions include editing the contents of cells (fields) in a row, selecting a row and deleting it with the Delete key, or adding a new row using the empty row that displays as the last row in the grid.

 The most common way of using the grid is with data-bound columns. When you bind to data, the grid creates columns based on the schema or properties of the data items and generates rows in the grid for each data item found in the bound collection. If the data binding was set up statically using the designer (as has been done in most of the examples in this book), the types and properties of the columns in the grid were set at design time.

10.11 KEYWORDS

 Web App Manifest - A web app manifest is a JSON-formatted file named manifest.json that is a centralized place to put metadata controlling how the web application appears to the user and how it can be launched. Some browsers, such as Chrome, use the web app manifest to enable 'add to home screen'.
 Web Components - Web components are a set of web platform APIs that allow you to create new custom, reusable, encapsulated HTML tags to use in web pages and web apps. Custom components and widgets built on the Web Component standards will work across modern browsers and can be used with any JavaScript library or framework that works with HTML.
 Custom Element - A Custom Element is a developer-defined HTML tag. These elements are the foundation of Web Components and can be used to create any sort of UI.
 DOMContentLoaded (DCL) - The DOMContentLoaded event reports the time when the initial HTML document has been completely loaded and parsed, without waiting for style sheets, images, and subframes to finish loading.
 First Contentful Paint (FCP) - First Contentful Paint reports the time when the browser first rendered any text, image (including background images), non-white canvas, or SVG. This includes text with pending webfonts. This is the first time users could start consuming page content.

10.12 LEARNING ACTIVITY

1. Create a session on Application into Multiple Layers.
___________________________________________________________________________
___________________________________________________________________________
2. Create a survey on XML, Namespace.

